| Unnamed: 0 (int64, 0 to 15.9k) | cleaned_code (stringlengths 67 to 124k, ⌀) | cleaned_prompt (stringlengths 168 to 30.3k, ⌀) |
|---|---|---|
11,400
|
<ASSISTANT_TASK:>
Python Code:
from pyspark import SparkContext
sc = SparkContext('local','example')
from pyspark.mllib.linalg import Vectors
x = Vectors.dense([1,2,3,4])
x[0]
x = [Vectors.dense([1,2,3,4,5]), Vectors.dense([6,7,8,9,10])]
xrdd = sc.parallelize(x, 2)
xrdd.glom().collect()
from pyspark.mllib.regression import LabeledPoint as LP
pt = LP(1, Vectors.dense(2,-1,4))
print("Label: ", pt.label)
print("Feature Vector: ", pt.features)
from pyspark.mllib.classification import NaiveBayes as NB
nbmodel = NB.train(Xrdd_train)
testpred = nbmodel.predict(Xrdd_test.features)
trainpred = nbmodel.predict(Xrdd_train.features)
cf_mat[Xrdd_train.label][trainpred] += 1
from pyspark.mllib.tree import DecisionTree
dtmodel = DecisionTree.trainClassifier(Xrdd_train,
numClasses = 2,
impurity = 'entropy', ## options: gini or entropy
maxDepth = 5,
maxBins = 32,
minInstancesPerNode = 2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example Labeled Points
Step2: Example Creating a Word-count RDD
Step3: Confusion matrix
Step4: Decision Tree
|
11,401
|
<ASSISTANT_TASK:>
Python Code:
from ndreg import *
sparseImg = sitk.Cast(imgRead("../dat/sparse/sparse.img"), sitk.sitkFloat32)
imgShow(sparseImg)
denseImg = sitk.Cast(imgRead("../dat/sparse/dense.img"), sitk.sitkFloat32)
imgShow(denseImg)
imgShow(sparseImg - denseImg, cmap=plt.cm.jet)
alpha=0.1
epsilon = None
(field, invField) = imgMetamorphosis(sparseImg, denseImg, alpha=alpha, epsilon=epsilon, verbose=True)
outSparseImg = imgApplyField(sparseImg, field)
imgShow(outSparseImg)
imgShow(outSparseImg - denseImg, cmap=plt.cm.jet)
(field, invField) = imgMetamorphosis(denseImg, sparseImg, alpha=alpha, verbose=True)
outDenseImg = imgApplyField(denseImg, field)
imgShow(outDenseImg)
imgShow(sparseImg - outDenseImg, cmap=plt.cm.jet)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we read our images. This one is a sparse image
Step2: This image is a dense image
Step3: Here's the difference between the 2 images before registration
Step4: Sparse to Dense Registration
Step5: We evaluate the registration using the difference between the deformed sparse image and dense image
Step6: Dense to sparse Registration
Step7: We evaluate the registration using the difference between the sparse image and deformed dense image
|
11,402
|
<ASSISTANT_TASK:>
Python Code:
def runningMeanFast(x, N):
return np.convolve(x, np.ones((N,))/N, mode='valid')
def powerme(x1,x2,n):
X = []
for m in range(n+1):
for i in range(m+1):
X.append(np.multiply(np.power(x1,i),np.power(x2,(m-i))))
return np.hstack(X)
def safeSigmoid(x, eps=0):
y = 1.0/(1.0 + np.exp(-x))
# clip y from below and from above
if eps > 0:
y[y < eps] = eps
y[y > 1 - eps] = 1 - eps
return y
def h(theta, X, eps=0.0):
return safeSigmoid(X*theta, eps)
def J(h,theta,X,y, lamb=0):
m = len(y)
f = h(theta, X, eps=10**-7)
j = -np.sum(np.multiply(y, np.log(f)) +
np.multiply(1 - y, np.log(1 - f)), axis=0)/m# \
#+ lamb/(2*m) * np.sum(np.power(theta[1:],2))
return j
def dJ(h,theta,X,y,lamb=0):
g = 1.0/y.shape[0]*(X.T*(h(theta,X)-y))
#g[1:] += lamb/float(y.shape[0]) * theta[1:]
return g
def SGD(h, fJ, fdJ, theta, X, Y,
alpha=0.001, maxEpochs=1.0, batchSize=100,
adaGrad=False, logError=True, validate=0.0, valStep=100, lamb=0):
errorsX, errorsY = [], []
errorsVX, errorsVY = [], []
XT, YT = X, Y
if validate > 0:
mv = int(X.shape[0] * validate)
XV, YV = X[:mv], Y[:mv]
XT, YT = X[mv:], Y[mv:]
m, n = XT.shape
start, end = 0, batchSize
maxSteps = (m * float(maxEpochs)) / batchSize
if adaGrad:
hgrad = np.matrix(np.zeros(n)).reshape(n,1)
for i in range(int(maxSteps)):
XBatch, YBatch = XT[start:end,:], YT[start:end,:]
grad = fdJ(h, theta, XBatch, YBatch, lamb=lamb)
if adaGrad:
hgrad += np.multiply(grad, grad)
Gt = 1.0 / (10**-7 + np.sqrt(hgrad))
theta = theta - np.multiply(alpha * Gt, grad)
else:
theta = theta - alpha * grad
if logError:
errorsX.append(float(i*batchSize)/m)
errorsY.append(fJ(h, theta, XBatch, YBatch).item())
if validate > 0 and i % valStep == 0:
errorsVX.append(float(i*batchSize)/m)
errorsVY.append(fJ(h, theta, XV, YV).item())
if start + batchSize < m:
start += batchSize
else:
start = 0
end = min(start + batchSize, m)
return theta, (errorsX, errorsY, errorsVX, errorsVY)
def classifyBi(theta, X):
prob = h(theta, X)
return prob
n = 6
sgd = True
data = np.matrix(np.loadtxt("ex2data2.txt",delimiter=","))
np.random.shuffle(data)
X = powerme(data[:,0], data[:,1],n)
Y = data[:,2]
pyplot.figure(figsize=(16,8))
pyplot.subplot(121)
pyplot.scatter(X[:,2].tolist(),
X[:,1].tolist(),
c=Y.tolist(),
s=100, cmap=pyplot.cm.get_cmap('prism'));
if sgd:
theta = np.matrix(np.zeros(X.shape[1])).reshape(X.shape[1],1)
thetaBest, err = SGD(h, J, dJ, theta, X, Y, alpha=1, adaGrad=True, maxEpochs=2500, batchSize=100,
logError=True, validate=0.25, valStep=1, lamb=0)
xx, yy = np.meshgrid(np.arange(-1.5, 1.5, 0.02),
np.arange(-1.5, 1.5, 0.02))
l = len(xx.ravel())
C = powerme(xx.reshape(l,1),yy.reshape(l,1),n)
z = classifyBi(thetaBest, C).reshape(int(np.sqrt(l)), int(np.sqrt(l)))
pyplot.contour(xx, yy, z, levels=[0.5], linewidths=3);
pyplot.ylim(-1,1.2);
pyplot.xlim(-1,1.2);
pyplot.legend();
pyplot.subplot(122)
pyplot.plot(err[0],err[1], lw=3, label="Training error")
pyplot.plot(err[2],err[3], lw=3, label="Validation error");
pyplot.legend()
pyplot.ylim(0.2,0.8);
def J(h,theta,X,y,lamb=0):
m = len(y)
f = h(theta, X, eps=10**-7)
j = -np.sum(np.multiply(y, np.log(f)) +
np.multiply(1 - y, np.log(1 - f)), axis=0)/m \
+ lamb/(2*m) * np.sum(np.power(theta[1:] ,2))
return j
def dJ(h,theta,X,y,lamb=0):
m = float(y.shape[0])
g = 1.0/y.shape[0]*(X.T*(h(theta,X)-y))
g[1:] += lamb/m * theta[1:]
return g
n = 6
lam = 0.01
data = np.matrix(np.loadtxt("ex2data2.txt",delimiter=","))
np.random.shuffle(data)
X = powerme(data[:,0], data[:,1],n)
Y = data[:,2]
theta = np.matrix(np.zeros(X.shape[1])).reshape(X.shape[1],1)
thetaBest, err = SGD(h, J, dJ, theta, X, Y, alpha=1, adaGrad=True, maxEpochs=2500, batchSize=100,
logError=True, validate=0.25, valStep=1, lamb=lam)
xx, yy = np.meshgrid(np.arange(-1.5, 1.5, 0.02),
np.arange(-1.5, 1.5, 0.02))
l = len(xx.ravel())
C = powerme(xx.reshape(l,1),yy.reshape(l,1),n)
z = classifyBi(thetaBest, C).reshape(int(np.sqrt(l)), int(np.sqrt(l)))
pyplot.figure(figsize=(16,8))
pyplot.subplot(121)
pyplot.scatter(X[:,2].tolist(),
X[:,1].tolist(),
c=Y.tolist(),
s=100, cmap=pyplot.cm.get_cmap('prism'));
pyplot.contour(xx, yy, z, levels=[0.5], linewidths=3);
pyplot.ylim(-1,1.2);
pyplot.xlim(-1,1.2);
pyplot.legend();
pyplot.subplot(122)
pyplot.plot(err[0],err[1], lw=3, label="Training error")
pyplot.plot(err[2],err[3], lw=3, label="Validation error");
pyplot.legend()
pyplot.ylim(0.2,0.8);
%matplotlib inline
import numpy as np
from matplotlib import pyplot
def h(theta, X):
return X.dot(theta)
def norm(X,Y):
return np.linalg.pinv(X.T.dot(X)).dot(X.T).dot(Y)
def rho(a,b):
return np.sqrt((a-b)**2)
def kneighbors(k, t, x, y):
d = rho(t, x)
nn = np.argsort(d)[:k]
return y[nn]
def Gauss(x,t,l=.01):
t = rho(x,t)/l
return 1./np.sqrt(2*np.pi)*np.exp(-1./2.*t**2)
def Kernel(K,t,x,y,l=.01):
return np.array([np.sum(K(x,t1,l)*y)/np.sum(K(x,t1,l), axis=1) for t1 in t])
def J(Yp,Y):
return np.sqrt(1.0/(2*len(Y))*(Yp-Y).dot(Yp-Y))
def true(x):
return 2*x*np.sin(4*x)
m = 300
x = np.linspace(0, 5, m)
ytrue = true(x)
y = ytrue + 3*np.random.randn(m)
t = np.linspace(0, 5, m)
ttrue = true(t)
n = 11 # degree of the polynomial
show1 = True
xtuple = [x**i for i in range(n+1)]
xreg = np.vstack(xtuple).T
theta = norm(xreg,y)
ttuple = [t**i for i in range(n+1)]
treg = np.vstack(ttuple).T
pyplot.figure(figsize=(16,10))
if show1:
pyplot.plot(x, ytrue, label="original", lw=3)
pyplot.scatter(x,y, s=40)
if n > 0:
predict = h(theta, treg)
pyplot.plot(t, predict, label="reg. lin. n=" + str(n), lw=3, color="red")
print("RMSE: ", J(predict,ttrue))
pyplot.xlim(0,5)
pyplot.legend();
k = 16
show2 = True
pyplot.figure(figsize=(16,10))
if show2:
pyplot.plot(x, ytrue, label="original", lw=3)
pyplot.scatter(x,y, s=40)
if k > 0:
predict = np.array([np.mean(kneighbors(k, i, x, y)) for i in t])
#predict = Kernel(D1,t,x,y,0.1)
pyplot.plot(t, predict, label="k="+str(k), lw=3, color="red")
print("RMSE: ", J(predict,ttrue))
pyplot.xlim(0,5)
pyplot.legend();
k = 16
data = np.matrix(np.loadtxt("ex2data2.txt",delimiter=","))
#np.random.shuffle(data)
n = 1
X = powerme(data[:,0], data[:,1],n)
Y = data[:,2]
def rho(a,b):
p = np.sqrt(np.sum(np.power((a-b),2), axis=1))
return p
def kneighbors(k, t, x, y):
d = rho(t, x)
nn = np.argsort(d.ravel()).ravel()[0,:k]
return np.array(y[nn]).reshape(k,).astype(int)
xx, yy = np.meshgrid(np.arange(-1.5, 1.5, 0.02),
np.arange(-1.5, 1.5, 0.02))
l = len(xx.ravel())
C = powerme(xx.reshape(l,1),yy.reshape(l,1),n)
Z = [np.argmax(np.bincount(kneighbors(k,c,X,Y))) for c in C]
pyplot.figure(figsize=(10,10))
pyplot.scatter(X[:,2].tolist(),
X[:,1].tolist(),
c=Y.tolist(),
s=100, cmap=pyplot.cm.get_cmap('prism'));
pyplot.contour(xx, yy, np.array(Z).reshape(int(np.sqrt(l)), int(np.sqrt(l))), levels=[0.5], linewidths=3);
pyplot.ylim(-1,1.2);
pyplot.xlim(-1,1.2);
xg = np.linspace(-3,3,300)
yg = 1/np.sqrt(2*np.pi)*np.exp(-1./2.*xg**2)
pyplot.figure(figsize=(12,6))
pyplot.plot(xg,yg,lw=3);
n = 6
k = 16
l = 0.1
show3 = False
pyplot.figure(figsize=(16,10))
if show3:
pyplot.plot(x, ytrue, label="original", lw=3)
pyplot.scatter(x,y, s=40)
if k > 0:
predict1 = np.array([np.mean(kneighbors(k, i, x, y)) for i in t])
pyplot.plot(t, predict1, label="k="+str(k), lw=3, color="green")
if l > 0:
predict2 = Kernel(D1,t,x,y,l)  # D1 is assumed to be a kernel function defined in an earlier cell (e.g. Gauss above)
pyplot.plot(t, predict2, label="lambda=" + str(l), lw=3, color="red")
print("RMSE: ", J(predict2,ttrue))
pyplot.xlim(0,5)
pyplot.legend();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Regularization
Step2: Different faces of (mis)fitting
Step3: Linear (polynomial) regression
Step4: Regression with k-NN
Step5: Regression
Step6: Classification
|
11,403
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.read_csv('data/human_body_temperature.csv')
df.info()
df.head()
df['temperature'].hist()
m=df['temperature'].mean()
m
from scipy.stats import t, norm
from math import sqrt
patients=df.shape[0]
n=patients-1
patients
SE=df['temperature'].std()/sqrt(n)
SE
t.cdf((m-98.6)/SE,n)
norm.cdf((m-98.6)/SE)
print(m+t.ppf(0.95,n)*SE)
print(m-t.ppf(0.95,n)*SE)
t.ppf(0.95,n)*SE
df['temperature'].quantile([.1,.9])
males=df[df['gender']=='M']
males.describe()
females=df[df['gender']=='F']
females.describe()
SEgender=sqrt(females['temperature'].std()/females.shape[0]+males['temperature'].std()/males.shape[0])
SEgender
mgender=females['temperature'].mean()-males['temperature'].mean()
mgender
2*(1-t.cdf(mgender/SEgender,21))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Question 1
Step2: No, this sample isn't normal; it is definitely skewed. However, "this is a condition for the CLT... to apply" is just wrong. The whole power of the CLT is that the distribution of sample means (not the sample distribution) tends to a normal distribution regardless of the distribution of the population or sample. What we do care about for the CLT is that our data are independent, which, assuming the data were gathered in a traditional manner, should be the case.
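A minimal simulation sketch of that point (not part of the original notebook; the variable names are illustrative): draw many samples of size 130 from a clearly skewed population and look at the distribution of their means.
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(0)
skewed = np.random.exponential(scale=1.0, size=100000)  # a strongly right-skewed population
means = [np.mean(np.random.choice(skewed, size=130)) for _ in range(2000)]
plt.hist(means, bins=40)  # roughly bell-shaped, as the CLT predicts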
Step3: With 130 data points, it really doesn't matter if we use the normal or the t distribution. A t distribution with 129 degrees of freedom is essentially a normal distribution, so the results should not be very different. However, in this day and age I don't see the purpose of even bothering with the normal distribution. Looking up t-distribution tables is awfully annoying, so the normal approximation once had its purpose; nowadays I'm just going to let a computer calculate either for me, and both are equally simple.
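As a quick, illustrative check of that claim (not in the original notebook), compare the 97.5% critical values:
from scipy.stats import t, norm
print(t.ppf(0.975, 129))  # about 1.979
print(norm.ppf(0.975))    # about 1.960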
Step4: Our null hypothesis is that the true average body temperature is $98.6^\circ F$. We'll be calculating the probability of finding a value less than or equal to the mean we obtained in this data given that this null hypothesis is true, i.e. our alternative hypothesis is that the true average body temperature is less than $98.6^\circ F$.
Step5: Regardless of what distribution we assume we are drawing our sample means from, the probability of seeing this data or averages less than it if the true average body temperature was 98.6 is basically zero.
Step6: Our estimate of the true average human body temperature is thus $98.2^\circ F \pm 0.1$.
Step7: This range, 97.29-99.10 degrees F includes 80% of the patients in our sample.
|
11,404
|
<ASSISTANT_TASK:>
Python Code:
import json

j = {'ants_dwi_to_t1': u'ANTS 3 -m CC[ %s, %s, 1, 4] -r Gauss[0,3] -t Elast[1.5] -i 30x20x10 -o %s',
'warp_md_to_t1': u'WarpImageMultiTransform 3 %s %s -R %s %s %s'}
json.dump(j, open('/tmp/templates.json','w'))
cmd = './thesaurus.py -n --name ants_dwi_to_t1 /tmp/templates.json target.nii moving.nii trans_'
print '$', cmd
print shell_execute(cmd)
cmd = './thesaurus.py -n --name warp_md_to_t1 /tmp/templates.json target.nii moving.nii output.nii trans_Warp.nii trans_Affine.txt'
print '\n$', cmd
print shell_execute(cmd)
j = {'ants_dwi_to_t1': [u'@ALFA Denoised Nobias SPM Dilated White matter',
u'@ALFA DWI B0 Brain FSL FAST White matter',
u'ALFA ANTS Elast DWI to T1 Transformation Template Filename'],
'warp_md_to_t1': [u'@ALFA Mean Diffusivity Image',
u'>ALFA DWI MD Map Warped to T1 space',
u'@ALFA Denoised Nobias T1 Image',
u'@ALFA ANTS Elast DWI to T1 Transformation',
u'@ALFA ANTS Elast DWI to T1 Affine Transformation']}
json.dump(j, open('/tmp/templates_io.json', 'w'))
import thesaurus as t
a = t.ALFAHelper() #jsonfile='/tmp/templates_io.json')
a.parse_command(subject='10013', name='warp_md_to_t1')
step1 = a.current_stage('10013')
print 'step:', step1, '\n'
step2 = a.current_stage('55855')
print 'step:', step2
d = '/home/grg/data/ALFA_DWI/'
from glob import glob
import os.path as osp
subjects = [e.split('/')[-1] for e in glob(osp.join(d, '*')) if osp.isdir(e)]
a.generate_batch(subjects, step='ants_dwi_to_t1', batchfile='/tmp/batch.sh', create_workflow=True)
b = open('/tmp/batch.sh').readlines()
for each in b[:20]:
print each
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The $\%s$ represent parameters (such as filenames) that will be provided later.
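For example (the filenames and their order here are purely illustrative, not the tool's actual argument mapping), the placeholders are filled with Python's % string formatting:
template = u'WarpImageMultiTransform 3 %s %s -R %s %s %s'
print(template % ('moving.nii', 'output.nii', 'target.nii', 'trans_Warp.nii', 'trans_Affine.txt'))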
Step2: The $-n$ option is just a safety lock to display the command only and not to run it.
Step3: Then call the desired command by its keyname, with the desired subject and feel light...
Step4: Based on this file it can also recreate the sequence of a pipeline and thereby provide you with the progression state of a subject along this pipeline.
Step5: OK, now enough talking. Can you please run that d@£( step over my entire dataset?
|
11,405
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import Image
%matplotlib inline
# The resulted buildings:
Image("./data/buildings_distribution.png")
Image('./data/train_process.png')
Image('./data/feature_f_scores.png')
Image('./data/bst_tree.png')
Image('./data/ROC_Curve_combined.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Objective
Step2: Features
Step3: This model resulted in an AUC score of 0.858 on test data. Feature importances are shown below
Step4: Locations were the most important features in this model. Although I tried using more features generated by differentiating different kinds of crimes or violations, the AUC scores did not improve.
Step5: Since overfitting was observed during training, I also tried to reduce the model's variance by including more non-blighted buildings, sampled multiple times with replacement (bagging); a rough sketch of this idea follows.
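A hedged sketch of that bagging idea (synthetic stand-in data and classifier, not the actual blight dataset or model):
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
# imbalanced toy data standing in for blighted vs. non-blighted buildings
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9, 0.1], random_state=0)
bag = BaggingClassifier(DecisionTreeClassifier(max_depth=5), n_estimators=50, bootstrap=True, random_state=0)
bag.fit(X, y)
print(bag.score(X, y))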
|
11,406
|
<ASSISTANT_TASK:>
Python Code:
import autofig
import numpy as np
import matplotlib.pyplot as plt
#autofig.inline()
n = 75
x = np.linspace(0, 4*np.pi, n)
y1 = np.sin(x)
y2 = -np.sin(x)
z1 = np.cos(x)
z2 = -2*np.cos(x)
yerr = np.random.rand(n)*0.3
zerr = np.random.rand(n)
autofig.reset()
plt.gcf().set_size_inches(14,8)
autofig.plot(x, y1, z1, i='x',
xlabel='x', ylabel='y',
yerror=yerr,
c=z1, cmap='winter', clabel='z',
s=z1, smap=(0.01, 0.04), slabel='z',
marker='none', linestyle='solid',
highlight_marker='s', highlight_linestyle='dashed')
autofig.plot(x, y2, z2, i='x',
xlabel='x', ylabel='y',
yerror=yerr,
c=z2, cmap='winter', clabel='z',
s=z2, smap=(0.01, 0.04), slabel='z',
marker='o', linestyle='none',
uncover=True)
mplfig = autofig.draw(save='color_size_zorder.png')
%matplotlib inline
from matplotlib.collections import LineCollection
fig = plt.figure(figsize=(14,8))
# both sizes need their own normalization
s1norm = plt.Normalize(min(z1), max(z1))
s2norm = plt.Normalize(min(z2), max(z2))
# need to manually map s1 onto a reasonable linewidth range (we'll skip converting from axes/figure units)
s1 = s1norm(z1)*8+2
# need to manually map s2 onto a reasonable markersize range (we'll skip converting from axes/figure units)
s2 = s2norm(z2)*400+50
# both colorscales need to follow the SAME normalization
cmin, cmax = np.min([z1, z2]), np.max([z1, z2])
cnorm = plt.Normalize(cmin, cmax)
# plot both sets of errorbars... we'd have to loop to make the colors match
plt.errorbar(x, y1, yerr=yerr, linestyle='none')
plt.errorbar(x, y2, yerr=yerr, linestyle='none')
# handle the line changing color and width
points = np.array([x, y1]).T.reshape(-1, 1, 2)
segments = np.concatenate([points[:-1], points[1:]], axis=1)
lc = LineCollection(segments, norm=cnorm, cmap='winter', linewidths=s1)
lc.set_array(z1)
plt.gca().add_collection(lc)
# handle the markers changing color and markersize
# for some reason, we don't want to pass the normalized colors to scatter
# even though we had to send norm to LineCollection
plt.scatter(x, y2, s=s2, c=z2, cmap='winter', edgecolors='none')
# make the sidebars
plt.colorbar(label='z')
# we need to manually force the colorbar to use the same normalization that we used
plt.clim(cmin, cmax)
# label the axes
plt.xlabel('x')
plt.ylabel('y')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plotting in Autofig
Step2: Replicating in Matplotlib
Step3: We'll replicate everything except highlight and uncover (which require interpolation at the given value) and zorder (which requires looping through each point and setting a re-scaled z-order); a rough sketch of that last idea follows.
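A minimal sketch of that per-point z-order idea (illustrative only; it reuses x, y2, z2 and plt from the cells above and skips autofig's unit handling):
zorders = plt.Normalize(z2.min(), z2.max())(z2) * 10  # rescale the color values into z-orders
for xi, yi, zi in zip(x, y2, zorders):
    plt.plot(xi, yi, 'o', zorder=zi)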
|
11,407
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np
def find_peaks(a):
"""Find the indices of the local maxima in a sequence."""
b=[]
count=0
while count<(len(a)): # while count (our index indicator) is less than the length of the sequence+1
if count==len(a):
break
if count==0 and a[count]>a[count+1]: # if the first element is greater than the 2nd, add the index number
b.append(count) # to our list b
count+=2 # count is now 2. which means we will analyze the 3rd element.
elif count==0 and a[count]<a[count+1]: # if the first value is less than the 2nd, move to evaluating the 2nd element.
count+=1
elif count==(len(a)-1) and (a[count]>a[count-1]): # if count = the very last element, and is greater than the previous, append to b.
b.append(count)
count+=1
elif count==(len(a)-1) and (a[count]<a[count-1]):
break
elif a[count]>a[count+1] and a[count]>a[count-1]: # if the element is greater than elements before and after, append.
b.append(count)
count+=1 # move on to the next element
elif a[count]<a[count+1] or a[count]<a[count-1]: # if element isnt greater than elements before and after,
count+=1 #move over to next element
return b
print(find_peaks([0,1,2,3]))
p1 = find_peaks([2,0,1,0,2,0,1])
assert np.allclose(p1, np.array([0,2,4,6]))
p2 = find_peaks(np.array([0,1,2,3]))
assert np.allclose(p2, np.array([3]))
p3 = find_peaks([3,2,1,0])
assert np.allclose(p3, np.array([0]))
from sympy import pi, N
pi_digits_str = str(N(pi, 10001))[2:]
a=pi_digits_str
b=[]
for i in a:
b.append(int(i))
c=np.array(b)
find_peaks(c)
assert True # use this for grading the pi digits histogram
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Peak finding
Step3: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following
|
11,408
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the 3d projection

T, N, K = 5, 10, 8
x = np.linspace(0, 1, N)
t = np.linspace(0, T, K)
h, tau = 1/(N - 1), T/(K - 1)
u0 = 1*x
y1 = np.zeros_like(t)
y2 = 1/2*t + 1
xv, tv = np.meshgrid(x, t, sparse=True)
f = xv**2 / 2 - tv
u_ans = xv**2 / 2 * tv + xv
fig = plt.figure(figsize=(15, 10))
ax = fig.gca(projection = '3d')
ax.plot_wireframe(xv, tv, u_ans)
ax.set_xlabel('position')
ax.set_ylabel('time')
ax.set_zlabel('temp')
ax.set_title('Solution')
plt.show()
def explicit(h, tau, u0, y1, y2, f):
K = y1.size
N = u0.size
y = np.empty((K, N))
y[0, :] = u0
y[:, 0] = y1
y[:, N - 1] = y2
for j in range(1, K):
for i in range(1, N - 1):
y[j, i] = y[j - 1, i] + tau*(y[j - 1, i + 1] - 2*y[j - 1, i] + y[j - 1, i - 1])/h**2 + tau*f[j - 1, i]
return y
u_exp = explicit(h, tau, u0, y1, y2, f)
fig = plt.figure(figsize=(15, 10))
ax = fig.gca(projection = '3d')
ax.plot_wireframe(xv, tv, u_exp)
ax.set_xlabel('position')
ax.set_ylabel('time')
ax.set_zlabel('temp')
ax.set_title('Explicit scheme')
plt.show()
np.linalg.norm(u_exp - u_ans)
def thomas(a, b, c, d):
n = len(d)
A = np.empty_like(d)
B = np.empty_like(d)
A[0] = -c[0]/b[0]
B[0] = d[0]/b[0]
for i in range(1, n):
A[i] = -c[i] / (b[i] + a[i]*A[i - 1])
B[i] = (d[i] - a[i]*B[i - 1])/(b[i] + a[i]*A[i - 1])
y = np.empty_like(d)
y[n - 1] = B[n - 1]
for i in range(n - 2, -1, -1):
y[i] = A[i]*y[i + 1] + B[i]
return y
def implicit(h, tau, u0, y1, y2, f):
K = y1.size
N = u0.size
y = np.empty((K, N))
y[0, :] = u0
y[:, 0] = y1
y[:, N - 1] = y2
for j in range(1, K):
a = np.ones(N - 2) * 1/h**2
a[0] = 0
b = np.ones(N - 2) * -(1/tau + 2/h**2)
c = np.ones(N - 2) * 1/h**2
c[-1] = 0
d = -y[j - 1, 1:-1]/tau - f[j, 1:-1]
d[0] = d[0] - y1[j]/h**2
d[-1] = d[-1] - y2[j]/h**2
y[j, 1:-1] = thomas(a, b, c, d)
return y
u_imp = implicit(h, tau, u0, y1, y2, f)
fig = plt.figure(figsize=(15, 10))
ax = fig.gca(projection = '3d')
ax.plot_wireframe(xv, tv, u_imp)
ax.set_xlabel('position')
ax.set_ylabel('time')
ax.set_zlabel('temp')
ax.set_title('Implicit scheme')
plt.show()
np.linalg.norm(u_imp - u_ans)
N = 100
Ks = range(2, 8)
err_exp = []
err_imp = []
for K in Ks:
x = np.linspace(0, 1, N)
t = np.linspace(0, T, K)
h, tau = 1/(N - 1), T/(K - 1)
u0 = 1*x
y1 = np.zeros_like(t)
y2 = 1/2*t + 1
xv, tv = np.meshgrid(x, t, sparse=True)
f = xv**2 / 2 - tv
u_ans = xv**2 / 2 * tv + xv
u_exp = explicit(h, tau, u0, y1, y2, f)
u_imp = implicit(h, tau, u0, y1, y2, f)
err_exp.append(np.linalg.norm(u_exp - u_ans))
err_imp.append(np.linalg.norm(u_imp - u_ans))
plt.figure(figsize=(15, 10))
plt.semilogy(Ks, err_exp, Ks, err_imp)
plt.legend(['explicit', 'implicit'], loc='best')
plt.xlabel('K')
plt.ylabel('norm')
plt.title('Error(frobenius norm) of explicit vs implicit schemes')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Explicit scheme
Step2: Implicit scheme
Step3: Stability of solutions
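Background note (standard result, not stated in the original notebook): for this heat equation the explicit scheme is stable only when $\tau \le h^2/2$, while the implicit scheme is unconditionally stable; this is why the explicit error blows up for small K (large $\tau$) in the comparison below.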
|
11,409
|
<ASSISTANT_TASK:>
Python Code:
import json
import os
import shutil
import subprocess
if not os.path.isfile('./data/hg38.ml.fa'):
print('downloading hg38.ml.fa')
subprocess.call('curl -o ./data/hg38.ml.fa.gz https://storage.googleapis.com/basenji_barnyard/hg38.ml.fa.gz', shell=True)
subprocess.call('gunzip ./data/hg38.ml.fa.gz', shell=True)
if not os.path.exists('./data/coolers'):
os.mkdir('./data/coolers')
if not os.path.isfile('./data/coolers/HFF_hg38_4DNFIP5EUOFX.mapq_30.2048.cool'):
subprocess.call('curl -o ./data/coolers/HFF_hg38_4DNFIP5EUOFX.mapq_30.2048.cool'+
' https://storage.googleapis.com/basenji_hic/tutorials/coolers/HFF_hg38_4DNFIP5EUOFX.mapq_30.2048.cool', shell=True)
subprocess.call('curl -o ./data/coolers/H1hESC_hg38_4DNFI1O6IL1Q.mapq_30.2048.cool'+
' https://storage.googleapis.com/basenji_hic/tutorials/coolers/H1hESC_hg38_4DNFI1O6IL1Q.mapq_30.2048.cool', shell=True)
ls ./data/coolers/
lines = [['index','identifier','file','clip','sum_stat','description']]
lines.append(['0', 'HFF', './data/coolers/HFF_hg38_4DNFIP5EUOFX.mapq_30.2048.cool', '2', 'sum', 'HFF'])
lines.append(['1', 'H1hESC', './data/coolers/H1hESC_hg38_4DNFI1O6IL1Q.mapq_30.2048.cool', '2', 'sum', 'H1hESC'])
samples_out = open('data/microc_cools.txt', 'w')
for line in lines:
print('\t'.join(line), file=samples_out)
samples_out.close()
if os.path.isdir('data/1m'):
shutil.rmtree('data/1m')
! akita_data.py --sample 0.05 -g ./data/hg38_gaps_binsize2048_numconseq10.bed -l 1048576 --crop 65536 --local -o ./data/1m --as_obsexp -p 8 -t .1 -v .1 -w 2048 --snap 2048 --stride_train 262144 --stride_test 32768 ./data/hg38.ml.fa ./data/microc_cools.txt
! cut -f4 data/1m/sequences.bed | sort | uniq -c
! head -n3 data/1m/sequences.bed
# specify model parameters json to have only two targets
params_file = './params.json'
with open(params_file) as params_file:
params_tutorial = json.load(params_file)
params_tutorial['model']['head_hic'][-1]['units'] =2
with open('./data/1m/params_tutorial.json','w') as params_tutorial_file:
json.dump(params_tutorial,params_tutorial_file)
### note that training with default parameters requires GPU with >12Gb RAM ###
! akita_train.py -k -o ./data/1m/train_out/ ./data/1m/params_tutorial.json ./data/1m/
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download a few Micro-C datasets, processed using distiller (https
Step2: Write out these cooler files and labels to a samples table.
Step3: Next, we want to choose genomic sequences to form batches for stochastic gradient descent, divide them into training/validation/test sets, and construct TFRecords to provide to downstream programs.
Step4: The data for training is now saved in data/1m as tfrecords (for training, validation, and testing), where contigs.bed contains the original large contiguous regions from which training sequences were taken, and sequences.bed contains the train/valid/test sequences.
Step5: Now train a model!
|
11,410
|
<ASSISTANT_TASK:>
Python Code:
# Import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import psycopg2
import getpass
import pdvega
# for configuring connection
from configobj import ConfigObj
import os
%matplotlib inline
# Create a database connection using settings from config file
config='../db/config.ini'
# connection info
conn_info = dict()
if os.path.isfile(config):
config = ConfigObj(config)
conn_info["sqluser"] = config['username']
conn_info["sqlpass"] = config['password']
conn_info["sqlhost"] = config['host']
conn_info["sqlport"] = config['port']
conn_info["dbname"] = config['dbname']
conn_info["schema_name"] = config['schema_name']
else:
conn_info["sqluser"] = 'postgres'
conn_info["sqlpass"] = ''
conn_info["sqlhost"] = 'localhost'
conn_info["sqlport"] = 5432
conn_info["dbname"] = 'eicu'
conn_info["schema_name"] = 'public,eicu_crd'
# Connect to the eICU database
print('Database: {}'.format(conn_info['dbname']))
print('Username: {}'.format(conn_info["sqluser"]))
if conn_info["sqlpass"] == '':
# try connecting without password, i.e. peer or OS authentication
try:
if (conn_info["sqlhost"] == 'localhost') & (conn_info["sqlport"]=='5432'):
con = psycopg2.connect(dbname=conn_info["dbname"],
user=conn_info["sqluser"])
else:
con = psycopg2.connect(dbname=conn_info["dbname"],
host=conn_info["sqlhost"],
port=conn_info["sqlport"],
user=conn_info["sqluser"])
except:
conn_info["sqlpass"] = getpass.getpass('Password: ')
con = psycopg2.connect(dbname=conn_info["dbname"],
host=conn_info["sqlhost"],
port=conn_info["sqlport"],
user=conn_info["sqluser"],
password=conn_info["sqlpass"])
query_schema = 'set search_path to ' + conn_info['schema_name'] + ';'
query = query_schema + """
with cpc as ( select distinct patientunitstayid from careplancareprovider )
, cpeol as ( select distinct patientunitstayid from careplaneol )
, cpge as ( select distinct patientunitstayid from careplangeneral )
, cpgo as ( select distinct patientunitstayid from careplangoal )
, cpid as ( select distinct patientunitstayid from careplaninfectiousdisease )
select
pt.hospitalid
, count(distinct pt.patientunitstayid) as number_of_patients
, count(distinct cpc.patientunitstayid) as n_cpc
, count(distinct cpeol.patientunitstayid) as n_cpeol
, count(distinct cpge.patientunitstayid) as n_cpge
, count(distinct cpgo.patientunitstayid) as n_cpgo
, count(distinct cpid.patientunitstayid) as n_cpid
from patient pt
left join cpc
on pt.patientunitstayid = cpc.patientunitstayid
left join cpeol
on pt.patientunitstayid = cpeol.patientunitstayid
left join cpge
on pt.patientunitstayid = cpge.patientunitstayid
left join cpgo
on pt.patientunitstayid = cpgo.patientunitstayid
left join cpid
on pt.patientunitstayid = cpid.patientunitstayid
group by pt.hospitalid
""".format()
df = pd.read_sql_query(query, con)
# convert to percent
cols = ['n_cpc', 'n_cpeol', 'n_cpge', 'n_cpgo', 'n_cpid']
for c in cols:
df[c] = df[c].astype(float) / df['number_of_patients'] * 100.0
df.sort_values('number_of_patients', ascending=False, inplace=True)
df.head(n=10)
# rename columns within the call for clarity in the figure
column_label = {'n_cpc': 'Care provider',
'n_cpeol': 'EOL',
'n_cpge': 'General',
'n_cpgo': 'Goal',
'n_cpid': 'Infectious disease'}
df[cols].rename(columns=column_label).vgplot.hist(bins=10, stacked=True,
var_name='Number of hospitals',
value_name='Percent of patients with data')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Summarize data available in each table
|
11,411
|
<ASSISTANT_TASK:>
Python Code:
from pathlib import Path
import os
import csv
import pandas as pd
import yaml
import numpy as np
import statistics
from operator import itemgetter
def get_mixtures(row):
total = row['A'] + row['T'] + row['C'] + row['G']
thresh = 0.05 * total
alleles = {
'qpos': int(row['query.nuc.pos']),
'mixtures': {}
}
if row['coverage'] > 100:
for aa in ('A', 'C', 'T', 'G'):
if row[aa] > thresh:
alleles['mixtures'][aa] = round((row[aa] / row['coverage'] * 100), 2)
if len(alleles['mixtures']) > 1:
return alleles
else:
return None
BASE = Path('/path/to/fastq_files')
ROOT = BASE / 'Results' / 'scratch'
samples = os.listdir(ROOT)
results = {}
for sample in samples:
results[sample] = {}
data = ROOT / sample / 'data.csv'
data_extended = ROOT / sample / 'data_extended.csv'
nucs = ROOT / sample / 'nuc.csv'
conseq_all = ROOT / sample / 'conseq_all.csv'
# Get length of conseq
conseqs = pd.read_csv(conseq_all)
conseq_len = len(conseqs[conseqs['region'].isnull()]['sequence'].iloc[0])
results[sample]['conseq_len'] = conseq_len
# Get concordances
with open(data_extended) as f:
reader = csv.DictReader(f)
for row in reader:
results[sample]['concordance'] = row['concordance']
# Get coverages
datafile = pd.read_csv(data)
for _type in ('mismatch', 'deletion', 'addition'):
info = datafile[
(datafile['type'] == _type)
]
results[sample][_type] = info
# Get mixtures
nucsfile = pd.read_csv(nucs)
nucsfile = nucsfile[
~nucsfile['region'].str.contains('nsp')
]
nucsfile['mixtures'] = nucsfile.apply(get_mixtures, axis=1)
nucsfile = nucsfile[
~nucsfile['mixtures'].isnull()
]['mixtures']
results[sample]['mixtures'] = nucsfile
N = len(results)
print(f'We tested {N} samples')
total_nucleotides = sum([results[x]['conseq_len'] for x in results])
mismatches = pd.concat(
[results[sample]['mismatch'] for sample in results]
)
n_mismatches = len(mismatches)
unique_mismacthes = len(mismatches['sample'].unique())
mismatch_data = []
for i,row in mismatches.iterrows():
thing = row.to_dict()
mixtures = results[row['sample']]['mixtures']
matching_mixture = None
for mix in mixtures:
if mix['qpos'] == row.pos:
matching_mixture = mix
break
if matching_mixture:
thing['match'] = True
else:
thing['match'] = False
mismatch_data.append(thing)
mismatches_per_sample = [len(results[x]['mismatch']) for x in results]
median = statistics.median(mismatches_per_sample)
mean = round(statistics.mean(mismatches_per_sample), 3)
q75, q25 = np.percentile(mismatches_per_sample, [75 ,25], interpolation='midpoint')
iqr = q75 - q25
print(f'median number of mismatches per sample: {median}')
print(f'mean number of mismatches per sample: {mean}')
print(f'25th percentile of mismatches: {q25}')
print(f'75th percentile of mismatches: {q75}')
print(f'iqr number of mismatches per sample: {iqr}')
concord = [float(results[x]['concordance']) for x in results]
median = statistics.median(concord)
mean = round(statistics.mean(concord), 3)
q75, q25 = np.percentile(concord, [75 ,25], interpolation='midpoint')
iqr = round(q75 - q25, 4)
print(f'median concordance: {median}')
print(f'mean concordance: {mean}')
print(f'25th percentile: {q25}')
print(f'75th percentile: {q75}')
print(f'iqr concordance: {iqr}')
mixnums = [len(results[s]['mixtures']) for s in results]
median = statistics.median(mixnums)
mean = round(statistics.mean(mixnums), 3)
q75, q25 = np.percentile(mixnums, [75 ,25], interpolation='midpoint')
iqr = q75 - q25
print(f'total number of mixtures: {sum(mixnums)}')
print(f'median number of mixtures: {median}')
print(f'mean number of mixtures: {mean}')
print(f'25th percentile: {q25}')
print(f'75th percentile: {q75}')
print(f'iqr number of mixtures: {iqr}')
minor_freqs = []
for sample in results:
_min = None
for mix in results[sample]['mixtures']:
if not _min:
_min = min(mix['mixtures'].items(), key=itemgetter(1))
continue
mymin = min(mix['mixtures'].items(), key=itemgetter(1))
if mymin < _min:
_min = mymin
if _min:
minor_freqs.append(_min)
minor_freqs = [x[1] for x in minor_freqs]
median = round(statistics.median(minor_freqs), 2)
mean = round(statistics.mean(minor_freqs), 2)
q75, q25 = np.percentile(minor_freqs, [75 ,25], interpolation='midpoint')
iqr = round(q75 - q25, 4)
print(f'median minor allele frequency: {median}')
print(f'mean minor allele frequency: {mean}')
print(f'25th percentile: {q25}')
print(f'75th percentile: {q75}')
print(f'iqr minor allele frequency: {iqr}')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get the number of samples N
Step2: Get the total number of nucleotides across all samples
Step3: Get a dataframe of the mismatches only
Step4: Get the number of mismatches
Step5: Get the number of unique mismatches
Step6: Match mixtures with mismatch positions
Step7: Print information on the mismatches per sample
Step8: Print information on the concordances per sample
Step9: Print information on the number of mixtures per sample
Step10: Print information on the minor frequency alleles (If a position has 60% A and 40% T, the minor frequency allele is T)
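A tiny illustration of that definition, mirroring the min(..., key=itemgetter(1)) pattern used in the code above:
from operator import itemgetter
mix = {'A': 60.0, 'T': 40.0}
print(min(mix.items(), key=itemgetter(1)))  # ('T', 40.0) -> T is the minor allele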
|
11,412
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
import sys
print(sys.version)
import sys
sys.path.append("../python")
import setup_dataset
data, labels = setup_dataset.setup_simple_iterables("with_dc")
X_train, X_test, y_train, y_test = setup_dataset.slice_data(data, labels)
# Setting up various complexities for the different algorithms.
# Number of neighbors
knn_c = (2, 4, 10, 50)
# Maximum depth in a decision tree
dtc_c = (2, 5, 10, 50)
# complexities for the rbf kernel
svc_c = (1, 1000, 1000000)
# Number of estimators in the random forest classifier
rfc_c = (1, 10, 100, 1000, 10000, 100000)
# Number of parallel jobs (CPU)
rfc_jobs = (3, -2)
gpc_jobs = (3, -2)
# Number of iteration in the Gaussian Process Classifier
gpc_c = (20, 50, 100)
from sklearn.preprocessing import MaxAbsScaler
from sklearn.preprocessing import StandardScaler
scaler, X_train_scaled, X_test_scaled = setup_dataset.scale_sliced_data(X_train, X_test, StandardScaler())
knn_list, knn_accs, knn_pred, knn_pred_times, knn_fit_times = \
setup_dataset.run_knn(X_train, X_test, y_train, y_test, knn_c)
setup_dataset.compute_cm(y_test, knn_pred, knn_c)
knn_list_scaled, knn_accs_scaled, knn_pred_scaled, knn_pred_times_scaled, knn_fit_times_scaled =\
setup_dataset.run_knn(X_train_scaled, X_test_scaled, y_train, y_test, knn_c)
setup_dataset.compute_cm(y_test, knn_pred_scaled, knn_c)
for line in knn_accs :
print(line)
print("====================")
for line in knn_accs_scaled:
print(line)
dtc_list, dtc_accs, dtc_pred, dtc_pred_times, dtc_fit_times = \
setup_dataset.run_decision_tree(X_train, X_test, y_train, y_test, dtc_c)
dtc_list_scaled, dtc_accs_scaled, dtc_pred_scaled, dtc_pred_times_scaled, dtc_fit_times_scaled = \
setup_dataset.run_decision_tree(X_train_scaled, X_test_scaled, y_train, y_test, dtc_c)
setup_dataset.compute_cm(y_test, dtc_pred, dtc_c)
setup_dataset.compute_cm(y_test, dtc_pred_scaled, dtc_c)
for line in dtc_accs :
print(line)
print("====================")
for line in dtc_accs_scaled:
print(line)
nbc_list, nbc_accs, nbc_pred, nbc_pred_times, nbc_fit_times = \
setup_dataset.run_naive_bayes(X_train, X_test, y_train, y_test, (1,))
nbc_list_scaled, nbc_accs_scaled, nbc_pred_scaled, nbc_pred_times_scaled, nbc_fit_times_scaled = \
setup_dataset.run_naive_bayes(X_train_scaled, X_test_scaled, y_train, y_test, (1,))
setup_dataset.compute_cm(y_test, nbc_pred, [1])
setup_dataset.compute_cm(y_test, nbc_pred_scaled, [1])
abc_list, abc_accs, abc_pred, abc_pred_times, abc_fit_times = \
setup_dataset.run_adaboost(X_train, X_test, y_train, y_test, (1,))
abc_list_scaled, abc_accs_scaled, abc_pred_scaled, abc_pred_times_scaled, abc_fit_times_scaled = \
setup_dataset.run_adaboost(X_train_scaled, X_test_scaled, y_train, y_test, (1,))
setup_dataset.compute_cm(y_test, abc_pred, [1])
setup_dataset.compute_cm(y_test, abc_pred_scaled, [1])
qda_list, qda_accs, qda_pred, qda_pred_times, qda_fit_times = \
setup_dataset.run_quadratic(X_train, X_test, y_train, y_test, (1,))
qda_list_scaled, qda_accs_scaled, qda_pred_scaled, qda_pred_times_scaled, qda_fit_times_scaled = \
setup_dataset.run_quadratic(X_train_scaled, X_test_scaled, y_train, y_test, (1,))
setup_dataset.compute_cm(y_test, qda_pred, [1])
setup_dataset.compute_cm(y_test, qda_pred_scaled, [1])
svc_list, svc_accs, svc_pred, svc_pred_times, svc_fit_times = \
setup_dataset.run_svc(X_train, X_test, y_train, y_test, svc_c)
svc_list_scaled, svc_accs_scaled, svc_pred_scaled, svc_pred_times_scaled, svc_fit_times_scaled = \
setup_dataset.run_svc(X_train_scaled, X_test_scaled, y_train, y_test, svc_c)
setup_dataset.compute_cm(y_test, svc_pred, svc_c)
setup_dataset.compute_cm(y_test, svc_pred_scaled, svc_c)
for line in svc_accs :
print(line)
print("====================")
for line in svc_accs_scaled:
print(line)
# THIS MAKES THE KERNEL CRASH!
rfc_list, rfc_accs, rfc_pred, rfc_pred_times, rfc_fit_times = \
setup_dataset.run_random_forest(X_train, X_test, y_train, y_test, rfc_c, rfc_jobs)
rfc_list_scaled, rfc_accs_scaled, rfc_pred_scaled, rfc_pred_times_scaled, rfc_fit_times_scaled = \
setup_dataset.run_random_forest(X_train_scaled, X_test_scaled, y_train, y_test, rfc_c, rfc_jobs)
setup_dataset.compute_cm(y_test, rfc_pred, rfc_c)
setup_dataset.compute_cm(y_test, rfc_pred_scaled, rfc_c)
gpc_list, gpc_accs, gpc_pred, gpc_pred_times, gpc_fit_times = \
setup_dataset.run_gaussian(X_train, X_test, y_train, y_test, gpc_c, gpc_jobs)
gpc_list_scaled, gpc_accs_scaled, gpc_pred_scaled, gpc_pred_times_scaled, gpc_fit_times_scaled = \
setup_dataset.run_gaussian(X_train_scaled, X_test_scaled, y_train, y_test, gpc_c, rfc_jobs)
setup_dataset.compute_cm(y_test, gpc_pred, gpc_c)
setup_dataset.compute_cm(y_test, gpc_pred_scaled, gpc_c)
import numpy as np
import matplotlib.pyplot as plt
plt.figure()
x = np.arange(len(knn_accs[0]))
y = [[] for _ in range(len(knn_accs[0]))]
for i in range(len(knn_accs[0])):
y[i] = knn_accs[i]
plt.plot(x, y[i], linestyle='-', label="complexity {}".format(i))
# plt.scatter(x, y[i], label="data {}".format(i))
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=2, mode="expand", borderaxespad=0.)
plt.show()
plt.figure()
x = np.arange(len(knn_accs[0]))
y = [[] for _ in range(len(knn_accs[0]))]
width = 0.2
for i in range(len(knn_accs[0])):
y[i] = knn_accs[i]
plt.bar(x- 1.5*width + width*i, y[i], width, align='center', label="complexity {}".format(i), alpha=0.8)
# plt.scatter(x, y[i], label="data {}".format(i))
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=2, mode="expand", borderaxespad=0.)
plt.show()
plt.figure()
x = np.arange(len(knn_fit_times[0]))
y = [[] for _ in range(len(knn_fit_times[0]))]
for i in range(len(knn_fit_times[0])):
y[i] = knn_fit_times[i]
plt.plot(x, y[i], linestyle='-', label="complexity {}".format(i))
# plt.scatter(x, y[i], label="data {}".format(i))
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=2, mode="expand", borderaxespad=0.)
plt.show()
plt.figure()
x = np.arange(len(knn_accs_scaled[0]))
y = [[] for _ in range(len(knn_accs_scaled[0]))]
for i in range(len(knn_accs_scaled[0])):
y[i] = knn_accs_scaled[i]
plt.plot(x, y[i], linestyle='-', label="complexity {}".format(i))
# plt.scatter(x, y[i], label="data {}".format(i))
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=2, mode="expand", borderaxespad=0.)
plt.show()
plt.figure()
x = np.arange(len(svc_accs[0]))
y = [[] for _ in range(len(svc_accs[0]))]
for i in range(len(svc_accs[0])):
y[i] = svc_accs[i]
plt.plot(x, y[i], linestyle='-', label="complexity {}".format(i))
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=2, mode="expand", borderaxespad=0.)
plt.show()
plt.figure()
x = np.arange(len(svc_accs_scaled[0]))
y = [[] for _ in range(len(svc_accs_scaled[0]))]
for i in range(len(svc_accs_scaled[0])):
y[i] = svc_accs_scaled[i]
plt.plot(x, y[i], linestyle='-', label="complexity {}".format(i))
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=2, mode="expand", borderaxespad=0.)
plt.show()
plt.figure()
x = np.arange(len(dtc_accs_scaled[0]))
y = [[] for _ in range(len(dtc_accs_scaled[0]))]
for i in range(len(dtc_accs_scaled[0])):
y[i] = dtc_accs_scaled[i]
plt.plot(x, y[i], linestyle='-', label="complexity {}".format(i))
# plt.scatter(x, y[i], label="data {}".format(i))
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=2, mode="expand", borderaxespad=0.)
plt.show()
for line in dtc_accs :
print(line)
print("====================")
for line in dtc_accs_scaled:
print(line)
import pickle
pickle.dump(knn_list[3][1], open('../weights/knn_full_data_set_4_neighbors.sav', 'wb'))
pickle.dump(dtc_list[3][3], open('../weights/dtc_full_data_set_depth_50.sav', 'wb'))
pickle.dump(svc_list_scaled[3][2], open('../weights/svc_full_data_set_rbf_1e6.sav', 'wb'))
from sklearn.externals import joblib
joblib.dump(knn_list[3][1], '../weights/knn_full_data_set_4_neighbors.pkl', protocol=2)
joblib.dump(dtc_list[3][3], '../weights/dtc_full_data_set_depth_50.pkl', protocol=2)
joblib.dump(svc_list_scaled[3][2], '../weights/svc_full_data_set_rbf_1e6.pkl', protocol=2)
X_test_large = X_test[3]
X_test_scaled_large = X_test_scaled[3]
y_test_large = y_test[3]
knn_saved = pickle.load(open('../weights/knn_full_data_set_4_neighbors.sav', 'rb'))
dtc_saved = pickle.load(open('../weights/dtc_full_data_set_depth_50.sav', 'rb'))
svc_saved = pickle.load(open('../weights/svc_full_data_set_rbf_1e6.sav', 'rb'))
print("The score achieved with the saved model is:\n")
print("K-nearest Neighbors =", knn_saved.score(X_test_large, y_test_large))
print("Decision Tree =", dtc_saved.score(X_test_large,y_test_large))
print("Support Vector Machine =", svc_saved.score(X_test_scaled_large,y_test_large))
knn_saved = joblib.load('../weights/knn_full_data_set_4_neighbors.pkl')
dtc_saved = joblib.load('../weights/dtc_full_data_set_depth_50.pkl')
svc_saved = joblib.load('../weights/svc_full_data_set_rbf_1e6.pkl')
result = knn_saved.score(X_test_large,y_test_large)
print("The score achieved with the saved model is:\n")
print("K-nearest Neighbors =", knn_saved.score(X_test_large, y_test_large))
print("Decision Tree =", dtc_saved.score(X_test_large,y_test_large))
print("Support Vector Machine =", svc_saved.score(X_test_scaled_large,y_test_large))
print(X_test_large[4000])
print(len(X_test_large))
scaler_saved = joblib.dump(scaler, '../weights/scaler_saved.pkl')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Placeholder for small graph
Step2: Saving the models to disk
Step3: The same can be done without additional dependencies using joblib, which is shipped with scikit-learn. This has the advantage of being able to take either a file object or just the path to the file as an argument
Step4: Joblib works wonders with sklearn, as well as with numpy arrays in general. For general purposes they both do their job just fine (as joblib uses pickle in the background), but joblib can be significantly faster
|
11,413
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt

R = 40.
H = 60
x_w = 30.
y_w = 90.
Rw = 20
seed = 1
def f(x,y):
np.random.seed(seed)
return - 3.*np.exp( -((x-x_w)**2. + (y-y_w)**2.)/(Rw**2.) ) + \
- 3.*np.exp( -((x-30)**2. + (y-40)**2.)/(20**2.) ) + \
- 1.5*np.exp( -((x-np.random.uniform(-60,60))**2. + (y-np.random.uniform(40,70))**2.)/(60.**2.) ) #+ 4.*(y/H)**0.0
x_MC = np.random.uniform(-R,R,int(9e6))
y_MC = np.random.uniform(-R,R,int(9e6)) + H
r = np.sqrt(x_MC**2. + (y_MC-H)**2.)
ind_in = np.where(r <= R)
u_eq_MC = np.mean(f(x_MC[ind_in],y_MC[ind_in]))
u_eq_MC
x_mesh,y_mesh = np.meshgrid(np.arange(-50.,100.,0.1),np.arange(0.,120.,0.1))
F = f(x_mesh,y_mesh)
plt.contourf(x_mesh,y_mesh,F,20)
cb = plt.colorbar()
circ = plt.Circle([0.,H],R,ec='k',fill=False)
ax = plt.gca()
ax.add_patch(circ)
plt.plot(x_MC[ind_in],y_MC[ind_in],'.k',alpha=0.005)
x_plot = np.arange(-50.,50.1,0.1)
N_x = 5
N_y = 5
root_x_1D, weight_x = np.polynomial.legendre.leggauss(N_x)
root_y_1D, weight_y = np.polynomial.legendre.leggauss(N_y)
root_x,root_y = np.meshgrid(root_x_1D,root_y_1D)
root_x,root_y = root_x.flatten(), root_y.flatten()
print root_x
print root_y
print
print root_x_1D
print root_y_1D
print weight_x
print weight_y
y1 = lambda x: np.sqrt(R**2. - x**2.) + H
y2 = lambda x: -np.sqrt(R**2. - x**2.) + H
xi_1D = R*root_x_1D
xi = R*root_x
yij = 0.5*(y1(xi) - y2(xi))*root_y + 0.5*(y1(xi) + y2(xi))
fij = f(xi,yij)
fij.reshape(N_x,N_y)
np.dot(fij.reshape(N_x,N_y),weight_y)
G_xi = 0.5*(y1(xi_1D) - y2(xi_1D)) * np.dot(fij.reshape(N_x,N_y),weight_y)
I = np.dot(R*G_xi,weight_x)/(np.pi*R**2.)
print I
print 100*(I-u_eq_MC)/u_eq_MC, '%'
plt.contourf(x_mesh,y_mesh,F,20)
plt.colorbar()
circ = plt.Circle([0.,H],R,ec='k',fill=False)
ax = plt.gca()
ax.add_patch(circ)
x_plot = np.arange(-50.,50.2,0.1)
plt.plot(x_plot,y1(x_plot))
plt.plot(x_plot,y2(x_plot))
plt.plot(xi,yij,'ok')
N_x = 5
N_y = 5
root_x_1D, weight_x = np.polynomial.legendre.leggauss(N_x)
root_y_1D, weight_y = np.polynomial.legendre.leggauss(N_y)
root_x,root_y = np.meshgrid(root_x_1D,root_y_1D)
root_x,root_y = root_x.flatten(), root_y.flatten()
x1 = lambda y: np.sqrt(R**2. - (y-H)**2.)
x2 = lambda y: -np.sqrt(R**2. - (y-H)**2.)
yi_1D = R*root_y_1D+H
yi = R*root_y+H
xij = 0.5*(x1(yi) - x2(yi))*root_x + 0.5*(x1(yi) + x2(yi))
fij = f(xij,yi)
fij.reshape(N_x,N_y)
G_yi = 0.5*(x1(yi_1D) - x2(yi_1D)) * np.dot(weight_x,fij.reshape(N_x,N_y))
I = np.dot(R*G_yi,weight_y)/(np.pi*R**2.)
print I
print 100*(I-u_eq_MC)/u_eq_MC, '%'
plt.contourf(x_mesh,y_mesh,F,20)
plt.colorbar()
circ = plt.Circle([0.,H],R,ec='k',fill=False)
ax = plt.gca()
ax.add_patch(circ)
y_plot = np.arange(-50.,50.2,0.1)+H
plt.plot(x1(y_plot),y_plot)
plt.plot(x2(y_plot),y_plot)
plt.plot(xij,yi,'ok')
f_r = lambda r,th: f(-r*np.sin(th),r*np.cos(th)+H)
def gaussN(R, func, varargin, NGr=4, NGth=4):
"""Calculate numerically the gauss integration.
[1] eq. 38
Inputs
----------
R (float): Wind turbine radius [m]
func (function): Wind speed function
varargin: Other arguments for the function besides [r,te]
NG (int): Number of Ga
Outputs
----------
Ua (float):
"""
A = np.pi*R**2
#coefficients
if (NGr==4)&(NGth==4): #for speed give the full values
rt = np.array([[ -0.339981043584856, -0.861136311594053,
0.339981043584856, 0.861136311594053]])
te = rt.T
w = np.array([[0.652145154862546, 0.347854845137454,
0.652145154862546, 0.347854845137454]])
wt=w
else:
rt,w = np.polynomial.legendre.leggauss(NGr)
rt = np.array([rt])
#te = rt.T
w = np.array([w])
te,wt = np.polynomial.legendre.leggauss(NGr)
te = np.array([te]).T
wt = np.array([wt])
return np.sum((np.pi/4.0)*(R**2./A)*w*wt.T*func(R*(rt+1.0)/2.0,
np.pi*(te+1.0),*varargin)*(rt+1.0))
N_r = 4
N_th = 4
I = gaussN(R,f_r,[],N_r,N_th)
print I
print 100*(I-u_eq_MC)/u_eq_MC, '%'
plt.contourf(x_mesh,y_mesh,F,20)
plt.colorbar()
circ = plt.Circle([0.,H],R,ec='k',fill=False)
ax = plt.gca()
ax.add_patch(circ)
rt,w = np.polynomial.legendre.leggauss(N_r)
rt = np.array([rt])
te,wt = np.polynomial.legendre.leggauss(N_th)
te = np.array([te])
re = R*(rt+1.0)/2.0
te = np.pi*(te.T+1.0)
plt.plot(-re*np.sin(te),re*np.cos(te)+H,'ok')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Monte-Carlo integration
Step2: Cartesian Gauss-Legendre quadrature
Step3: integrating before in y
Step5: Radial Gauss-Legendre
|
11,414
|
<ASSISTANT_TASK:>
Python Code:
# imports assumed from Keras so that this snippet is self-contained
from keras.layers import Input, Dropout, Dense, Embedding, LSTM, add
from keras.models import Model
from keras.utils import plot_model

# define the captioning model
def define_model(vocab_size, max_length):
# feature extractor model
inputs1 = Input(shape=(4096,))
fe1 = Dropout(0.5)(inputs1)
fe2 = Dense(256, activation='relu')(fe1)
# sequence model
inputs2 = Input(shape=(max_length,))
se1 = Embedding(vocab_size, 256, mask_zero=True)(inputs2)
se2 = Dropout(0.5)(se1)
se3 = LSTM(256)(se2)
# decoder model
decoder1 = add([fe2, se3])
decoder2 = Dense(256, activation='relu')(decoder1)
outputs = Dense(vocab_size, activation='softmax')(decoder2)
# tie it together [image, seq] [word]
model = Model(inputs=[inputs1, inputs2], outputs=outputs)
model.compile(loss='categorical_crossentropy', optimizer='adam')
# summarize model
print(model.summary())
plot_model(model, to_file='model.png', show_shapes=True)
return model
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
11,415
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import openpathsampling as paths
import numpy as np
import math
# the openpathsampling OpenMM engine
import openpathsampling.engines.openmm as eng
old_store = paths.AnalysisStorage("ala_mstis_bootstrap.nc")
print "PathMovers:", len(old_store.pathmovers)
print "Engines:", len(old_store.engines)
print "Samples:", len(old_store.samples)
print "Trajectories:", len(old_store.trajectories)
print "Ensembles:", len(old_store.ensembles)
print "SampleSets:", len(old_store.samplesets)
print "Snapshots:", len(old_store.snapshots)
print "Networks:", len(old_store.networks)
# template = old_store.snapshots[0]
engine = old_store.engines['default']
mstis = old_store.networks[0]
sset = old_store.tag['sampleset']
platform = 'CUDA'
engine.initialize(platform)
print 'Engine uses platform `%s`' % engine.platform
sset.sanity_check()
# logging creates ops_output.log file with details of what the calculation is doing
#import logging.config
#logging.config.fileConfig("logging.conf", disable_existing_loggers=False)
storage = paths.storage.Storage("ala_mstis_production.nc", "w")
storage.snapshots.save(old_store.snapshots[0]);
scheme = paths.DefaultScheme(mstis, engine)
mstis_calc = paths.PathSampling(
storage=storage,
sample_set=sset,
move_scheme=scheme
)
mstis_calc.save_frequency = 10
mstis_calc.run(5)
print len(storage.steps)
# commented out during development, so we can "run all" and then do more
storage.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading things from storage
Step2: A lot of information can be recovered from the old storage, and so we don't have to recreate it. However, we did not save our network, so we'll have to create a new one. Since the network creates the ensembles, that means we will have to translate the trajectories from the old ensembles to new ensembles.
Step3: Loading from storage is very easy. Each store is a list. We take the 0th snapshot as a template (it doesn't actually matter which one) for the next storage we'll create. There's only one engine stored, so we take the only one.
Step4: initialize engine
Step5: Running RETIS
Step6: Before we can sample we still need to set the actual MoveScheme which determines the
Step7: and finally generate the PathSampler object to conduct the simulation.
Step8: Now everything is ready
|
11,416
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo("mrSmaCo29U4")
YouTubeVideo("xIq8Sg59UdY")
# NOTE: THIS CODE IS NOT TO BE RUN, AS IT HAS NO DATA! IT IS THE SHELL FOR THE CODE, WHICH WILL BE RUN IN PROBLEM 3.
# GATHERING DATA FROM A WEB ARTICLE & EXTRACTING FEATURES
# import re
# import nltk
# import pprint
# from urllib import urlopen
# from bs4 import BeautifulSoup
# from collections import Counter
# # Input your own text
# raw = raw_input("Enter some text: ")
# # Online articles
# url = "http://www.foxnews.com/science/2016/04/22/nasa-marks-hubbles-birthday-with-this-captivating-image.html"
# html = urlopen(url).read()
# raw2 = BeautifulSoup(html).get_text()
# # Add to temporary local file
# f = open('text.txt', 'r')
# raw3 = f.read()
# f.close()
# # Filter stopwords
# stopwords = nltk.corpus.stopwords.words('english')
# text= [w for w in raw3 if w.lower() not in stopwords]
# # Number of words in title
# title=[t for t in raw3 if t.istitle()]
# wordcounttitle = Counter(title.split( ))
# n_tokens_title=len(wordcounttitle)
# # Number of words in the text body
# wordcount = Counter(raw3.split( ))
# n_tokens_content=len(wordcount)
# n_tokens_content
# # Number of unique words in text body
# n_unique_tokens=len(set(raw3))
# n_unique_tokens
# #number of unique non-stop words in text body
# n_non_stop_unique_tokens=len(set(text))
# n_non_stop_unique_tokens
# #average word length for original text
# average_token_length=len(text)/n_tokens_content
# average_token_length
# # Text subjectivity
# from textblob import TextBlob
# Global_subjectivity=TextBlob(text).sentiment.subjectivity
# # Overall text poluarity
# Global_sentiment_polarity=TextBlob(text).sentiment.polarity
#FEATURE IMPORTANCE
#INPUTS:
# df: data frame containing the (structure) features for the training data.
# feautures: list of (non-response) features
# target: target variable
#feature_selection_model = ExtraTreesClassifier().fit(df[features], df['unpopular'])
#feature_importance=feature_selection_model.feature_importances_
#importance_matrix=np.array([features,list(feature_importance)]).T
#def sortkey(s):
# return s[1]
#sort=zip(features,list(feature_importance))
#f = pd.DataFrame(sorted(sort,key=sortkey,reverse=True),columns=['variables','importance'])[:10]
# EXTRACT TOP FEATURES, DETERMINE TRAINING DOCUMENTS
#features2=f['variables']
#split data into two parts
#np.random.seed(0)
#x_train, x_test, y_train, y_test = train_test_split(df[features2], df.unpopular, test_size=0.4, random_state=None)
#x_train.shape
# ENSEMBLE LEARNING
# Random Forest
#print "RandomForest"
#rf = RandomForestClassifier(n_estimators=100,n_jobs=1)
#clf_rf = rf.fit(x_train,y_train)
#y_predicted_rf = clf_rf.predict(x_test)
# K-NN:
# Determine K by cross-validation.
#x_cv_train, x_cv_test, y_cv_train, y_cv_test = train_test_split(x_train, y_train, test_size=0.3, random_state=None)
# We use K-values ranging from 1-10
# k=[5,10,15,20,25,30,35,40,45,50]
# Train a model on the trainng set and use that model to predict on the testing set
# predicted_knn=[KNeighborsClassifier(n_neighbors=i).fit(x_cv_train,y_cv_train).predict(x_cv_test) for i in k]
#Compute accuracy on the testing set for each value of k
#score_knn=[metrics.accuracy_score(predicted_knn[i],y_cv_test) for i in range(10)]
# Plot accuracy on the test set vs. k
#fig=plt.figure(figsize=(8,6))
#plt.plot([5,10,15,20,25,30,35,40,45,50], score_knn, 'bo--',label='knn')
#plt.xlabel('K')
#plt.ylabel('score')
# Make predictions based on the best model above
# y_predicted_knn = KNeighborsClassifier(n_neighbors=6).fit(x_train,y_train).predict(x_test)
####FINAL PREDICTIONS: ENSEMBLE LEARNING
#y_predicted = y_predicted_knn + y_predicted_rf
#cm = metrics.confusion_matrix(y_test, y_predicted)
#print(cm)
#plt.matshow(cm)
#plt.title('Confusion matrix')
#plt.colorbar()
#plt.ylabel('True label')
#plt.xlabel('Predicted label')
#plt.show()
#print 'Prediction Accuracy'
#print (cm[0,0]+cm[1,1])/float(cm[0,0] + cm[0,1] + cm[1,0] + cm[1,1])
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
csv_path = 'data/OnlineNewsPopularity.csv'
hdf_path = 'data/online_news_popularity.h5'
blue = '#5898f1'
green = '#00b27f'
yellow = '#FEC04C'
red = '#fa5744'
import requests, StringIO, csv, zipfile, sys
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/00332/OnlineNewsPopularity.zip'
request = requests.get(url)
print "Downloading data ...\nRequest status: {}".format(request.status_code)
archive = zipfile.ZipFile(StringIO.StringIO(request.content))
print "Unzipping ..."
csv_data = archive.read('OnlineNewsPopularity/OnlineNewsPopularity.csv', 'r')
outfile = file(csv_path, 'w')
outfile.write(csv_data)
print "Saving into {}".format(csv_path)
outfile.close()
# Read .csv file into data frame
data_frame = pd.read_csv(csv_path, sep=', ', engine='python')
# Rename *channel* columns
data_frame.rename(columns={
'data_channel_is_lifestyle': 'is_lifestyle',
'data_channel_is_entertainment': 'is_entertainment',
'data_channel_is_bus': 'is_business',
'data_channel_is_socmed': 'is_social_media',
'data_channel_is_tech': 'is_tech',
'data_channel_is_world': 'is_world',
}, inplace=True)
# Rename *weekday* columns
data_frame.rename(columns={
'weekday_is_monday': 'is_monday',
'weekday_is_tuesday': 'is_tuesday',
'weekday_is_wednesday': 'is_wednesday',
'weekday_is_thursday': 'is_thursday',
'weekday_is_friday': 'is_friday',
'weekday_is_saturday': 'is_saturday',
'weekday_is_sunday': 'is_sunday',
}, inplace=True)
# Store data into HDF5 file
data_hdf = pd.HDFStore(hdf_path)
data_hdf['data_frame'] = data_frame
data_hdf.close()
# Read .h5 file into data frame
data_frame = pd.read_hdf(hdf_path)
# Drop columns included in the sample Dataset that are not considered in our methods.
cols_to_drop = [
    'LDA_00', 'LDA_01', 'LDA_02', 'LDA_03', 'LDA_04',
    'kw_min_min', 'kw_max_min', 'kw_avg_min',
    'kw_min_max', 'kw_max_max', 'kw_avg_max',
    'kw_min_avg', 'kw_max_avg', 'kw_avg_avg',
    'n_non_stop_words', 'url',
]
data_frame.drop(cols_to_drop, axis=1, inplace=True)
# Data frame column headers
list(data_frame)
# Keep points whose share count is within +2 standard deviations of the mean (drop extreme outliers).
df = data_frame[data_frame.shares-data_frame.shares.mean()<=(2*data_frame.shares.std())]
df.shape
buzz_factor = df['shares'] / df['timedelta']
popularity = pd.qcut(df['shares'], 4, labels=[
"Unpopular",
"Midly Popular",
"Popular",
"Very Popular"
])
# Add these two statistics to the Data Frame
df.is_copy = False # turn off chain index warning
df['buzz_factor'] = buzz_factor.values
df['popularity'] = popularity.values
buzz = pd.qcut(df['buzz_factor'], 4, labels=["No Buzz","Some Buzz","Buzz","Lots of Buzz"])
df['buzz']=buzz.values
# Compute Target variables and add to data frame
unpopular = df['popularity']== 'Unpopular'
df['unpopular'] = unpopular
no_buzz = df['buzz']=='No Buzz'
df['no_buzz'] = no_buzz
df.shape
# Brief Exploration of the "Popularity" classes: Consider mean number of shares per "popularity" bin
df_popularity = df.pivot_table('shares', index='popularity', aggfunc='mean')
df_popularity_count = df.pivot_table('shares', index='popularity', aggfunc='count')
print df_popularity_count
df_popularity.plot(kind='bar', color=green)
plt.title('Article Popularity')
plt.ylabel('mean shares')
# Brief Exploration of the "buzz" classes: Consider mean number of shares per day for each "buzz" bin
df_buzz = df.pivot_table('buzz_factor', index='buzz', aggfunc='mean')
df_buzz_count = df.pivot_table('buzz_factor', index='buzz', aggfunc='count')
print df_buzz_count
df_buzz.plot(kind='bar', color=green)
plt.title('Article Buzz')
plt.ylabel('buzz factor')
# Isolate non-response features
all_features = df.columns.values
excluded_features = [
'buzz',
'buzz_factor',
'no_buzz',
'popularity',
'shares',
'unpopular'
]
features = [f for f in all_features if f not in excluded_features]
from sklearn import preprocessing
from sklearn.cross_validation import train_test_split
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import confusion_matrix
from sklearn.decomposition import PCA
from sklearn.svm import SVC
import pandas as pd
from sklearn import metrics
from sklearn.metrics import fbeta_score, make_scorer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neighbors import RadiusNeighborsClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.lda import LDA
from sklearn.qda import QDA
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import LinearSVC
from sklearn import cross_validation, metrics
from sklearn.naive_bayes import BernoulliNB
from time import time
# Rank features by importance
feature_selection_model = ExtraTreesClassifier().fit(df[features], df['unpopular'])
feature_importance=feature_selection_model.feature_importances_
importance_matrix=np.array([features,list(feature_importance)]).T
def sortkey(s):
return s[1]
sort=zip(features,list(feature_importance))
# Extract top 10 important features
f = pd.DataFrame(sorted(sort,key=sortkey,reverse=True),columns=['variables','importance'])[:10]
f
features2=f['variables']
#split data into two parts
np.random.seed(0)
x_train, x_test, y_train, y_test = train_test_split(df[features2], df.unpopular, test_size=0.4, random_state=None)
x_train.shape
# Decision Tree accuracy and time elapsed caculation
#t0=time()
#print "DecisionTree"
#dt = DecisionTreeClassifier(min_samples_split=25,random_state=1)
#clf_dt=dt.fit(x_train,y_train)
#y_predicted_dt = clf_dt.predict(x_test)
#t1=time()
# Observe how decision tree performed on its own.
#print(metrics.classification_report(y_test, y_predicted_dt))
#print "time elapsed: ", t1-t0
# Random Forest
# Trainc classifier and time elapsed caculation
t2=time()
print "RandomForest"
rf = RandomForestClassifier(n_estimators=100,n_jobs=1)
clf_rf = rf.fit(x_train,y_train)
y_predicted_rf = clf_rf.predict(x_test)
t3=time()
# See how Random forest performed on its own.
print "Acurracy: ", clf_rf.score(x_test,y_test)
print "time elapsed: ", t3-t2
#KNN
from sklearn.neighbors import KNeighborsClassifier
# Determine K by cross-validation.
x_cv_train, x_cv_test, y_cv_train, y_cv_test = train_test_split(x_train, y_train, test_size=0.3, random_state=None)
# We try K values from 5 to 50 in steps of 5
k=[5,10,15,20,25,30,35,40,45,50]
# Train a model on the training set and use that model to predict on the testing set
predicted_knn=[KNeighborsClassifier(n_neighbors=i).fit(x_cv_train,y_cv_train).predict(x_cv_test) for i in k]
#Compute accuracy on the testing set for each value of k
score_knn=[metrics.accuracy_score(predicted_knn[i],y_cv_test) for i in range(10)]
print score_knn
# Plot accuracy on the test set vs. k
fig=plt.figure(figsize=(8,6))
plt.plot([5,10,15,20,25,30,35,40,45,50], score_knn, 'bo--',label='knn', color=green)
plt.title('Unpopular K-NN')
plt.xlabel('K')
plt.ylabel('score')
# Make predictions based on best model above
y_predicted_knn = KNeighborsClassifier(n_neighbors=6).fit(x_train,y_train).predict(x_test)
# See how KNN did on its own.
# Print and plot a confusion matrix
cm = metrics.confusion_matrix(y_test, y_predicted_knn)
print(cm)
plt.matshow(cm)
plt.title('Confusion matrix')
plt.colorbar()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
# Predict unpopular = True if either RF or KNN predict unpopular = True
y_predicted = y_predicted_knn + y_predicted_rf
#Print and plot confusion matrix
cm = metrics.confusion_matrix(y_test, y_predicted)
print(cm)
plt.matshow(cm)
plt.title('Confusion matrix')
plt.colorbar()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
# Print prediction Accuracy
print 'Prediction Accuracy'
print (cm[0,0]+cm[1,1])/float(cm[0,0] + cm[0,1] + cm[1,0] + cm[1,1])
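# For a fuller picture than raw accuracy, per-class precision/recall can be
# printed with sklearn's built-in report (optional check; `y_test` and
# `y_predicted` are the arrays computed above):
print(metrics.classification_report(y_test, y_predicted))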
# Select features
all_features = df.columns.values
excluded_features = [
'buzz',
'buzz_factor',
'no_buzz',
'popularity',
'shares',
'timedelta',
'unpopular'
]
features1 = [f for f in all_features if f not in excluded_features]
#Rank features by importance
feature_selection_model = ExtraTreesClassifier().fit(df[features1], df['no_buzz'])
feature_importance=feature_selection_model.feature_importances_
importance_matrix=np.array([features1, list(feature_importance)]).T
def sortkey(s):
return s[1]
sort=zip(features1,list(feature_importance))
# Extract top 10 important features
f_b=pd.DataFrame(sorted(sort,key=sortkey,reverse=True),columns=['variables','importance'])[:10]
f_b
features_b=f_b['variables']
#split data into two parts
np.random.seed(0)
x_train, x_test, y_train, y_test = train_test_split(df[features_b], df.no_buzz, test_size=0.4, random_state=None)
x_train.shape
# Decision Tree accuracy and time elapsed caculation
#t0=time()
#print "DecisionTree"
#dt = DecisionTreeClassifier(min_samples_split=25,random_state=1)
#clf_dt=dt.fit(x_train,y_train)
#y_predicted = clf_dt.predict(x_test)
#print(metrics.classification_report(y_test, y_predicted))
#t1=time()
#print "time elapsed: ", t1-t0
#Random Forest
# Train classifier and time elapsed caculation
t2=time()
print "RandomForest"
rf = RandomForestClassifier(n_estimators=100,n_jobs=1)
clf_rf = rf.fit(x_train,y_train)
y_predicted_rf = clf_rf.predict(x_test)
# See how random forest did on its own.
print "Acurracy: ", clf_rf.score(x_test,y_test)
t3=time()
print "time elapsed: ", t3-t2
# KNN
from sklearn.neighbors import KNeighborsClassifier
# Determine K by cross-validation.
x_cv_train, x_cv_test, y_cv_train, y_cv_test = train_test_split(x_train, y_train, test_size=0.3, random_state=None)
# We try K values from 5 to 50 in steps of 5
k=[5,10,15,20,25,30,35,40,45,50]
# Train a model on the training set and use that model to predict on the testing set
predicted_knn=[KNeighborsClassifier(n_neighbors=i).fit(x_cv_train,y_cv_train).predict(x_cv_test) for i in k]
#Compute accuracy on the testing set for each value of k
score_knn=[metrics.accuracy_score(predicted_knn[i],y_cv_test) for i in range(10)]
print score_knn
# Plot accuracy on the test set vs. k
fig=plt.figure(figsize=(8,6))
plt.plot([5,10,15,20,25,30,35,40,45,50], score_knn, 'bo--',label='knn', color=green)
plt.title('No Buzz K-NN')
plt.xlabel('K')
plt.ylabel('score')
# Make predictions based on the best model above
y_predicted_knn = KNeighborsClassifier(n_neighbors=7).fit(x_train,y_train).predict(x_test)
# See how KNN did on its own.
# Print and plot a confusion matrix
cm = metrics.confusion_matrix(y_test, y_predicted_knn)
print(cm)
plt.matshow(cm)
plt.title('Confusion matrix')
plt.colorbar()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
# Predict no_buzz = True if either RF or KNN predict no_buzz = True
y_predicted = y_predicted_knn + y_predicted_rf
# Print and plot confusion matrix
cm = metrics.confusion_matrix(y_test, y_predicted)
print(cm)
plt.matshow(cm)
plt.title('Confusion matrix')
plt.colorbar()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
# Print prediction accuracy
print 'Prediction Accuracy'
print (cm[0,0]+cm[1,1])/float(cm[0,0] + cm[0,1] + cm[1,0] + cm[1,1])
from sklearn.decomposition import PCA
pca = PCA(4)
plot_columns = pca.fit_transform(df[features1])
plt.scatter(x=plot_columns[:,0], y=plot_columns[:,1], c=df['unpopular'], cmap='coolwarm')  # 'c' and 'color' conflict, so map the two classes through a colormap
plt.show()
np.random.seed(0)
x_train, x_test, y_train, y_test = train_test_split(plot_columns, df.buzz_factor, test_size=0.4, random_state=None)
x_train.shape
#linear regression
from sklearn.linear_model import LinearRegression
model = LinearRegression(normalize=True)
model.fit(x_train, y_train)
from sklearn.metrics import mean_squared_error
predictions = model.predict(x_test)
mean_squared_error(predictions, y_test)
# Import the random forest model.
from sklearn.ensemble import RandomForestRegressor
model2 = RandomForestRegressor(n_estimators=60, min_samples_leaf=10, random_state=1)
model2.fit(x_train, y_train)
predictions = model2.predict(x_test)
mean_squared_error(predictions, y_test)
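# Sanity check: compare both models against a trivial baseline that always
# predicts the training-set mean buzz factor (reuses the arrays defined above).
baseline = np.full(len(y_test), y_train.mean())
mean_squared_error(baseline, y_test)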
df[
['shares','n_tokens_title', 'n_tokens_content', 'n_unique_tokens']
].describe()
# Avg number of words in title and popular article content
popular_articles = df.ix[data_frame['shares'] >= 1400]
popular_articles[
['shares','n_tokens_title', 'n_tokens_content','n_unique_tokens']
].describe()
# Mean shares for each article type
type_articles = df.pivot_table('shares', index=[
'is_lifestyle', 'is_entertainment', 'is_business', 'is_social_media', 'is_tech', 'is_world'
], aggfunc=[np.mean])
print type_articles
type_articles.plot(kind='bar', color=red)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.title('Mean shares by article type')
plt.xlabel('Article type')
plt.ylabel('Shares')
# On avg, which day has more shares
day_articles = df.pivot_table('shares', index=[
'is_monday', 'is_tuesday', 'is_wednesday', 'is_thursday', 'is_friday', 'is_saturday', 'is_sunday'
], aggfunc=[np.mean])
print day_articles
day_articles.plot(kind='bar', color=green)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.title('Mean shares by day')
plt.xlabel('Day')
plt.ylabel('Shares')
# Mean shares for tech and not tech channels during and not during weekends
df.pivot_table('shares', index=['is_weekend'], columns=['is_tech'], aggfunc=[np.mean], margins=True)
# Mean tech shares during work week (Monday to Friday)
tech_articles = df.ix[data_frame['is_tech'] == 1]
tech_articles = tech_articles.pivot_table('shares', index=[
'is_monday', 'is_tuesday', 'is_wednesday', 'is_thursday', 'is_friday'
], aggfunc=[np.mean])
print tech_articles
tech_articles.plot(kind='bar', color=blue)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.title('Mean Tech Article Shares by Week Day')
plt.xlabel('Week Day')
plt.ylabel('Shares')
# Explore relationship with some features and number of shares
df.plot(kind='scatter', x='n_tokens_title', y='shares')
df.plot(kind='scatter', x='n_tokens_content', y='shares')
df.plot(kind='scatter', x='n_unique_tokens', y='shares')
json = df.to_json()
print json
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Two videos on storytelling
Step2: Problem 3
Step3: Download data, unzip, and save .csv
Step4: Read .csv file into Pandas Data Frame & save into HDF5 (Hierarchical Data Format)
Step5: Read HDF5 file into Pandas Data Frame
Step6: Remove outliers if applicable
Step7: Added 'buzz_factor' column
Step8: Added 'popularity' column
Step9: Similarly, split buzz factor into four percentile bins
Step10: The real quantity of interest here is, in some sense, the LEAST successful articles. While it is interesting to predict the level of popularity/buzz factor, what we need (at the very least) to be able to do is predict whether an article will be "unpopular" or generate "no buzz." Thus we isolate these two bins.
Step11: Import Sklearn packages.
Step12: Unpopular Articles
Step13: Final Predictions
Step14: Articles with no Buzz
Step15: Final Predictions
Step16: Background analysis/exploration.
Step17: A brief exploration
|
11,417
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pylab
from __future__ import division
pylab.rcParams['figure.figsize'] = (16, 10)
import scipy.io as sio
import numpy as np
import math
import matplotlib.pyplot as plt
data = sio.loadmat('../data/programming/hw1progde.mat')
test_data = data['x_te']
training_data = data['x_tr']
h = 0.01
## 10 different measures, which one is best? training or validation?
def K(x, XI, h, kernel_type='gaussian'):
    """
    Parameters
    ----------
    x: Point location
    XI: Vector of training data
    """
if kernel_type == 'gaussian':
s=0
l=0
for xi in XI:
u = (x-xi)/h
l+=1
e = (1/math.sqrt(2*math.pi))*math.exp(-u*u/2)
s+=e
return s/(l*h)
elif kernel_type == 'epanechnikov':
s=0
l=0
for xi in XI:
u = (x-xi)/h
l+=1
if abs(u)<=1:
s+=(3/4)*(1-u*u)
return s/(l*h)
elif kernel_type == 'histogram':
s=0
l=0
for xi in XI:
u = (x-xi)/h
l+=1
if abs(u)<=1:
s+=0.5
return s/(l*h)
def inrange(x, Y):
    # True when x falls in the half-open bin [Y[0], Y[1])
    return Y[0] <= x < Y[1]
def histogram(x, bandwidth=1):
min_x = np.min(x)
max_x = np.max(x)
range_x = max_x-min_x
    bins = int(math.ceil(range_x/bandwidth))  # cast: range() needs an int
list_of_bins = []
for i in range(bins):
list_of_bins.append([min_x+bandwidth*i, min_x+bandwidth*(i+1)])
counts = np.zeros(bins)
for xi in x:
for lindex, lb in enumerate(list_of_bins):
if inrange(xi, lb):
counts[lindex]+=1
return counts
## Optimum bandwidth calculation
sigma_tr = np.std(training_data)
sigma_te = np.std(test_data)
h = 0.94*sigma_tr*len(training_data)**(-0.2)
print(sigma_tr)
print(sigma_te)
print(h)
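# Optional cross-check of the hand-rolled Gaussian KDE against SciPy's
# implementation (scipy's scalar bw_method is a factor on the sample std,
# so h/sigma_tr reproduces bandwidth h; a rough sanity check only).
from scipy.stats import gaussian_kde
kde = gaussian_kde(training_data.ravel(), bw_method=h/sigma_tr)
print(kde(np.linspace(0.1, 1, 5)))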
H = [0.01,0.07,0.05,0.1,0.5, 0.7]
output_gaussian = []
x_points = np.linspace(0.1,1)
for h in H:
gaussian_r = [K(x,training_data, h, 'gaussian') for x in x_points]
output_gaussian.append(gaussian_r)
f, axarr = plt.subplots(2, 3)
for i, gaussian_r in enumerate(output_gaussian):
if i<3:
axarr[0, i].plot(np.sort(x_points), gaussian_r)
axarr[0, i].set_title('Gaussian, h={}'.format(H[i]))
else:
axarr[1, i-3].plot(np.sort(x_points), gaussian_r)
axarr[1, i-3].set_title('Gaussian, h={}'.format(H[i]))
f.show()
ev_r = []
for h in H:
r = [K(x,training_data, h, 'epanechnikov') for x in x_points]
ev_r.append(r)
f, axarr = plt.subplots(2, 3)
for i, gaussian_r in enumerate(ev_r):
if i<3:
axarr[0, i].plot(np.sort(x_points), gaussian_r)
axarr[0, i].set_title('Ev, h={}'.format(H[i]))
else:
axarr[1, i-3].plot(np.sort(x_points), gaussian_r)
axarr[1, i-3].set_title('Ev, h={}'.format(H[i]))
f.show()
hi_r = []
for h in H:
r = [K(x,training_data, h, 'histogram') for x in x_points]
hi_r.append(r)
f, axarr = plt.subplots(2, 3)
for i, gaussian_r in enumerate(hi_r):
if i<3:
axarr[0, i].plot(np.sort(x_points), gaussian_r)
axarr[0, i].set_title('hist, h={}'.format(H[i]))
else:
axarr[1, i-3].plot(np.sort(x_points), gaussian_r)
axarr[1, i-3].set_title('hist, h={}'.format(H[i]))
f.show()
## Randomly partion test_Data into 19 subsets
import random
N_split = 19
test_data_shuffled = test_data.copy()
random.shuffle(test_data_shuffled)
list_size = int(len(test_data_shuffled)/N_split)
list_of_shuffled_te = [test_data_shuffled[0+list_size*i : list_size*(i+1)] for i in xrange(N_split)]
x_points = np.linspace(0.1,1)
def calc_variance(x_points, list_of_shuffled_te,h,kernel):
f_scores=[]
for index, lte in enumerate(list_of_shuffled_te):
f_scores.append([])
for x in x_points:
            de = K(x, lte, h, kernel)
f_scores[index].append(de)
f_hn = np.array([np.sum(a) for a in f_scores])
f_h = np.mean(f_hn)
q = np.sum((f_hn-f_h)**2)
E=q/N_split
return E
kernel = 'gaussian'
g_h = []
e_h = []
h_h = []
for h in H:
g_h.append(calc_variance(x_points, list_of_shuffled_te,h,kernel))
print g_h
kernel = 'epanechnikov'
for h in H:
e_h.append(calc_variance(x_points, list_of_shuffled_te,h,kernel))
kernel = 'histogram'
for h in H:
h_h.append(calc_variance(x_points, list_of_shuffled_te,h,kernel))
plt.plot(H, g_h, label='gaussian')
plt.plot(H, e_h, label='epanechnikov')
plt.plot(H, h_h, label='histogram')
plt.legend()
import numpy as np
import math
m = [(10,49),(-12,38),(-9,47)]
ee = [(29,19),(32,31),(37,38)]
cs = [(8,9),(30,-28),(-18,-19),(-21,12)]
t = m+ee+cs
x = np.array([i[0] for i in t])
y = np.array([i[1] for i in t])
xm = np.mean(x)
ym = np.mean(y)
xsd = np.std(x, ddof=1)
ysd = np.std(y, ddof=1)
def normalise(d):
return (round((d[0]-xm)/xsd,4), round((d[1]-ym)/ysd,4))
data = (9,18)
data_normal = normalise(data)
normalised_t = [normalise(d) for d in t]
for i in normalised_t:
print '{} & {}'.format(i[0], i[1])
print xm, xsd
print ym, ysd
def distance_calculator(data):
for d in normalised_t:
l1 = round(abs(d[0]-data[0])+abs(d[1]-data[1]),4)
l2 = round(math.sqrt(abs(d[0]-data[0])**2+abs(d[1]-data[1])**2),4)
print ' & {} & {}'.format(l1,l2)
distance_calculator(data_normal)
m+ee+cs
print data_normal
p_rain = (9+5+6+3+7+2+3+1)/80.0
p_norain = 1-p_rain
p_temp_hot = 4/8.0
p_temp_cold = 1-p_temp_hot
p_humidity_high = 4/8.0
p_humidity_low = 1-p_humidity_high
p_sky_clear = 4/8.0
p_sky_cloudy = 1-p_sky_clear
HY = -(p_rain*math.log(p_rain,2)+p_norain*math.log(p_norain,2))
print(HY)
##temperature
p_rain_hot = (9+5+6+3)/40.0
p_norain_hot = 1-p_rain_hot
p_rain_cold = (7+2+3+1)/40.0
p_norain_cold = 1-p_rain_cold
h_y_temp = p_temp_hot * -(p_rain_hot*math.log(p_rain_hot,2)+p_norain_hot*math.log(p_norain_hot,2)) + p_temp_cold *-(p_rain_cold*math.log(p_rain_cold,2)+p_norain_cold*math.log(p_norain_cold,2))
g_temp = HY-h_y_temp
##humiditiy
p_rain_high = (9+5+7+2)/40.0
p_norain_high = 1-p_rain_high
p_rain_low = (6+3+3+1)/40.0
p_norain_low = 1-p_rain_low
h_y_humidity = p_humidity_high * -(p_rain_high*math.log(p_rain_high,2)+p_norain_high*math.log(p_norain_high,2))\
+ p_humidity_low *-(p_rain_low*math.log(p_rain_low,2)+p_norain_low*math.log(p_norain_low,2))
g_humidity = HY-h_y_humidity
##sky
p_rain_cloudy = (9+6+7+3)/40.0
p_norain_cloudy = 1-p_rain_cloudy
p_rain_clear = (5+3+2+1)/40.0
p_norain_clear = 1-p_rain_clear
h_y_sky = p_sky_cloudy * -(p_rain_cloudy*math.log(p_rain_cloudy,2)+p_norain_cloudy*math.log(p_norain_cloudy,2))\
+ p_sky_clear *-(p_rain_clear*math.log(p_rain_clear,2)+p_norain_clear*math.log(p_norain_clear,2))
g_sky = HY-h_y_sky
print(h_y_temp)
print(h_y_sky)
print 'Gain for Temperature: {}\tHumidity:{}\tSky:{}'.format(g_temp, g_humidity, g_sky)
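# The repeated arithmetic above can be captured in two small helpers
# (a refactoring sketch; same base-2 entropy and gain formulas):
def entropy(p):
    return -(p*math.log(p, 2) + (1-p)*math.log(1-p, 2))

def gain(h_parent, branches):
    # branches: list of (P(rain | branch), P(branch)) pairs
    return h_parent - sum(w * entropy(p) for p, w in branches)

# e.g. the information gain of splitting on Sky, matching g_sky above:
print gain(HY, [(p_rain_cloudy, p_sky_cloudy), (p_rain_clear, p_sky_clear)])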
#H[Y|Temperature,Cloudy]
p_rain_cloudy_hot = (9+6)/20.0
p_norain_cloudy_hot = 1-p_rain_cloudy_hot
p_rain_cloudy_cold = (7+3)/20.0
p_norain_cloudy_cold = 1-p_rain_cloudy_cold
p_cloudy_hot = 2/4.0
p_cloudy_cold = 2/4.0
h_cloudy_temp = p_cloudy_hot * -(p_rain_cloudy_hot*math.log(p_rain_cloudy_hot,2)+p_norain_cloudy_hot*math.log(p_norain_cloudy_hot,2))\
+p_cloudy_cold * -(p_rain_cloudy_cold*math.log(p_rain_cloudy_cold,2)+p_norain_cloudy_cold*math.log(p_norain_cloudy_cold,2))
g_cloudy_temp = HY-h_cloudy_temp
#H[Y|Temperature,Cloudy]
p_rain_cloudy_high = (9+7)/20.0
p_norain_cloudy_high = 1-p_rain_cloudy_high
p_rain_cloudy_low = (6+3)/20.0
p_norain_cloudy_low = 1-p_rain_cloudy_low
p_cloudy_high = 2/4.0
p_cloudy_low = 1-p_cloudy_high
h_cloudy_hum = p_cloudy_high * -(p_rain_cloudy_high*math.log(p_rain_cloudy_high,2)+p_norain_cloudy_high*math.log(p_norain_cloudy_high,2))\
               +p_cloudy_low * -(p_rain_cloudy_low*math.log(p_rain_cloudy_low,2)+p_norain_cloudy_low*math.log(p_norain_cloudy_low,2))
g_cloudy_hum = HY-h_cloudy_hum
print 'For Cloudhy Sky: Temperature: {}\t HUmidity:{}'.format(g_cloudy_temp, g_cloudy_hum)
##For clear sky, temperature =hot
p_rain_clear_hot = (5+3)/20.0
p_norain_clear_hot = 1-p_rain_clear_hot
p_rain_clear_cold = (2+1)/20.0
p_norain_clear_cold = 1-p_rain_clear_cold
# So for Clear, hot => It rains
#p_rain_clear_cold
#h_clear_cold = p_rain_clear_cold*()
%pylab inline
from __future__ import division
pylab.rcParams['figure.figsize'] = (16, 10)
import scipy.io as sio
import numpy as np
import math
import matplotlib.pyplot as plt
data = sio.loadmat('../data/programming/hw1boundary.mat')
print(data)
#
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem 5.1
Step2: Gaussian
Step3: Epanechnikov kernel
Step4: Histogram
Step5: Problem 2
Step6: Problem 3
Step7: So the root is on Sky!
Step8: For cloudy sky, we choose humidity as the next node
|
11,418
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
import pandas as pd
CSV_COLUMN_NAMES = ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth', 'Species']
SPECIES = ['Setosa', 'Versicolor', 'Virginica']
train_path = tf.keras.utils.get_file(
"iris_training.csv", "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv")
test_path = tf.keras.utils.get_file(
"iris_test.csv", "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv")
train = pd.read_csv(train_path, names=CSV_COLUMN_NAMES, header=0)
test = pd.read_csv(test_path, names=CSV_COLUMN_NAMES, header=0)
train.head()
train_y = train.pop('Species')
test_y = test.pop('Species')
# The label column has now been removed from the data
train.head()
def input_evaluation_set():
features = {'SepalLength': np.array([6.4, 5.0]),
'SepalWidth': np.array([2.8, 2.3]),
'PetalLength': np.array([5.6, 3.3]),
'PetalWidth': np.array([2.2, 1.0])}
labels = np.array([2, 1])
return features, labels
def input_fn(features, labels, training=True, batch_size=256):
    """An input function for training or evaluating"""
    # Convert the inputs to a Dataset.
dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))
# 如果在训练模式下混淆并重复数据。
if training:
dataset = dataset.shuffle(1000).repeat()
return dataset.batch(batch_size)
# Feature columns describe how to use the input.
my_feature_columns = []
for key in train.keys():
my_feature_columns.append(tf.feature_column.numeric_column(key=key))
# Build a deep neural network with two hidden layers of 30 and 10 nodes.
classifier = tf.estimator.DNNClassifier(
feature_columns=my_feature_columns,
    # Two hidden layers of 30 and 10 nodes respectively.
hidden_units=[30, 10],
    # The model must choose between three classes.
n_classes=3)
# Train the model.
classifier.train(
input_fn=lambda: input_fn(train, train_y, training=True),
steps=5000)
eval_result = classifier.evaluate(
input_fn=lambda: input_fn(test, test_y, training=False))
print('\nTest set accuracy: {accuracy:0.3f}\n'.format(**eval_result))
# Generate predictions from the model
expected = ['Setosa', 'Versicolor', 'Virginica']
predict_x = {
'SepalLength': [5.1, 5.9, 6.9],
'SepalWidth': [3.3, 3.0, 3.1],
'PetalLength': [1.7, 4.2, 5.4],
'PetalWidth': [0.5, 1.5, 2.1],
}
def input_fn(features, batch_size=256):
    """An input function for prediction."""
    # Convert the inputs to a Dataset without labels.
return tf.data.Dataset.from_tensor_slices(dict(features)).batch(batch_size)
predictions = classifier.predict(
input_fn=lambda: input_fn(predict_x))
for pred_dict, expec in zip(predictions, expected):
class_id = pred_dict['class_ids'][0]
probability = pred_dict['probabilities'][class_id]
print('Prediction is "{}" ({:.1f}%), expected "{}"'.format(
SPECIES[class_id], 100 * probability, expec))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Premade Estimators
Step2: The dataset
Step3: Next, download and parse the Iris dataset using Keras and Pandas. Note that separate datasets are kept for training and testing.
Step4: Inspecting the data shows four columns of float features and one int32 label column.
Step5: Split the label column off each dataset; the model will be trained to predict these labels.
Step6: Overview of programming with Estimators
Step8: Your input function may generate the features dictionary and label list any way you like. However, we recommend using TensorFlow's Dataset API, which can parse many kinds of data.
Step9: Define the feature columns
Step10: Feature columns can be much more complex than the example above. You can find more information about feature columns in the guide.
Step11: ## Train, evaluate, and predict
Step12: Note that the input_fn call is wrapped in a lambda to capture the arguments while providing an input function that takes no arguments, as the Estimator expects. The steps argument tells the method to stop training after a given number of steps.
Step14: Unlike the call to the train method, we did not pass a steps argument for evaluation. The input_fn used for evaluation only yields a single epoch of data.
Step15: The predict method returns a Python iterable, yielding a dictionary of prediction results for each example. The following code prints a few predictions and their probabilities:
|
11,419
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from utils import *
import random
from random import shuffle
data = open('dinos.txt', 'r').read()
data= data.lower()
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print('There are %d total characters and %d unique characters in your data.' % (data_size, vocab_size))
char_to_ix = { ch:i for i,ch in enumerate(sorted(chars)) }
ix_to_char = { i:ch for i,ch in enumerate(sorted(chars)) }
print(ix_to_char)
### GRADED FUNCTION: clip
def clip(gradients, maxValue):
'''
Clips the gradients' values between minimum and maximum.
Arguments:
gradients -- a dictionary containing the gradients "dWaa", "dWax", "dWya", "db", "dby"
maxValue -- everything above this number is set to this number, and everything less than -maxValue is set to -maxValue
Returns:
gradients -- a dictionary with the clipped gradients.
'''
dWaa, dWax, dWya, db, dby = gradients['dWaa'], gradients['dWax'], gradients['dWya'], gradients['db'], gradients['dby']
### START CODE HERE ###
# clip to mitigate exploding gradients, loop over [dWax, dWaa, dWya, db, dby]. (≈2 lines)
    for gradient in [dWax, dWaa, dWya, db, dby]:
        np.clip(gradient, -maxValue, maxValue, out=gradient)  # clip each gradient array in place
### END CODE HERE ###
gradients = {"dWaa": dWaa, "dWax": dWax, "dWya": dWya, "db": db, "dby": dby}
return gradients
np.random.seed(3)
dWax = np.random.randn(5,3)*10
dWaa = np.random.randn(5,5)*10
dWya = np.random.randn(2,5)*10
db = np.random.randn(5,1)*10
dby = np.random.randn(2,1)*10
gradients = {"dWax": dWax, "dWaa": dWaa, "dWya": dWya, "db": db, "dby": dby}
gradients = clip(gradients, 10)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
# GRADED FUNCTION: sample
def sample(parameters, char_to_ix, seed):
    """
    Sample a sequence of characters according to a sequence of probability distributions output of the RNN

    Arguments:
    parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b.
    char_to_ix -- python dictionary mapping each character to an index.
    seed -- used for grading purposes. Do not worry about it.

    Returns:
    indices -- a list of length n containing the indices of the sampled characters.
    """
# Retrieve parameters and relevant shapes from "parameters" dictionary
Waa, Wax, Wya, by, b = parameters['Waa'], parameters['Wax'], parameters['Wya'], parameters['by'], parameters['b']
vocab_size = by.shape[0]
n_a = Waa.shape[1]
### START CODE HERE ###
# Step 1: Create the one-hot vector x for the first character (initializing the sequence generation). (≈1 line)
    x = np.zeros((vocab_size, 1))
# Step 1': Initialize a_prev as zeros (≈1 line)
    a_prev = np.zeros((n_a, 1))
# Create an empty list of indices, this is the list which will contain the list of indices of the characters to generate (≈1 line)
indices = []
# Idx is a flag to detect a newline character, we initialize it to -1
idx = -1
# Loop over time-steps t. At each time-step, sample a character from a probability distribution and append
# its index to "indices". We'll stop if we reach 50 characters (which should be very unlikely with a well
# trained model), which helps debugging and prevents entering an infinite loop.
counter = 0
newline_character = char_to_ix['\n']
while (idx != newline_character and counter != 50):
# Step 2: Forward propagate x using the equations (1), (2) and (3)
        a = np.tanh(np.dot(Wax, x) + np.dot(Waa, a_prev) + b)
        z = np.dot(Wya, a) + by
        y = softmax(z)  # softmax is assumed to come from the `utils` module imported above
# for grading purposes
np.random.seed(counter+seed)
# Step 3: Sample the index of a character within the vocabulary from the probability distribution y
        idx = np.random.choice(list(range(vocab_size)), p=y.ravel())

        # Append the index to "indices"
        indices.append(idx)

        # Step 4: Overwrite the input character as the one corresponding to the sampled index.
        x = np.zeros((vocab_size, 1))
        x[idx] = 1

        # Update "a_prev" to be "a"
        a_prev = a
# for grading purposes
seed += 1
counter +=1
### END CODE HERE ###
if (counter == 50):
indices.append(char_to_ix['\n'])
return indices
np.random.seed(2)
n, n_a = 20, 100
a0 = np.random.randn(n_a, 1)
i0 = 1 # first character is ix_to_char[i0]
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
indices = sample(parameters, char_to_ix, 0)
print("Sampling:")
print("list of sampled indices:", indices)
print("list of sampled characters:", [ix_to_char[i] for i in indices])
# GRADED FUNCTION: optimize
def optimize(X, Y, a_prev, parameters, learning_rate = 0.01):
    """
    Execute one step of the optimization to train the model.

    Arguments:
    X -- list of integers, where each integer is a number that maps to a character in the vocabulary.
    Y -- list of integers, exactly the same as X but shifted one index to the left.
    a_prev -- previous hidden state.
    parameters -- python dictionary containing:
                        Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
                        Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
                        Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
                        b -- Bias, numpy array of shape (n_a, 1)
                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
    learning_rate -- learning rate for the model.

    Returns:
    loss -- value of the loss function (cross-entropy)
    gradients -- python dictionary containing:
                        dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
                        dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
                        dWya -- Gradients of hidden-to-output weights, of shape (n_y, n_a)
                        db -- Gradients of bias vector, of shape (n_a, 1)
                        dby -- Gradients of output bias vector, of shape (n_y, 1)
    a[len(X)-1] -- the last hidden state, of shape (n_a, 1)
    """
    ### START CODE HERE ###

    # Forward propagate through time (≈1 line); rnn_forward comes from utils
    loss, cache = rnn_forward(X, Y, a_prev, parameters)

    # Backpropagate through time (≈1 line); rnn_backward comes from utils
    gradients, a = rnn_backward(X, Y, parameters, cache)

    # Clip your gradients between -5 (min) and 5 (max) (≈1 line)
    gradients = clip(gradients, 5)

    # Update parameters (≈1 line); update_parameters comes from utils
    parameters = update_parameters(parameters, gradients, learning_rate)

    ### END CODE HERE ###
return loss, gradients, a[len(X)-1]
np.random.seed(1)
vocab_size, n_a = 27, 100
a_prev = np.random.randn(n_a, 1)
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
X = [12,3,5,11,22,3]
Y = [4,14,11,22,25, 26]
loss, gradients, a_last = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)
print("Loss =", loss)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("np.argmax(gradients[\"dWax\"]) =", np.argmax(gradients["dWax"]))
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
print("a_last[4] =", a_last[4])
# GRADED FUNCTION: model
def model(data, ix_to_char, char_to_ix, num_iterations = 35000, n_a = 50, dino_names = 7, vocab_size = 27):
    """
    Trains the model and generates dinosaur names.

    Arguments:
    data -- text corpus
    ix_to_char -- dictionary that maps the index to a character
    char_to_ix -- dictionary that maps a character to an index
    num_iterations -- number of iterations to train the model for
    n_a -- number of units of the RNN cell
    dino_names -- number of dinosaur names you want to sample at each iteration.
    vocab_size -- number of unique characters found in the text, size of the vocabulary

    Returns:
    parameters -- learned parameters
    """
# Retrieve n_x and n_y from vocab_size
n_x, n_y = vocab_size, vocab_size
# Initialize parameters
parameters = initialize_parameters(n_a, n_x, n_y)
# Initialize loss (this is required because we want to smooth our loss, don't worry about it)
loss = get_initial_loss(vocab_size, dino_names)
# Build list of all dinosaur names (training examples).
with open("dinos.txt") as f:
examples = f.readlines()
examples = [x.lower().strip() for x in examples]
# Shuffle list of all dinosaur names
shuffle(examples)
    # Initialize the hidden state of your RNN
a_prev = np.zeros((n_a, 1))
# Optimization loop
for j in range(num_iterations):
### START CODE HERE ###
        # Use the hint above to define one training example (X,Y) (≈ 2 lines)
        index = j % len(examples)
        X = [None] + [char_to_ix[ch] for ch in examples[index]]  # the leading None stands for the zero input vector
        Y = X[1:] + [char_to_ix["\n"]]

        # Perform one optimization step: Forward-prop -> Backward-prop -> Clip -> Update parameters
        # Choose a learning rate of 0.01
        curr_loss, gradients, a_prev = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)
### END CODE HERE ###
# Use a latency trick to keep the loss smooth. It happens here to accelerate the training.
loss = smooth(loss, curr_loss)
# Every 2000 Iteration, generate "n" characters thanks to sample() to check if the model is learning properly
if j % 2000 == 0:
print('Iteration: %d, Loss: %f' % (j, loss) + '\n')
# The number of dinosaur names to print
seed = 0
for name in range(dino_names):
# Sample indices and print them
sampled_indices = sample(parameters, char_to_ix, seed)
print_sample(sampled_indices, ix_to_char)
seed += 1 # To get the same result for grading purposed, increment the seed by one.
print('\n')
return parameters
parameters = model(data, ix_to_char, char_to_ix)
from __future__ import print_function
from keras.callbacks import LambdaCallback
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking
from keras.layers import LSTM
from keras.utils.data_utils import get_file
from keras.preprocessing.sequence import pad_sequences
from shakespeare_utils import *
import sys
import io
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
model.fit(x, y, batch_size=128, epochs=1, callbacks=[print_callback])
# Run this cell to try with different inputs without having to re-train the model
generate_output()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1 - Problem Statement
Step2: The characters are a-z (26 characters) plus the "\n" (or newline character), which in this assignment plays a role similar to the <EOS> (or "End of sentence") token we had discussed in lecture, only here it indicates the end of the dinosaur name rather than the end of a sentence. In the cell below, we create a python dictionary (i.e., a hash table) to map each character to an index from 0-26. We also create a second python dictionary that maps each index back to the corresponding character character. This will help you figure out what index corresponds to what character in the probability distribution output of the softmax layer. Below, char_to_ix and ix_to_char are the python dictionaries.
Step3: 1.2 - Overview of the model
Step5: Expected output
Step7: Expected output
Step12: Expected output
Step13: Run the following cell, you should observe your model outputting random-looking characters at the first iteration. After a few thousand iterations, your model should learn to generate reasonable-looking names.
Step14: Conclusion
Step15: To save you some time, we have already trained a model for ~1000 epochs on a collection of Shakespearian poems called "The Sonnets".
|
11,420
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
# we can import the CSV data as a numpy rec array
from matplotlib.pylab import csv2rec
trends = csv2rec('trends.csv')
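# A hedged sketch of the requested plot: the actual column names in trends.csv
# are not shown here, so 'week', 'python', and 'r' below are hypothetical field
# names, used only to illustrate plotting from a rec array.
plot(trends['week'], trends['python'], label='python')
plot(trends['week'], trends['r'], label='r')
legend(loc='best')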
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Use the "trends.csv" file and csv2rec() to import the data and reproduce this plot
|
11,421
|
<ASSISTANT_TASK:>
Python Code:
#|export
def module(*flds, **defaults):
"Decorator to create an `nn.Module` using `f` as `forward` method"
pa = [inspect.Parameter(o, inspect.Parameter.POSITIONAL_OR_KEYWORD) for o in flds]
pb = [inspect.Parameter(k, inspect.Parameter.POSITIONAL_OR_KEYWORD, default=v)
for k,v in defaults.items()]
params = pa+pb
all_flds = [*flds,*defaults.keys()]
def _f(f):
class c(nn.Module):
def __init__(self, *args, **kwargs):
super().__init__()
for i,o in enumerate(args): kwargs[all_flds[i]] = o
kwargs = merge(defaults,kwargs)
for k,v in kwargs.items(): setattr(self,k,v)
__repr__ = basic_repr(all_flds)
forward = f
c.__signature__ = inspect.Signature(params)
c.__name__ = c.__qualname__ = f.__name__
c.__doc__ = f.__doc__
return c
return _f
#|export
@module()
def Identity(self, x):
"Do nothing at all"
return x
test_eq(Identity()(1), 1)
#|export
@module('func')
def Lambda(self, x):
"An easy way to create a pytorch layer for a simple `func`"
return self.func(x)
def _add2(x): return x+2
tst = Lambda(_add2)
x = torch.randn(10,20)
test_eq(tst(x), x+2)
tst2 = pickle.loads(pickle.dumps(tst))
test_eq(tst2(x), x+2)
tst
#|export
class PartialLambda(Lambda):
"Layer that applies `partial(func, **kwargs)`"
def __init__(self, func, **kwargs):
super().__init__(partial(func, **kwargs))
self.repr = f'{func.__name__}, {kwargs}'
def forward(self, x): return self.func(x)
def __repr__(self): return f'{self.__class__.__name__}({self.repr})'
def test_func(a,b=2): return a+b
tst = PartialLambda(test_func, b=5)
test_eq(tst(x), x+5)
#|export
@module(full=False)
def Flatten(self, x):
"Flatten `x` to a single dimension, e.g. at end of a model. `full` for rank-1 tensor"
return TensorBase(x.view(-1) if self.full else x.view(x.size(0), -1))
tst = Flatten()
x = torch.randn(10,5,4)
test_eq(tst(x).shape, [10,20])
tst = Flatten(full=True)
test_eq(tst(x).shape, [200])
#|export
@module(tensor_cls=TensorBase)
def ToTensorBase(self, x):
"Remove `tensor_cls` to x"
return self.tensor_cls(x)
ttb = ToTensorBase()
timg = TensorImage(torch.rand(1,3,32,32))
test_eq(type(ttb(timg)), TensorBase)
#|export
class View(Module):
"Reshape `x` to `size`"
def __init__(self, *size): self.size = size
def forward(self, x): return x.view(self.size)
tst = View(10,5,4)
test_eq(tst(x).shape, [10,5,4])
#|export
class ResizeBatch(Module):
"Reshape `x` to `size`, keeping batch dim the same size"
def __init__(self, *size): self.size = size
def forward(self, x): return x.view((x.size(0),) + self.size)
tst = ResizeBatch(5,4)
test_eq(tst(x).shape, [10,5,4])
#|export
@module()
def Debugger(self,x):
"A module to debug inside a model."
set_trace()
return x
#|export
def sigmoid_range(x, low, high):
"Sigmoid function with range `(low, high)`"
return torch.sigmoid(x) * (high - low) + low
test = tensor([-10.,0.,10.])
assert torch.allclose(sigmoid_range(test, -1, 2), tensor([-1.,0.5, 2.]), atol=1e-4, rtol=1e-4)
assert torch.allclose(sigmoid_range(test, -5, -1), tensor([-5.,-3.,-1.]), atol=1e-4, rtol=1e-4)
assert torch.allclose(sigmoid_range(test, 2, 4), tensor([2., 3., 4.]), atol=1e-4, rtol=1e-4)
#|export
@module('low','high')
def SigmoidRange(self, x):
"Sigmoid module with range `(low, high)`"
return sigmoid_range(x, self.low, self.high)
tst = SigmoidRange(-1, 2)
assert torch.allclose(tst(test), tensor([-1.,0.5, 2.]), atol=1e-4, rtol=1e-4)
#|export
class AdaptiveConcatPool1d(Module):
"Layer that concats `AdaptiveAvgPool1d` and `AdaptiveMaxPool1d`"
def __init__(self, size=None):
self.size = size or 1
self.ap = nn.AdaptiveAvgPool1d(self.size)
self.mp = nn.AdaptiveMaxPool1d(self.size)
def forward(self, x): return torch.cat([self.mp(x), self.ap(x)], 1)
#|export
class AdaptiveConcatPool2d(Module):
"Layer that concats `AdaptiveAvgPool2d` and `AdaptiveMaxPool2d`"
def __init__(self, size=None):
self.size = size or 1
self.ap = nn.AdaptiveAvgPool2d(self.size)
self.mp = nn.AdaptiveMaxPool2d(self.size)
def forward(self, x): return torch.cat([self.mp(x), self.ap(x)], 1)
tst = AdaptiveConcatPool2d()
x = torch.randn(10,5,4,4)
test_eq(tst(x).shape, [10,10,1,1])
max1 = torch.max(x, dim=2, keepdim=True)[0]
maxp = torch.max(max1, dim=3, keepdim=True)[0]
test_eq(tst(x)[:,:5], maxp)
test_eq(tst(x)[:,5:], x.mean(dim=[2,3], keepdim=True))
tst = AdaptiveConcatPool2d(2)
test_eq(tst(x).shape, [10,10,2,2])
#|export
class PoolType: Avg,Max,Cat = 'Avg','Max','Cat'
#|export
def adaptive_pool(pool_type):
return nn.AdaptiveAvgPool2d if pool_type=='Avg' else nn.AdaptiveMaxPool2d if pool_type=='Max' else AdaptiveConcatPool2d
#|export
class PoolFlatten(nn.Sequential):
"Combine `nn.AdaptiveAvgPool2d` and `Flatten`."
def __init__(self, pool_type=PoolType.Avg): super().__init__(adaptive_pool(pool_type)(1), Flatten())
tst = PoolFlatten()
test_eq(tst(x).shape, [10,5])
test_eq(tst(x), x.mean(dim=[2,3]))
#|export
NormType = Enum('NormType', 'Batch BatchZero Weight Spectral Instance InstanceZero')
#|export
def _get_norm(prefix, nf, ndim=2, zero=False, **kwargs):
"Norm layer with `nf` features and `ndim` initialized depending on `norm_type`."
assert 1 <= ndim <= 3
bn = getattr(nn, f"{prefix}{ndim}d")(nf, **kwargs)
if bn.affine:
bn.bias.data.fill_(1e-3)
bn.weight.data.fill_(0. if zero else 1.)
return bn
#|export
@delegates(nn.BatchNorm2d)
def BatchNorm(nf, ndim=2, norm_type=NormType.Batch, **kwargs):
"BatchNorm layer with `nf` features and `ndim` initialized depending on `norm_type`."
return _get_norm('BatchNorm', nf, ndim, zero=norm_type==NormType.BatchZero, **kwargs)
#|export
@delegates(nn.InstanceNorm2d)
def InstanceNorm(nf, ndim=2, norm_type=NormType.Instance, affine=True, **kwargs):
"InstanceNorm layer with `nf` features and `ndim` initialized depending on `norm_type`."
return _get_norm('InstanceNorm', nf, ndim, zero=norm_type==NormType.InstanceZero, affine=affine, **kwargs)
tst = BatchNorm(15)
assert isinstance(tst, nn.BatchNorm2d)
test_eq(tst.weight, torch.ones(15))
tst = BatchNorm(15, norm_type=NormType.BatchZero)
test_eq(tst.weight, torch.zeros(15))
tst = BatchNorm(15, ndim=1)
assert isinstance(tst, nn.BatchNorm1d)
tst = BatchNorm(15, ndim=3)
assert isinstance(tst, nn.BatchNorm3d)
tst = InstanceNorm(15)
assert isinstance(tst, nn.InstanceNorm2d)
test_eq(tst.weight, torch.ones(15))
tst = InstanceNorm(15, norm_type=NormType.InstanceZero)
test_eq(tst.weight, torch.zeros(15))
tst = InstanceNorm(15, ndim=1)
assert isinstance(tst, nn.InstanceNorm1d)
tst = InstanceNorm(15, ndim=3)
assert isinstance(tst, nn.InstanceNorm3d)
test_eq(BatchNorm(15, affine=False).weight, None)
test_eq(InstanceNorm(15, affine=False).weight, None)
#|export
class BatchNorm1dFlat(nn.BatchNorm1d):
"`nn.BatchNorm1d`, but first flattens leading dimensions"
def forward(self, x):
if x.dim()==2: return super().forward(x)
*f,l = x.shape
x = x.contiguous().view(-1,l)
return super().forward(x).view(*f,l)
tst = BatchNorm1dFlat(15)
x = torch.randn(32, 64, 15)
y = tst(x)
mean = x.mean(dim=[0,1])
test_close(tst.running_mean, 0*0.9 + mean*0.1)
var = (x-mean).pow(2).mean(dim=[0,1])
test_close(tst.running_var, 1*0.9 + var*0.1, eps=1e-4)
test_close(y, (x-mean)/torch.sqrt(var+1e-5) * tst.weight + tst.bias, eps=1e-4)
#|export
class LinBnDrop(nn.Sequential):
"Module grouping `BatchNorm1d`, `Dropout` and `Linear` layers"
def __init__(self, n_in, n_out, bn=True, p=0., act=None, lin_first=False):
layers = [BatchNorm(n_out if lin_first else n_in, ndim=1)] if bn else []
if p != 0: layers.append(nn.Dropout(p))
lin = [nn.Linear(n_in, n_out, bias=not bn)]
if act is not None: lin.append(act)
layers = lin+layers if lin_first else layers+lin
super().__init__(*layers)
tst = LinBnDrop(10, 20)
mods = list(tst.children())
test_eq(len(mods), 2)
assert isinstance(mods[0], nn.BatchNorm1d)
assert isinstance(mods[1], nn.Linear)
tst = LinBnDrop(10, 20, p=0.1)
mods = list(tst.children())
test_eq(len(mods), 3)
assert isinstance(mods[0], nn.BatchNorm1d)
assert isinstance(mods[1], nn.Dropout)
assert isinstance(mods[2], nn.Linear)
tst = LinBnDrop(10, 20, act=nn.ReLU(), lin_first=True)
mods = list(tst.children())
test_eq(len(mods), 3)
assert isinstance(mods[0], nn.Linear)
assert isinstance(mods[1], nn.ReLU)
assert isinstance(mods[2], nn.BatchNorm1d)
tst = LinBnDrop(10, 20, bn=False)
mods = list(tst.children())
test_eq(len(mods), 1)
assert isinstance(mods[0], nn.Linear)
#|export
def sigmoid(input, eps=1e-7):
"Same as `torch.sigmoid`, plus clamping to `(eps,1-eps)"
return input.sigmoid().clamp(eps,1-eps)
#|export
def sigmoid_(input, eps=1e-7):
"Same as `torch.sigmoid_`, plus clamping to `(eps,1-eps)"
return input.sigmoid_().clamp_(eps,1-eps)
#|export
from torch.nn.init import kaiming_uniform_,uniform_,xavier_uniform_,normal_
#|export
def vleaky_relu(input, inplace=True):
"`F.leaky_relu` with 0.3 slope"
return F.leaky_relu(input, negative_slope=0.3, inplace=inplace)
#|export
for o in F.relu,nn.ReLU,F.relu6,nn.ReLU6,F.leaky_relu,nn.LeakyReLU:
o.__default_init__ = kaiming_uniform_
#|export
for o in F.sigmoid,nn.Sigmoid,F.tanh,nn.Tanh,sigmoid,sigmoid_:
o.__default_init__ = xavier_uniform_
#|export
def init_default(m, func=nn.init.kaiming_normal_):
"Initialize `m` weights with `func` and set `bias` to 0."
if func and hasattr(m, 'weight'): func(m.weight)
with torch.no_grad():
if getattr(m, 'bias', None) is not None: m.bias.fill_(0.)
return m
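#A quick illustrative check: weights are re-initialized and the bias is zeroed
tst = init_default(nn.Linear(4,5))
test_eq(tst.bias, torch.zeros(5))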
#|export
def init_linear(m, act_func=None, init='auto', bias_std=0.01):
if getattr(m,'bias',None) is not None and bias_std is not None:
if bias_std != 0: normal_(m.bias, 0, bias_std)
else: m.bias.data.zero_()
if init=='auto':
if act_func in (F.relu_,F.leaky_relu_): init = kaiming_uniform_
else: init = getattr(act_func.__class__, '__default_init__', None)
if init is None: init = getattr(act_func, '__default_init__', None)
if init is not None: init(m.weight)
#|export
def _conv_func(ndim=2, transpose=False):
"Return the proper conv `ndim` function, potentially `transposed`."
assert 1 <= ndim <=3
return getattr(nn, f'Conv{"Transpose" if transpose else ""}{ndim}d')
#|hide
test_eq(_conv_func(ndim=1),torch.nn.modules.conv.Conv1d)
test_eq(_conv_func(ndim=2),torch.nn.modules.conv.Conv2d)
test_eq(_conv_func(ndim=3),torch.nn.modules.conv.Conv3d)
test_eq(_conv_func(ndim=1, transpose=True),torch.nn.modules.conv.ConvTranspose1d)
test_eq(_conv_func(ndim=2, transpose=True),torch.nn.modules.conv.ConvTranspose2d)
test_eq(_conv_func(ndim=3, transpose=True),torch.nn.modules.conv.ConvTranspose3d)
#|export
defaults.activation=nn.ReLU
#|export
class ConvLayer(nn.Sequential):
"Create a sequence of convolutional (`ni` to `nf`), ReLU (if `use_activ`) and `norm_type` layers."
@delegates(nn.Conv2d)
def __init__(self, ni, nf, ks=3, stride=1, padding=None, bias=None, ndim=2, norm_type=NormType.Batch, bn_1st=True,
act_cls=defaults.activation, transpose=False, init='auto', xtra=None, bias_std=0.01, **kwargs):
if padding is None: padding = ((ks-1)//2 if not transpose else 0)
bn = norm_type in (NormType.Batch, NormType.BatchZero)
inn = norm_type in (NormType.Instance, NormType.InstanceZero)
if bias is None: bias = not (bn or inn)
conv_func = _conv_func(ndim, transpose=transpose)
conv = conv_func(ni, nf, kernel_size=ks, bias=bias, stride=stride, padding=padding, **kwargs)
act = None if act_cls is None else act_cls()
init_linear(conv, act, init=init, bias_std=bias_std)
if norm_type==NormType.Weight: conv = weight_norm(conv)
elif norm_type==NormType.Spectral: conv = spectral_norm(conv)
layers = [conv]
act_bn = []
if act is not None: act_bn.append(act)
if bn: act_bn.append(BatchNorm(nf, norm_type=norm_type, ndim=ndim))
if inn: act_bn.append(InstanceNorm(nf, norm_type=norm_type, ndim=ndim))
if bn_1st: act_bn.reverse()
layers += act_bn
if xtra: layers.append(xtra)
super().__init__(*layers)
tst = ConvLayer(16, 32)
mods = list(tst.children())
test_eq(len(mods), 3)
test_eq(mods[1].weight, torch.ones(32))
test_eq(mods[0].padding, (1,1))
x = torch.randn(64, 16, 8, 8)#.cuda()
#Padding is selected to make the shape the same if stride=1
test_eq(tst(x).shape, [64,32,8,8])
#Padding is selected to make the shape half if stride=2
tst = ConvLayer(16, 32, stride=2)
test_eq(tst(x).shape, [64,32,4,4])
#But you can always pass your own padding if you want
tst = ConvLayer(16, 32, padding=0)
test_eq(tst(x).shape, [64,32,6,6])
#No bias by default for Batch NormType
assert mods[0].bias is None
#But can be overridden with `bias=True`
tst = ConvLayer(16, 32, bias=True)
assert first(tst.children()).bias is not None
#For no norm, or spectral/weight, bias is True by default
for t in [None, NormType.Spectral, NormType.Weight]:
tst = ConvLayer(16, 32, norm_type=t)
assert first(tst.children()).bias is not None
#Various n_dim/tranpose
tst = ConvLayer(16, 32, ndim=3)
assert isinstance(list(tst.children())[0], nn.Conv3d)
tst = ConvLayer(16, 32, ndim=1, transpose=True)
assert isinstance(list(tst.children())[0], nn.ConvTranspose1d)
#No activation/leaky
tst = ConvLayer(16, 32, ndim=3, act_cls=None)
mods = list(tst.children())
test_eq(len(mods), 2)
tst = ConvLayer(16, 32, ndim=3, act_cls=partial(nn.LeakyReLU, negative_slope=0.1))
mods = list(tst.children())
test_eq(len(mods), 3)
assert isinstance(mods[2], nn.LeakyReLU)
# #export
# def linear(in_features, out_features, bias=True, act_cls=None, init='auto'):
# "Linear layer followed by optional activation, with optional auto-init"
# res = nn.Linear(in_features, out_features, bias=bias)
# if act_cls: act_cls = act_cls()
# init_linear(res, act_cls, init=init)
# if act_cls: res = nn.Sequential(res, act_cls)
# return res
# #export
# @delegates(ConvLayer)
# def conv1d(ni, nf, ks, stride=1, ndim=1, norm_type=None, **kwargs):
# "Convolutional layer followed by optional activation, with optional auto-init"
# return ConvLayer(ni, nf, ks, stride=stride, ndim=ndim, norm_type=norm_type, **kwargs)
# #export
# @delegates(ConvLayer)
# def conv2d(ni, nf, ks, stride=1, ndim=2, norm_type=None, **kwargs):
# "Convolutional layer followed by optional activation, with optional auto-init"
# return ConvLayer(ni, nf, ks, stride=stride, ndim=ndim, norm_type=norm_type, **kwargs)
# #export
# @delegates(ConvLayer)
# def conv3d(ni, nf, ks, stride=1, ndim=3, norm_type=None, **kwargs):
# "Convolutional layer followed by optional activation, with optional auto-init"
# return ConvLayer(ni, nf, ks, stride=stride, ndim=ndim, norm_type=norm_type, **kwargs)
#|export
def AdaptiveAvgPool(sz=1, ndim=2):
"nn.AdaptiveAvgPool layer for `ndim`"
assert 1 <= ndim <= 3
return getattr(nn, f"AdaptiveAvgPool{ndim}d")(sz)
#|export
def MaxPool(ks=2, stride=None, padding=0, ndim=2, ceil_mode=False):
    "nn.MaxPool layer for `ndim`"
    assert 1 <= ndim <= 3
    return getattr(nn, f"MaxPool{ndim}d")(ks, stride=stride, padding=padding, ceil_mode=ceil_mode)
#|export
def AvgPool(ks=2, stride=None, padding=0, ndim=2, ceil_mode=False):
"nn.AvgPool layer for `ndim`"
assert 1 <= ndim <= 3
return getattr(nn, f"AvgPool{ndim}d")(ks, stride=stride, padding=padding, ceil_mode=ceil_mode)
#|export
def trunc_normal_(x, mean=0., std=1.):
"Truncated normal initialization (approximation)"
# From https://discuss.pytorch.org/t/implementing-truncated-normal-initializer/4778/12
return x.normal_().fmod_(2).mul_(std).add_(mean)
#|export
class Embedding(nn.Embedding):
"Embedding layer with truncated normal initialization"
def __init__(self, ni, nf, std=0.01):
super().__init__(ni, nf)
trunc_normal_(self.weight.data, std=std)
std = 0.02
tst = Embedding(10, 30, std)
assert tst.weight.min() > -2*std
assert tst.weight.max() < 2*std
test_close(tst.weight.mean(), 0, 1e-2)
test_close(tst.weight.std(), std, 0.1)
#|export
class SelfAttention(Module):
"Self attention layer for `n_channels`."
def __init__(self, n_channels):
self.query,self.key,self.value = [self._conv(n_channels, c) for c in (n_channels//8,n_channels//8,n_channels)]
self.gamma = nn.Parameter(tensor([0.]))
def _conv(self,n_in,n_out):
return ConvLayer(n_in, n_out, ks=1, ndim=1, norm_type=NormType.Spectral, act_cls=None, bias=False)
def forward(self, x):
#Notation from the paper.
size = x.size()
x = x.view(*size[:2],-1)
f,g,h = self.query(x),self.key(x),self.value(x)
beta = F.softmax(torch.bmm(f.transpose(1,2), g), dim=1)
o = self.gamma * torch.bmm(h, beta) + x
return o.view(*size).contiguous()
tst = SelfAttention(16)
x = torch.randn(32, 16, 8, 8)
test_eq(tst(x),x)
tst.gamma.data.fill_(1.)
y = tst(x)
test_eq(y.shape, [32,16,8,8])
q,k,v = tst.query[0].weight.data,tst.key[0].weight.data,tst.value[0].weight.data
test_eq([q.shape, k.shape, v.shape], [[2, 16, 1], [2, 16, 1], [16, 16, 1]])
f,g,h = map(lambda m: x.view(32, 16, 64).transpose(1,2) @ m.squeeze().t(), [q,k,v])
test_eq([f.shape, g.shape, h.shape], [[32,64,2], [32,64,2], [32,64,16]])
beta = F.softmax(torch.bmm(f, g.transpose(1,2)), dim=1)
test_eq(beta.shape, [32, 64, 64])
out = torch.bmm(h.transpose(1,2), beta)
test_eq(out.shape, [32, 16, 64])
test_close(y, x + out.view(32, 16, 8, 8), eps=1e-4)
#|export
class PooledSelfAttention2d(Module):
"Pooled self attention layer for 2d."
def __init__(self, n_channels):
self.n_channels = n_channels
self.query,self.key,self.value = [self._conv(n_channels, c) for c in (n_channels//8,n_channels//8,n_channels//2)]
self.out = self._conv(n_channels//2, n_channels)
self.gamma = nn.Parameter(tensor([0.]))
def _conv(self,n_in,n_out):
return ConvLayer(n_in, n_out, ks=1, norm_type=NormType.Spectral, act_cls=None, bias=False)
def forward(self, x):
n_ftrs = x.shape[2]*x.shape[3]
f = self.query(x).view(-1, self.n_channels//8, n_ftrs)
g = F.max_pool2d(self.key(x), [2,2]).view(-1, self.n_channels//8, n_ftrs//4)
h = F.max_pool2d(self.value(x), [2,2]).view(-1, self.n_channels//2, n_ftrs//4)
beta = F.softmax(torch.bmm(f.transpose(1, 2), g), -1)
o = self.out(torch.bmm(h, beta.transpose(1,2)).view(-1, self.n_channels//2, x.shape[2], x.shape[3]))
return self.gamma * o + x
#|export
def _conv1d_spect(ni:int, no:int, ks:int=1, stride:int=1, padding:int=0, bias:bool=False):
"Create and initialize a `nn.Conv1d` layer with spectral normalization."
conv = nn.Conv1d(ni, no, ks, stride=stride, padding=padding, bias=bias)
nn.init.kaiming_normal_(conv.weight)
if bias: conv.bias.data.zero_()
return spectral_norm(conv)
#|export
class SimpleSelfAttention(Module):
def __init__(self, n_in:int, ks=1, sym=False):
self.sym,self.n_in = sym,n_in
self.conv = _conv1d_spect(n_in, n_in, ks, padding=ks//2, bias=False)
self.gamma = nn.Parameter(tensor([0.]))
def forward(self,x):
if self.sym:
c = self.conv.weight.view(self.n_in,self.n_in)
c = (c + c.t())/2
self.conv.weight = c.view(self.n_in,self.n_in,1)
size = x.size()
x = x.view(*size[:2],-1)
convx = self.conv(x)
xxT = torch.bmm(x,x.permute(0,2,1).contiguous())
o = torch.bmm(xxT, convx)
o = self.gamma * o + x
return o.view(*size).contiguous()
#|export
def icnr_init(x, scale=2, init=nn.init.kaiming_normal_):
"ICNR init of `x`, with `scale` and `init` function"
ni,nf,h,w = x.shape
ni2 = int(ni/(scale**2))
k = init(x.new_zeros([ni2,nf,h,w])).transpose(0, 1)
k = k.contiguous().view(ni2, nf, -1)
k = k.repeat(1, 1, scale**2)
return k.contiguous().view([nf,ni,h,w]).transpose(0, 1)
tst = torch.randn(16*4, 32, 1, 1)
tst = icnr_init(tst)
for i in range(0,16*4,4):
test_eq(tst[i],tst[i+1])
test_eq(tst[i],tst[i+2])
test_eq(tst[i],tst[i+3])
#|export
class PixelShuffle_ICNR(nn.Sequential):
"Upsample by `scale` from `ni` filters to `nf` (default `ni`), using `nn.PixelShuffle`."
def __init__(self, ni, nf=None, scale=2, blur=False, norm_type=NormType.Weight, act_cls=defaults.activation):
super().__init__()
nf = ifnone(nf, ni)
layers = [ConvLayer(ni, nf*(scale**2), ks=1, norm_type=norm_type, act_cls=act_cls, bias_std=0),
nn.PixelShuffle(scale)]
if norm_type == NormType.Weight:
layers[0][0].weight_v.data.copy_(icnr_init(layers[0][0].weight_v.data))
layers[0][0].weight_g.data.copy_(((layers[0][0].weight_v.data**2).sum(dim=[1,2,3])**0.5)[:,None,None,None])
else:
layers[0][0].weight.data.copy_(icnr_init(layers[0][0].weight.data))
if blur: layers += [nn.ReplicationPad2d((1,0,1,0)), nn.AvgPool2d(2, stride=1)]
super().__init__(*layers)
psfl = PixelShuffle_ICNR(16)
x = torch.randn(64, 16, 8, 8)
y = psfl(x)
test_eq(y.shape, [64, 16, 16, 16])
#ICNR init makes every 2x2 window (stride 2) have the same elements
for i in range(0,16,2):
for j in range(0,16,2):
test_eq(y[:,:,i,j],y[:,:,i+1,j])
test_eq(y[:,:,i,j],y[:,:,i ,j+1])
test_eq(y[:,:,i,j],y[:,:,i+1,j+1])
psfl = PixelShuffle_ICNR(16, norm_type=None)
x = torch.randn(64, 16, 8, 8)
y = psfl(x)
test_eq(y.shape, [64, 16, 16, 16])
#ICNR init makes every 2x2 window (stride 2) have the same elements
for i in range(0,16,2):
for j in range(0,16,2):
test_eq(y[:,:,i,j],y[:,:,i+1,j])
test_eq(y[:,:,i,j],y[:,:,i ,j+1])
test_eq(y[:,:,i,j],y[:,:,i+1,j+1])
psfl = PixelShuffle_ICNR(16, norm_type=NormType.Spectral)
x = torch.randn(64, 16, 8, 8)
y = psfl(x)
test_eq(y.shape, [64, 16, 16, 16])
#ICNR init makes every 2x2 window (stride 2) have the same elements
for i in range(0,16,2):
for j in range(0,16,2):
test_eq(y[:,:,i,j],y[:,:,i+1,j])
test_eq(y[:,:,i,j],y[:,:,i ,j+1])
test_eq(y[:,:,i,j],y[:,:,i+1,j+1])
#|export
def sequential(*args):
"Create an `nn.Sequential`, wrapping items with `Lambda` if needed"
if len(args) != 1 or not isinstance(args[0], OrderedDict):
args = list(args)
for i,o in enumerate(args):
if not isinstance(o,nn.Module): args[i] = Lambda(o)
return nn.Sequential(*args)
#|export
class SequentialEx(Module):
"Like `nn.Sequential`, but with ModuleList semantics, and can access module input"
def __init__(self, *layers): self.layers = nn.ModuleList(layers)
def forward(self, x):
res = x
for l in self.layers:
res.orig = x
nres = l(res)
# We have to remove res.orig to avoid hanging refs and therefore memory leaks
res.orig, nres.orig = None, None
res = nres
return res
def __getitem__(self,i): return self.layers[i]
def append(self,l): return self.layers.append(l)
def extend(self,l): return self.layers.extend(l)
def insert(self,i,l): return self.layers.insert(i,l)
#|export
class MergeLayer(Module):
"Merge a shortcut with the result of the module by adding them or concatenating them if `dense=True`."
def __init__(self, dense:bool=False): self.dense=dense
def forward(self, x): return torch.cat([x,x.orig], dim=1) if self.dense else (x+x.orig)
res_block = SequentialEx(ConvLayer(16, 16), ConvLayer(16,16))
res_block.append(MergeLayer()) # just to test append - normally it would be in init params
x = torch.randn(32, 16, 8, 8)
y = res_block(x)
test_eq(y.shape, [32, 16, 8, 8])
test_eq(y, x + res_block[1](res_block[0](x)))
x = TensorBase(torch.randn(32, 16, 8, 8))
y = res_block(x)
test_is(y.orig, None)
#|export
class Cat(nn.ModuleList):
"Concatenate layers outputs over a given dim"
def __init__(self, layers, dim=1):
self.dim=dim
super().__init__(layers)
def forward(self, x): return torch.cat([l(x) for l in self], dim=self.dim)
layers = [ConvLayer(2,4), ConvLayer(2,4), ConvLayer(2,4)]
x = torch.rand(1,2,8,8)
cat = Cat(layers)
test_eq(cat(x).shape, [1,12,8,8])
test_eq(cat(x), torch.cat([l(x) for l in layers], dim=1))
#|export
class SimpleCNN(nn.Sequential):
"Create a simple CNN with `filters`."
def __init__(self, filters, kernel_szs=None, strides=None, bn=True):
nl = len(filters)-1
kernel_szs = ifnone(kernel_szs, [3]*nl)
strides = ifnone(strides , [2]*nl)
layers = [ConvLayer(filters[i], filters[i+1], kernel_szs[i], stride=strides[i],
norm_type=(NormType.Batch if bn and i<nl-1 else None)) for i in range(nl)]
layers.append(PoolFlatten())
super().__init__(*layers)
tst = SimpleCNN([8,16,32])
mods = list(tst.children())
test_eq(len(mods), 3)
test_eq([[m[0].in_channels, m[0].out_channels] for m in mods[:2]], [[8,16], [16,32]])
tst = SimpleCNN([8,16,32], kernel_szs=[1,3])
mods = list(tst.children())
test_eq([m[0].kernel_size for m in mods[:2]], [(1,1), (3,3)])
tst = SimpleCNN([8,16,32], strides=[1,2])
mods = list(tst.children())
test_eq([m[0].stride for m in mods[:2]], [(1,1),(2,2)])
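# Output-shape check (a sketch): two stride-2 convs followed by PoolFlatten turn a (4,8,32,32) batch into flat vectors
x = torch.randn(4, 8, 32, 32)
test_eq(SimpleCNN([8,16,32])(x).shape, [4, 32])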
#|export
class ProdLayer(Module):
"Merge a shortcut with the result of the module by multiplying them."
def forward(self, x): return x * x.orig
#|export
inplace_relu = partial(nn.ReLU, inplace=True)
#|export
def SEModule(ch, reduction, act_cls=defaults.activation):
nf = math.ceil(ch//reduction/8)*8
return SequentialEx(nn.AdaptiveAvgPool2d(1),
ConvLayer(ch, nf, ks=1, norm_type=None, act_cls=act_cls),
ConvLayer(nf, ch, ks=1, norm_type=None, act_cls=nn.Sigmoid),
ProdLayer())
#|export
class ResBlock(Module):
"Resnet block from `ni` to `nh` with `stride`"
@delegates(ConvLayer.__init__)
def __init__(self, expansion, ni, nf, stride=1, groups=1, reduction=None, nh1=None, nh2=None, dw=False, g2=1,
sa=False, sym=False, norm_type=NormType.Batch, act_cls=defaults.activation, ndim=2, ks=3,
pool=AvgPool, pool_first=True, **kwargs):
norm2 = (NormType.BatchZero if norm_type==NormType.Batch else
NormType.InstanceZero if norm_type==NormType.Instance else norm_type)
if nh2 is None: nh2 = nf
if nh1 is None: nh1 = nh2
nf,ni = nf*expansion,ni*expansion
k0 = dict(norm_type=norm_type, act_cls=act_cls, ndim=ndim, **kwargs)
k1 = dict(norm_type=norm2, act_cls=None, ndim=ndim, **kwargs)
convpath = [ConvLayer(ni, nh2, ks, stride=stride, groups=ni if dw else groups, **k0),
ConvLayer(nh2, nf, ks, groups=g2, **k1)
] if expansion == 1 else [
ConvLayer(ni, nh1, 1, **k0),
ConvLayer(nh1, nh2, ks, stride=stride, groups=nh1 if dw else groups, **k0),
ConvLayer(nh2, nf, 1, groups=g2, **k1)]
if reduction: convpath.append(SEModule(nf, reduction=reduction, act_cls=act_cls))
if sa: convpath.append(SimpleSelfAttention(nf,ks=1,sym=sym))
self.convpath = nn.Sequential(*convpath)
idpath = []
if ni!=nf: idpath.append(ConvLayer(ni, nf, 1, act_cls=None, ndim=ndim, **kwargs))
if stride!=1: idpath.insert((1,0)[pool_first], pool(stride, ndim=ndim, ceil_mode=True))
self.idpath = nn.Sequential(*idpath)
self.act = defaults.activation(inplace=True) if act_cls is defaults.activation else act_cls()
def forward(self, x): return self.act(self.convpath(x) + self.idpath(x))
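# Shape checks (a sketch): a basic block with stride 2 halves the spatial size; a bottleneck block keeps nf*expansion channels
x = torch.randn(4, 8, 32, 32)
test_eq(ResBlock(1, 8, 16, stride=2)(x).shape, [4, 16, 16, 16])
x = torch.randn(4, 32, 32, 32)
test_eq(ResBlock(4, 8, 8)(x).shape, [4, 32, 32, 32])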
#|export
def SEBlock(expansion, ni, nf, groups=1, reduction=16, stride=1, **kwargs):
return ResBlock(expansion, ni, nf, stride=stride, groups=groups, reduction=reduction, nh1=nf*2, nh2=nf*expansion, **kwargs)
#|export
def SEResNeXtBlock(expansion, ni, nf, groups=32, reduction=16, stride=1, base_width=4, **kwargs):
w = math.floor(nf * (base_width / 64)) * groups
return ResBlock(expansion, ni, nf, stride=stride, groups=groups, reduction=reduction, nh2=w, **kwargs)
#|export
def SeparableBlock(expansion, ni, nf, reduction=16, stride=1, base_width=4, **kwargs):
return ResBlock(expansion, ni, nf, stride=stride, reduction=reduction, nh2=nf*2, dw=True, **kwargs)
#|export
def _stack_tups(tuples, stack_dim=1):
"Stack tuple of tensors along `stack_dim`"
return tuple(torch.stack([t[i] for t in tuples], dim=stack_dim) for i in range_of(tuples[0]))
#|export
class TimeDistributed(Module):
"Applies `module` over `tdim` identically for each step, use `low_mem` to compute one at a time."
def __init__(self, module, low_mem=False, tdim=1):
store_attr()
def forward(self, *tensors, **kwargs):
"input x with shape:(bs,seq_len,channels,width,height)"
if self.low_mem or self.tdim!=1:
return self.low_mem_forward(*tensors, **kwargs)
else:
#only support tdim=1
inp_shape = tensors[0].shape
bs, seq_len = inp_shape[0], inp_shape[1]
out = self.module(*[x.view(bs*seq_len, *x.shape[2:]) for x in tensors], **kwargs)
return self.format_output(out, bs, seq_len)
def low_mem_forward(self, *tensors, **kwargs):
"input x with shape:(bs,seq_len,channels,width,height)"
seq_len = tensors[0].shape[self.tdim]
args_split = [torch.unbind(x, dim=self.tdim) for x in tensors]
out = []
for i in range(seq_len):
out.append(self.module(*[args[i] for args in args_split], **kwargs))
if isinstance(out[0], tuple):
return _stack_tups(out, stack_dim=self.tdim)
return torch.stack(out, dim=self.tdim)
def format_output(self, out, bs, seq_len):
"unstack from batchsize outputs"
if isinstance(out, tuple):
return tuple(out_i.view(bs, seq_len, *out_i.shape[1:]) for out_i in out)
return out.view(bs, seq_len,*out.shape[1:])
def __repr__(self):
return f'TimeDistributed({self.module})'
bs, seq_len = 2, 5
x, y = torch.rand(bs,seq_len,3,2,2), torch.rand(bs,seq_len,3,2,2)
tconv = TimeDistributed(nn.Conv2d(3,4,1))
test_eq(tconv(x).shape, (2,5,4,2,2))
tconv.low_mem=True
test_eq(tconv(x).shape, (2,5,4,2,2))
class Mod(Module):
def __init__(self):
self.conv = nn.Conv2d(3,4,1)
def forward(self, x, y):
return self.conv(x) + self.conv(y)
tmod = TimeDistributed(Mod())
out = tmod(x,y)
test_eq(out.shape, (2,5,4,2,2))
tmod.low_mem=True
out_low_mem = tmod(x,y)
test_eq(out_low_mem.shape, (2,5,4,2,2))
test_eq(out, out_low_mem)
class Mod2(Module):
def __init__(self):
self.conv = nn.Conv2d(3,4,1)
def forward(self, x, y):
return self.conv(x), self.conv(y)
tmod2 = TimeDistributed(Mod2())
out = tmod2(x,y)
test_eq(len(out), 2)
test_eq(out[0].shape, (2,5,4,2,2))
tmod2.low_mem=True
out_low_mem = tmod2(x,y)
test_eq(out_low_mem[0].shape, (2,5,4,2,2))
test_eq(out, out_low_mem)
show_doc(TimeDistributed)
from fastai.vision.all import *
encoder = create_body(resnet18)
time_resnet = TimeDistributed(encoder)
image_sequence = torch.rand(2, 5, 3, 64, 64)
time_resnet(image_sequence).shape
time_resnet.low_mem_forward(image_sequence).shape
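# Per-frame sanity check (a sketch): resnet18's body gives 512 channels and divides height/width by 32 (64/32 = 2)
frame = torch.randn(2, 3, 64, 64)
encoder(frame).shape # -> torch.Size([2, 512, 2, 2])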
#|export
from torch.jit import script
#|export
@script
def _swish_jit_fwd(x): return x.mul(torch.sigmoid(x))
@script
def _swish_jit_bwd(x, grad_output):
x_sigmoid = torch.sigmoid(x)
return grad_output * (x_sigmoid * (1 + x * (1 - x_sigmoid)))
class _SwishJitAutoFn(torch.autograd.Function):
@staticmethod
def forward(ctx, x):
ctx.save_for_backward(x)
return _swish_jit_fwd(x)
@staticmethod
def backward(ctx, grad_output):
x = ctx.saved_variables[0]
return _swish_jit_bwd(x, grad_output)
#|export
def swish(x, inplace=False): return _SwishJitAutoFn.apply(x)
#|export
class Swish(Module):
def forward(self, x): return _SwishJitAutoFn.apply(x)
#|export
@script
def _mish_jit_fwd(x): return x.mul(torch.tanh(F.softplus(x)))
@script
def _mish_jit_bwd(x, grad_output):
x_sigmoid = torch.sigmoid(x)
x_tanh_sp = F.softplus(x).tanh()
return grad_output.mul(x_tanh_sp + x * x_sigmoid * (1 - x_tanh_sp * x_tanh_sp))
class MishJitAutoFn(torch.autograd.Function):
@staticmethod
def forward(ctx, x):
ctx.save_for_backward(x)
return _mish_jit_fwd(x)
@staticmethod
def backward(ctx, grad_output):
x = ctx.saved_variables[0]
return _mish_jit_bwd(x, grad_output)
#|export
def mish(x): return F.mish(x) if ismin_torch('1.9') else MishJitAutoFn.apply(x)
#|export
class Mish(Module):
def forward(self, x): return MishJitAutoFn.apply(x)
#|export
if ismin_torch('1.9'): Mish = nn.Mish
#|export
for o in swish,Swish,mish,Mish: o.__default_init__ = kaiming_uniform_
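# Numerical sanity check (a sketch): swish(x) = x*sigmoid(x) and mish(x) = x*tanh(softplus(x))
x = torch.randn(100)
test_close(swish(x), x*torch.sigmoid(x), eps=1e-4)
test_close(mish(x), x*torch.tanh(F.softplus(x)), eps=1e-4)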
#|export
class ParameterModule(Module):
"Register a lone parameter `p` in a module."
def __init__(self, p): self.val = p
def forward(self, x): return x
#|export
def children_and_parameters(m):
"Return the children of `m` and its direct parameters not registered in modules."
children = list(m.children())
children_p = sum([[id(p) for p in c.parameters()] for c in m.children()],[])
for p in m.parameters():
if id(p) not in children_p: children.append(ParameterModule(p))
return children
class TstModule(Module):
def __init__(self): self.a,self.lin = nn.Parameter(torch.randn(1)),nn.Linear(5,10)
tst = TstModule()
children = children_and_parameters(tst)
test_eq(len(children), 2)
test_eq(children[0], tst.lin)
assert isinstance(children[1], ParameterModule)
test_eq(children[1].val, tst.a)
#|export
def has_children(m):
try: next(m.children())
except StopIteration: return False
return True
class A(Module): pass
assert not has_children(A())
assert has_children(TstModule())
#|export
def flatten_model(m):
"Return the list of all submodules and parameters of `m`"
return sum(map(flatten_model,children_and_parameters(m)),[]) if has_children(m) else [m]
tst = nn.Sequential(TstModule(), TstModule())
children = flatten_model(tst)
test_eq(len(children), 4)
assert isinstance(children[1], ParameterModule)
assert isinstance(children[3], ParameterModule)
#|export
class NoneReduce():
"A context manager to evaluate `loss_func` with none reduce."
def __init__(self, loss_func): self.loss_func,self.old_red = loss_func,None
def __enter__(self):
if hasattr(self.loss_func, 'reduction'):
self.old_red = self.loss_func.reduction
self.loss_func.reduction = 'none'
return self.loss_func
else: return partial(self.loss_func, reduction='none')
def __exit__(self, type, value, traceback):
if self.old_red is not None: self.loss_func.reduction = self.old_red
x,y = torch.randn(5),torch.randn(5)
loss_fn = nn.MSELoss()
with NoneReduce(loss_fn) as loss_func:
loss = loss_func(x,y)
test_eq(loss.shape, [5])
test_eq(loss_fn.reduction, 'mean')
loss_fn = F.mse_loss
with NoneReduce(loss_fn) as loss_func:
loss = loss_func(x,y)
test_eq(loss.shape, [5])
test_eq(loss_fn, F.mse_loss)
#|export
def in_channels(m):
"Return the shape of the first weight layer in `m`."
for l in flatten_model(m):
if getattr(l, 'weight', None) is not None and l.weight.ndim==4:
return l.weight.shape[1]
raise Exception('No weight layer')
test_eq(in_channels(nn.Sequential(nn.Conv2d(5,4,3), nn.Conv2d(4,3,3))), 5)
test_eq(in_channels(nn.Sequential(nn.AvgPool2d(4), nn.Conv2d(4,3,3))), 4)
test_eq(in_channels(nn.Sequential(BatchNorm(4), nn.Conv2d(4,3,3))), 4)
test_eq(in_channels(nn.Sequential(InstanceNorm(4), nn.Conv2d(4,3,3))), 4)
test_eq(in_channels(nn.Sequential(InstanceNorm(4, affine=False), nn.Conv2d(4,3,3))), 4)
test_fail(lambda : in_channels(nn.Sequential(nn.AvgPool2d(4))))
#|hide
from nbdev.export import *
notebook2script()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pooling layers
Step2: If the input is bs x nf x h x h, the output will be bs x 2*nf x 1 x 1 if no size is passed or bs x 2*nf x size x size
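For instance, a minimal shape check (a sketch, assuming fastai's AdaptiveConcatPool2d defined earlier in this notebook):
x = torch.randn(8, 4, 6, 6)
test_eq(AdaptiveConcatPool2d()(x).shape, [8, 8, 1, 1])
test_eq(AdaptiveConcatPool2d(2)(x).shape, [8, 8, 2, 2])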
Step3: BatchNorm layers
Step4: kwargs are passed to nn.BatchNorm and can be eps, momentum, affine and track_running_stats.
Step5: If affine is False, the weight should be None
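A small check of that behaviour (a sketch using the BatchNorm helper defined earlier in this notebook):
tst = BatchNorm(15, affine=False)
assert tst.weight is None and tst.bias is None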
Step6: The BatchNorm layer is skipped if bn=False, as is the dropout if p=0. Optionally, you can add an activation after the linear layer with act.
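For example (a sketch using the LinBnDrop layer defined earlier):
tst = LinBnDrop(10, 20, p=0.1, act=nn.ReLU()) # BatchNorm1d, Dropout, Linear, ReLU
test_eq(len(list(tst.children())), 4)
tst = LinBnDrop(10, 20, bn=False) # bn=False and p=0. leave just the Linear
test_eq(len(list(tst.children())), 1)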
Step7: Inits
Step8: Convolutions
Step9: The convolution uses ks (kernel size), stride, padding and bias. padding will default to the appropriate value ((ks-1)//2 if it's not a transposed conv) and bias will default to True if the norm_type is Spectral or Weight, False if it's Batch or BatchZero. Note that if you don't want any normalization, you should pass norm_type=None.
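A quick illustration of those defaults (a sketch using ConvLayer as above):
tst = ConvLayer(16, 32, ks=3) # norm_type defaults to Batch, so the conv has no bias
test_eq(tst[0].padding, (1,1))
assert tst[0].bias is None
tst = ConvLayer(16, 32, ks=3, norm_type=None) # no normalization, so the bias is kept
assert tst[0].bias is not None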
Step10: Embeddings
Step11: Truncated normal initialization bounds the distribution to avoid large values. For a given standard deviation std, the bounds are roughly -2*std, 2*std.
Step12: Self attention
Step13: Self-attention layer as introduced in Self-Attention Generative Adversarial Networks.
Step14: Then during training gamma will probably change since it's a trainable parameter. Let's see what's happening when it gets a nonzero value.
Step15: The attention mechanism requires three matrix multiplications (here represented by 1x1 convs). The multiplications are done on the channel level (the second dimension in our tensor) and we flatten the feature map (which is 8x8 here). As in the paper, we denote by f, g and h the results of those multiplications.
Step16: The key part of the attention layer is to compute attention weights for each location in the feature map (here 8x8 = 64). Those are positive numbers that sum to 1 and tell the model to pay attention to this or that part of the picture. We take the product of f and the transpose of g (to get something of size bs by 64 by 64) then apply a softmax on the first dimension (to get the positive numbers that sum up to 1). The result can then be multiplied with h transposed to get an output of size bs by channels by 64, which can then be viewed as an output of the same size as the original input.
Step17: Self-attention layer used in the Big GAN paper.
Step18: PixelShuffle
Step19: ICNR init was introduced in this article. It suggests to initialize the convolution that will be used in PixelShuffle so that each of the r**2 channels get the same weight (so that in the picture above, the 9 colors in a 3 by 3 window are initially the same).
Step20: The convolutional layer is initialized with icnr_init and passed act_cls and norm_type (the default of weight normalization seemed to be what's best for super-resolution problems, in our experiments).
Step21: Sequential extensions
Step22: This is useful for writing layers that need to remember the input (like a resnet block) in a sequential way.
Step23: Concat
Step24: Ready-to-go models
Step25: The model is a succession of convolutional layers from (filters[0],filters[1]) to (filters[n-2],filters[n-1]) (if n is the length of the filters list) followed by a PoolFlatten. kernel_szs and strides default to a list of 3s and a list of 2s. If bn=True the convolutional layers are successions of conv-relu-batchnorm, otherwise conv-relu.
Step26: Test kernel sizes
Step27: Test strides
Step28: This is a resnet block (normal or bottleneck depending on expansion, 1 for the normal block and 4 for the traditional bottleneck) that implements the tweaks from Bag of Tricks for Image Classification with Convolutional Neural Networks. In particular, the last batchnorm layer (if that is the selected norm_type) is initialized with a weight (or gamma) of zero to facilitate the flow from the beginning to the end of the network. It also implements optional Squeeze and Excitation and grouped convs for ResNeXT and similar models (use dw=True for depthwise convs).
Step29: Time Distributed Layer
Step30: This module is equivalent to Keras TimeDistributed Layer. This wrapper allows to apply a layer to every temporal slice of an input. By default it is assumed the time axis (tdim) is the 1st one (the one after the batch size). A typical usage would be to encode a sequence of images using an image encoder.
Step31: A resnet18 will encode a feature map of 512 channels. Height and Width will be divided by 32.
Step32: A synthetic batch of 2 image-sequences of length 5: (bs, seq_len, ch, w, h)
Step33: This way, one can encode a sequence of images on feature space.
Step34: Swish and Mish
Step35: Helper functions for submodules
Step36: Export -
|
11,422
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
X = np.array([[3,6], [6,7]])
# create the scaler object
scaler = StandardScaler()
# estimate the distribution (fit)
scaler.fit(X)
# apply the scaling
X2 = scaler.transform(X)
X
X2
X = np.array([[0, 2], [1, 1]])
X
from sklearn.preprocessing import OneHotEncoder
OneHotEncoder().fit_transform(X).toarray()
from sklearn.datasets import load_boston
boston = load_boston()
dfX0 = pd.DataFrame(boston.data, columns=boston.feature_names)
dfy = pd.DataFrame(boston.target, columns=["MEDV"])
dfX0.tail() # DataFrame of independent variables (X)
dfy.tail() # DataFrame of the dependent variable (y)
import statsmodels.api as sm
# augmentation: add a constant (intercept) column
dfX = sm.add_constant(dfX0)
# create the model object
m = sm.OLS(dfy, dfX)
# fit the model and create the result object
r = m.fit()
# print the report from the result object
print(r.summary())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem 4
Step2: Problem 6
|
11,423
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
import scipy.io
import scipy.misc
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
from PIL import Image
from nst_utils import *
import numpy as np
import tensorflow as tf
%matplotlib inline
model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
print(model)
content_image = scipy.misc.imread("images/louvre.jpg")
imshow(content_image)
# GRADED FUNCTION: compute_content_cost
def compute_content_cost(a_C, a_G):
Computes the content cost
Arguments:
a_C -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image C
a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image G
Returns:
J_content -- scalar that you compute using equation 1 above.
### START CODE HERE ###
# Retrieve dimensions from a_G (≈1 line)
m, n_H, n_W, n_C = a_G.get_shape().as_list()
# Reshape a_C and a_G (≈2 lines)
a_C_unrolled = tf.transpose(tf.reshape(a_C,[n_H*n_W, n_C]))
a_G_unrolled = tf.transpose(tf.reshape(a_G,[n_H*n_W, n_C]))
# compute the cost with tensorflow (≈1 line)
J_content = 1/(4*n_H*n_W*n_C)* tf.reduce_sum(tf.square(tf.subtract(a_C_unrolled,a_G_unrolled)))
### END CODE HERE ###
return J_content
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
a_C = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
J_content = compute_content_cost(a_C, a_G)
print("J_content = " + str(J_content.eval()))
style_image = scipy.misc.imread("images/monet_800600.jpg")
imshow(style_image)
# GRADED FUNCTION: gram_matrix
def gram_matrix(A):
Argument:
A -- matrix of shape (n_C, n_H*n_W)
Returns:
GA -- Gram matrix of A, of shape (n_C, n_C)
### START CODE HERE ### (≈1 line)
GA = tf.matmul(A, tf.transpose(A))
### END CODE HERE ###
return GA
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
A = tf.random_normal([3, 2*1], mean=1, stddev=4)
GA = gram_matrix(A)
print("GA = " + str(GA.eval()))
# GRADED FUNCTION: compute_layer_style_cost
def compute_layer_style_cost(a_S, a_G):
Arguments:
a_S -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image S
a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image G
Returns:
J_style_layer -- tensor representing a scalar value, style cost defined above by equation (2)
### START CODE HERE ###
# Retrieve dimensions from a_G (≈1 line)
m, n_H, n_W, n_C = a_G.get_shape().as_list()
# Reshape the images to have them of shape (n_C, n_H*n_W) (≈2 lines)
a_S = tf.transpose(tf.reshape(a_S,[n_H*n_W, n_C]))
a_G = tf.transpose(tf.reshape(a_G,[n_H*n_W, n_C]))
# Computing gram_matrices for both images S and G (≈2 lines)
GS = gram_matrix(a_S)
GG = gram_matrix(a_G)
# Computing the loss (≈1 line)
J_style_layer = 1/(4*(n_H*n_W*n_C)**2)* tf.reduce_sum(tf.square(tf.subtract(GS,GG)))
### END CODE HERE ###
return J_style_layer
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
a_S = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
J_style_layer = compute_layer_style_cost(a_S, a_G)
print("J_style_layer = " + str(J_style_layer.eval()))
STYLE_LAYERS = [
('conv1_1', 0.2),
('conv2_1', 0.2),
('conv3_1', 0.2),
('conv4_1', 0.2),
('conv5_1', 0.2)]
def compute_style_cost(model, STYLE_LAYERS):
Computes the overall style cost from several chosen layers
Arguments:
model -- our tensorflow model
STYLE_LAYERS -- A python list containing:
- the names of the layers we would like to extract style from
- a coefficient for each of them
Returns:
J_style -- tensor representing a scalar value, style cost defined above by equation (2)
# initialize the overall style cost
J_style = 0
for layer_name, coeff in STYLE_LAYERS:
# Select the output tensor of the currently selected layer
out = model[layer_name]
# Set a_S to be the hidden layer activation from the layer we have selected, by running the session on out
a_S = sess.run(out)
# Set a_G to be the hidden layer activation from same layer. Here, a_G references model[layer_name]
# and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that
# when we run the session, this will be the activations drawn from the appropriate layer, with G as input.
a_G = out
# Compute style_cost for the current layer
J_style_layer = compute_layer_style_cost(a_S, a_G)
# Add coeff * J_style_layer of this layer to overall style cost
J_style += coeff * J_style_layer
return J_style
# GRADED FUNCTION: total_cost
def total_cost(J_content, J_style, alpha = 10, beta = 40):
Computes the total cost function
Arguments:
J_content -- content cost coded above
J_style -- style cost coded above
alpha -- hyperparameter weighting the importance of the content cost
beta -- hyperparameter weighting the importance of the style cost
Returns:
J -- total cost as defined by the formula above.
### START CODE HERE ### (≈1 line)
J = alpha*J_content + beta*J_style
### END CODE HERE ###
return J
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(3)
J_content = np.random.randn()
J_style = np.random.randn()
J = total_cost(J_content, J_style)
print("J = " + str(J))
# Reset the graph
tf.reset_default_graph()
# Start interactive session
sess = tf.InteractiveSession()
content_image = scipy.misc.imread("images/louvre_small.jpg")
content_image = reshape_and_normalize_image(content_image)
style_image = scipy.misc.imread("images/monet.jpg")
style_image = reshape_and_normalize_image(style_image)
generated_image = generate_noise_image(content_image)
imshow(generated_image[0])
model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
# Assign the content image to be the input of the VGG model.
sess.run(model['input'].assign(content_image))
# Select the output tensor of layer conv4_2
out = model['conv4_2']
# Set a_C to be the hidden layer activation from the layer we have selected
a_C = sess.run(out)
# Set a_G to be the hidden layer activation from same layer. Here, a_G references model['conv4_2']
# and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that
# when we run the session, this will be the activations drawn from the appropriate layer, with G as input.
a_G = out
# Compute the content cost
J_content = compute_content_cost(a_C, a_G)
# Assign the input of the model to be the "style" image
sess.run(model['input'].assign(style_image))
# Compute the style cost
J_style = compute_style_cost(model, STYLE_LAYERS)
### START CODE HERE ### (1 line)
J = total_cost(J_content, J_style, alpha=10, beta=40)
### END CODE HERE ###
# define optimizer (1 line)
optimizer = tf.train.AdamOptimizer(2.0)
# define train_step (1 line)
train_step = optimizer.minimize(J)
def model_nn(sess, input_image, num_iterations = 200):
# Initialize global variables (you need to run the session on the initializer)
### START CODE HERE ### (1 line)
sess.run(tf.global_variables_initializer())
### END CODE HERE ###
# Run the noisy input image (initial generated image) through the model. Use assign().
### START CODE HERE ### (1 line)
generated_image = sess.run(model['input'].assign(input_image)) #somehow it seems to work with the .assign(input_image) part
### END CODE HERE ###
for i in range(num_iterations):
# Run the session on the train_step to minimize the total cost
### START CODE HERE ### (1 line)
sess.run(train_step)
### END CODE HERE ###
# Compute the generated image by running the session on the current model['input']
### START CODE HERE ### (1 line)
generated_image = sess.run(model['input'])
### END CODE HERE ###
# Print every 20 iteration.
if i%20 == 0:
Jt, Jc, Js = sess.run([J, J_content, J_style])
print("Iteration " + str(i) + " :")
print("total cost = " + str(Jt))
print("content cost = " + str(Jc))
print("style cost = " + str(Js))
# save current generated image in the "/output" directory
save_image("output/" + str(i) + ".png", generated_image)
# save last generated image
save_image('output/generated_image.jpg', generated_image)
return generated_image
model_nn(sess, generated_image)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1 - Problem Statement
Step2: The model is stored in a python dictionary where each variable name is the key and the corresponding value is a tensor containing that variable's value. To run an image through this network, you just have to feed the image to the model. In TensorFlow, you can do so using the tf.assign function. In particular, you will use the assign function like this
Step4: The content image (C) shows the Louvre museum's pyramid surrounded by old Paris buildings, against a sunny sky with a few clouds.
Step5: Expected Output
Step7: This painting was painted in the style of impressionism.
Step9: Expected Output
Step10: Expected Output
Step12: You can combine the style costs for different layers as follows
Step14: Note
Step15: Expected Output
Step16: Let's load, reshape, and normalize our "content" image (the Louvre museum picture)
Step17: Let's load, reshape and normalize our "style" image (Claude Monet's painting)
Step18: Now, we initialize the "generated" image as a noisy image created from the content_image. By initializing the pixels of the generated image to be mostly noise but still slightly correlated with the content image, this will help the content of the "generated" image more rapidly match the content of the "content" image. (Feel free to look in nst_utils.py to see the details of generate_noise_image(...); to do so, click "File-->Open..." at the upper-left corner of this Jupyter notebook.)
Step19: Next, as explained in part (2), let's load the VGG16 model.
Step20: To get the program to compute the content cost, we will now assign a_C and a_G to be the appropriate hidden layer activations. We will use layer conv4_2 to compute the content cost. The code below does the following
Step21: Note
Step22: Exercise
Step23: You'd previously learned how to set up the Adam optimizer in TensorFlow. Lets do that here, using a learning rate of 2.0. See reference
Step24: Exercise
Step25: Run the following cell to generate an artistic image. It should take about 3min on CPU for every 20 iterations but you start observing attractive results after ≈140 iterations. Neural Style Transfer is generally trained using GPUs.
|
11,424
|
<ASSISTANT_TASK:>
Python Code:
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
import tensorflow as tf
from tensorflow.python.framework import ops
from cnn_utils import *
%matplotlib inline
np.random.seed(1)
# Loading the data (signs)
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Example of a picture
index = 6
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
X_train = X_train_orig/255.
X_test = X_test_orig/255.
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
conv_layers = {}
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_H0, n_W0, n_C0, n_y):
Creates the placeholders for the tensorflow session.
Arguments:
n_H0 -- scalar, height of an input image
n_W0 -- scalar, width of an input image
n_C0 -- scalar, number of channels of the input
n_y -- scalar, number of classes
Returns:
X -- placeholder for the data input, of shape [None, n_H0, n_W0, n_C0] and dtype "float"
Y -- placeholder for the input labels, of shape [None, n_y] and dtype "float"
### START CODE HERE ### (≈2 lines)
X = tf.placeholder(tf.float32, shape=[None, n_H0, n_W0, n_C0])
Y = tf.placeholder(tf.float32, shape=[None, n_y])
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(64, 64, 3, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
Initializes weight parameters to build a neural network with tensorflow. The shapes are:
W1 : [4, 4, 3, 8]
W2 : [2, 2, 8, 16]
Returns:
parameters -- a dictionary of tensors containing W1, W2
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 2 lines of code)
W1 = tf.get_variable("W1", [4,4,3,8], initializer = tf.contrib.layers.xavier_initializer(seed = 0))
W2 = tf.get_variable("W2", [2,2,8,16], initializer = tf.contrib.layers.xavier_initializer(seed = 0))
### END CODE HERE ###
parameters = {"W1": W1,
"W2": W2}
return parameters
tf.reset_default_graph()
with tf.Session() as sess_test:
parameters = initialize_parameters()
init = tf.global_variables_initializer()
sess_test.run(init)
print("W1 = " + str(parameters["W1"].eval()[1,1,1]))
print("W2 = " + str(parameters["W2"].eval()[1,1,1]))
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
Implements the forward propagation for the model:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "W2"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
W2 = parameters['W2']
### START CODE HERE ###
# CONV2D: stride of 1, padding 'SAME'
Z1 = tf.nn.conv2d(X,W1, strides = [1,1,1,1], padding = 'SAME')
# RELU
A1 = tf.nn.relu(Z1)
# MAXPOOL: window 8x8, stride 8, padding 'SAME'
P1 = tf.nn.max_pool(A1, ksize = [1,8,8,1], strides = [1,8,8,1], padding = 'SAME')
# CONV2D: filters W2, stride 1, padding 'SAME'
Z2 = tf.nn.conv2d(P1,W2, strides = [1,1,1,1], padding = 'SAME')
# RELU
A2 = tf.nn.relu(Z2)
# MAXPOOL: window 4x4, stride 4, padding 'SAME'
P2 = tf.nn.max_pool(A2, ksize = [1,4,4,1], strides = [1,4,4,1], padding = 'SAME')
# FLATTEN
F = tf.contrib.layers.flatten(P2)
# FULLY-CONNECTED without non-linear activation function (do not call softmax).
# 6 neurons in output layer. Hint: one of the arguments should be "activation_fn=None"
Z3 = tf.contrib.layers.fully_connected(F, num_outputs=6,activation_fn=None)
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(Z3, {X: np.random.randn(2,64,64,3), Y: np.random.randn(2,6)})
print("Z3 = " + str(a))
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
cost - Tensor of the cost function
### START CODE HERE ### (1 line of code)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = Z3, labels = Y))
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(cost, {X: np.random.randn(4,64,64,3), Y: np.random.randn(4,6)})
print("cost = " + str(a))
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.009,
num_epochs = 100, minibatch_size = 64, print_cost = True):
Implements a three-layer ConvNet in Tensorflow:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Arguments:
X_train -- training set, of shape (None, 64, 64, 3)
Y_train -- test set, of shape (None, n_y = 6)
X_test -- training set, of shape (None, 64, 64, 3)
Y_test -- test set, of shape (None, n_y = 6)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
train_accuracy -- real number, accuracy on the train set (X_train)
test_accuracy -- real number, testing accuracy on the test set (X_test)
parameters -- parameters learnt by the model. They can then be used to predict.
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep results consistent (tensorflow seed)
seed = 3 # to keep results consistent (numpy seed)
(m, n_H0, n_W0, n_C0) = X_train.shape
n_y = Y_train.shape[1]
costs = [] # To keep track of the cost
# Create Placeholders of the correct shape
### START CODE HERE ### (1 line)
X, Y = create_placeholders(n_H0, n_W0, n_C0, n_y)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer that minimizes the cost.
### START CODE HERE ### (1 line)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
### END CODE HERE ###
# Initialize all the variables globally
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
minibatch_cost = 0.
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the optimizer and the cost, the feedict should contain a minibatch for (X,Y).
### START CODE HERE ### (1 line)
_ , temp_cost = sess.run([optimizer,cost],feed_dict={X:minibatch_X, Y:minibatch_Y})
### END CODE HERE ###
minibatch_cost += temp_cost / num_minibatches
# Print the cost every epoch
if print_cost == True and epoch % 5 == 0:
print ("Cost after epoch %i: %f" % (epoch, minibatch_cost))
if print_cost == True and epoch % 1 == 0:
costs.append(minibatch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# Calculate the correct predictions
predict_op = tf.argmax(Z3, 1)
correct_prediction = tf.equal(predict_op, tf.argmax(Y, 1))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(accuracy)
train_accuracy = accuracy.eval({X: X_train, Y: Y_train})
test_accuracy = accuracy.eval({X: X_test, Y: Y_test})
print("Train Accuracy:", train_accuracy)
print("Test Accuracy:", test_accuracy)
return train_accuracy, test_accuracy, parameters
_, _, parameters = model(X_train, Y_train, X_test, Y_test)
fname = "images/thumbs_up.jpg"
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64))
plt.imshow(my_image)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run the next cell to load the "SIGNS" dataset you are going to use.
Step2: As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5.
Step3: In Course 2, you had built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it.
Step5: 1.1 - Create placeholders
Step7: Expected Output
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step14: Run the following cell to train your model for 100 epochs. Check if your cost after epoch 0 and 5 matches our output. If not, stop the cell and go back to your code!
Step15: Expected output
|
11,425
|
<ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.2,<2.3"
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('lc', times=phoebe.linspace(0,1,101))
b.run_compute(irrad_method='none', model='no_spot')
b.add_feature('spot', component='primary', feature='spot01', relteff=0.9, radius=30, colat=45, long=90)
b.run_compute(irrad_method='none', model='with_spot')
afig, mplfig = b.plot(show=True, legend=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Model without Spots
Step3: Adding Spots
Step4: Comparing Light Curves
|
11,426
|
<ASSISTANT_TASK:>
Python Code:
linear = linear_model.LinearModel(features, labels, terms)
linear.inference()
#PYTEST_VALIDATE_IGNORE_OUTPUT
linear.plot_direction_accuracy()
#PYTEST_VALIDATE_IGNORE_OUTPUT
linear.plot_profit()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: accuracy
Step2: profit
|
11,427
|
<ASSISTANT_TASK:>
Python Code:
import rebound
import numpy as np
sim = rebound.Simulation()
sim.add(m=1., hash=0)
for i in range(1,10):
sim.add(a=i, hash=i)
sim.move_to_com()
print("Particle hashes:{0}".format([sim.particles[i].hash for i in range(sim.N)]))
sim.add(a=10, hash="Saturn")
print("Particle hashes:{0}".format([sim.particles[i].hash for i in range(sim.N)]))
Noutputs = 1000
xs = np.zeros((sim.N, Noutputs))
ys = np.zeros((sim.N, Noutputs))
times = np.linspace(0.,50*2.*np.pi, Noutputs, endpoint=False)
for i, time in enumerate(times):
sim.integrate(time)
xs[:,i] = [sim.particles[j].x for j in range(sim.N)]
ys[:,i] = [sim.particles[j].y for j in range(sim.N)]
%matplotlib inline
import matplotlib.pyplot as plt
fig,ax = plt.subplots(figsize=(15,5))
for i in range(sim.N):
plt.plot(xs[i,:], ys[i,:])
ax.set_aspect('equal')
print("Hash\t\tx")
for i in range(sim.N):
print("{0}\t{1}".format(sim.particles[i].hash, xs[i,-1]))
for i in reversed(range(1,sim.N)):
if xs[i,-1] > 0:
sim.remove(i)
print("Number of particles after cut = {0}".format(sim.N))
print("Hashes of remaining particles = {0}".format([p.hash for p in sim.particles]))
sim.remove(2, keepSorted=0)
print("Number of particles after cut = {0}".format(sim.N))
print("Hashes of remaining particles = {0}".format([p.hash for p in sim.particles]))
sim.remove(hash="Saturn")
print("Number of particles after cut = {0}".format(sim.N))
print("Hashes of remaining particles = {0}".format([p.hash for p in sim.particles]))
try:
sim.remove(hash="Planet 9")
except RuntimeError as e:
print("A runtime error occured: {0}".format(e))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let us add one more particle, this time with a custom name
Step2: Now let us perform a short integration to isolate the particles that interest us for a longer simulation
Step3: At this stage, we might be interested in particles that remained within some semimajor axis range, particles that were in resonance with a particular planet, etc. Let's imagine a simple (albeit arbitrary) case where we only want to keep particles that had $x < 0$ at the end of the preliminary integration. Let's first print out the particle hashes and x positions.
Step4: Note that 4066125545 is the hash corresponding to the string "Saturn" we added above. We can use the remove() function to filter out particles. As an argument, we pass the corresponding index in the particles array.
Step5: By default, the remove() function removes the i-th particle from the particles array, and shifts all particles with higher indices down by 1. This ensures that the original order in the particles array is preserved. Note that this is helpful for example if you use an integrator such as WHFast which uses Jacobi coordinates.
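A minimal sketch of index-based removal (a toy simulation; hashes chosen for illustration):
sim2 = rebound.Simulation()
sim2.add(m=1., hash=0)
sim2.add(a=1., hash=1)
sim2.add(a=2., hash=2)
sim2.remove(1) # removes the particle at index 1 (hash 1); the rest shift down, order preserved
print([p.hash for p in sim2.particles])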
Step6: We see that the order of the particles array has changed.
Step7: If you try to remove a particle with an invalid index or hash, an exception is thrown, which can be caught using the standard Python syntax
|
11,428
|
<ASSISTANT_TASK:>
Python Code:
# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
from mne import read_evokeds
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
# Reading
condition = 'Left Auditory'
evoked = read_evokeds(fname, condition=condition, baseline=(None, 0),
proj=True)
evoked.plot(exclude=[], time_unit='s')
# Show result as a 2D image (x: time, y: channels, color: amplitude)
evoked.plot_image(exclude=[], time_unit='s')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Show result as a butterfly plot
|
11,429
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-1', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
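# Illustrative example only (not part of the generated notebook content): a completed
# property block keeps the set_id call and fills in DOC.set_value with the model's value, e.g.
# DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# DOC.set_value(60)  # placeholder value; replace with the model's real number of levels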
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
|
11,430
|
<ASSISTANT_TASK:>
Python Code:
# A bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.neural_net import TwoLayerNet
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
"""returns relative error"""
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Create a small net and some toy data to check your implementations.
# Note that we set the random seed for repeatable experiments.
input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5
def init_toy_model():
np.random.seed(0)
return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)
def init_toy_data():
np.random.seed(1)
X = 10 * np.random.randn(num_inputs, input_size)
y = np.array([0, 1, 2, 2, 1])
return X, y
net = init_toy_model()
X, y = init_toy_data()
scores = net.loss(X)
print 'Your scores:'
print scores
print
print 'correct scores:'
correct_scores = np.asarray([
[-0.81233741, -1.27654624, -0.70335995],
[-0.17129677, -1.18803311, -0.47310444],
[-0.51590475, -1.01354314, -0.8504215 ],
[-0.15419291, -0.48629638, -0.52901952],
[-0.00618733, -0.12435261, -0.15226949]])
print correct_scores
print
# The difference should be very small. We get < 1e-7
print 'Difference between your scores and correct scores:'
print np.sum(np.abs(scores - correct_scores))
loss, _ = net.loss(X, y, reg=0.1)
correct_loss = 1.30378789133
# should be very small, we get < 1e-12
print 'Difference between your loss and correct loss:'
print np.sum(np.abs(loss - correct_loss))
from cs231n.gradient_check import eval_numerical_gradient
# Use numeric gradient checking to check your implementation of the backward pass.
# If your implementation is correct, the difference between the numeric and
# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.
loss, grads = net.loss(X, y, reg=0.1)
# these should all be less than 1e-8 or so
for param_name in grads:
f = lambda W: net.loss(X, y, reg=0.1)[0]
param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)
print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))
net = init_toy_model()
stats = net.train(X, y, X, y,
learning_rate=1e-1, reg=1e-5,
num_iters=100, verbose=False)
print 'Final training loss: ', stats['loss_history'][-1]
# plot the loss history
plt.plot(stats['loss_history'])
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.title('Training Loss history')
plt.show()
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
"""
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# Reshape data to rows
X_train = X_train.reshape(num_training, -1)
X_val = X_val.reshape(num_validation, -1)
X_test = X_test.reshape(num_test, -1)
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print 'Train data shape: ', X_train.shape
print 'Train labels shape: ', y_train.shape
print 'Validation data shape: ', X_val.shape
print 'Validation labels shape: ', y_val.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
input_size = 32 * 32 * 3
hidden_size = 80
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=1000, batch_size=200,
learning_rate=5e-4, learning_rate_decay=0.95,
reg=0.5, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print 'Validation accuracy: ', val_acc
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Clasification accuracy')
plt.show()
from cs231n.vis_utils import visualize_grid
# Visualize the weights of the network
def show_net_weights(net):
W1 = net.params['W1']
W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)
plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))
plt.gca().axis('off')
plt.show()
show_net_weights(net)
best_net = None # store the best model into this
#################################################################################
# TODO: Tune hyperparameters using the validation set. Store your best trained #
# model in best_net. #
# #
# To help debug your network, it may help to use visualizations similar to the #
# ones we used above; these visualizations will have significant qualitative #
# differences from the ones we saw above for the poorly tuned network. #
# #
# Tweaking hyperparameters by hand can be fun, but you might find it useful to #
# write code to sweep through possible combinations of hyperparameters #
# automatically like we did on the previous exercises. #
#################################################################################
input_size = 32 * 32 * 3
num_classes = 10
learning_rates = [1.0e-3] # 1e-4, 5e-4, 8e-4,
hidden_sizes = [80]
val_accs = {}
best_val = -1
best_net = []
best_param = []
for lr in learning_rates:
for hs in hidden_sizes:
net = TwoLayerNet(input_size, hs, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=1000, batch_size=200,
learning_rate = lr, learning_rate_decay=0.95,
reg=0.5, verbose=True)
# Predict on the validation set
val_accs[(lr,hs)] = (net.predict(X_val) == y_val).mean()
# update best results
if val_accs[(lr,hs)] > best_val:
best_val = val_accs[(lr,hs)]
best_net = net
best_param = (lr,hs)
print "learning rate:", lr, "hidden size:", hs,'Validation accuracy: ', val_accs[(lr,hs)]
#################################################################################
# END OF YOUR CODE #
#################################################################################
# visualize the weights of the best network
show_net_weights(best_net)
test_acc = (best_net.predict(X_test) == y_test).mean()
print 'Test accuracy: ', test_acc
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implementing a Neural Network
Step2: We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays (a short sketch of this layout follows the step list below). Below, we initialize toy data and a toy model that we will use to develop your implementation.
Step3: Forward pass: compute scores
Step4: Forward pass: compute loss
Step5: Backward pass
Step6: Train the network
Step8: Load the data
Step9: Train a network
Step10: Debug the training
Step11: Tune your hyperparameters
Step12: Run on the test set
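A minimal, illustrative sketch of the self.params layout mentioned in Step2 (assumes the toy sizes used above, i.e. input 4, hidden 10, classes 3; not part of the original assignment):
net = init_toy_model()
for name in sorted(net.params):
    print name, net.params[name].shape  # e.g. W1 (4, 10), b1 (10,), W2 (10, 3), b2 (3,)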
|
11,431
|
<ASSISTANT_TASK:>
Python Code:
from xml.etree import ElementTree as ET
document_tree = ET.parse( 'data/mondial_database_less.xml' )
# print names of all countries
for child in document_tree.getroot():
print (child.find('name').text)
# print names of all countries and their cities
for element in document_tree.iterfind('country'):
print ('* ' + element.find('name').text + ':',)
capitals_string = ''
for subelement in element.getiterator('city'):
capitals_string += subelement.find('name').text + ', '
print (capitals_string[:-2])
document = ET.parse( './data/mondial_database.xml' )
import pandas as pd
# gather all data into a list
root = document.getroot()
df = []
for country in root.findall('country'):
infmort = country.find('infant_mortality')
cname = country.find('name')
if infmort != None:
df.append([cname.text, pd.to_numeric(infmort.text)])
# convert into a dataframe
df = pd.DataFrame(df)
# assign column names
df.columns = ['name', 'infant_mortality']
# sort by infant mortality, ascending=True
pd.options.display.float_format = '{:,.2f}'.format
df = df.sort_values(['infant_mortality'], ascending=[1]).head(10).reset_index(drop=True)
df
# gather population tags per country (excluding per city/prov)
df = []
for root in document.getroot():
country = root.find('name').text
for child in root:
if ((root.tag == 'country') & (child.tag=='population')):
yr = child.attrib['year']
pop = child.text
df.append([country, yr, pd.to_numeric(pop)])
# convert into a dataframe
df = pd.DataFrame(df, columns = ['country', 'year', 'population'])
# sort by population, ascending=False
df.population = df.population.astype(float)
pd.options.display.float_format = '{:,.0f}'.format
print(df.sort_values(['population'], ascending=[0]).head(10).reset_index(drop=True))
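# For each country, pair every ethnic-group percentage with the country's (last-read) population
# figure and convert it into an absolute count: percentage * population / 100.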
df_eth_cnt = []
for root in document.getroot():
country = root.find('name').text
df_eth = []
for child in root:
if ((root.tag == 'country') & (child.tag=='population')):
yr = child.attrib['year']
pop = child.text
if ((root.tag == 'country') & (child.tag=='ethnicgroup')):
percent = pd.to_numeric(child.attrib['percentage'])
ethnic = child.text
df_eth.append([percent, ethnic])
for k in range(len(df_eth)):
df_eth_cnt.append([df_eth[k][1], (df_eth[k][0])*(pd.to_numeric(pop)/100)])
#df_pop.append([country, yr, pd.to_numeric(pop), df_eth_cnt])
df = pd.DataFrame(df_eth_cnt, columns = ['ethnicgroup', 'population'])
print(df.sort_values(['population'], ascending=[0]).head(10).reset_index(drop=True))
# gather all data into a list
root = document.getroot()
dfcntry = []
for country in root.findall('country'):
dfcntry.append([country.attrib['car_code'], country.find('name').text])
dfcntry = pd.DataFrame(dfcntry, columns=['country_code','country_name'])
dfUniq = dfcntry[(dfcntry.country_name != '')].drop_duplicates('country_code')
dfUniq = pd.DataFrame(dfUniq, columns=['country_code','country_name'])
df = []
for river in root.findall('river'):
length = river.find('length')
rivername = river.find('name')
if length != None:
df.append([river.attrib['country'], rivername.text, pd.to_numeric(length.text)])
# convert into a dataframe
df = pd.DataFrame(df, columns = ['country(ies)', 'river_name', 'length'])
# print answer
df = df.sort_values(['length'], ascending=[0]).head(1).reset_index(drop=True)
# extract country names
cntry = []
for i in df['country(ies)'].str.split():
for j in range(len(i)):
cntry.append(dfUniq.loc[dfUniq['country_code'] == i[j]])
# concat names into a string
str =''
for k in range(len(cntry)):
str += (cntry[k]['country_name'].values[0]) + " "
#remove last space
str = str[:-1]
# update country name
df['country(ies)'] = str
# display answer
df
# gather all data into a list
dfcntry = []
for country in root.findall('country'):
dfcntry.append([country.attrib['car_code'], country.find('name').text])
dfcntry = pd.DataFrame(dfcntry, columns=['country_code','country_name'])
dfUniq = dfcntry[(dfcntry.country_name != '')].drop_duplicates('country_code')
dfUniq = pd.DataFrame(dfUniq, columns=['country_code','country_name'])
root = document.getroot()
df = []
for lake in root.findall('lake'):
area = lake.find('area')
lakename = lake.find('name')
if area != None:
df.append([lake.attrib['country'], lakename.text, pd.to_numeric(area.text)])
# convert into a dataframe
df = pd.DataFrame(df, columns = ['country(ies)', 'lake_name', 'area'])
# get largest lake by area
df = df.sort_values(['area'], ascending=[0]).head(1).reset_index(drop=True)
# extract country names
cntry = []
for i in df['country(ies)'].str.split():
for j in range(len(i)):
cntry.append(dfUniq.loc[dfUniq['country_code'] == i[j]])
# concat names into a string
str =''
for k in range(len(cntry)):
str += (cntry[k]['country_name'].values[0]) + " "
#remove last space
str = str[:-1]
# update country name
df['country(ies)'] = str
# display answer
df
# create a new dataframe with unique project code and name
root = document.getroot()
dfcntry = []
for country in root.findall('country'):
dfcntry.append([country.attrib['car_code'], country.find('name').text])
dfcntry = pd.DataFrame(dfcntry, columns=['country_code','country_name'])
dfUniq = dfcntry[(dfcntry.country_name != '')].drop_duplicates('country_code')
dfUniq = pd.DataFrame(dfUniq, columns=['country_code','country_name'])
# gather all data into a list
df = []
for airport in root.findall('airport'):
area = airport.find('elevation')
airportname = airport.find('name')
if area.text != None:
df.append([airport.attrib['country'], airportname.text, pd.to_numeric(area.text)])
# convert into a dataframe
df = pd.DataFrame(df, columns = ['country_code', 'airport_name', 'elevation'])
# get country with highest elevation
df = df.sort_values(['elevation'], ascending=[0]).head(1).reset_index(drop=True)
# locate the country names from list of unique country codes
dfCntry = dfUniq.loc[dfUniq['country_code'] == df['country_code'].values[0]]
# merge two dataframes, print answer
df = pd.merge(dfCntry, df, on='country_code', how='outer')
# i don't want to display country code
del df['country_code']
df
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: XML example
Step2: XML exercise
Step3: 10 countries with the lowest infant mortality rates
Step4: 10 cities with the largest population
Step5: 10 ethnic groups with the largest overall populations (sum of best/latest estimates over all countries)
Step6: name and country of a) longest river
Step7: name and country of b) largest lake
Step8: name and country of c) airport at highest elevation
|
11,432
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import glob
import tabulate
import pprint
import click
import numpy as np
import pandas as pd
from ray.tune.commands import *
from nupic.research.frameworks.dynamic_sparse.common.browser import *
import re
import matplotlib
import matplotlib.pyplot as plt
from matplotlib import rcParams
%config InlineBackend.figure_format = 'retina'
import seaborn as sns
sns.set(style="whitegrid")
sns.set_palette("colorblind")
exps = ['sigopt_test1', 'sigopt_test2', 'sigopt_test3']
paths = [os.path.expanduser("~/nta/results/{}".format(e)) for e in exps]
df = load_many(paths)
test_string = '0_learning_'
re.match('^\d+', test_string)[0]
df.head(5)
df.columns
df['experiment_file_name'].unique()
def fix_name(s):
if s == '/Users/lsouza/nta/results/sigopt_test1/experiment_state-2020-03-20_02-59-23.json':
return 'SigOpt-A'
elif s == '/Users/lsouza/nta/results/sigopt_test2/experiment_state-2020-03-20_18-07-05.json':
return "SigOpt-B"
elif s == '/Users/lsouza/nta/results/sigopt_test3/experiment_state-2020-03-25_01-36-27.json':
return 'Random Search'
df['experiment_file_name'] = df['experiment_file_name'].apply(fix_name)
df['experiment_file_name'].unique()
def get_index(s):
return int(re.match('^\d+', s)[0])
df['index_pos'] = df['Experiment Name'].apply(get_index)
df['density'] = df['on_perc']
df.iloc[17]
df.groupby('experiment_file_name')['model'].count()
# helper functions
def mean_and_std(s):
return "{:.3f} ± {:.3f}".format(s.mean(), s.std())
def round_mean(s):
return "{:.0f}".format(round(s.mean()))
stats = ['min', 'max', 'mean', 'std']
def agg(columns, filter=None, round=3):
if filter is None:
return (df.groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
else:
return (df[filter].groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
agg(['experiment_file_name'])
def plot_acc_over_time(plot_title):
plt.figure(figsize=(12,6))
df_plot = df[df['experiment_file_name'] == plot_title]
sns.lineplot(df_plot['index_pos'], y=df_plot['val_acc_last'])
plt.xticks(np.arange(0,300,30))
plt.ylim(0,0.80)
plt.title(plot_title)
# how to plot?
plot_acc_over_time('Random Search')
plot_acc_over_time('SigOpt-A')
plot_acc_over_time('SigOpt-B')
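# accumulate: running maximum of a series, i.e. the best accuracy observed so far at each trial index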
def accumulate(series):
series = list(series)
cum_series = [series[0]]
for i in range(1, len(series)):
cum_series.append(max(cum_series[i-1], series[i]))
return cum_series
def plot_best_acc_over_time(plot_title):
plt.figure(figsize=(12,6))
df_plot = df[df['experiment_file_name'] == plot_title].sort_values('index_pos')
df_plot['cum_acc'] = accumulate(df_plot['val_acc_last'])
sns.lineplot(df_plot['index_pos'], y=df_plot['cum_acc'])
plt.xticks(np.arange(0,301,30))
plt.ylim(0,0.80)
plt.title(plot_title)
plot_best_acc_over_time('Random Search')
plot_best_acc_over_time('SigOpt-A')
plot_best_acc_over_time('SigOpt-B')
# list top 5 values of each
# show best values
def show_best(experiment):
df_exp = df[df['experiment_file_name'] == experiment].sort_values('val_acc_last', ascending=False)[:5]
return df_exp[['index_pos', 'learning_rate', 'density', 'momentum', 'weight_decay', 'val_acc_last']]
show_best('Random Search')
show_best('SigOpt-A')
show_best('SigOpt-B')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and check data
Step2: Analysis
|
11,433
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
from __future__ import division
import nltk
oracion1 = 'This is the lost dog I found at the park'.split()
oracion2 = 'The progress of the humankind as I progress'.split()
print(nltk.pos_tag(oracion1))
print(nltk.pos_tag(oracion2))
oracion3 = 'Green colorless ideas sleep furiously'.split()
print(nltk.pos_tag(oracion3))
print(nltk.pos_tag('colorless green ideas sleep furiously'.split()))
print(nltk.pos_tag(["My", "name", "is", "Prince"]))
print(nltk.pos_tag('He was born during the summer of 1988'.split()))
print(nltk.pos_tag('''She's Tony's sister'''.split()))
print(nltk.pos_tag('''My name is Sasolamatom and I have the stromkupft dog'''.split()))
from nltk.corpus import brown
brown_sents = brown.sents(categories='news')
brown_tagged_sents = brown.tagged_sents(categories='news')
# print the first sentence of the Brown news section
print(brown_sents[0]) # untagged
print(brown_tagged_sents[0]) # POS-tagged
defaultTagger = nltk.DefaultTagger('NN')
print(defaultTagger.tag(oracion1))
print(defaultTagger.tag(oracion2))
defaultTagger.evaluate(brown_tagged_sents)
patrones = [
(r'[Aa]m$', 'BEM'), # irregular forms of 'to be'
(r'[Aa]re$', 'BER'), #
(r'[Ii]s$', 'BEZ'), #
(r'[Ww]as$', 'BEDZ'), #
(r'[Ww]ere$', 'BED'), #
(r'[Bb]een$', 'BEN'), #
(r'[Hh]ave$', 'HV'), # irregular forms of 'to have'
(r'[Hh]as$', 'HVZ'), #
(r'[Hh]ad$', 'HVD'), #
(r'^I$', 'PRP'), # personal pronouns
(r'[Yy]ou$', 'PRP'), #
(r'[Hh]e$', 'PRP'), #
(r'[Ss]he$', 'PRP'), #
(r'[Ii]t$', 'PRP'), #
(r'[Tt]hey$', 'PRP'), #
(r'[Aa]n?$', 'AT'), #
(r'[Tt]he$', 'AT'), #
(r'[Ww]h.+$', 'WP'), # wh- pronoun
(r'.*ing$', 'VBG'), # gerunds
(r'.*ed$', 'VBD'), # simple past
(r'.*es$', 'VBZ'), # 3rd singular present
(r'[Cc]an(not|n\'t)?$', 'MD'), # modals
(r'[Mm]ight$', 'MD'), #
(r'[Mm]ay$', 'MD'), #
(r'.+ould$', 'MD'), # modals: could, should, would
(r'.*ly$', 'RB'), # adverbs
(r'.*\'s$', 'NN$'), # possessive nouns
(r'.*s$', 'NNS'), # plural nouns
(r'-?[0-9]+(.[0-9]+)?$', 'CD'), # cardinal numbers
(r'^to$', 'TO'), # to
(r'^in$', 'IN'), # in prep
(r'^[A-Z]+([a-z])*$', 'NNP'), # proper nouns
(r'.*', 'NN') # nouns (default)
]
regexTagger = nltk.RegexpTagger(patrones)
print(regexTagger.tag('I was taking a sunbath in Alpedrete'.split()))
print(regexTagger.tag('She would have found 100 dollars in the bag'.split()))
print(regexTagger.tag('DSFdfdsfsd 1852 to dgdfgould fXXXdg in XXXfdg'.split()))
regexTagger.evaluate(brown_tagged_sents)
print(len(brown_tagged_sents))
print((len(brown_tagged_sents) * 90) / 100)
size = int(len(brown_tagged_sents) * 0.9) # make sure you convert this to an integer
corpusEntrenamiento = brown_tagged_sents[:size]
corpusTest = brown_tagged_sents[size:]
# as you can see, these corpora contain different sentences
print(corpusEntrenamiento[0])
print(corpusTest[0])
unigramTagger = nltk.UnigramTagger(corpusEntrenamiento)
print(unigramTagger.evaluate(corpusTest))
# how well does it tag our example sentences?
print(unigramTagger.tag(oracion1))
print(unigramTagger.tag(oracion2))
print(unigramTagger.tag(oracion3))
bigramTagger = nltk.BigramTagger(corpusEntrenamiento)
trigramTagger = nltk.TrigramTagger(corpusEntrenamiento)
# it performs terribly :-(
print(bigramTagger.tag(oracion2))
print(trigramTagger.tag(oracion2))
# cheating here: I ask it to analyze a sentence it has already seen during training
print(bigramTagger.tag(['The', 'Fulton', 'County', 'Grand', 'Jury', 'said', 'Friday', 'an', 'investigation', 'of',
"Atlanta's", 'recent', 'primary', 'election', 'produced', '``', 'no', 'evidence', "''", 'that', 'any', 'irregularities', 'took', 'place', '.']))
print(bigramTagger.evaluate(corpusTest))
print(trigramTagger.evaluate(corpusTest))
unigramTagger = nltk.UnigramTagger(corpusEntrenamiento, backoff=regexTagger)
bigramTagger = nltk.BigramTagger(corpusEntrenamiento, backoff=unigramTagger)
trigramTagger = nltk.TrigramTagger(corpusEntrenamiento, backoff=bigramTagger)
print(trigramTagger.evaluate(corpusTest))
print(trigramTagger.tag(oracion1))
print(trigramTagger.tag(oracion2))
print(trigramTagger.tag(oracion3))
# write your code here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As a first example, we can use the nltk.pos_tag function to POS-tag an English sentence, as long as we pass it in as a list of words or tokens.
Step2: The tagger works fairly well, although it obviously makes mistakes. If we try Chomsky's famous sentence, we find mistagged words.
Step3: How does this tagger work? nltk.pos_tag is a POS tagger based on machine learning. From thousands of manually tagged example sentences, the system has learned, by computing frequencies and generalizing, which part-of-speech tag is most likely for each token.
Step4: To compare both versions, we can print the first sentence of this corpus in its untagged version (note that it is simply a list of tokens) and in its tagged version (a list of tuples whose first element is the token and whose second element is the POS tag).
Step5: Automatic POS tagging
Step6: Obviously it does not work well, but note that in the previous example with oracion1 we tagged 2 out of 10 tokens correctly. If we evaluate it on a larger corpus, such as the set of Brown sentences we already have, we get an accuracy above 13%.
Step7: The .evaluate method, which we can run on any tagger by passing it an already-tagged reference collection as its argument, returns a number
Step8: When we evaluate it on a larger corpus of sentences, we see that our accuracy rises above 32%.
Step9: N-gram-based tagging
Step10: Next we create a unigram-based tagger (sequences of one word, i.e. single words) using the nltk.UnigramTagger class, providing our corpusEntrenamiento for it to learn from. Once trained, we evaluate its performance on corpusTest.
Step11: Unigram-based taggers are built by simply computing a frequency distribution for each token (word) and always assigning the most likely POS tag. In our case, this strategy works reasonably well (see the short sketch after this list).
Step12: As the examples show, the results are disastrous. Most tokens are left untagged and shown as None.
Step13: Why does this happen? Our intuition is not wrong: it is true that if we compute conditional frequency distributions over longer word sequences, our data will be more fine-grained. However, when we consider longer token sequences we run the risk that those sequences never appear as such in the training corpus.
Step14: In-class exercise
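A minimal, illustrative sketch of the idea behind the unigram tagger in Step11 (not part of the original notebook): a unigram tagger boils down to "most frequent tag per word", which we can reproduce with a conditional frequency distribution over the tagged Brown words.
import nltk
from nltk.corpus import brown
cfd = nltk.ConditionalFreqDist(brown.tagged_words(categories='news'))
print(cfd['progress'].max())  # the most frequent tag observed for 'progress' in the news section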
|
11,434
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.version
import gdal
import h5py
import numpy as np
from math import floor
import os
import matplotlib.pyplot as plt
def plot_band_array(band_array,image_extent,title,cmap_title,colormap,colormap_limits):
plt.imshow(band_array, extent=image_extent)
cbar = plt.colorbar(); plt.set_cmap(colormap); plt.clim(colormap_limits)
cbar.set_label(cmap_title,rotation=270,labelpad=20)
plt.title(title); ax = plt.gca()
ax.ticklabel_format(useOffset=False, style='plain')
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90)
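# raster2array: read a single-band GeoTIFF into a numpy array (no-data values set to NaN,
# scale factor applied) together with a metadata dictionary of projection, extent and band statistics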
def raster2array(geotif_file):
metadata = {}
dataset = gdal.Open(geotif_file)
metadata['array_rows'] = dataset.RasterYSize
metadata['array_cols'] = dataset.RasterXSize
metadata['bands'] = dataset.RasterCount
metadata['driver'] = dataset.GetDriver().LongName
metadata['projection'] = dataset.GetProjection()
metadata['geotransform'] = dataset.GetGeoTransform()
mapinfo = dataset.GetGeoTransform()
metadata['pixelWidth'] = mapinfo[1]
metadata['pixelHeight'] = mapinfo[5]
metadata['ext_dict'] = {}
metadata['ext_dict']['xMin'] = mapinfo[0]
metadata['ext_dict']['xMax'] = mapinfo[0] + dataset.RasterXSize/mapinfo[1]
metadata['ext_dict']['yMin'] = mapinfo[3] + dataset.RasterYSize/mapinfo[5]
metadata['ext_dict']['yMax'] = mapinfo[3]
metadata['extent'] = (metadata['ext_dict']['xMin'],metadata['ext_dict']['xMax'],
metadata['ext_dict']['yMin'],metadata['ext_dict']['yMax'])
if metadata['bands'] == 1:
raster = dataset.GetRasterBand(1)
metadata['noDataValue'] = raster.GetNoDataValue()
metadata['scaleFactor'] = raster.GetScale()
# band statistics
metadata['bandstats'] = {} #make a nested dictionary to store band stats in same
stats = raster.GetStatistics(True,True)
metadata['bandstats']['min'] = round(stats[0],2)
metadata['bandstats']['max'] = round(stats[1],2)
metadata['bandstats']['mean'] = round(stats[2],2)
metadata['bandstats']['stdev'] = round(stats[3],2)
array = dataset.GetRasterBand(1).ReadAsArray(0,0,metadata['array_cols'],metadata['array_rows']).astype(np.float)
array[array==int(metadata['noDataValue'])]=np.nan
array = array/metadata['scaleFactor']
return array, metadata
elif metadata['bands'] > 1:
print('More than one band ... need to modify function for case of multiple bands')
print('Start Uncertainty Script')
dsm1_filename = '/Users/olearyd/Git/data/2016_PRIN_1_607000_3696000_DSM.tif'
dsm2_filename = '/Users/olearyd/Git/data/2016_PRIN_2_607000_3696000_DSM.tif'
dtm1_filename = '/Users/olearyd/Git/data/2016_PRIN_1_607000_3696000_DTM.tif'
dtm2_filename = '/Users/olearyd/Git/data/2016_PRIN_2_607000_3696000_DTM.tif'
chm_filename = '/Users/olearyd/Git/data/2016_PRIN_1_607000_3696000_pit_free_CHM.tif'
dsm1_array, dsm1_array_metadata = raster2array(dsm1_filename)
dsm2_array, dsm2_array_metadata = raster2array(dsm2_filename)
dtm1_array, dtm1_array_metadata = raster2array(dtm1_filename)
dtm2_array, dtm2_array_metadata = raster2array(dtm2_filename)
chm_array, chm_array_metadata = raster2array(chm_filename)
diff_dsm_array = np.subtract(dsm1_array,dsm2_array)
diff_dtm_array = np.subtract(dtm1_array,dtm2_array)
diff_dsm_array_mean = np.mean(diff_dsm_array)
diff_dsm_array_std = np.std(diff_dsm_array)
print('Mean difference in DSMs: ',round(diff_dsm_array_mean,3),' (m)')
print('Standard deviations of difference in DSMs: ',round(diff_dsm_array_std,3),' (m)')
plt.figure(1)
plt.hist(diff_dsm_array.flatten(),100)
plt.title('Histogram of PRIN DSM')
plt.xlabel('Height Difference(m)'); plt.ylabel('Frequency')
plt.show()
plt.figure(1)
plt.hist(diff_dsm_array.flatten(),100,range=[diff_dsm_array_mean-2*diff_dsm_array_std, diff_dsm_array_mean+2*diff_dsm_array_std])
plt.title('Histogram of PRIN DSM')
plt.xlabel('Height Difference(m)'); plt.ylabel('Frequency')
plt.show()
plt.figure(3)
plot_band_array(diff_dsm_array,dsm1_array_metadata['extent'],'DSM Difference','Difference (m)','bwr',[diff_dsm_array_mean-2*diff_dsm_array_std, diff_dsm_array_mean+2*diff_dsm_array_std])
plt.show()
diff_dtm_array_mean = np.nanmean(diff_dtm_array)
diff_dtm_array_std = np.nanstd(diff_dtm_array)
print('Mean difference in DTMs: ',round(diff_dtm_array_mean,3),' (m)')
print('Standard deviations of difference in DTMs: ',round(diff_dtm_array_std,3),' (m)')
plt.figure(4)
plt.hist(diff_dtm_array.flatten()[~np.isnan(diff_dtm_array.flatten())],100,range=[diff_dtm_array_mean-2*diff_dtm_array_std, diff_dtm_array_mean+2*diff_dtm_array_std])
plt.title('Histogram of PRIN DTM')
plt.xlabel('Height Difference(m)'); plt.ylabel('Frequency')
plt.show()
plt.figure(5)
plot_band_array(diff_dtm_array,dtm1_array_metadata['extent'],'DTM Difference','Difference (m)','bwr',[diff_dtm_array_mean-2*diff_dtm_array_std, diff_dtm_array_mean+2*diff_dtm_array_std])
plt.show()
chm_array_mean = np.nanmean(chm_array)
chm_array_std = np.nanstd(chm_array)
plt.figure(6)
plot_band_array(chm_array,dtm1_array_metadata['extent'],'Canopy height Model','Canopy height (m)','Greens',[0, chm_array_mean])
plt.savefig('PRIN_CHM.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
plt.show()
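# Repeat the DSM difference statistics, restricted to vegetated pixels (CHM != 0)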
diff_dsm_array_veg_mean = np.nanmean(diff_dsm_array[chm_array!=0.0])
diff_dsm_array_veg_std = np.nanstd(diff_dsm_array[chm_array!=0.0])
plt.figure(7)
print('Mean difference in DSMs on veg points: ',round(diff_dsm_array_veg_mean,3),' (m)')
print('Standard deviations of difference in DSMs on veg points: ',round(diff_dsm_array_veg_std,3),' (m)')
plt.figure(8)
diff_dsm_array_nodata_removed = diff_dsm_array[~np.isnan(diff_dsm_array)]
chm_dsm_nodata_removed = chm_array[~np.isnan(diff_dsm_array)]
plt.hist(diff_dsm_array_nodata_removed[chm_dsm_nodata_removed!=0.0],100,range=[diff_dsm_array_veg_mean-2*diff_dsm_array_veg_std, diff_dsm_array_veg_mean+2*diff_dsm_array_veg_std])
plt.title('Histogram of PRIN DSM (veg)')
plt.xlabel('Height Difference(m)'); plt.ylabel('Frequency')
plt.show()
diff_dtm_array_veg_mean = np.nanmean(diff_dtm_array[chm_array!=0.0])
diff_dtm_array_veg_std = np.nanstd(diff_dtm_array[chm_array!=0.0])
plt.figure(9)
print('Mean difference in DTMs on veg points: ',round(diff_dtm_array_veg_mean,3),' (m)')
print('Standard deviations of difference in DTMs on veg points: ',round(diff_dtm_array_veg_std,3),' (m)')
plt.figure(10)
diff_dtm_array_nodata_removed = diff_dtm_array[~np.isnan(diff_dtm_array)]
chm_dtm_nodata_removed = chm_array[~np.isnan(diff_dtm_array)]
plt.hist((diff_dtm_array_nodata_removed[chm_dtm_nodata_removed!=0.0]),100,range=[diff_dtm_array_veg_mean-2*diff_dtm_array_veg_std, diff_dtm_array_veg_mean+2*diff_dtm_array_veg_std])
plt.title('Histogram of PRIN DTM (veg)')
plt.xlabel('Height Difference(m)'); plt.ylabel('Frequency')
plt.show()
diff_dtm_array_ground_mean = np.nanmean(diff_dtm_array[chm_array==0.0])
diff_dtm_array_ground_std = np.nanstd(diff_dtm_array[chm_array==0.0])
print('Mean difference in DTMs on ground points: ',round(diff_dtm_array_ground_mean,3),' (m)')
print('Standard deviations of difference in DTMs on ground points: ',round(diff_dtm_array_ground_std,3),' (m)')
plt.figure(11)
plt.hist((diff_dtm_array_nodata_removed[chm_dtm_nodata_removed==0.0]),100,range=[diff_dtm_array_ground_mean-2*diff_dtm_array_ground_std, diff_dtm_array_ground_mean+2*diff_dtm_array_ground_std])
plt.title('Histogram of PRIN DTM (ground)')
plt.xlabel('Height Difference(m)'); plt.ylabel('Frequency')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define functions
Step2: This next piece of code just helps identify where the script portion of our code starts. It is not essential to the code but can be useful when running scripts.
Step3: To start, we can define all of the input files. This will include two Digital Surface Model (DSM) tifs from the two flight days, two Digital Terrain Model (DTM) tifs from the same days, and a single Canopy Height Model (CHM). In this case, all input GeoTiff rasters are a single tile of the site that measures 1000 m by 1000 m.
Step4: Open all of the tif data in the previous files using the raster2array function written for NEON data.
Step5: Since we want to know what the changed between the two days, we will create an array with any of the pixel differneces across the two arrays. To do this let's subtract the two DSMs.
Step6: Let's get some summary statistics for this DSM differences array.
Step7: As a result we get the following
Step8: The histogram has long tails, obscuring the distribution near the center. To constrain the x-limits of the histogram we will use the mean and standard deviation just calculated. Since the data appears to be normally distributed, we can constrain the histogram to 95% of the data by including 2 standard deviations above and below the mean.
Step9: The histogram shows a wide variation in DSM differences, with those at the 95% limit at around +/- 1.5 m. Let's take a look at the spatial distribution of the errors by plotting a map of the difference between the two DSMs. Here we'll also use the extra variable in the plot function to constrain the limits of the colorbar to 95% of the observations.
Step10: It seems that there is a spatial pattern in the distribution of errors. Now let's take a look at the histogram and map for the difference in DTMs.
Step11: The overall magnitude of the differences is smaller than in the DSM, but the same spatial pattern of the error is evident.
Step12: From the CHM, it appears the spatial distribution of error patterns follow the location of vegetation.
Step13: The results show a similar mean difference of near zero, but an extremely high standard deviation of 1.381 m! Since the DSM represents the top of the tree canopy, this provides the level of uncertainty we can expect in the canopy height in forests characteristic of the PRIN site using NEON LiDAR data.
Step14: Here we can see that the mean difference is almost zero at 0.023 m, and the variation in less than the DSM at 0.163 m.
|
11,435
|
<ASSISTANT_TASK:>
Python Code:
report_file = '/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drbef/encdec_noing10_200_512_04drbef.json'
log_file = '/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drbef/encdec_noing10_200_512_04drbef_logs.json'
import json
import matplotlib.pyplot as plt
with open(report_file) as f:
report = json.loads(f.read())
with open(log_file) as f:
logs = json.loads(f.read())
print 'Encoder: \n\n', report['architecture']['encoder']
print 'Decoder: \n\n', report['architecture']['decoder']
print('Train Perplexity: ', report['train_perplexity'])
print('Valid Perplexity: ', report['valid_perplexity'])
print('Test Perplexity: ', report['test_perplexity'])
%matplotlib inline
for k in logs.keys():
plt.plot(logs[k][0], logs[k][1], label=str(k) + ' (train)')
plt.plot(logs[k][0], logs[k][2], label=str(k) + ' (valid)')
plt.title('Loss v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
%matplotlib inline
for k in logs.keys():
plt.plot(logs[k][0], logs[k][3], label=str(k) + ' (train)')
plt.plot(logs[k][0], logs[k][4], label=str(k) + ' (valid)')
plt.title('Perplexity v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Perplexity')
plt.legend()
plt.show()
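# print_sample: show the encoder input, the generated sequence, the gold reference and,
# if provided, the closest BLEU match with its score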
def print_sample(sample, best_bleu=None):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
print('Input: '+ enc_input + '\n')
print('Gend: ' + sample['generated'] + '\n')
print('True: ' + gold + '\n')
if best_bleu is not None:
cbm = ' '.join([w for w in best_bleu['best_match'].split(' ') if w != '<mask>'])
print('Closest BLEU Match: ' + cbm + '\n')
print('Closest BLEU Score: ' + str(best_bleu['best_score']) + '\n')
print('\n')
for i, sample in enumerate(report['train_samples']):
print_sample(sample, report['best_bleu_matches_train'][i] if 'best_bleu_matches_train' in report else None)
for i, sample in enumerate(report['valid_samples']):
print_sample(sample, report['best_bleu_matches_valid'][i] if 'best_bleu_matches_valid' in report else None)
for i, sample in enumerate(report['test_samples']):
print_sample(sample, report['best_bleu_matches_test'][i] if 'best_bleu_matches_test' in report else None)
def print_bleu(blue_struct):
print 'Overall Score: ', blue_struct['score'], '\n'
print '1-gram Score: ', blue_struct['components']['1']
print '2-gram Score: ', blue_struct['components']['2']
print '3-gram Score: ', blue_struct['components']['3']
print '4-gram Score: ', blue_struct['components']['4']
# Training Set BLEU Scores
print_bleu(report['train_bleu'])
# Validation Set BLEU Scores
print_bleu(report['valid_bleu'])
# Test Set BLEU Scores
print_bleu(report['test_bleu'])
# All Data BLEU Scores
print_bleu(report['combined_bleu'])
# Training Set BLEU n-pairs Scores
print_bleu(report['n_pairs_bleu_train'])
# Validation Set n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_valid'])
# Test Set n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_test'])
# Combined n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_all'])
# Ground Truth n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_gold'])
print 'Average (Train) Generated Score: ', report['average_alignment_train']
print 'Average (Valid) Generated Score: ', report['average_alignment_valid']
print 'Average (Test) Generated Score: ', report['average_alignment_test']
print 'Average (All) Generated Score: ', report['average_alignment_all']
print 'Average Gold Score: ', report['average_alignment_gold']
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Perplexity on Each Dataset
Step2: Loss vs. Epoch
Step3: Perplexity vs. Epoch
Step4: Generations
Step5: BLEU Analysis
Step6: N-pairs BLEU Analysis
Step7: Alignment Analysis
|
11,436
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import nibabel as nb
from scipy import ndimage
from matplotlib.colors import LinearSegmentedColormap
from m2g.utils.qa_utils import pad_im
from m2g.stats.qa_fast import qa_fast_png
def qa_fast_png(csf, gm, wm, outdir):
"""
FAST (FMRIB's Automated Segmentation Tool)
segments a 3D image of the brain into different tissue types (Grey Matter, White Matter, CSF, etc.)
Mark different colors of white matter, gray matter, cerebrospinal fluid in a '3 by 3' picture, i.e. QA for FAST
Parameters
---------------
csf: str
the path of csf nifti image
gm: str
the path of gm nifti image
wm: str
the path of wm nifti image
outdir: str
the path to save QA graph
"""
# load data
gm_data = nb.load(gm).get_data()
csf_data = nb.load(csf).get_data()
wm_data = nb.load(wm).get_data()
# set Color map
cmap1 = LinearSegmentedColormap.from_list('mycmap1', ['white', 'blue'])
cmap2 = LinearSegmentedColormap.from_list('mycmap2', ['white', 'magenta'])
cmap3 = LinearSegmentedColormap.from_list('mycmap2', ['white', 'green'])
overlay = plt.figure()
overlay.set_size_inches(12.5, 10.5, forward=True)
plt.title(f'Qa for FAST(segments a 3D image of the brain into different tissue types)\n (scan volume:{gm_data.shape}) \n', fontsize=22)
plt.xticks([])
plt.yticks([])
plt.axis('off')
# Determine whether the input data types are consistent. If they are inconsistent, an error is reported.
if gm_data.shape != csf_data.shape:
raise ValueError("GM and CSF are not the same shape.")
elif gm_data.shape != wm_data.shape:
raise ValueError("GM and WM are not the same shape.")
elif wm_data.shape != csf_data.shape:
raise ValueError("WM and CSF are not the same shape.")
# Set the 3D matrix cutting position in three directions
shape = csf_data.shape
index = [0.35, 0.51, 0.65]
x = [int(shape[0] * index[0]), int(shape[0] * index[1]), int(shape[0] * index[2])]
y = [int(shape[1] * index[0]), int(shape[1] * index[1]), int(shape[1] * index[2])]
z = [int(shape[2] * index[0]), int(shape[2] * index[1]), int(shape[2] * index[2])]
coords = (x, y, z)
# Set labels for the y-axis
labs = [
"Sagittal Slice",
"Coronal Slice",
"Axial Slice",
]
var = ["X", "Y", "Z"]
# Generate 3 by 3 picture
idx = 0
for i, coord in enumerate(coords):
for pos in coord:
idx += 1
ax = overlay.add_subplot(3, 3, idx)
ax.set_title(var[i] + " = " + str(pos))
if i == 0:
csf_slice = ndimage.rotate(csf_data[pos, :, :], 90)
gm_slice = ndimage.rotate(gm_data[pos, :, :], 90)
wm_slice = ndimage.rotate(wm_data[pos, :, :], 90)
elif i == 1:
csf_slice = ndimage.rotate(csf_data[:, pos, :], 90)
gm_slice = ndimage.rotate(gm_data[:, pos, :], 90)
wm_slice = ndimage.rotate(wm_data[:, pos, :], 90)
else:
csf_slice = ndimage.rotate(csf_data[:, :, pos], 90)
gm_slice = ndimage.rotate(gm_data[:, :, pos], 90)
wm_slice = ndimage.rotate(wm_data[:, :, pos], 90)
# set y labels
if idx % 3 == 1:
plt.ylabel(labs[i])
# padding pictures to make them the same size
csf_slice = (csf_slice*255).astype(np.uint8)
gm_slice = (gm_slice*255).astype(np.uint8)
wm_slice = (wm_slice*255).astype(np.uint8)
csf_slice = pad_im(csf_slice, max(shape), 0, False)
gm_slice = pad_im(gm_slice, max(shape), 0, False)
wm_slice = pad_im(wm_slice, max(shape), 0, False)
# hide axes
ax.set_xticks([])
ax.set_yticks([])
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
# display image
ax.imshow(csf_slice, interpolation="none", cmap=cmap1, alpha=1)
ax.imshow(gm_slice, interpolation="none", cmap=cmap2, alpha=0.5)
ax.imshow(wm_slice, interpolation="none", cmap=cmap3, alpha=0.3)
# Legend of white matter(WM), gray matter(GM) and cerebrospinal fluid(csf)
if idx == 3:
plt.plot(0, 0, "-", c='green', label='wm')
plt.plot(0, 0, "-", c='pink', label='gm')
plt.plot(0, 0, "-", c='blue', label='csf')
plt.legend(loc='upper right',fontsize=15,bbox_to_anchor=(1.5,1.2))
# save figure
overlay.savefig(f"{outdir}", format="png")
# change this path to your own m2g output_dir when you want to run in your own computer
output_dir = 'd:/Downloads/neurodatadesign/output_data/flirt/sub-0025864/ses-1/'
# The following input path will be generated automatically
csf_input = f"{output_dir}anat/preproc/t1w_seg_pve_0.nii.gz"
gm_input = f"{output_dir}anat/preproc/t1w_seg_pve_1.nii.gz"
wm_input = f"{output_dir}anat/preproc/t1w_seg_pve_2.nii.gz"
# The path where the picture is saved, the name is qa_fast.png
save_dir = f"{output_dir}qa/reg/qa_fast.png"
# Generates quality analysis pictures of white matter, gray matter and cerebrospinal fluid
qa_fast_png(csf_input, gm_input, wm_input, save_dir)
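# Optional preview (added for illustration, not part of the original pipeline):
# display the QA figure that was just written to save_dir inside the notebook.
from IPython.display import Image, display
display(Image(filename=save_dir))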
ax.imshow(csf_slice, interpolation="none", cmap=cmap1, alpha=1)
ax.imshow(gm_slice, interpolation="none", cmap=cmap2, alpha=0.5)
ax.imshow(wm_slice, interpolation="none", cmap=cmap3, alpha=0.3)
cmap1 = LinearSegmentedColormap.from_list('mycmap1', ['white', 'blue'])
cmap2 = LinearSegmentedColormap.from_list('mycmap2', ['white', 'magenta'])
cmap3 = LinearSegmentedColormap.from_list('mycmap2', ['white', 'green'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: qa_fast_png
Step3: Set the input data path
Step4: Call function to generate quality analysis picture
Step5: Potential Issues
Step6: the last parameter can change color
|
11,437
|
<ASSISTANT_TASK:>
Python Code:
print "Hello World"
#this is a comment
print 'this is code' #this is also a comment
a = 2
a = 2
b = 3
print a + b
print type(12)
print type(12.1)
print type(True)
print type('blueberries')
print type(blueberries)
blueberries = 5
print type(blueberries)
print 2 + 2
print 'First ' + 'Last'
numBerries = 5
print 'Number of Blueberries: ' + numBerries
numBerries = 5
print 'Number of Blueberries: ' + str(numBerries)
print 'Number of Blueberries: ', numBerries
numbers = [1, 2, 3, 4, 5]
fruits = ['apples', 'oranges', 'bananas']
numbers = [1, 2, 3, 4, 5]
print numbers[0]
fruits = ['apples', 'oranges', 'bananas']
print fruits[1]
fruitsAndNumbers = ['apples', 2, 'bananas']
print type(fruitsAndNumbers)
print type(fruitsAndNumbers[0])
print type(fruitsAndNumbers[1])
numbers = [1, 2, 3, 4, 5]
newNumbers = numbers[0:3] # [index of first item:index after last item]
print newNumbers
numbers = [1, 2, 3, 4, 5]
print numbers[-1]
numbers = [1, 2, 3, 4, 5]
numbers.append(6)
print numbers
numbers = []
numbers.append(1)
numbers.append(2)
print numbers
dict = {'a': 1, 'b': 2, 'c': 3}
dict = {'a': 1, 'b': 2, 'c': 3}
print dict['a']
dict = {'a': 1, 'b': 2, 'c': 3}
dict['d'] = 4
print dict['d']
dict = {}
dict['a'] = 1
dict['b'] = 2
print dict
dict = {'a': 1, 'b': 2, 'c': 3}
print dict.keys()
dict = {}
dict['numbers'] = [1, 2, 3, 4, 5]
dict['fruits'] = ['apples', 'oranges', 'bananas']
dict['numbers'].append(6)
dict['fruits'].append({'berries':['strawberries', 'blueberries']})
#compound request, should print 'blueberries'
print dict['fruits'][-1]['berries'][1]
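#the same compound request broken into steps (added for clarity; the intermediate variable names are just illustrative)
berriesEntry = dict['fruits'][-1] #the last item in the 'fruits' list (a dictionary)
berriesList = berriesEntry['berries'] #the list stored under the 'berries' key
print berriesList[1] #should also print 'blueberries'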
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This is a simple one-line piece of code that prints a statement. If you run the cell above, you will see "Hello World" displayed directly below the cell.
Step2: If you run this code through the interpreter, you can see that it prints out 'this is code' because it executes the line print 'this is code'. Meanwhile it ignores both comments occurring after the '#' symbol. Although every language specifies them differently, comments are an important part of every programming language, as they allow the developer to add extra information and description to their code which is not strictly related to its execution.
Step3: Here, 'a' is the name of my variable, and the number '2' is the data I am assigning it. From here on out, 'a' will be associated with the number '2', until it is assigned another value, or the program ends. Try this code
Step4: This should print out the number 5, since 'a' is storing the number '2', and 'b' is storing the number '3'. You can use many other common arithmetic operators in the same way. Some of the most common are
Step5: You can see that it prints the four types described above. Notice also the particular way in which the data must be written so that Python does not confuse it with the name of a variable. Numbers can be written directly because you cannot name variables with only a number. Booleans must be written capitalized (True or False) as these are reserved key words in Python (notice the syntax coloring). Strings are always contained within quotes. You can use either single (') or double (") quotes, but they must match on either side of the string. If you try to write
Step6: Without the quotes, you will get the following error
Step7: it will tell you it's an 'int' because 'blueberries' is now a variable with an int stored inside of it.
Step8: we get '4'. When given two numbers, the '+' operator performs arithmetic addition. However, if we type
Step9: we get 'First Last'. When given two strings, the '+' operator 'concatenates' or merges them together into one string. Over-loading is useful because it produces clean and readable code without having a special function for each type of variable. You have to be careful, however, because mismatching different types of variables can lead to errors. For instance, this line
Step10: will produce an error because it is trying to perform a concatenation of a string and an integer. Instead, you can use the str() function to convert the 5 to a string before using it with the '+' operator.
Step11: In this particular case, to make the printing of various types of data simpler, the 'print' function allows you to simply list all of the variables separated by a comma
Step12: Multi-part variables
Step13: To retrieve an object from such a list, you once again use square brackets, but this time appended to the end of the variable name. Inside the brackets you place the index or place of the piece of data you want. For instance
Step14: Notice that like in all languages, counting begins with '0', so if you want the first item in a list you use [0], the second item [1], and so on. Unlike many other languages, Python will allow you to mix different types of data within a single list, so something like this is perfectly legal
Step15: You can also use a ':' within the square brackets to retrieve a range of items from the list at once. For instance
Step16: You can even index backwards using negative indices. For instance, this will print out the last item in the list
Step17: Various functions exist to help you work with lists. The most common is .append(), which adds a value to the end of a list
Step18: You can even start with an empty list, and fill it gradually with appends
Step19: For other list functions you can refer to the Python documentation.
Step20: In this dictionary, the integers 1, 2, and 3 are tied to their unique keys, 'a', 'b', and 'c'. Note that keys must be strings, while values can be any data type. To retrieve a piece of data from this dictionary, you can again use the square bracket notation, this time passing in a key instead of an index
Step21: To add entries to a dictionary, you just have to specify the data that relates to a particular key using the '=' operator and the same square bracket syntax
Step22: As with lists, you can start with an empty dictionary and build it up over time
Step23: There are also many useful functions for working with dictionaries, including the .keys() function which returns a list of all of the dictionary's keys
Step24: For other useful functions you can refer to the proper place in the documentation.
|
11,438
|
<ASSISTANT_TASK:>
Python Code:
!pip install -q sciunit
import sciunit
from sciunit.models.examples import ConstModel # One of many dummy models included for illustration.
const_model_37 = ConstModel(37, name="Constant Model 37")
from sciunit.capabilities import ProducesNumber
from sciunit.scores import ZScore # One of many SciUnit score types.
from sciunit.errors import ObservationError # An exception class raised when a test is instantiated
# with an invalid observation.
class MeanTest(sciunit.Test):
    """Tests if the model predicts
    the same number as the observation."""
required_capabilities = (ProducesNumber,) # The one capability required for a model to take this test.
    score_type = ZScore # This test's 'judge' method will return a ZScore.
def validate_observation(self, observation):
if type(observation) is not dict:
raise ObservationError("Observation must be a python dictionary")
if 'mean' not in observation:
raise ObservationError("Observation must contain a 'mean' entry")
def generate_prediction(self, model):
return model.produce_number() # The model has this method if it inherits from the 'ProducesNumber' capability.
def compute_score(self, observation, prediction):
score = ZScore.compute(observation,prediction) # Compute and return a ZScore object.
score.description = ("A z-score corresponding to the normalized location of the observation "
"relative to the predicted distribution.")
return score
observation = {'mean':37.8, 'std':2.1}
mean_37_test = MeanTest(observation, name='=37')
score = mean_37_test.judge(const_model_37)
score.summarize()
score.describe()
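# Illustrative extra (not from the original chapter): instantiating the test with a
# malformed observation triggers validate_observation, which raises the
# ObservationError imported above.
try:
    MeanTest({'median': 37.8}, name='bad observation')
except ObservationError as e:
    print(e)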
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this chapter we will use the same toy model in Chapter 1 but write a more interesting test with additional features included in SciUnit.
Step3: Now let's write a test that validates the observation and returns more informative score type.
Step4: We've done two new things here
Step5: Instead of returning a BooleanScore, encoding a True/False value, we return a ZScore encoding a more quantitative summary of the relationship between the observation and the prediction. When we execute the test
Step6: Then we get a more quantitative summary of the results
|
11,439
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bcc', 'sandbox-2', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
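# Example of a completed entry (illustrative wording only, not this model's actual value):
# DOC.set_value("Ocean component of the coupled model: a primitive-equation, global OGCM.")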
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
|
11,440
|
<ASSISTANT_TASK:>
Python Code:
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
n = 9
print(fib(n))
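# Illustrative alternative (not in the original snippet): an iterative version
# that avoids the exponential recursion of fib() above.
def fib_iter(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
print(fib_iter(n))  # same value as fib(9), i.e. 34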
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
11,441
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
import sys
import datetime
import warnings
import numpy as np
import matplotlib.pyplot as plt
import pandas
import seaborn
seaborn.set(style='ticks', context='paper')
import wqio
import pybmpdb
import pynsqd
import pycvc
min_precip = 1.9999
palette = seaborn.color_palette('deep', n_colors=6)
pybmpdb.setMPLStyle()
POCs = [p['cvcname'] for p in filter(lambda p: p['include'], pycvc.info.POC_dicts)]
if wqio.testing.checkdep_tex() is None:
warnings.warn("LaTeX not found on system path. You will not be able to compile ISRs to PDF files", UserWarning)
def LV1_runoff(row):
return max(0, -12.0 + 2.87 * row['total_precip_depth'] + 0.863 * row['duration_hours'])
def ED1_runoff(row):
return 10**(1.58 + 0.000667 * row['peak_precip_intensity'] + 0.0169 * row['total_precip_depth'] )
def ED1_bypass(row):
return max(0, -26.4 + 0.184 * row['peak_precip_intensity'] + 1.22 * row['total_precip_depth'])
def ED1_inflow(row):
return max(0, ED1_runoff(row) - ED1_bypass(row))
def LV2_runoff(row):
return 10**(1.22 + 0.00622 * row['peak_precip_intensity'] + 0.0244 * row['total_precip_depth'] )
def LV2_bypass(row):
return 0
def LV2_inflow(row):
return max(0, LV2_runoff(row) - LV2_bypass(row))
def LV4_runoff(row):
return 10**(1.35 + 0.00650 * row['peak_precip_intensity'] + 0.00940 * row['total_precip_depth'] )
def LV4_bypass(row):
return max(0, 7.36 + 0.0370 * row['peak_precip_intensity'] + 0.112 * row['total_precip_depth'])
def LV4_inflow(row):
return max(0, LV4_runoff(row) - LV4_bypass(row))
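# Note (added for clarity): the coefficients above are site-specific regression fits.
# Each function expects a row/dict with 'total_precip_depth' and 'peak_precip_intensity'
# ('duration_hours' for LV-1), matching the storm_info columns used below.
# Quick sanity check with a made-up storm -- the numbers are illustrative only:
example_storm = {'total_precip_depth': 10.0, 'peak_precip_intensity': 20.0, 'duration_hours': 4.0}
print('ED-1 runoff={:.1f}, bypass={:.1f}, inflow={:.1f}'.format(
    ED1_runoff(example_storm), ED1_bypass(example_storm), ED1_inflow(example_storm)))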
bmpdb = pycvc.external.bmpdb(palette[3], 'D')
nsqdata = pycvc.external.nsqd(palette[2], 'd')
cvcdbfile = "C:/users/phobson/Desktop/cvc.accdb"
cvcdb = pycvc.Database(cvcdbfile, nsqdata, bmpdb, testing=False)
LV1 = pycvc.Site(db=cvcdb, siteid='LV-1', raingauge='LV-1', tocentry='Lakeview Control',
isreference=True, influentmedians=pycvc.wqstd_template(),
runoff_fxn=LV1_runoff, minprecip=min_precip,
color=palette[1], marker='s')
LV_Influent = (
LV1.wqdata
.query("sampletype == 'composite'")
.groupby(by=['season', 'parameter', 'units'])['concentration']
.median()
.reset_index()
.rename(columns={'concentration': 'influent median'})
)
LV_Influent.head()
ED_Influent = (
cvcdb.nsqdata
.medians.copy()
.rename(columns={'NSQD Medians': 'influent median'})
)
ED_Influent.head()
ED1 = pycvc.Site(db=cvcdb, siteid='ED-1', raingauge='ED-1',
tocentry='Elm Drive', influentmedians=ED_Influent,
minprecip=min_precip, isreference=False,
runoff_fxn=ED1_runoff, bypass_fxn=ED1_bypass,
inflow_fxn=ED1_inflow, color=palette[0], marker='o')
LV2 = pycvc.Site(db=cvcdb, siteid='LV-2', raingauge='LV-1',
tocentry='Lakeview Grass Swale', influentmedians=LV_Influent,
minprecip=min_precip, isreference=False,
runoff_fxn=LV2_runoff, bypass_fxn=LV2_bypass,
inflow_fxn=LV2_inflow, color=palette[4], marker='^')
LV4 = pycvc.Site(db=cvcdb, siteid='LV-4', raingauge='LV-1',
tocentry=r'Lakeview Bioswale 1$^{\mathrm{st}}$ South Side',
influentmedians=LV_Influent,
minprecip=min_precip, isreference=False,
runoff_fxn=LV4_runoff, bypass_fxn=LV4_bypass,
inflow_fxn=LV4_inflow, color=palette[5], marker='v')
ED1.hydrodata.data.loc['2012-08-10 23:50:00':'2012-08-11 05:20', 'storm'] = 0
ED1.hydrodata.data.loc['2012-08-11 05:30':, 'storm'] += 1
storm_date = datetime.date(2013, 7, 8)
for site in [ED1, LV1, LV2, LV4]:
bigstorm = site.storm_info.loc[site.storm_info.start_date.dt.date == storm_date].index[0]
inflow = site.drainagearea.simple_method(site.storm_info.loc[bigstorm, 'total_precip_depth'])
site.storm_info.loc[bigstorm, 'inflow_m3'] = inflow
site.storm_info.loc[bigstorm, 'runoff_m3'] = np.nan
site.storm_info.loc[bigstorm, 'bypass_m3'] = np.nan
with pandas.ExcelWriter("output/xlsx/CVCHydro_StormStats.xlsx") as hydrofile,\
pandas.ExcelWriter("output/xlsx/CVCHydro_StormInfo.xlsx") as stormfile:
for site in [ED1, LV1, LV2, LV4]:
site.storm_info.to_excel(stormfile, sheet_name=site.siteid)
site.storm_stats().to_excel(hydrofile, sheet_name=site.siteid)
for site in [ED1, LV2, LV4]:
for by in ['year', 'outflow', 'season']:
try:
site.hydro_pairplot(by=by)
except:
print('failed on {}, {}'.format(site, by))
with pandas.ExcelWriter('output/xlsx/CVCWQ_DataInventory.xlsx') as prev_tables:
for site in [ED1, LV1, LV2, LV4]:
stype = 'composite'
site.prevalence_table()[stype].to_excel(prev_tables, sheet_name='{}'.format(site.siteid))
with pandas.ExcelWriter('output/xlsx/CVCWQ_ConcStats.xlsx') as concfile:
for site in [ED1, LV1, LV2, LV4]:
concs = site.wq_summary('concentration', sampletype='composite').T
concs.to_excel(concfile, sheet_name=site.siteid, na_rep='--')
with pandas.ExcelWriter('output/xlsx/CVCWQ_LoadStats.xlsx') as loadstats:
for site in [ED1, LV1, LV2, LV4]:
load = (
site.wq_summary('load_outflow', sampletype='composite')
.stack(level='parameter')
.stack(level='load_units')
)
load.to_excel(loadstats, sheet_name=site.siteid, na_rep='--')
with pandas.ExcelWriter('output/xlsx/CVCWQ_TidyData.xlsx') as tidyfile:
for site in [ED1, LV1, LV2, LV4]:
site.tidy_data.to_excel(tidyfile, sheet_name=site.siteid, na_rep='--')
with pandas.ExcelWriter('output/xlsx/CVCWQ_LoadTotals.xlsx') as loadfile:
for site in [ED1, LV1, LV2, LV4]:
loads = site.load_totals(sampletype='composite')
loads.to_excel(loadfile, sheet_name=site.siteid, na_rep='--')
seaborn.set(style='ticks', context='paper')
pybmpdb.setMPLStyle()
for site in [ED1, LV1, LV2, LV4]:
print('\n----Compiling ISR for {0}----'.format(site.siteid))
site.allISRs('composite', version='draft')
for site in [ED1, LV1, LV2, LV4]:
print('\n----Summarizing {0}----'.format(site.siteid))
site.hydro_jointplot(
xcol='total_precip_depth',
ycol='outflow_mm',
conditions="outflow_mm > 0",
one2one=True
)
site.hydro_jointplot(
xcol='antecedent_days',
ycol='outflow_mm',
conditions="outflow_mm > 0",
one2one=False
)
site.hydro_jointplot(
xcol='total_precip_depth',
ycol='antecedent_days',
conditions="outflow_mm == 0",
one2one=False
)
site.hydro_jointplot(
xcol='peak_precip_intensity',
ycol='peak_outflow',
conditions=None,
one2one=False
)
plt.close('all')
site_lists = [
[ED1],
[LV1, LV2, LV4],
]
for sl in site_lists:
print('\n----Comparing {}----'.format(', '.join([s.siteid for s in sl])))
for poc in POCs:
print(' ' + poc)
wqcomp = pycvc.summary.WQComparison(sl, 'composite', poc, nsqdata, bmpdb)
wqcomp.seasonalBoxplots(load=False, finalOutput=True)
wqcomp.seasonalBoxplots(load=True, finalOutput=True)
wqcomp.landuseBoxplots(finalOutput=True)
wqcomp.bmpCategoryBoxplots(finalOutput=True)
wqcomp.parameterStatPlot(finalOutput=True)
wqcomp.parameterStatPlot(load=True, finalOutput=True)
wqcomp.parameterTimeSeries(finalOutput=True)
wqcomp.parameterTimeSeries(load=True, finalOutput=True)
plt.close('all')
for sl in site_lists:
print('\n----Megafigs with {}----'.format(', '.join([s.siteid for s in sl])))
# construct the megafigures
mf1 = pycvc.summary.WQMegaFigure(sl, 'composite', POCs[:6], 1, nsqdata, bmpdb)
mf2 = pycvc.summary.WQMegaFigure(sl, 'composite', POCs[6:], 2, nsqdata, bmpdb)
for n, mf in enumerate([mf1, mf2]):
print('\tTime Series {0}'.format(n+1))
mf.timeseriesFigure(load=False)
mf.timeseriesFigure(load=True)
print('\tStat plots {0}'.format(n+1))
mf.statplotFigure(load=False)
mf.statplotFigure(load=True)
print('\tBMPDB Boxplots {0}'.format(n+1))
mf.bmpCategoryBoxplotFigure()
print('\tNSQD Boxplots {0}'.format(n+1))
mf.landuseBoxplotFigure()
print('\tSeasonal Boxplots {0}'.format(n+1))
mf.seasonalBoxplotFigure(load=False)
mf.seasonalBoxplotFigure(load=True)
plt.close('all')
cols = [
'duration_hours', 'total_precip_depth_mm',
'runoff_m3', 'bypass_m3', 'inflow_m3', 'outflow_m3',
'load_runoff', 'load_bypass', 'load_inflow', 'load_outflow',
]
with pandas.ExcelWriter("output/xlsx/CVCHydro_UnsampledLoadEstimates.xlsx") as unsampled_file:
for site in [ED1, LV1, LV2, LV4]:
loads = (
site._unsampled_load_estimates()
.groupby(['season', 'has_outflow', 'parameter', 'load_units'])
.sum()
.select(lambda c: c in cols, axis=1)
.reset_index()
)
loads.to_excel(unsampled_file, sheet_name=site.siteid)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hydrologic Relationships
Step2: ED-1
Step3: LV-2
Step4: LV-4
Step5: Water quality loading relationship
Step6: Load CVC Database
Step7: Define the site object for the reference site and compute its median values ("influent" to other sites)
Step8: Lakeview BMP sites get their "influent" data from LV-1
Step9: Elm Drive's "influent" data come from NSQD
Step10: Remaining site objects
Step11: Fix ED-1 storm that had two composite samples
Step12: Replace total inflow volume with estimate from simple method for 2013-07-08 storm
Step13: High-level summaries
Step14: Hydrologic Pairplots
Step15: Prevalence Tables
Step16: Concentrations Stats
Step17: Load Stats
Step18: Tidy Data
Step19: Total Loads Summary
Step20: Analysis
Step21: Individual Storm Reports
Step22: Precip-outflow scatter plots
Step23: WQ Comparison
Step24: Individual Figures
Step25: Megafigures
Step26: Unsampled loading estimates
|
11,442
|
<ASSISTANT_TASK:>
Python Code:
%%bash
head ../../data/raw/palm_reference_sequences.fasta
%%bash
source activate secapr_env
secapr find_target_contigs -h
import pandas as pd
table = pd.read_csv('../../data/processed/target_contigs/match_table.txt', delimiter = '\t',index_col=0)
table.head()
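# A hedged sketch (assuming loci are rows and samples are columns in the match
# table above -- check table.head() to confirm the orientation): count how many
# target loci were uniquely recovered for each sample. This per-sample "harvest"
# is what Step 3 of the description refers to when deciding whether to loosen or
# tighten --min-coverage / --min-identity.
recovered_per_sample = (table == 1).sum(axis=0)
recovered_per_sample.sort_values(ascending=False)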
%%bash
cat ../../data/processed/target_contigs/summary_stats.txt
%%bash
source activate secapr_env
secapr plot_sequence_yield -h
from IPython.display import Image, display
img1 = Image("../../data/processed/plots/contig_yield_overview.png",width=1000)
display(img1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Find and extract all target contigs
Step2: Note that in this step SECAPR will index all your locus names stored in the reference file, so in all downstream steps your loci will carry a numerical index. The translation table that records which index corresponds to which locus name is stored in a text file in the output folder (reference_fasta_header_info.txt).
Step3: Fields containing a '1' indicate that a unique match was extracted from the contig sequences for the respective exon and sample. If the output reveals a very low harvest of target sequences, you can try to reduce the values of the flags --min-coverage and --min-identity in order to be more generous in the matching step. If, on the other hand, your output turns out to capture a lot of non-homologous sequences between the different samples (which can be identified after the alignment step), you may want to turn up the values of these flags in order to be more conservative in your search.
Step4: Plotting with SECAPR
Step5: For now we only want to print the contig data, so we can execute the function like this
|
11,443
|
<ASSISTANT_TASK:>
Python Code:
import pysal as ps
import numpy as np
import networkx as nx
import shapefile as shp
import gurobipy as gbp
import cplex as cp
import datetime as dt
import time
from collections import OrderedDict
import IPython.display as IPd
%pylab inline
from mpl_toolkits.basemap import Basemap
ntw = ps.Network('Waverly/Waverly.shp')
print dt.datetime.now()
print 'PySAL.Network\n'
print dir(ntw)
# Roads and Nodes
g = nx.Graph()
# Graph of Roads and Nodes
g1 = nx.MultiGraph()
# Clients
GRAPH_client = nx.Graph()
# Snapped Clients
g_client = nx.Graph()
# Service
GRAPH_service = nx.Graph()
# Snapped Service
g_service = nx.Graph()
# Gurobi p-Maxian
GUROBI_maxian_g = nx.Graph()
# Cplex p-Maxian
CPLEX_maxian_g = nx.Graph()
shp_W = ps.open('Waverly/Waverly.shp')
shp_W.bbox
lat_client = np.random.uniform(shp_W.bbox[0], shp_W.bbox[2], 100)
lon_client = np.random.uniform(shp_W.bbox[1], shp_W.bbox[3], 100)
lat_service = np.random.uniform(shp_W.bbox[0], shp_W.bbox[2], 50)
lon_service = np.random.uniform(shp_W.bbox[1], shp_W.bbox[3], 50)
rand_coords_client = map(list, zip(lat_client, lon_client))
rand_coords_service = map(list, zip(lat_service, lon_service))
points_client = {}
points_service = {}
# CLIENT
for idx, coords in enumerate(rand_coords_client):
GRAPH_client.add_node(idx)
points_client[idx] = coords
GRAPH_client.node[idx] = coords
# SERVICE
for idx, coords in enumerate(rand_coords_service):
GRAPH_service.add_node(idx)
points_service[idx] = coords
GRAPH_service.node[idx] = coords
print dt.datetime.now()
#Instantiate Figure
figsize(10,10)
# Draw Graph of Actual Nodes and Roads
for e in ntw.edges:
g.add_edge(*e)
nx.draw(g, ntw.node_coords, node_size=5, alpha=0.25, edge_color='r', width=2)
# Draw only unique edges in graph
for e in ntw.graphedges:
g1.add_edge(*e)
# highlights cases where start and end node are the same
if e[0]==e[1]:
g1.add_node(e[0])
for node_id in g1.node:
g1.node[node_id] = ntw.node_coords[node_id]
nx.draw(g1, ntw.node_coords, node_size=10, alpha=0.5)
# Draw Graph of Random Client Points
nx.draw(GRAPH_client, points_client,
node_size=75, alpha=1, node_color='b')
# Draw Graph of Random Service Points
nx.draw(GRAPH_service, points_service,
node_size=100, alpha=1, node_color='c')
# Legend (Ordered Dictionary)
LEGEND = OrderedDict()
LEGEND['Network Nodes']=g
LEGEND['Roads']=g
LEGEND['Graph Vertices']=g1
LEGEND['Graph Edges']=g1
LEGEND['Client Nodes']=GRAPH_client
LEGEND['Service Nodes']=GRAPH_service
legend(LEGEND, loc='lower right', fancybox=True, framealpha=0.5)
# Title
title('Waverly Hills\n Tallahassee, Florida', family='Times New Roman',
size=40, color='k', backgroundcolor='w', weight='bold')
# North Arrow and 'N' --> Must be changed for different spatial resolutions, etc.
arrow(-84.281, 30.507, 0.0, 0.01, width=.0003, head_width=0.0012,
head_length=0.002, fc='k', ec='k',alpha=0.75,)
annotate('N', xy=(-84.2815, 30.52), fontstyle='italic', fontsize='xx-large',
fontweight='heavy', alpha=0.75)
# Client Weights
Ai = np.random.randint(5, 50, len(rand_coords_client))
Ai = Ai.reshape(len(Ai),1)
# Sum
AiSum = np.sum(Ai)
# Client
client = shp.Writer(shp.POINT)
# Add Random Points
for i,j in rand_coords_client:
client.point(i,j)
# Add Fields
client.field('client_ID')
client.field('Weight')
client.field('LAT')
client.field('LON')
counter = 0
for i in range(len(rand_coords_client)):
counter = counter + 1
client.record('client_' + str(counter), Ai[i], lat_client[i], lon_client[i])
# Save Shapefile
client.save('shapefiles/RandomPoints_CLIENT')
#Service
service = shp.Writer(shp.POINT)
# Add Random Points
for i,j in rand_coords_service:
service.point(i,j)
# Add Fields
service.field('y_ID')
service.field('x_ID')
service.field('LAT')
service.field('LON')
counter = 0
for i in range(len(rand_coords_service)):
counter = counter + 1
service.record('y' + str(counter), 'x' + str(counter), lat_service[i], lon_service[i])
# Save Shapefile
service.save('shapefiles/RandomPoints_SERVICE')
t1 = time.time()
ntw.snapobservations('shapefiles/RandomPoints_CLIENT.shp',
'Rand_Points_CLIENT', attribute=True)
ntw.snapobservations('shapefiles/RandomPoints_SERVICE.shp',
'Rand_Points_SERVICE', attribute=True)
print round(time.time()-t1, 4), 'seconds'
# Instantiate Figure
figsize(10,10)
# Draw Graph of Roads
for e in ntw.edges:
g.add_edge(*e)
nx.draw(g, ntw.node_coords, node_size=5, alpha=0.25, edge_color='r', width=2)
# Draw Graph of Snapped Client Nodes
g_client = nx.Graph()
for p,coords in ntw.pointpatterns['Rand_Points_CLIENT'].snapped_coordinates.iteritems():
g_client.add_node(p)
g_client.node[p] = coords
nx.draw(g_client, ntw.pointpatterns['Rand_Points_CLIENT'].snapped_coordinates,
node_size=100, alpha=1, node_color='b')
# Draw Graph of Snapped Service Nodes
g_service = nx.Graph()
for p,coords in ntw.pointpatterns['Rand_Points_SERVICE'].snapped_coordinates.iteritems():
g_service.add_node(p)
g_service.node[p] = coords
nx.draw(g_service, ntw.pointpatterns['Rand_Points_SERVICE'].snapped_coordinates,
node_size=100, alpha=1, node_color='c')
# Draw Graph of Random Client Points
nx.draw(GRAPH_client, points_client,
node_size=20, alpha=1, node_color='y')
# Draw Graph of Random Service Points
nx.draw(GRAPH_service, points_service,
node_size=20, alpha=1, node_color='w')
# Legend (Ordered Dictionary)
LEGEND = OrderedDict()
LEGEND['Network Nodes']=g
LEGEND['Roads']=g
LEGEND['Snapped Client']=g_client
LEGEND['Snapped Service']=g_service
LEGEND['Client Nodes']=GRAPH_client
LEGEND['Service Nodes']=GRAPH_service
legend(LEGEND, loc='lower right', fancybox=True, framealpha=0.5)
# Title
title('Waverly Hills\n Tallahassee, Florida', family='Times New Roman',
size=40, color='k', backgroundcolor='w', weight='bold')
# North Arrow and 'N' --> Must be changed for different spatial resolutions, etc.
arrow(-84.281, 30.507, 0.0, 0.01, width=.0003, head_width=0.0012,
head_length=0.002, fc='k', ec='k',alpha=0.75,)
annotate('N', xy=(-84.2815, 30.52), fontstyle='italic', fontsize='xx-large',
fontweight='heavy', alpha=0.75)
t1 = time.time()
All_Neigh_Dist = ntw.allneighbordistances(sourcepattern=ntw.pointpatterns['Rand_Points_CLIENT'],
destpattern=ntw.pointpatterns['Rand_Points_SERVICE'])
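# Convert the degree-based network distances to miles: roughly 111 km per degree (10000/90 evaluates to 111 under Python 2 integer division), times 0.6214 miles per km.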
All_Dist_MILES = All_Neigh_Dist * float(10000/90) * 0.6214
seconds = round(time.time()-t1, 4)
print seconds, 'seconds'
print 'Client [i] x Service [j] Matrix Shape --> ', All_Dist_MILES.shape
t1 = time.time()
# 1. Data
# Weighted Costs
# Cost Matrix
Cij = All_Dist_MILES
# Weights Matrix
Ai = Ai
# Demand Sum
AiSum = AiSum
# Weighted Cost Coefficients for Decision Variables
Sij = Ai * Cij
client_nodes = range(len(Sij))
service_nodes = range(len(Sij[0]))
mPMaxP_GUROBI = gbp.Model(' -- p-Maxian -- ')
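# MIPFocus=2 tells Gurobi to concentrate on proving optimality rather than on finding feasible solutions quickly.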
gbp.setParam('MIPFocus', 2)
# Client IxJ
client_var = []
for orig in client_nodes:
client_var.append([])
for dest in service_nodes:
client_var[orig].append(mPMaxP_GUROBI.addVar(vtype=gbp.GRB.BINARY,
obj=Sij[orig][dest],
name='x'+str(orig+1)+'_'+str(dest+1)))
#J
serv_var = []
for dest in service_nodes:
serv_var.append([])
serv_var[dest].append(mPMaxP_GUROBI.addVar(vtype=gbp.GRB.BINARY,
name='y'+str(dest+1)))
mPMaxP_GUROBI.update()
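# Objective: maximize the total demand-weighted distance (Sij = Ai * Cij) between clients and their assigned facilities -- the p-maxian ('obnoxious facility') objective.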
mPMaxP_GUROBI.setObjective(gbp.quicksum(Sij[orig][dest]*client_var[orig][dest]
for orig in client_nodes for dest in service_nodes),
gbp.GRB.MAXIMIZE)
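# Assignment constraints: each client node is assigned to exactly one facility.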
for orig in client_nodes:
mPMaxP_GUROBI.addConstr(gbp.quicksum(client_var[orig][dest]
for dest in service_nodes) == 1)
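# Opening constraints: a client may only be assigned to facility j if y_j is opened (y_j - x_ij >= 0).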
for orig in service_nodes:
for dest in client_nodes:
        mPMaxP_GUROBI.addConstr((serv_var[orig][0] - client_var[dest][orig] >= 0))
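# Facility constraint: open exactly p = 2 facilities.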
mPMaxP_GUROBI.addConstr(gbp.quicksum(serv_var[dest][0] for dest in service_nodes) == 2)
mPMaxP_GUROBI.optimize()
mPMaxP_GUROBI.write('WaverlyPMaxP.lp')
t2G = time.time()-t1
print '\n*************************************************************************'
selected = []
dbf1 = ps.open('shapefiles/RandomPoints_SERVICE.dbf')
NEW_Records_PMaxP_GUROBI = []
for v in mPMaxP_GUROBI.getVars():
if 'x' in v.VarName:
pass
elif v.x > 0:
var = '%s' % v.VarName
selected.append(var)
for i in range(dbf1.n_records):
if var in dbf1.read_record(i):
x = dbf1.read_record(i)
NEW_Records_PMaxP_GUROBI.append(x)
else:
pass
print ' | ', var
print ' | Selected Facility Locations -------------- ^^^^ '
print ' | Candidate Facilities [p] ----------------- ', len(selected)
val_G = mPMaxP_GUROBI.objVal
print ' | Objective Value (miles) ------------------ ', val_G
avg_G = float(mPMaxP_GUROBI.objVal)/float(AiSum)
print ' | Avg. Value / Client (miles) -------------- ', avg_G
print ' | Real Time to Optimize (sec.) ------------- ', t2G
print ' | Date/Time -------------------------------- ', dt.datetime.now()
print '*************************************************************************'
print '\nJames Gaboardi, 2015'
SHP_Maxian_GUROBI = shp.Writer(shp.POINT)
# Add Points
for idy,idx,x,y in NEW_Records_PMaxP_GUROBI:
SHP_Maxian_GUROBI.point(float(x), float(y))
# Add Fields
SHP_Maxian_GUROBI.field('y_ID')
SHP_Maxian_GUROBI.field('x_ID')
SHP_Maxian_GUROBI.field('LAT')
SHP_Maxian_GUROBI.field('LON')
# Add Records
for idy,idx,x,y in NEW_Records_PMaxP_GUROBI:
SHP_Maxian_GUROBI.record(idy,idx,x,y)
# Save Shapefile
SHP_Maxian_GUROBI.save('shapefiles/Selected_Locations_Pmaxian_GUROBI')
t1 = time.time()
# Demand
Ai = Ai
# Demand Sum
AiSum = AiSum
# Travel Costs
Cij = All_Dist_MILES
# Weighted Costs
Sij = Ai * Cij
client_nodes = range(len(Sij))
service_nodes = range(len(Sij[0]))
all_nodes = len(Sij) * len(Sij[0])
ALL_nodes = range(all_nodes)
x = 'x'
cli_var = []
for i in client_nodes:
for j in service_nodes:
temp = x + str(i+1) + '_' + str(j+1)
cli_var.append(temp)
client_var = np.array(cli_var)
client_var = client_var.reshape(len(Sij),len(Sij[0]))
y = 'y'
fac_var = []
for i in service_nodes:
temp = y + str(i+1)
fac_var.append(temp)
facility_var = np.array(fac_var)
facility_var = facility_var.reshape(1,len(Sij[0]))
# 2. Create Model and Add Variables
# Create Model
mPMaxP_CPLEX = cp.Cplex()
# Problem Name
mPMaxP_CPLEX.set_problem_name('\n -- P-Maxian -- ')
print mPMaxP_CPLEX.get_problem_name()
# Problem Type ==> Linear Programming
mPMaxP_CPLEX.set_problem_type(mPMaxP_CPLEX.problem_type.LP)
# Set MIP Emphasis to '2' --> Optimal
mPMaxP_CPLEX.parameters.emphasis.mip.set(2)
print mPMaxP_CPLEX.parameters.get_changed()
print '\nProblem Type\n ' + str(mPMaxP_CPLEX.problem_type[mPMaxP_CPLEX.get_problem_type()])
# Objective Function Sense ==> Maximize
mPMaxP_CPLEX.objective.set_sense(mPMaxP_CPLEX.objective.sense.maximize)
print 'Objective Sense\n ' + str(mPMaxP_CPLEX.objective.sense[mPMaxP_CPLEX.objective.get_sense()])
# Add Client Decision Variables
mPMaxP_CPLEX.variables.add(names = [cli_var[i] for i in ALL_nodes],
obj = [Sij[i][j] for i in client_nodes for j in service_nodes],
lb = [0] * all_nodes,
ub = [1] * all_nodes,
types = ['B'] * all_nodes)
# Add Service Decision Variable
mPMaxP_CPLEX.variables.add(names = [fac_var[j] for j in service_nodes],
lb = [0] * len(Sij[0]),
ub = [1] * len(Sij[0]),
types = ['B'] * len(Sij[0]))
# 3. Add Constraints
# Add Assignment Constraints
for orig in client_nodes:
assignment_constraints = cp.SparsePair(ind = [client_var[orig][dest]
for dest in service_nodes],
val = [1] * len(Sij[0]))
mPMaxP_CPLEX.linear_constraints.add(lin_expr = [assignment_constraints],
senses = ['E'],
rhs = [1]);
# Add Facility Constraint
facility_constraint = cp.SparsePair(ind = fac_var,
val = [1.0] * len(Sij[0]))
mPMaxP_CPLEX.linear_constraints.add(lin_expr = [facility_constraint],
senses = ['E'],
rhs = [2])
# Add Opening Constraint
cli_var_open = []
for i in client_nodes:
for j in service_nodes:
temp = x + str(i+1) + '_' + str(j+1)
cli_var_open.append(temp)
fac_var_open = []
for i in client_nodes:
for j in service_nodes:
temp = y + str(j+1)
fac_var_open.append(temp)
l = []
for i in ALL_nodes:
l.append([cli_var_open[i]]+[fac_var_open[i]])
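# Each (x_i_j, y_j) pair enforces y_j - x_i_j >= 0, i.e. a client may only be assigned to an opened facility.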
for i in l:
opening_constraint = cp.SparsePair(ind = i, val = [-1.0, 1.0])
mPMaxP_CPLEX.linear_constraints.add(lin_expr = [opening_constraint],
senses = ['G'],
rhs = [0])
# 4. Optimize and Print Results
mPMaxP_CPLEX.solve()
t2C = time.time()-t1
mPMaxP_CPLEX.write('WaverlyPMP_CPLEX.lp')
solution = mPMaxP_CPLEX.solution
selected = []
dbf1 = ps.open('shapefiles/RandomPoints_SERVICE.dbf')
NEW_Records_PMaxP_CPLEX = []
for v in mPMaxP_CPLEX.variables.get_names():
if 'x' in v:
pass
elif (solution.get_values(v) >
mPMaxP_CPLEX.parameters.mip.tolerances.integrality.get()):
var = '%s' % v
selected.append(var)
for i in range(dbf1.n_records):
if var in dbf1.read_record(i):
x = dbf1.read_record(i)
NEW_Records_PMaxP_CPLEX.append(x)
else:
pass
# solution.get_status() returns an integer code
print 'Solution status = ' , solution.get_status(), ':',
# the following line prints the corresponding string
print solution.status[solution.get_status()]
# Display solution.
print 'Total cost: ' , solution.get_objective_value()
print 'Determination Time to Build Model in Python and Optimize (sec.): ', mPMaxP_CPLEX.get_dettime(), 'ticks'
print 'Real Time to Build Model in Python and Optimize (sec.): ', t2C
print '****************************'
for f in fac_var:
if (solution.get_values(f) >
mPMaxP_CPLEX.parameters.mip.tolerances.integrality.get()):
print ' Facility %s is open' % f
else:
pass #print ' Facility %s is closed' % f
print '****************************'
print '\n----- Date/Time ------------------- ', dt.datetime.now()
print '\n----- Cplex p-Maxian Problem -----'
print '\n-----\nJames Gaboardi, 2015'
SHP_Maxian_CPLEX = shp.Writer(shp.POINT)
# Add Points
for idy,idx,x,y in NEW_Records_PMaxP_CPLEX:
SHP_Maxian_CPLEX.point(float(x), float(y))
# Add Fields
SHP_Maxian_CPLEX.field('y_ID')
SHP_Maxian_CPLEX.field('x_ID')
SHP_Maxian_CPLEX.field('LAT')
SHP_Maxian_CPLEX.field('LON')
# Add Records
for idy,idx,x,y in NEW_Records_PMaxP_CPLEX:
SHP_Maxian_CPLEX.record(idy,idx,x,y)
# Save Shapefile
SHP_Maxian_CPLEX.save('shapefiles/Selected_Locations_Pmaxian_CPLEX')
figsize(10,10)
# Draw Network Actual Roads and Nodes
nx.draw(g, ntw.node_coords, node_size=5, alpha=0.25, edge_color='r', width=2)
# Draw Graph
nx.draw(g1, ntw.node_coords, node_size=10, alpha=0.5)
# Gurobi p-Maxian
P_Max_GUROBI = ps.open('shapefiles/Selected_Locations_Pmaxian_GUROBI.shp')
points_max_GUROBI = {}
for idx, coords in enumerate(P_Max_GUROBI):
GUROBI_maxian_g.add_node(idx)
points_max_GUROBI[idx] = coords
GUROBI_maxian_g.node[idx] = coords
nx.draw(GUROBI_maxian_g, points_max_GUROBI,
node_size=600, alpha=1, node_color='g')
# Cplex p-Maxian
P_Max_CPLEX = ps.open('shapefiles/Selected_Locations_Pmaxian_CPLEX.shp')
points_max_CPLEX = {}
for idx, coords in enumerate(P_Max_CPLEX):
CPLEX_maxian_g.add_node(idx)
points_max_CPLEX[idx] = coords
CPLEX_maxian_g.node[idx] = coords
nx.draw(CPLEX_maxian_g, points_max_CPLEX,
node_size=300, alpha=1, node_color='r')
# Draw Graph of Random Client Points
nx.draw(GRAPH_client, points_client,
node_size=15, alpha=.5, node_color='k')
# Draw Graph of Random Service
nx.draw(GRAPH_service, points_service,
node_size=50, alpha=1, node_color='k')
# Legend (Ordered Dictionary)
LEGEND = OrderedDict()
LEGEND['Network Nodes']=g
LEGEND['Roads']=g
LEGEND['Graph Vertices']=g1
LEGEND['Graph Edges']=g1
LEGEND['Gurobi Optimal p-Maxian (p=2)']=GUROBI_maxian_g
LEGEND['Cplex Optimal p-Maxian (p=2)']=CPLEX_maxian_g
LEGEND['Client Nodes']=GRAPH_client
LEGEND['Service Nodes']=GRAPH_service
legend(LEGEND, loc='lower right', fancybox=True, framealpha=0.5)
# Title
title('Waverly Hills\n Tallahassee, Florida', family='Times New Roman',
size=40, color='k', backgroundcolor='w', weight='bold')
# North Arrow and 'N' --> Must be changed for different spatial resolutions, etc.
arrow(-84.281, 30.507, 0.0, 0.01, width=.0003, head_width=0.0012,
head_length=0.002, fc='k', ec='k',alpha=0.75,)
annotate('N', xy=(-84.2815, 30.52), fontstyle='italic', fontsize='xx-large',
fontweight='heavy', alpha=0.75)
print '********************************************************'
print ' | Total Cost: Objective Function Value (miles) '
print ' | | Gurobi ------------------ ', val_G, ' '
print ' | | CPLEX ------------------- ', solution.get_objective_value(), ' '
print '-------------------------------------------------------'
NpmG = NEW_Records_PMaxP_GUROBI
NpmC = NEW_Records_PMaxP_CPLEX
G_val = mPMaxP_GUROBI.objVal
C_val = solution.get_objective_value()
if G_val == C_val:
print ' | Gurobi and CPLEX came to equal Objective Values'
else:
print ' | Gurobi and CPLEX came to different Objective Values'
if NpmG == NpmC:
print ' | ***'
print ' | Gurobi and CPLEX chose the same facilities'
else:
print ' | ***'
print ' | Gurobi and CPLEX chose different facilities'
print '-------------------------------------------------------'
print ' | Total Time to Build Model and Optimize: (seconds) '
print ' | | Gurobi ------------------ ', t2G, ' '
print ' | | CPLEX ------------------- ', t2C, ' '
print '********************************************************'
print '********************************************************'
print ' | Platform Specs: |'
print ' | | OS X Yosemite v10.10.4 |'
print ' | | MacBook Pro (Retina, 13-inch, Early 2015) |'
print ' | | Processor: 3.1 GHz Intel Core i7 |'
print ' | | Memory: 16 GB 1867 MHz DDR3 |'
print '********************************************************'
print ' | | Date/Time --------- ', dt.datetime.now(), '|'
print '********************************************************'
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <font size='7' face='Times New Roman'><b>1. <u>Data preparation and creation</u></b></font>
Step2: <font size='5' face='Times New Roman'><b>1.2 <u>Instantiate all graphs to be drawn</u></b></font>
Step3: <font size='5' face='Times New Roman'><b>1.3 <u>Create Bounding Box from 'Waverly.shp'</u></b></font>
Step4: <font size='5' face='Times New Roman'><b>1.4 <u>Create numpy arrays of random floats within a bounding box</u></b></font>
Step5: <font size='5' face='Times New Roman'><b>1.5 <u>Zip the latitude and longitude lists together</u></b></font>
Step6: <font size='5' face='Times New Roman'><b>1.6 <u>Create Empty Random Points Dictionaries</u></b></font>
Step7: <font size='5' face='Times New Roman'><b>1.7 <u>Fill dictionaries of random points</u></b></font>
Step8: <font size='5' face='Times New Roman'><b>1.8 <u>Draw network, simplified network, and random client & service nodes</u></b></font>
Step9: <font size='5' face='Times New Roman'><b>1.9 <u>Create weights at nodes and sum</u></b></font>
Step10: <font size='5' face='Times New Roman'><b>1.10 <u>Instantiate client and service shapefiles</u></b></font>
Step11: <font size='5' face='Times New Roman'><b>1.11 <u>Snap Observations to NTW</u></b></font>
Step12: <font size='5' face='Times New Roman'><b>1.12 <u>Draw NTW, snapped coords, & random coords</u></b></font>
Step13: <font size='5' face='Times New Roman'><b>1.13 <u>Create distance matrix</u></b></font>
Step14: <font size='7' face='Times New Roman'><b>2. <u>Mathematical Optimization</u></b></font>
Step15: <font size='5' face='Times New Roman'><b>2.1b <u>Instantiate Selected Gurobi <i>p</i>-Maxian shapefile</u></b></font>
Step16: <font size='5' face='Times New Roman'><b>2.2a <u>Cplex <i>p</i>-Maxian test [<i>p</i> = 2]</u></b></font>
Step17: <font size='5' face='Times New Roman'><b>2.2b <u>Instantiate Selected Cplex <i>p</i>-Maxian shapefile</u></b></font>
Step18: <font size='7' face='Times New Roman'><b>3. <u>Selected locations</u></b></font>
Step19: <font size='5' face='Times New Roman'><b>3.2 <u>Optimized Values</u></b></font>
Step20: <font size='5' face='Times New Roman'><b>3.3 <u>System Information</u></b></font>
|
11,444
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-1', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
|
11,445
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from matplotlib import pyplot as plt
import numpy.random as ra
%matplotlib inline
spectrum = [[1, 2, 3, 4, 5, 6],[2000, 4040, 6500, 6000, 4020, 2070]]
energies = np.array(spectrum[0])
fluxes = np.array(spectrum[1])
spectrum
prob = fluxes/float(sum(fluxes))
cum_prob = np.cumsum(prob)
cum_prob
N = 10000
R = ra.uniform(0, 1, N)
R[1:10]
gen_energies = [int(energies[np.argwhere(cum_prob == min(cum_prob[(cum_prob - r) > 0]))]) for r in R]
gen_energies[1:10]
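# Vectorized alternative (sketch): np.searchsorted maps each uniform draw to the first
# cumulative-probability bin that exceeds it, matching the list comprehension above; the
# np.minimum guards against a draw landing just beyond the last cumulative value.
gen_energies_alt = energies[np.minimum(np.searchsorted(cum_prob, R), len(energies) - 1)]
gen_energies_alt[1:10]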
gen_energies = ((np.array(gen_energies) - 1) / 1).astype(int)
times = np.arange(1, 6, 1)
lc = np.bincount(gen_energies, minlength=len(times))
lc
plot1, = plt.plot(lc/float(sum(lc)), 'r--', label='Assigned energies')
plot2, = plt.plot(prob,'g',label='Original Spectrum')
plt.xlabel('Energies')
plt.ylabel('Probability')
plt.legend(handles=[plot1,plot2])
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Below is a spectrum which follows an almost bell-curve type distribution (anyway, the specific type of distribution is not important here).
Step2: Below, first we compute probabilities of flux. Afterwards, we compute the cumulative probability.
Step3: We draw ten thousand numbers from uniform random distribution.
Step4: We assign energies to events corresponding to the random number drawn.
Step5: Histogram energies to get shape approximation.
|
11,446
|
<ASSISTANT_TASK:>
Python Code:
from pynq import Overlay
Overlay("base.bit").download()
from pynq.iop import Pmod_TC1
from pynq.iop import PMODB
# TC1 sensor is on PMODB
my_tc1 = Pmod_TC1(PMODB)
r = my_tc1.read()
print('Raw Register Value: %08x hex' % r)
print('Ref Junction Temp: %.4f' % my_tc1.reg_to_ref(r))
print('Thermocouple Temp: %.2f' % my_tc1.reg_to_tc(r))
print('Alarm flags: %08x hex' % my_tc1.reg_to_alarms(r))
my_tc1.start_log()
my_tc1.stop_log()
log = my_tc1.get_log()
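# Each entry in `log` is a raw 32-bit register value; the reg_to_tc()/reg_to_ref()
# calls below convert it to the thermocouple and reference-junction temperatures.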
%matplotlib inline
import matplotlib.pyplot as plt
tc = [my_tc1.reg_to_tc(v) for v in log]
ref = [my_tc1.reg_to_ref(v) for v in log]
plt.plot(range(len(tc)), tc, 'ro', label='Thermocouple')
plt.plot(range(len(ref)), ref, 'bo', label='Ref Junction')
plt.title('TC1 Sensor log')
plt.axis([0, len(log), min(tc+ref)*0.9, max(tc+ref)*1.1])
plt.legend()
plt.xlabel('Sample Number')
plt.ylabel('Temperature (C)')
plt.grid()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Starting logging temperature once every second
Step2: 3. Modifying the temperature
Step3: 4. Plot values over time
|
11,447
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
from io import StringIO
print('--- Data containing missing values ---')
csv_data = '''A,B,C,D
1.0,2.0,3.0,4.0
5.0,6.0,,8.0
10.0,11.0,12.0,'''
df = pd.read_csv(StringIO(csv_data))
print(df)
print('--- Count of missing values ---')
print(df.isnull().sum())
print('--- Drop rows containing missing values ---')
print(df.dropna())
print('\n--- Drop columns containing missing values ---')
print(df.dropna(axis=1))
from sklearn.preprocessing import Imputer
# strategy: besides 'mean', 'median' and 'most_frequent' (mode) are also available
imr = Imputer(missing_values='NaN', strategy='mean', axis=0)
imr = imr.fit(df)
imputed_data = imr.transform(df.values)
print('Imputed with the column means')
print(imputed_data)
imr = Imputer(missing_values='NaN', strategy='mean', axis=1)
imr = imr.fit(df)
imputed_data = imr.transform(df.values)
print('\nImputed with the row means')
print(imputed_data)
df = pd.DataFrame([
['green', 'M', 10.1, 'class1'],
['red', 'L', 13.5, 'class2'],
['blue', 'XL', 15.3, 'class1'],
])
df.columns = ['色', 'サイズ', '価格', 'クラス']
df
size_mapping = {'XL': 3, 'L':2, 'M':1}
df['サイズ'] = df['サイズ'].map(size_mapping)
df
import numpy as np
class_mapping = {label:idx for idx, label in enumerate(np.unique(df['クラス']))}
print(class_mapping)
df['クラス'] = df['クラス'].map(class_mapping)
df
# Map back to the original labels
inv_class_mapping = {v:k for k, v in class_mapping.items()}
print(inv_class_mapping)
df['クラス'] = df['クラス'].map(inv_class_mapping)
df
from sklearn.preprocessing import LabelEncoder
# LabelEncoder converts the class labels to integers automatically
class_le = LabelEncoder()
y = class_le.fit_transform(df['クラス'].values)
print(y)
# Convert back to the original labels
class_le.inverse_transform(y)
X = df[['色', 'サイズ', '価格']].values
color_le = LabelEncoder()
X[:,0] = color_le.fit_transform(X[:, 0])
X
from sklearn.preprocessing import OneHotEncoder
# Columns listed in categorical_features are treated as categorical strings and encoded; here only the first column.
ohe = OneHotEncoder(categorical_features=[0])
ohe.fit_transform(X).toarray()
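# A pandas-only sketch of the same idea: get_dummies expands just the string-valued
# nominal column ('色') into dummy columns and leaves the numeric columns untouched.
pd.get_dummies(df[['価格', '色', 'サイズ']])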
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Removing samples and features that contain missing values
Step2: Imputing missing values
Step3: Handling categorical data
Step4: Mapping ordinal features
Step5: Encoding class labels
Step6: One-hot encoding of nominal features
Step7: So instead, we create a column for each nominal feature value and generate dummy features.
|
11,448
|
<ASSISTANT_TASK:>
Python Code:
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print("Downloaded " + local)
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkstats2.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkplot.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/nsfg.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dct")
download(
"https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dat.gz"
)
import nsfg
preg = nsfg.ReadFemPreg()
preg.head()
preg.columns
preg.columns[1]
pregordr = preg['pregordr']
type(pregordr)
pregordr
pregordr[0]
pregordr[2:5]
pregordr = preg.pregordr
preg.outcome.value_counts().sort_index()
preg.birthwgt_lb.value_counts().sort_index()
caseid = 10229
preg_map = nsfg.MakePregMap(preg)
indices = preg_map[caseid]
preg.outcome[indices].values
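# Equivalent sketch using pandas directly: group the row labels by caseid.
preg_map_alt = preg.groupby('caseid').groups
preg.outcome[preg_map_alt[caseid]].values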
# Solution
preg.birthord.value_counts().sort_index()
preg.birthord.isnull().sum()
# Solution
preg.prglngth.value_counts().sort_index()
preg.totalwgt_lb.mean()
# Solution
preg['totalwgt_kg'] = preg.totalwgt_lb / 2.2
preg.totalwgt_kg.mean()
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemResp.dct")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemResp.dat.gz")
resp = nsfg.ReadFemResp()
resp.head()
# Solution
resp.age_r.value_counts().sort_index()
resp[resp.caseid==2298]
preg[preg.caseid==2298]
# Solution
resp[resp.caseid==1].age_r
# Solution
preg[preg.caseid==2298].prglngth
# Solution
preg[preg.caseid==5012].birthwgt_lb
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Examples from Chapter 1
Step2: Print the column names.
Step3: Select a single column name.
Step4: Select a column and check what type it is.
Step5: Print a column.
Step6: Select a single element from a column.
Step7: Select a slice from a column.
Step8: Select a column using dot notation.
Step9: Count the number of times each value occurs.
Step10: Check the values of another variable.
Step11: Make a dictionary that maps from each respondent's caseid to a list of indices into the pregnancy DataFrame. Use it to select the pregnancy outcomes for a single respondent.
Step12: Exercises
Step13: We can also use isnull to count the number of nans.
Step14: Select the prglngth column, print the value counts, and compare to results published in the codebook
Step15: To compute the mean of a column, you can invoke the mean method on a Series. For example, here is the mean birthweight in pounds
Step16: Create a new column named <tt>totalwgt_kg</tt> that contains birth weight in kilograms. Compute its mean. Remember that when you create a new column, you have to use dictionary syntax, not dot notation.
Step17: nsfg.py also provides ReadFemResp, which reads the female respondents file and returns a DataFrame
Step18: DataFrame provides a method head that displays the first five rows
Step19: Select the age_r column from resp and print the value counts. How old are the youngest and oldest respondents?
Step20: We can use the caseid to match up rows from resp and preg. For example, we can select the row from resp for caseid 2298 like this
Step21: And we can get the corresponding rows from preg like this
Step22: How old is the respondent with caseid 1?
Step23: What are the pregnancy lengths for the respondent with caseid 2298?
Step24: What was the birthweight of the first baby born to the respondent with caseid 5012?
|
11,449
|
<ASSISTANT_TASK:>
Python Code:
import random
def sample_experiment():
### BEGIN SOLUTION
    '''Returns true if a random number is less than 0'''
return random.random() < 0
number_of_experiments = 1000
sum(
sample_experiment() for repetition in range(number_of_experiments)
) / number_of_experiments
### END SOLUTION
def sample_experiment():
### BEGIN SOLUTION
    '''Returns true if a random number is less than 1 / 2'''
return random.random() < 1 / 2
number_of_experiments = 1000
sum(
sample_experiment() for repetition in range(number_of_experiments)
) / number_of_experiments
### END SOLUTION
def sample_experiment():
### BEGIN SOLUTION
    '''Returns true if a random number is less than 3 / 4'''
return random.random() < 3 / 4
number_of_experiments = 1000
sum(
sample_experiment() for repetition in range(number_of_experiments)
) / number_of_experiments
### END SOLUTION
def sample_experiment():
### BEGIN SOLUTION
    '''Returns true if a random number is less than 1'''
return random.random() < 1
number_of_experiments = 1000
sum(
sample_experiment() for repetition in range(number_of_experiments)
) / number_of_experiments
### END SOLUTION
import itertools
pets = ("cat", "dog", "fish", "lizard", "hamster")
### BEGIN SOLUTION
permutations = tuple(itertools.permutations(pets, 4))
number_of_permutations = len(permutations)
### END SOLUTION
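# Sanity check (sketch): an ordered selection of 4 pets from 5 gives 5 * 4 * 3 * 2 = 120.
assert number_of_permutations == 5 * 4 * 3 * 2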
import scipy.special
### BEGIN SOLUTION
direct_number_of_permutations = scipy.special.perm(5, 4)
### END SOLUTION
import sympy as sym
x = sym.Symbol("x")
c1 = sym.Symbol("c1")
### BEGIN SOLUTION
second_derivative = 4 * x + sym.cos(x)
derivative = sym.integrate(second_derivative, x) + c1
### END SOLUTION
### BEGIN SOLUTION
equation = sym.Eq(derivative.subs({x:0}), 0)
### END SOLUTION
### BEGIN SOLUTION
particular_derivative = derivative.subs({c1: 0})
function = sym.integrate(particular_derivative) + c1
sym.integrate(function, (x, 0, 5 * sym.pi))
### END SOLUTION
c = sym.Symbol("c")
### BEGIN SOLUTION
def get_sequence_a(n):
    '''Return the sequence a.'''
if n == 1:
return c
return 3 * get_sequence_a(n - 1) + c / n
sum(get_sequence_a(n) for n in range(1, 16))
### END SOLUTION
### BEGIN SOLUTION
f = (get_sequence_a(n=1) + get_sequence_a(n=2) * x + get_sequence_a(n=3) * x ** 2 + get_sequence_a(n=4) * x ** 3).subs({c: 2})
sym.diff(f, x)
### END SOLUTION
### BEGIN SOLUTION
sym.integrate(f, x)
### END SOLUTION
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Computing for Mathematics - 2020/2021 individual coursework
Step3: b. $1/2$
Step5: c. $3/4$
Step7: d. $1$
Step8: Question 2
Step9: b. Create a variable direct_number_of_permutations that gives the number of permutations of pets of size 4 by direct computation.
Step10: Question 3
Step11: b. Create a variable equation that has value the equation $f'(0)=0$.
Step12: c. Using the solution to that equation, output the value of $\int_{0}^{5\pi}f(x)dx$.
Step14: Question 4
Step15: b. Given that $c=2$ output $\frac{df}{dx}$ where
Step16: c. Given that $c=2$ output $\int f(x)dx$
|
11,450
|
<ASSISTANT_TASK:>
Python Code:
class Directions:
NORTH = 'North'
SOUTH = 'South'
EAST = 'East'
WEST = 'West'
STOP = 'Stop'
def P_1(eps, E_N, E_S):
'''
Calculates: P(X=x|E_{N}=e_{N},E_{S}=e_{S})
Arguments: E_N, E_S \in {True,False}
0 <= eps <= 1 (epsilon)
'''
pd = {(x,y):0 for x in range(1,7) for y in range(1,6)}
return pd
P_1(0, True, False)
def P_2(eps, E_N, E_S):
'''
Calculates: P(E_{E}=e_{E}|E_{N}=e_{N},E_{S}=E_{S})
Arguments: E_N, E_S \in {True,False}
0 <= eps <= 1
'''
pd = {True:0, False:0}
return pd
P_2(0.2, True, False)
def P_3(eps, S):
'''
Calculates: P(S), where S\subseteq\{e_{N},e_{S},e_{E},e_{W}\}
Arguments: S a dictionary with keywords in Directions and values in
{True,False}
0 <= eps <= 1
'''
return 0
P_3(0.3, {Directions.EAST: True, Directions.SOUTH: False})
def P_4(eps, E_1, E_3):
'''
Calculates: P(X_{4}=x_{4}|E_{1}=e_{1},E_{3}=e_{3})
Arguments: E_1, E_3 dictionaries of type Directions --> {True,False}
0 <= eps <= 1
'''
pd = {(x,y):0 for x in range(1,7) for y in range(1,6)}
return pd
E_1 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}
E_3 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False}
P_4(0.1, E_1, E_3)
def P_5(eps, E_2, E_3, E_4):
'''
Calculates: P(X_{2}=x_{2}|E_{2}=e_{2},E_{3}=e_{3},E_{4}=e_{4})
Arguments: E_2, E_3, E_4 dictionaries of type Directions --> {True,False}
0 <= eps <= 1
'''
pd = {(x,y):0 for x in range(1,7) for y in range(1,6)}
return pd
E_2 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}
E_3 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False}
E_4 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}
P_5(0.1, E_2, E_3, E_4)
def P_6(eps, E_1, E_2, E_3):
'''
Calculates: P(E_{4}=e_{4}|E_{1}=e_{1},E_{2}=e_{2},E_{3}=e_{3})
Arguments: E_1, E_2, E_3 dictionaries of type Directions --> {True,False}
0 <= eps <= 1
'''
pd = {(n, s, e, w): 0 for n in [False, True] for s in [False, True]
for e in [False, True] for w in [False, True]}
return pd
E_1 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}
E_2 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}
E_3 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False}
P_6(0.1, E_1, E_2, E_3)
def P_7(eps, E_N, E_S):
'''
Calculates: P(E_{E_{2}}=e_{E_{2}}|E_{N_{2}}=e_{N_{2}},E_{S_{2}}=E_{S_{2}})
Arguments: E_N_2, E_S_2 \in {True,False}
0 <= eps <= 1
'''
pd = {True:0, False:0}
return pd
P_7(0.1, True, False)
def approx_equal(val1, val2):
return abs(val1-val2) <= 0.00001
def test_P_1():
pd = P_1(0.0, True, True)
assert approx_equal(pd[(2, 1)], 0.1111111111111111)
assert approx_equal(pd[(3, 1)], 0)
pd = P_1(0.3, True, False)
assert approx_equal(pd[(2, 1)], 0.03804347826086956)
assert approx_equal(pd[(3, 1)], 0.016304347826086956)
def test_P_2():
pd = P_2(0.0, True, True)
assert approx_equal(pd[False], 1.0)
pd = P_2(0.3, True, False)
assert approx_equal(pd[False], 0.5514492753623188)
def test_P_3():
pd = P_3(0.1, {Directions.EAST: True, Directions.WEST: True})
assert approx_equal(pd, 0.2299999999999999)
pd = P_3(0.1, {Directions.EAST: True})
assert approx_equal(pd, 0.3999999999999999)
pd = P_3(0.2, {Directions.EAST: False, Directions.WEST: True, Directions.SOUTH: True})
assert approx_equal(pd, 0.0980000000000000)
def test_P_4():
E_1 = {Directions.NORTH: False, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: True}
E_3 = {Directions.NORTH: False, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: True}
pd = P_4(0.0, E_1, E_3)
assert approx_equal(pd[(6, 3)], 0.1842105263157895)
assert approx_equal(pd[(4, 3)], 0.0)
pd = P_4(0.2, E_1, E_3)
assert approx_equal(pd[(6, 3)], 0.17777843398830864)
assert approx_equal(pd[(4, 3)], 0.000578430282649176)
E_1 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}
E_3 = {Directions.NORTH: False, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}
pd = P_4(0.0, E_1, E_3)
assert approx_equal(pd[(6, 2)], 0.3333333333333333)
assert approx_equal(pd[(4, 3)], 0.0)
def test_P_5():
E_2 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False}
E_3 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: False, Directions.WEST: False}
E_4 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False}
pd = P_5(0, E_2, E_3, E_4)
assert approx_equal(pd[(2, 5)], 0.5)
assert approx_equal(pd[(4, 3)], 0.0)
pd = P_5(0.3, E_2, E_3, E_4)
assert approx_equal(pd[(2, 5)], 0.1739661245168835)
assert approx_equal(pd[(4, 3)], 0.0787991740545979)
def test_P_6():
E_1 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False}
E_2 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False}
E_3 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}
pd = P_6(0.2, E_1, E_2, E_3)
assert approx_equal(pd[(False, False, True, True)], 0.15696739914079486)
assert approx_equal(pd[(True, True, False, False)], 0.20610191744824477)
pd = P_6(0., E_1, E_2, E_3)
assert approx_equal(pd[(False, False, True, True)], 0.5)
assert approx_equal(pd[(False, True, False, False)], 0.0)
def test_P_7():
pd = P_7(0.0, True, False)
assert approx_equal(pd[False], 0.7142857142857143)
pd = P_7(0.3, False, False)
assert approx_equal(pd[False], 0.5023529411764706)
test_P_1()
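# Once P_2 ... P_7 are implemented, the remaining checks can be run the same way, e.g.:
# test_P_2(); test_P_3(); test_P_4(); test_P_5(); test_P_6(); test_P_7()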
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: a. Bayes' net for instant perception and position.
Step2: ii. $P(E_{E}=e_{E}|E_{N}=e_{N},E_{S}=E_{S})$
Step3: iii. $P(S)$, where $S\subseteq{e_{N},e_{S},e_{E},e_{W}}$
Step4: c. Bayes' net for dynamic perception and position.
Step5: ii. $P(X_{2}=x_{2}|E_{2}=e_{2},E_{3}=e_{3},E_{4}=e_{4})$
Step6: iii. $P(E_{4}=e_{4}|E_{1}=e_{1},E_{2}=e_{2},E_{3}=e_{3})$
Step7: iv. $P(E_{E_{2}}=e_{E_{2}}|E_{N_{2}}=e_{N_{2}},E_{S_{2}}=E_{S_{2}})$
Step8: Test functions
|
11,451
|
<ASSISTANT_TASK:>
Python Code:
##
# The 'import' statement imports external libraries for use in the interactive session.
# ... and 'import <library> as <nickname>' makes a shorter name for convenience.
#
# The '%matplotlib inline' statement allows inline plots here. (see try.jupyter.org)
#
import datetime
import matplotlib
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# Bigger fonts and figures for the demo
matplotlib.rcParams.update({
'font.size': 14,
'figure.figsize':(10.0, 8.0),
'axes.formatter.useoffset':False })
# Better data frame display for the demo
pd.set_option('expand_frame_repr', True)
pd.set_option('max_rows', 18)
pd.set_option('max_colwidth', 14)
pd.set_option('precision',2)
# Evaluate this cell for documentation on Jupyter magics
%magic
##
# Pro tip:
# You probably know 'help(object)' will show documentation for the object.
# 'dir(object)' will list all names accessible via the object.
print(' '.join([x for x in dir(pd) if 'read_' in x]))
## ... and output options:
print(' '.join([x for x in dir(pd.DataFrame) if 'to_' in x]))
import os # in case you're on windows
file_location = os.path.join('data', 'nfl_season2008to2014.csv')
df = pd.read_csv(file_location)
df.head(3)
##
# Transformations using Pandas
# Spread
# The pandas.DataFrame uses numpy arrays to enable array-wise operations.
df['Spread'] = df.Points - df.PointsAllowed
# PtsPct
df['PointsPercent'] = df.Points / (df.Points + df.PointsAllowed)
# Outcome
# When assigning to a subset of a column in a DataFrame,
# use indexing ('ix' or other functions) to identify the subset.
df['Outcome'] = np.nan # pre-fill the column
df.ix[df.Spread > 0, 'Outcome'] = 'W'
df.ix[df.Spread < 0, 'Outcome'] = 'L'
df.ix[df.Spread == 0, 'Outcome'] = 'T'
# WLT (record)
# Make sure the data are sorted.
# Then:
# Use 'apply' for user-defined functions
# and 'cumsum' for a running sum; rows with np.nan
# in the outcome will just add zero
df = df.sort(['Team','Season','Week'])
df['WLT'] = df.groupby(['Team','Season'])['Outcome'].apply(
lambda o:
(o == 'W').cumsum().astype(str) + '-' +
(o == 'L').cumsum().astype(str) + '-'+
(o == 'T').cumsum().astype(str) )
# WinPct
df['WinPct'] = df.groupby(('Team','Season'))['Spread'].apply(
lambda s: (0.5 * (s == 0).cumsum() + (s > 0).cumsum()) / s.notnull().cumsum() )
# LastWkBye
# Make sure the data are sorted.
# Then flag whether last game was a Bye
df = df.sort(['Team', 'Season','Week'])
df['LastWeekBye'] = df.groupby(['Team','Season'])['Spread'].shift(1).isnull().fillna(False)
df.ix[df.Week == 1, 'LastWeekBye'] = False
# Past5WkAvgPts
# Make sure the data are sorted.
# Then use the windowing functions
# see: http://pandas.pydata.org/pandas-docs/stable/computation.html
# window size = 5, minimum required observations = 2
df = df.sort(['Team','Season', 'Week'])
df['Past5WkAvgPts'] = df.groupby(['Team']).Points.apply(
pd.rolling_mean, window=5, min_periods=2).shift(1)
# Past5WkAvgFumbles
# Some of the sorting seems unnecessary
# but is good in case you copy and paste a snippet elsewhere...
df = df.sort(['Team','Season','Week'])
df['Past5WkAvgFumbles'] = df.groupby(['Team']).Fumbles.apply(
pd.rolling_mean, window=5, min_periods=2).shift(1)
# Past5WkInterceptions
df = df.sort(['Team','Season','Week'])
df['Past5WkInterceptions'] = df.groupby('Team').Interceptions.apply(
pd.rolling_sum, window=5, min_periods=2).shift(1)
# EwmaPenaltyYards, centered at 2 Weeks ago
# exponentially weighted moving average
df = df.sort(['Team','Season','Week'])
df['EwmaPenaltyYards'] = df.groupby('Team').PenaltyYards.apply(
pd.ewma, 2).shift(1)
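# Note (sketch): on current pandas the deprecated window helpers above are spelled
#   s.rolling(window=5, min_periods=2).mean()  instead of  pd.rolling_mean(s, 5, 2)
#   s.ewm(com=2).mean()                        instead of  pd.ewma(s, 2)
# The forms used here match the older pandas version this notebook was written against.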
df.head(9)
##
# Distribution of the point spread
#
plt.subplot(221)
df.Spread.plot(kind='density')
plt.title('Spread 2008-2014')
plt.xlim((-60,60))
plt.gcf().set_size_inches(12, 5)
plt.subplot(222)
df.Spread.hist(bins=60)
plt.title('...but not many ties')
plt.subplot(212)
plt.plot(df.Spread, df.index, 'ko', alpha=0.1)
plt.xlabel('All games')
plt.show()
##
# Spread distribution by year
# -- demonstrates pandas.DataFrame.pivot_table
#
df[['Week', 'Team', 'Season','Spread']].pivot_table(
index=['Week', 'Team'], columns='Season', values='Spread').hist(
bins=25, layout=(2,4), figsize=(12,3.5), sharey=True)
plt.show()
##
# Or cumulative distribution of spread
# Uses the same 'hist' function as above but with different keyword arguments.
#
df.Spread.hist(cumulative=True, normed=True, histtype='step', bins=100, figsize=(9,4))
#- The shape is consistent season-over-season. If you want to see that uncomment the below....
#df.groupby('Season').Spread.hist(cumulative=True, normed=True, histtype='step', bins=100, figsize=(9,4))
plt.title('Cumulative Spread Distribution, NFL 2008-2014')
plt.ylim(0,1)
plt.xlabel('Spread')
plt.ylabel('Percent of Teams below Spread')
plt.axvline(x=7, hold=None, color='lightblue', linewidth=3)
plt.axvline(x=14, hold=None, color='lightblue', linewidth=2)
plt.axvline(x=21, hold=None, color='lightblue', linewidth=1)
plt.show()
##
# We can do things like explore whether home field advantage exists.
#
df[['AtHome', 'Spread']].boxplot(by='AtHome', figsize=(7,4))
plt.suptitle('')
plt.title('Difference in spread for Home vs Away teams')
plt.ylabel('Spread'); plt.xlabel('Playing at Home?')
plt.show()
##
# ... by Win counts now
fig = plt.figure()
ax = plt.subplot(111)
pd.crosstab(df.AtHome, df.Outcome).plot(
ax=ax, kind='barh', stacked=True,
color=['red','blue', 'green'], figsize=(9,5))
# Mom says label your axes
plt.ylabel('At Home?'); plt.xlabel('Number of Outcomes')
# Shrink current axis by 20% to make room for the legend
box = ax.get_position()
ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
# Legend to the right of the current axis
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5), fontsize=12, frameon=False)
plt.show()
##
# ... or see which teams are consistently great or not
#
df2 = pd.DataFrame({col:vals.Spread for col,vals in df.groupby('Team')})
meds = df2.median().order(ascending=True)
df2[meds.index].boxplot(vert=False, figsize=(5,35), return_type='axes',fontsize=14)
plt.xlabel('Spreads in Seasons 2008-2014')
plt.show()
##
# ... Some teams make points but not wins
#
df2 = pd.DataFrame({col:vals.Points for col,vals in df[df.Season==2014].groupby('Team')})
meds = df2.median().order(ascending=True)
df2[meds.index].boxplot(vert=False, figsize=(5,35), return_type='axes',fontsize=14)
plt.xlabel('Distribution of points')
plt.title("Points in the 2014 Season")
plt.show()
##
# ... How about Sacks?
#
df[(df.Season==2014) & (df.Category == 'regular')
].groupby('Team').Sacks.sum().order().plot(kind='barh')
plt.title("Sacks made in the 2014 season")
plt.show()
fig = plt.figure()
ax = plt.subplot(111)
tmp = df[df.Team.isin(['Chicago Bears','Detroit Lions','Denver Broncos'])
].groupby(['Team', 'Season']
).Spread.median().unstack().transpose()
tmp.plot(ax=ax, lw=3, figsize=(12,4), legend=False)
plt.xlabel('Season')
plt.ylabel('Median Spread')
# Shrink current axis by 20% to make room for the annotation
box = ax.get_position()
ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
# Legend to the right of the current axis
ax.annotate('... Manning?',(2012, tmp.ix[2012,'Denver Broncos']), xytext=(2010.2, 10.5),
arrowprops=dict(arrowstyle='->'))
ax.annotate('... Suh?',(2010, tmp.ix[2012,'Detroit Lions']), xytext=(2010, -11),
arrowprops=dict(arrowstyle='->'))
for y, txt in zip([-7,7,2],
['Chicago Bears', 'Denver Broncos', 'Detroit Lions']):
ax.annotate(txt, (2014, y), xytext=(2014.1, y))
plt.title("It's a team sport, but...")
plt.show()
df.columns
##
# Combine the data by game, so that the opponent aggregations are available too
tm = df[[
'Season', 'Category', 'Week', 'Team', 'Opponent', 'Spread', 'AtHome',
'LastWeekBye',
'Past5WkAvgPts',
'Past5WkAvgFumbles', 'Past5WkInterceptions',
'EwmaPenaltyYards']]
tm.columns = [
'Season', 'Category', 'Week', 'Team', 'Opponent', 'Spread', 'AtHome',
'T_LastWkBye',
'T_5WkAvgPts',
'T_5WkAvgFumbles', 'T_5WkAvgInterceptions',
'T_EwmaPenaltyYards']
opp = df[[
'Season', 'Category', 'Week', 'Team', 'Opponent',
'LastWeekBye',
'Past5WkAvgPts',
'Past5WkAvgFumbles', 'Past5WkInterceptions',
'EwmaPenaltyYards']]
opp.columns = [
'Season', 'Category', 'Week', 'Opponent', 'Team',
'O_LastWkBye',
'O_5WkAvgPts',
'O_5WkAvgFumbles', 'O_5WkAvgInterceptions',
'O_EwmaPenaltyYards']
games = tm.merge(opp, how='inner', on=['Season', 'Category', 'Week', 'Team', 'Opponent'])
games = games[games.Spread.notnull()]
print('games shape:', games.shape)
print('df shape:', df.shape, 'df no bye', df[df.Points.notnull()].shape)
##
# All of the Scikit-learn models are trained with
# (1) an output column and
# (2) an input dataset
#
# Want to predict 'Spread' given known values:
# ==> Ignore 'Bye' weeks (they have no 'Spread')
# - Category (regular|postseason)
# - AtHome (True|False)
# - LastWeekBye (True|False)
# - Past5WeekAveragePointsPercent (Numeric)
# - LastSeasonPlayoffGames (Numeric-count)
# - LastSeasonPointsPercent (Numeric)
#
no_nulls = games.notnull().all(axis=1)
spread = games[no_nulls].Spread
input_data = games[no_nulls][[
'Team', # 'Opponent',
'Category',
'AtHome',
'T_LastWkBye',
'T_5WkAvgPts', 'T_5WkAvgFumbles', 'T_5WkAvgInterceptions', 'T_EwmaPenaltyYards',
'O_LastWkBye',
'O_5WkAvgPts', 'O_5WkAvgFumbles', 'O_5WkAvgInterceptions', 'O_EwmaPenaltyYards'
]]
# The input column 'Category' contains categories,
# so we have to make dummy variables to use in the regression.
input_data = pd.get_dummies(input_data)
print("Size of the input set:", input_data.shape)
input_data.head(3)
fig, axs = plt.subplots(2, 5)
fig.set_size_inches(14, 6)
cols = ('T_LastWkBye', 'T_5WkAvgPts', 'T_5WkAvgFumbles', 'T_5WkAvgInterceptions', 'T_EwmaPenaltyYards',
'O_LastWkBye', 'O_5WkAvgPts', 'O_5WkAvgFumbles', 'O_5WkAvgInterceptions', 'O_EwmaPenaltyYards')
for ax, col in zip(axs.flatten(), cols):
ax.scatter(y=games.Spread, x=games[col], alpha=0.1)
ax.set_title(col, fontsize=12)
plt.show()
# Set up cross-validation
from sklearn import cross_validation
from sklearn.metrics import mean_squared_error
def perform_kfold_cross_validation(model, all_X, all_y, k=5):
    '''
    Calculate root mean squared error for each cross-validation fold.
    Parameters:
        model - a scikit learn model
        all_X - a pandas DataFrame with the observed input data
        all_y - a pandas Series with the observed outcome
        k - number of cross validation folds (test set will be 1/k of the data)
    Return value:
        An array of length 'k' with the root mean squared error
        for each fold.
    '''
# 'folds' is a generator that will yield pairs of arrays (train, test)
# selecting row numbers for training/testing
folds = cross_validation.KFold(n=len(all_y), n_folds=k)
RMSE = [] # root mean squared errors
# Loop over the cross-validation folds
for training, testing in folds:
# Get the training and test splits
training = all_X.index[training]
testing = all_X.index[testing]
X_train, X_test = all_X.ix[training], all_X.ix[testing]
y_train, y_test = all_y.ix[training], all_y.ix[testing]
# Train the model
model.fit(X_train, y_train)
# Use the model to predict output
y_fitted = model.predict(X_test)
RMSE.append(np.sqrt(mean_squared_error(y_test, y_fitted)))
# Leave the model fit to the entire dataset
model.fit(all_X, all_y)
# And return the array of root mean squared errors
return RMSE
##
# Some popular regression models
from sklearn import ensemble
from sklearn import linear_model
from sklearn import svm
from sklearn import tree
# To see available tuning settings are for a model,
# call help on its initialization function, e.g. :
# help(linear_model.Ridge.__init__)
models = dict(
ols = linear_model.LinearRegression(),
gbm = ensemble.GradientBoostingRegressor(max_depth=5),
ridge = linear_model.Ridge(),
svr = svm.LinearSVR(epsilon=2),
tree = tree.DecisionTreeRegressor(max_depth=5),
random_forest = ensemble.RandomForestRegressor(n_estimators=5, max_depth=5)
)
rmses = {}
for name, model in models.items():
rmses[name] = perform_kfold_cross_validation(model, input_data, spread, k=8)
pd.DataFrame(rmses).boxplot(vert=False, return_type='axes')
plt.gcf().set_size_inches(9, 5)
plt.xlabel("Error in predicted spread"); plt.ylabel("Model")
plt.show()
fig, axs = plt.subplots(2, 3, sharey=True)
fig.set_size_inches(12, 8)
# Make the train/test split be pre-2014/2014
train = games[no_nulls].Season < 2014
test = games[no_nulls].Season == 2014
for (ax, (name, model)) in zip(axs.flatten(), models.items()):
model.fit(input_data.ix[train], spread[train])
ax.scatter(x=spread[test], y=model.predict(input_data.ix[test]), alpha=0.2)
ax.plot((-60,60), (-60,60), ls="--", c=".3", color='gray') # Diagonal line 1=1
ax.set_title(name)
ax.set_ylim(-30,30)
ax.set_ylabel('Predicted')
plt.show()
# To see available tuning settings are for a model,
# call help on its initialization function, e.g. :
# help(linear_model.Ridge.__init__)
from sklearn import naive_bayes
# We can use the Naive Bayes classifier so long as there are no
# columns with negative values
input_data = games[no_nulls][[
'Team', 'Opponent',
'AtHome',
'T_LastWkBye',
'T_5WkAvgPts', 'T_5WkAvgFumbles', 'T_5WkAvgInterceptions', 'T_EwmaPenaltyYards',
'O_LastWkBye',
'O_5WkAvgPts', 'O_5WkAvgFumbles', 'O_5WkAvgInterceptions', 'O_EwmaPenaltyYards'
]]
# The input columns 'Team' and 'Opponent', contain categories,
# so we have to make dummy variables to use in the regression.
input_data = pd.get_dummies(input_data)
print("Size of the input set:", input_data.shape)
models = dict(
logistic = linear_model.LogisticRegression(),
gbc = ensemble.GradientBoostingClassifier(max_depth=5),
ridge = linear_model.RidgeClassifier(),
tree = tree.DecisionTreeClassifier(max_depth=5),
#svc = svm.LinearSVC(),
naive_bayes = naive_bayes.MultinomialNB(), # Can only use if all inputs are positive
random_forest = ensemble.RandomForestClassifier(n_estimators=10, max_depth=5)
)
win = (spread > 0).astype(int)
rmses = {}
for name, model in models.items():
rmses[name] = perform_kfold_cross_validation(model, input_data, win, k=3)
pd.DataFrame(rmses).boxplot(vert=False, return_type='axes')
plt.gcf().set_size_inches(9, 5)
plt.xlabel("Error in prediction"); plt.ylabel("Model")
plt.show()
fig, axs = plt.subplots(2, 3, sharey=True)
fig.set_size_inches(12, 8)
# Make the train/test split be pre-2014/2014
train = games[no_nulls].Season < 2014
test = games[no_nulls].Season == 2014
for (ax, (name, model)) in zip(axs.flatten(), models.items()):
pd.crosstab(win.apply(lambda x: ('Lose', 'Win')[x]),
pd.Series(('Pred. Lose', 'Pred. Win')[x] for x in model.predict(input_data))).plot(
ax=ax, kind='barh', stacked=True, legend=None,
color=['red','green'], figsize=(9,5))
if ax.is_last_col():
# Legend to the right of the current axis
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5), fontsize=12, frameon=False)
ax.set_xticks([])
ax.set_title(name)
ax.set_ylabel('Actual')
plt.show()
##
# ROC charts
# http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html
from sklearn.metrics import roc_curve, auc
fig, axs = plt.subplots(2, 3, sharey=True)
fig.set_size_inches(12, 8)
for (ax, (name, model)) in zip(axs.flatten(), models.items()):
try:
fpr, tpr, _ = roc_curve(win[test], model.predict_proba(input_data[test])[:,1])
except:
fpr, tpr, _ = roc_curve(win[test], model.decision_function(input_data[test]))
roc_auc = auc(fpr, tpr)
ax.plot(fpr, tpr, label='ROC curve\n(area = {:0.2f})'.format(roc_auc))
ax.plot((0,0), (1,1), ls="--", c=".3", color='lightgray') # Diagonal line 1=1
ax.set_xlim([0.0, 1.0])
ax.set_ylim([0.0, 1.05])
ax.legend(loc='lower right', fontsize=12, frameon=False)
if ax.is_first_col():
ax.set_ylabel('True Positive Rate')
ax.set_xticks([])
ax.set_xlabel('False Positive Rate')
ax.set_title(name)
plt.show()
# Week by week
# Pick the GBC model since that looked best
model = models['gbc']
# Pick columns to show the results
results = games[no_nulls][['Season', 'Week', 'Team', 'Opponent', 'AtHome']]
for wk in range(1, 18):
# Each week we want only the dates before
    print(wk, '...', end=' ')
train = (games[no_nulls].Season < 2014) | (games[no_nulls].Week < wk)
test = (games[no_nulls].Season == 2014) & (games[no_nulls].Week == wk)
model.fit(input_data[train], win[train])
probability = model.predict_proba(input_data[test])[:,1]
results.ix[test, 'Win_Actual'] = win[test]
results.ix[test, 'Win_Predicted'] = probability
results.shape
results = results[(results.Season==2014) & (results.Week < 18)]
# Merge on the home team
resultsH = results[results.AtHome]
resultsA = results[~results.AtHome]
del resultsH['AtHome']
del resultsA['AtHome']
del resultsA['Win_Actual']
resultsH.columns = ['Season', 'Week', 'Team', 'Opponent', 'Home_Win', 'Home_Pred_W']
resultsA.columns = ['Season', 'Week', 'Opponent', 'Team', 'Away_Pred_W']
resultsH = resultsH.merge(resultsA, on=['Season', 'Week', 'Team', 'Opponent'])
resultsH.columns
resultsH.sort(['Week', 'Team'])
resultsH['Odds'] = resultsH.Home_Pred_W / resultsH.Away_Pred_W
resultsH['Rank'] = resultsH.groupby('Week').Odds.rank()
resultsH = resultsH.sort(['Week','Rank'], ascending=False)
resultsH[resultsH.Week == 1]
results.ix[results.Season==2014, ['Win_Actual', 'Win_Predicted']].boxplot(
by='Win_Actual', figsize=(7,4))
plt.suptitle('')
plt.title('Actual Wins vs Prediction')
plt.ylabel('Prediction'); plt.xlabel('Won?')
plt.show()
# The script below does just this
%run make_predictions.py
# And the other code
%run extra_code/make_datasheet.py
# ...Finally
%run extra_code/make_gamesheets.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: OK, Jupyter
Step2: Pandas I/O
Step3: Data description
Step4: Takeaway
Step5: Or we can look at a particular team / teams
Step6: Interpretation
Step8: Set up to do cross-validation
Step9: Try out some regression models
Step10: Well, then. Kind of disappointing.
Step11: For the confidence pool
Step12: Code to get a confidence rating
Step13: Use the model
|
11,452
|
<ASSISTANT_TASK:>
Python Code:
# set up Python environment: numpy for numerical routines, and matplotlib for plotting
import numpy as np
import matplotlib.pyplot as plt
# display plots in this notebook
%matplotlib inline
# set display defaults
plt.rcParams['figure.figsize'] = (10, 10) # large images
plt.rcParams['image.interpolation'] = 'nearest' # don't interpolate: show square pixels
plt.rcParams['image.cmap'] = 'gray' # use grayscale output rather than a (potentially misleading) color heatmap
# The caffe module needs to be on the Python path;
# we'll add it here explicitly.
import sys
caffe_root = '../' # this file should be run from {caffe_root}/examples (otherwise change this line)
sys.path.insert(0, caffe_root + 'python')
import caffe
# If you get "No module named _caffe", either you have not built pycaffe or you have the wrong path.
import os
if os.path.isfile(caffe_root + 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel'):
print 'CaffeNet found.'
else:
print 'Downloading pre-trained CaffeNet model...'
!../scripts/download_model_binary.py ../models/bvlc_reference_caffenet
caffe.set_mode_cpu()
model_def = caffe_root + 'models/bvlc_reference_caffenet/deploy.prototxt'
model_weights = caffe_root + 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel'
net = caffe.Net(model_def, # defines the structure of the model
model_weights, # contains the trained weights
caffe.TEST) # use test mode (e.g., don't perform dropout)
# load the mean ImageNet image (as distributed with Caffe) for subtraction
mu = np.load(caffe_root + 'python/caffe/imagenet/ilsvrc_2012_mean.npy')
mu = mu.mean(1).mean(1) # average over pixels to obtain the mean (BGR) pixel values
print 'mean-subtracted values:', zip('BGR', mu)
# create transformer for the input called 'data'
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2,0,1)) # move image channels to outermost dimension
transformer.set_mean('data', mu) # subtract the dataset-mean value in each channel
transformer.set_raw_scale('data', 255) # rescale from [0, 1] to [0, 255]
transformer.set_channel_swap('data', (2,1,0)) # swap channels from RGB to BGR
# set the size of the input (we can skip this if we're happy
# with the default; we can also change it later, e.g., for different batch sizes)
net.blobs['data'].reshape(50, # batch size
3, # 3-channel (BGR) images
227, 227) # image size is 227x227
image = caffe.io.load_image(caffe_root + 'examples/images/cat.jpg')
transformed_image = transformer.preprocess('data', image)
plt.imshow(image)
# copy the image data into the memory allocated for the net
net.blobs['data'].data[...] = transformed_image
### perform classification
output = net.forward()
output_prob = output['prob'][0] # the output probability vector for the first image in the batch
print 'predicted class is:', output_prob.argmax()
# load ImageNet labels
labels_file = caffe_root + 'data/ilsvrc12/synset_words.txt'
if not os.path.exists(labels_file):
!../data/ilsvrc12/get_ilsvrc_aux.sh
labels = np.loadtxt(labels_file, str, delimiter='\t')
print 'output label:', labels[output_prob.argmax()]
# sort top five predictions from softmax output
top_inds = output_prob.argsort()[::-1][:5] # reverse sort and take five largest items
print 'probabilities and labels:'
zip(output_prob[top_inds], labels[top_inds])
%timeit net.forward()
caffe.set_device(0) # if we have multiple GPUs, pick the first one
caffe.set_mode_gpu()
net.forward() # run once before timing to set up memory
%timeit net.forward()
# for each layer, show the output shape
for layer_name, blob in net.blobs.iteritems():
print layer_name + '\t' + str(blob.data.shape)
for layer_name, param in net.params.iteritems():
print layer_name + '\t' + str(param[0].data.shape), str(param[1].data.shape)
def vis_square(data):
    '''Take an array of shape (n, height, width) or (n, height, width, 3)
    and visualize each (height, width) thing in a grid of size approx. sqrt(n) by sqrt(n)'''
# normalize data for display
data = (data - data.min()) / (data.max() - data.min())
# force the number of filters to be square
n = int(np.ceil(np.sqrt(data.shape[0])))
padding = (((0, n ** 2 - data.shape[0]),
(0, 1), (0, 1)) # add some space between filters
+ ((0, 0),) * (data.ndim - 3)) # don't pad the last dimension (if there is one)
data = np.pad(data, padding, mode='constant', constant_values=1) # pad with ones (white)
# tile the filters into an image
data = data.reshape((n, n) + data.shape[1:]).transpose((0, 2, 1, 3) + tuple(range(4, data.ndim + 1)))
data = data.reshape((n * data.shape[1], n * data.shape[3]) + data.shape[4:])
plt.imshow(data); plt.axis('off')
# the parameters are a list of [weights, biases]
filters = net.params['conv1'][0].data
vis_square(filters.transpose(0, 2, 3, 1))
feat = net.blobs['conv1'].data[0, :36]
vis_square(feat)
feat = net.blobs['pool5'].data[0]
vis_square(feat)
feat = net.blobs['fc6'].data[0]
plt.subplot(2, 1, 1)
plt.plot(feat.flat)
plt.subplot(2, 1, 2)
_ = plt.hist(feat.flat[feat.flat > 0], bins=100)
feat = net.blobs['prob'].data[0]
plt.figure(figsize=(15, 3))
plt.plot(feat.flat)
# download an image
my_image_url = "..." # paste your URL here
# for example:
# my_image_url = "https://upload.wikimedia.org/wikipedia/commons/b/be/Orang_Utan%2C_Semenggok_Forest_Reserve%2C_Sarawak%2C_Borneo%2C_Malaysia.JPG"
!wget -O image.jpg $my_image_url
# transform it and copy it into the net
image = caffe.io.load_image('image.jpg')
net.blobs['data'].data[...] = transformer.preprocess('data', image)
# perform classification
net.forward()
# obtain the output probabilities
output_prob = net.blobs['prob'].data[0]
# sort top five predictions from softmax output
top_inds = output_prob.argsort()[::-1][:5]
plt.imshow(image)
print 'probabilities and labels:'
zip(output_prob[top_inds], labels[top_inds])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load caffe.
Step2: If needed, download the reference model ("CaffeNet", a variant of AlexNet).
Step3: 2. Load net and set up input preprocessing
Step4: Set up input preprocessing. (We'll use Caffe's caffe.io.Transformer to do this, but this step is independent of other parts of Caffe, so any custom preprocessing code may be used).
Step5: 3. CPU classification
Step6: Load an image (that comes with Caffe) and perform the preprocessing we've set up.
Step7: Adorable! Let's classify it!
Step8: The net gives us a vector of probabilities; the most probable class was the 281st one. But is that correct? Let's check the ImageNet labels...
Step9: "Tabby cat" is correct! But let's also look at other top (but less confident predictions).
Step10: We see that less confident predictions are sensible.
Step11: That's a while, even for a batch of 50 images. Let's switch to GPU mode.
Step12: That should be much faster!
Step13: Now look at the parameter shapes. The parameters are exposed as another OrderedDict, net.params. We need to index the resulting values with either [0] for weights or [1] for biases.
Step15: Since we're dealing with four-dimensional data here, we'll define a helper function for visualizing sets of rectangular heatmaps.
Step16: First we'll look at the first layer filters, conv1
Step17: The first layer output, conv1 (rectified responses of the filters above, first 36 only)
Step18: The fifth layer after pooling, pool5
Step19: The first fully connected layer, fc6 (rectified)
Step20: The final probability output, prob
Step21: Note the cluster of strong predictions; the labels are sorted semantically. The top peaks correspond to the top predicted labels, as shown above.
|
11,453
|
<ASSISTANT_TASK:>
Python Code:
import os as OS
import arcpy as ARCPY
import SSDataObject as SSDO
import SSPanelObject as SSPO
import SSPanel as PANEL
ARCPY.overwriteOutput = True
inputFC = r'../data/CA_Counties_Panel.shp'
outputCube = r'../data/CA_Panel.nc'
fullFC = OS.path.abspath(inputFC)
outputCube = OS.path.abspath(outputCube)
fullPath, fcName = OS.path.split(fullFC)
ssdo = SSDO.SSDataObject(inputFC)
uniqueIDField = "MYID"
timeField = "YEAR"
analysisField = "PCR"
panelObj = SSPO.SSPanelObject(inputFC)
requireGeometry = panelObj.ssdo.shapeType.upper() == "POLYGON"
panelObj.obtainData(uniqueIDField, "YEAR", "1 Years", fields = [analysisField],
requireGeometry = requireGeometry)
panelCube = PANEL.SSPanel(outputCube, panelObj = panelObj)
varName = panelCube.fieldNames[0]
panelCube.mannKendall(varName)
panelCube.close()
panel = PANEL.SSPanel(outputCube)
print("# locations = {0}, # time periods = {1}".format(panel.numLocations, panel.numTime))
print(panel.obtainVariableList())
import pandas as PANDAS
locations = panel.locationLabel[0]
z = panel.obtainValues('PCR_TREND_ZSCORE')
pv = panel.obtainValues('PCR_TREND_PVALUE')
d = {'PCR_TREND_ZSCORE':z, 'PCR_TREND_PVALUE':pv}
df = PANDAS.DataFrame(d, index = locations)
print(df.head())
data = panel.obtainValues(analysisField)
print(data.shape)
import pysal as PYSAL
w = PYSAL.open(r"../data/queen.gal").read()
lm = PYSAL.LISA_Markov(data.T, w)
print(lm.classes)
print(lm.p)
panel.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example
Step2: Open Panel Cube From NetCDF File for Analysis
Step3: Number of Locations and Time Periods
Step4: List Variables
Step5: View Mann-Kendall Trend Results in PANDAS Data Frame
Step6: Get 3D Analysis Variable
Step7: Use PySAL to Analyze LISA Markov Transitions
Step8: View Transistion Probabilities
|
11,454
|
<ASSISTANT_TASK:>
Python Code:
def check_if_last_char_is_a_letter(txt):
'''
Create a function that returns True if the last character
of a given string is an alphabetical character and is not
a part of a word, and False otherwise.
Note: "word" is a group of characters separated by space.
Examples:
check_if_last_char_is_a_letter("apple pie") ➞ False
check_if_last_char_is_a_letter("apple pi e") ➞ True
check_if_last_char_is_a_letter("apple pi e ") ➞ False
check_if_last_char_is_a_letter("") ➞ False
'''
check = txt.split(' ')[-1]
return True if len(check) == 1 and (97 <= ord(check.lower()) <= 122) else False
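# Quick checks (sketch) based on the examples given in the docstring:
assert check_if_last_char_is_a_letter("apple pie") == False
assert check_if_last_char_is_a_letter("apple pi e") == True
assert check_if_last_char_is_a_letter("apple pi e ") == False
assert check_if_last_char_is_a_letter("") == False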
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
11,455
|
<ASSISTANT_TASK:>
Python Code:
! module use /g/data3/hh5/public/modules
! module load conda/analysis27
from ARCCSSive import CMIP5
from ARCCSSive.CMIP5.Model import Instance
from ARCCSSive.CMIP5.other_functions import unique
db=CMIP5.connect()
results=db.outputs(ensemble='r1i1p1',experiment='rcp45',mip='day')
results.count()
miroc5=results.filter(Instance.model == 'MIROC5')
miroc5.count()
unique(miroc5,'model')
not_miroc5=results.filter(Instance.model != 'MIROC5')
not_miroc5.count()
'MIROC5' in unique(not_miroc5,'model')
miroc_models=results.filter(Instance.model.like('MIROC%'))
# miroc_models=results.filter(Instance.model.like('miroc%'))
miroc_models.count()
unique(miroc_models,'model')
tasmin_tasmax=results.filter(Instance.variable.in_(['tasmin','tasmax']))
tasmin_tasmax.count()
unique(tasmin_tasmax,'variable')
not_tasmin_tasmax=results.filter(~Instance.variable.in_(['tasmin','tasmax']))
not_tasmin_tasmax.count()
print(unique(not_tasmin_tasmax,'variable'))
from sqlalchemy import and_, or_
miroc5_tas=results.filter(and_(Instance.model == 'MIROC5',Instance.variable == 'tas'))
print( miroc5_tas.count() )
print( unique(miroc5_tas,'model') )
print( unique(miroc5_tas,'variable') )
miroc5_tas=results.filter(Instance.model == 'MIROC5').filter(Instance.variable == 'tas')
print( miroc5_tas.count() )
print( unique(miroc5_tas,'model') )
print( unique(miroc5_tas,'variable') )
miroc5_tas=results.filter(Instance.model == 'MIROC5', Instance.variable == 'tas')
print( miroc5_tas.count() )
print( unique(miroc5_tas,'model') )
print( unique(miroc5_tas,'variable') )
miroc5_or_clt=results.filter(or_(Instance.model == 'MIROC5', Instance.variable == 'clt'))
miroc5_or_clt.count()
for o in miroc5_or_clt:
print( o.model, o.variable )
results=db.outputs(variable='tas',experiment='historical',mip='Amon',model='MIROC5')
print(type(results))
results=db.outputs()
results.count()
results=db.outputs(variable='tas',experiment='historical',mip='Amon',model='MIROC5').all()
print(type(results))
print( results )
result=db.outputs(variable='tas',experiment='historical',mip='Amon',model='MIROC5',ensemble='r1i1p1').one()
print(type(result))
result=db.outputs(variable='tas',experiment='historical',mip='Amon',model='MIROC5').one()
result=db.outputs(variable='tas',experiment='historical',mip='Amon',model='MIROC5').first()
print(type(result))
results=db.outputs(variable='tas',experiment='historical',mip='Amon',model='MIROC5').order_by(Instance.ensemble)
for o in results:
print(o.ensemble)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's start from a simple query and see how we can use different operators to refine it.
Step2: equals ( == )
Step3: not equals ( != )
Step4: LIKE ( like % ) NB like is case-insensitive
Step5: IN ( in_ )
Step6: NOT IN ( ~ in_ )
Step7: The AND and OR operators need to be explicitly imported
Step8: AND ( and_ )
Step9: Let's try using the same two constraints as consecutive filter calls.
Step10: and_ returns the same results as using two filters one after the other.
Step11: Again we're getting the same result as if we used and_.
Step12: Getting deeper in the query object
Step13: outputs( ) is a method of an ARCCSSive session which is actually using the SQLalchemy Session.query( ) method.
Step14: We can directly loop through the query object returned or we can use one of the methods that return a value, as
Step15: If we specify all 5 constraints that can be passed to the outputs function, we should always get only one row back, since you cannot have two rows sharing all these values.
Step16: Let's see what happens if you use one( ) with a query that returns multiple rows.
Step17: This generates an error, so we should use one( ) only when we are expecting exactly one row back, or if we want to generate two different responses in case a query returns one or many rows.
Step18: Another useful method of the query is order_by( ).
|
11,456
|
<ASSISTANT_TASK:>
Python Code:
def logistic(x):
    '''Elementwise logistic (sigmoid) function.'''
return 1/(1+np.exp(-x))
def U_logistic(theta, Y, X, phi):
    '''Potential energy U(theta): negative log-posterior for logistic
    regression with a Gaussian prior of precision phi.'''
return - (Y.T @ X @ theta - np.sum(np.log(1+np.exp(X @ theta))) - 0.5 * phi * np.sum(theta**2))
def gradU_logistic(theta, Y, X, phi):
    '''Gradient of U computed on the full data set, scaled by 1/n.'''
n = X.shape[0]
Y_pred = logistic(X @ theta)
epsilon = (Y[:,np.newaxis] - Y_pred[:,np.newaxis])
grad = X.T @ epsilon - phi * theta[:, np.newaxis]
return -grad/n
def hmc(Y, X, U, gradU, M, eps, m, theta0, phi):
    '''One HMC iteration: leapfrog-style dynamics followed by a
    Metropolis-Hastings accept/reject step on the total energy.'''
theta = theta0.copy()
n, p = X.shape
# Precompute
Minv = np.linalg.inv(M)
# Randomly sample momentum
r = np.random.multivariate_normal(np.zeros(p),M)[:,np.newaxis]
# Intial energy
H0 = U(theta0, Y, X, phi) + 0.5 * np.asscalar(r.T @ Minv @ r)
# Hamiltonian dynamics
r -= (eps/2)*gradU(theta, Y, X, phi)
for i in range(m):
theta += (eps*Minv@r).ravel()
r -= eps*gradU(theta, Y, X, phi)
r -= (eps/2)*gradU(theta, Y, X, phi)
# Final energy
H1 = U(theta, Y, X, phi) + np.asscalar(0.5 * r.T @ Minv @ r)
# MH step
u = np.random.uniform()
rho = np.exp(H0 - H1) # Acceptance probability
if u < np.min((1, rho)):
# accept
accept = True
H = H1
else:
# reject
theta = theta0
accept = False
H = H0
return theta, accept, rho, H
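# Note: the block above is a leapfrog-style integrator (a half momentum step, m
# position/momentum updates, then a final half momentum step), followed by a
# Metropolis-Hastings accept/reject on the change in total energy H.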
def run_hmc(Y, X, U, gradU, M, eps, m, theta, phi, nsample):
n, p = X.shape
# Allocate space
samples = np.zeros((nsample, p))
accept = np.zeros(nsample)
rho = np.zeros(nsample)
H = np.zeros(nsample)
# Run hmc
for i in range(nsample):
theta, accept[i], rho[i], H[i] = hmc(Y, X, U, gradU, M, eps, m, theta, phi)
samples[i] = theta
return samples, accept, rho, H
def stogradU(theta, Y, X, nbatch, phi):
'''A function that returns the stochastic gradient. Adapted from Eq. 5.
Inputs are:
theta, the parameters
Y, the response
X, the covariates
nbatch, the number of samples to take from the full data
'''
n, p = X.shape
# Sample minibatch
batch_id = np.random.choice(np.arange(n),nbatch,replace=False)
Y_pred = logistic(X[batch_id,:] @ theta[:,np.newaxis])
epsilon = (Y[batch_id,np.newaxis] - Y_pred)
grad = n/nbatch * X[batch_id,:].T @ epsilon - phi * theta[:, np.newaxis]
#return -grad/n
return -grad
def sghmc(Y, X, U, gradU, M, Minv, eps, m, theta, B, D, phi):
n, p = X.shape
# Randomly sample momentum
r = np.random.multivariate_normal(np.zeros(p),M)[:,np.newaxis]
# Hamiltonian dynamics
for i in range(m):
theta += (eps*Minv@r).ravel()
        # SGHMC momentum update: gradient step, friction term -eps*C@Minv@r, and
        # injected noise with covariance D = 2(C - B)*eps (C and nbatch come from
        # the enclosing notebook scope).
        r += - eps*stogradU(theta, Y, X, nbatch, phi) - eps*C @ Minv @ r \
             + np.random.multivariate_normal(np.zeros(p), D)[:, np.newaxis]
# Record the energy
H = U(theta, Y, X, phi) + np.asscalar(0.5 * r.T @ Minv @ r)
return theta, H
def run_sghmc(Y, X, U, gradU, M, eps, m, theta, C, V, phi, nsample):
n, p = X.shape
# Precompute
Minv = np.linalg.inv(M)
B = 0.5 * V * eps
D = 2*(C-B)*eps
# Allocate space
samples = np.zeros((nsample, p))
H = np.zeros(nsample)
# Run sghmc
for i in range(nsample):
theta, H[i] = sghmc(Y, X, U, gradU, M, Minv, eps, m, theta, B, D, phi)
samples[i] = theta
return samples, H
def gd(Y, X, gradU, eps, m, theta, phi):
    '''Plain gradient descent on U: m steps of size eps, returns theta.'''
for i in range(m):
theta -= eps*gradU(theta, Y, X, phi).ravel()
return theta
import numpy as np
import matplotlib.pyplot as plt
n = 500
p = 50
beta = np.random.normal(0, 1, p+1)
Sigma = np.zeros((p, p))
Sigma_diags = np.array([25, 5, 0.2**2])
distribution = np.random.multinomial(p, pvals=[.05, .05, .9], size=1).tolist()
np.fill_diagonal(Sigma, np.repeat(Sigma_diags, distribution[0], axis=0))
X = np.random.multivariate_normal(np.zeros(p), Sigma, n)
X = np.hstack((np.ones((n, 1)), X))
p = np.exp(X @ beta)/np.exp(1 + np.exp(X @ beta))
Y = np.random.binomial(1, p, n)
Xs = (X - np.mean(X, axis=0))/np.concatenate((np.ones(1),np.std(X[:,1:], axis=0)))
Xs = Xs[:,1:]
p = Xs.shape[1]
from sklearn.linear_model import LogisticRegression
# Unscaled
mod_logis = LogisticRegression(fit_intercept=False, C=1e50)
mod_logis.fit(X,Y)
beta_true_unscale = mod_logis.coef_.ravel()
beta_true_unscale
# Scaled
mod_logis = LogisticRegression(fit_intercept=False, C=1e50)
mod_logis.fit(Xs,Y)
beta_true_scale = mod_logis.coef_.ravel()
beta_true_scale
# HMC - Scaled
nsample = 1000
m = 20
eps = .0005
theta = np.zeros(p)
#theta = beta_true_scale.copy()
phi = 5
M = np.identity(p)
samples, accept, rho, H = run_hmc(Y, Xs, U_logistic, gradU_logistic, M, eps, m, theta, phi, nsample)
hmc_mean = np.mean(samples, axis=0)
np.mean(samples, axis=0) - beta_true_scale
plt.plot((samples - beta_true_scale)[:,3])
plt.show()
plt.plot(H)
plt.show()
# HMC - Scaled (no intercept)
nsample = 1000
m = 20
eps = .01
theta = np.zeros(p)
#theta = beta_true_scale.copy()
phi = 5
nbatch = 500
C = 1 * np.identity(p)
V = 0 * np.identity(p)
M = np.identity(p)
samples, H = run_sghmc(Y, Xs, U_logistic, gradU_logistic, M, eps, m, theta, C, V, phi, nsample)
print(np.mean(samples, axis=0) - beta_true_scale)
plt.plot((samples - beta_true_scale)[:,0])
plt.show()
plt.plot(H)
plt.show()
# Gradient descent - Scaled
np.random.seed(2)
phi = .1
res = gd(Y, Xs, gradU_logistic, .1, 20000, np.zeros(p), phi)
res - beta_true_scale
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Everything after here is the script that runs the simulation
Step2: Regression
Step3: HMC
Step4: HMC - Unscaled
Step5: HMC - Unscaled (no intercept)
|
11,457
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import svm
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
import matplotlib.mlab as mlab
import warnings
warnings.filterwarnings('ignore') # this is really annoying.
all_combined_train_features = pd.read_csv('data/combined-pe-features-apt.csv')
combined_train_features = pd.read_csv('data/combined-pe-features-apt-reduced.csv')
# BROKEN DO NOT USE: function_counts = pd.read_csv('data/call-graph-reduced-function_counts-apt.csv')
train_labels = pd.read_csv('data/sorted-pe-coff-train-labels-apt.csv')
combined_train_features.head()
all_combined_train_features.head()
# BROKEN: do not use (the read_csv above is commented out, so function_counts is undefined).
# function_counts.head()
train_labels.head()
X_all = all_combined_train_features.iloc[:,1:]
y = train_labels['label']
X_all.head()
X_all.describe()
# Train feature stats
X_all_train_means = X_all.mean()
X_all_train_medians = X_all.median()
X_all_train_maxs = X_all.max()
X_all_train_mins = X_all.min()
X_all_train_std = X_all.std()
X_all_train_cor = X_all.corr()
X_all_train_cov = X_all.cov()
X_all_train_maxs.head()
X_all.shape
X_all_train_cor.head()
X_all_train_cor.to_csv('data/pe-coff-feature-corr-apt.csv')
X_all_train_cov.head()
X_all_train_cov.to_csv('data/pe-coff-feature-cov-apt.csv')
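# Illustrative follow-up (added, not in the original notebook): list the most strongly
# positively correlated feature pairs from the correlation matrix computed above.
corr_pairs = (X_all_train_cor.where(np.triu(np.ones(X_all_train_cor.shape), k=1).astype(bool))
              .stack()
              .sort_values(ascending=False))
corr_pairs.head(10)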
all_column_names = list(all_combined_train_features.columns)
all_column_names[:20]
all_train_stats = pd.DataFrame()
all_train_stats['feature_name'] = all_column_names[1:]
all_train_stats.head()
all_train_stats['mean'] = list(X_all_train_means)
all_train_stats['median'] = list(X_all_train_medians)
all_train_stats['standard_deviation'] = list(X_all_train_std)
all_train_stats['max'] = list(X_all_train_maxs)
all_train_stats['min'] = list(X_all_train_mins)
all_train_stats.head()
all_train_stats.to_csv('data/pe-coff-train-stats-apt.csv')
plt.figure(figsize=(15,15))
# plt.scatter(X.iloc[:, 0], X.iloc[:, 1], c=y, cmap=plt.cm.coolwarm)
plt.scatter(X_all['es'], X_all['ds'], c=y, cmap='brg')
plt.xlabel('ES Register')
plt.ylabel('DS Register')
plt.title('PE/COFF ASM Register Scatter Plot')
plt.show()
plt.figure(figsize=(15,15))
x_graph = X_all['entropy']
num_bins = 100
n, bins, patches = plt.hist(x_graph, num_bins, normed=1, facecolor='green', alpha=0.5)
# add a 'best fit' line
y = mlab.normpdf(bins, x_graph.mean(), x_graph.std())
plt.plot(bins, y, 'r--')
plt.xlabel('PE/COFF File Entropy')
plt.ylabel('PE/COFF Sample Count')
plt.title('PE/COFF Histogram Plot')
plt.show()
plt.figure(figsize=(15,15))
# rectangular box plot
plt.boxplot(x_graph, showfliers=True, patch_artist=True)
plt.show()
plt.figure(figsize=(15,15))
x_graph = X_all['vertex_count']
num_bins = 100
n, bins, patches = plt.hist(x_graph, num_bins, normed=1, facecolor='green', alpha=0.5)
# add a 'best fit' line
y = mlab.normpdf(bins, x_graph.mean(), x_graph.std())
plt.plot(bins, y, 'r--')
plt.xlabel('PE/COFF Vertex Count')
plt.ylabel('PE/COFF Sample Count')
plt.title('PE/COFF Histogram Plot')
plt.show()
all_train_stats = pd.read_csv('data/pe-coff-train-stats-apt.csv', index_col=0)
all_train_stats.head()
X = all_train_stats.iloc[:,1:]
y = train_labels['label']
X['mean'].mean()
X['mean'].std()
plt.figure(figsize=(15,15))
x_graph = X['mean']
num_bins = 100
# the histogram of the data, we have to remove the PE header characteristic stats as
# they are not counts but contain memory base location values and other large values
# that produce huge outliers that ruin the graphs.
n, bins, patches = plt.hist(x_graph[0:119], num_bins, normed=1, facecolor='green', alpha=0.5)
# add a 'best fit' line
y = mlab.normpdf(bins, x_graph[0:119].mean(), x_graph[0:119].std())
plt.plot(bins, y, 'r--')
plt.xlabel('ASM Feature Means')
plt.ylabel('ASM Feature Probability')
plt.title('PE/COFF Histogram Plot')
plt.show()
plt.figure(figsize=(15,15))
x_graph = X['median']
num_bins = 100
# the histogram of the data, we have to remove the PE header characteristic stats as
# they are not counts but contain memory base location values and other large values
# that produce huge outliers that ruin the graphs.
n, bins, patches = plt.hist(x_graph[0:119], num_bins, normed=1, facecolor='green', alpha=0.5)
# add a 'best fit' line
y = mlab.normpdf(bins, x_graph[0:119].mean(), x_graph[0:119].std())
plt.plot(bins, y, 'r--')
plt.xlabel('ASM Feature Counts')
plt.ylabel('ASM Feature Means')
plt.title('PE/COFF Histogram Plot')
plt.show()
plt.figure(figsize=(15,15))
x_graph = all_combined_train_features['edx']
num_bins = 100
# the histogram of the data, we have to remove the PE header characteristic stats as
# they are not counts but contain memory base location values and other large values
# that produce huge outliers that ruin the graphs.
n, bins, patches = plt.hist(x_graph, num_bins, normed=1, facecolor='green', alpha=0.5)
# add a 'best fit' line
y = mlab.normpdf(bins, x_graph.mean(), x_graph.std())
plt.plot(bins, y, 'r--')
plt.xlabel('EDX Feature Counts')
plt.ylabel('EDX Feature Probability')
plt.title('PE/COFF Histogram Plot')
plt.show()
plt.figure(figsize=(15,15))
# rectangular box plot
plt.boxplot(x_graph, showfliers=True, patch_artist=True)
plt.show()
help(plt.boxplot)
plt.figure(figsize=(15,15))
# rectangular box plot
plt.boxplot(x_graph[0:119], vert=True, patch_artist=True)
plt.show()
# notch-shaped box plot of the same feature values as above
fig, ax = plt.subplots(figsize=(15, 15))
bplot2 = ax.boxplot(x_graph[0:119],
                    notch=True,         # notch shape
                    vert=True,          # vertical box alignment
                    patch_artist=True)  # fill with color
plt.show()
plt.figure(figsize=(15,15))
# Plot also the training points
# plt.scatter(X.iloc[:, 0], X.iloc[:, 1], c=y, cmap=plt.cm.coolwarm)
plt.scatter(range(X.shape[0]), X.iloc[:, 0], c=range(X.shape[0]), cmap='brg')
plt.xlabel('ASM Feature Index')
plt.ylabel('ASM Feature Means')
plt.title('PE/COFF Scatter Plot')
plt.yscale('log')
plt.show()
help(pd.read_csv)
column_names = list(combined_train_features.columns)
column_names[:10]
X = combined_train_features.iloc[:,1:]
y = train_labels['label']
# Train feature stats
# X_train_stats = X.describe()
X_train_means = X.mean()
X_train_medians = X.median()
X_train_std = X.std()
X_train_cor = X.corr()
X_train_cov = X.cov()
X_train_means.head()
train_stats = pd.DataFrame()
train_stats['feature_name'] = column_names[1:]
train_stats.head()
#train_stats= pd.DataFrame.join(X_train_means, X_train_medians)
#train_stats = pd.concat([X_train_means, X_train_medians, X_train_std])
#train_stats = train_stats.concat(pd.DataFrame(X_train_medians))
#train_stats = train_stats.merge(pd.DataFrame(X_train_std))
train_stats['mean'] = list(X_train_means) #.join(X_train_medians.to_frame())
train_stats['median'] = list(X_train_medians)
train_stats['standard_deviation'] = list(X_train_std)
#train_stats.columns = ['mean','median','std']
train_stats.head()
plt.figure(figsize=(15,15))
# Plot also the training points
# plt.scatter(X.iloc[:, 0], X.iloc[:, 1], c=y, cmap=plt.cm.coolwarm)
plt.scatter(range(X.shape[0]), X.iloc[:, 0], c=y, cmap='brg')
plt.xlabel('Sample Index')
plt.ylabel('EDI')
plt.title('PE/COFF Scatter Plot')
plt.show()
plt.figure(figsize=(15,15))
plt.scatter(range(X.shape[0]), X.iloc[:, 1], c=y, cmap='brg')
plt.xlabel('Sample Index')
plt.ylabel('ESI')
plt.title('PE/COFF Scatter Plot')
plt.show()
plt.figure(figsize=(15,15))
plt.scatter(range(X_all.shape[0]), X_all.loc[:,'entropy'], c=y, cmap='brg')
plt.xlabel('Sample Index')
plt.ylabel('Entropy')
plt.title('PE/COFF Scatter Plot')
plt.show()
plt.figure(figsize=(15,15))
# Plot also the training points
# plt.scatter(X.iloc[:, 0], X.iloc[:, 1], c=y, cmap=plt.cm.coolwarm)
plt.scatter(X.iloc[:, 0], X.iloc[:, 1], c=y, cmap='brg')
plt.xlabel('EDI')
plt.ylabel('ESI')
plt.title('PE/COFF Scatter Plot')
plt.show()
plt.figure(figsize=(15,15))
column_names = list(combined_train_features.columns)
# Plot also the training points
# plt.scatter(X.iloc[:, 0], X.iloc[:, 1], c=y, cmap=plt.cm.coolwarm)
plt.scatter(X.loc[:,'vertex_count'], X.loc[:,'edge_count'], c=X.loc[:,'delta_max'], cmap='brg')
plt.xlabel('vertex_count')
plt.ylabel('edge_count')
plt.title('PE/COFF Scatter Plot')
plt.show()
column_names = list(combined_train_features.columns)
plt.figure(figsize=(20,400))
for idx in range(20):
plt.subplot(20, 1, idx + 1)
idx2 = idx + 1
plt.scatter(X.iloc[:,idx], X.iloc[:,idx2], c=y, cmap='brg')
plt.xlabel(column_names[idx])
plt.ylabel(column_names[idx2])
plt.title('PE/COFF Scatter Plot')
plt.show()
help(plt.subplot)
all_combined_features = pd.read_csv('data/combined-pe-features-vs251.csv')
y = pd.read_csv('data/sorted-pe-coff-train-labels-vs251.csv')
all_combined_features.head()
y.head()
column_names = all_combined_features.columns
column_names
X_all = all_combined_features.iloc[:,1:]
#y_all = y['family_label']
y_all = list(y['label'])
len(y_all)
X_all.describe()
# Train feature stats
X_all_train_means = X_all.mean()
X_all_train_medians = X_all.median()
X_all_train_maxs = X_all.max()
X_all_train_mins = X_all.min()
X_all_train_std = X_all.std()
X_all_train_cor = X_all.corr()
X_all_train_cov = X_all.cov()
X_all_train_maxs.head()
vs251_stats = pd.DataFrame()
vs251_stats['feature_name'] = list(column_names[1:])
vs251_stats.head()
vs251_stats['mean'] = list(X_all_train_means)
vs251_stats['median'] = list(X_all_train_medians)
vs251_stats['standard_deviation'] = list(X_all_train_std)
vs251_stats['min'] = list(X_all_train_mins)
vs251_stats['max'] = list(X_all_train_maxs)
vs251_stats.head()
vs251_stats.to_csv('data/pe-coff-statistics-vs251.csv', index=False)
X_all_train_cor.to_csv('data/pe-coff-corr-vs251.csv', index=False)
X_all_train_cov.to_csv('data/pe-coff-cov-vs251.csv', index=False)
type(X_all_train_means)
plt.figure(figsize=(15,15))
#column_names = list(combined_train_features.columns)
# Plot also the training points
# plt.scatter(X.iloc[:, 0], X.iloc[:, 1], c=y, cmap=plt.cm.coolwarm)
plt.scatter(X_all.loc[:,'vertex_count'], X_all.loc[:,'edge_count'], c=y_all, cmap='brg')
plt.xlabel('vertex_count')
plt.ylabel('edge_count')
plt.title('PE/COFF Scatter Plot')
plt.show()
plt.figure(figsize=(15,15))
x_graph = X_all['entropy']
num_bins = 100
n, bins, patches = plt.hist(x_graph, num_bins, normed=1, facecolor='green', alpha=0.5)
# add a 'best fit' line
y = mlab.normpdf(bins, x_graph.mean(), x_graph.std())
plt.plot(bins, y, 'r--')
plt.xlabel('Shannons Entropy')
plt.ylabel('Feature Count')
plt.title('PE/COFF Histogram Plot')
plt.show()
plt.figure(figsize=(15,15))
# rectangular box plot
plt.boxplot(x_graph, vert=True, patch_artist=True)
plt.show()
function_count_features = pd.read_csv('data/sorted-pe-function-count-features-10percent-vs251.csv', na_filter=False)
sorted_train_labels = pd.read_csv('data/sorted-train-labels-vs251.csv', na_filter=False)
y = pd.read_csv('data/sorted-pe-coff-train-labels-vs251.csv', na_filter=False)
function_count_features.head()
sorted_train_labels.head()
y.head()
X_all = function_count_features.iloc[:,1:]
y_all = list(y['label'])
len(y_all)
X_all.describe()
function_count_features.shape
temp_x_4_std = X_all['sub_4'].std() * 4
temp_x_4_std
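# Note (added): clipping at 4*std is one way to tame outliers for plotting; a
# quantile-based cap is often more robust for heavily skewed count features, e.g.
#   x_graph = X_all['sub_4'].clip(upper=X_all['sub_4'].quantile(0.99))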
# OK, We are good to go :)
# Lets have a look at some graphs first.
plt.figure(figsize=(15,15))
x_graph = X_all['sub_4'].clip(-temp_x_4_std, temp_x_4_std)
num_bins = 100
n, bins, patches = plt.hist(x_graph, num_bins, normed=1, facecolor='green', alpha=0.5)
# add a 'best fit' line
y = mlab.normpdf(bins, x_graph.mean(), x_graph.std())
plt.plot(bins, y, 'r--')
plt.xlabel('Function Count (sub_4)')
plt.ylabel('Function Count Percentage')
plt.title('PE/COFF Function Count Histogram Plot')
plt.show()
plt.figure(figsize=(15,15))
# rectangular box plot
plt.boxplot(x_graph, vert=True, patch_artist=True)
plt.show()
help(pd.DataFrame.clip)
all_combined_features = pd.read_csv('data/combined-pe-features-vs252.csv')
all_train_labels = pd.read_csv('data/sorted-pe-coff-train-labels-vs252.csv')
all_combined_features.head()
all_train_labels.head()
X_all = all_combined_features.iloc[:,1:]
y_all = list(all_train_labels['label'])
len(y_all)
X_all.shape
feature_names = list(X_all.columns)
feature_names
vs252_stats = pd.DataFrame()
vs252_stats['feature_name'] = feature_names
vs252_stats['mean'] = list(X_all.mean())
vs252_stats['median'] = list(X_all.median())
vs252_stats['standard_deviation'] = list(X_all.std())
vs252_stats['min'] = list(X_all.min())
vs252_stats['max'] = list(X_all.max())
vs252_stats.head()
vs252_stats.to_csv('data/pe-coff-statistics-vs252.csv', index=False)
function_count_features = pd.read_csv('data/sorted-pe-function-count-features-10percent-vs252.csv', na_filter=False)
sorted_train_labels = pd.read_csv('data/sorted-train-labels-vs252.csv', na_filter=False)
y = pd.read_csv('data/sorted-pe-coff-train-labels-vs252.csv', na_filter=False)
function_count_features.head()
function_count_features.head(10)
sorted_train_labels.head(10)
y.head()
X_all = function_count_features.iloc[:,1:]
y_all = list(y['label'])
len(y_all)
X_all.describe()
temp_x_4_std = X_all['sub_4'].std() * 4
temp_x_4_std
# Histogram
plt.figure(figsize=(15,15))
x_graph = X_all['sub_4'].clip(-temp_x_4_std, temp_x_4_std)
num_bins = 100
n, bins, patches = plt.hist(x_graph, num_bins, normed=1, facecolor='green', alpha=0.5)
# add a 'best fit' line
y = mlab.normpdf(bins, x_graph.mean(), x_graph.std())
plt.plot(bins, y, 'r--')
plt.xlabel('Function Count (sub_4)')
plt.ylabel('Function Count Percentage')
plt.title('PE/COFF Function Count Histogram Plot')
plt.show()
# rectangular box plot
plt.figure(figsize=(15,15))
plt.boxplot(x_graph, vert=True, patch_artist=True)
plt.show()
#fig, axes = plt.subplots(nrows=2, ncols=10)
#df['A'].plot(ax=axes[0,0]); axes[0,0].set_title('A')
column_names = list(X_all.columns)
plt.figure(figsize=(20,400))
for idx in range(10):
plt.subplot(10, 1, idx + 1)
idx2 = idx + 1
    plt.scatter(X_all.iloc[:,idx], X_all.iloc[:,idx2], c=y_all, cmap='brg')  # y was overwritten above, so colour by the label list
plt.xlabel(column_names[idx])
plt.ylabel(column_names[idx2])
plt.title('PE/COFF Scatter Plot')
plt.show()
temp_x_3_std = X_all.iloc[:,0:10].std() * 3
temp_x_3_std
temp_x_2_std = X_all.iloc[:,0:10].std() * 2
temp_x_2_std
temp_x_1_std = X_all.iloc[:,0:10].std()
temp_x_1_std
plt.figure(figsize=(20,10));
temp_x_3_std = X_all.iloc[:,0:10].std()
x_graph = X_all.iloc[:,0:10].clip(0, temp_x_3_std, axis=1) # apply the clip function column wise.
x_graph = X_all.iloc[:,0:10].clip(0, 10, axis=1)
bp = x_graph.boxplot()
plt.show()
X_all['__vba'].mean()
X_all['__vba'].max()
x_graph['__vba'].min()
X_all['__vba'].std()
plt.show()
# df = DataFrame   # leftover scratch line; pandas' DataFrame is not imported by name here
help(pd.DataFrame.std)
help(pd.DataFrame.clip)
all_combined_features = pd.read_csv('data/combined-pe-features-vs263.csv')
all_train_labels = pd.read_csv('data/sorted-pe-coff-train-labels-vs263.csv')
all_combined_features.head()
X_all = all_combined_features.iloc[:,1:]
y_all = all_train_labels['label']
X_all.head()
X_all.shape
len(y_all)
feature_names = list(X_all.columns)
feature_names
vs263_stats = pd.DataFrame()
vs263_stats['feature_name'] = feature_names
vs263_stats['mean'] = list(X_all.mean())
vs263_stats['median'] = list(X_all.median())
vs263_stats['standard_deviation'] = list(X_all.std())
vs263_stats['min'] = list(X_all.min())
vs263_stats['max'] = list(X_all.max())
vs263_stats.head()
vs263_stats.to_csv('data/pe-coff-statistics-vs263.csv', index=False)
function_count_features = pd.read_csv('data/sorted-pe-function-count-features-10percent-vs263.csv', na_filter=False)
sorted_train_labels = pd.read_csv('data/sorted-train-labels-vs263.csv', na_filter=False)
y = pd.read_csv('data/sorted-pe-coff-train-labels-vs263.csv', na_filter=False)
function_count_features.head()
X_all = function_count_features.iloc[:,1:]
y_all = list(y['label'])
len(y_all)
sorted_train_labels.head()
all_combined_features = pd.read_csv('data/combined-pe-features-vs264.csv')
all_train_labels = pd.read_csv('data/sorted-pe-coff-train-labels-vs264.csv')
all_combined_features.head()
all_train_labels.head()
all_train_labels.shape
all_combined_features.shape
X_all = all_combined_features.iloc[:,1:]
y_all = list(all_train_labels['label'])
feature_names = list(X_all.columns)
vs264 = pd.DataFrame()
vs264['feature_name'] = feature_names
vs264['mean'] = list(X_all.mean())
vs264['median'] = list(X_all.median())
vs264['standard_deviation'] = list(X_all.std())
vs264['min'] = list(X_all.min())
vs264['max'] = list(X_all.max())
vs264.head()
vs264.to_csv('data/pe-coff-statistics-vs264.csv', index=False)
function_count_features = pd.read_csv('data/sorted-pe-function-count-features-10percent-vs264.csv', na_filter=False)
sorted_train_labels = pd.read_csv('data/sorted-train-labels-vs264.csv', na_filter=False)
y = pd.read_csv('data/sorted-pe-coff-train-labels-vs264.csv', na_filter=False)
function_count_features.head()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. APT All Combined Feature Statistics
Step2: 2.1 APT PE/COFF Statistics Graphs.
Step3: 3. APT Reduced Feature Set Statistics
Step4: 3. VirusShare 251 Statistics.
Step5: Analyse PE/COFF Function Count Features vs251.
Step6: 4. VirusShare 252 Statistics.
Step7: Analyse PE/COFF Function Count Features vs252.
Step8: 5. VirusShare 263 Statistics.
Step9: PE/COFF Function Count Features vs263
Step10: 6. VirusShare 264 Statistics.
Step11: PE/COFF Function Count Features vs264
|
11,458
|
<ASSISTANT_TASK:>
Python Code:
tpl_path = '../../pyfas/test/test_files/'
fname = '11_2022_BD.tpl'
tpl = fa.Tpl(tpl_path+fname)
tpl.filter_data('PT')
tpl.filter_data("'POSITION:' 'EXIT'")
pd.DataFrame(tpl.filter_data('PT'), index=("Trends",)).T
tpl.view_trends('PT')
tpl.view_trends('TM')
tpl.view_trends('PT')
# single trend extraction
tpl.extract(11)
tpl.extract(38)
# multiple trends extraction
tpl.extract(12, 37)
tpl.data.keys()
tpl.label
%matplotlib inline
pt_inlet = tpl.data[38]
tm_outlet = tpl.data[11]
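# Sanity check (added, illustrative): each extracted trend should have one value per
# time stamp in tpl.time, which the plotting below relies on.
# assert len(tpl.time) == len(pt_inlet) == len(tm_outlet)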
fig, ax1 = plt.subplots(figsize=(12, 7));
ax1.grid(True)
p0, = ax1.plot(tpl.time/3600, tm_outlet)
ax1.set_ylabel("Outlet T [C]", fontsize=16)
ax1.set_xlabel("Time [h]", fontsize=16)
ax2 = ax1.twinx()
p1, = ax2.plot(tpl.time/3600, pt_inlet/1e5, 'r')
ax2.grid(False)
ax2.set_ylabel("Inlet P [bara]", fontsize=16)
ax1.tick_params(axis="both", labelsize=16)
ax2.tick_params(axis="both", labelsize=16)
plt.legend((p0, p1), ("Outlet T", "Inlet P"), loc=4, fontsize=16)
plt.title("Inlet P and Outlet T for case FC1", size=20);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Trend selection
Step2: or
Step3: The same output can be reported as a pandas dataframe
Step4: The view_trends method provides the same info better arranged
Step5: Dump to excel
Step6: Our targets are
Step7: The tpl object now has the four trends available in the data attribute
Step8: while the label attribute stores the variable type as a dictionary
Step9: Data processing
|
11,459
|
<ASSISTANT_TASK:>
Python Code:
deck = [rank + suit for rank in '23456789TJQKA' for suit in 'SHDC']
print(deck)
import random
def deal(numhands=1, n=5, deck=[r+s for r in '23456789TJQKA' for s in 'SHDC']):
    """Takes in numhands, and optionally a deck; shuffles and returns a list of numhands hands of n cards each."""
    assert numhands*n <= len(deck), "you're trying to deal more cards than are in the deck"
random.shuffle(deck)
return [deck[i*n:n*(i+1)] for i in range(numhands)]
deal(3)
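# Quick sanity check (added, illustrative): the hands are produced by slicing one
# shuffled deck, so no card should appear twice across them.
hands = deal(numhands=4, n=5)
all_cards = [card for hand in hands for card in hand]
assert len(all_cards) == len(set(all_cards)), "duplicate card dealt"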
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Now to shuffle and deal cards from the deck
|
11,460
|
<ASSISTANT_TASK:>
Python Code:
# Copyright 2021 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
! pip install tflite-model-maker tensorflow==2.5
import tensorflow as tf
import tflite_model_maker as mm
from tflite_model_maker import audio_classifier
import os
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import itertools
import glob
import random
from IPython.display import Audio, Image
from scipy.io import wavfile
print(f"TensorFlow Version: {tf.__version__}")
print(f"Model Maker Version: {mm.__version__}")
birds_dataset_folder = tf.keras.utils.get_file('birds_dataset.zip',
'https://storage.googleapis.com/laurencemoroney-blog.appspot.com/birds_dataset.zip',
cache_dir='./',
cache_subdir='dataset',
extract=True)
# @title [Run this] Util functions and data structures.
data_dir = './dataset/small_birds_dataset'
bird_code_to_name = {
'wbwwre1': 'White-breasted Wood-Wren',
'houspa': 'House Sparrow',
'redcro': 'Red Crossbill',
'chcant2': 'Chestnut-crowned Antpitta',
'azaspi1': "Azara's Spinetail",
}
birds_images = {
'wbwwre1': 'https://upload.wikimedia.org/wikipedia/commons/thumb/2/22/Henicorhina_leucosticta_%28Cucarachero_pechiblanco%29_-_Juvenil_%2814037225664%29.jpg/640px-Henicorhina_leucosticta_%28Cucarachero_pechiblanco%29_-_Juvenil_%2814037225664%29.jpg', # Alejandro Bayer Tamayo from Armenia, Colombia
'houspa': 'https://upload.wikimedia.org/wikipedia/commons/thumb/5/52/House_Sparrow%2C_England_-_May_09.jpg/571px-House_Sparrow%2C_England_-_May_09.jpg', # Diliff
'redcro': 'https://upload.wikimedia.org/wikipedia/commons/thumb/4/49/Red_Crossbills_%28Male%29.jpg/640px-Red_Crossbills_%28Male%29.jpg', # Elaine R. Wilson, www.naturespicsonline.com
'chcant2': 'https://upload.wikimedia.org/wikipedia/commons/thumb/6/67/Chestnut-crowned_antpitta_%2846933264335%29.jpg/640px-Chestnut-crowned_antpitta_%2846933264335%29.jpg', # Mike's Birds from Riverside, CA, US
'azaspi1': 'https://upload.wikimedia.org/wikipedia/commons/thumb/b/b2/Synallaxis_azarae_76608368.jpg/640px-Synallaxis_azarae_76608368.jpg', # https://www.inaturalist.org/photos/76608368
}
test_files = os.path.join('/content', data_dir, 'test/*/*.wav')
def get_random_audio_file():
test_list = glob.glob(test_files)
random_audio_path = random.choice(test_list)
return random_audio_path
def show_bird_data(audio_path):
sample_rate, audio_data = wavfile.read(audio_path, 'rb')
bird_code = audio_path.split('/')[-2]
print(f'Bird name: {bird_code_to_name[bird_code]}')
print(f'Bird code: {bird_code}')
display(Image(birds_images[bird_code]))
plttitle = f'{bird_code_to_name[bird_code]} ({bird_code})'
plt.title(plttitle)
plt.plot(audio_data)
display(Audio(audio_data, rate=sample_rate))
print('functions and data structures created')
random_audio = get_random_audio_file()
show_bird_data(random_audio)
spec = audio_classifier.YamNetSpec(
keep_yamnet_and_custom_heads=True,
frame_step=3 * audio_classifier.YamNetSpec.EXPECTED_WAVEFORM_LENGTH,
frame_length=6 * audio_classifier.YamNetSpec.EXPECTED_WAVEFORM_LENGTH)
train_data = audio_classifier.DataLoader.from_folder(
spec, os.path.join(data_dir, 'train'), cache=True)
train_data, validation_data = train_data.split(0.8)
test_data = audio_classifier.DataLoader.from_folder(
spec, os.path.join(data_dir, 'test'), cache=True)
batch_size = 128
epochs = 100
print('Training the model')
model = audio_classifier.create(
train_data,
spec,
validation_data,
batch_size=batch_size,
epochs=epochs)
print('Evaluating the model')
model.evaluate(test_data)
def show_confusion_matrix(confusion, test_labels):
  """Compute confusion matrix and normalize."""
confusion_normalized = confusion.astype("float") / confusion.sum(axis=1)
axis_labels = test_labels
ax = sns.heatmap(
confusion_normalized, xticklabels=axis_labels, yticklabels=axis_labels,
cmap='Blues', annot=True, fmt='.2f', square=True)
plt.title("Confusion matrix")
plt.ylabel("True label")
plt.xlabel("Predicted label")
confusion_matrix = model.confusion_matrix(test_data)
show_confusion_matrix(confusion_matrix.numpy(), test_data.index_to_label)
serving_model = model.create_serving_model()
print(f'Model\'s input shape and type: {serving_model.inputs}')
print(f'Model\'s output shape and type: {serving_model.outputs}')
# if you want to try another file just uncomment the line below
random_audio = get_random_audio_file()
show_bird_data(random_audio)
sample_rate, audio_data = wavfile.read(random_audio, 'rb')
audio_data = np.array(audio_data) / tf.int16.max
input_size = serving_model.input_shape[1]
splitted_audio_data = tf.signal.frame(audio_data, input_size, input_size, pad_end=True, pad_value=0)
print(f'Test audio path: {random_audio}')
print(f'Original size of the audio data: {len(audio_data)}')
print(f'Number of windows for inference: {len(splitted_audio_data)}')
print(random_audio)
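# Illustration (added) of what tf.signal.frame does with pad_end=True:
#   tf.signal.frame(tf.range(10), frame_length=4, frame_step=4, pad_end=True, pad_value=0)
#   -> [[0 1 2 3], [4 5 6 7], [8 9 0 0]]
# i.e. the audio above is cut into fixed-size windows, zero-padding the last one.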
results = []
print('Result of the window ith: your model class -> score, (spec class -> score)')
for i, data in enumerate(splitted_audio_data):
yamnet_output, inference = serving_model(data)
results.append(inference[0].numpy())
result_index = tf.argmax(inference[0])
spec_result_index = tf.argmax(yamnet_output[0])
t = spec._yamnet_labels()[spec_result_index]
result_str = f'Result of the window {i}: ' \
f'\t{test_data.index_to_label[result_index]} -> {inference[0][result_index].numpy():.3f}, ' \
f'\t({spec._yamnet_labels()[spec_result_index]} -> {yamnet_output[0][spec_result_index]:.3f})'
print(result_str)
results_np = np.array(results)
mean_results = results_np.mean(axis=0)
result_index = mean_results.argmax()
print(f'Mean result: {test_data.index_to_label[result_index]} -> {mean_results[result_index]}')
models_path = './birds_models'
print(f'Exporting the TFLite model to {models_path}')
model.export(models_path, tflite_filename='my_birds_model.tflite')
model.export(models_path, export_format=[mm.ExportFormat.SAVED_MODEL, mm.ExportFormat.LABEL])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Transfer Learning for the Audio Domain with Model Maker
Step2: Import TensorFlow, Model Maker and other libraries
Step3: The Birds dataset
Step4: Explore the data
Step5: Playing some audio
Step6: Training the Model
Step7: Loading the data
Step8: Training the model
Step9: The accuracy looks good but it's important to run the evaluation step on the test data and verify your model achieved good results on unseen data.
Step11: Understanding your model
Step12: Testing the model [Optional]
Step13: Coming back to the random audio you loaded earlier
Step14: The model created has a fixed input window.
Step15: You'll loop over all the splitted audio and apply the model for each one of them.
Step16: Exporting the model
Step17: You can also export the SavedModel version for serving or using on a Python environment.
|
11,461
|
<ASSISTANT_TASK:>
Python Code:
# <!-- collapse=True -->
# import the required modules
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy import stats
import seaborn as sns
np.random.seed(2016) # make the random draws reproducible
# seaborn aesthetic parameters
sns.set_palette("deep", desat=.6)
sns.set_context(rc={"figure.figsize": (8, 4)})
# Plotting a histogram
mu, sigma = 0, 0.2 # mean and standard deviation
datos = np.random.normal(mu, sigma, 1000) # create the data sample
# histogram of the normal distribution.
cuenta, cajas, ignorar = plt.hist(datos, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma')
plt.show()
# Plotting the PMF
n, p = 30, 0.4 # shape parameters of the binomial distribution
n_1, p_1 = 20, 0.3 # shape parameters of the binomial distribution
x = np.arange(stats.binom.ppf(0.01, n, p),
              stats.binom.ppf(0.99, n, p))
x_1 = np.arange(stats.binom.ppf(0.01, n_1, p_1),
                stats.binom.ppf(0.99, n_1, p_1))
fmp = stats.binom.pmf(x, n, p) # Probability Mass Function
fmp_1 = stats.binom.pmf(x_1, n_1, p_1) # Probability Mass Function
plt.plot(x, fmp, '--')
plt.plot(x_1, fmp_1)
plt.vlines(x, 0, fmp, colors='b', lw=5, alpha=0.5)
plt.vlines(x_1, 0, fmp_1, colors='g', lw=5, alpha=0.5)
plt.title('Función de Masa de Probabilidad')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# Plotting the Cumulative Distribution Function with Python
x_1 = np.linspace(stats.norm(10, 1.2).ppf(0.01),
                  stats.norm(10, 1.2).ppf(0.99), 100)
fda_binom = stats.binom.cdf(x, n, p) # Cumulative Distribution Function
fda_normal = stats.norm(10, 1.2).cdf(x_1) # Cumulative Distribution Function
plt.plot(x, fda_binom, '--', label='FDA binomial')
plt.plot(x_1, fda_normal, label='FDA nomal')
plt.title('Función de Distribución Acumulada')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.legend(loc=4)
plt.show()
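# Quick check (added, illustrative): ppf is the inverse of cdf, which is why this
# notebook builds its x-grids with np.linspace(dist.ppf(0.01), dist.ppf(0.99), ...):
# that range spans the central 98% of the probability mass.
dist = stats.norm(10, 1.2)
print(dist.cdf(dist.ppf(0.975)))  # ~0.975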
# Plotting the Probability Density Function with Python
FDP_normal = stats.norm(10, 1.2).pdf(x_1) # PDF
plt.plot(x_1, FDP_normal, label='FDP nomal')
plt.title('Función de Densidad de Probabilidad')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# Plotting the Poisson distribution
mu = 3.6 # shape parameter
poisson = stats.poisson(mu) # distribution object
x = np.arange(poisson.ppf(0.01),
              poisson.ppf(0.99))
fmp = poisson.pmf(x) # Probability Mass Function
plt.plot(x, fmp, '--')
plt.vlines(x, 0, fmp, colors='b', lw=5, alpha=0.5)
plt.title('Distribución Poisson')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histogram
aleatorios = poisson.rvs(1000) # generate random samples
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma Poisson')
plt.show()
# Plotting the binomial distribution
N, p = 30, 0.4 # shape parameters
binomial = stats.binom(N, p) # distribution object
x = np.arange(binomial.ppf(0.01),
              binomial.ppf(0.99))
fmp = binomial.pmf(x) # Probability Mass Function
plt.plot(x, fmp, '--')
plt.vlines(x, 0, fmp, colors='b', lw=5, alpha=0.5)
plt.title('Distribución Binomial')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histogram
aleatorios = binomial.rvs(1000) # generate random samples
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma Binomial')
plt.show()
# Plotting the geometric distribution
p = 0.3 # shape parameter
geometrica = stats.geom(p) # distribution object
x = np.arange(geometrica.ppf(0.01),
              geometrica.ppf(0.99))
fmp = geometrica.pmf(x) # Probability Mass Function
plt.plot(x, fmp, '--')
plt.vlines(x, 0, fmp, colors='b', lw=5, alpha=0.5)
plt.title('Distribución Geométrica')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histogram
aleatorios = geometrica.rvs(1000) # generate random samples
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma Geométrica')
plt.show()
# Plotting the hypergeometric distribution
M, n, N = 30, 10, 12 # shape parameters
hipergeometrica = stats.hypergeom(M, n, N) # distribution object
x = np.arange(0, n+1)
fmp = hipergeometrica.pmf(x) # Probability Mass Function
plt.plot(x, fmp, '--')
plt.vlines(x, 0, fmp, colors='b', lw=5, alpha=0.5)
plt.title('Distribución Hipergeométrica')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histogram
aleatorios = hipergeometrica.rvs(1000) # generate random samples
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma Hipergeométrica')
plt.show()
# Plotting the Bernoulli distribution
p = 0.5 # shape parameter
bernoulli = stats.bernoulli(p)
x = np.arange(-1, 3)
fmp = bernoulli.pmf(x) # Probability Mass Function
fig, ax = plt.subplots()
ax.plot(x, fmp, 'bo')
ax.vlines(x, 0, fmp, colors='b', lw=5, alpha=0.5)
ax.set_yticks([0., 0.2, 0.4, 0.6])
plt.title('Distribución Bernoulli')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histogram
aleatorios = bernoulli.rvs(1000) # generate random samples
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma Bernoulli')
plt.show()
# Plotting the normal distribution
mu, sigma = 0, 0.2 # mean and standard deviation
normal = stats.norm(mu, sigma)
x = np.linspace(normal.ppf(0.01),
                normal.ppf(0.99), 100)
fp = normal.pdf(x) # Probability Density Function
plt.plot(x, fp)
plt.title('Distribución Normal')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histogram
aleatorios = normal.rvs(1000) # generate random samples
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma Normal')
plt.show()
# Plotting the uniform distribution
uniforme = stats.uniform()
x = np.linspace(uniforme.ppf(0.01),
                uniforme.ppf(0.99), 100)
fp = uniforme.pdf(x) # Probability Density Function
fig, ax = plt.subplots()
ax.plot(x, fp, '--')
ax.vlines(x, 0, fp, colors='b', lw=5, alpha=0.5)
ax.set_yticks([0., 0.2, 0.4, 0.6, 0.8, 1., 1.2])
plt.title('Distribución Uniforme')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histogram
aleatorios = uniforme.rvs(1000) # generate random samples
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma Uniforme')
plt.show()
# Plotting the log-normal distribution
sigma = 0.6 # shape parameter
lognormal = stats.lognorm(sigma)
x = np.linspace(lognormal.ppf(0.01),
                lognormal.ppf(0.99), 100)
fp = lognormal.pdf(x) # Probability Density Function
plt.plot(x, fp)
plt.title('Distribución Log-normal')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histogram
aleatorios = lognormal.rvs(1000) # generate random samples
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma Log-normal')
plt.show()
# Plotting the exponential distribution
exponencial = stats.expon()
x = np.linspace(exponencial.ppf(0.01),
                exponencial.ppf(0.99), 100)
fp = exponencial.pdf(x) # Probability Density Function
plt.plot(x, fp)
plt.title('Distribución Exponencial')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histogram
aleatorios = exponencial.rvs(1000) # generate random samples
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma Exponencial')
plt.show()
# Plotting the gamma distribution
a = 2.6 # shape parameter
gamma = stats.gamma(a)
x = np.linspace(gamma.ppf(0.01),
                gamma.ppf(0.99), 100)
fp = gamma.pdf(x) # Probability Density Function
plt.plot(x, fp)
plt.title('Distribución Gamma')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histogram
aleatorios = gamma.rvs(1000) # generate random samples
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma Gamma')
plt.show()
# Plotting the beta distribution
a, b = 2.3, 0.6 # shape parameters
beta = stats.beta(a, b)
x = np.linspace(beta.ppf(0.01),
                beta.ppf(0.99), 100)
fp = beta.pdf(x) # Probability Density Function
plt.plot(x, fp)
plt.title('Distribución Beta')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histogram
aleatorios = beta.rvs(1000) # generate random samples
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma Beta')
plt.show()
# Plotting the chi-squared distribution
df = 34 # shape parameter (degrees of freedom)
chi2 = stats.chi2(df)
x = np.linspace(chi2.ppf(0.01),
                chi2.ppf(0.99), 100)
fp = chi2.pdf(x) # Probability Density Function
plt.plot(x, fp)
plt.title('Distribución Chi cuadrado')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histogram
aleatorios = chi2.rvs(1000) # generate random samples
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma Chi cuadrado')
plt.show()
# Plotting Student's t distribution
df = 50 # shape parameter (degrees of freedom)
t = stats.t(df)
x = np.linspace(t.ppf(0.01),
                t.ppf(0.99), 100)
fp = t.pdf(x) # Probability Density Function
plt.plot(x, fp)
plt.title('Distribución t de Student')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histogram
aleatorios = t.rvs(1000) # generate random samples
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma t de Student')
plt.show()
# Plotting the Pareto distribution
k = 2.3 # shape parameter
pareto = stats.pareto(k)
x = np.linspace(pareto.ppf(0.01),
                pareto.ppf(0.99), 100)
fp = pareto.pdf(x) # Probability Density Function
plt.plot(x, fp)
plt.title('Distribución de Pareto')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histogram
aleatorios = pareto.rvs(1000) # generate random samples
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma de Pareto')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Probability Mass Function
Step2: Cumulative Distribution Function
Step3: Probability Density Function
Step4: Distributions
Step5: Binomial Distribution
Step6: Geometric Distribution
Step7: Hypergeometric Distribution
Step8: Bernoulli Distribution
Step9: Continuous distributions
Step10: Uniform Distribution
Step11: Log-normal Distribution
Step12: Exponential Distribution
Step13: Gamma Distribution
Step14: Beta Distribution
Step15: Chi-squared Distribution
Step16: Student's t Distribution
Step17: Pareto Distribution
|
11,462
|
<ASSISTANT_TASK:>
Python Code:
hightemp = "".join(map(str, [i.replace('\t', ' ') for i in open('hightemp.txt', 'r')]))
print(hightemp)
col1 = open('col1.txt', 'w')
col2 = open('col2.txt', 'w')
hightemp = [i.replace('\t', ' ').split() for i in open('hightemp.txt', 'r')]
col1.write("\n".join(map(str, [i[0] for i in hightemp])))
col1.close()
col2.write("\n".join(map(str, [i[1] for i in hightemp])))
col2.close()
%%timeit
col3 = open('col3.txt', 'w')
f1 = [i for i in open('col1.txt', 'r')]
f2 = [i for i in open('col2.txt', 'r')]
[col3.write(i.strip('\n')+'\t'+j) for i, j in zip(f1, f2)]  # write the merged columns (was commented out, leaving col3.txt empty)
col3.close()
def display_nline(n, filename):
return "".join(map(str, [i for i in open(filename, 'r')][:n]))
display = display_nline(5, "col3.txt")
print(display)
def display_back_nline(n, filename):
return "".join(map(str, [i for i in open(filename, 'r')][-n-1:-1]))
display = display_back_nline(5, "col3.txt")
print(display)
%%timeit
def split_file(n, filename):
line = [i.strip('\n') for i in open(filename, 'r')]
length = len(line)
n = length//n
return [line[i:i+n] for i in range(0, length, n)]
split_file(2, "col1.txt")
def first_char(filename):
return set(i[0] for i in open(filename, 'r'))
print(first_char('hightemp.txt'))
%%timeit
from operator import itemgetter
def column_sort(sort_key, filename):
return sorted([i.split() for i in open(filename, 'r')], key=itemgetter(sort_key-1))
column_sort(3, 'hightemp.txt')
%%timeit
from operator import itemgetter
def frequency(filename):
first_char = [i[0] for i in open(filename, 'r')]
dictionary = set([(i, first_char.count(i)) for i in first_char])
return sorted(dictionary, key=itemgetter(1), reverse=True)
frequency('hightemp.txt')
%%timeit
from operator import itemgetter
first_char = [i[0] for i in open('hightemp.txt', 'r')]
dictionary = set([(i, first_char.count(i)) for i in first_char])
sorted(dictionary, key=itemgetter(1), reverse=True)
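# Equivalent approach (added, illustrative) using collections.Counter, shown only
# for comparison with the set/count version above; it is usually faster.
from collections import Counter
Counter(i[0] for i in open('hightemp.txt', 'r')).most_common()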
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 12. Save the first column to col1.txt and the second column to col2.txt
Step2: 13. Merge col1.txt and col2.txt
Step3: 14. Output the first N lines
Step4: 15. Output the last N lines
Step5: 16. Split the file into N parts
Step6: 17. Distinct strings in the first column
Step7: 18. Sort the lines in descending order by the numeric value in the third column
Step8: 19. Count the frequency of each string in the first column and list them in descending order of frequency
|
11,463
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
def soliton(x, t, c, a):
    """Return phi(x, t) for a soliton wave with constants c and a."""
# YOUR CODE HERE
#raise NotImplementedError()
return 0.5*c*((1/np.cosh((c**0.5)/2*(x-c*t-a)))**2)
assert np.allclose(soliton(np.array([0]),0.0,1.0,0.0), np.array([0.5]))
tmin = 0.0
tmax = 10.0
tpoints = 100
t = np.linspace(tmin, tmax, tpoints)
xmin = 0.0
xmax = 10.0
xpoints = 200
x = np.linspace(xmin, xmax, xpoints)
c = 1.0
a = 0.0
# YOUR CODE HERE
#raise NotImplementedError()
phi = np.ones([xpoints, tpoints])
# loop over integer indices (not over the coordinate values themselves)
for j in range(tpoints):
    for i in range(xpoints):
        phi[i, j] = soliton(x[i], t[j], c, a)
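# Equivalent vectorized construction (added, illustrative alternative; relies on
# the broadcasting already used inside soliton()):
# phi = soliton(x[:, np.newaxis], t[np.newaxis, :], c, a)   # shape (xpoints, tpoints)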
assert phi.shape==(xpoints, tpoints)
assert phi.ndim==2
assert phi.dtype==np.dtype(float)
assert phi[0,0]==soliton(x[0],t[0],c,a)
def plot_soliton_data(i=0):
    """Plot the soliton data at t[i] versus x."""
# YOUR CODE HERE
#raise NotImplementedError()
    plt.plot(x, soliton(x, t[i], c, a))
    plt.xlabel('x')
plt.ylabel('phi(x,t[i])')
plt.title('Soliton Wave vs Time')
plt.tick_params(axis='y', right='off', direction='out')
plt.tick_params(axis='x', top='off', direction='out')
plt.box(False)
plt.grid(True)
plot_soliton_data(0)
assert True # leave this for grading the plot_soliton_data function
# YOUR CODE HERE
#raise NotImplementedError()
interact(plot_soliton_data, i=(0, tpoints-1, 1))  # i indexes into t, so use an integer slider
assert True # leave this for grading the interact with plot_soliton_data cell
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Using interact for animation with data
Step3: To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. To set this up, we create the following variables and arrays
Step4: Compute a 2d NumPy array called phi
Step6: Write a plot_soliton_data(i) function that plots the soliton wave $\phi(x, t[i])$. Customize your plot to make it effective and beautiful.
Step7: Use interact to animate the plot_soliton_data function versus time.
|
11,464
|
<ASSISTANT_TASK:>
Python Code:
# Author: Alan Leggitt <alan.leggitt@ucsf.edu>
#
# License: BSD (3-clause)
import mne
from mne import setup_source_space, setup_volume_source_space
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
subject = 'sample'
aseg_fname = subjects_dir + '/sample/mri/aseg.mgz'
# setup a cortical surface source space and extract left hemisphere
surf = setup_source_space(subject, subjects_dir=subjects_dir, add_dist=False)
lh_surf = surf[0]
# setup a volume source space of the left cerebellum cortex
volume_label = 'Left-Cerebellum-Cortex'
sphere = (0, 0, 0, 120)
lh_cereb = setup_volume_source_space(subject, mri=aseg_fname, sphere=sphere,
volume_label=volume_label,
subjects_dir=subjects_dir)
# Combine the source spaces
src = surf + lh_cereb
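# Inspection sketch (added): the combined SourceSpaces object behaves like a list,
# so something like the following can confirm what was merged — assuming two
# cortical surfaces plus one volume source space:
# print(src)                          # summary of the three source spaces
# print([s['type'] for s in src])     # expected: ['surf', 'surf', 'vol']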
fig = mne.viz.plot_alignment(subject=subject, subjects_dir=subjects_dir,
surfaces='white', coord_frame='head',
src=src)
mne.viz.set_3d_view(fig, azimuth=173.78, elevation=101.75,
distance=0.30, focalpoint=(-0.03, -0.01, 0.03))
# Export source positions to nift file
nii_fname = data_path + '/MEG/sample/mne_sample_lh-cerebellum-cortex.nii'
src.export_volume(nii_fname, mri_resolution=True)
# Uncomment the following lines to display source positions in freeview.
'''
# display image in freeview
from mne.utils import run_subprocess
mri_fname = subjects_dir + '/sample/mri/brain.mgz'
run_subprocess(['freeview', '-v', mri_fname, '-v',
'%s:colormap=lut:opacity=0.5' % aseg_fname, '-v',
'%s:colormap=jet:colorscale=0,2' % nii_fname, '-slice',
'157 75 105'])
'''
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup the source spaces
Step2: Plot the positions of each source space
Step3: Compare volume source locations to segmentation file in freeview
|
11,465
|
<ASSISTANT_TASK:>
Python Code:
import yaml
import time
import operator
import string
import re
import csv
import random
import nltk.tokenize
from sklearn.feature_extraction import text
import twitter
import scipy
## self-correlation
a = [i for i in range(20)]
scipy.stats.kendalltau(a,a).correlation
## remember that the rows need not be ordered
# shuffle in place
random.shuffle(a)
scipy.stats.kendalltau(a,a).correlation
## anti-correlation
a = [i for i in range(20)]
b = list(reversed(a))
scipy.stats.kendalltau(a,b).correlation
## random case
# correlation will average 0 and get closer to 0 for large lists
a = [i for i in range(1000)]
b = random.sample(a, k=len(a))
scipy.stats.kendalltau(a,b).correlation
## ties
# scipy implementation uses:
# https://en.wikipedia.org/wiki/Kendall_rank_correlation_coefficient#Tau-b
a = [i for i in range(10)]
b = [i for i in range(10)]
# items in list b at indices 0 and 1 will both have rank 1 (zero-based)
b[0] = 1
print(a)
print(b)
scipy.stats.kendalltau(a,b).correlation
from rbo import rbo
# elements in the list can be any object
rbo([{'c', 'a'}, 'b', 'd'], ['a', {'c', 'b'}, 'd'], p=.9)
# self-similarity
a = [i for i in range(20)]
rbo(a,a,p=0.9)
# order doesn't matter
random.shuffle(a)
rbo(a,a,p=0.9)
# we are comparing ordered lists of objects, not rankings
a = [i for i in string.punctuation]
rbo(a,a,p=0.9)
# reversed case
a = [i for i in string.punctuation]
b = list(reversed(a))
rbo(a,b,p=0.9)
# random comparison
a = [i for i in string.punctuation]
b = random.sample(a, k=len(a))
rbo(a,b,p=0.9)
# Get Tweets from the Twitter public API
# Get your Twitter API tokens
# this is specific to my computer; modify for yours
my_creds_file = '/Users/jkolb/.twitter_api_creds'
creds = yaml.load(open(my_creds_file))
consumer_key = creds['audience']['consumer_key']
consumer_secret = creds['audience']['consumer_secret']
access_token_key = creds['audience']['token']
access_token_secret = creds['audience']['token_secret']
api = twitter.Api(consumer_key=consumer_key,
consumer_secret=consumer_secret,
access_token_key=access_token_key,
access_token_secret=access_token_secret
)
mom_tweets = []
for _ in range(20):
mom_tweets.extend( api.GetSearch("mom",count=100) )
time.sleep(1)
dad_tweets = []
for _ in range(20):
dad_tweets.extend( api.GetSearch("dad",count=100) )
time.sleep(1)
mum_tweets = []
for _ in range(20):
    mum_tweets.extend( api.GetSearch("mum",count=100) )  # search for "mum" (was "mom", a copy-paste slip)
time.sleep(1)
# Get Tweets from the Gnip Search API
from search.api import Query
import json
import yaml
creds = yaml.load(open('/Users/jkolb/.creds.yaml'))
# set up a query to the Gnip Search API
q = Query(creds['username'],
creds['password'],
creds['search_endpoint'],
paged=True,
hard_max = 2000, ## <--- control tweet volume here
)
# query parameters
start_date = '2017-06-01T00:00'
end_date = '2017-06-03T00:00'
# get the tweet data
rule = 'mom'
rule += ' -is:retweet'
q.execute(rule,start=start_date,end=end_date)
mom_tweets = list(q.get_activity_set())
rule = 'dad'
rule += ' -is:retweet'
q.execute(rule,start=start_date,end=end_date)
dad_tweets = list(q.get_activity_set())
rule = 'mum'
rule += ' -is:retweet'
q.execute(rule,start=start_date,end=end_date)
mum_tweets = list(q.get_activity_set())
## get tweet bodies
dad_bodies = [tweet['body'] for tweet in dad_tweets]
mom_bodies = [tweet['body'] for tweet in mom_tweets]
mum_bodies = [tweet['body'] for tweet in mum_tweets]
## create a tweet tokenizer and stopword list
my_additional_stop_words = ['https','rt']
my_additional_stop_words.extend(string.punctuation)
stop_words = text.ENGLISH_STOP_WORDS.union(my_additional_stop_words)
tokenizer = nltk.tokenize.TweetTokenizer(preserve_case=False, reduce_len=True)
## make vectorizers
dad_ngram_vectorizer = text.CountVectorizer(lowercase=True,
stop_words=stop_words,
ngram_range=(1,2),
tokenizer = tokenizer.tokenize,
min_df = 2,
)
dad_ngram_vectorizer_idf = text.TfidfVectorizer(lowercase=True,
stop_words=stop_words,
ngram_range=(1,2),
tokenizer = tokenizer.tokenize,
min_df = 2,
)
mom_ngram_vectorizer = text.CountVectorizer(lowercase=True,
stop_words=stop_words,
ngram_range=(1,2),
tokenizer = tokenizer.tokenize,
min_df = 2,
)
mom_ngram_vectorizer_idf = text.TfidfVectorizer(lowercase=True,
stop_words=stop_words,
ngram_range=(1,2),
tokenizer = tokenizer.tokenize,
min_df = 2,
)
mum_ngram_vectorizer = text.CountVectorizer(lowercase=True,
stop_words=stop_words,
ngram_range=(1,2),
tokenizer = tokenizer.tokenize,
min_df = 2,
)
mum_ngram_vectorizer_idf = text.TfidfVectorizer(lowercase=True,
stop_words=stop_words,
ngram_range=(1,2),
tokenizer = tokenizer.tokenize,
min_df = 2,
)
# helper functions
def ngram_freq_from_dtmatrix(dtmatrix,col_names):
return dict([(ngram,dtmatrix.getcol(icol).toarray().sum()) for icol,ngram in enumerate(col_names)])
def ranked_tuples_from_ngram_freq(term_freq_dict):
return list(reversed(sorted(term_freq_dict.items(),key=operator.itemgetter(1))))
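# Toy illustration of the two helpers above (added; not part of the original analysis):
toy_vec = text.CountVectorizer()
toy_dtm = toy_vec.fit_transform(["mom and dad", "mom mum"])
toy_counts = ngram_freq_from_dtmatrix(toy_dtm, toy_vec.get_feature_names())
print(ranked_tuples_from_ngram_freq(toy_counts))  # e.g. [('mom', 2), ('mum', 1), ('dad', 1), ('and', 1)] (tie order may vary)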
## get top ranked ngrams for 'dad' tweets
dad_dtmatrix = dad_ngram_vectorizer.fit_transform(dad_bodies)
dad_ngrams = dad_ngram_vectorizer.get_feature_names()
dad_tf_dict = ngram_freq_from_dtmatrix(dad_dtmatrix,dad_ngrams)
dad_ngrams_ranked = ranked_tuples_from_ngram_freq(dad_tf_dict)
## get top ranked ngrams for 'mom' tweets
mom_dtmatrix = mom_ngram_vectorizer.fit_transform(mom_bodies)
mom_ngrams = mom_ngram_vectorizer.get_feature_names()
mom_tf_dict = ngram_freq_from_dtmatrix(mom_dtmatrix,mom_ngrams)
mom_ngrams_ranked = ranked_tuples_from_ngram_freq(mom_tf_dict)
## get top ranked ngrams for 'mum' tweets
mum_dtmatrix = mum_ngram_vectorizer.fit_transform(mum_bodies)
mum_ngrams = mum_ngram_vectorizer.get_feature_names()
mum_tf_dict = ngram_freq_from_dtmatrix(mum_dtmatrix,mum_ngrams)
mum_ngrams_ranked = ranked_tuples_from_ngram_freq(mum_tf_dict)
# sanity check
dad_ngrams_ranked[:20]
## now let's extract the rankings and compare
# probably want to cut off the rankings somewhere
cutoff = 10000
final_cutoff = 300
# get the (ngram,rank) lists
dad_ngram_ranks = {ngram:rank for rank,(ngram,count) in enumerate(dad_ngrams_ranked[:cutoff])}
mom_ngram_ranks = {ngram:rank for rank,(ngram,count) in enumerate(mom_ngrams_ranked[:cutoff])}
mum_ngram_ranks = {ngram:rank for rank,(ngram,count) in enumerate(mum_ngrams_ranked[:cutoff])}
# get the rank lists
# NB: if cutoff lists are not conjoint (they probably aren't),
# you'll have to choose one list as a reference
dad_ranks = []
mom_ranks = []
mum_ranks = []
data = []
for ngram,mom_rank in mom_ngram_ranks.items():
try:
dad_rank = dad_ngram_ranks[ngram]
except KeyError:
# for elements not in list, rank them last
dad_rank = cutoff
    try:
        mum_rank = mum_ngram_ranks[ngram]
    except KeyError:
        # for elements not in list, rank them last
        mum_rank = cutoff
if mom_rank < final_cutoff:
dad_ranks.append(dad_rank)
mom_ranks.append(mom_rank)
mum_ranks.append(mum_rank)
data.append((ngram,mom_rank,mum_rank,dad_rank))
dad_mom_tau = scipy.stats.kendalltau(dad_ranks,mom_ranks).correlation
mum_mom_tau = scipy.stats.kendalltau(mum_ranks,mom_ranks).correlation
print('Tau')
print('cutoff = ' + str(final_cutoff))
print('mom-dad: ' + str(dad_mom_tau))
print('mom-mum: ' + str(mum_mom_tau))
mom_top_ngrams = [ngram for ngram,ct in mom_ngrams_ranked][:final_cutoff]
mum_top_ngrams = [ngram for ngram,ct in mum_ngrams_ranked][:final_cutoff]
dad_top_ngrams = [ngram for ngram,ct in dad_ngrams_ranked][:final_cutoff]
mum_mom_rbo = rbo(mom_top_ngrams,mum_top_ngrams,p=0.9)['ext']
dad_mom_rbo = rbo(mom_top_ngrams,dad_top_ngrams,p=0.9)['ext']
print('RBO')
print('cutoff = ' + str(cutoff))
print('mom-dad: ' + str(dad_mom_rbo))
print('mom-mum: ' + str(mum_mom_rbo))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Review Part I
Step2: Finally
Step7: Apply it!
Step8: Do better n-gram extraction
Step9: Simply by looking at this list, we can see other avenues for improving ngram extraction.
Step10: Try RBO
|
11,466
|
<ASSISTANT_TASK:>
Python Code:
year = 2015
month = 7
%matplotlib inline
import glob
import os
import netCDF4
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
from matplotlib import colors
basedir = "~/DataOceano/MyOcean/INSITU_GLO_NRT_OBSERVATIONS_013_030/monthly/" + str(year) + str(month).zfill(2) + '/'
basedir = os.path.expanduser(basedir)
tempmin, tempmax = 5., 30.
cmaptemp = plt.cm.RdYlBu_r
normtemp = colors.Normalize(vmin=tempmin, vmax=tempmax)
tempticks = np.arange(tempmin, tempmax+0.1,2.5)
fig = plt.figure(figsize=(12, 8))
nfiles_notemp = 0
filelist = sorted(glob.glob(basedir+'*.nc'))
for datafiles in filelist:
with netCDF4.Dataset(datafiles) as nc:
lon = nc.variables['LONGITUDE'][:]
lat = nc.variables['LATITUDE'][:]
depth = nc.variables['DEPH'][:]
try:
temperature = nc.variables['TEMP'][:,0]
if(depth.shape[1] == 1):
scat = plt.scatter(lon.mean(), lat.mean(), s=15., c=temperature.mean(), edgecolor='None',
cmap=cmaptemp, norm=normtemp)
except KeyError:
# print 'No variable temperature in this file'
#temperature = np.nan*np.ones_like(lat)
nfiles_notemp+=1
# Add colorbar and title
cbar = plt.colorbar(scat, extend='both')
cbar.set_label('$^{\circ}$C', rotation=0, ha='left')
plt.title('Temperature from surface drifters\n' + str(year) + '-' + str(month).zfill(2))
plt.show()
print 'Number of files: ' + str(len(filelist))
print 'Number of files without temperature: ' + str(nfiles_notemp)
m = Basemap(projection='moll', lon_0=0, resolution='c')
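# Note (added): calling the Basemap instance projects (lon, lat) in degrees to map
# coordinates, which is what the m(lon, lat) call inside the loop below does before
# scattering; e.g. x, y = m(0., 0.) gives the centre of this Mollweide map.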
fig = plt.figure(figsize=(12, 8))
nfiles_notemp = 0
filelist = sorted(glob.glob(basedir+'*.nc'))
for datafiles in filelist:
with netCDF4.Dataset(datafiles) as nc:
lon = nc.variables['LONGITUDE'][:]
lat = nc.variables['LATITUDE'][:]
depth = nc.variables['DEPH'][:]
lon, lat = m(lon, lat)
try:
temperature = nc.variables['TEMP'][:,0]
if(depth.shape[1] == 1):
scat = m.scatter(lon.mean(), lat.mean(), s=15., c=temperature.mean(), edgecolor='None',
cmap=cmaptemp, norm=normtemp)
except KeyError:
# print 'No variable temperature in this file'
#temperature = np.nan*np.ones_like(lat)
nfiles_notemp+=1
# Add colorbar and title
cbar = plt.colorbar(scat, extend='both', shrink=0.7)
cbar.set_label('$^{\circ}$C', rotation=0, ha='left')
m.drawcoastlines(linewidth=0.2)
m.fillcontinents(color = 'gray')
plt.title('Temperature from surface drifters\n' + str(year) + '-' + str(month).zfill(2))
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The directory where we store the data files
Step2: Simple plot
Step3: Loop on the files
Step4: We also counted how many files don't have the temperature variable
Step5: Plot on a map
Step6: The rest of the configuration of the plot can be kept as it was.
|
11,467
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-3', 'sandbox-3', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
|
11,468
|
<ASSISTANT_TASK:>
Python Code:
import random  # needed for random.sample / random.seed below

# Note: squared_distance, vector_mean and distance are assumed to be the small
# vector helpers from the book's linear_algebra chapter; they are not defined here.
class KMeans:
    """k-means algo"""
    def __init__(self, k):
self.k = k # number of clusters
self.means = None # means of clusters
def classify(self, input):
return the index of the cluster to closest to input
return min(range(self.k),
key = lambda i: squared_distance(input, self.means[i]))
def train(self, inputs):
# choose k rand points as initials
self.means = random.sample(inputs, self.k)
assignments = None
while True:
# Find new assignments
new_assignments = map(self.classify, inputs)
# If nothing changed we're good to go
if assignments == new_assignments:
return
# otherwise keep
assignments = new_assignments
            # And compute new means based on assignments
for i in range(self.k):
# get points in cluster
i_points = [p for p,a in zip(inputs, assignments) if a == i]
# check for membership
if i_points:
self.means[i] = vector_mean(i_points)
inputs = [[-14,-5],[13,13],[20,23],[-19,-11],[-9,-16],[21,27],[-49,15],[26,13],[-46,5],[-34,-1],[11,15],[-49,0],[-22,-16],[19,28],[-12,-8],[-13,-19],[-41,8],[-11,-6],[-25,-9],[-18,-3]]
random.seed(0)
clusterer = KMeans(2)
clusterer.train(inputs)
clusterer.means
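# Usage sketch (illustrative, not in the original): classify a new 2-D point with
# the trained clusterer; it returns the index of the nearest cluster mean.
new_point = [-20, -10]
print(clusterer.classify(new_point))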
def squared_clustering_errors(inputs, k):
    """finds total square error for k"""
clusterer = KMeans(k)
clusterer.train(inputs)
means = clusterer.means
assignments = map(clusterer.classify, inputs)
return sum(squared_distance(input, means[cluster])
for input, cluster in zip(inputs, assignments))
ks = range(1, len(inputs) + 1)
errors = [squared_clustering_errors(inputs, k) for k in ks]
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('talk')
plt.plot(ks, errors, '.')
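# Rough "elbow" sketch (illustrative, not from the original): look at how much each
# additional cluster still reduces the total squared error; the curve flattens out
# once k passes the number of "real" clusters.
error_drops = [errors[i - 1] - errors[i] for i in range(1, len(errors))]
print(error_drops[:5])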
def is_leaf(cluster):
    """a cluster is a leaf if it has len 1"""
return len(cluster) == 1
def get_children(cluster):
    """returns children of the cluster if merged else exception"""
if is_leaf(cluster):
raise TypeError("a leaf cluster has no children")
else:
return cluster[1]
def get_values(cluster):
    """returns the value in the cluster (if leaf)
    or all values in leaf clusters below"""
if is_leaf(cluster):
return cluster
else:
return [value
for child in get_children(cluster)
for value in get_values(child)]
def cluster_distance(cluster1, cluster2, distance_agg = min):
    """compute all pairwise distances btw clusters
    and apply distance_agg to the list"""
return distance_agg([distance(input1, input2)
for input1 in get_values(cluster1)
for input2 in get_values(cluster2)])
def get_merge_order(cluster):
if is_leaf(cluster):
return float('inf')
else:
return cluster[0]
def bottom_up_cluster(inputs, distance_agg = min):
# we start with all leaf clusters (this is bottom up after all)
clusters = [(input,) for input in inputs]
# Don't stop until we have one cluster
while len(clusters) > 1:
# the two clusters we want to merge
# are the clusters that are closest without touching
c1, c2 = min([(cluster1, cluster2)
for i, cluster1 in enumerate(clusters)
for cluster2 in clusters[:i]],
key = lambda (x,y): cluster_distance(x, y, distance_agg))
# the above is really inefficient in distance calc
# we should instead "look up" the distance
# once we merge them we remove them from the list
clusters = [c for c in clusters if c != c1 and c != c2]
# merge them with order = # of clusters left (so that last merge is "0")
merged_cluster = (len(clusters), [c1, c2])
# append the merge
clusters.append(merged_cluster)
return clusters[0]
base_cluster = bottom_up_cluster(inputs)
base_cluster
def generate_clusters(base_cluster, num_clusters):
clusters = [base_cluster]
# keep going till we have the desired number of clusters
while len(clusters) < num_clusters:
# choose the last-merge
next_cluster = min(clusters, key = get_merge_order)
# remove it from the list
clusters = [c for c in clusters if c != next_cluster]
# add its children to the list (this is an unmerge)
clusters.extend(get_children(next_cluster))
return clusters
three_clusters = [get_values(cluster)
for cluster in generate_clusters(base_cluster, 3)]
three_clusters
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: k - means
Step4: Choosing k
Step9: Hierarchical Clustering
|
11,469
|
<ASSISTANT_TASK:>
Python Code:
imdb = pd.read_csv("C:\\Users\\Adam\\Google Drive\\School\\ComputerScience\\intro to data science\\rotten_needles\\data\\datasets\\movies_dataset.csv")
#imdb = imdb.dropna()
imdb = imdb.assign(rating10=(imdb['rating']*10))
imdb = imdb.assign(metascore10=(imdb['metascore']/10))
imdb = imdb.assign(score1=100*(imdb.gross_income-imdb.budget)/imdb.budget)
imdb = imdb.assign(score2=(imdb['gross_income']-imdb['budget'])) # best score measure
imdb = imdb.assign(score3=np.log(imdb['gross_income'])/np.log(imdb['budget']))
# imdb[['score2', 'name','rating','metascore']].sort_values('score2',ascending=0)
plt.figure()
imdb_temp = imdb
imdb_temp['scaled_gross_income'] = np.log(imdb['gross_income']) # / 1000000
sns.regplot(x = imdb['rating']*10, y = 'scaled_gross_income', data = imdb_temp, color = 'yellow')
sns.regplot(x = imdb['metascore'], y = 'scaled_gross_income', data = imdb_temp, color = 'Green')
sns.plt.title("Gross Income against MetaScore \ IMDB Rating - Scatter")
sns.plt.xlabel("IMDB Rating, Metascore")
sns.plt.ylabel("Log of Gross Income")
# legend_patches = matplotlib.patches.Patch(color='green', label='label')
# Plot the legend
sns.plt.legend(['IMDB Ratings', 'Metascore'])
# imdb.isnull().sum()
plt.figure()
sns.countplot(x = 'rating', data = imdb)
plt.xticks(rotation=60)
sns.plt.title("Distribution of Movie Ratings")
sns.plt.xlabel("Movie Rating")
sns.plt.ylabel("Count of Ratings")
temp = pd.DataFrame(
data = {
'type':
[i for i in range(1,11) for genre in imdb.columns if 'genre' in genre],
'votes':
[imdb[imdb[genre] == 1]['rating_freq.1'].mean() for genre in imdb.columns if 'genre' in genre] +
[imdb[imdb[genre] == 1]['rating_freq.2'].mean() for genre in imdb.columns if 'genre' in genre] +
[imdb[imdb[genre] == 1]['rating_freq.3'].mean() for genre in imdb.columns if 'genre' in genre] +
[imdb[imdb[genre] == 1]['rating_freq.4'].mean() for genre in imdb.columns if 'genre' in genre] +
[imdb[imdb[genre] == 1]['rating_freq.5'].mean() for genre in imdb.columns if 'genre' in genre] +
[imdb[imdb[genre] == 1]['rating_freq.6'].mean() for genre in imdb.columns if 'genre' in genre] +
[imdb[imdb[genre] == 1]['rating_freq.7'].mean() for genre in imdb.columns if 'genre' in genre] +
[imdb[imdb[genre] == 1]['rating_freq.8'].mean() for genre in imdb.columns if 'genre' in genre] +
[imdb[imdb[genre] == 1]['rating_freq.9'].mean() for genre in imdb.columns if 'genre' in genre] +
[imdb[imdb[genre] == 1]['rating_freq.10'].mean() for genre in imdb.columns if 'genre' in genre]
},
index=
[genre[genre.rfind('.')+1:] for genre in imdb.columns if 'genre' in genre]*10
)
plt.figure()
sns.barplot(x = temp.index , y = 'votes',hue = 'type', data = temp)
plt.xticks(rotation=45, ha='right')
sns.plt.title("Distribution of Ratings by Genres")
sns.plt.xlabel("Genres")
sns.plt.ylabel("Number of Votes")
# plt.figure()
# plt.ylim([0,10])
# plt.xlim([0,10])
# sns.regplot(x ='avg_rating_per_demo.aged_under_18', y = 'avg_rating_per_demo.aged_45+', data = imdb, color = 'red')
# plt.figure()
# plt.ylim([0,10])
# plt.xlim([0,10])
# sns.regplot(x ='avg_rating_per_demo.aged_18-29', y = 'avg_rating_per_demo.aged_45+', data = imdb, color = 'green')
# imdb.plot(kind='scatter', x='rating', y='avg_rating_per_demo.us_users');
plt.figure()
sns.regplot(x = 'opening_weekend_income', y = 'gross_income', data=imdb, color='seagreen')
sns.plt.title("Opening weeked Incomes vs Total Incomes")
sns.plt.xlabel("Opening Weekend")
sns.plt.ylabel("Total")
# imdb[['metascore','critic_review_count','rating','rating_count','gross_income','rating_freq.3','rating_freq.4','rating_freq.5','rating_freq.6',
# 'rating_freq.7','rating_freq.8','rating_freq.9','score2']].corr()
# imdb[['avg_rating_per_demo.males','avg_rating_per_demo.females']].corr()
from pandas.tools.plotting import scatter_matrix
temp = imdb[['avg_rating_per_demo.aged_under_18','avg_rating_per_demo.aged_18-29',
'avg_rating_per_demo.aged_30-44','avg_rating_per_demo.aged_45+']]
temp.columns = ['-18','18-29','30-44','45+']
scatter_matrix(temp, alpha=0.2,figsize=(6,6))
plt.suptitle('Rating Scatter over Different Age Groups')
plt.figure()
sns.regplot(x = 'rating_count', y = 'rating', data=imdb, color='seagreen')
sns.plt.title("IMDB Rating vs Number of Votes")
sns.plt.xlabel("Number of Votes")
sns.plt.ylabel("IMDB Rating")
temp = pd.DataFrame(
data={
'sex':
['Male' for genre in imdb.columns if 'genre' in genre]
+
['Female' for genre in imdb.columns if 'genre' in genre],
'score':
[
imdb[imdb[genre] == 1]['votes_per_demo.males'].mean()
for genre in imdb.columns if 'genre' in genre
]
+
[
imdb[imdb[genre] == 1]['votes_per_demo.females'].mean()
for genre in imdb.columns if 'genre' in genre
]
},
index=
[genre[genre.rfind('.')+1:] for genre in imdb.columns if 'genre' in genre]
+
[genre[genre.rfind('.')+1:] for genre in imdb.columns if 'genre' in genre]
)
plt.figure()
sns.barplot(x = temp.index , y = 'score',hue = 'sex', data = temp)
plt.xticks(rotation=45, ha='right')
sns.plt.title("Number of Votes, Difference between Male and Female")
sns.plt.xlabel("Genres")
sns.plt.ylabel("Number of Votes")
temp1 = pd.DataFrame(
data={
'sex':
['Male' for genre in imdb.columns if 'genre' in genre]
+
['Female' for genre in imdb.columns if 'genre' in genre],
'score':
[
imdb[imdb[genre] == 1]['avg_rating_per_demo.males'].mean()
for genre in imdb.columns if 'genre' in genre
]
+
[
imdb[imdb[genre] == 1]['avg_rating_per_demo.females'].mean()
for genre in imdb.columns if 'genre' in genre
]
},
index=
[genre[genre.rfind('.')+1:] for genre in imdb.columns if 'genre' in genre]
+
[genre[genre.rfind('.')+1:] for genre in imdb.columns if 'genre' in genre]
)
plt.figure()
sns.barplot(x = temp1.index , y = 'score',hue = 'sex', data = temp1)
plt.xticks(rotation=45, ha='right')
sns.plt.title("Average Ratings, Difference between Male and Female")
sns.plt.xlabel("Genres")
sns.plt.ylabel("Average Rating")
# plt.figure()
# plt.ylim([0,10])
# plt.xlim([0,10])
# sns.regplot(x ='avg_rating_per_demo.males', y = 'avg_rating_per_demo.females', data = imdb, color = 'red')
temp2 = pd.DataFrame(
data={
'score':
[
imdb[imdb[genre] == 1]['score1'].mean()
for genre in imdb.columns if 'genre' in genre
]
},
index=
[genre[genre.rfind('.')+1:] for genre in imdb.columns if 'genre' in genre]
)
plt.figure()
sns.barplot(x = temp2.index , y = 'score', data = temp2)
plt.xticks(rotation=45, ha='right')
sns.plt.title("Return on Investment by Genre")
sns.plt.xlabel("Genres")
sns.plt.ylabel("Roi %")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: create movie profit score column
Step2: Figure shows scatter of gross income against meta score and imdb rating
Step3: Figure shows distribution of Movie Ratings
Step4: Distribution of ratings by Genres
Step5: Exploratory scatter plots of ratings across age demographics
Step6: Figure shows the high correlation between opening-weekend income and total gross income
Step7: correlations
Step8: figure shows how different age groups tend to vote the same, the diagonal shows the rating distribution of each age group
Step9: figure shows that above 400K voters, the average rating is always greater than 7 - people tend to rate when they like a movie
Step10: figure shows the difference between males' and females' number of votes across genres
Step11: figure shows the similarity of males' and females' average scores across genres - women are more generous with their ratings!
Step12: figure shows return on investment (gross income minus budget, divided by budget)
|
11,470
|
<ASSISTANT_TASK:>
Python Code:
report_files = ["/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04dra/encdec_noing10_200_512_04dra.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drb/encdec_noing10_200_512_04drb.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drbef/encdec_noing10_200_512_04drbef.json"]
log_files = ["/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04dra/encdec_noing10_200_512_04dra_logs.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drb/encdec_noing10_200_512_04drb_logs.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drbef/encdec_noing10_200_512_04drbef_logs.json"]
reports = []
logs = []
import json
import matplotlib.pyplot as plt
import numpy as np
for report_file in report_files:
with open(report_file) as f:
reports.append((report_file.split('/')[-1].split('.json')[0], json.loads(f.read())))
for log_file in log_files:
with open(log_file) as f:
logs.append((log_file.split('/')[-1].split('.json')[0], json.loads(f.read())))
for report_name, report in reports:
print '\n', report_name, '\n'
print 'Encoder: \n', report['architecture']['encoder']
print 'Decoder: \n', report['architecture']['decoder']
%matplotlib inline
from IPython.display import HTML, display
def display_table(data):
display(HTML(
u'<table><tr>{}</tr></table>'.format(
u'</tr><tr>'.join(
u'<td>{}</td>'.format('</td><td>'.join(unicode(_) for _ in row)) for row in data)
)
))
def bar_chart(data):
n_groups = len(data)
train_perps = [d[1] for d in data]
valid_perps = [d[2] for d in data]
test_perps = [d[3] for d in data]
fig, ax = plt.subplots(figsize=(10,8))
index = np.arange(n_groups)
bar_width = 0.3
opacity = 0.4
error_config = {'ecolor': '0.3'}
train_bars = plt.bar(index, train_perps, bar_width,
alpha=opacity,
color='b',
error_kw=error_config,
label='Training Perplexity')
valid_bars = plt.bar(index + bar_width, valid_perps, bar_width,
alpha=opacity,
color='r',
error_kw=error_config,
label='Valid Perplexity')
test_bars = plt.bar(index + 2*bar_width, test_perps, bar_width,
alpha=opacity,
color='g',
error_kw=error_config,
label='Test Perplexity')
plt.xlabel('Model')
plt.ylabel('Scores')
plt.title('Perplexity by Model and Dataset')
plt.xticks(index + bar_width / 3, [d[0] for d in data])
plt.legend()
plt.tight_layout()
plt.show()
data = [['<b>Model</b>', '<b>Train Perplexity</b>', '<b>Valid Perplexity</b>', '<b>Test Perplexity</b>']]
for rname, report in reports:
data.append([rname, report['train_perplexity'], report['valid_perplexity'], report['test_perplexity']])
display_table(data)
bar_chart(data[1:])
%matplotlib inline
plt.figure(figsize=(10, 8))
for rname, l in logs:
for k in l.keys():
plt.plot(l[k][0], l[k][1], label=str(k) + ' ' + rname + ' (train)')
plt.plot(l[k][0], l[k][2], label=str(k) + ' ' + rname + ' (valid)')
plt.title('Loss v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
%matplotlib inline
plt.figure(figsize=(10, 8))
for rname, l in logs:
for k in l.keys():
plt.plot(l[k][0], l[k][3], label=str(k) + ' ' + rname + ' (train)')
plt.plot(l[k][0], l[k][4], label=str(k) + ' ' + rname + ' (valid)')
plt.title('Perplexity v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Perplexity')
plt.legend()
plt.show()
def print_sample(sample, best_bleu=None):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
print('Input: '+ enc_input + '\n')
print('Gend: ' + sample['generated'] + '\n')
print('True: ' + gold + '\n')
if best_bleu is not None:
cbm = ' '.join([w for w in best_bleu['best_match'].split(' ') if w != '<mask>'])
print('Closest BLEU Match: ' + cbm + '\n')
print('Closest BLEU Score: ' + str(best_bleu['best_score']) + '\n')
print('\n')
def display_sample(samples, best_bleu=False):
for enc_input in samples:
data = []
for rname, sample in samples[enc_input]:
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
data.append([rname, '<b>Generated: </b>' + sample['generated']])
if best_bleu:
cbm = ' '.join([w for w in sample['best_match'].split(' ') if w != '<mask>'])
data.append([rname, '<b>Closest BLEU Match: </b>' + cbm + ' (Score: ' + str(sample['best_score']) + ')'])
data.insert(0, ['<u><b>' + enc_input + '</b></u>', '<b>True: ' + gold+ '</b>'])
display_table(data)
def process_samples(samples):
# consolidate samples with identical inputs
result = {}
for rname, t_samples, t_cbms in samples:
for i, sample in enumerate(t_samples):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
if t_cbms is not None:
sample.update(t_cbms[i])
if enc_input in result:
result[enc_input].append((rname, sample))
else:
result[enc_input] = [(rname, sample)]
return result
samples = process_samples([(rname, r['train_samples'], r['best_bleu_matches_train'] if 'best_bleu_matches_train' in r else None) for (rname, r) in reports])
display_sample(samples, best_bleu='best_bleu_matches_train' in reports[1][1])
samples = process_samples([(rname, r['valid_samples'], r['best_bleu_matches_valid'] if 'best_bleu_matches_valid' in r else None) for (rname, r) in reports])
display_sample(samples, best_bleu='best_bleu_matches_valid' in reports[1][1])
samples = process_samples([(rname, r['test_samples'], r['best_bleu_matches_test'] if 'best_bleu_matches_test' in r else None) for (rname, r) in reports])
display_sample(samples, best_bleu='best_bleu_matches_test' in reports[1][1])
def print_bleu(blue_structs):
data= [['<b>Model</b>', '<b>Overall Score</b>','<b>1-gram Score</b>','<b>2-gram Score</b>','<b>3-gram Score</b>','<b>4-gram Score</b>']]
for rname, blue_struct in blue_structs:
data.append([rname, blue_struct['score'], blue_struct['components']['1'], blue_struct['components']['2'], blue_struct['components']['3'], blue_struct['components']['4']])
display_table(data)
# Training Set BLEU Scores
print_bleu([(rname, report['train_bleu']) for (rname, report) in reports])
# Validation Set BLEU Scores
print_bleu([(rname, report['valid_bleu']) for (rname, report) in reports])
# Test Set BLEU Scores
print_bleu([(rname, report['test_bleu']) for (rname, report) in reports])
# All Data BLEU Scores
print_bleu([(rname, report['combined_bleu']) for (rname, report) in reports])
# Training Set BLEU n-pairs Scores
print_bleu([(rname, report['n_pairs_bleu_train']) for (rname, report) in reports])
# Validation Set n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_valid']) for (rname, report) in reports])
# Test Set n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_test']) for (rname, report) in reports])
# Combined n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_all']) for (rname, report) in reports])
# Ground Truth n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_gold']) for (rname, report) in reports])
def print_align(reports):
data= [['<b>Model</b>', '<b>Average (Train) Generated Score</b>','<b>Average (Valid) Generated Score</b>','<b>Average (Test) Generated Score</b>','<b>Average (All) Generated Score</b>', '<b>Average (Gold) Score</b>']]
for rname, report in reports:
data.append([rname, report['average_alignment_train'], report['average_alignment_valid'], report['average_alignment_test'], report['average_alignment_all'], report['average_alignment_gold']])
display_table(data)
print_align(reports)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Perplexity on Each Dataset
Step2: Loss vs. Epoch
Step3: Perplexity vs. Epoch
Step4: Generations
Step5: BLEU Analysis
Step6: N-pairs BLEU Analysis
Step7: Alignment Analysis
|
11,471
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os, sys
import numpy as np
import matplotlib.pyplot as plt
from eqcat.parsers.isf_catalogue_reader import ISFReader
import eqcat.catalogue_query_tools as cqt
# Read in the catalogue
parser = ISFReader("inputs/isc_test_catalogue_isf.txt")
catalogue1 = parser.read_file("ISC_DB1", "ISC Global M >= 5")
print("Catalogue contains: %d events" % catalogue1.get_number_events())
# Build the HDF5 Database
database_file = "outputs/catalogue_db1.hdf5"
if os.path.exists(database_file):
os.remove(database_file)
_ = catalogue1.build_dataframe(hdf5_file=database_file)
db1 = cqt.CatalogueDB(database_file)
lower_lon = 15.
upper_lon = 30.
lower_lat = 30.
upper_lat = 45.
bbox = [lower_lon, upper_lon, lower_lat, upper_lat]
selector = cqt.CatalogueSelector(db1)
aegean_cat = selector.select_within_bounding_box(bbox)
number_origins, number_magnitudes = aegean_cat._get_number_origins_magnitudes()
print("Number of Origins = %d, Number of Magnitudes = %d" % (number_origins,
number_magnitudes))
polygon = np.array([[15.0, 45.0],
[30.0, 45.0],
[30.0, 30.0],
[15.0, 30.0],
[15.0, 45.0]])
selector2 = cqt.CatalogueSelector(db1)
aegean_cat_alt = selector2.select_within_polygon(polygon[:, 0], polygon[:, 1])
number_origins, number_magnitudes = aegean_cat_alt._get_number_origins_magnitudes()
print("Number of Origins = %d, Number of Magnitudes = %d" % (number_origins,
number_magnitudes))
# Above magnitude 6.0
selector3 = cqt.CatalogueSelector(aegean_cat)
aegean_cat_m6 = selector3.select_within_magnitude_range(lower_mag=6.0, upper_mag=9.0)
number_origins, number_magnitudes = aegean_cat_m6._get_number_origins_magnitudes()
print("Number of Origins = %d, Number of Magnitudes = %d" % (number_origins,
number_magnitudes))
selector4 = cqt.CatalogueSelector(aegean_cat_alt)
aegean_cat_deep = selector4.select_within_depth_range(upper_depth=50.0, lower_depth=200.0)
number_origins, number_magnitudes = aegean_cat_deep._get_number_origins_magnitudes()
print("Number of Origins = %d, Number of Magnitudes = %d" % (number_origins,
number_magnitudes))
agency_count = cqt.get_agency_magtype_statistics(db1)
query_bji_isc_mb, bji_isc_mb_cat = cqt.get_agency_magnitude_pairs(db1, ("BJI", "mb"), ("ISC", "mb"), no_case=True)
_ = cqt.plot_agency_magnitude_density(query_bji_isc_mb)
query1, cat1 = cqt.get_agency_magnitude_pairs(db1, ("BJI", "Ms"), ("GCMT", "Mw"), no_case=True)
query2, cat2 = cqt.get_agency_magnitude_pairs(db1, ("BJI", "Ms"), ("HRVD", "Mw"), no_case=True)
query_bji_gcmt_ms = cqt.join_query_results(query1, query2)
_ = cqt.plot_agency_magnitude_density(query_bji_gcmt_ms)
regressor = cqt.CatalogueRegressor(query_bji_gcmt_ms)
regressor.plot_density(overlay=False)
linear_model = regressor.run_regression("polynomial", # Model Type
[0., 1.]) # Initial guess of parameters
regressor.plot_model_density(False, 0)
# View Results
print("Mw = %.3f + %.3f MS +/- %.3f" % (regressor.results.beta[0],
regressor.results.beta[1],
regressor.standard_deviation))
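# Usage sketch (illustrative): convert a single BJI MS magnitude to an Mw estimate
# with the fitted linear coefficients above (the value 6.0 is arbitrary).
ms_value = 6.0
mw_estimate = regressor.results.beta[0] + regressor.results.beta[1] * ms_value
print("Estimated Mw: %.3f" % mw_estimate)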
from eqcat.isc_homogenisor import MagnitudeConversionRule
# Define empirical model
def RandomRule1(magnitude):
return 1.21 + 0.84 * magnitude
def RandomRuleSigma(magnitude):
return 0.2
# Create Rule
rule1 = MagnitudeConversionRule("BJI", "MS", RandomRule1, RandomRuleSigma,
model_name=r"$M_{W_{GCMT}} = 1.2 + 0.767 M_{S_{BJI}}$")
# Plot the model - with overla set to true
regressor.plot_model_density(True, 0)
# Overlay the rule and close the figure (overlay set to False)
regressor.plot_magnitude_conversion_model(rule1, False, line_color="b")
initial_guess = [1.0, 1.0, 0.0] # [slope 1, slope 2, intercept]
linear_model = regressor.run_regression("2segmentM6.1", # Model Type
initial_guess) # Initial guess of parameters
regressor.plot_model_density(False, 0)
print("Standard Deviation - Segment 1: %.3f, Segment 2: %.3f" % (regressor.standard_deviation[0],
regressor.standard_deviation[1]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Constructing the Database
Step2: Using the Database
Step3: Apply Limiting Selections
Step4: By Polygon
Step5: By Magnitude
Step6: By Depth
Step7: Exploring the Catalogue Database
Step8: Search for a Specific Agency-Magnitude Combination
Step9: Join Query Results
Step10: Regression Tools
Step11: Apply a Linear Model
Step12: Overlay another model defined as a Magnitude Conversion Rule
Step13: Apply a Piecewise Linear Model
|
11,472
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.DataFrame({'Time': ['2015-04-24 06:38:49', '2015-04-24 06:39:19', '2015-04-24 06:43:49', '2015-04-24 06:44:18',
'2015-04-24 06:44:48', '2015-04-24 06:45:18', '2015-04-24 06:47:48', '2015-04-24 06:48:18',
'2015-04-24 06:50:48', '2015-04-24 06:51:18', '2015-04-24 06:51:48', '2015-04-24 06:52:18',
'2015-04-24 06:52:48', '2015-04-24 06:53:48', '2015-04-24 06:55:18', '2015-04-24 07:00:47',
'2015-04-24 07:01:17', '2015-04-24 07:01:47'],
'Value': [0.023844, 0.019075, 0.023844, 0.019075, 0.023844, 0.019075,
0.023844, 0.019075, 0.023844, 0.019075, 0.023844, 0.019075,
0.023844, 0.019075, 0.023844, 0.019075, 0.023844, 0.019075]})
df['Time'] = pd.to_datetime(df['Time'])
def g(df):
df.set_index('Time', inplace=True)
df_group = df.groupby(pd.Grouper(level='Time', freq='3T'))['Value'].agg('sum')
df_group.dropna(inplace=True)
df_group = df_group.to_frame().reset_index()
return df_group
df = g(df.copy())
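# Quick check (illustrative): each row of the result now holds the sum of 'Value'
# within one 3-minute bin, with the bin start time in the 'Time' column.
print(df.head())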
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
11,473
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from matplotlib import pyplot as plt, cm
import skdemo
plt.rcParams['image.cmap'] = 'cubehelix'
plt.rcParams['image.interpolation'] = 'none'
image = np.array([[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0]], dtype=np.uint8)
plt.imshow(image)
from skimage import morphology
sq = morphology.square(width=3)
dia = morphology.diamond(radius=1)
disk = morphology.disk(radius=30)
skdemo.imshow_all(sq, dia, disk)
skdemo.imshow_all(image, morphology.erosion(image, sq), shape=(1, 2))
skdemo.imshow_all(image, morphology.dilation(image, sq))
skdemo.imshow_all(image, morphology.dilation(image, dia))
image = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], np.uint8)
plt.imshow(image)
skdemo.imshow_all(image, morphology.opening(image, sq)) # erosion -> dilation
skdemo.imshow_all(image, morphology.closing(image, sq)) # dilation -> erosion
from skimage import data, color
hub = color.rgb2gray(data.hubble_deep_field()[350:450, 90:190])
plt.imshow(hub)
disk = morphology.disk(radius=8)
gal = morphology.opening(hub, disk)
plt.imshow(gal)
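# Note: this opening (an erosion followed by a dilation with a radius-8 disk)
# removes bright objects smaller than the disk, which is why only the large
# galaxy survives.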
skdemo.imshow_with_histogram(gal)
gal_selector = gal > 50
# use numpy boolean indexing to estimate histogram
# of brightness intensity on isolated galaxy
skdemo.imshow_all(hub, gal_selector)
f = plt.figure()
plt.hist(hub[gal_selector], bins=50);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The documentation for scikit-image's morphology module is
Step2: The central value of the structuring element represents the pixel being considered, and the surrounding values are the neighbors
Step3: and
Step4: and
Step5: Erosion and dilation can be combined into two slightly more sophisticated operations, opening and closing. Here's an example
Step6: What happens when run an erosion followed by a dilation of this image?
Step7: Exercise
Step8: Remove the smaller objects to retrieve the large galaxy.
|
11,474
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
%matplotlib inline
data = pd.read_csv('../../data/titanic_train.csv',
index_col='PassengerId')
data.head(5)
data.describe()
data[(data['Embarked'] == 'C') & (data.Fare > 200)].head()
data[(data['Embarked'] == 'C') &
(data['Fare'] > 200)].sort_values(by='Fare',
ascending=False).head()
def age_category(age):
'''
< 30 -> 1
>= 30, <55 -> 2
>= 55 -> 3
'''
if age < 30:
return 1
elif age < 55:
return 2
else:
return 3
age_categories = [age_category(age) for age in data.Age]
data['Age_category'] = age_categories
data['Age_category'] = data['Age'].apply(age_category)
data['Sex'].value_counts()
data[data['Sex'] == 'male']['Pclass'].value_counts()
print(
"Медиана – {0}, стандартное отклонение – {1}".format(
round(data['Fare'].median(), 2),
round(data['Fare'].std(), 2)
)
)
yang = data[data['Age'] < 30]
old = data[data['Age'] > 60]
old['Survived'].value_counts(normalize=True)
female = data[data['Sex'] == 'female']
male = data[data['Sex'] == 'male']
male['Survived'].value_counts(normalize=True)
# Your code here
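# One possible sketch (illustrative, not the official solution): take the male
# passengers' first names from the 'Name' column and count the most frequent one.
first_names = data.loc[data['Sex'] == 'male', 'Name'].str.extract(r'\.\s*([A-Za-z]+)', expand=False)
first_names.value_counts().head()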
# Your code here
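# One possible sketch (illustrative): mean age of men / women within each
# passenger class via groupby.
data.groupby(['Pclass', 'Sex'])['Age'].mean()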
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read the data from the file into memory as a Pandas.DataFrame object
Step2: The data is presented as a table. Let's look at the first 5 rows
Step3: As an example, let's select the passengers who embarked at Cherbourg (Embarked=C) and paid more than 200 for their ticket (fare > 200).
Step4: These passengers can be sorted by ticket fare in descending order.
Step5: An example of creating a feature.
Step6: Another way is via apply.
Step7: 1. How many men / women were on board?
Step8: 2. Print the distribution of the Pclass variable (socio-economic status), and the same distribution for men / women separately. How many 2nd-class men were there?
Step9: 3. What are the median and standard deviation of the fares (Fare)? Round to 2 decimal places.
Step10: 4. Is it true that people younger than 30 survived more often than people older than 60? What are the shares of survivors in both groups?
Step11: 5. Is it true that women survived more often than men? What are the shares of survivors in both groups?
Step12: 6. Find the most popular name among the male passengers of the Titanic.
Step13: 8. How does the average age of men / women differ depending on the class of service? Choose the correct statements
|
11,475
|
<ASSISTANT_TASK:>
Python Code:
import os.path as op
import numpy as np
import mne
data_path = mne.datasets.opm.data_path()
subject = 'OPM_sample'
subjects_dir = op.join(data_path, 'subjects')
raw_fname = op.join(data_path, 'MEG', 'OPM', 'OPM_SEF_raw.fif')
bem_fname = op.join(subjects_dir, subject, 'bem',
subject + '-5120-5120-5120-bem-sol.fif')
fwd_fname = op.join(data_path, 'MEG', 'OPM', 'OPM_sample-fwd.fif')
coil_def_fname = op.join(data_path, 'MEG', 'OPM', 'coil_def.dat')
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(None, 90, h_trans_bandwidth=10.)
raw.notch_filter(50., notch_widths=1)
# Set epoch rejection threshold a bit larger than for SQUIDs
reject = dict(mag=2e-10)
tmin, tmax = -0.5, 1
# Find median nerve stimulator trigger
event_id = dict(Median=257)
events = mne.find_events(raw, stim_channel='STI101', mask=257, mask_type='and')
picks = mne.pick_types(raw.info, meg=True, eeg=False)
# We use verbose='error' to suppress warning about decimation causing aliasing,
# ideally we would low-pass and then decimate instead
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, verbose='error',
reject=reject, picks=picks, proj=False, decim=10,
preload=True)
evoked = epochs.average()
evoked.plot()
cov = mne.compute_covariance(epochs, tmax=0.)
del epochs, raw
bem = mne.read_bem_solution(bem_fname)
trans = None
# To compute the forward solution, we must
# provide our temporary/custom coil definitions, which can be done as::
#
# with mne.use_coil_def(coil_def_fname):
# fwd = mne.make_forward_solution(
# raw.info, trans, src, bem, eeg=False, mindist=5.0,
# n_jobs=1, verbose=True)
fwd = mne.read_forward_solution(fwd_fname)
# use fixed orientation here just to save memory later
mne.convert_forward_solution(fwd, force_fixed=True, copy=False)
with mne.use_coil_def(coil_def_fname):
fig = mne.viz.plot_alignment(evoked.info, trans=trans, subject=subject,
subjects_dir=subjects_dir,
surfaces=('head', 'pial'), bem=bem)
mne.viz.set_3d_view(figure=fig, azimuth=45, elevation=60, distance=0.4,
focalpoint=(0.02, 0, 0.04))
# Fit dipoles on a subset of time points
with mne.use_coil_def(coil_def_fname):
dip_opm, _ = mne.fit_dipole(evoked.copy().crop(0.015, 0.080),
cov, bem, trans, verbose=True)
idx = np.argmax(dip_opm.gof)
print('Best dipole at t=%0.1f ms with %0.1f%% GOF'
% (1000 * dip_opm.times[idx], dip_opm.gof[idx]))
# Plot N20m dipole as an example
dip_opm.plot_locations(trans, subject, subjects_dir,
mode='orthoview', idx=idx)
inverse_operator = mne.minimum_norm.make_inverse_operator(
evoked.info, fwd, cov, loose=0., depth=None)
del fwd, cov
method = "MNE"
snr = 3.
lambda2 = 1. / snr ** 2
stc = mne.minimum_norm.apply_inverse(
evoked, inverse_operator, lambda2, method=method,
pick_ori=None, verbose=True)
# Plot source estimate at time of best dipole fit
brain = stc.plot(hemi='rh', views='lat', subjects_dir=subjects_dir,
initial_time=dip_opm.times[idx],
clim=dict(kind='percent', lims=[99, 99.9, 99.99]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Prepare data for localization
Step2: Examine our coordinate alignment for source localization and compute a
Step3: Perform dipole fitting
Step4: Perform minimum-norm localization
|
11,476
|
<ASSISTANT_TASK:>
Python Code:
# import the dataset
from quantopian.interactive.data.eventvestor import contract_win
# or if you want to import the free dataset, use:
# from quantopian.data.eventvestor import contract_win_free
# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd
# Let's use blaze to understand the data a bit using Blaze dshape()
contract_win.dshape
# And how many rows are there?
# N.B. we're using a Blaze function to do this, not len()
contract_win.count()
# Let's see what the data looks like. We'll grab the first three rows.
contract_win[:3]
ba_sid = symbols('BA').sid
wins = contract_win[contract_win.sid == ba_sid][['timestamp', 'contract_amount','amount_units','contract_entity']].sort('timestamp')
# When displaying a Blaze Data Object, the printout is automatically truncated to ten rows.
wins
ba_df = odo(wins, pd.DataFrame)
# Printing a pandas DataFrame displays the first 30 and last 30 items, and truncates the middle.
ba_df
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's go over the columns
Step2: Finally, suppose we want the above as a DataFrame
|
11,477
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
autoData = pd.read_csv('../datasets/Auto.csv', na_values='?')
autoData.shape
autoData.columns
autoData.head(5)
autoData.info()
autoData.isnull().values.any()
autoData[autoData.isnull().any(axis=1)]
autoData.dropna(axis=0, inplace=True) # Drop any row that has NaNs within it
autoData[autoData.isnull().any(axis=1)]
# horsepower feature is our input X
X = autoData[['horsepower']]
X.shape
X.describe()
m = X.shape[0] # number of examples
m
# mpg is our output Y, what we want to predict
Y = autoData.mpg.copy() # copy “y” column values out
len(Y)
import matplotlib.pyplot as plt
%matplotlib inline
fig = plt.scatter(X, Y)
fig = plt.grid(True)
fig = plt.xlabel('Horsepower')
fig = plt.ylabel('Miles per Gallon (mpg)')
from sklearn import linear_model
#
# : Create the linear regression model
#
model = linear_model.LinearRegression()
model.fit(X, Y)
model.intercept_
model.coef_
y_plot = model.predict([[40],[230]])
y_plot
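# Illustrative: spell out the fitted line using the coefficients above.
print('mpg = {:.2f} + ({:.2f}) * horsepower'.format(model.intercept_, model.coef_[0]))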
fig = plt.scatter(X, Y)
fig = plt.grid(True)
fig = plt.xlabel('Horsepower')
fig = plt.ylabel('Miles per Gallon (mpg)')
fig = plt.plot([40,230], y_plot, '-', color='darkred', linewidth=2)
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=1)
model.fit(x_train, y_train)
print('Test Accuracy: {:.3f}'.format(model.score(x_test, y_test)))
from sklearn.model_selection import KFold
import numpy as np
kf = KFold(n_splits = 10).split(X, Y)
scores = []
for k, (train_index, test_index) in enumerate(kf):
X_train, X_test = X.iloc[train_index], X.iloc[test_index]
Y_train, Y_test = Y.iloc[train_index], Y.iloc[test_index]
model.fit(X_train, Y_train)
    scoreK = model.score(X_test, Y_test)  # score on the held-out fold, not the training data
scores.append(scoreK)
print('Fold: {:2d}, Acc: {:.3f}'.format(k+1, scoreK))
print('\nCV accuracy: {:.3f} +/- {:.3f}'.format(np.mean(scores),
np.std(scores)))
from sklearn.model_selection import LeaveOneOut
LOO = LeaveOneOut().split(X, Y)
scores = []
for k, (train_index, test_index) in enumerate(LOO):
X_train, X_test = X.iloc[train_index], X.iloc[test_index]
Y_train, Y_test = Y.iloc[train_index], Y.iloc[test_index]
model.fit(X_train, Y_train)
    scoreK = model.score(X_train, Y_train)  # NB: R^2 is undefined on a single held-out sample, so the training fold is scored here
scores.append(scoreK)
print('\nCV accuracy: {:.3f} +/- {:.3f}'.format(np.mean(scores),
np.std(scores)))
from sklearn.model_selection import cross_val_score
# We are passing the entirety of X and y, not X_train or y_train
scores = cross_val_score(model, X, Y, cv=KFold(n_splits=5, shuffle=True,
random_state=1))
print ("Cross-validated scores:", scores)
print('CV accuracy: {:.3f} +/- {:.3f} '.format (np.mean(scores), np.std(scores)))
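# The same folds can be scored with other metrics via `scoring`; e.g. a
# (negated) mean squared error gives an error-based view (illustrative):
mse_scores = -cross_val_score(model, X, Y, scoring='neg_mean_squared_error',
                              cv=KFold(n_splits=5, shuffle=True, random_state=1))
print('CV MSE: {:.3f} +/- {:.3f}'.format(np.mean(mse_scores), np.std(mse_scores)))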
from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures(degree=2) # quadratic function
# we first transform our data
x_train2 = poly.fit_transform(x_train)
x_test2 = poly.fit_transform(x_test)
x_train2[0:5]
model2 = linear_model.LinearRegression()
model2.fit(x_train2, y_train)
model2.score(x_test2, y_test)
poly = PolynomialFeatures(degree=3) # cubic function
x_train3 = poly.fit_transform(x_train) # again, transform the data
x_test3 = poly.fit_transform(x_test)
model3 = linear_model.LinearRegression()
model3.fit(x_train3, y_train) # fit a new model
model3.score(x_test3, y_test)
# search for an optimal value of poly n for LR
# range of n we want to try
n_range = range(1, 15)
# empty list to store scores
n_scores = []
model_n = linear_model.LinearRegression()
# 1. we will loop through values of n
for n in n_range:
# 2. transform features with n degree
poly = PolynomialFeatures(degree=n)
x_val_n = poly.fit_transform(X)
# 3. obtain cross_val_score for model_n
scores = cross_val_score(model_n, x_val_n, Y, cv=KFold(n_splits=5,
shuffle=True,
random_state=1))
# 4. append mean of scores to n_scores list
n_scores.append(scores.mean())
#print ("Cross-validated scores:", scores)
print('CV accuracy: {:.3f} +/- {:.3f} '.format(np.mean(n_scores), np.std(n_scores)))
# in essence, this is basically running the k-fold cross-validation method n times :
print('Length of list', len(n_scores))
print('Max of list', max(n_scores))
# plot how accuracy changes as we vary the polynomial degree
plt.plot(n_range, n_scores)
plt.xlabel('Degree for horsepower')
plt.ylabel('Cross-validated accuracy')
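# Illustrative: report the degree with the highest mean cross-validated score.
best_degree = n_range[int(np.argmax(n_scores))]
print('Best polynomial degree by CV: {}'.format(best_degree))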
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 397 examples of 9 features
Step2: We are interested in the mpg and the horsepower features.
Step3: There are NaNs. We remove those observations before proceeding.
Step4: Visualise the data
Step5: Fit a linear regression model
Step6: These are the model's coefficients
Step7: The model would look like $mpg = 39.9 - 0.15 * horsepower$.
Step8: We can already see that a simple linear regression model will not be very accurate ... probably a curve would approximate the real fuel consumption better than a line.
Step9: Measure the accuracy of the model
Step10: And measure the accuracy on the test dataset.
Step11: The validation estimate of the test error can be highly variable, depending on precisely which observations are included in the training set and which observations are included in the test set. Every time you split the data into different train/test datasets you get different models and scores. You can try it yourself by removing the fixed random seed above.
Step12: KFold divides all the samples in K groups of samples, called folds, of equal sizes (if possible).
Step13: Note
Step14: Leave One Out (LOO) cross-validation
Step15: Note
Step16: Now the accuracy of 0.606 is a much more reliable measure than the one from the test-set approach, which could change depending on which test set has been randomly selected.
Step17: In the first iteration, the accuracy is 58%, in the second iteration the accuracy is 65% and so on.
Step18: Then we fit a new model to the transformed data
Step19: It looks better, so we can try with a cubic function
Step20: These results show that a model which predicts mpg using a quadratic function of horsepower performs better than a model that involves only a linear function of horsepower, and there is little evidence in favour of a model that uses a cubic function of horsepower.
|
11,478
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
print("installed cirq.")
import cirq
import numpy as np
from cirq.experiments import random_quantum_circuit_generation as rqcg
circuits = rqcg.generate_library_of_2q_circuits(
n_library_circuits=20,
two_qubit_gate=cirq.ISWAP**0.5,
q0=cirq.GridQubit(4,4),
q1=cirq.GridQubit(4,5),
)
print(len(circuits))
# We will truncate to these lengths
max_depth = 100
cycle_depths = np.arange(3, max_depth, 20)
cycle_depths
device_name = None # change me!
if device_name is None:
sampler = cirq.DensityMatrixSimulator(noise=cirq.depolarize(5e-3))
else:
import cirq_google as cg
sampler = cg.get_engine_sampler(device_name, gate_set_name='sqrt_iswap')
device = cg.get_engine_device(device_name)
import cirq.contrib.routing as ccr
graph = ccr.gridqubits_to_graph_device(device.qubits)
pos = {q: (q.row, q.col) for q in graph.nodes}
import networkx as nx
nx.draw_networkx(graph, pos=pos)
from cirq.experiments.xeb_sampling import sample_2q_xeb_circuits
sampled_df = sample_2q_xeb_circuits(
sampler=sampler,
circuits=circuits,
cycle_depths=cycle_depths,
repetitions=10_000,
)
sampled_df
from cirq.experiments.xeb_fitting import benchmark_2q_xeb_fidelities
fids = benchmark_2q_xeb_fidelities(
sampled_df=sampled_df,
circuits=circuits,
cycle_depths=cycle_depths,
)
fids
%matplotlib inline
from matplotlib import pyplot as plt
# Exponential reference
xx = np.linspace(0, fids['cycle_depth'].max())
plt.plot(xx, (1-5e-3)**(4*xx), label=r'Exponential Reference')
def _p(fids):
plt.plot(fids['cycle_depth'], fids['fidelity'], 'o-', label=fids.name)
fids.name = 'Sampled'
_p(fids)
plt.ylabel('Circuit fidelity')
plt.xlabel('Cycle Depth $d$')
plt.legend(loc='best')
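# Illustrative sketch (not part of the original tutorial): estimate a per-cycle
# "layer fidelity" p by fitting f(d) = a * p**d to the sampled points with SciPy.
from scipy.optimize import curve_fit
(a_fit, p_fit), _ = curve_fit(lambda d, a, p: a * p**d,
                              fids['cycle_depth'], fids['fidelity'],
                              p0=(1.0, 0.99))
print('Estimated per-cycle fidelity: {:.4f}'.format(p_fit))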
import multiprocessing
pool = multiprocessing.get_context('spawn').Pool()
from cirq.experiments.xeb_fitting import (
parameterize_circuit,
characterize_phased_fsim_parameters_with_xeb,
SqrtISwapXEBOptions,
)
# Set which angles we want to characterize (all)
options = SqrtISwapXEBOptions(
characterize_theta = True,
characterize_zeta = True,
characterize_chi = True,
characterize_gamma = True,
characterize_phi = True
)
# Parameterize the sqrt(iswap)s in our circuit library
pcircuits = [parameterize_circuit(circuit, options) for circuit in circuits]
# Run the characterization loop
characterization_result = characterize_phased_fsim_parameters_with_xeb(
sampled_df,
pcircuits,
cycle_depths,
options,
pool=pool,
# ease tolerance so it converges faster:
fatol=5e-3,
xatol=5e-3
)
characterization_result.final_params
characterization_result.fidelities_df
from cirq.experiments.xeb_fitting import before_and_after_characterization
before_after_df = before_and_after_characterization(fids, characterization_result)
before_after_df
from cirq.experiments.xeb_fitting import exponential_decay
for i, row in before_after_df.iterrows():
plt.axhline(1, color='grey', ls='--')
plt.plot(row['cycle_depths_0'], row['fidelities_0'], '*', color='red')
plt.plot(row['cycle_depths_c'], row['fidelities_c'], 'o', color='blue')
xx = np.linspace(0, np.max(row['cycle_depths_0']))
plt.plot(xx, exponential_decay(xx, a=row['a_0'], layer_fid=row['layer_fid_0']), color='red')
plt.plot(xx, exponential_decay(xx, a=row['a_c'], layer_fid=row['layer_fid_c']), color='blue')
plt.show()
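# Illustrative recap: per-cycle ("layer") fidelities before and after
# characterization, using columns produced by before_and_after_characterization.
print(before_after_df[['layer_fid_0', 'layer_fid_c']])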
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <table class="tfo-notebook-buttons" align="left">
Step2: Isolated XEB
Step3: Set up Random Circuits
Step4: Set up a Sampler.
Step5: Take Data
Step6: Benchmark fidelities
Step7: Optimize PhasedFSimGate parameters
|
11,479
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
covariance, precision, adjacency = new_graph(15,.15,adj_type=adj_type,random_sign=True,seed=1)
covariance2, precision2, adjacency2 = new_graph(15,.2,adj_type=adj_type,random_sign=True, seed=1)
# Set up the matplotlib figure
f, (ax1, ax2, ax3) = plt.subplots(1,3, figsize=(10, 4));
def compare_population_parameters(covariance,precision,adjacency):
    mask = np.zeros_like(precision, dtype=bool)  # np.bool was removed in NumPy 1.24; plain bool works everywhere
mask[np.triu_indices_from(mask)] = True
mask[np.where(np.eye(np.shape(precision)[0]))] = True
# Generate a custom diverging colormap
cmap = sns.diverging_palette(220, 10, as_cmap=True)
# Draw the heatmap with the mask and correct aspect ratio
adj_vmax = np.max(np.triu(adjacency,1))
sns.heatmap(adjacency,mask=mask,cmap=cmap, vmax=adj_vmax,
square=True, xticklabels=5, yticklabels=5,
linewidths=.5, cbar_kws={"shrink": .5}, ax=ax1)
ax1.set_title('Adjacency of Precision Matrix')
prec_vmax = np.max(np.triu(precision,1))
sns.heatmap(precision,mask=mask,cmap=cmap, vmax=prec_vmax,
square=True, xticklabels=5, yticklabels=5,
linewidths=.5, cbar_kws={"shrink": .5}, ax=ax2)
ax2.set_title('Precision Matrix')
cov_vmax = np.max(np.triu(covariance,1))
sns.heatmap(covariance, mask=mask,cmap=cmap, vmax=cov_vmax,
square=True, xticklabels=5, yticklabels=5,
linewidths=.5, cbar_kws={"shrink": .5}, ax=ax3)
ax3.set_title('Covariance Matrix')
compare_population_parameters(covariance,precision,adjacency)
%matplotlib inline
prng = np.random.RandomState(2)
X1 = mvn(75*n_features,n_features,covariance,random_state=prng)
X2 = mvn(10*n_features,n_features,covariance,random_state=prng)
X3 = mvn(75*n_features,n_features,covariance2,random_state=prng)
X4 = mvn(10*n_features,n_features,covariance2,random_state=prng)
mask = np.zeros_like(precision, dtype=bool)  # np.bool was removed in NumPy 1.24
mask[np.triu_indices_from(mask)] = True
mask[np.where(np.eye(np.shape(precision)[0]))] = True
cmap = sns.diverging_palette(220, 10, as_cmap=True)
f, ([ax1, ax2, ax5, ax7], [ax3, ax4, ax6, ax8]) = plt.subplots(2,4,figsize=(14,6));
cov_vmax = np.max(np.triu(covariance,1))
sns.heatmap(np.tril(np.cov(X1,rowvar=False),-1),mask=mask,cmap=cmap, vmax=cov_vmax,
square=True, xticklabels=5, yticklabels=5,
linewidths=.5, cbar_kws={"shrink": .5},ax=ax1)
ax1.set_title('Sample Covariance Matrix, n/p=75')
sns.heatmap(np.tril(np.cov(X2,rowvar=False),-1),mask=mask,cmap=cmap, vmax=cov_vmax,
square=True, xticklabels=5, yticklabels=5,
linewidths=.5, cbar_kws={"shrink": .5},ax=ax2)
ax2.set_title('Sample Covariance Matrix, n/p=10')
sns.heatmap(np.tril(sp.linalg.pinv(np.cov(X1,rowvar=False)),-1),mask=mask,cmap=cmap, vmax=cov_vmax,
square=True, xticklabels=5, yticklabels=5,
linewidths=.5, cbar_kws={"shrink": .5},ax=ax3)
ax3.set_title('Sample Precision Matrix, n/p=75')
sns.heatmap(np.tril(sp.linalg.pinv(np.cov(X2,rowvar=False)),-1),mask=mask,cmap=cmap, vmax=cov_vmax,
square=True, xticklabels=5, yticklabels=5,
linewidths=.5, cbar_kws={"shrink": .5},ax=ax4)
ax4.set_title('Sample Precision Matrix, n/p=10')
sns.heatmap(np.tril(np.cov(X3,rowvar=False),-1),mask=mask,cmap=cmap, vmax=cov_vmax,
square=True, xticklabels=5, yticklabels=5,
linewidths=.5, cbar_kws={"shrink": .5},ax=ax5)
ax5.set_title('Sample Covariance Matrix, n/p=75')
sns.heatmap(np.tril(sp.linalg.pinv(np.cov(X3,rowvar=False)),-1),mask=mask,cmap=cmap, vmax=cov_vmax,
square=True, xticklabels=5, yticklabels=5,
linewidths=.5, cbar_kws={"shrink": .5},ax=ax6)
ax6.set_title('Sample Precision Matrix, n/p=75')
sns.heatmap(np.tril(np.cov(X4,rowvar=False),-1),mask=mask,cmap=cmap, vmax=cov_vmax,
square=True, xticklabels=5, yticklabels=5,
linewidths=.5, cbar_kws={"shrink": .5},ax=ax7)
ax7.set_title('Sample Covariance Matrix, n/p=10')
sns.heatmap(np.tril(sp.linalg.pinv(np.cov(X4,rowvar=False)),-1),mask=mask,cmap=cmap, vmax=cov_vmax,
square=True, xticklabels=5, yticklabels=5,
linewidths=.5, cbar_kws={"shrink": .5},ax=ax8)
ax8.set_title('Sample Precision Matrix, n/p=10')
%matplotlib inline
# show graphical lasso path
covariance, precision, adjacency = new_graph(15,.2,adj_type=adj_type,random_sign=True,seed=1)
prng = np.random.RandomState(2)
X = mvn(20*n_features,n_features,covariance,random_state=prng)
path = np.logspace(np.log10(0.01), np.log10(1.0), num=25, endpoint=True)[::-1]
estimator = QuicGraphLasso(lam=1.0,path=path,mode='path')
estimator.fit(X)
trace_plot(estimator.precision_, estimator.path_)
metric='log_likelihood';
covariance, precision, adjacency = new_graph(n_features,.05,adj_type='banded',random_sign=True,seed=1) # BUG:Might fail with other graphs
prng = np.random.RandomState(2)
X = mvn(10*n_features,n_features,covariance,random_state=prng)
print('QuicGraphLassoCV with:')
print('   metric: {}'.format(metric))
cv_model = QuicGraphLassoCV(
cv=2,
n_refinements=6,
n_jobs=1,
init_method='cov',
score_metric=metric)
cv_model.fit(X)
cv_precision_ = cv_model.precision_
print('   len(cv_lams): {}'.format(len(cv_model.cv_lams_)))
print('   lam_scale_: {}'.format(cv_model.lam_scale_))
print('   lam_: {}'.format(cv_model.lam_))
# EBIC
gamma = .1 # gamma = 0 equivalent to BIC and gamma=.5 for ultra high dimensions
ebic_model = QuicGraphLassoEBIC(
lam=1.0,
init_method='cov',
gamma = gamma)
ebic_model.fit(X)
ebic_precision_ = ebic_model.precision_
print('QuicGraphLassoEBIC with:')
print('   len(path lams): {}'.format(len(ebic_model.path_)))
print('   lam_scale_: {}'.format(ebic_model.lam_scale_))
print('   lam_: {}'.format(ebic_model.lam_))
%matplotlib inline
f, (ax1,ax2,ax3) = plt.subplots(1,3,figsize=(10,4));
mask = np.zeros_like(precision, dtype=bool)  # np.bool was removed in NumPy 1.24
mask[np.triu_indices_from(mask)] = True
mask[np.where(np.eye(np.shape(precision)[0]))] = True
cmap = sns.diverging_palette(220, 10, as_cmap=True)
prec_vmax = np.max(np.triu(np.abs(adjacency),1))
sns.heatmap(np.abs(adjacency),cmap=cmap, vmax=prec_vmax,
square=True, xticklabels=5, yticklabels=5,
linewidths=.5, cbar_kws={"shrink": .5},ax=ax1)
ax1.set_title('True Precision, CV')
prec_vmax = np.max(np.triu(np.abs(cv_precision_),1))
sns.heatmap(np.abs(cv_precision_),cmap=cmap, vmax=prec_vmax,
square=True, xticklabels=5, yticklabels=5,
linewidths=.5, cbar_kws={"shrink": .5},ax=ax2)
ax2.set_title('Selected Precision, CV')
prec_vmax = np.max(np.triu(np.abs(ebic_precision_),1))
sns.heatmap(np.abs(ebic_precision_),cmap=cmap, vmax=prec_vmax,
square=True, xticklabels=5, yticklabels=5,
linewidths=.5, cbar_kws={"shrink": .5},ax=ax3)
ax3.set_title('Selected Precision, EBIC')
%matplotlib inline
f, (ax1,ax2,ax3) = plt.subplots(1,3,figsize=(10,4));
covariance, precision, adjacency = new_graph(15,.15,adj_type=adj_type,random_sign=True,seed=1)
n_samples = 75*n_features
prng = np.random.RandomState(2)
X = mvn(n_samples,n_features,covariance,random_state=prng)
print('n = {}, p = {}'.format(n_samples, n_features))
def compare_init_adaptive(X,n_samples,n_features):
# Initial Estimator
initial_estimator = QuicGraphLassoCV(init_method='corrcoef')
initial_estimator.fit(X)
prec_hat = initial_estimator.precision_
# Generate a custom diverging colormap
cmap = sns.diverging_palette(220, 10, as_cmap=True)
prec_vmax = np.max(np.triu(prec_hat,1))
sns.heatmap(initial_estimator.precision_,cmap=cmap, vmax=prec_vmax,
square=True, xticklabels=5, yticklabels=5,
linewidths=.5, cbar_kws={"shrink": .5},ax=ax1)
ax1.set_title('Precision Matrix, Initial Estimator')
# Check Average Power
err_frob, err_supp, err_fp, err_fn, err_inf = ae_trial(
trial_estimator=initial_estimator,
n_samples=n_samples,
n_features=n_features,
cov=covariance,
adj=adjacency,
random_state=np.random.RandomState(2),X=X)
    print('Difference in sparsity: {},{}'.format(
        np.sum(np.not_equal(precision,0)),
        np.sum(np.not_equal(initial_estimator.precision_,0))
    ))
    print('Frob Norm: {} ({}), Support Error: {}, False Pos: {}, False Neg: {}'.format(
        err_frob, err_inf,
        err_supp,
        err_fp,
        err_fn
    ))
# Adaptive Estimator
twostage = AdaptiveGraphLasso(estimator=initial_estimator,method='inverse')
twostage.fit(X)
weighted_estimator = twostage.estimator_
prec_hat = weighted_estimator.precision_
prec_vmax = np.max(np.triu(prec_hat,1))
sns.heatmap(weighted_estimator.precision_,cmap=cmap, vmax=prec_vmax,
square=True, xticklabels=5, yticklabels=5,
linewidths=.5, cbar_kws={"shrink": .5},ax=ax2)
ax2.set_title('Precision Matrix, Adaptive Estimator')
    print('Difference in sparsity: {},{}'.format(
        np.sum(np.not_equal(precision,0)),
        np.sum(np.not_equal(weighted_estimator.precision_,0))
    ))
# Check Average Power
err_frob, err_supp, err_fp, err_fn, err_inf = ae_trial(
trial_estimator=weighted_estimator,
n_samples=n_samples,
n_features=n_features,
cov=covariance,
adj=adjacency,
random_state=np.random.RandomState(2), X = X)
    print('Frob Norm: {} ({}), Support Error: {}, False Pos: {}, False Neg: {}'.format(
        err_frob,err_inf,
        err_supp,
        err_fp,
        err_fn
    ))
    print()
prec_vmax = np.max(np.triu(precision,1))
sns.heatmap(adjacency,cmap=cmap, vmax=prec_vmax,
square=True, xticklabels=5, yticklabels=5,
linewidths=.5, cbar_kws={"shrink": .5},ax=ax3)
ax3.set_title('True Precision')
compare_init_adaptive(X,n_samples,n_features)
%matplotlib inline
f, (ax1,ax2,ax3) = plt.subplots(1,3,figsize=(10,4));
n_samples = 15*n_features
prng = np.random.RandomState(2)
X = mvn(n_samples,n_features,covariance,random_state=prng)
print('n = {}, p = {}'.format(n_samples, n_features))
compare_init_adaptive(X,n_samples,n_features)
%matplotlib inline
f, (ax1,ax2,ax3) = plt.subplots(1,3,figsize=(10,4));
covariance, precision, adjacency = new_graph(15,.4,adj_type=adj_type,random_sign=True,seed=1)
compare_population_parameters(covariance,precision,adjacency)
%matplotlib inline
f, (ax1, ax2, ax3) = plt.subplots(1,3, figsize=(10, 4));
n_samples = 75*n_features
prng = np.random.RandomState(2)
X = mvn(n_samples,n_features,covariance,random_state=prng)
print('n = {}, p = {}'.format(n_samples, n_features))
compare_init_adaptive(X,n_samples,n_features)
%matplotlib inline
f, (ax1, ax2, ax3) = plt.subplots(1,3, figsize=(10, 4));
n_samples = 20*n_features
prng = np.random.RandomState(2)
X = mvn(n_samples,n_features,covariance,random_state=prng)
print('n = {}, p = {}'.format(n_samples, n_features))
compare_init_adaptive(X,n_samples,n_features)
covariance, precision, adjacency = new_graph(15,.15,adj_type=adj_type,random_sign=True,seed=1)
n_samples = 15*n_features
prng = np.random.RandomState(2)
X = mvn(n_samples,n_features,covariance,random_state=prng)
ensemble_estimator = ModelAverage(
n_trials=50,
penalization='fully-random',
lam=0.15)
ensemble_estimator.fit(X)
covariance, precision, adjacency = new_graph(15,.3,adj_type=adj_type,random_sign=True,seed=1)
n_samples = 15*n_features
prng = np.random.RandomState(2)
X = mvn(n_samples,n_features,covariance,random_state=prng)
ensemble_estimator2 = ModelAverage(
n_trials=50,
penalization='fully-random',
lam=0.15)
ensemble_estimator2.fit(X)
%matplotlib inline
# Plot comparison
f, (ax2, ax3) = plt.subplots(1,2, figsize=(10, 4));
stability_threshold = .5
prec_hat = ensemble_estimator.proportion_
prec_vmax = np.max(np.triu(prec_hat,1))
cmap = sns.diverging_palette(220, 10, as_cmap=True)
sns.heatmap(prec_hat,cmap=cmap, vmax=prec_vmax,
square=True, xticklabels=5, yticklabels=5,
linewidths=.5, cbar_kws={"shrink": .5},ax=ax2)
ax2.set_title('Stability Matrix, deg/p=.15')
print('Difference in sparsity: {},{}'.format(
    np.sum(np.not_equal(precision,0)),
    np.sum(np.not_equal(prec_hat>stability_threshold,0))
))
err_fp, err_fn = _false_support(precision,np.greater(ensemble_estimator.proportion_,stability_threshold))
print('Support Error: False Pos: {}, False Neg: {}'.format(
    err_fp,
    err_fn
))
prec_hat = ensemble_estimator2.proportion_
prec_vmax = np.max(np.triu(prec_hat,1))
sns.heatmap(prec_hat,cmap=cmap, vmax=prec_vmax,
square=True, xticklabels=5, yticklabels=5,
linewidths=.5, cbar_kws={"shrink": .5},ax=ax3)
ax3.set_title('Stability Matrix, deg/p=.3')
print('Difference in sparsity: {},{}'.format(
    np.sum(np.not_equal(precision,0)),
    np.sum(np.not_equal(prec_hat>stability_threshold,0))
))
err_fp, err_fn = _false_support(precision,np.greater(ensemble_estimator2.proportion_,stability_threshold))
print('Support Error: False Pos: {}, False Neg: {}'.format(
    err_fp,
    err_fn
))
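# Illustrative follow-up: turn the stability matrix into a hard support
# estimate by thresholding at the stability level used above.
support_estimate = (ensemble_estimator2.proportion_ > stability_threshold).astype(int)
print('Estimated edges (upper triangle): {}'.format(int(np.sum(np.triu(support_estimate, 1)))))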
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a id='restart'></a>
Step2: Part II. Sparse Inverse Covariance via Penalized MLE
Step3: Model Selection
Step4: Part III. Compare Estimators
Step5: Initial vs. Adaptive, Low Sample Size
Step6: Initial vs. Adaptive in Moderately Dense Graphs
Step7: High Sample Size
Step8: Low Sample Size
Step9: Part IV Model Averaging
|
11,480
|
<ASSISTANT_TASK:>
Python Code:
# A first function. Find the length of a list.
a_list = [1, 2, 3]
len(a_list)
len({"a": [1, 2, 3], "b": 4})
range(3)
# Experiment with the builtin function all
all([1, "first", 3.4])
any([False, False])
list(range(3))
fd = open("t.txt", "w")
fd.write("a line")
fd.close()
!ls -l
help(fd)
data = [1, 2, 3, 4]
import numpy as np
np.mean(data)
np.median(data)
arr = np.array(data)
arr
np.reshape(arr, (2,2))
np.std(data)
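# Note: np.std defaults to the population formula (ddof=0); pass ddof=1 for
# the sample standard deviation.
np.std(data, ddof=1)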
from pandas import read_csv
from pandas.plotting import scatter_matrix
from matplotlib import pyplot
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Builtin Functions
Step2: Notice that Python (in the newest versions, e.g. 3+) has an object type that is a range. This saves memory and speeds up calculations vs. an explicit representation of a range as a list - but it can be automagically converted to a list on the fly by Python. To show the contents as a list we can use a type cast like with the tuple above.
Step3: Packages with functions
Step4: How do you find out what's in a package?
|
11,481
|
<ASSISTANT_TASK:>
Python Code:
from csp import *
%psource AC3
%psource revise
%psource AC3b
%psource partition
%psource dom_j_up
%psource sat_up
%psource AC4
sudoku = Sudoku(easy1)
sudoku.display(sudoku.infer_assignment())
%time _, checks = AC3(sudoku, arc_heuristic=no_arc_heuristic)
f'AC3 needs {checks} consistency-checks'
sudoku = Sudoku(easy1)
%time _, checks = AC3b(sudoku, arc_heuristic=no_arc_heuristic)
f'AC3b needs {checks} consistency-checks'
sudoku = Sudoku(easy1)
%time _, checks = AC4(sudoku, arc_heuristic=no_arc_heuristic)
f'AC4 needs {checks} consistency-checks'
sudoku = Sudoku(easy1)
%time _, checks = AC3(sudoku, arc_heuristic=dom_j_up)
f'AC3 with DOM J UP arc heuristic needs {checks} consistency-checks'
sudoku = Sudoku(easy1)
%time _, checks = AC3b(sudoku, arc_heuristic=dom_j_up)
f'AC3b with DOM J UP arc heuristic needs {checks} consistency-checks'
sudoku = Sudoku(easy1)
%time _, checks = AC4(sudoku, arc_heuristic=dom_j_up)
f'AC4 with DOM J UP arc heuristic needs {checks} consistency-checks'
backtracking_search(sudoku, select_unassigned_variable=mrv, inference=forward_checking)
sudoku.display(sudoku.infer_assignment())
sudoku = Sudoku(harder1)
sudoku.display(sudoku.infer_assignment())
%time _, checks = AC3(sudoku, arc_heuristic=no_arc_heuristic)
f'AC3 needs {checks} consistency-checks'
sudoku = Sudoku(harder1)
%time _, checks = AC3b(sudoku, arc_heuristic=no_arc_heuristic)
f'AC3b needs {checks} consistency-checks'
sudoku = Sudoku(harder1)
%time _, checks = AC4(sudoku, arc_heuristic=no_arc_heuristic)
f'AC4 needs {checks} consistency-checks'
sudoku = Sudoku(harder1)
%time _, checks = AC3(sudoku, arc_heuristic=dom_j_up)
f'AC3 with DOM J UP arc heuristic needs {checks} consistency-checks'
sudoku = Sudoku(harder1)
%time _, checks = AC3b(sudoku, arc_heuristic=dom_j_up)
f'AC3b with DOM J UP arc heuristic needs {checks} consistency-checks'
sudoku = Sudoku(harder1)
%time _, checks = AC4(sudoku, arc_heuristic=dom_j_up)
f'AC4 with DOM J UP arc heuristic needs {checks} consistency-checks'
backtracking_search(sudoku, select_unassigned_variable=mrv, inference=forward_checking)
sudoku.display(sudoku.infer_assignment())
chess = NQueensCSP(8)
chess.display(chess.infer_assignment())
%time _, checks = AC3(chess, arc_heuristic=no_arc_heuristic)
f'AC3 needs {checks} consistency-checks'
chess = NQueensCSP(8)
%time _, checks = AC3b(chess, arc_heuristic=no_arc_heuristic)
f'AC3b needs {checks} consistency-checks'
chess = NQueensCSP(8)
%time _, checks = AC4(chess, arc_heuristic=no_arc_heuristic)
f'AC4 needs {checks} consistency-checks'
chess = NQueensCSP(8)
%time _, checks = AC3(chess, arc_heuristic=dom_j_up)
f'AC3 with DOM J UP arc heuristic needs {checks} consistency-checks'
chess = NQueensCSP(8)
%time _, checks = AC3b(chess, arc_heuristic=dom_j_up)
f'AC3b with DOM J UP arc heuristic needs {checks} consistency-checks'
chess = NQueensCSP(8)
%time _, checks = AC4(chess, arc_heuristic=dom_j_up)
f'AC4 with DOM J UP arc heuristic needs {checks} consistency-checks'
backtracking_search(chess, select_unassigned_variable=mrv, inference=forward_checking)
chess.display(chess.infer_assignment())
%psource ACSolver.GAC
crossword = Crossword(crossword1, words1)
crossword.display()
words1
%time _, _, checks = ACSolver(crossword).GAC(arc_heuristic=no_heuristic)
f'GAC needs {checks} consistency-checks'
crossword = Crossword(crossword1, words1)
%time _, _, checks = ACSolver(crossword).GAC(arc_heuristic=sat_up)
f'GAC with SAT UP arc heuristic needs {checks} consistency-checks'
crossword.display(ACSolver(crossword).domain_splitting())
kakuro = Kakuro(kakuro2)
kakuro.display()
%time _, _, checks = ACSolver(kakuro).GAC(arc_heuristic=no_heuristic)
f'GAC needs {checks} consistency-checks'
kakuro = Kakuro(kakuro2)
%time _, _, checks = ACSolver(kakuro).GAC(arc_heuristic=sat_up)
f'GAC with SAT UP arc heuristic needs {checks} consistency-checks'
kakuro.display(ACSolver(kakuro).domain_splitting())
kakuro = Kakuro(kakuro3)
kakuro.display()
%time _, _, checks = ACSolver(kakuro).GAC(arc_heuristic=no_heuristic)
f'GAC needs {checks} consistency-checks'
kakuro = Kakuro(kakuro3)
%time _, _, checks = ACSolver(kakuro).GAC(arc_heuristic=sat_up)
f'GAC with SAT UP arc heuristic needs {checks} consistency-checks'
kakuro.display(ACSolver(kakuro).domain_splitting())
kakuro = Kakuro(kakuro4)
kakuro.display()
%time _, _, checks = ACSolver(kakuro).GAC(arc_heuristic=no_heuristic)
f'GAC needs {checks} consistency-checks'
kakuro = Kakuro(kakuro4)
%time _, _, checks = ACSolver(kakuro).GAC(arc_heuristic=sat_up)
f'GAC with SAT UP arc heuristic needs {checks} consistency-checks'
kakuro.display(ACSolver(kakuro).domain_splitting())
cryptarithmetic = NaryCSP(
{'S': set(range(1, 10)), 'M': set(range(1, 10)),
'E': set(range(0, 10)), 'N': set(range(0, 10)), 'D': set(range(0, 10)),
'O': set(range(0, 10)), 'R': set(range(0, 10)), 'Y': set(range(0, 10)),
'C1': set(range(0, 2)), 'C2': set(range(0, 2)), 'C3': set(range(0, 2)),
'C4': set(range(0, 2))},
[Constraint(('S', 'E', 'N', 'D', 'M', 'O', 'R', 'Y'), all_diff),
Constraint(('D', 'E', 'Y', 'C1'), lambda d, e, y, c1: d + e == y + 10 * c1),
Constraint(('N', 'R', 'E', 'C1', 'C2'), lambda n, r, e, c1, c2: c1 + n + r == e + 10 * c2),
Constraint(('E', 'O', 'N', 'C2', 'C3'), lambda e, o, n, c2, c3: c2 + e + o == n + 10 * c3),
Constraint(('S', 'M', 'O', 'C3', 'C4'), lambda s, m, o, c3, c4: c3 + s + m == o + 10 * c4),
Constraint(('M', 'C4'), eq)])
%time _, _, checks = ACSolver(cryptarithmetic).GAC(arc_heuristic=no_heuristic)
f'GAC needs {checks} consistency-checks'
%time _, _, checks = ACSolver(cryptarithmetic).GAC(arc_heuristic=sat_up)
f'GAC with SAT UP arc heuristic needs {checks} consistency-checks'
assignment = ACSolver(cryptarithmetic).domain_splitting()
from IPython.display import Latex
display(Latex(r'\begin{array}{@{}r@{}} ' + '{}{}{}{}'.format(assignment['S'], assignment['E'], assignment['N'], assignment['D']) + r' \\ + ' +
'{}{}{}{}'.format(assignment['M'], assignment['O'], assignment['R'], assignment['E']) + r' \\ \hline ' +
              '{}{}{}{}{}'.format(assignment['M'], assignment['O'], assignment['N'], assignment['E'], assignment['Y']) + r' \end{array}'))
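# Quick numeric check of the solution (illustrative):
send = int('{S}{E}{N}{D}'.format(**assignment))
more = int('{M}{O}{R}{E}'.format(**assignment))
money = int('{M}{O}{N}{E}{Y}'.format(**assignment))
assert send + more == money
print('{} + {} = {}'.format(send, more, money))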
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Domain-Heuristics for Arc-Consistency Algorithms
Step2: At any stage in the process of making 2-variable CSP arc-consistent in AC3b
Step3: AC3b is a refinement of the AC3 algorithm based on the observation that if, while arc $(i,j)$ is being processed, the reverse arc $(j,i)$ is also in the queue, then consistency-checks can be saved because only support for the elements in $S_j^?$ has to be found (as opposed to support for all the elements in $D_j$ in the
Step4: Experimental Results
Step5: Sudoku
Step6: Harder Sudoku
Step7: 8 Queens
Step8: For the experiments below on n-ary CSPs, due to the n-ary constraints, the GAC algorithm was used. <br>
Step9: Crossword
Step10: Kakuro
Step11: Medium Kakuro
Step12: Harder Kakuro
Step13: Cryptarithmetic Puzzle
|
11,482
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-cm2-hr5', 'toplevel')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Flux Correction
Step7: 3. Key Properties --> Genealogy
Step8: 3.2. CMIP3 Parent
Step9: 3.3. CMIP5 Parent
Step10: 3.4. Previous Name
Step11: 4. Key Properties --> Software Properties
Step12: 4.2. Code Version
Step13: 4.3. Code Languages
Step14: 4.4. Components Structure
Step15: 4.5. Coupler
Step16: 5. Key Properties --> Coupling
Step17: 5.2. Atmosphere Double Flux
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Step19: 5.4. Atmosphere Relative Winds
Step20: 6. Key Properties --> Tuning Applied
Step21: 6.2. Global Mean Metrics Used
Step22: 6.3. Regional Metrics Used
Step23: 6.4. Trend Metrics Used
Step24: 6.5. Energy Balance
Step25: 6.6. Fresh Water Balance
Step26: 7. Key Properties --> Conservation --> Heat
Step27: 7.2. Atmos Ocean Interface
Step28: 7.3. Atmos Land Interface
Step29: 7.4. Atmos Sea-ice Interface
Step30: 7.5. Ocean Seaice Interface
Step31: 7.6. Land Ocean Interface
Step32: 8. Key Properties --> Conservation --> Fresh Water
Step33: 8.2. Atmos Ocean Interface
Step34: 8.3. Atmos Land Interface
Step35: 8.4. Atmos Sea-ice Interface
Step36: 8.5. Ocean Seaice Interface
Step37: 8.6. Runoff
Step38: 8.7. Iceberg Calving
Step39: 8.8. Endoreic Basins
Step40: 8.9. Snow Accumulation
Step41: 9. Key Properties --> Conservation --> Salt
Step42: 10. Key Properties --> Conservation --> Momentum
Step43: 11. Radiative Forcings
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Step45: 12.2. Additional Information
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Step47: 13.2. Additional Information
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Step49: 14.2. Additional Information
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Step51: 15.2. Additional Information
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Step53: 16.2. Additional Information
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Step55: 17.2. Equivalence Concentration
Step56: 17.3. Additional Information
Step57: 18. Radiative Forcings --> Aerosols --> SO4
Step58: 18.2. Additional Information
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Step60: 19.2. Additional Information
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Step62: 20.2. Additional Information
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Step64: 21.2. Additional Information
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Step66: 22.2. Aerosol Effect On Ice Clouds
Step67: 22.3. Additional Information
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Step69: 23.2. Aerosol Effect On Ice Clouds
Step70: 23.3. RFaci From Sulfate Only
Step71: 23.4. Additional Information
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Step73: 24.2. Additional Information
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Step77: 25.4. Additional Information
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Step81: 26.4. Additional Information
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Step83: 27.2. Additional Information
Step84: 28. Radiative Forcings --> Other --> Land Use
Step85: 28.2. Crop Change Only
Step86: 28.3. Additional Information
Step87: 29. Radiative Forcings --> Other --> Solar
Step88: 29.2. Additional Information
|
11,483
|
<ASSISTANT_TASK:>
Python Code:
import tarfile
import re
import os
from itertools import count
# You have a copy of this file in your `data` directory. Tate provides the data in a single TAR (tape archive) file
DATA_PATH = '../data/tate-collection-1.2.tar.gz'
DATA_FOBJ = tarfile.open(DATA_PATH)
# We can use Python's tools for working with tar files to inspect the data package
# For instance by listing the files it contains without unpacking it
FILES = DATA_FOBJ.getmembers()
for i, f in enumerate(FILES[:10]):
print('{0} \t {1}'.format(i, f))
# This time we're only going to extract the CSV file
DATA_FOBJ.extractall(path='../data/tate-collection', members=FILES[0:4])
CSV_FILE_PATH = '../data/tate-collection/collection-1.2/artist_data.csv'
%%bash
csvlook ../data/tate-collection/collection-1.2/artist_data.csv | head -n 5
!csvcut -n ../data/tate-collection/collection-1.2/artist_data.csv
!csvcut -c 1,2,5,6,9 ../data/tate-collection/collection-1.2/artist_data.csv
!csvcut -c name,url ../data/tate-collection/collection-1.2/artist_data.csv
!csvcut -c name,dates ../data/tate-collection/collection-1.2/artist_data.csv | csvlook | head -n 10
!csvcut -c name,yearOfBirth,yearOfDeath,url ../data/tate-collection/collection-1.2/artist_data.csv | \
csvstat
!csvcut -c name,yearOfBirth,yearOfDeath ../data/tate-collection/collection-1.2/artist_data.csv | \
csvgrep -c yearOfBirth -m 1497 | csvlook
!csvcut -c name,yearOfBirth,yearOfDeath ../data/tate-collection/collection-1.2/artist_data.csv | \
csvgrep -c yearOfDeath -m 2005 | csvlook
!csvcut -c name,yearOfBirth,yearOfDeath ../data/tate-collection/collection-1.2/artist_data.csv | \
csvgrep -c yearOfDeath -m 2005 | csvsort -c yearOfBirth | csvlook
!csvsql --query "select name from artist_data where yearOfBirth > 1700;" \
../data/tate-collection/collection-1.2/artist_data.csv | csvlook
!csvsql -i sqlite ../data/tate-collection/collection-1.2/artist_data.csv
!csvsql --db sqlite:///tate_artists.db --insert ../data/tate-collection/collection-1.2/artist_data.csv
!sql2csv --db sqlite:///tate_artists.db --query "select * from artist_data"
!sql2csv --db sqlite:///tate_artists.db --query "select * from artist_data where gender='Female'"
!sql2csv --db sqlite:///tate_artists.db --query "select name,dates from artist_data where gender='Female'" | \
csvlook
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exploring
Step2: What we're seeing here is effectively a plain text display of our CSV data … not super pretty but faster than opening Excel. Let's make this more useful
Step3: We can look at just a subset of columns …
Step4: Once you know the column names you can also use those to subset or slice
Step5: Now combine these tools together using unix pipes
Step6: In our case, this still doesn't look that helpful because some of these columns (like name) are really wide
Step7: Super useful!
Step8: Let's sort by the year of birth to see which of our 2005 decedents were the oldest
Step9: Power Usage
Step10: This facility is best for one-off ad hoc queries and, since csvkit is building a little in-memory database behind the scenes to make it possible, large datasets can be very slow to work with this way.
Step11: Imported successfully, now we can run any SQL query we want
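If you would rather stay inside Python than shell out, the SQLite database that csvsql just built can be queried directly with the standard library. This is only a sketch (not part of the original walkthrough); it assumes the tate_artists.db file and artist_data table created above, and the gender column shown in the earlier queries.
import sqlite3
conn = sqlite3.connect("tate_artists.db")
cur = conn.cursor()
# Count artists per gender, mirroring the sql2csv queries above
for gender, n in cur.execute("SELECT gender, COUNT(*) FROM artist_data GROUP BY gender"):
    print(gender, n)
conn.close()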
|
11,484
|
<ASSISTANT_TASK:>
Python Code:
import math
import rebound, rebound.data
%matplotlib inline
sim = rebound.Simulation()
rebound.data.add_outer_solar_system(sim) # add some particles for testing
for i in range(1,sim.N):
sim.particles[i].m *= 50.
sim.integrator = "WHFast" # This will end badly!
sim.dt = sim.particles[1].P * 0.002 # Timestep a small fraction of innermost planet's period
sim.move_to_com()
E0 = sim.calculate_energy() # Calculate initial energy
rebound.OrbitPlot(sim);
sim.integrate(600*2.*math.pi)
E1 = sim.calculate_energy()
print("Relative energy error with WHFast: %f"%((E0-E1)/E0))
sim = rebound.Simulation()
rebound.data.add_outer_solar_system(sim) # add some particles for testing
for i in range(1,sim.N):
sim.particles[i].m *= 50.
sim.integrator = "mercurius"
sim.dt = sim.particles[1].P * 0.002 # Timestep a small fraction of innermost planet's period
sim.move_to_com()
E0 = sim.calculate_energy() # Calculate initial energy
sim.integrate(600*2.*math.pi)
E1 = sim.calculate_energy()
print("Relative energy error with MERCURIUS: %e"%((E1-E0)/E0))
# Sets the minimal timestep to a fraction of the global timestep
sim.ri_ias15.min_dt = 1e-4 * sim.dt
sim.ri_mercurius.hillfac = 5
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let us integrate this system for a few hundred years. An instability will occur. We can then measure the energy error, which is a good estimate as to how accurate the integration was.
Step2: An energy error that large means we basically got it wrong completely. Let's try this again but use MERCURIUS.
Step3: As you can see, MERCURIUS is able to integrate this system with much better accuracy. When a close encounter occurs, it automatically (and smoothly!) switches to the IAS15 integrator. When there is no close encounter, you still get all the benefits in terms of speed and accuracy from a symplectic integrator.
Step4: You may also want to change the critical distance at which MERCURIUS switches over from pure WHFast to IAS15. This is expressed in units of Hill radii. The default is 3 Hill radii; in the following we change it to 5.
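As a rough sketch of how these two settings might be used together (this re-run is not part of the original notebook), one can rebuild the same unstable system with the tuned MERCURIUS options and measure the energy error again:
# Hedged sketch: rerun the unstable system with the tuned MERCURIUS settings.
sim = rebound.Simulation()
rebound.data.add_outer_solar_system(sim)
for i in range(1, sim.N):
    sim.particles[i].m *= 50.
sim.integrator = "mercurius"
sim.dt = sim.particles[1].P * 0.002
sim.ri_ias15.min_dt = 1e-4 * sim.dt      # minimal IAS15 timestep, as above
sim.ri_mercurius.hillfac = 5             # switch-over distance in Hill radii
sim.move_to_com()
E0 = sim.calculate_energy()
sim.integrate(600*2.*math.pi)
E1 = sim.calculate_energy()
print("Relative energy error with tuned MERCURIUS: %e" % ((E1-E0)/E0))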
|
11,485
|
<ASSISTANT_TASK:>
Python Code:
def list_of_strings_v1(iterable):
    """converts the iterable input into a list of strings"""
# build the output
out = [str(i) for i in iterable]
# validate the output
for i in out:
assert type(i) == str
# return
return out
list_of_strings_v1(range(10))
from battle_tested import fuzz
fuzz(list_of_strings_v1)
def list_of_strings_v2(iterable):
    """converts the iterable input into a list of strings"""
try:
iter(iterable)
# build the output
out = [str(i) for i in iterable]
except TypeError: # raised when input was not iterable
out = [str(iterable)]
# validate the output
for i in out:
assert type(i) == str
# return
return out
fuzz(list_of_strings_v2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: battle_tested was originally created to harden your safeties.
Step2: Here's an example of what many programmers would consider enough of a test.
Step3: The above proves it works and is pretty clean and understandable right?
Step5: And with 2 lines of code, that was proven wrong.
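To see the kind of failure fuzz surfaces without running the whole fuzzer, a single hand-picked input is enough. This is only an illustrative sketch; the exact crashing inputs reported by fuzz may differ.
# A non-iterable input breaks v1 but is handled by v2.
try:
    list_of_strings_v1(None)        # None is not iterable, so the list comprehension raises
except TypeError as e:
    print("v1 crashed:", e)
print(list_of_strings_v2(None))     # v2 falls back to wrapping the value: ['None']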
|
11,486
|
<ASSISTANT_TASK:>
Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
%env TFLITE_FILE=example.tflite
! curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
! echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
! sudo apt-get update
! sudo apt-get install edgetpu-compiler
! edgetpu_compiler $TFLITE_FILE
import os
from google.colab import files
name = os.path.splitext(os.environ['TFLITE_FILE'])[0]
files.download(str(name + '_edgetpu.tflite'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Compile a model for the Edge TPU
Step2: Now click Runtime > Run all in the Colab toolbar.
Step3: Compile the model
Step4: The compiled model uses the same filename but with "_edgetpu" appended at the end.
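Once the compiled _edgetpu.tflite file is on a device with an Edge TPU, it is typically loaded through the TensorFlow Lite runtime with the Edge TPU delegate. The snippet below is only a sketch and is not part of this notebook; it runs on the target device, and the delegate library name assumes a Linux host.
# Hedged sketch (run on the device, not in Colab): load the compiled model.
import tflite_runtime.interpreter as tflite
interpreter = tflite.Interpreter(
    model_path="example_edgetpu.tflite",
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()
print(interpreter.get_input_details())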
|
11,487
|
<ASSISTANT_TASK:>
Python Code:
from random import seed
from random import randrange
import random
from csv import reader
from math import sqrt
import copy
# carregar o arquivo de CSV
def carregar_csv(nome_arquivo):
dados = list()
with open(nome_arquivo, 'r') as arquivo:
leitor_csv = reader(arquivo)
for linha in leitor_csv:
if not linha:
continue
dados.append(linha)
return dados
def str_coluna_para_int(dados, coluna):
    valores_de_classe = [linha[coluna] for linha in dados]
    unique = set(valores_de_classe)
    lookup = dict()
    for i, value in enumerate(unique):
        lookup[value] = i
    for linha in dados:
        linha[coluna] = lookup[linha[coluna]]
    return lookup
# Convert string column to float
def str_column_to_float(dataset, column):
    dataset_copy = copy.deepcopy(dataset)
    for row in dataset_copy:
        row[column] = float(row[column].strip())
    return dataset_copy
# carregar os dados
arquivo = '../../data_sets/sonar.all-data.csv'
dados = carregar_csv(arquivo)
# converte atributos de string para números de ponto flutuante
for i in range(0, len(dados[0])-1):
dados = str_column_to_float(dados, i)
dados_X = [linha[:-1] for linha in dados]
dados_Y = [linha[-1] for linha in dados]
def equilibrar_as_classes(dados_X, dados_Y):
classes = set(dados_Y)
conta_min = len(dados_Y)
for classe in classes:
conta = dados_Y.count(classe)
if conta < conta_min:
conta_min = conta
dados_igual_X = []
dados_igual_Y = []
indíces = set()
for classe in classes:
while len(dados_igual_Y) < len(classes)*conta_min:
indíce = random.randint(0,len(dados_X)-1)
classe = dados_Y[indíce]
if (indíce not in indíces) and (dados_igual_Y.count(classe) < conta_min):
indíces.update([indíce])
dados_igual_X.append(dados_X[indíce])
dados_igual_Y.append(dados_Y[indíce])
return dados_igual_X, dados_igual_Y
def criar_divisão_trem_teste(dados_X, dados_Y, relação=.8):
classes = set(dados_Y)
n_classes = len(classes)
trem_classe_tamanho = int((len(dados_Y)*relação)/n_classes)
indíces_todo = set(range(len(dados_X)))
indíces_para_escolher = set(range(len(dados_X)))
indíces = set()
trem_X = []
trem_Y = []
teste_X = []
teste_Y = []
while len(trem_Y) < trem_classe_tamanho*n_classes:
indíce = random.choice(list(indíces_para_escolher))
indíces_para_escolher.remove(indíce)
classe = dados_Y[indíce]
if (trem_Y.count(classe) < trem_classe_tamanho):
indíces.update([indíce])
trem_X.append(dados_X[indíce])
trem_Y.append(dados_Y[indíce])
indíces_teste = indíces_todo - indíces
for indíce in indíces_teste:
teste_X.append(dados_X[indíce])
teste_Y.append(dados_Y[indíce])
return trem_X, trem_Y, teste_X, teste_Y
dados_igual_X, dados_igual_Y = equilibrar_as_classes(dados_X, dados_Y)
trem_X, trem_Y, teste_X, teste_Y = criar_divisão_trem_teste(dados_igual_X, dados_igual_Y, relação=.8)
def obter_melhor_divisão(dados_X, dados_Y, n_características=None):
    """Obter a melhor divisão pelo dados
    :param dados_X: Lista, o conjuncto de dados
    :param dados_Y: Lista, os classes
    :param n_características: Int, o numero de características para usar; quando você está usando a árvore sozinha
    fica esta entrada em None
    :return: dicionário, pela melhor divisão, o indíce da característica, o valor para dividir, e os grupos de
    registros resultandos da divisão
    """
    classes = list(set(dados_Y))  # lista única de classes
    b_indíce, b_valor, b_ponto, b_grupos = 999, 999, 999, None
    # X e Y ficam separados; não é preciso anexar as classes aos registros
    if n_características is None:
        n_características = len(dados_X[0])
    # Faz uma lista de características únicas para usar
    características = list()
    while len(características) < n_características:
        indíce = randrange(len(dados_X[0]))
        if indíce not in características:
            características.append(indíce)
    for indíce in características:
        for registro in dados_X:
            grupos = tentar_divisão(indíce, registro[indíce], dados_X, dados_Y)
            gini = gini_indíce(grupos, classes)
            if gini < b_ponto:
                b_indíce, b_valor, b_ponto, b_grupos = indíce, registro[indíce], gini, grupos
    return {'indíce': b_indíce, 'valor': b_valor, 'grupos': b_grupos}
def tentar_divisão(indíce, valor, dados_X, dados_Y):
    """Dividir o dados sobre uma característica e o valor da característica dele
    :param indíce: Int, o indíce da característica
    :param valor: Float, o valor do indíce por um registro
    :param dados_X: List, o conjuncto de dados
    :param dados_Y: List, o conjuncto de classes
    :return: esquerda, direita; duas listas de registros divididas pelo valor da característica
    """
esquerda_X, esquerda_Y, direita_X, direita_Y = [], [], [], []
for linha_ix in range(len(dados_X)):
if dados_X[linha_ix][indíce] < valor:
esquerda_X.append(dados_X[linha_ix])
esquerda_Y.append(dados_Y[linha_ix])
else:
direita_X.append(dados_X[linha_ix])
direita_Y.append(dados_Y[linha_ix])
return esquerda_X, esquerda_Y, direita_X, direita_Y
def gini_indíce(grupos, classes):
    """Calcular o indíce-Gini pela divisão dos dados
    :param grupos: O grupo de registros
    :param classes: O conjuncto de alvos
    :return: gini, Float a pontuação de pureza
    """
    grupos_X = grupos[0], grupos[2]
    grupos_Y = grupos[1], grupos[3]
    gini = 0.0
    for valor_alvo in classes:
        for grupo_ix in [0, 1]:
            tomanho = len(grupos_X[grupo_ix])
            if tomanho == 0:
                continue
            proporção = grupos_Y[grupo_ix].count(valor_alvo) / float(tomanho)
            gini += (proporção * (1.0 - proporção))
    return gini
# Agora que podemos obter a melhor divisão uma vez, precisamos fazer isso muitas vezes e montar a árvore
def to_terminal(grupo_Y):
    """Voltar o valor alvo para uma grupo no fim de uma filial
    :param grupo_Y: O conjuncto de classes em um lado de uma divisão
    :return: valor_de_alvo, Int
    """
valor_de_alvo = max(set(grupo_Y), key=grupo_Y.count)
return valor_de_alvo
def dividir(nó_atual, profundidade_max, tamanho_min, n_características, depth):
    """Recursivo, faz subdivisões por um nó ou faz um terminal
    :param nó_atual: o nó sendo analisado agora, vai mudar o root
    :param profundidade_max: Int, a profundidade máxima da árvore
    """
esquerda_X, esquerda_Y, direita_X, direita_Y = nó_atual['grupos']
del(nó_atual['grupos'])
# provar por um nó onde um dos lados tem todos os dados
if not esquerda_X or not direita_X:
nó_atual['esquerda'] = nó_atual['direita'] = to_terminal(esquerda_Y + direita_Y)
return
# provar por profundidade maximo
if depth >= profundidade_max:
nó_atual['esquerda'], nó_atual['direita'] = to_terminal(esquerda_Y), to_terminal(direita_Y)
return
# processar o lado esquerda
if len(esquerda_X) <= tamanho_min:
nó_atual['esquerda'] = to_terminal(esquerda_Y)
else:
nó_atual['esquerda'] = obter_melhor_divisão(esquerda_X, esquerda_Y, n_características)
        dividir(nó_atual['esquerda'], profundidade_max, tamanho_min, n_características, depth+1)
# processar o lado direita
if len(direita_X) <= tamanho_min:
nó_atual['direita'] = to_terminal(direita_Y)
else:
nó_atual['direita'] = obter_melhor_divisão(direita_X, direita_Y, n_características)
        dividir(nó_atual['direita'], profundidade_max, tamanho_min, n_características, depth+1)
def criar_árvore(trem_X, trem_Y, profundidade_max, tamanho_min, n_características):
    """Criar árvore
    :param: trem_X, trem_Y, profundidade_max, tamanho_min, n_características
    """
root = obter_melhor_divisão(trem_X, trem_Y, n_características)
dividir(root, profundidade_max, tamanho_min, n_características, 1)
return root
n_características = len(dados_X[0])-1
profundidade_max = 10
tamanho_min = 1
árvore = criar_árvore(trem_X, trem_Y, profundidade_max, tamanho_min, n_características)
def prever(nó, linha):
if linha[nó['indíce']] < nó['valor']:
if isinstance(nó['esquerda'], dict):
return prever(nó['esquerda'], linha)
else:
return nó['esquerda']
else:
if isinstance(nó['direita'], dict):
return prever(nó['direita'], linha)
else:
return nó['direita']
teste_ix = 9
print('A classe preveu da árvore é: ', str(prever(árvore, teste_X[teste_ix])))
print('A classe atual é: ', str(teste_Y[teste_ix]))
def precisão(teste_X, teste_Y, árvore):
pontos = []
for teste_ix in range(len(teste_X)):
preverção = prever(árvore, teste_X[teste_ix])
if preverção == teste_Y[teste_ix]:
pontos += [1]
else:
pontos += [0]
precisão_valor = sum(pontos)/len(pontos)
return precisão_valor, pontos
precisão_valor = precisão(teste_X, teste_Y, árvore)[0]
precisão_valor
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading the data and preprocessing
Step2: Next we need to make the data contain the same number of examples of each class. This is important for most machine-learning classifiers, because otherwise the classifier would just predict the majority class every time, since that would give the best accuracy.
Step9: Next we can start building our tree.
Step11: Finally we create a simple function that we will execute to create the tree.
Step12: And run it!
Step13: Now we can use our tree to predict the class of data.
Step14: Now we can make predictions using our prever function. It is best to use records from our test set, because the tree has not seen those before. We can make a prediction and then compare the result with the actual class.
Step15: Next we will create a function that compares all the records in our test set and gives us the accuracy. Accuracy is defined as the percentage of records the tree predicted correctly.
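Because the tree returned by criar_árvore is just a nested dictionary with the keys 'indíce', 'valor', 'esquerda' and 'direita', a small helper (a sketch, not part of the original notebook) can print its structure and makes the splits easier to inspect:
# Sketch: pretty-print the nested-dict tree produced by criar_árvore.
def imprimir_árvore(nó, profundidade=0):
    if isinstance(nó, dict):
        print('%s[X%d < %.3f]' % ('  ' * profundidade, nó['indíce'], nó['valor']))
        imprimir_árvore(nó['esquerda'], profundidade + 1)
        imprimir_árvore(nó['direita'], profundidade + 1)
    else:
        print('%s[%s]' % ('  ' * profundidade, nó))
imprimir_árvore(árvore)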
|
11,488
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.datasets import fetch_20newsgroups
categories = ['alt.atheism', 'soc.religion.christian',
'comp.graphics', 'sci.med']
twenty_train = fetch_20newsgroups(
subset='train',
categories=categories,
shuffle=True,
random_state=42,
remove=('headers', 'footers'),
)
twenty_test = fetch_20newsgroups(
subset='test',
categories=categories,
shuffle=True,
random_state=42,
remove=('headers', 'footers'),
)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.decomposition import TruncatedSVD
from sklearn.pipeline import Pipeline, make_pipeline
vec = TfidfVectorizer(min_df=3, stop_words='english',
ngram_range=(1, 2))
svd = TruncatedSVD(n_components=100, n_iter=7, random_state=42)
lsa = make_pipeline(vec, svd)
clf = SVC(C=150, gamma=2e-2, probability=True)
pipe = make_pipeline(lsa, clf)
pipe.fit(twenty_train.data, twenty_train.target)
pipe.score(twenty_test.data, twenty_test.target)
def print_prediction(doc):
y_pred = pipe.predict_proba([doc])[0]
for target, prob in zip(twenty_train.target_names, y_pred):
print("{:.3f} {}".format(prob, target))
doc = twenty_test.data[0]
print_prediction(doc)
import eli5
from eli5.lime import TextExplainer
te = TextExplainer(random_state=42)
te.fit(doc, pipe.predict_proba)
te.show_prediction(target_names=twenty_train.target_names)
import re
doc2 = re.sub(r'(recall|kidney|stones|medication|pain|tech)', '', doc, flags=re.I)
print_prediction(doc2)
print(te.samples_[0])
len(te.samples_)
te.vec_, te.clf_
te.metrics_
import numpy as np
def predict_proba_len(docs):
# nasty predict_proba - the result is based on document length,
# and also on a presence of "medication"
proba = [
[0, 0, 1.0, 0] if len(doc) % 2 or 'medication' in doc else [1.0, 0, 0, 0]
for doc in docs
]
return np.array(proba)
te3 = TextExplainer().fit(doc, predict_proba_len)
te3.show_prediction(target_names=twenty_train.target_names)
te3.metrics_
from sklearn.pipeline import make_union
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.base import TransformerMixin
class DocLength(TransformerMixin):
def fit(self, X, y=None): # some boilerplate
return self
def transform(self, X):
return [
# note that we needed both positive and negative
# feature - otherwise for linear model there won't
# be a feature to show in a half of the cases
[len(doc) % 2, not len(doc) % 2]
for doc in X
]
def get_feature_names(self):
return ['is_odd', 'is_even']
vec = make_union(DocLength(), CountVectorizer(ngram_range=(1,2)))
te4 = TextExplainer(vec=vec).fit(doc[:-1], predict_proba_len)
print(te4.metrics_)
te4.explain_prediction(target_names=twenty_train.target_names)
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vec_char = HashingVectorizer(analyzer='char_wb', ngram_range=(4,5))
clf_char = SGDClassifier(loss='log')
pipe_char = make_pipeline(vec_char, clf_char)
pipe_char.fit(twenty_train.data, twenty_train.target)
pipe_char.score(twenty_test.data, twenty_test.target)
eli5.show_prediction(clf_char, doc, vec=vec_char,
targets=['sci.med'], target_names=twenty_train.target_names)
te = TextExplainer(random_state=42).fit(doc, pipe_char.predict_proba)
print(te.metrics_)
te.show_prediction(targets=['sci.med'], target_names=twenty_train.target_names)
te = TextExplainer(char_based=True, random_state=42)
te.fit(doc, pipe_char.predict_proba)
print(te.metrics_)
te.show_prediction(targets=['sci.med'], target_names=twenty_train.target_names)
te = TextExplainer(char_based=True, n_samples=50000, random_state=42)
te.fit(doc, pipe_char.predict_proba)
print(te.metrics_)
te.show_prediction(targets=['sci.med'], target_names=twenty_train.target_names)
from eli5.lime.samplers import MaskingTextSampler
sampler = MaskingTextSampler(
# Regex to split text into tokens.
# "." means any single character is a token, i.e.
# we work on chars.
token_pattern='.',
# replace no more than 3 tokens
max_replace=3,
# by default all tokens are replaced;
# replace only a token at a given position.
bow=False,
)
samples, similarity = sampler.sample_near(doc)
print(samples[0])
te = TextExplainer(char_based=True, sampler=sampler, random_state=42)
te.fit(doc, pipe_char.predict_proba)
print(te.metrics_)
te.show_prediction(targets=['sci.med'], target_names=twenty_train.target_names)
from sklearn.tree import DecisionTreeClassifier
te5 = TextExplainer(clf=DecisionTreeClassifier(max_depth=2), random_state=0)
te5.fit(doc, pipe.predict_proba)
print(te5.metrics_)
te5.show_weights()
print("both words removed:")
print_prediction(re.sub(r"(kidney|pain)", "", doc, flags=re.I))
print("\nonly 'pain' removed:")
print_prediction(re.sub(r"pain", "", doc, flags=re.I))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The dimension of the input documents is reduced to 100, and then a kernel SVM is used to classify the documents.
Step2: TextExplainer
Step3: Why it works
Step4: Predicted probabilities changed a lot indeed.
Step5: By default TextExplainer generates 5000 distorted texts (use n_samples argument to change the amount)
Step6: Trained white-box classifier and vectorizer are available as vec_ and clf_ attributes
Step7: Should we trust the explanation?
Step8: 'score' is an accuracy score weighted by cosine distance between the generated sample and the original document (i.e. texts which are closer to the example are more important). Accuracy shows how good the 'top 1' predictions are.
Step9: TextExplainer correctly figured out that 'medication' is important, but failed to account for "len(doc) % 2" condition, so the explanation is incomplete. We can detect this failure by looking at metrics - they are low
Step10: If (a big if...) we suspect that the fact that the document length is even or odd is important, it is possible to customize TextExplainer to check this hypothesis.
Step11: Much better! It was a toy example, but the idea stands - if you think something could be important, add it to the mix as a feature for TextExplainer.
Step12: This pipeline is supported by eli5 directly, so in practice there is no need to use TextExplainer for it. We're using this pipeline as an example - it is possible check the "true" explanation first, without using TextExplainer, and then compare the results with TextExplainer results.
Step13: TextExplainer produces a different result
Step14: Scores look OK but not great; the explanation kind of makes sense at first sight, but we know that the classifier works in a different way.
Step15: Hm, the result looks worse. TextExplainer detected correctly that only the first part of the word "medication" is important, but the result is noisy overall, and scores are bad. Let's try it with more samples
Step16: It is getting closer, but still not there yet. The problem is that it is much more resource intensive - you need a lot more samples to get non-noisy results. Here explaining a single example took more time than training the original pipeline.
Step17: Note that accuracy score is perfect, but KL divergence is bad. It means this sampler was not very useful
Step18: How to read it
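One more way to sanity-check the approximation (a sketch, not part of the original tutorial) is to compare the black-box pipeline's probabilities with those of the fitted white-box model on the same document, using the vec_ and clf_ attributes shown earlier. This assumes te is the explainer fitted on pipe.predict_proba and that its white-box classifier exposes predict_proba (the default log-loss SGD classifier does).
black_box = pipe.predict_proba([doc])[0]
white_box = te.clf_.predict_proba(te.vec_.transform([doc]))[0]
for target, p_bb, p_wb in zip(twenty_train.target_names, black_box, white_box):
    print("%-22s black-box=%.3f  white-box=%.3f" % (target, p_bb, p_wb))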
|
11,489
|
<ASSISTANT_TASK:>
Python Code:
from pygameday import GameDayClient
from datetime import datetime
# Use an SQLite URI. A database file named `gameday.db` will be created in the current directory
# if it doesn't already exist
database_uri = "sqlite:///gameday.db"
# Instantiate a GameDayClient with the above URI, a moderate log level, without ingesting exhibition games,
# and without processing games in parallel
client = GameDayClient(database_uri, log_level="INFO", ingest_spring_training=False, n_workers=1)
client.db_stats()
a_date = datetime(2018, 7, 1)
client.process_date(a_date)
start_date = datetime(2015, 6, 12) # June 12, 2015
end_date = datetime(2015, 6, 13) # June 13, 2015
client.process_date_range(start_date, end_date)
# Hopefully the output is different than when you ran this function above.
client.db_stats()
from sqlalchemy import create_engine
import pandas as pd
engine = create_engine("sqlite:///gameday.db")
data = pd.read_sql("SELECT * FROM pitches LIMIT 10;", engine, index_col="pitch_id")
# data = data.set_index("pitch_id")
data.head()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now specify a URI for your database. This example uses SQLite, a file-based database that can exist locally on your system. SQLite is probably the easiest type of database to run if you haven't used one extensively before.
Step2: Before we do anything, use a class method to print the number of rows in each table.
Step3: Scraping, Parsing, and Ingesting Data
Step4: To process a date range, specify a start and end date and use the process_date_range function. Data for all MLB games within the date range (including the start and end dates) will be processed.
Step5: That's it! You now have data in your database! Run db_stats() again to confirm.
Step6: Using Your Database
Step7: To access the database through Pandas, create an SQLAlchemy engine. This engine will be a parameter that we pass to Pandas in order to execute queries. See the Pandas documentation for more details.
Step8: Now use the Pandas read_sql function to execute an SQL statement. Here, we'll select 10 rows from the pitches table.
Step9: The data is now available as a Pandas DataFrame object. Execute the following cell to view it in a pretty table.
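Beyond previewing rows, any SQL aggregation works the same way through Pandas. The sketch below is illustrative only: the pitch_type column name is a guess, so check the actual schema of the pitches table before relying on it.
# Illustrative sketch with a hypothetical column name (pitch_type).
counts = pd.read_sql(
    "SELECT pitch_type, COUNT(*) AS n FROM pitches GROUP BY pitch_type ORDER BY n DESC;",
    engine)
counts.head()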
|
11,490
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
from snorkel import SnorkelSession
session = SnorkelSession()
from snorkel.models import candidate_subclass
ChemicalDisease = candidate_subclass('ChemicalDisease', ['chemical', 'disease'])
train_cands = session.query(ChemicalDisease).filter(ChemicalDisease.split == 0).all()
dev_cands = session.query(ChemicalDisease).filter(ChemicalDisease.split == 1).all()
import bz2
from six.moves.cPickle import load
with bz2.BZ2File('data/ctd.pkl.bz2', 'rb') as ctd_f:
ctd_unspecified, ctd_therapy, ctd_marker = load(ctd_f)
def cand_in_ctd_unspecified(c):
return 1 if c.get_cids() in ctd_unspecified else 0
def cand_in_ctd_therapy(c):
return 1 if c.get_cids() in ctd_therapy else 0
def cand_in_ctd_marker(c):
return 1 if c.get_cids() in ctd_marker else 0
def LF_in_ctd_unspecified(c):
return -1 * cand_in_ctd_unspecified(c)
def LF_in_ctd_therapy(c):
return -1 * cand_in_ctd_therapy(c)
def LF_in_ctd_marker(c):
return cand_in_ctd_marker(c)
import re
from snorkel.lf_helpers import (
get_tagged_text,
rule_regex_search_tagged_text,
rule_regex_search_btw_AB,
rule_regex_search_btw_BA,
rule_regex_search_before_A,
rule_regex_search_before_B,
)
# List to parenthetical
def ltp(x):
return '(' + '|'.join(x) + ')'
def LF_induce(c):
return 1 if re.search(r'{{A}}.{0,20}induc.{0,20}{{B}}', get_tagged_text(c), flags=re.I) else 0
causal_past = ['induced', 'caused', 'due']
def LF_d_induced_by_c(c):
return rule_regex_search_btw_BA(c, '.{0,50}' + ltp(causal_past) + '.{0,9}(by|to).{0,50}', 1)
def LF_d_induced_by_c_tight(c):
return rule_regex_search_btw_BA(c, '.{0,50}' + ltp(causal_past) + ' (by|to) ', 1)
def LF_induce_name(c):
return 1 if 'induc' in c.chemical.get_span().lower() else 0
causal = ['cause[sd]?', 'induce[sd]?', 'associated with']
def LF_c_cause_d(c):
return 1 if (
re.search(r'{{A}}.{0,50} ' + ltp(causal) + '.{0,50}{{B}}', get_tagged_text(c), re.I)
and not re.search('{{A}}.{0,50}(not|no).{0,20}' + ltp(causal) + '.{0,50}{{B}}', get_tagged_text(c), re.I)
) else 0
treat = ['treat', 'effective', 'prevent', 'resistant', 'slow', 'promise', 'therap']
def LF_d_treat_c(c):
return rule_regex_search_btw_BA(c, '.{0,50}' + ltp(treat) + '.{0,50}', -1)
def LF_c_treat_d(c):
return rule_regex_search_btw_AB(c, '.{0,50}' + ltp(treat) + '.{0,50}', -1)
def LF_treat_d(c):
return rule_regex_search_before_B(c, ltp(treat) + '.{0,50}', -1)
def LF_c_treat_d_wide(c):
return rule_regex_search_btw_AB(c, '.{0,200}' + ltp(treat) + '.{0,200}', -1)
def LF_c_d(c):
return 1 if ('{{A}} {{B}}' in get_tagged_text(c)) else 0
def LF_c_induced_d(c):
return 1 if (
('{{A}} {{B}}' in get_tagged_text(c)) and
(('-induc' in c[0].get_span().lower()) or ('-assoc' in c[0].get_span().lower()))
) else 0
def LF_improve_before_disease(c):
return rule_regex_search_before_B(c, 'improv.*', -1)
pat_terms = ['in a patient with ', 'in patients with']
def LF_in_patient_with(c):
return -1 if re.search(ltp(pat_terms) + '{{B}}', get_tagged_text(c), flags=re.I) else 0
uncertain = ['combin', 'possible', 'unlikely']
def LF_uncertain(c):
return rule_regex_search_before_A(c, ltp(uncertain) + '.*', -1)
def LF_induced_other(c):
return rule_regex_search_tagged_text(c, '{{A}}.{20,1000}-induced {{B}}', -1)
def LF_far_c_d(c):
return rule_regex_search_btw_AB(c, '.{100,5000}', -1)
def LF_far_d_c(c):
return rule_regex_search_btw_BA(c, '.{100,5000}', -1)
def LF_risk_d(c):
return rule_regex_search_before_B(c, 'risk of ', 1)
def LF_develop_d_following_c(c):
return 1 if re.search(r'develop.{0,25}{{B}}.{0,25}following.{0,25}{{A}}', get_tagged_text(c), flags=re.I) else 0
procedure, following = ['inject', 'administrat'], ['following']
def LF_d_following_c(c):
return 1 if re.search('{{B}}.{0,50}' + ltp(following) + '.{0,20}{{A}}.{0,50}' + ltp(procedure), get_tagged_text(c), flags=re.I) else 0
def LF_measure(c):
return -1 if re.search('measur.{0,75}{{A}}', get_tagged_text(c), flags=re.I) else 0
def LF_level(c):
return -1 if re.search('{{A}}.{0,25} level', get_tagged_text(c), flags=re.I) else 0
def LF_neg_d(c):
return -1 if re.search('(none|not|no) .{0,25}{{B}}', get_tagged_text(c), flags=re.I) else 0
WEAK_PHRASES = ['none', 'although', 'was carried out', 'was conducted',
'seems', 'suggests', 'risk', 'implicated',
'the aim', 'to (investigate|assess|study)']
WEAK_RGX = r'|'.join(WEAK_PHRASES)
def LF_weak_assertions(c):
return -1 if re.search(WEAK_RGX, get_tagged_text(c), flags=re.I) else 0
def LF_ctd_marker_c_d(c):
return LF_c_d(c) * cand_in_ctd_marker(c)
def LF_ctd_marker_induce(c):
return (LF_c_induced_d(c) or LF_d_induced_by_c_tight(c)) * cand_in_ctd_marker(c)
def LF_ctd_therapy_treat(c):
return LF_c_treat_d_wide(c) * cand_in_ctd_therapy(c)
def LF_ctd_unspecified_treat(c):
return LF_c_treat_d_wide(c) * cand_in_ctd_unspecified(c)
def LF_ctd_unspecified_induce(c):
return (LF_c_induced_d(c) or LF_d_induced_by_c_tight(c)) * cand_in_ctd_unspecified(c)
def LF_closer_chem(c):
# Get distance between chemical and disease
chem_start, chem_end = c.chemical.get_word_start(), c.chemical.get_word_end()
dis_start, dis_end = c.disease.get_word_start(), c.disease.get_word_end()
if dis_start < chem_start:
dist = chem_start - dis_end
else:
dist = dis_start - chem_end
# Try to find chemical closer than @dist/2 in either direction
sent = c.get_parent()
closest_other_chem = float('inf')
for i in range(dis_end, min(len(sent.words), dis_end + dist // 2)):
et, cid = sent.entity_types[i], sent.entity_cids[i]
if et == 'Chemical' and cid != sent.entity_cids[chem_start]:
return -1
for i in range(max(0, dis_start - dist // 2), dis_start):
et, cid = sent.entity_types[i], sent.entity_cids[i]
if et == 'Chemical' and cid != sent.entity_cids[chem_start]:
return -1
return 0
def LF_closer_dis(c):
# Get distance between chemical and disease
chem_start, chem_end = c.chemical.get_word_start(), c.chemical.get_word_end()
dis_start, dis_end = c.disease.get_word_start(), c.disease.get_word_end()
if dis_start < chem_start:
dist = chem_start - dis_end
else:
dist = dis_start - chem_end
# Try to find chemical disease than @dist/8 in either direction
sent = c.get_parent()
for i in range(chem_end, min(len(sent.words), chem_end + dist // 8)):
et, cid = sent.entity_types[i], sent.entity_cids[i]
if et == 'Disease' and cid != sent.entity_cids[dis_start]:
return -1
for i in range(max(0, chem_start - dist // 8), chem_start):
et, cid = sent.entity_types[i], sent.entity_cids[i]
if et == 'Disease' and cid != sent.entity_cids[dis_start]:
return -1
return 0
LFs = [
LF_c_cause_d,
LF_c_d,
LF_c_induced_d,
LF_c_treat_d,
LF_c_treat_d_wide,
LF_closer_chem,
LF_closer_dis,
LF_ctd_marker_c_d,
LF_ctd_marker_induce,
LF_ctd_therapy_treat,
LF_ctd_unspecified_treat,
LF_ctd_unspecified_induce,
LF_d_following_c,
LF_d_induced_by_c,
LF_d_induced_by_c_tight,
LF_d_treat_c,
LF_develop_d_following_c,
LF_far_c_d,
LF_far_d_c,
LF_improve_before_disease,
LF_in_ctd_therapy,
LF_in_ctd_marker,
LF_in_patient_with,
LF_induce,
LF_induce_name,
LF_induced_other,
LF_level,
LF_measure,
LF_neg_d,
LF_risk_d,
LF_treat_d,
LF_uncertain,
LF_weak_assertions,
]
from snorkel.annotations import LabelAnnotator
labeler = LabelAnnotator(lfs=LFs)
%time L_train = labeler.apply(split=0)
L_train
L_train.lf_stats(session)
from snorkel.learning.structure import DependencySelector
ds = DependencySelector()
deps = ds.select(L_train, threshold=0.1)
len(deps)
from snorkel.learning import GenerativeModel
gen_model = GenerativeModel(lf_propensity=True)
gen_model.train(
L_train, deps=deps, decay=0.95, step_size=0.1/L_train.shape[0], reg_param=0.0
)
train_marginals = gen_model.marginals(L_train)
import matplotlib.pyplot as plt
plt.hist(train_marginals, bins=20)
plt.show()
gen_model.learned_lf_stats()
from snorkel.annotations import save_marginals
save_marginals(session, L_train, train_marginals)
from load_external_annotations import load_external_labels
load_external_labels(session, ChemicalDisease, split=1, annotator='gold')
from snorkel.annotations import load_gold_labels
L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1)
L_gold_dev
L_dev = labeler.apply_existing(split=1)
_ = gen_model.error_analysis(session, L_dev, L_gold_dev)
L_dev.lf_stats(session, L_gold_dev, gen_model.learned_lf_stats()['Accuracy'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part III
Step2: Text pattern approaches
Step3: Composite LFs
Step4: Rules based on context hierarchy
Step5: Running the LFs on the training set
Step6: Part IV
Step7: Now we'll train the generative model, using the deps argument to account for the learned dependencies. We'll also model LF propensity here, unlike the intro tutorial. In addition to learning the accuracies of the LFs, this also learns their likelihood of labeling an example.
Step8: Checking performance against development set labels
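New labeling functions follow the same convention as the ones above: return 1 for a positive label, -1 for a negative label, and 0 to abstain. A hypothetical extra LF might look like the sketch below (the trigger phrases are illustrative, not tuned, and it is not part of the tutorial's LF set).
adverse_terms = ['adverse', 'toxicity', 'side effect']
def LF_adverse_event(c):
    return 1 if re.search('{{A}}.{0,50}' + ltp(adverse_terms) + '.{0,50}{{B}}',
                          get_tagged_text(c), flags=re.I) else 0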
|
11,491
|
<ASSISTANT_TASK:>
Python Code:
def log(func):
def wraper():
print("INFO: Starting {}".format(func.__name__))
func()
print("INFO: Finishing {}".format(func.__name__))
return wraper
@log
def run():
print("Running run...")
run()
from time import sleep, time
def timer(Cls):
def wraper():
s = time()
obj = Cls()
e = time()
print("Cost {:.3f}s to init.".format(e - s))
return obj
return wraper
@timer
class Obj:
def __init__(self):
print("Hello")
sleep(3)
print("Obj")
o = Obj()
class HTML(object):
    """Baking HTML Tags!"""
def __init__(self, tag="p"):
print("LOG: Baking Tag <{}>!".format(tag))
self.tag = tag
def __call__(self, func):
return lambda: "<{0}>{1}</{0}>".format(self.tag, func(), self.tag)
@HTML("html")
@HTML("body")
@HTML("div")
def body():
return "Hello"
print(body())
RULES = {}
def route(rule):
def decorator(hand):
RULES.update({rule: hand})
return hand
return decorator
@route("/")
def index():
print("Hello world!")
def home():
print("Welcome Home!")
home = route("/home")(home)
index()
home()
print(RULES)
@route("/login")
def login(user = "user", pwd = "pwd"):
print("DB.findOne({{{}, {}}})".format(user, pwd))
login("hail", "python")
def log(f):
def wraper(*args, **kargs):
print("INFO: Start Logging")
f(*args, **kargs)
print("INFO: Finish Logging")
return wraper
@log
def run(hello = "world"):
print("Hello {}".format(hello))
run("Python")
@HTML("body")
def body():
    """return body content"""
return "Hello, body!"
print(body.__name__)
print(body.__doc__)
from functools import update_wrapper
# functools.update_wrapper(wrapper, wrapped[, assigned][, updated])
class HTML(object):
    """Baking HTML Tags!"""
def __init__(self, tag="p"):
print("LOG: Baking Tag <{}>!".format(tag))
self.tag = tag
def __call__(self, func):
wraper = lambda: "<{0}>{1}</{0}>".format(self.tag, func(), self.tag)
update_wrapper(wraper, func)
return wraper
@HTML("body")
def body():
    """return body content!"""
return "Hello, body!"
print(body.__name__)
print(body.__doc__)
from functools import update_wrapper, partial
def my_wraps(wrapped):
return partial(update_wrapper, wrapped=wrapped)
def log(func):
@my_wraps(func)
def wraper():
print("INFO: Starting {}".format(func.__name__))
func()
print("INFO: Finishing {}".format(func.__name__))
return wraper
@log
def run():
    """Docs' of run"""
print("Running run...")
print(run.__name__)
print(run.__doc__)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Decorating classes
Step3: A class as a decorator
Step4: Passing arguments
Step5: Whether arguments need special handling when passing them to the decorated function depends on how the decorator works. If, as in the example above, the decorator does not execute the decorated function and simply returns it unchanged, nothing extra is needed (just treat the function like an ordinary value):
Step6: If the function needs to be executed inside the decorator, a small change is required:
Step8: functools
Step12: Because body = HTML("body")(body), and HTML("body").__call__() returns a lambda, body has been replaced by that lambda. Although both are callable functions, attributes of the originally defined body such as __doc__/__name__/__module__ have been replaced (in this example __module__ did not change because everything lives in the same file). To solve this, Python provides the functools standard library, which includes the update_wrapper and wraps methods (source). update_wrapper copies the original function's metadata onto the function returned by the decorator:
Step14: Interestingly, the way update_wrapper is used looks a lot like a decorator itself, so functools.wraps uses functools.partial (remember partial application from functional programming!) to turn it into one:
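In practice you rarely need to build my_wraps yourself: functools.wraps is exactly this partial application of update_wrapper, so the log decorator can be written equivalently as the sketch below.
from functools import wraps
def log(func):
    @wraps(func)                      # copies __name__, __doc__, etc. onto wraper
    def wraper():
        print("INFO: Starting {}".format(func.__name__))
        func()
        print("INFO: Finishing {}".format(func.__name__))
    return wraper
@log
def run():
    """Docs' of run"""
    print("Running run...")
print(run.__name__)   # run
print(run.__doc__)    # Docs' of run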
|
11,492
|
<ASSISTANT_TASK:>
Python Code:
%%writefile mapper.py
#!/usr/bin/python
import sys
import re
count = 0
WORD_RE = re.compile(r"[\w']+")
filename = sys.argv[2]
findword = sys.argv[1]
with open (filename, "r") as myfile:
#Please insert your code
for line in myfile.readlines():
words = WORD_RE.findall(line.lower())
for w in words:
if w==findword.lower():
count+=1
break
print count
!chmod a+x mapper.py
%%writefile reducer.py
#!/usr/bin/python
import sys
sum = 0
for line in sys.stdin:
#Please insert your code
sum += int(line)
print sum
!chmod a+x reducer.py
%%writefile pGrepCount.sh
ORIGINAL_FILE=$1
FIND_WORD=$2
BLOCK_SIZE=$3
CHUNK_FILE_PREFIX=$ORIGINAL_FILE.split
SORTED_CHUNK_FILES=$CHUNK_FILE_PREFIX*.sorted
usage()
{
echo "Parallel grep"
echo "usage: pGrepCount filename word chunksize"
echo "greps for [$FIND_WORD] in [$ORIGINAL_FILE] and counts the number of matching lines"
echo "Note: the file will be split into chunks of up to $BLOCK_SIZE bytes each"
echo "and each chunk will be grep-counted in parallel"
}
#Splitting $ORIGINAL_FILE INTO CHUNKS
split -b $BLOCK_SIZE $ORIGINAL_FILE $CHUNK_FILE_PREFIX
#DISTRIBUTE
for file in $CHUNK_FILE_PREFIX*
do
#grep -i $FIND_WORD $file|wc -l >$file.intermediateCount &
./mapper.py $FIND_WORD $file >$file.intermediateCount &
done
wait
#MERGING INTERMEDIATE COUNTS: TAKE THE FIRST COLUMN AND TOTAL
#numOfInstances=$(cat *.intermediateCount | cut -f 1 | paste -sd+ - |bc)
numOfInstances=$(cat *.intermediateCount | ./reducer.py)
echo "found [$numOfInstances] [$FIND_WORD] in the file [$ORIGINAL_FILE]"
!chmod a+x pGrepCount.sh
!./pGrepCount.sh License.txt COPYRIGHT 4k
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reduce
Step2: Write script to file
Step3: Run the file
Step4: Usage
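The same map/reduce idea can also be expressed in pure Python with multiprocessing instead of background shell jobs. This is only an illustrative sketch, not part of the original exercise.
import re
from multiprocessing import Pool
WORD_RE = re.compile(r"[\w']+")
def count_in_chunk(args):
    word, lines = args
    # count lines that contain the word, like mapper.py does
    return sum(1 for line in lines if word in WORD_RE.findall(line.lower()))
def parallel_grep_count(filename, word, n_chunks=4):
    with open(filename) as f:
        lines = f.readlines()
    size = max(1, len(lines) // n_chunks)
    chunks = [(word.lower(), lines[i:i + size]) for i in range(0, len(lines), size)]
    with Pool(n_chunks) as pool:
        return sum(pool.map(count_in_chunk, chunks))
print(parallel_grep_count("License.txt", "COPYRIGHT"))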
|
11,493
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from helpers import load_data
# load dataset
x, y = load_data()
def build_k_indices(y, k_fold, seed):
    """build k indices for k-fold."""
num_row = y.shape[0]
interval = int(num_row / k_fold)
np.random.seed(seed)
indices = np.random.permutation(num_row)
k_indices = [indices[k * interval: (k + 1) * interval] for k in range(k_fold)]
return np.array(k_indices)
from costs import compute_mse
from ridge_regression import ridge_regression
from build_polynomial import build_poly
def cross_validation(y, x, k_indices, k, lambda_, degree):
    """return the loss of ridge regression."""
# get k'th subgroup in test, others in train
te_indice = k_indices[k]
tr_indice = k_indices[~(np.arange(k_indices.shape[0]) == k)]
tr_indice = tr_indice.reshape(-1)
y_te = y[te_indice]
y_tr = y[tr_indice]
x_te = x[te_indice]
x_tr = x[tr_indice]
# form data with polynomial degree
tx_tr = build_poly(x_tr, degree)
tx_te = build_poly(x_te, degree)
# ridge regression
w = ridge_regression(y_tr, tx_tr, lambda_)
# calculate the loss for train and test data
loss_tr = np.sqrt(2 * compute_mse(y_tr, tx_tr, w))
loss_te = np.sqrt(2 * compute_mse(y_te, tx_te, w))
return loss_tr, loss_te,w
from plots import cross_validation_visualization
def cross_validation_demo():
seed = 12
degree = 7
k_fold = 4
lambdas = np.logspace(-4, 0, 30)
# split data in k fold
k_indices = build_k_indices(y, k_fold, seed)
# define lists to store the loss of training data and test data
rmse_tr = []
rmse_te = []
# cross validation
for lambda_ in lambdas:
rmse_tr_tmp = []
rmse_te_tmp = []
for k in range(k_fold):
loss_tr, loss_te,_ = cross_validation(y, x, k_indices, k, lambda_, degree)
rmse_tr_tmp.append(loss_tr)
rmse_te_tmp.append(loss_te)
rmse_tr.append(np.mean(rmse_tr_tmp))
rmse_te.append(np.mean(rmse_te_tmp))
cross_validation_visualization(lambdas, rmse_tr, rmse_te)
cross_validation_demo()
def best_degree_selection(degrees, k_fold, lambdas, seed = 1):
# split data in k fold
k_indices = build_k_indices(y, k_fold, seed)
#for each degree, we compute the best lambdas and the associated rmse
best_lambdas = []
best_rmses = []
#vary degree
for degree in degrees:
# cross validation
rmse_te = []
for lambda_ in lambdas:
rmse_te_tmp = []
for k in range(k_fold):
_, loss_te,_ = cross_validation(y, x, k_indices, k, lambda_, degree)
rmse_te_tmp.append(loss_te)
rmse_te.append(np.mean(rmse_te_tmp))
ind_lambda_opt = np.argmin(rmse_te)
best_lambdas.append(lambdas[ind_lambda_opt])
best_rmses.append(rmse_te[ind_lambda_opt])
ind_best_degree = np.argmin(best_rmses)
return degrees[ind_best_degree]
best_degree_selection(np.arange(2,11), 4, np.logspace(-4, 0, 30))
def extended_cross_validation_demo():
seed = 1
degree = 7
k_fold = 4
lambdas = np.logspace(-4, 0, 20)
# split data in k fold
k_indices = build_k_indices(y, k_fold, seed)
# define lists to store the loss of training data and test data
rmse_tr = []
rmse_te = []
variances = []
# cross validation
for lambda_ in lambdas:
rmse_tr_tmp = []
rmse_te_tmp = []
for k in range(k_fold):
loss_tr, loss_te,_ = cross_validation(y, x, k_indices, k, lambda_, degree)
rmse_tr_tmp.append(loss_tr)
rmse_te_tmp.append(loss_te)
rmse_tr.append(rmse_tr_tmp)
rmse_te.append(rmse_te_tmp)
variances.append(loss_te)
plt.boxplot(rmse_te)
extended_cross_validation_demo()
from least_squares import least_squares
from split_data import split_data
from plots import bias_variance_decomposition_visualization
def bias_variance_demo():
    """The entry."""
# define parameters
seeds = range(100)
num_data = 10000
ratio_train = 0.005
degrees = range(1, 10)
# define list to store the variable
rmse_tr = np.empty((len(seeds), len(degrees)))
rmse_te = np.empty((len(seeds), len(degrees)))
for index_seed, seed in enumerate(seeds):
np.random.seed(seed)
x = np.linspace(0.1, 2 * np.pi, num_data)
y = np.sin(x) + 0.3 * np.random.randn(num_data).T
# split data with a specific seed
x_tr, x_te, y_tr, y_te = split_data(x, y, ratio_train, seed)
for index_degree, degree in enumerate(degrees):
# form polynomial data
tx_tr = build_poly(x_tr, degree)
tx_te = build_poly(x_te, degree)
# least square
w = least_squares(y_tr, tx_tr)
# calculate the rmse for train and test
rmse_tr[index_seed, index_degree] = np.sqrt(2 * compute_mse(y_tr, tx_tr, w))
rmse_te[index_seed, index_degree] = np.sqrt(2 * compute_mse(y_te, tx_te, w))
bias_variance_decomposition_visualization(degrees, rmse_tr, rmse_te)
bias_variance_demo()
def test_error_distribution_demo():
    """The entry."""
# define parameters
seeds = range(50)
num_data = 10000
ratio_train = 0.005
degrees = range(1, 10)
# define list to store the variable
rmse_te = np.empty((len(seeds), len(degrees)))
for index_seed, seed in enumerate(seeds):
np.random.seed(seed)
x = np.linspace(0.1, 2 * np.pi, num_data)
y = np.sin(x) + 0.3 * np.random.randn(num_data).T
# split data with a specific seed
x_tr, x_te, y_tr, y_te = split_data(x, y, ratio_train, seed)
for index_degree, degree in enumerate(degrees):
# form polynomial data
tx_tr = build_poly(x_tr, degree)
tx_te = build_poly(x_te, degree)
# least square
w = least_squares(y_tr, tx_tr)
# calculate the rmse for test
rmse_te[index_seed, index_degree] = np.sqrt(2 * compute_mse(y_te, tx_te, w))
plt.boxplot(rmse_te)
plt.title("error distribution")
plt.xlabel("degrees")
plt.ylabel("test_error")
test_error_distribution_demo()
def test_error_distribution_demo():
    """The entry."""
# define parameters
seeds = range(50)
num_data = 10000
lambdas = np.logspace(-4, 0, 30)
ratio_train = 0.005
degrees = range(1, 10)
# define list to store the variable
rmse_te = np.empty((len(seeds), len(degrees)))
for index_seed, seed in enumerate(seeds):
np.random.seed(seed)
x = np.linspace(0.1, 2 * np.pi, num_data)
y = np.sin(x) + 0.3 * np.random.randn(num_data).T
# split data with a specific seed
x_tr, x_te, y_tr, y_te = split_data(x, y, ratio_train, seed)
k_fold = 4
k_indices = build_k_indices(y_tr, k_fold, 1)
for index_degree, degree in enumerate(degrees):
# form polynomial data
tx_tr = build_poly(x_tr, degree)
tx_te = build_poly(x_te, degree)
# selecting best lambda
rmse_te_tmp = []
ws = []
for lambda_ in lambdas:
rmse_te_tmp2 = []
ws_tmp = []
for k in range(k_fold):
_, loss_te, w = cross_validation(y_tr, x_tr, k_indices, k, lambda_, degree)
ws_tmp.append(w)
rmse_te_tmp2.append(loss_te)
rmse_te_tmp.append(np.mean(rmse_te_tmp2))
ws.append(ws_tmp)
ind_lambda_opt = np.argmin(rmse_te_tmp)
ws_opt = ws[ind_lambda_opt]
lambda_opt = lambdas[ind_lambda_opt]
# calculate the rmse for test
rmse_te[index_seed, index_degree] = np.mean([np.sqrt(2 * compute_mse(y_te, tx_te, w)) for w in ws_opt])
plt.boxplot(rmse_te)
plt.title("error distribution")
plt.xlabel("degrees")
plt.ylabel("test_error")
test_error_distribution_demo()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Cross-Validation and Bias-Variance decomposition
Step3: Selection of the best model among various degrees
Step4: Box-plot of the RMSE using the cross-validation
Step6: Bias-Variance Decomposition
Step8: Distribution of the test error
Step10: For Ridge regression
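To make the k-fold bookkeeping used throughout these demos concrete, here is a tiny illustrative run of build_k_indices on toy data (a sketch; it assumes numpy is imported as np, as in the code above).
toy_y = np.arange(12)
toy_k_indices = build_k_indices(toy_y, 4, 1)   # 4 folds of 3 indices each
print(toy_k_indices.shape)                     # (4, 3)
print(toy_k_indices)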
|
11,494
|
<ASSISTANT_TASK:>
Python Code:
from Bio import motifs
from Bio.Seq import Seq
instances = [Seq("TACAA"),
Seq("TACGC"),
Seq("TACAC"),
Seq("TACCC"),
Seq("AACCC"),
Seq("AATGC"),
Seq("AATGC")]
m = motifs.create(instances)
print(m)
len(m)
print(m.counts)
m.counts['A']
m.counts['T', 0]
m.counts['T', 2]
m.counts['T', 3]
m.counts[:, 3]
m.alphabet
m.alphabet.letters
sorted(m.alphabet.letters)
m.counts['A',:]
m.counts[0,:]
m.consensus
m.anticonsensus
m.degenerate_consensus
r = m.reverse_complement()
r.consensus
r.degenerate_consensus
print(r)
m.weblogo("mymotif.png")
from Bio import motifs
with open("Arnt.sites") as handle:
arnt = motifs.read(handle, "sites")
print(arnt.instances[:3])
for instance in arnt.instances:
print(instance)
print(arnt.counts)
with open("SRF.pfm") as handle:
srf = motifs.read(handle, "pfm")
print(srf.counts)
print(srf.instances)
print(arnt.counts.consensus)
print(srf.counts.consensus)
fh = open("jaspar_motifs.txt")
for m in motifs.parse(fh, "jaspar"):
print(m)
from Bio.motifs.jaspar.db import JASPAR5
JASPAR_DB_HOST = "<hostname>"
JASPAR_DB_NAME = "<db_name>"
JASPAR_DB_USER = "<user>"
JASPAR_DB_PASS = "<password>"
jdb = JASPAR5(
host=JASPAR_DB_HOST,
name=JASPAR_DB_NAME,
user=JASPAR_DB_USER,
password=JASPAR_DB_PASS
)
arnt = jdb.fetch_motif_by_id("MA0004")
print(arnt)
motifs = jdb.fetch_motifs_by_name("Arnt")
print(motifs[0])
motifs = jdb.fetch_motifs(
collection = 'CORE',
tax_group = ['vertebrates', 'insects'],
tf_class = 'Winged Helix-Turn-Helix',
tf_family = ['Forkhead', 'Ets'],
min_ic = 12
)
for motif in motifs:
pass # do something with the motif
motif.pseudocounts = motifs.jaspar.calculate_pseudocounts(motif)
# abs_score = (pssm.max - pssm.min) * rel_score + pssm.min
# rel_score = (abs_score - pssm.min) / (pssm.max - pssm.min)
from Bio.Alphabet import IUPAC
test_seq = Seq("TAAGCGTGCACGCGCAACACGTGCATTA", IUPAC.unambiguous_dna)
arnt.pseudocounts = motifs.jaspar.calculate_pseudocounts(arnt)
pssm = arnt.pssm
max_score = pssm.max
min_score = pssm.min
abs_score_threshold = (max_score - min_score) * 0.8 + min_score
for position, score in pssm.search(test_seq, threshold=abs_score_threshold):
    rel_score = (score - min_score) / (max_score - min_score)
    print("Position %d: score = %5.3f, rel. score = %5.3f" % (
          position, score, rel_score))
handle = open("meme.dna.oops.txt")
record = motifs.parse(handle, "meme")
handle.close()
record.version
record.datafile
record.command
record.alphabet
record.sequences
len(record)
motif = record[0]
print(motif.consensus)
print(motif.degenerate_consensus)
motif.num_occurrences
motif.length
evalue = motif.evalue
print("%3.1g" % evalue)
motif.name
motif = record['Motif 1']
len(motif.instances)
motif.instances[0]
motif.instances[0].motif_name
motif.instances[0].sequence_name
motif.instances[0].start
motif.instances[0].strand
motif.instances[0].length
pvalue = motif.instances[0].pvalue
print("%5.3g" % pvalue)
handle = open("transfac.dat")
record = motifs.parse(handle, "TRANSFAC")
handle.close()
record.version
motif = record[0]
motif.degenerate_consensus # Using the Bio.motifs.Motif method
motif['ID'] # Using motif as a dictionary
print(record)
text = str(record)
handle = open("mytransfacfile.dat", 'w')
handle.write(text)
handle.close()
print(arnt.format("pfm"))
print(arnt.format("jaspar"))
print(m.format("transfac"))
two_motifs = [arnt, srf]
print(motifs.write(two_motifs, 'transfac'))
two_motifs = [arnt, mef2a]
print(motifs.write(two_motifs, "jaspar"))
pwm = m.counts.normalize(pseudocounts=0.5)
print(pwm)
pwm = m.counts.normalize(pseudocounts={'A':0.6, 'C': 0.4, 'G': 0.4, 'T': 0.6})
print(pwm)
pwm.consensus
pwm.anticonsensus
pwm.degenerate_consensus
m.degenerate_consensus
rpwm = pwm.reverse_complement()
print(rpwm)
pssm = pwm.log_odds()
print(pssm)
background = {'A':0.3,'C':0.2,'G':0.2,'T':0.3}
pssm = pwm.log_odds(background)
print(pssm)
print("%4.2f" % pssm.max)
print("%4.2f" % pssm.min)
mean = pssm.mean(background)
std = pssm.std(background)
print("mean = %0.2f, standard deviation = %0.2f" % (mean, std))
test_seq=Seq("TACACTGCATTACAACCCAAGCATTA", m.alphabet)
len(test_seq)
for pos, seq in m.instances.search(test_seq):
print("%i %s" % (pos, seq))
for pos, seq in r.instances.search(test_seq):
print("%i %s" % (pos, seq))
for position, score in pssm.search(test_seq, threshold=3.0):
print("Position %d: score = %5.3f" % (position, score))
pssm.calculate(test_seq)
rpssm = pssm.reverse_complement()
rpssm.calculate(test_seq)
distribution = pssm.distribution(background=background, precision=10**4)
threshold = distribution.threshold_fpr(0.01)
print("%5.3f" % threshold)
threshold = distribution.threshold_fnr(0.1)
print("%5.3f" % threshold)
threshold = distribution.threshold_balanced(1000)
print("%5.3f" % threshold)
threshold = distribution.threshold_patser()
print("%5.3f" % threshold)
threshold = distribution.threshold_fpr(0.01)
print("%5.3f" % threshold)
for position, score in pssm.search(test_seq, threshold=threshold):
print("Position %d: score = %5.3f" % (position, score))
from Bio import motifs
with open("Arnt.sites") as handle:
motif = motifs.read(handle, 'sites')
print(motif.counts)
print(motif.pwm)
print(motif.pssm)
for letter in "ACGT":
print("%s: %4.2f" % (letter, motif.pseudocounts[letter]))
motif.pseudocounts = 3.0
for letter in "ACGT":
print("%s: %4.2f" % (letter, motif.pseudocounts[letter]))
print(motif.pwm)
print(motif.pssm)
for letter in "ACGT":
print("%s: %4.2f" % (letter, motif.background[letter]))
motif.background = {'A': 0.2, 'C': 0.3, 'G': 0.3, 'T': 0.2}
print(motif.pssm)
motif.background = None
for letter in "ACGT":
print("%s: %4.2f" % (letter, motif.background[letter]))
motif.background = 0.8
for letter in "ACGT":
print("%s: %4.2f" % (letter, motif.background[letter]))
print("%f" % motif.pssm.mean(motif.background))
print("%f" % motif.pssm.std(motif.background))
distribution = motif.pssm.distribution(background=motif.background)
threshold = distribution.threshold_fpr(0.01)
print("%f" % threshold)
pssm = motif.pssm
with open("REB1.pfm") as handle:
m_reb1 = motifs.read(handle, "pfm")
m_reb1.consensus
print(m_reb1.counts)
m_reb1.pseudocounts = {'A':0.6, 'C': 0.4, 'G': 0.4, 'T': 0.6}
m_reb1.background = {'A':0.3,'C':0.2,'G':0.2,'T':0.3}
pssm_reb1 = m_reb1.pssm
print(pssm_reb1)
distance, offset = pssm.dist_pearson(pssm_reb1)
print("distance = %5.3g" % distance)
print(offset)
from Bio import motifs
with open("meme.out") as handle:
motifsM = motifs.parse(handle, "meme")
motifsM
motifsM[0].consensus
motifsM[0].instances[0].sequence_name
motifsM[0].instances[0].start
motifsM[0].instances[0].strand
motifsM[0].instances[0].pvalue
from Bio import motifs
with open("alignace.out") as handle:
motifsA = motifs.parse(handle, "alignace")
motifsA[0].consensus
motifsM[0].reverse_complement().consensus
command="/opt/bin/AlignACE"
input_file="test.fa"
from Bio.motifs.applications import AlignAceCommandline
cmd = AlignAceCommandline(cmd=command, input=input_file, gcback=0.6, numcols=10)
stdout, stderr= cmd()
motifs = motifs.parse(stdout, "alignace")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: and we can start creating our first motif objects. We can either create
Step2: then we can create a Motif object as follows
Step3: The instances are saved in an attribute m.instances, which is
Step4: The length of the motif is defined as the sequence length, which should
Step5: The Motif object has an attribute .counts containing the counts of
Step6: You can access these counts as a dictionary
Step7: but you can also think of it as a 2D array with the nucleotide as the
Step8: You can also directly access columns of the counts matrix
Step9: Instead of the nucleotide itself, you can also use the index of the
Step10: The motif has an associated consensus sequence, defined as the sequence
Step11: as well as an anticonsensus sequence, corresponding to the smallest
Step12: You can also ask for a degenerate consensus sequence, in which ambiguous
Step13: Here, W and R follow the IUPAC nucleotide ambiguity codes
Step14: The reverse complement and the degenerate consensus sequence are only
Step15: We should get our logo saved as a PNG in the specified file.
Step16: The instances from which this motif was created is stored in the
Step17: The counts matrix of this motif is automatically calculated from the
Step18: This format does not store any meta information.
Step19: As this motif was created from the counts matrix directly, it has no
Step20: We can now ask for the consensus sequence of these two motifs
Step21: As with the instances file, no meta information is stored in this
Step22: Note that printing a JASPAR motif yields both the counts data and the
Step23: Now we can fetch a single motif by its unique JASPAR ID with the
Step24: Printing the motif reveals that the JASPAR SQL database stores much more
Step25: We can also fetch motifs by name. The name must be an exact match
Step26: The fetch_motifs method allows you to fetch motifs which match a
Step27: Compatibility with Perl TFBS modules {#compatibility-with-perl-tfbs-modules .unnumbered}
Step28: Note that it is possible for the counts matrix to have an unequal
Step29: To convert the absolute score of an instance back to a relative
Step30: For example, using the Arnt motif before, let’s search a sequence
Step31: MEME
Step32: The motifs.parse command reads the complete file directly, so you can
Step33: The record is an object of the Bio.motifs.meme.Record class. The class
Step34: In addition to these generic motif attributes, each motif also stores
Step35: In addition to using an index into the record, as we did above, you can
Step36: Each motif has an attribute .instances with the sequence instances in
Step37: MAST {#mast .unnumbered}
Step38: The overall version number, if available, is stored as record.version
Step39: Each motif in record is in instance of the Bio.motifs.transfac.Motif
Step40: TRANSFAC files are typically much more elaborate than this example,
Step41: You can export the motifs in the TRANSFAC format by capturing this
Step42: Writing motifs
Step43: Similarly, we can use format to write the motif in the JASPAR jaspar
Step44: To write the motif in a TRANSFAC-like matrix format, use
Step45: To write out multiple motifs, you can use motifs.write. This function
Step46: Or, to write multiple motifs in the jaspar format
Step47: Position-Weight Matrices
Step48: Alternatively, pseudocounts can be a dictionary specifying the
Step49: The position-weight matrix has its own methods to calculate the
Step50: Note that due to the pseudocounts, the degenerate consensus sequence
Step51: The reverse complement of the position-weight matrix can be calculated
Step52: Position-Specific Scoring Matrices
Step53: Here we can see positive values for symbols more frequent in the motif
Step54: The maximum and minimum score obtainable from the PSSM are stored in the
Step55: The mean and standard deviation of the PSSM scores with respect to a
Step56: A uniform background is used if background is not specified. The mean
Step57: Searching for exact matches
Step58: We can do the same with the reverse complement (to find instances on the
Step59: Searching for matches using the PSSM score
Step60: The negative positions refer to instances of the motif found on the
Step61: In general, this is the fastest way to calculate PSSM scores. The scores
Step62: Selecting a score threshold
Step63: The distribution object can be used to determine a number of different
Step64: or the false-negative rate (probability of “not finding” an instance
Step65: or a threshold (approximately) satisfying some relation between the
Step66: or a threshold satisfying (roughly) the equality between the
Step67: For example, in case of our motif, you can get the threshold giving you
Step68: Each motif object has an associated Position-Specific Scoring Matrix
Step69: The negative infinities appear here because the corresponding entry in
Step70: If you change the .pseudocounts attribute, the position-frequency
Step71: You can also set the .pseudocounts to a dictionary over the four
Step72: Again, if you modify the background distribution, the position-specific
Step73: Setting motif.background to None resets it to a uniform
Step74: If you set motif.background equal to a single value, it will be
Step75: Note that you can now calculate the mean of the PSSM scores over the
Step76: as well as its standard deviation
Step77: and its distribution
Step78: Note that the position-weight matrix and the position-specific scoring
Step79: Comparing motifs {#sec
Step80: To make the motifs comparable, we choose the same values for the
Step81: We’ll compare these motifs using the Pearson correlation. Since we want
Step82: This means that the best PCC between motif m and m_reb1 is obtained
Step83: Besides the most wanted list of motifs, the result object contains more
Step84: AlignAce {#sec
Step85: Again, your motifs behave as they should
Step86: In fact you can even see, that AlignAce found a very similar motif as
Step87: If you have AlignAce installed on the same machine, you can also run it
Step88: Since AlignAce prints all of its output to standard output, you can get
|
11,495
|
<ASSISTANT_TASK:>
Python Code:
def fancy_calc(a, b, c):
x1 = basic_calc(a,b)
x2 = basic_calc(b,c)
x3 = basic_calc(c,a)
z = x1 * x2 * x3
return z
def basic_calc(x, y):
result = x + y
return result
x = 1
y = 2
z = 3
result = fancy_calc(x, y, z)
print x
print z
print x1
print result
# run this first!
def getMax(someList):
someList.sort()
x = someList[-1]
return x
scores = [9, 5, 7, 1, 8]
maxScore = getMax(scores)
print maxScore
print someList
print scores
# run this first!
list1 = [1, 2, 3, 4]
list2 = list1
list2[0] = "HELLO"
print list2
print list1
# for lists
list1 = [1, 2, 3, 4]
list2 = list(list1) #make a true copy of the list
list2[0] = "HELLO"
print list2
print list1
# for dictionaries
dict1 = {'A':1, 'B':2, 'C':3}
dict2 = dict1.copy() #make a true copy of the dict
dict2['A'] = 99
print dict2
print dict1
##### testing gc
gcCont = gc("ATGGGCCCAATGG")
if type(gcCont) != float:
print ">> Problem with gc: answer is not a float, it is a %s." % type(gcCont)
elif gcCont != 0.62:
print ">> Problem with gc: incorrect answer (should be 0.62; your code gave", gcCont, ")"
else:
print "gc: Passed."
##### testing reverse_compl
revCompl = reverse_compl("GGGGTCGATGCAAATTCAAA")
if type(revCompl) != str:
print ">> Problem with reverse_compl: answer is not a string, it is a %s." % type(revCompl)
elif revCompl != "TTTGAATTTGCATCGACCCC":
print ">> Problem with reverse_compl: answer (%s) does not match expected (%s)" % (revCompl, "TTTGAATTTGCATCGACCCC")
else:
print "reverse_compl: Passed."
##### testing read_fasta
try:
ins = open("horrible.fasta", 'r')
except IOError:
print ">> Can not test read_fasta because horrible.fasta is missing. Please add it to the directory with this notebook."
else:
seqDict = read_fasta("horrible.fasta")
if type(seqDict) != dict:
print ">> Problem with read_fasta: answer is not a dictionary, it is a %s." % type(seqDict)
elif len(seqDict) != 22:
print ">> Problem with read_fasta: # of keys in dictionary (%s) does not match expected (%s)" % (len(seqDict), 22)
else:
print "read_fasta: Passed."
##### testing rand_seq
randSeq1 = rand_seq(23)
randSeq2 = rand_seq(23)
if type(randSeq1) != str:
print ">> Problem with rand_seq: answer is not a string, it is a %s." % type(randSeq1)
elif len(randSeq1) != 23:
print ">> Problem with rand_seq: answer length (%s) does not match expected (%s)." % (len(randSeq1), 23)
elif randSeq1 == randSeq2:
print ">> Problem with rand_seq: generated the same sequence twice (%s) -- are you sure this is random?" % randSeq1
else:
print "rand_seq: Passed."
##### testing shuffle_nt
shuffSeq = shuffle_nt("AAAAAAGTTTCCC")
if type(shuffSeq) != str:
print ">> Problem with shuffle_nt: answer is not a string, it is a %s." % type(shuffSeq)
elif len(shuffSeq) != 13:
print ">> Problem with shuffle_nt: answer length (%s) does not match expected (%s)." % (len(shuffSeq), 12)
elif shuffSeq == "AAAAAAGTTTCCC":
print ">> Problem with shuffle_nt: answer is exactly the same as the input. Are you sure this is shuffling?"
elif shuffSeq.count('A') != 6:
print ">> Problem with shuffle_nt: answer doesn't contain the same # of each nt as the input."
else:
print "shuff_seq: Passed."
def get_kmers(k):
kmers = []
# your code here
return kmers
<END_TASK>
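One possible way to fill in the get_kmers stub above, offered only as an illustrative sketch rather than the assignment's answer key. It assumes the intended behaviour is to return all 4**k DNA k-mers, and it keeps the notebook's Python 2 style.
import itertools

def get_kmers(k):
    # Return every possible DNA k-mer (there are 4**k of them).
    kmers = ["".join(p) for p in itertools.product("ACGT", repeat=k)]
    return kmers

print get_kmers(2)     # ['AA', 'AC', 'AG', 'AT', 'CA', ...]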
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: (A) List the line numbers of the code above in the order that they will be executed. If a line will be executed more than once, list it each time.
Step2: Your guess
Step3: Your guess
Step4: Your guess
Step5: Your guess
Step6: Your guess
Step7: Your guess
Step8: Your guess
Step9: Your guess
Step10: Your guess
Step11: And here's a way for dictionaries
Step12: 3. Writing custom functions (8pts)
Step13: 4. Using your functions (5pts)
|
11,496
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
resolutions = [150, 360, 600, 1200, 2400, 4800] # dpi
inch2cm = 2.54 # cm/inch
nbrOfSubpixels = 32
# Calulation Pixel Pinch
pixel_pitch = np.empty(shape=[len(resolutions)], dtype=np.float64) # um
for i in range(len(resolutions)):
pixel_pitch[i] = (inch2cm/resolutions[i])*10000
# Calculation Subpixel Pinch
sub_pixel_pitch = np.empty(shape=[len(resolutions)], dtype=np.float64) # um
for i in range(len(resolutions)):
sub_pixel_pitch[i] = pixel_pitch[i]/nbrOfSubpixels
for i in range(len(resolutions)):
print("Resolution: {:4} dpi Pixel Pitch: {} um Sub Pixel Pitch: {} um".format(resolutions[i], pixel_pitch[i], sub_pixel_pitch[i]))
import numpy as np
import pylab as pl
pl.clf()
frequencies = [50e6, 100e6, 150e6, 200e6] # MHz
speed = np.linspace(0,10,50) # 50 x points from 0m/s to 10m/s
for freq in frequencies:
delta_x_pitch = (speed*100000) * 1/freq # um/s / s
#error = numpy.mod(pixel_pitch[1],delta_x_pitch)
# Plot graph
pl.plot(speed,delta_x_pitch,label=str(int(freq/1000000))+" MHz")
# Place legend, Axis and Title
pl.legend(loc='best')
pl.xlabel("Speed [m/s]")
pl.ylabel("$\Delta$ x_pitch or Error [um]")
pl.title("Pixel Pitch Errors @ {}dpi".format(resolutions[1]))
pixel_pitch = 0.1692 # mm
inch2cm = 2.54 # cm/inch
resolution = (inch2cm/pixel_pitch)*10
print("Resolution: {:4} dpi Pixel Pitch: {} um".format(resolution, pixel_pitch))
ph_resolution = 360 # dpi
inch2cm = 2.54 # cm/inch
ph_overlapping = 0.196 # cm
pixel_overlap = ph_resolution / inch2cm * ph_overlapping
print("{} dots".format(pixel_overlap))
resolution_job = 720 # dpi
resolution_ph = 360 # dpi
f_ph = 30e3 # Hz
speed_percent = 100 # %
#-------------------------------------
inch2mm = 25.4 # mm/inch
#-------------------------------------
pixel_pinch = inch2mm/resolution_ph # (inch/mm)/(dot/inch) = mm/dot
adjusted_frequency = f_ph / 100 * speed_percent # Hz
speed = pixel_pinch * adjusted_frequency * resolution_ph/resolution_job # (mm/dot) * 1/s * (dot/inch)/(dot/inch) = mm/s
print("Pixel Pinch = {} mm".format(pixel_pinch))
print("Adjusted Frequency = {}% of {} Hz = {} Hz".format(speed_percent, f_ph, adjusted_frequency))
print("Speed = {} mm/s".format(speed))
print("Speed = {} m/s".format(speed/1000))
import math
def printing_dimensions(section_width, section_depth, section_numbers, number_of_channels, ph_dpi):
    """printing_dimensions calculates the printing width and length of a printhead with the available memory
    :param section_width : (bytes^=pixel/channel) Can be found in CoE Memory Organization 8040:1
    :param section_depth : (bytes => 1 byte=2 px) Can be found in CoE Memory Organization 8040:2
    :param section_numbers : Can be found in CoE Memory Organization 8040:3
    :param number_of_channels : Equals number of colors per board
    :param ph_dpi : Dpi of the printheads
    :return print_width : height (cross print axis) in mm of the printable width per printhead
    :return print_length : length (print axis) in mm of the printable job
    """
# Constants
inch2m = 0.0254 # m/inch
pixel_byte = 2 # pixel/byte
# Calc Pixel Pitch
pixel_pitch = inch2m/ph_dpi # in m
# Calc Print width = Y Axis = Crossprint Axis
section_width_pixel = section_width # this is already in bytes (1 byte = 1 px)
print_width = section_width_pixel*pixel_pitch
# Calc Print length = X Axis = Print Axis
section_depth_pixel = section_depth * pixel_byte
print_length = section_depth_pixel*section_numbers*pixel_pitch
# Calc memory size
memory_size_byte = section_width*section_depth*section_numbers*number_of_channels
memory_size_pixel = memory_size_byte * pixel_byte
# Print
print("Memory Size : {} MBytes = {} MPixels".format((memory_size_byte/1000/1000), (memory_size_pixel/1000/1000)))
print("PixelPitch @ {:3}dpi : {} um".format(ph_dpi,pixel_pitch*1000*1000))
print("Image width : {} mm".format(print_width*1000))
print("Print length : {} m".format(print_length))
# m_number_of_sections=MEMORY_SIZE/(Memory_Section::get_width()*Memory_Section::get_depth()*anheads);
# m_memory_depth=m_number_of_sections*Memory_Section::get_depth();
return print_width, print_length
def biggest_multipage_job(print_length, max_image_size):
    """calculates the max number of multipages in job with a given substratesize (max_image_size).
    :param print_length : max length of the job according to printing_dimensions() return value
    :param max_image_size : max size of one image
    :return nbr_of_multipages : max number of multipage in the entire job
    """
nbr_of_multipages = print_length*1000/max_image_size
print("Max Number of Multipages : {} pages = {} pages".format(nbr_of_multipages, int(math.floor(nbr_of_multipages))))
return nbr_of_multipages
section_width = 3072 # bytes = pixel
section_depth = 16384 # bytes (1 bytes = 1 px)
section_numbers = 21 # Nbr per Channel (Color)
number_of_channels = 1 # Number of printheads per board
ph_dpi = 600 # dpi
print_width, print_length = printing_dimensions(section_width, section_depth, section_numbers, number_of_channels, ph_dpi)
# For 4 GB of RAM
print_length = print_length *4
print("Print length (4GB) : {} m".format(print_length))
section_width = 1536 # bytes ^= pixel per channel
section_depth = 8192 # bytes => 1bytes = 2 pixel (by 4bpp)
section_numbers = 21 # Nbr per Channel (Color)
number_of_channels = 4 # Number of printheads per board
ph_dpi = 360 # dpi
subrate_length = (281.94, 125.56) # (x, y) in mm (DART_24PX_SHORT_CMYK)
print_width, print_length = printing_dimensions(section_width, section_depth, section_numbers, number_of_channels, ph_dpi)
nbr_of_multipages = biggest_multipage_job(print_length,subrate_length[0] )
section_width = 1536 # bytes ^= pixel per channel
section_depth = 8192 # bytes => 1bytes = 2 pixel (by 4bpp)
section_numbers = 21 # Nbr per Channel (Color)
number_of_channels = 4 # Number of printheads per board
ph_dpi = 360 # dpi
subrate_length = (115.5822, 80) # (x, y) in mm
print_width, print_length = printing_dimensions(section_width, section_depth, section_numbers, number_of_channels, ph_dpi)
nbr_of_multipages = biggest_multipage_job(print_length,subrate_length[0] )
def calc_fire_time(speed=10, dpi=1200): # speed = [m/min] dpi = [dpi]
inch2m = 0.0254
pixel_pitch = inch2m/dpi # m
fire_time = float(pixel_pitch) / (float(speed)/float(60))
print("Speed = {} m/min".format(speed))
print("Resolution = {} dpi".format(dpi))
print("Fire Time = {} s = {} kHz".format(fire_time, 1/fire_time/1000))
fire_time_act = 0.000131
dif = fire_time_act - fire_time
dist = float(speed)/float(60)*dif
print("Distance = {} ".format(dist))
print("")
calc_fire_time(speed=10,dpi=1200)
calc_fire_time(speed=10,dpi=600)
calc_fire_time(speed=10,dpi=150)
calc_fire_time(speed=50,dpi=600)
calc_fire_time(speed=50,dpi=150)
<END_TASK>
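A small self-contained helper (an illustrative sketch, not part of the original notebook) collecting the two relations used repeatedly above: pixel pitch from resolution, and the nozzle fire frequency needed for a given substrate speed. The printed numbers are only sanity checks against the cells above.
INCH_TO_MM = 25.4  # mm per inch

def pixel_pitch_mm(dpi):
    # Distance between dot centres for a given resolution.
    return INCH_TO_MM / dpi

def fire_frequency_hz(speed_m_per_min, dpi):
    # Fire rate needed to place one drop per pixel at the given substrate speed.
    speed_mm_per_s = speed_m_per_min * 1000.0 / 60.0
    return speed_mm_per_s / pixel_pitch_mm(dpi)

print(round(pixel_pitch_mm(600) * 1000, 2), "um pixel pitch at 600 dpi")            # ~42.33 um
print(round(fire_frequency_hz(10, 1200) / 1000.0, 2), "kHz at 10 m/min, 1200 dpi")  # ~7.87 kHz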
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Error in Pixel Pitch $\Delta x_{Pitch}$ because of substrate speed and clock frequency
Step2: Resolution
Step3: Stitching
Step4: Pixel distance
Step7: Printing Dimensions
Step8: Techma 4
Step9: Digiround
Step10: RDA Multihead
Step11: Fire Time
|
11,497
|
<ASSISTANT_TASK:>
Python Code:
print(data['CITY_NAME'].value_counts())
print(data.pivot_table(index=['TDQ'], columns=['CITY_NAME']))
print(data.pivot_table(index=['TDQ'], columns=['BLOCK']))
data['P'].hist()
(np.log(data['P'])).hist()
data['A'].hist()
plt.figure(figsize=(20,8))
plt.subplot(4, 2, 1)
data['P'].hist()
plt.title(u"成約価格")
plt.subplot(4, 2, 2)
data['S'].hist()
plt.title("専有面積")
plt.subplot(4, 2, 3)
data['L'].hist()
plt.title("土地面積")
plt.subplot(4, 2, 4)
data['R'].hist()
plt.title("部屋数")
plt.subplot(4, 2, 5)
data['A'].hist()
plt.title("建築後年数")
plt.subplot(4, 2, 6)
data['RW'].hist()
plt.title("前面道路幅員")
plt.subplot(4, 2, 7)
data['TS'].hist()
plt.title("最寄駅までの距離")
plt.subplot(4, 2, 8)
data['TT'].hist()
plt.title(u"東京駅までの時間")
plt.figure(figsize=(20,8))
data['TDQ'].value_counts().plot(kind='bar')
plt.figure(figsize=(20,8))
data['CITY_NAME'].value_counts().plot(kind='bar') #市区町村別の件数
vars = ['P', 'S', 'L', 'R', 'RW', 'A', 'TS', 'TT', 'WOOD', 'SOUTH', 'CMD', 'IDD', 'FAR', 'X', 'Y']
eq = fml_build(vars)
y, X = dmatrices(eq, data=data, return_type='dataframe')
CITY_NAME = pd.get_dummies(data['CITY_NAME'])
TDQ = pd.get_dummies(data['TDQ'])
X = pd.concat((X, CITY_NAME, TDQ), axis=1)
datas = pd.concat((y, X), axis=1)
datas = datas[datas['12世田谷区'] == 1][0:5000]
datas.head()
vars = ['S', 'L', 'R', 'RW', 'A', 'TS', 'TT', 'WOOD', 'SOUTH', 'CMD', 'IDD', 'FAR']
#vars += vars + list(TDQ.columns)
class CAR(Chain):
def __init__(self, unit1, unit2, unit3, col_num):
self.unit1 = unit1
self.unit2 = unit2
self.unit3 = unit3
super(CAR, self).__init__(
l1 = L.Linear(col_num, unit1),
l2 = L.Linear(self.unit1, self.unit1),
l3 = L.Linear(self.unit1, self.unit2),
l4 = L.Linear(self.unit2, self.unit3),
l5 = L.Linear(self.unit3, self.unit3),
l6 = L.Linear(self.unit3, 1),
)
def __call__(self, x, y):
fv = self.fwd(x, y)
loss = F.mean_squared_error(fv, y)
return loss
def fwd(self, x, y):
h1 = F.sigmoid(self.l1(x))
h2 = F.sigmoid(self.l2(h1))
h3 = F.sigmoid(self.l3(h2))
h4 = F.sigmoid(self.l4(h3))
h5 = F.sigmoid(self.l5(h4))
h6 = self.l6(h5)
return h6
class OLS_DLmodel(object):
def __init__(self, data, vars, bs=200, n=1000):
self.vars = vars
eq = fml_build(vars)
        y, X = dmatrices(eq, data=data, return_type='dataframe')  # use the data argument, not the global datas
self.y_in = y[:-n]
self.X_in = X[:-n]
self.y_ex = y[-n:]
self.X_ex = X[-n:]
self.logy_in = np.log(self.y_in)
self.logy_ex = np.log(self.y_ex)
self.bs = bs
def OLS(self):
X_in = self.X_in
X_in = X_in.drop(['X', 'Y'], axis=1)
model = sm.OLS(self.logy_in, X_in, intercept=False)
self.reg = model.fit()
print(self.reg.summary())
df = (pd.DataFrame(self.reg.params)).T
df['X'] = 0
df['Y'] = 0
self.reg.params = pd.Series((df.T)[0])
def directDL(self, ite=100, bs=200, add=False):
logy_in = np.array(self.logy_in, dtype='float32')
X_in = np.array(self.X_in, dtype='float32')
y = Variable(logy_in)
x = Variable(X_in)
num, col_num = X_in.shape
if add is False:
self.model1 = CAR(15, 15, 5, col_num)
optimizer = optimizers.SGD()
optimizer.setup(self.model1)
for j in range(ite):
sffindx = np.random.permutation(num)
for i in range(0, num, bs):
x = Variable(X_in[sffindx[i:(i+bs) if (i+bs) < num else num]])
y = Variable(logy_in[sffindx[i:(i+bs) if (i+bs) < num else num]])
self.model1.zerograds()
loss = self.model1(x, y)
loss.backward()
optimizer.update()
if j % 1000 == 0:
loss_val = loss.data
print('epoch:', j)
print('train mean loss={}'.format(loss_val))
print(' - - - - - - - - - ')
y_ex = np.array(self.y_ex, dtype='float32').reshape(len(self.y_ex))
X_ex = np.array(self.X_ex, dtype='float32')
X_ex = Variable(X_ex)
logy_pred = self.model1.fwd(X_ex, X_ex).data
y_pred = np.exp(logy_pred)
error = y_ex - y_pred.reshape(len(y_pred),)
plt.hist(error[:])
def DL(self, ite=100, bs=200, add=False):
y_in = np.array(self.y_in, dtype='float32').reshape(len(self.y_in))
resid = y_in - np.exp(self.reg.predict())
resid = np.array(resid, dtype='float32').reshape(len(resid),1)
X_in = np.array(self.X_in, dtype='float32')
y = Variable(resid)
x = Variable(X_in)
num, col_num = X_in.shape
if add is False:
self.model1 = CAR(10, 10, 3, col_num)
optimizer = optimizers.Adam()
optimizer.setup(self.model1)
for j in range(ite):
sffindx = np.random.permutation(num)
for i in range(0, num, bs):
x = Variable(X_in[sffindx[i:(i+bs) if (i+bs) < num else num]])
y = Variable(resid[sffindx[i:(i+bs) if (i+bs) < num else num]])
self.model1.zerograds()
loss = self.model1(x, y)
loss.backward()
optimizer.update()
if j % 1000 == 0:
loss_val = loss.data
print('epoch:', j)
print('train mean loss={}'.format(loss_val))
print(' - - - - - - - - - ')
def predict(self):
y_ex = np.array(self.y_ex, dtype='float32').reshape(len(self.y_ex))
X_ex = np.array(self.X_ex, dtype='float32')
X_ex = Variable(X_ex)
resid_pred = self.model1.fwd(X_ex, X_ex).data
print(resid_pred[:10])
self.logy_pred = np.matrix(self.X_ex)*np.matrix(self.reg.params).T
self.error1 = np.array(y_ex - np.exp(self.logy_pred.reshape(len(self.logy_pred),)))[0]
self.pred = np.exp(self.logy_pred) + resid_pred
self.error2 = np.array(y_ex - self.pred.reshape(len(self.pred),))[0]
def compare(self):
plt.hist(self.error1)
plt.hist(self.error2)
vars = ['P', 'S', 'L', 'R', 'RW', 'A', 'TS', 'TT', 'WOOD', 'SOUTH', 'CMD', 'IDD', 'FAR', 'X', 'Y']
#vars += vars + list(TDQ.columns)
model = OLS_DLmodel(datas, vars)
model.OLS()
model.DL(ite=10, bs=200)
model.predict()
model.DL(ite=20000, bs=200, add=True)
model.DL(ite=10000, bs=200, add=True)
model.predict()
model.compare()
print(np.mean(model.error1))
print(np.mean(model.error2))
print(np.mean(np.abs(model.error1)))
print(np.mean(np.abs(model.error2)))
print(max(np.abs(model.error1)))
print(max(np.abs(model.error2)))
print(np.var(model.error1))
print(np.var(model.error2))
fig = plt.figure()
ax = fig.add_subplot(111)
errors = [model.error1, model.error2]
bp = ax.boxplot(errors)
plt.grid()
plt.ylim([-5000,5000])
plt.title('分布の箱ひげ図')
plt.show()
X = model.X_ex['X'].values
Y = model.X_ex['Y'].values
e = model.error2
import numpy
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.axes3d import Axes3D
fig=plt.figure()
ax=Axes3D(fig)
ax.scatter3D(X, Y, e)
plt.show()
t
plt.hist(Xs)
import numpy as np
from scipy.stats import gaussian_kde
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
Xs = np.linspace(min(X),max(X),10)
Ys = np.linspace(min(Y),max(Y),10)
error = model.error1
Xgrid, Ygrid = np.meshgrid(Xs, Ys)
Z = LL(X, Y, Xs, Ys, error)
fig = plt.figure()
ax = Axes3D(fig)
ax.plot_wireframe(Xgrid,Ygrid,Z) #<---ここでplot
plt.show()
fig = plt.figure()
ax = Axes3D(fig)
ax.set_zlim(-100, 500)
ax.plot_surface(Xgrid,Ygrid,Z) #<---ここでplot
plt.show()
h = 10
(0.9375*(1-((X-1)/h)**2)**2)*(0.9375*(1-((Y-2)/h)**2)**2)
def LL(X, Y, Xs, Ys, error):
n = len(X)
h = 0.1
    # error = model.error2  # commented out so the 'error' argument passed to LL() is actually used
mean_of_error = np.zeros((len(Xs), len(Ys)))
for i in range(len(Xs)):
for j in range(len(Ys)):
u1 = ((X-Xs[i])/h)**2
u2 = ((Y-Ys[j])/h)**2
k = (0.9375*(1-((X-Xs[i])/h)**2)**2)*(0.9375*(1-((Y-Ys[j])/h)**2)**2)
K = np.diag(k)
indep = np.matrix(np.array([np.ones(n), X - Xs[i], Y-Ys[j]]).T)
dep = np.matrix(np.array([error]).T)
gls_model = sm.GLS(dep, indep, sigma=K)
gls_results = gls_model.fit()
mean_of_error[i, j] = gls_results.params[0]
return mean_of_error
h = 200
u1 = ((X-30)/h)**2
u1
u1[u1 < 0] = 0
for x in Xs[:2]:
print(x)
mean_of_error
plt.plot(gaussian_kde(Y, 0.1)(Ys))
N = 5
means = np.random.randn(N,2) * 10 + np.array([100, 200])
stdev = np.random.randn(N,2) * 10 + 30
count = np.int64(np.int64(np.random.randn(N,2) * 10000 + 50000))
a = [
np.hstack([
np.random.randn(count[i,j]) * stdev[i,j] + means[i,j]
for j in range(2)])
for i in range(N)]
for x in Xs:
for y in Ys:
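# NOTE: the two definitions below ("loclinearc" and "nonzmat") are R reference code
# for a local-linear regression with a biweight kernel, apparently kept as a porting
# reference for the Python LL() function above -- they are not valid Python and will
# not run as-is.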
def loclinearc(points,x,y,h):
n = len(points[,1])
const = matrix(1, nrow=length(x), ncol=1)
bhat = matrix(0, nrow=3, ncol=n)
b1 = matrix(0, n, n)
predict = matrix(0, n, 1)
for (j in 1:n) {
for (i in 1:n) {
a <- -.5*sign( abs( (points[i, 1]*const - x[,1])/h ) -1 ) + .5
#get the right data points, (K(x) ~=0)
b <- -.5*sign( abs( (points[j, 2]*const - x[,2])/h ) -1 ) + .5
x1andy <- nonzmat(cbind((x[,1]*a*b), (y*a*b)))
x2andy <- nonzmat(cbind((x[,2]*a*b), (y*a*b)))
ztheta1 <- x1andy[,1]
ztheta2 <- x2andy[,1]
yuse <- x1andy[,2]
q1 <- (ztheta1 - points[i,1]);
q2 <- (ztheta2 - points[j,2]);
nt1 <- ( (ztheta1- points[i,1])/h )
nt2 <- ( (ztheta2- points[j,2])/h )
#q2 = ((ztheta - points(i,1)).^2)/2;
weights <- diag(c((15/16)%*%( 1-(nt1^2))^2*((15/16)%*%( 1-(nt2^2))^2)))
#Biweight Kernel
tempp3 <- cbind(matrix(1, nrow=length(ztheta1), ncol=1), q1, q2)
bhat[,i] <- solve(t(tempp3)%*%weights%*%tempp3)%*%t(tempp3)%*%weights%*%yuse
}
b1[,j] <- t(bhat[1,])
}
return(b1)
}
nonzmat(x):
#This function computes nonzeros of a MATRIX when certain ROWS of the
#matrix are zero. This function returns a matrix with the
#zero rows deleted
m, k = x.shape
xtemp = matrix(np.zeros(m, k))
for (i in 1:m) {
xtemp[i,] <- ifelse(x[i,] == matrix(0, nrow=1, ncol=k), 99999*matrix(1, nrow=1, ncol=k), x[i,])
}
xtemp <- xtemp - 99999
if (length(which(xtemp !=0,arr.ind = T)) == 0) {
a <- matrix(-99999, nrow=1, ncol=k)
} else {
a <- xtemp[which(xtemp !=0,arr.ind = T)]
}
a <- a + 99999
n1 <- length(a)
rowlen <- n1/k
collen <- k
out = matrix(a, nrow=rowlen, ncol=collen)
return(out)
}
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.tri as mtri
#============
# First plot
#============
# Plot the surface. The triangles in parameter space determine which x, y, z
# points are connected by an edge.
fig = plt.figure()
ax = fig.add_subplot(1, 2, 1, projection='3d')
ax.plot_trisurf(X, Y, e)
ax.set_zlim(-1, 1)
plt.show()
<END_TASK>
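A simpler way to get the smoothed spatial error map this cell is working toward -- offered as a sketch under the assumption that a kernel-weighted average of the errors is sufficient -- is a Nadaraya-Watson estimator with a Gaussian kernel. Variable names mirror the notebook (X, Y are coordinates, e the error vector); the bandwidth is arbitrary and would need tuning.
import numpy as np

def nadaraya_watson_2d(x, y, e, grid_x, grid_y, bandwidth=0.05):
    # Kernel-weighted average of the errors e at every grid point (gx, gy).
    smoothed = np.zeros((len(grid_x), len(grid_y)))
    for i, gx in enumerate(grid_x):
        for j, gy in enumerate(grid_y):
            d2 = ((x - gx) ** 2 + (y - gy) ** 2) / bandwidth ** 2
            w = np.exp(-0.5 * d2)
            smoothed[i, j] = np.sum(w * e) / np.sum(w) if np.sum(w) > 0 else np.nan
    return smoothed

# Usage with the notebook's variables (assumed already defined):
# Xs = np.linspace(X.min(), X.max(), 10)
# Ys = np.linspace(Y.min(), Y.max(), 10)
# Z = nadaraya_watson_2d(X, Y, e, Xs, Ys, bandwidth=0.05)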
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 成約時点別×市区町村別の件数を集計
Step2: 成約時点別×地域ブロック別の件数を集計
Step3: Histogram
Step4: 価格(自然対数)
Step5: 建築後年数
Step6: Plot
Step7: Main Analysis
Step8: 青がOLSの誤差、緑がOLSと深層学習を組み合わせた誤差。
|
11,498
|
<ASSISTANT_TASK:>
Python Code:
import mcpi.minecraft as minecraft
import mcpi.block as block
from time import sleep
world = minecraft.Minecraft.create()
height = 5
material = block.COBBLESTONE
[x,y,z] = world.player.getPos()
[x, y, z]
# This variable will track the current level being created inside the loop
level = 1
# Execute the loop, building from the bottom up
while level <= height:
print "Level " + str(level) # Print the levels as we build them
sleep(0.5)
world.setBlocks( x - level, y + height - level, z - level,
x + level, y + height - level, z + level, material )
level = level + 0.5
# Put the player on top of the pyramid!
world.player.setPos( x, y + height, z )
world.postToChat("Done building pyramid!")
<END_TASK>
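For comparison, a variant of the pyramid loop (a sketch only, assuming the variables world, x, y, z, height and material from the cell above are already set): stepping by a whole block per iteration builds each layer exactly once and keeps the coordinates integral.
level = 1
while level <= height:
    world.setBlocks(x - level, y + height - level, z - level,
                    x + level, y + height - level, z + level, material)
    level = level + 1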
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Connect to the Minecraft server, store the connection in a variable named world
Step2: Next, set some variables to customize the pyramid. You might come back here later to change these values to build bigger pyramids, smaller pyramids, or ones made out of different stuff.
Step3: Find out where the player is in the world. Store the current position in variables named x, y, and z
Step4: Now, get busy building the pyramid.
Step5: Finally, move the player to best seat in the house
|
11,499
|
<ASSISTANT_TASK:>
Python Code:
import random
def sample_experiment():
### BEGIN SOLUTION
### END SOLUTION
def sample_experiment():
### BEGIN SOLUTION
### END SOLUTION
def sample_experiment():
### BEGIN SOLUTION
### END SOLUTION
def sample_experiment():
### BEGIN SOLUTION
### END SOLUTION
import itertools
pets = ("cat", "dog", "fish", "lizard", "hamster")
### BEGIN SOLUTION
### END SOLUTION
import scipy.special
### BEGIN SOLUTION
### END SOLUTION
import sympy as sym
x = sym.Symbol("x")
c1 = sym.Symbol("c1")
### BEGIN SOLUTION
### END SOLUTION
### BEGIN SOLUTION
### END SOLUTION
### BEGIN SOLUTION
### END SOLUTION
c = sym.Symbol("c")
### BEGIN SOLUTION
### END SOLUTION
### BEGIN SOLUTION
### END SOLUTION
### BEGIN SOLUTION
### END SOLUTION
<END_TASK>
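Purely as an illustration of what the graded stubs above might contain (not the official answer key): the permutation count can be checked two ways, and the symbolic part is sketched with a made-up example family for f(x), since f itself is not shown in this row.
import itertools
import scipy.special
import sympy as sym

pets = ("cat", "dog", "fish", "lizard", "hamster")

# Permutations of the 5 pets taken 4 at a time: direct enumeration...
direct_number_of_permutations = len(list(itertools.permutations(pets, 4)))
# ...and the closed-form count 5*4*3*2 = 120.
number_of_permutations = scipy.special.perm(5, 4, exact=True)

# Symbolic sketch: assume f(x) = c1*sin(x) + cos(x) (hypothetical), impose f'(0) = 0.
x = sym.Symbol("x")
c1 = sym.Symbol("c1")
f = c1 * sym.sin(x) + sym.cos(x)
equation = sym.Eq(f.diff(x).subs(x, 0), 0)           # reduces to c1 = 0
c1_value = sym.solve(equation, c1)[0]
print(sym.integrate(f.subs(c1, c1_value), (x, 0, 5 * sym.pi)))   # integral of cos(x) over [0, 5*pi] = 0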
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: b. $1/2$
Step2: c. $3/4$
Step3: d. $1$
Step4: Question 2
Step5: b. Create a variable direct_number_of_permutations that gives the number of permutations of pets of size 4 by direct computation.
Step6: Question 3
Step7: b. Create a variable equation that has value the equation $f'(0)=0$.
Step8: c. Using the solution to that equation, output the value of $\int_{0}^{5\pi}f(x)dx$.
Step9: Question 4
Step10: b. Given that $c=2$ output $\frac{df}{dx}$ where
Step11: c. Given that $c=2$ output $\int f(x)dx$
|