| Unnamed: 0 (int64, 0 to 15.9k) | cleaned_code (string, lengths 67 to 124k, ⌀) | cleaned_prompt (string, lengths 168 to 30.3k, ⌀) |
|---|---|---|
4,500
|
<ASSISTANT_TASK:>
Python Code:
from scipy.special import k0 # bessel function K0
import numpy as np
def dWpp(r, z, a, b, D):
Returns additional drawdown caused by partial penetration
Solution by Hantush. See Kruseman and De Ridder (1994), p159.
The real extra drawdown is Q/(2 pi kD) * dW
Parameters:
----------
r : distance from the well
z : distance from the bottom of the aquifer
a : distance of top of screen to bottom of aquifer
b : distance of bottom of screen to bottom of aquifer
Returns:
--------
dW : ndarray size (Nz, Nr) containing all combinations of z and r
Extra drawdown in terms of the well function, which still has to be multiplied
by Q/(2 pi kD).
tol = 1e-3
pi = np.pi
d = a - b
r = np.array(r.ravel(), dtype = float)[np.newaxis,:]
z = np.array(z.ravel(), dtype = float)[:,np.newaxis]
alpha= pi * a/D
beta = pi * b/D
zeta = pi * z/D
rho = pi * r/D
delta= pi * d/D
maxiter = 500
Dw = (z * r) * 0.
dw0 = np.ones(Dw.shape)
for n in np.arange(1.,maxiter+1.):
dw1 = dw0
dw0 = (np.sin(n * alpha) - np.sin(n * beta)) * np.cos(n * zeta) * k0( n * rho) / n
Dw += dw0
#print(np.sum(np.abs(dw1 + dw0)))
if np.sum(np.abs(dw1 + dw0))< tol:
break
print("Iterations: {}".format(n))
return (2./delta) * Dw
import matplotlib.pylab as plt
kD = 600.
S = 0.001
D = 50.
a = 40.
b = 30.
r0 = 0.3
r = np.logspace(np.log10(r0), 2., 41)
z = np.linspace(0., D, 51)
dw = dWpp(r, z, a, b, D)
fig =plt.figure()
ax = fig.add_subplot(111)
ax.set(xlabel='r [m]', ylabel='z [m]', title='Additional drawdown due to partial screen penetration')
ax.contour(r, z, dw, 50)
plt.show()
from scipy.special import expi
def W(u): return -expi(-u) # Theis well function as was done earlier
u = lambda r, t : r**2 * S / (4 * kD * t) # the argument u
Q = 1200
t = 1.0
s = Q / (4 * np.pi * kD) * W(u(r,t)) + Q/(2 * np.pi * kD) * dWpp(r, z, a, b, D)
fig1 =plt.figure()
ax1 = fig1.add_subplot(111)
ax1.set(xlabel='r [m]', ylabel='z [m]', title='Total drawdown including partial penetration')
ax1.contour(r, z, s, 50)
plt.show()
import pdb
r = np.array([r0]) # distance to the well
z = np.linspace(b, a, 21) # elevation
ds = Q/(2 * np.pi * kD) * dWpp(r, z, a, b, D) # partial penetration
print('mean of dpp along the screen: {:.2f} m'.format(np.mean(ds)))
print('max of dpp along the screen : {:.2f} m'.format(np.max(ds)))
print('dpp along for 21 points the screen:')
print(' z dpp(z)')
for zz, dds in zip(z, ds):
print("{:10.2f} {:10.3f}".format(zz, dds[0]))
print("Drawdown by fully penetrating schreen {:.2f} m".format(Q / (4 * np.pi * kD) * W(u(r0,t))))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Partial penetration
Step2: Let's apply this solution and see if it works.
Step3: As was said, it is difficult to see whether this result is correct. Therefore we'll add the drawdown due to a fully penetrating well to see whether the contours of the drawdown near the screen concentrate around it and whether the flow above and below the partially penetrating screen is indeed zero.
Step4: We now superimpose the Theis drawdown and the extra drawdown due to partial penetration.
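For reference (an addition, not part of the original notebook text), the quantities computed above can be written out explicitly. The series implemented in dWpp, following Hantush as cited in its docstring, and the superimposed total drawdown are

$$\Delta W(r,z) = \frac{2D}{\pi(a-b)} \sum_{n=1}^{\infty} \frac{1}{n}\left[\sin\!\left(\tfrac{n\pi a}{D}\right)-\sin\!\left(\tfrac{n\pi b}{D}\right)\right]\cos\!\left(\tfrac{n\pi z}{D}\right) K_0\!\left(\tfrac{n\pi r}{D}\right)$$

$$s(r,z,t) = \frac{Q}{4\pi kD}\,W(u) + \frac{Q}{2\pi kD}\,\Delta W(r,z), \qquad u = \frac{r^2 S}{4 kD\, t},$$

which is exactly what the cell computing s evaluates: the Theis term plus the partial-penetration correction.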
Step5: This result shows that the contours indeed concentrate around the screen and only around the screen. The figure also shows that the head contours are perpendicular to the impervious top and bottom of the aquifer, as well as to the center of the well below and above the partially penetrating screen, as expected.
|
4,501
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
from scipy.special import expit
import matplotlib.pyplot as plt
# define our hypothesis (vectorized!)
def f(x):
return expit(np.matrix([0, 1, -.5,.5])*x);
x_min = -5; x_max = 5
y_min = -5; y_max = 5
x1 = np.linspace(x_min, x_max, 200)
y1 = np.linspace(y_min, y_max , 200)
x,y = np.meshgrid(x1, y1)
#
# evaluate it in a vectorized way (and reshape into a matrix)
#
# make a 3 x N matrix of the sample points
data = np.vstack((
np.ones(x.size), # add the bias term
x.ravel(), # make the matrix into a vector
y.ravel(),
y.ravel()**2)) # add a quadratic term for fun
z = f(data)
z = z.reshape(x.shape)
# show the function value in the background
cs = plt.imshow(z,
extent=(x_min,x_max,y_max,y_min), # define limits of grid, note reversed y axis
cmap=plt.cm.jet)
plt.clim(0,1) # defines the value to assign the min/max color
# draw the line on top
levels = np.array([.5])
cs_line = plt.contour(x,y,z,levels)
# add a color bar
CB = plt.colorbar(cs)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create the domain for the plot
Step2: Make the plots
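As an added note (not from the original notebook): with the feature vector $\varphi = (1, x, y, y^2)$ and weights $\theta = (0, 1, -0.5, 0.5)$ used in f above, the 0.5 level drawn by plt.contour is the decision boundary of the logistic model, i.e. the set where

$$\theta^\top \varphi = x - 0.5\,y + 0.5\,y^2 = 0,$$

a parabola in the $(x, y)$ plane, because expit(t) = 1/(1+e^{-t}) crosses 0.5 exactly at t = 0.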
|
4,502
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
try:
import seaborn
except ImportError:
pass
pd.options.display.max_rows = 10
df = pd.DataFrame({'key':['A','B','C','A','B','C','A','B','C'],
'data': [0, 5, 10, 5, 10, 15, 10, 15, 20]})
df
df.groupby('key').aggregate(np.sum) # 'sum'
df.groupby('key').sum()
df = pd.read_csv("data/titanic.csv")
df.head()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Some 'theory'
Step2: Using the filtering and reductions operations we have seen in the previous notebooks, we could do something like
Step3: And many more methods are available.
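The filter-and-reduce approach mentioned in Step 2 is not spelled out in this row, so here is a minimal sketch of what it might look like for the small frame defined above (an illustrative addition, equivalent to df.groupby('key')['data'].sum()):

import pandas as pd
df = pd.DataFrame({'key': ['A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'C'],
                   'data': [0, 5, 10, 5, 10, 15, 10, 15, 20]})
sums = {}
for key in df['key'].unique():                       # loop over the distinct keys
    sums[key] = df[df['key'] == key]['data'].sum()   # filter the rows, then reduce
print(sums)  # {'A': 15, 'B': 30, 'C': 45}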
|
4,503
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo('KE952yueVLA', width=720, height=400, loop=1, autoplay=0)
from IPython.display import YouTubeVideo
YouTubeVideo('lfNVv0A8QvI', width=720, height=400, loop=1, autoplay=0)
from IPython.display import YouTubeVideo
YouTubeVideo('hxdPdKbqm_I', width=720, height=400, loop=1, autoplay=0)
# Rectified linear neuron
%pylab inline
import numpy
import nengo
n = nengo.neurons.RectifiedLinear()
J = numpy.linspace(-1,1,100)
plot(J, n.rates(J, gain=10, bias=-5))
xlabel('J (current)')
ylabel('$a$ (Hz)');
#assume this has been run
#%pylab inline
# Leaky integrate and fire
import numpy
import nengo
n = nengo.neurons.LIFRate(tau_rc=0.02, tau_ref=0.002) #n is a Nengo LIF neuron, these are defaults
J = numpy.linspace(-1,10,100)
plot(J, n.rates(J, gain=1, bias=-3))
xlabel('J (current)')
ylabel('$a$ (Hz)');
#assume this has been run
#%pylab inline
import numpy
import nengo
n = nengo.neurons.LIFRate() #n is a Nengo LIF neuron
x = numpy.linspace(-100,0,100)
plot(x, n.rates(x, gain=1, bias=50), 'b') # x*1+50
plot(x, n.rates(x, gain=0.1, bias=10), 'r') # x*0.1+10
plot(x, n.rates(x, gain=0.5, bias=5), 'g') # x*0.5+5
plot(x, n.rates(x, gain=0.1, bias=4), 'c') # x*0.1+4
xlabel('x')
ylabel('a');
#assume this has been run
#%pylab inline
import numpy
import nengo
n = nengo.neurons.LIFRate()
e = numpy.array([1.0, 1.0])
e = e/numpy.linalg.norm(e)
a = numpy.linspace(-1,1,50)
b = numpy.linspace(-1,1,50)
X,Y = numpy.meshgrid(a, b)
from mpl_toolkits.mplot3d.axes3d import Axes3D
fig = figure()
ax = fig.add_subplot(1, 1, 1, projection='3d')
p = ax.plot_surface(X, Y, n.rates((X*e[0]+Y*e[1]), gain=1, bias=1.5),
linewidth=0, cstride=1, rstride=1, cmap=pylab.cm.jet)
import nengo
import numpy
n = nengo.neurons.LIFRate()
theta = numpy.linspace(0, 2*numpy.pi, 100)
x = numpy.array([numpy.cos(theta), numpy.sin(theta)])
plot(x[0],x[1])
axis('equal')
e = numpy.array([1.0, 1.0])
e = e/numpy.linalg.norm(e)
plot([0,e[0]], [0,e[1]],'r')
gain = 1
bias = 2.5
figure()
plot(theta, n.rates(numpy.dot(x.T, e), gain=gain, bias=bias))
plot([numpy.arctan2(e[1],e[0])],0,'rv')
xlabel('angle')
ylabel('firing rate')
xlim(0, 2*numpy.pi);
import numpy
import nengo
from nengo.utils.ensemble import tuning_curves
from nengo.dists import Uniform
N = 10
model = nengo.Network(label='Neurons')
with model:
neurons = nengo.Ensemble(N, dimensions=1,
max_rates=Uniform(100,200)) #Defaults to LIF neurons,
#with random gains and biases for
#neurons between 100-200hz over -1,1
connection = nengo.Connection(neurons, neurons, #This is just to generate the decoders
solver=nengo.solvers.LstsqL2(reg=0)) #reg=0 means ignore noise
sim = nengo.Simulator(model)
d = sim.data[connection].weights.T
x, A = tuning_curves(neurons, sim)
xhat = numpy.dot(A, d)
pyplot.plot(x, A)
xlabel('x')
ylabel('firing rate (Hz)')
figure()
plot(x, x)
plot(x, xhat)
xlabel('$x$')
ylabel('$\hat{x}$')
ylim(-1, 1)
xlim(-1, 1)
figure()
plot(x, xhat-x)
xlabel('$x$')
ylabel('$\hat{x}-x$')
xlim(-1, 1)
print 'RMSE', np.sqrt(np.average((x-xhat)**2))
#Have to run previous python cell first
A_noisy = A + numpy.random.normal(scale=0.2*numpy.max(A), size=A.shape)
xhat = numpy.dot(A_noisy, d)
pyplot.plot(x, A_noisy)
xlabel('x')
ylabel('firing rate (Hz)')
figure()
plot(x, x)
plot(x, xhat)
xlabel('$x$')
ylabel('$\hat{x}$')
ylim(-1, 1)
xlim(-1, 1)
print 'RMSE', np.sqrt(np.average((x-xhat)**2))
import numpy
import nengo
from nengo.utils.ensemble import tuning_curves
from nengo.dists import Uniform
N = 100
model = nengo.Network(label='Neurons')
with model:
neurons = nengo.Ensemble(N, dimensions=1,
max_rates=Uniform(100,200)) #Defaults to LIF neurons,
#with random gains and biases for
#neurons between 100-200hz over -1,1
connection = nengo.Connection(neurons, neurons, #This is just to generate the decoders
solver=nengo.solvers.LstsqNoise(noise=0.2)) #Add noise ###NEW
sim = nengo.Simulator(model)
d = sim.data[connection].weights.T
x, A = tuning_curves(neurons, sim)
A_noisy = A + numpy.random.normal(scale=0.2*numpy.max(A), size=A.shape)
xhat = numpy.dot(A_noisy, d)
pyplot.plot(x, A_noisy)
xlabel('x')
ylabel('firing rate (Hz)')
figure()
plot(x, x)
plot(x, xhat)
xlabel('$x$')
ylabel('$\hat{x}$')
ylim(-1, 1)
xlim(-1, 1)
print 'RMSE', np.sqrt(np.average((x-xhat)**2))
#%pylab inline
import numpy
import nengo
from nengo.utils.ensemble import tuning_curves
from nengo.dists import Uniform
N = 40
tau_rc = .2
tau_ref = .001
lif_model = nengo.LIFRate(tau_rc=tau_rc, tau_ref=tau_ref)
model = nengo.Network(label='Neurons')
with model:
neurons = nengo.Ensemble(N, dimensions=1,
max_rates = Uniform(250,300),
neuron_type = lif_model)
sim = nengo.Simulator(model)
x, A = tuning_curves(neurons, sim)
plot(x, A)
xlabel('x')
ylabel('firing rate (Hz)');
#Have to run previous code cell first
noise = 0.2
with model:
connection = nengo.Connection(neurons, neurons, #This is just to generate the decoders
solver=nengo.solvers.LstsqNoise(noise=0.2)) #Add noise ###NEW
sim = nengo.Simulator(model)
d = sim.data[connection].weights.T
x, A = tuning_curves(neurons, sim)
A_noisy = A + numpy.random.normal(scale=noise*numpy.max(A), size=A.shape)
xhat = numpy.dot(A_noisy, d)
print 'RMSE with %d neurons is %g'%(N, np.sqrt(np.average((x-xhat)**2)))
figure()
plot(x, x)
plot(x, xhat)
xlabel('$x$')
ylabel('$\hat{x}$')
ylim(-1, 1)
xlim(-1, 1);
import numpy
import nengo
n = nengo.neurons.LIFRate()
theta = numpy.linspace(-numpy.pi, numpy.pi, 100)
x = numpy.array([numpy.sin(theta), numpy.cos(theta)])
e = numpy.array([1.0, 0])
plot(theta*180/numpy.pi, n.rates(numpy.dot(x.T, e), bias=1, gain=0.2)) #bias 1->1.5
xlabel('angle')
ylabel('firing rate')
xlim(-180, 180)
show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Some sort of mapping between neural activity and a state in the world
Step2: Rectified Linear Neuron
Step3: Leaky integrate-and-fire neuron
Step4: Response functions
Step5: For mapping #1, the NEF uses a linear map
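Spelled out (an added note in standard NEF notation, matching the calls to n.rates above rather than anything specific to Nengo internals), the linear map from the represented value $\mathbf{x}$ to the input current of neuron $i$, followed by the neuron nonlinearity $G$ (rectified linear or LIF above), is

$$a_i = G_i\!\left[\,\alpha_i \langle \mathbf{e}_i, \mathbf{x} \rangle + J_i^{bias}\,\right],$$

with gain $\alpha_i$, encoder (preferred direction vector) $\mathbf{e}_i$ and bias current $J_i^{bias}$; this is what n.rates(numpy.dot(x.T, e), gain=..., bias=...) evaluates.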
Step6: But that's not how people normally plot it
Step7: That starts looking a lot more like the real data.
Step8: What happens to the error with more neurons?
Step9: What if we just increase the number of neurons? Will it help?
Step10: Number of neurons
Step11: How good is the representation?
Step12: Possible questions
|
4,504
|
<ASSISTANT_TASK:>
Python Code:
import graphviz as gv
class TwoThreeTree:
sNodeCount = 0
def __init__(self):
TwoThreeTree.sNodeCount += 1
self.mID = TwoThreeTree.sNodeCount
def getID(self):
return self.mID
def isNil(self):
return False
def isOne(self):
return False
def isTwo(self):
return False
def isThree(self):
return False
def isFour(self):
return False
def isTree(self):
return False
def isMethod(self):
return False
def insert(self, k):
return self._ins(k)._restore()._grow()
def delete(self, k):
return self._del(k)._repair()._shrink()
def _grow(self):
return self
def _shrink(self):
return self
def _make_string(self, attributes):
# map the function __str__ to all attributes and join them with a comma
name = self.__class__.__name__
return f"{name}({', '.join(map(str, [getattr(self, at) for at in attributes]))})"
TwoThreeTree._make_string = _make_string
def toDot(self):
dot = gv.Digraph(node_attr={'shape': 'record', 'style': 'rounded'})
nodeDict = {}
self._collectIDs(nodeDict)
for n, t in nodeDict.items():
if t.isNil():
dot.node(str(n), label='', shape='point')
elif t.isOne():
dot.node(str(n), label='', shape='point')
elif t.isTwo():
dot.node(str(n), label=str(t.mKey))
elif t.isThree():
dot.node(str(n), label=str(t.mKeyL) + '|' + str(t.mKeyR))
elif t.isFour():
dot.node(str(n), label=str(t.mKeyL) + '|' + str(t.mKeyM) + '|' + str(t.mKeyR))
elif t.isTree():
dot.node(str(n), label=str(t.mName), shape='triangle')
else:
assert False, f'Unknown node {t}'
for n, t in nodeDict.items():
if t.isOne():
dot.edge(str(n), str(t.mChild.getID()))
if t.isTwo():
dot.edge(str(n), str(t.mLeft .getID()))
dot.edge(str(n), str(t.mRight.getID()))
if t.isThree():
dot.edge(str(n), str(t.mLeft .getID()))
dot.edge(str(n), str(t.mMiddle.getID()))
dot.edge(str(n), str(t.mRight .getID()))
if t.isFour():
dot.edge(str(n), str(t.mLeft .getID()))
dot.edge(str(n), str(t.mMiddleL.getID()))
dot.edge(str(n), str(t.mMiddleR.getID()))
dot.edge(str(n), str(t.mRight .getID()))
return dot
TwoThreeTree.toDot = toDot
def _collectIDs(self, nodeDict):
nodeDict[self.getID()] = self
if self.isOne():
self.mChild._collectIDs(nodeDict)
elif self.isTwo():
self.mLeft ._collectIDs(nodeDict)
self.mRight._collectIDs(nodeDict)
elif self.isThree():
self.mLeft ._collectIDs(nodeDict)
self.mMiddle._collectIDs(nodeDict)
self.mRight ._collectIDs(nodeDict)
elif self.isFour():
self.mLeft ._collectIDs(nodeDict)
self.mMiddleL._collectIDs(nodeDict)
self.mMiddleR._collectIDs(nodeDict)
self.mRight ._collectIDs(nodeDict)
TwoThreeTree._collectIDs = _collectIDs
def toDotList(NodeList):
dot = gv.Digraph(node_attr={'shape': 'record', 'style': 'rounded'})
nodeDict = {}
for node in NodeList:
node._collectIDs(nodeDict)
for n, t in nodeDict.items():
if t.isNil():
dot.node(str(n), label='', shape='point')
elif t.isOne():
dot.node(str(n), label='', shape='point')
elif t.isTwo():
dot.node(str(n), label=str(t.mKey))
elif t.isThree():
dot.node(str(n), label=str(t.mKeyL) + '|' + str(t.mKeyR))
elif t.isFour():
dot.node(str(n), label=str(t.mKeyL) + '|' + str(t.mKeyM) + '|' + str(t.mKeyR))
elif t.isTree():
dot.node(str(n), label=str(t.mName), shape='triangle', style='solid')
elif t.isMethod():
dot.node(str(n), label=str(t.mLabel), shape='rectangle', style='dotted')
else:
assert False, f'toDotList: Unknown node {str(t)}'
for n, t in nodeDict.items():
if t.isOne():
dot.edge(str(n), str(t.mChild.getID()))
if t.isTwo():
dot.edge(str(n), str(t.mLeft .getID()))
dot.edge(str(n), str(t.mRight.getID()))
if t.isThree():
dot.edge(str(n), str(t.mLeft .getID()))
dot.edge(str(n), str(t.mMiddle.getID()))
dot.edge(str(n), str(t.mRight .getID()))
if t.isFour():
dot.edge(str(n), str(t.mLeft .getID()))
dot.edge(str(n), str(t.mMiddleL.getID()))
dot.edge(str(n), str(t.mMiddleR.getID()))
dot.edge(str(n), str(t.mRight .getID()))
return dot
class Tree(TwoThreeTree):
def __init__(self, name):
TwoThreeTree.__init__(self)
self.mName = name
def __str__(self):
return self.mName
def isTree(self):
return True
class Method(TwoThreeTree):
def __init__(self, label):
TwoThreeTree.__init__(self)
self.mLabel = label
def __str__(self):
return self.mLabel
def isMethod(self):
return True
class Nil(TwoThreeTree):
def __init__(self):
TwoThreeTree.__init__(self)
def isNil(self):
return True
def __str__(self):
return 'Nil()'
class One(TwoThreeTree):
def __init__(self, child):
TwoThreeTree.__init__(self)
self.mChild = child
def isOne(self):
return True
def __str__(self):
return _make_string(self, ['mChild'])
toDotList([One(Tree('t'))])
class Two(TwoThreeTree):
def __init__(self, left, key, right):
TwoThreeTree.__init__(self)
self.mLeft = left
self.mKey = key
self.mRight = right
def isTwo(self):
return True
def __str__(self):
return self._make_string(['mLeft', 'mKey', 'mRight'])
toDotList([Two(Tree('l'), 'k', Tree('r'))])
class Three(TwoThreeTree):
def __init__(self, left, keyL, middle, keyR, right):
TwoThreeTree.__init__(self)
self.mLeft = left
self.mKeyL = keyL
self.mMiddle = middle
self.mKeyR = keyR
self.mRight = right
def __str__(self):
return self._make_string(['mLeft', 'mKeyL', 'mMiddle', 'mKeyR', 'mRight'])
def isThree(self):
return True
toDotList([Three(Tree('l'), 'kL', Tree('m'), 'kR', Tree('r'))])
class Four(TwoThreeTree):
def __init__(self, l, kl, ml, km, mr, kr, r):
TwoThreeTree.__init__(self)
self.mLeft = l
self.mKeyL = kl
self.mMiddleL = ml
self.mKeyM = km
self.mMiddleR = mr
self.mKeyR = kr
self.mRight = r
def __str__(self):
return self._make_string(['mLeft', 'mKeyL', 'mMiddleL', 'mKeyM', 'mMiddleR', 'mKeyR', 'mRight'])
def isFour(self):
return True
toDotList([Four(Tree('l'), 'kL', Tree('mL'), 'kM', Tree('mR'), 'kR', Tree('r'))])
toDotList([Nil(), Method('insert(k)'), Two(Nil(), 'k', Nil())])
toDotList([Two(Tree('l'), 'k', Tree('r')), Method('.ins(k)'), Two(Tree('l'), 'k', Tree('r')) ])
toDotList([Method('k1 < k2:'), Two(Nil(), 'k1', Nil()), Method('.ins(k2)'), Three(Nil(), 'k1', Nil(), 'k2', Nil()) ])
toDotList([Method('k2 < k1:'), Two(Nil(), 'k1', Nil()), Method('.ins(k2)'), Three(Nil(), 'k2', Nil(), 'k1', Nil()) ])
toDotList([Method('k1 < k2:'), Two(Tree('l'), 'k1', Tree('r')), Method('.ins(k2)'), Two(Tree('l'), 'k1', Tree('r.ins(k2)')) ])
toDotList([Method('k2 < k1:'), Two(Tree('l'), 'k1', Tree('r')), Method('.ins(k2)'), Two(Tree('l.ins(k2)'), 'k1', Tree('r')) ])
toDotList([Two(Four(Tree('l1'),'kl',Tree('ml'),'km', Tree('mr'),'kr',Tree('r1')), 'k', Tree('r')),
Method('.restore()'),
Three(Two(Tree('l1'),'kl',Tree('ml')), 'km', Two(Tree('mr'),'kr',Tree('r1')), 'k', Tree('r'))])
toDotList([Two(Tree('l'), 'k', Four(Tree('l1'),'kl',Tree('ml'),'km', Tree('mr'),'kr',Tree('r1'))),
Method('.restore()'),
Three(Tree('l'), 'k', Two(Tree('l1'),'kl',Tree('ml')), 'km', Two(Tree('mr'),'kr',Tree('r1')))])
toDotList([Three(Four(Tree('l1'), 'k1', Tree('ml'), 'k2', Tree('mr'), 'k3', Tree('r1')), 'kl', Tree('m'), 'kr', Tree('r')),
Method('.restore()'),
Four(Two(Tree('l1'), 'k1', Tree('ml')), 'k2', Two(Tree('mr'), 'k3', Tree('r1')), 'kl', Tree('m'), 'kr', Tree('r')),
])
toDotList([Three(Tree('l'), 'kl', Four(Tree('l1'), 'k1', Tree('ml'), 'k2', Tree('mr'), 'k3', Tree('r1')), 'kr', Tree('r')),
Method('.restore()'),
Four(Tree('l'), 'kl', Two(Tree('l1'), 'k1', Tree('ml')), 'k2', Two(Tree('mr'), 'k3', Tree('r1')), 'kr', Tree('r'))
])
toDotList([Three(Tree('l'), 'kl', Tree('m'), 'kr', Four(Tree('l1'), 'k1', Tree('ml'), 'k2', Tree('mr'), 'k3', Tree('r1'))),
Method('.restore()'),
Four(Tree('l'), 'kl', Tree('m'), 'kr', Two(Tree('l1'), 'k1', Tree('ml')), 'k2', Two(Tree('mr'), 'k3', Tree('r1')))
])
toDotList([Four(Tree('l'),'kl', Tree('ml'), 'km', Tree('mr'), 'kr', Tree('r')),
Method('.grow()'),
Two(Two(Tree('l'),'kl', Tree('ml')), 'km', Two(Tree('mr'), 'kr', Tree('r')))
])
toDotList([Two(One(Tree('L1')), 'k', Two(Tree('M1'), 'k1', Tree('R1'))), Method('.repair()'), One(Three(Tree('L1'), 'k', Tree('M1'), 'k1', Tree('R1')))])
toDotList([Two(Two(Tree('L1'), 'k1', Tree('M1')), 'k', One(Tree('R1'))), Method('.repair()'), One(Three(Tree('L1'), 'k1', Tree('M1'), 'k', Tree('R1')))])
toDotList([Two(One(Tree('A')), 'k', Three(Tree('B'), 'k1', Tree('C'), 'k2', Tree('D'))),
Method('.repair()'),
Two(Two(Tree('A'), 'k', Tree('B')), 'k1', Two(Tree('C'), 'k2', Tree('D')))
])
toDotList([Two(Three(Tree('A'), 'k1', Tree('B'), 'k2', Tree('C')), 'k', One(Tree('D'))),
Method('.repair()'),
Two(Two(Tree('A'), 'k1', Tree('B')), 'k2', Two(Tree('C'), 'k', Tree('D')))
])
toDotList([Three(One(Tree('A')), 'kL', Two(Tree('B'), 'k', Tree('C')), 'kR', Tree('R')),
Method('.repair()'),
Two(Three(Tree('A'), 'kL', Tree('B'), 'k', Tree('C')), 'kR', Tree('R'))
])
toDotList([Three(Two(Tree('A'), 'k', Tree('B')), 'kL', One(Tree('C')), 'kR', Tree('R')),
Method('.repair()'),
Two(Three(Tree('A'), 'k', Tree('B'), 'kL', Tree('C')), 'kR', Tree('R'))
])
toDotList([Three(Tree('L'), 'kL', Two(Tree('A'), 'k', Tree('B')), 'kR', One(Tree('C'))),
Method('.repair()'),
Two(Tree('L'), 'kL', Three(Tree('A'), 'k', Tree('B'), 'kR', Tree('C')))
])
toDotList([Three(One(Tree('A')), 'kL', Three(Tree('B'), 'k1', Tree('C'), 'k2', Tree('D')), 'kR', Tree('R')),
Method('.repair()'),
Three(Two(Tree('A'), 'kL', Tree('B')), 'k1', Two(Tree('C'), 'k2', Tree('D')), 'kR', Tree('R'))
])
toDotList([Three(Three(Tree('A'), 'k1', Tree('B'), 'k2', Tree('C')), 'kL', One(Tree('D')), 'kR', Tree('R')),
Method('.repair()'),
Three(Two(Tree('A'), 'k1', Tree('B')), 'k2', Two(Tree('C'), 'kL', Tree('D')), 'kR', Tree('R'))
])
toDotList([Three(Tree('L'), 'kL', Three(Tree('A'), 'k1', Tree('B'), 'k2', Tree('C')), 'kR', One(Tree('D'))),
Method('.repair()'),
Three(Tree('L'), 'kL', Two(Tree('A'), 'k1', Tree('B')), 'k2', Two(Tree('C'), 'kR', Tree('D')))])
toDotList([One(Tree('A')), Method('.shrink()'), Tree('A')])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This notebook presents 2-3 trees. We define these trees inductively as follows
Step2: The function make_string is a helper function used to shorten the implementation of __str__.
Step3: The method $t.\texttt{toDot}()$ takes a 2-3-4 tree $t$ and returns a graph that depicts the tree $t$.
Step4: The method $t.\texttt{collectIDs}(d)$ takes a tree $t$ and a dictionary $d$ and updates the dictionary so that the following holds
Step5: The function $\texttt{toDotList}(\texttt{NodeList})$ takes a list of trees and displays them one by one.
Step6: The class Tree is not used in the implementation of 2-3 trees. It is only used for displaying abstract subtrees in equations.
Step7: The class Method is not used in the implementation of 2-3 trees. It is only used for displaying method calls in equations. It is displayed as a rectangle containing the string that is stored in the member variable mLabel.
Step8: The class Nil represents an empty tree. It has no member variables of its own.
Step9: The class One represents a 1-node. These are nodes without a key that have only a single child.
Step10: Graphically, the node $\texttt{One}(t)$ is represented as shown below
Step11: The class Two represents a 2-node of the form $\texttt{Two}(l, k, r)$. It manages three member variables
Step12: Graphically, the node $\texttt{Two}(l, k, r)$ is represented as shown below
Step13: The class Three represents a 3-node of the form $\texttt{Three}(l, k_L, m, k_R, r)$. It manages 5 member variables
Step14: Graphically, the node $\texttt{Three}(l, k_L, m, k_R, r)$ is represented as shown below
Step15: The class Four represents a 4-node. It manages 7 member variables
Step16: Graphically, the node $\texttt{Four}(l, k_L, m_L, k_M, m_R, k_R, r)$ is represented as shown below
Step17: Methods of the Class Nil
Step18: Methods of the Class Two
Step19: The function call $t.\texttt{restore}()$ takes a 2-3-4 tree $t$ that has at most one 4-node. This 4-node has to be a child
Step20: Methods of the Class Three
Step21: Methods of the Class Four
Step22: Deletion
Step23: Finally, we specify the method shrink. The only class where the implementation of shrink is nontrivial is the class One.
|
4,505
|
<ASSISTANT_TASK:>
Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
init = State(S=89, I=1, R=0)
init /= sum(init)
def make_system(beta, gamma):
Make a system object for the SIR model.
beta: contact rate in per day
gamma: recovery rate in per day
returns: System object
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
def update_func(state, t, system):
Update the SIR model.
state: State with variables S, I, R
t: time step
system: System with beta and gamma
returns: State object
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
state = update_func(init, 0, system)
def run_simulation(system, update_func):
Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: State object for final state
state = system.init
for t in linrange(system.t0, system.t_end):
state = update_func(state, t, system)
return state
run_simulation(system, update_func)
# Solution
tc = 4 # time between contacts in days
tr = 5 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
s0 = system.init.S
final = run_simulation(system, update_func)
s_end = final.S
s0 - s_end
def run_simulation(system, update_func):
Runs a simulation of the system.
Add three Series objects to the System: S, I, R
system: System object
update_func: function that updates state
S = TimeSeries()
I = TimeSeries()
R = TimeSeries()
state = system.init
t0 = system.t0
S[t0], I[t0], R[t0] = state
for t in linrange(system.t0, system.t_end):
state = update_func(state, t, system)
S[t+1], I[t+1], R[t+1] = state
return S, I, R
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
S, I, R = run_simulation(system, update_func)
def plot_results(S, I, R):
Plot the results of a SIR model.
S: TimeSeries
I: TimeSeries
R: TimeSeries
plot(S, '--', label='Susceptible')
plot(I, '-', label='Infected')
plot(R, ':', label='Recovered')
decorate(xlabel='Time (days)',
ylabel='Fraction of population')
plot_results(S, I, R)
savefig('figs/chap11-fig01.pdf')
def run_simulation(system, update_func):
Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
frame = TimeFrame(columns=system.init.index)
frame.row[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
results.head()
plot_results(results.S, results.I, results.R)
# Solution
tc = 4 # time between contacts in days
tr = 5 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
plot_results(results.S, results.I, results.R)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: SIR implementation
Step2: To convert from number of people to fractions, we divide through by the total.
Step4: make_system creates a System object with the given parameters.
Step5: Here's an example with hypothetical values for beta and gamma.
Step7: The update function takes the state during the current time step and returns the state during the next time step.
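Written as difference equations (added here for reference; they restate exactly what update_func computes on the fractions s, i, r with a unit time step):

$$s_{t+1} = s_t - \beta\, s_t\, i_t, \qquad i_{t+1} = i_t + \beta\, s_t\, i_t - \gamma\, i_t, \qquad r_{t+1} = r_t + \gamma\, i_t.$$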
Step8: To run a single time step, we call it like this
Step10: Now we can run a simulation by calling the update function for each time step.
Step11: The result is the state of the system at t_end
Step12: Exercise Suppose the time between contacts is 4 days and the recovery time is 5 days. After 14 weeks, how many students, total, have been infected?
Step14: Using TimeSeries objects
Step15: Here's how we call it.
Step17: And then we can plot the results.
Step18: Here's what they look like.
Step20: Using a DataFrame
Step21: Here's how we run it, and what the result looks like.
Step22: We can extract the results and plot them.
Step23: Exercises
|
4,506
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
a = np.array([[0,1,0,0], [0,0,1,0], [0,1,1,0], [1,0,0,1]])
def LI_vecs(M):
dim = M.shape[0]
LI=[M[0]]
for i in range(dim):
tmp=[]
for r in LI:
tmp.append(r)
tmp.append(M[i]) #set tmp=LI+[M[i]]
if np.linalg.matrix_rank(tmp)>len(LI): #test if M[i] is linearly independent from all (row) vectors in LI
LI.append(M[i]) #note that matrix_rank does not need to take in a square matrix
return LI #return set of linearly independent (row) vectors
result = LI_vecs(a)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
4,507
|
<ASSISTANT_TASK:>
Python Code:
import ConfigParser
CP = ConfigParser.ConfigParser()
CP.read("../.config")
head = CP.get('IPyLogbook-Config','head')
url = CP.get('IPyLogbook-Config','url')
port = CP.get('IPyLogbook-Config','ssh-port')
headLink="[Logbook HEAD]("+url+":"+port+"/tree)"
extensionsLink="[Logbook Extensions]("+url+":"+port+"/notebooks/IPyLogbook/mgmt/IPyLogbookExtensions.ipynb)"
indexLink="[Logbook Index]("+url+":"+port+"/notebooks/IPyLogbook/IPyLogbookIndex.ipynb)"
usersguideLink="[Logbook User Guide]("+url+":"+port+"/notebooks/IPyLogbook/doc/IPyLogbookUsersGuide.ipynb)"
CreateScript_Dir=head+"/experiment/20140101" # Abs. path to directory where Logbook entry will be created
CreateScript_Name="20140101" # Name of the Logbook entry (will be used in name if notebook file)
CreateScript_Execute="No" # "Yes" = run script; "No" = do not run script
CreateScript_Overwrite="No" # "Yes" = overwrite preexising log entry; "No" = do NOT overwrite preexisting log entry
#-------------------- The user should not need to set anything below this line --------------------#
%%bash -s "$CreateScript_Dir" "$CreateScript_Name" "$CreateScript_Execute" "$CreateScript_Overwrite" "$head"
if [ "$#" -ne 5 ]; then
echo -e "\nError: This script requires four arguments that should be set by the user!"
echo -e "arg1 : Absolute path to where new Logbook entry will be created
echo -e "arg2 : Name of the new Logbook entry
echo -e "arg3 : Yes/No to run the script"
echo -e "arg4 : Yes/No to overwrite an existing Logbook entry with the name specified"
echo -e "arg5 : Full path to the HEAD directory\n"
exit
fi
# Ensure that the user has intentionally flagged this script to run
if [ "$3" == "No" ]; then
echo -e "\nThis script is not flagged for execution. Set 'CreateScript_Execute' flag to 'Yes' to execute"
exit
fi
# Set full path to the directory containing the new entry
EntryDir=$1
# Set the new entry's file name
EntryName="IPyLogbookEntry-"$2".ipynb"
# Set the full path to the new entry file
Entry=$EntryDir/$EntryName
# If the directory does NOT exist then create it
if [ ! -d $EntryDir ]; then
mkdir -p $EntryDir
# If the directory DOES exist then...
else
# ... overwrite the preexisting Logbook entry if the user has granted permission to do so
if [ "$4" == "No" ]; then
echo -e "\nWARNING : The Logbook entry '$Entry' already exists!"
echo -e " You may set the above 'CreateScript_Overwrite' parameter to 'Yes' to overwrite this entry,"
echo -e " but you should exercise extreme caution when using this option!\n"
exit
fi
fi
# Set the Logbook entry template to be copied to the new Logbook entry and copy it
EntryTemplate="IPyLogbookEntryTemplate.ipynb"
cp $EntryTemplate $Entry
# Place a symbolic link to the IPyLogbook config file so that the new Logbook entry will have access to it
if [ "$4" == "Yes" ]; then
rm $EntryDir/.config -f
fi
ln -s $5/IPyLogbook/.config $EntryDir/.config
echo -e "\nA new Logbook entry was successfully created at:\n"
echo -e " $Entry\n"
ListScript_Execute="No" # "Yes" = run script; "No" = do not run script
#-------------------- The user should not need to set anything below this line --------------------#
%%bash -s "$ListScript_Execute" "$head"
if [ "$#" -ne 2 ]; then
echo -e "\nError: This script requires four arguments that should be set by the user!"
echo -e "arg1 : Yes/No to run the script"
echo -e "arg2 : Full path to the HEAD directory\n"
exit
fi
# Ensure that the user has intentionally flagged this script to run
if [ "$1" == "No" ]; then
echo -e "\nThis script is not flagged for execution. Set 'ListScript_Execute' flag to 'Yes' to execute".
exit
fi
find $2 -name "IPyLogbookEntry-*.ipynb"
BackupScript_Directory="/home/hartwig/logbook/backup" # Full path to backup directory
BackupScript_Execute="No" # "Yes" = run script; "No" = do not run script
BackupScript_Overwrite="No" # "Yes" = overwrite preexising backup; "No" = do NOT overwrite preexisting backup
#-------------------- The user should not need to set anything below this line --------------------#
%%bash -s "$BackupScript_Directory" "$BackupScript_Execute" "$BackupScript_Overwrite" "$head"
# Ensure correct number or arguments are passed; provide helpful output
if [ "$#" -ne 4 ]; then
echo -e "\nError: This script requires four arguments that should be set by the user!"
echo -e "arg1 : Absolute path to where the Logbook will be backed up"
echo -e "arg2 : Yes/No to run the script"
echo -e "arg3 : Yes/No to overwrite an existing Logbook entry with the name specified"
echo -e "arg4 : Full path to the HEAD directory\n"
exit
fi
# Put cmd line args into reasonably named variables
Directory=$1
Execute=$2
Overwrite=$3
Head=$4
# Ensure that the user has intentionally flagged this script to run
if [ "$Execute" == "No" ]; then
echo -e "\nThis script is not flagged for execution. Set 'ListScript_Execute' flag to 'Yes' to execute"
exit
fi
# Check to see if a directory already exists where the backup will be made
if [ -d $Directory ]; then
# Prevent overwriting the directory; provide advice to overwrite
if [ "$Overwrite" == "No" ]; then
echo -e "\nA backup of this logbook already exists at $1! Set 'BackupScript_Overwrite' to 'Yes' to overwrite."
echo -e "Please exercise CAUTION when using this option!\n"
exit
# Remove and recreate the directory
elif [ "$Overwrite" == "Yes" ]; then
chmod 755 $Directory
rm $Directory -rf
mkdir $Directory
fi
else
mkdir -p $Directory
fi
# Change to the IPyLogbook HEAD directory and copy all of the IPyLogbook entries to
# the specified directory being sure to preserve the directory structure!
cd $Head
EntryList=$(find . -name 'IPyLogbookEntry-*.ipynb' | grep -v '.ipynb_checkpoints')
for Entry in $EntryList; do
Entry=${Entry#.\/}
cp --parents $Entry $Directory
done
cat > $Directory/README.txt << EOL
*************
** WARNING **
*************
This directory contains a backup of an IPyLogbook! Please treat with respect!
EOL
Date=$(date)
echo "name: README.txt" >> $Directory/README.txt
echo "date: "$Date >> $Directory/README.txt
echo -e "\nA backup of this IPyLogbook was created at $Directory that includes the following files:"
for Entry in $EntryList; do
Entry=${Entry#.\/}
echo " "$Entry
done
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Logbook Entry
Step2: List Logbook Entries
Step3: Backup Logbook
|
4,508
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import csv
import json
import os
import ujson
import urllib2
import glob
import pickle
import re
import time
from wordcloud import WordCloud
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import scipy as sp
from riotwatcher import RiotWatcher
config = {
'key': 'your key',
}
class RiotCrawler:
def __init__(self, key):
self.key = key
self.w = RiotWatcher(key)
self.tiers = {
'bronze': [],
'silver': [],
'gold': [],
'platinum': [],
'diamond': [],
'challenger': [],
'master': [],
}
def get_player_by_tier(self, summoner_id):
request_url = 'https://na.api.pvp.net/api/lol/na/v2.5/league/by-summoner/{}?api_key={}'.format(
summoner_id, self.key
)
response = urllib2.urlopen(request_url)
tier_info = ujson.loads(response.read())
tier = tier_info[str(summoner_id)][0]['tier'].lower()
entries = tier_info[str(summoner_id)][0]['entries']
level = self.tiers[tier]
for entry in entries:
level.append(entry['playerOrTeamId'])
# for l in level:
# print 'summoner id: {}'.format(str(l))
def get_tier():
# challenger: 77759242
# platinum: 53381
# gold: 70359816
# silver: 65213225
# bronze: 22309680
# master: 22551130
# diamond: 34570626
player_ids = [70359816, 77759242, 53381, 65213225, 22309680, 22551130, 34570626]
riot_crawler = RiotCrawler(config['key'])
for player_id in player_ids:
print 'start crawling id: {}'.format(player_id)
try:
riot_crawler.get_player_by_tier(player_id)
except:
continue
return riot_crawler.tiers
tiers = get_tier()
for tier, rank_dict in tiers.iteritems():
print '--- {} ---'.format(tier)
for summoner in rank_dict:
print 'summoner id: {}'.format(summoner)
print '--- end of {} ---'.format(tier)
class TopChampion:
FIELD_NAMES = ['totalSessionsPlayed', 'totalSessionsLost', 'totalSessionsWon',
'totalChampionKills', 'totalDamageDealt', 'totalDamageTaken',
'mostChampionKillsPerSession', 'totalMinionKills', 'totalDoubleKills',
'totalTripleKills', 'totalQuadraKills', 'totalPentaKills',
'totalUnrealKills', 'totalDeathsPerSession', 'totalGoldEarned',
'mostSpellsCast', 'totalTurretsKilled', 'totalPhysicalDamageDealt',
'totalMagicDamageDealt', 'totalFirstBlood', 'totalAssists',
'maxChampionsKilled', 'maxNumDeaths', 'label']
def __init__(self, key, player_id, label, n):
self.label = label
self.player_id = player_id
self.key = key
self.n = n
self.top_champions = []
pass
def get_top_champions(self):
self.top_champions[:] = []
data = urllib2.urlopen(
'https://na.api.pvp.net/api/lol/na/v1.3/stats/by-summoner/' +
self.player_id +
'/ranked?season=SEASON2016&api_key=' +
self.key
).read()
json_data = json.loads(data)
champions = json_data['champions']
champion_stats = []
for champion in champions:
champion_stat = champion['stats']
champion_stat['id'] = champion['id']
champion_stat['label'] = self.label
champion_stats.append(champion_stat)
pass
self.top_champions = sorted(champion_stats,
key=lambda x: x['totalSessionsPlayed'],
reverse=True)[1:self.n + 1]
return self.top_champions
pass
def save_top_champions(self, champion_id=None):
for champion in self.top_champions:
if not champion_id:
champion_id = champion['id']
file_name = './data/{}.csv'.format(champion['id'])
if os.path.isfile(file_name):
with open(file_name, 'a') as csvfile:
writer = csv.DictWriter(csvfile, fieldnames=self.FIELD_NAMES)
writer.writerow(
{
'totalSessionsPlayed': champion['totalSessionsPlayed'],
'totalSessionsLost': champion['totalSessionsLost'],
'totalSessionsWon': champion['totalSessionsWon'],
'totalChampionKills': champion['totalChampionKills'],
'totalDamageDealt': champion['totalDamageDealt'],
'totalDamageTaken': champion['totalDamageTaken'],
'mostChampionKillsPerSession': champion['mostChampionKillsPerSession'],
'totalMinionKills': champion['totalMinionKills'],
'totalDoubleKills': champion['totalDoubleKills'],
'totalTripleKills': champion['totalTripleKills'],
'totalQuadraKills': champion['totalQuadraKills'],
'totalPentaKills': champion['totalPentaKills'],
'totalUnrealKills': champion['totalUnrealKills'],
'totalDeathsPerSession': champion['totalDeathsPerSession'],
'totalGoldEarned': champion['totalGoldEarned'],
'mostSpellsCast': champion['mostSpellsCast'],
'totalTurretsKilled': champion['totalTurretsKilled'],
'totalPhysicalDamageDealt': champion['totalPhysicalDamageDealt'],
'totalMagicDamageDealt': champion['totalMagicDamageDealt'],
'totalFirstBlood': champion['totalFirstBlood'],
'totalAssists': champion['totalAssists'],
'maxChampionsKilled': champion['maxChampionsKilled'],
'maxNumDeaths': champion['maxNumDeaths'],
'label': champion['label']
}
)
pass
pass
else:
with open(file_name, 'w') as csvfile:
writer = csv.DictWriter(csvfile, fieldnames=self.FIELD_NAMES)
writer.writeheader()
writer.writerow(
{
'totalSessionsPlayed': champion['totalSessionsPlayed'],
'totalSessionsLost': champion['totalSessionsLost'],
'totalSessionsWon': champion['totalSessionsWon'],
'totalChampionKills': champion['totalChampionKills'],
'totalDamageDealt': champion['totalDamageDealt'],
'totalDamageTaken': champion['totalDamageTaken'],
'mostChampionKillsPerSession': champion['mostChampionKillsPerSession'],
'totalMinionKills': champion['totalMinionKills'],
'totalDoubleKills': champion['totalDoubleKills'],
'totalTripleKills': champion['totalTripleKills'],
'totalQuadraKills': champion['totalQuadraKills'],
'totalPentaKills': champion['totalPentaKills'],
'totalUnrealKills': champion['totalUnrealKills'],
'totalDeathsPerSession': champion['totalDeathsPerSession'],
'totalGoldEarned': champion['totalGoldEarned'],
'mostSpellsCast': champion['mostSpellsCast'],
'totalTurretsKilled': champion['totalTurretsKilled'],
'totalPhysicalDamageDealt': champion['totalPhysicalDamageDealt'],
'totalMagicDamageDealt': champion['totalMagicDamageDealt'],
'totalFirstBlood': champion['totalFirstBlood'],
'totalAssists': champion['totalAssists'],
'maxChampionsKilled': champion['maxChampionsKilled'],
'maxNumDeaths': champion['maxNumDeaths'],
'label': champion['label']
}
)
pass
pass
pass
pass
pass
def main():
tiers = get_tier()
for tier, rank_dict in tiers.iteritems():
print 'starting tier: {}'.format(tier)
for summoner_id in rank_dict:
print 'tier: {}, summoner id: {}'.format(tier, summoner_id)
top_champion = TopChampion(config['key'], summoner_id, tier, 3)
top_champion.get_top_champions()
top_champion.save_top_champions()
time.sleep(2)
print 'end tier: {}'.format(tier)
main()
adc_idx = [202, 42, 67, 222, 29, 51, 96, 81, 21, 6, 119, 15, 22, 236, 110, 18, 429]
dfs = list()
for adc_id in adc_idx:
df = pd.read_csv('./data/{}.csv'.format(adc_id))
dfs.append(df)
pass
res = pd.concat(dfs, ignore_index=True, verify_integrity=True)
res.to_csv('./data/adc.csv', index=False)
def string_to_int(s):
if s == 'bronze':
return 0
elif s == 'silver':
return 1
elif s == 'gold':
return 2
elif s == 'platinum':
return 3
elif s == 'diamond':
return 4
elif s == 'challenger':
return 5
else:
return 6
pass
def normalize(X):
n, m = X.shape
for i in range(m):
col_max = np.max(X[:, i])
col_min = np.min(X[:, i])
col_delta = col_max - col_min + 1.0
X[:, i] = (X[:, i] - col_min) / col_delta
pass
return X
pass
def preprocess_adc(filename):
df = pd.read_csv(filename, index_col=False)
y = df['label'].map(lambda x: string_to_int(x)).as_matrix()
df_t = df.drop(['totalDamageDealt', 'totalDamageTaken', 'totalDoubleKills', 'totalTripleKills',
'totalQuadraKills', 'totalPentaKills', 'totalUnrealKills', 'mostSpellsCast',
'totalTurretsKilled', 'totalMagicDamageDealt', 'totalFirstBlood'], axis=1)
X = df_t.drop(['label'], axis=1).as_matrix().astype(float)
X = normalize(X)
uniques = np.unique(y)
X_list = list()
y_list = list()
for unique in uniques:
idxes = y == unique
X_list.append(X[idxes])
y_list.append(y[idxes])
pass
return X_list, y_list, df_t
pass
def cross_validation(X_list, y_list, k=10):
res = list()
n = len(y_list)
for i in range(k):
X_tr_list = list()
X_te_list = list()
y_tr_list = list()
y_te_list = list()
for j in range(n):
X_j = X_list[j]
y_j = y_list[j]
m = len(y_j)
test_num = m / k
test_start_idx = i * test_num
test_end_idx = test_start_idx + test_num
test_idx = range(test_start_idx, test_end_idx)
train_idx = range(0, test_start_idx) + range(test_end_idx, m)
X_tr_list.append(X_j[train_idx])
y_tr_list.append(y_j[train_idx])
X_te_list.append(X_j[test_idx])
y_te_list.append(y_j[test_idx])
pass
X_tr = np.concatenate(X_tr_list)
y_tr = np.concatenate(y_tr_list)
X_te = np.concatenate(X_te_list)
y_te = np.concatenate(y_te_list)
res.append((X_tr, y_tr, X_te, y_te))
pass
return res
pass
features_list, labels_list, df_avg = preprocess_adc('./data/adc.csv')
cf_list = cross_validation(features_list, labels_list)
k = len(cf_list)
from sklearn.neighbors import KNeighborsClassifier
neighbor_nums = [3, 5, 10, 15, 20, 30, 40, 60, 80, 100, 200]
n = len(neighbor_nums)
knn_accuracies = np.zeros((k, n))
for i in range(k):
X_tr, y_tr, X_te, y_te = cf_list[i]
for j in range(n):
knn = KNeighborsClassifier(n_neighbors=neighbor_nums[j], weights='distance')
knn.fit(X_tr, y_tr)
knn_accuracies[i, j] = knn.score(X_te, y_te)
pass
pass
knn_mean_accuracies = np.mean(knn_accuracies, axis=0)
knn_std_accuracies = np.std(knn_accuracies, axis=0)
plt.figure(1)
plt.plot(neighbor_nums, knn_mean_accuracies)
plt.xlabel('The value of k nearest neighbors')
plt.ylabel('Mean accuracy across all folds')
plt.figure(2)
plt.plot(neighbor_nums, knn_std_accuracies)
plt.xlabel('The value of k nearest neighbors')
plt.ylabel('Standard Deviation of accuracies across all folds')
plt.show()
from sklearn.svm import SVC
kernels = ['linear', 'poly', 'rbf', 'sigmoid']
kernel_num = len(kernels)
svm_accuracies = np.zeros((k, kernel_num))
for i in range(k):
X_tr, y_tr, X_te, y_te = cf_list[i]
for j in range(kernel_num):
svm = SVC(C=40, kernel=kernels[j])
svm.fit(X_tr, y_tr)
svm_accuracies[i, j] = svm.score(X_te, y_te)
pass
pass
svm_mean_accuracies = np.mean(svm_accuracies, axis=0)
svm_std_accuracies = np.std(svm_accuracies, axis=0)
plt.figure(3)
plt.bar(range(4), svm_mean_accuracies, align='center')
plt.xticks(range(4), kernels)
plt.ylabel('Mean accuracy')
plt.figure(4)
plt.bar(range(4), svm_std_accuracies, align='center')
plt.xticks(range(4), kernels)
plt.ylabel('Standard deviation across folds')
plt.show()
from keras.models import Sequential
from keras.layers import Flatten
from keras.layers.core import Dense, Dropout
from keras.layers.convolutional import Convolution1D, MaxPooling1D, ZeroPadding1D
from keras.regularizers import l2, activity_l2
from keras.utils.np_utils import to_categorical
def classify(X, y, X_te):
_, n = X.shape
model = Sequential()
model.add(ZeroPadding1D(input_shape=(n, 1)))
model.add(Convolution1D(64, 3, activation='relu'))
model.add(ZeroPadding1D())
model.add(Convolution1D(64, 3, activation='relu'))
model.add(MaxPooling1D())
model.add(ZeroPadding1D())
model.add(Convolution1D(128, 3, activation='relu'))
model.add(ZeroPadding1D())
model.add(Convolution1D(128, 3, activation='relu'))
model.add(MaxPooling1D())
model.add(Flatten())
model.add(Dense(256, activation='relu', W_regularizer=l2(0.01), activity_regularizer=activity_l2(0.01)))
model.add(Dropout(0.6))
model.add(Dense(7, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=[
'categorical_accuracy', 'mean_squared_error', 'mean_absolute_error'
])
y = to_categorical(y, nb_classes=None)
model.fit(np.expand_dims(X, axis=2), y, nb_epoch=500, verbose=0)
return model.predict_classes(np.expand_dims(X_te, axis=2))
def accuracy(training_result, y_te):
return np.sum(training_result==y_te) / float(len(training_result))
X_tr, y_tr, X_te, y_te = cf_list[0]
predict_class = classify(X_tr, y_tr, X_te)
print accuracy(predict_class, y_te)
def find_average(tier_id, df):
tier = TIER[tier_id]
df = df[df['label'] == tier]
mean_df = df.mean(axis=0)
mean_data = mean_df / mean_df['totalSessionsPlayed']
mean_data = mean_data.drop(['totalSessionsPlayed', 'totalSessionsLost',
'mostChampionKillsPerSession', 'maxNumDeaths', 'maxChampionsKilled'])
mean_dict = dict(mean_data)
mean_dict['KDA'] = (mean_dict['totalChampionKills'] + mean_dict['totalAssists']) / float(mean_dict['totalDeathsPerSession'])
del mean_dict['totalChampionKills']
del mean_dict['totalAssists']
del mean_dict['totalDeathsPerSession']
return mean_dict
# 70359816 is a summoner's id
top_champion = TopChampion(config['key'], '70359816', 2, n=1)
top_champion.save_top_champions(champion_id='solo_summoner_{}'.format('70359816'))
X_te = pd.read_csv('./data/solo_summoner_70359816.csv').drop(
['label', 'totalDamageDealt', 'totalDamageTaken', 'totalDoubleKills',
'totalTripleKills', 'totalQuadraKills', 'totalPentaKills',
'totalUnrealKills', 'mostSpellsCast', 'totalTurretsKilled',
'totalMagicDamageDealt', 'totalFirstBlood'], axis=1).as_matrix().astype(float)
predict = classify(X_tr, y_tr, X_te)[0]
TIER = {
0: 'bronze',
1: 'silver',
2: 'gold',
3: 'platinum',
4: 'diamond',
5: 'challenger',
6: 'master',
}
print 'the predicted tier is: {}'.format(TIER[predict])
data = {}
reader = csv.reader(open('./data/solo_summoner_70359816.csv'))
index = 0
title, content = [], []
for row in reader:
if index == 0:
title = row
index += 1
else:
content = row
for i in xrange(len(title)):
data[title[i]] = float(content[i])
total_played = data['totalSessionsPlayed']
for key, value in data.iteritems():
data[key] = value / total_played
filtered_data = {}
mean_temp = find_average(predict, df_avg)
data['KDA'] = (data['totalChampionKills'] + data['totalAssists']) / float(data['totalDeathsPerSession'])
for key, value in mean_temp.iteritems():
if key in data:
filtered_data[key] = data[key]
print "Summoner's game data: {}".format(filtered_data)
me, avg = [], []
key_list = ['Damage', 'WinRate', 'TotalCS', 'KDA', 'GoldEarned']
for key, value in filtered_data.iteritems():
# normalize
total = mean_temp[key] + value
avg.append(mean_temp[key] / total)
me.append(value / total)
avg.append(avg[0])
me.append(me[0])
theta = np.linspace(0, 2*np.pi, 6, endpoint=True)
r1 = np.array(me)
r2 = np.array(avg)
ax = plt.subplot(projection='polar')
ax.plot(theta, r1, color='r', linewidth=3)
ax.plot(theta, r2, color='b', linewidth=3)
ax.set_thetagrids(np.arange(0, 360, 360.0/5), labels=key_list)
ax.set_rmax(0.6)
ax.set_title("Summoner Game Skill Analysis", va='bottom')
plt.show()
def count_champion(api_key):
res = dict()
file_names = glob.glob('./data/*.csv')
for file_name in file_names:
m = re.search(r'/(\d+)\.csv', file_name)
if m:
champion_id = m.group(1)
req_url = 'https://global.api.pvp.net/api/lol/static-data/na/v1.2/champion/' \
+ champion_id \
+ '?api_key=' \
+ api_key
time.sleep(1)
# print req_url
data = urllib2.urlopen(req_url).read()
json_data = json.loads(data)
champion_name = json_data['name']
with open(file_name) as f:
champion_count = len(f.readlines())
res[champion_name] = champion_count
pass
pass
return res
pass
if os.path.isfile('champion_popularity.pkl'):
with open('champion_popularity.pkl', 'r+') as f:
champion_counts = pickle.load(f)
pass
pass
else:
champion_counts = count_champion(config['key'])
with open('champion_popularity.pkl', 'w+') as f:
pickle.dump(champion_counts, f, pickle.HIGHEST_PROTOCOL)
pass
word_cloud = WordCloud().generate_from_frequencies(champion_counts.iteritems())
plt.imshow(word_cloud)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To use the Riot API, one more important thing to do is to get your own API key. An API key can be obtained from here. Note that a normal developer API key has a narrow request limit, whereas a production API key for commercial use has a looser request limit.
Step2: <a name="architecture"></a>Project Architecture
Step3: get_tier will return a division dictionary, whose keys are the tier names and whose values are the lists of summoner ids in each tier. The results are printed in a human-readable format, categorized by tier.
Step4: <a name="mfpChampions"></a>2. Fetch most frequently played champions
Step5: With the above class, we can now start crawling the stats data of all champions and saving them to CSV files with the following code. Notice that this process is pretty slow since we added sleep calls in our code. Riot APIs have a limitation on the API call rate: you cannot send more than 500 requests per 10 minutes. So every time we send a request, we sleep for 1 second to prevent error responses.
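For illustration only (this helper is not part of the project code, and the pause and back-off values are assumptions rather than Riot's official limits), a small throttled request wrapper in the same urllib2 style as the rest of the notebook could look like this:

import time
import urllib2

def throttled_get(url, pause=1.2, retries=1):
    # sleep before every call to stay under a "500 requests per 10 minutes" style budget,
    # and back off once if the server answers with an HTTP error (e.g. rate limited)
    for attempt in range(retries + 1):
        try:
            time.sleep(pause)
            return urllib2.urlopen(url).read()
        except urllib2.HTTPError:
            if attempt == retries:
                raise
            time.sleep(10 * pause)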
Step6: <a name="train"></a>Data process, training and Analysis
Step7: Now we generate a new adc.csv file of all the data we collected for the "ADC" champions. The next thing we need to do is group the "ADC" data of the different levels together, because later we will use k-fold cross validation on the data and need to sample the same ratio of data from each level for testing.
Step8: <a name="cross-validation"></a>K-fold cross validation
Step9: <a name="knn"></a>K-Nearest Neighbors (KNN) Model
Step10: KNN Analysis
Step11: SVM Analysis
Step12: Result
Step13: The training results on the training data (in the first iteration) are as follows (plotted with matplotlib using results from keras)
Step14: Now we generate our test data. We assume that the champion the summoner played best reveals the summoner's game tier. Delete unnecessary data keys according to the mean_dict
Step15: Then we train our CNN model
Step16: After predicting the tier, we fetch the average summoner's data from the predicted tier, and process the data for summoner analysis use (details would be explained later).
Step17: Next we visualize the summoner's data using a Python radar chart
Step18: <a name="s-analysis"></a>Summoner Analysis
|
4,509
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append("../")
import pyfesom as pf
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
from matplotlib.colors import LinearSegmentedColormap
import numpy as np
# %matplotlib notebook
%matplotlib inline
from matplotlib import cm
from netCDF4 import Dataset, MFDataset
basedir = '/mnt/lustre01/work/ab0995/a270088/DATA/swift.dkrz.de/'
meshpath = basedir+'/COREII/'
mesh = pf.load_mesh(meshpath, usepickle=True)
fl = Dataset(basedir+'/COREII_data/fesom.1951.oce.mean.nc')
fl.variables['temp'].shape
data = fl.variables['temp'][0,:]
lon_start = -15
lat_start = -70
lon_end = -15
lat_end = -20
pf.plot_transect_map(lon_start, lat_start, lon_end, lat_end,
mesh, npoints=30, view = 'w', stock_img=False)
npoints = 80
fig, ax = plt.subplots(1,1, figsize=(15,7))
image = pf.plot_transect(data, mesh,
lon_start,
lat_start,
lon_end,
lat_end,
npoints=npoints,
levels = np.round(np.linspace(-2, 13, 42),2),
cmap=cm.Spectral_r,
maxdepth =6000,
title = 'Southern Ocean Transect',
ncols=1,
figsize=(5,10),
ax = ax
)
cb = fig.colorbar(image, orientation='horizontal', ax=ax, pad=0.13)
cb.set_label('deg C')
npoints = 80
lonlat = pf.transect_get_lonlat(lon_start, lat_start, lon_end, lat_end, npoints=npoints)
labeles = [str(abs(int(x)))+"$^{\circ}$S" for x in lonlat[7::8][:,1]]
labeles
npoints = 80
lonlat = pf.transect_get_lonlat(lon_start, lat_start, lon_end, lat_end, npoints=npoints)
labeles = [str(abs(int(x)))+"$^{\circ}$S" for x in lonlat[7::8][:,1]]
dist = pf.transect_get_distance(lonlat) # gets distances between the starting point and each point on the transect
fig, ax = plt.subplots(1,1, figsize=(15,7))
image = pf.plot_transect(data, mesh,
lon_start,
lat_start,
lon_end,
lat_end,
npoints=npoints,
levels = np.round(np.linspace(-2, 13, 42),2),
cmap=cm.Spectral_r,
maxdepth =6000,
title = 'Southern Ocean Transect',
ncols=1,
figsize=(5,10),
ax = ax
)
cb = fig.colorbar(image, orientation='horizontal', ax=ax, pad=0.13)
cb.set_label('deg C')
ax.xaxis.set_ticks(dist[7::8])
ax.set_xticklabels(labeles, size=20);
data1 = fl.variables['temp'][3,:]
data2 = fl.variables['temp'][6,:]
data3 = fl.variables['temp'][9,:]
data_all = [data, data1, data2, data3]
nrows = 2
ncols = 2
figsize = (15,10)
label = '$^{\circ}$C'
vmin = -2
vmax = 15
cmap = cm.Spectral_r
npoints = 100
cmap.set_bad(color = 'k', alpha = 1.)
lonlat = pf.transect_get_lonlat(lon_start, lat_start, lon_end, lat_end, npoints=npoints)
dist = pf.transect_get_distance(lonlat)
labeles = [str(abs(int(x)))+"$^{\circ}$S" for x in lonlat[9::10][:,1]]
months = ['JAN', 'APR', 'JUL', 'NOV']
fig, ax = plt.subplots(nrows,ncols, figsize=figsize)
ax = ax.flatten()
# for i, sim in enumerate(data):
for i, sim in enumerate(data_all):
image = pf.plot_transect(sim, mesh, lon_start,
lat_start,
lon_end,
lat_end,
npoints=npoints,
levels = np.round(np.linspace(-3, 7, 41),2),
cmap=cmap,
maxdepth =6000,
label = '$^{\circ}$C',
title = months.pop(0),
ncols=3,
ax=ax[i])
cb = fig.colorbar(image, orientation='horizontal', ax=ax[i], pad=0.16)
cb.set_label(label)
ax[i].xaxis.set_ticks(dist[9::10])
# ax.xaxis.set_ticks(list(range(lonlat.shape[0]))[9::10])
ax[i].set_xticklabels(labeles, size=10)
ax[i].set_xlabel(' ')
ax[i].xaxis.grid(True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, as usual, load the mesh
Step2: Load data for one year
Step3: Select one month
Step4: Decide on the start and end points of the transect and plot the map of your future transect. Use %matplotlib notebook if you want to be able to zoom in on the map. NOTE! You have to have cartopy installed to make the plotting work.
Step5: We just use the closest points for selection, so on the map blue dots show the generated transect, while red points are the model points that will be used for "interpolation" (we will just use a nearest-neighbor approach).
Step6: If instead of kilometers you would like to have lons or lats, below is the code that will help you to do so
Step7: Let's use it
Step8: Several transects at once
Step9: Put all of them into a list
Step10: Now you can have several transects in a loop. Make sure you have changed the number of rows (nrows), number of columns (ncols), figsize and other parameters at the top of the script (a small sketch for picking the grid follows this list).
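A small sketch of one way to pick the subplot grid when looping over a variable number of transect panels (the panel count and sizes below are assumptions; only numpy is needed):
import numpy as np
n_panels = 4                             # e.g. len(data_all) in the code above
ncols = 2
nrows = int(np.ceil(n_panels / ncols))   # enough rows to fit every panel
figsize = (7.5 * ncols, 5 * nrows)       # scale the figure with the grid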
|
4,510
|
<ASSISTANT_TASK:>
Python Code:
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
from modsim import units
m = units.meter
rad = units.radian
s = units.second
from modsim import Params
params = Params(Rmin = 0.02 * m,
Rmax = 0.055 * m,
L = 47 * m,
theta_0 = 0 * rad,
y_0 = 0 * m,
omega = 300 * rad / s,
t_end = 130 * s
)
from modsim import State, System
def make_system(params):
init = State(theta = params.theta_0,
y = params.y_0,
r = params.Rmin
)
k = estimate_k(params)
return System(params,
init=init,
k=k,
)
from numpy import pi
def estimate_k(params):
Rmin, Rmax, L = params.Rmin, params.Rmax, params.L
Ravg = (Rmax + Rmin) / 2
Cavg = 2 * pi * Ravg
revs = L / Cavg
rads = 2 * pi * revs
k = (Rmax - Rmin) / rads
return k
system = make_system(params)
system.init
system.k
def slope_func(t, state, system):
theta, y, r = state
k, omega = system.k, system.omega
dydt = r * omega
drdt = k * omega
return omega, dydt, drdt
slope_func(0, system.init, system)
def event_func(t, state, system):
theta, y, r = state
return y - system.L
event_func(0, system.init, system)
from modsim import run_solve_ivp
results, details = run_solve_ivp(system, slope_func,
events=event_func)
details.message
results.tail()
results.index[-1]
final_state = results.iloc[-1]
print(final_state.y, params.L)
print(final_state.r, params.Rmax)
radians = final_state.theta
rotations = radians / 2 / pi
rotations
from modsim import decorate
def plot_theta(results):
results.theta.plot(color='C0', label='theta')
decorate(xlabel='Time (s)',
ylabel='Angle (rad)')
plot_theta(results)
def plot_r(results):
results.r.plot(color='C2', label='r')
decorate(xlabel='Time (s)',
ylabel='Radius (m)')
plot_r(results)
def plot_y(results):
results.y.plot(color='C1', label='y')
decorate(xlabel='Time (s)',
ylabel='Length (m)')
plot_y(results)
k = (params.Rmax**2 - params.Rmin**2) / (2 * params.L)
print(k, system.k)
# Solution
from modsim import gradient
dydt = gradient(results.y);
# Solution
dydt.plot(label='dydt')
decorate(xlabel='Time (s)',
ylabel='Linear velocity (m/s)')
max_linear_velocity = dydt.iloc[-1]
max_linear_velocity
# Solution
def slope_func(t, state, system):
    """Computes the derivatives of the state variables.
    state: State object with theta, y, r
    t: time
    system: System object with r, k
    returns: sequence of derivatives
    """
theta, y, r = state
k, omega = system.k, system.omega
dydt = system.linear_velocity
omega = dydt / r
drdt = k * omega
return omega, dydt, drdt
# Solution
system.linear_velocity = max_linear_velocity
slope_func(0, system.init, system)
# Solution
results, details = run_solve_ivp(system, slope_func,
events=event_func)
details.message
# Solution
t_final = results.index[-1]
t_final
# Solution
plot_theta(results)
# Solution
plot_r(results)
# Solution
plot_y(results)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this chapter we model systems that involve rotating objects.
Step2: Rmin and Rmax are the initial and final values for the radius, r.
Step3: The initial state contains three variables, theta, y, and r.
Step4: Ravg is the average radius, half way between Rmin and Rmax, so
Step5: Now we can use the differential equations from the previous section to
Step6: As usual, the slope function takes a State object, a time, and a
Step7: We'd like to stop the simulation when the length of paper on the roll is L. We can do that with an event function that passes through 0 when y equals L
Step8: Now we can run the simulation like this
Step9: Here are the last few time steps.
Step10: At $\omega = 300$ rad/s, the time it takes to complete one roll is about 4.2 seconds, which is consistent with what we see in the video.
Step11: The final value of y is 47 meters, as expected.
Step12: The final value of radius is Rmax.
Step13: And the total number of rotations is close to 200, which seems plausible.
Step14: As an exercise, we'll see how fast the paper is moving. But first, let's take a closer look at the results.
Step15: theta grows linearly, as we should expect with constant angular velocity.
Step16: r also increases linearly.
Step17: Because this system is so simple, it is almost silly to simulate it; as we'll see in the next section, it is easy enough to solve the
Step18: In this case the estimate turns out to be exact.
Step19: With constant angular velocity, linear velocity is increasing, reaching its maximum at the end.
Step21: Now suppose this peak velocity is the limiting factor; that is, we can't move the paper any faster than that.
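A quick back-of-the-envelope check for Step21: if the paper speed is capped at a constant value, the winding time is simply the paper length divided by that speed. The speed below is an assumed figure for illustration.
L = 47.0        # m of paper on the roll, as in the Params above
v_max = 4.0     # hypothetical peak paper speed in m/s
t_wind = L / v_max
print('constant-speed winding time: %.1f s' % t_wind)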
|
4,511
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
import mne
from mne.datasets import sample
from mne.decoding import (SlidingEstimator, GeneralizingEstimator, Scaler,
cross_val_multiscore, LinearModel, get_coef,
Vectorizer, CSP)
data_path = sample.data_path()
subjects_dir = data_path / 'subjects'
meg_path = data_path / 'MEG' / 'sample'
raw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif'
tmin, tmax = -0.200, 0.500
event_id = {'Auditory/Left': 1, 'Visual/Left': 3} # just use two
raw = mne.io.read_raw_fif(raw_fname)
raw.pick_types(meg='grad', stim=True, eog=True, exclude=())
# The subsequent decoding analyses only capture evoked responses, so we can
# low-pass the MEG data. Usually a value more like 40 Hz would be used,
# but here low-pass at 20 so we can more heavily decimate, and allow
# the example to run faster. The 2 Hz high-pass helps improve CSP.
raw.load_data().filter(2, 20)
events = mne.find_events(raw, 'STI 014')
# Set up bad channels (modify to your needs)
raw.info['bads'] += ['MEG 2443'] # bads + 2 more
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=('grad', 'eog'), baseline=(None, 0.), preload=True,
reject=dict(grad=4000e-13, eog=150e-6), decim=3,
verbose='error')
epochs.pick_types(meg=True, exclude='bads') # remove stim and EOG
del raw
X = epochs.get_data() # MEG signals: n_epochs, n_meg_channels, n_times
y = epochs.events[:, 2] # target: auditory left vs visual left
# Uses all MEG sensors and time points as separate classification
# features, so the resulting filters used are spatio-temporal
clf = make_pipeline(
Scaler(epochs.info),
Vectorizer(),
LogisticRegression(solver='liblinear') # liblinear is faster than lbfgs
)
scores = cross_val_multiscore(clf, X, y, cv=5, n_jobs=None)
# Mean scores across cross-validation splits
score = np.mean(scores, axis=0)
print('Spatio-temporal: %0.1f%%' % (100 * score,))
csp = CSP(n_components=3, norm_trace=False)
clf_csp = make_pipeline(
csp,
LinearModel(LogisticRegression(solver='liblinear'))
)
scores = cross_val_multiscore(clf_csp, X, y, cv=5, n_jobs=None)
print('CSP: %0.1f%%' % (100 * scores.mean(),))
# Fit CSP on full data and plot
csp.fit(X, y)
csp.plot_patterns(epochs.info)
csp.plot_filters(epochs.info, scalings=1e-9)
# We will train the classifier on all left visual vs auditory trials on MEG
clf = make_pipeline(
StandardScaler(),
LogisticRegression(solver='liblinear')
)
time_decod = SlidingEstimator(
clf, n_jobs=None, scoring='roc_auc', verbose=True)
# here we use cv=3 just for speed
scores = cross_val_multiscore(time_decod, X, y, cv=3, n_jobs=None)
# Mean scores across cross-validation splits
scores = np.mean(scores, axis=0)
# Plot
fig, ax = plt.subplots()
ax.plot(epochs.times, scores, label='score')
ax.axhline(.5, color='k', linestyle='--', label='chance')
ax.set_xlabel('Times')
ax.set_ylabel('AUC') # Area Under the Curve
ax.legend()
ax.axvline(.0, color='k', linestyle='-')
ax.set_title('Sensor space decoding')
clf = make_pipeline(
StandardScaler(),
LinearModel(LogisticRegression(solver='liblinear'))
)
time_decod = SlidingEstimator(
clf, n_jobs=None, scoring='roc_auc', verbose=True)
time_decod.fit(X, y)
coef = get_coef(time_decod, 'patterns_', inverse_transform=True)
evoked_time_gen = mne.EvokedArray(coef, epochs.info, tmin=epochs.times[0])
joint_kwargs = dict(ts_args=dict(time_unit='s'),
topomap_args=dict(time_unit='s'))
evoked_time_gen.plot_joint(times=np.arange(0., .500, .100), title='patterns',
**joint_kwargs)
# define the Temporal generalization object
time_gen = GeneralizingEstimator(clf, n_jobs=None, scoring='roc_auc',
verbose=True)
# again, cv=3 just for speed
scores = cross_val_multiscore(time_gen, X, y, cv=3, n_jobs=None)
# Mean scores across cross-validation splits
scores = np.mean(scores, axis=0)
# Plot the diagonal (it's exactly the same as the time-by-time decoding above)
fig, ax = plt.subplots()
ax.plot(epochs.times, np.diag(scores), label='score')
ax.axhline(.5, color='k', linestyle='--', label='chance')
ax.set_xlabel('Times')
ax.set_ylabel('AUC')
ax.legend()
ax.axvline(.0, color='k', linestyle='-')
ax.set_title('Decoding MEG sensors over time')
fig, ax = plt.subplots(1, 1)
im = ax.imshow(scores, interpolation='lanczos', origin='lower', cmap='RdBu_r',
extent=epochs.times[[0, -1, 0, -1]], vmin=0., vmax=1.)
ax.set_xlabel('Testing Time (s)')
ax.set_ylabel('Training Time (s)')
ax.set_title('Temporal generalization')
ax.axvline(0, color='k')
ax.axhline(0, color='k')
cbar = plt.colorbar(im, ax=ax)
cbar.set_label('AUC')
cov = mne.compute_covariance(epochs, tmax=0.)
del epochs
fwd = mne.read_forward_solution(
meg_path / 'sample_audvis-meg-eeg-oct-6-fwd.fif')
inv = mne.minimum_norm.make_inverse_operator(
evoked_time_gen.info, fwd, cov, loose=0.)
stc = mne.minimum_norm.apply_inverse(evoked_time_gen, inv, 1. / 9., 'dSPM')
del fwd, inv
brain = stc.plot(hemi='split', views=('lat', 'med'), initial_time=0.1,
subjects_dir=subjects_dir)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Transformation classes
Step2: PSDEstimator (a minimal usage sketch appears after this list)
Step3: Source power comodulation (SPoC)
Step4: Decoding over time
Step5: You can retrieve the spatial filters and spatial patterns if you explicitly
Step6: Temporal generalization
Step7: Plot the full (generalization) matrix
Step8: Projecting sensor-space patterns to source space
Step9: And this can be visualized using
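Steps 2 and 3 name PSDEstimator and SPoC, which do not appear in the code above. A minimal, hedged sketch of how PSDEstimator might slot into the same kind of pipeline (argument values are assumptions; X, y and epochs are the objects built earlier in the notebook):
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from mne.decoding import PSDEstimator, Vectorizer, cross_val_multiscore
psd_clf = make_pipeline(
    PSDEstimator(sfreq=epochs.info['sfreq'], fmin=2, fmax=20),  # per-channel PSD features
    Vectorizer(),                                               # flatten to (n_epochs, n_features)
    LogisticRegression(solver='liblinear')
)
psd_scores = cross_val_multiscore(psd_clf, X, y, cv=5, n_jobs=None)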
|
4,512
|
<ASSISTANT_TASK:>
Python Code:
import string
import numpy as np
import pandas as pd
upcase = [x for x in string.ascii_uppercase]
lcase = [x for x in string.ascii_lowercase]
print(upcase[:5], lcase[:5])
pd.DataFrame([upcase, lcase])
pd.DataFrame([upcase, lcase]).T
letters = pd.DataFrame({'lowercase':lcase, 'uppercase':upcase})
letters.head()
pd.DataFrame({'lowercase':lcase + [0], 'uppercase':upcase})
letters.head()
letters.columns = ['LowerCase','UpperCase']
np.random.seed(25)
letters['Number'] = np.random.randint(1, 51, 26)  # 1-50 inclusive (random_integers is deprecated)
letters
letters.dtypes
letters.index = lcase
letters
letters.sort_values('Number')
letters.sort_index()
letters[['LowerCase','UpperCase']].head()
letters.iloc[5:10]
letters["f":"k"]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can create DataFrames by passing in np arrays, lists of series, or dictionaries.
Step2: We’ll be covering a lot of different aspects here but as always we’re going to start with the simple stuff. A simplification of a data frame is like an excel table or sql table. You’ve got columns and rows.
Step3: This should be familiar because it’s the same way that we transpose ndarrays in numpy.
Step4: Now you’ll see that if these lengths are not the same, we’ll get a ValueError so it’s worth checking to make sure your data is clean before importing or using it to create a DataFrame
Step5: We can rename the columns easily and even add a new one through a relatively simple dictionary like assignment. I'll go over some more complex methods later on.
Step6: Now just like Series, DataFrames have data types, we can get those by accessing the dtypes of the DataFrame which will give us details on the data types we've got.
Step7: Of course we can sort maybe by a specific column or by the index(the default).
Step8: We've seen how to query for one column and multiple columns isn't too much more difficult.
Step9: We can also just query the index as well. We went over a lot of that in the Series Section and a lot of the same applies here.
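A tiny follow-on sketch for Step9, using label-based selection with .loc on the same letters frame (assumes the lowercase letters are still the index, as set above):
letters.loc["f":"k", "Number"]        # label slice on the index plus one column
letters.loc[letters["Number"] > 25]   # boolean selection on a column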
|
4,513
|
<ASSISTANT_TASK:>
Python Code:
import collections
import subprocess
import itertools
import os
import time
import madoka
import numpy as np
import redis
ALPHANUM = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
NUM_ALPHANUM_COMBINATION = 238328
zipf_array = np.random.zipf(1.5, NUM_ALPHANUM_COMBINATION)
def python_memory_usage():
return int(subprocess.getoutput('ps up %s' % os.getpid()).split()[15])
def redis_memory_usage():
lines = subprocess.getoutput('ps').splitlines()
for line in lines:
if 'redis-server' in line:
pid = line.split()[0]
break
return int(subprocess.getoutput('ps up %s' % pid).split()[15])
def count(counter):
for (i, chars) in enumerate(itertools.product(ALPHANUM, repeat=3)):
chars = ''.join(chars)
counter[chars] = int(zipf_array[i])
return counter
def benchmark(counter, start_mem_usage):
counter = count(counter)
end_mem_usage = python_memory_usage()
diff = end_mem_usage - start_mem_usage
print('memory consumption is {:,d} KB'.format(diff))
return counter
def redis_benchmark():
db = redis.Redis()
db.flushall()
start_mem_usage = redis_memory_usage()
with db.pipeline() as pipe:
for (i, chars) in enumerate(itertools.product(ALPHANUM, repeat=3)):
chars = ''.join(chars)
pipe.set(chars, int(zipf_array[i]))
pipe.execute()
end_mem_usage = redis_memory_usage()
diff = end_mem_usage - start_mem_usage
print('memory consumption is {:,d} KB'.format(diff))
print('collections.Counter')
start_mem_usage = python_memory_usage()
start_time = time.process_time()
counter = collections.Counter()
benchmark(counter, start_mem_usage)
end_time = time.process_time()
print('Processsing Time is %5f sec.' % (end_time - start_time))
del counter
print('*' * 30)
print('madoka.Sketch')
start_mem_usage = python_memory_usage()
start_time = time.process_time()
sketch = madoka.Sketch()
benchmark(sketch, start_mem_usage)
end_time = time.process_time()
print('Processsing Time is %5f sec.' % (end_time - start_time))
del sketch
print('*' * 30)
print('Redis')
start_time = time.process_time()
redis_benchmark()
end_time = time.process_time()
print('Processsing Time is %5f sec.' % (end_time - start_time))
sketch = madoka.Sketch()
diffs = []
for (i, chars) in enumerate(itertools.product(ALPHANUM, repeat=3)):
chars = ''.join(chars)
sketch[chars] = int(zipf_array[i])
diff = abs(sketch[chars] - int(zipf_array[i]))
if diff > 0:
diffs.append(diff / int(zipf_array[i]) * 100)
else:
diffs.append(0)
print(np.average(diffs))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Counting error rate
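For reference, the error rate computed in the last cell above is, per key, |approximate - true| / true * 100 (in percent), averaged over all keys; a hypothetical worked example:
true_count, approx_count = 40, 41
print(abs(approx_count - true_count) / true_count * 100)  # 2.5 (percent)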
|
4,514
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from medpy.io import load
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
i, h = load("flair.nii.gz")
i[np.random.randint(0, i.shape[0], int(0.05 * i.size)), np.random.randint(0, i.shape[1], int(0.05 * i.size))] = i.min()
i[np.random.randint(0, i.shape[0], int(0.05 * i.size)), np.random.randint(0, i.shape[1], int(0.05 * i.size))] = i.max()
plt.imshow(i, cmap = cm.Greys_r);
brainmask = i > 0
plt.imshow(brainmask, cmap = cm.Greys_r);
from medpy.filter import largest_connected_component
brainmask = largest_connected_component(brainmask)
plt.imshow(brainmask, cmap = cm.Greys_r);
from scipy.ndimage import binary_fill_holes
brainmask = binary_fill_holes(brainmask)
plt.imshow(brainmask, cmap = cm.Greys_r);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using our previous approach of simply thresholding to obtain the brain mask will fail now.
Step2: What we instead obtain is a rough estimation of the brain mask with noise speckles. First, let's get rid of the small outliers in the background using MedPy's largest_connected_component filter.
Step3: That already looks better. Note that we could have alternatively used the size_threshold filter, if we had to keep more than a single binary object. Now we can close the inner holes with the help of scipy.
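As a hedged aside to Step3: a morphological closing can achieve a similar hole-filling effect, at the cost of slightly smoothing the mask boundary (the structuring-element size below is an assumption):
from scipy.ndimage import binary_closing
import numpy as np
brainmask_closed = binary_closing(brainmask, structure=np.ones((5, 5)))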
|
4,515
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import mne
pd.DataFrame(np.random.normal(size=(16, 100))).to_csv("fnirs.csv", index=False)  # index=False keeps the file to 16 channels x 100 samples
data = pd.read_csv('fnirs.csv')
ch_names = ['S1_D1 hbo', 'S1_D1 hbr', 'S2_D1 hbo', 'S2_D1 hbr',
'S3_D1 hbo', 'S3_D1 hbr', 'S4_D1 hbo', 'S4_D1 hbr',
'S5_D2 hbo', 'S5_D2 hbr', 'S6_D2 hbo', 'S6_D2 hbr',
'S7_D2 hbo', 'S7_D2 hbr', 'S8_D2 hbo', 'S8_D2 hbr']
ch_types = ['hbo', 'hbr', 'hbo', 'hbr',
'hbo', 'hbr', 'hbo', 'hbr',
'hbo', 'hbr', 'hbo', 'hbr',
'hbo', 'hbr', 'hbo', 'hbr']
sfreq = 10. # in Hz
info = mne.create_info(ch_names=ch_names, ch_types=ch_types, sfreq=sfreq)
raw = mne.io.RawArray(data, info, verbose=True)
montage = mne.channels.make_standard_montage('artinis-octamon')
raw.set_montage(montage)
# View the position of optodes in 2D to confirm the positions are correct.
raw.plot_sensors()
subjects_dir = mne.datasets.sample.data_path() + '/subjects'
mne.datasets.fetch_fsaverage(subjects_dir=subjects_dir)
trans = mne.channels.compute_native_head_t(montage)
fig = mne.viz.create_3d_figure(size=(800, 600), bgcolor='white')
fig = mne.viz.plot_alignment(
raw.info, trans=trans, subject='fsaverage', subjects_dir=subjects_dir,
surfaces=['brain', 'head'], coord_frame='mri', dig=True, show_axes=True,
fnirs=['channels', 'pairs', 'sources', 'detectors'], fig=fig)
mne.viz.set_3d_view(figure=fig, azimuth=90, elevation=90, distance=0.5,
focalpoint=(0., -0.01, 0.02))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, we generate an example CSV file which will then be loaded into MNE.
Step2: Warning: the channels must be ordered in haemoglobin pairs, such that for
Step3: Then, the metadata must be specified manually, as the CSV file does not
Step4: Finally, the data can be converted into an MNE data structure.
Step5: Applying standard sensor locations to imported data
Step6: To validate that the positions were loaded correctly, it is also possible to view
|
4,516
|
<ASSISTANT_TASK:>
Python Code:
# Import the Earth Engine Python Package into Python environment.
import ee
import ee.mapclient
# Initialize the Earth Engine object, using the authentication credentials.
ee.Initialize()
image = ee.Image('srtm90_v4')
from IPython.display import Image
Image(url=image.getThumbUrl({'min':0, 'max': 3000}))
# Print the information for an image asset. the 'srtm90_v4 file is a digital elevation model.
# that is housed in Google's cloud and has an elevation value for every pixel across the whole earth
# at a resolution of 30 meters. That is the map you see below in the static notebook.
print(image.getInfo())
#celebrate the metadata!!
Irene= ee.Image("users/kuhniculous/floodwithnoletters")
from IPython.display import display,Image
test=ee.Image(Irene)
display(Image(url=test.select(['b1']).getThumbUrl({'gamma':2})))
Lparams = {
'min':0.0134,
'max':0.0338,
'palette':'000000,0000ff,00ffff,00ff00,ffff00,ffa500,ff0000',
};
display(Image(url=test.select(["b1"]).getThumbUrl(Lparams)))
Irene= ee.Image("users/kuhniculous/popImage")
from IPython.display import display,Image
test=ee.Image(Irene)
display(Image(url=test.select(['b1']).getThumbUrl({'gamma':2})))
Lparams = {
'min':7,
'max':7.5,
'palette':'000000,ff0000',
};
display(Image(url=test.select(["b1"]).getThumbUrl(Lparams)))
# Select rows from a fusion table.
import ee
import ee.mapclient
ee.Initialize()
ee.mapclient.centerMap(-93, 40, 4)
# Select the 'Sonoran desert' feature from the TNC Ecoregions fusion table.
fc = (ee.FeatureCollection('ft:1Ec8IWsP8asxN-ywSqgXWMuBaxI6pPaeh6hC64lA')
.filter(ee.Filter().eq('ECO_NAME', 'Sonoran desert')))
# Paint it into a blank image.
image1 = ee.Image(0).mask(0)
ee.mapclient.addToMap(image1.paint(fc, 0, 5))
%matplotlib inline
from __future__ import print_function # For py 2.7 compat
import datetime
from IPython.html import widgets
from IPython.display import display
from IPython.utils import traitlets
from IPython.core.display import Javascript
%run 'define_google_maps_interactive_widget.ipynb'
Irene= ee.Image("users/kuhniculous/popImage")
map = GoogleMapsWidget(lat=59.5, lng=10.9, zoom=13) # lat, lng and zoom are optional
display(map)
map.addLayer(Irene, {'color': 'FFFFCC'}, name='Irene Map')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Visualize Geographic Data
Step3: Try it with mapclient
Step4: Testing Out Jill's Method for Displaying Maps
|
4,517
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function # For py 2.7 compat
from IPython.html import widgets # Widget definitions
from IPython.display import display # Used to display widgets in the notebook
from IPython.utils.traitlets import Unicode # Used to declare attributes of our widget
class DateWidget(widgets.DOMWidget):
_view_name = Unicode('DatePickerView', sync=True)
%%javascript
require(["widgets/js/widget"], function(WidgetManager){
});
%%javascript
require(["widgets/js/widget", "widgets/js/manager"], function(widget, manager){
// Define the DatePickerView
var DatePickerView = widget.DOMWidgetView.extend({
render: function(){ this.$el.text('Hello World!'); },
});
// Register the DatePickerView with the widget manager.
manager.WidgetManager.register_widget_view('DatePickerView', DatePickerView);
});
DateWidget()
class DateWidget(widgets.DOMWidget):
_view_name = Unicode('DatePickerView', sync=True)
value = Unicode(sync=True)
%%javascript
require(["widgets/js/widget", "widgets/js/manager"], function(widget, manager){
// Define the DatePickerView
var DatePickerView = widget.DOMWidgetView.extend({
render: function(){
// Create the date picker control.
this.$date = $('<input />')
.attr('type', 'date')
.appendTo(this.$el);
},
});
// Register the DatePickerView with the widget manager.
manager.WidgetManager.register_widget_view('DatePickerView', DatePickerView);
});
%%javascript
require(["widgets/js/widget", "widgets/js/manager"], function(widget, manager){
// Define the DatePickerView
var DatePickerView = widget.DOMWidgetView.extend({
render: function(){
// Create the date picker control.
this.$date = $('<input />')
.attr('type', 'date')
.appendTo(this.$el);
},
update: function() {
// Set the value of the date control and then call base.
this.$date.val(this.model.get('value')); // ISO format "YYYY-MM-DDTHH:mm:ss.sssZ" is required
return DatePickerView.__super__.update.apply(this);
},
});
// Register the DatePickerView with the widget manager.
manager.WidgetManager.register_widget_view('DatePickerView', DatePickerView);
});
%%javascript
require(["widgets/js/widget", "widgets/js/manager"], function(widget, manager){
// Define the DatePickerView
var DatePickerView = widget.DOMWidgetView.extend({
render: function(){
// Create the date picker control.
this.$date = $('<input />')
.attr('type', 'date')
.appendTo(this.$el);
},
update: function() {
// Set the value of the date control and then call base.
this.$date.val(this.model.get('value')); // ISO format "YYYY-MM-DDTHH:mm:ss.sssZ" is required
return DatePickerView.__super__.update.apply(this);
},
// Tell Backbone to listen to the change event of input controls (which the HTML date picker is)
events: {"change": "handle_date_change"},
// Callback for when the date is changed.
handle_date_change: function(event) {
this.model.set('value', this.$date.val());
this.touch();
},
});
// Register the DatePickerView with the widget manager.
manager.WidgetManager.register_widget_view('DatePickerView', DatePickerView);
});
my_widget = DateWidget()
display(my_widget)
my_widget
my_widget.value
my_widget.value = "1998-12-01" # December 1st, 1998
# Import the dateutil library to parse date strings.
from dateutil import parser
class DateWidget(widgets.DOMWidget):
_view_name = Unicode('DatePickerView', sync=True)
value = Unicode(sync=True)
description = Unicode(sync=True)
class DateWidget(widgets.DOMWidget):
_view_name = Unicode('DatePickerView', sync=True)
value = Unicode(sync=True)
description = Unicode(sync=True)
# This function automatically gets called by the traitlet machinery when
# value is modified because of this function's name.
def _value_changed(self, name, old_value, new_value):
pass
class DateWidget(widgets.DOMWidget):
_view_name = Unicode('DatePickerView', sync=True)
value = Unicode(sync=True)
description = Unicode(sync=True)
# This function automatically gets called by the traitlet machinery when
# value is modified because of this function's name.
def _value_changed(self, name, old_value, new_value):
# Parse the date time value.
try:
parsed_date = parser.parse(new_value)
parsed_date_string = parsed_date.strftime("%Y-%m-%d")
except:
parsed_date_string = ''
# Set the parsed date string if the current date string is different.
if self.value != parsed_date_string:
self.value = parsed_date_string
class DateWidget(widgets.DOMWidget):
_view_name = Unicode('DatePickerView', sync=True)
value = Unicode(sync=True)
description = Unicode(sync=True)
def __init__(self, **kwargs):
super(DateWidget, self).__init__(**kwargs)
self.validate = widgets.CallbackDispatcher()
# This function automatically gets called by the traitlet machinery when
# value is modified because of this function's name.
def _value_changed(self, name, old_value, new_value):
# Parse the date time value.
try:
parsed_date = parser.parse(new_value)
parsed_date_string = parsed_date.strftime("%Y-%m-%d")
except:
parsed_date_string = ''
# Set the parsed date string if the current date string is different.
if old_value != new_value:
valid = self.validate(parsed_date)
if valid in (None, True):
self.value = parsed_date_string
else:
self.value = old_value
self.send_state() # The traitlet event won't fire since the value isn't changing.
# We need to force the back-end to send the front-end the state
# to make sure that the date control date doesn't change.
%%javascript
require(["widgets/js/widget", "widgets/js/manager"], function(widget, manager){
// Define the DatePickerView
var DatePickerView = widget.DOMWidgetView.extend({
render: function(){
this.$el.addClass('widget-hbox-single'); /* Apply this class to the widget container to make
it fit with the other built in widgets.*/
// Create a label.
this.$label = $('<div />')
.addClass('widget-hlabel')
.appendTo(this.$el)
.hide(); // Hide the label by default.
// Create the date picker control.
this.$date = $('<input />')
.attr('type', 'date')
.appendTo(this.$el);
},
update: function() {
// Set the value of the date control and then call base.
this.$date.val(this.model.get('value')); // ISO format "YYYY-MM-DDTHH:mm:ss.sssZ" is required
            // Hide or show the label depending on the existence of a description.
var description = this.model.get('description');
if (description == undefined || description == '') {
this.$label.hide();
} else {
this.$label.show();
this.$label.text(description);
}
return DatePickerView.__super__.update.apply(this);
},
// Tell Backbone to listen to the change event of input controls (which the HTML date picker is)
events: {"change": "handle_date_change"},
// Callback for when the date is changed.
handle_date_change: function(event) {
this.model.set('value', this.$date.val());
this.touch();
},
});
// Register the DatePickerView with the widget manager.
manager.WidgetManager.register_widget_view('DatePickerView', DatePickerView);
});
# Add some additional widgets for aesthetic purpose
display(widgets.Text(description="First:"))
display(widgets.Text(description="Last:"))
my_widget = DateWidget()
display(my_widget)
my_widget.description="DOB:"
my_widget = DateWidget()
display(my_widget)
def require_2014(date):
return not date is None and date.year == 2014
my_widget.validate.register_callback(require_2014)
# Try setting a valid date
my_widget.value = "December 2, 2014"
# Try setting an invalid date
my_widget.value = "June 12, 1999"
my_widget.value
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Abstract
Step2: Our widget inherits from widgets.DOMWidget since it is intended that it will be displayed in the notebook directly.
Step3: Now we need to define a view that can be used to represent the model.
Step4: Test
Step5: Section 2 - Something useful
Step6: JavaScript
Step7: In order to get the HTML date picker to update itself with the value set in the back-end, we need to implement an update() method.
Step8: To get the changed value from the frontend to publish itself to the backend,
Step9: Test
Step10: Display the widget again to make sure that both views remain in sync.
Step11: Read the date from Python
Step12: Set the date from Python
Step13: Section 3 - Extra credit
Step14: In the last section we created a fully working date picker widget.
Step15: The traitlet machinery searches the class that the trait is defined in for methods with "_changed" suffixed onto their names. Any method with the format "_X_changed" will be called when "X" is modified.
Step16: Now the function parses the date string,
Step17: Finally, a CallbackDispatcher is added so the user can perform custom validation.
Step18: JavaScript
Step19: Test
Step20: Now we will try to create a widget that only accepts dates in the year 2014. We render the widget without a description to verify that it can still render without a label.
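As a further illustration of the validation-callback contract from Steps 17 and 20 (purely hypothetical, not part of the original notebook): the callback receives the parsed datetime, or None, and returns True or False.
def require_weekday(date):
    return date is not None and date.weekday() < 5  # Monday-Friday only
weekday_widget = DateWidget(description="Meeting:")
weekday_widget.validate.register_callback(require_weekday)
display(weekday_widget)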
|
4,518
|
<ASSISTANT_TASK:>
Python Code:
A = [9,3,9,3,9,7,9]
print len(A)
print sorted(A)
single_num = sorted(A)[0]
count = 0
if len(A) == 0:
print A
for num in sorted(A):
print "num: ", num
if num % single_num:
print "single num: ", single_num
print "count: ", count
if count > 1:
single_num = num
count = 0
else:
print single_num
break
count += 1
#print single_num
def solution_1(A):
if len(A) == 0:
return A
elif len(A) == 1:
return A[0]
elif type(A) is int:
return A
elif len(A) % 2:
single_num = sorted(A)[0]
count = 0
for i,num in enumerate(sorted(A)):
if num != single_num:
print "num: ", num
print "single num: ", single_num
print "Count: ", count
if count > 1:
single_num = num
count = 1
elif count == 1:
return single_num
else:
count += 1
if i >= len(A)-1:
return single_num
print "end single num: ", single_num
print "end count: ", count
print "end num: ", num
print sorted(A), "\n"
print solution_1(A)
B = [2, 2, 3, 3, 4]
print sorted(B), "\n"
print solution_1(B)
def solution_2(A):
if len(A) == 0:
return A
elif len(A) == 1:
return A[0]
elif type(A) is int:
return A
elif len(A) % 2:
single_num = sorted(A)[0]
count = 0
for i,num in enumerate(sorted(A)):
if num != single_num:
print "num: ", num
print "Count: ", count
print "single num: ", single_num
if count > 1:
single_num = num
count = 1
elif count == 1:
return str(single_num) +" " + str(sorted(A[i-5:i+5]))
else:
count += 1
if i >= len(A)-1:
return single_num
print sorted(B)
print solution_2(B)
print sorted(A)
print solution_2(A)
def solution_3(A):
if len(A) == 0:
return A
elif len(A) == 1:
return A[0]
elif type(A) is int:
return A
elif len(A) % 2:
single_num = sorted(A)[0]
count = 1
for i,num in enumerate(sorted(A)):
if num != single_num:
if count > 1:
single_num = num
count = 1
elif count == 1:
return single_num
else:
count += 1
if i >= len(A)-1:
return single_num
def solution_4(A):
if len(A) == 0:
return A
elif len(A) == 1:
return A[0]
elif type(A) is int:
return A
elif len(A) % 2:
single_num = sorted(A)[0]
count = 1
for i,num in enumerate(sorted(A)):
if num != single_num:
if count > 1:
single_num = num
count = 1
elif count == 1:
return single_num
elif single_num == num:
count += 1
if i >= len(A)-1:
return single_num
sorted(A)
print A
print solution_4(A)
def solution_5(A):
if len(A) == 0:
return A
elif len(A) == 1:
return A[0]
elif type(A) is int:
return A
elif len(A) % 2:
single_num = sorted(A)[0]
count = 0
for i,num in enumerate(sorted(A)):
if num != single_num:
if count > 1:
if count % 2:
print "count", count
print "num", num
print "single num", single_num
return single_num
single_num = num
count = 1
elif count == 1:
print "count", count
print "num", num
print "single num", single_num
return single_num
elif single_num == num:
count += 1
if i >= len(A)-1:
return single_num
print solution_5(A)
1 % 2
def solution_5_1(A):
if len(A) == 0:
return A
elif len(A) == 1:
return A[0]
elif type(A) is int:
return A
elif len(A) % 2:
single_num = sorted(A)[0]
count = 0
for i,num in enumerate(sorted(A)):
if num != single_num:
if count % 2:
print "count", count
print "num", num
print "single num", single_num
return single_num
single_num = num
count = 1
elif single_num == num:
count += 1
if i >= len(A)-1:
return single_num
print solution_5_1(A)
def solution_5_2(A):
if len(A) == 0:
return A
elif len(A) == 1:
return A[0]
elif type(A) is int:
return A
elif len(A) % 2:
single_num = sorted(A)[0]
count = 0
for i,num in enumerate(sorted(A)):
if num != single_num:
if count % 2:
return single_num
else:
single_num = num
count = 1
elif single_num == num:
count += 1
                if i >= len(A)-1:
                    if count % 2:
                        return single_num
    # just in case nothing was returned above
    return None
print solution_5_2(A)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: designing algorithm
Step2: version 2-
Step3: V3
Step4: V4
Step5: V 5 - Somewhere along the line I forgot about the key condition "all but one value occurs an even number of times", and instead solved it for "all but one int does not have any pair, i.e. is only a single element". Fixing the solutions now.
Step6: V5 .1
Step7: V 5_2
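For comparison, a common alternative not among the versions above: since every value except one occurs an even number of times, XOR-ing all elements cancels the pairs and leaves the odd one out (written in the same Python 2 style as the rest of this notebook).
def solution_xor(A):
    result = 0
    for x in A:
        result ^= x
    return result
print solution_xor([9, 3, 9, 3, 9, 7, 9])  # 7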
|
4,519
|
<ASSISTANT_TASK:>
Python Code:
%%bash
gcloud config list
!bq ls
!bq ls --project_id qwiklabs-resources
!bq ls --project_id qwiklabs-resources tpcds_2t_baseline
%%bigquery
SELECT
dataset_id,
table_id,
-- Convert bytes to GB.
ROUND(size_bytes/pow(10,9),2) as size_gb,
-- Convert UNIX EPOCH to a timestamp.
TIMESTAMP_MILLIS(creation_time) AS creation_time,
TIMESTAMP_MILLIS(last_modified_time) as last_modified_time,
row_count,
CASE
WHEN type = 1 THEN 'table'
WHEN type = 2 THEN 'view'
ELSE NULL
END AS type
FROM
`qwiklabs-resources.tpcds_2t_baseline.__TABLES__`
ORDER BY size_gb DESC
%%bigquery
SELECT * FROM
`qwiklabs-resources.tpcds_2t_baseline.INFORMATION_SCHEMA.COLUMNS`
%%bigquery
SELECT * FROM
`qwiklabs-resources.tpcds_2t_baseline.INFORMATION_SCHEMA.COLUMNS`
WHERE
is_partitioning_column = 'YES' OR clustering_ordinal_position IS NOT NULL
%%bigquery
SELECT
COUNT(column_name) AS column_count,
table_name
FROM
`qwiklabs-resources.tpcds_2t_baseline.INFORMATION_SCHEMA.COLUMNS`
GROUP BY table_name
ORDER BY column_count DESC, table_name
!bq show qwiklabs-resources:tpcds_2t_baseline.catalog_sales
!bq head -n 15 --selected_fields "cs_order_number,cs_quantity,cs_ext_sales_price,cs_ext_ship_cost" qwiklabs-resources:tpcds_2t_baseline.catalog_sales
%%bigquery --verbose
--Query should fail
SELECT
FROM
`qwiklabs-resources.tpcds_2t_baseline.catalog_sales`
LIMIT
10
%%bigquery --verbose
--Query should succeed
SELECT
cs_item_sk,
COUNT(cs_order_number) AS total_orders,
SUM(cs_quantity) AS total_quantity,
SUM(cs_ext_sales_price) AS total_revenue,
SUM(cs_net_profit) AS total_profit
FROM
`qwiklabs-resources.tpcds_2t_baseline.catalog_sales`
GROUP BY
cs_item_sk
ORDER BY
total_orders DESC
LIMIT
10
%%bigquery
SELECT
project_id,
job_id,
query,
cache_hit,
reservation_id,
EXTRACT(DATE FROM creation_time) AS creation_date,
creation_time,
end_time,
TIMESTAMP_DIFF(end_time, start_time, SECOND) AS job_duration_seconds,
job_type,
user_email,
state,
error_result,
total_bytes_processed,
total_slot_ms / 1000 / 60 AS slot_minutes,
-- Average slot utilization per job is calculated by dividing
-- total_slot_ms by the millisecond duration of the job
total_slot_ms / (TIMESTAMP_DIFF(end_time, start_time, MILLISECOND)) AS avg_slots
FROM
`region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
ORDER BY
creation_time DESC
LIMIT 15;
!bq ls -j -a -n 15
!head --lines=50 'sql/example_baseline_queries.sql'
%%bigquery --verbose
# start query 1 in stream 0 using template query96.tpl
select count(*)
from `qwiklabs-resources.tpcds_2t_baseline.store_sales` as store_sales
,`qwiklabs-resources.tpcds_2t_baseline.household_demographics` as household_demographics
,`qwiklabs-resources.tpcds_2t_baseline.time_dim` as time_dim,
`qwiklabs-resources.tpcds_2t_baseline.store` as store
where ss_sold_time_sk = time_dim.t_time_sk
and ss_hdemo_sk = household_demographics.hd_demo_sk
and ss_store_sk = s_store_sk
and time_dim.t_hour = 8
and time_dim.t_minute >= 30
and household_demographics.hd_dep_count = 5
and store.s_store_name = 'ese'
order by count(*)
limit 100;
!bq ls -j -a -n 5
!bq show --format=prettyjson -j fae46669-5e96-4744-9d2c-2b1b95fa21e7
%%bash
bq query \
--dry_run \
--nouse_cache \
--use_legacy_sql=false \
"
select count(*)
from \`qwiklabs-resources.tpcds_2t_baseline.store_sales\` as store_sales
,\`qwiklabs-resources.tpcds_2t_baseline.household_demographics\` as household_demographics
,\`qwiklabs-resources.tpcds_2t_baseline.time_dim\` as time_dim, \`qwiklabs-resources.tpcds_2t_baseline.store\` as store
where ss_sold_time_sk = time_dim.t_time_sk
and ss_hdemo_sk = household_demographics.hd_demo_sk
and ss_store_sk = s_store_sk
and time_dim.t_hour = 8
and time_dim.t_minute >= 30
and household_demographics.hd_dep_count = 5
and store.s_store_name = 'ese'
order by count(*)
limit 100;"
# Convert bytes to GB
132086388641 / 1e+9
%%bash
bq query \
--nouse_cache \
--use_legacy_sql=false \
"
select count(*)
from \`qwiklabs-resources.tpcds_2t_baseline.store_sales\` as store_sales
,\`qwiklabs-resources.tpcds_2t_baseline.household_demographics\` as household_demographics
,\`qwiklabs-resources.tpcds_2t_baseline.time_dim\` as time_dim, \`qwiklabs-resources.tpcds_2t_baseline.store\` as store
where ss_sold_time_sk = time_dim.t_time_sk
and ss_hdemo_sk = household_demographics.hd_demo_sk
and ss_store_sk = s_store_sk
and time_dim.t_hour = 8
and time_dim.t_minute >= 30
and household_demographics.hd_dep_count = 5
and store.s_store_name = 'ese'
order by count(*)
limit 100;"
%%bash
export PROJECT_ID=$(gcloud config list --format 'value(core.project)')
export BENCHMARK_DATASET_NAME=tpcds_2t_baseline # Name of the dataset you want to create
## Create a BigQuery dataset for tpcds_2t_flat_part_clust if it doesn't exist
datasetexists=$(bq ls -d | grep -w $BENCHMARK_DATASET_NAME)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset $BENCHMARK_DATASET_NAME already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: $BENCHMARK_DATASET_NAME"
bq --location=US mk --dataset \
--description 'Benchmark Dataset' \
    $PROJECT_ID:$BENCHMARK_DATASET_NAME
fi
# Inspect your project and datasets
!bq ls
!bq ls tpcds_2t_baseline
%%bash
# Should take about 30 seconds, starts a bunch of asynchronous copy jobs
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.call_center tpcds_2t_baseline.call_center
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.catalog_page tpcds_2t_baseline.catalog_page
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.catalog_returns tpcds_2t_baseline.catalog_returns
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.catalog_sales tpcds_2t_baseline.catalog_sales
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.customer tpcds_2t_baseline.customer
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.customer_address tpcds_2t_baseline.customer_address
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.customer_demographics tpcds_2t_baseline.customer_demographics
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.date_dim tpcds_2t_baseline.date_dim
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.dbgen_version tpcds_2t_baseline.dbgen_version
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.household_demographics tpcds_2t_baseline.household_demographics
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.income_band tpcds_2t_baseline.income_band
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.inventory tpcds_2t_baseline.inventory
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.item tpcds_2t_baseline.item
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.perf tpcds_2t_baseline.perf
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.promotion tpcds_2t_baseline.promotion
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.reason tpcds_2t_baseline.reason
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.ship_mode tpcds_2t_baseline.ship_mode
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.store tpcds_2t_baseline.store
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.store_returns tpcds_2t_baseline.store_returns
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.store_sales tpcds_2t_baseline.store_sales
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.time_dim tpcds_2t_baseline.time_dim
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.warehouse tpcds_2t_baseline.warehouse
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.web_page tpcds_2t_baseline.web_page
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.web_returns tpcds_2t_baseline.web_returns
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.web_sales tpcds_2t_baseline.web_sales
bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.web_site tpcds_2t_baseline.web_site
!bq ls tpcds_2t_baseline
%%bigquery
SELECT COUNT(*) AS store_transaction_count
FROM tpcds_2t_baseline.store_sales
%%bash
# runs the SQL queries from the TPCDS benchmark
# Pull the current Google Cloud Platform project name
BQ_DATASET="tpcds_2t_baseline" # let's start by benchmarking our baseline dataset
QUERY_FILE_PATH="./sql/example_baseline_queries.sql" # the full test is on 99_baseline_queries but that will take 80+ mins to run
IFS=";"
# create perf table to keep track of run times for all 99 queries
printf "\033[32;1m Housekeeping tasks... \033[0m\n\n";
printf "Creating a reporting table perf to track how fast each query runs...";
perf_table_ddl="CREATE TABLE IF NOT EXISTS $BQ_DATASET.perf(performance_test_num int64, query_num int64, elapsed_time_sec int64, ran_on int64)"
bq rm -f $BQ_DATASET.perf
bq query --nouse_legacy_sql $perf_table_ddl
start=$(date +%s)
index=0
for select_stmt in $(<$QUERY_FILE_PATH)
do
# run the test until you hit a line with the string 'END OF BENCHMARK' in the file
if [[ "$select_stmt" == *'END OF BENCHMARK'* ]]; then
break
fi
printf "\n\033[32;1m Let's benchmark this query... \033[0m\n";
printf "$select_stmt";
SECONDS=0;
bq query --use_cache=false --nouse_legacy_sql $select_stmt # critical to turn cache off for this test
duration=$SECONDS
# get current timestamp in milliseconds
ran_on=$(date +%s)
index=$((index+1))
printf "\n\033[32;1m Here's how long it took... \033[0m\n\n";
echo "Query $index ran in $(($duration / 60)) minutes and $(($duration % 60)) seconds."
printf "\n\033[32;1m Writing to our benchmark table... \033[0m\n\n";
insert_stmt="insert into $BQ_DATASET.perf(performance_test_num, query_num, elapsed_time_sec, ran_on) values($start, $index, $duration, $ran_on)"
printf "$insert_stmt"
bq query --nouse_legacy_sql $insert_stmt
done
end=$(date +%s)
printf "Benchmark test complete"
%%bigquery
SELECT * FROM tpcds_2t_baseline.perf
WHERE
# Let's only pull the results from our most recent test
performance_test_num = (SELECT MAX(performance_test_num) FROM tpcds_2t_baseline.perf)
ORDER BY ran_on
%%bigquery
SELECT
TIMESTAMP_SECONDS(MAX(performance_test_num)) AS test_date,
MAX(performance_test_num) AS latest_performance_test_num,
COUNT(DISTINCT query_num) AS count_queries_benchmarked,
SUM(elapsed_time_sec) AS total_time_sec,
MIN(elapsed_time_sec) AS fastest_query_time_sec,
MAX(elapsed_time_sec) AS slowest_query_time_sec
FROM
tpcds_2t_baseline.perf
WHERE
performance_test_num = (SELECT MAX(performance_test_num) FROM tpcds_2t_baseline.perf)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Google Cloud resources are organized hierarchically. Starting from the bottom of the hierarchy, projects are the first level, and they contain other resources. All resources except for organizations have exactly one parent. The Organization is the top of the hierarchy and does not have a parent.
Step2: For this lab, you will be accessing data stored in another project, in this case the publicly accessible sample project qwiklabs-resources. See how many datasets exist in this project.
Step3: And let's look at the tables and views in one of these datasets
Step4: But how are we able to access other data? And won't querying that data create work in that user's cluster? Not at all! Because BigQuery has completely separated the compute and storage layers so they can scale independently, we can easily query data (so long as we have permissions) that are in public datasets or datasets from other teams, without incurring compute costs for them, and without slowing their queries down, even if we're accessing the same data.
Step5: The core tables in the data warehouse are derived from 5 separate core operational systems (each with many tables)
Step6: Question
Step7: Question
Step8: Previewing sample rows of data values
Step9: Question
Step10: Create an example sales report
Step11: A note on our data
Step12: Side note
Step13: Running the first benchmark test
Step14: It should execute in just a few seconds. Then try running it again and see if you get the same performance. BigQuery will automatically cache the results from the first time you ran the query and then serve those same results to you when you run the query again. We can confirm this by analyzing the query job statistics.
Step15: Be sure to replace the job id with your own most recent job.
Step17: Looking at the job statistics we can see our most recent query hit cache
Step19: 132 GB will be processed. At the time of writing, BigQuery pricing is \$5 per 1 TB (or 1000 GB) of data after the first free 1 TB each month. Assuming we've exhausted our 1 TB free this month, this would be \$0.66 to run (a quick sanity-check calculation follows this list).
Step20: If you're an experienced BigQuery user, you likely have seen these same metrics in the Web UI as well as highlighted in the red box below
Step21: Here we will use the bq cp command to copy tables over. If you need to periodically refresh data, the BQ Transfer service or scheduled queries are good tools as well.
Step22: Inspect the tables now in your project.
Step23: Verify you now have the baseline data in your project
Step24: Setup an automated test
Step25: Viewing the benchmark results
Step26: And finally, the overall statistics for the entire test
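A quick sanity check of the Step19 arithmetic in plain Python (the byte count is the one returned by the dry run above; $5 per TB is the on-demand rate quoted in the text):
bytes_processed = 132086388641
cost_usd = bytes_processed / 1e12 * 5   # $5 per TB (10**12 bytes) on demand
print(round(cost_usd, 2))               # ~0.66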
|
4,520
|
<ASSISTANT_TASK:>
Python Code:
### General imports
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import colors as mcolors
import GPy
import time
np.random.seed(12345)
### Emukit imports
from emukit.test_functions.forrester import multi_fidelity_forrester_function
from emukit.core.loop.user_function import UserFunctionWrapper
from emukit.multi_fidelity.convert_lists_to_array import convert_x_list_to_array
from emukit.bayesian_optimization.acquisitions.entropy_search import MultiInformationSourceEntropySearch
from emukit.bayesian_optimization.acquisitions.max_value_entropy_search import MUMBO
from emukit.core.acquisition import Acquisition
from emukit.multi_fidelity.models.linear_model import GPyLinearMultiFidelityModel
from emukit.multi_fidelity.kernels.linear_multi_fidelity_kernel import LinearMultiFidelityKernel
from emukit.multi_fidelity.convert_lists_to_array import convert_xy_lists_to_arrays
from emukit.core import ParameterSpace, ContinuousParameter, InformationSourceParameter
from emukit.model_wrappers import GPyMultiOutputWrapper
from GPy.models.gp_regression import GPRegression
### --- Figure config
LEGEND_SIZE = 15
# Load function
# The multi-fidelity Forrester function is already wrapped as an Emukit UserFunction object in
# the test_functions package
forrester_fcn, _ = multi_fidelity_forrester_function()
forrester_fcn_low = forrester_fcn.f[0]
forrester_fcn_high = forrester_fcn.f[1]
# Assign costs
low_fidelity_cost = 1
high_fidelity_cost = 10
# Plot the function s
x_plot = np.linspace(0, 1, 200)[:, None]
y_plot_low = forrester_fcn_low(x_plot)
y_plot_high = forrester_fcn_high(x_plot)
plt.plot(x_plot, y_plot_low, 'b')
plt.plot(x_plot, y_plot_high, 'r')
plt.legend(['Low fidelity', 'High fidelity'])
plt.xlim(0, 1)
plt.title('High and low fidelity Forrester functions')
plt.xlabel('x')
plt.ylabel('y');
# Collect and plot initial samples
np.random.seed(123)
x_low = np.random.rand(6)[:, None]
x_high = x_low[:3]
y_low = forrester_fcn_low(x_low)
y_high = forrester_fcn_high(x_high)
plt.scatter(x_low,y_low)
plt.scatter(x_high,y_high)
x_array, y_array = convert_xy_lists_to_arrays([x_low, x_high], [y_low, y_high])
kern_low = GPy.kern.RBF(1)
kern_low.lengthscale.constrain_bounded(0.01, 0.5)
kern_err = GPy.kern.RBF(1)
kern_err.lengthscale.constrain_bounded(0.01, 0.5)
multi_fidelity_kernel = LinearMultiFidelityKernel([kern_low, kern_err])
gpy_model = GPyLinearMultiFidelityModel(x_array, y_array, multi_fidelity_kernel, 2)
gpy_model.likelihood.Gaussian_noise.fix(0.1)
gpy_model.likelihood.Gaussian_noise_1.fix(0.1)
model = GPyMultiOutputWrapper(gpy_model, 2, 5, verbose_optimization=False)
model.optimize()
# Define cost of different fidelities as acquisition function
class Cost(Acquisition):
def __init__(self, costs):
self.costs = costs
def evaluate(self, x):
fidelity_index = x[:, -1].astype(int)
x_cost = np.array([self.costs[i] for i in fidelity_index])
return x_cost[:, None]
@property
def has_gradients(self):
return True
def evaluate_with_gradients(self, x):
        return self.evaluate(x), np.zeros(x.shape)
parameter_space = ParameterSpace([ContinuousParameter('x', 0, 1), InformationSourceParameter(2)])
cost_acquisition = Cost([low_fidelity_cost, high_fidelity_cost])
es_acquisition = MultiInformationSourceEntropySearch(model, parameter_space) / cost_acquisition
mumbo_acquisition = MUMBO(model, parameter_space, num_samples=5, grid_size=500) / cost_acquisition
x_plot_low = np.concatenate([np.atleast_2d(x_plot), np.zeros((x_plot.shape[0], 1))], axis=1)
x_plot_high = np.concatenate([np.atleast_2d(x_plot), np.ones((x_plot.shape[0], 1))], axis=1)
t_0=time.time()
es_plot_low = es_acquisition.evaluate(x_plot_low)
es_plot_high = es_acquisition.evaluate(x_plot_high)
t_es=time.time()-t_0
mumbo_plot_low = mumbo_acquisition.evaluate(x_plot_low)
mumbo_plot_high = mumbo_acquisition.evaluate(x_plot_high)
t_mumbo=time.time()-t_es-t_0
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.plot(x_plot, es_plot_low , "blue")
ax1.plot(x_plot, es_plot_high, "red")
ax1.set_title("Multi-fidelity Entropy Search")
ax1.set_xlabel(r"$x$")
ax1.set_ylabel(r"$\alpha(x)$")
ax1.set_xlim(0, 1)
ax2.plot(x_plot, mumbo_plot_low , "blue", label="Low fidelity evaluations")
ax2.plot(x_plot, mumbo_plot_high , "red",label="High fidelity evaluations")
ax2.legend(loc="upper right")
ax2.set_title("MUMBO")
ax2.set_xlabel(r"$x$")
ax2.set_ylabel(r"$\alpha(x)$")
ax2.set_xlim(0, 1)
plt.tight_layout()
plt.figure()
plt.bar(["es","MUMBO"],[t_es,t_mumbo])
plt.xlabel("Acquisition Choice")
plt.yscale('log')
plt.ylabel("Calculation Time (secs)")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set up our toy problem (1D optimisation of the Forrester function with two fidelity levels) and collect 6 initial points at low fidelity and 3 at high fidelity.
Step2: Fit our linear multi-fidelity GP model to the observed data.
Step3: Define acquisition functions for multi-fidelity problems
Step4: Let's plot the resulting acquisition functions (MUMBO and standard entropy search for multi-fidelity BO) for the chosen model on the collected data. Note that MUMBO (a max-value entropy search, MES) takes a fraction of the time of ES to compute (plotted on a log scale). This difference becomes even more apparent as you increase the dimensions of the sample space.
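As a hedged follow-on: one simple way to read the next evaluation point off the plotted MUMBO values is a grid argmax over both fidelities (emukit's own loop would use a proper acquisition optimizer; the variables are the ones computed above).
import numpy as np
cand_acq = np.concatenate([mumbo_plot_low.ravel(), mumbo_plot_high.ravel()])
cand_x = np.concatenate([x_plot_low, x_plot_high], axis=0)
x_next = cand_x[np.argmax(cand_acq)]
print(x_next)   # [x value, fidelity index]; fidelity 0 = low, 1 = high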
|
4,521
|
<ASSISTANT_TASK:>
Python Code:
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
from multiprocessing import Pool
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import recommender.simulator as sim
from utils.analysis import value_eval
from recommender.agent import Agent
from functools import partial
NUM_THREADS = 1
LOOKBACK = 252*2 + 28
STARTING_DAYS_AHEAD = 20
POSSIBLE_FRACTIONS = [0.0, 1.0]
# Get the data
SYMBOL = 'SPY'
total_data_train_df = pd.read_pickle('../../data/data_train_val_df.pkl').stack(level='feature')
data_train_df = total_data_train_df[SYMBOL].unstack()
total_data_test_df = pd.read_pickle('../../data/data_test_df.pkl').stack(level='feature')
data_test_df = total_data_test_df[SYMBOL].unstack()
if LOOKBACK == -1:
total_data_in_df = total_data_train_df
data_in_df = data_train_df
else:
data_in_df = data_train_df.iloc[-LOOKBACK:]
total_data_in_df = total_data_train_df.loc[data_in_df.index[0]:]
# Create many agents
index = np.arange(NUM_THREADS).tolist()
env, num_states, num_actions = sim.initialize_env(total_data_train_df,
SYMBOL,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS)
agents = [Agent(num_states=num_states,
num_actions=num_actions,
random_actions_rate=0.98,
random_actions_decrease=0.999,
dyna_iterations=0,
name='Agent_{}'.format(i)) for i in index]
def show_results(results_list, data_in_df, graph=False):
for values in results_list:
total_value = values.sum(axis=1)
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(total_value))))
print('-'*100)
initial_date = total_value.index[0]
compare_results = data_in_df.loc[initial_date:, 'Close'].copy()
compare_results.name = SYMBOL
compare_results_df = pd.DataFrame(compare_results)
compare_results_df['portfolio'] = total_value
std_comp_df = compare_results_df / compare_results_df.iloc[0]
if graph:
plt.figure()
std_comp_df.plot()
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))
# Simulate (with new envs, each time)
n_epochs = 4
for i in range(n_epochs):
tic = time()
results_list = sim.simulate_period(total_data_in_df,
SYMBOL,
agents[0],
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_in_df)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL,
agents[0],
learn=False,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,)
show_results([results_list], data_in_df, graph=True)
env, num_states, num_actions = sim.initialize_env(total_data_test_df,
SYMBOL,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=False,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False)
toc = time()
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
env, num_states, num_actions = sim.initialize_env(total_data_test_df,
SYMBOL,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=True,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False)
toc = time()
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
import pickle
with open('../../data/best_agent.pkl', 'wb') as best_agent:
pickle.dump(agents[0], best_agent)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's show the symbol's data, to see how good the recommender has to be.
Step2: Let's run the trained agent, with the test set
Step3: And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).
|
4,522
|
<ASSISTANT_TASK:>
Python Code:
import io3d
import matplotlib.pyplot as plt
datap = io3d.read("http://home.zcu.cz/~mjirik/lisa/sample_data/nrn4.pklz")
plt.imshow(datap.data3d[5,:,:], cmap='gray')
io3d.datasets.download("3Dircadb1.1")
datap = io3d.read_dataset("3Dircadb1", "data3d", 1)
# datap.data3d is equal to datap["data3d"]
datap["data3d"]
pth = io3d.datasets.get_dataset_path("3Dircadb1", "data3d", 1)
pth
datap = io3d.read(pth)
datap.data3d.shape
plt.imshow(datap.data3d[30,:,:], cmap='gray')
plt.colorbar()
import sed3
sed3.show_slices(datap.data3d, slice_number=12)
datap_liver = io3d.read_dataset("3Dircadb1", "liver", 1)
sed3.show_slices(datap_liver.data3d, slice_number=12)
sed3.show_slices(datap.data3d, slice_number=12, contour=datap_liver.data3d)
sed3.ipy_show_slices(datap.data3d, contour=datap_liver.data3d)
ed = sed3.sed3(datap.data3d, contour=datap_liver.data3d)
seeds = ed.show()
plt.figure(figsize=[10,8])
sed3.show_slices(datap.data3d, slice_number=6, axis=2)
print(datap.voxelsize_mm)
new_voxelsize_mm = [6,6,6]
data3d = io3d.image_manipulation.resize_to_mm(datap.data3d, voxelsize_mm=datap.voxelsize_mm, new_voxelsize_mm=new_voxelsize_mm)
plt.figure(figsize=[10,8])
sed3.show_slices(data3d, slice_number=6, axis=2)
# sed3.ipy_show_slices(data3d)# , contour=datap_liver.data3d)
io3d.write({"data3d":data3d, "voxelsize_mm":new_voxelsize_mm}, "resized.mhd")
dp = io3d.read("resized.mhd")
plt.imshow(dp.data3d[20,:,:], cmap='gray')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Alternative 2
Step2: Download the data
Step3: Read intensity data
Step4: Alternative 1
Step5: Show one slice
Step6: Show multiple slices
Step7: Load segmentations
Step8: See segmentations together with intensity image
Step9: Interactive view
Step10: Interactive view in external window (does not work from jupyter)
Step11: Resize image
Step12: After resize
Step13: Save data
|
4,523
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
from numpy.random import random
# from matplotlib import pyplot as plt, animation
from matplotlib import pyplot as plt, rcParams, animation, rc
rc('animation', html='html5')
rcParams['figure.figsize'] = 3, 3 # sets plot window size
%precision 4
np.set_printoptions(precision=4, linewidth=60)
def lin(a, b, x): return a*x + b
a = 3.
b = 8.
n = 30
x = random(n)
y = lin(a, b, x)
x
y
plt.scatter(x,y)
def sse(y, y_pred): return ((y - y_pred)**2).sum()
def loss(y, a, b, x): return sse(y, lin(a,b,x))
def avg_loss(y, a,b,x): return np.sqrt(loss(y,a,b,x)/n)
a_guess = -1
b_guess = 1
avg_loss(y, a_guess, b_guess, x)
Lr = 0.01 # below thanks to Wolfram Alpha
# d[(y - y_pred)**2,b] = d[(y - (a*x+b))**2, b] = 2*(b + a*x - y)
# d[(y - y_pred)**2,a] = d[(y - (a*x+b))**2, a] = 2*x*(b+a*x-y) = x * dy/db
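# Derivation sketch (chain rule), matching the Wolfram Alpha results above:
#   with y_pred = a*x + b and L = (y - y_pred)**2,
#   dL/db = 2*(y_pred - y) * d(y_pred)/db = 2*(y_pred - y)
#   dL/da = 2*(y_pred - y) * d(y_pred)/da = x * dL/db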
def update():
global a_guess, b_guess
y_pred = lin(a_guess, b_guess, x)
dydb = 2*(y_pred - y)
dyda = x*dydb
a_guess -= Lr * dyda.mean() # new guess is minus deriv * (a little bit)
b_guess -= Lr * dydb.mean()
fig = plt.figure(figsize=(5,4), dpi=100)
plt.scatter(x, y)
line, = plt.plot(x, lin(a_guess, b_guess, x))
plt.close()
def animate(i):
line.set_ydata(lin(a_guess, b_guess, x))
    for _ in range(10): update()  # range works in Python 2 and 3; avoid shadowing i
return line,
ani = animation.FuncAnimation(fig, animate, np.arange(0, 40), interval=100)
ani
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's start out with a line
Step2: Write an update function that nudges our guesses for a and b closer to the true values each time it is called.
Step3: Confirm that our line eventually, actually, fits our data, via animation.
|
4,524
|
<ASSISTANT_TASK:>
Python Code:
import os
import larch # !conda install larch -c conda-forge # for estimation
import pandas as pd
os.chdir('test')
modelname = "atwork_subtour_frequency"
from activitysim.estimation.larch import component_model
model, data = component_model(modelname, return_data=True)
data.coefficients
data.spec
data.chooser_data
model.estimate(method='SLSQP')
model.parameter_summary()
from activitysim.estimation.larch import update_coefficients
result_dir = data.edb_directory/"estimated"
update_coefficients(
model, data, result_dir,
output_file=f"{modelname}_coefficients_revised.csv",
);
model.to_xlsx(
result_dir/f"{modelname}_model_estimation.xlsx",
data_statistics=False,
)
pd.read_csv(result_dir/f"{modelname}_coefficients_revised.csv")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We'll work in our test directory, where ActivitySim has saved the estimation data bundles.
Step2: Load data and prep model for estimation
Step3: Review data loaded from the EDB
Step4: Utility specification
Step5: Chooser data
Step6: Estimate
Step7: Estimated coefficients
Step8: Output Estimation Results
Step9: Write the model estimation report, including coefficient t-statistic and log likelihood
Step10: Next Steps
|
4,525
|
<ASSISTANT_TASK:>
Python Code:
from pymldb import Connection
mldb = Connection("http://localhost")
mldb.query("""
    SELECT
        jseval('
            return val * 2;
        ','val', 5) AS output
""")
mldb.query("""
    SELECT
        jseval('
            var output = {};
            output["mult"] = val * 2;
            output["name"] = str_val + " Hello!";
            return output;
        ','val,str_val', 5, 'Bonjour!') AS output
""")
dataUrl = "http://snap.stanford.edu/data/facebook.tar.gz"
mldb.put("/v1/procedures/import_data", {
"type": "import.text",
"params": {
"dataFileUrl": "archive+" + dataUrl + "#facebook/3980.circles",
"delimiter": " ",
"quoteChar": "",
"outputDataset": "import_URL2",
"runOnCreation": True
}
})
mldb.query("SELECT * NAMED rowName() FROM import_URL2 LIMIT 10")
dataUrl = "http://snap.stanford.edu/data/facebook.tar.gz"
print mldb.put("/v1/procedures/import_non_formated", {
"type": "import.text",
"params": {
"dataFileUrl": "archive+" + dataUrl + "#facebook/3980.circles",
"headers": ["circles"],
"select":
jseval('
var row_val = val.split("\t");
var rtn = {};
rtn["rowName"] = row_val[0];
for(i=1; i<row_val.length; i++) {
rtn[row_val[i]] = 1;
}
return rtn;
','val', circles) AS *
,
"outputDataset": {
"id": "import_non_formated",
"type": "sparse.mutable"
},
"runOnCreation": True
}
})
mldb.query("""
    SELECT * EXCLUDING(rowName)
    NAMED rowName
    FROM import_non_formated
    ORDER BY CAST(rowName() AS integer)
    LIMIT 5
""")
mldb.query("""
    SELECT *
    FROM transpose(
        (
            SELECT sum({* EXCLUDING(rowName)}) as *
            NAMED 'result'
            FROM import_non_formated
        )
    )
    ORDER BY result DESC
    LIMIT 5
""")
print(mldb.put('/v1/procedures/import_ML_concepts', {
"type":"import.text",
"params": {
"dataFileUrl":"file://mldb/mldb_test_data/MachineLearningConcepts.csv",
"outputDataset": "ml_concepts",
"named": "Concepts",
"select": "Text",
"runOnCreation": True
}
}
))
mldb.query("SELECT * FROM ml_concepts")
print mldb.put("/v1/functions/getStats", {
"type": "sql.expression",
"params": {
"expression":
jseval('
var result = {};
result["len"] = txt.length;
result["numWords"] = txt.split(" ").length;
result["numCapital"] = txt.replace(/[^A-Z]/g, "").length;
result["numExpl"] = txt.replace(/[^!]/g, "").length;
result["numQst"] = txt.replace(/[^?]/g, "").length;
result["containsHashSign"] = txt.replace(/[^#]/g, "").length >= 1;
result["numNumbers"] = txt.replace(/[^0-9]/g, "").length;
result["capitalProportion"] = result["numCapital"] / result["len"];
result["explProportion"] = result["numExpl"] / result["len"];
result["qstProportion"] = result["numQst"] / result["len"];
result["numberProportion"] = result["numNumbers"] / result["len"];
return result;
            ', 'txt', text) as stats
        """
    }
}))
mldb.query("SELECT getStats({text: 'This is a test #hopethisworks #mldb'}) as *")
mldb.query("SELECT getStats({text: Text}) as * FROM ml_concepts")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Basic usage examples
Step4: The variable val takes the input value 5 and the code is then evaluated.
Step5: In the above example, the string val,str_val means that the function takes two input variables. Those values will be 5 and the string 'Bonjour!'. Since we return a JavaScript object, we essentially return a row where the keys are the object's keys and the cell values are the object's values.
Step7: We see that each line contains the circle number followed by user ids. This type of data is an ideal candidate for MLDB, since we can store it as bags of words, or rather, bags of friends. A dataset of type sparse.mutable can store sparse representations like this one very efficiently.
Step9: We can now run a SELECT query on the resulting dataset and get a nice sparse representation
Step11: We can now answer a simple question like
Step12: Since the maximum value is 1, we now know that the answer to the above question is no.
Step13: Taking a peek at our data, we see there is a single column called Text that contains a textual description of an ML concept
Step15: Let's now create a function of type sql.expression containing a jseval function that calculates different statistics about the string it is given. It calculates things like the number of words in the string, the number of capital letters, etc.
Step16: Now that we have created our getStats function, we can call it on a single string
Step17: Looks like it works! We can also call it on the Text column of our ml_concepts dataset to get the statistics for all the rows of our dataset
|
4,526
|
<ASSISTANT_TASK:>
Python Code:
import requests
base_url = 'http://192.168.59.103:8500/v1/kv/'
response = requests.put(base_url + 'key1', data="value1")
print(response.text)
from consul import Consul
c = Consul('192.168.59.103')
index, data = c.kv.get('key1')
print(data['Value'].decode('utf8'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting a key/value with python-consul
|
4,527
|
<ASSISTANT_TASK:>
Python Code:
# help(font_manager)
path = '../fonts/segoeuib.ttf'
prop = font_manager.FontProperties(fname=path)
print(prop.get_name())
print(prop.get_family())
font0 = FontProperties()
font1 = font0.copy()
font1.set_family(prop.get_name())
# Data to plot
labels = ['Python', 'R','MATLAB', 'C', 'C++']
sizes = [36, 19, 28, 8, 9]
colors = ['#2196F3','#FF5722', '#FFC107', '#CDDC39', '#4CAF50']
# explode = (0.1, 0, 0, 0, 0)  # would explode the 1st slice
explode = (0, 0, 0, 0, 0)  # no slice exploded
plt.figure(figsize=(8,8))
patches, texts = plt.pie(sizes, explode=explode, labels=labels, labeldistance=0.65, colors=colors,
autopct=None, shadow=False, startangle=22)
for item in texts:
item.set_fontproperties(font1)
item.set_fontsize(30)
item.set_horizontalalignment('center')
item.set_weight('bold')
#item.set_family(prop.get_family())
#draw a circle at the center of pie to make it look like a donut
centre_circle = plt.Circle((0,0),0.4,color='#E7E7E7',linewidth=1.25)
fig = plt.gcf()
fig.gca().add_artist(centre_circle)
plt.axis('equal')
plt.tight_layout()
# plt.savefig('donut.pdf',transparent=True)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Controlling font properties
|
4,528
|
<ASSISTANT_TASK:>
Python Code:
# Set up
from __future__ import division  # __future__ imports must come first in the cell
import numpy as np
import pandas as pd
import seaborn as sns # for visualization
from scipy.stats import ttest_ind # t-tests
import statsmodels.formula.api as smf # linear modeling
import statsmodels.api as sm
import matplotlib.pyplot as plt # plotting
import matplotlib
from sklearn import metrics
matplotlib.style.use('ggplot')
%matplotlib inline
# Load the data, replace strings as numeric
df = pd.read_csv('./data/payment-default.csv')
df.default = df.default.replace(['Yes', 'No'], [1, 0]).astype(int)
df.student = df.student.replace(['Yes', 'No'], [1, 0]).astype(int)
# What is the **default rate** in the dataset (# of defaults / total)
# What is the distribution of balances (for those who default, and those who don't)?
# You may want to create subsets of the data for those who defaulted, and those who did not
# Draw histograms of the distribution (perhaps overlapping histogram on the same chart)
# What is the relationship between balance and income (show a scatter-plot, different colors for default/no default)?
# Hint: http://stackoverflow.com/questions/21654635/scatter-plots-in-pandas-pyplot-how-to-plot-by-category
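# --- One possible exploration (hedged sketch; assumes ISLR-style numeric
# --- columns 'balance' and 'income' exist in df, as the prompts suggest) ---
print('default rate: {:.2%}'.format(df.default.mean()))
defaulted = df[df.default == 1]
paid = df[df.default == 0]
plt.hist([paid.balance, defaulted.balance], bins=30, label=['no default', 'default'])
plt.xlabel('balance'); plt.legend(); plt.show()
plt.scatter(paid.balance, paid.income, s=5, label='no default')
plt.scatter(defaulted.balance, defaulted.income, s=5, label='default')
plt.xlabel('balance'); plt.ylabel('income'); plt.legend(); plt.show()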
# Fit a linear model of the dependent variable (default) on balance
# Generate predictions using your model (this can be interpreted as probability of default)
# Visualize the results: scatterplot of balance vs. predicted default rate. Bonus: add actual default rate.
# Interpret the **coefficients** and assess the **model fit**
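# --- Hedged sketch of one possible solution (not the original answer key) ---
lm = smf.ols('default ~ balance', data=df).fit()
print(lm.summary())           # coefficients, p-values and R-squared
lm_probs = lm.predict(df)     # linear predictions (can fall outside [0, 1])
plt.scatter(df.balance, lm_probs, s=5, label='predicted')
plt.scatter(df.balance, df.default, s=5, label='actual')
plt.xlabel('balance'); plt.legend(); plt.show()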
# Use the `smf.glm` function to fit `default` to `balance` (hint: use the `binomial` family)
# Generate a set of predicted probabilities using your model
# Visualize the predicted probabilities across balances (bonus: add the observed values as well)
# Retrieve beta values from the model
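# --- Hedged sketch using a logistic (binomial) GLM ---
glm_model = smf.glm('default ~ balance', data=df, family=sm.families.Binomial()).fit()
glm_probs = glm_model.predict(df)   # predicted probabilities of default
print(glm_model.params)             # beta values (intercept and balance)
plt.scatter(df.balance, glm_probs, s=5)
plt.xlabel('balance'); plt.ylabel('P(default)'); plt.show()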
# Convert balance to units of $100 and re-run the regression
# Extract the beta values from your model
# First, using a threshold of .5, use your model to predict a binary outcome (each case as 0 or 1)
# What is the accuracy of your model (how often does the prediction match the data)?
# What are the sensitivity, specificity, Type I error rate, and Type II error rates?
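# --- Hedged sketch (reuses glm_probs from the sketch above) ---
pred_label = (glm_probs >= 0.5).astype(int)
tn, fp, fn, tp = metrics.confusion_matrix(df.default, pred_label).ravel()
print('accuracy   :', metrics.accuracy_score(df.default, pred_label))
print('sensitivity:', tp / (tp + fn), ' specificity:', tn / (tn + fp))
print('Type I rate:', fp / (fp + tn), ' Type II rate:', fn / (fn + tp))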
# Generate data for the ROC curve using the `metrics.roc_curve` function
# Draw your ROC curve
# Calculate the area under your ROC curve using the metrics.roc_auc_score function
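# --- Hedged sketch (reuses glm_probs from the sketch above) ---
fpr, tpr, _ = metrics.roc_curve(df.default, glm_probs)
plt.plot(fpr, tpr)
plt.xlabel('False positive rate'); plt.ylabel('True positive rate'); plt.show()
print('AUC:', metrics.roc_auc_score(df.default, glm_probs))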
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Exploration (15 minutes)
Step2: List pertinent observations from the above analysis
Step3: What is your interpretation of the coefficients and your assessment of the model fit?
Step4: While the simplicity of the predict() method makes it easy to overlook the inner workings of the model, you should note that the estimates are generated by plugging values of $X$ into this equation (with your estimated Beta values)
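(A hedged illustration, not from the original notebook: the simple linear model predicts $\hat{y} = \beta_0 + \beta_1 \cdot balance$, while the logistic model predicts $p = 1 / (1 + e^{-(\beta_0 + \beta_1 \cdot balance)})$.)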
Step5: Using the beta values produced by the model, describe the relationship between balance and default (make sure to note the p-value, direction, and units of the coefficient)
Step6: What is the interpretation of your beta value from your regression?
Step7: What is your interpretation of these values?
|
4,529
|
<ASSISTANT_TASK:>
Python Code:
%%R
otu.tbl.file1 = '/home/nick/notebook/SIPSim/dev/bac_genome1210/atomIncorp_taxaIncorp/0/10/1/OTU_n2_abs1e9_sub-norm_filt.physeq'
otu.tbl.file2 = '/home/nick/notebook/SIPSim/dev/bac_genome1210/atomIncorp_taxaIncorp/100/10/1/OTU_n2_abs1e9_sub-norm_filt.physeq'
physeq1 = readRDS(otu.tbl.file1)
physeq2 = readRDS(otu.tbl.file2)
%%R
ord1 = ordinate(physeq1, method='NMDS', distance='bray')
ord2 = ordinate(physeq2, method='NMDS', distance='bray')
ord1 %>% scores %>% head %>% print
ord2 %>% scores %>% head %>% print
%%R
get.fracs = function(ord){
fracs = gsub('.+__', '', rownames(ord %>% scores)) %>% as.data.frame()
colnames(fracs) = c('fractions')
fracs = fracs %>%
separate(fractions, c('start','end'), sep='-', convert=T) %>%
mutate(start = start * 1000,
end = end * 1000)
return(fracs)
}
ord1.f = get.fracs(ord1)
ord2.f = get.fracs(ord2)
%%R
library(IRanges)
%%R
ord1.r = IRanges(start=ord1.f$start, end=ord1.f$end)
ord2.r = IRanges(start=ord2.f$start, end=ord2.f$end)
%%R
ov = findOverlaps(ord1.r, ord2.r, select='first')
ov
%%R
ov = findOverlaps(ord1.r, ord2.r)
ov
%%R
otu.tbl.file1 = '/home/nick/notebook/SIPSim/dev/bac_genome1210/atomIncorp_taxaIncorp/0/10/1/OTU_n2_abs1e9_sub-norm_filt.physeq'
otu.tbl.file2 = '/home/nick/notebook/SIPSim/dev/bac_genome1210/atomIncorp_taxaIncorp/100/10/1/OTU_n2_abs1e9_sub-norm_filt.physeq'
physeq1 = readRDS(otu.tbl.file1)
physeq2 = readRDS(otu.tbl.file2)
%%R
ord1 = ordinate(physeq1, method='NMDS', distance='bray')
ord2 = ordinate(physeq2, method='NMDS', distance='bray')
%%R
grps = as.character(rep(seq(1,nrow(ord1$points) / 2), 2))
grps = append(grps, '2')
plot(ord1, type = "p", display='sites')
elps = ordiellipse(ord1, grps, kind="se", conf=0.95, lwd=2, col="blue")
elps = elps %>% summary %>% t %>% as.data.frame
elps
%%R
ggplot(elps, aes(NMDS1, NMDS2)) +
geom_point()
%%R
get.ellipse = function(ord){
grps = as.character(rep(seq(1,nrow(ord$points) / 2), 2))
grps = append(grps, '2')
plot(ord, type = "p", display='sites')
elps = ordiellipse(ord, grps, kind="se", conf=0.95, lwd=2, col="blue")
elps = elps %>% summary %>% t %>% as.data.frame
return(elps)
}
get.ellipse(ord1)
%%R
mid = function(x, y){ (x + y)/2 }
get.BD.range = function(tbl){
tbl = as.data.frame(tbl)
tbl$lib = gsub('__.+', '', rownames(tbl)) %>% as.character
tbl$BD.start = gsub('.+__([0-9.]+)-.+', '\\1', rownames(tbl)) %>% as.numeric
tbl$BD.end = gsub('.+-', '', rownames(tbl)) %>% as.numeric
tbl$BD.mid = mapply(mid, tbl$BD.start, tbl$BD.end)
return(tbl)
}
ord.BD = get.BD.range(ord1 %>% scores)
ord.BD %>% head
%%R
# making fixed BD-range & binning by BD.mid
BD.range = seq(1.6, 1.9, 0.004)
BD.range
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Calculating centroid of binned fraction samples
|
4,530
|
<ASSISTANT_TASK:>
Python Code:
!head ../data/model.txt
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib as mpl
from IPython.display import display
model = pd.read_csv(
"../data/model.txt", delim_whitespace=True, skiprows = 3,
parse_dates = {'Timestamp': [0, 1]}, index_col = 'Timestamp')
model.head()
pd.plotting.scatter_matrix(model.loc[model.index[:1000], 'M(m/s)':'D(deg)'])
pd.plotting.scatter_matrix(
    model.loc[model.sort_values('M(m/s)', ascending=False).index[:1000],
              'M(m/s)':'D(deg)']
)
model.loc[:, 'M(m/s)'].plot.hist(bins=np.arange(0, 35))
model['month'] = model.index.month
model['year'] = model.index.year
model.groupby(by = ['year', 'month']).mean().head(24)
model.groupby(by=['year', 'month']).mean().plot(y='M(m/s)', figsize=(15, 5))
monthly = model.groupby(by=['year', 'month']).mean()
monthly['ma'] = monthly.loc[:, 'M(m/s)'].rolling(5, center=True).mean()
monthly.head()
monthly.loc[:, ['M(m/s)', 'ma']].plot(figsize=(15, 6))
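# the pivot below reshapes the monthly means into a year x month table (one row per year)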
monthly.loc[:, 'M(m/s)'].reset_index().pivot(index='year', columns='month')
monthly.loc[:, 'M(m/s)'].reset_index().pivot(
index='year', columns='month'
).T.loc['M(m/s)'].plot(
figsize=(15, 5), legend=False
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercises
Step2: The same scatter matrix, for the 1000 records with the highest wind speed
Step3: Time series of the monthly mean wind speed
Step4: Moving average of the data grouped by month and year
|
4,531
|
<ASSISTANT_TASK:>
Python Code:
'''
Solution
'''
import pandas as pd
# Dataset from - https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection
df = pd.read_table('smsspamcollection/SMSSpamCollection',
sep='\t',
header=None,
names=['label', 'sms_message'])
# Output printing out first 5 columns
df.head()
'''
Solution
'''
df['label'] = df.label.map({'ham':0, 'spam':1})
print(df.shape)
df.head() # returns (rows, columns)
'''
Solution:
'''
documents = ['Hello, how are you!',
'Win money, win from home.',
'Call me now.',
'Hello, Call hello you tomorrow?']
lower_case_documents = []
for i in documents:
lower_case_documents.append(i.lower())
print(lower_case_documents)
'''
Solution:
'''
sans_punctuation_documents = []
import string
for i in lower_case_documents:
sans_punctuation_documents.append(i.translate(str.maketrans('', '', string.punctuation)))
print(sans_punctuation_documents)
'''
Solution:
'''
preprocessed_documents = []
for i in sans_punctuation_documents:
preprocessed_documents.append(i.split(' '))
print(preprocessed_documents)
'''
Solution
'''
frequency_list = []
import pprint
from collections import Counter
for i in preprocessed_documents:
frequency_counts = Counter(i)
frequency_list.append(frequency_counts)
pprint.pprint(frequency_list)
'''
Here we will look to create a frequency matrix on a smaller document set to make sure we understand how the
document-term matrix generation happens. We have created a sample document set 'documents'.
'''
documents = ['Hello, how are you!',
'Win money, win from home.',
'Call me now.',
'Hello, Call hello you tomorrow?']
'''
Solution
'''
from sklearn.feature_extraction.text import CountVectorizer
count_vector = CountVectorizer()
'''
Practice note:
Print the 'count_vector' object which is an instance of 'CountVectorizer()'
'''
print(count_vector)
'''
Solution:
'''
count_vector.fit(documents)
count_vector.get_feature_names()
'''
Solution
'''
doc_array = count_vector.transform(documents).toarray()
doc_array
'''
Solution
'''
frequency_matrix = pd.DataFrame(doc_array,
columns = count_vector.get_feature_names())
frequency_matrix
'''
Solution
'''
# split into training and testing sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df['sms_message'],
df['label'],
random_state=1)
print('Number of rows in the total set: {}'.format(df.shape[0]))
print('Number of rows in the training set: {}'.format(X_train.shape[0]))
print('Number of rows in the test set: {}'.format(X_test.shape[0]))
'''
[Practice Note]
The code for this segment is in 2 parts. Firstly, we are learning a vocabulary dictionary for the training data
and then transforming the data into a document-term matrix; secondly, for the testing data we are only
transforming the data into a document-term matrix.
This is similar to the process we followed in Step 2.3
We will provide the transformed data to students in the variables 'training_data' and 'testing_data'.
'''
'''
Solution
'''
# Instantiate the CountVectorizer method
count_vector = CountVectorizer()
# Fit the training data and then return the matrix
training_data = count_vector.fit_transform(X_train)
# Transform testing data and return the matrix. Note we are not fitting the testing data into the CountVectorizer()
testing_data = count_vector.transform(X_test)
'''
Instructions:
Calculate probability of getting a positive test result, P(Pos)
'''
'''
Solution (skeleton code will be provided)
'''
# P(D)
p_diabetes = 0.01
# P(~D)
p_no_diabetes = 0.99
# Sensitivity or P(Pos|D)
p_pos_diabetes = 0.9
# Specificity or P(Neg/~D)
p_neg_no_diabetes = 0.9
# P(Pos)
p_pos = (p_diabetes * p_pos_diabetes) + (p_no_diabetes * (1 - p_neg_no_diabetes))
print('The probability of getting a positive test result P(Pos) is: {}'.format(p_pos))
'''
Instructions:
Compute the probability of an individual having diabetes, given that, that individual got a positive test result.
In other words, compute P(D|Pos).
The formula is: P(D|Pos) = (P(D) * P(Pos|D) / P(Pos)
'''
'''
Solution
'''
# P(D|Pos)
p_diabetes_pos = (p_diabetes * p_pos_diabetes) / p_pos
print('Probability of an individual having diabetes, given that that individual got a positive test result is: {}'.format(p_diabetes_pos))
'''
Instructions:
Compute the probability of an individual not having diabetes, given that, that individual got a positive test result.
In other words, compute P(~D|Pos).
The formula is: P(~D|Pos) = (P(~D) * P(Pos|~D) / P(Pos)
Note that P(Pos/~D) can be computed as 1 - P(Neg/~D).
Therefore:
P(Pos/~D) = p_pos_no_diabetes = 1 - 0.9 = 0.1
'''
'''
Solution
'''
# P(Pos/~D)
p_pos_no_diabetes = 0.1
# P(~D|Pos)
p_no_diabetes_pos = (p_no_diabetes * p_pos_no_diabetes) / p_pos
print('Probability of an individual not having diabetes, given that that individual got a positive test result is: {}'.format(p_no_diabetes_pos))
'''
Instructions: Compute the probability of the words 'freedom' and 'immigration' being said in a speech, or
P(F,I).
The first step is multiplying the probabilities of Jill Stein giving a speech with her individual
probabilities of saying the words 'freedom' and 'immigration'. Store this in a variable called p_j_text
The second step is multiplying the probabilities of Gary Johnson giving a speech with his individual
probabilities of saying the words 'freedom' and 'immigration'. Store this in a variable called p_g_text
The third step is to add both of these probabilities and you will get P(F,I).
'''
'''
Solution: Step 1
'''
# P(J)
p_j = 0.5
# P(J|F)
p_j_f = 0.1
# P(J|I)
p_j_i = 0.1
p_j_text = p_j * p_j_f * p_j_i
print(p_j_text)
'''
Solution: Step 2
'''
# P(G)
p_g = 0.5
# P(G|F)
p_g_f = 0.7
# P(G|I)
p_g_i = 0.2
p_g_text = p_g * p_g_f * p_g_i
print(p_g_text)
'''
Solution: Step 3: Compute P(F,I) and store in p_f_i
'''
p_f_i = p_j_text + p_g_text
print('Probability of words freedom and immigration being said are: ', format(p_f_i))
'''
Instructions:
Compute P(J|F,I) using the formula P(J|F,I) = (P(J) * P(J|F) * P(J|I)) / P(F,I) and store it in a variable p_j_fi
'''
'''
Solution
'''
p_j_fi = p_j_text / p_f_i
print('The probability of Jill Stein saying the words Freedom and Immigration: ', format(p_j_fi))
'''
Instructions:
Compute P(G|F,I) using the formula P(G|F,I) = (P(G) * P(G|F) * P(G|I)) / P(F,I) and store it in a variable p_g_fi
'''
'''
Solution
'''
p_g_fi = p_g_text / p_f_i
print('The probability of Gary Johnson saying the words Freedom and Immigration: ', format(p_g_fi))
'''
Instructions:
We have loaded the training data into the variable 'training_data' and the testing data into the
variable 'testing_data'.
Import the MultinomialNB classifier and fit the training data into the classifier using fit(). Name your classifier
'naive_bayes'. You will be training the classifier using 'training_data' and y_train' from our split earlier.
'''
'''
Solution
'''
from sklearn.naive_bayes import MultinomialNB
naive_bayes = MultinomialNB()
naive_bayes.fit(training_data, y_train)
'''
Instructions:
Now that our algorithm has been trained using the training data set we can now make some predictions on the test data
stored in 'testing_data' using predict(). Save your predictions into the 'predictions' variable.
'''
'''
Solution
'''
predictions = naive_bayes.predict(testing_data)
'''
Instructions:
Compute the accuracy, precision, recall and F1 scores of your model using your test data 'y_test' and the predictions
you made earlier stored in the 'predictions' variable.
'''
'''
Solution
'''
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
print('Accuracy score: ', format(accuracy_score(y_test, predictions)))
print('Precision score: ', format(precision_score(y_test, predictions)))
print('Recall score: ', format(recall_score(y_test, predictions)))
print('F1 score: ', format(f1_score(y_test, predictions)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1.2
Step2: Step 2.1
Step3: Step 2
Step4: Step 3
Step5: Step 4
Step6: Congratulations! You have implemented the Bag of Words process from scratch! As we can see in our previous output, we have a frequency distribution dictionary which gives a clear view of the text that we are dealing with.
Step 2.3
Step7: Instructions
Step8: Data preprocessing with CountVectorizer()
Step9: Instructions
Step10: The get_feature_names() method returns our feature names for this dataset, which is the set of words that make up our vocabulary for 'documents'.
Step11: Now we have a clean representation of the documents in terms of the frequency distribution of the words in them. To make it easier to understand our next step is to convert this array into a dataframe and name the columns appropriately.
Step12: Congratulations! You have successfully implemented a Bag of Words problem for a document dataset that we created.
Step13: Step 3.2
Step14: Step 4.1
Step15: Using all of this information we can calculate our posteriors as follows
Step16: Congratulations! You have implemented Bayes theorem from scratch. Your analysis shows that even if you get a positive test result, there is only an 8.3% chance that you actually have diabetes and a 91.67% chance that you do not have diabetes. This is of course assuming that only 1% of the entire population has diabetes, which of course is only an assumption.
Step 4.2
Step17: Now we can compute the probability of P(J|F,I), that is the probability of Jill Stein saying the words Freedom and Immigration and P(G|F,I), that is the probability of Gary Johnson saying the words Freedom and Immigration.
Step18: And as we can see, just like in the Bayes' theorem case, the sum of our posteriors is equal to 1. Congratulations! You have implemented the Naive Bayes' theorem from scratch. Our analysis shows that there is only a 6.6% chance that Jill Stein of the Green Party uses the words 'freedom' and 'immigration' in her speech as compared to the 93.3% chance for Gary Johnson of the Libertarian party.
Step 5
Step19: Now that predictions have been made on our test set, we need to check the accuracy of our predictions.
Step 6
|
4,532
|
<ASSISTANT_TASK:>
Python Code:
def plot_predict_actual_values(date, y_pred, y_test, ylabel):
    """Plot the predicted values and actual values (for the test data)."""
fig, axs = plt.subplots(figsize=(16,6))
axs.plot(date, y_pred, color='red', label='predicted values')
axs.plot(date, y_test, color='blue', label='actual values')
axs.set_title('the predicted values and actual values (for the test data)')
plt.xlabel('test datetime')
plt.ylabel(ylabel)
plt.legend(loc='upper left')
plt.show()
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
raw_df = pd.read_csv("data/data.csv")
raw_df.head()
df = pd.DataFrame(pd.to_datetime(raw_df.StartTime))
# we can see that 'AvgRate' comes in two units: 'Mbps' and 'Gbps'
raw_df.AvgRate.str[-4:].unique()
# Unify AvgRate value
df['AvgRate'] = raw_df.AvgRate.apply(lambda x:float(x[:-4]) if x.endswith("Mbps") else float(x[:-4])*1000)
df["total"] = raw_df["total"]
df.head()
ax = df.plot(y='AvgRate',figsize=(16,6), title="AvgRate of network traffic data")
ax = df.plot(y='total',figsize=(16,6), title="total bytes of network traffic data")
from bigdl.chronos.data import TSDataset
from sklearn.preprocessing import StandardScaler
# we look back one week of data, which at a 2h frequency is 84 points
look_back = 84
# number of steps to be predicted; here we predict one step (2h) ahead
horizon = 1
tsdata_train, _, tsdata_test = TSDataset.from_pandas(df, dt_col="StartTime", target_col=["AvgRate","total"], with_split=True, test_ratio=0.1)
standard_scaler = StandardScaler()
for tsdata in [tsdata_train, tsdata_test]:
tsdata.gen_dt_feature()\
.impute(mode="last")\
.scale(standard_scaler, fit=(tsdata is tsdata_train))
for tsdata in [tsdata_train, tsdata_test]:
tsdata.roll(lookback=look_back, horizon=horizon, target_col="AvgRate")
x_train, y_train = tsdata_train.to_numpy()
x_test, y_test = tsdata_test.to_numpy()
x_train.shape, y_train.shape, x_test.shape, y_test.shape
from bigdl.chronos.forecaster.lstm_forecaster import LSTMForecaster
# build model
forecaster = LSTMForecaster(past_seq_len=x_train.shape[1],
input_feature_num=x_train.shape[-1],
output_feature_num=y_train.shape[-1],
hidden_dim=16,
layer_num=2,
lr=0.001)
%%time
forecaster.fit(data=(x_train, y_train), batch_size=1024, epochs=50)
# make prediction
y_pred = forecaster.predict(x_test)
y_pred_unscale = tsdata_test.unscale_numpy(y_pred)
y_test_unscale = tsdata_test.unscale_numpy(y_test)
from bigdl.orca.automl.metrics import Evaluator
# evaluate with sMAPE
print("sMAPE is", Evaluator.evaluate("smape", y_test_unscale, y_pred_unscale))
# evaluate with mean_squared_error
print("mean_squared error is", Evaluator.evaluate("mse", y_test_unscale, y_pred_unscale))
for tsdata in [tsdata_train, tsdata_test]:
tsdata.roll(lookback=look_back, horizon=horizon, target_col=["AvgRate","total"])
x_train_m, y_train_m = tsdata_train.to_numpy()
x_test_m, y_test_m = tsdata_test.to_numpy()
y_train_m, y_test_m = y_train_m[:, 0, :], y_test_m[:, 0, :]
x_train_m.shape, y_train_m.shape, x_test_m.shape, y_test_m.shape
from bigdl.chronos.forecaster.tf.mtnet_forecaster import MTNetForecaster
mtnet_forecaster = MTNetForecaster(target_dim=y_train_m.shape[-1],
feature_dim=x_train_m.shape[-1],
long_series_num=6,
series_length=12,
ar_window_size=6,
cnn_height=4)
%%time
hist = mtnet_forecaster.fit(data=(x_train_m, y_train_m), batch_size=1024, epochs=20)
y_pred_m = mtnet_forecaster.predict(x_test_m)
y_pred_m_unscale = tsdata_test.unscale_numpy(np.expand_dims(y_pred_m, axis=1))[:, 0, :]
y_test_m_unscale = tsdata_test.unscale_numpy(np.expand_dims(y_test_m, axis=1))[:, 0, :]
from bigdl.chronos.metric.forecast_metrics import Evaluator
res = Evaluator.evaluate(["smape", "mse"],
y_true=y_test_m_unscale,
y_pred=y_pred_m_unscale)
# evaluate with sMAPE
print("sMAPE is", res[0])
# evaluate with mean_squared_error
print("mean_squared error is", res[1])
multi_target_value = ["AvgRate","total"]
test_date=df[-y_test_m_unscale.shape[0]:].index
plot_predict_actual_values(test_date, y_pred_m_unscale[:,0], y_test_m_unscale[:,0], ylabel=multi_target_value[0])
plot_predict_actual_values(test_date, y_pred_m_unscale[:,1], y_test_m_unscale[:,1], ylabel=multi_target_value[1])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Network Traffic Forecasting (using time series data)
Step2: Download raw dataset and load into dataframe
Step3: Below are some example records of the data
Step4: Data pre-processing
Step5: Plot the data to see what the KPIs look like
Step6: Feature Engineering & Data Preparation
Step7: Time series forecasting
Step8: For univariate forecasting, we use LSTMForecaster for forecasting.
Step9: First we instantiate an LSTMForecaster.
Step10: Then we use fit to train the model. Wait some time for it to finish.
Step11: After training is finished, you can use the forecaster to do prediction and evaluation.
Step12: Since we have used standard scaler to scale the input data (including the target values), we need to inverse the scaling on the predicted values too.
Step13: Calculate the symmetric mean absolute percentage error.
Step14: Multivariate forecasting
Step15: For multivariate forecasting, we use MTNetForecaster for forecasting.
Step16: First, we initialize a mtnet_forecaster according to the input data shape. The lookback length is equal to (long_series_num+1)*series_length; for details, refer to the Chronos docs.
Step17: Now we train the model and wait till it finished.
Step18: Use the model for prediction and inverse the scaling of the prediction results
Step19: plot actual and prediction values for AvgRate KPI
Step20: plot actual and prediction values for total bytes KPI
|
4,533
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division, print_function
import matplotlib.pyplot as plt
%matplotlib inline
import scipy.stats
import numpy as np
def E(W, s):
    # network energy; vectorized because np.sum over a generator expression
    # is unreliable (deprecated) in NumPy
    s = np.asarray(s, dtype=float)
    return -0.5 * np.dot(s, np.dot(W, s))
N = 6
beta_0 = 0.007
tau = 1.06
epsilon = 1e-20
t_max = 150
W = np.random.random(size=(N, N))
W = (W + W.T) / 2 # make symmetric
for i in range(N):
W[i, i] = 0
plt.imshow(W)
M = 1
beta = beta_0
s = np.random.choice([-1, 1], N)
temperatures = np.zeros(t_max)
energies = np.zeros(t_max)
%%time
validation_min = E(W, s)
for t in range(t_max):
for m in range(M):
i = np.random.randint(0, 6)
s_local = np.copy(s)
s_local[i] *= -1
E_1 = E(W, s)
E_2 = E(W, s_local)
E_d = E_2 - E_1
P = 1 / (1 + np.exp(beta*E_d))
# print("\nt:", t, " i:", i, "\n s1:", s, "\tE1:", E_1, "\n s2:", s_local, "\tE2:", E_2)
if np.random.random() < P:
s = np.copy(s_local)
# print("new s")
if E(W, s) < validation_min:
validation_min = E(W, s)
temperatures[t] = 1 / beta
energies[t] = E(W, s)
beta *= tau
plt.figure(figsize=(10, 5))
plt.plot(temperatures)
plt.xlabel('t')
plt.ylabel('Temperature')
plt.figure(figsize=(10, 5))
plt.plot(energies, '.-')
plt.xlabel('t')
plt.ylabel('Energy')
s
M = 500
beta = beta_0
s = np.random.choice([-1, 1], N)
temperatures = np.zeros(t_max)
energies = np.zeros(t_max)
%%time
validation_min = E(W, s)
for t in range(t_max):
for m in range(M):
i = np.random.randint(0, 6)
s_local = np.copy(s)
s_local[i] *= -1
E_1 = E(W, s)
E_2 = E(W, s_local)
E_d = E_2 - E_1
P = 1 / (1 + np.exp(beta*E_d))
# print("\nt:", t, " i:", i, "\n s1:", s, "\tE1:", E_1, "\n s2:", s_local, "\tE2:", E_2)
if np.random.random() < P:
s = np.copy(s_local)
# print("new s")
if E(W, s) < validation_min:
validation_min = E(W, s)
temperatures[t] = 1 / beta
energies[t] = E(W, s)
beta *= tau
plt.figure(figsize=(10, 5))
plt.plot(temperatures)
plt.xlabel('t')
plt.ylabel('Temperature')
plt.figure(figsize=(10, 5))
plt.plot(energies, '.-')
plt.xlabel('t')
plt.ylabel('Energy')
s
# generate all possible states & energies
all_states = [[0, 0, 0, 0, 0, 0] for i in range(2**6)]
all_energies = [0.0 for i in range(2**6)]
for si in range(2**6):
all_states[si] = [int(x) for x in list('{0:06b}'.format(si))]
all_energies[si] = E(W, all_states[si])
plt.figure(figsize=(10, 5))
plt.scatter(range(2**6), all_energies)
plt.title('energies of all possible states')
plt.grid()
plt.show()
probab_beta = [0.005, 1, 3]
for beta in probab_beta:
Z = 0
for en in all_energies:
Z += np.exp(-beta * en)
all_probabilities = [0.0 for i in range(2**6)]
for si in range(2**6):
        all_probabilities[si] = np.exp(-beta * all_energies[si]) / Z  # normalize by the partition function Z
plt.figure(figsize=(10, 5))
plt.scatter(range(2**6), all_probabilities)
    plt.title('Boltzmann probabilities of all possible states for beta {}'.format(beta))
plt.grid()
plt.show()
# Other parameters and W from exercise 1.
epsilon = 1e-50
s = np.random.choice([-1., 1.], N)
e = np.zeros_like(s)
beta = beta_0
temperatures = np.zeros(t_max)
energies = np.zeros(t_max)
%%time
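# Mean-field annealing: at each temperature, iterate the self-consistent updates
# s_i = tanh(-beta * e_i) until the local fields stop changing (distance < epsilon),
# then cool by multiplying beta by tau.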
for t in range(t_max):
#print('t =', t, '- beta =', beta)
distance = np.inf
while distance >= epsilon:
e_old = e.copy()
for i in range(N):
            neighbors = [j for j in range(N) if j != i]  # every unit except i
            e[i] = -np.sum(W[i, neighbors] * s[neighbors])
s[i] = np.tanh(-beta * e[i])
#print(distance)
distance = np.linalg.norm(e - e_old)
temperatures[t] = 1 / beta
energies[t] = E(W, s)
beta *= tau
#print('-'*10)
plt.figure(figsize=(10, 5))
plt.plot(temperatures)
plt.xlabel('t')
plt.ylabel('Temperature')
plt.figure(figsize=(10, 5))
plt.plot(energies, '.-')
plt.xlabel('t')
plt.ylabel('Energy')
s
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 1
Step2: Simulation with M=1
Step3: Simulation with M=500
Step4: All possible states
Step5: Exercise 2
|
4,534
|
<ASSISTANT_TASK:>
Python Code:
import sys
if 'google.colab' in sys.modules:
!pip install --upgrade pip
# Install the TensorFlow Extended library
!pip install -U tfx
import os
import pprint
import tempfile
import urllib
import absl
import tensorflow as tf
import tensorflow_model_analysis as tfma
tf.get_logger().propagate = False
pp = pprint.PrettyPrinter()
from tfx import v1 as tfx
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
%load_ext tfx.orchestration.experimental.interactive.notebook_extensions.skip
print('TensorFlow version: {}'.format(tf.__version__))
print('TFX version: {}'.format(tfx.__version__))
# This is the root directory for your TFX pip package installation.
_tfx_root = tfx.__path__[0]
# This is the directory containing the TFX Chicago Taxi Pipeline example.
_taxi_root = os.path.join(_tfx_root, 'examples/chicago_taxi_pipeline')
# This is the path where your model will be pushed for serving.
# TODO: Your code goes here
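# One possible completion (hedged sketch; the lab leaves this blank on purpose):
# _serving_model_dir = os.path.join(tempfile.mkdtemp(), 'serving_model/taxi_simple')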
# Set up logging.
absl.logging.set_verbosity(absl.logging.INFO)
_data_root = tempfile.mkdtemp(prefix='tfx-data')
DATA_PATH = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/chicago_taxi_pipeline/data/simple/data.csv'
_data_filepath = # TODO: Your code goes here
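# One possible completion (hedged sketch; the lab leaves this blank on purpose):
# _data_filepath = os.path.join(_data_root, 'data.csv')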
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
# print first ten lines of the file
!head {_data_filepath}
# Here, you create an InteractiveContext using default parameters. This will
# use a temporary directory with an ephemeral ML Metadata database instance.
# To use your own pipeline root or database, the optional properties
# `pipeline_root` and `metadata_connection_config` may be passed to
# InteractiveContext. Calls to InteractiveContext are no-ops outside of the
# notebook.
context = InteractiveContext()
example_gen = tfx.components.CsvExampleGen(input_base=_data_root)
context.run(example_gen, enable_cache=True)
artifact = example_gen.outputs['examples'].get()[0]
print(artifact.split_names, artifact.uri)
# Get the URI of the output artifact representing the training examples, which is a directory
train_uri = os.path.join(example_gen.outputs['examples'].get()[0].uri, 'Split-train')
# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
for name in os.listdir(train_uri)]
# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
# Iterate over the first 3 records and decode them.
for tfrecord in dataset.take(3):
serialized_example = tfrecord.numpy()
example = tf.train.Example()
example.ParseFromString(serialized_example)
pp.pprint(example)
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs['examples'])
context.run(statistics_gen, enable_cache=True)
context.show(statistics_gen.outputs['statistics'])
schema_gen = tfx.components.SchemaGen(
statistics=statistics_gen.outputs['statistics'],
infer_feature_shape=False)
context.run(schema_gen, enable_cache=True)
context.show(schema_gen.outputs['schema'])
example_validator = tfx.components.ExampleValidator(
statistics=statistics_gen.outputs['statistics'],
schema=schema_gen.outputs['schema'])
context.run(example_validator, enable_cache=True)
context.show(example_validator.outputs['anomalies'])
_taxi_constants_module_file = 'taxi_constants.py'
%%writefile {_taxi_constants_module_file}
NUMERICAL_FEATURES = ['trip_miles', 'fare', 'trip_seconds']
BUCKET_FEATURES = [
'pickup_latitude', 'pickup_longitude', 'dropoff_latitude',
'dropoff_longitude'
]
# Number of buckets used by tf.transform for encoding each feature.
FEATURE_BUCKET_COUNT = 10
CATEGORICAL_NUMERICAL_FEATURES = [
'trip_start_hour', 'trip_start_day', 'trip_start_month',
'pickup_census_tract', 'dropoff_census_tract', 'pickup_community_area',
'dropoff_community_area'
]
CATEGORICAL_STRING_FEATURES = [
'payment_type',
'company',
]
# Number of vocabulary terms used for encoding categorical features.
VOCAB_SIZE = 1000
# Count of out-of-vocab buckets in which unrecognized categorical are hashed.
OOV_SIZE = 10
# Keys
LABEL_KEY = 'tips'
FARE_KEY = 'fare'
def t_name(key):
  """Rename the feature keys so that they don't clash with the raw keys when
  running the Evaluator component.
  Args:
    key: The original feature key
  Returns:
    key with '_xf' appended
  """
return key + '_xf'
_taxi_transform_module_file = 'taxi_transform.py'
%%writefile {_taxi_transform_module_file}
import tensorflow as tf
import tensorflow_transform as tft
# Imported files such as taxi_constants are normally cached, so changes are
# not honored after the first import. Normally this is good for efficiency, but
# during development when you may be iterating code it can be a problem. To
# avoid this problem during development, reload the file.
import taxi_constants
import sys
if 'google.colab' in sys.modules: # Testing to see if you're doing development
import importlib
importlib.reload(taxi_constants)
_NUMERICAL_FEATURES = taxi_constants.NUMERICAL_FEATURES
_BUCKET_FEATURES = taxi_constants.BUCKET_FEATURES
_FEATURE_BUCKET_COUNT = taxi_constants.FEATURE_BUCKET_COUNT
_CATEGORICAL_NUMERICAL_FEATURES = taxi_constants.CATEGORICAL_NUMERICAL_FEATURES
_CATEGORICAL_STRING_FEATURES = taxi_constants.CATEGORICAL_STRING_FEATURES
_VOCAB_SIZE = taxi_constants.VOCAB_SIZE
_OOV_SIZE = taxi_constants.OOV_SIZE
_FARE_KEY = taxi_constants.FARE_KEY
_LABEL_KEY = taxi_constants.LABEL_KEY
def _make_one_hot(x, key):
  """Make a one-hot tensor to encode categorical features.
  Args:
    x: A dense tensor
    key: A string key for the feature in the input
  Returns:
    A dense one-hot tensor as a float list
  """
integerized = tft.compute_and_apply_vocabulary(x,
top_k=_VOCAB_SIZE,
num_oov_buckets=_OOV_SIZE,
vocab_filename=key, name=key)
depth = (
tft.experimental.get_vocabulary_size_by_name(key) + _OOV_SIZE)
one_hot_encoded = tf.one_hot(
integerized,
depth=tf.cast(depth, tf.int32),
on_value=1.0,
off_value=0.0)
return tf.reshape(one_hot_encoded, [-1, depth])
def _fill_in_missing(x):
  """Replace missing values in a SparseTensor.
  Fills in missing values of `x` with '' or 0, and converts to a dense tensor.
  Args:
    x: A `SparseTensor` of rank 2. Its dense shape should have size at most 1
      in the second dimension.
  Returns:
    A rank 1 tensor where missing values of `x` have been filled in.
  """
if not isinstance(x, tf.sparse.SparseTensor):
return x
default_value = '' if x.dtype == tf.string else 0
return tf.squeeze(
tf.sparse.to_dense(
tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),
default_value),
axis=1)
def preprocessing_fn(inputs):
  """tf.transform's callback function for preprocessing inputs.
  Args:
    inputs: map from feature keys to raw not-yet-transformed features.
  Returns:
    Map from string feature key to transformed feature operations.
  """
outputs = {}
for key in _NUMERICAL_FEATURES:
# If sparse make it dense, setting nan's to 0 or '', and apply zscore.
outputs[taxi_constants.t_name(key)] = tft.scale_to_z_score(
_fill_in_missing(inputs[key]), name=key)
for key in _BUCKET_FEATURES:
outputs[taxi_constants.t_name(key)] = tf.cast(tft.bucketize(
_fill_in_missing(inputs[key]), _FEATURE_BUCKET_COUNT, name=key),
dtype=tf.float32)
for key in _CATEGORICAL_STRING_FEATURES:
outputs[taxi_constants.t_name(key)] = _make_one_hot(_fill_in_missing(inputs[key]), key)
for key in _CATEGORICAL_NUMERICAL_FEATURES:
outputs[taxi_constants.t_name(key)] = _make_one_hot(tf.strings.strip(
tf.strings.as_string(_fill_in_missing(inputs[key]))), key)
# Was this passenger a big tipper?
taxi_fare = _fill_in_missing(inputs[_FARE_KEY])
tips = _fill_in_missing(inputs[_LABEL_KEY])
outputs[_LABEL_KEY] = tf.where(
tf.math.is_nan(taxi_fare),
tf.cast(tf.zeros_like(taxi_fare), tf.int64),
# Test if the tip was > 20% of the fare.
tf.cast(
tf.greater(tips, tf.multiply(taxi_fare, tf.constant(0.2))), tf.int64))
return outputs
transform = tfx.components.Transform(
examples=example_gen.outputs['examples'],
schema=schema_gen.outputs['schema'],
module_file=os.path.abspath(_taxi_transform_module_file))
context.run(transform, enable_cache=True)
transform.outputs
train_uri = transform.outputs['transform_graph'].get()[0].uri
os.listdir(train_uri)
# Get the URI of the output artifact representing the transformed examples, which is a directory
train_uri = os.path.join(transform.outputs['transformed_examples'].get()[0].uri, 'Split-train')
# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
for name in os.listdir(train_uri)]
# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
# Iterate over the first 3 records and decode them.
for tfrecord in dataset.take(3):
serialized_example = tfrecord.numpy()
example = tf.train.Example()
example.ParseFromString(serialized_example)
pp.pprint(example)
_taxi_trainer_module_file = 'taxi_trainer.py'
%%writefile {_taxi_trainer_module_file}
from typing import Dict, List, Text
import os
import glob
from absl import logging
import datetime
import tensorflow as tf
import tensorflow_transform as tft
from tfx import v1 as tfx
from tfx_bsl.public import tfxio
from tensorflow_transform import TFTransformOutput
# Imported files such as taxi_constants are normally cached, so changes are
# not honored after the first import. Normally this is good for efficiency, but
# during development when you may be iterating code it can be a problem. To
# avoid this problem during development, reload the file.
import taxi_constants
import sys
if 'google.colab' in sys.modules: # Testing to see if you're doing development
import importlib
importlib.reload(taxi_constants)
_LABEL_KEY = taxi_constants.LABEL_KEY
_BATCH_SIZE = 40
def _input_fn(file_pattern: List[Text],
data_accessor: tfx.components.DataAccessor,
tf_transform_output: tft.TFTransformOutput,
batch_size: int = 200) -> tf.data.Dataset:
  """Generates features and label for tuning/training.
  Args:
    file_pattern: List of paths or patterns of input tfrecord files.
    data_accessor: DataAccessor for converting input to RecordBatch.
    tf_transform_output: A TFTransformOutput.
    batch_size: representing the number of consecutive elements of returned
      dataset to combine in a single batch
  Returns:
    A dataset that contains (features, indices) tuple where features is a
    dictionary of Tensors, and indices is a single Tensor of label indices.
  """
return data_accessor.tf_dataset_factory(
file_pattern,
tfxio.TensorFlowDatasetOptions(
batch_size=batch_size, label_key=_LABEL_KEY),
tf_transform_output.transformed_metadata.schema)
def _get_tf_examples_serving_signature(model, tf_transform_output):
  """Returns a serving signature that accepts `tensorflow.Example`."""
# You need to track the layers in the model in order to save it.
model.tft_layer_inference = tf_transform_output.transform_features_layer()
@tf.function(input_signature=[
tf.TensorSpec(shape=[None], dtype=tf.string, name='examples')
])
def serve_tf_examples_fn(serialized_tf_example):
    """Returns the output to be used in the serving signature."""
raw_feature_spec = tf_transform_output.raw_feature_spec()
# Remove label feature since these will not be present at serving time.
raw_feature_spec.pop(_LABEL_KEY)
raw_features = tf.io.parse_example(serialized_tf_example, raw_feature_spec)
transformed_features = model.tft_layer_inference(raw_features)
logging.info('serve_transformed_features = %s', transformed_features)
outputs = model(transformed_features)
return {'outputs': outputs}
return serve_tf_examples_fn
def _get_transform_features_signature(model, tf_transform_output):
  """Returns a serving signature that applies tf.Transform to features."""
# You need to track the layers in the model in order to save it.
model.tft_layer_eval = tf_transform_output.transform_features_layer()
@tf.function(input_signature=[
tf.TensorSpec(shape=[None], dtype=tf.string, name='examples')
])
def transform_features_fn(serialized_tf_example):
    """Returns the transformed_features to be fed as input to evaluator."""
raw_feature_spec = tf_transform_output.raw_feature_spec()
raw_features = tf.io.parse_example(serialized_tf_example, raw_feature_spec)
transformed_features = model.tft_layer_eval(raw_features)
logging.info('eval_transformed_features = %s', transformed_features)
return transformed_features
return transform_features_fn
def export_serving_model(tf_transform_output, model, output_dir):
  """Exports a keras model for serving.
  Args:
    tf_transform_output: Wrapper around output of tf.Transform.
    model: A keras model to export for serving.
    output_dir: A directory where the model will be exported to.
  """
  # The layer has to be saved to the model for keras tracking purposes.
model.tft_layer = tf_transform_output.transform_features_layer()
signatures = {
'serving_default':
_get_tf_examples_serving_signature(model, tf_transform_output),
'transform_features':
_get_transform_features_signature(model, tf_transform_output),
}
model.save(output_dir, save_format='tf', signatures=signatures)
def _build_keras_model(tf_transform_output: TFTransformOutput
) -> tf.keras.Model:
  """Creates a DNN Keras model for classifying taxi data.
  Args:
    tf_transform_output: [TFTransformOutput], the outputs from Transform
  Returns:
    A keras Model.
  """
feature_spec = tf_transform_output.transformed_feature_spec().copy()
feature_spec.pop(_LABEL_KEY)
inputs = {}
for key, spec in feature_spec.items():
if isinstance(spec, tf.io.VarLenFeature):
inputs[key] = tf.keras.layers.Input(
shape=[None], name=key, dtype=spec.dtype, sparse=True)
elif isinstance(spec, tf.io.FixedLenFeature):
inputs[key] = tf.keras.layers.Input(
shape=spec.shape or [1], name=key, dtype=spec.dtype)
else:
raise ValueError('Spec type is not supported: ', key, spec)
output = tf.keras.layers.Concatenate()(tf.nest.flatten(inputs))
output = tf.keras.layers.Dense(100, activation='relu')(output)
output = tf.keras.layers.Dense(70, activation='relu')(output)
output = tf.keras.layers.Dense(50, activation='relu')(output)
output = tf.keras.layers.Dense(20, activation='relu')(output)
output = tf.keras.layers.Dense(1)(output)
return tf.keras.Model(inputs=inputs, outputs=output)
# TFX Trainer will call this function.
def run_fn(fn_args: tfx.components.FnArgs):
  """Train the model based on given args.
  Args:
    fn_args: Holds args used to train the model as name/value pairs.
  """
tf_transform_output = tft.TFTransformOutput(fn_args.transform_output)
train_dataset = _input_fn(fn_args.train_files, fn_args.data_accessor,
tf_transform_output, _BATCH_SIZE)
eval_dataset = _input_fn(fn_args.eval_files, fn_args.data_accessor,
tf_transform_output, _BATCH_SIZE)
model = _build_keras_model(tf_transform_output)
model.compile(
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
metrics=[tf.keras.metrics.BinaryAccuracy()])
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=fn_args.model_run_dir, update_freq='batch')
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps,
callbacks=[tensorboard_callback])
# Export the model.
export_serving_model(tf_transform_output, model, fn_args.serving_model_dir)
# use a TFX component to train a TensorFlow model
trainer = tfx.components.Trainer(
module_file= # TODO: Your code goes here,
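    # One possible completion (hedged sketch; the lab leaves this blank on purpose):
    # module_file=os.path.abspath(_taxi_trainer_module_file),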
examples=transform.outputs['transformed_examples'],
transform_graph=transform.outputs['transform_graph'],
schema=schema_gen.outputs['schema'],
train_args=tfx.proto.TrainArgs(num_steps=10000),
eval_args=tfx.proto.EvalArgs(num_steps=5000))
context.run(trainer, enable_cache=True)
model_artifact_dir = trainer.outputs['model'].get()[0].uri
pp.pprint(os.listdir(model_artifact_dir))
model_dir = os.path.join(model_artifact_dir, 'Format-Serving')
pp.pprint(os.listdir(model_dir))
model_run_artifact_dir = trainer.outputs['model_run'].get()[0].uri
%load_ext tensorboard
%tensorboard --logdir {model_run_artifact_dir}
# Imported files such as taxi_constants are normally cached, so changes are
# not honored after the first import. Normally this is good for efficiency, but
# during development when you may be iterating code it can be a problem. To
# avoid this problem during development, reload the file.
import taxi_constants
import sys
if 'google.colab' in sys.modules: # Testing to see if you're doing development
import importlib
importlib.reload(taxi_constants)
eval_config = tfma.EvalConfig(
model_specs=[
# This assumes a serving model with signature 'serving_default'. If
# using estimator based EvalSavedModel, add signature_name: 'eval' and
# remove the label_key.
tfma.ModelSpec(
signature_name='serving_default',
label_key=taxi_constants.LABEL_KEY,
preprocessing_function_names=['transform_features'],
)
],
metrics_specs=[
tfma.MetricsSpec(
# The metrics added here are in addition to those saved with the
# model (assuming either a keras model or EvalSavedModel is used).
# Any metrics added into the saved model (for example using
# model.compile(..., metrics=[...]), etc) will be computed
# automatically.
# To add validation thresholds for metrics saved with the model,
# add them keyed by metric name to the thresholds map.
metrics=[
tfma.MetricConfig(class_name='ExampleCount'),
tfma.MetricConfig(class_name='BinaryAccuracy',
threshold=tfma.MetricThreshold(
value_threshold=tfma.GenericValueThreshold(
lower_bound={'value': 0.5}),
# Change threshold will be ignored if there is no
# baseline model resolved from MLMD (first run).
change_threshold=tfma.GenericChangeThreshold(
direction=tfma.MetricDirection.HIGHER_IS_BETTER,
absolute={'value': -1e-10})))
]
)
],
slicing_specs=[
# An empty slice spec means the overall slice, i.e. the whole dataset.
tfma.SlicingSpec(),
# Data can be sliced along a feature column. In this case, data is
# sliced along feature column trip_start_hour.
tfma.SlicingSpec(
feature_keys=['trip_start_hour'])
])
# Use TFMA to compute evaluation statistics over features of a model and
# validate them against a baseline.
# The model resolver is only required if performing model validation in addition
# to evaluation. In this case you validate against the latest blessed model. If
# no model has been blessed before (as in this case) the evaluator will make your
# candidate the first blessed model.
model_resolver = tfx.dsl.Resolver(
strategy_class=tfx.dsl.experimental.LatestBlessedModelStrategy,
model=tfx.dsl.Channel(type=tfx.types.standard_artifacts.Model),
model_blessing=tfx.dsl.Channel(
type=tfx.types.standard_artifacts.ModelBlessing)).with_id(
'latest_blessed_model_resolver')
context.run(model_resolver, enable_cache=True)
evaluator = tfx.components.Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
baseline_model=model_resolver.outputs['model'],
eval_config=eval_config)
context.run(evaluator, enable_cache=True)
evaluator.outputs
context.show(evaluator.outputs['evaluation'])
import tensorflow_model_analysis as tfma
# Get the TFMA output result path and load the result.
PATH_TO_RESULT = evaluator.outputs['evaluation'].get()[0].uri
tfma_result = tfma.load_eval_result(PATH_TO_RESULT)
# Show data sliced along feature column trip_start_hour.
tfma.view.render_slicing_metrics(
tfma_result, slicing_column='trip_start_hour')
blessing_uri = evaluator.outputs['blessing'].get()[0].uri
!ls -l {blessing_uri}
PATH_TO_RESULT = evaluator.outputs['evaluation'].get()[0].uri
print(tfma.load_validation_result(PATH_TO_RESULT))
pusher = tfx.components.Pusher(
model=trainer.outputs['model'],
model_blessing=evaluator.outputs['blessing'],
push_destination=tfx.proto.PushDestination(
filesystem=tfx.proto.PushDestination.Filesystem(
base_directory=_serving_model_dir)))
context.run(pusher, enable_cache=True)
pusher.outputs
push_uri = pusher.outputs['pushed_model'].get()[0].uri
model = tf.saved_model.load(push_uri)
for item in model.signatures.items():
pp.pprint(item)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install TFX
Step2: Restart the kernel
Step3: Let's check the library versions.
Step4: Set up pipeline paths
Step5: Download example data
Step6: Take a quick look at the CSV file.
Step7: Disclaimer
Step8: Run TFX components interactively
Step9: Let's examine the output artifacts of ExampleGen. This component produces two artifacts, training examples and evaluation examples
Step10: You can also take a look at the first three training examples
Step11: Now that ExampleGen has finished ingesting the data, the next step is data analysis.
Step12: After StatisticsGen finishes running, you can visualize the outputted statistics. Try playing with the different plots!
Step13: SchemaGen
Step14: After SchemaGen finishes running, you can visualize the generated schema as a table.
Step15: Each feature in your dataset shows up as a row in the schema table, alongside its properties. The schema also captures all the values that a categorical feature takes on, denoted as its domain.
Step16: After ExampleValidator finishes running, you can visualize the anomalies as a table.
Step18: In the anomalies table, you can see that there are no anomalies. This is what you'd expect, since this is the first dataset that you've analyzed and the schema is tailored to it. You should review this schema -- anything unexpected means an anomaly in the data. Once reviewed, the schema can be used to guard future data, and anomalies produced here can be used to debug model performance, understand how your data evolves over time, and identify data errors.
Step22: Next, you write a preprocessing_fn that takes in raw data as input, and returns transformed features that your model can train on
Step23: Now, you pass in this feature engineering code to the Transform component and run it to transform your data.
Step24: Let's examine the output artifacts of Transform. This component produces two types of outputs
Step25: Take a peek at the transform_graph artifact. It points to a directory containing three subdirectories.
Step26: The transformed_metadata subdirectory contains the schema of the preprocessed data. The transform_fn subdirectory contains the actual preprocessing graph. The metadata subdirectory contains the schema of the original data.
Step35: After the Transform component has transformed your data into features, the next step is to train a model.
Step36: Now, you pass in this model code to the Trainer component and run it to train the model.
Step37: Analyze Training with TensorBoard
Step38: Optionally, you can connect TensorBoard to the Trainer to analyze your model's training curves.
Step39: Evaluator
Step40: Next, you give this configuration to Evaluator and run it.
Step41: Now let's examine the output artifacts of Evaluator.
Step42: Using the evaluation output you can show the default visualization of global metrics on the entire evaluation set.
Step43: To see the visualization for sliced evaluation metrics, you can directly call the TensorFlow Model Analysis library.
Step44: This visualization shows the same metrics, but computed at every feature value of trip_start_hour instead of on the entire evaluation set.
Step45: Now you can also verify the success by loading the validation result record.
Step46: Pusher
Step47: Let's examine the output artifacts of Pusher.
Step48: In particular, the Pusher will export your model in the SavedModel format, which looks like this
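A rough illustration (not part of the pipeline above) of exercising the pushed serving signature with a hand-built tf.Example; the feature name 'trip_miles' is a placeholder and the keyword 'examples' is an assumption about the signature's input tensor name, which the pp.pprint output above will confirm:
import tensorflow as tf
loaded = tf.saved_model.load(push_uri)
serve_fn = loaded.signatures['serving_default']
example = tf.train.Example(features=tf.train.Features(feature={
    'trip_miles': tf.train.Feature(float_list=tf.train.FloatList(value=[1.2])),
}))
# Pass a batch of serialized tf.Example protos to the signature.
print(serve_fn(examples=tf.constant([example.SerializeToString()])))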
|
4,535
|
<ASSISTANT_TASK:>
Python Code:
import graphviz
import pandas
from sklearn import tree
from sklearn.model_selection import train_test_split
clf = tree.DecisionTreeClassifier()
input = pandas.read_csv("/home/glenn/git/clojure-news-feed/client/ml/etl/throughput.csv")
data = input[input.columns[6:9]]
target = input['cloud']
X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.4, random_state=0)
clf = clf.fit(X_train, y_train)
clf.score(X_test, y_test)
import graphviz
import pandas
from sklearn import tree
from sklearn.model_selection import train_test_split
clf = tree.DecisionTreeClassifier()
input = pandas.read_csv("/home/glenn/git/clojure-news-feed/client/ml/etl/latency.csv")
data = input[input.columns[7:9]]
data['cloud'] = input['cloud'].apply(lambda x: 1.0 if x == 'GKE' else 0.0)
target = input['feed']
X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.4, random_state=0)
clf = clf.fit(X_train, y_train)
clf.score(X_test, y_test)
import tensorflow as tf
import numpy as np
import pandas
from tensorflow.python.ops import parsing_ops
from tensorflow.contrib.tensor_forest.python import tensor_forest
from tensorflow.contrib.learn.python.learn.utils import input_fn_utils
from sklearn.model_selection import train_test_split
input = pandas.read_csv("/home/glenn/git/clojure-news-feed/client/ml/etl/latency.csv")
data = input[input.columns[7:9]]
data['cloud'] = input['cloud'].apply(lambda x: 1.0 if x == 'GKE' else 0.0)
X_train, X_test, y_train, y_test = train_test_split(data, input['feed'], test_size=0.4, random_state=0)
X_train_np = np.array(X_train, dtype=np.float32)
y_train_np = np.array(y_train, dtype=np.int32)
X_test_np = np.array(X_test, dtype=np.float32)
y_test_np = np.array(y_test, dtype=np.int32)
hparams = tensor_forest.ForestHParams(num_classes=7,
num_features=3,
num_trees=1,
regression=False,
max_nodes=500).fill()
classifier = tf.contrib.tensor_forest.client.random_forest.TensorForestEstimator(hparams)
c = classifier.fit(x=X_train_np, y=y_train_np)
c.evaluate(x=X_test_np, y=y_test_np)
import tensorflow as tf
import numpy as np
import pandas
from tensorflow.python.ops import parsing_ops
from tensorflow.contrib.tensor_forest.python import tensor_forest
from tensorflow.contrib.learn.python.learn.utils import input_fn_utils
from sklearn.model_selection import train_test_split
input = pandas.read_csv("/home/glenn/git/clojure-news-feed/client/ml/etl/throughput.csv")
data = input[input.columns[6:9]]
target = input['cloud'].apply(lambda x: 1.0 if x == 'GKE' else 0.0)
X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.4, random_state=0)
X_train_np = np.array(X_train, dtype=np.float32)
y_train_np = np.array(y_train, dtype=np.int32)
X_test_np = np.array(X_test, dtype=np.float32)
y_test_np = np.array(y_test, dtype=np.int32)
hparams = tensor_forest.ForestHParams(num_classes=3,
num_features=3,
num_trees=1,
regression=False,
max_nodes=500).fill()
classifier = tf.contrib.tensor_forest.client.random_forest.TensorForestEstimator(hparams)
c = classifier.fit(x=X_train_np, y=y_train_np)
c.evaluate(x=X_test_np, y=y_test_np)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, we evaluate scikit-learn accuracy where we predict feed implementation based on latency.
Step2: As you can see, scikit-learn has a 99% accuracy rate. We now do the same thing with tensorflow.
Step3: It looks like TensorFlow has a 98% accuracy rate, which is 1% lower than the scikit-learn algorithm. Let us use TensorFlow to look at the accuracy of predicting cloud vendor based on throughput.
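One possible follow-up (not in the original notebook): inspect where the scikit-learn classifier's errors fall with a confusion matrix, assuming clf, X_test and y_test still refer to the estimator and held-out split from one of the scikit-learn cells above.
from sklearn.metrics import confusion_matrix
predictions = clf.predict(X_test)
print(confusion_matrix(y_test, predictions))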
|
4,536
|
<ASSISTANT_TASK:>
Python Code:
cd /tmp
# Delete the repo if it happens to already exist:
!rm -rf git-intro
# Create the repo
!git clone https://github.com/DS-100/git-intro git-intro
!ls -lh | grep git-intro
cd git-intro
# What files are in the repo?
!ls -lh
# What about hidden files?
!ls -alh
# What's the current status, according to git?
!git status
# What's the history of the repo?
!git log
# What does README.md look like currently?
!cat README.md
# We can use Python to compute the filename.
# Then we can reference Python variables in
# ! shell commands using {}, because Jupyter
# is magic.
import datetime
our_id = datetime.datetime.now().microsecond
filename = "our_file_{:d}.txt".format(our_id)
filename
!echo "The quick brown fox \
jumped over the lazy dog." > "{filename}"
!ls
!git add "{filename}"
!git status
!git commit -m 'Added our new file, "{filename}"'
!git status
!git log
!git remote -v
!git help push
!git push origin
!git help branch
!git branch --list
# Let's make a new name for the first commit, "going-back",
# with our ID in there so we don't conflict with other
# sections.
!git branch going-back-{our_id} first
!git branch --list
!git checkout going-back-{our_id}
!ls
!git status
!git log --graph --decorate first going-back-{our_id} master
new_filename = "our_second_file_{}.txt".format(our_id)
new_filename
!echo "Text for our second file!" > {new_filename}
!ls
!git add {new_filename}
!git commit -m'Adding our second file!'
!git status
!git log --graph --decorate first going-back-{our_id} master
!git push origin going-back-{our_id}
cd /tmp
!git clone https://github.com/DS-100/git-intro git-intro-2
cd /tmp/git-intro-2
!git checkout going-back-{our_id}
third_filename = "our_third_file_{}.txt".format(our_id)
third_filename
!echo "Someone else added this third file!" > {third_filename}
!git add {third_filename}
!git commit -m"Adding a third file!"
!git push
cd /tmp/git-intro
!git help fetch
!git fetch origin
!git log --graph --decorate going-back-{our_id} origin/going-back-{our_id}
!git merge origin/going-back-{our_id} --ff-only
!git log --graph --decorate
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Looking at files in a repo
Step2: The special .git directory is where git stores all its magic. If you delete it (or this whole directory), the repository won't be a repository any more.
Step3: Making changes
Step4: Creating the file only changed the local filesystem. We can go to the repository page on Github to verify that the file hasn't been added yet. You probably wouldn't want your changes to be published immediately to the world!
Step5: If you check again, our file still hasn't been published to the world. In git, you package together your new files and updates to old files, and then you create a new version called a "commit."
Step6: Now our local repository has this new commit in it. Notice that the log shows the message we wrote when we made the commit. It is very tempting to write something like "stuff" here. But then it will be very hard to understand your history, and you'll lose some of the benefits of git.
Step7: Now our commit is finally visible on Github. Even if we spill coffee on our laptop, our new state will be safely recorded in the remote repository.
Step8: Note
Step9: How does committing work?
Step10: Here origin is the name (according to git remote -v) of the repository you want to push to. If you omit a remote name, origin is also the default. Normally that's what you want.
Step11: Now we go back to our original repo.
Step12: You might just want the update. Or maybe you want to push your own commit to the same branch, and your git push failed.
Step13: Now we need to update our ref to the newer commit. In this case, it's easy, because we didn't have any further commits. Git calls that a "fast-forward" merge.
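For contrast, a hedged sketch (not part of this walkthrough) of the non-fast-forward case: if both sides had new commits, git merge would create a merge commit instead.
!git fetch origin
!git merge origin/going-back-{our_id} -m "Merging remote changes"
!git log --graph --decorate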
|
4,537
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from bigbang.archive import Archive
import bigbang.parse as parse
import bigbang.graph as graph
import bigbang.mailman as mailman
import bigbang.process as process
import bigbang.twopeople as twoppl
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
import pandas as pd
from pprint import pprint as pp
import pytz
import math
url = "http://mail.python.org/pipermail/scipy-dev/"
arx= Archive(url,archive_dir="../archives")
arx.data.irow(0).Body
arx.data.shape
arx.data[arx.data['In-Reply-To'] > 0][:10]
messages = arx.data[['From']]
responses = arx.data[arx.data['In-Reply-To'] > 0][['From','Date','In-Reply-To']]
exchanges = pd.merge(messages,responses,how='inner',right_on='In-Reply-To',left_index=True,suffixes=['_original','_response'])
exchanges
exchanges.groupby(['From_original','From_response']).count()
twoppl.duration(exchanges, "oliphant at ee.byu.edu (Travis Oliphant)", "rossini at blindglobe.net (A.J. Rossini)" )
twoppl.panda_pair(exchanges, "oliphant at ee.byu.edu (Travis Oliphant)", "rossini at blindglobe.net (A.J. Rossini)" )
twoppl.duration(exchanges, "oliphant at ee.byu.edu (Travis Oliphant)", "rossini at blindglobe.net (A.J. Rossini)" )
pairs = twoppl.unique_pairs(exchanges)
pairs
allpairs = twoppl.panda_allpairs(exchanges, pairs)
allpairs
print "corresponding counts for number of replies up to 19"
print ("number of replies", "frequency of occurrence")
for i in range(20):
print (i, len(allpairs[allpairs['num_replies'] <= i]) - len(allpairs[allpairs['num_replies'] <= i - 1]))
plt.hist(allpairs['num_replies'])
plt.title("Number of replies")
greaterThanFive = allpairs[allpairs['num_replies'] > 5]['num_replies']
counts = greaterThanFive.value_counts()
counts.plot()
#Completely arbitrary bins
#Group A reciprocity between (0, .25]
#Group B reciprocity between (.25, .5]
#Group C reciprocity between (.5, .75]
#Group D reciprocity between (.75, 1.00]
#"low" number of replies less than or equal to 10
#"moderate" number of replies between 10 and 20
#"high" replies greater than 20 replies
def genId(num):
if num <= .25:
return 'A.(0, .25]'
if num <= .5:
return "B.(.25, .5]"
if num <= .75:
return "C.(.5, .75]"
return "D.(.75, 1.00]"
def genNumReplies(num):
if num <= 10:
return 'a.low'
if num <= 20:
return "b.moderate"
return "c.high"
moreThanFive = allpairs[allpairs['num_replies'] > 5]
recipVec = moreThanFive['reciprocity']
numReplies = moreThanFive['num_replies']
ids = recipVec.apply(lambda val: genId(val))
groupedRep = numReplies.apply(lambda val: genNumReplies(val))
groupsdf = pd.DataFrame({"num_replies": numReplies, "ids": ids, "grouped_num_replies": groupedRep})
groupsdf
grpA = groupsdf[groupsdf["ids"] == "A.(0, .25]"]['num_replies']
grpB = groupsdf[groupsdf["ids"] == "B.(.25, .5]"]['num_replies']
grpC = groupsdf[groupsdf["ids"] == "C.(.5, .75]"]['num_replies']
grpD = groupsdf[groupsdf["ids"] == "D.(.75, 1.00]"]['num_replies']
grpA.value_counts().hist()
plt.title("Number of Replies for Reciprocity between 0-.25")
grpB.value_counts().hist()
plt.title("Number of Replies for Reciprocity between .25-.5")
grpC.value_counts().hist()
plt.title("Number of Replies for Reciprocity between .5-.75")
grpD.value_counts().hist()
plt.title("Number of Replies for Reciprocity between .75-1.0")
crossed = pd.crosstab(groupsdf["grouped_num_replies"], groupsdf["ids"])
crossed
crossed.plot()
crossed.apply(lambda r: r/sum(r), axis=0)
crossed.apply(lambda r: r/sum(r), axis=0).plot()
plt.title("normalized (columnwise) plot")
crossed.apply(lambda r: r/sum(r), axis=1)
crossed.apply(lambda r: r/sum(r), axis=1).plot()
plt.title("normalized (row-wise) plot")
recipVec2 = allpairs['reciprocity']
numReplies2 = allpairs['num_replies']
ids = recipVec2.apply(lambda val: genId(val))
groupedRep2 = numReplies2.apply(lambda val: genNumReplies(val))
groupsdf2 = pd.DataFrame({"num_replies": numReplies2, "ids": ids, "grouped_num_replies": groupedRep2})
crossed2 = pd.crosstab(groupsdf2["grouped_num_replies"], groupsdf2["ids"])
crossed2
crossed2.plot()
crossed2.apply(lambda r: r/sum(r), axis=0)
crossed2.apply(lambda r: r/sum(r), axis=0).plot()
plt.scatter(allpairs.num_replies, allpairs.reciprocity)
plt.title("number of replies vs. reciprocity")
allpairs['duration'] = allpairs['duration'].apply(lambda x: x.item() / pow(10,9))
plt.scatter(allpairs.duration, allpairs.num_replies)
plt.title("duration vs. number of replies")
df_filt = allpairs[allpairs['num_replies'] > 5]
plt.scatter(df_filt.reciprocity, df_filt.duration)
plt.title("reciprocity vs. duration")
plt.scatter(df_filt.reciprocity, df_filt.duration.apply(lambda x: math.log(x)))
plt.title("reciprocity vs. log of duration")
plt.scatter(df_filt.duration.apply(lambda x: math.log(x+1)), df_filt.num_replies.apply(lambda x: math.log(x+1)))
plt.title("log of duration vs. log of number of replies")
def calMessageLen(message):
if message == None:
return 0
return len(message)
arx.data['length'] = arx.data['Body'].apply(lambda x: calMessageLen(x))
arx.data
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next we'll import dependencies.
Step2: Let's begin with just one mailing list to simplify.
Step3: Let's look at the matrix of who replies to whom over the whole history of the list, to get a sense of the overall distribution
Step4: In particular we are interested in who replied to who at each time. Recall that this is an open mailing list--everybody potentially reads each message. A response from A to B is an indication that A read B's original message. Therefore, a response indicates not just a single message from A to B, but an exchange from B to A and back again.
Step5: The next step is to create a DataFrame that for each pair A and B
Step6: The "duration" column gives us a datetime data structure so we must be careful later on when extracting information from this column.
Step7: Now let's create a dataframe that consists of these three quantities (duration, number of replies, and reciprocity) for each pair of contributors who interacted.
Step8: Using panda_allpairs, we can create the desired data frame
Step9: The printed values below display the counts associated with each number of replies.
Step10: To get a better idea of what's going on for larger values, let's look at the rows that have number of replies greater than 5.
Step11: The graph seems to follow a power law, which is expected for this type of data (a quick log-log check is sketched after this list).
Step12: The following lines generate a data frame that contains three columns
Step13: Now that we have this data frame lets look at the corresponding histograms for each "level" of reciprocity.
Step14: It's pretty hard to compare the four histograms so let's create a contingency table for the groupsdf data frame.
Step15: Since each reciprocity group has a different amount of counts, let's normalize the counts to get a better picture of what's going on.
Step16: We see that at the very extremes, namely reciprocity between 0-.25 and reciprocity between .75-1.0, there are some clear differences; reciprocity seems to be positively associated with the number of replies as we had initially expected.
Step17: Now we will do the normalization by row. This shows how the counts for each number-of-replies bin are distributed across the reciprocity bins.
Step18: Now let's go back and do the exact same thing but not removing entries with a very low amount of replies.
Step19: Now we will look at various scatterplots for different variables to get a rough sense of how our data is spread.
Step20: Now let's look at some scatterplots for the entries with number of replies greater than 5.
Step21: Since we actually have the bodies of each message, we will now proceed by seeing if there are any patterns between the type of messages sent and reciprocity, duration, and the number of replies.
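Returning to the power-law impression from Step 11, a quick log-log fit (a sketch, not part of the original analysis) on the reply-count frequencies gives a rough exponent estimate:
counts = allpairs[allpairs['num_replies'] > 5]['num_replies'].value_counts()
x = np.log(counts.index.values.astype(float))
y = np.log(counts.values.astype(float))
slope, intercept = np.polyfit(x, y, 1)
print "estimated power-law exponent:", slope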
|
4,538
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import graphlab
from scipy.sparse import csr_matrix
from sklearn.metrics.pairwise import pairwise_distances
import time
from copy import copy
import matplotlib.pyplot as plt
%matplotlib inline
'''Check GraphLab Create version'''
from distutils.version import StrictVersion
assert (StrictVersion(graphlab.version) >= StrictVersion('1.8.5')), 'GraphLab Create must be version 1.8.5 or later.'
'''compute norm of a sparse vector
Thanks to: Jaiyam Sharma'''
def norm(x):
sum_sq=x.dot(x.T)
norm=np.sqrt(sum_sq)
return(norm)
wiki = graphlab.SFrame('people_wiki.gl/')
wiki = wiki.add_row_number()
wiki
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['text'])
wiki
def sframe_to_scipy(column):
Convert a dict-typed SArray into a SciPy sparse matrix.
Returns
-------
mat : a SciPy sparse matrix where mat[i, j] is the value of word j for document i.
mapping : a dictionary where mapping[j] is the word whose values are in column j.
# Create triples of (row_id, feature_id, count).
x = graphlab.SFrame({'X1':column})
# 1. Add a row number.
x = x.add_row_number()
# 2. Stack will transform x to have a row for each unique (row, key) pair.
x = x.stack('X1', ['feature', 'value'])
# Map words into integers using a OneHotEncoder feature transformation.
f = graphlab.feature_engineering.OneHotEncoder(features=['feature'])
# We first fit the transformer using the above data.
f.fit(x)
# The transform method will add a new column that is the transformed version
# of the 'word' column.
x = f.transform(x)
# Get the feature mapping.
mapping = f['feature_encoding']
# Get the actual word id.
x['feature_id'] = x['encoded_features'].dict_keys().apply(lambda x: x[0])
# Create numpy arrays that contain the data for the sparse matrix.
i = np.array(x['id'])
j = np.array(x['feature_id'])
v = np.array(x['value'])
width = x['id'].max() + 1
height = x['feature_id'].max() + 1
# Create a sparse matrix.
mat = csr_matrix((v, (i, j)), shape=(width, height))
return mat, mapping
start=time.time()
corpus, mapping = sframe_to_scipy(wiki['tf_idf'])
end=time.time()
print end-start
assert corpus.shape == (59071, 547979)
print 'Check passed correctly!'
def generate_random_vectors(num_vector, dim):
return np.random.randn(dim, num_vector)
# Generate 3 random vectors of dimension 5, arranged into a single 5 x 3 matrix.
np.random.seed(0) # set seed=0 for consistent results
generate_random_vectors(num_vector=3, dim=5)
# Generate 16 random vectors of dimension 547979
np.random.seed(0)
random_vectors = generate_random_vectors(num_vector=16, dim=547979)
random_vectors.shape
doc = corpus[0, :] # vector of tf-idf values for document 0
doc.dot(random_vectors[:, 0]) >= 0 # True if positive sign; False if negative sign
doc.dot(random_vectors[:, 1]) >= 0 # True if positive sign; False if negative sign
doc.dot(random_vectors) >= 0 # should return an array of 16 True/False bits
np.array(doc.dot(random_vectors) >= 0, dtype=int) # display index bits in 0/1's
corpus[0:2].dot(random_vectors) >= 0 # compute bit indices of first two documents
corpus.dot(random_vectors) >= 0 # compute bit indices of ALL documents
doc = corpus[0, :] # first document
index_bits = (doc.dot(random_vectors) >= 0)
powers_of_two = (1 << np.arange(15, -1, -1))
print index_bits
print powers_of_two
print index_bits.dot(powers_of_two)
index_bits = corpus.dot(random_vectors) >= 0
index_bits.dot(powers_of_two)
def train_lsh(data, num_vector=16, seed=None):
dim = data.shape[1]
if seed is not None:
np.random.seed(seed)
random_vectors = generate_random_vectors(num_vector, dim)
powers_of_two = 1 << np.arange(num_vector-1, -1, -1)
table = {}
# Partition data points into bins
bin_index_bits = (data.dot(random_vectors) >= 0)
# Encode bin index bits into integers
bin_indices = bin_index_bits.dot(powers_of_two)
# Update `table` so that `table[i]` is the list of document ids with bin index equal to i.
for data_index, bin_index in enumerate(bin_indices):
if bin_index not in table:
# If no list yet exists for this bin, assign the bin an empty list.
table[bin_index] = []
# Fetch the list of document ids associated with the bin and add the document id to the end.
table[bin_index].append(data_index)
model = {'data': data,
'bin_index_bits': bin_index_bits,
'bin_indices': bin_indices,
'table': table,
'random_vectors': random_vectors,
'num_vector': num_vector}
return model
model = train_lsh(corpus, num_vector=16, seed=143)
table = model['table']
if 0 in table and table[0] == [39583] and \
143 in table and table[143] == [19693, 28277, 29776, 30399]:
print 'Passed!'
else:
print 'Check your code.'
wiki[wiki['name'] == 'Barack Obama']
model['bin_indices'][wiki[wiki['name'] == 'Barack Obama']['id']] #obama bin index
wiki[wiki['name'] == 'Joe Biden']
obama_bits = np.array(model['bin_index_bits'][wiki[wiki['name'] == 'Barack Obama']['id']], dtype=int)
print obama_bits
biden_bits = np.array(model['bin_index_bits'][wiki[wiki['name'] == 'Joe Biden']['id']], dtype=int)
print biden_bits
print "# of similar bits = 14 " + str(len(obama_bits[0]) - np.bitwise_xor(obama_bits[0], biden_bits[0]).sum())
wiki[wiki['name']=='Wynn Normington Hugh-Jones']
print np.array(model['bin_index_bits'][22745], dtype=int) # list of 0/1's
print model['bin_indices'][22745] # integer format
model['bin_index_bits'][35817] == model['bin_index_bits'][22745]
model['table'][model['bin_indices'][35817]] # all document ids in the same bin as barack obama
doc_ids = list(model['table'][model['bin_indices'][35817]])
doc_ids.remove(35817) # display documents other than Obama
docs = wiki.filter_by(values=doc_ids, column_name='id') # filter by id column
docs
def cosine_distance(x, y):
xy = x.dot(y.T)
dist = xy/(norm(x)*norm(y))
return 1-dist[0,0]
obama_tf_idf = corpus[35817,:]
biden_tf_idf = corpus[24478,:]
print '================= Cosine distance from Barack Obama'
print 'Barack Obama - {0:24s}: {1:f}'.format('Joe Biden',
cosine_distance(obama_tf_idf, biden_tf_idf))
for doc_id in doc_ids:
doc_tf_idf = corpus[doc_id,:]
print 'Barack Obama - {0:24s}: {1:f}'.format(wiki[doc_id]['name'],
cosine_distance(obama_tf_idf, doc_tf_idf))
from itertools import combinations
num_vector = 16
search_radius = 3
for diff in combinations(range(num_vector), search_radius):
print diff
def search_nearby_bins(query_bin_bits, table, search_radius=2, initial_candidates=set()):
For a given query vector and trained LSH model, return all candidate neighbors for
the query among all bins within the given search radius.
Example usage
-------------
>>> model = train_lsh(corpus, num_vector=16, seed=143)
>>> q = model['bin_index_bits'][0] # vector for the first document
>>> candidates = search_nearby_bins(q, model['table'])
num_vector = len(query_bin_bits)
powers_of_two = 1 << np.arange(num_vector-1, -1, -1)
# Allow the user to provide an initial set of candidates.
candidate_set = copy(initial_candidates)
for different_bits in combinations(range(num_vector), search_radius):
# Flip the bits (n_1,n_2,...,n_r) of the query bin to produce a new bit vector.
## Hint: you can iterate over a tuple like a list
alternate_bits = copy(query_bin_bits)
for i in different_bits:
alternate_bits[i] = 0 if alternate_bits[i] == 1 else 1
# Convert the new bit vector to an integer index
nearby_bin = alternate_bits.dot(powers_of_two)
# Fetch the list of documents belonging to the bin indexed by the new bit vector.
# Then add those documents to candidate_set
# Make sure that the bin exists in the table!
# Hint: update() method for sets lets you add an entire list to the set
if nearby_bin in table:
# Update candidate_set with the documents in this bin.
candidate_set.update(table[nearby_bin])
return candidate_set
obama_bin_index = model['bin_index_bits'][35817] # bin index of Barack Obama
candidate_set = search_nearby_bins(obama_bin_index, model['table'], search_radius=0)
if candidate_set == set([35817, 21426, 53937, 39426, 50261]):
print 'Passed test'
else:
print 'Check your code'
print 'List of documents in the same bin as Obama: 35817, 21426, 53937, 39426, 50261'
candidate_set = search_nearby_bins(obama_bin_index, model['table'], search_radius=1, initial_candidates=candidate_set)
if candidate_set == set([39426, 38155, 38412, 28444, 9757, 41631, 39207, 59050, 47773, 53937, 21426, 34547,
23229, 55615, 39877, 27404, 33996, 21715, 50261, 21975, 33243, 58723, 35817, 45676,
19699, 2804, 20347]):
print 'Passed test'
else:
print 'Check your code'
def query(vec, model, k, max_search_radius):
data = model['data']
table = model['table']
random_vectors = model['random_vectors']
num_vector = random_vectors.shape[1]
# Compute bin index for the query vector, in bit representation.
bin_index_bits = (vec.dot(random_vectors) >= 0).flatten()
# Search nearby bins and collect candidates
candidate_set = set()
for search_radius in xrange(max_search_radius+1):
candidate_set = search_nearby_bins(bin_index_bits, table, search_radius, initial_candidates=candidate_set)
# Sort candidates by their true distances from the query
nearest_neighbors = graphlab.SFrame({'id':candidate_set})
candidates = data[np.array(list(candidate_set)),:]
nearest_neighbors['distance'] = pairwise_distances(candidates, vec, metric='cosine').flatten()
return nearest_neighbors.topk('distance', k, reverse=True), len(candidate_set)
query(corpus[35817,:], model, k=10, max_search_radius=3)
query(corpus[35817,:], model, k=10, max_search_radius=3)[0].join(wiki[['id', 'name']], on='id').sort('distance')
wiki[wiki['name']=='Barack Obama']
num_candidates_history = []
query_time_history = []
max_distance_from_query_history = []
min_distance_from_query_history = []
average_distance_from_query_history = []
for max_search_radius in xrange(17):
start=time.time()
result, num_candidates = query(corpus[35817,:], model, k=10,
max_search_radius=max_search_radius)
end=time.time()
query_time = end-start
print 'Radius:', max_search_radius
print result.join(wiki[['id', 'name']], on='id').sort('distance')
average_distance_from_query = result['distance'][1:].mean()
max_distance_from_query = result['distance'][1:].max()
min_distance_from_query = result['distance'][1:].min()
num_candidates_history.append(num_candidates)
query_time_history.append(query_time)
average_distance_from_query_history.append(average_distance_from_query)
max_distance_from_query_history.append(max_distance_from_query)
min_distance_from_query_history.append(min_distance_from_query)
plt.figure(figsize=(7,4.5))
plt.plot(num_candidates_history, linewidth=4)
plt.xlabel('Search radius')
plt.ylabel('# of documents searched')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(query_time_history, linewidth=4)
plt.xlabel('Search radius')
plt.ylabel('Query time (seconds)')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(average_distance_from_query_history, linewidth=4, label='Average of 10 neighbors')
plt.plot(max_distance_from_query_history, linewidth=4, label='Farthest of 10 neighbors')
plt.plot(min_distance_from_query_history, linewidth=4, label='Closest of 10 neighbors')
plt.xlabel('Search radius')
plt.ylabel('Cosine distance of neighbors')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
print "radius = 2"
print "10"
average_distance_from_query_history
def brute_force_query(vec, data, k):
num_data_points = data.shape[0]
# Compute distances for ALL data points in training set
nearest_neighbors = graphlab.SFrame({'id':range(num_data_points)})
nearest_neighbors['distance'] = pairwise_distances(data, vec, metric='cosine').flatten()
return nearest_neighbors.topk('distance', k, reverse=True)
max_radius = 17
precision = {i:[] for i in xrange(max_radius)}
average_distance = {i:[] for i in xrange(max_radius)}
query_time = {i:[] for i in xrange(max_radius)}
np.random.seed(0)
num_queries = 10
for i, ix in enumerate(np.random.choice(corpus.shape[0], num_queries, replace=False)):
print('%s / %s' % (i, num_queries))
ground_truth = set(brute_force_query(corpus[ix,:], corpus, k=25)['id'])
# Get the set of 25 true nearest neighbors
for r in xrange(1,max_radius):
start = time.time()
result, num_candidates = query(corpus[ix,:], model, k=10, max_search_radius=r)
end = time.time()
query_time[r].append(end-start)
# precision = (# of neighbors both in result and ground_truth)/10.0
precision[r].append(len(set(result['id']) & ground_truth)/10.0)
average_distance[r].append(result['distance'][1:].mean())
plt.figure(figsize=(7,4.5))
plt.plot(range(1,17), [np.mean(average_distance[i]) for i in xrange(1,17)], linewidth=4, label='Average over 10 neighbors')
plt.xlabel('Search radius')
plt.ylabel('Cosine distance')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(1,17), [np.mean(precision[i]) for i in xrange(1,17)], linewidth=4, label='Precison@10')
plt.xlabel('Search radius')
plt.ylabel('Precision')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(1,17), [np.mean(query_time[i]) for i in xrange(1,17)], linewidth=4, label='Query time')
plt.xlabel('Search radius')
plt.ylabel('Query time (seconds)')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
precision = {i:[] for i in xrange(5,20)}
average_distance = {i:[] for i in xrange(5,20)}
query_time = {i:[] for i in xrange(5,20)}
num_candidates_history = {i:[] for i in xrange(5,20)}
ground_truth = {}
np.random.seed(0)
num_queries = 10
docs = np.random.choice(corpus.shape[0], num_queries, replace=False)
for i, ix in enumerate(docs):
ground_truth[ix] = set(brute_force_query(corpus[ix,:], corpus, k=25)['id'])
# Get the set of 25 true nearest neighbors
for num_vector in xrange(5,20):
print('num_vector = %s' % (num_vector))
model = train_lsh(corpus, num_vector, seed=143)
for i, ix in enumerate(docs):
start = time.time()
result, num_candidates = query(corpus[ix,:], model, k=10, max_search_radius=3)
end = time.time()
query_time[num_vector].append(end-start)
precision[num_vector].append(len(set(result['id']) & ground_truth[ix])/10.0)
average_distance[num_vector].append(result['distance'][1:].mean())
num_candidates_history[num_vector].append(num_candidates)
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(average_distance[i]) for i in xrange(5,20)], linewidth=4, label='Average over 10 neighbors')
plt.xlabel('# of random vectors')
plt.ylabel('Cosine distance')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(precision[i]) for i in xrange(5,20)], linewidth=4, label='Precison@10')
plt.xlabel('# of random vectors')
plt.ylabel('Precision')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(query_time[i]) for i in xrange(5,20)], linewidth=4, label='Query time (seconds)')
plt.xlabel('# of random vectors')
plt.ylabel('Query time (seconds)')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(num_candidates_history[i]) for i in xrange(5,20)], linewidth=4,
label='# of documents searched')
plt.xlabel('# of random vectors')
plt.ylabel('# of documents searched')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in the Wikipedia dataset
Step2: For this assignment, let us assign a unique ID to each document.
Step3: Extract TF-IDF matrix
Step5: For the remainder of the assignment, we will use sparse matrices. Sparse matrices are [matrices](https
Step6: The conversion should take a few minutes to complete.
Step7: Checkpoint
Step8: Train an LSH model
Step9: To visualize these Gaussian random vectors, let's look at an example in low-dimensions. Below, we generate 3 random vectors each of dimension 5.
Step10: We now generate random vectors of the same dimensionality as our vocabulary size (547979). Each vector can be used to compute one bit in the bin encoding. We generate 16 vectors, leading to a 16-bit encoding of the bin index for each document.
Step11: Next, we partition data points into bins. Instead of using explicit loops, we'd like to utilize matrix operations for greater efficiency. Let's walk through the construction step by step.
Step12: Similarly, the second bit is computed as the sign of the dot product between the second random vector and the document vector.
Step13: We can compute all of the bin index bits at once as follows. Note the absence of the explicit for loop over the 16 vectors. Matrix operations let us batch dot-product computation in a highly efficient manner, unlike the for loop construction. Given the relative inefficiency of loops in Python, the advantage of matrix operations is even greater.
Step14: All documents that obtain exactly this vector will be assigned to the same bin. We'd like to repeat the identical operation on all documents in the Wikipedia dataset and compute the corresponding bin indices. Again, we use matrix operations so that no explicit loop is needed.
Step15: We're almost done! To make it convenient to refer to individual bins, we convert each binary bin index into a single integer (a small worked example of this encoding appears after this list).
Step16: Since it's the dot product again, we batch it with a matrix operation
Step17: This array gives us the integer index of the bins for all documents.
Step18: Checkpoint.
Step19: Note. We will be using the model trained here in the following sections, unless otherwise indicated.
Step20: Quiz Question. What is the document id of Barack Obama's article?
Step21: Recall from the previous assignment that Joe Biden was a close neighbor of Barack Obama.
Step22: Quiz Question. Examine the bit representations of the bins containing Barack Obama and Joe Biden. In how many places do they agree?
Step23: Compare the result with a former British diplomat, whose bin representation agrees with Obama's in only 8 out of 16 places.
Step24: How about the documents in the same bin as Barack Obama? Are they necessarily more similar to Obama than Biden? Let's look at which documents are in the same bin as the Barack Obama article.
Step25: There are four other documents that belong to the same bin. Which documents are they?
Step26: It turns out that Joe Biden is much closer to Barack Obama than any of the four documents, even though Biden's bin representation differs from Obama's by 2 bits.
Step27: Moral of the story. Similar data points will in general tend to fall into nearby bins, but that's all we can say about LSH. In a high-dimensional space such as text features, we often get unlucky with our selection of only a few random vectors such that dissimilar data points go into the same bin while similar data points fall into different bins. Given a query document, we must consider all documents in the nearby bins and sort them according to their actual distances from the query.
Step29: With this output in mind, implement the logic for nearby bin search
Step30: Checkpoint. Running the function with search_radius=0 should yield the list of documents belonging to the same bin as the query.
Step31: Checkpoint. Running the function with search_radius=1 adds more documents to the fore.
Step32: Note. Don't be surprised if few of the candidates look similar to Obama. This is why we add as many candidates as our computational budget allows and sort them by their distance to the query.
Step33: Let's try it out with Obama
Step34: To identify the documents, it's helpful to join this table with the Wikipedia table
Step35: We have shown that we have a working LSH implementation!
Step36: Notice that the top 10 query results become more relevant as the search radius grows. Let's plot the three variables
Step37: Some observations
Step38: Quality metrics for neighbors
Step39: The following cell will run LSH with multiple search radii and compute the quality metrics for each run. Allow a few minutes to complete.
Step40: The observations for Barack Obama generalize to the entire dataset.
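A small worked example (not part of the assignment) of the bit-vector to integer encoding used for bin indices in Steps 15-17, assuming 4 hash bits instead of 16:
bits = np.array([True, False, True, True])
powers_of_two = 1 << np.arange(3, -1, -1)   # [8, 4, 2, 1]
print bits.dot(powers_of_two)               # 8 + 2 + 1 = 11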
|
4,539
|
<ASSISTANT_TASK:>
Python Code:
#Set model parameters
#square neighborhood radius
R = 3
#number of states / colors
K = 8
#excitation threshold
T = 6
#initial conditions on 300x300 lattice
initial = random_state_v2((300,300), K, 2.0/9)
#initialize CA object with chosen parameters and initial condition
spirals = greenberg_hastings(initial, R, K, T)
#evolve the CA for 300 time steps
spirals.evolve(300)
spacetime_animation(spirals.get_spacetime(), colors = plt.cm.rainbow)
#Set model parameters
#square neighborhood radius
R = 3
#number of states / colors
K = 8
#excitation threshold
T = 6
#initial conditions on 300x300 lattice
initial = random_state_v2((300,300), K, 2.0/9)
#initialize CA object with chosen parameters and initial condition
spirals = greenberg_hastings(initial, R, K, T)
#evolve the CA for 300 time steps
spirals.animate(time = 300)
spacetime_animation(spirals.get_spacetime(), colors = plt.cm.rainbow)
R = 1
K = 4
T = 2
initial = random_state_v1((250, 250), 0.2)
example1 = greenberg_hastings(initial, R, K, T)
example1.evolve(300)
spacetime_animation(example1.get_spacetime(), colors = plt.cm.rainbow)
R = 1
K = 10
T = 2
initial = random_state_v1((200, 200), 0.5)
example2 = greenberg_hastings(initial, R, K, T)
example2.drive(200, 0.02)
spacetime_animation(example2.get_spacetime(), colors = plt.cm.rainbow)
R = 1
K = 10
T = 2
initial = random_state_v1((200, 200), 0.5)
example3 = greenberg_hastings(initial, R, K, T)
example3.drive(200, 0.1)
spacetime_animation(example3.get_spacetime(), colors = plt.cm.rainbow, interval = 5)
R = 1
K = 10
T = 2
initial = random_state_v1((200, 200), 0.5)
example4 = greenberg_hastings(initial, R, K, T)
example4.drive(300, 0.5)
spacetime_animation(example4.get_spacetime(), colors = plt.cm.rainbow, interval = 5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Animate the spacetime field produced by the above code
Step2: You can also animate on the fly using the .animate() method. We run the same code again, but use .animate(duration = 300) instead of .evolve(300). Both methods produce a 300x300x300 spacetime field that can be visualized with spacetime_animation.
Step3: Again we can still use spacetime_animation, this time if we want to watch that specific evolution again. Since the .animate() method animates on the fly, it is typically slower.
Step4: Some phenomenology
Step5: Global oscillations induced by stochastic driving
Step6: Keep same model parameters and initial conditions, now just bump up the driving rate and see global oscillations
Step7: Increase driving further, collective oscillations become more pronounced
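A sketch (not from the original notebook) of one way to quantify these global oscillations: track the fraction of non-quiescent cells over time, assuming get_spacetime() returns a (time, rows, cols) array in which 0 is the quiescent state.
import numpy as np
import matplotlib.pyplot as plt
field = example4.get_spacetime()
activity = (field != 0).mean(axis=(1, 2))
plt.plot(activity)
plt.xlabel('time step')
plt.ylabel('fraction of active cells')
plt.show()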
|
4,540
|
<ASSISTANT_TASK:>
Python Code:
import importlib
autograd_available = True
# if automatic differentiation is available, use it
try:
import autograd
except ImportError:
autograd_available = False
pass
if autograd_available:
import autograd.numpy as np
from autograd import grad
else:
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interactive
import ipywidgets as widgets
%matplotlib inline
if autograd_available:
print('Using autograd to compute gradients')
else:
print('Using hand-calculated gradient')
# Valley
def myfun(x):
return (x[0]**2)/16 + 9*(x[1]**2)
if autograd_available:
gradient = grad(myfun)
else:
def gradient(x):
grad = [x[0]/8, 18*x[1]]
return grad;
x = np.arange(-5.0, 5.0, 0.02)
y = np.arange(-2.0, 2.0, 0.02)
X, Y = np.meshgrid(x, y)
fZ = myfun([X,Y])
plt.figure(1,figsize=(10,6))
plt.rcParams.update({'font.size': 14})
plt.contourf(X,Y,fZ,levels=20)
plt.colorbar()
plt.xlabel("x")
plt.ylabel("y")
plt.show()
epsilon = 0.1
start = np.array([-4.0,-1.0])
points = []
while len(points) < 200:
points.append( (start,myfun(start)) )
start = start - np.array([epsilon*gradient(start)[0], epsilon*gradient(start)[1]])
trajectory_x = [points[i][0][0] for i in range(len(points))]
trajectory_y = [points[i][0][1] for i in range(len(points))]
plt.figure(1,figsize=(16,6))
plt.subplot(121)
plt.rcParams.update({'font.size': 14})
plt.contourf(X,Y,fZ,levels=20)
plt.xlim(-5,0)
plt.ylim(-2,2)
plt.xlabel("x")
plt.ylabel("y")
plt.plot(trajectory_x, trajectory_y,marker='.',color='w',linewidth=2)
plt.subplot(122)
plt.plot(range(0,len(points)),list(zip(*points))[1])
plt.grid(True)
plt.xlabel("Step i")
plt.ylabel("f(x^{(i)})")
plt.show()
def plot_function(epsilon, start_x, start_y):
start = [start_x,start_y]
points = []
while len(points) < 200:
points.append( (start,myfun(start)) )
start = start - np.array([epsilon*gradient(start)[0], epsilon*gradient(start)[1]])
trajectory_x = [points[i][0][0] for i in range(len(points))]
trajectory_y = [points[i][0][1] for i in range(len(points))]
plt.figure(3,figsize=(15,5))
plt.subplot(121)
plt.rcParams.update({'font.size': 14})
plt.contourf(X,Y,fZ,levels=20)
plt.xlim(-5,0)
plt.ylim(-2,2)
plt.xlabel("x")
plt.ylabel("y")
plt.plot(trajectory_x, trajectory_y,marker='.',color='w',linewidth=2)
plt.subplot(122)
plt.plot(range(0,len(points)),list(zip(*points))[1])
plt.grid(True)
plt.xlabel("Step i")
plt.ylabel("f(x^{(i)})")
plt.show()
epsilon_values = np.arange(0.0,0.12,0.0002)
interactive_update = interactive(plot_function, \
epsilon = widgets.SelectionSlider(options=[("%g"%i,i) for i in epsilon_values], value=0.1, continuous_update=False,description='epsilon',layout=widgets.Layout(width='50%')),
start_x = widgets.FloatSlider(min=-5.0,max=0.0,step=0.001,value=-4.0, continuous_update=False, description='x'), \
start_y = widgets.FloatSlider(min=-1.0, max=1.0, step=0.001, value=-1.0, continuous_update=False, description='y'))
output = interactive_update.children[-1]
output.layout.height = '370px'
interactive_update
# Rosenbrock function
def rosenbrock_fun(x):
return (1-x[0])**2+100*((x[1]-(x[0])**2)**2)
if autograd_available:
rosenbrock_gradient = grad(rosenbrock_fun)
else:
def rosenbrock_gradient(x):
grad = [-2*(1-x[0])-400*(x[1]-x[0]**2)*x[0], 200*(x[1]-x[0]**2)]
return grad
xr = np.arange(-1.6, 1.6, 0.01)
yr = np.arange(-1.0, 3.0, 0.01)
Xr, Yr = np.meshgrid(xr, yr)
fZr = rosenbrock_fun([Xr,Yr])
def plot_function_rosenbrock(epsilon, start_x, start_y):
start = [start_x,start_y]
points = []
while len(points) < 1000:
points.append( (start,rosenbrock_fun(start)) )
rgradient = rosenbrock_gradient(start)
start = start - np.array([epsilon*rgradient[0], epsilon*rgradient[1]])
trajectory_x = [points[i][0][0] for i in range(len(points))]
trajectory_y = [points[i][0][1] for i in range(len(points))]
plt.figure(4,figsize=(15,5))
plt.subplot(121)
plt.rcParams.update({'font.size': 14})
plt.contourf(Xr,Yr,fZr,levels=20)
plt.xlabel("x")
plt.ylabel("y")
plt.plot(trajectory_x, trajectory_y,marker='.',color='w',linewidth=2)
plt.subplot(122)
plt.plot(range(0,len(points)),list(zip(*points))[1])
plt.grid(True)
plt.xlabel("Step i")
plt.ylabel("f(x^{(i)})")
plt.show()
epsilon_values = np.arange(0.0,0.007,0.00002)
interactive_update = interactive(plot_function_rosenbrock, \
epsilon = widgets.SelectionSlider(options=[("%g"%i,i) for i in epsilon_values], value=0.001, continuous_update=False,description='epsilon',layout=widgets.Layout(width='50%')), \
start_x = widgets.FloatSlider(min=-1.0,max=2.0,step=0.0001,value=0.6, continuous_update=False, description='x'), \
start_y = widgets.FloatSlider(min=-1.0, max=2.0, step=0.0001, value=0.1, continuous_update=False, description='y'))
output = interactive_update.children[-1]
output.layout.height = '350px'
interactive_update
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Specify the function to minimize as a simple Python function.
Step2: Plot the function as a 2d surface plot. Different colors indicate different values of the function.
Step3: Carry out the simple gradient descent strategy by using only the sign of the gradient. Carry out 200 iterations (without using a stopping criterion). The values of epsilon and the starting point are specified
Step4: Plot the trajectory and the value of the function (right subplot). Note that the minimum of this function is achieved for (0,0) and is 0
Step5: This is an interactive demonstration of gradient descent, where you can specify the starting point as well as the step value yourself. You can see that, depending on the step size, the minimization can become unstable.
Step6: Next, we consider the so-called Rosenbrock function, which is given by $f(x_1,x_2) = (1-x_1)^2 + 100\,(x_2 - x_1^2)^2$.
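A quick sanity check (a sketch, not part of the original demo) that the hand-coded Rosenbrock gradient matches autograd at an arbitrary point, assuming autograd is installed:
import autograd.numpy as anp
from autograd import grad
def rb(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
rb_grad_auto = grad(rb)
def rb_grad_hand(x):
    return anp.array([-2*(1 - x[0]) - 400*(x[1] - x[0]**2)*x[0], 200*(x[1] - x[0]**2)])
p = anp.array([0.6, 0.1])
print(rb_grad_auto(p))
print(rb_grad_hand(p))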
|
4,541
|
<ASSISTANT_TASK:>
Python Code:
!pip install git+https://github.com/google/starthinker
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
FIELDS = {
'auth_dv': 'user', # Credentials used for dv.
'auth_sheet': 'user', # Credentials used for sheet.
'auth_bigquery': 'service', # Credentials used for bigquery.
'recipe_slug': '', # Name of Google BigQuery dataset to create.
'command': '', # Action to take.
}
print("Parameters Set To: %s" % FIELDS)
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dataset': {
'__comment__': 'Ensure dataset exists.',
'auth': 'user',
'dataset': {'field': {'name': 'recipe_slug','prefix': 'DV_Sheet_','kind': 'string','order': 2,'default': '','description': 'Name of Google BigQuery dataset to create.'}}
}
},
{
'drive': {
'__comment__': 'Copy the default template to sheet with the recipe name',
'auth': 'user',
'copy': {
'source': 'https://docs.google.com/spreadsheets/d/10ByZKMIPZQQOEwJlskzggRhhQqe44on_ebUxkjmZI_w/',
'destination': {'field': {'name': 'recipe_slug','prefix': 'DV Sheet ','kind': 'string','order': 3,'default': '','description': 'Name of Google Sheet to create.'}}
}
}
},
{
'dv_sheets': {
'__comment': 'Depending on users choice, execute a different part of the solution.',
'auth_dv': {'field': {'name': 'auth_dv','kind': 'authentication','order': 1,'default': 'user','description': 'Credentials used for dv.'}},
'auth_sheets': {'field': {'name': 'auth_sheet','kind': 'authentication','order': 2,'default': 'user','description': 'Credentials used for sheet.'}},
'auth_bigquery': {'field': {'name': 'auth_bigquery','kind': 'authentication','order': 3,'default': 'service','description': 'Credentials used for bigquery.'}},
'sheet': {'field': {'name': 'recipe_slug','prefix': 'DV Sheet ','kind': 'string','order': 4,'default': '','description': 'Name of Google Sheet to create.'}},
'dataset': {'field': {'name': 'recipe_slug','prefix': 'DV_Sheet_','kind': 'string','order': 5,'default': '','description': 'Name of Google BigQuery dataset to create.'}},
'command': {'field': {'name': 'command','kind': 'choice','choices': ['Clear Partners','Clear Advertisers','Clear Campaigns','Clear Creatives','Clear Insertion Orders','Clear Line Items','Clear Preview','Clear Patch','Load Partners','Load Advertisers','Load Campaigns','Load Creatives','Load Insertion Orders','Load Line Items','Preview','Patch'],'order': 6,'default': '','description': 'Action to take.'}}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Get Cloud Project ID
Step2: 3. Get Client Credentials
Step3: 4. Enter Bulk Editor For DV360 Parameters
Step4: 5. Execute Bulk Editor For DV360
|
4,542
|
<ASSISTANT_TASK:>
Python Code:
mod = pysces.model('lin4_fb')
sc = psctb.Symca(mod)
sc.do_symca()
sc.cc_results
sc.cc_results.ccJR1_R4
sc.cc_results.ccJR1_R4.expression
sc.cc_results.ccJR1_R4.numerator
sc.cc_results.ccJR1_R4.denominator
sc.cc_results.ccJR1_R4.value
sc.cc_results.ccJR1_R4.CP001
sc.cc_results.ccJR1_R4.CP002
sc.cc_results.ccJR1_R4.CP001.percentage
sc.cc_results.ccJR1_R4.CP002.percentage
mod.reLoad()
# mod.Vf_4 has a default value of 50
mod.Vf_4 = 0.1
# calculating new steady state
mod.doMca()
# now ccJR1_R4 and its two control patterns should have new values
sc.cc_results.ccJR1_R4
# original value was 0.000
sc.cc_results.ccJR1_R4.CP001
# original value was 0.964
sc.cc_results.ccJR1_R4.CP002
# resetting to default Vf_4 value and recalculating
mod.reLoad()
mod.doMca()
# This path leads to the provided layout file
path_to_layout = '~/Pysces/psc/lin4_fb.dict'
# Correct path depending on platform - necessary for platform independent scripts
if platform == 'win32' and pysces.version.current_version_tuple() < (0,9,8):
path_to_layout = psctb.utils.misc.unix_to_windows_path(path_to_layout)
else:
path_to_layout = path.expanduser(path_to_layout)
sc.cc_results.ccJR1_R4.highlight_patterns(height = 350, pos_dic=path_to_layout)
# clicking on CP002 shows that this control pattern representing
# the chain of effects passing through the feedback loop
# is totally responsible for the observed control coefficient value.
sc.cc_results.ccJR1_R4.highlight_patterns(height = 350, pos_dic=path_to_layout)
# clicking on CP001 shows that this control pattern representing
# the chain of effects of the main pathway does not contribute
# at all to the control coefficient value.
sc.cc_results.ccJR1_R4.highlight_patterns(height = 350, pos_dic=path_to_layout)
percentage_scan_data = sc.cc_results.ccJR1_R4.do_par_scan(parameter='Vf_4',
scan_range=numpy.logspace(-1,3,200),
scan_type='percentage')
percentage_scan_plot = percentage_scan_data.plot()
# set the x-axis to a log scale
percentage_scan_plot.ax.semilogx()
# enable all the lines
percentage_scan_plot.toggle_category('Control Patterns', True)
percentage_scan_plot.toggle_category('CP001', True)
percentage_scan_plot.toggle_category('CP002', True)
# display the plot
percentage_scan_plot.interact()
value_scan_data = sc.cc_results.ccJR1_R4.do_par_scan(parameter='Vf_4',
scan_range=numpy.logspace(-1,3,200),
scan_type='value')
value_scan_plot = value_scan_data.plot()
# set the x-axis to a log scale
value_scan_plot.ax.semilogx()
# enable all the lines
value_scan_plot.toggle_category('Control Coefficients', True)
value_scan_plot.toggle_category('ccJR1_R4', True)
value_scan_plot.toggle_category('Control Patterns', True)
value_scan_plot.toggle_category('CP001', True)
value_scan_plot.toggle_category('CP002', True)
# display the plot
value_scan_plot.interact()
# Create a variant of mod with 'C' fixed at its steady-state value
mod_fixed_S3 = psctb.modeltools.fix_metabolite_ss(mod, 'S3')
# Instantiate Symca object the 'internal_fixed' argument set to 'True'
sc_fixed_S3 = psctb.Symca(mod_fixed_S3,internal_fixed=True)
# Run the 'do_symca' method (internal_fixed can also be set to 'True' here)
sc_fixed_S3.do_symca()
sc_fixed_S3.cc_results_1
sc_fixed_S3.cc_results_0
sc.save_results()
# the following code requires `pandas` to run
import pandas as pd
# load csv file at default path
results_path = '~/Pysces/lin4_fb/symca/cc_summary_0.csv'
# Correct path depending on platform - necessary for platform independent scripts
if platform == 'win32' and pysces.version.current_version_tuple() < (0,9,8):
results_path = psctb.utils.misc.unix_to_windows_path(results_path)
else:
results_path = path.expanduser(results_path)
saved_results = pd.read_csv(results_path)
# show first 20 lines
saved_results.head(n=20)
# saving session
sc.save_session()
# create new Symca object and load saved results
new_sc = psctb.Symca(mod)
new_sc.load_session()
# display saved results
new_sc.cc_results
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Additionally Symca has the following arguments
Step2: do_symca has the following arguments
Step3: Inspecting an individual control coefficient yields a symbolic expression together with a value
Step4: In the above example, the expression of the control coefficient consists of two numerator terms and a common denominator shared by all the control coefficient expression signified by $\Sigma$.
Step5: Numerator expression (as a SymPy expression)
Step6: Denominator expression (as a SymPy expression)
Step7: Value (as a float64)
Step8: Additional, less pertinent, attributes are abs_value, latex_expression, latex_expression_full, latex_numerator, latex_name, name and denominator_object.
Step9: Each control pattern is numbered arbitrarily starting from 001 and has similar properties as the control coefficient object (i.e., their expression, numerator, value etc. can also be accessed).
Step10: Unlike conventional percentages, however, these values are calculated as percentage contribution towards the sum of the absolute values of all the control coefficients (rather than as the percentage of the total control coefficient value). This is done to account for situations where control pattern values have different signs.
Step11: Control pattern graphs
Step12: highlight_patterns has the following optional arguments
Step13: Parameter scans
Step14: As previously described, these data can be displayed using ScanFig by calling the plot method of percentage_scan_data. Furthermore, lines can be enabled/disabled using the toggle_category method of ScanFig or by clicking on the appropriate buttons
Step15: A value plot can similarly be generated and displayed. In this case, however, an additional line indicating $C^{J}_{4}$ will also be present
Step16: Fixed internal metabolites
Step17: The normal sc_fixed_S3.cc_results object is still generated, but will be invalid for the fixed model. Each additional cc_results_N contains control coefficient expressions that have the same common denominator and corresponds to a specific reaction block. These cc_results_N objects are numbered arbitrarily, but consistently across different sessions. Each results object is accessed and utilised in the same way as the normal cc_results object.
Step18: cc_results_0 contains the control coefficients describing the sensitivity of flux and concentrations of either reaction block towards reactions in the other reaction block (i.e., all control coefficients here should be zero). Because the S3 demand block consists of a single reaction, this object also contains the control coefficient of R4 on J_R4, which is equal to one. This results object is useful for confirming that the results were generated as expected.
Step19: If the demand block of S3 in this pathway consisted of multiple reactions, rather than a single reaction, there would have been an additional cc_results_N object containing the control coefficients of that reaction block.
Step20: save_results has the following optional arguments
Step21: Saving/loading sessions
|
4,543
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title Upgrade to TensorFlow 2.5+
!pip install --upgrade tensorflow
#@title Install TF Quant Finance
!pip install tf-quant-finance
#@title Install QuantLib
!pip install QuantLib-Python
!nvidia-smi
#@title Imports { display-mode: "form" }
import matplotlib.pyplot as plt
import numpy as np
import time
import tensorflow as tf
import QuantLib as ql
# tff for Tensorflow Finance
import tf_quant_finance as tff
from IPython.core.pylabtools import figsize
figsize(21, 14) # better graph size for Colab
#@title Set up parameters
dtype = np.float64 #@param
num_samples = 200000 #@param
num_timesteps = 100 #@param
expiries = [1.0] # This can be a rank 1 Tensor
dt = 1. / num_timesteps
rate = tf.constant(0.03, dtype=dtype)
sigma = tf.constant(0.1, dtype=dtype)
spot = tf.constant(700, dtype=dtype)
strikes = tf.constant([600, 650, 680], dtype=dtype)
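# Hedged reference values (assumption): the closed-form Black-Scholes price
# from tff.black_scholes should agree with the Monte Carlo estimates below.
analytic_prices = tff.black_scholes.option_price(
    volatilities=sigma,
    strikes=strikes,
    expiries=tf.constant(expiries, dtype=dtype),
    spots=spot,
    discount_rates=rate)
print("Analytic Black-Scholes prices:", analytic_prices.numpy())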
def set_up_pricer(expiries, watch_params=False):
Set up European option pricing function under Black-Scholes model.
Args:
expiries: List of expiries at which to to sample the trajectories.
watch_params: A Python bool. When `True`, gradients of the price function wrt the inputs
are computed more efficiently.
Returns:
A callable that accepts a rank 1 tensor of strikes, and scalar values for
the spots and volatility values. The callable outputs prices of
the European call options on the grid `expiries x strikes`.
def price_eu_options(strikes, spot, sigma):
# Define drift and volatility functions.
def drift_fn(t, x):
del t, x
return rate - 0.5 * sigma**2
def vol_fn(t, x):
del t, x
return tf.reshape(sigma, [1, 1])
# Use GenericItoProcess class to set up the Ito process
process = tff.models.GenericItoProcess(
dim=1,
drift_fn=drift_fn,
volatility_fn=vol_fn,
dtype=dtype)
log_spot = tf.math.log(tf.reduce_mean(spot))
if watch_params:
watch_params_list = [sigma]
else:
watch_params_list = None
paths = process.sample_paths(
expiries, num_samples=num_samples,
initial_state=log_spot,
watch_params=watch_params_list,
# Select a random number generator
random_type=tff.math.random.RandomType.PSEUDO_ANTITHETIC,
time_step=dt)
prices = (tf.exp(-tf.expand_dims(rate * expiries, axis=-1))
* tf.reduce_mean(tf.nn.relu(tf.math.exp(paths) - strikes), 0))
return prices
return price_eu_options
price_eu_options = tf.function(set_up_pricer(expiries))
#@title Pricing time a CPU. Note TensorFlow does automatic multithreading.
# First run (includes graph optimization time)
with tf.device("/cpu:0"):
price_eu_options(strikes, spot, sigma)
# Second run (excludes graph optimization time)
time_start = time.time()
with tf.device("/cpu:0"):
prices = price_eu_options(strikes, spot, sigma)
time_end = time.time()
time_price_cpu = time_end - time_start
print("Time (seconds) to price a European Call Option on a CPU: ", time_price_cpu)
price_eu_options_xla = tf.function(set_up_pricer(expiries), jit_compile=True)
#@title Pricing times on a CPU with XLA compilation
# First run (includes graph optimization time)
with tf.device("/cpu:0"):
price_eu_options_xla(strikes, spot, sigma)
# Second run (excludes graph optimization time)
time_start = time.time()
with tf.device("/cpu:0"):
prices = price_eu_options_xla(strikes, spot, sigma)
time_end = time.time()
time_price_cpu_xla = time_end - time_start
print("Time (seconds) to price a European Call Option on a CPU with XLA: ", time_price_cpu_xla)
#@title Monte Carlo sampling in QuantLib
num_samples = 200000 #@param
num_timesteps = 100 #@param
expiry = 1.0
calculation_date = ql.Date(1, 1, 2010)
maturity_date = ql.Date(1, 1, 2011)
day_count = ql.Thirty360()
calendar = ql.NullCalendar()
ql_strike_price = 550
sigma_ql = 0.1
ql_volatility = ql.SimpleQuote(sigma_ql)
ql_risk_free_rate = 0.03
option_type = ql.Option.Call
ql.Settings.instance().evaluationDate = calculation_date
payoff = ql.PlainVanillaPayoff(option_type, ql_strike_price)
eu_exercise = ql.EuropeanExercise(maturity_date)
european_option_ql = ql.VanillaOption(payoff, eu_exercise)
flat_ts = ql.YieldTermStructureHandle(
ql.FlatForward(calculation_date, ql_risk_free_rate, day_count)
)
flat_vol_ts = ql.BlackVolTermStructureHandle(
ql.BlackConstantVol(calculation_date, calendar,
ql.QuoteHandle(ql_volatility), day_count)
)
spot_ql = 700
spot_price = ql.SimpleQuote(spot_ql)
spot_handle = ql.QuoteHandle(
spot_price
)
bsm_process = ql.BlackScholesProcess(spot_handle,
flat_ts,
flat_vol_ts)
# Compute the same price number_of_options times
engine = ql.MCEuropeanEngine(bsm_process, "PseudoRandom",
timeSteps=num_timesteps,
requiredSamples=num_samples,
seed=42)
european_option_ql.setPricingEngine(engine)
# Price
t = time.time()
price_ql = european_option_ql.NPV()
time_price_ql = time.time() - t
print("Time (seconds) to price a European Call Option using QuantLib: ", time_price_ql)
#@title Plot the results
ind = np.arange(1) # the x locations for the groups
width = 0.35 # the width of the bars
fig, ax = plt.subplots()
fig.set_figheight(9)
fig.set_figwidth(12)
ax.bar(ind - width/2, [time_price_cpu], width,
label='TensorFlow (CPU)', color='darkorange')
ax.bar(ind + width/2, [time_price_ql], width,
label='QuantLib')
# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel('Time (sec) to sample {}'.format(num_samples))
ax.set_title('Monte Carlo sampling comparison')
ax.set_xticks(ind)
ax.legend()
plt.show()
#@title Pricing times on CPU and GPU platforms
# CPU without XLA
with tf.device("/cpu:0"):
price_eu_options(strikes, spot, sigma)
time_start = time.time()
with tf.device("/cpu:0"):
prices = price_eu_options(strikes, spot, sigma)
time_end = time.time()
time_price_cpu = time_end - time_start
# CPU with XLA
with tf.device("/cpu:0"):
price_eu_options_xla(strikes, spot, sigma)
time_start = time.time()
with tf.device("/cpu:0"):
prices = price_eu_options_xla(strikes, spot, sigma)
time_end = time.time()
time_price_cpu_xla = time_end - time_start
# GPU without XLA
with tf.device("/gpu:0"):
price_eu_options(strikes, spot, sigma)
# Second run (excludes graph optimization time)
time_start = time.time()
with tf.device("/gpu:0"):
prices = price_eu_options(strikes, spot, sigma)
time_end = time.time()
time_price_gpu = time_end - time_start
# GPU with XLA
with tf.device("/gpu:0"):
price_eu_options_xla(strikes, spot, sigma)
# Second run (excludes graph optimization time)
time_start = time.time()
with tf.device("/gpu:0"):
prices = price_eu_options_xla(strikes, spot, sigma)
time_end = time.time()
time_price_gpu_xla = time_end - time_start
#@title Plot the results
ind = np.arange(1) # the x locations for the groups
width = 0.35 # the width of the bars
fig, ax = plt.subplots()
fig.set_figheight(9)
fig.set_figwidth(12)
ax.bar(ind - width/8, [time_price_cpu], width / 8,
label='TensorFlow (CPU)')
ax.bar(ind, [time_price_cpu_xla], width / 8,
label='TensorFlow (CPU XLA)')
ax.bar(ind + width/8, [time_price_gpu], width / 8,
label='TensorFlow (GPU)')
ax.bar(ind + width/4, [time_price_gpu_xla], width / 8,
label='TensorFlow (GPU XLA)')
# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel('Time (sec) to sample {}'.format(num_samples))
ax.set_title('Monte Carlo sampling comparison')
ax.set_xticks(ind)
ax.legend()
plt.show()
# Set up the pricer
expiries = [0.5, 1.0]
strikes = tf.constant([600, 650, 680], dtype=dtype)
sigma = tf.constant(0.1, dtype=dtype)
price_eu_options = set_up_pricer(expiries, watch_params=True)
price_eu_options_no_watched_params = set_up_pricer(expiries)
@tf.function(jit_compile=True)
def price_eu_options_xla(strikes, spot, sigma):
return price_eu_options_no_watched_params(strikes, spot, sigma)
@tf.function
def vega_fn(sigma):
fn = lambda sigma: price_eu_options(strikes, spot, sigma)
return tff.math.fwd_gradient(fn, sigma,
use_gradient_tape=True)
@tf.function(jit_compile=True)
def vega_fn_xla(sigma):
return vega_fn(sigma)
@tf.function
def delta_fn(spot):
fn = lambda spot: price_eu_options(strikes, spot, sigma)
return tff.math.fwd_gradient(fn, spot,
use_gradient_tape=True)
@tf.function(jit_compile=True)
def delta_fn_xla(spot):
return delta_fn(spot)
estimated_deltas = delta_fn_xla(spot)
print("Estimated deltas on grid expiries x strikes: \n", estimated_deltas.numpy())
estimated_vegas = vega_fn_xla(sigma)
print("Estimated vegas on grid expiries x strikes: \n", estimated_vegas.numpy())
expiries_tensor = tf.expand_dims(tf.convert_to_tensor(expiries, dtype=dtype), axis=1)
true_delta_fn = lambda spot : tff.black_scholes.option_price(volatilities=sigma,
strikes=strikes,
spots=spot,
expiries=expiries_tensor,
discount_rates=rate)
true_vega_fn = lambda sigma : tff.black_scholes.option_price(volatilities=sigma,
strikes=strikes,
spots=spot,
expiries=expiries_tensor,
discount_rates=rate)
true_delta = tff.math.fwd_gradient(true_delta_fn, spot)
true_vega = tff.math.fwd_gradient(true_vega_fn, sigma)
print("True deltas on grid expiries x strikes: \n", true_delta.numpy())
print("True vegas on grid expiries x strikes: \n", true_vega.numpy())
print("Relative error in delta estimation: \n", np.max(abs(estimated_deltas - true_delta) / true_delta))
print("Relative error in vega estimation: \n", np.max(abs(estimated_vegas - true_vega) / true_vega))
#@title Greek computation speed
# Price CPU with XLA
## warmup
with tf.device("/cpu:0"):
price_eu_options_xla(strikes, spot, sigma)
## measure time
time_start = time.time()
with tf.device("/cpu:0"):
prices = price_eu_options_xla(strikes, spot, sigma)
time_end = time.time()
time_price_cpu = time_end - time_start
# Delta CPU with XLA
## warmup
with tf.device("/cpu:0"):
delta_fn_xla(spot)
## measure time
time_start = time.time()
with tf.device("/cpu:0"):
delta_fn_xla(spot)
time_end = time.time()
time_delta_cpu = time_end - time_start
# Vega CPU with XLA
## warmup
with tf.device("/cpu:0"):
vega_fn_xla(spot)
## measure time
time_start = time.time()
with tf.device("/cpu:0"):
vega_fn_xla(spot)
time_end = time.time()
time_vega_cpu = time_end - time_start
# Price GPU with XLA
## warmup
with tf.device("/gpu:0"):
price_eu_options_xla(strikes, spot, sigma)
## measure time
time_start = time.time()
with tf.device("/gpu:0"):
prices = price_eu_options_xla(strikes, spot, sigma)
time_end = time.time()
time_price_gpu = time_end - time_start
# Delta GPU with XLA
## warmup
with tf.device("/gpu:0"):
delta_fn_xla(spot)
## measure time
time_start = time.time()
with tf.device("/gpu:0"):
delta_fn_xla(spot)
time_end = time.time()
time_delta_gpu = time_end - time_start
# Vega GPU with XLA
## warmup
with tf.device("/gpu:0"):
vega_fn_xla(spot)
## measure time
time_start = time.time()
with tf.device("/gpu:0"):
vega_fn_xla(spot)
time_end = time.time()
time_vega_gpu = time_end - time_start
#@title CPU greeks computation speed
ind = np.arange(1) # the x locations for the groups
width = 0.35 # the width of the bars
fig, ax = plt.subplots()
fig.set_figheight(9)
fig.set_figwidth(12)
ax.bar(ind - width/8, [time_price_cpu], width / 8,
label='Price time (CPU)')
ax.bar(ind, [time_delta_cpu], width / 8,
label='Delta time (CPU)')
ax.bar(ind + width/8, [time_vega_cpu], width / 8,
label='Vega time (CPU)')
# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel('Time (sec)')
ax.set_title('Monte Carlo. Greeks computation on CPU')
ax.set_xticks(ind)
ax.legend()
plt.show()
#@title GPU greeks computation speed
ind = np.arange(1) # the x locations for the groups
width = 0.35 # the width of the bars
fig, ax = plt.subplots()
fig.set_figheight(9)
fig.set_figwidth(12)
ax.bar(ind - width/8, [time_price_gpu], width / 8,
label='Price time (GPU)')
ax.bar(ind, [time_delta_gpu], width / 8,
label='Delta time (GPU)')
ax.bar(ind + width/8, [time_vega_gpu], width / 8,
label='Vega time (GPU)')
# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel('Time (sec)')
ax.set_title('Monte Carlo. Greeks computation on GPU')
ax.set_xticks(ind)
ax.legend()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Monte Carlo Pricing in TensorFlow Quant Finance (TFF) using the Euler Scheme
Step2: A diffusion process $X(t) = (X_1(t), \dots, X_n(t))$ is a solution to a Stochastic Differential Equation (SDE)
Step4: Setting up the Euler sampling for European call option pricing under Black-Scholes
Step5: Better performance with XLA
Step6: For reference, we provide the performance of the Monte Carlo pricer in QuantLib
Step7: GPU vs CPU performance
Step8: Greek computation
|
4,544
|
<ASSISTANT_TASK:>
Python Code:
# Import TensorFlow and enable eager execution
# This code requires TensorFlow version >=1.9
import tensorflow as tf
tf.enable_eager_execution()
# We'll generate plots of attention in order to see which parts of an image
# our model focuses on during captioning
import matplotlib.pyplot as plt
# Scikit-learn includes many helpful utilities
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
import re
import numpy as np
import os
import time
import json
from glob import glob
from PIL import Image
import pickle
annotation_zip = tf.keras.utils.get_file('captions.zip',
cache_subdir=os.path.abspath('.'),
origin = 'http://images.cocodataset.org/annotations/annotations_trainval2014.zip',
extract = True)
annotation_file = os.path.dirname(annotation_zip)+'/annotations/captions_train2014.json'
name_of_zip = 'train2014.zip'
if not os.path.exists(os.path.abspath('.') + '/' + name_of_zip):
image_zip = tf.keras.utils.get_file(name_of_zip,
cache_subdir=os.path.abspath('.'),
origin = 'http://images.cocodataset.org/zips/train2014.zip',
extract = True)
PATH = os.path.dirname(image_zip)+'/train2014/'
else:
PATH = os.path.abspath('.')+'/train2014/'
# read the json file
with open(annotation_file, 'r') as f:
annotations = json.load(f)
# storing the captions and the image name in vectors
all_captions = []
all_img_name_vector = []
for annot in annotations['annotations']:
caption = '<start> ' + annot['caption'] + ' <end>'
image_id = annot['image_id']
full_coco_image_path = PATH + 'COCO_train2014_' + '%012d.jpg' % (image_id)
all_img_name_vector.append(full_coco_image_path)
all_captions.append(caption)
# shuffling the captions and image_names together
# setting a random state
train_captions, img_name_vector = shuffle(all_captions,
all_img_name_vector,
random_state=1)
# selecting the first 30000 captions from the shuffled set
num_examples = 30000
train_captions = train_captions[:num_examples]
img_name_vector = img_name_vector[:num_examples]
len(train_captions), len(all_captions)
def load_image(image_path):
img = tf.read_file(image_path)
img = tf.image.decode_jpeg(img, channels=3)
img = tf.image.resize_images(img, (299, 299))
img = tf.keras.applications.inception_v3.preprocess_input(img)
return img, image_path
image_model = tf.keras.applications.InceptionV3(include_top=False,
weights='imagenet')
new_input = image_model.input
hidden_layer = image_model.layers[-1].output
image_features_extract_model = tf.keras.Model(new_input, hidden_layer)
# getting the unique images
encode_train = sorted(set(img_name_vector))
# feel free to change the batch_size according to your system configuration
image_dataset = tf.data.Dataset.from_tensor_slices(
encode_train).map(load_image).batch(16)
for img, path in image_dataset:
batch_features = image_features_extract_model(img)
batch_features = tf.reshape(batch_features,
(batch_features.shape[0], -1, batch_features.shape[3]))
for bf, p in zip(batch_features, path):
path_of_feature = p.numpy().decode("utf-8")
np.save(path_of_feature, bf.numpy())
# This will find the maximum length of any caption in our dataset
def calc_max_length(tensor):
return max(len(t) for t in tensor)
# The steps above is a general process of dealing with text processing
# choosing the top 5000 words from the vocabulary
top_k = 5000
tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=top_k,
oov_token="<unk>",
filters='!"#$%&()*+.,-/:;=?@[\]^_`{|}~ ')
tokenizer.fit_on_texts(train_captions)
train_seqs = tokenizer.texts_to_sequences(train_captions)
tokenizer.word_index = {key:value for key, value in tokenizer.word_index.items() if value <= top_k}
# putting <unk> token in the word2idx dictionary
tokenizer.word_index[tokenizer.oov_token] = top_k + 1
tokenizer.word_index['<pad>'] = 0
# creating the tokenized vectors
train_seqs = tokenizer.texts_to_sequences(train_captions)
# creating a reverse mapping (index -> word)
index_word = {value:key for key, value in tokenizer.word_index.items()}
# padding each vector to the max_length of the captions
# if the max_length parameter is not provided, pad_sequences calculates that automatically
cap_vector = tf.keras.preprocessing.sequence.pad_sequences(train_seqs, padding='post')
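# Hedged round-trip check (assumption): decoding the first padded vector via
# index_word should recover the original <start> ... <end> caption.
print(train_captions[0])
print(' '.join(index_word[i] for i in cap_vector[0] if i != 0))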
# calculating the max_length
# used to store the attention weights
max_length = calc_max_length(train_seqs)
# Create training and validation sets using 80-20 split
img_name_train, img_name_val, cap_train, cap_val = train_test_split(img_name_vector,
cap_vector,
test_size=0.2,
random_state=0)
len(img_name_train), len(cap_train), len(img_name_val), len(cap_val)
# feel free to change these parameters according to your system's configuration
BATCH_SIZE = 64
BUFFER_SIZE = 1000
embedding_dim = 256
units = 512
vocab_size = len(tokenizer.word_index)
# shape of the vector extracted from InceptionV3 is (64, 2048)
# these two variables represent that
features_shape = 2048
attention_features_shape = 64
# loading the numpy files
def map_func(img_name, cap):
img_tensor = np.load(img_name.decode('utf-8')+'.npy')
return img_tensor, cap
dataset = tf.data.Dataset.from_tensor_slices((img_name_train, cap_train))
# using map to load the numpy files in parallel
# NOTE: Be sure to set num_parallel_calls to the number of CPU cores you have
# https://www.tensorflow.org/api_docs/python/tf/py_func
dataset = dataset.map(lambda item1, item2: tf.py_func(
map_func, [item1, item2], [tf.float32, tf.int32]), num_parallel_calls=8)
# shuffling and batching
dataset = dataset.shuffle(BUFFER_SIZE)
# https://www.tensorflow.org/api_docs/python/tf/contrib/data/batch_and_drop_remainder
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(1)
def gru(units):
# If you have a GPU, we recommend using the CuDNNGRU layer (it provides a
# significant speedup).
if tf.test.is_gpu_available():
return tf.keras.layers.CuDNNGRU(units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
else:
return tf.keras.layers.GRU(units,
return_sequences=True,
return_state=True,
recurrent_activation='sigmoid',
recurrent_initializer='glorot_uniform')
class BahdanauAttention(tf.keras.Model):
def __init__(self, units):
super(BahdanauAttention, self).__init__()
self.W1 = tf.keras.layers.Dense(units)
self.W2 = tf.keras.layers.Dense(units)
self.V = tf.keras.layers.Dense(1)
def call(self, features, hidden):
# features(CNN_encoder output) shape == (batch_size, 64, embedding_dim)
# hidden shape == (batch_size, hidden_size)
# hidden_with_time_axis shape == (batch_size, 1, hidden_size)
hidden_with_time_axis = tf.expand_dims(hidden, 1)
# score shape == (batch_size, 64, hidden_size)
score = tf.nn.tanh(self.W1(features) + self.W2(hidden_with_time_axis))
# attention_weights shape == (batch_size, 64, 1)
# we get 1 at the last axis because we are applying score to self.V
attention_weights = tf.nn.softmax(self.V(score), axis=1)
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * features
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
class CNN_Encoder(tf.keras.Model):
# Since we have already extracted the features and dumped it using pickle
# This encoder passes those features through a Fully connected layer
def __init__(self, embedding_dim):
super(CNN_Encoder, self).__init__()
# shape after fc == (batch_size, 64, embedding_dim)
self.fc = tf.keras.layers.Dense(embedding_dim)
def call(self, x):
x = self.fc(x)
x = tf.nn.relu(x)
return x
class RNN_Decoder(tf.keras.Model):
def __init__(self, embedding_dim, units, vocab_size):
super(RNN_Decoder, self).__init__()
self.units = units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = gru(self.units)
self.fc1 = tf.keras.layers.Dense(self.units)
self.fc2 = tf.keras.layers.Dense(vocab_size)
self.attention = BahdanauAttention(self.units)
def call(self, x, features, hidden):
# defining attention as a separate model
context_vector, attention_weights = self.attention(features, hidden)
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x)
# shape == (batch_size, max_length, hidden_size)
x = self.fc1(output)
# x shape == (batch_size * max_length, hidden_size)
x = tf.reshape(x, (-1, x.shape[2]))
# output shape == (batch_size * max_length, vocab)
x = self.fc2(x)
return x, state, attention_weights
def reset_state(self, batch_size):
return tf.zeros((batch_size, self.units))
encoder = CNN_Encoder(embedding_dim)
decoder = RNN_Decoder(embedding_dim, units, vocab_size)
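# Hedged shape check (assumption): one forward pass with dummy data to verify
# the encoder/decoder wiring before training starts.
dummy_features = encoder(tf.zeros((BATCH_SIZE, attention_features_shape, features_shape)))
dummy_hidden = decoder.reset_state(batch_size=BATCH_SIZE)
dummy_input = tf.expand_dims([tokenizer.word_index['<start>']] * BATCH_SIZE, 1)
preds, _, attn = decoder(dummy_input, dummy_features, dummy_hidden)
print(preds.shape, attn.shape)  # (BATCH_SIZE, vocab_size) and (BATCH_SIZE, 64, 1)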
optimizer = tf.train.AdamOptimizer()
# We are masking the loss calculated for padding
def loss_function(real, pred):
mask = 1 - np.equal(real, 0)
loss_ = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=real, logits=pred) * mask
return tf.reduce_mean(loss_)
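# Illustration of the masking: with the <pad> id of 0 set above, padded
# positions contribute zero loss, so only the first two tokens count here.
# (Hedged sketch; example_real/example_logits are made-up toy inputs.)
example_real = np.array([3, 7, 0, 0])
example_logits = np.zeros((4, vocab_size), dtype=np.float32)
print(loss_function(example_real, example_logits))  # approx. log(vocab_size) / 2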
# adding this in a separate cell because if you run the training cell
# many times, the loss_plot array will be reset
loss_plot = []
EPOCHS = 20
for epoch in range(EPOCHS):
start = time.time()
total_loss = 0
for (batch, (img_tensor, target)) in enumerate(dataset):
loss = 0
# initializing the hidden state for each batch
# because the captions are not related from image to image
hidden = decoder.reset_state(batch_size=target.shape[0])
dec_input = tf.expand_dims([tokenizer.word_index['<start>']] * BATCH_SIZE, 1)
with tf.GradientTape() as tape:
features = encoder(img_tensor)
for i in range(1, target.shape[1]):
# passing the features through the decoder
predictions, hidden, _ = decoder(dec_input, features, hidden)
loss += loss_function(target[:, i], predictions)
# using teacher forcing
dec_input = tf.expand_dims(target[:, i], 1)
total_loss += (loss / int(target.shape[1]))
variables = encoder.variables + decoder.variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables), tf.train.get_or_create_global_step())
if batch % 100 == 0:
print ('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1,
batch,
loss.numpy() / int(target.shape[1])))
# storing the epoch end loss value to plot later
loss_plot.append(total_loss / len(cap_vector))
print ('Epoch {} Loss {:.6f}'.format(epoch + 1,
total_loss/len(cap_vector)))
print ('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
plt.plot(loss_plot)
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.title('Loss Plot')
plt.show()
def evaluate(image):
attention_plot = np.zeros((max_length, attention_features_shape))
hidden = decoder.reset_state(batch_size=1)
temp_input = tf.expand_dims(load_image(image)[0], 0)
img_tensor_val = image_features_extract_model(temp_input)
img_tensor_val = tf.reshape(img_tensor_val, (img_tensor_val.shape[0], -1, img_tensor_val.shape[3]))
features = encoder(img_tensor_val)
dec_input = tf.expand_dims([tokenizer.word_index['<start>']], 0)
result = []
for i in range(max_length):
predictions, hidden, attention_weights = decoder(dec_input, features, hidden)
attention_plot[i] = tf.reshape(attention_weights, (-1, )).numpy()
predicted_id = tf.multinomial(tf.exp(predictions), num_samples=1)[0][0].numpy()
result.append(index_word[predicted_id])
if index_word[predicted_id] == '<end>':
return result, attention_plot
dec_input = tf.expand_dims([predicted_id], 0)
attention_plot = attention_plot[:len(result), :]
return result, attention_plot
def plot_attention(image, result, attention_plot):
temp_image = np.array(Image.open(image))
fig = plt.figure(figsize=(10, 10))
len_result = len(result)
for l in range(len_result):
temp_att = np.resize(attention_plot[l], (8, 8))
ax = fig.add_subplot(len_result//2, len_result//2, l+1)
ax.set_title(result[l])
img = ax.imshow(temp_image)
ax.imshow(temp_att, cmap='gray', alpha=0.6, extent=img.get_extent())
plt.tight_layout()
plt.show()
# captions on the validation set
rid = np.random.randint(0, len(img_name_val))
image = img_name_val[rid]
real_caption = ' '.join([index_word[i] for i in cap_val[rid] if i not in [0]])
result, attention_plot = evaluate(image)
print ('Real Caption:', real_caption)
print ('Prediction Caption:', ' '.join(result))
plot_attention(image, result, attention_plot)
# opening the image
Image.open(img_name_val[rid])
image_url = 'https://tensorflow.org/images/surf.jpg'
image_extension = image_url[-4:]
image_path = tf.keras.utils.get_file('image'+image_extension,
origin=image_url)
result, attention_plot = evaluate(image_path)
print ('Prediction Caption:', ' '.join(result))
plot_attention(image_path, result, attention_plot)
# opening the image
Image.open(image_path)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download and prepare the MS-COCO dataset
Step2: Optionally, limit the size of the training set for faster training
Step3: Preprocess the images using InceptionV3
Step4: Initialize InceptionV3 and load the pretrained Imagenet weights
Step5: Caching the features extracted from InceptionV3
Step6: Preprocess and tokenize the captions
Step7: Split the data into training and testing
Step8: Our images and captions are ready! Next, let's create a tf.data dataset to use for training our model.
Step9: Model
Step10: Training
Step11: Caption!
Step12: Try it on your own images
|
4,545
|
<ASSISTANT_TASK:>
Python Code:
import re
pattern = 'this'
text = 'Does this text match the pattern'
match = re.search(pattern, text)
s = match.start()
e = match.end()
print('Found "{}" \n in "{}" from {} to {} ("{}")'.format(match.re.pattern,match.string, s, e, text[s:e]))
import re
regexes = [
re.compile(p)
for p in ['this', 'that']
]
text = 'Does this text match the pattern'
print('Text: {!r}\n'.format(text))
for regex in regexes:
print('Seeking "{}"->'.format(regex.pattern), end= '')
if regex.search(text):
print('match!')
else:
print('no match')
import re
text = 'abbaaabbbbaaaaa'
pattern = 'ab'
for match in re.findall(pattern, text):
print('Found {!r}'.format(match))
for match in re.finditer(pattern, text):
s = match.start()
e = match.end()
print('Found at {:d}:{:d}'.format(s,e))
import re
def test_patterns(text, patterns):
Given source text and a list of patterns, look for
matches for each pattern within the text and print
them to stdout.
# Look for each pattern in the text and print the results
for pattern, desc in patterns:
print("'{}' ({})\n".format(pattern, desc))
print(" '{}'".format(text))
for match in re.finditer(pattern, text):
s = match.start()
e = match.end()
substr = text[s:e]
n_backslashes = text[:s].count('\\')
prefix = '.' * (s + n_backslashes)
print(" {}'{}'".format(prefix, substr))
print()
return
test_patterns('abbaabbba',
[
        ('ab*','a followed by zero or more b'),
('ab+','a followed by one or more b'),
('ab?', 'a followed by zero or one b'),
('ab{3}','a followed by three b'),
('ab{2,3}', 'a followed by two or three b')
]
)
test_patterns('abbaabbba',
[
        ('ab*?','a followed by zero or more b'),
('ab+?','a followed by one or more b'),
('ab??', 'a followed by zero or one b'),
('ab{3}?','a followed by three b'),
('ab{2,3}?', 'a followed by two or three b')
]
)
test_patterns(
'abbaabbba',
[
('[ab]', 'either a or b'),
('a[ab]+', 'a followed by one or more a or b'),
('a[ab]+?', 'a followed by one or more a or b, not greedy'),
]
)
test_patterns(
'This is some text -- with punctuation',
[
        ('[^-. ]+', 'sequence without -, ., or space')
]
)
test_patterns(
'This is some text -- with punctuation',
[
('[a-z]+', 'sequence of lowercase letters'),
        ('[A-Z]+', 'sequence of uppercase letters'),
        ('[a-zA-Z]+', 'sequence of letters of either case'),
('[A-Z][a-z]+', 'one uppercase followed by lowercase')
]
)
test_patterns(
'abbaabbba',
[
('a.', 'a followed by any one character'),
('b.', 'b followed by any one character'),
('a.*b', 'a followed by anything, end in b'),
        ('a.*?b', 'a followed by anything, end in b')
]
)
test_patterns(
'A prime #1 example!',
[
        (r'\d+', 'sequence of digits'),
(r'\D+', 'sequence of non-digits'),
(r'\s+', 'sequence of whitespace'),
(r'\S+', 'sequence of non-whitespace'),
(r'\w+', 'alphanumeric characters'),
(r'\W+', 'non-alphanumeric')
]
)
test_patterns(
'This is some text -- with punctuation.',
[(r'^\w+', 'word at start of string'),
(r'\A\w+', 'word at start of string'),
(r'\w+\S*$', 'word near end of string'),
(r'\w+\S*\Z', 'word near end of string'),
(r'\w*t\w*', 'word containing t'),
(r'\bt\w+', 't at start of word'),
(r'\w+t\b', 't at end of word'),
(r'\Bt\B', 't, not start or end of word')],
)
import re
text = 'This is some text -- with punctuation.'
pattern = 'is'
print('Text :',text)
print('pattern:', pattern)
m = re.match(pattern, text)
print('Match', m)
s = re.search(pattern ,text)
print('Search', s)
test_patterns(
'abbaaabbbbaaaaa',
[
('a(ab)', 'a followed by literal ab'),
('a(a*b*)','a followed by 0-n a and 0-b b'),
('a(ab)*', 'a followed by 0-n ab'),
('a(ab)+', 'a followed by 1-n ab')
]
)
import re
text = 'This is some text -- with punctuation'
print(text)
print()
patterns = [
(r'^(\w+)', 'word at start of string'),
(r'(\w+)\S*$', 'word at end, with optional punctuation'),
(r'(\bt\w+)\W+(\w+)', 'word starting with t, another word'),
(r'(\w+t)\b', 'word ending with t')
]
for pattern, desc in patterns:
regex = re.compile(pattern)
match = regex.search(text)
print("'{}' ({})\n".format(pattern, desc))
print(' ', match.groups())
print()
import re
text = 'This is some text -- with punctuation'
print(text)
print()
patterns = [
r'(?P<first_word>\w+)',
r'(?P<last_word>\w+)\S*$',
r'(?P<t_word>\bt\w+)\W+(?P<other_word>\w+)',
r'(?P<ends_with_t>\w+t)\b'
]
for pattern in patterns:
regex = re.compile(pattern)
match = regex.search(text)
print("'{}'".format(pattern))
print(' ', match.groups())
print(' ', match.groupdict())
print()
import re
text = 'This is some text -- with punctuation.'
pattern = r'\bT\w+'
with_case = re.compile(pattern)
without_case = re.compile(pattern, re.IGNORECASE)
print('Text:\n {!r}'.format(text))
print('Pattern:\n {}'.format(pattern))
print('Case-sensitive:')
for match in with_case.findall(text):
print(' {!r}'.format(match))
print('Case-insensitive:')
for match in without_case.findall(text):
print(' {!r}'.format(match))
import re
text = 'This is some text -- with punctuation.\nA second line.'
pattern = r'(^\w+)|(\w+\S*$)'
single_line = re.compile(pattern)
multiline = re.compile(pattern, re.MULTILINE)
print('Text:\n {!r}'.format(text))
print('Pattern:\n {}'.format(pattern))
print('Single Line :')
for match in single_line.findall(text):
print(' {!r}'.format(match))
print('Multline :')
for match in multiline.findall(text):
print(' {!r}'.format(match))
import re
text = u'Français złoty Österreich 中国矿业大学'
pattern = r'\w+'
ascii_pattern = re.compile(pattern, re.ASCII)
unicode_pattern = re.compile(pattern)
print('Text :', text)
print('Pattern :', pattern)
print('ASCII :', list(ascii_pattern.findall(text)))
print('Unicode :', list(unicode_pattern.findall(text)))
import re
address = re.compile(
'''
[\w\d.+-]+ # username
@
([\w\d.]+\.)+ # domain name prefix
(com|org|edu) # TODO: support more top-level domains
''',
re.VERBOSE)
candidates = [
u'first.last@example.com',
u'first.last+category@gmail.com',
u'valid-address@mail.example.com',
u'not-valid@example.foo',
]
for candidate in candidates:
match = address.search(candidate)
print('{:<30} {}'.format(
candidate, 'Matches' if match else 'No match'),
)
import re
bold = re.compile(r'\*{2}(.*?)\*{2}')
text = 'Make this **bold**. This **too**.'
print('Text:', text)
print('Bold:', bold.sub(r'<b>\1</b>', text))
import re
bold = re.compile(r'\*{2}(?P<bold_text>.*?)\*{2}')
text = 'Make this **bold**. This **too**.'
print('Text:', text)
print('Bold:', bold.sub(r'<b>\g<bold_text></b>', text))
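# Extra example: re.sub also accepts a callable, which receives each match
# object -- useful when the replacement needs to be computed.
def shout(match):
    return '<b>' + match.group('bold_text').upper() + '</b>'

print('Shout:', bold.sub(shout, text))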
import re
text = '''Paragraph one
on two lines.
Paragraph two.
Paragraph three.'''
print('With findall:')
for num, para in enumerate(re.findall(r'(.+?)(\n{2,}|$)',
text,
flags=re.DOTALL)):
print(num, repr(para))
print()
print()
print('With split:')
for num, para in enumerate(re.split(r'\n{2,}', text)):
print(num, repr(para))
print()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2 Compiling Expressions
Step2: 3 Multiple Matches
Step4: 4 Repetition
Step5: When processing a repetition instruction, re will usually consume as much of the input as possible while matching the pattern. This so-called greedy behavior may result in fewer individual matches, or the matches may include more of the input text than intended. Greediness can be turned off by following the repetition instruction with ?.
Step6: 5 Character Sets
Step7: 5 Escape Codes
Step8: 6 Anchoring
Step9: 7 Constraining the Search
Step10: 8 Dissecting Matches with Groups
Step11: 8 Search Options
Step12: 8.1 Input with multiline
Step13: 9 Unicode
Step14: 10 Verbose Expression Syntax
Step15: 11 Modifying Strings with Patterns
Step16: 12 Splitting with patterns
|
4,546
|
<ASSISTANT_TASK:>
Python Code:
from devito import *
from sympy import init_printing, symbols, solve
init_printing(use_latex=True)
grid = Grid(shape=(5, 6), extent=(1., 1.))
grid
?Function
f = Function(name='g', grid=grid)
f
f.data
g = TimeFunction(name='g', grid=grid)
g
from examples.cfd import init_smooth, plot_field
nt = 100 # Number of timesteps
dt = 0.2 * 2. / 80 # Timestep size (sigma=0.2)
c = 1 # Value for c
# Then we create a grid and our function
grid = Grid(shape=(81, 81), extent=(2., 2.))
u = TimeFunction(name='u', grid=grid)
# We can now set the initial condition and plot it
init_smooth(field=u.data[0], dx=grid.spacing[0], dy=grid.spacing[1])
init_smooth(field=u.data[1], dx=grid.spacing[0], dy=grid.spacing[1])
plot_field(u.data[0])
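# Hedged reconstruction (assumption): the elided `update` expression is the
# upwind discretization of linear convection, u_t + c*u_x + c*u_y = 0,
# rearranged for u.forward; sympy's solve returns a list, hence the [0].
eq = Eq(u.dt + c * u.dxl + c * u.dyl)
stencil = solve(eq, u.forward)[0]
update = Eq(u.forward, stencil)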
op = Operator(update)
op(time=nt+1, dt=dt)
plot_field(u.data[0])
print(op.ccode)
u = TimeFunction(name='u', grid=grid, space_order=2)
u.dx2
u = TimeFunction(name='u', grid=grid, space_order=4)
u.dx2
grid_3d = Grid(shape=(5, 6, 7), extent=(1., 1., 1.))
u = TimeFunction(name='u', grid=grid_3d, space_order=2)
u
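# The Laplacian shorthand mentioned in the text: u.laplace expands to the sum
# of second derivatives over all space dimensions, for 2D and 3D grids alike.
u.laplace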
import numpy as np
from examples.seismic import plot_image
t0, tn, dt = 214., 400, 4.2 # Start, end and timestep size
nt = int(1 + (tn - t0) / dt) # Number of timesteps
# A 120x120 grid that defines our square domain
grid = Grid(shape=(120, 120), extent=(1800., 1800.))
# Load and plot the initial "warmed-up" wavefield
u = TimeFunction(name='u', grid=grid, space_order=2, time_order=2)
u.data[:] = np.load('wavefield.npy')
plot_image(u.data[0])
# Square slowness for a constant wave speed of 1.5m/s
m = Function(name='m', grid=grid)
m.data[:] = 1. / 1.5**2
# Reset the wavefield, so that we can run the cell multiple times
u.data[:] = np.load('wavefield.npy')
# Please implement your wave equation operator here
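# Hedged sketch of the exercise (assumption): constant-density acoustic wave
# equation m * u_tt = laplace(u), rearranged for u.forward as in the Devito
# seismic tutorials; sympy's solve returns a list, hence the [0].
pde = m * u.dt2 - u.laplace
stencil = solve(pde, u.forward)[0]
op = Operator(Eq(u.forward, stencil))
op(t=nt, dt=dt)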
plot_image(u.data[0])
m.data[:, 60:] = 1. / 2.5**2 # Set a new wave speed
plot_image(m.data)
u.data[:] = np.load('wavefield.npy') # Reset our wave field u
plot_image(u.data[0])
op(t=60, dt=dt)
plot_image(u.data[0])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: From equation to stencil code in a few lines of Python
Step2: Functions and data
Step3: Ok, let's create a function $f(x, y)$ and look at the data Devito has associated with it. Please note that it is important to use explicit keywords, such as name or grid, when creating Devito's Function objects.
Step4: By default Devito's Function objects will use the spatial dimensions (x, y) for 2D grids and (x, y, z) for 3D grids. To solve a PDE for several timesteps, we need a time dimension for our symbolic function. For this Devito provides a second function type, TimeFunction, that provides the correct dimension and some other intricacies needed to create a time stepping scheme.
Step5: What does the shape of the associated data look like? Can you guess why?
Step6: Next, we want to discretize our governing equation so that we can create a functional Operator from it. We can start by simply writing out the equation as a symbolic expression, while using the shorthand expressions for derivatives that the Function objects provide. This will create a symbolic object of the discretized equation.
Step7: Please note that the Operator is where all the Devito power is hidden, as it will automatically generate and compile optimized C stencil code. We can look at this code - although we don't need to execute it.
Step8: Second derivatives and high-order stencils
Step9: We can arbitrarily drive the discretization order up if we require higher-order stencils.
Step10: To implement diffusion or wave equations, we need to take the Laplacian $\nabla^2 u$, which is the sum of the second derivatives in all space dimensions. For this, Devito also provides a shorthand expression, which means we do not have to hard-code the problem dimension (2D or 3D) in the code. To change the problem dimension we can create another Grid object and use this to re-define our Functions.
Step11: Exercise 3
Step12: To remind ourselves, the governing equation we want to implement is the acoustic wave equation $m \, \frac{\partial^2 u}{\partial t^2} = \nabla^2 u$, where $m$ is the square slowness.
|
4,547
|
<ASSISTANT_TASK:>
Python Code:
# Imports assumed by the cells below; the helper functions (gen_cube,
# load_isolist, gen_words, load_dictionary, recal_words, get_confusion_matrix,
# get_results, latexify, ...) and `molist` come from this project's own
# support modules, which are not shown here.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import spams

cube_params = {
'freq' : 604000,
'alpha' : 0,
'delta' : 0,
'spe_bw' : 4000,
'spe_res' : 1,
's_f' : 4,
's_a' : 0}
# freq_init = cube_params['freq'] - cube_params['spe_bw']/2.0
# freq_end = cube_params['freq'] + cube_params['spe_bw']/2.0
# molist_present = theoretical_presence(molist, freq_init, freq_end)
# all_subsets = sum(map(lambda r: list(combinations(molist_present, r)),
# range(1, len(molist_present)+1)), [])
# random_index = np.random.randint(len(all_subsets), size=25)
# isolist = []
# for i in random_index:
# isolist.append(all_subsets[i])
# save_isolist(isolist)
isolist = load_isolist()
# log=open('data/isolist_fixed_width.log', 'w')
# cube_n = 0
# cube_name = 'data/cube_fixed_width_'
# for i in range(0, 25):
# Creation of the cube
# gen_cube(isolist[i], cube_params, cube_name + str(cube_n))
# log.write(cube_name + ': ' + str(isolist[i]) + '\n')
# cube_n += 1
# log.close()
# log=open('data/isolist_variable_width.log', 'w')
# cube_n = 25
# cube_name = 'data/cube_variable_width_'
# for i in range(0, 25):
# Creation of the cube
# gen_cube_variable_width(isolist[i], cube_params, cube_name + str(cube_n))
# log.write(cube_name + ': ' + str(isolist[i]) + '\n')
# cube_n += 1
# log.close()
# dictionary = gen_words(molist, cube_params)
# save_dictionary(dictionary, 'band_9')
# dictionary = gen_words(molist, cube_params, True)
# save_dictionary(dictionary, 'band_9_dual')
# dictionary = load_dictionary('band_9')
dictionary = load_dictionary('band_9_dual')
confusion_matrix = []
results = []
noise_pixel = (0,0)
train_pixel = (1,1)
for i in range(0, 50):  # cubes 0-24 are fixed width, 25-49 variable width
if (i == 0):
cube_name = 'data/cube_fixed_width_'
if (i == 25):
cube_name = 'data/cube_variable_width_'
file_path = cube_name + str(i) + '.fits'
train_pixel = (1, 1)
dictionary_recal, detected_peaks = recal_words(file_path, dictionary, cube_params, train_pixel, noise_pixel)
X = get_values_filtered_normalized(file_path, train_pixel, cube_params)
y_train = get_fortran_array(np.asmatrix(X))
dictionary_recal_fa = np.asfortranarray(dictionary_recal,
dtype= np.double)
lambda_param = 0
for idx in range(0, len(detected_peaks)):
if detected_peaks[idx] != 0:
lambda_param += 1
param = {
'lambda1' : lambda_param,
# 'L': 1,
'pos' : True,
'mode' : 0,
'ols' : True,
'numThreads' : -1}
alpha = spams.lasso(y_train, dictionary_recal_fa, **param).toarray()
total = np.inner(dictionary_recal_fa, alpha.T)
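    # Hedged diagnostic (assumption): the sparse reconstruction should track
    # the normalized spectrum, so report the residual norm for this pixel.
    residual = np.linalg.norm(np.asarray(y_train).ravel() - total.ravel())
    print('pixel', train_pixel, 'reconstruction residual:', residual)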
if i == 0:
confusion_matrix = [get_confusion_matrix(dictionary_recal, alpha,
file_path, cube_params, True)]
results = [get_results(confusion_matrix[i])]
else:
confusion_matrix.append(get_confusion_matrix(dictionary_recal, alpha,
file_path, cube_params, True))
results.append(get_results(confusion_matrix[i]))
print(i)
cube_params['freq'] = 277000
# log=open('data/isolist_fixed_width.log', 'w')
# cube_n = 50
# cube_name = 'data/cube_fixed_width_'
# for i in range(0, 25):
# # Creation of the cube
# gen_cube(isolist[i], cube_params, cube_name + str(cube_n))
# log.write(cube_name + ': ' + str(isolist[i]) + '\n')
# cube_n += 1
# log.close()
# log=open('data/isolist_fixed_width.log', 'w')
# cube_n = 75
# cube_name = 'data/cube_variable_width_'
# for i in range(0, 25):
# Creation of the cube
# gen_cube_variable_width(isolist[i], cube_params, cube_name + str(cube_n))
# log.write(cube_name + ': ' + str(isolist[i]) + '\n')
# cube_n += 1
# log.close()
# dictionary = gen_words(molist, cube_params)
# save_dictionary(dictionary, 'band_7')
# dictionary = gen_words(molist, cube_params, True)
# save_dictionary(dictionary, 'band_7_dual')
# dictionary = load_dictionary('band_7')
dictionary = load_dictionary('band_7_dual')
for i in range(50, 100):
if (i == 50):
cube_name = 'data/cube_fixed_width_'
if (i == 75):
cube_name = 'data/cube_variable_width_'
file_path = cube_name + str(i) + '.fits'
train_pixel = (1, 1)
    dictionary_recal, detected_peaks = recal_words(file_path, dictionary, cube_params,
                                                   train_pixel, noise_pixel)
X = get_values_filtered_normalized(file_path, train_pixel, cube_params)
y_train = get_fortran_array(np.asmatrix(X))
dictionary_recal_fa = np.asfortranarray(dictionary_recal,
dtype= np.double)
lambda_param = 0
for idx in range(0, len(detected_peaks)):
if detected_peaks[idx] != 0:
lambda_param += 1
param = {
'lambda1' : lambda_param,
# 'L': 1,
'pos' : True,
'mode' : 0,
'ols' : True,
'numThreads' : -1}
alpha = spams.lasso(y_train, dictionary_recal_fa, **param).toarray()
total = np.inner(dictionary_recal_fa, alpha.T)
confusion_matrix.append(get_confusion_matrix(dictionary_recal, alpha,
file_path, cube_params, True))
results.append(get_results(confusion_matrix[i]))
print(i)
latexify(fig_height=6.9)
plt.subplot(3, 1, 1)
plt.title("Precision of Predictions for Fixed Width")
plt.xlabel("Precision")
plt.ylabel("Amount")
plt.legend()
plt.hist([np.mean(means["Precision"]) for means in results[:25]], 25, normed=True, color='b', alpha=1, label='Band 9')
plt.hist([np.mean(means["Precision"]) for means in results[50:75]], 25, normed=True, color='r', alpha=0.75, label='Band 7')
plt.subplot(3, 1, 2)
plt.title("Recall of Predictions for Fixed Width")
plt.xlabel("Recall")
plt.ylabel("Amount")
plt.legend()
plt.hist([np.mean(means["Recall"]) for means in results[:25] if np.mean(means["Recall"]) > 0.3 and np.mean(means["Recall"]) < 1], 25, normed=True, color='b', alpha=1, label='Band 9')
plt.hist([np.mean(means["Recall"]) for means in results[50:75]], 25, normed=True, color='r', alpha=0.75, label='Band 7')
plt.subplot(3, 1, 3)
plt.title("F-Score of Predictions for Fixed Width")
plt.xlabel("F-Score")
plt.ylabel("Amount")
plt.legend()
plt.hist([np.mean(means["F-Score"]) for means in results[:25]], 25, normed=True, color='b', alpha=1, label='Band 9')
plt.hist([np.mean(means["F-Score"]) for means in results[50:75]], 25, normed=True, color='r', alpha=0.75, label='Band 7')
plt.tight_layout()
plt.savefig("images/hist1.pdf")
latexify(fig_height=6.9)
plt.subplot(3, 1, 1)
plt.title("Precision of Predictions for Variable Width")
plt.xlabel("Precision")
plt.ylabel("Amount")
plt.legend()
plt.hist([np.mean(means["Precision"]) for means in results[25:50]], 25, normed=True, color='g', alpha=1, label='Band 9')
plt.hist([np.mean(means["Precision"]) for means in results[75:]], 25, normed=True, color='y', alpha=0.75, label='Band 7')
plt.subplot(3, 1, 2)
plt.title("Recall of Predictions for Variable Width")
plt.xlabel("Recall")
plt.ylabel("Amount")
plt.legend()
plt.hist([np.mean(means["Recall"]) for means in results[25:50] if np.mean(means["Recall"]) > 0.3], 25, normed=True, color='g', alpha=1, label='Band 9')
plt.hist([np.mean(means["Recall"]) for means in results[75:]], 25, normed=True, color='y', alpha=0.75, label='Band 7')
plt.subplot(3, 1, 3)
plt.title("F-Score of Predictions for Variable Width")
plt.xlabel("F-Score")
plt.ylabel("Amount")
plt.legend()
plt.hist([np.mean(means["F-Score"]) for means in results[25:50]], 25, normed=True, color='g', alpha=1, label='Band 9')
plt.hist([np.mean(means["F-Score"]) for means in results[75:]], 25, normed=True, color='y', alpha=0.75, label='Band 7')
plt.tight_layout()
plt.savefig("images/hist2.pdf")
latexify()
file_path = "data/cube_fixed_width_6.fits"
train_pixel = (1, 1)
x = get_freq_index_from_params(cube_params)
y = get_values_filtered_normalized(file_path, train_pixel, cube_params)
plt.plot(x, y)
plt.legend(loc='upper right')
plt.xlim(xmin = 605075, xmax = 605275)
plt.ylim(ymin = -1,ymax = 1)
lines = get_lines_from_fits(file_path)
current_isotopes = [""]
for line in lines:
isotope_frequency = int(line[1])
isotope_name = line[0] + "-f" + str(line[1])
if isotope_frequency in range(605075, 605275) \
and line[0] not in current_isotopes:
# Shows lines really present
plt.axvline(x=isotope_frequency, ymin=0, ymax= 3, color='g', linewidth=2, label='Present Line')
plt.text(isotope_frequency + 1.5, -0.125, isotope_name, size='8', rotation='vertical')
current_isotopes.append(line[0])
plt.title("Blending case")
plt.xlabel("Frequency [MHz]")
ax = plt.gca()
ax.get_xaxis().get_major_formatter().set_useOffset(False)
plt.ylabel("Normalized Temperature")
plt.legend()
plt.savefig("images/blending.pdf")
latexify()
file_path = "data/cube_fixed_width_6.fits"
train_pixel = (1, 1)
x = get_freq_index_from_params(cube_params)
y = get_values_filtered_normalized(file_path, train_pixel, cube_params)
plt.plot(x, y)
plt.legend(loc='upper right')
plt.xlim(xmin = 605140, xmax = 605200)
plt.ylim(ymin = -1,ymax = 1)
lines = get_lines_from_fits(file_path)
current_isotopes = [""]
for line in lines:
isotope_frequency = int(line[1])
isotope_name = line[0] + "-f" + str(line[1])
if isotope_frequency in range(605140, 605275) \
and line[0] not in current_isotopes:
# Shows lines really present
plt.axvline(x=isotope_frequency, ymin=0, ymax= 3, color='g', linewidth=2, label='Present Line', linestyle='--')
plt.text(isotope_frequency + 1.5, -0.125, isotope_name, size='8', rotation='vertical', color='g')
current_isotopes.append(line[0])
for idx in range(0, len(detected_peaks)):
if detected_peaks[idx] != 0 and x[idx] in range(605075, 605200):
plt.axvline(x=x[idx], ymin=0, ymax= 1, color='r')
frecuencia = x[idx]
m = 0
detections = alpha_columns[dictionary_recal.ix[frecuencia] != 0]
for pch, elements in enumerate(detections):
m = m + 0.75
# print(detections.index[pch])
probability = round(elements * 100 / np.sum(alpha_columns[dictionary_recal.ix[frecuencia] != 0]))
# print(probability)
match = "match " + str(int(probability)) + " \%"
if '33SO2-f605162.1267' in detections.index[pch]:
plt.text(frecuencia + 1.5, 0.525, match, size='10', rotation='vertical', color='r')
break
elif 'OS17O-f605172.0102' in detections.index[pch]:
plt.text(frecuencia + 1.5, 0.525, match, size='10', rotation='vertical', color='r')
plt.title("Blending case")
plt.xlabel("Frequency [MHz]")
ax = plt.gca()
ax.get_xaxis().get_major_formatter().set_useOffset(False)
plt.ylabel("Normalized Temperature")
plt.legend()
plt.savefig("images/blending.pdf")
file_path = "data/cube_fixed_width_6.fits"
dictionary = load_dictionary('band_9_dual')
train_pixel = (1, 1)
dictionary_recal, detected_peaks = recal_words(file_path, dictionary, cube_params, 4)
X = get_values_filtered_normalized(file_path, train_pixel, cube_params)
y_train = get_fortran_array(np.asmatrix(X))
dictionary_recal_fa = np.asfortranarray(dictionary_recal,
dtype= np.double)
lambda_param = 0
for idx in range(0, len(detected_peaks)):
if detected_peaks[idx] != 0:
lambda_param += 1
param = {
'lambda1' : lambda_param,
# 'L': 1,
'pos' : True,
'mode' : 0,
'ols' : True,
'numThreads' : -1}
alpha = spams.lasso(y_train, dictionary_recal_fa, **param).toarray()
total = np.inner(dictionary_recal_fa, alpha.T)
latexify()
file_path = "data/cube_fixed_width_6.fits"
train_pixel = (1, 1)
x = get_freq_index_from_params(cube_params)
y = get_values_filtered_normalized(file_path, train_pixel, cube_params)
plt.plot(x, y)
plt.legend(loc='upper right')
plt.xlim(xmin = 605350, xmax = 605390)
plt.ylim(ymin = -1,ymax = 1.)
lines = get_lines_from_fits(file_path)
for i in range(0, len(lines)):
isotope_frequency = int(lines[i][1])
isotope_name = lines[i][0] + "-f" + str(lines[i][1])
if isotope_frequency in range(605335, 605375):
# Shows lines really present
plt.axvline(x=isotope_frequency, ymin=0, ymax= 3, color='g', linewidth=2, label='Present Line', linestyle='--')
if (i == 27):
plt.text(isotope_frequency + 1.5, -0.125, isotope_name.split('.')[0], size='8', rotation='vertical', color='g')
elif (i == 28):
plt.text(isotope_frequency + 2.25, -0.125, isotope_name.split('.')[0], size='8', rotation='vertical', color='g')
else:
plt.text(isotope_frequency + 1, -0.125, isotope_name.split('.')[0], size='8', rotation='vertical', color='g')
alpha_columns = pd.Series(alpha[:,0])
alpha_columns.index = dictionary_recal.columns
alpha_columns = alpha_columns[alpha_columns > 0]
hardcoder = 0
for idx in range(0, len(detected_peaks)):
if detected_peaks[idx] != 0 and x[idx] in range(605350, 605390):
plt.axvline(x=x[idx], ymin=0, ymax= 1, color='r')
frecuencia = x[idx]
m = 0
detections = alpha_columns[dictionary_recal.ix[frecuencia] != 0]
for pch, elements in enumerate(detections):
m = m + 0.75
print(detections.index[pch])
probability = round(elements * 100 / np.sum(alpha_columns[dictionary_recal.ix[frecuencia] != 0]))
print(probability)
match = "match " + str(int(probability)) + " \%"
if hardcoder == 0:
plt.text(frecuencia + 1.5, 0.525, match, size='10', rotation='vertical', color='r')
hardcoder = hardcoder + 1
break
else:
hardcoder = hardcoder - 1
continue
plt.title("Hyperfine lines case")
plt.xlabel("Frequency [MHz]")
ax = plt.gca()
ax.get_xaxis().get_major_formatter().set_useOffset(False)
plt.ylabel("Normalized Temperature")
plt.legend()
plt.savefig("images/hyperfine.pdf")
file_path = "data/cube_fixed_width_1.fits"
dictionary = load_dictionary('band_9_dual')
train_pixel = (1, 1)
dictionary_recal, detected_peaks = recal_words(file_path, dictionary, cube_params, 4)
X = get_values_filtered_normalized(file_path, train_pixel, cube_params)
y_train = get_fortran_array(np.asmatrix(X))
dictionary_recal_fa = np.asfortranarray(dictionary_recal,
dtype= np.double)
lambda_param = 0
for idx in range(0, len(detected_peaks)):
if detected_peaks[idx] != 0:
lambda_param += 1
param = {
'lambda1' : lambda_param,
# 'L': 1,
'pos' : True,
'mode' : 0,
'ols' : True,
'numThreads' : -1}
alpha = spams.lasso(y_train, dictionary_recal_fa, **param).toarray()
total = np.inner(dictionary_recal_fa, alpha.T)
latexify()
file_path = "data/cube_fixed_width_1.fits"
train_pixel = (1, 1)
x = get_freq_index_from_params(cube_params)
y = get_values_filtered_normalized(file_path, train_pixel, cube_params)
plt.plot(x, y)
plt.legend(loc='upper right')
plt.xlim(xmin = 604356, xmax = 604456)
plt.ylim(ymin = -1,ymax = 1)
lines = get_lines_from_fits(file_path)
for line in lines:
isotope_frequency = int(line[1])
isotope_name = line[0] + "-f" + str(line[1])
if isotope_frequency in range(604356, 604456):
# Shows lines really present
plt.axvline(x=isotope_frequency, ymin=0, ymax= 3, color='g', linewidth=2, label='Present Line', linestyle='--')
plt.text(isotope_frequency + 2, 0, isotope_name, size='8', rotation='vertical', color='g')
alpha_columns = pd.Series(alpha[:,0])
alpha_columns.index = dictionary_recal.columns
alpha_columns = alpha_columns[alpha_columns > 0]
for idx in range(0, len(detected_peaks)):
if detected_peaks[idx] != 0 and x[idx] in range(604356, 604456):
plt.axvline(x=x[idx], ymin=0, ymax= 1, color='r')
frecuencia = x[idx]
m = 0
detections = alpha_columns[dictionary_recal.loc[frecuencia] != 0]
for pch, elements in enumerate(detections):
m = m + 0.75
print(detections.index[pch])
probability = round(elements * 100 / np.sum(alpha_columns[dictionary_recal.loc[frecuencia] != 0]))
print(probability)
match = "match " + str(int(probability)) + " \%"
plt.text(frecuencia + 2.5, 0.725, match, size='10', rotation='vertical', color='r')
plt.title("Double peaks for single Line")
plt.xlabel("Frequency [MHz]")
ax = plt.gca()
ax.get_xaxis().get_major_formatter().set_useOffset(False)
plt.ylabel("Normalized Temperature")
plt.legend()
plt.savefig("images/doublepeak.pdf")
np.mean([np.mean(means["F-Score"]) for means in results])
min_distance_req_list = pd.DataFrame([])
for i in range(0, 100):
if (i == 0 or i == 50):
cube_name = 'data/cube_fixed_width_'
if (i == 25 or i == 75):
cube_name = 'data/cube_variable_width_'
file_path = cube_name + str(i) + '.fits'
lines = get_lines_from_fits(file_path)
sorted_lines = sorted([lines[idx][1] for idx in range(0, len(lines) )])
min_distance_req = True
last_freq = float(sorted_lines[0])
for idx in range(1, len(sorted_lines)):
distance = float(sorted_lines[idx]) - last_freq
if(distance <= 1):
min_distance_req = False
break
last_freq = float(sorted_lines[idx])
if len(min_distance_req_list) == 0:
if (min_distance_req):
min_distance_req_list = [i]
else:
if (min_distance_req):
min_distance_req_list.append(i)
min_distance_req_list
results_filtered = [results[min_distance_req_list[0]]]
for ix in min_distance_req_list[1:]:
results_filtered.append(results[ix])
np.mean([np.mean(means["F-Score"]) for means in results_filtered])
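# Side-by-side report (sketch): mean F-Score over all cubes vs. the subset that
# satisfies the minimum line-separation requirement. Assumes `results` and
# `results_filtered` from the cells above.
overall_f = np.mean([np.mean(means["F-Score"]) for means in results])
filtered_f = np.mean([np.mean(means["F-Score"]) for means in results_filtered])
print("Mean F-Score -- overall: {:.3f}, min-distance subset: {:.3f}".format(overall_f, filtered_f))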
cf_filtered = [confusion_matrix[min_distance_req_list[0]]]
for ix in min_distance_req_list[1:]:
cf_filtered.append(confusion_matrix[ix])
confusion_matrix[0]
latexify()
n = 5
fig, axes = plt.subplots(nrows=4, ncols=5)
filtered_matrices = confusion_matrix[:20]
for ax, matrix in zip(axes.flat, filtered_matrices):
order_index = np.argsort([float(f.split('f')[1].split('&')[0]) for f in matrix.index])
order_columns = np.argsort([float(f.split('f')[1].split('&')[0]) for f in matrix.columns])
im = ax.matshow(matrix[order_columns].iloc[order_index], cmap='hot')
ax.set_xticklabels([])
ax.set_yticklabels([])
fig.suptitle("Modified Confusion Matrices")
fig.colorbar(im, ax=axes.ravel().tolist())
plt.savefig("images/confusion_matrix.pdf")
latexify()
# Plot Precision-Recall curve for each cube
precision_avg = [np.mean(means["Precision"]) for means in results[:50]]
recall_avg = [np.mean(means["Recall"]) for means in results[:50]]
area = simps(precision_avg, dx=0.01)
plt.clf()
plt.plot(np.sort(recall_avg),
-np.sort(-np.ones(1)*precision_avg),
label='Overall')
precision_avg = [np.mean(means["Precision"]) for means in results[:25]]
recall_avg = [np.mean(means["Recall"]) for means in results[:25]]
area = simps(precision_avg, dx=0.01)
plt.plot(np.sort(recall_avg),
-np.sort(-np.ones(1)*precision_avg),
label='Fixed width')
precision_avg = [np.mean(means["Precision"]) for means in results[25:50]]
recall_avg = [np.mean(means["Recall"]) for means in results[25:50]]
area = simps(precision_avg, dx=0.01)
plt.plot(np.sort(recall_avg),
-np.sort(-np.ones(1)*precision_avg),
label='Variable width ')
plt.xlim([0.2, 1.0])
plt.ylim([0.6, 1.01])
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Precision-Recall Curves Band 9')
plt.legend(loc="lower left")
plt.savefig("images/results1.pdf")
latexify()
# Plot Precision-Recall curve for each cube
from scipy.integrate import simps
precision_avg = [np.mean(means["Precision"]) for means in results[50:100]]
recall_avg = [np.mean(means["Recall"]) for means in results[50:100]]
area = simps(precision_avg, dx=0.01)
plt.clf()
plt.plot(np.sort(recall_avg),
-np.sort(-np.ones(1)*precision_avg),
label='Overall')
precision_avg = [np.mean(means["Precision"]) for means in results[50:75]]
recall_avg = [np.mean(means["Recall"]) for means in results[50:75]]
area = simps(precision_avg, dx=0.01)
plt.plot(np.sort(recall_avg),
-np.sort(-np.ones(1)*precision_avg),
label='Fixed Width')
precision_avg = [np.mean(means["Precision"]) for means in results[75:100]]
recall_avg = [np.mean(means["Recall"]) for means in results[75:100]]
area = simps(precision_avg, dx=0.01)
plt.plot(np.sort(recall_avg),
-np.sort(-np.ones(1)*precision_avg),
label='Variable Width ')
plt.xlim([0.415, 0.854])
plt.ylim([0.745, 0.96])
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Precision-Recall Curves Band 7')
plt.legend(loc="lower left")
plt.savefig("images/results2.pdf")
def latexify(fig_width=None, fig_height=None, columns=1):
Set up matplotlib's RC params for LaTeX plotting.
Call this before plotting a figure.
Parameters
----------
fig_width : float, optional, inches
fig_height : float, optional, inches
columns : {1, 2}
# code adapted from http://www.scipy.org/Cookbook/Matplotlib/LaTeX_Examples
# Width and max height in inches for IEEE journals taken from
# computer.org/cms/Computer.org/Journal%20templates/transactions_art_guide.pdf
assert(columns in [1,2])
if fig_width is None:
fig_width = 4.89 if columns==1 else 6.9 # width in inches
if fig_height is None:
golden_mean = (sqrt(5)-1.0)/2.0 # Aesthetic ratio
fig_height = fig_width*golden_mean # height in inches
MAX_HEIGHT_INCHES = 24.0
if fig_height > MAX_HEIGHT_INCHES:
print("WARNING: fig_height too large:" + fig_height +
"so will reduce to" + MAX_HEIGHT_INCHES + "inches.")
fig_height = MAX_HEIGHT_INCHES
params = {'backend': 'ps',
'text.latex.preamble': [r'\usepackage{gensymb}'],  # raw string: '\u...' is an invalid escape in Python 3
'axes.labelsize': 8, # fontsize for x and y labels (was 10)
'axes.titlesize': 8,
'font.size': 8, # was 10; the old 'text.fontsize' key is no longer a valid rcParam
'legend.fontsize': 8, # was 10
'xtick.labelsize': 10,
'ytick.labelsize': 8,
'text.usetex': True,
'figure.figsize': [fig_width,fig_height],
'font.family': 'serif'
}
matplotlib.rcParams.update(params)
def format_axes(ax):
for spine in ['top', 'right']:
ax.spines[spine].set_visible(False)
for spine in ['left', 'bottom']:
ax.spines[spine].set_color(SPINE_COLOR)
ax.spines[spine].set_linewidth(0.5)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
for axis in [ax.xaxis, ax.yaxis]:
axis.set_tick_params(direction='out', color=SPINE_COLOR)
return ax
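# latexify() and format_axes() rely on `matplotlib`, `sqrt` and a SPINE_COLOR
# constant; if these are not already defined earlier in the notebook, the
# following (assumed) definitions would be needed:
import matplotlib
from math import sqrt
SPINE_COLOR = 'gray'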
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To select the isolist, the wavelength range of the cube is obtained and a search of the theoretical Splatalogue catalog is performed. All the isotopes that have spectral lines within the range of the cube are determined.
Step2: Then, we get all the possible combination sets of the previously determined isotopes.
Step3: Finally, random sets from those previously determined are selected in order to generate the data cubes.
Step4: Generate Datacubes in Band 9, Fixed Width
Step5: Generate Datacubes in Band 9, Variable (TO DO)
Step6: Creation of Dictionary
Step7: Recalibration of Dictionary
Step8: Testing Band 7
Step9: Generate Datacubes in Band 7, Variable (TO DO)
Step10: Creation of Dictionary
Step11: Training
Step12: Testing
Step13: Blending case
Step14: Hyperfine lines case
Step16: Double peaks for single Line
|
4,548
|
<ASSISTANT_TASK:>
Python Code:
# Get http://geneontology.org/ontology/go-basic.obo
from goatools.base import download_go_basic_obo
obo_fname = download_go_basic_obo()
# Get ftp://ftp.ncbi.nlm.nih.gov/gene/DATA/gene2go.gz
from goatools.base import download_ncbi_associations
gene2go = download_ncbi_associations()
from goatools.obo_parser import GODag
obodag = GODag("go-basic.obo")
from __future__ import print_function
from goatools.associations import read_ncbi_gene2go
geneid2gos_mouse = read_ncbi_gene2go("gene2go", taxids=[10090])
print("{N:,} annotated mouse genes".format(N=len(geneid2gos_mouse)))
from goatools.test_data.genes_NCBI_10090_ProteinCoding import GeneID2nt as GeneID2nt_mus
from goatools.go_enrichment import GOEnrichmentStudy
goeaobj = GOEnrichmentStudy(
GeneID2nt_mus.keys(), # List of mouse protein-coding genes
geneid2gos_mouse, # geneid/GO associations
obodag, # Ontologies
propagate_counts = False,
alpha = 0.05, # default significance cut-off
methods = ['fdr_bh']) # defult multipletest correction method
# Data will be stored in this variable
import os
geneid2symbol = {}
# Get xlsx filename where data is stored
ROOT = os.path.dirname(os.getcwd()) # go up 1 level from current working directory
din_xlsx = os.path.join(ROOT, "goatools/test_data/nbt_3102/nbt.3102-S4_GeneIDs.xlsx")
# Read data
if os.path.isfile(din_xlsx):
import xlrd
book = xlrd.open_workbook(din_xlsx)
pg = book.sheet_by_index(0)
for r in range(pg.nrows):
symbol, geneid, pval = [pg.cell_value(r, c) for c in range(pg.ncols)]
if geneid:
geneid2symbol[int(geneid)] = symbol
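# Quick sanity check (sketch): report how many study genes were read from the sheet.
print("{N} genes read from the study spreadsheet".format(N=len(geneid2symbol)))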
# 'p_' means "pvalue". 'fdr_bh' is the multipletest method we are currently using.
geneids_study = geneid2symbol.keys()
goea_results_all = goeaobj.run_study(geneids_study)
goea_results_sig = [r for r in goea_results_all if r.p_fdr_bh < 0.05]
goeaobj.wr_xlsx("nbt3102_symbols.xlsx", goea_results_sig, itemid2name=geneid2symbol)
goeaobj.wr_xlsx("nbt3102_geneids.xlsx", goea_results_sig)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1b. Download Associations, if necessary
Step2: 2. Load Ontologies, Associations and Background gene set
Step3: 2b. Load Associations
Step4: 2c. Load Background gene set
Step5: 3. Initialize a GOEA object
Step6: 4. Read study genes
Step7: 5. Run Gene Ontology Enrichment Analysis (GOEA)
Step8: 6. Write results to an Excel file and to a text file
|
4,549
|
<ASSISTANT_TASK:>
Python Code:
import growler
growler.__meta__.version_info
app = growler.App("NotebookServer")
@app.use
def print_client_info(req, res):
ip = req.ip
reqpath = req.path
print("[{ip}] {path}".format(ip=ip, path=reqpath))
print(" >", req.headers['USER-AGENT'])
print(flush=True)
i = 0
@app.get("/")
def index(req, res):
global i
res.send_text("It Works! (%d)" % i)
i += 1
app.print_middleware_tree()
app.create_server_and_run_forever(host='127.0.0.1', port=9000)
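# Once the server is running, the route can be exercised from another shell,
# e.g. `curl http://127.0.0.1:9000/`; each request increments the counter and
# triggers the print_client_info middleware.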
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create growler application with name NotebookServer
Step2: Add a general-purpose middleware function that prints the client's IP address and the USER-AGENT header
Step3: Next, add a route matching any GET request for the root (/) of the site. This uses a simple global variable to count the number of times this page has been accessed, and returns text to the client
Step4: We can see the tree of middleware that all requests will pass through - notice the router object that was implicitly created, which will match all requests.
Step5: Use the helper method to create the asyncio server listening on port 9000.
|
4,550
|
<ASSISTANT_TASK:>
Python Code:
# Load libraries
import pandas as pd
import numpy as np
# Create feature matrix with two highly correlated features
X = np.array([[1, 1, 1],
[2, 2, 0],
[3, 3, 1],
[4, 4, 0],
[5, 5, 1],
[6, 6, 0],
[7, 7, 1],
[8, 7, 0],
[9, 7, 1]])
# Convert feature matrix into DataFrame
df = pd.DataFrame(X)
# View the data frame
df
# Create correlation matrix
corr_matrix = df.corr().abs()
# Select upper triangle of correlation matrix
upper = corr_matrix.where(np.triu(np.ones(corr_matrix.shape), k=1).astype(bool))  # np.bool was removed in newer NumPy
# Find index of feature columns with correlation greater than 0.95
to_drop = [column for column in upper.columns if any(upper[column] > 0.95)]
# Drop features
df.drop(df.columns[to_drop], axis=1)
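# Optional check (sketch): inspect the upper-triangle correlations and the flagged
# columns before dropping, to confirm the 0.95 threshold behaves as expected.
print(upper)
print("Columns flagged for removal:", to_drop)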
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Data
Step2: Identify Highly Correlated Features
Step3: Drop Marked Features
|
4,551
|
<ASSISTANT_TASK:>
Python Code:
%%HTML
<style>
.rendered_html {
font-size: 0.7em;
}
.CodeMirror-scroll {
font-size: 1.2em;
}
.rendered_html table, .rendered_html th, .rendered_html tr, .rendered_html td, .rendered_html h2, .rendered_html h4 {
font-size: 100%;
}
</style>
import pandas as pd
import pandas as pd
%matplotlib inline
# Read data from data/coffees.csv
data =
# .head()
# .loc or .iloc
# [] indexing on a series
print("Dataset length :")
# len()
print()
# .describe()
# .isnull() and boolean indexing with []
# .dtypes
# print the first element of the series with [] indexing
print()
# print its type()
print()
# cast the coffees column using pd.to_numeric, and coerce errors
data.coffees =
data.head()
# Use .dropna() using a subset, and pass inplace
data.head()
# Cast to int using .astype()
data.coffees =
data.head()
# pd.to_datetime()
data.timestamp =
# Confirm dtypes
data.dtypes
# .describe(), passing the include kwarg to see all information
# What do the first few rows look like ?
# .plot() on the coffees series
# .plot() on the dataframe,
# pass x kwarg to plot against the timestamp
# use a dot-dash style
# .tail() with ten rows
# Use conditional indexing against the timestamp
data =
data.tail()
# Once again, plot the data against the timestamp
# .value_counts()
# .plot() a bar chart of the value counts
# Create a series of the weekdays
# for each entry using .dt.weekday
weekdays =
# .assign() it to our dataframe
data =
data.head()
weekday_names = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]
weekday_dict = {key: weekday_names[key] for key in range(7)}
def day_of_week(idx):
return weekday_dict[idx]
# Use .apply() to apply a custom function to the weekdays column
data.weekdays =
data.head()
# .groupby() the weekdays and then .count() rows in each group
weekday_counts =
# We can reorder this dataframe by our weekday_names
# list using .loc, indexing with the names
weekday_counts =
weekday_counts
# Plot a bar chart of the coffees data in weekday_counts
# Title : "Datapoints added on each weekday"
# Set the dataframe's .index property
data.index =
# Let's drop the timestamp column, as we no longer need it
data.head()
# pd.date_range, with daily frequency, and normalisation
midnights =
midnights
# Take the union of the existing and new indices
new_index =
new_index
# .reindex() the dataframe to get an upsampled dataframe
upsampled_data =
upsampled_data.head(10)
# .interpolate the upsampled_data using the time method
upsampled_data =
upsampled_data.head(10)
# .resample() the upsampled dataframe,
# using .asfreq() to get only exactly daily values
daily_data =
# Drop the contributor column, we no longer need it
daily_data =
# Generate a column of weekday_names
daily_data["weekdays"] =
daily_data.head()
# Let's plot the data once more, to see how we're doing
# Use .diff() on the coffees column; follow up with .shift()
coffees_made =
# Add this as a column to the dataframe
daily_data["coffees_made_today"] =
daily_data.head(n=10)
# .groupby weekdays, take the mean, and
# grab the coffees_made_today column
coffees_by_day =
coffees_by_day
# Sort coffees_by_day by our list of weekday names
coffees_by_day =
# Plot a bar chart
# Bring in data/department_members.csv;
# have the first column be the index, and parse the dates
people =
people.head()
# Use an outer join, then interpolate over
# missing values using nearest values
daily_data =
daily_data.head(n=15)
# New column is the ratio of coffees made on a
# given day to number of members in the department
daily_data["coffees_per_person"] =
# Let's drop those remaining NaNs while we're at it
daily_data.head(n=10)
# Plot the coffees_per_person column
# read data/coffee_status.csv
# parse_dates as kwarg; also pass index_col
machine_status =
machine_status.head()
# .value_counts()
# Make a pd.Series from the status series where things are OK
numerical_status =
numerical_status.plot()
# .join() daily_data with machine_status
daily_data =
daily_data.head()
# Column depicting when the status was "OK"
# Cast the series to ints as you create a new column in the dataframe
daily_data["numerical_status"] =
daily_data.head()
# Plot both columns on the same graph, using default args
# Resample weekly, taking the mean
# of each week to get a weekly value
weekly_data =
# Plot the coffees per person and the machine's status
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Systems check
Step2: Note
Step3: Note
Step4: Note
Step5: We have an index, and three columns
Step6: Definitely a string. We'll note this as something to fix after we finish looking around.
Step7: Note
Step8: What else can we find out ?
Step9: Looks like we also have some missing data - we have 671 rows, but the coffees column only has 658 entries.
Step10: Note
Step11: The contributor column makes sense as object, because we expect strings there; but surely the timestamp should be a timestamp-type, and coffees should be numerical ?
Step12: It looks like the timestamp field was read from CSV as a string. That makes sense - CSV files are very basic. We'll have pandas interpret these strings as datetimes for us automatically.
Step13: #### The coffees column contains NaNs.
Step14: The coffees column is of type float.
Step15: Let's have pandas parse the timestamp strings to datetime objects.
Step16: So where do we stand ?
Step17: Note
Step18: pandas is plotting the coffees against the index, which is just a series of integers.
Step19: We have some very uneven spacing in places. We might start by cutting off the last few points of this time-series, which is missing a lot of data.
Step20: After mid-March, things start getting spaced rather erratically.
Step21: Note
Step22: 1. Contributions to the time-series
Step23: Note
Step24: On which weekdays were contributions made ?
Step25: Can we replace these integers with actual weekdays ?
Step26: Let's group by these weekdays.
Step27: Note
Step28: 2. Weekday trends
Step29: Let's add some rows at midnight on every day.
Step30: Note
Step31: Note
Step32: Note
Step33: We're now ready to resample the time-series at a daily frequency.
Step34: Let's begin by figuring out how many coffees are made on any given day.
Step35: Note
Step36: Let's order this series and then plot it.
Step37: Wednesday was seminar day...
Step38: Let's join the datasets.
Step39: Note
Step40: We can now plot this column.
Step41: Those are strange plateaus. We'll pull in another dataset, telling us when the machine was broken.
Step42: Note
Step43: A quick trick to plot this as a time-series...
Step44: Note
Step45: We'll bring in this numerical representation of status column into our dataframe too.
Step46: Let's plot both the coffees per person and the numerical status.
Step47: We see a strong weekday-weekend effect. Resampling weekly will fix that.
|
4,552
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'snu', 'sandbox-2', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
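# Illustrative note (added): each property above is completed by following its
# DOC.set_id(...) call with DOC.set_value(...), using one of the listed valid
# choices (or a free-text / numeric value where no choices are given).
# Hypothetical example only -- the actual choice depends on the model being documented:
# DOC.set_value("stratospheric aerosols optical thickness")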
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Flourinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Flourinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Reprenstation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
4,553
|
<ASSISTANT_TASK:>
Python Code:
remote_data = True
remote_server_auto = True
case_name = 'cylinder'
data_dir='/gpfs/thirdparty/zenotech/home/dstandingford/VALIDATION/CYLINDER'
data_host='dstandingford@vis03'
paraview_cmd='mpiexec /gpfs/cfms/apps/zCFD/bin/pvserver'
if not remote_server_auto:
paraview_cmd=None
if not remote_data:
data_host='localhost'
paraview_cmd=None
# Validation criteria setup for cylinder - note that the timestep (dt=0.002) chosen is just
# small enough to capture the correct shedding frequency. A smaller timestep (dt=0.001) gives a more
# accurate output.
validate = True
regression = True
if (validate):
valid = True
valid_lower_strouhal = 0.1790
valid_upper_strouhal = 0.1820
print 'VALIDATING CYLINDER CASE'
if (regression):
print 'REGRESSION CYLINDER CASE'
%pylab inline
from paraview.simple import *
paraview.simple._DisableFirstRenderCameraReset()
import pylab as pl
import math
from zutil.post import pvserver_connect
if remote_data:
pvserver_connect(data_host=data_host,data_dir=data_dir,paraview_cmd=paraview_cmd)
from zutil.post import get_case_parameters,print_html_parameters
parameters=get_case_parameters(case_name,data_host=data_host,data_dir=data_dir)
from zutil.post import get_status_dict
status=get_status_dict(case_name,data_host=data_host,data_dir=data_dir)
num_procs = str(status['num processor'])
# print parameters
from IPython.display import HTML
HTML(print_html_parameters(parameters))
diameter = 1.0
time_step = parameters['time marching']['unsteady']['time step']
cycles = parameters['time marching']['cycles']
mach = parameters['IC_1']['V']['Mach']
print 'mach = %.2f'%(mach)
kappa = 1.402
print 'kappa = %.3f'%(kappa)
R = 287.058
print 'R = %.3f'%(R)
temperature = parameters['IC_1']['temperature']
print 'temperature = %.2f'%(temperature) + ' Kelvin'
pressure = parameters['IC_1']['pressure']
print 'pressure = %.2f'%(pressure) + ' Pascals'
density = pressure/(R*temperature)
print 'density = %.2f'%(density) + ' kg/m^3'
speed_of_sound = sqrt(kappa*pressure/density)
print 'speed_of_sound = %.2f'%(speed_of_sound) + ' m/s'
u_ref = mach*speed_of_sound
print 'u_ref = %.2f'%(u_ref) + ' m/s'
from zutil.post import get_case_root, get_case_report, get_monitor_data
monitor_data = get_monitor_data(get_case_report(case_name),'probe','cp')
# clean up the probe history - remove the pseudo-timestep data
probe_data_x = []
probe_data_y = []
for i in range(0,len(monitor_data[0])):
if ((float(monitor_data[0][i])/float(cycles)) == int(monitor_data[0][i]/cycles)):
probe_data_x.append(float(monitor_data[0][i])*float(time_step)/float(cycles))
probe_data_y.append(float(monitor_data[1][i]))
# Find local maxima after 1 second
maxima_x = []
maxima_y = []
time_start = 1.0
for i in range(1,len(probe_data_x)-1):
time = probe_data_x[i]
if (time > time_start):
val_im1 = probe_data_y[i-1]
val_i = probe_data_y[i]
val_ip1 = probe_data_y[i+1]
if ((val_i > val_im1) and (val_i > val_ip1)):
maxima_x.append(probe_data_x[i])
maxima_y.append(probe_data_y[i])
# Calculate the Strouhal number
num_periods = len(maxima_x)-1
if (num_periods > 1):
frequency = num_periods/(maxima_x[len(maxima_x)-1]-maxima_x[0])
strouhal = frequency*diameter/u_ref
else:
print 'INSUFFICIENT NUMBER OF PERIODS'
strouhal = -100.0
if (validate):
valid = False
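# Added cross-check (illustrative, not part of the original validation case):
# estimate the shedding frequency from the spectrum of the probe signal and
# compare it with the peak-counting estimate above. Assumes the probe samples
# are uniformly spaced at the physical time step.
probe_signal = np.array(probe_data_y) - np.mean(probe_data_y)
dt_probe = probe_data_x[1] - probe_data_x[0]
spectrum = np.abs(np.fft.rfft(probe_signal))
freqs = np.fft.rfftfreq(len(probe_signal), d=dt_probe)
f_peak = freqs[np.argmax(spectrum[1:]) + 1] # skip the zero-frequency bin
strouhal_fft = f_peak*diameter/u_ref
print 'Strouhal number (FFT estimate) = %.4f'%strouhal_fft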
fig = pl.figure(figsize=(12, 8), dpi=150, facecolor='w', edgecolor='#E48B25')
fig.suptitle('2D Laminar Cylinder - Strouhal Number = ' + '%.4f'%strouhal,
fontsize=24, fontweight='normal', color = '#E48B25')
ax = fig.add_subplot(1,1,1)
ax.grid(True)
ax.set_xlabel('Time (seconds)', fontsize=18, fontweight='normal', color = '#5D5858')
ax.set_ylabel(r'$\mathbf{C_P}$' + ' at [1.07, 0.313]', fontsize=18, fontweight='normal', color = '#5D5858')
ax.set_xlim((0.0,2.0))
ax.set_ylim((-1.5,0.0))
ax.plot(probe_data_x, probe_data_y, color='r', label='Probe at [1.07, 0.313]')
ax.scatter(maxima_x, maxima_y, color='g', label='Local maxima ' + '(t > %.1f seconds)'%time_start)
legend = ax.legend(loc='best', scatterpoints=1, numpoints=1, shadow=False, fontsize=16)
legend.get_frame().set_facecolor('white')
ax.tick_params(axis='x', pad=8)
for tick in ax.xaxis.get_major_ticks():
tick.label.set_fontsize(18)
tick.label.set_fontweight('normal')
tick.label.set_color('#E48B25')
for tick in ax.yaxis.get_major_ticks():
tick.label.set_fontsize(18)
tick.label.set_fontweight('normal')
tick.label.set_color('#E48B25')
fig.savefig("images/cylinder_probe.png")
show()
from IPython.display import FileLink, display
display(FileLink('images/cylinder_probe.png'))
from zutil.post import residual_plot, get_case_report
residual_plot(get_case_report(case_name))
show()
# define function to help with validation check
def validate_data(name, value, valid_lower, valid_upper):
if ((value < valid_lower) or (value > valid_upper)):
print 'INVALID: ' + name + ' %.4f '%valid_lower + '%.4f '%value + ' %.4f'%valid_upper
return False
else:
return True
if (validate):
valid = valid and validate_data('strouhal', strouhal, valid_lower_strouhal, valid_upper_strouhal)
if (valid):
print 'VALIDATION = PASS :-)'
else:
print 'VALIDATION = FAIL :-('
if (regression):
import pandas as pd
pd.options.display.float_format = '{:,.6f}'.format
print 'REGRESSION DATA'
regress = {'version' : ['dt=0.001', 'dt=0.002', 'dt=0.005', 'v0.1 (dt=0.001)', 'v0.1 (dt=0.002)', 'CURRENT (dt=%.3f)'%time_step],
'Strouhal': [0.179974 , 0.179189 , 0.149542, 0.179974 , 0.179189, strouhal]}
regression_table = pd.DataFrame(regress, columns=['version','Strouhal'])
print regression_table
if remote_data:
print 'Disconnecting from remote paraview server connection'
Disconnect()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: zCFD Validation and Regression
Step2: Initialise Environment
Step3: Data Connection
Step4: Get control dictionary¶
Step5: Get status file
Step6: Define test conditions
Step7: Plot pressure time-history at probe point
Step8: Convergence
Step9: Validation and regression
Step10: Cleaning up
|
4,554
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install portpicker
#@title
import multiprocessing
import os
import random
import portpicker
import tensorflow as tf
def create_in_process_cluster(num_workers, num_ps):
Creates and starts local servers and returns the cluster_resolver.
worker_ports = [portpicker.pick_unused_port() for _ in range(num_workers)]
ps_ports = [portpicker.pick_unused_port() for _ in range(num_ps)]
cluster_dict = {}
cluster_dict["worker"] = ["localhost:%s" % port for port in worker_ports]
if num_ps > 0:
cluster_dict["ps"] = ["localhost:%s" % port for port in ps_ports]
cluster_spec = tf.train.ClusterSpec(cluster_dict)
# Workers need some inter_ops threads to work properly.
worker_config = tf.compat.v1.ConfigProto()
if multiprocessing.cpu_count() < num_workers + 1:
worker_config.inter_op_parallelism_threads = num_workers + 1
for i in range(num_workers):
tf.distribute.Server(
cluster_spec,
job_name="worker",
task_index=i,
config=worker_config,
protocol="grpc")
for i in range(num_ps):
tf.distribute.Server(
cluster_spec,
job_name="ps",
task_index=i,
protocol="grpc")
cluster_resolver = tf.distribute.cluster_resolver.SimpleClusterResolver(
cluster_spec, rpc_layer="grpc")
return cluster_resolver
# Set the environment variable to allow reporting worker and ps failure to the
# coordinator. This is a workaround and won't be necessary in the future.
os.environ["GRPC_FAIL_FAST"] = "use_caller"
NUM_WORKERS = 3
NUM_PS = 2
cluster_resolver = create_in_process_cluster(NUM_WORKERS, NUM_PS)
variable_partitioner = (
tf.distribute.experimental.partitioners.MinSizePartitioner(
min_shard_bytes=(256 << 10),
max_shards=NUM_PS))
strategy = tf.distribute.ParameterServerStrategy(
cluster_resolver,
variable_partitioner=variable_partitioner)
global_batch_size = 64
x = tf.random.uniform((10, 10))
y = tf.random.uniform((10,))
dataset = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(10).repeat()
dataset = dataset.batch(global_batch_size)
dataset = dataset.prefetch(2)
with strategy.scope():
model = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])
model.compile(tf.keras.optimizers.SGD(), loss="mse", steps_per_execution=10)
working_dir = "/tmp/my_working_dir"
log_dir = os.path.join(working_dir, "log")
ckpt_filepath = os.path.join(working_dir, "ckpt")
backup_dir = os.path.join(working_dir, "backup")
callbacks = [
tf.keras.callbacks.TensorBoard(log_dir=log_dir),
tf.keras.callbacks.ModelCheckpoint(filepath=ckpt_filepath),
tf.keras.callbacks.BackupAndRestore(backup_dir=backup_dir),
]
model.fit(dataset, epochs=5, steps_per_epoch=20, callbacks=callbacks)
feature_vocab = [
"avenger", "ironman", "batman", "hulk", "spiderman", "kingkong", "wonder_woman"
]
label_vocab = ["yes", "no"]
with strategy.scope():
feature_lookup_layer = tf.keras.layers.StringLookup(
vocabulary=feature_vocab,
mask_token=None)
label_lookup_layer = tf.keras.layers.StringLookup(
vocabulary=label_vocab,
num_oov_indices=0,
mask_token=None)
raw_feature_input = tf.keras.layers.Input(
shape=(3,),
dtype=tf.string,
name="feature")
feature_id_input = feature_lookup_layer(raw_feature_input)
feature_preprocess_stage = tf.keras.Model(
{"features": raw_feature_input},
feature_id_input)
raw_label_input = tf.keras.layers.Input(
shape=(1,),
dtype=tf.string,
name="label")
label_id_input = label_lookup_layer(raw_label_input)
label_preprocess_stage = tf.keras.Model(
{"label": raw_label_input},
label_id_input)
def feature_and_label_gen(num_examples=200):
examples = {"features": [], "label": []}
for _ in range(num_examples):
features = random.sample(feature_vocab, 3)
label = ["yes"] if "avenger" in features else ["no"]
examples["features"].append(features)
examples["label"].append(label)
return examples
examples = feature_and_label_gen()
def dataset_fn(_):
raw_dataset = tf.data.Dataset.from_tensor_slices(examples)
train_dataset = raw_dataset.map(
lambda x: (
{"features": feature_preprocess_stage(x["features"])},
label_preprocess_stage(x["label"])
)).shuffle(200).batch(32).repeat()
return train_dataset
# These variables created under the `Strategy.scope` will be placed on parameter
# servers in a round-robin fashion.
with strategy.scope():
# Create the model. The input needs to be compatible with Keras processing layers.
model_input = tf.keras.layers.Input(
shape=(3,), dtype=tf.int64, name="model_input")
emb_layer = tf.keras.layers.Embedding(
input_dim=len(feature_lookup_layer.get_vocabulary()), output_dim=16384)
emb_output = tf.reduce_mean(emb_layer(model_input), axis=1)
dense_output = tf.keras.layers.Dense(units=1, activation="sigmoid")(emb_output)
model = tf.keras.Model({"features": model_input}, dense_output)
optimizer = tf.keras.optimizers.RMSprop(learning_rate=0.1)
accuracy = tf.keras.metrics.Accuracy()
assert len(emb_layer.weights) == 2
assert emb_layer.weights[0].shape == (4, 16384)
assert emb_layer.weights[1].shape == (4, 16384)
print(emb_layer.weights[0].device)
print(emb_layer.weights[1].device)
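# Added, illustrative check of the round-robin placement mentioned above: the small
# Dense layer is below min_shard_bytes, so its kernel and bias are not sharded and
# each should live on a single parameter server.
dense_layer = [l for l in model.layers if isinstance(l, tf.keras.layers.Dense)][0]
print(dense_layer.kernel.device)
print(dense_layer.bias.device)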
@tf.function
def step_fn(iterator):
def replica_fn(batch_data, labels):
with tf.GradientTape() as tape:
pred = model(batch_data, training=True)
per_example_loss = tf.keras.losses.BinaryCrossentropy(
reduction=tf.keras.losses.Reduction.NONE)(labels, pred)
loss = tf.nn.compute_average_loss(per_example_loss)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
actual_pred = tf.cast(tf.greater(pred, 0.5), tf.int64)
accuracy.update_state(labels, actual_pred)
return loss
batch_data, labels = next(iterator)
losses = strategy.run(replica_fn, args=(batch_data, labels))
return strategy.reduce(tf.distribute.ReduceOp.SUM, losses, axis=None)
coordinator = tf.distribute.coordinator.ClusterCoordinator(strategy)
@tf.function
def per_worker_dataset_fn():
return strategy.distribute_datasets_from_function(dataset_fn)
per_worker_dataset = coordinator.create_per_worker_dataset(per_worker_dataset_fn)
per_worker_iterator = iter(per_worker_dataset)
num_epochs = 4
steps_per_epoch = 5
for i in range(num_epochs):
accuracy.reset_states()
for _ in range(steps_per_epoch):
coordinator.schedule(step_fn, args=(per_worker_iterator,))
# Wait at epoch boundaries.
coordinator.join()
print("Finished epoch %d, accuracy is %f." % (i, accuracy.result().numpy()))
loss = coordinator.schedule(step_fn, args=(per_worker_iterator,))
print("Final loss is %f" % loss.fetch())
eval_dataset = tf.data.Dataset.from_tensor_slices(
feature_and_label_gen(num_examples=16)).map(
lambda x: (
{"features": feature_preprocess_stage(x["features"])},
label_preprocess_stage(x["label"])
)).batch(8)
eval_accuracy = tf.keras.metrics.Accuracy()
for batch_data, labels in eval_dataset:
pred = model(batch_data, training=False)
actual_pred = tf.cast(tf.greater(pred, 0.5), tf.int64)
eval_accuracy.update_state(labels, actual_pred)
print("Evaluation accuracy: %f" % eval_accuracy.result())
with strategy.scope():
# Define the eval metric on parameter servers.
eval_accuracy = tf.keras.metrics.Accuracy()
@tf.function
def eval_step(iterator):
def replica_fn(batch_data, labels):
pred = model(batch_data, training=False)
actual_pred = tf.cast(tf.greater(pred, 0.5), tf.int64)
eval_accuracy.update_state(labels, actual_pred)
batch_data, labels = next(iterator)
strategy.run(replica_fn, args=(batch_data, labels))
def eval_dataset_fn():
return tf.data.Dataset.from_tensor_slices(
feature_and_label_gen(num_examples=16)).map(
lambda x: (
{"features": feature_preprocess_stage(x["features"])},
label_preprocess_stage(x["label"])
)).shuffle(16).repeat().batch(8)
per_worker_eval_dataset = coordinator.create_per_worker_dataset(eval_dataset_fn)
per_worker_eval_iterator = iter(per_worker_eval_dataset)
eval_steps_per_epoch = 2
for _ in range(eval_steps_per_epoch):
coordinator.schedule(eval_step, args=(per_worker_eval_iterator,))
coordinator.join()
print("Evaluation accuracy: %f" % eval_accuracy.result())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Parameter server training with ParameterServerStrategy
Step3: Cluster setup
Step4: The in-process cluster setup is frequently used in unit testing, such as here.
Step5: In order to use GPUs for training, allocate GPUs visible to each worker. ParameterServerStrategy will use all the available GPUs on each worker, with the restriction that all workers should have the same number of GPUs available.
Step6: If you instead create your dataset with tf.keras.utils.experimental.DatasetCreator, the code in dataset_fn will be invoked on the input device, which is usually the CPU, on each of the worker machines.
Step7: Callbacks and training
Step8: Direct usage with ClusterCoordinator (optional)
Step9: Generate toy examples in a dataset
Step10: Then, create the training dataset wrapped in a dataset_fn
Step11: Build the model
Step12: Let's confirm that the MinSizePartitioner split the embedding variable into two shards and that each shard was assigned to a different parameter server
Step13: Define the training step
Step14: In the above training step function, calling Strategy.run and Strategy.reduce in the step_fn can support multiple GPUs per worker. If the workers have GPUs allocated, Strategy.run will distribute the datasets on multiple replicas.
Step15: Then, create a per-worker dataset and an iterator using the ClusterCoordinator.create_per_worker_dataset API, which replicates the dataset to all workers. In the per_worker_dataset_fn below, wrapping the dataset_fn into strategy.distribute_datasets_from_function is recommended to allow efficient prefetching to GPUs seamlessly.
Step16: The final step is to distribute the computation to remote workers using ClusterCoordinator.schedule
Step17: Here is how you can fetch the result of a RemoteValue
Step18: Alternatively, you can launch all steps and do something while waiting for
Step19: Distributed evaluation
|
4,555
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import tensorflow as tf
import numpy as np
from datetime import date
date.today()
author = "kyubyong. https://github.com/Kyubyong/tensorflow-exercises"
tf.__version__
np.__version__
sess = tf.InteractiveSession()
_x = np.array([1, 2, 3, 4])
x = tf.convert_to_tensor(_x)
out = tf.diag(x)
print(out.eval())
_out = np.diag(_x)
assert np.array_equal(out.eval(), _out)
_X = np.array(
[[1, 0, 0, 0],
[0, 2, 0, 0],
[0, 0, 3, 0],
[0, 0, 0, 4]])
X = tf.convert_to_tensor(_X)
out = tf.diag_part(X)
print(out.eval())
_out = np.diag(_X)
assert np.array_equal(out.eval(), _out)
# Did you notice that np.diag() works like tf.diag or tf.diag_part
# according to the rank/dimensions of the input array?
_X = np.random.rand(2,3,4)
X = tf.convert_to_tensor(_X)
out = tf.transpose(X, [1, 2, 0])
print(out.get_shape())
_out = np.transpose(_X, [1, 2, 0])
assert np.array_equal(out.eval(), _out) # tf.transpose == np.transpose
out = tf.eye(3)
print(out.eval())
assert np.array_equal(out.eval(), np.eye(3)) # tf.eye == np.eye
_X = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])
X = tf.convert_to_tensor(_X)
diagonal_tensor = tf.matrix_diag(X)
diagonal_part = tf.matrix_diag_part(diagonal_tensor)
print("diagonal_tensor =\n", diagonal_tensor.eval())
print("diagonal_part =\n", diagonal_part.eval())
_X= np.random.rand(1, 2, 3, 4)
X = tf.convert_to_tensor(_X)
out1 = tf.matrix_transpose(X)
out2 = tf.transpose(X, [0, 1, 3, 2])
print(out1.eval().shape)
assert np.array_equal(out1.eval(), out2.eval())
_X = np.array([[1, 2, 3], [4, 5, 6]])
_Y = np.array([[1, 1], [2, 2], [3, 3]])
X = tf.convert_to_tensor(_X)
Y = tf.convert_to_tensor(_Y)
out = tf.matmul(X, Y)
print(out.eval())
_out = np.dot(_X, _Y)
assert np.array_equal(out.eval(), _out) # tf.matmul(2d, 2d) == np.dot(2d, 2d)
_X = np.arange(1, 13, dtype=np.int32).reshape((2, 2, 3))
_Y = np.arange(13, 25, dtype=np.int32).reshape((2, 3, 2))
X = tf.convert_to_tensor(_X)
Y = tf.convert_to_tensor(_Y)
out = tf.matmul(X, Y)
print(out.eval())
# Note that this is not the same as the following.
#_out = np.dot(_X, _Y)
#print(_out)
# in TF 1.0, batch_matmul was absorbed in matmul.
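# Added reference check: np.matmul broadcasts over the leading batch dimension,
# so it matches tf.matmul for these 3-D inputs (unlike np.dot above).
_out = np.matmul(_X, _Y)
assert np.array_equal(out.eval(), _out)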
_X = np.arange(1, 5, dtype=np.float32).reshape((2, 2))
X = tf.convert_to_tensor(_X)
out = tf.matrix_determinant(X)
print(out.eval())
_X = np.arange(1, 5, dtype=np.float64).reshape((2, 2))
X = tf.convert_to_tensor(_X)
out = tf.matrix_inverse(X)
print(out.eval())
_out = np.linalg.inv(_X)
assert np.allclose(out.eval(), _out)
_X = np.array([[4, 12, -16], [12, 37, -43], [-16, -43, 98]], np.float32)
X = tf.convert_to_tensor(_X)
out = tf.cholesky(X)
print(out.eval())
_out = np.linalg.cholesky(_X)
assert np.array_equal(out.eval(), _out)
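# Added verification: the lower-triangular factor reconstructs X as L @ L^T.
assert np.allclose(tf.matmul(out, tf.transpose(out)).eval(), _X)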
_X = np.diag((1, 2, 3))
X = tf.convert_to_tensor(_X, tf.float32)
eigenvals, eigenvecs = tf.self_adjoint_eig(X)
print("eigentvalues =\n", eigenvals.eval())
print("eigenvectors =\n", eigenvecs.eval())
_eigenvals, _eigenvecs = np.linalg.eig(_X)
assert np.allclose(eigenvals.eval(), _eigenvals)
assert np.allclose(eigenvecs.eval(), _eigenvecs)
_X = np.array(
[[1, 0, 0, 0, 2],
[0, 0, 3, 0, 0],
[0, 0, 0, 0, 0],
[0, 2, 0, 0, 0]], dtype=np.float32)
X = tf.convert_to_tensor(_X)
out = tf.svd(X, compute_uv=False)
print(out.eval())
_out = np.linalg.svd(_X, compute_uv=False)
assert np.allclose(out.eval(), _out)
_X = np.array(
[[1, 2, 3, 4],
[5, 6, 7, 8]])
X = tf.convert_to_tensor(_X)
outs = [tf.reduce_sum(X),
tf.reduce_sum(X, axis=0),
tf.reduce_sum(X, axis=1, keep_dims=True),
"",
tf.reduce_prod(X),
tf.reduce_prod(X, axis=0),
tf.reduce_prod(X, axis=1, keep_dims=True),
"",
tf.reduce_min(X),
tf.reduce_min(X, axis=0),
tf.reduce_min(X, axis=1, keep_dims=True),
"",
tf.reduce_max(X),
tf.reduce_max(X, axis=0),
tf.reduce_max(X, axis=1, keep_dims=True),
"",
tf.reduce_mean(X),
tf.reduce_mean(X, axis=0),
tf.reduce_mean(X, axis=1, keep_dims=True)
]
for out in outs:
if out == "":
print()
else:
print("->", out.eval())
# If you remove the common suffix "reduce_", you will get the same
# result in numpy.
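# Added illustration of that correspondence for two of the cases:
assert np.array_equal(tf.reduce_sum(X, axis=0).eval(), np.sum(_X, axis=0))
assert np.array_equal(tf.reduce_max(X, axis=1).eval(), np.max(_X, axis=1))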
_X = np.array([[True, True],
[False, False]], np.bool)
X = tf.convert_to_tensor(_X)
outs = [tf.reduce_all(X),
tf.reduce_all(X, axis=0),
tf.reduce_all(X, axis=1, keep_dims=True),
"",
tf.reduce_any(X),
tf.reduce_any(X, axis=0),
tf.reduce_any(X, axis=1, keep_dims=True),
]
for out in outs:
if out == "":
print()
else:
print("->", out.eval())
# If you remove the common suffix "reduce_", you will get the same
# result in numpy.
_X = np.array([[0, 1, 0],
[1, 1, 0]])
X = tf.convert_to_tensor(_X)
outs = [tf.count_nonzero(X),
tf.count_nonzero(X, axis=0),
tf.count_nonzero(X, axis=1, keep_dims=True),
]
for out in outs:
print("->", out.eval())
# tf.count_nonzero == np.count_nonzero
_X = np.arange(1, 7).reshape((2, 3))
_Y = np.arange(1, 7).reshape((3, 2))
X = tf.convert_to_tensor(_X)
Y = tf.convert_to_tensor(_Y)
# Matrix multiplication
out1 = tf.matmul(X, Y)
out1_ = tf.einsum('ij,jk->ik', X, Y)
assert np.allclose(out1.eval(), out1_.eval())
# Dot product
flattened = tf.reshape(X, [-1])
out2 = tf.reduce_sum(flattened * flattened)
out2_ = tf.einsum('i,i->', flattened, flattened)
assert np.allclose(out2.eval(), out2_.eval())
# Outer product
expanded_a = tf.expand_dims(flattened, 1) # shape: (6, 1)
expanded_b = tf.expand_dims(flattened, 0) # shape: (1, 6)
out3 = tf.matmul(expanded_a, expanded_b)
out3_ = tf.einsum('i,j->ij', flattened, flattened)
assert np.allclose(out3.eval(), out3_.eval())
# Transpose
out4 = tf.transpose(X) # shape: (3, 2)
out4_ = tf.einsum('ij->ji', X)
assert np.allclose(out4.eval(), out4_.eval())
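# One more correspondence (added, illustrative): batch matrix multiplication,
# as used in Q8 above, can also be written with einsum.
A = tf.cast(tf.reshape(tf.range(12), (2, 2, 3)), tf.float32)
B = tf.cast(tf.reshape(tf.range(12), (2, 3, 2)), tf.float32)
out5 = tf.matmul(A, B)
out5_ = tf.einsum('bij,bjk->bik', A, B)
assert np.allclose(out5.eval(), out5_.eval())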
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: NOTE on notation
Step2: Q2. Extract the diagonal of X.
Step3: Q3. Permutate the dimensions of x such that the new tensor has shape (3, 4, 2).
Step4: Q4. Construct a 3 by 3 identity matrix.
Step5: Q5. Predict the result of this.
Step6: Q6. Transpose the last two dimensions of x.
Step7: Q7. Multiply X by Y.
Step8: Q8. Multiply X and Y. The first axis represents batches.
Step9: Q9. Compute the determinant of X.
Step10: Q10. Compute the inverse of X.
Step11: Q11. Get the lower-trianglular in the Cholesky decomposition of X.
Step12: Q12. Compute the eigenvalues and eigenvectors of X.
Step13: Q13. Compute the singular values of X.
Step14: Reduction
Step15: Q15. Predict the results of these.
Step16: Q16. Predict the results of these.
Step17: Q17. Complete the einsum function that would yield the same result as the given function.
|
4,556
|
<ASSISTANT_TASK:>
Python Code:
%%capture
!python -m pip install iree-compiler iree-runtime iree-tools-tf -f https://github.com/google/iree/releases
# Import IREE's TensorFlow Compiler and Runtime.
import iree.compiler.tf
import iree.runtime
from matplotlib import pyplot as plt
import numpy as np
import tensorflow as tf
tf.random.set_seed(91)
np.random.seed(91)
plt.style.use("seaborn-whitegrid")
plt.rcParams["font.family"] = "monospace"
plt.rcParams["figure.figsize"] = [8, 4.5]
plt.rcParams["figure.dpi"] = 150
# Print version information for future notebook users to reference.
print("TensorFlow version: ", tf.__version__)
print("Numpy version: ", np.__version__)
# Keras datasets don't provide metadata.
NUM_CLASSES = 10
NUM_ROWS, NUM_COLS = 28, 28
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# Reshape into grayscale images:
x_train = np.reshape(x_train, (-1, NUM_ROWS, NUM_COLS, 1))
x_test = np.reshape(x_test, (-1, NUM_ROWS, NUM_COLS, 1))
# Rescale uint8 pixel values into float32 values between 0 and 1:
x_train = x_train.astype(np.float32) / 255
x_test = x_test.astype(np.float32) / 255
# IREE doesn't currently support int8 tensors, so we cast them to int32:
y_train = y_train.astype(np.int32)
y_test = y_test.astype(np.int32)
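# Added sanity check: confirm the shapes and the dtypes produced by the casts above.
print("x_train:", x_train.shape, x_train.dtype, "| y_train:", y_train.shape, y_train.dtype)
print("x_test: ", x_test.shape, x_test.dtype, "| y_test: ", y_test.shape, y_test.dtype)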
print("Sample image from the dataset:")
sample_index = np.random.randint(x_train.shape[0])
plt.figure(figsize=(5, 5))
plt.imshow(x_train[sample_index].reshape(NUM_ROWS, NUM_COLS), cmap="gray")
plt.title(f"Sample #{sample_index}, label: {y_train[sample_index]}")
plt.axis("off")
plt.tight_layout()
BATCH_SIZE = 32
class TrainableDNN(tf.Module):
def __init__(self):
super().__init__()
# Create a Keras model to train.
inputs = tf.keras.layers.Input((NUM_COLS, NUM_ROWS, 1))
x = tf.keras.layers.Flatten()(inputs)
x = tf.keras.layers.Dense(128)(x)
x = tf.keras.layers.Activation("relu")(x)
x = tf.keras.layers.Dense(10)(x)
outputs = tf.keras.layers.Softmax()(x)
self.model = tf.keras.Model(inputs, outputs)
# Create a loss function and optimizer to use during training.
self.loss = tf.keras.losses.SparseCategoricalCrossentropy()
self.optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2)
@tf.function(input_signature=[
tf.TensorSpec([BATCH_SIZE, NUM_ROWS, NUM_COLS, 1]) # inputs
])
def predict(self, inputs):
return self.model(inputs, training=False)
# We compile the entire training step by making it a method on the model.
@tf.function(input_signature=[
tf.TensorSpec([BATCH_SIZE, NUM_ROWS, NUM_COLS, 1]), # inputs
tf.TensorSpec([BATCH_SIZE], tf.int32) # labels
])
def learn(self, inputs, labels):
# Capture the gradients from forward prop...
with tf.GradientTape() as tape:
probs = self.model(inputs, training=True)
loss = self.loss(labels, probs)
# ...and use them to update the model's weights.
variables = self.model.trainable_variables
gradients = tape.gradient(loss, variables)
self.optimizer.apply_gradients(zip(gradients, variables))
return loss
exported_names = ["predict", "learn"]
backend_choice = "dylib-llvm-aot (CPU)" #@param [ "vmvx (CPU)", "dylib-llvm-aot (CPU)", "vulkan-spirv (GPU/SwiftShader – requires additional drivers) " ]
backend_choice = backend_choice.split(' ')[0]
# Compile the TrainableDNN module
# Note: extra flags are needed for i64 demotion, see https://github.com/google/iree/issues/8644
vm_flatbuffer = iree.compiler.tf.compile_module(
TrainableDNN(),
target_backends=[backend_choice],
exported_names=exported_names,
extra_args=["--iree-mhlo-demote-i64-to-i32=false",
"--iree-flow-demote-i64-to-i32"])
compiled_model = iree.runtime.load_vm_flatbuffer(
vm_flatbuffer,
backend=backend_choice)
#@title Benchmark inference and training
print("Inference latency:\n ", end="")
%timeit -n 100 compiled_model.predict(x_train[:BATCH_SIZE])
print("Training latancy:\n ", end="")
%timeit -n 100 compiled_model.learn(x_train[:BATCH_SIZE], y_train[:BATCH_SIZE])
# Run the core training loop.
losses = []
step = 0
max_steps = x_train.shape[0] // BATCH_SIZE
for batch_start in range(0, x_train.shape[0], BATCH_SIZE):
if batch_start + BATCH_SIZE > x_train.shape[0]:
continue
inputs = x_train[batch_start:batch_start + BATCH_SIZE]
labels = y_train[batch_start:batch_start + BATCH_SIZE]
loss = compiled_model.learn(inputs, labels).to_host()
losses.append(loss)
step += 1
print(f"\rStep {step:4d}/{max_steps}: loss = {loss:.4f}", end="")
#@title Plot the training results
import bottleneck as bn
smoothed_losses = bn.move_mean(losses, 32)
x = np.arange(len(losses))
plt.plot(x, smoothed_losses, linewidth=2, label='loss (moving average)')
plt.scatter(x, losses, s=16, alpha=0.2, label='loss (per training step)')
plt.ylim(0)
plt.legend(frameon=True)
plt.xlabel("training step")
plt.ylabel("cross-entropy")
plt.title("training loss");
#@title Evaluate the network on the test data.
accuracies = []
step = 0
max_steps = x_test.shape[0] // BATCH_SIZE
for batch_start in range(0, x_test.shape[0], BATCH_SIZE):
if batch_start + BATCH_SIZE > x_test.shape[0]:
continue
inputs = x_test[batch_start:batch_start + BATCH_SIZE]
labels = y_test[batch_start:batch_start + BATCH_SIZE]
prediction = compiled_model.predict(inputs).to_host()
prediction = np.argmax(prediction, -1)
accuracies.append(np.sum(prediction == labels) / BATCH_SIZE)
step += 1
print(f"\rStep {step:4d}/{max_steps}", end="")
print()
accuracy = np.mean(accuracies)
print(f"Test accuracy: {accuracy:.3f}")
#@title Display inference predictions on a random selection of heldout data
rows = 4
columns = 4
images_to_display = rows * columns
assert BATCH_SIZE >= images_to_display
random_index = np.arange(x_test.shape[0])
np.random.shuffle(random_index)
x_test = x_test[random_index]
y_test = y_test[random_index]
predictions = compiled_model.predict(x_test[:BATCH_SIZE]).to_host()
predictions = np.argmax(predictions, -1)
fig, axs = plt.subplots(rows, columns)
for i, ax in enumerate(np.ndarray.flatten(axs)):
ax.imshow(x_test[i, :, :, 0])
color = "#000000" if predictions[i] == y_test[i] else "#ff7f0e"
ax.set_xlabel(f"prediction={predictions[i]}", color=color)
ax.grid(False)
ax.set_yticks([])
ax.set_xticks([])
fig.tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Import TensorFlow and Other Dependencies
Step2: 3. Load the MNIST Dataset
Step3: 4. Create a Simple DNN
Step4: 5. Compile the Model with IREE
Step5: Choose one of IREE's three backends to compile to. (Note
Step6: 6. Train the Compiled Model on MNIST
Step7: 7. Evaluate on Heldout Test Examples
|
4,557
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from bigbang.archive import Archive
import bigbang.parse as parse
import bigbang.analysis.graph as graph
import bigbang.ingress.mailman as mailman
import bigbang.analysis.process as process
import networkx as nx
import matplotlib.pyplot as plt
import pandas as pd
from pprint import pprint as pp
import pytz
urls = ["ipython-dev","hot",
"scipy-user",
"https://lists.wikimedia.org/pipermail/gendergap/",
"ipython-user"]
archives= [Archive(url,archive_dir="../archives") for url in urls]
date_from = pd.datetime(2011,11,1,tzinfo=pytz.utc)
date_to = pd.datetime(2011,12,1,tzinfo=pytz.utc)
def filter_by_date(df,d_from,d_to):
return df[(df['Date'] > d_from) & (df['Date'] < d_to)]
def draw_interaction_graph(ig):
#pdig = nx.to_pydot(ig)
#pdig.set_overlap('False')
pos = nx.graphviz_layout(ig,prog='neato')
node_size = [data['sent'] * 40 for name,data in ig.nodes(data=True)]
nx.draw(ig,
pos,
node_size = node_size,
node_color = 'w',
alpha = 0.4,
font_size=18,
font_weight='bold'
)
# edge width is proportional to replies sent
edgewidth=[d['weight'] for (u,v,d) in ig.edges(data=True)]
#overlay edges with width based on weight
nx.draw_networkx_edges(ig,pos,alpha=0.5,width=edgewidth,edge_color='r')
plt.figure(230,figsize=(12.5, 7.5))
for i,arx in enumerate(archives):
plt.subplot(230 + i) # create a subplot keyed to the index of this ml
df = arx.data
dff = filter_by_date(df,date_from,date_to)
ig = graph.messages_to_interaction_graph(dff)
print(urls[i])
print(nx.degree_assortativity_coefficient(ig))
draw_interaction_graph(ig)
plt.show()
dfs = [filter_by_date(arx.data,
date_from,
date_to) for arx in archives]
bdf = pd.concat(dfs)
#RG = graph.messages_to_reply_graph(messages)
IG = graph.messages_to_interaction_graph(bdf)
pdig = nx.to_pydot(IG)
pdig.set_overlap('False')
plt.figure(figsize=(12.5,7.5))
draw_interaction_graph(IG)
plt.show()
nx.write_edgelist(IG, "ig-edges.txt",delimiter="\t")
nx.write_gexf(IG,"all.gexf")
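# Added, illustrative: a few summary statistics of the combined interaction graph.
print(IG.number_of_nodes(), "participants and", IG.number_of_edges(), "interaction edges")
print("graph density:", nx.density(IG))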
arx = archives[0]
arx.data
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next we'll import dependencies.
Step2: Now we will use BigBang to process mailing list archives we've already downloaded.
Step3: Here we will set the window for analysis. By default, November 2011.
Step4: This is a helper function to select messages from a dataframe that fall within a certain range of dates.
Step5: A function for drawing interaction graphs.
Step6: Now we'll use BigBang's graph processing methods to turn the processed messages into a graph of interactions.
Step7: Lastly, we use NetworkX's built in compatibility with Matplotlib to visualize the graph.
|
4,558
|
<ASSISTANT_TASK:>
Python Code:
# Import necessary packages
import tensorflow as tf
import tqdm
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Import MNIST data so we have something for our experiments
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
class NeuralNet:
def __init__(self, initial_weights, activation_fn, use_batch_norm):
Initializes this object, creating a TensorFlow graph using the given parameters.
:param initial_weights: list of NumPy arrays or Tensors
Initial values for the weights for every layer in the network. We pass these in
so we can create multiple networks with the same starting weights to eliminate
training differences caused by random initialization differences.
The number of items in the list defines the number of layers in the network,
and the shapes of the items in the list define the number of nodes in each layer.
e.g. Passing in 3 matrices of shape (784, 256), (256, 100), and (100, 10) would
create a network with 784 inputs going into a hidden layer with 256 nodes,
followed by a hidden layer with 100 nodes, followed by an output layer with 10 nodes.
:param activation_fn: Callable
The function used for the output of each hidden layer. The network will use the same
            activation function on every hidden layer and no activation function on the output layer.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
:param use_batch_norm: bool
Pass True to create a network that uses batch normalization; False otherwise
Note: this network will not use batch normalization on layers that do not have an
activation function.
# Keep track of whether or not this network uses batch normalization.
self.use_batch_norm = use_batch_norm
self.name = "With Batch Norm" if use_batch_norm else "Without Batch Norm"
# Batch normalization needs to do different calculations during training and inference,
# so we use this placeholder to tell the graph which behavior to use.
self.is_training = tf.placeholder(tf.bool, name="is_training")
# This list is just for keeping track of data we want to plot later.
# It doesn't actually have anything to do with neural nets or batch normalization.
self.training_accuracies = []
# Create the network graph, but it will not actually have any real values until after you
# call train or test
self.build_network(initial_weights, activation_fn)
def build_network(self, initial_weights, activation_fn):
Build the graph. The graph still needs to be trained via the `train` method.
:param initial_weights: list of NumPy arrays or Tensors
See __init__ for description.
:param activation_fn: Callable
See __init__ for description.
self.input_layer = tf.placeholder(tf.float32, [None, initial_weights[0].shape[0]])
layer_in = self.input_layer
for weights in initial_weights[:-1]:
layer_in = self.fully_connected(layer_in, weights, activation_fn)
self.output_layer = self.fully_connected(layer_in, initial_weights[-1])
def fully_connected(self, layer_in, initial_weights, activation_fn=None):
Creates a standard, fully connected layer. Its number of inputs and outputs will be
defined by the shape of `initial_weights`, and its starting weight values will be
taken directly from that same parameter. If `self.use_batch_norm` is True, this
layer will include batch normalization, otherwise it will not.
:param layer_in: Tensor
The Tensor that feeds into this layer. It's either the input to the network or the output
of a previous layer.
:param initial_weights: NumPy array or Tensor
Initial values for this layer's weights. The shape defines the number of nodes in the layer.
            e.g. Passing in a matrix of shape (784, 256) would create a layer with 784 inputs and 256
outputs.
:param activation_fn: Callable or None (default None)
The non-linearity used for the output of the layer. If None, this layer will not include
batch normalization, regardless of the value of `self.use_batch_norm`.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
# Since this class supports both options, only use batch normalization when
# requested. However, do not use it on the final layer, which we identify
# by its lack of an activation function.
if self.use_batch_norm and activation_fn:
# Batch normalization uses weights as usual, but does NOT add a bias term. This is because
# its calculations include gamma and beta variables that make the bias term unnecessary.
# (See later in the notebook for more details.)
weights = tf.Variable(initial_weights)
linear_output = tf.matmul(layer_in, weights)
# Apply batch normalization to the linear combination of the inputs and weights
batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)
# Now apply the activation function, *after* the normalization.
return activation_fn(batch_normalized_output)
else:
# When not using batch normalization, create a standard layer that multiplies
# the inputs and weights, adds a bias, and optionally passes the result
# through an activation function.
weights = tf.Variable(initial_weights)
biases = tf.Variable(tf.zeros([initial_weights.shape[-1]]))
linear_output = tf.add(tf.matmul(layer_in, weights), biases)
return linear_output if not activation_fn else activation_fn(linear_output)
def train(self, session, learning_rate, training_batches, batches_per_sample, save_model_as=None):
Trains the model on the MNIST training dataset.
:param session: Session
Used to run training graph operations.
:param learning_rate: float
Learning rate used during gradient descent.
:param training_batches: int
Number of batches to train.
:param batches_per_sample: int
How many batches to train before sampling the validation accuracy.
:param save_model_as: string or None (default None)
Name to use if you want to save the trained model.
# This placeholder will store the target labels for each mini batch
labels = tf.placeholder(tf.float32, [None, 10])
# Define loss and optimizer
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=self.output_layer))
# Define operations for testing
correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
if self.use_batch_norm:
# If we don't include the update ops as dependencies on the train step, the
# tf.layers.batch_normalization layers won't update their population statistics,
# which will cause the model to fail at inference time
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
else:
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
# Train for the appropriate number of batches. (tqdm is only for a nice timing display)
for i in tqdm.tqdm(range(training_batches)):
# We use batches of 60 just because the original paper did. You can use any size batch you like.
batch_xs, batch_ys = mnist.train.next_batch(60)
session.run(train_step, feed_dict={self.input_layer: batch_xs,
labels: batch_ys,
self.is_training: True})
# Periodically test accuracy against the 5k validation images and store it for plotting later.
if i % batches_per_sample == 0:
test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,
labels: mnist.validation.labels,
self.is_training: False})
self.training_accuracies.append(test_accuracy)
        # After training, report final accuracy against the validation data
test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,
labels: mnist.validation.labels,
self.is_training: False})
print('{}: After training, final accuracy on validation set = {}'.format(self.name, test_accuracy))
# If you want to use this model later for inference instead of having to retrain it,
# just construct it with the same parameters and then pass this file to the 'test' function
if save_model_as:
tf.train.Saver().save(session, save_model_as)
def test(self, session, test_training_accuracy=False, include_individual_predictions=False, restore_from=None):
        Tests a trained model on the MNIST testing dataset.
:param session: Session
Used to run the testing graph operations.
:param test_training_accuracy: bool (default False)
If True, perform inference with batch normalization using batch mean and variance;
if False, perform inference with batch normalization using estimated population mean and variance.
Note: in real life, *always* perform inference using the population mean and variance.
This parameter exists just to support demonstrating what happens if you don't.
        :param include_individual_predictions: bool (default False)
This function always performs an accuracy test against the entire test set. But if this parameter
is True, it performs an extra test, doing 200 predictions one at a time, and displays the results
and accuracy.
:param restore_from: string or None (default None)
Name of a saved model if you want to test with previously saved weights.
# This placeholder will store the true labels for each mini batch
labels = tf.placeholder(tf.float32, [None, 10])
# Define operations for testing
correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# If provided, restore from a previously saved model
if restore_from:
tf.train.Saver().restore(session, restore_from)
# Test against all of the MNIST test data
test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.test.images,
labels: mnist.test.labels,
self.is_training: test_training_accuracy})
print('-'*75)
print('{}: Accuracy on full test set = {}'.format(self.name, test_accuracy))
# If requested, perform tests predicting individual values rather than batches
if include_individual_predictions:
predictions = []
correct = 0
# Do 200 predictions, 1 at a time
for i in range(200):
# This is a normal prediction using an individual test case. However, notice
# we pass `test_training_accuracy` to `feed_dict` as the value for `self.is_training`.
# Remember that will tell it whether it should use the batch mean & variance or
                # the population estimates that were calculated while training the model.
pred, corr = session.run([tf.arg_max(self.output_layer,1), accuracy],
feed_dict={self.input_layer: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
self.is_training: test_training_accuracy})
correct += corr
predictions.append(pred[0])
print("200 Predictions:", predictions)
print("Accuracy on 200 samples:", correct/200)
def plot_training_accuracies(*args, **kwargs):
Displays a plot of the accuracies calculated during training to demonstrate
how many iterations it took for the model(s) to converge.
:param args: One or more NeuralNet objects
You can supply any number of NeuralNet objects as unnamed arguments
and this will display their training accuracies. Be sure to call `train`
the NeuralNets before calling this function.
:param kwargs:
You can supply any named parameters here, but `batches_per_sample` is the only
one we look for. It should match the `batches_per_sample` value you passed
to the `train` function.
fig, ax = plt.subplots()
batches_per_sample = kwargs['batches_per_sample']
for nn in args:
ax.plot(range(0,len(nn.training_accuracies)*batches_per_sample,batches_per_sample),
nn.training_accuracies, label=nn.name)
ax.set_xlabel('Training steps')
ax.set_ylabel('Accuracy')
ax.set_title('Validation Accuracy During Training')
ax.legend(loc=4)
ax.set_ylim([0,1])
plt.yticks(np.arange(0, 1.1, 0.1))
plt.grid(True)
plt.show()
def train_and_test(use_bad_weights, learning_rate, activation_fn, training_batches=50000, batches_per_sample=500):
Creates two networks, one with and one without batch normalization, then trains them
with identical starting weights, layers, batches, etc. Finally tests and plots their accuracies.
:param use_bad_weights: bool
If True, initialize the weights of both networks to wildly inappropriate weights;
if False, use reasonable starting weights.
:param learning_rate: float
Learning rate used during gradient descent.
:param activation_fn: Callable
The function used for the output of each hidden layer. The network will use the same
activation function on every hidden layer and no activate function on the output layer.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
:param training_batches: (default 50000)
Number of batches to train.
:param batches_per_sample: (default 500)
How many batches to train before sampling the validation accuracy.
# Use identical starting weights for each network to eliminate differences in
# weight initialization as a cause for differences seen in training performance
#
# Note: The networks will use these weights to define the number of and shapes of
# its layers. The original batch normalization paper used 3 hidden layers
# with 100 nodes in each, followed by a 10 node output layer. These values
# build such a network, but feel free to experiment with different choices.
# However, the input size should always be 784 and the final output should be 10.
if use_bad_weights:
# These weights should be horrible because they have such a large standard deviation
weights = [np.random.normal(size=(784,100), scale=5.0).astype(np.float32),
np.random.normal(size=(100,100), scale=5.0).astype(np.float32),
np.random.normal(size=(100,100), scale=5.0).astype(np.float32),
np.random.normal(size=(100,10), scale=5.0).astype(np.float32)
]
else:
# These weights should be good because they have such a small standard deviation
weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,10), scale=0.05).astype(np.float32)
]
# Just to make sure the TensorFlow's default graph is empty before we start another
# test, because we don't bother using different graphs or scoping and naming
# elements carefully in this sample code.
tf.reset_default_graph()
# build two versions of same network, 1 without and 1 with batch normalization
nn = NeuralNet(weights, activation_fn, False)
bn = NeuralNet(weights, activation_fn, True)
# train and test the two models
with tf.Session() as sess:
tf.global_variables_initializer().run()
nn.train(sess, learning_rate, training_batches, batches_per_sample)
bn.train(sess, learning_rate, training_batches, batches_per_sample)
nn.test(sess)
bn.test(sess)
# Display a graph of how validation accuracies changed during training
# so we can compare how the models trained and when they converged
plot_training_accuracies(nn, bn, batches_per_sample=batches_per_sample)
train_and_test(False, 0.01, tf.nn.relu)
train_and_test(False, 0.01, tf.nn.relu, 2000, 50)
train_and_test(False, 0.01, tf.nn.sigmoid)
train_and_test(False, 1, tf.nn.relu)
train_and_test(False, 1, tf.nn.relu)
train_and_test(False, 1, tf.nn.sigmoid)
train_and_test(False, 1, tf.nn.sigmoid, 2000, 50)
train_and_test(False, 2, tf.nn.relu)
train_and_test(False, 2, tf.nn.sigmoid)
train_and_test(False, 2, tf.nn.sigmoid, 2000, 50)
train_and_test(True, 0.01, tf.nn.relu)
train_and_test(True, 0.01, tf.nn.sigmoid)
train_and_test(True, 1, tf.nn.relu)
train_and_test(True, 1, tf.nn.sigmoid)
train_and_test(True, 2, tf.nn.relu)
train_and_test(True, 2, tf.nn.sigmoid)
train_and_test(True, 1, tf.nn.relu)
train_and_test(True, 2, tf.nn.relu)
def fully_connected(self, layer_in, initial_weights, activation_fn=None):
Creates a standard, fully connected layer. Its number of inputs and outputs will be
defined by the shape of `initial_weights`, and its starting weight values will be
taken directly from that same parameter. If `self.use_batch_norm` is True, this
layer will include batch normalization, otherwise it will not.
:param layer_in: Tensor
The Tensor that feeds into this layer. It's either the input to the network or the output
of a previous layer.
:param initial_weights: NumPy array or Tensor
Initial values for this layer's weights. The shape defines the number of nodes in the layer.
e.g. Passing in a matrix of shape (784, 256) would create a layer with 784 inputs and 256
outputs.
:param activation_fn: Callable or None (default None)
The non-linearity used for the output of the layer. If None, this layer will not include
batch normalization, regardless of the value of `self.use_batch_norm`.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
if self.use_batch_norm and activation_fn:
# Batch normalization uses weights as usual, but does NOT add a bias term. This is because
# its calculations include gamma and beta variables that make the bias term unnecessary.
weights = tf.Variable(initial_weights)
linear_output = tf.matmul(layer_in, weights)
num_out_nodes = initial_weights.shape[-1]
# Batch normalization adds additional trainable variables:
# gamma (for scaling) and beta (for shifting).
gamma = tf.Variable(tf.ones([num_out_nodes]))
beta = tf.Variable(tf.zeros([num_out_nodes]))
# These variables will store the mean and variance for this layer over the entire training set,
# which we assume represents the general population distribution.
# By setting `trainable=False`, we tell TensorFlow not to modify these variables during
# back propagation. Instead, we will assign values to these variables ourselves.
pop_mean = tf.Variable(tf.zeros([num_out_nodes]), trainable=False)
pop_variance = tf.Variable(tf.ones([num_out_nodes]), trainable=False)
# Batch normalization requires a small constant epsilon, used to ensure we don't divide by zero.
# This is the default value TensorFlow uses.
epsilon = 1e-3
def batch_norm_training():
# Calculate the mean and variance for the data coming out of this layer's linear-combination step.
# The [0] defines an array of axes to calculate over.
batch_mean, batch_variance = tf.nn.moments(linear_output, [0])
# Calculate a moving average of the training data's mean and variance while training.
# These will be used during inference.
# Decay should be some number less than 1. tf.layers.batch_normalization uses the parameter
# "momentum" to accomplish this and defaults it to 0.99
decay = 0.99
train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
# The 'tf.control_dependencies' context tells TensorFlow it must calculate 'train_mean'
# and 'train_variance' before it calculates the 'tf.nn.batch_normalization' layer.
# This is necessary because those two operations are not actually in the graph
# connecting the linear_output and batch_normalization layers,
# so TensorFlow would otherwise just skip them.
with tf.control_dependencies([train_mean, train_variance]):
return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon)
def batch_norm_inference():
# During inference, use our estimated population mean and variance to normalize the layer
return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon)
# Use `tf.cond` as a sort of if-check. When self.is_training is True, TensorFlow will execute
# the operation returned from `batch_norm_training`; otherwise it will execute the graph
# operation returned from `batch_norm_inference`.
batch_normalized_output = tf.cond(self.is_training, batch_norm_training, batch_norm_inference)
# Pass the batch-normalized layer output through the activation function.
# The literature states there may be cases where you want to perform the batch normalization *after*
# the activation function, but it is difficult to find any uses of that in practice.
return activation_fn(batch_normalized_output)
else:
# When not using batch normalization, create a standard layer that multiplies
# the inputs and weights, adds a bias, and optionally passes the result
# through an activation function.
weights = tf.Variable(initial_weights)
biases = tf.Variable(tf.zeros([initial_weights.shape[-1]]))
linear_output = tf.add(tf.matmul(layer_in, weights), biases)
return linear_output if not activation_fn else activation_fn(linear_output)
def batch_norm_test(test_training_accuracy):
:param test_training_accuracy: bool
If True, perform inference with batch normalization using batch mean and variance;
if False, perform inference with batch normalization using estimated population mean and variance.
weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,10), scale=0.05).astype(np.float32)
]
tf.reset_default_graph()
# Train the model
bn = NeuralNet(weights, tf.nn.relu, True)
# First train the network
with tf.Session() as sess:
tf.global_variables_initializer().run()
bn.train(sess, 0.01, 2000, 2000)
bn.test(sess, test_training_accuracy=test_training_accuracy, include_individual_predictions=True)
batch_norm_test(True)
batch_norm_test(False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step6: Neural network classes for testing
Step9: There are quite a few comments in the code, so those should answer most of your questions. However, let's take a look at the most important lines.
Step10: Comparisons between identical networks, with and without batch normalization
Step11: As expected, both networks train well and eventually reach similar test accuracies. However, notice that the model with batch normalization converges slightly faster than the other network, reaching accuracies over 90% almost immediately and nearing its max accuracy in 10 or 15 thousand iterations. The other network takes about 3 thousand iterations to reach 90% and doesn't near its best accuracy until 30 thousand or more iterations.
Step12: As you can see, using batch normalization produces a model with over 95% accuracy in only 2000 batches, and it was above 90% at somewhere around 500 batches. Without batch normalization, the model takes 1750 iterations just to hit 80% – the network with batch normalization hits that mark after around 200 iterations! (Note
Step13: With the number of layers we're using and this small learning rate, using a sigmoid activation function takes a long time to start learning. It eventually starts making progress, but it took over 45 thousand batches just to get over 80% accuracy. Using batch normalization gets to 90% in around one thousand batches.
Step14: Now we're using ReLUs again, but with a larger learning rate. The plot shows how training started out pretty normally, with the network with batch normalization starting out faster than the other. But the higher learning rate bounces the accuracy around a bit more, and at some point the accuracy in the network without batch normalization just completely crashes. It's likely that too many ReLUs died off at this point because of the high learning rate.
Step15: In both of the previous examples, the network with batch normalization manages to gets over 98% accuracy, and get near that result almost immediately. The higher learning rate allows the network to train extremely fast.
Step16: In this example, we switched to a sigmoid activation function. It appears to handle the higher learning rate well, with both networks achieving high accuracy.
Step17: As you can see, even though these parameters work well for both networks, the one with batch normalization gets over 90% in 400 or so batches, whereas the other takes over 1700. When training larger networks, these sorts of differences become more pronounced.
Step18: With this very large learning rate, the network with batch normalization trains fine and almost immediately manages 98% accuracy. However, the network without normalization doesn't learn at all.
Step19: Once again, using a sigmoid activation function with the larger learning rate works well both with and without batch normalization.
Step20: In the rest of the examples, we use really bad starting weights. That is, normally we would use very small values close to zero. However, in these examples we choose random values with a standard deviation of 5. If you were really training a neural network, you would not want to do this. But these examples demonstrate how batch normalization makes your network much more resilient.
Step21: As the plot shows, without batch normalization the network never learns anything at all. But with batch normalization, it actually learns pretty well and gets to almost 80% accuracy. The starting weights obviously hurt the network, but you can see how well batch normalization does in overcoming them.
Step22: Using a sigmoid activation function works better than the ReLU in the previous example, but without batch normalization it would take a tremendously long time to train the network, if it ever trained at all.
Step23: The higher learning rate used here allows the network with batch normalization to surpass 90% in about 30 thousand batches. The network without it never gets anywhere.
Step24: Using sigmoid works better than ReLUs for this higher learning rate. However, you can see that without batch normalization, the network takes a long time to train, bounces around a lot, and spends a long time stuck at 90%. The network with batch normalization trains much more quickly, seems to be more stable, and achieves a higher accuracy.
Step25: We've already seen that ReLUs do not do as well as sigmoids with higher learning rates, and here we are using an extremely high rate. As expected, without batch normalization the network doesn't learn at all. But with batch normalization, it eventually achieves 90% accuracy. Notice, though, how its accuracy bounces around wildly during training - that's because the learning rate is really much too high, so the fact that this worked at all is a bit of luck.
Step26: In this case, the network with batch normalization trained faster and reached a higher accuracy. Meanwhile, the high learning rate makes the network without normalization bounce around erratically and have trouble getting past 90%.
Step27: When we used these same parameters earlier, we saw the network with batch normalization reach 92% validation accuracy. This time we used different starting weights, initialized using the same standard deviation as before, and the network doesn't learn at all. (Remember, an accuracy around 10% is what the network gets if it just guesses the same value all the time.)
Step29: When we trained with these parameters and batch normalization earlier, we reached 90% validation accuracy. However, this time the network almost starts to make some progress in the beginning, but it quickly breaks down and stops learning.
Step31: This version of fully_connected is much longer than the original, but once again has extensive comments to help you understand it. Here are some important points
Step32: In the following cell, we pass True for test_training_accuracy, which performs the same batch normalization that we normally perform during training.
Step33: As you can see, the network guessed the same value every time! But why? Because during training, a network with batch normalization adjusts the values at each layer based on the mean and variance of that batch. The "batches" we are using for these predictions have a single input each time, so their values are the means, and their variances will always be 0. That means the network will normalize the values at any layer to zero. (Review the equations from before to see why a value that is equal to the mean would always normalize to zero.) So we end up with the same result for every input we give the network, because it's the value the network produces when it applies its learned weights to zeros at every layer.
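A minimal numeric sketch of that collapse (this snippet is an added illustration, not part of the original notebook): for a batch containing a single activation, the batch mean equals the activation itself and the batch variance is 0, so batch normalization maps every possible input to (roughly) zero.
import numpy as np
single_sample_batch = np.array([7.3])        # any single activation value
batch_mean = single_sample_batch.mean()      # equals the value itself
batch_var = single_sample_batch.var()        # always 0 for a one-element batch
epsilon = 1e-3                               # same small constant used in the layer above
print((single_sample_batch - batch_mean) / np.sqrt(batch_var + epsilon))  # prints [ 0.] for any input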
|
4,559
|
<ASSISTANT_TASK:>
Python Code:
# Load Biospytial modules and etc.
%matplotlib inline
import sys
sys.path.append('/apps')
import django
django.setup()
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
## Use the ggplot style
plt.style.use('ggplot')
from external_plugins.spystats import tools
%run ../testvariogram.py
section.shape
minx,maxx,miny,maxy = getExtent(new_data)
maxy
## Let's build the partition
N = 6
xp,dx = np.linspace(minx,maxx,N,retstep=True)
yp,dy = np.linspace(miny,maxy,N,retstep=True)
dx
xx,yy = np.meshgrid(xp,yp)
coordinates_list = [ (xx[i][j],yy[i][j]) for i in range(N) for j in range(N)]
from functools import partial
tuples = map(lambda (x,y) : partial(getExtentFromPoint,x,y,step_sizex=dx,step_sizey=dy)(),coordinates_list)
len(tuples)
chunks = map(lambda (mx,Mx,my,My) : subselectDataFrameByCoordinates(new_data,'newLon','newLat',mx,Mx,my,My),tuples)
chunks_sizes = map(lambda df : df.shape[0],chunks)
chunk_w_size = zip(chunks,chunks_sizes)
## Here we can filter based on a threshold
threshold = 10
nonempty_chunks_w_size = filter(lambda (df,n) : df.shape[0] > threshold ,chunk_w_size)
chunks_non_empty, chunks_sizes = zip(*nonempty_chunks_w_size)
lengths = pd.Series(map(lambda ch : ch.shape[0],chunks_non_empty))
lengths.plot.hist()
cs = chunks_non_empty
variograms =map(lambda chunk : tools.Variogram(chunk,'residuals1',using_distance_threshold=600000),cs)
%time vars = map(lambda v : v.calculateEmpirical(),variograms)
%time vars = map(lambda v : v.calculateEnvelope(num_iterations=1),variograms)
%time lags = map(lambda v : v.lags,variograms)
lags = pd.DataFrame(lags).transpose()
lags = lags[[0]]
envslow = pd.concat(map(lambda df : df[['envlow']],vars),axis=1)
envhigh = pd.concat(map(lambda df : df[['envhigh']],vars),axis=1)
variogram = pd.concat(map(lambda df : df[['variogram']],vars),axis=1)
n_points = pd.DataFrame(map(lambda v : v.n_points,variograms))
points = n_points.transpose()
print(variogram.shape)
print(points.shape)
ejem1 = pd.DataFrame(variogram.values * points.values)
# Chunks (variograms) columns
# lag rows
vempchunk = ejem1.sum(axis=1) / points.sum(axis=1)
plt.plot(lags,vempchunk,'--',color='blue',lw=2.0)
## Cut some values
vchunk = pd.concat([lags,vempchunk],axis=1)
vchunk.columns = ['lags','semivariance']
v = vchunk[vchunk['lags'] < 500000]
plt.plot(v.lags,v.semivariance,'--',color='blue',lw=2.0)
#vemp2
thrs_dist = 1000000
nt = 30 # num iterations
filename = "../HEC_runs/results/low_q/data_envelope.csv"
envelope_data = pd.read_csv(filename)
gvg = tools.Variogram(new_data,'logBiomass',using_distance_threshold=thrs_dist)
gvg.envelope = envelope_data
gvg.empirical = gvg.envelope.variogram
gvg.lags = gvg.envelope.lags
vdata = gvg.envelope.dropna()
gvg.plot(refresh=False,legend=False,percentage_trunked=20)
plt.plot(v.lags,v.semivariance,'--',color='blue',lw=2.0)
cs = chunks_non_empty
variograms2 =map(lambda chunk : tools.Variogram(chunk,'residuals2',using_distance_threshold=600000),cs)
%time vars = map(lambda v : v.calculateEmpirical(),variograms2)
%time vars = map(lambda v : v.calculateEnvelope(num_iterations=1),variograms2)
%time lags = map(lambda v : v.lags,variograms2)
lags = pd.DataFrame(lags).transpose()
envslow = pd.concat(map(lambda df : df[['envlow']],vars),axis=1)
envhigh = pd.concat(map(lambda df : df[['envhigh']],vars),axis=1)
variogram2 = pd.concat(map(lambda df : df[['variogram']],vars),axis=1)
n_points = pd.DataFrame(map(lambda v : v.n_points,variograms2))
points = n_points.transpose()
ejem2 = pd.DataFrame(variogram2.values * points.values)
# Chunks (variograms) columns
# lag rows
vempchunk2 = ejem2.sum(axis=1) / points.sum(axis=1)
#plt.plot(lags,vempchunk2,'--',color='blue',lw=2.0)
thrs_dist = 1000000
nt = 30 # num iterations
filename = "../HEC_runs/results/low_q/data_envelope.csv"
envelope_data = pd.read_csv(filename)
gvg = tools.Variogram(new_data,'logBiomass',using_distance_threshold=thrs_dist)
gvg.envelope = envelope_data
gvg.empirical = vempchunk2
gvg.lags = gvg.envelope.lags
vdata = gvg.envelope.dropna()
gvg.plot(refresh=False,legend=False,percentage_trunked=20)
plt.plot(variograms2[0].lags,vempchunk2)
#plt.plot(v.lags,v.semivariance,'--',color='blue',lw=2.0)
## Save the emp variogram
#variograms2[0].lags
outputvar = pd.DataFrame([variograms2[0].lags,vempchunk2]).transpose()
outputvar.columns = ['lags','semivariance']
outputvar.to_csv("/outputs/logbiomas_logsppn_residuals_variogram.csv")
## Here I´ll plot the thing
plt.plot(vdata.lags,vdata.variogram)
## test duplicates in new_data
new_data[new_data.duplicated(['newLon'], keep=False)]
## Remove duplications
withoutrep = new_data.drop_duplicates(subset=['newLon','newLat'])
print(new_data.shape)
print(withoutrep.shape)
new_data = withoutrep
def gaussianVariogram(h,sill=0,range_a=0,nugget=0):
if isinstance(h,np.ndarray):
Ih = np.array([1.0 if hx >= 0.0 else 0.0 for hx in h])
else:
Ih = 1.0 if h >= 0 else 0.0
#Ih = 1.0 if h >= 0 else 0.0
g_h = ((sill - nugget)*(1 - np.exp(-(h**2 / range_a**2)))) + nugget*Ih
return g_h
## Fitting model.
### Optimizing the empirical values
def theoreticalVariogram(model_function,sill,range_a,nugget):
return lambda x : model_function(x,sill,range_a,nugget)
dt = pd.concat([lags[[0]],vempchunk2],axis=1)
dt.columns = ['lags','semivariance']
vdata = dt.dropna()
import matplotlib.pylab as plt
hx = np.linspace(0,600000,100)
from scipy.optimize import curve_fit
s = 0.345
r = 100000.0
nugget = 0.33
init_vals = [0.34, 50000, 0.33] # for [amp, cen, wid]
bg, covar_gaussian = curve_fit(gaussianVariogram, xdata=vdata.lags.values, ydata=vdata.semivariance.values, p0=init_vals)
#vdata = gvg.envelope.dropna()
## The best parameters asre:
gau_var = gaussianVariogram(hx,bg[0],bg[1],bg[2])
## The best is the gaussian
#model_var = theoreticalVariogram(gaussianVariogram,bg[0],bg[1],bg[2])
#gvg.empirical = vdata.
#empirical variogram logbiomas ~ sppn
gvg.plot(refresh=False,legend=False)
#empirical variogram by chunks logbiomass ~ logsppn
#plt.plot(dt.lags,dt.semivariance,'--',color='blue',lw=2.0)
#model
plt.plot(hx,gau_var,color='green',lw=2.3)
#plt.plot(hx,exp_var)
print("Sill %s, Range %s, Nugget %s"%(bg[0],bg[1],bg[2]))
import statsmodels.regression.linear_model as lm
%time Mdist = gvg.distance_coordinates.flatten()
#%time vars = np.array(map(model_var,Mdist))
#CovMat = vars.reshape(len(section),len(section))
#X = section.logSppN.values
#Y = section.logBiomass.values
# But the data is massive and it consumes all my RAM (32 GB) I need to do something clever.
covar = []
for i in range(len(Mdist)):
x = Mdist[i] # NumPy arrays have no pop(); index into the flattened distances instead
g = gaussianVariogram(x,bg[0],bg[1],bg[2])
covar.append(g)
### Apply the distance function, remember that now it receives a list (array) as X
%time vars = gaussianVariogram(Mdist,bg[0],bg[1],bg[2])
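# Added sketch (an assumption, not part of the original analysis): if the flattened
# distance matrix is too large to transform in memory at once, the fitted variogram
# can be evaluated over the distances in chunks; chunk_len is an arbitrary example size.
chunk_len = 10**6
vars_chunked = np.concatenate([
    gaussianVariogram(Mdist[i:i + chunk_len], bg[0], bg[1], bg[2])
    for i in range(0, len(Mdist), chunk_len)])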
values = lags[[0]].values
for i in range(len(values)-1):
    print(values[i] - values[i+1])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Algorithm for processing Chunks
Step2: Take an average of the empirical variograms also with the envelope (the weighting used for this average is written out after these steps).
Step3: Let's bring the whole empirical variogram (calculated in HEC)
Step4: Ok, now the thing that I was supposed to do since the beginning
Step5: The variograms for logbiomas and log sppn are different from logbiomass and log sppn.
Step6: The best fitted parameters are
Step7: Now for finishing I need to perform the lucky step,
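The weighting behind the chunk average in Step2 is the one implemented in the code: each chunk's empirical semivariance at a lag is weighted by the number of point pairs in that chunk, i.e. $\hat{\gamma}(h) = \sum_k n_k(h)\,\hat{\gamma}_k(h) \,/\, \sum_k n_k(h)$.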
|
4,560
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Eric Larson <larson.eric.d@gmail.com>
# Adam Li <adam2392@gmail.com>
# Alex Rockhill <aprockhill@mailbox.org>
#
# License: BSD-3-Clause
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import fetch_fsaverage
# paths to mne datasets - sample sEEG and FreeSurfer's fsaverage subject
# which is in MNI space
misc_path = mne.datasets.misc.data_path()
sample_path = mne.datasets.sample.data_path()
subjects_dir = op.join(sample_path, 'subjects')
# use mne-python's fsaverage data
fetch_fsaverage(subjects_dir=subjects_dir, verbose=True) # downloads if needed
raw = mne.io.read_raw(op.join(misc_path, 'seeg', 'sample_seeg_ieeg.fif'))
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id, detrend=1, baseline=None)
epochs = epochs['Response'][0] # just process one epoch of data for speed
montage = epochs.get_montage()
# first we need a head to mri transform since the data is stored in "head"
# coordinates, let's load the mri to head transform and invert it
this_subject_dir = op.join(misc_path, 'seeg')
head_mri_t = mne.coreg.estimate_head_mri_t('sample_seeg', this_subject_dir)
# apply the transform to our montage
montage.apply_trans(head_mri_t)
# now let's load our Talairach transform and apply it
mri_mni_t = mne.read_talxfm('sample_seeg', op.join(misc_path, 'seeg'))
montage.apply_trans(mri_mni_t) # mri to mni_tal (MNI Taliarach)
# for fsaverage, "mri" and "mni_tal" are equivalent and, since
# we want to plot in fsaverage "mri" space, we need use an identity
# transform to equate these coordinate frames
montage.apply_trans(
mne.transforms.Transform(fro='mni_tal', to='mri', trans=np.eye(4)))
epochs.set_montage(montage)
# compute the transform to head for plotting
trans = mne.channels.compute_native_head_t(montage)
# note that this is the same as:
# ``mne.transforms.invert_transform(
# mne.transforms.combine_transforms(head_mri_t, mri_mni_t))``
fig = mne.viz.plot_alignment(epochs.info, trans, 'fsaverage',
subjects_dir=subjects_dir, show_axes=True,
surfaces=['pial', 'head'], coord_frame='mri')
aseg = 'aparc+aseg' # parcellation/anatomical segmentation atlas
labels, colors = mne.get_montage_volume_labels(
montage, 'fsaverage', subjects_dir=subjects_dir, aseg=aseg)
# separate by electrodes which have names like LAMY 1
electrodes = set([''.join([lttr for lttr in ch_name
if not lttr.isdigit() and lttr != ' '])
for ch_name in montage.ch_names])
print(f'Electrodes in the dataset: {electrodes}')
electrodes = ('LPM', 'LSMA') # choose two for this example
for elec in electrodes:
picks = [ch_name for ch_name in epochs.ch_names if elec in ch_name]
fig = plt.figure(num=None, figsize=(8, 8), facecolor='black')
mne.viz.plot_channel_labels_circle(labels, colors, picks=picks, fig=fig)
fig.text(0.3, 0.9, 'Anatomical Labels', color='white')
picks = [ii for ii, ch_name in enumerate(epochs.ch_names) if
any([elec in ch_name for elec in electrodes])]
labels = ('ctx-lh-caudalmiddlefrontal', 'ctx-lh-precentral',
'ctx-lh-superiorfrontal', 'Left-Putamen')
fig = mne.viz.plot_alignment(mne.pick_info(epochs.info, picks), trans,
'fsaverage', subjects_dir=subjects_dir,
surfaces=[], coord_frame='mri')
brain = mne.viz.Brain('fsaverage', alpha=0.1, cortex='low_contrast',
subjects_dir=subjects_dir, units='m', figure=fig)
brain.add_volume_labels(aseg='aparc+aseg', labels=labels)
brain.show_view(azimuth=120, elevation=90, distance=0.25)
brain.enable_depth_peeling()
epochs.plot()
# get standard fsaverage volume (5mm grid) source space
fname_src = op.join(subjects_dir, 'fsaverage', 'bem',
'fsaverage-vol-5-src.fif')
vol_src = mne.read_source_spaces(fname_src)
evoked = epochs.average()
stc = mne.stc_near_sensors(
evoked, trans, 'fsaverage', subjects_dir=subjects_dir, src=vol_src,
verbose='error') # ignore missing electrode warnings
stc = abs(stc) # just look at magnitude
clim = dict(kind='value', lims=np.percentile(abs(evoked.data), [10, 50, 75]))
brain = stc.plot_3d(
src=vol_src, subjects_dir=subjects_dir,
view_layout='horizontal', views=['axial', 'coronal', 'sagittal'],
size=(800, 300), show_traces=0.4, clim=clim,
add_data_kwargs=dict(colorbar_kwargs=dict(label_font_size=8)))
# You can save a movie like the one on our documentation website with:
# brain.save_movie(time_dilation=3, interpolation='linear', framerate=5,
# time_viewer=True, filename='./mne-test-seeg.m4')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's load some sEEG data with channel locations and make epochs.
Step2: Let's use the Talairach transform computed in the Freesurfer recon-all
Step3: Let's check to make sure everything is aligned.
Step4: Let's also look at which regions of interest are nearby our electrode
Step5: Now, let's plot the electrodes and a few regions of interest that the contacts
Step6: Next, we'll get the epoch data and plot its amplitude over time.
Step7: We can visualize this raw data on the fsaverage brain (in MNI space) as
Step8: Plot 3D source (brain region) visualization
|
4,561
|
<ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import GPy
import pods
%matplotlib inline
import matplotlib as plt
plt.rcParams['figure.figsize'] = (10.0, 4.0)
# This downloads cel files if they are not present.
# These cel files would be needed if you want to do
# the full Bioconductor analysis below.
process_data_in_R = False
if process_data_in_R:
%load_ext rmagic
datapath = os.path.join(pods.datasets.data_path, 'fruitfly_tomancak')
# download the original cel files and prepare to process!
data_set = 'fruitfly_tomancak_cel_files'
if not pods.datasets.data_available(data_set):
data = pods.datasets.download_data(data_set)
else:
# download the puma-processed affymetrix data.
data = pods.datasets.fruitfly_tomancak()
%%R -i datapath
returnpath <- getwd()
setwd(datapath)
if(!file.exists("tomancak_exprs.csv")) {
source("http://www.bioconductor.org/biocLite.R")
biocLite("puma")
library(puma)
print("Processing data with PUMA")
expfiles <- c(paste("embryo_tc_4_", 1:12, ".CEL", sep=""), paste("embryo_tc_6_", 1:12, ".CEL", sep=""), paste("embryo_tc_8_", 1:12, ".CEL", sep=""))
library(puma)
drosophila_exp_set <- justmmgMOS(filenames=expfiles, celfile.path=datapath)
pData(drosophila_exp_set) <- data.frame("time.h" = rep(1:12, 3), row.names=rownames(pData(drosophila_exp_set)))
write.reslts(drosophila_exp_set, file='tomancak')
}
else {
print("Processed data found on disk.")
}
setwd(returnpath)
data = pods.datasets.fruitfly_tomancak()
X = data['X']
Y = data['Y']
def fit_probe(id, independent=False):
Fit a set of probe repeats as either independent or correlated.
# set up the covariance function.
lengthscale = 2.
if independent:
kern = GPy.kern.RBF(1,lengthscale=lengthscale)**GPy.kern.Coregionalize(1,3, rank=0)
name = 'independent gp'
else:
kern = GPy.kern.RBF(1,lengthscale=lengthscale)**GPy.kern.Coregionalize(1,3, rank=1)
kern.coregion.W[0] = 1.
kern.coregion.W[1] = 1.
kern.coregion.W[2] = 1.
kern.coregion.W.constrain_fixed()
name = 'joint gp'
kern += GPy.kern.Bias(2)
m = GPy.models.GPRegression(X, Y[id][:, None], kern)
m.name = name
m.optimize(messages=True)
return m
import matplotlib.pyplot as plt
from IPython.display import display
def show_model_fit(m):
display(m)
fig, ax = plt.subplots(1, 3, sharex=False,sharey=True, figsize=(12, 3.5))
symbols = ['x', 'x', 'x']
replicate_str = 'replicate {}'
#linecolors = [(1.,0.,0.), (0., 1., 0.), (0., 0., 1.)]
mi, ma = np.inf, -np.inf
for replicate in range(3):
# Plot the result without noise (trying to estimate underlying gene expression)
which_data_rows=np.nonzero(X[:,1]==replicate)
data_symbol=symbols[replicate]
pl = m.plot_f(ax=ax[replicate], fixed_inputs=[(1, replicate)],
which_data_rows=which_data_rows,
data_symbol=data_symbol,
)
ax[replicate].plot(m.X[which_data_rows, 0].flatten(),
m.Y[which_data_rows].flatten(),
data_symbol, c='k', mew=1.5,
)
ax[replicate].text(.98, .98, replicate_str.format(replicate), ha='right', va='top',
transform=ax[replicate].transAxes)
_mi, _ma = ax[replicate].get_ylim()
if _mi < mi: mi = _mi
if _ma > ma: ma = _ma
for _ax in ax:
_ax.set_ylim(mi, ma)
ax[0].set_ylabel('gene expression [arbitrary]')
ax[1].set_xlabel('time [hrs]')
fig.text(0.5,1, m.name, ha='center', va='top', size=22,
bbox=dict(facecolor='white', edgecolor='k', lw=.4, boxstyle='round'), clip_on=False)
fig.tight_layout(rect=(0,0,1,1))
#GPy.plotting.matplot_dep.base_plots.align_subplots(1, 3, xlim=ax[0].get_xlim(), ylim=ax[0].get_ylim())
def fit_and_display(probe_id):
mc = fit_probe(probe_id, independent=True)
mi = fit_probe(probe_id, independent=False)
show_model_fit(mc)
show_model_fit(mi)
return mc, mi
mc, mi = fit_and_display('141201_at')
print(mc.log_likelihood() - mi.log_likelihood())
mc, mi = fit_and_display('141200_at')
def fit_probe(id, independent=False):
Fit a set of probe repeats as either independent or correlated.
# set up a likelihood function, Student-t with 5 degress of freedom.
likelihood = GPy.likelihoods.StudentT(deg_free=5)
# use the Laplace approximation for inference.
inference = GPy.inference.latent_function_inference.laplace.Laplace()
# set up the covariance function as before.
lengthscale = 3.3
if independent:
kern = GPy.kern.RBF(1,lengthscale=lengthscale)**GPy.kern.Coregionalize(1,3, rank=0)
name = 'independent gp'
else:
kern = GPy.kern.RBF(1,lengthscale=lengthscale)**GPy.kern.Coregionalize(1,3, rank=1)
kern.coregion.W[0] = 1.
kern.coregion.W[1] = 1.
kern.coregion.W[2] = 1.
kern.coregion.W.constrain_fixed()
name = 'joint gp'
kern += GPy.kern.Bias(2)
m = GPy.core.GP(X, Y[id][:, None], likelihood=likelihood, inference_method=inference, kernel=kern, name=name)
m.optimize(messages=True)
return m
mc, mi = fit_and_display('141201_at')
mc, mi = fit_and_display('141200_at')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Use Bioconductor for Processing
Step2: This portion of the code will do the PUMA analysis of the gene
Step3: Read Gene Expression Data into Pandas Dataframe
Step5: The gene expression data is now loaded into the python environment. We have made use of pandas, a Python library for handling data structures. It provides us with a DataFrame object which gives some functionality similar to that of R for basic analysis.
Step6: We'd like to display the model fits. The GPy software allows us to select which data to plot from the model. Below there's a function for plotting the data associated with each replicate alongside the fit.
Step7: Now let's fit each of the two models to a given probe id. First we select a probe_id where the correspondence between the repeats is not very strong (even if it's there).
Step8: Viewing the results here, for the independent model two of the fits adjudge there to be no signal. The constrained model shares information across the fits and can therefore determine better parameters. We can compute the ratio of the likelihoods given
Step9: As a second gene, we consider 141200_at. This gene turns out to have a more consistent response across the three repeats.
Step11: Naturally, the likelihood still favours the constrained model.
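A short added sketch (not from the original notebook) of how that likelihood comparison can be made explicit for the two models returned by fit_and_display; a positive difference favours the first model in the subtraction.
log_diff = mc.log_likelihood() - mi.log_likelihood()
print('log-likelihood difference:', log_diff)
print('likelihood ratio:', np.exp(log_diff))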
|
4,562
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
from sqlalchemy import create_engine
def connection(user,passwd,dbname, echo_i=False):
str1 = ('postgresql+pg8000://' + user +':' + passwd + '@switch-db2.erg.berkeley.edu:5432/'
+ dbname + '?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory')
engine = create_engine(str1,echo=echo_i)
return engine
user = 'jdlara'
passw = 'Amadeus-2010'
dbname = 'apl_cec'
engine= connection(user,passw,dbname)
excel_file = '~/Box Sync/EPIC-Biomass/PGEFeedersFinal.xlsx'
tab_name = ['substation_banks','substations','feeders_limits_data','feeder_minimpacts']
schema_for_upload = 'PGE'
for name in tab_name:
pd_data = pd.read_excel(excel_file, sheetname=name, encoding='UTF-8')
pd_data.to_sql(name, engine, schema=schema_for_upload, if_exists='replace',chunksize=100)
def create_geom(table,schema,engine):
k = engine.connect()
query = ('set search_path = "'+ schema +'"'+ ', public;')
print query
k.execute(query)
query = ('alter table ' + table + ' drop column if exists geom;')
print query
k.execute(query)
query = 'SELECT AddGeometryColumn (\''+ schema + '\',\''+ table + '\',\'geom\''+',4326,\'POINT\',2);'
print query
k.execute(query)
query = ('UPDATE ' + table + ' set geom = ST_SetSRID(st_makepoint(' + table + '.lon, ' +
table + '.lat), 4326)::geometry;')
k.execute(query)
print query
return 'geom column added with SRID 4326'
table = 'feeders'
schema = 'PGE'
create_geom(table,schema,engine)
col = 'feeder_no'
k = engine.connect()
query = ('set search_path = "'+ schema +'"'+ ', public;')
k.execute(query)
query = ('alter table ' + table + ' ADD CONSTRAINT '+ table +'_pk PRIMARY KEY (' + col + ')')
print query
k.execute(query)
# Reference SQL syntax for adding a primary-key constraint (kept as a comment, not executable Python):
# ALTER TABLE table_name
# ADD CONSTRAINT [ constraint_name ]
# PRIMARY KEY (index_col1, index_col2, ... index_col_n)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: After importing the required packages, first create the engine to connect to the DB. The approach I generally use is to create a string based on the username and password. The code is a function, you just need to fill in with the username, password and the dbname.
Step2: Afterwards, use pandas to import the data from Excel files or any other text file format. Make sure that the data is in good shape before trying to push it into the server. In this example I use previous knowledge of the structure of the tabs in the excel file to recursively upload each tab and match the name of the table with the tab name.
Step3: Once the data is updated, it is possible to run the SQL commands to properly create geom columns in the tables; this can be done as follows. The objective is to run an SQL query like this
Step4: The function created the geom column; the next step is to define a function to create the Primary-Key in the db. Remember that the index from the data frame is included as an index in the db; sometimes that index is not really needed and might need to be dropped.
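A rough sketch of that index cleanup (both statements below are illustrative assumptions, not part of the original script): the DataFrame index can be excluded at upload time with index=False, or the index column pandas created can be dropped from an already-uploaded table.
pd_data.to_sql(name, engine, schema=schema_for_upload, if_exists='replace', index=False, chunksize=100)
k = engine.connect()
k.execute('ALTER TABLE "PGE".substations DROP COLUMN IF EXISTS "index";')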
|
4,563
|
<ASSISTANT_TASK:>
Python Code:
plt.imshow(X_train[0].reshape(28,28)) # This is what the image looks like
from scipy.ndimage import interpolation
def moments(image):
c0,c1 = np.mgrid[:image.shape[0],:image.shape[1]] # A trick in numPy to create a mesh grid
totalImage = np.sum(image) #sum of pixels
m0 = np.sum(c0*image)/totalImage #mu_x
m1 = np.sum(c1*image)/totalImage #mu_y
m00 = np.sum((c0-m0)**2*image)/totalImage #var(x)
m11 = np.sum((c1-m1)**2*image)/totalImage #var(y)
m01 = np.sum((c0-m0)*(c1-m1)*image)/totalImage #covariance(x,y)
mu_vector = np.array([m0,m1]) # Notice that these are \mu_x, \mu_y respectively
covariance_matrix = np.array([[m00,m01],[m01,m11]]) # Do you see a similarity between the covariance matrix
return mu_vector, covariance_matrix
def deskew(image):
c,v = moments(image)
alpha = v[0,1]/v[0,0]
affine = np.array([[1,0],[alpha,1]])
ocenter = np.array(image.shape)/2.0
offset = c-np.dot(affine,ocenter)
return interpolation.affine_transform(image,affine,offset=offset)
examples = (4181, 0), (3,1), (56282, 2), (25829,3), (9741,4) , (26901,5), (50027,6), (17935,7) , (41495, 8), (14662, 9)
from mpl_toolkits.axes_grid1 import AxesGrid
grid = AxesGrid(plt.figure(figsize=(8,15)), 141, # similar to subplot(141)
nrows_ncols=(10, 2),
axes_pad=0.05,
label_mode="1",
)
for examplenum,num in examples:
im = grid[2*num].imshow(X_train[examplenum].reshape(28,28))
im2 = grid[2*num+1].imshow(deskew(X_train[examplenum].reshape(28,28)))
plt.subplot(1, 2, 1)
plt.imshow(X_train[3].reshape(28,28))
newim = deskew(X_train[3].reshape(28,28))
plt.subplot(1, 2, 2)
plt.imshow(newim)
from sklearn.preprocessing import OneHotEncoder
from sklearn import linear_model
import sklearn.metrics as metrics
def createModel(x,y):
yp = OneHotEncoder()
y = yp.fit_transform(y.reshape(60000,1)).toarray()
clf = linear_model.Ridge (alpha = 0)
clf.fit(x,y)
return clf
def predict(model,x):
return np.argmax(model.predict(x),axis=1)
model_unchanged = createModel(X_train,labels_train)
metrics.accuracy_score(predict(model_unchanged,X_train),labels_train)
metrics.accuracy_score(predict(model_unchanged,X_test),labels_test)
def deskewAll(X):
currents = []
for i in range(len(X)):
currents.append(deskew(X[i].reshape(28,28)).flatten())
return np.array(currents)
X_train_deskewed = deskewAll(X_train)
X_test_deskewed = deskewAll(X_test)
model_deskewed = createModel(X_train_deskewed,labels_train)
metrics.accuracy_score(predict(model_deskewed,X_train_deskewed),labels_train)
metrics.accuracy_score(predict(model_deskewed,X_test_deskewed),labels_test)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Results
|
4,564
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# Discretization
c1=20 # Number of grid points per dominant wavelength
c2=0.5 # CFL-Number
nx=2000 # Number of grid points
T=10 # Total propagation time
# Source Signal
f0= 10 # Center frequency Ricker-wavelet
q0= 1 # Maximum amplitude Ricker-Wavelet
xscr = 100 # Source position (in grid points)
# Receiver
xrec1=400 # Position Reciever 1 (in grid points)
xrec2=800 # Position Reciever 2 (in grid points)
xrec3=1800 # Position Reciever 3 (in grid points)
# Velocity and density
modell_v = np.hstack((1000*np.ones((int(nx/2))),1500*np.ones((int(nx/2)))))
rho=np.hstack((1*np.ones((int(nx/2))),1.5*np.ones((int(nx/2)))))
# Init wavefields
vx=np.zeros(nx)
p=np.zeros(nx)
# Calculate first Lame-Paramter
l=rho * modell_v * modell_v
cmin=min(modell_v.flatten()) # Lowest P-wave velocity
cmax=max(modell_v.flatten()) # Highest P-wave velocity
fmax=2*f0 # Maximum frequency
dx=cmin/(fmax*c1) # Spatial discretization (in m)
dt=dx/(cmax)*c2 # Temporal discretization (in s)
lampda_min=cmin/fmax # Smallest wavelength
# Output model parameter:
print("Model size: x:",dx*nx,"in m")
print("Temporal discretization: ",dt," s")
print("Spatial discretization: ",dx," m")
print("Number of gridpoints per minimum wavelength: ",lampda_min/dx)
x=np.arange(0,dx*nx,dx) # Space vector
t=np.arange(0,T,dt) # Time vector
nt=np.size(t) # Number of time steps
# Plotting model
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.subplots_adjust(wspace=0.4,right=1.6)
ax1.plot(x,modell_v)
ax1.set_ylabel('VP in m/s')
ax1.set_xlabel('Depth in m')
ax1.set_title('P-wave velocity')
ax2.plot(x,rho)
ax2.set_ylabel('Density in g/cm^3')
ax2.set_xlabel('Depth in m')
ax2.set_title('Density');
tau=np.pi*f0*(t-1.5/f0)
q=q0*(1.0-2.0*tau**2.0)*np.exp(-tau**2)
# Plotting source signal
plt.figure(3)
plt.plot(t,q)
plt.title('Source signal Ricker-Wavelet')
plt.ylabel('Amplitude')
plt.xlabel('Time in s')
plt.draw()
# Init Seismograms
Seismogramm=np.zeros((3,nt)); # Three seismograms
# Calculation of some coefficients
i_dx=1.0/(dx)
i_dx3=1.0/(dx**3)
c9=dt**3/24.0
kx=np.arange(5,nx-4)
print("Starting time stepping...")
## Time stepping
for n in range(2,nt):
# Inject source wavelet
p[xscr]=p[xscr]+q[n]
# Calculating spatial derivative
p_x=i_dx*9.0/8.0*(p[kx+1]-p[kx])-i_dx*1.0/24.0*(p[kx+2]-p[kx-1])
p_xxx=i_dx3*(-3.0)*(p[kx+1]-p[kx])+i_dx3*(1.0)*(p[kx+2]-p[kx-1])
# Update velocity
vx[kx]=vx[kx]-dt/rho[kx]*p_x-l[kx]*c9*1/(rho[kx]**2)*(p_xxx)
# Calculating spatial derivative
vx_x= i_dx*9.0/8.0*(vx[kx]-vx[kx-1])-i_dx*1.0/24.0*(vx[kx+1]-vx[kx-2])
vx_xxx=i_dx3*(-3.0)*(vx[kx]-vx[kx-1])+i_dx3*(1.0)*(vx[kx+1]-vx[kx-2])
# Update pressure
p[kx]=p[kx]-l[kx]*dt*(vx_x)-l[kx]**2.0*c9*1.0/(rho[kx])*(vx_xxx)
# Save seismograms
Seismogramm[0,n]=p[xrec1]
Seismogramm[1,n]=p[xrec2]
Seismogramm[2,n]=p[xrec3]
print("Finished time stepping!")
## Save seismograms
np.save("Seismograms/FD_1D_DX4_DT4_LW_fast",Seismogramm)
## Plot seismograms
fig, (ax1, ax2, ax3) = plt.subplots(3, 1)
fig.subplots_adjust(hspace=0.4,right=1.6, top = 2 )
ax1.plot(t,Seismogramm[0,:])
ax1.set_title('Seismogram 1')
ax1.set_ylabel('Amplitude')
ax1.set_xlabel('Time in s')
ax1.set_xlim(0, T)
ax2.plot(t,Seismogramm[1,:])
ax2.set_title('Seismogram 2')
ax2.set_ylabel('Amplitude')
ax2.set_xlabel('Time in s')
ax2.set_xlim(0, T)
ax3.plot(t,Seismogramm[2,:])
ax3.set_title('Seismogram 3')
ax3.set_ylabel('Amplitude')
ax3.set_xlabel('Time in s')
ax3.set_xlim(0, T);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Input Parameter
Step2: Preparation
Step3: Create space and time vector
Step4: Source signal - Ricker-wavelet
Step5: Time stepping
Step6: Save seismograms
|
4,565
|
<ASSISTANT_TASK:>
Python Code:
# This cell loads the data and cleans it for you, and log10 transforms the drug resistance values.
# Remember to run this cell if you want to have the data loaded into memory.
DATA_HANDLE = 'drug_data/hiv-protease-data.csv' # specify the relative path to the protease drug resistance data
N_DATA = 8 # specify the number of columns in the CSV file that are drug resistance measurements.
CONSENSUS = 'sequences/hiv-protease-consensus.fasta' # specify the relative path to the HIV protease consensus sequence
data, drug_cols, feat_cols = cf.read_data(DATA_HANDLE, N_DATA)
consensus_map = cf.read_consensus(CONSENSUS)
data = cf.clean_data(data, feat_cols, consensus_map)
for name in drug_cols:
data[name] = data[name].apply(np.log10)
data.head()
Complete the function below to compute the correlation score.
Use the scipy.stats.pearsonr(x, y) function to find the correlation score between two arrays of things.
You do not need to type the whole name, as I have imported the pearsonr name for you, so you only have to do:
pearsonr(x, y)
Procedure:
1. Select two columns' names to compare.
2. Make sure to drop NaN values. the pearsonr function cannot deal with NaN values.
(Refer to the Lecture notebook if you forgot how to do this.)
3. Pass the data in to pearsonr().
def corr_score(drug1, drug2):
### BEGIN SOLUTION
# Get the subset of data
subset = data[[drug1, drug2]].dropna()
# Return the pearsonr score.
return pearsonr(subset[drug1], subset[drug2])
### END SOLUTION
assert corr_score('IDV', 'FPV') == (0.79921991532901282, 2.6346448659104859e-306)
assert corr_score('ATV', 'FPV') == (0.82009597442033089, 2.5199367322520278e-231)
assert corr_score('NFV', 'DRV') == (0.69148264851159791, 4.0640711263961111e-82)
assert corr_score('LPV', 'SQV') == (0.76682619729899326, 4.2705737581002648e-234)
# Fill in the code here to clean the data.
def return_cleaned_data(drug_name, data):
# Select the subsets of columns of interest.
# Fade out the drug_name and feat_cols variables
cols_of_interest = []
cols_of_interest.append(drug_name)
cols_of_interest.extend(feat_cols)
subset = data[cols_of_interest].dropna() # fade out .dropna()
Y = subset[drug_name] # fade out drug_name, fade out .apply(np.log10)
X = subset[feat_cols]
# We call on a custom function to binarize the sequence feature matrix.
# You can inspect the code in the custom_funcs.py file.
lb = LabelBinarizer()
lb.fit(list('CHIMSVAGLPTRFYWDNEQK'))
X_binarized = pd.DataFrame()
for col in X.columns:
binarized_cols = lb.transform(X[col])
for i, c in enumerate(lb.classes_):
X_binarized[col + '_' + c] = binarized_cols[:,i]
return X_binarized, Y
X_binarized, Y = return_cleaned_data('FPV', data)
len(X_binarized), len(Y)
num_estimators = [10, 30, 50, 80, 100, 300, 500, 800] # fill in the number of estimators to try here.
models = {'Random Forest':RandomForestRegressor,
'Ada Boost':AdaBoostRegressor,
'Gradient Boost':GradientBoostingRegressor,
'Extra Trees':ExtraTreesRegressor} # fill in the other models here
# Initialize a dictionary to hold the models' MSE values.
mses = dict()
for model_name, model in models.items():
mses[model_name] = dict()
for n in num_estimators:
mses[model_name][n] = 0
# Iterate over the models, and number of estimators.
for model_name, model in models.items():
for n_est in num_estimators:
### Begin Here
print(model_name, n_est)
# Set up the cross-validation iterator
cv_iterator = cv.ShuffleSplit(len(X_binarized), test_size=0.3, n_iter=5)
# Initialize the model
m = model(n_estimators=n_est) # fill in the parameters
# Collect the cross-validation scores. Remember that mse will be negative, and needs to
# be transformed to be positive.
cv_scores = cv.cross_val_score(m, X_binarized, Y, cv=cv_iterator, scoring='mean_squared_error')
### End Here
# Store the mean MSEs.
mses[model_name][n_est] = np.mean(-cv_scores)
# When you're done, run the following cell to make your plot.
pd.DataFrame(mses).plot()
plt.xlabel('Num Estimators')
plt.ylabel('MSE')
# Load in the data and binarize it.
proteases = [s for s in SeqIO.parse('sequences/HIV1-protease.fasta', 'fasta') if len(s) == 99]
alignment = MultipleSeqAlignment(proteases)
proteases_df = pd.DataFrame(np.array([list(rec) for rec in alignment], str))
proteases_df.index = [s.id for s in proteases]
proteases_df.columns = [i for i in range(1, 100)]
X_global = cf.binarize_seqfeature(proteases_df)
# Train your model here, with optimized parameters for best MSE minimization.
### BEGIN
model = RandomForestRegressor() # put your best model here.
model.fit(X_binarized, Y)
preds = model.predict(X_global)
plt.hist(preds)
### END
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem Set on Machine Learning
Step2: Question
Step3: Question
|
4,566
|
<ASSISTANT_TASK:>
Python Code:
import SimpleITK as sitk
import numpy as np
%matplotlib inline
import gui
from matplotlib import pyplot as plt
from ipywidgets import interact, fixed
# Utility method that either downloads data from the Girder repository or
# if already downloaded returns the file name for reading from disk (cached data).
%run update_path_to_download_script
from downloaddata import fetch_data as fdata
dimension = 2
print("*Identity Transform*")
identity = sitk.Transform(dimension, sitk.sitkIdentity)
print("Dimension: " + str(identity.GetDimension()))
# Points are always defined in physical space
point = (1.0, 1.0)
def transform_point(transform, point):
transformed_point = transform.TransformPoint(point)
print("Point " + str(point) + " transformed is " + str(transformed_point))
transform_point(identity, point)
print("*Translation Transform*")
translation = sitk.TranslationTransform(dimension)
print("Parameters: " + str(translation.GetParameters()))
print("Offset: " + str(translation.GetOffset()))
print("FixedParameters: " + str(translation.GetFixedParameters()))
transform_point(translation, point)
print("")
translation.SetParameters((3.1, 4.4))
print("Parameters: " + str(translation.GetParameters()))
transform_point(translation, point)
print("*Affine Transform*")
affine = sitk.AffineTransform(dimension)
print("Parameters: " + str(affine.GetParameters()))
print("FixedParameters: " + str(affine.GetFixedParameters()))
transform_point(affine, point)
print("")
affine.SetTranslation((3.1, 4.4))
print("Parameters: " + str(affine.GetParameters()))
transform_point(affine, point)
def myshow(img, title=None, margin=0.05, dpi=80):
nda = sitk.GetArrayViewFromImage(img)
spacing = img.GetSpacing()
ysize = nda.shape[0]
xsize = nda.shape[1]
figsize = (1 + margin) * ysize / dpi, (1 + margin) * xsize / dpi
fig = plt.figure(title, figsize=figsize, dpi=dpi)
ax = fig.add_axes([margin, margin, 1 - 2 * margin, 1 - 2 * margin])
extent = (0, xsize * spacing[1], 0, ysize * spacing[0])
t = ax.imshow(
nda, extent=extent, interpolation="hamming", cmap="gray", origin="lower"
)
if title:
plt.title(title)
grid = sitk.GridSource(
outputPixelType=sitk.sitkUInt16,
size=(250, 250),
sigma=(0.5, 0.5),
gridSpacing=(5.0, 5.0),
gridOffset=(0.0, 0.0),
spacing=(0.2, 0.2),
)
myshow(grid, "Grid Input")
def resample(image, transform):
# Output image Origin, Spacing, Size, Direction are taken from the reference
# image in this call to Resample
reference_image = image
interpolator = sitk.sitkCosineWindowedSinc
default_value = 100.0
return sitk.Resample(image, reference_image, transform, interpolator, default_value)
translation.SetOffset((3.1, 4.6))
transform_point(translation, point)
resampled = resample(grid, translation)
myshow(resampled, "Resampled Translation")
translation.SetOffset(-1 * np.array(translation.GetParameters()))
transform_point(translation, point)
resampled = resample(grid, translation)
myshow(resampled, "Inverse Resampled")
def affine_translate(transform, x_translation=3.1, y_translation=4.6):
new_transform = sitk.AffineTransform(transform)
new_transform.SetTranslation((x_translation, y_translation))
resampled = resample(grid, new_transform)
myshow(resampled, "Translated")
return new_transform
affine = sitk.AffineTransform(dimension)
interact(
affine_translate,
transform=fixed(affine),
x_translation=(-5.0, 5.0),
y_translation=(-5.0, 5.0),
);
def affine_scale(transform, x_scale=3.0, y_scale=0.7):
new_transform = sitk.AffineTransform(transform)
matrix = np.array(transform.GetMatrix()).reshape((dimension, dimension))
matrix[0, 0] = x_scale
matrix[1, 1] = y_scale
new_transform.SetMatrix(matrix.ravel())
resampled = resample(grid, new_transform)
myshow(resampled, "Scaled")
print(matrix)
return new_transform
affine = sitk.AffineTransform(dimension)
interact(affine_scale, transform=fixed(affine), x_scale=(0.2, 5.0), y_scale=(0.2, 5.0));
def affine_rotate(transform, degrees=15.0):
parameters = np.array(transform.GetParameters())
new_transform = sitk.AffineTransform(transform)
matrix = np.array(transform.GetMatrix()).reshape((dimension, dimension))
radians = -np.pi * degrees / 180.0
rotation = np.array(
[[np.cos(radians), -np.sin(radians)], [np.sin(radians), np.cos(radians)]]
)
new_matrix = np.dot(rotation, matrix)
new_transform.SetMatrix(new_matrix.ravel())
resampled = resample(grid, new_transform)
print(new_matrix)
myshow(resampled, "Rotated")
return new_transform
affine = sitk.AffineTransform(dimension)
interact(affine_rotate, transform=fixed(affine), degrees=(-90.0, 90.0));
def affine_shear(transform, x_shear=0.3, y_shear=0.1):
new_transform = sitk.AffineTransform(transform)
matrix = np.array(transform.GetMatrix()).reshape((dimension, dimension))
matrix[0, 1] = -x_shear
matrix[1, 0] = -y_shear
new_transform.SetMatrix(matrix.ravel())
resampled = resample(grid, new_transform)
myshow(resampled, "Sheared")
print(matrix)
return new_transform
affine = sitk.AffineTransform(dimension)
interact(affine_shear, transform=fixed(affine), x_shear=(0.1, 2.0), y_shear=(0.1, 2.0));
translate = (8.0, 16.0)
rotate = 20.0
affine = sitk.AffineTransform(dimension)
affine = affine_translate(affine, translate[0], translate[1])
affine = affine_rotate(affine, rotate)
resampled = resample(grid, affine)
myshow(resampled, "Single Transform")
composite = sitk.CompositeTransform(dimension)
translation = sitk.TranslationTransform(dimension)
translation.SetOffset(-1 * np.array(translate))
composite.AddTransform(translation)
affine = sitk.AffineTransform(dimension)
affine = affine_rotate(affine, rotate)
composite.AddTransform(affine)
resampled = resample(grid, composite)
myshow(resampled, "Two Transforms")
composite = sitk.CompositeTransform(dimension)
composite.AddTransform(affine)
composite.AddTransform(translation)
resampled = resample(grid, composite)
myshow(resampled, "Composite transform in reverse order")
def resample_display(image, euler2d_transform, tx, ty, theta):
euler2d_transform.SetTranslation((tx, ty))
euler2d_transform.SetAngle(theta)
resampled_image = sitk.Resample(image, euler2d_transform)
plt.imshow(sitk.GetArrayFromImage(resampled_image))
plt.axis("off")
plt.show()
logo = sitk.ReadImage(fdata("SimpleITK.jpg"))
euler2d = sitk.Euler2DTransform()
# Why do we set the center?
euler2d.SetCenter(
logo.TransformContinuousIndexToPhysicalPoint(np.array(logo.GetSize()) / 2.0)
)
interact(
resample_display,
image=fixed(logo),
euler2d_transform=fixed(euler2d),
tx=(-128.0, 128.0, 2.5),
ty=(-64.0, 64.0),
theta=(-np.pi / 4.0, np.pi / 4.0),
);
euler2d = sitk.Euler2DTransform()
# Why do we set the center?
euler2d.SetCenter(
logo.TransformContinuousIndexToPhysicalPoint(np.array(logo.GetSize()) / 2.0)
)
tx = 64
ty = 32
euler2d.SetTranslation((tx, ty))
extreme_points = [
logo.TransformIndexToPhysicalPoint((0, 0)),
logo.TransformIndexToPhysicalPoint((logo.GetWidth(), 0)),
logo.TransformIndexToPhysicalPoint((logo.GetWidth(), logo.GetHeight())),
logo.TransformIndexToPhysicalPoint((0, logo.GetHeight())),
]
inv_euler2d = euler2d.GetInverse()
extreme_points_transformed = [inv_euler2d.TransformPoint(pnt) for pnt in extreme_points]
min_x = min(extreme_points_transformed)[0]
min_y = min(extreme_points_transformed, key=lambda p: p[1])[1]
max_x = max(extreme_points_transformed)[0]
max_y = max(extreme_points_transformed, key=lambda p: p[1])[1]
# Use the original spacing (arbitrary decision).
output_spacing = logo.GetSpacing()
# Identity cosine matrix (arbitrary decision).
output_direction = [1.0, 0.0, 0.0, 1.0]
# Minimal x,y coordinates are the new origin.
output_origin = [min_x, min_y]
# Compute grid size based on the physical size and spacing.
output_size = [
int((max_x - min_x) / output_spacing[0]),
int((max_y - min_y) / output_spacing[1]),
]
resampled_image = sitk.Resample(
logo,
output_size,
euler2d,
sitk.sitkLinear,
output_origin,
output_spacing,
output_direction,
)
plt.imshow(sitk.GetArrayViewFromImage(resampled_image))
plt.axis("off")
plt.show()
img = logo
# Generate random samples inside the image, we will obtain the intensity/color values at these points.
num_samples = 10
physical_points = []
for pnt in zip(*[list(np.random.random(num_samples) * sz) for sz in img.GetSize()]):
physical_points.append(img.TransformContinuousIndexToPhysicalPoint(pnt))
# Create an image of size [num_samples,1...1], actual size is dependent on the image dimensionality. The pixel
# type is irrelevant, as the image is just defining the interpolation grid (sitkUInt8 has minimal memory footprint).
interp_grid_img = sitk.Image(
[num_samples] + [1] * (img.GetDimension() - 1), sitk.sitkUInt8
)
# Define the displacement field transformation, maps the points in the interp_grid_img to the points in the actual
# image.
displacement_img = sitk.Image(
[num_samples] + [1] * (img.GetDimension() - 1),
sitk.sitkVectorFloat64,
img.GetDimension(),
)
for i, pnt in enumerate(physical_points):
displacement_img[[i] + [0] * (img.GetDimension() - 1)] = np.array(pnt) - np.array(
interp_grid_img.TransformIndexToPhysicalPoint(
[i] + [0] * (img.GetDimension() - 1)
)
)
# Actually perform the resampling. The only relevant choice here is the interpolator. The default_output_pixel_value
# is set to 0.0, but the resampling should never use it because we expect all points to be inside the image and this
# value is only used if the point is outside the image extent.
interpolator_enum = sitk.sitkLinear
default_output_pixel_value = 0.0
output_pixel_type = (
sitk.sitkFloat32
if img.GetNumberOfComponentsPerPixel() == 1
else sitk.sitkVectorFloat32
)
resampled_points = sitk.Resample(
img,
interp_grid_img,
sitk.DisplacementFieldTransform(displacement_img),
interpolator_enum,
default_output_pixel_value,
output_pixel_type,
)
# Print the interpolated values per point
for i in range(resampled_points.GetWidth()):
print(
str(physical_points[i])
+ ": "
+ str(resampled_points[[i] + [0] * (img.GetDimension() - 1)])
+ "\n"
)
file_names = ["cxr.dcm", "photo.dcm", "POPI/meta/00-P.mhd", "training_001_ct.mha"]
images = []
image_file_reader = sitk.ImageFileReader()
for fname in file_names:
image_file_reader.SetFileName(fdata(fname))
image_file_reader.ReadImageInformation()
image_size = list(image_file_reader.GetSize())
# 2D image posing as a 3D one
if len(image_size) == 3 and image_size[2] == 1:
image_size[2] = 0
image_file_reader.SetExtractSize(image_size)
images.append(image_file_reader.Execute())
# 2D image
elif len(image_size) == 2:
images.append(image_file_reader.Execute())
# 3D image grab middle x-z slice
elif len(image_size) == 3:
start_index = [0, image_size[1] // 2, 0]
image_size[1] = 0
image_file_reader.SetExtractSize(image_size)
image_file_reader.SetExtractIndex(start_index)
images.append(image_file_reader.Execute())
# 4/5D image
else:
        raise ValueError(f"{len(image_size)}D image not supported.")
# Notice that in the display the coronal slices are flipped. As we are
# using matplotlib for display, it is not aware of radiological conventions
# and treats the image as an isotropic array of pixels.
gui.multi_image_display2D(images);
def resize_and_scale_uint8(image, new_size, outside_pixel_value=0):
    """Resize the given image to the given size, with isotropic pixel spacing
    and scale the intensities to [0,255].
    Resizing retains the original aspect ratio, with the original image centered
    in the new image. Padding is added outside the original image extent using the
    provided value.
    :param image: A SimpleITK image.
    :param new_size: List of ints specifying the new image size.
    :param outside_pixel_value: Value in [0,255] used for padding.
    :return: a 2D SimpleITK image with desired size and a pixel type of sitkUInt8
    """
# Rescale intensities if scalar image with pixel type that isn't sitkUInt8.
# We rescale first, so that the zero padding makes sense for all original image
# ranges. If we resized first, a value of zero in a high dynamic range image may
# be somewhere in the middle of the intensity range and the outer border has a
# constant but arbitrary value.
if (
image.GetNumberOfComponentsPerPixel() == 1
and image.GetPixelID() != sitk.sitkUInt8
):
final_image = sitk.Cast(sitk.RescaleIntensity(image), sitk.sitkUInt8)
else:
final_image = image
new_spacing = [
((osz - 1) * ospc) / (nsz - 1)
for ospc, osz, nsz in zip(
final_image.GetSpacing(), final_image.GetSize(), new_size
)
]
new_spacing = [max(new_spacing)] * final_image.GetDimension()
center = final_image.TransformContinuousIndexToPhysicalPoint(
[sz / 2.0 for sz in final_image.GetSize()]
)
new_origin = [
c - c_index * nspc
for c, c_index, nspc in zip(center, [sz / 2.0 for sz in new_size], new_spacing)
]
final_image = sitk.Resample(
final_image,
size=new_size,
outputOrigin=new_origin,
outputSpacing=new_spacing,
defaultPixelValue=outside_pixel_value,
)
return final_image
# Select the arbitrary new size
new_size = [128, 128]
resized_images = [resize_and_scale_uint8(image, new_size, 50) for image in images]
gui.multi_image_display2D(resized_images);
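# --- Added illustration (not part of the original notebook) ---
# A resampling transform maps points from the OUTPUT grid back to the INPUT image,
# which is why a positive translation appears to move the resampled image the
# "wrong" way. A minimal check, reusing the sitk import from the cells above:
tx = sitk.TranslationTransform(2, (3.0, 4.0))
print(tx.TransformPoint((0.0, 0.0)))               # (3.0, 4.0): output pixel (0,0) samples the input at (3,4)
print(tx.GetInverse().TransformPoint((0.0, 0.0)))  # (-3.0, -4.0): where input point (0,0) lands in the output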
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating and Manipulating Transforms
Step2: Transforms are defined by two sets of parameters, the Parameters and FixedParameters. FixedParameters are not changed during the optimization process when performing registration. For the TranslationTransform, the Parameters are the values of the translation Offset.
Step3: The affine transform is capable of representing translations, rotations, shearing, and scaling.
Step4: A number of other transforms exist to represent non-affine deformations, well-behaved rotation in 3D, etc. See the Transforms tutorial for more information.
Step5: Create a grid image.
Step6: To apply the transform, a resampling operation is required.
Step7: What happened? The translation is positive in both directions. Why does the output image move down and to the left? It is important to keep in mind that a transform in a resampling operation defines the transform from the output space to the input space.
Step8: An affine (line-preserving) transformation can perform translation
Step9: or scaling
Step10: or rotation
Step11: or shearing
Step12: Composite Transform
Step13: can also be represented with two Transform objects applied in sequence with a Composite Transform,
Step14: Beware, transforms are non-commutative -- order matters!
Step15: Resampling
Step16: Common Errors
Step17: Are you puzzled by the result? Is the output just a copy of the input? Add a rotation to the code above and see what happens (euler2d.SetAngle(0.79)).
Step18: <font color="red">Homework
Step20: <font color="red">Homework
|
4,567
|
<ASSISTANT_TASK:>
Python Code:
import steps.interface
from steps.model import *
from steps.geom import *
from steps.sim import *
from steps.saving import *
from steps.rng import *
import numpy as np
import math
# Potassium conductance = 0.036 S/cm2
# Potassium single-channel conductance
K_G = 20.0e-12 # Siemens
# Potassium channel density
K_ro = 18.0e12 # per square meter
# Potassium reversal potential
K_rev = -77e-3 # volts
# Sodium conductance = 0.120 S/cm2
# Sodium single-channel conductance
Na_G = 20.0e-12 # Siemens
# Sodium channel density
Na_ro = 60.0e12 # per square meter
# Sodium reversal potential
Na_rev = 50e-3 # volts
# Leak single-channel conductance
L_G = 0.3e-12 # Siemens
# Leak density
L_ro = 10.0e12 # per square meter
# Leak reveral potential
leak_rev = -54.4e-3 # volts
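# --- Added sanity check (not part of the original script) ---
# The macroscopic conductances quoted in the comments above (0.036 and 0.120 S/cm2)
# should equal channel density times single-channel conductance; 1 S/cm2 = 1e4 S/m2.
assert abs(K_ro * K_G - 360.0) < 1e-6     # 0.036 S/cm2 = 360 S/m2
assert abs(Na_ro * Na_G - 1200.0) < 1e-6  # 0.120 S/cm2 = 1200 S/m2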
# A table of potassium channel population factors:
# n0, n1, n2, n3, n4
K_facs = [ 0.21768, 0.40513, 0.28093, 0.08647, 0.00979 ]
# A table of sodium channel population factors
# m0h0, m1h0, m2h0, m3h0, m0h1, m1h1, m2h1, m3h1:
Na_facs = [[0.34412, 0.05733, 0.00327, 6.0e-05],
[0.50558, 0.08504, 0.00449, 0.00010]]
# Temperature for gating kinetics
celsius = 20.0
# Current injection
Iclamp = 50.0e-12 # amps
# Voltage range for gating kinetics in Volts
Vrange = [-100.0e-3, 50e-3, 1e-4]
def HHRateFunction(A, B, C, D, F, H, V):
num = A + B * V
denom = C + H * math.exp((V + D) / F)
if num == denom == 0:
return F * B / (H * math.exp((V + D) / F))
else:
return num / denom
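# --- Added example (not part of the original script) ---
# At V = -40 mV the alpha_m rate hits 0/0; the L'Hopital branch above returns the limit.
print(HHRateFunction(-4, -0.1, -1, 40, -10, 1, -40.0))  # limit is 1.0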
# The simulation dt
DT_sim = 1.0e-4 # seconds
# The time until which the simulation should be run
ENDT = 4.0e-3
model = Model()
r = ReactionManager()
with model:
ssys = SurfaceSystem.Create()
# Potassium channel
Ko, Kc = SubUnitState.Create()
KSU = SubUnit.Create([Ko, Kc])
VGKC = Channel.Create([KSU]*4)
# Sodium channel
Na_mo, Na_mc, Na_hi, Na_ha = SubUnitState.Create()
NamSU, NahSU = SubUnit.Create(
[Na_mo, Na_mc],
[Na_hi, Na_ha]
)
VGNaC = Channel.Create([NamSU, NamSU, NamSU, NahSU])
# Leak channel
lsus = SubUnitState.Create()
Leak = Channel.Create([lsus])
thi = math.pow(3.0, ((celsius-6.3)/10.0))
_a_n = VDepRate(lambda V: thi * 1e3 * HHRateFunction(-0.55, -0.01, -1, 55, -10, 1, V*1e3), vrange=Vrange)
_b_n = VDepRate(lambda V: thi * 1e3 * HHRateFunction(1, 0, 0, 65, 80, 8, V*1e3), vrange=Vrange)
_a_m = VDepRate(lambda V: thi * 1e3 * HHRateFunction(-4, -0.1, -1, 40, -10, 1, V*1e3), vrange=Vrange)
_b_m = VDepRate(lambda V: thi * 1e3 * HHRateFunction(1, 0, 0, 65, 18, 0.25, V*1e3), vrange=Vrange)
_a_h = VDepRate(lambda V: thi * 1e3 * HHRateFunction(1, 0, 0, 65, 20, 1 / 0.07, V*1e3), vrange=Vrange)
_b_h = VDepRate(lambda V: thi * 1e3 * HHRateFunction(1, 0, 1, 35, -10, 1, V*1e3), vrange=Vrange)
with model:
with ssys:
with VGKC[...]:
Kc.s <r[1]> Ko.s
r[1].K = _a_n, _b_n
with VGNaC[...]:
Na_hi.s <r[1]> Na_ha.s
r[1].K = _a_h, _b_h
Na_mc.s <r[1]> Na_mo.s
r[1].K = _a_m, _b_m
with model:
with ssys:
VGKC_I = OhmicCurr.Create(VGKC[Ko, Ko, Ko, Ko], K_G, K_rev)
VGNaC_I = OhmicCurr.Create(VGNaC[Na_mo, Na_mo, Na_mo, Na_ha], Na_G, Na_rev)
Leak_I = OhmicCurr.Create(Leak[lsus], L_G, leak_rev)
mesh = TetMesh.LoadAbaqus('meshes/axon.inp', scale=1e-6)
with mesh:
facetris = TriList([tri for tri in mesh.tris if tri.center.z == mesh.bbox.min.z])
injverts = facetris.verts
memb_tris = mesh.surface - facetris
# The points along (z) axis at which to record potential
pot_pos = np.arange(mesh.bbox.min.z, mesh.bbox.max.z, 10e-6)
pot_tet = TetList(mesh.tets[0, 0, z] for z in pot_pos)
with mesh:
cyto = Compartment.Create(mesh.tets)
patch = Patch.Create(memb_tris, cyto, None, ssys)
# Create the membrane across which the potential will be solved
membrane = Membrane.Create([patch])
rng = RNG('mt19937', 512, 1234)
sim = Simulation('Tetexact', model, mesh, rng, True)
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
rs = ResultSelector(sim)
NaCurrs = rs.TRIS(memb_tris).VGNaC_I.I
KCurrs = rs.TRIS(memb_tris).VGKC_I.I
CellPot = rs.TETS(pot_tet).V
NaCurrs.metaData['trizpos'] = [tri.center.z for tri in memb_tris]
KCurrs.metaData['trizpos'] = [tri.center.z for tri in memb_tris]
NaCurrs.metaData['triarea'] = [tri.Area for tri in memb_tris]
KCurrs.metaData['triarea'] = [tri.Area for tri in memb_tris]
CellPot.metaData['tetzpos'] = pot_pos
NaCurrs.toFile('NaCurrs.dat')
KCurrs.toFile('KCurrs.dat')
CellPot.toFile('CellPot.dat')
sim.toSave(NaCurrs, KCurrs, CellPot, dt=DT_sim)
sim.newRun()
# Inject channels
surfarea = sim.patch.Area
for state in VGNaC:
prop = Na_facs[state.Count(Na_ha)][state.Count(Na_mo)]
sim.patch.VGNaC[state].Count = Na_ro * surfarea * prop
for state in VGKC:
prop = K_facs[state.Count(Ko)]
sim.patch.VGKC[state].Count = K_ro * surfarea * prop
sim.patch.Leak[lsus].Count = L_ro * surfarea
# Set dt for membrane potential calculation to 0.01ms
sim.EfieldDT = 1.0e-5
# Initialize potential to -65mV
sim.membrane.Potential = -65e-3
# Set capacitance of the membrane to 1 uF/cm^2 = 0.01 F/m^2
sim.membrane.Capac = 1.0e-2
# Set resistivity of the conduction volume to 100 ohm.cm = 1 ohm.meter
sim.membrane.VolRes = 1.0
# Set the current clamp
sim.VERTS(injverts).IClamp = Iclamp/len(injverts)
# Run the simulation
sim.run(ENDT)
sim = Simulation('TetODE', model, mesh, calcMembPot=True)
rs = ResultSelector(sim)
CellPot = rs.TETS(pot_tet).V
CellPot.metaData['tetzpos'] = pot_pos
CellPot.toFile('CellPotODE.dat')
sim.toSave(CellPot)
sim.newRun()
# Inject channels
surfarea = sim.patch.Area
for state in VGNaC:
prop = Na_facs[state.Count(Na_ha)][state.Count(Na_mo)]
sim.patch.VGNaC[state].Count = Na_ro * surfarea * prop
for state in VGKC:
prop = K_facs[state.Count(Ko)]
sim.patch.VGKC[state].Count = K_ro * surfarea * prop
sim.patch.Leak[lsus].Count = L_ro * surfarea
# Initialize potential to -65mV
sim.membrane.Potential = -65e-3
# Set capacitance of the membrane to 1 uF/cm^2 = 0.01 F/m^2
sim.membrane.Capac = 1.0e-2
# Set resistivity of the conduction volume to 100 ohm.cm = 1 ohm.meter
sim.membrane.VolRes = 1.0
# Set the current clamp
sim.VERTS(injverts).IClamp = Iclamp/len(injverts)
sim.setTolerances(1e-3, 1e-4)
# Run the simulation
EFDt = 1e-5
for i in range(int(ENDT // EFDt) + 1):
sim.run(i * EFDt)
if i % 10 == 0:
CellPot.save()
%reset -f
import steps.interface
from steps.saving import *
from matplotlib import pyplot as plt
import numpy as np
def plotPotential(CellPot, tidx):
plt.plot(
CellPot.metaData['tetzpos'] * 1e6,
CellPot.data[0, tidx, :] * 1e3,
label=f'{CellPot.time[0, tidx]*1e3} ms'
)
def plotCurrents(NaCurrs, KCurrs, tidx, nbins=100):
for results, currName in zip([NaCurrs, KCurrs], ['Na', 'K']):
data = results.data[0, tidx, :] * 1e12
pos = results.metaData['trizpos'] * 1e6
areas = results.metaData['triarea'] * 1e12
bins = np.histogram_bin_edges(pos, nbins)
dig = np.digitize(pos, bins)
# Ignore empty bins
with np.errstate(invalid='ignore'):
meanData = np.bincount(dig, weights=data) / np.bincount(dig, weights=areas)
meanPos = np.bincount(dig, weights=pos) / np.bincount(dig)
plt.plot(meanPos, meanData, label=f'{currName} {results.time[0, tidx]*1e3} ms')
CellPot = ResultSelector.FromFile('CellPot.dat')
plt.figure(figsize=(10, 7))
plotPotential(CellPot, 10)
plotPotential(CellPot, 20)
plotPotential(CellPot, 30)
plt.xlabel('Z-axis (um)')
plt.ylabel('Membrane potential (mV)')
plt.legend()
plt.show()
NaCurrs = ResultSelector.FromFile('NaCurrs.dat')
KCurrs = ResultSelector.FromFile('KCurrs.dat')
plt.figure(figsize=(10, 7))
plotCurrents(NaCurrs, KCurrs, 10)
plotCurrents(NaCurrs, KCurrs, 20)
plotCurrents(NaCurrs, KCurrs, 30)
plt.xlabel('Z-axis (um)')
plt.ylabel('Current (pA/um^2)')
plt.legend()
plt.show()
CellPotODE = ResultSelector.FromFile('CellPotODE.dat')
plt.figure(figsize=(10, 7))
plotPotential(CellPotODE, 10)
plotPotential(CellPotODE, 20)
plotPotential(CellPotODE, 30)
plt.xlabel('Z-axis (um)')
plt.ylabel('Membrane potential (mV)')
plt.legend()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next we define some parameters for the simulation, which are intended to remain constant throughout the script. We start with the potassium channel and define the single-channel conductance, channel density and reversal potential, keeping to a conductance to 0.036 S/cm2.
Step2: The first thing to note is that, as usual in STEPS, units are s.i., which means in the above example the single channel conductance is given in Siemens and the reversal potential for the ohmic current is in volts.
Step3: The HH model also includes a leak conductance, which may also be discretised. The overall conductance is
Step4: The next parameters require a little explanation. Taking the potassium conductance as an example, the
Step5: We now define some more important parameters for our simulation. The first is temperature assumed for
Step6: Next we will declare a function that computes the HH transition rates in a standardized form. Each of the equations shown above can be modified to fit the following generic form (see Nelson ME (2005) Electrophysiological Models In
Step7: Note that, under certain conditions, both the numerator and the denominator can go to zero for some specific membrane potential. In this case, we use L'Hôpital's rule to return a meaningful value.
Step8: Model declaration
Step9: The leak channel can only be in a single state but still needs to have an associated SubUnitState. It is however possible to skip declaring a SubUnit by giving the SubUnitState directly to the Channel constructor. Note that we declared a surface system since voltage dependent reactions are going to happen on the cell membrane.
Step10: We first declare voltage dependent rates with the VDepRate class. Its constructor takes a function as first parameter and an optional voltage range (with the vrange parameter). The function must take a voltage in Volts as a parameter and return a reaction rate constant in S.I. units. Here the rates correspond to the classical HH rates.
Step11: Note that if we decided to declare n0, n1, etc. states explicitely, we could have declared reactions in the following way
Step12: As this guide uses jupyter notebook, we need to rewrite both context managers with model
Step13: We then compute some element lists that will be used later
Step14: We first create a list (facetris) of the triangles that are on one of the ends of the axon (that is oriented along the z-axis) and extract the corresponding vertices with facetris.verts. We will use these vertices to inject current in the axon. The triangles in the membrane are all triangles in the mesh surface except the ones that are on the injection face. Finally, we declare a list of regularly spaced tetrahedrons along the axon that we will record potential from.
Step15: The last line creates a new and very important object for the membrane potential calculation, the 'membrane' itself. The membrane class, steps.geom.Membrane, simply consists of one or more patch objects which must together form one continuous surface, although the membrane may be 'open' or 'closed' ('closed' means all member triangles are directly connected to 3 other membrane triangles and so form a closed surface, and 'open' means some triangles have fewer than 3 neighbours and so the surface contains holes). Any channels that exist in the patch(es) that comprise(s) the membrane are available to conduct a current (specified by steps.model.OhmicCurr or steps.model.GHKCurr objects). The INNER compartment(s) to the membrane patches will comprise the 'conduction volume' representing the intracellular region. The potential at all vertices in the membrane and conduction volume will be calculated and will vary with any channel, capacitive or externally applied currents, relative to the (earthed) extracellular region.
Step16: Note that we pass an additional boolean flag to the Simulation constructor, it specifies whether membrane potential calculations should be performed. This flag defaults to False so we could omit it in previous chapters, but since we do need membrane potential computations, we need to set it to True.
Step17: Note that we set the count of channels by using the Na_facs and K_facs arrays that contain the fractions of channels that should be in each state. K_facs is sorted such that the first element corresponds to the state in which all subunits are in the closed state and the last element corresponds to all subunits in open state. Na_facs contains two arrays, the first one corresponding to the Na_hSU subunit in state Na_hi and the second one corresponds to state Na_ha. The two arrays are organized like K_facs but for the Na_mSU subunit.
Step18: The first line sets the time-step period for the potential calculation, specified in seconds. This tells STEPS how often to perform the 'E-Field' calculation to evaluate potential, and update any voltage-dependent processes in the simulation. The optimal value for this time-step will vary for different simulations, so some things should be kept in mind when making the choice. Firstly, the time-step should be short enough that the voltage change occurring during each time-step is small and voltage can be assumed constant during each time-step for any voltage-dependent processes in the model. A large time-step may result in loss of accuracy. Secondly, the shorter the time-step the slower the simulation will be. Thirdly, the time-step must be shorter or equal to the simulation time-step (this is 0.1ms in our model) so that at least one membrane potential calculation can be carried out per simulation time-step. As a rough guide 0.01ms is usually highly accurate, and it is not recommended to exceed 0.1ms. So for this simulation we choose a calculation time-step of 0.01ms (which happens to be the default value).
Step19: We then run the simulation with
Step20: Simulation with TetODE
Step21: It is unfortunately not possible to record information about the spatial currents in TetODE (see available simulation paths per solver), so we remove anything to do with recording the Na+ and K+ currents, which only leaves electrical potential
Step22: Note that, this time, we do not supply a dt argument to sim.toSave(...). Instead, for reasons we will explain later, we will manually tell STEPS when the data should be saved.
Step23: There is one important additions to the TetODE solver, the steps.sim.Simulation.setTolerances method.
Step24: The first function argument is absolute tolerance, the second is relative tolerance. In this example we set an absolute tolerance of $10^{-3}$ and relative tolerance of $10^{-4}$.
Step25: In Tetexact simulations, we saved data every 0.1 ms. To avoid potential issues when mixing automatic data saving and manual control of simulation steps, we explicitely save the data from CellPot every 10 iterations of the loop, i.e. every 0.1 ms. This is done by calling CellPot.save().
Step26: Plotting the results
Step27: The first argument will be the loaded ResultSelector. We then define another function to plot both K+ and Na+ current densities through the membrane along the axon
Step28: Since we recorded currents for each membrane triangle, we first need to bin the currents along the axon.
Step29: We then plot the Na+ and K+ current densities along the axon for the same time steps
Step30: TetODE Simulation
|
4,568
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import scipy.stats as st
from scipy.stats import norm
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(color_codes=True)
from IPython.core.display import HTML
css = open('style-table.css').read() + open('style-notebook.css').read()
HTML('<style>{}</style>'.format(css))
bodytemp_df = pd.read_csv('data/human_body_temperature.csv')
bodytemp_df
sns.distplot(bodytemp_df.temperature, bins = 25)
st.normaltest(bodytemp_df['temperature'])
hyp_mean = 98.6
sample_meantemp = bodytemp_df['temperature'].mean()
sample_std = bodytemp_df['temperature'].std()
print('The sample mean is : ' , bodytemp_df['temperature'].mean(), ' degrees Farenheit')
print('The sample standard deviation is : ' , bodytemp_df['temperature'].std(), ' degrees Farenheit')
sem_temp = sample_std/np.sqrt(len(bodytemp_df))
sem_temp
sample_std/np.sqrt(130)
z_score = (sample_meantemp - hyp_mean) / (sem_temp)
z_score
p_value=st.norm.cdf(z_score)
p_value
new_hyp = 98.2
z_score_new = (sample_meantemp - new_hyp)/ (sem_temp)
print(z_score_new)
p_value_new = 1-st.norm.cdf(z_score_new)
p_value_new
z_critical = st.norm.ppf(.975)
conf_int = z_critical*sem_temp
print ('margin of error: ', conf_int)
print('upper limit of normal: ', sample_meantemp + conf_int)
print('lower limit of normal: ', sample_meantemp - conf_int)
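# --- Added cross-check (not part of the original analysis) ---
# scipy's one-sample t-test should agree closely with the hand-rolled z-test above,
# since n = 130 is large.
t_stat, p_two_sided = st.ttest_1samp(bodytemp_df['temperature'], hyp_mean)
print('t = {:.2f}, two-sided p = {:.2e}'.format(t_stat, p_two_sided))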
female_df = bodytemp_df[bodytemp_df.gender == 'F'].copy()
male_df = bodytemp_df[bodytemp_df.gender == 'M'].copy()
male_mean = male_df['temperature'].mean()
print('Male mean is: ', male_mean)
female_mean = female_df['temperature'].mean()
print('Female mean is: ', female_mean)
male_std = male_df['temperature'].std()
print('Male standard deviation is: ', male_std)
female_std = female_df['temperature'].std()
print('Female standard deviation is: ', female_std)
difference_mean = female_mean - male_mean
print('Mean difference between two populations: ', difference_mean)
difference_sem = np.sqrt(male_std**2/len(male_df) + female_std**2/len(female_df))
print( 'Standard error of the mean: ', difference_sem)
z_score_diff = (difference_mean - 0)/difference_sem
z_score_diff
p_value_diff = 1-st.norm.cdf(z_score_diff)
p_value_diff
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: So, we see that we have 130 data points to work with. First, we want to take a look at the overall distribution.
Step2: We see that our sample distribution does look like a normal distribution, albeit slightly left skewed. Nonetheless, we feel that it is reasonable to assume the CLT holds for this data. We see from our normaltest that the p-value returned is quite high, 25%. So we cannot reject the null hypothesis of this sample coming from a normal distribution. Thus both a visual inspection and a more rigorous computational one let us conclude that the population is normally distributed in this case.
Step3: Since we have a 'large' sample size, we can estimate the population standard deviation, and population mean as being equal to the sample standard deviation and the sample mean. We use the sample standard deviation and sample size to obtain our best estimate of the standard error of the mean.
Step4: Thus the hypothesis that the true population mean is 98.6 degrees seems decidedly unlikely given our sample. We are confident that the probability of finding a sample mean at least this low (more than 5 standard errors below the hypothesized mean!) is only $2.45\times10^{-8}$. Thus we choose to reject the original hypothesis that the population mean is 98.6 degrees, based on the available data. Instead we shall accept the new value of 98.2, which, as seen above, is well within one standard deviation of the sample mean.
Step5: So if human body temperature is outside of the range given above, then we are reasonably sure that the temperature is abnormal, as our range should encompass 95% of the population.
Step6: Now that we have our population mean difference, as well as the standard error of this mean, we can go ahead and look up p-values for our data, and compare it to some threshold. Let's use the standard 5% threshold.
|
4,569
|
<ASSISTANT_TASK:>
Python Code:
crime_rate_data = graphlab.SFrame.read_csv('Philadelphia_Crime_Rate_noNA.csv')
crime_rate_data
graphlab.canvas.set_target('ipynb')
crime_rate_data.show(view='Scatter Plot', x = "CrimeRate", y = "HousePrice")
crime_model = graphlab.linear_regression.create(crime_rate_data,
target = 'HousePrice',
features = ['CrimeRate'],
validation_set = None,
verbose = False)
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(crime_rate_data['CrimeRate'], crime_rate_data['HousePrice'],
'.', crime_rate_data['CrimeRate'],
crime_model.predict(crime_rate_data), '-')
crime_rate_data_noCC = crime_rate_data[crime_rate_data['MilesPhila'] != 0.0]
crime_rate_data_noCC.show(view='Scatter Plot', x = "CrimeRate", y = "HousePrice")
crime_model_withNoCC = graphlab.linear_regression.create(crime_rate_data_noCC,
target = 'HousePrice',
features = ['CrimeRate'],
validation_set = None,
verbose = False)
plt.plot(crime_rate_data_noCC['CrimeRate'], crime_rate_data_noCC['HousePrice'], '.',
crime_rate_data_noCC['CrimeRate'], crime_model_withNoCC.predict(crime_rate_data_noCC), '-')
crime_model.get('coefficients')
crime_model_withNoCC.get('coefficients')
crime_rate_data_noHighEnd = crime_rate_data_noCC[crime_rate_data_noCC['HousePrice'] < 350000]
crime_model_noHighEnd = graphlab.linear_regression.create(crime_rate_data_noHighEnd,
target = 'HousePrice',
features = ['CrimeRate'],
validation_set = None,
verbose = False)
crime_model_withNoCC.get('coefficients')
crime_model_noHighEnd.get('coefficients')
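# --- Added comparison (not in the original notebook; assumes the same GraphLab API used above) ---
# Predicted house price at a crime rate of 30 under each of the three fits,
# to make the effect of the outliers on the slope concrete.
sample = graphlab.SFrame({'CrimeRate': [30.0]})
[m.predict(sample)[0] for m in (crime_model, crime_model_withNoCC, crime_model_noHighEnd)]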
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fit the regression model using crime rate as the feature
Step2: Look at the fit of the (initial) model
Step3: We can see that there is an outlier in the data where the crime rate is high yet the house price is also high, so it does not follow the trend. This point is the center of the city (the Center City data point)
Step4: Notice the difference in the previous scatter plot and this one after removing the outlier (city center)
Step5: Look at the fit of the model with outlier removed
Step6: Compare coefficients for full data fit Vs. data with CenterCity removed
Step7: Remove high-value outlier neighborhoods and redo analysis
Step8: How much do the coefficients change?
|
4,570
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
%matplotlib inline
import numpy as np
from sklearn.linear_model import LogisticRegression
df = pd.read_csv("../data/hanford.csv")
df.describe()
df['Mortality'].hist(bins=5)
df['Mortality'].mean()
df['Mort_high'] = df['Mortality'].apply(lambda x:1 if x>=147.1 else 0)
df['Expo_high'] = df['Exposure'].apply(lambda x:1 if x>=3.41 else 0)
def exposure_high(x):
if x >= 3.41:
return 1
else:
return 0
df
lm = LogisticRegression()
x = np.asarray(df[['Exposure']])
y = np.asarray(df['Mort_high'])
lm = lm.fit(x,y)
lm.predict([50])
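# --- Added example (not part of the original notebook) ---
# The class probability is often more informative than the hard 0/1 label;
# this mirrors the predict() call above.
lm.predict_proba([50])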
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Read in the hanford.csv file in the data/ folder
Step2: <img src="../../images/hanford_variables.png"></img>
Step3: 4. Find a reasonable threshold to say exposure is high and recode the data
Step4: 5. Create a logistic regression model
Step5: 6. Predict whether the mortality rate (Cancer per 100,000 man years) will be high at an exposure level of 50
|
4,571
|
<ASSISTANT_TASK:>
Python Code:
from gvanim import Animation
from gvanim.jupyter import interactive
ga = Animation()
heap = [ None, 5, 6, 7, 8, 9, 10, 11, 12 ]
ga.label_node( 1, heap[ 1 ] )
for i in range( 2, len( heap ) ):
ga.label_node( i, heap[ i ] )
ga.add_edge( i // 2, i )
def down_heap( i, n ):
t = heap[ i ]
while i <= n // 2:
ga.highlight_node( i )
ga.next_step()
j = 2 * i
if j < n and heap[ j ] < heap[ j + 1 ]: j += 1
ga.highlight_edge( i, j )
ga.next_step()
if t >= heap[ j ]: break
heap[ i ] = heap[ j ]
ga.highlight_node( i )
ga.highlight_node( j )
ga.label_node( i, heap[ i ] )
ga.label_node( j, heap[ j ] )
ga.next_step()
i = j
heap[ i ] = t
ga.highlight_node( i )
ga.label_node( i, heap[ i ] )
ga.next_step()
n = len( heap ) - 1
ga.next_step()
for i in range( n // 2, 0, -1 ):
down_heap( i, n )
ga.next_step()
while n > 1:
heap[ 1 ], heap[ n ] = heap[ n ], heap[ 1 ]
ga.highlight_node( 1 )
ga.highlight_node( n )
ga.label_node( 1, heap[ 1 ] )
ga.label_node( n, heap[ n ] )
ga.next_step()
n -= 1
down_heap( 1, n )
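# --- Added check (not part of the original example) ---
# After the loop above (heapsort), heap[1:] should be in non-decreasing order.
assert heap[1:] == sorted(heap[1:])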
interactive( ga, 400 )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define a heap
Step2: Now draw it (nodes will be named as the array indices and labelled as the array values)
Step3: Define the usual iterative down heap procedure (endowed with animation actions)
Step4: Fix the heap by calling down_heap on its lower half
Step5: And finally exchange the top with the heap positions, starting from the last one (fixing again with down_heap)
Step6: We are ready to plot the animation interactively!
|
4,572
|
<ASSISTANT_TASK:>
Python Code:
with open('../pipeline/data/Day90ApartmentData.json') as g:
my_dict2 = json.load(g)
dframe = DataFrame(my_dict2)
dframe = dframe.T
dframe = dframe[['content', 'laundry', 'price', 'dog', 'bed',
'bath', 'feet', 'long', 'parking', 'lat', 'smoking', 'getphotos',
'cat', 'hasmap', 'wheelchair', 'housingtype']]
dframe = dframe.dropna()
dframe.describe()
dframe2 = DataFrame(my_dict2)
dframe2 = dframe2.T
dframe2 = dframe2[['content', 'laundry', 'price', 'dog', 'bed',
'bath', 'feet', 'long', 'parking', 'lat', 'smoking', 'getphotos',
'cat', 'hasmap', 'wheelchair', 'housingtype']]
dframe2.describe()
dframe = pd.get_dummies(dframe2, columns = ['laundry', 'parking', 'smoking', 'wheelchair', 'housingtype'])
pd.set_option('display.max_columns', 500)
dframe = df
from sklearn.cross_validation import train_test_split, ShuffleSplit
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import accuracy_score as acc
dframe.describe(include='all')
from sklearn.cross_validation import train_test_split
# X_train, X_test, y_train, y_test = train_test_split(
# dframe.drop('price', axis = 1), dframe.price, test_size=0.33)
# print X_train.shape
# print y_train.shape
from sklearn.ensemble import RandomForestRegressor
reg = RandomForestRegressor()
reg.fit(X_train, y_train)
print reg.score(X_test, y_test)
scores = []
for thous in range(1000,len(dframe),1000):
temp_dframe = dframe[:thous]
X_train, X_test, y_train, y_test = train_test_split(
temp_dframe.drop('price', axis = 1), temp_dframe.price, test_size=0.33)
reg = RandomForestRegressor()
reg.fit(X_train,y_train)
pred = reg.predict(X_test)
# pred = [float(x) for x in pred]
# y_test = [float(x) for x in y_test]
score = acc(pred, np.array(y_test))
scores.append(score)
scores
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
plt.plot(range(1000,len(dframe),1000),scores)
def listing_cleaner(entry):
print entry
df = pd.DataFrame({'A': ['a', 'b', 'a'], 'B': ['b', 'a', 'c'],
'C': [1, 2, 3]})
df
pd.get_dummies(df, columns=['A','C'])
listing_cleaner(my_dict2['5465197037'])
type(dframe['bath']['5399866740'])
dframe.bath = dframe.bath.replace('shared',0.5)
dframe.bath = dframe.bath.replace('split',0.5)
dframe.smoking = dframe.smoking.replace(np.nan, 'smoking')
dframe.furnished = dframe.furnished.replace(np.nan,'not furnished')
dframe.wheelchair = dframe.wheelchair.replace(np.nan, 'not wheelchair accessible')
dframe.describe()
dframe.bed.unique()
from sklearn.preprocessing import Imputer, LabelEncoder
def meanimputer(column):
imp = Imputer(missing_values='NaN', strategy='mean', axis=1)
imp.fit(column)
X = imp.transform(column)
return X[0]
arr = np.array([np.nan, 'house', 'boat', 'houseboat', 'house', np.nan, 'house','houseboat'])
prac_df = DataFrame()
prac_df['arr'] = arr
prac_df['arr']
modeimputer(prac_df['arr'])
def modeimputer(column):
le = LabelEncoder()
column = le.fit_transform(column)
print le.classes_
print type(le.classes_[0])
print column
nan = le.transform([np.nan])[0]
print nan
print type(column)
column = list(column)
for _,i in enumerate(column):
if i == nan:
column[_] = np.nan
imp = Imputer(missing_values='NaN', strategy='most_frequent', axis=1)
imp.fit(column)
X = imp.transform(column)
for _,i in enumerate(X[0]):
if np.isnan(i):
X[_] = 0
X = X.astype(int)
Y = le.inverse_transform(X)
return Y
import pandas as pd
import numpy as np
from sklearn.base import TransformerMixin
class ModeImputer(TransformerMixin):
def __init__(self):
        """Impute missing values.
        Columns of dtype object are imputed with the most frequent value
        in column.
        Columns of other types are imputed with mean of column.
        Credit:http://stackoverflow.com/questions/25239958/
        impute-categorical-missing-values-in-scikit-learn
        """
def fit(self, X, y=None):
self.fill = pd.Series([X[c].value_counts().index[0]
if X[c].dtype == np.dtype('O') else X[c].mean() for c in X],
index=X.columns)
return self
def transform(self, X, y=None):
return X.fillna(self.fill)
data = [
['a', 1, 2],
['b', 1, 1],
['b', 2, 2],
[np.nan, np.nan, np.nan]
]
X = pd.DataFrame(data)
xt = ModeImputer().fit_transform(X)
print('before...')
print(X)
print('after...')
print(xt)
dframe = ModeImputer().fit_transform(dframe)
dframe.head()
dframe.describe(include = 'all')
dframe.bed.mean()
dframe.parking.unique()
u_dframe = DataFrame()
dframe['bath'] = meanimputer(dframe['bath'])
dframe['bed'] = meanimputer(dframe['bed'])
dframe['feet'] = meanimputer(dframe['feet'])
dframe['lat'] = meanimputer(dframe['lat'])
dframe['long'] = meanimputer(dframe['long'])
dframe.head()
dframe.describe(include='all')
data = dframe[dframe.lat > 45.4][dframe.lat < 45.6][dframe.long < -122.0][dframe.long > -123.5]
plt.figure(figsize=(15,10))
plt.scatter(data = data, x = 'long',y='lat')
XYdf = dframe[dframe.lat > 45.4][dframe.lat < 45.6][dframe.long < -122.0][dframe.long > -123.5]
data = [[XYdf['lat'][i],XYdf['long'][i]] for i in XYdf.index]
from sklearn.cluster import KMeans
km = KMeans(n_clusters=40)
km.fit(data)
neighborhoods = km.cluster_centers_
%pylab inline
figure(1,figsize=(20,12))
plot([row[1] for row in data],[row[0] for row in data],'b.')
for i in km.cluster_centers_:
plot(i[1],i[0], 'g*',ms=25)
'''Note to Riley: come back and make it look pretty'''
neighborhoods = neighborhoods.tolist()
for i in enumerate(neighborhoods):
i[1].append(i[0])
print neighborhoods
def clusterer(X, Y,neighborhoods):
neighbors = []
for i in neighborhoods:
distance = ((i[0]-X)**2 + (i[1]-Y)**2)
neighbors.append(distance)
closest = min(neighbors)
return neighbors.index(closest)
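# Note (added): clusterer() is a brute-force nearest-centroid lookup; since the
# neighborhoods list preserves the order of km.cluster_centers_, km.predict([[lat, long]])
# should return the same neighborhood label.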
neighborhoodlist = []
for i in dframe.index:
neighborhoodlist.append(clusterer(dframe['lat'][i],dframe['long'][i],neighborhoods))
dframe['neighborhood'] = neighborhoodlist
dframe
from sklearn import preprocessing
def CategoricalToBinary(dframe,column_name):
le = preprocessing.LabelEncoder()
listy = le.fit_transform(dframe[column_name])
dframe[column_name] = listy
unique = dframe[column_name].unique()
serieslist = [list() for _ in xrange(len(unique))]
for column, _ in enumerate(serieslist):
for i, item in enumerate(dframe[column_name]):
if item == column:
serieslist[column].append(1)
else:
serieslist[column].append(0)
dframe[column_name+str(column)] = serieslist[column]
return dframe
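# Note (added): CategoricalToBinary hand-rolls one-hot encoding; pd.get_dummies(dframe,
# columns=[column_name]), as used earlier in this notebook, produces equivalent dummy columns in one call.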
pd.set_option('max_columns', 100)
dframe = CategoricalToBinary(dframe,'housingtype')
dframe = CategoricalToBinary(dframe,'parking')
dframe = CategoricalToBinary(dframe,'laundry')
dframe = CategoricalToBinary(dframe,'smoking')
dframe = CategoricalToBinary(dframe,'wheelchair')
dframe = CategoricalToBinary(dframe,'neighborhood')
dframe
dframe = dframe.drop('date',1)
dframe = dframe.drop('housingtype',1)
dframe = dframe.drop('parking',1)
dframe = dframe.drop('laundry',1)
dframe = dframe.drop('smoking',1)
dframe = dframe.drop('wheelchair',1)
dframe = dframe.drop('neighborhood',1)
dframe = dframe.drop('time',1)
columns=list(dframe.columns)
from __future__ import division
print len(dframe)
df2 = dframe[dframe.price < 10000][columns].dropna()
print len(df2)
print len(df2)/len(dframe)
price = df2[['price']].values
columns.pop(columns.index('price'))
features = df2[columns].values
from sklearn.cross_validation import train_test_split
features_train, features_test, price_train, price_test = train_test_split(features, price, test_size=0.1, random_state=42)
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
reg = RandomForestRegressor()
reg = reg.fit(features_train, price_train)
forest_pred = reg.predict(features_test)
forest_pred = np.array([[item] for item in forest_pred])
print r2_score(forest_pred, price_test)
plt.scatter(forest_pred,price_test)
df2['predictions'] = reg.predict(df2[columns])
df2['predictions_diff'] = df2['predictions']-df2['price']
sd = np.std(df2['predictions_diff'])
sns.kdeplot(df2['predictions_diff'][df2['predictions_diff']>-150][df2['predictions_diff']<150])
sns.plt.xlim(-150,150)
data = df2[dframe.lat > 45.45][df2.lat < 45.6][df2.long < -122.4][df2.long > -122.8][df2['predictions_diff']>-150][df2['predictions_diff']<150]
plt.figure(figsize=(15,10))
plt.scatter(data = data, x = 'long',y='lat', c = 'predictions_diff',s=10,cmap='coolwarm')
dframe
print np.mean([1,2,34,np.nan])
def averager(dframe):
dframe = dframe.T
dframe.dropna()
averages = {}
for listing in dframe:
try:
key = str(dframe[listing]['bed'])+','+str(dframe[listing]['bath'])+','+str(dframe[listing]['neighborhood'])+','+str(dframe[listing]['feet']-dframe[listing]['feet']%50)
if key not in averages:
averages[key] = {'average_list':[dframe[listing]['price']], 'average':0}
elif key in averages:
averages[key]['average_list'].append(dframe[listing]['price'])
except TypeError:
continue
for entry in averages:
averages[entry]['average'] = np.mean(averages[entry]['average_list'])
return averages
averages = averager(dframe)
print averages
dframe['averages']= averages[str(dframe['bed'])+','+str(dframe['bath'])+','+str(dframe['neighborhood'])+','+str(dframe['feet']-dframe['feet']%50)]
dframe.T
reg = RandomForestRegressor(n_estimators = 100)
reg = reg.fit(features_train, price_train)
forest_pred = reg.predict(features_test)
forest_pred = np.array([[item] for item in forest_pred])
print r2_score(forest_pred, price_test)
print plt.scatter(pred,price_test)
from sklearn.tree import DecisionTreeRegressor
reg = DecisionTreeRegressor(max_depth = 5)
reg.fit(features_train, price_train)
print len(features_train[0])
columns = [str(x) for x in columns]
print columns
from sklearn.tree import export_graphviz
export_graphviz(reg,feature_names=columns)
def neighborhood_optimizer(dframe,neighborhood_number_range, counter_num):
XYdf = dframe[dframe.lat > 45.4][dframe.lat < 45.6][dframe.long < -122.0][dframe.long > -123.5]
data = [[XYdf['lat'][i],XYdf['long'][i]] for i in XYdf.index]
r2_dict = []
for i in neighborhood_number_range:
counter = counter_num
average_accuracy_list = []
while counter > 0:
km = KMeans(n_clusters=i)
km.fit(data)
neighborhoods = km.cluster_centers_
neighborhoods = neighborhoods.tolist()
for x in enumerate(neighborhoods):
x[1].append(x[0])
neighborhoodlist = []
for z in dframe.index:
neighborhoodlist.append(clusterer(dframe['lat'][z],dframe['long'][z],neighborhoods))
dframecopy = dframe.copy()
dframecopy['neighborhood'] = Series((neighborhoodlist), index=dframe.index)
df2 = dframecopy[dframe.price < 10000][['bath','bed','feet','dog','cat','content','getphotos', 'hasmap', 'price','neighborhood']].dropna()
features = df2[['bath','bed','feet','dog','cat','content','getphotos', 'hasmap', 'neighborhood']].values
price = df2[['price']].values
features_train, features_test, price_train, price_test = train_test_split(features, price, test_size=0.1)
reg = RandomForestRegressor()
reg = reg.fit(features_train, price_train)
forest_pred = reg.predict(features_test)
forest_pred = np.array([[item] for item in forest_pred])
counter -= 1
average_accuracy_list.append(r2_score(forest_pred, price_test))
total = 0
for entry in average_accuracy_list:
total += entry
r2_accuracy = total/len(average_accuracy_list)
r2_dict.append((i,r2_accuracy))
print r2_dict
return r2_dict
neighborhood_number_range = [i for _,i in enumerate(range(2,31,2))]
neighborhood_number_range
r2_dict = neighborhood_optimizer(dframe,neighborhood_number_range,10)
r2_dict[:][0]
plt.scatter([x[0] for x in r2_dict],[x[1] for x in r2_dict])
neighborhood_number_range = [i for _,i in enumerate(range(7,15))]
neighborhood_number_range
r2_dict = neighborhood_optimizer(dframe,neighborhood_number_range,10)
print r2_dict
plt.scatter([x[0] for x in r2_dict],[x[1] for x in r2_dict])
r2_dict = neighborhood_optimizer(dframe,[10,11,12],25)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Clean up the data a bit
Step3: It looks like Portland!!!
Step4: We'll use K Means Clustering because that's the clustering method I recently learned in class! There may be others that work better, but this is the tool that I know
Step5: We chose our neighborhoods!
Step6: Create a function that will label each point with a number corresponding to its neighborhood
Step7: Here's the new Part. We're breaking out the neighborhood values into their own columns. Now the algorithms can read them as categorical data rather than continuous data.
Step8: Ok, lets put it through Decision Tree!
Step9: Wow! up to .87! That's our best yet! What if we add more trees???
Step10: Up to .88!
Step11: Looks like the optimum is right around 10 or 11, and then starts to drop off. Let's get a little more granular and look at a smaller range
Step12: Trying a few times, it looks like 10, 11 and 12 get the best results at ~.85. Of course, we'll need to redo some of these optimizations after we properly process our data. Hopefully we'll see some more consistency then too.
|
4,573
|
<ASSISTANT_TASK:>
Python Code:
X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
plt.scatter(X[:,0], X[:,1], s=100)
plt.xlim(-4,4)
plt.ylim(-3,3)
plt.title("original data")
plt.show()
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(X)
Z = pca.transform(X)
Z
w, V = np.linalg.eig(pca.get_covariance())
V.T.dot(X.T).T # This applies the eigenvector matrix V to the data; the signs may be flipped relative to the PCA result, since eigenvectors are only determined up to sign.
plt.scatter(Z[:,0], Z[:,1], c='r', s=100)
plt.xlim(-4,4)
plt.ylim(-3,3)
plt.title("transformed data")
plt.show()
plt.scatter(Z[:,0], np.zeros_like(Z[:,1]), c='g', s=100)
plt.xlim(-4,4)
plt.ylim(-3,3)
plt.title("transformed and truncated data")
plt.show()
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data[:,2:]
plt.scatter(X[:, 0], X[:, 1], c=iris.target, s=200, cmap=plt.cm.jet);
X2 = PCA(2).fit_transform(X) # What does the 2 mean here? Keep 2 components. The data is already 2-D, so this only rotates it; after that
# we really only need to look at x1, and x2 can be ignored.
plt.scatter(X2[:, 0], X2[:, 1], c=iris.target, s=200, cmap=plt.cm.jet)
plt.xlim(-6, 6)
plt.show()
X1 = PCA(1).fit_transform(X)
sns.distplot(X1[iris.target==0], color="b", bins=20, rug=True, kde=False)
sns.distplot(X1[iris.target==1], color="g", bins=20, rug=True, kde=False)
sns.distplot(X1[iris.target==2], color="r", bins=20, rug=True, kde=False)
plt.xlim(-6, 6)
plt.show()
X3 = PCA(2).fit_transform(iris.data) # Here all 4 features go in, and only 2 components are kept.
plt.scatter(X3[:, 0], X3[:, 1], c=iris.target, s=200, cmap=plt.cm.jet);
# There were originally 4 features; they are combined into a new x1 and a new x2, and with that combination the classes separate very clearly.
X4 = PCA(3).fit_transform(iris.data)
from mpl_toolkits.mplot3d import Axes3D
def plot_pca(azim):
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d');
ax.scatter(X4[:,0], X4[:,1], X4[:,2], c=iris.target, s=100, cmap=plt.cm.jet, alpha=1);
ax.view_init(20, azim)
plot_pca(-60)
from ipywidgets import widgets
widgets.interact(plot_pca, azim=widgets.IntSlider(min=0, max=180, step=10, value=0));
from sklearn.datasets import load_digits
digits = load_digits()
X_digits, y_digits = digits.data, digits.target
N=2; M=5;
fig = plt.figure(figsize=(10, 4))
plt.subplots_adjust(top=1, bottom=0, hspace=0, wspace=0.05)
for i in range(N):
for j in range(M):
k = i*M+j
ax = fig.add_subplot(N, M, k+1)
ax.imshow(digits.images[k], cmap=plt.cm.bone, interpolation="none");
ax.grid(False)
ax.xaxis.set_ticks([])
ax.yaxis.set_ticks([])
plt.title(digits.target_names[k])
from sklearn.decomposition import PCA
pca = PCA(n_components=10)
X_pca = pca.fit_transform(X_digits)
print(X_digits.shape)
print(X_pca.shape)
plt.scatter(X_pca[:,0], X_pca[:,1], c=y_digits, s=100, cmap=plt.cm.jet)
plt.axis("equal")
plt.show()
from mpl_toolkits.mplot3d import Axes3D
def plot_pca2(azim):
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111, projection='3d');
ax.scatter(X_pca[:,0], X_pca[:,1], X_pca[:,2], c=y_digits, s=100, cmap=plt.cm.jet, alpha=1);
ax.view_init(20, azim)
plot_pca2(-60)
from ipywidgets import widgets
widgets.interact(plot_pca2, azim=widgets.IntSlider(min=0,max=180,step=10,value=0));
N=2; M=5;
fig = plt.figure(figsize=(10,3.2))
plt.subplots_adjust(top=1, bottom=0, hspace=0, wspace=0.05)
for i in range(N):
for j in range(M):
k = i*M+j
p = fig.add_subplot(N, M, k+1)
p.imshow(pca.components_[k].reshape((8,8)), cmap=plt.cm.bone, interpolation='none')
plt.xticks([])
plt.yticks([])
plt.grid(False)
A1_mean = [1, 1]
A1_cov = [[2, 1], [1, 1]]
A1 = np.random.multivariate_normal(A1_mean, A1_cov, 50)
A2_mean = [5, 5]
A2_cov = [[2, 1], [1, 1]]
A2 = np.random.multivariate_normal(A2_mean, A2_cov, 50)
A = np.vstack([A1, A2])
B_mean = [5, 0]
B_cov = [[0.8, -0.7], [-0.7, 0.8]]
B = np.random.multivariate_normal(B_mean, B_cov, 50)
AB = np.vstack([A, B])
plt.scatter(A[:,0], A[:,1], c='r')
plt.scatter(B[:,0], B[:,1], c='g')
plt.show()
pca = PCA(n_components=2)
pca.fit(AB)
A_transformed = pca.transform(A)
B_transformed = pca.transform(B)
plt.scatter(A_transformed[:,0], A_transformed[:,1], c="r", s=100)
plt.scatter(B_transformed[:,0], B_transformed[:,1], c="g", s=100)
plt.show()
pca = PCA(n_components=1)
pca.fit(AB)
A_transformed = pca.transform(A)
B_transformed = pca.transform(B)
plt.scatter(A_transformed, np.zeros(len(A_transformed)), c="r", s=100)
plt.scatter(B_transformed, np.zeros(len(B_transformed)), c="g", s=100)
plt.show()
sns.distplot(A_transformed, color="b", bins=20, rug=True, kde=False)
sns.distplot(B_transformed, color="g", bins=20, rug=True, kde=False)
plt.show()
from sklearn.decomposition import KernelPCA
kpca = KernelPCA(kernel="cosine", n_components=2) # The kernel changes the shape of the data; now dimensionality reduction becomes possible.
kpca.fit(AB)
A_transformed2 = kpca.transform(A)
B_transformed2 = kpca.transform(B)
plt.scatter(A_transformed2[:,0], A_transformed2[:,1], c="r", s=100)
plt.scatter(B_transformed2[:,0], B_transformed2[:,1], c="g", s=100)
plt.show()
from sklearn.decomposition import KernelPCA
kpca = KernelPCA(kernel="cosine", n_components=1)
kpca.fit(AB)
A_transformed2 = kpca.transform(A)
B_transformed2 = kpca.transform(B)
plt.scatter(A_transformed2, np.zeros(len(A_transformed2)), c="r", s=100)
plt.scatter(B_transformed2, np.zeros(len(B_transformed2)), c="g", s=100)
plt.show()
sns.distplot(A_transformed2, color="b", bins=20, rug=True, kde=False)
sns.distplot(B_transformed2, color="g", bins=20, rug=True, kde=False)
plt.show()
from sklearn.datasets import make_circles
np.random.seed(0)
X, y = make_circles(n_samples=400, factor=.3, noise=.05)
reds = y == 0
blues = y == 1
plt.plot(X[reds, 0], X[reds, 1], "ro")
plt.plot(X[blues, 0], X[blues, 1], "bo")
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.show()
kpca = KernelPCA(kernel="rbf", fit_inverse_transform=True, gamma=10) # radial basis function (RBF) kernel
kpca.fit(X)
A_transformed2 = kpca.transform(X[reds])
B_transformed2 = kpca.transform(X[blues])
plt.scatter(A_transformed2[:,0], A_transformed2[:,1], c="r", s=100)
plt.scatter(B_transformed2[:,0], B_transformed2[:,1], c="b", s=100)
plt.show()
from sklearn.datasets import fetch_mldata
from sklearn.decomposition import PCA
wine = fetch_mldata("wine")
X, y = wine.data, wine.target
pca = PCA().fit(X)
var = pca.explained_variance_
cmap = sns.color_palette()
plt.bar(np.arange(1,len(var)+1), var/np.sum(var), align="center", color=cmap[0])
plt.step(np.arange(1,len(var)+1), np.cumsum(var)/np.sum(var), where="mid", color=cmap[1])
plt.show()
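# --- Added example (not part of the original notebook) ---
# The same information numerically: cumulative explained-variance ratio per component.
print(np.cumsum(pca.explained_variance_ratio_))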
X_pca = PCA(2).fit_transform(X)
cmap = mpl.colors.ListedColormap(sns.color_palette("Set1"))
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y, cmap=cmap)
plt.show()
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
clf.fit(X_pca, y)
xmin, xmax = X_pca[:,0].min(), X_pca[:,0].max()
ymin, ymax = X_pca[:,1].min(), X_pca[:,1].max()
XGrid, YGrid = np.meshgrid(np.arange(xmin, xmax, (xmax-xmin)/1000), np.arange(ymin, ymax, (ymax-ymin)/1000))
ZGrid = np.reshape(clf.predict(np.array([XGrid.ravel(), YGrid.ravel()]).T), XGrid.shape)
cmap = mpl.colors.ListedColormap(sns.color_palette("Set3"))
plt.contourf(XGrid, YGrid, ZGrid, cmap=cmap)
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y, cmap=cmap)
plt.show()
from sklearn import linear_model, decomposition, datasets
from sklearn.pipeline import Pipeline
digits = datasets.load_digits()
X_digits = digits.data
y_digits = digits.target
model1 = linear_model.LogisticRegression()
model1.fit(X_digits, y_digits)
pca = decomposition.PCA()
logistic = linear_model.LogisticRegression()
model2 = Pipeline(steps=[('pca', pca), ('logistic', logistic)])
model2.fit(X_digits, y_digits)
from sklearn.metrics import classification_report
print(classification_report(y_digits, model1.predict(X_digits)))
print(classification_report(y_digits, model2.predict(X_digits)))
from itertools import combinations
for x in combinations(np.arange(4), 2):
print(x)
from sklearn.datasets import load_iris
iris = load_iris()
y = iris.target
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score
for idx in combinations(np.arange(4), 2):
X = iris.data[:, idx]
model = LogisticRegression().fit(X, y)
    print(precision_score(y, model.predict(X), average="micro")) # average="micro" aggregates the counts over all classes before computing precision
from sklearn.metrics import classification_report
print(classification_report(y, model.predict(X)))
from sklearn.decomposition import PCA
X = PCA(2).fit_transform(iris.data)
model = LogisticRegression().fit(X, y)
print(precision_score(y, model.predict(X)))
print(classification_report(y, model.predict(X)))
from sklearn import linear_model, decomposition, datasets
from sklearn.pipeline import Pipeline
digits = datasets.load_digits()
X_digits = digits.data
y_digits = digits.target
model1 = linear_model.LogisticRegression()
model1.fit(X_digits, y_digits)
pca = decomposition.PCA(16)
logistic = linear_model.LogisticRegression()
model2 = Pipeline(steps=[('pca', pca), ('logistic', logistic)])
model2.fit(X_digits, y_digits)
X_digits.shape
from sklearn.metrics import classification_report
print(classification_report(y_digits, model1.predict(X_digits)))
print(classification_report(y_digits, model2.predict(X_digits)))
X = np.array([[-1, -1], [-2, -1], [-3,-2], [1, 1], [2, 1], [3, 2]])
X
np.cov(X)
np.cov(X.T)
npcov = np.cov(X.T, bias=True)
npcov
L, V = np.linalg.eig(npcov)
L # lambda: the eigenvalues
V # the eigenvectors
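# --- Added check (not part of the original notebook) ---
# The eigenvectors in V above should match the rows of PCA(2).fit(X).components_
# up to sign and ordering, since PCA diagonalises the same covariance structure.
PCA(2).fit(X).components_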
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The reason the point cloud stretches sideways instead of standing upright brings us back to the coordinate transformation of random variables.
Step2: An example with the IRIS data
Step3: PCA on images
Step4: In a high-dimensional space one image is one point. For example, all the 0 digits cluster together in one region of that space (their own galaxy).
Step5: They look bunched together now because this is only 2-D, but as the number of dimensions grows they move apart.
Step6: If an image is a vector, then conversely a vector is also an image. Since a vector is a point, it can be mapped back to an image.
Step7: These coordinates are Cartesian. If we instead use a polar-style coordinate system (r and theta, the angle), the structure can be recovered; obtaining such a representation through a nonlinear mapping is called the kernel approach.
Step8: Dimensionality reduction is not possible with this.
Step9: A decision tree can separate this, but a perceptron or logistic regression cannot, no matter what.
Step10: Choosing the number of components
Step11: At least 5 components have to be kept to preserve more than 80% of the variance, which gives a useful cut-off. As you reduce the number of dimensions, performance sometimes improves slightly at some point: with less collinearity the variance of the performance drops, so it ticks up a little. Choosing the number of dimensions at that point is also a reasonable option.
Step12: Pipelines
Step13: Hands-on practice
Step14: With this, PCA is not the end; next comes the pipeline
Step15: Questions
|
4,574
|
<ASSISTANT_TASK:>
Python Code:
import sys
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import PIL
from matplotlib import pyplot as plt
from tqdm import tqdm
%matplotlib inline
# the following line is not required if BatchFlow is installed as a python package.
sys.path.append('../..')
from batchflow import D, B, V, C, R, P
from batchflow.utils import plot_images
from batchflow.opensets import MNIST
from batchflow.models.tf import TFModel
from batchflow.models.torch import TorchModel
from batchflow.models.metrics import ClassificationMetrics
plt.style.use('ggplot')
mnist = MNIST()
config = {
'model': TorchModel,
'channels': 'first'}
# or for TensorFlow model
# config = {
# 'model': TFModel,
# 'channels': 'last'}
model_config = {
'inputs/images/shape': B.image_shape,
'inputs/labels/classes': D.num_classes,
'initial_block/inputs': 'images',
'body': {'layout': 'cna cna cna',
'filters': [16, 32, 64],
'kernel_size': [7, 5, 3],
'strides': 2},
'head': {'layout': 'Pf',
'units': 10},
'loss': 'ce',
'optimizer': 'Adam',
'output': dict(predicted=['proba', 'labels'])
}
def custom_filter(image, kernel_weights=None):
Apply filter with custom kernel to image
Parameters
----------
kernel_weights: np.array
Weights of kernel.
Returns
-------
filtered image
if kernel_weights is None:
kernel_weights = np.ones((3,3))
kernel_weights[1][1] = 10
kernel = PIL.ImageFilter.Kernel(kernel_weights.shape, kernel_weights.ravel())
return image.filter(kernel)
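# Training pipeline: apply random augmentations (custom kernel filter, shift, rotate, scale), normalize to [0, 1], then run one training step of the 'conv' model per batch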
train_pipeline = (
mnist.train.p
.init_variable('loss_history', default=[])
.init_model('dynamic', C('model'), 'conv', config=model_config)
.apply_transform(custom_filter, src='images', p=0.8)
.shift(offset=P(R('randint', 8, size=2)), p=0.8)
.rotate(angle=P(R('uniform', -10, 10)), p=0.8)
.scale(factor=P(R('uniform', 0.8, 1.2, size=R([1, 2]))), preserve_shape=True, p=0.8)
.to_array(channels=C('channels'), dtype=np.float32)
.multiply(multiplier=1/255)
.train_model('conv', fetches='loss', images=B('images'), targets=B('labels'),
save_to=V('loss_history', mode='a'))
) << config
validation_pipeline = (
mnist.test.p
.init_variable('predictions')
.init_variable('metrics', default=None)
.import_model('conv', train_pipeline)
.apply_transform(custom_filter, src='images', p=0.8)
.shift(offset=P(R('randint', 8, size=2)), p=0.8)
.rotate(angle=P(R('uniform', -10, 10)), p=0.8)
.scale(factor=P(R('uniform', 0.8, 1.2, size=R([1, 2]))), preserve_shape=True, p=0.8)
.to_array(channels=C('channels'), dtype=np.float32)
.multiply(multiplier=1/255)
.predict_model('conv', images=B('images'),
fetches='predictions', save_to=V('predictions'))
.gather_metrics(ClassificationMetrics, targets=B('labels'), predictions=V('predictions'),
fmt='logits', axis=-1, save_to=V('metrics', mode='a'))
) << config
MAX_ITER = 500
FREQUENCY = N_LAST = 20
batch_size = 128
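# Alternate one training and one validation batch per iteration; every FREQUENCY iterations check validation accuracy over the last N_LAST batches and stop early once it exceeds 0.9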
for curr_iter in tqdm(range(1, MAX_ITER + 1)):
train_pipeline.next_batch(batch_size)
validation_pipeline.next_batch(batch_size)
if curr_iter % FREQUENCY == 0:
metrics = validation_pipeline.v('metrics')
accuracy = metrics[-N_LAST:].evaluate('accuracy')
#Early stopping
if accuracy > 0.9:
print('Early stop on {} iteration. Accuracy: {}'.format(curr_iter, accuracy))
break
plt.figure(figsize=(15, 5))
plt.plot(train_pipeline.v('loss_history'))
plt.xlabel("Iterations"), plt.ylabel("Loss")
plt.show()
inference_pipeline = (mnist.test.p
.init_variables('proba', 'labels')
.import_model('conv', train_pipeline)
.to_array(channels=C('channels'), dtype=np.float32)
.multiply(multiplier=1/255)
.predict_model('conv', images=B('images'),
fetches=['predicted_proba', 'predicted_labels'],
save_to=[V('proba'), V('labels')])) << config
batch = inference_pipeline.next_batch(12, shuffle=True)
plot_images(np.squeeze(batch.images), batch.labels,
batch.pipeline.v('proba'), ncols=4, figsize=(30, 35))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You don't need to implement a MNIST dataset. It is already done for you.
Step2: We can use deep learning frameworks such as TensorFlow or PyTorch to make a neural network. These frameworks have a lot of differences under the hood. Batchflow allows us not to dive deep into each of them and use the same model configuration, thereby allowing us to build framework-agnostic models.
Step3: As we already learned from the previous tutorials, first of all you have to define model configuration and create train and test pipelines.
Step5: Train pipeline
Step6: Once the config is defined, the next step is to create a pipeline. Note that rotate and scale are methods of the ImagesBatch class. You can see all available augmentations in the images tutorial.
Step7: Validation pipeline
Step8: Training process
Step9: Take a look at the loss history during training.
Step10: Results
Step11: It's always interesting to look at the images, so let's draw them.
|
4,575
|
<ASSISTANT_TASK:>
Python Code:
def int_montecarlo1(f, a, b, N):
    # Numerical evaluation of integrals by Monte Carlo, type 1.
    # f=f(x) is the function to integrate (it must be declared beforehand), returning the image value for each x,
    # a and b are the lower and upper limits of the interval over which the function is integrated, and N is the
    # number of points used for the approximation.
return (b-a)/N*np.sum(f(np.random.uniform(a, b, N)))
def int_montecarlo2(region, a1, b1, a2, b2, N):
    # Numerical evaluation of integrals by Monte Carlo, type 2.
    # region=region(x,y) returns True if the coordinate (x,y) belongs to the region to be integrated and False
    # otherwise; a1, b1, a2, b2 are the limits of the rectangle that contains the region, and N is the number
    # of points used for the approximation.
A_R = (b1-a1)*(b2-a2)
x = np.random.uniform(a1, b1, N.astype(int))
y = np.random.uniform(a2, b2, N.astype(int))
return A_R*np.sum(region(x,y))/N
# Import libraries
import numpy as np
import pandas as pd
import random
import pandas as pd
import numpy as np
i = np.arange(4)
df = pd.DataFrame(index=i,columns=['$a_i$', '$b_i$', '$c_i$', '$d_i$', '$e_i$', '$f_i$', '$p_i$'], dtype='float')
df.index.name = "$i$"
df['$a_i$'] = [0.5, 0.5, 0.5, 0.5]
df['$b_i$'] = [0.0, 0.0, 0.0, 0.0]
df['$c_i$'] = [0.0, 0.0, 0.0, 0.0]
df['$d_i$'] = [0.5, 0.5, 0.5, 0.5]
df['$e_i$'] = [1.0, 50.0, 1.0, 50.0]
df['$f_i$'] = [1.0, 1.0, 50.0, 50.0]
df['$p_i$'] = [0.1, 0.2, 0.3, 0.4]
df.round(2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Consider the functions $f_1(x)=\sqrt{1+x^{4}}$, $f_2(x)=\ln(\ln x)$, $f_3(x)=\frac {1}{\ln x}$, $f_4(x)=e^{e^{x}}$, $f_5(x)=e^{-{\frac {x^{2}}{2}}}$ and $f_6(x)=\sin(x^{2})$.
Step2: Example 2. Barnsley-type random fractal
|
4,576
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne import io
from mne.event import define_target_events
from mne.datasets import sample
import matplotlib.pyplot as plt
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
# Set up pick list: EEG + STI 014 - bad channels (modify to your needs)
include = [] # or stim channels ['STI 014']
raw.info['bads'] += ['EEG 053'] # bads
# pick MEG channels
picks = mne.pick_types(raw.info, meg='mag', eeg=False, stim=False, eog=True,
include=include, exclude='bads')
reference_id = 5 # presentation of a smiley face
target_id = 32 # button press
sfreq = raw.info['sfreq'] # sampling rate
tmin = 0.1 # trials leading to very early responses will be rejected
tmax = 0.59 # ignore face stimuli followed by button press later than 590 ms
new_id = 42 # the new event id for a hit. If None, reference_id is used.
fill_na = 99 # the fill value for misses
events_, lag = define_target_events(events, reference_id, target_id,
sfreq, tmin, tmax, new_id, fill_na)
print(events_) # The 99 indicates missing or too late button presses
# besides the events also the lag between target and reference is returned
# this could e.g. be used as parametric regressor in subsequent analyses.
print(lag[lag != fill_na]) # lag in milliseconds
# #############################################################################
# Construct epochs
tmin_ = -0.2
tmax_ = 0.4
event_id = dict(early=new_id, late=fill_na)
epochs = mne.Epochs(raw, events_, event_id, tmin_,
tmax_, picks=picks, baseline=(None, 0),
reject=dict(mag=4e-12))
# average epochs and get an Evoked dataset.
early, late = [epochs[k].average() for k in event_id]
times = 1e3 * epochs.times # time in milliseconds
title = 'Evoked response followed by %s button press'
fig, axes = plt.subplots(2, 1)
early.plot(axes=axes[0], time_unit='s')
axes[0].set(title=title % 'late', ylabel='Evoked field (fT)')
late.plot(axes=axes[1], time_unit='s')
axes[1].set(title=title % 'early', ylabel='Evoked field (fT)')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Find stimulus event followed by quick button presses
Step3: View evoked response
|
4,577
|
<ASSISTANT_TASK:>
Python Code:
from pred import Predictor
from pred import sequence_vector
from pred import chemical_vector
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
print("y", i)
y = Predictor()
y.load_data(file="Data/Training/k_acetylation.csv")
y.process_data(vector_function="sequence", amino_acid="K", imbalance_function=i, random_data=0)
y.supervised_training("mlp_adam")
y.benchmark("Data/Benchmarks/acet.csv", "K")
del y
print("x", i)
x = Predictor()
x.load_data(file="Data/Training/k_acetylation.csv")
x.process_data(vector_function="sequence", amino_acid="K", imbalance_function=i, random_data=1)
x.supervised_training("mlp_adam")
x.benchmark("Data/Benchmarks/acet.csv", "K")
del x
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
print("y", i)
y = Predictor()
y.load_data(file="Data/Training/k_acetylation.csv")
y.process_data(vector_function="chemical", amino_acid="K", imbalance_function=i, random_data=0)
y.supervised_training("mlp_adam")
y.benchmark("Data/Benchmarks/acet.csv", "K")
del y
print("x", i)
x = Predictor()
x.load_data(file="Data/Training/k_acetylation.csv")
x.process_data(vector_function="chemical", amino_acid="K", imbalance_function=i, random_data=1)
x.supervised_training("mlp_adam")
x.benchmark("Data/Benchmarks/acet.csv", "K")
del x
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Controlling for Random Negative vs Sans Random in Imbalanced Techniques using K acetylation.
Step2: Chemical Vector
|
4,578
|
<ASSISTANT_TASK:>
Python Code:
from pygmyhdl import *
initialize()
# The following function will define a chunk of logic, hence the @chunk decorator precedes it.
# The blinker logic takes three inputs:
# clk_i: This is a clock signal input.
# led_o: This is an output signal that drives an LED on and off.
# length: This is the number of bits in the counter that generates the led_o output.
@chunk
def blinker(clk_i, led_o, length):
# Define a multi-bit signal (or bus) with length bits.
# Assign it a display name of 'cnt' for use during simulation.
cnt = Bus(length, name='cnt')
# Define a piece of sequential logic. Every time there is a positive
# edge on the clock input (i.e., it goes from 0 -> 1), the value of
# cnt is increased by 1. So over a sequence of clock pulses, the
# cnt value will progress 0, 1, 2, 3, ...
@seq_logic(clk_i.posedge)
def counter_logic():
cnt.next = cnt + 1
# This is a piece of simple combinational logic. It just connects the
# most significant bit (MSB) of the counter to the LED output.
@comb_logic
def output_logic():
led_o.next = cnt[length-1]
clk = Wire(name='clk') # This is a single-bit signal that carries the clock input.
led = Wire(name='led') # This is another single-bit signal that receives the LED output.
blinker(clk_i=clk, led_o=led, length=3); # Attach the clock and LED signals to a 3-bit blinker circuit.
clk_sim(clk, num_cycles=16) # Pulse the clock input 16 times.
show_waveforms()
show_text_table()
toVerilog(blinker, clk_i=clk, led_o=led, length=22) # Give it the function name, signal connections, and counter size.
print(open('blinker.v').read())
with open('blinker.pcf', 'w') as pcf:
pcf.write(
'''
set_io led_o 99
set_io clk_i 21
'''
)
!yosys -q -p "synth_ice40 -blif blinker.blif" blinker.v
!arachne-pnr -q -d 1k -p blinker.pcf blinker.blif -o blinker.asc
!icepack blinker.asc blinker.bin
!iceprog blinker.bin
print(open('blinker.py').read())
!python blinker.py
!yosys -q -p "synth_ice40 -blif blinker.blif" blinker.v
!arachne-pnr -q -d 1k -p blinker.pcf blinker.blif -o blinker.asc
!icepack blinker.asc blinker.bin
!iceprog blinker.bin
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: After importing, set up the module to get it ready for what comes next
Step2: Next, the logic that blinks the LED is defined
Step3: The blinker logic is encapsulated in a Python function that's preceded with the @chunk decorator.
Step4: Finally, the blinker logic is instantiated by calling the blinker function
Step5: While it's not done here, it's possible to call the blinker function several times using
Step6: After the simulation completes, a graphical view of the waveforms for each of the named signals is generated
Step7: You can also view the signal values in a tabular format
Step8: Looking at the waveforms, you can see the led output pulses low-high one time for every eight pulses
Step9: The toVerilog function creates a file called blinker.v
Step10: So now you're ready to compile the Verilog code and program your first FPGA, right?
Step11: Now you really are ready to program the FPGA.
Step12: The options modify the operation of Yosys as follows
Step13: The options specify the following place-and-route operations
Step14: <a name="Downloading"></a>Downloading to the FPGA
Step15: After the download completes, the blinker logic starts to run inside the FPGA,
Step16: Then, just execute the file with Python
Step17: At this point, you will have the blinker.v Verilog file and the blinker.pcf file with the pin assignments.
|
4,579
|
<ASSISTANT_TASK:>
Python Code:
# Import the Kotta module
from kotta import Kotta, KottaJob
from kotta.kotta_functions import *
# Create a Kotta Connection using Login with Amazon credentials
# The token from Kotta is stored in the auth.file
konn = Kotta(open('../auth.file').read())
''' A typical python function
my_sum takes a list of numbers and returns the sum
'''
def my_sum(items):
return sum(items)
result = my_sum (range(0,100))
print(result)
@kottajob(konn, 'Test', 5)
def my_sum(items):
return sum(items)
result = my_sum (range(0,100))
print(result)
''' Now let's send the function to run remotely on Kotta
The decorated functions take and return python objects, when run in non-blocking mode.
Non-blocking is the default behavior.
'''
@kottajob(konn, 'Test', 5, block=False)
def my_sum(items):
import time
time.sleep(5)
return sum(items)
job_hndl = my_sum (range(0,100))
print(job_hndl)
status = job_hndl.status(konn)
print("Status : ", status)
result = job_hndl.get_results(konn)
print("Result : ", result)
#print(job_hndl.STDERR)
#print(job_hndl.STDOUT)
@kottajob(konn, 'Test', 5, block=True)
def numpy_sum(items):
np_array = numpy.array(items)
return numpy.sum(np_array)
job = numpy_sum(range(0,100))
# Check the outputs
# Work in progress!!!
print(job.outputs)
print(job.status(konn))
print(job.STDOUT)
### Easy Parallelism
@kottajob(konn, 'Test', 5, block=False)
def my_sum(items):
import time
time.sleep(2)
return sum(items)
jobs = []
for item in range(0,100,20):
jobs.extend([ my_sum(range(item,item+20)) ])
print(jobs)
[job.wait(konn) for job in jobs]
returns_array = [job.get_results(konn) for job in jobs]
print(returns_array)
sum(returns_array)
@kottajob(konn, 'Test', 10)
def file_sum(inputs=[]):
import os
print(os.listdir('.'))
data = [ int(line.strip()) for line in open(os.path.basename(inputs[0]), 'r').readlines() ]
total = sum(data)
length= len(data)
return total, length
returns = file_sum(inputs=['s3://klab-jobs/1m_shuffled.txt'])
print(returns)
import numpy as np
np.sum(np.array(range(0,10)))
import numpy as np
@kottajob(konn, 'Test', 10, requirements='numpy')
def numpy_chunk_adder(data):
import numpy as np
total = np.sum(data)
return total
returns = []
for i in range(0,100,20):
returns.extend([numpy_chunk_adder(np.array(range(i,i+20)))])
print(returns)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A simple python function
Step2: Running my_sum on Kotta
Step3: Running my_sum on Kotta non-blocking
Step4: Handling errors
|
4,580
|
<ASSISTANT_TASK:>
Python Code:
from lightning import Lightning
from numpy import random, zeros
lgn = Lightning(ipython=True, host='http://public.lightning-viz.org')
x = random.randn(100)
y = random.randn(100)
lgn.scatter(x, y, brush=True, zoom=False)
x = random.rand(100) * 10
y = random.rand(100) * 10
viz = lgn.scatter(x, y, brush=True, zoom=False)
viz
sx, sy = viz.points()
lgn.scatter(sx, sy, zoom=False)
inds = viz.selected()
groups = zeros(100)
groups[inds] = 1
lgn.scatter(x, y, group=groups, zoom=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Connect to server
Step2: <hr> Adding brushing
Step3: <hr> Getting selections
Step4: Let's say I use the brush in the visualization above to select all points between 0 and 4. Below, I grab those points, and replot them -- note the new scale.
Step5: I can use a different accessor, selected, to grab the indices of the selected points.
Step6: Let's replot all points, but show the selected ones in a different color.
|
4,581
|
<ASSISTANT_TASK:>
Python Code:
x = int(input("Please enter an integer: "))
x
if x < 0:
x = 0
print('Negative changed to zero')
elif x == 0:
print('Zero')
elif x == 1:
print('Single')
else:
print('More')
# Measure some strings:
words = ['cat', 'window', 'defenestrate']
for w in words:
print(w, len(w))
for w in words[:]:
if len(w) > 6:
words.insert(0, w)
words
for i in range(5):
print(i)
list(range(5, 10))
list(range(0, 10, 3))
list(range(-10, -100, -30))
a = ['Mary', 'had', 'a', 'little', 'lamb']
for i in range(len(a)):
print(i, a[i])
range(10)
type(range(10))
r1 = range(2)
dir(r1)
iter(r1)
ir1 = iter(r1)
dir(ir1)
next(ir1)
next(ir1)
next(ir1)
list(range(2))
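# Note: the else clause of a for loop runs only when the loop finishes without hitting break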
for n in range(2, 10):
for x in range(2, n):
if n % x == 0:
print("{0} equals {1} * {2}".format(n, x, n//x))
break
else:
# loop fell through without finding a factor
print(n, ' is a prime number')
for num in range(2, 10):
if num % 2 == 0:
print("Found an even number", num)
continue
print("Found a number", num)
while True:
pass
class MyEmptyClass:
pass
def initlog(*args):
pass ## TODO: To implement this!
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: for Statements
Step2: The range() Function
Step3: In many ways the object returned by range() behaves as if it is a list, but in fact it isn’t. It is an object which returns the successive items of the desired sequence when you iterate over it, but it doesn’t really make the list, thus saving space.
Step4: break and continue Statements, and else Clauses on Loops
Step5: pass Statements
Step6: Note
|
4,582
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from time import time, sleep
import numpy as np
import matplotlib.pyplot as plt
from IPython import display
--------------------------------------
-- Tech note
--------------------------------------
Inspired by torch I would use
np.multiply, np.add, np.divide, np.subtract instead of *,+,/,-
for better memory handling
Suppose you allocated a variable
a = np.zeros(...)
So, instead of
a = b + c # will be reallocated, GC needed to free
I would go for:
np.add(b,c,out = a) # puts result in `a`
But it is completely up to you.
%run hw6_Modules.ipynb
def sgd_momentum(x, dx, config, state):
This is a very ugly implementation of sgd with momentum
just to show an example how to store old grad in state.
config:
- momentum
- learning_rate
state:
- old_grad
# x and dx have complex structure, old dx will be stored in a simpler one
state.setdefault('old_grad', {})
i = 0
for cur_layer_x, cur_layer_dx in zip(x,dx):
for cur_x, cur_dx in zip(cur_layer_x,cur_layer_dx):
cur_old_grad = state['old_grad'].setdefault(i, np.zeros_like(cur_dx))
np.add(config['momentum'] * cur_old_grad, config['learning_rate'] * cur_dx, out = cur_old_grad)
cur_x -= cur_old_grad
i += 1
# Generate some data
N = 500
X1 = np.random.randn(N,2) + np.array([2,2])
X2 = np.random.randn(N,2) + np.array([-2,-2])
Y = np.concatenate([np.ones(N),np.zeros(N)])[:,None]
Y = np.hstack([Y, 1-Y])
X = np.vstack([X1,X2])
plt.scatter(X[:,0],X[:,1], c = Y[:,0], edgecolors= 'none')
# net = Sequential()
# net.add(Linear(2, 2))
# net.add(SoftMax())
criterion = ClassNLLCriterion()
# print(net)
# Test something like that then
net = Sequential()
net.add(Linear(2, 4))
net.add(ReLU())
net.add(Linear(4, 2))
net.add(SoftMax())
# Optimizer params
optimizer_config = {'learning_rate' : 1e-1, 'momentum': 0.9}
optimizer_state = {}
# Looping params
n_epoch = 20
batch_size = 128
# batch generator
def get_batches(dataset, batch_size):
X, Y = dataset
n_samples = X.shape[0]
# Shuffle at the start of epoch
indices = np.arange(n_samples)
np.random.shuffle(indices)
for start in range(0, n_samples, batch_size):
end = min(start + batch_size, n_samples)
batch_idx = indices[start:end]
yield X[batch_idx], Y[batch_idx]
loss_history = []
for i in range(n_epoch):
for x_batch, y_batch in get_batches((X, Y), batch_size):
net.zeroGradParameters()
# Forward
predictions = net.forward(x_batch)
loss = criterion.forward(predictions, y_batch)
# Backward
dp = criterion.backward(predictions, y_batch)
net.backward(x_batch, dp)
# Update weights
sgd_momentum(net.getParameters(),
net.getGradParameters(),
optimizer_config,
optimizer_state)
loss_history.append(loss)
# Visualize
display.clear_output(wait=True)
plt.figure(figsize=(8, 6))
plt.title("Training loss")
plt.xlabel("#iteration")
plt.ylabel("loss")
plt.plot(loss_history, 'b')
plt.show()
print('Current loss: %f' % loss)
import os
from sklearn.datasets import fetch_mldata
# Fetch MNIST dataset and create a local copy.
if os.path.exists('mnist.npz'):
with np.load('mnist.npz', 'r') as data:
X = data['X']
y = data['y']
else:
mnist = fetch_mldata("mnist-original")
X, y = mnist.data / 255.0, mnist.target
np.savez('mnist.npz', X=X, y=y)
from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder(sparse=False)
Y = encoder.fit_transform(y.reshape(-1, 1))
from sklearn.model_selection import train_test_split
train_sample, test_sample, train_sample_answers, test_sample_answers = train_test_split(X, Y, test_size=0.2, random_state=42)
from sklearn.metrics import accuracy_score
plt.figure(figsize=(8, 6))
plt.title("Training loss")
plt.xlabel("#iteration")
plt.ylabel("loss")
for Activation in [ReLU, LeakyReLU]:
net = Sequential()
net.add(Linear(X.shape[1], 42))
net.add(Activation())
net.add(Linear(42, Y.shape[1]))
net.add(SoftMax())
loss_history = []
optimizer_config = {'learning_rate' : 1e-1, 'momentum': 0.9}
optimizer_state = {}
for i in range(n_epoch):
for x_batch, y_batch in get_batches((train_sample, train_sample_answers), batch_size):
net.zeroGradParameters()
# Forward
predictions = net.forward(x_batch)
loss = criterion.forward(predictions, y_batch)
# Backward
dp = criterion.backward(predictions, y_batch)
net.backward(x_batch, dp)
# Update weights
sgd_momentum(net.getParameters(),
net.getGradParameters(),
optimizer_config,
optimizer_state)
loss_history.append(loss)
test_sample_answers_true = test_sample_answers.argmax(axis=1)
test_sample_answers_predicted = net.forward(test_sample).argmax(axis=1)
plt.plot(loss_history, label=Activation())
print('Accuracy using {} = {}'.format(Activation(), accuracy_score(test_sample_answers_true, test_sample_answers_predicted)))
plt.legend()
plt.show()
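# Same classifier as above, but with a Dropout layer after the first Linear layer to reduce overfitting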
net = Sequential()
net.add(Linear(X.shape[1], 42))
net.add(Dropout())
net.add(LeakyReLU())
net.add(Linear(42, Y.shape[1]))
net.add(SoftMax())
optimizer_config = {'learning_rate' : 1e-1, 'momentum': 0.9}
optimizer_state = {}
for i in range(n_epoch):
for x_batch, y_batch in get_batches((train_sample, train_sample_answers), batch_size):
net.zeroGradParameters()
# Forward
predictions = net.forward(x_batch)
loss = criterion.forward(predictions, y_batch)
# Backward
dp = criterion.backward(predictions, y_batch)
net.backward(x_batch, dp)
# Update weights
sgd_momentum(net.getParameters(),
net.getGradParameters(),
optimizer_config,
optimizer_state)
test_sample_answers_true = test_sample_answers.argmax(axis=1)
test_sample_answers_predicted = net.forward(test_sample).argmax(axis=1)
print('Accuracy = {}'.format(accuracy_score(test_sample_answers_true, test_sample_answers_predicted)))
# Your code goes here. ################################################
# Extract inner representation for train and validation,
# you should get (n_samples, 30) matrices
# Your code goes here. ################################################
# Now build a logistic regression or small classification net
cnet = Sequential()
cnet.add(Linear(30, 2))
cnet.add(SoftMax())
# Learn the weights
# Your code goes here. ################################################
# Now chop off decoder part
# (you may need to implement `remove` method for Sequential container)
# Your code goes here. ################################################
# And add learned layers ontop.
autoenc.add(cnet[0])
autoenc.add(cnet[1])
# Now optimize whole model
# Your code goes here. ################################################
# np.clip(prediction,0,1)
#
# Your code goes here. ################################################
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Important
Step4: Optimizer is implemented for you.
Step5: Toy example
Step6: Define a logistic regression for debugging.
Step7: Start with batch_size = 1000 to make sure every step lowers the loss, then try the stochastic version.
Step8: Train
Step9: Digit classification
Step10: One-hot encode the labels first.
Step11: Compare ReLU, ELU activation functions.
Step12: Finally, use all your knowledge to build a super cool model on this dataset, do not forget to split dataset into train and validation. Use dropout to prevent overfitting, play with learning rate decay. You can use data augmentation such as rotations, translations to boost your score. Use your knowledge and imagination to train a model.
Step13: Print here your accuracy. It should be around 90%.
Step14: Corollary
Step15: Some time ago NNs were much weaker and people struggled to train deep models. To train a classification net, people would first train an autoencoder (and to train the autoencoder they would pretrain single layers with RBMs), then substitute the decoder part with a classification layer (yes, training autoencoders was a real struggle, and complex techniques were used in those dark times). We are going to do this now, fast and easy.
Step16: What do you think: does it make sense to build real-world classifiers this way? Did it work better for you than a straightforward one? It seems it was not the same ~8 years ago; what has changed besides computational power?
|
4,583
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%load_ext autoreload
%autoreload 2
import os, sys, time, gzip
import pickle as pkl
import numpy as np
import pandas as pd
from scipy.sparse import lil_matrix, issparse, hstack, vstack
import matplotlib.pyplot as plt
import seaborn as sns
from models import MTC
from sklearn.linear_model import LogisticRegression
# from tools import calc_RPrecision_HitRate
from tools import calc_metrics
TOPs = [5, 10, 20, 30, 50, 100, 200, 300, 500, 1000]
datasets = ['aotm2011', '30music']
dix = 1
dataset_name = datasets[dix]
dataset_name
data_dir = 'data/%s/setting1' % dataset_name
Y_trndev = pkl.load(gzip.open(os.path.join(data_dir, 'Y_train_dev.pkl.gz'), 'rb'))
Y_test = pkl.load(gzip.open(os.path.join(data_dir, 'Y_test.pkl.gz'), 'rb'))
song2pop = pkl.load(gzip.open('data/%s/setting2/song2pop.pkl.gz' % dataset_name, 'rb'))
songsets = pkl.load(gzip.open(os.path.join(data_dir, 'songs_train_dev_test_s1.pkl.gz'), 'rb'))
songset_trndev = songsets['train_song_set'] + songsets['dev_song_set']
songset_test = songsets['test_song_set']
pl_indices = np.where(Y_test.sum(axis=0).A.reshape(-1) > 0)[0]
lengths = Y_trndev.sum(axis=0).A.reshape(-1)[pl_indices]
Y_pred = lil_matrix(Y_test.shape, dtype=np.float)
np.random.seed(1234567890)
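# Greedy baseline: add each new test song to one of the currently longest playlists, breaking ties at random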
for ix in range(len(songset_test)):
sort_ix = np.argsort(-lengths)
long_ix = [sort_ix[0]]
longest = lengths[sort_ix[0]]
for i in range(1, sort_ix.shape[0]):
if lengths[sort_ix[i]] < longest:
break
else:
            long_ix.append(sort_ix[i])
long_ix = np.random.permutation(long_ix)
rec_ix = long_ix[0]
Y_pred[ix, pl_indices[rec_ix]] = 1
lengths[rec_ix] += 1
Y_pred = Y_pred.tocsc()
rps_longest = []
hitrates_longest = {top: [] for top in TOPs}
aucs_longest = []
for j in range(Y_test.shape[1]):
if (j+1) % 100 == 0:
sys.stdout.write('\r%d / %d' % (j+1, Y_test.shape[1]))
sys.stdout.flush()
y_true = Y_test[:, j].toarray().reshape(-1)
if y_true.sum() < 1:
continue
y_pred = Y_pred[:, j].A.reshape(-1)
rp, hr_dict, auc = calc_metrics(y_true, y_pred, tops=TOPs)
rps_longest.append(rp)
for top in TOPs:
hitrates_longest[top].append(hr_dict[top])
aucs_longest.append(auc)
print('\n%d / %d' % (len(rps_longest), Y_test.shape[1]))
longest_perf = {dataset_name:
{'Test': {'R-Precision': np.mean(rps_longest),
'Hit-Rate': {top: np.mean(hitrates_longest[top]) for top in hitrates_longest},
'AUC': np.mean(aucs_longest)}}}
longest_perf
fperf_longest = os.path.join(data_dir, 'perf-longest.pkl')
print(fperf_longest)
pkl.dump(longest_perf, open(fperf_longest, 'wb'))
pkl.load(open(fperf_longest, 'rb'))
pl_indices = np.where(Y_test.sum(axis=0).A.reshape(-1) > 0)[0]
lengths = Y_trndev.sum(axis=0).A.reshape(-1)[pl_indices]
Y_pred = lil_matrix(Y_test.shape, dtype=np.float)
np.random.seed(1234567890)
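# Same greedy baseline, but add each new test song to one of the currently shortest playlists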
for ix in range(len(songset_test)):
sort_ix = np.argsort(lengths)
short_ix = [sort_ix[0]]
shortest = lengths[sort_ix[0]]
for i in range(1, sort_ix.shape[0]):
if lengths[sort_ix[i]] > shortest:
break
else:
short_ix.append(sort_ix[i])
short_ix = np.random.permutation(short_ix)
rec_ix = short_ix[0]
Y_pred[ix, pl_indices[rec_ix]] = 1
lengths[rec_ix] += 1
Y_pred = Y_pred.tocsc()
rps_shortest = []
hitrates_shortest = {top: [] for top in TOPs}
aucs_shortest = []
ndcgs_shortest = []
for j in range(Y_test.shape[1]):
if (j+1) % 100 == 0:
sys.stdout.write('\r%d / %d' % (j+1, Y_test.shape[1]))
sys.stdout.flush()
y_true = Y_test[:, j].toarray().reshape(-1)
if y_true.sum() < 1:
continue
y_pred = Y_pred[:, j].A.reshape(-1)
rp, hr_dict, auc, ndcg = calc_metrics(y_true, y_pred, tops=TOPs)
rps_shortest.append(rp)
for top in TOPs:
hitrates_shortest[top].append(hr_dict[top])
aucs_shortest.append(auc)
ndcgs_shortest.append(ndcg)
print('\n%d / %d' % (len(rps_shortest), Y_test.shape[1]))
shortest_perf = {dataset_name:
{'Test': {'R-Precision': np.mean(rps_shortest),
'Hit-Rate': {top: np.mean(hitrates_shortest[top]) for top in hitrates_shortest},
'AUC': np.mean(aucs_shortest),
'NDCG': np.mean(ndcgs_shortest)}}}
shortest_perf
fperf_shortest = os.path.join(data_dir, 'perf-shortest.pkl')
print(fperf_shortest)
pkl.dump(shortest_perf, open(fperf_shortest, 'wb'))
pkl.load(open(fperf_shortest, 'rb'))
rps_poptest = []
hitrates_poptest = {top: [] for top in TOPs}
aucs_poptest = []
ndcgs_poptest = []
for j in range(Y_test.shape[1]):
if (j+1) % 100 == 0:
sys.stdout.write('\r%d / %d' % (j+1, Y_test.shape[1]))
sys.stdout.flush()
y_true = Y_test[:, j].toarray().reshape(-1)
if y_true.sum() < 1:
continue
y_pred = np.asarray([song2pop[sid] for sid, _ in songset_test])
rp, hr_dict, auc, ndcg = calc_metrics(y_true, y_pred, tops=TOPs)
rps_poptest.append(rp)
for top in TOPs:
hitrates_poptest[top].append(hr_dict[top])
aucs_poptest.append(auc)
ndcgs_poptest.append(ndcg)
print('\n%d / %d' % (len(rps_poptest), Y_test.shape[1]))
fig = plt.figure(figsize=[20, 5])
ax1 = plt.subplot(131)
ax1.hist(rps_poptest, bins=100)
ax1.set_yscale('log')
ax1.set_title('R-Precision')
#ax.set_xlim(0, xmax)
ax2 = plt.subplot(132)
ax2.hist(aucs_poptest, bins=100)
ax2.set_yscale('log')
ax2.set_title('AUC')
ax3 = plt.subplot(133)
ax3.hist(ndcgs_poptest, bins=100)
ax3.set_yscale('log')
ax3.set_title('NDCG')
pass
poptest_perf = {dataset_name: {'Test': {'R-Precision': np.mean(rps_poptest),
'Hit-Rate': {top: np.mean(hitrates_poptest[top]) for top in hitrates_poptest},
'AUC': np.mean(aucs_poptest),
'NDCG': np.mean(ndcgs_poptest)}}}
poptest_perf
fperf_poptest = os.path.join(data_dir, 'perf-poptest.pkl')
print(fperf_poptest)
pkl.dump(poptest_perf, open(fperf_poptest, 'wb'))
pkl.load(open(fperf_poptest, 'rb'))
rps_lrpop = []
hitrates_lrpop = {top: [] for top in TOPs}
aucs_lrpop = []
ndcgs_lrpop = []
nsong_trndev = len(songset_trndev)
nsong_test = len(songset_test)
for j in range(Y_test.shape[1]):
if (j+1) % 10 == 0:
sys.stdout.write('\r%d / %d' % (j+1, Y_test.shape[1]))
sys.stdout.flush()
y_true = Y_test[:, j].toarray().reshape(-1)
if y_true.sum() < 1:
continue
X_train = np.asarray([song2pop[sid] for sid, _ in songset_trndev]).reshape(nsong_trndev, 1)
Y_train = Y_trndev[:, j].A.reshape(-1)
clf = LogisticRegression()
clf.fit(X_train, Y_train)
X_test = np.asarray([song2pop[sid] for sid, _ in songset_test]).reshape(nsong_test, 1)
y_pred = clf.decision_function(X_test).reshape(-1)
rp, hr_dict, auc, ndcg = calc_metrics(y_true, y_pred, tops=TOPs)
rps_lrpop.append(rp)
for top in TOPs:
hitrates_lrpop[top].append(hr_dict[top])
aucs_lrpop.append(auc)
ndcgs_lrpop.append(ndcg)
print('\n%d / %d' % (len(rps_lrpop), Y_test.shape[1]))
lrpop_perf = {dataset_name: {'Test': {'R-Precision': np.mean(rps_lrpop),
'Hit-Rate': {top: np.mean(hitrates_lrpop[top]) for top in hitrates_lrpop},
'AUC': np.mean(aucs_lrpop),
'NDCG': np.mean(ndcgs_lrpop)}}}
lrpop_perf
fperf_lrpop = os.path.join(data_dir, 'perf-lrpop.pkl')
print(fperf_lrpop)
pkl.dump(lrpop_perf, open(fperf_lrpop, 'wb'))
pkl.load(open(fperf_lrpop, 'rb'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Given a new song, recommend to the longest playlist
Step2: Given a new song, recommend to the shortest playlist
Step3: Popularity (in test set) as song score
Step4: Logistic Regression with only song popularity as feature
|
4,584
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('/Users/pradap/Documents/Research/Python-Package/anhaid/py_entitymatching/')
import py_entitymatching as em
import pandas as pd
import os
# Display the versions
print('python version: ' + sys.version )
print('pandas version: ' + pd.__version__ )
print('magellan version: ' + em.__version__ )
# Get the paths
path_A = em.get_install_path() + os.sep + 'datasets' + os.sep + 'end-to-end' + os.sep + 'restaurants/fodors.csv'
path_B = em.get_install_path() + os.sep + 'datasets' + os.sep + 'end-to-end' + os.sep + 'restaurants/zagats.csv'
# Load csv files as dataframes and set the key attribute in the dataframe
A = em.read_csv_metadata(path_A, key='id')
B = em.read_csv_metadata(path_B, key='id')
print('Number of tuples in A: ' + str(len(A)))
print('Number of tuples in B: ' + str(len(B)))
print('Number of tuples in A X B (i.e the cartesian product): ' + str(len(A)*len(B)))
A.head(2)
B.head(2)
# Display the keys of the input tables
em.get_key(A), em.get_key(B)
# If the tables are large we can downsample the tables like this
A1, B1 = em.down_sample(A, B, 200, 1, show_progress=False)
len(A1), len(B1)
# But for the purposes of this notebook, we will use the entire table A and B
# Blocking plan
# A, B -- Overlap blocker [name] --------------------|---> candidate set
# Create overlap blocker
ob = em.OverlapBlocker()
# Block tables using 'name' attribute
C = ob.block_tables(A, B, 'name', 'name',
l_output_attrs=['name', 'addr', 'city', 'phone'],
r_output_attrs=['name', 'addr', 'city', 'phone'],
overlap_size=1, show_progress=False
)
len(C)
# Sample candidate set
S = em.sample_table(C, 450)
# Label S
G = em.label_table(S, 'gold')
# Load the pre-labeled data
path_G = em.get_install_path() + os.sep + 'datasets' + os.sep + 'end-to-end' + os.sep + 'restaurants/lbl_restnt_wf1.csv'
G = em.read_csv_metadata(path_G,
key='_id',
ltable=A, rtable=B,
fk_ltable='ltable_id', fk_rtable='rtable_id')
len(G)
# Split S into development set (I) and evaluation set (J)
IJ = em.split_train_test(G, train_proportion=0.7, random_state=0)
I = IJ['train']
J = IJ['test']
# Create a set of ML-matchers
dt = em.DTMatcher(name='DecisionTree', random_state=0)
svm = em.SVMMatcher(name='SVM', random_state=0)
rf = em.RFMatcher(name='RF', random_state=0)
lg = em.LogRegMatcher(name='LogReg', random_state=0)
ln = em.LinRegMatcher(name='LinReg')
nb = em.NBMatcher(name='NaiveBayes')
# Generate features
feature_table = em.get_features_for_matching(A, B)
# Convert the I into a set of feature vectors using F
H = em.extract_feature_vecs(I,
feature_table=feature_table,
attrs_after='gold',
show_progress=False)
# Display first few rows
H.head(3)
# Select the best ML matcher using CV
result = em.select_matcher([dt, rf, svm, ln, lg, nb], table=H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'gold'],
k=5,
target_attr='gold', metric='precision', random_state=0)
result['cv_stats']
result = em.select_matcher([dt, rf, svm, ln, lg, nb], table=H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'gold'],
k=5,
target_attr='gold', metric='recall', random_state=0)
result['cv_stats']
# Convert J into a set of feature vectors using feature table
L = em.extract_feature_vecs(J, feature_table=feature_table,
attrs_after='gold', show_progress=False)
# Train using feature vectors from I
dt.fit(table=H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'gold'],
target_attr='gold')
# Predict on L
predictions = dt.predict(table=L, exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'gold'],
append=True, target_attr='predicted', inplace=False)
# Evaluate the predictions
eval_result = em.eval_matches(predictions, 'gold', 'predicted')
em.print_eval_summary(eval_result)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Matching two tables typically consists of the following three steps
Step2: Block Tables To Get Candidate Set
Step3: Match tuple pairs in candidate set
Step4: Next, we label the sampled candidate set, entering 1 for a match and 0 for a non-match.
Step5: For the purposes of this guide, we will load in a pre-labeled dataset (of 450 tuple pairs) included in this package.
Step6: Splitting the labeled data into development and evaluation set
Step7: Selecting the best learning-based matcher
Step8: Creating features
Step9: Converting the development set to feature vectors
Step10: Selecting the best matcher using cross-validation
Step11: We observe that the best matcher (RF) is getting us the best precision and recall. So, we select this matcher and now we can proceed on to evaluating the best matcher on the unseen data (the evaluation set).
Step12: Training the selected matcher
Step13: Predicting the matches
Step14: Evaluating the predictions
|
4,585
|
<ASSISTANT_TASK:>
Python Code:
from shenfun import *
from shenfun.la import SolverGeneric1ND
import sympy as sp
r = 1
theta, phi = psi = sp.symbols('x,y', real=True, positive=True)
rv = (r*sp.sin(theta)*sp.cos(phi), r*sp.sin(theta)*sp.sin(phi), r*sp.cos(theta))
N, M = 64, 64
L0 = FunctionSpace(N, 'C', domain=(0, np.pi))
F1 = FunctionSpace(M, 'F', dtype='d')
T = TensorProductSpace(comm, (L0, F1), coordinates=(psi, rv, sp.Q.positive(sp.sin(theta))))
v = TestFunction(T)
u = TrialFunction(T)
#sph = sp.functions.special.spherical_harmonics.Ynm
#ue = sph(6, 3, theta, phi)
ue = sp.cos(8*(sp.sin(theta)*sp.cos(phi) + sp.sin(theta)*sp.sin(phi) + sp.cos(theta)))
alpha = 1000
g = (-div(grad(u))+alpha*u).tosympy(basis=ue, psi=psi)
gj = Array(T, buffer=g*T.coors.sg)
g_hat = Function(T)
g_hat = inner(v, gj, output_array=g_hat)
from IPython.display import Math
Math((-div(grad(u))+alpha*u).tolatex(funcname='u', symbol_names={theta: '\\theta', phi: '\\phi'}))
#Math((grad(u)).tolatex(funcname='u', symbol_names={theta: '\\theta', phi: '\\phi'}))
mats = inner(v, (-div(grad(u))+alpha*u)*T.coors.sg)
mats[3].mats
u_hat = Function(T)
Sol1 = SolverGeneric1ND(mats)
u_hat = Sol1(g_hat, u_hat)
uj = u_hat.backward()
uq = Array(T, buffer=ue)
print('Error =', np.sqrt(inner(1, (uj-uq)**2)), np.linalg.norm(uj-uq))
np.linalg.norm(u_hat - u_hat.backward().forward())
import matplotlib.pyplot as plt
%matplotlib inline
plt.spy(Sol1.solvers1D[1].mat, markersize=0.2)
#raise RuntimeError
u_hat2 = u_hat.refine([N*3, M*3])
surf3D(u_hat2, wrapaxes=[1])
Math((div(grad(div(grad(u))))+alpha*u).tolatex(funcname='u', symbol_names={theta: '\\theta', phi: '\\phi'}))
g = (div(grad(div(grad(u))))+alpha*u).tosympy(basis=ue, psi=psi)
gj = Array(T, buffer=g)
# Take scalar product
g_hat = Function(T)
g_hat = inner(v, gj, output_array=g_hat)
mats = inner(v, div(grad(div(grad(u)))) + alpha*u)
# Solve
u_hat = Function(T)
Sol1 = SolverGeneric1ND(mats)
u_hat = Sol1(g_hat, u_hat)
# Transform back to real space.
uj = u_hat.backward()
uq = Array(T, buffer=ue)
print('Error =', np.sqrt(dx((uj-uq)**2)))
r, theta, phi = psi = sp.symbols('x,y,z', real=True, positive=True)
rv = (r*sp.sin(theta)*sp.cos(phi), r*sp.sin(theta)*sp.sin(phi), r*sp.cos(theta))
L0 = FunctionSpace(20, 'L', domain=(0, 1))
F1 = FunctionSpace(20, 'L', domain=(0, np.pi))
F2 = FunctionSpace(20, 'F', dtype='D')
T = TensorProductSpace(comm, (L0, F1, F2), coordinates=(psi, rv, sp.Q.positive(sp.sin(theta))))
p = TrialFunction(T)
Math((div(grad(div(grad(p))))).tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', phi: '\\phi'}))
q = TestFunction(T)
A = inner(div(grad(q)), div(grad(p)))
L0 = FunctionSpace(8, 'C', domain=(0, np.pi))
F1 = FunctionSpace(8, 'F', dtype='D')
F2 = FunctionSpace(8, 'F', dtype='D')
T = TensorProductSpace(comm, (L0, F1, F2))
p = TrialFunction(T)
Math((div(grad(div(grad(p))))).tolatex(funcname='u'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define spherical coordinates $(r, \theta, \phi)$
Step2: We define bases with the domains $\theta \in [0, \pi]$ and $\phi \in [0, 2\pi]$. Also define a tensorproductspace, test- and trialfunction. Note that the new coordinates and the position vector are fed to the TensorProductSpace and not the individual spaces
Step3: Use one spherical harmonic function as a manufactured solution
Step4: Compute the right hand side on the quadrature mesh and take the scalar product
Step5: Note that we can use the shenfun operators div and grad on a trialfunction u, and then switch the trialfunction for a sympy function ue. The operators will then make use of sympy's derivative method on the function ue. Here (-div(grad(u))+alpha*u) corresponds to the equation we are trying to solve
Step6: Evaluated with u=ue, this gives the exact right-hand side f.
Step7: And the linear system of equations can be solved using the generic SolverGeneric1ND, that can be used for any problem that only has non-periodic boundary conditions in one dimension.
Step8: Transform back to real space and compute the error.
Step9: Postprocessing
Step10: The periodic solution does not contain the periodic points twice, i.e., the computational mesh contains $0$, but not $2\pi$. It looks better if we wrap the periodic dimension all around to $2\pi$, and this is achieved with
Step11: Biharmonic equation
Step12: Remember that this equation uses constant radius r=1. We now solve the equation using the same manufactured solution as for the Helmholtz equation.
Step13: Want to see what the regular 3-dimensional biharmonic equation looks like in spherical coordinates? This is extremely tedious to derive by hand, but in shenfun you can get there with the following few lines of code
Step14: I don't know if this is actually correct, because I haven't derived it by hand and I haven't seen it printed anywhere, but at least I know the Cartesian equation is correct
|
4,586
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import FileLink, display
display(FileLink('data/NASA_TM_81232.pdf'))
remote_data = True
remote_server_auto = True
case_name = 'caratung-ar-6p0-pitch-8p0'
data_dir='/gpfs/thirdparty/zenotech/home/dstandingford/VALIDATION/CARATUNG'
data_host='dstandingford@vis03'
paraview_cmd='mpiexec /gpfs/cfms/apps/zCFD/bin/pvserver'
if not remote_server_auto:
paraview_cmd=None
if not remote_data:
data_host='localhost'
paraview_cmd=None
# Validation for Caradonna Tung Rotor (Mach at Tip - 0.877) from NASA TM 81232, page 34
validate = True
regression = True
# Make movie option currently not working - TODO
make_movie = False
if (validate):
valid = True
validation_tol = 0.0100
valid_lower_cl_0p50 = 0.2298-validation_tol
valid_upper_cl_0p50 = 0.2298+validation_tol
valid_lower_cl_0p68 = 0.2842-validation_tol
valid_upper_cl_0p68 = 0.2842+validation_tol
valid_lower_cl_0p80 = 0.2736-validation_tol
valid_upper_cl_0p80 = 0.2736+validation_tol
valid_lower_cl_0p89 = 0.2989-validation_tol
valid_upper_cl_0p89 = 0.2989+validation_tol
valid_lower_cl_0p96 = 0.3175-validation_tol
valid_upper_cl_0p96 = 0.3175+validation_tol
print 'VALIDATING CARADONNA TUNG CASE'
if (regression):
print 'REGRESSION CARADONNA TUNG CASE'
%pylab inline
from paraview.simple import *
paraview.simple._DisableFirstRenderCameraReset()
import pylab as pl
from zutil.post import pvserver_connect
if remote_data:
pvserver_connect(data_host=data_host,data_dir=data_dir,paraview_cmd=paraview_cmd)
from zutil.post import get_case_parameters,print_html_parameters
parameters=get_case_parameters(case_name,data_host=data_host,data_dir=data_dir)
# print parameters
from zutil.post import get_status_dict
status=get_status_dict(case_name,data_host=data_host,data_dir=data_dir)
num_procs = str(status['num processor'])
from IPython.display import HTML
HTML(print_html_parameters(parameters))
aspect_ratio = 6.0
Pitch = 8.0
from zutil.post import for_each
from zutil import rotate_vector
from zutil.post import get_csv_data
def plot_cp_profile(ax,file_root,span_loc,ax2):
wall = PVDReader( FileName=file_root+'_wall.pvd' )
wall.UpdatePipeline()
point_data = CellDatatoPointData(Input=wall)
point_data.PassCellData = 0
point_data.UpdatePipeline()
merged = MergeBlocks(Input=point_data)
merged.UpdatePipeline()
wall_slice = Slice(Input=merged, SliceType="Plane" )
wall_slice.SliceType.Normal = [0.0,1.0,0.0]
wall_slice.SliceType.Origin = [0, span_loc*aspect_ratio, 0]
wall_slice.UpdatePipeline()
sorted_line = PlotOnSortedLines(Input=wall_slice)
sorted_line.UpdatePipeline()
slice_client = servermanager.Fetch(sorted_line)
for_each(slice_client,func=plot_array,axis=ax,span_loc=span_loc,axis2=ax2)
def plot_array(data_array,pts_array,**kwargs):
ax = kwargs['axis']
span_loc = kwargs['span_loc']
ax2 = kwargs['axis2']
data = []
pos = []
pos_y = []
count = 0
cp_array = data_array.GetPointData()['cp']
for p in pts_array.GetPoints()[:,0]:
cp = float(cp_array[count])
# transform to local Cp
cp = cp/(span_loc)**2
data.append(cp)
pt_x = pts_array.GetPoints()[count,0]
pt_z = pts_array.GetPoints()[count,2]
# rotate by -8 deg
pt_rot = rotate_vector([pt_x,0.0,pt_z],-8.0,0.0)
pt = pt_rot[0] + 0.25
pos.append(pt)
pos_y.append(pt_rot[2])
count+=1
ax.plot(pos, data , color='g',linestyle='-',marker='None',label='zCFD')
ax2.plot(pos, pos_y , color='grey',linestyle='-',marker='None',label='profile')
def plot_experiment(ax, filename):
header = True
remote = False
# Note - this returns a pandas dataframe object
df = get_csv_data(filename,True,False)
x = []
y = []
for ind in range(0,len(df.index)-1):
x.append(df[list(df.columns.values)[0]][ind])
y.append(-df[list(df.columns.values)[1]][ind])
ax.scatter(x, y, color='grey', label='Experiment')
from zutil.post import get_case_root, cp_profile_wall_from_file_span
from zutil.post import ProgressBar
from collections import OrderedDict
factor = 0.0
pbar = ProgressBar()
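# Spanwise stations r/R at which Cp is extracted, each with its experimental data file and plot axis limits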
plot_list = OrderedDict([(0.50,{'exp_data_file': 'data/cp-0p50.txt', 'cp_axis':[0.0,1.0,1.2,-1.0]}),
(0.68,{'exp_data_file': 'data/cp-0p68.txt', 'cp_axis':[0.0,1.0,1.2,-1.5]}),
(0.80,{'exp_data_file': 'data/cp-0p80.txt', 'cp_axis':[0.0,1.0,1.2,-1.5]}),
(0.89,{'exp_data_file': 'data/cp-0p89.txt', 'cp_axis':[0.0,1.0,1.2,-1.5]}),
(0.96,{'exp_data_file': 'data/cp-0p96.txt', 'cp_axis':[0.0,1.0,1.2,-1.5]})])
fig = pl.figure(figsize=(25, 30),dpi=100, facecolor='w', edgecolor='k')
fig.suptitle('Caradonna Tung Hover Rotor (' + r'$\mathbf{M_{TIP}}$' + ' = 0.877)',
fontsize=28, fontweight='normal', color = '#5D5858')
pnum=1
cl = {}
for plot in plot_list:
pbar+=5
span_loc = plot + factor
ax = fig.add_subplot(3,2,pnum)
ax.set_title('$\mathbf{C_P}$' + ' at ' + '$\mathbf{r/R}$' + ' = ' + str(span_loc) + '\n',
fontsize=24, fontweight='normal', color = '#E48B25')
ax.grid(True)
ax.set_xlabel('$\mathbf{x/c}$', fontsize=24, fontweight='bold', color = '#5D5858')
ax.set_ylabel('$\mathbf{C_p}$', fontsize=24, fontweight='bold', color = '#5D5858')
ax.axis(plot_list[plot]['cp_axis'])
ax2 = ax.twinx()
ax2.set_ylabel('$\mathbf{z/c}$', fontsize=24, fontweight='bold', color = '#5D5858')
ax2.axis([0,1,-0.5,0.5])
plot_cp_profile(ax,get_case_root(case_name,num_procs),span_loc,ax2)
normal = [0.0, 1.0, 0.0]
origin = [0.0, span_loc*aspect_ratio, 0.0]
# Check this - alpha passed via kwargs to post.py
# THESE NUMBERS ARE COMPLETELY WRONG - CHECK
forces = cp_profile_wall_from_file_span(get_case_root(case_name,num_procs), normal, origin, alpha=Pitch)
cd = forces['friction force'][0] + forces['pressure force'][0]
cs = forces['friction force'][1] + forces['pressure force'][1]
cl[plot] = forces['friction force'][2] + forces['pressure force'][2]
print cd, cs, cl[plot]
plot_experiment(ax,plot_list[plot]['exp_data_file'])
ax.legend(loc='upper right', shadow=True)
legend = ax.legend(loc='best', scatterpoints=1, numpoints=1, shadow=False, fontsize=16)
legend.get_frame().set_facecolor('white')
ax.tick_params(axis='x', pad=16)
for tick in ax.xaxis.get_major_ticks():
tick.label.set_fontsize(18)
tick.label.set_fontweight('normal')
tick.label.set_color('#E48B25')
for tick in ax.yaxis.get_major_ticks():
tick.label.set_fontsize(18)
tick.label.set_fontweight('normal')
tick.label.set_color('#E48B25')
for tick in ax2.yaxis.get_major_ticks():
tick.label2.set_fontsize(18)
tick.label2.set_fontweight('normal')
tick.label2.set_color('#E48B25')
pnum=pnum+1
fig.subplots_adjust(hspace=0.3)
fig.subplots_adjust(wspace=0.4)
fig.savefig("images/Caradonna_Tung_CP_profile.png")
pbar.complete()
show()
from IPython.display import FileLink, display
display(FileLink('images/Caradonna_Tung_CP_profile.png'))
from zutil.post import residual_plot, get_case_report
residual_plot(get_case_report(case_name))
show()
if make_movie:
from zutil.post import get_case_root
from zutil.post import ProgressBar
pb = ProgressBar()
vtu = PVDReader( FileName=[get_case_root(case_name,num_procs)+'.pvd'] )
vtu.UpdatePipeline()
pb += 20
merged = CleantoGrid(Input=vtu)
merged.UpdatePipeline()
pb += 20
point_data = CellDatatoPointData(Input=merged)
point_data.PassCellData = 0
point_data.PieceInvariant = 1
point_data.UpdatePipeline()
pb.complete()
if make_movie:
# from paraview.vtk.dataset_adapter import DataSet
from vtk.numpy_interface.dataset_adapter import DataSet
stream = StreamTracer(Input=point_data)
stream.SeedType = "Point Source"
stream.SeedType.Center = [49673.0, 58826.0, 1120.0]
stream.SeedType.Radius = 1
stream.SeedType.NumberOfPoints = 1
stream.Vectors = ['POINTS', 'V']
stream.MaximumStreamlineLength = 135800.00000000035
# IntegrationDirection can be FORWARD, BACKWARD, or BOTH
stream.IntegrationDirection = 'BACKWARD'
stream.UpdatePipeline()
stream_client = servermanager.Fetch(stream)
upstream_data = DataSet(stream_client)
stream.IntegrationDirection = 'FORWARD'
stream.UpdatePipeline()
stream_client = servermanager.Fetch(stream)
downstream_data = DataSet(stream_client)
if make_movie:
def vtk_show(renderer, w=100, h=100):
Takes vtkRenderer instance and returns an IPython Image with the rendering.
from vtk import vtkRenderWindow,vtkWindowToImageFilter,vtkPNGWriter
renderWindow = vtkRenderWindow()
renderWindow.SetOffScreenRendering(1)
renderWindow.AddRenderer(renderer)
renderWindow.SetSize(w, h)
renderWindow.Render()
windowToImageFilter = vtkWindowToImageFilter()
windowToImageFilter.SetInput(renderWindow)
windowToImageFilter.Update()
writer = vtkPNGWriter()
writer.SetWriteToMemory(1)
writer.SetInputConnection(windowToImageFilter.GetOutputPort())
writer.Write()
data = str(buffer(writer.GetResult()))
from IPython.display import Image
return Image(data)
if make_movie:
#print stream_data.GetPoint(0)
from zutil.post import ProgressBar
pb = ProgressBar()
wall = PVDReader( FileName=[get_case_root(case_name,num_procs)+'_wall.pvd'] )
wall.UpdatePipeline()
merged = CleantoGrid(Input=wall)
merged.UpdatePipeline()
point_data = CellDatatoPointData(Input=merged)
point_data.PassCellData = 0
point_data.PieceInvariant = 1
point_data.UpdatePipeline()
total_pts = 100# stream_data.GetNumberOfPoints()
scene = GetAnimationScene()
scene.EndTime = total_pts
scene.PlayMode = 'Snap To TimeSteps'
scene.AnimationTime = 0
a1_yplus_PVLookupTable = GetLookupTableForArray( "yplus", 1, RGBPoints=[96.69050598144531, 0.23, 0.299, 0.754, 24391.206581115723, 0.865, 0.865, 0.865, 48685.72265625, 0.706, 0.016, 0.15], VectorMode='Magnitude', NanColor=[0.25, 0.0, 0.0], ColorSpace='Diverging', ScalarRangeInitialized=1.0 )
a1_yplus_PiecewiseFunction = CreatePiecewiseFunction( Points=[96.69050598144531, 0.0, 0.5, 0.0, 48685.72265625, 1.0, 0.5, 0.0] )
drepr = Show() # GetDisplayProperties( Contour1 )
drepr.EdgeColor = [0.0, 0.0, 0.5000076295109483]
drepr.SelectionPointFieldDataArrayName = 'yplus'
#DataRepresentation4.SelectionCellFieldDataArrayName = 'eddy'
drepr.ColorArrayName = ('POINT_DATA', 'yplus')
drepr.LookupTable = a1_yplus_PVLookupTable
drepr.ScaleFactor = 0.08385616838932038
drepr.Interpolation = 'Flat'
drepr.ScalarOpacityFunction = a1_yplus_PiecewiseFunction
view = GetRenderView()
if not view:
# When using the ParaView UI, the View will be present, not otherwise.
view = CreateRenderView()
scene.ViewModules = [view]
view.CameraViewUp = [0.0, 0.0, 1.0]
view.CameraPosition = list(upstream_data.GetPoint(0))
view.CameraFocalPoint = list(upstream_data.GetPoint(1))
view.CameraParallelScale = 0.499418869125992
view.CenterOfRotation = [49673.0, 58826.0, 1120.0]
view.CenterAxesVisibility = 0
view.ViewSize = [3840,2160]
view.LightSwitch=0
view.UseLight = 1
#RenderView2.SetOffScreenRendering(1)
#Render()
pb+=20
camera = view.GetActiveCamera()
key_frames = []
for p in range(total_pts):
pt = stream_data.GetPoint(p)
#print pt
frame = CameraKeyFrame()
frame.Position = list(pt)
frame.ViewUp = [0.0, 0.0, 1.0]
frame.FocalPoint = camera.GetFocalPoint()
frame.KeyTime = p/total_pts
key_frames.append(frame)
pb+=20
cue = GetCameraTrack()
cue.Mode = 'Interpolate Camera'
cue.AnimatedProxy = view
cue.KeyFrames = key_frames
TimeAnimationCue4 = GetTimeTrack()
scene.Cues = [cue]
for t in range(total_pts-1):
print 'Generating: ' + str(t)
pt = stream_data.GetPoint(t)
view.CameraPosition = list(pt)
view.CameraFocalPoint = list(stream_data.GetPoint(t+1))
#vtk_show(view.GetRenderer())
Render()
#scene.AnimationTime = t
WriteImage('movies/caradonna_'+str(t)+'.png')
pb.complete()
if (validate):
def validate_data(name, value, valid_lower, valid_upper):
if ((value < valid_lower) or (value > valid_upper)):
print 'INVALID: ' + name + ' %.4f '%valid_lower + '%.4f '%value + ' %.4f'%valid_upper
return False
else:
return True
valid = validate_data('C_L[0.50]', cl[0.50], valid_lower_cl_0p50, valid_upper_cl_0p50) and valid
valid = validate_data('C_L[0.68]', cl[0.68], valid_lower_cl_0p68, valid_upper_cl_0p68) and valid
valid = validate_data('C_L[0.80]', cl[0.80], valid_lower_cl_0p80, valid_upper_cl_0p80) and valid
valid = validate_data('C_L[0.89]', cl[0.89], valid_lower_cl_0p89, valid_upper_cl_0p89) and valid
valid = validate_data('C_L[0.96]', cl[0.96], valid_lower_cl_0p96, valid_upper_cl_0p96) and valid
if (valid):
print 'VALIDATION = PASS :-)'
else:
print 'VALIDATION = FAIL :-('
if (regression):
import pandas as pd
pd.options.display.float_format = '{:,.6f}'.format
print 'REGRESSION DATA'
regress = {'version' : ['v0.0', 'v0.1' , 'CURRENT'],
'C_L[0.50]' : [2.217000, 2.217000, cl[0.50]],
'C_L[0.68]' : [0.497464, 0.498132, cl[0.68]],
'C_L[0.80]' : [0.024460, 0.024495, cl[0.80]],
'C_L[0.89]' : [0.014094, 0.014099, cl[0.89]],
'C_L[0.96]' : [0.010366, 0.010396, cl[0.96]]}
regression_table = pd.DataFrame(regress, columns=['version','C_L[0.50]','C_L[0.68]',
'C_L[0.80]','C_L[0.89]','C_L[0.96]'])
print regression_table
if remote_data:
#print 'Disconnecting from remote paraview server connection'
Disconnect()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define Data Location
Step2: Validation and regression
Step3: Initialise Environment
Step4: Data Connection
Step5: Get control dictionary
Step6: Get status file
Step7: Define test conditions
Step8: Cp Profile
Step10: Convergence
Step11: Check validation and regression¶
Step12: Cleaning up
|
4,587
|
<ASSISTANT_TASK:>
Python Code:
keep_prob = 0.5
do_dropout = True
import torch
import copy
w1 = torch.randn(4, 4) # 某层的weights
w = copy.deepcopy(w1)
w
def dropout_strict(w, keep_prob):
implement inverted dropout ensuring that the share of kept neurons is strictly keep_prob.
Args:
w (torch.tensor) : weights before dropout
keep_prob(float) : keep probability
k = round(w.shape[1] * keep_prob)
_, indices = torch.topk(torch.randn(w.shape[0], w.shape[1]), k)
keep = torch.zeros(4, 4).scatter_(dim=1, index=indices, src=torch.ones_like(w))
w *= keep
w /= keep_prob
if do_dropout:
dropout_strict(w, keep_prob)
print(w)
import numpy as np
import copy
w1 = np.random.randn(4, 4) # 某层的weights
w = copy.deepcopy(w1)
w
def dropout_loose(w, keep_prob):
A simple Implementation of inverted dropout.
Args:
w(np.array) :- neurons subject to dropout
keep_prob(float) :- keep probability
keep = np.random.rand(w.shape[0], w.shape[1]) < keep_prob
w *= keep
w /= keep_prob
if do_dropout:
dropout_loose(w, keep_prob)
print(w)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Implement it with PyTorch, ensuring as far as possible that the number of kept neurons in each layer is exactly keep_prob * the number of neurons
Step4: Implement it with NumPy, which is simpler; when the number of neurons is large, the random draws roughly guarantee that the actual kept share matches the keep probability
|
4,588
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
sea_level = 0.5
def wave_height():
hs = np.random.random()
return hs
def wave_period(hs):
tps = np.random.random()
return tps
def water_elevation(hs, tps):
swel = np.random.random()
return swel
def storm_duration(hs, swel):
st_dur = np.random.random()
return st_dur
def inter_storm_duration():
int_st_dur = np.random.random()
return int_st_dur
STORMNUMBER = 0
SUM_TIME = 0
MAX_TIME = 100
STORMS = {}
while SUM_TIME < MAX_TIME:
# Define paramters
hs = wave_height()
tps = wave_period(hs)
swel = water_elevation(hs, tps)
st_dur = storm_duration(hs, swel)
int_st_dur = inter_storm_duration()
# Assign parameters to storm
STORMS[STORMNUMBER] = {'HS':hs, 'TPS':tps, 'SWEL':swel, 'DURATION':st_dur, 'RECOVER':int_st_dur}
# Increment total time
increment = st_dur + int_st_dur
SUM_TIME += increment
STORMNUMBER += 1
len(STORMS)
def Erosion(hs, tps, swel, duration, shoreline, sea_level):
erosion = (hs+tps+swel)*duration*abs(sea_level) #"some function using hs, tps, swel, and duration"
return erosion
def Recovery(int_storm_dur, shoreline, sea_level):
recovery = int_storm_dur*np.e**abs(sea_level)
return recovery
starting_position = 1
shoreline = [starting_position]
shoretime = [0]
i = 0
for storm in STORMS:
# Erode the shoreline
erosion = Erosion(STORMS[storm]['HS'],
STORMS[storm]['TPS'],
STORMS[storm]['SWEL'],
STORMS[storm]['DURATION'],
shoreline[i],
sea_level)
eroded_shoreline = shoreline[i]-erosion
shoreline.append(eroded_shoreline)
shoretime.append(shoretime[i]+STORMS[storm]['DURATION'])
i += 1
# Allow shoreline to recover
recovery = Recovery(STORMS[storm]['RECOVER'],
shoreline[i],
sea_level)
recovered_shoreline = shoreline[i]+recovery
shoreline.append(recovered_shoreline)
shoretime.append(shoretime[i]+STORMS[storm]['RECOVER'])
i += 1
%matplotlib inline
from matplotlib.pyplot import plot
plot(shoretime, shoreline)
plot(shoretime, shoreline)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step 1 <br>
Step1: Step 2 <br>
Step 3<br>
Step2: Step 4<br>
Step3: Step 5<br>
Step4: Step 6<br>
Step 7<br>
|
4,589
|
<ASSISTANT_TASK:>
Python Code:
%load_ext sql
%sql mysql://studentuser:studentpw@mysqlserver/dognitiondb
%sql USE dognitiondb
%config SqlMagic.displaylimit=25
%%sql
SELECT COUNT(breed)
FROM dogs
%%sql
DESCRIBE complete_tests
%%sql
SELECT COUNT(DISTINCT Dog_Guid)
FROM complete_tests
WHERE created_at >= '2014-03-01'
%%sql
SELECT COUNT(*)
FROM dogs
%%sql
SELECT COUNT(exclude)
FROM dogs
%%sql
SELECT COUNT(DISTINCT dog_guid)
FROM dogs
WHERE exclude=1
%%sql
SELECT SUM(ISNULL(exclude))
FROM dogs
%%sql
SELECT test_name,
AVG(rating) AS AVG_Rating,
MIN(rating) AS MIN_Rating,
MAX(rating) AS MAX_Rating
FROM reviews
WHERE test_name="Memory versus Pointing";
%%sql
SELECT TIMESTAMPDIFF(MINUTE, start_time, end_time) AS Duration
FROM exam_answers
LIMIT 0 , 10;
%%sql
DESCRIBE exam_answers
%%sql
SELECT TIMESTAMPDIFF(MINUTE, start_time, end_time) AS Duration, dog_guid, start_time, end_time
FROM exam_answers
LIMIT 10000 , 10;
%%sql
SELECT AVG(TIMESTAMPDIFF(MINUTE, start_time, end_time)) AS Duration_avg
FROM exam_answers
LIMIT 0 , 10;
%%sql
SELECT AVG(TIMESTAMPDIFF(MINUTE, start_time, end_time)) AS Duration_avg
FROM exam_answers
WHERE test_name='Treat Warm-Up'
LIMIT 0 , 10;
%%sql
SELECT COUNT(DISTINCT test_name) AS test_name_num
FROM exam_answers
LIMIT 0 , 10;
%%sql
SELECT COUNT(DISTINCT test_name) AS test_name_num
FROM complete_tests
LIMIT 0 , 10;
%%sql
SELECT
MIN(TIMESTAMPDIFF(MINUTE, start_time, end_time)) AS 'Duration_min',
MAX(TIMESTAMPDIFF(MINUTE, start_time, end_time)) AS 'Duration_max'
FROM exam_answers
LIMIT 0 , 10;
%%sql
SELECT COUNT(start_time)
FROM exam_answers
WHERE TIMESTAMPDIFF(MINUTE, start_time, end_time) < 0
LIMIT 0 , 10;
%%sql
SELECT *
FROM exam_answers
WHERE TIMESTAMPDIFF(MINUTE, start_time, end_time) < 0
LIMIT 0 , 10;
%%sql
SELECT AVG(TIMESTAMPDIFF(MINUTE, start_time, end_time))
FROM exam_answers
WHERE TIMESTAMPDIFF(MINUTE, start_time, end_time) > 0
LIMIT 0 , 10;
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Jupyter interface conveniently tells us how many rows are in our query output, so we can compare the results of the COUNT function to the results of our SELECT function. If you run
Step2: COUNT is reporting how many rows are in the breed column in total. COUNT should give you the same output as Jupyter's output without displaying the actual rows of data that are being aggregated.
Step3: You can use the "*" in the parentheses of a COUNT function to count how many rows are in the entire table (or subtable). There are two fundamental differences between COUNT(*) and COUNT(column_name), though.
Step4: Question 3
Step5: The output of the second query should return a much smaller number than the output of the first query. That's because
Step6: 2. The SUM Function
Step7: The output should return a value of 34,025. When you add that number to the 1025 entries that have an exclude flag, you get a total of 35,050, which is the number of rows reported by SELECT COUNT(*) from dogs.
Step8: What if you wanted the average rating for each of the 40 tests in the Reviews table? One way to do that with the tools you know already is to write 40 separate queries like the ones you wrote above for each test, and then copy or transcribe the results into a separate table in another program like Excel to assemble all the results in one place. That would be a very tedious and time-consuming exercise. Fortunately, there is a very simple way to produce the results you want within one query. That's what we will learn how to do in MySQL Exercise 5. However, it is important that you feel comfortable with the syntax we have learned thus far before we start taking advantage of that functionality. Practice is the best way to become comfortable!
Step9: Question 7
Step10: If you explore your output you will find that some of your calculated durations appear to be "0." In some cases, you will see many entries from the same Dog_ID with the same start time and end time. That should be impossible. These types of entries probably represent tests run by the Dognition team rather than real customer data. In other cases, though, a "0" is entered in the Duration column even though the start_time and end_time are different. This is because we instructed the function to output the time difference in minutes; unless you change your settings, it will output "0" for any time differences less than the integer 1. If you change your function to output the time difference in seconds, the duration in most of these columns will have a non-zero number.
Step11: Question 9
Step12: Question 10
Step13: You should have discovered that the exam_answers table has many more test names than the completed_tests table. It turns out that this table has information about experimental tests that Dognition has not yet made available to its customers.
Step14: The minimum Duration value is negative! The end_times entered in rows with negative Duration values are earlier than the start_times. Unless Dognition has created a time machine, that's impossible and these entries must be mistakes.
Step15: Question 13
Step16: Question 14
|
4,590
|
<ASSISTANT_TASK:>
Python Code:
import graphviz
import nltk
from nltk import Nonterminal
from nltk.parse.generate import generate
from nltk.tree import Tree
def does_tcl_work():
Checks if Tcl is installed and works (e.g. it won't on a headless server).
tree = nltk.tree.Tree('test', [])
try:
tree._repr_png_()
return True
except:
return False
def draw_tree(tree):
Draws an NLTK parse tree via Graphviz.
def draw_tree_rec(curr_root, graph, last_node):
node_id = str(int(last_node) + 1)
for child in curr_root:
if isinstance(child, nltk.tree.Tree):
graph.node(node_id, child.label(), penwidth='0')
graph.edge(last_node, node_id, color='darkslategray3', style='bold')
node_id = draw_tree_rec(child, graph, node_id)
else:
graph.node(node_id, child, penwidth='0')
graph.edge(last_node, node_id, color='darkslategray3', style='bold')
node_id = str(int(node_id) + 1)
return str(int(node_id) + 1)
graph = graphviz.Graph()
graph.graph_attr['ranksep'] = '0.2'
graph.node('0', tree.label(), penwidth='0')
draw_tree_rec(tree, graph, '0')
return graph._repr_svg_()
# Use Graphviz to draw the tree if the Tcl backend of nltk doesn't work
if not does_tcl_work():
svg_formatter = get_ipython().display_formatter.formatters['image/svg+xml']
svg_formatter.for_type(nltk.tree.Tree, draw_tree)
# Delete the nltk drawing function, just to be sure
delattr(Tree, '_repr_png_')
# fromstring() returns a CFG instance from a string
# Observe the two ways one can specify alternations in the grammar
# and how terminal symbols are specified
toy_grammar = nltk.CFG.fromstring(
S -> NP VP
NP -> Pronoun | ProperNoun | Det Nominal
Nominal -> Nominal Noun
Nominal -> Noun
VP -> Verb | Verb PP | Verb NP | Verb NP PP | Verb NP NP | Verb NP NP PP
PP -> Preposition NP
Pronoun -> 'he' | 'she' | 'him' | 'her'
ProperNoun -> 'John' | 'Mary' | 'Fido'
Det -> 'a' | 'an' | 'the'
Noun -> 'flower' | 'bone' | 'necklace' | 'dream' | 'hole' | 'café' | 'house' | 'bed'
Verb -> 'loves' | 'gives' | 'gave' | 'sleeps' | 'digs' | 'dag' | 'ate'
Preposition -> 'in' | 'on' | 'behind'
)
# Now for some properties:
print('Max RHS length:', toy_grammar.max_len())
print('The start symbol is', toy_grammar.start())
print('Is it in CNF:', toy_grammar.is_chomsky_normal_form())
print('Is this a lexical grammar:', toy_grammar.is_lexical())
print('All productions:', toy_grammar.productions())
# Let's generate a few sentences
for sentence in generate(toy_grammar, n=10):
print(' '.join(sentence))
import random
from itertools import count
def generate_sample(grammar, start=None):
Generates a single sentence randomly.
gen = [start or grammar.start()]
curr_p = 0
while curr_p < len(gen):
production = random.choice(grammar.productions(lhs=gen[curr_p]))
if production.is_lexical():
gen[curr_p] = production.rhs()[0]
curr_p += 1
else:
gen = gen[:curr_p] + list(production.rhs()) + gen[curr_p + 1:]
return ' '.join(gen)
def generate_random(grammar, start=None, n=None):
Generates sentences randomly.
for i in count(0):
yield generate_sample(grammar, start)
if i == n:
break
for sentence in generate_random(toy_grammar, n=10):
print(sentence)
toy_parser = nltk.ChartParser(toy_grammar)
# the split() part is important
for tree in toy_parser.parse('John gave Mary a flower in the café'.split()):
display(tree)
# Converts the tree to CNF
tree.chomsky_normal_form()
display(tree)
# Let's convert it back...
tree.un_chomsky_normal_form()
print('The tree has', len(tree), 'children.')
print('The first child is another tree:', tree[0])
print('All nonterminals are Trees. They have labels:', tree[1].label())
print('Terminals are just strings:', tree[0][0][0])
nltk.app.rdparser()
# Your solution here
agr = nltk.CFG.fromstring(
)
aparser = nltk.ChartParser(agr)
# Test
for tree in aparser.parse('1 - 2 / ( 3 - 4 )'.split()):
display(tree)
# Your solution here
# Test
for tree in aparser.parse('1 - 2 / ( 3 - 4 )'.split()):
display(tree)
assert len(list(aparser.parse('1 - 2 + 3 / ( 4 - 5 )'.split()))) > 0
# Your solution here
# Test
tree = list(aparser.parse('1 - 2 / ( 3 - 4 )'.split()))[0]
tree.chomsky_normal_form()
display(tree)
def evaluate_tree(tree):
Returns the value of the expression represented by tree.
pass
# Test
assert evaluate_tree(next(aparser.parse('1+2'))) == 3
assert evaluate_tree(next(aparser.parse('1+2*3'))) == 7
assert evaluate_tree(next(aparser.parse('3/(2-3)-4/2-5'))) == -10
class CKYParser:
pass
import numpy
# Test
grammar = nltk.CFG.fromstring(
S -> NP VP | ProperNoun VP | NP Verb | ProperNoun Verb
NP -> Det Nominal | Det Noun
Nominal -> Nominal Noun | Noun Noun
VP -> Verb NP | Verb ProperNoun
Det -> 'the'
Noun -> 'dog' | 'bit'
ProperNoun -> 'John'
Verb -> 'bit'
)
parser = CKYParser(grammar)
print('Sentence is grammatical:', parser.parse('the dog bit John'.split()))
# Test
parser = CKYParser(grammar)
for tree in parser.parse('the dog bit John'.split()):
display(tree)
from nltk.corpus import treebank
# PTB file ids
print('Ids:', treebank.fileids())
# Words in one of the files
print('Words:', treebank.words('wsj_0003.mrg'))
# Word - POS-tag pairs
print('Tagged words:', treebank.tagged_words('wsj_0003.mrg'))
display(treebank.parsed_sents('wsj_0003.mrg')[0])
# Your solution here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: 10. Syntax — Lab exercises
Step4: Disclaimer
Step7: Unfortunately, generate() only generates the sentences in order. Also, it can run into problems with recursive grammars. Here is a version that generates random sentences.
Step8: Sentences can also be parsed
Step9: The parse returns an iterator of nltk.tree.Tree objects. This class has some useful functions, such as
Step10: Note that in nltk, one can convert a Tree to CNF, but not the whole grammar. nltk has some strange design choices - the other being their reliance on Tcl. If you run this notebook on your own machine, a nifty grammar editing tool will pop up if you run
Step12: 2. Arithmetics
Step13: 2.2 Precedence
Step14: 2.3 CNF
Step16: 2.4 Evaluation*
Step17: 3. CKY
Step19: 3.2 Implement parse()
Step20: 3.3 The full monty
Step21: 4. Treebanks
|
4,591
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy import linalg
#the function to calculate the coefficients
def approx(x, y, n, w):
x = np.matrix(x).transpose()
y = np.matrix(y).transpose()
f, b = x.shape
c, d = y.shape
if c != f or b != d:
print('The Input vector have wrong dimension')
return -1
j = np.matrix(np.arange(1,n+1))
V1 = np.cos(w*x*j)
j = np.matrix(np.arange(n+1,2*n+1))
V2 = np.sin(w*x*(j-n))
V = np.concatenate([V1,V2],axis=1)
Q, R = linalg.qr(V)
R = R[:2 * n, :2 * n]
Q = Q[:f, :2 * n]
# coeff = linalg.solve_triangular(R, (np.dot(Q.transpose(), y)),check_finite=False)
coeff = linalg.solve_triangular(R, (np.dot(Q.transpose(), y)))
n = int(len(coeff) / 2)
mag = np.sqrt(coeff[:n]**2+coeff[n:]**2)
angle = np.arctan2(coeff[:n],coeff[n:])
r = []
for i,(m,a) in enumerate(zip(mag,angle)):
r.append([float(m),i+1,float(a)])
return r
#the function to calculate the reconstructed function from the coefficients
def calc_fourier(X,coeff,vmed,w=0.5):
y = np.zeros_like(X) + vmed
for (m,i,p) in coeff:
y += m*np.sin(w*i*X+p)
return y
#approx a function and get both the coefficients and the reconstructed function
def fourier_approx(funzione,n=0,w=0.5):
fmean = np.mean(funzione)
funzione = list(funzione)
funzione = funzione + funzione[::-1]
mean = np.mean(funzione)
funzione = [z - mean for z in funzione]
T = np.linspace( 0, 4 * np.pi, num=len(funzione), endpoint=True)
if n == 0:
n = int(len(T) / 2) - 1
if n < 1:
return -1
coeff = approx(T, funzione, n, w)
T = np.array(T [:int(len(T)/2)])
funzione = np.array(funzione [:int(len(funzione)/2)])
    y = calc_fourier(T,coeff[:n],0)
return y,coeff
import matplotlib.pyplot as plt
from pprint import pprint
f = np.linspace(0,1,100)
f = np.concatenate((f,f,f,f))
T = np.linspace( 0, 2 * np.pi, num=len(f), endpoint=True) - np.pi
plt.plot(T,f)
plt.show()
n = 50
w = 1
f_mean = np.mean(f)
f -= f_mean
plt.plot(T,f)
plt.show()
x = np.matrix(T).transpose()
y = np.matrix(f).transpose()
j = np.matrix(np.arange(1,n+1))
C1 = np.cos(w*x*j)
j = np.matrix(np.arange(n+1,2*n+1))
C2 = np.sin(w*x*(j-n))
C = np.concatenate([C1,C2],axis=1)
print(C)
Q, R = linalg.qr(C)
R = R[:2 * n, :2 * n]
Q = Q[:x.shape[0], :2 * n]
coeff = linalg.solve_triangular(R, (np.dot(Q.transpose(), y)))
# coeff = linalg.solve_triangular(R, (np.dot(Q.transpose(), y)),check_finite=False) Alternative way
n_ = int(len(coeff) / 2)
sin_coeff = coeff[:n_]
cos_coeff = coeff[n_:]
mag = np.sqrt(cos_coeff**2+sin_coeff**2)
phi = np.arctan2(sin_coeff,cos_coeff)
y = np.zeros_like(T)
for (m,i,p) in zip(mag,range(n),phi):
y += m*np.sin(w*(i+1)*T+p)
y += f_mean
f += f_mean
plt.plot(T,f)
plt.plot(T,y)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Final Code
Step2: How and Why it work
Step3: we start with f that is a sawthoot wave
Step4: we choose the number of armonichs that we want in this case 50 but $ n \in [1,N)$ where $N$ is len(f)
Step5: first we get rid of the mean
Step6: we transforme the array into a coloumn vector
Step7: we create the C matrix
Step8: the QR decomposition
Step9: we truncate the matrix so that it's possible to solve the system
Step10: we solve the system and get the coeff vector
Step11: We separate the matrix into the sin and cos coeff list
Step12: we convert to $c$ and $\phi$ and now we have all the coefficent ready to go
Step13: we calculate the result function and re-add the mean to the funciton
Step14: Result
|
4,592
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy import integrate
def trapz(f, a, b, N):
Integrate the function f(x) over the range [a,b] with N points.
k = np.arange(1,N)
h = (b-a)/N
I = h*0.5*f(a) + h*0.5*f(b) + h*f(a+k*h).sum()
return I
f = lambda x: x**2
g = lambda x: np.sin(x)
I = trapz(f, 0, 1, 1000)
assert np.allclose(I, 0.33333349999999995)
J = trapz(g, 0, np.pi, 1000)
assert np.allclose(J, 1.9999983550656628)
print(trapz(f, 0, 1, 1000))
print(integrate.quad(f, 0, 1)[0])
print('error: '+ str(integrate.quad(f, 0, 1)[1]))
print(trapz(g, 0, np.pi, 1000))
print(integrate.quad(g, 0, np.pi)[0])
print('error: '+ str(integrate.quad(g, 0, np.pi)[1]))
assert True # leave this cell to grade the previous one
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Trapezoidal rule
Step3: Now use scipy.integrate.quad to integrate the f and g functions and see how the result compares with your trapz function. Print the results and errors.
|
4,593
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'ec-earth3-cc', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
|
4,594
|
<ASSISTANT_TASK:>
Python Code:
# DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (0, 10)
# DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    """
    Convert source and target text to proper word ids
    :param source_text: String that contains all the source text.
    :param target_text: String that contains all the target text.
    :param source_vocab_to_int: Dictionary to go from the source words to an id
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: A tuple of lists (source_id_text, target_id_text)
    """
source_text = source_text.split('\n')
source_id_text = []
for s in source_text:
source_sentence = [source_vocab_to_int[w] for w in s.split() if w != '']
source_id_text.append(source_sentence)
target_text = target_text.split('\n')
target_id_text = []
for s in target_text:
target_sentence = [target_vocab_to_int[w] for w in s.split() if w != '']
target_sentence.append(target_vocab_to_int['<EOS>'])
target_id_text.append(target_sentence)
return source_id_text, target_id_text
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
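# Illustrative sketch (not part of the original project): with tiny hypothetical
# vocabularies, the mapping produced by text_to_ids looks like this.
toy_src_vocab = {'new': 4, 'jersey': 5}
toy_tgt_vocab = {'new': 7, 'jersey': 8, '<EOS>': 1}
print(text_to_ids('new jersey', 'new jersey', toy_src_vocab, toy_tgt_vocab))
# -> ([[4, 5]], [[7, 8, 1]])  # <EOS> id appended to every target sentence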
# DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
# DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
# DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
    """
    Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
    :return: Tuple (input, targets, learning rate, keep probability, target sequence length,
             max target sequence length, source sequence length)
    """
input_ = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
target_sequence_length = tf.placeholder(tf.int32, (None,), name='target_sequence_length')
max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_sequence_length')
source_sequence_length = tf.placeholder(tf.int32, (None,), name='source_sequence_length')
return input_, targets, learning_rate, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
    """
    Preprocess target data for encoding
    :param target_data: Target Placeholder
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param batch_size: Batch Size
    :return: Preprocessed target data
    """
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return dec_input
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
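# Illustrative sketch (assumption, not from the original project): the strided_slice /
# concat above amount to this plain-Python step on a single batch row.
toy_go_id = 1                      # hypothetical id for '<GO>'
toy_row = [34, 12, 99, 2]          # hypothetical target ids, ending in <EOS>
print([toy_go_id] + toy_row[:-1])  # [1, 34, 12, 99] -- last token dropped, <GO> prepended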
from imp import reload
reload(tests)
def make_drop_cell(rnn_size, keep_prob):
cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
return cell
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
    """
    Create encoding layer
    :param rnn_inputs: Inputs for the RNN
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param keep_prob: Dropout keep probability
    :param source_sequence_length: a list of the lengths of each sequence in the batch
    :param source_vocab_size: vocabulary size of source data
    :param encoding_embedding_size: embedding size of source data
    :return: tuple (RNN output, RNN state)
    """
embed = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
cell_stack = tf.contrib.rnn.MultiRNNCell([make_drop_cell(rnn_size, keep_prob) for _ in range(num_layers)])
output, state = tf.nn.dynamic_rnn(cell_stack, embed, sequence_length=source_sequence_length, dtype=tf.float32)
return output, state
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
    """
    Create a decoding layer for training
    :param encoder_state: Encoder State
    :param dec_cell: Decoder RNN Cell
    :param dec_embed_input: Decoder embedded input
    :param target_sequence_length: The lengths of each sequence in the target batch
    :param max_summary_length: The length of the longest sequence in the batch
    :param output_layer: Function to apply the output layer
    :param keep_prob: Dropout keep probability
    :return: BasicDecoderOutput containing training logits and sample_id
    """
dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)
helper = tf.contrib.seq2seq.TrainingHelper(
inputs=dec_embed_input,
sequence_length=target_sequence_length,
time_major=False)
decoder = tf.contrib.seq2seq.BasicDecoder(
cell=dec_cell,
helper=helper,
initial_state=encoder_state,
output_layer=output_layer)
decoder_outputs, decoder_state = tf.contrib.seq2seq.dynamic_decode(decoder, impute_finished=True, maximum_iterations=max_summary_length)
return decoder_outputs
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
    """
    Create a decoding layer for inference
    :param encoder_state: Encoder state
    :param dec_cell: Decoder RNN Cell
    :param dec_embeddings: Decoder embeddings
    :param start_of_sequence_id: GO ID
    :param end_of_sequence_id: EOS Id
    :param max_target_sequence_length: Maximum length of target sequences
    :param vocab_size: Size of decoder/target vocabulary
    :param decoding_scope: TensorFlow Variable Scope for decoding
    :param output_layer: Function to apply the output layer
    :param batch_size: Batch size
    :param keep_prob: Dropout keep probability
    :return: BasicDecoderOutput containing inference logits and sample_id
    """
start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens')
helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
dec_embeddings,
start_tokens,
end_of_sequence_id)
decoder = tf.contrib.seq2seq.BasicDecoder(
cell=dec_cell,
helper=helper,
initial_state=encoder_state,
output_layer=output_layer)
decoder_outputs, decoder_state = tf.contrib.seq2seq.dynamic_decode(decoder, impute_finished=True, maximum_iterations=max_target_sequence_length)
return decoder_outputs
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
    """
    Create decoding layer
    :param dec_input: Decoder input
    :param encoder_state: Encoder state
    :param target_sequence_length: The lengths of each sequence in the target batch
    :param max_target_sequence_length: Maximum length of target sequences
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param target_vocab_size: Size of target vocabulary
    :param batch_size: The size of the batch
    :param keep_prob: Dropout keep probability
    :param decoding_embedding_size: Decoding embedding size
    :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
    """
start_of_sequence_id = target_vocab_to_int['<GO>']
end_of_sequence_id = target_vocab_to_int['<EOS>']
# 1. Decoder Embedding
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
# 2. Construct the decoder cell
def make_cell(rnn_size):
cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
return cell
dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])
# 3. Dense layer to translate the decoder's output at each time
# step into a choice from the target vocabulary
output_layer = Dense(target_vocab_size,
kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))
# 4. Training decoder
with tf.variable_scope("decode"):
logits_train = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)
# 5. Inference decoder
with tf.variable_scope("decode", reuse=True):
logits_infer = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob)
return logits_train, logits_infer
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
    """
    Build the Sequence-to-Sequence part of the neural network
    :param input_data: Input placeholder
    :param target_data: Target placeholder
    :param keep_prob: Dropout keep probability placeholder
    :param batch_size: Batch Size
    :param source_sequence_length: Sequence Lengths of source sequences in the batch
    :param target_sequence_length: Sequence Lengths of target sequences in the batch
    :param source_vocab_size: Source vocabulary size
    :param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
    """
enc_output, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, enc_embedding_size)
dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
logits_train, logits_infer = decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)
return logits_train, logits_infer
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 15
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 512
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 200
decoding_embedding_size = 200
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.5
display_step = 25
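# Rough sanity check (illustrative only; relies on source_int_text already loaded above):
# roughly how many parameter updates a single epoch performs with this batch size.
print(len(source_int_text) // batch_size, 'batches per epoch')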
# DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
# DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
    """Pad sentences with <PAD> so that each sentence of a batch has the same length"""
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
    """Batch targets, sources, and the lengths of their sentences together"""
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
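# Illustrative check of the padding helper (hypothetical ids; 0 stands in for <PAD>):
print(pad_sentence_batch([[5, 6], [7, 8, 9]], 0))  # [[5, 6, 0], [7, 8, 9]]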
# DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
    """Calculate accuracy"""
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
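# Illustrative check (hypothetical arrays, not project data): the shorter prediction is
# zero-padded before the element-wise comparison, so 2 of 3 positions match here.
print(get_accuracy(np.array([[1, 2, 3]]), np.array([[1, 2]])))  # ~0.667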
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
# DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
# DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
    """
    Convert a sentence to a sequence of ids
    :param sentence: String
    :param vocab_to_int: Dictionary to go from the words to an id
    :return: List of word ids
    """
sentence = sentence.lower()
sentence_id = []
for s in sentence.split():
try:
sentence_id.append(vocab_to_int[s])
except KeyError:
sentence_id.append(vocab_to_int['<UNK>'])
return sentence_id
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
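# Illustrative sketch (hypothetical toy vocabulary, not the notebook's real one):
# unknown words fall back to the '<UNK>' id and the input is lower-cased first.
toy_vocab = {'he': 5, 'saw': 9, '<UNK>': 2}
print(sentence_to_seq('He saw something', toy_vocab))  # [5, 9, 2]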
translate_sentence = 'he saw a old yellow truck .'
# DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoder Input
Step21: Encoding
Step24: Decoding - Training
Step27: Decoding - Inference
Step30: Build the Decoding Layer
Step33: Build the Neural Network
Step34: Neural Network Training
Step36: Build the Graph
Step40: Batch and pad the source and target sequences
Step43: Train
Step45: Save Parameters
Step47: Checkpoint
Step50: Sentence to Sequence
Step52: Translate
|
4,595
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'hadgem3-gc31-ll', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
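# Illustrative only -- a hypothetical choice from the list above, NOT the documented
# HadGEM3-GC31-LL answer:
# DOC.set_value("AGCM")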
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
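# Illustrative only -- for multi-valued properties each choice is recorded with its own
# set_value call; the value shown is hypothetical, not the documented model answer:
# DOC.set_value("primitive equations")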
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
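# Illustrative only -- free-text resolution string with a made-up value, NOT the
# documented HadGEM3-GC31-LL answer:
# DOC.set_value("250 km")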
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
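# Illustrative only -- numeric properties take a bare number; the figure is hypothetical,
# NOT the documented HadGEM3-GC31-LL answer:
# DOC.set_value(85)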
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
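# Illustrative only -- boolean properties take True/False; the choice is hypothetical,
# NOT the documented HadGEM3-GC31-LL answer:
# DOC.set_value(True)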
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Fluorinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Fluorinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Representation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
4,596
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot([1,2,3], [4,7, -1], 'bo')
fig1 = plt.figure()
# This line will be pasted in by course attendee.
plt.plot([1,2,3], [4,7,-1], '*r')
fig1.savefig("my_first_plot.jpeg", dpi=300)
import pandas as pd
import pandas_datareader.data as web # requires pandas-datareader package
from datetime import datetime
start = datetime(2016,11,9)
# end = datetime(2017, 7, 24)
tickers = ["AAPL", "MMM", "GOOGL"]
stock_panel = web.DataReader(tickers, "google", start, )
closing_pct_change = (stock_panel.loc["Close", :, :]
.pct_change())
# closing_pct_change.to_csv("closing_pct_change.csv")
# backup plan
# closing_pct_change = pd.read_csv("closing_pct_change.csv")
# Use this space to do the following: Plot a simple line graph using plt.plot. Use the data below.
xcoord = [1,2,3,4]
ycoord = [5, -3, 9, -1]
# Answer:
# Copy and paste the code above in the space below
fig2, ax2 = plt.subplots(figsize=(12,6)) # figure size is given in (wide, height) units
for ticker in tickers:
ax2.plot(closing_pct_change[ticker], label=ticker)
plt.legend(loc="best")
ax2.set_title("Stock prices percentage change")
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.style import use
%matplotlib inline
use('ggplot')
xx = np.linspace(0,10,50)
fig, ax = plt.subplots(figsize=(12,12))
for m, marker in enumerate(['-', '--', ':', '-.', 'o', '.', 'v','^', '1', '2','3','4','s', '*', 'H', 'D', '+']):
ax.plot(xx, (m+1)*xx+10, marker, label=marker)
ax.set_title("Marker and Line Style Codes", fontsize=12)
plt.legend(loc="best")
fig3, ax3 = plt.subplots(figsize=(6,6))
marker = ["go", "rD"]
for mark, ticker in zip(marker, tickers[1:]):
ax3.plot(closing_pct_change["AAPL"], closing_pct_change[ticker], mark, label=ticker, alpha=0.5)
ax3.set_xlabel("AAPL Percentage change")
ax3.set_title("Correlation of Percentage changes in closing stock price")
plt.legend(loc="best")
# oecd_currency_xchange.to_csv("oecd_currency_xchange.csv")
import pandas as pd
oecd_currency_xchange = pd.read_csv("oecd_currency_xchange.csv", infer_datetime_format=True, index_col=0, parse_dates=True )
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
n_cols = oecd_currency_xchange.shape[1]
color = sns.color_palette("tab20", n_cols)
non_oecd = ["Brazil", "China (People's Republic of)", "Colombia", "Costa Rica", "India", "South Africa"]
fig4, ax4 = plt.subplots(figsize=(12,12))
for _ in range(n_cols):
plotSeries = oecd_currency_xchange.iloc[:, _]
if plotSeries.name in non_oecd:
marker = '--'
else:
marker = '-'
ax4.plot(plotSeries, marker, label = plotSeries.name, color=color[_])
ax4.set_ylabel("Percent change from year 2000")
ax4.set_xlabel("Year")
ax4.set_title("Currency exchange rate per USD for some OECD and non-OECD countries\n2000 - present", fontsize=14)
plt.legend(loc="best");
fig4.savefig("OECD.jpeg", dpi=300)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we need to import the matplotlib plotting library. We use the import and as keywords to do this. The keyword as allows us to replace the lengthy module name with an alias of our choosing; by convention, use plt.
Step2: Above, you can see our very first plot, created with the plt.plot function
Step3: When run, we see that an instance of a Figure class has been created. However, we do not see anything more that that printed output. In fact, if we were using another Python IDE like Spyder or accessing Python from the command prompt (like IPython), we will see a blank pop window created. Since we are using Jupyter Notebooks, our particular graphical renderer (inline) will not display this.
Step4: The reason for initializing a figure instance calling plt.plot in the same cell is so that axes for this plot (call it ax1) will be contained in fig1. By doing this we can save the figure to a file by running the following command
Step5: Check your current working directory. You should see the jpeg file there now.
Step6: 2.1 Using plt.plot to plot lines
Step7: In matplotlib, a line can also be created for a time series object by passing a Series object indexed by datetime objects. If you pass such a series as an argument to plt.plot, matplotlib is smart enough to know that you are trying to make a time series plot. In our data above, we want to plot stock price changes, which is a type of time series.
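A minimal sketch of that idea, assuming the closing_pct_change DataFrame built in the code above:
fig, ax = plt.subplots()
ax.plot(closing_pct_change["AAPL"])  # a Series with a DatetimeIndex; matplotlib labels the x-axis with dates automatically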
Step8: In matplotlib, an axes can contain many different such lines by repeatedly calling the .plot method on the same axis object. This has the effect of constructing multiple lines on the same plot and is useful for comparing various time series objects.
Step9: Do you notice that the variations in AAPL and GOOGL are highly correlated?
Step10: 2.3.3 Combining line styles with color abbreviations
Step11: Here we see that Google's stock price movements correlate with Apple's stock price movements. This is to be expected as both are technology companies. Compare this with the correlation between 3M and Apple. As 3M is a manufacturing company, this correlation is lower.
|
4,597
|
<ASSISTANT_TASK:>
Python Code:
import os
import pandas as pd
try:
os.makedirs(os.path.join("data", "tutorial"))
print("Pasta criada.")
except OSError:
print("Pasta já existe!")
# file name
file_name = os.path.join("data", "tutorial", "orders.csv")
# build the DataFrame
df = pd.DataFrame(
columns=["user_id", "store_id", "product_id"],
data=[
[1, 1, 1],
[1, 2, 6],
[3, 2, 7],
[2, 2, 3],
[3, 2, 3],
[4, 3, 2],
[2, 3, 1],
[1, 1, 3],
[1, 3, 3],
[5, 1, 4],
[5, 1, 1],
[3, 2, 1],
[1, 2, 1],
[2, 2, 2],
[3, 2, 2],
[4, 2, 3],
[4, 1, 5],
[5, 1, 6],
[3, 1, 6],
[2, 3, 7],
]
)
df
# save the DataFrame
df.to_csv(file_name, sep=",")
# file name
file_name = os.path.join("data", "tutorial", "stores.csv")
# build the DataFrame
df = pd.DataFrame(
columns=["store_id", "store_name"],
data=[
[1, "Pão de Açúcar"],
[2, "Dia"],
[3, "Extra"],
]
).set_index("store_id")
df
# save the DataFrame
df.to_csv(
file_name, # nome do arquivo
sep="\t", # separador TAB
encoding="utf-8" # encoding: garante formato correto de strings no arquivo
)
# file name
file_name = os.path.join("data", "tutorial", "products.csv")
# build the DataFrame
df = pd.DataFrame(
columns=["product_id", "product_name"],
data=[
[1, "Leite"],
[2, "Ovos"],
[3, "Arroz"],
[4, "Feijão"],
[5, "Carne"],
[6, "Frango"],
[7, "Peixe"]
]
).set_index("product_id") # set_index: usa a coluna especificada como índice
df
# save the DataFrame
df.to_csv(
file_name, # nome do arquivo
sep=";", # separador ;
encoding="utf-8" # encoding: garante formato correto de strings no arquivo
)
file_name = os.path.join("data", "tutorial", "orders.csv")
pd.read_csv(file_name)
# setting index_col to the first column
pd.read_csv(file_name, index_col=0)
# setting index_col to the second column
pd.read_csv(file_name, index_col=1)
# changing the sep parameter
pd.read_csv(file_name, sep=";")
# explicitly passing the default value of sep
pd.read_csv(file_name, sep=",")
# using header = None
pd.read_csv(file_name, header=None)
# using another row as the header; note that the rows above it are dropped
pd.read_csv(file_name, header=1)
# using multiple rows as the header
pd.read_csv(file_name, header=[0,1,2])
# reading only two columns
pd.read_csv(file_name, usecols=["product_id", "store_id"])
orders = pd.read_csv(
os.path.join("data", "tutorial", "orders.csv"),
sep=",",
index_col=0
)
stores = pd.read_csv(
os.path.join("data", "tutorial", "stores.csv"),
sep="\t",
index_col=0,
encoding="utf-8"
)
products = pd.read_csv(
os.path.join("data", "tutorial", "products.csv"),
sep=";",
index_col=0,
encoding="utf-8"
)
# new data for 'orders'
new_orders = pd.DataFrame(
columns=["user_id", "store_id", "product_id"],
data=[
[1, 1, 1],
[2, 1, 2],
[3, 2, 1],
[2, 1, 3],
[1, 2, 1],
]
)
new_orders
# appending to the original 'orders'
temp_orders = orders.append(new_orders)
temp_orders
new_orders.append(
pd.DataFrame(
columns=list(new_orders.columns) + ["new_column"],
data=[
[1, 2, 1, 51],
[1, 2, 1, 15]
]
)
)
new_stores = pd.DataFrame(
index=stores.index,
columns=["opens", "closes"],
data=[
["9:00", "22:00"],
["8:00", "18:00"],
["11:00", "21:00"],
]
)
new_stores
# joining compatible structures
temp_stores = stores.join(new_stores)
temp_stores
# changing the indices for the next examples
new_stores.index = [3, 4, 5]
# joining structures with different indices
stores.join(new_stores)
# passing 'how=left' explicitly
stores.join(new_stores, how="left")
# right join
stores.join(new_stores, how="right")
# joining structures with different indices
stores.join(new_stores, how="outer")
# joining structures with different indices
stores.join(new_stores, how="inner")
new_stores.join(new_stores, lsuffix="_original")
new_stores.join(new_stores, rsuffix="_copycat")
new_stores.join(new_stores, lsuffix="_original", rsuffix="_copycat")
# reset_index to free the key columns for the merge
stores = stores.reset_index()
products = products.reset_index()
# adding store_name
df = pd.merge(
left=orders,
right=stores,
on="store_id"
)
df
# adding product_name
df = pd.merge(
left=df,
right=products,
on="product_id"
)
df
path = os.path.join("data", "desafio")
input_file = {
"aisles": os.path.join(path, "aisles.csv"),
"departments": os.path.join(path, "departments.csv"),
"order_products": os.path.join(path, "order_products.csv"),
"orders": os.path.join(path, "orders.csv"),
"products": os.path.join(path, "products.csv")
}
output_file = os.path.join(path, "dataset.csv")
# Enter your code here
# Enter your code here
# Enter your code here
# Enter your code here
# Enter your code here
# Enter your code here
dataset.head(7)
dataset.tail(3)
# Enter your code here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Operações com Arquivos
Step5: Dataset orders.csv
Step9: Dataset stores.csv
Step13: Dataset products.csv
Step14: Leitura de Arquivos
Step17: Parâmetro index_col
Step20: Parâmetro sep
Step24: Parâmetro header
Step26: Parâmetro usecols
Step27: Agrupando DataFrames
Step30: Estruturas Homogêneas
Step31: Observações
Step33: Adicionando Colunas
Step40: Observações
Step41: Observação
Step45: Estruturas Heterogêneas
Step46: Desafio
Step47: Arquivo de Resposta
Step49: Construção do Dataset
Step51: products
Step53: aisles
Step55: department
Step57: order_products
Step59: Construção do Dataset
Step60: Visualização do Dataset
Step62: Salvando o Dataset
|
4,598
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Eric Larson <larson.eric.d@gmail.com>
# License: BSD (3-clause)
import os.path as op
import numpy as np
from numpy.random import randn
from scipy import stats as stats
import mne
from mne import (io, spatial_tris_connectivity, compute_morph_matrix,
grade_to_tris)
from mne.epochs import equalize_epoch_counts
from mne.stats import (spatio_temporal_cluster_1samp_test,
summarize_clusters_stc)
from mne.minimum_norm import apply_inverse, read_inverse_operator
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
subjects_dir = data_path + '/subjects'
tmin = -0.2
tmax = 0.3 # Use a lower tmax to reduce multiple comparisons
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
raw.info['bads'] += ['MEG 2443']
picks = mne.pick_types(raw.info, meg=True, eog=True, exclude='bads')
event_id = 1 # L auditory
reject = dict(grad=1000e-13, mag=4000e-15, eog=150e-6)
epochs1 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
event_id = 3 # L visual
epochs2 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
# Equalize trial counts to eliminate bias (which would otherwise be
# introduced by the abs() performed below)
equalize_epoch_counts([epochs1, epochs2])
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
inverse_operator = read_inverse_operator(fname_inv)
sample_vertices = [s['vertno'] for s in inverse_operator['src']]
# Let's average and compute inverse, resampling to speed things up
evoked1 = epochs1.average()
evoked1.resample(50, npad='auto')
condition1 = apply_inverse(evoked1, inverse_operator, lambda2, method)
evoked2 = epochs2.average()
evoked2.resample(50, npad='auto')
condition2 = apply_inverse(evoked2, inverse_operator, lambda2, method)
# Let's only deal with t > 0, cropping to reduce multiple comparisons
condition1.crop(0, None)
condition2.crop(0, None)
tmin = condition1.tmin
tstep = condition1.tstep
n_vertices_sample, n_times = condition1.data.shape
n_subjects = 7
print('Simulating data for %d subjects.' % n_subjects)
# Let's make sure our results replicate, so set the seed.
np.random.seed(0)
X = randn(n_vertices_sample, n_times, n_subjects, 2) * 10
X[:, :, :, 0] += condition1.data[:, :, np.newaxis]
X[:, :, :, 1] += condition2.data[:, :, np.newaxis]
fsave_vertices = [np.arange(10242), np.arange(10242)]
morph_mat = compute_morph_matrix('sample', 'fsaverage', sample_vertices,
fsave_vertices, 20, subjects_dir)
n_vertices_fsave = morph_mat.shape[0]
# We have to change the shape for the dot() to work properly
X = X.reshape(n_vertices_sample, n_times * n_subjects * 2)
print('Morphing data.')
X = morph_mat.dot(X) # morph_mat is a sparse matrix
X = X.reshape(n_vertices_fsave, n_times, n_subjects, 2)
X = np.abs(X) # only magnitude
X = X[:, :, :, 0] - X[:, :, :, 1] # make paired contrast
print('Computing connectivity.')
connectivity = spatial_tris_connectivity(grade_to_tris(5))
# Note that X needs to be a multi-dimensional array of shape
# samples (subjects) x time x space, so we permute dimensions
X = np.transpose(X, [2, 1, 0])
# Now let's actually do the clustering. This can take a long time...
# Here we set the threshold quite high to reduce computation.
p_threshold = 0.001
t_threshold = -stats.distributions.t.ppf(p_threshold / 2., n_subjects - 1)
print('Clustering.')
T_obs, clusters, cluster_p_values, H0 = clu = \
spatio_temporal_cluster_1samp_test(X, connectivity=connectivity, n_jobs=1,
threshold=t_threshold)
# Now select the clusters that are sig. at p < 0.05 (note that this value
# is multiple-comparisons corrected).
good_cluster_inds = np.where(cluster_p_values < 0.05)[0]
print('Visualizing clusters.')
# Now let's build a convenient representation of each cluster, where each
# cluster becomes a "time point" in the SourceEstimate
stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,
vertices=fsave_vertices,
subject='fsaverage')
# Let's actually plot the first "time point" in the SourceEstimate, which
# shows all the clusters, weighted by duration
subjects_dir = op.join(data_path, 'subjects')
# blue blobs are for condition A < condition B, red for A > B
brain = stc_all_cluster_vis.plot(hemi='both', subjects_dir=subjects_dir,
time_label='Duration significant (ms)')
brain.set_data_time_index(0)
brain.show_view('lateral')
brain.save_image('clusters.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Read epochs for all channels, removing a bad one
Step3: Transform to source space
Step4: Transform to common cortical space
Step5: It's a good idea to spatially smooth the data, and for visualization
Step6: Finally, we want to compare the overall activity levels in each condition,
Step7: Compute statistic
Step8: Visualize the clusters
|
4,599
|
<ASSISTANT_TASK:>
Python Code:
def to_binary(x):
the_sum = 0
# enumerate returns pairs of values from `x`
# as well as the index of each value
for index, value in enumerate(x):
the_sum += value * 2**index
return the_sum
my_list = [1, 1]
to_binary(my_list)
my_list = [1, 0, 0, 0, 1, 1, 0, 1]
to_binary(my_list)
import os
with open('data/alice_in_wonderland.txt', 'r') as file:
alice = file.read()
# how many characters are in Alice?
print('number of characters is {}'.format(len(alice)))
# how large is the file on disk?
print('number of bytes on disk is {}'.format(os.path.getsize('data/alice_in_wonderland.txt')))
# non-ASCII characters are characters that use more
# than 1 byte to represent the character
non_ascii = []
for character in alice:
# convert character to Unicode bytes and check how many bytes there are
if len(bytes(character, 'UTF-8')) > 1:
non_ascii.append(character)
# convert list to set to get only the unique characters
print('unique non-ASCII characters:', set(non_ascii))
import pickle
# open a file in write mode ('w') to write plain text
with open('data/alice_partial.txt', 'w') as file:
file.write(alice[:10000])
# open a file in write-binary ('wb') mode to write pickle protocol
with open('data/alice_partial.pickle', 'wb') as file:
pickle.dump(alice[:10000], file)
print('size of plain text file: {}'.format(os.path.getsize('data/alice_partial.txt')))
print('size of pickled file: {}'.format(os.path.getsize('data/alice_partial.pickle')))
import json
# use the `json` library to read json-structured plain text into Python objects
with open('data/good_movies.json', 'r') as file:
good_movies = json.loads(file.read())
# iterate over the movies, checking the list of stars for each
for movie in good_movies:
if 'Ben Affleck' in movie['stars']:
print(movie['title'])
# iterate over the movies, tallying the Oscars for movies in 2016
nominations_2016 = 0
for movie in good_movies:
if movie['year'] == 2016:
nominations_2016 += movie['oscar_nominations']
print(nominations_2016)
import numpy as np
rand_array = np.random.randint(1, high=100, size=100000)
def my_average(x):
the_sum = 0
for el in x:
the_sum += el
return the_sum / len(x)
def my_stdev(x):
the_sum = 0
the_avg = my_average(x)
for xi in x:
the_sum += (xi - the_avg) ** 2
return np.sqrt(the_sum / len(x))
def my_weighted_average(x, weights):
the_sum = 0
for el, weight in zip(x, weights):
the_sum += el * weight
return the_sum
print('average:', my_average(rand_array))
print('standard deviation:', my_stdev(rand_array))
rand_weights = np.random.random(size=100000)
rand_weights /= np.sum(rand_weights)
print('weighted average:', my_weighted_average(rand_array, rand_weights))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: One note - there are actually 2 possible solutions to this problem, depending on which value of [1, 0, 0, 0, 1, 1, 0, 1] is treated as the least-significant bit (LSB). The solution above treats the left-most bit as the LSB (i.e. the bit that gets multiplied by $2^0=1$). How would you rewrite the function to treat the right-most bit as the LSB?
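One possible answer, sketched as a variant of the function above (the name to_binary_rightmost_lsb is chosen here only for illustration):
def to_binary_rightmost_lsb(x):
    # reverse the list so the right-most element is paired with 2**0
    the_sum = 0
    for index, value in enumerate(reversed(x)):
        the_sum += value * 2**index
    return the_sum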
Step2: So this tells us that there are non-ASCII characters (characters that use more than 1 byte) in the file
Step3: Problem 3
Step4: Problem 4
Step5: A weight vector needs to sum to 1. So we'll create a vector of random numbers between 0 and 1 and normalize it (divide by its sum) so that it sums to 1.
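A small numeric illustration of that normalization (the values are chosen only for illustration):
w = np.array([2.0, 1.0, 1.0])
w /= np.sum(w)  # w becomes [0.5, 0.25, 0.25], which sums to 1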
|