| text_prompt (stringlengths 168-30.3k) | code_prompt (stringlengths 67-124k) |
|---|---|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Read in the hanford.csv file in the data/ folder
Step2: <img src="../../images/hanford_variables.png"></img>
Step3: 4. Find a reasonable threshold to say exposure is high and recode the data
Step4: 5. Create a logistic regression model
Step5: 6. Predict whether the mortality rate (Cancer per 100,000 man years) will be high at an exposure level of 50
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
%matplotlib inline
import numpy as np
from sklearn.linear_model import LogisticRegression
df = pd.read_csv('../data/hanford.csv')
df.corr()
df.describe()
Q1 = df['Exposure'].quantile(q=0.25)
Q1
Q2 = df['Exposure'].quantile(q=0.5)
Q2
Q3 = df['Exposure'].quantile(q=0.75)
Q3
df['Mortality'].hist(bins=5)
df['Mortality'].mean()
df['Mort_high'] = df['Mortality'].apply(lambda x:1 if x>=157.1 else 0)
df['Expo_high'] = df['Exposure'].apply(lambda x:1 if x>=3.41 else 0)
df # high and low mortality are now classified
lm = LogisticRegression()
lm = lm.fit(df[['Exposure']], df['Mort_high'])
lm.predict([[50]])
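# Illustrative extra step (not part of the original exercise): predict_proba shows the
# estimated probability of high mortality at an exposure of 50, not just the 0/1 label.
lm.predict_proba([[50]])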
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: By default nbtlib.load will figure out by itself if the specified file is gzipped, but you can also use the gzipped= keyword only argument if you know in advance whether the file is gzipped or not.
Step2: The nbtlib.load function also accepts the byteorder= keyword only argument. It lets you specify whether the file is big-endian or little-endian. The default value is 'big', which means that the file is interpreted as big-endian by default. You can set it to 'little' to use the little-endian format.
Step3: Objects returned by the nbtlib.load function are instances of the nbtlib.File class. The nbtlib.load function is actually a small helper around the File.load classmethod. If you need to load files from an already opened file-like object, you can use the File.parse class method.
Step4: The File class inherits from Compound, which inherits from dict. This means that you can use standard dict operations to access data inside of the file.
Step5: Modifying files
Step6: If you don't want to use a context manager, you can call the .save method manually to overwrite the original file or make a copy by specifying a different path. The .save method also accepts the gzipped= keyword only argument. By default, the copy will be gzipped if the original file is gzipped. Similarly, you can use the byteorder= keyword only argument to specify whether the file should be saved using the big-endian or little-endian format. By default, the copy will be saved using the same format as the original file.
Step7: You can also write nbt data to an already opened file-like object using the .write method.
Step8: Creating files
Step9: New files are uncompressed by default. You can use the gzipped= keyword only argument to create a gzipped file. New files are also big-endian by default. You can use the byteorder= keyword only argument to set the endianness of the file to either 'big' or 'little'.
Step10: Performing operations on tags
Step11: Serializing nbt tags to snbt
Step12: Converting nbt tags to strings will serialize them to snbt. If you want more control over the way nbt tags are serialized, you can use the nbtlib.serialize_tag function. In fact, using str on nbt tags simply calls nbtlib.serialize_tag on the specified tag.
Step13: You might have noticed that by default, the nbtlib.serialize_tag function will render strings with single ' or double " quotes based on their content to avoid escaping quoting characters. The string is serialized such that the type of quotes used is different from the first quoting character found in the string. If the string doesn't contain any quoting character, the nbtlib.serialize_tag function will render the string as a double " quoted string.
Step14: You can overwrite this behavior by setting the quote= keyword only argument to either a single ' or a double " quote.
Step15: The nbtlib.serialize_tag function can be used with the compact= keyword only argument to remove all the extra whitespace from the output.
Step16: If you'd rather have something a bit more readable, you can use the indent= keyword only argument to tell the nbtlib.serialize_tag function to output indented snbt. The argument can be either a string or an integer and will be used to define how to render each indentation level.
Step17: If you need the output to be indented with tabs instead, you can set the indent= argument to '\t'.
Step18: Note that the indent= keyword only argument can be set to any string, not just '\t'.
Step19: Creating tags from nbt literals
Step21: Note that the parser ignores whitespace.
Step22: Defining schemas
Step23: By default, you can interact with keys that are not defined in the schema. However, if you use the strict= keyword only argument, the schema instance will raise a TypeError whenever you try to access a key that wasn't defined in the original schema.
Step24: The schema function is a helper that creates a class that inherits from CompoundSchema. This means that you can also inherit from the class manually.
Step25: You can also set the strict class attribute to True to create a strict schema type.
Step26: Combining schemas and custom file types
Step27: Now let's test our schema by creating a structure. We can see that all the types are automatically applied.
Step28: Now we can create a custom file type that wraps our structure schema. Since structure files are always gzipped, we can override the load method to default the gzipped argument to True. We also override the constructor so that it can take an instance of our structure schema directly as an argument.
Step29: We can now use the custom file type to load, edit and save structure files without having to specify the tags manually.
Step30: So now let's try to edit the structure. We're going to replace all the dirt blocks with stone blocks.
Step31: As you can see we didn't need to specify any tag to edit the file.
|
<ASSISTANT_TASK:>
Python Code:
import nbtlib
nbt_file = nbtlib.load('nbt_files/bigtest.nbt')
nbt_file['stringTest']
uncompressed_file = nbtlib.load('nbt_files/hello_world.nbt', gzipped=False)
uncompressed_file.gzipped
little_endian_file = nbtlib.load('nbt_files/hello_world_little.nbt', byteorder='little')
little_endian_file.byteorder
from nbtlib import File
with open('nbt_files/hello_world.nbt', 'rb') as f:
hello_world = File.parse(f)
hello_world
nbt_file.keys()
from nbtlib.tag import *
with nbtlib.load('nbt_files/demo.nbt') as demo:
demo['counter'] = Int(demo['counter'] + 1)
demo
demo = nbtlib.load('nbt_files/demo.nbt')
...
demo.save() # overwrite
demo.save('nbt_files/demo_copy.nbt', gzipped=True) # make a gzipped copy
demo.save('nbt_files/demo_little.nbt', byteorder='little') # convert the file to little-endian
nbtlib.load('nbt_files/demo_copy.nbt')['counter']
nbtlib.load('nbt_files/demo_little.nbt', byteorder='little')['counter']
with open('nbt_files/demo_copy.nbt', 'wb') as f:
demo.write(f)
new_file = File({
'foo': String('bar'),
'spam': IntArray([1, 2, 3]),
'egg': List[String](['hello', 'world'])
})
new_file.save('nbt_files/new_file.nbt')
loaded_file = nbtlib.load('nbt_files/new_file.nbt')
loaded_file.gzipped
loaded_file.byteorder
new_file = File(
{'thing': LongArray([1, 2, 3])},
gzipped=True,
byteorder='little'
)
new_file.save('nbt_files/new_file_gzipped_little.nbt')
loaded_file = nbtlib.load('nbt_files/new_file_gzipped_little.nbt', byteorder='little')
loaded_file.gzipped
loaded_file.byteorder
my_list = List[String](char.upper() for char in 'hello')
my_list.reverse()
my_list[3:]
my_array = IntArray([1, 2, 3])
my_array + 100
my_pizza = Compound({
'name': String('Margherita'),
'price': Double(5.7),
'size': String('medium')
})
my_pizza.update({'name': String('Calzone'), 'size': String('large')})
my_pizza['price'] = Double(my_pizza['price'] + 2.5)
my_pizza
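# Since Compound inherits from dict, the usual dict idioms also apply (illustrative sketch):
print('size' in my_pizza)           # membership test
for key, tag in my_pizza.items():   # iterate over (key, tag) pairs
    print(key, tag)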
example_tag = Compound({
'numbers': IntArray([1, 2, 3]),
'foo': String('bar'),
'syntax breaking': Float(42),
'spam': String('{"text":"Hello, world!\\n"}')
})
print(repr(example_tag))
print(str(example_tag))
print(example_tag)
from nbtlib import serialize_tag
print(serialize_tag(example_tag))
serialize_tag(example_tag) == str(example_tag)
print(String("contains 'single' quotes"))
print(String('contains "double" quotes'))
print(String('''contains 'single' and "double" quotes'''))
print(serialize_tag(String('forcing "double" quotes'), quote='"'))
print(serialize_tag(example_tag, compact=True))
nested_tag = Compound({
'foo': List[Int]([1, 2, 3]),
'bar': String('name'),
'values': List[Compound]([
{'test': String('a'), 'thing': ByteArray([32, 32, 32])},
{'test': String('b'), 'thing': ByteArray([64, 64, 64])}
])
})
print(serialize_tag(nested_tag, indent=4))
print(serialize_tag(nested_tag, indent='\t'))
print(serialize_tag(nested_tag, indent='. '))
from nbtlib import parse_nbt
parse_nbt('hello')
parse_nbt('{foo:[{bar:[I;1,2,3]},{spam:6.7f}]}')
parse_nbt("""{
    foo: [1, 2, 3],
    bar: "name",
    values: [
        {
            test: "a",
            thing: [B; 32B, 32B, 32B]
        },
        {
            test: "b",
            thing: [B; 64B, 64B, 64B]
        }
    ]
}""")
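# Round trip (illustrative sketch): serializing a parsed literal reproduces equivalent snbt,
# since parse_nbt and serialize_tag are inverse operations.
print(serialize_tag(parse_nbt('{foo:[{bar:[I;1,2,3]},{spam:6.7f}]}'), compact=True))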
from nbtlib import schema
MySchema = schema('MySchema', {
'foo': String,
'bar': Short
})
my_object = MySchema({'foo': 'hello world', 'bar': 21})
my_object['bar'] *= 2
my_object
MyStrictSchema = schema('MyStrictSchema', {
'foo': String,
'bar': Short
}, strict=True)
strict_instance = MyStrictSchema()
strict_instance.update({'foo': 'hello world'})
strict_instance
try:
strict_instance['something'] = List[String](['this', 'raises', 'an', 'error'])
except TypeError as exc:
print(exc)
from nbtlib import CompoundSchema
class MySchema(CompoundSchema):
schema = {
'foo': String,
'bar': Short
}
MySchema({'foo': 'hello world', 'bar': 42})
class MyStrictSchema(CompoundSchema):
schema = {
'foo': String,
'bar': Short
}
strict = True
try:
MyStrictSchema({'something': Byte(5)})
except TypeError as exc:
print(exc)
Structure = schema('Structure', {
'DataVersion': Int,
'author': String,
'size': List[Int],
'palette': List[schema('State', {
'Name': String,
'Properties': Compound,
})],
'blocks': List[schema('Block', {
'state': Int,
'pos': List[Int],
'nbt': Compound,
})],
'entities': List[schema('Entity', {
'pos': List[Double],
'blockPos': List[Int],
'nbt': Compound,
})],
})
new_structure = Structure({
'DataVersion': 1139,
'author': 'dinnerbone',
'size': [1, 2, 1],
'palette': [
{'Name': 'minecraft:dirt'}
],
'blocks': [
{'pos': [0, 0, 0], 'state': 0},
{'pos': [0, 1, 0], 'state': 0}
],
'entities': [],
})
type(new_structure['blocks'][0]['pos'])
type(new_structure['entities'])
class StructureFile(File, Structure):
def __init__(self, structure_data=None):
super().__init__(structure_data or {})
self.gzipped = True
@classmethod
def load(cls, filename, gzipped=True):
return super().load(filename, gzipped)
structure_file = StructureFile(new_structure)
structure_file.save('nbt_files/new_structure.nbt') # you can load it in a minecraft world!
with StructureFile.load('nbt_files/new_structure.nbt') as structure_file:
structure_file['palette'][0]['Name'] = 'minecraft:stone'
print(serialize_tag(StructureFile.load('nbt_files/new_structure.nbt'), indent=4))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The pressure Poisson equation that's written above can be hard to write out without typos. The function build_up_b below represents the contents of the square brackets, so that the entirety of the Poisson pressure equation is slightly more manageable.
Step2: The function pressure_poisson is also defined to help segregate the different rounds of calculations. Note the presence of the pseudo-time variable nit. This sub-iteration in the Poisson calculation helps ensure a divergence-free field.
Step3: Finally, the rest of the cavity flow equations are wrapped inside the function cavity_flow, allowing us to easily plot the results of the cavity flow solver for different lengths of time.
Step4: Validation
Step5: Devito Implementation
Step6: Reminder
Step7: Validation
Step8: The Devito implementation produces results consistent with the benchmark solution. There is a small disparity in a few of the velocity values, but this is expected as the Devito 41 x 41 node grid is much coarser than the benchmark on a 1024 x 1024 node grid.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from matplotlib import pyplot, cm
%matplotlib inline
nx = 41
ny = 41
nt = 1000
nit = 50
c = 1
dx = 1. / (nx - 1)
dy = 1. / (ny - 1)
x = np.linspace(0, 1, nx)
y = np.linspace(0, 1, ny)
Y, X = np.meshgrid(x, y)
rho = 1
nu = .1
dt = .001
u = np.zeros((nx, ny))
v = np.zeros((nx, ny))
p = np.zeros((nx, ny))
def build_up_b(b, rho, dt, u, v, dx, dy):
b[1:-1, 1:-1] = (rho * (1 / dt *
((u[2:, 1:-1] - u[0:-2, 1:-1]) /
(2 * dx) + (v[1:-1, 2:] - v[1:-1, 0:-2]) / (2 * dy)) -
((u[2:, 1:-1] - u[0:-2, 1:-1]) / (2 * dx))**2 -
2 * ((u[1:-1, 2:] - u[1:-1, 0:-2]) / (2 * dy) *
(v[2:, 1:-1] - v[0:-2, 1:-1]) / (2 * dx))-
((v[1:-1, 2:] - v[1:-1, 0:-2]) / (2 * dy))**2))
return b
def pressure_poisson(p, dx, dy, b):
pn = np.empty_like(p)
pn = p.copy()
for q in range(nit):
pn = p.copy()
p[1:-1, 1:-1] = (((pn[2:, 1:-1] + pn[0:-2, 1:-1]) * dy**2 +
(pn[1:-1, 2:] + pn[1:-1, 0:-2]) * dx**2) /
(2 * (dx**2 + dy**2)) -
dx**2 * dy**2 / (2 * (dx**2 + dy**2)) *
b[1:-1,1:-1])
p[-1, :] = p[-2, :] # dp/dx = 0 at x = 1
p[:, 0] = p[:, 1] # dp/dy = 0 at y = 0
p[0, :] = p[1, :] # dp/dx = 0 at x = 0
p[:, -1] = p[:, -2] # dp/dy = 0 at y = 1
p[0, 0] = 0
return p, pn
def cavity_flow(nt, u, v, dt, dx, dy, p, rho, nu):
un = np.empty_like(u)
vn = np.empty_like(v)
b = np.zeros((nx, ny))
for n in range(0,nt):
un = u.copy()
vn = v.copy()
b = build_up_b(b, rho, dt, u, v, dx, dy)
p = pressure_poisson(p, dx, dy, b)[0]
pn = pressure_poisson(p, dx, dy, b)[1]
u[1:-1, 1:-1] = (un[1:-1, 1:-1]-
un[1:-1, 1:-1] * dt / dx *
(un[1:-1, 1:-1] - un[0:-2, 1:-1]) -
vn[1:-1, 1:-1] * dt / dy *
(un[1:-1, 1:-1] - un[1:-1, 0:-2]) -
dt / (2 * rho * dx) * (p[2:, 1:-1] - p[0:-2, 1:-1]) +
nu * (dt / dx**2 *
(un[2:, 1:-1] - 2 * un[1:-1, 1:-1] + un[0:-2, 1:-1]) +
dt / dy**2 *
(un[1:-1, 2:] - 2 * un[1:-1, 1:-1] + un[1:-1, 0:-2])))
v[1:-1,1:-1] = (vn[1:-1, 1:-1] -
un[1:-1, 1:-1] * dt / dx *
(vn[1:-1, 1:-1] - vn[0:-2, 1:-1]) -
vn[1:-1, 1:-1] * dt / dy *
(vn[1:-1, 1:-1] - vn[1:-1, 0:-2]) -
dt / (2 * rho * dy) * (p[1:-1, 2:] - p[1:-1, 0:-2]) +
nu * (dt / dx**2 *
(vn[2:, 1:-1] - 2 * vn[1:-1, 1:-1] + vn[0:-2, 1:-1]) +
dt / dy**2 *
(vn[1:-1, 2:] - 2 * vn[1:-1, 1:-1] + vn[1:-1, 0:-2])))
u[:, 0] = 0
u[0, :] = 0
u[-1, :] = 0
u[:, -1] = 1 # Set velocity on cavity lid equal to 1
v[:, 0] = 0
v[:, -1] = 0
v[0, :] = 0
v[-1, :] = 0
return u, v, p, pn
#NBVAL_IGNORE_OUTPUT
u = np.zeros((nx, ny))
v = np.zeros((nx, ny))
p = np.zeros((nx, ny))
b = np.zeros((nx, ny))
nt = 1000
# Store the output velocity and pressure fields in the variables a, b and c.
# This is so they do not clash with the devito outputs below.
a, b, c, d = cavity_flow(nt, u, v, dt, dx, dy, p, rho, nu)
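# Illustrative sanity check (not in the original notebook): the pressure sub-iterations are
# meant to keep the velocity field approximately divergence-free, so the discrete divergence
# du/dx + dv/dy on the interior points should stay small.
div = ((a[2:, 1:-1] - a[0:-2, 1:-1]) / (2 * dx) +
       (b[1:-1, 2:] - b[1:-1, 0:-2]) / (2 * dy))
print("max |du/dx + dv/dy| on the interior:", np.abs(div).max())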
fig = pyplot.figure(figsize=(11, 7), dpi=100)
pyplot.contourf(X, Y, c, alpha=0.5, cmap=cm.viridis)
pyplot.colorbar()
pyplot.contour(X, Y, c, cmap=cm.viridis)
pyplot.quiver(X[::2, ::2], Y[::2, ::2], a[::2, ::2], b[::2, ::2])
pyplot.xlabel('X')
pyplot.ylabel('Y');
# Import u values at x=L/2 (table 6, column 2 rows 12-26) in Marchi et al.
Marchi_Re10_u = np.array([[0.0625, -3.85425800e-2],
[0.125, -6.96238561e-2],
[0.1875, -9.6983962e-2],
[0.25, -1.22721979e-1],
[0.3125, -1.47636199e-1],
[0.375, -1.71260757e-1],
[0.4375, -1.91677043e-1],
[0.5, -2.05164738e-1],
[0.5625, -2.05770198e-1],
[0.625, -1.84928116e-1],
[0.6875, -1.313892353e-1],
[0.75, -3.1879308e-2],
[0.8125, 1.26912095e-1],
[0.875, 3.54430364e-1],
[0.9375, 6.50529292e-1]])
# Import v values at y=L/2 (table 6, column 2 rows 27-41) in Marchi et al.
Marchi_Re10_v = np.array([[0.0625, 9.2970121e-2],
[0.125, 1.52547843e-1],
[0.1875, 1.78781456e-1],
[0.25, 1.76415100e-1],
[0.3125, 1.52055820e-1],
[0.375, 1.121477612e-1],
[0.4375, 6.21048147e-2],
[0.5, 6.3603620e-3],
[0.5625,-5.10417285e-2],
[0.625, -1.056157259e-1],
[0.6875,-1.51622101e-1],
[0.75, -1.81633561e-1],
[0.8125,-1.87021651e-1],
[0.875, -1.59898186e-1],
[0.9375,-9.6409942e-2]])
#NBVAL_IGNORE_OUTPUT
# Check results with Marchi et al 2009.
npgrid=[nx,ny]
x_coord = np.linspace(0, 1, npgrid[0])
y_coord = np.linspace(0, 1, npgrid[1])
fig = pyplot.figure(figsize=(12, 6))
ax1 = fig.add_subplot(121)
ax1.plot(a[int(npgrid[0]/2),:],y_coord[:])
ax1.plot(Marchi_Re10_u[:,1],Marchi_Re10_u[:,0],'ro')
ax1.set_xlabel('$u$')
ax1.set_ylabel('$y$')
ax1 = fig.add_subplot(122)
ax1.plot(x_coord[:],b[:,int(npgrid[1]/2)])
ax1.plot(Marchi_Re10_v[:,0],Marchi_Re10_v[:,1],'ro')
ax1.set_xlabel('$x$')
ax1.set_ylabel('$v$')
pyplot.show()
from devito import Grid
grid = Grid(shape=(nx, ny), extent=(1., 1.))
x, y = grid.dimensions
t = grid.stepping_dim
from devito import TimeFunction, Function, \
Eq, solve, Operator, configuration
# Build Required Functions and derivatives:
# --------------------------------------
# |Variable | Required Derivatives |
# --------------------------------------
# | u | dt, dx, dy, dx**2, dy**2 |
# | v | dt, dx, dy, dx**2, dy**2 |
# | p | dx, dy, dx**2, dy**2 |
# | pn | dx, dy, dx**2, dy**2 |
# --------------------------------------
u = TimeFunction(name='u', grid=grid, space_order=2)
v = TimeFunction(name='v', grid=grid, space_order=2)
p = TimeFunction(name='p', grid=grid, space_order=2)
#Variables are automatically initalized at 0.
# First order derivatives will be handled with p.dxc
eq_u =Eq(u.dt + u*u.dx + v*u.dy, -1./rho * p.dxc + nu*(u.laplace), subdomain=grid.interior)
eq_v =Eq(v.dt + u*v.dx + v*v.dy, -1./rho * p.dyc + nu*(v.laplace), subdomain=grid.interior)
eq_p =Eq(p.laplace,rho*(1./dt*(u.dxc+v.dyc)-(u.dxc*u.dxc)+2*(u.dyc*v.dxc)+(v.dyc*v.dyc)), subdomain=grid.interior)
# NOTE: Pressure has no time dependence so we solve for the other pressure buffer.
stencil_u =solve(eq_u , u.forward)
stencil_v =solve(eq_v , v.forward)
stencil_p=solve(eq_p, p)
update_u =Eq(u.forward, stencil_u)
update_v =Eq(v.forward, stencil_v)
update_p =Eq(p.forward, stencil_p)
# Boundary Conds. u=v=0 for all sides
bc_u = [Eq(u[t+1, 0, y], 0)]
bc_u += [Eq(u[t+1, nx-1, y], 0)]
bc_u += [Eq(u[t+1, x, 0], 0)]
bc_u += [Eq(u[t+1, x, ny-1], 1)] # except u=1 on the lid (y = 1)
bc_v = [Eq(v[t+1, 0, y], 0)]
bc_v += [Eq(v[t+1, nx-1, y], 0)]
bc_v += [Eq(v[t+1, x, ny-1], 0)]
bc_v += [Eq(v[t+1, x, 0], 0)]
bc_p = [Eq(p[t+1, 0, y],p[t+1, 1,y])] # dp/dx = 0 at x=0
bc_p += [Eq(p[t+1,nx-1, y],p[t+1,nx-2, y])] # dp/dx = 0 at x=1
bc_p += [Eq(p[t+1, x, 0],p[t+1,x ,1])] # dp/dy = 0 at y=0
bc_p += [Eq(p[t+1, x, ny-1],p[t+1, x, ny-2])] # dp/dy = 0 at y=1
bc_p += [Eq(p[t+1, 0, 0], 0)]
bc=bc_u+bc_v
optime=Operator([update_u, update_v]+bc_u+bc_v)
oppres=Operator([update_p]+bc_p)
# Silence non-essential outputs from the solver.
configuration['log-level'] = 'ERROR'
# This is the time loop.
for step in range(0,nt):
if step>0:
oppres(time_M = nit)
optime(time_m=step, time_M=step, dt=dt)
#NBVAL_IGNORE_OUTPUT
fig = pyplot.figure(figsize=(11,7), dpi=100)
# Plotting the pressure field as a contour.
pyplot.contourf(X, Y, p.data[0], alpha=0.5, cmap=cm.viridis)
pyplot.colorbar()
# Plotting the pressure field outlines.
pyplot.contour(X, Y, p.data[0], cmap=cm.viridis)
# Plotting velocity field.
pyplot.quiver(X[::2,::2], Y[::2,::2], u.data[0,::2,::2], v.data[0,::2,::2])
pyplot.xlabel('X')
pyplot.ylabel('Y');
#NBVAL_IGNORE_OUTPUT
# Again, check results with Marchi et al 2009.
fig = pyplot.figure(figsize=(12, 6))
ax1 = fig.add_subplot(121)
ax1.plot(u.data[0,int(grid.shape[0]/2),:],y_coord[:])
ax1.plot(Marchi_Re10_u[:,1],Marchi_Re10_u[:,0],'ro')
ax1.set_xlabel('$u$')
ax1.set_ylabel('$y$')
ax1 = fig.add_subplot(122)
ax1.plot(x_coord[:],v.data[0,:,int(grid.shape[0]/2)])
ax1.plot(Marchi_Re10_v[:,0],Marchi_Re10_v[:,1],'ro')
ax1.set_xlabel('$x$')
ax1.set_ylabel('$v$')
pyplot.show()
#NBVAL_IGNORE_OUTPUT
fig = pyplot.figure(figsize=(12, 6))
ax1 = fig.add_subplot(121)
ax1.plot(a[int(npgrid[0]/2),:],y_coord[:])
ax1.plot(u.data[0,int(grid.shape[0]/2),:],y_coord[:],'--')
ax1.plot(Marchi_Re10_u[:,1],Marchi_Re10_u[:,0],'ro')
ax1.set_xlabel('$u$')
ax1.set_ylabel('$y$')
ax1 = fig.add_subplot(122)
ax1.plot(x_coord[:],b[:,int(npgrid[1]/2)])
ax1.plot(x_coord[:],v.data[0,:,int(grid.shape[0]/2)],'--')
ax1.plot(Marchi_Re10_v[:,0],Marchi_Re10_v[:,1],'ro')
ax1.set_xlabel('$x$')
ax1.set_ylabel('$v$')
ax1.legend(['numpy','devito','Marchi (2009)'])
pyplot.show()
#Pressure norm check
tol = 1e-3
assert np.sum((c[:,:]-d[:,:])**2/ np.maximum(d[:,:]**2,1e-10)) < tol
assert np.sum((p.data[0]-p.data[1])**2/np.maximum(p.data[0]**2,1e-10)) < tol
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Flower power
Step2: ConvNet Codes
Step3: Below I'm running images through the VGG network in batches.
Step4: Building the Classifier
Step5: Data prep
Step6: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
Step7: If you did it right, you should see these sizes for the training sets
Step9: Batches!
Step10: Training
Step11: Testing
Step12: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
|
<ASSISTANT_TASK:>
Python Code:
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
# Set the batch size higher if you can fit in in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
images = np.concatenate(batch)
feed_dict = {input_: images}
codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'w') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
from sklearn import preprocessing
lb = preprocessing.LabelBinarizer()
labels_vecs = lb.fit_transform(labels)
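# Quick look (illustrative): the binarizer keeps the class names, and inverse_transform
# maps one-hot rows back to flower names (lb is reused later when plotting predictions).
print(lb.classes_)
print(lb.inverse_transform(labels_vecs[:3]))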
from sklearn import model_selection
ss = model_selection.StratifiedShuffleSplit(n_splits=1, test_size=0.2)
splitter = ss.split(codes, labels)
split_i = next(splitter)
train_x, train_y = codes[split_i[0]], labels_vecs[split_i[0]]
val_x, val_y = codes[split_i[1][1::2]], labels_vecs[split_i[1][1::2]]
test_x, test_y = codes[split_i[1][::2]], labels_vecs[split_i[1][::2]]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
# TODO: Classifier layers and operations
output_size = labels_vecs.shape[1]
fc1 = tf.contrib.layers.fully_connected(inputs_, 1024)
logits = tf.contrib.layers.fully_connected(fc1, output_size)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels_)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(0.001).minimize(cost)
# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
def get_batches(x, y, n_batches=10):
    """Return a generator that yields batches from arrays x and y."""
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
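# Quick sanity check (illustrative, not part of the original notebook): peek at the first
# batch to confirm the generator yields arrays of the expected shapes.
example_x, example_y = next(get_batches(train_x, train_y, n_batches=10))
print(example_x.shape, example_y.shape)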
epochs = 20
num_batches = 64
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch in range(epochs):
batch_i = 0
for x, y in get_batches(train_x, train_y, num_batches):
batch_i += 1
feed = {inputs_: x, labels_: y}
loss, _ = sess.run([cost, optimizer], feed_dict=feed)
print("Epoch: {}/{}:, Batch: {}, Training loss: {:>2}".format(epoch + 1, epochs, batch_i, loss))
if batch_i % 5 == 0:
feed = {inputs_: val_x,
labels_: val_y}
val_acc = sess.run(accuracy, feed_dict=feed)
print("Epoch: {}/{}".format(epoch, epochs),
"Iteration: {}".format(batch_i),
"Validation Acc: {:.4f}".format(val_acc))
saver.save(sess, "checkpoints/flowers.ckpt")
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
print('"vgg" object already exists. Will not create again.')
else:
#create vgg
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Description
Step2: (a)
Step3: (c)
Step4: and the maximum secondary current is
|
<ASSISTANT_TASK:>
Python Code:
%pylab notebook
Vp = 600 # [V]
Vl = 120 # [V] which is also the load voltage
Vh = 480 # [V]
Sw = 10e3 # [VA]
n = Vh/Vl # = Nse/Nc
Sio = (n + 1)/n * Sw
print('''
Sio = {:.1f} kVA
==============
'''.format(Sio/1000))
Ip = Sio/Vp
print('''
Ip = {:.2f} A
============
'''.format(Ip))
Is = Sio/Vl
print('''
Is = {:.0f} A
============
'''.format(Is))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Configuring the library
Step2: The goal of this activity is to write an animated negotiation simulation and run it on six different graphs.
Step3: Second graph
Step4: Third graph
Step5: Fourth graph
Step6: Fifth graph
Step7: Sixth graph
Step8: Definitions
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('..')
import socnet as sn
sn.graph_width = 360
sn.graph_height = 360
sn.node_size = 25
def load_graph(path):
g = sn.load_graph(path, has_pos=True)
for n, m in g.edges():
g.edge[n][m]['strong'] = bool(g.edge[n][m]['strong'])
return g
def set_colors(g):
for n, m in g.edges():
if g.edge[n][m]['strong']:
g.edge[n][m]['color'] = (0, 0, 0)
else:
g.edge[n][m]['color'] = (192, 192, 192)
def show_graph(g):
set_colors(g)
sn.show_graph(g, nlab=True)
g1 = load_graph('power-1.gml')
show_graph(g1)
g2 = load_graph('power-2.gml')
show_graph(g2)
g3 = load_graph('power-3.gml')
show_graph(g3)
g4 = load_graph('power-4.gml')
show_graph(g4)
g5 = load_graph('power-5.gml')
show_graph(g5)
g6 = load_graph('power-6.gml')
show_graph(g6)
from random import randint, choice
TIMES = 100
def offer(g, n, m):
if g.edge[n][m]['strong']:
return 24 - g.node[m]['demand']
return 8 - g.node[m]['demand']
def snapshot(g, frames):
for n in g.nodes():
g.node[n]['label'] = str(g.node[n]['demand'])
frame = sn.generate_frame(g, nlab=True)
frames.append(frame)
def simulate(g):
frames = []
for n in g.nodes():
g.node[n]['demand'] = randint(1, 23)
snapshot(g, frames)
for _ in range(TIMES):
pass # your code goes here (exercise); an illustrative sketch follows the simulate(g3) call below
sn.show_animation(frames)
simulate(g3)
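# The loop body in simulate() is intentionally left as an exercise ("your code").
# Below is one possible, purely illustrative way to fill it in -- an assumption, not the
# official solution: each round a random node compares the best offer available from its
# neighbours (via the offer() function above) with its own demand, and raises or lowers
# its demand by one unit accordingly.
def simulate_sketch(g):
    frames = []
    for n in g.nodes():
        g.node[n]['demand'] = randint(1, 23)
    snapshot(g, frames)
    for _ in range(TIMES):
        n = choice(g.nodes())
        best = max(offer(g, n, m) for m in g.neighbors(n))
        if best >= g.node[n]['demand']:
            g.node[n]['demand'] = min(g.node[n]['demand'] + 1, 23)
        else:
            g.node[n]['demand'] = max(g.node[n]['demand'] - 1, 1)
        snapshot(g, frames)
    sn.show_animation(frames)
# simulate_sketch(g3)  # run it the same way as simulate()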
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 3
Step2: Run the below cell, and copy the output into the Google Cloud Shell
|
<ASSISTANT_TASK:>
Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
%%bash
# Check your project name
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
import os
os.environ["BUCKET"] = "your-bucket-id-here" # Recommended: use your project name
%%bash
gsutil -m rm -r gs://$BUCKET/baby_app
gsutil -m cp -r application/ gs://$BUCKET/baby_app
%%bash
echo rm -r baby_app/
echo mkdir baby_app/
echo gsutil cp -r gs://$BUCKET/baby_app ./
echo python3 baby_app/main.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction to autoencoders
Step2: Load the dataset
Step3: First example
Step4: Train the model using x_train as both the input and the target. The encoder learns to compress the dataset from 784 dimensions into the latent space, and the decoder learns to reconstruct the original images.
Step5: Now that the model is trained, let's test it by encoding and decoding images from the test set.
Step6: Second example
Step7: Add random noise to the images.
Step8: Plot the noisy images.
Step9: Define a convolutional autoencoder
Step10: Let's take a look at a summary of the encoder. Notice how the images are downsampled from 28x28 to 7x7.
Step11: The decoder upsamples the images back from 7x7 to 28x28.
Step12: Plot both the noisy images and the denoised images produced by the autoencoder.
Step13: Third example
Step14: Normalize the data to [0,1].
Step15: Train the autoencoder using only the normal rhythms, which are labeled 1 in this dataset. Separate the normal rhythms from the abnormal rhythms.
Step16: Plot a normal ECG.
Step17: Plot an anomalous ECG.
Step18: Build the model
Step19: The autoencoder is trained using only the normal ECGs, but is evaluated on the full test set.
Step20: You will classify an ECG as anomalous if the reconstruction error is greater than one standard deviation from the normal training examples. First, let's plot a normal ECG from the training set, its reconstruction after it is encoded and decoded by the autoencoder, and the reconstruction error.
Step21: This time, create a similar plot for an anomalous test example.
Step22: Detect anomalies
Step23: Choose a threshold value that is one standard deviation above the mean.
Step24: Note
Step25: Classify an ECG as an anomaly if the reconstruction error is greater than the threshold.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers, losses
from tensorflow.keras.datasets import fashion_mnist
from tensorflow.keras.models import Model
(x_train, _), (x_test, _) = fashion_mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
print (x_train.shape)
print (x_test.shape)
latent_dim = 64
class Autoencoder(Model):
def __init__(self, latent_dim):
super(Autoencoder, self).__init__()
self.latent_dim = latent_dim
self.encoder = tf.keras.Sequential([
layers.Flatten(),
layers.Dense(latent_dim, activation='relu'),
])
self.decoder = tf.keras.Sequential([
layers.Dense(784, activation='sigmoid'),
layers.Reshape((28, 28))
])
def call(self, x):
encoded = self.encoder(x)
decoded = self.decoder(encoded)
return decoded
autoencoder = Autoencoder(latent_dim)
autoencoder.compile(optimizer='adam', loss=losses.MeanSquaredError())
autoencoder.fit(x_train, x_train,
epochs=10,
shuffle=True,
validation_data=(x_test, x_test))
encoded_imgs = autoencoder.encoder(x_test).numpy()
decoded_imgs = autoencoder.decoder(encoded_imgs).numpy()
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i + 1)
plt.imshow(x_test[i])
plt.title("original")
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i])
plt.title("reconstructed")
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
(x_train, _), (x_test, _) = fashion_mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train[..., tf.newaxis]
x_test = x_test[..., tf.newaxis]
print(x_train.shape)
noise_factor = 0.2
x_train_noisy = x_train + noise_factor * tf.random.normal(shape=x_train.shape)
x_test_noisy = x_test + noise_factor * tf.random.normal(shape=x_test.shape)
x_train_noisy = tf.clip_by_value(x_train_noisy, clip_value_min=0., clip_value_max=1.)
x_test_noisy = tf.clip_by_value(x_test_noisy, clip_value_min=0., clip_value_max=1.)
n = 10
plt.figure(figsize=(20, 2))
for i in range(n):
ax = plt.subplot(1, n, i + 1)
plt.title("original + noise")
plt.imshow(tf.squeeze(x_test_noisy[i]))
plt.gray()
plt.show()
class Denoise(Model):
def __init__(self):
super(Denoise, self).__init__()
self.encoder = tf.keras.Sequential([
layers.Input(shape=(28, 28, 1)),
layers.Conv2D(16, (3,3), activation='relu', padding='same', strides=2),
layers.Conv2D(8, (3,3), activation='relu', padding='same', strides=2)])
self.decoder = tf.keras.Sequential([
layers.Conv2DTranspose(8, kernel_size=3, strides=2, activation='relu', padding='same'),
layers.Conv2DTranspose(16, kernel_size=3, strides=2, activation='relu', padding='same'),
layers.Conv2D(1, kernel_size=(3,3), activation='sigmoid', padding='same')])
def call(self, x):
encoded = self.encoder(x)
decoded = self.decoder(encoded)
return decoded
autoencoder = Denoise()
autoencoder.compile(optimizer='adam', loss=losses.MeanSquaredError())
autoencoder.fit(x_train_noisy, x_train,
epochs=10,
shuffle=True,
validation_data=(x_test_noisy, x_test))
autoencoder.encoder.summary()
autoencoder.decoder.summary()
encoded_imgs = autoencoder.encoder(x_test).numpy()
decoded_imgs = autoencoder.decoder(encoded_imgs).numpy()
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
# display original + noise
ax = plt.subplot(2, n, i + 1)
plt.title("original + noise")
plt.imshow(tf.squeeze(x_test_noisy[i]))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
bx = plt.subplot(2, n, i + n + 1)
plt.title("reconstructed")
plt.imshow(tf.squeeze(decoded_imgs[i]))
plt.gray()
bx.get_xaxis().set_visible(False)
bx.get_yaxis().set_visible(False)
plt.show()
# Download the dataset
dataframe = pd.read_csv('http://storage.googleapis.com/download.tensorflow.org/data/ecg.csv', header=None)
raw_data = dataframe.values
dataframe.head()
# The last element contains the labels
labels = raw_data[:, -1]
# The other data points are the electrocadriogram data
data = raw_data[:, 0:-1]
train_data, test_data, train_labels, test_labels = train_test_split(
data, labels, test_size=0.2, random_state=21
)
min_val = tf.reduce_min(train_data)
max_val = tf.reduce_max(train_data)
train_data = (train_data - min_val) / (max_val - min_val)
test_data = (test_data - min_val) / (max_val - min_val)
train_data = tf.cast(train_data, tf.float32)
test_data = tf.cast(test_data, tf.float32)
train_labels = train_labels.astype(bool)
test_labels = test_labels.astype(bool)
normal_train_data = train_data[train_labels]
normal_test_data = test_data[test_labels]
anomalous_train_data = train_data[~train_labels]
anomalous_test_data = test_data[~test_labels]
plt.grid()
plt.plot(np.arange(140), normal_train_data[0])
plt.title("A Normal ECG")
plt.show()
plt.grid()
plt.plot(np.arange(140), anomalous_train_data[0])
plt.title("An Anomalous ECG")
plt.show()
class AnomalyDetector(Model):
def __init__(self):
super(AnomalyDetector, self).__init__()
self.encoder = tf.keras.Sequential([
layers.Dense(32, activation="relu"),
layers.Dense(16, activation="relu"),
layers.Dense(8, activation="relu")])
self.decoder = tf.keras.Sequential([
layers.Dense(16, activation="relu"),
layers.Dense(32, activation="relu"),
layers.Dense(140, activation="sigmoid")])
def call(self, x):
encoded = self.encoder(x)
decoded = self.decoder(encoded)
return decoded
autoencoder = AnomalyDetector()
autoencoder.compile(optimizer='adam', loss='mae')
history = autoencoder.fit(normal_train_data, normal_train_data,
epochs=20,
batch_size=512,
validation_data=(test_data, test_data),
shuffle=True)
plt.plot(history.history["loss"], label="Training Loss")
plt.plot(history.history["val_loss"], label="Validation Loss")
plt.legend()
encoded_imgs = autoencoder.encoder(normal_test_data).numpy()
decoded_imgs = autoencoder.decoder(encoded_imgs).numpy()
plt.plot(normal_test_data[0], 'b')
plt.plot(decoded_imgs[0], 'r')
plt.fill_between(np.arange(140), decoded_imgs[0], normal_test_data[0], color='lightcoral')
plt.legend(labels=["Input", "Reconstruction", "Error"])
plt.show()
encoded_imgs = autoencoder.encoder(anomalous_test_data).numpy()
decoded_imgs = autoencoder.decoder(encoded_imgs).numpy()
plt.plot(anomalous_test_data[0], 'b')
plt.plot(decoded_imgs[0], 'r')
plt.fill_between(np.arange(140), decoded_imgs[0], anomalous_test_data[0], color='lightcoral')
plt.legend(labels=["Input", "Reconstruction", "Error"])
plt.show()
reconstructions = autoencoder.predict(normal_train_data)
train_loss = tf.keras.losses.mae(reconstructions, normal_train_data)
plt.hist(train_loss, bins=50)
plt.xlabel("Train loss")
plt.ylabel("No of examples")
plt.show()
threshold = np.mean(train_loss) + np.std(train_loss)
print("Threshold: ", threshold)
reconstructions = autoencoder.predict(anomalous_test_data)
test_loss = tf.keras.losses.mae(reconstructions, anomalous_test_data)
plt.hist(test_loss, bins=50)
plt.xlabel("Test loss")
plt.ylabel("No of examples")
plt.show()
def predict(model, data, threshold):
reconstructions = model(data)
loss = tf.keras.losses.mae(reconstructions, data)
return tf.math.less(loss, threshold)
def print_stats(predictions, labels):
print("Accuracy = {}".format(accuracy_score(labels, preds)))
print("Precision = {}".format(precision_score(labels, preds)))
print("Recall = {}".format(recall_score(labels, preds)))
preds = predict(autoencoder, test_data, threshold)
print_stats(preds, test_labels)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Curves and data sets in 3D
Step2: Two things happen in the next code cell. First, we create an axes object named ax, explicitly specifying that it should use a 3D coordinate system. Then, calling the plot function on this object, we create the figure itself. Note that the plot function now expects three input parameters!
Step3: As we saw for 2D figures, the plot function can also be used here to display irregularly sampled data.
Step4: Style definitions are handled through keyword arguments just like for 2D figures! Let's see an example of this as well.
Step5: A recurring problem when displaying 3D figures is viewing them from a good direction. The viewpoint of the figure can be set with the view_init function, whose two parameters specify the viewpoint in an equatorial spherical coordinate system: the declination and the azimuth angle, measured in degrees. For example, this is how to make a figure viewed from the $x$-axis
Step6: And like this from the $y$-axis
Step7: If we use interactive functions, the viewpoint can be changed interactively as follows
Step8: Functions of two variables and surfaces
Step9: We can display this function with the plot_surface function.
Step10: It is often illustrative to color the drawn surface according to some color scale. We can do this with the cmap keyword, in the way already familiar from 2D figures.
Step11: The most general way to specify a surface in 3D is with a two-parameter vector-valued function. That is,
Step12: We can again plot it with the plot_surface function
Step13: We can make the figure above a bit better proportioned by adjusting the aspect ratio of the axes and the axis limits. This can be done with the set_aspect and the set_xlim, set_ylim and set_zlim functions
Step14: Finally, let's make this figure interactive as well
Step15: Force fields in 3D
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
from mpl_toolkits.mplot3d import * # subpackage for 3D figures
from ipywidgets import * # functions needed for interactivity
t=linspace(0,2*pi,100) # 100 points between 0 and 2*pi
ax=subplot(1,1,1,projection='3d') # create a 3D coordinate system
ax.plot(cos(3*t),sin(3*t),t)
ax=subplot(1,1,1,projection='3d')
ax.plot(rand(10),rand(10),rand(10),'o')
ax=subplot(1,1,1,projection='3d') # create a 3D coordinate system
ax.plot(cos(3*t),sin(3*t),t,color='green',linestyle='dashed',linewidth=3)
ax=subplot(1,1,1,projection='3d') # create a 3D coordinate system
ax.plot(cos(3*t),sin(3*t),t)
ax.view_init(0,0)
ax=subplot(1,1,1,projection='3d') # create a 3D coordinate system
ax.plot(cos(3*t),sin(3*t),t)
ax.view_init(0,90)
def forog(th,phi):
ax=subplot(1,1,1,projection='3d')
ax.plot(sin(3*t),cos(3*t),t)
ax.view_init(th,phi)
interact(forog,th=(-90,90),phi=(0,360));
x,y = meshgrid(linspace(-3,3,250),linspace(-5,5,250)) # generate the sampling points
z = -(sin(x) ** 10 + cos(10 + y * x) * cos(x))*exp((-x**2-y**2)/4) # evaluate the function
ax = subplot(111, projection='3d')
ax.plot_surface(x, y, z)
ax = subplot(111, projection='3d')
ax.plot_surface(x, y, z,cmap='viridis')
theta,phi=meshgrid(linspace(0,2*pi,250),linspace(0,2*pi,250))
x=(4 + 1*cos(theta))*cos(phi)
y=(4 + 1*cos(theta))*sin(phi)
z=1*sin(theta)
ax = subplot(111, projection='3d')
ax.plot_surface(x, y, z)
ax = subplot(111, projection='3d')
ax.plot_surface(x, y, z)
ax.set_aspect('equal');
ax.set_xlim(-5,5);
ax.set_ylim(-5,5);
ax.set_zlim(-5,5);
def forog(th,ph):
ax = subplot(111, projection='3d')
ax.plot_surface(x, y, z)
ax.view_init(th,ph)
ax.set_aspect('equal');
ax.set_xlim(-5,5);
ax.set_ylim(-5,5);
ax.set_zlim(-5,5);
interact(forog,th=(-90,90),ph=(0,360));
phiv,thv=(2*pi*rand(100),pi*rand(100)) # These two lines pick 100 random points
xv,yv,zv=(cos(phiv)*sin(thv),sin(phiv)*sin(thv),cos(thv)) # on the unit sphere in 3D
uv,vv,wv=(xv,yv,zv) # and this assigns a radial vector to each of those points
ax = subplot(111, projection='3d')
ax.quiver(xv, yv, zv, uv, vv, wv, length=0.3,color='darkcyan')
ax.set_aspect('equal')
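# A second, purely illustrative example (not in the original material): the same quiver call
# works for any vector field, e.g. a simple rotational ("vortex") field on the same points.
uv2,vv2,wv2=(-yv,xv,zeros_like(zv))
ax = subplot(111, projection='3d')
ax.quiver(xv, yv, zv, uv2, vv2, wv2, length=0.3, color='darkorange')
ax.set_aspect('equal')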
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's query every talk description
Step2: Okay, make a dataframe and add some helpful columns
Step3: Show Profile Report of Pandas DF
Step4: Further Processing
Step5: visualize some relations
Step6:
Step7: x axis
|
<ASSISTANT_TASK:>
Python Code:
# Imports needed by this notebook (the aliases below are assumed from how they are used):
import bs4
import numpy as np
import pandas as pd
import requests as rq
import matplotlib.pyplot as plt
from datetime import time
from tqdm import tqdm_notebook
base_url = "https://pydata.org"
r = rq.get(base_url + "/berlin2017/schedule/")
bs = bs4.BeautifulSoup(r.text, "html.parser")
data = {}
for ahref in tqdm_notebook(bs.find_all("a")):
if 'schedule/presentation' in ahref.get("href"):
url = ahref.get("href")
else:
continue
data[url] = {}
resp = bs4.BeautifulSoup(rq.get(base_url + url).text, "html.parser")
title = resp.find("h2").text
resp = resp.find_all(attrs={'class':"container"})[1]
when, who = resp.find_all("h4")
date_info = when.string.split("\n")[1:]
day_info = date_info[0].strip()
time_inf = date_info[1].strip()
room_inf = date_info[3].strip()[3:]
speaker = who.find("a").text
level = resp.find("dd").text
abstract = resp.find(attrs={'class':'abstract'}).text
description = resp.find(attrs={'class':'description'}).text
data[url] = {
'day_info': day_info,
'title': title,
'time_inf': time_inf,
'room_inf': room_inf,
'speaker': speaker,
'level': level,
'abstract': abstract,
'description': description
}
df = pd.DataFrame.from_dict(data, orient='index')
df.reset_index(drop=True, inplace=True)
import pandas_profiling as pp
pfr = pp.ProfileReport(df)
from IPython.display import display, HTML
display(HTML(
pfr.html.replace("<h3", "<h4").replace("<h2", "<h3").replace("<h1", "<h2")
))
# Tutorials on Friday
df.loc[df.day_info=='Friday', 'tutorial'] = True
df['tutorial'].fillna(False, inplace=True)
# time handling
df['time_from'], df['time_to'] = zip(*df.time_inf.str.split(u'\u2013'))
df.time_from = pd.to_datetime(df.time_from).dt.time
df.time_to = pd.to_datetime(df.time_to).dt.time
del df['time_inf']
df.head(3)
# Example: Let's query all non-novice talks on sunday, starting at 4 pm
tmp = df.query("(level!='Novice') & (day_info=='Sunday')")
tmp[tmp.time_from >= time(16)]
ax = df.level.value_counts().plot.pie(figsize=(3,3), autopct="%1.1f %%")
ax.axis("equal")
ax.set_ylabel("")
ax.set_title("levels of the talks where:")
plt.show()
ax = df.groupby("tutorial")['level'].value_counts().unstack(level=0).plot.pie(
subplots=True, legend=False, autopct="%1.1f %%", startangle=90, labels=["","",""])
for axx in ax:
axx.axis("equal")
axx.set_ylabel("")
ax[0].set_xlabel("Not tutorial")
ax[1].set_xlabel("tutorial")
plt.gcf().suptitle("Level of the talks where (splitted by tutorial or not):", fontsize=16)
plt.tight_layout()
axx.legend(
df.groupby("tutorial")['level'].value_counts().unstack(level=0).index.tolist(),
loc='center left', bbox_to_anchor=(1, .8))
plt.show()
categorical_cols = [col for col in df.columns if len(df[col].unique())<=len(df[col]) / 5]
f, ax = plt.subplots(2,2, figsize=(20,10), sharey="row")
for i, axx in enumerate(ax.flatten()):
col = categorical_cols[i]
df.groupby("tutorial")[col].value_counts().unstack(level=0).plot.bar(ax=axx, rot=0, stacked=True)
axx.set_title(col)
axx.set_facecolor("white")
axx.grid(True, color="lightgrey")
f.suptitle("Number of talks grouped by day, level, is tutorial, room: ...", fontsize=20)
f.tight_layout()
df.title = df.title.str.replace(".", "").replace(":", "").replace(",", "")
from itertools import chain
foo = pd.DataFrame(
pd.Series(list(chain(*df.title.str.split().apply(lambda x: np.unique(x)).tolist()))).value_counts(normalize=True)
).reset_index().rename(columns={0:'share', 'index': 'word'}).query("share >= 0.001")
foo.head()
foo['len'] = foo.word.str.len()
from gensim import corpora, models, similarities
documents = df.query("tutorial != True").title.tolist()
stoplist = set('''
for a of the and to in i be on with here we will an each its type as our their then apply them very would this
make large talk, basic search is are there more than pages it can or that they how by have what from talk use you
these using which but some not your do used at if like such has about - my one most those should between may good • why
give way time been need many so does case when also all into lot build features new who often discuss building
best text * was out
'''.split())
texts = [[word for word in document.lower().split() if word not in stoplist]
for document in documents]
from collections import defaultdict
frequency = defaultdict(int)
for text in texts:
for token in text:
frequency[token] += 1
texts = [[token for token in text if frequency[token] > 1]
for text in texts]
pd.DataFrame(dict(nr=dict(frequency))).sort_values('nr', ascending=False).query("nr>1")
keywords = pd.DataFrame(dict(nr=dict(frequency))).sort_values('nr', ascending=False).query("nr>1").index.tolist()
for keyword in keywords[::-1]:
df.loc[df.title.str.lower().str.contains(keyword), 'color'] = keyword
df.color.fillna("UNKNOWN", inplace=True)
df.head()
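# Illustrative check (not in the original notebook): how many talks ended up in each
# keyword bucket, including the ones that stayed "UNKNOWN"?
df.color.value_counts().head(10)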
pd.set_option("max.colwidth", 400)
df['approach'] = np.where(df.tutorial, "tutorial", np.nan)
df.loc[df.title.str.lower().str.contains('ai'), 'approach'] = 'artificial intelligence'
df.loc[df.title.str.lower().str.contains('artificial intelligence'), 'approach'] = 'artificial intelligence'
df.loc[df.title.str.lower().str.contains('pandas'), 'approach'] = 'pandas'
df.loc[df.title.str.lower().str.contains('jupyter'), 'approach'] = 'jupyter'
df.loc[df.title.str.lower().str.contains('data scien'), 'approach'] = 'data science'
df.loc[df.title.str.lower().str.contains('machine learn'), 'approach'] = 'machine learning'
df.loc[df.title.str.lower().str.contains('deep'), 'approach'] = 'deep learning'
# attention: may change when querying again
df.loc[11, 'approach'] = 'R'
df.loc[13, 'approach'] = 'Julia'
df.loc[df.title.apply(lambda x: any([y in x.lower() for y in ['question', 'text', 'natural language', 'nlp', 'doc2vec']])), 'type'] = 'text / NLP'
df.loc[df.title.apply(lambda x: any([y in x.lower() for y in ['creativ', 'image', 'signal process', ]])), 'type'] = 'image process'
df['type'] = np.where(df.tutorial, "tutorial", df.type)
print '{'
for title in df.title:
if df.loc[df.title==title, 'tutorial'].values[0]:
continue
print "\t'" + title + "': '',"
print '}'
pd.DataFrame({'bla':rename_dict})
df['type'] = df.title.map(rename_dict)
df.groupby(["type", "level", "approach"]).color.count().reset_index().head(1)
gb = df.groupby(["type", "approach"]).color.count().reset_index()
for col in ['type', 'approach']:
    gb[col] = gb[col].astype("category")
gb = gb.set_index(["type", "approach"])["color"]
gb.unstack(level=1).plot.barh()
rename_dict = {
'Introduction to Search': 'search',
'Data Science for Digital Humanities: Extracting meaning from Images and Text': 'text / NLP / *2vec',
'TNaaS - Tech Names as a Service': 'creativity',
'Developments in Test-Driven Data Analysis': 'testing',
'Analysing user comments on news articels with Doc2Vec and Machine Learning classification': 'text / NLP / *2vec',
'Patsy: The Lingua Franca to and from R': 'R',
'Large Scale Vandalism Detection in Knowledge Bases': 'network data',
'Fast Multidimensional Signal Processing using Julia with Shearlabjl': 'images',
'Social Networks and Protest Participation: Evidence from 130 Million Twitter Users': 'ethics & social',
'Patterns for Collaboration between Data Scientists And Software Engineers': 'tools / frameworks',
'Blockchains for Artificial Intelligence': 'blockchain',
'Data Analytics and the new European Privacy Legislation': 'laws',
'Building smart IoT applications with Python and Spark': 'iot',
'“Which car fits my life?” - mobilede’s approach to recommendations': 'recommendation',
'Towards Pythonic Innovation in Recommender Systems': 'recommendation',
'Gold standard data: lessons from the trenches': 'tools / frameworks',
'Biases are bugs: algorithm fairness and machine learning ethics': 'ethics & social',
'On Bandits, Bayes, and swipes: gamification of search': 'active learning',
'Engage the Hyper-Python - a rattle-through many of the ways you can make a Python program faster': 'tools / frameworks',
'Fairness and transparency in machine learning: Tools and techniques': 'ethics & social',
"Machine Learning to moderate ads in real world classified's business": '',
'Size Matters! A/B Testing When Not Knowing Your Number of Trials': '',
'Is That a Duplicate Quora Question?': 'text / NLP / *2vec',
'Semi-Supervised Bootstrapping of Relationship Extractors with Distributional Semantics': '',
'Where are we looking? Prediciting human gaze using deep networks': '',
'Spying on my Network for a Day: Data Analysis for Networks': 'network data',
'Deep Learning for detection on a phone: how to stay sane and build a pipeline you can trust': '',
'A word is worth a thousand pictures: Convolutional methods for text': 'text / NLP / *2vec',
'Polynomial Chaos: A technique for modeling uncertainty': '',
'Kickstarting projects with Cookiecutter': 'tools / frameworks',
'What does it all mean? - Compositional distributional semantics for modelling natural language': 'text / NLP / *2vec',
'When the grassroots grow stronger - 2017 through the eyes of German open data activists': 'ethics & social',
'Finding Lane Lines for Self Driving Cars': 'images',
'Find the text similiarity you need with the next generation of word embeddings in Gensim': 'text / NLP / *2vec',
'Evaluating Topic Models': 'text / NLP / *2vec',
'Best Practices for Debugging': 'tools / frameworks',
'Data Science & Data Visualization in Python How to harness power of Python for social good?': 'ethics & social',
'Conversational AI: Building clever chatbots': '',
'AI assisted creativity': 'creativity',
}
df[['title', 'type', 'approach']]
df.groupby(["color", "day_info"])['level'].count().unstack(level=0).plot.bar(rot=0, cmap='inferno')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Area-open filter
Step2: Extinction filter
Step3: Bounding-box filter
Step4: Max-tree area signature analysis
|
<ASSISTANT_TASK:>
Python Code:
# This makes plots appear in the notebook
%matplotlib inline
import numpy as np # numpy is the major library in which siamxt was built upon
# we like the array programming style =)
# We are using PIL to read images
from PIL import Image
# and matplotlib to display images
import matplotlib.pyplot as plt
import time # Let's measure some processing times
import siamxt
# Loading the image.
# Make sure the image you read is either uint8 or uint16
img = np.asarray(Image.open("./Sample-images/apple.png"))
print "Image dimensions: %dx%d pixels" %img.shape
#Displaying the image
fig = plt.figure()
plt.imshow(img, cmap='Greys_r')
plt.axis('off')
plt.title("Original image")
#Structuring element with connectivity-8
Bc = np.ones((3,3),dtype = bool)
t = time.time()
#Building the max-tree with the connectivity defined
mxt = siamxt.MaxTreeAlpha(img,Bc)
t = time.time() -t
print "Max-tree build time: %fs" %t
print "Number of max-tree nodes: %d" %mxt.node_array.shape[1]
print "Number of max-tree leaves: %d" %(mxt.node_array[1,:] == 0).sum()
# Negating the image
img_max = img.max()
img_neg = img_max-img
# Area threshold
area = 25
#Building the max-tree of the negated image, i.e. min-tree
mxt_neg = siamxt.MaxTreeAlpha(img_neg,Bc)
# Making a hard copy of the max-tree
mxt_neg2 = mxt_neg.clone()
#Applying an area-open filter
mxt_neg.areaOpen(area)
#Recovering the image
img_filtered = mxt_neg.getImage()
# Negating the image back
img_filtered = img_max -img_filtered
#Displaying the filtered image
fig = plt.figure()
plt.imshow(img_filtered, cmap='Greys_r')
plt.axis('off')
plt.title("area-open, area = %d " %area)
# Number of leaves to be preserved
n = 5
# Select the area attribute of the max-tree nodes from NA
area = mxt_neg2.node_array[3,:]
# Computes the area extinction values
area_ext = mxt_neg2.computeExtinctionValues(area,"area")
# Applies the area extinction filter
mxt_neg2.extinctionFilter(area_ext,n)
#Recovering the image
img_filtered2 = mxt_neg2.getImage()
# Negating the image back
img_filtered2 = img_max -img_filtered2
#Displaying the filtered image
fig = plt.figure()
plt.imshow(img_filtered2, cmap='Greys_r')
plt.axis('off')
plt.title("area extinction filter, n= %d " %n)
# Loading license plate image
img = np.asarray(Image.open("./Sample-images/plate.png"))
print "Image dimensions: %dx%d pixels" %img.shape
# Negating the image
img_max = img.max()
img_neg = img.max() - img
#Displaying the image
fig = plt.figure()
plt.imshow(img_neg, cmap='Greys_r')
plt.axis('off')
plt.title("Negated image")
# Filter parameters
Wmin = 7 # Minimum bounding-box width
Hmin = 13 # Minimum bounding-box height
Wmax = 17 # Maximum bounding-box width
Hmax = 25 # Maximum bounding-box height
rr = 0.45 # Minimum rectangularity ratio
#Structuring element with connectivity-4
Bc = np.zeros((3,3), dtype = bool)
Bc[1,:] = True
Bc[:,1] = True
# Building the max-tree of the negated image, i.e. min-tree
mxt = siamxt.MaxTreeAlpha(img_neg,Bc)
# Bounding box-lengths
dx = mxt.node_array[7,:] - mxt.node_array[6,:]
dy = mxt.node_array[10,:] - mxt.node_array[9,:]
# Rectangularity ratio
RR = mxt.computeRR()
# Node selection criteria
nodes = (dx>Hmin) & (dx<Hmax) & (dy > Wmin) & (dy < Wmax) & (RR > rr)
#Filtering the max-tree
mxt.contractDR(nodes)
# Recovering the filtered the image
img_filtered = mxt.getImage()
#Displaying the filtered image
fig = plt.figure()
plt.imshow(img_filtered, cmap='Greys_r')
plt.axis('off')
plt.title("Bounding-box and rectangularity filter")
# Loading brain MR image
img = np.asarray(Image.open("./Sample-images/brain_coronal.png"))
print "Image dimensions: %dx%d pixels" %img.shape
#Displaying the original image
fig = plt.figure()
plt.imshow(img, cmap='Greys_r')
plt.axis('off')
plt.title("Original image")
# Building the max-tree
mxt = siamxt.MaxTreeAlpha(img,Bc)
#Hand-picked regional maxima
x,y = 83,90
# Node corresponding to a regional maximum
node = mxt.node_index[x,y]
# Extracting area attribute from NA
area = mxt.node_array[3,:]
# Area signature computation
levels,signature = mxt.getSignature(area, node)
#Gradient of the area signature
gradient = signature[0:-1] - signature[1:]
# Display area signature
fig = plt.figure()
plt.plot(levels,signature)
plt.grid()
plt.xlabel("Gray-level")
plt.ylabel("Area")
plt.title("Area signature")
# Display gradient of the area signature
fig = plt.figure()
plt.grid()
plt.plot(levels[0:-1],gradient)
plt.xlabel("Gray-level")
plt.ylabel("Gradient")
plt.title("Gradient signature")
# Selecting the second peak of the gradient
index = np.argsort(gradient)[::-1][1]
#reconstructing the node corresponding to the peak (largest area)
area1 = signature[index]
node1 = np.where(mxt.node_array[3,:] == area1)[0][0]
cc1 = mxt.recConnectedComponent(node1)
#reconstructing the node corresponding to the peak (smaller area)
area2 = signature[index + 1]
node2 = np.where(mxt.node_array[3,:] == area2)[0][0]
cc2 = mxt.recConnectedComponent(node2)
#Displaying the nodes reconstructed
fig = plt.figure()
plt.imshow(cc1, cmap='Greys_r')
plt.axis('off')
plt.title("Reconstruction at h = %d" %mxt.node_array[2,node1])
fig = plt.figure()
plt.imshow(cc2, cmap='Greys_r')
plt.axis('off')
plt.title("Reconstruction at h = %d" %mxt.node_array[2,node2])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Logical Operators
Step2: or
Step3: Short Circuit Evaluation
Step4: Let's try
Step5: Checking Numerical Ranges
Step7: Repetition Structures
Step8: while Loop
Step9: Infinite Loops
Step10: for Loop
Step11: Using the range function
Step12: Other uses of range
Step13: Can use a variable to control the number of times a for loop iterates
Step14: Calculate a Running Total
Step15: Augmented Assignment Operators
|
<ASSISTANT_TASK:>
Python Code:
x = 1
y = 2
y != x
if temperature < 20 and minutes > 12:
print("The temperature is in the danger zone.")
if temperature < 20 or temperature > 100:
print("The temperature is too extreme.")
if not(temperature > 100):
print("This is below the maximum temperature.")
x = 5
y = 6
a = 1000
b = 2000
x > y and a < b
# Not correct
if age > 18 :
print("You can vote")
# Correct
if age >= 18 :
print("You can vote")
# Correct, but less desirable
# Possibly incorrect if age is a float
if age > 17 :
print("You can vote")
def vowel_or_consonant():
"""Exercise: Vowel or Consonant
Reads a letter of the alphabet from the user. (You can assume that it's
lowercase.) If the user enters a, e, i, o or u then your program should
display "vowel". If the user enters y then your program should display
"sometimes a vowel, sometimes a consonant". Otherwise your program should
display a message indicating that the letter is a "consonant".
"""
vowels = list("aeiou")
letter = raw_input("Input a letter of the alphabet: ")
if letter in vowels:
print("vowel")
elif letter == "y":
print("sometimes a vowel, sometimes a consonant")
else:
print("consonant")
vowel_or_consonant()
vowel_or_consonant()
vowel_or_consonant()
# Write a program that will echo back what the user types in
# Inputs: string from user, choice on whether to continue
# Processing: None
# Output: print string from user
# Create a variable to control the loop
keep_going = "y"
while keep_going == "y":
user_input = raw_input("What did you say? ")
print(user_input)
keep_going = raw_input("Again? Enter y ")
# Write a program that will echo back what the user types in
# Inputs: string from user, choice on whether to continue
# Processing: None
# Output: print string from user
# Create a variable to control the loop
keep_going = "y"
while keep_going == "y":
keep_going = raw_input("Again? Enter y ")
# Don't do this
# This is an infinite loop, because loop condition never changes
while True:
print("One Infinite Loop")
for number in [1, 2, 3, 4, 5]:
print(number)
for animal in ["giraffe", "otter", "bears", "panda", "capybara", "elephant", "python"]:
print(animal)
for element in [5, "string", 75.62]:
print(element)
# These two loops are not the same!
for number in [1, 2, 3, 4, 5]:
print(number)
for number in range(5):
print(number)
for number in range(1, 6):
print(number)
for number in range(1, 50, 7):
print(number)
iterations = raw_input("How many times? ")
iterations = int(iterations)
for count in range(iterations):
print(count)
# This program calculates the sum of a series of
# numbers entered by the user
iterations = raw_input("How many times? ")
iterations = int(iterations)
sum = 0
product = 1
for count in range(1, iterations + 1):
sum += count
product *= count
print(sum)
print(product)
# Let's re-write the examples
# Example: x = x + 1
x = 5
x += 1
# Example: total = total + number
total += number
# Example: balance = balance - withdrawal
balance -= withdrawal
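# A few more augmented assignment operators, shown as an illustrative aside
# (the variable below is defined just for this example):
count = 10
count *= 2    # same as count = count * 2
count -= 5    # same as count = count - 5
count //= 3   # same as count = count // 3
print(count)  # 5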
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Analyzing the Data
Step2: Exercise
Step3: Question 1
Step4: Question 2
Step5: Question 3
Step6: Observations
Step7: Exercise
Step8: Question 4
Step9: Question 5
Step10: Exercise: Dimensionality Reduction
Step11: Observations
Step12: Visualizing a Biplot
Step13: Observations
Step14: Question 7
Step15: Exercise
Step16: Question 8
Step17: Answer
Step18: Answer:
|
<ASSISTANT_TASK:>
Python Code:
# Check your Python version
from sys import version_info
if version_info.major != 3:
raise Exception('Please use Python 3.x to complete this project')
# Import the libraries needed for this project
import numpy as np
import pandas as pd
import visuals as vs
from IPython.display import display # Allows the use of display() for DataFrames
# Show matplotlib plots inline (renders more cleanly in the notebook)
%matplotlib inline
# High-resolution display
# %config InlineBackend.figure_format='retina'
# Load the wholesale customers dataset
try:
data = pd.read_csv("customers.csv")
data.drop(['Region', 'Channel'], axis = 1, inplace = True)
print("Wholesale customers dataset has {} samples with {} features each.".format(*data.shape))
except:
print("Dataset could not be loaded. Is the dataset missing?")
# Display a description of the dataset
display(data.describe())
# TODO: Select the indices of three data points you wish to sample from the dataset
indices = []
# Create a DataFrame of the chosen samples
samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)
print("Chosen samples of wholesale customers dataset:")
display(samples)
# TODO: Make a copy of the DataFrame and use the 'drop' function to drop one feature
new_data = None
# TODO: Split the data into training and testing sets, using the dropped feature as the target
X_train, X_test, y_train, y_test = (None, None, None, None)
# TODO: Create a DecisionTreeRegressor and train it on the training set
regressor = None
# TODO: Report the prediction score on the testing set
score = None
# Produce a scatter matrix for each pair of features in the data
pd.plotting.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
# TODO: Scale the data using the natural logarithm
log_data = None
# TODO: Scale the sample data using the natural logarithm
log_samples = None
# Produce a scatter matrix for each pair of newly-transformed features
pd.plotting.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
# Display the log-transformed sample data
display(log_samples)
# For each feature, find the data points with extremely high or low values
for feature in log_data.keys():
# TODO: Calculate Q1 (the 25th percentile of the data) for the given feature
Q1 = None
# TODO: Calculate Q3 (the 75th percentile of the data) for the given feature
Q3 = None
# TODO: Use the interquartile range to calculate an outlier step (1.5 times the IQR)
step = None
# Display the outliers
print("Data points considered outliers for the feature '{}':".format(feature))
display(log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))])
# TODO (optional): Select the indices of the data points you wish to remove
outliers = []
# The following code removes the data points indexed in outliers and stores the result in good_data
good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)
# TODO: Apply PCA to the good data, transforming it into six dimensions
pca = None
# TODO: Use the PCA fit above to transform log_samples
pca_samples = None
# Generate a plot of the PCA results
pca_results = vs.pca_results(good_data, pca)
# Display the sample log-data after the PCA transformation
display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))
# TODO: Apply PCA to the good data, transforming it into two dimensions
pca = None
# TODO: Transform the good data using the PCA fit above
reduced_data = None
# TODO: Transform log_samples using the PCA fit above
pca_samples = None
# Create a DataFrame for the reduced data
reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])
# Display the sample log-data after the two-dimensional PCA transformation
display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))
# Visualize the biplot
vs.biplot(good_data, reduced_data, pca)
# TODO: Apply your clustering algorithm of choice to the reduced data
clusterer = None
# TODO: Predict the cluster for each data point
preds = None
# TODO: Find the cluster centers
centers = None
# TODO: Predict the cluster for each transformed sample data point
sample_preds = None
# TODO: Calculate the mean silhouette coefficient for the chosen number of clusters
score = None
# Display the clustering results from the existing implementation
vs.cluster_results(reduced_data, preds, centers, pca_samples)
# TODO: Apply the inverse transform to the cluster centers
log_centers = None
# TODO: Exponentiate the centers
true_centers = None
# Display the true centers
segments = ['Segment {}'.format(i) for i in range(0,len(centers))]
true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())
true_centers.index = segments
display(true_centers)
# Display the predictions
for i, pred in enumerate(sample_preds):
print("Sample point", i, "predicted to be in Cluster", pred)
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
# Read the data containing the cluster results
cluster_data = pd.read_csv("cluster.csv")
y = cluster_data['Region']
X = cluster_data.drop(['Region'], axis = 1)
# Split into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=24)
clf = RandomForestClassifier(random_state=24)
clf.fit(X_train, y_train)
score_with_cluster = clf.score(X_test, y_test)
# Remove the cluster feature
X_train = X_train.copy()
X_train.drop(['cluster'], axis=1, inplace=True)
X_test = X_test.copy()
X_test.drop(['cluster'], axis=1, inplace=True)
clf.fit(X_train, y_train)
score_no_cluster = clf.score(X_test, y_test)
print("Score without the cluster feature: %.4f"%score_no_cluster)
print("Score with the cluster feature: %.4f"%score_with_cluster)
# Display the clustering results based on the 'Channel' data
vs.channel_results(reduced_data, outliers, pca_samples)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Inside the Likelihood object is a "triplet" object called T, which contains an array of sample local groups, each with kinematic parameters consistent with the observational data. Let's plot these kinematic parameters in a "triangle" figure, to show all their 1 and 2-D marginal distributions.
Step2: The above plot shows a Gaussian Mixture model fitted Gaussians. The shaded regions show two standard deviations. The samples data has been preprocessed to zero the mean and scale by standard deviation. Since we are using the Gaussian Mixture Model to model the underlying PDF of the data, more components is always better.
Step3: The above plot shows that the points drawn from the model create a population that is very similar to the true data.
Step4: Reading Simulation Points
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import localgroup
import triangle
import sklearn
from sklearn import mixture
import numpy as np
import pickle
import matplotlib.patches as mpatches
L = localgroup.Likelihood(isPair=True)
L.generate(Nsamples=200000)
L.set_PDF(mixture.GMM(n_components=10, covariance_type='full'))
L.approximate()
figure_obs = L.plot_samples(10, color='b', overlay=False)
figure_model = L.model_gof(L.T.Nsamples, color="r", fig=None)
L.model_gof(L.T.Nsamples, color="r", fig=figure_obs)
red_patch = mpatches.Patch(color='red')
blue_patch = mpatches.Patch(color='blue')
figure_obs.legend(handles=[red_patch, blue_patch], labels=["Model Generated", "Observation Generated"])
figure_obs
#figure_obs.savefig("/afs/slac.stanford.edu/u/ki/mwillia1/Thesis/LocalGroupHaloProps/doc/thesis/plots/model_gof.png")
path = '/afs/slac.stanford.edu/u/ki/mwillia1/Thesis/data_files/complete_triplets.txt'
#path = '/afs/slac.stanford.edu/u/ki/mwillia1/Thesis/data_files/MW_M31_pairs.txt'
npoints = 122000
halo_props = ['MW_Mvir', 'M31_Mvir', 'M33_Mvir']
Tr = localgroup.Triplet(isPair=True)
Tr.read_sim_points(path, npoints, halo_props, h=1.0, a=1.0)
Tr.transform_to_M31(sim=True)
Tr.mass_filter('sim')
Tr.dist_filter((Tr.sim_samples[:,0] < 10))
Tr.preprocess(L.samples_means, L.samples_stds, mode='sim')
sim_plot = Tr.plot_kinematics('sim', L.samples_means, L.samples_stds, color='c', fig=None)
#sim_plot = Tr.plot_kinematics('sim', L.samples_means, L.samples_stds, color='c', fig=None)
Tr.unprocess(L.samples_means, L.samples_stds, mode='sim')
data = np.transpose(np.vstack((np.transpose(Tr.sim_samples), np.log10(Tr.MW.Mvir), np.log10(Tr.M31.Mvir))))
labs=["mwd", "mwvr", "mwvt", "MWMvir", "M31Mvir"]
sim_plot = triangle.corner(data, labels=labs, quantiles=[0.16,0.5,0.84], fig=None, weights=None,\
plot_contours=True, show_titles=True, title_args={"fontsize": 12}, \
plot_datapoints=False, bins=20, color='r', label_kwargs={"fontsize": 16})
red_patch = mpatches.Patch(color='r')
cyan_patch = mpatches.Patch(color='c')
sim_plot.legend(handles=[red_patch, cyan_patch], labels=["CONSUELO Prior", "GMM-fit CONSUELO Prior"])
Tr.preprocess(L.samples_means, L.samples_stds, mode='sim')
sim_plot
#name = 'gmm_CONSUELO_prior.png'
#sim_plot.savefig('/afs/slac.stanford.edu/u/ki/mwillia1/Thesis/LocalGroupHaloProps/doc/thesis/plots/asurps/'+name)
dat = np.transpose(np.vstack((np.transpose(Tr.sim_samples), np.log10(Tr.M31.Mvir), np.log10(Tr.MW.Mvir))))
Tr.GMM(40, dat)
Tr.GMM_sample(12200000)
gmm_MW = np.copy(Tr.gmm_samples[:,4])
gmm_M31 = np.copy(Tr.gmm_samples[:,3])
gmm_LG = np.log10(np.power(10,gmm_MW) + np.power(10,gmm_M31))
cond = gmm_MW < gmm_M31
Tr.gmm_samples = Tr.gmm_samples[cond]
gmm_MW = gmm_MW[cond]
gmm_M31 = gmm_M31[cond]
gmm_LG = gmm_LG[cond]
Tr.gmm_samples = Tr.gmm_samples[:,0:3]
Tr.compute_model_weights(L, 'gmm')
Tr.calculate_N95()
Tr.unprocess(L.samples_means, L.samples_stds, 'gmm')
data2 = np.transpose(np.vstack((np.transpose(Tr.gmm_samples), gmm_MW, gmm_M31)))
labs=["mwd", "mwvr", "mwvt", "MWMvir", "M31Mvir"]
pl = triangle.corner(data2, labels=labs, quantiles=[0.16,0.5,0.84], fig=None, weights=None,\
plot_contours=True, show_titles=True, title_args={"fontsize": 12}, \
plot_datapoints=False, bins=20, color='c')
Tr.preprocess(L.samples_means, L.samples_stds, mode='gmm')
labs = ["MWMvir", "M31Mvir", "MW+M31"]
all_mvir = np.transpose(np.vstack((gmm_MW, gmm_M31, gmm_LG)))
figure = triangle.corner(all_mvir, labels=labs, quantiles=[0.16,0.5,0.84], fig=None, weights=Tr.weights,\
plot_contours=True, show_titles=True, title_kwargs={"fontsize": 12}, \
plot_datapoints=False, bins=20, color='red')
#figure = triangle.corner(all_mvir, labels=labs, quantiles=[0.16,0.5,0.84], fig=None, show_titles=True, title_args={"fontsize": 12}, color='g')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define the necessary environment variables and install the KubeFlow Pipeline SDK
Step2: Install the necessary python packages
Step3: Note
Step5: 2. Define pipeline tasks using the kfp library.
|
<ASSISTANT_TASK:>
Python Code:
! arena data list
KFP_SERVICE="ml-pipeline.kubeflow.svc.cluster.local:8888"
KFP_PACKAGE = 'http://kubeflow.oss-cn-beijing.aliyuncs.com/kfp/0.1.14/kfp.tar.gz'
KFP_ARENA_PACKAGE = 'http://kubeflow.oss-cn-beijing.aliyuncs.com/kfp-arena/kfp-arena-0.3.tar.gz'
KUBEFLOW_PIPELINE_LINK = ''
MOUNT="['user-susan:/training']"
GPUs=1
!pip3 install $KFP_PACKAGE --upgrade
!pip3 install $KFP_ARENA_PACKAGE --upgrade
import arena
import kfp.dsl as dsl
@dsl.pipeline(
name='pipeline to run jobs',
description='shows how to run pipeline jobs.'
)
def sample_pipeline(learning_rate='0.01',
dropout='0.9',
model_version='1'):
"""A pipeline for an end-to-end machine learning workflow."""
# 1. prepare data
prepare_data = arena.StandaloneOp(
name="prepare-data",
image="byrnedo/alpine-curl",
data=MOUNT,
command="mkdir -p /training/dataset/mnist && \
cd /training/dataset/mnist && \
curl -O https://code.aliyun.com/xiaozhou/tensorflow-sample-code/raw/master/data/t10k-images-idx3-ubyte.gz && \
curl -O https://code.aliyun.com/xiaozhou/tensorflow-sample-code/raw/master/data/t10k-labels-idx1-ubyte.gz && \
curl -O https://code.aliyun.com/xiaozhou/tensorflow-sample-code/raw/master/data/train-images-idx3-ubyte.gz && \
curl -O https://code.aliyun.com/xiaozhou/tensorflow-sample-code/raw/master/data/train-labels-idx1-ubyte.gz")
# 2. prepare source code
prepare_code = arena.StandaloneOp(
name="source-code",
image="alpine/git",
data=MOUNT,
command="mkdir -p /training/models/ && \
cd /training/models/ && \
if [ ! -d /training/models/tensorflow-sample-code ]; then git clone https://github.com/cheyang/tensorflow-sample-code.git; else echo no need download;fi")
# 3. train the models
train = arena.StandaloneOp(
name="train",
image="tensorflow/tensorflow:1.11.0-gpu-py3",
gpus=GPUs,
data=MOUNT,
command="echo %s; \
echo %s; \
python /training/models/tensorflow-sample-code/tfjob/docker/mnist/main.py --max_steps 500 --data_dir /training/dataset/mnist --log_dir /training/output/mnist" % (prepare_data.output, prepare_code.output),
metric_name="Train-accuracy",
metric_unit="PERCENTAGE",
)
# 4. export the model
export_model = arena.StandaloneOp(
name="export-model",
image="tensorflow/tensorflow:1.11.0-py3",
data=MOUNT,
command="echo %s; \
python /training/models/tensorflow-sample-code/tfjob/docker/mnist/export_model.py --model_version=%s --checkpoint_step=400 --checkpoint_path=/training/output/mnist /training/output/models" % (train.output,model_version))
learning_rate = "0.001"
dropout = "0.8"
model_version = "1"
arguments = {
'learning_rate': learning_rate,
'dropout': dropout,
'model_version': model_version,
}
import kfp
client = kfp.Client(host=KUBEFLOW_PIPELINE_LINK)
run = client.create_run_from_pipeline_func(sample_pipeline, arguments=arguments).run_info
print('The above run link is assuming you ran this cell on JupyterHub that is deployed on the same cluster. ' +
'The actual run link is ' + KUBEFLOW_PIPELINE_LINK + '/#/runs/details/' + run.id)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
from sklearn import tree
import pandas as pd
import pandas_datareader as web
import numpy as np
df = web.DataReader('goog', 'yahoo', start='2012-5-1', end='2016-5-20')
df['B/S'] = (df['Close'].diff() < 0).astype(int)
closing = (df.loc['2013-02-15':'2016-05-21'])
ma_50 = (df.loc['2013-02-15':'2016-05-21'])
ma_100 = (df.loc['2013-02-15':'2016-05-21'])
ma_200 = (df.loc['2013-02-15':'2016-05-21'])
buy_sell = (df.loc['2013-02-15':'2016-05-21']) # Fixed
close = pd.DataFrame(closing)
ma50 = pd.DataFrame(ma_50)
ma100 = pd.DataFrame(ma_100)
ma200 = pd.DataFrame(ma_200)
buy_sell = pd.DataFrame(buy_sell)
clf = tree.DecisionTreeRegressor()
x = np.concatenate([close, ma50, ma100, ma200], axis=1)
y = buy_sell
clf.fit(x, y)
close_buy1 = close[:-1]
m5 = ma_50[:-1]
m10 = ma_100[:-1]
ma20 = ma_200[:-1]
# b = np.concatenate([close_buy1, m5, m10, ma20], axis=1)
predict = clf.predict(pd.concat([close_buy1, m5, m10, ma20], axis=1))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the original rating dataset
Step2: Join the new user ratings with the original dataset
Step3: Re-train the model
Step4: Save the model
Step5: Load the model
Step6: Predict the top 10 movies for the new user
Step7: Let's take a look at the new_user_unrated_movies_RDD data
Step8: You would want to join the above data set to output the movie names and also filter out movies with less than X number of ratings.<br>
|
<ASSISTANT_TASK:>
Python Code:
from pyspark.mllib.recommendation import Rating
new_user_ID = 0
new_user_ratings = [
Rating(0,260,9), # Star Wars (1977)
Rating(0,1,8), # Toy Story (1995)
Rating(0,16,7), # Casino (1995)
Rating(0,25,8), # Leaving Las Vegas (1995)
Rating(0,32,9), # Twelve Monkeys (a.k.a. 12 Monkeys) (1995)
Rating(0,335,4), # Flintstones, The (1994)
Rating(0,379,3), # Timecop (1994)
Rating(0,296,7), # Pulp Fiction (1994)
Rating(0,858,10) , # Godfather, The (1972)
Rating(0,50,8) # Usual Suspects, The (1995)
]
new_user_ratings_RDD = sc.parallelize(new_user_ratings)
new_user_ratings_RDD.collect()
from pyspark.mllib.recommendation import Rating
ratings = sc.textFile('ratings.dat') \
.map(lambda l: l.split("::")) \
.map(lambda p: Rating(
user = int(p[0]),
product = int(p[1]),
rating = float(p[2]),
))
ratings = ratings.union(new_user_ratings_RDD)
from pyspark.mllib.recommendation import ALS
rank = 50
numIterations = 20
lambdaParam = 0.1
model = ALS.train(ratings, rank, numIterations, lambdaParam)
# if there is an existing model, delete it
!rm -rf ./recommender_model
# save the model
model.save(sc, './recommender_model')
from pyspark.mllib.recommendation import MatrixFactorizationModel
model = MatrixFactorizationModel.load(sc, './recommender_model')
new_user_rated_movie_ids = list(map(lambda x: x[1], new_user_ratings))
# new_user_rated_movie_ids = [260, 1, 16, 25, 32, 335, 379, 296, 858, 50]
new_user_unrated_movies_RDD = ratings.filter(lambda r: r.product not in new_user_rated_movie_ids) \
.map(lambda x: (new_user_ID, x.product)) \
.distinct()
new_user_unrated_movies_RDD.take(5)
new_user_recommendations_RDD = model.predictAll(new_user_unrated_movies_RDD)
print(new_user_recommendations_RDD.take(10))
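# Illustrative sketch of the join/filter step mentioned in the description (not part of
# the original notebook). It assumes a MovieLens-style 'movies.dat' file with
# "movieID::title::genres" rows; the file name and the 25-rating threshold are assumptions.
movie_titles_RDD = sc.textFile('movies.dat') \
    .map(lambda l: l.split("::")) \
    .map(lambda p: (int(p[0]), p[1]))
rating_counts_RDD = ratings.map(lambda r: (r.product, 1)).reduceByKey(lambda a, b: a + b)
top_movies = new_user_recommendations_RDD \
    .map(lambda r: (r.product, r.rating)) \
    .join(movie_titles_RDD) \
    .join(rating_counts_RDD) \
    .filter(lambda kv: kv[1][1] >= 25) \
    .map(lambda kv: (kv[1][0][1], kv[1][0][0], kv[1][1])) \
    .takeOrdered(10, key=lambda x: -x[1])
print(top_movies)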
my_movie = sc.parallelize([(0, 500)]) # Quiz Show (1994)
individual_movie_rating_RDD = model.predictAll(my_movie)
individual_movie_rating_RDD.collect()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Please ignore incompatible errors.
Step2: Importing libraries
Step3: The hidden code cell limits the output height in colab.
Step4: Training a Random Forest model
Step5: The dataset contains a mix of numerical (e.g. bill_depth_mm), categorical
Step7: Next split the dataset into training and testing
Step8: And finally, convert the pandas dataframe (pd.Dataframe) into tensorflow datasets (tf.data.Dataset)
Step9: Notes
Step10: Remarks
Step11: Remark
Step12: Plot the model
Step13: The root node on the left contains the first condition (bill_depth_mm >= 16.55), number of examples (240) and label distribution (the red-blue-green bar).
Step14: The information in summary are all available programatically using the model inspector
Step15: The content of the summary and the inspector depends on the learning algorithm (tfdf.keras.RandomForestModel in this case) and its hyper-parameters (e.g. compute_oob_variable_importances=True will trigger the computation of Out-of-bag variable importances for the Random Forest learner).
Step16: Plotting the training logs
Step17: Let's plot it
Step18: This dataset is small. You can see the model converging almost immediately.
Step19: <!-- <img class="tfo-display-only-on-site" src="images/beginner_tensorboard.png"/> -->
Step20: The description of the learning algorithms and their hyper-parameters are also available in the API reference and builtin help
Step21: Using a subset of features
Step22: Note
Step23: Note that year is in the list of CATEGORICAL features (unlike the first run).
Step24: As new training methods are published and implemented, combinations of hyper-parameters can emerge as good or almost-always better than the default parameters. To avoid changing the default hyper-parameter values, these good combinations are indexed and made available as hyper-parameter templates.
Step25: The available templates are listed by predefined_hyperparameters. Note that different learning algorithms have different templates, even if the name is similar.
Step26: Feature Preprocessing
Step27: The following example re-implements the same logic using TensorFlow Feature
Step28: Training a regression model
Step29: Training a ranking model
Step31: The dataset is stored as a .txt file in a specific format, so first convert it into a csv file.
Step32: In this dataset, the relevance defines the ground-truth rank among rows of the same group.
Step33: At this point, keras does not propose any ranking metrics. Instead, the training and validation (a GBDT uses a validation dataset) are shown in the training
|
<ASSISTANT_TASK:>
Python Code:
# Install the specified package
!pip install tensorflow_decision_forests
# Install the specified package
!pip install wurlitzer
# Import necessary libraries
import tensorflow_decision_forests as tfdf
import os
import numpy as np
import pandas as pd
import tensorflow as tf
import math
try:
from wurlitzer import sys_pipes
except:
from colabtools.googlelog import CaptureLog as sys_pipes
from IPython.core.magic import register_line_magic
from IPython.display import Javascript
# Some of the model training logs can cover the full
# screen if not compressed to a smaller viewport.
# This magic allows setting a max height for a cell.
@register_line_magic
def set_cell_height(size):
display(
Javascript("google.colab.output.setIframeHeight(0, true, {maxHeight: " +
str(size) + "})"))
# Check the version of TensorFlow Decision Forests
print("Found TensorFlow Decision Forests v" + tfdf.__version__)
# Download the dataset
!gsutil cp gs://cloud-training/mlongcp/v3.0_MLonGC/toy_data/penguins_toy.csv /tmp/penguins.csv
# Load a dataset into a Pandas Dataframe.
dataset_df = pd.read_csv("/tmp/penguins.csv")
# Display the first 3 examples.
dataset_df.head(3)
# Encode the categorical label into an integer.
#
# Details:
# This stage is necessary if your classification label is represented as a
# string. Note: Keras expected classification labels to be integers.
# Name of the label column.
label = "species"
classes = dataset_df[label].unique().tolist()
print(f"Label classes: {classes}")
dataset_df[label] = dataset_df[label].map(classes.index)
# Split the dataset into a training and a testing dataset.
def split_dataset(dataset, test_ratio=0.30):
"""Splits a pandas dataframe in two."""
test_indices = np.random.rand(len(dataset)) < test_ratio
return dataset[~test_indices], dataset[test_indices]
train_ds_pd, test_ds_pd = split_dataset(dataset_df)
print("{} examples in training, {} examples for testing.".format(
len(train_ds_pd), len(test_ds_pd)))
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(train_ds_pd, label=label)
test_ds = tfdf.keras.pd_dataframe_to_tf_dataset(test_ds_pd, label=label)
%set_cell_height 300
# Specify the model.
model_1 = tfdf.keras.RandomForestModel()
# Optionally, add evaluation metrics.
model_1.compile(
metrics=["accuracy"])
# Train the model.
# "sys_pipes" is optional. It enables the display of the training logs.
# TODO
with sys_pipes():
model_1.fit(x=train_ds)
# TODO
# Evaluate the model
evaluation = model_1.evaluate(test_ds, return_dict=True)
print()
for name, value in evaluation.items():
print(f"{name}: {value:.4f}")
# Save the model
model_1.save("/tmp/my_saved_model")
# Plot the first tree of the model
tfdf.model_plotter.plot_model_in_colab(model_1, tree_idx=0, max_depth=3)
# Print the overall structure of the model
%set_cell_height 300
model_1.summary()
# The input features
model_1.make_inspector().features()
# The feature importances
model_1.make_inspector().variable_importances()
# TODO
# Evaluate the model
model_1.make_inspector().evaluation()
%set_cell_height 150
model_1.make_inspector().training_logs()
# Import necessary libraries
import matplotlib.pyplot as plt
logs = model_1.make_inspector().training_logs()
# Plot the logs
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot([log.num_trees for log in logs], [log.evaluation.accuracy for log in logs])
plt.xlabel("Number of trees")
plt.ylabel("Accuracy (out-of-bag)")
plt.subplot(1, 2, 2)
plt.plot([log.num_trees for log in logs], [log.evaluation.loss for log in logs])
plt.xlabel("Number of trees")
plt.ylabel("Logloss (out-of-bag)")
plt.show()
# This cell start TensorBoard that can be slow.
# Load the TensorBoard notebook extension
%load_ext tensorboard
# Google internal version
# %load_ext google3.learning.brain.tensorboard.notebook.extension
# Clear existing results (if any)
!rm -fr "/tmp/tensorboard_logs"
# Export the meta-data to tensorboard.
model_1.make_inspector().export_to_tensorboard("/tmp/tensorboard_logs")
# docs_infra: no_execute
# Start a tensorboard instance.
%tensorboard --logdir "/tmp/tensorboard_logs"
# List all algorithms
tfdf.keras.get_all_models()
# help works anywhere.
help(tfdf.keras.RandomForestModel)
# ? only works in ipython or notebooks, it usually opens on a separate panel.
tfdf.keras.RandomForestModel?
feature_1 = tfdf.keras.FeatureUsage(name="bill_length_mm")
feature_2 = tfdf.keras.FeatureUsage(name="island")
all_features = [feature_1, feature_2]
# Note: This model is only trained with two features. It will not be as good as
# the one trained on all features.
# TODO
model_2 = tfdf.keras.GradientBoostedTreesModel(
features=all_features, exclude_non_specified_features=True)
model_2.compile(metrics=["accuracy"])
model_2.fit(x=train_ds, validation_data=test_ds)
print(model_2.evaluate(test_ds, return_dict=True))
# Define the features
%set_cell_height 300
feature_1 = tfdf.keras.FeatureUsage(name="year", semantic=tfdf.keras.FeatureSemantic.CATEGORICAL)
feature_2 = tfdf.keras.FeatureUsage(name="bill_length_mm")
feature_3 = tfdf.keras.FeatureUsage(name="sex")
all_features = [feature_1, feature_2, feature_3]
model_3 = tfdf.keras.GradientBoostedTreesModel(features=all_features, exclude_non_specified_features=True)
model_3.compile( metrics=["accuracy"])
with sys_pipes():
model_3.fit(x=train_ds, validation_data=test_ds)
# A classical but slighly more complex model.
model_6 = tfdf.keras.GradientBoostedTreesModel(
num_trees=500, growing_strategy="BEST_FIRST_GLOBAL", max_depth=8)
model_6.fit(x=train_ds)
# TODO
# A more complex, but possibly, more accurate model.
model_7 = tfdf.keras.GradientBoostedTreesModel(
num_trees=500,
growing_strategy="BEST_FIRST_GLOBAL",
max_depth=8,
split_axis="SPARSE_OBLIQUE",
categorical_algorithm="RANDOM",
)
model_7.fit(x=train_ds)
# A good template of hyper-parameters.
model_8 = tfdf.keras.GradientBoostedTreesModel(hyperparameter_template="benchmark_rank1")
model_8.fit(x=train_ds)
# The hyper-parameter templates of the Gradient Boosted Tree model.
print(tfdf.keras.GradientBoostedTreesModel.predefined_hyperparameters())
%set_cell_height 300
body_mass_g = tf.keras.layers.Input(shape=(1,), name="body_mass_g")
body_mass_kg = body_mass_g / 1000.0
bill_length_mm = tf.keras.layers.Input(shape=(1,), name="bill_length_mm")
raw_inputs = {"body_mass_g": body_mass_g, "bill_length_mm": bill_length_mm}
processed_inputs = {"body_mass_kg": body_mass_kg, "bill_length_mm": bill_length_mm}
# "preprocessor" contains the preprocessing logic.
preprocessor = tf.keras.Model(inputs=raw_inputs, outputs=processed_inputs)
# "model_4" contains both the pre-processing logic and the decision forest.
model_4 = tfdf.keras.RandomForestModel(preprocessing=preprocessor)
model_4.fit(x=train_ds)
model_4.summary()
def g_to_kg(x):
return x / 1000
feature_columns = [
tf.feature_column.numeric_column("body_mass_g", normalizer_fn=g_to_kg),
tf.feature_column.numeric_column("bill_length_mm"),
]
preprocessing = tf.keras.layers.DenseFeatures(feature_columns)
model_5 = tfdf.keras.RandomForestModel(preprocessing=preprocessing)
model_5.compile(metrics=["accuracy"])
model_5.fit(x=train_ds)
# Download the dataset.
!gsutil cp gs://cloud-training/mlongcp/v3.0_MLonGC/toy_data/abalone_raw_toy.csv /tmp/abalone.csv
dataset_df = pd.read_csv("/tmp/abalone.csv")
print(dataset_df.head(3))
# Split the dataset into a training and testing dataset.
train_ds_pd, test_ds_pd = split_dataset(dataset_df)
print("{} examples in training, {} examples for testing.".format(
len(train_ds_pd), len(test_ds_pd)))
# Name of the label column.
label = "Rings"
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(train_ds_pd, label=label, task=tfdf.keras.Task.REGRESSION)
test_ds = tfdf.keras.pd_dataframe_to_tf_dataset(test_ds_pd, label=label, task=tfdf.keras.Task.REGRESSION)
%set_cell_height 300
# TODO
# Configure the regression model.
model_7 = tfdf.keras.RandomForestModel(task = tfdf.keras.Task.REGRESSION)
# Optional.
model_7.compile(metrics=["mse"])
# Train the model.
with sys_pipes():
model_7.fit(x=train_ds)
# Evaluate the model on the test dataset.
evaluation = model_7.evaluate(test_ds, return_dict=True)
print(evaluation)
print()
print(f"MSE: {evaluation['mse']}")
print(f"RMSE: {math.sqrt(evaluation['mse'])}")
%set_cell_height 200
archive_path = tf.keras.utils.get_file("letor.zip",
"https://download.microsoft.com/download/E/7/E/E7EABEF1-4C7B-4E31-ACE5-73927950ED5E/Letor.zip",
extract=True)
# Path to the train and test dataset using libsvm format.
raw_dataset_path = os.path.join(os.path.dirname(archive_path),"OHSUMED/Data/All/OHSUMED.txt")
def convert_libsvm_to_csv(src_path, dst_path):
"""Converts a libsvm ranking dataset into a flat csv file.
Note: This code is specific to the LETOR3 dataset.
"""
dst_handle = open(dst_path, "w")
first_line = True
for src_line in open(src_path,"r"):
# Note: The last 3 items are comments.
items = src_line.split(" ")[:-3]
relevance = items[0]
group = items[1].split(":")[1]
features = [ item.split(":") for item in items[2:]]
if first_line:
# Csv header
dst_handle.write("relevance,group," + ",".join(["f_" + feature[0] for feature in features]) + "\n")
first_line = False
dst_handle.write(relevance + ",g_" + group + "," + (",".join([feature[1] for feature in features])) + "\n")
dst_handle.close()
# Convert the dataset.
!gsutil cp gs://cloud-training/mlongcp/v3.0_MLonGC/toy_data/ohsumed_toy.csv /tmp/ohsumed.csv
csv_dataset_path="/tmp/ohsumed.csv"
convert_libsvm_to_csv(raw_dataset_path, csv_dataset_path)
# Load a dataset into a Pandas Dataframe.
dataset_df = pd.read_csv(csv_dataset_path)
# Display the first 3 examples.
dataset_df.head(3)
train_ds_pd, test_ds_pd = split_dataset(dataset_df)
print("{} examples in training, {} examples for testing.".format(
len(train_ds_pd), len(test_ds_pd)))
# Display the first 3 examples of the training dataset.
train_ds_pd.head(3)
# Name of the relevance and grouping columns.
relevance = "relevance"
ranking_train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(train_ds_pd, label=relevance, task=tfdf.keras.Task.RANKING)
ranking_test_ds = tfdf.keras.pd_dataframe_to_tf_dataset(test_ds_pd, label=relevance, task=tfdf.keras.Task.RANKING)
%set_cell_height 400
# TODO
# Define the ranking model
model_8 = tfdf.keras.GradientBoostedTreesModel(
task=tfdf.keras.Task.RANKING,
ranking_group="group",
num_trees=50)
with sys_pipes():
model_8.fit(x=ranking_train_ds)
# Print the summary of the model
%set_cell_height 400
model_8.summary()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step5: Espec functions
Step11: Data read functions
Step12: Load data and subtract background
Step13: Mesh plot all unsaturated data
Step14: Plot averaged data
Step15: Interpolate data between regions with and without vinyl
Step16: Plot interpolated data
Step17: Calculate signal ratios
Step18: Determine signal ratios from MCNP output
Step19: <font size="4"><p>Plot energy absorbed versus segment depth to make sure everything looks good
Step20: <font size="4"><p>Next, the number of photons that reach the CCD can be calculated. From the quantum efficiency of the CCD, the contribution to the signal from visible photons can be found.</p>
Step21: <font size="4"><p>The amount of energy deposited into the CCD depletion region from electrons and x-rays can be read for both the "with vinyl" and "without vinyl" cases
Step22: <font size="4"><p>Finally, determine the signal ratios</p>
Step23: <font size="4"><p>and plot the results
|
<ASSISTANT_TASK:>
Python Code:
#Imports
from math import *
import numpy as np
import scipy as sp
import scipy.special
import scipy.interpolate as interpolate
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.cbook as cbook
import seaborn as sns
import sys
import os
#Import custom modules
from physics import *
%matplotlib notebook
B0 = 1710.0/10**4 #Magnetic field strength in Tesla
def KEcalc(z,y):
"""Returns KE in J given z-position in m"""
return me*c**2*(sqrt((q*B0/(me*c))**2*((z**2+y**2)/(2*y))**2+1)-1)
def Radius(KE):
#Radius of electron orbit in m given KE in keV
return me*c/(q*B0)*sqrt((KE*1000*q/(me*c**2)+1)**2-1)
def zfcalc(KE,y):
#Returns z-position at screen in inches given KE in keV
R = Radius(KE)
return sqrt(R**2 - (y-R)**2)
def zfcalcGeneral(KE,yM,y):
R = Radius(KE)
zM = zfcalc(KE,yM)
return zM + (y - yM)*(R - yM)/zM
def KEcalcGeneral(zf,yM,yf):
"""Returns KE in J given z-position of electrons, y-position of magnet edge, and y-position of screen, all in m"""
a = (yM+yf)**2
b = -2*yM*(yf*(yM+yf)+zf**2)
d = yM**2*(zf**2+yf**2)
f = (me*c)/(q*B0)
g = (-b+sqrt(b**2-4*a*d))/(2*a)
return me*c**2*(sqrt(g**2+f**2)/f - 1)
def AngleIncidence(KE,yM):
R = Radius(KE)
return asin((R-yM)/R)
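# Quick self-consistency check (illustrative only; the numbers below are arbitrary test
# values, not experimental settings): mapping an energy to a screen position and back
# should recover approximately the input energy.
_KE_test = 500.0                      # keV
_yM_test, _yf_test = 0.0127, 0.0157   # magnet edge and screen y-positions in m
_zf_test = zfcalcGeneral(_KE_test, _yM_test, _yf_test)
print(KEcalcGeneral(_zf_test, _yM_test, _yf_test) / (1000 * q))  # ~500 keV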
def getfns(folder,ext=''):
"""Get a list of full path filenames for all files in a folder and subfolders for the given extension"""
fns = []
for file in os.listdir(folder):
if file.endswith(ext):
fns.append(os.path.join(folder,file))
return fns
def readcsv(filename):
"""Read in a csv file and load it as an array"""
return np.loadtxt(open(filename, "rb"), delimiter=",")
def readcsvs(filenames):
"""Read in multiple csv files and load them into a two-dimensional array"""
template = readcsv(filenames[0])
numfiles = len(filenames)
Data = np.zeros([numfiles,len(template)])
for i in range(numfiles):
spectrum = readcsv(filenames[i])
Data[i,:] = spectrum
return Data
def DataClean(Data):
"""Read in data and clean it by removing rows that have saturated pixel values"""
maxes = np.max(Data[:,500:],1)
includes = maxes<(2**16-1)
rejects = maxes>(2**16-2)
CleanData = Data[includes,:]
return CleanData
def DataAverage(Data):
"""Average input 2d array into a 1d array"""
return np.mean(Data,0)
MagnetOutFolderPath = os.curdir + '/Data/2015-08-17_Tiger_stripes_test/even_more_no_magnet'
MagnetInFolderPath = os.curdir + '/Data/2015-08-17_Tiger_stripes_test/magnet_in'
NoLaserFolderPath = os.curdir + '/Data/2015-08-17_Tiger_stripes_test/no_laser'
MagnetOutFiles = getfns(MagnetOutFolderPath,'')
MagnetInFiles = getfns(MagnetInFolderPath,'')
NoLaserFiles = getfns(NoLaserFolderPath,'')
MagnetOutData = readcsvs(MagnetOutFiles)
MagnetInData = readcsvs(MagnetInFiles)
Background = DataAverage(MagnetInData)
MagnetOutDataClean = DataClean(MagnetOutData)-Background
sns.set(font_scale=1.5)
fig1 = plt.figure(figsize=(8,6))
ax1 = fig1.add_subplot(111)
ax1.set_xlim(500,3648)
ax1.set_ylim(0,len(MagnetInData))
ax1.set_xlabel('Pixel')
ax1.set_ylabel('Instance')
# mesh1 = ax1.pcolormesh(MagnetInData, cmap='hot',vmin=0, vmax=10000)
mesh1 = ax1.pcolormesh(MagnetOutDataClean, cmap='inferno',vmin=0, vmax=65535)
MagnetOutDataCleanAvg = DataAverage(MagnetOutDataClean)
MagnetOutDataCleanAvgHat = savitzky_golay(MagnetOutDataCleanAvg,51,3) #Smoothed Data
fig2 = plt.figure(figsize=(12,8))
ax2 = fig2.add_subplot(111)
ax2.set_xlim(0,3648)
ax2.set_ylim(0,65535)
ax2.set_xlabel('Pixel Number')
ax2.set_ylabel('Pixel Value')
#ax2.semilogy()
ax2.plot(MagnetOutDataCleanAvg,linewidth=1)
ax2.plot(MagnetOutDataCleanAvgHat,linewidth=1,color='r')
#Pixel values were determined by picking a region within each peak and trough
LanexPixelArray = np.hstack((np.arange(240,260),np.arange(670,750),np.arange(1090,1300),np.arange(1600,1700),\
np.arange(1930,2030),np.arange(2400,2450),np.arange(2820,2920),np.arange(3280,3350)))
LanexData = np.hstack((MagnetOutDataCleanAvgHat[240:260],MagnetOutDataCleanAvgHat[670:750],\
MagnetOutDataCleanAvgHat[1090:1300],MagnetOutDataCleanAvgHat[1600:1700],\
MagnetOutDataCleanAvgHat[1930:2030],MagnetOutDataCleanAvgHat[2400:2450],\
MagnetOutDataCleanAvgHat[2820:2920],MagnetOutDataCleanAvgHat[3280:3350]))
VinylPixelArray = np.hstack((np.arange(460,490),np.arange(900,950),np.arange(1400,1500),np.arange(1750,1850),\
np.arange(2150,2250),np.arange(2540,2640)))
VinylData = np.hstack((MagnetOutDataCleanAvgHat[460:490],MagnetOutDataCleanAvgHat[900:950],\
MagnetOutDataCleanAvgHat[1400:1500],MagnetOutDataCleanAvgHat[1750:1850],\
MagnetOutDataCleanAvgHat[2150:2250],MagnetOutDataCleanAvgHat[2540:2640]))
LanexInterpFunc = interpolate.interp1d(LanexPixelArray,LanexData,kind='slinear')
LanexPixelArrayInterp = np.arange(240,3350)
LanexDataInterp = LanexInterpFunc(LanexPixelArrayInterp)
VinylInterpFunc = interpolate.interp1d(VinylPixelArray,VinylData,kind='slinear')
VinylPixelArrayInterp = np.arange(460,2640)
VinylDataInterp = VinylInterpFunc(VinylPixelArrayInterp)
# sns.set(context='poster',font_scale=1.5)
# sns.set_style("darkgrid")
# sns.set_palette(palette='deep')
# sns.set_color_codes(palette='deep')
plt.figure(figsize=(12,6))
#With Lanex
#plt.plot(LanexPixelArray,LanexData,linestyle='None',marker='.')
plt.plot(LanexPixelArrayInterp,LanexDataInterp, linewidth=2,linestyle='--',label='Without Vinyl',color='b')
#Without Lanex
#plt.plot(VinylPixelArray,VinylData,linestyle='None',marker='.')
plt.plot(VinylPixelArrayInterp,VinylDataInterp,linewidth=2,linestyle='--',label='With Vinyl',color='r')
#All smoothed data
plt.plot(MagnetOutDataCleanAvgHat,linewidth=2,label='All Data',color='g')
plt.xlim(0,2000)
plt.xlabel('Pixel Number')
plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
plt.ylabel('Pixel Value')
plt.legend()
plt.subplots_adjust(left=0.12,bottom=0.14) #Adjust spacing to prevent clipping of x and y labels
plt.savefig('strip_test.svg')
yM = 0.5 #magnet edge position in inches
CCDpos = 3.02 #CCD y-position relative to magnet edge in mm
yM = yM*.0254 #Convert to base units
CCDpos = CCDpos*10**-3 #Convert to base units
yf = yM + CCDpos #Screen position
KEcalcGeneralVec = np.vectorize(KEcalcGeneral) #Vectorize espec function for KE
ChosenPixels = [479.37419602, 658.07909036, 805.53608568, 929.62974462, 1079.52432829, 1325.661891,\
1454.37508809, 1637.97442378, 1801.34325342, 1984.12923212, 2202.34572023, 2412.63089924,\
2595.3896852]
ChosenPositions = np.multiply(np.add(np.multiply(ChosenPixels,8.0/10**3),9.5),10**-3) #Convert to meters
ChosenEnergies = KEcalcGeneralVec(ChosenPositions,yM,yf) #Energies corresponding to the chosen pixels
SignalRatio = LanexInterpFunc(ChosenPixels)/VinylInterpFunc(ChosenPixels)
print(ChosenEnergies/(1000*q),'\n')
print(SignalRatio)
mass = 2.95680*10**-2 #mass of active layer in g
Nsegments = 101 #number of segments
segmentmass = mass/Nsegments #mass of each segment in g
phosphorthickness = 81.*10**-6 #thickness of active layer in m
KEsim = np.multiply([338, 391, 440, 485],10**-3) #Chosen energies (only the first 4 were used to save time)
#Create array of angles
CosThetaSim = []
for x in KEsim:
CosThetaSim.append(np.cos(AngleIncidence(x*1000,yM)))
CCDthickness = np.arange(1,21,1) #Depletion region thickness array
MCNP_Directory = '/home/drake/Documents/Physics/Research/Python/MCNP_Code'
#Read files. As far as visible photons are concerned, the depletion region thickness doesn't matter
deptharray= []
Eabsorbed_Segments = []
Eabsorbed_Segments_Error = []
for i in range(len(KEsim)):
deptharray.append([])
Eabsorbed_Segments.append([])
Eabsorbed_Segments_Error.append([])
#Directory of the input files where vinyl was not used in the simulation
if i%2==0: #Energies where vinyl was experimentally used
directory = MCNP_Directory + '/MCNP_Decks_Varying_CCD_Thickness/Old_Output/Output2/Out_{KE}MeV_{Theta}Degrees_20umCCD_inverted'\
.format(KE=str(round(KEsim[i],3)),Theta=str(round(np.arccos(CosThetaSim[i])*360/(2*pi),1)))
else: #Energies where vinyl wasn't experimentally used
directory = MCNP_Directory + '/MCNP_Decks_Varying_CCD_Thickness/Old_Output/Output1/Out_{KE}MeV_{Theta}Degrees_20umCCD'\
.format(KE=str(round(KEsim[i],3)),Theta=str(round(np.arccos(CosThetaSim[i])*360/(2*pi),1)))
for segmentnumber in range(Nsegments-1):
segment = segmentnumber+100 #segment number
deptharray[i].append(segmentnumber*phosphorthickness/Nsegments) #depth for each segment in microns
printflag = False
with open(directory) as searchfile:
for line in searchfile:
left,sep,right = line.partition(' -{segmentlabel} '.format(segmentlabel=str(segment)))
if printflag:
Eabs_per_g = float(line[17:28])
Eabs_per_g_Error = float(line[29:35])
Eabsorbed_Segments[i].append(Eabs_per_g*segmentmass)
Eabsorbed_Segments_Error[i].append(Eabs_per_g_Error*segmentmass)
printflag = False
if sep:
printflag = True
plt.figure(figsize=(12,6))
plt.plot(np.multiply(deptharray[1],10**6),np.multiply(Eabsorbed_Segments[0],10**3),label='338 keV')
plt.plot(np.multiply(deptharray[1],10**6),np.multiply(Eabsorbed_Segments[1],10**3),label='391 keV')
plt.plot(np.multiply(deptharray[1],10**6),np.multiply(Eabsorbed_Segments[2],10**3),label='440 keV')
plt.plot(np.multiply(deptharray[1],10**6),np.multiply(Eabsorbed_Segments[3],10**3),label='485 keV')
plt.xlabel('Depth (um)')
plt.ylabel('Energy absorbed (keV)')
plt.legend()
scatteringlength = 2.84*10**-6 #photon scattering length in Gd2O2S in m
conversionefficiency = 0.16 #electron energy to light energy conversion efficiency
emissionwavelength = 545*10**-9 #lanex emission wavelength in m
emissionenergy = hbar*2*pi*c/emissionwavelength
photonNumberArray = []
for i in range(len(KEsim)):
photonNumber = 0
for j in range(len(deptharray[i])):
photonNumber += conversionefficiency*Eabsorbed_Segments[i][j]*10**6*q/emissionenergy\
*(j+0.5)/Nsegments # Nabs = Nexc*(Distance from top of lanex)/(Phosphor thickness)
#where Distance from top of lanex = (SegmentNumber+0.5)/(Nsegments)*(Phosphor thickness)
photonNumberArray.append(photonNumber)
QuantumEfficiency = 0.4
PhotonSignal = np.array(np.multiply(photonNumberArray,QuantumEfficiency))
print(PhotonSignal)
#With vinyl
ExAbsorbedCCDVinyl = [] #x-rays
ExAbsorbedCCDErrorVinyl = []
EelAbsorbedCCDVinyl = [] #electrons
EelAbsorbedCCDErrorVinyl = []
CCDmass = np.multiply(CCDthickness,1.39740*10**-4) #mass of photoactive layer of CCD
for i in range(len(KEsim)):
ExAbsorbedCCDVinyl.append([])
ExAbsorbedCCDErrorVinyl.append([])
EelAbsorbedCCDVinyl.append([])
EelAbsorbedCCDErrorVinyl.append([])
for j in range(len(CCDthickness)):
#Directory of the input files where vinyl was used in the simulation
if i%2==0: #Energies where vinyl was experimentally used
directory = MCNP_Directory + '/MCNP_Decks_Varying_CCD_Thickness/Old_Output/Output1/Out_{KE}MeV_{Theta}Degrees_{CCD}umCCD'\
.format(KE=str(round(KEsim[i],3)),Theta=str(round(np.arccos(CosThetaSim[i])*360/(2*pi),1)),\
CCD=str(CCDthickness[j]))
else: #Energies where vinyl wasn't experimentally used
directory = MCNP_Directory + '/MCNP_Decks_Varying_CCD_Thickness/Old_Output/Output2/Out_{KE}MeV_{Theta}Degrees_{CCD}umCCD_inverted'\
.format(KE=str(round(KEsim[i],3)),Theta=str(round(np.arccos(CosThetaSim[i])*360/(2*pi),1)),\
CCD=str(CCDthickness[j]))
printflag = False
with open(directory) as searchfile:
firstoccurence = False
for line in searchfile:
left,sep,right = line.partition('cell 7')
if printflag:
Eabs_per_g = float(line[17:28])
Eabs_per_g_Error = float(line[29:35])
if firstoccurence:
ExAbsorbedCCDVinyl[i].append(Eabs_per_g*CCDmass[j])
ExAbsorbedCCDErrorVinyl[i].append(Eabs_per_g_Error*CCDmass[j])
else:
EelAbsorbedCCDVinyl[i].append(Eabs_per_g*CCDmass[j])
EelAbsorbedCCDErrorVinyl[i].append(Eabs_per_g_Error*CCDmass[j])
printflag = False
if sep: # True iff 'cell 7' in line
printflag = True
firstoccurence = not firstoccurence
#Calculate the overall contribution to the signal
Energy_eh_pair = 3.65 #Energy needed to generate electron-hole pair in Si in eV
ElectronSignalVinyl = []
XraySignalVinyl = []
for i in range(len(KEsim)):
ElectronSignalVinyl.append([])
XraySignalVinyl.append([])
ElectronSignalVinyl[i].append(np.multiply(EelAbsorbedCCDVinyl[i],10**6/Energy_eh_pair))
XraySignalVinyl[i].append(np.multiply(ExAbsorbedCCDVinyl[i],10**6/Energy_eh_pair))
ElectronSignalVinyl = np.array(ElectronSignalVinyl)
XraySignalVinyl = np.array(XraySignalVinyl)
TotalSignalVinyl = np.add(ElectronSignalVinyl,XraySignalVinyl)
#Without vinyl
ExAbsorbedCCDNoVinyl = [] #x-rays
ExAbsorbedCCDErrorNoVinyl = []
EelAbsorbedCCDNoVinyl = [] #electrons
EelAbsorbedCCDErrorNoVinyl = []
CCDmass = np.multiply(CCDthickness,1.39740*10**-4) #mass of photoactive layer of CCD
for i in range(len(KEsim)):
ExAbsorbedCCDNoVinyl.append([])
ExAbsorbedCCDErrorNoVinyl.append([])
EelAbsorbedCCDNoVinyl.append([])
EelAbsorbedCCDErrorNoVinyl.append([])
for j in range(len(CCDthickness)):
#Directory of the input files where vinyl was not used in the simulation
if i%2==0: #Energies where vinyl was experimentally used
directory = MCNP_Directory + '/MCNP_Decks_Varying_CCD_Thickness/Old_Output/Output2/Out_{KE}MeV_{Theta}Degrees_{CCD}umCCD_inverted'\
.format(KE=str(round(KEsim[i],3)),Theta=str(round(np.arccos(CosThetaSim[i])*360/(2*pi),1)),\
CCD=str(CCDthickness[j]))
else: #Energies where vinyl wasn't experimentally used
directory = MCNP_Directory + '/MCNP_Decks_Varying_CCD_Thickness/Old_Output/Output1/Out_{KE}MeV_{Theta}Degrees_{CCD}umCCD'\
.format(KE=str(round(KEsim[i],3)),Theta=str(round(np.arccos(CosThetaSim[i])*360/(2*pi),1)),\
CCD=str(CCDthickness[j]))
printflag = False
with open(directory) as searchfile:
firstoccurence = False
for line in searchfile:
left,sep,right = line.partition('cell 6')
if printflag:
Eabs_per_g = float(line[17:28])
Eabs_per_g_Error = float(line[29:35])
if firstoccurence:
ExAbsorbedCCDNoVinyl[i].append(Eabs_per_g*CCDmass[j])
ExAbsorbedCCDErrorNoVinyl[i].append(Eabs_per_g_Error*CCDmass[j])
else:
EelAbsorbedCCDNoVinyl[i].append(Eabs_per_g*CCDmass[j])
EelAbsorbedCCDErrorNoVinyl[i].append(Eabs_per_g_Error*CCDmass[j])
printflag = False
if sep: # True iff 'cell 6' in line
printflag = True
firstoccurence = not firstoccurence
#Calculate the overall contribution to the signal
Energy_eh_pair = 3.65 #Energy needed to generate electron-hole pair in Si in eV
ElectronSignalNoVinyl = []
XraySignalNoVinyl = []
for i in range(len(KEsim)):
ElectronSignalNoVinyl.append([])
XraySignalNoVinyl.append([])
ElectronSignalNoVinyl[i].append(np.multiply(EelAbsorbedCCDNoVinyl[i],10**6/Energy_eh_pair))
XraySignalNoVinyl[i].append(np.multiply(ExAbsorbedCCDNoVinyl[i],10**6/Energy_eh_pair))
TotalSignalNoVinyl = np.add(ElectronSignalNoVinyl,XraySignalNoVinyl)
for i in range(len(KEsim)):
TotalSignalNoVinyl[i] += PhotonSignal[i]
Ratio = [TotalSignalNoVinyl[0][0][0:19]/TotalSignalVinyl[0][0][0:19],\
TotalSignalNoVinyl[1][0][0:19]/TotalSignalVinyl[1][0][0:19],\
TotalSignalNoVinyl[2][0][0:19]/TotalSignalVinyl[2][0][0:19],\
TotalSignalNoVinyl[3][0][0:19]/TotalSignalVinyl[3][0][0:19]]
MCNPSignalRatio = []
for i in range(len(CCDthickness)-1):
MCNPSignalRatio.append([])
for j in range(len(KEsim)):
MCNPSignalRatio[i].append(Ratio[j][i])
mpl.rcParams.update({'font.size': 24, 'font.family': 'serif'})
sns.set(context='poster',font_scale=1.5)
sns.set_style("darkgrid")
sns.set_palette(palette='deep')
sns.set_color_codes(palette='deep')
plt.figure(figsize=(12,6))
plt.plot(np.multiply(KEsim,1.0),MCNPSignalRatio[0],linewidth=2,label='1 '+ u'\u03bcm')
plt.plot(np.multiply(KEsim,1.0),MCNPSignalRatio[1],linewidth=2,label='2 '+ u'\u03bcm')
plt.plot(np.multiply(KEsim,1.0),MCNPSignalRatio[4],linewidth=2,color='y',label='5 '+ u'\u03bcm')
plt.plot(np.multiply(KEsim,1.0),MCNPSignalRatio[9],linewidth=2,color='c',label='10 '+ u'\u03bcm')
#plt.plot(np.multiply(KEsim,1.0),MCNPSignalRatio[14],linewidth=2,color='purple',label='MCNP 15 '+ u'\u03bcm photoactive region')
plt.plot(np.divide(ChosenEnergies,10**6*q),SignalRatio,linewidth=2,color='r',marker='o',label='Experiment')
plt.xlabel('Electron Energy (keV)')
plt.ylabel('Signal Ratio (No Vinyl / Vinyl)')
plt.xlim(0.33,0.50)
plt.legend(title='Photoactive Region Thickness')
plt.subplots_adjust(left=0.14,bottom=0.15) #Adjust spacing to prevent clipping of x and y labels
#plt.savefig('Signal_Ratio.svg')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: 1) How does gradient checking work?
Step4: Expected Output
Step6: Expected Output
Step8: Expected Output
Step10: Now, run backward propagation.
Step12: You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct.
|
<ASSISTANT_TASK:>
Python Code:
# Packages
import numpy as np
from testCases import *
from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector
# GRADED FUNCTION: forward_propagation
def forward_propagation(x, theta):
"""
Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
J -- the value of function J, computed using the formula J(theta) = theta * x
"""
### START CODE HERE ### (approx. 1 line)
J = theta * x
### END CODE HERE ###
return J
x, theta = 2, 4
J = forward_propagation(x, theta)
print ("J = " + str(J))
# GRADED FUNCTION: backward_propagation
def backward_propagation(x, theta):
"""
Computes the derivative of J with respect to theta (see Figure 1).
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
dtheta -- the gradient of the cost with respect to theta
"""
### START CODE HERE ### (approx. 1 line)
dtheta = x
### END CODE HERE ###
return dtheta
x, theta = 2, 4
dtheta = backward_propagation(x, theta)
print ("dtheta = " + str(dtheta))
# GRADED FUNCTION: gradient_check
def gradient_check(x, theta, epsilon = 1e-7):
"""
Implement gradient checking for the model presented in Figure 1.
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
"""
# Compute gradapprox using left side of formula (1). epsilon is small enough, you don't need to worry about the limit.
### START CODE HERE ### (approx. 5 lines)
thetaplus = theta + epsilon # Step 1
thetaminus = theta - epsilon # Step 2
J_plus = forward_propagation(x, thetaplus) # Step 3
J_minus = forward_propagation(x, thetaminus) # Step 4
gradapprox = (J_plus - J_minus) / 2 / epsilon # Step 5
### END CODE HERE ###
# Check if gradapprox is close enough to the output of backward_propagation()
### START CODE HERE ### (approx. 1 line)
grad = backward_propagation(x, theta)
### END CODE HERE ###
### START CODE HERE ### (approx. 1 line)
numerator = np.linalg.norm(grad - gradapprox) # Step 1'
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2'
difference = numerator / denominator # Step 3'
### END CODE HERE ###
if difference < 1e-7:
print ("The gradient is correct!")
else:
print ("The gradient is wrong!")
return difference
x, theta = 2, 4
difference = gradient_check(x, theta)
print("difference = " + str(difference))
def forward_propagation_n(X, Y, parameters):
Implements the forward propagation (and computes the cost) presented in Figure 3.
Arguments:
X -- training set for m examples
Y -- labels for m examples
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (5, 4)
b1 -- bias vector of shape (5, 1)
W2 -- weight matrix of shape (3, 5)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
Returns:
cost -- the cost function (logistic cost for one example)
# retrieve parameters
m = X.shape[1]
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
# Cost
logprobs = np.multiply(-np.log(A3),Y) + np.multiply(-np.log(1 - A3), 1 - Y)
cost = 1./m * np.sum(logprobs)
cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)
return cost, cache
def backward_propagation_n(X, Y, cache):
Implement the backward propagation presented in figure 2.
Arguments:
X -- input datapoint, of shape (input size, 1)
Y -- true "label"
cache -- cache output from forward_propagation_n()
Returns:
gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
"dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
"dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
# GRADED FUNCTION: gradient_check_n
def gradient_check_n(parameters, gradients, X, Y, epsilon = 1e-7):
Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n
Arguments:
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
grad -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters.
x -- input datapoint, of shape (input size, 1)
y -- true "label"
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
# Set-up variables
parameters_values, _ = dictionary_to_vector(parameters)
grad = gradients_to_vector(gradients)
num_parameters = parameters_values.shape[0]
J_plus = np.zeros((num_parameters, 1))
J_minus = np.zeros((num_parameters, 1))
gradapprox = np.zeros((num_parameters, 1))
# Compute gradapprox
for i in range(num_parameters):
# Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]".
# "_" is used because the function you have to outputs two parameters but we only care about the first one
### START CODE HERE ### (approx. 3 lines)
thetaplus = np.copy(parameters_values) # Step 1
thetaplus[i][0] += epsilon # Step 2
J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus)) # Step 3
### END CODE HERE ###
# Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]".
### START CODE HERE ### (approx. 3 lines)
thetaminus = np.copy(parameters_values) # Step 1
thetaminus[i][0] -= epsilon # Step 2
J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus)) # Step 3
### END CODE HERE ###
# Compute gradapprox[i]
### START CODE HERE ### (approx. 1 line)
gradapprox[i] = (J_plus[i] - J_minus[i]) / 2 / epsilon
### END CODE HERE ###
# Compare gradapprox to backward propagation gradients by computing difference.
### START CODE HERE ### (approx. 1 line)
numerator = np.linalg.norm(grad - gradapprox) # Step 1'
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2'
difference = numerator / denominator # Step 3'
### END CODE HERE ###
if difference > 2e-7:
print ("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
else:
print ("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")
return difference
X, Y, parameters = gradient_check_n_test_case()
cost, cache = forward_propagation_n(X, Y, parameters)
gradients = backward_propagation_n(X, Y, cache)
difference = gradient_check_n(parameters, gradients, X, Y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Data
Step2: Manual Landmark Localization
Step3: Registration (manual landmark localization)
Step4: We can also evaluate the registration qualitatively by using a linked cursor approach via the same GUI we used to localize corresponding points. This time the points will be added in pairs.
Step5: Homework
Step6: Registration (semiautomatic landmark localization)
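The quantitative evaluation used throughout is the target registration error (TRE): each fixed-image point is mapped through the estimated transform and compared with its known counterpart in moving-image space. A minimal sketch of that idea (tx, fixed_points and moving_points stand for the transform and point lists used below; ru.registration_errors is what actually produces the reported mean/std/max):
errors = [np.linalg.norm(np.array(tx.TransformPoint(p)) - np.array(q))
          for p, q in zip(fixed_points, moving_points)]
print(np.mean(errors), np.std(errors), np.max(errors))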
|
<ASSISTANT_TASK:>
Python Code:
# To use interactive plots (mouse clicks, zooming, panning) we use the notebook back end. We want our graphs
# to be embedded in the notebook, inline mode, this combination is defined by the magic "%matplotlib notebook".
%matplotlib notebook
import numpy as np
import SimpleITK as sitk
import registration_utilities as ru
%run update_path_to_download_script
from downloaddata import fetch_data as fdata
import gui
fixed_image = sitk.ReadImage(fdata("training_001_ct.mha"), sitk.sitkFloat32)
moving_image = sitk.ReadImage(fdata("training_001_mr_T1.mha"), sitk.sitkFloat32)
fixed_fiducial_points, moving_fiducial_points = ru.load_RIRE_ground_truth(
fdata("ct_T1.standard")
)
# In the original data both images have the same orientation (patient in supine), the approach should also work when
# images have different orientation. In the extreme they have a 180^o rotation between them.
rotate = True
if rotate:
rotation_center = moving_image.TransformContinuousIndexToPhysicalPoint(
[(index - 1) / 2.0 for index in moving_image.GetSize()]
)
transform_moving = sitk.Euler3DTransform(rotation_center, 0, 0, np.pi, (0, 0, 0))
resample = sitk.ResampleImageFilter()
resample.SetReferenceImage(moving_image)
resample.SetInterpolator(sitk.sitkLinear)
resample.SetTransform(transform_moving)
moving_image = resample.Execute(moving_image)
for i, p in enumerate(moving_fiducial_points):
moving_fiducial_points[i] = transform_moving.TransformPoint(p)
# Compute the rigid transformation defined by the two point sets. Flatten the tuple lists
# representing the points. The LandmarkBasedTransformInitializer expects the point coordinates
# in one flat list [x1, y1, z1, x2, y2, z2...].
fixed_fiducial_points_flat = [c for p in fixed_fiducial_points for c in p]
moving_fiducial_points_flat = [c for p in moving_fiducial_points for c in p]
reference_transform = sitk.LandmarkBasedTransformInitializer(
sitk.VersorRigid3DTransform(),
fixed_fiducial_points_flat,
moving_fiducial_points_flat,
)
# Generate a reference dataset from the reference transformation
# (corresponding points in the fixed and moving images).
fixed_points = ru.generate_random_pointset(image=fixed_image, num_points=100)
moving_points = [reference_transform.TransformPoint(p) for p in fixed_points]
# Compute the TRE prior to registration.
pre_errors_mean, pre_errors_std, _, pre_errors_max, pre_errors = ru.registration_errors(
sitk.Euler3DTransform(), fixed_points, moving_points, display_errors=True
)
print(
f"Before registration, errors (TRE) in millimeters, mean(std): {pre_errors_mean:.2f}({pre_errors_std:.2f}), max: {pre_errors_max:.2f}"
)
point_acquisition_interface = gui.RegistrationPointDataAquisition(
fixed_image, moving_image, fixed_window_level=(215, 50)
)
# fixed_image_points, moving_image_points = point_acquisition_interface.get_points()
fixed_image_points = [
(156.48434676356158, 201.92274575468412, 68.0),
(194.25413436597393, 98.55771047484492, 32.0),
(128.94523819661913, 96.18284152323203, 32.0),
]
moving_image_points = [
(141.46826904042848, 156.97653126727528, 48.0),
(113.70102381552435, 251.76553994455645, 8.0),
(180.69457220262115, 251.76553994455645, 8.0),
]
fixed_image_points_flat = [c for p in fixed_image_points for c in p]
moving_image_points_flat = [c for p in moving_image_points for c in p]
manual_localized_transformation = sitk.VersorRigid3DTransform(
sitk.LandmarkBasedTransformInitializer(
sitk.VersorRigid3DTransform(), fixed_image_points_flat, moving_image_points_flat
)
)
(
manual_errors_mean,
manual_errors_std,
manual_errors_min,
manual_errors_max,
_,
) = ru.registration_errors(
manual_localized_transformation, fixed_points, moving_points, display_errors=True
)
print(
f"After registration (manual point localization), errors (TRE) in millimeters, mean(std): {manual_errors_mean:.2f}({manual_errors_std:.2f}), max: {manual_errors_max:.2f}"
)
gui.RegistrationPointDataAquisition(
fixed_image,
moving_image,
fixed_window_level=(215, 50),
known_transformation=manual_localized_transformation,
)
updated_moving_image_points = moving_image_points
updated_moving_image_points_flat = [c for p in updated_moving_image_points for c in p]
semi_automatic_transform = sitk.VersorRigid3DTransform(
sitk.LandmarkBasedTransformInitializer(
sitk.VersorRigid3DTransform(),
fixed_image_points_flat,
updated_moving_image_points_flat,
)
)
(
semi_automatic_errors_mean,
semi_automatic_errors_std,
_,
semi_automatic_errors_max,
_,
) = ru.registration_errors(
semi_automatic_transform,
fixed_points,
moving_points,
display_errors=True,
min_err=manual_errors_min,
max_err=manual_errors_max,
)
print(
f"After registration (semiautomatic point localization), errors (TRE) in millimeters, mean(std): {semi_automatic_errors_mean:.2f}({semi_automatic_errors_std:.2f}), max: {semi_automatic_errors_max:.2f}"
)
gui.RegistrationPointDataAquisition(
fixed_image,
moving_image,
fixed_window_level=(215, 50),
known_transformation=semi_automatic_transform,
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: More info about the %watermark extension
Step2: <br>
Step3: <br>
Step4: <br>
Step5: <br>
Step6: <br>
Step7: <br>
Step8: <br>
Step9: <br>
Step10: <br>
Step11: <br>
|
<ASSISTANT_TASK:>
Python Code:
%load_ext watermark
%watermark -u -v -d -p matplotlib,numpy
%matplotlib inline
import matplotlib.pyplot as plt
# input data
mean_values = [1, 2, 3]
variance = [0.2, 0.4, 0.5]
bar_labels = ['bar 1', 'bar 2', 'bar 3']
# plot bars
x_pos = list(range(len(bar_labels)))
plt.bar(x_pos, mean_values, yerr=variance, align='center', alpha=0.5)
plt.grid()
# set height of the y-axis
max_y = max(zip(mean_values, variance)) # returns a tuple, here: (3, 0.5)
plt.ylim([0, (max_y[0] + max_y[1]) * 1.1])
# set axes labels and title
plt.ylabel('variable y')
plt.xticks(x_pos, bar_labels)
plt.title('Bar plot with error bars')
plt.show()
#plt.savefig('./my_plot.png')
from matplotlib import pyplot as plt
import numpy as np
# input data
mean_values = [1, 2, 3]
std_dev = [0.2, 0.4, 0.5]
bar_labels = ['bar 1', 'bar 2', 'bar 3']
fig = plt.figure(figsize=(8,6))
# plot bars
y_pos = np.arange(len(mean_values))
y_pos = [x for x in y_pos]
plt.yticks(y_pos, bar_labels, fontsize=10)
plt.barh(y_pos, mean_values, xerr=std_dev,
align='center', alpha=0.4, color='g')
# annotation and labels
plt.xlabel('measurement x')
t = plt.title('Bar plot with standard deviation')
plt.ylim([-1,len(mean_values)+0.5])
plt.xlim([0, 4])
plt.grid()
plt.show()
from matplotlib import pyplot as plt
import numpy as np
# input data
X1 = np.array([1, 2, 3])
X2 = np.array([2, 2, 3])
bar_labels = ['bar 1', 'bar 2', 'bar 3']
fig = plt.figure(figsize=(8,6))
# plot bars
y_pos = np.arange(len(X1))
y_pos = [x for x in y_pos]
plt.yticks(y_pos, bar_labels, fontsize=10)
plt.barh(y_pos, X1,
align='center', alpha=0.4, color='g')
# we simply negate the values of the numpy array for
# the second bar:
plt.barh(y_pos, -X2,
align='center', alpha=0.4, color='b')
# annotation and labels
plt.xlabel('measurement x')
t = plt.title('Bar plot with standard deviation')
plt.ylim([-1,len(X1)+0.1])
plt.xlim([-max(X2)-1, max(X1)+1])
plt.grid()
plt.show()
import matplotlib.pyplot as plt
# Input data
green_data = [1, 2, 3]
blue_data = [3, 2, 1]
red_data = [2, 3, 3]
labels = ['group 1', 'group 2', 'group 3']
# Setting the positions and width for the bars
pos = list(range(len(green_data)))
width = 0.2
# Plotting the bars
fig, ax = plt.subplots(figsize=(8,6))
plt.bar(pos, green_data, width,
alpha=0.5,
color='g',
label=labels[0])
plt.bar([p + width for p in pos], blue_data, width,
alpha=0.5,
color='b',
label=labels[1])
plt.bar([p + width*2 for p in pos], red_data, width,
alpha=0.5,
color='r',
label=labels[2])
# Setting axis labels and ticks
ax.set_ylabel('y-value')
ax.set_title('Grouped bar plot')
ax.set_xticks([p + 1.5 * width for p in pos])
ax.set_xticklabels(labels)
# Setting the x-axis and y-axis limits
plt.xlim(min(pos)-width, max(pos)+width*4)
plt.ylim([0, max(green_data + blue_data + red_data) * 1.5])
# Adding the legend and showing the plot
plt.legend(['green', 'blue', 'red'], loc='upper left')
plt.grid()
plt.show()
import matplotlib.pyplot as plt
blue_data = [100,120,140]
red_data = [150,120,190]
green_data = [80,70,90]
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(10,5))
bar_width = 0.5
# positions of the left bar-boundaries
bar_l = [i+1 for i in range(len(blue_data))]
# positions of the x-axis ticks (center of the bars as bar labels)
tick_pos = [i+(bar_width/2) for i in bar_l]
###################
## Absolute count
###################
ax1.bar(bar_l, blue_data, width=bar_width,
label='blue data', alpha=0.5, color='b')
ax1.bar(bar_l, red_data, width=bar_width,
bottom=blue_data, label='red data', alpha=0.5, color='r')
ax1.bar(bar_l, green_data, width=bar_width,
bottom=[i+j for i,j in zip(blue_data,red_data)], label='green data', alpha=0.5, color='g')
plt.sca(ax1)
plt.xticks(tick_pos, ['category 1', 'category 2', 'category 3'])
ax1.set_ylabel("Count")
ax1.set_xlabel("")
plt.legend(loc='upper left')
plt.xlim([min(tick_pos)-bar_width, max(tick_pos)+bar_width])
plt.grid()
# rotate axis labels
plt.setp(plt.gca().get_xticklabels(), rotation=45, horizontalalignment='right')
############
## Percent
############
totals = [i+j+k for i,j,k in zip(blue_data, red_data, green_data)]
blue_rel = [i / j * 100 for i,j in zip(blue_data, totals)]
red_rel = [i / j * 100 for i,j in zip(red_data, totals)]
green_rel = [i / j * 100 for i,j in zip(green_data, totals)]
ax2.bar(bar_l, blue_rel,
label='blue data', alpha=0.5, color='b', width=bar_width
)
ax2.bar(bar_l, red_rel,
bottom=blue_rel, label='red data', alpha=0.5, color='r', width=bar_width
)
ax2.bar(bar_l, green_rel,
bottom=[i+j for i,j in zip(blue_rel, red_rel)],
label='green data', alpha=0.5, color='g', width=bar_width
)
plt.sca(ax2)
plt.xticks(tick_pos, ['category 1', 'category 2', 'category 3'])
ax2.set_ylabel("Percentage")
ax2.set_xlabel("")
plt.xlim([min(tick_pos)-bar_width, max(tick_pos)+bar_width])
plt.grid()
# rotate axis labels
plt.setp(plt.gca().get_xticklabels(), rotation=45, horizontalalignment='right')
plt.show()
from matplotlib import pyplot as plt
import numpy as np
data = range(200, 225, 5)
bar_labels = ['a', 'b', 'c', 'd', 'e']
fig = plt.figure(figsize=(10,8))
# plot bars
y_pos = np.arange(len(data))
plt.yticks(y_pos, bar_labels, fontsize=16)
bars = plt.barh(y_pos, data,
align='center', alpha=0.4, color='g')
# annotation and labels
for b,d in zip(bars, data):
plt.text(b.get_width() + b.get_width()*0.08, b.get_y() + b.get_height()/2,
'{0:.2%}'.format(d/min(data)),
ha='center', va='bottom', fontsize=12)
plt.xlabel('X axis label', fontsize=14)
plt.ylabel('Y axis label', fontsize=14)
t = plt.title('Bar plot with plot labels/text', fontsize=18)
plt.ylim([-1,len(data)+0.5])
plt.vlines(min(data), -1, len(data)+0.5, linestyles='dashed')
plt.grid()
plt.show()
import matplotlib.pyplot as plt
# input data
mean_values = [1, 2, 3]
bar_labels = ['bar 1', 'bar 2', 'bar 3']
# plot bars
x_pos = list(range(len(bar_labels)))
rects = plt.bar(x_pos, mean_values, align='center', alpha=0.5)
# label bars
def autolabel(rects):
for ii,rect in enumerate(rects):
height = rect.get_height()
plt.text(rect.get_x()+rect.get_width()/2., 1.02*height, '%s'% (mean_values[ii]),
ha='center', va='bottom')
autolabel(rects)
# set height of the y-axis
max_y = max(zip(mean_values, variance)) # returns a tuple, here: (3, 0.5); variance is reused from the earlier cell
plt.ylim([0, (max_y[0] + max_y[1]) * 1.1])
# set axes labels and title
plt.ylabel('variable y')
plt.xticks(x_pos, bar_labels)
plt.title('Bar plot with labels')
plt.show()
#plt.savefig('./my_plot.png')
%matplotlib inline
import matplotlib.pyplot as plt
idx = range(4)
values = [100, 1000, 5000, 20000]
labels = ['category 1', 'category 2',
'category 3', 'category 4']
fig, ax = plt.subplots(1)
# Automatically align and rotate tick labels:
fig.autofmt_xdate()
bars = plt.bar(idx, values, align='center')
plt.xticks(idx, labels)
plt.tight_layout()
# Add text labels to the top of the bars
def autolabel(bars):
for bar in bars:
height = bar.get_height()
ax.text(bar.get_x() + bar.get_width()/2., 1.05 * height,
'%d' % int(height),
ha='center', va='bottom')
autolabel(bars)
plt.ylim([0, 25000])
plt.show()
import matplotlib.pyplot as plt
import matplotlib.colors as col
import matplotlib.cm as cm
# input data
mean_values = range(10,18)
x_pos = range(len(mean_values))
# create colormap
cmap1 = cm.ScalarMappable(col.Normalize(min(mean_values), max(mean_values)), cm.hot)
cmap2 = cm.ScalarMappable(col.Normalize(0, 20), cm.hot)
# plot bars
plt.subplot(121)
plt.bar(x_pos, mean_values, align='center', alpha=0.5, color=cmap1.to_rgba(mean_values))
plt.ylim(0, max(mean_values) * 1.1)
plt.subplot(122)
plt.bar(x_pos, mean_values, align='center', alpha=0.5, color=cmap2.to_rgba(mean_values))
plt.ylim(0, max(mean_values) * 1.1)
plt.show()
import matplotlib.pyplot as plt
patterns = ('-', '+', 'x', '\\', '*', 'o', 'O', '.')
fig = plt.gca()
# input data
mean_values = range(1, len(patterns)+1)
# plot bars
x_pos = list(range(len(mean_values)))
bars = plt.bar(x_pos,
mean_values,
align='center',
color='white',
)
# set patterns
for bar, pattern in zip(bars, patterns):
bar.set_hatch(pattern)
# set axes labels and formatting
fig.axes.get_yaxis().set_visible(False)
plt.ylim([0, max(mean_values) * 1.1])
plt.xticks(x_pos, patterns)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Chapter 01
Step2: Variance
Step3: Standard Deviation
Step4: Effect size - Cohen's d
Step5: It is calculated with delta degree of freedom = 1!
Step6: Chapter 03
Step7: DataFrame Indexing
Step8: Chapter 04
Step9: Percentile
Step10: Cumulative Distribution Function
Step11: It can be plotted using .hist()
Step12: Interquartile Range
Step13: Normal (Gaussian) Distribution
Step14: Lognormal Distribution
Step15: Pareto Distribution
Step16: Weibull Distribution
Step17: Different scale below
Step18: Chapter 06
Step19: Pearson's Correlation
Step20: Chapter 08
Step21: Chapter 09
Step22: Exponentially-weighted moving average (EWMA)
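A quick reminder of what span means in the pandas ewm call used below (standard pandas behaviour, sketched here only for reference; season is the series built in the code): the smoothing factor is alpha = 2 / (span + 1), so span=2 corresponds to alpha = 2/3.
season.ewm(span=2, adjust=False).mean()  # plain recursion y_t = alpha*x_t + (1 - alpha)*y_{t-1}
season.ewm(span=2).mean()                # default adjust=True normalizes the (1 - alpha)**i weights instead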
|
<ASSISTANT_TASK:>
Python Code:
from typing import List
import numpy as np
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt
import sklearn
% matplotlib inline
sample = [1, 3, 5, 6]
np.mean(sample)
pd.DataFrame(sample).mean()
np.var(sample)
# Warning! Pandas variance by default is normalized by N-1!
# That can be changed by using ddof(delta degrees of freedom) = 0
pd.DataFrame(sample).var(ddof = 0)
np.std(sample)
# Warning! Pandas std is calculated with variance by N-1!
# That can be changed by using ddof(delta degrees of freedom) = 0
pd.DataFrame(sample).std(ddof = 0)
def effect_size(g1: pd.DataFrame, g2: pd.DataFrame) -> float:
diff = g1.mean() - g2.mean()
var_g1, var_g2 = g1.var(ddof=1), g2.var(ddof=1)
n1, n2 = len(g1), len(g2)
pooled_var = (var_g1 * (n1 - 1) + var_g2 * (n2 - 1)) / ((n1 - 1) + (n2 - 1))
cohen_d = diff / np.sqrt(pooled_var)
return cohen_d
effect_size(pd.DataFrame([1, 2, 3, 4]), pd.DataFrame([3, 3, 1, 2]))
s = pd.Series([1, 2, 3, 4, 2])
def pmf(series: pd.Series) -> pd.Series:
return series.value_counts().sort_index() / series.count()
pmf(s)
array = np.random.randn(4, 2)
array
df = pd.DataFrame(array)
df
columns = ['A', 'B']
df = pd.DataFrame(data=array,
columns=columns)
df
index = ['a', 'b', 'c', 'd']
df = pd.DataFrame(data=array,
columns=columns,
index=index)
df
df['A']
df.loc['a']
df.iloc[0]
indices = ['a', 'c']
df.loc[indices]
df['a':'c']
df[0:2]
df[:2]
df['A'].loc['a']
# Data
a = [1, 2, 3, 3, 4, 5]
stats.percentileofscore(a=a, score=2.5)
stats.percentileofscore(a=a, score=2)
stats.percentileofscore(a=a, score=3)
np.percentile(a=a, q=50)
np.percentile(a=a, q=70)
# Pandas uses quantiles with different interpolation methods.
pd.DataFrame(data=a).quantile(q=0.5)
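# The interquartile range mentioned in the notes is not computed explicitly above; a minimal
# sketch using the same sample `a` (scipy.stats.iqr returns the same value):
iqr = np.percentile(a, 75) - np.percentile(a, 25)
stats.iqr(a)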
series = pd.Series(np.random.randn(25))
series.head()
cdf = series.value_counts().sort_index().cumsum()
cdf.plot()
series.hist(cumulative=True)
r1 = np.random.exponential(scale=1.0, size=100000)
r2 = np.random.exponential(scale=0.5, size=100000)
plt.hist(r1, bins = 200)
plt.xlim((0, 10))
plt.show()
plt.hist(r2, bins = 200)
plt.xlim((0, 10))
plt.show()
g1 = np.random.normal(loc=0.0, scale=1.0, size=100000)
g2 = np.random.normal(loc=0.0, scale=3.0, size=100000)
plt.hist(g1, bins = 200)
plt.xlim((-10, 10))
plt.show()
plt.hist(g2, bins = 200)
plt.xlim((-10, 10))
plt.show()
l1 = np.random.lognormal(mean=0.0, sigma=0.5, size=100000)
l2= np.random.lognormal(mean=0.0, sigma=1, size=100000)
plt.hist(l1, bins = 1000)
plt.xlim((0, 10))
plt.ylim((0, 4500))
plt.show()
plt.hist(l2, bins = 1000)
plt.xlim((0, 10))
plt.ylim((0, 4500))
plt.show()
p1 = (np.random.pareto(1, size=1000) + 1) * 1
p2 = (np.random.pareto(2, size=1000) + 1) * 1
plt.hist(p1, bins=100)
plt.ylim((0, 1000))
plt.show()
plt.hist(p2, bins = 100)
plt.ylim((0, 1000))
plt.show()
w1 = np.random.weibull(a=0.8, size=1000000)
w2 = np.random.weibull(a=1, size=1000000)
w3 = np.random.weibull(a=5, size=1000000)
w4 = np.random.weibull(a=10, size=1000000)
plt.hist(w1, bins = 200)
plt.xlim((-1, 15))
plt.ylim((0, 200000))
plt.show()
plt.hist(w2, bins = 200)
plt.xlim((-1, 15))
plt.ylim((0, 200000))
plt.show()
plt.hist(w3, bins = 200)
plt.xlim((-1, 5))
plt.ylim((0, 25000))
plt.show()
plt.hist(w4, bins = 200)
plt.xlim((-1, 5))
plt.ylim((0, 25000))
plt.show()
Z = np.array([[0, 2], [1, 1], [2, 0]]).T
Z
np.cov(Z, ddof=1)
stats.spearmanr([1, 2, 3, 4, 5], [5, 6, 7, 8, 7])
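# The call above is Spearman's rank correlation; the Pearson correlation mentioned in the
# notes can be computed analogously (returns the coefficient and a two-sided p-value):
stats.pearsonr([1, 2, 3, 4, 5], [5, 6, 7, 8, 7])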
from sklearn.metrics import mean_squared_error
y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]
mean_squared_error(y_true, y_pred)
trend = pd.Series([1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1])
noise = pd.Series(np.random.random(11))
season = trend + noise
season.name = 'season'
two_day_window = season.rolling(window=2).mean()
two_day_window.name = 'rolling mean'
two_day_window
plt.figure(figsize=(7, 7))
plt.plot(season)
plt.plot(two_day_window)
plt.legend()
ewma2 = season.ewm(span=2).mean()
ewma2.name = 'ewma2'
ewma3 = season.ewm(span=3).mean()
ewma3.name = 'ewma3'
ewma5 = season.ewm(span=5).mean()
ewma5.name = 'ewma5'
plt.figure(figsize=(7, 7))
plt.plot(season)
plt.plot(ewma2)
plt.plot(ewma3)
plt.plot(ewma5)
plt.legend()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implicit node creation on edge add
Step2: Just a touch of computational theory
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import networkx as nx
# A SIMPLE EXAMPLE
G=nx.Graph()
G.add_node("a")
G.add_node("b")
G.add_node("c")
G.add_node("d")
G.add_node("e")
G.add_node("f")
G.add_edge('a', 'c')
G.add_edge('b', 'c')
G.add_edge('e', 'd')
G.add_edge('c', 'e')
G.add_edge('e', 'f')
G.add_edge('c', 'f')
pos=nx.spring_layout(G)
nx.draw(G, pos=pos)
nx.draw_networkx_labels(G, pos=pos)
plt.show()
# SOME FAKE DATA
locations =['Large Warehouse', 'Small Warehouse'
, 'Retail 1', 'Retail 2', 'Retail 3', 'Retail 4'
, 'Supplier 1', 'Supplier 2', 'Supplier 3']
adjacency = [
[ 0, 1500, 100, 275, 1600, 1750, 500, 130, 1550] #Large Warehouse'
, [ -1, 0, 1475, 1600, 400, 50, 500, 1800, 100] #Small Warehouse'
, [ -1, -1, 0, 300, 1750, 1600, 9999, 9999, 9999] #Retail 1
, [ -1, -1, -1, 0, 1840, 1900, 9999, 9999, 9999] #Retail 2
, [ -1, -1, -1, -1, 0, 650, 9999, 9999, 9999] #Retail 3
, [ -1, -1, -1, -1, -1, 0, 9999, 9999, 9999] #Retail 4
, [ -1, -1, -1, -1, -1, -1, 0, 400, 700] #Supplier 1
, [ -1, -1, -1, -1, -1, -1, -1, 0, 1900] #Supplier 2
, [ -1, -1, -1, -1, -1, -1, -1, -1, 1775] #Supplier 3
]
# CONVERT THAT FAKE DATA INTO A GRAPH
g = nx.Graph()
for loc in locations:
g.add_node(loc)
for i in range(len(locations)):
r = locations[i]
row = adjacency[i]
for j in range (i+1, len(locations)):
c = locations[j]
val = row[j]
if val > 0 and val < 9999:
g.add_edge(r, c, miles=val)
# VISUALIZE OUR DATASET
pos={'Large Warehouse': [ 7, 2],
'Small Warehouse': [ 2, 1.75],
'Retail 1': [ 6.5, 3],
'Retail 2': [ 7.5, .6],
'Retail 3': [ 3, .6],
'Retail 4': [ 1.5, 0.75],
'Supplier 1': [ 5, 3.5],
'Supplier 2': [ 9, 3],
'Supplier 3': [ 1, 2.5 ]}
nx.draw(g, pos=pos, node_size=4000)
nx.draw_networkx_labels(g, pos=pos)
plt.show()
# WHAT IS THE SHORTEST ROUTE TO TRANSPORT FROM SUPPLIER 1 TO RETAIL 3?
nx.dijkstra_path(g, source='Supplier 1', target='Retail 3', weight='miles')
print(g.nodes())
g.add_edge('Supplier 1', 'Retail 5')
print(g.nodes())
ap = nx.floyd_warshall(g, weight='miles')
print(ap['Supplier 3'])
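# A touch of computational theory: dijkstra_path answers one source-target query in roughly
# O((V + E) log V), while floyd_warshall precomputes all pairs in O(V^3) and returns a dict
# of dicts, so any further distance is just a lookup, e.g.:
ap['Supplier 3']['Retail 1']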
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 0 - hyperparams
Step2: Once generate data
Step3: Step 1 - collect data
Step4: Step 2 - Build model
Step5: Step 3 training the network
Step6: Recall that without batch normalization, within 10 epochs with num_units 400 and batch_size 64 we reached 4.940
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division
import tensorflow as tf
from os import path, remove
import numpy as np
import pandas as pd
import csv
from sklearn.model_selection import StratifiedShuffleSplit
from time import time
from matplotlib import pyplot as plt
import seaborn as sns
from mylibs.jupyter_notebook_helper import show_graph, renderStatsList, renderStatsCollection, \
renderStatsListWithLabels, renderStatsCollectionOfCrossValids
from tensorflow.contrib import rnn
from tensorflow.contrib import learn
import shutil
from tensorflow.contrib.learn.python.learn import learn_runner
from mylibs.tf_helper import getDefaultGPUconfig
from sklearn.metrics import r2_score
from mylibs.py_helper import factors
from fastdtw import fastdtw
from collections import OrderedDict
from scipy.spatial.distance import euclidean
from statsmodels.tsa.stattools import coint
from common import get_or_run_nn
from data_providers.price_history_seq2seq_data_provider import PriceHistorySeq2SeqDataProvider
from data_providers.price_history_dataset_generator import PriceHistoryDatasetGenerator
from skopt.space.space import Integer, Real
from skopt import gp_minimize
from skopt.plots import plot_convergence
import pickle
import inspect
import dill
import sys
from models.price_history_19_seq2seq_raw_dropout import PriceHistorySeq2SeqRawDropout
dtype = tf.float32
seed = 16011984
random_state = np.random.RandomState(seed=seed)
config = getDefaultGPUconfig()
n_jobs = 1
%matplotlib inline
epochs = 10
num_features = 1
num_units = 400 #state size
input_len = 60
target_len = 30
batch_size = 50 #47
#trunc_backprop_len = ??
with_EOS = False
total_train_size = 57994
train_size = 6400
test_size = 1282
data_path = '../data/price_history'
#npz_full_train = data_path + '/price_history_03_dp_60to30_train.npz'
#npz_full_train = data_path + '/price_history_60to30_targets_normed_train.npz'
#npz_train = data_path + '/price_history_03_dp_60to30_57980_train.npz'
#npz_train = data_path + '/price_history_03_dp_60to30_6400_train.npz'
npz_train = data_path + '/price_history_60to30_6400_targets_normed_train.npz'
#npz_test = data_path + '/price_history_03_dp_60to30_test.npz'
npz_test = data_path + '/price_history_60to30_targets_normed_test.npz'
dp = PriceHistorySeq2SeqDataProvider(npz_path=npz_train, batch_size=batch_size, with_EOS=with_EOS)
dp.inputs.shape, dp.targets.shape
aa, bb = dp.next()
aa.shape, bb.shape
model = PriceHistorySeq2SeqRawDropout(rng=random_state, dtype=dtype, config=config, with_EOS=with_EOS)
graph = model.getGraph(batch_size=batch_size,
num_units=num_units,
input_len=input_len,
target_len=target_len)
#show_graph(graph)
#rnn_cell = PriceHistorySeq2SeqCV.RNN_CELLS.GRU
#cross_val_n_splits = 5
epochs, num_units, batch_size
#set(factors(train_size)).intersection(factors(train_size/5))
best_learning_rate = 1e-3 #0.0026945952539362472
keep_prob_input = 0.7
def experiment():
return model.run(npz_path=npz_train,
epochs=epochs,
batch_size = batch_size,
num_units = num_units,
input_len=input_len,
target_len=target_len,
learning_rate = best_learning_rate,
preds_gather_enabled=True,
keep_prob_input = keep_prob_input,
)
%%time
dyn_stats, preds_dict = get_or_run_nn(
experiment,
filename='019_seq2seq_60to30_epochs{}_learning_rate_{:.4f}_prob_input{}'.format(
epochs, best_learning_rate, keep_prob_input
))
dyn_stats.plotStats()
plt.show()
r2_scores = [r2_score(y_true=dp.targets[ind], y_pred=preds_dict[ind])
for ind in range(len(dp.targets))]
ind = np.argmin(r2_scores)
ind
reals = dp.targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(dp.targets[ind], preds_dict[ind])[0]
for ind in range(len(dp.targets))]
np.mean(dtw_scores)
coint(preds, reals)
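# coint is statsmodels' Engle-Granger cointegration test: it returns the t-statistic, the
# p-value and the critical values; a small p-value suggests the predicted and real series
# share a common trend rather than drifting apart over the forecast horizon.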
cur_ind = np.random.randint(len(dp.targets))
reals = dp.targets[cur_ind]
preds = preds_dict[cur_ind]
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The following module creates a pie chart that illustrates the breakdown of which isotope is measured to have the highest count rate during the given interval of our data.
Step2: The following commands will create a table with 4 rows, each containing an isotope (Bi-214, K-40, Cs-134, and Cs-137). There will be 5 columns with the following information: isotope, mean CPS, median CPS, max CPS, and time of occurrence.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import csv
import io
import urllib.request
import matplotlib.pyplot as plt
from datetime import datetime
import numpy as np
url = 'https://radwatch.berkeley.edu/sites/default/files/pictures/rooftop_tmp/weather.csv'
response = urllib.request.urlopen(url)
reader = csv.reader(io.TextIOWrapper(response))
timedata = []
Bi214 = []
K40 = []
Cs134 = []
Cs137 = []
line = 0
for row in reader:
if line != 0:
timedata.append(datetime.strptime(row[0], '%Y-%m-%d %H:%M:%S'))
Bi214.append(float(row[1]))
K40.append(float(row[2]))
Cs134.append(float(row[3]))
Cs137.append(float(row[4]))
line += 1
def pie_chart(bi214, k40, cs134, cs137, labels):
tally = [0,0,0,0]
for i in range(0,len(bi214)-1):
comparing_list = [bi214[i],k40[i], cs134[i], cs137[i]]
if max(comparing_list) == bi214[i]:
tally[0] += 1
elif max(comparing_list) == k40[i]:
tally[1] += 1
elif max(comparing_list) == cs134[i]:
tally[2] += 1
else:
tally[3] += 1
# plt.pie places the items counter-clockwise
total_counts = sum(tally)
fracs = [tally[0]/total_counts, tally[1]/total_counts,
tally[2]/total_counts, tally[3]/total_counts]
explode = (0,0,0,0) # explode refers to "pulling out" certain items for exaggerated effect
plt.pie(fracs, explode=explode, labels=labels, autopct='%1.1f%%')
# 'autopct='1.1f%%' displays percent value for each slice
plt.axis('equal')
# set the pie chart against a square axis, otherwise it won't be circular.
plt.title('Air Monitor Data:\nBreakdown of Highest Measured Count Rate in Isotopes\nof Interest from %s to %s' %(timedata[0],timedata[-1]))
# this title string demonstrates a line break (signified by \n after
# isotopes) and string interpolation (represented by %s).
plt.show()
# a more interesting example pie chart would breakdown total activity in each isotope:
activity = [sum(bi214), sum(k40), sum(cs134), sum(cs137)]
total_activity = sum(activity)
fracs = [activity[0]/total_activity, activity[1]/total_activity,
activity[2]/total_activity, activity[3]/total_activity]
explode = (0,0,0,0)
plt.pie(fracs, explode=explode, labels=labels, autopct='%1.1f%%')
plt.axis('equal')
plt.title('Air Monitor Data:\nBreakdown of Total Activity by Isotope of Interest\nfrom %s to %s' %(timedata[0],timedata[-1]))
plt.show()
labels = ['Bi214','K40','Cs134','Cs137']
pie_chart(Bi214, K40, Cs134, Cs137, labels)
def table(bi214, k40, cs134, cs137):
RowLabel = ("Bi-214", "K-40", "Cs-134", "Cs-137")
ColLabel = ("isotope", "mean CPS", "median CPS",
"max CPS", "time of occurrence")
# The statistical meaning of mean and a computational method to obtain
# it are explored in a different module.
# For this module, we will use the numpy function 'np.mean'
mean_data = (np.mean(bi214), np.mean(k40), np.mean(cs134), np.mean(cs137))
mean_data = np.around(mean_data, decimals=4)
# np.around rounds elements in a list to the chosen decimal point.
# For median, we will use the numpy function 'np.median'
median_data = np.around((np.median(bi214), np.median(k40), np.median(cs134), np.median(cs137)), decimals=4)
# Numpy also has a function to scan a list for the max value contained in that list!
max_data = np.around((np.max(bi214), np.max(k40), np.max(cs134), np.max(cs137)), decimals=4)
# Each max CPS has a corresponding datetime component with the same index.
# I will use LIST.index(max(LIST)) to find these corresponding indices.
# Note: this method's weakness is that it only identifies the first
# occurrence of a maximum; if there the max occurs multiple times it will
# not acknowledge them. Can you think/find a way to do this in a better way?
time_data = (timedata[bi214.index(np.max(bi214))], timedata[k40.index(np.max(k40))],
timedata[cs134.index(np.max(cs134))], timedata[cs137.index(np.max(cs137))])
# if you have trouble understanding the previous commands, make
# sure you understand each line piece their purpose together!
data_array = np.vstack((RowLabel,mean_data,median_data, max_data,time_data)).T
# vstack places the lists atop eachother and '.T' transposes the
# information so it'll appear as it should on the table.
fig, ax = plt.subplots()
ax.axis('off')
# By default, matplotlib will always plot against axes.
# Therefore, if we want a table only, we turn the axis 'off'
ax.table(cellText=data_array, colLabels=ColLabel, loc='center')
fig.set_size_inches(18, 10)
plt.show()
table(Bi214, K40, Cs134, Cs137)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Acquire Data
Step2: Analyze Data
Step3: Find information about the features
Step4: There are 7703 examples and 1743 features.
Step5: There are 1975 entries, but the column NAME OF DATA ELEMENT has only 1734 non-null elements, so something is up. Let's try to explore the dict a little bit more
Step6: Nothing suspicious here, let's try again
Step7: Aha! It seems that the feature at index 15 is categorical, and that's why the rows that follow it don't have a value under NAME OF DATA ELEMENT. Just for now, let's get rid of those NAN rows.
Step8: Lets get the info of the new dict
Step9: We are interested primarily in the NAME OF DATA ELEMENT, VARIABLE NAME and API data type. They seem complete. Let's see how many data types there are
Step10: Let's find out how many features have each data type
Step11: So in reality, there are 1206 float features, 521 integers, and 7 string features. (For now we assume that the autocomplete type is string). These numbers differ a lot from our previous analysis, in which we had 443 float features, 13 integer features and 1287 features that are strings.
Step12: We can see that these autocomplete features can be treated as strings.
Step13: Delete features that are meaningless
Step14: So there are some features in the data that are not explained in the dictionary. That is not necessarily an inconvenience, so we won't worry about this right now.
Step15: So those NPT4 features are about average net prices, so they are definitely numeric features, and it makes sense to keep them.
Step16: Let's see what are these features about
Step17: So UNITID, OPEID, OPEID6, INSTNM, INSTURL, NPCURL and ALIAS are features that have to do with the identity of the institution, so they don't add relevant information to the problem, therefore they will be eliminated.
Step18: Work on the string and autocomplete data
Step19: We already dropped INSTURL and NPCURL. Let's explore the STABBR feature
Step20: So this feature has to do with the state where the school is located. Let's explore the ACCREDAGENCY feature
Step21: Now let's explore the autocomplete data type
Step22: INSTNM and ALIAS were dropped, let's see the CITY feature
Step23: So STABBR, ACCREDAGENCY and CITY are features that we are going to keep, but they need to be transformed to an ordinal (using numbers) representation, since the ML algorithms use numbers and not strings.
Step24: Let's see how our data looks so far
Step25: Although we mapped or eliminated the string features, we still have a lot of object (not numeric) data types. Let's work on them
Step26: We can see that there is a lot of data suppressed for privacy reasons. Also, there are dates, and one of them, 12/31/2999, seems to be invalid. Let's go ahead and replace these values with NaN, so we will treat them as any other NaN value. Also, if any column ends up having all of its values as NaN, we will delete this column.
Step27: Let's find which features are date features
Step28: It seems that SEPAR_DT_MDN doesn't add valuable information to the problem, so we are going to drop it
Step29: Now we will transform all the object features to numeric
Step30: Now we have gotten rid of the object dtype
Step31: Filling missing data
Step32: We can see that after the name of a categorical feature, there is at least one item with value NaN. Let's use this to get a list of categorical features
Step33: To fill the missing data that belongs to a categorical feature, we will use the most common value of the data (mode). To fill the missing data that belongs to a numeric feature, we will use the average of the data (mean).
Step34: Let's save the data in a file
|
<ASSISTANT_TASK:>
Python Code:
# data analysis and manipulation
import numpy as np
import pandas as pd
np.set_printoptions(threshold=1000)
# visualization
import seaborn as sns
import matplotlib.pyplot as plt
#machine learning
import tensorflow as tf
#Regular expression
import re
all_data = pd.read_csv('datasets/CollegeScorecardData.csv')
all_data.head()
all_data.info()
data_dict = pd.read_csv('datasets/CollegeScorecardDataDictionary.csv')
data_dict.head()
data_dict.tail()
data_dict.info()
data_dict[5:10]
data_dict[10:20]
data_dict_no_nan_names = data_dict.dropna(subset=['NAME OF DATA ELEMENT'])
data_dict_no_nan_names[10:20]
data_dict_no_nan_names.info()
data_dict_no_nan_names['API data type'].unique()
data_dict_no_nan_names['API data type'].value_counts()
data_dict_no_nan_names[data_dict_no_nan_names['API data type'] == 'autocomplete']
all_data_no_na_columns = all_data.dropna(axis=1, how='all')
# Create a list to save the features that are above a certain threshold
features_with_high_ratio = []
# Create a list to save the features in all_data but not in the dict
features_not_in_dict = []
#Calculate the ratio
for feature in all_data_no_na_columns.columns.values:
# Get the row in the dict wich have VARIABLE NAME == feature
row_in_dict = data_dict_no_nan_names[data_dict_no_nan_names['VARIABLE NAME'] == feature]
# Get the data type of the row
data_type_series = row_in_dict['API data type']
#Check if exists in the dict
if data_type_series.size > 0:
# Get the data type
data_type = data_type_series.values[0]
# float features (numeric features) are not taken in account
if data_type == 'integer' or data_type == 'string' or data_type == 'autocomplete':
column = all_data_no_na_columns[feature]
column_no_na = column.dropna()
r = column_no_na.unique().size / column_no_na.size
if r > 0.8:
features_with_high_ratio.append(feature)
print(str(feature) + ": " + str(r))
#The feature is not in the dict
else:
features_not_in_dict.append(feature)
print ("\nFeatures in data but not in the dictionary:" + str(features_not_in_dict))
npt4_pub = data_dict_no_nan_names['VARIABLE NAME'] == 'NPT4_PUB'
npt41_pub = data_dict_no_nan_names['VARIABLE NAME'] == 'NPT41_PUB'
npt42_pub = data_dict_no_nan_names['VARIABLE NAME'] == 'NPT42_PUB'
data_dict_no_nan_names[npt4_pub | npt41_pub | npt42_pub ]
# Create a list to save the features that are above a certain threshold
features_with_high_ratio = []
# Create a list to save the features in all_data but not in the dict
features_not_in_dict = []
#Calculate the ratio
for feature in all_data_no_na_columns.columns.values:
# Get the row in the dict wich have VARIABLE NAME == feature
row_in_dict = data_dict_no_nan_names[data_dict_no_nan_names['VARIABLE NAME'] == feature]
# Get the data type of the row
data_type_series = row_in_dict['API data type']
#Check if exists in the dict
if data_type_series.size > 0:
# Get the data type
data_type = data_type_series.values[0]
# float features (numeric features) are not taken in account
if (data_type == 'integer' or data_type == 'string' or data_type == 'autocomplete') \
and feature[:4] != 'NPT4':
column = all_data_no_na_columns[feature]
column_no_na = column.dropna()
r = column_no_na.unique().size / column_no_na.size
if r > 0.5:
features_with_high_ratio.append(feature)
print(str(feature) + ": " + str(r))
print(features_with_high_ratio)
high_ratio_features = pd.DataFrame()
for feature in features_with_high_ratio:
high_ratio_features = high_ratio_features.append(data_dict_no_nan_names[data_dict_no_nan_names['VARIABLE NAME'] == feature])
high_ratio_features
all_data_no_id_cols = all_data_no_na_columns.drop(['UNITID', 'OPEID', 'OPEID6', 'INSTNM', 'INSTURL', 'NPCURL', 'ALIAS', 'ZIP'], axis = 1)
all_data_no_id_cols.head()
data_dict_no_nan_names[data_dict_no_nan_names['API data type'] == 'string']
all_data_no_id_cols['STABBR']
all_data_no_id_cols['ACCREDAGENCY']
all_data_no_id_cols['ACCREDAGENCY'].value_counts()
data_dict_no_nan_names[data_dict_no_nan_names['API data type'] == 'autocomplete']
all_data_no_id_cols['CITY']
all_data_no_strings = all_data_no_id_cols.copy()
#STABBR mapping
values = all_data_no_strings['STABBR'].unique()
mapping = {}
numeric_value = 1
for value in values:
mapping[value] = numeric_value
numeric_value += 1
all_data_no_strings['STABBR'] = all_data_no_strings['STABBR'].map(mapping)
#ACCREDAGENCY mapping
values = all_data_no_id_cols['ACCREDAGENCY'].unique()
mapping = {}
numeric_value = 1
for value in values:
mapping[value] = numeric_value
numeric_value += 1
all_data_no_strings['ACCREDAGENCY'] = all_data_no_strings['ACCREDAGENCY'].map(mapping)
#CITY mapping
values = all_data_no_id_cols['CITY'].unique()
mapping = {}
numeric_value = 1
for value in values:
mapping[value] = numeric_value
numeric_value += 1
all_data_no_strings['CITY'] = all_data_no_strings['CITY'].map(mapping)
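# The three loops above build the integer mappings by hand; pandas can do the same in one
# call (rough sketch -- note that pd.factorize encodes missing values as -1, whereas the
# loops above give NaN its own positive code):
# codes, uniques = pd.factorize(all_data_no_id_cols['STABBR'])
# all_data_no_strings['STABBR'] = codes + 1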
all_data_no_strings.head()
all_data_no_strings.info()
regex = re.compile('[0-9]+(\.[0-9]+)?$')
words = []
for column in all_data_no_strings:
if all_data_no_strings[column].dtypes == 'object':
for data in all_data_no_strings[column]:
if not regex.match(str(data)):
words.append(data)
pd.Series(words).value_counts()
all_data_replaced_with_nan = all_data_no_strings.replace(to_replace = 'PrivacySuppressed', value = np.nan)
all_data_replaced_with_nan = all_data_replaced_with_nan.replace(to_replace = '12/31/2999', value = np.nan)
all_data_replaced_with_nan = all_data_replaced_with_nan.dropna(axis=1, how='all')
all_data_replaced_with_nan.info()
features_with_date = []
for column in all_data_replaced_with_nan:
if all_data_replaced_with_nan[column].dtypes == 'object':
if all_data_replaced_with_nan[column].str.match('[0-9]{2}/[0-9]{2}/[0-9]{4}').any():
features_with_date.append(column)
features_with_date
data_dict_no_nan_names[data_dict_no_nan_names['VARIABLE NAME'] == 'SEPAR_DT_MDN']
all_data_no_dates = all_data_replaced_with_nan.drop(['SEPAR_DT_MDN'], axis = 1)
all_data_no_objects = all_data_no_dates.copy()
for feature in all_data_no_dates:
if all_data_no_dates[feature].dtypes == 'object':
#Make all data numeric
all_data_no_objects[feature] = pd.to_numeric(all_data_no_dates[feature])
all_data_no_objects.info()
high_nan_features = []
for feature in all_data_no_objects:
size = all_data_no_objects[feature].size
number_of_valid = all_data_no_objects[feature].count()
number_of_nan = size - number_of_valid
ratio = number_of_nan / size
if ratio > 0.9:
high_nan_features.append(feature)
print (len(high_nan_features))
all_data_no_high_nan = all_data_no_objects.drop(high_nan_features, axis = 1)
all_data_no_high_nan.info()
data_dict[15:25]
categorical_features = []
is_null = data_dict['NAME OF DATA ELEMENT'].isnull()
for i in range(len(is_null) - 1):
if not is_null[i] and is_null[i+1]:
categorical_features.append(data_dict['VARIABLE NAME'][i])
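# Heuristic recap: in the dictionary a categorical variable is followed by rows that list its
# value labels with a blank NAME OF DATA ELEMENT, so "non-null name followed by a null name"
# flags the variable on the non-null row as categorical.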
all_data_no_nan = all_data_no_high_nan.copy()
for feature in all_data_no_high_nan:
if feature in categorical_features:
mode = all_data_no_high_nan[feature].mode()[0]
all_data_no_nan[feature] = all_data_no_high_nan[feature].fillna(mode)
else:
mean = all_data_no_high_nan[feature].mean()
all_data_no_nan[feature] = all_data_no_high_nan[feature].fillna(mean)
all_data_no_nan.head()
all_data_no_nan.info()
all_data_no_nan.to_csv('datasets/CollegeScorecardDataCleaned.csv', index = False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We wrap all the preprocessing steps, which you can read more about in the author-topic notebook, in one function so that we are able to iterate over different preprocessing parameters.
Step2: We create the corpus of the train and test data using two separate functions, since each corpus is tied to a certain dictionary which maps the words to their ids. Also, in order to create the test corpus, we use the dictionary from the train data, since the trained model has to have the same id2word reference as the new test data. Otherwise a token with id 1 from the test data won't mean the same as the token with id 1 that the model was trained on.
Step3: For our first training, we specify that we want the parameters max_freq and min_wordcount to be 50% and 20, as proposed by the original notebook tutorial. We will find out if this configuration is good enough for us.
Step4: We wrap the model training also in a function, in order to, again, be able to iterate over different parametrizations.
Step5: We define a custom function which measures the prediction accuracy, following the precision at k principle. We parametrize the accuracy by a parameter k: k=1 means we need an exact match in order to be accurate, k=5 means our prediction has to be in the top 5 results, ordered by similarity.
Step6: We calculate the accuracy for a range of values for k=[1,2,3,4,5,6,8,10] and plot how exactly the prediction accuracy naturally rises with higher k.
Step7: We run our first training and observe that the passes and iterations parameters are set high enough, so that the model converges.
Step8: This is a rather poor accuracy performance. We increase the number of topics to 100.
Step9: The 100-topic model is much more accurate than the 20-topic model. We continue to increase the number of topics until convergence.
Step10: The 150-topic model is also slightly better, especially at the lower end of k. But we clearly see convergence. We try with 200 topics to be sure.
Step11: The 200-topic model seems to perform a bit better for lower k, which might be due to a slight overrepresentation with a high topic number. So let us stop here with the topic number increase and focus some more on the dictionary. We choose either one of the models.
Step12: We have now nearly doubled the number of tokens. Let's train and evaluate.
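For context on how the predictions are scored: every author (and every new document) is represented as a distribution over topics, and predict_author ranks authors by a Hellinger-distance based similarity; precision at k then simply asks whether the true author lands in the top k of that ranking. A one-line sketch of the score used inside predict_author below (vec_author and vec_new_doc stand for the two topic distributions):
sim = 1.0 / (1.0 + matutils.hellinger(vec_author, vec_new_doc))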
|
<ASSISTANT_TASK:>
Python Code:
!wget -O - "https://archive.ics.uci.edu/ml/machine-learning-databases/00217/C50.zip" > /tmp/C50.zip
import logging
logging.basicConfig(format='%(asctime)s %(levelname)s:%(message)s', level=logging.DEBUG, datefmt='%I:%M:%S')
import zipfile
filename = '/tmp/C50.zip'
zip_ref = zipfile.ZipFile(filename, 'r')
zip_ref.extractall("/tmp/")
zip_ref.close()
import os, re, io
def preprocess_docs(data_dir):
doc_ids = []
author2doc = {}
docs = []
folders = os.listdir(data_dir) # List of filenames.
for authorname in folders:
files = os.listdir(data_dir + '/' + authorname)
for filen in files:
(idx1, idx2) = re.search('[0-9]+', filen).span() # Matches the indexes of the start end end of the ID.
if not author2doc.get(authorname):
# This is a new author.
author2doc[authorname] = []
doc_id = str(int(filen[idx1:idx2]))
doc_ids.append(doc_id)
author2doc[authorname].extend([doc_id])
# Read document text.
# Note: ignoring characters that cause encoding errors.
with io.open(data_dir + '/' + authorname + '/' + filen, errors='ignore', encoding='utf-8') as fid:
txt = fid.read()
# Replace any whitespace (newline, tabs, etc.) by a single space.
txt = re.sub('\s', ' ', txt)
docs.append(txt)
doc_id_dict = dict(zip(doc_ids, range(len(doc_ids))))
# Replace dataset IDs by integer IDs.
for a, a_doc_ids in author2doc.items():
for i, doc_id in enumerate(a_doc_ids):
author2doc[a][i] = doc_id_dict[doc_id]
import spacy
nlp = spacy.load('en')
%%time
processed_docs = []
for doc in nlp.pipe(docs, n_threads=4, batch_size=100):
# Process document using Spacy NLP pipeline.
ents = doc.ents # Named entities.
# Keep only words (no numbers, no punctuation).
# Lemmatize tokens, remove punctuation and remove stopwords.
doc = [token.lemma_ for token in doc if token.is_alpha and not token.is_stop]
# Remove common words from a stopword list.
#doc = [token for token in doc if token not in STOPWORDS]
# Add named entities, but only if they are a compound of more than word.
doc.extend([str(entity) for entity in ents if len(entity) > 1])
processed_docs.append(doc)
docs = processed_docs
del processed_docs
# Compute bigrams.
from gensim.models import Phrases
# Add bigrams and trigrams to docs (only ones that appear 20 times or more).
bigram = Phrases(docs, min_count=20)
for idx in range(len(docs)):
for token in bigram[docs[idx]]:
if '_' in token:
# Token is a bigram, add to document.
docs[idx].append(token)
return docs, author2doc
def create_corpus_dictionary(docs, max_freq=0.5, min_wordcount=20):
# Create a dictionary representation of the documents, and filter out frequent and rare words.
from gensim.corpora import Dictionary
dictionary = Dictionary(docs)
# Remove rare and common tokens.
# Filter out words that occur too frequently or too rarely.
max_freq = max_freq
min_wordcount = min_wordcount
dictionary.filter_extremes(no_below=min_wordcount, no_above=max_freq)
_ = dictionary[0] # This sort of "initializes" dictionary.id2token.
# Vectorize data.
# Bag-of-words representation of the documents.
corpus = [dictionary.doc2bow(doc) for doc in docs]
return corpus, dictionary
def create_test_corpus(train_dictionary, docs):
# Create test corpus using the dictionary from the train data.
return [train_dictionary.doc2bow(doc) for doc in docs]
traindata_dir = "/tmp/C50train"
train_docs, train_author2doc = preprocess_docs(traindata_dir)
train_corpus_50_20, train_dictionary_50_20 = create_corpus_dictionary(train_docs, 0.5, 20)
print('Number of unique tokens: %d' % len(train_dictionary_50_20))
testdata_dir = "/tmp/C50test"
test_docs, test_author2doc = preprocess_docs(testdata_dir)
test_corpus_50_20 = create_test_corpus(train_dictionary_50_20, test_docs)
def train_model(corpus, author2doc, dictionary, num_topics=20, eval_every=0, iterations=50, passes=20):
from gensim.models import AuthorTopicModel
model = AuthorTopicModel(corpus=corpus, num_topics=num_topics, id2word=dictionary.id2token, \
author2doc=author2doc, chunksize=2500, passes=passes, \
eval_every=eval_every, iterations=iterations, random_state=1)
top_topics = model.top_topics(corpus)
tc = sum([t[1] for t in top_topics])
print(tc / num_topics)
return model
# NOTE: Author of the logic of this function is the Olavur Mortensen, from his notebook tutorial.
def predict_author(new_doc, atmodel, top_n=10, smallest_author=1):
from gensim import matutils
import pandas as pd
def similarity(vec1, vec2):
'''Get similarity between two vectors'''
dist = matutils.hellinger(matutils.sparse2full(vec1, atmodel.num_topics), \
matutils.sparse2full(vec2, atmodel.num_topics))
sim = 1.0 / (1.0 + dist)
return sim
def get_sims(vec):
'''Get similarity of vector to all authors.'''
sims = [similarity(vec, vec2) for vec2 in author_vecs]
return sims
author_vecs = [atmodel.get_author_topics(author) for author in atmodel.id2author.values()]
new_doc_topics = atmodel.get_new_author_topics(new_doc)
# Get similarities.
sims = get_sims(new_doc_topics)
# Arrange author names, similarities, and author sizes in a list of tuples.
table = []
for elem in enumerate(sims):
author_name = atmodel.id2author[elem[0]]
sim = elem[1]
author_size = len(atmodel.author2doc[author_name])
if author_size >= smallest_author:
table.append((author_name, sim, author_size))
# Make dataframe and retrieve top authors.
df = pd.DataFrame(table, columns=['Author', 'Score', 'Size'])
df = df.sort_values('Score', ascending=False)[:top_n]
return df
def prediction_accuracy(test_author2doc, test_corpus, model, k=5):
print("Precision@k: top_n={}".format(k))
matches=0
tries = 0
for author in test_author2doc:
author_id = model.author2id[author]
for doc_id in test_author2doc[author]:
predicted_authors = predict_author(test_corpus[doc_id:doc_id+1], atmodel=model, top_n=k)
tries = tries+1
if author_id in predicted_authors["Author"]:
matches=matches+1
accuracy = matches/tries
print("Prediction accuracy: {}".format(accuracy))
return accuracy, k
def plot_accuracy(scores1, label1, scores2=None, label2=None):
import matplotlib.pyplot as plt
s = [score*100 for score in scores1.values()]
t = list(scores1.keys())
plt.plot(t, s, "b-", label=label1)
plt.plot(t, s, "r^", label=label1+" data points")
if scores2 is not None:
s2 = [score*100 for score in scores2.values()]
plt.plot(t, s2, label=label2)
plt.plot(t, s2, "o", label=label2+" data points")
plt.legend(loc="lower right")
plt.xlabel('parameter k')
plt.ylabel('prediction accuracy')
plt.title('Precision at k')
plt.xticks(t)
plt.grid(True)
plt.yticks([30,40,50,60,70,80,90,100])
plt.axis([0, 11, 30, 100])
plt.show()
atmodel_standard = train_model(train_corpus_50_20, train_author2doc, train_dictionary_50_20)
accuracy_scores_20topic={}
for i in [1,2,3,4,5,6,8,10]:
accuracy, k = prediction_accuracy(test_author2doc, test_corpus_50_20, atmodel_standard, k=i)
accuracy_scores_20topic[k] = accuracy
plot_accuracy(scores1=accuracy_scores_20topic, label1="20 topics")
atmodel_100topics = train_model(train_corpus_50_20, train_author2doc, train_dictionary_50_20, num_topics=100, eval_every=0, iterations=50, passes=10)
accuracy_scores_100topic={}
for i in [1,2,3,4,5,6,8,10]:
accuracy, k = prediction_accuracy(test_author2doc, test_corpus_50_20, atmodel_100topics, k=i)
accuracy_scores_100topic[k] = accuracy
plot_accuracy(scores1=accuracy_scores_20topic, label1="20 topics", scores2=accuracy_scores_100topic, label2="100 topics")
atmodel_150topics = train_model(train_corpus_50_20, train_author2doc, train_dictionary_50_20, num_topics=150, eval_every=0, iterations=50, passes=15)
accuracy_scores_150topic={}
for i in [1,2,3,4,5,6,8,10]:
accuracy, k = prediction_accuracy(test_author2doc, test_corpus_50_20, atmodel_150topics, k=i)
accuracy_scores_150topic[k] = accuracy
plot_accuracy(scores1=accuracy_scores_100topic, label1="100 topics", scores2=accuracy_scores_150topic, label2="150 topics")
atmodel_200topics = train_model(train_corpus_50_20, train_author2doc, train_dictionary_50_20, num_topics=200, eval_every=0, iterations=50, passes=15)
accuracy_scores_200topic={}
for i in [1,2,3,4,5,6,8,10]:
accuracy, k = prediction_accuracy(test_author2doc, test_corpus_50_20, atmodel_200topics, k=i)
accuracy_scores_200topic[k] = accuracy
plot_accuracy(scores1=accuracy_scores_150topic, label1="150 topics", scores2=accuracy_scores_200topic, label2="200 topics")
train_corpus_25_10, train_dictionary_25_10 = create_corpus_dictionary(train_docs, 0.25, 10)
test_corpus_25_10 = create_test_corpus(train_dictionary_25_10, test_docs)
print('Number of unique tokens: %d' % len(train_dictionary_25_10))
atmodel_150topics_25_10 = train_model(train_corpus_25_10, train_author2doc, train_dictionary_25_10, num_topics=150, eval_every=0, iterations=50, passes=15)
accuracy_scores_150topic_25_10={}
for i in [1,2,3,4,5,6,8,10]:
accuracy, k = prediction_accuracy(test_author2doc, test_corpus_25_10, atmodel_150topics_25_10, k=i)
accuracy_scores_150topic_25_10[k] = accuracy
plot_accuracy(scores1=accuracy_scores_150topic_25_10, label1="150 topics, max_freq=25%, min_wordcount=10", scores2=accuracy_scores_150topic, label2="150 topics, standard")
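# A minimal sketch (assuming the models trained above are still in scope): inspect a few of the
# learned topics of the best-performing AuthorTopicModel via gensim's standard show_topic API.
for topic_id in range(3):
    top_words = atmodel_150topics_25_10.show_topic(topic_id, topn=8)
    print("Topic {}: {}".format(topic_id, ", ".join(word for word, _ in top_words)))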
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Random Test Data
Step2: Statsmodels Results
Step3: Scikit-Learn Cook's Distance
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
import scipy as sp
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Note: statsmodels requires scipy 1.2
import statsmodels.formula.api as sm
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from statsmodels.stats.outliers_influence import OLSInfluence as influence
from yellowbrick.base import Visualizer
# Make Test Dataset
X, y = make_regression(
n_samples=100, n_features=14, n_informative=6, bias=1.2, noise=49.8, tail_strength=0.6, random_state=637
)
# Convert to a DataFrame for statsmodels
data = pd.DataFrame(X)
data.columns = [f"X{i}" for i in range(X.shape[1])]
data["y"] = y
data.head()
# Compute an OLS model
cols = data.columns
model = sm.ols(formula=f"{cols[-1]} ~ {' + '.join(cols[:-1])}", data=data)
model = model.fit()
print(model.summary())
# Compute the influence to get Cook's distance
inf = influence(model)
# cooks_distance is an attribute of influence; it returns a tuple of the distances (C) and their p-values (P)
C, P = inf.cooks_distance
def plot_cooks_distance(c):
_, ax = plt.subplots(figsize=(9,6))
ax.stem(c, markerfmt=",")
ax.set_xlabel("instance")
ax.set_ylabel("distance")
ax.set_title("Cook's Distance Outlier Detection")
return ax
plot_cooks_distance(C)
class CooksDistance(Visualizer):
def fit(self, X, y):
# Leverage is computed as the diagonal of the projection matrix of X
# TODO: whiten X before computing leverage
self.leverage_ = (X * np.linalg.pinv(X).T).sum(1)
# Compute the MSE
rank = np.linalg.matrix_rank(X)
df = X.shape[0] - rank
resid = y - LinearRegression().fit(X, y).predict(X)
mse = np.dot(resid, resid) / df
resid_studentized_internal = resid / np.sqrt(mse) / np.sqrt(1-self.leverage_)
self.distance_ = resid_studentized_internal**2 / X.shape[1]
self.distance_ *= self.leverage_ / (1 - self.leverage_)
self.p_values_ = sp.stats.f.sf(self.distance_, X.shape[1], df)
self.draw()
return self
def draw(self):
self.ax.stem(self.distance_, markerfmt=",", label="influence")
self.ax.axhline(4/len(self.distance_), c='r', ls='--', lw=1, label=r"$\frac{4}{n}$")
def finalize(self):
self.ax.legend()
self.ax.set_xlabel("instance")
self.ax.set_ylabel("influence")
self.ax.set_title("Cook's Distance Outlier Detection")
viz = CooksDistance().fit(X, y)
viz.finalize()
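# A rough cross-check (sketch, assuming C from the statsmodels cell above is still in scope): compare
# the visualizer's distances with statsmodels' Cook's distances; small differences are expected because
# the custom class does not add an intercept column before computing leverage.
print(np.corrcoef(viz.distance_, C)[0, 1], viz.distance_.max(), C.max())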
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Decoding in sensor space using a linear SVM
|
<ASSISTANT_TASK:>
Python Code:
# Author: Denis A. Engemann <denis.engemann@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import mne
import os
import numpy as np
from mne import io
from mne.datasets import sample
from mne.minimum_norm import apply_inverse_epochs, read_inverse_operator
print(__doc__)
data_path = sample.data_path()
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-oct-6-fwd.fif'
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
subjects_dir = data_path + '/subjects'
subject = os.environ['SUBJECT'] = subjects_dir + '/sample'
os.environ['SUBJECTS_DIR'] = subjects_dir
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
fname_cov = data_path + '/MEG/sample/sample_audvis-cov.fif'
label_names = 'Aud-rh', 'Vis-rh'
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
tmin, tmax = -0.2, 0.5
event_id = dict(aud_r=2, vis_r=4) # load contra-lateral conditions
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(2, None, method='iir') # replace baselining with high-pass
events = mne.read_events(event_fname)
# Set up pick list: MEG - bad channels (modify to your needs)
raw.info['bads'] += ['MEG 2443'] # mark bads
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True, eog=True,
exclude='bads')
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks, baseline=None, preload=True,
reject=dict(grad=4000e-13, eog=150e-6),
decim=5) # decimate to save memory and increase speed
epochs.equalize_event_counts(list(event_id.keys()), 'mintime', copy=False)
epochs_list = [epochs[k] for k in event_id]
# Compute inverse solution
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
n_times = len(epochs.times)
n_vertices = 3732
n_epochs = len(epochs.events)
# Load data and compute inverse solution and stcs for each epoch.
noise_cov = mne.read_cov(fname_cov)
inverse_operator = read_inverse_operator(fname_inv)
X = np.zeros([n_epochs, n_vertices, n_times])
# to save memory, we'll load and transform our epochs step by step.
for condition_count, ep in zip([0, n_epochs // 2], epochs_list):
stcs = apply_inverse_epochs(ep, inverse_operator, lambda2,
method, pick_ori="normal", # saves us memory
return_generator=True)
for jj, stc in enumerate(stcs):
X[condition_count + jj] = stc.lh_data
# Make arrays X and y such that :
# X is 3d with X.shape[0] is the total number of epochs to classify
# y is filled with integers coding for the class to predict
# We must have X.shape[0] equal to y.shape[0]
# we know the first half belongs to the first class, the second one
y = np.repeat([0, 1], len(X) // 2) # belongs to the second class
X = X.reshape(n_epochs, n_vertices * n_times)
# we have to normalize the data before supplying them to our classifier
X -= X.mean(axis=0)
X /= X.std(axis=0)
# prepare classifier
from sklearn.svm import SVC # noqa
from sklearn.cross_validation import ShuffleSplit # noqa
# Define a monte-carlo cross-validation generator (reduce variance):
n_splits = 10
clf = SVC(C=1, kernel='linear')
cv = ShuffleSplit(len(X), n_splits, test_size=0.2)
# setup feature selection and classification pipeline
from sklearn.feature_selection import SelectKBest, f_classif # noqa
from sklearn.pipeline import Pipeline # noqa
# we will use an ANOVA f-test to preselect relevant spatio-temporal units
feature_selection = SelectKBest(f_classif, k=500) # take the best 500
# to make life easier we will create a pipeline object
anova_svc = Pipeline([('anova', feature_selection), ('svc', clf)])
# initialize score and feature weights result arrays
scores = np.zeros(n_splits)
feature_weights = np.zeros([n_vertices, n_times])
# hold on, this may take a moment
for ii, (train, test) in enumerate(cv):
anova_svc.fit(X[train], y[train])
y_pred = anova_svc.predict(X[test])
y_test = y[test]
scores[ii] = np.sum(y_pred == y_test) / float(len(y_test))
feature_weights += feature_selection.inverse_transform(clf.coef_) \
.reshape(n_vertices, n_times)
print('Average prediction accuracy: %0.3f | standard deviation: %0.3f'
% (scores.mean(), scores.std()))
# prepare feature weights for visualization
feature_weights /= (ii + 1) # create average weights
# create mask to avoid division error
feature_weights = np.ma.masked_array(feature_weights, feature_weights == 0)
# normalize scores for visualization purposes
feature_weights /= feature_weights.std(axis=1)[:, None]
feature_weights -= feature_weights.mean(axis=1)[:, None]
# unmask, take absolute values, emulate f-value scale
feature_weights = np.abs(feature_weights.data) * 10
vertices = [stc.lh_vertno, np.array([], int)] # empty array for right hemi
stc_feat = mne.SourceEstimate(feature_weights, vertices=vertices,
tmin=stc.tmin, tstep=stc.tstep,
subject='sample')
brain = stc_feat.plot()
brain.set_time(100)
brain.show_view('l') # take the medial view to further explore visual areas
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: [Bayesian inference]
Step2: Assume a non-informative prior distribution for $\mu$.
|
<ASSISTANT_TASK:>
Python Code:
X = np.zeros(10)
for i in range(len(X)):
X[i] = np.random.normal(5,1)
X
class RWMH:
def __init__(self, X):
self.mu = 2
self.freedom = 5.0
self.x_var = np.mean(X)
def prior_dist(self, t):
ft = math.gamma((self.freedom+1.0)/2.0)/(math.sqrt(self.freedom*math.pi)*math.gamma((self.freedom)/2.0))*pow(1+pow(t, 2)/self.freedom, -(self.freedom+1)/2.0)
return ft
def prop_dist(self):
self.mu = np.random.normal(self.mu, 0.5)
return self.mu
def accept(self, mu_new, mu):
return min([1, self.prior_dist(mu_new)/self.prior_dist(mu)])
def simulate(self):
mu = np.zeros(110000)
mu[0] = self.mu
for i in range(1,110000):
mu_new = self.prop_dist()
u = np.random.uniform()
if u <= self.accept(mu_new, mu[i-1]):
mu[i] = mu_new
else:
mu[i] = mu[i-1]
self.mu = mu[i]
return mu
rwmh = RWMH(X)
mu = rwmh.simulate()
plt.plot(mu)
plt.hist(mu)
plt.hist(mu[10000:])
mu.mean()
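# A small follow-up sketch: drop a burn-in period (10000 draws, as in the histogram above) and
# summarise the retained samples with their mean and a 95% interval.
posterior = mu[10000:]
print(posterior.mean(), np.percentile(posterior, [2.5, 97.5]))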
class RWMH:
def __init__(self, X):
self.mu = 5
self.x_var = np.mean(X)
def prior_dist(self, t):
# evaluate the density of a vague N(0, 10000**2) prior at t
ft = np.exp(-t**2 / (2.0 * 10000**2)) / (np.sqrt(2.0 * np.pi) * 10000)
return ft
def prop_dist(self):
self.mu = np.random.normal(self.mu, 0.001)
return self.mu
def accept(self, mu_new, mu):
return min([1, self.prior_dist(mu_new)/self.prior_dist(mu)])
def simulate(self):
mu = np.zeros(11000)
mu[0] = self.mu
for i in range(1,11000):
mu_new = self.prop_dist()
u = np.random.uniform()
if u <= self.accept(mu_new, mu[i-1]):
mu[i] = mu_new
else:
mu[i] = mu[i-1]
self.mu = mu[i]
return mu
rwmh = RWMH(X)
mu = rwmh.simulate()
plt.plot(mu)
plt.hist(mu)
plt.hist(mu[1000:])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download the base overlay and configure the HDMI output
Step2: 2. Applying OpenCV filters on Webcam input
Step3: Initialize the webcam and set the capture resolution
Step4: Capture a single frame and send it to the HDMI output
Step5: Apply the Laplacian filter to a stream of webcam frames
Step6: Apply the Canny edge detector to a stream of webcam frames
Step7: Show the last Canny result with matplotlib
Step8: Release the webcam and stop the HDMI output
|
<ASSISTANT_TASK:>
Python Code:
from pynq import Overlay
Overlay("base.bit").download()
from pynq.drivers.video import HDMI
hdmi_out = HDMI('out')
hdmi_out.start()
# monitor configuration: 640*480 @ 60Hz
hdmi_out.mode(HDMI.VMODE_640x480)
hdmi_out.start()
# monitor (output) frame buffer size
frame_out_w = 1920
frame_out_h = 1080
# camera (input) configuration
frame_in_w = 640
frame_in_h = 480
from pynq.drivers.video import Frame
import cv2
videoIn = cv2.VideoCapture(0)
videoIn.set(cv2.CAP_PROP_FRAME_WIDTH, frame_in_w);
videoIn.set(cv2.CAP_PROP_FRAME_HEIGHT, frame_in_h);
print("capture device is open: " + str(videoIn.isOpened()))
import numpy as np
ret, frame_vga = videoIn.read()
if (ret):
frame_1080p = np.zeros((1080,1920,3)).astype(np.uint8)
frame_1080p[0:480,0:640,:] = frame_vga[0:480,0:640,:]
hdmi_out.frame_raw(bytearray(frame_1080p.astype(np.int8)))
else:
raise RuntimeError("Error while reading from camera.")
import time
frame_1080p = np.zeros((1080,1920,3)).astype(np.uint8)
num_frames = 20
readError = 0
start = time.time()
for i in range (num_frames):
# read next image
ret, frame_vga = videoIn.read()
if (ret):
laplacian_frame = cv2.Laplacian(frame_vga, cv2.CV_8U)
# copy to frame buffer / show on monitor reorder RGB (HDMI = GBR)
frame_1080p[0:480,0:640,[0,1,2]] = laplacian_frame[0:480,0:640,[1,0,2]]
hdmi_out.frame_raw(bytearray(frame_1080p.astype(np.int8)))
else:
readError += 1
end = time.time()
print("Frames per second: " + str((num_frames-readError) / (end - start)))
print("Number of read errors: " + str(readError))
frame_1080p = np.zeros((1080,1920,3)).astype(np.uint8)
num_frames = 20
start = time.time()
for i in range (num_frames):
# read next image
ret, frame_webcam = videoIn.read()
if (ret):
frame_canny = cv2.Canny(frame_webcam,100,110)
frame_1080p[0:480,0:640,0] = frame_canny[0:480,0:640]
frame_1080p[0:480,0:640,1] = frame_canny[0:480,0:640]
frame_1080p[0:480,0:640,2] = frame_canny[0:480,0:640]
# copy to frame buffer / show on monitor
hdmi_out.frame_raw(bytearray(frame_1080p.astype(np.int8)))
else:
readError += 1
end = time.time()
print("Frames per second: " + str((num_frames-readError) / (end - start)))
print("Number of read errors: " + str(readError))
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
plt.figure(1, figsize=(10, 10))
frame_vga = np.zeros((480,640,3)).astype(np.uint8)
frame_vga[0:480,0:640,0] = frame_canny[0:480,0:640]
frame_vga[0:480,0:640,1] = frame_canny[0:480,0:640]
frame_vga[0:480,0:640,2] = frame_canny[0:480,0:640]
plt.imshow(frame_vga[:,:,[2,1,0]])
plt.show()
videoIn.release()
hdmi_out.stop()
del hdmi_out
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Go through all the files in the directory, and find the source prefixes that have both lang_from and lang_to CSVs available.
Step2: Now, go through stub_both and for each CSVs, read in both languages, and take all the txt entries at the same timestamps, and build the correspondence.
Step3: Tokenize the correspondences
Step4: Build frequency dictionaries
|
<ASSISTANT_TASK:>
Python Code:
import os
import csv
import time, random
import re
lang_from, lang_to = 'en', 'ko'
data_path = './data'
stub_from, stub_to = set(),set()
stub_matcher = re.compile(r"(.*)\-(\w+)\.csv")
for fname in os.listdir(data_path):
#print(fname)
m = stub_matcher.match(fname)
if m:
stub, lang = m.group(1), m.group(2)
if lang == lang_from: stub_from.add(stub)
if lang == lang_to: stub_to.add(stub)
stub_both = stub_from.intersection(stub_to)
correspondence_loc,txt_from,txt_to=[],[],[]
def read_dict_from_csv(fname):
d=dict()
with open(fname, 'r') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
d[float(row['ts'])]=row['txt']
return d
for stub in stub_both:
#print("Reading stub %s" % (stub,))
data_from = read_dict_from_csv( os.path.join(data_path, stub+'-'+lang_from+'.csv') )
data_to = read_dict_from_csv( os.path.join(data_path, stub+'-'+lang_to+'.csv') )
valid, skipped=0, 0
for ts, txt in data_from.items():
if ts in data_to:
correspondence_loc.append( (stub, ts) )
txt_from.append( txt )
txt_to.append( data_to[ts] )
valid += 1
else:
skipped += 1
print("%3d valid of %3d fragments from '%s'" % (valid, valid+skipped, stub))
print(" Total data : %d text fragments" % (len(correspondence_loc),))
for _ in range(10):
i = random.randrange(len(correspondence_loc))
print( txt_from[i], txt_to[i] )
sub_punctuation = re.compile(r'[\,\.\:\;\?\!\-\—\s\"0-9\(\)]+')
sub_apostrophes = re.compile(r'\'(\w+)')
sub_multispaces = re.compile(r'\s\s+')
if lang_from=='ja' or lang_to=='ja':
import tinysegmenter
ja_segmenter = tinysegmenter.TinySegmenter()
sub_punc_ja = re.compile(r'[\」\「\?\。\、\・\(\)\―]+')
def tokenize_txt(arr, lang):
tok=[]
for txt in arr:
t = txt.lower()
t = re.sub(sub_punctuation, u' ', t)
if "'" in t:
t = re.sub(sub_apostrophes, r" '\1", t)
if lang=='ja':
t = ' '.join( ja_segmenter.tokenize(t) )
t = re.sub(sub_punc_ja, u' ', t)
t = re.sub(sub_multispaces, ' ', t)
tok.append(t.strip())
return tok
tok_from = tokenize_txt(txt_from, lang_from)
tok_to = tokenize_txt(txt_to, lang_to)
tok_from[220:250]
tok_to[220:250]
def build_freq(tok_arr):
f=dict()
for tok in tok_arr:
for w in tok.split():
if w not in f: f[w]=0
f[w]+=1
return f
freq_from=build_freq(tok_from)
freq_to =build_freq(tok_to)
len(freq_from),len(freq_to),
def most_frequent(freq, n=50, start=0):
return ', '.join( sorted(freq,key=lambda w:freq[w], reverse=True)[start:n+start] )
print(most_frequent(freq_from))
print(most_frequent(freq_to, n=100))
print(most_frequent(freq_from, n=20, start=9000))
print( len( [_ for w,f in freq_from.items() if f>=10]))
print( len( [_ for w,f in freq_to.items() if f>=10]))
def build_rank(freq):
return { w:i for i,w in enumerate( sorted(freq, key=lambda w:freq[w], reverse=True) ) }
rank_from = build_rank(freq_from)
rank_to = build_rank(freq_to)
print(rank_from['robot'])
def max_rank(tok, rank): # Find the most infrequent word in this tokenized sentence
r = -1
for w in tok.split():
if rank[w]>r: r=rank[w]
return r
tok_max_rank_from = [ max_rank(tok, rank_from) for tok in tok_from ]
tok_max_rank_to = [ max_rank(tok, rank_to) for tok in tok_to ]
start=0;print(tok_max_rank_from[start:start+15], '\n', tok_max_rank_to[start:start+15],)
i=0; tok_max_rank_from[i], tok_from[i], tok_to[i], tok_max_rank_to[i],
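# A possible filtering step (sketch): keep only sentence pairs whose rarest word is reasonably
# common on both sides, e.g. within the top 5000 ranks in each language.
pairs = [(tok_from[i], tok_to[i]) for i in range(len(tok_from))
         if tok_max_rank_from[i] < 5000 and tok_max_rank_to[i] < 5000]
print(len(pairs), "of", len(tok_from), "pairs kept")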
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Post-training integer quantization with int16 activations
Step2: Check that the 16x8 quantization mode is available
Step3: Train and export the model
Step4: For the example, you trained the model for just a single epoch, so it only trains to ~96% accuracy.
Step5: Write it out to a .tflite file
Step6: To instead quantize the model to 16x8 quantization mode, first set the optimizations flag to use default optimizations. Then specify that 16x8 quantization mode is the required supported operation in the target specification
Step7: As in the case of int8 post-training quantization, it is possible to produce a fully integer quantized model by setting converter options inference_input(output)_type to tf.int16.
Step8: Finally, convert the model as usual. Note, by default the converted model will still use float input and outputs for invocation convenience.
Step9: Note how the resulting file is approximately 1/3 the size.
Step10: Run the TensorFlow Lite models
Step11: Test the models on one image
Step12: Evaluate the models
Step13: Repeat the evaluation on the 16x8 quantized model
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
logging.getLogger("tensorflow").setLevel(logging.DEBUG)
import tensorflow as tf
from tensorflow import keras
import numpy as np
import pathlib
tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8
# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
# Define the model architecture
model = keras.Sequential([
keras.layers.InputLayer(input_shape=(28, 28)),
keras.layers.Reshape(target_shape=(28, 28, 1)),
keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),
keras.layers.MaxPooling2D(pool_size=(2, 2)),
keras.layers.Flatten(),
keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
epochs=1,
validation_data=(test_images, test_labels)
)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8]
mnist_train, _ = tf.keras.datasets.mnist.load_data()
images = tf.cast(mnist_train[0], tf.float32) / 255.0
mnist_ds = tf.data.Dataset.from_tensor_slices((images)).batch(1)
def representative_data_gen():
for input_value in mnist_ds.take(100):
# Model has only one input so each data point has one element.
yield [input_value]
converter.representative_dataset = representative_data_gen
tflite_16x8_model = converter.convert()
tflite_model_16x8_file = tflite_models_dir/"mnist_model_quant_16x8.tflite"
tflite_model_16x8_file.write_bytes(tflite_16x8_model)
!ls -lh {tflite_models_dir}
interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))
interpreter.allocate_tensors()
interpreter_16x8 = tf.lite.Interpreter(model_path=str(tflite_model_16x8_file))
interpreter_16x8.allocate_tensors()
test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
interpreter.set_tensor(input_index, test_image)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)
import matplotlib.pylab as plt
plt.imshow(test_images[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[0]),
predict=str(np.argmax(predictions[0]))))
plt.grid(False)
test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)
input_index = interpreter_16x8.get_input_details()[0]["index"]
output_index = interpreter_16x8.get_output_details()[0]["index"]
interpreter_16x8.set_tensor(input_index, test_image)
interpreter_16x8.invoke()
predictions = interpreter_16x8.get_tensor(output_index)
plt.imshow(test_images[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[0]),
predict=str(np.argmax(predictions[0]))))
plt.grid(False)
# A helper function to evaluate the TF Lite model using "test" dataset.
def evaluate_model(interpreter):
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# Run predictions on every image in the "test" dataset.
prediction_digits = []
for test_image in test_images:
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the digit with highest
# probability.
output = interpreter.tensor(output_index)
digit = np.argmax(output()[0])
prediction_digits.append(digit)
# Compare prediction results with ground truth labels to calculate accuracy.
accurate_count = 0
for index in range(len(prediction_digits)):
if prediction_digits[index] == test_labels[index]:
accurate_count += 1
accuracy = accurate_count * 1.0 / len(prediction_digits)
return accuracy
print(evaluate_model(interpreter))
# NOTE: This quantization mode is an experimental post-training mode,
# it does not have any optimized kernels implementations or
# specialized machine learning hardware accelerators. Therefore,
# it could be slower than the float interpreter.
print(evaluate_model(interpreter_16x8))
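# A small inspection sketch: the tensor details show the dtype the 16x8-quantized interpreter
# expects at its interface (float32 here, since the converter kept float input/output by default).
print(interpreter_16x8.get_input_details()[0]['dtype'])
print(interpreter_16x8.get_output_details()[0]['dtype'])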
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fundamental variables
Step2: Derived variables
Step3: The system's vector basis is given by $(\hat{\ell}, \hat{n}, \hat{\lambda})$, and will be computed by the code in terms of the fundamental logarithmic rotors defined above. Here, we give all the substitutions that will be needed in the code.
Step4: Various spin components and combinations
Step5: Other functions of the angular velocity that find frequent use
|
<ASSISTANT_TASK:>
Python Code:
# Make sure division of integers does not round to the nearest integer
from __future__ import division
import sys
sys.path.insert(0, '..') # Look for modules in directory above this one
# Make everything in python's symbolic math package available
from sympy import * # Make sure sympy functions are used in preference to numpy
import sympy # Make sympy. constructions available
from sympy import Rational as frac # Rename for similarity to latex
from sympy import log as ln
# Print symbolic expressions nicely
init_printing()
# We'll use the numpy `array` object for vectors
from numpy import array, cross, dot
# We'll use a custom object to keep track of variables
from Utilities.PNObjects import PNCollection
PNVariables = PNCollection()
# Unit basis vectors
PNVariables.AddBasicConstants('xHat, yHat, zHat', datatype='Quaternions::Quaternion', commutative=False)
# Dimensionful quantities, just in case anybody uses them...
PNVariables.AddBasicConstants('G, c')
# Masses of objects 1 and 2.
PNVariables.AddBasicConstants('M1')
PNVariables.AddBasicConstants('M2')
# Angular speed of separation vector
PNVariables.AddBasicVariables('v')
# Initial spins expressed as spinors taking zHat onto those spins (assumed to have constant magnitudes)
PNVariables.AddBasicConstants('S_chi1', datatype='Quaternions::Quaternion', commutative=False)
PNVariables.AddBasicConstants('S_chi2', datatype='Quaternions::Quaternion', commutative=False)
# Dynamic spin directions
PNVariables.AddBasicVariables('rfrak_chi1_x, rfrak_chi1_y')
PNVariables.AddBasicVariables('rfrak_chi2_x, rfrak_chi2_y')
# Tidal deformabilities, in units where the total mass is 1
PNVariables.AddBasicConstants('lambda1, lambda2')
# Frame aligned to Orbital angular velocity vector and magnitude ("Newtonian" angular momentum)
PNVariables.AddBasicVariables('rfrak_frame_x, rfrak_frame_y, rfrak_frame_z')
PNVariables.AddDerivedConstant('M', M1+M2)
PNVariables.AddDerivedConstant('delta', (M1-M2)/M)
PNVariables.AddDerivedConstant('nu', M1*M2/M**2)
PNVariables.AddDerivedConstant('nu__2', (M1*M2/M**2)**2)
PNVariables.AddDerivedConstant('nu__3', (M1*M2/M**2)**3)
PNVariables.AddDerivedConstant('q', M1/M2)
# This rotor encodes all information about the frame
PNVariables.AddDerivedVariable('R', exp(rfrak_frame_x*xHat + rfrak_frame_y*yHat + rfrak_frame_z*zHat),
datatype='Quaternions::Quaternion', commutative=False)
# Unit separation vector between the compact objects
PNVariables.AddDerivedVariable('nHat', R*xHat*conjugate(R), datatype='Quaternions::Quaternion')
# Unit vector orthogonal to the other two; in the direction of velocity
PNVariables.AddDerivedVariable('lambdaHat', R*yHat*conjugate(R), datatype='Quaternions::Quaternion')
# Unit vector in direction of angular velocity
PNVariables.AddDerivedVariable('ellHat', R*zHat*conjugate(R), datatype='Quaternions::Quaternion')
# Components of the above
for i,d in zip(['1','2','3'],['x','y','z']):
PNVariables.AddDerivedVariable('nHat_'+d, substitution_atoms=[nHat], substitution='nHat['+i+']')
for i,d in zip(['1','2','3'],['x','y','z']):
PNVariables.AddDerivedVariable('lambdaHat_'+d, substitution_atoms=[lambdaHat], substitution='lambdaHat['+i+']')
for i,d in zip(['1','2','3'],['x','y','z']):
PNVariables.AddDerivedVariable('ellHat_'+d, substitution_atoms=[ellHat], substitution='ellHat['+i+']')
# These rotors encode all information about the spin directions
PNVariables.AddDerivedVariable('R_S1', exp(rfrak_chi1_x*xHat + rfrak_chi1_y*yHat),
datatype='Quaternions::Quaternion', commutative=False)
PNVariables.AddDerivedVariable('R_S2', exp(rfrak_chi2_x*xHat + rfrak_chi2_y*yHat),
datatype='Quaternions::Quaternion', commutative=False)
# The spins are derived from rfrak_chi1_x, etc.
PNVariables.AddDerivedVariable('chiVec1', S_chi1*R_S1*zHat*conjugate(R_S1)*conjugate(S_chi1), datatype='Quaternions::Quaternion')
PNVariables.AddDerivedVariable('chiVec2', S_chi2*R_S2*zHat*conjugate(R_S2)*conjugate(S_chi2), datatype='Quaternions::Quaternion')
for i,d in zip(['1','2','3'],['x','y','z']):
PNVariables.AddDerivedVariable('chi1_'+d, substitution_atoms=[chiVec1], substitution='chiVec1['+i+']')
for i,d in zip(['1','2','3'],['x','y','z']):
PNVariables.AddDerivedVariable('chi2_'+d, substitution_atoms=[chiVec2], substitution='chiVec2['+i+']')
PNVariables.AddDerivedConstant('chi1chi1', substitution_atoms=[chiVec1], substitution='chiVec1.normsquared()')
PNVariables.AddDerivedConstant('chi1chi2', substitution_atoms=[chiVec1,chiVec2], substitution='chiVec1.dot(chiVec2)')
PNVariables.AddDerivedConstant('chi2chi2', substitution_atoms=[chiVec2], substitution='chiVec2.normsquared()')
PNVariables.AddDerivedVariable('chi1_n', substitution_atoms=[chiVec1,nHat], substitution='chiVec1.dot(nHat)')
PNVariables.AddDerivedVariable('chi1_lambda', substitution_atoms=[chiVec1,lambdaHat], substitution='chiVec1.dot(lambdaHat)')
PNVariables.AddDerivedVariable('chi1_ell', substitution_atoms=[chiVec1,ellHat], substitution='chiVec1.dot(ellHat)')
PNVariables.AddDerivedVariable('chi2_n', substitution_atoms=[chiVec2,nHat], substitution='chiVec2.dot(nHat)')
PNVariables.AddDerivedVariable('chi2_lambda', substitution_atoms=[chiVec2,lambdaHat], substitution='chiVec2.dot(lambdaHat)')
PNVariables.AddDerivedVariable('chi2_ell', substitution_atoms=[chiVec2,ellHat], substitution='chiVec2.dot(ellHat)')
PNVariables.AddDerivedConstant('sqrt1Mchi1chi1', sqrt(1-chi1chi1))
PNVariables.AddDerivedConstant('sqrt1Mchi2chi2', sqrt(1-chi2chi2))
PNVariables.AddDerivedVariable('S', chiVec1*M1**2 + chiVec2*M2**2, datatype=chiVec1.datatype)
PNVariables.AddDerivedVariable('S_ell', chi1_ell*M1**2 + chi2_ell*M2**2)
PNVariables.AddDerivedVariable('S_n', chi1_n*M1**2 + chi2_n*M2**2)
PNVariables.AddDerivedVariable('S_lambda', chi1_lambda*M1**2 + chi2_lambda*M2**2)
PNVariables.AddDerivedVariable('Sigma', M*(chiVec2*M2 - chiVec1*M1), datatype=chiVec1.datatype)
PNVariables.AddDerivedVariable('Sigma_ell', M*(chi2_ell*M2 - chi1_ell*M1))
PNVariables.AddDerivedVariable('Sigma_n', M*(chi2_n*M2 - chi1_n*M1))
PNVariables.AddDerivedVariable('Sigma_lambda', M*(chi2_lambda*M2 - chi1_lambda*M1))
PNVariables.AddDerivedVariable('S1', chiVec1*M1**2, datatype=chiVec1.datatype)
PNVariables.AddDerivedVariable('S1_ell', chi1_ell*M1**2)
PNVariables.AddDerivedVariable('S1_n', chi1_n*M1**2)
PNVariables.AddDerivedVariable('S1_lambda', chi1_lambda*M1**2)
PNVariables.AddDerivedVariable('S2', chiVec2*M2**2, datatype=chiVec1.datatype)
PNVariables.AddDerivedVariable('S2_ell', chi2_ell*M2**2)
PNVariables.AddDerivedVariable('S2_n', chi2_n*M2**2)
PNVariables.AddDerivedVariable('S2_lambda', chi2_lambda*M2**2)
PNVariables.AddDerivedVariable('chi_s', (chiVec1 + chiVec2)/2, datatype=chiVec1.datatype)
PNVariables.AddDerivedVariable('chi_s_ell', (chi1_ell+chi2_ell)/2)
PNVariables.AddDerivedVariable('chi_s_n', (chi1_n+chi2_n)/2)
PNVariables.AddDerivedVariable('chi_s_lambda', (chi1_lambda+chi2_lambda)/2)
PNVariables.AddDerivedVariable('chi_a', (chiVec1 - chiVec2)/2, datatype=chiVec1.datatype)
PNVariables.AddDerivedVariable('chi_a_ell', (chi1_ell-chi2_ell)/2)
PNVariables.AddDerivedVariable('chi_a_n', (chi1_n-chi2_n)/2)
PNVariables.AddDerivedVariable('chi_a_lambda', (chi1_lambda-chi2_lambda)/2)
PNVariables.AddDerivedVariable('x', v**2)
PNVariables.AddDerivedVariable('Omega_orb', v**3/M)
PNVariables.AddDerivedVariable('logv', log(v))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: One way to handle this is to drop the first 4, given that almost all observations are missing.
Step2: Many features (e.g. LotArea, GarageCars) are indeed highly correlated with SalePrice.
Step3: Fit Models
Step4: DecisionTree Regressor is performing much better than Linear Regression here, perhaps capturing some non-linearity in the data.
Step5: Best performance thus far is from RF.
|
<ASSISTANT_TASK:>
Python Code:
tmp = train.isnull().sum()
# get top 10 results
tmp.sort_values(ascending=False).head(10).plot(kind='bar', figsize=(8,8))
drop_cols = ['PoolQC','MiscFeature','Alley','Fence']
# write custom transformer to drop these 4 cols for use in Pipeline later
from sklearn.base import BaseEstimator, TransformerMixin
class DropColumnsTransform(BaseEstimator, TransformerMixin):
def __init__(self, attribs_drop):
self.attribs_drop = attribs_drop
def fit(self, X, y=None):
return self
def transform(self, X):
return X.drop(self.attribs_drop, axis=1).values
# look at categorical data
train_cat = train.select_dtypes(include=['object'])
train_cat.shape
# use this to impute missing values as "?"
train_cat = train_cat.fillna("?")
print("43/%d or %.2f%% of columns are categorical" % (train.shape[1], 43/train.shape[1]*100))
from sklearn.preprocessing import LabelBinarizer, Imputer
lb = LabelBinarizer()
# loop to apply LB to each column individually, then combine them back together
list_cols = []
for col in list(train_cat.columns):
x = train_cat[col].values
x_trans = lb.fit_transform(x)
list_cols.append(x_trans)
train_cat_transformed = np.concatenate(list_cols,axis=1)
train_cat_transformed
# numerical data now
Imp = Imputer(strategy="median")
train_num = train.select_dtypes(include=['number'])
train_num.shape
# look at correlation
cor = train_num.corr()
f = plt.figure(figsize=(15,15))
sns.heatmap(cor, cmap='plasma')
tmp = cor['SalePrice'].sort_values(ascending=False)
tmp[1:11].plot(kind='bar', figsize=(8,8))
# we will have to remove SalePrice before imputing
train_num_wsp = train_num.drop('SalePrice',axis=1)
train_num_tr = Imp.fit_transform(train_num_wsp)
train_num_tr
X = np.concatenate([train_num_tr, train_cat_transformed],axis=1)
y = train_num['SalePrice'].values
print("Shape of X:", X.shape)
print("Shape of y:", y.shape)
from sklearn.model_selection import train_test_split
# split into 10% for validation at end
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.1)
# Linear Regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.metrics import mean_squared_error as mse
linreg = LinearRegression()
scores = cross_val_score(linreg, X_train, y_train, scoring="neg_mean_squared_error", cv=10, verbose=1)
def printscorespretty(scores):
sc = np.sqrt(-scores)
print("Scores:", sc)
print("Mean:", np.mean(sc))
print("SD:", np.sqrt(np.var(sc)))
printscorespretty(scores)
#Decision Tree Regressor
from sklearn.tree import DecisionTreeRegressor
dtr = DecisionTreeRegressor()
scores = cross_val_score(dtr, X_train, y_train, scoring="neg_mean_squared_error", cv=10, verbose=1)
printscorespretty(scores)
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor()
scores = cross_val_score(rf, X_train, y_train, scoring="neg_mean_squared_error", cv=10, verbose=1)
printscorespretty(scores)
# XGBoost
from xgboost import XGBRegressor
XGB = XGBRegressor()
scores = cross_val_score(XGB, X_train, y_train, scoring="neg_mean_squared_error", cv=10, verbose=1)
printscorespretty(scores)
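# A possible next step (sketch only): tune the random forest with a small grid search,
# reusing the same negative-MSE scoring as the cross-validation runs above.
from sklearn.model_selection import GridSearchCV
param_grid = {'n_estimators': [10, 50, 100], 'max_depth': [None, 10, 20]}
grid = GridSearchCV(RandomForestRegressor(), param_grid, scoring="neg_mean_squared_error", cv=5)
grid.fit(X_train, y_train)
print(grid.best_params_, np.sqrt(-grid.best_score_))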
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading the data
Step2: We want to make the time dependency a bit clearer and add an identifier to each of the stock values (in this notebook we only have Apple though).
Step3: Create training data sample
Step4: The resulting dataframe now consists of these "windows" stamped out of the original dataframe.
Step5: If you now group by the new id column, each of the groups will be a certain stock symbol until and including the data until a certain day (and including the last 20 days in the past).
Step6: we now have 1254 unique windows (identified by stock symbol and ending date)
Step7: We "lost" 5 windows, as we required to have a minimum history of more than 5 days.
Step8: The process is also shown in this image (please note that the window size is smaller for better visibility)
Step9: We make the data a bit easier to work with by removing the tuple-index
Step10: Our (AAPL, 2020-07-14 00
Step11: Just to repeat
Step12: Quick consistency test
Step13: However, we need to be a bit careful here
Step14: We can now train an ordinary regressor (a plain LinearRegression in the code below) to predict the next time step.
Step15: and do feature selection before training
Step16: Now let's check how good our prediction is
Step17: The prediction is for the next day, so for drawing we need to shift 1 step back
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pylab as plt
from tsfresh import extract_features, select_features
from tsfresh.utilities.dataframe_functions import roll_time_series, make_forecasting_frame
from tsfresh.utilities.dataframe_functions import impute
try:
import pandas_datareader.data as web
except ImportError:
print("You need to install the pandas_datareader. Run pip install pandas_datareader.")
from sklearn.linear_model import LinearRegression
df = web.DataReader("AAPL", 'stooq')["High"]
df.head()
plt.figure(figsize=(15, 6))
df.plot(ax=plt.gca())
plt.show()
df_melted = pd.DataFrame({"high": df.copy()})
df_melted["date"] = df_melted.index
df_melted["Symbols"] = "AAPL"
df_melted.head()
df_rolled = roll_time_series(df_melted, column_id="Symbols", column_sort="date",
max_timeshift=20, min_timeshift=5)
df_rolled.head()
df_rolled[df_rolled["id"] == ("AAPL", pd.to_datetime("2020-07-14"))]
df_melted[(df_melted["date"] <= pd.to_datetime("2020-07-14")) &
(df_melted["date"] >= pd.to_datetime("2020-06-15")) &
(df_melted["Symbols"] == "AAPL")]
len(df_melted)
df_rolled["id"].nunique()
df_rolled.groupby("id").size().agg([np.min, np.max])
X = extract_features(df_rolled.drop("Symbols", axis=1),
column_id="id", column_sort="date", column_value="high",
impute_function=impute, show_warnings=False)
X.head()
X = X.set_index(X.index.map(lambda x: x[1]), drop=True)
X.index.name = "last_date"
X.head()
X.loc['2020-07-14']
y = df_melted.set_index("date").sort_index().high.shift(-1)
y["2020-07-13"], df["2020-07-14"].iloc[0]
y = y[y.index.isin(X.index)]
X = X[X.index.isin(y.index)]
X[:"2018"]
X_train = X[:"2018"]
X_test = X["2019":]
y_train = y[:"2018"]
y_test = y["2019":]
X_train_selected = select_features(X_train, y_train)
ada = LinearRegression()
ada.fit(X_train_selected, y_train)
X_test_selected = X_test[X_train_selected.columns]
y_pred = pd.Series(ada.predict(X_test_selected), index=X_test_selected.index)
plt.figure(figsize=(15, 6))
y.plot(ax=plt.gca())
y_pred.plot(ax=plt.gca(), legend=None, marker=".")
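# A simple quantitative check (sketch, assuming the test indices line up and dropping the final
# NaN target produced by the shift): mean absolute error of the one-step-ahead forecast.
from sklearn.metrics import mean_absolute_error
valid = y_test.notna()
print(mean_absolute_error(y_test[valid], y_pred[valid]))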
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Compare the schools on List of CC schools with list of all public/private schools.
Step2: Cycle through only first 89 values - stop when reaching
Step3: Create a RESTful API of schools that have CC and those that don't
|
<ASSISTANT_TASK:>
Python Code:
crcom = pd.read_csv('/home/wcmckee/Downloads/List of CC schools - Sheet1.csv', skiprows=5, index_col=0, usecols=[0,1,2])
#crcom
aqcom = pd.read_csv('/home/wcmckee/Downloads/List of CC schools - Sheet1.csv', skiprows=6, usecols=[0])
aqjsz = aqcom.to_json()
dicthol = json.loads(aqjsz)
dschoz = dicthol['School']
#dicthol
dscv = dschoz.values()
ccschool = list()
for ds in range(87):
#print(dschoz[str(ds)])
ccschool.append((dschoz[str(ds)]))
schccd = dict()
scda = dict({'cc' : True})
sanoc = dict({'cc' : False})
#schccd.update({ccs : scda})
for ccs in ccschool:
#These schools have a cc license. Update the list of all schools with cc and value = true.
#Focus on schools that don't have cc license.
#Filter schools in area that don't have cc license.
#print (ccs)
schccd.update({ccs : scda})
ccschz = list()
for dsc in range(87):
#print (dschoz[str(dsc)])
ccschz.append((dschoz[str(dsc)]))
#Append in names of schools that are missing from this dict.
#Something like
#schccd.update{school that doesnt have cc : {'cc' : False}}
#schccd
noclist = pd.read_csv('/home/wcmckee/Downloads/Directory-School-current.csv', skiprows=3, usecols=[1])
webskol = pd.read_csv('/home/wcmckee/Downloads/Directory-School-current.csv', skiprows=3, usecols=[6])
websjs = webskol.to_json()
dictscha = json.loads(websjs)
numsweb = dictscha['School website']
lenmuns = len(numsweb)
#for nuran in range(lenmuns):
# print (numsweb[str(nuran)])
#noclist.values[0:10]
aqjaq = noclist.to_json()
jsaqq = json.loads(aqjaq)
najsa = jsaqq['Name']
alsl = len(najsa)
allschlis = list()
for alr in range(alsl):
allschlis.append(najsa[str(alr)])
#allschlis
newlis = list(set(allschlis) - set(ccschool))
empd = dict()
sstru = json.dumps(schccd)
for newl in newlis:
#print (newl)
empd.update({newl : sanoc})
empdum = json.dumps(empd)
trufal = empd.copy()
trufal.update(schccd)
trfaj = json.dumps(trufal)
savjfin = open('/home/wcmckee/ccschool/index.json', 'w')
savjfin.write(trfaj)
savjfin.close()
#savtru = open('/home/wcmckee/ccschool/cctru.json', 'w')
#savtru.write(sstru)
#savtru.close()
#for naj in najsa.values():
#print (naj)
# for schk in schccd.keys():
#print(schk)
# allschlis.append(schk)
#for i in ccschz[:]:
# if i in allschlis:
# ccschz.remove(i)
# allschlis.remove(i)
#Cycle though some schools rather than everything.
#Cycle though all schools and find schools that have cc
#for naj in range(2543):
#print(najsa[str(naj)])
# for schk in schccd.keys():
# if schk in (najsa[str(naj)]):
#Remove these schools from the list
# print (schk)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Give the Hist mark the data you want to bin as the sample argument, and also give 'x' and 'y' scales.
Step2: The midpoints of the resulting bins and their number of elements can be recovered via the read-only traits x and y
Step3: Tuning the bins
Step4: Histogram Styling
|
<ASSISTANT_TASK:>
Python Code:
# Create a sample of Gaussian draws
np.random.seed(0)
x_data = np.random.randn(1000)
fig = plt.figure(padding_y=0)
hist = plt.bin(x_data, padding=0)
fig
hist.x, hist.y
fig = plt.figure(padding_y=0)
hist = plt.bin(x_data, padding=0)
fig
# Changing the number of bins
hist.bins = "sqrt"
# Changing the range
hist.min = 0
# Normalizing the count
fig = plt.figure(padding_y=0)
hist = plt.bin(x_data, density=True)
fig
# changing the color
hist.colors = ["orangered"]
# stroke and opacity update
hist.stroke = "orange"
hist.opacities = [0.5] * len(hist.x)
# Laying the histogram on its side
hist.orientation = "horizontal"
fig.axes[0].orientation = "vertical"
fig.axes[1].orientation = "horizontal"
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: variable definitions
Step2: example recording 1
Step3: example recording 2
Step4: formatting
Step5: remove noise
Step6: plot
Step7: save figure
|
<ASSISTANT_TASK:>
Python Code:
from matplotlib import pyplot
%matplotlib inline
from matplotlib.patches import Rectangle
from matplotlib.lines import Line2D
import numpy
from scipy.io import wavfile
from os import path
from datetime import timedelta
from django.db import connection
from database.models import Sound
from database.models import Site
from nacoustik import Wave
from nacoustik.spectrum import psd
from nacoustik.noise import remove_background_noise, remove_anthrophony
from figutils import query, style
figure_directory = ""
site1 = Site.objects.get(name='Höttinger Rain')
sound_db1 = Sound.objects.get(id=147)
site2 = Site.objects.get(name='Pfaffensteig')
sound_db2 = Sound.objects.get(id=158)
style.set_font()
# example recording 1
wave1 = Wave(sound_db1.get_filepath())
wave1.read()
wave1.normalize()
samples1 = wave1.samples[(100 * wave1.rate):(160 * wave1.rate)]
duration = 60
f, t, a_pass = psd(samples1, rate=wave1.rate, window_length=512)
ale_pass = remove_background_noise(a_pass, N=0.18, iterations=3)
b_pass = remove_anthrophony(ale_pass, time_delta=t[1]-t[0], freq_delta=f[1]-f[0])
b_pass = numpy.ma.masked_equal(b_pass, value=0)
# example recording 2
wave2 = Wave(sound_db2.get_filepath())
wave2.read()
wave2.normalize()
samples2 = wave2.samples[(0 * wave2.rate):(60 * wave2.rate)]
duration = 60
f, t, a_fail = psd(samples2, rate=wave2.rate, window_length=512)
ale_fail = remove_background_noise(a_fail, N=0.18, iterations=3)
b_fail = remove_anthrophony(ale_fail, time_delta=t[1]-t[0], freq_delta=f[1]-f[0])
b_fail = numpy.ma.masked_equal(b_fail, value=0)
# create figure
figure3 = pyplot.figure()
#figure3.subplots_adjust(left=0.04, bottom=0.12, right=0.96, top=0.97, wspace=0, hspace=0)
figure3.subplots_adjust(left=0.04, bottom=0.04, right=0.96, top=0.99, wspace=0, hspace=0)
figure3.set_figwidth(6.85)
figure3.set_figheight(9.21)
# specify frequency bins (width of 1 kiloherz)
bins = numpy.arange(0, (wave1.rate / 2), 1000)
# axes
ax_a = pyplot.subplot2grid((21, 1), (0, 0), rowspan=5, colspan=1)
ax_b = pyplot.subplot2grid((21, 1), (5, 0), rowspan=5, colspan=1, sharex=ax_a, sharey=ax_a)
ax_c = pyplot.subplot2grid((21, 1), (11, 0), rowspan=5, colspan=1, sharey=ax_a)
ax_d = pyplot.subplot2grid((21, 1), (16, 0), rowspan=5, colspan=1, sharex=ax_c, sharey=ax_a)
# compute xlabels
start_time = sound_db1.get_datetime() + timedelta(seconds=100)
time_delta = 10
n = int((duration / time_delta) + 1)
xlabels_pass = [(start_time + timedelta(seconds=i*time_delta)).strftime("%H:%M:%S") for i in range(n)]
start_time = sound_db1.get_datetime()
xlabels_fail = [(start_time + timedelta(seconds=i*time_delta)).strftime("%H:%M:%S") for i in range(n)]
ylabels = ["", "2", "", "4", "", "6", "", "8", "", "10", "", ""]
# original - example 1
spec_1 = ax_a.pcolormesh(t, f, a_pass[0], cmap='Greys', vmin=-150, vmax=-80)
ax_a.set(ylim=([0, wave1.rate / 2]),
yticks = bins.astype(numpy.int) + 1000)
ax_a.set_yticklabels(ylabels)
ax_a.set_ylabel("frequency (kilohertz)")
ax_a.tick_params(length=6, color='black', direction='in',
bottom=True, labelbottom=False,
top=False, labeltop=False,
left=True, labelleft=False,
right=True, labelright=True)
ax_a.set_frame_on(False)
# after adaptive level equalization - example 1
spec_2 = ax_b.pcolormesh(t, f, b_pass[0], cmap='Greys', vmin=-150, vmax=-80)
ax_b.set(ylim=([0, wave1.rate / 2]),
yticks = bins.astype(numpy.int) + 1000)
ax_b.set_xticklabels(xlabels_pass)
ax_b.set_ylabel("frequency (kilohertz)")
ax_b.tick_params(length=6, color='black', direction='in',
bottom=True, labelbottom=True,
top=False, labeltop=False,
left=True, labelleft=False,
right=True, labelright=True)
ax_b.set_frame_on(False)
# original - example 2
spec_3 = ax_c.pcolormesh(t, f, a_fail[1], cmap='Greys', vmin=-150, vmax=-80)
ax_c.set(ylim=([0, wave2.rate / 2]),
yticks = bins.astype(numpy.int) + 1000)
ax_c.set_ylabel("frequency (kilohertz)")
ax_c.tick_params(length=6, color='black', direction='in',
bottom=True, labelbottom=False,
top=False, labeltop=False,
left=True, labelleft=False,
right=True, labelright=True)
ax_c.set_frame_on(False)
# after adaptive level equalization - example 2
spec_4 = ax_d.pcolormesh(t, f, b_fail[1], cmap='Greys', vmin=-150, vmax=-80)
ax_d.set(ylim=([0, wave2.rate / 2]),
yticks = bins.astype(numpy.int) + 1000)
ax_d.set_xticklabels(xlabels_fail)
ax_d.set_ylabel("frequency (kilohertz)")
ax_d.tick_params(length=6, color='black', direction='in',
bottom=True, labelbottom=True,
top=False, labeltop=False,
left=True, labelleft=False,
right=True, labelright=True)
ax_d.set_frame_on(False)
ax_d.set_xlabel("time of day (hours:minutes:seconds)")
# axes borders
ax_a.add_line(Line2D([t[0], t[-1:]], [1, 1], color='black', linewidth=1))
ax_a.add_line(Line2D([t[0], t[0]], [0, 12000], color='black', linewidth=1))
ax_a.add_line(Line2D([t[-1:], t[-1:]], [0, 12000], color='black', linewidth=1))
ax_b.add_line(Line2D([t[0], t[-1:]], [1, 1], color='black', linewidth=1))
ax_b.add_line(Line2D([t[0], t[0]], [0, 12000], color='black', linewidth=1))
ax_b.add_line(Line2D([t[-1:], t[-1:]], [0, 12000], color='black', linewidth=1))
ax_c.add_line(Line2D([t[0], t[-1:]], [1, 1], color='black', linewidth=1))
ax_c.add_line(Line2D([t[0], t[0]], [0, 12000], color='black', linewidth=1))
ax_c.add_line(Line2D([t[-1:], t[-1:]], [0, 12000], color='black', linewidth=1))
ax_d.add_line(Line2D([t[0], t[-1:]], [1, 1], color='black', linewidth=1))
ax_d.add_line(Line2D([t[0], t[0]], [0, 12000], color='black', linewidth=1))
ax_d.add_line(Line2D([t[-1:], t[-1:]], [0, 12000], color='black', linewidth=1))
# annotation
ax_a.add_line(Line2D([t[0], t[-1:]], [2000, 2000], color='black', linewidth=1, linestyle='--'))
t1 = ax_a.text(14, 2100, '2 kilohertz', color='black', ha='left', va='bottom')
b1 = ax_a.add_patch(Rectangle((23, 0), 9, 6100, facecolor='none', edgecolor='black', linestyle='--'))
b2 = ax_a.add_patch(Rectangle((30, 0), 9, 11500, facecolor='none', edgecolor='black', linestyle='--'))
ap = dict(arrowstyle='-',
connectionstyle='arc3,rad=0.2')
a1 = ax_a.annotate('plane landing', (23, 4000), xytext=(21, 6000), ha='right', va='center', arrowprops=ap)
a2 = ax_a.annotate('car passing', (39, 9000), xytext=(42, 10000), ha='left', va='center', arrowprops=ap)
style.multi_annotate(ax_a, 'birds calling', xy_list=[(53.5, 8500), (45.5, 4000)], xytext=(50, 6300),
ha='center', va='center',
arrowprops=dict(arrowstyle='->',
connectionstyle='arc3,rad=0.2'))
# title formatting
title_font = {
'size': 12.0,
'weight': 'bold'
}
ax_a2 = pyplot.axes([0.005, 0, 1, 0.99], facecolor=(1, 1, 1, 0), frameon=False)
ax_a2.tick_params(bottom=False, labelbottom=False,
top=False, labeltop=False,
left=False, labelleft=False,
right=False, labelright=False)
ax_b2 = pyplot.axes([0.005, 0, 1, 0.76], facecolor=(1, 1, 1, 0), frameon=False)
ax_b2.tick_params(bottom=False, labelbottom=False,
top=False, labeltop=False,
left=False, labelleft=False,
right=False, labelright=False)
ax_c2 = pyplot.axes([0.005, 0, 1, 0.49], facecolor=(1, 1, 1, 0), frameon=False)
ax_c2.tick_params(bottom=False, labelbottom=False,
top=False, labeltop=False,
left=False, labelleft=False,
right=False, labelright=False)
ax_d2 = pyplot.axes([0.005, 0, 1, 0.26], facecolor=(1, 1, 1, 0), frameon=False)
ax_d2.tick_params(bottom=False, labelbottom=False,
top=False, labeltop=False,
left=False, labelleft=False,
right=False, labelright=False)
t1 = ax_a2.text(0, 1, 'a', horizontalalignment='left', verticalalignment='top',
fontdict=title_font)
t2 = ax_b2.text(0, 1, 'b', horizontalalignment='left', verticalalignment='top',
fontdict=title_font)
t3 = ax_c2.text(0, 1, 'c', horizontalalignment='left', verticalalignment='top',
fontdict=title_font)
t4 = ax_d2.text(0, 1, 'd', horizontalalignment='left', verticalalignment='top',
fontdict=title_font)
#figure3.savefig(path.join(figure_directory, "figure3.png"), dpi=300)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Usage of tuples (Tuple)
Step2: Something to think about
Step3: Part-of-speech tagging
|
<ASSISTANT_TASK:>
Python Code:
# define a dictionary
# access key-value pairs in the dictionary
d = {'Tom': 95, 'Mary': 90, 'Tracy': 92}
print(d)
print(d['Tom'])
# add an element to the dictionary: simply assign a value to a new key
d['Hugo'] = 85
print(d)
# modify the value of a dictionary element
d['Tom'] = 97
print(d)
# check whether a key exists in the dictionary
print('Tom' in d)
# to look up a key that may not exist, you can supply a default value
print(d.get('Tommy',80))
# accessing a missing key directly raises an error
print(d['Tommy'])
# delete a key from the dictionary
d = {'Tom': 95, 'Mary': 90, 'Tracy': 92}
print(d)
# delete the key
d.pop('Tom')
print(d)
# get the length of a dictionary
# define an empty dictionary
d = {}
# populate it with some entries
# the str function converts an integer to a string
for i in range(30):
d['id_'+str(i)] = i*3
print(d)
print(len(d))
# create a tuple with parentheses
t = ('Tom', 'Jerry', 'Mary')
print(t)
# access an element of the tuple
print(t[1])
# a tuple cannot be modified after it is created
# this raises an error
t.append('Someone')
# a tuple cannot be modified after it is created
# assigning to an item, as you would with a list, also raises an error
# 'tuple' object does not support item assignment
t[1] = 'aaa'
# look at the methods a tuple has; you will find very few
print(dir(tuple))
# create a slightly more complex tuple
t1 = ['A', 'B', 'C']
t2 =(t1, 100, 200)
print(t2)
# a workaround to get "mutable" tuple contents
t1 = ['A', 'B', 'C']
t2 =(t1, 100, 200)
print(t1)
print(t2)
# each element of the tuple always points to the same object, but the referenced object itself can be mutable
t1.append('D')
print(t1)
print(t2)
# create a tuple with only one element
# the following does not work
t = (1)
# t becomes an integer, because the parentheses are ambiguous here and are treated as ordinary grouping parentheses
print(type(t))
# a one-element tuple needs a trailing comma to remove the ambiguity
t = (1,)
print(type(t))
print(t)
!pip install jieba
import jieba
# full mode
# scans out every word that could possibly be formed from the sentence; very fast, but it cannot resolve ambiguity
seg_list = jieba.cut("今天上海的天气怎么样", cut_all = True)
print("Full Mode: " + "/ ".join(seg_list))
# accurate mode
# tries to cut the sentence as precisely as possible; suitable for text analysis
seg_list = jieba.cut("明天纽约下雨么", cut_all = False)
print("Default Mode: " + "/ ".join(seg_list))
# accurate mode is the default
seg_list = jieba.cut("现在天气怎么样")
print(", ".join(seg_list))
# accurate mode is the default
seg_list = jieba.cut("小明硕士毕业于中国科学院计算所,后在日本京都大学深造")
print(", ".join(seg_list))
# search-engine mode
# on top of accurate mode, long words are split again to improve recall; suitable for search-engine segmentation
seg_list = jieba.cut_for_search("小明硕士毕业于中国科学院计算所,后在日本京都大学深造")
print(", ".join(seg_list))
# a joke from the internet about the trouble word segmentation can cause
seg_list = jieba.cut_for_search("黑夜总会过去")
print(", ".join(seg_list))
seg_list = jieba.cut("黑夜总会过去", cut_all = True)
print(", ".join(seg_list))
# accurate mode is the default
seg_list = jieba.cut("2016年第一季度支付事业部交易量报表")
print(','.join(seg_list))
# accurate mode is the default
seg_list = jieba.cut("2016年第一季度支付事业部交易量报表")
for i in seg_list:
print(i)
import jieba.posseg as pseg
words = pseg.cut("我爱北京天安门")
for word, flag in words:
print('%s %s' % (word, flag))
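# A small sketch of customising the dictionary: registering a word keeps jieba from splitting it.
jieba.add_word('日本京都大学')
seg_list = jieba.cut("小明硕士毕业于中国科学院计算所,后在日本京都大学深造")
print(", ".join(seg_list))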
print('\n'.join([''.join([('ILOVEYOU'[(x-y)%8]if((x*0.05)**2+(y*0.1)**2-1)**3-(x*0.05)**2*(y*0.1)**3<=0 else' ')for x in range(-30,30)])for y in range(15,-15,-1)]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Perform a baseline registration with a single feature and create a couple of new metrics. Each metric is defined by a name ("CC"), the input fixed (image), input moving (image2), a weight value (e.g. 2) and a sampling parameter (for CC this defines a radius of 9x9, i.e. 4 extra pixels on all sides of the center pixel). Five entries are needed in total.
Step2: Append the first metric to the metric list. In actuality this means that reg2 will be driven by both a demons metric and the default metric.
Step3: Add a third metric and run this new registration.
Step4: Quantify the results in terms of mutual information of the registration results using the original image intensity.
|
<ASSISTANT_TASK:>
Python Code:
import ants
image = ants.image_read(ants.get_ants_data('r16'))
image2 = ants.image_read(ants.get_ants_data('r64'))
aff = ants.registration( image, image2, "Affine" )
g1 = ants.iMath_grad( image )
g2 = ants.iMath_grad( image2 )
reg1 = ants.registration( image, image2, 'SyNOnly', initial_transform=aff['fwdtransforms'][0], verbose=False )
demonsMetric = ['demons', g1, g2, 1, 1]
ccMetric = ['CC', image, image2, 1.5, 4 ]
metrics = list( )
metrics.append( demonsMetric )
reg2 = ants.registration( image, image2, 'SyNOnly',
multivariate_extras = metrics, initial_transform=aff['fwdtransforms'][0] )
metrics.append( ccMetric )
reg3 = ants.registration( image, image2, 'SyNOnly',
multivariate_extras = metrics, initial_transform=aff['fwdtransforms'][0] )
print( ants.image_mutual_information( image, image2 ) )
print( ants.image_mutual_information( image, reg1['warpedmovout'] ) )
print( ants.image_mutual_information( image, reg2['warpedmovout'] ) )
print( ants.image_mutual_information( image, reg3['warpedmovout'] ) )
ants.plot(reg1['warpedmovout'])
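# Hedged follow-up (ANTsPy's apply_transforms API): the learned transforms can
# be reapplied to the moving image directly and the result plotted.
warped = ants.apply_transforms(fixed=image, moving=image2,
                               transformlist=reg3['fwdtransforms'])
ants.plot(warped)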
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in house sales data
Step2: Import useful functions from previous notebooks
Step3: We will also need the normalize_features() function from Week 5 that normalizes all feature columns to unit norm. Paste this function below.
Step4: Split data into training, test, and validation sets
Step5: Extract features and normalize
Step6: In computing distances, it is crucial to normalize features. Otherwise, for example, the sqft_living feature (typically on the order of thousands) would exert a much larger influence on distance than the bedrooms feature (typically on the order of ones). We divide each column of the training feature matrix by its 2-norm, so that the transformed column has unit norm.
Step7: Compute a single distance
Step8: Now print the 10th row (index 9) of the training feature matrix. Again, you get an 18-dimensional vector with components between 0 and 1.
Step9: QUIZ QUESTION
Step10: Compute multiple distances
Step11: QUIZ QUESTION
Step12: It is computationally inefficient to loop over computing distances to all houses in our training dataset. Fortunately, many of the Numpy functions can be vectorized, applying the same operation over multiple values or vectors. We now walk through this process.
Step13: The subtraction operator (-) in Numpy is vectorized as follows
Step14: Note that the output of this vectorized operation is identical to that of the loop above, which can be verified below
Step15: Aside
Step16: To test the code above, run the following cell, which should output a value -0.0934339605842
Step17: The next step in computing the Euclidean distances is to take these feature-by-feature differences in diff, square each, and take the sum over feature indices. That is, compute the sum of square feature differences for each training house (row in diff).
Step18: With this result in mind, write a single-line expression to compute the Euclidean distances between the query house and all houses in the training set. Assign the result to a variable distances.
Step19: To test the code above, run the following cell, which should output a value 0.0237082324496
Step20: Now you are ready to write a function that computes the distances from a query house to all training houses. The function should take two parameters
Step21: QUIZ QUESTIONS
Step22: Perform k-nearest neighbor regression
Step23: QUIZ QUESTION
Step24: Make a single prediction by averaging k nearest neighbor outputs
Step25: QUIZ QUESTION
Step26: Compare this predicted value using 4-nearest neighbors to the predicted value using 1-nearest neighbor computed earlier.
Step27: QUIZ QUESTION
Step28: Choosing the best value of k using a validation set
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
graphlab.product_key.set_product_key("C0C2-04B4-D94B-70F6-8771-86F9-C6E1-E122")
sales = graphlab.SFrame('kc_house_data_small.gl/kc_house_data_small.gl')
import numpy as np # note this allows us to refer to numpy as np instead
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along with the others:
features = ['constant'] + features # this is how you combine two lists
# select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):
features_sframe = data_sframe[features]
# the following line will convert the features_SFrame into a numpy matrix:
feature_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the output to the SArray output_sarray
output_sarray = data_sframe['price']
# the following will convert the SArray into a numpy array by first converting it to a list
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
def normalize_features(feature_matrix):
norms = np.linalg.norm(feature_matrix, axis=0)
features = feature_matrix / norms
return features, norms
(train_and_validation, test) = sales.random_split(.8, seed=1) # initial train/test split
(train, validation) = train_and_validation.random_split(.8, seed=1) # split training set into training and validation sets
feature_list = ['bedrooms',
'bathrooms',
'sqft_living',
'sqft_lot',
'floors',
'waterfront',
'view',
'condition',
'grade',
'sqft_above',
'sqft_basement',
'yr_built',
'yr_renovated',
'lat',
'long',
'sqft_living15',
'sqft_lot15']
features_train, output_train = get_numpy_data(train, feature_list, 'price')
features_test, output_test = get_numpy_data(test, feature_list, 'price')
features_valid, output_valid = get_numpy_data(validation, feature_list, 'price')
features_train, norms = normalize_features(features_train) # normalize training set features (columns)
features_test = features_test / norms # normalize test set by training set norms
features_valid = features_valid / norms # normalize validation set by training set norms
print features_test[0]
print features_train[9]
print np.sqrt(np.sum((features_train[9]-features_test[0])**2))
for i in range(0,10):
print str(i) + " : " + str(np.sqrt(np.sum((features_train[i]-features_test[0])**2)))
for i in range(0,10):
print str(i) + " : " + str(np.sqrt(np.sum((features_train[i]-features_test[2])**2)))
for i in xrange(3):
print features_train[i]-features_test[0]
# should print 3 vectors of length 18
print features_train[0:3] - features_test[0]
# verify that vectorization works
results = features_train[0:3] - features_test[0]
print results[0] - (features_train[0]-features_test[0])
# should print all 0's if results[0] == (features_train[0]-features_test[0])
print results[1] - (features_train[1]-features_test[0])
# should print all 0's if results[1] == (features_train[1]-features_test[0])
print results[2] - (features_train[2]-features_test[0])
# should print all 0's if results[2] == (features_train[2]-features_test[0])
diff = features_train[0:len(features_train)] - features_test[0]
print diff[-1].sum() # sum of the feature differences between the query and last training house
# should print -0.0934339605842
print np.sum(diff**2, axis=1)[15] # take sum of squares across each row, and print the 16th sum
print np.sum(diff[15]**2) # print the sum of squares for the 16th row -- should be same as above
distances = np.sqrt(np.sum(diff**2, axis=1))
print distances[100] # Euclidean distance between the query house and the 101th training house
# should print 0.0237082324496
def compute_distances(features_instances, features_query):
diff = features_instances[0:len(features_instances)] - features_query
distances = np.sqrt(np.sum(diff**2, axis=1))
return distances
distances = compute_distances(features_train, features_test[2])
min_dist = distances[0]  # renamed to avoid shadowing the built-in min
index = 0
for i in xrange(len(distances)):
    if distances[i] < min_dist:
        min_dist = distances[i]
        index = i
print min_dist
print index
print output_train[382]
def k_nearest_neighbors(k, features_train, features_query):
distances = compute_distances(features_train, features_query)
neighbors = np.argsort(distances)[0:k]
return neighbors
print k_nearest_neighbors(4, features_train, features_test[2])
def predict_output_of_query(k, features_train, output_train, features_query):
neighbors = k_nearest_neighbors(k, features_train, features_query)
prices = output_train[neighbors]
prediction = np.sum(prices)/k
return prediction
print predict_output_of_query(4, features_train, output_train, features_test[2])
def predict_output(k, features_train, output_train, features_query):
predictions = []
for i in xrange(len(features_query)):
prediction = predict_output_of_query(k, features_train, output_train, features_query[i])
predictions.append(prediction)
return predictions
print predict_output(10, features_train, output_train,features_test[0:10])
import matplotlib.pyplot as plt
%matplotlib inline
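# The original cell used rss_all without defining it; a minimal completion
# (using the helpers defined above) that computes validation-set RSS for each k.
rss_all = []
for k in range(1, 16):
    predictions = predict_output(k, features_train, output_train, features_valid)
    rss_all.append(np.sum((np.array(predictions) - output_valid)**2))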
kvals = range(1, 16)
plt.plot(kvals, rss_all,'bo-')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: 3T_SQL practice for data analysis (2) - SUB QUERY, HAVING
Step3: JOIN is a bit harder, but it is faster than WHERE.
Step8: Let's work through subqueries and HAVING again, slowly.
Step9: pandas
|
<ASSISTANT_TASK:>
Python Code:
import pymysql
import pandas as pd  # needed for pd.read_sql below
import numpy as np   # needed for np.sum / np.size below
db = pymysql.connect(
"db.fastcamp.us",
"root",
"dkstncks",
"sakila",
charset='utf8',
)
customer_df = pd.read_sql("SELECT * FROM customer;", db)
payment_df = pd.read_sql("SELECT * FROM payment;", db)
customer_df.head(1)
payment_df.head(1)
SQL_QUERY = """
SELECT c.first_name, c.last_name, SUM(p.amount) "Revenue"
FROM
    customer c
    JOIN payment p
    ON p.customer_id = c.customer_id
GROUP BY c.customer_id
ORDER BY Revenue DESC
;
"""
pd.read_sql(SQL_QUERY, db)
SQL_QUERY = """
SELECT
    c.customer_id,
    SUM(p.amount)
FROM payment p, customer c
WHERE p.customer_id = c.customer_id
GROUP BY c.customer_id
;
"""
pd.read_sql(SQL_QUERY, db)
payment_df.groupby("customer_id").agg({"amount": np.sum})
rental_df = pd.read_sql("SELECT * FROM rental;", db)
rental_df.head(1)
customer_df.head(1)
SQL_QUERY = """
SELECT
    c.first_name,
    c.last_name,
    COUNT(*) "rentals_per_customer"
FROM
    rental r
    JOIN customer c
    ON r.customer_id = c.customer_id
GROUP BY c.customer_id
HAVING rentals_per_customer >= 30
ORDER BY 3 DESC
;
"""
pd.read_sql(SQL_QUERY, db)
SQL_QUERY = """
SELECT
    c.first_name,
    c.last_name,
    COUNT(*) "rentals_per_customer"
FROM
    rental r,
    customer c
WHERE
    r.customer_id = c.customer_id
GROUP BY c.customer_id
HAVING rentals_per_customer >= 30
;
"""
pd.read_sql(SQL_QUERY, db)
RENTALS_PER_CUSTOMER_SQL_QUERY = """
SELECT
    c.first_name,
    c.last_name,
    COUNT(*) "rentals_per_customer"
FROM
    rental r
    JOIN customer c
    ON r.customer_id = c.customer_id
GROUP BY c.customer_id
;
"""
SQL_QUERY = """
SELECT *
FROM ({RENTALS_PER_CUSTOMER_SQL_QUERY}) as rpc
WHERE rentals_per_customer >= 30
;
""".format(RENTALS_PER_CUSTOMER_SQL_QUERY=RENTALS_PER_CUSTOMER_SQL_QUERY.replace(";", ""))
# print(SQL_QUERY)
pd.read_sql(SQL_QUERY, db)
rc_df = rental_df.merge(customer_df, on="customer_id")
rc_df.groupby("customer_id").size() >= 30
rentals_per_customer_df = rc_df.groupby("customer_id").agg({"customer_id": np.size})
is_30 = rentals_per_customer_df.customer_id >= 30  # >= 30, matching the SQL HAVING clause
rentals_per_customer_df[is_30]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import scipy.integrate
import numpy as np
N0 = 1
time_span = [0, 10]
def dN1_dt(t, N1):
input = 1-np.cos(t) if 0<t<2*np.pi else 0
return -100*N1 + input
sol = scipy.integrate.solve_ivp(fun=dN1_dt, t_span=time_span, y0=[N0,])
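# Minimal sketch (assumes matplotlib is available): visualize the solution
# returned by solve_ivp; sol.t holds the time points and sol.y the states.
import matplotlib.pyplot as plt
plt.plot(sol.t, sol.y[0])
plt.xlabel('t')
plt.ylabel('N1(t)')
plt.show()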
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Compute interpolation (also works with Raw and Epochs objects)
Step2: You can also use minimum-norm for EEG as well as MEG
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Denis A. Engemann <denis.engemann@gmail.com>
# Mainak Jas <mainak.jas@telecom-paristech.fr>
#
# License: BSD-3-Clause
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname, condition='Left Auditory',
baseline=(None, 0))
# plot with bads
evoked.plot(exclude=[], picks=('grad', 'eeg'))
evoked_interp = evoked.copy().interpolate_bads(reset_bads=False)
evoked_interp.plot(exclude=[], picks=('grad', 'eeg'))
evoked_interp_mne = evoked.copy().interpolate_bads(
reset_bads=False, method=dict(eeg='MNE'), verbose=True)
evoked_interp_mne.plot(exclude=[], picks=('grad', 'eeg'))
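# Quick check (standard MNE attribute, not in the original example): with
# reset_bads=False the interpolated channels stay flagged, so we can list them.
print(evoked.info['bads'])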
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2) Define Hyperparameter Configuration File
Step2: 3) Train
Step3: Run local
Step4: Run on cloud (1 cloud ML unit)
|
<ASSISTANT_TASK:>
Python Code:
%%bash
mkdir trainer
touch trainer/__init__.py
%%writefile trainer/task.py
import argparse
import pandas as pd
import tensorflow as tf
import os #NEW
import json #NEW
from tensorflow.contrib.learn.python.learn import learn_runner
from tensorflow.contrib.learn.python.learn.utils import saved_model_export_utils
print(tf.__version__)
tf.logging.set_verbosity(tf.logging.ERROR)
data_train = pd.read_csv(
filepath_or_buffer='https://storage.googleapis.com/spls/gsp418/housing_train.csv',
names=["CRIM","ZN","INDUS","CHAS","NOX","RM","AGE","DIS","RAD","TAX","PTRATIO","MEDV"])
data_test = pd.read_csv(
filepath_or_buffer='https://storage.googleapis.com/spls/gsp418/housing_test.csv',
names=["CRIM","ZN","INDUS","CHAS","NOX","RM","AGE","DIS","RAD","TAX","PTRATIO","MEDV"])
FEATURES = ["CRIM", "ZN", "INDUS", "NOX", "RM",
"AGE", "DIS", "TAX", "PTRATIO"]
LABEL = "MEDV"
feature_cols = [tf.feature_column.numeric_column(k)
for k in FEATURES] #list of Feature Columns
def generate_estimator(output_dir):
return tf.estimator.DNNRegressor(feature_columns=feature_cols,
hidden_units=[args.hidden_units_1, args.hidden_units_2], #NEW (use command line parameters for hidden units)
model_dir=output_dir)
def generate_input_fn(data_set):
def input_fn():
features = {k: tf.constant(data_set[k].values) for k in FEATURES}
labels = tf.constant(data_set[LABEL].values)
return features, labels
return input_fn
def serving_input_fn():
#feature_placeholders are what the caller of the predict() method will have to provide
feature_placeholders = {
column.name: tf.placeholder(column.dtype, [None])
for column in feature_cols
}
#features are what we actually pass to the estimator
features = {
# Inputs are rank 1 so that we can provide scalars to the server
# but Estimator expects rank 2, so we expand dimension
key: tf.expand_dims(tensor, -1)
for key, tensor in feature_placeholders.items()
}
return tf.estimator.export.ServingInputReceiver(
features, feature_placeholders
)
train_spec = tf.estimator.TrainSpec(
input_fn=generate_input_fn(data_train),
max_steps=3000)
exporter = tf.estimator.LatestExporter('Servo', serving_input_fn)
eval_spec=tf.estimator.EvalSpec(
input_fn=generate_input_fn(data_test),
steps=1,
exporters=exporter)
######START CLOUD ML ENGINE BOILERPLATE######
if __name__ == '__main__':
parser = argparse.ArgumentParser()
# Input Arguments
parser.add_argument(
'--output_dir',
help='GCS location to write checkpoints and export models',
required=True
)
parser.add_argument(
'--job-dir',
help='this model ignores this field, but it is required by gcloud',
default='junk'
)
parser.add_argument(
'--hidden_units_1', #NEW (expose hyperparameter to command line)
help='number of neurons in first hidden layer',
type = int,
default=10
)
parser.add_argument(
'--hidden_units_2', #NEW (expose hyperparameter to command line)
help='number of neurons in second hidden layer',
type = int,
default=10
)
args = parser.parse_args()
arguments = args.__dict__
output_dir = arguments.pop('output_dir')
output_dir = os.path.join(#NEW (give each trial its own output_dir)
output_dir,
json.loads(
os.environ.get('TF_CONFIG', '{}')
).get('task', {}).get('trial', '')
)
######END CLOUD ML ENGINE BOILERPLATE######
#initiate training job
tf.estimator.train_and_evaluate(generate_estimator(output_dir), train_spec, eval_spec)
%%writefile config.yaml
trainingInput:
hyperparameters:
goal: MINIMIZE
hyperparameterMetricTag: average_loss
maxTrials: 5
maxParallelTrials: 1
params:
- parameterName: hidden_units_1
type: INTEGER
minValue: 1
maxValue: 100
scaleType: UNIT_LOG_SCALE
- parameterName: hidden_units_2
type: INTEGER
minValue: 1
maxValue: 100
scaleType: UNIT_LOG_SCALE
GCS_BUCKET = 'gs://vijays-sandbox-ml' #CHANGE THIS TO YOUR BUCKET
PROJECT = 'vijays-sandbox' #CHANGE THIS TO YOUR PROJECT ID
REGION = 'us-central1' #OPTIONALLY CHANGE THIS
import os
os.environ['GCS_BUCKET'] = GCS_BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
gcloud ml-engine local train \
--module-name=trainer.task \
--package-path=trainer \
-- \
--output_dir='./output'
%%bash
gcloud config set project $PROJECT
%%bash
JOBNAME=housing_$(date -u +%y%m%d_%H%M%S)
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=./trainer \
--job-dir=$GCS_BUCKET/$JOBNAME/ \
--runtime-version 1.4 \
--config config.yaml \
-- \
--output_dir=$GCS_BUCKET/$JOBNAME/output
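%%bash
# Hedged follow-up (standard gcloud CLI; bash cells do not share variables, so
# take the job name from the submit output above and inspect it with
# `gcloud ml-engine jobs describe <JOBNAME>`); here we just list recent jobs:
gcloud ml-engine jobs list --limit=5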
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Function to optimize
Step2: To illustrate the problem
Step3: Now with a Logarithmic latent space mapping
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
from IPython.core.debugger import Tracer # debugging
from IPython.display import clear_output, display
import time
%matplotlib inline
#%config InlineBackend.figure_format = 'svg'
import matplotlib.pyplot as plt
import seaborn as sns; sns.set() # prettify matplotlib
import copy
import numpy as np
import sklearn.gaussian_process as sk_gp
# local modules
import turbo as tb
import turbo.modules as tm
import turbo.gui.jupyter as tg
import turbo.plotting as tp
# make deterministic
np.random.seed(100)
buffer = 5e-3 # function not defined at exactly 0
shift = -2
def f(x):
x = x - shift
return np.cos(2*(20-x)**2)/x - 2*np.log(x)
def logspace(from_, to, num_per_mag=1):
'''
num_per_mag: number of samples per order of magnitude
'''
from_exp = np.log10(from_)
to_exp = np.log10(to)
num = abs(to_exp-from_exp)*num_per_mag + 1
return np.logspace(from_exp, to_exp, num=num, base=10)
x_min = buffer
x_max = 5
xs = logspace(x_min, x_max, num_per_mag=200)
x_min += shift
x_max += shift
xs += shift
#xs = np.linspace(x_min, x_max, num=601)
print(len(xs))
ys = f(xs)
best_y = np.max(ys)
plt.figure(figsize=(16,4))
plt.plot(xs, ys, 'g-')
plt.margins(0.01, 0.1)
plt.title('Linear Scale')
plt.xlabel('x')
plt.ylabel('cost')
plt.show()
plt.figure(figsize=(16,4))
plt.plot(xs - shift, ys, 'g-') # have to revert the shift to plot with the log scale
plt.margins(0.1, 0.1)
plt.title('Logarithmic Scale')
plt.xlabel('x')
plt.axes().set_xscale('log')
plt.ylabel('cost')
plt.show()
bounds = [('x', x_min, x_max)]
op = tb.Optimiser(f, 'max', bounds, pre_phase_trials=2, settings_preset='default')
'''
op.latent_space = tm.NoLatentSpace()
# this function is very difficult to fit effectively, I found that the only way to make the GP behave is
# to use the domain knowledge that the length_scale can't be anywhere near the default maximum of 100,000
op.surrogate_factory = tm.SciKitGPSurrogate.Factory(gp_params=dict(
alpha = 1e-10, # larger => more noise. Default = 1e-10
kernel = 1.0 * gp.kernels.Matern(nu=2.5, length_scale_bounds=(1e-5, 10))+gp.kernels.WhiteKernel(),
), variable_iterations=lambda trial_num: 4 if (trial_num-2) % 3 == 0 else 1)
'''
op.surrogate = tm.GPySurrogate()
op.acquisition = tm.UCB(beta=2)
op_log = copy.deepcopy(op)
rec = tb.Recorder(op)
tg.OptimiserProgressBar(op)
op.run(max_trials=30)
tp.plot_error(rec, true_best=best_y);
tp.plot_timings(rec);
tp.interactive_plot_trial_1D(rec, param='x', true_objective=f)
zero_point = x_min - buffer # the function is not defined for any x <= zero point
op_log.latent_space = tm.ConstantLatentSpace(mappings={'x' : tm.LogMap(zero_point=zero_point)})
rec_log = tb.Recorder(op_log)
tg.OptimiserProgressBar(op_log)
op_log.run(max_trials=15)
tp.plot_error(rec_log, true_best=best_y);
tp.plot_timings(rec_log);
for l in [False, True]:
tp.plot_trial_1D(rec_log, param='x', trial_num=-1, true_objective=f, plot_in_latent_space=l)
tp.interactive_plot_trial_1D(rec_log, true_objective=f)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step5: Order of models
Step6: Notice the margin vs classification accuracy trade-off by tuning parameter C
Step7: By normalizing the three boundary vectors, discrepancies between intra-boundary angles have lowered.
Step8: Multiclass SVM
Step9: In the plot below we see this more complex model has also classified all data correctly.
Step10: Now for multiclass classification of NYC urban sound files!
|
<ASSISTANT_TASK:>
Python Code:
# Create the training data
# (these imports were missing from the original cell)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
np.random.seed(2)
X, y = make_blobs(n_samples=300,cluster_std=.25, centers=np.array([(-3,1),(0,2),(3,1)]))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50)
from sklearn.base import BaseEstimator, ClassifierMixin, clone
from numpy import linalg as L
class OneVsAllClassifier(BaseEstimator, ClassifierMixin):
    """
    One-vs-all classifier
    We assume that the classes will be the integers 0,..,(n_classes-1).
    We assume that the estimator provided to the class, after fitting, has a "decision_function" that
    returns the score for the positive class.
    """
def __init__(self, estimator, n_classes):
        """
        Constructed with the number of classes and an estimator (e.g. an
        SVM estimator from sklearn)
        @param estimator : binary base classifier used
        @param n_classes : number of classes
        """
self.n_classes = n_classes
self.estimators = [clone(estimator) for _ in range(n_classes)]
self.fitted = False
def fit(self, X, y=None):
        """
        This should fit one classifier for each class.
        self.estimators[i] should be fit on class i vs rest
        @param X: array-like, shape = [n_samples,n_features], input data
        @param y: array-like, shape = [n_samples,] class labels
        @return returns self
        """
for i in range(self.n_classes):
self.estimators[i].fit(X, y==i)
self.fitted = True
return self
def decision_function(self, X):
        """
        Returns the score of each input for each class. Assumes
        that the given estimator also implements the decision_function method (which sklearn SVMs do),
        and that fit has been called.
        @param X : array-like, shape = [n_samples, n_features] input data
        @return array-like, shape = [n_samples, n_classes]
        """
if not self.fitted:
raise RuntimeError("You must train classifer before predicting data.")
if not hasattr(self.estimators[0], "decision_function"):
raise AttributeError(
"Base estimator doesn't have a decision_function attribute.")
        classes_score = [est.decision_function(X) for est in self.estimators]
        score_ = np.column_stack(classes_score)
return score_
def predict(self, X):
        """
        Predict the class with the highest score.
        @param X: array-like, shape = [n_samples,n_features] input data
        @returns array-like, shape = [n_samples,] the predicted classes for each input
        """
predictions = np.zeros(X.shape[0])
decisions = self.decision_function(X)
for i in range(predictions.shape[0]):
predictions[i] = np.argmax( decisions[i] )
return predictions
#Here we test the OneVsAllClassifier
from sklearn import svm
svm_estimator = svm.LinearSVC(loss='hinge', fit_intercept=False, C=5)
clf_onevsall = OneVsAllClassifier(svm_estimator, n_classes=3)
clf_onevsall.fit(X,y)
for i in range(3) :
print("Coeffs %d"%i)
print(clf_onevsall.estimators[i].coef_) #Will fail if you haven't implemented fit yet
print( clf_onevsall.estimators[i].coef_/L.norm(clf_onevsall.estimators[i].coef_))
# create a mesh to plot in
h = .02 # step size in the mesh
x_min, x_max = min(X[:,0])-3,max(X[:,0])+3
y_min, y_max = min(X[:,1])-3,max(X[:,1])+3
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
mesh_input = np.c_[xx.ravel(), yy.ravel()]
Z = clf_onevsall.predict(mesh_input)
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.coolwarm, alpha=0.8)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.coolwarm)
from sklearn import metrics
metrics.confusion_matrix(y, clf_onevsall.predict(X))
# create a mesh to plot in
h = .02 # step size in the mesh
x_min, x_max = min(X[:,0])-3,max(X[:,0])+3
y_min, y_max = min(X[:,1])-3,max(X[:,1])+3
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
mesh_input = np.c_[xx.ravel(), yy.ravel()]
def f(parameter):
svm_estimator = svm.LinearSVC(loss='hinge', fit_intercept=False, C=parameter)
clf_onevsall = OneVsAllClassifier(svm_estimator, n_classes=3)
clf_onevsall.fit(X,y)
Z = clf_onevsall.predict(mesh_input)
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.coolwarm, alpha=0.2)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.coolwarm)
for c in [1, 50, 100, 200, 500, 1000]:
f(c)
soa = np.array([[0, 0, -0.76080386, -0.64898189], [0, 0,0.68477689, 0.72875278], [0, 0, 0.73358915, -0.67959323]])
A, B, C, D = zip(*(-1*soa) )
plt.figure()
ax = plt.gca()
ax.quiver(A, B, C, D, scale=1)
ax.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.coolwarm)
ax.set_xlim([-6, 6])
ax.set_ylim([-2, 4])
plt.draw()
plt.show()
def zeroOne(y,a) :
'''
Computes the zero-one loss.
@param y: output class
@param a: predicted class
@return 1 if different, 0 if same
'''
return int(y != a)
def featureMap(X,y,num_classes) :
'''
Computes the class-sensitive features.
@param X: array-like, shape = [n_samples,n_inFeatures] or [n_inFeatures,], input features for input data
@param y: a target class (in range 0,..,num_classes-1)
@return array-like, shape = [n_samples,n_outFeatures], the class sensitive features for class y
'''
#The following line handles X being a 1d-array or a 2d-array
num_samples, num_inFeatures = (1,X.shape[0]) if len(X.shape) == 1 else (X.shape[0],X.shape[1])
num_outFeatures = num_inFeatures*num_classes
output = np.zeros((num_samples, num_outFeatures))
    if X.ndim == 1:
        output[0, y*num_inFeatures:(y+1)*num_inFeatures] = X  # fixed: slice was hard-coded to 2 features
    else:
        for i in range(num_samples):
            output[i, y*num_inFeatures:(y+1)*num_inFeatures] = X[i]
return output
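# Quick illustrative check (not in the original notebook): the input features
# should land in the block belonging to class y, with zeros elsewhere.
print(featureMap(np.array([1.0, 2.0]), 1, 3))  # expect [[0. 0. 1. 2. 0. 0.]]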
def sgd(X, y, num_outFeatures, subgd, lam = 0.01, eta = 0.1, T = 10000):
'''
Runs subgradient descent, and outputs resulting parameter vector.
@param X: array-like, shape = [n_samples,n_features], input training data
@param y: array-like, shape = [n_samples,], class labels
@param num_outFeatures: number of class-sensitive features
    @param subgd: function taking x,y,w and giving the subgradient of the objective
    @param lam: l2 regularization parameter on the weight w
@param eta: learning rate for SGD
@param T: maximum number of iterations
@return: vector of weights
'''
num_samples = X.shape[0]
w_store = np.zeros(num_outFeatures)
w = np.zeros(num_outFeatures)
#w = np.array([0,0.1,0,0.1,0,0.1])
for t in range(T):
#eta /= (t+1)
#sample_ind = np.random.choice(num_samples, 1)
for sample_ind in range(num_samples):
w -= eta*subgd(X[sample_ind], y[sample_ind], w)
#if sample_ind == t:
# print(subgd(X[sample_ind], y[sample_ind], w))
w -= eta*lam*w
return w#_store/T
class MulticlassSVM(BaseEstimator, ClassifierMixin):
'''
Implements a Multiclass SVM estimator.
'''
def __init__(self, num_outFeatures, lam=0.01, num_classes=3, Delta=zeroOne, Psi=featureMap):
'''
Creates a MulticlassSVM estimator.
@param num_outFeatures: number of class-sensitive features produced by Psi
@param lam: l2 regularization parameter
@param num_classes: number of classes (assumed numbered 0,..,num_classes-1)
@param Delta: class-sensitive loss function taking two arguments (i.e., target margin)
@param Psi: class-sensitive feature map taking two arguments
'''
self.num_outFeatures = num_outFeatures
self.lam = lam
self.num_classes = num_classes
self.Delta = Delta
self.Psi = lambda X,y : Psi(X,y,num_classes)
self.fitted = False
def subgradient(self,x,y,w):
'''
Computes the subgradient at a given data point x,y
@param x: sample input
@param y: sample class
@param w: parameter vector
@return returns subgradient vector at given x,y,w
'''
        # generalized: use the configured class/feature counts instead of the
        # hard-coded 3-class / 6-feature values
        scores_ = []
        for i in range(self.num_classes):
            scores_.append(self.Delta(i, y) + np.dot(w, (self.Psi(x, i) - self.Psi(x, y)).reshape(self.num_outFeatures)))
        y_hat = np.argmax(scores_)
        return (self.lam*w + self.Psi(x, y_hat) - self.Psi(x, y)).reshape(self.num_outFeatures)
def fit(self,X,y,eta=0.1,T=10000):
'''
Fits multiclass SVM
@param X: array-like, shape = [num_samples,num_inFeatures], input data
@param y: array-like, shape = [num_samples,], input classes
@param eta: learning rate for SGD
@param T: maximum number of iterations
@return returns self
'''
self.coef_ = sgd(
X,y, self.num_outFeatures,self.subgradient,self.lam,eta,T) #when to use num_outFeatures?
self.fitted = True
return self
def decision_function(self, data):
'''
Returns the score on each input for each class. Assumes
that fit has been called.
@param X : array-like, shape = [n_samples, n_inFeatures]
@return array-like, shape = [n_samples, n_classes]
giving scores for each sample,class pairing
'''
if not self.fitted:
raise RuntimeError("You must train classifer before predicting data.")
samples = data.shape[0]
scores = np.zeros((samples, 3))
for s in range(samples):
for c in [0,1,2]:
scores[s,c] = np.dot(self.coef_, self.Psi(data[s], c).T)
return scores
def predict(self, X):
'''
Predict the class with the highest score.
@param X: array-like, shape = [n_samples, n_inFeatures], input data to predict
@return array-like, shape = [n_samples,], class labels predicted for each data point
'''
labels = np.zeros(X.shape[0])
table = self.decision_function(X)
#print(self.decision_function(X))
for i in range(X.shape[0]):
labels[i] = np.argmax( table[i,:] )
return labels
#the following code tests the MulticlassSVM and sgd
#will fail if MulticlassSVM is not implemented yet
est = MulticlassSVM(6,lam=0.01)
est.fit(X,y)
print("w:")
print(est.coef_)
#print(mesh_input.shape)
Z = est.predict(mesh_input)
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.coolwarm, alpha=0.8)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.coolwarm)
from sklearn import metrics
metrics.confusion_matrix(y, est.predict(X))
import librosa as l
path ='../UrbanSound8K/audio/fold1/'
y, sr = l.load(path + '7061-6-0-0.wav')#+'7061-6-0-0.wav')
import IPython
IPython.display.Audio(path + '7061-6-0-0.wav')
from librosa.display import waveplot, specshow
plt.figure()
plt.subplot(3, 1, 1)
waveplot(y, sr=sr)
plt.title('Waves')
D = l.amplitude_to_db(l.stft(y), ref=np.max)
plt.subplot(4, 2, 1)
specshow(D, y_axis='linear')
plt.colorbar(format='%+2.0f dB')
plt.title('Linear-frequency power spectrogram')
def features(file_path):
    try:
        y, sr = l.load(file_path, duration=5.0, res_type='scipy')
    except:
        print(file_path)
        return None  # skip unreadable files instead of crashing below
mfcc = l.feature.mfcc(y).mean(axis = 1).reshape(20)
#print(l.feature.delta(y).mean(), mfcc.shape)
features = np.append(mfcc,(l.feature.delta(y).mean()) ) #np.concatenate((l.feature.delta(y).mean(), mfcc ))
if features.shape[0] != 21:
#print(l.feature.delta(y).shape , l.feature.mfcc(y).shape)
raise Exception('sample w/ bad feature')
return features
#features(path + '7061-6-0-0.wav')
import pandas as pd
meta = pd.read_csv('../UrbanSound8K/metadata/UrbanSound8k.csv')
data_set = meta.sample(n=4000)
data_set.head()
array = np.zeros((4000, 21))
labels = np.zeros((4000,))
from os import listdir
i = 0
for folder in range(1,11):
path ='../UrbanSound8K/audio/fold%d/' %(folder)
for file in listdir(path):
if file in data_set['slice_file_name'].values:
labels[i] = data_set[data_set['slice_file_name'] == file]['classID'].values[0]
array[i] = features(path+file)
i += 1
mean = array.mean(axis=1)
array -= mean[:, np.newaxis]
array_normed = array / array.std(axis=1)[:, np.newaxis]  # fixed: std needs the same broadcast shape as the mean above
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
data_set['slice_file_name'], data_set['classID'], test_size=0.5, random_state=42)
X_train.shape, X_test.shape, y_train.shape
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Start submission procedure
Step2: please provide information on the contact person for this CORDEX data submission request
Step3: Type of submission
Step4: Requested general information
Step5: institute_id
Step6: model_id
Step7: experiment_id and time_period
Step8: Example file name
Step9: information on the grid_mapping
Step10: Does the grid configuration exactly follow the specifications in ADD2 (Table 1)
Step11: Please provide information on quality check performed on the data you plan to submit
Step12: Terms of use
Step13: Information on directory structure and data access path
Step14: Give the path where the data reside, for example
Step15: Exclude variable list
Step16: Uniqueness of tracking_id and creation_date
Step17: Variable list
Step18: Check your submission before submission
Step19: Save your form
Step20: officially submit your form
|
<ASSISTANT_TASK:>
Python Code:
from dkrz_forms import form_widgets
form_widgets.show_status('form-submission')
# initialize your CORDEX submission form template
from dkrz_forms import form_handler
from dkrz_forms import checks
my_email = "..." # example: sf.email = "Mr.Mitty@yahoo.com"
my_first_name = "..." # example: sf.first_name = "Harold"
my_last_name = "..." # example: sf.last_name = "Mitty"
my_keyword = "..." # example: sf.keyword = "mymodel_myrunid"
sf = form_handler.init_form("CORDEX",my_first_name,my_last_name,my_email,my_keyword)
sf.submission_type = "..." # example: sf.submission_type = "initial_version"
sf.institution = "..." # example: sf.institution = "Alfred Wegener Institute"
sf.institute_id = "..." # example: sf.institute_id = "AWI"
sf.model_id = "..." # example: sf.model_id = "AWI-HIRHAM5"
sf.experiment_id = "..." # example: sf.experiment_id = "evaluation"
# ["value_a","value_b"] in case of multiple experiments
sf.time_period = "..." # example: sf.time_period = "197901-201412"
# ["time_period_a","time_period_b"] in case of multiple values
sf.example_file_name = "..." # example: sf.example_file_name = "tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc"
# Please run this cell as it is to check your example file name structure
# to_do: implement submission_form_check_file function - output result (attributes + check_result)
form_handler.cordex_file_info(sf,sf.example_file_name)
sf.grid_mapping_name = "..." # example: sf.grid_mapping_name = "rotated_latitude_longitude"
sf.grid_as_specified_if_rotated_pole = "..." # example: sf.grid_as_specified_if_rotated_pole = "yes"
sf.data_qc_status = "..." # example: sf.data_qc_status = "QC2-CORDEX"
sf.data_qc_comment = "..." # any comment of quality status of the files
sf.terms_of_use = "..." # example: sf.terms_of_use = "unrestricted"
sf.directory_structure = "..." # example: sf.directory_structure = "compliant"
sf.data_path = "..." # example: sf.data_path = "mistral.dkrz.de:/mnt/lustre01/work/bm0021/k204016/CORDEX/archive/"
sf.data_information = "..." # ...any info where data can be accessed and transfered to the data center ... "
sf.exclude_variables_list = "..." # example: sf.exclude_variables_list=["bnds", "vertices"]
sf.uniqueness_of_tracking_id = "..." # example: sf.uniqueness_of_tracking_id = "yes"
sf.variable_list_day = [
"clh","clivi","cll","clm","clt","clwvi",
"evspsbl","evspsblpot",
"hfls","hfss","hurs","huss","hus850",
"mrfso","mrro","mrros","mrso",
"pr","prc","prhmax","prsn","prw","ps","psl",
"rlds","rlus","rlut","rsds","rsdt","rsus","rsut",
"sfcWind","sfcWindmax","sic","snc","snd","snm","snw","sund",
"tas","tasmax","tasmin","tauu","tauv","ta200","ta500","ta850","ts",
"uas","ua200","ua500","ua850",
"vas","va200","va500","va850","wsgsmax",
"zg200","zg500","zmla"
]
sf.variable_list_mon = [
"clt",
"evspsbl",
"hfls","hfss","hurs","huss","hus850",
"mrfso","mrro","mrros","mrso",
"pr","psl",
"rlds","rlus","rlut","rsds","rsdt","rsus","rsut",
"sfcWind","sfcWindmax","sic","snc","snd","snm","snw","sund",
"tas","tasmax","tasmin","ta200",
"ta500","ta850",
"uas","ua200","ua500","ua850",
"vas","va200","va500","va850",
"zg200","zg500"
]
sf.variable_list_sem = [
"clt",
"evspsbl",
"hfls","hfss","hurs","huss","hus850",
"mrfso","mrro","mrros","mrso",
"pr","psl",
"rlds","rlus","rlut","rsds","rsdt","rsus","rsut",
"sfcWind","sfcWindmax","sic","snc","snd","snm","snw","sund",
"tas","tasmax","tasmin","ta200","ta500","ta850",
"uas","ua200","ua500","ua850",
"vas","va200","va500","va850",
"zg200","zg500"
]
sf.variable_list_fx = [
"areacella",
"mrsofc",
"orog",
"rootd",
"sftgif","sftlf"
]
# simple consistency check report for your submission form
res = form_handler.check_submission(sf)
sf.sub['status_flag_validity'] = res['valid_submission']
form_handler.DictTable(res)
form_handler.form_save(sf)
#evaluate this cell if you want a reference to the saved form emailed to you
# (only available if you access this form via the DKRZ form hosting service)
form_handler.email_form_info()
# evaluate this cell if you want a reference (provided by email)
# (only available if you access this form via the DKRZ hosting service)
form_handler.email_form_info(sf)
form_handler.email_form_info(sf)
form_handler.form_submission(sf)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Question 1
Step2: Question 2
Step3: Question 3
Step5: Question 4
|
<ASSISTANT_TASK:>
Python Code:
# Imports the functionality that we need to display YouTube videos in a Jupyter Notebook.
# You need to run this cell before you run ANY of the YouTube videos.
from IPython.display import YouTubeVideo
# Display a specific YouTube video, with a given width and height.
# WE STRONGLY RECOMMEND that you can watch the video in full-screen mode
# (much higher resolution) by clicking the little box in the bottom-right
# corner of the video.
YouTubeVideo("cCLB1sNpNYo",width=640,height=360)
# write your program here. Don't forget that you execute your program by holding
# down 'shift' and pressing 'enter'
# Don't forget to watch the video in full-screen mode!
YouTubeVideo("yv7klK57Ezc",width=640,height=360)
# write your program here, using multiple cells if necessary (adding extra cells using
# the 'Cell' menu at the top of this notebook). Don't forget that you can execute
# your program by holding down 'shift' and pressing 'enter' in each cell!
# Don't forget to watch the video in full-screen mode!
YouTubeVideo("TJ_bGrigAMg",width=640,height=360)
# write your program here, using multiple cells if necessary (adding extra cells using
# the 'Cell' menu at the top of this notebook). Don't forget that you can execute
# your program by holding down 'shift' and pressing 'enter' in each cell!
# Don't forget to watch the video in full-screen mode!
YouTubeVideo("rVhV_9YZZXw",width=640,height=360)
from IPython.display import HTML
HTML(
<iframe
src="https://docs.google.com/forms/d/e/1FAIpQLSc3fbJob95nAMX4KoYdhql4aco4i2_nagMHUqazRRt8g0JMVA/viewform?embedded=true"
width="80%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: There are two ways to use the integrate function. In one line, like integrate(x,(x,0,1)) or by naming an expression and then integrating it over a range
Step2: So this tells us the normalization constant should be $c=\frac{1}{\sqrt{a}}$. Check that it is normalized if we do that
Step3: Because psi is a real function, we can calculate expectation values by integrating over $x$ or $x^2$ with psi**2
Step4: Example 10.2
Step5: Use the diff function to take a derivative of a symbolic expression. For example
Step6: Example 10.3
Step7: Which agrees with the book.
Step8: So $\Delta x^2 = a^2 - 0^2$ therefore $\Delta x = a$
Step9: This agrees with the book after we notice that we had to force $p$ to be positive in order to get the integral to converge. The book has $|p|$ in the argument of the exponent to reflect this constraint.
|
<ASSISTANT_TASK:>
Python Code:
from sympy import *
init_printing(use_unicode=True)
# SymPy works better if you specify what letters are symbols:
x, y, z = symbols('x y z', real=True)
# notice we can also put some restrictions on the symbols:
a, c = symbols('a c', nonzero=True, real=True)
integrate?
integrate(x,(x,0,1))
integrate(x**2,(x,0,1))
A = (c*cos((pi*x)/(2.0*a)))**2
A.integrate((x,-a,a))
psi = 1/sqrt(a)*cos((pi*x)/(2.0*a)) # notice we can name the expression something useful.
B = psi**2
B.integrate( (x,-a,a), conds='none')
C = x*psi**2
C.integrate( (x,-a,a), conds='none')
D = x**2 * psi**2
E = D.integrate( (x,-a,a), conds='none')
E
E.simplify() # this is a useful method!
E.n() # the .n() method approximates the numerical part. You can look at the full expression below.
h = Symbol('hbar', real=True, positive=True)
diff(x**2, x)
# Solution
-1j*h*diff( 1/a*cos((pi*x)/(2*a)) ,x)
# Solution
B1 = (pi*h/(2*a))**2 * (cos((pi*x)/(2*a)))**2
B1.integrate( (x,-a,a), conds='none' )
p = Symbol('p', real=True)
# Solution
A = integrate(1/sqrt(2*pi*a*h)*exp(-I*p*x/h)*cos((pi*x)/(2*a)),(x,-a,a), conds='none')
# Solution
A
psi_p = sqrt(2*a*pi/h) * 2/(pi**2 - (2*p*a/h)**2) * cos(p*a/h)
psi_p
psi_p == sqrt(2*a*pi/h)*2/(pi**2 - (2*p*a/h)**2) * cos(p*a/h)
x, y, z = symbols('x y z', real=True)
a, c = symbols('a c', nonzero=True, real=True, positive=True)
psi = c*1/(a**2 + x**2) # define the wavefunction with c constant
int1 = integrate(psi*psi,(x,-oo,oo), conds='none') # integrate psi^2
solutions = solve(int1 - 1,c) # solve for c, this returns a list of solutions
c2 = simplify(solutions[0]) # simplify the solution for c:
c2
psi2 = c2/c*psi
psi2
integrate(psi2 * x * psi2,(x,-oo,oo))
integrate(psi2 * x**2 * psi2,(x,-oo,oo))
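# Small follow-up (consistent with the text above): Delta x = sqrt(<x^2> - <x>^2) = a.
expect_x = integrate(psi2 * x * psi2, (x,-oo,oo))
expect_x2 = integrate(psi2 * x**2 * psi2, (x,-oo,oo))
sqrt(expect_x2 - expect_x**2)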
p = symbols('p', nonzero=True, real=True, positive=True)
B = integrate(sqrt(1/(2*pi*h))*exp(-I*p*x/h)*psi2,(x,-oo,oo))
B
B.simplify()
psi = 1/sqrt(2*a)*sech(x/a)
dpsi = diff(psi,x)
dpsi
ddpsi = diff(dpsi,x)
ddpsi = ddpsi.simplify()
ddpsi
expect_p = integrate(psi*ddpsi,(x,-oo,oo))
expect_p
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: ```
Step5: a) Example with All probabilities 50 - 50 (input signal = noise)
Step6: b) Example with some clear input signal
Step7: Example with some clear input signal, half noise
Step8: We assume that there are 3 different values for the dependent variable
Step9: We have 4 categories for dependent var
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
def get_features_values_combinations(list_a, list_b):
    """
    Returns a list of combinations of the values of the two lists,
    e.g. from the lists
    list_a = ['L', 'M', 'W']
    list_b = ['F', 'I', 'S']
    we get the combinations:
    [('L', 'F'), ('L', 'I'), ('L', 'S'), ('M', 'F'), ('M', 'I') ...
    """
import itertools
return list(itertools.product(list_a, list_b))
def generate_dataset_with_probabilities(n_datapoints,
x_values,
y_values,
y_probabilites=None):
    """
    Return a dataset of given possible values with given possible probabilities
    :n_datapoints: The number of datapoints we want to generate
    :x_values: 1-D array e.g. ['Man', 'Woman']
    :y_values: 1-D array e.g. ['meat', 'fish', 'vegetables']
    :y_probabilites: 1-D array-like e.g. [0.5, 0.25, 0.25]
        The probabilities associated with each entry in y_values.
        If not given, the sample assumes a uniform distribution over all y_values.
    """
import numpy as np
datapoints = []
for i in range(n_datapoints):
        datapoint = (x_values,) + (np.random.choice(y_values, p=y_probabilites),)
datapoints.append(datapoint)
return datapoints
def generate_datapoints(n_datapoints, var_x, var_y, cases_probabilites):
    # Generate the datapoints according to the desired distribution
datapoints = []
all_cases = get_features_values_combinations(var_x, var_y)
n_cases = len(all_cases)
n_datapoints_per_case = int(1. * n_datapoints / n_cases)
for case in all_cases:
sex = case[:-1][0]
case_probabilites = cases_probabilites[sex]
datapoints.extend(
generate_dataset_with_probabilities(n_datapoints_per_case,
sex,
var_y,
y_probabilites=case_probabilites))
N_datapoints = len(datapoints)
print "All the combinations we can have are: ", all_cases
print "N total datapoints: ", N_datapoints
return np.array(datapoints)
def get_transformed_dataframe(df):
    """Transform the DataFrame accordingly (dummy vars / encoding)"""
# And transform the categorical variables
from sklearn import preprocessing
df_transformed_X = pd.get_dummies(df[var_x_name])
encoder = preprocessing.LabelEncoder()
encoder.fit(df[var_y_name].values)
df_transformed_y = pd.DataFrame(data = encoder.transform(df[var_y_name].values), columns=[var_y_name])
df_transformed = pd.concat([df_transformed_X, df_transformed_y], axis=1)
df_transformed.head(2)
return df_transformed
# Independent Variable
var_x_name = 'sex'
var_x = ['Man', 'Woman']
# Dependent Variable
var_y_name = 'status'
var_y = ['Died', 'Alive']
# We assume that we do not have any strong signal in our input
# e.g. for men, whether smoking or not, 50% die
# the same for women
cases_probabilites = {
'Man' : [0.5, 0.5],
'Woman' : [0.5, 0.5]}
datapoints = generate_datapoints(1000, var_x, var_y, cases_probabilites)
# Cross check that the datapoints we generated follow the frequency we thought
all_cases = get_features_values_combinations(var_x, var_y)
for case in all_cases:
print "Frequency of ", case, ": ", 1. * len([x for x in datapoints if tuple(x.tolist()) == case]) / len(datapoints)
df = pd.DataFrame(data = datapoints, columns=[var_x_name, var_y_name])
df_transformed = get_transformed_dataframe(df)
# Split our sample to train / set
from sklearn.model_selection import train_test_split
X = df_transformed[['Man', 'Woman']].values
y = df_transformed['status'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
# Apply model
from sklearn import linear_model
logistic = linear_model.LogisticRegression()
print('LogisticRegression score: %f'
% logistic.fit(X_train, y_train).score(X_test, y_test))
# Here we assume that Men are affected a lot from smoking 90% die while women not, only 10% die
cases_probabilites = {
'Man' : [0.9, 0.1],
'Woman' : [0.1, 0.9]}
datapoints = generate_datapoints(1000, var_x, var_y, cases_probabilites)
# Cross check that the datapoints we generated follow the frequency we thought
all_cases = get_features_values_combinations(var_x, var_y)
for case in all_cases:
print "Frequency of ", case, ": ", 1. * len([x for x in datapoints if tuple(x.tolist()) == case]) / len(datapoints)
df = pd.DataFrame(data = datapoints, columns=[var_x_name, var_y_name])
df_transformed = get_transformed_dataframe(df)
# Split our sample to train / set
from sklearn.model_selection import train_test_split
X = df_transformed[['Man', 'Woman']].values
y = df_transformed['status'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
# Apply model
from sklearn import linear_model
logistic = linear_model.LogisticRegression()
print('LogisticRegression score: %f'
% logistic.fit(X_train, y_train).score(X_test, y_test))
# We assume that we do not have any strong signal in our input
# e.g. for men, whether smoking or not, 50% die
# the same for women
cases_probabilites = {
'Man' : [0.5, 0.5],
'Woman' : [0.5, 0.5]}
datapoints = generate_datapoints(1000, var_x, var_y, cases_probabilites)
# Cross check that the datapoints we generated follow the frequency we thought
all_cases = get_features_values_combinations(var_x, var_y)
for case in all_cases:
print "Frequency of ", case, ": ", 1. * len([x for x in datapoints if tuple(x.tolist()) == case]) / len(datapoints)
df = pd.DataFrame(data = datapoints, columns=[var_x_name, var_y_name])
df_transformed = get_transformed_dataframe(df)
# Split our sample to train / set
from sklearn.model_selection import train_test_split
X = df_transformed[['Man', 'Woman']].values
y = df_transformed['status'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
# Apply model
from sklearn import linear_model
logistic = linear_model.LogisticRegression()
print('LogisticRegression score: %f'
% logistic.fit(X_train, y_train).score(X_test, y_test))
var_y = ['Died', 'Alive', 'Almost_dead']
cases_probabilites = {
'Man' : [0.333, 0.333, 1-0.333-.333],
'Woman' : [0.333, 0.333, 1-.333-.333]}
datapoints = generate_datapoints(10000, var_x, var_y, cases_probabilites)
# Cross check that the datapoints we generated follow the frequency we thought
all_cases = get_features_values_combinations(var_x, var_y)
for case in all_cases:
print "Frequency of ", case, ": ", 1. * len([x for x in datapoints if tuple(x.tolist()) == case]) / len(datapoints)
df = pd.DataFrame(data = datapoints, columns=[var_x_name, var_y_name])
df_transformed = get_transformed_dataframe(df)
# Split our sample to train / set
from sklearn.model_selection import train_test_split
X = df_transformed[['Man', 'Woman']].values
y = df_transformed['status'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
# Apply model
from sklearn import linear_model
logistic = linear_model.LogisticRegression()
print('LogisticRegression score: %f'
% logistic.fit(X_train, y_train).score(X_test, y_test))
var_y = ['Died', 'Alive', 'Almost_dead', 'Almost_alive']
cases_probabilites = {
'Man' : [0.25, 0.25, 0.25, 0.25],
'Woman' : [0.25, 0.25, 0.25, 0.25]
}
datapoints = generate_datapoints(10000, var_x, var_y, cases_probabilites)
# Cross check that the datapoints we generated follow the frequency we thought
all_cases = get_features_values_combinations(var_x, var_y)
for case in all_cases:
print "Frequency of ", case, ": ", 1. * len([x for x in datapoints if tuple(x.tolist()) == case]) / len(datapoints)
df = pd.DataFrame(data = datapoints, columns=[var_x_name, var_y_name])
df_transformed = get_transformed_dataframe(df)
# Split our sample to train / set
from sklearn.model_selection import train_test_split
X = df_transformed[['Man', 'Woman']].values
y = df_transformed['status'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
# Apply model
from sklearn import linear_model
logistic = linear_model.LogisticRegression()
print('LogisticRegression score: %f'
% logistic.fit(X_train, y_train).score(X_test, y_test))
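# Hedged follow-up (standard sklearn API, not in the original): with pure-noise
# inputs the predicted class probabilities should be roughly uniform over the 4 classes.
print(logistic.predict_proba(X_test[:5]))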
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Line plot of sunspot data
Step2: Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts.
Step3: Make a line plot showing the sunspot count as a function of year.
Step4: Describe the choices you have made in building this visualization and how they make it effective.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os
assert os.path.isfile('yearssn.dat')
data=np.loadtxt('yearssn.dat')
ssc=data[:,1]
year=data[:,0]
assert len(year)==315
assert year.dtype==np.dtype(float)
assert len(ssc)==315
assert ssc.dtype==np.dtype(float)
f=plt.figure(figsize=(25,4))
plt.plot(year,ssc)
plt.title("Sun Spots Seen Per Year Since 1700")
plt.xlabel("Year")
plt.ylabel("Number of Sun Spots Seen")
plt.xlim(1700,2015)
plt.ylim(0,180)
assert True # leave for grading
# YOUR CODE HERE
f=plt.figure(figsize=(25,4))
seventeen=data[:100,:]
eighteen=data[100:200,:]
nineteen=data[200:300,:]
two=data[300:,:]
plt.subplot(2,2,1)
plt.plot(seventeen[:,0],seventeen[:,1])
plt.title("Sun Spots seen per Year During the 1700's")
plt.subplot(2,2,2)
plt.plot(eighteen[:,0],eighteen[:,1])
plt.title("Sun Spots seen per Year During the 1800's")
plt.subplot(2,2,3)
plt.plot(nineteen[:,0],nineteen[:,1])
plt.title("Sun Spots seen per Year During the 1900's")
plt.subplot(2,2,4)
plt.plot(two[:,0],two[:,1])
plt.title("Sun Spots seen per Year During the 2000's")
plt.tight_layout()
assert True # leave for grading
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: So the acceptance fraction is about 84%, which seems too high. It should be closer to 23%. So we should increase the typical step size.
|
<ASSISTANT_TASK:>
Python Code:
import h5py
f = h5py.File('../data/mc.hdf5', mode='r')
list(f.keys())
d = f['samples']
list(d.attrs)
d.attrs['acceptance']
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
%config InlineBackend.figure_format = 'svg'
import emcee
ac = emcee.autocorr
x = d[:,0]
x.shape
acf = ac.function(d[:,5])
plt.plot(acf)
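# Hedged follow-up (emcee 2.x autocorr API, matching ac.function above; emcee 3
# renamed these to function_1d / integrated_time with a different signature):
tau = ac.integrated_time(d[:, 5])
print(tau)  # integrated autocorrelation time for this parameter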
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Flux Correction
Step7: 3. Key Properties --> Genealogy
Step8: 3.2. CMIP3 Parent
Step9: 3.3. CMIP5 Parent
Step10: 3.4. Previous Name
Step11: 4. Key Properties --> Software Properties
Step12: 4.2. Code Version
Step13: 4.3. Code Languages
Step14: 4.4. Components Structure
Step15: 4.5. Coupler
Step16: 5. Key Properties --> Coupling
Step17: 5.2. Atmosphere Double Flux
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Step19: 5.4. Atmosphere Relative Winds
Step20: 6. Key Properties --> Tuning Applied
Step21: 6.2. Global Mean Metrics Used
Step22: 6.3. Regional Metrics Used
Step23: 6.4. Trend Metrics Used
Step24: 6.5. Energy Balance
Step25: 6.6. Fresh Water Balance
Step26: 7. Key Properties --> Conservation --> Heat
Step27: 7.2. Atmos Ocean Interface
Step28: 7.3. Atmos Land Interface
Step29: 7.4. Atmos Sea-ice Interface
Step30: 7.5. Ocean Seaice Interface
Step31: 7.6. Land Ocean Interface
Step32: 8. Key Properties --> Conservation --> Fresh Water
Step33: 8.2. Atmos Ocean Interface
Step34: 8.3. Atmos Land Interface
Step35: 8.4. Atmos Sea-ice Interface
Step36: 8.5. Ocean Seaice Interface
Step37: 8.6. Runoff
Step38: 8.7. Iceberg Calving
Step39: 8.8. Endoreic Basins
Step40: 8.9. Snow Accumulation
Step41: 9. Key Properties --> Conservation --> Salt
Step42: 10. Key Properties --> Conservation --> Momentum
Step43: 11. Radiative Forcings
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Step45: 12.2. Additional Information
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Step47: 13.2. Additional Information
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Step49: 14.2. Additional Information
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Step51: 15.2. Additional Information
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Step53: 16.2. Additional Information
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Step55: 17.2. Equivalence Concentration
Step56: 17.3. Additional Information
Step57: 18. Radiative Forcings --> Aerosols --> SO4
Step58: 18.2. Additional Information
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Step60: 19.2. Additional Information
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Step62: 20.2. Additional Information
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Step64: 21.2. Additional Information
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Step66: 22.2. Aerosol Effect On Ice Clouds
Step67: 22.3. Additional Information
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Step69: 23.2. Aerosol Effect On Ice Clouds
Step70: 23.3. RFaci From Sulfate Only
Step71: 23.4. Additional Information
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Step73: 24.2. Additional Information
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Step77: 25.4. Additional Information
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Step81: 26.4. Additional Information
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Step83: 27.2. Additional Information
Step84: 28. Radiative Forcings --> Other --> Land Use
Step85: 28.2. Crop Change Only
Step86: 28.3. Additional Information
Step87: 29. Radiative Forcings --> Other --> Solar
Step88: 29.2. Additional Information
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'sandbox-3', 'toplevel')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Face Generation
Step3: Explore the Data
Step5: CelebA
Step7: Preprocess the Data
Step10: Input
Step13: Discriminator
Step16: Generator
Step19: Loss
Step22: Optimization
Step25: Neural Network Training
Step27: Training
Step29: MNIST
Step31: CelebA
|
<ASSISTANT_TASK:>
Python Code:
data_dir = './data'
# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
#data_dir = '/input'
DON'T MODIFY ANYTHING IN THIS CELL
import helper
helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
show_n_images = 25
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
show_n_images = 25
DON'T MODIFY ANYTHING IN THIS CELL
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(mnist_images, 'RGB'))
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer. You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
import problem_unittests as tests
def model_inputs(image_width, image_height, image_channels, z_dim):
Create the model inputs
:param image_width: The input image width
:param image_height: The input image height
:param image_channels: The number of image channels
:param z_dim: The dimension of Z
:return: Tuple of (tensor of real input images, tensor of z data, learning rate)
# TODO: Implement Function
x = tf.placeholder(dtype=tf.float32,shape=(None,image_width,image_height,image_channels),name="inputs_x")
z = tf.placeholder(dtype=tf.float32,shape=(None,z_dim),name="z_dim")
learning_rate = tf.placeholder(dtype=tf.float32,shape=(),name="learning_rate")
return x, z, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def discriminator(images, reuse=False):
Create the discriminator network
:param image: Tensor of input image(s)
:param reuse: Boolean if the weights should be reused
:return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
# TODO: Implement Function
alpha=0.2
with tf.variable_scope("discriminator",reuse=reuse):
conv_1 = tf.layers.conv2d(images, 64, 4, strides=2, padding='same')
conv_1_bn = tf.layers.batch_normalization(conv_1, training=True)
relu1 = tf.maximum(alpha * conv_1_bn, conv_1_bn)
# 14x14x64
conv_2 = tf.layers.conv2d(relu1, 128, 4, strides=2, padding='same')
conv_2_bn = tf.layers.batch_normalization(conv_2, training=True)
relu2 = tf.maximum(alpha * conv_2_bn, conv_2_bn)
# 7x7x128
conv_3 = tf.layers.conv2d(relu2, 256, 4, strides=2, padding='same')
conv_3_bn = tf.layers.batch_normalization(conv_3, training=True)
relu3 = tf.maximum(alpha * conv_3_bn, conv_3_bn)
# 4x4x256
flat = tf.reshape(relu3, (-1, 4*4*256))
logits = tf.layers.dense(flat, 1)
out = tf.sigmoid(logits)
return out, logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_discriminator(discriminator, tf)
def generator(z, out_channel_dim, is_train=True):
Create the generator network
:param z: Input z
:param out_channel_dim: The number of channels in the output image
:param is_train: Boolean if generator is being used for training
:return: The tensor output of the generator
# TODO: Implement Function
alpha=0.2
with tf.variable_scope('generator',reuse=not(is_train)):
# Reshape it to start the convolutional stack
fc = tf.layers.dense(z, 4*4*512)
x = tf.reshape(fc, (-1, 4, 4, 512))
x = tf.layers.batch_normalization(x, training=is_train)
relu_x = tf.maximum(alpha * x, x)
# 4x4x512 now
#conv2d_1_t = tf.nn.conv2d_transpose(relu_x, kernel, output_shape=[64,7,7,256], strides=[1,2,2,1], padding='SAME')
conv2d_1_t = tf.layers.conv2d_transpose(relu_x, 256,4, strides=1, padding='valid')
conv2d_1_t_bn = tf.layers.batch_normalization(conv2d_1_t, training=is_train)
relu1 = tf.maximum(alpha * conv2d_1_t_bn, conv2d_1_t_bn)
# 7x7x256 now
conv2d_2_t = tf.layers.conv2d_transpose(relu1, 128, 4, strides=2, padding='same')
conv2d_2_t_bn = tf.layers.batch_normalization(conv2d_2_t, training=is_train)
relu2 = tf.maximum(alpha * conv2d_2_t_bn, conv2d_2_t_bn)
# 14x14x128
# Output layer
logits = tf.layers.conv2d_transpose(relu2, out_channel_dim, 4, strides=2, padding='same')
# 28x28x3 now
out = tf.tanh(logits)
return out
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_generator(generator, tf)
def model_loss(input_real, input_z, out_channel_dim):
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
# TODO: Implement Function
g_model = generator(input_z, out_channel_dim)
d_model_real, d_logits_real = discriminator(input_real)
d_model_fake, d_logits_f = discriminator(g_model, reuse=True)
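    # one-sided label smoothing: real labels are set to 0.9 (i.e. 1 - 0.1) to stabilise discriminator training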
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_logits_real)*(1 - 0.1)))
d_loss_f = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_f, labels=tf.zeros_like(d_logits_f)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_f, labels=tf.ones_like(d_logits_f)))
d_loss = d_loss_real + d_loss_f
return d_loss, g_loss
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_loss(model_loss)
def model_opt(d_loss, g_loss, learning_rate, beta1):
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
# TODO: Implement Function
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_opt(model_opt, tf)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
Show example output for the generator
:param sess: TensorFlow session
:param n_images: Number of Images to display
:param input_z: Input Z Tensor
:param out_channel_dim: The number of channels in the output image
:param image_mode: The mode to use for images ("RGB" or "L")
cmap = None if image_mode == 'RGB' else 'gray'
z_dim = input_z.get_shape().as_list()[-1]
example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])
samples = sess.run(
generator(input_z, out_channel_dim, False),
feed_dict={input_z: example_z})
images_grid = helper.images_square_grid(samples, image_mode)
pyplot.imshow(images_grid, cmap=cmap)
pyplot.show()
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
Train the GAN
:param epoch_count: Number of epochs
:param batch_size: Batch Size
:param z_dim: Z dimension
:param learning_rate: Learning Rate
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:param get_batches: Function to get batches
:param data_shape: Shape of the data
:param data_image_mode: The image mode to use for images ("RGB" or "L")
# TODO: Build Model
input_real,input_z, lr = model_inputs(*data_shape[1:], z_dim)
out_channel_dim = 1
if data_image_mode == "RGB" :
out_channel_dim = 3
d_loss, g_loss = model_loss(input_real,input_z,out_channel_dim)
d_train_opt, g_train_opt = model_opt(d_loss,g_loss,lr,beta1)
steps = 0
start = time.clock()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epoch_count):
for batch_images in get_batches(batch_size):
# TODO: Train Model
steps += 1
#print(batch_images)
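                # rescale the batch (assumed to be in [-0.5, 0.5] from the helper) to [-1, 1] to match the generator's tanh output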
batch_images = batch_images * 2
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_dim))
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z, lr :learning_rate})
_ = sess.run(g_train_opt, feed_dict={input_real: batch_images, input_z: batch_z, lr :learning_rate})
if steps % 100 == 0:
end = time.clock()
train_loss_d = d_loss.eval({input_real: batch_images, input_z: batch_z})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Time: {:.4f}s ...Epoch {}/{}...setp:{}...".format((end - start),epoch_i+1, epoch_count,steps),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
start = time.clock()
if steps % 500 == 0:
show_generator_output(sess,4,input_z,out_channel_dim,data_image_mode)
show_generator_output(sess,4,input_z,out_channel_dim,data_image_mode)
import time
batch_size = 32
z_dim = 100
learning_rate = 0.001
beta1 = 0.5
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
epochs = 2
mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
mnist_dataset.shape, mnist_dataset.image_mode)
# reuse the hyperparameters that worked for the MNIST run
batch_size = 32
z_dim = 100
learning_rate = 0.001
beta1 = 0.5
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
epochs = 1
celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
celeba_dataset.shape, celeba_dataset.image_mode)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Jupyter Notebook is a web-based application that enables users to create documents that combine live code with narrative text, equations, images, visualizations and HTML/JavaScript widgets.
Step2: Inline plotting
Step3: Seamless access to the system shell
Step4: Narrative text and equations
Step5: Images
Step6: This representation is displayed if the object is returned from an expression
Step7: Or you can manually display the object using display
Step9: HTML
Step10: JavaScript
Step11: LaTeX
Step12: When a symbolic expression is passed to display or returned from an expression, the LaTeX representation is computed and displayed in the Notebook
Step13: nbviewer
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import display, Image, HTML
from talktools import website, nbviewer
2+2
import math
math.atan?
%pylab inline
plot(rand(50))
!ls -al
from IPython.display import display
from IPython.display import Image
i = Image("images/jupyter_logo.png")
print(i)
i
display(i)
from IPython.display import HTML
s = """<table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>"""
h = HTML(s)
display(h)
from IPython.display import Javascript
display(Javascript("alert('hi');"))
from __future__ import division
from sympy import *
x, y, z = symbols("x y z")
init_printing(use_latex='mathjax')
Rational(3,2)*pi + exp(I*x) / (x**2 + y)
(1/cos(x)).series(x, 0, 12)
website('https://nbviewer.jupyter.org')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Change data type
Step2: Rename the categories
Step3: Values in the data frame have not changed
Step4: tabulate Department, Name, and YearsOfService, by Grade
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
file_name_string = 'C:/Users/Charles Kelly/Desktop/Exercise Files/02_07/Begin/EmployeesWithGrades.xlsx'
employees_df = pd.read_excel(file_name_string, 'Sheet1', index_col=None, na_values=['NA'])
employees_df
employees_df["Grade"] = employees_df["Grade"].astype("category")
employees_df["Grade"].cat.categories = ["excellent", "good", "acceptable", "poor", "unacceptable"]
employees_df
employees_df.groupby('Grade').count()
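# a more compact view, if only the number of employees per grade is needed
employees_df['Grade'].value_counts()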
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating a dataset
Step2: Q2
Step3: a is the covariance matrix.
Step4: Q4
Step5: Computing the square root
Step6: It is almost the identity matrix, up to numerical rounding errors.
Step7: Q8
Step8: We check that we did not make a mistake.
Step9: Decorrelation
Step10: Q9
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
import random
import numpy as np
def combinaison():
    x = random.gauss(0,1) # generates a random number
    y = random.gauss(0,1) # drawn from a normal distribution
    z = random.gauss(0,1) # with zero mean and unit variance
x2 = x
y2 = 3*x + y
z2 = -2*x + y + 0.2*z
return [x2, y2, z2]
li = [ combinaison () for i in range (0,100) ]
mat = np.matrix(li)
mat[:5]
npm = mat
t = npm.transpose ()
a = t @ npm
a /= npm.shape[0]
a
cov = a
var = np.array([cov[i,i]**(-0.5) for i in range(cov.shape[0])])
var.resize((3,1))
varvar = var @ var.transpose()
varvar
cor = np.multiply(cov, varvar)
cor
def correlation(npm):
t = npm.transpose ()
a = t @ npm
a /= npm.shape[0]
    var = np.array([a[i,i]**(-0.5) for i in range(a.shape[0])])
    var.resize((a.shape[0],1))
    varvar = var @ var.transpose()
    return np.multiply(a, varvar)
correlation(npm)
L,P = np.linalg.eig(a)
P.transpose() @ P
np.diag(L)
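# sanity check: reconstructing the matrix from its eigendecomposition should give back a (up to rounding)
P @ np.diag(L) @ P.transpose() - a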
def square_root_matrix(M):
L,P = np.linalg.eig(M)
L = L ** 0.5
root = P @ np.diag(L) @ P.transpose()
return root
root = square_root_matrix(cov)
root
root @ root - cov
np.linalg.inv(cov)
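# decorrelation: multiplying the data by the inverse of the square root of the covariance
# matrix should yield variables whose correlation matrix is close to the identity
npm_decorrelated = npm @ np.linalg.inv(root)
correlation(npm_decorrelated)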
def simulation(N, cov):
    # simulate a sample of N observations of correlated variables
    # N : number of observations to draw
    # cov : covariance matrix to reproduce
    root = square_root_matrix(cov)
    M = np.random.normal(0, 1, (N, cov.shape[0])) @ root
    return M
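# sanity check: the empirical covariance of a large simulated sample
# should be close to the target covariance matrix
sample = simulation(10000, cov)
sample.transpose() @ sample / sample.shape[0]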
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Examples
Step2: Their product is $(PQ)(X)$, of degree $n+m=8$
Step3: If we evaluate both $P(X)$ and $Q(X)$ on $n+m+1$ different points, $\lambda_i \in \mathbb{R}$ or $\in\mathbb{N}$, then we can fit a polynomial of degree $n+m = \delta(PQ)$ on these sampling points, and by uniqueness (thanks to the Fundamental Theorem of Algebra), it will be equal to $(PQ)(X)$.
Step4: Naive interpolation values
Step5: Ok, at least it seems to work!
Step6: Chebyshev nodes as interpolation values
Step7: Ok, at least it seems to work!
Step8: Benchmark
Step9: Of course, our implementation is slower.
Step10: On a larger example
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
np.version.full_version
P = [ 1, 0, 1, 1]
n = len(P) - 1
P, n
Q = [1, 0, 0, 1, 0, 0]
m = len(Q) - 1
Q, m
PQ = np.polymul(P, Q)
d = len(PQ) - 1
PQ, d
assert d == n + m
lambdas = np.arange(0, d + 1)
lambdas
values_P = np.polyval(P, lambdas)
values_P
values_Q = np.polyval(Q, lambdas)
values_Q
values_PQ = values_P * values_Q
values_PQ
PQ_sampled = np.polyfit(lambdas, values_PQ, d)
PQ_sampled
PQ
np.asarray(np.round(PQ_sampled), dtype=int)
np.max(np.abs(PQ_sampled)[np.abs(PQ_sampled) < 0.9])
lambdas = np.cos(np.pi * (2 * np.arange(1, 2 + d) - 1) / (2 * d))
lambdas
values_P = np.polyval(P, lambdas)
values_P
values_Q = np.polyval(Q, lambdas)
values_Q
values_PQ = values_P * values_Q
values_PQ
PQ_sampled2 = np.polyfit(lambdas, values_PQ, d)
PQ_sampled2
PQ
np.asarray(np.round(PQ_sampled2), dtype=int)
np.max(np.abs(PQ_sampled2)[np.abs(PQ_sampled2) < 0.9])
def mypolymul(P, Q):
n = len(P) - 1
m = len(Q) - 1
d = n + m
lambdas = np.cos(np.pi * (2 * np.arange(1, 2 + d) - 1) / (2 * d))
values_P = np.polyval(P, lambdas)
values_Q = np.polyval(Q, lambdas)
values_PQ = values_P * values_Q
PQ_sampled = np.polyfit(lambdas, values_PQ, d)
# return PQ_sampled
return np.asarray(np.round(PQ_sampled), dtype=int)
np.polymul(P, Q)
mypolymul(P, Q)
import warnings
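# np.polyfit emits RankWarning when the Vandermonde system is poorly conditioned;
# silence it so the timing runs below stay readable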
warnings.simplefilter('ignore', np.RankWarning)
%timeit np.polymul(P, Q)
%timeit mypolymul(P, Q)
def random_polynomial(d=10, maxcoef=1):
return np.random.randint(low=-maxcoef, high=maxcoef+1, size=d+1)
P = random_polynomial()
Q = random_polynomial()
P, Q
%timeit np.polymul(P, Q)
np.polymul(P, Q)
%timeit mypolymul(P, Q)
mypolymul(P, Q)
assert np.all(np.polymul(P, Q) == mypolymul(P, Q))
d = 100
maxcoef = 1
%timeit np.polymul(random_polynomial(d=d, maxcoef=maxcoef), random_polynomial(d=d, maxcoef=maxcoef))
%timeit mypolymul(random_polynomial(d=d, maxcoef=maxcoef), random_polynomial(d=d, maxcoef=maxcoef))
P, Q = random_polynomial(d=d, maxcoef=maxcoef), random_polynomial(d=d, maxcoef=maxcoef)
assert np.all(np.polymul(P, Q) == mypolymul(P, Q))
d = 10
maxcoef = 3
%timeit np.polymul(random_polynomial(d=d, maxcoef=maxcoef), random_polynomial(d=d, maxcoef=maxcoef))
%timeit mypolymul(random_polynomial(d=d, maxcoef=maxcoef), random_polynomial(d=d, maxcoef=maxcoef))
P, Q = random_polynomial(d=d, maxcoef=maxcoef), random_polynomial(d=d, maxcoef=maxcoef)
assert np.all(np.polymul(P, Q) == mypolymul(P, Q))
d = 10
maxcoef = 50
%timeit np.polymul(random_polynomial(d=d, maxcoef=maxcoef), random_polynomial(d=d, maxcoef=maxcoef))
%timeit mypolymul(random_polynomial(d=d, maxcoef=maxcoef), random_polynomial(d=d, maxcoef=maxcoef))
P, Q = random_polynomial(d=d, maxcoef=maxcoef), random_polynomial(d=d, maxcoef=maxcoef)
assert np.all(np.polymul(P, Q) == mypolymul(P, Q))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data into skrf
Step2: DC point extrapolation
Step3: Microstripline
Step4: Measurement vs simulation comparison
Step5: Surprisingly, the time domain results show a very good agreement, within ±2%, while the frequency domain results exhibit a reasonable agreement only on the lower half of the frequencies.
Stepped impedance microstripline
Step6: Measurement vs simulation comparison
Step7: Both the time domain and the frequency domain results show a reasonable agreement, within ±5% for the time domain. The frequency domain results exhibit a good agreement, within ±dB, only on the lower half of the frequencies, and the upper frequencies show better agreement than in the previous microstripline case.
Step8: Perform NISTMultilineTRL algorithm
Step9: Relative dielectric permittivity and loss tangent
Step10: Calibration-based values are plotted against the modelled values as a sanity check.
Step11: The agreement between measurements and the model seems very reasonable. The relative error of $\epsilon_{r,eff}$ stays within ±1% outside very low frequencies and the relative error of $A$ is kept within ±10% on most of the range. Considering the shape of $A$, it is not possible to do much better with this model.
Step12: Estimate connector delay with linear regression on unwrapped phase.
Step13: Build connector model and compare it against calibration-extracted data.
Step14: Comparison of connector model characteristics against calibration results shows a reasonable agreement. Calibration results exhibit some glitches that do not correspond to the expected physical behavior. They are caused by the calibration being close to singular due to the thru and line phase being a multiple of 180 degrees. Accuracy could be enhanced by feeding more distinct lines to the algorithm, but these are not manufactured yet.
Step15: The modelled results are plotted against the calibration data, as a sanity check.
Step16: The delta in dB is quite big at low and high frequencies, but we will see when comparing measurement and simulation that the time domain reflectometry results are very decent with this value. The inductive peaks caused by the connector are well rendered in the case of the microstripline.
Step17: Media sections with different geometries
Step18: Simulated devices under test
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import skrf
from skrf.media import MLine, DefinedAEpTandZ0
import numpy as np
from numpy import real, log, log10, sum, absolute, pi, sqrt
from scipy.optimize import minimize
import matplotlib.pyplot as plt
from IPython.display import *
skrf.stylely()
#load all measurement and simulation data into a dictionary
meas = skrf.read_all_networks('tdr_measurement_vs_simulation/measurement/')
simu = skrf.read_all_networks('tdr_measurement_vs_simulation/simulation/')
names = ['P1-MSL_Stepped_140-P2',
'P1-MSL_Thru_100-P2']
meas_dc_ext = meas.copy()
simu_dc_ext = simu.copy()
for k, n in enumerate(names):
meas_dc_ext[n] = meas_dc_ext[n].extrapolate_to_dc(kind='linear')
simu_dc_ext[n] = simu_dc_ext[n].extrapolate_to_dc(kind='linear')
Image('tdr_measurement_vs_simulation/figures/MSL_100.jpg', width='50%')
plt.figure()
plt.subplot(2,2,1)
plt.title('Time')
meas_dc_ext['P1-MSL_Thru_100-P2'].s11.plot_z_time_step(window='hamming', label='meas')
simu_dc_ext['P1-MSL_Thru_100-P2'].s11.plot_z_time_step(window='hamming', label='simu')
plt.xlim((-2, 3))
plt.subplot(2,2,2)
plt.title('Frequency')
meas_dc_ext['P1-MSL_Thru_100-P2'].s11.plot_s_db(label='meas')
simu_dc_ext['P1-MSL_Thru_100-P2'].s11.plot_s_db(label='simu')
plt.subplot(2,2,3)
z0 = 50
t, ymea = meas_dc_ext['P1-MSL_Thru_100-P2'].s11.step_response(window='hamming', pad=0)
ymea[ymea == 1.] = 1. + 1e-12 # solve numerical singularity
ymea[ymea == -1.] = -1. + 1e-12 # solve numerical singularity
ymea = z0 * (1+ymea) / (1-ymea)
t, ysim = simu_dc_ext['P1-MSL_Thru_100-P2'].s11.step_response(window='hamming', pad=0)
ysim[ysim == 1.] = 1. + 1e-12 # solve numerical singularity
ysim[ysim == -1.] = -1. + 1e-12 # solve numerical singularity
ysim = z0 * (1+ysim) / (1-ysim)
plt.xlabel('Time (ns)')
plt.ylabel('Relative error (%)')
plt.plot(t*1e9, 100*(ysim-ymea)/ymea)
plt.xlim((-2, 3))
plt.subplot(2,2,4)
delta = simu_dc_ext['P1-MSL_Thru_100-P2'].s_db[:,0,0] - meas_dc_ext['P1-MSL_Thru_100-P2'].s_db[:,0,0]
f = simu_dc_ext['P1-MSL_Thru_100-P2'].f * 1e-9
plt.xlabel('Frequency (GHz)')
plt.ylabel('Delta (dB)')
plt.plot(f, delta)
plt.ylim((-20,20))
plt.tight_layout()
plt.show()
Image('tdr_measurement_vs_simulation/figures/MSL_Stepped_140.jpg', width='75%')
plt.figure()
plt.subplot(2,2,1)
plt.title('Time')
meas_dc_ext['P1-MSL_Stepped_140-P2'].s11.plot_z_time_step(window='hamming', label='measurement')
simu_dc_ext['P1-MSL_Stepped_140-P2'].s11.plot_z_time_step(window='hamming', label='simulation')
plt.xlim((-1, 3))
plt.subplot(2,2,2)
plt.title('Frequency')
meas_dc_ext['P1-MSL_Stepped_140-P2'].s11.plot_s_db(label='measurement')
simu_dc_ext['P1-MSL_Stepped_140-P2'].s11.plot_s_db(label='simulation')
plt.subplot(2,2,3)
z0 = 50
t, ymea = meas_dc_ext['P1-MSL_Stepped_140-P2'].s11.step_response(window='hamming', pad=0)
ymea[ymea == 1.] = 1. + 1e-12 # solve numerical singularity
ymea[ymea == -1.] = -1. + 1e-12 # solve numerical singularity
ymea = z0 * (1+ymea) / (1-ymea)
t, ysim = simu_dc_ext['P1-MSL_Stepped_140-P2'].s11.step_response(window='hamming', pad=0)
ysim[ysim == 1.] = 1. + 1e-12 # solve numerical singularity
ysim[ysim == -1.] = -1. + 1e-12 # solve numerical singularity
ysim = z0 * (1+ysim) / (1-ysim)
plt.xlabel('Time (ns)')
plt.ylabel('Relative error (%)')
plt.plot(t*1e9, 100*(ysim-ymea)/ymea)
plt.xlim((-2, 3))
plt.subplot(2,2,4)
delta = simu_dc_ext['P1-MSL_Stepped_140-P2'].s_db[:,0,0] - meas_dc_ext['P1-MSL_Stepped_140-P2'].s_db[:,0,0]
f = simu_dc_ext['P1-MSL_Stepped_140-P2'].f * 1e-9
plt.xlabel('Frequency (GHz)')
plt.ylabel('Delta (dB)')
plt.plot(f, delta)
plt.ylim((-10,10))
plt.tight_layout()
plt.show()
Image('tdr_measurement_vs_simulation/figures/MSL_100.jpg', width='50%')
Image('tdr_measurement_vs_simulation/figures/MSL_200.jpg', width='100%')
Image('tdr_measurement_vs_simulation/figures/MSL_Short_50.jpg', width='25%')
# Perform NISTMultilineTRL algorithm
line100mm = meas['P1-MSL_Thru_100-P2']
line200mm = meas['P1-MSL_Thru_200-P2']
short50mm = skrf.network.two_port_reflect(meas['P1-MSL_Short_50'], meas['P2-MSL_Short_50'])
measured = [line100mm, short50mm, line200mm]
Grefls = [-1]
lengths = [100e-3, 200e-3] # in meter
offset = [50e-3] # in meter
cal = skrf.NISTMultilineTRL(measured, Grefls, lengths, er_est=4.5, refl_offset=offset)
# frequency axis
freq = line100mm.frequency
f = line100mm.frequency.f
f_ghz = line100mm.frequency.f/1e9
# the physical dimensions of the lines are known by design (neglecting manufacturing tolerances)
W = 3.00e-3
H = 1.55e-3
T = 50e-6
L = 0.1
# calibration results to compare against
ep_r_mea = cal.er_eff.real
A_mea = 20/log(10)*cal.gamma.real
# starting values for the optimizer
A = 0.0
f_A = 1e9
ep_r0 = 4.5
tanD0 = 0.02
f_epr_tand = 1e9
x0 = [ep_r0, tanD0]
# function to be minimised
def model(x, freq, ep_r_mea, A_mea, f_ep):
ep_r, tanD = x[0], x[1]
m = MLine(frequency=freq, Z0=50, w=W, h=H, t=T,
ep_r=ep_r, mu_r=1, rho=1.712e-8, tand=tanD, rough=0.15e-6,
f_low=1e3, f_high=1e12, f_epr_tand=f_ep,
diel='djordjevicsvensson', disp='kirschningjansen')
ep_r_mod = m.ep_reff_f.real
A_mod = m.alpha * 20/log(10)
return sum((ep_r_mod - ep_r_mea)**2) + 0.1*sum((A_mod - A_mea)**2)
# run optimizer
res = minimize(model, x0, args=(freq, ep_r_mea, A_mea, f_epr_tand),
bounds=[(4.0, 5.0), (0.001, 0.1)])
# get the results and print the results
ep_r, tanD = res.x[0], res.x[1]
print('epr={:.3f}, tand={:.4f} at {:.1f} GHz.'.format(ep_r, tanD, f_epr_tand * 1e-9))
# build the corresponding media
m = MLine(frequency=freq, Z0=50, w=W, h=H, t=T,
ep_r=ep_r, mu_r=1, rho=1.712e-8, tand=tanD, rough=0.15e-6,
f_low=1e3, f_high=1e12, f_epr_tand=f_epr_tand,
diel='djordjevicsvensson', disp='kirschningjansen')
plt.figure()
plt.subplot(2,2,1)
plt.xlabel('Frequency [GHz]')
plt.ylabel('$\epsilon_{r,eff}$')
plt.plot(f_ghz, ep_r_mea, label='measured')
plt.plot(f_ghz, m.ep_reff_f.real, label='model')
plt.legend()
plt.subplot(2,2,2)
plt.xlabel('Frequency [GHz]')
plt.ylabel('A (dB/m)')
plt.plot(f_ghz, A_mea, label='measured')
A_mod = 20/log(10)*m.alpha
plt.plot(f_ghz, A_mod, label='model')
plt.legend()
plt.subplot(2,2,3)
plt.xlabel('Frequency [GHz]')
plt.ylabel('$\epsilon_{r,eff}$ error [%]')
rel_err = 100 * ((ep_r_mea - m.ep_reff_f.real)/ep_r_mea)
plt.plot(f_ghz, rel_err)
plt.ylim((-2,2))
plt.subplot(2,2,4)
plt.xlabel('Frequency [GHz]')
plt.ylabel('$A$ error [%]')
rel_err = 100 * ((A_mea - A_mod)/A_mea)
plt.plot(f_ghz, rel_err)
plt.ylim((-20,10))
plt.tight_layout()
plt.show()
# extract connector characteristic from port 1 error coefficients
conn = skrf.error_dict_2_network(cal.coefs, cal.frequency, is_reciprocal=True)[0]
# connector delay estimation by linear regression on the unwrapped phase
xlim = 9000 # used to avoid phase jump if any
phi_conn = (np.angle(conn.s[:xlim,1,0]))
z = np.polyfit(f[:xlim], phi_conn, 1)
p = np.poly1d(z)
delay_conn = -z[0]/(2*np.pi)
print('Connector delay: {:.1f} ps'.format(delay_conn * 1e12))
mc = DefinedAEpTandZ0(m.frequency, ep_r=1, tanD=0.02, Z0=50,
f_low=1e3, f_high=1e18, f_ep=f_epr_tand, model='frequencyinvariant')
Z0_conn = 50.0 # the actual connector characteristic impedance will be tuned later
left = mc.line(delay_conn, 's', z0=Z0_conn)
check = mc.thru() ** left ** mc.thru()
plt.figure()
plt.subplot(2,1,1)
conn.plot_s_deg(1, 0, label='measured')
check.plot_s_deg(1, 0, label='model')
plt.ylabel('phase (rad)')
plt.legend()
plt.subplot(2,1,2)
conn.plot_s_db(1, 0, label='Measured')
check.plot_s_db(1, 0, label='Model')
plt.xlabel('Frequency (GHz)')
plt.ylabel('Insertion Loss (dB)')
plt.legend()
plt.tight_layout()
plt.show()
s11_ref = conn.s[:,0,0]
x0 = [Z0_conn]
# function to be minimised
def model2(x, mc, delay_conn, s11_ref):
Z0_mod = x[0]
conn_mod = mc.line(delay_conn, 's', z0=Z0_mod)
check = mc.thru(z0 = 50.) ** conn_mod ** mc.thru(z0 = 50.)
s11_mod = check.s[:,0,0]
return sum(absolute(s11_ref-s11_mod))
# run optimizer
res = minimize(model2, x0, args=(mc, delay_conn, s11_ref),
bounds=[(20, 100)])
# get the results and print the results
Z0_conn = res.x[0]
print('Z0_conn={:.1f} ohm.'.format(Z0_conn))
conn_mod = mc.line(delay_conn, 's', z0=Z0_conn)
check = mc.thru(z0 = 50.) ** conn_mod ** mc.thru(z0 = 50.)
plt.figure;
plt.subplot(2,1,1)
conn.plot_s_db(0,0, label = 'measured')
check.plot_s_db(0,0, label = 'model')
plt.subplot(2,1,2)
plt.plot(check.f*1e-9, (check.s_db[:,0,0]-conn.s_db[:,0,0]))
plt.ylabel('Delta (dB)')
plt.xlabel('Frequency (GHz)')
plt.tight_layout()
plt.show()
freq = skrf.F(1,10e3,10000, 'mhz')
# 50 ohm segment
MSL1 = MLine(frequency=freq, z0=50, w=W, h=H, t=T,
ep_r=ep_r, mu_r=1, rho=1.712e-8, tand=tanD, rough=0.15e-6,
f_low=1e3, f_high=1e12, f_epr_tand=f_epr_tand,
diel='djordjevicsvensson', disp='kirschningjansen')
# Capacitive segment
MSL2 = MLine(frequency=freq, z0=50, w=8.0e-3, h=H, t=T,
ep_r=ep_r, mu_r=1, rho=1.712e-8, tand=tanD, rough=0.15e-6,
f_low=1e3, f_high=1e12, f_epr_tand=f_epr_tand,
diel='djordjevicsvensson', disp='kirschningjansen')
# Inductive segment
MSL3 = MLine(frequency=freq, z0=50, w=1.0e-3, h=H, t=T,
ep_r=ep_r, mu_r=1, rho=1.712e-8, tand=tanD, rough=0.15e-6,
f_low=1e3, f_high=1e12, f_epr_tand=f_epr_tand,
diel='djordjevicsvensson', disp='kirschningjansen')
# Connector transmission line media with guessed loss
MCON = DefinedAEpTandZ0(m.frequency, ep_r=1, tanD=0.025, Z0=Z0_conn,
f_low=1e3, f_high=1e18, f_ep=f_epr_tand, model='frequencyinvariant')
# SMA connector
conn_sma = MCON.thru(z0 = 50.) ** \
MCON.line(delay_conn, 's', z0=Z0_conn, embed=True) ** \
MCON.thru(z0 = 50.)
# microstripline
thru_simu = conn_sma ** MSL1.line(100e-3, 'm', embed=True, z0=MSL1.Z0) ** conn_sma
thru_simu.name = 'P1-MSL_Thru_100-P2'
# stepped impedance microstripline
step_simu = conn_sma \
** MSL1.line(50e-3, 'm', embed=True, z0=MSL1.Z0) \
** MSL2.line(20e-3, 'm', embed=True, z0=MSL2.Z0) \
** MSL3.line(20e-3, 'm', embed=True, z0=MSL3.Z0) \
** MSL1.line(50e-3, 'm', embed=True, z0=MSL1.Z0) \
** conn_sma
step_simu.name = 'P1-MSL_Stepped_140-P2'
# write simulated data to .snp files
write_data = False
if write_data:
step_simu.write_touchstone(dir='tdr_measurement_vs_simulation/simulation/')
thru_simu.write_touchstone(dir='tdr_measurement_vs_simulation/simulation/')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercises
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np

x = np.arange(9).reshape((3,3))
x
np.diag(x)
np.diag(x, k=1)
np.diag(x, k=-1)
np.diag(np.diag(x))
np.diag(np.diag(x, k=-1), k=1)
np.diag(np.arange(2, 7), k=-1)
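# combining the pieces: a small tridiagonal matrix built from three diag() calls
np.diag(np.ones(3)) + np.diag(np.arange(1, 3), k=1) + np.diag(np.arange(1, 3), k=-1)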
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code::
# imports assumed for standalone Keras; use the tensorflow.keras equivalents if needed
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.preprocessing.image import load_img, img_to_array
from keras.models import Model

def extract_features(filename):
# load the model
model = VGG16()
# re-structure the model
model = Model(inputs=model.inputs, outputs=model.layers[-2].output)
# load the photo
image = load_img(filename, target_size=(224, 224))
# convert the image pixels to a numpy array
image = img_to_array(image)
# reshape data for the model
image = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))
# prepare the image for the VGG model
image = preprocess_input(image)
# get features
feature = model.predict(image, verbose=0)
return feature
# load and prepare the photograph
photo = extract_features('example.jpg')
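# sanity check: the fc2 layer of VGG16 is 4096-dimensional, so this should print (1, 4096)
print(photo.shape)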
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'gfdl-esm2m', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Has the prevalence of a token increased or decreased over time?
Step2: $N_{embryo}$
Step3: $f("embryo") = \frac{N_{embryo}}{N}$
Step4: ...and then we use NLTK's MLEProbDist (Maximum Likelihood Estimator) to obtain the probability distribution.
Step5: $p("embryo") ~= \hat{p}("embryo") = f("embryo") $
Step6: Since we are interested in change over time, we need to generate a conditional probability distribution.
Step7: $f("embryo" \Bigm| year)$
Step8: $\hat{p}("embryo" \Bigm| year)$
Step9: Now we'll take a look at the token that we'd like to analyze. Let's try chicken.
Step10: The SciPy package provides an Ordinary Least Squares linear regression function called linregress(). We can use that to estimate the model parameters from our data.
Step11: At first pass, our linear model looks like a remarkably good fit. Our r-squared value is not too bad, and we have a very low p value. The problem with interpreting that p-value, however, is that data derived from texts rarely satisfy the assumptions of the t-test used to assess significance. Aside from the fact that we have very few "observations", does the distribution of Y values (token probabilities) shown below look normally distributed to you?
Step12: We need to use a hypothesis test that does not assume normality, and can handle the small sample size. One such approach is a permutation test.
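In rough terms, the permutation test below asks how often a slope at least as extreme as the observed $\hat{\beta}$ appears when the pairing between years and probabilities is destroyed by shuffling: with $B$ shuffles, $\hat{p} \approx \frac{1}{B}\sum_{b=1}^{B}\mathbf{1}\big[\,|\beta_{\pi}^{(b)}| \ge |\hat{\beta}|\,\big]$. (The code summarizes the same permutation distribution with its inner 95% interval rather than an explicit p-value.)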
|
<ASSISTANT_TASK:>
Python Code:
text_root = '../data/EmbryoProjectTexts/files'
zotero_export_path = '../data/EmbryoProjectTexts'
documents = nltk.corpus.PlaintextCorpusReader(text_root, 'https.+')
metadata = zotero.read(zotero_export_path, index_by='link', follow_links=False)
word_counts = nltk.FreqDist([normalize_token(token)
for token in documents.words()
if filter_token(token)])
print 'N_e', word_counts['embryo']
print 'N', word_counts.N()
word_counts.freq('embryo')
word_probs = nltk.MLEProbDist(word_counts)
print word_probs.prob('embryo') # Probability of an observed token to be 'embryo'.
word_counts_over_time = nltk.ConditionalFreqDist([
(metadata[fileid].date, normalize_token(token))
for fileid in documents.fileids()
for token in documents.words(fileids=[fileid])
if filter_token(token)
])
embryo_counts = pd.DataFrame(columns=['Year', 'Count'])
for i, (year, counts) in enumerate(word_counts_over_time.items()):
embryo_counts.loc[i] = [year, counts['embryo']]
embryo_counts
plt.scatter(embryo_counts.Year, embryo_counts.Count)
plt.ylabel('Word count')
plt.xlabel('Year')
plt.show()
embryo_freq = pd.DataFrame(columns=['Year', 'Frequency'])
for i, (year, counts) in enumerate(word_counts_over_time.items()):
embryo_freq.loc[i] = [year, counts.freq('embryo')]
plt.scatter(embryo_freq.Year, embryo_freq.Frequency)
plt.ylabel('Word frequency')
plt.xlabel('Year')
plt.show()
word_probs_over_time = nltk.ConditionalProbDist(word_counts_over_time, nltk.MLEProbDist)
embryo_prob = pd.DataFrame(columns=['Year', 'Probability'])
for i, (year, probs) in enumerate(word_probs_over_time.items()):
embryo_prob.loc[i] = [year, probs.prob('embryo')]
plt.scatter(embryo_prob.Year, embryo_prob.Probability)
plt.ylabel('Conditional word probability')
plt.xlabel('Year')
plt.show()
print 'N(w|c=2016) =', word_counts_over_time[2016]['embryo']
print 'f(w|c=2016) =', word_counts_over_time[2016].freq('embryo')
print '^p(w|c=2016) =', word_probs_over_time[2016].prob('embryo')
chicken_data = pd.DataFrame(columns=['Year', 'Probability'])
for i, (year, probs) in enumerate(word_probs_over_time.items()):
chicken_data.loc[i] = [year, probs.prob('chicken')]
chicken_data
# Create a scatterplot.
plt.scatter(chicken_data.Year, chicken_data.Probability)
# Scale the Y axis.
plt.ylim(chicken_data.Probability.min(), chicken_data.Probability.max())
# Scale the X axis.
plt.xlim(chicken_data.Year.min(), chicken_data.Year.max())
plt.ylabel('$\\hat{p}(\'chicken\'|year)$')
plt.show() # Render the figure.
from scipy.stats import linregress
Beta, Beta0, r, p, stde = linregress(chicken_data.Year, chicken_data.Probability)
print '^Beta:', Beta
print '^Beta_0:', Beta0
print 'r-squared:', r*r
print 'p:', p
plt.scatter(chicken_data.Year, chicken_data.Probability)
plt.plot(chicken_data.Year, Beta0 + Beta*chicken_data.Year) # Array math!
plt.ylim(chicken_data.Probability.min(), chicken_data.Probability.max())
plt.xlim(chicken_data.Year.min(), chicken_data.Year.max())
plt.ylabel('$\\hat{p}(\'chicken\'|year)$')
plt.show() # Render the figure.
plt.hist(chicken_data.Probability)
plt.show()
import numpy as np
# We can use underscores `_` for values that we don't want to keep.
samples = pd.DataFrame(columns=['Beta_pi', 'Beta0_pi'])
for i in xrange(1000):
shuffled_probability = np.random.permutation(chicken_data.Probability)
# linregress() returns five parameters; we only care about the first two.
Beta_pi, Beta0_pi, _, _, _ = linregress(chicken_data.Year, shuffled_probability)
samples.loc[i] = [Beta_pi, Beta0_pi]
plt.figure(figsize=(10, 5))
plt.subplot(121)
plt.hist(samples.Beta_pi) # Histogram of Beta values from permutations.
plt.plot([Beta, Beta], [0, 200], # Beta from the observed data.
lw=5, label='$\\hat{\\beta}$')
# Plot the upper and lower bounds of the inner 95% probability.
Beta_upper = np.percentile(samples.Beta_pi, 97.5)
Beta_lower = np.percentile(samples.Beta_pi, 2.5)
plt.plot([Beta_upper, Beta_upper], [0, 200], color='k', lw=2, label='$p = 0.05$')
plt.plot([Beta_lower, Beta_lower], [0, 200], color='k', lw=2)
plt.legend()
plt.xlabel('$\\beta_{\\pi}$', fontsize=24)
# Same procedure for Beta0.
plt.subplot(122)
plt.hist(samples.Beta0_pi)
plt.plot([Beta0, Beta0], [0, 200], lw=5, label='$\\hat{\\beta_0}$')
Beta0_upper = np.percentile(samples.Beta0_pi, 97.5)
Beta0_lower = np.percentile(samples.Beta0_pi, 2.5)
plt.plot([Beta0_upper, Beta0_upper], [0, 200], color='k', lw=2, label='$p = 0.05$')
plt.plot([Beta0_lower, Beta0_lower], [0, 200], color='k', lw=2)
plt.legend()
plt.xlabel('$\\beta_{0\\pi}$', fontsize=24)
plt.tight_layout()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: See the source of python functions/classes with question marks (? or ??)
Step2: %load
Step3: %run
Step4: %load code
Step5: You absolutely need to know how to make plots appear in the notebook page, using %matplotlib inline:
Step6: %bash to run cell with bash
Step7: %%latex to render cell content in LaTeX
Step8: Collapsing windows
Step9: Mixing Python and R
Step10: %R push X is equivalent to %R -i X
Step11: We can also directly write in R
Step12: Timing and profiling
Step13: %%timeit does several times the same operation and computes the average time
Step14: To limit the number of repetitions use "-n number"
Step15: To run profilers (processing and memory) you have to install further packages, e.g. memory_profiler and line_profiler:
Step16: %memit Profiling the memory usage
Step17: %mprun for profiling a code (testmem in the code mymem.py)
Step18: This works only on an external file.
Step19: The line profiler doesn't currently work.
Step20: Speed
Step21: Widgets
Step22: Text widgets
Step23: Buttons
Step24: Progress bar
Step25: Interaction
Step26: To create a checkbox
Step27: To pass a string
Step28: As a decorator of a function
Step29: Finally, we can bind the input of one widget to the output of another one.
Step30: Interactive visualization
|
<ASSISTANT_TASK:>
Python Code:
! pwd
names = !ls *.py
names[:3]
%pycat?
%%writefile pythoncode.py
import numpy
def append_if_not_exists(arr, x):
if x not in arr:
arr.append(x)
def some_useless_slow_function():
arr = list()
for i in range(10000):
x = numpy.random.randint(0, 10000)
append_if_not_exists(arr, x)
%run ./foo.py
%lsmagic
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# The semicolon at the end avoid to print the function
plt.hist(np.linspace(0, 1, 1000)**1.5);
%%bash
for i in a b c;
do
echo $i
done
%%latex
\begin{align}
a = \frac{1}{2} && b = \frac{1}{3} && c = \frac{1}{4}\\
a && b && c \\
1 && 2 && 3
\end{align}
%load_ext rpy2.ipython
# example of R ...
%R X=c(1,4,5,7); sd(X); mean(X)
import numpy as np
import pylab
X = np.array([0,1,2,3,4])
Y = np.array([3,5,4,6,7])
# Push variables in rpy2
%Rpush X Y
v1 = %R plot(X,Y); print(summary(lm(Y~X))); vv=mean(X)*mean(Y)
# Compute the fit with R and save coefficients in Python
b = %R lm(Y~X)$coef
# Or compute variables in rpy2 and then pull them from the rpy2 space
#%R a=resid(lm(Y-X))
#%Rpull a
#print a, b
b = %R a=resid(lm(Y~X))
%Rpull a
print "a", a
print "b", b
%%R -i X,Y -o XYcoef
XYlm = lm(Y~X)
XYcoef = coef(XYlm)
print(summary(XYlm))
par(mfrow=c(2,2))
plot(XYlm)
%time {1 for i in xrange(10*1000000)}
%%timeit
x = range(10000)
max(x)
%%timeit -n 100
x = range(10000)
max(x)
from numpy.random import randn
def add_and_sum(x, y):
added = x + y
summed = added.sum(axis=1)
return summed
x = randn(3000, 3000)
y = randn(3000, 3000)
%prun add_and_sum(x, y)
!python -m cProfile foo.py
!python -m cProfile -o foo.out foo.py
import pstats
stats = pstats.Stats('foo.out')
stats.print_stats()
%load_ext memory_profiler
%memit range(1000000)
%memit list(range(1000000))
import numpy as np
def testmem():
a = np.arange(1000000)
b = list(range(1000000))
del(a)
del(b)
%mprun -f testmem testmem()
%%writefile mymem.py
#mymem.py
import numpy as np
def testmem():
a = np.arange(1000000)
b = list(range(1000000))
del(a)
del(b)
testmem()
from mymem import testmem
%mprun -f testmem testmem()
from pythoncode import some_useless_slow_function, append_if_not_exists
%prun some_useless_slow_function()
%load_ext memory_profiler
%mprun -f append_if_not_exists some_useless_slow_function()
import line_profiler
lp = line_profiler.LineProfiler()
lp.add_function(some_useless_slow_function)
lp.runctx('some_useless_slow_function()', locals=locals(), globals=globals())
lp.print_stats()
import multiprocessing
print multiprocessing.cpu_count(), 'xeon cores'
from ipywidgets import widgets
from IPython.display import display
text = widgets.Text()
display(text)
def handle_submit(sender):
print(text.value)
text.on_submit(handle_submit)
button = widgets.Button(description="Click me !")
display(button)
def on_button_clicked(b):
print ("Yes, you clicked me !")
button.on_click(on_button_clicked)
from IPython.display import display
from ipywidgets import FloatProgress
import numpy as np
f = FloatProgress(min=0,max=1000)
f.value=0
display(f)
for i in range(1000):
a = np.exp(np.arange(100.))
f.value += 1
from ipywidgets import interact
def f(x):
print(x)
interact(f,x=10)
interact(f, x=True);
interact(f, x='Hi there!');
@interact(x=True, y=1.0)
def g(x, y):
return (x, y)
outputText = widgets.Text()
outputText
inputText = widgets.Text()
def makeUpperCase(sender):
outputText.value = inputText.value.upper()
inputText.on_submit(makeUpperCase)
inputText
#from IPython.html.widgets import *
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
t = np.arange(0.0,1.0,0.01)
def pltsin(f):
plt.plot(t,np.sin(2*np.pi*t*f))
plt.show()
interact(pltsin, f=(1,10,0.1))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: All of the fetched data has already been uploaded to Baidu Cloud. The later chapters use the data updated in this section, so it is recommended to download the fully ingested database directly from the cloud drive instead of downloading and ingesting the data source by source. The Baidu Cloud address is as follows:
Step2: If you prefer not to download the data files directly, you can also run the cell below and click the button to update US-market data for the whole market; if you have already run it, do not run it again:
Step3: The buy factors, sell factors, etc. still use the same settings, as shown below:
Step4: 1. Backtest on the US-stock training set
Step5: Set the data fetch mode to local-data mode, the most suitable mode for a whole-market backtest: it runs efficiently and keeps data updating separate from trade backtesting.
Step6: Next, set the cache type according to the type of data you downloaded: if you downloaded and unpacked csv, tick use_csv; if hdf5, leave it unticked:
Step7: Next, use the env settings to split the backtest symbols into a backtest training set and a backtest test set, and turn on the switch that generates buy-moment features during the backtest:
Step8: Next, set the split-ratio parameter used to divide the training and test sets for the backtest; the default is 10, i.e. the data is split into 10 parts, 9 for training and 1 for testing.
Step9: Capital management still uses the default atr; the base capital per buy order is set to 15/10,000 of capital. If this value is too large (for example the initial default of 0.1), too many stocks cannot be bought for lack of funds and the whole-market backtest loses its point; if it is too small, capital utilization drops and the final return falls:
Step10: run_kl_update above updated data from 2011-08-08 to 2017-08-08. Since features are generated at buy time, one year of data must be reserved as the feature-data window, so the backtest below uses start 2012-08-08 to 2017-08-08, i.e. pushed forward by one year:
Step11: 2. Backtest on the US-stock test set
Step12: Raise g_atr_pos_base to 0.007 (because the symbols were split with the default ratio parameter of 4, the test set is only a fraction of the training set, so each trade uses a larger capital base):
Step13: Now start the backtest: on the first run choose select: run loop back and then click run select_test; if you have already backtested, you can choose select: load test data to read directly from the cached data:
Step14: 3. Full-position multiplier
Step15: Measuring the results with test_frm shows that all trades were executed successfully, as reflected in the strategy's buy fill ratio.
Step16: The converted test_frm is a metrics object that uses the full-position multiplier; below, simply use test_frm to run the full-position measurement:
|
<ASSISTANT_TASK:>
Python Code:
# Import base libraries
from __future__ import print_function
from __future__ import division
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import ipywidgets
%matplotlib inline
import os
import sys
# Use insert 0 so that only the GitHub checkout is used, avoiding version mismatches with a pip-installed abupy
sys.path.insert(0, os.path.abspath('../'))
import abupy
# Use sandbox data so the data environment matches the book
abupy.env.enable_example_env_ipython()
from abupy import AbuFactorAtrNStop, AbuFactorPreAtrNStop, AbuFactorCloseAtrNStop, AbuFactorBuyBreak, ABuSymbol
from abupy import abu, EMarketTargetType, AbuMetricsBase, ABuMarketDrawing, ABuProgress, ABuSymbolPd, ABuMarket, AbuSymbolUS
from abupy import EMarketTargetType, EDataCacheType, EMarketSourceType, EMarketDataFetchMode, EStoreAbu
# Disable the sandbox data
abupy.env.disable_example_env_ipython()
if abupy.env.g_is_mac_os:
!open $abupy.env.g_project_data_dir
else:
!echo $abupy.env.g_project_data_dir
def tx_us():
abupy.env.g_market_source = EMarketSourceType.E_MARKET_SOURCE_tx
abupy.env.g_data_cache_type = EDataCacheType.E_DATA_CACHE_CSV
abu.run_kl_update(start='2011-08-08', end='2017-08-08', market=EMarketTargetType.E_MARKET_TARGET_US, n_jobs=32)
# Avoid executing unwanted code when the notebook is run with "run all"
_ = ipywidgets.interact_manual(tx_us)
# Initial capital: 5,000,000
read_cash = 5000000
# The buy factors still use the upward-breakout factor
buy_factors = [{'xd': 60, 'class': AbuFactorBuyBreak},
{'xd': 42, 'class': AbuFactorBuyBreak}]
# The sell factors continue to use the factors from the previous section
sell_factors = [
{'stop_loss_n': 1.0, 'stop_win_n': 3.0,
'class': AbuFactorAtrNStop},
{'class': AbuFactorPreAtrNStop, 'pre_atr_n': 1.5},
{'class': AbuFactorCloseAtrNStop, 'close_atr_n': 1.5}
]
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_US
abupy.env.g_data_fetch_mode = EMarketDataFetchMode.E_DATA_FETCH_FORCE_LOCAL
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_US
abupy.env.g_data_fetch_mode = EMarketDataFetchMode.E_DATA_FETCH_FORCE_LOCAL
def select_store_cache(use_csv):
if use_csv:
abupy.env.g_data_cache_type = EDataCacheType.E_DATA_CACHE_CSV
else:
abupy.env.g_data_cache_type = EDataCacheType.E_DATA_CACHE_HDF5
print(abupy.env.g_data_cache_type)
use_csv = ipywidgets.Checkbox(True)
_ = ipywidgets.interact(select_store_cache, use_csv=use_csv)
# Generate buy-moment features during the backtest
abupy.env.g_enable_ml_feature = True
# Split the symbols into a training set and a test set for the backtest; backtest on the training set
abupy.env.g_enable_train_test_split = True
abupy.env.g_split_tt_n_folds = 4
# Set the base capital per buy order to 15/10,000 of capital
abupy.beta.atr.g_atr_pos_base = 0.0015
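# i.e. with the 5,000,000 read_cash defined above, roughly 5,000,000 * 0.0015 = 7,500 of capital per buy signal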
abu_result_tuple = None
def run_loop_back():
global abu_result_tuple
abu_result_tuple, _ = abu.run_loop_back(read_cash,
buy_factors,
sell_factors,
choice_symbols=None,
start='2012-08-08', end='2017-08-08')
    # Save the run results locally for later backtest analysis; the code that stores the result data is shown below
abu.store_abu_result_tuple(abu_result_tuple, n_folds=5, store_type=EStoreAbu.E_STORE_CUSTOM_NAME,
custom_name='train_us')
ABuProgress.clear_output()
def run_load_train():
global abu_result_tuple
abu_result_tuple = abu.load_abu_result_tuple(n_folds=5, store_type=EStoreAbu.E_STORE_CUSTOM_NAME,
custom_name='train_us')
def select(select):
if select == 'run loop back':
run_loop_back()
else:
run_load_train()
_ = ipywidgets.interact_manual(select, select=['run loop back', 'load train data'])
metrics_train = AbuMetricsBase.show_general(*abu_result_tuple, only_show_returns=True)
# Do not generate buy-moment features for the test-set backtest either, to speed up the run
abupy.env.g_enable_ml_feature = False
# Use the pre-split test data for the backtest
abupy.env.g_enable_last_split_test = True
abupy.beta.atr.g_atr_pos_base = 0.007
abu_result_tuple_test = None
def run_loop_back_test():
global abu_result_tuple_test
abu_result_tuple_test, _ = abu.run_loop_back(read_cash,
buy_factors,
sell_factors,
choice_symbols=None,
start='2012-08-08', end='2017-08-08')
    # Save the run results locally for later backtest analysis; the code that stores the result data is shown below
abu.store_abu_result_tuple(abu_result_tuple_test, n_folds=5, store_type=EStoreAbu.E_STORE_CUSTOM_NAME,
custom_name='test_us')
ABuProgress.clear_output()
def run_load_test():
global abu_result_tuple_test
abu_result_tuple_test = abu.load_abu_result_tuple(n_folds=5, store_type=EStoreAbu.E_STORE_CUSTOM_NAME,
custom_name='test_us')
def select_test(select):
if select == 'run loop back':
run_loop_back_test()
else:
run_load_test()
_ = ipywidgets.interact_manual(select_test, select=['run loop back', 'load test data'])
AbuMetricsBase.show_general(*abu_result_tuple_test, only_show_returns=True)
test_frm = None
def run_transform():
global test_frm
metrics_test = AbuMetricsBase(*abu_result_tuple_test)
test_frm = metrics_test.transform_to_full_rate_factor(n_process_kl=4, show=False)
    # Save after conversion so that next time it can be loaded directly without converting again
from abupy import AbuResultTuple
test_us_fr = AbuResultTuple(test_frm.orders_pd, test_frm.action_pd, test_frm.capital, test_frm.benchmark)
abu.store_abu_result_tuple(test_us_fr, n_folds=5, store_type=EStoreAbu.E_STORE_CUSTOM_NAME,
custom_name='test_us_full_rate')
def run_load():
global test_frm
test_us_fr = abu.load_abu_result_tuple(n_folds=5, store_type=EStoreAbu.E_STORE_CUSTOM_NAME,
custom_name='test_us_full_rate')
    # After loading locally, build the metrics object with AbuMetricsBase; enable_stocks_full_rate_factor=True applies the full-position multiplier
test_frm = AbuMetricsBase(test_us_fr.orders_pd, test_us_fr.action_pd, test_us_fr.capital, test_us_fr.benchmark,
enable_stocks_full_rate_factor=True)
test_frm.fit_metrics()
def select_full(select):
if select == 'transform':
run_transform()
else:
run_load()
_ = ipywidgets.interact_manual(select_full, select=['transform', 'load'])
AbuMetricsBase.show_general(test_frm.orders_pd,
test_frm.action_pd, test_frm.capital, test_frm.benchmark, only_show_returns=True)
print(type(test_frm))
test_frm.plot_returns_cmp(only_show_returns=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Complete graph Laplacian
Step3: The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$, where $D$ is the degree matrix and $A$ is the adjacency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.
Step5: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
Step6: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$.
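As a concrete check of the definitions above, for $K_3$ every vertex has degree $n-1=2$, so $L = D - A = \begin{pmatrix} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{pmatrix}$; the functions written below should reproduce exactly these matrices for any $n$.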
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import networkx as nx
K_5=nx.complete_graph(5)
nx.draw(K_5)
def complete_deg(n):
    """Return the integer valued degree matrix D for the complete graph K_n."""
a = np.ones((n,n), dtype=np.int)
b = (n-1)*a
diagonal_matrix = np.diag(np.diag(b))
return diagonal_matrix
print complete_deg(5)
D = complete_deg(5)
assert D.shape==(5,5)
assert D.dtype==np.dtype(int)
assert np.all(D.diagonal()==4*np.ones(5))
assert np.all(D-np.diag(D.diagonal())==np.zeros((5,5),dtype=int))
def complete_adj(n):
    """Return the integer valued adjacency matrix A for the complete graph K_n."""
a = np.ones((n,n), dtype=np.int)
b = np.diag(np.diag(a))
c = -1*a
d = c + b
adjacency_matrix = abs(d)
return adjacency_matrix
A = complete_adj(5)
assert A.shape==(5,5)
assert A.dtype==np.dtype(int)
assert np.all(A+np.eye(5,dtype=int)==np.ones((5,5),dtype=int))
def eig_vals(m):
L = complete_deg(m) - complete_adj(m)
return np.linalg.eigvals(L)
print eig_vals(2)
print
print eig_vals(3)
print
print eig_vals(4)
print
print eig_vals(5)
print
print eig_vals(6)
print
print eig_vals(7)
print
print eig_vals(25)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You will work with data from the Housing Prices Competition for Kaggle Learn Users to predict home prices in Iowa using 79 explanatory variables describing (almost) every aspect of the homes.
Step2: Use the next cell to print the first several rows of the data. It's a nice way to get an overview of the data you will use in your price prediction model.
Step3: The next code cell defines five different random forest models. Run this code cell without changes. (To review random forests, look here.)
Step4: To select the best model out of the five, we define a function score_model() below. This function returns the mean absolute error (MAE) from the validation set. Recall that the best model will obtain the lowest MAE. (To review mean absolute error, look here.)
Step5: Step 1
Step6: Step 2
Step7: Run the next code cell without changes. The code fits the model to the training and validation data, and then generates test predictions that are saved to a CSV file. These test predictions can be submitted directly to the competition!
|
<ASSISTANT_TASK:>
Python Code:
# Set up code checking
import os
if not os.path.exists("../input/train.csv"):
os.symlink("../input/home-data-for-ml-course/train.csv", "../input/train.csv")
os.symlink("../input/home-data-for-ml-course/test.csv", "../input/test.csv")
from learntools.core import binder
binder.bind(globals())
from learntools.ml_intermediate.ex1 import *
print("Setup Complete")
import pandas as pd
from sklearn.model_selection import train_test_split
# Read the data
X_full = pd.read_csv('../input/train.csv', index_col='Id')
X_test_full = pd.read_csv('../input/test.csv', index_col='Id')
# Obtain target and predictors
y = X_full.SalePrice
features = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'FullBath', 'BedroomAbvGr', 'TotRmsAbvGrd']
X = X_full[features].copy()
X_test = X_test_full[features].copy()
# Break off validation set from training data
X_train, X_valid, y_train, y_valid = train_test_split(X, y, train_size=0.8, test_size=0.2,
random_state=0)
X_train.head()
from sklearn.ensemble import RandomForestRegressor
# Define the models
model_1 = RandomForestRegressor(n_estimators=50, random_state=0)
model_2 = RandomForestRegressor(n_estimators=100, random_state=0)
model_3 = RandomForestRegressor(n_estimators=100, criterion='absolute_error', random_state=0)
model_4 = RandomForestRegressor(n_estimators=200, min_samples_split=20, random_state=0)
model_5 = RandomForestRegressor(n_estimators=100, max_depth=7, random_state=0)
models = [model_1, model_2, model_3, model_4, model_5]
from sklearn.metrics import mean_absolute_error
# Function for comparing different models
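# (the score is the mean absolute error on the validation set, i.e. the average of |actual - predicted|; lower is better)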
def score_model(model, X_t=X_train, X_v=X_valid, y_t=y_train, y_v=y_valid):
model.fit(X_t, y_t)
preds = model.predict(X_v)
return mean_absolute_error(y_v, preds)
for i in range(0, len(models)):
mae = score_model(models[i])
print("Model %d MAE: %d" % (i+1, mae))
# Fill in the best model
best_model = ____
# Check your answer
step_1.check()
#%%RM_IF(PROD)%%
best_model = model_3
step_1.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_1.hint()
#_COMMENT_IF(PROD)_
step_1.solution()
# Define a model
my_model = ____ # Your code here
# Check your answer
step_2.check()
#%%RM_IF(PROD)%%
my_model = 3
step_2.assert_check_failed()
#%%RM_IF(PROD)%%
my_model = best_model
step_2.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_2.hint()
#_COMMENT_IF(PROD)_
step_2.solution()
# Fit the model to the training data
my_model.fit(X, y)
# Generate test predictions
preds_test = my_model.predict(X_test)
# Save predictions in format used for competition scoring
output = pd.DataFrame({'Id': X_test.index,
'SalePrice': preds_test})
output.to_csv('submission.csv', index=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import io
data = io.StringIO("""
rs alleles chrom pos strand assembly# center protLSID assayLSID
TP3 A/C 0 3 + NaN NaN NaN NaN
TP7 A/T 0 7 + NaN NaN NaN NaN
TP12 T/A 0 12 + NaN NaN NaN NaN
TP15 C/A 0 15 + NaN NaN NaN NaN
TP18 C/T 0 18 + NaN NaN NaN NaN
""")
df = pd.read_csv(data, delim_whitespace=True).set_index('rs')
test = ['TP3', 'TP7', 'TP18']
def g(df, test):
return df.loc[test]
result = g(df, test)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Restart the Kernel
Step2: Python imports
Step3: Loading the dataset
Step4: The datasets object itself is DatasetDict, which contains one key for the training, validation and test set.
Step5: To access an actual element, you need to select a split first, then give an index
Step6: Using the unique method to extract label list. This will allow us to experiment with other datasets without hard-coding labels.
Step7: To get a sense of what the data looks like, the following function will show some examples picked randomly in the dataset (automatically decoding the labels in passing).
Step8: Preprocessing the data
Step9: You can check which type of models have a fast tokenizer available and which don't on the big table of models.
Step10: Depending on the model you selected, you will see different keys in the dictionary returned by the cell above. They don't matter much for what we're doing here (just know they are required by the model we will instantiate later), you can learn more about them in this tutorial if you're interested.
Step11: Note that transformers are often pretrained with subword tokenizers, meaning that even if your inputs have been split into words already, each of those words could be split again by the tokenizer. Let's look at an example of that
Step12: Fine Tuning the Model
Step13: The warning is telling us we are throwing away some weights (the vocab_transform and vocab_layer_norm layers) and randomly initializing some other (the pre_classifier and classifier layers). This is absolutely normal in this case, because we are removing the head used to pretrain the model on a masked language modeling objective and replacing it with a new head for which we don't have pretrained weights, so the library warns us we should fine-tune this model before using it for inference, which is exactly what we are going to do.
Step14: Here we set the evaluation to be done at the end of each epoch, tweak the learning rate, use the batch_size defined at the top of the notebook and customize the number of epochs for training, as well as the weight decay.
Step15: Now we Create the Trainer object and we are almost ready to train.
Step16: We can now finetune our model by just calling the train method
Step17: The evaluate method allows you to evaluate again on the evaluation dataset or on another dataset
Step20: To get the precision/recall/f1 computed for each category now that we have finished training, we can apply the same function as before on the result of the predict method
Step21: Run Training Job on Cloud AI Platform (CAIP)
Step22: Running Training Job on Cloud AI Platform
Step23: Using Custom Containers
Step24: Running Training Job on Cloud AI Platform with Custom Container
Step25: Monitoring Training Job on Cloud AI Platform (CAIP)
Step26: You can stream logs using gcloud ai-platform jobs stream-logs $JOB_NAME
|
<ASSISTANT_TASK:>
Python Code:
!pip -q install torch==1.7
!pip -q install transformers
!pip -q install datasets
!pip -q install tqdm
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
EvalPrediction, Trainer, TrainingArguments,
default_data_collator)
datasets = load_dataset("imdb")
batch_size = 16
max_seq_length = 128
model_name_or_path = "bert-base-cased"
datasets
print(
"Total # of rows in training dataset {} and size {:5.2f} MB".format(
datasets["train"].shape[0], datasets["train"].size_in_bytes / (1024 * 1024)
)
)
print(
"Total # of rows in test dataset {} and size {:5.2f} MB".format(
datasets["test"].shape[0], datasets["test"].size_in_bytes / (1024 * 1024)
)
)
datasets["train"][0]
label_list = datasets["train"].unique("label")
import random
import pandas as pd
from datasets import ClassLabel, Sequence
from IPython.display import HTML, display
def show_random_elements(dataset, num_examples=2):
assert num_examples <= len(
dataset
), "Can't pick more elements than there are in the dataset."
picks = []
for _ in range(num_examples):
pick = random.randint(0, len(dataset) - 1)
while pick in picks:
pick = random.randint(0, len(dataset) - 1)
picks.append(pick)
df = pd.DataFrame(dataset[picks])
for column, typ in dataset.features.items():
if isinstance(typ, ClassLabel):
df[column] = df[column].transform(lambda i: typ.names[i])
elif isinstance(typ, Sequence) and isinstance(typ.feature, ClassLabel):
df[column] = df[column].transform(
lambda x: [typ.feature.names[i] for i in x]
)
display(HTML(df.to_html()))
show_random_elements(datasets["train"])
tokenizer = AutoTokenizer.from_pretrained(
model_name_or_path,
use_fast=True,
)
# 'use_fast' ensure that we use fast tokenizers (backed by Rust) from the 🤗 Tokenizers library.
tokenizer("Hello, this is one sentence!")
example = datasets["train"][4]
print(example)
tokenizer(
["Hello", ",", "this", "is", "one", "sentence", "split", "into", "words", "."],
is_split_into_words=True,
)
# Dataset loading repeated here to make this cell idempotent
# Since we are over-writing datasets variable
datasets = load_dataset("imdb")
# TEMP: We can extract this automatically but Unique method of the dataset
# is not reporting the label -1 which shows up in the pre-processing
# Hence the additional -1 term in the dictionary
label_to_id = {1: 1, 0: 0, -1: 0}
def preprocess_function(examples):
# Tokenize the texts
args = (examples["text"],)
result = tokenizer(
*args, padding="max_length", max_length=max_seq_length, truncation=True
)
# Map labels to IDs (not necessary for GLUE tasks)
if label_to_id is not None and "label" in examples:
result["label"] = [label_to_id[example] for example in examples["label"]]
return result
datasets = datasets.map(preprocess_function, batched=True, load_from_cache_file=True)
model = AutoModelForSequenceClassification.from_pretrained(
model_name_or_path, num_labels=len(label_list)
)
args = TrainingArguments(
evaluation_strategy="epoch",
learning_rate=2e-5,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
num_train_epochs=1,
weight_decay=0.01,
output_dir="/tmp/cls",
)
def compute_metrics(p: EvalPrediction):
preds = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions
preds = np.argmax(preds, axis=1)
return {"accuracy": (preds == p.label_ids).astype(np.float32).mean().item()}
trainer = Trainer(
model,
args,
train_dataset=datasets["train"],
eval_dataset=datasets["test"],
data_collator=default_data_collator,
tokenizer=tokenizer,
compute_metrics=compute_metrics,
)
trainer.train()
trainer.save_model("./finetuned-bert-classifier")
trainer.evaluate()
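# The description also mentions applying the metrics function to the output of the predict
# method; a minimal sketch using the objects defined above (variable names are illustrative):
raw_pred = trainer.predict(datasets["test"])
print(compute_metrics(EvalPrediction(predictions=raw_pred.predictions, label_ids=raw_pred.label_ids)))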
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_name_or_path = "bert-base-cased"
label_text = {0: "Negative", 1: "Positive"}
saved_model_path = "./finetuned-bert-classifier"
def predict(input_text, saved_model_path):
# initialize tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
# preprocess and encode input text
predict_input = tokenizer.encode(
        input_text, truncation=True, max_length=128, return_tensors="pt"
)
# load trained model
loaded_model = AutoModelForSequenceClassification.from_pretrained(saved_model_path)
# get predictions
output = loaded_model(predict_input)
# return labels
label_id = torch.argmax(*output.to_tuple(), dim=1)
    print(f"Review text: {input_text}")
print(f"Sentiment : {label_text[label_id.item()]}\n")
# example #1
review_text = (
Jaw dropping visual affects and action! One of the best I have seen to date.
)
predict_input = predict(review_text, saved_model_path)
# example #2
review_text = Take away the CGI and the A-list cast and you end up with film with less punch.
predict_input = predict(review_text, saved_model_path)
!cd python_package && ./scripts/train-local.sh
!cd python_package && ./scripts/train-cloud.sh
!cd custom_container && ./scripts/train-local.sh
!cd custom_container && ./scripts/train-cloud.sh
!gcloud ai-platform jobs describe $JOB_NAME
!gcloud ai-platform jobs stream-logs $JOB_NAME
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can use IPython.display classes to ensure embedded versions of your diagram will persist in your notebook, even without JavaScript!
Step2: Saving images to files
Step3: Because the diagram is actually drawn in the browser, it is not immediately available to save to disk. It can be convenient to display the diagram in the notebook and simultaneously save it, so auto_save_png and auto_save_svg methods are available which save the images as soon as they are available
|
<ASSISTANT_TASK:>
Python Code:
from ipysankeywidget import SankeyWidget
from ipywidgets import Layout
links = [
{'source': 'start', 'target': 'A', 'value': 2},
{'source': 'A', 'target': 'B', 'value': 2},
{'source': 'C', 'target': 'A', 'value': 2},
{'source': 'A', 'target': 'C', 'value': 2},
]
layout = Layout(width="500", height="200")
sankey = SankeyWidget(links=links, layout=layout)
sankey
from IPython.display import (
Image,
SVG
)
import base64
data = base64.decodebytes(bytes(sankey.png, 'ascii'))
Image(data)
SVG(sankey.svg)
!rm test.svg test.png
sankey.save_svg('test.svg')
%%html
<img src="test.svg" />
# This won't work
s = SankeyWidget(links=links, layout=layout)
s.save_png('test.png')
s
%%html
<img src="test.png" />
# This does work
SankeyWidget(links=links, layout=layout).auto_save_png('test.png')
%%html
<img src="test.png" />
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: TODO
|
<ASSISTANT_TASK:>
Python Code:
import os.path, gitpath #pip install git+'https://github.com/ruxi/python-gitpath.git'
os.chdir(gitpath.root()) # changes path to .git root
#os.getcwd() #check current work directory
py_commit_msg = "templating py_commit_msg"
%%bash -s "$py_commit_msg"
echo $1
git add --all :/
git commit -a -m "$1" #message from py_commit_msg
git push origin master
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Wigner Function
Step2: Time Evolution
Step3: Plots
Step4: References
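For orientation, the code below assembles a dissipative Dicke-type light-matter model with Hamiltonian $H = \omega_0 J_z + \omega_c a^{\dagger}a + g\,(a + a^{\dagger})J_x$; the emission rate $g_E$, dephasing rate $g_D$ and cavity loss $\kappa$ enter through the Liouvillian.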
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib as mpl
from matplotlib import cm
import matplotlib.pyplot as plt
from qutip import *
from piqs import *
#TLS parameters
N = 6
ntls = N
nds = num_dicke_states(ntls)
[jx, jy, jz, jp, jm] = jspin(N)
w0 = 1
gE = 0.1
gD = 0.01
h = w0 * jz
#photonic parameters
nphot = 20
wc = 1
kappa = 1
ratio_g = 2
g = ratio_g/np.sqrt(N)
a = destroy(nphot)
#TLS liouvillian
system = Dicke(N = N)
system.hamiltonian = h
system.emission = gE
system.dephasing = gD
liouv = system.liouvillian()
#photonic liouvilian
h_phot = wc * a.dag() * a
c_ops_phot = [np.sqrt(kappa) * a]
liouv_phot = liouvillian(h_phot, c_ops_phot)
#identity operators
id_tls = to_super(qeye(nds))
id_phot = to_super(qeye(nphot))
#light-matter superoperator and total liouvillian
liouv_sum = super_tensor(liouv_phot, id_tls) + super_tensor(id_phot, liouv)
h_int = g * tensor(a + a.dag(), jx)
liouv_int = -1j* spre(h_int) + 1j* spost(h_int)
liouv_tot = liouv_sum + liouv_int
#total operators
jz_tot = tensor(qeye(nphot), jz)
jpjm_tot = tensor(qeye(nphot), jp*jm)
nphot_tot = tensor(a.dag()*a, qeye(nds))
rho_ss = steadystate(liouv_tot)
jz_ss = expect(jz_tot, rho_ss)
jpjm_ss = expect(jpjm_tot, rho_ss)
nphot_ss = expect(nphot_tot, rho_ss)
psi = rho_ss.ptrace(0)
xvec = np.linspace(-6, 6, 100)
W = wigner(psi, xvec, xvec)
jmax = (0.5 * N)
j2max = (0.5 * N + 1) * (0.5 * N)
plt.rc('text', usetex = True)
label_size = 20
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
wmap = wigner_cmap(W) # Generate Wigner colormap
nrm = mpl.colors.Normalize(0, W.max())
max_cb =np.max(W)
min_cb =np.min(W)
fig2 = plt.figure(2)
plotw = plt.contourf(xvec, xvec, W, 100, cmap=wmap, norm=nrm)
plt.title(r"Wigner Function", fontsize=label_size);
plt.xlabel(r'$x$', fontsize = label_size)
plt.ylabel(r'$p$', fontsize = label_size)
cb = plt.colorbar()
cb.set_ticks( [min_cb, max_cb])
cb.set_ticklabels([r'$0$',r'max'])
plt.show()
plt.close()
#set initial conditions for spins and cavity
tmax = 40
nt = 1000
t = np.linspace(0, tmax, nt)
rho0 = dicke(N, N/2, N/2)
rho0_phot = ket2dm(basis(nphot,0))
rho0_tot = tensor(rho0_phot, rho0)
result = mesolve(liouv_tot, rho0_tot, t, [], e_ops = [jz_tot, jpjm_tot, nphot_tot])
rhot_tot = result.states
jzt_tot = result.expect[0]
jpjmt_tot = result.expect[1]
adagat_tot = result.expect[2]
jmax = (N/2)
j2max = N/2*(N/2+1)
fig1 = plt.figure(1)
plt.plot(t, jzt_tot/jmax, 'k-', label='time evolution')
plt.plot(t, t*0+jz_ss/jmax, 'g--', label='steady state')
plt.title('Total inversion', fontsize = label_size)
plt.xlabel(r'$t$', fontsize = label_size)
plt.ylabel(r'$\langle J_z\rangle (t)$', fontsize = label_size)
plt.legend(fontsize = label_size)
plt.show()
plt.close()
fig2 = plt.figure(2)
plt.plot(t, jpjmt_tot/j2max, 'k-', label='time evolution')
plt.plot(t, t*0+jpjm_ss/j2max, 'g--', label='steady state')
plt.xlabel(r'$t$', fontsize = label_size)
plt.ylabel(r'$\langle J_{+}J_{-}\rangle (t)$', fontsize = label_size)
plt.title('Light emission', fontsize = label_size)
plt.xlabel(r'$t$', fontsize = label_size)
plt.legend(fontsize = label_size)
plt.show()
plt.close()
fig3 = plt.figure(3)
plt.plot(t, adagat_tot, 'k-', label='time evolution')
plt.plot(t, t*0 + nphot_ss, 'g--', label='steady state')
plt.title('Cavity photons', fontsize = label_size)
plt.xlabel(r'$t$', fontsize = label_size)
plt.ylabel(r'$\langle a^\dagger a \rangle (t)$', fontsize = label_size)
plt.legend(fontsize = label_size)
plt.show()
plt.close()
qutip.about()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate data
Step2: Model
Step3: Note that we added a column of ones to the regressor matrix for the bias term. We model the slope and the bias term in the same node so we do not factorize between them
Step4: The first element is the slope which multiplies x and the second element is the bias term which multiplies the constant ones. Now we compute the dot product of X and B
Step5: The noise parameter
Step6: The noisy observations
Step7: Inference
Step8: Construct the variational Bayesian (VB) inference engine by giving all stochastic nodes
Step9: Iterate until convergence
Step10: Results
Step11: Note that we use the learned node B but create a new regressor array for predictions. Plot the predictive distribution of noiseless function values
Step12: Note that the above plot shows two standard deviations of the posterior of the noiseless function, thus the data points may lie well outside this range. The red line shows the true linear function. Next, plot the distribution of the noise parameter and the true value, $2^{-2}=0.25$
Step13: The noise level is captured quite well, although the posterior has more mass on larger noise levels (smaller precision parameter values). Finally, plot the distribution of the regression parameters and mark the true value
Step14: In this case, the true parameters are captured well by the posterior distribution.
Step15: This node contains both the regression parameter vector and the noise parameter. We compute the dot product similarly as before
Step16: However, Y is constructed as follows
Step17: Because the noise parameter is already in F_tau we can give a constant one as the second argument. The total noise parameter for Y is the product of the noise parameter in F_tau and one. Now, inference is run similarly as before
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
k = 2 # slope
c = 5 # bias
s = 2 # noise standard deviation
# This cell content is hidden from Sphinx-generated documentation
%matplotlib inline
np.random.seed(42)
x = np.arange(10)
y = k*x + c + s*np.random.randn(10)
X = np.vstack([x, np.ones(len(x))]).T
from bayespy.nodes import GaussianARD
B = GaussianARD(0, 1e-6, shape=(2,))
from bayespy.nodes import SumMultiply
F = SumMultiply('i,i', B, X)
from bayespy.nodes import Gamma
tau = Gamma(1e-3, 1e-3)
Y = GaussianARD(F, tau)
Y.observe(y)
from bayespy.inference import VB
Q = VB(Y, B, tau)
Q.update(repeat=1000)
xh = np.linspace(-5, 15, 100)
Xh = np.vstack([xh, np.ones(len(xh))]).T
Fh = SumMultiply('i,i', B, Xh)
import bayespy.plot as bpplt
bpplt.pyplot.figure()
bpplt.plot(Fh, x=xh, scale=2)
bpplt.plot(y, x=x, color='r', marker='x', linestyle='None')
bpplt.plot(k*xh+c, x=xh, color='r');
bpplt.pyplot.figure()
bpplt.pdf(tau, np.linspace(1e-6,1,100), color='k')
bpplt.pyplot.axvline(s**(-2), color='r');
bpplt.pyplot.figure();
bpplt.contour(B, np.linspace(1,3,1000), np.linspace(1,9,1000),
n=10, colors='k');
bpplt.plot(c, x=k, color='r', marker='x', linestyle='None',
markersize=10, markeredgewidth=2)
bpplt.pyplot.xlabel(r'$k$');
bpplt.pyplot.ylabel(r'$c$');
from bayespy.nodes import GaussianGamma
B_tau = GaussianGamma(np.zeros(2), 1e-6*np.identity(2), 1e-3, 1e-3)
F_tau = SumMultiply('i,i', B_tau, X)
Y = GaussianARD(F_tau, 1)
Y.observe(y)
Q = VB(Y, B_tau)
Q.update(repeat=1000)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Numerical Methods
Step5: Finite differencing formulas
Step6: Convergence
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot
import numpy
def linear_interpolation(f, x_points):
Return the function that linearly interpolates f at the two x_points, and its derivative.
xi, xip = x_points
g = lambda x : (x - xip) / (xi - xip) * f(xi) + (x - xi) / (xip - xi) * f(xip)
dg = lambda x : f(xi) / (xi - xip) + f(xip) / (xip - xi)
return g, dg
def quadratic_interpolation(f, x_points):
Return the function that quadratically interpolates f at the two x_points, and its derivative.
xim, xi, xip = x_points
g = lambda x : (x - xi ) * (x - xip) / (xim - xi ) / (xim - xip) * f(xim) + \
(x - xim) * (x - xip) / (xi - xim) / (xi - xip) * f(xi ) + \
(x - xim) * (x - xi ) / (xip - xim) / (xip - xi ) * f(xip)
dg = lambda x : (2.0*x - xi - xip) / (xim - xi ) / (xim - xip) * f(xim) + \
(2.0*x - xim - xip) / (xi - xim) / (xi - xip) * f(xi ) + \
(2.0*x - xim - xi ) / (xip - xim) / (xip - xi ) * f(xip)
return g, dg
def linear_interpolated_data(f, x, n_intervals):
Takes the x coordinates and interpolates f(x) over n_intervals (ie, at n_intervals+1 equally spaced points),
returning the interpolated values at coordinates x.
n_interp_points = int((len(x)-1)/n_intervals)
x_interpolant = numpy.linspace(0, 1, n_intervals+1)
g_fd = numpy.zeros_like(x)
dg_fd = numpy.zeros_like(x)
for i in range(n_intervals):
g, dg = linear_interpolation(f,
[x_interpolant[i],
x_interpolant[i+1]])
g_fd[n_interp_points*i:n_interp_points*(i+1)] = g(x[n_interp_points*i:n_interp_points*(i+1)])
dg_fd[n_interp_points*i:n_interp_points*(i+1)] = dg(x[n_interp_points*i:n_interp_points*(i+1)])
g_fd[-1] = g(x[-1])
dg_fd[-1] = dg(x[-1])
return g_fd, dg_fd
def quadratic_interpolated_data(f, x, n_intervals):
Takes the x coordinates and interpolates f(x) over n_intervals (ie, at n_intervals+1 equally spaced points),
returning the interpolated values at coordinates x.
n_interp_points = int((len(x)-1)/n_intervals)
x_interpolant = numpy.linspace(0, 1, n_intervals+1)
g_cd = numpy.zeros_like(x)
dg_cd = numpy.zeros_like(x)
for i in range(1, n_intervals, 2):
g, dg = quadratic_interpolation(f,
[x_interpolant[i-1],
x_interpolant[i],
x_interpolant[i+1]])
g_cd[n_interp_points*(i-1):n_interp_points*(i+1)] = g(x[n_interp_points*(i-1):n_interp_points*(i+1)])
dg_cd[n_interp_points*(i-1):n_interp_points*(i+1)] = dg(x[n_interp_points*(i-1):n_interp_points*(i+1)])
g_cd[-1] = g(x[-1])
dg_cd[-1] = dg(x[-1])
return g_cd, dg_cd
def f(x):
return 1-(x-0.25)**2+numpy.sin(3*numpy.pi*x)**3
x = numpy.linspace(0, 1, 129)
n_intervals = 4
g_fd, dg_fd = linear_interpolated_data(f, x,
n_intervals)
g_cd, dg_cd = quadratic_interpolated_data(f, x,
n_intervals)
pyplot.figure()
pyplot.plot(x, f(x), 'k-', lw=2)
pyplot.plot(x, g_fd, 'b--', lw=2)
pyplot.figure()
pyplot.plot(x, f(x), 'k-', lw=2)
pyplot.plot(x, g_cd, 'r--', lw=2)
pyplot.show()
def df(x):
return -2*(x-0.25)+9*numpy.pi*\
numpy.sin(3*numpy.pi*x)**2*\
numpy.cos(3*numpy.pi*x)
n_intervals = 4
g_fd, dg_fd = linear_interpolated_data(f, x,
n_intervals)
g_cd, dg_cd = quadratic_interpolated_data(f, x,
n_intervals)
pyplot.figure()
pyplot.plot(x, df(x), 'k-', lw=2)
pyplot.plot(x, dg_fd, 'b--', lw=2)
pyplot.figure()
pyplot.plot(x, df(x), 'k-', lw=2)
pyplot.plot(x, dg_cd, 'r--', lw=2)
pyplot.show()
def backward_differencing(f, x_i, dx):
f_i = f(x_i)
f_i_minus_1 = f(x_i - dx)
return (f_i - f_i_minus_1) / dx
def forward_differencing(f, x_i, dx):
f_i = f(x_i)
f_i_plus_1 = f(x_i + dx)
return (f_i_plus_1 - f_i) / dx
def central_differencing(f, x_i, dx):
f_i = f(x_i)
f_i_minus_1 = f(x_i - dx)
f_i_plus_1 = f(x_i + dx)
diff_1 = (f_i_plus_1-f_i_minus_1)/(2.0*dx)
diff_2 = (f_i_minus_1-2.0*f_i+f_i_plus_1)/(dx**2)
return diff_1, diff_2
dx = 0.1
bd = backward_differencing(numpy.exp, 0.0, dx)
fd = forward_differencing(numpy.exp, 0.0, dx)
cd1, cd2 = central_differencing(numpy.exp, 0.0, dx)
print("Find derivatives of exp(x), x=0.\n")
print("Backward:")
print("Value is {:4.2f}, error {:7.2e}\n".format(bd, abs(bd - 1.0)))
print("Forward:")
print("Value is {:4.2f}, error {:7.2e}\n".format(fd, abs(fd - 1.0)))
print("Central (1st derivative):")
print("Value is {:4.2f}, error {:7.2e}\n".format(cd1, abs(cd1 - 1.0)))
print("Central (2 derivatives):")
print("Value is {:4.2f}, error {:7.2e}".format(cd2, abs(cd2 - 1.0)))
dxs = numpy.logspace(-5, 0, 10)
bd_errors = numpy.zeros_like(dxs)
for i, dx in enumerate(dxs):
bd = backward_differencing(numpy.exp, 0.0, dx)
bd_errors[i] = abs(bd - 1.0)
pyplot.figure()
pyplot.loglog(dxs, bd_errors, 'kx', ms=10, mew=2,
label='Backwards')
pyplot.loglog(dxs, dxs*(bd_errors[0]/dxs[0]),
'b--', lw=2, label=r"$p=1$")
pyplot.xlabel(r"$\Delta x$")
pyplot.ylabel("Error")
pyplot.legend(loc="lower right")
pyplot.show()
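# An extra check, not in the original notebook: estimate the observed order of
# convergence from the slope of the log-log error curve (it should be close to 1
# for backward differencing).
slope = numpy.polyfit(numpy.log(dxs), numpy.log(bd_errors), 1)[0]
print("Estimated convergence order of backward differencing: {:.2f}".format(slope))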
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Distributed training with Keras
Step2: Download the dataset
Step3: Define the distribution strategy
Step4: Set up the input pipeline
Step5: The pixel values, which are in the range 0-255, must be normalized to the 0-1 range. Define the normalization in a function.
Step6: Apply this function to the training and test data, shuffle the training data, and batch it for training. Note that we also keep an in-memory cache of the training data to improve performance.
Step7: Create the model
Step8: Define the callbacks
Step9: Train and evaluate
Step10: As shown below, the checkpoints will be saved.
Step11: To see how the model performs, load the latest checkpoint and call evaluate on the test data.
Step12: To see the output, you can download and view the TensorBoard logs in a terminal.
Step13: Export to SavedModel
Step14: Load the model without strategy.scope.
Step15: Load the model with strategy.scope.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Import TensorFlow and TensorFlow Datasets
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
import os
print(tf.__version__)
datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']
strategy = tf.distribute.MirroredStrategy()
print('Number of devices: {}'.format(strategy.num_replicas_in_sync))
# You can also use info.splits.total_num_examples to get the total
# number of examples in the dataset.
num_train_examples = info.splits['train'].num_examples
num_test_examples = info.splits['test'].num_examples
BUFFER_SIZE = 10000
BATCH_SIZE_PER_REPLICA = 64
BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
train_dataset = mnist_train.map(scale).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
eval_dataset = mnist_test.map(scale).batch(BATCH_SIZE)
with strategy.scope():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
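# Optional (not in the original notebook): print a summary of the model architecture
# to verify the layer shapes and parameter counts.
model.summary()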
# Define the checkpoint directory to store the checkpoints
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
# Function for decaying the learning rate.
# You can define any decay function you need.
def decay(epoch):
if epoch < 3:
return 1e-3
elif epoch >= 3 and epoch < 7:
return 1e-4
else:
return 1e-5
# Callback for printing the learning rate at the end of each epoch.
class PrintLR(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
print('\nLearning rate for epoch {} is {}'.format(epoch + 1,
model.optimizer.lr.numpy()))
callbacks = [
tf.keras.callbacks.TensorBoard(log_dir='./logs'),
tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_prefix,
save_weights_only=True),
tf.keras.callbacks.LearningRateScheduler(decay),
PrintLR()
]
model.fit(train_dataset, epochs=12, callbacks=callbacks)
# Check the checkpoint directory
!ls {checkpoint_dir}
model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
eval_loss, eval_acc = model.evaluate(eval_dataset)
print('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc))
!ls -sh ./logs
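# Not part of the original notebook: assuming the TensorBoard notebook extension is
# available, the logs written above could also be viewed inline, e.g.:
# %load_ext tensorboard
# %tensorboard --logdir ./logs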
path = 'saved_model/'
model.save(path, save_format='tf')
unreplicated_model = tf.keras.models.load_model(path)
unreplicated_model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
eval_loss, eval_acc = unreplicated_model.evaluate(eval_dataset)
print('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc))
with strategy.scope():
replicated_model = tf.keras.models.load_model(path)
replicated_model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
eval_loss, eval_acc = replicated_model.evaluate(eval_dataset)
print ('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoding Input
Step21: Encoding
Step24: Decoding - Training
Step27: Decoding - Inference
Step30: Build the Decoding Layer
Step33: Build the Neural Network
Step34: Neural Network Training
Step36: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Sentence to Sequence
Step48: Translate
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_id_text = []
for sentence in source_text.split("\n"):
id_sentence = []
for word in sentence.split():
#print(source_vocab_to_int[word])
id_sentence.append(source_vocab_to_int[word])
source_id_text.append(id_sentence)
target_id_text = []
for sentence in target_text.split("\n"):
id_sentence = []
for word in sentence.split():
id_sentence.append(target_vocab_to_int[word])
id_sentence.append(target_vocab_to_int['<EOS>'])
target_id_text.append(id_sentence)
print("len source_id text "+str(len(source_id_text)))
print("len target_id text "+str(len(target_id_text)))
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
# TODO: Implement Function
inputs = tf.placeholder(tf.int32,shape=[None,None],name="input")
targets = tf.placeholder(tf.int32,shape=[None,None], name="target")
learningRate = tf.placeholder(tf.float32)
keep_prob = tf.placeholder(tf.float32, name="keep_prob")
return inputs, targets, learningRate, keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
print(dec_input)
return dec_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# TODO: Implement Function
# enc_embed_input = tf.contrib.layers.embed_sequence(rnn_inputs)
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
enc_cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
output, enc_state = tf.nn.dynamic_rnn(enc_cell, rnn_inputs, dtype=tf.float32)
return enc_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
    :param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# TODO: Implement Function
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
train_logits = output_fn(train_pred)
return train_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)
return inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)
with tf.variable_scope("decoding") as decoding_scope:
# Output Layer
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)
with tf.variable_scope("decoding") as decoding_scope:
training_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob)
with tf.variable_scope("decoding", reuse=True) as decoding_scope:
inference_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'],sequence_length - 1, vocab_size,
decoding_scope, output_fn, keep_prob)
return training_logits, inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
encoder_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)
dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)
embed_target = tf.contrib.layers.embed_sequence(dec_input, target_vocab_size, dec_embedding_size)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
#dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, embed_target)
training_logits, inference_logits = decoding_layer(embed_target, dec_embeddings, encoder_state, target_vocab_size,
sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)
return training_logits, inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 60
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 50
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 13
decoding_embedding_size = 13
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.8
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
    # Lowercase the sentence and map each word to its id; words that are not in the
    # vocabulary fall back to the '<UNK>' token (assumed to be present in vocab_to_int,
    # as in the project's helper module).
    return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note that $ABD = \beta_1$, $DBC = \beta_2$, $ABC = \beta_1 + \beta_2$, $BCD = 180 - \beta_2 - \beta_3$, $CDB = \beta_3$, $BDA= \beta_4$, $CDA = \beta_3 + \beta_4$, $DAB = 180 - \beta_1 - \beta_4$. Then
|
<ASSISTANT_TASK:>
Python Code:
import numpy
X = numpy.array([
[1, 0, 0, 0],
[0, 1, 0, 0],
[1, 1, 0, 0],
[0, -1, -1, 0],
[0, 0, 1, 0],
[0, 0, 0, 1],
[0, 0, 1, 1],
[-1, 0, 0, -1]
])
Y = numpy.array([50.78, 30.25, 78.29, 99.57 - 180, 50.42, 40.59, 88.87, 89.86 - 180]).T
Beta = numpy.linalg.inv(X.T @ X) @ X.T @ Y
print('Beta =', Beta)
n, k = X.shape
RSS = (Y - X @ Beta).T @ (Y - X @ Beta)
sigma2 = RSS / (n - k)
print('sigma^2 =', sigma2)
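# A short extension (not in the original solution): under the usual OLS assumptions the
# covariance matrix of the estimate is sigma^2 * (X^T X)^{-1}, whose diagonal gives the
# squared standard errors of the individual beta_i.
CovBeta = sigma2 * numpy.linalg.inv(X.T @ X)
print('standard errors =', numpy.sqrt(numpy.diag(CovBeta)))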
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Visualization with plotly_express
Step2: lang
Step3: lang
Step4: lang
Step5: lang
Step6: lang
Step7: lang
Step9: Preparation exercise. Visualize the DataFrame (Visualize the dataset)
Step10: lang
|
<ASSISTANT_TASK:>
Python Code:
# CSVファイルからデータを読み込みましょう。 Read the data from CSV file.
df = pd.read_csv('data/16-July-2019-Tokyo-hourly.csv')
print("行数は %d です" % len(df))
print(df.dtypes)
df.head()
px.line(df, y='Temperature_degC')
px.line(df, x='Time_Hour', y='Temperature_degC')
df.dtypes
px.line(df, y='Pressure_hPa')
px.scatter(df, x='Temperature_degC', y='Pressure_hPa', color='Time_Hour')
px.scatter(df, x='Time_Hour', y='Temperature_degC', size='Precipitation_mm')
px.scatter(df, x='Time_Hour', y='Temperature_degC', size='Precipitation_mm', color='Precipitation_mm')
# MASTER ONLY
df.melt(id_vars=['Time_Hour'], value_vars=['Temperature_degC', 'Pressure_hPa']).iloc[[0,1,-2,-1]]
# MASTER ONLY: This example is too confusing.
# This example is demonstrating how Grammar of Graphics makes it hard to create confusing plots
# when starting from a tidy dataset. E.g it is hard to plot different variables together.
# In this example it takes some effort even to try plotting temperature and pressure together,
# and even then it does not work well because of different value range. Grammar of Graphics
# assumes that all plots shown together have one variable, and as a consequence of that chooses
# a single axis scale, which does not work here for both variables, because their ranges
# are very different and cannot be plotted with the same axis scale.
px.line(df.melt(id_vars=['Time_Hour'], value_vars=['Temperature_degC', 'Pressure_hPa']), x='Time_Hour', y='value', facet_col='variable')
px.scatter_matrix(df)
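# An extra example, not in the original notebook: a quick histogram of one column.
px.histogram(df, x='Temperature_degC')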
%%solution
# BEGIN PROMPT
df15 = pd.read_csv(...)
px.___(...)
# END PROMPT
# BEGIN SOLUTION
df15 = pd.read_csv('data/15-July-2019-Tokyo-hourly.csv')
px.bar(df15, x='Time_Hour', y='SunshineDuration_h')
# END SOLUTION
%%inlinetest FigureTest
try:
df15
assert len(df15) == 24, "Did you load the right data set? Expected to see 24 rows, but got %d" % len(df15)
except NameError:
assert False, "Your code does not define df15"
# Check the submission syntactically.
import ast
# This import will be uncommented when executed on autograder server.
#import submission_source
try:
a = ast.parse(submission_source.source)
assert len(a.body) > 0, "Is your code submission empty?"
e = None
for x in a.body:
if x.__class__ == ast.Expr:
e = x
break
assert e is not None, "Your code does not have any function call in it?"
assert e.value.__class__ == ast.Call, "Do you have a function call in your cell? The code may be okay, but I am just not sure"
assert e.value.func.__class__ == ast.Attribute, "I do not recognize your function call. The code may be okay, but I am just not sure"
assert e.value.func.attr in set(['line', 'bar', 'scatter']), "Expected to see a px.line() or px.bar() or px.scatter plot, but got %s. This may be okay, but I am just not sure" % e.value.func.attr
except AssertionError as e:
raise e
except SyntaxError as e:
assert False, "Your code does not compile: %s" % e
except IndentationError as e:
assert False, "Something is wrong with the test: %s" % e
except Exception as e:
assert False, "Something is wrong with the test: %s" % e
# TODO(salikh): Implement a library for easy checking of syntactic conditiions.
#assert ae.TopExprIsCall(a), "Your "
# MASTER ONLY
# This visualization should work as well.
px.line(df15, x='Time_Hour', y='SunshineDuration_h')
%%submission
px.line(df15, y='SunshineDuration_h')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create example vector data
Step2: The GeoDataFrame looks like this
Step3: Open the raster data
Step4: Let's see the raster data with the point data overlaid.
Step5: Sampling the data
Step6: Carry out the sampling of the data and store the results in a new column called value. Note that if the image has more than one band, a value is returned for each band.
|
<ASSISTANT_TASK:>
Python Code:
import geopandas
import rasterio
import matplotlib.pyplot as plt
from shapely.geometry import Point
# Create sampling points
points = [Point(625466, 5621289), Point(626082, 5621627), Point(627116, 5621680), Point(625095, 5622358)]
gdf = geopandas.GeoDataFrame([1, 2, 3, 4], geometry=points, crs=32630)
gdf.head()
src = rasterio.open('s2a_l2a_fishbourne.tif')
from rasterio.plot import show
fig, ax = plt.subplots()
# transform rasterio plot to real world coords
extent=[src.bounds[0], src.bounds[2], src.bounds[1], src.bounds[3]]
ax = rasterio.plot.show(src, extent=extent, ax=ax, cmap='pink')
gdf.plot(ax=ax)
coord_list = [(x,y) for x,y in zip(gdf['geometry'].x , gdf['geometry'].y)]
gdf['value'] = [x for x in src.sample(coord_list)]
gdf.head()
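# A possible follow-up (not part of the original example): src.sample() returns one
# value per raster band, so for a multi-band image you may want a single band as a
# plain numeric column.
gdf['band1'] = gdf['value'].apply(lambda v: v[0])
gdf.head()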
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Variables, data types, operators
Step2: Each of these types can be used in a boolean (truth-value) context. In such a context, an empty string or an empty list, for example, is equivalent to False.
Step3: Types can be combined with the help of operators. For numeric types there are, for example, the arithmetic operators
Step4: A complete overview of the types and the available operations can be found in the official documentation.
Step5: Python has no for loop like the one in C. The for keyword always comes accompanied by its friend in; the two are inseparable. It is used to iterate over sequences.
Step6: That reads much more pleasantly than the while loop, doesn't it? If you ever need the indices of a sequence explicitly, the enumerate function helps.
Step8: Functions
Step9: Take a look at the help for print and find out how we can prevent a new line from being started after every call to print. Then adapt the following cell so that all numbers appear on the same line, separated by spaces.
Step10: Functions can take several arguments. A distinction is made between arguments and keyword arguments. Primitive data types such as int can be predefined as keyword arguments because they are passed "by value". More complex data types such as lists are passed as references, which means that a default value could be modified inside the function. For this reason, the pattern from the example below is usually used.
Step11: Comprehension
Step12: Sequences can be filtered in the same way.
Step13: Of course, all of this can be combined and nested.
|
<ASSISTANT_TASK:>
Python Code:
print('Hello, world!')
# This is a comment :)
x = 1 # x is an int
print(x)
x = 'Hallo, Welt!' # x is now a string
print(x)
y = 3.1415 # y is a float
print(y)
z = [1, 'a', 2.7182] # z is a (heterogeneous) list with three entries
# Even though the type system does not require it,
# it is a good idea to keep lists homogeneous.
z = [1, 2, 3]
print(z)
print(z[1])
print(z[0:-1]) # Parts of lists can be addressed with the help of "slices".
# The syntax is {start}:{end}:{step}, where negative
# indices are counted from the end of the list.
(a, b) = (100, 'Zaphod') # Tuples can be used to unpack expressions
# and thus effectively assign several variables at once
# The parentheses can also be omitted
a, b = 42, 'Ford'
print(a)
print(b)
x = 'Die Antwort ist {}.'.format(a) # Strings can be formatted, so the contents
# of variables can be printed conveniently.
print(x)
not []
a = 1.337
b = a * 5
c = b ** 10
c
name = ''
if 5 == 3:
print('Irgendwas stimmt mit dem Universum nicht.')
elif name: # Here the truth value (truthiness) is used
print('Hallo, {}!'.format(name))
else: # Fill in a name above to avoid ending up here
print('Nun, das ist jetzt etwas peinlich…')
i = 0
while i < 5:
print(i)
i += 1
names = ['Klaus', 'Dieter', 'Hans']
for name in names:
print('Hello {}'.format(name))
for i in range(5): # Handy for generating sequences of numbers.
print(i)
for index, name in enumerate(names):
print('Person {} heißt {}.'.format(index, name))
def square(x):
This function squares its input.
x - A value of a type that implements `**` (power).
return x ** 2
print(square(4))
help(square)
for i in range(100):
print(i)
greetings = {
'English': 'Hello, {}!',
'Deutsch': 'Hallo, {}!',
'Francais': 'Salut, {}!',
'Espagnol': '¡Hola, {}!'
}
def greet(name, language=None):
    if language is None: # This is how a default value for sequence-type arguments is defined
language = 'English'
greeting = greetings.get(language)
if greeting is None:
print("Well, this is embarassing. I don't speak {}.".format(language))
else:
print(greeting.format(name))
greet('William')
greet('Wilhelm', language='Deutsch')
greet('Guillaume', 'Francais') # If the order is right, the keyword
                               # can also be omitted.
greet('Guillermo', language='Espagnol')
greet('Guglielmo', language='Italiano')
[x ** 2 for x in range(10)]
[x for x in range(10) if x % 2 != 0]
odd_squares = [x ** 2 for x in range(10) if x % 2 == 1]
print(odd_squares)
nums = [z for z in [x + y for x in range(3) for y in range(3, 6)] if z < 6]
print(nums)
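# An additional example (not in the original tutorial): comprehensions also work for
# dicts and sets.
squares_by_number = {x: x ** 2 for x in range(5)}
print(squares_by_number)
unique_remainders = {x % 3 for x in range(10)}
print(unique_remainders)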
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: K-nearest neighbors
Step2: Explain a single prediction from the test set
Step3: Explain all the predictions in the test set
Step4: Support vector machine with a linear kernel
Step5: Support vector machine with a radial basis function kernel
Step6: Logistic regression
Step7: Decision tree
Step8: Random forest
Step9: Neural network
|
<ASSISTANT_TASK:>
Python Code:
import sklearn
from sklearn.model_selection import train_test_split
import numpy as np
import shap
import time
X_train,X_test,Y_train,Y_test = train_test_split(*shap.datasets.iris(), test_size=0.2, random_state=0)
# rather than use the whole training set to estimate expected values, we could summarize with
# a set of weighted kmeans, each weighted by the number of points they represent. But this dataset
# is so small we don't worry about it
#X_train_summary = shap.kmeans(X_train, 50)
def print_accuracy(f):
print("Accuracy = {0}%".format(100*np.sum(f(X_test) == Y_test)/len(Y_test)))
time.sleep(0.5) # to let the print get out before any progress bars
shap.initjs()
knn = sklearn.neighbors.KNeighborsClassifier()
knn.fit(X_train, Y_train)
print_accuracy(knn.predict)
explainer = shap.KernelExplainer(knn.predict_proba, X_train)
shap_values = explainer.shap_values(X_test.iloc[0,:])
shap.force_plot(explainer.expected_value[0], shap_values[0], X_test.iloc[0,:])
shap_values = explainer.shap_values(X_test)
shap.force_plot(explainer.expected_value[0], shap_values[0], X_test)
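# An optional global view (not in the original notebook): summarize the KNN SHAP values
# for the first class across all test samples.
shap.summary_plot(shap_values[0], X_test)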
svc_linear = sklearn.svm.SVC(kernel='linear', probability=True)
svc_linear.fit(X_train, Y_train)
print_accuracy(svc_linear.predict)
# explain all the predictions in the test set
explainer = shap.KernelExplainer(svc_linear.predict_proba, X_train)
shap_values = explainer.shap_values(X_test)
shap.force_plot(explainer.expected_value[0], shap_values[0], X_test)
svc_rbf = sklearn.svm.SVC(kernel='rbf', probability=True)  # renamed from svc_linear: this model uses an RBF kernel
svc_rbf.fit(X_train, Y_train)
print_accuracy(svc_rbf.predict)
# explain all the predictions in the test set
explainer = shap.KernelExplainer(svc_rbf.predict_proba, X_train)
shap_values = explainer.shap_values(X_test)
shap.force_plot(explainer.expected_value[0], shap_values[0], X_test)
linear_lr = sklearn.linear_model.LogisticRegression()
linear_lr.fit(X_train, Y_train)
print_accuracy(linear_lr.predict)
# explain all the predictions in the test set
explainer = shap.KernelExplainer(linear_lr.predict_proba, X_train)
shap_values = explainer.shap_values(X_test)
shap.force_plot(explainer.expected_value[0], shap_values[0], X_test)
import sklearn.tree
dtree = sklearn.tree.DecisionTreeClassifier(min_samples_split=2)
dtree.fit(X_train, Y_train)
print_accuracy(dtree.predict)
# explain all the predictions in the test set
explainer = shap.KernelExplainer(dtree.predict_proba, X_train)
shap_values = explainer.shap_values(X_test)
shap.force_plot(explainer.expected_value[0], shap_values[0], X_test)
from sklearn.ensemble import RandomForestClassifier
rforest = RandomForestClassifier(n_estimators=100, max_depth=None, min_samples_split=2, random_state=0)
rforest.fit(X_train, Y_train)
print_accuracy(rforest.predict)
# explain all the predictions in the test set
explainer = shap.KernelExplainer(rforest.predict_proba, X_train)
shap_values = explainer.shap_values(X_test)
shap.force_plot(explainer.expected_value[0], shap_values[0], X_test)
from sklearn.neural_network import MLPClassifier
nn = MLPClassifier(solver='lbfgs', alpha=1e-1, hidden_layer_sizes=(5, 2), random_state=0)
nn.fit(X_train, Y_train)
print_accuracy(nn.predict)
# explain all the predictions in the test set
explainer = shap.KernelExplainer(nn.predict_proba, X_train)
shap_values = explainer.shap_values(X_test)
shap.force_plot(explainer.expected_value[0], shap_values[0], X_test)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Decoding in sensor space using a linear SVM
|
<ASSISTANT_TASK:>
Python Code:
# Author: Denis A. Engemann <denis.engemann@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import mne
import os
import numpy as np
from mne import io
from mne.datasets import sample
from mne.minimum_norm import apply_inverse_epochs, read_inverse_operator
print(__doc__)
data_path = sample.data_path()
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-oct-6-fwd.fif'
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
subjects_dir = data_path + '/subjects'
subject = os.environ['SUBJECT'] = subjects_dir + '/sample'
os.environ['SUBJECTS_DIR'] = subjects_dir
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
fname_cov = data_path + '/MEG/sample/sample_audvis-cov.fif'
label_names = 'Aud-rh', 'Vis-rh'
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
tmin, tmax = -0.2, 0.5
event_id = dict(aud_r=2, vis_r=4) # load contra-lateral conditions
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(2, None, method='iir') # replace baselining with high-pass
events = mne.read_events(event_fname)
# Set up pick list: MEG - bad channels (modify to your needs)
raw.info['bads'] += ['MEG 2443'] # mark bads
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True, eog=True,
exclude='bads')
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks, baseline=None, preload=True,
reject=dict(grad=4000e-13, eog=150e-6),
decim=5) # decimate to save memory and increase speed
epochs.equalize_event_counts(list(event_id.keys()), 'mintime', copy=False)
epochs_list = [epochs[k] for k in event_id]
# Compute inverse solution
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
n_times = len(epochs.times)
n_vertices = 3732
n_epochs = len(epochs.events)
# Load data and compute inverse solution and stcs for each epoch.
noise_cov = mne.read_cov(fname_cov)
inverse_operator = read_inverse_operator(fname_inv)
X = np.zeros([n_epochs, n_vertices, n_times])
# to save memory, we'll load and transform our epochs step by step.
for condition_count, ep in zip([0, n_epochs // 2], epochs_list):
stcs = apply_inverse_epochs(ep, inverse_operator, lambda2,
method, pick_ori="normal", # saves us memory
return_generator=True)
for jj, stc in enumerate(stcs):
X[condition_count + jj] = stc.lh_data
# Make arrays X and y such that :
# X is 3d with X.shape[0] is the total number of epochs to classify
# y is filled with integers coding for the class to predict
# We must have X.shape[0] equal to y.shape[0]
# we know the first half belongs to the first class, the second one
y = np.repeat([0, 1], len(X) // 2)  # belongs to the second class
X = X.reshape(n_epochs, n_vertices * n_times)
# we have to normalize the data before supplying them to our classifier
X -= X.mean(axis=0)
X /= X.std(axis=0)
# prepare classifier
from sklearn.svm import SVC # noqa
from sklearn.cross_validation import ShuffleSplit # noqa
# Define a monte-carlo cross-validation generator (reduce variance):
n_splits = 10
clf = SVC(C=1, kernel='linear')
cv = ShuffleSplit(len(X), n_splits, test_size=0.2)
# setup feature selection and classification pipeline
from sklearn.feature_selection import SelectKBest, f_classif # noqa
from sklearn.pipeline import Pipeline # noqa
# we will use an ANOVA f-test to preselect relevant spatio-temporal units
feature_selection = SelectKBest(f_classif, k=500) # take the best 500
# to make life easier we will create a pipeline object
anova_svc = Pipeline([('anova', feature_selection), ('svc', clf)])
# initialize score and feature weights result arrays
scores = np.zeros(n_splits)
feature_weights = np.zeros([n_vertices, n_times])
# hold on, this may take a moment
for ii, (train, test) in enumerate(cv):
anova_svc.fit(X[train], y[train])
y_pred = anova_svc.predict(X[test])
y_test = y[test]
scores[ii] = np.sum(y_pred == y_test) / float(len(y_test))
feature_weights += feature_selection.inverse_transform(clf.coef_) \
.reshape(n_vertices, n_times)
print('Average prediction accuracy: %0.3f | standard deviation: %0.3f'
% (scores.mean(), scores.std()))
# prepare feature weights for visualization
feature_weights /= (ii + 1) # create average weights
# create mask to avoid division error
feature_weights = np.ma.masked_array(feature_weights, feature_weights == 0)
# normalize scores for visualization purposes
feature_weights /= feature_weights.std(axis=1)[:, None]
feature_weights -= feature_weights.mean(axis=1)[:, None]
# unmask, take absolute values, emulate f-value scale
feature_weights = np.abs(feature_weights.data) * 10
vertices = [stc.lh_vertno, np.array([], int)] # empty array for right hemi
stc_feat = mne.SourceEstimate(feature_weights, vertices=vertices,
tmin=stc.tmin, tstep=stc.tstep,
subject='sample')
brain = stc_feat.plot(views=['lat'], transparent=True,
initial_time=0.1, time_unit='s')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Detection Summary
Step2: Identification Summary
Step3: Distribution of Correct Matches (True Positives) over timedelta categories
Step4: Distribution of Incorrect Matches (False Positives) over timedelta categories
Step5: Identification using different pipeline configuration
Step6: Score Separability Experiments
Step7: Scores of Success Cases
Step8: Scores of All Cases
Step9: Individual Results
Step10: Highest Scoring Success Cases
Step11: Failure Cases (sorted by groundtrue score)
Step12: Failure Cases (sorted by groundfalse score)
|
<ASSISTANT_TASK:>
Python Code:
import ibeis
ibs = ibeis.opendb(db=db)
ibeis.other.dbinfo.show_image_time_distributions(ibs, ibs.get_valid_gids())
_ = ibeis.other.dbinfo.get_dbinfo(ibs)
# Get a sample of images
gids = ibs.get_valid_gids()
aids = ibs.get_image_aids(gids)
nAids_list = list(map(len, aids))
gids_sorted = ut.sortedby(gids, nAids_list)[::-1]
samplex = list(range(5))
print(samplex)
gids_sample = ut.list_take(gids_sorted, samplex)
import ibeis.viz
for gid in ut.ProgressIter(gids_sample, lbl='drawing image'):
ibeis.viz.show_image(ibs, gid)
a = ['unctrl', ctrl]
if db in ['WS_ALL', 'WS_Hard']:
a = [ctrl]
test_result = ibeis.run_experiment(
e='rank_cdf',
db=db,
a=a,
t=['baseline'])
#test_result.print_unique_annot_config_stats()
_ = test_result.draw_func()
test_result = ibeis.run_experiment(
e='timedelta_hist',
db=db,
a=a,
t=['baseline'],
truepos=True)
test_result.draw_func()
# Baseline Timedeltas FP
test_result = ibeis.run_experiment(
e='timedelta_hist',
db=db,
a=a,
t=['baseline'],
falsepos=True)
test_result.draw_func()
test_result = ibeis.run_experiment(
e='rank_cdf',
db=db,
a=['ctrl'],
t=['invar4'])
test_result.draw_func()
a = a
t = ['best']
test_result = ibeis.run_experiment(
e='scores',
db=db,
a=a,
t=t,
f=[':fail=False,min_gf_timedelta=None'],
)
_ = test_result.draw_func()
# Score Seperability with NO FILTERING
test_result = ibeis.run_experiment(
e='scores',
db=db,
a=a,
t=t,
f=[':fail=None,min_gf_timedelta=None']
)
_ = test_result.draw_func()
t = ['best']
a = a
draw_case_kw = dict(show_in_notebook=True, annot_modes=[0, 1])
test_result = ibeis.run_experiment(
e='draw_cases',
db=db,
a=a,
t=t,
f=[':fail=False,min_gf_timedelta=12h,index=0:3,sortdsc=gtscore,without_gf_tag=Photobomb'],
**draw_case_kw)
_ = test_result.draw_func()
test_result = ibeis.run_experiment(
e='draw_cases',
db=db,
a=a,
t=t,
f=[':fail=True,min_gf_timedelta=12h,index=0:3,sortdsc=gtscore'],
**draw_case_kw)
_ = test_result.draw_func()
test_result = ibeis.run_experiment(
e='draw_cases',
db=db,
a=a,
t=t,
f=[':fail=True,min_gf_timedelta=12h,index=0:3,sortdsc=gfscore'],
**draw_case_kw)
_ = test_result.draw_func()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A small Dseqrecord object can be created directly. The Dseqrecord class is a double stranded version of the Biopython SeqRecord class.
Step2: The representation below indicates the size of the sequence and the fact that it is linear (- symbol).
Step3: The Dseqrecord class is the main pydna data type together with the Dseq class. The sequence information is actually held by an internal Dseq object that is accessible from the .seq property
Step4: Dseqrecords can be read from local files in several formats
Step5: The sequence files above all contain the same sequence. We can print the sequence by the .seq property.
Step6: We can also read from a string defined directly in the code
Step7: We can also download sequences from Genbank if we know the accession number. The plasmid pUC19 has the accession number L09137. We have to give pydna a valid email address before we use Genbank in this way. Please change the email address to your own when executing this script. Genbank requires a way to contact its users if there is a problem.
Step8: This molecule is circular so the representation below begins with a "o". The size is 2686 bp.
Step9: We can also read sequences remotely from other web sites for example this sequence for YEplac181
Step10: Dseqrecord supports the same kind of digestion / ligation functionality as shown for Dseq.
Step11: Finally, we can save Dseqrecords in a local file. The default format is Genbank.
|
<ASSISTANT_TASK:>
Python Code:
from pydna.dseqrecord import Dseqrecord
mysequence = Dseqrecord("GGATCCAAA")
mysequence
mysequence.seq
from pydna.readers import read
read_from_fasta = read("fastaseq.fasta")
read_from_gb = read("gbseq.gb")
read_from_embl = read("emblseq.emb")
print(read_from_fasta.seq)
print(read_from_gb.seq)
print(read_from_embl.seq)
read_from_string = read('''
>seq_from_string
GGATCCAAA
''')
from pydna.genbank import Genbank
gb = Genbank("bjornjobb@gmail.com")
pUC19 = gb.nucleotide("L09137")
pUC19
from pydna.download import download_text
text = download_text("https://gist.githubusercontent.com/BjornFJohansson/e445e5039d61bdcdf933c435438b4585/raw/a6d57a8d5cffcbf0ab76307c82746e5b7265d0c8/YEPlac181snapgene.gb")
YEplac181 = read(text)
YEplac181
from Bio.Restriction import BamHI
a, b = mysequence.cut(BamHI)
a
b
a.seq
b.seq
a+b
mysequence.write("new_sequence.gb")
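# A quick round trip (not in the original walkthrough): the saved file can be read back
# with the read() helper imported earlier.
reloaded = read("new_sequence.gb")
reloaded.seq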
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: BGP Session Compatibility
Step2: Return Value
Step3: Print the first row of the returned Dataframe
Step4: BGP Session Status
Step5: Return Value
Step6: Print the first row of the returned Dataframe
Step7: BGP Edges
Step8: Return Value
Step9: Print the first row of the returned Dataframe
Step10: OSPF Session Compatibility
Step11: Return Value
Step12: Print the first row of the returned Dataframe
Step13: OSPF Edges
Step14: Return Value
Step15: Print the first row of the returned Dataframe
Step16: Test Route Policies
Step17: Return Value
Step18: Print the first row of the returned Dataframe
Step19: Search Route Policies
Step20: Return Value
Step21: Print the first row of the returned Dataframe
|
<ASSISTANT_TASK:>
Python Code:
bf.set_network('generate_questions')
bf.set_snapshot('generate_questions')
result = bf.q.bgpSessionCompatibility().answer().frame()
result.head(5)
result.iloc[0]
bf.set_network('generate_questions')
bf.set_snapshot('generate_questions')
result = bf.q.bgpSessionStatus().answer().frame()
result.head(5)
result.iloc[0]
bf.set_network('generate_questions')
bf.set_snapshot('generate_questions')
result = bf.q.bgpEdges().answer().frame()
result.head(5)
result.iloc[0]
bf.set_network('generate_questions')
bf.set_snapshot('generate_questions')
result = bf.q.ospfSessionCompatibility().answer().frame()
result.head(5)
result.iloc[0]
bf.set_network('generate_questions')
bf.set_snapshot('generate_questions')
result = bf.q.ospfEdges().answer().frame()
result.head(5)
result.iloc[0]
bf.set_network('generate_questions')
bf.set_snapshot('generate_questions')
result = bf.q.testRoutePolicies(policies='/as1_to_/', direction='in', inputRoutes=list([BgpRoute(network='10.0.0.0/24', originatorIp='4.4.4.4', originType='egp', protocol='bgp', asPath=[[64512, 64513], [64514]], communities=['64512:42', '64513:21'])])).answer().frame()
result.head(5)
result.iloc[0]
bf.set_network('generate_questions')
bf.set_snapshot('generate_questions')
result = bf.q.searchRoutePolicies(nodes='/^as1/', policies='/as1_to_/', inputConstraints=BgpRouteConstraints(prefix=["10.0.0.0/8:8-32", "172.16.0.0/28:28-32", "192.168.0.0/16:16-32"]), action='permit').answer().frame()
result.head(5)
result.iloc[0]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This is required for Bokeh to work in notebooks.
Step2: According to NIST,
Step3: Here are some examples from the dataset
|
<ASSISTANT_TASK:>
Python Code:
try:
import torchvision
except ModuleNotFoundError:
%pip install -qq torchvision
import torchvision
from torchvision import datasets
from torchvision import transforms
import numpy as np
import jax
import jax.numpy as jnp
import itertools
try:
from bokeh.io import output_notebook, show
except ModuleNotFoundError:
%pip install -qq bokeh
from bokeh.io import output_notebook, show
from bokeh.layouts import gridplot
from bokeh.plotting import figure
output_notebook()
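# EMNIST images are stored in a different orientation than the usual MNIST digits,
# so each PIL image is rotated before being converted to a tensor and then a JAX array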
transform = transforms.Compose(
[lambda img: torchvision.transforms.functional.rotate(img, 90), transforms.ToTensor(), jnp.array]
)
training_data = datasets.EMNIST(root="~/data", split="byclass", download=True, transform=transform)
def plot_one(item):
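    # Render a single EMNIST example as a Bokeh image plot, using its class label as the title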
image, raw_label = item
label = training_data.classes[raw_label]
p = figure(title=f"label = {label}", tooltips=[("x", "$x"), ("y", "$y"), ("value", "@image")], match_aspect=True)
p.x_range.range_padding = p.y_range.range_padding = 0
# must give a vector of image data for image parameter
subplot = p.image(image=[np.array(image.squeeze())], x=0, y=0, dw=1, dh=1, level="image")
p.title.align = "center"
p.axis.visible = False
p.grid.grid_line_width = 0.5
return p
# Take the first 25 images
subplots = list(map(plot_one, itertools.islice(training_data, 25)))
grid = gridplot(subplots, ncols=5, toolbar_location=None, plot_width=150, plot_height=150)
show(grid)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This object is central to an analysis conducted with bayesloop. It stores the data and further provides the methods to perform probabilistic inference on the models defined within the class, as described below.
Step2: Note that this particular data set is also hard-coded into the Study class, for convenient testing
Step3: In case you have multiple observations for each time step, you may also provide the data in the form np.array([[x1,y1,z1], [x2,y2,z2], ..., [xn,yn,zn]]). Missing data points should be included as np.nan.
Step4: We first define the observation model and provide two arguments
Step5: Note that you can also use the following short form to define observation models
Step6: Note that you can also use the following short form to define transition models
Step7: By default, fit computes the so-called smoothing distribution of the model parameters for each time step. This distribution states the probability (density) of the parameter value at a time step $t$, given all past and future data points. All distributions have the same shape as the parameter grid, and are stored in S.posteriorSequence for further analysis. Additionally, the mean values of each distribution are stored in S.posteriorMeanValues, as point estimates. Finally, the (natural) logarithmic value of the model evidence, the probability of the data given the chosen model, is stored in S.logEvidence (more details on evidence values follow).
Step8: From this first analysis, we may conclude that before 1880, an average of $\approx 3$ accidents per year were recorded. This changes significantly between 1880 and 1900, when the accident-rate drops to $\approx 1$ per year. We can also directly inspect the distribution of the accident rate at specific points in time, using the plot method with specified keyword argument t
Step9: Without the plot=True argument, this method only returns the parameter values (r1, r2, as specified when setting the observation model) as well as the corresponding probability values p1 and p2. Note that the returned probability values are always normalized to 1, so that we may easily evaluate the probability of certain conditions, like the probability of an accident rate < 1 in the year 1900
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt # plotting
import seaborn as sns # nicer plots
sns.set_style('whitegrid') # plot styling
import bayesloop as bl
S = bl.Study()
import numpy as np
data = np.array([5, 4, 1, 0, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6, 3, 3, 5, 4, 5, 3, 1, 4,
4, 1, 5, 5, 3, 4, 2, 5, 2, 2, 3, 4, 2, 1, 3, 2, 2, 1, 1, 1, 1, 3, 0,
0, 1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0,
0, 2, 1, 0, 0, 0, 1, 1, 0, 2, 3, 3, 1, 1, 2, 1, 1, 1, 1, 2, 3, 3, 0,
0, 0, 1, 4, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0])
S.load(data, timestamps=np.arange(1852, 1962))
S.loadExampleData()
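# Poisson observation model for the yearly accident counts; bl.oint(0, 6, 1000)
# lays out a grid of 1000 values for the rate parameter on the open interval (0, 6)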
L = bl.observationModels.Poisson('accident_rate', bl.oint(0, 6, 1000))
S.set(L)
L = bl.observationModels.Poisson('accident_rate')
S.set(L)
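# Transition model: let the accident rate drift from year to year as a
# Gaussian random walk with standard deviation 0.2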
T = bl.transitionModels.GaussianRandomWalk('sigma', 0.2, target='accident_rate')
S.set(T)
S.fit()
plt.figure(figsize=(8, 4))
# plot of raw data
plt.bar(S.rawTimestamps, S.rawData, align='center', facecolor='r', alpha=.5)
# parameter plot
S.plot('accident_rate')
plt.xlim([1851, 1961])
plt.xlabel('year');
plt.figure(figsize=(8, 4))
S.plot('accident_rate', t=1880, facecolor='r', alpha=0.5, label='1880')
S.plot('accident_rate', t=1900, facecolor='b', alpha=0.5, label='1900')
plt.legend()
plt.xlim([0, 5]);
S.eval('accident_rate < 1', t=1900);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: (HTML residue of the notebook's Colab/GitHub button table omitted)
Step3: ユーティリティ
Step4: Visualization tools
Step5: Install the Object Detection API.
Step6: Now we can import the dependencies we will need later.
Step7: Load the label map data (for plotting)
Step8: Build the detection model and load the pre-trained model weights
Step9: Load the selected model from TensorFlow Hub
Step10: Load an image
Step11: Run inference
Step12: Visualize the results
Step13: [Optional]
|
<ASSISTANT_TASK:>
Python Code:
#@title Copyright 2020 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# This Colab requires TF 2.5.
!pip install -U "tensorflow>=2.5"
import os
import pathlib
import matplotlib
import matplotlib.pyplot as plt
import io
import scipy.misc
import numpy as np
from six import BytesIO
from PIL import Image, ImageDraw, ImageFont
from six.moves.urllib.request import urlopen
import tensorflow as tf
import tensorflow_hub as hub
tf.get_logger().setLevel('ERROR')
# @title Run this!!
def load_image_into_numpy_array(path):
  """Load an image from file into a numpy array.
  Puts image into numpy array to feed into tensorflow graph.
  Note that by convention we put it into a numpy array with shape
  (height, width, channels), where channels=3 for RGB.
  Args:
    path: the file path to the image
  Returns:
    uint8 numpy array with shape (img_height, img_width, 3)
  """
image = None
if(path.startswith('http')):
response = urlopen(path)
image_data = response.read()
image_data = BytesIO(image_data)
image = Image.open(image_data)
else:
image_data = tf.io.gfile.GFile(path, 'rb').read()
image = Image.open(BytesIO(image_data))
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(1, im_height, im_width, 3)).astype(np.uint8)
ALL_MODELS = {
'CenterNet HourGlass104 512x512' : 'https://tfhub.dev/tensorflow/centernet/hourglass_512x512/1',
'CenterNet HourGlass104 Keypoints 512x512' : 'https://tfhub.dev/tensorflow/centernet/hourglass_512x512_kpts/1',
'CenterNet HourGlass104 1024x1024' : 'https://tfhub.dev/tensorflow/centernet/hourglass_1024x1024/1',
'CenterNet HourGlass104 Keypoints 1024x1024' : 'https://tfhub.dev/tensorflow/centernet/hourglass_1024x1024_kpts/1',
'CenterNet Resnet50 V1 FPN 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v1_fpn_512x512/1',
'CenterNet Resnet50 V1 FPN Keypoints 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v1_fpn_512x512_kpts/1',
'CenterNet Resnet101 V1 FPN 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet101v1_fpn_512x512/1',
'CenterNet Resnet50 V2 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v2_512x512/1',
'CenterNet Resnet50 V2 Keypoints 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v2_512x512_kpts/1',
'EfficientDet D0 512x512' : 'https://tfhub.dev/tensorflow/efficientdet/d0/1',
'EfficientDet D1 640x640' : 'https://tfhub.dev/tensorflow/efficientdet/d1/1',
'EfficientDet D2 768x768' : 'https://tfhub.dev/tensorflow/efficientdet/d2/1',
'EfficientDet D3 896x896' : 'https://tfhub.dev/tensorflow/efficientdet/d3/1',
'EfficientDet D4 1024x1024' : 'https://tfhub.dev/tensorflow/efficientdet/d4/1',
'EfficientDet D5 1280x1280' : 'https://tfhub.dev/tensorflow/efficientdet/d5/1',
'EfficientDet D6 1280x1280' : 'https://tfhub.dev/tensorflow/efficientdet/d6/1',
'EfficientDet D7 1536x1536' : 'https://tfhub.dev/tensorflow/efficientdet/d7/1',
'SSD MobileNet v2 320x320' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2',
'SSD MobileNet V1 FPN 640x640' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v1/fpn_640x640/1',
'SSD MobileNet V2 FPNLite 320x320' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v2/fpnlite_320x320/1',
'SSD MobileNet V2 FPNLite 640x640' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v2/fpnlite_640x640/1',
'SSD ResNet50 V1 FPN 640x640 (RetinaNet50)' : 'https://tfhub.dev/tensorflow/retinanet/resnet50_v1_fpn_640x640/1',
'SSD ResNet50 V1 FPN 1024x1024 (RetinaNet50)' : 'https://tfhub.dev/tensorflow/retinanet/resnet50_v1_fpn_1024x1024/1',
'SSD ResNet101 V1 FPN 640x640 (RetinaNet101)' : 'https://tfhub.dev/tensorflow/retinanet/resnet101_v1_fpn_640x640/1',
'SSD ResNet101 V1 FPN 1024x1024 (RetinaNet101)' : 'https://tfhub.dev/tensorflow/retinanet/resnet101_v1_fpn_1024x1024/1',
'SSD ResNet152 V1 FPN 640x640 (RetinaNet152)' : 'https://tfhub.dev/tensorflow/retinanet/resnet152_v1_fpn_640x640/1',
'SSD ResNet152 V1 FPN 1024x1024 (RetinaNet152)' : 'https://tfhub.dev/tensorflow/retinanet/resnet152_v1_fpn_1024x1024/1',
'Faster R-CNN ResNet50 V1 640x640' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet50_v1_640x640/1',
'Faster R-CNN ResNet50 V1 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet50_v1_1024x1024/1',
'Faster R-CNN ResNet50 V1 800x1333' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet50_v1_800x1333/1',
'Faster R-CNN ResNet101 V1 640x640' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet101_v1_640x640/1',
'Faster R-CNN ResNet101 V1 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet101_v1_1024x1024/1',
'Faster R-CNN ResNet101 V1 800x1333' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet101_v1_800x1333/1',
'Faster R-CNN ResNet152 V1 640x640' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet152_v1_640x640/1',
'Faster R-CNN ResNet152 V1 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet152_v1_1024x1024/1',
'Faster R-CNN ResNet152 V1 800x1333' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet152_v1_800x1333/1',
'Faster R-CNN Inception ResNet V2 640x640' : 'https://tfhub.dev/tensorflow/faster_rcnn/inception_resnet_v2_640x640/1',
'Faster R-CNN Inception ResNet V2 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/inception_resnet_v2_1024x1024/1',
'Mask R-CNN Inception ResNet V2 1024x1024' : 'https://tfhub.dev/tensorflow/mask_rcnn/inception_resnet_v2_1024x1024/1'
}
IMAGES_FOR_TEST = {
'Beach' : 'models/research/object_detection/test_images/image2.jpg',
'Dogs' : 'models/research/object_detection/test_images/image1.jpg',
# By Heiko Gorski, Source: https://commons.wikimedia.org/wiki/File:Naxos_Taverna.jpg
'Naxos Taverna' : 'https://upload.wikimedia.org/wikipedia/commons/6/60/Naxos_Taverna.jpg',
# Source: https://commons.wikimedia.org/wiki/File:The_Coleoptera_of_the_British_islands_(Plate_125)_(8592917784).jpg
'Beatles' : 'https://upload.wikimedia.org/wikipedia/commons/1/1b/The_Coleoptera_of_the_British_islands_%28Plate_125%29_%288592917784%29.jpg',
# By Américo Toledano, Source: https://commons.wikimedia.org/wiki/File:Biblioteca_Maim%C3%B3nides,_Campus_Universitario_de_Rabanales_007.jpg
'Phones' : 'https://upload.wikimedia.org/wikipedia/commons/thumb/0/0d/Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg/1024px-Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg',
# Source: https://commons.wikimedia.org/wiki/File:The_smaller_British_birds_(8053836633).jpg
'Birds' : 'https://upload.wikimedia.org/wikipedia/commons/0/09/The_smaller_British_birds_%288053836633%29.jpg',
}
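# Pairs of keypoint indices that define the edges of the COCO-17 human-pose skeleton;
# passed to the visualization utility below as keypoint_edges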
COCO17_HUMAN_POSE_KEYPOINTS = [(0, 1),
(0, 2),
(1, 3),
(2, 4),
(0, 5),
(0, 6),
(5, 7),
(7, 9),
(6, 8),
(8, 10),
(5, 6),
(5, 11),
(6, 12),
(11, 12),
(11, 13),
(13, 15),
(12, 14),
(14, 16)]
# Clone the tensorflow models repository
!git clone --depth 1 https://github.com/tensorflow/models
%%bash
sudo apt install -y protobuf-compiler
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
cp object_detection/packages/tf2/setup.py .
python -m pip install .
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as viz_utils
from object_detection.utils import ops as utils_ops
%matplotlib inline
PATH_TO_LABELS = './models/research/object_detection/data/mscoco_label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)
#@title Model Selection { display-mode: "form", run: "auto" }
model_display_name = 'CenterNet HourGlass104 Keypoints 512x512' # @param ['CenterNet HourGlass104 512x512','CenterNet HourGlass104 Keypoints 512x512','CenterNet HourGlass104 1024x1024','CenterNet HourGlass104 Keypoints 1024x1024','CenterNet Resnet50 V1 FPN 512x512','CenterNet Resnet50 V1 FPN Keypoints 512x512','CenterNet Resnet101 V1 FPN 512x512','CenterNet Resnet50 V2 512x512','CenterNet Resnet50 V2 Keypoints 512x512','EfficientDet D0 512x512','EfficientDet D1 640x640','EfficientDet D2 768x768','EfficientDet D3 896x896','EfficientDet D4 1024x1024','EfficientDet D5 1280x1280','EfficientDet D6 1280x1280','EfficientDet D7 1536x1536','SSD MobileNet v2 320x320','SSD MobileNet V1 FPN 640x640','SSD MobileNet V2 FPNLite 320x320','SSD MobileNet V2 FPNLite 640x640','SSD ResNet50 V1 FPN 640x640 (RetinaNet50)','SSD ResNet50 V1 FPN 1024x1024 (RetinaNet50)','SSD ResNet101 V1 FPN 640x640 (RetinaNet101)','SSD ResNet101 V1 FPN 1024x1024 (RetinaNet101)','SSD ResNet152 V1 FPN 640x640 (RetinaNet152)','SSD ResNet152 V1 FPN 1024x1024 (RetinaNet152)','Faster R-CNN ResNet50 V1 640x640','Faster R-CNN ResNet50 V1 1024x1024','Faster R-CNN ResNet50 V1 800x1333','Faster R-CNN ResNet101 V1 640x640','Faster R-CNN ResNet101 V1 1024x1024','Faster R-CNN ResNet101 V1 800x1333','Faster R-CNN ResNet152 V1 640x640','Faster R-CNN ResNet152 V1 1024x1024','Faster R-CNN ResNet152 V1 800x1333','Faster R-CNN Inception ResNet V2 640x640','Faster R-CNN Inception ResNet V2 1024x1024','Mask R-CNN Inception ResNet V2 1024x1024']
model_handle = ALL_MODELS[model_display_name]
print('Selected model:'+ model_display_name)
print('Model Handle at TensorFlow Hub: {}'.format(model_handle))
print('loading model...')
hub_model = hub.load(model_handle)
print('model loaded!')
#@title Image Selection (don't forget to execute the cell!) { display-mode: "form"}
selected_image = 'Beach' # @param ['Beach', 'Dogs', 'Naxos Taverna', 'Beatles', 'Phones', 'Birds']
flip_image_horizontally = False #@param {type:"boolean"}
convert_image_to_grayscale = False #@param {type:"boolean"}
image_path = IMAGES_FOR_TEST[selected_image]
image_np = load_image_into_numpy_array(image_path)
# Flip horizontally
if(flip_image_horizontally):
image_np[0] = np.fliplr(image_np[0]).copy()
# Convert image to grayscale
if(convert_image_to_grayscale):
image_np[0] = np.tile(
np.mean(image_np[0], 2, keepdims=True), (1, 1, 3)).astype(np.uint8)
plt.figure(figsize=(24,32))
plt.imshow(image_np[0])
plt.show()
# running inference
results = hub_model(image_np)
# different object detection models have additional results
# all of them are explained in the documentation
result = {key:value.numpy() for key,value in results.items()}
print(result.keys())
label_id_offset = 0
image_np_with_detections = image_np.copy()
# Use keypoints if available in detections
keypoints, keypoint_scores = None, None
if 'detection_keypoints' in result:
keypoints = result['detection_keypoints'][0]
keypoint_scores = result['detection_keypoint_scores'][0]
viz_utils.visualize_boxes_and_labels_on_image_array(
image_np_with_detections[0],
result['detection_boxes'][0],
(result['detection_classes'][0] + label_id_offset).astype(int),
result['detection_scores'][0],
category_index,
use_normalized_coordinates=True,
max_boxes_to_draw=200,
min_score_thresh=.30,
agnostic_mode=False,
keypoints=keypoints,
keypoint_scores=keypoint_scores,
keypoint_edges=COCO17_HUMAN_POSE_KEYPOINTS)
plt.figure(figsize=(24,32))
plt.imshow(image_np_with_detections[0])
plt.show()
# Handle models with masks:
image_np_with_mask = image_np.copy()
if 'detection_masks' in result:
# we need to convert np.arrays to tensors
detection_masks = tf.convert_to_tensor(result['detection_masks'][0])
detection_boxes = tf.convert_to_tensor(result['detection_boxes'][0])
# Reframe the bbox mask to the image size.
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
detection_masks, detection_boxes,
image_np.shape[1], image_np.shape[2])
detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5,
tf.uint8)
result['detection_masks_reframed'] = detection_masks_reframed.numpy()
viz_utils.visualize_boxes_and_labels_on_image_array(
image_np_with_mask[0],
result['detection_boxes'][0],
(result['detection_classes'][0] + label_id_offset).astype(int),
result['detection_scores'][0],
category_index,
use_normalized_coordinates=True,
max_boxes_to_draw=200,
min_score_thresh=.30,
agnostic_mode=False,
instance_masks=result.get('detection_masks_reframed', None),
line_thickness=8)
plt.figure(figsize=(24,32))
plt.imshow(image_np_with_mask[0])
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Timing the addition of two 1-million-element arrays using an entry-by-entry function
Step2: Exercise 07.2 (member functions and slicing)
Step3: Function that takes a NumPy array of the raw scores and returns the scores as a percentage sorted from lowest to highest
Step4: Function that returns the maximum, minimum and mean of the raw scores as a dictionary
Step5: Modify your function for the min, max and mean to optionally exclude the highest and lowest scores
Step6: Exercise 07.3 (slicing)
Step7: use array slicing to
Step8: First two rows as a 2D sub-array
Step9: Bottom right $2 \times 2$ block as a 2D sub-array
Step10: Sum of the last column
Step11: Transpose of A
Step13: Exercise 07.4 (optional extension)
Step14: Below we compute the same root using scipy function bisect
|
<ASSISTANT_TASK:>
Python Code:
n = 1000000
x = np.random.rand(n)
y = np.random.rand(n)
%time z = x + y
def sum_vec(x, y):
"Sum two vectors entry by entry"
z = np.zeros(n)
for i in range(n):
z[i] = x[i] + y[i]
return z
%time w = sum_vec(x, y)
# Test scores
scores = np.array([58.0, 35.0, 24.0, 42, 7.8])
def percentages(scores):
"Calculate percentages (max score = 60) from a list of scores and returns them sorted"
    sorted_scores = scores / 60 * 100  # express each score as a percentage of the maximum (60)
sorted_scores.sort()
return sorted_scores
print(percentages(scores))
def max_min_mean(scores):
"Return a dictionary with max, min and mean score from a list of scores"
out = {}
out['min'] = scores.min()
out['max'] = scores.max()
out['mean'] = scores.mean()
return out
print(max_min_mean(scores))
def max_min_mean2(scores, exclude_extreme):
"Return a dictionary with max, min and mean score from a list of scores excluding extremes if exclude_extreme = 1"
out = {}
# If extremes are excluded filter the sorted list from second to second to last element
if exclude_extreme == 1:
scores.sort()
scores_filtered = scores[1:-1]
# Else use the entire list
elif exclude_extreme == 0:
scores_filtered = scores
# If exclude_extreme is not 0 nor 1 return a message
else:
return 'The second parameter should be either 0 (to include extremes) or 1 (to exclude them)'
out['min'] = scores_filtered.min()
out['max'] = scores_filtered.max()
out['mean'] = scores_filtered.mean()
return out
print(max_min_mean2(scores, 1))
A = np.array([[4.0, 7.0, -2.43, 67.1],
[-4.0, 64.0, 54.7, -3.33],
[2.43, 23.2, 3.64, 4.11],
[1.2, 2.5, -113.2, 323.22]])
print(A)
print(A[:,2])
print(A[:2,:])
print(A[-2:,-2:])
print(A[:,-1].sum())
print(A.transpose())
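# Function whose root we will approximate with the bisection method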
def f(x):
return x**3 - 6*x**2 + 4*x + 12
#return x**2 + x - 20 # Roots = -5, 4
def compute_root(f, x0, x1, tol, max_it):
    """Computes the root of f between x0 and x1 using bisection,
    stops if the value of f at the root is under tol or if max_it is reached,
    and returns the root, the value of f at the root and the number of iterations.
    """
for i in range(max_it):
# Compute x_mid
x_mid = (x0 + x1) / 2
# Compute f for the three values
f_0, f_1, f_mid = f(x0), f(x1), f(x_mid)
# Check the value of f_0*f_mid to determine how to update the endpoints
if f_0*f_mid < 0:
x1 = x_mid
else:
x0 = x_mid
# Check if f is under tol
if abs(f_mid) < tol:
return x_mid, f_mid, i+1
# Return the approximate root in case max_it is reached
return x_mid, f_mid, i+1
# Test for the function f
%time x, f_x, num_it = compute_root(f, x0=3, x1=6, tol=1.0e-6, max_it=1000)
print('Approximate root:', x)
print('Value of f:', f_x)
print('Number of iterations:', num_it)
from scipy.optimize import bisect as bisect
# Compute the root of f using scipy function
%time x0 = bisect(f, a=3, b=6)
print('Approximate root:', x0)
print('Value of f:', f(x0))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Code to remove the columns “Class”, “By”, “TA”, “JL”, “PW” (i.e. irrelevant and less relevant data to do with the project)
Step2: Code to clean the data (remove unwanted symbols)
Step3: Code to normalize data (normalize ratings from 0 to 100)
Step4: Print data
Step5: Finding Trends in the Dataset
Step6: 1) How consistent are wine ratings?
Step7: While the figure above shows that the distribution of standard deviations of wine ratings is slightly skewed to the left, it appears to be approximately normal.
Step8: The figure suggests that there may be a possible trend that higher-priced wines have higher ratings. In general, low-priced wines seem to achieve more lower scores than high-priced wines (it should be noted, however, that there are also more low-priced wines than high-priced wines in the data). Also, it is interesting that above a price point of around 400 euros, a median rating below 50 (out of a scale of 100) does not appear.
Step9: To be categorized as a luxury wine, the price of the wine has been categorized as within the top decile of wine prices.
Step10: Given the dataset of wines from 2004 to 2016, the figure above suggests that 2015 and 2016 yielded wines with higher ratings. Yearly ratings were based on median ratings. From a numerical standpoint, the median rating of 2015 vintage wines is approximately 66.7 (on a scale of 100)
Step11: By looking at the median ratings of wines by region ("AOC" in the dataset), Sauternes appears to yield the highest rated wines (numerically, Sauternes's median rating is approximately 69.811 out of a scale of 100). The difference in median rating between Sauternes and Pauillac is quite high (approximately 7.55), suggesting that Sauternes produces wines that could potentially be considered better than the other locations.
Step12: Based on median prices, the figure identifies that Pomerol produces the most expensive wines (with a median wine price of 44 euros). Following Pomerol are St-Julien and Sauternes (with median wine prices of 42.5 and 39.0, respectively).
Step13: Based on mean prices, Pauillac produces the most expensive wines (with a mean price of approximately 101.74 euros). The change in region can be explained by the fact that some wines are much more highly priced than others and thus skew the data.
Step14: In terms of critic ratings, the figure above suggests that median ratings and wine appreciation amounts have at most a very weak trend. In fact, there is a correlation of only approximately 0.279, which suggests that critic ratings likely are unrelated to whether a wine will appreciate and by how much.
Step15: In terms of year, the figure above shows that wines produced in 2005, 2009, 2010, 2014, and 2015 appreciated. The wines of 2009 seem to have appreciated the most (by a mean appreciation of approximately 68.13%). 2005 wines appreciated a similar amount at approximately 66.43%.
|
<ASSISTANT_TASK:>
Python Code:
import pandas
file = open("./data.html")
data = pandas.io.html.read_html(file, encoding='utf-8')[0]
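# Drop the columns that are irrelevant or less relevant to the project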
data.drop(["Class.", "By", "TA", "JL", "PW"], axis=1, inplace=True)
import re
def cleanEntry(str):
match = re.search(r"(-?(?:\d*\.)?\d+)(?:-(-?(?:\d*\.)?\d+))?", str)
if match is None:
return None
else:
groups = match.groups()
if groups[1] is None:
return float(groups[0])
else:
return (float(groups[0]) + float(groups[1])) / 2
data.rename(columns={"+/–": "+/-"}, inplace=True)
data.replace(["·", "\?", "½", "%", "€ ", "\+", "–"],
["", "", ".5", "", "", "", "-"],
inplace=True, regex=True)
colsToClean = ["RP", "NM", "JR", "BD", "JS", "De", "RVF", "JA", "RG", "Price", "+/-"]
data[colsToClean] = data[colsToClean].applymap(cleanEntry)
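# Normalize each critic's ratings to a common 0-100 scale
# (shift by the column minimum, divide by the new maximum, multiply by 100)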
ratingColNames = ["RP", "NM", "JR", "BD", "JS", "De", "RVF", "JA", "RG"]
for col in ratingColNames:
data[col] -= data[col].min()
data[col] /= data[col].max()
data[col] *= 100
print(data)
%matplotlib inline
import matplotlib.pyplot as plt
stddevs = data[ratingColNames].apply(lambda x: x.std(), axis=1)
print(stddevs.describe())
stddevs.plot.hist(bins=50)
plt.title("Histogram of Standard Deviation of Wine Ratings")
plt.xlabel("Standard Deviation of Wine Ratings")
medianRatings = data[ratingColNames].apply(lambda x: x.median(), axis=1).rename("Median rating")
print("Correlation:", medianRatings.corr(data["Price"]))
pandas.concat([medianRatings, data["Price"]], axis=1)\
.plot\
.scatter(x="Price",
y="Median rating",
s=0.25,
xlim=(0, 1000),
ylim=(medianRatings.min(), medianRatings.max()),
title="Median Rating vs. Price for All Wines")
luxuryWines = data[data["Price"] > data["Price"].quantile(0.9)]
medianRatings = luxuryWines[ratingColNames].apply(lambda x: x.median(), axis=1).rename("Median rating")
print("Correlation:", medianRatings.corr(data["Price"]))
pandas.concat([medianRatings, data["Price"]], axis=1)\
.plot\
.scatter(x="Price",
y="Median rating",
s=1,
xlim=(luxuryWines["Price"].min(), luxuryWines["Price"].max()),
ylim=(medianRatings.min(), medianRatings.max()),
title="Median Rating vs. Price for Luxury Wines")\
.set_xlabel("Price (€)")
wineByYear = data.groupby(data["Year"]).median()
medianRatings = wineByYear[ratingColNames].apply(lambda x: x.median(), axis=1).rename("Median rating")
print(medianRatings.sort_values(ascending=False))
medianRatings.plot(title="Median Rating of Wines by Year of Production").set_ylabel("Median rating")
wineByRegion = data.groupby(data["AOC"]).median()
medianRatings = wineByRegion[ratingColNames].apply(lambda x: x.median(), axis=1).rename("Median rating")
sortedByRating = medianRatings.sort_values(ascending=False)
print(sortedByRating)
sortedByRating.plot.barh(title="Median Rating of Wines by Region")
plt.xlabel("Median rating")
plt.gca().invert_yaxis()
sortedByMedianPrice = data.groupby(data["AOC"]).median()["Price"].sort_values(ascending=False)
print(sortedByMedianPrice)
sortedByMedianPrice.plot.barh(title="Median Price of Wines by Region")
plt.xlabel("Median price (€)")
plt.gca().invert_yaxis()
sortedByMeanPrice = data.groupby(data["AOC"]).mean()["Price"].sort_values(ascending=False)
print(sortedByMeanPrice)
sortedByMeanPrice.plot.barh(title="Mean Price of Wines by Region")
plt.xlabel("Median price (€)")
plt.gca().invert_yaxis()
medianRatings = data[ratingColNames].apply(lambda x: x.median(), axis=1).rename("Median rating")
print("Correlation:", medianRatings.corr(data["+/-"]))
pandas.concat([medianRatings, data], axis=1)\
.plot\
.scatter(y="+/-",
x="Median rating",
s=1,
ylim=(data["+/-"].min(), data["+/-"].max()),
xlim=(medianRatings.min(), medianRatings.max()), title="Appreciation Amount vs. Median Rating")\
.set_ylabel("Appreciation amount (%)")
appreciationByYear = data.groupby(data["Year"]).mean()["+/-"]
print(appreciationByYear)
appreciationByYear.plot.bar()
plt.title("Mean Appreciation Amount by Year of Production")
plt.ylabel("Mean appreciation amount (%)")
appreciationByRegion = data.groupby(data["AOC"]).mean()["+/-"].sort_values(ascending=False)
print(appreciationByRegion)
appreciationByRegion.plot.barh()
plt.title("Mean Appreciation Amount by Region")
plt.xlabel("Mean appreciation amount (%)")
plt.gca().invert_yaxis()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To get some intuitions about the data, let's plot the 100 labelled books, using the counts of the words "laser" and "love" as the x and y axes
Step2: This plot shows each of our 100 labelled books, positioned according to the counts of the words "laser" and "love" in the book, and colored by the book's genre label. Romance books are red; scifi books are blue. As we can see, the two genres appear pretty distinct here, which means we can expect pretty good classification accuracy!
Step3: For each observation we pass as input to <code>clf.predict()</code>, the function returns one label (either 0 or 1). In the snippet above, we pass in only a single observation, so we get only a single label back. The example observation above gets a label 1, which means the model thought this particular book was a work of science-fiction. Just like that, we've trained a machine learning classifier and classified some new data!
Step4: For each pixel in the plot above, we retrieve the 3 closest points with known labels. We then use a majority vote of those labels to assign the label of the pixel. This is exactly analogous to predicting a label for an unlabelled point: in both cases, we take a majority vote of the 3 closest points with known labels. Working in this way, we can use labelled data to classify unlabelled data. That's all there is to K-Nearest Neighbors classification!
Step5: If we run the simulation above a number of times, we should see the "outlier" point is consistently isolated quickly, while it usually takes more iterations to isolate the other points. This is the chief intuition behind the Isolation Forests outlier classification strategy—outliers are isolated quickly because they are farther from other points in the dataset.
Step6: In just a few lines of code, we can create, train, and deploy a machine learning model for detecting outliers in high-dimensional data!
Step7: Now that we have represented each passage of 1000 words with a high-dimensional vector, let's project those vectors down into two dimensions to visualize the similarity between our three authors' styles
Step8: As we can see, the new points in purple have strong overlap with the green points, suggesting that the mystery author has a style quite similar to that of Austen. There's a good reason for that—the purple text is <i>Pride and Prejudice and Zombies</i>, which adapts the language and plot of Jane Austen's classic novel. When working with high-dimensional datasets, it's often helpful to create some quick visualizations of the data using a dimension reduction technique like UMAP as we have just done.
Step9: As we can see above, <code>words</code> is just a list of words. For each of those words, <code>vectors</code> contains a corresponding 50-dimensional vector (or list of 50 numbers). Those vectors indicate the semantic meaning of a word. In other words, if the English language were a 50 dimensional vector space, each word in <code>words</code> would be positioned in that space by virtue of its corresponding vector.
|
<ASSISTANT_TASK:>
Python Code:
# import the make_blobs function from the sklearn module/package
from sklearn.datasets.samples_generator import make_blobs
# use the function we imported to generate a matrix with 100 rows and 2 columns
# n_samples=100 specifies the number of rows in the returned matrix
# n_features=2 specifies the number of columns in the returned matrix
# centers=2 specifies the number of centroids, or attraction points, in the returned matrix
# random_state=0 makes the random data generator reproducible
# center_box=(0,20) specifies we want the centers in X to be between 0,20
X, labels = make_blobs(n_samples=100, n_features=2, centers=2, random_state=0, center_box=(0,20))
# display the first three rows in X and their genre labels
print(X[:3], '\n\n', labels[:3])
# commands prefaced by a % in Jupyter are called "magic"
# these "magic" commands allow us to do special things only related to jupyter
# %matplotlib inline - allows one to display charts from the matplotlib library in a notebook
# %load_ext autoreload - automatically reloads imported modules if they change
# %autoreload 2 - automatically reloads imported modules if they change
%matplotlib inline
%load_ext autoreload
%autoreload 2
# import code from matplotlib, a popular data visualization library
import matplotlib
import matplotlib.pyplot as plt
# get the 0th column of the matrix (i.e. counts of the word "laser")
x_vals = X[:,0]
# get the 1st column of the matrix (i.e. counts of the word "love")
y_vals = X[:,1]
# create a "scatterplot" of the data in X
# the first argument to plt.scatter is a list of x-axis values
# the second argument to plt.scatter is a list of y-axis values
# c=labels specifies we want to use the list of labels to color each point
# cmap=plt.cm.RdYlBu specifies we want to use the Red Yellow Blue colors in the chart
plt.scatter(x_vals, y_vals, c=labels, cmap=plt.cm.RdYlBu)
# add axis labels and a plot title
plt.xlabel('occurrences of word laser')
plt.ylabel('occurrences of word love')
plt.title('Science Fiction and Romance Books')
from sklearn.neighbors import KNeighborsClassifier
import numpy as np
# create a KNN classifier using 3 as the value of K
clf = KNeighborsClassifier(3)
# "train" the classifier by showing it our labelled data
clf.fit(X, labels)
# predict the genre label of a new, unlabelled book
clf.predict(np.array([[14.2, 10.3]]))
from sklearn.neighbors import KNeighborsClassifier
# import some custom helper code
import helpers
# create and train a KNN model
clf = KNeighborsClassifier(3)
clf.fit(X, labels)
# use a helper function to plot the trained classifier's decision boundary
helpers.plot_decision_boundary(clf, X, labels)
# add a title and axis labels to the chart
plt.title('K-Nearest Neighbors: Classifying Science Fiction and Romance')
plt.xlabel('occurrences of word laser')
plt.ylabel('occurrences of word love')
from IPython.display import IFrame
IFrame(src='https://s3.amazonaws.com/duhaime/blog/visualizations/isolation-forests.html', width=700, height=640)
from sklearn.ensemble import IsolationForest
from sklearn.datasets.samples_generator import make_blobs
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
# seed a random number generator for consistent random values
rng = np.random.RandomState(1)
# generate 100 "training" data observations
n_training = 500
X, _ = make_blobs(random_state=6, n_samples=n_training)
# create the IsolationForest classifier
clf = IsolationForest(max_samples=500, random_state=1, n_jobs=-1)
# train the classifier on the training data
clf.fit(X)
# generate 100 new observations
new_vals = rng.uniform(low=(-10, -12), high=(10, 4), size=(100, 2))
# get classification results for the new observations; `result` contains
# one observation for each value in `new_vals`: a 1 means the point was
# in the training distribution, -1 means the point is an outlier
result = clf.predict(new_vals)
# plot the classification results
helpers.plot_iforest_decision_boundary(clf, X, new_vals, result)
import pandas as pd
from bs4 import BeautifulSoup
from sklearn.feature_extraction.text import CountVectorizer
from nltk import ngrams
from requests import get
def get_passages(url, chunk_size=1000):
text = BeautifulSoup( get(url).text, 'html.parser' ).get_text().lower()
words = ''.join([c for c in text if c.isalpha() or c == ' ']).split()
return [' '.join(i) for idx, i in enumerate(ngrams(words, chunk_size)) if
idx % chunk_size == 0 and idx > 0]
austen = get_passages('https://www.gutenberg.org/files/1342/1342-h/1342-h.htm')
dickens = get_passages('https://www.gutenberg.org/files/98/98-h/98-h.htm')
mystery = get_passages('https://s3.amazonaws.com/duhaime/blog/machine-learning-intro/ppz.txt')
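# Build a term-document matrix over the Austen and Dickens passages:
# one row per 1000-word passage, one column per vocabulary term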
vec = CountVectorizer()
X = vec.fit_transform(austen + dickens).toarray()
print(X.shape) # prints the number of rows then the number of columns in X
from matplotlib.lines import Line2D
from umap import UMAP
X = vec.fit_transform(austen + dickens + mystery).toarray()
projected = UMAP(random_state=2).fit_transform(X)
labels = ['green' for i in range(len(austen))] + \
['orange' for i in range(len(dickens))] + \
['purple' for i in range(len(mystery))]
plt.scatter(projected[:,0], projected[:,1], c=labels)
plt.title('Dimension Reduction from a Term Document Matrix')
# add a legend
plt.legend(handles=[
Line2D([], [], label='Jane Austen Window', marker='o', markerfacecolor='green', color='white'),
Line2D([], [], label='Charles Dickens Window', marker='o', markerfacecolor='orange', color='white'),
Line2D([], [], label='Mystery Author Window', marker='o', markerfacecolor='purple', color='white'),
])
from zipfile import ZipFile
from collections import defaultdict
from urllib.request import urlretrieve
import numpy as np
import json, os, codecs
# download the vector files we'll use
if not os.path.exists('glove.6B.50d.txt'):
urlretrieve('http://nlp.stanford.edu/data/glove.6B.zip', 'glove.6B.zip')
# unzip the downloaded zip archive
ZipFile('glove.6B.zip').extractall(os.getcwd())
# get the first n words and their vectors
vectors = []
words = []
n = 50000
for row_idx, row in enumerate(codecs.open('glove.6B.50d.txt', 'r', 'utf8')):
if row_idx > n: break
split_row = row.split()
word, vector = ' '.join(split_row[:-50]), [float(i) for i in split_row[-50:]]
words += [word]
vectors += [vector]
# check out a sample word and its vector
print(words[1700], vectors[1700], '\n')
from sklearn.cluster import KMeans
# cluster the word vectors
kmeans = KMeans(n_clusters=20, random_state=0).fit(np.array(vectors))
# `kmeans.labels_` is an array whos `i-th` member identifies the group to which
# the `i-th` word in `words` is assigned
groups = defaultdict(list)
for idx, i in enumerate(kmeans.labels_):
groups[i] += [words[idx]]
# print the top 10 words contained in each group
for i in groups:
print(groups[i][:10])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import and load MNIST dataset (Preprocessing)
Step2: Plot 8 random images
Step3: Specify Autoencoder
Step4: Helper function to get a random mini-batch of images
Step5: Perform the training
|
<ASSISTANT_TASK:>
Python Code:
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import numpy as np
from matplotlib import pyplot as plt
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print("We are using the following device for learning:",device)
batch_size_train = 60000 # Samples per Training Batch
batch_size_test = 10000 # just create one large test dataset (MNIST test dataset has 10.000 Samples)
# Get Training and Test Dataset with a Dataloader
train_loader = torch.utils.data.DataLoader(
torchvision.datasets.MNIST('./files/', train=True, download=True,
transform=torchvision.transforms.Compose([torchvision.transforms.ToTensor()])),
batch_size=batch_size_train, shuffle=True)
test_loader = torch.utils.data.DataLoader(
torchvision.datasets.MNIST('./files/', train=False, download=True,
transform=torchvision.transforms.Compose([torchvision.transforms.ToTensor()])),
batch_size=batch_size_test, shuffle=True)
# We are only interessted in the data and not in the targets
for idx, (data, targets) in enumerate(train_loader):
x_train = data[:,0,:,:]
for idx, (data, targets) in enumerate(test_loader):
x_test = data[:,0,:,:]
image_size = x_train.shape[1]
x_test_flat = torch.reshape(x_test, (x_test.shape[0], image_size*image_size))
plt.figure(figsize=(16,2))
for k in range(8):
plt.subplot(1,8,k+1)
plt.imshow(x_train[np.random.randint(x_train.shape[0])], interpolation='nearest', cmap='binary')
plt.xticks(())
plt.yticks(())
hidden_encoder_1 = 500
hidden_encoder_2 = 250
hidden_encoder_3 = 100
hidden_encoder = [hidden_encoder_1, hidden_encoder_2, hidden_encoder_3]
hidden_decoder_1 = 100
hidden_decoder_2 = 250
hidden_decoder_3 = 500
hidden_decoder = [hidden_decoder_1, hidden_decoder_2, hidden_decoder_3]
class Autoencoder(nn.Module):
def __init__(self, hidden_encoder, hidden_decoder, image_size, bit_per_image):
super(Autoencoder, self).__init__()
# Define Transmitter Layer: Linear function, M input neurons (symbols), 2 output neurons (real and imaginary part)
self.We1 = nn.Linear(image_size*image_size, hidden_encoder[0])
self.We2 = nn.Linear(hidden_encoder[0], hidden_encoder[1])
self.We3 = nn.Linear(hidden_encoder[1], hidden_encoder[2])
self.We4 = nn.Linear(hidden_encoder[2], bit_per_image)
# Define Receiver Layer: Linear function, 2 input neurons (real and imaginary part), M output neurons (symbols)
self.Wd1 = nn.Linear(bit_per_image,hidden_decoder[0])
self.Wd2 = nn.Linear(hidden_decoder[0], hidden_decoder[1])
self.Wd3 = nn.Linear(hidden_decoder[1], hidden_decoder[2])
self.Wd4 = nn.Linear(hidden_decoder[2], image_size*image_size)
# Non-linearity (used in transmitter and receiver)
self.activation_function = nn.ELU()
self.sigmoid = nn.Sigmoid()
self.softsign = nn.Softsign()
def forward(self, training_data, Pe):
encoded = self.encoder(training_data)
# random binarization in training
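        # Straight-through estimator: the stochastic binarizer is applied in the
        # forward pass, while gradients flow back as if the quantizer were the identity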
ti = encoded.clone()
compressed = ti + (self.binarizer(ti) - ti).detach()
# add error pattern (flip the bit or not)
error_tensor = torch.distributions.Bernoulli(Pe * torch.ones_like(compressed)).sample()
received = torch.mul( compressed, 1 - 2*error_tensor)
reconstructed = self.decoder(received)
return reconstructed
def test(self, valid_data, Pe):
encoded_test = self.encoder(valid_data)
compressed_test = self.binarizer_deterministic(encoded_test)
error_tensor_test = torch.distributions.Bernoulli(Pe * torch.ones_like(compressed_test)).sample()
received_test = torch.mul( compressed_test, 1 - 2*error_tensor_test )
reconstructed_test = self.decoder(received_test)
loss_test = torch.mean(torch.square(valid_data - reconstructed_test))
reconstructed_test_noerror = self.decoder(compressed_test)
return reconstructed_test
def encoder(self, batch):
temp = self.activation_function(self.We1(batch))
temp = self.activation_function(self.We2(temp))
temp = self.activation_function(self.We3(temp))
output = self.softsign(self.We4(temp))
return output
def decoder(self, batch):
temp = self.activation_function(self.Wd1(batch))
temp = self.activation_function(self.Wd2(temp))
temp = self.activation_function(self.Wd3(temp))
output = self.sigmoid(self.Wd4(temp))
return output
def binarizer(self, input):
# This is the stochastic quatizer which we use for the training
prob = torch.div(torch.add(input, 1.0), 2.0)
bernoulli = torch.distributions.Bernoulli(prob) # torch.distributions.bernoulli.
# bernoulli = tf.distributions.Bernoulli(probs=prob, dtype=tf.float32)
return 2*bernoulli.sample() - 1
def binarizer_deterministic(self, input):
# This is the deteministic quatizer which we use for
return torch.sign(input)
def get_batch(x, batch_size):
idxs = np.random.randint(0, x.shape[0], (batch_size))
return torch.stack([torch.reshape(x[k], (-1,)) for k in idxs])
batch_size = 250
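# Bit-flip probabilities of the binary channel and numbers of bits per compressed
# image that are swept over in the experiments below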
Pe_range = np.array([0, 0.01, 0.1, 0.2])
bit_range = np.array([5, 10, 20, 30, 40, 50, 60, 70, 80, 100])
SNR_result = np.zeros( (len(Pe_range), len(bit_range)) )
# Mean Squared Error loss
loss_fn = nn.MSELoss()
for i in range(len(Pe_range)):
for j in range(len(bit_range)):
best_SNR = -9999;
print('Initializing ....')
model = Autoencoder(hidden_encoder, hidden_decoder, image_size, bit_range[j])
model.to(device)
# Adam Optimizer
optimizer = optim.Adam(model.parameters())
print('Start Training') # Training loop
for it in range(100000): # Original paper does 50k iterations
mini_batch = torch.Tensor(get_batch(x_train, batch_size)).to(device)
# Propagate (training) data through the net
reconstructed = model(mini_batch, Pe_range[i])
# compute loss
loss = loss_fn(mini_batch, reconstructed)
# compute gradients
loss.backward()
# Adapt weights
optimizer.step()
# reset gradients
optimizer.zero_grad()
# Evaulation with the test data
if it % 500 == 0:
reconstructed_test = model.test(x_test_flat.to(device), Pe_range[i])
noise = torch.mean(torch.square(x_test_flat.to(device) - reconstructed_test))
SNR = 10.0 * (torch.log(torch.mean(torch.square(x_test_flat.to(device)))) - torch.log(noise)) / np.log(10.0)
cur_SNR = SNR.detach().cpu().numpy().squeeze()
if cur_SNR > best_SNR:
best_SNR = cur_SNR
if it % 10000 == 0:
print('Pe = %1.2f, bits = %d, It %d: (best SNR: %1.4f dB)' % (Pe_range[i], bit_range[j], it, best_SNR))
SNR_result[i,j] = best_SNR
print('Finished learning for e = %1.2f, bits = %d. Best SNR: %1.4f' % (Pe_range[i], bit_range[j], best_SNR))
print('Training finished')
np.savetxt('SNR_result.txt', SNR_result, delimiter=',')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csiro-bom', 'access-1-0', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Horvath normalization source into R namespace.
Step2: Read in Betas
Step3: Normalization step: run BMIQ calibration of the betas against the Horvath gold standard (in R).
Step4: Now we need to fix the labels a little bit.
|
<ASSISTANT_TASK:>
Python Code:
cd ..
import NotebookImport
from Setup.Imports import *
import os
import pandas as pd
from pandas.rpy.common import convert_to_r_dataframe, convert_robj
import rpy2.robjects as robjects
from IPython.display import clear_output
robjects.r.library('WGCNA');
robjects.r.source("/cellar/users/agross/Data/MethylationAge/Horvath/NORMALIZATION.R")
clear_output()
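# Step 2: read in the raw beta values and the per-sample annotations.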
path = '/cellar/users/agross/TCGA_Code/Methlation/data/'
f = path + 'all_betas_raw.csv'
df = pd.read_csv(f, low_memory=True, header=0, index_col=0)
labels = pd.read_csv(path + 'all_betas_raw_pdata.csv',
index_col=0)
gold_standard = pd.read_csv('/cellar/users/agross/Data/MethylationAge/Horvath/probeAnnotation21kdatMethUsed.csv', index_col=0)
horvath = pd.read_table('/cellar/users/agross/TCGA_Code/Methlation/data/Horvath_Model.csv', index_col=0, skiprows=[0,1])
intercept = horvath.CoefficientTraining['(Intercept)']
horvath = horvath.iloc[1:]
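# Restrict the betas to the ~21k probes used by the Horvath gold standard,
# and fill any missing values with the gold-standard betas.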
df = df.ix[gold_standard.index]
df = df.T.fillna(gold_standard.goldstandard2).T
df_r = robjects.r.t(convert_to_r_dataframe(df))
gs = list(gold_standard.goldstandard2.ix[df.index])
gs_r = robjects.FloatVector(gs)
del df
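# Step 3: BMIQ calibration against the gold standard (runs in R; this is
# the slow normalization step).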
data_n = robjects.r.BMIQcalibration(df_r, gs_r)
data_n = convert_robj(data_n).T
clear_output()
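# Quick sanity check: BMIQ-normalized values should remain valid betas in [0, 1].
print data_n.min().min(), data_n.max().max()

# Step 4: repair the sample labels using the clinical design spreadsheet.
# NOTE: ucsd_path (and HDFS_DIR below) are assumed to be provided by the
# Setup.Imports star import at the top of this notebook.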
c1 = pd.read_excel(ucsd_path + 'DESIGN_Fox_v2_Samples-ChipLAyout-Clinical UNMC-UCSD methylomestudy.xlsx',
'HIV- samples from OldStudy', index_col=0)
c2 = pd.read_excel(ucsd_path + 'DESIGN_Fox_v2_Samples-ChipLAyout-Clinical UNMC-UCSD methylomestudy.xlsx',
'HIV+ samples', index_col=0)
clinical = c1.append(c2)
s = labels[labels.studyIndex == 's2'].sampleNames
ss = clinical[['Sample_Plate','Sample_Well']].sort(['Sample_Plate','Sample_Well'])
assert(alltrue(ss.Sample_Well == s))
new_label = clinical.sort(['Sample_Plate','Sample_Well']).index
new_label = pd.Series(new_label, s.index)
new_names = labels['sampleNames'].replace(new_label.to_dict())
new_labels = labels['sampleNames'].ix[new_label.index] = new_label
new_labels = new_labels.combine_first(labels.sampleNames)
labels['sampleNames'] = new_labels
new_labels2 = labels[labels.studyIndex == 's3'].sampleNames
new_labels2 = new_labels2.map(lambda s: '_'.join(s.split('_')[1:]))
new_labels2 = new_labels2.combine_first(new_labels)
labels['sampleNames'] = new_labels2
data_n.columns = list(new_labels2)
data_n = data_n.astype(float)
data_n.to_hdf(HDFS_DIR + 'methylation_norm.h5','BMIQ_Horvath')
labels.to_hdf(HDFS_DIR + 'methylation_norm.h5','labels')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Ice Albedo
Step7: 1.4. Atmospheric Coupling Variables
Step8: 1.5. Oceanic Coupling Variables
Step9: 1.6. Prognostic Variables
Step10: 2. Key Properties --> Software Properties
Step11: 2.2. Code Version
Step12: 2.3. Code Languages
Step13: 3. Grid
Step14: 3.2. Adaptive Grid
Step15: 3.3. Base Resolution
Step16: 3.4. Resolution Limit
Step17: 3.5. Projection
Step18: 4. Glaciers
Step19: 4.2. Description
Step20: 4.3. Dynamic Areal Extent
Step21: 5. Ice
Step22: 5.2. Grounding Line Method
Step23: 5.3. Ice Sheet
Step24: 5.4. Ice Shelf
Step25: 6. Ice --> Mass Balance
Step26: 7. Ice --> Mass Balance --> Basal
Step27: 7.2. Ocean
Step28: 8. Ice --> Mass Balance --> Frontal
Step29: 8.2. Melting
Step30: 9. Ice --> Dynamics
Step31: 9.2. Approximation
Step32: 9.3. Adaptive Timestep
Step33: 9.4. Timestep
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csir-csiro', 'sandbox-1', 'landice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
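# Example of a completed boolean entry (hypothetical, for illustration only):
#     DOC.set_value(False)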
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read in the data. We want "VG12"
Step2: Import the local models from etalon.py. See the paper for derivations of these equations.
Step3: Define the likelihood. In this case we are using two different gap sizes but a fixed fill factor.
Step4: Define the prior. We want to put a Normal prior on $d_M$ (mean 49 nm, $\sigma = 6$ nm, matching the code below).
Step5: Combine likelihood and prior to obtain the posterior.
Step6: Set up emcee.
Step7: Set up the initial conditions
Step8: Run the burn-in phase. Run the full MCMC. Pickle the results.
Step9: Exponentiate the sampled $\ln a$ and $\ln s$ back to linear $a$ and $s$ for easy inspection of the values.
Step10: Inspect the chain.
Step11: Convert $\ln a$ and $\ln s$ back to linear scale for plotting.
Step12: Calculate confidence intervals.
Step13: Overlay draws from the Gaussian Process.
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import emcee
import triangle
import pandas as pd
import seaborn as sns
from astroML.decorators import pickle_results
sns.set_context("paper", font_scale=2.0, rc={"lines.linewidth": 2.5})
sns.set(style="ticks")
df = pd.read_csv('../data/cln_20130916_cary5000.csv', index_col=0)
df = df[df.index > 1250.0]
plt.plot(df.index[::4], df.run11[::4]/100.0, label='On-mesh')
plt.plot(df.index, df.run10/100.0, label='Off-mesh')
plt.plot(df.index, df.run12/100.0, label='Shard2')
plt.plot(df.index, df.run9/100.0, label='DSP')
plt.plot(df.index, df.run15/100.0, label='VG08')
plt.plot(df.index, df.run17/100.0, label='VG08 alt')
#plt.plot(x, T_gap_Si_withFF_fast(x, 65.0, 0.5, n1)/T_DSP, label='Model')
plt.legend(loc='best')
plt.ylim(0.80, 1.05)
from etalon import *
np.random.seed(78704)
# Introduce the Real data, decimate the data.
x = df.index.values[::4]
N = len(x)
# Define T_DSP for the model
T_DSP = T_gap_Si(x, 0.0)
n1 = sellmeier_Si(x)
# Define uncertainty
yerr = 0.0004*np.ones(N)
iid_cov = np.diag(yerr ** 2)
# Select the spectrum of interest
# Normalize the spectrum by measured DSP Si wafer.
y = df.run11.values[::4]/100.0
def lnlike(dM, eps, lna, lns):
a, s = np.exp(lna), np.exp(lns)
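    # Squared-exponential (Gaussian) kernel between wavelength samples,
    # added to the iid noise term to form the full covariance matrix.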
off_diag_terms = a**2 * np.exp(-0.5 * (x[:, None] - x[None, :])**2 / s**2)
C = iid_cov + off_diag_terms
sgn, logdet = np.linalg.slogdet(C)
if sgn <= 0:
return -np.inf
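    # Model: equal-weight mixture of two etalon gaps (thickness dM + eps
    # and eps), each normalized by the DSP transmission.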
T_mix = 0.5 * (T_gap_Si_withFF_fast(x, dM+eps, 1.0, n1) + T_gap_Si_withFF_fast(x, eps, 1.0, n1))/T_DSP
r = y - T_mix
return -0.5 * (np.dot(r, np.linalg.solve(C, r)) + logdet)
def lnprior(dM, eps, lna, lns):
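    # Normal prior on dM (mean 49 nm, sigma 6 nm); flat priors on the
    # remaining parameters inside the box constraints below.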
prior = -0.5 * ((49.0-dM)/6.0)**2.0
if not (31.0 < dM < 67 and 0.0 < eps < 60.0 and -12 < lna < -2 and 0 < lns < 10):
return -np.inf
return prior
def lnprob(p):
lp = lnprior(*p)
if not np.isfinite(lp):
return -np.inf
return lp + lnlike(*p)
@pickle_results('SiGaps_12_VG12_twoGaps-sampler.pkl')
def hammer_time(ndim, nwalkers, dM_Guess, eps_Guess, a_Guess, s_Guess, nburnins, ntrials):
# Initialize the walkers
p0 = np.array([dM_Guess, eps_Guess, np.log(a_Guess), np.log(s_Guess)])
pos = [p0 + 1.0e-2*p0 * np.random.randn(ndim) for i in range(nwalkers)]
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob)
pos, lp, state = sampler.run_mcmc(pos, nburnins)
sampler.reset()
pos, lp, state = sampler.run_mcmc(pos, ntrials)
return sampler
np.random.seed(78704)
ndim, nwalkers = 4, 32
dM_Guess = 49.0
eps_Guess = 15.0
a_Guess = 0.0016
s_Guess = 25.0
nburnins = 200
ntrials = 700
sampler = hammer_time(ndim, nwalkers, dM_Guess, eps_Guess, a_Guess, s_Guess, nburnins, ntrials)
chain = sampler.chain
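# Convert the sampled log-parameters (ln a, ln s) back to linear scale
# for inspection and plotting.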
samples_lin = copy(sampler.flatchain)
samples_lin[:, 2:] = np.exp(samples_lin[:, 2:])
fig, axes = plt.subplots(4, 1, figsize=(5, 6), sharex=True)
fig.subplots_adjust(left=0.1, bottom=0.1, right=0.96, top=0.98,
wspace=0.0, hspace=0.05)
[a.plot(np.arange(chain.shape[1]), chain[:, :, i].T, "k", alpha=0.5)
for i, a in enumerate(axes)]
[a.set_ylabel("${0}$".format(l)) for a, l in zip(axes, ["d_M", "\epsilon", "\ln a", "\ln s"])]
axes[-1].set_xlim(0, chain.shape[1])
axes[-1].set_xlabel("iteration");
fig = triangle.corner(samples_lin,
labels=map("${0}$".format, ["d_M", "\epsilon", "a", "s"]),
quantiles=[0.16, 0.84])
fig = triangle.corner(samples_lin[:,0:2],
labels=map("${0}$".format, ["d_M", "\epsilon"]),
quantiles=[0.16, 0.84])
plt.savefig("VG12_twoGaps_cornerb.pdf")
dM_mcmc, eps_mcmc, a_mcmc, s_mcmc = map(lambda v: (v[1], v[2]-v[1], v[1]-v[0]),
zip(*np.percentile(samples_lin, [16, 50, 84],
axis=0)))
dM_mcmc, eps_mcmc, a_mcmc, s_mcmc
print "{:.0f}^{{+{:.0f}}}_{{-{:.0f}}}".format(*dM_mcmc)
print "{:.0f}^{{+{:.0f}}}_{{-{:.0f}}}".format(*eps_mcmc)
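# Optional convergence sanity check: for emcee, a mean acceptance fraction
# of roughly 0.2-0.5 is generally considered healthy.
print "Mean acceptance fraction: {:.3f}".format(np.mean(sampler.acceptance_fraction))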
plt.figure(figsize=(6,3))
for dM, eps, a, s in samples_lin[np.random.randint(len(samples_lin), size=60)]:
off_diag_terms = a**2 * np.exp(-0.5 * (x[:, None] - x[None, :])**2 / s**2)
C = iid_cov + off_diag_terms
fit = 0.5*(T_gap_Si_withFF_fast(x, dM+eps, 1.0, n1)+T_gap_Si_withFF_fast(x, eps, 1.0, n1))/T_DSP
vec = np.random.multivariate_normal(fit, C)
plt.plot(x, vec,"-b", alpha=0.06)
plt.step(x, y,color="k", label='Measurement')
fit = 0.5*(T_gap_Si_withFF_fast(x, dM_mcmc[0]+eps_mcmc[0], 1, n1)+T_gap_Si_withFF_fast(x, eps_mcmc[0], 1, n1))/T_DSP
fit_label = 'Model with $d_M={:.0f}$ nm, $\epsilon={:.0f}$'.format(dM_mcmc[0], eps_mcmc[0])
plt.plot(x, fit, '--', color=sns.xkcd_rgb["pale red"], alpha=1.0, label=fit_label)
fit1 = T_gap_Si_withFF_fast(x, 43, 0.5, n1)/T_DSP
fit2 = T_gap_Si_withFF_fast(x, 55, 0.5, n1)/T_DSP
fit2_label = 'Model with $d_M={:.0f}\pm{:.0f}$ nm, $\epsilon={:.0f}$'.format(49, 6, 0)
plt.fill_between(x, fit1, fit2, alpha=0.6, color=sns.xkcd_rgb["green apple"])
plt.plot([-10, -9], [-10, -9],"-", alpha=0.85, color=sns.xkcd_rgb["green apple"], label=fit2_label)
plt.plot([-10, -9], [-10, -9],"-b", alpha=0.85, label='Draws from GP')
plt.plot([0, 5000], [1.0, 1.0], '-.k', alpha=0.5)
plt.fill_between([1200, 1250], 2.0, 0.0, hatch='\\', alpha=0.4, color='k', label='Si absorption cutoff')
plt.xlabel('$\lambda$ (nm)');
plt.ylabel('$T_{gap}$');
plt.xlim(1200, 2501);
plt.ylim(0.9, 1.019);
plt.legend(loc='lower right')
plt.savefig("VG12_twoGapsb.pdf", bbox_inches='tight')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The matplotlib figure
Step2: On its own, drawing the figure artist is uninteresting and will result in an empty piece of paper (that's why we didn't see anything above).
Step3: Matplotlib's pyplot module makes the process of creating graphics easier by allowing us to skip some of the tedious Artist construction. For example, we did not need to manually create the Figure artist with plt.figure because it was implicit that we needed a figure when we created the Axes artist.
Step4: Notice how the axes view limits (ax.viewLim) have been updated to include the whole of the line.
Step5: The simplicity of this example shows how visualisations can be produced quickly and easily with matplotlib, but it is worth remembering that for full control of Figure and Axes artists we can mix the convenience of pyplot with the power of matplotlib's object oriented design.
Step6: Exercise 3 continued
Step7: Titles, Legends, colorbars and annotations
Step8: The creation of a legend is as simple as adding a "label" to lines of interest. This can be done in the call to plt.plot and then followed up with a call to plt.legend.
Step9: Colorbars are created with the plt.colorbar function
Step10: Matplotlib comes with powerful annotation capabilities, which are described in detail at http
Step11: Saving your plots
Step12: Matplotlib supports many output file formats, including the most commonly used ones. You can see a list of the supported file formats, including the filename extensions they are recognised by, with:
Step13: Further steps
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
fig = plt.figure()
plt.show()
ax = plt.axes()
plt.show()
ax = plt.axes()
line1, = ax.plot([0, 1, 2, 1.5], [3, 1, 2, 4])
plt.show()
plt.plot([0, 1, 2, 1.5], [3, 1, 2, 4])
plt.show()
top_right_ax = plt.subplot(2, 3, 3)
bottom_left_ax = plt.subplot(2, 3, 4)
plt.show()
import numpy as np
x = np.linspace(-180, 180, 60)
y = np.linspace(-90, 90, 30)
x2d, y2d = np.meshgrid(x, y)
data = np.cos(3 * np.deg2rad(x2d)) + np.sin(2 * np.deg2rad(y2d))
plt.contourf(x, y, data)
plt.show()
plt.imshow(data, extent=[-180, 180, -90, 90],
interpolation='nearest', origin='lower')
plt.show()
plt.pcolormesh(x, y, data)
plt.show()
plt.scatter(x2d, y2d, c=data, s=15)
plt.show()
plt.bar(x, data.sum(axis=0), width=np.diff(x)[0])
plt.show()
plt.plot(x, data.sum(axis=0), linestyle='--',
marker='d', markersize=10, color='red', alpha=0.5)
plt.show()
fig = plt.figure()
ax = plt.axes()
# Adjust the created axes so that its topmost extent is at 0.8 of the figure height.
fig.subplots_adjust(top=0.8)
fig.suptitle('Figure title', fontsize=18, fontweight='bold')
ax.set_title('Axes title', fontsize=16)
ax.set_xlabel('The X axis')
ax.set_ylabel('The Y axis $y=f(x)$', fontsize=16)
ax.text(0.5, 0.5, 'Text centered at (0.5, 0.5)\nin data coordinates.',
horizontalalignment='center', fontsize=14)
plt.show()
x = np.linspace(-3, 7, 200)
plt.plot(x, 0.5 * x ** 3 - 3 * x ** 2, linewidth=2,
label='$f(x)=0.5x^3-3x^2$')
plt.plot(x, 1.5 * x ** 2 - 6 * x, linewidth=2, linestyle='--',
label='Gradient of $f(x)$', )
plt.legend(loc='lower right')
plt.grid()
plt.show()
x = np.linspace(-180, 180, 60)
y = np.linspace(-90, 90, 30)
x2d, y2d = np.meshgrid(x, y)
data = np.cos(3 * np.deg2rad(x2d)) + np.sin(2 * np.deg2rad(y2d))
plt.contourf(x, y, data)
plt.colorbar(orientation='horizontal')
plt.show()
x = np.linspace(-3, 7, 200)
plt.plot(x, 0.5*x**3 - 3*x**2, linewidth=2)
plt.annotate('Local minimum',
xy=(4, -18),
xytext=(-2, -40), fontsize=15,
arrowprops={'facecolor': 'black', 'headlength': 10})
plt.grid()
plt.show()
plt.plot(range(10))
plt.savefig('my_plot.png')
from IPython.display import Image
Image(filename='my_plot.png')
plt.gcf().canvas.get_supported_filetypes_grouped()
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(1234)
n_steps = 500
t = np.arange(n_steps)
# Probability distribution:
mu = 0.002 # Mean
sigma = 0.01 # Standard deviation
# Generate a random walk, with position X as a function of time:
S = mu + sigma * np.random.randn(n_steps)
X = S.cumsum()
# Calculate the 1 sigma upper and lower analytic population bounds:
lower_bound = mu * t - sigma * np.sqrt(t)
upper_bound = mu * t + sigma * np.sqrt(t)
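# One possible way to visualise the walk against the analytic bounds
# (a sketch; the plotting choices here are illustrative, not prescribed
# by the exercise):
plt.plot(t, X, label='Random walk')
plt.fill_between(t, lower_bound, upper_bound, alpha=0.3,
                 label=r'$\mu t \pm \sigma\sqrt{t}$')
plt.xlabel('Time step')
plt.ylabel('Position X')
plt.legend(loc='best')
plt.show()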
<END_TASK>
|