| text_prompt (stringlengths 168 – 30.3k) | code_prompt (stringlengths 67 – 124k) |
|---|---|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Expected output
Step3: Expected Output
Step4: In fact, if $x = (x_1, x_2, ..., x_n)$ is a row vector, then np.exp(x) will apply the exponential function to every element of x. The output will thus be $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$.
Step5: Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
Step7: Any time you need more info on a numpy function, we encourage you to look at the official documentation.
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step15: Expected Output
Step16: Expected Output
Step18: As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.
Step20: Expected Output
|
<ASSISTANT_TASK:>
Python Code:
### START CODE HERE ### (≈ 1 line of code)
test = "Hello World"
### END CODE HERE ###
print ("test: " + test)
# GRADED FUNCTION: basic_sigmoid
import math
import numpy as np
def basic_sigmoid(x):
"""Compute sigmoid of x.

Arguments:
x -- A scalar

Return:
s -- sigmoid(x)
"""
### START CODE HERE ### (≈ 1 line of code)
s = math.exp(-1 * x)
s = 1 / (1 + s)
### END CODE HERE ###
return s
basic_sigmoid(3)
### One reason why we use "numpy" instead of "math" in Deep Learning ###
x = [1, 2, 3]
basic_sigmoid(x) # you will see this gives an error when you run it, because x is a vector.
import numpy as np
# example of np.exp
x = np.array([1, 2, 3])
print(np.exp(x)) # result is (exp(1), exp(2), exp(3))
# example of vector operation
x = np.array([1, 2, 3])
print (x + 3)
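# Illustrative addition (not part of the original assignment): the same broadcasting
# applies to the other elementwise operations mentioned above, such as 1/x.
x = np.array([1, 2, 3])
print(1 / x)   # -> [1.         0.5        0.33333333]
print(x ** 2)  # -> [1 4 9]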
# GRADED FUNCTION: sigmoid
import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()
def sigmoid(x):
"""Compute the sigmoid of x.

Arguments:
x -- A scalar or numpy array of any size

Return:
s -- sigmoid(x)
"""
### START CODE HERE ### (≈ 1 line of code)
#s = np.exp(np.multiply(-1, x))
#s = np.divide(1, np.add(1, s))
s = 1 / (1 + np.exp(-x))
### END CODE HERE ###
return s
x = np.array([1, 2, 3])
sigmoid(x)
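# Quick illustration (added for clarity): the same vectorized sigmoid works on a scalar
# and on a 2-D array, because np.exp broadcasts over any shape.
print(sigmoid(0))                           # -> 0.5
print(sigmoid(np.array([[1, 2], [3, 4]])))  # applied elementwise over the matrix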
# GRADED FUNCTION: sigmoid_derivative
def sigmoid_derivative(x):
"""Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.

You can store the output of the sigmoid function into variables and then use it to calculate the gradient.

Arguments:
x -- A scalar or numpy array

Return:
ds -- Your computed gradient.
"""
### START CODE HERE ### (≈ 2 lines of code)
s = sigmoid(x)
ds = s * (1 - s)
### END CODE HERE ###
return ds
x = np.array([1, 2, 3])
print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
# GRADED FUNCTION: image2vector
def image2vector(image):
"""
Argument:
image -- a numpy array of shape (length, height, depth)

Returns:
v -- a vector of shape (length*height*depth, 1)
"""
### START CODE HERE ### (≈ 1 line of code)
v = image.reshape(image.shape[0] * image.shape[1] * image.shape[2], 1)
### END CODE HERE ###
return v
# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values
image = np.array([[[ 0.67826139, 0.29380381],
[ 0.90714982, 0.52835647],
[ 0.4215251 , 0.45017551]],
[[ 0.92814219, 0.96677647],
[ 0.85304703, 0.52351845],
[ 0.19981397, 0.27417313]],
[[ 0.60659855, 0.00533165],
[ 0.10820313, 0.49978937],
[ 0.34144279, 0.94630077]]])
print ("image2vector(image) = " + str(image2vector(image)))
# GRADED FUNCTION: normalizeRows
def normalizeRows(x):
"""Implement a function that normalizes each row of the matrix x (to have unit length).

Argument:
x -- A numpy matrix of shape (n, m)

Returns:
x -- The normalized (by row) numpy matrix. You are allowed to modify x.
"""
### START CODE HERE ### (≈ 2 lines of code)
# Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)
x_norm = np.linalg.norm(x, ord=2, axis=1, keepdims=True)
# Divide x by its norm.
x = x / x_norm
### END CODE HERE ###
return x
x = np.array([
[0, 3, 4],
[1, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
# GRADED FUNCTION: softmax
def softmax(x):
"""Calculates the softmax for each row of the input x.

Your code should work for a row vector and also for matrices of shape (n, m).

Argument:
x -- A numpy matrix of shape (n, m)

Returns:
s -- A numpy matrix equal to the softmax of x, of shape (n, m)
"""
### START CODE HERE ### (≈ 3 lines of code)
# Apply exp() element-wise to x. Use np.exp(...).
x_exp = np.exp(x)
# Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).
x_sum = np.sum(x_exp, axis=1, keepdims=True)
# Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.
s = x_exp / x_sum
### END CODE HERE ###
return s
x = np.array([
[9, 2, 5, 0, 0],
[7, 5, 0, 0 ,0]])
print("softmax(x) = " + str(softmax(x)))
import time
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###
tic = time.process_time()
dot = 0
for i in range(len(x1)):
dot+= x1[i]*x2[i]
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC OUTER PRODUCT IMPLEMENTATION ###
tic = time.process_time()
outer = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros
for i in range(len(x1)):
for j in range(len(x2)):
outer[i,j] = x1[i]*x2[j]
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC ELEMENTWISE IMPLEMENTATION ###
tic = time.process_time()
mul = np.zeros(len(x1))
for i in range(len(x1)):
mul[i] = x1[i]*x2[i]
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###
W = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array
tic = time.process_time()
gdot = np.zeros(W.shape[0])
for i in range(W.shape[0]):
for j in range(len(x1)):
gdot[i] += W[i,j]*x1[j]
toc = time.process_time()
print ("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### VECTORIZED DOT PRODUCT OF VECTORS ###
tic = time.process_time()
dot = np.dot(x1,x2)
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED OUTER PRODUCT ###
tic = time.process_time()
outer = np.outer(x1,x2)
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED ELEMENTWISE MULTIPLICATION ###
tic = time.process_time()
mul = np.multiply(x1,x2)
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED GENERAL DOT PRODUCT ###
tic = time.process_time()
dot = np.dot(W,x1)
toc = time.process_time()
print ("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
# GRADED FUNCTION: L1
def L1(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)

Returns:
loss -- the value of the L1 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum(np.abs(y - yhat))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))
# GRADED FUNCTION: L2
def L2(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)

Returns:
loss -- the value of the L2 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum((y - yhat) ** 2)
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: From this point on we can use ccd to refer to our clustering change detector implementation in the Dashboard. Here we can use all sorts of utilities that we can also analyse. The first one we'll use is the estimator implementation
Step2: Let's now visualize the data, showing the probabilities along the range of values we've encountered.
Step3: The charts above show how the estimator scores candidate change points based on how each point partitions the space. While that's interesting, we still need to check whether (and which of) these high-probability change points partition the space in a statistically significant way.
Step4: Alternative A
Step5: Alternative B
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
_CATAPULT_PATH = os.path.join(os.path.abspath(''), '..', '..', '..')
_DASHBOARD_PATH=os.path.join(_CATAPULT_PATH, 'dashboard')
# Set up the environment so that we can import the modules in the
# Dashboard codebase.
sys.path.insert(0, _CATAPULT_PATH)
sys.path.insert(0, _DASHBOARD_PATH)
import dashboard
# Include all the paths needed for testing
for path in dashboard.PathsForTesting():
sys.path.insert(0, path)
from dashboard.common import clustering_change_detector as ccd
import pandas as pd
import numpy as np
import statsmodels as sm
import numba as nb
import matplotlib.pyplot as plt
# Create a random dataset from some normal distributions.
measurements = np.concatenate(
(np.random.normal(100, 5, 50),
np.random.normal(20, 5, 50),
np.random.normal(100, 5, 50)),
axis=None)
commits = np.arange(1000,1150,dtype=np.int64)
frame = pd.DataFrame(measurements, columns=['measurement'], index=commits)
frame
# We then create a column of estimates. First, we'll do it with
# the pure Python implementation, and time it.
def create_estimates(measurements):
results = np.array(
[ccd.Estimator(measurements, i) for i in
range(len(measurements))])
return results
estimates=create_estimates(measurements)
frame['estimates'] = estimates
frame['probability'] = estimates / np.nanmax(estimates)
%timeit create_estimates(measurements)
fig, axes = plt.subplots(2, 1, sharex=True)
frame['measurement'].plot.line(ax=axes[0], title='Measurements')
frame['probability'].plot.line(ax=axes[1], title='Probabilities',color='r')
# We need to convert the measurements into a Python list
# because the pure Python implementation can only work with
# those native objects.
inputs = list(measurements)
result = ccd.ClusterAndFindSplit(inputs)
# WARNING: Running this might take a while depending on how
# capable your machine is.
%timeit ccd.ClusterAndFindSplit(inputs)
print(result)
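# Illustrative follow-up (an assumption, not part of the original notebook): the synthetic
# series above was built from three 50-sample segments, so the true change points sit at
# indices 50 and 100 (commits 1050 and 1100). Assuming ClusterAndFindSplit returns candidate
# split indices into `inputs`, the detected splits could be mapped back to commits with:
# print([commits[i] for i in result])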
# TODO(dberris): Implement the purely vectorized versions here.
# TODO(dberris): Implement a GPU-accelerated versions here.
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using SQL for Queries
Step3: Ordering
Step5: Aggregate queries
Step6: Joins
Step8: Matching students and majors
Step10: Left outer join
Step12: Emulating a full outer join with UNION ALL
Step14: Using linker tables to match students to classes (a MANY TO MANY join)
Step16: Same thing but including students with no majors
Step17: Using SQLite3
Step18: SQLite specific commands to get metadata
Step19: What are the columns of the table "Album"?
Step20: Standard SQL statements with parameter substitution
Step21: Not this
Step23: User defined functions
Step25: An aggregate UDF
Step26: Using SQL magic functions
Step27: Configuring the SqlMagic extension
Step28: Connect to SQLite3 database
Step29: Other databases
Step30: Cell magic
Step31: You can assign results of queries to Python names
Step32: Resutls behave like lists
Step33: You can use Python variables in your queires.
Step34: Convert to pandas dataframe
|
<ASSISTANT_TASK:>
Python Code:
import seaborn as sns
import pandas as pd
import numpy as np
from pandasql import PandaSQL
pdsql = PandaSQL()
tips = sns.load_dataset('tips')
tips.head()
pdsql("SELECT * FROM tips LIMIT 5;")
pdsql("SELECT * FROM tips WHERE sex='Female' LIMIT 5;")
pdsql("SELECT tip, sex, size FROM tips WHERE total_bill< 10 LIMIT 5;")
query = """
SELECT * FROM tips
WHERE sex='Female' and smoker='Yes'
ORDER BY total_bill ASC
LIMIT 5;
"""
pdsql(query)
query = """
SELECT count(*) AS count, max(tip) AS max, min(tip) AS min FROM tips
WHERE size > 1
GROUP BY sex, day
HAVING max < 6
ORDER BY count DESC
LIMIT 5;
"""
pdsql(query)
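# For comparison (an added sketch, not in the original notebook): the same aggregate
# in plain pandas, grouping by sex and day.
agg = (tips[tips['size'] > 1]
       .groupby(['sex', 'day'])['tip']
       .agg(['count', 'max', 'min'])
       .reset_index())
agg[agg['max'] < 6].sort_values('count', ascending=False).head(5)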
student = pd.read_csv('data/student.txt')
student
cls = pd.read_csv('data/class.txt')
cls
major = pd.read_csv('data/major.txt')
major
student_cls = pd.read_csv('data/student_class.txt')
student_cls
query = """
SELECT s.first, s.last, m.name
FROM student s
INNER JOIN major m
ON s.major_id = m.major_id;
"""
pdsql(query)
query = """
SELECT s.first, s.last, m.name
FROM student s
LEFT OUTER JOIN major m
ON s.major_id = m.major_id;
"""
pdsql(query)
query = """
SELECT s.first, s.last, m.name
FROM student s
LEFT JOIN major m
ON s.major_id = m.major_id
UNION ALL
SELECT s.first, s.last, m.name
FROM major m
LEFT JOIN student s
ON s.major_id = m.major_id
WHERE s.major_id IS NULL;
"""
pdsql(query)
query = """
SELECT s.first, s.last, c.code, c.name, c.credits
FROM student s
INNER JOIN student_cls sc ON s.student_id = sc.student_id
INNER JOIN cls c ON c.class_id = sc.class_id;
"""
pdsql(query)
query = """
SELECT s.first, s.last, c.code, c.name, c.credits
FROM student s
LEFT OUTER JOIN student_cls sc ON s.student_id = sc.student_id
LEFT OUTER JOIN cls c ON c.class_id = sc.class_id;
"""
pdsql(query)
import sqlite3
c = sqlite3.connect('data/Chinook_Sqlite.sqlite')
list(c.execute("SELECT name FROM sqlite_master WHERE type='table';"))
list(c.execute("PRAGMA table_info(Album);"))
t = ['%rock%']
list(c.execute("SELECT * FROM Album WHERE Title like ? LIMIT 5;", t))
t = ["'%rock%'"]
list(c.execute("SELECT * FROM Album WHERE Title like %s LIMIT 5;" % t[0]))
def encode(text, offset):
"""Caesar cipher of text with given offset."""
from string import ascii_lowercase, ascii_uppercase
tbl = dict(zip(map(ord, ascii_lowercase + ascii_uppercase),
ascii_lowercase[offset:] + ascii_lowercase[:offset] +
ascii_uppercase[offset:] + ascii_uppercase[:offset]))
return text.translate(tbl)
c.create_function("encode", 2, encode)
list(c.execute("SELECT Title, encode(Title, 3) FROM Album limit 5;"))
class CV:
"""Aggregate UDF for coefficient of variation in %."""
def __init__(self):
self.s = []
def step(self, value):
self.s.append(value)
def finalize(self):
if len(self.s) < 2:
return 0
else:
return 100.0*np.std(self.s)/np.mean(self.s)
c.create_aggregate("cv", 1, CV)
list(c.execute("PRAGMA table_info(Invoice);"))
list(c.execute("SELECT cv(Total) from Invoice limit 10;"))
c.close()
import warnings
with warnings.catch_warnings():
warnings.simplefilter('ignore')
%load_ext sql
%config SqlMagic
%config SqlMagic.displaylimit=10
%sql sqlite:///data/Chinook_Sqlite.sqlite
%sql SELECT * from Album LIMIT 5;
%sql SELECT * from Artist LIMIT 5;
%%sql
SELECT Artist.Name, Album.Title
FROM Album
INNER JOIN Artist on Album.ArtistId = Artist.ArtistId
ORDER BY Artist.Name ASC
LIMIT 5;
result = %sql SELECT * from Album;
type(result)
result[2:4]
artist_id = 10
%sql select * from Artist where ArtistId < :artist_id;
word = '%rock%'
%sql select * from Album WHERE Title LIKE :word;
df = result.DataFrame()
df.head(5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setting up the inputs
Step2: Plane wave incident field
Step3: Additional inputs for the simulation
Step4: Target plot
Step5: Performing the calculations
Step6: Local Field
Step7: Plotting the results
Step8: Local field enhancement
|
<ASSISTANT_TASK:>
Python Code:
#------Library loading------
# numpy for matrix computations
import numpy as np; import numpy.ma as ma
# system libraries
import sys
# plotting libraries
%matplotlib inline
import matplotlib.pylab as plt
from matplotlib.patches import Circle, Ellipse
# Generalized Multiparticle Mie import
sys.path.append('../')
import py_gmm
# building the optical constant database
eps_db_out=py_gmm.mat.generate_eps_db('../epsilon/',ext='*.edb')
eps_files,eps_names,eps_db=eps_db_out['eps_files'],eps_db_out['eps_names'],eps_db_out['eps_db']
# sphere radius (in nm)
v_r = np.array([ 40., 40.])
# sphere position (in nm)
m_xyz = np.array([[ -42.5, 0. , 0. ],
[ 42.5, 0. , 0. ]])
# how many spheres in the target? We guess it from the length of the radius vector
ns = len(v_r)
# sphere composition, calling the names contained in "eps_names", just populated above
target_comp= np.array(['eAgPalikSTDf','eAuJCSTDf']) # vector containing the optical constants names
# refractive index of the environment
n_matrix = 1.33 # water
# Euler angles: (alpha,beta,gamma)=(0,0,0) means a z-directed, x-polarized plane wave
alpha = 0.0 # azimuth
beta = 0.0 # polar
gamma = 0.0 # polarization
# Wavelengths for the specrum computation
wl_min = 300
wl_max = 800
n_wl = 250
v_wl = np.linspace(wl_min,wl_max,n_wl)
# Wavelength for the local field computation
v_wl_lf = [430.0,630] # resonance wavelengths
n_stop=10 # maximum multipolar expansion order
f_int=0.0; # interaction cutoff (normally set to zero to have full interactions)
lf_ratio=300; # plot sphere local field contribution up to a distance equal to d=lf_ratio*r_sphere
qs_flag='no' # retarded simulation
n_E = 400 # local field plotting grid resolution
# target plot
fig = plt.figure(num=1,figsize=(10,10)) # setting the figure size
ax = fig.add_subplot(1, 1, 1, aspect='equal') # creating the plotting axis
# plot bounds and eliminating x and y ticks
plt.xlim(-1.1*(-m_xyz[0,0]+v_r[0]),1.1*(m_xyz[1,0]+v_r[ns-1]))
plt.ylim(-1.1*(v_r[0]),1.1*(v_r[0]))
plt.xticks([])
plt.yticks([])
# plotting the target
v_color = ['0.6','y']
for c,r,col in zip(m_xyz,v_r,v_color):
c0=c[0];c1=c[1];
ax.add_patch(Circle((c0,c1),r,color=col))
# computing the expansion coefficients and cross sections with a loop
m_abcd_ext_sca_abs = [] # list to be filled with the output
for wl in v_wl:
# retrieving optical constants at wl from the database
e_list=py_gmm.mat.db_to_eps(wl,eps_db,target_comp);
m_eps=np.column_stack((np.real(e_list),np.imag(e_list)));
# solving the gmm problem (calculating the cross sections and the expansion coefficients)
out=py_gmm.gmm_py.gmm_f2py_module.expansion_coefficients(m_xyz, # target sphere position in nm
v_r, # target sphere radii in nm
m_eps, # e1 and e2 for each sphere
f_int, # interaction coefficient
n_matrix, # environment refractive index
wl, # computation wavelength
alpha,beta,gamma, # euler angles for the incident pw
0, # =0 Linear, =1 Left Circular, =2 Right Circular
n_stop, # maximum number for expansion coefficients
qs_flag) # quasi static approximation
m_abcd_ext_sca_abs.append(out)
# extracting coefficients and cross section
v_coeff=[];v_cext=[];v_csca=[];v_cabs=[];
for out in m_abcd_ext_sca_abs:
v_coeff.append(out[0]);
v_cext.append(out[1]);
v_csca.append(out[2]);
v_cabs.append(out[3]);
# converting the lists to numpy arrays
v_cext=np.array(v_cext)
v_csca=np.array(v_csca)
v_cabs=np.array(v_cabs)
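# Optional sanity check (added as a sketch; assumes the solver returns extinction, scattering
# and absorption cross sections in the same units): summed over the two spheres, extinction
# should equal scattering + absorption up to numerical accuracy.
rel_diff = np.abs(np.sum(v_cext, axis=1) - (np.sum(v_csca, axis=1) + np.sum(v_cabs, axis=1))) / np.sum(v_cext, axis=1)
print(rel_diff.max())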
# local field for the first resonance
v_field = []
for wl_lf in v_wl_lf:
# optical constants
e_list=py_gmm.mat.db_to_eps(wl_lf,eps_db,target_comp);
m_eps=np.column_stack((np.real(e_list),np.imag(e_list)));
# gmm coefficients computation
out=py_gmm.gmm_py.gmm_f2py_module.expansion_coefficients(m_xyz, # target sphere position in nm
v_r, # target sphere radii in nm
m_eps, # e1 and e2 for each sphere
f_int, # interaction coefficient
n_matrix, # environment refractive index
wl_lf, # computation wavelength
alpha,beta,gamma, # euler angles for the incident pw
0, # =0 Linear, =1 Left Circular, =2 Right Circular
n_stop, # maximum number for expansion coefficients
qs_flag) # quasi static approximation
v_amnbmn=out[0][:,0] # getting field expansion coefficients
v_dmncmn=out[0][:,1]
# local field
v_emn=py_gmm.gmm_py.gmm_f2py_module.emn(n_stop)[0] # normalization coeffs
# building plotting grid
x_min = -1.5*(v_r[0]-m_xyz[0,0])
x_max = 1.5*(v_r[1]+m_xyz[1,0])
y_min = -1.5*v_r[0]
y_max = 1.5*v_r[0]
v_x=np.linspace(x_min,x_max,n_E);
v_y=np.linspace(y_min,y_max,n_E);
# retrieving the local field
m_E=[]
for x in v_x:
for y in v_y:
out = py_gmm.gmm_py.gmm_f2py_module.exyz("yes", # include incident local field
n_stop, # maximum number for expansion coefficients
0, # =0 Linear, =1 Left Circular, =2 Right Circular
lf_ratio, # plot sphere contribution up to distance d=lf_ratio*r
wl_lf, # computation wavelength
alpha,beta,gamma,
x,y,0.0, # field computation coordinates
v_amnbmn,v_dmncmn,v_emn, # expansion and normalization coefficients
m_xyz,m_eps,v_r, # sphere position, composition and size
n_matrix, # environment refractive index
qs_flag) # quasi static approximation
m_E=np.append(m_E,out[3])
m_E=np.array(m_E).reshape(n_E,n_E)
v_field.append(m_E)
# cross section plot
f_size=25;
f_size_ticks=20;
plt.figure(1,figsize=(15,10));
plt.plot(v_wl,np.sum(v_cext,axis=1),'k',linewidth=3.0);
plt.plot(v_wl,v_cext[:,0],'0.6',
v_wl,v_cext[:,1],'y',linewidth=2.0);
# plt title
plt.title('AuAg dimer',fontsize=f_size)
# axes labels
plt.xlabel(r'wavelength (nm)', fontsize=f_size)
plt.ylabel(r'C$_{ext}$', fontsize=f_size)
# ticks
plt.xticks(fontsize=f_size_ticks)
plt.yticks(fontsize=f_size_ticks)
# legend
plt.legend((r'Integral C$_{ext}$',
r'Ag C$_{ext}$',
r'Au C$_{ext}$'),frameon=False,fontsize=f_size-5)
# layout
plt.tight_layout()
# local field plot
f_size = 25
fig = plt.figure(2,figsize=(14,10))
v_title = ['High energy resonance','Low energy resonance']
for i_E,m_E in enumerate(v_field):
ax = fig.add_subplot(2, 1, i_E+1, aspect='equal') # creating the plotting axis
plt.imshow(m_E.T,origin='lower',cmap='gnuplot2', aspect=(y_max-y_min)/(x_max-x_min))
# remove ticks
plt.xticks([])
plt.yticks([])
# colorbar
cb = plt.colorbar()
cb.set_label('|E|', fontsize=f_size-5)
cb.ax.tick_params(labelsize=f_size-10)
plt.title(v_title[i_E],fontsize=f_size)
plt.tight_layout()
# sphere outlines
for c,r in zip(m_xyz,v_r):
# aspect rations
x_ar = n_E/(x_max-x_min)
y_ar = n_E/(y_max-y_min)
# circle centers
c0=x_ar*(c[0]-x_min)
c1=y_ar*(c[1]-y_min)
ax.add_patch(Ellipse((c0,c1),2.0*r*x_ar,2.0*r*y_ar,facecolor='none',edgecolor='w',linewidth=2.0))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1:
Step2: Non-commutative symbols
Step3: Exponential gf recap
Step4: Linear types
Step5: occupancies
Step6:
Step7:
Step8:
Step9: Differences
Step10:
Step11:
|
<ASSISTANT_TASK:>
Python Code:
from itertools import repeat
from sympy import *
#from type_system import *
%run ../../src/commons.py
%run ./type-system.py
init_printing()
x,y,m,n,t,z = symbols('x y m n t z', commutative=True)
alpha, beta, gamma, eta = symbols(r'\alpha \beta \gamma \eta', commutative=True)
f,g = Function('f'), Function('g')
((1/(1-w[0]*z))*(1/(1-w[1]*z))).diff(z).series(z, n=6)
define(f(z), z/((1-z)**2),ctor=FEq).series(z,n=10)
define(f(z), 1/(1-alpha*z), ctor=FEq).series(z,n=10)
define(f(z), 1/(1-(u[0]+u[1])*z), ctor=FEq).series(z,n=4)
define(f(z), 1/(1-(o[0]+o[1])*z), ctor=FEq).series(z,n=4)
define(f(z), z*(1/(1-z))*(1/(1-z)), ctor=FEq).series(z,n=10)
define(f(z), z**3,ctor=FEq).series(z, n=10, kernel='exponential')
define(f(z), exp(z),ctor=FEq).series(z, n=10, kernel='exponential')
define(f(z), z*exp(z), ctor=FEq).series(z, n=10, kernel='exponential')
define(f(z), z**2*exp(z)/factorial(2,evaluate=False),
ctor=FEq).series(z, n=10, kernel='exponential')
define(f(z), z**3*exp(z)/factorial(3, evaluate=False),
ctor=FEq).series(z, n=10, kernel='exponential')
define(f(z), (exp(z)+exp(-z))/2, ctor=FEq).series(z, n=20, kernel='exponential')
define(f(z), exp(m*z), ctor=FEq).series(z, n=10, kernel='exponential')
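# Cross-check with plain SymPy (an added sketch, independent of the helper library): reading
# exp(m*z) as an exponential generating function, the n-th Taylor coefficient times n! should
# recover m**n.
expr = exp(m*z).series(z, n=6).removeO()
[factorial(k) * expr.coeff(z, k) for k in range(6)]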
define(f(z), (exp(z)-1)/z, ctor=FEq).series(z, n=10, kernel='exponential')
define(f(z), 1/(1-z), ctor=FEq).series(z, n=10, kernel='exponential')
define(f(z), (1/(1-z))*(1/(1-z)), ctor=FEq).series(z, n=10, kernel='exponential')
define(f(z), exp(z)**2, ctor=FEq).series(z, n=10, kernel='exponential')
tyvar(x).gf()
(tyvar(u[0]) * tyvar(u[1]) * tyvar(u[2])).gf()
(tyvar(o[0]) * tyvar(o[1]) * tyvar(o[2])).gf()
(tyvar(u[0]) | tyvar(u[1]) | tyvar(u[2])).gf()
(tyvar(o[0]) | tyvar(o[1]) | tyvar(o[2])).gf()
truth.gf() + falsehood.gf()
boolean.gf()
maybe(tyvar(alpha)[z]).gf()
nel = 4
syms=[u[i] for i in range(nel)]
occ_prb, = cp(maybe(tyvar(u[i]*z)) for i in range(nel)).gf() # here we can use the `[z]` notation too.
occ_prb
occupancy(occ_prb, syms, objects='unlike', boxes='unlike').series(z)
occupancy(occ_prb, syms, objects='unlike', boxes='like').series(z)
occupancy(occ_prb, syms, objects='like', boxes='unlike').series(z)
occupancy(occ_prb, syms, objects='like', boxes='like').series(z)
u_hat = symbols(r'␣_0:10')
nel = 3
occ_prb, = cp(tyvar(z*(sum(u[j] for j in range(nel) if j != i))) | tyvar(u_hat[i])
for i in range(nel)).gf()
occ_prb
syms=[u[i] for i in range(nel)]+[u_hat[i] for i in range(nel)]
occupancy(occ_prb, syms, objects='unlike', boxes='unlike').series(z)
occupancy(occ_prb, syms, objects='unlike', boxes='like').series(z)
occupancy(occ_prb, syms, objects='like', boxes='unlike').series(z)
occupancy(occ_prb, syms, objects='like', boxes='like').series(z)
occupancy_problem, = cp(maybe(du(tyvar((u[i]*z)**(j+1)) for j in range(i+1)))
for i in range(3)).gf()
occupancy_problem
occupancy(occupancy_problem, syms=[u[i] for i in range(3)], objects='unlike', boxes='unlike').series(z)
occupancy(occupancy_problem, syms=[u[i] for i in range(3)], objects='unlike', boxes='like').series(z)
occupancy(occupancy_problem, syms=[u[i] for i in range(3)], objects='like', boxes='unlike').series(z)
occupancy(occupancy_problem, syms=[u[i] for i in range(3)], objects='like', boxes='like').series(z)
((1+t)*(1+t+t**2)*(1+t+t**2+t**3)).series(t,n=10) # just for checking
def sums_of_powers(boxes, base):
p = IndexedBase('\space')
return cp(cp() | tyvar(p[j]*z**(base**i))
for i in range(0,boxes)
for j in [Pow(base,i,evaluate=False)] # implicit let
).gf()
occupancy, = sums_of_powers(boxes=4, base=2)
occupancy.series(z, n=32)
occupancy, = sums_of_powers(boxes=4, base=3)
occupancy.series(z, n=100)
occupancy, = sums_of_powers(boxes=4, base=5)
occupancy.series(z, n=200)
occupancy, = sums_of_powers(boxes=4, base=7)
occupancy.series(z, n=500)
assert 393 == 7**0 + 7**2 + 7**3 # _.rhs.rhs.coeff(z, 393)
difference = (cp() | tyvar(-gamma*z))
ones = nats * difference
ones_gf, = ones.gf()
ones_gf
ones_gf(z,1,1,1).series(z, n=10) # check!
one_gf, = (ones * difference).gf()
one_gf.series(z, n=10).rhs.rhs.subs({w[0]:1, w[1]:1, gamma:1})
l = IndexedBase(r'\circ')
def linear_comb_of_powers(boxes, base):
return cp(lst(tyvar(Mul(l[j], z**(base**i), evaluate=False)))
for i in range(boxes)
for j in [Pow(base,i,evaluate=False)]).gf()
occupancy, = linear_comb_of_powers(boxes=4, base=Integer(2))
occupancy.series(z, n=8)
occupancy, = linear_comb_of_powers(boxes=4, base=3)
occupancy.series(z, n=9)
occupancy, = linear_comb_of_powers(boxes=4, base=5)
occupancy.series(z, n=10)
def uniform_rv(n):
return tyvar(S(1)/nel) * lst(tyvar(x))
occupancy, = uniform_rv(n=10).gf()
occupancy.series(x,n=10)
class lst_structure_w(rec):
def definition(self, alpha):
me = self.me()
return alpha | lst(me)
def label(self):
return r'\mathcal{L}_{w}' # `_s` stands for "structure"
lst_structure_w(tyvar(alpha)).gf()
[gf.series(alpha) for gf in _]
class lst_structure(rec):
def definition(self, alpha):
me = self.me()
return alpha | (lst(me) * me * me)
def label(self):
return r'\mathcal{L}_{s}' # `_s` stands for "structure"
lst_structure(tyvar(alpha)).gf()
_[0].series(alpha, n=10)
class structure(rec):
def definition(self, alpha):
me = self.me()
return alpha | (bin_tree(me) * me * me)
def label(self):
return r'\mathcal{S}'
structure(tyvar(alpha)).gf()
gf = _[0]
gf.simplify()
nel = 7
s = gf.simplify().series(alpha, n=nel).rhs.rhs
[s.coeff(alpha, n=i).subs({pow(-1,S(1)/3):-1}).radsimp().powsimp() for i in range(nel)]
class structure(rec):
def definition(self, alpha):
me = self.me()
return alpha | (nnbin_tree(me) * me)
def label(self):
return r'\mathcal{S}'
structure(tyvar(alpha)).gf()
gf = _[0]
gf.simplify()
nel = 20
s = gf.simplify().series(alpha, n=nel).rhs.rhs
[s.coeff(alpha, n=i).subs({pow(-1,S(1)/3):-1}).radsimp().powsimp() for i in range(nel)]
class nn_structure(rec):
def definition(self, alpha):
me = self.me()
return alpha * bin_tree(nnbin_tree(me))
def label(self):
return r'\mathcal{L}_{s}^{+}' # `_s` stands for "structure"
nn_structure(tyvar(alpha)).gf()
_[0].series(alpha, n=10)
class nnlst_structure(rec):
def definition(self, alpha):
me = self.me()
return alpha * lst(nnlst(me))
def label(self):
return r'\mathcal{L}_{s}^{+}' # `_s` stands for "structure"
nnlst_structure(tyvar(alpha)).gf()
_[0].series(alpha, n=10)
class tree(rec):
def definition(self, alpha):
return alpha * lst(self.me())
def label(self):
return r'\mathcal{T}'
tree(tyvar(alpha)).gf()
_[0].series(alpha, n=10)
class combination(rec):
def definition(self, alpha):
me = self.me()
return alpha | (me * me)
def label(self):
return r'\mathcal{C}'
combination(tyvar(alpha)).gf()
_[0].series(alpha, n=10)
class ab_tree(rec):
def definition(self, alpha, beta):
me = self.me()
return beta | (alpha * me * me)
def label(self):
return r'\mathcal{T}_{a,b}'
ab_tree_gfs = ab_tree(tyvar(alpha), tyvar(beta)).gf()
ab_tree_gfs
ab_tree_gf = ab_tree_gfs[0]
fab_eq = FEq(ab_tree_gf.lhs, ab_tree_gf.rhs.series(beta, n=20).removeO(), evaluate=False)
fab_eq
fab_eq(x,x)
(_*alpha).expand()
#with lift_to_Lambda(fab_eq) as F:
B = fab_eq(x,1)
A = fab_eq(1,x)
A,B,
(A+B).expand()
((1+x)*A).expand()
class dyck(rec):
def definition(self, alpha, beta):
me = self.me()
return cp() | (alpha * me * beta * me)
def label(self):
return r'\mathcal{D}'
dyck_gfs = dyck(tyvar(alpha*x), tyvar(beta*x)).gf()
dyck_gfs
dyck_gf = dyck_gfs[0]
dyck_gf.series(x,n=10)
class motzkin(rec):
def definition(self, alpha, beta, gamma):
me = self.me()
return cp() | (alpha * me * beta * me) | (gamma * me)
def label(self):
return r'\mathcal{M}'
motzkin_gfs = motzkin(tyvar(alpha*x), tyvar(beta*x), tyvar(gamma*x),).gf()
motzkin_gfs
motzkin_gf = motzkin_gfs[0]
motzkin_gf.series(x,n=10)
motzkin_gf(x,1,1,1).series(x,n=10)
class motzkin_p(rec):
def definition(self, alpha, beta, gamma, eta):
me = self.me()
return cp() | (alpha * me * beta * me) | (gamma * me) | (eta * me)
def label(self):
return r'\mathcal{M}^{+}'
motzkinp_gfs = motzkin_p(tyvar(alpha*x), tyvar(beta*x), tyvar(gamma*x), tyvar(eta*x),).gf()
motzkinp_gfs
motzkinp_gf = motzkinp_gfs[0]
motzkinp_gf.series(x,n=6)
motzkinp_gf(x,1,1,1,1).series(x,n=10)
class fibo(rec):
def definition(self, alpha, beta):
me = self.me()
return cp() | alpha | ((beta | (alpha * beta)) * me)
def label(self):
return r'\mathcal{F}'
fibo_gf, = fibo(tyvar(alpha*x), tyvar(beta*x),).gf()
fibo_gf
fibo_gf.series(x,n=10)
fibo_gf(1,x,1).series(x,n=10)
lst_of_truth_gf, = lst(tyvar(x)).gf()
lst_of_truth_gf.series(x, n=10, is_exp=True)
lst_of_boolean_gf, = lst(boolean).gf()  # assumed definition: this gf is used below but was not defined in the extract
lst_of_boolean_gf.series(x,n=10,is_exp=True)
_.rhs.rhs.subs({w[0]:1,w[1]:1})
sum((_.rhs.rhs.coeff(x,i)/factorial(i))*x**i for i in range(1,10))
class powerset(ty):
def gf_rhs(self, ty):
return [exp(self.mulfactor() * gf.rhs) for gf in ty.gf()]
def mulfactor(self):
return 1
def label(self):
return r'\mathcal{P}'
powerset_of_tyvar_gf, = (2**(nnlst(tyvar(alpha)))).gf()
powerset_of_tyvar_gf
powerset_of_tyvar_gf.series(alpha, n=10, is_exp=True)
powerset_of_tyvar_gf, = (2**(nnlst(boolean))).gf()
powerset_of_tyvar_gf
powerset_of_tyvar_gf.series(x, n=5, is_exp=True)
_.rhs.rhs.subs({w[0]:1,w[1]:1})
powerset_of_tyvar_gf, _ = (2**(bin_tree(tyvar(alpha)))).gf()
powerset_of_tyvar_gf
powerset_of_tyvar_gf.series(alpha, n=10, is_exp=True)
l, = (2**(2**(nnlst(tyvar(alpha))))).gf()
define(l.lhs, l.rhs.ratsimp(), ctor=FEq).series(alpha,n=8,is_exp=True)
class cycle(ty):
def gf_rhs(self, ty):
return [log(gf.rhs) for gf in ty.gf()]
def label(self):
return r'\mathcal{C}'
cycle_of_tyvar_gf, = (~(lst(tyvar(alpha)))).gf()
cycle_of_tyvar_gf
cycle_of_tyvar_gf.series(alpha, n=10, is_exp=True)
cycle_of_tyvar_gf, = (~(lst(boolean))).gf()
cycle_of_tyvar_gf
cycle_of_tyvar_gf.series(x, n=8, is_exp=True)
_.rhs.rhs.subs({w[0]:1,w[1]:1})
Pstar_gf, = (2**(~(lst(tyvar(alpha))))).gf()
Pstar_gf.series(alpha, n=10, is_exp=True)
class ipowerset(powerset):
def mulfactor(self):
return -1
derangements_gf, = ((-2)**tyvar(alpha)).gf()
derangements_gf.series(alpha, n=10, is_exp=True)
derangements_gf, = ((-2)**nnlst(tyvar(alpha))).gf()
derangements_gf.series(alpha, n=10, is_exp=True)
[1,2][1:]
def foldr(f, l, i):
if not l:
return i
else:
car, *cdr = l
return f(car, foldr(f, cdr, i))
class arrow(ty):
def label(self):
return r'\rightarrow'
def gf_rhs(self, alpha, beta):
v = Dummy()
return [foldr(lambda gf, acc: Lambda([x], acc(gf.rhs)),
gfs[:-1],
Lambda([x], gfs[-1].rhs))(x)
for gfs in self.gfs_space()]
return [foldr(lambda gf, acc: acc**gf.rhs, gfs[:-1], gfs[-1].rhs)
for gfs in self.gfs_space()]
arr, = arrow(boolean, boolean).gf()
arr
arr.series(x,n=5,is_exp=False)
_.rhs.rhs.removeO().subs({w[0]:1,w[1]:1})
arr, = arrow(lst(boolean), lst(boolean)).gf()
arr
arr.series(x,n=5,is_exp=False)
_.rhs.rhs.removeO().subs({w[0]:1,w[1]:1})
lamda_gf = lamda(tyvar(x)).gf_rhs(tyvar(x))
lamda_gf
lamda_gf.rhs.series(x,n=10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-2', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Sci-Kit Learn, the machine learning library
Step5: It seems that the predictor LSTAT is correlated with our response and will be a good base model. Lets try to build a simple linear model using just one predictor and response, (sklearn works the same way for more predictors, you just have to put them in one dataframe).
|
<ASSISTANT_TASK:>
Python Code:
# import libraries
import matplotlib
import IPython
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib as mpl
import pylab
import seaborn as sns
import sklearn as sk
%matplotlib inline
## Read the housing data! This time it's not comma-separated but space-separated. Read up on how you can use Pandas
## to read space-separated files into a data frame.
housing = pd.read_csv('housing.txt', sep=r'\s+')  # the filename is a placeholder assumption; any whitespace-separated read works
# See if the import worked, print the first 5 lines using some in-built function. Where's your head at?
housing.head()
# Check if these of the variables are correlated using the visualization techniques built up in module 1!
# LSTAT and MEDV would be related. The more the proportion of poor houses, the smaller the price.
# Lets confirm our intuition
# Scatterplot between LSTAT and MEDV
sns.jointplot(housing.LSTAT,housing.MEDV,kind="reg")
# Define predictor and response
X = housing[['LSTAT']]
Y = housing.MEDV
# Load up the linear model and fit it.
from sklearn.linear_model import LinearRegression
lin_mod = LinearRegression()
lin_mod.fit(X,Y)
y_p = lin_mod.predict(X)
# Plot the results.
plt.scatter(X,Y,c='r')
plt.plot(X,y_p,c='y')
## Now start making your own regression!
# Remember the potential pitfalls we discussed.
# Correlation - check the correlation of each variable with the others. Here's a correlation map to get you
# started on what predictors should be used and which ones are highly correlated and may pose a problem!
corr = housing.corr()
sns.heatmap(corr)
plt.savefig("correl.png")
# Start building your model here!
# First you'll need to separate out the predictors and response
X = housing.drop('MEDV', axis=1)   # all 13 predictors
Y = housing.MEDV                   # response = MEDV
# You can reuse the lin_mod object to continue fitting to different data!
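# A minimal sketch of the next step (not part of the original skeleton): refit the same
# LinearRegression object on all predictors and inspect the fit quality and coefficients.
lin_mod.fit(X, Y)
print("R^2 on the training data:", lin_mod.score(X, Y))
print(dict(zip(X.columns, lin_mod.coef_)))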
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Functional API
Step2: Getting started
Step3: The shape of the data is set as a 784-dimensional vector. Since only the shape of each sample is specified, the batch size is always omitted.
Step4: The returned inputs contains information about the shape and dtype of the input data you feed to the model. The shape looks like this
Step5: The dtype looks like this
Step6: You create a new node in the graph of layers by calling a layer on this inputs object.
Step7: The "layer call" action is like drawing an arrow from "inputs" to the layer you created. You "pass" the inputs to the dense layer and get x as the output.
Step8: At this point, you can create a Model by specifying its inputs and outputs in the graph of layers.
Step9: Check out what the model summary looks like.
Step10: You can also plot the model as a graph.
Step11: And, optionally, display the input and output shapes of each layer in the plotted graph.
Step12: This figure and the code are almost identical. In the code version, the connection arrows are replaced by the call operations.
Step13: For more details, see the training and evaluation guide.
Step14: For more details, see the model serialization and saving guide.
Step15: Here, the decoding architecture is strictly symmetrical to the encoding architecture, so the output shape is the same as the input shape (28, 28, 1).
Step16: As you can see, models can be nested: a model can contain sub-models (since a model is just like a layer). A common use case for model nesting is ensembling. For example, here's how to ensemble a set of models into a single model that averages their predictions.
Step17: Manipulate complex graph topologies
Step18: Now plot the model.
Step19: When compiling this model, you can assign different losses to each output. You can even assign different weights to each loss, to adjust how much it contributes to the total training loss.
Step20: Since the output layers have different names, you could also specify the losses like this
Step21: Train the model by passing lists of NumPy arrays of inputs and targets.
Step22: When calling fit with a Dataset object, it should yield either a tuple of lists like ([title_data, body_data, tags_data], [priority_targets, dept_targets]) or a tuple of dictionaries like ({'title': title_data, 'body': body_data, 'tags': tags_data}, {'priority': priority_targets, 'department': dept_targets}).
Step23: Plot the model.
Step24: Now train the model.
Step25: Shared layers
Step26: Extract and reuse nodes in the graph of layers
Step27: These are the intermediate activations of the model, obtained by querying the graph data structure.
Step28: Use these features to create a new feature-extraction model that returns the values of the intermediate layer activations.
Step29: This comes in handy for tasks like neural style transfer.
Step30: To support serialization in your custom layer, define a get_config method that returns the constructor arguments of the layer instance.
Step31: Optionally, implement the class method from_config(cls, config), which is used to recreate a layer instance given its config dictionary. The default implementation of from_config is shown below.
Step32: You can use any subclassed layer or model in the functional API, as long as it implements a call method that follows one of the following patterns.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
inputs = keras.Input(shape=(784,))
# Just for demonstration purposes.
img_inputs = keras.Input(shape=(32, 32, 3))
inputs.shape
inputs.dtype
dense = layers.Dense(64, activation="relu")
x = dense(inputs)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(10)(x)
model = keras.Model(inputs=inputs, outputs=outputs, name="mnist_model")
model.summary()
keras.utils.plot_model(model, "my_first_model.png")
keras.utils.plot_model(model, "my_first_model_with_shape_info.png", show_shapes=True)
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype("float32") / 255
x_test = x_test.reshape(10000, 784).astype("float32") / 255
model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=keras.optimizers.RMSprop(),
metrics=["accuracy"],
)
history = model.fit(x_train, y_train, batch_size=64, epochs=2, validation_split=0.2)
test_scores = model.evaluate(x_test, y_test, verbose=2)
print("Test loss:", test_scores[0])
print("Test accuracy:", test_scores[1])
model.save("path_to_my_model")
del model
# Recreate the exact same model purely from the file:
model = keras.models.load_model("path_to_my_model")
encoder_input = keras.Input(shape=(28, 28, 1), name="img")
x = layers.Conv2D(16, 3, activation="relu")(encoder_input)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.Conv2D(16, 3, activation="relu")(x)
encoder_output = layers.GlobalMaxPooling2D()(x)
encoder = keras.Model(encoder_input, encoder_output, name="encoder")
encoder.summary()
x = layers.Reshape((4, 4, 1))(encoder_output)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, activation="relu")(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation="relu")(x)
autoencoder = keras.Model(encoder_input, decoder_output, name="autoencoder")
autoencoder.summary()
encoder_input = keras.Input(shape=(28, 28, 1), name="original_img")
x = layers.Conv2D(16, 3, activation="relu")(encoder_input)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.Conv2D(16, 3, activation="relu")(x)
encoder_output = layers.GlobalMaxPooling2D()(x)
encoder = keras.Model(encoder_input, encoder_output, name="encoder")
encoder.summary()
decoder_input = keras.Input(shape=(16,), name="encoded_img")
x = layers.Reshape((4, 4, 1))(decoder_input)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, activation="relu")(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation="relu")(x)
decoder = keras.Model(decoder_input, decoder_output, name="decoder")
decoder.summary()
autoencoder_input = keras.Input(shape=(28, 28, 1), name="img")
encoded_img = encoder(autoencoder_input)
decoded_img = decoder(encoded_img)
autoencoder = keras.Model(autoencoder_input, decoded_img, name="autoencoder")
autoencoder.summary()
def get_model():
inputs = keras.Input(shape=(128,))
outputs = layers.Dense(1)(inputs)
return keras.Model(inputs, outputs)
model1 = get_model()
model2 = get_model()
model3 = get_model()
inputs = keras.Input(shape=(128,))
y1 = model1(inputs)
y2 = model2(inputs)
y3 = model3(inputs)
outputs = layers.average([y1, y2, y3])
ensemble_model = keras.Model(inputs=inputs, outputs=outputs)
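# Quick usage check (illustration only, not from the original guide): the ensemble averages
# the three sub-model outputs, so a batch of 2 random inputs yields predictions of shape (2, 1).
sample_batch = tf.random.normal((2, 128))
print(ensemble_model(sample_batch).shape)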
num_tags = 12 # Number of unique issue tags
num_words = 10000 # Size of vocabulary obtained when preprocessing text data
num_departments = 4 # Number of departments for predictions
title_input = keras.Input(
shape=(None,), name="title"
) # Variable-length sequence of ints
body_input = keras.Input(shape=(None,), name="body") # Variable-length sequence of ints
tags_input = keras.Input(
shape=(num_tags,), name="tags"
) # Binary vectors of size `num_tags`
# Embed each word in the title into a 64-dimensional vector
title_features = layers.Embedding(num_words, 64)(title_input)
# Embed each word in the text into a 64-dimensional vector
body_features = layers.Embedding(num_words, 64)(body_input)
# Reduce sequence of embedded words in the title into a single 128-dimensional vector
title_features = layers.LSTM(128)(title_features)
# Reduce sequence of embedded words in the body into a single 32-dimensional vector
body_features = layers.LSTM(32)(body_features)
# Merge all available features into a single large vector via concatenation
x = layers.concatenate([title_features, body_features, tags_input])
# Stick a logistic regression for priority prediction on top of the features
priority_pred = layers.Dense(1, name="priority")(x)
# Stick a department classifier on top of the features
department_pred = layers.Dense(num_departments, name="department")(x)
# Instantiate an end-to-end model predicting both priority and department
model = keras.Model(
inputs=[title_input, body_input, tags_input],
outputs=[priority_pred, department_pred],
)
keras.utils.plot_model(model, "multi_input_and_output_model.png", show_shapes=True)
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[
keras.losses.BinaryCrossentropy(from_logits=True),
keras.losses.CategoricalCrossentropy(from_logits=True),
],
loss_weights=[1.0, 0.2],
)
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={
"priority": keras.losses.BinaryCrossentropy(from_logits=True),
"department": keras.losses.CategoricalCrossentropy(from_logits=True),
},
loss_weights=[1.0, 0.2],
)
# Dummy input data
title_data = np.random.randint(num_words, size=(1280, 10))
body_data = np.random.randint(num_words, size=(1280, 100))
tags_data = np.random.randint(2, size=(1280, num_tags)).astype("float32")
# Dummy target data
priority_targets = np.random.random(size=(1280, 1))
dept_targets = np.random.randint(2, size=(1280, num_departments))
model.fit(
{"title": title_data, "body": body_data, "tags": tags_data},
{"priority": priority_targets, "department": dept_targets},
epochs=2,
batch_size=32,
)
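# A minimal sketch (not in the original guide) of the Dataset-based alternative mentioned in
# the description: the tf.data.Dataset yields (inputs_dict, targets_dict) tuples keyed by layer name.
dataset = tf.data.Dataset.from_tensor_slices(
    (
        {"title": title_data, "body": body_data, "tags": tags_data},
        {"priority": priority_targets, "department": dept_targets},
    )
).batch(32)
model.fit(dataset, epochs=1)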
inputs = keras.Input(shape=(32, 32, 3), name="img")
x = layers.Conv2D(32, 3, activation="relu")(inputs)
x = layers.Conv2D(64, 3, activation="relu")(x)
block_1_output = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(block_1_output)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
block_2_output = layers.add([x, block_1_output])
x = layers.Conv2D(64, 3, activation="relu", padding="same")(block_2_output)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
block_3_output = layers.add([x, block_2_output])
x = layers.Conv2D(64, 3, activation="relu")(block_3_output)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(10)(x)
model = keras.Model(inputs, outputs, name="toy_resnet")
model.summary()
keras.utils.plot_model(model, "mini_resnet.png", show_shapes=True)
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=keras.losses.CategoricalCrossentropy(from_logits=True),
metrics=["acc"],
)
# We restrict the data to the first 1000 samples so as to limit execution time
# on Colab. Try to train on the entire dataset until convergence!
model.fit(x_train[:1000], y_train[:1000], batch_size=64, epochs=1, validation_split=0.2)
# Embedding for 1000 unique words mapped to 128-dimensional vectors
shared_embedding = layers.Embedding(1000, 128)
# Variable-length sequence of integers
text_input_a = keras.Input(shape=(None,), dtype="int32")
# Variable-length sequence of integers
text_input_b = keras.Input(shape=(None,), dtype="int32")
# Reuse the same layer to encode both inputs
encoded_input_a = shared_embedding(text_input_a)
encoded_input_b = shared_embedding(text_input_b)
vgg19 = tf.keras.applications.VGG19()
features_list = [layer.output for layer in vgg19.layers]
feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)
img = np.random.random((1, 224, 224, 3)).astype("float32")
extracted_features = feat_extraction_model(img)
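# Illustration only (not from the original guide): the feature-extraction model returns
# one activation tensor per VGG19 layer.
print(len(extracted_features))        # number of tapped layers
print(extracted_features[-1].shape)   # shape of the final activation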
class CustomDense(layers.Layer):
def __init__(self, units=32):
super(CustomDense, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)
class CustomDense(layers.Layer):
def __init__(self, units=32):
super(CustomDense, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
return {"units": self.units}
inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)
config = model.get_config()
new_model = keras.Model.from_config(config, custom_objects={"CustomDense": CustomDense})
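# For reference (as mentioned in the description), the default implementation of
# from_config simply forwards the config dict to the constructor:
#
#     @classmethod
#     def from_config(cls, config):
#         return cls(**config)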
units = 32
timesteps = 10
input_dim = 5
# Define a Functional model
inputs = keras.Input((None, units))
x = layers.GlobalAveragePooling1D()(inputs)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
class CustomRNN(layers.Layer):
def __init__(self):
super(CustomRNN, self).__init__()
self.units = units
self.projection_1 = layers.Dense(units=units, activation="tanh")
self.projection_2 = layers.Dense(units=units, activation="tanh")
# Our previously-defined Functional model
self.classifier = model
def call(self, inputs):
outputs = []
state = tf.zeros(shape=(inputs.shape[0], self.units))
for t in range(inputs.shape[1]):
x = inputs[:, t, :]
h = self.projection_1(x)
y = h + self.projection_2(state)
state = y
outputs.append(y)
features = tf.stack(outputs, axis=1)
print(features.shape)
return self.classifier(features)
rnn_model = CustomRNN()
_ = rnn_model(tf.zeros((1, timesteps, input_dim)))
units = 32
timesteps = 10
input_dim = 5
batch_size = 16
class CustomRNN(layers.Layer):
def __init__(self):
super(CustomRNN, self).__init__()
self.units = units
self.projection_1 = layers.Dense(units=units, activation="tanh")
self.projection_2 = layers.Dense(units=units, activation="tanh")
self.classifier = layers.Dense(1)
def call(self, inputs):
outputs = []
state = tf.zeros(shape=(inputs.shape[0], self.units))
for t in range(inputs.shape[1]):
x = inputs[:, t, :]
h = self.projection_1(x)
y = h + self.projection_2(state)
state = y
outputs.append(y)
features = tf.stack(outputs, axis=1)
return self.classifier(features)
# Note that you specify a static batch size for the inputs with the `batch_shape`
# arg, because the inner computation of `CustomRNN` requires a static batch size
# (when you create the `state` zeros tensor).
inputs = keras.Input(batch_shape=(batch_size, timesteps, input_dim))
x = layers.Conv1D(32, 3)(inputs)
outputs = CustomRNN()(x)
model = keras.Model(inputs, outputs)
rnn_model = CustomRNN()
_ = rnn_model(tf.zeros((1, 10, 5)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 2
Step2: Exercise 3
Step3: The Newton-Raphson Method
|
<ASSISTANT_TASK:>
Python Code:
# COMPLETE THE FOLLOWING FUNCTION
def RadiceCubica(x):
# TO BE COMPLETED
# TO BE COMPLETED
# TO BE COMPLETED
# If no cube root is found:
return "failed", ""
# Test function for the function you need to implement
def UnitTest():
Xs = [27, -8, 57893, 1957816251, 7406961012236344616, 35.7, 'ciao']
for i,x in enumerate(Xs):
msg, y = RadiceCubica(x)
if msg == 'ok':
print("Test ",i+1,' ok, result: ', y)
else:
print("Test ",i+1,' failed!')
# Run the test
UnitTest()
# COMPLETE THE FOLLOWING FUNCTION
def ApxRadiceQuadrata(x, epsilon=1e-02):
# TO BE COMPLETED
# TO BE COMPLETED
# TO BE COMPLETED
return "failed", "", 0
# Test function for the function you need to implement
def UnitTest():
Xs = [25, 0.25, 123456]
for i,x in enumerate(Xs):
msg, y, iter = ApxRadiceQuadrata(x)
if msg == 'ok':
print("Test ",i+1,' ok, result: ', y, ' | iterations:', iter)
else:
print("Test ",i+1,' failed! | iterations:', iter)
# You can choose between
UnitTest()
# Or, call the function with one argument at a time
#print(ApxRadiceQuadrata(0.25))
# COMPLETE THE FOLLOWING FUNCTION
def BS_ApxRadiceQuadrata(x, epsilon=1e-02):
# TO BE COMPLETED
# TO BE COMPLETED
# TO BE COMPLETED
return "failed", "", 0
# Test function for the function you need to implement
def UnitTest():
Xs = [25, 0.25, 123456, 123456789]
for i,x in enumerate(Xs):
print('----------- Start Test ', i+1, ' -------------')
msg, y, iter = BS_ApxRadiceQuadrata(x)
if msg == 'ok':
print('TEST ',i+1,' ok, result: ', y, ' | iterations:', iter)
else:
print('TEST ',i+1,' failed! | iterations:', iter)
print('----------- Test ', i+1, ' Completed -------------')
# You can choose between
UnitTest()
# Polynomial whose root we want to find
#def p(x):
# TO BE COMPLETED
# First derivative of the given polynomial
#def q(x):
# TO BE COMPLETED
def NewtonRaphson(p, q, epsilon=1e-02):
# TO BE COMPLETED
# TO BE COMPLETED
# TO BE COMPLETED
return 'failed', '', 0
# Test of the implemented function
NewtonRaphson(p,q)
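# Illustration only (not the exercise solution): a few Newton-Raphson iterations on the
# sample polynomial f(x) = x**2 - 2, whose derivative is f'(x) = 2*x. Note that p and q
# above must be defined before NewtonRaphson(p, q) can actually be called.
x_demo = 1.0
for _ in range(100):
    if abs(x_demo**2 - 2) < 1e-6:
        break
    x_demo = x_demo - (x_demo**2 - 2) / (2 * x_demo)
print('Demo root:', x_demo)  # converges to sqrt(2), roughly 1.414214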
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Dictionary
Step2: It is also important to note that the original data set has many more prospects (36548) than existing customers (4640). However, it may be a bad idea to make a stratified split over this data set, since that way we would lose the time dimension of the problem. In order to better check whether the time dimension matters for this problem and the record provided, we need to re-create the missing calendar dates and transform the original data set into a timeseries object.
Step3: The provided data set, bank_marketing, has 41188 record lines describing various customer and prospect attributes, as well as their response to the telemarketing campaign of interest. The percentage of unique calendar dates across this record is low, whereas many more people seem to respond positively as time goes by. However, some months are missing from the data set, so adding a time dimension to this problem cannot help provide better predictions.
Step4: In order to evaluate our learning algorithms later, we need to make a train/test split of the bank_marketing SFrame. However, due to the class imbalance observed in the contacts' response (there are many more prospects than existing customers), we had better do so in a stratified way.
Step5: ROI Calculation
Step6: Call everyone (assuming we have the budget & time to do so), ROI is 10.27%
Step7: Lead Scoring Modeling
Step8: A large proportion of customers who opened deposit accounts were employed (not students), under 38
Step9: Result
Step10: The toolkit automatically evaluates several types of algorithms, including
Step11: This initial model can be considered accurate given that it correctly predicts the purchasing decisions of ~90% of the contacts. However, the toolkit_model leaves room for improvement. Specifically only ~66% of predicted sales actually convert to sales. Furthermore, only ~24% of actual sales were actually predicted by the model. In order to better understand the model we can review the importance of the input features.
Step12: Lead score the contact list and measure our ROI
Step13: Result
Step14: Next we add quadratic interactions between the four features below
Step15: and re-train the GraphLab Create AutoML Classifier for this new data set qtrain.
Step16: Next, we evaluate the new AutoML Classifier, new_toolkit_model, on the test data set.
Step17: Note that this model is almost as accurate as the previous one, with similar precision (~66% of the predicted sales were actually converted to sales) and recall (~24% of actual sales were actually predicted by the model). However, to get a better feel for the model just trained (new_toolkit_model) and how it differs from the previous one (toolkit_model), we can review the importance of the input features in these two cases.
Step18: By comparing these two models we note that
Step19: Result
Step20: To group the age values of our contacts we leverage the FeatureBinner method of the feature_engineering toolkit of GraphLab Create as shown below.
Step21: Let's now train a boosted trees classifier model using this enriched data set, qtrain1. We also tweak its parameters to achieve better predictive performance.
Step22: Next we evaluate the new_boostedtrees_model on the test data set.
Step23: This new model (new_boostedtrees_model) is almost as accurate as the previous one, has higher precision (~66% of the predicted sales were actually converted to sales) and similar recall (~23% of actual sales were actually predicted by the model). To get a better feel for the model just trained (new_boostedtrees_model) and how it differs from the previous one (new_toolkit_model), we can review the importance of the input features in these two cases.
Step24: By comparing these two cases, we note that
Step25: Conclusion
|
<ASSISTANT_TASK:>
Python Code:
import graphlab as gl
import pandas as pd
from datetime import datetime
from sklearn.cross_validation import StratifiedKFold
## load data set from a locally saved csv file
bank_marketing = gl.SFrame.read_csv('./../../../04.UCI.ML.REPO/Bank_Marketing/bank-additional/bank-additional-full.csv',
delimiter=';')
## other methods of loading data sets...
# data = gl.SFrame('s3://' or 'hdfs://')
# data # pySpark RDD or SchemaRDD / Spark DataFrame
# data = gl.SFrame.read_json('')
# With a DB: configure ODBC manager / driver on the machine
# data = gl.connect_odbc?
# data = gl.from_sql?
bank_marketing.head()
gl.canvas.set_target('ipynb')
bank_marketing.show()
from helper_functions import *
def _month_to_number(x):
from dateutil import parser
return parser.parse(x).strftime('%m')
def _wkday_to_number(x):
from dateutil import parser
return parser.parse(x).strftime('%w')
def _str_to_datetime(x):
import datetime
import pytz
from dateutil import parser
return parser.parse(x).strftime('%Y-%m-%d')
def _unix_timestamp_to_datetime(x):
import time
import datetime
import pytz
from dateutil import parser
return parser.parse(x)
bank_marketing['y'] = bank_marketing['y'].apply(lambda x: 1 if x=='yes' else 0)
bank_marketing['month_nr'] = bank_marketing['month'].apply(_month_to_number)
bank_marketing['wkday_nr'] = bank_marketing['day_of_week'].apply(_wkday_to_number)
bank_marketing['year'] = add_running_year(bank_marketing['month_nr'], 2008)
bank_marketing['date'] = add_running_date(bank_marketing, 'year', 'month_nr', 'wkday_nr')
bank_marketing['date'] = bank_marketing.apply(lambda row: '-'.join(map(str,(row['year'], row['month_nr'], row['date']))))
bank_marketing['date'] = bank_marketing['date'].apply(_str_to_datetime)
bank_marketing['date'] = bank_marketing['date'].apply(_unix_timestamp_to_datetime)
bank_marketing
bank_marketing = gl.TimeSeries(bank_marketing, index='date')
print 'Number of record lines [bank_marketing]: %d' % len(bank_marketing)
print 'Unique calendar dates across data set [bank_marketing]: %d' % len(bank_marketing['date'].unique())
unique_dates_pct = (len(bank_marketing['date'].unique())*100/float(len(bank_marketing)))
print 'Percentage of unique calendar dates across data set [bank_marketing]: %.2f%%'% unique_dates_pct
bank_marketing.filter_by(2008,'year')['month_nr'].unique().sort()
print 'Full Data Set [year: 2008]:'
print '------------------------------'
bank_marketing_2008 = bank_marketing.filter_by(2008,'year')
customers = len(bank_marketing_2008[bank_marketing_2008['y']==1])
prospects = len(bank_marketing_2008[bank_marketing_2008['y']==0])
print 'Number of examples in year segment [bank_marketing]: %d' % len(bank_marketing_2008)
print 'Number of existent customers: %d (%.2f%%)' % (customers, 100*customers/float(len(bank_marketing_2008)))
print 'Number of prospects: %d (%.2f%%)\n' % (prospects, 100*prospects/float(len(bank_marketing_2008)))
bank_marketing.filter_by(2009,'year')['month_nr'].unique().sort()
print 'Full Data Set [year: 2009]:'
print '------------------------------'
bank_marketing_2009 = bank_marketing.filter_by(2009,'year')
customers = len(bank_marketing_2009[bank_marketing_2009['y']==1])
prospects = len(bank_marketing_2009[bank_marketing_2009['y']==0])
print 'Number of examples in year segment [bank_marketing]: %d' % len(bank_marketing_2009)
print 'Number of existent customers: %d (%.2f%%)' % (customers, 100*customers/float(len(bank_marketing_2009)))
print 'Number of prospects: %d (%.2f%%)\n' % (prospects, 100*prospects/float(len(bank_marketing_2009)))
bank_marketing.filter_by(2010,'year')['month_nr'].unique().sort()
print 'Full Data Set [year: 2010]:'
print '------------------------------'
bank_marketing_2010 = bank_marketing.filter_by(2010,'year')
customers = len(bank_marketing_2010[bank_marketing_2010['y']==1])
prospects = len(bank_marketing_2010[bank_marketing_2010['y']==0])
print 'Number of examples in year segment [bank_marketing]: %d' % len(bank_marketing_2010)
print 'Number of existent customers: %d (%.2f%%)' % (customers, 100*customers/float(len(bank_marketing_2010)))
print 'Number of prospects: %d (%.2f%%)' % (prospects, 100*prospects/float(len(bank_marketing_2010)))
## remove the time dimension of the problem
## transform the Timeseries object in a Numpy array
bank_marketing = bank_marketing.to_sframe().remove_column('date')
features = bank_marketing.column_names()
bank_marketing_np = bank_marketing.to_numpy()
## provide the stratified train/test split
skf = StratifiedKFold(bank_marketing['y'], n_folds=2, shuffle=True, random_state=1)
for train_idx, test_idx in skf:
train, test = bank_marketing_np[train_idx], bank_marketing_np[test_idx]
train = pd.DataFrame(train, index=train_idx, columns=features)
train = gl.SFrame(train, format='dataframe')
test = pd.DataFrame(test, index=test_idx, columns=features)
test = gl.SFrame(test, format='dataframe')
## restore original dtypes
for attrib in features:
train[attrib] = train[attrib].astype(bank_marketing[attrib].dtype())
test[attrib] = test[attrib].astype(bank_marketing[attrib].dtype())
print 'Training Data Set:'
print '---------------------'
train_customers = len(train[train['y']==1])
train_prospects = len(train[train['y']==0])
print 'Number of examples in training set [train]: %d' % len(train)
print 'Number of existent customers: %d (%.2f%%)' % (train_customers, 100*train_customers/float(len(train)))
print 'Number of prospects: %d (%.2f%%)\n' % (train_prospects, 100*train_prospects/float(len(train)))
print 'Test Data Set:'
print '-----------------'
test_customers = len(test[test['y']==1])
test_prospects = len(test[test['y']==0])
print 'Number of examples in validation set [test]: %d' % len(test)
print 'Number of existent customers: %d (%.2f%%)' % (test_customers, 100*test_customers/float(len(test)))
print 'Number of prospects: %d (%.2f%%)' % (test_prospects, 100*test_prospects/float(len(test)))
def calc_call_roi(contact_list, lead_score, pct_tocall):
#assumptions
cost_ofcall = 1.00
cust_ltv = 100.00 #customer lifetime value
num_calls = int(len(contact_list) * pct_tocall)
if 'lead_score' in contact_list.column_names():
contact_list.remove_column('lead_score')
contact_list = contact_list.add_column(lead_score, name='lead_score')
sorted_bymodel = contact_list.sort('lead_score', ascending=False)
call_list = sorted_bymodel[:num_calls]
num_subscriptions = len(call_list[call_list['y']==1])
roi = (num_subscriptions * cust_ltv - num_calls * cost_ofcall) / float(num_calls * cost_ofcall)
return roi
init_leadscores = gl.SArray([1 for _ in test])
init_roi = calc_call_roi(test, init_leadscores, 1)
print 'ROI for calling all contacts [test]: %.2f%%' % init_roi
num_customers = float(len(train))
numY = gl.Sketch(train['y']).frequency_count(1)
print "%.2f%% of contacts in training set opened long-term deposit accounts." % (numY/num_customers * 100.0)
median_age = gl.Sketch(train['age']).quantile(0.5)
num_purchasing_emp_under_median_age = sum(train.apply(lambda x: 1 if x['age']<median_age
and ((x['job']!='unemployed') &
(x['job']!='student') &
(x['job']!='unknown'))
and x['y']==1 else 0))
probY_emp_under_median_age = (num_purchasing_emp_under_median_age / float(numY)) * 100.0
print "%.2f%% of the clients who opened long-term deposit accounts, were employed (but not students) and had age < %d (median)." % (probY_emp_under_median_age, median_age)
target_leadscore = test.apply(lambda x: 1 if x['age']<median_age
and ((x['job']!='unemployed') & (x['job']!='student') & (x['job']!='unknown'))
and x['y']==1 else 0)
age_targeting_roi = calc_call_roi(test, target_leadscore, 0.2)
print 'ROI for targeted calls [employed (not students) and age < %d (median)] to 20%% of contacts: %.2f%%' % (median_age, age_targeting_roi)
## remove features that introduce noise in ML prediction
features = train.column_names()
features.remove('duration')
features.remove('y')
features.remove('month_nr')
features.remove('wkday_nr')
features.remove('year')
## GLC AutoML Classifier
toolkit_model = gl.classifier.create(train, features=features, target='y')
results = toolkit_model.evaluate(test)
print "accuracy: %.5f, precision: %.5f, recall: %.5f" % (results['accuracy'], results['precision'], results['recall'])
toolkit_model.get_feature_importance()
toolkit_leadscore = toolkit_model.predict(test,output_type='probability')
toolkit_roi = calc_call_roi(test, toolkit_leadscore, 0.2 )
print 'ROI for calling 20%% of highest predicted contacts: %.2f%%' % toolkit_roi
import matplotlib.pyplot as plt
%matplotlib inline
qfeatures0 = ['emp.var.rate','cons.price.idx','cons.conf.idx','euribor3m']
plt.figure(figsize=(10,10))
subplot_idx = 1
for attrib1 in qfeatures0:
for attrib2 in qfeatures0:
if(attrib2 != attrib1):
if subplot_idx < 5:
plt.subplot(2,2,subplot_idx)
plt.scatter(train[attrib1], train[attrib2])
plt.xlabel(attrib1)
plt.ylabel(attrib2)
plt.title('\'%s\' vs \'%s\'' % (attrib1, attrib2))
subplot_idx +=1
plt.show()
## define a quadratic transformer object
quadratic_transformer = gl.feature_engineering.QuadraticFeatures(features=qfeatures0)
## fit the quadratic transformer object over the train set
quadratic = gl.feature_engineering.create(train, quadratic_transformer)
## transform the train data set
qtrain = quadratic.transform(train)
## remove the features that may worse our predictions
qfeatures = qtrain.column_names()
qfeatures.remove('duration')
qfeatures.remove('y')
qfeatures.remove('month_nr')
qfeatures.remove('wkday_nr')
qfeatures.remove('year')
qtrain.head(5)
new_toolkit_model = gl.classifier.create(qtrain, target='y', features=qfeatures)
results = new_toolkit_model.evaluate(quadratic.transform(test))
print "accuracy: %.5f, precision: %.5f, recall: %.5f" % (results['accuracy'], results['precision'], results['recall'])
print '\'newtoolkit_model\'\n[GLC AutoML Classifier wt quadratic interactions]:\n'
print new_toolkit_model.get_feature_importance()
print '\'toolkit_model\'\n[GLC AutoML Classifier wo quadratic interactions]:\n'
print toolkit_model.get_feature_importance()
## show ROI for experimentation model
newtoolkit_leadscore = new_toolkit_model.predict(quadratic.transform(test),output_type='probability')
newtoolkit_roi = calc_call_roi(quadratic.transform(test), newtoolkit_leadscore, 0.2)
print 'ROI for calling predicted contacts: %.2f%%' % newtoolkit_roi
qtrain['age'].show()
## define a binning transformer for the age attribute of contacts
age_binning_transformer = gl.feature_engineering.FeatureBinner(features='age', strategy='quantile', num_bins=12)
## fit the age binning transformer over the train set
age_binning = gl.feature_engineering.create(train, age_binning_transformer)
## transform the train data set
qtrain1 = age_binning.transform(qtrain)
## remove the features that may worse our predictions
qfeatures1 = qtrain1.column_names()
qfeatures1.remove('duration')
qfeatures1.remove('y')
qfeatures1.remove('month_nr')
qfeatures1.remove('wkday_nr')
qfeatures1.remove('year')
qtrain1['age'].show()
## We create a boosted trees classifier with the enriched dataset.
new_boostedtrees_model = gl.boosted_trees_classifier.create(qtrain1, target='y', features = qfeatures1,
max_iterations = 100,
max_depth=5,
step_size=0.1,
min_child_weight=0.06,
random_seed=1,
early_stopping_rounds=10)
results = new_boostedtrees_model.evaluate(age_binning.transform(quadratic.transform(test)))
print "accuracy: %.5f, precision: %.5f, recall: %.5f" % (results['accuracy'], results['precision'], results['recall'])
print '\'new_boostedtrees_model\'\n[GLC Boosted Trees Classifier wt quadratic interactions,\
age grouping & hyperparams tuned]:\n'
new_boostedtrees_model.get_feature_importance().print_rows(num_rows=20)
print '\'newtoolkit_model\'\n[GLC AutoML Classifier wt quadratic interactions]:\n'
print new_toolkit_model.get_feature_importance()
## show ROI for experimentation model
test1 = age_binning.transform(quadratic.transform(test))
boostedtrees_leadscore = new_boostedtrees_model.predict(test1, output_type='probability')
boostedtrees_roi = calc_call_roi(test1, boostedtrees_leadscore, 0.2)
print 'ROI for calling predicted contacts: %.2f%%' % boostedtrees_roi
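## Convenience recap (not part of the original analysis): the ROI figures computed above
roi_summary = [('call everyone', init_roi),
               ('age/job targeting', age_targeting_roi),
               ('AutoML lead scoring', toolkit_roi),
               ('AutoML + quadratic terms', newtoolkit_roi),
               ('boosted trees + binning', boostedtrees_roi)]
for name, roi in roi_summary:
    print '%-26s %.2f%%' % (name, roi)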
pct_tocall = 0.2
boostedtrees_list = test1.sort('lead_score', ascending=False)
num_calls = int(len(boostedtrees_list)*pct_tocall)
print 'Assuming we have time and resources to call %d%% of the lead scored contact list, we\
need to make %d phone calls.\n' % (pct_tocall*100, num_calls)
print 'Lead Scored Contact List:'
boostedtrees_list['lead_score', 'age','campaign','euribor3m','job','loan', 'default', 'poutcome'].\
print_rows(num_rows=50, max_row_width=100)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can also jump to Part 6 for more audio examples.
Step2: ...and looks something like this in Western music notation
Step3: Briefly
Step4: Note that both the music notation and the array are symbolic representations of the rhythm; the rhythm is abstracted so that there is no information about tempo, dynamics, timbre, or other musical information. All we have is the temporal relationship between each note in the sequence (as well as the base assumption that the notes are evenly spaced).
Step5: 1.2 Rhythmic properties
Step6: We can listen to the pulses and steps together
Step7: You can follow along with the printed array and hear that every 1 corresponds to a pulse, and every 0 to a step.
Step8: Note that the tresillo rhythm's pulse lengths all fall along the Fibonacci sequence. This allows us to do some pretty fun things, as we'll see in a bit. But first let's take a step back.
Step9: And the first 20 numbers in the sequence are
Step10: The Fibonacci sequence is closely linked to the golden ratio in many ways, including the fact that as we go up the sequence, the ratio between successive numbers gets closer and closer to the golden ratio. (If you're interested, Vijay Iyer's article Strength in numbers
Step11: We can also use the golden ratio to find the index of a Fibonacci number
Step12: 2.2 Using Fibonacci numbers to manipulate rhythms
Step13: We might classify it as a Fibonacci rhythm, since every one of its pulse lengths is a Fibonacci number. If we wanted to expand that rhythm along the Fibonacci sequence, what would that look like?
Step14: We'll also want to be able to contract rhythms along the Fibonacci sequence (i.e. choose numbers in decreasing order instead of increasing order), as well as specify how many Fibonacci numbers away we want to end up.
Step15: Of course, once we have these scaled pulse lengths, we'll want to be able to convert them back into rhythms, in our original array format
Step16: This is exactly the kind of rhythmic expansion and contraction that the Vijay Iyer Trio explore in their renditions of "Mystic Brew" and "Human Nature (Trio Extension)".
Step17: An important feature we want to extract from the audio is tempo (i.e. the time interval between steps). Let's estimate that using the librosa.beat.tempo method (which requires us to first detect onsets, or the beginnings of notes).
Step18: The automatic tempo estimate isn't always reliable, so before going further we hard-code the tempo we already know for this track.
Step19: And let's listen to our extracted beats with the original audio track
Step20: 3.3 From beats to measures
Step21: Using beats_per_measure we can calculate the times for the start of each measure
Step22: Note that we're working in samples now, as this is the unit that the audio data is actually stored in; when we loaded up the audio track, we essentially read in a large array of samples. The sample rate, which we defined as sr, tells us how many samples there are per second.
Step23: We can visualize, and listen to, the measure and beat markers along with the original waveform
Step24: 3.4 Putting it all together
Step25: For this example, we want the rhythm to last an entire measure as well, so we'll set steps_per_measure to be the number of steps in the rhythm (in this case, 8)
Step28: With these markers in place, we can now overlay the tresillo rhythm onto each measure and listen to the result
Step29: The clicks for measures, pulses, and steps, overlap with each other at certain points. While you can hear this based on the fact that each click is at a different frequency, it can be hard to tell visually in the above figure. We can make this more apparent by plotting each set of clicks with a different color.
Step30: You can hear that the tresillo rhythm's pulses line up with the harmonic rhythm of "Human Nature"; generally, we want to pick rhythms and audio tracks that have at least some kind of musical relationship.
Step31: 4.2 Pulse ratios
Step32: 4.3 Modifying measures by time-stretching
Step33: You'll notice that in the part where we choose stretch methods, there's a function called euclidean_stretch that we haven't defined. We'll get to that in just a second! For now, let's just put a stub there
Step34: ... so that we can hear what our modification method sounds like when applied to the first measure of "Human Nature"
Step35: It doesn't sound like there's much difference between the stretched version and the original, does it?
Step36: Listening to the whole track, the only perceptible difference is that the last two beats of each measure are slightly faster. If we look at the pulse ratios again
Step37: ... we can see that this makes sense, as we're time-stretching the first two pulses by the same amount, and then time-stretching the last pulse by a different amount.
Step38: This gets to the heart of the problem
Step39: Looking at the first pulses of the original rhythm and target rhythm, we want to turn
Step40: The concept of Euclidean rhythms was first introduced by Godfried Toussaint in his 2004 paper The Euclidean Algorithm Generates Traditional Musical Rhythms.
Step41: 5.3 Using Euclidean rhythms to subdivide pulses
Step42: We want to know how much to stretch each subdivision. To do this, we'll convert these single pulses into rhythms of their own. First, we'll treat each step in the original pulse as an onset
Step43: And as mentioned before, we'll use Bjorklund's algorithm to generate the target pulse's rhythm. The trick here is to use the number of steps in the original pulse as the number of pulses for the target pulse rhythm (hence the conversion to onsets earlier)
Step44: You might have noticed that this rhythm is exactly the same as the rhythm produced by contracting the tresillo rhythm along the Fibonacci sequence by a factor of 1
Step45: And it's true that there is some significant overlap between Euclidean rhythms and Fibonacci rhythms. The advantage of working with Euclidean rhythms here is that they work with any number of pulses and steps, not just ones that are Fibonacci numbers.
Step46: The resulting pulse ratios are
Step47: ... which doesn't intuitively look like it would produce something any different from what we tried before. However, we might perceive a greater difference because
Step48: Let's take a listen to how it sounds
Step49: Much better! With clicks
Step50: As you can hear, the modified track's rhythm is in line with the clicks, and sounds noticeably different from the original song. This is a pretty good place to end up!
Step51: Now we can simply feed the function a path to an audio file (as well as any parameters we want to customize).
Step52: And indeed we get the exact same result.
Step53: As mentioned in part 2.2, we can contract rhythms as well using negative numbers as our stretch_factor. Let's try that with "Chan Chan" by the Buena Vista Social Club
Step54: (Note that although we do end up with a perceptible difference (the song now sounds like it's in 7/8), it should actually sound like it's in 5/8, since [1 0 0 1 0 0 1 0] is getting compressed to [1 0 1 0 1]. This is an implementation detail with the Euclidean stretch method that I need to fix.)
Step55: We can define both a custom target rhythm as well. In addition, neither original_rhythm nor target_rhythm have to be Fibonacci rhythms for the stretch algorithm to work (although with this implementation they do both have to have the same number of pulses).
Step56: As another example, we can give a swing feel to the first movement of Mozart's "Eine kleine Nachtmusik" (K. 525), as performed by A Far Cry
Step57: It works pretty decently until around 0
Step58: 6.4 Examples
|
<ASSISTANT_TASK:>
Python Code:
import IPython.display as ipd
ipd.Audio("../data/out_humannature_90s_stretched.mp3", rate=44100)
ipd.Audio("../data/tresillo_rhythm.mp3", rate=44100)
%matplotlib inline
import math # Standard library imports
import IPython.display as ipd, librosa, librosa.display, numpy as np, matplotlib.pyplot as plt # External libraries
import pardir; pardir.pardir() # Allow imports from parent directory
import bjorklund # Fork of Brian House's implementation of Bjorklund's algorithm https://github.com/brianhouse/bjorklund
tresillo_rhythm = np.array([1, 0, 0, 1, 0, 0, 1, 0])
print(tresillo_rhythm)
# Generate tresillo clicks
sr = 44100
tresillo_click_interval = 0.25 # in seconds
tresillo_click_times = np.array([i * tresillo_click_interval for i in range(len(tresillo_rhythm))
if tresillo_rhythm[i] != 0])
tresillo_clicks = librosa.clicks(times=tresillo_click_times, click_freq=2000.0, sr=sr) # Generate clicks according to the rhythm
# Plot clicks and click times
plt.figure(figsize=(8, 2))
librosa.display.waveplot(tresillo_clicks, sr=sr)
plt.vlines(tresillo_click_times + 0.005, -1, 1, color="r") # Add tiny offset so the first line shows up
plt.xticks(np.arange(0, 1.75, 0.25))
# Render clicks as audio
ipd.Audio(tresillo_clicks, rate=sr)
tresillo_num_pulses = np.count_nonzero(tresillo_rhythm)
tresillo_num_steps = len(tresillo_rhythm)
print("The tresillo rhythm has {} pulses and {} steps".format(tresillo_num_pulses, tresillo_num_steps))
def generate_rhythm_clicks(rhythm, click_interval=0.25, sr=44100):
step_length_samples = int(librosa.time_to_samples(click_interval, sr=sr))
rhythm_length_samples = step_length_samples * (len(rhythm))
# Generate click times
pulse_click_times, step_click_times = generate_rhythm_times(rhythm, click_interval)
# Generate pulse clicks
pulse_click_times = np.array([i * click_interval for i in range(len(rhythm))
if rhythm[i] != 0])
pulse_clicks = librosa.clicks(times=pulse_click_times, click_freq=2000.0, sr=sr, length=rhythm_length_samples)
# Generate step clicks
step_click_times = np.array([i * click_interval for i in range(len(rhythm))])
step_clicks = librosa.clicks(times=step_click_times, click_freq=1000.0, sr=sr, length=rhythm_length_samples)
step_clicks = np.hstack((step_clicks, np.zeros(step_length_samples, dtype="int"))) # add last step samples
# Add zeros to pulse clicks so that it's the same length as the step clicks signal
pulse_clicks = np.hstack((pulse_clicks, np.zeros(len(step_clicks)-len(pulse_clicks), dtype="int")))
# Ensure proper length
pulse_clicks = pulse_clicks[:rhythm_length_samples]
step_clicks = step_clicks[:rhythm_length_samples]
return (pulse_clicks, step_clicks)
def generate_rhythm_times(rhythm, interval):
pulse_times = np.array([float(i * interval) for i in range(len(rhythm)) if rhythm[i] != 0])
step_times = np.array([float(i * interval) for i in range(len(rhythm))])
return (pulse_times, step_times)
# Generate the clicks
tresillo_pulse_clicks, tresillo_step_clicks = generate_rhythm_clicks(tresillo_rhythm, tresillo_click_interval)
tresillo_pulse_times, tresillo_step_times = generate_rhythm_times(tresillo_rhythm, tresillo_click_interval)
# Tresillo as an array
print(tresillo_rhythm)
# Tresillo audio, plotted
plt.figure(figsize=(8, 2))
librosa.display.waveplot(tresillo_pulse_clicks + tresillo_step_clicks, sr=sr)
plt.vlines(tresillo_pulse_times + 0.005, -1, 1, color="r")
plt.vlines(tresillo_step_times + 0.005, -0.5, 0.5, color="r")
# Tresillo as audio
ipd.Audio(tresillo_pulse_clicks + tresillo_step_clicks, rate=44100)
# Function to calculate pulse lengths based on rhythm patterns
def calculate_pulse_lengths(rhythm):
pulse_lengths = np.array(([i for i,p in enumerate(rhythm) if p > 0]))
pulse_lengths = np.hstack((pulse_lengths, len(rhythm)))
pulse_lengths = np.array([pulse_lengths[i+1] - pulse_lengths[i] for i in range(len(pulse_lengths) - 1)])
return pulse_lengths
tresillo_pulse_lengths = calculate_pulse_lengths(tresillo_rhythm)
print("Tresillo pulse lengths: {}".format(tresillo_pulse_lengths))
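# Quick sanity check (illustration only) of the claim above: every tresillo pulse length
# is a Fibonacci number.
print(all(int(p) in (1, 2, 3, 5, 8, 13, 21) for p in tresillo_pulse_lengths))  # expected: True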
def fibonacci(n):
if n == 0 or n == 1:
return n
else:
return fibonacci(n-1) + fibonacci(n-2)
first_twenty_fibs = np.array([fibonacci(n) for n in range(20)])
plt.figure(figsize=(16,1))
plt.scatter(first_twenty_fibs, np.zeros(20), c="r")
plt.axis("off")
print(first_twenty_fibs)
# Calculate and plot Fibonacci number ratios
phi = (1 + math.sqrt(5)) / 2 # Golden ratio; 1.61803398875...
fibs_ratios = np.array([first_twenty_fibs[i] / float(max(1, first_twenty_fibs[i-1])) for i in range(2,20)])
plt.plot(np.arange(len(fibs_ratios)), fibs_ratios, "r")
# Plot golden ratio as a consant
phis = np.empty(len(fibs_ratios))
phis.fill(phi)
plt.xticks(np.arange(len(fibs_ratios)))
plt.xlabel("Fibonacci index (denotes i for ith Fibonacci number)")
plt.ylabel("Ratio between ith and (i-1)th Fibonacci number")
plt.plot(np.arange(len(phis)), phis, "b", alpha=0.5)
def find_fibonacci_index(n):
phi = (1 + math.sqrt(5)) / 2 # Golden ratio; 1.61803398875...
return int(math.log((n * math.sqrt(5)) + 0.5) / math.log(phi))
fib_n = 21
fib_i = find_fibonacci_index(fib_n)
assert(fibonacci(fib_i) == fib_n)
print("{} is the {}th Fibonacci number".format(fib_n, fib_i))
plt.figure(figsize=(8, 2))
plt.vlines(tresillo_pulse_times + 0.005, -1, 1, color="r")
plt.vlines(tresillo_step_times + 0.005, -0.5, 0.5, color="r", alpha=0.5)
plt.yticks([])
print("Tresillo rhythm sequence: {}".format(tresillo_rhythm))
print("Tresillo pulse lengths: {}".format(tresillo_pulse_lengths))
def fibonacci_expand_pulse_lengths(pulse_lengths):
new_pulse_lengths = np.array([], dtype="int")
for pulse_length in pulse_lengths:
fib_i = find_fibonacci_index(pulse_length)
new_pulse_length = fibonacci(fib_i + 1)
new_pulse_lengths = np.hstack((new_pulse_lengths, new_pulse_length))
return new_pulse_lengths
print("Expanded tresillo pulse lengths: {}".format(fibonacci_expand_pulse_lengths(tresillo_pulse_lengths)))
# Function to scale pulse lengths along the Fibonacci sequence
#
# Note that `scale_amount` determines the direction and magnitude of the scaling.
# If `scale_amount` > 0, it corresponds to a rhythmic expansion.
# If `scale_amount` < 0, it corresponds to a rhythmic contraction.
# If `scale_amount` == 0, the original scale is maintained and no changes are made.
def fibonacci_scale_pulse_lengths(pulse_lengths, scale_amount=0):
scaled_pulse_lengths = np.array([], dtype="int")
for pulse_length in pulse_lengths:
fib_i = find_fibonacci_index(pulse_length)
# if fib_i + scale_amount < 0:
# print("ERROR: Scale amount out of bounds")
# return pulse_lengths
scaled_pulse_length = fibonacci(max(fib_i + scale_amount, 0))
scaled_pulse_lengths = np.hstack((scaled_pulse_lengths, scaled_pulse_length))
return scaled_pulse_lengths
print("Tresillo pulse lengths: {}".format(tresillo_pulse_lengths))
print("Tresillo pulse lengths expanded by 1: {}".format(fibonacci_scale_pulse_lengths(tresillo_pulse_lengths, 1)))
print("Tresillo pulse lengths expanded by 2: {}".format(fibonacci_scale_pulse_lengths(tresillo_pulse_lengths, 2)))
print("Tresillo pulse lengths contracted by 1: {}".format(fibonacci_scale_pulse_lengths(tresillo_pulse_lengths, -1)))
# Define the functions we'll use to scale rhythms along the Fibonacci sequence
def fibonacci_scale_rhythm(rhythm, scale_amount):
pulse_lengths = calculate_pulse_lengths(rhythm)
scaled_pulse_lengths = fibonacci_scale_pulse_lengths(pulse_lengths, scale_amount)
scaled_pulse_lengths = np.array([p for p in scaled_pulse_lengths if p > 0])
scaled_rhythm = pulse_lengths_to_rhythm(scaled_pulse_lengths)
return scaled_rhythm
def pulse_lengths_to_rhythm(pulse_lengths):
rhythm = np.array([], dtype="int")
for p in pulse_lengths:
pulse = np.zeros(p, dtype="int")
pulse[0] = 1
rhythm = np.hstack((rhythm, pulse))
return rhythm
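# Round-trip check (illustration only): pulse lengths -> rhythm -> pulse lengths
print(pulse_lengths_to_rhythm(np.array([3, 3, 2])))                           # [1 0 0 1 0 0 1 0]
print(calculate_pulse_lengths(pulse_lengths_to_rhythm(np.array([3, 3, 2]))))  # [3 3 2]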
# Scale tresillo rhythm by a variety of factors and plot the results
for scale_factor, color in [(0, "r"), (1, "g"), (2, "b"), (-1, "y")]:
scaled_rhythm = fibonacci_scale_rhythm(tresillo_rhythm, scale_factor)
scaled_pulse_indices = np.array([p_i for p_i,x in enumerate(scaled_rhythm) if x > 0 ])
scaled_step_indices = np.array([s_i for s_i in range(len(scaled_rhythm))])
scaled_pulse_ys = np.empty(len(scaled_pulse_indices))
scaled_pulse_ys.fill(0)
scaled_step_ys = np.empty(len(scaled_step_indices))
scaled_step_ys.fill(0)
# plt.figure(figsize=(len([scaled_rhythm])*0.5, 1))
plt.figure(figsize=(8, 1))
if scale_factor > 0:
plt.title("Tresillo rhythm expanded by {}: {}".format(abs(scale_factor), scaled_rhythm), loc="left")
elif scale_factor < 0:
plt.title("Tresillo rhythm contracted by {}: {}".format(abs(scale_factor), scaled_rhythm), loc="left")
else: # scale_factor == 0, which means rhythm is unaltered
plt.title("Tresillo rhythm: {}".format(scaled_rhythm), loc="left")
# plt.scatter(scaled_pulse_indices, scaled_pulse_ys, c=color)
# plt.scatter(scaled_step_indices, scaled_step_ys, c="k", alpha=0.5)
# plt.grid(True)
plt.vlines(scaled_pulse_indices, -1, 1, color=color)
plt.vlines(scaled_step_indices, -0.5, 0.5, color=color, alpha=0.5)
plt.xticks(np.arange(0, plt.xlim()[1], 1))
plt.yticks([])
# plt.xticks(np.linspace(0, 10, 41))
# Load input audio file
filename = "../data/humannature_30s.mp3"
y, sr = librosa.load(filename, sr=sr)
plt.figure(figsize=(16,4))
librosa.display.waveplot(y, sr=sr)
ipd.Audio(y, rate=sr)
def estimate_tempo(y, sr, start_bpm=120.0):
# Estimate tempo
onset_env = librosa.onset.onset_strength(y, sr=sr) # TODO: Compare this with librosa.beat.beat_track
tempo = librosa.beat.tempo(y, sr=sr, onset_envelope=onset_env, start_bpm=start_bpm)
return float(tempo)
tempo = estimate_tempo(y, sr)
print("Tempo (calculated): {}".format(tempo))
tempo = 93.0 # Hard-coded from prior knowledge
print("Tempo (hard-coded): {}".format(tempo))
# Calculate beat times
def calculate_beat_times(y, sr, tempo):
# Calculate params based on input
T = len(y)/float(sr) # Total audio length in seconds
seconds_per_beat = 60.0/tempo
# Start beat at first onset rather than time 0
# TODO: Let this first onset also be user-supplied for more accurate results
beat_times = np.arange(detect_first_onset_time(y, sr), T, seconds_per_beat)
return beat_times
# Detect first onset
def detect_first_onset_time(y, sr, hop_length=1024):
onset_frames = librosa.onset.onset_detect(y, sr=sr, hop_length=hop_length)
onset_times = librosa.frames_to_time(onset_frames)
return onset_times[0]
beat_times = calculate_beat_times(y, sr, tempo)
print("First 10 beat times (in seconds): {}".format(beat_times[:10]))
# Listen to beat clicks (i.e. a metronome)
beat_clicks = librosa.clicks(times=beat_times, sr=sr, length=len(y))
# Plot waveform and beats
plt.figure(figsize=(16,4))
librosa.display.waveplot(y, sr=sr)
plt.vlines(beat_times, -0.25, 0.25, color="r")
ipd.Audio(y + beat_clicks, rate=sr)
beats_per_measure = 4
# Calculate measure indices in samples
def calculate_measure_samples(y, beat_samples, beats_per_measure):
max_samples = len(y)
start_sample = beat_samples[0]
beat_interval = beat_samples[1] - beat_samples[0]
measure_interval = beat_interval * beats_per_measure
if measure_interval >= beat_interval:
return np.array(beat_samples[::beats_per_measure], dtype="int")
else:
beat_indices = np.indices([len(beat_samples)])[0]
measure_indices = np.indices([len(beat_samples)/beats_per_measure])[0]
return np.interp(measure_indices, beat_indices/beats_per_measure, beat_samples)
# Work in samples from here on
beat_samples = librosa.time_to_samples(beat_times, sr=sr)
measure_samples = calculate_measure_samples(y, beat_samples, beats_per_measure)
print("First 10 measure samples: {}".format(measure_samples[:10]))
measure_times = librosa.samples_to_time(measure_samples, sr=sr)
print("First 10 measure times (in seconds): {}".format(measure_times[:10], sr=sr))
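# Quick illustration (not in the original notebook) of the samples <-> seconds relationship
# described above: the sample rate sr tells us how many samples make up one second of audio.
print("Samples in 1 second: {}".format(librosa.time_to_samples(1.0, sr=sr)))
print("Seconds in {} samples: {}".format(sr, librosa.samples_to_time(sr, sr=sr)))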
# Add clicks, then plot and listen
plt.figure(figsize=(16, 4))
librosa.display.waveplot(y, sr=sr)
plt.vlines(measure_times, -1, 1, color="r")
plt.vlines(beat_times, -0.5, 0.5, color="r")
measure_clicks = librosa.clicks(times=measure_times, sr=sr, click_freq=3000.0, length=len(y))
ipd.Audio(y + measure_clicks + beat_clicks, rate=sr)
print("Tresillo rhythm: {}\n"
"{} pulses, {} steps".format(tresillo_rhythm, tresillo_num_pulses, tresillo_num_steps))
steps_per_measure = tresillo_num_steps
steps_per_measure
# Generating clicks for tresillo rhythm at the proper tempo and start time,
# to overlay onto an audio track
def generate_rhythm_overlay(rhythm, measure_samples, steps_per_measure, sr):
# Calculate click interval
measure_length = measure_samples[1]-measure_samples[0]
# click_tempo = tempo * (steps_per_measure/float(beats_per_measure))
# click_interval = 60.0/click_tempo
measure_length_seconds = librosa.samples_to_time(measure_length, sr=sr)
click_interval = measure_length_seconds / float(steps_per_measure)
# Generate click times for single measure
pulse_times_measure, step_times_measure = generate_rhythm_times(rhythm, click_interval)
# Generate clicks for single measure
pulse_clicks_measure, step_clicks_measure = generate_rhythm_clicks(rhythm, click_interval, sr=sr)
# Concatenate clicks and click times for all measures
pulse_times, step_times, pulse_clicks, step_clicks = np.array([]), np.array([]), np.array([]), np.array([])
for s in measure_samples:
t = float(librosa.samples_to_time(s, sr=sr))
pulse_clicks = np.hstack((pulse_clicks, pulse_clicks_measure))
step_clicks = np.hstack((step_clicks, step_clicks_measure))
pulse_times = np.hstack((pulse_times, pulse_times_measure + t))
step_times = np.hstack((step_times, step_times_measure + t))
# Offset clicks by first onset
pulse_clicks = np.hstack((np.zeros(measure_samples[0]), pulse_clicks))
step_clicks = np.hstack((np.zeros(measure_samples[0]), step_clicks))
return (pulse_times, step_times, pulse_clicks, step_clicks)
# Visualizing and hearing the result
def overlay_rhythm_onto_audio(rhythm, audio_samples, measure_samples, sr=44100, click_colors={"measure": "r",
"pulse": "r",
"step": "r"}):
# Get overlay data
pulse_times, step_times, pulse_clicks, step_clicks = generate_rhythm_overlay(rhythm,
measure_samples,
len(rhythm),
sr)
measure_times = librosa.samples_to_time(measure_samples, sr=sr)
measure_clicks = librosa.clicks(times=measure_times, sr=sr, click_freq=3000.0, length=len(audio_samples))
# Calculate max length in samples
available_lengths = [len(audio_samples), len(measure_clicks), len(pulse_clicks), len(step_clicks)]
length_samples = min(available_lengths)
# Plot original waveform
plt.figure(figsize=(16, 4))
librosa.display.waveplot(audio_samples, sr=sr, alpha=0.5)
# Plot rhythm clicks
plt.vlines(measure_times, -1, 1, color=click_colors["measure"])
plt.vlines(pulse_times, -0.5, 0.5, color=click_colors["pulse"])
plt.vlines(step_times, -0.25, 0.25, color=click_colors["step"], alpha=0.75)
# Play both clicks together with audio track
concatenated_audio_samples = ((audio_samples[:length_samples]*2.0)
+ (measure_clicks[:length_samples]*0.25)
+ (pulse_clicks[:length_samples]*0.25)
+ (step_clicks[:length_samples]*0.25))
audio_display = ipd.Audio(concatenated_audio_samples, rate=sr)
return audio_display
overlay_rhythm_onto_audio(tresillo_rhythm, y, measure_samples, sr=sr)
overlay_rhythm_onto_audio(tresillo_rhythm, y, measure_samples, sr=sr, click_colors={"measure": "r",
"pulse": "g",
"step": "b"})
original_rhythm = tresillo_rhythm
target_rhythm = fibonacci_scale_rhythm(original_rhythm, 1) # "Fibonacci scale" original rhythm by a factor of 1
print("Original rhythm: {}\n"
"Target rhythm: {}".format(original_rhythm, target_rhythm))
# Calculate ratios between pulses for two rhythm sequences
# NOTE: This assumes that both rhythm sequences have the same number of pulses!
def calculate_pulse_ratios(original_rhythm, target_rhythm):
original_pulse_lengths = calculate_pulse_lengths(original_rhythm)
target_pulse_lengths = calculate_pulse_lengths(target_rhythm)
num_pulses = min(len(original_pulse_lengths), len(target_pulse_lengths))
pulse_ratios = np.array([original_pulse_lengths[i]/float(target_pulse_lengths[i]) for i in range(num_pulses)])
return pulse_ratios
print("Pulse ratios: {}".format(calculate_pulse_ratios(original_rhythm, target_rhythm)))
# Modify a single measure
def modify_measure(data, original_rhythm, target_rhythm, stretch_method):
modified_data = np.array([])
# Define the rhythmic properties we'll use
original_num_samples = len(data)
original_num_steps = len(original_rhythm)
target_num_steps = len(target_rhythm)
# Get indices of steps for measure
original_step_interval = original_num_samples / float(original_num_steps)
original_step_indices = np.arange(0, original_num_samples, original_step_interval, dtype="int")
# Get only indices of pulses based on rhythm
original_pulse_indices = np.array([original_step_indices[i] for i in range(original_num_steps) if original_rhythm[i] > 0])
# Calculate pulse ratios
pulse_ratios = calculate_pulse_ratios(original_rhythm, target_rhythm)
# Calculate pulse lengths
original_pulse_lengths = calculate_pulse_lengths(original_rhythm)
target_pulse_lengths = calculate_pulse_lengths(target_rhythm)
# Concatenate time-stretched versions of rhythm's pulses
for i,p in enumerate(original_pulse_indices):
# Get pulse sample data; samples between current and next pulse, or if it's the final pulse,
# samples between pulse and end of audio
pulse_start = p
pulse_stop = len(data)-1
if i < len(original_pulse_indices)-1:
pulse_stop = original_pulse_indices[i+1]
pulse_samples = data[pulse_start:pulse_stop]
# Time-stretch this step based on ratio of old to new rhythm length
# TODO: Try out other methods of manipulation, such as using onset detection in addition to steps and pulses
if stretch_method == "timestretch":
pulse_samples = librosa.effects.time_stretch(pulse_samples, pulse_ratios[i])
elif stretch_method == "euclidean":
pulse_samples = euclidean_stretch(pulse_samples,
original_pulse_lengths[i],
target_pulse_lengths[min(i, len(target_pulse_lengths)-1)])
else:
print("ERROR: Invalid stretch method {}".format(stretch_method))
# Add the samples to our modified audio time series
modified_data = np.hstack((modified_data, pulse_samples))
# Time-stretch entire measure to maintain original measure length (so that it sounds more natural)
stretch_multiplier = len(modified_data)/float(len(data))
modified_data = librosa.effects.time_stretch(modified_data, stretch_multiplier)
return modified_data
# Euclidean stretch STUB
def euclidean_stretch(pulse_samples, original_pulse_length, target_pulse_length):
return pulse_samples
first_measure_data = y[measure_samples[0]:measure_samples[1]]
first_measure_modified = modify_measure(first_measure_data,
original_rhythm, target_rhythm,
stretch_method="timestretch")
ipd.Audio(first_measure_modified, rate=sr)
# Modify an entire audio track; basically just loops through a track's measures
# and calls modify_measure() on each measure
def modify_track(data, measure_samples, original_rhythm, target_rhythm, stretch_method="timestretch"):
modified_track_data = np.zeros(measure_samples[0])
modified_measure_samples = np.array([], dtype="int")
for i, sample in enumerate(measure_samples[:-1]):
modified_measure_samples = np.hstack((modified_measure_samples, len(modified_track_data)))
measure_start = measure_samples[i]
measure_stop = measure_samples[i+1]
measure_data = data[measure_start:measure_stop]
modified_measure_data = modify_measure(measure_data, original_rhythm, target_rhythm, stretch_method)
modified_track_data = np.hstack((modified_track_data, modified_measure_data))
return (modified_track_data, modified_measure_samples)
# Modify the track using naive time-stretch
y_modified, measure_samples_modified = modify_track(y, measure_samples,
original_rhythm, target_rhythm,
stretch_method="timestretch")
plt.figure(figsize=(16,4))
librosa.display.waveplot(y_modified, sr=sr)
ipd.Audio(y_modified, rate=sr)
print(calculate_pulse_ratios(original_rhythm, target_rhythm))
overlay_rhythm_onto_audio(target_rhythm, y_modified, measure_samples, sr)
print("Original rhythm: {}\n"
"Target rhythm: {}".format(original_rhythm, target_rhythm))
def euclid(a, b):
m = max(a, b)
k = min(a, b)
if k==0:
return m
else:
return euclid(k, m%k)
print("Greatest common divisor of 8 and 12 is {}".format(euclid(8, 12)))
print(np.array(bjorklund.bjorklund(pulses=3, steps=8)))
original_pulse = np.array([1,0,0])
target_pulse = np.array([1,0,0,0,0])
original_pulse_rhythm = np.ones(len(original_pulse), dtype="int")
print(original_pulse_rhythm)
target_pulse_rhythm = np.array(bjorklund.bjorklund(pulses=len(original_pulse), steps=len(target_pulse)))
print(target_pulse_rhythm)
print(fibonacci_scale_rhythm(tresillo_rhythm, -1))
print("In order to stretch pulse-to-pulse {} --> {}\n"
"we subdivide and stretch rhythms {} --> {}".format(original_pulse, target_pulse,
original_pulse_rhythm, target_pulse_rhythm))
print(calculate_pulse_ratios(original_pulse_rhythm, target_pulse_rhythm))
# Euclidean stretch for modifying a single pulse (basically time-stretching subdivisions based on Euclidean rhythms)
def euclidean_stretch(pulse_samples, original_pulse_length, target_pulse_length):
target_pulse_samples = np.array([])
# Return empty samples array if target pulse length < 1
if target_pulse_length < 1:
return target_pulse_samples
# Ensure original pulse rhythm ("opr") has length equal to or less than target_pulse_length
# ... by using target pulse length
# original_pulse_length = min(original_pulse_length, target_pulse_length)
# ... by using divisors of original pulse length
# if original_pulse_length > target_pulse_length:
# # print("WARNING: original_pulse_length {} "
# # "is greater than target_pulse_length {}".format(original_pulse_length,
# # target_pulse_length))
# for i in range(1, original_pulse_length+1):
# opl_new = int(original_pulse_length / float(i))
# if opl_new <= target_pulse_length:
# original_pulse_length = opl_new
# # print("original_pulse_length is now {}".format(original_pulse_length))
# break
# ... by using lowest common multiple as target pulse length
if original_pulse_length > target_pulse_length:
# print("Target pulse length before: {}".format(target_pulse_length))
gcd = euclid(original_pulse_length, target_pulse_length)
        lcm = (original_pulse_length*target_pulse_length) // gcd  # integer division keeps steps an int
target_pulse_length = lcm
# print("Target pulse length after: {}".format(target_pulse_length))
# original_pulse_length = target_pulse_length
opr = np.ones(original_pulse_length, dtype="int")
# Generate target pulse rhythm ("tpr")
tpr = bjorklund.bjorklund(pulses=original_pulse_length, steps=target_pulse_length)
tpr_pulse_lengths = calculate_pulse_lengths(tpr)
tpr_pulse_ratios = calculate_pulse_ratios(opr, tpr)
# Subdivide (i.e. segment) the pulse based on original pulse length
pulse_subdivision_step = int(len(pulse_samples) / float(original_pulse_length))
pulse_subdivision_indices = np.arange(0, len(pulse_samples), pulse_subdivision_step, dtype="int")
pulse_subdivision_indices = pulse_subdivision_indices[:original_pulse_length]
# Time-stretch each subdivision based on ratios
for i,si in enumerate(pulse_subdivision_indices):
subdivision_start = si
subdivision_stop = len(pulse_samples) - 1
if i < len(pulse_subdivision_indices)-1:
subdivision_stop = pulse_subdivision_indices[i+1]
pulse_subdivision_samples = pulse_samples[subdivision_start:subdivision_stop]
# Stretch the relevant subdivisions based on target pulse rhythm
pulse_subdivision_samples = librosa.effects.time_stretch(pulse_subdivision_samples, tpr_pulse_ratios[i])
# Concatenate phrase
target_pulse_samples = np.hstack((target_pulse_samples, pulse_subdivision_samples))
return target_pulse_samples
# Modify the track
y_modified, measure_samples_modified = modify_track(y, measure_samples,
original_rhythm, target_rhythm,
stretch_method="euclidean")
plt.figure(figsize=(16,4))
librosa.display.waveplot(y_modified, sr=sr)
ipd.Audio(y_modified, rate=sr)
overlay_rhythm_onto_audio(target_rhythm, y_modified, measure_samples, sr)
def fibonacci_stretch_track(audio_filepath,
sr=44100,
original_rhythm=np.array([1,0,0,1,0,0,1,0], dtype="int"),
stretch_method="euclidean",
stretch_factor=1,
target_rhythm=None,
tempo=None,
beats_per_measure=4,
hop_length=1024,
overlay_clicks=False,
render_track=True):
# Load input audio
y, sr = librosa.load(audio_filepath, sr=sr)
# Extract rhythm features from audio
if tempo is None:
tempo = estimate_tempo(y, sr)
beat_times = calculate_beat_times(y, sr, tempo)
beat_samples = librosa.time_to_samples(beat_times, sr=sr)
measure_samples = calculate_measure_samples(y, beat_samples, beats_per_measure)
# Generate target rhythm
if target_rhythm is None:
target_rhythm = fibonacci_scale_rhythm(original_rhythm, stretch_factor)
# Modify the track
y_modified, measure_samples_modified = modify_track(y, measure_samples,
original_rhythm, target_rhythm,
stretch_method="euclidean")
# Render the modified track...
if render_track:
if overlay_clicks:
return overlay_rhythm_onto_audio(target_rhythm, y_modified, measure_samples_modified, sr)
else:
plt.figure(figsize=(16,4))
librosa.display.waveplot(y_modified, sr=sr)
return ipd.Audio(y_modified, rate=sr)
# ... or return modified track and measure samples
else:
return (y_modified, measure_samples_modified)
# "Human Nature" stretched by a factor of 1 using default parameters
fibonacci_stretch_track("../data/humannature_90s.mp3",
stretch_factor=1,
tempo=93.0)
# "Human Nature" stretched by a factor of 2
fibonacci_stretch_track("../data/humannature_30s.mp3",
tempo=93.0,
stretch_factor=2,
overlay_clicks=True)
# "Chan Chan" stretched by a factor of -1
fibonacci_stretch_track("../data/chanchan_30s.mp3",
stretch_factor=-1,
tempo=78.5)
# "I'm the One" stretched by a factor of 1
fibonacci_stretch_track("../data/imtheone_cropped_chance_60s.mp3",
tempo=162,
original_rhythm=np.array([1,0,0,0,0,1,0,0]),
stretch_factor=1)
# "I'm the One" in 5/4
fibonacci_stretch_track("../data/imtheone_cropped_chance_60s.mp3",
tempo=162,
original_rhythm=np.array([1,0,0,0,0,1,0,0]),
target_rhythm=np.array([1,0,0,0,0,1,0,0,0,0]),
overlay_clicks=True)
# "Eine kleine Nachtmusik" with a swing feel
fibonacci_stretch_track("../data/einekleinenachtmusik_30s.mp3",
tempo=130,
original_rhythm=np.array([1,0,1,1]),
target_rhythm=np.array([1,0,0,1,0,1]))
# "Chan Chan" in 5/4
fibonacci_stretch_track("../data/chanchan_30s.mp3",
tempo=78.5,
original_rhythm=np.array([1,0,0,1,0,0,0,0]),
target_rhythm=np.array([1,0,0,0,0,1,0,0,0,0])) # Also interesting to try with [1,0,1]
# "Pink + White" stretched by a factor of 1
fibonacci_stretch_track("../data/pinkandwhite_30s.mp3",
beats_per_measure=6,
tempo=160,
# 6/8 to 4/4 using bassline rhythm
original_rhythm=np.array([1,1,1,1,0,0]),
target_rhythm=np.array([1,1,1,0,1,0,0,0]),
# 6/8 to 4/4 using half notes
# original_rhythm=np.array([1,0,0,1,0,0]),
# target_rhythm=np.array([1,0,0,0,1,0,0,0]),
# 6/8 to 10/8 (5/4) using Fibonacci stretch factor of 1
# original_rhythm=np.array([1,0,0,1,0,0]),
# stretch_factor=1,
overlay_clicks=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To be concrete, we'll use $\Delta t = 0.4$ s and a fundamental period $T_n=20$ s, hence a number of samples per period $N=50$, or $2.5$ samples per second.
Step2: To the values above, we associate the fundamental frequency of the DFT and the corresponding Nyquist frequency.
Step3: For comparison, we want to plot our functions also with a high sampling rate, so that they look continuous.
Step4: The function linspace generates a vector with a start and a stop value and a given number of equally spaced points.
Step5: The Nyquist circular frequency is $25\Delta\omega$.
Step6: First, we plot the harmonics with a high frequency sampling (visually continuous, that is).
Step7: Not surprisingly, the two plots are really different.
Step8: If you look at the patterns of the dots they seem, at least, very similar. What happens is aliasing!
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
Tp = 20.0
N = 50
step = Tp/N
dw = 2*pi/Tp
wny = dw*N/2
print("omega_1 =", dw)
print("Nyquist freq. =",wny,"rad/s =", wny/dw, '* omega_1')
M = 1000
t_n=linspace(0.0,Tp,N+1)
t_m=linspace(0.0,Tp,M+1)
hf = 47
lf = hf - N
c_hs_hf = cos(hf*dw*t_m)
c_hs_lf = cos(lf*dw*t_m)
c_ls_hf = cos(hf*dw*t_n)
c_ls_lf = cos(lf*dw*t_n)
figsize(12,2.4)
figure(1);plot(t_m,c_hs_hf,'-r')
ylim((-1.05,+1.05))
grid()
title(r'$\cos(%+3d\omega_1t)$, continuous in red, 50 samples in blue'%(hf,))
figure(2);plot(t_m,c_hs_lf,'-r')
ylim((-1.05,+1.05))
grid()
title(r'$\cos(%+3d\omega_1 t)$, continuous in red, 50 samples in blue'%(lf,))
figure(1) ; plot(t_m,c_hs_hf,'-r',t_n,c_ls_hf,'ob')
ylim((-1.05,+1.05));grid();
figure(2) ; plot(t_m,c_hs_lf,'-r',t_n,c_ls_lf,'ob')
ylim((-1.05,+1.05));grid();
figure(3) ; grid()
title('The two cosines, sampled at 2.5 points per second')
figure(3)
plot(t_n,c_ls_hf,'-b', linewidth=.33)
plot(t_n,c_ls_lf,'xr', markersize=8)
xticks((2,4,6,8,10,12,14,16,18,20))
ylim((-1.05,+1.05));
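# Added sanity check (illustration only, not in the original notebook): at the
# 2.5 samples-per-second rate the two harmonics coincide exactly, because
# 47*dw*t_n and -3*dw*t_n differ by 50*dw*t_n = 2*pi*n at every sample point.
print(np.allclose(c_ls_hf, c_ls_lf))  # expected: True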
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Save the trips of the correct and incorrect users to CSV
Step2: Save the trips of the correct and incorrect users to CSV, without transfers
|
<ASSISTANT_TASK:>
Python Code:
paraderos_sinlatlong = frame_2['par_subida'][frame_2['lat_subida'].isnull()& frame_2['par_subida'].notnull()].unique()
paraderos_sinlatlong
frame_2 = frame_2[frame_2.lat_subida.notnull()]
from scipy.stats.mstats import mode
f = lambda x: mode(x, axis=None)[0][0]
g = lambda x: mode(x,axis=None)[1][0]
aggregations = {
'tiempo_subida': "count" # Calculate two results for the 'network' column with a list
}
a_group = frame.groupby(['par_subida','tipo_transporte']).agg(aggregations)
another_group = frame.groupby(['par_subida','par_bajada','lat_subida','tipo_transporte']).agg(aggregations)
sorted_group = another_group.sort_values('tiempo_subida',ascending=False)
sorted_group.head(100)
sorted_group.to_csv('od.csv')
another_group.head()
groupie_group = frame.groupby(['par_subida','tipo_transporte']).agg({'tiempo_subida':"count"})
sorted_par_subidas = groupie_group.sort_values('tiempo_subida',ascending=False)
sorted_par_subidas.to_csv('origin_.csv')
sorted_par_subidas.head()
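# Added toy illustration (hypothetical data) of the groupby/agg pattern used above:
# agg({'tiempo_subida': 'count'}) simply counts the rows in each group.
import pandas as pd
demo = pd.DataFrame({'par_subida': ['A', 'A', 'B'], 'tiempo_subida': [1, 2, 3]})
print(demo.groupby('par_subida').agg({'tiempo_subida': 'count'}))  # A -> 2, B -> 1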
groupie_group = frame.groupby(['par_bajada']).agg({'tiempo_subida':"count"})
sorted_par_bajadas = groupie_group.sort_values('tiempo_subida',ascending=False)
sorted_par_bajadas.to_csv('destination_.csv')
sorted_par_bajadas.head()
sorted_par_subidas[sorted_par_subidas['tiempo_subida']>10].to_csv('origin_10.csv')
sorted_par_bajadas[sorted_par_bajadas['tiempo_subida']>10].to_csv('destination_10.csv')
with open('correct_and_wrong_indexs_alg1.pickle') as f:
correct_alg1 = pickle.load(f)
wrong_alg1 = pickle.load(f)
with open('index_id_users.pickle') as f:
users_id = pickle.load(f)
correct_alg1_ids = []
wrong_alg1_ids = []
for i in range(len(correct_alg1)):
correct_alg1_ids.append(users_id[i])
for i in range(len(wrong_alg1)):
wrong_alg1_ids.append(users_id[i])
def write_csv_grouped_data(a_frame,name,threshold):
groupie_group = a_frame.groupby(['par_bajada','tipo_transporte']).agg({'tiempo_subida':"count"})
sorted_par_bajadas = groupie_group.sort_values('tiempo_subida',ascending=False)
sorted_par_bajadas[sorted_par_bajadas['tiempo_subida']>threshold].to_csv(name+'_'+str(threshold)+'_destination.csv')
groupie_group = a_frame.groupby(['par_subida','tipo_transporte']).agg({'tiempo_subida':"count"})
sorted_par_subidas = groupie_group.sort_values('tiempo_subida',ascending=False)
sorted_par_subidas[sorted_par_subidas['tiempo_subida']>threshold].to_csv(name+'_'+str(threshold)+'_origin.csv')
return sorted_par_subidas
ff = write_csv_grouped_data(frame,'',0)
write_csv_grouped_data(frame_2,'',0)
frame
write_csv_grouped_data(frame[frame['id'].isin(wrong_alg1_ids)],'wrong_alg1',1)
write_csv_grouped_data(frame[frame['id'].isin(correct_alg1_ids)],'correct_alg1',1)
without_transbordors_frame = frame[frame['netapa']==1]
write_csv_grouped_data(without_transbordors_frame[without_transbordors_frame['id'].isin(wrong_alg1_ids)],'wrong_alg1_wo_tr',1)
write_csv_grouped_data(without_transbordors_frame[without_transbordors_frame['id'].isin(correct_alg1_ids)],'correct_alg1_wo_tr',1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Line Plots
Step2: Scatter Plots
Step3: Plot properties
Step4: Multiple plots
Step5: Save figure
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
x = np.arange(-np.pi,np.pi,0.01) # Create an array of x values from -pi to pi with 0.01 interval
y = np.sin(x) # Apply sin function on all x
plt.plot(x,y)
plt.plot(y)
x = np.arange(0,10,1) # x = 0, 1, 2, ..., 9
y = x*x # Squared x
plt.plot(x,y,'bo') # plot x and y using blue circle markers
plt.plot(x,y,'r+') # plot x and y using red plusses
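# Added example (illustration only): plt.scatter draws the same points and also
# allows per-point size and colour, which plt.plot's format strings do not.
plt.scatter(x, y, s=40, c='green', marker='^')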
x = np.arange(-np.pi,np.pi,0.001)
plt.plot(x,np.sin(x))
plt.title('y = sin(x)') # title
plt.xlabel('x (radians)') # x-axis label
plt.ylabel('y') # y-axis label
# To plot the axis label in LaTex, we can run
from matplotlib import rc
## For sans-serif font:
rc('font',**{'family':'sans-serif','sans-serif':['Helvetica']})
rc('text', usetex=True)
## for Palatino and other serif fonts use:
#rc('font',**{'family':'serif','serif':['Palatino']})
plt.plot(x,np.sin(x))
plt.title(r'T = sin($\theta$)') # title, the `r` in front of the string means raw string
plt.xlabel(r'$\theta$ (radians)') # x-axis label; LaTeX syntax should be enclosed in $...$
plt.ylabel('T') # y-axis label
x1 = np.linspace(0.0, 5.0)
x2 = np.linspace(0.0, 2.0)
y1 = np.cos(2 * np.pi * x1) * np.exp(-x1)
y2 = np.cos(2 * np.pi * x2)
plt.subplot(2, 1, 1)
plt.plot(x1, y1, '.-')
plt.title('Plot 2 graphs at the same time')
plt.ylabel('Amplitude (Damped)')
plt.subplot(2, 1, 2)
plt.plot(x2, y2, '.-')
plt.xlabel('time (s)')
plt.ylabel('Amplitude (Undamped)')
plt.plot(x,np.sin(x))
plt.savefig('plot.pdf')
plt.savefig('plot.png')
# To load image into this Jupyter notebook
from IPython.display import Image
Image("plot.png")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 0. General note
Step2: 3. Compare
Step3: <img src='./tables/Dorogokupets2007_Pt.png'>
|
<ASSISTANT_TASK:>
Python Code:
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import numpy as np
from uncertainties import unumpy as unp
import pytheos as eos
eta = np.linspace(1., 0.70, 7)
print(eta)
dorogokupets2007_pt = eos.platinum.Dorogokupets2007()
help(dorogokupets2007_pt)
dorogokupets2007_pt.print_equations()
dorogokupets2007_pt.print_equations()
dorogokupets2007_pt.print_parameters()
v0 = 60.37930856339099
dorogokupets2007_pt.three_r
v = v0 * (eta)
temp = 3000.
p = dorogokupets2007_pt.cal_p(v, temp * np.ones_like(v))
print('for T = ', temp)
for eta_i, p_i in zip(eta, p):
print("{0: .3f} {1: .2f}".format(eta_i, p_i))
v = dorogokupets2007_pt.cal_v(p, temp * np.ones_like(p), min_strain=0.6)
print(1.-(v/v0))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
a = np.array([1,2,3,4])
b = np.array([5, 4, 3, 2])
result = np.correlate(a, np.hstack((b[1:], b)), mode='valid')
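# Added sanity check (illustration only): a plain-Python loop over circular lags
# 1..len(a) reproduces the same values, which is what the one-liner above computes.
expected = [sum(a[i] * b[(i + k) % len(a)] for i in range(len(a)))
            for k in range(1, len(a) + 1)]
print(result, expected)  # both contain 36, 38, 36, 30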
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here the fmt is a format code controlling the appearance of lines and points, and has the same syntax as the shorthand used in plt.plot, outlined in Simple Line Plots and Simple Scatter Plots.
Step2: In addition to these options, you can also specify horizontal errorbars (xerr), one-sided errorbars, and many other variants.
Step3: We now have xfit, yfit, and dyfit, which sample the continuous fit to our data.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import numpy as np
x = np.linspace(0, 10, 50)
dy = 0.8
y = np.sin(x) + dy * np.random.randn(50)
plt.errorbar(x, y, yerr=dy, fmt='.k');
plt.errorbar(x, y, yerr=dy, fmt='o', color='black',
ecolor='lightgray', elinewidth=3, capsize=0);
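# Added example (illustration only): horizontal errorbars work the same way via xerr.
dx = 0.3
plt.errorbar(x, y, xerr=dx, fmt='o', color='black', ecolor='lightgray');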
from sklearn.gaussian_process import GaussianProcess
# define the model and draw some data
model = lambda x: x * np.sin(x)
xdata = np.array([1, 3, 5, 6, 8])
ydata = model(xdata)
# Compute the Gaussian process fit
gp = GaussianProcess(corr='cubic', theta0=1e-2, thetaL=1e-4, thetaU=1E-1,
random_start=100)
gp.fit(xdata[:, np.newaxis], ydata)
xfit = np.linspace(0, 10, 1000)
yfit, MSE = gp.predict(xfit[:, np.newaxis], eval_MSE=True)
dyfit = 2 * np.sqrt(MSE) # 2*sigma ~ 95% confidence region
# Visualize the result
plt.plot(xdata, ydata, 'or')
plt.plot(xfit, yfit, '-', color='gray')
plt.fill_between(xfit, yfit - dyfit, yfit + dyfit,
color='gray', alpha=0.2)
plt.xlim(0, 10);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The default end point is simply there to "document" the API if the root endpoint of the API is called with a get method. This will show the JSON objects expected by the post methods.
Step2: The /LumbarSensorReading endpoint expects a POST method with a JSON object in the data payload. A unique reading identifier and the current time attributes are generated and added to the JSON packet before sending.
Step3: The /LumbarSensorTraining endpoint expects a POST method with a JSON object in the data payload. A unique reading identifier and the current time attributes are generated and added to the JSON packet before sending.
|
<ASSISTANT_TASK:>
Python Code:
from flask import Flask, Response, request, json, render_template
from kafka import KafkaProducer
import uuid
import datetime
app = Flask(__name__)
producer = KafkaProducer(bootstrap_servers='localhost:9092')
# Default end point
@app.route('/', methods = ['GET'])
def api_root():
data = {
"title":"IOT Back Brace REST API",
"sensorReading":{
"deviceID":"5d681c54e66ff4a5654e55c6d5a5b54",
"metricTypeID":6,
"uomID":4,
"actual":{"y":18,"p":17.50,"r":120},
"setPoints":{"y":25,"p":45,"r":10}
},
"trainingReading":{
"deviceID":"5d681c54e66ff4a5654e55c6d5a5b54",
"metricTypeID":6,
"uomID":4,
"currentPostureID":2,
"actual":{"y":18,"p":17.50,"r":120},
"setPoints":{"y":25,"p":45,"r":100}
}
}
try:
print(request.headers)
return render_template("index.html", data = data )
except Exception, e:
return str(e)
# End point for posting sensor readings.
@app.route('/LumbarSensorReading', methods = ['POST'])
def post_readings():
if request.headers['Content-Type'] == 'application/json':
# Create readingTime
readingTime = datetime.datetime.now().isoformat()
# Create readingID
readingID = str(uuid.uuid4())
# Add these to json object
request.json['readingTime'] = readingTime
request.json['readingID'] = readingID
# Send to Kafka producer
producer.send('LumbarSensorReadings', json.dumps(request.json))
return "JSON Message: " + json.dumps(request.json)
else:
return "415 Unsupported Media Type"
# End point for training the Machine Learning Model
@app.route('/LumbarSensorTraining', methods = ['POST'])
def post_trainingData():
if request.headers['Content-Type'] == 'application/json':
# Create readingTime
readingTime = datetime.datetime.now().isoformat()
# Create readingID
readingID = str(uuid.uuid4())
# Add these to json object
request.json['readingTime'] = readingTime
request.json['readingID'] = readingID
# Send to Kafka producer
producer.send('LumbarSensorTraining', json.dumps(request.json))
print(request.headers)
return "JSON Message: " + json.dumps(request.json)
else:
return "415 Unsupported Media Type"
if __name__ == '__main__':
app.run(host="0.0.0.0",debug=1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To paraphrase two Georges, "All models are wrong, but some models are more wrong than others."
Step2: When this function is called, it modifies bikeshare. As long as there is at least one bike at Olin, this works as expected.
Step3: The name of the parameter is state rather than bikeshare as a reminder that the function can work with any State object, not just this one.
Step4: Again, the argument we provide gets assigned to the parameter, so this function call updates the object we pass in.
Step5: And update them independently
Step7: Changes in bikeshare1 do not affect bikeshare2, and vice versa. So
Step8: Docstrings follow a conventional format
Step9: The first line checks whether the number of bikes at Wellesley is zero. If so, it uses a return statement, which causes the function to end immediately, without running the rest of the statements. So if there are no bikes at Wellesley, we "return" from bike_to_olin without changing the state.
Step10: The state of the system should be unchanged.
Step11: No more negative bikes (at least at Wellesley).
Step12: On the other hand, the following statement checks whether x is 5 and
Step13: You can use the equals operator in an if statement, like this
Step14: If you make a mistake and use = in an if statement, like this
Step15: If a customer arrives at the Wellesley station and finds no bike
Step16: We can test it by calling bike_to_olin
Step17: There should be 12 bikes at Olin, no bikes at Wellesley, and one unhappy customer.
Step22: Looks good!
Step24: Exercise
Step25: Exercise
|
<ASSISTANT_TASK:>
Python Code:
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
def bike_to_wellesley():
bikeshare.olin -= 1
bikeshare.wellesley += 1
def bike_to_wellesley(state):
state.olin -= 1
state.wellesley += 1
from modsim import State
bikeshare = State(olin=10, wellesley=2)
bike_to_wellesley(bikeshare)
bikeshare1 = State(olin=10, wellesley=2)
bikeshare2 = State(olin=2, wellesley=10)
bike_to_wellesley(bikeshare1)
bike_to_wellesley(bikeshare2)
def bike_to_olin(state):
Move one bike from Wellesley to Olin.
state: bikeshare State object
# We decrease one state variable and increase the
# other, so the total number of bikes is unchanged.
state.wellesley -= 1
state.olin += 1
def bike_to_olin(state):
if state.wellesley == 0:
return
state.wellesley -= 1
state.olin += 1
bikeshare = State(olin=12, wellesley=0)
bike_to_olin(bikeshare)
bikeshare
x = 5
x == 5
if x == 5:
print('yes, x is 5')
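# Added note: a single "=" inside the condition would be an assignment, not a
# comparison, and Python rejects it with a SyntaxError -- e.g. `if x = 5:` will not run.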
def bike_to_olin(state):
if state.wellesley == 0:
state.wellesley_empty += 1
return
state.wellesley -= 1
state.olin += 1
bikeshare = State(olin=12, wellesley=0,
wellesley_empty=0)
bike_to_olin(bikeshare)
bikeshare
from modsim import TimeSeries, decorate
def run_simulation(state, p1, p2, num_steps):
    """Simulate the given number of time steps.

    state: State object
    p1: probability of an Olin->Wellesley customer arrival
    p2: probability of a Wellesley->Olin customer arrival
    num_steps: number of time steps
    """
results = TimeSeries()
results[0] = state.olin
for i in range(num_steps):
step(state, p1, p2)
results[i+1] = state.olin
results.plot(label='Olin')
decorate(title='Olin-Wellesley Bikeshare',
xlabel='Time step (min)',
ylabel='Number of bikes')
from modsim import flip
def step(state, p1, p2):
    """Simulate one time step.

    state: bikeshare State object
    p1: probability of an Olin->Wellesley ride
    p2: probability of a Wellesley->Olin ride
    """
if flip(p1):
bike_to_wellesley(state)
if flip(p2):
bike_to_olin(state)
def bike_to_olin(state):
    """Move one bike from Wellesley to Olin.

    state: bikeshare State object
    """
if state.wellesley == 0:
state.wellesley_empty += 1
return
state.wellesley -= 1
state.olin += 1
def bike_to_wellesley(state):
    """Move one bike from Olin to Wellesley.

    state: bikeshare State object
    """
state.olin -= 1
state.wellesley += 1
# Solution
def bike_to_wellesley(state):
    """Move one bike from Olin to Wellesley.

    state: bikeshare State object
    """
if state.olin == 0:
state.olin_empty += 1
return
state.olin -= 1
state.wellesley += 1
# Solution
bikeshare = State(olin=0, wellesley=12,
olin_empty=0, wellesley_empty=0)
# Solution
bike_to_wellesley(bikeshare)
# Solution
bikeshare
# Solution
bikeshare = State(olin=10, wellesley=2,
olin_empty=0, wellesley_empty=0)
run_simulation(bikeshare, 0.3, 0.2, 60)
bikeshare
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read metrics from a Prometheus server
Step2: Install and set up CoreDNS and Prometheus
Step3: Next, set up the Prometheus server and use Prometheus to scrape the CoreDNS metrics exposed on port 9153 above. The prometheus.yml configuration file is downloaded as well.
Step4: To generate some activity, you can use the dig command to issue a few DNS queries against the CoreDNS server that was set up.
Step5: The CoreDNS server's metrics are now being scraped by the Prometheus server and are ready to be used from TensorFlow.
Step6: Let's take a closer look at the dataset spec.
Step7: The created Dataset is ready to be passed directly to tf.keras for training or inference.
Step8: The dataset we use is 10 samples of the CoreDNS "go_memstats_sys_bytes" value. However, because a sliding window with window = n_steps and shift = 1 is formed, additional samples are needed (for any two consecutive elements, the first is taken as x and the second as y). The total is 10 + n_steps - 1 + 1 = 12 seconds.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
try:
%tensorflow_version 2.x
except Exception:
pass
!pip install tensorflow-io
from datetime import datetime
import tensorflow as tf
import tensorflow_io as tfio
!curl -s -OL https://github.com/coredns/coredns/releases/download/v1.6.7/coredns_1.6.7_linux_amd64.tgz
!tar -xzf coredns_1.6.7_linux_amd64.tgz
!curl -s -OL https://raw.githubusercontent.com/tensorflow/io/master/docs/tutorials/prometheus/Corefile
!cat Corefile
# Run `./coredns` as a background process.
# IPython doesn't recognize `&` in inline bash cells.
get_ipython().system_raw('./coredns &')
!curl -s -OL https://github.com/prometheus/prometheus/releases/download/v2.15.2/prometheus-2.15.2.linux-amd64.tar.gz
!tar -xzf prometheus-2.15.2.linux-amd64.tar.gz --strip-components=1
!curl -s -OL https://raw.githubusercontent.com/tensorflow/io/master/docs/tutorials/prometheus/prometheus.yml
!cat prometheus.yml
# Run `./prometheus` as a background process.
# IPython doesn't recognize `&` in inline bash cells.
get_ipython().system_raw('./prometheus &')
!sudo apt-get install -y -qq dnsutils
!dig @127.0.0.1 -p 9053 demo1.example.org
!dig @127.0.0.1 -p 9053 demo2.example.org
dataset = tfio.experimental.IODataset.from_prometheus(
"coredns_dns_request_count_total", 5, endpoint="http://localhost:9090")
print("Dataset Spec:\n{}\n".format(dataset.element_spec))
print("CoreDNS Time Series:")
for (time, value) in dataset:
# time is milli second, convert to data time:
time = datetime.fromtimestamp(time // 1000)
print("{}: {}".format(time, value['coredns']['localhost:9153']['coredns_dns_request_count_total']))
dataset = tfio.experimental.IODataset.from_prometheus(
"go_memstats_gc_sys_bytes", 5, endpoint="http://localhost:9090")
print("Time Series CoreDNS/Prometheus Comparision:")
for (time, value) in dataset:
# time is milli second, convert to data time:
time = datetime.fromtimestamp(time // 1000)
print("{}: {}/{}".format(
time,
value['coredns']['localhost:9153']['go_memstats_gc_sys_bytes'],
value['prometheus']['localhost:9090']['go_memstats_gc_sys_bytes']))
n_steps, n_features = 2, 1
simple_lstm_model = tf.keras.models.Sequential([
tf.keras.layers.LSTM(8, input_shape=(n_steps, n_features)),
tf.keras.layers.Dense(1)
])
simple_lstm_model.compile(optimizer='adam', loss='mae')
n_samples = 10
dataset = tfio.experimental.IODataset.from_prometheus(
"go_memstats_sys_bytes", n_samples + n_steps - 1 + 1, endpoint="http://localhost:9090")
# take go_memstats_gc_sys_bytes from coredns job
dataset = dataset.map(lambda _, v: v['coredns']['localhost:9153']['go_memstats_sys_bytes'])
# find the max value and scale the value to [0, 1]
v_max = dataset.reduce(tf.constant(0.0, tf.float64), tf.math.maximum)
dataset = dataset.map(lambda v: (v / v_max))
# expand the dimension by 1 to fit n_features=1
dataset = dataset.map(lambda v: tf.expand_dims(v, -1))
# take a sliding window
dataset = dataset.window(n_steps, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda d: d.batch(n_steps))
# the first value is x and the next value is y, only take 10 samples
x = dataset.take(n_samples)
y = dataset.skip(1).take(n_samples)
dataset = tf.data.Dataset.zip((x, y))
# pass the final dataset to model.fit for training
simple_lstm_model.fit(dataset.batch(1).repeat(10), epochs=5, steps_per_epoch=10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preliminary Conclusions
Step2: Looking at the distribution of data points, it looks like it will be difficult for us to resolve distances less than 24 ft.
|
<ASSISTANT_TASK:>
Python Code:
# Let's plot the data for each of the devices, from each of the base stations.
dfs = [(origin, 'origin'), (eastern, 'eastern'), (southern, 'southern')]
def plot_signal_vs_distance(device):
fig = plt.figure(figsize=(9,3))
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
axes = [ax1, ax2, ax3]
for i, (df, title) in enumerate(dfs):
df.dropna().query('tracker_id=="{0}"'.format(device)).plot(x='distance', y='dB', kind='scatter', ax=axes[i])
axes[i].set_title(title)
plt.tight_layout()
device = '68:9E:19:11:A6:DB'
plot_signal_vs_distance(device)
device = 'F4:B8:5E:C4:56:22'
plot_signal_vs_distance(device)
# In order to have more data put together, I will concatenate the 3 data frames.
device = '68:9E:19:11:A6:DB'
# device = 'F4:B8:5E:C4:56:22'
all_data = origin.dropna().append(eastern.dropna()).append(southern.dropna()).query('tracker_id=="{0}"'.format(device))
all_data.plot(x='distance', y='dB', kind='scatter', title='{0} data points'.format(len(all_data)))
sns.violinplot(x=all_data['distance'], y=all_data['dB'])
import pymc3 as pm
with pm.Model() as model:
# Hyperparameters for A and B
# mu_a = pm.Normal('mu_A', mu=0, sd=10)
# sig_a = pm.Uniform('sigma_A', lower=0, upper=10)
# mu_b = pm.Normal('mu_B', mu=0, sd=10)
# sig_b = pm.Uniform('sigma_B', lower=0, upper=10)
# Estimate the intercepts
A = pm.Normal('A', mu=-0.2, sd=3)
B = pm.Normal('B', mu=-50, sd=3)
# Error term
err = pm.Normal('error', mu=-30, sd=10)
# Model prediction:
dB = A * all_data['distance'] + B + err
# dB likelihood
db_like = pm.Normal('dB', mu=dB, sd=10, observed=all_data['dB'])
with model:
start = pm.find_MAP()
step = pm.NUTS()
trace = pm.sample(10000, step)
pm.traceplot(trace)
pm.summary(trace)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
def numberOfLines(S, widths):
    # Count how many lines are needed and the width used on the last line,
    # where each line holds at most 10 units of width.
    if S == "":
        return 0, 0
    lines, width = 1, 0
    for c in S:
        w = widths[ord(c) - ord('a')]
        width += w
        if width > 10:
            lines += 1
            width = w
    return lines, width
S = "bbbcccdddaa"
Widths = [4, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
print(numberOfLines(S, Widths))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The solution above raises two questions. First, how do we know whether the inverse matrix exists? Second, what happens if the number of unknowns differs from the number of equations?
Step2: The determinant and the inverse matrix are related as follows.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
A = np.array([[1, 3, -2], [3, 5, 6], [2, 4, 3]])
A
b = np.array([[5], [7], [8]])
b
Ainv = np.linalg.inv(A)
Ainv
x = np.dot(Ainv, b)
x
np.dot(A, x) - b
x, resid, rank, s = np.linalg.lstsq(A, b)
x
np.random.seed(0)
A = np.random.randn(3, 3)
A
np.linalg.det(A)
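# Added illustration: a nonzero determinant is what guarantees the inverse exists;
# a singular matrix (determinant 0) makes np.linalg.inv raise LinAlgError.
C = np.array([[1.0, 2.0], [2.0, 4.0]])  # second row is twice the first
print(np.linalg.det(C))                 # 0.0 (up to rounding)
try:
    np.linalg.inv(C)
except np.linalg.LinAlgError as e:
    print("inv failed:", e)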
A = np.array([[2, 0], [-1, 1], [0, 2]])
A
b = np.array([[1], [0], [-1]])
b
Apinv = np.dot(np.linalg.inv(np.dot(A.T, A)), A.T)
Apinv
x = np.dot(Apinv, b)
x
np.dot(A, x) - b
x, resid, rank, s = np.linalg.lstsq(A, b)
x
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: https
Step2: Now we run the package to download the datasets.
Step3: Loading the Data
Step4: Now we begin the Exploratory Data Analysis.
Step5: 2- What Is the Number of Titles Per Genre?
Step6: We use CountVectorizer to convert the genres column into a one-hot encoded vector in order to count the number of movies in each genre.
Step7: 3- What Is the Median Rating of Movies Per Genre?
Step8: 4- What Is the Median Rating of Movies by Release Year?
Step9: 5- What Is the Number of Rated Movies Per Genre by Release Year?
Step10: 6- Which Movie Has the Longest Runtime? Compute the Percentiles.
Step11: 'Logistics' is the longest movie in history (so far), at 857 hours (51,420 minutes). Directed by Daniel Andersson and Erika Magnusson, the film follows the entire life cycle of a pedometer (a step-tracking device) in reverse chronological order.
Step12: 8- What Is the Number of Movies Produced Per Country?
Step13: 9- What Are the Top 10 Best Movies?
Step14: 10- What Are the Top 10 Worst Movies?
|
<ASSISTANT_TASK:>
Python Code:
# Versão da Linguagem Python
from platform import python_version
print('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())
# Instala o pacote
!pip install -q imdb-sqlite
# Instala o pacote
# https://pypi.org/project/pycountry/
!pip install -q pycountry
# Imports
import re
import time
import sqlite3
import pycountry
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib import cm
from sklearn.feature_extraction.text import CountVectorizer
import warnings
warnings.filterwarnings("ignore")
sns.set_theme(style = "whitegrid")
%%time
!imdb-sqlite
# Conecta no banco de dados
conn = sqlite3.connect("imdb.db")
# Extrai a lista de tabelas
tabelas = pd.read_sql_query("SELECT NAME AS 'Table_Name' FROM sqlite_master WHERE type = 'table'", conn)
# Tipo do objeto
type(tabelas)
# Visualiza o resultado
tabelas.head()
# Vamos converter o dataframe em uma lista
tabelas = tabelas["Table_Name"].values.tolist()
# Vamos percorrer a lista de tabelas no banco de dados e extrair o esquema de cada uma
for tabela in tabelas:
consulta = "PRAGMA TABLE_INFO({})".format(tabela)
resultado = pd.read_sql_query(consulta, conn)
print("Esquema da tabela:", tabela)
display(resultado)
print("-"*100)
print("\n")
# Cria a consulta SQL
consulta1 = '''SELECT type, COUNT(*) AS COUNT FROM titles GROUP BY type'''
# Extrai o resultado
resultado1 = pd.read_sql_query(consulta1, conn)
# Visualiza o resultado
display(resultado1)
# Vamos calcular o percentual para cada tipo
resultado1['percentual'] = (resultado1['COUNT'] / resultado1['COUNT'].sum()) * 100
# Visualiza o resultado
display(resultado1)
# Vamos criar um gráfico com apenas 4 categorias:
# As 3 categorias com mais títulos e 1 categoria com todo o restante
# Cria um dicionário vazio
others = {}
# Filtra o percentual em 5% e soma o total
others['COUNT'] = resultado1[resultado1['percentual'] < 5]['COUNT'].sum()
# Grava o percentual
others['percentual'] = resultado1[resultado1['percentual'] < 5]['percentual'].sum()
# Ajusta o nome
others['type'] = 'others'
# Visualiza
others
# Filtra o dataframe de resultado
resultado1 = resultado1[resultado1['percentual'] > 5]
# Append com o dataframe de outras categorias
resultado1 = resultado1.append(others, ignore_index = True)
# Ordena o resultado
resultado1 = resultado1.sort_values(by = 'COUNT', ascending = False)
# Visualiza
resultado1.head()
# Ajusta os labels
labels = [str(resultado1['type'][i])+' '+'['+str(round(resultado1['percentual'][i],2)) +'%'+']' for i in resultado1.index]
# Plot
# Mapa de cores
# https://matplotlib.org/stable/tutorials/colors/colormaps.html
cs = cm.Set3(np.arange(100))
# Cria a figura
f = plt.figure()
# Pie Plot
plt.pie(resultado1['COUNT'], labeldistance = 1, radius = 3, colors = cs, wedgeprops = dict(width = 0.8))
plt.legend(labels = labels, loc = 'center', prop = {'size':12})
plt.title("Distribuição de Títulos", loc = 'Center', fontdict = {'fontsize':20,'fontweight':20})
plt.show()
# Cria a consulta SQL
consulta2 = '''SELECT genres, COUNT(*) FROM titles WHERE type = 'movie' GROUP BY genres'''
# Resultado
resultado2 = pd.read_sql_query(consulta2, conn)
# Visualiza o resultado
display(resultado2)
# Converte as strings para minúsculo
resultado2['genres'] = resultado2['genres'].str.lower().values
# Remove valores NA (ausentes)
temp = resultado2['genres'].dropna()
# Vamos criar um vetor usando expressão regular para filtrar as strings
# https://docs.python.org/3.8/library/re.html
padrao = '(?u)\\b[\\w-]+\\b'
# https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html
vetor = CountVectorizer(token_pattern = padrao, analyzer = 'word').fit(temp)
type(vetor)
# Aplica a vetorização ao dataset sem valores NA
bag_generos = vetor.transform(temp)
type(bag_generos)
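# Added toy illustration (hypothetical strings): each row becomes token counts,
# which for genre strings acts as a one-hot genre indicator.
exemplo = CountVectorizer(token_pattern = padrao, analyzer = 'word').fit(['drama,comedy', 'drama'])
print(exemplo.get_feature_names())                             # ['comedy', 'drama']
print(exemplo.transform(['drama,comedy', 'drama']).todense())  # [[1 1], [0 1]]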
# Retorna gêneros únicos
generos_unicos = vetor.get_feature_names()
# Cria o dataframe de gêneros
generos = pd.DataFrame(bag_generos.todense(), columns = generos_unicos, index = temp.index)
# Visualiza
generos.info()
# Drop da coluna n
generos = generos.drop(columns = 'n', axis = 0)
# Calcula o percentual
generos_percentual = 100 * pd.Series(generos.sum()).sort_values(ascending = False) / generos.shape[0]
# Visualiza
generos_percentual.head(10)
# Plot
plt.figure(figsize = (16,8))
sns.barplot(x = generos_percentual.values, y = generos_percentual.index, orient = "h", palette = "terrain")
plt.ylabel('Gênero')
plt.xlabel("\nPercentual de Filmes (%)")
plt.title('\nNúmero (Percentual) de Títulos Por Gênero\n')
plt.show()
# Consulta SQL
consulta3 = '''
SELECT rating, genres FROM
ratings JOIN titles ON ratings.title_id = titles.title_id
WHERE premiered <= 2022 AND type = 'movie'
'''
# Resultado
resultado3 = pd.read_sql_query(consulta3, conn)
# Visualiza
display(resultado3)
# Vamos criar uma função para retornar os genêros
def retorna_generos(df):
df['genres'] = df['genres'].str.lower().values
temp = df['genres'].dropna()
vetor = CountVectorizer(token_pattern = '(?u)\\b[\\w-]+\\b', analyzer = 'word').fit(temp)
generos_unicos = vetor.get_feature_names()
generos_unicos = [genre for genre in generos_unicos if len(genre) > 1]
return generos_unicos
# Aplica a função
generos_unicos = retorna_generos(resultado3)
# Visualiza
generos_unicos
# Cria listas vazias
genero_counts = []
genero_ratings = []
# Loop
for item in generos_unicos:
# Retorna a contagem de filmes por gênero
consulta = 'SELECT COUNT(rating) FROM ratings JOIN titles ON ratings.title_id=titles.title_id WHERE genres LIKE '+ '\''+'%'+item+'%'+'\' AND type=\'movie\''
resultado = pd.read_sql_query(consulta, conn)
genero_counts.append(resultado.values[0][0])
# Retorna a avaliação de filmes por gênero
consulta = 'SELECT rating FROM ratings JOIN titles ON ratings.title_id=titles.title_id WHERE genres LIKE '+ '\''+'%'+item+'%'+'\' AND type=\'movie\''
resultado = pd.read_sql_query(consulta, conn)
genero_ratings.append(np.median(resultado['rating']))
# Prepara o dataframe final
df_genero_ratings = pd.DataFrame()
df_genero_ratings['genres'] = generos_unicos
df_genero_ratings['count'] = genero_counts
df_genero_ratings['rating'] = genero_ratings
# Visualiza
df_genero_ratings.head(20)
# Drop do índice 18 (news)
# Não queremos essa informação como gênero
df_genero_ratings = df_genero_ratings.drop(index = 18)
# Ordena o resultado
df_genero_ratings = df_genero_ratings.sort_values(by = 'rating', ascending = False)
# Plot
# Figura
plt.figure(figsize = (16,10))
# Barplot
sns.barplot(y = df_genero_ratings.genres, x = df_genero_ratings.rating, orient = "h")
# Textos do gráfico
for i in range(len(df_genero_ratings.index)):
plt.text(4.0,
i + 0.25,
str(df_genero_ratings['count'][df_genero_ratings.index[i]]) + " filmes")
plt.text(df_genero_ratings.rating[df_genero_ratings.index[i]],
i + 0.25,
round(df_genero_ratings["rating"][df_genero_ratings.index[i]],2))
plt.ylabel('Gênero')
plt.xlabel('Mediana da Avaliação')
plt.title('\nMediana de Avaliação Por Gênero\n')
plt.show()
# Consulta SQL
consulta4 = '''
SELECT rating AS Rating, premiered FROM
ratings JOIN titles ON ratings.title_id = titles.title_id
WHERE premiered <= 2022 AND type = 'movie'
ORDER BY premiered
'''
# Resultado
resultado4 = pd.read_sql_query(consulta4, conn)
display(resultado4)
# Calculamos a mediana ao longo do tempo (anos)
ratings = []
for year in set(resultado4['premiered']):
ratings.append(np.median(resultado4[resultado4['premiered'] == year]['Rating']))
type(ratings)
ratings[1:10]
# Lista de anos
anos = list(set(resultado4['premiered']))
anos[1:10]
# Plot
plt.figure(figsize = (16,8))
plt.plot(anos, ratings)
plt.xlabel('\nAno')
plt.ylabel('Mediana de Avaliação')
plt.title('\nMediana de Avaliação dos Filmes Em Relação ao Ano de Estréia\n')
plt.show()
# Consulta SQL
consulta5 = '''SELECT genres FROM titles '''
# Resultado
resultado5 = pd.read_sql_query(consulta5, conn)
display(resultado5)
# Retorna gêneros únicos
generos_unicos = retorna_generos(resultado5)
# Visualiza o resultado
generos_unicos
# Agora fazemos a contagem
genero_count = []
for item in generos_unicos:
consulta = 'SELECT COUNT(*) COUNT FROM titles WHERE genres LIKE '+ '\''+'%'+item+'%'+'\' AND type=\'movie\' AND premiered <= 2022'
resultado = pd.read_sql_query(consulta, conn)
genero_count.append(resultado['COUNT'].values[0])
# Prepara o dataframe
df_genero_count = pd.DataFrame()
df_genero_count['genre'] = generos_unicos
df_genero_count['Count'] = genero_count
# Calcula os top 5
df_genero_count = df_genero_count[df_genero_count['genre'] != 'n']
df_genero_count = df_genero_count.sort_values(by = 'Count', ascending = False)
top_generos = df_genero_count.head()['genre'].values
# Plot
# Figura
plt.figure(figsize = (16,8))
# Loop e Plot
for item in top_generos:
consulta = 'SELECT COUNT(*) Number_of_movies, premiered Year FROM titles WHERE genres LIKE '+ '\''+'%'+item+'%'+'\' AND type=\'movie\' AND Year <=2022 GROUP BY Year'
resultado = pd.read_sql_query(consulta, conn)
plt.plot(resultado['Year'], resultado['Number_of_movies'])
plt.xlabel('\nAno')
plt.ylabel('Número de Filmes Avaliados')
plt.title('\nNúmero de Filmes Avaliados Por Gênero Em Relação ao Ano de Estréia\n')
plt.legend(labels = top_generos)
plt.show()
# Consulta SQL
consulta6 = '''
SELECT runtime_minutes Runtime
FROM titles
WHERE type = 'movie' AND Runtime != 'NaN'
'''
# Resultado
resultado6 = pd.read_sql_query(consulta6, conn)
display(resultado6)
# Loop para cálculo dos percentis
for i in range(101):
val = i
perc = round(np.percentile(resultado6['Runtime'].values, val), 2)
print('{} percentil da duração (runtime) é: {}'.format(val, perc))
# Refazendo a consulta e retornando o filme com maior duração
consulta6 = '''
SELECT runtime_minutes Runtime, primary_title
FROM titles
WHERE type = 'movie' AND Runtime != 'NaN'
ORDER BY Runtime DESC
LIMIT 1
'''
resultado6 = pd.read_sql_query(consulta6, conn)
resultado6
# Consulta SQL
consulta7 = '''
SELECT AVG(runtime_minutes) Runtime, genres
FROM titles
WHERE type = 'movie'
AND runtime_minutes != 'NaN'
GROUP BY genres
'''
# Resultado
resultado7 = pd.read_sql_query(consulta7, conn)
# Retorna gêneros únicos
generos_unicos = retorna_generos(resultado7)
# Visualiza
generos_unicos
# Calcula duração por gênero
genero_runtime = []
for item in generos_unicos:
consulta = 'SELECT runtime_minutes Runtime FROM titles WHERE genres LIKE '+ '\''+'%'+item+'%'+'\' AND type=\'movie\' AND Runtime!=\'NaN\''
resultado = pd.read_sql_query(consulta, conn)
genero_runtime.append(np.median(resultado['Runtime']))
# Prepara o dataframe
df_genero_runtime = pd.DataFrame()
df_genero_runtime['genre'] = generos_unicos
df_genero_runtime['runtime'] = genero_runtime
# Remove índice 18 (news)
df_genero_runtime = df_genero_runtime.drop(index = 18)
# Ordena os dados
df_genero_runtime = df_genero_runtime.sort_values(by = 'runtime', ascending = False)
# Plot
# Tamanho da figura
plt.figure(figsize = (16,8))
# Barplot
sns.barplot(y = df_genero_runtime.genre, x = df_genero_runtime.runtime, orient = "h")
# Loop
for i in range(len(df_genero_runtime.index)):
plt.text(df_genero_runtime.runtime[df_genero_runtime.index[i]],
i + 0.25,
round(df_genero_runtime["runtime"][df_genero_runtime.index[i]], 2))
plt.ylabel('Gênero')
plt.xlabel('\nMediana de Tempo de Duração (Minutos)')
plt.title('\nRelação Entre Duração e Gênero\n')
plt.show()
# Consulta SQL
consulta8 = '''
SELECT region, COUNT(*) Number_of_movies FROM
akas JOIN titles ON
akas.title_id = titles.title_id
WHERE region != 'None'
AND type = \'movie\'
GROUP BY region
'''
# Resultado
resultado8 = pd.read_sql_query(consulta8, conn)
display(resultado8)
# Shape
resultado8.shape
# Número de linhas
resultado8.shape[0]
# Listas auxiliares
nomes_paises = []
contagem = []
# Loop para obter o país de acordo com a região
for i in range(resultado8.shape[0]):
try:
coun = resultado8['region'].values[i]
nomes_paises.append(pycountry.countries.get(alpha_2 = coun).name)
contagem.append(resultado8['Number_of_movies'].values[i])
except:
continue
# Prepara o dataframe
df_filmes_paises = pd.DataFrame()
df_filmes_paises['country'] = nomes_paises
df_filmes_paises['Movie_Count'] = contagem
# Ordena o resultado
df_filmes_paises = df_filmes_paises.sort_values(by = 'Movie_Count', ascending = False)
# Visualiza
df_filmes_paises.head(10)
# Plot
# Figura
plt.figure(figsize = (20,8))
# Barplot
sns.barplot(y = df_filmes_paises[:20].country, x = df_filmes_paises[:20].Movie_Count, orient = "h")
# Loop
for i in range(0,20):
plt.text(df_filmes_paises.Movie_Count[df_filmes_paises.index[i]]-1,
i + 0.30,
round(df_filmes_paises["Movie_Count"][df_filmes_paises.index[i]],2))
plt.ylabel('País')
plt.xlabel('\nNúmero de Filmes')
plt.title('\nNúmero de Filmes Produzidos Por País\n')
plt.show()
# Consulta SQL
consulta9 = '''
SELECT primary_title AS Movie_Name, genres, rating
FROM
titles JOIN ratings
ON titles.title_id = ratings.title_id
WHERE titles.type = 'movie' AND ratings.votes >= 25000
ORDER BY rating DESC
LIMIT 10
'''
# Resultado
top10_melhores_filmes = pd.read_sql_query(consulta9, conn)
display(top10_melhores_filmes)
# Consulta SQL
consulta10 = '''
SELECT primary_title AS Movie_Name, genres, rating
FROM
titles JOIN ratings
ON titles.title_id = ratings.title_id
WHERE titles.type = 'movie' AND ratings.votes >= 25000
ORDER BY rating ASC
LIMIT 10
'''
# Resultado
top10_piores_filmes = pd.read_sql_query(consulta10, conn)
display(top10_piores_filmes)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1) With "Lil Wayne" and "Lil Kim" there are a lot of "Lil" musicians. Do a search and print a list of 50 that are playable in the USA (or the country of your choice), along with their popularity score.
Step2: 2) What genres are most represented in the search results? Edit your previous printout to also display a list of their genres in the format "GENRE_1, GENRE_2, GENRE_3". If there are no genres, print "No genres listed".
Step3: 3) Use a for loop to determine who BESIDES Lil Wayne has the highest popularity rating.
Step4: Is it the same artist who has the largest number of followers?
Step5: 4) Print a list of Lil's that are more popular than Lil' Kim.
Step6: 5) Pick two of your favorite Lils to fight it out, and use their IDs to print out their top tracks.
Step7: 6) Will the world explode if a musician swears?
Step8: First solution -- this felt like a lot of repeating and as if there was a more efficient way to do it. Turns out, there is! With some explanation from Soma first -- see below.
Step9: Soma explaining how to write functions in 30 seconds of Lab
Step10: Based on that, I re-wrote my above code using a function
Step11: How many minutes of explicit songs do they have? Non-explicit?
Step12: 7) Since we're talking about Lils, what about Biggies?
Step13: 8) Out of the top 50 "Lil"s and the top 50 "Biggie"s, who is more popular on average?
|
<ASSISTANT_TASK:>
Python Code:
import requests
lil_response = requests.get ('https://api.spotify.com/v1/search?query=Lil&type=artist&country=US&limit=50')
lil_data = lil_response.json()
print(type(lil_data))
lil_data.keys()
lil_data['artists'].keys()
lil_artists = lil_data['artists']['items']
#check on what elements are in that list:
#print (lil_artists[0])
for artist in lil_artists:
print(artist['name'], "has a popularity score of", artist['popularity'])
#http://stackoverflow.com/questions/2600191/how-can-i-count-the-occurrences-of-a-list-item-in-python
from collections import Counter
genre_list = []
for genre in lil_artists:
if genre['genres'] != []:
genre_list = genre['genres'] + genre_list
c = Counter(genre_list)
print("These are the counts for each genre:", c)
#https://docs.python.org/2/library/collections.html
most_common = Counter(genre_list).most_common(1)
print("The most common genre is:",most_common)
for artist in lil_artists:
if artist['genres'] == []:
print(artist['name'], "has a popularity score of", artist['popularity'],
"But there are no genres listed for this artist.")
else:
artist_genres = artist['genres']
print(artist['name'], "has a popularity score of", artist['popularity'],
"This artist is associated with", ', '.join(artist_genres))
# http://stackoverflow.com/questions/5850986/joining-elements-of-a-list-python
most_popular_score = 0
most_popular_name = []
for artist in lil_artists:
if artist['popularity'] > most_popular_score:
most_popular_name = artist['name']
most_popular_score = artist['popularity']
print(most_popular_name, "is the most popular, with a rating of", most_popular_score)
second_max_popular = 0
for artist in lil_artists:
if artist['popularity'] >= second_max_popular and artist['popularity'] < most_popular_score:
second_max_popular = artist['popularity']
print(artist['name'], "is the second most popular with a popularity rating of",artist['popularity'], "compared to", most_popular_name, "who has a rating of", most_popular_score)
most_followers = 0
for artist in lil_artists:
if artist['followers']['total'] > most_followers:
most_followers = artist['followers']['total']
print(artist['name'], "has the largest number followers:", artist['followers']['total'])
print("The second most popular Lils have the following amount of followers:")
second_most_followers = 0
for artist in lil_artists:
if artist['popularity'] >= second_max_popular and artist['popularity'] < 86:
second_max_popular = artist['popularity']
if artist['followers']['total'] > second_most_followers:
second_most_followers = artist['followers']['total']
print(artist['name'], artist['followers']['total'])
kim_popularity = 0
for artist in lil_artists:
if artist['name'] == "Lil' Kim":
kim_popularity = (artist['popularity'])
for artist in lil_artists:
if artist['popularity'] > kim_popularity:
print(artist['name'], "has a popularity of", artist['popularity'], "which is higher than that of Lil' Kim.")
#for artist in lil_artists:
#print(artist['name'], artist['id'])
#Lil Dicky 1tqhsYv8yBBdwANFNzHtcr
toptracks_Dicky_response = requests.get('https://api.spotify.com/v1/artists/1tqhsYv8yBBdwANFNzHtcr/top-tracks?country=US')
toptracks_Dicky_data = toptracks_Dicky_response.json()
tracks_Dicky = toptracks_Dicky_data['tracks']
print("THESE ARE THE TOP TRACKS OF LIL DICKY:")
for track in tracks_Dicky:
print(track['name'])
#Lil Jon 7sfl4Xt5KmfyDs2T3SVSMK
toptracks_Jon_response = requests.get('https://api.spotify.com/v1/artists/7sfl4Xt5KmfyDs2T3SVSMK/top-tracks?country=US')
toptracks_Jon_data = toptracks_Jon_response.json()
tracks_Jon = toptracks_Jon_data['tracks']
print("THESE ARE THE TOP TRACKS OF LIL JON:")
for track in tracks_Jon:
print(track['name'])
print(tracks_Dicky[0].keys())
explicit_Dicky_count = 0
non_explicit_Dicky_count = 0
explicit_popularity_Dicky_sum = 0
non_explicit_popularity_Dicky_sum = 0
for track in tracks_Dicky:
if track['explicit'] == True:
explicit_Dicky_count = explicit_Dicky_count + 1
explicit_popularity_Dicky_sum = explicit_popularity_Dicky_sum + track['popularity']
else:
non_explicit_Dicky_count = non_explicit_Dicky_count + 1
non_explicit_popularity_Dicky_sum = non_explicit_popularity_Dicky_sum + track['popularity']
print("The average popularity of explicit Lil Dicky songs is", explicit_popularity_Dicky_sum / explicit_Dicky_count)
if non_explicit_Dicky_count == 0:
print("There are no non-explicit Lil Dicky songs.")
else:
print("The average popularity of non-explicit Lil Dicky songs is:", non_explicit_popularity_Dicky_sum / non_explicit_Dicky_count)
explicit_Jon_count = 0
non_explicit_Jon_count = 0
explicit_popularity_Jon_sum = 0
non_explicit_popularity_Jon_sum = 0
for track in tracks_Jon:
if track['explicit'] == True:
explicit_Jon_count = explicit_Jon_count + 1
explicit_popularity_Jon_sum = explicit_popularity_Jon_sum + track['popularity']
else:
non_explicit_Jon_count = non_explicit_Jon_count + 1
non_explicit_popularity_Jon_sum = non_explicit_popularity_Jon_sum + track['popularity']
print("The average popularity of explicit Lil Jon songs is", explicit_popularity_Jon_sum / explicit_Jon_count)
if non_explicit_Jon_count == 0:
print("There are no non-explicit Lil Jon songs.")
else:
print("The average popularity of non-explicit Lil Jon songs is:", non_explicit_popularity_Jon_sum / non_explicit_Jon_count)
#function writing
def add(a, b):
value = a + b
print("the sum of", a, "and", b, "is", value)
add(5, 7)
add(1, 2)
add(4, 55)
def average_popularity(a, b):
explicit_count = 0
non_explicit_count = 0
explicit_popularity_sum = 0
non_explicit_popularity_sum = 0
for track in a:
if track['explicit'] == True:
explicit_count = explicit_count + 1
explicit_popularity_sum = explicit_popularity_sum + track['popularity']
else:
non_explicit_count = non_explicit_count + 1
non_explicit_popularity_sum = non_explicit_popularity_sum + track['popularity']
if explicit_count == 0:
print("There are no explicit songs by", b)
else:
print("The average popularity of explicit songs by", b, "is", explicit_popularity_sum / explicit_count)
if non_explicit_count == 0:
print("There are no non-explicit songs by", b)
else:
print("The average popularity of non-explicit songs by", b, "is", non_explicit_popularity_sum / non_explicit_count)
average_popularity(tracks_Dicky, "Lil Dicky")
average_popularity(tracks_Jon, "Lil Jon")
def explicit_minutes(a, b):
explicit_milliseconds = 0
non_explicit_milliseconds = 0
for track in a:
if track['explicit'] == True:
explicit_milliseconds = explicit_milliseconds + track['duration_ms']
else:
non_explicit_milliseconds = non_explicit_milliseconds + track['duration_ms']
if explicit_milliseconds !=0:
        print(b, "has", explicit_milliseconds / 60000, "minutes of explicit music.")  # 60,000 ms per minute
    if non_explicit_milliseconds !=0:
        print(b, "has", non_explicit_milliseconds / 60000, "minutes of non-explicit music.")
    else:
        print(b, "has no non-explicit music.")
explicit_minutes(tracks_Dicky, "Lil Dicky")
explicit_minutes(tracks_Jon, "Lil Jon")
import requests
biggieT_response = requests.get('https://api.spotify.com/v1/search?query=biggie&type=artist&limit=50')
biggieT_data = biggieT_response.json()
biggieT_artists = biggieT_data['artists']['items']
artist_count = 0
for artist in biggieT_artists:
artist_count = artist_count + 1
print("There are in total", artist_count, "Biggies.")
import requests
import math
offset_valueB = 0
biggieT_response = requests.get('https://api.spotify.com/v1/search?query=biggie&type=artist&limit=50&offset=' + str(offset_valueB) + '')
biggieT_data = biggieT_response.json()
biggieT_artists = biggieT_data['artists']['items']
offset_limitB = biggieT_data['artists']['total']
offset_valueL = 0
lilT_response = requests.get('https://api.spotify.com/v1/search?query=lil&type=artist&limit=50&offset=' + str(offset_valueL) + '')
lilT_data = lilT_response.json()
lilT_artists = lilT_data['artists']['items']
offset_limitL = lilT_data['artists']['total']
page_countB = math.ceil(offset_limitB/ 50)
print("The page count for all the Biggies is:", page_countB)
page_countL = math.ceil(offset_limitL/ 50)
print("The page count for all the Lils is:", page_countL)
print("If you made 1 request every 5 seconds, it will take", page_countL * 5, "seconds for all the Lils requests to process. Whereas for the Biggies it's", page_countB* 5, ", so the total amount of time is", page_countB*5 + page_countL*5, "seconds.")
artist_count = 0
offset_value = 0
for page in range(0, 1):
    biggieT_response = requests.get('https://api.spotify.com/v1/search?query=biggie&type=artist&limit=50&offset=' + str(offset_value) + '')
    biggieT_data = biggieT_response.json()
    biggieT_artists = biggieT_data['artists']['items']
    for artist in biggieT_artists:
artist_count = artist_count + 1
offset_value = offset_value + 50
print("There are in total", artist_count, "Biggies.")
artist_count = 0
offset_value = 0
for page in range(0, 91):
lilT_response = requests.get('https://api.spotify.com/v1/search?query=lil&type=artist&limit=50&offset=' + str(offset_value) + '')
lilT_data = lilT_response.json()
lilT_artists = lilT_data['artists']['items']
for artist in lilT_artists:
artist_count = artist_count + 1
offset_value = offset_value + 50
print("There are in total", artist_count, "Lils.")
# tried to solve it with a function as well, but didn't work out, gave an error message. So back to the old way.
biggie50_response = requests.get('https://api.spotify.com/v1/search?query=biggie&type=artist&limit=50')
biggie50_data = biggie50_response.json()
biggie50_artists = biggie50_data['artists']['items']
popularity_biggie50 = 0
for artist in biggie50_artists:
popularity_biggie50 = popularity_biggie50 + artist['popularity']
print("The average popularity of the top50 Biggies is", popularity_biggie50 / 50)
lil50_response = requests.get('https://api.spotify.com/v1/search?query=lil&type=artist&limit=50')
lil50_data = lil50_response.json()
lil50_artists = lil50_data['artists']['items']
popularity_lil50 = 0
for artist in lil50_artists:
popularity_lil50 = popularity_lil50 + artist['popularity']
print("The average popularity of the top50 Lils is", popularity_lil50 / 50)
if popularity_biggie50 > popularity_lil50:
print("The top50 Biggies are on average more popular than the top50 Lils.")
elif popularity_biggie50 == popularity_lil50:
print("The top50 Biggies are on average as popular as the top50 Lils.")
else:
print("The top50 Lils are on average more popular than the top50 Biggies.")
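# Hypothetical helper (not part of the original solution): compute the average
# popularity of the top-50 results for any search term, so the comparison above
# could be repeated for other prefixes.
def average_top50_popularity(query):
    response = requests.get('https://api.spotify.com/v1/search?query=' + query + '&type=artist&limit=50')
    artists = response.json()['artists']['items']
    return sum(artist['popularity'] for artist in artists) / len(artists)
# Example (commented out to avoid extra API calls):
# print(average_top50_popularity('biggie'), average_top50_popularity('lil'))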
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Iris Flower Dataset
Step2: Standardize Features
Step3: Create Logistic Regression
Step4: Train Logistic Regression
Step5: Create Previously Unseen Observation
Step6: Predict Class Of Observation
Step7: View Predicted Probabilities
|
<ASSISTANT_TASK:>
Python Code:
# Load libraries
from sklearn.linear_model import LogisticRegression
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
# Load data with only two classes
iris = datasets.load_iris()
X = iris.data[:100,:]
y = iris.target[:100]
# Standarize features
scaler = StandardScaler()
X_std = scaler.fit_transform(X)
# Create logistic regression object
clf = LogisticRegression(random_state=0)
# Train model
model = clf.fit(X_std, y)
# Create new observation
new_observation = [[.5, .5, .5, .5]]
# Predict class
model.predict(new_observation)
# View predicted probabilities
model.predict_proba(new_observation)
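# Optional sanity check (not part of the original recipe): accuracy of the fitted
# model on the standardized training data. A proper evaluation would use a
# held-out test split instead.
print(model.score(X_std, y))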
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic series; default integer index
Step2: datetime index
Step3: sample NumPy data
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
my_series = pd.Series([1,3,5,np.nan,6,8])
my_series
my_dates_index = pd.date_range('20160101', periods=6)
my_dates_index
df_from_dictionary = pd.DataFrame({
'float' : 1.,
'time' : pd.Timestamp('20160825'),
'series' : pd.Series(1,index=list(range(4)),dtype='float32'),
'array' : np.array([3] * 4,dtype='int32'),
'categories' : pd.Categorical(["test","train","taxes","tools"]),
'dull' : 'boring data'
})
df_from_dictionary
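# The datetime index created above can also hold sample NumPy data; the random
# values here are purely illustrative.
df_from_numpy = pd.DataFrame(np.random.randn(6, 4), index=my_dates_index, columns=list('ABCD'))
df_from_numpy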
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Result
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import os
from scipy import stats
lambp=10
dist= stats.poisson(lambp)
x= stats.poisson.rvs(mu=lambp, loc=0, size=1000000)
media= np.mean(x)
var= np.var(x)
media2=[]
for i in range (0,10001):
valores = np.random.choice(x,2)
media2.append(np.mean(valores))
plt.hist(media2)
plt.title("n valendo 2")
media = np.mean(media2)
variancia = np.var(media2)
print("Media:{}".format(media))
print("Variancia:{}".format(variancia))
mediaVal=[]
for i in range (0,10001):
valores = np.random.choice(x,4)
mediaVal.append(np.mean(valores))
plt.hist(mediaVal)
media = np.mean(mediaVal)
plt.title("n valendo 4")
variancia = np.var(mediaVal)
print("Media:{}".format(media))
print("Variancia:{}".format(variancia))
media20=[]
for i in range (0,10001):
valores = np.random.choice(x,20)
media20.append(np.mean(valores))
plt.hist(media20)
plt.title("n valendo 20")
media = np.mean(media20)
variancia = np.var(media20)
print("Media:{}".format(media))
print("Variancia:{}".format(variancia))
media50=[]
for i in range (0,10001):
valores = np.random.choice(x,50)
media50.append(np.mean(valores))
plt.hist(media50)
plt.title("n valendo 50")
media = np.mean(media50)
variancia = np.var(media50)
print("Media:{}".format(media))
print("Variancia:{}".format(variancia))
media50=[]
primeiro= []
nono = []
for i in range (0,10001):
valores = np.random.choice(x,50)
primeiro.append(valores[0])
nono.append(valores[9])
media50.append(np.mean(valores))
media = np.mean(media50)
variancia = np.var(media50)
print("Media:{}".format(media))
print("Variancia:{}".format(variancia))
plt.hist(media50)
plt.title("n valendo 50")
plt.show()
dp1 = np.var(primeiro)**0.5  # standard deviation (note: **1/2 would divide the variance by 2)
dp9 = np.var(nono)**0.5
cov = np.cov(primeiro,nono)
plt.hist(primeiro)
plt.title("primeiro")
plt.show()
plt.hist(nono)
plt.title("nono")
plt.show()
print(cov[0, 1] / (dp1 * dp9))  # correlation: off-diagonal covariance over the product of standard deviations
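# Optional check of the central limit theorem (not part of the original exercise):
# for sample means of size n drawn from Poisson(lambda), theory predicts
# mean = lambda and variance = lambda / n.
for n, sample_means in [(2, media2), (4, mediaVal), (20, media20), (50, media50)]:
    print("n =", n, "| empirical variance:", np.var(sample_means), "| theoretical lambda/n:", lambp / n)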
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Where's Mario?
Step6: Histograms
Step7: Test
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import print_function
from __future__ import division
import numpy as np
from matplotlib import pyplot as plt
import cv2
import time
images = ['hist_pics/mario-1.png', 'hist_pics/mario-2.png', 'hist_pics/mario-3.png', 'hist_pics/mario-4.png', 'hist_pics/mario-5.png', 'hist_pics/mario-6.png']
rgb = []
for image in images:
i = cv2.imread(image, cv2.IMREAD_COLOR)
i = cv2.cvtColor(i, cv2.COLOR_BGR2RGB) # pretty images
rgb.append(i)
plt.subplot(1,3,1)
plt.imshow(rgb[0]);
plt.subplot(1,3,2)
plt.imshow(rgb[1]);
plt.subplot(1,3,3)
plt.imshow(rgb[2]);
hsv = []
for i, image in enumerate(rgb):
img = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
hsv.append(img)
plt.subplot(1,3,1)
plt.imshow(hsv[0]);
plt.subplot(1,3,2)
plt.imshow(hsv[1]);
plt.subplot(1,3,3)
plt.imshow(hsv[2]);
# let's define a mask to block out things we don't want
# since our back ground is white hsv = [x,0,255] where is x is don't care
# white is s=0, v=255
masks = []
for image in hsv:
h,s,v = cv2.split(image)
# ret,vv = cv2.threshold(v,250,255,cv2.THRESH_BINARY)
# masks.append(vv)
ret,ss = cv2.threshold(s,50,255,cv2.THRESH_BINARY)
masks.append(ss)
# ss = cv2.merge((ss,ss,ss))
plt.subplot(1,4,1)
plt.imshow(masks[0], cmap = 'gray', interpolation = 'bicubic');
plt.xticks([]), plt.yticks([]) # to hide tick values on X and Y axis
plt.subplot(1,4,2)
plt.imshow(masks[1], cmap = 'gray', interpolation = 'bicubic');
plt.xticks([]), plt.yticks([]) # to hide tick values on X and Y axis
plt.subplot(1,4,3)
plt.imshow(masks[2], cmap = 'gray', interpolation = 'bicubic');
plt.xticks([]), plt.yticks([]) # to hide tick values on X and Y axis
plt.colorbar();
# plt.subplot(1,4,4)
# plt.imshow(masks[0]);
# let's take a look at our masks and see what shows through
tmp = []
for i in range(len(masks)):
ret = cv2.bitwise_and(rgb[i], cv2.merge((masks[i],masks[i],masks[i])))
tmp.append(ret)
plt.subplot(1,4,1)
plt.imshow(tmp[0]);
plt.xticks([]), plt.yticks([]) # to hide tick values on X and Y axis
plt.subplot(1,4,2)
plt.imshow(tmp[1]);
plt.xticks([]), plt.yticks([]) # to hide tick values on X and Y axis
plt.subplot(1,4,3)
plt.imshow(tmp[2]);
plt.xticks([]), plt.yticks([]) # to hide tick values on X and Y axis
plt.subplot(1,4,4)
plt.imshow(tmp[3]);
plt.xticks([]), plt.yticks([]); # to hide tick values on X and Y axis
class hsvHistogram(object):
    """This class creates and holds the histogram information of an HSV image. It
    is also used to find our target in a given image. This class is designed
    to construct a 2D histogram looking at hue and saturation, but not value."""
hist = None
bins = None
def __init__(self, bins):
self.bins = bins
self.kernel = np.ones((5,5),np.uint8)
def calcHist(self, im_array, masks=None):
        """Create a histogram from just images, no arrays."""
hist = cv2.calcHist(
im_array,
[0, 1],
masks,
[self.bins, self.bins],
[0,180, 0 ,256]
)
self.hist = hist
return hist
def calcHistMask(self, im_array, masks):
        """Create a histogram from just images and an array of masks."""
hist = self.calcHist([im_array[0]], masks[0])
for i in range(1, len(im_array)):
h = self.calcHist([im_array[i]], masks[i])
hist += h
self.hist = hist
return hist
def normalize(self):
cv2.normalize(self.hist,self.hist,0,self.bins,cv2.NORM_MINMAX)
def find(self, test, threshold=3):
        """
        test - target image
        threshold - a tuning parameter
        """
if self.hist is None:
print('Need to init histogram first!')
return 1
dst = cv2.calcBackProject(
[test],
[0, 1],
self.hist,
[0,180, 0 ,256],
1
)
disc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(5,5))
cv2.filter2D(dst,-1,disc,dst)
# cv2.threshold(image, low, high, method)
ret, thresh = cv2.threshold(dst,threshold,255,cv2.THRESH_BINARY)
# morphological/blobify --------
# thresh = cv2.erode(thresh, self.kernel)
# thresh = cv2.dilate(thresh, self.kernel)
thresh = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, self.kernel)
return thresh
def plot(self):
# plt.plot(np.linspace(0,180,roiHist.bins), roiHist.hist)
# plt.grid(True)
# plt.xlabel('Hue')
plt.imshow(self.hist,interpolation = 'nearest')
plt.ylabel('Hue [0-179]')
plt.xlabel('Saturation [0-255]')
plt.title('Histogram')
plt.colorbar()
roiHist = hsvHistogram(32)
# roiHist.calcHist([hsv[1]], masks[1])
roiHist.calcHistMask(hsv, masks)
roiHist.normalize()
roiHist.plot()
test_rgb = cv2.imread('hist_pics/mario-test2.jpg')
# test_rgb = rgb[0]
test_rgb = cv2.cvtColor(test_rgb, cv2.COLOR_BGR2RGB) # make pretty for ipython
test_hsv = cv2.cvtColor(test_rgb, cv2.COLOR_RGB2HSV) # now convert to HSV
plt.subplot(1,2,1)
plt.imshow(test_rgb);
plt.xticks([]), plt.yticks([])
plt.subplot(1,2,2)
plt.imshow(test_hsv);
plt.xticks([]), plt.yticks([])
thresh = roiHist.find(test_hsv, 50)
thresh = cv2.merge((thresh,thresh,thresh)) # make 3 channels
# plot ----------
res = cv2.bitwise_and(test_rgb,thresh)
pics = np.hstack((test_rgb,thresh))
plt.imshow(pics);
plt.xticks([]), plt.yticks([]);
plt.imshow(res);
plt.xticks([]), plt.yticks([]);
test_rgb = cv2.imread('hist_pics/mario-test4.jpg')
test_rgb = cv2.cvtColor(test_rgb, cv2.COLOR_BGR2RGB) # make pretty for ipython
test_hsv = cv2.cvtColor(test_rgb, cv2.COLOR_RGB2HSV) # now convert to HSV
thresh = roiHist.find(test_hsv, 15)
thresh = cv2.merge((thresh,thresh,thresh)) # make 3 channels
res = cv2.bitwise_and(test_rgb,thresh)
pics = np.hstack((test_rgb,res))
plt.imshow(pics);
plt.xticks([]), plt.yticks([]);
test_rgb = cv2.imread('hist_pics/mario-test.png')
test_rgb = cv2.cvtColor(test_rgb, cv2.COLOR_BGR2RGB) # make pretty for ipython
test_hsv = cv2.cvtColor(test_rgb, cv2.COLOR_RGB2HSV) # now convert to HSV
thresh = roiHist.find(test_hsv, 50)
thresh = cv2.merge((thresh,thresh,thresh)) # make 3 channels
res = cv2.bitwise_and(test_rgb,thresh)
pics = np.hstack((test_rgb,res))
plt.imshow(pics);
plt.xticks([]), plt.yticks([]);
# h,s,v = cv2.split(test_hsv)
# plt.subplot(1,2,1)
# plt.grid(True)
# plt.hist(h.ravel(), 32,[0,180], label='hue');
# plt.subplot(1,2,2)
# plt.hist(s.ravel(), 32,[0,255], label='saturation');
# plt.hist(v.ravel(), 32,[0,255], label='value');
# plt.grid(True)
# plt.legend(loc='upper right');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download Data
Step2: Not too big at all.
Step3: Look for LkCa 4
Step4: The source is named T-Tau0-01262
Step5: The Devor et al. period is just twice the photometric period of 3.375 days.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from astropy.io import ascii, votable, misc
#! mkdir ../data/Devor2008
#! curl http://iopscience.iop.org/1538-3881/135/3/850/suppdata/aj259648_mrt7.txt >> ../data/Devor2008/aj259648_mrt7.txt
! du -hs ../data/Devor2008/aj259648_mrt7.txt
dat = ascii.read('../data/Devor2008/aj259648_mrt7.txt')
! head ../data/Devor2008/aj259648_mrt7.txt
dat.info
df = dat.to_pandas()
df.head()
df.columns
sns.distplot(df.Per, norm_hist=False, kde=False)
gi = (df.RAh == 4) & (df.RAm == 16) & (df.DEd == 28) & (df.DEm == 7)
gi.sum()
df[gi].T
! head ../data/Devor2008/T-Tau0-01262.lc
cols = ['HJD-2400000', 'r_band', 'r_unc']
lc_raw = pd.read_csv('../data/Devor2008/T-Tau0-01262.lc', names=cols, delim_whitespace=True)
lc_raw.head()
lc_raw.count()
sns.set_context('talk')
plt.plot(lc_raw['HJD-2400000'], lc_raw.r_band, '.')
plt.ylim(0.6, -0.6)
plt.plot(np.mod(lc_raw['HJD-2400000'], 3.375)/3.375, lc_raw.r_band, '.', alpha=0.5)
plt.xlabel('phase')
plt.ylabel('$\Delta \;\; r$')
plt.ylim(0.6, -0.6)
plt.plot(np.mod(lc_raw['HJD-2400000'], 6.74215), lc_raw.r_band, '.')
plt.ylim(0.6, -0.6)
! ls /Users/gully/Downloads/catalog/T-Tau0-* | head -n 10
lc2 = pd.read_csv('/Users/gully/Downloads/catalog/T-Tau0-00397.lc', names=cols, delim_whitespace=True)
plt.plot(lc2['HJD-2400000'], lc2.r_band, '.')
plt.ylim(0.6, -0.6)
this_p = df.Per[df.Name == 'T-Tau0-00397']
plt.plot(np.mod(lc2['HJD-2400000'], this_p), lc2.r_band, '.', alpha=0.5)
plt.xlabel('phase')
plt.ylabel('$\Delta \;\; r$')
plt.ylim(0.6, -0.6)
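# A quick periodogram cross-check (a sketch, not part of the original notebook).
# LombScargle lives in astropy.timeseries in recent astropy versions
# (astropy.stats in older ones). The peak should land near the ~3.375 d
# photometric period rather than the doubled Devor et al. value.
from astropy.timeseries import LombScargle
frequency, power = LombScargle(lc_raw['HJD-2400000'], lc_raw.r_band).autopower(minimum_frequency=1/20, maximum_frequency=1/0.5)
print('Best period: {:.4f} d'.format(1 / frequency[np.argmax(power)]))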
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Yo-yo
Step2: The results are
Step3: Rmin is the radius of the axle. Rmax is the radius of the axle plus rolled string.
Step4: Based on these parameters, we can compute the moment of inertia for the yo-yo, modeling it as a solid cylinder with uniform density (see here).
Step5: And we can compute k, which is the constant that determines how the radius of the spooled string decreases as it unwinds.
Step6: The state variables we'll use are angle, theta, angular velocity, omega, the length of the spooled string, y, and the linear velocity of the yo-yo, v.
Step7: And here's a System object with init and t_end (chosen to be longer than I expect for the yo-yo to drop 1 m).
Step8: Write a slope function for this system, using these results from the book
Step9: Test your slope function with the initial conditions.
Step10: Notice that the initial acceleration is substantially smaller than g because the yo-yo has to start spinning before it can fall.
Step11: Test your event function
Step12: Then run the simulation.
Step13: Check the final state. If things have gone according to plan, the final value of y should be close to 0.
Step14: How long does it take for the yo-yo to fall 1 m? Does the answer seem reasonable?
Step15: y should decrease and accelerate down.
Step16: Plot velocity as a function of time; is the acceleration constant?
Step17: We can use gradient to estimate the derivative of v. How does the acceleration of the yo-yo compare to g?
Step18: And we can use the formula for r to plot the radius of the spooled thread over time.
|
<ASSISTANT_TASK:>
Python Code:
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
from sympy import symbols, Eq, solve
T, a, alpha, I, m, g, r = symbols('T a alpha I m g r')
eq1 = Eq(a, -r * alpha)
eq1
eq2 = Eq(T - m * g, m * a)
eq2
eq3 = Eq(T * r, I * alpha)
eq3
soln = solve([eq1, eq2, eq3], [T, a, alpha])
soln[T]
soln[a]
soln[alpha]
Rmin = 8e-3 # m
Rmax = 16e-3 # m
Rout = 35e-3 # m
mass = 50e-3 # kg
L = 1 # m
g = 9.8 # m / s**2
1 / (Rmax)
I = mass * Rout**2 / 2
I
k = (Rmax**2 - Rmin**2) / 2 / L
k
init = State(theta=0, omega=0, y=L, v=0)
system = System(init=init, t_end=2)
# Solution
def slope_func(t, state, system):
theta, omega, y, v = state
r = np.sqrt(2*k*y + Rmin**2)
alpha = mass * g * r / (I + mass * r**2)
a = -r * alpha
return omega, alpha, v, a
# Solution
slope_func(0, system.init, system)
# Solution
def event_func(t, state, system):
theta, omega, y, v = state
return y
# Solution
event_func(0, system.init, system)
# Solution
results, details = run_solve_ivp(system, slope_func,
events=event_func, max_step=0.05)
details.message
# Solution
results.tail()
results.theta.plot(color='C0', label='theta')
decorate(xlabel='Time (s)',
ylabel='Angle (rad)')
results.y.plot(color='C1', label='y')
decorate(xlabel='Time (s)',
ylabel='Length (m)')
results.v.plot(label='velocity', color='C3')
decorate(xlabel='Time (s)',
ylabel='Velocity (m/s)')
a = gradient(results.v)
a.plot(label='acceleration', color='C4')
decorate(xlabel='Time (s)',
ylabel='Acceleration (m/$s^2$)')
r = np.sqrt(2*k*results.y + Rmin**2)
r.plot(label='radius')
decorate(xlabel='Time (s)',
ylabel='Radius of spooled thread (m)')
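# One more sanity check (not part of the original exercise): with the rolling
# constraint v = -r * omega, the simulated linear and angular velocities should
# agree closely.
constraint_error = results.v - (-r * results.omega)
print('Max |v + r*omega|:', np.max(np.abs(constraint_error)))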
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.
Step2: Visualize Data
Step3: Preprocess Data
Step4: Setup TensorFlow
Step5: SOLUTION
Step6: Features and Labels
Step7: Training Pipeline
Step8: Model Evaluation
Step9: Train the Model
Step10: Evaluate the Model
|
<ASSISTANT_TASK:>
Python Code:
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", reshape=False)
X_train, y_train = mnist.train.images, mnist.train.labels
X_validation, y_validation = mnist.validation.images, mnist.validation.labels
X_test, y_test = mnist.test.images, mnist.test.labels
assert(len(X_train) == len(y_train))
assert(len(X_validation) == len(y_validation))
assert(len(X_test) == len(y_test))
print()
print("Image Shape: {}".format(X_train[0].shape))
print()
print("Training Set: {} samples".format(len(X_train)))
print("Validation Set: {} samples".format(len(X_validation)))
print("Test Set: {} samples".format(len(X_test)))
import numpy as np
# Pad images with 0s
X_train = np.pad(X_train, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_validation = np.pad(X_validation, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_test = np.pad(X_test, ((0,0),(2,2),(2,2),(0,0)), 'constant')
print("Updated Image Shape: {}".format(X_train[0].shape))
import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
index = random.randint(0, len(X_train))
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
plt.imshow(image, cmap="gray")
print(y_train[index])
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
import tensorflow as tf
EPOCHS = 10
BATCH_SIZE = 128
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# SOLUTION: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(6))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# SOLUTION: Activation.
conv1 = tf.nn.relu(conv1)
# SOLUTION: Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# SOLUTION: Activation.
conv2 = tf.nn.relu(conv2)
# SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
# SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# SOLUTION: Activation.
fc1 = tf.nn.relu(fc1)
# SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# SOLUTION: Activation.
fc2 = tf.nn.relu(fc2)
# SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 10.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 10), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(10))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 10)
rate = 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_validation, y_validation)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './lenet2')
print("Model saved")
with tf.Session() as sess:
saver.restore(sess, "lenet2")
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
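# Optional spot check (not part of the original notebook): restore the weights
# again and classify a single test image.
with tf.Session() as sess:
    saver.restore(sess, "lenet2")
    single_logits = sess.run(logits, feed_dict={x: X_test[:1]})
    print("Predicted label:", np.argmax(single_logits), "| true label:", y_test[0])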
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: List Superfund sites
Step2: In place of USA, you can specify any US state or county. You can use place search to find the corresponding DCID, as illustrated here.
Step3: Additional statistics for Tar Creek site
Step4: Get non-statistical attributes
|
<ASSISTANT_TASK:>
Python Code:
!pip install datacommons_pandas datacommons --upgrade --quiet
# Import Data Commons libraries
import datacommons as dc
import datacommons_pandas as dcpd
# Gets all Superfund sites within USA
place_dcid = 'country/USA' # DCID of USA
site_list = dc.get_places_in([place_dcid], 'SuperfundSite')[place_dcid]
site_list[:5]
# Gets stats for the listed variables from all sites in a Pandas table
site_df = dcpd.build_multivariate_dataframe(site_list,
['CrsiScore_SuperfundSite',
'NaturalHazardExposureScore_SuperfundSite',
'NaturalHazardRiskScore_SuperfundSite',
'NaturalHazardRiskScore_SuperfundSite_CoastalFloodEvent',
'NaturalHazardRiskScore_SuperfundSite_DroughtEvent',
'NaturalHazardRiskScore_SuperfundSite_EarthquakeEvent',
'NaturalHazardRiskScore_SuperfundSite_ExcessiveHeatEvent',
'NaturalHazardRiskScore_SuperfundSite_ExtremeColdWindChillEvent',
'NaturalHazardRiskScore_SuperfundSite_FloodEvent',
'NaturalHazardRiskScore_SuperfundSite_HailEvent',
'NaturalHazardRiskScore_SuperfundSite_HighWindEvent',
'NaturalHazardRiskScore_SuperfundSite_HurricaneEvent',
'NaturalHazardRiskScore_SuperfundSite_LandslideEvent',
'NaturalHazardRiskScore_SuperfundSite_TornadoEvent',
'NaturalHazardRiskScore_SuperfundSite_WildfireEvent'])
site_df.head()
# Gets all measurement sites contained in Tar Creek
tar_creek_site = 'epaSuperfundSiteId/OKD980629844' # DCID of Tar Creek
measurement_sites = dc.get_places_in([tar_creek_site], 'SuperfundMeasurementSite')[tar_creek_site]
# Gets stats for contaminant variables for said measurement sites
tar_creek_df = dcpd.build_multivariate_dataframe(
measurement_sites,
[
'Concentration_Cadmium_BodyOfWater_GroundWater',
'Concentration_DissolvedContaminant_Cadmium_BodyOfWater_GroundWater',
'Concentration_DissolvedContaminant_Iron_BodyOfWater_GroundWater',
'Concentration_DissolvedContaminant_Lead_BodyOfWater_GroundWater',
'Concentration_DissolvedContaminant_Zinc_BodyOfWater_GroundWater',
'Concentration_Iron_BodyOfWater_GroundWater',
'Concentration_Lead_BodyOfWater_GroundWater',
'Concentration_Sulfate_BodyOfWater_GroundWater',
'DissolvedOxygen_BodyOfWater_GroundWater',
'Concentration_Zinc_BodyOfWater_GroundWater',
'PotentialOfHydrogen_BodyOfWater_GroundWater',
'ElectricalConductivity_BodyOfWater_GroundWater',
'Temperature_BodyOfWater_GroundWater',
'WaterHardness_BodyOfWater_GroundWater'
])
tar_creek_df.head()
# Lists properties for a sample site
site_props = dc.get_property_labels([tar_creek_site], out=True)[tar_creek_site]
for prop in site_props:
# Gets values for a given property. pvs is dict from site-id -> list of values
pvs = dc.get_property_values(site_list, prop)
# Turns the list of values into a comma-separated a single-value
pvs = {p: ', '.join(v) for p, v in pvs.items()}
# Extends the dataframe
site_df[prop] = site_df.index.map(pvs)
site_df.head()
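# A small follow-up (not part of the original query): rank the sites by their
# CRSI score and look at the five highest values.
site_df.sort_values('CrsiScore_SuperfundSite', ascending=False).head(5)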
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lab 2
Step2: Part 1. Multiclass logistic regression
Step3: MNIST consists of small 28 by 28 pixel images of written digits (0-9). We split the dataset into a training, validation and testing arrays. The variables x_train, x_valid and x_test are $N \times M$ matrices, where $N$ is the number of datapoints in the respective set, and $M = 28^2 = 784$ is the dimensionality of the data. The second set of variables t_train, t_valid and t_test contain the corresponding $N$-dimensional vector of integers, containing the true class labels.
Step5: In multiclass logistic regression, the conditional probability of class label $j$ given the image $\bx$ for some datapoint is given by
Step8: 1.1.3 Stochastic gradient descent (10 points)
Step9: 1.2. Train
Step10: 1.2.2 Visualize weights (10 points)
Step11: Describe in less than 100 words why these weights minimize the loss. The plot illustrates that every column j of W is `specialized' in recognising a number. The colours represent the penalty or reward of pixel intensity in data point x. Intensity in the grey area does not matter (it contributes 0 to the likelihood), in the black area there should be intensity if j is x's target (positive contribution), and in the white area there should not be intensity (negative contribution). If x's target follows this pattern, the likelihood of its target is maximized and thus the loss is minimized. The fuzziness in the numbers accounts for variations in handwriting.
Step13: We think these images make sense
Step14: 2.3.1. Explain the weights (5 points)
|
<ASSISTANT_TASK:>
Python Code:
NAME = "Michelle Appel"
NAME2 = "Verna Dankers"
NAME3 = "Yves van Montfort"
EMAIL = "michelle.appel@student.uva.nl"
EMAIL2 = "verna.dankers@student.uva.nl"
EMAIL3 = "yves.vanmontfort@student.uva.nl"
%pylab inline
plt.rcParams["figure.figsize"] = [9,5]
from sklearn.datasets import fetch_mldata
# Fetch the data
mnist = fetch_mldata('MNIST original')
data, target = mnist.data, mnist.target.astype('int')
# Shuffle
indices = np.arange(len(data))
np.random.seed(123)
np.random.shuffle(indices)
data, target = data[indices].astype('float32'), target[indices]
# Normalize the data between 0.0 and 1.0:
data /= 255.
# Split
x_train, x_valid, x_test = data[:50000], data[50000:60000], data[60000: 70000]
t_train, t_valid, t_test = target[:50000], target[50000:60000], target[60000: 70000]
def plot_digits(data, num_cols, targets=None, shape=(28,28)):
num_digits = data.shape[0]
num_rows = int(num_digits/num_cols)
for i in range(num_digits):
plt.subplot(num_rows, num_cols, i+1)
plt.imshow(data[i].reshape(shape), interpolation='none', cmap='Greys')
if targets is not None:
plt.title(int(targets[i]))
plt.colorbar()
plt.axis('off')
plt.tight_layout()
plt.show()
plot_digits(x_train[0:40000:5000], num_cols=4, targets=t_train[0:40000:5000])
import numpy as np
# 1.1.2 Compute gradient of log p(t|x;w,b) wrt w and b
def logreg_gradient(x, t, w, b):
    """Return the log likelihood of the data point's target,
    together with the gradient with respect to w and b."""
logq = np.array(np.matmul(x, w) + b)
if len(logq.shape) == 1:
logq = np.expand_dims(logq, 0)
if len(x.shape) == 1:
x = np.expand_dims(x, 0)
# Calculate log z with trick mentioned above
a = max(logq.flatten())
    logz = a + np.log(sum([np.power(np.e, logq[:, i] - a) for i in range(logq.shape[1])]))
logp = logq - logz
zeros = np.zeros(logq.shape)
zeros[:, t[0]] = 1
dq = zeros - np.power(np.e, logq - logz)
# Final derivatives of the log-likelihood
dL_dw = np.transpose(np.matmul(dq.T, x))
dL_db = dq
return logp[:,t].squeeze(), dL_dw, dL_db.squeeze()
np.random.seed(123)
# scalar, 10 X 768 matrix, 10 X 1 vector
w = np.random.normal(size=(28*28,10), scale=0.001)
# w = np.zeros((784,10))
b = np.zeros((10,))
# test gradients, train on 1 sample
logpt, grad_w, grad_b = logreg_gradient(x_train[0:1,:], t_train[0:1], w, b)
print("Test gradient on one point")
print("Likelihood:\t", logpt)
print("\nGrad_W_ij\t",grad_w.shape,"matrix")
print("Grad_W_ij[0,152:158]=\t", grad_w[152:158,0])
print("\nGrad_B_i shape\t",grad_b.shape,"vector")
print("Grad_B_i=\t", grad_b.T)
print("i in {0,...,9}; j in M")
assert logpt.shape == (), logpt.shape
assert grad_w.shape == (784, 10), grad_w.shape
assert grad_b.shape == (10,), grad_b.shape
# It's always good to check your gradient implementations with finite difference checking:
# Scipy provides the check_grad function, which requires flat input variables.
# So we write two helper functions that provide can compute the gradient and output with 'flat' weights:
from scipy.optimize import check_grad
np.random.seed(123)
# 784 X 10 weight matrix, 10-dimensional bias vector
w = np.random.normal(size=(28*28,10), scale=0.001)
# w = np.zeros((784,10))
b = np.zeros((10,))
def func(w):
logpt, grad_w, grad_b = logreg_gradient(x_train[0:1,:], t_train[0:1], w.reshape(784,10), b)
return logpt
def grad(w):
logpt, grad_w, grad_b = logreg_gradient(x_train[0:1,:], t_train[0:1], w.reshape(784,10), b)
return grad_w.flatten()
finite_diff_error = check_grad(func, grad, w.flatten())
print('Finite difference error grad_w:', finite_diff_error)
assert finite_diff_error < 1e-3, 'Your gradient computation for w seems off'
def func(b):
logpt, grad_w, grad_b = logreg_gradient(x_train[0:1,:], t_train[0:1], w, b)
return logpt
def grad(b):
logpt, grad_w, grad_b = logreg_gradient(x_train[0:1,:], t_train[0:1], w, b)
return grad_b.flatten()
finite_diff_error = check_grad(func, grad, b)
print('Finite difference error grad_b:', finite_diff_error)
assert finite_diff_error < 1e-3, 'Your gradient computation for b seems off'
import random
def predict(x, w, b):
    """Return the log likelihood for a datapoint's real target."""
logq = np.array(np.matmul(x, w) + b)
if len(logq.shape) == 1:
logq = np.expand_dims(logq, 0)
if len(x.shape) == 1:
x = np.expand_dims(x, 0)
# Save number of classes
k = logq.shape[1]
# Calculate log z with trick mentioned above
a = max(logq.flatten())
logz = a + np.log(sum([np.power(np.e, logq[:,i] - a) for i in range(k)]))
logp = logq - logz
return logp
def sgd_iter(x_train, t_train, W, b):
    """Go over all datapoints randomly and adapt the weights and bias terms
    according to the stochastic gradient descent algorithm."""
indices = np.arange(0, len(x_train))
random.shuffle(indices)
eta = 1e-6
for i in indices:
logp_train, dL_dw, dL_db = logreg_gradient(x_train[i], [t_train[i]], W, b)
W = W - eta * -dL_dw
b = b - eta * -dL_db
predictions = []
for i in indices:
logp_train = predict(x_train[i], W, b)[:, t_train[i]].squeeze()
predictions.append(np.asscalar(logp_train))
return predictions, W, b
# Sanity check:
np.random.seed(1243)
w = np.zeros((28*28, 10))
b = np.zeros(10)
logp_train, W, b = sgd_iter(x_train[:5], t_train[:5], w, b)
from matplotlib import pyplot as plt
def test_sgd(x_train, t_train, w, b):
y_train = []
y_valid = []
k = 10
for i in range(k):
predictions_train, w, b = sgd_iter(x_train, t_train, w, b)
# Check how well the weights generalize for the validation set
predictions_valid = [predict(x, w, b)[:, t_valid[i]].squeeze()
for i, x in enumerate(x_valid)]
y_train.append(np.mean(predictions_train))
y_valid.append(np.mean(predictions_valid))
plt.plot(np.arange(1, k+1), y_train, label = "Training set")
plt.plot(np.arange(1, k+1), y_valid, label = "Validation set")
plt.legend()
plt.xlabel("Iteration")
plt.ylabel("Avg. log-probability")
plt.show()
return w, b
np.random.seed(1243)
w = np.zeros((28*28, 10))
b = np.zeros(10)
from time import time
start = time()
w,b = test_sgd(x_train, t_train, w, b)
end = time()
print(end-start)
def plot_digits(data, num_cols, targets=None, shape=(28,28)):
num_digits = data.shape[0]
num_rows = int(num_digits/num_cols)
for i in range(num_digits):
plt.subplot(num_rows, num_cols, i+1)
plt.imshow(data[i].reshape(shape), interpolation='none', cmap='Greys')
if targets is not None:
plt.title(int(targets[i]))
plt.colorbar()
plt.axis('off')
plt.tight_layout()
plt.show()
plot_digits(np.transpose(w), num_cols=5)
predictions_valid = np.array([predict(x, t_valid[i], w, b) for i, x in enumerate(x_valid)])
ordered = predictions_valid.argsort()
ind_worst = ordered[:8]
ind_best = ordered[-8:]
print("Lowest probability for the true class label")
plot_digits(x_valid[ind_worst], 4, t_valid[ind_worst])
print("Highest probability for the true class label")
plot_digits(x_valid[ind_best], 4, t_valid[ind_best])
# Write all helper functions here
def sigmoid(x):
return 1.0 / (1.0 + np.exp(-x))
def dsig(s):
return np.multiply(s, (1 - s))
def calc_h(x, v, a):
return sigmoid(np.matmul(v.T, x) + a).squeeze()
# Write training code here:
# Plot the conditional loglikelihoods for the train and validation dataset after every iteration.
# Plot the weights of the first layer.
def sgd_iter(x_train, t_train, V, W, a, b, eta=1e-2):
    """Go over all datapoints randomly and adapt the weights and bias terms
    according to the stochastic gradient descent algorithm."""
indices = np.arange(0, len(x_train))
random.shuffle(indices)
for i in indices:
h = calc_h(x_train[i], V, a)
_, dL_dw, dL_db = logreg_gradient(h, [t_train[i]], W, b)
dL_dh = np.matmul(W, dL_db)
dL_da = np.multiply(dL_dh, dsig(h))
dL_dv = np.multiply(np.matrix(x_train[i]).T, np.matrix(dL_da))
V = V + np.multiply(eta, dL_dv)
W = W + np.multiply(eta, dL_dw)
a = a + np.multiply(eta, dL_da)
b = b + np.multiply(eta, dL_db)
predictions = []
for i in indices:
logp_train = predict(calc_h(x_train[i], V, a), W, b)[:, t_train[i]].squeeze()
predictions.append(np.asscalar(logp_train))
return predictions, V, W, a, b
def test_sgd(x_train, t_train, V, W, a, b):
y_train = []
y_valid = []
k = 10
eta = 1e-2
for i in range(k):
        predictions_train, V, W, a, b = sgd_iter(x_train, t_train, V, W, a, b, eta)
# Check how well the weights generalize for the validation set
predictions_valid = [predict(calc_h(x, V, a), W, b)[:, t_valid[i]].squeeze()
for i, x in enumerate(x_valid)]
y_train.append(np.mean(predictions_train))
y_valid.append(np.mean(predictions_valid))
plt.plot(np.arange(1, k+1), y_train, label = "Training set")
plt.plot(np.arange(1, k+1), y_valid, label = "Validation set")
plt.legend()
plt.xlabel("Iteration")
plt.ylabel("Avg. log-probability")
plt.show()
return V, W, a, b
K = x_train.shape[1] # No. of input units
L = 20 # No. of hidden units
M = 10 # No. of output units
# Initialize weights gaussian
W = np.random.normal(loc =0, scale =1, size=(L, M))
b = np.random.uniform(size=(10,))
a = np.random.uniform(size=(L,))
V = np.random.normal(loc =0, scale =1, size=(784, L))
V, W, a, b = test_sgd(x_train, t_train, V, W, a, b)
plot_digits(V.T, num_cols=5)
predict_test = np.zeros(len(t_test))
# Fill predict_test with the predicted targets from your model, don't cheat :-).
# YOUR CODE HERE
def test_sgd_super(x_train, t_train, V, W, a, b):
y_train = []
y_valid = []
k = 25
eta = 0.1
for i in range(k):
        _, V, W, a, b = sgd_iter(x_train, t_train, V, W, a, b, eta)
        _, V, W, a, b = sgd_iter(x_valid, t_valid, V, W, a, b, eta)
for j, x in enumerate(x_test):
all_classes = predict(calc_h(x, V, a), W, b).tolist()[0]
predict_test[j] = all_classes.index(max(all_classes))
p = (10000 - np.sum(predict_test != t_test)) / 10000
print("Iteration {} done for eta {:.5f}, {:.2f}% correct.".format(i+1, eta, p * 100))
eta = eta * 0.75
return V, W, a, b
# init all values with gaussians
K = 10
L = 210
W = np.random.normal(loc =0, scale =1, size=(L, K))
b = np.random.uniform(size=(K,))
a = np.random.uniform(size=(L,))
V = np.random.normal(loc=0, scale=1, size=(28*28, L))
V, W, a, b = test_sgd_super(x_train, t_train, V, W, a, b)
assert predict_test.shape == t_test.shape
n_errors = np.sum(predict_test != t_test)
print('Test errors: %d' % n_errors)
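# A quick error breakdown (not part of the graded lab): a 10x10 confusion matrix
# built with plain numpy, rows = true digit, columns = predicted digit.
confusion = np.zeros((10, 10), dtype=int)
for true_label, predicted_label in zip(t_test, predict_test.astype(int)):
    confusion[true_label, predicted_label] += 1
print(confusion)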
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate random x and y coordinates, and observation values proportional to x * y.
Step2: Set up a cKDTree object and query all of the observations within "radius" of each grid point.
Step3: For grid 0, we will use Cressman to interpolate its value.
Step4: For grid 1, we will use barnes to interpolate its value.
Step5: Plot all of the affiliated information and interpolation values.
Step6: For each point, we will do a manual check of the interpolation values by doing a step-by-step calculation.
Step7: Step through the cressman calculations.
Step8: Now repeat for grid 1, except use barnes interpolation.
Step9: Step through barnes calculations.
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import cdist
from metpy.interpolate.geometry import dist_2
from metpy.interpolate.points import barnes_point, cressman_point
from metpy.interpolate.tools import calc_kappa
def draw_circle(ax, x, y, r, m, label):
th = np.linspace(0, 2 * np.pi, 100)
nx = x + r * np.cos(th)
ny = y + r * np.sin(th)
ax.plot(nx, ny, m, label=label)
np.random.seed(100)
pts = np.random.randint(0, 100, (10, 2))
xp = pts[:, 0]
yp = pts[:, 1]
zp = xp * xp / 1000
sim_gridx = [30, 60]
sim_gridy = [30, 60]
grid_points = np.array(list(zip(sim_gridx, sim_gridy)))
radius = 40
obs_tree = cKDTree(list(zip(xp, yp)))
indices = obs_tree.query_ball_point(grid_points, r=radius)
x1, y1 = obs_tree.data[indices[0]].T
cress_dist = dist_2(sim_gridx[0], sim_gridy[0], x1, y1)
cress_obs = zp[indices[0]]
cress_val = cressman_point(cress_dist, cress_obs, radius)
x2, y2 = obs_tree.data[indices[1]].T
barnes_dist = dist_2(sim_gridx[1], sim_gridy[1], x2, y2)
barnes_obs = zp[indices[1]]
ave_spacing = np.mean((cdist(list(zip(xp, yp)), list(zip(xp, yp)))))
kappa = calc_kappa(ave_spacing)
barnes_val = barnes_point(barnes_dist, barnes_obs, kappa)
fig, ax = plt.subplots(1, 1, figsize=(15, 10))
for i, zval in enumerate(zp):
ax.plot(pts[i, 0], pts[i, 1], '.')
ax.annotate(str(zval) + ' F', xy=(pts[i, 0] + 2, pts[i, 1]))
ax.plot(sim_gridx, sim_gridy, '+', markersize=10)
ax.plot(x1, y1, 'ko', fillstyle='none', markersize=10, label='grid 0 matches')
ax.plot(x2, y2, 'ks', fillstyle='none', markersize=10, label='grid 1 matches')
draw_circle(ax, sim_gridx[0], sim_gridy[0], m='k-', r=radius, label='grid 0 radius')
draw_circle(ax, sim_gridx[1], sim_gridy[1], m='b-', r=radius, label='grid 1 radius')
ax.annotate('grid 0: cressman {:.3f}'.format(cress_val), xy=(sim_gridx[0] + 2, sim_gridy[0]))
ax.annotate('grid 1: barnes {:.3f}'.format(barnes_val), xy=(sim_gridx[1] + 2, sim_gridy[1]))
ax.set_aspect('equal', 'datalim')
ax.legend()
fig, ax = plt.subplots(1, 1, figsize=(15, 10))
ax.annotate('grid 0: ({}, {})'.format(sim_gridx[0], sim_gridy[0]),
xy=(sim_gridx[0] + 2, sim_gridy[0]))
ax.plot(sim_gridx[0], sim_gridy[0], '+', markersize=10)
mx, my = obs_tree.data[indices[0]].T
mz = zp[indices[0]]
for x, y, z in zip(mx, my, mz):
d = np.sqrt((sim_gridx[0] - x)**2 + (y - sim_gridy[0])**2)
ax.plot([sim_gridx[0], x], [sim_gridy[0], y], '--')
xave = np.mean([sim_gridx[0], x])
yave = np.mean([sim_gridy[0], y])
ax.annotate('distance: {}'.format(d), xy=(xave, yave))
ax.annotate('({}, {}) : {} F'.format(x, y, z), xy=(x, y))
ax.set_xlim(0, 80)
ax.set_ylim(0, 80)
ax.set_aspect('equal', 'datalim')
dists = np.array([22.803508502, 7.21110255093, 31.304951685, 33.5410196625])
values = np.array([0.064, 1.156, 3.364, 0.225])
cres_weights = (radius * radius - dists * dists) / (radius * radius + dists * dists)
total_weights = np.sum(cres_weights)
proportion = cres_weights / total_weights
value = values * proportion
val = cressman_point(cress_dist, cress_obs, radius)
print('Manual cressman value for grid 1:\t', np.sum(value))
print('Metpy cressman value for grid 1:\t', val)
fig, ax = plt.subplots(1, 1, figsize=(15, 10))
ax.annotate('grid 1: ({}, {})'.format(sim_gridx[1], sim_gridy[1]),
xy=(sim_gridx[1] + 2, sim_gridy[1]))
ax.plot(sim_gridx[1], sim_gridy[1], '+', markersize=10)
mx, my = obs_tree.data[indices[1]].T
mz = zp[indices[1]]
for x, y, z in zip(mx, my, mz):
d = np.sqrt((sim_gridx[1] - x)**2 + (y - sim_gridy[1])**2)
ax.plot([sim_gridx[1], x], [sim_gridy[1], y], '--')
xave = np.mean([sim_gridx[1], x])
yave = np.mean([sim_gridy[1], y])
ax.annotate('distance: {}'.format(d), xy=(xave, yave))
ax.annotate('({}, {}) : {} F'.format(x, y, z), xy=(x, y))
ax.set_xlim(40, 80)
ax.set_ylim(40, 100)
ax.set_aspect('equal', 'datalim')
dists = np.array([9.21954445729, 22.4722050542, 27.892651362, 38.8329756779])
values = np.array([2.809, 6.241, 4.489, 2.704])
weights = np.exp(-dists**2 / kappa)
total_weights = np.sum(weights)
value = np.sum(values * (weights / total_weights))
print('Manual barnes value:\t', value)
print('Metpy barnes value:\t', barnes_point(barnes_dist, barnes_obs, kappa))
plt.show()
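# For a full grid rather than two hand-picked points, MetPy provides a
# convenience wrapper (a sketch; the parameter values here are only illustrative).
from metpy.interpolate import interpolate_to_grid
grid_x, grid_y, img = interpolate_to_grid(xp, yp, zp, interp_type='cressman',
                                          search_radius=radius, hres=5,
                                          minimum_neighbors=1)
print(img.shape)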
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Forecasting BTC Price with Fb Prophet
Step2: Let's predict percentage change!
Step3: MSE IS 0.000488913299898903
Step4: We do terribly at predicting percent change! However, we know that the percent change should be applied to the price of the previous day. Let's do that!
|
<ASSISTANT_TASK:>
Python Code:
from fbprophet import Prophet
from sklearn.metrics import r2_score
%run helper_functions.py
%autosave 120
%matplotlib inline
%run prophet_helper.py
%run prophet_baseline_btc.py
plt.style.use('fivethirtyeight')
plt.rcParams["figure.figsize"] = (15,10)
plt.rcParams["xtick.labelsize"] = 16
plt.rcParams["ytick.labelsize"] = 16
plt.rcParams["axes.labelsize"] = 20
plt.rcParams['legend.fontsize'] = 20
plt.style.use('fivethirtyeight')
pd.set_option('display.max_colwidth', -1)
import numpy as np
import math
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
df = unpickle_object("blockchain_info_df.pkl")
df.head()
df_btc = pd.DataFrame(df['mkt_price'])
true, pred = prophet_baseline_BTC(df_btc, 30, "mkt_price")
r2_score(true, pred) #we see that our baseline model just predicts 44% of the variation when predicting price
plt.plot(pred)
plt.plot(true)
plt.legend(["Prediction", 'Actual'], loc='upper left')
plt.xlabel("Prediction #")
plt.ylabel("Price")
plt.title("TS FB Prophet Baseline - Price Prediction");
df_btc_pct = df_btc.pct_change()
df_btc_pct.rename(columns={"mkt_price": "percent_change"}, inplace=True)
df_btc_pct = df_btc_pct.iloc[1:, :]
print(df_btc_pct.shape)
df_btc_pct.head()
true_pct, pred_pct = prophet_baseline_BTC(df_btc_pct, 30, "percent_change")
r2_score(true_pct, pred_pct)
plt.plot(pred_pct)
plt.plot(true_pct)
plt.legend(["Prediction", 'Actual'], loc='upper left')
plt.xlabel("Prediction #")
plt.ylabel("Price")
plt.title("TS FB Prophet Baseline - Price Prediction");
prices_to_be_multiplied = df.loc[pd.date_range(start="2017-01-23", end="2017-02-21"), "mkt_price"]
forecast_price_lst = []
for index, price in enumerate(prices_to_be_multiplied):
predicted_percent_change = 1+float(pred_pct[index])
forecasted_price = (predicted_percent_change)*price
forecast_price_lst.append(forecasted_price)
ground_truth_prices = df.loc[pd.date_range(start="2017-01-24", end="2017-02-22"), "mkt_price"]
ground_truth_prices = list(ground_truth_prices)
r2_score(ground_truth_prices, forecast_price_lst) # such an incredible result! This is what we have to beat with my nested TS model
plt.plot(forecast_price_lst)
plt.plot(ground_truth_prices)
plt.legend(["Prediction", 'Actual'], loc='upper left')
plt.xlabel("Prediction #")
plt.ylabel("Price")
plt.title("TS FB Prophet Baseline - Price Prediction");
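# Side-by-side error summary (not part of the original notebook): mean absolute
# error of the direct price forecast versus the percent-change-derived forecast.
# This assumes `true` and `pred` are the aligned 30-day sequences returned by the
# helper above.
direct_mae = np.mean(np.abs(np.asarray(true, dtype=float).flatten() - np.asarray(pred, dtype=float).flatten()))
derived_mae = np.mean(np.abs(np.asarray(ground_truth_prices, dtype=float) - np.asarray(forecast_price_lst, dtype=float)))
print("MAE, direct price forecast:", direct_mae)
print("MAE, percent-change-derived forecast:", derived_mae)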
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We begin by importing the usual libraries, setting up a very simple dataloader, and generating a toy dataset of spirals.
Step2: Now for our model.
Step3: And finally the training loop.
Step4: eqx.filter_value_and_grad will calculate the gradient with respect to the first argument (model). By default it will calculate gradients for all the floating-point JAX arrays and ignore everything else. For example the model parameters will be differentiated, whilst model.hidden_size is an integer and will be left alone. If you need finer control then these defaults can be adjusted; see [equinox.filter_grad][] and [equinox.filter_value_and_grad][].
|
<ASSISTANT_TASK:>
Python Code:
import math
import jax
import jax.lax as lax
import jax.numpy as jnp
import jax.random as jrandom
import optax # https://github.com/deepmind/optax
import equinox as eqx
def dataloader(arrays, batch_size, *, key):
dataset_size = arrays[0].shape[0]
assert all(array.shape[0] == dataset_size for array in arrays)
indices = jnp.arange(dataset_size)
while True:
perm = jrandom.permutation(key, indices)
(key,) = jrandom.split(key, 1)
start = 0
end = batch_size
while end < dataset_size:
batch_perm = perm[start:end]
yield tuple(array[batch_perm] for array in arrays)
start = end
end = start + batch_size
def get_data(dataset_size, *, key):
t = jnp.linspace(0, 2 * math.pi, 16)
offset = jrandom.uniform(key, (dataset_size, 1), minval=0, maxval=2 * math.pi)
x1 = jnp.sin(t + offset) / (1 + t)
x2 = jnp.cos(t + offset) / (1 + t)
y = jnp.ones((dataset_size, 1))
half_dataset_size = dataset_size // 2
x1 = x1.at[:half_dataset_size].multiply(-1)
y = y.at[:half_dataset_size].set(0)
x = jnp.stack([x1, x2], axis=-1)
return x, y
class RNN(eqx.Module):
hidden_size: int
cell: eqx.Module
linear: eqx.nn.Linear
bias: jnp.ndarray
def __init__(self, in_size, out_size, hidden_size, *, key):
ckey, lkey = jrandom.split(key)
self.hidden_size = hidden_size
self.cell = eqx.nn.GRUCell(in_size, hidden_size, key=ckey)
self.linear = eqx.nn.Linear(hidden_size, out_size, use_bias=False, key=lkey)
self.bias = jnp.zeros(out_size)
def __call__(self, input):
hidden = jnp.zeros((self.hidden_size,))
def f(carry, inp):
return self.cell(inp, carry), None
out, _ = lax.scan(f, hidden, input)
# sigmoid because we're performing binary classification
return jax.nn.sigmoid(self.linear(out) + self.bias)
def main(
dataset_size=10000,
batch_size=32,
learning_rate=3e-3,
steps=200,
hidden_size=16,
depth=1,
seed=5678,
):
data_key, loader_key, model_key = jrandom.split(jrandom.PRNGKey(seed), 3)
xs, ys = get_data(dataset_size, key=data_key)
iter_data = dataloader((xs, ys), batch_size, key=loader_key)
model = RNN(in_size=2, out_size=1, hidden_size=hidden_size, key=model_key)
@eqx.filter_value_and_grad
def compute_loss(model, x, y):
pred_y = jax.vmap(model)(x)
# Trains with respect to binary cross-entropy
return -jnp.mean(y * jnp.log(pred_y) + (1 - y) * jnp.log(1 - pred_y))
# Important for efficiency whenever you use JAX: wrap everything into a single JIT
# region.
@eqx.filter_jit
def make_step(model, x, y, opt_state):
loss, grads = compute_loss(model, x, y)
updates, opt_state = optim.update(grads, opt_state)
model = eqx.apply_updates(model, updates)
return loss, model, opt_state
optim = optax.adam(learning_rate)
opt_state = optim.init(model)
for step, (x, y) in zip(range(steps), iter_data):
loss, model, opt_state = make_step(model, x, y, opt_state)
loss = loss.item()
print(f"step={step}, loss={loss}")
pred_ys = jax.vmap(model)(xs)
num_correct = jnp.sum((pred_ys > 0.5) == ys)
final_accuracy = (num_correct / dataset_size).item()
print(f"final_accuracy={final_accuracy}")
main() # All right, let's run the code.
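# A small usage sketch (not part of the original example): an untrained RNN is
# enough to demonstrate the call signature on freshly generated spirals.
demo_model = RNN(in_size=2, out_size=1, hidden_size=16, key=jrandom.PRNGKey(0))
demo_x, demo_y = get_data(dataset_size=2, key=jrandom.PRNGKey(1))
print(jax.vmap(demo_model)(demo_x), demo_y)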
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TensorFlow Data Validation
Step2: Install Data Validation packages
Step3: Import TensorFlow and reload updated packages
Step4: Check the versions of TensorFlow and the Data Validation before proceeding.
Step5: Load the dataset
Step6: Compute and visualize statistics
Step7: Now let's use tfdv.visualize_statistics, which uses Facets to create a succinct visualization of our training data
Step8: <!-- <img class="tfo-display-only-on-site" src="images/statistics.png"/> -->
Step9: Check evaluation data for errors
Step10: <!-- <img class="tfo-display-only-on-site" src="images/statistics_eval.png"/> -->
Step11: Fix evaluation anomalies in the schema
Step12: Hey, look at that! We verified that the training and evaluation data are now consistent! Thanks TFDV ;)
Step13: We'll deal with the tips feature below. We also have an INT value in our trip seconds, where our schema expected a FLOAT. By making us aware of that difference, TFDV helps uncover inconsistencies in the way the data is generated for training and serving. It's very easy to be unaware of problems like that until model performance suffers, sometimes catastrophically. It may or may not be a significant issue, but in any case this should be cause for further investigation.
Step14: Now we just have the tips feature (which is our label) showing up as an anomaly ('Column dropped'). Of course we don't expect to have labels in our serving data, so let's tell TFDV to ignore that.
Step15: Check for drift and skew
Step16: In this example we do see some drift, but it is well below the threshold that we've set.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
try:
import colab
!pip install --upgrade pip
except:
pass
print('Installing TensorFlow Data Validation')
!pip install --upgrade 'tensorflow_data_validation[visualization]<2'
import pkg_resources
import importlib
importlib.reload(pkg_resources)
import tensorflow as tf
import tensorflow_data_validation as tfdv
print('TF version:', tf.__version__)
print('TFDV version:', tfdv.version.__version__)
import os
import tempfile, urllib, zipfile
# Set up some globals for our file paths
BASE_DIR = tempfile.mkdtemp()
DATA_DIR = os.path.join(BASE_DIR, 'data')
OUTPUT_DIR = os.path.join(BASE_DIR, 'chicago_taxi_output')
TRAIN_DATA = os.path.join(DATA_DIR, 'train', 'data.csv')
EVAL_DATA = os.path.join(DATA_DIR, 'eval', 'data.csv')
SERVING_DATA = os.path.join(DATA_DIR, 'serving', 'data.csv')
# Download the zip file from GCP and unzip it
zip, headers = urllib.request.urlretrieve('https://storage.googleapis.com/artifacts.tfx-oss-public.appspot.com/datasets/chicago_data.zip')
zipfile.ZipFile(zip).extractall(BASE_DIR)
zipfile.ZipFile(zip).close()
print("Here's what we downloaded:")
!ls -R {os.path.join(BASE_DIR, 'data')}
train_stats = tfdv.generate_statistics_from_csv(data_location=TRAIN_DATA)
# docs-infra: no-execute
tfdv.visualize_statistics(train_stats)
schema = tfdv.infer_schema(statistics=train_stats)
tfdv.display_schema(schema=schema)
# Compute stats for evaluation data
eval_stats = tfdv.generate_statistics_from_csv(data_location=EVAL_DATA)
# docs-infra: no-execute
# Compare evaluation data with training data
tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,
lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')
# Check eval data for errors by validating the eval data stats using the previously inferred schema.
anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema)
tfdv.display_anomalies(anomalies)
# Relax the minimum fraction of values that must come from the domain for feature company.
company = tfdv.get_feature(schema, 'company')
company.distribution_constraints.min_domain_mass = 0.9
# Add new value to the domain of feature payment_type.
payment_type_domain = tfdv.get_domain(schema, 'payment_type')
payment_type_domain.value.append('Prcard')
# Validate eval stats after updating the schema
updated_anomalies = tfdv.validate_statistics(eval_stats, schema)
tfdv.display_anomalies(updated_anomalies)
serving_stats = tfdv.generate_statistics_from_csv(SERVING_DATA)
serving_anomalies = tfdv.validate_statistics(serving_stats, schema)
tfdv.display_anomalies(serving_anomalies)
options = tfdv.StatsOptions(schema=schema, infer_type_from_schema=True)
serving_stats = tfdv.generate_statistics_from_csv(SERVING_DATA, stats_options=options)
serving_anomalies = tfdv.validate_statistics(serving_stats, schema)
tfdv.display_anomalies(serving_anomalies)
# All features are by default in both TRAINING and SERVING environments.
schema.default_environment.append('TRAINING')
schema.default_environment.append('SERVING')
# Specify that 'tips' feature is not in SERVING environment.
tfdv.get_feature(schema, 'tips').not_in_environment.append('SERVING')
serving_anomalies_with_env = tfdv.validate_statistics(
serving_stats, schema, environment='SERVING')
tfdv.display_anomalies(serving_anomalies_with_env)
# Add skew comparator for 'payment_type' feature.
payment_type = tfdv.get_feature(schema, 'payment_type')
payment_type.skew_comparator.infinity_norm.threshold = 0.01
# Add drift comparator for 'company' feature.
company=tfdv.get_feature(schema, 'company')
company.drift_comparator.infinity_norm.threshold = 0.001
skew_anomalies = tfdv.validate_statistics(train_stats, schema,
previous_statistics=eval_stats,
serving_statistics=serving_stats)
tfdv.display_anomalies(skew_anomalies)
from tensorflow.python.lib.io import file_io
from google.protobuf import text_format
file_io.recursive_create_dir(OUTPUT_DIR)
schema_file = os.path.join(OUTPUT_DIR, 'schema.pbtxt')
tfdv.write_schema_text(schema, schema_file)
!cat {schema_file}
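# Round-trip sketch: the schema frozen above can be read back in a later
# pipeline stage with the matching loader.
reloaded_schema = tfdv.load_schema_text(schema_file)
tfdv.display_schema(schema=reloaded_schema)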
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next we are going to load our data and generate random negative data, i.e. gibberish sequences. The clean data files have negatives created from the data sets pulled from phosphoELM and dbPTM.
Step2: Next we vectorize the sequences; here we use the sequence vector.
Step3: Now we can apply a data balancing function; here we use ADASYN, which generates synthetic examples of the minority (in this case positive) class.
Step4: Next we can check against the benchmarks pulled from dbPTM.
Step5: Want to explore the data some more? You can easily generate PCA and t-SNE diagrams of the training set.
|
<ASSISTANT_TASK:>
Python Code:
from pred import Predictor
from pred import sequence_vector
y = Predictor()
y.load_data(file="Data/Training/clean_Y.csv")
y.process_data(vector_function="sequence", amino_acid="Y", imbalance_function="ADASYN", random_data=0)
y.supervised_training("mlp_adam")
y.benchmark("Data/Benchmarks/phos.csv", "Y")
y.generate_pca()
y.generate_tsne()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data for a mixture of 4 E. coli strains in their real proportions. Reads were aligned to a reference that is not part of the data.
Step2: Low-coverage samples
Step3: As we can see, not much has changed, so we leave them alone.
Step4: We sweep the percentiles [25, 20, 15, 10] and allow from 0 to 3 bad samples.
|
<ASSISTANT_TASK:>
Python Code:
# Imports assumed by this cell (numpy / matplotlib / scikit-learn); the plotting
# helper draw_PCA used below is assumed to be defined in an earlier cell.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

def normalize(M):
M_norm = np.full_like(M, 0)
for i in range(np.shape(M)[0]):
rev = 1 - M[i, :]
if np.dot(M[i, :], M[i, :]) > np.dot(rev, rev):
M_norm[i, :] = rev
else:
M_norm[i, :] = M[i, :]
return M_norm
r = np.genfromtxt("LICHeE_4ecoli_without_ref/matrices/R_all", dtype=int, delimiter=' ')
x = np.genfromtxt("LICHeE_4ecoli_without_ref/matrices/X_all", dtype=int, delimiter=' ')
print("%s sites" % len(r))
Ncut = 5
print("Delete zero and almost zero profiles:")
good_ind = [i for i in range(np.shape(x)[0])
if not ((np.abs(r[i, :] - x[i, :]) <= Ncut).all() or (x[i, :] <= Ncut).all())]
print(len(good_ind), "remained")
x = x[good_ind, :]
r = r[good_ind, :]
f = normalize(np.divide(x, r))
draw_PCA(f)
print(np.median(r, axis = 0))
r_2 = np.delete(r, [2, 6], axis=1)
x_2 = np.delete(x, [2, 6], axis=1)
f_2 = normalize(np.divide(x_2, r_2))
draw_PCA(f_2)
def filter_by_coverage(cur_r, bad_percent, bad_samples):
def filter_row(row):
num_of_samples = len(row)
valid = np.sum(np.array(([(min_coverage < row) & (row < max_coverage)])))
return num_of_samples - valid <= bad_samples
min_coverage = np.percentile(cur_r, bad_percent, axis=0)
max_coverage = np.percentile(cur_r, 100-bad_percent, axis=0)
good_coverage = np.array([filter_row(row) for row in cur_r])
return good_coverage
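# One-off usage sketch: with 25th/75th-percentile coverage bounds and at most
# one badly covered sample tolerated, count how many sites pass the filter.
mask_example = filter_by_coverage(r, 25, 1)
print(mask_example.sum(), "of", len(mask_example), "sites pass")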
f_pca = PCA(n_components=2).fit(f).transform(f)
percentiles = [25, 20, 15, 10]
plt.figure(figsize=(15, 15))
for i in range(4):
for j in range(4):
print(i, j, end="-")
plt.subplot(4, 4, i * 4 + j + 1)
draw_PCA(f, filter_by_coverage(r, percentiles[i], j), f_pca)
plt.tight_layout();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading and ROI selection
Step2: DFT
Step3: Expansion by 4 without interpolation
Step4: DFT of the expansion without interpolation
Step5: Filtering by mean filtering - nearest neighbor
Step6: Interpretation of the mean filtering in the frequency domain
Step7: Filtering by pyramidal kernel, linear interpolation
Step8: Interpretation of the pyramid filtering in the frequency domain
Step9: Using an ideal filter
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import sys,os
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
fin = mpimg.imread('../data/lenina.pgm')
nb = ia.nbshow(3)
#nb.nbshow(fin,'%s'% (fin.shape,))
froi = fin[137:137+64,157:157+64]
nb.nbshow(froi,'%s'% (froi.shape,))
nb.nbshow()
print(froi.shape)
import numpy as np
fd = froi.astype(float)
F = np.fft.fft2(fd)
nb = ia.nbshow(2)
nb.nbshow(froi)
nb.nbshow(ia.dftview(F))
nb.nbshow()
fx4 = np.zeros(4*np.array(froi.shape), 'uint8')
fx4[::4,::4] = froi
nb.nbshow(froi)
nb.nbshow(fx4)
nb.nbshow()
fdx4 = fx4.astype(float)
Fx4 = np.fft.fft2(fdx4)
nb.nbshow(fx4)
nb.nbshow(ia.dftview(Fx4))
nb.nbshow()
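# Sketch of the property behind the picture above: zero-insertion upsampling in
# the spatial domain replicates the original spectrum periodically, so the
# top-left 64x64 block of Fx4 should match F up to round-off.
print(np.allclose(Fx4[:64, :64], F))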
k = np.ones((4,4))
fx4nn = ia.pconv(fdx4, k)
nb.nbshow(fx4)
nb.nbshow(fx4nn.astype(np.uint8))
nb.nbshow()
print(fx4nn[:9,:9])
kzero = np.zeros(fx4.shape)
kzero[0:4,0:4] = k
K = np.fft.fft2(kzero)
nb.nbshow(ia.dftview(K))
Fx4nn = K * Fx4
nb.nbshow(ia.dftview(Fx4nn))
nb.nbshow()
nb.nbshow(ia.dftview(Fx4))
nb.nbshow(ia.dftview(Fx4nn))
nb.nbshow()
klinear = np.array([[1,2,3,4,3,2,1]])/4.
k2dlinear = klinear.T.dot(klinear)
print('k2dlinear=\n',k2dlinear)
fx4li = ia.pconv(fdx4, k2dlinear)
nb.nbshow(fx4)
nb.nbshow(fx4li.astype(np.uint8))
nb.nbshow()
klizero = np.zeros(fx4.shape)
klizero[0:7,0:7] = k2dlinear
Klinear = np.fft.fft2(klizero)
nb.nbshow(ia.dftview(Klinear))
Fx4li = Klinear * Fx4
nb.nbshow(ia.dftview(Fx4li))
nb.nbshow()
nb.nbshow(ia.dftview(Fx4))
nb.nbshow(ia.dftview(Fx4li))
nb.nbshow()
FI = np.zeros_like(Fx4)
FI[:FI.shape[0]//8,:FI.shape[1]//8] = 1
FI[-FI.shape[0]//8:,:FI.shape[1]//8] = 1
FI[:FI.shape[0]//8,-FI.shape[1]//8:] = 1
FI[-FI.shape[0]//8:,-FI.shape[1]//8:] = 1
Fx4ideal = Fx4 * FI
nb.nbshow(ia.dftview(Fx4))
nb.nbshow(ia.dftview(Fx4ideal))
nb.nbshow()
nb = ia.nbshow(2)
fx4ideal_c = np.fft.ifft2(Fx4ideal)
print(fx4ideal_c.imag.sum())
fx4ideal= 16*fx4ideal_c.real
print(fx4ideal.min(),fx4ideal.max())
print(froi.min(),froi.max())
nb.nbshow(ia.normalize(fx4ideal))
nb.nbshow()
nb = ia.nbshow(3)
#nb.nbshow(froi)
nb.nbshow(fx4li.astype(np.uint8))
nb.nbshow(ia.normalize(fx4ideal))
nb.nbshow()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship
Step3: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcome[i].
Step5: Tip
Step6: Question 1
Step7: Answer
Step9: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction
Step10: Question 2
Step11: Answer
Step13: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction
Step14: Question 3
Step15: Answer
Step17: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Step18: Question 4
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
# RMS Titanic data visualization code
from titanic_visualizations import survival_stats
from IPython.display import display
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
def accuracy_score(truth, pred):
Returns accuracy score for input truth and predictions.
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print accuracy_score(outcomes[:5], predictions)
def predictions_0(data):
Model with no features. Always predicts a passenger did not survive.
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
print accuracy_score(outcomes, predictions)
survival_stats(data, outcomes, 'Sex')
def predictions_1(data):
Model with one feature:
- Predict a passenger survived if they are female.
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
predictions.append( passenger['Sex'] == 'female' )
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
print accuracy_score(outcomes, predictions)
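# Quick sanity check of the gender heuristic (a side calculation): survival
# rate by sex, straight from the full data with a plain groupby.
print full_data.groupby('Sex')['Survived'].mean()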
survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
def predictions_2(data):
Model with two features:
- Predict a passenger survived if they are female.
- Predict a passenger survived if they are male and younger than 10.
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
predictions.append( ( passenger['Sex'] == 'female' or passenger['Age'] < 10 ) )
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
print accuracy_score(outcomes, predictions)
survival_stats(data, outcomes, "Age", ["Sex == 'female'", "Pclass == 3" ] )
def predictions_3(data):
Model with multiple features. Makes a prediction with an accuracy of at least 80%.
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
predictions.append( (passenger['Sex'] == 'female' and passenger['Pclass']<3 ) or
(passenger['Sex'] == 'female' and passenger['Pclass']==3 and passenger['SibSp']<2.5 and passenger['Age']< 40 ) or
(passenger['Sex'] == 'male' and passenger['Age'] < 10 and passenger['SibSp']<2.5 ) )
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
print accuracy_score(outcomes, predictions)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Variables
Step2: Independent variable
Step3: The above visualizations clearly show that the response time for the congruent words condition is much lower in comparison to the incongruent words condition.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
%pylab inline
stroop_data = pd.read_csv('./stroopdata.csv')
stroop_data.head()
stroop_data.describe()
print "Median:\n", stroop_data.median()
print "\nVariance:\n", stroop_data.var()
fig, axs = plt.subplots(figsize=(18, 5), ncols = 3, sharey=True)
plt.figure(figsize=(8, 6))
sns.set_palette("Set2")
# Fig 1 - Congruent Words - Response Time
sns.boxplot(y="Congruent", data=stroop_data,
ax=axs[0]).set_title("Fig 1: Congruent Words - Response Time (in seconds)")
# Fig 2 - Incongruent Words - Response Time
sns.boxplot(y="Incongruent", data=stroop_data, color="coral",
ax=axs[1]).set_title("Fig 2: Incongruent Words - Response Time (in seconds)")
# Fig 3 - Congruence vs. Incongruence
sns.regplot(x="Congruent", y="Incongruent", data=stroop_data, color="m", fit_reg=False,
ax=axs[2]).set_title("Fig 3: Congruence vs. Incongruence (in seconds)")
# Dependent t-test for paired samples
stats.ttest_rel(stroop_data["Congruent"], stroop_data["Incongruent"])
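# Hand-computed check (a sketch of the same formula the dependent t-test uses):
# t = mean(d) / (std(d, ddof=1) / sqrt(n)) on the paired differences.
diff = stroop_data['Congruent'] - stroop_data['Incongruent']
t_by_hand = diff.mean() / (diff.std(ddof=1) / np.sqrt(len(diff)))
print "t statistic computed by hand:", t_by_hand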
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Input Parameter
Step2: Preparation
Step3: Create space and time vector
Step4: Source signal - Ricker-wavelet
Step5: Time stepping
Step6: Save seismograms
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import time as tm
import matplotlib.pyplot as plt
# Discretization
c1=20 # Number of grid points per dominant wavelength
c2=0.5 # CFL-Number
nx=2000 # Number of grid points
T=10 # Total propagation time
# Source Signal
f0= 10 # Center frequency Ricker-wavelet
q0= 1 # Maximum amplitude Ricker-Wavelet
xscr = 100 # Source position (in grid points)
# Receiver
xrec1=400 # Position Reciever 1 (in grid points)
xrec2=800 # Position Reciever 2 (in grid points)
xrec3=1800 # Position Reciever 3 (in grid points)
# Velocity and density
modell_v = np.hstack((1000*np.ones((int(nx/2))),1500*np.ones((int(nx/2)))))
rho=np.hstack((1*np.ones((int(nx/2))),1.5*np.ones((int(nx/2)))))
# Init wavefields
vx=np.zeros(nx)
p=np.zeros(nx)
vx_x=np.zeros(nx)
p_x=np.zeros(nx)
vx_x2=np.zeros(nx)
p_x2=np.zeros(nx)
vx_x3=np.zeros(nx)
p_x3=np.zeros(nx)
# Calculate first Lame-Paramter
l=rho * modell_v * modell_v
cmin=min(modell_v.flatten()) # Lowest P-wave velocity
cmax=max(modell_v.flatten()) # Highest P-wave velocity
fmax=2*f0 # Maximum frequency
dx=cmin/(fmax*c1) # Spatial discretization (in m)
dt=dx/(cmax)*c2 # Temporal discretization (in s)
lampda_min=cmin/fmax # Smallest wavelength
# Output model parameter:
print("Model size: x:",dx*nx,"in m")
print("Temporal discretization: ",dt," s")
print("Spatial discretization: ",dx," m")
print("Number of gridpoints per minimum wavelength: ",lampda_min/dx)
x=np.arange(0,dx*nx,dx) # Space vector
t=np.arange(0,T,dt) # Time vector
nt=np.size(t) # Number of time steps
# Plotting model
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.subplots_adjust(wspace=0.4,right=1.6)
ax1.plot(x,modell_v)
ax1.set_ylabel('VP in m/s')
ax1.set_xlabel('Depth in m')
ax1.set_title('P-wave velocity')
ax2.plot(x,rho)
ax2.set_ylabel('Density in g/cm^3')
ax2.set_xlabel('Depth in m')
ax2.set_title('Density');
tau=np.pi*f0*(t-1.5/f0)
q=q0*(1.0-2.0*tau**2.0)*np.exp(-tau**2)
# Plotting source signal
plt.figure(3)
plt.plot(t,q)
plt.title('Source signal Ricker-Wavelet')
plt.ylabel('Amplitude')
plt.xlabel('Time in s')
plt.draw()
# Init Seismograms
Seismogramm=np.zeros((3,nt)); # Three seismograms
# Calculation of some coefficients
i_dx=1.0/(dx)
print("Starting time stepping...")
## Time stepping
for n in range(2,nt):
# Inject source wavelet
p[xscr]=p[xscr]+q[n]
# Update velocity
for kx in range(5,nx-4):
# Calculating spatial derivative
p_x[kx]=i_dx*9.0/8.0*(p[kx+1]-p[kx])-i_dx*1.0/24.0*(p[kx+2]-p[kx-1])
# Update velocity
vx[kx]=vx[kx]-dt/rho[kx]*(25.0/24.0*p_x[kx]-1.0/12.0*p_x2[kx]+1.0/24.0*p_x3[kx])
    # save the old spatial derivatives for the Adams-Bashforth method
np.copyto(p_x3,p_x2)
np.copyto(p_x2,p_x)
# Update pressure
for kx in range(5,nx-4):
# Calculating spatial derivative
vx_x[kx]= i_dx*9.0/8.0*(vx[kx]-vx[kx-1])-i_dx*1.0/24.0*(vx[kx+1]-vx[kx-2])
# Update pressure
p[kx]=p[kx]-l[kx]*dt*(25.0/24.0*vx_x[kx]-1.0/12.0*vx_x2[kx]+1.0/24.0*vx_x3[kx])
    # save the old spatial derivatives for the Adams-Bashforth method
np.copyto(vx_x3,vx_x2)
np.copyto(vx_x2,vx_x)
# Save seismograms
Seismogramm[0,n]=p[xrec1]
Seismogramm[1,n]=p[xrec2]
Seismogramm[2,n]=p[xrec3]
print("Finished time stepping!")
## Save seismograms
np.save("Seismograms/FD_1D_DX4_DT3_ABS",Seismogramm)
## Plot seismograms
fig, (ax1, ax2, ax3) = plt.subplots(3, 1)
fig.subplots_adjust(hspace=0.4,right=1.6, top = 2 )
ax1.plot(t,Seismogramm[0,:])
ax1.set_title('Seismogram 1')
ax1.set_ylabel('Amplitude')
ax1.set_xlabel('Time in s')
ax1.set_xlim(0, T)
ax2.plot(t,Seismogramm[1,:])
ax2.set_title('Seismogram 2')
ax2.set_ylabel('Amplitude')
ax2.set_xlabel('Time in s')
ax2.set_xlim(0, T)
ax3.plot(t,Seismogramm[2,:])
ax3.set_title('Seismogram 3')
ax3.set_ylabel('Amplitude')
ax3.set_xlabel('Time in s')
ax3.set_xlim(0, T);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: It is not possible to print the value of "x" because it only exists inside the function. To bring "x" up to a higher scope, that is, to make it exist outside the function, its value has to be returned with return() inside the function definition.
Step2: When a function runs it can access a variable from a higher scope, but only for reading: it cannot reassign it.
Step3: As shown, the variable can be read and used, but it cannot be reassigned (a small sketch after the last example shows how the global keyword lifts this restriction).
|
<ASSISTANT_TASK:>
Python Code:
def fun(a,b):
x = a + b
print(fun(1,2))
def fun(a,b):
x = a+b
return(x)
print(fun(1,2))
def fun2():
print(a)
a=2
fun2()
def fun3():
x = a+2
return(x)
fun3()
def fun4():
a = a+2
return(a)
fun4()
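# A minimal sketch of the fix for the error above: declaring the name global
# inside the function lets it rebind the outer variable instead of raising
# UnboundLocalError.
def fun5():
    global a
    a = a + 2
    return(a)
fun5()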
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dataframe
Step2: XML
|
<ASSISTANT_TASK:>
Python Code:
# Standard imports used in this cell; the project-specific helpers
# (PROJECT_ROOT, clinvar_path, filter_xml, ClinVarDataset) are assumed to come
# from earlier cells of the notebook.
import os
import numpy as np
import pandas as pd
from xml.etree import ElementTree

complex_xml = os.path.join(PROJECT_ROOT, 'complex-events.xml.gz')
# get just "complex events"
# Q: what's complex? -- complex == no full coordinates
def complex_measures(x):
if x.measure:
return (
# smattering of all non SNV variants
(x.measure.variant_type.lower() not in {'single nucleotide variant'} and np.random.random() < 0.01)
# be sure to get the rare ones
or (x.measure.variant_type.lower() in {'tandem duplication', 'fusion', 'complex', 'translocation', 'inversion'})
)
return False
filter_xml(
input_xml=clinvar_path,
output_xml=complex_xml,
filter_fct=complex_measures,
)
dataset = ClinVarDataset(complex_xml)
def get_measures(dataset):
for r in dataset:
if r.measure:
yield r.measure
for m in get_measures(dataset):
break
dir(m)
# just all the properties
props = [
'all_names',
'clinvar_record',
# 'explicit_insertion_length',
# 'has_complete_coordinates',
'hgnc_ids',
'hgvs',
'is_repeat_expansion_variant',
# 'measure_xml',
'microsatellite_category',
'nsv_id',
'preferred_gene_symbols',
# 'preferred_name',
'preferred_or_other_name',
# 'pubmed_refs',
'rs_id',
# 'sequence_location_helper',
# 'toplevel_refseq_hgvs',
'variant_type',
'chr',
'vcf_alt',
# 'vcf_full_coords',
'vcf_pos',
'vcf_ref'
]
# replaces empty list with None
measures = [[getattr(v, p) if getattr(v, p) != [] else None for p in props] for v in get_measures(dataset)]
df = pd.DataFrame(measures, columns=props)
df.count()
set(df['variant_type'])
df[df.variant_type == 'Translocation']
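# Quick tally sketch: how the sampled "complex" measures break down by type.
df['variant_type'].value_counts()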
def get_measure_xml_for_rcv(dataset, rcv):
for r in dataset:
if r.accession == rcv:
return r.measure.measure_xml
# pretty print xml
def pprint(x):
print(ElementTree.tostring(x, encoding='unicode'))
def print_measure_xml_for_rcv(dataset, rcv):
x = get_measure_xml_for_rcv(dataset, rcv)
pprint(x)
xml = get_measure_xml_for_rcv(dataset, 'RCV001372309')
pprint(xml)
xml2 = get_measure_xml_for_rcv(dataset, 'RCV001255994')
pprint(xml2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Compute average sentiment score per week
Step2: read bitcoin price data
Step3: add news volume data
Step4: AlchemyAPI sentiment score
Step5: Daily score analysis
Step6: Prediction
Step7: Daily prediction
|
<ASSISTANT_TASK:>
Python Code:
import datetime
import pandas as pd

time_format = "%Y-%m-%dT%H:%M:%S"  # defined before its first use in the parser below
score_data = pd.read_csv("../data/indico_nyt_bitcoin.csv", index_col='time',
parse_dates=[0], date_parser=lambda x: datetime.datetime.strptime(x, time_format))
score_data.head()
weekly_score = score_data.resample('w', how='mean').loc['2013':].fillna(0.5)
weekly_score.head()
weekly_score.plot()
time_format = "%Y-%m-%dT%H:%M:%S"
data = pd.read_csv("../data/price.csv", names=['time', 'price'], index_col='time',
parse_dates=[0], date_parser=lambda x: datetime.datetime.strptime(x[:-6], time_format))
bpi = data.resample('w', how='ohlc')
bpi.index.name = 'time'
bpi = pd.DataFrame(bpi['price']['close']).loc['2013':]
bpi.head()
trend_bpi = pd.merge(weekly_score, bpi, how='right', left_index=True, right_index=True)
trend_bpi.columns = ['sentiment', 'close_price']
trend_bpi.head()
trend_bpi.plot(secondary_y='close_price')
trend_bpi.corr()
time_format = "%Y-%m-%dT%H:%M:%SZ"
alchemy_data = pd.read_csv("../data/alchemy_nyt_bitcoin.csv"
, index_col='time',
parse_dates=[0], date_parser=lambda x: datetime.datetime.strptime(x, time_format))
alchemy_data.head()
alchemy_data.alchemy_score.plot(kind='hist')
alchemy_data.describe()
weekly_alchemy = alchemy_data.resample('w', how='mean').loc['2013':].fillna(0.0)
weekly_alchemy.head()
weekly_alchemy.plot(kind='hist')
weekly_alchemy.describe()
alchemy_bpi = pd.merge(weekly_alchemy, bpi, how='right', left_index=True, right_index=True)
alchemy_bpi.columns = ['sentiment', 'close_price']
alchemy_bpi.head()
alchemy_bpi.plot(secondary_y='close_price')
merged_data = pd.merge(alchemy_bpi, weekly_score, how='right', left_index=True, right_index=True)
merged_data.head()
merged_data.plot(secondary_y='close_price')
merged_data.corr()
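# Lagged-correlation sketch: line up this week's sentiment with NEXT week's
# close before correlating (shift(-1) pulls the following week's value up).
print alchemy_bpi['sentiment'].corr(alchemy_bpi['close_price'].shift(-1))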
daily_alchemy = alchemy_data.resample('d', how='mean').loc['2013':].fillna(0.0)
daily_alchemy.head()
daily_price = data.resample('d', how='ohlc')
daily_price.index.name = 'time'
daily_price = pd.DataFrame(daily_price['price']['close']).loc['2013':]
daily_price.head()
daily_data = pd.merge(daily_price, daily_alchemy, how='right', left_index=True, right_index=True)
daily_data.head()
daily_data.plot(secondary_y='close')
alchemy_bpi['avg_sentiment'] = pd.rolling_mean(alchemy_bpi.sentiment, 1)
alchemy_bpi.head()
alchemy_bpi['avg_shifted'] = alchemy_bpi['avg_sentiment'].shift(1)
alchemy_bpi.head()
alchemy_bpi['order']= 'NA'
alchemy_bpi['diff'] = alchemy_bpi.sentiment - alchemy_bpi.avg_shifted
alchemy_bpi.head()
## sentiment rose versus last week => predict this week's close is below last week's
alchemy_bpi.loc[alchemy_bpi['diff'] > 0,'order'] = False
## sentiment fell versus last week => predict this week's close is above last week's
alchemy_bpi.loc[alchemy_bpi['diff'] < 0,'order'] = True
alchemy_bpi.head()
alchemy_bpi['trend'] = alchemy_bpi.close_price > alchemy_bpi.close_price.shift(1)
alchemy_bpi.head()
total_predict = alchemy_bpi[alchemy_bpi.order!='NA'].order.count()
total_correct = alchemy_bpi[alchemy_bpi.order==alchemy_bpi.trend].order.count()
print "TP+TN: %f (%d/%d)" % (total_correct/float(total_predict), total_correct, total_predict)
alchemy_bpi.corr()
daily_data = pd.merge(daily_price, daily_alchemy, how='right', left_index=True, right_index=True)
daily_data['avg_sentiment'] = pd.rolling_mean(daily_data.alchemy_score, 1)
daily_data.head()
daily_data['avg_shifted'] = daily_data['avg_sentiment'].shift(3)
daily_data.head()
daily_data['order']= 'NA'
daily_data['diff'] = daily_data.alchemy_score - daily_data.avg_shifted
daily_data.head()
## sentiment rose versus 3 days ago => predict today's close is above yesterday's
daily_data.loc[daily_data['diff'] > 0,'order'] = True
## sentiment fell versus 3 days ago => predict today's close is below yesterday's
daily_data.loc[daily_data['diff'] < 0,'order'] = False
daily_data.head()
daily_data['trend'] = daily_data.close > daily_data.close.shift(1)
daily_data.head()
total_predict = daily_data[daily_data.order!='NA'].order.count()
total_correct = daily_data[daily_data.order==daily_data.trend].order.count()
print "TP+TN: %f (%d/%d)" % (total_correct/float(total_predict), total_correct, total_predict)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Lynx Motion AL5D
Step8: The DH parameters are
Step11: Inverse Kinematics
Step12: Phasing
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import print_function
from __future__ import division
import numpy as np
from matplotlib import pyplot as plt
from sympy import symbols, sin, cos, simplify, trigsimp, pi
from math import radians as d2r
from math import degrees as r2d
from math import atan2, sqrt, acos, fabs
class mDH(object):
This uses the modified DH parameters
see Craig, eqn (3.6)
def __init__(self):
pass
def fk(self, params):
t = np.eye(4)
for p in params:
t = t.dot(self.makeT(*p))
return t
def makeT(self, a, alpha, d, theta):
        return np.array([  # modified DH (Craig) transform
[ cos(theta), -sin(theta), 0, a],
[sin(theta)*cos(alpha), cos(theta)*cos(alpha), -sin(alpha), -d*sin(alpha)],
[sin(theta)*sin(alpha), cos(theta)*sin(alpha), cos(alpha), d*cos(alpha)],
[ 0, 0, 0, 1]
])
def eval(f):
This allows you to simplify the trigonomic mess that kinematics can
create and also substitute in some inputs in the process
c = []
for row in f:
r = []
for col in row:
# use python symbolic toolbox to simplify the trig mess above
r.append(simplify(col))
c.append(r)
return np.array(c)
def subs(f, m):
This allows you to simplify the trigonomic mess that kinematics can
create and also substitute in some inputs in the process
c = []
for row in f:
r = []
for col in row:
r.append(col.subs(m))
c.append(r)
return np.array(c)
def rplot(t1, t2, t3, t4):
Given the 4 joint angles (in rads), plot the arm in the x-y and w-z planes
l1 = 5.75
l2 = 7.375
l3 = 3.375
ptsx = [0]
ptsy = [0]
    # our joint-angle sign convention is reversed for these joints
t3 = -t3
t4 = -t4
# link 1
x1 = l1*cos(t2)
y1 = l1*sin(t2)
ptsx.append(x1)
ptsy.append(y1)
# link 2
x2 = x1 + l2*cos(t2 + t3)
y2 = y1 + l2*sin(t2 + t3)
ptsx.append(x2)
ptsy.append(y2)
# link 3
x3 = x2 + l3*cos(t2 + t3 + t4)
y3 = y2 + l3*sin(t2 + t3 + t4)
ptsx.append(x3)
ptsy.append(y3)
plt.subplot(1,2,1,projection='polar')
plt.plot([0, t1], [0, 1.0])
plt.grid(True)
plt.title('Azimuth Angle (x-y plane)')
plt.subplot(1,2,2)
plt.plot(ptsx, ptsy, 'b-', marker='o')
plt.axis('equal')
plt.grid(True)
plt.title('w-z Plane')
from sympy import pi
t1, t2, t3, t4 = symbols('t1 t2 t3 t4')
# a2, a3, a4, a5 = symbols('a2 a3 a4 a5')
# t1 - base
# t2 - shoulder
# t3 - elbow
# t4 - wrist
# a2 = 2.75 # base to shoulder
a2 = 0.0
a3 = 5.75 # shoulder to elbow
a4 = 7.375 # elbow to wrist
a5 = 3.375 # wrist to end effector
# put all angles in rads
# sybolic can't handle converting a symbol using d2r()
# a, alpha, d, theta
params = [
[ 0, 0, 0, t1],
[ a3, -pi/2, 0, t2],
[ a4, 0, 0, t3],
[ a5, 0, 0, t4]
]
# params = [
# [ 0, 0, 2.75, t1],
# [ 0, pi/2, 0, t2],
# [ 5.75, 0, 0, t3],
# [ 7.375, 0, 0, t4],
# [ 3.375, 0, 0, 0]
# ]
dh = mDH()
al5d = dh.fk(params)
al5d = eval(al5d)
def printT(tt):
Print out the entire T matrix
R = tt[0:3,0:3]
D = tt[0:3, 3]
print('-'*30)
print('Position:')
print(' x:', D[0])
print(' y:', D[1])
print(' z:', D[2])
# R(n, o, a)
print('-'*30)
print('Orientation')
print(' nx:', R[0,0])
print(' ny:', R[0,1])
print(' nz:', R[0,2])
print('')
print(' ox:', R[1,0])
print(' oy:', R[1,1])
print(' oz:', R[1,2])
print('')
print(' ax:', R[2,0])
print(' ay:', R[2,1])
print(' az:', R[2,2])
def printP(dh):
dh is the forward kinematics equations matrix
pos = [float(x) for x in dh[0:3, 3]]
print('Pos (x,y,z): {:5.2f} {:5.2f} {:5.2f}'.format(*pos))
return pos
def printDegrees(angles):
angles are in radians
a = [r2d(x) for x in angles]
print('Angles: {:6.1f} {:6.1f} {:6.1f} {:6.1f}'.format(*a))
# the symbolic forward kinematics of our robot arm
printT(al5d)
# Let's set some angles
simp=subs(al5d, [(t1,0.0), (t2,pi/2), (t3, pi/2), (t4, 0.0)])
printT(simp)
simp=subs(al5d, [(t1,0.0),(t2,d2r(111.5)), (t3, d2r(-127)), (t4, d2r(-74.5))])
printP(simp)
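# Extra spot check (a sketch): with every joint angle at zero the arm is
# stretched straight out, so the end effector should sit near
# x = a3 + a4 + a5 = 16.5 in, y = z = 0.
home = subs(al5d, [(t1, 0), (t2, 0), (t3, 0), (t4, 0)])
printP(home)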
from math import atan2, acos, sqrt, pi, cos, sin
def cosine_law(a, b, c, phase=False):
if phase:
angle = ((c**2 - (a**2 + b**2))/(2*a*b))
else:
angle = ((c**2 - (a**2 + b**2))/(-2*a*b))
# print('cosine_law', angle)
if angle > 1 or angle < -1:
raise Exception('angle outside range')
return acos(angle)
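# Tiny worked check (a sketch): in a 3-4-5 right triangle the angle opposite
# the side of length 5 is a right angle.
print(cosine_law(3, 4, 5))  # ~1.5708 rad, i.e. pi/2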
def line(x1, y1, x2, y2):
return sqrt((x2-x1)**2 + (y2-y1)**2)
def mag(a, b):
return sqrt(a**2 + b**2)
def mag3(a, b, c):
return sqrt(a**2 + b**2 + c**2)
def inverse(x, y, z, orient):
Azimuth angle is between x and w and lies in the x-y plane
^ x
w |
\ |
\ |
\ |
\|
<----------+ (z is out of the page - right hand rule)
y
Most of the robot arm move in the plane defined by w-z
^ z
| o-----o
| / \
| / E
|/
+----------------> w
All joint angles returned are in radians: (t1, t2, t3, t4)
l1 = 5.75
l2 = 7.375
l3 = 3.375
# check workspace constraints
if z < 0:
raise Exception('z in ground')
elif mag3(x,y,z) > (l1 + l2 + l3):
raise Exception('out of reach')
# get x-y plane azimuth
t1 = atan2(y, x)
# Now, most of the arm operates in the w-z frame
w = mag(x, y) # new frame axis
gamma = atan2(z, w)
r = mag(z, w)
c = mag(w-l3*cos(orient), z-l3*sin(orient))
t3 = cosine_law(l1, l2, c, True)
d = cosine_law(l2, c, l1)
e = cosine_law(c, l3, r)
t4 = pi - d - e
alpha = cosine_law(l1, c, l2)
beta = cosine_law(c,r,l3)
t2 = alpha + beta + gamma
return (t1, t2, t3, t4)
def checkPts(x, y, z, orient):
Given a point (in inches) and orientation (in rads), this calculates
the joint angles, then uses those angles to calculate the forward solution
and prints out the error. It also plots the arm.
angles = inverse(x, y, z, orient)
a,b,c,d = angles
simp=subs(al5d, [(t1, a), (t2, b), (t3, -c), (t4, -d)])
pts = printP(simp)
printDegrees(angles)
rplot(*angles)
error = [a-b for a,b in zip((x,y,z), pts)]
print('Error: {:6.3f} {:6.3f} {:6.3f}'.format(*error))
checkPts(10.75, 0, 5.75, 0.0)
# 0 90 -90 0
checkPts(7.385, 0, 5.75-3.375, -pi/2)
x,y,z = (7.385*cos(pi/4), 7.385*sin(pi/4), 5.75-3.375)
checkPts(x,y,z, -pi/2)
x,y,z = (7.385*cos(-pi/4), 7.385*sin(-pi/4), 5.75-3.375)
checkPts(x,y,z, -pi/2)
x,y,z = (7.385*cos(pi/2), 7.385*sin(pi/2), 5.75-3.375)
checkPts(x,y,z, -pi/2)
checkPts(5,0,0, -pi/2)
checkPts(7,-3,0, -pi/2)
checkPts(7,6,4, -pi/2)
print(cos(pi-1))
print(cos(1-pi))
print(cos(1))
# let's plot cos from -pi to pi
def draw(p, title):
test = []
    for i in range(int(-pi*100), int(pi*100)):
if p == pi:
ans = cos(pi - i/100)
elif p == -pi:
ans = cos(i/100 - pi)
else:
ans = cos(i/100)
test.append(ans)
    x = [x/100 for x in range(int(-pi*100), int(pi*100))]
plt.plot(x,test)
plt.title(title)
plt.grid(True)
draw(pi, 'pi')
draw(-pi, '-pi')
draw(0, '0')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: OptGene
Step2: OptKnock
Step3: Running multiple knockouts with OptKnock can take a few hours or days...
|
<ASSISTANT_TASK:>
Python Code:
from cameo import models
model = models.bigg.e_coli_core.copy()
model.solver = "cplex"
from cameo import phenotypic_phase_plane
ppp = phenotypic_phase_plane(model, variables=[model.reactions.BIOMASS_Ecoli_core_w_GAM], objective=model.reactions.EX_ac_e)
ppp.plot()
from cameo.strain_design.heuristic.evolutionary_based import OptGene
optgene = OptGene(model)
result = optgene.run(target="EX_ac_e",
biomass=model.reactions.BIOMASS_Ecoli_core_w_GAM,
substrate=model.metabolites.glc__D_e,
max_evaluations=5000,
plot=False)
result
result.plot(0)
result.display_on_map(0, "e_coli_core.Core metabolism")
from cameo.strain_design.deterministic.linear_programming import OptKnock
optknock = OptKnock(model, fraction_of_optimum=0.1)
result = optknock.run(max_knockouts=1, target="EX_ac_e", biomass="BIOMASS_Ecoli_core_w_GAM")
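# A broader search is the same call with a higher knockout budget (left
# commented out here; as noted above, it can run for hours or days):
# result_2ko = optknock.run(max_knockouts=2, target="EX_ac_e", biomass="BIOMASS_Ecoli_core_w_GAM")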
result
result.plot(0)
result.display_on_map(0, "e_coli_core.Core metabolism")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'sandbox-3', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
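# Example of a completed entry (a sketch only: substitute the choice that
# matches the documented model):
# DOC.set_value("OGCM")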
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Show what map() does.
Step2: fstrings can handle expressions, not just variable names.
|
<ASSISTANT_TASK:>
Python Code:
a = [i*i for i in range(3)]
a
b = a
b[1] = 'hello'
b
a
a = [i*i for i in range(3)]
a
b = a[:]
b[1] = 'hello'
b
a
a = [i*i for i in range(3)]
a
b = a.copy()
b[1] = 'hello'
b
a
def foo(s):
return s + ' on the Beach.'
list(map(foo, ('sand', 'clams', 'dunes')))
a = (1, 2, 3)
b = (2, 3, 4, 5)
from itertools import zip_longest
sum(map(sum, zip_longest(a, b, fillvalue=0)))
sum((sum(a), sum(b)))
sum(a + b)
f'hello {sum(map(sum, zip_longest(a, b, fillvalue=0)))} world'
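# Added sketch (not in the original notebook): f-string expressions can also
# include method calls and format specifiers, not just simple names.
word = 'clams'
f'{word.upper()} has {len(word)} letters'
f'rough pi: {22/7:.3f}'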
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: At this point anything above python 3.5 should be ok.
Step2: Notes
Step3: Notes
Step4: Notes
Step5: Notes
Step6: Notes
Step7: Notes
Step8: Notes
Step9: Notes
Step10: Notes
|
<ASSISTANT_TASK:>
Python Code:
import sys
print(sys.version)
import numpy as np
np.__version__
import matplotlib as mpl
from matplotlib import pyplot as plt
mpl.__version__
values = np.zeros((2,50))
size = values.shape
print(size)
for i in range(size[1]):
values[0,i] = i * 2
values[1,i] = np.sin(i / 2)
print(values)
np.save('np_file.npy', values)
np.savetxt('txt_file.txt', np.transpose(values))
values_from_text = np.loadtxt("txt_file.txt")
values_from_np = np.load("np_file.npy")
print(values_from_text[0,0] == values_from_np[0,0])
print(values_from_text)
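# Added sketch (not in the original notebook): compare the two loaded arrays in full.
# The text file was saved transposed, so transpose back before comparing;
# np.allclose is the usual way to compare floating-point arrays element by element.
print(np.allclose(values_from_text.T, values_from_np))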
values = values_from_text
print(values.shape)
x_0 = values[0]
print(x_0)
x_1 = values[:,0]
print(x_1)
y_1 = values[:,1]
print(y_1)
fig = plt.figure()
plt.plot(x_1, y_1)
plt.show()
indices = [5,10, 15 ,20]
x = values[indices,0]
print(x)
y = values[indices,1]
print(y)
fig = plt.figure()
plt.plot(x, y)
plt.show()
indices = np.where(values[:,1] > -0.5)[0]
print("indices: ",indices)
x = values[indices,0]
print("x: ",x)
y = values[indices,1]
print("y: ",y)
fig = plt.figure()
plt.plot(x, y)
plt.show()
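# Added sketch (not in the original notebook): the same selection can be done with a
# boolean mask directly -- boolean arrays are valid numpy indices.
mask = values[:, 1] > -0.5
x_masked = values[mask, 0]
y_masked = values[mask, 1]
print(x_masked.shape == x.shape)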
to_sort = np.random.rand(10)
print(to_sort)
to_sort.sort()
print(to_sort)
to_sort = np.random.rand(2,10)
print(to_sort)
to_sort.sort(axis=1)
print(to_sort)
to_sort = np.random.rand(3,10)
print(to_sort)
#investigate the axis we want to sort after
print("The axis to sort: \n",to_sort[1])
sort_indices = to_sort[1].argsort()
print("The indexes after the sort: \n",sort_indices)
#proceed the sort using the slicing method we just introduced
to_sort = to_sort[:,sort_indices]
print("The sorted full array:\n ",to_sort)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The logistic regression equation has a very similar representation to linear regression. The difference is that the output value being modelled is binary in nature.
Step2: Making Predictions with Logistic Regression
Step3: Let's say you have been provided with the coefficient
Step4: Learning the Logistic Regression Model
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn
%matplotlib inline
x = np.linspace(-6, 6, num = 1000)
plt.figure(figsize = (12,8))
plt.plot(x, 1 / (1 + np.exp(-x))); # Sigmoid Function
plt.title("Sigmoid Function");
tmp = [0, 0.4, 0.6, 0.8, 1.0]
tmp
np.round(tmp)
np.array(tmp) > 0.7
dataset = [[-2.0011, 0],
[-1.4654, 0],
[0.0965, 0],
[1.3881, 0],
[3.0641, 0],
[7.6275, 1],
[5.3324, 1],
[6.9225, 1],
[8.6754, 1],
[7.6737, 1]]
coef = [-0.806605464, 0.2573316]
for row in dataset:
yhat = 1.0 / (1.0 + np.exp(- coef[0] - coef[1] * row[0]))
print("yhat {0:.4f}, yhat {1}".format(yhat, round(yhat)))
from sklearn.linear_model import LogisticRegression
dataset
X = np.array(dataset)[:, 0:1]
y = np.array(dataset)[:, 1]
X
y
clf_LR = LogisticRegression(C=1.0, penalty='l2', tol=0.0001)
clf_LR.fit(X,y)
clf_LR.predict(X)
clf_LR.predict_proba(X)
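# Added sketch (not in the original notebook): learning the coefficients ourselves with
# simple stochastic gradient descent instead of sklearn. The learning rate and number of
# epochs below are arbitrary illustrative choices, not values from the original text.
coef_hat = [0.0, 0.0]
learning_rate = 0.3
for epoch in range(100):
    for row in dataset:
        yhat = 1.0 / (1.0 + np.exp(-(coef_hat[0] + coef_hat[1] * row[0])))
        error = row[1] - yhat
        coef_hat[0] = coef_hat[0] + learning_rate * error * yhat * (1.0 - yhat)
        coef_hat[1] = coef_hat[1] + learning_rate * error * yhat * (1.0 - yhat) * row[0]
print("learned coefficients: {0:.3f}, {1:.3f}".format(coef_hat[0], coef_hat[1]))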
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in the "auto" dataset. This is a fun collection of data on cars manufactured between 1970 and 1982. The source for this data can be found at https
Step2: The origin column indicates the country of origin for the car manufacture. It has three numeric values, 1, 2, or 3. These indicate USA, Europe, or Japan, respectively. Replace the origin column with a string representing the country name.
Step3: In this plot we are going to show the trend of the average miles per gallon (MPG) rating for subsequent model years separated by country of origin. This time period saw a significant increase in MPG driven by the U.S. fuel crisis. We can use the pivot_table feature of pandas to get this information from the data. (Excel and other spreadsheets have similar functionality.)
Step4: Now use toyplot to plot this trend on a standard x-y chart.
Step5: For the talk, I want to compare this to using a 3D plot. Toyplot does not yet have such silly plot capabilities, so write out the results of this pivot table to csv so we can easily load it into Excel.
Step6: In one of my counterexamples, I remind the audiance to make colors consistent. Make a plot with inconsistent colors.
Step7: I make a point that it is a bad idea to clutter up the canvas with non-data items like grid lines. Create a counter example that has lots of distracting lines.
Step8: If you really want gridlines, you should make them very subtle so they don't interfere with the actual data.
Step9: Frankly, vertical gridlines are usually not all that necessary. If you remove them, less clutter. Not going overboard on horizontal lines is also good.
Step10: I personally find grid lines a bit overrated. Don't fear not having grid lines at all, as in the first example.
Step11: Do it again, but at least order the legend correctly.
|
<ASSISTANT_TASK:>
Python Code:
import pandas
import numpy
import toyplot
import toyplot.pdf
import toyplot.png
import toyplot.svg
print('Pandas version: ', pandas.__version__)
print('Numpy version: ', numpy.__version__)
print('Toyplot version: ', toyplot.__version__)
column_names = ['MPG',
'Cylinders',
'Displacement',
'Horsepower',
'Weight',
'Acceleration',
'Model Year',
'Origin',
'Car Name']
data = pandas.read_table('auto-mpg.data',
delim_whitespace=True,
names=column_names,
index_col=False)
country_map = pandas.Series(index=[1,2,3],
data=['USA', 'Europe', 'Japan'])
data['Origin'] = numpy.array(country_map[data['Origin']])
average_mpg_per_year = data.pivot_table(index='Model Year',
columns='Origin',
values='MPG',
aggfunc='mean')
average_mpg_per_year
average_mpg_per_year.columns
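# Added sketch (not in the original notebook): pivot_table is equivalent to a
# groupby on both columns followed by unstack -- the two should give the same table.
average_mpg_per_year_gb = data.groupby(['Model Year', 'Origin'])['MPG'].mean().unstack()
print(average_mpg_per_year_gb.equals(average_mpg_per_year))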
canvas = toyplot.Canvas('4in', '2.6in')
axes = canvas.cartesian(bounds=(41,-1,6,-43),
xlabel = 'Model Year',
ylabel = 'Average MPG')
for column in country_map:
series = average_mpg_per_year[column]
x = series.index + 1900
y = numpy.array(series)
axes.plot(x, y)
axes.text(x[-1], y[-1], column,
style={"text-anchor":"start",
"-toyplot-anchor-shift":"2px"})
# It's usually best to make the y-axis 0-based.
axes.y.domain.min = 0
# Toyplot is sometimes inaccurate in judging the width of labels.
axes.x.domain.max = 1984
# The labels can make for odd tick placement.
# Place them manually
axes.x.ticks.locator = \
toyplot.locator.Explicit([1970,1974,1978,1982])
toyplot.pdf.render(canvas, 'MultiSeries.pdf')
toyplot.svg.render(canvas, 'MultiSeries.svg')
toyplot.png.render(canvas, 'MultiSeries.png', scale=5)
average_mpg_per_year.to_csv('auto-mpg-origin-year.csv')
canvas = toyplot.Canvas('4in', '2.6in')
axes = canvas.cartesian(bounds=(41,-1,6,-43),
xlabel = 'Model Year',
ylabel = 'Average MPG')
for column in ['Europe', 'Japan', 'USA']:
series = average_mpg_per_year[column]
x = series.index + 1900
y = numpy.array(series)
axes.plot(x, y)
axes.text(x[-1], y[-1], column,
style={"text-anchor":"start",
"-toyplot-anchor-shift":"2px"})
# It's usually best to make the y-axis 0-based.
axes.y.domain.min = 0
# Toyplot is sometimes inaccurate in judging the width of labels.
axes.x.domain.max = 1984
# The labels can make for odd tick placement.
# Place them manually
axes.x.ticks.locator = \
toyplot.locator.Explicit([1970,1974,1978,1982])
toyplot.pdf.render(canvas, 'MultiSeries_Inconsistent_Colors.pdf')
toyplot.svg.render(canvas, 'MultiSeries_Inconsistent_colors.svg')
toyplot.png.render(canvas, 'MultiSeries_Inconsistent_colors.png', scale=5)
canvas = toyplot.Canvas('4in', '2.6in')
axes = canvas.cartesian(bounds=(41,-1,6,-43),
xlabel = 'Model Year',
ylabel = 'Average MPG')
# Create some grid lines. (Not a great idea.)
axes.hlines(xrange(0,41,5), color='black')
axes.vlines(xrange(1970,1983), color='black')
for column in country_map:
series = average_mpg_per_year[column]
x = series.index + 1900
y = numpy.array(series)
axes.plot(x, y)
axes.text(x[-1], y[-1], column,
style={"text-anchor":"start",
"-toyplot-anchor-shift":"2px"})
# It's usually best to make the y-axis 0-based.
axes.y.domain.min = 0
# Toyplot is sometimes inaccurate in judging the width of labels.
axes.x.domain.max = 1984
# The labels can make for odd tick placement.
# Place them manually
axes.x.ticks.locator = \
toyplot.locator.Explicit([1970,1974,1978,1982])
toyplot.pdf.render(canvas, 'MultiSeries_Grid_Dark.pdf')
toyplot.svg.render(canvas, 'MultiSeries_Grid_Dark.svg')
toyplot.png.render(canvas, 'MultiSeries_Grid_Dark.png', scale=5)
canvas = toyplot.Canvas('4in', '2.6in')
axes = canvas.cartesian(bounds=(41,-1,6,-43),
xlabel = 'Model Year',
ylabel = 'Average MPG')
# Create some grid lines. (Not a great idea.)
axes.hlines(xrange(0,41,5), color='lightgray')
axes.vlines(xrange(1970,1983), color='lightgray')
for column in country_map:
series = average_mpg_per_year[column]
x = series.index + 1900
y = numpy.array(series)
axes.plot(x, y)
axes.text(x[-1], y[-1], column,
style={"text-anchor":"start",
"-toyplot-anchor-shift":"2px"})
# It's usually best to make the y-axis 0-based.
axes.y.domain.min = 0
axes.x.domain.max = 1984
# The labels can make for odd tick placement.
# Place them manually
axes.x.ticks.locator = \
toyplot.locator.Explicit([1970,1974,1978,1982])
toyplot.pdf.render(canvas, 'MultiSeries_Grid_Light.pdf')
toyplot.svg.render(canvas, 'MultiSeries_Grid_Light.svg')
toyplot.png.render(canvas, 'MultiSeries_Grid_Light.png', scale=5)
canvas = toyplot.Canvas('4in', '2.6in')
axes = canvas.cartesian(bounds=(41,-1,6,-43),
xlabel = 'Model Year',
ylabel = 'Average MPG')
# Create some grid lines. (Not a great idea.)
axes.hlines(xrange(0,41,10), color='lightgray')
for column in country_map:
series = average_mpg_per_year[column]
x = series.index + 1900
y = numpy.array(series)
axes.plot(x, y)
axes.text(x[-1], y[-1], column,
style={"text-anchor":"start",
"-toyplot-anchor-shift":"2px"})
# It's usually best to make the y-axis 0-based.
axes.y.domain.min = 0
axes.x.domain.max = 1984
# The labels can make for odd tick placement.
# Place them manually
axes.x.ticks.locator = \
toyplot.locator.Explicit([1970,1974,1978,1982])
toyplot.pdf.render(canvas, 'MultiSeries_Grid_Light_Fewer.pdf')
toyplot.svg.render(canvas, 'MultiSeries_Grid_Light_Fewer.svg')
toyplot.png.render(canvas, 'MultiSeries_Grid_Light_Fewer.png', scale=5)
canvas = toyplot.Canvas('4in', '2.6in')
axes = canvas.cartesian(bounds=(41,-11,6,-43),
xlabel = 'Model Year',
ylabel = 'Average MPG')
marks = {}
for column in country_map:
series = average_mpg_per_year[column]
x = series.index + 1900
y = numpy.array(series)
marks[column] = axes.plot(x, y)
# It's usually best to make the y-axis 0-based.
axes.y.domain.min = 0
# The labels can make for odd tick placement.
# Place them manually
axes.x.ticks.locator = \
toyplot.locator.Explicit([1970,1974,1978,1982])
canvas.legend([('USA', marks['USA']),
('Europe', marks['Europe']),
('Japan', marks['Japan'])],
rect=('-1in', '-1.25in', '1in', '0.75in'))
toyplot.pdf.render(canvas, 'Legend_Backward.pdf')
toyplot.svg.render(canvas, 'Legend_Backward.svg')
toyplot.png.render(canvas, 'Legend_Backward.png', scale=5)
canvas = toyplot.Canvas('4in', '2.6in')
axes = canvas.cartesian(bounds=(41,-11,6,-43),
xlabel = 'Model Year',
ylabel = 'Average MPG')
marks = {}
for column in country_map:
series = average_mpg_per_year[column]
x = series.index + 1900
y = numpy.array(series)
marks[column] = axes.plot(x, y)
# It's usually best to make the y-axis 0-based.
axes.y.domain.min = 0
# The labels can make for odd tick placement.
# Place them manually
axes.x.ticks.locator = \
toyplot.locator.Explicit([1970,1974,1978,1982])
canvas.legend([('Europe', marks['Europe']),
('Japan', marks['Japan']),
('USA', marks['USA'])],
rect=('-1in', '-1.25in', '1in', '0.75in'))
toyplot.pdf.render(canvas, 'Legend_OK.pdf')
toyplot.svg.render(canvas, 'Legend_OK.svg')
toyplot.png.render(canvas, 'Legend_OK.png', scale=5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Test out learned distribution inside of SMC
Step2: Look at rate of path coalescence
|
<ASSISTANT_TASK:>
Python Code:
devices = factorial_hmm.gen_devices()
T = 50
np.random.seed(20)
X, Y = factorial_hmm.gen_dataset(devices, T)
plt.figure(figsize=(15,3.5))
plt.plot(Y)
plt.figure(figsize=(15,10))
plt.imshow((X*devices).T, interpolation='None', aspect=1);
plt.yticks(np.arange(len(devices)), devices);
print len(devices), 2**len(devices)
trace_train = []
trace_validation = []
dist_est = cde.ConditionalBinaryMADE(len(devices)+1, len(devices), H=300, num_layers=4)
if USE_GPU:
dist_est.cuda()
dist_est.load_state_dict(torch.load('../saved/trained_hmm_params.rar'))
X_hat_bootstrap, ancestry_bootstrap, ESS_bootstrap = \
factorial_hmm.run_smc(devices, Y, 500, factorial_hmm.baseline_proposal, verbose=False)
Y_hat_bootstrap = np.dot(X_hat_bootstrap, devices)
nn_proposal = factorial_hmm.make_nn_proposal(dist_est)
X_hat_nn, ancestry_nn, ESS_nn = \
factorial_hmm.run_smc(devices, Y, 500, nn_proposal, verbose=False)
Y_hat_nn = np.dot(X_hat_nn, devices)
plt.hist(ESS_bootstrap, histtype='stepfilled', linewidth=2, alpha=0.5, bins=20, edgecolor='k')
plt.hist(ESS_nn, histtype='stepfilled', linewidth=2, alpha=0.5, bins=20, edgecolor='k')
plt.xlim([0,plt.xlim()[1]])
plt.legend(['bootstrap', 'nnsmc'])
plt.title('Histogram of effective sample size of SMC filtering distribution');
plt.figure(figsize=(16,4))
plt.title('Ancestral paths for bootstrap proposals (blue) and nn (green)')
plt.plot(ancestry_bootstrap.T, color=sns.color_palette()[0]);
plt.plot(ancestry_nn.T, color=sns.color_palette()[1]);
plt.ylim(0,ancestry_nn.shape[0])
plt.xlim(0,T-1);
plt.figure(figsize=(14,3.25))
plt.plot(np.dot(X_hat_nn, devices).T, color=sns.color_palette()[1], alpha=0.1)
plt.plot(np.arange(len(Y)), Y,'k--')
plt.xlim([0,T-1])
plt.xlabel('Time step')
plt.ylabel('Total energy usage')
ANC_PRIOR = []
ANC_NN = []
def count_uniques(ancestry):
K, T = ancestry.shape
counts = np.empty((T,), dtype=int)
for t in xrange(T):
counts[t] = len(np.unique(ancestry[:,t]))
return counts
def run_iter():
X,Y = factorial_hmm.gen_dataset(devices, T=30)
X_particles_baseline, ancestry_baseline, _ = \
factorial_hmm.run_smc(devices, Y, 100, factorial_hmm.baseline_proposal, verbose=False)
print "smc complete"
X_particles, ancestry_nnsmc, _ = \
factorial_hmm.run_smc(devices, Y, 500, nn_proposal, verbose=False)
print "nn complete"
ANC_PRIOR.append(count_uniques(ancestry_baseline))
ANC_NN.append(count_uniques(ancestry_nnsmc))
return X,Y
for i in xrange(10):
print "iteration", i+1
X_tmp, Y_tmp = run_iter()
plt.figure(figsize=(8,3.5))
plt.plot(np.arange(len(X_tmp)), np.mean(ANC_PRIOR, 0));
plt.plot(np.arange(len(X_tmp)), np.mean(ANC_NN, 0));
plt.legend(['Bootstrap SMC', 'NN-SMC'], loc='upper left')
pm = np.mean(ANC_PRIOR, 0)
psd = np.std(ANC_PRIOR, 0)
safe_lb = (pm - psd) * (pm - psd > 1.0) + (pm - psd <= 1.0)
plt.fill_between(np.arange(len(X_tmp)), safe_lb, pm+psd, alpha=0.25, color=sns.color_palette()[0]);
pm = np.mean(ANC_NN, 0)
psd = np.std(ANC_NN, 0)
plt.fill_between(np.arange(len(X_tmp)), pm-psd, pm+psd, alpha=0.25, color=sns.color_palette()[1]);
plt.semilogy();
plt.xlabel('Time step')
plt.ylabel('Surviving paths')
plt.ylim(1, 100)
plt.xlim(0, len(X_tmp)-1)
plt.tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle.
Step2: Relevant Parameters
Step3: In order to see the effect of reflection, let's set "irrad_frac_refl_bol" of both of our stars to 0.9 - that is 90% of the incident flux will go towards reflection and 10% will be ignored.
Step4: Since reflection can be a computationally expensive process and in most cases is a low-order effect, there is a switch in the compute options that needs to be enabled in order for reflection to be taken into account. If this switch is False (which it is by default), the albedos are completely ignored and will be treated as if all incident light is lost/ignored.
Step5: Reflection has the most noticeable effect when the two stars are close to each other and have a large temperature ratio.
Step6: Influence on Light Curves (fluxes)
Step7: Let's run models with the reflection switch both turned on and off so that we can compare the two results. We'll also override delta to be a larger number since the computation time required by delta depends largely on the number of surface elements.
Step8: Influence on Meshes (Intensities)
|
<ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
#logger = phoebe.logger('error')
b = phoebe.default_binary()
print(b['irrad_frac_refl_bol'])
print(b['irrad_frac_lost_bol'])
print(b['irrad_frac_refl_bol@primary'])
print(b['irrad_frac_lost_bol@primary@component'])
b.set_value_all('irrad_frac_refl_bol', 0.9)
print(b['irrad_method@compute'])
b['sma@orbit'] = 4.0
b['teff@primary'] = 10000
b['teff@secondary'] = 5000
b.add_dataset('lc', times=np.linspace(0,1,101))
b.run_compute(irrad_method='none', ntriangles=700, model='refl_false')
b.run_compute(irrad_method='wilson', ntriangles=700, model='refl_true')
afig, mplfig = b.plot(show=True, legend=True)
artists = plt.plot(b['value@times@refl_false'], b['value@fluxes@refl_true']-b['value@fluxes@refl_false'], 'r-')
b.add_dataset('mesh', times=[0.2], columns=['teffs', 'intensities@lc01'])
b.disable_dataset('lc01')
b.run_compute(irrad_method='none', ntriangles=700, model='refl_false', overwrite=True)
b.run_compute(irrad_method='wilson', ntriangles=700, model='refl_true', overwrite=True)
#phoebe.logger('debug')
afig, mplfig = b.plot(component='secondary', kind='mesh', model='refl_false',
fc='intensities', ec='face',
draw_sidebars=True, show=True)
afig, mplfig = b.plot(component='secondary', kind='mesh', model='refl_true',
fc='intensities', ec='face',
draw_sidebars=True, show=True)
afig, mplfig = b.plot(component='secondary', kind='mesh', model='refl_false',
fc='teffs', ec='face',
draw_sidebars=True, show=True)
afig, mplfig = b.plot(component='secondary', kind='mesh', model='refl_true',
fc='teffs', ec='face',
draw_sidebars=True, show=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we need to declare a name that we will use for the dictionnary containing initial conditions for two variables. In the same way we need to define a dictionary for all the parameters. This leads to the following code
Step2: The next step is to define the vector field of the system. Or in other words the right-hand sides of the differential equation.
Step3: Those two lines are particular. They assign strings to two names. The strings happen to use name that we have mentioned already and the name we assigned where only to remeber us which string belong to which variables. Now we need to tell which variables are dynamic, with the dynamic of the string that the variable maps to.
Step4: Now we need to construct the full model for PyDSTool. To do this we will need to call the 'args' class of PyDSTool
Step5: All the details of this class can be found here
Step6: We can interact in many ways with DS objects. In order to show all the things we can do, we can use the function 'help'.
Step7: If we want to change the parameters of the ODE, we need to be careful and use the already written function. If we change it directly we could mess with the related values of other parameters.
Step8: Now we can finally solve the system and obtain a trajectory http
Step9: Just to finish this part, we will see how to run scripts on a DS object it order to measure quantities from the simulation.
Step10: This code highlights the utility of the 'trajectory' object. We can use this object as a parametric function. Indeed it interpolates between independent variables automatically. In our example $5.4$ was not in the 'time set'.
Step11: A bit further
Step12: Like before we use a Generator in order to find the solution of the dynamical system. We can just note that $w$ in the code is jsut a dummy variable necessary to this version of PyDSTool (need two variables).
Step13: The equation we used is bistable. One way to highlight this is with the following code
Step14: We will now be interested in the bifurcation diagram and the nonlinear study, that are the main point of this package.
Step15: The 'LocBifPoints' attributes tells PyCont what type of bifurcation should be tracked. In this example, because we now the result we specify that only saddle-node bifurcation should be detected. We can then compute this diagram
Step16: PC['EQ1'] now consists of a "struct" data type that specifies the particular equilibrium curve we prepared the system for. The information of the equilibrium curve can be accessed via the 'info()' method. We can obtain detailed information about a particular special point calling the 'getSpecialPoint' method.
|
<ASSISTANT_TASK:>
Python Code:
from PyDSTool import *
icdict = {'x': 1, 'y': 0.4} # Initial conditions dictonnary
pardict = {'k': 0.1, 'm': 0.5} # Parameters values dictionnary
x_rhs = 'y'
y_rhs = '-k*x/m'
vardict = {'x': x_rhs, 'y': y_rhs}
DSargs = args() # create an empty object instance of the args class, call it DSargs
DSargs.name = 'SHM' # name our model
DSargs.ics = icdict # assign the icdict to the ics attribute
DSargs.pars = pardict # assign the pardict to the pars attribute
DSargs.tdata = [0, 20] # declare how long we expect to integrate for
DSargs.varspecs = vardict # assign the vardict dictionary to the 'varspecs' attribute of DSargs
DS = Generator.Vode_ODEsystem(DSargs)
help(DS)
DS.set(pars={'k': 0.3},
ics={'x': 0.4})
traj = DS.compute('demo')
pts = traj.sample()
%matplotlib inline
plt.plot(pts['t'], pts['x'], label='x')
plt.plot(pts['t'], pts['y'], label='y')
plt.legend()
plt.xlabel('t')
def KE(pts):
return 0.5*DS.pars['m']*pts['y']**2
def PE(pts):
return 0.5*DS.pars['k']*pts['x']**2
total_energy = KE(pts) + PE(pts)
print (total_energy)
KE(traj(5.4)) # At time 5.4
pts.find(5.4)
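# Added sketch (not in the original notebook): the trajectory interpolates between the
# stored sample times, so evaluating it at t=5.4 should agree closely with a plain
# linear interpolation of the sampled points.
from numpy import interp
print('traj(5.4) x: %.4f  linear interp: %.4f' % (traj(5.4)['x'], interp(5.4, pts['t'], pts['x'])))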
import PyDSTool as dst # Give a name to the package
import numpy as np
from matplotlib import pyplot as plt
# we must give a name
DSargs = dst.args(name='Calcium channel model')
# parameters
DSargs.pars = { 'vl': -60,
'vca': 120,
'i': 0,
'gl': 2,
'gca': 4,
'c': 20,
'v1': -1.2,
'v2': 18 }
# auxiliary helper function(s) -- function name: ([func signature], definition)
DSargs.fnspecs = {'minf': (['v'], '0.5 * (1 + tanh( (v-v1)/v2 ))') }
# rhs of the differential equation, including dummy variable w
DSargs.varspecs = {'v': '( i + gl * (vl - v) - gca * minf(v) * (v-vca) )/c',
'w': 'v-w' }
# initial conditions
DSargs.ics = {'v': 0, 'w': 0 }
DSargs.tdomain = [0,30] # set the range of integration.
ode = dst.Generator.Vode_ODEsystem(DSargs) # an instance of the 'Generator' class.
traj = ode.compute('polarization') # integrate ODE with trajectory name : polarization / use print (traj.info(1)) to obtian the info
pts = traj.sample(dt=0.1) # Data for plotting
# PyPlot commands
plt.plot(pts['t'], pts['v'])
plt.xlabel('time') # Axes labels
plt.ylabel('voltage') # ...
plt.ylim([0,65]) # Range of the y axis
plt.title(ode.name) # Figure title from model name
plt.show()
plt.clf() # Clear the figure
for i, v0 in enumerate(np.linspace(-80,80,20)):
ode.set( ics = { 'v': v0 } ) # Initial condition
# Trajectories are called pol0, pol1, ...
# sample them on the fly to create Pointset tmp
tmp = ode.compute('pol%3i' % i).sample() # or specify dt option to sample to sub-sample
plt.plot(tmp['t'], tmp['v'])
plt.xlabel('time')
plt.ylabel('voltage')
plt.title(ode.name + ' multi ICs')
plt.show()
# Prepare the system to start close to a steady state
ode.set(pars = {'i': -220} ) # Lower bound of the control parameter 'i'
ode.set(ics = {'v': -170} ) # Close to one of the steady states present for i=-220
PC = dst.ContClass(ode) # Set up continuation class
PCargs = dst.args(name='EQ1', type='EP-C') # 'EP-C' stands for Equilibrium Point Curve. The branch will be labeled 'EQ1'.
PCargs.freepars = ['i'] # control parameter(s) (it should be among those specified in DSargs.pars)
PCargs.MaxNumPoints = 450 # The following 3 parameters are set after trial-and-error
PCargs.MaxStepSize = 2
PCargs.MinStepSize = 1e-5
PCargs.StepSize = 2e-2
PCargs.LocBifPoints = 'LP' # detect limit points / saddle-node bifurcations
PCargs.SaveEigen = True # to tell unstable from stable branches
PC.newCurve(PCargs)
PC['EQ1'].forward() # Forward because we look for t>0
PC.display(['i','v'], stability=True, figure=3) # stable and unstable branches as solid and dashed curves, resp.
PCargs = dst.args(name='SN1', type='LP-C')
PCargs.initpoint = 'EQ1:LP2'
PCargs.freepars = ['i', 'gca']
PCargs.MaxStepSize = 2
PCargs.LocBifPoints = ['CP']
PCargs.MaxNumPoints = 200
PC.newCurve(PCargs)
PC['SN1'].forward()
PC['SN1'].backward()
PC['SN1'].display(['i','gca'], figure=4)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Explore the Data using Pandas
Step2: You can see that this provides summary statistics for numerical columns. In our case, our only numerical column is score. We can access each of these summaries statistics separately, using Pandas built-in functions. For example, we can get the max score using the syntax DataFrame[Column Name].max()
Step3: What about the other columns? The columns that contain strings, not numbers? One way we can summarize these columns is by counting the unique string values in the column. We can do this using the DataFrame.value_counts() function.
Step4: Groupby
Step5: We now have a pandas object. We can perform most of the in-built Pandas functions on this object, but we'll see slightly different output than what we saw with a dataframe object.
Step6: How is this output different that the previous time we used the DataFrame.mean() function?
Step7: 2. Creating the DTM
Step8: This format is called Compressed Sparse Format. How do we know what each number indicates? We can access the words themselves through the CountVectorizer function get_feature_names.
Step9: It saves a lot of memory to store the dtm in this format, but it is difficult to look at for a human. To illustrate the techniques in this lesson we will first convert this matrix back to a Pandas dataframe, a format we're more familiar with. For larger datasets, you will have to use the Compressed Sparse Format. Putting it into a DataFrame, however, will enable us to get more comfortable with Pandas!
Step10: 3. What can we do with a DTM?
Step11: We'll see further stuff we can do with a DTM in days to come. Because it is in the format of a matrix, we can perform any matrix algebra or vector manipulation on it, which enables some pretty exciting things (think vector space and Euclidean geometry). But, what do we lose when we represent text in this format?
Step12: It's still mostly zeros. Let's look at the 20 words with highest tf-idf weights.
Step13: Ok! We have successfully identified content words, without removing stop words. What else do you notice about this list?
Step14: Now let's compare the words with the highest tf-idf weight for each genre.
|
<ASSISTANT_TASK:>
Python Code:
import pandas
#create a dataframe called "df"
df = pandas.read_csv("BDHSI2016_music_reviews.csv", sep = '\t')
##I'm going to do a pre-processing step to remove digits in the text, for analytical purposes.
##If you don't understand this code right now it's ok. But challenge yourself to make sense of it!
df['body'] = df['body'].apply(lambda x: ''.join([i for i in x if not i.isdigit()]))
#view the dataframe
#notice the metadata. The column "body" contains our text of interest.
df
## Review Ex: Think back to yesterday's tutorial on Pandas.
###Use the dataframe slicing methods to print the full text of the first review.
df.describe()
df['score'].max()
##EX: Print the mean and the standard deviation for the score column.
df['genre'].value_counts()
##EX: Print the most frequent reviewers and artists in the dataframe.
#create a groupby dataframe grouped by genre
df_genres = df.groupby("genre")
#What kind of object is df_genres? Let's find out.
df_genres
#calculate the mean score by genre, print out the results
df_genres['score'].mean().sort_values(ascending=False)
##EX: Print the maximum score for each genre.
##Bonus EX: Find the artist with the highest average score. Find the artist with the lowest average score.
#import the function CountVectorizer
from sklearn.feature_extraction.text import CountVectorizer
countvec = CountVectorizer()
#fit and transform our text into a DTM. Ask me about what this code does...
sklearn_dtm = countvec.fit_transform(df.body)
print(sklearn_dtm)
print(countvec.get_feature_names()[:10])
##EX: What word is indicated by the first row of the DTM printed above?
###Hint: Think back to the tutorial on lists, and how to slice a list.
#we do the same as we did above, but convert it into a Pandas dataframe
#Don't worry about understanding every line of this code
dtm_df = pandas.DataFrame(countvec.fit_transform(df.body).toarray(), columns=countvec.get_feature_names(), index = df.index)
#view the dtm dataframe
dtm_df
dtm_df.sum().sort_values(ascending=False)
##Ex: print the average number of times each word is used in a review
#Print this out sorted from highest to lowest.
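# Added sketch (not in the original notebook): since the DTM is an ordinary matrix,
# vector-space operations apply directly. For example, cosine similarity between
# the first two reviews:
from sklearn.metrics.pairwise import cosine_similarity
cosine_similarity(dtm_df.iloc[0:1], dtm_df.iloc[1:2])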
#import the function
from sklearn.feature_extraction.text import TfidfVectorizer
#define our tfidfvec object
tfidfvec = TfidfVectorizer()
#create the dtm, but with cells weighted by the tf-idf score.
dtm_tfidf_df = pandas.DataFrame(tfidfvec.fit_transform(df.body).toarray(), columns=tfidfvec.get_feature_names(), index = df.index)
#view results
dtm_tfidf_df
dtm_tfidf_df.max().sort_values(ascending=False)[0:20]
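# Added sketch (not in the original notebook): the fitted vectorizer exposes the idf
# weights it learned -- very common words get the lowest idf, which is why stop words
# drop out of the top tf-idf list without being removed explicitly.
idf = pandas.Series(tfidfvec.idf_, index=tfidfvec.get_feature_names())
idf.sort_values()[:20]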
#create a dataset with document index and genre
#first make a copy of our dtm_tfidf_df
dtm_tfidf_df_genre = dtm_tfidf_df
#add a GENRE column to it. Why am I making the name of the column GENRE and not genre?
dtm_tfidf_df_genre['GENRE'] = df['genre']
dtm_tfidf_df_genre
groupby_dtm_genre = dtm_tfidf_df_genre.groupby('GENRE').max()
groupby_dtm_genre
#sort the values in the Indie column.
groupby_dtm_genre.loc['Indie'].sort_values(ascending=False)
##EX: do the same for Rap and Jazz genres. Compare the most distinctive words. What do you notice?
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Some Formal Basics (skip if you just want code examples)
Step2: Now how to compare these two sets of samples? Clearly, a t-test would be a bad idea since it basically compares mean and variance of $X$ and $Y$. But we set that to be equal. By chance, the estimates of these statistics might differ, but that is unlikely to be significant. Thus, we have to look at higher order statistics of the samples. In fact, kernel two-sample tests look at all (infinitely many) higher order moments.
Step3: Quadratic Time MMD
Step4: Any sub-class of <a href="http
Step5: Precomputing Kernel Matrices
Step6: Now let us visualise distribution of MMD statistic under $H_0
Step7: Null and Alternative Distribution Illustrated
Step8: Different Ways to Approximate the Null Distribution for the Quadratic Time MMD
Step9: The above plot of the Eigenspectrum shows that the Eigenvalues are decaying extremely fast. We choose the number for the approximation such that all Eigenvalues bigger than some threshold are used. In this case, we will not lose a lot of accuracy while gaining a significant speedup. For slower decaying Eigenspectrums, this approximation might be more expensive.
Step10: The Gamma Moment Matching Approximation and Type I errors
Step11: As we can see, the above example was kind of unfortunate, as the approximation fails badly. We check the type I error to verify that. This works similar to sampling the alternative distribution
Step12: We see that Gamma basically never rejects, which is in line with the fact that the p-value was massively overestimated above. Note that for the other tests, the p-value is also not at its desired value, but this is due to the low number of samples/repetitions in the above code. Increasing them leads to consistent type I errors.
Step13: We now describe the linear time MMD, as described in [1, Section 6], which is implemented in Shogun. A fast, unbiased estimate for the original MMD expression which still uses all available data can be obtained by dividing data into two parts and then compute
Step14: Sometimes, one might want to use <a href="http
Step15: The Gaussian Approximation to the Null Distribution
Step16: Kernel Selection for the MMD -- Overview
Step17: Now perform two-sample test with that kernel
Step18: For the linear time MMD, the null and alternative distributions look different than for the quadratic time MMD as plotted above. Let's sample them (this takes longer, so we reduce the number of samples a bit). Note how we can tell the linear time MMD to simulate the null hypothesis, which is necessary since we cannot permute by hand as the samples are not in memory.
Step19: And visualise again. Note that both null and alternative distribution are Gaussian, which allows the fast null distribution approximation and the optimal kernel selection
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
%matplotlib inline
# import all Shogun classes
from modshogun import *
# use scipy for generating samples
from scipy.stats import norm, laplace
def sample_gaussian_vs_laplace(n=220, mu=0.0, sigma2=1, b=sqrt(0.5)):
# sample from both distributions
X=norm.rvs(size=n, loc=mu, scale=sigma2)
Y=laplace.rvs(size=n, loc=mu, scale=b)
return X,Y
mu=0.0
sigma2=1
b=sqrt(0.5)
n=220
X,Y=sample_gaussian_vs_laplace(n, mu, sigma2, b)
# plot both densities and histograms
figure(figsize=(18,5))
suptitle("Gaussian vs. Laplace")
subplot(121)
Xs=linspace(-2, 2, 500)
plot(Xs, norm.pdf(Xs, loc=mu, scale=sigma2))
plot(Xs, laplace.pdf(Xs, loc=mu, scale=b))
title("Densities")
xlabel("$x$")
ylabel("$p(x)$")
_=legend([ 'Gaussian','Laplace'])
subplot(122)
hist(X, alpha=0.5)
xlim([-5,5])
ylim([0,100])
hist(Y,alpha=0.5)
xlim([-5,5])
ylim([0,100])
legend(["Gaussian", "Laplace"])
_=title('Histograms')
print "Gaussian vs. Laplace"
print "Sample means: %.2f vs %.2f" % (mean(X), mean(Y))
print "Samples variances: %.2f vs %.2f" % (var(X), var(Y))
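# Added sketch (not in the original notebook): a plain t-test only compares means,
# so it will typically fail to tell these two samples apart -- which is exactly
# why a kernel two-sample test is needed here.
from scipy.stats import ttest_ind
_, p_ttest = ttest_ind(X, Y)
print("t-test p-value: %.2f" % p_ttest)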
# turn data into Shogun representation (columns vectors)
feat_p=RealFeatures(X.reshape(1,len(X)))
feat_q=RealFeatures(Y.reshape(1,len(Y)))
# choose kernel for testing. Here: Gaussian
kernel_width=1
kernel=GaussianKernel(10, kernel_width)
# create mmd instance of test-statistic
mmd=QuadraticTimeMMD(kernel, feat_p, feat_q)
# compute biased and unbiased test statistic (default is unbiased)
mmd.set_statistic_type(BIASED)
biased_statistic=mmd.compute_statistic()
mmd.set_statistic_type(UNBIASED)
unbiased_statistic=mmd.compute_statistic()
print "%d x MMD_b[X,Y]^2=%.2f" % (len(X), biased_statistic)
print "%d x MMD_u[X,Y]^2=%.2f" % (len(X), unbiased_statistic)
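# Added sketch (not in the original notebook): the biased MMD^2 estimate written out in
# plain numpy, to make the formula explicit. This assumes k(x,y)=exp(-(x-y)^2/kernel_width)
# (Shogun's Gaussian kernel convention, to the best of our knowledge) and, like the Shogun
# output above, reports the statistic scaled by the number of samples.
Kxx = exp(-(X[:, newaxis] - X[newaxis, :])**2 / kernel_width)
Kyy = exp(-(Y[:, newaxis] - Y[newaxis, :])**2 / kernel_width)
Kxy = exp(-(X[:, newaxis] - Y[newaxis, :])**2 / kernel_width)
print("numpy: %d x MMD_b[X,Y]^2=%.2f" % (len(X), len(X) * (Kxx.mean() + Kyy.mean() - 2 * Kxy.mean())))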
# this is not necessary as bootstrapping is the default
mmd.set_null_approximation_method(PERMUTATION)
mmd.set_statistic_type(UNBIASED)
# to reduce runtime; this should be larger in practice
mmd.set_num_null_samples(100)
# now show a couple of ways to compute the test
# compute p-value for computed test statistic
p_value=mmd.compute_p_value(unbiased_statistic)
print "P-value of MMD value %.2f is %.2f" % (unbiased_statistic, p_value)
# compute threshold for rejecting H_0 for a given test power
alpha=0.05
threshold=mmd.compute_threshold(alpha)
print "Threshold for rejecting H0 with a test power of %.2f is %.2f" % (alpha, threshold)
# performing the test by hand given the above results, note that those two are equivalent
if unbiased_statistic>threshold:
print "H0 is rejected with confidence %.2f" % alpha
if p_value<alpha:
print "H0 is rejected with confidence %.2f" % alpha
# or, compute the full two-sample test directly
# fixed test power, binary decision
binary_test_result=mmd.perform_test(alpha)
if binary_test_result:
print "H0 is rejected with confidence %.2f" % alpha
significance_test_result=mmd.perform_test()
print "P-value of MMD test is %.2f" % significance_test_result
if significance_test_result<alpha:
print "H0 is rejected with confidence %.2f" % alpha
# precompute kernel to be faster for null sampling
p_and_q=mmd.get_p_and_q()
kernel.init(p_and_q, p_and_q);
precomputed_kernel=CustomKernel(kernel);
mmd.set_kernel(precomputed_kernel);
# increase number of iterations since should be faster now
mmd.set_num_null_samples(500);
p_value_boot=mmd.perform_test();
print "P-value of MMD test is %.2f" % p_value_boot
num_samples=500
# sample null distribution
mmd.set_num_null_samples(num_samples)
null_samples=mmd.sample_null()
# sample alternative distribution, generate new data for that
alt_samples=zeros(num_samples)
for i in range(num_samples):
X=norm.rvs(size=n, loc=mu, scale=sigma2)
Y=laplace.rvs(size=n, loc=mu, scale=b)
feat_p=RealFeatures(reshape(X, (1,len(X))))
feat_q=RealFeatures(reshape(Y, (1,len(Y))))
mmd=QuadraticTimeMMD(kernel, feat_p, feat_q)
alt_samples[i]=mmd.compute_statistic()
def plot_alt_vs_null(alt_samples, null_samples, alpha):
figure(figsize=(18,5))
subplot(131)
hist(null_samples, 50, color='blue')
title('Null distribution')
subplot(132)
title('Alternative distribution')
hist(alt_samples, 50, color='green')
subplot(133)
hist(null_samples, 50, color='blue')
hist(alt_samples, 50, color='green', alpha=0.5)
title('Null and alternative distriution')
# find (1-alpha) element of null distribution
null_samples_sorted=sort(null_samples)
quantile_idx=int(num_samples*(1-alpha))
quantile=null_samples_sorted[quantile_idx]
axvline(x=quantile, ymin=0, ymax=100, color='red', label=str(int(round((1-alpha)*100))) + '% quantile of null')
_=legend()
plot_alt_vs_null(alt_samples, null_samples, alpha)
# optional: plot spectrum of joint kernel matrix
from numpy.linalg import eig
# get joint feature object and compute kernel matrix and its spectrum
feats_p_q=mmd.get_p_and_q()
mmd.get_kernel().init(feats_p_q, feats_p_q)
K=mmd.get_kernel().get_kernel_matrix()
w,_=eig(K)
# visualise K and its spectrum (only up to threshold)
figure(figsize=(18,5))
subplot(121)
imshow(K, interpolation="nearest")
title("Kernel matrix K of joint data $X$ and $Y$")
subplot(122)
thresh=0.1
plot(w[:len(w[w>thresh])])
_=title("Eigenspectrum of K until component %d" % len(w[w>thresh]))
# threshold for eigenspectrum
thresh=0.1
# compute number of eigenvalues to use
num_eigen=len(w[w>thresh])
# finally, do the test, use biased statistic
mmd.set_statistic_type(BIASED)
#tell Shogun to use spectrum approximation
mmd.set_null_approximation_method(MMD2_SPECTRUM)
mmd.set_num_eigenvalues_spectrum(num_eigen)
mmd.set_num_samples_spectrum(num_samples)
# the usual test interface
p_value_spectrum=mmd.perform_test()
print "Spectrum: P-value of MMD test is %.2f" % p_value_spectrum
# compare with ground truth bootstrapping
mmd.set_null_approximation_method(PERMUTATION)
mmd.set_num_null_samples(num_samples)
p_value_boot=mmd.perform_test()
print "Bootstrapping: P-value of MMD test is %.2f" % p_value_spectrum
# tell Shogun to use gamma approximation
mmd.set_null_approximation_method(MMD2_GAMMA)
# the usual test interface
p_value_gamma=mmd.perform_test()
print "Gamma: P-value of MMD test is %.2f" % p_value_gamma
# compare with ground truth bootstrapping
mmd.set_null_approximation_method(PERMUTATION)
p_value_boot=mmd.perform_test()
print "Bootstrapping: P-value of MMD test is %.2f" % p_value_spectrum
# type I error is false alarm, therefore sample data under H0
num_trials=50
rejections_gamma=zeros(num_trials)
rejections_spectrum=zeros(num_trials)
rejections_bootstrap=zeros(num_trials)
num_samples=50
alpha=0.05
for i in range(num_trials):
X=norm.rvs(size=n, loc=mu, scale=sigma2)
Y=laplace.rvs(size=n, loc=mu, scale=b)
    # simulate H0 via merging samples before computing the statistic
Z=hstack((X,Y))
X=Z[:len(X)]
Y=Z[len(X):]
feat_p=RealFeatures(reshape(X, (1,len(X))))
feat_q=RealFeatures(reshape(Y, (1,len(Y))))
# gamma
mmd=QuadraticTimeMMD(kernel, feat_p, feat_q)
mmd.set_null_approximation_method(MMD2_GAMMA)
mmd.set_statistic_type(BIASED)
rejections_gamma[i]=mmd.perform_test(alpha)
# spectrum
mmd=QuadraticTimeMMD(kernel, feat_p, feat_q)
mmd.set_null_approximation_method(MMD2_SPECTRUM)
mmd.set_num_eigenvalues_spectrum(num_eigen)
mmd.set_num_samples_spectrum(num_samples)
mmd.set_statistic_type(BIASED)
rejections_spectrum[i]=mmd.perform_test(alpha)
# bootstrap (precompute kernel)
mmd=QuadraticTimeMMD(kernel, feat_p, feat_q)
p_and_q=mmd.get_p_and_q()
kernel.init(p_and_q, p_and_q)
precomputed_kernel=CustomKernel(kernel)
mmd.set_kernel(precomputed_kernel)
mmd.set_null_approximation_method(PERMUTATION)
mmd.set_num_null_samples(num_samples)
mmd.set_statistic_type(BIASED)
rejections_bootstrap[i]=mmd.perform_test(alpha)
convergence_gamma=cumsum(rejections_gamma)/(arange(num_trials)+1)
convergence_spectrum=cumsum(rejections_spectrum)/(arange(num_trials)+1)
convergence_bootstrap=cumsum(rejections_bootstrap)/(arange(num_trials)+1)
print "Average rejection rate of H0 for Gamma is %.2f" % mean(convergence_gamma)
print "Average rejection rate of H0 for Spectrum is %.2f" % mean(convergence_spectrum)
print "Average rejection rate of H0 for Bootstrapping is %.2f" % mean(rejections_bootstrap)
# parameters of the dataset
m=20000
distance=10
stretch=5
num_blobs=3
angle=pi/4
# these are streaming features
gen_p=GaussianBlobsDataGenerator(num_blobs, distance, 1, 0)
gen_q=GaussianBlobsDataGenerator(num_blobs, distance, stretch, angle)
# stream some data and plot
num_plot=1000
features=gen_p.get_streamed_features(num_plot)
features=features.create_merged_copy(gen_q.get_streamed_features(num_plot))
data=features.get_feature_matrix()
figure(figsize=(18,5))
subplot(121)
grid(True)
plot(data[0][0:num_plot], data[1][0:num_plot], 'r.', label='$x$')
title('$X\sim p$')
subplot(122)
grid(True)
plot(data[0][num_plot:2*num_plot], data[1][num_plot:2*num_plot], 'b.', label='$y$', alpha=0.5)
_=title('$Y\sim q$')
block_size=100
# if features are already under the streaming interface, just pass them
mmd=LinearTimeMMD(kernel, gen_p, gen_q, m, block_size)
# compute an unbiased estimate in linear time
statistic=mmd.compute_statistic()
print "MMD_l[X,Y]^2=%.2f" % statistic
# note: due to the streaming nature, successive calls of compute statistic use different data
# and produce different results. Data cannot be stored in memory
for _ in range(5):
print "MMD_l[X,Y]^2=%.2f" % mmd.compute_statistic()
# data source
gen_p=GaussianBlobsDataGenerator(num_blobs, distance, 1, 0)
gen_q=GaussianBlobsDataGenerator(num_blobs, distance, stretch, angle)
# retrieve some points, store them as non-streaming data in memory
data_p=gen_p.get_streamed_features(100)
data_q=gen_q.get_streamed_features(data_p.get_num_vectors())
print "Number of data is %d" % data_p.get_num_vectors()
# cast data in memory as streaming features again (which now stream from the in-memory data)
streaming_p=StreamingRealFeatures(data_p)
streaming_q=StreamingRealFeatures(data_q)
# it is important to start the internal parser to avoid deadlocks
streaming_p.start_parser()
streaming_q.start_parser()
# example to create mmd (note that m can be maximum the number of data in memory)
mmd=LinearTimeMMD(GaussianKernel(10,1), streaming_p, streaming_q, data_p.get_num_vectors(), 1)
print "Linear time MMD statistic: %.2f" % mmd.compute_statistic()
mmd=LinearTimeMMD(kernel, gen_p, gen_q, m, block_size)
print "m=%d samples from p and q" % m
print "Binary test result is: " + ("Rejection" if mmd.perform_test(alpha) else "No rejection")
print "P-value test result is %.2f" % mmd.perform_test()
sigmas=[2**x for x in linspace(-5,5, 10)]
print "Choosing kernel width from", ["{0:.2f}".format(sigma) for sigma in sigmas]
combined=CombinedKernel()
for i in range(len(sigmas)):
combined.append_kernel(GaussianKernel(10, sigmas[i]))
# mmd instance using streaming features
block_size=1000
mmd=LinearTimeMMD(combined, gen_p, gen_q, m, block_size)
# optimal kernel choice is possible for linear time MMD
selection=MMDKernelSelectionOpt(mmd)
# select best kernel
best_kernel=selection.select_kernel()
best_kernel=GaussianKernel.obtain_from_generic(best_kernel)
print "Best single kernel has bandwidth %.2f" % best_kernel.get_width()
alpha=0.05
mmd=LinearTimeMMD(best_kernel, gen_p, gen_q, m, block_size)
mmd.set_null_approximation_method(MMD1_GAUSSIAN);
p_value_best=mmd.perform_test();
print "Bootstrapping: P-value of MMD test with optimal kernel is %.2f" % p_value_best
mmd=LinearTimeMMD(best_kernel, gen_p, gen_q, 5000, block_size)
num_samples=500
# sample null and alternative distribution, implicitly generate new data for that
null_samples=zeros(num_samples)
alt_samples=zeros(num_samples)
for i in range(num_samples):
alt_samples[i]=mmd.compute_statistic()
# tell MMD to merge data internally while streaming
mmd.set_simulate_h0(True)
null_samples[i]=mmd.compute_statistic()
mmd.set_simulate_h0(False)
plot_alt_vs_null(alt_samples, null_samples, alpha)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Single bidding zone with fixed load, one period
Step2: Two bidding zones connected by transmission, one period
Step3: Three bidding zones connected by transmission, one period
Step4: Single bidding zone with price-sensitive industrial load, one period
Step5: Single bidding zone with fixed load, several periods
Step6: Single bidding zone with fixed load and storage, several periods
|
<ASSISTANT_TASK:>
Python Code:
import pypsa, numpy as np
# marginal costs in EUR/MWh
marginal_costs = {"Wind": 0, "Hydro": 0, "Coal": 30, "Gas": 60, "Oil": 80}
# power plant capacities (nominal powers in MW) in each country (not necessarily realistic)
power_plant_p_nom = {
"South Africa": {"Coal": 35000, "Wind": 3000, "Gas": 8000, "Oil": 2000},
"Mozambique": {
"Hydro": 1200,
},
"Swaziland": {
"Hydro": 600,
},
}
# transmission capacities in MW (not necessarily realistic)
transmission = {
"South Africa": {"Mozambique": 500, "Swaziland": 250},
"Mozambique": {"Swaziland": 100},
}
# country electrical loads in MW (not necessarily realistic)
loads = {"South Africa": 42000, "Mozambique": 650, "Swaziland": 250}
country = "South Africa"
network = pypsa.Network()
network.add("Bus", country)
for tech in power_plant_p_nom[country]:
network.add(
"Generator",
"{} {}".format(country, tech),
bus=country,
p_nom=power_plant_p_nom[country][tech],
marginal_cost=marginal_costs[tech],
)
network.add("Load", "{} load".format(country), bus=country, p_set=loads[country])
# Run optimisation to determine market dispatch
network.lopf()
# print the load active power (P) consumption
network.loads_t.p
# print the generator active power (P) dispatch
network.generators_t.p
# print the clearing price (corresponding to gas)
network.buses_t.marginal_price
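# optional sanity check (illustrative, not part of the market example itself): total dispatch cost
# in EUR/h, i.e. the optimised generator output weighted by the marginal costs defined above
total_cost = (network.generators_t.p.iloc[0] * network.generators.marginal_cost).sum()
print(total_cost)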
network = pypsa.Network()
countries = ["Mozambique", "South Africa"]
for country in countries:
network.add("Bus", country)
for tech in power_plant_p_nom[country]:
network.add(
"Generator",
"{} {}".format(country, tech),
bus=country,
p_nom=power_plant_p_nom[country][tech],
marginal_cost=marginal_costs[tech],
)
network.add("Load", "{} load".format(country), bus=country, p_set=loads[country])
# add transmission as controllable Link
if country not in transmission:
continue
for other_country in countries:
if other_country not in transmission[country]:
continue
# NB: Link is by default unidirectional, so have to set p_min_pu = -1
# to allow bidirectional (i.e. also negative) flow
network.add(
"Link",
"{} - {} link".format(country, other_country),
bus0=country,
bus1=other_country,
p_nom=transmission[country][other_country],
p_min_pu=-1,
)
network.lopf()
network.loads_t.p
network.generators_t.p
network.links_t.p0
# print the clearing price (corresponding to water in Mozambique and gas in SA)
network.buses_t.marginal_price
# link shadow prices
network.links_t.mu_lower
network = pypsa.Network()
countries = ["Swaziland", "Mozambique", "South Africa"]
for country in countries:
network.add("Bus", country)
for tech in power_plant_p_nom[country]:
network.add(
"Generator",
"{} {}".format(country, tech),
bus=country,
p_nom=power_plant_p_nom[country][tech],
marginal_cost=marginal_costs[tech],
)
network.add("Load", "{} load".format(country), bus=country, p_set=loads[country])
# add transmission as controllable Link
if country not in transmission:
continue
for other_country in countries:
if other_country not in transmission[country]:
continue
# NB: Link is by default unidirectional, so have to set p_min_pu = -1
# to allow bidirectional (i.e. also negative) flow
network.add(
"Link",
"{} - {} link".format(country, other_country),
bus0=country,
bus1=other_country,
p_nom=transmission[country][other_country],
p_min_pu=-1,
)
network.lopf()
network.loads_t.p
network.generators_t.p
network.links_t.p0
# print the clearing price (corresponding to hydro in S and M, and gas in SA)
network.buses_t.marginal_price
# link shadow prices
network.links_t.mu_lower
country = "South Africa"
network = pypsa.Network()
network.add("Bus", country)
for tech in power_plant_p_nom[country]:
network.add(
"Generator",
"{} {}".format(country, tech),
bus=country,
p_nom=power_plant_p_nom[country][tech],
marginal_cost=marginal_costs[tech],
)
# standard high marginal utility consumers
network.add("Load", "{} load".format(country), bus=country, p_set=loads[country])
# add an industrial load as a dummy negative-dispatch generator with marginal utility of 70 EUR/MWh for 8000 MW
network.add(
"Generator",
"{} industrial load".format(country),
bus=country,
p_max_pu=0,
p_min_pu=-1,
p_nom=8000,
marginal_cost=70,
)
network.lopf()
network.loads_t.p
# NB only half of industrial load is served, because this maxes out
# Gas. Oil is too expensive with a marginal cost of 80 EUR/MWh
network.generators_t.p
network.buses_t.marginal_price
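# optional check (illustrative): the industrial load appears as negative dispatch of the dummy
# generator; this shows how much of its 8000 MW bid is actually served
served = -network.generators_t.p["South Africa industrial load"].iloc[0]
print(served)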
country = "South Africa"
network = pypsa.Network()
# snapshots labelled by [0,1,2,3]
network.set_snapshots(range(4))
network.add("Bus", country)
# p_max_pu is variable for wind
for tech in power_plant_p_nom[country]:
network.add(
"Generator",
"{} {}".format(country, tech),
bus=country,
p_nom=power_plant_p_nom[country][tech],
marginal_cost=marginal_costs[tech],
p_max_pu=([0.3, 0.6, 0.4, 0.5] if tech == "Wind" else 1),
)
# load which varies over the snapshots
network.add(
"Load",
"{} load".format(country),
bus=country,
p_set=loads[country] + np.array([0, 1000, 3000, 4000]),
)
# specify that we consider all snapshots
network.lopf(network.snapshots)
network.loads_t.p
network.generators_t.p
network.buses_t.marginal_price
country = "South Africa"
network = pypsa.Network()
# snapshots labelled by [0,1,2,3]
network.set_snapshots(range(4))
network.add("Bus", country)
# p_max_pu is variable for wind
for tech in power_plant_p_nom[country]:
network.add(
"Generator",
"{} {}".format(country, tech),
bus=country,
p_nom=power_plant_p_nom[country][tech],
marginal_cost=marginal_costs[tech],
p_max_pu=([0.3, 0.6, 0.4, 0.5] if tech == "Wind" else 1),
)
# load which varies over the snapshots
network.add(
"Load",
"{} load".format(country),
bus=country,
p_set=loads[country] + np.array([0, 1000, 3000, 4000]),
)
# storage unit to do price arbitrage
network.add(
"StorageUnit",
"{} pumped hydro".format(country),
bus=country,
p_nom=1000,
max_hours=6, # energy storage in terms of hours at full power
)
network.lopf(network.snapshots)
network.loads_t.p
network.generators_t.p
network.storage_units_t.p
network.storage_units_t.state_of_charge
network.buses_t.marginal_price
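# optional check (illustrative): arbitrage revenue of the storage unit, i.e. its dispatch
# (positive when discharging) times the nodal clearing price, summed over all snapshots
price = network.buses_t.marginal_price["South Africa"]
p_storage = network.storage_units_t.p["South Africa pumped hydro"]
print((p_storage * price).sum())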
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Logistic regression
Step2: Non-parametric models
|
<ASSISTANT_TASK:>
Python Code:
%run ../src/LinearRegression.py
%run ../src/PolynomialFeatures.py
# LINEAR REGRESSION
# Generate random data
X = np.linspace(0,20,10)[:,np.newaxis]
y = 0.1*(X**2) + np.random.normal(0,2,10)[:,np.newaxis] + 20
# Fit model to data
lr = LinearRegression()
lr.fit(X,y)
# Predict new data
x_test = np.array([0,20])[:,np.newaxis]
y_predict = lr.predict(x_test)
# POLYNOMIAL REGRESSION
# Fit model to data
poly = PolynomialFeatures(2)
lr = LinearRegression()
lr.fit(poly.fit_transform(X),y)
# Predict new data
x_pol = np.linspace(0, 20, 100)[:, np.newaxis]
y_pol = lr.predict(poly.fit_transform(x_pol))
# Plot data
fig = plt.figure(figsize=(14, 6))
# Plot linear regression
ax1 = fig.add_subplot(1, 2, 1)
plt.scatter(X,y)
plt.plot(x_test, y_predict, "r")
plt.xlim(0, 20)
plt.ylim(0, 50)
# Plot polynomial regression
ax2 = fig.add_subplot(1, 2, 2)
plt.scatter(X,y)
plt.plot(x_pol, y_pol, "r")
plt.xlim(0, 20)
plt.ylim(0, 50);
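# optional check (illustrative): mean squared training error of a plain linear fit versus the
# degree-2 polynomial fit, reusing only the fit/predict methods exercised above
lin = LinearRegression()
lin.fit(X, y)
mse_linear = np.mean((y - lin.predict(X))**2)
mse_poly = np.mean((y - lr.predict(poly.fit_transform(X)))**2)
print("MSE linear: %.3f, MSE polynomial: %.3f" % (mse_linear, mse_poly))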
%run ../src/LogisticRegression.py
X = np.hstack((np.random.normal(90, 2, 100), np.random.normal(110, 2, 100)))[:, np.newaxis]
y = np.array([0]*100 + [1]*100)[:, np.newaxis]
logr = LogisticRegression(learnrate=0.002, eps = 0.001)
logr.fit(X, y)
x_test = np.array([-logr.w[0]/logr.w[1]]).reshape(1,1) #np.linspace(-10, 10, 30)[:, np.newaxis]
y_probs = logr.predict_proba(x_test)[:, 0:1]
print("Probability:" + str(y_probs))
# Plot data
fig = plt.figure(figsize=(14, 6))
# Plot sigmoid function
ax1 = fig.add_subplot(1, 2, 1)
t = np.linspace(-15,15,100)
plt.plot(t, logr._sigmoid(t))
# Plot logistic regression
ax2 = fig.add_subplot(1, 2, 2)
plt.scatter(X, y)
plt.scatter(x_test, y_probs, c='r')
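# optional probe (illustrative; the column convention of this custom predict_proba is assumed to
# match the decision-boundary probe above): predicted probabilities for one point from each cluster
print(logr.predict_proba(np.array([[90.0], [110.0]])))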
%run ../src/KNearestNeighbors.py
# Generate data from 3 gaussians
gaussian_1 = np.random.multivariate_normal(np.array([1, 0.0]), np.eye(2)*0.01, size=100)
gaussian_2 = np.random.multivariate_normal(np.array([0.0, 1.0]), np.eye(2)*0.01, size=100)
gaussian_3 = np.random.multivariate_normal(np.array([0.1, 0.1]), np.eye(2)*0.001, size=100)
X = np.vstack((gaussian_1, gaussian_2, gaussian_3))
y = np.array([1]*100 + [2]*100 + [3]*100)
# Fit the model
knn = KNearestNeighbors(5)
knn.fit(X, y)
# Predict various points in space
XX, YY = np.mgrid[-5:5:.2, -5:5:.2]
X_test = np.hstack((XX.ravel()[:, np.newaxis], YY.ravel()[:, np.newaxis]))
y_test = knn.predict(X_test)
fig = plt.figure(figsize=(14, 6))
# Plot original data
ax1 = fig.add_subplot(1, 2, 1)
ax1.plot(X[y == 1,0], X[y == 1,1], 'bo')
ax1.plot(X[y == 2,0], X[y == 2,1], 'go')
ax1.plot(X[y == 3,0], X[y == 3,1], 'ro')
# Plot predicted data
ax2 = fig.add_subplot(1, 2, 2)
ax2.contourf(XX, YY, y_test.reshape(50,50));
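# optional check (illustrative): resubstitution accuracy of the 5-NN classifier on its own
# training points; should be close to 1 for these well separated blobs
print(np.mean(knn.predict(X) == y))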
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The data come from a manually tagged treebank (a syntactically annotated corpus) developed by the Zespół Inżynierii Lingwistycznej at IPI PAN on the basis of the National Corpus of Polish (Wawer, 2015).
Step2: Network architecture
Step3: Word embeddings
Step4: Does the difference between the vectors for "Paryż" and "Francja" represent the concept CAPITAL?
Step5: Training
Step6: Accuracy evaluation
Step7: This value is heavily overestimated because of the uneven class frequencies (1
Step8: It does not look impressive, but...
|
<ASSISTANT_TASK:>
Python Code:
import json
from itertools import chain
from pprint import pprint
from time import time
import os
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score
from gensim.models import Word2Vec
from gensim.corpora.dictionary import Dictionary
os.environ['THEANO_FLAGS'] = "device=gpu1"
import theano
# theano.config.device = 'gpu' # Compute using GPU
# theano.config.floatX = 'float32'
from keras.preprocessing import sequence
from keras.models import Sequential, Model
from keras.layers import Input
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import LSTM
from keras.layers.core import Dense, Dropout
from keras.layers.wrappers import TimeDistributed
from keras.utils.visualize_util import plot
np.random.seed(1337)
print theano.config.device
def indices_to_one_hot_encodings(index, vector_length):
return [[1, 0] if i == index else [0, 1] for i in xrange(vector_length)]
# Load and process treebank data
treebank_file1 = open('json/OPTA-treebank-0.1.json')
treebank_file2 = open('skladnica_output.json')
treebank = chain(list(json.load(treebank_file1)), list(json.load(treebank_file2)))
X = []
y = []
labels = []
for entry in treebank:
tree = entry['parsedSent']
words = []
sentiment = None
for index, node in enumerate(tree):
word = node.split('\t')[1].lower()
words.append(word)
if node.split('\t')[10] == 'S':
sentiment = index
    if sentiment is not None:  # index 0 is a valid position, so compare against None explicitly
labels.append(words[sentiment])
X.append(words)
y.append(indices_to_one_hot_encodings(sentiment, len(words)))
dataset_length = len(X)
slicing_point = int(dataset_length*0.9)
X_train_raw = X[:slicing_point]
y_train_raw = y[:slicing_point]
X_test_raw = X[slicing_point+1:]
y_test_raw = y[slicing_point+1:]
treebank_vocabulary = set(chain(*X))
print len(treebank_vocabulary)
X_train = X_train_raw
y_train = labels
len(X_train) + len(X_test_raw)
# Examples from the training data:
for index in [2, 44, 111, 384, 69]:
print ' '.join(X_train[index]), '\n', y_train[index], '\n'
w2v_model = Word2Vec.load('w2v_allwiki_nkjp300_200.model')
# Import w2v's dictionary to a bag-of-words model
w2v_vocabulary = Dictionary()
w2v_vocabulary.doc2bow(w2v_model.vocab.keys(), allow_update=True)
print w2v_vocabulary.items()[:10]
# Initialize dicts for representing w2v's dictionary as indices and 200-dim vectors
w2indx = {v: k+1 for k, v in w2v_vocabulary.items()}
w2vec = {word: w2v_model[word] for word in w2indx.keys()}
w2v_vocabulary_size = len(w2indx) + 1
w2v_vocabulary_dimension = len(w2vec.values()[0])
def map_treebank_words_to_w2v_indices(treebank_data, w2indx):
treebank_data_vec = []
for sentence in treebank_data:
vectorized_sentence = []
for word in sentence:
try:
vectorized_sentence.append(w2indx[word])
except KeyError: # words absent in w2v model will be indexed as 0s
vectorized_sentence.append(0)
treebank_data_vec.append(vectorized_sentence)
return treebank_data_vec
X_train = map_treebank_words_to_w2v_indices(X_train_raw, w2indx)
X_test = map_treebank_words_to_w2v_indices(X_test_raw, w2indx)
print X_test[4]
# Define numpy weights matrix for embedding layer
embedding_weights = np.zeros((w2v_vocabulary_size , w2v_vocabulary_dimension))
for word, index in w2indx.items():
embedding_weights[index, :] = w2vec[word]
# max sentence length
max(
len(max(X_train, key=lambda sentence: len(sentence))),
len(max(X_test, key=lambda sentence: len(sentence)))
)
# Normalize sequence lengths to 40 (shorter sentences will be padded with 0s)
sentence_length = 40
X_train = sequence.pad_sequences(X_train, maxlen=sentence_length)
X_test = sequence.pad_sequences(X_test, maxlen=sentence_length)
y_train = sequence.pad_sequences(y_train_raw, maxlen=sentence_length, value=[0, 1])
y_test = sequence.pad_sequences(y_test_raw, maxlen=sentence_length, value=[0, 1])
# print X_train[2]
# print y_train[2]
inputs = Input(shape=(sentence_length,), dtype='int32')
x = Embedding(
input_dim=w2v_vocabulary_size,
output_dim=w2v_vocabulary_dimension,
input_length=sentence_length,
mask_zero=True,
weights=[embedding_weights]
)(inputs)
lstm_out = LSTM(200, return_sequences=True)(x)
regularized_data = Dropout(0.3)(lstm_out)
predictions = TimeDistributed(Dense(2, activation='sigmoid'))(regularized_data)
model = Model(input=inputs, output=predictions)
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.summary()
from IPython.display import SVG
from keras.utils.visualize_util import model_to_dot
SVG(model_to_dot(model).create(prog='dot', format='svg'))
# in the model we used, words are represented as
# 200-element vectors of 32-bit floating point numbers
w2v_model['filozofia']
w2v_model['filozofia'].shape
w2v_model.similarity(u'filozofia', u'inżynieria')
w2v_model.similarity(u'filozofia', u'nauka')
w2v_model.similarity(u'filozofia', u'literatura')
# pick the word that does not fit with the others
w2v_model.doesnt_match(['Kant', 'Leibniz', 'Derrida', 'Wittgenstein'])
# Kobieta + król - mężczyzna = królowa (woman + king - man = queen)
# The widely publicised example from (Mikolov et al., 2013b)
w2v_model.most_similar(positive=[u'kobieta', u'król'], negative=[u'mężczyzna'])
# Paryż - Francja + Polska = Warszawa
w2v_model.most_similar(positive=[u'Paryż', u'Polska'], negative=[u'Francja'])
# filozofia - logika = literatura
w2v_model.most_similar(positive=[u'filozofia',], negative=[u'logika'])
# filozofia - postmodernizm = wiedza
w2v_model.most_similar(positive=[u'filozofia',], negative=[u'postmodernizm'])
batch_size = 5
n_epoch = 5
hist = model.fit(X_train, y_train, batch_size=batch_size, nb_epoch=n_epoch,
validation_data=(X_test, y_test), verbose=2)
# epochs = 10
# for i in range(epochs):
# print('Epoch', i, '/', epochs)
# model.fit
# plt.rcParams['figure.figsize'] = (10,10)
# axes = plt.gca()
# x_min = hist.epoch[0]
# x_max = hist.epoch[-1]+1
# axes.set_xlim([x_min,x_max])
# plt.scatter(hist.epoch, hist.history['acc'], color='r')
# plt.plot(hist.history['acc'], color='r', label=u'Trafność mierzona na zbiorze treningowym')
# plt.scatter(hist.epoch, hist.history['val_acc'], color='c')
# plt.plot(hist.history['val_acc'], color='c', label=u'Trafność mierzona na zbiorze walidacyjnym')
# plt.xlabel('epoki')
# plt.ylabel(u'Trafność')
# plt.title(u'Trafność w kolejnych epokach')
# plt.legend()
# plt.show()
# Fraction of correctly classified tokens
score, acc = model.evaluate(X_test, y_test, batch_size=batch_size, verbose=0)
print 'Test accuracy:', acc
predictions = model.predict(X_test, verbose=1)
def change_encoding_word(word):
return 1 if list(np.rint(word)) == [1, 0] else 0
def change_encoding(one_hot_encoded_sentence):
# Switch from ndarray([[0.88, 0.11], [0.34, 0.98]]) encoding to [1, 0] encoding
# and finally index number
normalized_sentence = []
for word in one_hot_encoded_sentence:
normalized_sentence.append(change_encoding_word(word))
return normalized_sentence
def accurately_evaluated_samples():
total_accuracy = 0
for n, sentence in enumerate(predictions):
index_of_sentiment = np.argmax(change_encoding(sentence))
# print change_encoding_word(y_test[n][index_of_sentiment])
total_accuracy += change_encoding_word(y_test[n][index_of_sentiment])
return total_accuracy
# Fraction of sentiment tokens that were correctly recognised as sentiment carriers
float(accurately_evaluated_samples())/y_test.shape[0]
hist.history
score, acc = model.evaluate(X_test, y_test,
batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', acc)
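# optional usage sketch (illustrative, with a made-up sentence): locate the most likely
# sentiment-bearing token, reusing the preprocessing helpers defined above; note that
# pad_sequences prepends zeros, so the real words occupy the last positions
new_sentence = [u'ten', u'film', u'jest', u'naprawdę', u'świetny']
x_new = sequence.pad_sequences(map_treebank_words_to_w2v_indices([new_sentence], w2indx),
                               maxlen=sentence_length)
pred = model.predict(x_new)[0]
idx = np.argmax(change_encoding(pred))
offset = sentence_length - len(new_sentence)
if idx >= offset:
    print new_sentence[idx - offset]
else:
    print "predicted sentiment position falls inside the padding"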
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'hammoz-consortium', 'mpiesm-1-2-ham', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Datasets and Python modules
Step2: And to make use of interactive graphics, we import the folium package
Step3: Part 2. Hypothesis testing
Step4: This time we filter the S1 archive to get an image collection consisting of two images acquired in the month of August, 2020. Because we are interested in change detection, it is essential that the local incidence angles be the same in both images. So now we specify both the orbit pass (ASCENDING) as well the relative orbit number (15)
Step5: Here are the acquisition times in the collection, formatted with Python's time module
Step6: A ratio image
Step7: Now we'll build the ratio of the VV bands and display it
Step8: As in the first part of the Tutorial, standard GEE reducers can be used to calculate a histogram, mean and variance of the ratio image
Step9: Here is a plot of the (normalized) histogram using numpy and matplotlib
Step10: This looks a bit like the gamma distribution we met in Part 1 but is in fact an F probability distribution. The F distribution is defined as the ratio of two chi square distributions, see Eq. (1.12), with $m_1$ and $m_2$ degrees of freedom. The above histogram is an $F$ distribution with $m_1=2m$ and $m_2=2m$ degrees of freedom and is given by
Step11: Checking the mean and variance, we get approximate agreement
Step12: So what is so special about this distribution? When looking for changes between two co-registered Sentinel-1 images acquired at different times, it might seem natural to subtract one from the other and then examine the difference, much as we would do for instance with visual/infrared ground reflectance images. In the case of SAR intensity images this is not a good idea. In the difference of two uncorrelated multilook images $\langle s_1\rangle$ and $\langle s_2\rangle$ the variances add together and, from Eq. (1.21) in the first part of the Tutorial,
Step13: We might guess that the bright pixels here are significant changes, for instance due to aircraft movements on the tarmac or vehicles moving on the highway. Of course ''significant'' doesn't necessarily imply ''interesting''. We already know Frankfurt has a busy airport and that a German Autobahn is always crowded. The question is, how significant are the changes in the statistical sense? Let's now try to answer that question.
Step14: Most changes are within the airport or on the Autobahn. Barge movements on the Main River (upper left hand corner) are also signaled as significant changes. Note that the 'red' changes (significant increases in intensity) do not show up in the 'ratio' overlay, which displays $s_1/s_2$.
Step15: and then plot its histogram, comparing it with the chi square distribution scipy.stats.chi2.pdf() with two degrees of freedom
Step16: Looks pretty good. Note now that a small value of the LRT $Q$ in Eq. (2.15) corresponds to a large value of $-2\log{Q}$. Therefore the P value for a measurement $q$ is now the probability of getting the value $-2\log{q}$
Step17: This is a mixed agricultural/forest area in southern Manitoba, Canada. We'll gather two images, one from the beginning of August and one from the beginning of September, 2018. A lot of harvesting takes place in this interval, so we expect some extensive changes.
Step18: Here are the acquisition times
Step19: Fortunately it is possible to map the chi square cumulative distribution function over an ee.Image() so that a P value image can be calculated directly. This wasn't possible in the single band case, as the F cumulative distribution is not available on the GEE. Here are the P values
Step20: The uniformly dark areas correspond to small or vanishing P values and signify change. The bright areas correspond to no change. Why they are not uniformly bright will be explained below. Now we set a significance threshold of $\alpha=0.01$ and display the significant changes, whereby 1% of them will be false positives. For reference we also show the 2018 Canada AAFC Annual Crop Inventory map, which is available as a GEE collection
Step21: The major crops in the scene are soybeans (dark brown), oats (light brown), canola (light green), corn (light yellow) and winter wheat (dark gray). The wooded areas exhibit little change, while canola has evidently been extensively harvested in the interval.
Step22: So the P values of no-change measurements are uniformly distributed over $[0, 1]$ (the excess of small P values at the left can be ascribed to genuine changes within the polygon). A large P value is no more indicative of no change than a small one. Of course it has to be this way. When, for example, we set a significance level of 5%, then the fraction of false positives, i.e., the fraction of P values smaller than 0.05 given $H_0$, must also be 5%. This accounts for the noisy appearance of the P value image in the no-change regions.
Step23: Now we display the changes, with positive definite red, negative definite blue, and indefinite yellow
|
<ASSISTANT_TASK:>
Python Code:
import ee
# Trigger the authentication flow.
ee.Authenticate()
# Initialize the library.
ee.Initialize()
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import norm, gamma, f, chi2
import IPython.display as disp
%matplotlib inline
# Import the Folium library.
import folium
# Define a method for displaying Earth Engine image tiles to folium map.
def add_ee_layer(self, ee_image_object, vis_params, name):
map_id_dict = ee.Image(ee_image_object).getMapId(vis_params)
folium.raster_layers.TileLayer(
tiles = map_id_dict['tile_fetcher'].url_format,
attr = 'Map Data © <a href="https://earthengine.google.com/">Google Earth Engine</a>',
name = name,
overlay = True,
control = True
).add_to(self)
# Add EE drawing method to folium.
folium.Map.add_ee_layer = add_ee_layer
geoJSON = {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
8.473892211914062,
49.98081240937428
],
[
8.658599853515625,
49.98081240937428
],
[
8.658599853515625,
50.06066538593667
],
[
8.473892211914062,
50.06066538593667
],
[
8.473892211914062,
49.98081240937428
]
]
]
}
}
]
}
coords = geoJSON['features'][0]['geometry']['coordinates']
aoi = ee.Geometry.Polygon(coords)
geoJSON = {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
8.534317016601562,
50.021637833966786
],
[
8.530540466308594,
49.99780882512238
],
[
8.564186096191406,
50.00663576154257
],
[
8.578605651855469,
50.019431940583104
],
[
8.534317016601562,
50.021637833966786
]
]
]
}
}
]
}
coords = geoJSON['features'][0]['geometry']['coordinates']
aoi_sub = ee.Geometry.Polygon(coords)
im_coll = (ee.ImageCollection('COPERNICUS/S1_GRD_FLOAT')
.filterBounds(aoi)
.filterDate(ee.Date('2020-08-01'),ee.Date('2020-08-31'))
.filter(ee.Filter.eq('orbitProperties_pass', 'ASCENDING'))
.filter(ee.Filter.eq('relativeOrbitNumber_start', 15))
.sort('system:time_start'))
import time
acq_times = im_coll.aggregate_array('system:time_start').getInfo()
[time.strftime('%x', time.gmtime(acq_time/1000)) for acq_time in acq_times]
im_list = im_coll.toList(im_coll.size())
im1 = ee.Image(im_list.get(0)).select('VV').clip(aoi_sub)
im2 = ee.Image(im_list.get(1)).select('VV').clip(aoi_sub)
ratio = im1.divide(im2)
url = ratio.getThumbURL({'min': 0, 'max': 10})
disp.Image(url=url, width=800)
hist = ratio.reduceRegion(ee.Reducer.fixedHistogram(0, 5, 500), aoi_sub).get('VV').getInfo()
mean = ratio.reduceRegion(ee.Reducer.mean(), aoi_sub).get('VV').getInfo()
variance = ratio.reduceRegion(ee.Reducer.variance(), aoi_sub).get('VV').getInfo()
a = np.array(hist)
x = a[:, 0]
y = a[:, 1] / np.sum(a[:, 1])
plt.grid()
plt.plot(x, y, '.')
plt.show()
m = 5
plt.grid()
plt.plot(x, y, '.', label='data')
plt.plot(x, f.pdf(x, 2*m, 2*m) / 100, '-r', label='F-dist')
plt.legend()
plt.show()
print(mean, m/(m-1))
print(variance, m*(2*m-1)/(m-1)**2/(m-2))
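# Optional illustration: the 0.05% quantile of the F(2m, 2m) distribution; it is used below (as dt)
# for a two-sided test on the ratio image and its reciprocal.
print(f.ppf(0.0005, 2*m, 2*m))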
im1 = ee.Image(im_list.get(0)).select('VV').clip(aoi)
im2 = ee.Image(im_list.get(1)).select('VV').clip(aoi)
ratio = im1.divide(im2)
location = aoi.centroid().coordinates().getInfo()[::-1]
mp = folium.Map(location=location, zoom_start=12)
mp.add_ee_layer(ratio,
{'min': 0, 'max': 20, 'palette': ['black', 'white']}, 'Ratio')
mp.add_child(folium.LayerControl())
display(mp)
# Decision threshold alpha/2:
dt = f.ppf(0.0005, 2*m, 2*m)
# LRT statistics.
q1 = im1.divide(im2)
q2 = im2.divide(im1)
# Change map with 0 = no change, 1 = decrease, 2 = increase in intensity.
c_map = im1.multiply(0).where(q2.lt(dt), 1)
c_map = c_map.where(q1.lt(dt), 2)
# Mask no-change pixels.
c_map = c_map.updateMask(c_map.gt(0))
# Display map with red for increase and blue for decrease in intensity.
location = aoi.centroid().coordinates().getInfo()[::-1]
mp = folium.Map(
location=location, tiles='Stamen Toner',
zoom_start=13)
folium.TileLayer('OpenStreetMap').add_to(mp)
mp.add_ee_layer(ratio,
{'min': 0, 'max': 20, 'palette': ['black', 'white']}, 'Ratio')
mp.add_ee_layer(c_map,
{'min': 0, 'max': 2, 'palette': ['black', 'blue', 'red']},
'Change Map')
mp.add_child(folium.LayerControl())
display(mp)
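# Optional check (illustrative; the band name 'VV' is inherited from im1): count the pixels flagged
# as significant decrease (1) or increase (2) in intensity.
counts = c_map.reduceRegion(
    ee.Reducer.frequencyHistogram(), aoi, scale=10, maxPixels=1e9).get('VV').getInfo()
print(counts)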
def det(im):
return im.expression('b(0) * b(1)')
# Number of looks.
m = 5
im1 = ee.Image(im_list.get(0)).select('VV', 'VH').clip(aoi)
im2 = ee.Image(im_list.get(1)).select('VV', 'VH').clip(aoi)
m2logQ = det(im1).log().add(det(im2).log()).subtract(
det(im1.add(im2)).log().multiply(2)).add(4*np.log(2)).multiply(-2*m)
hist = m2logQ.reduceRegion(
ee.Reducer.fixedHistogram(0, 20, 200), aoi).get('VV').getInfo()
a = np.array(hist)
x = a[:, 0]
y = a[:, 1] / np.sum(a[:, 1])
plt.plot(x, y, '.', label='data')
plt.plot(x, chi2.pdf(x, 2)/10, '-r', label='chi square')
plt.legend()
plt.grid()
plt.show()
geoJSON ={
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
-98.2122802734375,
49.769291532628515
],
[
-98.00559997558594,
49.769291532628515
],
[
-98.00559997558594,
49.88578690918283
],
[
-98.2122802734375,
49.88578690918283
],
[
-98.2122802734375,
49.769291532628515
]
]
]
}
}
]
}
coords = geoJSON['features'][0]['geometry']['coordinates']
aoi1 = ee.Geometry.Polygon(coords)
im1 = ee.Image(ee.ImageCollection('COPERNICUS/S1_GRD_FLOAT')
.filterBounds(aoi1)
.filterDate(ee.Date('2018-08-01'), ee.Date('2018-08-31'))
.filter(ee.Filter.eq('orbitProperties_pass', 'ASCENDING'))
.filter(ee.Filter.eq('relativeOrbitNumber_start', 136))
.first()
.clip(aoi1))
im2 = ee.Image(ee.ImageCollection('COPERNICUS/S1_GRD_FLOAT').filterBounds(aoi1)
.filterDate(ee.Date('2018-09-01'), ee.Date('2018-09-30'))
.filter(ee.Filter.eq('orbitProperties_pass', 'ASCENDING'))
.filter(ee.Filter.eq('relativeOrbitNumber_start', 136))
.first()
.clip(aoi1))
acq_time = im1.get('system:time_start').getInfo()
print( time.strftime('%x', time.gmtime(acq_time/1000)) )
acq_time = im2.get('system:time_start').getInfo()
print( time.strftime('%x', time.gmtime(acq_time/1000)) )
def chi2cdf(chi2, df):
''' Chi square cumulative distribution function for df degrees of freedom
using the built-in incomplete gamma function gammainc() '''
return ee.Image(chi2.divide(2)).gammainc(ee.Number(df).divide(2))
# The observed test statistic image -2logq.
m2logq = det(im1).log().add(det(im2).log()).subtract(
det(im1.add(im2)).log().multiply(2)).add(4*np.log(2)).multiply(-2*m)
# The P value image prob(m2logQ > m2logq) = 1 - prob(m2logQ < m2logq).
p_value = ee.Image.constant(1).subtract(chi2cdf(m2logq, 2))
# Project onto map.
location = aoi1.centroid().coordinates().getInfo()[::-1]
mp = folium.Map(location=location, zoom_start=12)
mp.add_ee_layer(p_value,
{'min': 0,'max': 1, 'palette': ['black', 'white']}, 'P-value')
mp.add_child(folium.LayerControl())
c_map = p_value.multiply(0).where(p_value.lt(0.01), 1)
crop2018 = (ee.ImageCollection('AAFC/ACI')
.filter(ee.Filter.date('2018-01-01', '2018-12-01'))
.first()
.clip(aoi1))
mp = folium.Map(location=location, zoom_start=12)
mp.add_ee_layer(crop2018, {'min': 0, 'max': 255}, 'crop2018')
mp.add_ee_layer(c_map.updateMask(
c_map.gt(0)), {'min': 0, 'max': 1, 'palette': ['black', 'red']}, 'c_map')
mp.add_child(folium.LayerControl())
geoJSON ={
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
-98.18550109863281,
49.769735012247885
],
[
-98.13949584960938,
49.769735012247885
],
[
-98.13949584960938,
49.798109268622
],
[
-98.18550109863281,
49.798109268622
],
[
-98.18550109863281,
49.769735012247885
]
]
]
}
}
]
}
coords = geoJSON['features'][0]['geometry']['coordinates']
aoi1_sub = ee.Geometry.Polygon(coords)
hist = p_value.reduceRegion(ee.Reducer.fixedHistogram(0, 1, 100), aoi1_sub).get('constant').getInfo()
a = np.array(hist)
x = a[:,0]
y = a[:,1]/np.sum(a[:,1])
plt.plot(x, y, '.b', label='p-value')
plt.ylim(0, 0.05)
plt.grid()
plt.legend()
plt.show()
c_map = p_value.multiply(0).where(p_value.lt(0.01), 1)
diff = im2.subtract(im1)
d_map = c_map.multiply(0) # Initialize the direction map to zero.
d_map = d_map.where(det(diff).gt(0), 2) # All pos or neg def diffs are now labeled 2.
d_map = d_map.where(diff.select(0).gt(0), 3) # Re-label pos def (and label some indef) to 3.
d_map = d_map.where(det(diff).lt(0), 1) # Label all indef to 1.
c_map = c_map.multiply(d_map) # Re-label the c_map, 0*X = 0, 1*1 = 1, 1*2= 2, 1*3 = 3.
mp = folium.Map(location=location, zoom_start=12)
mp.add_ee_layer(crop2018, {'min': 0, 'max': 255}, 'crop2018')
mp.add_ee_layer(
c_map.updateMask(c_map.gt(0)), {
'min': 0,
'max': 3,
'palette': ['black', 'yellow', 'blue', 'red']
}, 'c_map')
mp.add_child(folium.LayerControl())
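# Illustrative addition (not part of the original notebook): a rough way to
# quantify how many pixels were flagged as changed, using the same reducers
# shown above. The 10 m scale is an assumption matching Sentinel-1 GRD pixels.
changed_count = c_map.gt(0).reduceRegion(
    reducer=ee.Reducer.sum(), geometry=aoi1, scale=10, maxPixels=1e9)
print(changed_count.getInfo())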
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
a = np.array([1.5, -0.4, 1.3])
vals, idx = np.unique(a, return_inverse=True)
b = np.zeros((a.size, vals.size))
b[np.arange(a.size), idx] = 1
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 케라스 모델로 추정기 생성하기
Step2: 간단한 케라스 모델 만들기
Step3: 모델을 컴파일한 후, 모델 구조를 요약해 출력할 수 있습니다.
Step4: 입력 함수 만들기
Step5: input_fn이 잘 구현되었는지 확인해봅니다.
Step6: tf.keras.model을 추정기로 변환하기
Step7: 추정기를 훈련한 후 평가합니다.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
import numpy as np
import tensorflow_datasets as tfds
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(3)
])
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer='adam')
model.summary()
def input_fn():
split = tfds.Split.TRAIN
dataset = tfds.load('iris', split=split, as_supervised=True)
dataset = dataset.map(lambda features, labels: ({'dense_input':features}, labels))
dataset = dataset.batch(32).repeat()
return dataset
for features_batch, labels_batch in input_fn().take(1):
print(features_batch)
print(labels_batch)
import tempfile
model_dir = tempfile.mkdtemp()
keras_estimator = tf.keras.estimator.model_to_estimator(
keras_model=model, model_dir=model_dir)
keras_estimator.train(input_fn=input_fn, steps=500)
eval_result = keras_estimator.evaluate(input_fn=input_fn, steps=10)
print('Eval result: {}'.format(eval_result))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: High Order Taylor Maps I
Step2: We perform the numerical integration using floats (the standard way)
Step3: We perform the numerical integration using gduals (to get a HOTM)
Step4: We visualize the HOTM
Step5: How much faster is it now to evaluate the map rather than perform a new numerical integration?
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import os
import numpy as np
os.environ['DES_BACKEND'] = 'numpy'
import desolver as de
import desolver.backend as D
from desolver.backend import gdual_double as gdual
T = 1e-3
@de.rhs_prettifier(equ_repr="[vr, -1/r**2 + r*vt**2, vt, -2*vt*vr/r]", md_repr=r"""$$
\begin{array}{l}
\dot r = v_r \\
\dot v_r = - \frac 1{r^2} + r v_\theta^2\\
\dot \theta = v_\theta \\
\dot v_\theta = -2 \frac{v_\theta v_r}{r}
\end{array}
$$""")
def eom_kep_polar(t,y,**kwargs):
return D.array([y[1], - 1 / y[0] / y[0] + y[0] * y[3]*y[3], y[3], -2*y[3]*y[1]/y[0] - T])
eom_kep_polar
# The initial conditions
ic = [1.,0.1,0.,1.]
D.set_float_fmt('float64')
float_integration = de.OdeSystem(eom_kep_polar, y0=ic, dense_output=False, t=(0, 5.), dt=0.01, rtol=1e-12, atol=1e-12, constants=dict())
float_integration.set_method("RK45")
float_integration.integrate(eta=True)
# Here we transform from polar to cartesian coordinates
# to then plot
y = float_integration.y
cx = [it[0]*np.sin(it[2]) for it in y.astype(np.float64)]
cy = [it[0]*np.cos(it[2]) for it in y.astype(np.float64)]
plt.plot(cx,cy)
plt.title("Orbit resulting from the chosen initial conditions")
plt.xlabel("x")
plt.ylabel("y")
# Order of the Taylor Map. If we have 4 variables the number of terms in the Taylor expansion in 329 at order 7
order = 5
# We now define the initial conditions as gdual (not float)
ic_g = [gdual(ic[0], "r", order), gdual(ic[1], "vr", order), gdual(ic[2], "t", order), gdual(ic[3], "vt", order)]
import time
start_time = time.time()
D.set_float_fmt('gdual_double')
gdual_integration = de.OdeSystem(eom_kep_polar, y0=ic_g, dense_output=False, t=(0, 5.), dt=0.01, rtol=1e-12, atol=1e-12, constants=dict())
gdual_integration.set_method("RK45")
gdual_integration.integrate(eta=True)
print("--- %s seconds ---" % (time.time() - start_time))
# We extract the last point
yf = gdual_integration.y[-1]
# And unpack it into some convenient names
rf,vrf,tf,vtf = yf
# We compute the final cartesian components
xf = rf * D.sin(tf)
yf = rf * D.cos(tf)
# Note that you can get the latex representation of the gdual
print(xf._repr_latex_())
print("xf (latex):")
xf
# We can extract the value of the polynomial when $\mathbf {dy} = 0$
print("Final x from the gdual integration", xf.constant_cf)
print("Final y from the gdual integration", yf.constant_cf)
# And check it's indeed the result of the 'reference' trajectory (the linearization point)
print("\nFinal x from the float integration", cx[-1])
print("Final y from the float integration", cy[-1])
# Let us now visualize the Taylor map by creating a grid of perturbations on the initial conditions and
# evaluating the map for those values
Npoints = 10 # 10000 points
epsilon = 1e-3
grid = np.arange(-epsilon,epsilon,2*epsilon/Npoints)
nxf = [0] * len(grid)**4
nyf = [0] * len(grid)**4
i=0
import time
start_time = time.time()
for dr in grid:
for dt in grid:
for dvr in grid:
for dvt in grid:
nxf[i] = xf.evaluate({"dr":dr, "dt":dt, "dvr":dvr,"dvt":dvt})
nyf[i] = yf.evaluate({"dr":dr, "dt":dt, "dvr":dvr,"dvt":dvt})
i = i+1
print("--- %s seconds ---" % (time.time() - start_time))
f, axarr = plt.subplots(1,3,figsize=(15,5))
# Normal plot of the final map
axarr[0].plot(nxf,nyf,'.')
axarr[0].plot(cx,cy)
axarr[0].set_title("The map")
# Zoomed plot of the final map (equal axis)
axarr[1].plot(nxf,nyf,'.')
axarr[1].plot(cx,cy)
axarr[1].set_xlim([cx[-1] - 0.1, cx[-1] + 0.1])
axarr[1].set_ylim([cy[-1] - 0.1, cy[-1] + 0.1])
axarr[1].set_title("Zoom")
# Zoomed plot of the final map (unequal axis)
axarr[2].plot(nxf,nyf,'.')
axarr[2].plot(cx,cy)
axarr[2].set_xlim([cx[-1] - 0.01, cx[-1] + 0.01])
axarr[2].set_ylim([cy[-1] - 0.1, cy[-1] + 0.1])
axarr[2].set_title("Stretch")
#axarr[1].set_xlim([cx[-1] - 0.1, cx[-1] + 0.1])
#axarr[1].set_ylim([cy[-1] - 0.1, cy[-1] + 0.1])
# First we profile the method evaluate (note that you need to call the method 4 times to get the full state)
%timeit xf.evaluate({"dr":epsilon, "dt":epsilon, "dvr":epsilon,"dvt":epsilon})
# Then we profile the Runge-Kutta 4 integrator
%%timeit
D.set_float_fmt('float64')
float_integration = de.OdeSystem(eom_kep_polar, y0=[it + epsilon for it in ic], dense_output=False, t=(0, 5.), dt=0.01, rtol=1e-12, atol=1e-12, constants=dict())
float_integration.set_method("RK45")
float_integration.integrate(eta=False)
# It seems the speedup is 2-3 orders of magnitude, but did we lose precision?
# We plot the error in the final result as computed by the HOTM and by the Runge-Kutta
# as a function of the distance from the original initial conditions
out = []
pert = np.arange(0,0.1,1e-3)
for epsilon in pert:
res_map_xf = xf.evaluate({"dr":epsilon, "dt":epsilon, "dvr":epsilon,"dvt":epsilon})
res_int = de.OdeSystem(eom_kep_polar, y0=[it + epsilon for it in ic], dense_output=False, t=(0, 5.), dt=0.01, rtol=1e-12, atol=1e-12, constants=dict())
res_int.set_method("RK45")
res_int.integrate()
res_int_x = [it.y[0]*np.sin(it.y[2]) for it in res_int]
res_int_xf = res_int_x[-1]
out.append(np.abs(res_map_xf - res_int_xf))
plt.semilogy(pert,out)
plt.title("Error introduced by the use of the polynomial")
plt.xlabel("Perturbation of the initial conditions")
plt.ylabel("Error in estimating the final state (x)")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example
Step2: Computing gradients by hand
Step3: PyTorch code
Step4: Autograd on a DNN
Step5: Let's visualize the model and all the parameters in each layer.
Step6: Access a specific parameter.
Step7: The gradient is not defined until we call backward.
Step8: Nested models
Step9: Let us access the 0 element of the top level sequence,
Step10: Backprop
Step11: Tied parameters
Step12: Other material
|
<ASSISTANT_TASK:>
Python Code:
import sklearn
import scipy
import scipy.optimize
import matplotlib.pyplot as plt
import itertools
import time
from functools import partial
import os
import numpy as np
from scipy.special import logsumexp
np.set_printoptions(precision=3)
import torch
import torch.nn as nn
import torchvision
print("torch version {}".format(torch.__version__))
if torch.cuda.is_available():
print(torch.cuda.get_device_name(0))
print("current device {}".format(torch.cuda.current_device()))
else:
print("Torch cannot find GPU")
use_cuda = torch.cuda.is_available()
device = torch.device("cuda:0" if use_cuda else "cpu")
# Fit the model usign sklearn
import sklearn.datasets
from sklearn.model_selection import train_test_split
iris = sklearn.datasets.load_iris()
X = iris["data"]
y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0'
N, D = X.shape # 150, 4
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
from sklearn.linear_model import LogisticRegression
# We set C to a large number to turn off regularization.
# We don't fit the bias term to simplify the comparison below.
log_reg = LogisticRegression(solver="lbfgs", C=1e5, fit_intercept=False)
log_reg.fit(X_train, y_train)
w_mle_sklearn = np.ravel(log_reg.coef_)
print(w_mle_sklearn)
# Binary cross entropy
def BCE_with_logits(logits, targets):
N = logits.shape[0]
logits = logits.reshape(N, 1)
logits_plus = np.hstack([np.zeros((N, 1)), logits]) # e^0=1
logits_minus = np.hstack([np.zeros((N, 1)), -logits])
logp1 = -logsumexp(logits_minus, axis=1)
logp0 = -logsumexp(logits_plus, axis=1)
logprobs = logp1 * targets + logp0 * (1 - targets)
return -np.sum(logprobs) / N
# Compute using numpy
def sigmoid(x):
return 0.5 * (np.tanh(x / 2.0) + 1)
def predict_logit(weights, inputs):
return np.dot(inputs, weights) # Already vectorized
def predict_np(weights, inputs):
return sigmoid(predict_logit(weights, inputs))
def NLL(weights, batch):
X, y = batch
logits = predict_logit(weights, X)
return BCE_with_logits(logits, y)
def NLL_grad(weights, batch):
X, y = batch
N = X.shape[0]
mu = predict_np(weights, X)
g = np.sum(np.dot(np.diag(mu - y), X), axis=0) / N
return g
w_np = w_mle_sklearn
y_pred = predict_np(w_np, X_test)
loss_np = NLL(w_np, (X_test, y_test))
grad_np = NLL_grad(w_np, (X_test, y_test))
print("params {}".format(w_np))
# print("pred {}".format(y_pred))
print("loss {}".format(loss_np))
print("grad {}".format(grad_np))
# data. By default, numpy uses double but torch uses float
X_train_t = torch.tensor(X_train, dtype=torch.float)
y_train_t = torch.tensor(y_train, dtype=torch.float)
X_test_t = torch.tensor(X_test, dtype=torch.float)
y_test_t = torch.tensor(y_test, dtype=torch.float)
# parameters
W = np.reshape(w_mle_sklearn, [D, 1]) # convert 1d vector to 2d matrix
w_torch = torch.tensor(W, requires_grad=True, dtype=torch.float)
# w_torch.requires_grad_()
# binary logistic regression in one line of Pytorch
def predict(X, w):
y_pred = torch.sigmoid(torch.matmul(X, w))[:, 0]
return y_pred
# This returns Nx1 probabilities
y_pred = predict(X_test_t, w_torch)
# loss function is average NLL
criterion = torch.nn.BCELoss(reduction="mean")
loss_torch = criterion(y_pred, y_test_t)
print(loss_torch)
# Backprop
loss_torch.backward()
print(w_torch.grad)
# convert to numpy. We have to "detach" the gradient tracing feature
loss_torch = loss_torch.detach().numpy()
grad_torch = w_torch.grad[:, 0].detach().numpy()
# Test
assert np.allclose(loss_np, loss_torch)
assert np.allclose(grad_np, grad_torch)
print("loss {}".format(loss_torch))
print("grad {}".format(grad_torch))
torch.manual_seed(0)
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
X = torch.rand(size=(2, 4)) # batch x Din, batch=2, Din=4
out = net(X) # batch x Dout, Dout=1
print(out)
print(net)
for i in range(3):
print(f"layer {i}")
print(net[i].state_dict())
print(*[(name, param.shape) for name, param in net.named_parameters()])
print(type(net[2].bias))
print(net[2].bias)
print(net[2].bias.data)
print(net.state_dict()["2.bias"].data)
net[2].weight.grad == None
def block1():
return nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 4), nn.ReLU())
def block2():
net = nn.Sequential()
for i in range(4):
# Nested here
net.add_module(f"block {i}", block1())
return net
rgnet = nn.Sequential(block2(), nn.Linear(4, 1))
print(rgnet(X))
print(rgnet)
rgnet[0][1][0].bias.data
# set loss function to output squared
out = rgnet(X)
loss = torch.mean(out**2, dim=0)
# Backprop
loss.backward()
print(rgnet[0][1][0].bias.grad)
# We need to give the shared layer a name so that we can refer to its
# parameters
torch.manual_seed(0)
shared = nn.Linear(8, 8)
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), shared, nn.ReLU(), shared, nn.ReLU(), nn.Linear(8, 1))
net(X)
# Check whether the parameters are the same
print(net[2].weight.data[0] == net[4].weight.data[0])
net[2].weight.data[0, 0] = 100
# Make sure that they are actually the same object rather than just having the
# same value
print(net[2].weight.data[0] == net[4].weight.data[0])
x = torch.tensor([-2, -1, 0, 1, 2], dtype=float, requires_grad=True)
print(x)
y = torch.pow(x, 2)
print(y)
y.backward(torch.ones_like(x))
print(x.grad)
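# Illustrative addition: check the autograd result against the analytic
# derivative of y = x**2, which is 2*x.
expected = 2 * x.detach()
print(torch.allclose(x.grad, expected))  # should print True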
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Below are the equivalent commands for Windows users
Step2: Deleting the directory if it is no longer needed (Windows)
|
<ASSISTANT_TASK:>
Python Code:
! echo 'hello, world!'
!echo $t
%%bash
mkdir test_directory
cd test_directory/
ls -a
# delete the directory if it is no longer needed
! rm -r test_directory
%%cmd
mkdir test_directory
cd test_directory
dir
%%cmd
rmdir test_directory
%lsmagic
%pylab inline
y = range(11)
y
plot(y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using the Red-Black Tree
Step2: Multithreadedness
|
<ASSISTANT_TASK:>
Python Code:
import timeseries, TimeseriesDB, Similarity
import cs207rbtree.RedBlackTree as Database
dir(Database)
demoDB = Database.connect("/tmp/test1.dbdb")
demoDB.set("rahul", 81)
demoDB.set("pavlos", 20)
demoDB.set("sarah", 29)
demoDB.set("courtney", 11)
demoDB.set("andrew", 12)
demoDB.set("laura", 81)
demoDB.get("sarah")
demoDB.get("laura")
from cs207rbtree import RedBlackTree
from threading import Thread
from pytest import raises
import portalocker
import os
def thread_function(num):
print("FIRST FUN")
db = RedBlackTree.connect("/tmp/test6.dbdb")
db.set("kobe", "baby"+str(num))
print("1")
db.set("rahul", "veryyoung"+str(num))
print("2")
db.set("pavlos", "stillyoung"+str(num))
print("3")
db.set("andy", "old"+str(num))
print("4")
db.set("lisa", "ancient"+str(num))
print("5")
db.commit()
print("6")
def thread_function2():
db = RedBlackTree.connect("/tmp/test6.dbdb")
for i in ["kobe","rahul","pavlos","andy","lisa"]:
print("SECOND FUNC")
with raises(KeyError):
print("FAILED")
print(db.get(i))
os.remove('/tmp/test6.dbdb')
t1 = Thread(target=thread_function, args=([1]))
t2 = Thread(target=thread_function2)#, args=(2))
t1.start()
t2.start()
print("DONE")
import time
def thread_function():
print("THREAD 1")
db = RedBlackTree.connect("/tmp/test6.dbdb")
db.set("Laura", "Ware")
time.sleep(200)
print("THREAD ONE DONE SLEEPING")
db.commit()
print("COMMITED RESULTS")
def thread_function2():
print("THREAD 2")
db2 = RedBlackTree.connect("/tmp/test6.dbdb")
with raises(KeyError):
print(db2.get('Laura'))
print("THERE")
time.sleep(10)
print("THREAD TWO DONE SLEEPING")
print(db2.get('Laura'))
os.remove('/tmp/test6.dbdb')
#t1 = Thread(target=thread_function)
#t2 = Thread(target=thread_function2)
#t1.start()
#t2.start()
import multiprocessing
p = multiprocessing.Process(target=thread_function)
p2 = multiprocessing.Process(target=thread_function2)
p.start()
p2.start()
print("I AM DONE")
#db = RedBlackTree.connect("/tmp/test6.dbdb")
#db.set("Laura", "Ware")
#print("HERE")
#db2 = RedBlackTree.connect("/tmp/test6.dbdb")
#print(db2.get("Laura"))
#print("HERE")
os.remove('/tmp/test6.dbdb')
db = RedBlackTree.connect("/tmp/test6.dbdb")
db.set("Laura", "Ware")
print("HERE")
db2 = RedBlackTree.connect("/tmp/test6.dbdb")
#print("CONNECTED")
with raises(KeyError):
print(db2.get("Laura"))
db.commit()
print(db2.get("Laura"))
print("HERE")
db.close()
db2.close()
from portalocker.utils import Lock
from portalocker import *
alock = Lock("/tmp/test6.dbdb", timeout=5)
#with assertRaises(Exception): #LockException
#print("HERE")
alock.acquire()
print("DONE")
from TimeseriesDB.MessageFormatting import *
import importlib
import unittest
from pytest import raises
import numpy as np
from TimeseriesDB.tsdb_error import *
from TimeseriesDB import DatabaseServer
from TimeseriesDB.MessageFormatting import * #Deserializer
from Similarity.find_most_similar import find_most_similiar, sanity_check
from TimeseriesDB.simsearch_init import initialize_simsearch_parameters
from socketserver import BaseRequestHandler, ThreadingTCPServer, TCPServer
from timeseries.ArrayTimeSeries import ArrayTimeSeries as ts
import threading
from socket import socket, AF_INET, SOCK_STREAM
import sys
from scipy.stats import norm
import multiprocessing
def query_1():
#function to compute simsearch
print("QUERY1")
s = socket(AF_INET, SOCK_STREAM)
s.connect(('localhost', 20000))
d2 = {'op':'simsearch_id','id':12,'n_closest':2,'courtesy':'please'}
s2 = serialize(json.dumps(d2))
s.send(s2)
msg = s.recv(8192)
ds = Deserializer()
ds.append(msg)
ds.ready()
response = ds.deserialize()
print(response)
s.close()
def query_2():
#function to return timeseries from id
print("QUERY2")
s = socket(AF_INET, SOCK_STREAM)
s.connect(('localhost', 20000))
d2 = {'op':'TSfromID','id':12,'courtesy':'please'}
s2 = serialize(json.dumps(d2))
s.send(s2)
msg = s.recv(8192)
ds = Deserializer()
ds.append(msg)
ds.ready()
response = ds.deserialize()
print(response)
s.close()
TCPServer.allow_reuse_address = True
serv = TCPServer(('', 20000), DatabaseServer)
serv.data = initialize_simsearch_parameters()
serv.deserializer = Deserializer()
serv_thread = threading.Thread(target=serv.serve_forever)
serv_thread.setDaemon(True)
serv_thread.start()
p = multiprocessing.Process(target=query_1)
p2 = multiprocessing.Process(target=query_2)
p.start()
p2.start()
serv.socket.close()
serv.server_close()
print("DONE")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The source space
Step2: Fixed dipole orientations
Step3: Restricting the dipole orientations in this manner leads to the following
Step4: The direction of the estimated current is now restricted to two directions
Step5: When computing the source estimate, the activity at each of the three dipoles
Step6: Limiting orientations, but not fixing them
Step7: Discarding dipole orientation information
|
<ASSISTANT_TASK:>
Python Code:
import mne
import numpy as np
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
data_path = sample.data_path()
evokeds = mne.read_evokeds(data_path + '/MEG/sample/sample_audvis-ave.fif')
left_auditory = evokeds[0].apply_baseline()
fwd = mne.read_forward_solution(
data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif')
mne.convert_forward_solution(fwd, surf_ori=True, copy=False)
noise_cov = mne.read_cov(data_path + '/MEG/sample/sample_audvis-cov.fif')
subject = 'sample'
subjects_dir = data_path + '/subjects'
trans_fname = data_path + '/MEG/sample/sample_audvis_raw-trans.fif'
lh = fwd['src'][0] # Visualize the left hemisphere
verts = lh['rr'] # The vertices of the source space
tris = lh['tris'] # Groups of three vertices that form triangles
dip_pos = lh['rr'][lh['vertno']] # The position of the dipoles
dip_ori = lh['nn'][lh['vertno']]
dip_len = len(dip_pos)
dip_times = [0]
white = (1.0, 1.0, 1.0) # RGB values for a white color
actual_amp = np.ones(dip_len) # misc amp to create Dipole instance
actual_gof = np.ones(dip_len) # misc GOF to create Dipole instance
dipoles = mne.Dipole(dip_times, dip_pos, actual_amp, dip_ori, actual_gof)
trans = mne.read_trans(trans_fname)
fig = mne.viz.create_3d_figure(size=(600, 400), bgcolor=white)
coord_frame = 'mri'
# Plot the cortex
mne.viz.plot_alignment(
subject=subject, subjects_dir=subjects_dir, trans=trans, surfaces='white',
coord_frame=coord_frame, fig=fig)
# Mark the position of the dipoles with small red dots
mne.viz.plot_dipole_locations(
dipoles=dipoles, trans=trans, mode='sphere', subject=subject,
subjects_dir=subjects_dir, coord_frame=coord_frame, scale=7e-4, fig=fig)
mne.viz.set_3d_view(figure=fig, azimuth=180, distance=0.25)
fig = mne.viz.create_3d_figure(size=(600, 400))
# Plot the cortex
mne.viz.plot_alignment(
subject=subject, subjects_dir=subjects_dir, trans=trans,
surfaces='white', coord_frame='head', fig=fig)
# Show the dipoles as arrows pointing along the surface normal
mne.viz.plot_dipole_locations(
dipoles=dipoles, trans=trans, mode='arrow', subject=subject,
subjects_dir=subjects_dir, coord_frame='head', scale=7e-4, fig=fig)
mne.viz.set_3d_view(figure=fig, azimuth=180, distance=0.1)
# Compute the source estimate for the 'left - auditory' condition in the sample
# dataset.
inv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=True)
stc = apply_inverse(left_auditory, inv, pick_ori=None)
# Visualize it at the moment of peak activity.
_, time_max = stc.get_peak(hemi='lh')
brain_fixed = stc.plot(surface='white', subjects_dir=subjects_dir,
initial_time=time_max, time_unit='s', size=(600, 400))
fig = mne.viz.create_3d_figure(size=(600, 400))
# Plot the cortex
mne.viz.plot_alignment(
subject=subject, subjects_dir=subjects_dir, trans=trans,
surfaces='white', coord_frame='head', fig=fig)
# Show the three dipoles defined at each location in the source space
mne.viz.plot_alignment(
subject=subject, subjects_dir=subjects_dir, trans=trans, fwd=fwd,
surfaces='white', coord_frame='head', fig=fig)
mne.viz.set_3d_view(figure=fig, azimuth=180, distance=0.1)
# Make an inverse operator with loose dipole orientations
inv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=False,
loose=1.0)
# Compute the source estimate, indicate that we want a vector solution
stc = apply_inverse(left_auditory, inv, pick_ori='vector')
# Visualize it at the moment of peak activity.
_, time_max = stc.magnitude().get_peak(hemi='lh')
brain_mag = stc.plot(subjects_dir=subjects_dir, initial_time=time_max,
time_unit='s', size=(600, 400), overlay_alpha=0)
# Set loose to 0.2, the default value
inv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=False,
loose=0.2)
stc = apply_inverse(left_auditory, inv, pick_ori='vector')
# Visualize it at the moment of peak activity.
_, time_max = stc.magnitude().get_peak(hemi='lh')
brain_loose = stc.plot(subjects_dir=subjects_dir, initial_time=time_max,
time_unit='s', size=(600, 400), overlay_alpha=0)
# Only retain vector magnitudes
stc = apply_inverse(left_auditory, inv, pick_ori=None)
# Visualize it at the moment of peak activity
_, time_max = stc.get_peak(hemi='lh')
brain = stc.plot(surface='white', subjects_dir=subjects_dir,
initial_time=time_max, time_unit='s', size=(600, 400))
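# Illustrative addition (a sketch, not part of the original tutorial): with the
# loose-orientation inverse defined above you can also keep only the signed
# component normal to the cortex by passing pick_ori='normal'.
stc_normal = apply_inverse(left_auditory, inv, pick_ori='normal')
print(stc_normal.data.shape)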
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Comparing line by line by eye is a bit annoying, so using Python
Step2: Then we can just use sets to compare
Step3: Installing just those in the fresh environment and trying again, in this order
Step4: installed pyparsing
Step5: installed pytz
Step6: installed terminado==0.4
Step7: installed matplotlib
Step8: installed param
Step9: installed holoviews
|
<ASSISTANT_TASK:>
Python Code:
cd ..
!cat magical.freeze
!cat fresh.freeze
magical = []
with open("magical.freeze") as f:
for l in f:
magical.append(l)
fresh = []
with open("fresh.freeze") as f:
for l in f:
fresh.append(l)
set(magical) - set(fresh)
fresh = []
with open("fresh.freeze") as f:
for l in f:
fresh.append(l)
set(magical) - set(fresh)
fresh = []
with open("fresh.freeze") as f:
for l in f:
fresh.append(l)
set(magical) - set(fresh)
fresh = []
with open("fresh.freeze") as f:
for l in f:
fresh.append(l)
set(magical) - set(fresh)
fresh = []
with open("fresh.freeze") as f:
for l in f:
fresh.append(l)
set(magical) - set(fresh)
fresh = []
with open("fresh.freeze") as f:
for l in f:
fresh.append(l)
set(magical) - set(fresh)
fresh = []
with open("fresh.freeze") as f:
for l in f:
fresh.append(l)
set(magical) - set(fresh)
fresh = []
with open("fresh.freeze") as f:
for l in f:
fresh.append(l)
set(magical) - set(fresh)
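# Illustrative addition: the repeated re-reading above could be wrapped in a
# small helper to avoid re-typing the same cell after every install (a sketch).
def read_freeze(path):
    with open(path) as f:
        return [line for line in f]

set(read_freeze("magical.freeze")) - set(read_freeze("fresh.freeze"))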
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Masking PAD symbols in attention weights
Step2: Masked assignment
Step3: Let's create a mask for the 'valid' symbols
Step4: Now we want to compute softmax on the 'valid' elements
Step5: In practice, a large negative number might work
|
<ASSISTANT_TASK:>
Python Code:
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
import torch
def softmax(mtx):
    """Compute softmax on a 2D tensor along the second dimension."""
e = np.exp(mtx)
s = np.sum(e, axis=1)
return e / s[:, None]
X = np.arange(18, dtype=np.float64).reshape(3, 6)
X[2, 4] = float("-inf")
s = softmax(X)
s
np.random.seed(1)
sample_no, sample_len = (4, 6)
data = np.zeros((sample_no, sample_len), dtype=np.float32)
seq_len = np.array([4, 1, 6, 3], dtype=np.int32)
mask = np.arange(sample_len) < seq_len[:, None]
data[~mask] = 1
annot = np.random.random(data.shape).round(1) * 2 + 3
fig, ax = plt.subplots(1, 2, sharey=True, figsize=(12, 3))
sns.heatmap(data, ax=ax[0], cbar=False, linewidths=.5, cmap='Set3',
annot=annot, xticklabels=False, yticklabels=False,
annot_kws={'fontsize': 'x-large'}, fmt=".2")
ax[0].set_title("Attention weights before softmax")
ax[1].set_title("Attention weights after softmax")
annot[~mask] = float("-inf")
annot = softmax(annot)
sns.heatmap(data, ax=ax[1], cbar=False, linewidths=.5, cmap='Set3',
annot=annot, xticklabels=False, yticklabels=False,
annot_kws={'fontsize': 'x-large'})
fig.savefig("softmax_before_after.png", dpi=100)
fig, ax = plt.subplots(1, figsize=(5, 3))
annot[~mask] = float("-inf")
annot = softmax(annot)
sns.heatmap(data, ax=ax, cbar=False, linewidths=.5, cmap='Set3',
annot=annot, xticklabels=False, yticklabels=False,
annot_kws={'fontsize': 'x-large'})
fig.savefig("masked_attention_final.png", dpi=100)
annot = np.random.random(data.shape).round(2) * 2 + 3
annot[~mask] = float("-inf")
fig, ax = plt.subplots(1, figsize=(5.2, 3))
sns.heatmap(data, ax=ax, cbar=False, linewidths=.5, cmap='Set3',
annot=annot, xticklabels=False, yticklabels=False,
annot_kws={'fontsize': 'x-large'})
fig.savefig("masked_attention_inf.png", dpi=100)
sample_no, sample_len = (4, 6)
data = np.zeros((sample_no, sample_len), dtype=np.float32)
seq_len = np.array([4, 1, 6, 3], dtype=np.int32)
mask = np.arange(sample_len) < seq_len[:, None]
data[~mask] = 1
fig, ax = plt.subplots(1, 2, figsize=(6, 3), gridspec_kw = {'width_ratios':[6, 1]})
sns.heatmap(data, ax=ax[0], cbar=False, linewidths=.5, cmap='Set3',
xticklabels=False, yticklabels=False,)
ax[0].set_title("Padded sequences")
sns.heatmap(np.zeros((sample_no, 1)), annot=seq_len[:, None], ax=ax[1], cmap='Set3',
cbar=False, linewidths=.5, annot_kws={'fontsize': 'x-large'},
xticklabels=False, yticklabels=False,)
ax[1].set_title("Length")
fig.savefig("padded_sequence.png", dpi=100)
X = torch.arange(12).view(4, 3)
mask = torch.zeros((4, 3), dtype=torch.uint8) # same as dtype=torch.ByteTensor
mask[0, 0] = 1
mask[1, 1] = 1
mask[3, 2] = 1
X[mask] = 100
print(X)
X = torch.arange(12).view(4, 3)
X[~mask] = 100
print(X)
X = np.random.random((4, 6)).round(1) * 2 + 3
X = torch.from_numpy(X)
X_len = torch.LongTensor([4, 1, 6, 3])
maxlen = X.size(1)
%%timeit
mask = torch.arange(maxlen)[None, :] < X_len[:, None]
%%timeit
idx = torch.arange(maxlen).unsqueeze(0).expand(X.size())
len_expanded = X_len.unsqueeze(1).expand(X.size())
mask = idx < len_expanded
mask
X = np.random.random((4, 6)).round(1) * 2 + 3
X = torch.from_numpy(X)
maxlen = X.size(1)
mask = torch.arange(maxlen)[None, :] < X_len[:, None]
X[~mask] = float('-inf')
print(torch.softmax(X, dim=1))
X = np.random.random((4, 6)).round(1) * 2 + 3
X = torch.from_numpy(X)
maxlen = X.size(1)
mask = torch.arange(maxlen)[None, :] < X_len[:, None]
X[~mask] = -10000
print(torch.softmax(X, dim=1))
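# Illustrative addition: the same masking can be done without in-place
# assignment by using Tensor.masked_fill (a sketch of an equivalent approach).
X2 = torch.from_numpy(np.random.random((4, 6)).round(1) * 2 + 3)
masked = X2.masked_fill(~mask, float('-inf'))
print(torch.softmax(masked, dim=1))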
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Notice that both threads share the same process memory space.
Step2: But ... why has a not been modified? Why don't the processes share a?
Step3: 3. "Low-level" coroutines
Step4: 3. Using 'asyncio'
|
<ASSISTANT_TASK:>
Python Code:
import threading
import time
a = ''
def task_1():
global a
for i in range(10):
print('o', end='', flush=True)
a += 'o'
print(a)
time.sleep(1) # Blocking -> yield to other thread
def task_2():
global a
for i in range(20):
print('O', end='', flush=True)
a += 'O'
print(a)
time.sleep(0.6) # Blocking -> yield to other thread
thread_1 = threading.Thread(target=task_1)
thread_2 = threading.Thread(target=task_2)
thread_1.start()
thread_2.start()
print("(Both threads have started)")
thread_1.join() # Wait for thread_1 to finish
thread_2.join()
print("\nBoth threads have finished")
print(a)
# This code does not work!
import multiprocessing
import time
a = '' # Does not work because each process has its own 'a'
def task_1():
global a
for i in range(10):
print('o', end='', flush=True)
a += 'o'
print(a)
time.sleep(1) # Blocking -> yield to other process
def task_2():
global a
for i in range(20):
print('O', end='', flush=True)
a += 'O'
print(a)
time.sleep(0.6) # Blocking -> yield to other process
process_1 = multiprocessing.Process(target=task_1)
process_2 = multiprocessing.Process(target=task_2)
process_1.start()
process_2.start()
print("(Both processes have started)")
process_1.join()
process_2.join()
print("\nBoth processes have finished")
print(a)
import multiprocessing
import time
import ctypes
def task_1(a):
for i in range(10):
print('o', end='', flush=True)
a.value += 'o'
time.sleep(1) # Blocking -> yield to other process
def task_2(a):
for i in range(20):
print('O', end='', flush=True)
a.value += 'O'
time.sleep(0.6) # Blocking -> yield to other process
manager = multiprocessing.Manager()
# See https://docs.python.org/3/library/ctypes.html#module-ctypes
a = manager.Value(ctypes.c_char_p, "")
process_1 = multiprocessing.Process(target=task_1, args=(a,))
process_2 = multiprocessing.Process(target=task_2, args=(a,))
process_1.start()
process_2.start()
print("(Both processes have started)")
process_1.join()
process_2.join()
print("\nBoth processes have finished")
print(a.value)
import time
a = ''
def task_1():
global a
for i in range(20):
print('o', end='', flush=True)
a += 'o'
yield
time.sleep(0.5)
def task_2():
global a
for i in range(20):
print('O', end='', flush=True)
a += 'O'
yield
time.sleep(0.25)
t1 = task_1()
t2 = task_2()
now = time.perf_counter() # Real time (not only user time)
for i in range(20):
t1.__next__() # Call task_1()
t2.__next__() # Call task_2()
elapsed = time.perf_counter() - now
print(f"\nelapsed {elapsed:0.2f} seconds.")
print(20*0.5 + 20*0.25)
print("a =", a)
import asyncio
import time
a = ''
async def task_1():
global a
for i in range(20):
print('o', end='', flush=True)
a += 'o'
await asyncio.sleep(0.5)
async def task_2():
global a
for i in range(20):
print('O', end='', flush=True)
a += 'O'
await asyncio.sleep(0.25)
now = time.perf_counter()
await asyncio.gather(task_1(), task_2())
elapsed = time.perf_counter() - now
print(f"\nelapsed {elapsed:0.2f} seconds.")
print("a =", a)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: CSV Module
Step2: Using Numpy
Step3: Pandas
Step4: Pandas dataframes do behave a bit differently than a lot of list based structures in Python, but we'll learn how to work with them soon. If you just want to pull the core data out of a dataframe you can do this using the values member (a member is just a variable associated with an object).
|
<ASSISTANT_TASK:>
Python Code:
datafile = open('./data/examp_data.txt', 'r')
data = []
for row in datafile:
data.append(row.strip().split(','))
data
import csv
datafile = open('./data/examp_data.txt', 'r')
datareader = csv.reader(datafile, delimiter=',')
data = []
for row in datareader:
data.append(row)
data
import numpy
data = numpy.genfromtxt('./data/examp_data.txt', delimiter = ',', skip_header=1)
data
import pandas as pd
data = pd.read_csv('./data/examp_data.txt')
data
data.values
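# Illustrative addition: data.to_numpy() is the newer, recommended way to get
# the underlying array (same result as .values on recent pandas versions).
data.to_numpy()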
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting and setting configuration variables
Step2: Note that the string values read from the JSON file are not parsed in any
Step3: If you're unsure whether a config variable has been set, there is a
Step4: There are also two convenience modes of
Step5: The second convenience mode will return a
Step6: It is possible to add config variables that are not part of the recognized
Step7: Let's delete that config variable we just created. To unset a config
Step8: Where configurations are stored
Step9: However it is not a good idea to directly edit files in the
Step10: Also by default,
Step11: Logging
Step12: The logging levels that can be set as config variables are debug,
Step13: No messages were generated, because none of the messages were of severity
Step14: This time, we got a few messages about extracting information from the file,
|
<ASSISTANT_TASK:>
Python Code:
import os
import mne
print(mne.get_config('MNE_USE_CUDA'))
print(type(mne.get_config('MNE_USE_CUDA')))
try:
mne.set_config('MNE_USE_CUDA', True)
except TypeError as err:
print(err)
print(mne.get_config('missing_config_key', default='fallback value'))
print(mne.get_config()) # same as mne.get_config(key=None)
print(mne.get_config(key=''))
mne.set_config('MNEE_USE_CUUDAA', 'false')
mne.set_config('MNEE_USE_CUUDAA', None)
assert 'MNEE_USE_CUUDAA' not in mne.get_config('')
print(mne.get_config_path())
# make sure it's not in the JSON file (no error means our assertion held):
assert mne.get_config('PATH', use_env=False) is None
# but it *is* in the environment:
print(mne.get_config('PATH'))
mne.set_config('foo', 'bar', set_env=False)
print('foo' in os.environ.keys())
mne.set_config('foo', 'bar')
print('foo' in os.environ.keys())
mne.set_config('foo', None) # unsetting a key deletes var from environment
print('foo' in os.environ.keys())
print(mne.get_config('MNE_LOGGING_LEVEL'))
kit_data_path = os.path.join(os.path.abspath(os.path.dirname(mne.__file__)),
'io', 'kit', 'tests', 'data', 'test.sqd')
raw = mne.io.read_raw_kit(kit_data_path, verbose='warning')
raw = mne.io.read_raw_kit(kit_data_path, verbose='info')
raw = mne.io.read_raw_kit(kit_data_path, verbose='debug')
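# Illustrative addition (a sketch): the default verbosity for the whole session
# can also be changed without touching the config file; it accepts the same
# level names as the MNE_LOGGING_LEVEL config variable.
mne.set_log_level('warning')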
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ipyvolume
Step2: A Billion stars in the Jupyter notebook
|
<ASSISTANT_TASK:>
Python Code:
import vaex
import numpy as np
np.warnings.filterwarnings('ignore')
dstaxi = vaex.open('src/nyc_taxi2015.hdf5') # mmapped, doesn't cost extra memory
dstaxi.plot_widget("pickup_longitude", "pickup_latitude", f="log", backend="ipyleaflet", shape=600)
dstaxi.plot_widget("dropoff_longitude", "dropoff_latitude", f="log", backend="ipyleaflet",
z="dropoff_hour", type="slice", z_shape=24, shape=400, z_relative=True,
limits=[None, None, (-0.5, 23.5)])
ds = vaex.datasets.helmi_de_zeeuw.fetch()
ds.plot_widget("x", "y", f="log", limits=[-20, 20])
ds.plot_widget("Lz", "E", f="log")
import ipyvolume as ipv
import numpy as np
np.warnings.filterwarnings('ignore')
ipv.example_ylm();
N = 1000
x, y, z = np.random.random((3, N))
fig = ipv.figure()
scatter = ipv.scatter(x, y, z, marker='box')
ipv.show()
scatter.x = scatter.x + 0.1
scatter.color = "green"
scatter.size = 5
scatter.color = np.random.random((N,3))
scatter.size = 2
ipv.figure()
ipv.style.use('dark')
quiver = ipv.quiver(*ipv.datasets.animated_stream.fetch().data[:,::,::4], size=5)
ipv.animation_control(quiver, interval=200)
ipv.show()
ipv.style.use('light')
quiver.size = np.random.random(quiver.x.shape) * 10
quiver.color = np.random.random(quiver.x.shape + (3,))
quiver.geo = "cat"
# stereo
quiver.geo = "arrow"
N = 1000*1000
x, y, z = np.random.random((3, N)).astype('f4')
ipv.figure()
s = ipv.scatter(x, y, z, size=0.2)
ipv.show()
s.size = 0.1
#ipv.screenshot(width=2048, height=2048)
plot3d = ds.plot_widget("x", "y", "z", vx="vx", vy="vy", vz="vz",
backend="ipyvolume", f="log1p", shape=100, smooth_pre=0.5)
plot3d.vcount_limits = [50, 100000]
plot3d.backend.quiver.color = "red"
import ipywidgets as widgets
widgets.ColorPicker()
widgets.jslink((plot3d.backend.quiver, 'color'), (_, 'value'))
ipv.save("kapteyn-lunch-talk-2018.html")
!open kapteyn-lunch-talk-2018.html
# webrtc demo if time permits
import vaex
#gaia = vaex.open("ws://gaia:9000/gaia-dr1")
gaia = vaex.open('/Users/maartenbreddels/datasets/gaia/gaia-dr1-minimal_f4.hdf5')
%matplotlib inline
f"{len(gaia):,}"
ra_dec_limits = [[0, 360], [-90, 90]]
gaia.set_active_fraction(0.01)
gaia.plot_widget("ra", "dec", limits=ra_dec_limits)
gaia.mean("phot_g_mean_mag", selection=True)
gaia.plot1d("phot_g_mean_mag", selection=False, n=True, limits=[10, 22])
gaia.plot1d("phot_g_mean_mag", selection=True, show=True, n=True, limits=[10, 22])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set background parameters
Step2: Slug
Step3: Load data
Step4: Create single layer conceptual model
Step5: Calibrate with two datasets simultaneously
Step6: Try adding well skin resistance res
Step7: Adding well screen resistance does not noticeably improve the performance, while the AIC value increases. Thus, res should be removed from the model.
Step8: Calibrate with two datasets simultaneously
Step9: Summary of values presented by AQTESOLV & MLU
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from ttim import *
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
H0 = 2.798 #initial displacement in m
b = -6.1 #aquifer thickness
rw1 = 0.102 #well radius of Ln-2 Well
rw2 = 0.071 #well radius of observation Ln-3 Well
rc1 = 0.051 #casing radius of Ln-2 Well
rc2 = 0.025 #casing radius of Ln-3 Well
r = 6.45 #distance from observation well to test well
Q = np.pi * rc1 ** 2 * H0
print('Slug:', round(Q, 5), 'm^3')
data1 = np.loadtxt('data/ln-2.txt')
t1 = data1[:, 0] / 60 / 60 / 24 #convert time from seconds to days
h1 = data1[:, 1]
data2 = np.loadtxt('data/ln-3.txt')
t2 = data2[:, 0] / 60 / 60 / 24
h2 = data2[:, 1]
ml_0 = ModelMaq(kaq=10, z=[0, b], Saq=1e-4, \
tmin=1e-5, tmax=0.01)
w_0 = Well(ml_0, xw=0, yw=0, rw=rw1, rc=rc1, tsandQ=[(0, -Q)], layers=0, wbstype='slug')
ml_0.solve()
#unknown parameters: kaq, Saq
ca_0 = Calibrate(ml_0)
ca_0.set_parameter(name='kaq0', initial=10)
ca_0.set_parameter(name='Saq0', initial=1e-4)
ca_0.series(name='Ln-2', x=0, y=0, layer=0, t=t1, h=h1)
ca_0.series(name='Ln-3', x=r, y=0, layer=0, t=t2, h=h2)
ca_0.fit(report=True)
display(ca_0.parameters)
print('RMSE:', ca_0.rmse())
hm1_0 = ml_0.head(0, 0, t1, layers=0)
hm2_0 = ml_0.head(r, 0, t2, layers=0)
plt.figure(figsize=(8, 5))
plt.semilogx(t1, h1/H0, '.', label='obs ln-2')
plt.semilogx(t1, hm1_0[0]/H0, label='ttim ln-2')
plt.semilogx(t2, h2/H0, '.', label='obs ln-3')
plt.semilogx(t2, hm2_0[0]/H0, label='ttim ln-3')
plt.xlabel('time(d)')
plt.ylabel('h/H0')
plt.legend();
ml_1 = ModelMaq(kaq=10, z=[0, b], Saq=1e-4, \
tmin=1e-5, tmax=0.01)
w_1 = Well(ml_1, xw=0, yw=0, rw=rw1, res=0, rc=rc1, tsandQ=[(0, -Q)], layers=0, wbstype='slug')
ml_1.solve()
#unknown parameters: kaq, Saq, res
ca_1 = Calibrate(ml_1)
ca_1.set_parameter(name='kaq0', initial=10)
ca_1.set_parameter(name='Saq0', initial=1e-4)
ca_1.set_parameter_by_reference(name='res', parameter=w_1.res, initial=0)
ca_1.series(name='Ln-2', x=0, y=0, layer=0, t=t1, h=h1)
ca_1.series(name='Ln-3', x=r, y=0, layer=0, t=t2, h=h2)
ca_1.fit(report=True)
display(ca_1.parameters)
print('RMSE:', ca_1.rmse())
hm1_1 = ml_1.head(0, 0, t1, layers=0)
hm2_1 = ml_1.head(r, 0, t2, layers=0)
plt.figure(figsize=(8, 5))
plt.semilogx(t1, h1/H0, '.', label='obs ln-2')
plt.semilogx(t1, hm1_1[0]/H0, label='ttim ln-2')
plt.semilogx(t2, h2/H0, '.', label='obs ln-3')
plt.semilogx(t2, hm2_1[0]/H0, label='ttim ln-3')
plt.xlabel('time(d)')
plt.ylabel('h/H0')
plt.legend();
#Determine elevations of each layer.
#Thickness of each layer is set to be 0.5 m.
z = np.arange(0, b, -0.5)
zlay = np.append(z, b)
nlay = len(zlay) - 1
Saq_2 = 1e-4 * np.ones(nlay)
n = np.arange(0, 13,1)
ml_2 = Model3D(kaq=10, z=zlay, Saq=Saq_2, kzoverkh=1, tmin=1e-5, tmax=0.01, \
phreatictop=True)
w_2 = Well(ml_2, xw=0, yw=0, rw=rw1, tsandQ=[(0, -Q)], layers=n, rc=rc1, \
wbstype='slug')
ml_2.solve()
ca_2 = Calibrate(ml_2)
ca_2.set_parameter(name='kaq0_12', initial=10)
ca_2.set_parameter(name='Saq0_12', initial=1e-4, pmin=0)
ca_2.series(name='Ln-2', x=0, y=0, layer=n, t=t1, h=h1)
ca_2.series(name='Ln-3', x=r, y=0, layer=n, t=t2, h=h2)
ca_2.fit(report=True)
display(ca_2.parameters)
print('RMSE:', ca_2.rmse())
hm1_2 = ml_2.head(0, 0, t1, layers=n)
hm2_2 = ml_2.head(r, 0, t2, layers=n)
plt.figure(figsize=(8, 5))
plt.semilogx(t1, h1/H0, '.', label='obs ln-2')
plt.semilogx(t1, hm1_2[0]/H0, label='ttim ln-2')
plt.semilogx(t2, h2/H0, '.', label='obs ln-3')
plt.semilogx(t2, hm2_2[0]/H0, label='ttim ln-3')
plt.xlabel('time(d)')
plt.ylabel('h/H0')
plt.legend();
t = pd.DataFrame(columns=['k [m/d]', 'Ss [1/m]'], \
index=['MLU', 'AQTESOLV', 'ttim-single', 'ttim-multi'])
t.loc['AQTESOLV'] = [1.166, 9.368E-06]
t.loc['MLU'] = [1.311, 8.197E-06]
t.loc['ttim-single'] = ca_0.parameters['optimal'].values
t.loc['ttim-multi'] = ca_2.parameters['optimal'].values
t['RMSE'] = [0.010373, 0.009151, ca_0.rmse(), ca_2.rmse()]
t
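# Illustrative addition (a sketch): express the multi-layer TTim estimates
# relative to the AQTESOLV values from the table above.
print(t.loc['ttim-multi', 'k [m/d]'] / t.loc['AQTESOLV', 'k [m/d]'])
print(t.loc['ttim-multi', 'Ss [1/m]'] / t.loc['AQTESOLV', 'Ss [1/m]'])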
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in data and perform basic preprocessing
Step2: Explore the electrodes on a template brain
Step3: Compute frequency features of the data
Step4: Visualize the time-evolution of the gamma power on the brain
Step5: We can project gamma power from the sensor data to the nearest locations on
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Eric Larson <larson.eric.d@gmail.com>
# Chris Holdgraf <choldgraf@gmail.com>
# Adam Li <adam2392@gmail.com>
# Alex Rockhill <aprockhill@mailbox.org>
# Liberty Hamilton <libertyhamilton@gmail.com>
#
# License: BSD-3-Clause
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.cm import get_cmap
from mne_bids import BIDSPath, read_raw_bids
import mne
from mne.viz import plot_alignment, snapshot_brain_montage
print(__doc__)
# paths to mne datasets - sample ECoG and FreeSurfer subject
bids_root = mne.datasets.epilepsy_ecog.data_path()
sample_path = mne.datasets.sample.data_path()
subjects_dir = op.join(sample_path, 'subjects')
# first define the bids path
bids_path = BIDSPath(root=bids_root, subject='pt1', session='presurgery',
task='ictal', datatype='ieeg', extension='vhdr')
# then we'll use it to load in the sample dataset
# Here we use a format (iEEG) that is only available in MNE-BIDS 0.7+, so it
# will emit a warning on versions <= 0.6
raw = read_raw_bids(bids_path=bids_path, verbose=False)
# Pick only the ECoG channels, removing the EKG channels
raw.pick_types(ecog=True)
# Load the data
raw.load_data()
# Then we remove line frequency interference
raw.notch_filter([60], trans_bandwidth=3)
# drop bad channels
raw.drop_channels(raw.info['bads'])
# the coordinate frame of the montage
montage = raw.get_montage()
print(montage.get_positions()['coord_frame'])
# add fiducials to montage
montage.add_mni_fiducials(subjects_dir)
# now with fiducials assigned, the montage will be properly converted
# to "head" which is what MNE requires internally (this is the coordinate
# system with the origin between LPA and RPA whereas MNI has the origin
# at the posterior commissure)
raw.set_montage(montage)
# Find the annotated events
events, event_id = mne.events_from_annotations(raw)
# Make a 25 second epoch that spans before and after the seizure onset
epoch_length = 25 # seconds
epochs = mne.Epochs(raw, events, event_id=event_id['onset'],
tmin=13, tmax=13 + epoch_length, baseline=None)
# Make evoked from the one epoch and resample
evoked = epochs.average().resample(200)
del epochs
fig = plot_alignment(raw.info, trans='fsaverage',
subject='fsaverage', subjects_dir=subjects_dir,
surfaces=['pial'], coord_frame='head')
mne.viz.set_3d_view(fig, azimuth=0, elevation=70)
xy, im = snapshot_brain_montage(fig, raw.info)
sfreq = 10
gamma_power_t = evoked.copy().filter(30, 90).apply_hilbert(
envelope=True).resample(sfreq)
gamma_info = gamma_power_t.info
# convert from a dictionary to array to plot
xy_pts = np.vstack([xy[ch] for ch in raw.info['ch_names']])
# get a colormap to color nearby points similar colors
cmap = get_cmap('viridis')
# create the figure of the brain with the electrode positions
fig, ax = plt.subplots(figsize=(5, 5))
ax.set_title('Gamma power over time', size='large')
ax.imshow(im)
ax.set_axis_off()
# normalize gamma power for plotting
gamma_power = -100 * gamma_power_t.data / gamma_power_t.data.max()
# add the time course overlaid on the positions
x_line = np.linspace(-0.025 * im.shape[0], 0.025 * im.shape[0],
gamma_power_t.data.shape[1])
for i, pos in enumerate(xy_pts):
x, y = pos
color = cmap(i / xy_pts.shape[0])
ax.plot(x_line + x, gamma_power[i] + y, linewidth=0.5, color=color)
xyz_pts = np.array([dig['r'] for dig in evoked.info['dig']])
src = mne.read_source_spaces(
op.join(subjects_dir, 'fsaverage', 'bem', 'fsaverage-ico-5-src.fif'))
stc = mne.stc_near_sensors(gamma_power_t, trans='fsaverage',
subject='fsaverage', src=src,
mode='nearest', subjects_dir=subjects_dir,
distance=0.02)
vmin, vmid, vmax = np.percentile(gamma_power_t.data, [10, 25, 90])
clim = dict(kind='value', lims=[vmin, vmid, vmax])
brain = stc.plot(surface='pial', hemi='rh', colormap='inferno', colorbar=False,
clim=clim, views=['lat', 'med'], subjects_dir=subjects_dir,
size=(250, 250), smoothing_steps='nearest',
time_viewer=False)
brain.add_sensors(raw.info, trans='fsaverage')
# You can save a movie like the one on our documentation website with:
# brain.save_movie(time_dilation=1, interpolation='linear', framerate=3,
# time_viewer=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What are the features?
Step2: Linear regression
Step3: Splitting X and y into training and testing sets
Step4: Linear regression in scikit-learn
Step5: Interpreting model coefficients
Step6: Making predictions
Step7: We need an evaluation metric in order to compare our predictions with the actual values!
Step8: Classifications on the iris dataset
Step9: Logistic regression
Step10: Evaluation metric
Step11: Alternatives
Step12: Apply SVM to iris
Step13: Supervised Learning In-Depth
Step14: Motivating Random Forests
Step15: Ensemble the decision tress
Step16: Unsupervised learning
Step17: By eye, it is relatively easy to pick out the four clusters. If you were to perform an exhaustive search for the different segmentations of the data, however, the search space would be exponential in the number of points. Fortunately, there is a well-known Expectation Maximization (EM) procedure which scikit-learn implements, so that KMeans can be solved relatively quickly.
Step18: Let's use scikit-learn for K-means clustering on Iris dataset
Step19: Introducing Principal Component Analysis
Step20: We can see that there is a definite trend in the data. What PCA seeks to do is to find the Principal Axes in the data, and explain how important those axes are in describing the data distribution
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
# read CSV file directly from a URL and save the results
data = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)
# display the first 5 rows
data.head()
data.shape
# conventional way to import seaborn
import seaborn as sns
# allow plots to appear within the notebook
%matplotlib inline
sns.pairplot(data, x_vars=['TV','radio','newspaper'], y_vars='sales', size=7, aspect=0.7, kind='reg')
feature_cols = ['TV', 'radio', 'newspaper']
# use the list to select a subset of the original DataFrame
X = data[feature_cols]
# equivalent command to do this in one line
X = data[['TV', 'radio', 'newspaper']]
# print the first 5 rows
X.head()
print(type(X))
print(X.shape)
# select a Series from the DataFrame
y = data['sales']
# equivalent command that works if there are no spaces in the column name
y = data.sales
# print the first 5 values
y.head()
# check the type and shape of y
print(type(y))
print(y.shape)
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# default split is 75% for training and 25% for testing
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
# import model
from sklearn.linear_model import LinearRegression
# instantiate
linreg = LinearRegression()
# fit the model to the training data (learn the coefficients)
linreg.fit(X_train, y_train)
# print the intercept and coefficients
print(linreg.intercept_)
print(linreg.coef_)
# pair the feature names with the coefficients
list(zip(feature_cols, linreg.coef_))
y_pred = linreg.predict(X_test)
print(y_pred)
from sklearn import metrics
import numpy as np
print(np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
# import load_iris function from datasets module
from sklearn.datasets import load_iris
iris = load_iris()
type(iris)
# print the iris data
print(iris.feature_names)
print(len(iris.data))
# print integers representing the species of each observation
print(iris.target)
print(len(iris.target))
# print the encoding scheme for species: 0 = setosa, 1 = versicolor, 2 = virginica
print(iris.target_names)
X = iris.data
# store response vector in "y"
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=4)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
y_pred = logreg.predict(X_test)
print(y_pred)
print(metrics.accuracy_score(y_test, y_pred))
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn;
from scipy import stats
import pylab as pl
seaborn.set()
from sklearn.datasets.samples_generator import make_blobs
X, y = make_blobs(n_samples=50, centers=2,
random_state=0, cluster_std=0.60)
xfit = np.linspace(-1, 3.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
# Draw three lines that couple separate the data
for m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]:
yfit = m * xfit + b
plt.plot(xfit, yfit, '-k')
plt.fill_between(xfit, yfit - d, yfit + d, edgecolor='none', color='#AAAAAA', alpha=0.4)
plt.xlim(-1, 3.5);
X = iris.data
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=4)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
from sklearn.svm import SVC
clf = SVC(kernel='linear')
clf.fit(X_train, y_train)
y_pred=clf.predict(X_test)
print(y_pred)
print(metrics.accuracy_score(y_test, y_pred))
from sklearn.datasets import load_digits
digits = load_digits()
digits.keys()
X = digits.data
y = digits.target
print(X.shape)
print(y.shape)
# set up the figure
fig = plt.figure(figsize=(6, 6)) # figure size in inches
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# plot the digits: each image is 8x8 pixels
for i in range(64):
ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest')
# label the image with the target value
ax.text(0, 7, str(digits.target[i]))
from sklearn.model_selection import train_test_split
from sklearn import metrics
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, random_state=0)
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier(max_depth=11)
clf.fit(Xtrain, ytrain)
ypred = clf.predict(Xtest)
metrics.accuracy_score(ypred, ytest)
plt.imshow(metrics.confusion_matrix(ypred, ytest),
interpolation='nearest', cmap=plt.cm.binary)
plt.grid(False)
plt.colorbar()
plt.xlabel("predicted label")
plt.ylabel("true label");
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_jobs=2, random_state=0)
clf.fit(Xtrain, ytrain)
ypred = clf.predict(Xtest)
metrics.accuracy_score(ypred, ytest)
plt.imshow(metrics.confusion_matrix(ypred, ytest),
interpolation='nearest', cmap=plt.cm.binary)
plt.grid(False)
plt.colorbar()
plt.xlabel("predicted label")
plt.ylabel("true label");
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=0.60)
plt.scatter(X[:, 0], X[:, 1], s=50);
from sklearn.cluster import KMeans
est = KMeans(4) # 4 clusters
est.fit(X)
y_kmeans = est.predict(X)
plt.scatter(X[:, 0], X[:, 1], c=y_kmeans, s=50, cmap='rainbow');
from sklearn import datasets, cluster
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(2)
# load data
iris = datasets.load_iris()
X_iris = iris.data
y_iris = iris.target
k_means = cluster.KMeans(n_clusters=3)
k_means.fit(X_iris)
labels = k_means.labels_
# check how many of the samples were correctly labeled
correct_labels = sum(y_iris == labels)
print("Result: %d out of %d samples were correctly labeled." % (correct_labels, y_iris.size))
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
# use seaborn plotting style defaults
np.random.seed(1)
X = np.dot(np.random.random(size=(2, 2)), np.random.normal(size=(2, 200))).T
plt.plot(X[:, 0], X[:, 1], 'o')
plt.axis('equal');
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(X)
print(pca.explained_variance_)
print(pca.components_)
plt.plot(X[:, 0], X[:, 1], 'o', alpha=0.5)
for length, vector in zip(pca.explained_variance_, pca.components_):
v = vector * 3 * np.sqrt(length)
plt.plot([0, v[0]], [0, v[1]], '-k', lw=3)
plt.axis('equal');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Mmr
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Step59: 14.2. Shortwave Bands
Step60: 14.3. Longwave Bands
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Step62: 15.2. Twomey
Step63: 15.3. Twomey Minimum Ccn
Step64: 15.4. Drizzle
Step65: 15.5. Cloud Lifetime
Step66: 15.6. Longwave Bands
Step67: 16. Model
Step68: 16.2. Processes
Step69: 16.3. Coupling
Step70: 16.4. Gas Phase Precursors
Step71: 16.5. Scheme Type
Step72: 16.6. Bulk Scheme Species
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'pcmdi', 'sandbox-3', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Test Brown Corpus
Step2: 2. Test NLTK Book Resources
Step3: 3. Sent Tokenize(sentence boundary detection, sentence segmentation), Word Tokenize and Pos Tagging
Step4: 4. Sentence Tokenize and Word Tokenize
Step6: Tokenizing text into words
Step7: 5. Part-Of-Speech Tagging and POS Tagger
Step8: TnT POS Tagger Model
Step9: 6. Stemming
Step10: 7. Lemmatization
Step11: Some simple things you can do with NLTK
|
<ASSISTANT_TASK:>
Python Code:
import nltk
from nltk.corpus import brown
brown.words()[0:10]
brown.tagged_words()[0:10]
len(brown.words())
dir(brown)
from nltk.book import *
dir(text1)
len(text1)
from nltk import sent_tokenize, word_tokenize, pos_tag
text = "Machine learning is the science of getting computers to act without being explicitly programmed. In the past decade, machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine learning is so pervasive today that you probably use it dozens of times a day without knowing it. Many researchers also think it is the best way to make progress towards human-level AI. In this class, you will learn about the most effective machine learning techniques, and gain practice implementing them and getting them to work for yourself. More importantly, you'll learn about not only the theoretical underpinnings of learning, but also gain the practical know-how needed to quickly and powerfully apply these techniques to new problems. Finally, you'll learn about some of Silicon Valley's best practices in innovation as it pertains to machine learning and AI."
sents = sent_tokenize(text)
sents
len(sents)
tokens = word_tokenize(text)
tokens
len(tokens)
tagged_tokens = pos_tag(tokens)
tagged_tokens
text = "this’s a sent tokenize test. this is sent two. is this sent three? sent 4 is cool! Now it’s your turn."
from nltk.tokenize import sent_tokenize
sent_tokenize_list = sent_tokenize(text)
len(sent_tokenize_list)
sent_tokenize_list
import nltk.data
tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
tokenizer.tokenize(text)
spanish_tokenizer = nltk.data.load('tokenizers/punkt/spanish.pickle')
spanish_tokenizer.tokenize('Hola amigo. Estoy bien.')
from nltk.tokenize import word_tokenize
word_tokenize('Hello World.')
word_tokenize("this's a test")
from nltk.tokenize import TreebankWordTokenizer
tokenizer = TreebankWordTokenizer()
tokenizer.tokenize("this’s a test")
# Standard word tokenizer.
_word_tokenize = TreebankWordTokenizer().tokenize
def word_tokenize(text):
Return a tokenized copy of *text*,
using NLTK's recommended word tokenizer
(currently :class:`.TreebankWordTokenizer`).
This tokenizer is designed to work on a sentence at a time.
return _word_tokenize(text)
word_tokenize("this’s a test")
from nltk.tokenize import WordPunctTokenizer
word_punct_tokenizer = WordPunctTokenizer()
word_punct_tokenizer.tokenize('This’s a test')
text = nltk.word_tokenize("Dive into NLTK: Part-of-speech tagging and POS Tagger")
text
nltk.pos_tag(text)
nltk.help.upenn_tagset('NN.*')
nltk.help.upenn_tagset('VB.*')
nltk.help.upenn_tagset('JJ.*')
nltk.help.upenn_tagset('CC.*')
nltk.help.upenn_tagset('IN.*')
nltk.help.upenn_tagset('PRP.*')
nltk.help.upenn_tagset('DT.*')
# Natural Language Toolkit: TnT Tagger
#
# Copyright (C) 2001-2013 NLTK Project
# Author: Sam Huston <sjh900@gmail.com>
#
# URL: <http://www.nltk.org/>
# For license information, see LICENSE.TXT
'''
Implementation of 'TnT - A Statisical Part of Speech Tagger'
by Thorsten Brants
http://acl.ldc.upenn.edu/A/A00/A00-1031.pdf
'''
from __future__ import print_function
from math import log
from operator import itemgetter
from nltk.probability import FreqDist, ConditionalFreqDist
from nltk.tag.api import TaggerI
class TnT(TaggerI):
'''
TnT - Statistical POS tagger
IMPORTANT NOTES:
* DOES NOT AUTOMATICALLY DEAL WITH UNSEEN WORDS
- It is possible to provide an untrained POS tagger to
create tags for unknown words, see __init__ function
* SHOULD BE USED WITH SENTENCE-DELIMITED INPUT
- Due to the nature of this tagger, it works best when
trained over sentence delimited input.
- However it still produces good results if the training
data and testing data are separated on all punctuation eg: [,.?!]
- Input for training is expected to be a list of sentences
where each sentence is a list of (word, tag) tuples
- Input for tag function is a single sentence
Input for tagdata function is a list of sentences
Output is of a similar form
* Function provided to process text that is unsegmented
- Please see basic_sent_chop()
TnT uses a second order Markov model to produce tags for
a sequence of input, specifically:
argmax [Proj(P(t_i|t_i-1,t_i-2)P(w_i|t_i))] P(t_T+1 | t_T)
IE: the maximum projection of a set of probabilities
The set of possible tags for a given word is derived
from the training data. It is the set of all tags
that exact word has been assigned.
To speed up and get more precision, we can use log addition
to instead multiplication, specifically:
argmax [Sigma(log(P(t_i|t_i-1,t_i-2))+log(P(w_i|t_i)))] +
log(P(t_T+1|t_T))
The probability of a tag for a given word is the linear
interpolation of 3 markov models; a zero-order, first-order,
and a second order model.
P(t_i| t_i-1, t_i-2) = l1*P(t_i) + l2*P(t_i| t_i-1) +
l3*P(t_i| t_i-1, t_i-2)
A beam search is used to limit the memory usage of the algorithm.
The degree of the beam can be changed using N in the initialization.
N represents the maximum number of possible solutions to maintain
while tagging.
It is possible to differentiate the tags which are assigned to
capitalized words. However this does not result in a significant
gain in the accuracy of the results.'''
from nltk.corpus import treebank
len(treebank.tagged_sents())
train_data = treebank.tagged_sents()[:3000]
test_data = treebank.tagged_sents()[3000:]
train_data[0]
test_data[0]
from nltk.tag import tnt
tnt_pos_tagger = tnt.TnT()
tnt_pos_tagger.train(train_data)
tnt_pos_tagger.evaluate(test_data)
import pickle
f = open('tnt_pos_tagger.pickle', "wb")
pickle.dump(tnt_pos_tagger, f)
f.close()
tnt_pos_tagger.tag(nltk.word_tokenize("this is a tnt treebank tnt tagger"))
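# Illustrative sketch (assumption: the 'tnt_pos_tagger.pickle' file written above exists):
# the pickled tagger can be restored later and reused without retraining.
with open('tnt_pos_tagger.pickle', 'rb') as tagger_file:
    restored_tnt_tagger = pickle.load(tagger_file)
restored_tnt_tagger.tag(nltk.word_tokenize("the restored tagger tags a new sentence"))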
from nltk.stem.porter import PorterStemmer
porter_stemmer = PorterStemmer()
from nltk.stem.lancaster import LancasterStemmer
lancaster_stemmer = LancasterStemmer()
from nltk.stem import SnowballStemmer
snowball_stemmer = SnowballStemmer("english")
#from nltk.stem.api import StemmerI
#api_stemmer = StemmerI()
from nltk.stem.regexp import RegexpStemmer
regexp_stemmer = RegexpStemmer('ing$|s$|e$|able$', min=4)  # RegexpStemmer expects a regexp of suffixes to strip, not a language name
from nltk.stem.isri import ISRIStemmer
isri_stemmer = ISRIStemmer()
from nltk.stem.rslp import RSLPStemmer
rlsp_stemmer = RSLPStemmer()
if __name__ == "__main__":
import doctest
doctest.testmod(optionflags=doctest.NORMALIZE_WHITESPACE)
words = ['maximum','presumably','multiply','provision','owed','ear','saying','crying','string','meant','cement']
porter_words = []
for word in words:
porter_words.append(porter_stemmer.stem(word))
porter_words
lancaster_words = []
for word in words:
lancaster_words.append(lancaster_stemmer.stem(word))
lancaster_words
snowball_words = []
for word in words:
snowball_words.append(snowball_stemmer.stem(word))
snowball_words
isri_words = []
for word in words:
isri_words.append(isri_stemmer.stem(word))
isri_words
rlsp_words = []
for word in words:
rlsp_words.append(rlsp_stemmer.stem(word))
rlsp_words
regexp_words = []
for word in words:
regexp_words.append(regexp_stemmer.stem(word))
regexp_words
from nltk.stem import WordNetLemmatizer
wordnet_lemmatizer = WordNetLemmatizer()
words_lem = ['dogs','churches','aardwolves','abaci','hardrock','attractive','are','is']
#words_lem_pos = pos_tag(words_lem)
wordnet_words = []
for word in words_lem:
if word == 'is' or word == 'are':
# for verbs
wordnet_words.append(wordnet_lemmatizer.lemmatize(word, pos='v'))
else:
#
wordnet_words.append(wordnet_lemmatizer.lemmatize(word))
wordnet_words
import nltk
sentence = "At eight o'clock on Thursday morning Arthur didn't feel very good."
tokens = nltk.word_tokenize(sentence)
tokens
tagged = nltk.pos_tag(tokens)
tagged[0:6]
entities = nltk.chunk.ne_chunk(tagged)
entities
from nltk.corpus import treebank
t = treebank.parsed_sents('wsj_0001.mrg')[0]
t.draw()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example 2 Convert the restricted stock units (RSUs) an employee has in a company to the current cash value.
Step2: Let's take a look at some values and see how we can produce certain outputs.
Step3: Generators
Step4: Let's try Creating
|
<ASSISTANT_TASK:>
Python Code:
# Store even numbers from 0 to 20
even_lst = [num for num in range(21) if num % 2 == 0]
print(even_lst)
cash_value = 20
rsu_dict = {"Max":20, "Willie":13, "Joanna":14}
lst = [rsu_dict[name]*cash_value for name in rsu_dict]
print(lst)
my_dict = {"Ross":19, "Bernie":13, "Micah":15}
cash_value = 20
# [19*20, 13*20, 15*20]
cash_lst = [my_dict[key]*20 for key in my_dict]
print(cash_lst)
rows = 'ABC'
cols = '123'
vowels = ('a', 'e', 'i', 'o', 'u')
sentence = 'cogito ergo sum'
words = sentence.split()
# Produce [A3, B2, C1]
number_letter_lst = [rows[element]+cols[2-element] for element in range(3)]
print(number_letter_lst)
row_col_lst = [rows[i]+cols[2-i] for i in range(3)]
# Produce [A1, B1, C1, A2, B2, C2, A3, B3, C3]
my_lst = [r+c for c in cols for r in rows]
print(my_lst)
# Simply accessing rows and cols in a comprehension [A1, A2, A3, B1, B2, B3, C1, C2, C3]
# Non-Pythonic
lst = []
for r in rows:
for c in cols:
lst.append(r+c)
# Pythonic
lst = [r+c for r in rows for c in cols]
print(lst)
# let's figure this list out with normal syntax
lst = []
for r in (rows[i]+cols[i] for i in range(3)):
for c in (rows[2-i]+cols[i] for i in range(3)):
lst.append(r + 'x' + c)
print(lst)
# shortened
crossed_list = [x + " x " + y for x in (rows[i]+cols[i] for i in range(3)) for y in (rows[2-i]+cols[i] for i in range(3))]
print(crossed_list)
x = sorted(words, key=lambda x: len(x))
print(x)
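# Generators sketch (illustrative addition): a generator expression looks like a list
# comprehension but uses parentheses and yields items lazily, one at a time, instead of
# building the whole list in memory.
squares_gen = (n * n for n in range(5))
print(next(squares_gen))   # 0 -- only the first value is computed here
print(list(squares_gen))   # [1, 4, 9, 16] -- the remaining values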
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The data
Step2: Part 2
Step3: Part 3
Step4: Submit the training job.
Step5: [Optional] StackDriver Logging
|
<ASSISTANT_TASK:>
Python Code:
%env PROJECT_ID <YOUR_PROJECT_ID>
%env BUCKET_ID <YOUR_BUCKET_ID>
%env REGION <REGION>
%env TRAINER_PACKAGE_PATH ./census_training
%env MAIN_TRAINER_MODULE census_training.train
%env JOB_DIR <gs://YOUR_BUCKET_ID/xgb_job_dir>
%env RUNTIME_VERSION 1.9
%env PYTHON_VERSION 3.5
! mkdir census_training
%%writefile ./census_training/train.py
# [START setup]
import datetime
import os
import subprocess
from sklearn.preprocessing import LabelEncoder
import pandas as pd
from google.cloud import storage
import xgboost as xgb
# TODO: REPLACE 'BUCKET_CREATED_ABOVE' with your GCS BUCKET_ID
BUCKET_ID = 'torryyang-xgb-models'
# [END setup]
# ---------------------------------------
# 1. Add code to download the data from GCS (in this case, using the publicly hosted data).
# AI Platform will then be able to use the data when training your model.
# ---------------------------------------
# [START download-data]
census_data_filename = 'adult.data.csv'
# Public bucket holding the census data
bucket = storage.Client().bucket('cloud-samples-data')
# Path to the data inside the public bucket
data_dir = 'ml-engine/census/data/'
# Download the data
blob = bucket.blob(''.join([data_dir, census_data_filename]))
blob.download_to_filename(census_data_filename)
# [END download-data]
# ---------------------------------------
# This is where your model code would go. Below is an example model using the census dataset.
# ---------------------------------------
# [START define-and-load-data]
# these are the column labels from the census data files
COLUMNS = (
'age',
'workclass',
'fnlwgt',
'education',
'education-num',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'capital-gain',
'capital-loss',
'hours-per-week',
'native-country',
'income-level'
)
# categorical columns contain data that need to be turned into numerical values before being used by XGBoost
CATEGORICAL_COLUMNS = (
'workclass',
'education',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'native-country'
)
# Load the training census dataset
with open(census_data_filename, 'r') as train_data:
raw_training_data = pd.read_csv(train_data, header=None, names=COLUMNS)
# remove column we are trying to predict ('income-level') from features list
train_features = raw_training_data.drop('income-level', axis=1)
# create training labels list
train_labels = (raw_training_data['income-level'] == ' >50K')
# [END define-and-load-data]
# [START categorical-feature-conversion]
# Since the census data set has categorical features, we need to convert
# them to numerical values.
# convert data in categorical columns to numerical values
encoders = {col:LabelEncoder() for col in CATEGORICAL_COLUMNS}
for col in CATEGORICAL_COLUMNS:
train_features[col] = encoders[col].fit_transform(train_features[col])
# [END categorical-feature-conversion]
# [START load-into-dmatrix-and-train]
# load data into DMatrix object
dtrain = xgb.DMatrix(train_features, train_labels)
# train model
bst = xgb.train({}, dtrain, 20)
# [END load-into-dmatrix-and-train]
# ---------------------------------------
# 2. Export and save the model to GCS
# ---------------------------------------
# [START export-to-gcs]
# Export the model to a file
model = 'model.bst'
bst.save_model(model)
# Upload the model to GCS
bucket = storage.Client().bucket(BUCKET_ID)
blob = bucket.blob('{}/{}'.format(
datetime.datetime.now().strftime('census_%Y%m%d_%H%M%S'),
model))
blob.upload_from_filename(model)
# [END export-to-gcs]
%%writefile ./census_training/__init__.py
# Note that __init__.py can be an empty file.
! gcloud config set project $PROJECT_ID
! gcloud ml-engine jobs submit training census_training_$(date +"%Y%m%d_%H%M%S") \
--job-dir $JOB_DIR \
--package-path $TRAINER_PACKAGE_PATH \
--module-name $MAIN_TRAINER_MODULE \
--region $REGION \
--runtime-version=$RUNTIME_VERSION \
--python-version=$PYTHON_VERSION \
--scale-tier BASIC
! gsutil ls gs://$BUCKET_ID/census_*
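# [Optional] StackDriver Logging -- a hedged sketch for following the job's logs from the CLI.
# Assumption: replace <JOB_NAME> with the census_training_* name echoed when the job was submitted.
# ! gcloud ml-engine jobs stream-logs <JOB_NAME>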
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Hail Tables from GRCh37 and GRCh38 assembly reports
Step2: GRCh38
Step3: Create Hail Tables for dbSNP
Step4: Use the function and known keys below to convert allele frequency arrays to structs
Step5: Now we can read in the VCF files again as MatrixTables with the correct contig recodings, and reformat the allele frequency information in info.FREQ and the clinical attributes in info.
Step6: Then we can just grab the rows table since we have no sample or entry information in the MatrixTable.
Step7: Also write tables with only the rsID field, for smaller tables that just map [locus, alleles] to rsID.
Step8: Add dbSNP to datasets API and annotation DB
Step10: After adding tables to datasets.json, create .rst schema files for docs
|
<ASSISTANT_TASK:>
Python Code:
import hail as hl
hl.init()
ht = hl.import_table("gs://hail-datasets-tmp/dbSNP/GCF_000001405.25_GRCh37.p13_assembly_report.txt",
no_header=True,
comment="#",
delimiter="\t",
missing="na")
field_names = ['sequence_name','sequence_role','assigned_molecule',
'assigned_molecule_location/type', 'genbank_accn', 'relationship',
'refseq_accn', 'assembly_unit', 'sequence_length', 'ucsc_style_name']
name = "dbSNP"
version = "154"
build = "GRCh37"
n_rows = ht.count()
n_partitions = ht.n_partitions()
ht = ht.annotate_globals(
metadata=hl.struct(
name=name,
version=version,
reference_genome=build,
n_rows=n_rows,
n_partitions=n_partitions
)
)
ht = ht.rename(dict(zip([f"f{i}" for i in range(10)], field_names)))
ht = ht.drop("relationship").key_by("refseq_accn")
ht.write("gs://hail-datasets-us/NCBI_assembly_report_p13_GRCh37.ht")
ht = hl.read_table("gs://hail-datasets-us/NCBI_assembly_report_p13_GRCh37.ht")
ht.describe()
ht = hl.import_table("gs://hail-datasets-tmp/dbSNP/GCF_000001405.39_GRCh38.p13_assembly_report.txt",
no_header=True,
comment="#",
delimiter="\t",
missing="na")
field_names = ['sequence_name','sequence_role','assigned_molecule',
'assigned_molecule_location/type', 'genbank_accn', 'relationship',
'refseq_accn', 'assembly_unit', 'sequence_length', 'ucsc_style_name']
name = "dbSNP"
version = "154"
build = "GRCh38"
n_rows = ht.count()
n_partitions = ht.n_partitions()
ht = ht.annotate_globals(
metadata=hl.struct(
name=name,
version=version,
reference_genome=build,
n_rows=n_rows,
n_partitions=n_partitions
)
)
ht = ht.rename(dict(zip([f"f{i}" for i in range(10)], field_names)))
ht = ht.drop("relationship").key_by("refseq_accn")
ht.write("gs://hail-datasets-us/NCBI_assembly_report_p13_GRCh38.ht")
ht = hl.read_table("gs://hail-datasets-us/NCBI_assembly_report_p13_GRCh38.ht")
ht.describe()
mt37 = hl.import_vcf(f"gs://hail-datasets-tmp/dbSNP/dbSNP_154_GRCh37.vcf.bgz",
header_file=f"gs://hail-datasets-tmp/dbSNP/dbSNP_154_GRCh37_header_only.vcf.txt",
reference_genome=None,
min_partitions=512)
mt38 = hl.import_vcf(f"gs://hail-datasets-tmp/dbSNP/dbSNP_154_GRCh38.vcf.bgz",
header_file=f"gs://hail-datasets-tmp/dbSNP/dbSNP_154_GRCh38_header_only.vcf.txt",
reference_genome=None,
min_partitions=512)
mt37 = mt37.checkpoint(f"gs://hail-datasets-tmp/checkpoints/dbSNP_154_GRCh37_no_coding.mt",
_read_if_exists=True,
overwrite=False)
mt38 = mt38.checkpoint(f"gs://hail-datasets-tmp/checkpoints/dbSNP_154_GRCh38_no_coding.mt",
_read_if_exists=True,
overwrite=False)
# To get all contigs present for recoding to correct reference genome mapping
contigs_present37 = mt37.aggregate_rows(hl.agg.collect_as_set(mt37.locus.contig))
contigs_present38 = mt38.aggregate_rows(hl.agg.collect_as_set(mt38.locus.contig))
# Load NCBI assembly reports with RefSeq mappings
assembly37_ht = hl.read_table("gs://hail-datasets-us/NCBI_assembly_report_p13_GRCh37.ht")
assembly37_ht = assembly37_ht.annotate(
contig = hl.if_else(assembly37_ht.sequence_role == "unlocalized-scaffold",
assembly37_ht.genbank_accn,
assembly37_ht.assigned_molecule)
)
assembly38_ht = hl.read_table("gs://hail-datasets-us/NCBI_assembly_report_p13_GRCh38.ht")
# Map RefSeq identifiers to chromosomes for GRCh37 using the "contig" field we created in assembly report
rg37 = hl.get_reference("GRCh37")
refseq_to_chr37 = dict(zip(assembly37_ht.refseq_accn.collect(), assembly37_ht.contig.collect()))
refseq_to_chr37 = {k: v for k, v in refseq_to_chr37.items() if k in contigs_present37 and v in rg37.contigs}
# Map RefSeq identifiers to chromosomes for GRCh38 using the "ucsc_style_name" field in assembly report
rg38 = hl.get_reference("GRCh38")
refseq_to_chr38 = dict(zip(assembly38_ht.refseq_accn.collect(), assembly38_ht.ucsc_style_name.collect()))
refseq_to_chr38 = {k: v for k, v in refseq_to_chr38.items() if k in contigs_present38 and v in rg38.contigs}
recodings = {
"GRCh37": refseq_to_chr37,
"GRCh38": refseq_to_chr38
}
# Convert array of strings like hl.array(["GnomAD:.,1,3.187e-05","TOPMED:.,1,2.389e-05"]) to a struct
def arr_str_to_struct(hl_array, known_keys):
_dict = hl.dict(
hl_array.map(
lambda x: ("_" + x.split(":")[0],
x.split(":")[1].split(",").map(lambda x: hl.if_else(x == ".", hl.missing(hl.tfloat), hl.float(x))))
)
)
_struct = hl.rbind(_dict, lambda d: hl.struct(**{k: d.get(k) for k in known_keys}))
return _struct
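# Illustrative check (hypothetical call, requires an initialized Hail session); roughly:
# hl.eval(arr_str_to_struct(hl.array(["GnomAD:.,1,3.187e-05", "TOPMED:.,1,2.389e-05"]),
#                           ["_GnomAD", "_TOPMED"]))
# -> Struct(_GnomAD=[None, 1.0, 3.187e-05], _TOPMED=[None, 1.0, 2.389e-05])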
# To get all possible keys for allele frequency arrays after loading VCF as MatrixTable
# known_keys_FREQ = mt.aggregate_rows(
# hl.agg.explode(
# lambda x: hl.agg.collect_as_set(x), mt.info.FREQ.split("\\|").map(lambda x: x.split(":")[0])
# )
# )
known_keys = ['GENOME_DK','TWINSUK','dbGaP_PopFreq','Siberian','Chileans',
'FINRISK','HapMap','Estonian','ALSPAC','GoESP',
'TOPMED','PAGE_STUDY','1000Genomes','Korea1K','ChromosomeY',
'ExAC','Qatari','GoNL','MGP','GnomAD',
'Vietnamese','GnomAD_exomes','PharmGKB','KOREAN','Daghestan',
'HGDP_Stanford','NorthernSweden','SGDP_PRJ']
known_keys_FREQ = list(map(lambda x: f"_{x}", known_keys))
name = "dbSNP"
version = "154"
builds = ["GRCh37", "GRCh38"]
for build in builds:
mt = hl.import_vcf(f"gs://hail-datasets-tmp/{name}/{name}_{version}_{build}.vcf.bgz",
header_file=f"gs://hail-datasets-tmp/{name}/{name}_{version}_{build}_header_only.vcf.txt",
contig_recoding=recodings[build],
skip_invalid_loci=True,
reference_genome=build,
min_partitions=512)
# First annotation, go from str to array<str> for FREQ
mt = mt.annotate_rows(
info = mt.info.annotate(
FREQ = hl.or_missing(hl.is_defined(mt.info.FREQ), mt.info.FREQ.split("\\|"))
)
)
# Second annotation, turn array<str> into a struct for FREQ
mt = mt.annotate_rows(
info = mt.info.annotate(
FREQ = hl.or_missing(hl.is_defined(mt.info.FREQ),
arr_str_to_struct(mt.info.FREQ, known_keys_FREQ))
)
)
# Reformat clinical attributes from str to array
mt = mt.annotate_rows(
info = mt.info.annotate(
CLNHGVS = hl.or_missing(
hl.is_defined(mt.info.CLNHGVS),
mt.info.CLNHGVS.split("(?:(\|)|(\,))")).map(lambda x: hl.if_else((x == "."), hl.missing(hl.tstr), x)),
CLNVI = hl.or_missing(
hl.is_defined(mt.info.CLNVI),
mt.info.CLNVI.split("(?:(\|)|(\,))")).filter(lambda x: x != "."),
CLNORIGIN = hl.or_missing(
hl.is_defined(mt.info.CLNORIGIN),
mt.info.CLNORIGIN.split("(?:(\|)|(\,))")).filter(lambda x: x != "."),
CLNSIG = hl.or_missing(
hl.is_defined(mt.info.CLNSIG),
mt.info.CLNSIG.split("(?:(\|)|(\,))")).filter(lambda x: x != "."),
CLNDISDB = hl.or_missing(
hl.is_defined(mt.info.CLNDISDB),
mt.info.CLNDISDB.split("(?:(\|)|(\,))")).filter(lambda x: x != "."),
CLNDN = hl.or_missing(
hl.is_defined(mt.info.CLNDN),
mt.info.CLNDN.split("(?:(\|)|(\,))")).filter(lambda x: x != "."),
CLNREVSTAT = hl.or_missing(
hl.is_defined(mt.info.CLNREVSTAT),
mt.info.CLNREVSTAT.split("(?:(\|)|(\,))")).filter(lambda x: x != "."),
CLNACC = hl.or_missing(
hl.is_defined(mt.info.CLNACC),
mt.info.CLNACC.split("(?:(\|)|(\,))")).filter(lambda x: x != ".")
)
)
mt = mt.checkpoint(f"gs://hail-datasets-tmp/checkpoints/{name}_{version}_{build}.mt",
_read_if_exists=True,
overwrite=False)
name = "dbSNP"
version = "154"
builds = ["GRCh37", "GRCh38"]
for build in builds:
# No samples or entries in MT, just grab table with the rows
mt = hl.read_matrix_table(f"gs://hail-datasets-tmp/checkpoints/{name}_{version}_{build}.mt")
ht = mt.rows()
ht_ba = ht.filter(hl.len(ht.alleles) <= 2)
ht_ba = ht_ba.checkpoint(f"gs://hail-datasets-tmp/checkpoints/{name}_{version}_{build}_biallelic.ht",
_read_if_exists=True,
overwrite=False)
ht_ma = ht.filter(hl.len(ht.alleles) > 2)
ht_ma = ht_ma.checkpoint(f"gs://hail-datasets-tmp/checkpoints/{name}_{version}_{build}_multiallelic.ht",
_read_if_exists=True,
overwrite=False)
ht_split = hl.split_multi(ht_ma, keep_star=True, permit_shuffle=True)
ht_split = ht_split.repartition(64, shuffle=False)
ht_split = ht_split.checkpoint(f"gs://hail-datasets-tmp/checkpoints/{name}_{version}_{build}_split_multiallelic.ht",
_read_if_exists=True,
overwrite=False)
# Next, have to fix indices and union ht_split with ht_ba
ht_union = ht_ba.union(ht_split, unify=True)
ht_union = ht_union.annotate(
a_index = hl.if_else(hl.is_missing(ht_union.a_index), 1, ht_union.a_index),
was_split = hl.if_else(hl.is_missing(ht_union.was_split), False, ht_union.was_split),
old_locus = hl.if_else(hl.is_missing(ht_union.old_locus), ht_union.locus, ht_union.old_locus),
old_alleles = hl.if_else(hl.is_missing(ht_union.old_alleles), ht_union.alleles, ht_union.old_alleles)
)
ht_union = ht_union.checkpoint(f"gs://hail-datasets-tmp/checkpoints/{name}_{version}_{build}_unioned.ht",
_read_if_exists=True,
overwrite=False)
# Arrays for AFs start w/ ref allele in index 0, so just use a_index to get alternate AFs
ht = ht_union.annotate(
info = ht_union.info.annotate(
FREQ = ht_union.info.FREQ.annotate(
**{k: hl.or_missing(hl.is_defined(ht_union.info.FREQ[k]),
ht_union.info.FREQ[k][ht_union.a_index])
for k in known_keys_FREQ}
)
)
)
ht = ht.repartition(512, shuffle=True)
ht = ht.checkpoint(f"gs://hail-datasets-tmp/checkpoints/{name}_{version}_{build}.ht",
_read_if_exists=True,
overwrite=False)
n_rows = ht.count()
n_partitions = ht.n_partitions()
ht = ht.annotate_globals(
metadata=hl.struct(
name=name,
version=version,
reference_genome=build,
n_rows=n_rows,
n_partitions=n_partitions
)
)
ht.write(f"gs://hail-datasets-us/{name}_{version}_{build}.ht")
ht = hl.read_table(f"gs://hail-datasets-us/{name}_{version}_{build}.ht")
ht.describe()
print(str(hl.eval(ht.metadata)) + "\n")
name = "dbSNP"
version = "154"
builds = ["GRCh37", "GRCh38"]
for build in builds:
# Write table with only rsid's
ht_rsid = hl.read_table(f"gs://hail-datasets-us/{name}_{version}_{build}.ht")
ht_rsid = ht_rsid.select("rsid")
n_rows = ht_rsid.count()
n_partitions = ht_rsid.n_partitions()
ht_rsid = ht_rsid.annotate_globals(
metadata=hl.struct(
name=f"{name}_rsid",
version=version,
reference_genome=build,
n_rows=n_rows,
n_partitions=n_partitions
)
)
ht_rsid.write(f"gs://hail-datasets-us/{name}_rsid_{version}_{build}.ht")
ht_rsid = hl.read_table(f"gs://hail-datasets-us/{name}_rsid_{version}_{build}.ht")
ht_rsid.describe()
print(str(hl.eval(ht_rsid.metadata)) + "\n")
# To check uniqueness of keys
tables = ["gs://hail-datasets-us/dbSNP_rsid_154_GRCh37.ht", "gs://hail-datasets-us/dbSNP_rsid_154_GRCh38.ht"]
for table in tables:
ht = hl.read_table(table)
ht_count = ht.count()
print(f"n = {ht_count}")
ht_distinct_count = ht.distinct().count()
print(f"n_distinct = {ht_distinct_count}")
if ht_count == ht_distinct_count:
print(f"{table} rows unique\n")
else:
print(f"{table} rows NOT unique\n")
import os
import json
datasets_path = os.path.abspath("../../hail/python/hail/experimental/datasets.json")
with open(datasets_path, "r") as f:
datasets = json.load(f)
names = ["dbSNP", "dbSNP_rsid"]
version = "154"
builds = ["GRCh37", "GRCh38"]
gcs_us_url_root = "gs://hail-datasets-us"
gcs_eu_url_root = "gs://hail-datasets-eu"
aws_us_url_root = "s3://hail-datasets-us-east-1"
for name in names:
json_entry = {
"annotation_db": {
"key_properties": []
},
"description": "dbSNP: Reference SNP (rs or RefSNP) Hail Table. The database includes both common and rare single-base nucleotide variation (SNV), short (=< 50bp) deletion/insertion polymorphisms, and other classes of small genetic variations.",
"url": "https://www.ncbi.nlm.nih.gov/snp/docs/RefSNP_about/",
"versions": [
{
"reference_genome": builds[0],
"url": {
"aws": {
"us": f"{aws_us_url_root}/{name}_{version}_{builds[0]}.ht"
},
"gcp": {
"eu": f"{gcs_eu_url_root}/{name}_{version}_{builds[0]}.ht",
"us": f"{gcs_us_url_root}/{name}_{version}_{builds[0]}.ht"
}
},
"version": version
},
{
"reference_genome": builds[1],
"url": {
"aws": {
"us": f"{aws_us_url_root}/{name}_{version}_{builds[1]}.ht"
},
"gcp": {
"eu": f"{gcs_eu_url_root}/{name}_{version}_{builds[1]}.ht",
"us": f"{gcs_us_url_root}/{name}_{version}_{builds[1]}.ht"
}
},
"version": version
}
]
}
if name == "dbSNP_rsid":
json_entry["description"] = "dbSNP: This Hail Table contains a mapping from locus/allele pairs to Reference SNP IDs (rsID). For the full dataset, see dbSNP."
datasets[name] = json_entry
# Write new entries back to datasets.json config:
with open(datasets_path, "w") as f:
json.dump(datasets, f, sort_keys=True, ensure_ascii=False, indent=2)
# Create/update schema .rst file
import textwrap
output_dir = os.path.abspath("../../hail/python/hail/docs/datasets/schemas")
datasets_path = os.path.abspath("../../hail/python/hail/experimental/datasets.json")
with open(datasets_path, "r") as f:
datasets = json.load(f)
names = ["dbSNP", "dbSNP_rsid"]
for name in names:
versions = sorted(set(dataset["version"] for dataset in datasets[name]["versions"]))
if not versions:
versions = [None]
reference_genomes = sorted(set(dataset["reference_genome"] for dataset in datasets[name]["versions"]))
if not reference_genomes:
reference_genomes = [None]
print(name)
print(versions[0])
print(reference_genomes[0] + "\n")
path = [dataset["url"]["gcp"]["us"]
for dataset in datasets[name]["versions"]
if all([dataset["version"] == versions[0],
dataset["reference_genome"] == reference_genomes[0]])]
assert len(path) == 1
path = path[0]
if path.endswith(".ht"):
table = hl.methods.read_table(path)
table_class = "hail.Table"
else:
table = hl.methods.read_matrix_table(path)
table_class = "hail.MatrixTable"
description = table.describe(handler=lambda x: str(x)).split("\n")
description = "\n".join([line.rstrip() for line in description])
template = """.. _{dataset}:
{dataset}
{underline1}
* **Versions:** {versions}
* **Reference genome builds:** {ref_genomes}
* **Type:** :class:`{class}`
Schema ({version0}, {ref_genome0})
{underline2}
.. code-block:: text
{schema}
"""
context = {
"dataset": name,
"underline1": len(name) * "=",
"version0": versions[0],
"ref_genome0": reference_genomes[0],
"versions": ", ".join([str(version) for version in versions]),
"ref_genomes": ", ".join([str(reference_genome) for reference_genome in reference_genomes]),
"underline2": len("".join(["Schema (", str(versions[0]), ", ", str(reference_genomes[0]), ")"])) * "~",
"schema": textwrap.indent(description, " "),
"class": table_class
}
with open(output_dir + f"/{name}.rst", "w") as f:
f.write(template.format(**context).strip())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pro & Contra
Step2: A toy (model) example
Step3: The augmented Lagrangian method (method of the modified Lagrangian function)
Step4: A significant problem
Step5: Note that all the properties are preserved under affine transformations
|
<ASSISTANT_TASK:>
Python Code:
import cvxpy
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rc("text", usetex=True)
num_iters = 30
n = 20
m = 10
A = np.random.randn(m, n)
b = np.random.randn(m, 1)
# Initialize problem
x = cvxpy.Variable(shape=(n, 1))
f = cvxpy.norm(x, 2)
# Solve with CVXPY.
cvxpy.Problem(cvxpy.Minimize(f), [A @ x == b]).solve(solver=cvxpy.SCS, verbose=True)
cvxpy_f = f.value
print ("Optimal value from CVXPY =", f.value)
resid = A@x - b
mu = cvxpy.Parameter(value=1, nonneg=True)
penal_f = f + (mu/2.)*cvxpy.sum_squares(resid)
res_conv = []
f_conv = []
for t in range(num_iters):
cvxpy.Problem(cvxpy.Minimize(penal_f)).solve(solver=cvxpy.SCS)
mu.value = mu.value * 2
res_conv.append(np.linalg.norm(resid.value))
f_conv.append(f.value)
print("Optimal value from method of augmented Lagrangian =", f.value)
plt.figure(figsize=(8, 6))
fontsize=20
plt.plot(f_conv, label="Penalty")
plt.plot(np.ones(num_iters) * cvxpy_f, label="CVXPy")
plt.legend(fontsize=fontsize)
plt.xlabel("Number of iterations, $k$", fontsize=fontsize)
plt.ylabel("Objective, $f(x_k)$", fontsize=fontsize)
plt.xticks(fontsize=fontsize)
plt.grid(True)
_ = plt.yticks(fontsize=fontsize)
plt.figure(figsize=(8, 6))
fontsize=20
plt.semilogy(res_conv)
plt.xlabel("Number of iterations, $k$", fontsize=fontsize)
plt.ylabel("Norm of residuals, $\|Ax_k - b\|_2$", fontsize=fontsize)
plt.xticks(fontsize=fontsize)
plt.grid(True)
_ = plt.yticks(fontsize=fontsize)
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rc("text", usetex=True)
import cvxpy as cvx
def dual_ascent(update_x, A, b, alpha, x0, lambda0, max_iter):
x = x0.copy()
lam = lambda0.copy()
conv_x = [x]
conv_lam = [lam]
for i in range(max_iter):
x = update_x(x, lam, A, b)
lam = lam + alpha * (A @ x - b)
conv_x.append(x.copy())
conv_lam.append(lam.copy())
return x, lam, conv_x, conv_lam
m, n = 10, 20
A = np.random.randn(m, n)
b = np.random.randn(m)
P = np.random.randn(n, n)
P = P.T @ P
c = np.random.randn(n)
spec = np.linalg.eigvalsh(P)
mu = spec.min()
print(mu)
x = cvx.Variable(n)
obj = 0.5 * cvx.quad_form(x, P) - c @ x
problem = cvx.Problem(cvx.Minimize(obj), [A @ x == b])
problem.solve(verbose=True)
print(np.linalg.norm(A @ x.value - b))
print(problem.value)
x0 = np.random.randn(n)
lam0 = np.random.randn(m)
max_iter = 100000
alpha = 1e-3
def f(x):
return 0.5 * x @ P @ x - c @ x
def L(x, lam):
return f(x) + lam @ (A @ x - b)
def update_x(x, lam, A, b):
return np.linalg.solve(P, c - A.T @ lam)
x_da, lam_da, conv_x_da, conv_lam_da = dual_ascent(update_x, A, b, alpha, x0, lam0, max_iter)
print(np.linalg.norm(A @ x_da - b))
print(0.5 * x_da @ P @ x_da - c @ x_da)
plt.figure(figsize=(10, 8))
plt.plot([f(x) for x in conv_x_da], label="Objective")
plt.plot(problem.value * np.ones(len(conv_x_da)), label="Traget value")
# plt.yscale("log")
plt.xscale("log")
plt.legend(fontsize=20)
plt.xlabel("\# iterations", fontsize=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.grid(True)
plt.plot([L(x, lam) for x, lam in zip(conv_x_da, conv_lam_da)],
label="Lagrangian")
plt.legend(fontsize=20)
plt.xlabel("\# iterations", fontsize=20)
plt.semilogy([np.linalg.norm(A @ x - b) for x in conv_x_da], label="$\|Ax - b\|_2$")
plt.legend(fontsize=20)
plt.xlabel("\# iterations", fontsize=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.grid(True)
def augmented_lagrangian(update_x, A, b, rho0, x0, lambda0, max_iter):
x = x0.copy()
lam = lambda0.copy()
conv_x = [x]
conv_lam = [lam]
rho = rho0
for i in range(max_iter):
x = update_x(x, lam, A, b)
lam = lam + rho * (A @ x - b)
conv_x.append(x.copy())
conv_lam.append(lam.copy())
return x, lam, conv_x, conv_lam
def update_x_al(x, lam, A, b):
return np.linalg.solve(P + rho * A.T @ A, c - A.T @ lam + A.T @ b)
rho = 10
max_iter = 1000
x_al, lam_al, conv_x_al, conv_lam_al = augmented_lagrangian(update_x_al, A, b, rho, x0, lam0, max_iter)
print(np.linalg.norm(A @ x_al - b))
print(0.5 * x_al @ P @ x_al - c @ x_al)
plt.plot([f(x) for x in conv_x_da], label="DA")
plt.plot([f(x) for x in conv_x_al], label="AL")
# plt.yscale("log")
plt.xscale("log")
plt.legend(fontsize=20)
plt.xlabel("\# iterations", fontsize=20)
plt.ylabel("Objective", fontsize=20)
plt.plot([L(x, lam) for x, lam in zip(conv_x_da, conv_lam_da)],
label="DA")
plt.plot([L(x, lam) for x, lam in zip(conv_x_al, conv_lam_al)],
label="AL")
plt.legend(fontsize=20)
plt.xscale("log")
plt.xlabel("\# iterations", fontsize=20)
plt.xlabel("Lagrangian", fontsize=20)
plt.semilogy([np.linalg.norm(A @ x - b) for x in conv_x_da], label="DA")
plt.semilogy([np.linalg.norm(A @ x - b) for x in conv_x_al], label="AL")
plt.legend(fontsize=20)
plt.xscale("log")
plt.xlabel("\# iterations", fontsize=20)
plt.ylabel("$\|Ax - b\|_2$", fontsize=20)
plt.grid(True)
plt.yticks(fontsize=20)
plt.xticks(fontsize=20)
def admm(update_x, update_z, rho0, x0, z0, lambda0, max_iter):
x = x0.copy()
z = z0.copy()
lam = lambda0.copy()
conv_x = [x]
conv_z = [z]
conv_lam = [lam]
rho = rho0
for i in range(max_iter):
x = update_x(x, z, lam, A, b)
z = update_z(x, z, lam, A, b)
lam = lam + rho * (x - z)
conv_x.append(x.copy())
conv_z.append(z.copy())
conv_lam.append(lam.copy())
return x, z, lam, conv_x, conv_z, conv_lam
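# ADMM sketch for min f(x) + g(z) subject to x = z: alternate an x-update and a z-update
# of the augmented Lagrangian with penalty rho, then take the dual step
# lambda_{k+1} = lambda_k + rho * (x_{k+1} - z_{k+1}).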
def update_x_admm(x, z, lam, A, b):
n = x.shape[0]
return np.linalg.solve(P + rho*np.eye(n), -lam + c + rho * z)
def update_z_admm(x, z, lam, A, b):
x_hat = lam / rho + x
return x_hat - A.T @ np.linalg.solve(A @ A.T, A @ x_hat - b)
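# The z-update above projects x_hat = x + lambda / rho onto the affine set {z : A z = b},
# so the equality constraint is handled entirely through the z variable.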
z0 = np.random.randn(n)
lam0 = np.random.randn(n)
rho = 10
x_admm, z_admm, lam_admm, conv_x_admm, conv_z_admm, conv_lam_admm = admm(update_x_admm,
update_z_admm,
rho, x0, z0, lam0,
max_iter=100)
print(f(x_admm))
plt.figure(figsize=(10, 8))
plt.plot([f(x) for x in conv_x_da], label="DA")
plt.plot([f(x) for x in conv_x_al], label="AL")
plt.plot([f(x) for x in conv_x_admm], label="ADMM x")
plt.plot([f(z) for z in conv_z_admm], label="ADMM z")
# plt.yscale("log")
plt.xscale("log")
plt.legend(fontsize=20)
plt.xlabel("\# iterations", fontsize=20)
plt.ylabel("Objective", fontsize=20)
plt.grid(True)
plt.yticks(fontsize=20)
plt.xticks(fontsize=20)
plt.semilogy([np.linalg.norm(A @ x - b) for x in conv_x_da], label="DA")
plt.semilogy([np.linalg.norm(A @ x - b) for x in conv_x_al], label="AL")
plt.semilogy([np.linalg.norm(A @ x - b) for x in conv_x_admm], label="ADMM")
plt.legend(fontsize=20)
plt.xscale("log")
plt.xlabel("\# iterations", fontsize=20)
plt.ylabel("$\|Ax - b\|_2$", fontsize=20)
plt.grid(True)
plt.yticks(fontsize=20)
plt.xticks(fontsize=20)
plt.show()
plt.semilogy([np.linalg.norm(x - z) for x, z in zip(conv_x_admm, conv_z_admm)])
plt.grid(True)
plt.xlabel("\# iterations", fontsize=20)
plt.ylabel("$\|x_k - z_k\|_2$", fontsize=20)
plt.yticks(fontsize=20)
plt.show()
import scipy.optimize as scopt
m, n = 10, 200
A = np.random.rand(m, n)
b = np.random.rand(m)
c = np.random.rand(n)
scipy_linprog_conv = []
def callback_splin(cur_res):
scipy_linprog_conv.append(cur_res)
res = scopt.linprog(c, A_eq=A, b_eq=b,
bounds=[(0, None) for i in range(n)],
callback=callback_splin, method="simplex")
print(res)
def update_x_admm(x, z, lam, A, b):
n = x.shape[0]
m = A.shape[0]
C = np.block([[rho * np.eye(n), A.T], [A, np.zeros((m, m))]])
rhs = np.block([-lam - c + rho * z, b])
return np.linalg.solve(C, rhs)[:n]
def update_z_admm(x, z, lam, A, b):
x_hat = lam / rho + x
    return np.maximum(x_hat, 0)  # project onto the nonnegative orthant
x0 = np.random.randn(n)
z0 = np.random.randn(n)
lam0 = np.random.randn(n)
rho = 10
x_admm, z_admm, lam_admm, conv_x_admm, conv_z_admm, conv_lam_admm = admm(update_x_admm,
update_z_admm,
rho, x0, z0, lam0, max_iter=10000)
print(c @ x_admm, res.fun)
print(c @ x_admm - res.fun, np.linalg.norm(x_admm - res.x))
plt.figure(figsize=(10, 8))
plt.plot([c @ x for x in conv_x_admm[:100]], label="ADMM")
plt.plot([c @ res.x for res in scipy_linprog_conv], label="Scipy")
plt.legend(fontsize=20)
plt.grid(True)
plt.xlabel("\# iterations", fontsize=20)
plt.ylabel("$c^{\\top}x_k$", fontsize=20)
plt.yticks(fontsize=20)
plt.xticks(fontsize=20)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading data
Step2: Selecting data
Step3: Columns can also be accessed as attributes (as long as they have a valid Python name).
Step4: We can select elements of a DataFrame either by label (with the .loc attribute) or by position (with the .iloc attribute). Row and column indices take the usual order (first and second place, respectively).
Step5: Slicing works too.
Step6: Label indexing is more natural than positional indexing (think of a function call, where keyword arguments are easier to work with than positional arguments).
Step7: Often we want to select data based on certain conditions.
Step8: Subsets can be selected by callable functions (returning valid indexers).
Step9: So it can serve as a column indexer.
Step10: The following function filters for data where the number of cases is greater than 100,000.
Step11: So it can serve as a row indexer.
Step12: We may want to select or mask data while preserving the original shape.
Step13: Hands-on exercises
Step14: Indexing
Step15: We could specify that the first (unnamed) column should be used as the index (row labels).
Step16: Remember we learnt set_index() in the previous section? We also have reset_index() at our disposal.
Step17: And we are back to a default index for this DataFrame. The original index is stored in its own column.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.read_csv('../data/tidy_who.csv')
df.head()
df.shape
df.sample(10)
df.describe()
df['g_whoregion'].unique()
df['country'].nunique()
df['country'].head(3)
df.country[1000:1003]
df.loc[0, 'country']
df.loc[df.shape[0] - 1, 'country']
df.iloc[0, 0]
df.iloc[df.shape[0] - 1, 0]
df.loc[:5, 'country']
df.loc[:5, 'country':'type']
df.iloc[:5, :5]
cond = df.year < 1981
df[cond].shape
df[cond & (df.country == 'Argentina') & (df.type == 'rel') & (df.sex == 'm')]
gr_and_it = df.country.isin(['Greece', 'Italy'])
df[gr_and_it].tail()
lambda x: ['country', 'g_whoregion']
df.loc[:3, lambda x: ['country', 'g_whoregion']]
lambda x: x.cases > 100000
great = df.loc[lambda x: x.cases > 100000, :]
great
df.cases.loc[lambda x: x > 100000]
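# a small extra example: a callable can select rows and columns in the same .loc call
df.loc[lambda d: d.cases > 100000, ['country', 'year', 'cases']]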
great.where(great.country == 'India')
great.mask(great.country == 'India')
df1 = df[(df.country == 'Greece') & (df.year == 2000) & (df.age_range.isin([14, 1524]))]
df1.to_csv('../data/df1.csv')
df2 = pd.read_csv('../data/df1.csv')
df1.index
df2.index
pd.read_csv('../data/df1.csv', index_col=0)
df1.reset_index()
df1.reset_index().index
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example 1
Step2: Example 2
Step3: Example 3 - Reaction time test
Step4: Question from class
|
<ASSISTANT_TASK:>
Python Code:
from psychopy import visual, core, event
import numpy as np
win = visual.Window()
core.wait(1)
win.close()
# Create a window and a circle
win = visual.Window()
circle = visual.Circle(win, radius=0.1)
# Show the circle until keypress
circle.draw()
win.flip()
event.waitKeys()
# Close the window
win.close()
# Parameters in the experiment
nTrials = 5
reaction_times = []
# Create a window and the circle
win = visual.Window()
circle = visual.Circle(win, radius=0.1)
# Run the trials
for t in np.arange(nTrials):
# Show an empty screen for 1 second
win.flip()
core.wait(1)
# Show the circle until keypress
circle.draw()
win.flip()
keypressed = False
t0 = core.getTime()
while not keypressed:
k = event.getKeys()
if k:
reaction_times.append(core.getTime() - t0)
break
# Close the window
win.close()
print(np.array(reaction_times) * 1000) # in ms
import time
import matplotlib.pyplot as plt
# Run nLoops iterations
nLoops = 10000
loop_times = []
for i in range(nLoops):
loop_times.append(time.time())
# The differences between consecutive time stamps are the loop times (in microseconds)
loop_dur = np.diff(loop_times) * 1000000
# Plot the results as a histogram (in microseconds). Set the axis to [-1, +1] microseconds
plt.hist(loop_dur, bins=100, range = (-1, 1))
plt.show()
# Compute the M \pm SD of loop-time
print('%.2f %.2f' % (np.mean(loop_dur), np.std(loop_dur)))
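# On Python 3, time.perf_counter() usually offers better timer resolution than time.time();
# a minimal sketch of the same measurement with it:
perf_times = [time.perf_counter() for _ in range(nLoops)]
perf_dur = np.diff(perf_times) * 1000000
print('%.2f %.2f' % (np.mean(perf_dur), np.std(perf_dur)))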
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Closed Form Approximation
Step2: That's the one-standard deviation range about the estimator. For example
Step3: that's the same relationship as a plot
Step4: For reference, the 2-sided tail probabilities as a function of $z$ (the way to read it is as follows
Step5: Using Monte Carlo
Step6: We do some intermediate calculations...
Step7: ...and then generate our random numbers...
Step8: ...that we then reduce in one dimension (i.e., over the people in the sample) to obtain our estimator for the probas for males and females as well as the difference. Finally, on the differences we look at the mean (should be zero-ish) and the standard deviation (should be consistent with the numbers above)
|
<ASSISTANT_TASK:>
Python Code:
# explicit imports for this cell (the notebook otherwise relies on pylab-style names
# such as sqrt, linspace, plot and grid)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from numpy import sqrt, linspace
from matplotlib.pyplot import plot, grid
from scipy.stats import norm
N_people = 500
ratio_female = 0.30
proba = 0.40
def the_sd(N, p, r):
N = float(N)
p = float(p)
r = float(r)
return sqrt(1.0/N*(p*(1.0-p))/(r*(1.0-r)))
def sd_func_factory(N,r):
def func(p):
return the_sd(N,p,r)
return func
f = sd_func_factory(N_people, ratio_female)
f2 = sd_func_factory(N_people/2, ratio_female)
p = linspace(0,0.25,5)
f = sd_func_factory(N_people, ratio_female)
f2 = sd_func_factory(N_people/2, ratio_female)
sd = list(map(f, p))
sd2 = list(map(f2, p))
pd.DataFrame(data= {'p':p, 'sd':sd, 'sd2':sd2})
p = linspace(0,0.25,50)
sd = list(map(f, p))
sd2 = list(map(f2, p))
plot (p,p, 'k')
plot (p,p-sd, 'g--')
plot (p,p+sd, 'g--')
plot (p,p-sd2, 'r--')
plot (p,p+sd2, 'r--')
grid(b=True, which='major', color='k', linestyle='--')
z=linspace(1.,3,100)
plot(z,1. - (norm.cdf(z)-norm.cdf(-z)))
grid(b=True, which='major', color='k', linestyle='--')
plt.title("Probability of being beyond Z (2-sided) vs Z")
number_of_tries = 1000
N_female = int (N_people * ratio_female)
N_male = N_people - N_female
data_male = np.random.binomial(n=1, p=proba, size=(number_of_tries, N_male))
data_female = np.random.binomial(n=1, p=proba, size=(number_of_tries, N_female))
proba_male = list(map(np.mean, data_male))
proba_female = list(map(np.mean, data_female))
proba_diff = list((pm-pf) for pm,pf in zip(proba_male, proba_female))
np.mean(proba_diff), np.std(proba_diff)
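# sanity check: the Monte Carlo spread should be close to the closed-form approximation
print(np.std(proba_diff), the_sd(N_people, proba, ratio_female))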
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Configure environment settings
Step2: We now create the ARTIFACT_STORE bucket if it's not there. Note that this bucket should be created in the region specified in the variable REGION (if you have already a bucket with this name in a different region than REGION, you may want to change the ARTIFACT_STORE name so that you can recreate a bucket in REGION with the command in the cell below).
Step3: Importing the dataset into BigQuery
Step4: Explore the Covertype dataset
Step5: Create training and validation splits
Step6: Create a validation split
Step7: Develop a training application
Step8: Convert all numeric features to float64
Step9: Run the pipeline locally.
Step10: Calculate the trained model's accuracy.
Step11: Prepare the hyperparameter tuning application.
Step12: Write the tuning script.
Step13: Package the script into a docker image.
Step14: Build the docker image.
Step15: Submit a Vertex AI hyperparameter tuning job
Step16: Go to the Vertex AI Training dashboard and view the progression of the HP tuning job under "Hyperparameter Tuning Jobs".
Step17: You'll need to wait for the hyperparameter job to complete before being able to retrieve the best job by running the cell below.
Step20: Retrain the model with the best hyperparameters
Step21: Examine the training output
Step22: Deploy the model to Vertex AI Prediction
Step23: Uploading the trained model
Step24: Deploying the uploaded model
Step25: Serve predictions
|
<ASSISTANT_TASK:>
Python Code:
import os
import time
import pandas as pd
from google.cloud import aiplatform, bigquery
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
REGION = "us-central1"
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
ARTIFACT_STORE = f"gs://{PROJECT_ID}-kfp-artifact-store"
DATA_ROOT = f"{ARTIFACT_STORE}/data"
JOB_DIR_ROOT = f"{ARTIFACT_STORE}/jobs"
TRAINING_FILE_PATH = f"{DATA_ROOT}/training/dataset.csv"
VALIDATION_FILE_PATH = f"{DATA_ROOT}/validation/dataset.csv"
API_ENDPOINT = f"{REGION}-aiplatform.googleapis.com"
os.environ["JOB_DIR_ROOT"] = JOB_DIR_ROOT
os.environ["TRAINING_FILE_PATH"] = TRAINING_FILE_PATH
os.environ["VALIDATION_FILE_PATH"] = VALIDATION_FILE_PATH
os.environ["PROJECT_ID"] = PROJECT_ID
os.environ["REGION"] = REGION
!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}
%%bash
DATASET_LOCATION=US
DATASET_ID=covertype_dataset
TABLE_ID=covertype
DATA_SOURCE=gs://workshop-datasets/covertype/small/dataset.csv
SCHEMA=Elevation:INTEGER,\
Aspect:INTEGER,\
Slope:INTEGER,\
Horizontal_Distance_To_Hydrology:INTEGER,\
Vertical_Distance_To_Hydrology:INTEGER,\
Horizontal_Distance_To_Roadways:INTEGER,\
Hillshade_9am:INTEGER,\
Hillshade_Noon:INTEGER,\
Hillshade_3pm:INTEGER,\
Horizontal_Distance_To_Fire_Points:INTEGER,\
Wilderness_Area:STRING,\
Soil_Type:STRING,\
Cover_Type:INTEGER
bq --location=$DATASET_LOCATION --project_id=$PROJECT_ID mk --dataset $DATASET_ID
bq --project_id=$PROJECT_ID --dataset_id=$DATASET_ID load \
--source_format=CSV \
--skip_leading_rows=1 \
--replace \
$TABLE_ID \
$DATA_SOURCE \
$SCHEMA
%%bigquery
SELECT *
FROM `covertype_dataset.covertype`
!bq query \
-n 0 \
--destination_table covertype_dataset.training \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'
!bq extract \
--destination_format CSV \
covertype_dataset.training \
$TRAINING_FILE_PATH
!bq query \
-n 0 \
--destination_table covertype_dataset.validation \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (8)'
!bq extract \
--destination_format CSV \
covertype_dataset.validation \
$VALIDATION_FILE_PATH
df_train = pd.read_csv(TRAINING_FILE_PATH)
df_validation = pd.read_csv(VALIDATION_FILE_PATH)
print(df_train.shape)
print(df_validation.shape)
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
("num", StandardScaler(), numeric_feature_indexes),
("cat", OneHotEncoder(), categorical_feature_indexes),
]
)
pipeline = Pipeline(
[
("preprocessor", preprocessor),
("classifier", SGDClassifier(loss="log", tol=1e-3)),
]
)
num_features_type_map = {
feature: "float64" for feature in df_train.columns[numeric_feature_indexes]
}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
X_train = df_train.drop("Cover_Type", axis=1)
y_train = df_train["Cover_Type"]
X_validation = df_validation.drop("Cover_Type", axis=1)
y_validation = df_validation["Cover_Type"]
pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)
pipeline.fit(X_train, y_train)
accuracy = pipeline.score(X_validation, y_validation)
print(accuracy)
TRAINING_APP_FOLDER = "training_app"
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
%%writefile {TRAINING_APP_FOLDER}/train.py
import os
import subprocess
import sys
import fire
import hypertune
import numpy as np
import pandas as pd
import pickle
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
def train_evaluate(job_dir, training_dataset_path, validation_dataset_path, alpha, max_iter, hptune):
df_train = pd.read_csv(training_dataset_path)
df_validation = pd.read_csv(validation_dataset_path)
if not hptune:
df_train = pd.concat([df_train, df_validation])
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log',tol=1e-3))
])
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter))
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter)
pipeline.fit(X_train, y_train)
if hptune:
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
accuracy = pipeline.score(X_validation, y_validation)
print('Model accuracy: {}'.format(accuracy))
# Log it with hypertune
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='accuracy',
metric_value=accuracy
)
# Save the model
if not hptune:
model_filename = 'model.pkl'
with open(model_filename, 'wb') as model_file:
pickle.dump(pipeline, model_file)
gcs_model_path = "{}/{}".format(job_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path], stderr=sys.stdout)
print("Saved model in: {}".format(gcs_model_path))
if __name__ == "__main__":
fire.Fire(train_evaluate)
%%writefile {TRAINING_APP_FOLDER}/Dockerfile
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
WORKDIR /app
COPY train.py .
ENTRYPOINT ["python", "train.py"]
IMAGE_NAME = "trainer_image"
IMAGE_TAG = "latest"
IMAGE_URI = f"gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{IMAGE_TAG}"
os.environ["IMAGE_URI"] = IMAGE_URI
!gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDER
TIMESTAMP = time.strftime("%Y%m%d_%H%M%S")
JOB_NAME = f"forestcover_tuning_{TIMESTAMP}"
JOB_DIR = f"{JOB_DIR_ROOT}/{JOB_NAME}"
os.environ["JOB_NAME"] = JOB_NAME
os.environ["JOB_DIR"] = JOB_DIR
%%bash
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
CONFIG_YAML=config.yaml
cat <<EOF > $CONFIG_YAML
studySpec:
metrics:
- metricId: accuracy
goal: MAXIMIZE
parameters:
- parameterId: max_iter
discreteValueSpec:
values:
- 10
- 20
- parameterId: alpha
doubleValueSpec:
minValue: 1.0e-4
maxValue: 1.0e-1
scaleType: UNIT_LINEAR_SCALE
algorithm: ALGORITHM_UNSPECIFIED # results in Bayesian optimization
trialJobSpec:
workerPoolSpecs:
- machineSpec:
machineType: $MACHINE_TYPE
replicaCount: $REPLICA_COUNT
containerSpec:
imageUri: $IMAGE_URI
args:
- --job_dir=$JOB_DIR
- --training_dataset_path=$TRAINING_FILE_PATH
- --validation_dataset_path=$VALIDATION_FILE_PATH
- --hptune
EOF
gcloud ai hp-tuning-jobs create \
--region=$REGION \
--display-name=$JOB_NAME \
--config=$CONFIG_YAML \
--max-trial-count=5 \
--parallel-trial-count=5
echo "JOB_NAME: $JOB_NAME"
def get_trials(job_name):
jobs = aiplatform.HyperparameterTuningJob.list()
    match = [job for job in jobs if job.display_name == job_name]
tuning_job = match[0] if match else None
return tuning_job.trials if tuning_job else None
def get_best_trial(trials):
metrics = [trial.final_measurement.metrics[0].value for trial in trials]
best_trial = trials[metrics.index(max(metrics))]
return best_trial
def retrieve_best_trial_from_job_name(jobname):
trials = get_trials(jobname)
best_trial = get_best_trial(trials)
return best_trial
best_trial = retrieve_best_trial_from_job_name(JOB_NAME)
alpha = best_trial.parameters[0].value
max_iter = best_trial.parameters[1].value
TIMESTAMP = time.strftime("%Y%m%d_%H%M%S")
JOB_NAME = f"JOB_VERTEX_{TIMESTAMP}"
JOB_DIR = f"{JOB_DIR_ROOT}/{JOB_NAME}"
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
WORKER_POOL_SPEC = f"""\
machine-type={MACHINE_TYPE},\
replica-count={REPLICA_COUNT},\
container-image-uri={IMAGE_URI}\
"""
ARGS = f"""\
--job_dir={JOB_DIR},\
--training_dataset_path={TRAINING_FILE_PATH},\
--validation_dataset_path={VALIDATION_FILE_PATH},\
--alpha={alpha},\
--max_iter={max_iter},\
--nohptune\
"""
!gcloud ai custom-jobs create \
--region={REGION} \
--display-name={JOB_NAME} \
--worker-pool-spec={WORKER_POOL_SPEC} \
--args={ARGS}
print("The model will be exported at:", JOB_DIR)
!gsutil ls $JOB_DIR
MODEL_NAME = "forest_cover_classifier_2"
SERVING_CONTAINER_IMAGE_URI = (
"us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-20:latest"
)
SERVING_MACHINE_TYPE = "n1-standard-2"
uploaded_model = aiplatform.Model.upload(
display_name=MODEL_NAME,
artifact_uri=JOB_DIR,
serving_container_image_uri=SERVING_CONTAINER_IMAGE_URI,
)
endpoint = uploaded_model.deploy(
machine_type=SERVING_MACHINE_TYPE,
accelerator_type=None,
accelerator_count=None,
)
instance = [
2841.0,
45.0,
0.0,
644.0,
282.0,
1376.0,
218.0,
237.0,
156.0,
1003.0,
"Commanche",
"C4758",
]
endpoint.predict([instance])
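# Optional cleanup sketch (left commented out; run it only if you want to remove the
# billable resources created above):
# endpoint.undeploy_all()
# endpoint.delete()
# uploaded_model.delete()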
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: From it you can get the list of files stored in the zip archive, and you can open files using the instance's open function
Step2: 2. Parse header files to dictionaries
Step3: The remainder of the lines should contain properties following the syntax mentioned above. They can easily be parsed to a dictionary.
Step4: 2.1 Parsing properties into tree-like dictionary
Step5: 2.2 In contrast to jpk-force archives, the lower-level header files appear to share the same beginning, with a time stamp in the first line
Step6: 3. Read data from files
Step7: According to the JPKay guys, every 4 items make one data point
Step11: 3.1 How to figure out the number of items making up one data point, i.e., how to figure out the data format of the .dat files?
Step12: But where is this information in a tweezer archive's header?
|
<ASSISTANT_TASK:>
Python Code:
from zipfile import ZipFile
fname = "../examples/force-save-2016.07.05-10.00.50.062.jpk-nt-force"
z = ZipFile(fname)
list_of_files = z.filelist
for f in list_of_files:
print f.filename
print list_of_files[0].filename
f = z.open(list_of_files[0].filename)
lines = f.readlines()
print lines[0]
print lines[1]
print lines[2]
from dateutil import parser
t = parser.parse(lines[0][1:])
print t
_properties = {}
for line in lines[1:]:
key, value = line.split("=")
value = value.strip()
_properties[key] = value
for p in _properties:
print p," = ",_properties[p]
properties = {}
for line in lines[1:]:
key,value = line.split("=")
value = value.strip()
split_key = key.split(".")
d = properties
if len(split_key) > 1:
for s in split_key[:-1]:
if d.keys().count(s):
d = d[s]
else:
d[s] = {}
d = d[s]
d[split_key[-1]] = value
for p in properties:
print p, " = ",properties[p]
properties['force-scan-series']['header']['force-settings']['force-baseline-adjust-settings']
fname = z.filelist[-25].filename
print fname
f = z.open(fname)
lines = f.readlines()
print(lines[0])
print(lines[1])
from struct import unpack
fname = z.filelist[-12].filename
print fname
f = z.open(fname)
content = f.read()
print(len(content))
content[0], content[1], content[2], content[3]
data = unpack(">i", content[0:4])
print data
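# A sketch of decoding the whole buffer at once with numpy, assuming big-endian 4-byte
# integers as in the ">i" format above (the buffer is trimmed to a multiple of 4 bytes)
import numpy as np
values = np.frombuffer(content[:(len(content) // 4) * 4], dtype=">i4")
print len(values), values[:5]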
_z = ZipFile("force-save-2016.06.15-13.17.08.jpk-force")
fname = _z.filelist[-13].filename
print fname
# open header file
f = _z.open(fname)
lines = f.readlines()
# parse properties to dict
properties = {}
for line in lines[2:]:
key,value = line.split("=")
value = value.strip()
split_key = key.split(".")
d = properties
if len(split_key) > 1:
for s in split_key[:-1]:
if d.keys().count(s):
d = d[s]
else:
d[s] = {}
d = d[s]
d[split_key[-1]] = value
# read one channel's data.type property
print properties['channel']['height']['data']['type']
fname = z.filelist[-25].filename
print fname
f = z.open(fname)
lines = f.readlines()
properties = {}
for line in lines[2:]:
key,value = line.split("=")
value = value.strip()
split_key = key.split(".")
d = properties
if len(split_key) > 1:
for s in split_key[:-1]:
if d.keys().count(s):
d = d[s]
else:
d[s] = {}
d = d[s]
d[split_key[-1]] = value
for key in properties:
print key, " = ", properties[key]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We first define a function to prepare the data in the format expected by keras (theano). The function also reduces the size of the images from 100X100 to 32X32.
Step2: We then load the training set and the test set and prepare them with the function prep_datas.
Step3: Image before/after compression
Step4: Lenet neural network
Step5: We build the neural network and fit it on the training set
Step6: We now compare with the real world images (with the deshear method)
Step7: with the labels of Peter
|
<ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import image_loader as im
from matplotlib import pyplot as plt
from skimage.transform import resize
%matplotlib inline
path=os.getcwd()+'/' # finds the path of the folder in which the notebook is
path_train=path+'images/train/'
path_test=path+'images/test/'
path_real=path+'images/real_world/'
def prep_datas(xset,xlabels):
X=list(xset)
for i in range(len(X)):
X[i]=resize(X[i],(32,32,1)) #reduce the size of the image from 100X100 to 32X32. Also flattens the color levels
X=np.reshape(X,(len(X),1,32,32)) # reshape the liste to have the form required by keras (theano), ie (1,32,32)
X=np.array(X) #transforms it into an array
Y = np.eye(2, dtype='uint8')[xlabels] # generates vectors, here of two elements as required by keras (number of classes)
return X,Y
training_set, training_labels = im.load_images(path_train)
test_set, test_labels = im.load_images(path_test)
X_train,Y_train=prep_datas(training_set,training_labels)
X_test,Y_test=prep_datas(test_set,test_labels)
i=11
plt.subplot(1,2,1)
plt.imshow(training_set[i],cmap='gray')
plt.subplot(1,2,2)
plt.imshow(X_train[i][0],cmap='gray')
# import the necessary packages
from keras.models import Sequential
from keras.layers.convolutional import Convolution2D
from keras.layers.convolutional import MaxPooling2D
from keras.layers.core import Activation
from keras.layers.core import Flatten
from keras.layers.core import Dense
from keras.optimizers import SGD
# this code comes from http://www.pyimagesearch.com/2016/08/01/lenet-convolutional-neural-network-in-python/
class LeNet:
@staticmethod
def build(width, height, depth, classes, weightsPath=None):
# initialize the model
model = Sequential()
# first set of CONV => RELU => POOL
model.add(Convolution2D(20, 5, 5, border_mode="same",input_shape=(depth, height, width)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
# second set of CONV => RELU => POOL
model.add(Convolution2D(50, 5, 5, border_mode="same"))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
# set of FC => RELU layers
model.add(Flatten())
model.add(Dense(500))
model.add(Activation("relu"))
# softmax classifier
model.add(Dense(classes))
model.add(Activation("softmax"))
# return the constructed network architecture
return model
model = LeNet.build(width=32, height=32, depth=1, classes=2)
opt = SGD(lr=0.01)  # Stochastic gradient descent with learning rate 0.01
model.compile(loss="categorical_crossentropy", optimizer=opt,metrics=["accuracy"])
model.fit(X_train, Y_train, batch_size=10, nb_epoch=300,verbose=1)
y_pred = model.predict_classes(X_test)
print(y_pred)
print(test_labels)
real_world_set=[]
for i in np.arange(1,73):
filename=path+'images/real_world/'+str(i)+'.png'
real_world_set.append(im.deshear(filename))
fake_label=np.ones(len(real_world_set),dtype='int32')
X_real,Y_real=prep_datas(real_world_set,fake_label)
y_pred = model.predict_classes(X_real)
f=open(path+'images/real_world/labels.txt',"r")
lines=f.readlines()
result=[]
for x in lines:
result.append((x.split(' ')[1]).replace('\n',''))
f.close()
result=np.array([int(x) for x in result])
result[result>1]=1
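# rough agreement check between the network's predictions and Peter's labels
# (assuming one label per real-world image)
print(np.mean(y_pred.flatten() == result))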
plt.plot(y_pred,'o')
plt.plot(2*result,'o')
plt.ylim(-0.5,2.5);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get the Data
Step2: Explore the Data
Step4: Implement Preprocess Functions
Step6: One-hot encode
Step7: Randomize Data
Step8: Check Point
Step12: Build the network
Step14: Convolution and Max Pooling Layer
Step16: Flatten Layer
Step18: Fully-Connected Layer
Step20: Output Layer
Step22: Create Convolutional Model
Step24: Train the Neural Network
Step26: Show Stats
Step27: Hyperparameters
Step28: Train on a Single CIFAR-10 Batch
Step29: Fully Train the Model
Step31: Checkpoint
|
<ASSISTANT_TASK:>
Python Code:
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
import helper
import numpy as np
from sklearn.preprocessing import LabelBinarizer
import pickle
import tensorflow as tf
import random
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
cifar10_dataset_folder_path = 'cifar-10-batches-py'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('cifar-10-python.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
'cifar-10-python.tar.gz',
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open('cifar-10-python.tar.gz') as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
return x / 255
tests.test_normalize(normalize)
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
lb = LabelBinarizer()
lb.fit(x)
lb.classes_ = list(range(0, 10))
return lb.transform(x)
tests.test_one_hot_encode(one_hot_encode)
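# quick illustrative check of the encoder on a few sample labels
print(one_hot_encode([0, 3, 9]))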
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a bach of image input
: image_shape: Shape of the images
: return: Tensor for image input.
return tf.placeholder(tf.float32, shape=[None, *image_shape], name="x")
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
return tf.placeholder(tf.float32, shape=[None, n_classes], name="y")
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
return tf.placeholder(tf.float32, name="keep_prob")
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
weights = tf.Variable(tf.truncated_normal([*conv_ksize, int(x_tensor.get_shape()[3]), conv_num_outputs], mean=0.0, stddev=0.05, dtype=tf.float32))
biases = tf.Variable(tf.constant(0, shape=[conv_num_outputs], dtype=tf.float32))
x = tf.nn.conv2d(x_tensor, weights, strides=[1, *conv_strides, 1], padding='SAME')
x = tf.nn.bias_add(x, biases)
x = tf.nn.relu(x)
x = tf.nn.max_pool(x, ksize=[1, *pool_ksize, 1], strides=[1, *pool_strides, 1], padding='SAME')
return x
tests.test_con_pool(conv2d_maxpool)
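# Shape bookkeeping for the model built below (assuming 'SAME' padding and the strides
# used in conv_net): a 32x32 input keeps its spatial size through each convolution, and
# every 2x2 max pool with stride 2 halves it: 32 -> 16 -> 8 -> 4. With 96 filters in the
# last conv layer, flattening therefore yields 4 * 4 * 96 = 1536 features.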
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
return tf.contrib.layers.flatten(x_tensor)
tests.test_flatten(flatten)
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
return tf.contrib.layers.fully_connected(x_tensor, num_outputs, activation_fn=tf.nn.relu)
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
return tf.contrib.layers.fully_connected(x_tensor, num_outputs, activation_fn=None)
tests.test_output(output)
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# Apply Convolution and Max Pool layers
conv_num_outputs = [8*3, 16*3, 32*3]
conv_ksize, conv_strides, pool_ksize, pool_strides, num_outputs = (3,3), (1,1), (2,2), (2,2), 512
conv = conv2d_maxpool(x, conv_num_outputs[0], conv_ksize, conv_strides, pool_ksize, pool_strides)
conv = conv2d_maxpool(conv, conv_num_outputs[1], conv_ksize, conv_strides, pool_ksize, pool_strides)
conv = conv2d_maxpool(conv, conv_num_outputs[2], conv_ksize, conv_strides, pool_ksize, pool_strides)
# Apply a Flatten Layer
flattened = flatten(conv)
# Apply Fully Connected Layers
fully = fully_conn(flattened, num_outputs)
fc_layer = tf.nn.dropout(fully, keep_prob)
fully = fully_conn(fc_layer, num_outputs)
fc_layer = tf.nn.dropout(fully, keep_prob)
return output(fc_layer, 10)
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})
pass
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
validation_accuracy = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
print('Loss: {:>10.4f}, Accuracy: {:.4f}'.format(loss, validation_accuracy))
pass
epochs = 20
batch_size = 512
keep_probability = 0.5
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2.1 Get the Main Delay Data for 2014 from Downloaded zip Files
Step2: The columns we now have in the dataset are
Step3: However, we just need a subset of these columns for our analysis
Step4: 2.2 Combine Data with External Aircraft Data
Step5: We can now join these two tables based on their common ID that is saved in the column MFR MDL CODE of the master table and in the index of the aircraft table respectively.
Step6: We now join this aircraft information with our delay data and extend the original dataset with the two new features
Step7: 2.3 Combine Data with External Airport Location Data
Step8: Just 0.7% of alll flight origins could not be located, so the merge was quite successful.
Step9: 3 Creation of Data Subsets for Weather Analysis and Predictive Models
Step10: 3.2 Creation of the Prediction Datasets
Step11: Filtering the data for active Airlines only
Step12: A quick check reveals, that filtering was not (really) necessary as all Airlines are still active today.
|
<ASSISTANT_TASK:>
Python Code:
# import required modules for data preparation tasks
import requests, zipfile, StringIO
import pandas as pd
import random
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
import re
import json
import os
# reads all predefined months for a year and merge into one data frame
rawData = pd.DataFrame()
months = ['01', '02', '03', '04', '05', '06', '07', '08', '09', '10', '11', '12']
for m in months:
z = zipfile.ZipFile('cache/{y}{mo}.zip'.format(y=str(2014), mo = m))
rawData = rawData.append(pd.read_csv(z.open(z.namelist()[0])))
print "Downloaded", m
# reset index of complete dataset for delays to prepare merging in next step
rawData.reset_index(inplace=True)
rawData.columns
selectedColumns = [u'index', u'FL_DATE', u'UNIQUE_CARRIER', u'TAIL_NUM', u'FL_NUM',
u'ORIGIN', u'DEST', u'CRS_DEP_TIME', u'DEP_TIME', u'DEP_DELAY', u'TAXI_OUT',
u'WHEELS_OFF', u'WHEELS_ON', u'TAXI_IN', u'CRS_ARR_TIME', u'ARR_TIME', u'ARR_DELAY',
u'CANCELLED', u'DIVERTED', u'CANCELLATION_CODE', u'AIR_TIME', u'DISTANCE',
u'CARRIER_DELAY', u'WEATHER_DELAY', u'NAS_DELAY', u'SECURITY_DELAY', u'LATE_AIRCRAFT_DELAY',
u'ORIGIN_CITY_NAME', u'DEST_CITY_NAME']
rawData = rawData[selectedColumns]
z = zipfile.ZipFile('externalData/AircraftInformation.zip')
# master table containing tail numbers of aircraft
df_master = pd.DataFrame.from_csv(z.open('MASTER.txt'))
# detailed table containing information about manufacturer, age, etc.
df_aircrafts = pd.DataFrame.from_csv(z.open('ACFTREF.txt'))
master = df_master[['MFR MDL CODE', 'YEAR MFR']].reset_index()
aircrafts = df_aircrafts['MFR'].reset_index()
master.columns = ['TAIL_NUM', 'CODE', 'YEAR']
aircrafts.columns = ['CODE', 'MFR']
joined = pd.merge(master, aircrafts, how='left', on='CODE')
delayFinal = rawData[['TAIL_NUM','UNIQUE_CARRIER']]
delayFinal.TAIL_NUM = delayFinal.TAIL_NUM.str.strip('N')
delaymfr = pd.merge(delayFinal, joined, how='left', on=['TAIL_NUM'])
rawData['AIRCRAFT_YEAR'] = delaymfr.YEAR
rawData['AIRCRAFT_MFR'] = delaymfr.MFR
airportLocation = pd.DataFrame.from_csv('externalData/airport_codes_with_geo_name_ids_and_nl_names-2008-04-14.csv', header=None)
usAirports = airportLocation[airportLocation[4]=='US'].reset_index()
# we just need a subsets of the columns (origin, latitude and longitude)
usAirports = usAirports[[0, 5, 6]]
usAirports.columns = ['ORIGIN', 'LAT', 'LONG']
complete2014Data = pd.merge(rawData, usAirports, how='left', on='ORIGIN')
1.0*np.sum(complete2014Data.LAT.isnull())/complete2014Data.shape[0]
complete2014Data.to_csv('cache/complete2014Data.csv')
# load the weather file
weatherFile = os.path.join('data', 'weather_data.json')
with open(weatherFile) as infile:
weatherDict = json.load(infile)
# extract the weather data for new york and boston out of the json file and save it in weather_df
dates = []
frames = []
# create df for weather in new york
for datapoint in weatherDict['JFK']:
date = datapoint['date']
frames.append(pd.DataFrame(datapoint['data'], index=['%s-%s-%s' % (date[0:4], date[4:6], date[6:8])]))
weather_jfk = pd.concat(frames).reset_index()
# create df for weather in boston
for datapoint in weatherDict['BOS']:
date = datapoint['date']
frames.append(pd.DataFrame(datapoint['data'], index=['%s-%s-%s' % (date[0:4], date[4:6], date[6:8])]))
weather_bos = pd.concat(frames).reset_index()
# get just the departures for the John F. Kennedy airport in New York City and Logan airport in Boston
jfk_delays = complete2014Data[complete2014Data.ORIGIN=='JFK']
bos_delays = complete2014Data[complete2014Data.ORIGIN=='BOS']
# merge delays with weather_df created above
jfk_dalayWeather = pd.merge(jfk_delays, weather_jfk, how='left', left_on='FL_DATE', right_on = 'index')
bos_dalayWeather = pd.merge(bos_delays, weather_bos, how='left', left_on='FL_DATE', right_on = 'index')
jfk_bos_comparison = pd.concat([jfk_dalayWeather, bos_dalayWeather]).reset_index()
# save everything in a csv
jfk_bos_comparison.to_csv('cache/jfk_bos_weather_2014.csv', encoding='UTF-8')
#entries to be dropped in the analysis
flight_data_dropped = ['QUARTER', 'DAY_OF_MONTH', 'AIRLINE_ID', 'CARRIER', 'FL_NUM', 'TAIL_NUM']
location_data_dropped = ['ORIGIN_STATE_FIPS', 'ORIGIN_STATE_NM',\
'ORIGIN_WAC', 'DEST_STATE_FIPS', \
'DEST_STATE_NM', 'DEST_WAC']
departure_data_dropped = ['DEP_TIME', 'DEP_DELAY', 'DEP_DELAY_NEW', 'DEP_DEL15', 'DEP_DELAY_GROUP',\
'DEP_TIME_BLK', 'TAXI_OUT', 'WHEELS_OFF']
arrival_data_dropped = ['WHEELS_ON', 'TAXI_IN', 'ARR_TIME', 'ARR_DELAY_NEW',\
'ARR_DELAY_GROUP', 'ARR_TIME_BLK']
cancel_data_dropped = ['CANCELLED','CANCELLATION_CODE', 'DIVERTED']
summaries_dropped = ['CRS_ELAPSED_TIME', 'AIR_TIME', 'FLIGHTS']
cause_delay_dropped = ['CARRIER_DELAY', 'WEATHER_DELAY', 'NAS_DELAY', 'SECURITY_DELAY', 'LATE_AIRCRAFT_DELAY']
gate_return_dropped = ['FIRST_DEP_TIME', 'TOTAL_ADD_GTIME', 'LONGEST_ADD_GTIME']
diverted_data_dropped = ['DIV_AIRPORT_LANDINGS', 'DIV_REACHED_DEST', 'DIV_ACTUAL_ELAPSED_TIME', \
'DIV_ARR_DELAY', 'DIV_DISTANCE', 'DIV1_AIRPORT', 'DIV1_WHEELS_ON', \
'DIV1_TOTAL_GTIME', 'DIV1_LONGEST_GTIME', 'DIV1_WHEELS_OFF', \
'DIV1_TAIL_NUM', 'DIV2_AIRPORT', 'DIV2_WHEELS_ON', \
'DIV2_TOTAL_GTIME', 'DIV2_LONGEST_GTIME', 'DIV2_WHEELS_OFF', \
'DIV2_TAIL_NUM', 'DIV3_AIRPORT', 'DIV3_WHEELS_ON', \
'DIV3_TOTAL_GTIME', 'DIV3_LONGEST_GTIME', 'DIV3_WHEELS_OFF', 'DIV3_TAIL_NUM', \
'DIV4_AIRPORT', 'DIV4_WHEELS_ON', 'DIV4_TOTAL_GTIME', 'DIV4_LONGEST_GTIME', \
'DIV4_WHEELS_OFF', 'DIV4_TAIL_NUM', 'DIV5_AIRPORT', 'DIV5_WHEELS_ON', \
'DIV5_TOTAL_GTIME', 'DIV5_LONGEST_GTIME', 'DIV5_WHEELS_OFF', 'DIV5_TAIL_NUM']
other_dropped = ['Unnamed: 93']
columns_dropped = flight_data_dropped + location_data_dropped + departure_data_dropped + arrival_data_dropped \
+ cancel_data_dropped + summaries_dropped + cause_delay_dropped + gate_return_dropped + diverted_data_dropped \
+ other_dropped
def clean(data, list_col):
'''
Creates a dataset by excluding undesirable columns
Parameters:
-----------
df: pandas.DataFrame
Flight dataframe
list_col: <list 'string'>
Comumns to exclude from the data set
'''
# security check to drop only columns that exist
list_col = list(set(list_col) & set(data.columns))
res = data[(data.CANCELLED == 0) & (data.DIVERTED == 0)]
res.drop(list_col, axis=1, inplace=True)
res.dropna(axis = 0, inplace = True)
return res
%%time
data2014 = clean(complete2014Data, columns_dropped)
print data2014.columns
df_active_airlines = pd.read_csv('data/cur_airlines.txt', header=None)
df_active_airlines.columns = [['UNIQUE_CARRIER']];
df_active_airlines.head()
filteredData2014 = data2014.merge(df_active_airlines, on=['UNIQUE_CARRIER', 'UNIQUE_CARRIER'], how='inner')
filteredData2014.count()[0], data2014.count()[0]
# save data to cache
filteredData2014.to_csv('cache/linear_model_data.csv')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Post-training dynamic range quantization
Step2: Train a TensorFlow model
Step3: For the example, since you trained the model for just a single epoch, so it only trains to ~96% accuracy.
Step4: Write it out to a tflite file
Step5: To quantize the model on export, set the optimizations flag to optimize for size
Step6: Note how the resulting file, is approximately 1/4 the size.
Step7: Run the TFLite models
Step8: Test the model on one image
Step9: Evaluate the models
Step10: Repeat the evaluation on the dynamic range quantized model to obtain
Step11: In this example, the compressed model has no difference in the accuracy.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
logging.getLogger("tensorflow").setLevel(logging.DEBUG)
import tensorflow as tf
from tensorflow import keras
import numpy as np
import pathlib
# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
# Define the model architecture
model = keras.Sequential([
keras.layers.InputLayer(input_shape=(28, 28)),
keras.layers.Reshape(target_shape=(28, 28, 1)),
keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),
keras.layers.MaxPooling2D(pool_size=(2, 2)),
keras.layers.Flatten(),
keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
epochs=1,
validation_data=(test_images, test_labels)
)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()
tflite_model_quant_file = tflite_models_dir/"mnist_model_quant.tflite"
tflite_model_quant_file.write_bytes(tflite_quant_model)
!ls -lh {tflite_models_dir}
interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))
interpreter.allocate_tensors()
interpreter_quant = tf.lite.Interpreter(model_path=str(tflite_model_quant_file))
interpreter_quant.allocate_tensors()
test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
interpreter.set_tensor(input_index, test_image)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)
import matplotlib.pylab as plt
plt.imshow(test_images[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[0]),
predict=str(np.argmax(predictions[0]))))
plt.grid(False)
# A helper function to evaluate the TF Lite model using "test" dataset.
def evaluate_model(interpreter):
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# Run predictions on every image in the "test" dataset.
prediction_digits = []
for test_image in test_images:
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the digit with highest
# probability.
output = interpreter.tensor(output_index)
digit = np.argmax(output()[0])
prediction_digits.append(digit)
# Compare prediction results with ground truth labels to calculate accuracy.
accurate_count = 0
for index in range(len(prediction_digits)):
if prediction_digits[index] == test_labels[index]:
accurate_count += 1
accuracy = accurate_count * 1.0 / len(prediction_digits)
return accuracy
print(evaluate_model(interpreter))
print(evaluate_model(interpreter_quant))
import tensorflow_hub as hub
resnet_v2_101 = tf.keras.Sequential([
keras.layers.InputLayer(input_shape=(224, 224, 3)),
hub.KerasLayer("https://tfhub.dev/google/imagenet/resnet_v2_101/classification/4")
])
converter = tf.lite.TFLiteConverter.from_keras_model(resnet_v2_101)
# Convert to TF Lite without quantization
resnet_tflite_file = tflite_models_dir/"resnet_v2_101.tflite"
resnet_tflite_file.write_bytes(converter.convert())
# Convert to TF Lite with quantization
converter.optimizations = [tf.lite.Optimize.DEFAULT]
resnet_quantized_tflite_file = tflite_models_dir/"resnet_v2_101_quantized.tflite"
resnet_quantized_tflite_file.write_bytes(converter.convert())
!ls -lh {tflite_models_dir}/*.tflite
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Assume that we have 4 assets, each with a return series of length 1000. We can use numpy.random.randn to sample returns from a normal distribution.
Step2: These return series can be used to create a wide range of portfolios. We will produce random weight vectors and plot those portfolios. As we want all our capital to be invested, the weights will have to sum to one.
Step3: Next, let's evaluate how these random portfolios would perform by calculating the mean returns and the volatility (here we are using standard deviation). You can see that there is
Step4: We calculate the return using
Step5: Upon plotting these you will observe that they form a characteristic parabolic
Step6: Markowitz optimization and the Efficient Frontier
Step7: In yellow you can see the optimal portfolios for each of the desired returns (i.e. the mus). In addition, we get the weights for one optimal portfolio
Step8: Backtesting on real market data
Step9: Next, we'll create a zipline algorithm by defining two functions
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import cvxopt as opt
from cvxopt import blas, solvers
import pandas as pd
np.random.seed(123)
# Turn off progress printing
solvers.options['show_progress'] = False
## NUMBER OF ASSETS
n_assets = 4
## NUMBER OF OBSERVATIONS
n_obs = 1000
return_vec = np.random.randn(n_assets, n_obs)
plt.plot(return_vec.T, alpha=.4);
plt.xlabel('time')
plt.ylabel('returns');
def rand_weights(n):
''' Produces n random weights that sum to 1 '''
k = np.random.rand(n)
return k / sum(k)
print rand_weights(n_assets)
print rand_weights(n_assets)
def random_portfolio(returns):
'''
Returns the mean and standard deviation of returns for a random portfolio
'''
p = np.asmatrix(np.mean(returns, axis=1))
w = np.asmatrix(rand_weights(returns.shape[0]))
C = np.asmatrix(np.cov(returns))
mu = w * p.T
sigma = np.sqrt(w * C * w.T)
# This recursion reduces outliers to keep plots pretty
if sigma > 2:
return random_portfolio(returns)
return mu, sigma
n_portfolios = 500
means, stds = np.column_stack([
random_portfolio(return_vec)
for _ in xrange(n_portfolios)
])
plt.plot(stds, means, 'o', markersize=5)
plt.xlabel('std')
plt.ylabel('mean')
plt.title('Mean and standard deviation of returns of randomly generated portfolios');
def optimal_portfolio(returns):
n = len(returns)
returns = np.asmatrix(returns)
N = 100
mus = [10**(5.0 * t/N - 1.0) for t in range(N)]
# Convert to cvxopt matrices
S = opt.matrix(np.cov(returns))
pbar = opt.matrix(np.mean(returns, axis=1))
# Create constraint matrices
G = -opt.matrix(np.eye(n)) # negative n x n identity matrix
h = opt.matrix(0.0, (n ,1))
A = opt.matrix(1.0, (1, n))
b = opt.matrix(1.0)
# Calculate efficient frontier weights using quadratic programming
portfolios = [solvers.qp(mu*S, -pbar, G, h, A, b)['x']
for mu in mus]
## CALCULATE RISKS AND RETURNS FOR FRONTIER
returns = [blas.dot(pbar, x) for x in portfolios]
risks = [np.sqrt(blas.dot(x, S*x)) for x in portfolios]
## CALCULATE THE 2ND DEGREE POLYNOMIAL OF THE FRONTIER CURVE
m1 = np.polyfit(returns, risks, 2)
x1 = np.sqrt(m1[2] / m1[0])
# CALCULATE THE OPTIMAL PORTFOLIO
wt = solvers.qp(opt.matrix(x1 * S), -pbar, G, h, A, b)['x']
return np.asarray(wt), returns, risks
weights, returns, risks = optimal_portfolio(return_vec)
plt.plot(stds, means, 'o')
plt.ylabel('mean')
plt.xlabel('std')
plt.plot(risks, returns, 'y-o');
print weights
data = get_pricing(['IBM', 'GLD', 'XOM', 'AAPL', 'MSFT', 'TLT', 'SHY'],
start_date='2005-06-07', end_date='2014-01-27')
data.loc['price', :, :].plot(figsize=(8,5))
plt.ylabel('price in $');
import zipline
from zipline.api import (add_history,
history,
set_slippage,
slippage,
set_commission,
commission,
order_target_percent)
from zipline import TradingAlgorithm
def initialize(context):
'''
Called once at the very beginning of a backtest (and live trading).
Use this method to set up any bookkeeping variables.
The context object is passed to all the other methods in your algorithm.
Parameters
context: An initialized and empty Python dictionary that has been
augmented so that properties can be accessed using dot
notation as well as the traditional bracket notation.
Returns None
'''
# Register history container to keep a window of the last 100 prices.
add_history(100, '1d', 'price')
# Turn off the slippage model
set_slippage(slippage.FixedSlippage(spread=0.0))
# Set the commission model (Interactive Brokers Commission)
set_commission(commission.PerShare(cost=0.01, min_trade_cost=1.0))
context.tick = 0
def handle_data(context, data):
'''
Called when a market event occurs for any of the algorithm's
securities.
Parameters
data: A dictionary keyed by security id containing the current
state of the securities in the algo's universe.
context: The same context object from the initialize function.
Stores the up to date portfolio as well as any state
variables defined.
Returns None
'''
# Allow history to accumulate 100 days of prices before trading
# and rebalance every day thereafter.
context.tick += 1
if context.tick < 100:
return
# Get rolling window of past prices and compute returns
prices = history(100, '1d', 'price').dropna()
returns = prices.pct_change().dropna()
try:
# Perform Markowitz-style portfolio optimization
weights, _, _ = optimal_portfolio(returns.T)
# Rebalance portfolio accordingly
for stock, weight in zip(prices.columns, weights):
order_target_percent(stock, weight)
except ValueError as e:
# Sometimes this error is thrown
# ValueError: Rank(A) < p or Rank([P; A; G]) < n
pass
# Instantiate algorithm
algo = TradingAlgorithm(initialize=initialize,
handle_data=handle_data)
# Run algorithm
results = algo.run(data.swapaxes(2, 0, 1))
results.portfolio_value.plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 0.2 Setup your GCP project
Step2: 0.4 Set DV360 account settings
Step4: Create a new 'sandbox' campaign to use with the rest of the exercises
Step5: 1A) SDF using DBM API (sunset)
Step6: Define a boilerplate targeting template that all Line Items should adhere to
Step7: Modify latest SDF LineItems file and update the columns according to the targeting template
Step8: Upload the output .csv file in the DV360 UI
Step9: Retrieve a list of country codes / IDs from GeoLocation.json for each of our store locations
Step10: Download the latest SDF LineItems (because we've made changes since our last download)
Step11: Modify the contents of the latest SDF output, then save a new CSV with updated Geo Targeting IDs
Step12: Upload the output .csv file in the DV360 UI
Step14: Define a function to send images to the Cloud Vision API
Step15: Run our images through the function, and return a lookup table
Step16: Now we have our new labels from the Vision API, we need to write these into the keywords targeting field
Step18: Upload the output .csv file in the DV360 UI
Step19: Download an updated SDF LineItems file, and if the LineItem ID is in the poor performers list, add a Geo bid multiplier to half the bids (0.5)
Step20: Note the only rows included in the output, are those that we want to modify.
Step21: Solution
Step23: Upload the output .csv file in the DV360 UI
Step24: Define a boilerplate targeting template that all Line Items should adhere to
Step25: Modify latest SDF LineItems file and update the columns according to the targeting template
Step26: Upload the output .csv file in the DV360 UI
Step27: Retrieve a list of country codes / IDs from GeoLocation.json for each of our store locations
Step28: Download the latest SDF LineItems (because we've made changes since our last download)
Step29: Modify the contents of the latest SDF output, then save a new CSV with updated Geo Targeting IDs
Step30: Upload the output .csv file in the DV360 UI
Step32: Define a function to send images to the Cloud Vision API
Step33: Run our images through the function, and return a lookup table (reference)
Step34: View the results of our Vision analysis
Step35: Download the latest SDF LineItems (because we've made changes since our last download)
Step36: Now we have our new labels from the Vision API, we need to write these into the keywords targeting field
Step37: Upload the output .csv file in the DV360 UI
Step38: Download an updated SDF LineItems file, and if the LineItem ID is in the poor performers list, add a Geo bid multiplier to half the bids (0.5)
Step39: Upload the output .csv file in the DV360 UI
Step40: Solution
Step42: Upload the output .csv file in the DV360 UI
Step43: Upload the extended feed for Google Store's new territories
Step45: Create Insertion Order template
Step47: Create LineItem template
Step48: Build our new campaign
Step49: If successful, the result should look similar to the below in DV360
Step50: Apply individual targeting criteria to single entity
Step51: Applying individual targeting criteria to multiple entities
Step53: 2.3 Bulk targeting
Step54: Retrieve list of active LineItems, and Apply bulk targeting
Step56: 2.4 Optimisation (external trigger)
Step57: Create a list of out-of-stock products
Step58: Process optimisation
Step59: 2.5 Optimisation (reporting data)
Step60: Load report to Pandas DataFrame
Step61: Create two lists of poorly performing LineItems
Step62: Process optimisation
Step64: 2.6 Creative upload
Step65: Upload image creatives
Step66: 2.7 Challenge
Step67: Solution
Step73: Link to DV360 UI
|
<ASSISTANT_TASK:>
Python Code:
!pip install google-api-python-client
!pip install google-cloud-vision
import csv
import datetime
import io
import json
import pprint
from google.api_core import retry
from google.cloud import vision
from google.colab import files
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient import discovery
from googleapiclient import http
import pandas as pd
import requests
print('Successfully imported Python libraries!')
API_SCOPES = ['https://www.googleapis.com/auth/doubleclickbidmanager',
'https://www.googleapis.com/auth/display-video',
'https://www.googleapis.com/auth/devstorage.read_only',
'https://www.googleapis.com/auth/cloud-vision']
# Authenticate using user credentials stored in client_secrets.json
client_secrets_file = files.upload()
client_secrets_json = json.loads(next(iter(client_secrets_file.values())))
flow = InstalledAppFlow.from_client_config(client_secrets_json, API_SCOPES)
credentials = flow.run_console()
print('Success!')
# Build DBM Read API service object
dbm_service = discovery.build(
'doubleclickbidmanager', 'v1.1', credentials=credentials)
print('DBM API service object created')
# Build Google Cloud Storage Read API service object
gcs_service = discovery.build('storage', 'v1', credentials=credentials)
print('GCS service object created')
# Create Display Video API service object
display_video_service = discovery.build(
'displayvideo', 'v1', credentials=credentials)
print('Display Video API service object created')
PARTNER_ID = '234340' #@param {type:"string"}
ADVERTISER_ID = '2436036' #@param {type:"string"}
CAMPAIGN_ID = '4258803' #@param {type:"string"}
# For use with legacy DBM API
SDF_VERSION = '5.3' #@param {type:"string"}
# For use with DV360 API
SDF_VERSION_DV360 = 'SDF_VERSION_5_3' #@param {type:"string"}
print('DV360 settings saved!')
YOUR_NAME = 'Matt' #@param {type:"string"}
# Set dates for new campaign
month = datetime.datetime.today().strftime('%m')
day = datetime.datetime.today().strftime('%d')
year = datetime.datetime.today().strftime('%Y')
month_plus30 = (datetime.datetime.today() +
datetime.timedelta(days=30)).strftime('%m')
day_plus30 = (datetime.datetime.today() +
datetime.timedelta(days=30)).strftime('%d')
year_plus30 = (datetime.datetime.today() +
datetime.timedelta(days=30)).strftime('%Y')
def create_campaign(YOUR_NAME):
Creates a new DV360 Campaign object.
campaign_name = f'{year}-{month}-{day} | {YOUR_NAME}'
campaign_obj = {
'displayName': campaign_name,
'entityStatus': 'ENTITY_STATUS_ACTIVE',
'campaignGoal': {
'campaignGoalType': 'CAMPAIGN_GOAL_TYPE_ONLINE_ACTION',
'performanceGoal': {
'performanceGoalType': 'PERFORMANCE_GOAL_TYPE_CPC',
'performanceGoalAmountMicros': 1000000
}
},
'campaignFlight': {
'plannedSpendAmountMicros': 1000000,
'plannedDates': {
'startDate': {
'year': year,
'month': month,
'day': day
},
'endDate': {
'year': year_plus30,
'month': month_plus30,
'day': day_plus30
}
}
},
'frequencyCap': {
'maxImpressions': 10,
'timeUnit': 'TIME_UNIT_DAYS',
'timeUnitCount': 1
}
}
# Create the campaign.
campaign = display_video_service.advertisers().campaigns().create(
advertiserId=ADVERTISER_ID,
body=campaign_obj
).execute()
return campaign
new_campaign = create_campaign(YOUR_NAME)
# Display the new campaign.
CAMPAIGN_ID = new_campaign['campaignId']
print(f"\nCampaign '{new_campaign['name']}' was created."
f"\nCampaign id: '{new_campaign['campaignId']}'"
f"\nCampaign name: '{new_campaign['displayName']}'"
f"\nCampaign status: '{new_campaign['entityStatus']}'")
# Configure the sdf.download request
request_body = {
'fileTypes': ['LINE_ITEM'],
'filterType': 'CAMPAIGN_ID',
'filterIds': [CAMPAIGN_ID],
'version': SDF_VERSION
}
# Make the request to download all SDF LineItems for your new campaign
request = dbm_service.sdf().download(body=request_body)
response = request.execute()
# Load SDF response to Pandas DataFrame
sdf_df = pd.read_csv(io.StringIO(response['lineItems']))
# Show sample (5 rows) of DataFrame
sdf_df.head()
targeting_template = {
'Channel Targeting - Include':
'2580510;',
'Channel Targeting - Exclude':
'2580509;',
'Inventory Source Targeting - Include':
'1;',
'Inventory Source Targeting - Exclude':
'6; 8; 9; 10; 2; 11; 12; 13; 16; 20; 23; 27; 29; 30; 31; 34; 35; 36; '
'38; 43; 46; 50; 51; 56; 60; 63; 67; 74;',
'Digital Content Labels - Exclude':
'G; PG; T;',
'Brand Safety Sensitivity Setting':
'Use custom',
'Brand Safety Custom Settings':
'Adult; Alcohol; Derogatory; Downloads & Sharing; Drugs; Gambling; '
'Politics; Profanity; Religion; Sensitive social issues; Suggestive; '
'Tobacco; Tragedy; Transportation Accidents; Violence; Weapons;'
}
# Overwrite targeting columns using 'targeting_template'
sdf_df['Channel Targeting - Include'] = targeting_template[
'Channel Targeting - Include']
sdf_df['Channel Targeting - Exclude'] = targeting_template[
'Channel Targeting - Exclude']
sdf_df['Inventory Source Targeting - Include'] = targeting_template[
'Inventory Source Targeting - Include']
sdf_df['Inventory Source Targeting - Exclude'] = targeting_template[
'Inventory Source Targeting - Exclude']
sdf_df['Digital Content Labels - Exclude'] = targeting_template[
'Digital Content Labels - Exclude']
sdf_df['Brand Safety Sensitivity Setting'] = targeting_template[
'Brand Safety Sensitivity Setting']
sdf_df['Brand Safety Custom Settings'] = targeting_template[
'Brand Safety Custom Settings']
# Save modified dataframe to remote storage in Colab
sdf_df.to_csv('sdf_update1_controls.csv', index=False)
# Show sample (5 rows) of DataFrame
sdf_df.head()
# Download modified csv to local storage
files.download('sdf_update1_controls.csv')
print(
"Success, check your downloads for a file called 'sdf_update1_controls.csv'"
)
# Actually today-7 to avoid issues with collection
yesterday = datetime.date.today() - datetime.timedelta(7)
# Download public ERF for geolocation info
request = gcs_service.objects().get_media(
bucket='gdbm-public',
object='entity/' + yesterday.strftime('%Y%m%d') + '.0.GeoLocation.json')
response = request.execute()
geolocations = json.loads(response)
print('GeoLocation.json successfully downloaded \n')
print("Here's a random sample of 5 entries:\n")
pprint.pprint(geolocations[0:5])
# Provide a list of store locations
store_locations = ['United Kingdom', 'France', 'Spain', 'Germany', 'Portugal']
# Create a new dictionary to save the country code and ID later on
geo_targeting_ids = {}
# Note: GeoLocation.json is over 800,000 lines
for location in geolocations:
if location['canonical_name'] in store_locations:
geo_targeting_ids[location['country_code']] = location['id']
print(location)
print(geo_targeting_ids)
# Configure the sdf.download request
request_body = {
'fileTypes': ['LINE_ITEM'],
'filterType': 'CAMPAIGN_ID',
'filterIds': [CAMPAIGN_ID],
'version': SDF_VERSION
}
# Make the request to download all SDF LineItems for your new campaign
request = dbm_service.sdf().download(body=request_body)
response = request.execute()
# Load SDF response to Pandas DataFrame
sdf_df = pd.read_csv(io.StringIO(response['lineItems']))
# Show sample (5 rows) of DataFrame
sdf_df.head()
for country in geo_targeting_ids:
target_country = geo_targeting_ids[country]
sdf_df.loc[sdf_df.Name.str.contains(country),
'Geography Targeting - Include'] = f'{target_country};'
# Save modified dataframe to remote storage in Colab
sdf_df.to_csv('sdf_update2_geo.csv', index=False)
# Display updated DataFrame
sdf_df.head()
# Download modified csv to local storage
files.download('sdf_update2_geo.csv')
print("Success, look for a file called 'sdf_update2_geo.csv' in your downloads folder")
# Upload product feed using Colab's upload utility
product_feed_csv = files.upload()
contents = next(iter(product_feed_csv.values())).decode('utf-8')
products = csv.DictReader(io.StringIO(contents))
image_url_list = {}
# Iterate through each row and update dict() with sku:link
for row in products:
image_url_list[row['sku']] = row['image_link']
pprint.pprint(image_url_list)
def vision_analysis(image_url):
Process images using the Cloud Vision API.
# Assign image URL
image = vision.Image()
image.source.image_uri = image_url
# Instantiates a Vision client
client = vision.ImageAnnotatorClient(credentials=credentials)
# Performs label detection on the image file
vision_response = client.label_detection(image=image)
dv360_targeting_keywords = []
labels = []
for label in vision_response.label_annotations:
dv360_targeting_keywords.append(label.description)
label = f'{label.description} ({label.score:.2%})'
labels.append(label)
return dv360_targeting_keywords, labels
imageslookup = {}
for sku, url in image_url_list.items():
imageslookup[sku], vision_labels = vision_analysis(url)
print(f'Analysis completed for: {url}')
print('Labels (confidence score):')
pprint.pprint(vision_labels, indent=4)
print('=' * 30)
print('\n\nLookup table:')
pprint.pprint(imageslookup, indent=4)
# Configure the sdf.download request
request_body = {
'fileTypes': ['LINE_ITEM'],
'filterType': 'CAMPAIGN_ID',
'filterIds': [CAMPAIGN_ID],
'version': SDF_VERSION
}
request = dbm_service.sdf().download(body=request_body)
response = request.execute()
# Load SDF response to Pandas DataFrame
sdf_df = pd.read_csv(io.StringIO(response['lineItems']))
for product in imageslookup:
sdf_df.loc[sdf_df.Name.str.contains(product),
'Keyword Targeting - Include'] = ';'.join(
imageslookup[product]).lower()
# Save modified dataframe to remote storage in Colab
sdf_df.to_csv('sdf_update3_keywords.csv', index=False)
# Show sample (5 rows) of DataFrame
sdf_df.head()
# Download modified csv to local storage
files.download('sdf_update3_keywords.csv')
print("Success, look for the file called 'sdf_update3_keywords.csv' in your downloads folder")
# Define DV360 report definition (i.e. metrics and filters)
report_definition = {
'params': {
'type': 'TYPE_GENERAL',
'metrics': [
'METRIC_IMPRESSIONS', 'METRIC_CLICKS', 'METRIC_CTR',
'METRIC_REVENUE_ADVERTISER'
],
'groupBys': [
'FILTER_ADVERTISER', 'FILTER_INSERTION_ORDER', 'FILTER_LINE_ITEM',
'FILTER_ADVERTISER_CURRENCY'
],
'filters': [{
'type': 'FILTER_ADVERTISER',
'value': ADVERTISER_ID
}],
},
'metadata': {
'title': 'DV360 Automation API-generated report',
'dataRange': 'LAST_90_DAYS',
'format': 'csv'
},
'schedule': {
'frequency': 'ONE_TIME'
}
}
# Create new query using report definition
operation = dbm_service.queries().createquery(body=report_definition).execute()
pprint.pprint(operation)
# Runs the given Queries.getquery request, retrying with an exponential
# backoff. Returns completed operation. Will raise an exception if the
# operation takes more than five hours to complete.
@retry.Retry(
predicate=retry.if_exception_type(Exception),
initial=5,
maximum=60,
deadline=18000)
def check_get_query_completion(getquery_request):
Queries metadata to check for completion.
completion_response = getquery_request.execute()
pprint.pprint(completion_response)
if completion_response['metadata']['running']:
raise Exception('The operation has not completed.')
return completion_response
getquery_request = dbm_service.queries().getquery(queryId=operation['queryId'])
getquery_response = check_get_query_completion(getquery_request)
report_url = getquery_response['metadata'][
'googleCloudStoragePathForLatestReport']
# Use skipfooter to remove report footer from data
report_df = pd.read_csv(report_url, skipfooter=16, engine='python')
report_df.head(10)
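# Illustrative (assumes the 'Clicks' and 'Impressions' columns are present in
# the downloaded report): derive a CTR (%) column locally instead of relying
# on the report's pre-formatted METRIC_CTR column.
report_df['CTR (%)'] = report_df['Clicks'] / report_df['Impressions'] * 100
report_df[['Impressions', 'Clicks', 'CTR (%)']].head()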
# Define our 'KPIs'
ctr_target = 0.15
imp_threshold = 10000
# Convert IDs to remove decimal point, then string
report_df['Line Item ID'] = report_df['Line Item ID'].apply(int)
poor_performers = report_df.query(
'Impressions > @imp_threshold & (Clicks / Impressions)*100 < @ctr_target')
# Convert results to Python list
poor_performers = list(poor_performers['Line Item ID'])
print(f'There are {len(poor_performers)} LineItems with a CTR'
f' < {ctr_target}% and over {imp_threshold} impressions:'
f'\n{poor_performers}')
# Configure the sdf.download request
request_body = {
'fileTypes': ['LINE_ITEM'],
'filterType': 'CAMPAIGN_ID',
'filterIds': ['1914007'],
'version': SDF_VERSION
}
request = dbm_service.sdf().download(body=request_body)
response = request.execute()
# Load SDF response to Pandas DataFrame
sdf_df = pd.read_csv(io.StringIO(response['lineItems']))
for li in poor_performers:
geo = sdf_df.loc[sdf_df['Line Item Id'] == li,
'Geography Targeting - Include'].iloc[0]
sdf_df.loc[sdf_df['Line Item Id'] == li,
'Bid Multipliers'] = f'(geo; {geo} 0.5;);'
# Save modified dataframe to remote storage in Colab
sdf_df.to_csv('sdf_update4_bidmultipliers.csv', index=False)
# Display updated DataFrame
sdf_df.head()
files.download('sdf_update4_bidmultipliers.csv')
print('Success, your new SDF file has been downloaded')
#TODO
# Format today-2 in required date format
yesterday = (datetime.date.today() - datetime.timedelta(2)).strftime('%Y%m%d')
# Download ERF for Language.json from public GCS bucket
request = gcs_service.objects().get_media(
bucket='gdbm-public', object='entity/' + yesterday + '.0.Language.json')
response = request.execute()
languages = json.loads(response)
language_targets = ['en', 'fr']
lang_targeting_ids = {}
# Search language.json for language targets 'en' and 'fr'
for lang in languages:
if lang['code'] in language_targets:
lang_targeting_ids[lang['code']] = lang['id']
print(lang)
print(lang_targeting_ids)
# Define targeting template
targeting_template = {
'Affinity & In Market Targeting - Include': '4569529;4586809;4497529;',
}
# Configure the sdf.download request
request_body = {
'fileTypes': ['LINE_ITEM'],
'filterType': 'CAMPAIGN_ID',
'filterIds': [CAMPAIGN_ID],
'version': SDF_VERSION
}
request = dbm_service.sdf().download(body=request_body)
response = request.execute()
# Load SDF response to Pandas DataFrame
sdf_df = pd.read_csv(io.StringIO(response['lineItems']))
# Update DataFrame with Language and Audience targeting
sdf_df.loc[sdf_df.Name.str.contains('GB'),
'Language Targeting - Include'] = f"{lang_targeting_ids['en']};"
sdf_df.loc[sdf_df.Name.str.contains('FR'),
'Language Targeting - Include'] = f"{lang_targeting_ids['fr']};"
sdf_df['Affinity & In Market Targeting - Include'] = targeting_template[
'Affinity & In Market Targeting - Include']
# Save modified dataframe to remote storage in Colab
sdf_df.to_csv('sdf_update5_challenge.csv', index=False)
# Display updated DataFrame
sdf_df.head()
# Download file to disk using Colab syntax
files.download('sdf_update5_challenge.csv')
print("Success, check your downloads for a file called 'sdf_update5_challenge.csv'")
def download_sdf(request_body):
Download sdf .zip, extract .csv files, load 'SDF-LineItems.csv' to Pandas DataFrame.
# Create the sdfdownloadtask
sdf_operation = display_video_service.sdfdownloadtasks().create(
body=sdf_body).execute()
print(f'Operation {sdf_operation["name"]} was created.')
# Configure the operations.get request
get_request = display_video_service.sdfdownloadtasks().operations().get(
name=sdf_operation['name'])
# Runs the given operations.get request, retrying with an exponential
# backoff. Returns completed operation. Will raise an exception if the
# operation takes more than five hours to complete.
@retry.Retry(predicate=retry.if_exception_type(Exception),
initial=5, maximum=60, deadline=18000)
def check_sdf_downloadtask_completion(get_request):
operation = get_request.execute()
if 'done' not in operation:
raise Exception('The operation has not completed.')
return operation
# Get current status of operation with exponential backoff retry logic
operation = check_sdf_downloadtask_completion(get_request)
# Check if the operation finished with an error and return
if 'error' in operation:
raise Exception(f'The operation finished in error with code {operation["error"]["code"]} {operation["error"]["message"]}')
print('The operation completed successfully.')
  print(f'Resource {operation["response"]["resourceName"]} was created.')
# Extract download file resource name to use in download request
resource_name = operation['response']['resourceName']
# Configure the Media.download request
dowload_request = display_video_service.media().download_media(
resourceName=resource_name)
output_file = f"{resource_name.replace('/','-')}.zip"
# Create output stream for downloaded file
outstream = io.FileIO(output_file, mode='wb')
# Make downloader object
downloader = http.MediaIoBaseDownload(outstream, dowload_request)
# Download media file in chunks until finished
download_finished = False
while download_finished is False:
_, download_finished = downloader.next_chunk()
print(f'File downloaded to {output_file}')
# Load output into a Pandas dataframe
df = pd.read_csv(output_file, compression='zip')
return df
print('Download SDF function created')
targeting_template = {
'Channel Targeting - Include':
'2580510;',
'Channel Targeting - Exclude':
'2580509;',
'Inventory Source Targeting - Include':
'1;',
'Inventory Source Targeting - Exclude':
'6; 8; 9; 10; 2; 11; 12; 13; 16; 20; 23; 27; 29; 30; 31; 34; 35; 36; '
'38; 43; 46; 50; 51; 56; 60; 63; 67; 74;',
'Digital Content Labels - Exclude':
'G; PG; T;',
'Brand Safety Sensitivity Setting':
'Use custom',
'Brand Safety Custom Settings':
'Adult; Alcohol; Derogatory; Downloads & Sharing; Drugs; Gambling; '
'Politics; Profanity; Religion; Sensitive social issues; Suggestive; '
'Tobacco; Tragedy; Transportation Accidents; Violence; Weapons;'
}
# Configure the sdfdownloadtasks.create request
sdf_body = {
'version': SDF_VERSION_DV360,
'advertiserId': ADVERTISER_ID,
'parentEntityFilter': {
'fileType': ['FILE_TYPE_LINE_ITEM'],
'filterType': 'FILTER_TYPE_CAMPAIGN_ID',
'filterIds': [CAMPAIGN_ID]
}
}
# Fetch updated SDF lineitem
sdf_df = download_sdf(sdf_body)
# Overwrite targeting columns using 'targeting_template'
sdf_df['Channel Targeting - Include'] = targeting_template[
'Channel Targeting - Include']
sdf_df['Channel Targeting - Exclude'] = targeting_template[
'Channel Targeting - Exclude']
sdf_df['Inventory Source Targeting - Include'] = targeting_template[
'Inventory Source Targeting - Include']
sdf_df['Inventory Source Targeting - Exclude'] = targeting_template[
'Inventory Source Targeting - Exclude']
sdf_df['Digital Content Labels - Exclude'] = targeting_template[
'Digital Content Labels - Exclude']
sdf_df['Brand Safety Sensitivity Setting'] = targeting_template[
'Brand Safety Sensitivity Setting']
sdf_df['Brand Safety Custom Settings'] = targeting_template[
'Brand Safety Custom Settings']
# Save modified dataframe to remote storage in Colab
sdf_df.to_csv('sdf_update1_controls.csv', index=False)
# Show sample (5 rows) of DataFrame
sdf_df.head()
# Download modified csv to local storage
files.download('sdf_update1_controls.csv')
print(
"Success, check your downloads for a file called 'sdf_update1_controls.csv'"
)
# Actually today-7 to avoid issues with collection
yesterday = datetime.date.today() - datetime.timedelta(7)
# Download public ERF for geolocation info
request = gcs_service.objects().get_media(
bucket='gdbm-public',
object='entity/' + yesterday.strftime('%Y%m%d') + '.0.GeoLocation.json')
response = request.execute()
geolocations = json.loads(response)
print('GeoLocation.json successfully downloaded \n')
print("Here's a random sample of 5 entries:\n")
pprint.pprint(geolocations[0:5])
# Provide a list of store locations
store_locations = ['United Kingdom', 'France', 'Spain', 'Germany', 'Portugal']
# Create a new dictionary to save the country code and ID later on
geo_targeting_ids = {}
# Note: GeoLocation.json is over 800,000 lines
for location in geolocations:
if location['canonical_name'] in store_locations:
geo_targeting_ids[location['country_code']] = location['id']
print(location)
print(geo_targeting_ids)
# Configure the sdfdownloadtasks.create request
sdf_body = {
'version': SDF_VERSION_DV360,
'advertiserId': ADVERTISER_ID,
'parentEntityFilter': {
'fileType': ['FILE_TYPE_LINE_ITEM'],
'filterType': 'FILTER_TYPE_CAMPAIGN_ID',
'filterIds': [CAMPAIGN_ID]
}
}
sdf_df = download_sdf(sdf_body)
sdf_df.head()
for country in geo_targeting_ids:
target_country = geo_targeting_ids[country]
sdf_df.loc[sdf_df.Name.str.contains(country),
'Geography Targeting - Include'] = f'{target_country};'
# Save modified dataframe to remote storage in Colab
sdf_df.to_csv('sdf_update2_geo.csv', index=False)
# Display updated DataFrame
sdf_df.head()
# Download modified csv to local storage
files.download('sdf_update2_geo.csv')
print("Success, see file 'sdf_update2_geo.csv' in your downloads folder")
# Upload product feed using Colab's upload utility
product_feed_csv = files.upload()
contents = next(iter(product_feed_csv.values())).decode('utf-8')
products = csv.DictReader(io.StringIO(contents))
image_url_list = {}
# Iterate through each row and update dict() with sku:link
for row in products:
image_url_list[row['sku']] = row['image_link']
pprint.pprint(image_url_list)
def vision_analysis(image_url):
Process images using the Cloud Vision API.
# Assign image URL
image = vision.Image()
image.source.image_uri = image_url
# Instantiates a Vision client
client = vision.ImageAnnotatorClient(credentials=credentials)
# Performs label detection on the image file
response = client.label_detection(image=image)
dv360_targeting_keywords = []
vision_labels = []
for label in response.label_annotations:
dv360_targeting_keywords.append(label.description)
label = f'{label.description} ({label.score:.2%})'
vision_labels.append(label)
return dv360_targeting_keywords, vision_labels
print("Vision function created")
imageslookup = {}
for sku, url in image_url_list.items():
imageslookup[sku], vision_labels = vision_analysis(url)
print(f'Analysis completed for: {url}')
print('Labels (confidence score):')
pprint.pprint(vision_labels, indent=4)
print('=' * 30)
print('\n\nLookup table:')
pprint.pprint(imageslookup, indent=4)
# Configure the sdfdownloadtasks.create request
sdf_body = {
'version': SDF_VERSION_DV360,
'advertiserId': ADVERTISER_ID,
'parentEntityFilter': {
'fileType': ['FILE_TYPE_LINE_ITEM'],
'filterType': 'FILTER_TYPE_CAMPAIGN_ID',
'filterIds': [CAMPAIGN_ID]
}
}
sdf_df = download_sdf(sdf_body)
sdf_df.head()
for product in imageslookup:
sdf_df.loc[sdf_df.Name.str.contains(product),
'Keyword Targeting - Include'] = ';'.join(
imageslookup[product]).lower()
# Save modified dataframe to remote storage in Colab
sdf_df.to_csv('sdf_update3_keywords.csv', index=False)
sdf_df.head()
# Download modified csv to local storage
files.download('sdf_update3_keywords.csv')
print("Success, see 'sdf_update3_keywords.csv' in your downloads folder")
# Define DV360 report definition (i.e. metrics and filters)
report_definition = {
'params': {
'type': 'TYPE_GENERAL',
'metrics': [
'METRIC_IMPRESSIONS', 'METRIC_CLICKS', 'METRIC_CTR',
'METRIC_REVENUE_ADVERTISER'
],
'groupBys': [
'FILTER_ADVERTISER', 'FILTER_INSERTION_ORDER', 'FILTER_LINE_ITEM',
'FILTER_ADVERTISER_CURRENCY'
],
'filters': [{
'type': 'FILTER_ADVERTISER',
'value': ADVERTISER_ID
}],
},
'metadata': {
'title': 'DV360 Automation API-generated report',
'dataRange': 'LAST_90_DAYS',
'format': 'csv'
},
'schedule': {
'frequency': 'ONE_TIME'
}
}
# Create new query using report definition
operation = dbm_service.queries().createquery(body=report_definition).execute()
pprint.pprint(operation)
# Runs the given Queries.getquery request, retrying with an exponential
# backoff. Returns completed operation. Will raise an exception if the
# operation takes more than five hours to complete.
@retry.Retry(predicate=retry.if_exception_type(Exception),
initial=5, maximum=60, deadline=18000)
def check_get_query_completion(getquery_request):
response = getquery_request.execute()
pprint.pprint(response)
if response['metadata']['running']:
raise Exception('The operation has not completed.')
return response
getquery_request = dbm_service.queries().getquery(queryId=operation['queryId'])
response = check_get_query_completion(getquery_request)
report_url = response['metadata']['googleCloudStoragePathForLatestReport']
# Use skipfooter to remove report footer from data
report_df = pd.read_csv(report_url, skipfooter=16, engine='python')
report_df.head(10)
# Define our 'KPIs'
ctr_target = 0.15
imp_threshold = 1000
# Convert IDs to remove decimal point, then string
report_df['Line Item ID'] = report_df['Line Item ID'].apply(int)
poor_performers = report_df.query(
'Impressions > @imp_threshold & (Clicks / Impressions)*100 < @ctr_target')
# Convert results to Python list
poor_performers = list(poor_performers['Line Item ID'])
print(f'There are {len(poor_performers)} LineItems with a CTR'
f' < {ctr_target}% and over {imp_threshold} impressions:'
f'\n{poor_performers}')
# Configure the sdfdownloadtasks.create request
sdf_body = {
'version': SDF_VERSION_DV360,
'advertiserId': ADVERTISER_ID,
'parentEntityFilter': {
'fileType': ['FILE_TYPE_LINE_ITEM'],
'filterType': 'FILTER_TYPE_CAMPAIGN_ID',
'filterIds': ['1914007']
}
}
sdf_df = download_sdf(sdf_body)
sdf_df.head()
for li in poor_performers:
geo = sdf_df.loc[sdf_df['Line Item Id'] == li,
'Geography Targeting - Include'].iloc[0]
sdf_df.loc[sdf_df['Line Item Id'] == li,
'Bid Multipliers'] = f'(geo; {geo} 0.5;);'
# Save modified dataframe to remote storage in Colab
sdf_df.to_csv('sdf_update4_bidmultipliers.csv', index=False)
# Display updated DataFrame
sdf_df.head()
files.download('sdf_update4_bidmultipliers.csv')
print('Success, your new SDF file has been downloaded')
#TODO
# Format today-7 in required date format
yesterday = (datetime.date.today() - datetime.timedelta(7)).strftime('%Y%m%d')
# Download ERF for Language.json from public GCS bucket
request = gcs_service.objects().get_media(
bucket='gdbm-public', object='entity/' + yesterday + '.0.Language.json')
response = request.execute()
languages = json.loads(response)
language_targets = ['en', 'fr']
lang_targeting_ids = {}
# Search language.json for language targets 'en' and 'fr'
for lang in languages:
if lang['code'] in language_targets:
lang_targeting_ids[lang['code']] = lang['id']
print(lang)
print(lang_targeting_ids)
# Define targeting template
targeting_template = {
'Affinity & In Market Targeting - Include': '4569529;4586809;4497529;',
}
# Configure the sdfdownloadtasks.create request
sdf_body = {
'version': SDF_VERSION_DV360,
'advertiserId': ADVERTISER_ID,
'parentEntityFilter': {
'fileType': ['FILE_TYPE_LINE_ITEM'],
'filterType': 'FILTER_TYPE_CAMPAIGN_ID',
'filterIds': [CAMPAIGN_ID]
}
}
sdf_df = download_sdf(sdf_body)
# Update DataFrame with Language and Audience targeting
sdf_df.loc[sdf_df.Name.str.contains('GB'),
'Language Targeting - Include'] = f"{lang_targeting_ids['en']};"
sdf_df.loc[sdf_df.Name.str.contains('FR'),
'Language Targeting - Include'] = f"{lang_targeting_ids['fr']};"
sdf_df['Affinity & In Market Targeting - Include'] = targeting_template[
'Affinity & In Market Targeting - Include']
# Save modified dataframe to remote storage in Colab
sdf_df.to_csv('sdf_update5_challenge.csv', index=False)
# Display updated DataFrame
sdf_df.head()
# Download file to disk using Colab syntax
files.download('sdf_update5_challenge.csv')
print("Success, see downloads folder for file 'sdf_update5_challenge.csv'")
request = display_video_service.advertisers().lineItems().list(
advertiserId=ADVERTISER_ID,
filter='entityStatus="ENTITY_STATUS_ACTIVE"',
pageSize=1
)
response = request.execute()
# Check if response is empty.
if not response:
print('Advertiser has no active Line Items')
else:
pprint.pprint(response['lineItems'])
def get_active_lineitems(ADVERTISER_ID, CAMPAIGN_ID):
Returns list of Lineitems with active status.
list_lineitems = display_video_service.advertisers().lineItems().list(
advertiserId=ADVERTISER_ID,
filter=f'entityStatus="ENTITY_STATUS_ACTIVE" AND campaignId="{CAMPAIGN_ID}"',
fields='lineItems(lineItemId,displayName)' # Return only two fields
).execute()
active_lineitems = [li['lineItemId'] for li in list_lineitems['lineItems']]
return active_lineitems
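# Illustrative usage of the helper above (commented out to avoid an extra
# API call here; assumes ADVERTISER_ID and CAMPAIGN_ID are set as earlier):
# print(get_active_lineitems(ADVERTISER_ID, CAMPAIGN_ID))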
# Upload product feed using Colab's upload utility
product_feed_csv = files.upload()
contents = next(iter(product_feed_csv.values())).decode('utf-8')
products = list(csv.DictReader(io.StringIO(contents)))
# Create unique list of country-codes -- set() automatically de dupes
unique_country_codes = set([row['country code'] for row in products])
print(unique_country_codes)
def create_insertion_order(parent_campaign_id, new_io_name):
Creates a new DV360 insertion order object.
# Define our new Insertion Order boilerplate
new_insertion_order = {
'campaignId': parent_campaign_id,
'displayName': new_io_name, # Define naming convention
'entityStatus': 'ENTITY_STATUS_DRAFT',
'pacing': {
'pacingPeriod': 'PACING_PERIOD_DAILY',
'pacingType': 'PACING_TYPE_EVEN',
'dailyMaxMicros': '1000000' # Equiv to $1 or local currency
},
'frequencyCap': {
'unlimited': False,
'timeUnit': 'TIME_UNIT_MONTHS',
'timeUnitCount': 1,
'maxImpressions': 5
},
'performanceGoal': {
'performanceGoalType': 'PERFORMANCE_GOAL_TYPE_CPC',
'performanceGoalAmountMicros': '1000000', # $1 CPM/CPC target
},
'bidStrategy': {
'fixedBid': {
'bidAmountMicros': '0'
},
},
'budget': {
'automationType':
'INSERTION_ORDER_AUTOMATION_TYPE_NONE',
'budgetUnit':
'BUDGET_UNIT_CURRENCY',
'budgetSegments': [{
'budgetAmountMicros':
'30000000', # Equiv to $30 or local currency
'description': 'My first segment',
'dateRange': {
'startDate': {
'year': year,
'month': month,
'day': day
},
'endDate': {
'year': year_plus30,
'month': month_plus30,
'day': day_plus30
}
}
}]
}
}
# API create() request to generate new Insertion Order
newinsertionorder_request = display_video_service.advertisers(
).insertionOrders().create(
advertiserId=ADVERTISER_ID, body=new_insertion_order).execute()
  # Define patch to activate new Insertion Order after creation
patch = {
'entityStatus': 'ENTITY_STATUS_ACTIVE',
}
# API patch() request
display_video_service.advertisers().insertionOrders().patch(
advertiserId=ADVERTISER_ID,
insertionOrderId=newinsertionorder_request['insertionOrderId'],
updateMask='entityStatus',
body=patch).execute()
print(newinsertionorder_request)
return newinsertionorder_request
print('Insertion Order function created')
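# Small illustrative helper (an assumption of this walkthrough, not part of
# the API client): DV360 money fields such as budgets and bids are expressed
# in micros, i.e. 1 unit of currency == 1,000,000 micros.
def to_micros(currency_amount):
  # e.g. to_micros(1.5) -> '1500000'
  return str(int(round(currency_amount * 1000000)))
print(to_micros(30))  # the 30-unit budget segment above, in micros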
def create_lineitem(parent_io_id, new_li_name):
Creates a new DV360 lineitem object.
# Define our new LineItem boilerplate
new_lineitem = {
'advertiserId': ADVERTISER_ID,
'insertionOrderId': parent_io_id,
'displayName': new_li_name, # Define naming convention
'lineItemType': 'LINE_ITEM_TYPE_DISPLAY_DEFAULT',
'entityStatus': 'ENTITY_STATUS_DRAFT',
'flight': {
'flightDateType': 'LINE_ITEM_FLIGHT_DATE_TYPE_INHERITED',
},
'pacing': {
'pacingPeriod': 'PACING_PERIOD_DAILY',
'pacingType': 'PACING_TYPE_EVEN',
'dailyMaxMicros': '1000000'
},
'frequencyCap': {
'timeUnit': 'TIME_UNIT_MONTHS',
'timeUnitCount': 1,
'maxImpressions': 5
},
'partnerRevenueModel': {
'markupType': 'PARTNER_REVENUE_MODEL_MARKUP_TYPE_TOTAL_MEDIA_COST_MARKUP'
},
'budget': {
'budgetAllocationType': 'LINE_ITEM_BUDGET_ALLOCATION_TYPE_UNLIMITED',
'budgetUnit': 'BUDGET_UNIT_CURRENCY'
},
'bidStrategy': {
'fixedBid': {
'bidAmountMicros': '1000000'
}
}
}
# API create() request to generate new Lineitem
newlineitem_request = display_video_service.advertisers().lineItems().create(
advertiserId=ADVERTISER_ID, body=new_lineitem).execute()
  # Define patch to activate new Line Item after creation
patch = {
'entityStatus': 'ENTITY_STATUS_ACTIVE',
}
# API patch() request
display_video_service.advertisers().lineItems().patch(
advertiserId=ADVERTISER_ID,
lineItemId=newlineitem_request['lineItemId'],
updateMask='entityStatus',
body=patch).execute()
print(newlineitem_request)
return newlineitem_request
print('LineItem function created')
%%time
for country_code in unique_country_codes:
# Create() and patch() new Insertion Order
io_name = f'Google Store | {country_code} | Display | Prospecting'
insertionorder = create_insertion_order(CAMPAIGN_ID, io_name)
for row in products:
if country_code in row['country code']:
# Create() and patch() new LineItem
li_name = f"{row['country code']} | {row['title']} | {row['sku']}"
lineitem = create_lineitem(insertionorder['insertionOrderId'], li_name)
print('Process completed')
# Create the page token variable and a dict to accumulate options across pages.
next_page_token = ''
options_dict = {}
while True:
# Request the targeting options list.
response = display_video_service.targetingTypes().targetingOptions().list(
advertiserId=ADVERTISER_ID,
targetingType='TARGETING_TYPE_BROWSER',
pageToken=next_page_token).execute()
# Check if response is empty.
if not response:
print('List request returned no Targeting Options')
break
  # Iterate over retrieved targeting options and add them to the dict.
for option in response['targetingOptions']:
options_dict[
option['targetingOptionId']] = option['browserDetails']['displayName']
# Break out of loop if there is no next page.
if 'nextPageToken' not in response:
break
# Update the next page token.
next_page_token = response['nextPageToken']
pprint.pprint(options_dict)
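# Illustrative lookup (assumes a Chrome entry exists among the retrieved
# browser options): map a display name back to its targetingOptionId.
chrome_ids = [tid for tid, name in options_dict.items() if 'Chrome' in name]
print(chrome_ids)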
# Return list of Lineitems with active status
active_lineitems = get_active_lineitems(ADVERTISER_ID, CAMPAIGN_ID)
# Fetch first Lineitem ID
lineitem_id = active_lineitems[0]
# Create a assigned targeting option object.
assigned_targeting_option_obj = {
'browserDetails': {
'targetingOptionId': '500072'
}
}
# Create the assigned targeting option.
assigned_targeting_option = display_video_service.advertisers().lineItems(
).targetingTypes().assignedTargetingOptions().create(
advertiserId=ADVERTISER_ID,
lineItemId=f'{lineitem_id}',
targetingType='TARGETING_TYPE_BROWSER',
body=assigned_targeting_option_obj
).execute()
# Display the new assigned targeting option.
print(f"Assigned Targeting Option {assigned_targeting_option['name']} created.")
# Create the page token variable.
next_page_token = ''
while True:
# Request the targeting options list.
response = display_video_service.googleAudiences().list(
advertiserId=ADVERTISER_ID,
filter='displayName : "Technology"',
pageToken=next_page_token).execute()
# Check if response is empty.
if not response:
print('List request returned no Targeting Options')
break
# Iterate over retrieved targeting options.
options_dict = {}
for option in response['googleAudiences']:
options_dict[option['googleAudienceId']] = [
option['displayName'], option['googleAudienceType']
]
# Break out of loop if there is no next page.
if 'nextPageToken' not in response:
break
# Update the next page token.
next_page_token = response['nextPageToken']
pprint.pprint(response)
google_audience_id = '92948'
# Return list of Lineitems with active status
active_lineitems = get_active_lineitems(ADVERTISER_ID, CAMPAIGN_ID)
# Create a assigned targeting option object.
assigned_targeting_option_obj = {
'audienceGroupDetails': {
'includedGoogleAudienceGroup': {
'settings': [{
'googleAudienceId': f'{google_audience_id}'
}]
}
}
}
pprint.pprint(assigned_targeting_option_obj)
# Update bulk targeting
for li in active_lineitems:
# Create the assigned targeting option.
assigned_targeting_option = display_video_service.advertisers().lineItems(
).targetingTypes().assignedTargetingOptions().create(
advertiserId=ADVERTISER_ID,
lineItemId=f'{li}',
targetingType='TARGETING_TYPE_AUDIENCE_GROUP',
body=assigned_targeting_option_obj).execute()
# Display the new assigned targeting option.
print(f"Targeting Option {assigned_targeting_option['name']} created.")
def set_default_li_targeting(lineitem_id):
Sets default LineItem targeting according to standard template.
# Define 'Channels'
create_channel_assigned_targetingoptions = []
for targeting_id in ['1777746835', '1778039430']:
create_channel_assigned_targetingoptions.append(
{'channelDetails': {
'channelId': targeting_id,
'negative': False
}})
# Define 'Inventory'
create_inventory_assigned_targetingoptions = []
for targeting_id in ['1']:
create_inventory_assigned_targetingoptions.append(
{'inventorySourceDetails': {'inventorySourceId': targeting_id}}
)
# Define 'Sensitive categories'
create_sensitive_cat_assigned_targetingoptions = []
sensitive_category = [
'1163177997', '1163178297', '118521027123', '118521027843',
'118521028083', '118521028563', '118521028803', '1596254697'
]
for targeting_id in sensitive_category:
create_sensitive_cat_assigned_targetingoptions.append({
'sensitiveCategoryExclusionDetails': {
'excludedTargetingOptionId': targeting_id
}
})
# Define 'Digital content labels'
create_digital_content_assigned_targetingoptions = []
content_rating_tier = ['19875634320', '19875634200', '19875634080']
for targeting_id in content_rating_tier:
create_digital_content_assigned_targetingoptions.append({
'digitalContentLabelExclusionDetails': {
'excludedTargetingOptionId': targeting_id
}
})
  # Construct request
bulk_edit_line_item_request = {
'createRequests': [
{
'targetingType':
'TARGETING_TYPE_CHANNEL',
'assignedTargetingOptions': [
create_channel_assigned_targetingoptions
]
},
{
'targetingType':
'TARGETING_TYPE_INVENTORY_SOURCE',
'assignedTargetingOptions': [
create_inventory_assigned_targetingoptions
]
},
{
'targetingType':
'TARGETING_TYPE_SENSITIVE_CATEGORY_EXCLUSION',
'assignedTargetingOptions': [
create_sensitive_cat_assigned_targetingoptions
]
},
{
'targetingType':
'TARGETING_TYPE_DIGITAL_CONTENT_LABEL_EXCLUSION',
'assignedTargetingOptions': [
create_digital_content_assigned_targetingoptions
]
},
]
}
# Edit the line item targeting.
bulk_request = display_video_service.advertisers().lineItems(
).bulkEditLineItemAssignedTargetingOptions(
advertiserId=ADVERTISER_ID,
lineItemId=lineitem_id,
body=bulk_edit_line_item_request
)
bulk_response = bulk_request.execute()
# Check if response is empty.
# If not, iterate over and display new assigned targeting options.
if not bulk_response:
print('Bulk edit request created no new AssignedTargetingOptions')
else:
for assigned_targeting_option in bulk_response[
'createdAssignedTargetingOptions']:
print(f"Targeting Option {assigned_targeting_option['name']} created.")
print('Lineitem targeting function created')
# Return list of Lineitems with active status
active_lineitems = get_active_lineitems(ADVERTISER_ID, CAMPAIGN_ID)
# Update bulk targeting
for li in active_lineitems:
set_default_li_targeting(li)
def optimise_lineitem(lineitem_id, action):
Optimises lineitem according to given parameter.
lineitem_object = display_video_service.advertisers().lineItems().get(
advertiserId=ADVERTISER_ID,
lineItemId=lineitem_id).execute()
if lineitem_object['entityStatus'] == 'ENTITY_STATUS_ACTIVE':
if action == 'pause':
patch = {
'entityStatus': 'ENTITY_STATUS_PAUSED',
}
lineitem_patched = display_video_service.advertisers().lineItems().patch(
advertiserId=ADVERTISER_ID,
lineItemId=lineitem_id,
updateMask='entityStatus',
body=patch).execute()
print(f"LineItemID {lineitem_patched['name']} was paused")
elif action == 'optimise':
patch = {'bidStrategy': {'fixedBid': {'bidAmountMicros': '500000'},}}
lineitem_patched = display_video_service.advertisers().lineItems().patch(
advertiserId=ADVERTISER_ID,
lineItemId=lineitem_id,
updateMask='bidStrategy',
body=patch).execute()
print(f"{lineitem_patched['name']} was optimised")
else:
print("Not a valid action, must be either 'pause' or 'optimise'")
else:
print(
f"{lineitem_object['name']} already paused/archived - no action taken")
print('Optimisation function created')
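# Illustrative usage (hypothetical LineItem ID shown; real IDs come from the
# stock feed / report lookups below):
# optimise_lineitem('12345678901', 'pause')     # pauses an active LineItem
# optimise_lineitem('12345678901', 'optimise')  # lowers the fixed bid to 500000 micros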
out_of_stock_list = []
products = csv.DictReader(io.StringIO(contents))
# Iterate through each row, checking for products where availability = 0
for row in products:
if row['availability'] == '0':
out_of_stock_list.append(row['sku'])
# This should generate a list of 9 SKUs that are no-longer in stock
print(
f'Found {len(out_of_stock_list)} out-of-stock products {out_of_stock_list}')
# Build a {displayName: lineItemId} lookup of active LineItems, so the SKUs
# (which appear in the LineItem names) can be matched against them.
list_lineitems = display_video_service.advertisers().lineItems().list(
    advertiserId=ADVERTISER_ID,
    filter=f'entityStatus="ENTITY_STATUS_ACTIVE" AND campaignId="{CAMPAIGN_ID}"',
    fields='lineItems(lineItemId,displayName)').execute()
active_lineitems = {li['displayName']: li['lineItemId']
                    for li in list_lineitems.get('lineItems', [])}
# Iterate through out-of-stock list. If sku is found in lineitem's name, pause it.
for product in out_of_stock_list:
  for li_name, li_id in active_lineitems.items():
    if product in li_name:
      optimise_lineitem(li_id, 'pause')
# Define DV360 report definition (i.e. metrics and filters)
report_definition = {
'params': {
'type': 'TYPE_GENERAL',
'metrics': [
'METRIC_IMPRESSIONS', 'METRIC_CLICKS', 'METRIC_CTR',
'METRIC_REVENUE_ADVERTISER'
],
'groupBys': [
'FILTER_ADVERTISER', 'FILTER_INSERTION_ORDER', 'FILTER_LINE_ITEM',
'FILTER_ADVERTISER_CURRENCY'
],
'filters': [{
'type': 'FILTER_ADVERTISER',
'value': ADVERTISER_ID
}],
},
'metadata': {
'title': 'DV360 Automation API-generated report',
'dataRange': 'LAST_90_DAYS',
'format': 'csv'
},
'schedule': {
'frequency': 'ONE_TIME'
}
}
# Create new query using report definition
operation = dbm_service.queries().createquery(body=report_definition).execute()
pprint.pprint(operation)
# Runs the given Queries.getquery request, retrying with an exponential
# backoff. Returns completed operation. Will raise an exception if the
# operation takes more than five hours to complete.
@retry.Retry(
predicate=retry.if_exception_type(Exception),
initial=5,
maximum=60,
deadline=18000)
def check_get_query_completion(getquery_request):
response = getquery_request.execute()
pprint.pprint(response)
if response['metadata']['running']:
raise Exception('The operation has not completed.')
return response
getquery_request = dbm_service.queries().getquery(queryId=operation['queryId'])
response = check_get_query_completion(getquery_request)
# Capture report URL from response
report_url = response['metadata']['googleCloudStoragePathForLatestReport']
# Use skipfooter to remove report footer from data
report_df = pd.read_csv(report_url, skipfooter=16, engine='python')
report_df.head(10)
# Define our 'KPIs'
ctr_to_pause = 0.1
ctr_to_optimise = 0.3
imp_threshold = 5000
# Convert IDs to remove decimal point, then string
report_df['Line Item ID'] = report_df['Line Item ID'].apply(int)
lineitems_to_pause = report_df.query('Impressions > @imp_threshold and (Clicks / Impressions)*100 < @ctr_to_pause')
lineitems_to_reducebid = report_df.query('Impressions > @imp_threshold and (Clicks / Impressions)*100 >= @ctr_to_pause and (Clicks / Impressions)*100 < @ctr_to_optimise')
# Convert results to Python list
lineitems_to_pause = list(lineitems_to_pause['Line Item ID'])
lineitems_to_reducebid = list(lineitems_to_reducebid['Line Item ID'])
print(f'Found {len(lineitems_to_pause)} LineItems with a CTR'
f'< {ctr_to_pause}% and > {imp_threshold} impressions:'
f'{lineitems_to_pause}')
print(f'Found {len(lineitems_to_reducebid)} LineItems with a CTR'
f' between {ctr_to_pause}%-{ctr_to_optimise}%, and > {imp_threshold}'
f'\n impressions: {lineitems_to_reducebid}')
%%time
if lineitems_to_pause:
for lineitem in lineitems_to_pause:
optimise_lineitem(str(lineitem), 'pause')
if lineitems_to_reducebid:
for lineitem in lineitems_to_reducebid:
optimise_lineitem(str(lineitem), 'optimise')
print('Optimisation completed')
def upload_creative_image_asset(asset_url, click_url):
Creates a new DV360 creative object.
# Fetch asset from cloud storage using requests library
asset = requests.get(asset_url)
# Create upload object from http image url
fh = io.BytesIO(asset.content)
media_body = http.MediaIoBaseUpload(fh, mimetype='image/png',
chunksize=1024*1024, resumable=True)
# Extract filename from url path
filename = str(asset_url.rsplit(sep='/', maxsplit=1)[1])
# Create the request body
body = {'filename': filename}
# Upload the asset
asset_request = display_video_service.advertisers().assets().upload(
advertiserId=ADVERTISER_ID, body=body, media_body=media_body).execute()
# Display the new asset media ID
print(f"Asset was created with media ID {asset_request['asset']['mediaId']}")
display_name = f'{filename}'.split(sep='.')[0].lower() + ' 300x250'
# Create a creative object.
creative_obj = {
'displayName':
f'{display_name}',
'entityStatus':
'ENTITY_STATUS_ACTIVE',
'creativeType':
'CREATIVE_TYPE_STANDARD',
'hostingSource':
'HOSTING_SOURCE_HOSTED',
'dimensions': {
'widthPixels': 300,
'heightPixels': 250
},
'assets': [{
'asset': {
'mediaId': asset_request['asset']['mediaId']
},
'role': 'ASSET_ROLE_MAIN'
}],
'exitEvents': [{
'type': 'EXIT_EVENT_TYPE_DEFAULT',
'url': f'{click_url}',
}]
}
creative_request = display_video_service.advertisers().creatives().create(
advertiserId=ADVERTISER_ID,
body=creative_obj
).execute()
# Display the new creative ID
print(f"Creative was created with ID {creative_request['creativeId']}"
f" and DisplayName '{creative_request['displayName']}'")
pprint.pprint(creative_request)
print('Creative upload function defined')
image_assets = {
'https://github.com/google/dv360-automation/blob/master/docs/images/googlestore/pixelbook.png?raw=true':
'https://store.google.com/product/google_pixelbook',
'https://github.com/google/dv360-automation/blob/master/docs/images/googlestore/googlehome.png?raw=true':
'https://store.google.com/product/google_home_hub',
'https://github.com/google/dv360-automation/blob/master/docs/images/googlestore/googlehomemini.png?raw=true':
'https://store.google.com/product/google_home_mini',
'https://github.com/google/dv360-automation/blob/master/docs/images/googlestore/pixel2.png?raw=true':
'https://store.google.com/product/pixel_2',
'https://github.com/google/dv360-automation/blob/master/docs/images/googlestore/chromecastultra.png?raw=true':
'https://store.google.com/product/chromecast_ultra'
}
for asset, click_url in image_assets.items():
upload_creative_image_asset(asset, click_url)
#TODO
%%time
# Load flight information from CSV file
googleairways_routes = files.upload()
contents = next(iter(googleairways_routes.values())).decode('utf-8')
routes = list(csv.DictReader(io.StringIO(contents)))
# Create a unique set (de-duped) of cities from the routes provided
unique_cities = set()
for row in routes:
unique_cities.add(row['airport-city'])
print(unique_cities)
# Create Campaign and Patch()
new_campaign = create_campaign('Google Airways')
print(new_campaign)
# Step through each city within our unique set of cities
for city in unique_cities:
# Create Insertion Order and Patch()
io_name = f'Flights | {city}'
create_io = create_insertion_order(new_campaign['campaignId'], io_name)
# Step through each route(row) of the CSV upload
for row in routes:
if city == row['airport-city']:
# Create LineItems and Patch()
li_name = f"Flight {row['flightno']} | {row['depairport-city']} to {row['arrairport-city']}"
create_lis = create_lineitem(create_io['insertionOrderId'], li_name)
print('Process completed')
# Exclude following campaigns in the reset process
protected_campaigns = ['1914007','985747']
def reset_demo_account():
Reset DV360 account to earlier state.
print('Resetting DV360 account...')
  # Pause any active campaigns (protected campaigns are skipped by pause_campaign)
list_campaigns = display_video_service.advertisers().campaigns().list(
advertiserId=ADVERTISER_ID,
filter='entityStatus="ENTITY_STATUS_ACTIVE"').execute()
results = list_campaigns['campaigns']
print(f'Found {len(results)} active campaigns')
for index, campaign in enumerate(results, start=1):
print(f'Campaign {index} of {len(results)}')
pause_campaign(campaign['campaignId'])
# Reactivate LineItems
list_lineitems = display_video_service.advertisers().lineItems().list(
advertiserId=ADVERTISER_ID,
filter='entityStatus="ENTITY_STATUS_PAUSED" AND campaignId="1914007"'
).execute()
if not list_lineitems:
print('No paused lineitems found')
else:
for index, li in enumerate(list_lineitems['lineItems'], start=1):
print(f"Lineitem {index} of {len(list_lineitems['lineItems'])}")
lineitem_id = li['lineItemId']
activate_lineitem(lineitem_id)
print('Account reset completed')
def delete_campaign(campaign_id):
Updates DV360 campaign object status to deleted.
if campaign_id in protected_campaigns:
print(f'Campaign ID {campaign_id} not deleted (protected campaign)')
else:
try:
display_video_service.advertisers().campaigns().delete(
advertiserId=ADVERTISER_ID, campaignId=campaign_id).execute()
print(f'{campaign_id} successfully deleted')
except Exception:
print('Could not delete campaign')
def archive_campaign(campaign_id):
Updates DV360 campaign object status to archived.
patch = {'entityStatus': 'ENTITY_STATUS_ARCHIVED'}
if campaign_id in protected_campaigns:
print(f'Campaign ID {campaign_id} not archived (protected campaign)')
else:
archive_campaign = display_video_service.advertisers().campaigns().patch(
advertiserId=ADVERTISER_ID,
campaignId=campaign_id,
updateMask='entityStatus',
body=patch).execute()
print(f'Campaign ID {campaign_id} successfully archived')
def pause_campaign(campaign_id):
Updates DV360 campaign object status to paused.
patch = {'entityStatus': 'ENTITY_STATUS_PAUSED'}
if campaign_id in protected_campaigns:
print(f'Campaign ID {campaign_id} not paused (protected campaign)')
else:
display_video_service.advertisers().campaigns().patch(
advertiserId=ADVERTISER_ID,
campaignId=campaign_id,
updateMask='entityStatus',
body=patch).execute()
print(f'Campaign ID {campaign_id} successfully paused')
def activate_lineitem(lineitem_id):
Updates DV360 lineitem object status to active.
patch = {'entityStatus': 'ENTITY_STATUS_ACTIVE'}
display_video_service.advertisers().lineItems().patch(
lineItemId=lineitem_id,
advertiserId=ADVERTISER_ID,
updateMask='entityStatus',
body=patch).execute()
print(f'Lineitem ID {lineitem_id} reactivated')
# @title { display-mode: "form" }
#@markdown Reset DV360 account
# Call main function to intialise reset procedure
reset_demo_account()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: http
Step4: 1. The glass - dataset
Step5: 1.3 Classification
Step6: 2. Mushroom dataset (containing categorical data)
Step7: 2.1 Preprocessing the Dataset
Step8: 2.1.1 Remove columns with only 1 value
Step9: 2.1.2 Handling columns with missing or incorrect values
Step10: 2.1.2.2 Drop column with more than X percent missing values
Step11: 2.1.2.3 Fill missing values with zero / -1
Step12: 2.1.2.4 Fill missing values with backward fill
Step13: 2.1.2.5 Fill missing values with forward fill
Step14: 2.2 Handling columns with categorical data
Step15: 2.2.2 Expanding the columns with categorical data
Step16: 2.4 Classifying the dataset
Step17: 2.5 Improving upon the Classifier
Step18: 3. Understanding complex datasets
Step19: 3.2 Cumulative Explained Variance
Step20: 3.3 Pairwise relationships between the features
|
<ASSISTANT_TASK:>
Python Code:
#This is a notebook containing the code of blogpost http://ataspinar.com/2017/05/26/classification-with-scikit-learn/
#Although I'll also give a short description in this notebook, for a full explanation you should read the blog.
# Lets import some modules for basic computation
import time
import pandas as pd
import numpy as np
import pickle
# Some modules for plotting and visualizing
import seaborn as sns
import matplotlib.pyplot as plt
from IPython.display import display
# And some Machine Learning modules from scikit-learn
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler, LabelEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn import tree
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
#These Classifiers have been commented out because they take too long and do not give more accuracy as the other ones.
#from sklearn.ensemble import AdaBoostClassifier
#from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
#from sklearn.gaussian_process import GaussianProcessClassifier
dict_classifiers = {
"Logistic Regression": LogisticRegression(),
"Nearest Neighbors": KNeighborsClassifier(),
"Linear SVM": SVC(),
"Gradient Boosting Classifier": GradientBoostingClassifier(n_estimators=1000),
"Decision Tree": tree.DecisionTreeClassifier(),
"Random Forest": RandomForestClassifier(n_estimators=1000),
"Neural Net": MLPClassifier(alpha = 1),
"Naive Bayes": GaussianNB(),
#"AdaBoost": AdaBoostClassifier(),
#"QDA": QuadraticDiscriminantAnalysis(),
#"Gaussian Process": GaussianProcessClassifier()
}
def batch_classify(X_train, Y_train, X_test, Y_test, no_classifiers = 5, verbose = True):
This method takes as input the X, Y matrices of the Train and Test set.
And fits them on all of the Classifiers specified in the dict_classifier.
The trained models, and accuracies are saved in a dictionary. The reason to use a dictionary
is because it is very easy to save the whole dictionary with the pickle module.
Usually, the SVM, Random Forest and Gradient Boosting Classifier take quite some time to train.
So it is best to train them on a smaller dataset first and
decide whether you want to comment them out or not based on the test accuracy score.
dict_models = {}
for classifier_name, classifier in list(dict_classifiers.items())[:no_classifiers]:
t_start = time.perf_counter()
classifier.fit(X_train, Y_train)
t_end = time.perf_counter()
t_diff = t_end - t_start
train_score = classifier.score(X_train, Y_train)
test_score = classifier.score(X_test, Y_test)
dict_models[classifier_name] = {'model': classifier, 'train_score': train_score, 'test_score': test_score, 'train_time': t_diff}
if verbose:
print("trained {c} in {f:.2f} s".format(c=classifier_name, f=t_diff))
return dict_models
def label_encode(df, list_columns):
This method label encodes all columns specified in list_columns
for col in list_columns:
le = LabelEncoder()
col_values_unique = list(df[col].unique())
le_fitted = le.fit(col_values_unique)
col_values = list(df[col].values)
le.classes_
col_values_transformed = le.transform(col_values)
df[col] = col_values_transformed
def expand_columns(df, list_columns):
for col in list_columns:
colvalues = df[col].unique()
for colvalue in colvalues:
newcol_name = "{}_is_{}".format(col, colvalue)
df.loc[df[col] == colvalue, newcol_name] = 1
df.loc[df[col] != colvalue, newcol_name] = 0
df.drop(list_columns, inplace=True, axis=1)
def get_train_test(df, y_col, x_cols, ratio):
This method transforms a dataframe into a train and test set, for this you need to specify:
1. the ratio train : test (usually 0.7)
2. the column with the Y_values
mask = np.random.rand(len(df)) < ratio
df_train = df[mask]
df_test = df[~mask]
Y_train = df_train[y_col].values
Y_test = df_test[y_col].values
X_train = df_train[x_cols].values
X_test = df_test[x_cols].values
return df_train, df_test, X_train, Y_train, X_test, Y_test
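# Added note: the random mask above gives only an approximate train:test ratio and changes on
# every run unless np.random.seed is set beforehand.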
def display_dict_models(dict_models, sort_by='test_score'):
cls = [key for key in dict_models.keys()]
test_s = [dict_models[key]['test_score'] for key in cls]
training_s = [dict_models[key]['train_score'] for key in cls]
training_t = [dict_models[key]['train_time'] for key in cls]
df_ = pd.DataFrame(data=np.zeros(shape=(len(cls),4)), columns = ['classifier', 'train_score', 'test_score', 'train_time'])
for ii in range(0,len(cls)):
df_.loc[ii, 'classifier'] = cls[ii]
df_.loc[ii, 'train_score'] = training_s[ii]
df_.loc[ii, 'test_score'] = test_s[ii]
df_.loc[ii, 'train_time'] = training_t[ii]
display(df_.sort_values(by=sort_by, ascending=False))
def display_corr_with_col(df, col):
correlation_matrix = df.corr()
correlation_type = correlation_matrix[col].copy()
abs_correlation_type = correlation_type.apply(lambda x: abs(x))
desc_corr_values = abs_correlation_type.sort_values(ascending=False)
y_values = list(desc_corr_values.values)[1:]
x_values = range(0,len(y_values))
xlabels = list(desc_corr_values.keys())[1:]
fig, ax = plt.subplots(figsize=(8,8))
ax.bar(x_values, y_values)
ax.set_title('The correlation of all features with {}'.format(col), fontsize=20)
ax.set_ylabel('Pearson correlation coefficient [absolute value]', fontsize=16)
plt.xticks(x_values, xlabels, rotation='vertical')
plt.show()
filename_glass = '../datasets/glass.csv'
df_glass = pd.read_csv(filename_glass)
print("This dataset has nrows, ncols: {}".format(df_glass.shape))
display(df_glass.head())
display(df_glass.describe())
y_col_glass = 'Type'
x_cols_glass = list(df_glass.columns.values)
x_cols_glass.remove(y_col_glass)
train_test_ratio = 0.7
df_train, df_test, X_train, Y_train, X_test, Y_test = get_train_test(df_glass, y_col_glass, x_cols_glass, train_test_ratio)
dict_models = batch_classify(X_train, Y_train, X_test, Y_test, no_classifiers = 8)
display_dict_models(dict_models)
filename_mushrooms = '../datasets/mushrooms.csv'
df_mushrooms = pd.read_csv(filename_mushrooms)
display(df_mushrooms.head())
for col in df_mushrooms.columns.values:
print(col, df_mushrooms[col].unique())
for col in df_mushrooms.columns.values:
if len(df_mushrooms[col].unique()) <= 1:
print("Removing column {}, which only contains the value: {}".format(col, df_mushrooms[col].unique()[0]))
df_mushrooms.drop(col, axis=1, inplace=True)  # actually drop the column, as the message and section title state
print("Number of rows in total: {}".format(df_mushrooms.shape[0]))
print("Number of rows with missing values in column 'stalk-root': {}".format(df_mushrooms[df_mushrooms['stalk-root'] == '?'].shape[0]))
df_mushrooms_dropped_rows = df_mushrooms[df_mushrooms['stalk-root'] != '?']
drop_percentage = 0.8
df_mushrooms_dropped_cols = df_mushrooms.copy(deep=True)
df_mushrooms_dropped_cols.loc[df_mushrooms_dropped_cols['stalk-root'] == '?', 'stalk-root'] = np.nan
for col in df_mushrooms_dropped_cols.columns.values:
no_rows = df_mushrooms_dropped_cols[col].isnull().sum()
percentage = no_rows / df_mushrooms_dropped_cols.shape[0]
if percentage > drop_percentage:
del df_mushrooms_dropped_cols[col]
print("Column {} contains {} missing values. This is {} percent. Dropping this column.".format(col, no_rows, percentage))
df_mushrooms_zerofill = df_mushrooms.copy(deep = True)
df_mushrooms_zerofill.loc[df_mushrooms_zerofill['stalk-root'] == '?', 'stalk-root'] = np.nan
df_mushrooms_zerofill.fillna(0, inplace=True)
df_mushrooms_bfill = df_mushrooms.copy(deep = True)
df_mushrooms_bfill.loc[df_mushrooms_bfill['stalk-root'] == '?', 'stalk-root'] = np.nan
df_mushrooms_bfill.fillna(method='bfill', inplace=True)
df_mushrooms_ffill = df_mushrooms.copy(deep = True)
df_mushrooms_ffill.loc[df_mushrooms_ffill['stalk-root'] == '?', 'stalk-root'] = np.nan
df_mushrooms_ffill.fillna(method='ffill', inplace=True)
df_mushrooms_ohe = df_mushrooms.copy(deep=True)
to_be_encoded_cols = df_mushrooms_ohe.columns.values
label_encode(df_mushrooms_ohe, to_be_encoded_cols)
display(df_mushrooms_ohe.head())
## Now let's do the same thing for the other dataframes
df_mushrooms_dropped_rows_ohe = df_mushrooms_dropped_rows.copy(deep = True)
df_mushrooms_zerofill_ohe = df_mushrooms_zerofill.copy(deep = True)
df_mushrooms_bfill_ohe = df_mushrooms_bfill.copy(deep = True)
df_mushrooms_ffill_ohe = df_mushrooms_ffill.copy(deep = True)
label_encode(df_mushrooms_dropped_rows_ohe, to_be_encoded_cols)
label_encode(df_mushrooms_zerofill_ohe, to_be_encoded_cols)
label_encode(df_mushrooms_bfill_ohe, to_be_encoded_cols)
label_encode(df_mushrooms_ffill_ohe, to_be_encoded_cols)
y_col = 'class'
to_be_expanded_cols = list(df_mushrooms.columns.values)
to_be_expanded_cols.remove(y_col)
df_mushrooms_expanded = df_mushrooms.copy(deep=True)
label_encode(df_mushrooms_expanded, [y_col])
expand_columns(df_mushrooms_expanded, to_be_expanded_cols)
display(df_mushrooms_expanded.head())
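# Added note: expand_columns one-hot encodes the features -- one binary indicator column per
# category value -- which avoids the artificial ordering that plain label encoding imposes on
# nominal data.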
## Now let's do the same thing for all other dataframes
df_mushrooms_dropped_rows_expanded = df_mushrooms_dropped_rows.copy(deep = True)
df_mushrooms_zerofill_expanded = df_mushrooms_zerofill.copy(deep = True)
df_mushrooms_bfill_expanded = df_mushrooms_bfill.copy(deep = True)
df_mushrooms_ffill_expanded = df_mushrooms_ffill.copy(deep = True)
label_encode(df_mushrooms_dropped_rows_expanded, [y_col])
label_encode(df_mushrooms_zerofill_expanded, [y_col])
label_encode(df_mushrooms_bfill_expanded, [y_col])
label_encode(df_mushrooms_ffill_expanded, [y_col])
expand_columns(df_mushrooms_dropped_rows_expanded, to_be_expanded_cols)
expand_columns(df_mushrooms_zerofill_expanded, to_be_expanded_cols)
expand_columns(df_mushrooms_bfill_expanded, to_be_expanded_cols)
expand_columns(df_mushrooms_ffill_expanded, to_be_expanded_cols)
dict_dataframes = {
"df_mushrooms_ohe": df_mushrooms_ohe,
"df_mushrooms_dropped_rows_ohe": df_mushrooms_dropped_rows_ohe,
"df_mushrooms_zerofill_ohe": df_mushrooms_zerofill_ohe,
"df_mushrooms_bfill_ohe": df_mushrooms_bfill_ohe,
"df_mushrooms_ffill_ohe": df_mushrooms_ffill_ohe,
"df_mushrooms_expanded": df_mushrooms_expanded,
"df_mushrooms_dropped_rows_expanded": df_mushrooms_dropped_rows_expanded,
"df_mushrooms_zerofill_expanded": df_mushrooms_zerofill_expanded,
"df_mushrooms_bfill_expanded": df_mushrooms_bfill_expanded,
"df_mushrooms_ffill_expanded": df_mushrooms_ffill_expanded
}
y_col = 'class'
train_test_ratio = 0.7
for df_key, df in dict_dataframes.items():
x_cols = list(df.columns.values)
x_cols.remove(y_col)
df_train, df_test, X_train, Y_train, X_test, Y_test = get_train_test(df, y_col, x_cols, train_test_ratio)
dict_models = batch_classify(X_train, Y_train, X_test, Y_test, no_classifiers = 8, verbose=False)
print()
print(df_key)
display_dict_models(dict_models)
print("-------------------------------------------------------")
GDB_params = {
'n_estimators': [100, 500, 1000],
'learning_rate': [0.5, 0.1, 0.01, 0.001],
'criterion': ['friedman_mse', 'mse', 'mae']
}
df_train, df_test, X_train, Y_train, X_test, Y_test = get_train_test(df_glass, y_col_glass, x_cols_glass, 0.6)
for n_est in GDB_params['n_estimators']:
for lr in GDB_params['learning_rate']:
for crit in GDB_params['criterion']:
clf = GradientBoostingClassifier(n_estimators=n_est,
learning_rate = lr,
criterion = crit)
clf.fit(X_train, Y_train)
train_score = clf.score(X_train, Y_train)
test_score = clf.score(X_test, Y_test)
print("For ({}, {}, {}) - train, test score: \t {:.5f} \t-\t {:.5f}".format(n_est, lr, crit[:4], train_score, test_score))
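# Added note: the manual triple loop above can also be expressed with scikit-learn's
# GridSearchCV, which adds cross-validation; an illustrative sketch (not from the original post):
#   from sklearn.model_selection import GridSearchCV
#   grid = GridSearchCV(GradientBoostingClassifier(), GDB_params, cv=3)
#   grid.fit(X_train, Y_train)
#   print(grid.best_params_, grid.best_score_)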
correlation_matrix = df_glass.corr()
plt.figure(figsize=(10,8))
ax = sns.heatmap(correlation_matrix, vmax=1, square=True, annot=True,fmt='.2f', cmap ='GnBu', cbar_kws={"shrink": .5}, robust=True)
plt.title('Correlation matrix between the features', fontsize=20)
plt.show()
display_corr_with_col(df_glass, 'Type')
X = df_glass[x_cols_glass].values
X_std = StandardScaler().fit_transform(X)
pca = PCA().fit(X_std)
var_ratio = pca.explained_variance_ratio_
components = pca.components_
#print(pca.explained_variance_)
plt.plot(np.cumsum(var_ratio))
plt.xlim(0,9,1)
plt.xlabel('Number of Features', fontsize=16)
plt.ylabel('Cumulative explained variance', fontsize=16)
plt.show()
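# Added note: the number of components needed to keep, say, 95% of the variance can be read
# off the same cumulative sum (a small illustrative addition, not part of the original post).
n_components_95 = np.argmax(np.cumsum(var_ratio) >= 0.95) + 1
print("Components needed to explain 95% of the variance: {}".format(n_components_95))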
ax = sns.pairplot(df_glass, hue='Type')
plt.title('Pairwise relationships between the features')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create data
Step2: As you can see, most of the variation is in just one direction. We can formalize this intuition by using PCA to draw out the appropriate axes
Step3: The two red lines define our new informative axis. As you can see, most of the information is stored in how far along the thicker red line the points are.
Step5: Example 2
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.decomposition import PCA
import numpy as np
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D # 3D Plotting
from scipy import stats
import ipywidgets # interactions
%matplotlib inline
np.random.seed(4) # Reproducible results
mean = np.zeros(2)
cov = [[1, 0.9],
[0.9, 1]]
data = np.random.multivariate_normal(mean, cov, size=500)
xs = data[:, 0]
ys = data[:, 1]
plt.scatter(xs, ys, marker=".", color="blue")
pca = PCA(n_components=2)
pca.fit(data)
axis1 = pca.components_[0] # axis of most variation
axis2 = pca.components_[1] # axis of second-most variation
plt.scatter(xs, ys, marker='.', color="blue")
plt.plot(axis1[0] * np.arange(-4, 5), axis1[1] * np.arange(-4, 5), linewidth=4,
color="red")
plt.plot(axis2[0] * np.arange(-1, 2), axis2[1] * np.arange(-1, 2), linewidth=2,
color="red")
from ipywidgets import interact
mean = np.zeros(3)
cov = [[1, 0, 0],
[0, 1, 0.9],
[0, 0.9, 1]]
# here dimension 1 is independent but dimensions 2 and 3 covary
data = np.random.multivariate_normal(mean, cov, 1000)
xs = data[:, 0]
ys = data[:, 1]
zs = data[:, 2]
figure1 = plt.figure()
ax1 = Axes3D(figure1)
ax1.scatter(xs, ys, zs, marker='.')
plt.close(figure1) # prevent double-display with interact
# You must be running the Jupyter notebook for interactions to work.
# It will just be a static image when viewed on GitHub or Nbviewer
@interact(elev=(0, 180), azim=(0, 180))
def plot_point_cloud(elev, azim):
ax1.view_init(elev=elev, azim=azim)
return figure1
# Apply PCA, pick 2 highest principal components and fit data to them
pca = PCA(n_components=2)
pca.fit(data)
axis1 = pca.components_[0]
axis2 = pca.components_[1]
a, b, c = np.cross(axis1, axis2)
# By definition of cross product, <a, b, c> is orthogonal to the plane
# spanned by axis1 and axis2 through (0, 0, 0). The plane's equation is thus:
# ax + by + cz = 0
# or z = -(ax + by) / c
xx, yy = np.meshgrid(np.arange(-4, 4), np.arange(-4, 4))
zz = -(a * xx + b * yy) / c
figure2 = plt.figure()
ax2 = Axes3D(figure2)
ax2.plot_surface(xx, yy, zz, color="red", alpha=0.5)
ax2.scatter(xs, ys, zs, marker='.')
plt.close(figure2)
@interact(elev=(0, 180), azim=(0, 180))
def plot_point_cloud(elev, azim):
ax2.view_init(elev=elev, azim=azim)
return figure2
from numpy.random import RandomState
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_olivetti_faces
from sklearn import decomposition
# Load faces data
faces = fetch_olivetti_faces(shuffle=True, random_state=RandomState(1)).data
n_samples, n_features = faces.shape
# Center the faces
faces_centered = faces - faces.mean(axis=0)
faces_centered -= faces_centered.mean(axis=1) \
.reshape(n_samples, -1)
print("Dataset consists of %d faces" % n_samples)
def plot_gallery(title, images, n_col=3, n_row=2):
Helper function to plot images.
image_shape = (64, 64)
plt.figure(figsize=(2. * n_col, 2.26 * n_row))
plt.suptitle(title, size=16)
for i, comp in enumerate(images):
plt.subplot(n_row, n_col, i + 1)
vmax = max(comp.max(), -comp.min())
plt.imshow(comp.reshape(image_shape), cmap=plt.cm.gray,
interpolation='nearest',
vmin=-vmax, vmax=vmax)
plt.xticks([])
plt.yticks([])
plt.subplots_adjust(0.01, 0.05, 0.99, 0.93, 0.04, 0.)
n_components = 6
# Plot a sample of the input data
plot_gallery("Input", faces_centered[:n_components])
# Apply PCA and plot results
print("Extracting the top %d components" % (n_components))
data = faces_centered
# We use a variant of PCA called Randomized PCA for efficiency. It uses stochastic SVD.
estimator = decomposition.PCA(n_components=n_components, whiten=True, svd_solver='randomized')
estimator.fit(data)
plot_gallery('PCA', estimator.components_[:n_components])
plt.show()
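# Added note: the plotted components are often called "eigenfaces" -- the principal directions
# in pixel space; each centered face can be approximated as a weighted combination of them.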
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problems
Step2: Problems
Step3: Problems
Step4: If we want to look at covariates, we need a new approach.
Step5: Once we've fit the data, we need to do something useful with it. Try to do the following things
Step6: Model selection
|
<ASSISTANT_TASK:>
Python Code:
# Imports for this notebook (added here; the original setup cells were not captured in this excerpt,
# so the exact import style is an assumption inferred from the PyMC2 / lifelines API used below)
from datetime import datetime
import numpy as np
from numpy import log
import pandas as pd
import matplotlib.pyplot as plt
import pymc as pm
import lifelines
%matplotlib inline
running_id = 0
output = [[0]]
with open("E:/output.txt") as file_open:
for row in file_open.read().split("\n"):
cols = row.split(",")
if cols[0] == output[-1][0]:
output[-1].append(cols[1])
output[-1].append(True)
else:
output.append(cols)
output = output[1:]
for row in output:
if len(row) == 6:
row += [datetime(2016, 5, 3, 20, 36, 8, 92165), False]
output = output[1:-1]
def convert_to_days(dt):
day_diff = dt / np.timedelta64(1, 'D')
if day_diff == 0:
return 23.0
else:
return day_diff
df = pd.DataFrame(output, columns=["id", "advert_time", "male","age","search","brand","conversion_time","event"])
df["lifetime"] = pd.to_datetime(df["conversion_time"]) - pd.to_datetime(df["advert_time"])
df["lifetime"] = df["lifetime"].apply(convert_to_days)
df["male"] = df["male"].astype(int)
df["search"] = df["search"].astype(int)
df["brand"] = df["brand"].astype(int)
df["age"] = df["age"].astype(int)
df["event"] = df["event"].astype(int)
df = df.drop('advert_time', 1)
df = df.drop('conversion_time', 1)
df = df.set_index("id")
df = df.dropna(thresh=2)
df.median()
###Parametric Bayes
#Shout out to Cam Davidson-Pilon
## Example fully worked model using toy data
## Adapted from http://blog.yhat.com/posts/estimating-user-lifetimes-with-pymc.html
## Note that we've made some corrections
N = 2500
##Generate some random data
lifetime = pm.rweibull( 2, 5, size = N )
birth = pm.runiform(0, 10, N)
censor = ((birth + lifetime) >= 10)
lifetime_ = lifetime.copy()
lifetime_[censor] = 10 - birth[censor]
alpha = pm.Uniform('alpha', 0, 20)
beta = pm.Uniform('beta', 0, 20)
@pm.observed
def survival(value=lifetime_, alpha = alpha, beta = beta ):
return sum( (1-censor)*(log( alpha/beta) + (alpha-1)*log(value/beta)) - (value/beta)**(alpha))
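# Added explanatory note on the likelihood above: for the Weibull distribution the hazard is
# h(t) = (alpha/beta) * (t/beta)**(alpha - 1) and the log-survival function is -(t/beta)**alpha.
# The censored log-likelihood is sum_i [ delta_i * log h(t_i) + log S(t_i) ], where delta_i = 1
# for an observed event and 0 for a censored observation -- exactly the expression returned,
# since (1 - censor) plays the role of delta.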
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(50000, 30000)
pm.Matplot.plot(mcmc)
mcmc.trace("alpha")[:]
censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist())
alpha = pm.Uniform("alpha", 0,50)
beta = pm.Uniform("beta", 0,50)
@pm.observed
def survival(value=df["lifetime"], alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000)
def weibull_median(alpha, beta):
return beta * ((log(2)) ** ( 1 / alpha))
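# Added note: this follows from the Weibull survival function S(t) = exp(-(t/beta)**alpha);
# setting S(t) = 1/2 and solving for t gives the median t = beta * (ln 2)**(1/alpha).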
plt.hist([weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))])
censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist())
alpha = pm.Uniform("alpha", 0,50)
beta = pm.Uniform("beta", 0,50)
@pm.observed
def survival(value=df["lifetime"], alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000, burn = 3000, thin = 20)
pm.Matplot.plot(mcmc)
#Solution to Q5
## Adjusting the priors impacts the overall result
## If we give a looser, less informative prior then we end up with a broader, shorter distribution
## If we give much more informative priors, then we get a tighter, taller distribution
censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist())
## Note the narrowing of the prior
alpha = pm.Normal("alpha", 1.7, 10000)
beta = pm.Normal("beta", 18.5, 10000)
####Uncomment this to see the result of looser priors
## Note this ends up pretty much the same as we're already very loose
#alpha = pm.Uniform("alpha", 0, 30)
#beta = pm.Uniform("beta", 0, 30)
@pm.observed
def survival(value=df["lifetime"], alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000, burn = 5000, thin = 20)
pm.Matplot.plot(mcmc)
#plt.hist([weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))])
medians = [weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))]
testing_value = 14.9
number_of_greater_samples = sum([x >= testing_value for x in medians])
100 * (number_of_greater_samples / len(medians))
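# Added note: the value above is the percentage of posterior samples whose implied median
# lifetime is at least `testing_value` days, i.e. a Monte Carlo estimate of
# P(median lifetime >= 14.9 | data).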
#Fitting solution
cf = lifelines.CoxPHFitter()
cf.fit(df, 'lifetime', event_col = 'event')
cf.summary
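# Added interpretation note: in a Cox proportional hazards model, exp(coef) for each covariate
# is a hazard ratio -- the multiplicative change in the conversion hazard per one-unit increase
# in that covariate, holding the others fixed.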
#Solution to 1
fig, axis = plt.subplots(nrows=1, ncols=1)
cf.baseline_survival_.plot(ax = axis, title = "Baseline Survival")
regressors = np.array([[1,45,0,0]])
survival = cf.predict_survival_function(regressors)
survival.head()
#Solution to plotting multiple regressors
fig, axis = plt.subplots(nrows=1, ncols=1, sharex=True)
regressor1 = np.array([[1,45,0,1]])
regressor2 = np.array([[1,23,1,1]])
survival_1 = cf.predict_survival_function(regressor1)
survival_2 = cf.predict_survival_function(regressor2)
plt.plot(survival_1,label = "45 year old male (search=0)")
plt.plot(survival_2,label = "23 year old male (search=1)")
plt.legend(loc = "upper right")
odds = survival_1 / survival_2
plt.plot(odds, c = "red")
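# Added note: the red curve is the ratio of the two survival functions over time ("odds" is a
# slight misnomer here); values above 1 mean the first profile is more likely than the second
# to still be unconverted at that time.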
from pyBMA import CoxPHFitter
bmaCox = CoxPHFitter.CoxPHFitter()
bmaCox.fit(df, "lifetime", event_col= "event", priors= [0.5]*4)
bmaCox.summary
#Low probability for everything favours parsimonious models
bmaCox = CoxPHFitter.CoxPHFitter()
bmaCox.fit(df, "lifetime", event_col= "event", priors= [0.1]*4)
bmaCox.summary
#Boost probability of brand
bmaCox = CoxPHFitter.CoxPHFitter()
bmaCox.fit(df, "lifetime", event_col= "event", priors= [0.3, 0.9, 0.001, 0.3])
print(bmaCox.summary)
<END_TASK>
|