<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get one GOTerm object
Step2: The optional 'def' obo field becomes a 'defn' data member in a GOTerm object
Step3: Print one GO term and its optional definition (defn) data member value
Step4: Print the first term in biological process
<ASSISTANT_TASK:>
Python Code:
from os.path import exists
from goatools.obo_parser import GODag
if not exists('go-basic.obo'):
    !wget http://geneontology.org/ontology/go-basic.obo
dag = GODag('go-basic.obo', optional_attrs=['def'])
term = next(iter(dag.values()))
print(dir(term))
pattern = '{ID} {NAME}\n{DEF}'
print(pattern.format(ID=term.item_id, NAME=term.name, DEF=term.defn))
term1 = dag['GO:0000001']
print(pattern.format(ID=term1.item_id, NAME=term1.name, DEF=term1.defn))
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a smoothing Node with IsotropicSmooth
Step2: Now, using iterables to smooth with different FWHM values is as simple as this
Step3: And to wrap it up: we create a workflow, connect the nodes, and finally run the workflow in parallel.
Step4: If we visualize the graph with exec, we can see where the parallelization actually takes place.
Step5: If you look at the structure in the workflow directory, you can also see that for each smoothing kernel a specific folder was created, e.g. _fwhm_16.
Step6: Now, let's visualize the results!
Step7: IdentityInterface (special use case of iterables)
Step8: Now, we can create the IdentityInterface Node
Step9: That's it. Now, we can connect the output fields of this infosource node like any other node to wherever we want.
<ASSISTANT_TASK:>
Python Code:
from nipype import Node, Workflow
from nipype.interfaces.fsl import BET, IsotropicSmooth
# Initiate a skull stripping Node with BET
skullstrip = Node(BET(mask=True,
in_file='/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz'),
name="skullstrip")
isosmooth = Node(IsotropicSmooth(), name='iso_smooth')
isosmooth.iterables = ("fwhm", [4, 8, 16])
# Create the workflow
wf = Workflow(name="smoothflow")
wf.base_dir = "/output"
wf.connect(skullstrip, 'out_file', isosmooth, 'in_file')
# Run it in parallel (one core for each smoothing kernel)
wf.run('MultiProc', plugin_args={'n_procs': 3})
# Visualize the detailed graph
from IPython.display import Image
wf.write_graph(graph2use='exec', format='png', simple_form=True)
Image(filename='/output/smoothflow/graph_detailed.dot.png')
!tree /output/smoothflow -I '*txt|*pklz|report*|*.json|*js|*.dot|*.html'
%matplotlib inline
from nilearn import plotting
plotting.plot_anat(
'/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz', title='original',
display_mode='z', cut_coords=(-50, -35, -20, -5), annotate=False)
plotting.plot_anat(
'/output/smoothflow/skullstrip/sub-01_ses-test_T1w_brain.nii.gz', title='skullstripped',
display_mode='z', cut_coords=(-50, -35, -20, -5), annotate=False)
plotting.plot_anat(
'/output/smoothflow/_fwhm_4/iso_smooth/sub-01_ses-test_T1w_brain_smooth.nii.gz', title='FWHM=4',
display_mode='z', cut_coords=(-50, -35, -20, -5), annotate=False)
plotting.plot_anat(
'/output/smoothflow/_fwhm_8/iso_smooth/sub-01_ses-test_T1w_brain_smooth.nii.gz', title='FWHM=8',
display_mode='z', cut_coords=(-50, -35, -20, -5), annotate=False)
plotting.plot_anat(
'/output/smoothflow/_fwhm_16/iso_smooth/sub-01_ses-test_T1w_brain_smooth.nii.gz', title='FWHM=16',
display_mode='z', cut_coords=(-50, -35, -20, -5), annotate=False)
# First, let's specify the list of input variables
subject_list = ['sub-01', 'sub-02', 'sub-03', 'sub-04', 'sub-05']
session_list = ['run-01', 'run-02']
fwhm_widths = [4, 8]
from nipype import IdentityInterface
infosource = Node(IdentityInterface(fields=['subject_id', 'session_id', 'fwhm_id']),
name="infosource")
infosource.iterables = [('subject_id', subject_list),
('session_id', session_list),
('fwhm_id', fwhm_widths)]
infosource.outputs
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: And then we'll ask it for some help
Step2: The most basic way in which the %mldb magic can help us with MLDB's REST API is by allowing us to type natural-feeling REST commands, like this one, which will list all of the available dataset types
Step3: You can use similar syntax to run PUT, POST and DELETE queries as well.
Step4: And now let's run an SQL query on it
Step5: We can get the results out as a Pandas DataFrame just as easily
Step6: Server-Side Python Magic
<ASSISTANT_TASK:>
Python Code:
%reload_ext pymldb
%mldb help
%mldb GET /v1/types/datasets
%mldb DELETE /v1/datasets/titanic
%mldb loadcsv titanic https://raw.githubusercontent.com/datacratic/mldb-pytanic-plugin/master/titanic_train.csv
%mldb query select * from titanic limit 5
df = %mldb query select * from titanic
type(df)
%%mldb py
# this code will run on the server!
print mldb.perform("GET", "/v1/types/datasets", [], {})["response"]
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Show the true graph
Step2: Run an analysis using SPM
Step3: Click "done", which will take you to the next window to specify the B and C matrices. Your entries should look like this
Step4: This will create a file called "DCM_truemodel.mat" in the same directory as the other files.
<ASSISTANT_TASK:>
Python Code:
import numpy
# use a consistent seed so that everyone has the same data
numpy.random.seed(1000)
import os,sys
%matplotlib inline
import matplotlib.pyplot as plt
sys.path.insert(0,'../utils')
from mkdesign import create_design_singlecondition
from graph_utils import show_graph_from_adjmtx
import math
from nipy.modalities.fmri.hemodynamic_models import spm_hrf,compute_regressor
import scipy.interpolate
results_dir = os.path.abspath("../results")
if not os.path.exists(results_dir):
    os.mkdir(results_dir)
# first let's build the model without the bilinear influence (aka PPI)
# after http://spm.martinpyka.de/?p=81
nregions=5
z=numpy.zeros(nregions)
# intrinsic connectivity
A=numpy.zeros((z.shape[0],z.shape[0]))
A=numpy.diag(numpy.ones(z.shape[0])*-1)
# add some structure
#A=A + numpy.diag(numpy.ones(z.shape[0]-1),k=-1)
A[2,1]=1
A[3,1]=1
B=numpy.zeros(A.shape)
B[2,0]=1
B[4,0]=1
C=numpy.zeros((z.shape[0],1))
C[0]=1
u=0
print (A)
print (B)
print (C)
# we are assuming a 1 second TR for the resulting data
# but the neural data are simulated at a 0.01-second (10 ms) time resolution
stepsize=.01
tslength=300
timepoints=numpy.arange(0,tslength,stepsize)
# create a blocked design
d,design=create_design_singlecondition(blockiness=1.0,deslength=tslength,blocklength=20,offset=20)
u=scipy.interpolate.griddata(numpy.arange(d.shape[0]),d,timepoints,fill_value=0)
def dcm_model(t, z, A, B, C, u):
    ut = numpy.abs(timepoints - t).argmin()
    return (A.dot(z) + u[ut]*B.dot(z) + C.dot(u[ut]).T)[0]

def mk_dcm_dataset(timepoints, z, noise_sd):
    data = numpy.zeros((len(timepoints), len(z)))
    for i in range(1, len(timepoints)):
        data[i, :] = data[i-1, :] + dcm_model(timepoints[i], data[i-1, :], A, B, C, u) + numpy.random.randn(len(z))*noise_sd
    hrf = spm_hrf(stepsize, oversampling=1)
    data_conv = numpy.zeros(data.shape)
    for i in range(len(z)):
        data_conv[:, i] = numpy.convolve(data[:, i], hrf)[:data.shape[0]]
    return data, data_conv
noise_sd=2
data,data_conv=mk_dcm_dataset(timepoints,z,noise_sd)
numpy.savez(os.path.join(results_dir,'dcmdata.npz'),data=data_conv,A=A,B=B,C=C,u=u,d=d,design=design)
plt.subplot(211)
plt.plot(data_conv)
cc=numpy.corrcoef(data_conv.T)
print ('correlation matrix')
print (cc)
from sklearn.covariance import GraphLassoCV
import matplotlib.colors
glasso=GraphLassoCV()
glasso.fit(data_conv)
from pcor_from_precision import pcor_from_precision
pcor=pcor_from_precision(glasso.precision_)
print ('partial r^2 matrix')
print (pcor**2)
plt.figure(figsize=(10,5))
plt.subplot(141)
plt.imshow(A,interpolation='nearest',norm=matplotlib.colors.Normalize(vmin=-1,vmax=1))
plt.title('A mtx')
plt.subplot(142)
plt.imshow(B,interpolation='nearest',norm=matplotlib.colors.Normalize(vmin=-1,vmax=1))
plt.title('B mtx')
plt.subplot(143)
plt.imshow(cc,interpolation='nearest',norm=matplotlib.colors.Normalize(vmin=-1,vmax=1))
plt.title('correlation')
plt.subplot(144)
plt.imshow(pcor**2,interpolation='nearest',norm=matplotlib.colors.Normalize(vmin=-1,vmax=1))
plt.title('partial correlation')
gr=show_graph_from_adjmtx(A,B,C)
from IPython.display import Image
Image(filename='DCM_Amtx.png',retina=True)
Image(filename='DCM_BCmtx.png',retina=True)
Image(filename='DCM_BMS.png',retina=True)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read File Containing Zones
Step2: Extract Budget Information from ZoneBudget Object
Step3: Convert Units
Step4: Alias Names
Step5: Return the Budgets as a Pandas DataFrame
Step6: Slice the multi-index dataframe to retrieve a subset of the budget.
Step7: Look at pumpage (WELLS_OUT) as a percentage of recharge (RECHARGE_IN)
Step8: Pass start_datetime and timeunit keyword arguments to return a dataframe with a datetime multi-index
Step9: Pass index_key to indicate which fields to use in the multi-index (default is "totim"; valid keys are "totim" and "kstpkper")
Step10: Write Budget Output to CSV
Step11: Net Budget
Step12: Plot Budget Components
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
import sys
import platform
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import pandas as pd
import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('pandas version: {}'.format(pd.__version__))
print('flopy version: {}'.format(flopy.__version__))
# Set path to example datafiles
loadpth = os.path.join('..', 'data', 'zonbud_examples')
cbc_f = os.path.join(loadpth, 'freyberg_mlt', 'freyberg.gitcbc')
from flopy.utils import read_zbarray
zone_file = os.path.join(loadpth, 'zonef_mlt')
zon = read_zbarray(zone_file)
nlay, nrow, ncol = zon.shape
fig = plt.figure(figsize=(10, 4))
for lay in range(nlay):
    ax = fig.add_subplot(1, nlay, lay+1)
    im = ax.pcolormesh(zon[lay, :, :])
    cbar = plt.colorbar(im)
    plt.gca().set_aspect('equal')
plt.show()
np.unique(zon)
# Create a ZoneBudget object and get the budget record array
zb = flopy.utils.ZoneBudget(cbc_f, zon, kstpkper=(0, 1096))
zb.get_budget()
# Get a list of the unique budget record names
zb.get_record_names()
# Look at a subset of fluxes
names = ['RECHARGE_IN', 'ZONE_1_IN', 'ZONE_3_IN']
zb.get_budget(names=names)
# Look at fluxes in from zone 2
names = ['RECHARGE_IN', 'ZONE_1_IN', 'ZONE_3_IN']
zones = ['ZONE_2']
zb.get_budget(names=names, zones=zones)
# Look at all of the mass-balance records
names = ['TOTAL_IN', 'TOTAL_OUT', 'IN-OUT', 'PERCENT_DISCREPANCY']
zb.get_budget(names=names)
cmd = flopy.utils.ZoneBudget(cbc_f, zon, kstpkper=(0, 0))
cfd = cmd / 35.3147
inyr = (cfd / (250 * 250)) * 365 * 12
cmdbud = cmd.get_budget()
cfdbud = cfd.get_budget()
inyrbud = inyr.get_budget()
names = ['RECHARGE_IN']
rowidx = np.in1d(cmdbud['name'], names)
colidx = 'ZONE_1'
print('{:,.1f} cubic meters/day'.format(cmdbud[rowidx][colidx][0]))
print('{:,.1f} cubic feet/day'.format(cfdbud[rowidx][colidx][0]))
print('{:,.1f} inches/year'.format(inyrbud[rowidx][colidx][0]))
cmd is cfd
aliases = {1: 'SURF', 2:'CONF', 3: 'UFA'}
zb = flopy.utils.ZoneBudget(cbc_f, zon, totim=[1097.], aliases=aliases)
zb.get_budget()
zon = np.ones((nlay, nrow, ncol), int)
zon[1, :, :] = 2
zon[2, :, :] = 3
aliases = {1: 'SURF', 2:'CONF', 3: 'UFA'}
zb = flopy.utils.ZoneBudget(cbc_f, zon, kstpkper=None, totim=None, aliases=aliases)
df = zb.get_dataframes()
print(df.head())
print(df.tail())
dateidx1 = 1092.
dateidx2 = 1097.
names = ['RECHARGE_IN', 'WELLS_OUT', 'CONSTANT_HEAD']
zones = ['SURF', 'CONF']
df = zb.get_dataframes(names=names)
df.loc[(slice(dateidx1, dateidx2), slice(None)), :][zones]
dateidx1 = 1092.
dateidx2 = 1097.
zones = ['SURF']
# Pull out the individual records of interest
rech = df.loc[(slice(dateidx1, dateidx2), ['RECHARGE_IN']), :][zones]
pump = df.loc[(slice(dateidx1, dateidx2), ['WELLS_OUT']), :][zones]
# Remove the "record" field from the index so we can
# take the difference of the two DataFrames
rech = rech.reset_index()
rech = rech.set_index(['totim'])
rech = rech[zones]
pump = pump.reset_index()
pump = pump.set_index(['totim'])
pump = pump[zones] * -1
# Compute pumping as a percentage of recharge
pump_as_pct = (pump / rech) * 100.
pump_as_pct
dateidx1 = pd.Timestamp('1972-12-01')
dateidx2 = pd.Timestamp('1972-12-06')
names = ['RECHARGE_IN', 'WELLS_OUT', 'CONSTANT_HEAD']
zones = ['SURF', 'CONF']
df = zb.get_dataframes(start_datetime='1970-01-01', timeunit='D', names=names)
df.loc[(slice(dateidx1, dateidx2), slice(None)), :][zones]
df = zb.get_dataframes(index_key='kstpkper')
df.head()
zb = flopy.utils.ZoneBudget(cbc_f, zon, kstpkper=[(0, 0), (0, 1096)])
zb.to_csv(os.path.join(loadpth, 'zonbud.csv'))
# Read the file in to see the contents
fname = os.path.join(loadpth, 'zonbud.csv')
try:
    import pandas as pd
    print(pd.read_csv(fname).to_string(index=False))
except ImportError:
    with open(fname, 'r') as f:
        for line in f.readlines():
            print('\t'.join(line.split(',')))
zon = np.ones((nlay, nrow, ncol), int)
zon[1, :, :] = 2
zon[2, :, :] = 3
aliases = {1: 'SURF', 2:'CONF', 3: 'UFA'}
zb = flopy.utils.ZoneBudget(cbc_f, zon, kstpkper=None, totim=None, aliases=aliases)
cfd = zb.get_budget(names=['STORAGE', 'WELLS'], zones=['SURF', 'UFA'], net=True)
cfd
df = zb.get_dataframes(names=['STORAGE', 'WELLS'], zones=['SURF', 'UFA'], net=True)
df.head(6)
def tick_label_formatter_comma_sep(x, pos):
    return '{:,.0f}'.format(x)

def volumetric_budget_bar_plot(values_in, values_out, labels, **kwargs):
    if 'ax' in kwargs:
        ax = kwargs.pop('ax')
    else:
        ax = plt.gca()

    x_pos = np.arange(len(values_in))
    rects_in = ax.bar(x_pos, values_in, align='center', alpha=0.5)

    x_pos = np.arange(len(values_out))
    rects_out = ax.bar(x_pos, values_out, align='center', alpha=0.5)

    plt.xticks(list(x_pos), labels)
    ax.set_xticklabels(ax.xaxis.get_majorticklabels(), rotation=90)
    ax.get_yaxis().set_major_formatter(mpl.ticker.FuncFormatter(tick_label_formatter_comma_sep))

    ymin, ymax = ax.get_ylim()
    if ymax != 0:
        if abs(ymin) / ymax < .33:
            ymin = -(ymax * .5)
        else:
            ymin *= 1.35
    else:
        ymin *= 1.35
    plt.ylim([ymin, ymax * 1.25])

    for i, rect in enumerate(rects_in):
        label = '{:,.0f}'.format(values_in[i])
        height = values_in[i]
        x = rect.get_x() + rect.get_width() / 2
        y = height + (.02 * ymax)
        vertical_alignment = 'bottom'
        horizontal_alignment = 'center'
        ax.text(x, y, label, ha=horizontal_alignment, va=vertical_alignment, rotation=90)

    for i, rect in enumerate(rects_out):
        label = '{:,.0f}'.format(values_out[i])
        height = values_out[i]
        x = rect.get_x() + rect.get_width() / 2
        y = height + (.02 * ymin)
        vertical_alignment = 'top'
        horizontal_alignment = 'center'
        ax.text(x, y, label, ha=horizontal_alignment, va=vertical_alignment, rotation=90)

    # horizontal line indicating zero
    ax.plot([rects_in[0].get_x() - rects_in[0].get_width() / 2,
             rects_in[-1].get_x() + rects_in[-1].get_width()], [0, 0], "k")

    return rects_in, rects_out
fig = plt.figure(figsize=(16, 5))
times = [2., 500., 1000., 1095.]
for idx, time in enumerate(times):
    ax = fig.add_subplot(1, len(times), idx + 1)
    zb = flopy.utils.ZoneBudget(cbc_f, zon, kstpkper=None, totim=time, aliases=aliases)
    recname = 'STORAGE'
    values_in = zb.get_dataframes(names='{}_IN'.format(recname)).T.squeeze()
    values_out = zb.get_dataframes(names='{}_OUT'.format(recname)).T.squeeze() * -1
    labels = values_in.index.tolist()
    rects_in, rects_out = volumetric_budget_bar_plot(values_in, values_out, labels, ax=ax)
    plt.ylabel('Volumetric rate, in Mgal/d')
    plt.title('totim = {}'.format(time))
plt.tight_layout()
plt.show()
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plotting the results demonstrates the effect of the transformations of the state variables through the monitor. Here, a Raw monitor was used to make the effects clear, but the pre- and post-expressions can be provided to any of the Monitors.
<ASSISTANT_TASK:>
Python Code:
%pylab inline
# Assumes the standard TVB demo setup: tvb.simulator.lab exposes the simulator,
# models, connectivity, coupling, integrators and monitor classes used below
from tvb.simulator.lab import *
sim = simulator.Simulator(
model=models.Generic2dOscillator(),
connectivity=connectivity.Connectivity(load_default=True),
coupling=coupling.Linear(),
integrator=integrators.EulerDeterministic(),
monitors=Raw(pre_expr='V;W;V**2;W-V', post_expr=';;sin(mon);exp(mon)'))
sim.configure()
ts, ys = [], []
for (t, y), in sim(simulation_length=250):
    ts.append(t)
    ys.append(y)
t = numpy.array(ts)
v, w, sv2, ewmv = numpy.array(ys).transpose((1, 0, 2, 3))
figure(figsize=(7, 5), dpi=600)
subplot(311)
plot(t, v[:, 0, 0], 'k')
plot(t, w[:, 0, 0], 'k')
ylabel('$V(t), W(t)$')
grid(True, axis='x')
xticks(xticks()[0], [])
subplot(312)
plot(t, sv2[:, 0, 0], 'k')
ylabel('$\\sin(G(V^2(t)))$')
grid(True, axis='x')
xticks(xticks()[0], [])
subplot(313)
plot(t, ewmv[:, 0, 0], 'k')
ylabel('$\\exp(G(W(t)-V(t)))$')
grid(True, axis='x')
xlabel('Time (ms)')
tight_layout()
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simulator illustration
Step2: ELFI model
Step3: Test the model
Step4: The inference
Step5: We will create a sample of 1000 points from the approximate posterior distribution with ELFI.
Step6: The approximation error after 5000 simulations is still significant due to the small number of simulations. However we can see that the approximate marginal posteriors of R1 and R2 have started to slowly progress towards the true posterior.
<ASSISTANT_TASK:>
Python Code:
import time
import sys
import operator
import numpy as np
import matplotlib.pyplot as plt
from IPython import display
import elfi
%matplotlib inline
# Add the code folder to the path
sys.path.append('code')
import simulator as si
import elfi_operations as ops
s = si.Simulator(200, 0.1, 6, trans_non_c=4.35, death_non_c=0.35, t_obs=np.inf, t_warmup=0,
track_clusters=False)
# Placeholders for population sizes and number of deaths
pops = [[0,0]]
deaths = [0]
for y in range(71):
    s.advance_to(y)
    pops.append([s.n_c, s.n_nc])
plt.plot(pops)
plt.ylabel('Population size')
plt.xlabel('years')
plt.legend(['Compliant population', 'Non-compliant population'])
print('Balance sizes (compliant, non-compliant) = ', s.analytical_means[:-1])
# Observation pediod in years
t_obs = 2
# Some bounds that discard unrealistic initial values to optimize the computation
mean_obs_bounds = (0, 350)
# Upper bounds for t1 and a1
t1_bound = 30
a1_bound = 40
# Upper bound for the largest allowed cluster size within the observation period.
# These are chosen to eliminate outcomes that are clearly different from the observed data early
cluster_size_bound = 80
# Restrict warmup between 15 and 300 years
warmup_bounds = (15, 300)
# Set observed data and a fixed value for delta_2
y0 = ops.get_SF_data(cluster_size_bound)
d2 = 5.95
m = elfi.new_model()
burden = elfi.Prior('normal', 200, 30)
joint = elfi.RandomVariable(ops.JointPrior, burden, mean_obs_bounds, t1_bound, a1_bound)
# DummyPrior takes a marginal from the joint prior
R2 = elfi.Prior(ops.DummyPrior, joint, 0)
R1 = elfi.Prior(ops.DummyPrior, joint, 1)
t1 = elfi.Prior(ops.DummyPrior, joint, 2)
# Turn the epidemiological parameters to rate parameters for the simulator
a2 = elfi.Operation(operator.mul, R2, d2)
a1 = elfi.Operation(ops.Rt_to_a, R1, t1)
d1 = elfi.Operation(ops.Rt_to_d, R1, t1)
# Add the simulator
sim = elfi.Simulator(ops.simulator, burden, a2, d2, a1, d1, 2, cluster_size_bound, warmup_bounds, observed=y0)
# Summaries extracted from the simulator output
clusters = elfi.Summary(ops.pick, sim, 'clusters')
n_obs = elfi.Summary(ops.pick, sim, 'n_obs')
n_clusters = elfi.Summary(ops.pick, sim, 'n_clusters')
largest = elfi.Summary(ops.pick, sim, 'largest')
obs_times = elfi.Summary(ops.pick, sim, 'obs_times')
# Distance
dist = elfi.Discrepancy(ops.distance, n_obs, n_clusters, largest, clusters, obs_times)
# Add some other interesting side products of the simulations
n_oversized = elfi.Operation(ops.pick, sim, 'n_oversized')
elfi.Operation(ops.pick, sim, 'time', name='time')
n_c = elfi.Operation(ops.pick, sim, 'n_c')
n_nc = elfi.Operation(ops.pick, sim, 'n_nc')
m_obs = elfi.Operation(operator.getitem, joint, (slice(None), 3))
elfi.draw(m)
# Generate 3 values from each node of the model
m.generate(3)
# Set up a real time plotting environment for jupyter notebook.
def create_axes():
    plt.figure(figsize=(16, 16))
    ax1 = plt.subplot2grid((2, 2), (0, 0))
    ax2 = plt.subplot2grid((2, 2), (0, 1))
    ax3 = plt.subplot2grid((2, 2), (1, 0))
    ax4 = plt.subplot2grid((2, 2), (1, 1))
    return ax1, ax2, ax3, ax4

def draw(rej, thresholds, ax1, ax2, ax3, ax4):
    display.clear_output(True)
    b = rej.state['samples']

    ax1.clear()
    ax1.set_xlim([-1, 13])
    ax1.set_ylim([-1, t1_bound])
    ax1.set_xlabel('R1')
    ax1.set_ylabel('t1')
    ax1.scatter(b['R1'], b['t1'])

    ax2.clear()
    ax2.set_xlim([-1, 13])
    ax2.set_ylim([0, 0.6])
    ax2.set_xlabel('R1')
    ax2.set_ylabel('R2')
    ax2.scatter(b['R1'], b['R2'])

    ax3.clear()
    ax3.semilogy()
    ax3.set_ylabel('threshold')
    ax3.set_xlabel('num of batches')
    ax3.plot(thresholds)
    ax3.legend(['1', '100', '1000'])

    ax4.clear()
    ax4.set_xlim([0, 13])
    ax4.set_xlabel('R1')
    ax4.hist(b['R1'][np.isfinite(b['dist'])], range=(-1, 14), bins=20)

    display.display(ax2.figure)
# Setup a pool to store selected outputs from the ELFI model nodes (optional)
# In production, it is recommended to use the `elfi.ArrayPool` that uses persisted numpy arrays
pool = elfi.OutputPool(m.parameter_names + ['n_obs', 'n_clusters', 'largest', 'clusters', 'time', 'obs_times',
'n_oversized', 'n_c', 'n_nc', 'm_obs'], name='tbpool')
# Setup the parallel client
# The actual study used ipyparallel client connected to a slurm based computational cluster.
elfi.set_client('multiprocessing')
# In the study, the randomly selected seed and the batch_size were
# seed = 3331714042
# batch_size = 200
# Create the inference object. Using a smaller batch_size for demonstration purposes.
# In production, adjust the max_parallel_batches according to the available memory and cores
rej = elfi.Rejection(m, 'dist', seed=None, batch_size=10, pool=pool, output_names=pool.output_names,
max_parallel_batches=16)
# Set the number of simulations and the sample size (we used 6M simulations in the study)
n_sim = 5000
rej.set_objective(n_samples=1000, n_sim=n_sim)
axes = create_axes()
thresholds = []
while not rej.finished:
    rej.iterate()
    thresholds.append([rej.state['samples']['dist'][ss] for ss in [0, 99, 999]])
    draw(rej, thresholds, *axes)
    print(time.strftime("%d.%m.%Y %H:%M:%S"), ':', rej.state['n_sim'], "simulations generated.")
display.clear_output(True)
sample = rej.extract_result()
sample.plot_pairs();
sample.sample_means
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and prepare the data
Step2: Checking out the data
Step3: Dummy variables
Step4: Scaling target variables
Step5: Splitting the data into training, testing, and validation sets
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Step8: Training the network
Step9: Check out your predictions
Step10: Thinking about your results
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
rides[:24*10].plot(x='dteday', y='cnt')
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
    dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
    rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
    mean, std = data[each].mean(), data[each].std()
    scaled_features[each] = [mean, std]
    data.loc[:, each] = (data[each] - mean)/std
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
class NeuralNetwork(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        # Initialize weights
        self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
                                                        (self.hidden_nodes, self.input_nodes))
        self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
                                                         (self.output_nodes, self.hidden_nodes))
        self.lr = learning_rate

        # Activation function is the sigmoid function
        self.activation_function = lambda x: 1. / (1. + np.exp(-x))

    def train(self, inputs_list, targets_list):
        # Convert inputs list to 2d array
        inputs = np.array(inputs_list, ndmin=2).T
        targets = np.array(targets_list, ndmin=2).T

        ### Forward pass ###
        hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)   # signals into hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)       # signals from hidden layer

        final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)  # signals into final output layer
        final_outputs = final_inputs  # identity activation on the output layer (regression)

        ### Backward pass ###
        # with help of: https://nd101.slack.com/archives/project-1/p1486732677020232?thread_ts=1486731562.020134&cid=C3QVC209L
        output_errors = targets - final_outputs
        del_err_output = output_errors

        # Backpropagated error
        del_err_hidden = np.dot(self.weights_hidden_to_output.T, del_err_output) * hidden_outputs * (1 - hidden_outputs)

        # Update the weights
        self.weights_hidden_to_output += self.lr * np.dot(del_err_output, hidden_outputs.T)
        self.weights_input_to_hidden += self.lr * np.dot(del_err_hidden, inputs.T)

    def run(self, inputs_list):
        # Run a forward pass through the network
        inputs = np.array(inputs_list, ndmin=2).T

        hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)   # signals into hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)       # signals from hidden layer

        final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)  # signals into final output layer
        final_outputs = final_inputs  # identity activation on the output layer (regression)
        return final_outputs
def MSE(y, Y):
    return np.mean((y-Y)**2)
import sys
### Set the hyperparameters here ###
epochs = 6000  # 2700
learning_rate = 0.01  # 0.03
hidden_nodes = 18
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
    # Go through a random batch of 128 records from the training data set
    batch = np.random.choice(train_features.index, size=128)
    for record, target in zip(train_features.loc[batch].values,
                              train_targets.loc[batch]['cnt']):
        network.train(record, target)

    # Printing out the training progress
    train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
    val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
    sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
                     + "% ... Training loss: " + str(train_loss)[:5] \
                     + " ... Validation loss: " + str(val_loss)[:5])

    losses['train'].append(train_loss)
    losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=0.5)
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):

    ##########
    # Unit tests for data loading
    ##########

    def test_data_path(self):
        # Test that file path to dataset has been unaltered
        self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')

    def test_data_loaded(self):
        # Test that data frame loaded
        self.assertTrue(isinstance(rides, pd.DataFrame))

    ##########
    # Unit tests for network functionality
    ##########

    def test_activation(self):
        network = NeuralNetwork(3, 2, 1, 0.5)
        # Test that the activation function is a sigmoid
        self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))

    def test_train(self):
        # Test that weights are updated correctly on training
        network = NeuralNetwork(3, 2, 1, 0.5)
        network.weights_input_to_hidden = test_w_i_h.copy()
        network.weights_hidden_to_output = test_w_h_o.copy()

        network.train(inputs, targets)
        self.assertTrue(np.allclose(network.weights_hidden_to_output,
                                    np.array([[ 0.37275328, -0.03172939]])))
        self.assertTrue(np.allclose(network.weights_input_to_hidden,
                                    np.array([[ 0.10562014,  0.39775194, -0.29887597],
                                              [-0.20185996,  0.50074398,  0.19962801]])))

    def test_run(self):
        # Test correctness of run method
        network = NeuralNetwork(3, 2, 1, 0.5)
        network.weights_input_to_hidden = test_w_i_h.copy()
        network.weights_hidden_to_output = test_w_h_o.copy()
        self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Data Source
Step2: We see that the data set has been successfully imported, though the meaning of some variables in their current form is likely to be unclear to a reader who has not also read Dal Bó and Fréchette's paper. We will clarify these variables and do some other cleaning soon, but first it will be helpful to introduce a description and some theory about the Prisoner's Dilemma itself. Readers familiar with the topic may want to skip over some of this explanatory section.
Step3: The variables in the payoff matrix can be interpreted as follows
Step4: We can see above that a pair of participants play anywhere between a single one-shot round of the prisoner's dilemma up to 69 rounds in a row without changing partners. One set of games with a single partner is referred to as a supergame, such that the supergame variable indicates the observation is for a person playing with their first, second, third, etc. partner since starting the experiment.
Step5: The string variable coop denotes whether the player chose to cooperate or defect, which will be our dependent (y) variable of interest. We convert it now to a binary integer ('cooperate') with 1 representing cooperation, allowing us to calculate cooperation rates later on.
Step6: The other factor we will investigate as influencing cooperation rates is the relative differences in the payoffs, which we saw represented above as T, R, P, and S. However, it can be difficult to compare experiments which may vary on these four different parameters, especially when the stakes may be uniformly higher or lower in any given experimental design. It turns out these differences can be summarized into just two standardized ratios, which are included in our data set
Step7: Above we quickly summarize the mean and range of the gain and loss variables to gain a sense of how they vary.
Step8: Analysis
Step9: The simplest measure of all, the overall cooperation rate across the entire sample, shows that about 40% of all choices are cooperative. This at least tells us there is a good mix of choices among the whole population, but we'll need to dig deeper to understand how other factors impact cooperation levels.
Step10: The upper panel of the figure above charts the cooperation rate grouped by round of a given supergame. It seems to indicate that simply playing repeatedly and gaining experience will increase the rate of cooperation over time, regardless of the factors $\delta$, l and g, which we plan to investigate!
Step11: We create the DataFrame coop1, the new subset of our original data filtered to include only round 1 observations, with which we will work for the rest of this investigation; the output of the shape command tells us that even with this filtering we retain 50,458 observations, approximately a third of the total dataset.
Step12: Discount Rate $\delta$ and Cooperation
Step13: The table above shows us there are seven values of $\delta$ for which our data set contains observations, and gives the number of observations at each level. The values range from 0% up to 95%, with a sizable number of observations at each value included in the data set. That said, there are far, far more observations at $\delta = 0.00, 0.50,$ and $0.75$, than at the other four values. This will be helpful to keep in mind in a moment.
Step14: There are two significant observations we can make from this graph
Step15: Hence, we can safely conclude $\delta$ has a sizeable effect on a player's propensity to cooperate and is a major determinant of outcomes. This discovery is in alignment with Frechette and Bo's conclusions and the academic literature on prisoner's dilemmas more generally.
Step16: 2. Risk Sensitive Index
Step17: 3. Basin of Attraction to Defect
|
<ASSISTANT_TASK:>
Python Code:
import sys # system module
import pandas as pd # data package
import matplotlib.pyplot as plt # graphics module
import datetime as dt # date and time module
import numpy as np # foundation for Pandas
from IPython.display import Image
from IPython.core.display import HTML # display images in the notebook
%matplotlib inline
# check versions
print('Python version:', sys.version)
print('Pandas version: ', pd.__version__)
print('Today: ', dt.date.today())
# web address where the data set for Frechette and Bo (2016) is stored
url = 'http://cess.nyu.edu/frechette/data/Dal_Bo_2016a_data.txt'
coop = pd.read_csv(url,
sep = '\t', # denote the separator in the data file is tab
skiprows = 31, # the first 31 rows of the file are the legend
usecols = # specify columns of interest
['paper', 'supergame', 'round', 'g', 'l', 'delta', 'sgpe', 'rd', 'coop', 'sizeBAD'])
coop.head(6) # display the first rows the dataFrame to see it is in a workable format
coop.tail(6) # final rows of the DataFrame
pris_image = 'https://www.learner.org/courses/mathilluminated/images/units/9/1198.png'
Image(url = pris_image, width = 600, height = 500)
# This image is taken from the free online course 'MATHematics Illuminated' provided by Annenberg Learner
# at the address above.
print('Minimum Rounds:', coop['round'].min(), '\nMaximum rounds:', coop['round'].max())
coop = coop.rename(columns = {'delta' : '$\delta$'})
coop['$\delta$'].describe()
# we create a dictionary and use the map command to convert cooperative choices to 1s
# and defections to 0s, then drop the original `coop` variable, as it is no longer needed
cooperateInt = {'coop' : 1, 'defect' : 0}
coop['cooperate'] = coop['coop'].map(cooperateInt)
coop = coop.drop('coop', axis=1)
coop.head()
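As a quick illustration of the dict-based recode above, here is the same pattern on a plain list with made-up values (not taken from the experiment); `dict.get` would surface any unmapped category as None, which is worth checking for before analysis:

```python
# Toy version of the coop -> 0/1 recode; values are hypothetical.
recode = {'coop': 1, 'defect': 0}
toy_choices = ['coop', 'defect', 'defect', 'coop']
toy_binary = [recode.get(c) for c in toy_choices]
toy_coop_rate = sum(toy_binary) / len(toy_binary)  # mean of 0/1 flags = cooperation rate
```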
Image(url = 'http://oi67.tinypic.com/126af0p.jpg')
print('Mean l:', coop['l'].mean(), '\t\tMin l:', coop['l'].min(), '\t\tMax l:', coop['l'].max())
print('Mean g:', coop['g'].mean(), '\t\tMin g:', coop['g'].min(), '\t\tMax g:', coop['g'].max())
coop = coop.rename(columns = {'sgpe': 'Subgame Perfect', 'rd': 'Risk Dominant'})
coop.head()
coop['cooperate'].mean()
plt.style.use('ggplot') # styles the graphs similar to ggplot package for R
fig1, ax1 = plt.subplots(2, # create a two-graph figure with a shared x-axis
sharex = True,
figsize = (10,7))
# add titles and labels and adjust line thickness and font size
ax1[0].set_ylim(.3,.8)
ax1[0].set_title('Cooperation Rate by Round Number', fontsize = 16, loc = 'center')
ax1[1].set_title('Observations by Round', fontsize = 14, loc = 'center')
ax1[1].set_xlabel('Round', fontsize = 12)
ax1[0].set_ylabel('Proportion Cooperating', fontsize = 12)
ax1[1].set_ylabel('Number of Supergames', fontsize = 12)
# plot cooperation rate by round in thick red across the top panel
coop[['round','cooperate']].groupby('round').mean().plot(ax = ax1[0],
kind = 'line',
color = 'red',
linewidth = 3,
legend = False)
# we use grouping and the mean() command here so the average of the binary
# 1 and 0 values for 'cooperate' gives us the cooperation rate across the sample
# using the count function, we show the number of supergames that reach each round, and
# thus the number of observations used in the calculation of each mean
coop[['round','cooperate']].groupby('round').count().plot(ax = ax1[1],
legend = False,
color = 'blue',
linewidth = 2)
# we again use mean() and grouping, this time by supergame, to show the rate of cooperative actions in the
# first round in each successive supergame
coop[['supergame','cooperate']].groupby('supergame').mean().plot(figsize = (10,4),
title = 'Cooperation Rate Across Supergames',
legend = False,
grid = True,
linewidth=1.5)
# the new DataFrame is set to contain all observations where round = 1, and subsequently drops
# the round variable since it will be 1 in all cases
coop1 = coop[coop['round'] == 1].drop('round', axis = 1)
print(coop1.shape)
coop1.head()
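The round == 1 filter above can be sketched in plain Python on a few hypothetical rows (the values here are made up, not from the experiment), keeping only first-round observations and dropping the now-constant round field:

```python
# Toy illustration of filtering to round 1 and dropping the 'round' column.
toy_rows = [{'round': 1, 'cooperate': 1},
            {'round': 2, 'cooperate': 0},
            {'round': 1, 'cooperate': 0}]
toy_round1 = [{k: v for k, v in r.items() if k != 'round'}  # drop 'round'; it is 1 everywhere
              for r in toy_rows if r['round'] == 1]
```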
coop1[['$\delta$', 'cooperate']].groupby('$\delta$').count()
# create the figure
fig2, ax2 = plt.subplots(figsize = (10,6))
# group by the seven delta values and plot the cooperation rate
coop1[['$\delta$','cooperate']].groupby('$\delta$').mean().plot(ax = ax2,
kind = 'line',
color = 'red',
legend = False,
linewidth = 3)
# add titles and axis labels
ax2.set_title('Cooperation Rate by $\delta$', fontsize = 18, loc = 'center')
ax2.set_xlabel('Continuation Probability ($\delta$)', fontsize = 14)
ax2.set_ylabel('Cooperation Rate', fontsize = 14)
# create a DataFrame containing the subset of coop1 for which delta is 0, 0.5 or 0.75
coop1filtered = coop1[coop1['$\delta$'] == 0.0]
for addMe in [0.5, 0.75]:
coop1filtered = pd.concat([coop1filtered, coop1[coop1['$\delta$'] == addMe]])
# remove all unnecessary columns from the filtered dataFrame
coop1filtered = coop1filtered[['$\delta$', 'cooperate']]
# create the figure
fig3, ax3 = plt.subplots(figsize = (7,4))
coop1filtered.groupby('$\delta$').mean().plot(ax = ax3,
kind = 'bar',
colormap = 'copper_r',
legend = False)
# add title and axis labels
ax3.set_title("Cooperation Rate at Three Most Common $\delta$'s", fontsize = 18)
ax3.set_xlabel('Continuation Probability ($\delta$)', fontsize = 12)
ax3.set_ylabel('Cooperation Rate', fontsize = 12)
# calculate the new column SPE Index using the values of g and delta already in the DataFrame
# and display the updated data set.
coop1['SPE Index'] = coop1['$\delta$'] - (coop1['g'] / (1 + coop1['g']))
coop1.head()
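As a quick sanity check of the SPE index formula delta - g / (1 + g) on some hypothetical parameter values (cooperation can be subgame perfect only when the index is non-negative):

```python
# Toy check of the SPE index; the inputs here are illustrative, not from the data.
def spe_index(delta, g):
    return delta - g / (1 + g)
```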
# calculate the new column RD Index using the values of g, l and delta already in
# the DataFrame and display the updated data set.
coop1['RD Index'] = coop1['$\delta$'] - (1 + coop1['g']) / (1 + coop1['l'])
coop1.head()
# create the 6 subplot figure
fig4, ax4 = plt.subplots(nrows = 2, ncols = 3, figsize = (11, 7), sharey = True)
# graph cooperation rates against each of the three indices discussed in the top
# half of the figure in red
coop1[['SPE Index', 'cooperate']].groupby('SPE Index').mean().plot(ax = ax4[0][0],
legend = False,
ylim = (0,0.9))
coop1[['RD Index', 'cooperate']].groupby('RD Index').mean().plot(ax = ax4[0][1],
legend = False,
xlim = (-1,0.2))
coop1[['sizeBAD', 'cooperate']].groupby('sizeBAD').mean().plot(ax = ax4[0][2],
legend = False)
# graph vertical dotted lines at the subgame perfect equilibrium or risk dominant threshold
# for each of the three indices
ax4[0][0].axvline(color = 'k', linewidth = 2, linestyle = '--')
ax4[0][1].axvline(color = 'k', linewidth = 2, linestyle = '--')
ax4[0][2].axvline(color = 'k', linewidth = 2, x = 0.5, linestyle = '--')
# fit the linear regression for each of the three indices
mSPE, bSPE = np.polyfit(coop1['SPE Index'], coop1['cooperate'], 1)
mRD, bRD = np.polyfit(coop1['RD Index'], coop1['cooperate'], 1)
mBAD, bBAD = np.polyfit(coop1['sizeBAD'], coop1['cooperate'], 1)
# graph the best fit lines in blue
ax4[0][0].plot((-1, 1), (bSPE - mSPE, bSPE + mSPE))
ax4[0][1].plot((-1, 1), (bRD - mRD, bRD + mRD))
ax4[0][2].plot((-1, 1), (bBAD - mBAD, bBAD + mBAD))
# title the three regression graphs
ax4[0][0].set_title('SPE Index', fontsize = 14)
ax4[0][1].set_title('RD Index', fontsize = 14)
ax4[0][2].set_title('BAD Index', fontsize = 14)
# add the x-axis labels and a common y-axis label
ax4[0][0].set_xlabel('SPE Index', fontsize = 8)
ax4[0][1].set_xlabel('RD Index', fontsize = 8)
ax4[0][2].set_xlabel('BAD Index', fontsize = 8)
ax4[0][0].set_ylabel('Cooperation Rate', fontsize = 11)
# calculate the cooperation rates divided by whether cooperation is or is not subgame perfect
coopSPEyes = coop1[coop1['Subgame Perfect'] == 1]
SPEyesRate = coopSPEyes['cooperate'].mean()
coopSPEno = coop1[coop1['Subgame Perfect'] == 0]
SPEnoRate = coopSPEno['cooperate'].mean()
# calculate the cooperation rates divided by whether cooperation is or is not risk dominant
coopRDyes = coop1[coop1['Risk Dominant'] == 1]
RDyesRate = coopRDyes['cooperate'].mean()
coopRDno = coop1[coop1['Risk Dominant'] == 0]
RDnoRate = coopRDno['cooperate'].mean()
# calculate the cooperation rates for BAD score noticeable to either side of the risk dominant
# threshold
coopBADyes = coop1[coop1['sizeBAD'] < 0.3]
BADyesRate = coopBADyes['cooperate'].mean()
coopBADno = coop1[coop1['sizeBAD'] > 0.7]
BADnoRate = coopBADno['cooperate'].mean()
# set bar width, bar opacity, and spacing for the three bar graphs
ind = np.arange(0.25, 1.0, 0.5)
width = 0.4
opacity = 0.6
# graph the cooperation rates based on whether cooperation was a subgame perfect equilibrium,
# risk dominant equilibrium, or had a high vs. low basin of attraction in the lower panels of
# the figure below the corresponding continuous graph
ax4[1][0].bar(ind,
[SPEnoRate, SPEyesRate],
width = width,
tick_label = ['Not SPE', 'SPE'],
color = ['r', 'g'],
alpha = opacity)
ax4[1][1].bar(ind,
[RDnoRate, RDyesRate],
width = width,
tick_label = ['Not RD', 'RD'],
color = ['r', 'g'],
alpha = opacity)
ax4[1][2].bar(ind,
[BADnoRate, BADyesRate],
width = width,
tick_label = ['BAD < 0.3', 'BAD > 0.7'],
color = ['r', 'g'],
alpha = opacity)
# add labels to each bar with the cooperation rates
ax4[1][0].text(0.38, SPEnoRate + 0.01, '20.2%')
ax4[1][0].text(0.87, SPEyesRate + 0.01, '48.8%')
ax4[1][1].text(0.38, RDnoRate + 0.01, '20.9%')
ax4[1][1].text(0.87, RDyesRate + 0.01, '57.5%')
ax4[1][2].text(0.38, BADnoRate + 0.01, '20.5%')
ax4[1][2].text(0.87, BADyesRate + 0.01, '62.8%')
# center the bar graph labels under their respective bars
ax4[1][0].set_xticks(ind + width/2)
ax4[1][1].set_xticks(ind + width/2)
ax4[1][2].set_xticks(ind + width/2)
# label the x-axes of the bar graphs and add a common y-axis label
ax4[1][0].set_xlabel("Cooperation is...", fontsize = 10)
ax4[1][1].set_xlabel("Cooperation is...", fontsize = 10)
ax4[1][2].set_xlabel("Basin of Attraction to Defect", fontsize = 10)
ax4[1][0].set_ylabel("Cooperation Rate", fontsize = 11)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load raw data
Step2: Use tf.data to read the CSV files
Step3: Build a simple keras DNN model
Step4: Next, we create the DNN model. The Sequential model is a linear stack of layers and when building a model using the Sequential API, you configure each layer of the model in turn. Once all the layers have been added, you compile the model.
Step5: Next, to prepare the model for training, you must configure the learning process. This is done using the compile method. The compile method takes three arguments
Step6: Train the model
Step7: There are various arguments you can set when calling the .fit method. Here x specifies the input data which in our case is a tf.data dataset returning a tuple of (inputs, targets). The steps_per_epoch parameter is used to mark the end of training for a single epoch. Here we are training for NUM_EVALS epochs. Lastly, for the callback argument we specify a Tensorboard callback so we can inspect Tensorboard after training.
Step8: High-level model evaluation
Step9: Running .fit (or .fit_generator) returns a History object which collects all the events recorded during training. Similar to Tensorboard, we can plot the training and validation curves for the model loss and rmse by accessing these elements of the History object.
Step10: Making predictions with our model
Step11: Export and deploy our model
Step12: Deploy our model to Vertex AI
Step13: Exercise. Complete the code in the cell below to upload and deploy your trained model to Vertex AI using the Model.upload method. Have a look at the documentation.
Step14: Exercise. Complete the code in the cell below to call prediction on your deployed model for the example you just created in the instance variable above.
Step15: Cleanup
Step16: ...then delete the endpoint.
|
<ASSISTANT_TASK:>
Python Code:
import datetime
import os
import shutil
import numpy as np
import pandas as pd
import tensorflow as tf
from google.cloud import aiplatform
from matplotlib import pyplot as plt
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.models import Sequential
print(tf.__version__)
%matplotlib inline
!ls -l ../data/*.csv
!head ../data/taxi*.csv
CSV_COLUMNS = [
"fare_amount",
"pickup_datetime",
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
"key",
]
LABEL_COLUMN = "fare_amount"
DEFAULTS = [[0.0], ["na"], [0.0], [0.0], [0.0], [0.0], [0.0], ["na"]]
UNWANTED_COLS = ["pickup_datetime", "key"]
def features_and_labels(row_data):
label = row_data.pop(LABEL_COLUMN)
features = row_data
for unwanted_col in UNWANTED_COLS:
features.pop(unwanted_col)
return features, label
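What features_and_labels does to a single row can be sketched with a plain dict carrying hypothetical values in place of the tensors tf.data supplies: pop the label out, then drop the unwanted columns.

```python
# Self-contained toy version of the row-splitting logic; values are hypothetical.
def split_row(row, label_col='fare_amount', unwanted=('pickup_datetime', 'key')):
    row = dict(row)  # copy so the caller's mapping is untouched
    label = row.pop(label_col)
    for col in unwanted:
        row.pop(col, None)
    return row, label

toy_features, toy_label = split_row(
    {'fare_amount': 12.5, 'pickup_datetime': 'na', 'key': 'na', 'passenger_count': 2.0})
```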
def create_dataset(pattern, batch_size=1, mode="eval"):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS
)
dataset = dataset.map(features_and_labels)
if mode == "train":
dataset = dataset.shuffle(buffer_size=1000).repeat()
# take advantage of multi-threading; 1=AUTOTUNE
dataset = dataset.prefetch(1)
return dataset
INPUT_COLS = [
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
]
# Create input layer of feature columns
feature_columns = # TODO: Your code here
# Build a keras DNN model using Sequential API
model = # TODO: Your code here
# Create a custom evalution metric
def rmse(y_true, y_pred):
return # TODO: Your code here
# Compile the keras model
# TODO: Your code here
TRAIN_BATCH_SIZE = 1000
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset will repeat, wrap around
NUM_EVALS = 50 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample
trainds = create_dataset(
pattern="../data/taxi-train*", batch_size=TRAIN_BATCH_SIZE, mode="train"
)
evalds = create_dataset(
pattern="../data/taxi-valid*", batch_size=1000, mode="eval"
).take(NUM_EVAL_EXAMPLES // 1000)
%%time
steps_per_epoch = # TODO: Your code here
LOGDIR = "./taxi_trained"
history = # TODO: Your code here
model.summary()
RMSE_COLS = ["rmse", "val_rmse"]
pd.DataFrame(history.history)[RMSE_COLS].plot()
LOSS_COLS = ["loss", "val_loss"]
pd.DataFrame(history.history)[LOSS_COLS].plot()
model.predict(
x={
"pickup_longitude": tf.convert_to_tensor([-73.982683]),
"pickup_latitude": tf.convert_to_tensor([40.742104]),
"dropoff_longitude": tf.convert_to_tensor([-73.983766]),
"dropoff_latitude": tf.convert_to_tensor([40.755174]),
"passenger_count": tf.convert_to_tensor([3.0]),
},
steps=1,
)
OUTPUT_DIR = "./export/savedmodel"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
TIMESTAMP = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
EXPORT_PATH = os.path.join(OUTPUT_DIR, TIMESTAMP)
tf.saved_model.save(
# TODO: Your code here
)
!saved_model_cli show \
--tag_set # TODO: Your code here
--signature_def # TODO: Your code here
--dir # TODO: Your code here
!find {EXPORT_PATH}
os.environ['EXPORT_PATH'] = EXPORT_PATH
PROJECT = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
MODEL_DISPLAYNAME = f"taxifare-{TIMESTAMP}"
print(f"MODEL_DISPLAYNAME: {MODEL_DISPLAYNAME}")
# from https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers
SERVING_CONTAINER_IMAGE_URI = (
"us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-3:latest"
)
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
%%bash
# Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo "\nHere are your current buckets:"
gsutil ls
fi
!gsutil cp -R $EXPORT_PATH gs://$BUCKET/$MODEL_DISPLAYNAME
uploaded_model = aiplatform.Model.upload(
display_name=MODEL_DISPLAYNAME,
artifact_uri= # TODO: Your code here
serving_container_image_uri= # TODO: Your code here
)
MACHINE_TYPE = "n1-standard-2"
endpoint = uploaded_model.deploy(
machine_type=MACHINE_TYPE,
accelerator_type=None,
accelerator_count=None,
)
instance = {
"pickup_longitude": -73.982683,
"pickup_latitude": 40.742104,
"dropoff_longitude": -73.983766,
"dropoff_latitude": 40.755174,
"passenger_count": 3.0,
}
endpoint.predict(
# TODO: Your code here
)
endpoint.undeploy_all()
endpoint.delete()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Logarithmic Return
|
<ASSISTANT_TASK:>
Python Code:
BRK['simple_return'] = (BRK['Close'] / BRK['Close'].shift(1)) - 1
print(BRK['simple_return'])
BRK['simple_return'].plot(figsize=(8,5))
plt.show()
avg_returns_d = BRK['simple_return'].mean()
avg_returns_d
avg_returns_a = avg_returns_d * 250 # multiply by the average number of business days per year
print(str(round(avg_returns_a, 5) * 100) + ' %')
BRK.head()
BRK['log_return'] = np.log(BRK['Close'] / BRK['Close'].shift(1))
print(BRK['log_return'])
BRK['log_return'].plot(figsize=(8, 5))
plt.show()
log_return_d = BRK['log_return'].mean()
log_return_d
log_return_a = BRK['log_return'].mean() * 250
log_return_a
print(str(round(log_return_a, 5) * 100) + ' %')
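The two return definitions used above are tightly related, which we can check on a few toy prices (not the BRK data): log(1 + simple_return) equals the log return, and log returns add across periods while simple returns do not.

```python
# Toy check of the simple-return / log-return relationship; prices are made up.
import math

toy_prices = [100.0, 110.0, 99.0]
toy_simple = [toy_prices[i] / toy_prices[i - 1] - 1 for i in range(1, len(toy_prices))]
toy_logs = [math.log(toy_prices[i] / toy_prices[i - 1]) for i in range(1, len(toy_prices))]
toy_total_log = math.log(toy_prices[-1] / toy_prices[0])  # one-step log return over the window
```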
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TensorFlow graph optimization with Grappler
Step2: Create a context manager to make it easy to toggle optimizer states.
Step3: Compare execution performance with and without Grappler
Step4: Run the function with the constant folding optimizer turned off.
Step5: Run the function again with the constant folding optimizer enabled, and observe the speedup in function execution.
Step6: The debug stripper optimizer
Step7: First, run the function with the debug stripper optimizer turned off.
Step8: Because of the Inf argument to test_func, tf.debugging.check_numerics raises an invalid argument error.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import timeit
import traceback
import contextlib
import tensorflow as tf
@contextlib.contextmanager
def options(options):
old_opts = tf.config.optimizer.get_experimental_options()
tf.config.optimizer.set_experimental_options(options)
try:
yield
finally:
tf.config.optimizer.set_experimental_options(old_opts)
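The save/modify/restore pattern in options() generalizes beyond TensorFlow; here is the same shape sketched over a plain dict standing in for the optimizer configuration (toy state only, not TensorFlow's actual config object):

```python
# Toy context manager mirroring options(): snapshot state, apply changes, restore on exit.
import contextlib

toy_config = {'constant_folding': None}

@contextlib.contextmanager
def toy_options(new_opts):
    old = dict(toy_config)
    toy_config.update(new_opts)
    try:
        yield
    finally:
        toy_config.clear()
        toy_config.update(old)

with toy_options({'constant_folding': True}):
    inside_value = toy_config['constant_folding']   # True while the context is active
restored_value = toy_config['constant_folding']     # back to None afterwards
```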
def test_function_1():
@tf.function
def simple_function(input_arg):
print('Tracing!')
a = tf.constant(np.random.randn(2000,2000), dtype = tf.float32)
c = a
for n in range(50):
c = c@a
return tf.reduce_mean(c+input_arg)
return simple_function
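To see why constant folding pays off here, note that the 50-step chain c = c @ a depends only on the constant a, so an optimizer can precompute the entire loop once. A tiny pure-Python analogue (2x2 identity matrices as stand-in constants, not the 2000x2000 case above):

```python
# Toy analogue of the constant matrix chain that Grappler can fold away.
def matmul2(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

toy_identity = [[1.0, 0.0], [0.0, 1.0]]
toy_chain = toy_identity
for _ in range(50):
    toy_chain = matmul2(toy_chain, toy_identity)  # every step uses only constants
```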
with options({'constant_folding': False}):
print(tf.config.optimizer.get_experimental_options())
simple_function = test_function_1()
# Trace once
x = tf.constant(2.2)
simple_function(x)
print("Vanilla execution:", timeit.timeit(lambda: simple_function(x), number = 1), "s")
with options({'constant_folding': True}):
print(tf.config.optimizer.get_experimental_options())
simple_function = test_function_1()
# Trace once
x = tf.constant(2.2)
simple_function(x)
print("Constant folded execution:", timeit.timeit(lambda: simple_function(x), number = 1), "s")
def test_function_2():
@tf.function
def simple_func(input_arg):
output = input_arg
tf.debugging.check_numerics(output, "Bad!")
return output
return simple_func
test_func = test_function_2()
p1 = tf.constant(float('inf'))
try:
test_func(p1)
except tf.errors.InvalidArgumentError as e:
traceback.print_exc(limit=2)
with options({'debug_stripper': True}):
test_func2 = test_function_2()
p1 = tf.constant(float('inf'))
try:
test_func2(p1)
except tf.errors.InvalidArgumentError as e:
traceback.print_exc(limit=2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Depth Restriction
Step2: I want to make the masks match on my data sets so that the unmasked pixels will line up correctly.
Step3: Split data into training and test sets.
Step4: Trained and Tested on Depth Limited Data
Step5: Full 20m Limited Prediction
Step6: Effect of Depth Limit on Model Accuracy
Step7: Limited Training Data
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import geopandas as gpd
import pandas as pd
from OpticalRS import *
from OpticalRS.ArrayUtils import band_df
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error
from sklearn.cross_validation import train_test_split
style.use('ggplot')
cd ../data
imrds = RasterDS('Leigh_wv2_denoised.tif')
imarr = imrds.band_array
deprds = RasterDS('Leigh_Depth_atAcq_Resampled.tif')
darr = -1 * deprds.band_array.squeeze()
darr = np.ma.masked_greater( darr, 20.0 )
imarr = ArrayUtils.mask3D_with_2D( imarr, darr.mask )
darr = np.ma.masked_where( imarr[...,0].mask, darr )
fig,(ax1,ax2) = subplots(1,2,figsize=(10,5))
ax1.imshow( imarr[...,1],norm=mpl.colors.LogNorm())
ax2.imshow( darr )
blah = [ax.set_axis_off() for ax in (ax1,ax2)]
darr.count(), darr.min(), darr.max()
x_train, x_test, y_train, y_test = train_test_split( \
imarr.compressed().reshape(-1,8),darr.compressed(), \
train_size=300000,random_state=5)
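What train_test_split is doing can be sketched with the standard library alone (toy data here; the real call above shuffles the 8-band pixels and depths together, and random_state plays the role of the seed):

```python
# Stdlib-only sketch of a seeded train/test split on toy samples.
import random

def toy_split(samples, train_size, seed=5):
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)  # seeded shuffle, analogous to random_state
    return ([samples[i] for i in idx[:train_size]],
            [samples[i] for i in idx[train_size:]])

toy_train, toy_test = toy_split(list(range(10)), train_size=7)
```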
knn = KNeighborsRegressor()
knn.fit( x_train, y_train )
pred = knn.predict( x_test )
knn.score( x_test, y_test )
dlimit = 20.0
fig2,ax = plt.subplots(1,1,figsize=(6,6))
ax.hexbin(pred,y_test,mincnt=1,gridsize=500,bins='log',cmap=plt.cm.hot)
# ax.scatter(pred3,y3_test,alpha=0.008,edgecolor='none')
ax.set_ylabel('MB Depth')
ax.set_xlabel('Predicted Depth')
rmse = np.sqrt( mean_squared_error( y_test, pred ) )
n = x_train.shape[0]
tit = "RMSE: %.2f, n=%i" % (rmse,n)
ax.set_title(tit,fontsize=12)
ax.set_aspect('equal')
ax.axis([0,dlimit+1,0,dlimit+1])
ax.plot([0,dlimit+1],[0,dlimit+1],c='white')
fig2.suptitle('KNN Depth<20',fontsize=14)
preddf = pd.DataFrame({'prediction':pred,'mb_depth':y_test})
# preddf.to_pickle('KNN20mPredVsMB.pkl')
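The RMSE used throughout can be written out explicitly on toy values, matching the sqrt(mean_squared_error(...)) computation above:

```python
# Explicit RMSE on toy values; mirrors sqrt(mean_squared_error(y_true, y_pred)).
import math

def toy_rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))
```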
resdf = preddf.copy()
resdf['error'] = resdf.prediction - resdf.mb_depth
errlist,upp95,low95 = [],[],[]
ds = arange(2.5,32.5,1)
for d in ds:
ld,hd = d-2.5, d+2.5
# print d, ld, hd, resdf.query('mb_depth > %i and mb_depth < %i' % (ld,hd)).count()
errs = resdf.query('mb_depth > %i and mb_depth < %i' % (ld,hd)).error
upp95.append( errs.quantile(0.90) )
low95.append( errs.quantile(0.10) )
errlist.append( errs.mean() )
fig,ax = subplots(1,1,figsize=(6,6))
ax.plot(ds,errlist)
ax.plot(ds,upp95,c='k',linestyle='--')
ax.plot(ds,low95,c='k',linestyle='--')
ax.scatter(resdf.mb_depth,resdf.error,alpha=0.005)
ax.set_title("Rolling Average Error")
ax.set_xlabel("MB Depth")
ax.set_ylabel("Error")
ax.set_ylim(-12,12)
full_pred = knn.predict( imarr.compressed().reshape(-1,8) )
fullpredout = imarr[...,0].copy()
fullpredout[~darr.mask] = full_pred
fullprederr = fullpredout - darr
fullpredout.dump('KNNPred.pkl')
fullprederr.dump('KNNPredErr.pkl')
figure(figsize=(12,12))
imshow( fullprederr,interpolation='nearest' )
colorbar()
fullim = imrds.band_array
fulldep = -1 * deprds.band_array.squeeze()
fullim = ArrayUtils.mask3D_with_2D( fullim, fulldep.mask )
fulldep = np.ma.masked_where( fullim[...,0].mask, fulldep )
fulldep.max()
dlims = arange(5,31,2.5)
drmses,meanerrs,stderrs = [],[],[]
for dl in dlims:
dlarr = np.ma.masked_greater( fulldep, dl )
iml = ArrayUtils.mask3D_with_2D( fullim, dlarr.mask )
xl_train, xl_test, yl_train, yl_test = train_test_split( \
iml.compressed().reshape(-1,8),dlarr.compressed(),train_size=1500,random_state=5)
knnl = KNeighborsRegressor()
predl = knnl.fit( xl_train, yl_train ).predict( xl_test )
drmses.append( sqrt( mean_squared_error(yl_test,predl) ) )
meanerrs.append( (yl_test - predl).mean() )
stderrs.append( (yl_test - predl).std() )
fig,(ax1,ax2) = subplots(1,2,figsize=(12,6))
ax1.plot(dlims,np.array(drmses),marker='o')
ax1.set_xlabel("Data Depth Limit (m)")
ax1.set_ylabel("Model RMSE (m)")
em,es = np.array(meanerrs), np.array(stderrs)
ax2.plot(dlims,em,marker='o')
ax2.plot(dlims,em+es,linestyle='--',c='k')
ax2.plot(dlims,em-es,linestyle='--',c='k')
ax2.set_xlabel("Data Depth Limit (m)")
ax2.set_ylabel("Model Mean Error (m)")
deplimdf = pd.DataFrame({'depth_lim':dlims,'rmse':drmses,'mean_error':meanerrs,'standard_error':stderrs})
deplimdf.to_pickle('KNNDepthLimitDF.pkl')
ns = np.logspace(1,log10(0.8*darr.count()),15)
ltdf = pd.DataFrame({'train_size':ns})
for rs in range(10):
nrmses = []
for n in ns:
xn_train,xn_test,yn_train,yn_test = train_test_split( \
imarr.compressed().reshape(-1,8),darr.compressed(),train_size=int(n),random_state=rs)
knn_n = KNeighborsRegressor()
npred = knn_n.fit(xn_train,yn_train).predict(xn_test)
nrmses.append( sqrt( mean_squared_error(yn_test,npred ) ) )
dflabel = 'rand_state_%i' % rs
ltdf[dflabel] = nrmses
print("min points: %i, max points: %i" % (int(ns.min()), int(ns.max())))
fig,ax = subplots(1,1,figsize=(10,6))
for rs in range(10):
dflabel = 'rand_state_%i' % rs
ax.plot(ltdf['train_size'],ltdf[dflabel])
ax.set_xlabel("Number of Training Points")
ax.set_ylabel("Model RMSE (m)")
# ax.set_xlim(0,5000)
ax.set_xscale('log')
ax.set_title("Rapidly Increasing Accuracy With More Training Data")
ltdf.to_pickle('KNNAccuracyDF.pkl')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: parameter controls the parameter of data_generator, hence influences its values. The former is a parent of the latter. By symmetry, data_generator is a child of parameter.
Step2: Of course a child can have more than one parent, and a parent can have many children.
Step3: PyMC is concerned with two types of programming variables
Step4: The call to random stores a new value into the variable's value attribute. In fact, this new value is stored in the computer's cache for faster recall and efficiency.
Step5: The use of the deterministic wrapper was seen in the previous chapter's text-message example. Recall the model for $\lambda$ looked like
Step6: Clearly, if $\tau, \lambda_1$ and $\lambda_2$ are known, then $\lambda$ is known completely, hence it is a deterministic variable.
Step7: To frame this in the notation of the first chapter, though this is a slight abuse of notation, we have specified $P(A)$. Our next goal is to include data/evidence/observations $X$ into our model.
Step8: This is how we include data into our models
Step9: Finally...
Step10: Modeling approaches
Step11: 2. Draw $\lambda_1$ and $\lambda_2$ from an $\text{Exp}(\alpha)$ distribution
Step12: 3. For days before $\tau$, represent the user's received SMS count by sampling from $\text{Poi}(\lambda_1)$, and sample from $\text{Poi}(\lambda_2)$ for days after $\tau$. For example
Step13: 4. Plot the artificial dataset
Step14: It is okay that our fictional dataset does not look like our observed dataset
Step15: Later we will see how we use this to make predictions and test the appropriateness of our models.
Step16: Had we had stronger beliefs, we could have expressed them in the prior above.
Step17: We combine the observations into the PyMC observed variable, and run our inference algorithm
Step18: We plot the posterior distribution of the unknown $p_A$ below
Step19: Our posterior distribution puts most weight near the true value of $p_A$, but also some weights in the tails. This is a measure of how uncertain we should be, given our observations. Try changing the number of observations, N, and observe how the posterior distribution changes.
Step20: Below we plot the posterior distributions for the three unknowns
Step21: Notice that as a result of N_B < N_A, i.e. we have less data from site B, our posterior distribution of $p_B$ is fatter, implying we are less certain about the true value of $p_B$ than we are of $p_A$.
Step22: If this probability is too high for comfortable decision-making, we can perform more trials on site B (as site B has less samples to begin with, each additional data point for site B contributes more inferential "power" than each additional data point for site A).
Step23: The special case when $N = 1$ corresponds to the Bernoulli distribution. There is another connection between Bernoulli and Binomial random variables. If we have $X_1, X_2, ... , X_N$ Bernoulli random variables with the same $p$, then $Z = X_1 + X_2 + ... + X_N \sim \text{Binomial}(N, p )$.
Step24: Again, thinking of our data-generation model, we assign Bernoulli random variables to the 100 students
Step25: If we carry out the algorithm, the next step that occurs is the first coin-flip each student makes. This can be modeled again by sampling 100 Bernoulli random variables with $p=1/2$
Step26: Although not everyone flips a second time, we can still model the possible realization of second coin-flips
Step27: Using these variables, we can return a possible realization of the observed proportion of "Yes" responses. We do this using a PyMC deterministic variable
Step28: The line fc*t_a + (1-fc)*sc contains the heart of the Privacy algorithm. Elements in this array are 1 if and only if i) the first toss is heads and the student cheated or ii) the first toss is tails and the second is heads, and are 0 otherwise. Finally, the last line sums this vector and divides by float(N), producing a proportion.
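Outside PyMC, the same line can be exercised with a pure NumPy simulation; `p_true` below is an assumed cheating frequency, used only to check that the observed proportion lands near 0.5*p_true + 0.25:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p_true = 100_000, 0.2   # large n just to expose the expected proportion

t_a = rng.binomial(1, p_true, n)  # true cheater status
fc = rng.binomial(1, 0.5, n)      # first flip: heads -> answer truthfully
sc = rng.binomial(1, 0.5, n)      # second flip: heads -> answer "Yes"

observed = (fc * t_a + (1 - fc) * sc).mean()
print(observed)   # should be near 0.5*p_true + 0.25 = 0.35
```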
Step29: Next we need a dataset. After performing our coin-flipped interviews the researchers received 35 "Yes" responses. To put this into a relative perspective, if there truly were no cheaters, we should expect to see on average 1/4 of all responses being a "Yes" (half chance of having first coin land Tails, and another half chance of having second coin land Heads), so about 25 responses in a cheat-free world. On the other hand, if all students cheated, we should expect to see approximately 3/4 of all responses be "Yes".
Step30: Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.
Step31: With regards to the above plot, we are still pretty uncertain about what the true frequency of cheaters might be, but we have narrowed it down to a range between 0.05 and 0.35 (marked by the solid lines). This is pretty good, as a priori we had no idea how many students might have cheated (hence the uniform distribution for our prior). On the other hand, it is also pretty bad, since there is a window of width 0.3 in which the true value most likely lies. Have we even gained anything, or are we still too uncertain about the true frequency?
Step32: I could have typed p_skewed = 0.5*p + 0.25 instead for a one-liner, as the elementary operations of addition and scalar multiplication will implicitly create a deterministic variable, but I wanted to make the deterministic boilerplate explicit for clarity's sake.
Step33: Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.
Step34: More PyMC Tricks
Step35: The remainder of this chapter examines some practical examples of PyMC and PyMC modeling
Step36: It looks clear that the probability of damage incidents occurring increases as the outside temperature decreases. We are interested in modeling the probability here because it does not look like there is a strict cutoff point between temperature and a damage incident occurring. The best we can do is ask "At temperature $t$, what is the probability of a damage incident?". The goal of this example is to answer that question.
Step37: But something is missing. In the plot of the logistic function, the probability changes only near zero, but in our data above the probability changes around 65 to 70. We need to add a bias term to our logistic function
Step38: Adding a constant term $\alpha$ amounts to shifting the curve left or right (hence why it is called a bias).
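Concretely, the midpoint of $1/(1+e^{\beta x + \alpha})$ — where the output equals 0.5 — sits at $x = -\alpha/\beta$, so varying $\alpha$ slides the curve horizontally. A minimal numeric check (the parameter values are arbitrary):

```python
import numpy as np

def logistic(x, beta, alpha=0.0):
    return 1.0 / (1.0 + np.exp(beta * x + alpha))

beta, alpha = 3.0, -2.0
midpoint = -alpha / beta                 # x = 2/3 for these parameters
print(logistic(midpoint, beta, alpha))   # exactly 0.5 at the midpoint
```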
Step39: A Normal random variable can take on any real number, but the variable is very likely to be relatively close to $\mu$. In fact, the expected value of a Normal is equal to its $\mu$ parameter
Step40: We have our probabilities, but how do we connect them to our observed data? A Bernoulli random variable with parameter $p$, denoted $\text{Ber}(p)$, is a random variable that takes value 1 with probability $p$, and 0 else. Thus, our model can look like
Step41: We have trained our model on the observed data, now we can sample values from the posterior. Let's look at the posterior distributions for $\alpha$ and $\beta$
Step42: All samples of $\beta$ are greater than 0. If instead the posterior was centered around 0, we may suspect that $\beta = 0$, implying that temperature has no effect on the probability of defect.
Step43: Above we also plotted two possible realizations of what the actual underlying system might be. Both are equally likely as any other draw. The blue line is what occurs when we average all the 20000 possible dotted lines together.
Step44: The 95% credible interval, or 95% CI, painted in purple, represents the interval, for each temperature, that contains 95% of the distribution. For example, at 65 degrees, we can be 95% sure that the probability of defect lies between 0.25 and 0.75.
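Mechanically, the band is just per-temperature quantiles of the posterior draws. A self-contained sketch with synthetic draws (the array shape mimics `p_t` above, but the Beta parameters are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
# pretend: 20000 posterior draws of defect probability at 50 temperatures
p_t = rng.beta(2, 5, size=(20_000, 50))

lower, upper = np.quantile(p_t, [0.025, 0.975], axis=0)

# By construction, ~95% of draws fall inside the band in every column
inside = ((p_t >= lower) & (p_t <= upper)).mean()
print(inside)
```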
Step45: Is our model appropriate?
Step46: Note that the above plots are different (if you can think of a cleaner way to present this, please send a pull request and answer here!).
Step47: Next we sort each column by the posterior probabilities
Step48: We can present the above data better in a figure
Step49: The snaking-line is the sorted probabilities, blue bars denote defects, and empty space (or grey bars for the optimistic readers) denote non-defects. As the probability rises, we see more and more defects occur. On the right hand side, the plot suggests that as the posterior probability is large (line close to 1), then more defects are realized. This is good behaviour. Ideally, all the blue bars should be close to the right-hand side, and deviations from this reflect missed predictions.
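The clustering that the separation plot displays can also be summarized with a single number: the defect rate among the upper half of sorted predicted probabilities. A sketch with synthetic labels (the base rate of 0.3 is an arbitrary choice, not the shuttle data):

```python
import numpy as np

rng = np.random.default_rng(5)
D = rng.binomial(1, 0.3, 1000)   # synthetic defect labels

def top_half_defect_rate(p, d):
    # defect rate among the half with the highest predicted probability
    order = np.argsort(p)
    return d[order][len(d) // 2:].mean()

p_perfect = D.astype(float)      # "perfect" model: probability == outcome
p_random = rng.random(1000)      # "random" model

print(top_half_defect_rate(p_perfect, D))  # all defects pushed to the right
print(top_half_defect_rate(p_random, D))   # stays near the base rate
```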
Step50: In the random model, we can see that as the probability increases there is no clustering of defects to the right-hand side. Similarly for the constant model.
Step51: References
|
<ASSISTANT_TASK:>
Python Code:
import pymc as pm
parameter = pm.Exponential("poisson_param", 1)
data_generator = pm.Poisson("data_generator", parameter)
data_plus_one = data_generator + 1
print("Children of `parameter`: ")
print(parameter.children)
print("\nParents of `data_generator`: ")
print(data_generator.parents)
print("\nChildren of `data_generator`: ")
print(data_generator.children)
print("parameter.value =", parameter.value)
print("data_generator.value =", data_generator.value)
print("data_plus_one.value =", data_plus_one.value)
lambda_1 = pm.Exponential("lambda_1", 1) # prior on first behaviour
lambda_2 = pm.Exponential("lambda_2", 1) # prior on second behaviour
tau = pm.DiscreteUniform("tau", lower=0, upper=10) # prior on behaviour change
print("lambda_1.value = %.3f" % lambda_1.value)
print("lambda_2.value = %.3f" % lambda_2.value)
print("tau.value = %.3f" % tau.value, "\n")
lambda_1.random(), lambda_2.random(), tau.random()
print("After calling random() on the variables...")
print("lambda_1.value = %.3f" % lambda_1.value)
print("lambda_2.value = %.3f" % lambda_2.value)
print("tau.value = %.3f" % tau.value)
type(lambda_1 + lambda_2)
import numpy as np
n_data_points = 5 # in CH1 we had ~70 data points
@pm.deterministic
def lambda_(tau=tau, lambda_1=lambda_1, lambda_2=lambda_2):
out = np.zeros(n_data_points)
out[:tau] = lambda_1 # lambda before tau is lambda1
out[tau:] = lambda_2 # lambda after tau is lambda2
return out
%matplotlib inline
from IPython.core.pylabtools import figsize
from matplotlib import pyplot as plt
figsize(12.5, 4)
samples = [lambda_1.random() for i in range(20000)]
plt.hist(samples, bins=70, normed=True, histtype="stepfilled")
plt.title("Prior distribution for $\lambda_1$")
plt.xlim(0, 8);
data = np.array([10, 5])
fixed_variable = pm.Poisson("fxd", 1, value=data, observed=True)
print("value: ", fixed_variable.value)
print("calling .random()")
fixed_variable.random()
print("value: ", fixed_variable.value)
# We're using some fake data here
data = np.array([10, 25, 15, 20, 35])
obs = pm.Poisson("obs", lambda_, value=data, observed=True)
print(obs.value)
model = pm.Model([obs, lambda_, lambda_1, lambda_2, tau])
tau = pm.rdiscrete_uniform(0, 80)
print(tau)
alpha = 1. / 20.
lambda_1, lambda_2 = pm.rexponential(alpha, 2)
print(lambda_1, lambda_2)
data = np.r_[pm.rpoisson(lambda_1, tau), pm.rpoisson(lambda_2, 80 - tau)]
plt.bar(np.arange(80), data, color="#348ABD")
plt.bar(tau - 1, data[tau - 1], color="r", label="user behaviour changed")
plt.xlabel("Time (days)")
plt.ylabel("count of text-msgs received")
plt.title("Artificial dataset")
plt.xlim(0, 80)
plt.legend();
def plot_artificial_sms_dataset():
tau = pm.rdiscrete_uniform(0, 80)
alpha = 1. / 20.
lambda_1, lambda_2 = pm.rexponential(alpha, 2)
data = np.r_[pm.rpoisson(lambda_1, tau), pm.rpoisson(lambda_2, 80 - tau)]
plt.bar(np.arange(80), data, color="#348ABD")
plt.bar(tau - 1, data[tau - 1], color="r", label="user behaviour changed")
plt.xlim(0, 80)
figsize(12.5, 5)
plt.suptitle("More examples of artificial datasets", fontsize=14)
for i in range(1, 5):
plt.subplot(4, 1, i)
plot_artificial_sms_dataset()
import pymc as pm
# The parameters are the bounds of the Uniform.
p = pm.Uniform('p', lower=0, upper=1)
# set constants
p_true = 0.05 # remember, this is unknown.
N = 15000
# sample N Bernoulli random variables from Ber(0.05).
# each random variable has a 0.05 chance of being a 1.
# this is the data-generation step
occurrences = pm.rbernoulli(p_true, N)
print(occurrences) # Remember: Python treats True == 1, and False == 0
print(occurrences.sum())
# Occurrences.mean is equal to n/N.
print("What is the observed frequency in Group A? %.4f" % occurrences.mean())
print("Does this equal the true frequency? %s" % (occurrences.mean() == p_true))
# include the observations, which are Bernoulli
obs = pm.Bernoulli("obs", p, value=occurrences, observed=True)
# To be explained in chapter 3
mcmc = pm.MCMC([p, obs])
mcmc.sample(18000, 1000)
figsize(12.5, 4)
plt.title("Posterior distribution of $p_A$, the true effectiveness of site A")
plt.vlines(p_true, 0, 250, linestyle="--", label="true $p_A$ (unknown)")
plt.hist(mcmc.trace("p")[:], bins=25, histtype="stepfilled", normed=True)
plt.legend();
import pymc as pm
figsize(12, 4)
# these two quantities are unknown to us.
true_p_A = 0.05
true_p_B = 0.04
# notice the unequal sample sizes -- no problem in Bayesian analysis.
N_A = 1500
N_B = 750
# generate some observations
observations_A = pm.rbernoulli(true_p_A, N_A)
observations_B = pm.rbernoulli(true_p_B, N_B)
print("Obs from Site A: ", observations_A[:30].astype(int), "...")
print("Obs from Site B: ", observations_B[:30].astype(int), "...")
print(observations_A.mean())
print(observations_B.mean())
# Set up the pymc model. Again assume Uniform priors for p_A and p_B.
p_A = pm.Uniform("p_A", 0, 1)
p_B = pm.Uniform("p_B", 0, 1)
# Define the deterministic delta function. This is our unknown of interest.
@pm.deterministic
def delta(p_A=p_A, p_B=p_B):
return p_A - p_B
# Set of observations, in this case we have two observation datasets.
obs_A = pm.Bernoulli("obs_A", p_A, value=observations_A, observed=True)
obs_B = pm.Bernoulli("obs_B", p_B, value=observations_B, observed=True)
# To be explained in chapter 3.
mcmc = pm.MCMC([p_A, p_B, delta, obs_A, obs_B])
mcmc.sample(20000, 1000)
p_A_samples = mcmc.trace("p_A")[:]
p_B_samples = mcmc.trace("p_B")[:]
delta_samples = mcmc.trace("delta")[:]
figsize(12.5, 10)
# histogram of posteriors
ax = plt.subplot(311)
plt.xlim(0, .1)
plt.hist(p_A_samples, histtype='stepfilled', bins=25, alpha=0.85,
label="posterior of $p_A$", color="#A60628", normed=True)
plt.vlines(true_p_A, 0, 80, linestyle="--", label="true $p_A$ (unknown)")
plt.legend(loc="upper right")
plt.title("Posterior distributions of $p_A$, $p_B$, and delta unknowns")
ax = plt.subplot(312)
plt.xlim(0, .1)
plt.hist(p_B_samples, histtype='stepfilled', bins=25, alpha=0.85,
label="posterior of $p_B$", color="#467821", normed=True)
plt.vlines(true_p_B, 0, 80, linestyle="--", label="true $p_B$ (unknown)")
plt.legend(loc="upper right")
ax = plt.subplot(313)
plt.hist(delta_samples, histtype='stepfilled', bins=30, alpha=0.85,
label="posterior of delta", color="#7A68A6", normed=True)
plt.vlines(true_p_A - true_p_B, 0, 60, linestyle="--",
label="true delta (unknown)")
plt.vlines(0, 0, 60, color="black", alpha=0.2)
plt.legend(loc="upper right");
# Count the number of samples less than 0, i.e. the area under the curve
# before 0, represent the probability that site A is worse than site B.
print("Probability site A is WORSE than site B: %.3f" % \
(delta_samples < 0).mean())
print("Probability site A is BETTER than site B: %.3f" % \
(delta_samples > 0).mean())
figsize(12.5, 4)
import scipy.stats as stats
binomial = stats.binom
parameters = [(10, .4), (10, .9)]
colors = ["#348ABD", "#A60628"]
for i in range(2):
N, p = parameters[i]
_x = np.arange(N + 1)
plt.bar(_x - 0.5, binomial.pmf(_x, N, p), color=colors[i],
edgecolor=colors[i],
alpha=0.6,
label="$N$: %d, $p$: %.1f" % (N, p),
linewidth=3)
plt.legend(loc="upper left")
plt.xlim(0, 10.5)
plt.xlabel("$k$")
plt.ylabel("$P(X = k)$")
plt.title("Probability mass distributions of binomial random variables");
import pymc as pm
N = 100
p = pm.Uniform("freq_cheating", 0, 1)
true_answers = pm.Bernoulli("truths", p, size=N)
first_coin_flips = pm.Bernoulli("first_flips", 0.5, size=N)
print(first_coin_flips.value)
second_coin_flips = pm.Bernoulli("second_flips", 0.5, size=N)
@pm.deterministic
def observed_proportion(t_a=true_answers,
fc=first_coin_flips,
sc=second_coin_flips):
observed = fc * t_a + (1 - fc) * sc
return observed.sum() / float(N)
observed_proportion.value
X = 35
observations = pm.Binomial("obs", N, observed_proportion, observed=True,
value=X)
model = pm.Model([p, true_answers, first_coin_flips,
second_coin_flips, observed_proportion, observations])
# To be explained in Chapter 3!
mcmc = pm.MCMC(model)
mcmc.sample(40000, 15000)
figsize(12.5, 3)
p_trace = mcmc.trace("freq_cheating")[:]
plt.hist(p_trace, histtype="stepfilled", normed=True, alpha=0.85, bins=30,
label="posterior distribution", color="#348ABD")
plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.3)
plt.xlim(0, 1)
plt.legend();
p = pm.Uniform("freq_cheating", 0, 1)
@pm.deterministic
def p_skewed(p=p):
return 0.5 * p + 0.25
yes_responses = pm.Binomial("number_cheaters", 100, p_skewed,
value=35, observed=True)
model = pm.Model([yes_responses, p_skewed, p])
# To Be Explained in Chapter 3!
mcmc = pm.MCMC(model)
mcmc.sample(25000, 2500)
figsize(12.5, 3)
p_trace = mcmc.trace("freq_cheating")[:]
plt.hist(p_trace, histtype="stepfilled", normed=True, alpha=0.85, bins=30,
label="posterior distribution", color="#348ABD")
plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.2)
plt.xlim(0, 1)
plt.legend();
N = 10
x = np.empty(N, dtype=object)
for i in range(0, N):
x[i] = pm.Exponential('x_%i' % i, (i + 1) ** 2)
figsize(12.5, 3.5)
np.set_printoptions(precision=3, suppress=True)
challenger_data = np.genfromtxt("data/challenger_data.csv", skip_header=1,
usecols=[1, 2], missing_values="NA",
delimiter=",")
# drop the NA values
challenger_data = challenger_data[~np.isnan(challenger_data[:, 1])]
# plot it, as a function of temperature (the first column)
print("Temp (F), O-Ring failure?")
print(challenger_data)
plt.scatter(challenger_data[:, 0], challenger_data[:, 1], s=75, color="k",
alpha=0.5)
plt.yticks([0, 1])
plt.ylabel("Damage Incident?")
plt.xlabel("Outside temperature (Fahrenheit)")
plt.title("Defects of the Space Shuttle O-Rings vs temperature");
figsize(12, 3)
def logistic(x, beta):
return 1.0 / (1.0 + np.exp(beta * x))
x = np.linspace(-4, 4, 100)
plt.plot(x, logistic(x, 1), label=r"$\beta = 1$")
plt.plot(x, logistic(x, 3), label=r"$\beta = 3$")
plt.plot(x, logistic(x, -5), label=r"$\beta = -5$")
plt.title("Logistic function plotted for several values of the $\\beta$ parameter", fontsize=14)
plt.legend();
def logistic(x, beta, alpha=0):
return 1.0 / (1.0 + np.exp(np.dot(beta, x) + alpha))
x = np.linspace(-4, 4, 100)
plt.plot(x, logistic(x, 1), label=r"$\beta = 1$", ls="--", lw=1)
plt.plot(x, logistic(x, 3), label=r"$\beta = 3$", ls="--", lw=1)
plt.plot(x, logistic(x, -5), label=r"$\beta = -5$", ls="--", lw=1)
plt.plot(x, logistic(x, 1, 1), label=r"$\beta = 1, \alpha = 1$",
color="#348ABD")
plt.plot(x, logistic(x, 3, -2), label=r"$\beta = 3, \alpha = -2$",
color="#A60628")
plt.plot(x, logistic(x, -5, 7), label=r"$\beta = -5, \alpha = 7$",
color="#7A68A6")
plt.title("Logistic function with bias, plotted for several values of the $\\alpha$ bias parameter", fontsize=14)
plt.legend(loc="lower left");
import scipy.stats as stats
nor = stats.norm
x = np.linspace(-8, 7, 150)
mu = (-2, 0, 3)
tau = (.7, 1, 2.8)
colors = ["#348ABD", "#A60628", "#7A68A6"]
parameters = zip(mu, tau, colors)
for _mu, _tau, _color in parameters:
plt.plot(x, nor.pdf(x, _mu, scale=1. / np.sqrt(_tau)),
label="$\mu = %d,\;\\tau = %.1f$" % (_mu, _tau), color=_color)
plt.fill_between(x, nor.pdf(x, _mu, scale=1. / np.sqrt(_tau)), color=_color,
alpha=.33)
plt.legend(loc="upper right")
plt.xlabel("$x$")
plt.ylabel("density function at $x$")
plt.title("Probability distribution of three different Normal random \
variables");
import pymc as pm
temperature = challenger_data[:, 0]
D = challenger_data[:, 1] # defect or not?
# notice the`value` here. We explain why below.
beta = pm.Normal("beta", 0, 0.001, value=0)
alpha = pm.Normal("alpha", 0, 0.001, value=0)
@pm.deterministic
def p(t=temperature, alpha=alpha, beta=beta):
return 1.0 / (1. + np.exp(beta * t + alpha))
p.value
# connect the probabilities in `p` with our observations through a
# Bernoulli random variable.
observed = pm.Bernoulli("bernoulli_obs", p, value=D, observed=True)
model = pm.Model([observed, beta, alpha])
# Mysterious code to be explained in Chapter 3
map_ = pm.MAP(model)
map_.fit()
mcmc = pm.MCMC(model)
mcmc.sample(120000, 100000, 2)
alpha_samples = mcmc.trace('alpha')[:, None] # best to make them 1d
beta_samples = mcmc.trace('beta')[:, None]
figsize(12.5, 6)
# histogram of the samples:
plt.subplot(211)
plt.title(r"Posterior distributions of the variables $\alpha, \beta$")
plt.hist(beta_samples, histtype='stepfilled', bins=35, alpha=0.85,
label=r"posterior of $\beta$", color="#7A68A6", normed=True)
plt.legend()
plt.subplot(212)
plt.hist(alpha_samples, histtype='stepfilled', bins=35, alpha=0.85,
label=r"posterior of $\alpha$", color="#A60628", normed=True)
plt.legend();
t = np.linspace(temperature.min() - 5, temperature.max() + 5, 50)[:, None]
p_t = logistic(t.T, beta_samples, alpha_samples)
mean_prob_t = p_t.mean(axis=0)
figsize(12.5, 4)
plt.plot(t, mean_prob_t, lw=3, label="average posterior \nprobability \
of defect")
plt.plot(t, p_t[0, :], ls="--", label="realization from posterior")
plt.plot(t, p_t[-2, :], ls="--", label="realization from posterior")
plt.scatter(temperature, D, color="k", s=50, alpha=0.5)
plt.title("Posterior expected value of probability of defect; \
plus realizations")
plt.legend(loc="lower left")
plt.ylim(-0.1, 1.1)
plt.xlim(t.min(), t.max())
plt.ylabel("probability")
plt.xlabel("temperature");
from scipy.stats.mstats import mquantiles
# vectorized bottom and top 2.5% quantiles for "confidence interval"
qs = mquantiles(p_t, [0.025, 0.975], axis=0)
plt.fill_between(t[:, 0], *qs, alpha=0.7,
color="#7A68A6")
plt.plot(t[:, 0], qs[0], label="95% CI", color="#7A68A6", alpha=0.7)
plt.plot(t, mean_prob_t, lw=1, ls="--", color="k",
label="average posterior \nprobability of defect")
plt.xlim(t.min(), t.max())
plt.ylim(-0.02, 1.02)
plt.legend(loc="lower left")
plt.scatter(temperature, D, color="k", s=50, alpha=0.5)
plt.xlabel("temp, $t$")
plt.ylabel("probability estimate")
plt.title("Posterior probability estimates given temp. $t$");
figsize(12.5, 2.5)
prob_31 = logistic(31, beta_samples, alpha_samples)
plt.xlim(0.995, 1)
plt.hist(prob_31, bins=1000, normed=True, histtype='stepfilled')
plt.title("Posterior distribution of probability of defect, given $t = 31$")
plt.xlabel("probability of defect occurring in O-ring");
simulated = pm.Bernoulli("bernoulli_sim", p)
N = 10000
mcmc = pm.MCMC([simulated, alpha, beta, observed])
mcmc.sample(N)
figsize(12.5, 5)
simulations = mcmc.trace("bernoulli_sim")[:]
print(simulations.shape)
plt.title("Simulated dataset using posterior parameters")
figsize(12.5, 6)
for i in range(4):
ax = plt.subplot(4, 1, i + 1)
plt.scatter(temperature, simulations[1000 * i, :], color="k",
s=50, alpha=0.6)
posterior_probability = simulations.mean(axis=0)
print("posterior prob of defect | realized defect ")
for i in range(len(D)):
print("%.2f | %d" % (posterior_probability[i], D[i]))
ix = np.argsort(posterior_probability)
print("prob | defect ")
for i in range(len(D)):
print("%.2f | %d" % (posterior_probability[ix[i]], D[ix[i]]))
from separation_plot import separation_plot
figsize(11., 1.5)
separation_plot(posterior_probability, D)
figsize(11., 1.25)
# Our temperature-dependent model
separation_plot(posterior_probability, D)
plt.title("Temperature-dependent model")
# Perfect model
# i.e. the probability of defect is equal to if a defect occurred or not.
p = D
separation_plot(p, D)
plt.title("Perfect model")
# random predictions
p = np.random.rand(23)
separation_plot(p, D)
plt.title("Random model")
# constant model
constant_prob = 7. / 23 * np.ones(23)
separation_plot(constant_prob, D)
plt.title("Constant-prediction model");
# type your code here.
figsize(12.5, 4)
plt.scatter(alpha_samples, beta_samples, alpha=0.1)
plt.title("Why does the plot look like this?")
plt.xlabel(r"$\alpha$")
plt.ylabel(r"$\beta$");
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Inference
Step2: 2. Test
Step3: COCO test-dev2017
Step4: 3. Train
Step5: Train a YOLOv3 model on COCO128 with --data coco128.yaml, starting from pretrained --weights yolov3.pt, or from randomly initialized --weights '' --cfg yolov3.yaml. Models are downloaded automatically from the latest YOLOv3 release, and COCO, COCO128, and VOC datasets are downloaded automatically on first use.
Step6: 4. Visualize
Step7: <img src="https
Step8: <img src="https
|
<ASSISTANT_TASK:>
Python Code:
!git clone https://github.com/ultralytics/yolov3 # clone repo
%cd yolov3
%pip install -qr requirements.txt # install dependencies
import torch
from IPython.display import Image, clear_output # to display images
clear_output()
print(f"Setup complete. Using torch {torch.__version__} ({torch.cuda.get_device_properties(0).name if torch.cuda.is_available() else 'CPU'})")
!python detect.py --weights yolov3.pt --img 640 --conf 0.25 --source data/images/
Image(filename='runs/detect/exp/zidane.jpg', width=600)
# Download COCO val2017
torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/coco2017val.zip', 'tmp.zip')
!unzip -q tmp.zip -d ../ && rm tmp.zip
# Run YOLOv3 on COCO val2017
!python test.py --weights yolov3.pt --data coco.yaml --img 640 --iou 0.65
# Download COCO test-dev2017
torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/coco2017labels.zip', 'tmp.zip')
!unzip -q tmp.zip -d ../ && rm tmp.zip # unzip labels
!f="test2017.zip" && curl http://images.cocodataset.org/zips/$f -o $f && unzip -q $f && rm $f # 7GB, 41k images
%mv ./test2017 ../coco/images # move to /coco
# Run YOLOv3 on COCO test-dev2017 using --task test
!python test.py --weights yolov3.pt --data coco.yaml --task test
# Download COCO128
torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/coco128.zip', 'tmp.zip')
!unzip -q tmp.zip -d ../ && rm tmp.zip
# Tensorboard (optional)
%load_ext tensorboard
%tensorboard --logdir runs/train
# Weights & Biases (optional)
%pip install -q wandb
import wandb
wandb.login()
# Train YOLOv3 on COCO128 for 3 epochs
!python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov3.pt --nosave --cache
Image(filename='runs/train/exp/train_batch0.jpg', width=800) # train batch 0 mosaics and labels
Image(filename='runs/train/exp/test_batch0_labels.jpg', width=800) # test batch 0 labels
Image(filename='runs/train/exp/test_batch0_pred.jpg', width=800) # test batch 0 predictions
from utils.plots import plot_results
plot_results(save_dir='runs/train/exp') # plot all results*.txt as results.png
Image(filename='runs/train/exp/results.png', width=800)
# Re-clone repo
%cd ..
%rm -rf yolov3 && git clone https://github.com/ultralytics/yolov3
%cd yolov3
# Reproduce
for x in 'yolov3', 'yolov3-spp', 'yolov3-tiny':
!python test.py --weights {x}.pt --data coco.yaml --img 640 --conf 0.25 --iou 0.45 # speed
!python test.py --weights {x}.pt --data coco.yaml --img 640 --conf 0.001 --iou 0.65 # mAP
# PyTorch Hub
import torch
# Model
model = torch.hub.load('ultralytics/yolov3', 'yolov3') # or 'yolov3_spp', 'yolov3_tiny'
# Images
dir = 'https://ultralytics.com/images/'
imgs = [dir + f for f in ('zidane.jpg', 'bus.jpg')] # batch of images
# Inference
results = model(imgs)
results.print() # or .show(), .save()
# Unit tests
%%shell
export PYTHONPATH="$PWD" # to run *.py. files in subdirectories
rm -rf runs # remove runs/
for m in yolov3; do # models
python train.py --weights $m.pt --epochs 3 --img 320 --device 0 # train pretrained
python train.py --weights '' --cfg $m.yaml --epochs 3 --img 320 --device 0 # train scratch
for d in 0 cpu; do # devices
python detect.py --weights $m.pt --device $d # detect official
python detect.py --weights runs/train/exp/weights/best.pt --device $d # detect custom
python test.py --weights $m.pt --device $d # test official
python test.py --weights runs/train/exp/weights/best.pt --device $d # test custom
done
python hubconf.py # hub
python models/yolo.py --cfg $m.yaml # inspect
python models/export.py --weights $m.pt --img 640 --batch 1 # export
done
# Profile
from utils.torch_utils import profile
m1 = lambda x: x * torch.sigmoid(x)
m2 = torch.nn.SiLU()
profile(x=torch.randn(16, 3, 640, 640), ops=[m1, m2], n=100)
# Evolve
!python train.py --img 640 --batch 64 --epochs 100 --data coco128.yaml --weights yolov3.pt --cache --noautoanchor --evolve
!d=runs/train/evolve && cp evolve.* $d && zip -r evolve.zip $d && gsutil mv evolve.zip gs://bucket # upload results (optional)
# VOC
for b, m in zip([64, 48, 32, 16], ['yolov3', 'yolov3-spp', 'yolov3-tiny']): # zip(batch_size, model)
!python train.py --batch {b} --weights {m}.pt --data voc.yaml --epochs 50 --cache --img 512 --nosave --hyp hyp.finetune.yaml --project VOC --name {m}
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating a spectrum object. The init takes<br>
Step2: The general VNA parameters (probing frequencies, power, etc.) can either be set at the device or via the qkit instrument commands. Using sample object attributes is possible as well.
Step4: Next we want to record the VNA trace. In the spectrum object the parameter comment can be used for all non-digital information. Any digitally available instrument settings are saved automatically.
Step6: For resonator measurements it is also possible to fit the probed resonance live while measuring. For the fits to converge, the VNA parameters need to be adjusted properly.
Step8: A sweep parameter can be added to the spectrum object and attributed to a sweep axis. In the example below we measure a resonator against an applied current (i.e., for creating a magnetic field bias). The current is swept from 0 to 1 A in 1 mA steps. After the measurement the current is ramped down again. For one value of the x-parameter all VNA frequencies are probed before the next value of x is set.
Step10: In the next example the VNA power is changed in addition to the current. Depending on the applied power, the number of averages is changed.
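The interleaved ordering described here (one full trace per point, with the inner parameter changing faster than the outer one) boils down to a nested loop; a schematic sketch, where `probe` stands in for recording one VNA trace and the function/argument names are ours, not qkit's:

```python
def run_sweep(x_vec, y_vec, set_x, set_y, probe):
    # y changes "faster" than x: for each x, every y is visited,
    # and at each (x, y) pair one full VNA trace is recorded
    traces = []
    for x in x_vec:
        set_x(x)
        for y in y_vec:
            set_y(y)
            traces.append(probe())
    return traces

log = []
run_sweep([0, 1], ["a", "b"],
          lambda x: log.append(("x", x)),
          lambda y: log.append(("y", y)),
          lambda: log.append("trace"))
print(log)
```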
|
<ASSISTANT_TASK:>
Python Code:
# start qkit and import the needed modules. we here assume an already configured qkit measurement environment
import qkit
qkit.start()
from qkit.measure.spectroscopy import spectroscopy
import qkit.measure.samples_class as sc
import numpy as np
# initialize instruments; as an example we here work with a Keysight VNA, a Yokogawa current source,
# and an Anritsu MW source
vna = qkit.instruments.create('vna', 'Keysight_VNA_E5071C', address='TCPIP::XXX.XXX.XXX.XXX')
yoko = qkit.instruments.create('yoko', 'Yokogawa_GS820', address='TCPIP::XXX.XXX.XXX.XXX')
mw_src = qkit.instruments.create('mw_src', 'Anritsu_MG37022', address='TCPIP::XXX.XXX.XXX.XXX')
# create/load sample object; (optional), for more information see the example notebook on the sample class.
sample_filepath = r'\some\path\sample_1.sample'
smpl = sc.Sample(sample_filepath)
s = spectroscopy.spectrum(vna=vna, sample = smpl)
vna.set_centerfreq(smpl.fr)
vna.set_span(200e6)
vna.set_averages(10)
vna.set_Average(True)
comment = """
* -30dB attenuator @ VNA
"""
s.comment = comment
s.measure_1D()
comment = """
* measure resonance in reflection
* -30dB attenuator @ VNA
"""
s.set_resonator_fit(fit_function='circle_fit_reflection')
s.comment = comment
s.measure_1D()
comment = """
* resonator vs applied current
* -30dB step attenuator @ VNA
"""
s.comment = comment
s.set_resonator_fit(fit_resonator=False)
# x_func gets called for every value of i. Here it would not be necessary to define
# a function just for the ramp call, but it will be useful in the next example.
def x_func(i):
return yoko.ramp_current(i, 1e-3)
s.set_x_parameters(x_vec = np.arange(0, 1, 1e-3),
x_coordname = 'current',
x_set_obj = x_func,
x_unit = 'A')
s.measure_2D()
yoko.ramp_current(0, 1e-3)
# the x/y-loops are interleaved, y changes "faster" than x:
# 1) each VNA freq at y0 and x0
# 2) each VNA freq at y1 and x0
# ...
# 3) each VNA freq at yN and x0
# 4) each VNA freq at y0 and x1
# ...
# 5) each VNA freq at yN and xM
comment = """
* resonator vs applied current at different power levels
* -30dB step attenuator @ VNA
"""
s.comment = comment
s.set_resonator_fit(fit_resonator=False)
def x_func(i):
return yoko.ramp_current(i, 1e-3)
# Here the called function contains more commands, e.g. changing the number of
# averages at different powers to shorten the measurement time.
def y_func(i):
if i < -25: vna.set_averages(10)
else: vna.set_averages(5)
return vna.set_power(i)
s.set_x_parameters(x_vec = np.arange(0, 1.001, 1e-3),
x_coordname = 'current',
x_set_obj = x_func,
x_unit = 'A')
s.set_y_parameters(y_vec = np.arange(-35, 11, 5),
y_coordname = 'power',
y_set_obj = y_func,
y_unit = 'dBm')
s.measure_3D()
yoko.ramp_current(0, 1e-3)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will use the famous Iris dataset, which can be loaded from the seaborn library.
Step2: These plots reveal a strong relationship between petal length and petal width.
Step3: While we can clearly see that a relationship exists, we do not know much else about it.
Step4: The best linear approximation is therefore y = a + bx with
Step5: If you look more closely, you will notice that the coefficients of the two regressions are close but not equal. A precision problem? No...
Step6: There we go: the scikit-learn and statsmodels coefficients are now almost identical (in fact, if you look past the 10th decimal place, they are slightly different...)
Step7: Multiple linear regression with statsmodels
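For the simple one-variable case, the least-squares coefficients have a well-known closed form, b = cov(x, y)/var(x) and a = ȳ − b·x̄. A quick NumPy check against `np.polyfit` (the toy data below are illustrative, not the iris columns):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 200)
y = 0.4 + 1.7 * x + rng.normal(0, 0.5, 200)

# Closed-form OLS slope and intercept
b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
a = y.mean() - b * x.mean()

b_ref, a_ref = np.polyfit(x, y, 1)   # slope, intercept
print(a, b)   # close to the true 0.4 and 1.7
```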
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from scipy import stats
import seaborn as sns
import statsmodels.api as sm
from sklearn import linear_model
iris = sns.load_dataset("iris")
iris.head()
sns.pairplot(iris, hue="species");
sns.lmplot(x="petal_length", y="petal_width", data=iris);
X = iris[["petal_length"]]
y = iris["petal_width"]
# On fit le modele
model = linear_model.LinearRegression()
results = model.fit(X, y)
print(results.intercept_, results.coef_)
model = sm.OLS(y, X)
results = model.fit()
# With statsmodels, the output looks a lot like R's
print(results.summary())
X = iris["petal_length"]
X = np.vander(X, 2)  # here we add a constant (intercept) column
y = iris["petal_width"]
model = sm.OLS(y, X)
results = model.fit()
print(results.summary())
dummies = pd.get_dummies(iris["species"])
iris = pd.concat([iris, dummies], axis=1)
iris.head()
X = iris[["petal_length", "setosa", "versicolor", "virginica"]]
X = sm.add_constant(X)  # another way to add a constant
y = iris["petal_width"]
model = sm.OLS(y, X)
results = model.fit()
print(results.summary())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As we see, the $-\log p(x)$ is a convex function and cannot be bounded above by a line for all $p(x)$. However $p(x)$ is a density function and values of $x$ will appear infrequently from regions where $p(x)$ is small. We will choose the other point of the line to be $(\lambda, -\log \lambda)$ with the intention $\lambda \ll 1$. The line passing through these two points can be expressed as
Step2: Below we demonstrate a comparison between KL and the upper bound. We see the upper bound is more sensitive to local extrema and is better at determining the mean of the modes of the distribution. In this particular example we see that the upper bound is able to detect a mode that the true KL cannot.
Step3: We're still building intuition, but it is nice to see that the minimum value of $\mu$ in the upper bound corresponds to approximately the same minimum in the true KL.
Step4: So things are not quite as easy for the variance. However, this is likely only because $\lambda=10^{-3}$ was held fixed. If we also minimize across lambda, we may see the same minimum of $\sigma^2$ around 1. Let us look for a local minimum in the upper bound across all lambda.
Step5: To build a little more intuition about the role that $\lambda$ plays in the bound's tightness, let's run the experiment again, but explore $10^{-5} \le \lambda \le 10^{-1}$ while we hold $\mu=2$ and $\sigma^2=1$. In this case we know the true KL should be two. We will only visualize how the upper bound changes.
Step6: As suspected in the derivation, for the upper bound to be valid, $\lambda$ must remain small. However, if all we are interested in is minimizing the upper bound with respect to the distribution $q(x)$, then $\lambda$ is irrelevant. In addition, at least as far as one previous experiment showed, the parameter values that minimize the upper bound also minimize the KL.
Step7: Minimizing the Upper Bound
Step8: Minimization with Parameterized Density
Step9: To build some intuition about the stability issues, let's closely examine the cost surface at each iteration
Step10: Cost Surface
Step11: The confusion arises when we apply the optimization definition directly, which says we must find the variable $x$ that maximizes the pdf. This is just a little mathematical sleight of hand resulting from an imprecise specification of exactly what function is being optimized. When we observe values of the random variable $x$ we view the function $q(x; \mu, \lambda)$ differently. The observed random variable becomes the parameter and $q$ becomes a function of both $\mu$ and $\lambda$. It is no longer required to be a pdf as the integral over both $\mu$ and $\lambda$ is not guaranteed to equal 1. This is made clear by defining a new function for when $x$ is observed.
Step13: I would like to visualize the surface of $KL(q(x|\mu,\sigma)||p(x))$ for different $\mu$ and $\sigma$. To do this, we must choose a $p(x)$ to hold fixed, and then compute the KL for a range of values of $\mu$ and $\sigma$. We can then display the results as a surface in 3D. Let us choose $p(x)$ to be a standard normal with mean 0 and standard deviation 1. I defined a function above which calculates the value of the KL divergence for a set $q(x)$ and $p(x)$ using a numerical integration technique. I redefine it here and add comments for ease of use
Step14: Below we define a standard normal distribution. We use the gaussian mixture model (gmm) class but only define a single gaussian at mean $\mu=0$ and standard deviation $\sigma=1$ with a weight of $\pi=1$. So it is a regular univariate guassian.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib
#matplotlib.use('nbagg')
import matplotlib.pyplot as plt
x = np.linspace(1e-6,1,200)
y1 = -np.log(x)
y2 = -5*x+5
plt.plot(x,y1,label='$-\log p(x)$',lw=2)
plt.plot(x,y2,label='$-5p(x)+5$',lw=2)
plt.xlabel("$p(x)$")
plt.ylim(-0.5,10)
plt.tight_layout()
plt.legend()
plt.show()
from scipy.integrate import quadrature
def reverse_kl_int(q,p):
def f_i(x):
q_i = q(x)
p_i = p(x)
return q_i*np.log(q_i/p_i)
quad_rslt, quad_err = quadrature(f_i, -1, 2.0, tol=1e-8, maxiter=100)
return (quad_rslt, quad_err)
def kl_upper_bound(q,p,l=1e-3,k=None):
if not k:
k = np.log(l)/(1-l)
def f_i(x):
q_i = q(x)
p_i = p(x)
return q_i*(k*(p_i-1)-np.log(q_i))
quad_rslt, quad_err = quadrature(f_i, -1, 2.0, tol=1e-8, maxiter=100)
return (quad_rslt, quad_err)
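As a sanity check on the chord construction used in `kl_upper_bound` (the line through $(1,0)$ and $(\lambda,-\log\lambda)$ with slope $k=\log\lambda/(1-\lambda)$), the following sketch verifies numerically that the chord $k(p-1)$ dominates the convex curve $-\log p$ everywhere on $[\lambda, 1]$:

```python
import numpy as np

# Pointwise check of the chord bound: for lam <= p <= 1, the chord k*(p-1)
# with k = log(lam)/(1-lam) lies above the convex curve -log(p).
lam = 1e-3
k = np.log(lam) / (1 - lam)          # negative slope, since log(lam) < 0
p_vals = np.linspace(lam, 1.0, 1000)
chord = k * (p_vals - 1)
curve = -np.log(p_vals)
bound_holds = np.all(chord >= curve - 1e-12)  # equality at the two endpoints
```

Note the bound only holds for $p \ge \lambda$, which is why $\lambda$ must stay below the smallest density values the integral actually visits.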
from scipy.stats import norm
class gmm: # gaussian mixture model
def __init__(self, pis, params, normed=False):
self.params = params # [[mu1, sig1], [mu2, sig2],...]
self.components = params.shape[0]
self.pis = pis
self.max_val = 1
self.normed = normed
if self.normed:
vals = self.__call__(np.linspace(0,1,10000))
self.max_val = vals.max()
def __call__(self, x):
pis = self.pis
p = self.params
sz = self.components
vals = np.array([pis[i]*norm.pdf(x,*(p[i])) for i in range(sz)]).sum(axis=0)
if self.normed:
vals /= self.max_val
return vals
def sample(self, n_samples=1):
mode_id = np.random.choice(self.components, size=n_samples, replace=True, p=self.pis)
return np.array([norm.rvs(*(self.params[i])) for i in mode_id])
p = gmm([1], np.array([[0.5,0.1]]), normed=True)
mu = np.linspace(0,1,200)
true_kl = []
upper_bound = []
for m in mu:
q = gmm([1], np.array([[m,0.1]]))
true_kl.append(reverse_kl_int(q,p)[0])
upper_bound.append(kl_upper_bound(q,p,l=1e-5)[0])
max_val = np.max((np.max(true_kl),np.max(upper_bound)))
plt.plot(mu,true_kl/max_val,lw=3,label='KL')
plt.plot(mu,upper_bound/max_val,lw=3,label='bound')
plt.plot(mu,p(mu),ls='dashed',label='$p(x)$')
plt.plot([0,1],[0,0],c='k')
plt.xlabel('$\mu$')
plt.legend()
plt.show()
mu[(np.diff(np.sign(np.diff(upper_bound))) > 0).nonzero()[0] + 1]
mu[(np.diff(np.sign(np.diff(true_kl))) > 0).nonzero()[0] + 1]
p = gmm([1], np.array([[0.5,0.1]]), normed=True)
var = np.linspace(0.05,0.15,500)
true_kl = []
upper_bound = []
for v in var:
q = gmm([1], np.array([[0.5,v]]))
true_kl.append(reverse_kl_int(q,p)[0])
upper_bound.append(kl_upper_bound(q,p,l=1e-3)[0])
plt.plot(var,true_kl,label='KL')
plt.plot(var,upper_bound,label='bound')
plt.xlabel('$\sigma^2$')
plt.legend()
plt.show()
var[(np.diff(np.sign(np.diff(upper_bound))) > 0).nonzero()[0] + 1]
var[(np.diff(np.sign(np.diff(true_kl))) > 0).nonzero()[0] + 1]
p = gmm([0.1,0.3,0.6], np.array([[-7,1],[0,0.1],[3,0.5]]))
var = np.linspace(0.01,3,500)
ks = np.linspace(-5,-2,10)
for i,k_val in enumerate(ks):
    print("iteration =", i)
    true_kl = []   # reset per k so the length matches var in the plot below
    upper_bound = []
    for v in var:
        q = gmm([1], np.array([[3,np.sqrt(v)]]))
        true_kl.append(reverse_kl_int(q,p)[0])
        upper_bound.append(kl_upper_bound(q,p,k=k_val)[0])
    out = var[(np.diff(np.sign(np.diff(upper_bound))) > 0).nonzero()[0] + 1]
    if out.size > 0:
        print("Found minima with k = {}".format(k_val))
        print(out)
plt.plot(var,true_kl,label='KL')
plt.plot(var,upper_bound,label='bound')
plt.xlabel('$\sigma^2$')
plt.legend()
plt.show()
lmda = np.linspace(1e-5,0.5,1000)
p = gmm([0.1,0.1,0.2,0.2,0.4], np.array([[-8,1],[-4,1],[-2,3],[-5,1],[9,0.1]]))
q = gmm([1], np.array([[0,0.5]]))
true_kl = reverse_kl_int(q,p,-12,12,1000)
upper_bound = []
for l in lmda:
    upper_bound.append(kl_upper_bound(q,p,l=l)[0])
plt.plot([lmda[0],lmda[-1]],[true_kl,true_kl],ls='dashed',label='true KL')
plt.plot(lmda,upper_bound,label='bound')
plt.xlabel('$\lambda$')
plt.legend()
plt.show()
from scipy.integrate import quadrature
from mpl_toolkits.mplot3d import axes3d  # needed for the 3d projection below
from matplotlib import cm
fig = plt.figure(figsize=(9,5))
#ax2d = fig.add_subplot(121)
ax3d = fig.add_subplot(111, projection='3d')
def E(p,q,m,s,b=1):
def e_i(x,m,s,b):
        q_i = q(np.array(x), update=False)
p_i = p(x)
return p_i*q_i
quad_rslt, quad_err = quadrature(e_i, 0.0, 1.0, args=(m,s,b), tol=1e-6)
return (-7*(quad_rslt-1)-b*0.5*np.log(2*np.pi*np.e*s), quad_err)
def f(M, S, b=1):
global p
Z = np.ndarray(M.shape)
for mi,m in enumerate(M[:,0]):
for si,s in enumerate(S[0]):
            q = neuron(m, s)
Z[mi,si] = E(p,q,m,s,b)[0]
return Z
mu_min = 0
mu_max = 1
mu_step = 0.005
s_min = 0.001
s_max = 0.2
s_step = 0.01
M, S = np.mgrid[mu_min:mu_max:mu_step,s_min:s_max:s_step]
#mf = mu.flatten()
#lf = lamda.flatten()
#locs = np.where(1/lf<(1-mf)**2)[0]
Z = f(M,S,b=0.25)
ax3d.plot_surface(M, S, Z, rstride=5, cstride=5, alpha=0.3)
cset = ax3d.contour(M, S, Z, zdir='z', offset=Z.min(), cmap=cm.coolwarm)
cset = ax3d.contour(M, S, Z, zdir='x', offset=mu_min, cmap=cm.coolwarm)
cset = ax3d.contour(M, S, Z, zdir='y', offset=s_max, cmap=cm.coolwarm)
ax3d.set_xlabel('$\mu$')
ax3d.set_xlim(mu_min, mu_max)
ax3d.set_ylabel('$\sigma^2$')
ax3d.set_ylim(s_min, s_max)
ax3d.set_zlabel('$E_P[q(x)]$')
plt.show()
plt.clf()
def k2(s):
root_pi = np.sqrt(np.pi)
return -(s*root_pi*np.log(np.e*np.sqrt(2)*root_pi*s))/(1-2*s*root_pi)
s_vals = np.linspace(1e-4,0.2,1000)
plt.plot(s_vals,2*s_vals,s_vals,k2(s_vals))
plt.show()
np.log(0.001)/(1-0.001)
from ipywidgets import interactive
class neuron():
def __init__(self, mu_0, var_0, a=1, b=1):
self.mu = mu_0
self.var = var_0
self.a = a
self.b = b
self.calls = 0
def __call__(self, x, update=False):
self.calls += 1
z = x-self.mu
x2 = x + np.random.rand()*self.var
z2 = x2-self.mu
norm_const = 1/np.sqrt(2*np.pi*self.var)
output = norm_const*np.exp(-np.power(z,2)/(2*self.var))
output2 = norm_const*np.exp(-np.power(z2,2)/(2*self.var))
if update:
#self.mu += self.a*output*(z/self.var)
#self.var += self.b/2*output*(np.power(z/self.var,2)-(1/self.var)) + 1/self.var
self.mu += self.a*output*z
#z2 = x-self.mu
#self.var += self.b*output/2*(np.power(z,2)-self.var) + 1/self.var
#self.mu += self.a*0.5*(output*(z/self.var)+output2*(z2/self.var))
#self.var += self.b*0.5*(output*(np.power(z/self.var,2)-(1/self.var))+output2*(np.power(z2/self.var,2)-(1/self.var))) + 1/self.var
assert self.var > 0, 'ERROR: $\sigma^2 \le 0$'
return output
p = gmm([0.3,0.7], np.array([[-7,1], [3,1]]))
q = neuron(-2,1,1,2)
t = np.linspace(-11,10,1000)
true_kl = []
upper_bound = []
def f(i=0):
x = p.sample()
y = q(x, update=True)
true_kl.append(reverse_kl_int(q,p,-10,10,100))
    upper_bound.append(kl_upper_bound(q,p)[0])
fig, ax = plt.subplots(1, 2, figsize=(12,5), dpi=80)
ax[0].plot([x,x],[0.0,1.0],c='k', label='$x\sim p(x)$')
ax[0].plot(t, p(t), c='r', lw=3, label='$p(x)$')
ax[0].plot(t, q(t), c='b', lw=3, label='$q(x)$')
ax[0].legend()
ax[1].plot(true_kl, label='KL')
ax[1].plot(upper_bound, label='bound')
ax[1].set_xlabel('$q(x)$')
ax[1].set_title('KL Divergence and Upper Bound')
ax[1].legend()
plt.show()
interactive_plot = interactive(f, i=(0, 1000))
output = interactive_plot.children[-1]
#output.layout.height = '35px'
interactive_plot
from mpl_toolkits.mplot3d import axes3d
from matplotlib import cm
def f(mu,var,x):
x = np.array(x)
N = x.size
val = 0
c = 2*var
for xi in x:
val += (xi-mu)**2
return np.exp(-(N/2)*np.log(c*np.pi) - (1/c)*val) + 0.5*np.log(c*np.pi*np.e)
mu_min = 0.5
mu_max = 1.5
mu_step = 0.01
var_min = 0.001
var_max = 2.0
var_step = 0.01
mu,var = np.mgrid[mu_min:mu_max:mu_step,var_min:var_max:var_step]
#mf = mu.flatten()
#sf = sigma.flatten()
#locs = np.where(1/sf<(1-mf)**2)[0]
z = f(mu,var,[1])
fig = plt.figure(figsize=(8,6))
ax = fig.gca(projection='3d')
ax.plot_surface(mu, var, z, rstride=2, cstride=2, alpha=0.3)
cset = ax.contour(mu, var, z, zdir='z', offset=0.04, cmap=cm.coolwarm)
cset = ax.contour(mu, var, z, zdir='x', offset=mu_min, cmap=cm.coolwarm)
cset = ax.contour(mu, var, z, zdir='y', offset=var_max, cmap=cm.coolwarm)
ax.set_xlabel('$\mu$')
ax.set_xlim(mu_min, mu_max)
ax.set_ylabel('$\sigma^2$')
ax.set_ylim(var_min, var_max)
ax.set_zlim(bottom=0)
ax.set_zlabel('$f$')
plt.show()
t = np.linspace(0,2,100)
q = gmm([1], np.array([[1,1/5]]))
q_y = q(t)
fig, ax = plt.subplots(figsize=(10,3))
ax.plot(t,q_y)
plt.show()
from mpl_toolkits.mplot3d import axes3d
from matplotlib import cm
def f(mu,lamda,x):
x = np.array(x)
N = x.size
val = 0
c = lamda/2
for xi in x:
val += (xi-mu)**2
return np.exp((N/2)*np.log(c/np.pi) - c*val)
mu_min = -1
mu_max = 3
mu_step = 0.05
l_min = 0.1
l_max = 5
l_step = 0.05
mu,lamda = np.mgrid[mu_min:mu_max:mu_step,l_min:l_max:l_step]
mf = mu.flatten()
lf = lamda.flatten()
locs = np.where(1/lf<(1-mf)**2)[0]
z = f(mu,lamda,[1])
fig = plt.figure(figsize=(8,6))
ax = fig.gca(projection='3d')
ax.plot_surface(mu, lamda, z, rstride=5, cstride=5, alpha=0.3)
cset = ax.contour(mu, lamda, z, zdir='z', offset=0.04, cmap=cm.coolwarm)
cset = ax.contour(mu, lamda, z, zdir='x', offset=mu_min, cmap=cm.coolwarm)
cset = ax.contour(mu, lamda, z, zdir='y', offset=l_max, cmap=cm.coolwarm)
ax.set_xlabel('$\mu$')
ax.set_xlim(mu_min, mu_max)
ax.set_ylabel('$\lambda$')
ax.set_ylim(l_min, l_max)
ax.set_zlabel('$f$')
plt.show()
def reverse_kl_int(q,p,beg,end,n_samples):
    """Calculate the Reverse Kullback-Leibler Divergence between two density functions q(x) and p(x).
    q - a density function defined using the gmm class above.
    p - a density function defined using the gmm class above.
    beg - the beginning of the interval over which to integrate.
    end - the end of the interval over which to integrate.
    n_samples - the number of sample points within the interval to use in the integral approximation.
    """
a = beg
b = end
samples = np.linspace(a,b,n_samples)
qs = q(samples)
return (qs*np.log(qs/p(samples))).sum()*((b-a)/n_samples)
p = gmm([1], np.array([[0,1]]))
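Since $p(x)$ is a standard normal, the reverse KL from a Gaussian $q$ has a closed form, $KL(N(\mu,\sigma^2)\,\|\,N(0,1)) = (\sigma^2+\mu^2-1)/2 - \log\sigma$, which can serve as a sanity check for the numerical integral. It also reproduces the value of 2 mentioned earlier for $\mu=2$, $\sigma^2=1$:

```python
import numpy as np

# Closed-form reverse KL between N(mu, s^2) and a standard normal N(0, 1):
# KL = (s^2 + mu^2 - 1)/2 - log(s)
def gauss_kl_to_std_normal(mu, s):
    return (s**2 + mu**2 - 1) / 2 - np.log(s)

kl_mu2 = gauss_kl_to_std_normal(2.0, 1.0)  # the mu=2, sigma^2=1 case discussed above
```

Comparing this value against `reverse_kl_int` over a wide enough interval gives a quick correctness check on the integration settings.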
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Specify a folder containing preprocessed HCP data.
Step2: Read data from file.
Step3: This data has some negative values, which is unrealistic. This is taken care of by the function replaceNegativeData().
Step4: Convert diffusion encoding to q-space features usable in prediction.
Step5: Compute the spatial coordinates corresponding to each measurement.
Step6: Compute a mask containing the brain. For this we first need to create an average b0-img, which is done by averaging all b0 images. Then, we pass this to createBrainMaskFromb0Data() which uses Dipy's median_otsu() function.
Step7: Get the (x, y, z)-indices of voxels belonging to the mask using numpy.nonzero().
Step8: Normalize by the b0-image, taking advantage of Numpy's broadcasting.
Step9: Sort columns so that bvals are monotonically increasing
Step10: Some measurements have b = 9950 or b = 10050. We want to treat them as part of the b=10000 shell.
Step11: From the perspective of the qFeatures, each row in S is an observation and each column is a feature. Assuming that the rows are independent (enough) we can thus estimate the mean signal as a function of qFeatures by averaging over the rows. The hypothesis is that the mean only depends on the q-magnitude, so we expect the empirical mean to be stepwise constant.
Step12: The hypothesis seems to hold nicely. Now we can compute the empirical covariance.
Step13: Investigate the covariance as a function of $\theta$ for each shell separately.
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
import numpy as np
import matplotlib.pyplot as plt
from diGP.preprocessing import (readHCP,
averageb0Volumes,
createBrainMaskFromb0Data,
replaceNegativeData)
from diGP.dataManipulations import (log_q_squared,
generateCoordinates)
%matplotlib inline
dataPath = 'C:\\Users\\sesjojen\\Documents\\Data\\HumanConnectomeProject\\mgh_1007\\diff\\preproc'
print(dataPath)
gtab, data, voxelSize = readHCP(dataPath)
print(gtab.info)
data = replaceNegativeData(data, gtab)
def qMagnitudeTransform(q):
return log_q_squared(q, c=1.)
qvecs = gtab.bvecs
qvals = gtab.qvals[:, np.newaxis]
qvalsTransformed = qMagnitudeTransform(qvals)
qFeatures = np.column_stack((qvalsTransformed, qvecs))
plt.hist(qvalsTransformed)
plt.xlabel('Transformed q-magnitude')
plt.show
spatialDimensions = data.shape[0:3]
coordinates = generateCoordinates(spatialDimensions, voxelSize)
coordinates.shape
b0 = averageb0Volumes(data, gtab)
mask = createBrainMaskFromb0Data(b0)
zSlice = 60
f, (ax1, ax2) = plt.subplots(1, 2, sharey=True)
ax1.imshow(b0[:, :, zSlice])
ax2.imshow(mask[:, :, zSlice])
maskIdx = np.nonzero(mask)
S = data[maskIdx[0], maskIdx[1], maskIdx[2], :]
S = S/b0[maskIdx[0], maskIdx[1], maskIdx[2], np.newaxis]
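The broadcasting trick used in the normalization above, dividing an `(n_voxels, n_measurements)` signal array by a per-voxel b0 value, can be illustrated with a tiny self-contained example (hypothetical numbers):

```python
import numpy as np

# Dividing a (n, m) array by a length-n vector requires adding a trailing
# axis so the shapes align as (n, 1) against (n, m).
signal = np.array([[2.0, 4.0], [3.0, 9.0]])   # 2 voxels, 2 measurements
b0_per_voxel = np.array([2.0, 3.0])
normalized = signal / b0_per_voxel[:, np.newaxis]
```

Without `[:, np.newaxis]`, numpy would try to align the vector with the measurement axis instead of the voxel axis.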
maxSignal = 1.5
percentExceedingMaxSignal = 100*np.sum(S>maxSignal)/np.prod(S.shape)
print('Replacing the top {} % values with {}.'.format(percentExceedingMaxSignal, maxSignal))
S[S > maxSignal] = maxSignal
bvalsIdxIncreasing = np.argsort(gtab.bvals)
bvals = gtab.bvals[bvalsIdxIncreasing]
bvecs = gtab.bvecs[bvalsIdxIncreasing, :]
S = S[:, bvalsIdxIncreasing]
qFeatures = qFeatures[bvalsIdxIncreasing, :]
bvals[np.logical_and(bvals >= 9950., bvals <= 10050.)] = 10000.
shells = np.unique(bvals)
print(shells)
shellIdx = [bvals == b for b in shells]
mu = np.mean(S, axis=0)
plt.plot(mu)
def computeEmpiricalCovariance(A):
mu = np.mean(A, axis=0)
Acentered = A - mu[np.newaxis, :]
N = A.shape[0]
empiricalCovariance = 1/(N-1)*np.dot(Acentered.T, Acentered)
return empiricalCovariance
def removeShellMeans(A, shellIdx):
A2 = np.zeros_like(A)
for ind in shellIdx:
shellMeans = np.mean(A[:, ind], axis=1)
A2[:, ind] = A[:, ind] - shellMeans[:, np.newaxis]
return A2
#R = np.corrcoef(removeShellMeans(S, shellIdx).T)
R = np.corrcoef(S.T)
plt.imshow(R, cmap='gray')
bvecsOuterProd = bvecs.dot(bvecs.T)
bvecsOuterProd[bvecsOuterProd > 1.] = 1.
bvecsOuterProd[bvecsOuterProd < -1.] = -1.
thetaMatrix = np.arccos(bvecsOuterProd)
thetaMatrix[thetaMatrix >= np.pi/2] -= np.pi;
plt.imshow(thetaMatrix, cmap='gray')
for i in np.arange(len(shells)):
ind = np.squeeze(np.nonzero(shellIdx[i]))
subMatrixIdx = np.ix_(ind, ind)
ax = plt.subplot(2, 3, i+1)
plt.scatter(thetaMatrix[subMatrixIdx]/np.pi, R[subMatrixIdx])
plt.xlabel('$\\theta/\pi$')
plt.ylabel('Correlation coefficient')
plt.title('b = {}'.format(shells[i]))
plt.xticks((-0.5, 0, 0.5))
plt.tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Polynomial regression, revisited
Step2: Let's use matplotlib to visualize what a polynomial regression looks like on the house data.
Step3: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
Step4: Let us revisit the 15th-order polynomial model using the 'sqft_living' input. Generate polynomial features up to degree 15 using polynomial_sframe() and fit a model with these features. When fitting the model, use an L2 penalty of 1e-5
Step5: Note
Step6: QUIZ QUESTION
Step7: Next, fit a 15th degree polynomial on set_1, set_2, set_3, and set_4, using 'sqft_living' to predict prices. Print the weights and make a plot of the resulting model.
Step8: The four curves should differ from one another a lot, as should the coefficients you learned.
Step9: These curves should vary a lot less, now that you applied a high degree of regularization.
Step10: Once the data is shuffled, we divide it into equal segments. Each segment should receive n/k elements, where n is the number of observations in the training set and k is the number of segments. Since the segment 0 starts at index 0 and contains n/k elements, it ends at index (n/k)-1. The segment 1 starts where the segment 0 left off, at index (n/k). With n/k elements, the segment 1 ends at index (n*2/k)-1. Continuing in this fashion, we deduce that the segment i starts at index (n*i/k) and ends at (n*(i+1)/k)-1.
Step11: Let us familiarize ourselves with array slicing with SFrame. To extract a continuous slice from an SFrame, use colon in square brackets. For instance, the following cell extracts rows 0 to 9 of train_valid_shuffled. Notice that the first index (0) is included in the slice but the last index (10) is omitted.
Step12: Now let us extract individual segments with array slicing. Consider the scenario where we group the houses in the train_valid_shuffled dataframe into k=10 segments of roughly equal size, with starting and ending indices computed as above.
Step13: To verify that we have the right elements extracted, run the following cell, which computes the average price of the fourth segment. When rounded to nearest whole number, the average should be $536,234.
Step14: After designating one of the k segments as the validation set, we train a model using the rest of the data. To choose the remainder, we slice (0
Step15: Extract the remainder of the data after excluding fourth segment (segment 3) and assign the subset to train4.
Step16: To verify that we have the right elements extracted, run the following cell, which computes the average price of the data with fourth segment excluded. When rounded to nearest whole number, the average should be $539,450.
Step17: Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) k, (ii) l2_penalty, (iii) dataframe, (iv) name of output column (e.g. price) and (v) list of feature names. The function returns the average validation error using k segments as validation sets.
Step18: Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following
Step19: QUIZ QUESTIONS
Step20: Once you found the best value for the L2 penalty using cross-validation, it is important to retrain a final model on all of the training data using this value of l2_penalty. This way, your final model will be trained on the entire dataset.
Step21: QUIZ QUESTION
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
def polynomial_sframe(feature, degree):
# assume that degree >= 1
# initialize the SFrame:
poly_sframe = graphlab.SFrame()
# and set poly_sframe['power_1'] equal to the passed feature
poly_sframe['power_1'] = feature
# first check if degree > 1
if degree > 1:
# then loop over the remaining degrees:
# range usually starts at 0 and stops at the endpoint-1. We want it to start at 2 and stop at degree
for power in range(2, degree+1):
# first we'll give the column a name:
name = 'power_' + str(power)
name_left = 'power_' + str(power-1)
# then assign poly_sframe[name] to the appropriate power of feature
poly_sframe[name] = feature * poly_sframe[name_left]
return poly_sframe
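For readers without graphlab, a hypothetical numpy analogue of `polynomial_sframe` shows the same cumulative-product construction, where column k holds `feature**k`:

```python
import numpy as np

# numpy analogue of polynomial_sframe (hypothetical helper, no graphlab needed):
# each new column is the previous column times the feature, so column k is feature**k.
def polynomial_array(feature, degree):
    feature = np.asarray(feature, dtype=float)
    cols = [feature]
    for _ in range(degree - 1):
        cols.append(cols[-1] * feature)
    return np.column_stack(cols)

powers = polynomial_array([1.0, 2.0, 3.0], 3)  # columns: x, x**2, x**3
```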
import matplotlib.pyplot as plt
%matplotlib inline
sales = graphlab.SFrame('kc_house_data.gl/')
sales = sales.sort(['sqft_living','price'])
l2_small_penalty = 1e-5
poly_sframe = polynomial_sframe(sales['sqft_living'], 15)
my_features = poly_sframe.column_names()
poly_sframe['price'] = sales['price']
model = graphlab.linear_regression.create(poly_sframe, 'price', features=my_features,
validation_set=None, l2_penalty=1e-5)
model.get('coefficients')
(semi_split1, semi_split2) = sales.random_split(.5,seed=0)
(set_1, set_2) = semi_split1.random_split(0.5, seed=0)
(set_3, set_4) = semi_split2.random_split(0.5, seed=0)
def fitAndPlot(data, degree, l2):
poly_data = polynomial_sframe(data['sqft_living'], degree)
my_features = poly_data.column_names() # get the name of the features
poly_data['price'] = data['price'] # add price to the data since it's the target
model = graphlab.linear_regression.create(poly_data, target = 'price', l2_penalty=l2, verbose=False,
features = my_features, validation_set = None)
plt.plot(poly_data['power_1'],poly_data['price'],'.',
poly_data['power_1'], model.predict(poly_data),'-')
model.get("coefficients").print_rows(num_rows = 16)
for data in [set_1, set_2, set_3, set_4]:
fitAndPlot(data, 15, l2_small_penalty)
# power_1 coefficients (value, stderr) learned on the four subsets; note the variation:
#   set_1: 1247.59037346 (7944.94),  set_2: -759.251889293 (7591.24)
#   set_3: 783.493762459 (nan),      set_4: 585.865810528 (2868.04)
for data in [set_1, set_2, set_3, set_4]:
fitAndPlot(data, 15, 1e5)
(train_valid, test) = sales.random_split(.9, seed=1)
train_valid_shuffled = graphlab.toolkits.cross_validation.shuffle(train_valid, random_seed=1)
n = len(train_valid_shuffled)
k = 10 # 10-fold cross-validation
for i in xrange(k):
start = (n*i)/k
end = (n*(i+1))/k-1
print i, (start, end)
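The index arithmetic described above can be checked directly; with n = 19396 it reproduces the boundaries 5818 and 7758 of the `train_valid_shuffled[5818:7758]` slice used shortly after:

```python
# Check that start = (n*i)//k and the exclusive end (n*(i+1))//k partition
# all n observations into k contiguous, non-overlapping segments.
n, k = 19396, 10
covered = []
for i in range(k):
    start = (n * i) // k
    end = (n * (i + 1)) // k   # exclusive end, as used when slicing
    covered.extend(range(start, end))
partition_ok = covered == list(range(n))
```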
train_valid_shuffled[0:10] # rows 0 to 9
validation4 = train_valid_shuffled[5818:7758]
print int(round(validation4['price'].mean(), 0))
n = len(train_valid_shuffled)
first_two = train_valid_shuffled[0:2]
last_two = train_valid_shuffled[n-2:n]
print first_two.append(last_two)
train4 = train_valid_shuffled[0:5818].append(train_valid_shuffled[7758:])
print int(round(train4['price'].mean(), 0))
def k_fold_cross_validation(k, l2_penalty, data, output_name, features_list):
n = len(data)
RSS_total = 0.
for i in xrange(k):
start = (n*i)/k
end = (n*(i+1))/k
validation = data[start:end]
train = data[:start].append(data[end:])
model = graphlab.linear_regression.create(train, target=output_name, l2_penalty=l2_penalty,
verbose=False, features=features_list, validation_set=None)
predictions = model.predict(validation)
errors = predictions - validation[output_name]
RSS = (errors * errors).sum()
RSS_total = RSS_total + RSS
return RSS_total / k
train_data = polynomial_sframe(train_valid_shuffled['sqft_living'], 15)
feature_list = train_data.column_names()
train_data['price'] = train_valid_shuffled['price']
import numpy as np
validation_error = float('inf')
l2_min = 0
errors = []
l2_list = np.logspace(1, 7, num=13)
for l2_penalty in l2_list:
new_error = k_fold_cross_validation(10, l2_penalty, train_data, 'price', feature_list)
errors.append(new_error)
if new_error < validation_error:
l2_min = l2_penalty
validation_error = new_error
print l2_min
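As a hedged sketch of what the L2 penalty does (graphlab's solver handles this internally, and this synthetic example is only illustrative), ridge regression has the closed form $\beta = (X^TX + \lambda I)^{-1}X^Ty$; with $\lambda=0$ it reduces to ordinary least squares, and larger $\lambda$ shrinks the coefficients:

```python
import numpy as np

# Closed-form ridge solution: beta = (X'X + l2*I)^{-1} X'y
def ridge_closed_form(X, y, l2):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + l2 * np.eye(d), X.T @ y)

rng = np.random.RandomState(0)
X = rng.randn(50, 3)
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true                      # noise-free synthetic target

beta_ols = ridge_closed_form(X, y, 0.0)     # recovers beta_true exactly
beta_ridge = ridge_closed_form(X, y, 10.0)  # shrunk toward zero
shrunk = np.linalg.norm(beta_ridge) < np.linalg.norm(beta_ols)
```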
# Plot the l2_penalty values in the x axis and the cross-validation error in the y axis.
# Using plt.xscale('log') will make your plot more intuitive.
plt.xscale('log')
plt.plot(np.logspace(1, 7, num=13), errors)
final_model = graphlab.linear_regression.create(train_data, target='price', l2_penalty=1000,
verbose=False, features=feature_list, validation_set=None)
test_data = polynomial_sframe(test['sqft_living'], 15)
test_data['price'] = test['price']
predictions = final_model.predict(test_data)
test_error = predictions - test['price']
RSS = (test_error * test_error).sum()
RSS
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction
Step2: The phase error $\phi_{out}[n]$ can be calculated integrating the error in the frequency. Then the power spectrum density is estimated with the welch periodogram.
Step3: Non-linear behavior of the $\Sigma-\Delta$ modulator
Step4: The spurs appearance and amplitude depend both of the input word and the initial state of the registers of the modulator [3]. It can be that initializing the first register of the first register of the SDM creates enough dithering to reduce the spurs The gen_mash accepts a fourth argument to initialize the mash registers. This argument is a tuple with a number of elements that equals the order of the SDM.
|
<ASSISTANT_TASK:>
Python Code:
# Compatibility with python 3.0
from __future__ import division
# Array fast operation
import numpy as np
from numpy import (ones, pi, log10)
# Signal processing routines
import scipy.signal as sig
# Plotting
import matplotlib.pyplot as plt
import matplotlib as mpl
%matplotlib inline
mpl.rcParams['font.size'] = 12
mpl.rcParams['figure.figsize'] = (9,6)
mpl.rcParams['axes.labelsize'] = 'large'
mpl.rcParams['axes.grid'] = True
# Load the modules for pnoise manimulation and SDM modules
import plldesigner.pnoise as pn
import plldesigner.sdmod as sdmod
# Parameters
NsdBits = 19
fref = 27.6e6
Tref = 1.0/fref
# Create a SDM sequency
fracnum = ((0.253232*2**NsdBits)*ones(100000)).astype(int)
sd, per = sdmod.gen_mash(3,NsdBits,fracnum)
plt.step(np.r_[0:200],sd[:200]);
plt.xlabel('steps') #x label
plt.ylabel('$\Delta N$') #y label
print("Mean value of the sequence: {:2.6f}\n".format(sd.mean()))
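`gen_mash` is used as a black box above. As a hedged illustration of the underlying idea, a first-order error-feedback accumulator (a single stage of the kind a MASH modulator cascades; bit widths chosen to match the example) shows why the long-run mean of the integer output equals the programmed fraction:

```python
import numpy as np

# Minimal first-order sigma-delta (error-feedback accumulator) sketch:
# each step adds the fractional word to an accumulator and emits the carry.
def first_order_sd(frac_word, n_bits, n_steps):
    modulus = 2 ** n_bits
    acc = 0
    out = np.empty(n_steps, dtype=int)
    for n in range(n_steps):
        acc += frac_word
        out[n] = acc // modulus   # carry bit: 0 or 1 each step
        acc %= modulus
    return out

seq = first_order_sd(int(0.253232 * 2**19), 19, 100000)
mean_err = abs(seq.mean() - 0.253232)   # long-run mean tracks the fraction
```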
# Phi_er at the output equals \sum{\DeltaN*fref}*Tref
phi_div = 2*pi*(sd-fracnum[0]/2**NsdBits).cumsum()
# Calculate the spectrum
npoints = 2**7
f, Phi2_div = sig.welch(phi_div, fref, window='blackman', nperseg=npoints)
rbw = fref/2/(len(f)-1)
ind = (f>1e5) & (f<1e9)
sim = pn.Pnoise(f[ind],10*log10(Phi2_div[ind]/2), label='simulated')
# calcualte the L teorical
theory = sdmod.L_mash_dB(3,fref)
theory.fm = f[ind]
theory.label = 'theoretical'
# Calculate the integral value of the two
print('''
Integrated phase noise
======================
Theory: {:2.3f} (rad,rms)
Sim : {:2.3f} (rad,rms)'''.format(theory.integrate(),sim.integrate()))
# plot both the spectrums
sim.plot('o-')
ax = theory.plot()
plt.legend(bbox_to_anchor=(1.05, 1), loc=2)
plt.title('RBW: {:2.3f} (KHz)'.format(rbw/1e3))
val = plt.ylim([-140,-40])
# Fraction number
fracnum = ((2/8*2**NsdBits)*ones(100000)).astype(int)
# Produce the SDM sequence
sd, per = sdmod.gen_mash(3,NsdBits,fracnum)
# Calculate the phase error and its PSD
phi_div = 2*pi*(sd-fracnum[0]/2**NsdBits).cumsum()
f, Phi2_div = sig.welch(phi_div, fref, window="blackman", nperseg=npoints)
sim = pn.Pnoise(f[ind],10*log10(Phi2_div[ind]/2), label='simulated')
# plot the sequence
plt.step(np.r_[0:200],sd[:200])
plt.xlabel('steps') #x label
plt.ylabel('$\Delta N$') #y label
print("Mean value of the sequence: {:2.5f}\n".format(sd.mean()))
sd, per = sdmod.gen_mash(3,NsdBits,fracnum)
#Plot the power spectrum density
sim.plot('o-')
theory.plot()
plt.legend(bbox_to_anchor=(1.05, 1), loc=2)
plt.title('RBW: {:2.3f} (KHz)'.format(rbw/1e3))
plt.ylim([-130,-40])
plt.grid(True)
sd_init, per = sdmod.gen_mash(3,NsdBits,fracnum,(1,0,0))
# Calculate the phase error and its PSD
phi_div = 2*pi*(sd_init-fracnum[0]/2**NsdBits).cumsum()
f, Phi2_div = sig.welch(phi_div, fref, window="blackman", nperseg=npoints)
sim = pn.Pnoise(f[ind],10*log10(Phi2_div[ind]/2), label='simulated')
# plot the sequence
pltpoints = np.r_[800:1000]
plt.step(pltpoints,sd_init[pltpoints],label='init=1')
plt.step(pltpoints,sd[pltpoints], label='init=0')
plt.xlabel('steps') #x label
plt.ylabel('$\Delta N$') #y label
plt.legend(loc='center left', bbox_to_anchor=(1.0, 0.5))
print("Mean value of the sequence: {:2.5f}\n".format(sd.mean()))
#Plot the power spectrum density
sim.plot('o-')
theory.plot()
plt.legend(bbox_to_anchor=(1.05, 1), loc=2)
plt.title('RBW: {:2.3f} (KHz)'.format(rbw/1e3))
plt.ylim([-130,-40])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: 3 Finding Similar Items
Step5: exercise 3.1.3
Step7: exercise
Step9: #exercise 3.2.3
Step11: Minhashing
Step13: minhash and Jaccard Similarity
Step14: Applying a random permutation to all rows of $M$ to obtain $\bar{M}$ is very time-consuming.
Step15: 2) Then re-sort the rows by row index to generate the new matrix $\bar{M}$
Step17: Let SIG(i,c) be the element of the signature matrix for the ith hash function and column c.
Step18: exercise 3.3
Step19: #exercise 3.3.4
Step20: if we consider two documents with 80% similarity, then their false negative is $1 - 0.99964... \approx 0.00035$, which means that only roughly 1 in 3000 pairs that are as high as 80% similar will fail to become a candidate pair.
Step21: 2) $h_i(x)$ should be statistically independent, because we need to estimate the probability by using the product rule for independent events.
Step22: Amplifying a Locality-Sensitive Family
Step24: As shown in the figure above,
Step25: 3.8 Applications of Locality-Sensitive Hashing
|
<ASSISTANT_TASK:>
Python Code:
#Exercise 3.1.1
logger.setLevel(logging.WARNING)
data = [
set([1, 2, 3, 4]),
set([2, 3, 5, 7]),
set([2, 4, 6]),
]
def Jaccard_similarity_calc(set_a, set_b):
r"""Calculate the Jaccard similarity of two sets: res = \frac{a \cap b}{a \cup b}."""
assert isinstance(set_a, set), '{} is not a set'.format(set_a)
assert isinstance(set_b, set), '{} is not a set'.format(set_b)
logging.debug('a:{}, b:{}'.format(set_a, set_b))
logging.debug('inter:{}, union:{}'.format(set_a.intersection(set_b), set_a.union(set_b)))
return len(set_a.intersection(set_b)) / len(set_a.union(set_b))
for comb in list(itertools.combinations(range(3),2)):
set_a, set_b = data[comb[0]], data[comb[1]]
print('a:{}, b:{}, SIM:{:.3f}'.format(set_a, set_b, Jaccard_similarity_calc(set_a, set_b)))
#Exercise 3.1.2
logger.setLevel(logging.WARNING)
data = [
[1, 1, 1, 2],
[1, 1, 2, 2, 3],
[1, 2, 3, 4],
]
def Jaccard_bag_similarity_calc(a, b):
"""Calculate the Jaccard bag similarity of two bags.
See page 76, movie ratings 3.
intersection = sum over elements of min(count in a, count in b)
union = len(a) + len(b)
res = intersection / union
"""
from collections import Counter
count_a = Counter(a)
count_b = Counter(b)
logging.debug('count_a:{}\n count_b:{}'.format(count_a,count_b))
inter = [min(count_a[x], count_b[x]) for x in count_a if x in count_b]
logging.debug('intersection:{}'.format(inter))
return sum(inter) / (len(a) + len(b))
for comb in list(itertools.combinations(range(3),2)):
set_a, set_b = data[comb[0]], data[comb[1]]
print('a:{}, b:{}, JbSIM:{:.3f}'.format(set_a, set_b, Jaccard_bag_similarity_calc(set_a, set_b)))
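As a quick worked check of the bag-similarity definition above (the bags here are illustrative, chosen to match the book's {a,a,a,b} and {a,a,b,b,c} example), the intersection counts each element min(times in a, times in b) and the union is simply the sum of the bag sizes:

```python
from collections import Counter

# Bag Jaccard similarity of two small illustrative bags.
a = ['a', 'a', 'a', 'b']
b = ['a', 'a', 'b', 'b', 'c']
ca, cb = Counter(a), Counter(b)
inter = sum(min(ca[x], cb[x]) for x in ca if x in cb)
print(inter, len(a) + len(b))  # 3 and 9, so the bag similarity is 1/3
```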
logger.setLevel(logging.WARNING)
def to_alphanumeric_string(s):
"""Remove any non-alphanumeric character in string s."""
import re
s = s.lower()
s = re.sub('\W', ' ', s)
logging.debug('after: del special char: {}'.format(s))
s = re.sub('\s+', ' ', s).strip()
logging.debug('after: del multiple whitespace: {}'.format(s))
return s
def k_shingle_extrac(document, k):
"""Extract all k-shingles from document."""
document = to_alphanumeric_string(document)
assert len(document) >= k, 'k should be less than the length of document'
k_shingles = []
while len(document) >= k:
shingle = document[0:k]
if shingle not in k_shingles:
k_shingles.append(shingle)
document = document[1:]
return k_shingles
print('Example 3.3: {}'.format(k_shingle_extrac('abcdabd', 2)))
print('other example: {}'.format(k_shingle_extrac("Hello, I'm So-So. ", 2)))
#exercise 3.2.1
data = "The most effective way to represent documents as sets, for the purpose of identifying lexically similar documents is to construct from the document the set of short strings that appear within it. "
print('first 10 3-shingles:{}'.format(k_shingle_extrac(data,3)[0:10]))
#exercise 3.2.2
logger.setLevel(logging.WARNING)
def stopwords_shingles_extract(document, k):
"""Extract the stop-word shingles.
A stop-word shingle is a stop word followed by the next k words.
"""
stop_words_list = ['the', 'you', 'to', 'as', 'for', 'of', 'is', 'that', 'it', 'from']
document = to_alphanumeric_string(document)
document = document.split()
logging.debug('split:{}'.format(document))
shingles = []
k = k + 1 #len(shingle) = 1 stop-word + k words
while(document):
try:
logging.debug('check:{}'.format(document[0]))
if document[0] in stop_words_list:
shingle = ' '.join(document[0:k])
logging.debug('hit: {}'.format(shingle))
if shingle not in shingles:
shingles.append(shingle)
except IndexError:
logging.debug('Index Error: no of char:{}, k: {}'.format(len(document), k))
k = len(document)
continue
document = document[1:]
return shingles
print('stop-shingles:{}'.format(stopwords_shingles_extract(data, 2)))
#Example 3.6
logger.setLevel(logging.WARNING)
my_database = ['a', 'b', 'c', 'd', 'e']
my_features_dict = {
'S1':['a', 'd'],
'S2':['c'],
'S3':['b', 'd', 'e'],
'S4':['a', 'c', 'd']
}
def matrix_representation_create(database, features_dict):
"""Create the matrix representation of one database."""
matrix_ = np.zeros((len(database), len(features_dict)), dtype=np.int)
matrix = pd.DataFrame(matrix_, index=database, columns=sorted(features_dict.keys()))
for feature_name, values in features_dict.iteritems():
for value in values:
matrix.loc[value, feature_name] = 1
return matrix
my_matrix = matrix_representation_create(my_database, my_features_dict)
my_matrix
#Example 3.7
logger.setLevel(logging.WARNING)
def minhash(matrix, row_orders):
"""Calculate the minhash values of matrix according to the element permutations in Fig 3.3."""
hash_fun_names = ['h{}'.format(i) for i in range(1, len(row_orders)+1)]
hash_table = pd.DataFrame(np.zeros((len(row_orders), matrix.shape[1])), index=hash_fun_names, columns=matrix.columns)
for row_order, hash_fun_name in zip(row_orders, hash_fun_names):
matrix_p = matrix.loc[row_order,:]
logging.debug('after permutation: \n{}'.format(matrix_p))
for c in matrix_p.columns:
first_one_index = next((i for i, x in enumerate(matrix_p.loc[:,c]) if x), None)
hash_table.loc[hash_fun_name, c] = row_order[first_one_index]
return hash_table
minhash(my_matrix, [['b', 'e', 'a', 'd', 'c']])
logger.setLevel(logging.WARNING)
my_matrix.index = range(my_matrix.shape[0])
print('matrix:\n{}\n'.format(my_matrix))
h_rows = [
[1, 2, 3, 4, 0],
[1, 4, 2, 0, 3]
]
def minhash_by_rows(matrix, row_orders):
"""Calculate the minhash values of matrix according to the row permutations in Fig 3.4."""
hash_fun_names = ['h{}'.format(i) for i in range(1, len(row_orders)+1)]
hash_table = pd.DataFrame(np.zeros((len(row_orders), matrix.shape[1])), index=hash_fun_names, columns=matrix.columns)
for row_order, hash_fun_name in zip(row_orders, hash_fun_names):
logging.debug('row_order:{}, h:{}'.format(row_order, hash_fun_name))
matrix_p = matrix.copy()
matrix_p.index = row_order #new rows permutation
matrix_p.sort_index(inplace=True) #adjust rows orders
logging.debug('after permutation: \n{}'.format(matrix_p))
for c in matrix_p.columns:
first_one_index = next((i for i, x in enumerate(matrix_p.loc[:,c]) if x), None)
hash_table.loc[hash_fun_name, c] = first_one_index
return hash_table
my_minhash_res = minhash_by_rows(my_matrix, h_rows)
print('minhash: 5dim -> 2dim \n\
true hash res:\n{}\n'.format(my_minhash_res))
print('Minhashing')
for comb in list(itertools.combinations(range(4),2)):
s_a, s_b = 'S{}'.format(comb[0]+1), 'S{}'.format(comb[1]+1)
print('{}-{}'.format(s_a, s_b)),
set_a, set_b = set(my_minhash_res.iloc[:,comb[0]]), set(my_minhash_res.iloc[:,comb[1]])
print('minhash:{:.3f}'.format(Jaccard_similarity_calc(set_a, set_b))),
print('true SIM:{:.3f}'.format(Jaccard_similarity_calc(set(my_features_dict[s_a]), set(my_features_dict[s_b]))))
def add_hash_func(a, b, c):
return lambda x: (a*x + b) % c
h_funcs = [
add_hash_func(1, 1, 5),
add_hash_func(3, 1, 5)
]
h_rows = []
for h_func in h_funcs:
h_rows.append(map(h_func, range(5)))
print('h_rows:{}'.format(h_rows))
#Fig 3.4
df_matrix = my_matrix
df_h_rows = pd.DataFrame(np.array(h_rows).T)
df_h_rows.columns = ['h{}'.format(x+1) for x in df_h_rows.columns]
print('Hash functions computed for the matrix:\n{}\n'.format(pd.concat([df_matrix, df_h_rows], axis=1)))
print('signature matrix\n(SIG):\n{}'.format(my_minhash_res))
#Example 3.8
logger.setLevel(logging.WARNING)
def minhash_signatures_calc(df_M, hash_funcs, nagging=False):
"""Compute minhash signatures using the method of Example 3.8."""
logging.debug('data matrix:\n{}\n'.format(df_M))
h = []
for hash_func in hash_funcs:
h.append(map(hash_func, range(df_M.shape[0])))
df_h = pd.DataFrame(np.array(h).T)
df_h.columns = ['h{}'.format(x+1) for x in df_h.columns]
logging.debug('hash matrix:\n{}\n'.format(df_h))
if nagging:
print('hash matrix:\n{}\n'.format(pd.concat([df_matrix, df_h], axis=1)))
df_signatures = pd.DataFrame(np.ones((df_h.shape[1], df_M.shape[1]))*np.inf, index=df_h.columns, columns=df_M.columns)
logging.debug('signatures matrix:\ninit\n{}\n'.format(df_signatures))
for r in df_M.index:
for c in df_h.columns:
r_1_loc = df_M.loc[r,:] == 1
logging.debug('r:{}, c:{}, 1 loc:\n{}\n'.format(r,c, r_1_loc))
sig_c = df_signatures.loc[c,:]
line_bigger_loc = sig_c > df_h.loc[r, c]
logging.debug('bigger row loc:\n{}\n'.format(line_bigger_loc))
sig_c[line_bigger_loc & r_1_loc] = df_h.loc[r, c]
logging.debug('modified:\n{}\n'.format(sig_c))
df_signatures.loc[c,:] = sig_c
if nagging:
print('row:{},\n signature matrix:\n{}\n'.format(r, df_signatures))
return df_signatures
minhash_signatures_calc(df_matrix, h_funcs, nagging=True)
#exercise 3.3.1
#generate 120 permutations
h_rows = list(itertools.permutations(range(5),5))
my_minhash_res = minhash_by_rows(my_matrix, h_rows)
for comb in list(itertools.combinations(range(4),2)):
s_a, s_b = 'S{}'.format(comb[0]+1), 'S{}'.format(comb[1]+1)
print('{}-{}'.format(s_a, s_b)),
#calc Jaccard similarity
print('true SIM:{:.3f}'.format(Jaccard_similarity_calc(set(my_features_dict[s_a]), set(my_features_dict[s_b])))),
#calc the fraction of the 120 permutations in which the value is same
print('fraction:{:.3f}'.format(sum(my_minhash_res.loc[:,s_a] == my_minhash_res.loc[:,s_b])/120))
#exercise 3.3.2
h_funcs[2:4] = [
add_hash_func(2, 4, 5),
add_hash_func(3, -1, 5)
]
minhash_signatures_calc(df_matrix, h_funcs, nagging=True)
#exercise 3.3.3
my_database = range(6)
my_features_dict = {
'S1':[2, 5],
'S2':[0, 1],
'S3':[3, 4],
'S4':[0, 2, 4]
}
df_fig_3_5 = matrix_representation_create(my_database, my_features_dict)
print('Fig 3.5:\n{}\n'.format(df_fig_3_5))
#(a)
h_funcs = [
add_hash_func(2, 1, 6),
add_hash_func(3, 2, 6),
add_hash_func(5, 2, 6)
]
df_matrix = df_fig_3_5
my_minhash_res = minhash_signatures_calc(df_matrix, h_funcs, nagging=True)
my_minhash_res
#(b) h_3 is a true permutation.
#(c)
for comb in list(itertools.combinations(range(4),2)):
s_a, s_b = 'S{}'.format(comb[0]+1), 'S{}'.format(comb[1]+1)
print('{}-{}'.format(s_a, s_b)),
set_a, set_b = set(my_minhash_res.iloc[:,comb[0]]), set(my_minhash_res.iloc[:,comb[1]])
print('minhash:{:.3f}'.format(Jaccard_similarity_calc(set_a, set_b))),
print('true SIM:{:.3f}'.format(Jaccard_similarity_calc(set(my_features_dict[s_a]), set(my_features_dict[s_b]))))
#Example 3.11
b = 20
r = 5
s = np.linspace(0, 1, 100)
def p(s, r, b):
return 1 - (1 - s**r)**b
plt.plot(s,p(s, r, b))
s = np.arange(0.2, 0.9, 0.1)
zip(s, p(s, r, b))
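A handy rule of thumb from the banding analysis: the similarity at which the S-curve rises most steeply, i.e. the effective threshold for becoming a candidate pair, is approximately $(1/b)^{1/r}$. For the b = 20, r = 5 configuration above this lands right where the curve climbs:

```python
# Approximate threshold of the b-band, r-row LSH S-curve: t ~ (1/b)**(1/r)
b, r = 20, 5
t = (1.0 / b) ** (1.0 / r)
print(round(t, 3))  # -> 0.549
```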
s = np.linspace(0, 1, 100)
plt.plot(1-s, p(s, r, b))
plt.xlabel('Jaccard distance')
plt.ylabel('Probility')
#minhash for Jaccard distance
d = 1 - s
d_1, d_2 = 0.4, 0.6
plt.plot(d[d<d_1], p(s, r, b)[d<d_1], d[d>d_2], p(s, r, b)[d>d_2])
plt.xlabel('d(x, y)')
plt.ylabel('Probility of being a candidate')
#AND-construciton and OR-constuction
dis = d
pos = p(s, r, b)
r = 4
AND_pos = (pos**r)
OR_pos = 1 - (1 - pos)**r
plt.plot(dis, pos, label='origin')
plt.plot(dis, AND_pos, label='AND-cons')
plt.plot(dis, OR_pos, label='OR-cons')
plt.legend(loc='upper right')
plt.xlabel('d(x, y)')
plt.ylabel('P[f(x) = f(y)]')
#example 3.21
x = np.arange(3, 7)
y = np.arange(4, 0, -1)
print('x:{}, y:{}'.format(x,y))
from scipy.spatial.distance import cosine
print('angle between x and y: {:.3f}\n'.format(np.rad2deg(np.arccos(1-cosine(x,y)))))
def sketch_calc(x, v):
sketch = np.dot(x, v)
sketch[sketch>0] = 1
sketch[sketch<=0] = -1
return sketch
v = [[1, -1, 1, 1], [-1, 1, -1, 1], [1, 1, -1, -1]]
v = np.array(v).T
print('v:{}'.format(v))
x_s = sketch_calc(x, v)
print('sketch of x: {}'.format(x_s))
y_s = sketch_calc(y, v)
print('sketch of y: {}'.format(y_s))
print('p = same elemets / total elements: {} / {}'. format(sum((x_s-y_s)==0), len(x_s)))
def sketch_angle_calc(x_s, y_s):
"""Calculate the angle once the match probability of two sketches is known.
p = (180 - d) / 180
"""
p = sum((x_s-y_s)==0) / len(x_s)
d = 180 - 180 * p
return d
print('angle(sketch): {}\n'.format(sketch_angle_calc(x_s, y_s)))
v = list(itertools.product([-1,1],repeat=len(x)))
v = np.array(v).T
print('v:{}'.format(v))
x_s = sketch_calc(x, v)
print('sketch of x: {}'.format(x_s))
y_s = sketch_calc(y, v)
print('sketch of y: {}'.format(y_s))
print('p = same elemets / total elements: {} / {}'. format(sum((x_s-y_s)==0), len(x_s)))
print('angle(sketch): {}'.format(sketch_angle_calc(x_s, y_s)))
#Example 3.22
p = np.array([0.2**6, (0.2**3)*(0.8**3)])
print('dissimilar:{:.4f}, similar:{:.4f}'.format(p[0], p[1]))
#1024 OR-cons
p = 1 - (1 - p)**1024
print('dissimilar:{:.4f}, similar:{:.4f}'.format(p[0], p[1]))
#then 2 AND-cons
p = p**2
print('dissimilar:{:.4f}, similar:{:.4f}'.format(p[0], p[1]))
print(1/p[0])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Last time, we trained our Neural Network, and it made suspiciously good predictions of your test score based on how many hours you slept, and how many hours you studied the night before. Before we celebrate and begin changing our sleep and study habits, we need some way to ensure that our model reflects the real world.
Step2: So it appears our model is overfitting, but how do we know for sure? A widely accepted method is to split our data into 2 portions
Step3: So we now know overfitting is a problem, but how do we fix it? One way is to throw more data at the problem. A simple rule of thumb, as presented by Yaser Abu-Mostafa in his excellent machine learning course available from Caltech, is that you should have at least 10 times as many examples as the degrees of freedom in your model. For us, since we have 9 weights that can change, we would need 90 observations, which we certainly don't have.
Step4: If we train our model now, we see that the fit is still good, but our model is no longer interested in “exactly” fitting our data. Further, our training and testing errors are much closer, and we’ve successfully reduced overfitting on this dataset. To further reduce overfitting, we could increase lambda.
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo('S4ZUwgesjS8')
%pylab inline
from partSix import *
NN = Neural_Network()
# X = (hours sleeping, hours studying), y = Score on test
X = np.array(([3,5], [5,1], [10,2], [6,1.5]), dtype=float)
y = np.array(([75], [82], [93], [70]), dtype=float)
#Plot projections of our new data:
fig = figure(0,(8,3))
subplot(1,2,1)
scatter(X[:,0], y)
grid(1)
xlabel('Hours Sleeping')
ylabel('Test Score')
subplot(1,2,2)
scatter(X[:,1], y)
grid(1)
xlabel('Hours Studying')
ylabel('Test Score')
#Normalize
X = X/np.amax(X, axis=0)
y = y/100 #Max test score is 100
#Train network with new data:
T = trainer(NN)
T.train(X,y)
#Plot cost during training:
plot(T.J)
grid(1)
xlabel('Iterations')
ylabel('Cost')
#Test network for various combinations of sleep/study:
hoursSleep = linspace(0, 10, 100)
hoursStudy = linspace(0, 5, 100)
#Normalize data (same way training data way normalized)
hoursSleepNorm = hoursSleep/10.
hoursStudyNorm = hoursStudy/5.
#Create 2-d versions of input for plotting
a, b = meshgrid(hoursSleepNorm, hoursStudyNorm)
#Join into a single input matrix:
allInputs = np.zeros((a.size, 2))
allInputs[:, 0] = a.ravel()
allInputs[:, 1] = b.ravel()
allOutputs = NN.forward(allInputs)
#Contour Plot:
yy = np.dot(hoursStudy.reshape(100,1), np.ones((1,100)))
xx = np.dot(hoursSleep.reshape(100,1), np.ones((1,100))).T
CS = contour(xx,yy,100*allOutputs.reshape(100, 100))
clabel(CS, inline=1, fontsize=10)
xlabel('Hours Sleep')
ylabel('Hours Study')
#3D plot:
#Uncomment to plot out-of-notebook (you'll be able to rotate)
#%matplotlib qt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.gca(projection='3d')
#Scatter training examples:
ax.scatter(10*X[:,0], 5*X[:,1], 100*y, c='k', alpha = 1, s=30)
surf = ax.plot_surface(xx, yy, 100*allOutputs.reshape(100, 100), \
cmap=cm.jet, alpha = 0.5)
ax.set_xlabel('Hours Sleep')
ax.set_ylabel('Hours Study')
ax.set_zlabel('Test Score')
#Training Data:
trainX = np.array(([3,5], [5,1], [10,2], [6,1.5]), dtype=float)
trainY = np.array(([75], [82], [93], [70]), dtype=float)
#Testing Data:
testX = np.array(([4, 5.5], [4.5,1], [9,2.5], [6, 2]), dtype=float)
testY = np.array(([70], [89], [85], [75]), dtype=float)
#Normalize (keep the training maxima so test data is scaled the same way):
trainMax = np.amax(trainX, axis=0)
trainX = trainX/trainMax
trainY = trainY/100 #Max test score is 100
#Normalize test data by the max of the training data:
testX = testX/trainMax
testY = testY/100 #Max test score is 100
##Need to modify trainer class a bit to check testing error during training:
class trainer(object):
def __init__(self, N):
#Make Local reference to network:
self.N = N
def callbackF(self, params):
self.N.setParams(params)
self.J.append(self.N.costFunction(self.X, self.y))
self.testJ.append(self.N.costFunction(self.testX, self.testY))
def costFunctionWrapper(self, params, X, y):
self.N.setParams(params)
cost = self.N.costFunction(X, y)
grad = self.N.computeGradients(X,y)
return cost, grad
def train(self, trainX, trainY, testX, testY):
#Make an internal variable for the callback function:
self.X = trainX
self.y = trainY
self.testX = testX
self.testY = testY
#Make empty list to store training costs:
self.J = []
self.testJ = []
params0 = self.N.getParams()
options = {'maxiter': 200, 'disp' : True}
_res = optimize.minimize(self.costFunctionWrapper, params0, jac=True, method='BFGS', \
args=(trainX, trainY), options=options, callback=self.callbackF)
self.N.setParams(_res.x)
self.optimizationResults = _res
#Train network with new data:
NN = Neural_Network()
T = trainer(NN)
T.train(trainX, trainY, testX, testY)
#Plot cost during training:
plot(T.J)
plot(T.testJ)
grid(1)
xlabel('Iterations')
ylabel('Cost')
#Regularization Parameter:
Lambda = 0.0001
#Need to make changes to costFunction and costFunctionPrim:
def costFunction(self, X, y):
#Compute cost for given X,y, use weights already stored in class.
self.yHat = self.forward(X)
#We don't want cost to increase with the number of examples, so normalize by dividing the error term by number of examples(X.shape[0])
J = 0.5*sum((y-self.yHat)**2)/X.shape[0] + (self.Lambda/2)*(sum(self.W1**2)+sum(self.W2**2))
return J
def costFunctionPrime(self, X, y):
#Compute derivative with respect to W and W2 for a given X and y:
self.yHat = self.forward(X)
delta3 = np.multiply(-(y-self.yHat), self.sigmoidPrime(self.z3))
#Add gradient of regularization term:
dJdW2 = np.dot(self.a2.T, delta3)/X.shape[0] + self.Lambda*self.W2
delta2 = np.dot(delta3, self.W2.T)*self.sigmoidPrime(self.z2)
#Add gradient of regularization term:
dJdW1 = np.dot(X.T, delta2)/X.shape[0] + self.Lambda*self.W1
return dJdW1, dJdW2
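As a small numeric sketch of the new penalty term (the weights below are hypothetical, not taken from the trained network): the ridge term (Lambda/2)·Σw² grows quadratically with weight magnitude, which is exactly what discourages the large weights that produce wiggly, overfit surfaces.

```python
# Illustrative flattened weights, just to show the size of the ridge penalty.
Lambda = 0.0001
W = [1.0, -2.0, 0.5, 3.0]
penalty = (Lambda / 2) * sum(w * w for w in W)
print(penalty)  # 0.00005 * 14.25 = 0.0007125
```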
#New complete class, with changes:
class Neural_Network(object):
def __init__(self, Lambda=0):
#Define Hyperparameters
self.inputLayerSize = 2
self.outputLayerSize = 1
self.hiddenLayerSize = 3
#Weights (parameters)
self.W1 = np.random.randn(self.inputLayerSize,self.hiddenLayerSize)
self.W2 = np.random.randn(self.hiddenLayerSize,self.outputLayerSize)
#Regularization Parameter:
self.Lambda = Lambda
def forward(self, X):
#Propogate inputs though network
self.z2 = np.dot(X, self.W1)
self.a2 = self.sigmoid(self.z2)
self.z3 = np.dot(self.a2, self.W2)
yHat = self.sigmoid(self.z3)
return yHat
def sigmoid(self, z):
#Apply sigmoid activation function to scalar, vector, or matrix
return 1/(1+np.exp(-z))
def sigmoidPrime(self,z):
#Gradient of sigmoid
return np.exp(-z)/((1+np.exp(-z))**2)
def costFunction(self, X, y):
#Compute cost for given X,y, use weights already stored in class.
self.yHat = self.forward(X)
J = 0.5*sum((y-self.yHat)**2)/X.shape[0] + (self.Lambda/2)*(np.sum(self.W1**2)+np.sum(self.W2**2))
return J
def costFunctionPrime(self, X, y):
#Compute derivative with respect to W and W2 for a given X and y:
self.yHat = self.forward(X)
delta3 = np.multiply(-(y-self.yHat), self.sigmoidPrime(self.z3))
#Add gradient of regularization term:
dJdW2 = np.dot(self.a2.T, delta3)/X.shape[0] + self.Lambda*self.W2
delta2 = np.dot(delta3, self.W2.T)*self.sigmoidPrime(self.z2)
#Add gradient of regularization term:
dJdW1 = np.dot(X.T, delta2)/X.shape[0] + self.Lambda*self.W1
return dJdW1, dJdW2
#Helper functions for interacting with other methods/classes
def getParams(self):
#Get W1 and W2 Rolled into vector:
params = np.concatenate((self.W1.ravel(), self.W2.ravel()))
return params
def setParams(self, params):
#Set W1 and W2 using single parameter vector:
W1_start = 0
W1_end = self.hiddenLayerSize*self.inputLayerSize
self.W1 = np.reshape(params[W1_start:W1_end], \
(self.inputLayerSize, self.hiddenLayerSize))
W2_end = W1_end + self.hiddenLayerSize*self.outputLayerSize
self.W2 = np.reshape(params[W1_end:W2_end], \
(self.hiddenLayerSize, self.outputLayerSize))
def computeGradients(self, X, y):
dJdW1, dJdW2 = self.costFunctionPrime(X, y)
return np.concatenate((dJdW1.ravel(), dJdW2.ravel()))
NN = Neural_Network(Lambda=0.0001)
#Make sure our gradients our correct after making changes:
numgrad = computeNumericalGradient(NN, X, y)
grad = NN.computeGradients(X,y)
#Should be less than 1e-8:
norm(grad-numgrad)/norm(grad+numgrad)
T = trainer(NN)
T.train(X,y,testX,testY)
plot(T.J)
plot(T.testJ)
grid(1)
xlabel('Iterations')
ylabel('Cost')
allOutputs = NN.forward(allInputs)
#Contour Plot:
yy = np.dot(hoursStudy.reshape(100,1), np.ones((1,100)))
xx = np.dot(hoursSleep.reshape(100,1), np.ones((1,100))).T
CS = contour(xx,yy,100*allOutputs.reshape(100, 100))
clabel(CS, inline=1, fontsize=10)
xlabel('Hours Sleep')
ylabel('Hours Study')
#3D plot:
##Uncomment to plot out-of-notebook (you'll be able to rotate)
#%matplotlib qt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.scatter(10*X[:,0], 5*X[:,1], 100*y, c='k', alpha = 1, s=30)
surf = ax.plot_surface(xx, yy, 100*allOutputs.reshape(100, 100), \
cmap=cm.jet, alpha = 0.5)
ax.set_xlabel('Hours Sleep')
ax.set_ylabel('Hours Study')
ax.set_zlabel('Test Score')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Manual Plotting in Matplotlib
Step2: Plotting chart types
Step3: Plotting with Pandas
|
<ASSISTANT_TASK:>
Python Code:
# Jupyter Directive
%matplotlib inline
# imports
import matplotlib
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
matplotlib.rcParams['figure.figsize'] = (20.0, 10.0) # larger figure size
# Matplotlib requires lists to plot
x = [1,2,3,4,5]
xsquared = [1,4,9,16,25]
plt.plot(x,xsquared) # default is a blue line
# this can be overridden. consult help(plt.plot) for details
#MATLAB MATPLOTLIB
plt.plot(x, xsquared, 'ro') # red dots
# we can manipulate the axis too, rather than auto scale. In this case we must call plt.show() to display the plot
plt.plot(x, xsquared, 'ro') # red dots
plt.axis([0,6,0,26]) # a list in the form [xmin, xmax, ymin, ymax]
plt.show()
# Labels are simple
plt.bar(x, xsquared)
plt.axis([0,6,0,26]) # a list in the form [xmin, xmax, ymin, ymax]
plt.xlabel("Value of X", fontsize=36)
plt.ylabel("Value of X Squared", fontsize=36)
plt.title("Plot of X versus X Squared", fontsize=48)
plt.grid(True)
plt.show()
plt.bar(x,xsquared)
plt.pie(x)
plt.scatter(x, xsquared)
scores = pd.read_csv("https://raw.githubusercontent.com/mafudge/datasets/master/exam-scores/exam-scores.csv")
scores.sample(10)
# Plotting with Pandas is a bit more expressive
scores.plot.scatter(x ='Completion_Time', y ='Student_Score' )
scores.corr()
## Labels too small, we can fall back to Matplot lib!
p = scores.plot.scatter(x='Completion_Time', y='Student_Score', fontsize=20)
p.set_xlabel('Completion Time', fontsize=20)
p.set_ylabel('Student Score', fontsize=20)
p
# Take the value counts of letter grade and create a data frame
letter_grades = pd.DataFrame( { 'Letter' : scores['Letter_Grade'].value_counts() } ).sort_index()
letter_grades.plot.bar(sort_columns=True)
letter_grades.plot.pie( y = 'Letter', fontsize = 20)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: NOTE
Step2: NOTE
|
<ASSISTANT_TASK:>
Python Code:
from pyesgf.logon import LogonManager
lm = LogonManager()
lm.logoff()
lm.is_logged_on()
OPENID = 'https://esgf-data.dkrz.de/esgf-idp/openid/USERNAME'
lm.logon_with_openid(openid=OPENID, password=None, bootstrap=True)
lm.is_logged_on()
myproxy_host = 'esgf-data.dkrz.de'
lm.logon(hostname=myproxy_host, interactive=True, bootstrap=True)
lm.is_logged_on()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, let's load the data. Write the path to your sms.csv file in the cell below
Step2: Execute the cell below to load the CSV data into a pandas data frame with the columns label and message.
Step3: Extracting text features
Step4: The resulting matrix has the same number of rows as the input SMS data, but it has thousands of columns - each one corresponding to a new feature
Step5: This might seem a bit confusing at first, but it makes sense when you think about it
Step6: As can be seen below, the vocabulary has the same number of items as there are columns in the matrix
Step7: Finally, we can examine the TF-IDF score for any combination of message and word by checking the corresponding entry in the matrix. For instance, to see the TF-IDF score for the word "only" in the first message in our data frame, we can write
Step8: If a word isn't in a message, its TF-IDF score will be zero
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
data_file = 'data/sms.csv'
sms = pd.read_csv(data_file, sep='\t', header=None, names=['label', 'message'])
sms.head()
tfidf = TfidfVectorizer()
matrix = tfidf.fit_transform(sms['message'])
matrix.shape
tfidf.vocabulary_
len(tfidf.vocabulary_)
row = 0
col = tfidf.vocabulary_['only']
print('Message: "%s"' % sms.loc[row, 'message'])
print('TF-IDF score: %f' % matrix[row, col])
row = 1
col = tfidf.vocabulary_['only']
print('Message: "%s"' % sms.loc[row, 'message'])
print('TF-IDF score: %f' % matrix[row, col])
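For intuition about where these scores come from: scikit-learn's smoothed idf is idf(t) = ln((1 + n)/(1 + df(t))) + 1. That formula is stated here as an assumption about the library's default settings, so check it against the version in use. A word present in every document bottoms out at idf = 1, while rarer words are weighted up:

```python
import math

# Smoothed idf (assumed scikit-learn default): ln((1 + n) / (1 + df)) + 1
def smooth_idf(n_docs, df):
    return math.log((1 + n_docs) / (1 + df)) + 1

print(smooth_idf(4, 4))            # word in every doc -> 1.0
print(round(smooth_idf(4, 1), 3))  # a rarer word is weighted higher
```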
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup paths and prepare raw data.
Step2: 1) Fit ICA model using the FastICA algorithm.
Step3: 2) identify bad components by analyzing latent sources.
Step4: 3) Assess component selection and unmixing quality.
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Denis Engemann <denis.engemann@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne.preprocessing import ICA
from mne.preprocessing import create_ecg_epochs, create_eog_epochs
from mne.datasets import sample
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, None, fir_design='firwin') # already lowpassed @ 40
# For the sake of example we annotate first 10 seconds of the recording as
# 'BAD'. This part of data is excluded from the ICA decomposition by default.
# To turn this behavior off, pass ``reject_by_annotation=False`` to
# :meth:`mne.preprocessing.ICA.fit`.
raw.set_annotations(mne.Annotations([0], [10], 'BAD'))
ica = ICA(n_components=0.95, method='fastica', random_state=0, max_iter=100)
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=False,
stim=False, exclude='bads')
# low iterations -> does not fully converge
ica.fit(raw, picks=picks, decim=3, reject=dict(mag=4e-12, grad=4000e-13))
# maximum number of components to reject
n_max_ecg, n_max_eog = 3, 1 # here we don't expect horizontal EOG components
title = 'Sources related to %s artifacts (red)'
# generate ECG epochs use detection via phase statistics
ecg_epochs = create_ecg_epochs(raw, tmin=-.5, tmax=.5, picks=picks)
ecg_inds, scores = ica.find_bads_ecg(ecg_epochs, method='ctps')
ica.plot_scores(scores, exclude=ecg_inds, title=title % 'ecg', labels='ecg')
show_picks = np.abs(scores).argsort()[::-1][:5]
ica.plot_sources(raw, show_picks, exclude=ecg_inds, title=title % 'ecg')
ica.plot_components(ecg_inds, title=title % 'ecg', colorbar=True)
ecg_inds = ecg_inds[:n_max_ecg]
ica.exclude += ecg_inds
# detect EOG by correlation
eog_inds, scores = ica.find_bads_eog(raw)
ica.plot_scores(scores, exclude=eog_inds, title=title % 'eog', labels='eog')
show_picks = np.abs(scores).argsort()[::-1][:5]
ica.plot_sources(raw, show_picks, exclude=eog_inds, title=title % 'eog')
ica.plot_components(eog_inds, title=title % 'eog', colorbar=True)
eog_inds = eog_inds[:n_max_eog]
ica.exclude += eog_inds
# estimate average artifact
ecg_evoked = ecg_epochs.average()
ica.plot_sources(ecg_evoked, exclude=ecg_inds) # plot ECG sources + selection
ica.plot_overlay(ecg_evoked, exclude=ecg_inds) # plot ECG cleaning
eog_evoked = create_eog_epochs(raw, tmin=-.5, tmax=.5, picks=picks).average()
ica.plot_sources(eog_evoked, exclude=eog_inds) # plot EOG sources + selection
ica.plot_overlay(eog_evoked, exclude=eog_inds) # plot EOG cleaning
# check the amplitudes do not change
ica.plot_overlay(raw) # EOG artifacts remain
# To save an ICA solution you can say:
# ica.save('my_ica.fif')
# You can later load the solution by saying:
# from mne.preprocessing import read_ica
# read_ica('my_ica.fif')
# Apply the solution to Raw, Epochs or Evoked like this:
# ica.apply(epochs)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: By "cleaned" I mean I've derived titles (e.g. "Mr.", "Mrs.", "Dr.", etc) from the passenger names, imputed the missing Age values using polynomial regression with grid-searched 10-fold cross-validation, filled in the 3 missing Embarked values with the mode, and removed all fields that could be considered an id for that individual.
Step2: One-hot encode categorical features
Step3: Now we have 17 features.
Step4: Null model
Step5: Thus, null accuracy of ~62% if always predict death.
Step6: Regression
Step7: KNN with and without pre-processing and various options
Step8: Note the default OptimizedPipeline parameters and those for its fit() method.
Step9: The report lists the steps in the pipeline, their optimized settings, the test/training accuracy (or L2 regression score), the grid search parameters, and the best parameters.
Step10: KNN with custom pipeline step grid parameters
Step11: Note how the n_neighbors parameter was 30 to 499 instead of 1 to 30.
Step12: Note how the estimator__weights parameter isn't set for the KNN estimator.
Step13: Let's compare the pipelines so far
Step14: Binary scaling fed into a KNN classifier appears to have the best training score.
Step15: Switching the range for min_max scaling boosted it to rank 1 for pipeline training scores
Step16: The range of 1 to 2 for the MinMaxScaler appeared to be the best.
Step17: Feature selection and KNN did had a mid-level training score
Step18: SelectKBest with f_classif chose 5 features as the best to use in the model.
Step19: Thus, Pclass 3, being male, and the titles Miss, Mr, and Mrs were considered the most important features by SelectKBest using f_classif.
Step20: KNN using feature interactions
Step21: The optimal number of interactions (number of features multiplied by each other at once) was found to be 1.
Step22: KNN with pre-processing transforms
Step23: We can look at the transformed data after PCA normally
Step24: This is currently a very manual process and would be difficult with more and more processing steps. I'm thinking of automating this with a class containing all optimized pipelines in the future.
Step25: This t-SNE step takes longer than most in pyplearnr unfortunately. It also resulted in the worst score. I'll try to optimize this in the future.
Step26: This is a good way to speed up computations and give you an idea as to how long a particular pipeline takes to train.
Step27: The best parameter combination, of those attempted by RandomizedSearchCV, was 12 nearest neighbors with the 'uniform' weight.
Step28: Logistic regression, random forest, multilayer perceptron, and adaboost outperform KNN, even with all of the attempted pre-processing so far.
Step29: Out of 240 different possible pipelines, the best pipeline, with a test score of 0.899, appears to be min-max scaling between 0 and 1 funneled into PCA and then into a multilayer perceptron with one hidden layer of size 5.
Step30: Looks like I died!
Step31: I would have a 0.77% chance of survival.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.read_pickle('trimmed_titanic_data.pkl')
df.info()
simulation_df = df.copy()
categorical_features = ['Survived','Pclass','Sex','Embarked','Title']
for feature in categorical_features:
simulation_df[feature] = simulation_df[feature].astype('category')
simulation_df.info()
simulation_df = pd.get_dummies(simulation_df,drop_first=True)
simulation_df.info()
# Set output feature
output_feature = 'Survived_1'
# Get all column names
column_names = list(simulation_df.columns)
# Get input features
input_features = [x for x in column_names if x != output_feature]
# Split into features and responses
X = simulation_df[input_features].copy()
y = simulation_df[output_feature].copy()
simulation_df['Survived_1'].value_counts().values/float(simulation_df['Survived_1'].value_counts().values.sum())
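The null accuracy above (the score obtained by always predicting the majority class) can be sketched in isolation; the toy labels below are a hypothetical stand-in for the Survived column, not the real data:

```python
# Null accuracy: accuracy of the constant majority-class prediction.
import numpy as np

y = np.array([0, 0, 0, 1, 1])           # 3 deaths, 2 survivals
majority = np.bincount(y).argmax()      # most frequent class
null_accuracy = np.mean(y == majority)  # score of the constant guess
```

Any model worth keeping should beat this constant-prediction baseline.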
%matplotlib inline
%load_ext autoreload
import sys
import os
sys.path.append("./pyplearnr")
optimized_pipelines = {}
%%time
%autoreload
import numpy as np
import pyplearnr as ppl
reload(ppl)
kfcv = ppl.NestedKFoldCrossValidation(outer_loop_fold_count=3,
inner_loop_fold_count=3)
pipeline_schematic = [
{'scaler': {
'none': {},
'standard': {},
'min_max': {},
'normal': {}
}},
{'estimator': {
'knn': {
'n_neighbors': range(1,31),
'weights': ['uniform','distance']
}}}
]
pipelines = ppl.PipelineBuilder().build_pipeline_bundle(pipeline_schematic)
print 'Number of pipelines: %d'%(len(pipelines)), '\n'
kfcv.fit(X.values, y.values, pipelines, scoring_metric='auc')
kfcv.fit(X.values, y.values, pipelines,
best_inner_fold_pipeline_inds = {0:59})
kfcv.fit(X.values, y.values, pipelines, best_outer_fold_pipeline=59)
%autoreload
kfcv.plot_best_pipeline_scores(number_size=10,markersize=8, figsize=(9,3), box_line_thickness=1)
%autoreload
kfcv.plot_contest(color_by='scaler', markersize=3)
%autoreload
kfcv.fit(X.values, y.values, pipelines,
best_inner_fold_pipeline_inds = {1:6})
kfcv.fit(X.values, y.values, pipelines, best_outer_fold_pipeline=8)
%autoreload
%matplotlib inline
kfcv.plot_best_pipeline_scores(number_size=18, markersize=14)
%autoreload
%matplotlib inline
kfcv.plot_contest(number_size=8, markersize=7, all_folds=True, figsize=(10,40),
color_by='scaler', box_line_thickness=2)
kfcv.pipelines[29]
# cmap = pylab.cm.viridis
# print cmap.__doc__
worst_pipelines = [85, 67, 65, 84, 69, 83]
for pipeline_ind in worst_pipelines:
print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
worst_pipelines = [86, 75, 84, 79, 85, 83]
for pipeline_ind in worst_pipelines:
print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
worst_pipelines = [77, 61, 81, 83, 74, 82, 84]
for pipeline_ind in worst_pipelines:
print pipeline_ind, kfcv.pipelines[pipeline_ind]
best_pipelines = [89, 93, 2, 91, 4, 3]
for pipeline_ind in best_pipelines:
print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
best_pipelines = [91, 93, 5, 43, 4, 100]
for pipeline_ind in best_pipelines:
print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
best_pipelines = [5, 4, 91, 3, 55, 49, 2]
for pipeline_ind in best_pipelines:
print pipeline_ind, kfcv.pipelines[pipeline_ind]
%%time
%autoreload
import numpy as np
import pyplearnr as ppl
reload(ppl)
kfcv = ppl.NestedKFoldCrossValidation(outer_loop_fold_count=3,
inner_loop_fold_count=3)
pipeline_bundle_schematic = [
{'scaler': {
'standard': {},
'normal': {},
'min_max': {},
'binary': {}
}},
{'estimator': {
'knn': {
'n_neighbors': range(1,30)
},
# 'svm': {
# 'C': np.array([1.00000000e+00])
# }
}}
]
pipelines = ppl.PipelineBuilder().build_pipeline_bundle(pipeline_bundle_schematic)
print 'Number of pipelines: %d'%(len(pipelines)), '\n'
kfcv.fit(X.values, y.values, pipelines, scoring_metric='accuracy')
kfcv.fit(X.values, y.values, pipelines,
best_inner_fold_pipeline_inds = {1:24, 2:55})
kfcv.fit(X.values, y.values, pipelines, best_outer_fold_pipeline=55)
%autoreload
%matplotlib inline
kfcv.plot_best_pipeline_scores()
%autoreload
%matplotlib inline
kfcv.plot_contest()
best_pipelines = [91, 44, 89, 45, 3, 90]
for pipeline_ind in best_pipelines:
print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
best_pipelines = [21, 18, 40, 38, 36, 35, 24]
for pipeline_ind in best_pipelines:
print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
best_pipelines = [55, 39, 41, 42, 47, 40, 114, 110]
for pipeline_ind in best_pipelines:
print pipeline_ind, kfcv.pipelines[pipeline_ind]
%autoreload
kfcv.print_report()
kfcv.fit(X.values, y.values, pipelines,
best_inner_fold_pipeline_inds = {2:18})
kfcv.fit(X.values, y.values, pipelines, best_outer_fold_pipeline=18)
%autoreload
kfcv.print_report()
best_inner_fold_pipelines = {
2: 9
}
kfcv.fit(X.values, y.values, pipelines,
best_inner_fold_pipeline_inds = best_inner_fold_pipelines)
best_outer_fold_pipeline = 45
kfcv.fit(X.values, y.values, pipelines,
best_outer_fold_pipeline = best_outer_fold_pipeline)
%%time
%autoreload
import numpy as np
import pyplearnr as ppl
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest
reload(ppl)
data = pd.read_csv('Advertising.csv',index_col=0)
# Start with all features
feature_cols = ['TV','Radio','Newspaper']
# Split data
X = data[feature_cols]
y = data.Sales
kfcv = ppl.NestedKFoldCrossValidation(outer_loop_fold_count=5,
inner_loop_fold_count=3)
pipeline_bundle_schematic = [
{'scaler': {
'none': {},
'standard': {}
}},
{'pre_estimator': {
'polynomial_features': {
'degree': range(1,5)
}
}},
{'estimator': {
'linear_regression': {},
}}
]
pipelines = ppl.PipelineBuilder().build_pipeline_bundle(pipeline_bundle_schematic)
print 'Number of pipelines: %d'%(len(pipelines)), '\n'
kfcv.fit(X.values, y.values, pipelines, scoring_metric='rmse')
kfcv.fit(X.values, y.values, pipelines, scoring_metric='rmse', best_outer_fold_pipeline=1)
%autoreload
kfcv.print_report()
%autoreload
kfcv.print_report()
%%time
%autoreload
import itertools
estimators = ['knn','logistic_regression','svm',
'multilayer_perceptron','random_forest','adaboost']
feature_interaction_options = [True,False]
feature_selection_options = [None,'select_k_best']
scaling_options = [None,'standard','normal','min_max','binary']
transformations = [None,'pca']
pipeline_steps = [feature_interaction_options,feature_selection_options,scaling_options,
transformations,estimators]
pipeline_options = list(itertools.product(*pipeline_steps))
optimized_pipelines = {}
for pipeline_step_combo in pipeline_options:
model_name = []
feature_interactions = pipeline_step_combo[0]
if feature_interactions:
model_name.append('interactions')
feature_selection_type = pipeline_step_combo[1]
if feature_selection_type:
model_name.append('select')
scale_type = pipeline_step_combo[2]
if scale_type:
model_name.append(scale_type)
transform_type = pipeline_step_combo[3]
if transform_type:
model_name.append(transform_type)
estimator = pipeline_step_combo[4]
model_name.append(estimator)
model_name = '_'.join(model_name)
print model_name
# Set pipeline keyword arguments
optimized_pipeline_kwargs = {
'feature_selection_type': feature_selection_type,
'scale_type': scale_type,
'transform_type': transform_type
}
# Initialize pipeline
optimized_pipeline = ppl.PipelineOptimization(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'cv': 10,
'num_parameter_combos': None,
'n_jobs': -1,
'random_state': None,
'suppress_output': True,
'use_default_param_dist': True,
'param_dist': None,
'test_size': 0.2 # 20% saved as test set
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save optimized pipeline
optimized_pipelines[model_name] = optimized_pipeline
%%time
estimator = 'knn'
# Set pipeline keyword arguments
optimized_pipeline_kwargs = {
'feature_selection_type': None,
'scale_type': None,
'transform_type': None
}
# Initialize pipeline
optimized_pipeline = ppl.PipelineOptimization(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'cv': 10,
'num_parameter_combos': None,
'n_jobs': -1,
'random_state': 6,
'suppress_output': True,
'use_default_param_dist': True,
'param_dist': None,
'test_size': 0.2 # 20% saved as test set
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[estimator] = optimized_pipeline
print optimized_pipeline
pre_processing_grid_parameters,classifier_grid_parameters,regression_grid_parameters = \
optimized_pipeline.get_default_pipeline_step_parameters(X.shape[0])
classifier_grid_parameters['knn']
%%time
estimator_name = 'knn'
model_name = 'custom_override_%s'%(estimator_name)
# Set custom parameters
param_dist = {
'estimator__n_neighbors': range(30,500)
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'param_dist': param_dist,
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
%%time
model_name = 'from_scratch_%s'%(estimator_name)
# Set custom parameters
param_dist = {
'estimator__n_neighbors': range(10,30)
}
estimator = 'knn'
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': False,
'param_dist': param_dist,
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
%%time
estimator = 'knn'
scaling_options = ['standard','normal','min_max','binary']
for scaling_option in scaling_options:
model_name = '%s_%s'%(scaling_option,estimator_name)
optimized_pipeline_kwargs = {
'scale_type': scaling_option
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
%%time
reload(ppl)
estimator = 'knn'
scaling_options = ['min_max','binary']
param_dists = {
'min_max': {
'scaler__feature_range': [(1,2),(3,4)]
},
'binary': {
'scaler__threshold': np.arange(0,1,0.1)
}
}
for scaling_option in scaling_options:
model_name = 'custom_%s_%s'%(scaling_option,estimator_name)
optimized_pipeline_kwargs = {
'scale_type': scaling_option
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True,
'param_dist': param_dists[scaling_option]
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
print optimized_pipelines['custom_min_max_knn']
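As a sketch of what the `scaler__feature_range` grid above is searching over, MinMaxScaler maps each feature linearly onto the requested interval (toy data here, not the Titanic features):

```python
# MinMaxScaler rescales each column so its min/max hit the chosen
# feature_range endpoints; (1, 2) is one of the grid values above.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.array([[0.0], [5.0], [10.0]])
scaled = MinMaxScaler(feature_range=(1, 2)).fit_transform(X)
```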
%%time
reload(ppl)
estimator = 'knn'
model_name = 'select_%s'%(estimator_name)
optimized_pipeline_kwargs = {
'feature_selection_type': 'select_k_best'
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
print optimized_pipelines['select_knn']
feature_selection_mask = optimized_pipelines['select_knn'].pipeline.named_steps['feature_selection'].get_support()
print np.array(X.columns)[feature_selection_mask]
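The SelectKBest/f_classif step can be illustrated in isolation; in this synthetic sketch only the first column is informative, so it should be the one kept:

```python
# Univariate feature selection: f_classif scores each feature's
# relationship with the labels, and SelectKBest keeps the top k.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.RandomState(0)
X = rng.rand(100, 4)
y = (X[:, 0] > 0.5).astype(int)        # labels depend on column 0 only
selector = SelectKBest(f_classif, k=1).fit(X, y)
mask = selector.get_support()          # boolean mask of kept features
```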
%%time
reload(ppl)
estimator = 'knn'
model_name = 'custom_select_%s'%(estimator_name)
optimized_pipeline_kwargs = {
'feature_selection_type': 'select_k_best'
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
param_dist = {
'feature_selection__k': [5,7,8]
}
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True,
'param_dist': param_dist
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
print optimized_pipelines['custom_select_knn']
%%time
reload(ppl)
estimator = 'knn'
model_name = 'interaction_%s'%(estimator_name)
optimized_pipeline_kwargs = {
'feature_interactions': True
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
print optimized_pipelines['interaction_knn']
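Behind the feature_interactions option, scikit-learn's PolynomialFeatures generates the product terms; a minimal sketch assuming degree 2 and no bias column:

```python
# With two inputs and degree=2 the expansion is
# [x1, x2, x1^2, x1*x2, x2^2] -- the x1*x2 column is the interaction.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X = np.array([[2.0, 3.0]])
poly = PolynomialFeatures(degree=2, include_bias=False)
expanded = poly.fit_transform(X)
```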
%%time
reload(ppl)
estimator = 'knn'
model_name = 'custom_interaction_%s'%(estimator_name)
optimized_pipeline_kwargs = {
'feature_interactions': True
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
param_dist = {
'feature_interactions__degree': [2,3,4]
}
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True,
'param_dist': param_dist
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
print optimized_pipelines['custom_interaction_knn']
%%time
reload(ppl)
estimator = 'knn'
model_name = 'pca_%s'%(estimator_name)
optimized_pipeline_kwargs = {
'transform_type': 'pca'
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
print optimized_pipelines['pca_knn']
transformed_data = optimized_pipelines['pca_knn'].pipeline.named_steps['transform'].transform(X.values)
column_names = ['PCA_%d'%(feature_ind+1) for feature_ind in range(transformed_data.shape[1])]
pca_df = pd.DataFrame(transformed_data,columns=column_names)
pca_df.plot(x='PCA_1',y='PCA_2',style='ro')
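A quick way to judge how many components matter after a PCA step is the explained variance ratio; a sketch on synthetic data with one perfectly redundant column:

```python
# With a redundant third column, PCA should pack essentially all of
# the variance into the first two components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.rand(200, 3)
X[:, 2] = 2.0 * X[:, 0]                # redundant feature
ratios = PCA().fit(X).explained_variance_ratio_
```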
%%time
reload(ppl)
estimator = 'knn'
model_name = 't-sne_%s'%(estimator_name)
optimized_pipeline_kwargs = {
'transform_type': 't-sne'
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
%%time
reload(ppl)
estimator = 'knn'
model_name = 'less_combos_%s'%(estimator_name)
optimized_pipeline_kwargs = {}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True,
'num_parameter_combos': 5
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
print optimized_pipelines['less_combos_knn']
%%time
classifiers = ['knn','logistic_regression','svm',
'multilayer_perceptron','random_forest','adaboost']
for estimator in classifiers:
# Set pipeline keyword arguments
optimized_pipeline_kwargs = {}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'suppress_output': True,
'use_default_param_dist': True
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[estimator] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
%%time
import itertools
estimators = ['knn','logistic_regression','svm',
'multilayer_perceptron','random_forest','adaboost']
feature_interaction_options = [True,False]
feature_selection_options = [None,'select_k_best']
scaling_options = [None,'standard','normal','min_max','binary']
transformations = [None,'pca']
pipeline_steps = [feature_interaction_options,feature_selection_options,scaling_options,
transformations,estimators]
pipeline_options = list(itertools.product(*pipeline_steps))
optimized_pipelines = {}
for pipeline_step_combo in pipeline_options:
model_name = []
feature_interactions = pipeline_step_combo[0]
if feature_interactions:
model_name.append('interactions')
feature_selection_type = pipeline_step_combo[1]
if feature_selection_type:
model_name.append('select')
scale_type = pipeline_step_combo[2]
if scale_type:
model_name.append(scale_type)
transform_type = pipeline_step_combo[3]
if transform_type:
model_name.append(transform_type)
estimator = pipeline_step_combo[4]
model_name.append(estimator)
model_name = '_'.join(model_name)
print model_name
# Set pipeline keyword arguments
optimized_pipeline_kwargs = {
'feature_selection_type': feature_selection_type,
'scale_type': scale_type,
'feature_interactions': feature_interactions,
'transform_type': transform_type
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'cv': 10,
'num_parameter_combos': None,
'n_jobs': -1,
'random_state': None,
'suppress_output': True,
'use_default_param_dist': True,
'param_dist': None,
'test_size': 0.2 # 20% saved as test set
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save optimized pipeline
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black',figsize=(10,40))
print optimized_pipelines['min_max_pca_multilayer_perceptron']
len(optimized_pipelines.keys())
personal_stats = [32,1,0,df[df['Pclass']==1]['Fare'].median(),0,0,1,1,0,1,0,0,0,0,0,0]
zip(personal_stats,X.columns)
optimized_pipelines['min_max_pca_multilayer_perceptron'].pipeline.predict([personal_stats])
optimized_pipelines['min_max_pca_multilayer_perceptron'].pipeline.predict_proba([personal_stats])
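Note that scikit-learn estimators expect a 2-D array of shape (n_samples, n_features), so a single observation like personal_stats should be wrapped or reshaped before prediction; a self-contained sketch:

```python
# Reshaping one observation to (1, n_features) before predict().
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
single = np.array([2.9]).reshape(1, -1)   # shape (1, 1)
pred = clf.predict(single)
```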
%%time
%matplotlib inline
import pyplearnr as ppl
repeated_k_folds = []
for i in range(100):
# Alert user of step number
print('Step %d/%d'%(i+1,100))
# Set custom parameters
param_dist = {}
estimator = 'knn'
# Initialize pipeline
optimized_pipeline = ppl.PipelineOptimization(estimator)
# Set pipeline fitting parameters
fit_kwargs = {
'use_default_param_dist': True,
'param_dist': param_dist,
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
repeated_k_folds.append(optimized_pipeline)
data = {
'train scores': [pipeline_optimization.train_score_
for pipeline_optimization in repeated_k_folds],
'test scores': [pipeline_optimization.test_score_
for pipeline_optimization in repeated_k_folds],
}
repeated_kfcv_df = pd.DataFrame(data)
repeated_kfcv_df['test scores'].plot(kind='hist',bins=8,color='grey')
repeated_kfcv_df['train scores'].plot(kind='hist',bins=8,color='white')
%%time
reload(ppl)
%matplotlib inline
import pyplearnr as ppl
repeated_five_folds = []
for i in range(100):
# Alert user of step number
print('Step %d/%d'%(i+1,100))
# Set custom parameters
param_dist = {}
estimator = 'knn'
# Initialize pipeline
optimized_pipeline = ppl.PipelineOptimization(estimator)
# Set pipeline fitting parameters
fit_kwargs = {
'use_default_param_dist': True,
'param_dist': param_dist,
'cv': 5,
'suppress_output': True
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
repeated_five_folds.append(optimized_pipeline)
data = {
'train scores': [pipeline_optimization.train_score_
for pipeline_optimization in repeated_five_folds],
'test scores': [pipeline_optimization.test_score_
for pipeline_optimization in repeated_five_folds],
}
repeated_fivefcv_df = pd.DataFrame(data)
repeated_kfcv_df['test scores'].plot(kind='hist',bins=8,color='grey')
repeated_fivefcv_df['test scores'].plot(kind='hist',bins=8,color='red')
repeated_kfcv_df['train scores'].plot(kind='hist',bins=8,color='white')
repeated_fivefcv_df['train scores'].plot(kind='hist',bins=8,color='blue')
repeated_fivefcv_df['test scores'].plot(kind='hist',bins=8,color='red')
repeated_kfcv_df['test scores'].plot(kind='hist',bins=8,color='grey')
repeated_kfcv_df['train scores'].plot(kind='hist',bins=8,color='white')
repeated_fivefcv_df['train scores'].plot(kind='hist',bins=8,color='blue')
import sys
sys.path.append('/Users/cmshymansky/documents/code/library/pairplotr')
import pairplotr as ppr
repeated_fivefcv_df.info()
reload(ppr)
ppr.compare_data(repeated_fivefcv_df,bins=8,marker_size=10,plot_medians=True)
reload(ppr)
ppr.compare_data(repeated_fivefcv_df,bins=8,marker_size=10,plot_medians=True)
repeated_fivefcv_df['train scores'].describe()
from matplotlib import pylab as plt
ax = plt.subplot(111)
print ax
# repeated_fivefcv_df.plot(ax=ax,x='train scores',y='test scores',style='bo')
repeated_kfcv_df.plot(ax=ax,x='train scores',y='test scores',style='ro')
print dir(repeated_k_folds[0].grid_search)
all_scores = []
for x in repeated_k_folds[0].grid_search.grid_scores_:
all_scores.extend(list(x.cv_validation_scores))
print max(x.cv_validation_scores),x.best_score_
print repeated_k_folds[0].grid_search.cv_results_
pd.Series(all_scores).plot(kind='hist',color='grey',bins=8)
def get_bootstrapped_datasets(orig_data_set, num_samples=100, points_per_sample=50):
import random
data_sets = []
for i in range(num_samples):
sample = [random.choice(orig_data_set) for x in range(points_per_sample)]
data_sets.append(sample)
return data_sets
def cdf(aList, x):
''' 'aList' must be sorted (low to high) '''
returnVal=0
for v in aList:
if v<=x:
returnVal+=1
return returnVal/float(len(aList))
def inv_cdf(aList, percentile):
''' 'percentile' is between 0 and 1.
'aList' must be sorted (low to high)
'''
returnVal = 0
for i in xrange(len(aList)):
if cdf(aList, aList[i])>=percentile:
returnVal = aList[i]
break
return returnVal
def conf_interval(data_set, alpha=0.05):
data_set.sort()
low_end = inv_cdf(data_set, alpha)
high_end = inv_cdf(data_set, 1-alpha)
return (low_end, high_end)
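A usage sketch of the percentile helpers defined above; the functions are re-stated compactly here so the example runs on its own:

```python
# Empirical CDF and its inverse on a sorted list; with the integers
# 1..100, the 5th and 95th percentiles land on 5 and 95.
def cdf(a, x):
    return sum(1 for v in a if v <= x) / float(len(a))

def inv_cdf(a, p):
    for v in a:                        # 'a' must be sorted low-to-high
        if cdf(a, v) >= p:
            return v
    return a[-1]

data = list(range(1, 101))
low, high = inv_cdf(data, 0.05), inv_cdf(data, 0.95)
```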
from matplotlib import pylab as plt
bootstrapped_samples = get_bootstrapped_datasets(repeated_fivefcv_df['test scores'].values)
avg_vals = [float(sum(l))/len(l) for l in bootstrapped_samples]
conf_10000 = conf_interval(avg_vals)
pd.Series(avg_vals).hist(bins=10, normed=True)
plt.axvspan(conf_10000[0],conf_10000[1],alpha=0.5,color='red')
from sklearn.learning_curve import learning_curve
import numpy as np
fig, ax = plt.subplots(1,1, figsize=(16, 6))
fig.subplots_adjust(left=0.0625, right=0.95, wspace=0.1)
N, train_lc, val_lc = learning_curve(optimized_pipeline.pipeline,
X, y, cv=5,
train_sizes=np.linspace(0.3, 1, 25))
ax.plot(N, np.mean(train_lc, 1), color='blue', label='training score')
ax.plot(N, np.mean(val_lc, 1), color='red', label='validation score')
ax.hlines(np.mean([train_lc[-1], val_lc[-1]]), N[0], N[-1],
color='gray', linestyle='dashed')
ax.set_ylim(0, 1)
ax.set_xlim(N[0], N[-1])
ax.set_xlabel('training size')
ax.set_ylabel('score')
ax.legend(loc='best')
# ax[i].plot(N, np.mean(train_lc, 1), color='blue', label='training score')
# ax[i].plot(N, np.mean(val_lc, 1), color='red', label='validation score')
# ax[i].hlines(np.mean([train_lc[-1], val_lc[-1]]), N[0], N[-1],
# color='gray', linestyle='dashed')
# ax[i].set_ylim(0, 1)
# ax[i].set_xlim(N[0], N[-1])
# ax[i].set_xlabel('training size')
# ax[i].set_ylabel('score')
# ax[i].set_title('degree = {0}'.format(degree), size=14)
# ax[i].legend(loc='best')
train_lc
# Set output feature
output_feature = 'diabetes'
# Get input features
input_features = [x for x in X_interaction.columns if x != output_feature]
# Split into features and responses
X = X_interaction.copy()
y = test_df[output_feature].copy()
reload(ppl)
ppl.OptimizationBundle().get_options()
%%time
estimator = 'knn'
# Initialize pipeline
optimized_pipeline = ppl.PipelineOptimization(estimator)
# Fit data
optimized_pipeline.fit(X,y,random_state=6)
import numpy as np
from sklearn.cross_validation import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
import sklearn.metrics as sklearn_metrics
X_array = X.copy().values
y_array = y.copy().values
param_grid = {
'estimator__n_neighbors': range(1,31),
'estimator__weights': ['uniform', 'distance']
}
X_train, X_val, y_train, y_val = \
train_test_split(X_array,y_array,test_size=0.2,random_state=6,stratify=y_array)
from sklearn.model_selection import StratifiedKFold
kfolds_kwargs = dict(
n_splits=10,
shuffle=True,
random_state=6
)
skf = StratifiedKFold(**kfolds_kwargs)
fold_optimizations = {}
for fold_ind, data_inds in enumerate(skf.split(X_train, y_train)):
fold_optimizations[fold_ind] = {}
train_index, test_index = data_inds[0],data_inds[1]
X_train_inner, X_test_inner = X_train[train_index], X_train[test_index]
y_train_inner, y_test_inner = y_train[train_index], y_train[test_index]
pipeline = Pipeline([('estimator',KNeighborsClassifier(n_neighbors=11,weights='distance'))])
pipeline.fit(X_train_inner,y_train_inner)
y_pred_inner = pipeline.predict(X_test_inner)
confusion_matrix = sklearn_metrics.confusion_matrix(y_test_inner, y_pred_inner)
score = confusion_matrix.trace()/float(confusion_matrix.sum())
fold_optimizations[fold_ind]['confusion_matrix'] = confusion_matrix
fold_optimizations[fold_ind]['score'] = confusion_matrix.trace()/float(confusion_matrix.sum())
fold_optimizations[fold_ind]['pipeline'] = pipeline
print np.array([fold_optimizations[fold_ind]['score'] for fold_ind in fold_optimizations]).mean()
y_pred = pipeline.predict(X_val)
test_confusion_matrix = sklearn_metrics.confusion_matrix(y_val, y_pred)
score = test_confusion_matrix.trace()/float(test_confusion_matrix.sum())
print score
# TRAIN: [1 3] TEST: [0 2]
# TRAIN: [0 2] TEST: [1 3]
fold_optimizations
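The reason for StratifiedKFold in the loop above is that it preserves the class proportions in every fold; a small sketch with synthetic 2:1 labels:

```python
# Each test fold of a StratifiedKFold keeps the overall class ratio
# (exactly, when the class counts divide evenly across folds).
import numpy as np
from sklearn.model_selection import StratifiedKFold

y = np.array([0] * 8 + [1] * 4)        # 2:1 class ratio
X = np.zeros((12, 1))
skf = StratifiedKFold(n_splits=4)
ratios = [np.mean(y[test] == 0) for _, test in skf.split(X, y)]
```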
print dir(optimized_pipeline.grid_search.best_estimator_)
dir(folds[0].named_steps['estimator'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Multiply these values by large matricies, and it becomes a bit more clear why some data types might be more practical than others for a given application.
Step2: This comes up frequently when applying logic to floating point arithmetic
Step3: Scaling the numbers can sometimes help. It's almost never a good idea to work in such small numeric ranges
Step4: Floating point errors
Step5: Floating point errors
Step6: There are mathematical operations where this becomes significantly more problematic. While it may be possible to work around by avoiding the exact representation of those numbers, in general, different algorithms are needed for a solution.
Step7: Here, it can be shown that the subtraction is a catastrophic cancellation, while division and multiplication doesn't suffer the same fate. Using
Step8: The subtraction of the two similar numbers gives a zero result, resulting in a significant loss in precision (i.e. absolute error of infinity).
Step9: If we add one to the sum, we can get pretty different results when using the Python scientific stack (numpy, pandas)
Step10: Python integers have arbitrary precision, so the integer sum comes out as expected. In the scientific python stack, the C-style fixed-precision integers actually wrap around to negative numbers when increased.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
%matplotlib inline
# The memory consumption of different types of variables can vary significantly:
num_int = np.array(4, dtype='int8')
print("Memory (bytes) used by the int8 is: ", num_int.nbytes)
num_float = np.array(4, dtype='float')
print("Memory (bytes) used by the float is: ", num_float.nbytes)
var_1 = (0.1 + 0.1 + 0.1) - 0.3
print(var_1)
1 - 0.7 == 0.3
# Scaling can sometimes help
1*10 - 0.7*10 == 0.3*10
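Rather than rescaling, the standard remedy for comparing floats is a tolerance-based check such as math.isclose:

```python
# Exact '==' fails on 1 - 0.7 vs 0.3 because of binary rounding;
# math.isclose compares within a relative tolerance instead.
import math

naive = (1 - 0.7 == 0.3)               # False
robust = math.isclose(1 - 0.7, 0.3)    # True with the default rel_tol
```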
var1 = 1/3*10**18 # Start with a large number
var2 = 1
var3 = var1+var2 # Add a small number
print(var3 - var1)# When subtracting a large number from a similarly large number, the smaller number is ignored
var1 = np.array(0.9876543210987654)
var2 = np.array(0.9876543210987653)
var3 = var1 - var2
print(var3)
exp_val = 1e-16
rel_error = (var3 - exp_val)/exp_val
print("The relative error is {0:.2g}%".format(rel_error*100))
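One way to sidestep this particular cancellation is decimal arithmetic, which carries the base-10 digits exactly at a chosen precision (a workaround for this example, not a general cure for cancellation):

```python
# decimal.Decimal stores the decimal digits exactly, so subtracting
# these two near-equal values keeps the true difference of 1e-16.
from decimal import Decimal, getcontext

getcontext().prec = 20
diff = Decimal('0.9876543210987654') - Decimal('0.9876543210987653')
```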
X = np.linspace(1,300,300)
y1 = 2**(-X)-1/(1+2**(X))
y2 = 2**(-X)/(1+2**(X))
# Plot
plt.semilogy(X,y2, label='y2 (division)')
plt.semilogy(X,y1, label='y1 (subtraction)')
plt.ylim(1e-122, y1.max())
plt.legend(loc='best')
plt.show()
# Using the 53rd index (53) as a point to check the difference:
a = 2**(-X[53])
print("a ( 2^-x ) for x = 53 is: {}".format(a))
b = 1/(1+2**(X[53]))
print("b ( 1/(1+2^x) ) for x = 53 is: {}".format(b))
x = 2**63 - 1 # Python uses 'int'
x_np = np.array(x) # numpy automatically assigns to 'int64'
x_pd = pd.Series(x) # pandas automatically assigns to 'int64'
print(x)
print(x_np)
print(x_pd[0])
print("The python sum is: ", x+1)
print("The numpy sum is: ", x_np+1)
print("The pandas sum is: ", x_pd[0]+1)
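The wrap-around point can be anticipated with numpy's iinfo, which reports the bounds of each fixed-width integer type:

```python
# Adding 1 to the largest int64 in a numpy array wraps to the most
# negative representable value, matching the output printed above.
import numpy as np

top = np.iinfo(np.int64).max           # 2**63 - 1
wrapped = np.array([top]) + 1
```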
randArr = np.random.rand(10000)
def scaleLoop(array): # For loop goes through each element and adds one
for i in range(len(array)):
array[i] = array[i]+1
return array
%timeit forloopsum = scaleLoop(randArr)
%timeit vectorsum = randArr + 1 # Numpy was written to perform vectorized operations
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <font color='brown'>Reading a multi-page tiff</font>
Step2: <font color='brown'>Reading a multi-page tiff with multiple channels</font>
Step3: <font color='brown'>Applying a threshold to an image.</font>
|
<ASSISTANT_TASK:>
Python Code:
#This line is very important: (It turns on the inline visuals)!
%pylab inline
#This library is one of the libraries one can use for importing tiff files.
#For detailed info:http://effbot.org/imagingbook/image.htm
from PIL import Image
#We import our cell_fluorescent.tif image
im = Image.open('cell_fluorescent.tif')
#This line converts our image object into a numpy array (matrix).
im_array = np.array(im)
#This is an inline visual. It displays it after your code.
imshow(im_array)
#Notice the scale on the side of the image. What happens when you index a range.
#imshow(im_array[50:100,:])
#Or what happens when you index every fifth pixel:
#imshow(im_array[::5,::5],interpolation='nearest')
#Notice interpolation. What do you think this is doing?
#Repeat the above step but for the image cell_colony.tif.
#Experiment with changing the look-up-table:
#imshow(im_array, cmap="Reds")
#more colors at: http://matplotlib.org/examples/color/colormaps_reference.html
#Make sure you have previously run %pylab inline at least once.
#This library is another one of the libaries we can use to import tiff files
#It also works with formats such as .lsm which are tiff's in disguise.
from tifffile import imread as imreadtiff
#We import our mri-stack.tif image file.
im = imreadtiff('mri-stack.tif')
print('image dimensions',im.shape)
#This line converts our image object into a numpy array and then accesses the fifteenth slice.
im_slice = im[15,:,:]
#This activates a subplot which can be used to display more than one image in a grid.
subplot(1,2,1)
imshow(im_slice)
#We can also access the raw data directly.
im = imreadtiff('mri-stack.tif',key=5)
print('image dimensions',im.shape)
#This line converts our image object into a numpy array (matrix).
im_slice = im
#This is an inline visual. It displays it after your code.
subplot(1,2,2)
imshow(im_slice)
#Rerun the code and try and access different slices.
#How do you think you could extract the number of slices in this file?
#Make sure you have previously run %pylab inline at least once.
#from tifffile import imread as imreadtiff
#We import our flybrain.tif image file.
im = imreadtiff('flybrain.tif')
print('image dimensions',im.shape)
#This line converts our image object into a numpy array and then accesses the fifteenth slice.
im_slice = im[15,:,:]
#This activates a subplot which can be used to display more than one image in a grid.
subplot(2,2,1)
#Notice imshow can also show three channel images
#By default (RGB) if there are three channels.
#Note this doesn't work if there are two channels or more than three.
imshow(im_slice)
subplot(2,2,2)
#Plot the individual channels by specifying their index.
#Red channel.
imshow(im_slice[:,:,0],cmap="Greys_r")
subplot(2,2,3)
#Blue channel.
imshow(im_slice[:,:,1],cmap="Greys_r")
subplot(2,2,4)
#Green channel.
imshow(im_slice[:,:,2],cmap="Greys_r")
#Maximum projection.
#Take a look at this:
subplot(2,2,1)
imshow(np.max(im,0)[:,:,:])
subplot(2,2,2)
imshow(np.max(im,0)[:,:,0],cmap="Greys_r")
subplot(2,2,3)
imshow(np.max(im,0)[:,:,1],cmap="Greys_r")
#Can you work out what has happened.
#What happens when you use np.average instead?
#Can you work out why the average RGB image is so bad?
#Make sure you have previously run %pylab inline at least once.
#from tifffile import imread as imreadtiff
im_stack = imreadtiff('mri-stack.tif')
im_slice = im_stack[5,:,:]
thr = 100;
print('image min: ',np.min(im_slice),'image max: ',np.max(im_slice), 'thr: ',thr)
#Here we can very easily apply a threshold to the image.
binary = im_slice>thr
#Now we show the binary mask.
subplot(1,2,1)
imshow(im_slice)
subplot(1,2,2)
imshow(binary)
#What happens when you change the direction of the sign from '>' to '<'.
#Hopefully the result makes sense.
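The thresholding step needs no image file at all — it is a plain elementwise comparison, sketched here on a tiny synthetic array:

```python
import numpy as np

# a tiny synthetic "image": thresholding is just an elementwise comparison
img = np.array([[10, 200],
                [120, 40]])
mask = img > 100   # True wherever the pixel exceeds the threshold
print(mask)
```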
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Your First Plot
Step2: Let's plot y_data against x_data, and then show the plot.
Step3: Use the buttons above to Pan (or Zoom), Reset or save the Figure.
Step4: Since both the x and the y attributes of a bqplot chart are interactive widgets, we can change them. So, let's
Step5: Re-run the above cell a few times, the same plot should update everytime. But, thats not the only thing that can be changed once a plot has been rendered. Let's try changing some of the other attributes.
Step6: It's important to remember that an interactive widget means that the JavaScript and the Python communicate. So, the plot can be changed through a single line of python code, or a piece of python code can be triggered by a change in the plot. Let's go through a simple example. Say we have a function foo
Step7: We can call foo everytime any attribute of our scatter is changed. Say, the y values
Step8: To allow the points in the Scatter to be moved interactively, we set the enable_move attribute to True
Step9: Go ahead, head over to the chart and move any point in some way. This move (which happens on the JavaScript side should trigger our Python function foo.
Step10: Now, we need to create the actual Mark that will visually represent the data. Let's pick a Scatter chart to start.
Step11: Most of the time, the actual Figure co-ordinates don't really mean anything to us. So, what we need is the visual representation of our Scale, which is called an Axis.
Step12: And finally, we put it all together on a canvas, which is called a Figure.
Step13: The IPython display machinery displays the last returned value of a cell. If you wish to explicitly display a widget, you can call IPython.display.display.
Step14: Now, that the plot has been generated, we can control every single attribute of it. Let's say we wanted to color the chart based on some other data.
Step15: Now, we define a ColorScale to map the color_data to actual colors
Step16: The grammar of graphics framework allows us to overlay multiple visualizations on a single Figure by having the visualization share the Scales. So, for example, if we had a Bar chart that we would like to plot alongside the Scatter plot, we just pass it the same Scales.
Step17: Finally, we add the new Mark to the Figure to update the plot!
|
<ASSISTANT_TASK:>
Python Code:
# Let's begin by importing some libraries we'll need
import numpy as np
# And creating some random data
size = 100
np.random.seed(0)
x_data = np.arange(size)
y_data = np.cumsum(np.random.randn(size) * 100.0)
from bqplot import pyplot as plt
plt.figure(title='My First Plot')
plt.plot(x_data, y_data)
plt.show()
# Creating a new Figure and setting it's title
plt.figure(title='My Second Chart')
# Let's assign the scatter plot to a variable
scatter_plot = plt.scatter(x_data, y_data)
# Let's show the plot
plt.show()
scatter_plot.y = np.cumsum(np.random.randn(size) * 100.0)
# Say, the color
scatter_plot.colors = ['Red']
# Or, the marker style
scatter_plot.marker = 'diamond'
def foo(change):
    print('This is a trait change. foo was called because the Scatter was moved')
    print('In fact, the Scatter plot sent us all the new data:')
    print("To access the data, try modifying the function to print change['new']")
# First, we hook up our function `foo` to the colors attribute (or Trait) of the scatter plot
scatter_plot.observe(foo, 'y')
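The observe mechanism is the classic observer pattern. A minimal pure-Python stand-in (the Observable class here is illustrative, not part of bqplot) makes the callback flow explicit:

```python
class Observable:
    "A tiny illustrative stand-in for a widget with observable traits."
    def __init__(self, y):
        self._y = y
        self._callbacks = []

    def observe(self, fn):
        self._callbacks.append(fn)

    @property
    def y(self):
        return self._y

    @y.setter
    def y(self, new):
        change = {'old': self._y, 'new': new}
        self._y = new
        for fn in self._callbacks:   # notify every registered observer
            fn(change)

events = []
obs = Observable([1, 2, 3])
obs.observe(events.append)   # register a callback, like scatter_plot.observe(foo, 'y')
obs.y = [4, 5, 6]            # assigning to y fires the callback
print(events[0]['new'])      # -> [4, 5, 6]
```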
scatter_plot.enable_move = True
# First, we import the scales
from bqplot import LinearScale
# Let's create a scale for the x attribute, and a scale for the y attribute
x_sc = LinearScale()
y_sc = LinearScale()
from bqplot import Scatter
scatter_chart = Scatter(x=x_data, y=y_data, scales={'x': x_sc, 'y': y_sc})
from bqplot import Axis
x_ax = Axis(label='X', scale=x_sc)
y_ax = Axis(label='Y', scale=y_sc, orientation='vertical')
from bqplot import Figure
fig = Figure(marks=[scatter_chart], title='A Figure', axes=[x_ax, y_ax])
fig
from IPython.display import display
display(fig)
# First, we generate some random color data.
color_data = np.random.randint(0, 2, size=100)
from bqplot import ColorScale
# The colors trait controls the actual colors we want to map to. It can also take a min, mid, max list of
# colors to be interpolated between for continuous data.
col_sc = ColorScale(colors=['MediumSeaGreen', 'Red'])
scatter_chart.scales = {'x': x_sc, 'y': y_sc, 'color': col_sc}
# We pass the color data to the Scatter Chart through it's color attribute
scatter_chart.color = color_data
from bqplot import Bars
new_size = 50
scale = 100.
x_data_new = np.arange(new_size)
y_data_new = np.cumsum(np.random.randn(new_size) * scale)
# All we need to do to add a bar chart to the Figure is pass the same scales to the Mark
bar_chart = Bars(x=x_data_new, y=y_data_new, scales={'x': x_sc, 'y': y_sc})
fig.marks = [scatter_chart, bar_chart]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: We have to make sure all conditions have the same counts, as the ANOVA
Step3: Create TFR representations for all conditions
Step4: Setup repeated measures ANOVA
Step5: Now we'll assemble the data matrix and swap axes so the trial replications
Step6: While the iteration scheme used above for assembling the data matrix
Step7: Account for multiple comparisons using FDR versus permutation clustering test
Step8: A stat_fun must deal with a variable number of input arguments.
Step9: Create new stats image with only significant clusters
Step10: Now using FDR
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Denis Engemann <denis.engemann@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import tfr_morlet
from mne.stats import f_threshold_mway_rm, f_mway_rm, fdr_correction
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
tmin, tmax = -0.2, 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
include = []
raw.info['bads'] += ['MEG 2443'] # bads
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
stim=False, include=include, exclude='bads')
ch_name = 'MEG 1332'
# Load conditions
reject = dict(grad=4000e-13, eog=150e-6)
event_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
picks=picks, baseline=(None, 0), preload=True,
reject=reject)
epochs.pick_channels([ch_name]) # restrict example to one channel
epochs.equalize_event_counts(event_id)
# Factor to down-sample the temporal dimension of the TFR computed by
# tfr_morlet.
decim = 2
frequencies = np.arange(7, 30, 3) # define frequencies of interest
n_cycles = frequencies / frequencies[0]
zero_mean = False # don't correct morlet wavelet to be of mean zero
# To have a true wavelet zero_mean should be True but here for illustration
# purposes it helps to spot the evoked response.
epochs_power = list()
for condition in [epochs[k] for k in event_id]:
this_tfr = tfr_morlet(condition, frequencies, n_cycles=n_cycles,
decim=decim, average=False, zero_mean=zero_mean,
return_itc=False)
this_tfr.apply_baseline(mode='ratio', baseline=(None, 0))
this_power = this_tfr.data[:, 0, :, :] # we only have one channel.
epochs_power.append(this_power)
n_conditions = len(epochs.event_id)
n_replications = epochs.events.shape[0] // n_conditions
factor_levels = [2, 2] # number of levels in each factor
effects = 'A*B' # this is the default signature for computing all effects
# Other possible options are 'A' or 'B' for the corresponding main effects
# or 'A:B' for the interaction effect only (this notation is borrowed from the
# R formula language)
n_frequencies = len(frequencies)
times = 1e3 * epochs.times[::decim]
n_times = len(times)
data = np.swapaxes(np.asarray(epochs_power), 1, 0)
# reshape last two dimensions in one mass-univariate observation-vector
data = data.reshape(n_replications, n_conditions, n_frequencies * n_times)
# so we have replications * conditions * observations:
print(data.shape)
fvals, pvals = f_mway_rm(data, factor_levels, effects=effects)
effect_labels = ['modality', 'location', 'modality by location']
# let's visualize our effects by computing f-images
for effect, sig, effect_label in zip(fvals, pvals, effect_labels):
plt.figure()
# show naive F-values in gray
plt.imshow(effect.reshape(8, 211), cmap=plt.cm.gray, extent=[times[0],
times[-1], frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
# create mask for significant Time-frequency locations
effect = np.ma.masked_array(effect, [sig > .05])
plt.imshow(effect.reshape(8, 211), cmap='RdBu_r', extent=[times[0],
times[-1], frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
plt.colorbar()
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title(r"Time-locked response for '%s' (%s)" % (effect_label, ch_name))
plt.show()
effects = 'A:B'
def stat_fun(*args):
return f_mway_rm(np.swapaxes(args, 1, 0), factor_levels=factor_levels,
effects=effects, return_pvals=False)[0]
# The ANOVA returns a tuple f-values and p-values, we will pick the former.
pthresh = 0.00001 # set threshold rather high to save some time
f_thresh = f_threshold_mway_rm(n_replications, factor_levels, effects,
pthresh)
tail = 1 # f-test, so tail > 0
n_permutations = 256 # Save some time (the test won't be too sensitive ...)
T_obs, clusters, cluster_p_values, h0 = mne.stats.permutation_cluster_test(
epochs_power, stat_fun=stat_fun, threshold=f_thresh, tail=tail, n_jobs=1,
n_permutations=n_permutations, buffer_size=None)
good_clusters = np.where(cluster_p_values < .05)[0]
T_obs_plot = np.ma.masked_array(T_obs,
                                np.invert(clusters[np.squeeze(good_clusters)]))
plt.figure()
for f_image, cmap in zip([T_obs, T_obs_plot], [plt.cm.gray, 'RdBu_r']):
plt.imshow(f_image, cmap=cmap, extent=[times[0], times[-1],
frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title("Time-locked response for 'modality by location' (%s)\n"
" cluster-level corrected (p <= 0.05)" % ch_name)
plt.show()
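For intuition, the FDR step used further below can be sketched in pure NumPy — a simplified Benjamini–Hochberg procedure, a stand-in for (not the actual implementation of) mne.stats.fdr_correction:

```python
import numpy as np

def fdr_bh(pvals, alpha=0.05):
    # Benjamini-Hochberg step-up: reject the k smallest p-values, where k is
    # the largest index such that p_(k) <= k/m * alpha
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = below.nonzero()[0].max()
        reject[order[:k + 1]] = True
    return reject

print(fdr_bh([0.001, 0.02, 0.8, 0.04]))  # only the smallest p-values survive
```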
mask, _ = fdr_correction(pvals[2])
T_obs_plot2 = np.ma.masked_array(T_obs, np.invert(mask))
plt.figure()
for f_image, cmap in zip([T_obs, T_obs_plot2], [plt.cm.gray, 'RdBu_r']):
plt.imshow(f_image, cmap=cmap, extent=[times[0], times[-1],
frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title("Time-locked response for 'modality by location' (%s)\n"
" FDR corrected (p <= 0.05)" % ch_name)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We'll use the algorithm from Shrake and Rupley for computing the SASA. Here's the function in MDTraj
Step2: The computed sasa array contains the solvent accessible surface area for each atom in each frame of the trajectory. Let's sum over all of the atoms to get the total SASA from all of the atoms in each frame.
Step3: We probably don't really have enough data do compute a meaningful autocorrelation, but for more realistic datasets, this might be something that you want to do.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import print_function
import numpy as np
import mdtraj as md
help(md.shrake_rupley)
trajectory = md.load('ala2.h5')
sasa = md.shrake_rupley(trajectory)
print(trajectory)
print('sasa data shape', sasa.shape)
total_sasa = sasa.sum(axis=1)
print(total_sasa.shape)
from matplotlib.pylab import *
plot(trajectory.time, total_sasa)
xlabel('Time [ps]', size=16)
ylabel('Total SASA (nm)^2', size=16)
show()
def autocorr(x):
"Compute an autocorrelation with numpy"
x = x - np.mean(x)
result = np.correlate(x, x, mode='full')
result = result[result.size//2:]
return result / result[0]
semilogx(trajectory.time, autocorr(total_sasa))
xlabel('Time [ps]', size=16)
ylabel('SASA autocorrelation', size=16)
show()
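A quick sanity check of the same autocorrelation helper on synthetic white noise (pure NumPy, no trajectory required): the lag-0 value is exactly 1 by construction, and higher lags hover near 0.

```python
import numpy as np

def autocorr(x):
    "Compute an autocorrelation with numpy"
    x = x - np.mean(x)
    result = np.correlate(x, x, mode='full')
    result = result[result.size // 2:]
    return result / result[0]

rng = np.random.default_rng(0)
acf = autocorr(rng.normal(size=1000))
print(acf[0])   # lag 0 is always 1 after normalization
```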
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can assign a query result to a variable
Step2: Another way to connect
Step3: Creating (CREATE) and dropping (DROP) tables.
Step4: Inserting (INSERT) data into the database
Step5: Python variable values are substituted via a tuple. Another way is via a dictionary, using ':'
Step6: If you want to insert several users into the table, use executemany and a list of tuples
Step7: If you need the id of the row you just inserted, use lastrowid
Step8: Retrieving data (SELECT) with SQLite
Step9: The cursor object works as an iterator, calling fetchall() automatically
Step10: To retrieve data with conditions, use '?'
Step11: Updating (UPDATE) and deleting (DELETE) data
Step12: Using SQLite transactions
Step13: Or rollback to undo the changes
Step14: Remember that you always need to commit your changes. If you close the connection, or the connection is lost, your uncommitted changes will be lost
Step15: SQLite exceptions
Step16: In this example we use try/except/finally to "catch" the exception in the code. The finally keyword is very important, because it ensures the database connection is closed correctly. More details here.
Step17: In the example above, if the insert raises an exception, the transaction will be rolled back and the message will be printed; otherwise the transaction will be committed. Note that in this case we call execute on the db object.
|
<ASSISTANT_TASK:>
Python Code:
a = 'Pop'
%sql select * from genres where Name = :a
a = %sql select * from genres
type(a)
print(a)
import sqlite3
# Create a DB in RAM
db=sqlite3.connect(':memory:')
# Don't forget to close the connection when the work is done
db.close()
# Create or open the db
db=sqlite3.connect('testdb')
# Close the db
db.close()
db=sqlite3.connect('testdb')
# Get a cursor object
cursor = db.cursor()
cursor.execute('''
DROP TABLE IF EXISTS users
''');
cursor.execute('''
CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT,
phone TEXT, email TEXT UNIQUE, password TEXT);
''')
db.commit()
cursor=db.cursor()
name1 = 'Andrew'
phone1 = '123232'
email1 = 'user@example.com'
password1 = '12345'
name2 = 'John'
phone2 = '234241'
email2 = 'john@example.com'
password2 = 'abcdef'
# Insert user 1
cursor.execute('''INSERT INTO users(name, phone, email, password)
VALUES(?,?,?,?)''', (name1, phone1, email1, password1))
print('First user inserted')
# Insert user 2
cursor.execute('''INSERT INTO users(name, phone, email, password)
VALUES(?,?,?,?)''', (name2, phone2, email2, password2))
print('Second user inserted')
db.commit()
name3 = 'Nikita'
phone3 = '323232'
email3 = 'nikita@example.com'
password3 = '123'
cursor = db.cursor()
cursor.execute('''INSERT INTO users(name, phone, email, password)
VALUES(:name, :phone, :email, :password)''',
{'name':name3, 'phone':phone3, 'email':email3, 'password':password3})
print('Third user inserted')
db.commit()
name4 = 'Ann'
phone4 = '490904'
email4 = 'ann@example.com'
password4 = '345'
name5 = 'Jane'
phone5 = '809908'
email5 = 'jane@example.com'
password5 = '785'
users = [(name4, phone4, email4, password4),
(name5, phone5, email5, password5)]
cursor.executemany('''INSERT INTO users(name, phone, email, password) VALUES (?,?,?,?)''', users)
db.commit()
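The same executemany pattern can be exercised against a throwaway in-memory database, which keeps the sketch fully self-contained:

```python
import sqlite3

con = sqlite3.connect(':memory:')   # throwaway database, lives in RAM
cur = con.cursor()
cur.execute('CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT)')
# one tuple per row, as with the users list above
cur.executemany('INSERT INTO users(name) VALUES (?)', [('Ann',), ('Jane',)])
con.commit()
n_users = cur.execute('SELECT count(*) FROM users').fetchone()[0]
print(n_users)   # -> 2
con.close()
```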
id = cursor.lastrowid
print('Last row id: %d' % id)
cursor.execute('''SELECT name, email, phone FROM users''')
user1 = cursor.fetchone() # fetch one row
print(user1[0])
all_rows = cursor.fetchall()
for row in all_rows:
    # row[0] returns the first column - name, row[1] - email, row[2] - phone
print('{0} : {1}, {2}'.format(row[0], row[1], row[2]))
cursor.execute('''SELECT name, email, phone FROM users''')
for row in cursor:
print('{0} : {1}, {2}'.format(row[0], row[1], row[2]))
user_id=3
cursor.execute('''SELECT name, email, phone FROM users WHERE id=?''', (user_id,))
user=cursor.fetchone()
print (user[0], user[1], user[2])
# Update the user with id = 1
newphone = '77777'
userid = 1
cursor.execute('''UPDATE users SET phone = ? WHERE id = ?''', (newphone, userid))
# Delete the user with id = 2
delete_userid = 2
cursor.execute('''DELETE FROM users WHERE id = ?''', (delete_userid,))
db.commit()
cursor.execute('''UPDATE users SET phone = ? WHERE id = ? ''', (newphone, userid))
db.commit()
cursor.execute('''UPDATE users SET phone = ? WHERE id = ?''', (newphone, userid))
db.rollback()
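rollback can be demonstrated end-to-end with an in-memory database: the uncommitted insert disappears, while the previously committed table survives.

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE t(x INTEGER)')
con.commit()                      # the table itself is now committed
con.execute('INSERT INTO t VALUES (1)')
con.rollback()                    # discard the uncommitted insert
n_rows = con.execute('SELECT count(*) FROM t').fetchone()[0]
print(n_rows)   # -> 0
con.close()
```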
db.close()
import sqlite3
try:
db=sqlite3.connect('testdb')
cursor=db.cursor()
    cursor.execute('''CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT,
    email TEXT UNIQUE, password TEXT)''')
db.commit()
except Exception as e:
db.rollback()
print('we are here')
raise e
finally:
db.close()
name1 = 'Andres'
phone1 = '333658'
email1 = 'user@example.com'
password1 = '12345'
try:
db=sqlite3.connect('testdb')
with db:
db.execute('''INSERT INTO users(name, phone, email, password)
VALUES(?,?,?,?)''', (name1, phone1, email1, password1))
except sqlite3.IntegrityError:
print('Record already exists')
finally:
db.close()
%%sql
select company
FROM invoices
join customers
ON invoices.customerid = customers.customerid
WHERE customers.company <> 'None'
group by customers.customerid, customers.company
having count(*)
in
(
select min(cnt) from
(
select count(*) as cnt
FROM invoices
group by customerid
) A
UNION ALL
select max(cnt) from
(
select count(*) as cnt
FROM invoices
group by customerid
) A
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using the optimal control module to find the pulse
Step2: To quickly visualize the pulse, Processor has a method called plot_pulses. In the figure bellow, each colour represents the pulse sequence of one control Hamiltonian in the system as a function of time. In each time interval, the pulse remains constant.
Step3: To simulate the evolution, we only need to call the method run_state which calls one of the open system solvers in QuTiP and calculate the time evolution.
Step4: With decoherence
Step5: We can see that under noisy evolution their is a none zero probability of measuring state 00.
Step6: The first three pulse periods (from $t=0$ to $t\approx5$) are for the three Hadamard gates, they are followed by two long periods for the CNOT gates and then again two Hadamard. Different colours represent different kinds of interaction, as shown in the legend.
Step7: With decoherence
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from qutip.qip.device import OptPulseProcessor, LinearSpinChain
from qutip.qip.circuit import QubitCircuit
from qutip.operators import sigmaz, sigmax, identity
from qutip.tensor import tensor
from qutip.states import basis
from qutip.qobj import ptrace
basis00 = tensor([basis(2,0), basis(2,0)])
qc = QubitCircuit(N=3)
qc.add_gate("SNOT", targets=0)
qc.add_gate("SNOT", targets=1)
qc.add_gate("SNOT", targets=2)
# function f(x)
qc.add_gate("CNOT", controls=0, targets=2)
qc.add_gate("CNOT", controls=1, targets=2)
qc.add_gate("SNOT", targets=0)
qc.add_gate("SNOT", targets=1)
setting_args = {"SNOT": {"num_tslots": 5, "evo_time": 1},
"CNOT": {"num_tslots": 12, "evo_time": 5}}
processor = OptPulseProcessor(N=3)
processor.add_control(sigmaz(), cyclic_permutation=True)
processor.add_control(sigmax(), cyclic_permutation=True)
processor.add_control(tensor([sigmax(), sigmax(), identity(2)]))
processor.add_control(tensor([identity(2), sigmax(), sigmax()]))
processor.load_circuit(qc, setting_args=setting_args, merge_gates=False, verbose=True,
amp_ubound=5, amp_lbound=0);
processor.plot_pulses(title="Control pulse of OptPulseProcessor", figsize=(8, 4), dpi=100);
psi0 = tensor([basis(2, 0), basis(2, 0), basis(2, 1)])
result = processor.run_state(init_state=psi0)
print("Probability of measuring state 00:")
print(np.real((basis00.dag() * ptrace(result.states[-1], [0,1]) * basis00)[0,0]))
processor.t1 = 100
processor.t2 = 30
psi0 = tensor([basis(2, 0), basis(2, 0), basis(2, 1)])
result = processor.run_state(init_state=psi0)
print("Probability of measuring state 00:")
print(np.real((basis00.dag() * ptrace(result.states[-1], [0,1]) * basis00)[0,0]))
processor2 = LinearSpinChain(3)
processor2.load_circuit(qc);
processor2.plot_pulses(title="Control pulse of Spin chain");
psi0 = tensor([basis(2, 0), basis(2, 0), basis(2, 1)])
result = processor2.run_state(init_state=psi0)
print("Probability of measuring state 00:")
print(np.real((basis00.dag() * ptrace(result.states[-1], [0,1]) * basis00)[0,0]))
processor2.t1 = 100
processor2.t2 = 30
psi0 = tensor([basis(2, 0), basis(2, 0), basis(2, 1)])
result = processor2.run_state(init_state=psi0)
print("Probability of measuring state 00:")
print(np.real((basis00.dag() * ptrace(result.states[-1], [0,1]) * basis00)[0,0]))
from qutip.ipynbtools import version_table
version_table()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The mathematical (deterministic, forward) model
Step2: The data
Step3: The gendata Python module is used to generate the dataset (see the notebook Datasets).
Step4: Make a plot of the dataset
Step5: The statistical model used for computing the posterior probability density PPD
Step6: Create a pd.Series with the true parameter values. It will be used for plotting purpose.
Step7: Define the prior probability distribution for each free parameter. Here the uniform distribution is used, with given bounds (loc and scale arguments of scipy.stats.uniform are the lower bound and the range, respectively)
Step8: Define (min, max) bounds for each free parameter. It should be given by lower and upper quantiles (lower_qtl, upper_qtl) of the prior distribution. Choose the extreme quantiles (0, 1) if the distribution is uniform. It will be used for plotting purpose and also for constrained optimization (see below).
Step9: Plot the prior probability density for each parameter.
Step10: Define a function that returns the (logarithm of the) prior probability density for a given data model m.
Step11: Define a function that returns the log-likelihood. It is an $n$-dimensional Gaussian ($n$ nucleide concentrations sampled along the depth profile) with the mean given by the forward model and the variance given by the error estimated from the measurements of the nucleide concentration of each sample. This Gaussian implies that (1) the error on each measurement is random, (2) the sampled nucleide concentrations are measured independently of each other, (3) the forward model - i.e., the deterministic model that predicts the nucleide concentration profile - represents the real physics and (4) the values of the non-free parameters of the forward model - e.g., nucleide surface production rate, attenuation lengths... - are exactly known.
Step12: Define a function that returns the log-posterior probability density, according to the Bayes's theorem.
Step13: Sampling the posterior probability density using MCMC
Step14: We show below the initial guesses and the initial positions of the walkers in a scatter plot.
Step15: We can then setup the emcee sampler and run the MCMC for n_steps iterations starting from the initial positions defined above.
Step16: Let's plot the trace of the MCMC iterations. The red lines show the true values.
Step17: This is not that bad! Try plotting only the first samples (e.g., sample_plot_range = slice(0, 1000)). We see that thanks to the initial positions of the walkers, the emcee sampler quickly starts exploring the full posterior distribution. The “burn-in” period is small and we can therefore set a small value for nburn below.
Step18: We can visualize the sampled posterior probability density by joint plots of the MCMC samples. The red lines show the true values.
Step19: Given the samples, it is straightforward to characterize the posterior probability density and estimate its moments.
Step20: the sample which has the max PPD value (i.e., the most probable sampled model)
Step21: the PPD quantiles (useful for delineating the Bayesian confidence intervals or credible intervals for each free parameter)
Step22: We finally plot the nucleide concentration profiles (blue dots
|
<ASSISTANT_TASK:>
Python Code:
import math
import numpy as np
import pandas as pd
from scipy import stats
from scipy import optimize
import emcee
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
clr_plt = sns.color_palette()
import models
# the true parameters
eps_true = 5e-4
t_true = 3e5
rho_true = 2.
inh_true = 5e4
# depths and sample size
depth_minmax = [50, 500]
N = 8
# perturbations
err_magnitude = 20.
err_variability = 5.
import gendata
profile_data = gendata.generate_dataset(
models.C_10Be,
(eps_true, t_true, rho_true, inh_true),
zlimits=depth_minmax,
n=N,
err=(err_magnitude, err_variability)
)
sns.set_context('notebook')
fig, ax = plt.subplots()
profile_data.plot(
y='depth', x='C', xerr='std',
kind="scatter", ax=ax, rot=45
)
ax.invert_yaxis()
param_names = 'erosion rate', 'time exposure'
param_true = pd.Series((eps_true, t_true), index=param_names)
eps_prior = stats.uniform(loc=0., scale=1e-3)
t_prior = stats.uniform(loc=0., scale=8e5)
priors = eps_prior, t_prior
param_priors = pd.Series(priors, index=param_names)
def get_bounds(f, lower_qtl=0., upper_qtl=1.):
return f.ppf(lower_qtl), f.ppf(upper_qtl)
eps_bounds = get_bounds(eps_prior, 0, 1)
t_bounds = get_bounds(t_prior, 0, 1)
bounds = eps_bounds, t_bounds
param_bounds = pd.DataFrame(
np.array(bounds), columns=('min', 'max'), index=param_names
)
param_bounds
fig, axes = plt.subplots(1, 2, figsize=(13, 3))
for ax, p, b, name in zip(axes.flatten(),
param_priors.values,
param_bounds.values,
param_names):
xmin, xmax = b
eps = 0.1 * (xmax - xmin)
x = np.linspace(xmin - eps, xmax + eps, 200)
d = p.pdf(x)
ax.plot(x, d)
ax.fill(x, d, alpha=0.4)
plt.setp(ax.xaxis.get_majorticklabels(), rotation=45)
plt.setp(ax, ylim=(0, None), yticklabels=[],
xlabel=name)
plt.subplots_adjust()
def lnprior(m):
lps = [p.logpdf(v) for (p, v) in zip(priors, m)]
if not np.all(np.isfinite(lps)):
return -np.inf
return np.sum(lps)
def lnlike(m):
eps, t = m
mean = models.C_10Be(profile_data['depth'].values,
eps, t, rho_true, inh_true)
var = profile_data['std']**2
lngauss = -0.5 * np.sum(
np.log(2. * np.pi * var) +
(profile_data['C'] - mean)**2 / var
)
return lngauss
def lnprob(m):
lp = lnprior(m)
if not np.isfinite(lp):
return -np.inf
return lp + lnlike(m)
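The prior/likelihood/posterior composition above can be checked on a toy one-parameter problem. The data and prior bounds here are made up purely for illustration; the structure mirrors lnprior/lnlike/lnprob:

```python
import numpy as np

# toy 1-parameter version: uniform prior + Gaussian likelihood
data = np.array([1.1, 0.9, 1.05])
sigma = 0.1

def lnprior_1d(mu):
    return 0.0 if 0.0 <= mu <= 2.0 else -np.inf   # uniform(0, 2), up to a constant

def lnlike_1d(mu):
    return -0.5 * np.sum(np.log(2. * np.pi * sigma**2) + (data - mu)**2 / sigma**2)

def lnprob_1d(mu):
    lp = lnprior_1d(mu)
    return lp + lnlike_1d(mu) if np.isfinite(lp) else -np.inf

print(lnprob_1d(5.0))   # outside the prior support -> -inf
```

Values near the data mean (~1.02) get a higher log-posterior than values far from it, as expected.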
n_params, n_walkers = len(param_names), 100
# randomly choose initial guesses according to the prior
init_guesses = np.array(
[p.rvs(size=n_walkers) for p in priors]
).T
# perform bounded non-linear optimization from each initial guess
op_lnlike = lambda *args: -lnlike(*args)
init_walkers = np.empty_like(init_guesses)
for i, g in enumerate(init_guesses):
res = optimize.minimize(op_lnlike, g,
method='TNC',
bounds=bounds)
init_walkers[i] = res['x']
df_init_guesses = pd.DataFrame(init_guesses, columns=param_names)
df_init_walkers = pd.DataFrame(init_walkers, columns=param_names)
def scatter_pos(xcol, ycol, ax):
df_init_guesses.plot(
kind='scatter', x=xcol, y=ycol,
alpha=0.5, ax=ax, color=clr_plt[0], label='init guesses'
)
df_init_walkers.plot(
kind='scatter', x=xcol, y=ycol,
alpha=0.8, ax=ax, color=clr_plt[1], label='init walkers'
)
legend = ax.legend(frameon=True, loc='lower right')
legend.get_frame().set_facecolor('w')
plt.setp(ax, xlim=param_bounds.loc[xcol],
ylim=param_bounds.loc[ycol])
fig, ax = plt.subplots()
scatter_pos('erosion rate', 'time exposure', ax)
sampler = emcee.EnsembleSampler(n_walkers, n_params, lnprob)
n_steps = 500
sampler.run_mcmc(init_walkers, n_steps)
mcmc_samples = pd.DataFrame(sampler.flatchain,
columns=param_names)
sample_plot_range = slice(None)
axes = mcmc_samples[sample_plot_range].plot(
kind='line', subplots=True,
figsize=(10, 8), color=clr_plt[0]
)
for i, ax in enumerate(axes):
ax.axhline(param_true.iloc[i], color='r')
nburn = 100
mcmc_kept_samples = pd.DataFrame(
sampler.chain[:, nburn:, :].reshape((-1, n_params)),
columns=param_names
)
def jointplot_density(xcol, ycol):
p = sns.jointplot(
        x=xcol, y=ycol,
data=mcmc_kept_samples,
xlim=(mcmc_kept_samples[xcol].min(),
mcmc_kept_samples[xcol].max()),
ylim=(mcmc_kept_samples[ycol].min(),
mcmc_kept_samples[ycol].max()),
joint_kws={'alpha': 0.02}
)
p.ax_joint.axhline(param_true.loc[ycol], color='r')
p.ax_joint.axvline(param_true.loc[xcol], color='r')
jointplot_density('erosion rate', 'time exposure')
mcmc_kept_samples.mean()
max_ppd = sampler.lnprobability[:, nburn:].reshape((-1)).argmax()
mcmc_kept_samples.iloc[max_ppd]
percentiles = np.array([2.5, 5, 25, 50, 75, 95, 97.5])
mcmc_kept_samples.quantile(percentiles * 0.01)
fig, ax = plt.subplots()
# plot the profile data with error bars
profile_data.plot(
y='depth', x='C', xerr='std',
kind="scatter", ax=ax, rot=45
)
# plot 100 randomly chosen profiles from the MCMC samples
depths = np.linspace(profile_data['depth'].min(),
profile_data['depth'].max(),
100)
for i in np.random.randint(len(mcmc_kept_samples), size=100):
eps, t = mcmc_kept_samples.iloc[i]
c = models.C_10Be(depths, eps, t, rho_true, inh_true)
ax.plot(c, depths, color='grey', alpha=0.1)
# plot the true profile
c_true = models.C_10Be(depths, eps_true, t_true,
rho_true, inh_true)
ax.plot(c_true, depths, color='r', label='true model')
ax.invert_yaxis()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: DDSP Training
Step2: Get a batch of data
Step3: Get a distribution strategy
Step4: Get model and trainer
Step6: or gin
Step7: Train
Step8: Train Loop
Step9: Analyze results
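The gin config in this example binds `@core.exp_sigmoid` as the scale function for both synthesizers. A minimal NumPy sketch of that nonlinearity (the parameter defaults here are assumptions for illustration, not copied from `ddsp.core`):

```python
import numpy as np

def exp_sigmoid(x, exponent=10.0, max_value=2.0, threshold=1e-7):
    # Sigmoid raised to log(exponent), rescaled into (threshold, max_value + threshold]:
    # keeps synth amplitudes positive with finer resolution near zero.
    sig = 1.0 / (1.0 + np.exp(-np.asarray(x, dtype=np.float64)))
    return max_value * sig ** np.log(exponent) + threshold
```

Because the mapping is monotone in `x`, the decoder's unconstrained outputs translate smoothly into positive amplitudes and filter magnitudes.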
|
<ASSISTANT_TASK:>
Python Code:
# Copyright 2021 Google LLC. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# Install and import dependencies
%tensorflow_version 2.x
!pip install -qU ddsp
# Ignore a bunch of deprecation warnings
import warnings
warnings.filterwarnings("ignore")
import time
import ddsp
from ddsp.training import (data, decoders, encoders, models, preprocessing,
train_util, trainers)
from ddsp.colab.colab_utils import play, specplot, DEFAULT_SAMPLE_RATE
import gin
import matplotlib.pyplot as plt
import numpy as np
import tensorflow.compat.v2 as tf
import tensorflow_datasets as tfds
sample_rate = DEFAULT_SAMPLE_RATE # 16000
# Get a single example from NSynth.
# Takes a few seconds to load from GCS.
data_provider = data.NSynthTfds(split='test')
dataset = data_provider.get_batch(batch_size=1, shuffle=False).take(1).repeat()
batch = next(iter(dataset))
audio = batch['audio']
n_samples = audio.shape[1]
specplot(audio)
play(audio)
strategy = train_util.get_strategy()
TIME_STEPS = 1000
# Create Neural Networks.
preprocessor = preprocessing.F0LoudnessPreprocessor(time_steps=TIME_STEPS)
decoder = decoders.RnnFcDecoder(rnn_channels = 256,
rnn_type = 'gru',
ch = 256,
layers_per_stack = 1,
input_keys = ('ld_scaled', 'f0_scaled'),
output_splits = (('amps', 1),
('harmonic_distribution', 45),
('noise_magnitudes', 45)))
# Create Processors.
harmonic = ddsp.synths.Harmonic(n_samples=n_samples,
sample_rate=sample_rate,
name='harmonic')
noise = ddsp.synths.FilteredNoise(window_size=0,
initial_bias=-10.0,
name='noise')
add = ddsp.processors.Add(name='add')
# Create ProcessorGroup.
dag = [(harmonic, ['amps', 'harmonic_distribution', 'f0_hz']),
(noise, ['noise_magnitudes']),
(add, ['noise/signal', 'harmonic/signal'])]
processor_group = ddsp.processors.ProcessorGroup(dag=dag,
name='processor_group')
# Loss_functions
spectral_loss = ddsp.losses.SpectralLoss(loss_type='L1',
mag_weight=1.0,
logmag_weight=1.0)
with strategy.scope():
# Put it together in a model.
model = models.Autoencoder(preprocessor=preprocessor,
encoder=None,
decoder=decoder,
processor_group=processor_group,
losses=[spectral_loss])
trainer = trainers.Trainer(model, strategy, learning_rate=1e-3)
gin_string = """
import ddsp
import ddsp.training
# Preprocessor
models.Autoencoder.preprocessor = @preprocessing.F0LoudnessPreprocessor()
preprocessing.F0LoudnessPreprocessor.time_steps = 1000
# Encoder
models.Autoencoder.encoder = None
# Decoder
models.Autoencoder.decoder = @decoders.RnnFcDecoder()
decoders.RnnFcDecoder.rnn_channels = 256
decoders.RnnFcDecoder.rnn_type = 'gru'
decoders.RnnFcDecoder.ch = 256
decoders.RnnFcDecoder.layers_per_stack = 1
decoders.RnnFcDecoder.input_keys = ('ld_scaled', 'f0_scaled')
decoders.RnnFcDecoder.output_splits = (('amps', 1),
                                       ('harmonic_distribution', 20),
                                       ('noise_magnitudes', 20))
# ProcessorGroup
models.Autoencoder.processor_group = @processors.ProcessorGroup()
processors.ProcessorGroup.dag = [
  (@harmonic/synths.Harmonic(),
   ['amps', 'harmonic_distribution', 'f0_hz']),
  (@noise/synths.FilteredNoise(),
   ['noise_magnitudes']),
  (@add/processors.Add(),
   ['noise/signal', 'harmonic/signal']),
]
# Harmonic Synthesizer
harmonic/synths.Harmonic.name = 'harmonic'
harmonic/synths.Harmonic.n_samples = 64000
harmonic/synths.Harmonic.scale_fn = @core.exp_sigmoid
# Filtered Noise Synthesizer
noise/synths.FilteredNoise.name = 'noise'
noise/synths.FilteredNoise.n_samples = 64000
noise/synths.FilteredNoise.window_size = 0
noise/synths.FilteredNoise.scale_fn = @core.exp_sigmoid
noise/synths.FilteredNoise.initial_bias = -10.0
# Add
add/processors.Add.name = 'add'
models.Autoencoder.losses = [
    @losses.SpectralLoss(),
]
losses.SpectralLoss.loss_type = 'L1'
losses.SpectralLoss.mag_weight = 1.0
losses.SpectralLoss.logmag_weight = 1.0
"""
with gin.unlock_config():
gin.parse_config(gin_string)
with strategy.scope():
# Autoencoder arguments are filled by gin.
model = ddsp.training.models.Autoencoder()
trainer = trainers.Trainer(model, strategy, learning_rate=1e-4)
# Build model, easiest to just run forward pass.
dataset = trainer.distribute_dataset(dataset)
trainer.build(next(iter(dataset)))
dataset_iter = iter(dataset)
for i in range(300):
losses = trainer.train_step(dataset_iter)
res_str = 'step: {}\t'.format(i)
for k, v in losses.items():
res_str += '{}: {:.2f}\t'.format(k, v)
print(res_str)
# Run a batch of predictions.
start_time = time.time()
controls = model(next(dataset_iter))
audio_gen = model.get_audio_from_outputs(controls)
print('Prediction took %.1f seconds' % (time.time() - start_time))
print('Original Audio')
play(audio)
print('Resynthesized Audio')
play(audio_gen)
print('Filtered Noise Audio')
audio_noise = controls['noise']['signal']
play(audio_noise)
specplot(audio)
specplot(audio_gen)
specplot(audio_noise)
batch_idx = 0
get = lambda key: ddsp.core.nested_lookup(key, controls)[batch_idx]
amps = get('harmonic/controls/amplitudes')
harmonic_distribution = get('harmonic/controls/harmonic_distribution')
noise_magnitudes = get('noise/controls/magnitudes')
f0_hz = get('f0_hz')
loudness = get('loudness_db')
audio_noise = get('noise/signal')
f, ax = plt.subplots(1, 2, figsize=(14, 4))
f.suptitle('Input Features', fontsize=16)
ax[0].plot(loudness)
ax[0].set_ylabel('Loudness')
ax[1].plot(f0_hz)
ax[1].set_ylabel('F0_Hz')
f, ax = plt.subplots(1, 2, figsize=(14, 4))
f.suptitle('Synth Params', fontsize=16)
ax[0].semilogy(amps)
ax[0].set_ylabel('Amps')
ax[0].set_ylim(1e-5, 2)
# ax[0].semilogy(harmonic_distribution)
ax[1].matshow(np.rot90(np.log10(harmonic_distribution + 1e-6)),
cmap=plt.cm.magma,
aspect='auto')
ax[1].set_ylabel('Harmonic Distribution')
ax[1].set_xticks([])
_ = ax[1].set_yticks([])
f, ax = plt.subplots(1, 1, figsize=(7, 4))
# f.suptitle('Filtered Noise Params', fontsize=16)
ax.matshow(np.rot90(np.log10(noise_magnitudes + 1e-6)),
cmap=plt.cm.magma,
aspect='auto')
ax.set_ylabel('Filtered Noise Magnitudes')
ax.set_xticks([])
_ = ax.set_yticks([])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Overview
Step2: Download the data and sanity check the inputs.
Step4: Neural Network Architecture
Step5: Evaluating Generative Models, and evaluating GANs
Step6: GANEstimator
Step7: Train and eval loop
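The `mnist_frechet_distance` metric used below compares real and generated images via the Fréchet distance between Gaussians fitted to their features. A minimal NumPy/SciPy sketch of that formula (an illustrative helper, not the `tensorflow_gan` implementation):

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    # Squared Frechet distance between N(mu1, sigma1) and N(mu2, sigma2):
    # ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^{1/2})
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from sqrtm
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

Identical distributions score 0, and the value shrinks as the generated feature statistics approach the real ones, which is why the training loop below tracks it per evaluation step.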
|
<ASSISTANT_TASK:>
Python Code:
# Check that imports for the rest of the file work.
import tensorflow.compat.v1 as tf
!pip install tensorflow-gan
import tensorflow_gan as tfgan
import tensorflow_datasets as tfds
import matplotlib.pyplot as plt
import numpy as np
# Allow matplotlib images to render immediately.
%matplotlib inline
tf.logging.set_verbosity(tf.logging.ERROR) # Disable noisy outputs.
import tensorflow_datasets as tfds
import tensorflow.compat.v1 as tf
def input_fn(mode, params):
assert 'batch_size' in params
assert 'noise_dims' in params
bs = params['batch_size']
nd = params['noise_dims']
split = 'train' if mode == tf.estimator.ModeKeys.TRAIN else 'test'
shuffle = (mode == tf.estimator.ModeKeys.TRAIN)
just_noise = (mode == tf.estimator.ModeKeys.PREDICT)
noise_ds = (tf.data.Dataset.from_tensors(0).repeat()
.map(lambda _: tf.random.normal([bs, nd])))
if just_noise:
return noise_ds
def _preprocess(element):
# Map [0, 255] to [-1, 1].
images = (tf.cast(element['image'], tf.float32) - 127.5) / 127.5
return images
images_ds = (tfds.load('mnist:3.*.*', split=split)
.map(_preprocess)
.cache()
.repeat())
if shuffle:
images_ds = images_ds.shuffle(
buffer_size=10000, reshuffle_each_iteration=True)
images_ds = (images_ds.batch(bs, drop_remainder=True)
.prefetch(tf.data.experimental.AUTOTUNE))
return tf.data.Dataset.zip((noise_ds, images_ds))
import matplotlib.pyplot as plt
import tensorflow_datasets as tfds
import tensorflow_gan as tfgan
import numpy as np
params = {'batch_size': 100, 'noise_dims':64}
with tf.Graph().as_default():
ds = input_fn(tf.estimator.ModeKeys.TRAIN, params)
numpy_imgs = next(iter(tfds.as_numpy(ds)))[1]
img_grid = tfgan.eval.python_image_grid(numpy_imgs, grid_shape=(10, 10))
plt.axis('off')
plt.imshow(np.squeeze(img_grid))
plt.show()
def _dense(inputs, units, l2_weight):
return tf.layers.dense(
inputs, units, None,
kernel_initializer=tf.keras.initializers.glorot_uniform,
kernel_regularizer=tf.keras.regularizers.l2(l=l2_weight),
bias_regularizer=tf.keras.regularizers.l2(l=l2_weight))
def _batch_norm(inputs, is_training):
return tf.layers.batch_normalization(
inputs, momentum=0.999, epsilon=0.001, training=is_training)
def _deconv2d(inputs, filters, kernel_size, stride, l2_weight):
return tf.layers.conv2d_transpose(
inputs, filters, [kernel_size, kernel_size], strides=[stride, stride],
activation=tf.nn.relu, padding='same',
kernel_initializer=tf.keras.initializers.glorot_uniform,
kernel_regularizer=tf.keras.regularizers.l2(l=l2_weight),
bias_regularizer=tf.keras.regularizers.l2(l=l2_weight))
def _conv2d(inputs, filters, kernel_size, stride, l2_weight):
return tf.layers.conv2d(
inputs, filters, [kernel_size, kernel_size], strides=[stride, stride],
activation=None, padding='same',
kernel_initializer=tf.keras.initializers.glorot_uniform,
kernel_regularizer=tf.keras.regularizers.l2(l=l2_weight),
bias_regularizer=tf.keras.regularizers.l2(l=l2_weight))
def unconditional_generator(noise, mode, weight_decay=2.5e-5):
    """Generator to produce unconditional MNIST images."""
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
net = _dense(noise, 1024, weight_decay)
net = _batch_norm(net, is_training)
net = tf.nn.relu(net)
net = _dense(net, 7 * 7 * 256, weight_decay)
net = _batch_norm(net, is_training)
net = tf.nn.relu(net)
net = tf.reshape(net, [-1, 7, 7, 256])
net = _deconv2d(net, 64, 4, 2, weight_decay)
net = _deconv2d(net, 64, 4, 2, weight_decay)
# Make sure that generator output is in the same range as `inputs`
# ie [-1, 1].
net = _conv2d(net, 1, 4, 1, 0.0)
net = tf.tanh(net)
return net
_leaky_relu = lambda net: tf.nn.leaky_relu(net, alpha=0.01)
def unconditional_discriminator(img, unused_conditioning, mode, weight_decay=2.5e-5):
del unused_conditioning
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
net = _conv2d(img, 64, 4, 2, weight_decay)
net = _leaky_relu(net)
net = _conv2d(net, 128, 4, 2, weight_decay)
net = _leaky_relu(net)
net = tf.layers.flatten(net)
net = _dense(net, 1024, weight_decay)
net = _batch_norm(net, is_training)
net = _leaky_relu(net)
net = _dense(net, 1, weight_decay)
return net
from tensorflow_gan.examples.mnist import util as eval_util
import os
def get_eval_metric_ops_fn(gan_model):
real_data_logits = tf.reduce_mean(gan_model.discriminator_real_outputs)
gen_data_logits = tf.reduce_mean(gan_model.discriminator_gen_outputs)
real_mnist_score = eval_util.mnist_score(gan_model.real_data)
generated_mnist_score = eval_util.mnist_score(gan_model.generated_data)
frechet_distance = eval_util.mnist_frechet_distance(
gan_model.real_data, gan_model.generated_data)
return {
'real_data_logits': tf.metrics.mean(real_data_logits),
'gen_data_logits': tf.metrics.mean(gen_data_logits),
'real_mnist_score': tf.metrics.mean(real_mnist_score),
'mnist_score': tf.metrics.mean(generated_mnist_score),
'frechet_distance': tf.metrics.mean(frechet_distance),
}
train_batch_size = 32 #@param
noise_dimensions = 64 #@param
generator_lr = 0.001 #@param
discriminator_lr = 0.0002 #@param
def gen_opt():
gstep = tf.train.get_or_create_global_step()
base_lr = generator_lr
# Halve the learning rate at 1000 steps.
lr = tf.cond(gstep < 1000, lambda: base_lr, lambda: base_lr / 2.0)
return tf.train.AdamOptimizer(lr, 0.5)
gan_estimator = tfgan.estimator.GANEstimator(
generator_fn=unconditional_generator,
discriminator_fn=unconditional_discriminator,
generator_loss_fn=tfgan.losses.wasserstein_generator_loss,
discriminator_loss_fn=tfgan.losses.wasserstein_discriminator_loss,
params={'batch_size': train_batch_size, 'noise_dims': noise_dimensions},
generator_optimizer=gen_opt,
discriminator_optimizer=tf.train.AdamOptimizer(discriminator_lr, 0.5),
get_eval_metric_ops_fn=get_eval_metric_ops_fn)
# Disable noisy output.
tf.autograph.set_verbosity(0, False)
import time
steps_per_eval = 500 #@param
max_train_steps = 5000 #@param
batches_for_eval_metrics = 100 #@param
# Used to track metrics.
steps = []
real_logits, fake_logits = [], []
real_mnist_scores, mnist_scores, frechet_distances = [], [], []
cur_step = 0
start_time = time.time()
while cur_step < max_train_steps:
next_step = min(cur_step + steps_per_eval, max_train_steps)
start = time.time()
gan_estimator.train(input_fn, max_steps=next_step)
steps_taken = next_step - cur_step
time_taken = time.time() - start
print('Time since start: %.2f min' % ((time.time() - start_time) / 60.0))
print('Trained from step %i to %i in %.2f steps / sec' % (
cur_step, next_step, steps_taken / time_taken))
cur_step = next_step
# Calculate some metrics.
metrics = gan_estimator.evaluate(input_fn, steps=batches_for_eval_metrics)
steps.append(cur_step)
real_logits.append(metrics['real_data_logits'])
fake_logits.append(metrics['gen_data_logits'])
real_mnist_scores.append(metrics['real_mnist_score'])
mnist_scores.append(metrics['mnist_score'])
frechet_distances.append(metrics['frechet_distance'])
print('Average discriminator output on Real: %.2f Fake: %.2f' % (
real_logits[-1], fake_logits[-1]))
print('Inception Score: %.2f / %.2f Frechet Distance: %.2f' % (
mnist_scores[-1], real_mnist_scores[-1], frechet_distances[-1]))
# Vizualize some images.
iterator = gan_estimator.predict(
input_fn, hooks=[tf.train.StopAtStepHook(num_steps=21)])
try:
imgs = np.array([next(iterator) for _ in range(20)])
except StopIteration:
pass
tiled = tfgan.eval.python_image_grid(imgs, grid_shape=(2, 10))
plt.axis('off')
plt.imshow(np.squeeze(tiled))
plt.show()
# Plot the metrics vs step.
plt.title('MNIST Frechet distance per step')
plt.plot(steps, frechet_distances)
plt.figure()
plt.title('MNIST Score per step')
plt.plot(steps, mnist_scores)
plt.plot(steps, real_mnist_scores)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's create a nice dataframe that makes this data easier to consume later.
Step3: Visualization
Step4: Basics
Step5: One of the important lines is this one
Step6: Bar chart
Step7: We just iterate over all entries and add these to the bar chart by using a list comprehension.
Step8: So this is pretty standard and easy to do.
Step9: This format is exactly what pygal needs. Let's create the treemap out of this data by using a list comprehension again.
Step10: Adding labels
Step11: If we generate the treemap once again, we can spot the difference in the visualization by hovering over the tiles with a pointing device.
Step12: Adding color
Step13: Let's plot this treemap. We disable the legend because it no longer makes sense (the legend's colors no longer represent the colors in the treemap).
Step14: Hacking the system
Step15: Gauge
Step16: We can visualize this, e.g., as a gauge chart.
Step17: Or in another variant of it...
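The coloring step above maps each class's coverage ratio through matplotlib's `coolwarm` colormap into a hex string that pygal can consume. Isolated as a tiny self-contained sketch:

```python
from matplotlib.cm import coolwarm
from matplotlib.colors import rgb2hex

# Map ratios in [0, 1] to '#rrggbb' strings: low values come out cool (blue),
# high values warm (red), so low coverage visually stands out.
ratios = [0.0, 0.5, 1.0]
colors = [rgb2hex(coolwarm(r)) for r in ratios]
```

The same pattern works with any matplotlib colormap, so a different palette is a one-word change.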
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
raw = pd.read_csv("datasets/jacoco_production_coverage_spring_petclinic.csv")
raw.head()
df = pd.DataFrame(index=raw.index)
df['class'] = raw['PACKAGE'] + "." + raw['CLASS']
df['lines'] = raw['LINE_MISSED'] + raw['LINE_COVERED']
df['coverage'] = raw['LINE_COVERED'] / df['lines']
df.head()
from IPython.display import display, HTML
base_html = """
<!DOCTYPE html>
<html>
  <head>
    <script type="text/javascript" src="http://kozea.github.com/pygal.js/javascripts/svg.jquery.js"></script>
    <script type="text/javascript" src="https://kozea.github.io/pygal.js/2.0.x/pygal-tooltips.min.js"></script>
  </head>
  <body>
    <figure>
      {rendered_chart}
    </figure>
  </body>
</html>
"""
import pygal
bar_chart = pygal.Bar(height=200)
bar_chart.title = 'Browser usage evolution (in %)'
bar_chart.x_labels = map(str, range(2002, 2013))
bar_chart.add('Firefox', [None, None, 0, 16.6, 25, 31, 36.4, 45.5, 46.3, 42.8, 37.1])
bar_chart.add('Chrome', [None, None, None, None, None, None, 0, 3.9, 10.8, 23.8, 35.3])
bar_chart.add('IE', [85.8, 84.6, 84.7, 74.5, 66, 58.6, 54.7, 44.8, 36.2, 26.6, 20.1])
bar_chart.add('Others', [14.2, 15.4, 15.3, 8.9, 9, 10.4, 8.9, 5.8, 6.7, 6.8, 7.5])
display(HTML(base_html.format(rendered_chart=bar_chart.render(is_unicode=True))))
df['category'] = df['class'].str.split(".").str[-2]
df.head()
mean_by_category = df.groupby('category')['coverage'].mean()
mean_by_category
bar_chart = pygal.Bar(height=200)
[bar_chart.add(x[0], x[1]) for x in mean_by_category.items()]
display(HTML(base_html.format(rendered_chart=bar_chart.render(is_unicode=True))))
values_by_category = df.groupby(['category'])['lines'].apply(list)
values_by_category
treemap = pygal.Treemap(height=200)
[treemap.add(x[0], x[1]) for x in values_by_category.items()]
display(HTML(base_html.format(rendered_chart=treemap.render(is_unicode=True))))
class_values_by_category = df.groupby(['category'], axis=0).apply(
lambda x : [{"value" : l, "label" : c } for l, c in zip(x['lines'], x['class'])])
class_values_by_category
treemap = pygal.Treemap(height=200)
[treemap.add(x[0], x[1]) for x in class_values_by_category.items()]
display(HTML(base_html.format(rendered_chart=treemap.render(is_unicode=True))))
from matplotlib.cm import coolwarm
from matplotlib.colors import rgb2hex
df['color'] = df['coverage'].apply(lambda x : rgb2hex(coolwarm(x)))
df.head()
class_ratios_by_category = df.groupby(['category'], axis=0).apply(
lambda x : [
{"value" : y,
"label" : z,
"color" : c} for y, z, c in zip(
x['lines'],
x['class'],
x['color'])])
class_ratios_by_category
treemap = pygal.Treemap(height=200, show_legend=False)
[treemap.add(x[0], x[1]) for x in class_ratios_by_category.items()]
display(HTML(base_html.format(rendered_chart=treemap.render(is_unicode=True))))
class_ratios_hack_by_category = df.groupby(['category'], axis=0).apply(
lambda x : [
{"value" : y,
"label" : z,
"color" : c,
"formatter" : lambda x : "{0:.0%}".format(f)} for y, z, c, f in zip(
x['lines'],
x['class'],
x['color'],
x['coverage'])])
class_ratios_hack_by_category
treemap = pygal.Treemap(height=200, show_legend=False, colors=["#ffffff"])
[treemap.add(x[0], x[1]) for x in class_ratios_hack_by_category.items()]
display(HTML(base_html.format(rendered_chart=treemap.render(is_unicode=True))))
mean_by_category
gauge = pygal.SolidGauge(inner_radius=0.70)
[gauge.add(x[0], [{"value": x[1] * 100}]) for x in mean_by_category.items()]
display(HTML(base_html.format(rendered_chart=gauge.render(is_unicode=True))))
gauge = pygal.Gauge(human_readable=True)
[gauge.add(x[0], [{"value": x[1] * 100}]) for x in mean_by_category.items()]
display(HTML(base_html.format(rendered_chart=gauge.render(is_unicode=True))))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Utility function
Step2: Test data
Step3: Causal Discovery
Step4: Prediction Model
Step5: Identification of Feature with Greatest Causal Influence on Prediction
Step6: Estimation of Optimal Intervention
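The causal-effect estimates in Step5 and Step6 rest on the standard linear-SEM identity: for x = Bx + e with adjacency matrix B, the matrix of total causal effects is (I − B)^{-1}. A self-contained toy illustration of that identity (hypothetical two-variable example, not lingam's API):

```python
import numpy as np

# Two-variable SEM: x2 = 2 * x1 + e2, encoded with B[i, j] = direct effect of x_j on x_i.
B = np.array([[0.0, 0.0],
              [2.0, 0.0]])

# Entry (i, j) of (I - B)^{-1} is the total causal effect of a unit
# intervention on x_j on x_i (direct plus all indirect paths).
total_effects = np.linalg.inv(np.eye(2) - B)
```

Here `total_effects[1, 0]` recovers the direct coefficient 2; in larger graphs the inverse also accumulates indirect contributions along longer paths.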
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import graphviz
import lingam
print([np.__version__, pd.__version__, graphviz.__version__, lingam.__version__])
np.set_printoptions(precision=3, suppress=True)
np.random.seed(0)
def make_graph(adjacency_matrix, labels=None):
idx = np.abs(adjacency_matrix) > 0.01
dirs = np.where(idx)
d = graphviz.Digraph(engine='dot')
names = labels if labels else [f'x{i}' for i in range(len(adjacency_matrix))]
for to, from_, coef in zip(dirs[0], dirs[1], adjacency_matrix[idx]):
d.edge(names[from_], names[to], label=f'{coef:.2f}')
return d
X = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data-original',
delim_whitespace=True, header=None,
names = ['mpg', 'cylinders', 'displacement',
'horsepower', 'weight', 'acceleration',
'model year', 'origin', 'car name'])
X.dropna(inplace=True)
X.drop(['model year', 'origin', 'car name'], axis=1, inplace=True)
print(X.shape)
X.head()
model = lingam.DirectLiNGAM()
model.fit(X)
labels = [f'{i}. {col}' for i, col in enumerate(X.columns)]
make_graph(model.adjacency_matrix_, labels)
from sklearn.linear_model import LassoCV
target = 0 # mpg
features = [i for i in range(X.shape[1]) if i != target]
reg = LassoCV(cv=5, random_state=0)
reg.fit(X.iloc[:, features], X.iloc[:, target])
ce = lingam.CausalEffect(model)
effects = ce.estimate_effects_on_prediction(X, target, reg)
df_effects = pd.DataFrame()
df_effects['feature'] = X.columns
df_effects['effect_plus'] = effects[:, 0]
df_effects['effect_minus'] = effects[:, 1]
df_effects
max_index = np.unravel_index(np.argmax(effects), effects.shape)
print(X.columns[max_index[0]])
# mpg = 15
c = ce.estimate_optimal_intervention(X, target, reg, 1, 15)
print(f'Optimal intervention: {c:.3f}')
# mpg = 21
c = ce.estimate_optimal_intervention(X, target, reg, 1, 21)
print(f'Optimal intervention: {c:.3f}')
# mpg = 30
c = ce.estimate_optimal_intervention(X, target, reg, 1, 30)
print(f'Optimal intervention: {c:.3f}')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Mass Balance
Step2: The mass balance equations may be combined with the equilibrium dissociation equations into a cubic form. Isolating $[P \bullet L]$ and $[P \bullet L \bullet P]$ from their equations yields
Step3: In this fashion we can derive an expression for the total protein concentration, $[P]_{T}$, in terms of $[P]$, $K_{D}$, $\alpha$, and $[L]$, which is as follows
Step4: By substituting the expression for free ligand concentration into our formula for total protein concentration, we can derive an expression for $[P]_{T}$ that depends instead on $[P]$, $K_{D}$, $\alpha$, and $[L]_{T}$, which is
Step5: Rearrange P_Total to cubic form
Step6: There is only one physically meaningful real solution to this cubic, whose value represents the appropriate value of $[P]$ given the equilibrium system's specific traits $K_{D}$, $\alpha$, $[L]_{T}$, and $[P]_{T}$. The solution takes the following form
Step7: Putting the model to work
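Concretely, with the coefficients a, b, c from Step5, the discriminant-dependent branches implemented in `calc_qr`, `cartesian_cubic`, and `polar_cubic` below are the standard Cardano/trigonometric forms:

```latex
Q = \frac{3b - a^{2}}{9}, \qquad R = \frac{9ab - 27c - 2a^{3}}{54}

% One real root (Q^{3} + R^{2} > 0):
[P] = -\frac{a}{3} + \sqrt[3]{R + \sqrt{Q^{3} + R^{2}}} + \sqrt[3]{R - \sqrt{Q^{3} + R^{2}}}

% Three real roots (Q^{3} + R^{2} < 0); the physically meaningful one is:
\theta = \arccos\!\left(\frac{R}{\sqrt{-Q^{3}}}\right), \qquad
[P] = 2\sqrt{-Q}\,\cos\!\left(\frac{\theta}{3}\right) - \frac{a}{3}
```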
|
<ASSISTANT_TASK:>
Python Code:
#Kd-prime and Kd-doubleprime as expressions of Kd and alpha (cooperativity)
#as well as their concentration ratios
kd, alpha, p, l, pl, plp = symbols('K_{D} alpha [P] [L] [PL] [PLP]')
kd_p = Eq(kd / 2, p * l / pl)
kd_p
kd_pp = Eq(2 * kd / alpha, p * pl / plp)
kd_pp
l_t, p_t = symbols('[L]_{T} [P]_{T}')
#Represent L_total
l_total = Eq(l_t, l + pl + plp)
l_total
#Represent P_total
p_total = Eq(p_t, p + pl + 2 * plp)
p_total
#Isolate PL from Kd'
isol_pl = solvers.solve(kd_p, pl)[0]
#Isolate PLP from Kd''
isol_plp = solvers.solve(kd_pp, plp)[0]
#Replace isolated PLP expression with new form using substituted isolated PL
isol_plp = isol_plp.subs(pl, isol_pl)
#Show isolated PL expresson
isol_pl
#Show isolated PLP expression
isol_plp
#Substitute these expressions into L_total
subs_l_total = l_total.subs(plp, isol_plp)
subs_l_total = subs_l_total.subs(pl, isol_pl)
#Solve for L
l_free = solvers.solve(subs_l_total, l)[0]
#Show L_free
l_free
#Substitute the isolated PL/PLP expressions into P_total
subs_p_total = p_total.subs(plp, isol_plp)
subs_p_total = subs_p_total.subs(pl, isol_pl)
#Show P_total
subs_p_total
#Substitute in our l_free expression into sub_p_total to replace dependence on [L] with [L]_total
subs_p_total = subs_p_total.subs(l, l_free)
#Show subs_p_total
subs_p_total
#Rearrange P_total to the other side then expand
p_expression = solvers.solve(subs_p_total, p_t)[0] - p_t
p_expression = p_expression.expand()
#Multiply the expression by the proper value to obtain [P]^(3) with coefficient of alpha
p_expression = p_expression * (kd ** 2 + 2 * kd * p + p ** 2 * alpha)
#Cancel through and show p_expression
p_expression = cancel(p_expression)
p_expression
#Collect the terms of the polynomial with the same power of [P]
power_coeffs = collect(p_expression, p, evaluate=False)
#Normalize for coefficientless [P]**3 by dividing all terms by alpha and assign/display
a, b, c = symbols('a b c')
a = power_coeffs[p**2] / alpha
b = power_coeffs[p**1] / alpha
c = power_coeffs[p**0] / alpha
a, b, c
#Using these coefficients, compose our new cubic function
cubic_p_expression = p ** 3 + a * p **2 + b * p + c
cubic_p_expression
#For the latex representation in the markdown, I simply rearranged this equation into
#power order and clarified this by placing [P]**n beside the fraction
pl_expr = isol_pl.subs(l, l_free)
pl_expr
plp_expr = isol_plp.subs(l, l_free)
plp_expr
#First we need to be able to calculate our cubic polynomial constants
def calc_abc(kd, alpha, p_total, l_total):
a = 2.0 * kd / alpha + 2.0 * l_total - p_total
b = (np.power(kd, 2.0) + 2.0 * kd * l_total - 2.0 * kd * p_total) / alpha
c = -1 * (np.power(kd, 2.0) * p_total) / alpha
return a, b, c
#Secondly we need to calculate the Q and R for the cubic solution
def calc_qr(a, b, c):
q = (3 * b - np.power(a, 2)) / 9
r = (9 * a * b - 27 * c - 2.0 * np.power(a, 3)) / 54
return q, r
#Thirdly we need to be able to solve the cubic formula in either cartesian or polar coords
def cartesian_cubic(a, q, r): # For use if Q^3+R^2 > 0
first = -1 * a / 3.0
    # Use cbrt so negative radicands keep their real cube root (np.power would give nan).
    second = np.cbrt(r + np.power(np.power(q, 3.0) + np.power(r, 2.0), 0.5))
    third = np.cbrt(r - np.power(np.power(q, 3.0) + np.power(r, 2.0), 0.5))
return first + second + third
def polar_cubic(a, q, r): # For use if Q^3+R^2 < 0
theta = np.arccos(r / np.power(-1 * np.power(q, 3), 0.5))
return np.cos(theta / 3.0) * np.power(-1 * q, 0.5) * 2.0 - (a / 3.0)
#If we wish to plot [PL] and [PLP] as well, we need these
def get_pl(kd, alpha, l_total, p):
numerator = 2.0 * kd * l_total * p
denominator = np.power(kd, 2.0) + 2.0 * kd * p + alpha * np.power(p, 2.0)
return numerator / denominator
def get_plp(kd, alpha, l_total, p):
numerator = alpha * l_total * np.power(p, 2.0)
denominator = np.power(kd, 2.0) + 2.0 * kd * p + alpha * np.power(p, 2.0)
return numerator / denominator
def model_func(kd, alpha, p_total, l_total):
a, b, c = calc_abc(kd, alpha, p_total, l_total)
q, r = calc_qr(a, b, c)
p = []
for a_val, q_val, r_val in zip(a, q, r):
if np.power(q_val, 3) + np.power(r_val, 2) > 0:
p.append(cartesian_cubic(a_val, q_val, r_val))
else:
p.append(polar_cubic(a_val, q_val, r_val))
p = np.array(p)
pl = get_pl(kd, alpha, l_total, p)
plp = get_plp(kd, alpha, l_total, p)
return p, pl, plp
#Create the plot
plot = pylab.figure().add_subplot(111)
total_protein = 0.1
lig_range = 0.00001 * np.power(10, np.linspace(1, 8, 150))
p, pl, plp = model_func(0.02, 0.05, total_protein, lig_range)
plot.plot(lig_range, p / total_protein, label='[P]')
plot.plot(lig_range, pl / total_protein, label='[PL]')
plot.plot(lig_range, plp / total_protein, label='[PLP]')
plot.set_ylabel('[P]')
plot.set_xscale('log')
plot.set_xlabel(r'Ligand (uM)')
plot.legend(loc='center right')
plot.grid()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Type
Step7: 1.4. Elemental Stoichiometry
Step8: 1.5. Elemental Stoichiometry Details
Step9: 1.6. Prognostic Variables
Step10: 1.7. Diagnostic Variables
Step11: 1.8. Damping
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Step13: 2.2. Timestep If Not From Ocean
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Step15: 3.2. Timestep If Not From Ocean
Step16: 4. Key Properties --> Transport Scheme
Step17: 4.2. Scheme
Step18: 4.3. Use Different Scheme
Step19: 5. Key Properties --> Boundary Forcing
Step20: 5.2. River Input
Step21: 5.3. Sediments From Boundary Conditions
Step22: 5.4. Sediments From Explicit Model
Step23: 6. Key Properties --> Gas Exchange
Step24: 6.2. CO2 Exchange Type
Step25: 6.3. O2 Exchange Present
Step26: 6.4. O2 Exchange Type
Step27: 6.5. DMS Exchange Present
Step28: 6.6. DMS Exchange Type
Step29: 6.7. N2 Exchange Present
Step30: 6.8. N2 Exchange Type
Step31: 6.9. N2O Exchange Present
Step32: 6.10. N2O Exchange Type
Step33: 6.11. CFC11 Exchange Present
Step34: 6.12. CFC11 Exchange Type
Step35: 6.13. CFC12 Exchange Present
Step36: 6.14. CFC12 Exchange Type
Step37: 6.15. SF6 Exchange Present
Step38: 6.16. SF6 Exchange Type
Step39: 6.17. 13CO2 Exchange Present
Step40: 6.18. 13CO2 Exchange Type
Step41: 6.19. 14CO2 Exchange Present
Step42: 6.20. 14CO2 Exchange Type
Step43: 6.21. Other Gases
Step44: 7. Key Properties --> Carbon Chemistry
Step45: 7.2. PH Scale
Step46: 7.3. Constants If Not OMIP
Step47: 8. Tracers
Step48: 8.2. Sulfur Cycle Present
Step49: 8.3. Nutrients Present
Step50: 8.4. Nitrous Species If N
Step51: 8.5. Nitrous Processes If N
Step52: 9. Tracers --> Ecosystem
Step53: 9.2. Upper Trophic Levels Treatment
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Step55: 10.2. Pft
Step56: 10.3. Size Classes
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Step58: 11.2. Size Classes
Step59: 12. Tracers --> Disolved Organic Matter
Step60: 12.2. Lability
Step61: 13. Tracers --> Particules
Step62: 13.2. Types If Prognostic
Step63: 13.3. Size If Prognostic
Step64: 13.4. Size If Discrete
Step65: 13.5. Sinking Speed If Prognostic
Step66: 14. Tracers --> Dic Alkalinity
Step67: 14.2. Abiotic Carbon
Step68: 14.3. Alkalinity
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nims-kma', 'sandbox-3', 'ocnbgchem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Table of Contents
Step2: Back to Top
Step3: If SYSTOOLS is not part of the path, you can update it with the following SQL.
Step4: From this point on you won't need to added the SYSTOOLS schema on the front of any of your SQL
Step5: Back to Top
Step6: This is an example of a poorly formatted JSON document.
Step7: Back to Top
Step8: If you want to extract the entire contents of a JSON field, you need to use the BSON2JSON function.
Step9: One thing that you should note is that the JSON that is retrieved has been modified slightly so that
Step10: The following SQL will inject a bad value into the beginning of the JSON field to test the results from the
Step11: The BSON_VALIDATE should return a zero for this particular row since it is not a valid BSON document.
Step12: Back to Top
Step13: We can check the count of records to make sure that 42 employees were added to our table.
Step14: Additional DEPARTMENT Table
Step15: Back to Top
Step16: If the size of the field being returned is larger than the field specification,
Step17: In the case of character fields, you may need to specify a larger return
Step18: Back to Top
Step19: If you specify "
Step20: If you need to access a specific array element in a field, you can use the "dot"
Step21: If you attempt to retrieve the pay field, you will end up with a NULL value, not
Step22: The results contain 40 employees who have a middle initial field, and two that do not.
Step23: If you only want to know how many employees have the middle initial field (midinit)
Step24: Back to Top
Step25: You need to ensure that the data types from both JSON functions are compatible for
Step26: The following SQL will generate a list of data types and field names found within this document.
Step27: The following sections will show how we can get atomic (non-array) types out of
Step28: Back to Top
Step29: The JSON_TYPE function will verify that this is an integer field (Type=16).
Step30: You can retrieve an integer value with either the 'i' flag or the 'l' flag.
Step31: We can ask that the value be interpreted as a BIGINT by using the 'l' flag,
Step32: The next SQL statement will create a field with a BIGINT size. Note that we don't
Step33: The JSON_TYPE function will verify that this is a big integer field (Type=18).
Step34: We can check to see that the data is stored in the document as a BIGINT by
Step35: Returning the data as an integer type 'i' will fail since the number is too big
Step36: Specifying the 'l' flag will make the data be returned properly.
Step37: Since we have an integer in the JSON field, we also have the option of returning
Step38: Back to Top
Step39: The JSON_TYPE function will verify that this is a numeric field (Type=1).
Step40: Numeric data can be retrieved in either number (n) format, integer (i - note that
Step41: You may wonder why number format (n) results in an answer that has a fractional
Step42: A floating-point number is recognized by the Exx specifier in the number. The
Step43: The JSON_TYPE function will verify that this is a floating point field (Type=1).
Step44: The floating-point value can be retrieved as a number, integer, or floating point value.
Step45: Back to Top
Step46: We will double-check what type the field is in the JSON record.
Step47: To retrieve the value, we can ask that it be formatted as an integer or number.
Step48: You can also retrieve a boolean field as a character or
Step49: Back to Top
Step50: Querying the data type of this field using JSON_VAL will return a value of 9 (date type).
Step51: If you decide to use a character string to represent a date, you can use either
Step52: Using the 'd' specification will return the value as a date.
Step53: What about timestamps? If you decide to store a timestamp into a field, you can
Step54: Retrieving it as a Date will also work, but the time portion will be removed.
Step55: You can also ask for the timestamp value by using the 'ts'
Step56: To force the value to return just the time portion, either
Step57: Back to Top
Step58: You must specify a length for the 's' parameter otherwise
Step59: Back to Top
Step60: By default, only the first number of an array is returned
Step61: The TABLE( ... ) specification in the FROM clause is used
Step62: Only a subset of the results is shown above, but you will
Step63: You can even find out how many people are sharing
Step64: Back to Top
Step65: Back to Top
Step66: An easier way is to use the JSON_GET_POS_ARR_INDEX function.
Step67: If we used quotes around the phone number, the function will not match any of
Step68: To add a new field to the record, the JSON_UPDATE function needs to specify the
Step69: Retrieving the document shows that the lastname field has now been added to the record.
Step70: If you specify a field that is an array type and do not
Step71: Running the SQL against the original phone data will work properly.
Step72: To remove the phone number field you need to use the $unset keyword and set the field to null.
Step73: Back to Top
Step74: The following command will time the select statement.
Step75: To create an index on the empno field, we use the JSON_VAL function to extract the
Step76: Rerunning the SQL results in the following performance
Step77: Db2 can now use the index to retrieve the record and the following plot shows the increased throughput.
Step78: Back to Top
Step79: To use INSTEAD OF triggers, a view needs to be created on
Step80: At this point we can create three INSTEAD OF triggers to handle insert,
Step81: On UPDATES, the sequence number remains the same, and the BSON field is updated
Step82: Finally, the DELETE trigger will just remove the row.
Step83: Applications will only deal with the EMP_TXS view. Any
Step84: Selecting from the EMP_TXS view will return the JSON in a readable format
Step85: The base table only contains the BSON but the view translates the value back into a readable format.
Step86: If you want to manipulate the BSON directly (say change the employee number),
Step87: And we can check it using our original view.
|
<ASSISTANT_TASK:>
Python Code:
%run db2.ipynb
%%sql -q
CREATE FUNCTION SYSTOOLS.JSON_TABLE(
INJSON BLOB(16M), INELEM VARCHAR(2048), RETTYPE VARCHAR(100))
RETURNS TABLE(TYPE INTEGER, VALUE VARCHAR(2048))
LANGUAGE C
PARAMETER STYLE SQL
PARAMETER CCSID UNICODE
NO SQL
NOT FENCED
DETERMINISTIC
NO EXTERNAL ACTION
DISALLOW PARALLEL
SCRATCHPAD 2048
EXTERNAL NAME 'db2json!jsonTable';
CREATE FUNCTION SYSTOOLS.JSON_TYPE(
INJSON BLOB(16M), INELEM VARCHAR(2048), MAXLENGTH INTEGER)
RETURNS INTEGER
LANGUAGE C
PARAMETER STYLE SQL
PARAMETER CCSID UNICODE
NO SQL
NOT FENCED
DETERMINISTIC
ALLOW PARALLEL
RETURNS NULL ON NULL INPUT
NO EXTERNAL ACTION
EXTERNAL NAME 'db2json!jsonType';
CREATE FUNCTION SYSTOOLS.JSON_LEN(
INJSON BLOB(16M), INELEM VARCHAR(2048))
RETURNS INTEGER
LANGUAGE C
PARAMETER STYLE SQL
PARAMETER CCSID UNICODE
NO SQL
NOT FENCED
DETERMINISTIC
ALLOW PARALLEL
NO EXTERNAL ACTION
SCRATCHPAD 2048
EXTERNAL NAME 'db2json!jsonLen';
CREATE FUNCTION SYSTOOLS.BSON2JSON(INBSON BLOB(16M)) RETURNS CLOB(16M)
LANGUAGE C
PARAMETER STYLE SQL
PARAMETER CCSID UNICODE
NO SQL
NOT FENCED
DETERMINISTIC
ALLOW PARALLEL
NO EXTERNAL ACTION
SCRATCHPAD 2048
EXTERNAL NAME 'db2json!jsonBsonToJson';
CREATE FUNCTION SYSTOOLS.JSON2BSON(INJSON CLOB(16M)) RETURNS BLOB(16M)
LANGUAGE C
PARAMETER STYLE SQL
PARAMETER CCSID UNICODE
NO SQL
NOT FENCED
DETERMINISTIC
ALLOW PARALLEL
NO EXTERNAL ACTION
SCRATCHPAD 2048
EXTERNAL NAME 'db2json!jsonToBson';
CREATE FUNCTION SYSTOOLS.JSON_GET_POS_ARR_INDEX(
INJSON BLOB(16M), QUERY VARCHAR(32672) FOR BIT DATA)
RETURNS INTEGER
LANGUAGE C
PARAMETER STYLE SQL
PARAMETER CCSID UNICODE
NO SQL
NOT FENCED
DETERMINISTIC
ALLOW PARALLEL
CALLED ON NULL INPUT
NO EXTERNAL ACTION
SCRATCHPAD 2048
EXTERNAL NAME 'db2json!jsonGetPosArrIndex';
CREATE FUNCTION SYSTOOLS.JSON_UPDATE(
INJSON BLOB(16M), INELEM VARCHAR(32672))
RETURNS BLOB(16M)
LANGUAGE C
PARAMETER STYLE SQL
PARAMETER CCSID UNICODE
NO SQL
NOT FENCED
DETERMINISTIC
ALLOW PARALLEL
CALLED ON NULL INPUT
NO EXTERNAL ACTION
SCRATCHPAD 2048
EXTERNAL NAME 'db2json!jsonUpdate2';
CREATE FUNCTION SYSTOOLS.BSON_VALIDATE(
INJSON BLOB(16M))
RETURNS INT
LANGUAGE C
PARAMETER STYLE SQL
PARAMETER CCSID UNICODE
NO SQL
NOT FENCED
DETERMINISTIC
ALLOW PARALLEL
RETURNS NULL ON NULL INPUT
NO EXTERNAL ACTION
EXTERNAL NAME 'db2json!jsonValidate';
%sql VALUES CURRENT PATH
%sql SET CURRENT PATH = CURRENT PATH, SYSTOOLS
%sql -q DROP TABLE TESTJSON
%%sql
CREATE TABLE TESTJSON
(
JSON_FIELD BLOB(4000) INLINE LENGTH 4000
)
%%sql
INSERT INTO TESTJSON VALUES ( JSON2BSON('{Name:"George"}') )
%sql -j select bson2json(json_field) from testjson
%%sql
INSERT INTO TESTJSON VALUES
( JSON2BSON('{Name:, Age: 32}'))
%%sql
SELECT CAST(JSON_FIELD AS VARCHAR(60)) FROM TESTJSON
%%sql -j
SELECT BSON2JSON(JSON_FIELD) FROM TESTJSON
%%sql
SELECT BSON_VALIDATE(JSON_FIELD) FROM TESTJSON
%%sql
UPDATE TESTJSON
SET JSON_FIELD = BLOB('!') || JSON_FIELD
%%sql
SELECT BSON_VALIDATE(JSON_FIELD) FROM TESTJSON
%%sql
DROP TABLE JSON_EMP;
CREATE TABLE JSON_EMP
(
SEQ INT NOT NULL GENERATED ALWAYS AS IDENTITY,
EMP_DATA BLOB(4000) INLINE LENGTH 4000
);
INSERT INTO JSON_EMP(EMP_DATA) VALUES
JSON2BSON( '{ "empno":"000010", "firstnme":"CHRISTINE", "midinit":"I", "lastname":"HAAS", "workdept":"A00", "phoneno":[3978], "hiredate":"01/01/1995", "job":"PRES", "edlevel":18, "sex":"F", "birthdate":"08/24/1963", "pay" : { "salary":152750.00, "bonus":1000.00, "comm":4220.00} }'),
JSON2BSON( '{"empno":"000020","firstnme":"MICHAEL","lastname":"THOMPSON", "workdept":"B01","phoneno":[3476,1422],"hiredate":"10/10/2003", "job":"MANAGER","edlevel":18,"sex":"M","birthdate":"02/02/1978", "pay": {"salary":94250.00,"bonus":800.00,"comm":3300.00}}'),
JSON2BSON( '{"empno":"000030","firstnme":"SALLY","midinit":"A","lastname":"KWAN", "workdept":"C01","phoneno":[4738],"hiredate":"04/05/2005", "job":"MANAGER","edlevel":20,"sex":"F","birthdate":"05/11/1971", "pay": {"salary":98250.00,"bonus":800.00,"comm":3060.00} }'),
JSON2BSON( '{ "empno":"000050","firstnme":"JOHN","midinit":"B","lastname":"GEYER", "workdept":"E01","phoneno":[6789],"hiredate":"08/17/1979", "job":"MANAGER","edlevel":16,"sex":"M","birthdate":"09/15/1955", "pay": {"salary":80175.00,"bonus":800.00,"comm":3214.00} }'),
JSON2BSON( '{ "empno":"000060","firstnme":"IRVING","lastname":"STERN", "workdept":"D11","phoneno":[6423,2433],"hiredate":"09/14/2003", "job":"MANAGER","edlevel":16,"sex":"M","birthdate":"07/07/1975", "pay": {"salary":72250.00,"bonus":500.00,"comm":2580.00} }'),
JSON2BSON( '{ "empno":"000070","firstnme":"EVA","midinit":"D","lastname":"PULASKI", "workdept":"D21","phoneno":[7831,1422,4567],"hiredate":"09/30/2005", "job":"MANAGER","edlevel":16,"sex":"F","birthdate":"05/26/2003", "pay": {"salary":96170.00,"bonus":700.00,"comm":2893.00} }'),
JSON2BSON( '{ "empno":"000090","firstnme":"EILEEN","midinit":"W","lastname":"HENDERSON", "workdept":"E11","phoneno":[5498],"hiredate":"08/15/2000", "job":"MANAGER","edlevel":16,"sex":"F","birthdate":"05/15/1971", "pay": {"salary":89750.00,"bonus":600.00,"comm":2380.00} }'),
JSON2BSON( '{ "empno":"000100","firstnme":"THEODORE","midinit":"Q","lastname":"SPENSER", "workdept":"E21","phoneno":[0972],"hiredate":"06/19/2000", "job":"MANAGER","edlevel":14,"sex":"M","birthdate":"12/18/1980", "pay": {"salary":86150.00,"bonus":500.00,"comm":2092.00} }'),
JSON2BSON( '{ "empno":"000110","firstnme":"VINCENZO","midinit":"G","lastname":"LUCCHESSI", "workdept":"A00","phoneno":[3490,3567],"hiredate":"05/16/1988", "job":"SALESREP","edlevel":19,"sex":"M","birthdate":"11/05/1959", "pay": {"salary":66500.00,"bonus":900.00,"comm":3720.00} }'),
JSON2BSON( '{ "empno":"000120","firstnme":"SEAN","midinit":"","lastname":"O''CONNELL", "workdept":"A00","phoneno":[2167,1533],"hiredate":"12/05/1993", "job":"CLERK","edlevel":14,"sex":"M","birthdate":"10/18/1972", "pay": {"salary":49250.00,"bonus":600.00,"comm":2340.00} }'),
JSON2BSON( '{ "empno":"000130","firstnme":"DELORES","midinit":"M","lastname":"QUINTANA", "workdept":"C01","phoneno":[4578],"hiredate":"07/28/2001", "job":"ANALYST","edlevel":16,"sex":"F","birthdate":"09/15/1955", "pay": {"salary":73800.00,"bonus":500.00,"comm":1904.00} }'),
JSON2BSON( '{ "empno":"000140","firstnme":"HEATHER","midinit":"A","lastname":"NICHOLLS", "workdept":"C01","phoneno":[1793],"hiredate":"12/15/2006", "job":"ANALYST","edlevel":18,"sex":"F","birthdate":"01/19/1976", "pay": {"salary":68420.00,"bonus":600.00,"comm":2274.00} }'),
JSON2BSON( '{ "empno":"000150","firstnme":"BRUCE","midinit":"","lastname":"ADAMSON", "workdept":"D11","phoneno":[4510],"hiredate":"02/12/2002", "job":"DESIGNER","edlevel":16,"sex":"M","birthdate":"05/17/1977", "pay": {"salary":55280.00,"bonus":500.00,"comm":2022.00} }'),
JSON2BSON( '{ "empno":"000160","firstnme":"ELIZABETH","midinit":"R","lastname":"PIANKA", "workdept":"D11","phoneno":[3782,9322],"hiredate":"10/11/2006", "job":"DESIGNER","edlevel":17,"sex":"F","birthdate":"04/12/1980", "pay": {"salary":62250.00,"bonus":400.00,"comm":1780.00} }'),
JSON2BSON( '{ "empno":"000170","firstnme":"MASATOSHI","midinit":"J","lastname":"YOSHIMURA", "workdept":"D11","phoneno":[2890],"hiredate":"09/15/1999", "job":"DESIGNER","edlevel":16,"sex":"M","birthdate":"01/05/1981", "pay": {"salary":44680.00,"bonus":500.00,"comm":1974.00} }'),
JSON2BSON( '{ "empno":"000180","firstnme":"MARILYN","midinit":"S","lastname":"SCOUTTEN", "workdept":"D11","phoneno":[1682,9945],"hiredate":"07/07/2003", "job":"DESIGNER","edlevel":17,"sex":"F","birthdate":"02/21/1979", "pay": {"salary":51340.00,"bonus":500.00,"comm":1707.00} }'),
JSON2BSON( '{ "empno":"000190","firstnme":"JAMES","midinit":"H","lastname":"WALKER", "workdept":"D11","phoneno":[2986,3644],"hiredate":"07/26/2004", "job":"DESIGNER","edlevel":16,"sex":"M","birthdate":"06/25/1982", "pay": {"salary":50450.00,"bonus":400.00,"comm":1636.00} }'),
JSON2BSON( '{ "empno":"000200","firstnme":"DAVID","midinit":"","lastname":"BROWN", "workdept":"D11","phoneno":[4501,2522],"hiredate":"03/03/2002", "job":"DESIGNER","edlevel":16,"sex":"M","birthdate":"05/29/1971", "pay": {"salary":57740.00,"bonus":600.00,"comm":2217.00} }'),
JSON2BSON( '{ "empno":"000210","firstnme":"WILLIAM","midinit":"T","lastname":"JONES", "workdept":"","phoneno":[0942],"hiredate":"04/11/1998", "job":"DESIGNER","edlevel":17,"sex":"M","birthdate":"02/23/2003", "pay": {"salary":68270.00,"bonus":400.00,"comm":1462.00} }'),
JSON2BSON( '{ "empno":"000220","firstnme":"JENNIFER","midinit":"K","lastname":"LUTZ", "workdept":"D11","phoneno":[0672],"hiredate":"08/29/1998", "job":"DESIGNER","edlevel":18,"sex":"F","birthdate":"03/19/1978", "pay": {"salary":49840.00,"bonus":600.00,"comm":2387.00} }'),
JSON2BSON( '{ "empno":"000230","firstnme":"JAMES","midinit":"J","lastname":"JEFFERSON", "workdept":"D21","phoneno":[2094,8999,3756],"hiredate":"11/21/1996", "job":"CLERK","edlevel":14,"sex":"M","birthdate":"05/30/1980", "pay": {"salary":42180.00,"bonus":400.00,"comm":1774.00} }'),
JSON2BSON( '{ "empno":"000240","firstnme":"SALVATORE","midinit":"M","lastname":"MARINO", "workdept":"D21","phoneno":[3780],"hiredate":"12/05/2004", "job":"CLERK","edlevel":17,"sex":"M","birthdate":"03/31/2002", "pay": {"salary":48760.00,"bonus":600.00,"comm":2301.00} }'),
JSON2BSON( '{ "empno":"000250","firstnme":"DANIEL","midinit":"S","lastname":"SMITH", "workdept":"D21","phoneno":[0961],"hiredate":"10/30/1999", "job":"CLERK","edlevel":15,"sex":"M","birthdate":"11/12/1969", "pay": {"salary":49180.00,"bonus":400.00,"comm":1534.00} }'),
JSON2BSON( '{ "empno":"000260","firstnme":"SYBIL","midinit":"P","lastname":"JOHNSON", "workdept":"D21","phoneno":[8953,2533],"hiredate":"09/11/2005", "job":"CLERK","edlevel":16,"sex":"F","birthdate":"10/05/1976", "pay": {"salary":47250.00,"bonus":300.00,"comm":1380.00} }'),
JSON2BSON( '{ "empno":"000270","firstnme":"MARIA","midinit":"L","lastname":"PEREZ", "workdept":"D21","phoneno":[9001],"hiredate":"09/30/2006", "job":"CLERK","edlevel":15,"sex":"F","birthdate":"05/26/2003", "pay": {"salary":37380.00,"bonus":500.00,"comm":2190.00} }'),
JSON2BSON( '{ "empno":"000280","firstnme":"ETHEL","midinit":"R","lastname":"SCHNEIDER", "workdept":"E11","phoneno":[8997,1422],"hiredate":"03/24/1997", "job":"OPERATOR","edlevel":17,"sex":"F","birthdate":"03/28/1976", "pay": {"salary":36250.00,"bonus":500.00,"comm":2100.00} }'),
JSON2BSON( '{ "empno":"000290","firstnme":"JOHN","midinit":"R","lastname":"PARKER", "workdept":"E11","phoneno":[4502],"hiredate":"05/30/2006", "job":"OPERATOR","edlevel":12,"sex":"M","birthdate":"07/09/1985", "pay": {"salary":35340.00,"bonus":300.00,"comm":1227.00} }'),
JSON2BSON( '{ "empno":"000300","firstnme":"PHILIP","midinit":"X","lastname":"SMITH", "workdept":"E11","phoneno":[2095],"hiredate":"06/19/2002", "job":"OPERATOR","edlevel":14,"sex":"M","birthdate":"10/27/1976", "pay": {"salary":37750.00,"bonus":400.00,"comm":1420.00} }'),
JSON2BSON( '{ "empno":"000310","firstnme":"MAUDE","midinit":"F","lastname":"SETRIGHT", "workdept":"E11","phoneno":[3332,8005],"hiredate":"09/12/1994", "job":"OPERATOR","edlevel":12,"sex":"F","birthdate":"04/21/1961", "pay": {"salary":35900.00,"bonus":300.00,"comm":1272.00} }'),
JSON2BSON( '{ "empno":"000320","firstnme":"RAMLAL","midinit":"V","lastname":"MEHTA", "workdept":"E21","phoneno":[9990,1533],"hiredate":"07/07/1995", "job":"FIELDREP","edlevel":16,"sex":"M","birthdate":"08/11/1962", "pay": {"salary":39950.00,"bonus":400.00,"comm":1596.00} }'),
JSON2BSON( '{ "empno":"000330","firstnme":"WING","midinit":"","lastname":"LEE", "workdept":"E21","phoneno":[2103,2453],"hiredate":"02/23/2006", "job":"FIELDREP","edlevel":14,"sex":"M","birthdate":"07/18/1971", "pay": {"salary":45370.00,"bonus":500.00,"comm":2030.00} }'),
JSON2BSON( '{ "empno":"000340","firstnme":"JASON","midinit":"R","lastname":"GOUNOT", "workdept":"E21","phoneno":[5698,7744],"hiredate":"05/05/1977", "job":"FIELDREP","edlevel":16,"sex":"M","birthdate":"05/17/1956", "pay": {"salary":43840.00,"bonus":500.00,"comm":1907.00} }'),
JSON2BSON( '{ "empno":"200010","firstnme":"DIAN","midinit":"J","lastname":"HEMMINGER", "workdept":"A00","phoneno":[3978,2564],"hiredate":"01/01/1995", "job":"SALESREP","edlevel":18,"sex":"F","birthdate":"08/14/1973", "pay": {"salary":46500.00,"bonus":1000.00,"comm":4220.00} }'),
JSON2BSON( '{ "empno":"200120","firstnme":"GREG","midinit":"","lastname":"ORLANDO", "workdept":"A00","phoneno":[2167,1690],"hiredate":"05/05/2002", "job":"CLERK","edlevel":14,"sex":"M","birthdate":"10/18/1972", "pay": {"salary":39250.00,"bonus":600.00,"comm":2340.00} }'),
JSON2BSON( '{ "empno":"200140","firstnme":"KIM","midinit":"N","lastname":"NATZ", "workdept":"C01","phoneno":[1793],"hiredate":"12/15/2006", "job":"ANALYST","edlevel":18,"sex":"F","birthdate":"01/19/1976", "pay": {"salary":68420.00,"bonus":600.00,"comm":2274.00} }'),
JSON2BSON( '{ "empno":"200170","firstnme":"KIYOSHI","midinit":"","lastname":"YAMAMOTO", "workdept":"D11","phoneno":[2890],"hiredate":"09/15/2005", "job":"DESIGNER","edlevel":16,"sex":"M","birthdate":"01/05/1981", "pay": {"salary":64680.00,"bonus":500.00,"comm":1974.00} }'),
JSON2BSON( '{ "empno":"200220","firstnme":"REBA","midinit":"K","lastname":"JOHN", "workdept":"D11","phoneno":[0672],"hiredate":"08/29/2005", "job":"DESIGNER","edlevel":18,"sex":"F","birthdate":"03/19/1978", "pay": {"salary":69840.00,"bonus":600.00,"comm":2387.00} }'),
JSON2BSON( '{ "empno":"200240","firstnme":"ROBERT","midinit":"M","lastname":"MONTEVERDE", "workdept":"D21","phoneno":[3780,6823],"hiredate":"12/05/2004", "job":"CLERK","edlevel":17,"sex":"M","birthdate":"03/31/1984", "pay": {"salary":37760.00,"bonus":600.00,"comm":2301.00} }'),
JSON2BSON( '{ "empno":"200280","firstnme":"EILEEN","midinit":"R","lastname":"SCHWARTZ", "workdept":"E11","phoneno":[8997,9410],"hiredate":"03/24/1997", "job":"OPERATOR","edlevel":17,"sex":"F","birthdate":"03/28/1966", "pay": {"salary":46250.00,"bonus":500.00,"comm":2100.00} }'),
JSON2BSON( '{ "empno":"200310","firstnme":"MICHELLE","midinit":"F","lastname":"SPRINGER", "workdept":"E11","phoneno":[3332,7889],"hiredate":"09/12/1994", "job":"OPERATOR","edlevel":12,"sex":"F","birthdate":"04/21/1961", "pay": {"salary":35900.00,"bonus":300.00,"comm":1272.00} }'),
JSON2BSON( '{ "empno":"200330","firstnme":"HELENA","midinit":"","lastname":"WONG", "workdept":"E21","phoneno":[2103],"hiredate":"02/23/2006", "job":"FIELDREP","edlevel":14,"sex":"F","birthdate":"07/18/1971", "pay": {"salary":35370.00,"bonus":500.00,"comm":2030.00} }'),
JSON2BSON( '{ "empno":"200340","firstnme":"ROY","midinit":"R","lastname":"ALONZO", "workdept":"E21","phoneno":[5698,1533],"hiredate":"07/05/1997", "job":"FIELDREP","edlevel":16,"sex":"M","birthdate":"05/17/1956", "pay": {"salary":31840.00,"bonus":500.00,"comm":1907.00} }')
;
%sql SELECT COUNT(*) FROM JSON_EMP
%%sql -q
DROP TABLE JSON_DEPT;
CREATE TABLE JSON_DEPT
(
SEQ INT NOT NULL GENERATED ALWAYS AS IDENTITY,
DEPT_DATA BLOB(4000) INLINE LENGTH 4000
);
INSERT INTO JSON_DEPT(DEPT_DATA) VALUES
JSON2BSON('{"deptno":"A00", "mgrno":"000010", "admrdept":"A00", "deptname":"SPIFFY COMPUTER SERVICE DIV."}'),
JSON2BSON('{"deptno":"B01", "mgrno":"000020", "admrdept":"A00", "deptname":"PLANNING" }'),
JSON2BSON('{"deptno":"C01", "mgrno":"000030", "admrdept":"A00", "deptname":"INFORMATION CENTER" }'),
JSON2BSON('{"deptno":"D01", "admrdept":"A00", "deptname":"DEVELOPMENT CENTER" }'),
JSON2BSON('{"deptno":"D11", "mgrno":"000060", "admrdept":"D01", "deptname":"MANUFACTURING SYSTEMS" }'),
JSON2BSON('{"deptno":"D21", "mgrno":"000070", "admrdept":"D01", "deptname":"ADMINISTRATION SYSTEMS" }'),
JSON2BSON('{"deptno":"E01", "mgrno":"000050", "admrdept":"A00", "deptname":"SUPPORT SERVICES" }'),
JSON2BSON('{"deptno":"E11", "mgrno":"000090", "admrdept":"E01", "deptname":"OPERATIONS" }'),
JSON2BSON('{"deptno":"E21", "mgrno":"000100", "admrdept":"E01", "deptname":"SOFTWARE SUPPORT" }'),
JSON2BSON('{"deptno":"F22", "admrdept":"E01", "deptname":"BRANCH OFFICE F2" }'),
JSON2BSON('{"deptno":"G22", "admrdept":"E01", "deptname":"BRANCH OFFICE G2" }'),
JSON2BSON('{"deptno":"H22", "admrdept":"E01", "deptname":"BRANCH OFFICE H2" }'),
JSON2BSON('{"deptno":"I22", "admrdept":"E01", "deptname":"BRANCH OFFICE I2" }'),
JSON2BSON('{"deptno":"J22", "admrdept":"E01", "deptname":"BRANCH OFFICE J2" }')
;
%%sql
SELECT trim(JSON_VAL(EMP_DATA,'lastname','s:40')),
JSON_VAL(EMP_DATA,'pay.salary','f')
FROM JSON_EMP
WHERE
JSON_VAL(EMP_DATA,'empno','s:6') = '200170'
%%sql
SELECT JSON_VAL(EMP_DATA,'lastname','s:7')
FROM JSON_EMP
WHERE
JSON_VAL(EMP_DATA,'empno','s:6') = '000010'
%%sql
SELECT LEFT(JSON_VAL(EMP_DATA,'lastname','s:20'),7)
FROM JSON_EMP
WHERE
JSON_VAL(EMP_DATA,'empno','s:6') = '200170'
%sql -a SELECT JSON_VAL(EMP_DATA, 'phoneno.0', 'i') FROM JSON_EMP
%sql SELECT JSON_VAL(EMP_DATA, 'phoneno', 'i:na') FROM JSON_EMP
%%sql
SELECT JSON_VAL(EMP_DATA,'pay.salary','i'),
JSON_VAL(EMP_DATA,'pay.bonus','i'),
JSON_VAL(EMP_DATA,'pay.comm','i')
FROM JSON_EMP
WHERE
JSON_VAL(EMP_DATA,'empno','s:6') = '200170'
%%sql -a
SELECT JSON_VAL(EMP_DATA,'lastname','s:30'),
JSON_VAL(EMP_DATA,'midinit','u')
FROM JSON_EMP
ORDER BY 2
%%sql
SELECT COUNT(*) FROM JSON_EMP
WHERE JSON_VAL(EMP_DATA,'midinit','s:40') = '' OR
JSON_VAL(EMP_DATA,'midinit','u') IS NULL
%%sql
SELECT COUNT(*) FROM JSON_EMP
WHERE JSON_VAL(EMP_DATA,'midinit','s:40') = ''
%%sql
SELECT JSON_VAL(EMP_DATA,'empno','s:6') AS EMPNO,
JSON_VAL(EMP_DATA,'lastname','s:20') AS LASTNAME,
JSON_VAL(DEPT_DATA,'deptname','s:30') AS DEPTNAME
FROM JSON_EMP, JSON_DEPT
WHERE
JSON_VAL(DEPT_DATA,'deptno','s:3') =
JSON_VAL(EMP_DATA,'workdept','s:3')
FETCH FIRST 5 ROWS ONLY
%%sql -q
DROP TABLE TYPES;
CREATE TABLE TYPES
(DATA BLOB(4000) INLINE LENGTH 4000);
INSERT INTO TYPES VALUES
JSON2BSON(
'{
"string" : "string",
"integer" : 1,
"number" : 1.1,
"date" : {"$date": "2016-06-20T13:00:00"},
"boolean" : true,
"array" : [1,2,3],
"object" : {type: "main", phone: [1,2,3]}
}');
%%sql
SELECT 'STRING',JSON_TYPE(DATA, 'string', 2048) FROM TYPES
UNION ALL
SELECT 'INTEGER',JSON_TYPE(DATA, 'integer', 2048) FROM TYPES
UNION ALL
SELECT 'NUMBER',JSON_TYPE(DATA, 'number', 2048) FROM TYPES
UNION ALL
SELECT 'DATE',JSON_TYPE(DATA, 'date', 2048) FROM TYPES
UNION ALL
SELECT 'BOOLEAN', JSON_TYPE(DATA, 'boolean', 2048) FROM TYPES
UNION ALL
SELECT 'ARRAY', JSON_TYPE(DATA, 'array', 2048) FROM TYPES
UNION ALL
SELECT 'OBJECT', JSON_TYPE(DATA, 'object', 2048) FROM TYPES
%%sql -q
DROP TABLE SANDBOX;
CREATE TABLE SANDBOX (DATA BLOB(4000) INLINE LENGTH 4000);
%%sql
INSERT INTO SANDBOX VALUES
JSON2BSON('{"count":9782333}')
%%sql
SELECT JSON_TYPE(DATA,'count',2048) AS TYPE
FROM SANDBOX
%sql SELECT JSON_VAL(DATA,'count','i') FROM SANDBOX
%sql SELECT JSON_VAL(DATA,'count','l') FROM SANDBOX
%%sql
DELETE FROM SANDBOX;
INSERT INTO SANDBOX VALUES
JSON2BSON('{"count":94123512223422}');
%sql SELECT JSON_TYPE(DATA,'count',2048) AS TYPE FROM SANDBOX
%sql SELECT JSON_TYPE(DATA,'count',2048) FROM SANDBOX
%sql SELECT JSON_VAL(DATA,'count','i') FROM SANDBOX
%sql SELECT JSON_VAL(DATA,'count','l') FROM SANDBOX
%%sql
SELECT JSON_VAL(DATA,'count','n') AS DECIMAL,
JSON_VAL(DATA,'count','f') AS FLOAT
FROM SANDBOX
%%sql -q
DELETE FROM SANDBOX;
INSERT INTO SANDBOX VALUES
JSON2BSON('{"salary":92342.20}');
%sql SELECT JSON_TYPE(DATA,'salary',2048) AS TYPE FROM SANDBOX
%%sql
SELECT JSON_VAL(DATA,'salary','n') AS DECIMAL,
JSON_VAL(DATA,'salary','i') AS INTEGER,
JSON_VAL(DATA,'salary','f') AS FLOAT
FROM SANDBOX
%sql SELECT DEC(JSON_VAL(DATA,'salary','n'),9,2) AS DECIMAL FROM SANDBOX
%%sql
DELETE FROM SANDBOX;
INSERT INTO SANDBOX VALUES
JSON2BSON('{"salary":9.2523E01}');
%sql SELECT JSON_TYPE(DATA,'salary',2048) AS TYPE FROM SANDBOX
%%sql
SELECT JSON_VAL(DATA,'salary','n') AS DECIMAL,
JSON_VAL(DATA,'salary','i') AS INTEGER,
JSON_VAL(DATA,'salary','f') AS FLOAT
FROM SANDBOX
%%sql
DELETE FROM SANDBOX;
INSERT INTO SANDBOX VALUES
JSON2BSON('{"valid":true, "invalid":false}');
%sql SELECT JSON_TYPE(DATA,'valid',2048) AS TYPE FROM SANDBOX
%%sql
SELECT JSON_VAL(DATA,'valid','n') AS TRUE_DECIMAL,
JSON_VAL(DATA,'valid','i') AS TRUE_INTEGER,
JSON_VAL(DATA,'invalid','n') AS FALSE_DECIMAL,
JSON_VAL(DATA,'invalid','i') AS FALSE_INTEGER
FROM SANDBOX
%%sql
SELECT JSON_VAL(DATA,'valid','s:5') AS TRUE_STRING,
JSON_VAL(DATA,'valid','b:2') AS TRUE_BINARY,
JSON_VAL(DATA,'invalid','s:5') AS FALSE_STRING,
JSON_VAL(DATA,'invalid','b:2') AS FALSE_BINARY
FROM SANDBOX
%%sql
DELETE FROM SANDBOX;
INSERT INTO SANDBOX VALUES
JSON2BSON('{"today":{"$date":"2016-07-01T12:00:00"}}');
%sql SELECT JSON_TYPE(DATA,'today',2048) FROM SANDBOX
%%sql
DELETE FROM SANDBOX;
INSERT INTO SANDBOX VALUES
JSON2BSON('{"today":"2016-07-01"}');
SELECT JSON_VAL(DATA,'today','s:10') FROM SANDBOX;
%sql SELECT JSON_VAL(DATA,'today','d') FROM SANDBOX
%%sql
DELETE FROM SANDBOX;
INSERT INTO SANDBOX VALUES
JSON2BSON('{"today":"' || VARCHAR(NOW()) || '"}');
SELECT JSON_VAL(DATA,'today','s:30') FROM SANDBOX;
%sql SELECT JSON_VAL(DATA,'today','d') FROM SANDBOX
%sql SELECT JSON_VAL(DATA,'today','ts') FROM SANDBOX
%sql SELECT TIME(JSON_VAL(DATA,'today','ts')) FROM SANDBOX
%sql SELECT JSON_VAL(EMP_DATA, 'lastname', 's:10') FROM JSON_EMP
%sql SELECT JSON_VAL(EMP_DATA, 'lastname', 's:8') FROM JSON_EMP
%sql SELECT JSON_VAL(EMP_DATA, 'phoneno', 'i') FROM JSON_EMP
%%sql
SELECT PHONES.TYPE, CAST(PHONES.VALUE AS VARCHAR(10)) AS VALUE
FROM JSON_EMP E,
TABLE( JSON_TABLE(E.EMP_DATA,'phoneno','i') ) AS PHONES
WHERE JSON_VAL(E.EMP_DATA,'empno','s:6') = '000010'
%%sql -a
SELECT JSON_VAL(E.EMP_DATA, 'lastname', 's:10') AS LASTNAME,
CAST(PHONES.VALUE AS VARCHAR(10)) AS PHONE
FROM JSON_EMP E,
TABLE( JSON_TABLE(E.EMP_DATA,'phoneno','i') ) AS PHONES
%%sql
SELECT JSON_VAL(E.EMP_DATA, 'lastname', 's:10') AS LASTNAME,
CAST (PHONES.VALUE AS VARCHAR(10)) AS PHONE
FROM JSON_EMP E,
TABLE( JSON_TABLE(E.EMP_DATA,'phoneno','i') ) AS PHONES
ORDER BY PHONE
%%sql
SELECT CAST(PHONES.VALUE AS VARCHAR(10)) AS PHONE, COUNT(*) AS COUNT
FROM JSON_EMP E,
TABLE( JSON_TABLE(E.EMP_DATA,'phoneno','i') ) AS PHONES
GROUP BY PHONES.VALUE HAVING COUNT(*) > 1
ORDER BY PHONES.VALUE
%%sql
SELECT JSON_VAL(E.EMP_DATA, 'lastname', 's:10') AS LASTNAME,
JSON_LEN(E.EMP_DATA, 'phoneno') AS PHONE_COUNT
FROM JSON_EMP E
%%sql
SELECT JSON_VAL(E.EMP_DATA, 'lastname', 's:10') AS LASTNAME,
CAST(PHONES.VALUE AS VARCHAR(10)) AS PHONE
FROM JSON_EMP E,
TABLE( JSON_TABLE(E.EMP_DATA,'phoneno','i') ) AS PHONES
WHERE PHONES.VALUE = 1422
%%sql
SELECT JSON_VAL(EMP_DATA, 'lastname', 's:10') AS LASTNAME
FROM JSON_EMP
WHERE JSON_GET_POS_ARR_INDEX(EMP_DATA,
JSON2BSON('{"phoneno":1422}')) >= 0
%%sql
DELETE FROM SANDBOX;
INSERT INTO SANDBOX VALUES
JSON2BSON('{"phone":["1111","2222","3333"]}');
%%sql
UPDATE SANDBOX
SET DATA =
JSON_UPDATE(DATA,'{ $set: {"lastname":"HAAS"}}')
%sql -j SELECT BSON2JSON(DATA) FROM SANDBOX
%%sql -j
UPDATE SANDBOX
SET DATA =
JSON_UPDATE(DATA,'{ $set: {"phone":"9999"}}');
SELECT BSON2JSON(DATA) FROM SANDBOX;
%sql -j SELECT BSON2JSON(DATA) FROM SANDBOX
%%sql -j
UPDATE SANDBOX
SET DATA =
JSON_UPDATE(DATA,'{ $set: {"phone.0":9999}}');
SELECT BSON2JSON(DATA) FROM SANDBOX;
%%sql -j
UPDATE SANDBOX
SET DATA =
JSON_UPDATE(DATA,'{ $unset: {"phone":null}}');
SELECT BSON2JSON(DATA) FROM SANDBOX;
%%sql -q
DROP INDEX IX_JSON;
SELECT JSON_VAL(EMP_DATA, 'lastname', 's:20') AS LASTNAME
FROM JSON_EMP
WHERE JSON_VAL(EMP_DATA, 'empno', 's:6') = '000010';
noindex = %sql -t \
SELECT JSON_VAL(EMP_DATA, 'lastname', 's:20') AS LASTNAME \
FROM JSON_EMP \
WHERE JSON_VAL(EMP_DATA, 'empno', 's:6') = '000010'
%%sql
CREATE INDEX IX_JSON ON JSON_EMP
(JSON_VAL(EMP_DATA,'empno','s:6'));
withindex = %sql -t \
SELECT JSON_VAL(EMP_DATA, 'lastname', 's:20') AS LASTNAME \
FROM JSON_EMP \
                     WHERE JSON_VAL(EMP_DATA, 'empno', 's:6') = '000010'
%%sql -pb
WITH RESULTS(RUN, RESULT) AS (
VALUES ('No Index',:noindex),('With Index',:withindex)
)
SELECT * FROM RESULTS
%%sql -q
DROP TABLE BASE_EMP_TXS;
CREATE TABLE BASE_EMP_TXS (
SEQNO INT NOT NULL GENERATED ALWAYS AS IDENTITY,
INFO VARCHAR(4000),
BSONINFO BLOB(4000) INLINE LENGTH 4000
);
%%sql
CREATE OR REPLACE VIEW EMP_TXS AS
(SELECT SEQNO, BSON2JSON(BSONINFO) AS INFO FROM BASE_EMP_TXS)
%%sql -d
CREATE OR REPLACE TRIGGER I_EMP_TXS
INSTEAD OF INSERT ON EMP_TXS
REFERENCING NEW AS NEW_TXS
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
INSERT INTO BASE_EMP_TXS VALUES (
DEFAULT,
NULL,
SYSTOOLS.JSON2BSON(NEW_TXS.INFO)
);
END
@
%%sql -d
CREATE OR REPLACE TRIGGER U_EMP_TXS
INSTEAD OF UPDATE ON EMP_TXS
REFERENCING NEW AS NEW_TXS OLD AS OLD_TXS
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
UPDATE BASE_EMP_TXS
SET (INFO, BSONINFO) = (NULL,
SYSTOOLS.JSON2BSON(NEW_TXS.INFO))
WHERE
BASE_EMP_TXS.SEQNO = OLD_TXS.SEQNO;
END
@
%%sql -d
CREATE OR REPLACE TRIGGER D_EMP_TX
INSTEAD OF DELETE ON EMP_TXS
REFERENCING OLD AS OLD_TXS
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
DELETE FROM BASE_EMP_TXS
WHERE
BASE_EMP_TXS.SEQNO = OLD_TXS.SEQNO;
END
@
%%sql
INSERT INTO EMP_TXS(INFO) VALUES (
'{
"empno":"000010",
"firstnme":"CHRISTINE",
"midinit":"I",
"lastname":"HAAS",
"workdept":"A00",
"phoneno":[3978],
"hiredate":"01/01/1995",
"job":"PRES",
"edlevel":18,
"sex":"F",
"birthdate":"08/24/1963",
"pay" : {
"salary":152750.00,
"bonus":1000.00,
"comm":4220.00}
}')
%sql -j SELECT INFO FROM EMP_TXS
%%sql -j
UPDATE EMP_TXS SET INFO = '{"empno":"000010"}' WHERE SEQNO = 1;
SELECT INFO FROM EMP_TXS;
%%sql
UPDATE BASE_EMP_TXS
SET BSONINFO = JSON_UPDATE(BSONINFO,
'{$set: {"empno":"111111"}}')
WHERE SEQNO = 1
%sql -j SELECT INFO FROM EMP_TXS
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Train and export the model
Step2: For the example, we only trained the model for a single epoch, so it only trains to ~96% accuracy.
Step3: Using the Python TFLiteConverter, the saved model can be converted into a TensorFlow Lite model.
Step4: Write it out to a .tflite file
Step5: To instead quantize the model on export, first set the optimizations flag to optimize for size
Step6: Now, construct and provide a representative dataset; this is used to determine the dynamic range of the activations.
Step7: Finally, convert the model as usual. Note that by default the converted model still uses float inputs and outputs, for invocation convenience.
Step8: Note how the resulting file is approximately 1/4 the size.
Step9: Run the TensorFlow Lite models
Step10: Load the model into the interpreters
Step11: Test the models on one image
Step12: Evaluate the models
Step13: We can repeat the evaluation on the fully quantized model to obtain its accuracy
|
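Step 8's roughly 4x shrink comes from storing float32 values in 8 bits. A minimal NumPy sketch of the affine (scale/zero-point) mapping that int8 quantization is based on — the calibration below is illustrative only, not TFLite's exact scheme:

```python
import numpy as np

def quantize(x, scale, zero_point=0):
    # real_value ≈ scale * (quantized_value - zero_point)
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize(q, scale, zero_point=0):
    return scale * (q.astype(np.float32) - zero_point)

weights = np.linspace(-1.0, 1.0, 9).astype(np.float32)
scale = (weights.max() - weights.min()) / 255.0   # one float step per int8 level
restored = dequantize(quantize(weights, scale), scale)
# Round-trip error is bounded by half a quantization step
assert np.abs(weights - restored).max() <= scale / 2 + 1e-6
print("max round-trip error:", float(np.abs(weights - restored).max()))
```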
<ASSISTANT_TASK:>
Python Code:
! pip uninstall -y tensorflow
! pip install -U tf-nightly
import tensorflow as tf
tf.enable_eager_execution()
! git clone --depth 1 https://github.com/tensorflow/models
import sys
import os
if sys.version_info.major >= 3:
import pathlib
else:
import pathlib2 as pathlib
# Add `models` to the python path.
models_path = os.path.join(os.getcwd(), "models")
sys.path.append(models_path)
saved_models_root = "/tmp/mnist_saved_model"
# The above path addition is not visible to subprocesses, add the path for the subprocess as well.
# Note: channels_last is required here or the conversion may fail.
!PYTHONPATH={models_path} python models/official/mnist/mnist.py --train_epochs=1 --export_dir {saved_models_root} --data_format=channels_last
saved_model_dir = str(sorted(pathlib.Path(saved_models_root).glob("*"))[-1])
saved_model_dir
import tensorflow as tf
tf.enable_eager_execution()
tf.logging.set_verbosity(tf.logging.DEBUG)
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
tf.logging.set_verbosity(tf.logging.INFO)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
mnist_train, _ = tf.keras.datasets.mnist.load_data()
images = tf.cast(mnist_train[0], tf.float32)/255.0
mnist_ds = tf.data.Dataset.from_tensor_slices((images)).batch(1)
def representative_data_gen():
for input_value in mnist_ds.take(100):
yield [input_value]
converter.representative_dataset = representative_data_gen
tflite_quant_model = converter.convert()
tflite_model_quant_file = tflite_models_dir/"mnist_model_quant.tflite"
tflite_model_quant_file.write_bytes(tflite_quant_model)
!ls -lh {tflite_models_dir}
import numpy as np
_, mnist_test = tf.keras.datasets.mnist.load_data()
images, labels = tf.cast(mnist_test[0], tf.float32)/255.0, mnist_test[1]
mnist_ds = tf.data.Dataset.from_tensor_slices((images, labels)).batch(1)
interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))
interpreter.allocate_tensors()
interpreter_quant = tf.lite.Interpreter(model_path=str(tflite_model_quant_file))
interpreter_quant.allocate_tensors()
for img, label in mnist_ds:
break
interpreter.set_tensor(interpreter.get_input_details()[0]["index"], img)
interpreter.invoke()
predictions = interpreter.get_tensor(
interpreter.get_output_details()[0]["index"])
import matplotlib.pylab as plt
plt.imshow(img[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(label[0].numpy()),
predict=str(predictions[0])))
plt.grid(False)
interpreter_quant.set_tensor(
interpreter_quant.get_input_details()[0]["index"], img)
interpreter_quant.invoke()
predictions = interpreter_quant.get_tensor(
interpreter_quant.get_output_details()[0]["index"])
plt.imshow(img[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(label[0].numpy()),
predict=str(predictions[0])))
plt.grid(False)
def eval_model(interpreter, mnist_ds):
total_seen = 0
num_correct = 0
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
for img, label in mnist_ds:
total_seen += 1
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)
if predictions == label.numpy():
num_correct += 1
if total_seen % 500 == 0:
print("Accuracy after %i images: %f" %
(total_seen, float(num_correct) / float(total_seen)))
return float(num_correct) / float(total_seen)
print(eval_model(interpreter, mnist_ds))
# NOTE: Colab runs on server CPUs. At the time of writing this, TensorFlow Lite
# doesn't have super optimized server CPU kernels. For this reason this may be
# slower than the above float interpreter. But for mobile CPUs, considerable
# speedup can be observed.
print(eval_model(interpreter_quant, mnist_ds))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load raw data matrices, and normalized matrices
Step2: Check that the data is there
Step3: Decide which region to model.
Step4: Optimization of parameters
Step5: Refine optimization in a small region
Step6: For the other replicate, we can reduce the search space
|
<ASSISTANT_TASK:>
Python Code:
from pytadbit import load_chromosome
crm = load_chromosome('results/crm18.tdb')
hindiii, mboi = crm.experiments
hindiii
hindiii.load_hic_data('results/HindIII/04_normalizing/100000_raw/chr18.mat')
hindiii.load_norm_data('results/HindIII/04_normalizing/100000_norm/chr18.mat')
mboi.load_hic_data('results/MboI/04_normalizing/100000_raw/chr18.mat')
mboi.load_norm_data('results/MboI/04_normalizing/100000_norm/chr18.mat')
crm.visualize([('HindIII', 'MboI')], normalized=True, paint_tads=True)
crm.visualize([('MboI', 'HindIII')], normalized=True, paint_tads=True,
focus=(670, 770))
opt_hindiii = hindiii.optimal_imp_parameters(
start=685, end=750,
n_models=40, n_keep=20, n_cpus=8,
upfreq_range=(0, 0.6, 0.3),
lowfreq_range=(-0.9, 0, 0.3),
maxdist_range=(1000, 2000, 500),
dcutoff_range=[1.5, 2, 2.5])
opt_hindiii.plot_2d(show_best=5)
opt_hindiii.run_grid_search(upfreq_range=(0, 0.3, 0.3), lowfreq_range=(-0.6, -0.3, 0.3),
maxdist_range=[1750],
dcutoff_range=[1.5, 2],
n_cpus=8)
opt_hindiii.plot_2d(show_best=5)
opt_hindiii.run_grid_search(upfreq_range=(0, 0.3, 0.3), lowfreq_range=(-0.3, 0, 0.1),
maxdist_range=[2000, 2250],
dcutoff_range=[1.5, 2],
n_cpus=8)
opt_hindiii.plot_2d(show_best=5)
opt_hindiii.run_grid_search(upfreq_range=(0, 0.3, 0.1), lowfreq_range=(-0.3, 0, 0.1),
n_cpus=8,
maxdist_range=[2000, 2250],
dcutoff_range=[1.5, 2])
opt_hindiii.plot_2d(show_best=5)
opt_hindiii.get_best_parameters_dict()
opt_mboi = mboi.optimal_imp_parameters(start=685, end=750, n_models=40, n_keep=20, n_cpus=8,
upfreq_range=(0, 0.6, 0.3),
lowfreq_range=(-0.9, -0.3, 0.3),
maxdist_range=(1500, 2000, 500),
dcutoff_range=[1.5, 2])
opt_mboi.plot_2d(show_best=5)
opt_mboi.get_best_parameters_dict()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Networks
Step2: Make Network
Step3: Writing the HTML file
Step4: Display the Network
Step5: RPYS Visualization
Step6: Standard RPYS
Step7: Multi RPYS
|
<ASSISTANT_TASK:>
Python Code:
# Only run this the VERY first time
!pip install metaknowledge
!pip install networkx
!pip install pandas
!pip install python-louvain
# Run this before you do anything else
import metaknowledge as mk
import networkx as nx
import pandas
import community
import webbrowser
# The line below is the most important line in the entire document.
# Make sure the filepath is set to the location where the WOS file is stored.
inputFile = "/Users/jilliananderson/Downloads/imetrics"
networkType = "CoAuthor"
nodeType = "original"
# This cell creates the network based on
# the variables you provided above.
RC = mk.RecordCollection(inputFile)
if networkType == "CoCitation":
net = RC.networkCoCitation(nodeType = nodeType, coreOnly=True)
directed = False
partition = community.best_partition(net)
elif networkType == "CoAuthor":
net = RC.networkCoAuthor()
directed = False
partition = community.best_partition(net)
elif networkType == "Citation":
net = RC.networkCitation(nodeType=nodeType, coreOnly=True)
directed = True
else:
print("Please ensure networkType has been set to one of the accepted values")
# This code computes centrality measures for your network.
betweenness = nx.betweenness_centrality(net)
# closeness = nx.closeness_centrality(net)   # <- extra, more expensive centralities
# eigenVect = nx.eigenvector_centrality(net) # <--/
for n in net.nodes():
    betw = round(betweenness[n], 3)
    if not directed:
        # undirected networks also carry the community detected above
        net.add_node(n, community=partition[n], betweenness=betw)
    else:
        # directed (citation) networks have no community partition
        net.add_node(n, betweenness=betw)
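As a quick standalone check of what `nx.betweenness_centrality` returns (normalized values in [0, 1]), a toy graph independent of the WOS data:

```python
import networkx as nx

# On the path a-b-c, node b sits on the only shortest path between a and c,
# so its normalized betweenness is 1.0; the endpoints score 0.0.
toy = nx.path_graph(["a", "b", "c"])
bc = nx.betweenness_centrality(toy)
assert bc["b"] == 1.0 and bc["a"] == 0.0 and bc["c"] == 0.0
print({k: round(v, 3) for k, v in bc.items()})
```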
# This code writes two .csv files to your computer.
# One is the edgeList and the other is the node Attribute file
mk.writeGraph(net, "myNet")
%%writefile network.html
<!DOCTYPE html>
<head>
<meta charset="utf-8">
<title>Network</title>
<link rel="stylesheet" href="http://networkslab.org/mkD3/styles.css">
<script src="https://d3js.org/d3.v4.js"></script>
<script src="http://networkslab.org/mkD3/mkd3.js"></script>
</head>
<body>
<script type = "text/javascript">
mkd3.networkGraph("myNet_edgeList.csv", "myNet_nodeAttributes.csv")
</script>
</body>
url = 'http://localhost:8888/files/network.html'
webbrowser.open(url)
minYear = 1900
maxYear = 2016
rpysType = "StandardBar"
RC = mk.RecordCollection(inputFile)
rpys = RC.rpys(minYear=minYear, maxYear=maxYear)
df = pandas.DataFrame.from_dict(rpys)
df.to_csv("standard_rpys.csv")
# Creating CitationFile
citations = RC.getCitations()
df = pandas.DataFrame.from_dict(citations)
df.to_csv("standard_citation.csv")
%%writefile standardBar.html
<!DOCTYPE html>
<head>
<meta charset="utf-8">
<title>Title Here</title>
<link rel="stylesheet" href="http://networkslab.org/mkD3/styles.css">
<script src="https://d3js.org/d3.v4.js"></script>
<script src="http://networkslab.org/mkD3/mkd3.js"></script>
</head>
<body>
<script type = "text/javascript">
mkd3.standardBar("standard_rpys.csv", "standard_citation.csv")
</script>
</body>
url = 'http://localhost:8888/files/standardBar.html'
webbrowser.open(url)
%%writefile standardLine.html
<!DOCTYPE html>
<head>
<meta charset="utf-8">
<title>Title Here</title>
<link rel="stylesheet" href="http://networkslab.org/mkD3/styles.css">
<script src="https://d3js.org/d3.v4.js"></script>
<script src="http://networkslab.org/mkD3/mkd3.js"></script>
</head>
<body>
<script type = "text/javascript">
mkd3.standardLine("standard_rpys.csv", "standard_citation.csv")
</script>
</body>
url = 'http://localhost:8888/files/standardLine.html'
webbrowser.open(url)
years = range(minYear, maxYear+1)
RC = mk.RecordCollection(inputFile)
# ***************************
# Create the multiRPYS file
# ***************************
dictionary = {'CPY': [],
"abs-deviation": [],
"num-cites": [],
"rank": [],
"RPY": []}
for i in years:
try:
RCyear = RC.yearSplit(i, i)
if len(RCyear) > 0:
            rpys = RCyear.rpys(minYear=minYear, maxYear=maxYear)
length = len(rpys['year'])
rpys['CPY'] = [i]*length
dictionary['CPY'] += rpys['CPY']
dictionary['abs-deviation'] += rpys['abs-deviation']
dictionary['num-cites'] += rpys['count']
dictionary['rank'] += rpys['rank']
dictionary['RPY'] += rpys['year']
except:
pass
df = pandas.DataFrame.from_dict(dictionary)
df.to_csv("multi_rpys.csv")
# ***************************
# Create the citation file
# ***************************
dictionary = {"author": [],
"journal": [],
"num-cites": [],
"RPY": [],
"CPY": []}
for i in years:
try:
RCyear = RC.yearSplit(i, i)
if len(RCyear) > 0:
citations = RCyear.getCitations()
length = len(citations['year'])
citations['CPY'] = [i]*length
dictionary['CPY'] += citations['CPY']
dictionary['author'] += citations['author']
dictionary['journal'] += citations['journal']
dictionary['num-cites'] += citations['num-cites']
dictionary['RPY'] += citations['year']
except:
pass
df = pandas.DataFrame.from_dict(dictionary)
df.to_csv("multi_citation.csv")
%%writefile multiRPYS.html
<!DOCTYPE html>
<head>
<meta charset="utf-8">
<title>Title Here</title>
<link rel="stylesheet" href="http://networkslab.org/mkD3/styles.css">
<script src="https://d3js.org/d3.v4.js"></script>
<script src="http://networkslab.org/mkD3/mkd3.js"></script>
</head>
<body>
<script type = "text/javascript">
mkd3.multiRPYS("multi_rpys.csv", "multi_citation.csv")
</script>
</body>
url = 'http://localhost:8888/files/multiRPYS.html'
webbrowser.open(url)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define functions
Step2: Define variables for shapefiles and GeoTIFF
Step3: Import GeoTIFF
Step4: Get GeoTransformation parameters, calculate image extents
Step5: Load shapefiles (for plotting only)
Step6: Create basemap with shaded relief image and mountain range boundaries
Step7: Mask original raster with shapefiles
Step8: Load clipped rasters
Step9: Set yres to a positive value
Step10: Calculate pixel size (in km) along the N-S direction
Step11: Calculate pixel size along the E-W direction, create array with area values
Step12: Get base statistics for clipped rasters and calculate Elevation values used in hypsometric analysis
Step13: Make a masked array of cell area and calculate Area values used in hypsometric analysis
Step14: Plot hypsographic (absolute values) curve
Step15: Plot hypsometric (normalized values) curve
Step16: Make histograms
Step17: Simple frequency (cell count) histograms
Step18: Histograms of area per elevation
Step19: To calculate the area of pixels per elevation, we use the ndimage function from SciPy. It sums the values in one array (area) based on their occurrence in a second array (elevation). A third array is used as an index (from 0 to max+1).
Step20: Plot histograms
Step21: We can compare both methods and see that approximating the area of pixels by the mean cell size gives results very close to those obtained by calculating the area of each pixel.
|
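Step 19's grouping with `scipy.ndimage` can be illustrated on toy arrays, independent of the rasters below — area values are summed per elevation label, with an index covering labels 0..max:

```python
import numpy as np
from scipy import ndimage

area = np.array([1.0, 2.0, 3.0, 4.0])  # value to accumulate per cell
elev = np.array([0, 1, 1, 2])          # integer label (binned elevation) per cell
index = np.arange(elev.max() + 1)      # one bin per label: 0, 1, 2
per_bin = ndimage.sum(area, labels=elev, index=index)
assert list(per_bin) == [1.0, 5.0, 4.0]  # label 1 collects cells 2 and 3
print(per_bin)
```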
<ASSISTANT_TASK:>
Python Code:
import sys, os
import numpy as np
import math as math
import numpy.ma as ma
from matplotlib import cm
from matplotlib.colors import LightSource
from scipy import ndimage
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
%matplotlib inline
# import osgeo libs after basemap, so it
# won't cause conflicts (Assertion failed..)
# with mannualy-installed GEOS
import gdal, ogr
import shapefile as shpf
# auxiliar functions
def roundBase(x, base=5):
return int(base * round(float(x)/base))
def roundUp(x, base=50):
return int(base * np.ceil(float(x)/base))
def roundDown(x, base=50):
return int(base * np.floor(float(x)/base))
def haversine(lon1, lat1, lon2, lat2, r=6371.009):
R = r # Earth radius in kilometers
dLat = math.radians(lat2 - lat1)
dLon = math.radians(lon2 - lon1)
lat1 = math.radians(lat1)
lat2 = math.radians(lat2)
a = math.sin(dLat/2)**2 + math.cos(lat1) * math.cos(lat2) * math.sin(dLon/2)**2
c = 2 * math.asin(math.sqrt(a))
return R * c
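A quick sanity check of the haversine helper, restated here so the snippet runs standalone — along a single meridian the formula reduces to R·Δlat:

```python
import math

def haversine(lon1, lat1, lon2, lat2, r=6371.009):
    # Same formula as above, restated so this check runs standalone
    dLat = math.radians(lat2 - lat1)
    dLon = math.radians(lon2 - lon1)
    lat1, lat2 = math.radians(lat1), math.radians(lat2)
    a = math.sin(dLat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dLon / 2) ** 2
    return r * 2 * math.asin(math.sqrt(a))

# Along a meridian the distance reduces to R * delta_lat (in radians),
# about 111.2 km per degree of latitude
assert abs(haversine(0, 0, 0, 1) - 6371.009 * math.radians(1)) < 1e-9
assert haversine(10, 20, 10, 20) == 0.0
print("haversine OK")
```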
# files
dataDir = './data/'
mountain = 'cachimbo' # 'alps', 'blueRidge', 'espinhaco', 'cachimbo', 'ibiapaba'
mtn = mountain + '.shp'
mtn_NE = mountain + '_NEarth.shp'
tiff = mountain + '.tif'
# label for 5M/7M boundaries
source = 'IBGE'# brazilian maps
# source = 'ECALP' # Alps
# source = 'Fenneman & Johnson 1946' # USA Physiographic Provinces
rast = gdal.Open(tiff)
rast_band = rast.GetRasterBand(1)
rast_array = rast.ReadAsArray()
rast_stats = rast_band.GetStatistics( True, True )
rast_min = rast_stats[0]
rast_max = rast_stats[1]
w_lon, xdim, rot1, n_lat, rot2, ydim = rast.GetGeoTransform()
e_lon = w_lon + xdim * rast.RasterXSize
s_lat = n_lat + ydim * rast.RasterYSize
bound_5M = shpf.Reader(mtn)
bound_5M_lonlat = np.array(bound_5M.shape().points)
bound_NE = shpf.Reader(mtn_NE)
bound_NE_lonlat = np.array(bound_NE.shape().points)
m = Basemap(projection='merc', llcrnrlat=s_lat, urcrnrlat=n_lat, llcrnrlon=w_lon, \
urcrnrlon=e_lon, resolution='c')
ls = LightSource(azdeg=135,altdeg=25)
rgb = ls.shade(rast_array,plt.cm.Greys)
m_shade = m.imshow(rgb, origin='upper')
m_color = m.imshow(rast_array, origin='upper',cmap=plt.cm.terrain, alpha=0.8, vmin=-150)
bounds = range(0, roundUp(rast_max), 50)
cbar = m.colorbar(size='3%', boundaries=bounds)
cbar.ax.tick_params(labelsize=8)
m.drawmapscale(lon=e_lon-0.8, lat=s_lat+0.5, lon0=e_lon, lat0=s_lat, length=100)
xticks = np.arange(roundBase(w_lon), roundBase(e_lon), 2)
yticks = np.arange(roundBase(s_lat), roundBase(n_lat), 2)
m.drawparallels(yticks, linewidth=0.2, labels=[1,0,0,0], fontsize=9) # draw parallels
m.drawmeridians(xticks, linewidth=0.2, labels=[0,0,1,0], fontsize=9) # draw meridians
m.plot(bound_NE_lonlat[:,0], bound_NE_lonlat[:,1], c='k', label='Natural Earth', latlon=True)
m.plot(bound_5M_lonlat[:,0], bound_5M_lonlat[:,1], c='r', label=source, latlon=True)
lg = plt.legend(loc='upper right', fontsize=9)
lg.get_frame().set_alpha(.8) # A little transparency
# plt.show()
# plt.savefig(mtn + '.pdf', dpi=600, bbox_inches='tight')
# plt.clf()
# 5M limits
out_mtn = dataDir + mountain + '_clip_5M.tif'
os.system('gdalwarp -overwrite -dstnodata -9999 -cutline %s %s %s' %(mtn, tiff, out_mtn))
# Natural Earth
out_NE = dataDir + mountain + '_clip_NE.tif'
os.system('gdalwarp -overwrite -dstnodata -9999 -cutline %s %s %s' %(mtn_NE, tiff, out_NE))
# 5M
rast_clip = gdal.Open(out_mtn)
clip_bd = rast_clip.GetRasterBand(1)
clip_array = rast_clip.ReadAsArray()
clip_mask = ma.masked_where(clip_array == -9999, clip_array)
# NatEarth
rast_clip_NE = gdal.Open(out_NE)
clip_NE_bd = rast_clip_NE.GetRasterBand(1)
clip_NE_array = rast_clip_NE.ReadAsArray()
clip_NE_mask = ma.masked_where(clip_NE_array == -9999, clip_NE_array)
yres = abs(ydim)  # N-S cell size in degrees (ydim is negative for north-up rasters)
dy = haversine(0, 0, 0, yres, r=6371.009)
# array with indices
rows, cols = np.indices(rast_array.shape)
nrows = rast_array.shape[0]
ncols = rast_array.shape[1]
# new array for area values
area_array = np.empty(rast_array.shape)
# nested loop to create array with area values
for row in range(nrows):
for col in range(ncols):
y = row
lat = n_lat - ((y + 0.5) * yres)  # latitude of the cell center
dx = haversine(0, lat, xdim, lat, r=6371.009)
area_array[row,col] = dx * dy
# elevation 5M
stats_clip = clip_bd.GetStatistics( True, True )
clip_min = stats_clip[0]
clip_max = stats_clip[1]
# height of point/contour above base of basin
clip_array_comp = ma.compressed(clip_mask)
h_clip = clip_array_comp - clip_min
# total height of basin
H_clip = clip_max - clip_min
# normalize elev for hypsometric curve
elevNorm_clip = h_clip / H_clip
# elevation NatEarth
stats_clip_NE = clip_NE_bd.GetStatistics( True, True )
clip_NE_min = stats_clip_NE[0]
clip_NE_max = stats_clip_NE[1]
clip_array_NE_comp = ma.compressed(clip_NE_mask)
h_clip_NE = clip_array_NE_comp - clip_min
H_clip_NE = clip_NE_max - clip_NE_min
elevNorm_clip_NE = h_clip_NE / H_clip_NE
# cell area 5M
area_clip = ma.masked_where(clip_array == -9999, area_array)
# total area of basin/area
area_clip_sum = np.sum(area_clip)
# cumulative area for hypsographic curve
area_clip_csum = np.cumsum(ma.compressed(area_clip))
# normalized area for hypsometric curve
area_norm_clip = area_clip / area_clip_sum
area_norm_csum = np.cumsum(ma.compressed(area_norm_clip))
# cell area NatEarth
area_clip_NE = ma.masked_where(clip_NE_array == -9999, area_array)
area_clip_sum_NE = np.sum(area_clip_NE)
area_clip_csum_NE = np.cumsum(ma.compressed(area_clip_NE))
area_norm_clip_NE = area_clip_NE / area_clip_sum_NE
area_norm_csum_NE = np.cumsum(ma.compressed(area_norm_clip_NE))
# 5M
plt.plot(area_clip_csum[::-1], np.sort(ma.compressed(clip_mask)), c='r', label=source)
# NatEarth
plt.plot(area_clip_csum_NE[::-1], np.sort(ma.compressed(clip_NE_mask)), c='k', \
label='Natural Earth')
# decorations
plt.ylabel('Elevation (m)')
plt.xlabel('Area (km^2)')
plt.title('Hypsographic curve for ' + mountain)
# plt.ylim(0.0, 5000.0)
lg = plt.legend(loc='upper right', fontsize=9)
# fighist = mountain + '_hypsographic.pdf'
# plt.savefig(fighist)
# plt.clf()
# 5M
plt.plot(area_norm_csum[::-1], np.sort(ma.compressed(elevNorm_clip)), c='r', label=source)
# NatEarth
plt.plot(area_norm_csum_NE[::-1], np.sort(ma.compressed(elevNorm_clip_NE)), c='k', \
label='Natural Earth')
# decorations
plt.xlim(0.0,1.0)
plt.ylim(0.0,1.0)
plt.ylabel('Elevation: h/H')
plt.xlabel('Area: a/A')
plt.title('Hypsometric curve for ' + mountain)
lg = plt.legend(loc='upper right', fontsize=9)
# fighist = mountain + '_hypsometric.pdf'
# plt.savefig(fighist)
# plt.clf()
# define bins for all histograms
binsize = 50
# 5M
bins_clip = range(0, roundUp(clip_max), binsize)
bincenters = [i + binsize/2 for i in bins_clip]
# Nat Earth
bins_clip_NE = range(0, roundUp(clip_NE_max), binsize)
bincenters_NE = [i + binsize/2 for i in bins_clip_NE]
# 5M
vals, edges = np.histogram(clip_array_comp, bins=bins_clip)
plt.plot(bincenters[:-1], vals, c='r', label=source)
# NatEarth
vals_NE, edges_NE = np.histogram(clip_array_NE_comp, bins=bins_clip_NE)
plt.plot(bincenters_NE[:-1], vals_NE, c='k', label='Natural Earth')
# decorations
plt.ylabel('Elevation frequency (counts)')
plt.xlabel('Elevation (m)')
plt.title('Frequency histograms for ' + mountain)
lg = plt.legend(loc='upper right', fontsize=9)
# plt.show()
# fighist = mountain + '_histogram_frequency.pdf'
# plt.savefig(fighist)
# plt.clf()
# i) approximating area by mean cell size
mean_area_clip = np.mean(area_clip)
mean_area_clip_NE = np.mean(area_clip_NE)
# 5M
vals, edges = np.histogram(clip_array_comp, bins=bins_clip)
plt.plot(bincenters[:-1], vals * mean_area_clip, c='r', label=source)
# NatEarth
vals_NE, edges_NE = np.histogram(clip_array_NE_comp, bins=bins_clip_NE)
plt.plot(bincenters_NE[:-1], vals_NE * mean_area_clip_NE, c='k', label='Natural Earth')
# decorations
plt.ylabel('Area (km^2), approximated')
plt.xlabel('Elevation (m)')
plt.title('Area (approx) histograms for ' + mountain)
lg = plt.legend(loc='upper right', fontsize=9)
# plt.show()
# fighist = mountain + '_histogram_area_approx.pdf'
# plt.savefig(fighist)
# plt.clf()
# ii) calculating area per elevation
# 5M data
clip_range = np.arange(0, int(clip_max)+1)
sum_area_clip = ndimage.sum(area_array, clip_array, clip_range)
# sum the values of areas in each bin
bins_sum = []
for i in bincenters:
low = i - (binsize / 2)
up = i + (binsize / 2)
b_sum = np.sum(sum_area_clip[low:up])
bins_sum.append(b_sum)
# Natural Earth
clip_range_NE = np.arange(0, int(clip_NE_max)+1)
sum_area_clip = ndimage.sum(area_array, clip_NE_array, clip_range_NE)
bins_sum_NE = []
for i in bincenters_NE:
low = i - (binsize / 2)
up = i + (binsize / 2)
b_sum = np.sum(sum_area_clip[low:up])
bins_sum_NE.append(b_sum)
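The per-elevation summation above (Step19) relies on `scipy.ndimage.sum`, which sums one array over the label values of another; a minimal self-contained illustration:

```python
import numpy as np
from scipy import ndimage

elev = np.array([[0, 1], [1, 2]])          # "label" array (elevation values)
area = np.array([[1.0, 2.0], [3.0, 4.0]])  # values to sum (cell areas)
# sum `area` over the cells whose `elev` equals each index 0..max
sums = ndimage.sum(area, elev, np.arange(0, elev.max() + 1))
# sums -> [1.0, 5.0, 4.0]
```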
# 5M
plt.plot(bincenters, bins_sum, c='r', label=source)
# Natural Earth
plt.plot(bincenters_NE, bins_sum_NE, c='k', label='Natural Earth')
# decorations
plt.ylabel('Area (km^2), calculated')
plt.xlabel('Elevation (m)')
plt.title('Area (calc) histograms for ' + mountain)
lg = plt.legend(loc='upper right', fontsize=9)
# plt.show()
# fighist = mountain + '_histogram_area_calc.pdf'
# plt.savefig(fighist)
# plt.clf()
# 5M area - calculated
plt.plot(bincenters, bins_sum, c='r', label='calculated')
#5M area - approximated
plt.plot(bincenters[:-1], vals * mean_area_clip, 'o', c='k', ms=4, label='approximated')
# plt.plot(bins_sum[:-1],vals * mean_area_clip, 'ko-')
# decorations
plt.ylabel('Area (km^2)')
plt.xlabel('Elevation (m)')
plt.title('Area histograms for ' + mountain)
lg = plt.legend(loc='upper right', fontsize=9)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Explore the Data
Step3: Implement Preprocessing Function
Step4: Preprocess all the data and save it
Step5: Check Point
Step6: Check the Version of TensorFlow and Access to GPU
Step8: Build the Neural Network
Step10: Process Decoder Input
Step12: Encoding
Step14: Decoding - Training
Step16: Decoding - Inference
Step18: Build the Decoding Layer
Step20: Build the Neural Network
Step21: Neural Network Training
Step22: Build the Graph
Step25: Batch and pad the source and target sequences
Step27: Train
Step28: My conclusions from hyperparameter adjustments are that for comparable results
Step29: Checkpoint
Step31: Sentence to Sequence
Step32: Translate
Step33: Imperfect Translation
|
<ASSISTANT_TASK:>
Python Code:
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (0, 10)
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids(source_text, target_text, source_vocab_to_int,
target_vocab_to_int):
Convert source and target text to proper vectors of word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists of lists (source_id_text, target_id_text)
sentences = source_text.split('\n')
source_vectors = []
for sent in sentences:
source_vectors.append([
source_vocab_to_int[word] for word in sent.split(' ') if word != ''
])
sentences = target_text.split('\n')
target_vectors = []
for sent in sentences:
target_vectors.append([
target_vocab_to_int[word] for word in sent.split(' ') if word != ''
] + [target_vocab_to_int['<EOS>']])
return source_vectors, target_vectors
tests.test_text_to_ids(text_to_ids)
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) == LooseVersion('1.1.0'), 'Please use TensorFlow version 1.1'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
inputs = tf.placeholder(tf.int32, [None, None], 'input')
targets = tf.placeholder(tf.int32, [None, None], 'target')
lr = tf.placeholder(tf.float32, name='lr')
kp = tf.placeholder(tf.float32, name='keep_prob')
target_seq_len = tf.placeholder(tf.int32, [None], name='target_sequence_length')
max_target_seq_len = tf.reduce_max(target_seq_len, name='max_target_len')
source_seq_len = tf.placeholder(tf.int32, [None], name='source_sequence_length')
return inputs, targets, lr, kp, target_seq_len, max_target_seq_len, source_seq_len
tests.test_model_inputs(model_inputs)
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# difficult way
# ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
# easy way
ending = target_data[:, :-1]
dec_input = tf.concat(
values=[tf.fill(dims=[batch_size, 1], value=target_vocab_to_int['<GO>']),
ending],
axis=1)
return dec_input
tests.test_process_encoding_input(process_decoder_input)
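The slice-and-concat above (drop each sequence's last token, prepend the `<GO>` id) can be verified with a tiny NumPy stand-in — the ids below are purely illustrative:

```python
import numpy as np

go_id = 9                              # hypothetical <GO> id
batch = np.array([[4, 5, 6, 2],        # 2 = <EOS>
                  [7, 8, 2, 0]])       # 0 = <PAD>
ending = batch[:, :-1]                 # drop the last column
dec_input = np.concatenate(
    [np.full((batch.shape[0], 1), go_id, dtype=batch.dtype), ending], axis=1)
# dec_input -> [[9, 4, 5, 6], [9, 7, 8, 2]]
```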
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
emb = tf.contrib.layers.embed_sequence(
ids=rnn_inputs,
vocab_size=source_vocab_size,
embed_dim=encoding_embedding_size)
def wrapped_cell(rnn_size, keep_prob):
initer = tf.random_uniform_initializer(-0.1, 0.1, seed=2)
cell = tf.contrib.rnn.LSTMCell(num_units=rnn_size, initializer=initer)
return tf.contrib.rnn.DropoutWrapper(
cell=cell, input_keep_prob=keep_prob)
stacked = tf.contrib.rnn.MultiRNNCell(
[wrapped_cell(rnn_size, keep_prob) for _ in range(num_layers)])
rnn_output, rnn_state = tf.nn.dynamic_rnn(
cell=stacked,
inputs=emb,
sequence_length=source_sequence_length,
dtype=tf.float32)
return rnn_output, rnn_state
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_target_sequence_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoder output containing training logits and sample_id
train_help = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,
sequence_length=target_sequence_length)
train_decoder = tf.contrib.seq2seq.BasicDecoder(cell=dec_cell,
helper=train_help,
initial_state=encoder_state,
output_layer=output_layer)
f_outputs, f_state = tf.contrib.seq2seq.dynamic_decode(decoder=train_decoder,
impute_finished=True,
maximum_iterations=max_target_sequence_length)
# TODO: keep_prob - unused argument?
return f_outputs
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoder output containing inference logits and sample_id
start_tokens = tf.tile(input=tf.constant([start_of_sequence_id], dtype=tf.int32),
multiples=[batch_size],
name='start_tokens')
infer_help = tf.contrib.seq2seq.GreedyEmbeddingHelper(embedding=dec_embeddings,
start_tokens=start_tokens,
end_token=end_of_sequence_id)
infer_decoder = tf.contrib.seq2seq.BasicDecoder(cell=dec_cell,
helper=infer_help,
initial_state=encoder_state,
output_layer=output_layer)
f_outputs, f_state = tf.contrib.seq2seq.dynamic_decode(decoder=infer_decoder,
impute_finished=True,
maximum_iterations=max_target_sequence_length)
# TODO: keep_prob - unused argument
return f_outputs
tests.test_decoding_layer_infer(decoding_layer_infer)
def decoding_layer(dec_input, encoder_state, target_sequence_length,
max_target_sequence_length, rnn_size, num_layers,
target_vocab_to_int, target_vocab_size, batch_size,
keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# Embed the target sequences
dec_embeddings = tf.Variable(
tf.random_uniform([target_vocab_size, decoding_embedding_size]))
emb = tf.nn.embedding_lookup(dec_embeddings, dec_input)
# Construct the decoder LSTM cell
#(just like you constructed the encoder cell above)
def make_cell(rnn_size):
dec_cell = tf.contrib.rnn.LSTMCell(
rnn_size,
initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
return dec_cell
dec_cell = tf.contrib.rnn.MultiRNNCell(
[make_cell(rnn_size) for _ in range(num_layers)])
# Create an output layer to map the outputs of the decoder
# to the elements of our vocabulary
output_layer = Dense(
target_vocab_size,
kernel_initializer=tf.truncated_normal_initializer(
mean=0.0, stddev=0.1))
with tf.variable_scope("decode"):
train_logits = decoding_layer_train(
encoder_state, dec_cell, emb, target_sequence_length,
max_target_sequence_length, output_layer, keep_prob)
with tf.variable_scope("decode", reuse=True):
infer_logits = decoding_layer_infer(
encoder_state, dec_cell, dec_embeddings,
target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'],
max_target_sequence_length, target_vocab_size, output_layer,
batch_size, keep_prob)
return train_logits, infer_logits
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length, source_vocab_size,
target_vocab_size, enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Decoder embedding size
:param dec_embedding_size: Encoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
_, enc_state = encoding_layer(
input_data, rnn_size, num_layers, keep_prob, source_sequence_length,
source_vocab_size, enc_embedding_size)
dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
train_dec_out, infer_dec_out = decoding_layer(
dec_input, enc_state, target_sequence_length,
max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int,
target_vocab_size, batch_size, keep_prob, dec_embedding_size)
return train_dec_out, infer_dec_out
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 20
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 512
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 64
decoding_embedding_size = 64
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.5
display_step = 50
save_path = 'checkpoints/dev'
((source_int_text, target_int_text),
(source_vocab_to_int, target_vocab_to_int), _) = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
(input_data, targets, lr, keep_prob, target_sequence_length,
max_target_sequence_length, source_sequence_length) = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(target, [(0, 0), (0, max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(logits, [(0, 0), (0, max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths,
valid_targets_lengths) = next(
get_batches(valid_source, valid_target, batch_size, source_vocab_to_int[
'<PAD>'], target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths,
targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run([train_op, cost], {
input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability
})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(inference_logits, {
input_data:
source_batch,
source_sequence_length:
sources_lengths,
target_sequence_length:
targets_lengths,
keep_prob:
1.0
})
batch_valid_logits = sess.run(inference_logits, {
input_data:
valid_sources_batch,
source_sequence_length:
valid_sources_lengths,
target_sequence_length:
valid_targets_lengths,
keep_prob:
1.0
})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch,
batch_valid_logits)
print(
'Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i,
len(source_int_text) // batch_size, train_acc,
valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
print('# Number of Epochs ', epochs)
print('# Batch Size ', batch_size)
print('# RNN Size ', rnn_size)
print('# Number of Layers ', num_layers)
print('# Embedding Size ', encoding_embedding_size)
print('# Learning Rate ', learning_rate)
print('# Dropout Keep Probability ', keep_probability)
# Save parameters for checkpoint
helper.save_params(save_path)
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
return [
vocab_to_int[w] if w in vocab_to_int.keys() else vocab_to_int['<UNK>']
for w in sentence.lower().split(' ')
]
tests.test_sentence_to_seq(sentence_to_seq)
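A quick illustrative check of the lookup above, using a toy vocabulary and a self-contained re-implementation (the real ids come from `source_vocab_to_int`):

```python
def to_seq(sentence, vocab_to_int):
    # unknown words fall back to the <UNK> id
    return [vocab_to_int.get(w, vocab_to_int['<UNK>'])
            for w in sentence.lower().split(' ')]

vocab = {'he': 0, 'saw': 1, '<UNK>': 2}
seq = to_seq('He saw zebras', vocab)
# seq -> [0, 1, 2]
```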
translate_sentence = 'he saw a old yellow truck .'
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
import sys
sys.modules.keys()
import types
lines = !conda list
versions = dict()
for line in lines[2:]:
li = line.split()
versions[li[0]] = li[1]
def imports():
print('Modules used in the notebook:\n')
for val in globals().values():
if isinstance(val, types.ModuleType):
name = val.__name__
ver = ''
if name in versions:
ver = versions[name]
# special case for tensorflow-gpu
if name + '-gpu' in versions:
ver = versions[name + '-gpu']
name = name + '-gpu'
print('{:25}{:>10}'.format(name, ver))
imports()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps
Step2: Inline Question #1
Step3: You should expect to see approximately 27% accuracy. Now let's try out a larger k, say k = 5
Step5: You should expect to see a slightly better performance than with k = 1.
Step6: Cross-validation
|
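The prediction described above — compute distances to every training point, then take a majority vote among the labels of the k nearest — reduces to an argsort plus a bincount; a toy sketch with made-up distances:

```python
import numpy as np

dists = np.array([0.9, 0.1, 0.5, 0.2])  # distances from one test point to 4 train points
y_train = np.array([1, 0, 1, 0])        # labels of those train points
k = 3
nearest = np.argsort(dists)[:k]         # indices of the k closest -> [1, 3, 2]
pred = np.bincount(y_train[nearest]).argmax()
# pred -> 0 (two of the three nearest neighbors carry label 0)
```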
<ASSISTANT_TASK:>
Python Code:
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print 'Training data shape: ', X_train.shape
print 'Training labels shape: ', y_train.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print X_train.shape, X_test.shape
from cs231n.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print dists.shape
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
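The fully vectorized version usually relies on the identity ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b. As a hedged standalone sketch (independent of the classifier class, not necessarily the exercise's intended solution):

```python
import numpy as np

# Vectorized squared L2 distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
def pairwise_sq_dists(A, B):
    # A: (m, d), B: (n, d) -> (m, n) matrix of squared distances
    a2 = np.sum(A ** 2, axis=1)[:, np.newaxis]   # (m, 1)
    b2 = np.sum(B ** 2, axis=1)[np.newaxis, :]   # (1, n)
    # clamp tiny negative values caused by floating-point rounding
    return np.maximum(a2 + b2 - 2 * A.dot(B.T), 0)
```

Taking the square root of this matrix recovers the L2 distances the two-loop version computes.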
# Let's compare how fast the implementations are
def time_function(f, *args):
    """Call a function f with args and return the time (in seconds) that it took to execute."""
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print 'Two loop version took %f seconds' % two_loop_time
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print 'One loop version took %f seconds' % one_loop_time
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print 'No loop version took %f seconds' % no_loop_time
# you should see significantly faster performance with the fully vectorized implementation
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
X_train_folds = np.split(X_train,num_folds)
y_train_folds = np.split(y_train,num_folds)
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
for k in k_choices:
for i in range(num_folds):
X_train_data_fold = np.empty(shape=(0,X_train.shape[1]))
y_train_data_fold = np.array([])
for j in range(num_folds):
if j!=i:
X_train_data_fold = np.vstack((X_train_data_fold,X_train_folds[j]))
y_train_data_fold = np.hstack((y_train_data_fold,y_train_folds[j]))
classifier = KNearestNeighbor()
classifier.train(X_train_data_fold, y_train_data_fold)
dists = classifier.compute_distances_no_loops(X_train_folds[i])
y_test_pred = classifier.predict_labels(dists, k)
num_correct = np.sum(y_test_pred == y_train_folds[i])
accuracy = float(num_correct) / num_test
if k not in k_to_accuracies:
k_to_accuracies[k] = []
k_to_accuracies[k].append(accuracy)
print ("done with ",k)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print 'k = %d, accuracy = %f' % (k, accuracy)
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 5
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Euler's method
Step4: The midpoint method is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation
Step6: You are now going to solve the following differential equation
Step7: In the following cell you are going to solve the above ODE using four different algorithms
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
np.zeros?
def solve_euler(derivs, y0, x):
    """Solve a 1d ODE using Euler's method.

    Parameters
    ----------
    derivs : function
        The derivative of the diff-eq with the signature deriv(y,x) where
        y and x are floats.
    y0 : float
        The initial condition y[0] = y(x[0]).
    x : np.ndarray, list, tuple
        The array of times at which to solve the diff-eq.

    Returns
    -------
    y : np.ndarray
        Array of solutions y[i] = y(x[i])
    """
y = np.empty(len(x))
y[0] = y0
n = 0
while n < len(x)-1:
h = x[n+1] - x[n]
y[n+1] = y[n] + h*derivs(y[n],x[n])
n += 1
return y
assert np.allclose(solve_euler(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
def solve_midpoint(derivs, y0, x):
    """Solve a 1d ODE using the Midpoint method.

    Parameters
    ----------
    derivs : function
        The derivative of the diff-eq with the signature deriv(y,x) where y
        and x are floats.
    y0 : float
        The initial condition y[0] = y(x[0]).
    x : np.ndarray, list, tuple
        The array of times at which to solve the diff-eq.

    Returns
    -------
    y : np.ndarray
        Array of solutions y[i] = y(x[i])
    """
y = np.empty(len(x))
y[0] = y0
n = 0
while n < len(x)-1:
h = x[n+1] - x[n]
y[n+1] = y[n] + h*derivs(y[n] + h*derivs(y[n],x[n])/2,x[n] + h/2)
n += 1
return y
assert np.allclose(solve_midpoint(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
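Writing a single step of each method on its own makes the accuracy difference easy to see: for dy/dx = y with y(0) = 1 (exact solution e^x), one midpoint step is noticeably closer to e^h than one Euler step. A minimal sketch:

```python
import math

def euler_step(f, y, x, h):
    # y_{n+1} = y_n + h * f(y_n, x_n)
    return y + h * f(y, x)

def midpoint_step(f, y, x, h):
    # Evaluate the slope at the interval midpoint, reached by an Euler half-step
    return y + h * f(y + 0.5 * h * f(y, x), x + 0.5 * h)

f = lambda y, x: y          # dy/dx = y, y(0) = 1 -> y(h) = e^h
h = 0.1
err_euler = abs(euler_step(f, 1.0, 0.0, h) - math.exp(h))
err_mid = abs(midpoint_step(f, 1.0, 0.0, h) - math.exp(h))
```

Per step, the Euler error shrinks like h^2 while the midpoint error shrinks like h^3, which is why the midpoint curve hugs the exact solution in the plot below.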
def solve_exact(x):
    """Compute the exact solution to dy/dx = x + 2y.

    Parameters
    ----------
    x : np.ndarray
        Array of x values to compute the solution at.

    Returns
    -------
    y : np.ndarray
        Array of solutions y[i] = y(x[i]).
    """
y = np.empty(len(x))
for i in range(len(x)):
y[i] = 0.25*np.exp(2*x[i]) - 0.5*x[i] - 0.25
return y
assert np.allclose(solve_exact(np.array([0,1,2])),np.array([0., 1.09726402, 12.39953751]))
N = 10
x = np.linspace(0,1.0,N)
y0 = 0.0
derivs = lambda y, x: x + 2*y
y_euler = solve_euler(derivs,y0,x)
y_midpoint = solve_midpoint(derivs,y0,x)
y_odeint = odeint(derivs,y0,x)
y_exact = solve_exact(x)
plt.plot(x, y_euler, label='Euler')
plt.plot(x, y_midpoint, label='Midpoint')
plt.plot(x, y_odeint, label='odeint')
plt.plot(x, y_exact, label='Exact')
plt.legend(loc=2);
assert True # leave this for grading the plots
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Not all words are equal
Step2: Not all unique words are different
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
string1 = "hi aseem the car will be late regards company"
string2 = "hi company why will it be late I paid in advance regards aseem"
string3 = "hi aseem we don't know why will it be late regards company of company"
email_list = [string1, string2, string3]
vectorizer.fit(email_list)
bag_of_words = vectorizer.transform(email_list)
print vectorizer.vocabulary_
print bag_of_words
import nltk
nltk.download()
from nltk.corpus import stopwords
sw = stopwords.words("english")
len(sw)
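Stop words are typically filtered out of the token stream before counting. As a minimal sketch, a hand-picked stop list stands in here for the full NLTK `sw` list loaded above:

```python
# Minimal stand-in stop list; in practice use the NLTK `sw` list loaded above
stop = {"hi", "the", "will", "be", "in", "it", "why", "of", "we"}
tokens = "hi aseem the car will be late regards company".split()
content = [t for t in tokens if t not in stop]
# content -> ['aseem', 'car', 'late', 'regards', 'company']
```

Note that `CountVectorizer(stop_words='english')` can perform the same filtering inside the vectorizer itself.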
from nltk.stem.snowball import SnowballStemmer
stemmer = SnowballStemmer("english")
print stemmer.stem("responsiveness")
print stemmer.stem("responsivity")
print stemmer.stem("unresponsive")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Unit Test
|
<ASSISTANT_TASK:>
Python Code:
class Node (object):
def __init__ (self, data=None):
#TODO:implement me
pass
def __str__ (self):
#TODO:implement me
pass
class BinaryTree (object):
def __init__ (self):
#TODO:implement me
pass
def insert (self, newData):
#TODO:implement me
pass
def delete (self, key):
#TODO:implement me
pass
def maxNode (self):
#TODO:implement me
pass
def minNode (self):
#TODO:implement me
pass
def printPostOrder (self):
#TODO:implement me
pass
def printPreOrder (self):
#TODO:implement me
pass
def printInOrder (self):
#TODO:implement me
pass
def treeIsEmpty (self):
#TODO: implement me
pass
from nose.tools import assert_equal
class TestBinaryTree(object):
def test_insert_traversals (self):
myTree = BinaryTree()
myTree2 = BinaryTree()
for num in [50, 30, 70, 10, 40, 60, 80, 7, 25, 38]:
myTree.insert(num)
[myTree2.insert(num) for num in range (1, 100, 10)]
print("Test: insert checking with in order traversal")
expectVal = [7, 10, 25, 30, 38, 40, 50, 60, 70, 80]
assert_equal(myTree.printInOrder(), expectVal)
expectVal = [1, 11, 21, 31, 41, 51, 61, 71, 81, 91]
assert_equal(myTree2.printInOrder(), expectVal)
print("Test: insert checking with post order traversal")
expectVal = [7, 25, 10, 38, 40, 30, 60, 80, 70, 50]
assert_equal(myTree.printPostOrder(), expectVal)
expectVal = [91, 81, 71, 61, 51, 41, 31, 21, 11, 1]
assert_equal(myTree2.printPostOrder(), expectVal)
print("Test: insert checking with pre order traversal")
expectVal = [50, 30, 10, 7, 25, 40, 38, 70, 60, 80]
assert_equal(myTree.printPreOrder(), expectVal)
expectVal = [1, 11, 21, 31, 41, 51, 61, 71, 81, 91]
assert_equal(myTree2.printPreOrder(), expectVal)
print("Success: test_insert_traversals")
def test_max_min_nodes (self):
myTree = BinaryTree()
myTree.insert(5)
myTree.insert(1)
myTree.insert(21)
print("Test: max node")
assert_equal(myTree.maxNode(), 21)
myTree.insert(32)
assert_equal(myTree.maxNode(), 32)
print("Test: min node")
assert_equal(myTree.minNode(), 1)
print("Test: min node inserting negative number")
myTree.insert(-10)
assert_equal(myTree.minNode(), -10)
print("Success: test_max_min_nodes")
def test_delete (self):
myTree = BinaryTree()
myTree.insert(5)
print("Test: delete")
myTree.delete(5)
assert_equal(myTree.treeIsEmpty(), True)
print("Test: more complex deletions")
[myTree.insert(x) for x in range(1, 5)]
myTree.delete(2)
assert_equal(myTree.root.rightChild.data, 3)
print("Test: delete invalid value")
assert_equal(myTree.delete(100), False)
print("Success: test_delete")
def main():
testing = TestBinaryTree()
testing.test_insert_traversals()
testing.test_max_min_nodes()
testing.test_delete()
if __name__=='__main__':
main()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This installs datapackage-pipelines with speed optimizations; if you encounter problems installing it, remove the [speedup] suffix.
Step2: Create a flow
Step3: Run the flow
Step4: This is standard DataFlows library usage; now let's see what datapackage-pipelines provides
Step5: Using dpp
Step6: Run the pipeline
Step7: Pipeline Dependencies
Step8: Clear the pipelines state using dpp init and list the available pipelines
Step9: You can see that the new pipeline can't run until its dependency is executed.
Step10: Inspect the created datapackage
Step11: Pipeline processors
Step12: Run the pipeline
Step13: Print the datapackage
Step14: Pipelines Server
Step15: Start a local pipelines server, mounting the current working directory into the container
Step16: After a few seconds, the pipelines dashboard should be available at http
Step17: Inspect the Pipelines server logs and wait for Update Pipelines task to complete and pipelines to start running
|
<ASSISTANT_TASK:>
Python Code:
%%sh
python3 -m pip install -qU datapackage-pipelines[speedup]
%%sh
dpp version
%%writefile countries_population_flow.py
# notice that we don't import any datapackage-pipelines modules
# all the flow code is written purely with the DataFlows library
from dataflows import Flow, dump_to_path, load, add_metadata, printer, update_resource
from lxml import etree
from urllib.request import urlopen
# Generator flow step, copied from the DataFlows tutorial
# it just spews rows of data - in this case, countries populations scraped from Wikipedia
def country_population():
# Read the Wikipedia page and parse it using etree
page = urlopen('https://en.wikipedia.org/w/index.php?title=List_of_countries_and_dependencies_by_population&oldid=987469839').read()
parser = etree.XMLParser(recover=True)
tree = etree.fromstring(page, parser)
# Iterate on all tables, rows and cells
for table in tree.findall('.//table'):
if 'wikitable' in table.attrib.get('class', ''):
for row in table.find('tbody').findall('tr'):
cells = row.findall('td')
if len(cells) > 3:
# If a matching row is found...
name = cells[0].find('.//a').attrib.get('title').replace("Demographics of","")
population = cells[1].text
# ... yield a row with the information
yield dict(
name=name,
population=population
)
# The main entrypoint for Datapackage Pipelines, each flow file should have a single flow function
def flow(*args):
return Flow(
country_population(),
update_resource('res_1', **{
# Set a proper name for the resource
'name': 'countries_population',
# Always set a path as well, even if you don't intend to save it to the filesystem
'path': 'countries_population.csv',
# dpp:streaming property is required to let Datapackage Pipelines know it should handle this resource
'dpp:streaming': True,
})
)
# Entrypoint for running the flow directly, without Datapackage Pipelines
if __name__ == '__main__':
# Add a printer step and run the flow
Flow(flow(), printer(num_rows=1, tablefmt='html')).process()
%run countries_population_flow.py
%%writefile pipeline-spec.yaml
countries-population:
pipeline:
- flow: countries_population_flow
- run: dump.to_path
parameters:
out-path: data/countries_population
%%sh
dpp
%%sh
dpp run ./countries-population
%%writefile pipeline-spec.yaml
countries-population:
pipeline:
- flow: countries_population_flow
- run: dump.to_path
parameters:
out-path: data/countries_population
sorted_countries_by_name:
dependencies:
- pipeline: ./countries-population
- datapackage: data/countries_population/datapackage.json
pipeline:
- run: load
parameters:
from: data/countries_population/datapackage.json
resources: ['countries_population']
- run: sort
parameters:
resources: ['countries_population']
sort-by: '{name}'
- run: dump.to_path
parameters:
out-path: data/sorted_countries_by_name
%%sh
dpp init
dpp
%%sh
dpp run --dirty all
from dataflows import Flow, load, printer
Flow(
load('data/sorted_countries_by_name/datapackage.json'),
printer(num_rows=1, tablefmt='html')
).process()
%%writefile pipeline-spec.yaml
double-winners:
pipeline:
- run: load
parameters:
name: emmies
from: https://raw.githubusercontent.com/datahq/dataflows/master/data/emmy.csv
- run: load
parameters:
name: oscars
from: https://raw.githubusercontent.com/datahq/dataflows/master/data/academy.csv
- run: filter
parameters:
resources: ['emmies']
in:
- winner: 1
- run: concatenate
parameters:
target: {'name': 'emmies_filtered'}
resources: ['emmies']
fields:
emmy_nominee: ['nominee']
- run: join
parameters:
source:
name: 'emmies_filtered'
key: ['emmy_nominee']
delete: true
target:
name: 'oscars'
key: ['Name']
fields: {}
full: false
- run: filter
parameters:
in:
- Winner: "1"
- run: dump.to_path
parameters:
out-path: data/double_winners
%%sh
dpp run ./double-winners
from dataflows import Flow, printer, load
Flow(load('data/double_winners/datapackage.json'), printer(tablefmt='html', num_rows=1)).process()
%%sh
docker pull frictionlessdata/datapackage-pipelines
%%sh
docker run -d --name dpp -v `pwd`:/pipelines:rw -p 5000:5000 frictionlessdata/datapackage-pipelines server
%%writefile pipeline-spec.yaml
countries-population:
schedule:
# minute hour day_of_week day_of_month month_of_year
crontab: '* * * * *'
pipeline:
- flow: countries_population_flow
- run: dump.to_path
parameters:
out-path: data/countries_population
sorted_countries_by_name:
dependencies:
- pipeline: ./countries-population
- datapackage: data/countries_population/datapackage.json
pipeline:
- run: load
parameters:
from: data/countries_population/datapackage.json
resources: ['countries_population']
- run: sort
parameters:
resources: ['countries_population']
sort-by: '{name}'
- run: dump.to_path
parameters:
out-path: data/sorted_countries_by_name
%%sh
docker logs dpp --tail 5
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: and here are the first 6 bigrams
Step2: and the first 5 trigrams
Step3: Our task is to decide which of these N-grams are actually phrases.
Step4: We end up with a lot of junk collocations. This is often due to very infrequent N-grams; consider a bigram with tokens that occur only once in the whole corpus
Step5: We can also apply our filter from earlier notebooks using the apply_word_filter() function. This function expects another function that will return True if a token should be excluded. Our filter_token() function does the opposite (True to keep a token), so we can use a lambda function to invert it.
Step6: Depending on what kind of phrases you are interested in, it may be desirable to filter N-grams based on the part of speach (POS) of their constituent tokens. The pos_tag() function generates a list of tokens and tags from our corpus. We can use those tagged tokens to build the collocate model, and then filter based on tags.
Step7: Rather than specifying a number of N-grams to accept as phrases, we can set a threshold PMI value. Let's first examine the distribution of scores. Here are the bottom 20 scores (using score_ngrams()).
Step8: We can extract the scores, and plot a histogram to see the range and distribution of values.
Step9: To accept only N-grams with a PMI over 2, for example, we could do (selecting only the last 20)
Step10: The MWETokenizer combines the two tokens in each bigram using an underscore _. So when we operate on the tokens below, we must take that into account.
|
<ASSISTANT_TASK:>
Python Code:
documents.words()[:7]
[gram for gram in nltk.bigrams(documents.words())][:6]
[gram for gram in nltk.trigrams(documents.words())][:5]
bigram_measures = nltk.collocations.BigramAssocMeasures()
finder = nltk.collocations.BigramCollocationFinder.from_words(documents.words())
finder.nbest(bigram_measures.pmi, 10)
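The junk at the top comes straight from the PMI formula, pmi(w1, w2) = log2(p(w1, w2) / (p(w1) p(w2))). A hand-rolled sketch shows why a pair of tokens seen only once — always together — scores maximally:

```python
import math

def pmi(count_bigram, count_w1, count_w2, n_tokens):
    # Pointwise mutual information in bits
    p_xy = count_bigram / float(n_tokens)
    p_x = count_w1 / float(n_tokens)
    p_y = count_w2 / float(n_tokens)
    return math.log(p_xy / (p_x * p_y), 2)

# Two tokens that each occur once, appearing together once, in a 10,000-token corpus:
pmi(1, 1, 1, 10000)  # ~13.3 bits, despite resting on a single observation
```

This is why the frequency filter below is applied before trusting the PMI ranking.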
finder.apply_freq_filter(3)
finder.nbest(bigram_measures.pmi, 10)
finder.apply_word_filter(lambda token: not filter_token(token))
finder.nbest(bigram_measures.pmi, 10)
tagged_words = nltk.pos_tag(documents.words())
tagged_words[:10]
finder = nltk.collocations.BigramCollocationFinder.from_words(tagged_words)
finder.apply_freq_filter(3)
# Only keep nouns (noun POS tags all start with ``N``).
word_filter = lambda token: not (filter_token(token[0]) and token[1].startswith('N'))
finder.apply_word_filter(word_filter)
finder.nbest(bigram_measures.pmi, 10)
finder.score_ngrams(bigram_measures.pmi)[-20:]
ngram, score = zip(*[g for g in finder.score_ngrams(bigram_measures.pmi)])
plt.hist(score)
plt.show()
selected_ngrams = [ngram for ngram in finder.above_score(bigram_measures.pmi, 2.0)]
selected_ngrams[-20:]
selected_expressions = []
for (token1, pos1), (token2, pos2) in finder.above_score(bigram_measures.pmi, 2.0):
selected_expressions.append((token1, token2))
selected_expressions[:10]
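As a small standalone check of the joining behaviour (the `MWETokenizer` default separator is an underscore), using a made-up bigram:

```python
from nltk.tokenize import MWETokenizer

tok = MWETokenizer([('ernst', 'haeckel')])
tok.tokenize('the ernst haeckel archive'.split())
# -> ['the', 'ernst_haeckel', 'archive']
```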
tokenizer = nltk.MWETokenizer(selected_expressions)
words_over_time = nltk.ConditionalFreqDist([
(normalize_token(token).replace('_', ' '), metadata[fileid].date)
for fileid in documents.fileids()
for token in tokenizer.tokenize(documents.words(fileids=[fileid]))
if filter_token(token) or '_' in token
])
words_over_time.plot(conditions=['hans spemann', 'embryonic development',
'ernst haeckel', 'water'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Test data from the application is loaded into a simple data container. One row contains the data of one click. If not changed, the first file from files is loaded.
Step3: ID is the index of difficulty.
Step4: Calculate all models that can be drawn on the graph later on.
Step5: All data is put into a pandas dataframe for easier selection and matplot drawing
|
<ASSISTANT_TASK:>
Python Code:
files = ['clicks_2020-01-24 09:48:51_touchpad_14"_monitor.csv',
'clicks_2020-01-24 09:44:46_mouse_24"_monitor.csv',
'clicks_2020-01-23 16:00:32_mouse_24"_monitor.csv']
import csv
import numpy as np
import pandas as pd
from dataclasses import dataclass
from datetime import datetime, timedelta
@dataclass
class CircleClick():
circle_x: int
circle_y: int
click_x: int
click_y: int
radius: int
timestamp: datetime
clicks = []
with open(files[0]) as src:
reader = csv.reader(src)
for row in reader:
circle_click = CircleClick(circle_x=int(row[0]), circle_y=int(row[1]),
click_x=int(row[2]), click_y=int(row[3]),
timestamp=datetime.strptime(row[4], '%Y-%m-%d %H:%M:%S.%f'),
radius=int(row[5]))
clicks.append(circle_click)
clicks[0]
def distance(x1: int, x2: int, y1: int, y2: int):
a = np.power(x1 - x2, 2)
b = np.power(y1 - y2, 2)
distance = np.sqrt(a + b)
return distance
distance(0, 1, 0, 1)
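The index of difficulty computed by `FittsModel` below follows Fitts' original formulation, ID = log2(2D / W), with D the distance to the target and W its width. A quick sanity check of the formula on its own:

```python
import numpy as np

def index_of_difficulty(D, W):
    # Fitts' law: targets are harder when farther away (large D) or smaller (small W)
    return np.log2(2.0 * D / W)

index_of_difficulty(200, 50)  # -> 3.0 bits
```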
@dataclass
class FittsModel:
D: float = 0
W: float = 0
ID: float = 0
MT: timedelta = timedelta(0)
def calculate(self, start: CircleClick, end: CircleClick):
        """The model calculates its values D, W, ID and MT
        based on two clicks."""
self.D = distance(start.click_x,
end.circle_x + end.radius,
start.click_y,
end.circle_y + end.radius)
self.W = end.radius * 2
self.ID = np.log2(2 * self.D / self.W)
self.MT = end.timestamp - start.timestamp
@property
def MT_in_millis(self):
        # timedelta.total_seconds() already includes the fractional microseconds,
        # so adding them again would double-count
        return self.MT.total_seconds() * 1000
models = []
for i in range(1, len(clicks)):
model = FittsModel()
model.calculate(clicks[i - 1], clicks[i])
models.append(model)
models[0]
data = {'D': [], 'W': [], 'ID': [], 'MT': []}
for m in models:
data['D'].append(m.D)
data['W'].append(m.W)
data['ID'].append(m.ID)
data['MT'].append(m.MT_in_millis)
df = pd.DataFrame(data=data)
df
widths = set([m.W for m in models])
widths
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib
matplotlib.rcParams['figure.figsize']
matplotlib.rcParams['figure.figsize'] = [12, 8]
df['ID'].mean()
df.groupby(['W']).mean()
df.groupby(['W']).median()
from sklearn.linear_model import LinearRegression
# uncomment the next line to select a specific circle width
# widths = [100]
for width in widths:
_df = df[df['W'] == width]
model = LinearRegression()
model.fit(_df[['ID']], _df[['MT']])
min_x = min(df['ID'])
max_x = max(df['ID'])
predicted = model.predict([[min_x], [max_x]])
plt.scatter(x=_df['ID'], y=_df['MT'])
plt.plot([min_x, max_x], predicted)
plt.legend(widths)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We use the sklearn.datasets.load_digits method to load the MNIST data.
Step2: This dataset contains data for 1797 images. Each image is an 8*8 matrix stored as a flat-packed array.
Step3: Next we find out How many images we have per label.
Step4: Next we scale mnist_df so that every feature has zero mean and unit variance.
Step8: From the scaled data, we must calculate the $P_{ij}$s. To do this we first calculate the pairwise distances between each pair of rows in the input data. For efficiency's sake, we use the sklearn.metrics.pairwise_distances library function.
Step11: Now we're going to set up TensorFlow for the KLD minimization problem.
Step12: Let's compare that against what the sklearn implementation gives us
Step13: Appendix
Step14: First we calculate Q using the direct iterative algorithm which requires iterating over rows and columns of y this gives us a reference to test our vectorized implementation for correctness.
Step15: To calculate Q in a vectorized way, we note that
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import seaborn as sns
from sklearn.datasets import load_digits
digits_data = load_digits()
from IPython.display import display
display(dir(digits_data))
display(digits_data.data.shape)
display(digits_data.target.shape)
mnist_df = pd.DataFrame(index=digits_data.target, data=digits_data.data)
mnist_df.head()
image_counts = mnist_df.groupby(mnist_df.index)[0].count()
ax = image_counts.plot(kind='bar', title='Image count per label in data')
from sklearn.preprocessing import scale
mnist_df_scaled = pd.DataFrame(index=mnist_df.index,
columns=mnist_df.columns,
data=scale(mnist_df))
mnist_df_scaled.head()
MACHINE_PRECISION = np.finfo(float).eps
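The binary search below targets an entropy of log(perplexity); perplexity itself is just exp(H(p)), the "effective number of neighbours" of the conditional distribution. A minimal sketch of that relation:

```python
import numpy as np

def perplexity(p):
    # Perplexity = exp(H), with H = -sum p log p (natural log, as in optimal_sigma below)
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    h = -np.sum(p * np.log(np.maximum(p, np.finfo(float).eps)))
    return np.exp(h)

perplexity([0.25, 0.25, 0.25, 0.25])  # uniform over 4 outcomes -> 4.0
```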
from sklearn.metrics import pairwise_distances
def optimal_sigma(dist_i, i, target_entropy, n_iter=100, entropy_diff=1E-7):
    """For the pairwise distances between the i-th feature vector and every other feature vector in the original dataset,
    execute a binary search for ``sigma`` such that the entropy of the conditional probability distribution
    ${P_i}$ equals ``target_entropy`` at ``entropy_diff`` precision. Return the optimal sigma.
    Assume that the distances are not squared.

    Execute at most ``n_iter`` searches. Raise ``ValueError`` if we haven't found a decent enough
    value of ``sigma``.

    Note that dist_i.loc[i] is the distance of the i-th feature vector to itself, i.e. 0.
    """
assert dist_i.loc[i] == 0
# initial value of sigma
sigma = 1.0
# initial left and right boundaries for the binary search
sigma_min, sigma_max = -np.inf, np.inf
for _ in range(1, n_iter+1):
# Evaluate the Gaussian kernel with current sigma
r = dist_i.pow(2).div(2 * (sigma ** 2))
s = np.exp(-r)
# Recall that p(j|i) = 0 if i = j
s.loc[i] = 0
p = s / s.sum()
# the np.maximum trick below avoids taking log of very small (< MACHINE_PRECISION) numbers
# and ending up with -inf
entropy = - p.dropna().dot(np.log(np.maximum(p.dropna(), MACHINE_PRECISION)))
if np.fabs(target_entropy - entropy) <= entropy_diff:
break
if entropy > target_entropy:
# new boundary is [sigma_min, sigma]
sigma_max = sigma
# if sigma_min is still open
if not np.isfinite(sigma_min):
sigma *= 0.5
else:
# new boundary is [sigma, sigma_max]
sigma_min = sigma
# if sigma_max is still open
if not np.isfinite(sigma_max):
sigma *= 2.0
# If both the left and right boundaries are closed, new sigma
# is the midpoint of sigma_min and sigma_max
if np.all(np.isfinite([sigma_min, sigma_max])):
sigma = (sigma_min + sigma_max) / 2
else:
raise ValueError("Unable to find a sigma after [{}] iterations that matches target entropy: [{}]".format(
n_iter, target_entropy))
return sigma
def calc_optimal_sigmas(df, target_perplexity):
    """From the DataFrame of feature vectors, ``df``, calculate pairwise distances and then find the optimal values
    for the Gaussian kernels for each conditional probability distribution {P_i}."""
    target_entropy = np.log(target_perplexity)
    paired_dists = pd.DataFrame(data=pairwise_distances(df.values, metric='l2'))
    optimal_sigmas = paired_dists.apply(lambda row: optimal_sigma(row, row.name, target_entropy), axis=1)
    return paired_dists, optimal_sigmas
def calc_p(df, target_perplexity=30):
    """Calculate the joint distribution P_{ij} for the original input vectors x."""
    paired_dists, optimal_sigmas = calc_optimal_sigmas(df, target_perplexity)
    # Match optimal_sigma: p(j|i) is proportional to exp(-d_ij^2 / (2 * sigma_i^2)),
    # with one sigma per row, so divide row-wise (axis=0) before exponentiating
    p_cond = np.exp(-paired_dists.pow(2).div(2 * optimal_sigmas.pow(2), axis=0))
p_cond.values[np.diag_indices_from(p_cond)] = 0
p_cond = p_cond.div(p_cond.sum(axis=1), axis=0)
n_points = p_cond.shape[0]
p_joint = (p_cond + p_cond.T) / (2 * n_points)
return p_joint
p_joint = calc_p(mnist_df_scaled)
import tensorflow as tf
display(tf.__version__)
def pairwise_dist(tf_y):
    """Calculate pairwise squared distances between each pair of vectors in tf_y."""
tf_norms = tf.square(tf.norm(tf_y, axis=1))
tf_r1 = tf.expand_dims(tf_norms, axis=1)
tf_r2 = tf.expand_dims(tf_norms, axis=0)
tf_y_dot_yT = tf.matmul(tf_y, tf_y, transpose_b=True)
tf_dot = tf.cast(tf_y_dot_yT, dtype=tf.float32)
tf_r = tf_r1 + tf_r2
tf_d1 = tf_r - 2 * tf_dot
return tf_d1
def calc_q(tf_y):
    """Calculate the joint distribution of two embeddings y_i and y_j in tensorflow.

    Call from inside an active tensorflow session only.
    """
tf_pdist = pairwise_dist(tf_y)
tf_d = 1 / (1 + tf_pdist)
tf_d = tf.matrix_set_diag(tf_d, tf.zeros(tf.shape(tf_d)[0]))
tf_q = tf.div(tf_d, tf.reduce_sum(tf_d))
return tf_q
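The appendix outline calls for a loop-based reference implementation of Q to check the vectorized one against. A NumPy sketch of that reference (a hypothetical helper, deliberately slow and outside the TensorFlow graph):

```python
import numpy as np

def calc_q_loops(y):
    # Student-t joint distribution, computed pair by pair as a slow reference:
    # q_ij proportional to 1 / (1 + ||y_i - y_j||^2), with q_ii = 0
    n = len(y)
    num = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                num[i, j] = 1.0 / (1.0 + np.sum((y[i] - y[j]) ** 2))
    return num / num.sum()
```

Running `calc_q` inside a session and comparing its output to `calc_q_loops` on the same embedding matrix is a cheap correctness check.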
embedding_size = 2
n_points = p_joint.shape[0]
losses = []
n_iter = 1000
loss_epsilon = 1E-8
learning_rate = 0.2
current_graph = tf.Graph()
with current_graph.as_default():
# Placeholder for the joint distribution P of feature vectors in original space
# This is a constant w.r.t the KLD minimization
tf_p_joint = tf.placeholder(dtype=tf.float32, name='p_joint', shape=[n_points, n_points])
# Feature vectors in the embedding space - initialized by sampling from random distribution
tf_y = tf.Variable(name='y', validate_shape=False,
dtype=tf.float32,
initial_value=tf.random_normal([n_points, embedding_size]))
# One step for iterative KLD minimization
# calculate joint distribution Q of embeddings
tf_q_joint = calc_q(tf_y)
# Both P and Q have zeros in the diagonals. Since we want to calculate log{P/Q},
# we temporarily replace the diagonal 0s with 1s, so the logs of the diagonal entries are zero
# and they don't contribute to the KLD value.
p_diag_1 = tf.matrix_set_diag(tf_p_joint, tf.ones(n_points))
q_diag_1 = tf.matrix_set_diag(tf_q_joint, tf.ones(n_points))
tf_log_p_by_q = tf.log(tf.div(p_diag_1, q_diag_1))
kld = tf.reduce_sum(tf.multiply(tf_p_joint, tf_log_p_by_q))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate, name='Adam')
train_op = optimizer.minimize(kld, name='KLD_minimization')
with tf.Session() as sess:
# initialize tensorflow variables
init = tf.global_variables_initializer()
sess.run(init)
feed_dict = {tf_p_joint: p_joint.astype(np.float32).values}
# run the optimization step n_iter times, breaking out if two successive steps
# produce an absolute change in the value of the loss function <= loss_epsilon
for i in range(1, n_iter+1):
_, loss_val = sess.run([train_op, kld], feed_dict=feed_dict)
losses.append(loss_val)
if i % 100 == 0:
print("After iteration: {}, loss: {}".format(i, loss_val))
if len(losses) >= 2:
last_loss = losses[-2]
loss_delta = np.abs(last_loss-loss_val)
if loss_delta < loss_epsilon:
print("Exiting after {} iterations, loss_delta [{}] < loss_epsilon [{}]".format(
i, loss_delta, loss_epsilon))
break
y_embeddings = sess.run(tf_y, feed_dict=feed_dict)
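For intuition, here is a standalone NumPy sketch (not part of the TensorFlow graph above) of the KL divergence the optimizer is minimizing:

```python
import numpy as np

def kld(p, q, eps=1e-12):
    # KL(P || Q) = sum_ij p_ij * log(p_ij / q_ij); entries with p_ij == 0 contribute nothing
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / (q[mask] + eps))))

p = np.array([0.5, 0.5])
q = np.array([0.9, 0.1])
print(kld(p, p))  # a distribution against itself gives ~0.0
print(kld(p, q))  # positive whenever the distributions differ
```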
pd.Series(losses).rolling(10).mean().plot()
embeddings_df = pd.DataFrame(index=mnist_df_scaled.index, data=y_embeddings)
plot_source = embeddings_df.reset_index().rename(columns={
'index': 'label', 0: 'x', 1:'y'})
fg = sns.FacetGrid(data=plot_source, hue='label', size=10)
fg.map(plt.scatter, 'x', 'y').add_legend()
from sklearn.manifold import TSNE
# Extract the embeddings and convert into a DataFrame
sk_embedded = TSNE(n_components=2).fit_transform(mnist_df_scaled.values)
sk_embedded = pd.DataFrame(index=mnist_df_scaled.index, data=sk_embedded)
# Display
sk_embedded = sk_embedded.reset_index().rename(columns={'index': 'label', 0: 'x', 1:'y'})
fg = sns.FacetGrid(data=sk_embedded, hue='label', size=10)
fg.map(plt.scatter, 'x', 'y').add_legend()
y = pd.DataFrame(index=range(3), columns=range(5), data=np.random.uniform(1, 5, size=[3, 5]))
y
Q_simple = pd.DataFrame(index=y.index, columns=y.index, data=0.0)
for i in range(0, y.shape[0]):
for j in range(0, i):
assert i != j, (i, j)
md = y.loc[i, :].sub(y.loc[j, :])
d = 1 + np.linalg.norm(md)**2
Q_simple.loc[i, j] = 1 / d
Q_simple.loc[j, i] = 1 / d
Q_simple
norms = y.apply(np.linalg.norm, axis=1).values
r1 = np.atleast_2d(norms**2)
r2 = r1.T
d1 = r1 + r2
d2 = d1 - 2 * np.dot(y, y.T)
d2 += 1
d3 = 1 / d2
d3[np.diag_indices_from(d3)] = 0
Q_vectorized = pd.DataFrame(d3)
Q_vectorized
from pandas.util.testing import assert_frame_equal
assert_frame_equal(Q_simple, Q_vectorized, check_less_precise=True)
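As a cross-check on the vectorized version (an alternative route using SciPy, not part of the original notebook), `pdist`/`squareform` produce the same unnormalized Q kernel:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
Y = rng.uniform(1, 5, size=(3, 5))
# 1 / (1 + squared pairwise distance) with a zero diagonal, as in Q_simple/Q_vectorized
Q = 1.0 / (1.0 + squareform(pdist(Y, metric='sqeuclidean')))
np.fill_diagonal(Q, 0.0)
print(Q)
```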
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Geometric Brownian motion
Step3: Call the brownian function to simulate a Wiener process with 1000 steps and max time of 1.0. Save the results as two arrays t and W.
Step4: Visualize the process using plt.plot with t on the x-axis and W(t) on the y-axis. Label your x and y axes.
Step5: Use np.diff to compute the changes at each step of the motion, dW, and then compute the mean and standard deviation of those differences.
Step6: Write a function that takes $W(t)$ and converts it to geometric Brownian motion using the equation
Step7: Use your function to simulate geometric brownian motion, $X(t)$ for $X_0=1.0$, $\mu=0.5$ and $\sigma=0.3$ with the Wiener process you computed above.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import antipackage
import github.ellisonbg.misc.vizarray as va
def brownian(maxt, n):
Return one realization of a Brownian (Wiener) process with n steps and a max time of t.
t = np.linspace(0.0,maxt,n)
h = t[1]-t[0]
Z = np.random.normal(0.0,1.0,n-1)
dW = np.sqrt(h)*Z
W = np.zeros(n)
W[1:] = dW.cumsum()
return t, W
#t, w = numpy.empty(1000)
t, W = brownian(1.0, 1000)
assert isinstance(t, np.ndarray)
assert isinstance(W, np.ndarray)
assert t.dtype==np.dtype(float)
assert W.dtype==np.dtype(float)
assert len(t)==len(W)==1000
plt.plot(t, W, "bo")
plt.ylabel('Position')
plt.xlabel('Time')
assert True # this is for grading
dW = np.diff(W)
mean = np.mean(dW)
stdev = np.std(dW)
print mean
print stdev
assert len(dW)==len(W)-1
assert dW.dtype==np.dtype(float)
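As a numerical sanity check (a sketch, not part of the graded exercise): Wiener increments over step size h should have mean close to 0 and standard deviation close to sqrt(h):

```python
import numpy as np

np.random.seed(0)
n, maxt = 100000, 1.0
h = maxt / (n - 1)
dW_check = np.sqrt(h) * np.random.normal(0.0, 1.0, n - 1)
print(dW_check.mean())  # close to 0
print(dW_check.std())   # close to sqrt(h), about 0.00316
```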
def geo_brownian(t, W, X0, mu, sigma):
return X0*np.exp((mu-(sigma**2)/2)*t+sigma*W)
print geo_brownian(t,W,2,mean,stdev)
assert True # leave this for grading
X = geo_brownian(t,W,1.0,0.5,0.3)
plt.plot(t,X, 'ro')
plt.ylabel('X(t)')
plt.xlabel('time')
assert True # leave this for grading
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle.
Step2: Let's set reasonable (although not necessarily physical) values for the secondary component.
Step3: We'll add a spot to the primary component.
Step4: Adding Datasets
Step5: Because we have such a cool transiting object, we'll have to use blackbody atmospheres and manually provide limb-darkening.
Step6: Running Compute
Step7: Plotting
Step8: Let's go through these options (see also the plot API docs)
Step9: Now let's animate the same figure in time. We'll use the same arguments as the static plot above, with the following exceptions
|
<ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
import phoebe
import numpy as np
b = phoebe.default_binary()
b.flip_constraint('mass@secondary', solve_for='q')
b.set_value(qualifier='mass', component='secondary', value=0.2)
b.set_value(qualifier='requiv', component='secondary', value=0.2)
b.set_value(qualifier='teff', component='secondary', value=300)
b.add_spot(component='primary',
relteff=0.90,
long=0,
colat=90,
radius=20,
feature='spot01')
b.add_dataset('lc', compute_times=phoebe.linspace(-0.1, 0.1, 201))
b.set_value(qualifier='atm', component='secondary', value='blackbody')
b.set_value(qualifier='ld_mode', component='secondary', value='manual')
anim_times = phoebe.linspace(-0.1, 0.1, 101)
b.add_dataset('mesh', compute_times=anim_times, coordinates='uvw', columns='teffs')
b.run_compute(distortion_method='sphere', irrad_method='none')
print(np.min(b.get_value('teffs', time=0.0, component='primary')), np.max(b.get_value('teffs', time=0.0, component='primary')))
afig, mplfig = b.plot(time=0.0,
fc='teffs', fcmap='plasma', fclim=(5000, 6000),
ec='face',
tight_layout=True,
show=True)
afig, mplfig = b.plot(times=anim_times,
fc='teffs', fcmap='plasma', fclim=(5000, 6000),
ec='face',
consider_for_limits={'primary': True, 'secondary': False},
tight_layout=True, pad_aspect=False,
animate=True,
save='spot_transit.gif',
save_kwargs={'writer': 'imagemagick'})
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: The inspect module provides functions for learning about live objects, classes, instances, and methods. The functions in this module can be used to retrieve the original source code for a function, look the arguments to a method on the stack, and extract the sort of information useful for producing library documentation for source code.
Step5: The first kind of introspection probes live objects to learn about them. Use getmembers() to discover the member attributes of object. The types of members that might be returned depend on the type of object scanned. Modules can contain classes and functions; classes can contain methods and attributes; and so on.
Step6: The predicate argument can be used to filter the types of objects returned.
Step7: Inspecting Classes
Step8: To find the methods of a class, use the isfunction() predicate. The ismethod() predicate only recognizes bound methods of instances.
Step9: The output for B includes the override for get_name() as well as the new method, and the inherited init() method implemented in A.
Step10: Inspecting Instances
Step11: Documentation Strings
Step12: The second line of the docstring is indented when it is retrieved through the attribute directly, but moved to the left margin by getdoc().
Step13: Retrieving Source
Step14: To retrieve the source for a single method, pass the method reference to getsource().
Step15: Use getsourcelines() instead of getsource() to retrieve the lines of source split into individual strings.
Step16: Method and Function Signatures
Step17: The function arguments are available through the parameters attribute of the Signature. parameters is an ordered dictionary mapping the parameter names to Parameter instances describing the argument. In this example, the first argument to the function, arg1, does not have a default value, while arg2 does.
Step18: Class Hierarchies
Step19: The output from this example is the tree of inheritance for the A, B, C, and D classes. D appears twice, since it inherits from both C and A.
Step20: Method Resolution Order
Step21: The Stack and Frames
Step22: Using stack(), it is also possible to access all of the stack frames from the current frame to the first caller. This example is similar to the one shown earlier, except it waits until reaching the end of the recursion to print the stack information.
Step23: There are other functions for building lists of frames in different contexts, such as when an exception is being processed. See the documentation for trace(), getouterframes(), and getinnerframes() for more details.
|
<ASSISTANT_TASK:>
Python Code:
# %load example.py
def module_level_function(arg1, arg2='default', *args, **kwargs):
This function is declared in the module.
local_variable = arg1 * 2
return local_variable
class A(object):
The A class.
def __init__(self, name):
self.name = name
def get_name(self):
"Returns the name of the instance."
return self.name
instance_of_a = A('sample_instance')
class B(A):
This is the B class.
It is derived from A.
# This method is not part of A.
def do_something(self):
Does some work
def get_name(self):
"Overrides version from A"
return 'B(' + self.name + ')'
import inspect
import example
for name, data in inspect.getmembers(example):
if name.startswith('__'):
continue
print('{} : {!r}'.format(name, data))
import inspect
import example
for name, data in inspect.getmembers(example, inspect.isclass):
print('{} : {!r}'.format(name, data))
import inspect
from pprint import pprint
import example
pprint(inspect.getmembers(example.A), width=65)
import inspect
from pprint import pprint
import example
pprint(inspect.getmembers(example.A, inspect.isfunction))
import inspect
from pprint import pprint
import example
pprint(inspect.getmembers(example.B, inspect.isfunction))
import inspect
from pprint import pprint
import example
a = example.A(name='inspect_getmembers')
pprint(inspect.getmembers(a, inspect.ismethod))
import inspect
import example
print('B.__doc__:')
print(example.B.__doc__)
print()
print('getdoc(B):')
print(inspect.getdoc(example.B))
import inspect
import example
print(inspect.getcomments(example.B.do_something))
import inspect
import example
print(inspect.getsource(example.A))
import inspect
import example
print(inspect.getsource(example.A.get_name))
import inspect
import pprint
import example
pprint.pprint(inspect.getsourcelines(example.A.get_name))
import inspect
import example
sig = inspect.signature(example.module_level_function)
print('module_level_function{}'.format(sig))
print('\nParameter details:')
for name, param in sig.parameters.items():
if param.kind == inspect.Parameter.POSITIONAL_ONLY:
print(' {} (positional-only)'.format(name))
elif param.kind == inspect.Parameter.POSITIONAL_OR_KEYWORD:
if param.default != inspect.Parameter.empty:
print(' {}={!r}'.format(name, param.default))
else:
print(' {}'.format(name))
elif param.kind == inspect.Parameter.VAR_POSITIONAL:
print(' *{}'.format(name))
elif param.kind == inspect.Parameter.KEYWORD_ONLY:
if param.default != inspect.Parameter.empty:
print(' {}={!r} (keyword-only)'.format(
name, param.default))
else:
print(' {} (keyword-only)'.format(name))
elif param.kind == inspect.Parameter.VAR_KEYWORD:
print(' **{}'.format(name))
import inspect
import example
sig = inspect.signature(example.module_level_function)
bound = sig.bind(
'this is arg1',
'this is arg2',
'this is an extra positional argument',
extra_named_arg='value',
)
print('Arguments:')
for name, value in bound.arguments.items():
print('{} = {!r}'.format(name, value))
print('\nCalling:')
print(example.module_level_function(*bound.args, **bound.kwargs))
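`bind()` raises if any mandatory argument is missing; a related sketch (not in the original) uses `bind_partial()` together with `apply_defaults()` to fill in unsupplied values:

```python
import inspect

def f(a, b='default', *args, **kwargs):
    return (a, b, args, kwargs)

sig = inspect.signature(f)
bound = sig.bind_partial('only-a')
bound.apply_defaults()  # fills in b='default', args=(), kwargs={}
print(bound.arguments)
```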
import inspect
import example
class C(example.B):
pass
class D(C, example.A):
pass
def print_class_tree(tree, indent=-1):
if isinstance(tree, list):
for node in tree:
print_class_tree(node, indent + 1)
else:
print(' ' * indent, tree[0].__name__)
return
if __name__ == '__main__':
print('A, B, C, D:')
print_class_tree(inspect.getclasstree(
[example.A, example.B, C, D])
)
import inspect
import example
print_class_tree(inspect.getclasstree(
[example.A, example.B, C, D],
unique=True,
))
import inspect
import example
class C(object):
pass
class C_First(C, example.B):
pass
class B_First(example.B, C):
pass
print('B_First:')
for c in inspect.getmro(B_First):
print(' {}'.format(c.__name__))
print()
print('C_First:')
for c in inspect.getmro(C_First):
print(' {}'.format(c.__name__))
import inspect
import pprint
def recurse(limit, keyword='default', *, kwonly='must be named'):
local_variable = '.' * limit
keyword = 'changed value of argument'
frame = inspect.currentframe()
print('line {} of {}'.format(frame.f_lineno,
frame.f_code.co_filename))
print('locals:')
pprint.pprint(frame.f_locals)
print()
if limit <= 0:
return
recurse(limit - 1)
return local_variable
if __name__ == '__main__':
recurse(2)
import inspect
import pprint
def show_stack():
for level in inspect.stack():
print('{}[{}]\n -> {}'.format(
level.frame.f_code.co_filename,
level.lineno,
level.code_context[level.index].strip(),
))
pprint.pprint(level.frame.f_locals)
print()
def recurse(limit):
local_variable = '.' * limit
if limit <= 0:
show_stack()
return
recurse(limit - 1)
return local_variable
if __name__ == '__main__':
recurse(2)
!python3 -m inspect -d example
!python3 -m inspect -d example:A
!python3 -m inspect example:A.get_name
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plot continuous head position with respect to the mean recording position
Step2: Plot raw data with annotated movement
Step3: After checking the annotated movement artifacts, calculate the new transform
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Adonay Nunes <adonay.s.nunes@gmail.com>
# Luke Bloy <luke.bloy@gmail.com>
# License: BSD-3-Clause
import os.path as op
import mne
from mne.datasets.brainstorm import bst_auditory
from mne.io import read_raw_ctf
from mne.preprocessing import annotate_movement, compute_average_dev_head_t
# Load data
data_path = bst_auditory.data_path()
data_path_MEG = op.join(data_path, 'MEG')
subject = 'bst_auditory'
subjects_dir = op.join(data_path, 'subjects')
trans_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-trans.fif')
raw_fname1 = op.join(data_path_MEG, 'bst_auditory', 'S01_AEF_20131218_01.ds')
raw_fname2 = op.join(data_path_MEG, 'bst_auditory', 'S01_AEF_20131218_02.ds')
# read and concatenate two files, ignoring device<->head mismatch
raw = read_raw_ctf(raw_fname1, preload=False)
mne.io.concatenate_raws(
[raw, read_raw_ctf(raw_fname2, preload=False)], on_mismatch='ignore')
raw.crop(350, 410).load_data()
raw.resample(100, npad="auto")
# Get cHPI time series and compute average
chpi_locs = mne.chpi.extract_chpi_locs_ctf(raw)
head_pos = mne.chpi.compute_head_pos(raw.info, chpi_locs)
original_head_dev_t = mne.transforms.invert_transform(
raw.info['dev_head_t'])
average_head_dev_t = mne.transforms.invert_transform(
compute_average_dev_head_t(raw, head_pos))
fig = mne.viz.plot_head_positions(head_pos)
for ax, val, val_ori in zip(fig.axes[::2], average_head_dev_t['trans'][:3, 3],
original_head_dev_t['trans'][:3, 3]):
ax.axhline(1000 * val, color='r')
ax.axhline(1000 * val_ori, color='g')
# The green horizontal lines represent the original head position, whereas the
# red lines are the new head position averaged over all the time points.
mean_distance_limit = .0015 # in meters
annotation_movement, hpi_disp = annotate_movement(
raw, head_pos, mean_distance_limit=mean_distance_limit)
raw.set_annotations(annotation_movement)
raw.plot(n_channels=100, duration=20)
new_dev_head_t = compute_average_dev_head_t(raw, head_pos)
raw.info['dev_head_t'] = new_dev_head_t
mne.viz.plot_alignment(raw.info, show_axes=True, subject=subject,
trans=trans_fname, subjects_dir=subjects_dir)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Cifar
Step2: Train Logistic Regression Model
Step3: Deploy Logistic Regression Model
Step5: Now that you've deployed your model, go ahead and check back on your running frontend application from part 1. You should see the accuracy rise from around 50% to the accuracy of your SKLearn model (68-74%), without having to stop or modify your application at all!
Step6: Deploy TensorFlow Model
Step7: Inspect Clipper Metrics
Step8: Congratulations! You've now successfully completed the tutorial. You started Clipper, created an application and queried it from a frontend client, and deployed two models trained in two different machine learning frameworks (Scikit-Learn and TensorFlow) to the running system.
|
<ASSISTANT_TASK:>
Python Code:
import sys
import os
from clipper_admin import Clipper
# Change the username if necessary
user = ""
# Set the path to the SSH key
key = ""
# Set the SSH host
host = ""
clipper = Clipper(host, user, key)
cifar_loc = ""
import cifar_utils
train_x, train_y = cifar_utils.filter_data(
*cifar_utils.load_cifar(cifar_loc, cifar_filename="cifar_train.data", norm=True))
test_x, test_y = cifar_utils.filter_data(
*cifar_utils.load_cifar(cifar_loc, cifar_filename="cifar_test.data", norm=True))
from sklearn import linear_model as lm
def train_sklearn_model(m, train_x, train_y):
m.fit(train_x, train_y)
return m
lr_model = train_sklearn_model(lm.LogisticRegression(), train_x, train_y)
print("Logistic Regression test score: %f" % lr_model.score(test_x, test_y))
model_name = "birds_vs_planes_classifier"
model_added = clipper.deploy_model(
model_name,
1,
lr_model,
"clipper/sklearn_cifar_container:latest",
"doubles",
num_containers=1
)
print("Model deploy successful? {success}".format(success=model_added))
import os
import tensorflow as tf
import numpy as np
tf_cifar_model_path = os.path.abspath("tf_cifar_model/cifar10_model_full")
tf_session = tf.Session('', tf.Graph())
with tf_session.graph.as_default():
saver = tf.train.import_meta_graph("%s.meta" % tf_cifar_model_path)
saver.restore(tf_session, tf_cifar_model_path)
def tensorflow_score(session, test_x, test_y):
NOTE: This predict method expects pre-whitened (normalized) images
logits = session.run('softmax_logits:0',
feed_dict={'x:0': test_x})
relevant_activations = logits[:, [cifar_utils.negative_class, cifar_utils.positive_class]]
preds = np.argmax(relevant_activations, axis=1)
return float(np.sum(preds == test_y)) / float(len(test_y))
print("TensorFlow CNN test score: %f" % tensorflow_score(tf_session, test_x, test_y))
model_added = clipper.deploy_model(
model_name,
2,
os.path.abspath("tf_cifar_model"),
"clipper/tf_cifar_container:latest",
"doubles",
num_containers=1
)
print("Model deploy successful? {success}".format(success=model_added))
clipper.inspect_instance()
clipper.stop_all()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Wall Clock Time
Step2: The epoch is the start of measurement for time, which for Unix systems is 0
Step3: Monotonic Clocks
Step4: Processor Clock Time
Step5: In this example, the formatted ctime() is printed along with the floating point values from time(), and clock() for each iteration through the loop.
Step6: In this example, the loop does very little work by going to sleep after each iteration. The time() value increases even while the application is asleep, but the clock() value does not.
Step7: As with monotonic(), the epoch for perf_counter() is undefined, and the values are meant to be used for comparing and computing values, not as absolute times.
Step8: The gmtime() function returns the current time in UTC.
Step9: The default time zone on the system used to prepare the examples is US/Eastern. The other zones in the example change the tzname, daylight flag, and timezone offset value.
|
<ASSISTANT_TASK:>
Python Code:
import textwrap
import time
available_clocks = [
('clock', time.clock),
('monotonic', time.monotonic),
('perf_counter', time.perf_counter),
('process_time', time.process_time),
('time', time.time),
]
for clock_name, func in available_clocks:
print(textwrap.dedent('''\
{name}:
adjustable : {info.adjustable}
implementation: {info.implementation}
monotonic : {info.monotonic}
resolution : {info.resolution}
current : {current}
''').format(
name=clock_name,
info=time.get_clock_info(clock_name),
current=func())
)
import time
print('The time is:', time.time())
import time
print('The time is :', time.ctime())
later = time.time() + 15
print('15 secs from now :', time.ctime(later))
import time
start = time.monotonic()
time.sleep(0.1)
end = time.monotonic()
print('start : {:>9.2f}'.format(start))
print('end : {:>9.2f}'.format(end))
print('span : {:>9.2f}'.format(end - start))
import hashlib
import time
# Data to use to calculate md5 checksums
data = open("README.md", 'rb').read()
for i in range(5):
h = hashlib.sha1()
print(time.ctime(), ': {:0.3f} {:0.3f}'.format(
time.time(), time.clock()))
for i in range(300000):
h.update(data)
cksum = h.digest()
import time
template = '{} - {:0.2f} - {:0.2f}'
print(template.format(
time.ctime(), time.time(), time.clock())
)
for i in range(3, 0, -1):
print('Sleeping', i)
time.sleep(i)
print(template.format(
time.ctime(), time.time(), time.clock())
)
import hashlib
import time
# Data to use to calculate md5 checksums
data = open("README.md", 'rb').read()
loop_start = time.perf_counter()
for i in range(5):
iter_start = time.perf_counter()
h = hashlib.sha1()
for i in range(300000):
h.update(data)
cksum = h.digest()
now = time.perf_counter()
loop_elapsed = now - loop_start
iter_elapsed = now - iter_start
print(time.ctime(), ': {:0.3f} {:0.3f}'.format(
iter_elapsed, loop_elapsed))
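A common pattern built on `perf_counter` (a sketch, not from the original text) is a small context-manager timer for measuring intervals:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    # perf_counter is monotonic and high-resolution, so it is safe for intervals
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        print('{}: {:0.4f}s'.format(label, elapsed))

with timed('sleep'):
    time.sleep(0.05)
```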
import time
def show_struct(s):
print(' tm_year :', s.tm_year)
print(' tm_mon :', s.tm_mon)
print(' tm_mday :', s.tm_mday)
print(' tm_hour :', s.tm_hour)
print(' tm_min :', s.tm_min)
print(' tm_sec :', s.tm_sec)
print(' tm_wday :', s.tm_wday)
print(' tm_yday :', s.tm_yday)
print(' tm_isdst:', s.tm_isdst)
print('gmtime:')
show_struct(time.gmtime())
print('\nlocaltime:')
show_struct(time.localtime())
print('\nmktime:', time.mktime(time.localtime()))
import time
import os
def show_zone_info():
print(' TZ :', os.environ.get('TZ', '(not set)'))
print(' tzname:', time.tzname)
print(' Zone : {} ({})'.format(
time.timezone, (time.timezone / 3600)))
print(' DST :', time.daylight)
print(' Time :', time.ctime())
print()
print('Default :')
show_zone_info()
ZONES = [
'GMT',
'Europe/Amsterdam',
]
for zone in ZONES:
os.environ['TZ'] = zone
time.tzset()
print(zone, ':')
show_zone_info()
import time
def show_struct(s):
print(' tm_year :', s.tm_year)
print(' tm_mon :', s.tm_mon)
print(' tm_mday :', s.tm_mday)
print(' tm_hour :', s.tm_hour)
print(' tm_min :', s.tm_min)
print(' tm_sec :', s.tm_sec)
print(' tm_wday :', s.tm_wday)
print(' tm_yday :', s.tm_yday)
print(' tm_isdst:', s.tm_isdst)
now = time.ctime(1483391847.433716)
print('Now:', now)
print("type of now:", type(now))
parsed = time.strptime(now)
print('\nParsed:')
show_struct(parsed)
print("type of parsed:", type(parsed))
print('\nFormatted:',
time.strftime("%a %b %d %H:%M:%S %Y", parsed))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
def total_match(lst1, lst2):
'''
Write a function that accepts two lists of strings and returns the list whose
strings have the smaller total number of characters.
If the two lists have the same number of characters, return the first list.
Examples
total_match([], []) ➞ []
total_match(['hi', 'admin'], ['hI', 'Hi']) ➞ ['hI', 'Hi']
total_match(['hi', 'admin'], ['hi', 'hi', 'admin', 'project']) ➞ ['hi', 'admin']
total_match(['hi', 'admin'], ['hI', 'hi', 'hi']) ➞ ['hI', 'hi', 'hi']
total_match(['4'], ['1', '2', '3', '4', '5']) ➞ ['4']
'''
l1 = 0
for st in lst1:
l1 += len(st)
l2 = 0
for st in lst2:
l2 += len(st)
if l1 <= l2:
return lst1
else:
return lst2
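A more compact equivalent (same behavior as the loop version above), with the docstring's examples as assertions:

```python
def total_match_short(lst1, lst2):
    # the list with the smaller total character count wins; ties go to the first list
    return lst1 if sum(map(len, lst1)) <= sum(map(len, lst2)) else lst2

assert total_match_short([], []) == []
assert total_match_short(['hi', 'admin'], ['hI', 'Hi']) == ['hI', 'Hi']
assert total_match_short(['hi', 'admin'], ['hi', 'hi', 'admin', 'project']) == ['hi', 'admin']
assert total_match_short(['4'], ['1', '2', '3', '4', '5']) == ['4']
```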
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: (A) List the line numbers of the code above in the order that they will be executed. If a line will be executed more than once, list it each time.
Step2: 2. Data structure woes (2pt)
Step3: Why does scores get sorted?
Step4: Yes, that's right--even when you try to make a new copy of a list, it's actually just a reference to the same list! This is called aliasing. The same thing will happen with a dictionary. This can really trip you up if you don't know it's happening.
Step5: And here's a way for dictionaries
Step6: 3. Writing custom functions (8pts)
Step7: (B) (1pt) Create a function called "reverse_compl" that takes a single sequence as a parameter and returns the reverse complement.
Step8: (C) (1pt) Create a function called "read_fasta" that takes a file name as a parameter (which is assumed to be in fasta format), puts each fasta entry into a dictionary (using the header line as a key and the sequence as a value), and then returns the dictionary.
Step9: (D) (2pts) Create a function called "rand_seq" that takes an integer length as a parameter, and then returns a random DNA sequence of that length.
Step10: (E) (2pts) Create a function called "shuffle_nt" that takes a single sequence as a parameter and returns a string that is a shuffled version of the sequence (i.e. the same nucleotides, but in a random order).
Step11: (F) (1pt) Run the code below to show that all of your functions work. Try to fix any that have problems.
Step12: 4. Using your functions (5pts)
Step13: (B) (1pt) Read in horrible.fasta into a dictionary. For each sequence, print its reverse complement to the screen.
Step14: (C) (3pts) Read in horrible.fasta into a dictionary. For each sequence, find the length and the gc content. Print the results to the screen in the following format
Step15: Bonus question
Step16: Extra problems (0pts)
Step17: (B) Create a function that generates a random nt sequence of a specified length with specified nt frequencies. Your function should accept as parameters
|
<ASSISTANT_TASK:>
Python Code:
def fancy_calc(a, b, c):
x1 = basic_calc(a,b)
x2 = basic_calc(b,c)
x3 = basic_calc(c,a)
z = x1 * x2 * x3
return z
def basic_calc(x, y):
result = x + y
return result
x = 1
y = 2
z = 3
result = fancy_calc(x, y, z)
print x
print z
print x1
print result
# run this first!
def getMax(someList):
someList.sort()
x = someList[-1]
return x
scores = [9, 5, 7, 1, 8]
maxScore = getMax(scores)
print maxScore
print someList
print scores
# run this first!
list1 = [1, 2, 3, 4]
list2 = list1
list2[0] = "HELLO"
print list2
print list1
# for lists
list1 = [1, 2, 3, 4]
list2 = list(list1) #make a true copy of the list
list2[0] = "HELLO"
print list2
print list1
# for dictionaries
dict1 = {'A':1, 'B':2, 'C':3}
dict2 = dict1.copy() #make a true copy of the dict
dict2['A'] = 99
print dict2
print dict1
def gc(seq):
gcCount = seq.count("C") + seq.count("G")
gcFrac = float(gcCount) / len(seq)
return round(gcFrac,2)
def reverse_compl(seq):
complements = {'A':'T', 'C':'G', 'G':'C', 'T':'A'}
compl = ""
for char in seq:
compl = complements[char] + compl
return compl
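An idiomatic Python 3 alternative (a sketch; the notebook itself targets Python 2) uses `str.maketrans` plus a reversing slice instead of building the string character by character:

```python
def reverse_compl_fast(seq):
    # translate A<->T and C<->G, then reverse with a slice
    table = str.maketrans('ACGT', 'TGCA')
    return seq.translate(table)[::-1]

print(reverse_compl_fast('GGGGTCGATGCAAATTCAAA'))  # TTTGAATTTGCATCGACCCC
```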
def read_fasta(fileName):
ins = open(fileName, 'r')
seqDict = {}
activeID = ""
for line in ins:
line = line.rstrip('\r\n')
if line[0] == ">":
activeID = line[1:]
if activeID in seqDict:
print ">>> Warning: repeat id:", activeID, "-- overwriting previous ID."
seqDict[activeID] = ""
else:
seqDict[activeID] += line
ins.close()
return seqDict
def rand_seq(length):
import random
nts = ['A','C','G','T']
seq = ""
for i in range(length):
seq += random.choice(nts)
return seq
def shuffle_nt(seq):
import random
strList = list(seq)
random.shuffle(strList)
shuffSeq = "".join(strList)
return shuffSeq
##### testing gc
gcCont = gc("ATGGGCCCAATGG")
if type(gcCont) != float:
print ">> Problem with gc: answer is not a float, it is a %s." % type(gcCont)
elif gcCont != 0.62:
print ">> Problem with gc: incorrect answer (should be 0.62; your code gave", gcCont, ")"
else:
print "gc: Passed."
##### testing reverse_compl
revCompl = reverse_compl("GGGGTCGATGCAAATTCAAA")
if type(revCompl) != str:
print ">> Problem with reverse_compl: answer is not a string, it is a %s." % type(revCompl)
elif revCompl != "TTTGAATTTGCATCGACCCC":
print ">> Problem with reverse_compl: answer (%s) does not match expected (%s)" % (revCompl, "TTTGAATTTGCATCGACCCC")
else:
print "reverse_compl: Passed."
##### testing read_fasta
try:
ins = open("horrible.fasta", 'r')
except IOError:
print ">> Can not test read_fasta because horrible.fasta is missing. Please add it to the directory with this notebook."
else:
seqDict = read_fasta("horrible.fasta")
if type(seqDict) != dict:
print ">> Problem with read_fasta: answer is not a dictionary, it is a %s." % type(seqDict)
elif len(seqDict) != 22:
print ">> Problem with read_fasta: # of keys in dictionary (%s) does not match expected (%s)" % (len(seqDict), 22)
else:
print "read_fasta: Passed."
##### testing rand_seq
randSeq1 = rand_seq(23)
randSeq2 = rand_seq(23)
if type(randSeq1) != str:
print ">> Problem with rand_seq: answer is not a string, it is a %s." % type(randSeq1)
elif len(randSeq1) != 23:
print ">> Problem with rand_seq: answer length (%s) does not match expected (%s)." % (len(randSeq1), 23)
elif randSeq1 == randSeq2:
print ">> Problem with rand_seq: generated the same sequence twice (%s) -- are you sure this is random?" % randSeq1
else:
print "rand_seq: Passed."
##### testing shuffle_nt
shuffSeq = shuffle_nt("AAAAAAGTTTCCC")
if type(shuffSeq) != str:
print ">> Problem with shuffle_nt: answer is not a string, it is a %s." % type(shuffSeq)
elif len(shuffSeq) != 13:
print ">> Problem with shuffle_nt: answer length (%s) does not match expected (%s)." % (len(shuffSeq), 13)
elif shuffSeq == "AAAAAAGTTTCCC":
print ">> Problem with shuffle_nt: answer is exactly the same as the input. Are you sure this is shuffling?"
elif shuffSeq.count('A') != 6:
print ">> Problem with shuffle_nt: answer doesn't contain the same # of each nt as the input."
else:
print "shuff_seq: Passed."
for i in range(20):
print rand_seq(50)
seqDict = read_fasta("horrible.fasta")
for seqID in seqDict:
print reverse_compl(seqDict[seqID])
seqDict = read_fasta("horrible.fasta")
print "SeqID\tLen\tGC"
for seqID in seqDict:
seq = seqDict[seqID]
seqLen = len(seq)
seqGC = gc(seq)
print seqID + "\t" + str(seqLen) + "\t" + str(seqGC)
# Method 1
# Generic kmer generation for any k and any alphabet (default is DNA nt)
# Pretty fast
def get_kmers1(k, letters=['A','C','G','T']):
kmers = []
choices = len(letters)
finalNum = choices ** k
# initialize to blank strings
for i in range(finalNum):
kmers.append("")
# imagining the kmers lined up vertically, generate one "column" at a time
for i in range(k):
consecReps = choices ** (k - (i + 1)) #number of times to consecutively repeat each letter
patternReps = choices ** i #number of times to repeat pattern of letters
# create the current column of letters
index = 0
for j in range(patternReps):
for m in range(choices):
for n in range(consecReps):
kmers[index] += letters[m]
index += 1
return kmers
get_kmers1(3)
# Method 2
# Generate numbers, discard any that aren't 1/2/3/4's, convert to letters.
# Super slow~
def get_kmers2(k):
discard = ["0", "5", "6", "7", "8", "9"]
convert = {"1": "A", "2": "T", "3": "G", "4": "C"}
min = int("1" * k)
max = int("4" * k)
kmers = []
tmp = []
for num in range(min, (max + 1)): # generate numerical kmers
good = True
for digit in str(num):
if digit in discard:
good = False
break
if good == True:
tmp.append(num)
for num in tmp: # convert numerical kmers to ATGC
result = ""
for digit in str(num):
result += convert[digit]
kmers.append(result)
return kmers
# Method 3 (by Nate)
# A recursive solution. Fast!
# (A recursive function is a function that calls itself)
def get_kmers3(k):
nt = ['A', 'T', 'G', 'C']
k_mers = []
if k == 1:
return nt
else:
for i in get_kmers3(k - 1):
for j in nt:
k_mers.append(i + j)
return k_mers
# Method 4 (by Nate)
# Fast
def get_kmers4(k):
nt = ['A', 'T', 'G', 'C']
k_mers = []
total_kmers = len(nt)**k
# make a list of size k with all zeroes.
# this keeps track of which base we need at each position
pointers = []
for p in range(k):
pointers.append(0)
for k in range(total_kmers):
# use the pointers to generate the next k-mer
k_mer = ""
for p in pointers:
k_mer += nt[p]
k_mers.append(k_mer)
# get the pointers ready for the next k-mer by updating them left to right
pointersUpdated = False
i = 0
while not pointersUpdated and i < len(pointers):
if pointers[i] < len(nt) - 1:
pointers[i] += 1
pointersUpdated = True
else:
pointers[i] = 0
i += 1
return k_mers
# Method 5 (by Justin Becker, bootcamp 2013)
# Fast!
def get_kmers5(k): #function requires int as an argument
kmers = [""]
for i in range(k): #after each loop, kmers will store the complete set of i-mers
currentNumSeqs = len(kmers)
for j in range(currentNumSeqs): #each loop takes one i-mer and converts it to 4 (i+1)=mers
currentSeq = kmers[j]
kmers.append(currentSeq + 'C')
kmers.append(currentSeq + 'T')
kmers.append(currentSeq + 'G')
kmers[j] += 'A'
return kmers
# Method 6 (by Nick)
# Convert to base-4
def get_kmers6(k):
bases = ['a', 'g', 'c', 't']
kmers = []
for i in range(4**k):
digits = to_base4(i, k)
mystr = ""
for baseidx in digits:
mystr += bases[baseidx]
kmers.append(mystr)
return kmers
# convert num to a k-digit base-4 int
def to_base4(num, k):
digits = []
while k > 0:
digits.append(num/4**(k-1))
num %= 4**(k-1)
k -= 1
return digits
# Below: more from Nate
import random
import time
alphabet = ['A', 'C', 'G', 'T']
## Modulus based
def k_mer_mod(k):
k_mers = []
for i in range(4**k):
k_mer = ''
for j in range(k):
k_mer = alphabet[(i/4**j) % 4]+ k_mer
k_mers.append(k_mer)
return k_mers
## maybe the range operator slows things down by making a big tuple
def k_mer_mod_1(k):
k_mers = []
total = 4**k
i = 0
while i < total:
k_mer = ''
for j in range(k):
k_mer = alphabet[(i/4**j) % 4]+ k_mer
k_mers.append(k_mer)
i += 1
return k_mers
## Does initializing the list of k_mers help?
def k_mer_mod_2(k):
k_mers = [''] * 4**k
for i in range(4**k):
k_mer = ''
for j in range(k):
k_mer = alphabet[(i/4**j) % 4] + k_mer
k_mers[i] = k_mer
return k_mers
## What's faster? element assignment or hashing?
def k_mer_mod_set(k):
k_mers = set()
for i in range(4**k):
k_mer = ''
for j in range(k):
k_mer = alphabet[(i/4**j) % 4] + k_mer
k_mers.add(k_mer)
return list(k_mers)
## does creating the string up front help?
#def k_mer_mod_3(k):
#    k_mers = []
# k_mer = "N" * k
# for i in range(4**k):
# for j in range(k):
# k_mer[j] = alphabet[(i/4**j) % 4]
# k_mers.append(k_mer)
# return k_mers
# Nope! Strings are immutable, dummy!
# maybe we can do something tricky with string substitution
def k_mer_mod_ssub(k):
    template = "%s" * k
    k_mers = []
    for i in range(4**k):
        k_mer = []
        for j in range(k):
            k_mer.append(alphabet[(i/4**j) % 4])
        k_mers.append(template % tuple(k_mer))  # % formatting needs a tuple, not a list
    return k_mers
# what about using a list?
def k_mer_mod_4(k):
k_mers = [''] * 4**k
k_mer = [''] * k
for i in range(4**k):
for j in range(k):
k_mer[j] = alphabet[(i/4**j) % 4]
k_mers[i] = "".join(k_mer)
return k_mers
## recursive version
def k_mer_recursive(k):
if k == 0:
return ['']
else:
k_mers = []
for k_mer in k_mer_recursive(k-1):
for n in alphabet:
k_mers.append("%s%s" % (k_mer, n))
return k_mers
## That works, but what I wanted to be like, really obnoxious about it
def k_mer_recursive_2(k):
if k == 0:
return ['']
else:
k_mers = []
[[k_mers.append("%s%s" % (k_mer, n)) for n in alphabet] for k_mer in k_mer_recursive_2(k-1)]
return k_mers
# using list instead of strings to store the k_mers
def k_mer_recursive_3(k, j = False):
if k == 0:
return [[]]
else:
k_mers = []
[[k_mers.append((k_mer + [n])) if j else k_mers.append("".join(k_mer + [n])) for n in alphabet] for k_mer in k_mer_recursive_3(k-1, True)]
return k_mers
## stochastic (I have a good feeling about this one!)
def k_mer_s(k):
s = set()
i = 0
while i < 4**k:
k_mer = ''
for j in range(k):
k_mer = k_mer + random.choice(alphabet)
if k_mer not in s:
s.add(k_mer)
i += 1
return list(s)
## I sure hope this works because now we're pretty much cheating
import array
def k_mer_mod_array(k):
k_mers = []
k_mer = array.array('c', ['N'] * k)
for i in range(4**k):
for j in range(k):
k_mer[j] = alphabet[(i/4**j) % 4]
k_mers.append("".join(k_mer))
return k_mers
## That could have gone better.
def nt_counts(seq):
counts = {}
for nt in seq:
if nt not in counts:
counts[nt] = 1
else:
counts[nt] += 1
return counts
nt_counts("AAAAATTTTTTTGGGGC")
def generate_nucleotide(length, freqs):
import random
seq = ""
samplingStr = ""
# maybe not the best way to do this, but fun:
# create a list with the indicated freq of nt
for nt in freqs:
occurPer1000 = int(1000*freqs[nt])
samplingStr += nt*occurPer1000
samplingList = list(samplingStr)
# sample from the list
for i in range(length):
newChar = random.choice(samplingList)
seq += newChar
return seq
generate_nucleotide(100, {'A':0.60, 'G':0.10, 'C':0.25, 'T':0.05})
# let's check if it's really working
n = 10000
testSeq = generate_nucleotide(n, {'A':0.60, 'G':0.10, 'C':0.25, 'T':0.05})
obsCounts = nt_counts(testSeq)
for nt in obsCounts:
print nt, float(obsCounts[nt]) / n
<END_TASK>
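As a cross-check for the k-mer generators benchmarked above — not one of the original methods, just a sketch — the standard library's `itertools.product` enumerates all k-length strings over an alphabet directly:

```python
# Sketch: k-mer generation via the standard library (cross-check only).
import itertools

def get_kmers_itertools(k, letters="ACGT"):
    # itertools.product yields all k-length tuples over `letters`,
    # in lexicographic order of the given alphabet.
    return ["".join(p) for p in itertools.product(letters, repeat=k)]

kmers = get_kmers_itertools(3)
print(len(kmers))  # 64 distinct 3-mers, starting with 'AAA'
```

Because `product` walks the alphabet in order, the output matches the "column-wise" construction of `get_kmers1` when given the same letters.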
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this discussion, we're going to check numerically some of the results from yesterday's lecture. Specifically, we're going to see how bias and variance change as we modify model complexity.
Step2: We're going to compare a number of models we might use to fit this data, using polynomial regression. Specifically, we're going to compare models with polynomial terms varying from 1 to 20. The cell below implements a polynomial regression of degree $p$, given data $x$ and $y$.
Step3: Question 1
Step4: Now let's consider the three components that make up our squared error
Step5: Question 4
Step6: Question 5
Step7: Since your variance term will depend heavily on the specific $\theta$-values of each individual regression (which are in turn dependent on the specific values of $\epsilon$ for each simulated dataset), we will average together the variance terms of a number of regressions. The function avg_var_term is provided below. You should find that avg_var_term(10) $\approx 0.27$, though this will be less precise than for the bias term.
Step8: Question 6
Step9: Question 7
Step10: Question 8
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import random
import matplotlib.pyplot as plt
%matplotlib inline
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 12, 10
np.random.seed(100)
x = np.array([i*np.pi/180 for i in range(0,360,18)])
h = lambda x: np.sin(x) + np.cos(x) + np.sin(2*x) + np.cos(2*x)
y = h(x) + np.random.normal(0,0.7,len(x))
plt.plot(x, y, '.')
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
def poly_regression(x, y, p):
reg = LinearRegression(normalize=True)
poly = PolynomialFeatures(degree=p)
poly_basis = poly.fit_transform(x.reshape(-1,1))
reg.fit(poly_basis, y)
f_theta = reg.predict(poly_basis)
return f_theta
def smooth_curve(x, y, p):
# computes predictions for all more values, to get a smoother curve
# you will not need to use this function
reg = LinearRegression(normalize=True)
poly = PolynomialFeatures(degree=p)
poly_basis = poly.fit_transform(x.reshape(-1,1))
reg.fit(poly_basis, y)
x_extended = np.array([i*np.pi/180 for i in range(0,360,6)])
poly_extended = PolynomialFeatures(degree=p)
poly_basis_extended = poly_extended.fit_transform(x_extended.reshape(-1,1))
f_theta = reg.predict(poly_basis_extended)
return f_theta
curve = np.array([i*np.pi/180 for i in range(0,360,6)])
plots = [(1,231),(3,232),(5,233),(10,234),(15,235),(30,236)]
for p, subplot in plots:
plt.subplot(subplot)
plt.plot(x, y, ".")
plt.plot(curve, smooth_curve(x, y, p))
plt.title('Plot for p={}'.format(p))
plt.ylim((-3,4))
def E_f_theta_x(p, n_sims=1000, sigma=0.7):
f_thetas = np.zeros(20)
for sim in range(n_sims):
y = ...
f_thetas = f_thetas + poly_regression(x, y, p)
E_f_thetas = ...
return E_f_thetas
E_f_theta_x(3)
def bias_term(p): # really squared bias
bias = ...
return bias
bias_term(3)
def variance_term(p):
y = ...
f_theta_x = poly_regression(x, y, p)
variance = ...
return variance
def avg_var_term(p, n_sims=100):
variance = 0
E_f_theta = E_f_theta_x(p, sigma=0.7)
for sim in range(n_sims):
y = h(x) + np.random.normal(0, 0.7, len(x))
f_theta_x = poly_regression(x, y, p)
variance += np.mean((E_f_theta - f_theta_x)**2)
return variance/n_sims
avg_var_term(10)
varis = np.array([avg_var_term(i) for i in range(1, 20)])
bias = np.array([bias_term(i) for i in range(1, 20)])
plt.plot(np.arange(1, 20), varis, color='b')
plt.plot(np.arange(1, 20), bias, color='r')
plt.title('Bias^2 and Variance vs. p')
def test_error(p):
y_train = h(x) + np.random.normal(0,0.7,len(x))
y_test = h(x) + np.random.normal(0,0.7,len(x))
pred_values = poly_regression(x, y_train, p)
mse = ...
return mse
def avg_test_error(p, n_sims=100):
return np.mean([test_error(p) for sim in range(n_sims)])
noise = 0.49
plt.plot(range(1, 20), [avg_test_error(i) for i in range(1, 20)], color='b')
plt.plot(range(1, 20), noise + bias + varis, color='g')
plt.title("Test error and calculated (noise + bias^2 + variance) vs. p")
def train_error(p):
y_train = h(x) + np.random.normal(0,0.7,len(x))
pred_values = poly_regression(x, y_train, p)
mse = ...
return mse
def avg_train_error(p, n_sims=100):
return np.mean([train_error(p) for sim in range(n_sims)])
plt.plot(range(1, 20), [avg_test_error(i) for i in range(1, 20)], color='b')
plt.plot(range(1, 20), [avg_train_error(i) for i in range(1, 20)], color='g')
plt.title("Training vs. Test Error")
<END_TASK>
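The three quantities computed in this exercise are the terms of the standard squared-error decomposition (with noise $\sigma^2 = 0.7^2 = 0.49$, matching the `noise = 0.49` constant in the code):

```latex
\mathbb{E}\big[(y - \hat{f}(x))^2\big]
  = \underbrace{\sigma^2}_{\text{noise}}
  + \underbrace{\big(h(x) - \mathbb{E}[\hat{f}(x)]\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\big[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2\big]}_{\text{variance}}
```

Here $h$ is the true function and $\hat{f}$ the fitted polynomial, so `bias_term(p)` and `avg_var_term(p)` estimate the second and third terms for each degree $p$.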
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the SNIFS-R channel optical configuration
Step2: Load the simulation configuration
Step3: Create a Spectrograph instance from optical configuration
Step4: Test the optical model
Step5: Simulate spectra on the detector and plot
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import mpld3
mpld3.enable_notebook()
import warnings
warnings.filterwarnings("ignore")
import numpy as N
from spectrogrism import spectrogrism as S
from spectrogrism import snifs
optcfg = snifs.SNIFS_R
print(optcfg)
simcfg = snifs.SNIFS_SIMU
print(simcfg)
spectro = S.Spectrograph(optcfg)
print(spectro)
if not spectro.test(simcfg.get_wavelengths(optcfg), verbose=False):
print "ERROR: backward modeling does not match."
else:
print " SUCCESSFUL ROUND-TRIP TEST ".center(70, '-')
detector = spectro.predict_positions(simcfg)
ax = detector.plot(modes=(-1, 0, 1, 2), blaze=True)
ax.set_aspect('auto')
ax.axis(N.array([-2000, 2000, -4000, 4000]) *
spectro.detector.pxsize / 1e-3) # [mm]
ax.figure.set_size_inches(12, 10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Split test and training data
Step2: 3.2 Training a Random Forest Classifier for baseline
Step3: 3.3 Training a KNeighborsClassifier Classifier with default settings
Step4: 3.4 GridSearchCV
|
<ASSISTANT_TASK:>
Python Code:
from scipy.io import loadmat
mnist = loadmat('./datasets/mnist-original.mat')
mnist
X, y = mnist['data'], mnist['label']
X = X.T
X.shape
y = y.T
y.shape
type(y)
%matplotlib inline
import numpy as np  # needed below for np.random.permutation
import matplotlib
import matplotlib.pyplot as plt
X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]
len(X_train)
shuffle_index = np.random.permutation(len(X_train))
X_train, y_train = X_train[shuffle_index], y_train[shuffle_index]
from sklearn.ensemble import RandomForestClassifier
forest_clf = RandomForestClassifier(random_state=42)
forest_clf.fit(X_train, y_train)
forest_pred = forest_clf.predict(X_test)
forest_pred = forest_pred.reshape(10000,1)
accuracy = (forest_pred == y_test).sum() / len(y_test)
print(accuracy)
from sklearn.neighbors import KNeighborsClassifier
knn_clf = KNeighborsClassifier(n_jobs=-1)
knn_clf.fit(X_train, y_train)
knn_clf.predict([X_test[0]])
# for i in range(1000):
# knn_clf.predict([X_test[i]])
knn_pred = knn_clf.predict(X_test)
knn_pred = knn_pred.reshape(10000, 1)
accuracy = (knn_pred == y_test).sum() / len(y_test)
print(accuracy)
from sklearn.model_selection import GridSearchCV
param_grid = [
{'n_jobs': [-1], 'n_neighbors': [3, 5, 11, 19], 'weights': ['uniform', 'distance']}
]
grid_search = GridSearchCV(knn_clf, param_grid, cv=3, scoring='accuracy', n_jobs=-1)
grid_search.fit(X_train, y_train)
<END_TASK>
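After `fit`, a `GridSearchCV` object exposes the winning settings via `best_params_` and their mean cross-validated score via `best_score_`. A self-contained sketch on small synthetic data (the array names below are illustrative, not from the notebook):

```python
# Sketch: inspecting a finished grid search on tiny synthetic data.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.RandomState(0)
X_small = rng.rand(60, 4)        # 60 samples, 4 features
y_small = rng.randint(0, 2, 60)  # binary labels

grid = GridSearchCV(KNeighborsClassifier(),
                    {'n_neighbors': [3, 5]}, cv=3, scoring='accuracy')
grid.fit(X_small, y_small)
print(grid.best_params_)  # e.g. {'n_neighbors': 3} or {'n_neighbors': 5}
print(grid.best_score_)   # mean cross-validated accuracy of that setting
```

The same attributes are available on the MNIST search above once `grid_search.fit` finishes.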
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 2: Write a program that reads two integers m and n (with n not equal to 0) from the keyboard and asks the user what they want: if they ask for a sum, compute and print the sum from m to n; if a product, compute and print the product from m to n; if a remainder, compute and print the remainder of m divided by n; otherwise, compute and print the integer quotient of m divided by n.
Step2: Exercise 3: Write a program that gives protective advice for a given Beijing smog PM2.5 reading. For example, when the PM2.5 value exceeds 500, one should turn on an air purifier, wear an anti-smog mask, and so on.
Step3: Exercise 4: Singular-to-plural for English words: given an English verb (singular form) as input, produce its plural form or give advice on pluralizing it (hint: some_string.endswith(some_letter) tests what a string ends with).
Step4: Warm-up exercise: Write a program that displays blank lines on the screen.
Step5: Challenge exercise: Write a program that reads several integers from the user and outputs the second-largest value among them.
|
<ASSISTANT_TASK:>
Python Code:
name=input('请输入你的姓名,回车结束:')
print(name,'你好!')
month=int(input('请输入你的出生月份,回车结束:'))
date=int(input('请输入你的出生日期,回车结束:'))
print('你的生日是:',month,'月',date,'日')
if month == 3:
if date >= 21:
print(name,',你是白羊座。')
else:
print(name,',你是双鱼座。')
if month == 4:
if date >= 20:
print(name,',你是金牛座。')
else:
print(name,',你是白羊座。')
if month == 5:
if date >= 21:
print(name,',你是双子座。')
else:
print(name,',你是金牛座。')
if month == 6:
if date >= 22:
print(name,',你是巨蟹座。')
else:
print(name,',你是双子座。')
if month == 7:
if date >= 23:
print(name,',你是狮子座。')
else:
print(name,',你是巨蟹座。')
if month == 8:
if date >= 23:
print(name,',你是处女座。')
else:
print(name,',你是狮子座。')
if month == 9:
if date >= 23:
print(name,',你是天秤座。')
else:
print(name,',你是处女座。')
if month == 10:
if date >= 24:
print(name,',你是天蝎座。')
else:
print(name,',你是天秤座。')
if month == 11:
if date >= 23:
print(name,',你是射手座。')
else:
print(name,',你是天蝎座。')
if month == 12:
if date >= 22:
print(name,',你是摩羯座。')
else:
print(name,',你是射手座。')
if month == 1:
if date >= 20:
print(name,',你是水瓶座。')
else:
print(name,',你是摩羯座。')
if month == 2:
if date >= 19:
print(name,',你是双鱼座。')
else:
print(name,',你是水瓶座。')
m=int(input('请输入一个整数,回车结束:'))
n=int(input('请输入一个非0整数,回车结束:'))
ask=input('你想求的结果是(e.g.和、积、余数):')
if ask == '和':
    if m < n:
        total = m
        while m < n:
            m = m + 1
            total = total + m
    elif m > n:
        total = n
        while m > n:
            n = n + 1
            total = total + n
    else:
        total = m  # m == n 时,和就是该数本身
    print('结果是:', total)
elif ask == '积':
    if m < n:
        multi = m
        while m < n:
            m = m + 1
            multi = multi * m
    elif m > n:
        multi = n
        while m > n:
            n = n + 1
            multi = multi * n
    else:
        multi = m  # m == n 时,积就是该数本身
    print('结果是:', multi)
elif ask== '余数':
print('结果是:', m%n)
else:
print('结果是:', m//n)
ask=int(input('现在北京的pm2.5指数是多少:'))
if ask >= 500:
    print('请不要外出并打开空气净化器')
elif ask >= 250:
    print('外出请戴口罩;室内开启空气净化器')
elif ask >= 100:
    print('请减少室外活动')
else:
    print('空气质量优良')
s=input('请输入一个英文单词,回车结束')
if s.endswith('o'):
print(s,'变复数加es')
elif s.endswith('ch') or s.endswith('sh'):
print(s,'变复数加es')
else:
print(s,'变复数加s')
print('我是空行')
print('我是空行')
print('我是空行')
print()
print('我是空行')
print('我是空行')
print('我是空行')
print('我是空行')
max_number = int(input('请输入一个整数,回车结束'))
submax = float('-inf')  # 初始化为负无穷,避免所有输入为负数时结果错误
i = 0
while i < 4:
i += 1
n = int(input('请输入一个整数,回车结束'))
if n > max_number:
submax = max_number
max_number = n
elif n < max_number and n > submax:
submax = n
print('次大值是:', submax)
<END_TASK>
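A quick way to sanity-check the loop above is to compare it against a sorting-based one-liner (a sketch using a hypothetical hard-coded list in place of `input()`):

```python
# Sketch: second-largest among distinct values, via sorting.
nums = [3, 9, 1, 9, 5]           # hypothetical user inputs
second = sorted(set(nums))[-2]   # drop duplicates, sort ascending, take 2nd from top
print(second)                    # 5
```

Note that `set()` collapses repeated maximums into one value; drop the `set()` call if ties should count separately.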
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get a list of supported projection names (no, there seems to be no single ready-made list)
Step2: Create sliders without continuous update, since creating a map can take a few seconds (this effect shows only when replacing the @interact_manual decorator below with @interact)
Step3: This function does the real work. Notice that some projections will create warnings or even errors when they need additional parameters!
|
<ASSISTANT_TASK:>
Python Code:
# Make plots appear inline (inside the Jupyter notebook).
%matplotlib inline
import datetime
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap, supported_projections
from ipywidgets import interact, interact_manual, FloatSlider
lines = supported_projections.strip().split('\n')
proj_names = [line.strip().split()[0] for line in lines]
print(sorted(proj_names))
lat_slider = FloatSlider(min=-90, max=90, step=0.1, continuous_update=False)
lon_slider = FloatSlider(min=-180, max=180, step=0.1, continuous_update=False)
hour_slider = FloatSlider(min=-12, max=12, step=1/60, continuous_update=False)
@interact_manual(lat_0=lat_slider, lon_0=lon_slider,
delta_hours=hour_slider,
projection=proj_names, title='Sample Title')
def show_map(lat_0=0, lon_0=0, delta_hours=0, projection='mill', title=''):
"Show a world map."
# Resolutions: c (crude), l (low), i (intermediate), h (high), f (full) or None.
map = Basemap(projection=projection, lat_0=lat_0, lon_0=lon_0, resolution='c')
# Plot coastlines, draw label meridians and parallels.
map.drawcoastlines() # linewidth=0.5, linestyle='solid', color='k', antialiased=1, ax=None, zorder=None)
# Plot countries.
map.drawcountries() # linewidth=0.5, linestyle='solid', color='k', antialiased=1, ax=None, zorder=None)
# Plot parallels and meridians.
map.drawparallels(np.arange(-90, 90, 30), labels=[1, 0, 0, 0])
map.drawmeridians(np.arange(map.lonmin, map.lonmax + 30, 60), labels=[0, 0, 0, 1])
# Fill continents 'coral' (with zorder=0), color wet areas 'aqua'
map.drawmapboundary(fill_color='aqua')
map.fillcontinents(color='coral', lake_color='aqua')
# Shade the night areas, with alpha transparency so the
# map shows through. Use current time in UTC + delta.
date = datetime.datetime.utcnow().timestamp() + delta_hours * 3600
date = datetime.datetime.fromtimestamp(date)
map.nightshade(date, alpha=0.35)
plt.title('%s %s (UTC)' % (title, date.isoformat()[:19]))
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Decision tree
Step2: To prevent overfitting, we usually need to set a maximum depth for the tree. A common choice is 6 layers
Step3: As all sklearn classes, DecisionTreeClassifier can be trained with .fit()
Step4: We can test our tree on the dev set with the .score()function, which outputs the accuracy.
Step5: 72.9% accuracy is not bad for a simple tree, especially one that was much faster to train than a neural net. However, it lags behind. This is where a truly powerful idea comes into play. What if we combined many weak classifiers into a strong one?
Step6: As you can see, many models do better than one. We can take this concept even further with a few more tricks that keep the decision trees we train from becoming too similar.
Step7: The random forest outperforms the decision tree and the simple bagging algorithm. It also performs slightly better than the best neural net from Ch. 10. Their good performance on structured data and their ease of training make random forests still a popular choice for working with structured data.
Step8: XGBoost achieves an accuracy of over 74.2%, which is better than the best neural network we came up with. It also trained a lot faster than a neural net. This is why it is still very popular for structured data. However, it reaches its limits on unstructured data like text, images or sound.
Step9: We can then fit all of our base classifiers to the base training set
Step10: For good measure we can also throw in our neural net from chapter 11
Step11: Now we create an input data set for the meta classifier by letting our models make predictions on the meta training set. Note that we must reshape all of our predictions to make sure they have the same shape when we feed them into the meta classifier
Step12: We can then train a meta classifier, let's make it another xgboost here
Step13: To make predictions we will define a new method
Step14: In lack of a .score() method we can measure the accuracy with sklearns accuracy score function
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
# Set seed for reproducability
np.random.seed(42)
import matplotlib.pyplot as plt
# Supress warnings for better readability
import warnings; warnings.simplefilter('ignore')
# Load data
df = pd.read_csv('processed_bank.csv',index_col=0)
# Check that data is okay
df.head()
# Process data into train / dev / test
# X is everything that is not y
X = df.loc[:, df.columns != 'y'].values
# y is y
y = df['y'].values
# First split in train / test_dev
from sklearn.model_selection import train_test_split
X_train, X_test_dev, y_train, y_test_dev = train_test_split(X, y, test_size=0.25, random_state=0)
# Second split in dev / test
X_dev, X_test, y_dev, y_test = train_test_split(X_test_dev, y_test_dev, test_size=0.5, random_state=0)
# Remove test_dev set from memory
del X_test_dev
del y_test_dev
# Import the corresponding class
from sklearn.tree import DecisionTreeClassifier
tree_classifier = DecisionTreeClassifier(max_depth=6)
tree_classifier.fit(X=X_train,y=y_train); # ; suppresses the output of the cell for cleaner reading
tree_classifier.score(X_dev,y_dev)
# Import class
from sklearn.ensemble import BaggingClassifier
# Setup, needs the classifier we want to use and the number of classifiers
bagger = BaggingClassifier(base_estimator=tree_classifier, n_estimators=50)
# Fit will train the specified number of classifiers
bagger.fit(X_train,y_train);
bagger.score(X_dev,y_dev)
from sklearn.ensemble import RandomForestClassifier
randomforest = RandomForestClassifier(max_depth=6,n_estimators=1000)
randomforest.fit(X_train,y_train);
randomforest.score(X_dev,y_dev)
# Get XGBoost and import the classifier
import xgboost as xgb
from xgboost import XGBClassifier
# Parameteris:
# Learning rate = alpha
# max_depth maximum depth of each tree
xgclassifier = XGBClassifier(learning_rate=0.1,max_depth=3)
# Train classifier
xgclassifier.fit(X_train,y_train);
# Scoring works exactly as with sklearn
xgclassifier.score(X_dev,y_dev)
# Split train set into meta and base training sets
X_base, X_meta, y_base, y_meta = train_test_split(X_train, y_train, test_size=0.25, random_state=0)
tree_classifier.fit(X_base,y_base);
bagger.fit(X_base,y_base);
randomforest.fit(X_base,y_base);
xgclassifier.fit(X_base,y_base);
import keras
from keras.models import load_model
neural_net = load_model('./support_files/Ch11_model.h5')
# Get prediction from sigle tree classifier
treepred = tree_classifier.predict(X_meta).reshape(X_meta.shape[0],1)
# Get prediction from bagged tree classifier
baggerpred = bagger.predict(X_meta).reshape(X_meta.shape[0],1)
# Get prediction from random forrest
forestpred = randomforest.predict(X_meta).reshape(X_meta.shape[0],1)
# Get prediction from XGBoost
xgpred = xgclassifier.predict(X_meta).reshape(X_meta.shape[0],1)
# Get prediction from neural net
nnpred = neural_net.predict(X_meta).reshape(X_meta.shape[0],1)
# Combine predictions into meta features
meta_features = np.stack((treepred,baggerpred,forestpred,xgpred,nnpred),axis=1).reshape(X_meta.shape[0],5)
# Train the meta classifier
meta = XGBClassifier()
meta.fit(meta_features,y_meta);
def make_predictions(X):
# Get meta predictions
treepred = tree_classifier.predict(X).reshape(X.shape[0],1)
baggerpred = bagger.predict(X).reshape(X.shape[0],1)
forestpred = randomforest.predict(X).reshape(X.shape[0],1)
xgpred = xgclassifier.predict(X).reshape(X.shape[0],1)
nnpred = neural_net.predict(X).reshape(X.shape[0],1)
# Combine predictions
meta_features = np.stack((treepred,baggerpred,forestpred,xgpred,nnpred),axis=1).reshape(X.shape[0],5)
# Make meta predictions
meta_pred = meta.predict(meta_features)
return meta_pred
from sklearn.metrics import accuracy_score
# Make predictions
predictions = make_predictions(X_dev)
# Turn predictions into definit predictions
predictions[predictions >= 0.5] = 1
predictions[predictions < 0.5] = 0
# Measure accuracy
accuracy_score(y_dev,predictions)
<END_TASK>
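For reference, the manual base/meta split above is the same pattern that scikit-learn (version 0.22 and later) ships as `StackingClassifier`, which handles the split internally via cross-validation. A sketch on toy data (the toy arrays stand in for the bank dataset):

```python
# Sketch: built-in stacking in scikit-learn >= 0.22, on hypothetical toy data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X_toy = rng.rand(80, 5)
y_toy = (X_toy[:, 0] > 0.5).astype(int)

stack = StackingClassifier(
    estimators=[('tree', DecisionTreeClassifier(max_depth=3)),
                ('forest', RandomForestClassifier(n_estimators=10, random_state=0))],
    final_estimator=LogisticRegression(),
    cv=3)  # internal cross-validation replaces the manual base/meta split
stack.fit(X_toy, y_toy)
print(stack.score(X_toy, y_toy))  # accuracy on the toy data
```

The `cv` argument plays the role of our `X_base`/`X_meta` split, so no data is wasted on a dedicated meta training set.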
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First reload the data we generated in 1_notmnist.ipynb.
Step2: Reformat into a shape that's more adapted to the models we're going to train
Step3: We're first going to train a multinomial logistic regression using simple gradient descent.
Step4: Let's run this computation and iterate
Step5: Let's now switch to stochastic gradient descent training instead, which is much faster.
Step6: Let's run it
Step7: Problem
Step8: Now we actually run the training
|
<ASSISTANT_TASK:>
Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000
graph = tf.Graph()
with graph.as_default():
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
tf_train_labels = tf.constant(train_labels[:train_subset])
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
# These are the parameters that we are going to be training. The weight
# matrix will be initialized using random valued following a (truncated)
# normal distribution. The biases get initialized to zero.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
# the softmax and cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
num_steps = 801
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tf.initialize_all_variables().run()
print('Initialized')
for step in range(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
_, l, predictions = session.run([optimizer, loss, train_prediction])
if (step % 100 == 0):
print('Loss at step %d: %f' % (step, l))
print('Training accuracy: %.1f%%' % accuracy(
predictions, train_labels[:train_subset, :]))
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print('Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
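The wrap-around offset used to pick minibatches above can be isolated as a small helper. This is an illustrative sketch of the same arithmetic, not part of the original notebook:

```python
def minibatch_offset(step, batch_size, num_examples):
    # Reproduces the wrap-around indexing from the SGD loop: successive
    # steps walk through the data and wrap before running past the end,
    # so every slice [offset:offset+batch_size] is a full batch.
    return (step * batch_size) % (num_examples - batch_size)
```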
batch_size = 128
num_hidden_nodes = 1024
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables, input layer
w1 = tf.Variable(
tf.truncated_normal([image_size * image_size, num_hidden_nodes]))
b1 = tf.Variable(tf.zeros([num_hidden_nodes]))
# Variables, output layer
w2 = tf.Variable(tf.truncated_normal([num_hidden_nodes, num_labels]))
b2 = tf.Variable(tf.zeros([num_labels]))
# Forward propagation: one hidden ReLU layer, then a linear output layer.
# To get the prediction, apply softmax to the output of this.
def forward_prop(dataset, w1, b1, w2, b2):
    hidden = tf.nn.relu(tf.matmul(dataset, w1) + b1)
    return tf.matmul(hidden, w2) + b2
train_output = forward_prop(tf_train_dataset, w1, b1, w2, b2)
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(train_output, tf_train_labels))
tf.scalar_summary('loss', loss)
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(train_output)
valid_prediction = tf.nn.softmax(forward_prop(tf_valid_dataset, w1, b1, w2, b2))
test_prediction = tf.nn.softmax(forward_prop(tf_test_dataset, w1, b1, w2, b2))
num_steps = 3001
with tf.Session(graph=graph) as session:
merged = tf.merge_all_summaries()
train_writer = tf.train.SummaryWriter('logs', graph)
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
summary, _, l, predictions = session.run(
[merged, optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 10 == 0):
train_writer.add_summary(summary, step)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2- Apply the least-squares method step by step to obtain the best linear fit of the data above, and determine the uncertainties associated with the parameters. Determine the correlation coefficients $ \chi ^{2} $ and $ R ^{2} $. Report the parameters with their uncertainty and draw conclusions about the suitability of the linear regression from the correlations obtained.
Step2: b) $$a_{1}= \frac{n\sum x_{i}y_{i} - (\sum x_{i})(\sum y_{i})}{n\sum x_{i}^{2} - (\sum x_{i})^{2}}$$
Step3: c) $$y= a_{0}+a_{1}x$$
Step4: $$S_{y} = \sqrt{\frac{1}{n-2}\sum_{i=1}^{n}(y_{i}-a_{0}-a_{1}x_{i})^{2}}$$
Step5: $$a_{0}\pm S_{ma0} $$ $$ a_{1}\pm S_{ma1} $$
Step6: 3- Plot all possible linear-regression curves taking into account the error determined for the parameters. Draw conclusions.
Step7: $ \chi ^{2} $
Step8: B- Curve fitting.
Step9: Quadratic
|
<ASSISTANT_TASK:>
Python Code:
########################################################
## Librerias para el trabajo
########################################################
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
data1 = np.loadtxt('datos.csv', delimiter=',')  # data for the linear regression
X1=data1[:,0]
Y1=data1[:,1]
print
print 'Preliminary plot of the points:'
fig=plt.figure()
ax=fig.add_subplot(111)
ax.plot(X1,Y1,'o')
ax.set_xlim(xmin=0.0, xmax=8)
ax.set_ylim(ymin=0.0, ymax=8)
plt.show()
# n:
n = len(X1)
# sum of xi squared:
suma_xi2 = 0
for i in xrange(0, n):
    suma_xi2 += (X1[i]*X1[i])
# sum of yi squared:
suma_yi2 = 0
for i in xrange(0, n):
    suma_yi2 += (Y1[i]*Y1[i])
# sum of xi:
suma_xi = 0
for i in xrange(0, n):
    suma_xi += (X1[i])
# sum of yi:
suma_yi = 0
for i in xrange(0, n):
    suma_yi += (Y1[i])
# sum of xi*yi:
suma_xiyi = 0
for i in xrange(0, n):
    suma_xiyi += (X1[i]*Y1[i])
a0=((suma_xi2*suma_yi)-(suma_xi*suma_xiyi))/(n*suma_xi2-(suma_xi*suma_xi))
print 'a0 = %.1f'%a0
a1=((n*suma_xiyi)-(suma_xi*suma_yi))/(n*suma_xi2-(suma_xi*suma_xi))
print 'a1 = %.1f'%a1
x=np.linspace(X1[0],X1[-1],n)
y=(a0 +a1*x)
print
print 'Plot of the points with the fit:'
fig=plt.figure()
ax=fig.add_subplot(111)
ax.plot(X1,Y1,'ro')
ax.plot(x,y,'b-')
ax.set_xlim(xmin=0.0, xmax=8)
ax.set_ylim(ymin=0.0, ymax=8)
plt.show()
# standard deviation
Sy = 0
for i in xrange(0, len(y)):
    Sy += (y[i] - a0 - a1*x[i])**2
Sy *= (1/(len(y) - 2.))
Sy = Sy**0.5
print 'Sy %.1f' % Sy
# error in y
raiz = np.sqrt(n)
Smy = Sy/raiz
print 'Smy %.1f' % Smy
# error in a0
S2_ma0 = (Smy*Smy*suma_xi2)/(n*suma_xi2 - (suma_xi*suma_xi))
print 'S2_ma0 %f' % S2_ma0
# error in a1
S2_ma1 = (Smy*Smy*n)/(n*suma_xi2 - (suma_xi*suma_xi))
print 'S2_ma1 %f' % S2_ma1
print 'a0 ± sma0: %f ± %f' % (a0, np.sqrt(S2_ma0))
print 'a1 ± sma1: %f ± %f' % (a1, np.sqrt(S2_ma1))
print ('y = (%.2f±%.2f)x + (%.2f±%.2f)' % (a1, np.sqrt(S2_ma1), a0, np.sqrt(S2_ma0)))
err_a0= np.sqrt(S2_ma0)
err_a1= np.sqrt(S2_ma1)
y=(a0 +a1*x)
y1=((a0+err_a0) +(a1+err_a1)*x)
y2=((a0-err_a0) +(a1-err_a1)*x)
y3=((a0+err_a0) +(a1-err_a1)*x)
y4=((a0-err_a0) +(a1+err_a1)*x)
print
print 'Plot of the points with the fits and errors:'
print 'It is easy to see that all the possible curves obtained by varying the errors lie very close to the "perfect" curve, which means the fit is quite good and the data carry very little error.'
fig=plt.figure()
ax=fig.add_subplot(111)
ax.plot(X1,Y1,'ro')
ax.plot(x,y,'b-')
ax.plot(x,y1,'-*')
ax.plot(x,y2,'-*')
ax.plot(x,y3,'--')
ax.plot(x,y4,'--')
ax.set_xlim(xmin=0.0, xmax=8)
ax.set_ylim(ymin=0.0, ymax=8)
plt.show()
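As a sanity check, the hand-computed coefficients can be compared against NumPy's built-in least-squares fit. This is an illustrative sketch (note that `np.polyfit` returns the slope first, then the intercept):

```python
import numpy as np

def least_squares_check(x, y):
    # np.polyfit(x, y, 1) solves the same normal equations as the
    # manual sums above; it returns (a1, a0) for y = a0 + a1*x.
    a1, a0 = np.polyfit(x, y, 1)
    return a0, a1
```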
# chi2
chi2 = 0
for i in xrange(0, n):
    chi2 += ((y[i] - Y1[i])**2)/Y1[i]
print 'chi^2 = ', chi2
# r2
b = a1
bprima = ((n*suma_xiyi) - (suma_xi*suma_yi))/(n*suma_yi2 - (suma_yi*suma_yi))
r2 = b*bprima
print 'r^2 = ',r2
X2=[1.0,2.0,3.0,4.0]
Y2=[2.1,4.3,6.0,7.8]
print
print 'Preliminary plot of the points:'
fig=plt.figure()
ax=fig.add_subplot(111)
ax.plot(X2,Y2,'o')
ax.set_xlim(xmin=0.0, xmax=10)
ax.set_ylim(ymin=0.0, ymax=10)
plt.show()
# n:
n2 = len(X2)
# sum of xi squared:
suma_xi2 = 0
for i in xrange(0, n2):
    suma_xi2 += (X2[i]*X2[i])
# sum of yi squared:
suma_yi2 = 0
for i in xrange(0, n2):
    suma_yi2 += (Y2[i]*Y2[i])
# sum of xi:
suma_xi = 0
for i in xrange(0, n2):
    suma_xi += (X2[i])
# sum of yi:
suma_yi = 0
for i in xrange(0, n2):
    suma_yi += (Y2[i])
# sum of xi*yi:
suma_xiyi = 0
for i in xrange(0, n2):
    suma_xiyi += (X2[i]*Y2[i])
a0=((suma_xi2*suma_yi)-(suma_xi*suma_xiyi))/(n2*suma_xi2-(suma_xi*suma_xi))
print 'a0 = %.1f'%a0
a1=((n2*suma_xiyi)-(suma_xi*suma_yi))/(n2*suma_xi2-(suma_xi*suma_xi))
print 'a1 = %.1f'%a1
x=np.linspace(X2[0],X2[-1],n2)
y=(a0 +a1*x)
print
print 'Plot of the points with the fit:'
fig=plt.figure()
ax=fig.add_subplot(111)
ax.plot(X2,Y2,'ro')
ax.plot(x,y,'b-')
ax.set_xlim(xmin=0.0, xmax=8)
ax.set_ylim(ymin=0.0, ymax=8)
plt.show()
# standard deviation
Sy = 0
for i in xrange(0, n2):
    Sy += (y[i] - a0 - a1*x[i])**2
Sy *= (1/(n2 - 2.))
Sy = Sy**0.5
print 'Sy %.1f' % Sy
# error in y
raiz = np.sqrt(n2)
Smy = Sy/raiz
print 'Smy %.1f' % Smy
# error in a0
S2_ma0 = (Smy*Smy*suma_xi2)/(n2*suma_xi2 - (suma_xi*suma_xi))
print 'S2_ma0 %f' % S2_ma0
# error in a1
S2_ma1 = (Smy*Smy*n2)/(n2*suma_xi2 - (suma_xi*suma_xi))
print 'S2_ma1 %f' % S2_ma1
print 'a0 ± sma0: %f ± %f' % (a0, np.sqrt(S2_ma0))
print 'a1 ± sma1: %f ± %f' % (a1, np.sqrt(S2_ma1))
print ('y = (%.2f±%.2f)x + (%.2f±%.2f)' % (a1, np.sqrt(S2_ma1), a0, np.sqrt(S2_ma0)))
# chi2
chi2 = 0
for i in xrange(0, n2):
    chi2 += ((y[i] - Y2[i])**2)/Y2[i]
print 'chi^2 = ', chi2
# r2
b = a1
bprima = ((n2*suma_xiyi) - (suma_xi*suma_yi))/(n2*suma_yi2 - (suma_yi*suma_yi))
r2 = b*bprima
print 'r^2 = ',r2
logX2=np.log(X2)
logY2=np.log(Y2)
print
print 'Preliminary plot of the points:'
fig=plt.figure()
ax=fig.add_subplot(111)
ax.plot(logX2,logY2,'o')
ax.set_xlim(xmin=0.0, xmax=2)
ax.set_ylim(ymin=0.0, ymax=4)
plt.show()
# n:
n2 = len(logX2)
# sum of xi squared:
suma_xi2 = 0
for i in xrange(0, n2):
    suma_xi2 += (logX2[i]*logX2[i])
# sum of yi squared:
suma_yi2 = 0
for i in xrange(0, n2):
    suma_yi2 += (logY2[i]*logY2[i])
# sum of xi:
suma_xi = 0
for i in xrange(0, n2):
    suma_xi += (logX2[i])
# sum of yi:
suma_yi = 0
for i in xrange(0, n2):
    suma_yi += (logY2[i])
# sum of xi*yi:
suma_xiyi = 0
for i in xrange(0, n2):
    suma_xiyi += (logX2[i]*logY2[i])
a0=((suma_xi2*suma_yi)-(suma_xi*suma_xiyi))/(n2*suma_xi2-(suma_xi*suma_xi))
print 'a0 = %.1f'%a0
a1=((n2*suma_xiyi)-(suma_xi*suma_yi))/(n2*suma_xi2-(suma_xi*suma_xi))
print 'a1 = %.1f'%a1
# standard deviation (residuals of the log-log fit)
Sy = 0
for i in xrange(0, n2):
    Sy += (logY2[i] - a0 - a1*logX2[i])**2
Sy *= (1/(n2 - 2.))
Sy = Sy**0.5
print 'Sy %.1f' % Sy
# error in y
raiz = np.sqrt(n2)
Smy = Sy/raiz
print 'Smy %.1f' % Smy
# error in a0
S2_ma0 = (Smy*Smy*suma_xi2)/(n2*suma_xi2 - (suma_xi*suma_xi))
print 'S2_ma0 %f' % S2_ma0
# error in a1
S2_ma1 = (Smy*Smy*n2)/(n2*suma_xi2 - (suma_xi*suma_xi))
print 'S2_ma1 %f' % S2_ma1
print 'a0 ± sma0: %f ± %f' % (a0, np.sqrt(S2_ma0))
print 'a1 ± sma1: %f ± %f' % (a1, np.sqrt(S2_ma1))
print ('y = (%.2f±%.2f)x + (%.2f±%.2f)' % (a1, np.sqrt(S2_ma1), a0, np.sqrt(S2_ma0)))
x=np.linspace(logX2[0],logX2[-1],n2)
y=(a0 +a1*x)
print
print 'Plot of the points with the fit:'
fig=plt.figure()
ax=fig.add_subplot(111)
ax.plot(logX2,logY2,'ro')
ax.plot(x,y,'b-')
ax.set_xlim(xmin=0.0, xmax=2)
ax.set_ylim(ymin=0.0, ymax=5)
# chi2 (on the log-transformed data, which is what was fitted)
chi2 = 0
for i in xrange(0, n2):
    chi2 += ((y[i] - logY2[i])**2)/logY2[i]
print 'chi^2 = ', chi2
# r2
b = a1
bprima = ((n2*suma_xiyi) - (suma_xi*suma_yi))/(n2*suma_yi2 - (suma_yi*suma_yi))
r2 = b*bprima
print 'r^2 = ',r2
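Because the fit was performed on log-transformed data (natural logs via `np.log`), the parameters map back to a power law $y = A x^{b}$ with $A = e^{a_0}$ and $b = a_1$. A small sketch of the back-transform:

```python
import numpy as np

def power_law_params(a0, a1):
    # log(y) = a0 + a1*log(x)  <=>  y = exp(a0) * x**a1
    return np.exp(a0), a1
```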
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import and clean data
Step2: Define a function to extract the variable values from each netCDF4 file. Variables are flattened from a 3 dimensional array to 1 dimensional version, pooling all values both spatially and temporily.
Step3: Execute the above function on all netCDF4 file paths.
Step4: Turn this into a dataframe for the analysis.
Step5: Check that we've built it correctly.
Step6: Export this to disk to be used by the analysis notebook - used gzip compression to save on space. Beware, because of there are approximation 10 million rows of data, this may take some time.
|
<ASSISTANT_TASK:>
Python Code:
# data munging and analytical libraries
import re
import os
import numpy as np
import pandas as pd
from netCDF4 import Dataset
# graphical libraries
import matplotlib.pyplot as plt
%matplotlib inline
# set paths
outPath = "../data/globfire.csv"
driver_paths = [os.path.join(dp, f) for (dp, _, fn) in os.walk("../data/raw/") for f in fn if f.endswith('.nc')]
driver_names = [re.search('^[a-zA-Z_]*', os.path.basename(fp)).group(0) for fp in driver_paths]
file_table = pd.DataFrame({'filepath': driver_paths, 'file_name': driver_names})
file_table
def nc_extract(fpath):
print("Processing: {0}".format(fpath))
with Dataset(fpath, 'r') as nc_file:
gdata = np.array(nc_file.variables['variable'][:,:,:])
gflat = gdata.flatten()
if type(gdata) == np.ma.core.MaskedArray:
return gflat[~gflat.mask].data
else:
return gflat.data
values = [nc_extract(dp) for dp in driver_paths]
# turn list into a dataframe
fire_df = pd.DataFrame(np.array(values).T, columns=driver_names)
# replace null flags with pandas null
fire_df.replace(-3.4e38, np.nan, inplace=True)
# drop all null rows (are ocean and not needed in optim)
fire_df.dropna(inplace=True)
fire_df.head()
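The replace/dropna cleaning above can be exercised on a tiny synthetic frame. This is an illustrative sketch (the -3.4e38 sentinel is the no-data/ocean flag used by these files):

```python
import numpy as np
import pandas as pd

def clean_grid_values(values, null_flag=-3.4e38):
    # Mirrors the notebook's cleaning: replace the sentinel with NaN,
    # then drop rows containing any NaN (ocean cells).
    df = pd.DataFrame(values)
    df.replace(null_flag, np.nan, inplace=True)
    return df.dropna()
```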
savepath = os.path.expanduser(outPath)
# gzip compression, as noted in the description above
fire_df.to_csv(savepath, index=False, compression='gzip')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: General physics variables and equations
Step2: Each of the above can also be imported one-by-one, using its Name, e.g.
Step3: Variables and equations related to plant leaves
Step4: Equations for leaf energy and water balance
Step5: Variables for leaf radiative balance
Step6: Variables for leaf chamber model
Step7: Leaf chamber mass balance
|
<ASSISTANT_TASK:>
Python Code:
# Checking for essm version installed
import pkg_resources
pkg_resources.get_distribution("essm").version
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:150% !important; }</style>"))
from IPython.display import display
from sympy import init_printing, latex
init_printing()
from sympy.printing import StrPrinter
StrPrinter._print_Quantity = lambda self, expr: str(expr.abbrev) # displays short units (m instead of meter)
import scipy as sc
# Import various functions from sympy
from sympy import Derivative, Eq, exp, log, solve, Symbol
from essm.variables.utils import generate_metadata_table, ListTable
import essm.variables.physics.thermodynamics as physics_vars
vars = ['physics_vars.' + name for name in physics_vars.__all__]
generate_metadata_table([eval(name) for name in vars])
import essm.equations.physics.thermodynamics as physics_eqs
modstr = 'physics_eqs.'
eqs = [name for name in physics_eqs.__all__]
table = ListTable()
#table.append(('Name', 'Description', 'Equation'))
for name in eqs:
table.append((name, eval(modstr+name).__doc__, latex('$'+latex(eval(modstr+name))+'$')))
table
import essm.variables.leaf.energy_water as leaf_energy
vars = ['leaf_energy.' + name for name in leaf_energy.__all__]
generate_metadata_table([eval(name) for name in vars])
import essm.equations.leaf.energy_water as leaf_energy_eqs  # distinct alias to avoid shadowing the variables module
modstr = 'leaf_energy_eqs.'
eqs = [name for name in leaf_energy_eqs.__all__]
table = ListTable()
#table.append(('Name', 'Description', 'Equation'))
for name in eqs:
table.append((name, eval(modstr+name).__doc__, latex('$'+latex(eval(modstr+name))+'$')))
table
import essm.variables.leaf.radiation as leaf_radiation
vars = ['leaf_radiation.' + name for name in leaf_radiation.__all__]
generate_metadata_table([eval(name) for name in vars])
import essm.variables.chamber.insulation as chamber_ins
vars = ['chamber_ins.' + name for name in chamber_ins.__all__]
generate_metadata_table([eval(name) for name in vars])
import essm.variables.chamber.mass as chamber_mass
vars = ['chamber_mass.' + name for name in chamber_mass.__all__]
generate_metadata_table([eval(name) for name in vars])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Table of contents
Step2: Reading the binary file
Step3: Getting dataset information
Step4: Iterating over all markers
Step5: <a id="iterating_over_all_nuc"></a>
Step6: Iterating over selected markers
Step7: <a id="iterating_over_selected_nuc"></a>
Step8: Extracting a single marker
Step9: <a id="extracting_nuc"></a>
Step10: Misc example
Step11: Counting the allele frequency of markers
Step12: Writing binary pedfile
Step13: The newly created binary files are compatible with Plink.
Step14: INDIVIDUAL-major format
|
<ASSISTANT_TASK:>
Python Code:
from pyplink import PyPlink
import zipfile
try:
from urllib.request import urlretrieve
except ImportError:
from urllib import urlretrieve
# Downloading the demo data from Plink webset
urlretrieve(
"http://pngu.mgh.harvard.edu/~purcell/plink/dist/hapmap_r23a.zip",
"hapmap_r23a.zip",
)
# Extracting the archive content
with zipfile.ZipFile("hapmap_r23a.zip", "r") as z:
z.extractall(".")
pedfile = PyPlink("hapmap_r23a")
print("{:,d} samples and {:,d} markers".format(
pedfile.get_nb_samples(),
pedfile.get_nb_markers(),
))
all_samples = pedfile.get_fam()
all_samples.head()
all_markers = pedfile.get_bim()
all_markers.head()
for marker_id, genotypes in pedfile:
print(marker_id)
print(genotypes)
break
for marker_id, genotypes in pedfile.iter_geno():
print(marker_id)
print(genotypes)
break
for marker_id, genotypes in pedfile.iter_acgt_geno():
print(marker_id)
print(genotypes)
break
markers = ["rs7092431", "rs9943770", "rs1587483"]
for marker_id, genotypes in pedfile.iter_geno_marker(markers):
print(marker_id)
print(genotypes, end="\n\n")
markers = ["rs7092431", "rs9943770", "rs1587483"]
for marker_id, genotypes in pedfile.iter_acgt_geno_marker(markers):
print(marker_id)
print(genotypes, end="\n\n")
pedfile.get_geno_marker("rs7619974")
pedfile.get_acgt_geno_marker("rs7619974")
# Getting the Y markers
y_markers = all_markers[all_markers.chrom == 24].index.values
# Getting the males
males = all_samples.gender == 1
# Cycling through the Y markers
for marker_id, genotypes in pedfile.iter_geno_marker(y_markers):
male_genotypes = genotypes[males.values]
print("{:,d} total genotypes".format(len(genotypes)))
print("{:,d} genotypes for {:,d} males ({} on chr{} and position {:,d})".format(
len(male_genotypes),
males.sum(),
marker_id,
all_markers.loc[marker_id, "chrom"],
all_markers.loc[marker_id, "pos"],
))
break
# Getting the founders
founders = (all_samples.father == "0") & (all_samples.mother == "0")
# Computing the MAF
markers = ["rs7619974", "rs2949048", "rs16941434"]
for marker_id, genotypes in pedfile.iter_geno_marker(markers):
valid_genotypes = genotypes[founders.values & (genotypes != -1)]
maf = valid_genotypes.sum() / (len(valid_genotypes) * 2)
print(marker_id, round(maf, 6), sep="\t")
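The per-marker frequency computation can also be written as a single vectorized helper. This is a sketch assuming the additive 0/1/2 genotype coding with -1 for missing, as PyPlink returns:

```python
import numpy as np

def allele_frequency(genotypes):
    # Frequency of the counted (A1) allele among valid genotypes:
    # each diploid sample contributes two alleles.
    g = np.asarray(genotypes)
    valid = g[g != -1]
    return valid.sum() / (2.0 * len(valid))
```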
# The genotypes for 3 markers and 10 samples
all_genotypes = [
[0, 0, 0, 1, 0, 0, -1, 2, 1, 0],
[0, 0, 1, 1, 0, 0, 0, 1, 2, 0],
[0, 0, 0, 0, 1, 1, 0, 0, 0, 1],
]
# Writing the BED file using PyPlink
with PyPlink("test_output", "w") as pedfile:
for genotypes in all_genotypes:
pedfile.write_genotypes(genotypes)
# Writing a dummy FAM file
with open("test_output.fam", "w") as fam_file:
for i in range(10):
print("family_{}".format(i+1), "sample_{}".format(i+1), "0", "0", "0", "-9",
sep=" ", file=fam_file)
# Writing a dummy BIM file
with open("test_output.bim", "w") as bim_file:
for i in range(3):
print("1", "marker_{}".format(i+1), "0", i+1, "A", "T",
sep="\t", file=bim_file)
# Checking the content of the newly created binary files
pedfile = PyPlink("test_output")
pedfile.get_fam()
pedfile.get_bim()
for marker, genotypes in pedfile:
print(marker, genotypes)
from subprocess import Popen, PIPE
# Computing frequencies
proc = Popen(["plink", "--noweb", "--bfile", "test_output", "--freq"],
stdout=PIPE, stderr=PIPE)
outs, errs = proc.communicate()
print(outs.decode(), end="")
with open("plink.frq", "r") as i_file:
print(i_file.read(), end="")
# The genotypes for 3 markers and 10 samples (INDIVIDUAL-major)
all_genotypes = [
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 1, 0],
[ 1, 1, 0],
[ 0, 0, 1],
[ 0, 0, 1],
[-1, 0, 0],
[ 2, 1, 0],
[ 1, 2, 0],
[ 0, 0, 1],
]
# Writing the BED file using PyPlink
with PyPlink("test_output_2", "w", bed_format="INDIVIDUAL-major") as pedfile:
for genotypes in all_genotypes:
pedfile.write_genotypes(genotypes)
# Writing a dummy FAM file
with open("test_output_2.fam", "w") as fam_file:
for i in range(10):
print("family_{}".format(i+1), "sample_{}".format(i+1), "0", "0", "0", "-9",
sep=" ", file=fam_file)
# Writing a dummy BIM file
with open("test_output_2.bim", "w") as bim_file:
for i in range(3):
print("1", "marker_{}".format(i+1), "0", i+1, "A", "T",
sep="\t", file=bim_file)
from subprocess import Popen, PIPE
# Computing frequencies
proc = Popen(["plink", "--noweb", "--bfile", "test_output_2", "--freq", "--out", "plink_2"],
stdout=PIPE, stderr=PIPE)
outs, errs = proc.communicate()
print(outs.decode(), end="")
with open("plink_2.frq", "r") as i_file:
print(i_file.read(), end="")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the scatter plot above, we can see three separate groups of data points and we would like to recover them using clustering -- think of "discovering" the class labels that we already take for granted in a classification task.
Step2: We can get the cluster labels either by calling fit and then accessing the
Step3: Let's visualize the assignments that have been found
Step4: Compared to the true labels
Step5: Here, we are probably satisfied with the clustering results. But in general we might want to have a more quantitative evaluation. How about comparing our cluster labels with the ground truth we got when generating the blobs?
Step6: Exercise
Step7: One of the "short-comings" of K-means is that we have to specify the number of clusters, which we often don't know apriori. For example, let's have a look what happens if we set the number of clusters to 2 in our synthetic 3-blob dataset
Step8: The Elbow Method
Step9: Then, we pick the value that resembles the "pit of an elbow." As we can see, this would be k=3 in this case, which makes sense given our visual expection of the dataset previously.
Step10: Some Notable Clustering Routines
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.datasets import make_blobs
X, y = make_blobs(random_state=42)
X.shape
plt.scatter(X[:, 0], X[:, 1]);
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=3, random_state=42)
labels = kmeans.fit_predict(X)
labels
all(y == labels)
plt.scatter(X[:, 0], X[:, 1], c=labels);
plt.scatter(X[:, 0], X[:, 1], c=y);
from sklearn.metrics import confusion_matrix, accuracy_score
print('Accuracy score:', accuracy_score(y, labels))
print(confusion_matrix(y, labels))
np.mean(y == labels)
from sklearn.metrics import adjusted_rand_score
adjusted_rand_score(y, labels)
kmeans = KMeans(n_clusters=2, random_state=42)
labels = kmeans.fit_predict(X)
plt.scatter(X[:, 0], X[:, 1], c=labels);
distortions = []
for i in range(1, 11):
km = KMeans(n_clusters=i,
random_state=0)
km.fit(X)
distortions.append(km.inertia_)
plt.plot(range(1, 11), distortions, marker='o')
plt.xlabel('Number of clusters')
plt.ylabel('Distortion')
plt.show()
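Besides the elbow heuristic, the silhouette score gives a label-free quantitative criterion for choosing k. A sketch on the same kind of blob data (the score is only defined for k ≥ 2, and higher is better):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X_demo, _ = make_blobs(random_state=42)
scores = {}
for k in (2, 3, 4):
    labels_k = KMeans(n_clusters=k, random_state=0, n_init=10).fit_predict(X_demo)
    scores[k] = silhouette_score(X_demo, labels_k)
# For three well-separated blobs, k=3 should score highest.
best_k = max(scores, key=scores.get)
```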
from sklearn.datasets import make_blobs
X, y = make_blobs(random_state=170, n_samples=600)
rng = np.random.RandomState(74)
transformation = rng.normal(size=(2, 2))
X = np.dot(X, transformation)
y_pred = KMeans(n_clusters=3).fit_predict(X)
plt.scatter(X[:, 0], X[:, 1], c=y_pred)
from sklearn.datasets import load_digits
digits = load_digits()
# ...
#%load solutions/08B_digits_clustering.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then, split the labeled data into development set and evaluation set and convert them into feature vectors
Step2: Compute accuracy of X (Decision Tree) on J
|
<ASSISTANT_TASK:>
Python Code:
# Import py_entitymatching package
import py_entitymatching as em
import os
import pandas as pd
# Set the seed value
seed = 0
# Get the datasets directory
datasets_dir = em.get_install_path() + os.sep + 'datasets'
path_A = datasets_dir + os.sep + 'dblp_demo.csv'
path_B = datasets_dir + os.sep + 'acm_demo.csv'
path_labeled_data = datasets_dir + os.sep + 'labeled_data_demo.csv'
A = em.read_csv_metadata(path_A, key='id')
B = em.read_csv_metadata(path_B, key='id')
# Load the pre-labeled data
S = em.read_csv_metadata(path_labeled_data,
key='_id',
ltable=A, rtable=B,
fk_ltable='ltable_id', fk_rtable='rtable_id')
# Split S into I an J
IJ = em.split_train_test(S, train_proportion=0.5, random_state=0)
I = IJ['train']
J = IJ['test']
# Generate a set of features
F = em.get_features_for_matching(A, B, validate_inferred_attr_types=False)
# Convert I into feature vectors using updated F
H = em.extract_feature_vecs(I,
feature_table=F,
attrs_after='label',
show_progress=False)
# Instantiate the matcher to evaluate.
dt = em.DTMatcher(name='DecisionTree', random_state=0)
# Train using feature vectors from I
dt.fit(table=H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
target_attr='label')
# Convert J into a set of feature vectors using F
L = em.extract_feature_vecs(J, feature_table=F,
attrs_after='label', show_progress=False)
# Predict on L
predictions = dt.predict(table=L, exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
append=True, target_attr='predicted', inplace=False, return_probs=True,
probs_attr='proba')
predictions[['_id', 'ltable_id', 'rtable_id', 'predicted', 'proba']].head()
# Evaluate the predictions
eval_result = em.eval_matches(predictions, 'label', 'predicted')
em.print_eval_summary(eval_result)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load software and filenames definitions
Step2: Data folder
Step3: List of data files
Step4: Data load
Step5: Load the leakage coefficient from disk
Step6: Load the direct excitation coefficient ($d_{exAA}$) from disk
Step7: Update d with the correction coefficients
Step8: Laser alternation selection
Step9: We need to define some parameters
Step10: We should check if everithing is OK with an alternation histogram
Step11: If the plot looks good we can apply the parameters with
Step12: Measurements infos
Step13: Or check the measurements duration
Step14: Compute background
Step15: Burst search and selection
Step16: Donor Leakage fit
Step17: Burst sizes
Step18: Fret fit
Step19: Weighted mean of $E$ of each burst
Step20: Gaussian fit (no weights)
Step21: Gaussian fit (using burst size as weights)
Step22: Stoichiometry fit
Step23: The Maximum likelihood fit for a Gaussian population is the mean
Step24: Computing the weighted mean and weighted standard deviation we get
Step25: Save data to file
Step26: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
Step27: This is just a trick to format the different variables
|
<ASSISTANT_TASK:>
Python Code:
ph_sel_name = "None"
data_id = "22d"
# data_id = "7d"
from fretbursts import *
init_notebook()
from IPython.display import display
data_dir = './data/singlespot/'
import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)
## Selection for POLIMI 2012-11-26 datatset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
data_id
d = loader.photon_hdf5(filename=files_dict[data_id])
leakage_coeff_fname = 'results/usALEX - leakage coefficient DexDem.csv'
leakage = np.loadtxt(leakage_coeff_fname)
print('Leakage coefficient:', leakage)
dir_ex_coeff_fname = 'results/usALEX - direct excitation coefficient dir_ex_aa.csv'
dir_ex_aa = np.loadtxt(dir_ex_coeff_fname)
print('Direct excitation coefficient (dir_ex_aa):', dir_ex_aa)
d.leakage = leakage
d.dir_ex = dir_ex_aa
d.ph_times_t, d.det_t
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)
plot_alternation_hist(d)
loader.alex_apply_period(d)
d
d.time_max
d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa
d.burst_search(L=10, m=10, F=7, ph_sel=Ph_sel('all'))
print(d.ph_sel)
dplot(d, hist_fret);
# if data_id in ['7d', '27d']:
# ds = d.select_bursts(select_bursts.size, th1=20)
# else:
# ds = d.select_bursts(select_bursts.size, th1=30)
ds = d.select_bursts(select_bursts.size, add_naa=False, th1=30)
n_bursts_all = ds.num_bursts[0]
def select_and_plot_ES(fret_sel, do_sel):
ds_fret= ds.select_bursts(select_bursts.ES, **fret_sel)
ds_do = ds.select_bursts(select_bursts.ES, **do_sel)
bpl.plot_ES_selection(ax, **fret_sel)
bpl.plot_ES_selection(ax, **do_sel)
return ds_fret, ds_do
ax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1)
if data_id == '7d':
fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False)
do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '12d':
fret_sel = dict(E1=0.30,E2=1.2,S1=0.131,S2=0.9, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '17d':
fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '22d':
fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '27d':
fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
n_bursts_do = ds_do.num_bursts[0]
n_bursts_fret = ds_fret.num_bursts[0]
n_bursts_do, n_bursts_fret
d_only_frac = 1.*n_bursts_do/(n_bursts_do + n_bursts_fret)
print('D-only fraction:', d_only_frac)
dplot(ds_fret, hist2d_alex, scatter_alpha=0.1);
dplot(ds_do, hist2d_alex, S_max_norm=2, scatter=False);
bandwidth = 0.03
E_range_do = (-0.1, 0.15)
E_ax = np.r_[-0.2:0.401:0.0002]
E_pr_do_kde = bext.fit_bursts_kde_peak(ds_do, bandwidth=bandwidth, weights='size',
x_range=E_range_do, x_ax=E_ax, save_fitter=True)
mfit.plot_mfit(ds_do.E_fitter, plot_kde=True, bins=np.r_[E_ax.min(): E_ax.max(): bandwidth])
plt.xlim(-0.3, 0.5)
print("%s: E_peak = %.2f%%" % (ds.ph_sel, E_pr_do_kde*100))
nt_th1 = 50
dplot(ds_fret, hist_size, which='all', add_naa=False)
xlim(-0, 250)
plt.axvline(nt_th1)
Th_nt = np.arange(35, 120)
nt_th = np.zeros(Th_nt.size)
for i, th in enumerate(Th_nt):
ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th)
nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th
plt.figure()
plot(Th_nt, nt_th)
plt.axvline(nt_th1)
nt_mean = nt_th[np.where(Th_nt == nt_th1)][0]
nt_mean
E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size')
E_fitter = ds_fret.E_fitter
E_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
E_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(E_fitter, ax=ax[0])
mfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, E_pr_fret_kde*100))
display(E_fitter.params*100)
ds_fret.fit_E_m(weights='size')
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None)
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size')
E_kde_w = E_fitter.kde_max_pos[0]
E_gauss_w = E_fitter.params.loc[0, 'center']
E_gauss_w_sig = E_fitter.params.loc[0, 'sigma']
E_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0]))
E_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err
S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True)
S_fitter = ds_fret.S_fitter
S_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
S_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(S_fitter, ax=ax[0])
mfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100))
display(S_fitter.params*100)
S_kde = S_fitter.kde_max_pos[0]
S_gauss = S_fitter.params.loc[0, 'center']
S_gauss_sig = S_fitter.params.loc[0, 'sigma']
S_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0]))
S_kde, S_gauss, S_gauss_sig, S_gauss_err
S = ds_fret.S[0]
S_ml_fit = (S.mean(), S.std())
S_ml_fit
weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.)
S_mean = np.dot(weights, S)/weights.sum()
S_std_dev = np.sqrt(
np.dot(weights, (S - S_mean)**2)/weights.sum())
S_wmean_fit = [S_mean, S_std_dev]
S_wmean_fit
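# Sanity check (illustrative only, not part of the original analysis): the
# manual weighted mean above is equivalent to numpy's built-in weighted average.
import numpy as np
vals_demo = np.array([0.2, 0.5, 0.8])
w_demo = np.array([1.0, 2.0, 1.0])
mean_demo = np.dot(w_demo, vals_demo) / w_demo.sum()
assert np.isclose(mean_demo, np.average(vals_demo, weights=w_demo))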
sample = data_id
variables = ('sample n_bursts_all n_bursts_do n_bursts_fret '
'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err S_kde S_gauss S_gauss_sig S_gauss_err '
'E_pr_do_kde nt_mean\n')
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
**{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n'
data_str = var_fmt.format(**var_dict)
print(variables_csv)
print(data_str)
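# Mini-example (illustrative) of the per-variable format-string pattern above:
# build one '{name:fmt}' placeholder per variable, then fill it from a dict.
fmt_demo = ', '.join('{%s:.2f}' % name for name in ['a', 'b'])
line_demo = fmt_demo.format(**{'a': 1.0, 'b': 2.5})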
# NOTE: The file name should be the notebook name but with .csv extension
with open('results/usALEX-5samples-PR-leakage-dir-ex-all-ph.csv', 'a') as f:
f.seek(0, 2)
if f.tell() == 0:
f.write(variables_csv)
f.write(data_str)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Part 1
Step3: Next, let's demonstrate the different sorts of grids we get with different numbers of layers. We'll look at grids with between 3 and 1023 nodes.
Step4: We can see the number of nodes grows exponentially with the number of layers.
Step6: 1b
Step8: 1c
Step9: Part 2
Step10: We can see that the majority of the time it took to create the grid, parcels, and the NST component occurs in xarray functions. This is good, as those are likely already efficient.
Step12: We can see that the majority of the time it takes to run the component happens in a function called _partition_active_and_storage_layers. This function is used to figure out which parcels are moving and which are not active.
Step13: Next, we use or new time_code function with a few different grid sizes, a few different parcels per link, and for 10 seconds. Feel free to experiment and change these values. Some of these values have been reduced to ensure that this notebook always works in the Landlab continuous integration.
Step14: We make a dataframe and investigate the contents with df.head. We'll use some shorthand for the column and axis names
Step15: 3c
|
<ASSISTANT_TASK:>
Python Code:
import cProfile
import io
import pstats
import time
import warnings
from pstats import SortKey
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import xarray as xr
from landlab.components import FlowDirectorSteepest, NetworkSedimentTransporter
from landlab.data_record import DataRecord
from landlab.grid.network import NetworkModelGrid
from landlab.plot import graph
warnings.filterwarnings("ignore")
def create_node_xy_and_links(n_layers, x0=0.0, y0=0.0, xperc=0.9, dy=1.0):
    """Create node and link structure of a branching binary tree.

    The tree can have an arbitrary number of layers. For example,
    a tree with one layer has three nodes and two links:
    ::

        *   *
         \ /
          *

    The lowest of the nodes is the "origin" node, and has coordinates
    of `(x0, y0)`. The y spacing between layers is given by `dy`. Finally,
    in order to ensure that links do not cross and nodes are not co-located,
    a shrinking factor, `xperc`, that must be less than 1.0 is specified.

    Each layer has 2^{layer} nodes: layer 0 has 1 node, layer 1 has 2 nodes,
    layer 2 has 4 nodes.

    A tree with three layers has seven nodes and six links:
    ::

        *   *   *   *
         \ /     \ /
          *       *
           \     /
            \   /
              *

    Parameters
    ----------
    n_layers : int
        Number of layers of the binary tree.
    x0 : float
        x coordinate position of the origin node. Default of 0.
    y0 : float
        y coordinate position of the origin node. Default of 0.
    xperc : float
        x direction shrinkage factor to prevent co-location of nodes
        and crossing links. Must be between 0.0 and 1.0 noninclusive.
        Default of 0.9.
    dy : float
        y direction spacing between layers. Default of 1.

    Returns
    -------
    x_of_node : list
        Node x coordinates.
    y_of_node : list
        Node y coordinates.
    nodes_at_link : list of (head, tail) tuples
        Nodes at link tail and head.
    """
assert xperc < 1.0
assert xperc > 0.0
nodes_per_layer = np.power(2, np.arange(n_layers + 1))
nnodes = np.sum(nodes_per_layer)
x_of_node = [x0]
y_of_node = [y0]
nodes_at_link = []
id_start_layer = 0
for nl in np.arange(1, n_layers + 1):
nodes_last_layer = np.power(2, nl - 1)
nodes_this_layer = np.power(2, nl)
dx = xperc * (dy) * (0.5 ** (nl - 1))
for ni in range(nodes_last_layer):
head_id = id_start_layer + ni
tail_id = len(x_of_node)
x = x_of_node[head_id]
y = y_of_node[head_id]
x_of_node.extend([x - dx, x + dx])
y_of_node.extend([y + dy, y + dy])
nodes_at_link.extend([(head_id, tail_id), (head_id, tail_id + 1)])
id_start_layer = len(x_of_node) - nodes_this_layer
return x_of_node, y_of_node, nodes_at_link
example_layers = [1, 3, 5, 7, 9]
nodes = []
for i, n_layers in enumerate(example_layers):
x_of_node, y_of_node, nodes_at_link = create_node_xy_and_links(n_layers)
grid = NetworkModelGrid((y_of_node, x_of_node), nodes_at_link)
graph.plot_graph(grid, at="node,link", with_id=False)
nodes.append(grid.number_of_nodes)
plt.plot(example_layers, nodes)
plt.xlabel("Number of Layers")
plt.ylabel("Number of Nodes")
plt.show()
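# Cross-check (illustrative): a full binary tree with n layers has
# sum_{k=0}^{n} 2**k = 2**(n + 1) - 1 nodes, hence the exponential growth above.
for n_demo in (1, 3, 5, 7, 9):
    assert sum(2**k for k in range(n_demo + 1)) == 2**(n_demo + 1) - 1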
def create_nmg_and_fd(n_layers):
    """Create a generic NetworkModelGrid and FlowDirectorSteepest.

    This function will also add the following fields to the NetworkModelGrid:

    - topographic__elevation at node
    - bedrock__elevation at node
    - flow_depth at link
    - reach_length at link
    - channel_width at link

    Parameters
    ----------
    n_layers : int
        Number of layers of the binary tree.

    Returns
    -------
    grid : NetworkModelGrid instance
    fd : FlowDirectorSteepest instance
    """
x_of_node, y_of_node, nodes_at_link = create_node_xy_and_links(n_layers)
grid = NetworkModelGrid((y_of_node, x_of_node), nodes_at_link)
_ = grid.add_field("topographic__elevation", grid.y_of_node.copy(), at="node")
_ = grid.add_field("bedrock__elevation", grid.y_of_node.copy(), at="node")
_ = grid.add_field(
"flow_depth", 2.5 * np.ones(grid.number_of_links), at="link"
) # m
_ = grid.add_field(
"reach_length", 200.0 * np.ones(grid.number_of_links), at="link"
) # m
_ = grid.add_field(
"channel_width", 1.0 * np.ones(grid.number_of_links), at="link"
) # m
fd = FlowDirectorSteepest(grid)
fd.run_one_step()
return grid, fd
def create_parcels(grid, parcels_per_link=5):
    """Create a generic set of parcels.

    The NetworkSedimentTransporter requires a set of parcels with some
    specific attributes (e.g., density) that are used in order to
    calculate travel distances. This function creates the parcels in
    the correct format and populates all necessary attributes. Specifically
    it creates the following attributes:

    - "abrasion_rate"
    - "density"
    - "time_arrival_in_link"
    - "active_layer"
    - "location_in_link"
    - "D"
    - "volume"

    Parameters
    ----------
    grid : NetworkModelGrid
    parcels_per_link : int
        Number of parcels to create for each link. Default of 5.

    Returns
    -------
    parcels : DataRecord
    """
# element_id is the link on which the parcel begins.
element_id = np.repeat(np.arange(grid.number_of_links), parcels_per_link)
element_id = np.expand_dims(element_id, axis=1)
# scale volume with parcels per link so we end up with a similar quantity of sediment.
volume = (1.0 / parcels_per_link) * np.ones(np.shape(element_id)) # (m3)
active_layer = np.zeros(np.shape(element_id)) # 1= active, 0 = inactive
density = 2650 * np.ones(np.size(element_id)) # (kg/m3)
abrasion_rate = 0.0 * np.ones(np.size(element_id)) # (mass loss /m)
# Lognormal GSD
medianD = 0.085 # m
mu = np.log(medianD)
sigma = np.log(2) # assume that D84 = sigma*D50
np.random.seed(0)
D = np.random.lognormal(
mu, sigma, np.shape(element_id)
) # (m) the diameter of grains in each parcel
time_arrival_in_link = np.random.rand(np.size(element_id), 1)
location_in_link = np.random.rand(np.size(element_id), 1)
variables = {
"abrasion_rate": (["item_id"], abrasion_rate),
"density": (["item_id"], density),
"time_arrival_in_link": (["item_id", "time"], time_arrival_in_link),
"active_layer": (["item_id", "time"], active_layer),
"location_in_link": (["item_id", "time"], location_in_link),
"D": (["item_id", "time"], D),
"volume": (["item_id", "time"], volume),
}
items = {"grid_element": "link", "element_id": element_id}
parcels = DataRecord(
grid,
items=items,
time=[0.0],
data_vars=variables,
dummy_elements={"link": [NetworkSedimentTransporter.OUT_OF_NETWORK]},
)
return parcels
# feel free to change these parameters and see
# how it impacts the results
nlayer = 5
timesteps = 50
parcels_per_link = 50
# calculate dt and set seed.
dt = 60 * 60 * 24 * 12 # length of timestep (seconds)
np.random.seed(1234)
pr = cProfile.Profile()
pr.enable()
grid, fd = create_nmg_and_fd(nlayer)
parcels = create_parcels(grid, parcels_per_link=parcels_per_link)
nst = NetworkSedimentTransporter(
grid,
parcels,
fd,
bed_porosity=0.3,
g=9.81,
fluid_density=1000,
transport_method="WilcockCrowe",
)
pr.disable()
s = io.StringIO()
sortby = SortKey.CUMULATIVE
ps = pstats.Stats(pr, stream=s).sort_stats(sortby)
ps.print_stats()
print(s.getvalue())
pr = cProfile.Profile()
pr.enable()
for t in range(0, (timesteps * dt), dt):
nst.run_one_step(dt)
pr.disable()
s = io.StringIO()
sortby = SortKey.CUMULATIVE
ps = pstats.Stats(pr, stream=s).sort_stats(sortby)
ps.print_stats()
print(s.getvalue())
def time_code(nlayer=3, parcels_per_link=100, timesteps=10):
    """Time the initialization and run time.

    Parameters
    ----------
    nlayer : int
        Number of layers of the binary tree used to create the
        NetworkModelGrid. Default of 3.
    parcels_per_link : int
        Number of parcels to create for each link. Default of 100.
    timesteps : int
        Number of timesteps. Default of 10.

    Returns
    -------
    (number_of_nodes, parcels_per_link) : tuple
        The key inputs in our scaling analysis: the number of nodes
        and the number of parcels per link.
    init_duration : float
        Duration of the initialization step, in seconds.
    r1s_per : float
        Duration of the average run_one_step call, in seconds.
    """
init_start = time.time()
grid, fd = create_nmg_and_fd(nlayer)
parcels = create_parcels(grid, parcels_per_link=parcels_per_link)
dt = 60 * 60 * 24 * 12 # length of timestep (seconds)
nst = NetworkSedimentTransporter(
grid,
parcels,
fd,
bed_porosity=0.3,
g=9.81,
fluid_density=1000,
transport_method="WilcockCrowe",
)
init_duration = time.time() - init_start
if timesteps > 0:
r1s_start = time.time()
for t in range(timesteps):
nst.run_one_step(dt)
r1s_per = (time.time() - r1s_start) / timesteps
else:
r1s_per = 0.0
return (grid.number_of_nodes, parcels_per_link), init_duration, r1s_per
np.random.seed(345)
out = []
# this range for i in reduced for testing.
for i in range(2, 5):
for j in [10, 20, 50, 100, 200, 500]:
print(i, j)
(nn, ppl), init, r1s_per = time_code(nlayer=i, parcels_per_link=j, timesteps=10)
out.append((nn, ppl, init, r1s_per))
df = pd.DataFrame(out, columns=["nnodes", "ppl", "init", "r1s_per"])
df = df.pivot(index="nnodes", columns="ppl", values=["init", "r1s_per"])
df.head()
fig, ax = plt.subplots(1, 2, sharex=True, sharey=True, dpi=300)
df["init"].plot(loglog=True, ax=ax[0], title="init duration")
ax[0].set_ylabel("duration (s)")
ax[0].set_xlabel("Number of Nodes")
df["r1s_per"].plot(loglog=True, ax=ax[1], title="run one step duration")
ax[1].set_ylabel("duration (s)")
ax[1].set_xlabel("Number of Nodes")
# plt.savefig("scaling1.png")
plt.show()
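# Illustrative aside: on the log-log axes above, a power law t = c * n**p is a
# straight line whose slope is p; np.polyfit can recover the exponent.
import numpy as np
n_demo = np.array([10.0, 100.0, 1000.0])
t_demo = 2.0 * n_demo**1.5
slope_demo = np.polyfit(np.log(n_demo), np.log(t_demo), 1)[0]
assert abs(slope_demo - 1.5) < 1e-6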
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Baselines
Step2: Dense example
Step3: Sparse example
Step4: Regression example
Step5: n_features/time complexity
Step6: Logging example
Step7: Save/load example
Step8: Different optimizers
Step9: Different regularization strategies
Step10: Weighted Loss Function
Step11: Unweighted loss shows good performance on prevalent class, but poor performance on class with smaller representation
Step12: Performance in underrepresented class improved, at the cost of performance in prevalent class.
|
<ASSISTANT_TASK:>
Python Code:
from tensorflow.examples.tutorials.mnist import input_data
from sklearn.datasets import fetch_mldata
from sklearn.preprocessing import scale
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score
mnist = input_data.read_data_sets("MNIST_data/")
mnist_images = mnist.train.images
mnist_labels = mnist.train.labels
n_three, n_five = sum(mnist_labels==3), sum(mnist_labels==5)
X_all = np.vstack([
mnist_images[mnist_labels==3,:],
mnist_images[mnist_labels==5,:]
])
y_all = np.array([1]*n_three + [0]*n_five)
# make it more sparse
X_all = X_all * (np.random.uniform(0, 1, X_all.shape) > 0.8)
print('Dataset shape: {}'.format(X_all.shape))
print('Non-zeros rate: {:.05f}'.format(np.mean(X_all != 0)))
print('Classes balance: {:.03f} / {:.03f}'.format(np.mean(y_all==0), np.mean(y_all==1)))
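# Illustrative check of the sparsification trick above: keeping entries where
# uniform(0, 1) > 0.8 retains each entry with probability ~0.2.
import numpy as np
rng_demo = np.random.RandomState(0)
frac_demo = (rng_demo.uniform(0, 1, (1000, 100)) > 0.8).mean()
assert 0.15 < frac_demo < 0.25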
X_tr, X_te, y_tr, y_te = train_test_split(X_all, y_all, random_state=42, test_size=0.3)
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
for model in [
LogisticRegression(),
RandomForestClassifier(n_jobs=-1, n_estimators=200)
]:
model.fit(X_tr, y_tr)
predictions = model.predict(X_te)
acc = accuracy_score(y_te, predictions)
print('model: {}'.format(model.__str__()))
print('accuracy: {}'.format(acc))
print()
from tffm import TFFMClassifier
for order in [2, 3]:
model = TFFMClassifier(
order=order,
rank=10,
optimizer=tf.train.AdamOptimizer(learning_rate=0.001),
n_epochs=50,
batch_size=1024,
init_std=0.001,
reg=0.01,
input_type='dense',
seed=42
)
model.fit(X_tr, y_tr, show_progress=True)
predictions = model.predict(X_te)
print('[order={}] accuracy: {}'.format(order, accuracy_score(y_te, predictions)))
# this will close tf.Session and free resources
model.destroy()
import scipy.sparse as sp
# only CSR format supported
X_tr_sparse = sp.csr_matrix(X_tr)
X_te_sparse = sp.csr_matrix(X_te)
order = 3
model = TFFMClassifier(
order=order,
rank=10,
optimizer=tf.train.AdamOptimizer(learning_rate=0.001),
n_epochs=50,
batch_size=1024,
init_std=0.001,
reg=0.01,
input_type='sparse',
seed=42
)
model.fit(X_tr_sparse, y_tr, show_progress=True)
predictions = model.predict(X_te_sparse)
print('[order={}] accuracy: {}'.format(order, accuracy_score(y_te, predictions)))
model.destroy()
from tffm import TFFMRegressor
from sklearn.metrics import mean_squared_error
model = TFFMRegressor(
order=order,
rank=10,
optimizer=tf.train.AdamOptimizer(learning_rate=0.001),
n_epochs=50,
batch_size=1024,
init_std=0.001,
reg=0.01,
input_type='sparse'
)
# translate Y from {0,1} to {-10, 10}
model.fit(X_tr_sparse, y_tr*20-10, show_progress=True)
predictions = model.predict(X_te_sparse)
print('[order={}] accuracy: {}'.format(order, accuracy_score(y_te, predictions > 0)))
print('MSE: {}'.format(mean_squared_error(y_te*20-10, predictions)))
model.destroy()
n_features = X_all.shape[1]
used_features = range(100, 1000, 100)
n_repeats = 5
elapsed_mean = []
elapsed_std = []
model_title = ''
for cur_n_feats in tqdm(used_features):
time_observation = []
for _ in range(n_repeats):
active_features = np.random.choice(range(n_features), size=cur_n_feats)
model = TFFMClassifier(
order=5,
rank=50,
optimizer=tf.train.AdamOptimizer(learning_rate=0.001),
n_epochs=1,
batch_size=-1,
init_std=0.01,
input_type='dense'
)
model_title = model.__str__()
# manually initialize model without calling .fit()
model.core.set_num_features(cur_n_feats)
model.core.build_graph()
model.initialize_session()
start_time = time.time()
predictions = model.decision_function(X_all[:, active_features])
end_time = time.time()
model.destroy()
time_observation.append(end_time - start_time)
elapsed_mean.append(np.mean(time_observation))
elapsed_std.append(np.std(time_observation))
%pylab inline
errorbar(used_features, elapsed_mean, yerr=elapsed_std)
xlim(0, 1000)
title(model_title)
xlabel('n_features')
ylabel('test time')
grid()
order = 3
model = TFFMClassifier(
order=order,
rank=10,
optimizer=tf.train.AdamOptimizer(learning_rate=0.01),
n_epochs=10,
batch_size=-1,
init_std=0.001,
reg=0.001,
input_type='sparse',
log_dir='./tmp/logs',
verbose=1
)
model.fit(X_tr_sparse, y_tr, show_progress=True)
predictions = model.predict(X_te_sparse)
print('[order={}] accuracy: {}'.format(order, accuracy_score(y_te, predictions)))
model.save_state('./tmp/state.tf')
model.destroy()
model = TFFMClassifier(
order=3,
rank=10,
optimizer=tf.train.AdamOptimizer(learning_rate=0.01),
n_epochs=10,
batch_size=-1,
init_std=0.001,
reg=0.001,
input_type='sparse',
log_dir='./tmp/logs',
verbose=1
)
# internally model need to allocate memory before load previous weights,
# so need to set num_features explicitly
model.core.set_num_features(X_tr.shape[1])
model.load_state('./tmp/state.tf')
for optim, title in [(tf.train.AdamOptimizer(learning_rate=0.001), 'Adam'),
(tf.train.FtrlOptimizer(0.01, l1_regularization_strength=0.01), 'FTRL')]:
acc = []
model = TFFMClassifier(
order=3,
rank=10,
optimizer=optim,
batch_size=1024,
init_std=0.001,
reg=0.1,
input_type='sparse',
)
n_epochs = 5
anchor_epochs = range(0, 200+1, n_epochs)
for _ in anchor_epochs:
# score result every 5 epochs
model.fit(X_tr_sparse, y_tr, n_epochs=n_epochs)
predictions = model.predict(X_te_sparse)
acc.append(accuracy_score(y_te, predictions))
plot(anchor_epochs, acc, label=title)
model.destroy()
xlabel('n_epochs')
ylabel('accuracy')
legend()
grid()
X_all = np.vstack([
mnist_images[mnist_labels==3,:],
mnist_images[mnist_labels==5,:]
])
y_all = np.array([1]*n_three + [0]*n_five)
# make it more sparse (sparseness is about 97%)
X_all = X_all * (np.random.uniform(0, 1, X_all.shape) > 0.97)
print('Dataset shape: {}'.format(X_all.shape))
print('Non-zeros rate: {}'.format(np.mean(X_all != 0)))
print('Classes balance: {} / {}'.format(np.mean(y_all==0), np.mean(y_all==1)))
X_tr, X_te, y_tr, y_te = train_test_split(X_all, y_all, random_state=42, test_size=0.3)
# rebuild the sparse matrices so the loop below trains on this new split
X_tr_sparse = sp.csr_matrix(X_tr)
X_te_sparse = sp.csr_matrix(X_te)
for use_reweight, title in [(False, 'no reweight reg'), (True, 'reweight reg')]:
acc = []
model = TFFMClassifier(
order=3,
rank=10,
optimizer=tf.train.AdamOptimizer(learning_rate=0.001),
batch_size=1024,
init_std=0.001,
reg=1.0,
input_type='sparse',
reweight_reg = use_reweight
)
n_epochs = 2
anchor_epochs = range(0, 20+1, n_epochs)
for _ in anchor_epochs:
        # score result every 2 epochs
model.fit(X_tr_sparse, y_tr, n_epochs=n_epochs)
predictions = model.predict(X_te_sparse)
acc.append(accuracy_score(y_te, predictions))
plot(anchor_epochs, acc, label=title)
model.destroy()
xlabel('n_epochs')
ylabel('accuracy')
legend(loc=4)
grid()
from sklearn.metrics import confusion_matrix
# generate imbalanced data:
X_imbalanced = X_all[4000:,:]
y_imbalanced = y_all[4000:]
print('Classes balance: {:.03f} / {:.03f}'.format(np.mean(y_imbalanced==0),
np.mean(y_imbalanced==1)))
print('Balanced positive weight is {:.03f}.'.format(np.mean(y_imbalanced==0)/np.mean(y_imbalanced==1)))
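# Illustrative: the 'balanced' positive-class weight is simply
# (number of negatives) / (number of positives), as printed above.
import numpy as np
y_demo = np.array([0, 0, 0, 1])
w_pos_demo = np.mean(y_demo == 0) / np.mean(y_demo == 1)
assert w_pos_demo == 3.0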
X_tr, X_te, y_tr, y_te = train_test_split(X_imbalanced, y_imbalanced, random_state=42, test_size=0.3)
# use default weighting
model = TFFMClassifier(
order=2,
rank=10,
optimizer=tf.train.AdamOptimizer(learning_rate=0.001),
n_epochs=50,
batch_size=1024,
init_std=0.001,
reg=0.01,
input_type='dense',
seed=42
)
model.fit(X_tr, y_tr, show_progress=True)
predictions = model.predict(X_te)
print('accuracy: {}'.format(accuracy_score(y_te, predictions)))
model.destroy()
confusion_matrix(y_te,predictions)
# use balanced weighting
model = TFFMClassifier(
order=2,
sample_weight='balanced',
rank=10,
optimizer=tf.train.AdamOptimizer(learning_rate=0.001),
n_epochs=50,
batch_size=1024,
init_std=0.001,
reg=0.01,
input_type='dense',
seed=42
)
model.fit(X_tr, y_tr, show_progress=True)
predictions = model.predict(X_te)
print('accuracy: {}'.format(accuracy_score(y_te, predictions)))
model.destroy()
confusion_matrix(y_te,predictions)
# use manully weighting for positive class
model = TFFMClassifier(
order=2,
pos_class_weight=6.0,
rank=10,
optimizer=tf.train.AdamOptimizer(learning_rate=0.001),
n_epochs=50,
batch_size=1024,
init_std=0.001,
reg=0.01,
input_type='dense',
seed=42
)
model.fit(X_tr, y_tr, show_progress=True)
predictions = model.predict(X_te)
print('accuracy: {}'.format(accuracy_score(y_te, predictions)))
model.destroy()
confusion_matrix(y_te,predictions)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Redo the model with a 75% - 25% training/test split and compare the results. Are they better or worse than before? Discuss why this may be.
Step2: 3. Load the breast cancer dataset (datasets.load_breast_cancer()) and perform basic exploratory analysis. What attributes to we have? What are we trying to predict?
Step3: 4. Using the breast cancer data, create a classifier to predict the type of seed. Perform the above hold out evaluation (50-50 and 75-25) and discuss the results.
Step4: Predicted 216 for benign but only 54 is true,predicted 50 but there are 107 cases, so this model doesnt work.
|
<ASSISTANT_TASK:>
Python Code:
from sklearn import datasets
import pandas as pd
%matplotlib inline
from sklearn import datasets
from pandas.tools.plotting import scatter_matrix
import matplotlib.pyplot as plt
from sklearn import tree
iris = datasets.load_iris()
iris
iris.keys()
iris['target']
iris['target_names']
iris['data']
iris['feature_names']
x = iris.data[:,2:] # the attributes # we are picking up only the info on petal length and width
y = iris.target # the target variable
# The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features.
dt = tree.DecisionTreeClassifier()
# .fit testing
dt = dt.fit(x,y)
from sklearn.cross_validation import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.50,train_size=0.50)
dt = dt.fit(x_train,y_train)
from sklearn.cross_validation import train_test_split
from sklearn import metrics
import numpy as np
def measure_performance(X,y,clf, show_accuracy=True, show_classification_report=True, show_confussion_matrix=True):
y_pred=clf.predict(X)
if show_accuracy:
print("Accuracy:{0:.3f}".format(metrics.accuracy_score(y, y_pred)),"\n")
if show_classification_report:
print("Classification report")
print(metrics.classification_report(y,y_pred),"\n")
if show_confussion_matrix:
print("Confusion matrix")
print(metrics.confusion_matrix(y,y_pred),"\n")
measure_performance(x_test,y_test,dt) #measure on the test data (rather than train)
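# Side note (illustrative): accuracy equals the trace of the confusion matrix
# divided by the total number of samples.
import numpy as np
cm_demo = np.array([[5, 1], [2, 7]])
acc_demo = np.trace(cm_demo) / cm_demo.sum()
assert abs(acc_demo - 12 / 15) < 1e-12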
def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(iris.target_names))
plt.xticks(tick_marks, iris.target_names, rotation=45)
plt.yticks(tick_marks, iris.target_names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
y_pred = dt.fit(x_train, y_train).predict(x_test) #generate a prediction based on the model created to output a predicted y
cm = metrics.confusion_matrix(y_test, y_pred)
np.set_printoptions(precision=2)
print('Confusion matrix, without normalization')
print(cm)
plt.figure()
plot_confusion_matrix(cm)
from sklearn.cross_validation import train_test_split
# NOTE: test_size=0.75 holds out 75% of the data for testing, leaving only 25% for training
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.75,train_size=0.25)
dt = dt.fit(x_train,y_train)
from sklearn import metrics
import numpy as np
def measure_performance(X,y,clf, show_accuracy=True, show_classification_report=True, show_confussion_matrix=True):
y_pred=clf.predict(X)
if show_accuracy:
print("Accuracy:{0:.3f}".format(metrics.accuracy_score(y, y_pred)),"\n")
if show_classification_report:
print("Classification report")
print(metrics.classification_report(y,y_pred),"\n")
if show_confussion_matrix:
print("Confusion matrix")
print(metrics.confusion_matrix(y,y_pred),"\n")
measure_performance(x_test,y_test,dt) #measure on the test data (rather than train)
def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(iris.target_names))
plt.xticks(tick_marks, iris.target_names, rotation=45)
plt.yticks(tick_marks, iris.target_names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
y_pred = dt.fit(x_train, y_train).predict(x_test)
cm = metrics.confusion_matrix(y_test, y_pred)
np.set_printoptions(precision=2)
print('Confusion matrix, without normalization')
print(cm)
plt.figure()
plot_confusion_matrix(cm)
# 75-25 seems to be better at predicting with precision
cancer = datasets.load_breast_cancer()
print(cancer)
cancer.keys()
#cancer['DESCR']
# we are trying to predict whether a given tumor is malignant or benign from its measured features
cancer['target_names']
cancer['data']
cancer['feature_names']
cancer['feature_names'][11]
cancer['target']
x = cancer.data    # use all features for the exploratory scatter plots below
y = cancer.target  # 0 = malignant, 1 = benign
print(x)
plt.figure(2, figsize=(8, 6))
plt.scatter(x[:,10:11], x[:,13:14], c=y, cmap=plt.cm.CMRmap)
plt.xlabel(cancer.feature_names[10])
plt.ylabel(cancer.feature_names[13])
plt.axhline(y=56)
plt.axvline(x=0.5)
plt.figure(2, figsize=(8, 6))
plt.scatter(x[:,1:2], x[:,3:4], c=y, cmap=plt.cm.CMRmap)
plt.xlabel(cancer.feature_names[1])
plt.ylabel(cancer.feature_names[3])
plt.axhline(y=800)
plt.axvline(x=17)
plt.figure(2, figsize=(8, 6))
plt.scatter(x[:,5:6], x[:,6:7], c=y, cmap=plt.cm.CMRmap)
plt.xlabel(cancer.feature_names[5])
plt.ylabel(cancer.feature_names[6])
plt.axhline(y=0.06)
plt.axvline(x=0.25)
x = cancer.data[:,10:11] # a single attribute: the 'radius error' feature
y = cancer.target
dt = tree.DecisionTreeClassifier()
dt = dt.fit(x,y)
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.75,train_size=0.25)
dt = dt.fit(x_train,y_train)
def measure_performance(X,y,clf, show_accuracy=True, show_classification_report=True, show_confussion_matrix=True):
y_pred=clf.predict(X)
if show_accuracy:
print("Accuracy:{0:.3f}".format(metrics.accuracy_score(y, y_pred)),"\n")
if show_classification_report:
print("Classification report")
print(metrics.classification_report(y,y_pred),"\n")
if show_confussion_matrix:
print("Confusion matrix")
print(metrics.confusion_matrix(y,y_pred),"\n")
measure_performance(x_test,y_test,dt) #measure on the test data (rather than train)
def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
    tick_marks = np.arange(len(cancer.target_names))
plt.xticks(tick_marks, cancer.target_names, rotation=45)
plt.yticks(tick_marks, cancer.target_names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
y_pred = dt.fit(x_train, y_train).predict(x_test)
cm = metrics.confusion_matrix(y_test, y_pred)
np.set_printoptions(precision=2)
print('Confusion matrix, without normalization')
print(cm)
plt.figure()
plot_confusion_matrix(cm)
x = cancer.data[:,:] # all 30 attributes
y = cancer.target
dt = tree.DecisionTreeClassifier()
dt = dt.fit(x,y)
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.75,train_size=0.25)
dt = dt.fit(x_train,y_train)
def measure_performance(X,y,clf, show_accuracy=True, show_classification_report=True, show_confussion_matrix=True):
y_pred=clf.predict(X)
if show_accuracy:
print("Accuracy:{0:.3f}".format(metrics.accuracy_score(y, y_pred)),"\n")
if show_classification_report:
print("Classification report")
print(metrics.classification_report(y,y_pred),"\n")
if show_confussion_matrix:
print("Confusion matrix")
print(metrics.confusion_matrix(y,y_pred),"\n")
measure_performance(x_test,y_test,dt) #measure on the test data (rather than train)
def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
    tick_marks = np.arange(len(cancer.target_names))
plt.xticks(tick_marks, cancer.target_names, rotation=45)
plt.yticks(tick_marks, cancer.target_names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
y_pred = dt.fit(x_train, y_train).predict(x_test)
cm = metrics.confusion_matrix(y_test, y_pred)
np.set_printoptions(precision=2)
print('Confusion matrix, without normalization')
print(cm)
plt.figure()
plot_confusion_matrix(cm)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Nearest centroid classifier
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from collections import Counter
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
from sklearn.neighbors import NearestCentroid
import random
import pickle
family_classification_metadata = pd.read_table('../seminar_5/data/family_classification_metadata.tab')
family_classification_sequences = pd.read_table('../seminar_5/data/family_classification_sequences.tab')
table = pd.read_csv('data/protVec_100d_3grams_without_quotes.csv', sep='\t', header=None)
table = table.T
header = table.iloc[0] # grab the first row for the header
prot2vec = table[1:] # take the data less the header row
prot2vec.columns = header # set the header row as the df header
most_common_families = Counter(family_classification_metadata['FamilyID']).most_common(1000)
most_common_families = [family for (family, count) in most_common_families]
family2num = {f: i for (i, f) in enumerate(most_common_families)}
MAX_PROTEIN_LEN = 501
EMBED_LEN = 100
all_proteins = family_classification_sequences['Sequences']
all_families = family_classification_metadata['FamilyID']
selected_ids = [i for i in range(len(all_proteins))
if all_families[i] in family2num and len(all_proteins[i]) <= MAX_PROTEIN_LEN]
random.shuffle(selected_ids)
train_ratio = 0.9
num_train = int(len(selected_ids) * train_ratio)
train_ids = selected_ids[:num_train]
test_ids = selected_ids[num_train:]
def embedding(protein):
res = np.zeros(100)
for i in range(0, (len(protein) - 3) // 3):
try:
res = np.add(res, prot2vec[protein[i*3: i*3 + 3]])
except KeyError:
res = np.add(res, prot2vec['<unk>'])
return np.divide(res, ((len(protein) - 3) // 3))
#embedding(all_proteins[11])
X_train = []
for i in range(len(train_ids)):
#if i % 2000 == 0:
# print(i)
cur_id = train_ids[i]
X_train.append(embedding(all_proteins[cur_id]))
X_test = []
for i in range(len(test_ids)):
#if i % 2000 == 0:
# print(i)
cur_id = test_ids[i]
X_test.append(embedding(all_proteins[cur_id]))
with open('data/X_train.pickle', 'wb') as f:
pickle.dump(X_train, f)
with open('data/X_test.pickle', 'wb') as f:
pickle.dump(X_test, f)
y_train = all_families[train_ids]
y_test = all_families[test_ids]
with open('data/y_train.pickle', 'wb') as f:
pickle.dump(y_train, f)
with open('data/y_test.pickle', 'wb') as f:
pickle.dump(y_test, f)
for shrinkage in [None, .2, 5, 10]:
clf = NearestCentroid(shrink_threshold=shrinkage)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
    print('Accuracy for shrinkage {}: {:3.1f}%'.format(shrinkage, np.mean(y_test == y_pred) * 100))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Quantum circuits on Pasqal devices
Step2: <img src="../../images/pasqal/Cirq_pasqal.png" width="700"/>
Step3: Create an array of qubits on a lattice
Step4: Notice how we are fixing depth=1 to keep all qubits in the $z=0$ plane. The same can also be achieved by using the TwoDQubit class, which is nothing more than a ThreeDQubit confined to the $z=0$ plane. We can create the same $6 \times 6$ square array, this time using one of the built-in methods
Step5: Create a PasqalDevice
Step6: Notice that the CX gate, also called a CNOT gate is not valid on this device. We can use a PasqalConverter object to convert it.
Step7: When the distance between the two qubits involved in the gate is greater than the control radius, as shown for example in the following plot, cirq will raise an error.
Step8: Send a circuit to a remote Pasqal machine
Step9: Now, to execute the circuit on one of our servers, one has to create a sampler that will send the circuit to the remote host address. One currently needs a token to do this. Please contact us to get one!
Step10: Alternatively, small circuits can also be simulated locally using Cirq's Simulator, which we will do here.
Step11: Next, we can plot the results obtained.
Step12: In this particular case, it takes a single Grover iteration to find the correct result ($|10\rangle$) with 100% probability on a perfect device.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
print("installed cirq.")
import cirq
from cirq_pasqal import ThreeDQubit, TwoDQubit, PasqalVirtualDevice, PasqalNoiseModel
width=6
height=6
depth=1
# Creating p_qubits, a list of ThreeDQubits.
p_qubits=[ThreeDQubit(row, col, layer)
for row in range(width)
for col in range(height)
for layer in range(depth)]
p_qubits = TwoDQubit.square(6) # 6x6 square array of TwoDQubits
from cirq_pasqal.pasqal_device import PasqalConverter
# Initialize and create a circuit
initial_circuit = cirq.Circuit()
initial_circuit.append(cirq.CZ(p_qubits[0], p_qubits[1]))
initial_circuit.append(cirq.Z(p_qubits[0]))
initial_circuit.append(cirq.CX(p_qubits[0], p_qubits[2]))
# Create a Pasqal device with a control radius of 2.1 (in units of the lattice spacing)
p_device=PasqalVirtualDevice(control_radius=2.1, qubits=p_qubits)
# Validate the circuit using the device
try:
p_device.validate_circuit(initial_circuit)
except ValueError as e:
# Uh oh! This circuit does not pass validation.
print(e)
converter = PasqalConverter()
pasqal_circuit=cirq.Circuit(*[converter.pasqal_convert(op, keep=p_device.is_pasqal_device_op)
for op in initial_circuit.all_operations()],
strategy=cirq.InsertStrategy.NEW)
print(pasqal_circuit)
# Now the circuit validates correctly!
p_device.validate_circuit(pasqal_circuit)
# It is impossible to add a multi-qubit gate if the qubits involved are too far away
try:
p_device.validate_operation(cirq.CZ(p_qubits[0], p_qubits[-1]))
except ValueError as msg:
print("ERROR:", msg)
else:
print("Failed to raise the expected ValueError.")
import numpy as np
#Place qubits at vertices of an equilateral triangle
qs = [TwoDQubit(0, 0), TwoDQubit(1, 0), TwoDQubit(0.5, np.sqrt(3)/2)]
q_meas = [TwoDQubit(1, 0), TwoDQubit(0, 0)]
def state_preparation():
for q in qs[:-1]:
yield cirq.H(q)
yield cirq.X(qs[-1])
yield cirq.H(qs[-1])
def oracle():
# Signals the state 10
yield cirq.X(qs[0])
yield cirq.CCX(*qs)
yield cirq.X(qs[0])
def grover_operator():
for q in qs[:-1]:
yield cirq.H(q)
yield cirq.X(q)
yield cirq.CZ(*qs[:-1])
for q in qs[:-1]:
yield cirq.X(q)
yield cirq.H(q)
def generate_grover():
yield state_preparation()
yield oracle()
yield grover_operator()
def generate_pasqal_grover():
return cirq.Circuit(
*[converter.pasqal_convert(op, keep=p_device.is_pasqal_device_op)
for op in generate_grover()],
strategy=cirq.InsertStrategy.NEW)
device = PasqalVirtualDevice(control_radius=1.1, qubits=qs)
grover_circuit = generate_pasqal_grover()
device.validate_circuit(grover_circuit)
grover_circuit.append(cirq.measure(*q_meas, key='x'))
# To execute on one of Pasqal's servers, get an access token and uncomment these lines
# sampler = cirq_pasqal.PasqalSampler(remote_host='http://34.98.71.118/v0/pasqal',
# access_token='')
sampler = cirq.Simulator()
data_raw = sampler.run(grover_circuit, repetitions=200)
import matplotlib.pyplot as plt
# Create dictionary of results
data0 = data_raw.data.to_dict()
vec=[j for j in range(4)]
res=np.zeros(len(vec))
for jj in range(200):
number = data0['x'][jj]
res[number] += 1
res=res/200.
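The tallying loop above can be condensed with `collections.Counter`; a sketch on made-up measurement records (`shots` is hypothetical stand-in data for `data0['x']`):

```python
from collections import Counter

# Hypothetical measurement records: integers 0..3 encoding |00>..|11>.
shots = [2, 2, 2, 0, 2, 3, 2, 2, 1, 2]

counts = Counter(shots)
probs = [counts.get(j, 0) / len(shots) for j in range(4)]
print(probs)  # -> [0.1, 0.1, 0.7, 0.1]
```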
# Display plot of states
fig = plt.figure()
ax = fig.add_axes([0.16, 0.16, 0.78, 0.78])
ax.plot(vec, res,'o')
ax.set_xticks(vec)
ax.set_xticklabels([r'$|00\rangle$',r'$|01\rangle$', r'$|10\rangle$', r'$|11\rangle$'])
plt.xlabel(r'$j$', fontsize=20)
plt.ylabel(r'$p$', fontsize=20)
plt.show()
# Use the custom noise model of Pasqal devices
noise_model = PasqalNoiseModel(device)
grover_circuit = generate_pasqal_grover()
grover_circuit = grover_circuit.with_noise(noise_model)
grover_circuit.append(cirq.measure(*q_meas, key='x'))
# Run a simulation locally
data_raw = sampler.run(grover_circuit, repetitions=200)
# Plot the results
data0 = data_raw.data.to_dict()
vec=[j for j in range(4)]
res=np.zeros(len(vec))
for jj in range(200):
number = data0['x'][jj]
res[number] += 1
res=res/200.
fig = plt.figure()
ax = fig.add_axes([0.16, 0.16, 0.78, 0.78])
ax.plot(vec, res,'o')
ax.set_xticks(vec)
ax.set_xticklabels([r'$|00\rangle$',r'$|01\rangle$', r'$|10\rangle$', r'$|11\rangle$'])
plt.xlabel(r'$j$', fontsize=20)
plt.ylabel(r'$p$', fontsize=20)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This is a particularly small example of a corpus for illustration purposes. Another example could be a list of all the plays written by Shakespeare, list of all wikipedia articles, or all tweets by a particular person of interest.
Step2: Before proceeding, we want to associate each word in the corpus with a unique integer ID. We can do this using the gensim.corpora.Dictionary class. This dictionary defines the vocabulary of all words that our processing knows about.
Step3: Because our corpus is small, there are only 12 different tokens in this Dictionary. For larger corpora, dictionaries that contain hundreds of thousands of tokens are quite common.
Step4: For example, suppose we wanted to vectorize the phrase "Human computer interaction" (note that this phrase was not in our original corpus). We can create the bag-of-word representation for a document using the doc2bow method of the dictionary, which returns a sparse representation of the word counts
Step5: The first entry in each tuple corresponds to the ID of the token in the dictionary, the second corresponds to the count of this token.
Step6: Note that while this list lives entirely in memory, in most applications you will want a more scalable solution. Luckily, gensim allows you to use any iterator that returns a single document vector at a time. See the documentation for more details.
|
<ASSISTANT_TASK:>
Python Code:
raw_corpus = ["Human machine interface for lab abc computer applications",
"A survey of user opinion of computer system response time",
"The EPS user interface management system",
"System and human system engineering testing of EPS",
"Relation of user perceived response time to error measurement",
"The generation of random binary unordered trees",
"The intersection graph of paths in trees",
"Graph minors IV Widths of trees and well quasi ordering",
"Graph minors A survey"]
# Create a set of frequent words
stoplist = set('for a of the and to in'.split(' '))
# Lowercase each document, split it by white space and filter out stopwords
texts = [[word for word in document.lower().split() if word not in stoplist]
for document in raw_corpus]
# Count word frequencies
from collections import defaultdict
frequency = defaultdict(int)
for text in texts:
for token in text:
frequency[token] += 1
# Only keep words that appear more than once
processed_corpus = [[token for token in text if frequency[token] > 1] for text in texts]
processed_corpus
from gensim import corpora
dictionary = corpora.Dictionary(processed_corpus)
print(dictionary)
print(dictionary.token2id)
new_doc = "Human computer interaction"
new_vec = dictionary.doc2bow(new_doc.lower().split())
new_vec
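A plain-Python sketch of what `doc2bow` produces — sorted (token-id, count) pairs for known tokens only; `tiny_vocab` and `to_bow` below are made-up names for illustration, not gensim's API:

```python
from collections import Counter

# Hypothetical toy vocabulary (token -> integer id), standing in for dictionary.token2id.
tiny_vocab = {"human": 0, "computer": 1, "interface": 2}

def to_bow(doc, vocab):
    """Return sorted (token_id, count) pairs; unknown tokens are dropped,
    mirroring gensim's doc2bow behaviour."""
    counts = Counter(vocab[tok] for tok in doc.lower().split() if tok in vocab)
    return sorted(counts.items())

print(to_bow("Human computer interaction", tiny_vocab))  # -> [(0, 1), (1, 1)]
```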
bow_corpus = [dictionary.doc2bow(text) for text in processed_corpus]
bow_corpus
from gensim import models
# train the model
tfidf = models.TfidfModel(bow_corpus)
# transform the "system minors" string
tfidf[dictionary.doc2bow("system minors".lower().split())]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Data set
Step3: Encoding and training a FFNN using dCGP
Step4: Same training is done using Keras (TensorFlow backend)
|
<ASSISTANT_TASK:>
Python Code:
# Initial import
import dcgpy
import matplotlib.pyplot as plt
import numpy as np
from tqdm import tqdm
from sklearn.utils import shuffle
import timeit
%matplotlib inline
# We import the data for a classification task.
from numpy import genfromtxt
# https://archive.ics.uci.edu/ml/datasets/Abalone
my_data = genfromtxt('abalone_data_set.csv', delimiter=',')
points = my_data[:,:-1]
labels_tmp = my_data[:,-1]
# We transform the categorical variables to one-hot encoding
# The problem is treated as a three class problem
labels = np.zeros((len(labels_tmp), 3))
for i,l in enumerate(labels_tmp):
if l < 9:
labels[i][0] = 1
elif l > 10:
labels[i][2] = 1
else :
labels[i][1] = 1
# And split the data into training and test
X_train = points[:3000]
Y_train = labels[:3000]
X_test = points[3000:]
Y_test = labels[3000:]
# Stable implementation of the softmax function
def softmax(x):
    """Compute softmax values for each set of scores in x."""
e_x = np.exp(x - np.max(x))
return e_x / e_x.sum()
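The max-subtraction in softmax above is the standard guard against overflow in exp; a stdlib-only illustration of why it matters (`stable_softmax` is our own helper name, independent of the NumPy version above):

```python
import math

def stable_softmax(scores):
    # Subtracting the max leaves the result mathematically unchanged but
    # keeps every exp() argument <= 0, so it cannot overflow.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# math.exp(1002) alone would raise OverflowError; the shifted version is fine.
probs = stable_softmax([1000.0, 1001.0, 1002.0])
print(probs)  # three probabilities that sum to 1
```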
# We define the accuracy metric
def accuracy(ex, points, labels):
acc = 0.
for p,l in zip(points, labels):
ps = softmax(ex(p))
if np.argmax(ps) == np.argmax(l):
acc += 1.
return acc / len(points)
# We encode a FFNN into a dCGP expression. Note that the last layer is made by a sum activation function
# so that categorical cross entropy can be used and produce a softmax activation last layer.
# In a dCGP the concept of layers is absent and neurons are defined by activation functions R->R.
dcgpann = dcgpy.encode_ffnn(8,3,[50,20],["sig", "sig", "sum"], 5)
# By default all weights (and biases) are set to 1 (and 0). We initialize the weights normally distributed
dcgpann.randomise_weights(mean = 0., std = 1.)
dcgpann.randomise_biases(mean = 0., std = 1.)
print("Starting error:", dcgpann.loss(X_test,Y_test, "CE"))
print("Net complexity (number of active weights):", dcgpann.n_active_weights())
print("Net complexity (number of unique active weights):", dcgpann.n_active_weights(unique=True))
print("Net complexity (number of active nodes):", len(dcgpann.get_active_nodes()))
#dcgpann.visualize(show_nonlinearities=True, legend=True)
res = []
# We train
n_epochs = 100
print("Start error (training set):", dcgpann.loss(X_train,Y_train, "CE"), flush=True)
print("Start error (test):", dcgpann.loss(X_test,Y_test, "CE"), flush=True)
start_time = timeit.default_timer()
for i in tqdm(range(n_epochs)):
res.append(dcgpann.sgd(X_train, Y_train, 1., 32, "CE", parallel = 4))
elapsed = timeit.default_timer() - start_time
print("End error (training set):", dcgpann.loss(X_train,Y_train, "CE"), flush=True)
print("End error (test):", dcgpann.loss(X_test,Y_test, "CE"), flush=True)
print("Time:", elapsed, flush=True)
plt.plot(res)
print("Accuracy (test): ", accuracy(dcgpann, X_test, Y_test))
import keras
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras import optimizers
# We define Stochastic Gradient Descent as an optimizer
sgd = optimizers.SGD(lr=1.)
# We define weight initialization
initializerw = keras.initializers.RandomNormal(mean=0.0, stddev=1, seed=None)
initializerb = keras.initializers.RandomNormal(mean=0.0, stddev=1, seed=None)
model = Sequential([
Dense(50, input_dim=8, kernel_initializer=initializerw, bias_initializer=initializerb),
Activation('sigmoid'),
Dense(20, kernel_initializer=initializerw, bias_initializer=initializerb),
Activation('sigmoid'),
Dense(3, kernel_initializer=initializerw, bias_initializer=initializerb),
Activation('softmax'),
])
model.compile(optimizer=sgd,
loss='categorical_crossentropy', metrics=['acc'])
start_time = timeit.default_timer()
history = model.fit(X_train, Y_train, epochs=100, batch_size=32, verbose=False)
elapsed = timeit.default_timer() - start_time
print("End error (training set):", model.evaluate(X_train,Y_train, verbose=False))
print("End error (test):", model.evaluate(X_test,Y_test, verbose=False))
print("Time:", elapsed)
# We plot for comparison the MSE during learning in the two cases
plt.plot(history.history['loss'], label='Keras')
plt.plot(res, label='dCGP')
plt.title('dCGP vs Keras')
plt.xlabel('epochs')
plt.legend()
_ = plt.ylabel('Cross Entropy Loss')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import Section class, which contains all calculations
Step2: Initialization of the sympy symbolic tool and pint for dimension analysis (not fully implemented yet, as pint is not directly compatible with sympy)
Step3: Define sympy parameters used for geometric description of sections
Step4: We also define numerical values for each symbol in order to plot the scaled section and perform calculations
Step5: First example
Step6: Define section and perform first calculations
Step7: Plot of S1 section in original reference frame
Step8: Plot of S1 section in inertial reference Frame
Step9: Compute L matrix
Step10: Compute H matrix
Step11: Compute $\tilde{K}$ and $\tilde{M}$ as
Step12: Compute eigenvalues and eigenvectors as
Step13: Eigenvalues correspond to $\beta^2$
Step14: Eigenvectors are orthogonal as expected
Step15: From $\beta_i^2$ we compute
|
<ASSISTANT_TASK:>
Python Code:
from pint import UnitRegistry
import sympy
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
import sys
%matplotlib inline
from IPython.display import display
from Section import Section
ureg = UnitRegistry()
sympy.init_printing()
A, A0, t, t0, a, b, h, L, E, G = sympy.symbols('A A_0 t t_0 a b h L E G', positive=True)
values = [(A, 150 * ureg.millimeter**2),(A0, 250 * ureg.millimeter**2),(a, 80 * ureg.millimeter), \
(b, 20 * ureg.millimeter),(h, 35 * ureg.millimeter),(L, 2000 * ureg.millimeter), \
(t, 0.8 *ureg.millimeter),(E, 72e3 * ureg.MPa), (G, 27e3 * ureg.MPa)]
datav = [(v[0],v[1].magnitude) for v in values]
stringers = {1:[(2*a,h),A],
2:[(a,h),A],
3:[(sympy.Integer(0),h),A],
4:[(sympy.Integer(0),sympy.Integer(0)),A],
5:[(2*a,sympy.Integer(0)),A]}
#5:[(sympy.Rational(1,2)*a,h),A]}
panels = {(1,2):t,
(2,3):t,
(3,4):t,
(4,5):t,
(5,1):t}
S1 = Section(stringers, panels)
S1.cycles
start_pos={ii: [float(S1.g.node[ii]['ip'][i].subs(datav)) for i in range(2)] for ii in S1.g.nodes() }
plt.figure(figsize=(12,8),dpi=300)
nx.draw(S1.g,with_labels=True, arrows= True, pos=start_pos)
plt.arrow(0,0,20,0)
plt.arrow(0,0,0,20)
#plt.text(0,0, 'CG', fontsize=24)
plt.axis('equal')
plt.title("Section in starting reference Frame",fontsize=16);
positions={ii: [float(S1.g.node[ii]['pos'][i].subs(datav)) for i in range(2)] for ii in S1.g.nodes() }
x_ct, y_ct = S1.ct.subs(datav)
plt.figure(figsize=(12,8),dpi=300)
nx.draw(S1.g,with_labels=True, pos=positions)
plt.plot([0],[0],'o',ms=12,label='CG')
plt.plot([x_ct],[y_ct],'^',ms=12, label='SC')
#plt.text(0,0, 'CG', fontsize=24)
#plt.text(x_ct,y_ct, 'SC', fontsize=24)
plt.legend(loc='lower right', shadow=True)
plt.axis('equal')
plt.title("Section in principal reference Frame",fontsize=16);
S1.compute_L()
S1.L
S1.compute_H()
S1.H
S1.compute_KM(A,h,t)
S1.Ktilde
S1.Mtilde
sol_data = (S1.Ktilde.inv()*(S1.Mtilde.subs(datav))).eigenvects()
β2 = [sol[0] for sol in sol_data]
β2
X = [sol[2][0] for sol in sol_data]
X
λ = [sympy.N(sympy.sqrt(E*A*h/(G*t)*βi).subs(datav)) for βi in β2]
λ
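The scaling used above, λ_i = sqrt(E·A·h/(G·t)·β_i²), can be spot-checked numerically. The constants below are the notebook's own values (MPa and mm); `beta_sq` is an arbitrary illustrative input, not a computed eigenvalue:

```python
import math

# Numerical values from the `values` list above (units: MPa and mm).
E_mod, G_mod, A_val, h_val, t_val = 72e3, 27e3, 150.0, 35.0, 0.8

def decay_length(beta_sq):
    """lambda_i = sqrt(E*A*h/(G*t) * beta_i^2), in mm."""
    return math.sqrt(E_mod * A_val * h_val / (G_mod * t_val) * beta_sq)

# beta_sq = 1e-4 is a made-up sample value for illustration.
print(round(decay_length(1e-4), 2))  # -> 1.32
```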
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a storage bucket
Step2: In the examples below, the BUCKET_NAME variable is referenced in the commands using $.
Step3: About the data
Step4: Create a blank file named __init__.py
Step5: Save training code in one Python file in the census_training directory. The following cell writes a training file to the census_training directory. The training file performs the following operations
Step6: Submit the training job
Step7: Verify model file in Cloud Storage
Step8: Serve the model
Step9: Create the model in AI Platform
Step10: Create a version that points to your model file in Cloud Storage
Step11: Make predictions
Step12: Send the online prediction request
Step13: Clean up
|
<ASSISTANT_TASK:>
Python Code:
!gcloud services enable ml.googleapis.com
!gcloud services enable compute.googleapis.com
BUCKET_NAME = 'your-new-bucket'
!gsutil mb gs://$BUCKET_NAME/
!mkdir census_training
!touch ./census_training/__init__.py
%%writefile ./census_training/train.py
import argparse
import pickle
import pandas as pd
from google.cloud import storage
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import FeatureUnion
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import LabelBinarizer
parser = argparse.ArgumentParser()
parser.add_argument("--bucket-name", help="The bucket name", required=True)
arguments, unknown = parser.parse_known_args()
bucket_name = arguments.bucket_name
# Define the format of your input data, including unused columns.
# These are the columns from the census data files.
COLUMNS = (
'age',
'workclass',
'fnlwgt',
'education',
'education-num',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'capital-gain',
'capital-loss',
'hours-per-week',
'native-country',
'income-level'
)
# Categorical columns are columns that need to be turned into a numerical value
# to be used by scikit-learn
CATEGORICAL_COLUMNS = (
'workclass',
'education',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'native-country'
)
# Create a Cloud Storage client to download the census data
storage_client = storage.Client()
# Download the data
public_bucket = storage_client.bucket('cloud-samples-data')
blob = public_bucket.blob('ml-engine/sklearn/census_data/adult.data')
blob.download_to_filename('adult.data')
# Load the training census dataset
with open("./adult.data", "r") as train_data:
raw_training_data = pd.read_csv(train_data, header=None, names=COLUMNS)
# Removing the whitespaces in categorical features
for col in CATEGORICAL_COLUMNS:
raw_training_data[col] = raw_training_data[col].apply(lambda x: str(x).strip())
# Remove the column we are trying to predict ('income-level') from our features
# list and convert the DataFrame to a list of lists
train_features = raw_training_data.drop("income-level", axis=1).values.tolist()
# Create our training labels list, convert the DataFrame to a list of lists
train_labels = (raw_training_data["income-level"] == " >50K").values.tolist()
# Since the census data set has categorical features, we need to convert
# them to numerical values. We'll use a list of pipelines to convert each
# categorical column and then use FeatureUnion to combine them before calling
# the RandomForestClassifier.
categorical_pipelines = []
# Each categorical column needs to be extracted individually and converted to a
# numerical value. To do this, each categorical column will use a pipeline that
# extracts one feature column via SelectKBest(k=1) and a LabelBinarizer() to
# convert the categorical value to a numerical one. A scores array (created
# below) will select and extract the feature column. The scores array is
# created by iterating over the columns and checking if it is a
# categorical column.
for i, col in enumerate(COLUMNS[:-1]):
if col in CATEGORICAL_COLUMNS:
# Create a scores array to get the individual categorical column.
# Example:
# data = [
# 39, 'State-gov', 77516, 'Bachelors', 13, 'Never-married',
# 'Adm-clerical', 'Not-in-family', 'White', 'Male', 2174, 0,
# 40, 'United-States'
# ]
# scores = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
#
# Returns: [['State-gov']]
# Build the scores array
scores = [0] * len(COLUMNS[:-1])
# This column is the categorical column we want to extract.
scores[i] = 1
skb = SelectKBest(k=1)
skb.scores_ = scores
# Convert the categorical column to a numerical value
lbn = LabelBinarizer()
r = skb.transform(train_features)
lbn.fit(r)
# Create the pipeline to extract the categorical feature
categorical_pipelines.append(
(
'categorical-{}'.format(i),
Pipeline([
('SKB-{}'.format(i), skb),
('LBN-{}'.format(i), lbn)])
)
)
# Create pipeline to extract the numerical features
skb = SelectKBest(k=6)
# From COLUMNS use the features that are numerical
skb.scores_ = [1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0]
categorical_pipelines.append(("numerical", skb))
# Combine all the features using FeatureUnion
preprocess = FeatureUnion(categorical_pipelines)
# Create the classifier
classifier = RandomForestClassifier()
# Transform the features and fit them to the classifier
classifier.fit(preprocess.transform(train_features), train_labels)
# Create the overall model as a single pipeline
pipeline = Pipeline([("union", preprocess), ("classifier", classifier)])
# Create the model file
# It is required to name the model file "model.pkl" if you are using pickle
model_filename = "model.pkl"
with open(model_filename, "wb") as model_file:
pickle.dump(pipeline, model_file)
# Upload the model to Cloud Storage
bucket = storage_client.bucket(bucket_name)
blob = bucket.blob(model_filename)
blob.upload_from_filename(model_filename)
import time
# Define a timestamped job name
JOB_NAME = "census_training_{}".format(int(time.time()))
# Submit the training job:
!gcloud ai-platform jobs submit training $JOB_NAME \
--job-dir gs://$BUCKET_NAME/census_job_dir \
--package-path ./census_training \
--module-name census_training.train \
--region us-central1 \
--runtime-version=1.12 \
--python-version=3.5 \
--scale-tier BASIC \
--stream-logs \
-- \
--bucket-name $BUCKET_NAME
!gsutil ls gs://$BUCKET_NAME/
MODEL_NAME = "CensusPredictor"
VERSION_NAME = "census_predictor_{}".format(int(time.time()))
!gcloud ai-platform models create $MODEL_NAME --regions us-central1
!gcloud ai-platform versions create $VERSION_NAME \
--model=$MODEL_NAME \
--framework=scikit-learn \
--origin=gs://$BUCKET_NAME/ \
--python-version=3.5 \
--runtime-version=1.12
# Define a name for the input file
INPUT_FILE = "./census_training/input.json"
%%writefile $INPUT_FILE
[25, "Private", 226802, "11th", 7, "Never-married", "Machine-op-inspct", "Own-child", "Black", "Male", 0, 0, 40, "United-States"]
[38, "Private", 89814, "HS-grad", 9, "Married-civ-spouse", "Farming-fishing", "Husband", "White", "Male", 0, 0, 50, "United-States"]
[28, "Local-gov", 336951, "Assoc-acdm", 12, "Married-civ-spouse", "Protective-serv", "Husband", "White", "Male", 0, 0, 40, "United-States"]
[44, "Private", 160323, "Some-college", 10, "Married-civ-spouse", "Machine-op-inspct", "Husband", "Black", "Male", 7688, 0, 40, "United-States"]
[18, "?", 103497, "Some-college", 10, "Never-married", "?", "Own-child", "White", "Female", 0, 0, 30, "United-States"]
[34, "Private", 198693, "10th", 6, "Never-married", "Other-service", "Not-in-family", "White", "Male", 0, 0, 30, "United-States"]
[29, "?", 227026, "HS-grad", 9, "Never-married", "?", "Unmarried", "Black", "Male", 0, 0, 40, "United-States"]
[63, "Self-emp-not-inc", 104626, "Prof-school", 15, "Married-civ-spouse", "Prof-specialty", "Husband", "White", "Male", 3103, 0, 32, "United-States"]
[24, "Private", 369667, "Some-college", 10, "Never-married", "Other-service", "Unmarried", "White", "Female", 0, 0, 40, "United-States"]
[55, "Private", 104996, "7th-8th", 4, "Married-civ-spouse", "Craft-repair", "Husband", "White", "Male", 0, 0, 10, "United-States"]
!gcloud ai-platform predict --model $MODEL_NAME --version \
$VERSION_NAME --json-instances $INPUT_FILE
# Delete the model version
!gcloud ai-platform versions delete $VERSION_NAME --model=$MODEL_NAME --quiet
# Delete the model
!gcloud ai-platform models delete $MODEL_NAME --quiet
# Delete the bucket and contents
!gsutil rm -r gs://$BUCKET_NAME
# Delete the local files created by the tutorial
!rm -rf census_training
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hough transform combined with a polygonal mask
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
# convert to grayscale and smooth with a Gaussian
img = mpimg.imread('testimg.jpg')
gray_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
kernel_size = 5
blurred = cv2.GaussianBlur(gray_img, (kernel_size, kernel_size), 0)
# edge detect with Canny
low = 50
high = 150
edges = cv2.Canny(blurred, low, high)
# build lines with Hough transform
rho = 1
theta = np.pi/180
threshold = 1
min_line_length = 10
max_line_gap = 1
line_img = np.copy(img) * 0 # blank of same dim as our img
lines = cv2.HoughLinesP(edges, rho, theta,
threshold,np.array([]), min_line_length, max_line_gap)
# draw!
for line in lines:
for x1, y1, x2, y2 in line:
cv2.line(line_img, (x1,y1), (x2,y2), (255,0,0), 10)
# colorized binary image
colorized = np.dstack((edges, edges, edges))
# draw the colorized lines
combined = cv2.addWeighted(colorized, 0.8, line_img, 1, 0)
plt.imshow(combined)
plt.show()
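The polygonal region-of-interest mask used next relies on `cv2.fillPoly` to rasterise a quadrilateral; the underlying membership test can be sketched per point with the classic even-odd ray-casting rule (stdlib-only illustration with our own helper name, not cv2's actual implementation):

```python
def point_in_polygon(x, y, vertices):
    """Even-odd ray-casting test: cast a ray to the right and count
    how many polygon edges it crosses."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        crosses = (y1 > y) != (y2 > y)
        # x-coordinate where the edge meets the horizontal line through (x, y)
        if crosses and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(2, 2, square), point_in_polygon(5, 2, square))  # -> True False
```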
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
# convert to grayscale and smooth with a Gaussian
img = mpimg.imread('testimg.jpg')
gray_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
kernel_size = 5
blurred = cv2.GaussianBlur(gray_img, (kernel_size, kernel_size), 0)
# edge detect with Canny
low = 50
high = 150
edges = cv2.Canny(blurred, low, high)
# build masked edge
mask = np.zeros_like(edges)
mask_ignored = 255
imshape = img.shape
# Vertices chosen so the mask selects the lane area
vertices = np.array([[(275,imshape[0]),(650, 200),
(imshape[1], 1000),
(imshape[1],imshape[0])]], dtype=np.int32)
cv2.fillPoly(mask, vertices, mask_ignored)
masked_edges = cv2.bitwise_and(edges, mask)
# build lines with Hough transform
rho = 1
theta = np.pi/180
threshold = 1
min_line_length = 5
max_line_gap = 1
line_img = np.copy(img) * 0 # blank of same dim as our img
lines = cv2.HoughLinesP(masked_edges, rho, theta,
threshold, np.array([]),
min_line_length, max_line_gap)
# draw!
for line in lines:
for x1, y1, x2, y2 in line:
cv2.line(line_img, (x1,y1), (x2,y2), (255,0,0), 10)
# colorized binary image
colorized = np.dstack((edges, edges, edges))
# draw the colorized lines
combined = cv2.addWeighted(colorized, 0.8, line_img, 1, 0)
plt.imshow(combined)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Importing Excel files in current directory
Step2: To plot your figure, simply run the cell below to generate a series of iPython widgets as an interface to FPLCPlot. Any changes made to the parameters on the plot are shown after pressing the Run plotTraces button. (Note
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from fplcplot.chromatogram import plotTraces
file_list = !ls *A.xls
file_list
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
from IPython.display import display
interact(plotTraces, file_list=fixed(file_list),
title=widgets.Text("Protein A $E. coli$", description='Title:'),
output=widgets.Checkbox(value=False, description="Save file?"),
f_format=widgets.Dropdown(options=['.png', '.pdf'], description='File format:'),
y_lower=widgets.IntSlider(min=-200,max=100,step=10,value=-20,
description='Lower y-limit:'),
y_upper=widgets.IntSlider(min=-10,max=4500,step=50,value=2000,
description='Upper y-limit:'),
second_trace=widgets.ToggleButtons(options=['None','buffer_b',
'buffer_b_abs',
'conductivity'],
description='2nd trace:'),
buffer_A=widgets.IntSlider(min=0,max=500,step=10,value=10,
description='Buffer A (mM):'),
buffer_B=widgets.IntSlider(min=0,max=3000,step=10,value=400,
description='Buffer B (mM):'),
__manual=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
Step2: Preprocessing
Step3: And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
Step4: Subsampling
Step5: Making batches
Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, which helps save memory.
Step7: Building the graph
Step8: Embedding
Step9: Negative sampling
Step10: Validation
Step11: Training
Step12: Restore the trained network if you need to
Step13: Visualizing the word vectors
|
<ASSISTANT_TASK:>
Python Code:
import time
import numpy as np
import tensorflow as tf
import utils
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
from collections import Counter
import random
random.seed(100)
word_count = Counter(int_words)
total_count = len(int_words)
def keep_prob(word, threshold=1e-5):
return np.sqrt(threshold / (word_count[word] / total_count))
keep_probs = {word: keep_prob(word) for word in word_count.keys()}
# The final subsampled word list
train_words = [word for word in int_words if random.random() < keep_probs[word]]
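As a quick standalone illustration of the subsampling formula (not tied to the corpus above): frequent words get small keep-probabilities, while rare words can get values above 1 and are therefore always kept by the `random.random() < p` test.

```python
import math

threshold = 1e-5
# keep probability sqrt(t / f(w)) for a frequent, a borderline, and a rare word
probs = [math.sqrt(threshold / f) for f in (0.05, 1e-5, 1e-7)]
print([round(p, 4) for p in probs])  # → [0.0141, 1.0, 10.0]
```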
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
R = np.random.randint(1, window_size + 1)
start, end = np.clip([idx - R, idx + R], 0, len(words) - 1)
window = words[start:idx] + words[idx + 1:end + 1]
target = set(window)
return list(target)
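A quick sanity check of the same windowing logic on a toy sequence (a standalone sketch; the seed is fixed only to make the run reproducible):

```python
import numpy as np

def get_target_demo(words, idx, window_size=5):
    # same logic as above: random radius R, window clipped to list bounds
    R = np.random.randint(1, window_size + 1)
    start, end = np.clip([idx - R, idx + R], 0, len(words) - 1)
    window = words[start:idx] + words[idx + 1:end + 1]
    return list(set(window))

np.random.seed(1)
toy = list(range(10))
targets = get_target_demo(toy, idx=5, window_size=3)
# the center word never appears, and all targets lie within 3 positions of it
print(5 not in targets, all(abs(t - 5) <= 3 for t in targets))
```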
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, [None], name='inputs')
labels = tf.placeholder(tf.int32, [None, None], name='labels')
n_vocab = len(int_to_vocab)
n_embedding = 200 # Number of embedding features
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform([n_vocab, n_embedding], -1, 1)) # embedding weight matrix
embed = tf.nn.embedding_lookup(embedding, inputs) # hidden layer output
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal([n_vocab, n_embedding], stddev=0.1)) # softmax weight matrix
softmax_b = tf.Variable(tf.zeros(n_vocab)) # softmax biases
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A Caffe net offers a layer dict that maps layer names to layer objects. These objects expose little information beyond access to their weights and the type of the layer.
Step2: Getting input and output shape.
Step3: Getting the weigths.
Step4: The weights are also accessible through the layer blobs.
Step5: For visualizing the weights, the axes still have to be moved around.
Step6: Layers that have no weights simply keep empty lists as their blob vector.
Step7: Getting the activations and the net input.
Step8: Feeding the input and forwarding it.
Step9: Getting layer properties
Step10: Parsed messages for the layers can be found in the message.layer list.
|
<ASSISTANT_TASK:>
Python Code:
model_def = 'example_caffe_mnist_model.prototxt'
model_weights = 'mnist.caffemodel'
net = caffe.Net(model_def, model_weights, caffe.TEST)
net.layer_dict
conv_layer = net.layer_dict['conv2d_1']
conv_layer.type, conv_layer.blobs[0].data.shape
for name, blob in net.blobs.items():
print('{}: \t {}'.format(name, blob.data.shape))
net.params
for name, param in net.params.items():
print('{}:\t {} \t{}'.format(name, param[0].data.shape, param[1].data.shape))
for layer in net.layers:
try:
print (layer.type + '\t' + str(layer.blobs[0].data.shape), str(layer.blobs[1].data.shape))
except:
continue
weights = net.params['conv2d_1'][0].data
weights.shape
for i in range(32):
plt.imshow(np.moveaxis(weights[i], 0, -1)[..., 0])
plt.show()
list(net.layer_dict['dropout_1'].blobs)
def remove_inplace(model_def):
protonet = caffe_pb2.NetParameter()
with open(model_def, 'r') as fp:
google.protobuf.text_format.Parse(str(fp.read()), protonet)
replaced_tops = {}
for layer in protonet.layer:
# Check whether bottoms were renamed.
for i in range(len(layer.bottom)):
if layer.bottom[i] in replaced_tops.keys():
layer.bottom[i] = replaced_tops[layer.bottom[i]]
if layer.bottom == layer.top:
for i in range(len(layer.top)):
# Retain the mapping from the old to the new name.
new_top = layer.top[i] + '_' + layer.name
replaced_tops[layer.top[i]] = new_top
# Redefine layer.top
layer.top[i] = new_top
return protonet
model_def = 'example_caffe_mnist_model_deploy.prototxt'
protonet_no_inplace = remove_inplace(model_def)
protonet_no_inplace
model_def = 'example_caffe_network_no_inplace_deploy.prototxt'
model_weights = 'mnist.caffemodel'
net_no_inplace = caffe.Net(model_def, model_weights, caffe.TEST)
net_no_inplace.layer_dict
net_no_inplace.blobs
# Loading and preprocessing data.
data = mnist.load_data()[1][0]
# Normalize data.
data = data / data.max()
plt.imshow(data[0, :, :])
seven = data[0, :, :]
print(seven.shape)
seven = seven[np.newaxis, ...]
print(seven.shape)
net_no_inplace.blobs['data'].data[...] = seven
output = net_no_inplace.forward()
output['prob'][0].argmax()
activations = net_no_inplace.blobs['relu_1'].data
for i in range(32):
plt.imshow(activations[0, i, :, :])
plt.title('Feature map %d' % i)
plt.show()
net_input = net_no_inplace.blobs['conv2d_1'].data
for i in range(32):
plt.imshow(net_input[0, i, :, :])
plt.title('Feature map %d' % i)
plt.show()
model_def = 'example_caffe_mnist_model.prototxt'
f = open(model_def, 'r')
protonet = caffe_pb2.NetParameter()
google.protobuf.text_format.Parse(str(f.read()), protonet)
f.close()
protonet
type(protonet)
for i in range(0, len(protonet.layer)):
if protonet.layer[i].type == 'Convolution':
print('layer %s has kernel_size %d'
% (protonet.layer[i].name,
protonet.layer[i].convolution_param.kernel_size[0]))
lconv_proto = protonet.layer[i]
len(protonet.layer), len(net.layers)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Nicely formatted results
Step2: Creating cells
|
<ASSISTANT_TASK:>
Python Code:
# Hit shift + enter or use the run button to run this cell and see the results
print('hello world')
# The last line of every code cell will be displayed by default,
# even if you don't print it. Run this cell to see how this works.
2 + 2 # The result of this line will not be displayed
3 + 3 # The result of this line will be displayed, because it is the last line of the cell
# If you run this cell, you should see the values displayed as a table.
# Pandas is a software library for data manipulation and analysis. You'll learn to use it later in this course.
import pandas as pd
df = pd.DataFrame({'a': [2, 4, 6, 8], 'b': [1, 3, 5, 7]})
df
# If you run this cell, you should see a scatter plot of the function y = x^2
%pylab inline
import matplotlib.pyplot as plt
xs = range(-30, 31)
ys = [x ** 2 for x in xs]
plt.scatter(xs, ys)
class_name = "Nishanth Koganti"
message = class_name + " is awesome!"
message
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
lat=np.array([[10, 20, 30],
[20, 11, 33],
[21, 20, 10]])
lon=np.array([[100, 102, 103],
[105, 101, 102],
[100, 102, 103]])
val=np.array([[17, 2, 11],
[86, 84, 1],
[9, 5, 10]])
df = pd.DataFrame({'lat': lat.ravel(), 'lon': lon.ravel(), 'val': val.ravel()})
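The flattening relies on numpy's row-major `ravel` order, so the corresponding lat/lon/val entries stay aligned row by row; a minimal check:

```python
import numpy as np

a = np.array([[10, 20, 30],
              [20, 11, 33]])
# ravel flattens row by row (C order), keeping parallel arrays aligned
print(a.ravel().tolist())  # → [10, 20, 30, 20, 11, 33]
```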
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's move directly to machine learning.
Step2: Write a function that replaces NaN values with the median of each column of the table.
Step3: Read the data
Step4: Look at the types of the data we read in
Step5: Look at the class distribution of the target variable
Step6: Select the names of all features except the one being predicted
Step7: Apply the function that replaces all NaN values with the median of the corresponding column.
Step8: Separate the target feature from all the others – we obtain the training set.
Step9: Bootstrap
Step10: Decision tree, hyperparameter tuning
Step11: We use the DecisionTreeClassifier module to build a decision tree. Because of the class imbalance in the target feature, we add the class-balancing parameter. We also use the parameter random_state=17 for reproducibility of the results.
Step12: We will search over the following hyperparameter values
Step13: Fix the cross-validation scheme
Step14: <font color='red'>Task 3.</font>
Step15: A simple implementation of a random forest
Step16: <font color='red'>Task 5.</font>
Step17: sklearn random forest, hyperparameter tuning
Step18: Logistic regression, hyperparameter tuning
Step19: Logistic regression and random forest on sparse features
Step20: <font color='red'>Task 8.</font> Do a full grid search over the parameter C from the set [0.1, 1, 10, 100]. What is the best ROC AUC value obtained on cross-validation? Choose the closest value.
Step21: <font color='red'>Task 9.</font> Now let's try comparing with a random forest. We do the same kind of grid search and take the maximum ROC AUC. Choose the closest value.
|
<ASSISTANT_TASK:>
Python Code:
import math
def nCr(n,r):
f = math.factorial
return f(n) / f(r) / f(n - r)
p, N, m, s = 0.8, 7, 4, 0
for i in range(m, N+1):
s += nCr(N, i) * p**i * (1 - p) ** (N - i)
print(s)
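The same tail probability can be cross-checked via the complement of the CDF (a standalone sketch using math.comb, available in Python 3.8+):

```python
import math

def binom_tail(p, n, m):
    # P(X >= m) for X ~ Binomial(n, p), computed as 1 - P(X <= m - 1)
    cdf = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m))
    return 1.0 - cdf

print(round(binom_tail(0.8, 7, 4), 6))  # → 0.966656
```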
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
def impute_nan_with_median(table):
for col in table.columns:
table[col]= table[col].fillna(table[col].median())
return table
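A quick check of the imputation on a toy table (the column names here are illustrative):

```python
import numpy as np
import pandas as pd

def impute_demo(table):
    # replace NaNs in each column with that column's median (NaNs are skipped)
    for col in table.columns:
        table[col] = table[col].fillna(table[col].median())
    return table

toy = pd.DataFrame({'a': [1.0, np.nan, 3.0], 'b': [np.nan, 2.0, 4.0]})
filled = impute_demo(toy)
print(filled['a'].tolist(), filled['b'].tolist())  # → [1.0, 2.0, 3.0] [3.0, 2.0, 4.0]
```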
data = pd.read_csv('../../data/credit_scoring_sample.csv', sep=";")
data.head()
data.dtypes
ax = data['SeriousDlqin2yrs'].hist(orientation='horizontal', color='red')
ax.set_xlabel("number_of_observations")
ax.set_ylabel("unique_value")
ax.set_title("Target distribution")
print('Distribution of target:')
data['SeriousDlqin2yrs'].value_counts() / data.shape[0]
independent_columns_names = data.columns.values
independent_columns_names = [x for x in data if x != 'SeriousDlqin2yrs']
independent_columns_names
table = impute_nan_with_median(data)
X = table[independent_columns_names]
y = table['SeriousDlqin2yrs']
def get_bootstrap_samples(data, n_samples, seed=0):
# function to generate subsamples with the bootstrap
np.random.seed(seed)
indices = np.random.randint(0, len(data), (n_samples, len(data)))
samples = data[indices]
return samples
def stat_intervals(stat, alpha):
# function for interval estimation
boundaries = np.percentile(stat, [100 * alpha / 2., 100 * (1 - alpha / 2.)])
return boundaries
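These two helpers can be sanity-checked on synthetic data (a standalone sketch; the sample is uniform 0..99, whose mean is 49.5):

```python
import numpy as np

np.random.seed(17)
data = np.arange(100, dtype=float)
# 1000 bootstrap resamples, one mean per resample
indices = np.random.randint(0, len(data), (1000, len(data)))
means = data[indices].mean(axis=1)
# 90% percentile interval for the mean
lo, hi = np.percentile(means, [5, 95])
print(lo < hi)
```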
# save the delinquency data into separate numpy arrays
churn = data[data['SeriousDlqin2yrs'] == 1]['MonthlyIncome'].values
not_churn = data[data['SeriousDlqin2yrs'] == 0]['MonthlyIncome'].values
# generate bootstrap samples and compute the mean of each right away
churn_mean_scores = [np.mean(sample)
for sample in get_bootstrap_samples(churn, 1000, seed=17)]
not_churn_mean_scores = [np.mean(sample)
for sample in get_bootstrap_samples(not_churn, 1000, seed=17)]
# print the interval estimate of the mean
print("Mean interval", stat_intervals(churn_mean_scores, 0.1))
print("Mean interval", stat_intervals(not_churn_mean_scores, 0.1))
print("Difference is", stat_intervals(not_churn_mean_scores, 0.1)[0] -
stat_intervals(churn_mean_scores, 0.1)[1])
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold
dt = DecisionTreeClassifier(random_state=17, class_weight='balanced')
max_depth_values = [5, 6, 7, 8, 9]
max_features_values = [4, 5, 6, 7]
tree_params = {'max_depth': max_depth_values,
'max_features': max_features_values}
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=17)
dt_grid_search = GridSearchCV(dt, tree_params, n_jobs=-1, scoring ='roc_auc', cv=skf)
dt_grid_search.fit(X, y)
round(float(dt_grid_search.best_score_), 3)
dt_grid_search.best_params_
dt_grid_search.cv_results_["std_test_score"][np.argmax(dt_grid_search.cv_results_["mean_test_score"])]
from sklearn.base import BaseEstimator
from sklearn.model_selection import cross_val_score
class RandomForestClassifierCustom(BaseEstimator):
def __init__(self, n_estimators=10, max_depth=10, max_features=10,
random_state=17):
self.n_estimators = n_estimators
self.max_depth = max_depth
self.max_features = max_features
self.random_state = random_state
self.trees = []
self.feat_ids_by_tree = []
def fit(self, X, y):
for i in range(self.n_estimators):
np.random.seed(i + self.random_state)
feat_to_use_ids = np.random.choice(range(X.shape[1]), self.max_features,
replace=False)
examples_to_use = list(set(np.random.choice(range(X.shape[0]), X.shape[0],
replace=True)))
self.feat_ids_by_tree.append(feat_to_use_ids)
dt = DecisionTreeClassifier(class_weight='balanced',
max_depth=self.max_depth,
max_features=self.max_features,
random_state = self.random_state)
dt.fit(X[examples_to_use, :][:, feat_to_use_ids], y[examples_to_use])
self.trees.append(dt)
return self
def predict_proba(self, X):
predictions = []
for i in range(self.n_estimators):
feat_to_use_ids = self.feat_ids_by_tree[i]
predictions.append(self.trees[i].predict_proba(X[:,feat_to_use_ids]))
return np.mean(predictions, axis=0)
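The predict_proba above is soft voting: per-tree class probabilities are averaged elementwise. A minimal illustration with two hypothetical trees:

```python
import numpy as np

p1 = np.array([[0.9, 0.1], [0.2, 0.8]])  # class probabilities from tree 1
p2 = np.array([[0.7, 0.3], [0.4, 0.6]])  # class probabilities from tree 2
avg = np.mean([p1, p2], axis=0)          # elementwise average over trees
print(np.allclose(avg, [[0.8, 0.2], [0.3, 0.7]]))  # → True
```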
rf = RandomForestClassifierCustom(max_depth=7, max_features=6).fit(X.values, y.values)
cv_aucs = cross_val_score(RandomForestClassifierCustom(max_depth=7, max_features=6),
X.values, y.values, scoring="roc_auc", cv=skf)
print("Mean ROC AUC for the custom random forest:", np.mean(cv_aucs))
from sklearn.ensemble import RandomForestClassifier
cv_aucs = cross_val_score(RandomForestClassifier(n_estimators=10, max_depth=7,
max_features=6,
random_state=17, n_jobs=-1,
class_weight='balanced'),
X.values, y.values, scoring="roc_auc", cv=skf)
print("Mean ROC AUC for the sklearn random forest:", np.mean(cv_aucs))
max_depth_values = range(5, 15)
max_features_values = [4, 5, 6, 7]
forest_params = {'max_depth': max_depth_values,
'max_features': max_features_values}
rf = RandomForestClassifier(random_state=17, n_jobs=-1,
class_weight='balanced')
rf_grid_search = GridSearchCV(rf, forest_params, n_jobs=-1,
scoring='roc_auc', cv=skf)
rf_grid_search.fit(X.values, y.values)
rf_grid_search.best_score_
rf_grid_search.best_params_
rf_grid_search.cv_results_["std_test_score"][np.argmax(rf_grid_search.cv_results_["mean_test_score"])]
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
scaler = StandardScaler()
logit = LogisticRegression(random_state=17, class_weight='balanced')
logit_pipe = Pipeline([('scaler', scaler), ('logit', logit)])
logit_pipe_params = {'logit__C': np.logspace(-8, 8, 17)}
logit_pipe_grid_search = GridSearchCV(logit_pipe, logit_pipe_params, n_jobs=-1,
scoring ='roc_auc', cv=skf)
logit_pipe_grid_search.fit(X.values, y.values)
logit_pipe_grid_search.best_score_
logit_pipe_grid_search.best_params_
logit_pipe_grid_search.cv_results_["std_test_score"][np.argmax(logit_pipe_grid_search.cv_results_["mean_test_score"])]
# Load the data
df = pd.read_csv("../../data/movie_reviews_train.csv", nrows=50000)
# Разделим данные на текст и целевой признак
X_text = df["text"]
y_text = df["label"]
# Соотношения классов
df.label.value_counts()
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import Pipeline
# we will split into 3 folds
skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=17)
# in the Pipeline we vectorize the text and train the logistic regression in one go
classifier = Pipeline([
('vectorizer', CountVectorizer(max_features = 100000, ngram_range = (1, 3))),
('clf', LogisticRegression(random_state=17))])
%%time
parameters = {'clf__C': (0.1, 1, 10, 100)}
grid_search = GridSearchCV(classifier, parameters, n_jobs=-1, scoring ='roc_auc', cv=skf)
grid_search = grid_search.fit(X_text, y_text)
grid_search.best_params_
grid_search.best_score_
classifier = Pipeline([
('vectorizer', CountVectorizer(max_features = 100000, ngram_range = (1, 3))),
('clf', RandomForestClassifier(random_state=17, n_jobs=-1))])
min_samples_leaf = [1, 2, 3]
max_features = [0.3, 0.5, 0.7]
max_depth = [None]
%%time
parameters = {'clf__max_features': max_features,
'clf__min_samples_leaf': min_samples_leaf,
'clf__max_depth': max_depth}
grid_search = GridSearchCV(classifier, parameters, n_jobs=-1, scoring ='roc_auc', cv=skf)
grid_search = grid_search.fit(X_text, y_text)
grid_search.best_params_
grid_search.best_score_
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: All three analytic forms capture parts of the distribution well and it's not entirely clear which would be the best fit (taking into account completeness etc.). We will estimate the uncertainty in the profile as the standard deviation between the observed number of clouds in each bin and the predictions from all three density profiles. We further scale the counts such that the integrated surface density is $1\,M_\odot\,\mathrm{pc}^{-2}$ as determined by McKee et al.
Step2: The uncertainty in the surface density is 10%. There is an addition, larger uncertainty of about 30% due to the uncertainty CO-to-$H_2$ conversion, which we will apply separately (because it is a systematic).
Step3: Now we will take the Dickey & Lockman model as the fiducial model and quantify the uncertainty as the spread between the three models. We further add an overall 10% uncertainty in the counts and an uncertainty that decreases quadratically with height up to 600 pc, because observations are most confused and difficult close to the mid-plane. There's quite a bit of data (e.g., Schmidt 1957), so we'll bin the data into our normal bins
Step4: This gives a not unreasonable looking uncertainty, although the uncertainty on the total surface density remains small. We will allow for an additional, systematic offset later.
Step5: The Berkhuijsen & Muller model is quite different from the others, especially at low $z$. The data on pulsar dispersion measures appears to be quite sparse near the plane, so perhaps the range spanned by these three models does not seem too small and we will conservatively consider the spread as an estimate of the error. We bin the profile in 100 pc bins (based on the available data in Schnitzeler) and scale the uncertainties up somewhat further to create a total column density with a similar uncertainty as in McKee et al.
Step6: This profile correctly conveys that the overall profile is quite flat and leads to a reasonable uncertainty on the total column.
|
<ASSISTANT_TASK:>
Python Code:
cloud_name= 'apjaa4dfdt1_mrt.txt'
if not os.path.exists(cloud_name):
!wget http://iopscience.iop.org/0004-637X/834/1/57/suppdata/apjaa4dfdt1_mrt.txt
cloud_data= ascii.read(cloud_name,format='cds')
# Compute distsance and height z based on whether near of far kinematic distance is more likely
d_cloud= cloud_data['Dn']
d_cloud[cloud_data['INF'] == 1]= cloud_data['Df'][cloud_data['INF'] == 1]
z_cloud= cloud_data['zn']
z_cloud[cloud_data['INF'] == 1]= cloud_data['zf'][cloud_data['INF'] == 1]
figsize(7,6)
bovy_plot.bovy_print(axes_labelsize=17.,text_fontsize=12.,xtick_labelsize=15.,ytick_labelsize=15.)
zbins_h2= numpy.arange(-0.2125,0.225,0.025)
zbinsp_h2= 0.5*(numpy.roll(zbins_h2,-1)+zbins_h2)[:-1]
d,e,_= hist(z_cloud[(cloud_data['Rgal'] > 8.25)*(cloud_data['Rgal'] < 8.75)],
bins=zbins_h2,range=[-.5,.5],normed=True,histtype='step',lw=2.,color='k')
gca().set_yscale('log')
xs= numpy.linspace(-0.5,0.5,1001)
plot(xs,1./numpy.sqrt(2.*numpy.pi)/.074*numpy.exp(-0.5*xs**2./.074**2.),
label=r'$\rho(z) \propto \exp(-[z/105\,\mathrm{pc}]^2)$')
plot(xs,1./0.1*numpy.exp(-numpy.fabs(xs)/.05),
label=r'$\rho(z) \propto \exp(-|z|/50\,\mathrm{pc})$')
plot(xs,1./0.2*1./numpy.cosh(-xs/.1)**2.,
label=r'$\rho(z) \propto \sech^2(-|z|/100\,\mathrm{pc})$')
legend(loc='upper left',frameon=False,fontsize=15.)
ylim(0.1,120.)
xlabel(r'$z\,(\mathrm{kpc})$')
counts_h2= d
ecounts_h2= numpy.std([d,
1./numpy.sqrt(2.*numpy.pi)/.074*numpy.exp(-0.5*zbinsp_h2**2./.074**2.),
1./0.1*numpy.exp(-numpy.fabs(zbinsp_h2)/.05),
1./0.2*1./numpy.cosh(-zbinsp_h2/.1)**2.],axis=0)
ecounts_h2/= numpy.sum(counts_h2)*(zbins_h2[1]-zbins_h2[0])*1000./1.0
counts_h2/= numpy.sum(counts_h2)*(zbins_h2[1]-zbins_h2[0])*1000./1.0
counts_h2[numpy.fabs(zbinsp_h2) > 0.21]= numpy.nan
ecounts_h2[numpy.fabs(zbinsp_h2) > 0.21]= numpy.nan
figsize(6,4.5)
bovy_plot.bovy_plot(1000.*zbinsp_h2,
counts_h2,'ko',semilogy=True,
lw=2.,zorder=2,
xlabel=r'$Z\,(\mathrm{pc})$',
ylabel=r'$\rho_{H_2}(z)\,(M_\odot\,\mathrm{pc}^{-3})$',
xrange=[-420,420],yrange=[0.1/1000.,50./1000.])
errorbar(1000.*zbinsp_h2,counts_h2,yerr=ecounts_h2,color='k',marker='o',
ls='None')
bovy_plot.bovy_text(r'$\Sigma_{H_2} = %.1f \pm %.1f\,M_\odot\,\mathrm{pc}^{-2}$' \
% (numpy.nansum(counts_h2)*(zbinsp_h2[1]-zbinsp_h2[0])*1000.,
numpy.sqrt(numpy.nansum(ecounts_h2**2.))*(zbinsp_h2[1]-zbinsp_h2[0])*1000.),
top_left=True,size=16.)
def dickeylockman(z):
return (0.395*numpy.exp(-z**2.*4.*numpy.log(2.)/0.212**2.)\
+0.107*numpy.exp(-z**2.*4.*numpy.log(2.)/0.530**2.)\
+0.064*numpy.exp(-numpy.fabs(z)/0.403))/0.566
def kalberla(z):
return (0.5*numpy.exp(-numpy.fabs(z)/0.15)+0.19*numpy.exp(-numpy.fabs(z)/0.5))/0.69
def schmidt(z):
return (numpy.exp(-z**2.*4.*numpy.log(2.)/0.220**2.)\
+0.2*numpy.exp(-z**2.*4.*numpy.log(2.)/0.520**2.))/1.2
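Each of the three profiles is normalized to unity in the mid-plane and is symmetric in z; a quick standalone check of the Dickey & Lockman coefficients:

```python
import numpy as np

def dl(z):
    # same coefficients as dickeylockman above
    return (0.395 * np.exp(-z**2 * 4 * np.log(2) / 0.212**2)
            + 0.107 * np.exp(-z**2 * 4 * np.log(2) / 0.530**2)
            + 0.064 * np.exp(-np.abs(z) / 0.403)) / 0.566

print(round(float(dl(0.0)), 12), bool(np.isclose(dl(0.3), dl(-0.3))))
```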
figsize(6,4)
xs= numpy.linspace(-1.,1.,1001)
bovy_plot.bovy_plot(xs,dickeylockman(xs),label=r'$\mathrm{Dickey\ \&\ Lockman\ (1990)}$',
xlabel=r'$z\,(\mathrm{kpc})$',
xrange=[-1.,1.],
yrange=[0.,1.3])
plot(xs,kalberla(xs)*0.69/0.566/1.2,label=r'$\mathrm{Kalberla}$')
plot(xs,schmidt(xs),label=r'$\mathrm{Schmidt}$')
legend(loc='upper left',fontsize=16.)
zbins_hi= numpy.arange(-0.8125,0.825,0.025)
zbinsp_hi= 0.5*(numpy.roll(zbins_hi,-1)+zbins_hi)[:-1]
counts_hi= dickeylockman(zbinsp_hi)
ecounts_hi= numpy.std([dickeylockman(zbinsp_hi),schmidt(zbinsp_hi),kalberla(zbinsp_hi)],axis=0)\
+0.1*counts_hi+(0.6-numpy.fabs(zbinsp_hi))**2.*0.2
counts_hi+= numpy.random.normal(size=len(counts_hi))*ecounts_hi
# Normalize
norm_xs= numpy.linspace(0.,1.1,1001)
sfmass= numpy.sum(dickeylockman(norm_xs))*2.*(norm_xs[1]-norm_xs[0])*1000.
counts_hi/= sfmass/10.7
ecounts_hi/= sfmass/10.7
figsize(6,4.5)
bovy_plot.bovy_plot(1000.*zbinsp_hi,
counts_hi,'ko',semilogy=True,
lw=2.,zorder=2,
xlabel=r'$Z\,(\mathrm{pc})$',
ylabel=r'$\rho_{\mathrm{HI}}(z)\,(M_\odot\,\mathrm{pc}^{-3})$',
xrange=[-620,620],yrange=[1./1000.,80./1000.])
errorbar(1000.*zbinsp_hi,counts_hi,yerr=ecounts_hi,color='k',marker='o',
ls='None')
bovy_plot.bovy_text(r'$\Sigma_{\mathrm{HI}\ \lesssim\ 600\,\mathrm{pc}} = %.1f \pm %.1f\,M_\odot\,\mathrm{pc}^{-2}$' \
% (numpy.nansum(counts_hi)*(zbinsp_hi[1]-zbinsp_hi[0])*1000.,
numpy.sqrt(numpy.nansum(ecounts_hi**2.))*(zbinsp_hi[1]-zbinsp_hi[0])*1000.),
top_left=True,size=16.)
def berkhuijsen(z):
return 21.7/0.93*numpy.exp(-numpy.fabs(z)/0.93)/25.6
def gaensler(z):
return 25.6/1.83*numpy.exp(-numpy.fabs(z)/1.83)/25.6
def schnitzeler(z):
return 24.4/1.59*numpy.exp(-numpy.fabs(z)/1.59)/25.6
figsize(6,4)
xs= numpy.linspace(-3.,3.,1001)
bovy_plot.bovy_plot(xs,berkhuijsen(xs),
label=r'$\mathrm{Berkhuijsen\ \&\ Mueller}$',
xlabel=r'$z\,(\mathrm{kpc})$',
xrange=[-3.,3.],
yrange=[0.,1.1])
plot(xs,gaensler(xs)*0.69/0.566/1.2,label=r'$\mathrm{Gaensler}$')
plot(xs,schnitzeler(xs),label=r'$\mathrm{Schnitzeler}$')
legend(loc='upper left',fontsize=16.)
zbins_hii= numpy.arange(-2.05,2.15,0.1)
zbinsp_hii= 0.5*(numpy.roll(zbins_hii,-1)+zbins_hii)[:-1]
counts_hii= schnitzeler(zbinsp_hii)
ecounts_hii= numpy.std([schnitzeler(zbinsp_hii),gaensler(zbinsp_hii),berkhuijsen(zbinsp_hii)],axis=0)*1.5
ecounts_hii[numpy.fabs(zbinsp_hii) > 0.5]*= 2.5
counts_hii+= numpy.random.normal(size=len(counts_hii))*ecounts_hii
# Normalize
norm_xs= numpy.linspace(0.,5.1,1001)
sfmass= numpy.sum(schnitzeler(norm_xs))*2.*(norm_xs[1]-norm_xs[0])*1000.
counts_hii/= sfmass/1.8
ecounts_hii/= sfmass/1.8
figsize(6,4.5)
bovy_plot.bovy_plot(1000.*zbinsp_hii,
counts_hii,'ko',semilogy=True,
lw=2.,zorder=2,
xlabel=r'$Z\,(\mathrm{pc})$',
ylabel=r'$\rho_{\mathrm{HII}}(z)\,(M_\odot\,\mathrm{pc}^{-3})$',
xrange=[-2120,2120],yrange=[0.05/1000.,2./1000.])
errorbar(1000.*zbinsp_hii,counts_hii,yerr=ecounts_hii,color='k',marker='o',
ls='None')
bovy_plot.bovy_text(r'$\Sigma_{\mathrm{HII}\ \lesssim\ 2\,\mathrm{kpc}} = %.1f \pm %.1f\,M_\odot\,\mathrm{pc}^{-2}$' \
% (numpy.nansum(counts_hii)*(zbinsp_hii[1]-zbinsp_hii[0])*1000.,
numpy.sqrt(numpy.nansum(ecounts_hii**2.))*(zbinsp_hii[1]-zbinsp_hii[0])*1000.),
top_left=True,size=16.)
figsize(6,8.5)
marker='o'
ms= 6.
for ii,(xrange,yrange,xlabel) in enumerate(zip([[-620.,620.],[-2120,2120]],
[[10.**-4.,10.**-1.],[10.**-4.,10.**-1.]],
[None,r'$Z\,(\mathrm{pc})$'])):
subplot(2,1,ii+1)
# H2
bovy_plot.bovy_plot(1000.*zbinsp_h2,
counts_h2,marker,semilogy=True,gcf=True,ms=ms,
lw=2.,zorder=2,color=sns.color_palette()[0],
xlabel=xlabel,
ylabel=r'$\rho(z)\,(M_\odot\,\mathrm{pc}^{-3})$',
xrange=xrange,yrange=yrange)
errorbar(1000.*zbinsp_h2,counts_h2,yerr=ecounts_h2,marker=marker,color=sns.color_palette()[0],
ls='None',ms=ms,label=r'$H_2$')
#HI
bovy_plot.bovy_plot(1000.*zbinsp_hi,
counts_hi,marker,semilogy=True,ms=ms,
color=sns.color_palette()[1],
lw=2.,zorder=2,overplot=True)
errorbar(1000.*zbinsp_hi,counts_hi,yerr=ecounts_hi,marker=marker,color=sns.color_palette()[1],
ls='None',ms=ms,label=r'$\mathrm{HI}$')
#HII
bovy_plot.bovy_plot(1000.*zbinsp_hii,
counts_hii,marker,semilogy=True,ms=ms,
color=sns.color_palette()[2],
lw=2.,zorder=2,overplot=True)
errorbar(1000.*zbinsp_hii,counts_hii,yerr=ecounts_hii,marker=marker,color=sns.color_palette()[2],
ls='None',ms=ms,label=r'$\mathrm{HII}$')
if ii == 0:
bovy_plot.bovy_text(r'$\rho_{\mathrm{ISM}}(z=0) = %.3f\pm%.3f\,M_\odot\,\mathrm{pc}^{-3}$' \
% (counts_h2[len(zbinsp_h2)//2]
+counts_hi[len(zbinsp_hi)//2]
+counts_hii[len(zbinsp_hii)//2]\
,numpy.sqrt(ecounts_h2[len(zbinsp_h2)//2]**2.\
+ecounts_hi[len(zbinsp_hi)//2]**2.+
ecounts_hii[len(zbinsp_hii)//2]**2.)),
top_left=True,size=16.)
elif ii == 1:
bovy_plot.bovy_text(r'$\Sigma_{\mathrm{ISM}}(|z| \leq 1.1\,\mathrm{kpc}) = %.1f\pm%.1f\,M_\odot\,\mathrm{pc}^{-2}$' \
% (1000.*(numpy.nansum(counts_h2)*(zbins_h2[1]-zbins_h2[0])
+numpy.nansum(counts_hi)*(zbins_hi[1]-zbins_hi[0])
+numpy.nansum(counts_hii)*(zbins_hii[1]-zbins_hii[0])),
1000.*numpy.sqrt(numpy.nansum(ecounts_h2**2.)*(zbins_h2[1]-zbins_h2[0])**2.
+numpy.nansum(ecounts_hi**2.)*(zbins_hi[1]-zbins_hi[0])**2.
+numpy.nansum(ecounts_hii**2.)*(zbins_hii[1]-zbins_hii[0])**2.)),
top_left=True,size=16.)
legend(loc='center left',fontsize=16.)
tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncar', 'sandbox-2', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
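The `set_id` / `set_value` pattern repeated throughout these cells can be sketched with a minimal stand-in for pyesdoc's `NotebookOutput`. `MockDoc` below is hypothetical and for illustration only; the real object also validates values against the CMIP6 controlled vocabulary.

```python
# Minimal stand-in for pyesdoc's NotebookOutput, illustrating the
# set_id / set_value pattern used in every cell of this notebook.
# MockDoc is hypothetical; the real object performs CV validation.
class MockDoc:
    def __init__(self):
        self.values = {}          # property id -> list of recorded values
        self._current_id = None   # id set by the most recent set_id call

    def set_id(self, prop_id):
        self._current_id = prop_id

    def set_value(self, value):
        self.values.setdefault(self._current_id, []).append(value)

doc = MockDoc()
doc.set_id('cmip6.ocean.key_properties.model_family')
doc.set_value("OGCM")             # one of the listed valid choices
print(doc.values)
# → {'cmip6.ocean.key_properties.model_family': ['OGCM']}
```

Each `set_id` call selects the property that subsequent `set_value` calls write to, which is why the `DO NOT EDIT` cells must be left untouched.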
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
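For cells headed "PROPERTY VALUE(S)", such as the prognostic-variables cell above, `set_value` is called once per choice so several values accumulate under the same property id. The sketch below uses the same hypothetical `MockDoc` stand-in for pyesdoc's `NotebookOutput`:

```python
# Multi-valued properties: repeated set_value calls accumulate under
# the property id selected by set_id. MockDoc is a hypothetical
# stand-in for pyesdoc's NotebookOutput, for illustration only.
class MockDoc:
    def __init__(self):
        self.values = {}
        self._current_id = None

    def set_id(self, prop_id):
        self._current_id = prop_id

    def set_value(self, value):
        self.values.setdefault(self._current_id, []).append(value)

doc = MockDoc()
doc.set_id('cmip6.ocean.key_properties.prognostic_variables')
for choice in ("Potential temperature", "Salinity", "U-velocity", "V-velocity"):
    doc.set_value(choice)

print(doc.values['cmip6.ocean.key_properties.prognostic_variables'])
# → ['Potential temperature', 'Salinity', 'U-velocity', 'V-velocity']
```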
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Articles
Step2: Kinematic data for the stars
Step3: Photometry data
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import HTML
from IPython.display import Image
import os
%pylab
%matplotlib inline
%run ../../../utils/load_notebook.py
from photometry import *
from instabilities import *
name = 'N4258'
gtype = 'SA(s)ab'
incl = 70. #(adopted by Epinat+2008)
scale = 0.092 #kpc/arcsec according to ApJ 142 145(31pp) 2011
data_path = '../../data/ngc4258'
sin_i, cos_i = np.sin(incl*np.pi/180.), np.cos(incl*np.pi/180.)
os.chdir(data_path)
# Данные из NED
HTML('<iframe src=http://ned.ipac.caltech.edu/cgi-bin/objsearch?objname=ngc+4258&extend=no&hconst=\
73&omegam=0.27&omegav=0.73&corr_z=1&out_csys=Equatorial&out_equinox=J2000.0&obj_sort=RA+or+Longitude&of=pre_text&zv_breaker=\
30000.0&list_limit=5&img_stamp=YES width=1000 height=350></iframe>')
# Данные из HYPERLEDA
HTML('<iframe src=http://leda.univ-lyon1.fr/ledacat.cgi?o=ngc4258 width=1000 height=350></iframe>')
#SDSS
Image('ngc3898_SDSS.jpeg', width=300)
#JHK
Image('ngc3898_JHK.jpg', width=300)
Image('u7353.png')
# Image()  # filename argument missing in the source cell; left commented to avoid a ValueError
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.
Step2: Visualize Data
Step3: Preprocess Data
Step4: Setup TensorFlow
Step5: SOLUTION
Step6: Features and Labels
Step7: Training Pipeline
Step8: Model Evaluation
Step9: Train the Model
Step10: Evaluate the Model
|
<ASSISTANT_TASK:>
Python Code:
# Load pickled data
import pickle
import pandas as pd
# TODO: Fill this in based on where you saved the training and testing data
training_file = 'data/train.p'
validation_file= 'data/valid.p'
testing_file = 'data/test.p'
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
frame = pd.read_csv('signnames.csv')
def get_signname(label_id):
return frame["SignName"][label_id]
import numpy as np
from sklearn.model_selection import train_test_split
X_train, X_validation, y_train, y_validation = train_test_split(X_train, y_train, test_size=0.2, random_state=0)
import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
index = random.randint(0, len(X_train))
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
plt.imshow(image)
print(y_train[index])
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
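One practical gap in this pipeline: the `x` placeholder defined below is shaped `(None, 32, 32, 1)`, while the traffic-sign images loaded above are 32x32x3 RGB. A minimal grayscale-and-center preprocessing sketch (an assumption — this helper is not part of the original lab) could bridge that:

```python
import numpy as np

def to_gray_normalized(images):
    # Collapse RGB (N, 32, 32, 3) to grayscale (N, 32, 32, 1) using the
    # usual luminance weights, then center pixel values to roughly [-1, 1].
    weights = np.array([0.299, 0.587, 0.114])
    gray = np.sum(images * weights, axis=3, keepdims=True)
    return (gray - 128.0) / 128.0
```

e.g. `X_train = to_gray_normalized(X_train)` (and the same for the validation and test splits) before feeding the placeholder.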
import tensorflow as tf
EPOCHS = 10
BATCH_SIZE = 128
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# SOLUTION: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(6))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# SOLUTION: Activation.
conv1 = tf.nn.relu(conv1)
# SOLUTION: Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# SOLUTION: Activation.
conv2 = tf.nn.relu(conv2)
# SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
# SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# SOLUTION: Activation.
fc1 = tf.nn.relu(fc1)
# SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# SOLUTION: Activation.
fc2 = tf.nn.relu(fc2)
# SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 10.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 10), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(10))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 10)
rate = 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_validation, y_validation)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './lenet')
print("Model saved")
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: That is almost 300 times faster!! Let's compare this to NumPy
Step2: In certain scenarios, Numba can even outperform NumPy!
Step3: The methods of the arviz.Numba class can be used to enable or disable numba. The attribute numba_flag indicates whether numba is enabled within ArviZ or not.
|
<ASSISTANT_TASK:>
Python Code:
import arviz as az
import numpy as np
import timeit
from arviz.utils import conditional_jit, Numba
from arviz.stats.diagnostics import ks_summary
data = np.random.randn(1000000)
def variance(data, ddof=0): # Method to calculate variance without using numba
a_a, b_b = 0, 0
for i in data:
a_a = a_a + i
b_b = b_b + i * i
var = b_b / (len(data)) - ((a_a / (len(data))) ** 2)
var = var * (len(data) / (len(data) - ddof))
return var
%timeit variance(data, ddof=1)
@conditional_jit
def variance_jit(data, ddof=0): # Calculating variance with numba
a_a, b_b = 0, 0
for i in data:
a_a = a_a + i
b_b = b_b + i * i
var = b_b / (len(data)) - ((a_a / (len(data))) ** 2)
var = var * (len(data) / (len(data) - ddof))
return var
%timeit variance_jit(data, ddof=1)
%timeit np.var(data, ddof=1)
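The timing comparison above implicitly assumes the hand-rolled sums formula agrees with `np.var`; a standalone sanity check of the same E[x^2] - E[x]^2 formula (rewritten vectorized here, since the notebook's `variance` isn't importable from this sketch):

```python
import numpy as np

def variance_sums(data, ddof=0):
    # Same running-sums formula as `variance` above, vectorized for the check.
    n = len(data)
    a_a, b_b = data.sum(), (data * data).sum()
    var = b_b / n - (a_a / n) ** 2
    return var * (n / (n - ddof))

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
```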
summary_data = np.random.randn(1000, 100, 10)
school = az.load_arviz_data("centered_eight").posterior["mu"].values
Numba.disable_numba()
Numba.numba_flag
%timeit ks_summary(summary_data)
%timeit ks_summary(school)
Numba.enable_numba()
Numba.numba_flag
%timeit ks_summary(summary_data)
%timeit ks_summary(school)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Your task starts here
Step5: The loss you should try to minimize is the Hinge Loss
Step6: Training
Step7: Implement gradient descent with momentum and test its performance for different learning rate and momentum values.
Step8: Same task but for Nesterov's accelerated gradient
Step9: Finally, try Adam algorithm. You can start with beta = 0.9 and mu = 0.999
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import random
from IPython import display
from sklearn import datasets, preprocessing
(X, y) = datasets.make_circles(n_samples=1024, shuffle=True, noise=0.2, factor=0.4)
ind = np.logical_or(y==1, X[:,1] > X[:,0] - 0.5)
X = X[ind,:]
X = preprocessing.scale(X)
y = y[ind]
y = 2*y - 1
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired)
plt.show()
h = 0.01
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
def visualize(X, y, w, loss, n_iter):
plt.clf()
Z = classify(np.c_[xx.ravel(), yy.ravel()], w)
Z = Z.reshape(xx.shape)
plt.subplot(1,2,1)
plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.subplot(1,2,2)
plt.plot(loss)
plt.grid()
ymin, ymax = plt.ylim()
plt.ylim(0, ymax)
display.clear_output(wait=True)
display.display(plt.gcf())
def expand(X):
X_ = np.zeros((X.shape[0], 6))
X_[:,0:2] = X
X_[:,2:4] = X**2
X_[:,4] = X[:,0] * X[:,1]
X_[:,5] = 1
return X_
def classify(X, w):
    """Given feature matrix X [n_samples,2] and weight vector w [6],
    return an array of +1 or -1 predictions."""
    #<your code here>
def compute_loss(X, y, w):
    """Given feature matrix X [n_samples,2], target vector [n_samples] of +1/-1,
    and weight vector w [6], compute scalar loss function using formula above."""
    #<your code here>
def compute_grad(X, y, w):
    """Given feature matrix X [n_samples,2], target vector [n_samples] of +1/-1,
    and weight vector w [6], compute vector [6] of derivatives of L over each weight."""
    #<your code here>
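One possible implementation of the three stubs, assuming the usual mean hinge loss L(w) = mean(max(0, 1 - y * (phi(x) . w))) on the expanded features (the `expand` copy below just mirrors the notebook's helper so the sketch runs standalone):

```python
import numpy as np

def expand(X):
    # Copy of the notebook's feature map: [x1, x2, x1^2, x2^2, x1*x2, 1]
    X_ = np.zeros((X.shape[0], 6))
    X_[:, 0:2] = X
    X_[:, 2:4] = X ** 2
    X_[:, 4] = X[:, 0] * X[:, 1]
    X_[:, 5] = 1
    return X_

def classify(X, w):
    # +1 / -1 predictions from the sign of the linear score (0 on exact ties)
    return np.sign(expand(X).dot(w))

def compute_loss(X, y, w):
    # Mean hinge loss: mean(max(0, 1 - y * score))
    margins = 1 - y * expand(X).dot(w)
    return np.mean(np.maximum(0, margins))

def compute_grad(X, y, w):
    # Subgradient of the mean hinge loss: -y_i * phi(x_i) where the margin is violated
    X_ = expand(X)
    active = (1 - y * X_.dot(w)) > 0
    return -(X_ * (y * active)[:, None]).mean(axis=0)
```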
w = np.array([1,0,0,0,0,0])
alpha = 0.0 # learning rate
n_iter = 50
batch_size = 4
loss = np.zeros(n_iter)
plt.figure(figsize=(12,5))
for i in range(n_iter):
ind = random.sample(range(X.shape[0]), batch_size)
loss[i] = compute_loss(X, y, w)
visualize(X[ind,:], y[ind], w, loss, n_iter)
w = w - alpha * compute_grad(X[ind,:], y[ind], w)
visualize(X, y, w, loss, n_iter)
plt.clf()
w = np.array([1,0,0,0,0,0])
alpha = 0.0 # learning rate
mu = 0.0 # momentum
n_iter = 50
batch_size = 4
loss = np.zeros(n_iter)
plt.figure(figsize=(12,5))
for i in range(n_iter):
ind = random.sample(range(X.shape[0]), batch_size)
loss[i] = compute_loss(X, y, w)
visualize(X[ind,:], y[ind], w, loss, n_iter)
#<update w and anything else here>
visualize(X, y, w, loss, n_iter)
plt.clf()
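A hedged sketch of the classical momentum update that could fill the `#<update w and anything else here>` placeholder above (the velocity `v` is an extra state variable, initialized to zeros before the loop):

```python
import numpy as np

def momentum_step(w, v, grad, alpha, mu):
    # Accumulate an exponentially decaying velocity, then move along it.
    v = mu * v - alpha * grad
    return w + v, v
```

Inside the loop this would read `w, v = momentum_step(w, v, compute_grad(X[ind,:], y[ind], w), alpha, mu)`.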
w = np.array([1,0,0,0,0,0])
alpha = 0.0 # learning rate
mu = 0.0 # momentum
n_iter = 50
batch_size = 4
loss = np.zeros(n_iter)
plt.figure(figsize=(12,5))
for i in range(n_iter):
ind = random.sample(range(X.shape[0]), batch_size)
loss[i] = compute_loss(X, y, w)
visualize(X[ind,:], y[ind], w, loss, n_iter)
#<update w and anything else here>
visualize(X, y, w, loss, n_iter)
plt.clf()
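For Nesterov's accelerated gradient, the only change from plain momentum (under the usual formulation) is evaluating the gradient at the look-ahead point `w + mu*v`; a sketch:

```python
import numpy as np

def nesterov_step(w, v, grad_fn, alpha, mu):
    # Look-ahead: differentiate at w + mu*v instead of at w.
    v = mu * v - alpha * grad_fn(w + mu * v)
    return w + v, v
```

Here `grad_fn` would be `lambda w_: compute_grad(X[ind,:], y[ind], w_)` in the loop above.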
w = np.array([1,0,0,0,0,0])
alpha = 0.0 # learning rate
beta = 0.0 # (beta1 coefficient in original paper) exponential decay rate for the 1st moment estimates
mu = 0.0 # (beta2 coefficient in original paper) exponential decay rate for the 2nd moment estimates
eps = 1e-8 # A small constant for numerical stability
n_iter = 50
batch_size = 4
loss = np.zeros(n_iter)
plt.figure(figsize=(12,5))
for i in range(n_iter):
ind = random.sample(range(X.shape[0]), batch_size)
loss[i] = compute_loss(X, y, w)
visualize(X[ind,:], y[ind], w, loss, n_iter)
#<update w and anything else here>
visualize(X, y, w, loss, n_iter)
plt.clf()
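A minimal Adam step that could fill the placeholder above; here `beta` and `mu` play the roles of beta1 and beta2 from the paper, `m` and `v` start as zero vectors, and `t` is the 1-based iteration counter used for bias correction:

```python
import numpy as np

def adam_step(w, m, v, grad, t, alpha, beta, mu, eps=1e-8):
    # Biased first/second moment estimates, then bias-corrected update.
    m = beta * m + (1 - beta) * grad
    v = mu * v + (1 - mu) * grad ** 2
    m_hat = m / (1 - beta ** t)
    v_hat = v / (1 - mu ** t)
    return w - alpha * m_hat / (np.sqrt(v_hat) + eps), m, v
```

In the loop this would read `w, m, v = adam_step(w, m, v, compute_grad(X[ind,:], y[ind], w), i + 1, alpha, beta, mu, eps)`.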
<END_TASK>
|