| Unnamed: 0 (int64, 0–15.9k) | cleaned_code (string, 67–124k chars, nullable) | cleaned_prompt (string, 168–30.3k chars, nullable) |
|---|---|---|
2,700
|
<ASSISTANT_TASK:>
Python Code:
import pycuda.autoinit
import pycuda.driver as drv
import numpy
from pycuda.compiler import SourceModule
mod = SourceModule("""
__global__ void multiplicar(float *dest, float *a, float *b)
{
  const int i = threadIdx.x;
  dest[i] = a[i] * b[i];
}
""")
multiplicar = mod.get_function("multiplicar")
a = numpy.random.randn(400).astype(numpy.float32)
b = numpy.random.randn(400).astype(numpy.float32)
dest = numpy.zeros_like(a)
print(dest)
multiplicar(
drv.Out(dest), drv.In(a), drv.In(b),
block=(400,1,1), grid=(1,1))
print(dest)
print(dest - a*b)
import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule
import numpy
a = numpy.random.randn(4,4)
a = a.astype(numpy.float32)
a_gpu = cuda.mem_alloc(a.nbytes)
cuda.memcpy_htod(a_gpu, a)
mod = SourceModule("""
__global__ void duplicar(float *a)
{
  int idx = threadIdx.x + threadIdx.y*4;
  a[idx] *= 2;
}
""")
mod
func = mod.get_function("duplicar")
func(a_gpu, block=(4,4,1))
func
a_duplicado = numpy.empty_like(a)
cuda.memcpy_dtoh(a_duplicado, a_gpu)
print(a_duplicado)
print(a)
print(type(a))
print(type(a_gpu))
print(type(a_duplicado))
a_duplicado
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Why PyCUDA?
Step2: Running this program will print a pile of zeros; nothing very interesting. Behind the scenes, however, something interesting did happen.
Step3: Transferring data
Step4: However, our array a consists of double-precision numbers; since not all NVIDIA GPUs support double precision, we do the following
Step5: Finally, we need an array to transfer the data into, so we should allocate memory on the device
Step6: As a last step, we need to transfer the data to the GPU
Step8: Executing kernels
Step9: If there are no errors, the code has now been compiled and loaded onto the device. We get a reference to our pycuda.driver.Function and call it, specifying a_gpu as the argument and a block size of $4\times 4$
Step10: Finally we retrieve the data from the GPU and display it alongside the original a
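The elementwise product that the `multiplicar` kernel computes can be checked on the CPU with plain NumPy when no GPU is available (a minimal sketch; the array size mirrors the example above):

```python
import numpy

# CPU reference for the CUDA kernel: dest[i] = a[i] * b[i]
a = numpy.random.randn(400).astype(numpy.float32)
b = numpy.random.randn(400).astype(numpy.float32)
dest = a * b  # what drv.Out(dest) should hold after the kernel returns

# Computed identically on both sides, the residual is exactly zero here;
# after a GPU run it should be ~0 as well
print(numpy.abs(dest - a * b).max())
```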
|
2,701
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
matplotlib.rcParams['figure.figsize'] = 9, 6
from numpy import *
import scipy.integrate
# This code is not very efficient, it recalculates many quantities from many
# different functions. It is easy to maintain though, and does not need to be
# run very often.
def r(t,p,R,F): # Position on reflector of facet at theta,phi
return R*array([sin(t)*cos(p), sin(t)*sin(p), 1-cos(t)])
def A(t,p,R,F): # Alignment point for facet at theta,phi
return array([0,0,F+norm(r(t,p,R,F)-array([0,0,F]))])
def n(t,p,R,F): # Normal of facet at theta,phi
_n = A(t,p,R,F)-r(t,p,R,F)
return _n/norm(_n)
def g(t,p,U,R,F):
_g = r(t,p,R,F)-U
return _g/norm(_g)
def s(t,p,U,R,F): # Direction of ray reflected from facet at theta,phi
_g = g(t,p,U,R,F)
_n = n(t,p,R,F)
return _g - 2*dot(_g,_n)*_n
def r_im(t,p,U,R,F): # Position on image plane of ray reflected from facet at theta,phi
V = 1.0/(1.0/F-1.0/U[2])
_r = r(t,p,R,F)
_s = s(t,p,U,R,F)
tim = (V-_r[2])/_s[2]
return (_r+tim*_s)/V
def integrate(fn,U,R,F,D): # Integral of parallel beam over full reflector
integrand = lambda t,p: -(R**2)*sin(t)*fn(t,p)*cos(t)*dot(n(t,p,R,F),g(t,p,U,R,F))
tmax = arcsin(D/2.0/R)
gfun = lambda p: 0
hfun = lambda p: tmax
return scipy.integrate.dblquad(integrand,0,2*pi,gfun,hfun)
F = 1600.
R = F*1.2
D = 1200.
Uz = 10.6 * 1e5
d = 4.0/180.0*pi
figure(1,figsize=[15,3])
labset=False
C = [ 'k','r','b' ]
for d in arange(0,4.01,0.5)/180.0*pi:
U = array([-Uz*tan(d), 0, Uz])
for i in range(0,3):
Di = D*float(i+1)/3.0/2.0
t = Di/R
p = arange(0.0,360.1,1)/180.0*pi
rfp = list(map(lambda _p: r_im(t,_p,U,R,F),p))
x = array(list(map(lambda _r: _r[0]/pi*180, rfp)))
y = array(list(map(lambda _r: _r[1]/pi*180, rfp)))
lab = "_NOLABEL_"
if labset==False:
lab = 'd=%.0fcm'%Di
plot(x,y,C[i],label=lab)
labset = True
axis([-0.25, 4.75, -.5, .5]);
xlabel('Tangential focal plane position [deg]')
ylabel('Sagittal position [deg]')
legend(loc=2);
#gcf().savefig('zfs_lewes_10k.pdf',bbox_inches='tight')
def tof(t,p,U,R,F):
V = 1.0/(1.0/F-1.0/U[2])
r_ref = r(t,p,R,F)
return (norm(r_ref-U)+norm(r_im(t,p,U,R,F)*V - r_ref))/2.9979246e+10
def calcPSF_SA(dv, R, F, D, Uz):
meanxv = []
meanyv = []
meantv = []
rmsxv = []
rmsyv = []
rmstv = []
for d in dv:
U = array([-Uz*tan(d), 0, Uz])
int1 = integrate(lambda t,p: 1,U,R,F,D)
intx = integrate(lambda t,p: r_im(t,p,U,R,F)[0],U,R,F,D)
inty = integrate(lambda t,p: r_im(t,p,U,R,F)[1],U,R,F,D)
intxx = integrate(lambda t,p: r_im(t,p,U,R,F)[0]**2,U,R,F,D)
intyy = integrate(lambda t,p: r_im(t,p,U,R,F)[1]**2,U,R,F,D)
intt = integrate(lambda t,p: tof(t,p,U,R,F),U,R,F,D)
inttt = integrate(lambda t,p: tof(t,p,U,R,F)**2,U,R,F,D)
meanx = intx[0]/int1[0]
meany = inty[0]/int1[0]
meant = intt[0]/int1[0]
varx = intxx[0]/int1[0] - meanx**2
vary = intyy[0]/int1[0] - meany**2
vart = inttt[0]/int1[0] - meant**2
meanxv.append(meanx)
meanyv.append(meany)
meantv.append(meant)
rmsxv.append(sqrt(varx))
rmsyv.append(sqrt(vary))
rmstv.append(sqrt(vart))
meanx = array(meanxv)
meany = array(meanyv)
meant = array(meantv)
rmsx = array(rmsxv)
rmsy = array(rmsyv)
rmst = array(rmstv)
return meanx, meany, rmsx, rmsy, meant, rmst
dv = arange(0.0,10.01,0.1)/180.0*pi
meanx_sa, meany_sa, rmsx_sa, rmsy_sa, meant_sa, rmst_sa = calcPSF_SA(dv,R,F,D,Uz)
P=polyfit(dv,meanx_sa,1)
print("Plate-scale factor: %.3f"%P[0])
import sys,os
from calin.simulation.vs_optics import *
from numpy import *
def calcPSF_RT(dv, R, F, D, Uz, N=100000):
param = calin.ix.simulation.vs_optics.IsotropicDCArrayParameters()
param.mutable_prescribed_array_layout().add_scope_positions();
dc = param.mutable_reflector()
dc.set_curvature_radius(R)
dc.set_aperture(D)
dc.set_facet_spacing(1.0)
dc.set_facet_size(dc.facet_spacing())
dc.set_facet_focal_length(F)
dc.mutable_psf_align().set_object_plane(inf);
dc.set_alignment_image_plane(F)
dc.set_weathering_factor(1.0)
dc.set_facet_spot_size_probability(0.8)
dc.set_facet_spot_size(0)
param.mutable_focal_plane().mutable_translation().set_y(1.0/(1.0/F-1.0/Uz))
param.mutable_pixel().set_spacing(1)
param.mutable_pixel().set_cone_inner_diameter(1)
param.mutable_pixel().set_cone_survival_prob(1)
rng = calin.math.rng.RNG()
cta = calin.simulation.vs_optics.VSOArray()
cta.generateFromArrayParameters(param, rng)
scope = cta.telescope(0)
print(scope.numMirrors(), scope.numPixels())
PS = 1/scope.focalPlanePosition()[1]
raytracer = calin.simulation.vs_optics.VSORayTracer(cta, rng)
ph = calin.math.ray.Ray()
info = calin.simulation.vs_optics.VSOTraceInfo()
nhit = []
meanxv = []
meanyv = []
meantv = []
rmsxv = []
rmsyv = []
rmstv = []
for d in dv:
x = []
y = []
t = []
for i in range(0,N):
raytracer.testBeam(ph, scope, d, 0, Uz)
pixel = raytracer.trace(ph, info, scope)
# print(info.status)
if info.rayHitFocalPlane():
x.append(info.fplane_z)
y.append(info.fplane_x)
t.append(info.fplane_t)
x = array(x)*PS
y = array(y)*PS
nhit.append(len(x))
meanxv.append(mean(x))
meanyv.append(mean(y))
meantv.append(mean(t))
rmsxv.append(std(x))
rmsyv.append(std(y))
rmstv.append(std(t))
nhit = array(nhit)
meanx = array(meanxv)
meany = array(meanyv)
meant = array(meantv)
rmsx = array(rmsxv)
rmsy = array(rmsyv)
rmst = array(rmstv)
return nhit, meanx, meany, rmsx, rmsy, meant, rmst
dv2 = arange(0.0,10.01,0.5)/180.0*pi
nhit_rt, meanx_rt, meany_rt, rmsx_rt, rmsy_rt, meant_rt, rmst_rt = calcPSF_RT(dv2,R,F,D,Uz,N=100000)
figure(figsize=(6,4))
plot(dv/pi*180,rmsx_sa/pi*180,'r-',label='Quadrature: tangential')
plot(dv/pi*180,rmsy_sa/pi*180,'g-',label='Quadrature: sagittal')
plot(dv2/pi*180,rmsx_rt/pi*180,'rx',label='Monte Carlo: tangential')
plot(dv2/pi*180,rmsy_rt/pi*180,'gx',label='Monte Carlo: sagittal')
xlabel('Off axis angle [deg]')
ylabel('Image RMS [deg]')
legend(loc=2)
axis(array(axis())+array([-0.1, 0.1, 0, 0]))
grid()
#gcf().savefig('zfs_rms_10k.pdf',bbox_inches='tight')
figure(figsize=(6,4))
plot(dv/pi*180,rmst_sa*1e9,'b-',label='Quadrature')
#plot(dv2/pi*180,rmst_rt*1e9,'bx',label='Monte Carlo')
errorbar(dv2/pi*180,rmst_rt*1e9,fmt='bx',yerr=rmst_rt*1e9/sqrt(2*nhit_rt),label='Monte Carlo')
text(0.025,0.025,'Note: errors on RMS assume Gaussian distribution',transform=gca().transAxes)
xlabel('Off axis angle [deg]')
ylabel('Time dispersion RMS [ns]')
legend(loc=1)
a=array(axis())
a[0] = -0.1
a[1] = 10.1
a[3] = 0.7501
axis(a)
grid()
#gcf().savefig('zfs_tdisp_10k.pdf',bbox_inches='tight')
meanx_sa_x2, meany_sa_x2, rmsx_sa_x2, rmsy_sa_x2, meant_sa_x2, rmst_sa_x2 = calcPSF_SA(dv,R,F,D*2.0,Uz)
nhit_rt_x2, meanx_rt_x2, meany_rt_x2, rmsx_rt_x2, rmsy_rt_x2, meant_rt_x2, rmst_rt_x2 = calcPSF_RT(dv2,R,F,D*2.0,Uz,N=100000)
figure(figsize=(6,4))
plot(dv/pi*180,rmsx_sa_x2/pi*180,'r-',label='Quadrature: tangential')
plot(dv/pi*180,rmsy_sa_x2/pi*180,'g-',label='Quadrature: sagittal')
plot(dv2/pi*180,rmsx_rt_x2/pi*180,'rx',label='Monte Carlo: tangential')
plot(dv2/pi*180,rmsy_rt_x2/pi*180,'gx',label='Monte Carlo: sagittal')
xlabel('Off axis angle [deg]')
ylabel('Image RMS [deg]')
legend(loc=2)
axis(array(axis())+array([-0.1, 0.1, 0, 0]))
grid()
#gcf().savefig('zfs_rms_D2400_10k.pdf',bbox_inches='tight')
figure(figsize=(6,4))
plot(dv/pi*180,rmst_sa_x2*1e9,'b-',label='Quadrature')
#plot(dv2/pi*180,rmst_rt_x2*1e9,'bx',label='Monte Carlo')
errorbar(dv2/pi*180,rmst_rt_x2*1e9,fmt='bx',yerr=rmst_rt_x2*1e9/sqrt(2*nhit_rt_x2),label='Monte Carlo')
text(0.025,0.025,'Note: errors on RMS assume Gaussian distribution',transform=gca().transAxes)
xlabel('Off axis angle [deg]')
ylabel('Time dispersion RMS [ns]')
legend(loc=1)
a=array(axis())
a[0] = -0.1
a[1] = 10.1
axis(a)
grid()
#gcf().savefig('zfs_tdisp_D2400_10k.pdf',bbox_inches='tight')
#numpy.savez('zfs_10k.npz',dv, meanx_sa, meany_sa, rmsx_sa, rmsy_sa, meant_sa, rmst_sa,
# dv2, nhit_rt, meanx_rt, meany_rt, rmsx_rt, rmsy_rt, meant_rt, rmst_rt,
# meanx_sa_x2, meany_sa_x2, rmsx_sa_x2, rmsy_sa_x2, meant_sa_x2, rmst_sa_x2,
# nhit_rt_x2, meanx_rt_x2, meany_rt_x2, rmsx_rt_x2, rmsy_rt_x2, meant_rt_x2, rmst_rt_x2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Development of semi-analytic method
Step2: Illustration of focal plane images
Step3: Image moments using semi-analytic method
Step4: Fitting a linear function to the centroid position gives the plate-scale correction factor appropriate for this modified DC design
Step5: Image moments using Monte Carlo ray-tracing code
Step6: Comparison of results - PSF and time dispersion
Step7: System with f=F/D=1600/2400=0.67
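The plate-scale fit of Step4 is just a first-degree polynomial through the tangential centroid positions; a minimal NumPy sketch with synthetic centroids standing in for `meanx_sa` (the slope value 1.05 is illustrative):

```python
import numpy as np

# Field angles in radians, matching the dv grid used in the notebook
dv = np.arange(0.0, 10.01, 0.1) / 180.0 * np.pi

# Synthetic, perfectly linear centroid positions with a known slope
true_plate_scale = 1.05
meanx = true_plate_scale * dv

# polyfit(..., 1) returns [slope, intercept]; the slope is the plate-scale factor
P = np.polyfit(dv, meanx, 1)
print("Plate-scale factor: %.3f" % P[0])
```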
|
2,702
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
plt.hist(L)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
2,703
|
<ASSISTANT_TASK:>
Python Code:
plt.imshow(plt.imread('./res/fig11_2.png'))
show_image('fig11_5.png')
show_image('fig11_7.png', figsize=(8, 10))
show_image('fig11_9.png', figsize=(8, 10))
show_image('fig11_10.png', figsize=(8, 10))
show_image('fig11_13.png')
show_image('ex11_17.png')
#Exercise
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 11.2.2 Using Eigenvectors for Dimensionality Reduction
Step2: 11.3.2 Interpretation of SVD
Step3: If we set the $s$ smallest singular values to 0, then we can also eliminate the corresponding $s$ rows of $U$ and $V$.
Step4: 11.3.4 Why Zeroing Low Singular Values Works
Step5: Eliminating Duplicate Rows and Columns
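The truncation described in Step3 can be sketched with `numpy.linalg.svd` on a toy matrix; by the Eckart-Young theorem, the Frobenius error of the rank-k approximation equals the norm of the dropped singular values (the matrix values below are illustrative):

```python
import numpy as np

M = np.array([[1., 2.], [3., 4.], [5., 6.]])
U, sig, Vt = np.linalg.svd(M, full_matrices=False)

# Zero the smallest singular value and drop the matching column of U / row of Vt
k = 1
M_approx = U[:, :k] @ np.diag(sig[:k]) @ Vt[:k, :]

# Frobenius error of the rank-k approximation equals the dropped singular value
err = np.linalg.norm(M - M_approx)
print(err, sig[k])
```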
|
2,704
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
rcParams["figure.figsize"] = (8, 6)
rcParams["axes.grid"] = True
from IPython.display import display, clear_output
from mpl_toolkits.axes_grid1 import make_axes_locatable
from time import sleep
from __future__ import division
def cart2pol(x, y):
theta = arctan2(y, x)
r = sqrt(x ** 2 + y ** 2)
return theta, r
def pol2cart(theta, r):
x = r * cos(theta)
y = r * sin(theta)
return x, y
def selfDiffusionOfWater(T, P=101.325):
# Implements the Krynicki formula; returns the self-diffusion of water (in micrometers^2/millisec)
# given the temperature T (in Centigrade) and the pressure P (in kPa).
d = 12.5 * exp(P * -5.22 * 1e-6) * sqrt(T + 273.15) * exp(-925 * exp(P * -2.6 * 1e-6)/(T + 273.15 - (95 + P * 2.61 * 1e-4)))
return d
D = selfDiffusionOfWater(37)
print("%f micrometers^2/millisecond" % D)
T = arange(25,41)
D = selfDiffusionOfWater(T)
figure()
plot(T, D, 'k')
xlabel('Temperature (Centigrade)', fontsize=14)
ylabel('Self-diffusion ($\mu$m$^2$/ms)', fontsize=14)
plot([37,37], [2,3.4], 'r-')
text(37, 3.45, 'Body Temperature', ha='center', color='r', fontsize=12)
# compute your answer here
voxel_size = 50.0 # micrometers
ADC = 2.0 # micrometers^2/millisecond)
barrier_spacing = 10.0 # micrometers (set this to 0 for no barriers)
num_particles = 500
def draw_particles(ax, title, xy, particle_color, voxel_size, barrier_spacing):
ax.set_xlabel('X position $(\mu m)$')
ax.set_ylabel('Y position $(\mu m)$')
ax.axis('equal')
ax.set_title(title)
ax.set_xlim(-voxel_size/2, voxel_size/2)
ax.set_ylim(-voxel_size/2, voxel_size/2)
if barrier_spacing > 0:
compartments = unique(np.round(xy[1,:] / barrier_spacing))
for c in range(compartments.size):
ax.hlines(compartments[c]*barrier_spacing, -voxel_size/2, voxel_size/2, linewidth=4, colors=[.7, .7, .8], linestyles='solid')
particles = []
for p in range(xy.shape[1]):
particles.append(Circle(xy[:,p], 0.3, color=particle_color[p]))
ax.add_artist(particles[p])
return particles
# Place some particles randomly distributed in the volume
xy = random.rand(2, num_particles) * voxel_size - voxel_size/2.
start_xy = xy
particle_color = [((xy[0,p] + voxel_size/2) / voxel_size, (xy[1,p] + voxel_size/2) / voxel_size, .5) for p in range(num_particles)]
# draw 'em
fig,ax = subplots(1, 1, figsize=(6, 6))
particles = draw_particles(ax, 'initial particle positions', xy, particle_color, voxel_size, barrier_spacing)
import time, sys
from IPython.core.display import clear_output
time_step = 0.1 # milliseconds
nTimeSteps = 100
fig,ax = subplots(1, 1, figsize=(6, 6))
total_time = 0
for t_i in range(nTimeSteps):
dxy = np.random.standard_normal(xy.shape) * sqrt(2 * ADC * time_step)
new_xy = xy + dxy
if barrier_spacing>0:
curCompartment = np.round(xy[1,:]/barrier_spacing)
newCompartment = np.round(new_xy[1,:]/barrier_spacing)
for p in range(newCompartment.size):
if newCompartment[p]!=curCompartment[p]:
# approximate particles reflecting off the impermeable barrier
new_xy[1,p] = xy[1,p]
xy = new_xy
title = 'elapsed time: %5.2f ms' % (time_step * t_i)
particles = draw_particles(ax, title, xy, particle_color, voxel_size, barrier_spacing)
clear_output(wait=True)
display(fig,ax)
ax.cla()
total_time += time_step
close()
# compute your answer here
Dt = cov(start_xy - xy) / (2 * total_time)
[val,vec] = eig(Dt)
estimatedADC = val  # eigenvalues of Dt already carry units of um^2/ms
principalDiffusionDirection = vec[0]
print('ADC = ' + str(estimatedADC))
print('Principal Diffusion Direction (PDD) = ' + str(principalDiffusionDirection))
# compute your answer here
fig,ax = subplots(1, 1, figsize=(6, 6))
start_p = draw_particles(ax, 'initial particle positions', start_xy, particle_color, voxel_size, barrier_spacing)
for p in range(num_particles):
ax.plot((start_xy[0,p], xy[0,p]), (start_xy[1,p], xy[1,p]), linewidth=1, color=[.5, .5, .5], linestyle='solid')
# compute your answer here
B0 = 3.0 # Magnetic field strength (Tesla)
gyromagneticRatio = 42.58e+6 # Gyromagnetic constant for hydrogen (Hz / Tesla)
# The Larmor frequency (in Hz) of Hydrogen spins in this magnet is:
spinFreq = gyromagneticRatio * B0
# compute your answer here
voxelSize = 100.0 # micrometers
gx = 5e-8 # Tesla / micrometer
gy = 0.0 # Tesla / micrometer
def draw_spins(ax, title, field_image, im_unit, sx, sy, px, py):
# a function to draw spin-packets
# draw the relative magnetic field map image
im = ax.imshow(field_image, extent=im_unit+im_unit, cmap=matplotlib.cm.bone)
ax.set_ylabel('Y position (micrometers)')
ax.set_xlabel('X position (micrometers)')
ax.set_title(title)
# Place some spin packets in there:
ax.scatter(x=sx+px, y=sy+py, color='r', s=3)
ax.scatter(x=sx, y=sy, color='c', s=3)
ax.set_xlim(im_unit)
ax.set_ylim(im_unit)
# add a colorbar
divider = make_axes_locatable(ax)
cax = divider.append_axes("bottom", size="7%", pad=0.5)
cbl = fig.colorbar(im, cax=cax, orientation='horizontal')
cbl.set_label('Relative field strength (micro Tesla)')
im_unit = (-voxelSize/2, voxelSize/2)
x = linspace(im_unit[0], im_unit[1], 100)
y = linspace(im_unit[0], im_unit[1], 100)
[X,Y] = meshgrid(x,y)
# micrometers * Tesla/micrometer * 1e6 = microTesla
relative_field_image = (X*gx + Y*gy) * 1e6
locations = linspace(-voxelSize/2+voxelSize/10, voxelSize/2-voxelSize/10, 5)
sx,sy = meshgrid(locations, locations);
sx = sx.flatten()
sy = sy.flatten()
# set the phase/magnitude to be zero
px = zeros(sx.shape)
py = zeros(sy.shape)
fig,ax = subplots(1, 1)
draw_spins(ax, 'Spin packets at rest in a gradient', relative_field_image, im_unit, sx, sy, px, py)
spinFreq = (sx * gx + sy * gy) * gyromagneticRatio + B0 * gyromagneticRatio
print(spinFreq)
relativeSpinFreq = (sx * gx + sy * gy) * gyromagneticRatio
print(relativeSpinFreq)
# compute your answer here
fig,ax = subplots(1, 1)
timeStep = .001
# Initialize the transverse magnitization to reflect a 90 deg RF pulse.
# The scale here is arbitrary and is selected simply to make a nice plot.
Mxy0 = 5
# Set the T2 value of the spins (in seconds)
T2 = 0.07
curPhase = zeros(sx.size)
t = 0.
nTimeSteps = 50
for ti in range(nTimeSteps):
# Update the current phase based on the spin's precession rate, which is a function
# of its local magnetic field.
curPhase = curPhase + 2*pi*gyromagneticRatio * (sx*gx+sy*gy) * timeStep
# Do a 180 flip at the TE/2:
if ti==round(nTimeSteps/2.):
curPhase = -curPhase
# The transverse magnitization magnitude decays with the T2:
curMagnitude = Mxy0 * exp(-t/T2)
px = sin(curPhase) * curMagnitude
py = cos(curPhase) * curMagnitude
# Summarize the total (relative) MR signal for this iteration
S = sqrt(sum(px**2 + py**2)) / sx.size
title = 'elapsed time: %5.1f/%5.1f ms' % (t*1000., timeStep*(nTimeSteps-1)*1000)
draw_spins(ax, title, relative_field_image, im_unit, sx, sy, px, py)
clear_output(wait=True)
display(fig,ax)
ax.cla()
t = t+timeStep
close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Self-diffusion of water
Step2: The self-diffusion of water at body temperature and standard pressure, in micrometers<sup>2</sup>/millimeter, is
Step3: Now we'll plot D for a biologically meaningful range of temperatures
Step4: Question 1
Step5: Brownian motion
Step6: Run the diffusion simulation
Step7: Question 2
Step8: By comparing the particle ending positions (in xy) with their starting positions (in start_xy), we can compute the diffusion tensor. This is essentially just a 2-d Gaussian fit to the position differences, computed using the covariance function (cov). We also need to normalize the positions by the total time that we diffused.
Step9: Question 3
Step10: Now lets show the particle starting positions a little line segment showing where each moved to.
Step11: Question 4
Step12: The effect of diffusion on the MR signal
Step13: Question 4
Step14: Simulate spins in an MR experiment
Step15: Calculate the relative spin frequency at each spin location. Our gradient strengths are expressed as T/cm and are symmetric about the center of the voxelSize. To understand the following expression, work through the units
Step16: Note that including the B0 field in this equation simply adds an offset to the spin frequency. For most purposes, we usually only care about the spin frequency relative to the B0 frequency (i.e., the rotating frame of reference), so we can leave that last term off and compute relative frequencies (in Hz)
Step17: Question 5
Step18: Display the relative frequencies (in Hz)
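The tensor estimate of Step8 (a 2-D Gaussian fit via the covariance of the displacements, normalized by diffusion time) can be sketched on synthetic free-diffusion data; the ADC and time values below are illustrative, not taken from the simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
ADC = 2.0          # um^2/ms, isotropic toy value
total_time = 10.0  # ms

# Free diffusion: each displacement axis is Gaussian with variance 2*ADC*t
dxy = rng.standard_normal((2, 200_000)) * np.sqrt(2 * ADC * total_time)

# Diffusion tensor estimate: covariance of displacements over 2*t
Dt = np.cov(dxy) / (2 * total_time)
vals, vecs = np.linalg.eigh(Dt)
print(vals)  # both eigenvalues should be close to ADC = 2.0
```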
|
2,705
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
path = "data/galaxy/sample/"
#path = "data/galaxy/"
train_path = path + 'train/'
valid_path = path + 'valid/'
test_path = path + 'test/'
results_path = path + 'results/'
model_path = path + 'model/'
from utils import *
batch_size = 32
num_epoch = 1
import pandas as pd
df = pd.read_csv(path+ "train.csv")
df_val = pd.read_csv(path+ "valid.csv")
# custom iterator for regression
import Iterator; reload(Iterator)
from Iterator import DirectoryIterator
imgen = image.ImageDataGenerator()
# imgen = image.ImageDataGenerator(samplewise_center=0,
# rotation_range=360,
# width_shift_range=0.05,
# height_shift_range=0.05,
# zoom_range=[0.9,1.2],
# horizontal_flip=True,
# channel_shift_range=0.1,
# dim_ordering='tf')
batches = DirectoryIterator(train_path, imgen,
class_mode=None,
dataframe=df,
batch_size=4,
target_size=(128,128))
val_imgen = image.ImageDataGenerator()
val_batches = DirectoryIterator(valid_path, val_imgen,
class_mode=None,
dataframe=df_val,
batch_size=4,
target_size=(128,128))
imgs, target = next(batches)
imgs[0].shape
plots(imgs)
def conv1():
model = Sequential([
BatchNormalization(axis=1, input_shape=(3,128,128)),
Convolution2D(32,3,3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D((3,3)),
Flatten(),
Dense(200, activation='relu'),
BatchNormalization(),
Dense(37)
])
model.compile(Adam(lr=0.0001), loss='mse')
return model
model = conv1()
model.summary()
model.fit_generator(batches, batches.nb_sample, nb_epoch=5, validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
model.save_weights(model_path+'conv1.h5')
train_files = batches.filenames
train_out = model.predict_generator(batches, batches.nb_sample)
features = list(df.columns.values)
train_ids = [os.path.splitext(f) for f in train_files]
submission = pd.DataFrame(train_out, columns=features[2:])
submission.insert(0, 'GalaxyID', [int(a[0][7:]) for a in train_ids])
submission.head()
df.loc[df['GalaxyID'] == 924379]
val_files = val_batches.filenames
val_out = model.predict_generator(val_batches, val_batches.nb_sample)
features = list(df_val.columns.values)
val_ids = [os.path.splitext(f) for f in val_files]
submission = pd.DataFrame(val_out, columns=features[2:])
submission.insert(0, 'GalaxyID', [int(a[0][7:]) for a in val_ids])
submission.head()
df_val.loc[df_val['GalaxyID'] == 546684]
test_batches = get_batches(test_path, batch_size=64, target_size=(128,128))
test_files = test_batches.filenames
test_out = model.predict_generator(test_batches, test_batches.nb_sample)
save_array(results_path+'test_out.dat', test_out)
features = list(df.columns.values)
test_ids = [os.path.splitext(f) for f in test_files]
submission = pd.DataFrame(test_out, columns=features[2:])
submission.insert(0, 'GalaxyID', [int(a[0][7:]) for a in test_ids])
submission.head()
subm_name = results_path+'subm.csv'
submission.to_csv(subm_name, index=False)
FileLink(subm_name)
imgen_aug = image.ImageDataGenerator(horizontal_flip=True)
batches = DirectoryIterator(train_path, imgen_aug,
class_mode=None,
dataframe=df,
batch_size=4)
model = conv1()
model.fit_generator(batches, batches.nb_sample, nb_epoch=5, validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
imgen_aug = image.ImageDataGenerator(rotation_range=360)
batches = DirectoryIterator(train_path, imgen_aug,
class_mode=None,
dataframe=df,
batch_size=4)
model = conv1()
model.fit_generator(batches, batches.nb_sample, nb_epoch=5, validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
imgen_aug = image.ImageDataGenerator(width_shift_range=0.05)
batches = DirectoryIterator(train_path, imgen_aug,
class_mode=None,
dataframe=df,
batch_size=4)
model = conv1()
model.fit_generator(batches, batches.nb_sample, nb_epoch=5, validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
imgen_aug = image.ImageDataGenerator(channel_shift_range=20)
batches = DirectoryIterator(train_path, imgen_aug,
class_mode=None,
dataframe=df,
batch_size=4)
model = conv1()
model.fit_generator(batches, batches.nb_sample, nb_epoch=5, validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
imgen_aug = image.ImageDataGenerator(horizontal_flip=True,
rotation_range=180,
width_shift_range=0.05,
channel_shift_range=20)
batches = DirectoryIterator(train_path, imgen_aug,
class_mode=None,
dataframe=df,
batch_size=4)
model = conv1()
model.fit_generator(batches, batches.nb_sample, nb_epoch=5, validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
model.optimizer.lr = 0.0001
model.fit_generator(batches, batches.nb_sample, nb_epoch=5,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First Model
Step2: To Do
|
2,706
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from ttim import *
ml = ModelMaq(kaq=[1, 20, 2], z=[25, 20, 18, 10, 8, 0], c=[1000, 2000],
Saq=[0.1, 1e-4, 1e-4], Sll=[0, 0], phreatictop=True,
tmin=0.1, tmax=1000)
w = Well(ml, xw=0, yw=0, rw=0.2, tsandQ=[(0, 1000)], layers=1, label='well 1')
yls = [-500, -300, -200, -100, -50, 0, 50, 100, 200, 300, 500]
xls = 50 * np.ones(len(yls))
ls1 = HeadLineSinkString(ml, list(zip(xls, yls)), tsandh='fixed', layers=0, label='river')
ml.solve()
ml.xsection(x1=-200, x2=200, npoints=100, t=100, layers=[0, 1, 2], sstart=-200)
t = np.logspace(-1, 3, 100)
Q = ls1.discharge(t)
plt.semilogx(t, Q[0])
plt.ylabel('Q [m$^3$/d]')
plt.xlabel('time [days]');
ml.contour(win=[-200, 200, -200, 200], ngr=[20, 20], t=100, layers=0,
levels=20, color='b', labels='True', decimals=2, figsize=(8, 8))
ml = ModelMaq(kaq=[1, 20, 2], z=[25, 20, 18, 10, 8, 0], c=[1000, 2000],
Saq=[0.1, 1e-4, 1e-4], Sll=[0, 0], phreatictop=True,
tmin=0.1, tmax=2000)
tsandQ=[(0, 1000), (100, 0), (365, 1000), (465, 0),
(730, 1000), (830, 0), (1095, 1000), (1195, 0),
(1460, 1000), (1560, 0)]
w = Well(ml, xw=0, yw=0, rw=0.2, tsandQ=tsandQ, layers=1, label='well 1')
yls = [-500, -300, -200, -100, -50, 0, 50, 100, 200, 300, 500]
xls = 50 * np.ones(len(yls))
ls1 = HeadLineSinkString(ml, list(zip(xls, yls)), tsandh='fixed', layers=0, label='river')
ml.solve()
t = np.linspace(0.1, 2000, 200)
Q = ls1.discharge(t)
plt.plot(t, Q[0])
plt.ylabel('Q [m$^3$/d]')
plt.xlabel('time [days]');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Consider a well in the middle aquifer of a three aquifer system located at $(x,y)=(0,0)$. The well starts pumping at time $t=0$ at a discharge of $Q=1000$ m$^3$/d. Aquifer properties are the shown in table 3 (same as exercise 2). A stream runs North-South along the line $x=50$. The head along the stream is fixed.
Step2: Exercise 3b
Step3: Exercise 3c
Step4: Exercise 3d
|
2,707
|
<ASSISTANT_TASK:>
Python Code:
# Assign value 1 to variable x
x = 1
x = 1 # Assign value 1 to variable x
"""
This is a multi-line comment.
Assign value 1 to variable x.
"""
x = 1
print(1) # Print a constant
x = 2014
print(x) # Print an integer variable
xstr = "Hello World." # Print a string
print(xstr)
print(x,xstr) # Print multiple objects
print("String 1" + " " + "String2") # Concatenate multiple strings and print them
x = 1
print("Formatted integer is {0:06d}".format(x)) # Note the format specification, 06d, for the integer.
y = 12.66666666667
print("Formatted floating point number is {0:2.3f}".format(y)) # Note the format specification, 2.3f, for the floating point number.
iStr = "Hello World"
fStr = "Goodbye World"
print("Initial string: {0:s} . Final string: {1:s}.".format(iStr,fStr)) # Note the format specification, s, for the string.
print("Initial string: {0} . Final string: {1}.".format(iStr,fStr)) # In this case, omitting the s format specified works too.
x = 1
print("Formatted integer is {0:06d}".format(x))
y = 12.66666666667
print("Formatted floating point number is {0:2.3f}".format(y))
year = 2014
print(year)
print("The year is %d." % year)
print(type(year))
help(year)
help(int)
mean = (1.0 + 0.7 + 2.1)/3.0
print(mean)
print("The mean is %6.2f." % mean)
print(type(mean))
help(mean)
help(float)
x = 2**3 # ** is the exponentiation operator
print(x)
x = 9 % 4 # % is the modulus operator
print(x)
x = 9 // 4 # // is the operator for floor division
print(x)
x = 2.0
y = 5.0
y += x # y = y + x
print(y)
y %= x # y = y%x
print(y)
x = 1
y = 1
x == y # Check for equality
x = 1
y = 1
x != 1 # Check for inequality
x = 0.5
y = 1.0
x > y # Check if x greater than y
x < y # Check if x less than y
x >= y # Check if x greater than equal to y
x <= y # Check if x less than equal to y
a = 99
b = 99
(a == b) and (a <= 100) # use the and operator to check if both the operands are true
a = True
b = False
a and b
a = True
b = False
a or b # use the or operator to check if at least one of the two operands is true
a = 100
b = 100
a == b
not(a == b) # use the not operator to reverse a logical statement
pythonStr = 'A first Python string.' # String specified with single quotes.
print(type(pythonStr))
print(pythonStr)
pythonStr = "A first Python string" # String specified with double quotes.
print(type(pythonStr))
print(pythonStr)
pythonStr = """A multi-line string.
A first Python string."""  # Multi-line string specified with triple quotes.
print(type(pythonStr))
print(pythonStr)
str1 = " Rock "
str2 = " Paper "
str3 = " Scissors "
longStr = str1 + str2 + str3
print(longStr)
str1 = "Rock,Paper,Scissors\n"
repeatStr = str1*5
print(repeatStr)
str1 = "Python"
lenStr1 = len(str1)
print("The length of str1 is " + str(lenStr1) + ".")
str1 = "Python"
print(str1[0]) # Print the first character element of the string.
print(str1[len(str1)-1]) # Print the last character element of the string.
print(str1[2:4]) # Print a 2-element slice of the string, starting from the 2-nd element up to but not including the 4-th element.
str1 = "Python"
str1[1] = "3" # Error, strings can't be modified.
str1 = "Python"
print(str1.upper()) # Convert str1 to all uppercase.
str2 = "PYTHON"
print(str1.lower()) # Convert str2 to all lowercase.
str3 = "Rock,Paper,Scissors,Lizard,Spock"
print(str3.split(",")) # Split str3 using "," as the separator. A list of string elements is returned.
str4 = "The original string has trailing spaces.\t\n"
print("***"+str4.strip()+"***") # Print stripped string with trailing space characters removed.
# pyList contains an integer, string and floating point number
pyList = [2014,"02 June", 74.5]
# Print all the elements of pyList
print(pyList)
print("\n")
# Print the length of pyList obtained using the len() function
print("Length of pyList is: {0}.\n".format(len(pyList)))
print(pyList)
print("\n")
# Print the first element of pyList. Remember, indexing starts with 0.
print("First element of pyList: {0}.\n".format(pyList[0]))
# Print the last element of pyList. Last element can be conveniently indexed using -1.
print("Last element of pyList: {0}.\n".format(pyList[-1]))
# Also the last element has index = (length of list - 1)
check = (pyList[2] == pyList[-1])
print("Is pyList[2] equal to pyList[-1]?\n{0}.\n".format(check))
# Assign a new value to the third element of the list
pyList[2] = -99.0
print("Modified element of pyList[2]: {0}.\n".format(pyList[2]))
pyList = ["rock","paper","scissors","lizard","Spock"]
print(pyList[2:4]) # Print elements of a starting from the second, up to but not including the fourth.
print(pyList[:2]) # Print the first two elements of pyList.
print(pyList[2:]) # Print all the elements of pyList starting from the second.
print(pyList[:]) # Print all the elements of pyList
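Negative indices can also be used in slices; they count backwards from the end of the sequence. A short sketch:

```python
# Negative indices count from the end: -1 is the last element.
pyList = ["rock", "paper", "scissors", "lizard", "Spock"]
print(pyList[-2:])   # the last two elements
print(pyList[::-1])  # a reversed copy of the list
```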
pyList = ["rock","paper","scissors","lizard","Spock"]
pyList[2:4] = ["gu","pa"] # Replace the second and third elements of pyList
print("Original contents of pyList:")
print(pyList)
print("\n")
pyList[:] = [] # Clear pyList, replace all items with an empty list
print("Modified contents of pyList:")
print(pyList)
pyList = ["rock","paper"]
print("Printing Python list pyList:")
print(pyList)
print("\n")
pyList.append("scissors")
print("Appended the string 'scissors' to pyList:")
print(pyList)
print("\n")
anotherList = ["lizard","Spock"]
pyList.extend(anotherList)
print("Extended pyList:")
print(pyList)
print("\n")
pyList1 = ["rock","paper","scissors"]
pyList2 = ["lizard","Spock"]
newList = pyList1 + pyList2
print("New list:")
print(newList)
pyLists = [["rock","paper","scissors"], ["ji","gu","pa"]]
# Print the first element (0-th index) of pyLists which is itself a list
print("pyLists[0] = ")
print(pyLists[0])
print("\n")
# Print the 0-th index element of the first list element in pyLists
print("pylists[0][0] = " + pyLists[0][0] + ".")
print("\n")
# Print the second element of pyLists which is itself a list
print("pyLists[1] = ")
print(pyLists[1])
print("\n")
# Print the 0-th index element of the second list element in pyLists
print("pyLists[1][0] = " + pyLists[1][0] + ".")
print("\n")
pyList = [1,3,4,2]
pyList.sort(reverse=True)
sum(pyList)
2*(pyList)
#2**(pyList)
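As the cell above shows, "*" repeats a list and "**" is not defined for lists at all. Element-wise arithmetic can instead be written with a list comprehension — a minimal sketch:

```python
# "*" repeats a list; for element-wise arithmetic use a comprehension.
pyList = [1, 3, 4, 2]
doubled = [2 * x for x in pyList]   # element-wise multiplication
squared = [x ** 2 for x in pyList]  # element-wise exponentiation
print(doubled)  # [2, 6, 8, 4]
print(squared)  # [1, 9, 16, 4]
```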
# pyTuple contains an integer, string and floating point number
pyTuple = (2014,"02 June", 74.5)
# Print all the elements of pyTuple
print("pyTuple is: ")
print(pyTuple)
print("\n")
# Print the length of pyTuple obtained using the len() function
print("Length of pyTuple is: {0}.\n".format(len(pyTuple)))
pyTuple[1] = "31 December" # Error as pyTuple is a tuple and hence, immutable
pyTuple = "rock", "paper", "scissors" # pack the strings into a tuple named pyTuple
print(pyTuple)
str0,str1,str2 = pyTuple # unpack the tuple into strings named str0, str1, str2
print("str0 = " + str0 + ".")
print("str1 = " + str1 + ".")
print("str2 = " + str2 + ".")
pyTuples = (("rock","paper","scissors"),("ji","gu","pa"))
print("pyTuples[0] = {0}.".format(pyTuples[0])) # Print the first sub-tuple in pyTuples.
print("pyTuples[1] = {0}.".format(pyTuples[1])) # Print the second sub-tuple in pyTuples.
pyNested = (["rock","paper","scissors"],["ji","gu","pa"])
pyNested[0][2] = "lizard" # OK, list within the tuple is mutable
print(pyNested[0]) # Print first list element of the tuple
pyNested = [("rock","paper","scissors"),("ji","gu","pa")]
pyNested[0][2] = "lizard" # Error, tuples are immutable
pyDict = {"Canada":"CAN","Argentina":"ARG","Austria":"AUT"}
print("pyDict: {0}.".format(pyDict))
print("pyDict['Argentina']: " + pyDict['Argentina'] + ".") # Print the value corresponding to key 'afghanistan'
print(pyDict.keys())
print(pyDict.values()) # Return all the values in the dictionary as a list.
print(pyDict.items()) # Return key, value pairs from the dictionary as a list of tuples.
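Two more dictionary operations worth knowing: the "in" operator tests key membership, and get() returns a default value instead of raising a KeyError for a missing key. A sketch using the same pyDict:

```python
# "in" checks whether a key exists; get() supplies a fallback value.
pyDict = {"Canada": "CAN", "Argentina": "ARG", "Austria": "AUT"}
print("Canada" in pyDict)           # True
print(pyDict.get("Brazil", "N/A"))  # N/A
```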
pyDicts = {"Canada":{"Alpha-2":"CA","Alpha-3":"CAN","Numeric":"124"},
"Argentina":{"Alpha-2":"AR","Alpha-3":"ARG","Numeric":"032"},
"Austria":{"Alpha-2":"AT","Alpha-3":"AUT","Numeric":"040"}}
print("pyDicts['Canada'] = {0}.".format(pyDicts['Canada']))
print("pyDicts['Canada']['Alpha-2'] = {0}.".format(pyDicts['Canada']['Alpha-2']))
pyNested = {"Canada":[2011,2008,2006,2004,2000 ],"Argentina":[2013,2011,2009,2007,2005],"Austria":[2013,2008,2006,2002,1999]}
print("pyNested['Canada'] = {0}".format(pyNested['Canada']))
print("pyNested['Austria'][4] = {0}.".format(pyNested['Austria'][4]))
pyNested = [{"year":2011,"countries":["Canada","Argentina"]},
{"year":2008,"countries":["Canada","Austria"]},
{"year":2006,"countries":["Canada","Austria"]},
{"year":2013,"countries":["Argentina","Austria"]}]
print("pyNested[0] = {0}".format(pyNested[0]))
print("pyNested[0]['year'] = {0}, pyNested[0]['countries'] = {1}.".format(pyNested[0]['year'],pyNested[0]['countries']))
pyNested = [{"year":2011,"countries":["Canada","Argentina"]},
{"year":2008,"countries":["Canada","Austria"]},
{"year":2006,"countries":["Canada","Austria"]},
{"year":2013,"countries":["Argentina","Austria"]}]
# Check if first dictionary element of pyNested corresponds to years 2006 or 2008
if(pyNested[0]["year"] == 2008):
print("Countries corresponding to year 2008 are: {0}.".format(pyNested[0]["countries"]))
elif(pyNested[0]["year"] == 2011):
print("Countries corresponding to year 2011 are: {0}.".format(pyNested[0]["countries"]))
else:
print("The first element does not correspond to either 2008 or 2011.")
countryList = ["Canada", "United States of America", "Mexico"]
for country in countryList: # Loop over countryList, set country to next element in list.
print(country)
countryDict = {"Canada":"124","United States":"840","Mexico":"484"}
print("Country\t\tISO 3166-1 Numeric Code")
for country,code in countryDict.items(): # Loop over all the key and value pairs in the dictionary
print("{0:12s}\t\t{1:12s}".format(country,code))
countryList = ["Canada", "United States of America", "Mexico"]
for i in range(0,3): # Loop over values of i in the range 0 up to, but not including, 3
print(countryList[i])
countryList = ["Canada", "United States of America", "Mexico"]
# iterate over countryList backwards, starting from the last element
while(countryList):
print(countryList[-1])
countryList.pop()
i = 0
countryList = ["Canada", "United States of America", "Mexico"]
while(i < len(countryList)):
print("Iteration variable i = {0}, Country = {1}.".format(i,countryList[i]))
i += 1
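The manual counter in the while loop above can be avoided with enumerate(), which yields (index, element) pairs directly:

```python
# enumerate() pairs each element with its index, removing the
# need to maintain a separate counter variable.
countryList = ["Canada", "United States of America", "Mexico"]
for i, country in enumerate(countryList):
    print("Iteration variable i = {0}, Country = {1}.".format(i, country))
```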
countryList = ["Canada", "United States of America", "Mexico"]
for country in countryList:
if(country == "United States of America"):
# if the country name matches, then break out of the for loop
break
else:
# do some processing
print(country)
countryList = ["Canada", "United States of America", "Mexico"]
for country in countryList:
if(country == "United States of America"):
# if the country name matches, then break out of the for loop
continue
else:
# do some processing
print(country)
filename = "tmp.txt"
fout = open(filename,"w") # The 'r' option indicates that the file is being opened to be read
for i in range(0,5): # Read in each line from the file
# Do some processing
fout.write("i = {0}.\n".format(i))
fout.close() # Once the file has been read, close the file
filename = "tmp.txt"
with open(filename,"w") as fout:
for i in range(0,5):
fout.write("i = {0}.\n".format(i))
# No explicit close() is needed here: the "with" statement closes the file automatically
filename = "tmp.txt"
fin = open(filename,"r") # The 'r' option indicates that the file is being opened to be read
for line in fin: # Read in each line from the file
# Do some processing
print(line)
fin.close() # Once the file has been read, close the file
filename = "tmp.txt"
with open(filename,"r") as fin:
for line in fin:
# Do some processing
print(line)
import csv
with open("game.csv","wb") as csvfile:
csvwriter = csv.writer(csvfile,delimiter=',')
csvwriter.writerow(["rock","paper","scissor"])
csvwriter.writerow(["ji","gu","pa"])
csvwriter.writerow(["rock","paper","scissor","lizard","Spock"])
!cat game.csv
import csv
with open("game.csv","r") as csvfile:
csvreader = csv.reader(csvfile,delimiter=",")
for row in csvreader:
print(row)
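csv.reader accepts any iterable of lines, so an in-memory io.StringIO object can stand in for a file — handy for quick experiments without touching disk. A minimal sketch:

```python
import csv
import io

# csv.reader works on any iterable of lines, not just file objects.
data = "rock,paper,scissor\nji,gu,pa\n"
rows = list(csv.reader(io.StringIO(data), delimiter=","))
print(rows)  # [['rock', 'paper', 'scissor'], ['ji', 'gu', 'pa']]
```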
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Comments can also be placed on the same line as the code as shown here.
Step3: For multi-line comments, use triple-quoted strings.
Step4: 1.4 Python's print() function
Step5: For web-scraping and text-processing type tasks, we'd like better control over how things get printed out, such as the number of decimal places when printing out floating point numbers. Use the format() method on the string to be printed out to control the output format.
Step6: 2. Numeric Variable Types
Step7: 2.2. Floating Point Numbers
Step8: 3. Basic Operators
Step9: 3.2. Assignment Operators
Step10: 3.3. Comparison Operators
Step11: 3.4. Logical Operators
Step13: 4. Strings
Step14: Strings can be concatenated using the addition(+) operator.
Step15: Strings can also be repeated with the multiplication (*) operator.
Step16: The len() function returns the length of a string.
Step17: Since, the Python string is a sequence of characters, individual characters in the string can be indexed. Note that, unlike R, in Python sequences indexing starts at 0 and goes up to one less than the length of the sequence.
Step18: Strings are immutable. That is, an existing instance of a string cannot be modified. Instead, a new string that contains the modification should be created.
Step19: Strings come with some powerful methods (https
Step20: 5. Python Data Structures
Step21: List elements can be individually referenced using their index in the list. Python indexing starts with 0 and runs up to the length of the sequence - 1. The square bracket is used to specify the index in to the list. This notation can also be used to assign values to the elements of the list. In contrast to strings, lists are mutable.
Step22: Python lists can be sliced using the slice notation of two indices separated by a colon. An omitted first index indicates 0 and an omitted second index indicates the length of the list/sequence.
Step23: Python slice notation can also be used to assign into lists.
Step24: Python lists come with useful methods to add elements - append() and extend()
Step25: Python lists can be concatenated using the "+" operator (similar to strings).
Step26: Python lists can be nested - list within a list within a list and so on. An index needs to be specified for each level of nesting.
Step27: 5.2. Tuples
Step28: Tuples are immutable. Attempting to change elements of a tuple will result in errors.
Step29: Tuples can be packed from and unpacked into individual elements.
Step30: One can declare tuples of tuples.
Step31: One can declare a tuple of lists.
Step32: One can also declare a list of tuples.
Step33: 5.3. Dictionaries
Step34: Parsing hierarchical data structures involving Python dictionaries will be very useful when working with the JSON data format and APIs such as the Twitter API.
Step35: Values in a dictionary can also be lists.
Step36: Lastly, we can have lists of dictionaries
Step37: 6. Control Flow
Step38: Scripting languages, such as Python, make it easy to automate repetitive tasks. In this workshop, we'll use two of Python's syntactic constructs for iteration - the for loop and the while loop.
Step39: 6.3 <tt>range()</tt> Function
Step40: 6.4 while() Statement
Step41: 6.5 <tt>break</tt> and <tt>continue</tt> Statements
Step42: If, instead of exiting the loop, one merely wishes to skip that iteration, then use the continue statement as shown here.
Step43: 7. Python File I/O
Step44: Alternative syntax for writing to file using 'with open' is shown below.
Step45: 7.2 Reading from a file
Step46: The code below demonstrates another way to open a file and read each line. With this syntax, the file is automatically closed after the <tt >with</tt> block.
Step47: An input file can also be read in as one string by using the read() method.
Step48: And this is an example of reading the games.csv file. Each row of the csv file is read in as a list.
|
2,708
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
data = load_data()
scaler = StandardScaler()
scaler.fit(data)
scaled = scaler.transform(data)
inversed = scaler.inverse_transform(scaled)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
2,709
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
learning_rate = 0.001
image_size = mnist.train.images.shape[1]
# Input and target placeholders
inputs_ = tf.placeholder(dtype=tf.float32, shape=(None, 28, 28, 1), name="inputs")
targets_ = tf.placeholder(dtype=tf.float32, shape=(None, 28, 28, 1), name="targets")
### Encoder
'''
tf.layers.conv2d(inputs,
filters,
kernel_size,
strides=(1, 1), # stride of (1, 1) will not reduce size
padding='valid',
data_format='channels_last',
dilation_rate=(1, 1),
activation=None,
use_bias=True,
kernel_initializer=None,
bias_initializer=tf.zeros_initializer(),
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
trainable=True,
name=None,
reuse=None
)
max_pooling2d(
inputs,
pool_size,
strides,
padding='valid',
data_format='channels_last',
name=None
)
'''
conv1 = tf.layers.conv2d(inputs_, 16, (5, 5), padding="same", activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), strides=(2, 2), padding="same")
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (5, 5), padding="same", activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), strides=(2, 2), padding="same")
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (5, 5), padding="same", activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2, 2), strides=(2, 2), padding="same")
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_images(encoded, (7, 7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (5, 5), padding="same", activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_images(conv4, (14, 14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (5, 5), padding="same", activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_images(conv5, (28, 28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (5, 5), padding="same", activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (5, 5), padding="same", activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='output')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
# (Exercise cells filled in to match the shape comments; kernel size (3, 3) is one reasonable choice)
conv1 = tf.layers.conv2d(inputs_, 32, (3, 3), padding="same", activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), strides=(2, 2), padding="same")
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3, 3), padding="same", activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), strides=(2, 2), padding="same")
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3, 3), padding="same", activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2, 2), strides=(2, 2), padding="same")
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_images(encoded, (7, 7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3, 3), padding="same", activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_images(conv4, (14, 14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3, 3), padding="same", activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_images(conv5, (28, 28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3, 3), padding="same", activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3, 3), padding="same", activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Set's how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Network Architecture
Step2: Training
Step3: Denoising
Step4: Checking out the performance
|
2,710
|
<ASSISTANT_TASK:>
Python Code:
import cashflows as cf
cflo = cf.cashflow(const_value= 0,nper=6, spec = [(0, -1000),
(1, 400),
(2, 360),
(3, 320),
(4, 280),
(5, 240)])
cflo
cf.timevalue(cflo = cflo,
marr = cf.nominal_rate([10]*6) )
cf.timevalue(cflo = cflo,
marr = cf.nominal_rate([10]*6),
base_date = 5 )
## la función npv puede recibir simultaneamente varios flujos de efectivo
cf.timevalue(cflo = [cflo,cflo,cflo],
marr = cf.nominal_rate([10]*6))
# o varias tasas de interes
marr1 = cf.nominal_rate([10]*6)
marr2 = cf.nominal_rate([20]*6)
marr3 = cf.nominal_rate([30]*6)
cf.timevalue(cflo = cflo,
marr = [marr1, marr2, marr3])
## o una tasa de descuento para cada flujo de efectivo
marr1 = cf.nominal_rate([10]*6)
marr2 = cf.nominal_rate([20]*6)
marr3 = cf.nominal_rate([30]*6)
cf.timevalue(cflo = [cflo, cflo, cflo],
marr = [marr1, marr2, marr3])
cf.irr(cflo)
cf.mirr(cflo=cflo, finance_rate=0, reinvest_rate=0)
## la función puede recibir varios flujos de fondos simulataneamente.
cf.irr([cflo, cflo, cflo])
## se construye una función que recibe la información relevante y retorn el npv
def project(marr,
produccion,
precio,
costo,
inversion):
ingre = cf.cashflow(const_value=0,
nper=11,
spec = [(t, precio * produccion) if t > 0 else (0,0) for t in range(11)])
opera = cf.cashflow(const_value=0,
nper=11,
spec = [(t, costo) if t > 0 else (0,0) for t in range(11)])
inver = cf.cashflow(const_value=0,
nper=11,
spec = (0, inversion))
dep = cf.depreciation_sl(costs = cf.cashflow(const_value=0, nper=11, spec=(0, inversion)),
life = cf.cashflow(const_value=0, nper=11, spec=(0, 10)),
noprint = True)
antes = ingre - opera - inver - dep
desp = cf.after_tax_cashflow(antes,
tax_rate = cf.nominal_rate(const_value=[30] * 11))
neto = antes + desp
npv = cf.timevalue(cflo = neto,
marr = cf.nominal_rate([marr]*11))
return npv
project(10, 100, 10, 220, 2000)
## resultados para diferentes valores de la MARR
x=[]
for i in [8, 10, 12]:
x.append(project( i, 100, 10, 220, 2000))
x
## resultados para diferentes valores de la inversión
[project(10, 100, 10, 220, x) for x in [1600, 1800, 2000, 2200, 2400]]
## resultados para diferentes valores del precio
[project(10, 100, x, 220, 2000) for x in [8, 9, 10, 11, 12]]
import matplotlib.pyplot as plt
%matplotlib inline
precio = [8, 9, 10, 11, 12]
y = [project(0.10, 100, x, 220, 2000) for x in precio]
plt.plot(precio, y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Criterio de la tasa interna de retorno
Step2: Tasa Interna de Retorno Modificada.
Step3: Análisis de sensibilidad
|
2,711
|
<ASSISTANT_TASK:>
Python Code:
import os
import matplotlib.pyplot as plt
from ascat.h_saf import AscatSsmDataRecord
test_data_path = os.path.join('..', 'tests','ascat_test_data', 'hsaf')
h109_path = os.path.join(test_data_path, 'h109')
h110_path = os.path.join(test_data_path, 'h110')
h111_path = os.path.join(test_data_path, 'h111')
grid_path = os.path.join(test_data_path, 'grid')
static_layer_path = os.path.join(test_data_path, 'static_layer')
h109_dr = AscatSsmDataRecord(h109_path, grid_path, static_layer_path=static_layer_path)
h110_dr = AscatSsmDataRecord(h110_path, grid_path, static_layer_path=static_layer_path)
h111_dr = AscatSsmDataRecord(h111_path, grid_path, static_layer_path=static_layer_path)
gpi = 2501225
h109_ts = h109_dr.read(gpi)
fig, ax = plt.subplots(1, 1, figsize=(15, 5))
ax.plot(h109_ts['sm'], label='Metop ASCAT SSM Data Record (H109)')
ax.set_ylabel('Degree of Saturation (%)')
ax.legend()
h110_ts = h110_dr.read(gpi)
fig, ax = plt.subplots(1, 1, figsize=(15, 5))
ax.plot(h109_ts['sm'], label='Metop ASCAT SSM Data Record (H109)')
ax.plot(h110_ts['sm'], label='Metop ASCAT SSM Data Record Extension (H110)')
ax.set_ylabel('Degree of Saturation (%)')
ax.legend()
fields = ['sm', 'sm_noise', 'ssf', 'snow_prob', 'frozen_prob']
h111_ts = h111_dr.read(gpi)
fig, ax = plt.subplots(1, 1, figsize=(15, 5))
h111_ts[fields].plot(ax=ax)
ax.legend()
conf_flag_ok = h111_ts['conf_flag'] == 0
fig, ax = plt.subplots(1, 1, figsize=(15, 5))
h111_ts[conf_flag_ok][fields].plot(ax=ax)
ax.legend()
metop_a = h111_ts[conf_flag_ok]['sat_id'] == 3
metop_b = h111_ts[conf_flag_ok]['sat_id'] == 4
fig, ax = plt.subplots(1, 1, figsize=(15, 5))
h111_ts[conf_flag_ok]['sm'][metop_a].plot(ax=ax, ls='none', marker='o',
color='C1', fillstyle='none', label='Metop-A SSM')
h111_ts[conf_flag_ok]['sm'][metop_b].plot(ax=ax, ls='none', marker='o',
color='C0', fillstyle='none', label='Metop-B SSM')
ax.set_ylabel('Degree of Saturation (%)')
ax.legend()
h111_ts = h111_dr.read(gpi, absolute_sm=True)
conf_flag_ok = h111_ts['conf_flag'] == 0
fig, ax = plt.subplots(1, 1, figsize=(15, 5))
h111_ts[conf_flag_ok]['abs_sm_gldas'].plot(ax=ax, label='Absolute SSM using porosity from Noah GLDAS')
h111_ts[conf_flag_ok]['abs_sm_hwsd'].plot(ax=ax, label='Absolute SSM using porosity from HWSD')
ax.set_ylabel('Vol. soil moisture ($m^3 m^{-3}$)')
ax.legend()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A soil moisture time series is read for a specific grid point. The data attribute contains a pandas.DataFrame object.
Step2: Time series plots
Step3: The SSM data record H109 can be extended using H110, representing a consistent continuation of the data set
Step4: A soil moisture time series can also be plotted using the plot function provided by the pandas.DataFrame. The following example displays several variables stored in the time series.
Step5: Masking invalid soil moisture measurements
Step6: Differentiate between soil moisture from Metop satellites
Step7: Convert to absolute surface soil moisture
|
2,712
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
return {'.' : '||Period||',
',' : '||Comma||',
'"' : '||Quotation_Mark||',
';' : '||Semicolon||',
'!' : '||Exclamation_mark||',
'?' : '||Question_mark||',
'(' : '||Left_Parentheses||',
')' : '||Right_Parentheses||',
'--' : '||Dash||',
'\n' : '||Return||'
}
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return inputs, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
#drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=0.8)
cell = tf.contrib.rnn.MultiRNNCell([lstm] * 1)
initial_state = cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, name='initial_state')
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
inputs = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, inputs)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None,
weights_initializer = tf.truncated_normal_initializer(stddev=0.1),
biases_initializer = tf.zeros_initializer())
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
#print(int_text[:20])
#print(batch_size)
#print(seq_length)
#print(len(int_text))
#print(batches[0][0][1])
#batches = np.array(batches)
#print(batches.shape)
#print(len(int_text))
num_batches = len(int_text)//(batch_size * seq_length)
#print(num_batches)
batches = np.zeros([num_batches, 2, batch_size, seq_length], dtype=np.int)
for i in range(num_batches):
for j in range(batch_size):
batches[i,0,j,:] = int_text[num_batches*seq_length*j + i*seq_length : num_batches*seq_length*j + seq_length + i*seq_length]
batches[i,1,j,:] = int_text[num_batches*seq_length*j + i*seq_length + 1 : num_batches*seq_length*j + seq_length + 1 + i*seq_length]
if i == num_batches-1 and j == batch_size-1:
#print('!!')
batches[i,1,j,seq_length-1] = int_text[0]
#print(batches[num_batches-1, 2, batch_size, seq_length])
#print(batches[1])
#print(batches[0][1])
return batches
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
# Number of Epochs
num_epochs = 50
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 512
# Embedding Dimension Size
embed_dim = 512
# Sequence Length
seq_length = 10
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 10
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
inputs = loaded_graph.get_tensor_by_name("input:0")
initial_state = loaded_graph.get_tensor_by_name("initial_state:0")
final_state = loaded_graph.get_tensor_by_name("final_state:0")
probs = loaded_graph.get_tensor_by_name("probs:0")
return inputs, initial_state, final_state, probs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
#print(probabilities)
return int_to_vocab[np.argmax(probabilities)]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
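The `pick_word` above always takes the argmax, which makes generation deterministic. A common alternative (hypothetical helper, not part of the original notebook) is to sample the next word in proportion to its predicted probability, which yields more varied generated scripts:

```python
import numpy as np

def pick_word_sampled(probabilities, int_to_vocab, seed=None):
    # Hypothetical alternative to the argmax-based pick_word above:
    # draw a word id weighted by its predicted probability.
    rng = np.random.RandomState(seed)
    p = np.asarray(probabilities, dtype=np.float64)
    p = p / p.sum()  # renormalize to guard against rounding drift
    return int_to_vocab[rng.choice(len(p), p=p)]
```

This is a sketch under the assumption that `probabilities` is a 1-D array over the vocabulary, as in the generation loop above.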
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step9: Tokenize Punctuation
Step11: Preprocess all the data and save it
Step13: Check Point
Step15: Build the Neural Network
Step18: Input
Step21: Build RNN Cell and Initialize
Step24: Word Embedding
Step27: Build RNN
Step30: Build the Neural Network
Step33: Batches
Step35: Neural Network Training
Step37: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Implement Generate Functions
Step49: Choose Word
Step51: Generate TV Script
|
2,713
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
np.set_printoptions(precision=2)
from sklearn.datasets import load_digits
from sklearn.cross_validation import train_test_split
from sklearn.svm import LinearSVC
digits = load_digits()
X, y = digits.data, digits.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
classifier = LinearSVC().fit(X_train, y_train)
y_test_pred = classifier.predict(X_test)
print("Accuracy: %f" % classifier.score(X_test, y_test))
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_test_pred)
plt.matshow(confusion_matrix(y_test, y_test_pred))
plt.colorbar()
plt.xlabel("Predicted label")
plt.ylabel("True label")
from sklearn.metrics import classification_report
print(classification_report(y_test, y_test_pred))
X, y = digits.data, digits.target == 3
from sklearn.cross_validation import cross_val_score
from sklearn.svm import SVC
cross_val_score(SVC(), X, y)
from sklearn.dummy import DummyClassifier
cross_val_score(DummyClassifier("most_frequent"), X, y)
from sklearn.metrics import roc_curve, roc_auc_score
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
for gamma in [.01, .05, 1]:
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate (recall)")
svm = SVC(gamma=gamma).fit(X_train, y_train)
decision_function = svm.decision_function(X_test)
fpr, tpr, _ = roc_curve(y_test, decision_function)
acc = svm.score(X_test, y_test)
auc = roc_auc_score(y_test, svm.decision_function(X_test))
plt.plot(fpr, tpr, label="acc:%.2f auc:%.2f" % (acc, auc), linewidth=3)
plt.legend(loc="best")
from sklearn.cross_validation import cross_val_score
cross_val_score(SVC(), X, y, scoring="roc_auc")
from sklearn.metrics.scorer import SCORERS
print(SCORERS.keys())
def my_accuracy_scoring(est, X, y):
return np.mean(est.predict(X) == y)
cross_val_score(SVC(), X, y, scoring=my_accuracy_scoring)
def my_super_scoring(est, X, y):
return np.mean(est.predict(X) == y) - np.mean(est.coef_ != 0)
from sklearn.grid_search import GridSearchCV
from sklearn.svm import LinearSVC
y = digits.target
grid = GridSearchCV(LinearSVC(C=.01, dual=False),
param_grid={'penalty' : ['l1', 'l2']},
scoring=my_super_scoring)
grid.fit(X, y)
print(grid.best_params_)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here, we predicted 94.4% of samples correctly. For multi-class problems, it is often interesting to know which of the classes are hard to predict, and which are easy, or which classes get confused. One way to get more information about misclassifications is the confusion_matrix, which shows for each true class, how frequent a given predicted outcome is.
Step2: A plot is sometimes more readable
Step3: We can see that most entries are on the diagonal, which means that we predicted nearly all samples correctly. The off-diagonal entries show us that many eights were classified as ones, and that nines are likely to be confused with many other classes.
Step4: These metrics are helpful in two particular cases that come up often in practice
Step5: Now we run cross-validation on a classifier to see how well it does
Step6: Our classifier is 90% accurate. Is that good? Or bad? Keep in mind that 90% of the data is "not three". So let's see how well a dummy classifier does, that always predicts the most frequent class
Step7: Also 90% (as expected)! So one might think that means our classifier is not very good; it doesn't do better than a simple strategy that doesn't even look at the data.
Step8: With a very small decision threshold, there will be few false positives, but also few false negatives, while with a very high threshold, both true positive rate and false positive rate will be high. So in general, the curve will be from the lower left to the upper right. A diagonal line reflects chance performance, while the goal is to be as much in the top left corner as possible. This means giving a higher decision_function value to all positive samples than to any negative sample.
Step9: Built-In and custom scoring functions
Step10: It is also possible to define your own scoring metric. Instead of a string, you can provide a callable to as scoring parameter, that is an object with a __call__ method or a function.
Step11: The interesting thing about this interface is that we can access any attributes of the estimator we trained. Let's say we have trained a linear model, and we want to penalize having non-zero coefficients in our model selection
Step12: We can evaluate if this worked as expected, by grid-searching over l1 and l2 penalties in a linear SVM. An l1 penalty is expected to produce exactly zero coefficients
|
2,714
|
<ASSISTANT_TASK:>
Python Code:
import csv
data = []
revid = []
with open('page_data.csv') as csvfile:
reader = csv.reader(csvfile)
for row in reader:
data.append([row[0],row[1],row[2]])
revid.append(row[2])
# Remove the first element ('rev_id') from revid so that the list only contains revision IDs.
revid.pop(0)
from itertools import islice
import csv
import pandas as pd
population = []
with open('Population Mid-2015.csv') as population_file:
reader = csv.reader(population_file)
# note that first row is title; the second and last two rows are blank
# skip first and last two rows in the csv file
for row in islice(reader,2,213):
population.append([row[0],row[4]])
chunks = [revid[x:x+50] for x in range(0, len(revid), 50)]
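The slicing one-liner above batches the revision IDs into groups of 50 for the ORES API; the same pattern can be written as a small reusable generator (hypothetical helper name, not part of the original notebook):

```python
def chunked(items, size):
    # Yield consecutive slices of at most `size` items, mirroring the
    # revid[x:x+50] list comprehension above.
    for start in range(0, len(items), size):
        yield items[start:start + size]
```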
import requests
import json
def get_ores_data(revision_ids, headers):
# Define the endpoint
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
# Specify the parameters - smushing all the revision IDs together separated by | marks.
# Yes, 'smush' is a technical term, trust me I'm a scientist.
# What do you mean "but people trusting scientists regularly goes horribly wrong" who taught you tha- oh.
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revision_ids)
}
api_call = requests.get(endpoint.format(**params))
response = api_call.json()
return response
headers = {'User-Agent' : 'https://github.com/yawen32', 'From' : 'liy44@uw.edu'}
article_quality = []
for i in range(len(chunks)):
response = get_ores_data(chunks[i],headers)
aq = response['enwiki']['scores']
for j in range(len(chunks[i])):
for key in aq[chunks[i][j]]["wp10"]:
# Flag the articles that have been deleted

if key == "error":
article_quality.append("None")
else:
article_quality.append(aq[chunks[i][j]]['wp10']['score']['prediction'])
aq = open("article_quality.txt","w")
for item in article_quality:
aq.write("{}\n".format(item))
aq.close()
with open("article_quality.csv","w",newline="") as f:
aqcsv = csv.writer(f)
aqcsv.writerow(article_quality)
with open('article_quality.txt','r') as f:
articleQuality = f.read().splitlines()
wiki_data = pd.DataFrame(data[1:],columns=data[0])
wiki_data
len(pd.Series(articleQuality).values)
# Add the ORES data into the Wikipedia data
wiki_data["article_quality"] = pd.Series(articleQuality).values
# Rename columns of the Wikipedia data
wiki_data.columns = ["article_name","country","revision_id","article_quality"]
# Convert data (country and population) from the population file to dataframe
population_data = pd.DataFrame(population[1:],columns=population[0])
# Renames the columns with suitable names
population_data.columns = ["Location","population"]
# Merge two datasets(wiki_data and population_data) base on the common key (country name). This step removes the rows do not have
# matching data automatically.
merge_data = pd.merge(wiki_data, population_data, left_on = 'country', right_on = 'Location', how = 'inner')
merge_data = merge_data.drop('Location', axis=1)
# Swap first and second columns so that the dataframe follows the formatting conventions
merge_data = merge_data[["country","article_name","revision_id","article_quality","population"]]
merge_data.to_csv("final_data.csv")
# Extract column "country" from merge data
merge_country = merge_data.iloc[:,0].tolist()
# Count the number of articles for each country
from collections import Counter
count_article = Counter(merge_country)
prop_article_per_population = []
df_prop_article_per_population = pd.DataFrame(columns=['country', 'population', 'num_articles','prop_article_per_population'])
num_country = 0
for country in count_article:
population = int(population_data.loc[population_data["Location"] == country, "population"].iloc[0].replace(",",""))
percentage = count_article[country] / population
prop_article_per_population.append("{:.10%}".format(percentage))
df_prop_article_per_population.loc[num_country] = [country,population,count_article[country],"{:.10%}".format(percentage)]
num_country += 1
# Show the table of the proportion of articles-per-population for each country
df_prop_article_per_population
prop_high_quality_articles_each_country = []
df_prop_high_quality_articles_each_country = pd.DataFrame(columns=["country","num_high_quality_articles","num_articles","prop_high_quality_articles"])
num_country = 0
for country in count_article:
num_FA = Counter(merge_data.loc[merge_data['country'] == country].iloc[:,3].tolist())['FA']
num_GA = Counter(merge_data.loc[merge_data['country'] == country].iloc[:,3].tolist())['GA']
num_high_quality = num_FA + num_GA
percentage = num_high_quality / count_article[country]
prop_high_quality_articles_each_country.append("{:.10%}".format(percentage))
df_prop_high_quality_articles_each_country.loc[num_country] = [country,num_high_quality,count_article[country],"{:.10%}".format(percentage)]
num_country += 1
# Show the table of the proportion of high-quality articles for each country
df_prop_high_quality_articles_each_country
# Get index of 10 highest-ranked countries
idx = df_prop_article_per_population["prop_article_per_population"].apply(lambda x:float(x.strip('%'))/100).sort_values(ascending=False).index[0:10]
# Retrieve these rows by index values
highest_rank_10_prop_article_per_population = df_prop_article_per_population.loc[idx]
highest_rank_10_prop_article_per_population.to_csv("highest_rank_10_prop_article_per_population.csv")
highest_rank_10_prop_article_per_population
# Get index of 10 lowest-ranked countries
idx = df_prop_article_per_population["prop_article_per_population"].apply(lambda x:float(x.strip('%'))/100).sort_values(ascending=True).index[0:10]
# Retrieve these rows by index values
lowest_rank_10_prop_article_per_population = df_prop_article_per_population.loc[idx]
lowest_rank_10_prop_article_per_population.to_csv("lowest_rank_10_prop_article_per_population.csv")
lowest_rank_10_prop_article_per_population
# Get index of 10 highest-ranked countries
idx = df_prop_high_quality_articles_each_country["prop_high_quality_articles"].apply(lambda x:float(x.strip('%'))/100).sort_values(ascending=False).index[0:10]
# Retrieve these rows by index values
highest_rank_10_prop_high_quality_articles = df_prop_high_quality_articles_each_country.loc[idx]
highest_rank_10_prop_high_quality_articles.to_csv("highest_rank_10_prop_high_quality_articles.csv")
highest_rank_10_prop_high_quality_articles
# Get index of 10 lowest-ranked countries
idx = df_prop_high_quality_articles_each_country["prop_high_quality_articles"].apply(lambda x:float(x.strip('%'))/100).sort_values(ascending=True).index[0:10]
# Retrieve these rows by index values
lowest_rank_10_prop_high_quality_articles = df_prop_high_quality_articles_each_country.loc[idx]
lowest_rank_10_prop_high_quality_articles.to_csv("lowest_rank_10_prop_high_quality_articles_allzeros.csv")
lowest_rank_10_prop_high_quality_articles
# Get index of 10 lowest-ranked countries that proportions of high-quality articles are NOT equal to 0
idx = df_prop_high_quality_articles_each_country["prop_high_quality_articles"].apply(lambda x:float(x.strip('%'))/100).sort_values(ascending=True)!=0
idx_not_zero = idx[idx == True].index[0:10]
lowest_rank_10_prop_high_quality_articles_not_zero = df_prop_high_quality_articles_each_country.loc[idx_not_zero]
lowest_rank_10_prop_high_quality_articles_not_zero.to_csv("lowest_rank_10_prop_high_quality_articles_notzeros.csv")
lowest_rank_10_prop_high_quality_articles_not_zero
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting the data (country and population) from the population file
Step2: Getting article quality predictions
Step3: Write a function to make a request with multiple revision IDs
Step4: Request the values for prediction (the quality of an article) from ORES API.
Step5: Save prediction values to a file
Step6: Read prediction values from the saved file
Step7: Combining the datasets
Step8: Write merged data to a CSV file
Step9: Analysis
Step10: Calculate the proportion (as a percentage) of high-quality articles for each country.
Step11: Tables
Step12: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
Step13: 10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
Step14: 10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
|
2,715
|
<ASSISTANT_TASK:>
Python Code:
from accounts import create_accounts_json
num_files = 25
n = 100000 # number of accounts per file
k = 500 # number of transactions
create_accounts_json(num_files, n, k)
from nfs import create_denormalized
create_denormalized()
from random_array import random_array
random_array()
Image("http://dask.pydata.org/en/latest/_images/collections-schedulers.png")
SVG("http://dask.pydata.org/en/latest/_images/dask-array-black-text.svg")
import dask.array as da
x = da.arange(25, chunks=5)
y = x ** 2
y
y.visualize()
y.dask.keys()
y.compute()
np.array(y)
y.compute(get=dask.get)
y.compute(get=dask.threaded.get)
from multiprocessing import cpu_count
cpu_count()
y.compute(get=dask.multiprocessing.get)
import h5py
import os
f = h5py.File(os.path.join('..', 'data', 'random.hdf5'))
dset = f['/x']
sums = []
for i in range(0, 1000000000, 1000000):
chunk = dset[i: i + 1000000]
sums.append(chunk.sum())
total = np.sum(sums)
print(total / 1e9)
x = da.from_array(dset, chunks=(1000000, ))
result = x.mean()
result
result.compute()
x[:10].compute()
# [Solution here]
%load solutions/dask_array.py
import numpy as np
%%time
x = np.random.normal(10, 0.1, size=(20000, 20000))
y = x.mean(axis=0)[::100]
y
%%time
x = da.random.normal(10, 0.1, size=(20000, 20000), chunks=(1000, 1000))
y = x.mean(axis=0)[::100]
y.compute()
import os
import dask.bag as db
bag = db.read_text(os.path.join('..', 'data', 'accounts.*.json.gz'))
bag.take(3)
import json
js = bag.map(json.loads)
js.take(3)
counts = js.pluck('name').frequencies()
counts.compute()
%load solutions/bag_alice.py
b = db.from_sequence(['Alice', 'Bob', 'Charlie', 'Dan', 'Edith', 'Frank'])
b.groupby(len).compute()
b = db.from_sequence(list(range(10)))
b.groupby(lambda x: x % 2).compute()
b.groupby(lambda x: x % 2).map(lambda k, v: (k, max(v))).compute()
import functools
values = range(10)
def func(acc, y):
print(acc)
print(y)
print()
return acc + y
functools.reduce(func, values)
b.foldby(lambda x: x % 2, binop=max, combine=max).compute()
js.take(1)
from dask.diagnostics import ProgressBar
counts = js.foldby(key='name',
binop=lambda total, x: total + 1,
initial=0,
combine=lambda a, b: a + b,
combine_initial=0)
with ProgressBar():
result = counts.compute()
result
%load solutions/bag_foldby.py
import dask.dataframe as dd
df = dd.read_csv("../data/NationalFoodSurvey/NFS*.csv")
df.head(5)
df.npartitions
df.known_divisions
partitions = list(range(1974, 2001)) + [2000]
df = df.set_partition('styr', divisions=partitions)
df.known_divisions
df.divisions
df.info()
df2000 = df.get_division(26)
type(df2000)
df2000.set_index('minfd')
grp = df2000.groupby('minfd')
size = grp.apply(len, columns='size')
size.head()
minfd = size.compute().idxmax()
print(minfd)
food_mapping = pd.read_csv("../data/NationalFoodSurvey/food_mapping.csv")
food_mapping.ix[food_mapping.minfd.isin([minfd])]
# [Solution here]
%load solutions/nfs_most_purchased.py
def most_frequent_food(partition):
# partition is a pandas.DataFrame
grpr = partition.groupby('minfd')
size = grpr.size()
minfd = size.idxmax()
idx = food_mapping.minfd.isin([minfd])
description = food_mapping.ix[idx].minfddesc.iloc[0]
year = int(partition.styr.iloc[0])
return year, description
mnfd_year = df.map_partitions(most_frequent_food)
mnfd_year.compute()
zip(mnfd_year.compute(),)
%load solutions/average_consumption.py
Image('images/bcolz_bench.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Denormalize NFS Data
Step2: Random Array
Step3: Dask
Step4: Dask Array
Step5: Arithmetic and scalar mathematics, +, *, exp, log, ...
Step6: The idea of the chunk is important and has performance implications
Step7: Dask operates on a delayed computation model
Step8: You can execute the graph by using compute
Step9: As an example of the __array__ protocol
Step10: Scheduling Backends
Step11: Threaded Scheduler
Step12: By default, dask will use as many threads as there are logical processors on your machine
Step13: Process Scheduler
Step14: Distributed Executor
Step15: If we were to implement this ourselves, it might look like this
Step16: Dask does this for you and uses the backend scheduler to do so in parallel
Step17: x looks and behaves much like a numpy array
Step18: Exercise
Step19: Performance vs. NumPy
Step20: Faster, and needs only megabytes of memory
Step21: Linear Algebra
Step22: Using map to process the lines in the text files
Step23: Exercise
Step24: GroupBy / FoldBy
Step25: Group by evens and odds
Step26: Group by evens and odds and take the largest value
Step27: FoldBy, while harder to grok, is much more efficient
Step28: Using the accounts data above, find the number of people with the same name
Step29: Exercise
Step30: Dask DataFrame
Step31: DataFrame.head is one operation that is not lazy
Step32: Partitions
Step33: We are going to set the partition explicitly to styr to make some operations more performant
Step34: Nothing yet is loaded into memory
Step35: DataFrame API
Step36: What food group was consumed the most times in 2000?
Step37: NOTE
Step38: There isn't (yet) support for idxmin/idxmax.
Step39: Get the pre-processed mapping across food grouping variables
Step40: Pandas provides the efficient isin method
Step41: Exercise
Step42: map_partitions
Step43: Exercise
Step44: Aside on Storage Formats
|
2,716
|
<ASSISTANT_TASK:>
Python Code:
%load_ext watermark
%watermark -a '' -u -d -v -p numpy,pandas,matplotlib,scipy,sklearn
%matplotlib inline
# Added version check for recent scikit-learn 0.18 checks
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
from sklearn.datasets import load_digits
digits = load_digits()
X = digits.data # training data
y = digits.target # training label
print(X.shape)
print(y.shape)
import matplotlib.pyplot as plt
import pylab as pl
num_rows = 4
num_cols = 5
fig, ax = plt.subplots(nrows=num_rows, ncols=num_cols, sharex=True, sharey=True)
ax = ax.flatten()
for index in range(num_rows*num_cols):
img = digits.images[index]
label = digits.target[index]
ax[index].imshow(img, cmap='Greys', interpolation='nearest')
ax[index].set_title('digit ' + str(label))
ax[0].set_xticks([])
ax[0].set_yticks([])
plt.tight_layout()
plt.show()
from sklearn.preprocessing import StandardScaler
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import train_test_split
else:
from sklearn.model_selection import train_test_split
print ('scikit-learn version: ' + str(Version(sklearn_version)))
# 1. Standardize features by removing the mean and scaling to unit variance
X_std = StandardScaler().fit_transform(X) # fit_transform(X) will fit to data, then transform it.
print ('1. Complete removing the mean and scaling to unit variance.')
# 2. splitting data into 70% training and 30% test data:
split_ratio = 0.3
X_train, X_test, y_train, y_test = train_test_split(X_std, y, test_size=split_ratio, random_state=0)
print('2. Complete splitting with ' + str(y_train.shape[0]) + \
'(' + str(int((1-split_ratio)*100)) +'%) training data and ' + \
str(y_test.shape[0]) + '(' + str(int(split_ratio*100)) +'%) test data.')
from sklearn.linear_model import Perceptron
from sklearn.metrics import accuracy_score
# Training
ppn = Perceptron(n_iter=800, eta0=0.1, random_state=0)
ppn.fit(X_train, y_train)
# Testing
y_pred = ppn.predict(X_test)
# Results
print('Misclassified samples: %d out of %d' % ((y_test != y_pred).sum(), y_test.shape[0]))
print('Accuracy: %.3f' % accuracy_score(y_test, y_pred))
from sklearn.linear_model import LogisticRegression
# Training
lr = LogisticRegression(C=1.0, random_state=0) # we observe that changing C from 0.0001 to 1000 has ignorable effect
lr.fit(X_train, y_train)
# Testing
y_pred = lr.predict(X_test)
# Results
print('Misclassified samples: %d out of %d' % ((y_test != y_pred).sum(), y_test.shape[0]))
print('Accuracy: %.3f' % accuracy_score(y_test, y_pred))
from sklearn.svm import SVC
# 1. Using linear kernel
# Training
svm = SVC(kernel='linear', C=1.0, random_state=0)
svm.fit(X_train, y_train)
# Testing
y_pred = svm.predict(X_test)
# Results
print('1. Using linear kernel:')
print(' Misclassified samples: %d out of %d' % ((y_test != y_pred).sum(), y_test.shape[0]))
print(' Accuracy: %.3f' % accuracy_score(y_test, y_pred))
# 2. Using rbf kernel
# Training
svm = SVC(kernel='rbf', C=1.0, random_state=0)
svm.fit(X_train, y_train)
# Testing
y_pred = svm.predict(X_test)
# Results
print('2. Using rbf kernel:')
print(' Misclassified samples: %d out of %d' % ((y_test != y_pred).sum(), y_test.shape[0]))
print(' Accuracy: %.3f' % accuracy_score(y_test, y_pred))
from sklearn.tree import DecisionTreeClassifier
# 1. Using entropy criterion
# Training
tree = DecisionTreeClassifier(criterion='entropy', random_state=0)
tree.fit(X_train, y_train)
# Testing
y_pred = tree.predict(X_test)
# Results
print('1. Using entropy criterion:')
print(' Misclassified samples: %d out of %d' % ((y_test != y_pred).sum(), y_test.shape[0]))
print(' Accuracy: %.3f' % accuracy_score(y_test, y_pred))
# 2. Using Gini criterion
# Training
tree = DecisionTreeClassifier(criterion='gini', random_state=0)
tree.fit(X_train, y_train)
# Testing
y_pred = tree.predict(X_test)
# Results
print('2. Using Gini criterion:')
print(' Misclassified samples: %d out of %d' % ((y_test != y_pred).sum(), y_test.shape[0]))
print(' Accuracy: %.3f' % accuracy_score(y_test, y_pred))
from sklearn.ensemble import RandomForestClassifier
# 1. Using entropy criterion
# Training
forest = RandomForestClassifier(criterion='entropy', n_estimators=10, random_state=1, n_jobs=2)
forest.fit(X_train, y_train)
# Testing
y_pred = forest.predict(X_test)
# Results
print('1. Using entropy criterion:')
print(' Misclassified samples: %d out of %d' % ((y_test != y_pred).sum(), y_test.shape[0]))
print(' Accuracy: %.3f' % accuracy_score(y_test, y_pred))
# 2. Using Gini criterion
# Training
forest = RandomForestClassifier(criterion='gini', n_estimators=10, random_state=1, n_jobs=2)
forest.fit(X_train, y_train)
# Testing
y_pred = forest.predict(X_test)
# Results
print('2. Using Gini criterion:')
print(' Misclassified samples: %d out of %d' % ((y_test != y_pred).sum(), y_test.shape[0]))
print(' Accuracy: %.3f' % accuracy_score(y_test, y_pred))
from sklearn.neighbors import KNeighborsClassifier
# Training
knn = KNeighborsClassifier(n_neighbors=5, p=2, metric='minkowski')
knn.fit(X_train, y_train)
# Testing
y_pred = knn.predict(X_test)
# Results
print('Misclassified samples: %d out of %d' % ((y_test != y_pred).sum(), y_test.shape[0]))
print('Accuracy: %.3f' % accuracy_score(y_test, y_pred))
from sklearn.naive_bayes import GaussianNB
# Training
gnb = GaussianNB()
gnb.fit(X_train, y_train)
# Testing
y_pred = gnb.predict(X_test)
# Results
print('Misclassified samples: %d out of %d' % ((y_test != y_pred).sum(), y_test.shape[0]))
print('Accuracy: %.3f' % accuracy_score(y_test, y_pred))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data
Step2: Visualize data
Step3: Date Preprocessing
Step4: Classifier #1 Perceptron
Step5: Classifier #2 Logistic Regression
Step6: Classifier #3 SVM
Step7: Classifier #4 Decision Tree
Step8: Classifer #5 Random Forest
Step9: Classifier #6 KNN
Step10: Classifier #7 Naive Bayes
|
2,717
|
<ASSISTANT_TASK:>
Python Code:
import nbformat
from nbformat.v4 import new_notebook
nb = new_notebook()
display(nb)
nbformat.validate(nb)
nb.pizza = True
nbformat.validate(nb)
nb = new_notebook() # get rid of pizza
from nbformat.v4 import new_code_cell, new_markdown_cell, new_raw_cell
md = new_markdown_cell("First argument is the source string.")
display(md)
nb.cells.append(md)
raw = new_raw_cell(["Sources can also be a ","list of strings."])
display(raw)
nb.cells.append(raw)
code = new_code_cell(["#Either way, you need newlines\n",
"print('like this')"])
display(code)
nb.cells.append(code)
nbformat.write(nb, "my_demo_notebook.ipynb")
!ls my_*
nb2 = nbformat.read("my_demo_notebook.ipynb", as_version=4)
print(nb2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: cells
Step2: What happens if it's invalid?
Step3: Cells and their sources
Step4: Three types of cells
Step5: cell_type
Step6: cell_type
Step7: cell_type
|
2,718
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import math
def tf_assert_shape(tensors, requested_shape):
if not type(tensors) is list:
tensors = [tensors]
for tensor in tensors:
shape = tensor.get_shape().as_list()
error_msg = 'Tensor {} has shape {} while shape {} was requested'.format(tensor.name, shape, requested_shape)
assert shape == requested_shape, error_msg
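The same shape check can be exercised outside TensorFlow; here is a minimal NumPy analogue (assumed name `assert_shape`, not part of the original notebook) showing the intended behaviour:

```python
import numpy as np

def assert_shape(arrays, requested_shape):
    # Mirrors tf_assert_shape above: accept a single array or a list,
    # and fail loudly when any shape differs from the requested one.
    if not isinstance(arrays, list):
        arrays = [arrays]
    for a in arrays:
        shape = list(a.shape)
        assert shape == list(requested_shape), (
            'array has shape {} while shape {} was requested'.format(
                shape, requested_shape))
```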
BATCH_SIZE = 2500
NSAMPLE = BATCH_SIZE
DIM = 3 # number of dimentions
offset_data = np.float32(np.linspace(-10, 10, num=DIM))
y_data_one = np.float32(np.random.uniform(-10, 10, (NSAMPLE, 1)))
y_data = np.float32(np.random.uniform(-10, 10, (NSAMPLE, DIM)))
r_x = np.float32(np.random.normal(size=(NSAMPLE, 1)))
r_y = np.float32(np.random.normal(size=(NSAMPLE, 3)))
x_data= np.float32(np.sin(0.75*y_data_one)*7.0+y_data_one*1+r_x*0.4)
y_data[:,0] = y_data_one[:,0]
y_data[:,1] = y_data_one[:,0]
y_data[:,2] = y_data_one[:,0]
y_data = y_data + offset_data + r_y*0.4
plt.figure(figsize=(8, 8))
plt.plot(x_data , y_data[:,0],'ro', alpha=0.3)
plt.plot(x_data , y_data[:,1],'go', alpha=0.3)
plt.plot(x_data , y_data[:,2],'bo', alpha=0.3)
plt.show()
NHIDDEN = 24
STDEV = 0.5
KMIX = 10 # number of mixtures
NOUT = KMIX * DIM * 3 # pi, mu, sigma
FLOAT_TYPE = tf.float64
# Define input (x) and output (y)
# output can be higher dimension than x
x = tf.placeholder(dtype=FLOAT_TYPE, shape=[BATCH_SIZE, 1], name="x")
y = tf.placeholder(dtype=FLOAT_TYPE, shape=[BATCH_SIZE, DIM], name="y")
# Define coeficients
Wh = tf.Variable(tf.random_normal([1,NHIDDEN], stddev=STDEV, dtype=FLOAT_TYPE))
bh = tf.Variable(tf.random_normal([1,NHIDDEN], stddev=STDEV, dtype=FLOAT_TYPE))
Wo = tf.Variable(tf.random_normal([NHIDDEN,NOUT], stddev=STDEV, dtype=FLOAT_TYPE))
bo = tf.Variable(tf.random_normal([1, NOUT], stddev=STDEV, dtype=FLOAT_TYPE))
# connect layers
hidden_layer = tf.nn.tanh(tf.matmul(x, Wh) + bh)
output = tf.matmul(hidden_layer,Wo) + bo
# get_mixture_coef take the output of a model and convert it to a mixture model
def get_mixture_coef(output, batch_size, data_size, mixture_size):
tf_assert_shape(output, [batch_size, data_size * mixture_size * 3]) # 3 stand for pi, mu, stdev
# split the output
out_pi, out_sigma, out_mu = tf.split(output, 3, axis=1)
tf_assert_shape([out_pi, out_sigma, out_mu], [batch_size, data_size * mixture_size])
# reshape pi, sigma and mu
new_shape = [batch_size, data_size, mixture_size]
out_pi = tf.reshape(out_pi, new_shape)
out_sigma = tf.reshape(out_sigma, new_shape)
out_mu = tf.reshape(out_mu, new_shape)
# pi is a distribution
out_pi = tf.nn.softmax(out_pi, axis=2)
# sigma value are on an exponetial scale
out_sigma = tf.exp(out_sigma)
return out_pi, out_sigma, out_mu
# Add the mixture model to the tf graph
out_pi, out_sigma, out_mu = get_mixture_coef(output, NSAMPLE, DIM, KMIX)
oneDivSqrtTwoPI = 1 / math.sqrt(2*math.pi) # normalisation factor for the Gaussian density
def tf_normal(y, mu, sigma, batch_size, data_size):
# check args
tf_assert_shape(y, [batch_size, data_size])
tf_assert_shape([mu, sigma], [batch_size, data_size, KMIX])
y = tf.reshape(y, (batch_size, data_size, 1)) # add one dim to ease broadcast
result = tf.subtract(y, mu) # Broadcast should work now
tf_assert_shape(result, [batch_size, data_size, KMIX])
result = tf.multiply(result, tf.reciprocal(sigma)) # element wise
result = -tf.square(result)/2 # element wise
tf_assert_shape(result, [batch_size, data_size, KMIX])
return tf.multiply(tf.exp(result), tf.reciprocal(sigma))*oneDivSqrtTwoPI
def get_lossfunc(out_pi, out_sigma, out_mu, y, batch_size, data_size):
# check args
tf_assert_shape(y, [batch_size, data_size])
tf_assert_shape([out_pi, out_sigma, out_mu], [batch_size, data_size, KMIX])
normal = tf_normal(y, out_mu, out_sigma, batch_size, data_size)
tf_assert_shape(normal, [batch_size, data_size, KMIX])
result = tf.multiply(normal, out_pi) # element wise
tf_assert_shape(result, [batch_size, data_size, KMIX])
result = tf.reduce_sum(result, axis=2)
tf_assert_shape(result, [batch_size, data_size])
result = -tf.log(result)
result = tf.reduce_mean(result)
tf_assert_shape(result, [])
return result
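The loss above is the negative log-likelihood of a diagonal Gaussian mixture; a minimal NumPy sketch of the same computation on toy arrays (shapes and values are illustrative only, independent of the TF graph):

```python
import numpy as np

def mdn_nll(y, pi, mu, sigma):
    # y: (batch, dim); pi, mu, sigma: (batch, dim, kmix)
    y = y[..., None]  # add a mixture axis so broadcasting matches the TF code
    norm = np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    dens = np.sum(pi * norm, axis=2)  # weighted sum over mixture components
    return float(np.mean(-np.log(dens)))

# One point in one dimension, two equally weighted unit Gaussians centred at 0:
# the density at the mean is 1/sqrt(2*pi), so the NLL is -log(1/sqrt(2*pi)).
y = np.array([[0.0]])
pi = np.array([[[0.5, 0.5]]])
mu = np.zeros((1, 1, 2))
sigma = np.ones((1, 1, 2))
print(round(mdn_nll(y, pi, mu, sigma), 4))
```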
lossfunc = get_lossfunc(out_pi, out_sigma, out_mu, y, NSAMPLE, DIM)
train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(lossfunc)
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())  # initialize_all_variables is deprecated
NEPOCH = 15000
loss = np.zeros(NEPOCH) # store the training progress here.
for i in range(NEPOCH):
loss[i], _ = sess.run((lossfunc, train_op), feed_dict={x: x_data, y: y_data})
if i%500 == 0:
print ('step:',i,'loss:',loss[i])
plt.figure(figsize=(8, 8))
plt.plot(np.arange(500, NEPOCH,1), loss[500:], 'r-')
plt.show()
# Helper function to pick a Gaussian component according to the pi distribution
def generate_ensemble(out_pi, out_mu, out_sigma):
result = np.zeros([NSAMPLE, DIM]) # zeros; filled below with one sample per point
rn = np.random.randn(NSAMPLE, DIM) # normal random matrix (0.0, 1.0)
for dim in range(DIM):
for i in range(NSAMPLE):
idx = np.random.choice(KMIX, p=out_pi[i,dim])
mu = out_mu[i, dim, idx]
std = out_sigma[i, dim, idx]
result[i, dim] = mu + rn[i, dim]*std
return result
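generate_ensemble first draws a component index from pi, then samples from that component's Gaussian; the same two-step draw for a single point, with toy weights and parameters:

```python
import numpy as np

rng = np.random.RandomState(0)
pi = np.array([0.1, 0.9])        # component weights for one point/dimension
mu = np.array([-5.0, 5.0])
sigma = np.array([0.1, 0.1])

idx = rng.choice(len(pi), p=pi)              # step 1: pick a component
sample = mu[idx] + rng.randn() * sigma[idx]  # step 2: sample from it
print(idx, sample)
```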
# generate a new set of inputs
x_test = np.float32(np.arange(-12.5,12.5,0.01))
x_test = x_test[:NSAMPLE]
x_test = x_test.reshape(NSAMPLE, 1) # needs to be a matrix, not a vector
# ask the model to generate
out_pi_test, out_sigma_test, out_mu_test = sess.run([out_pi, out_sigma, out_mu], feed_dict={x: x_test})
# generate the output using the model
y_test = generate_ensemble(out_pi_test, out_mu_test, out_sigma_test)
plt.figure(figsize=(8, 8))
plt.plot(x_data , y_data[:,0],'x', alpha=0.1, color='black')
plt.plot(x_data , y_data[:,1],'x', alpha=0.1, color='black')
plt.plot(x_data , y_data[:,2],'x', alpha=0.1, color='black')
plt.plot(x_test, y_test[:,0],'ro',alpha=0.3)
plt.plot(x_test, y_test[:,1],'go',alpha=0.3)
plt.plot(x_test, y_test[:,2],'bo',alpha=0.3)
plt.show()
# Inspecting the first dimension mixture model
dim = 0
plt.figure(figsize=(8, 8))
plt.plot(x_test, out_mu_test[:,dim,:],'go', alpha=0.03)
plt.plot(x_test , y_test[:,dim],'bo',alpha=0.1)
plt.plot(x_test, out_pi_test[:,dim,:]*20,'ro', alpha=0.03)
plt.show()
# Inspecting the second dimension mixture model
dim = 1
plt.figure(figsize=(8, 8))
plt.plot(x_test, out_mu_test[:,dim,:],'go', alpha=0.03)
plt.plot(x_test , y_test[:,dim],'bo',alpha=0.1)
plt.plot(x_test, out_pi_test[:,dim,:]*20,'ro', alpha=0.03)
plt.show()
# Inspecting the third dimension mixture model
dim = 2
plt.figure(figsize=(8, 8))
plt.plot(x_test, out_mu_test[:,dim,:],'go', alpha=0.03)
plt.plot(x_test , y_test[:,dim],'bo',alpha=0.1)
plt.plot(x_test, out_pi_test[:,dim,:]*20,'ro', alpha=0.03)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate some data
Step2: Model
Step3: Add the loss
Step4: Training
Step5: Sample the model
Step6: Inspecting the model
|
2,719
|
<ASSISTANT_TASK:>
Python Code:
factors(689)
max_seq_len = 682
#full_train_size = 55820
#train_size = 55800
#small_train_size = 6000 #just because of performance reasons, no statistics behind this decision
#test_size = 6200
data_path = '../../../../Dropbox/data'
phae_path = data_path + '/price_hist_autoencoder'
csv_in = '../price_history_03_seq_start_suddens_trimmed.csv'
assert path.isfile(csv_in)
npz_unprocessed = phae_path + '/price_history_full_seqs.npz'
assert path.isfile(npz_unprocessed)
npz_dates = phae_path + '/price_history_full_seqs_dates.npz'
assert path.isfile(npz_dates)
npz_train = phae_path + '/price_history_seqs_dates_normed_train.npz'
assert path.isfile(npz_train)
npz_test = phae_path + '/price_history_seqs_dates_normed_test.npz'
assert path.isfile(npz_test)
npz_path = npz_train[:-len('_train.npz')]
for key, val in np.load(npz_train).iteritems():
print key, ",", val.shape
dp = PriceHistoryAutoEncDataProvider(npz_path=npz_path, batch_size=53, with_EOS=False)
for data in dp.datalist:
print data.shape
# for item in dp.next():
# print item.shape
# model = PriceHistoryAutoencoder(rng=random_state, dtype=dtype, config=config)
# graph = model.getGraph(batch_size=53,
# enc_num_units = 10,
# dec_num_units = 10,
# ts_len=max_seq_len)
#show_graph(graph)
def experiment():
return model.run(npz_path=npz_path,
epochs=2,
batch_size = 53,
enc_num_units = 400,
dec_num_units = 400,
ts_len=max_seq_len,
learning_rate = 1e-4,
preds_gather_enabled = False,
)
dyn_stats_dic = experiment()
dyn_stats_dic['dyn_stats'].plotStats()
plt.show()
dyn_stats_dic['dyn_stats_diff'].plotStats()
plt.show()
model = PriceHistoryAutoencoder(rng=random_state, dtype=dtype, config=config)
npz_test = npz_path + '_test.npz'
assert path.isfile(npz_test)
path.abspath(npz_test)
def experiment():
return model.run(npz_path=npz_path,
epochs=200,
batch_size = 53,
enc_num_units = 450,
dec_num_units = 450,
ts_len=max_seq_len,
learning_rate = 1e-4,
preds_gather_enabled = True,
)
#%%time
# dyn_stats_dic, preds_dict, targets, twods = experiment()
dyn_stats, preds_dict, targets, twods = get_or_run_nn(experiment, filename='035_autoencoder_000',
nn_runs_folder = data_path + "/nn_runs")
dyn_stats['dyn_stats'].plotStats()
plt.show()
dyn_stats['dyn_stats_diff'].plotStats()
plt.show()
r2_scores = [r2_score(y_true=targets[ind], y_pred=preds_dict[ind])
for ind in range(len(targets))]
ind = np.argmin(r2_scores)
ind
reals = targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
#sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(targets[ind], preds_dict[ind])[0]
for ind in range(len(targets))]
np.mean(dtw_scores)
coint(preds, reals)
cur_ind = np.random.randint(len(targets))
reals = targets[cur_ind]
preds = preds_dict[cur_ind]
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b', label='reals')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
twod_arr = np.array(twods.values())
twod_arr.shape
plt.figure(figsize=(16,7))
plt.plot(twod_arr[:, 0], twod_arr[:, 1], 'r.')
plt.title('two dimensional representation of our time series after dimensionality reduction')
plt.xlabel('first dimension')
plt.ylabel('second dimension')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1 - collect data
Step2: Step 2 - Build model
Step3: targets
Step4: Quick test run
Step5: Step 3 training the network
Step6: Conclusion
|
2,720
|
<ASSISTANT_TASK:>
Python Code:
import colour
colour.utilities.filter_warnings(True, False)
sorted(colour.LIGHTNESS_METHODS.keys())
colour.colorimetry.lightness_Glasser1958(10.08)
colour.lightness(10.08, method='Glasser 1958')
%matplotlib inline
from colour.plotting import *
colour_plotting_defaults()
# Plotting the "Glasser (1958)" "Lightness" function.
single_lightness_function_plot('Glasser 1958')
colour.colorimetry.lightness_Wyszecki1963(10.08)
colour.lightness(10.08, method='Wyszecki 1963')
# Plotting the "Wyszecki (1963)" "Lightness" function.
single_lightness_function_plot('Wyszecki 1963')
colour.colorimetry.lightness_CIE1976(10.08)
colour.lightness(10.08)
colour.lightness(10.08, method='CIE 1976', Y_n=95)
colour.lightness(10.08, method='Lstar1976', Y_n=95)
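The CIE 1976 function is simple enough to verify by hand; a pure-Python sketch of the standard formula (not the colour-science implementation itself):

```python
def lightness_cie1976(Y, Y_n=100.0):
    # L* = 116 * f(Y / Y_n) - 16, with a linear branch near black.
    t = Y / Y_n
    delta = 6 / 29
    f = t ** (1 / 3) if t > delta ** 3 else t / (3 * delta ** 2) + 4 / 29
    return 116 * f - 16

print(round(lightness_cie1976(10.08), 4))  # should agree with colour.lightness(10.08)
```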
# Plotting the "CIE 1976" "Lightness" function.
single_lightness_function_plot('CIE 1976')
# Plotting multiple "Lightness" functions for comparison.
multi_lightness_function_plot(['CIE 1976', 'Glasser 1958'])
colour.colorimetry.lightness_Fairchild2010(10.08 / 100, 1.836)
colour.lightness(10.08 / 100, method='Fairchild 2010', epsilon=1.836)
# Plotting the "Fairchild and Wyble (2010)" "Lightness" function.
single_lightness_function_plot('Fairchild 2010')
# Plotting multiple "Lightness" functions for comparison.
multi_lightness_function_plot(['CIE 1976', 'Fairchild 2010'])
colour.colorimetry.lightness_Fairchild2011(10.08 / 100, 0.710)
colour.lightness(10.08 / 100, method='Fairchild 2011', epsilon=0.710)
# Plotting the "Fairchild and Chen (2011)" "Lightness" function.
single_lightness_function_plot('Fairchild 2011')
# Plotting multiple "Lightness" functions for comparison.
multi_lightness_function_plot(['CIE 1976', 'Fairchild 2011'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note
Step2: Note
Step3: Wyszecki (1963) Method
Step4: Note
Step5: CIE 1976 Method
Step6: Note
Step7: Fairchild and Wyble (2010) Method
Step8: Fairchild and Chen (2011) Method
|
2,721
|
<ASSISTANT_TASK:>
Python Code:
# Install libraries.
# The %pip magic ensures that these libraries can be part of a custom container
# if the code is moved somewhere else.
%pip install -q googleads
%pip install -q -U kfp matplotlib Faker --user
# Automatically restart kernel after installs
# import IPython
# app = IPython.Application.instance()
# app.kernel.do_shutdown(True)
# Import
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os, json, random
import hashlib, uuid
import time, calendar, math
import pandas as pd, numpy as np
import matplotlib.pyplot as plt, seaborn as sns
from datetime import datetime
from google.cloud import bigquery
from googleads import adwords
PROJECT_ID = "[YOUR-PROJECT]" #@param {type:"string"}
REGION = "US"
! gcloud config set project $PROJECT_ID
import sys
if 'google.colab' in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
! bq show $PROJECT_ID:ltv_ecommerce || bq mk $PROJECT_ID:ltv_ecommerce
# Loads CRM data
!bq load \
--project_id $PROJECT_ID \
--skip_leading_rows 1 \
--max_bad_records 100000 \
--replace \
--field_delimiter "," \
--autodetect \
ltv_ecommerce.00_crm \
gs://solutions-public-assets/analytics-componentized-patterns/ltv/crm.csv
# Loads Sales data
!bq load \
--project_id $PROJECT_ID \
--skip_leading_rows 1 \
--max_bad_records 100000 \
--replace \
--field_delimiter "," \
--autodetect \
ltv_ecommerce.10_orders \
gs://solutions-public-assets/analytics-componentized-patterns/ltv/sales_*
# BigQuery client
bq_client = bigquery.Client(project=PROJECT_ID)
%%bigquery --project $PROJECT_ID
CREATE OR REPLACE TABLE `ltv_ecommerce.10_orders` AS (
SELECT
CAST(customer_id AS STRING) AS customer_id,
order_id AS order_id,
transaction_date AS transaction_date,
product_sku AS product_sku,
qty AS qty,
unit_price AS unit_price
FROM
`[YOUR_PROJECT].[YOUR_DATASET].[YOUR_SOURCE_TABLE]`
);
%%bigquery df_histo_qty --project $PROJECT_ID
WITH
min_max AS (
SELECT
MIN(qty) min_qty,
MAX(qty) max_qty,
CEIL((MAX(qty) - MIN(qty)) / 100) step
FROM
`ltv_ecommerce.10_orders`
)
SELECT
COUNT(1) c,
bucket_same_size AS bucket
FROM (
SELECT
-- Creates equal-width buckets between min and max.
ML.BUCKETIZE(qty, GENERATE_ARRAY(min_qty, max_qty, step)) AS bucket_same_size,
-- Creates custom ranges.
ML.BUCKETIZE(qty, [-1, -1, -2, -3, -4, -5, 0, 1, 2, 3, 4, 5]) AS bucket_specific,
FROM
`ltv_ecommerce.10_orders`, min_max )
# WHERE bucket != "bin_1" and bucket != "bin_2"
GROUP BY
bucket
-- Otherwise, bin_10 sorts before bin_2.
ORDER BY CAST(SPLIT(bucket, "_")[OFFSET(1)] AS INT64)
# Uses a log scale for bucket_same_size.
# Can remove the log scale when using bucket_specific.
plt.figure(figsize=(12,5))
plt.title('Log scaled distribution for qty')
hqty = sns.barplot( x='bucket', y='c', data=df_histo_qty)
hqty.set_yscale("log")
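ML.BUCKETIZE with GENERATE_ARRAY assigns each value to an equal-width bin labelled bin_1, bin_2, …; np.digitize does the same locally (toy values and hypothetical edges, not read from the table):

```python
import numpy as np

qty = np.array([1, 1, 2, 3, 5, 8, 40, 95])  # toy quantities
edges = np.arange(0, 101, 10)               # like GENERATE_ARRAY(0, 100, 10)
bins = np.digitize(qty, edges)              # 1-based, matching bin_1, bin_2, ...
labels, counts = np.unique(bins, return_counts=True)
for b, c in zip(labels, counts):
    # Note: string labels like 'bin_10' sort before 'bin_2', hence the
    # numeric ORDER BY on SPLIT(bucket, "_") in the query above.
    print('bin_{}'.format(b), c)
```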
%%bigquery df_histo_unit_price --project $PROJECT_ID
WITH
min_max AS (
SELECT
MIN(unit_price) min_unit_price,
MAX(unit_price) max_unit_price,
CEIL((MAX(unit_price) - MIN(unit_price)) / 10) step
FROM
`ltv_ecommerce.10_orders`
)
SELECT
COUNT(1) c,
bucket_same_size AS bucket
FROM (
SELECT
-- Creates equal-width buckets between min and max.
ML.BUCKETIZE(unit_price, GENERATE_ARRAY(min_unit_price, max_unit_price, step)) AS bucket_same_size,
-- Creates custom ranges.
ML.BUCKETIZE(unit_price, [10, 20, 30, 40, 50, 100, 200, 300, 400, 500, 1000]) AS bucket_specific,
FROM
`ltv_ecommerce.10_orders`, min_max )
# WHERE bucket != "bin_1" and bucket != "bin_2"
GROUP BY
bucket
-- Ohterwise, orders bin_10 before bin_2
ORDER BY CAST(SPLIT(bucket, "_")[OFFSET(1)] AS INT64)
# Uses a log scale for bucket_same_size.
# Can remove the log scale when using bucket_specific.
plt.figure(figsize=(12,5))
q = sns.barplot( x='bucket', y='c', data=df_histo_unit_price)
q.set_yscale("log")
plt.title('Log scaled distribution for unit_price')
LTV_PARAMS = {
'WINDOW_LENGTH': 0,
'WINDOW_STEP': 30,
'WINDOW_STEP_INITIAL': 90,
'LENGTH_FUTURE': 30,
'MAX_STDV_MONETARY': 500,
'MAX_STDV_QTY': 100,
'TOP_LTV_RATIO': 0.2
}
LTV_PARAMS
%%bigquery --params $LTV_PARAMS --project $PROJECT_ID
DECLARE MAX_STDV_MONETARY INT64 DEFAULT @MAX_STDV_MONETARY;
DECLARE MAX_STDV_QTY INT64 DEFAULT @MAX_STDV_QTY;
CREATE OR REPLACE TABLE `ltv_ecommerce.20_aggred` AS
SELECT
customer_id,
order_day,
ROUND(day_value_after_returns, 2) AS value,
day_qty_after_returns as qty_articles,
day_num_returns AS num_returns,
CEIL(avg_time_to_return) AS time_to_return
FROM (
SELECT
customer_id,
order_day,
SUM(order_value_after_returns) AS day_value_after_returns,
STDDEV(SUM(order_value_after_returns)) OVER(PARTITION BY customer_id ORDER BY SUM(order_value_after_returns)) AS stdv_value,
SUM(order_qty_after_returns) AS day_qty_after_returns,
STDDEV(SUM(order_qty_after_returns)) OVER(PARTITION BY customer_id ORDER BY SUM(order_qty_after_returns)) AS stdv_qty,
CASE
WHEN MIN(order_min_qty) < 0 THEN count(1)
ELSE 0
END AS day_num_returns,
CASE
WHEN MIN(order_min_qty) < 0 THEN AVG(time_to_return)
ELSE NULL
END AS avg_time_to_return
FROM (
SELECT
customer_id,
order_id,
-- Gives the order date vs return(s) dates.
MIN(transaction_date) AS order_day,
MAX(transaction_date) AS return_final_day,
DATE_DIFF(MAX(transaction_date), MIN(transaction_date), DAY) AS time_to_return,
-- Aggregates all products in the order
-- and all products returned later.
SUM(qty * unit_price) AS order_value_after_returns,
SUM(qty) AS order_qty_after_returns,
-- If negative, order has qty return(s).
MIN(qty) order_min_qty
FROM
`ltv_ecommerce.10_orders`
GROUP BY
customer_id,
order_id)
GROUP BY
customer_id,
order_day)
WHERE
-- [Optional] Remove dates with outliers per a customer.
(stdv_value < MAX_STDV_MONETARY
OR stdv_value IS NULL) AND
(stdv_qty < MAX_STDV_QTY
OR stdv_qty IS NULL);
SELECT * FROM `ltv_ecommerce.20_aggred` LIMIT 5;
%%bigquery df_dist_dates --project $PROJECT_ID
SELECT count(1) c, SUBSTR(CAST(order_day AS STRING), 0, 7) as yyyy_mm
FROM `ltv_ecommerce.20_aggred`
WHERE qty_articles > 0
GROUP BY yyyy_mm
ORDER BY yyyy_mm
plt.figure(figsize=(12,5))
sns.barplot( x='yyyy_mm', y='c', data=df_dist_dates)
%%bigquery df_dist_customers --params $LTV_PARAMS --project $PROJECT_ID
SELECT customer_id, count(1) c
FROM `ltv_ecommerce.20_aggred`
GROUP BY customer_id
plt.figure(figsize=(12,4))
sns.distplot(df_dist_customers['c'], hist_kws=dict(ec="k"), kde=False)
%%bigquery df_dist_qty --params $LTV_PARAMS --project $PROJECT_ID
SELECT qty_articles, count(1) c
FROM `ltv_ecommerce.20_aggred`
GROUP BY qty_articles
plt.figure(figsize=(12,4))
sns.distplot(df_dist_qty['qty_articles'], hist_kws=dict(ec="k"), kde=False)
%%bigquery df_dist_values --params $LTV_PARAMS --project $PROJECT_ID
SELECT value
FROM `ltv_ecommerce.20_aggred`
axv = sns.violinplot(x=df_dist_values["value"])
axv.set_xlim(-200, 3500)
%%bigquery --params $LTV_PARAMS --project $PROJECT_ID
-- Lengths are numbers of days.
--
-- Date of the first order in the dataset.
DECLARE MIN_DATE DATE;
-- Date of the final order in the dataset.
DECLARE MAX_DATE DATE;
-- Date that separates input orders from target transactions.
DECLARE THRESHOLD_DATE DATE;
-- How many days back for input transactions. 0 means from the start.
DECLARE WINDOW_LENGTH INT64 DEFAULT @WINDOW_LENGTH;
-- Date at which an input transactions window starts.
DECLARE WINDOW_START DATE;
-- How many days between thresholds.
DECLARE WINDOW_STEP INT64 DEFAULT @WINDOW_STEP;
-- How many days for the first window.
DECLARE WINDOW_STEP_INITIAL INT64 DEFAULT @WINDOW_STEP_INITIAL;
-- Index of the window being run.
DECLARE STEP INT64 DEFAULT 1;
-- How many days to predict for.
DECLARE LENGTH_FUTURE INT64 DEFAULT @LENGTH_FUTURE;
SET (MIN_DATE, MAX_DATE) = (
SELECT AS STRUCT
MIN(order_day) AS min_days,
MAX(order_day) AS max_days
FROM
`ltv_ecommerce.20_aggred`
);
SET THRESHOLD_DATE = MIN_DATE;
-- For more information about the features of this table,
-- see https://github.com/CamDavidsonPilon/lifetimes/blob/master/lifetimes/utils.py#L246
-- and https://cloud.google.com/solutions/machine-learning/clv-prediction-with-offline-training-train#aggregating_data
CREATE OR REPLACE TABLE ltv_ecommerce.30_featured
(
-- dataset STRING,
customer_id STRING,
monetary FLOAT64,
frequency INT64,
recency INT64,
T INT64,
time_between FLOAT64,
avg_basket_value FLOAT64,
avg_basket_size FLOAT64,
has_returns STRING,
avg_time_to_return FLOAT64,
num_returns INT64,
-- threshold DATE,
-- step INT64,
target_monetary FLOAT64,
);
LOOP
-- Can choose a longer original window in case
-- there were not many orders in the early days.
IF STEP = 1 THEN
SET THRESHOLD_DATE = DATE_ADD(THRESHOLD_DATE, INTERVAL WINDOW_STEP_INITIAL DAY);
ELSE
SET THRESHOLD_DATE = DATE_ADD(THRESHOLD_DATE, INTERVAL WINDOW_STEP DAY);
END IF;
SET STEP = STEP + 1;
IF THRESHOLD_DATE >= DATE_SUB(MAX_DATE, INTERVAL (WINDOW_STEP) DAY) THEN
LEAVE;
END IF;
-- Takes all transactions before the threshold date unless you decide
-- to use a different window length to test model performance.
IF WINDOW_LENGTH != 0 THEN
SET WINDOW_START = DATE_SUB(THRESHOLD_DATE, INTERVAL WINDOW_LENGTH DAY);
ELSE
SET WINDOW_START = MIN_DATE;
END IF;
INSERT ltv_ecommerce.30_featured
SELECT
-- CASE
-- WHEN THRESHOLD_DATE <= DATE_SUB(MAX_DATE, INTERVAL LENGTH_FUTURE DAY) THEN 'UNASSIGNED'
-- ELSE 'TEST'
-- END AS dataset,
CAST(tf.customer_id AS STRING),
ROUND(tf.monetary_orders, 2) AS monetary,
tf.cnt_orders AS frequency,
tf.recency,
tf.T,
ROUND(tf.recency/cnt_orders, 2) AS time_between,
ROUND(tf.avg_basket_value, 2) AS avg_basket_value,
ROUND(tf.avg_basket_size, 2) AS avg_basket_size,
has_returns,
CEIL(avg_time_to_return) AS avg_time_to_return,
num_returns,
-- THRESHOLD_DATE AS threshold,
-- STEP - 1 AS step,
ROUND(tt.target_monetary, 2) AS target_monetary,
FROM (
-- This SELECT uses only data before THRESHOLD_DATE to make features.
SELECT
customer_id,
SUM(value) AS monetary_orders,
DATE_DIFF(MAX(order_day), MIN(order_day), DAY) AS recency,
DATE_DIFF(THRESHOLD_DATE, MIN(order_day), DAY) AS T,
COUNT(DISTINCT order_day) AS cnt_orders,
AVG(qty_articles) avg_basket_size,
AVG(value) avg_basket_value,
CASE
WHEN SUM(num_returns) > 0 THEN 'y'
ELSE 'n'
END AS has_returns,
AVG(time_to_return) avg_time_to_return,
THRESHOLD_DATE AS threshold,
SUM(num_returns) num_returns,
FROM
`ltv_ecommerce.20_aggred`
WHERE
order_day <= THRESHOLD_DATE AND
order_day >= WINDOW_START
GROUP BY
customer_id
) tf
INNER JOIN (
-- This SELECT uses all data after threshold as target.
SELECT
customer_id,
SUM(value) target_monetary
FROM
`ltv_ecommerce.20_aggred`
WHERE
order_day <= DATE_ADD(THRESHOLD_DATE, INTERVAL LENGTH_FUTURE DAY)
-- Overall value is similar to predicting only what's after threshold.
-- and the prediction performs better. We can subtract later.
-- AND order_day > THRESHOLD_DATE
GROUP BY
customer_id) tt
ON
tf.customer_id = tt.customer_id;
END LOOP;
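The per-customer features the loop computes (monetary, frequency, recency, T) can be spot-checked with pandas; a hedged sketch on toy orders, not the real table:

```python
import pandas as pd

orders = pd.DataFrame({
    'customer_id': ['10', '10', '10'],
    'order_day': pd.to_datetime(['2020-01-01', '2020-02-01', '2020-03-02']),
    'value': [100.0, 50.0, 70.0],
})
threshold = pd.Timestamp('2020-04-01')  # stand-in for THRESHOLD_DATE

feats = orders.groupby('customer_id').agg(
    monetary=('value', 'sum'),
    frequency=('order_day', 'nunique'),
    first_day=('order_day', 'min'),
    last_day=('order_day', 'max'),
)
feats['recency'] = (feats['last_day'] - feats['first_day']).dt.days
feats['T'] = (threshold - feats['first_day']).dt.days
print(feats[['monetary', 'frequency', 'recency', 'T']])
```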
%%bigquery --project $PROJECT_ID
-- Shows all data for a specific customer and some other random records.
SELECT * FROM `ltv_ecommerce.30_featured` WHERE customer_id = "10"
UNION ALL
(SELECT * FROM `ltv_ecommerce.30_featured` LIMIT 5)
ORDER BY customer_id, frequency, T
%%bigquery df_featured --project $PROJECT_ID
SELECT * FROM `ltv_ecommerce.30_featured`
df_featured.describe()
# Display distribution for all columns that are numerical (but will still ignore the categorical ones like day of the week)
valid_column_names = [key for key in dict(df_featured.dtypes) if dict(df_featured.dtypes)[key] in ['float64', 'int64']]
NUM_COLS = 5
NUM_ROWS = math.ceil(int(len(valid_column_names)) / NUM_COLS)
fig, axs = plt.subplots(nrows=NUM_ROWS, ncols=NUM_COLS, figsize=(25, 7))
for idx, cname in enumerate(valid_column_names):
x = int(idx/NUM_COLS)
y = idx % NUM_COLS
sns.violinplot(df_featured[cname], ax=axs[x, y], label=cname)
# You can run this query using the magic cell but the cell would run for hours.
# Although stopping the cell would not stop the query, using the Python client
# also enables you to add a custom parameter for the model name.
suffix_now = datetime.now().strftime("%Y%m%d_%H%M%S")
train_model_jobid = f'train_model_{suffix_now}'
train_model_sql = f'''
CREATE OR REPLACE MODEL `ltv_ecommerce.model_tutorial_{suffix_now}`
OPTIONS(MODEL_TYPE="AUTOML_REGRESSOR",
INPUT_LABEL_COLS=["target_monetary"],
OPTIMIZATION_OBJECTIVE="MINIMIZE_MAE")
AS SELECT
* EXCEPT(customer_id)
FROM
`ltv_ecommerce.30_featured`
'''
bq_client.query(train_model_sql, job_id=train_model_jobid)
%%bigquery --params $LTV_PARAMS --project $PROJECT_ID
-- TODO(developer):
-- 1. Update the model name to the one you want to use.
-- 2. Update the destination table for the predictions.
-- How many days back for input transactions. 0 means from the start.
DECLARE WINDOW_LENGTH INT64 DEFAULT @WINDOW_LENGTH;
-- Date at which an input transactions window starts.
DECLARE WINDOW_START DATE;
-- Date of the first transaction in the dataset.
DECLARE MIN_DATE DATE;
-- Date of the final transaction in the dataset.
DECLARE MAX_DATE DATE;
-- Date from which you want to predict.
DECLARE PREDICT_FROM_DATE DATE;
SET (MIN_DATE, MAX_DATE) = (
SELECT AS STRUCT
MIN(order_day) AS min_days,
MAX(order_day) AS max_days
FROM
`ltv_ecommerce.20_aggred`
);
-- You can set any date here. In production, it is generally today.
SET PREDICT_FROM_DATE = MAX_DATE;
IF WINDOW_LENGTH != 0 THEN
SET WINDOW_START = DATE_SUB(PREDICT_FROM_DATE, INTERVAL WINDOW_LENGTH DAY);
ELSE
SET WINDOW_START = MIN_DATE;
END IF;
CREATE OR REPLACE TABLE `ltv_ecommerce.predictions_tutorial`
AS (
SELECT
customer_id,
monetary AS monetary_so_far,
ROUND(predicted_target_monetary, 2) AS monetary_predicted,
ROUND(predicted_target_monetary - monetary, 2) AS monetary_future
FROM
ML.PREDICT(
-- /!\ Set your model name here.
MODEL ltv_ecommerce.model_tutorial_YYYYMMDD,
(
SELECT
customer_id,
ROUND(monetary_orders, 2) AS monetary,
cnt_orders AS frequency,
recency,
T,
ROUND(recency/cnt_orders, 2) AS time_between,
ROUND(avg_basket_value, 2) AS avg_basket_value,
ROUND(avg_basket_size, 2) AS avg_basket_size,
has_returns,
CEIL(avg_time_to_return) AS avg_time_to_return,
num_returns
FROM (
SELECT
customer_id,
SUM(value) AS monetary_orders,
DATE_DIFF(MAX(order_day), MIN(order_day), DAY) AS recency,
DATE_DIFF(PREDICT_FROM_DATE, MIN(order_day), DAY) AS T,
COUNT(DISTINCT order_day) AS cnt_orders,
AVG(qty_articles) avg_basket_size,
AVG(value) avg_basket_value,
CASE
WHEN SUM(num_returns) > 0 THEN 'y'
ELSE 'n'
END AS has_returns,
AVG(time_to_return) avg_time_to_return,
SUM(num_returns) num_returns,
FROM
`ltv_ecommerce.20_aggred`
WHERE
order_day <= PREDICT_FROM_DATE AND
order_day >= WINDOW_START
GROUP BY
customer_id
)
)
)
)
%%bigquery df_predictions --project $PROJECT_ID
SELECT * FROM `ltv_ecommerce.predictions_windowed`
df_predictions.describe()
from matplotlib.gridspec import GridSpec
fig = plt.figure(constrained_layout=True, figsize=(15, 5))
gs = GridSpec(2, 2, figure=fig)
sns.set(font_scale = 1)
plt.tick_params(axis='x', labelsize=14)
ax0 = plt.subplot(gs.new_subplotspec((0, 0), colspan=1))
ax1 = plt.subplot(gs.new_subplotspec((0, 1), colspan=1))
ax2 = plt.subplot(gs.new_subplotspec((1, 0), colspan=2))
sns.violinplot(df_predictions['monetary_so_far'], ax=ax0, label='monetary_so_far')
sns.violinplot(df_predictions['monetary_predicted'], ax=ax1, label='monetary_predicted')
sns.violinplot(df_predictions['monetary_future'], ax=ax2, label='monetary_future')
%%bigquery df_top_ltv --params $LTV_PARAMS --project $PROJECT_ID
DECLARE TOP_LTV_RATIO FLOAT64 DEFAULT @TOP_LTV_RATIO;
SELECT
p.customer_id,
monetary_future,
c.email AS email
FROM (
SELECT
customer_id,
monetary_future,
PERCENT_RANK() OVER (ORDER BY monetary_future DESC) AS percent_rank_monetary
FROM
`ltv_ecommerce.predictions_windowed` ) p
-- This creates fake emails. You need to join with your own CRM table.
INNER JOIN (
SELECT
customer_id,
email
FROM
`ltv_ecommerce.00_crm` ) c
ON
p.customer_id = CAST(c.customer_id AS STRING)
WHERE
-- Decides the size of your list of emails. For similar-audience use cases
-- where you need to find a minimum of matching emails, 20% should provide
-- enough potential emails.
percent_rank_monetary <= TOP_LTV_RATIO
ORDER BY monetary_future DESC
df_top_ltv.head(5)
# Shows distribution of the predicted monetary value for the top LTV customers.
print(df_top_ltv.describe())
fig, axs = plt.subplots()
sns.set(font_scale = 1.2)
sns.distplot(df_top_ltv['monetary_future'])
# Sets your variables.
if 'google.colab' in sys.modules:
from google.colab import files
ADWORDS_FILE = "/tmp/adwords.yaml"
DEVELOPER_TOKEN = "[YOUR_DEVELOPER_TOKEN]"
OAUTH_2_CLIENT_ID = "[YOUR_OAUTH_2_CLIENT_ID]"
CLIENT_SECRET = "[YOUR_CLIENT_SECRET]"
REFRESH_TOKEN = "[YOUR_REFRESH_TOKEN]"
# Creates a local YAML file
adwords_content = f"""
# AdWordsClient configurations
adwords:
#############################################################################
# Required Fields #
#############################################################################
developer_token: {DEVELOPER_TOKEN}
#############################################################################
# Optional Fields #
#############################################################################
# client_customer_id: INSERT_CLIENT_CUSTOMER_ID_HERE
# user_agent: INSERT_USER_AGENT_HERE
# partial_failure: True
# validate_only: True
#############################################################################
# OAuth2 Configuration #
# Below you may provide credentials for either the installed application or #
# service account flows. Remove or comment the lines for the flow you're #
# not using. #
#############################################################################
# The following values configure the client for the installed application
# flow.
client_id: {OAUTH_2_CLIENT_ID}
client_secret: {CLIENT_SECRET}
refresh_token: {REFRESH_TOKEN}
# The following values configure the client for the service account flow.
# path_to_private_key_file: INSERT_PATH_TO_JSON_KEY_FILE_HERE
# delegated_account: INSERT_DOMAIN_WIDE_DELEGATION_ACCOUNT
#############################################################################
# ReportDownloader Headers #
# Below you may specify boolean values for optional headers that will be #
# applied to all requests made by the ReportDownloader utility by default. #
#############################################################################
# report_downloader_headers:
# skip_report_header: False
# skip_column_header: False
# skip_report_summary: False
# use_raw_enum_values: False
# AdManagerClient configurations
ad_manager:
#############################################################################
# Required Fields #
#############################################################################
application_name: INSERT_APPLICATION_NAME_HERE
#############################################################################
# Optional Fields #
#############################################################################
# The network_code is required for all services except NetworkService:
# network_code: INSERT_NETWORK_CODE_HERE
# delegated_account: INSERT_DOMAIN_WIDE_DELEGATION_ACCOUNT
#############################################################################
# OAuth2 Configuration #
# Below you may provide credentials for either the installed application or #
# service account (recommended) flows. Remove or comment the lines for the #
# flow you're not using. #
#############################################################################
# The following values configure the client for the service account flow.
path_to_private_key_file: INSERT_PATH_TO_JSON_KEY_FILE_HERE
# delegated_account: INSERT_DOMAIN_WIDE_DELEGATION_ACCOUNT
# The following values configure the client for the installed application
# flow.
# client_id: INSERT_OAUTH_2_CLIENT_ID_HERE
# client_secret: INSERT_CLIENT_SECRET_HERE
# refresh_token: INSERT_REFRESH_TOKEN_HERE
# Common configurations:
###############################################################################
# Compression (optional) #
# Below you may specify whether to accept and automatically decompress gzip #
# encoded SOAP requests. By default, gzip compression is not enabled. #
###############################################################################
# enable_compression: False
###############################################################################
# Logging configuration (optional) #
# Below you may specify the logging configuration. This will be provided as #
# an input to logging.config.dictConfig. #
###############################################################################
# logging:
# version: 1
# disable_existing_loggers: False
# formatters:
# default_fmt:
# format: ext://googleads.util.LOGGER_FORMAT
# handlers:
# default_handler:
# class: logging.StreamHandler
# formatter: default_fmt
# level: INFO
# loggers:
# Configure root logger
# "":
# handlers: [default_handler]
# level: INFO
###############################################################################
# Proxy configurations (optional) #
# Below you may specify an HTTP or HTTPS Proxy to be used when making API #
# requests. Note: You must specify the scheme used for the proxy endpoint. #
# #
# For additional information on configuring these values, see: #
# http://docs.python-requests.org/en/master/user/advanced/#proxies #
###############################################################################
# proxy_config:
# http: INSERT_HTTP_PROXY_URI_HERE
# https: INSERT_HTTPS_PROXY_URI_HERE
# If specified, the given cafile will only be used if certificate validation
# is not disabled.
# cafile: INSERT_PATH_HERE
# disable_certificate_validation: False
################################################################################
# Utilities Included (optional) #
# Below you may specify whether the library will include utilities used in the #
# user agent. By default, the library will include utilities used in the user #
# agent. #
################################################################################
# include_utilities_in_user_agent: True
################################################################################
# Custom HTTP headers (optional) #
# Specify one or more custom headers to pass along with all requests to #
# the API. #
################################################################################
# custom_http_headers:
# X-My-Header: 'content'
with open(ADWORDS_FILE, "w") as adwords_file:
print(adwords_content, file=adwords_file)
# Google Ads client
# adwords_client = adwords.AdWordsClient.LoadFromStorage(ADWORDS_FILE)
ltv_emails = list(set(df_top_ltv['email']))
# https://developers.google.com/adwords/api/docs/samples/python/remarketing#create-and-populate-a-user-list
# https://github.com/googleads/googleads-python-lib/blob/7c41584c65759b6860572a13bde65d7395c5b2d8/examples/adwords/v201809/remarketing/add_crm_based_user_list.py
# Adds a user list and populates it with hashed email addresses.
# Note: It may take several hours for the list to be populated with members. Email
# addresses must be associated with a Google account. For privacy purposes, the
# user list size will show as zero until the list has at least 1000 members. After
# that, the size will be rounded to the two most significant digits.
#
# def normalize_and_SHA256(s):
# Normalizes (lowercase, remove whitespace) and hashes a string with SHA-256.
# Args:
# s: The string to perform this operation on.
# Returns:
# A normalized and SHA-256 hashed string.
#
# return hashlib.sha256(s.strip().lower()).hexdigest()
# def create_user_list(client):
# # Initialize appropriate services.
# user_list_service = client.GetService('AdwordsUserListService', 'v201809')
# user_list = {
# 'xsi_type': 'CrmBasedUserList',
# 'name': f'Customer relationship management list #{uuid.uuid4()}',
# 'description': 'A list of customers that originated from email addresses',
# # CRM-based user lists can use a membershipLifeSpan of 10000 to indicate
# # unlimited; otherwise normal values apply.
# 'membershipLifeSpan': 30,
# 'uploadKeyType': 'CONTACT_INFO'
# }
# # Create an operation to add the user list.
# operations = [{
# 'operator': 'ADD',
# 'operand': user_list
# }]
# result = user_list_service.mutate(operations)
# user_list_id = result['value'][0]['id']
# emails = ltv_emails
# members = [{'hashedEmail': normalize_and_SHA256(email)} for email in emails]
# mutate_members_operation = {
# 'operand': {
# 'userListId': user_list_id,
# 'membersList': members
# },
# 'operator': 'ADD'
# }
# response = user_list_service.mutateMembers([mutate_members_operation])
# if 'userLists' in response:
# for user_list in response['userLists']:
# print('User list with name "%s" and ID "%d" was added.'
# % (user_list['name'], user_list['id']))
# create_user_list(adwords_client)
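A side note on the commented-out `normalize_and_SHA256` helper above: as written it would raise a `TypeError` on Python 3, because `hashlib.sha256` requires bytes, not a string. A minimal corrected sketch (the lower-cased name is my own choice, not part of the Google sample):

```python
import hashlib

def normalize_and_sha256(s):
    """Lower-case and strip a string, then return its SHA-256 hex digest.

    Python 3 note: hashlib.sha256 requires bytes, so the normalized
    string is encoded as UTF-8 before hashing.
    """
    return hashlib.sha256(s.strip().lower().encode("utf-8")).hexdigest()
```

For example, `normalize_and_sha256(" Foo@Example.com ")` yields the same digest as hashing `"foo@example.com"` directly.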
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import packages
Step2: Set up your GCP project
Step3: Authenticate your GCP account
Step4: Create a working dataset
Step5: Load example tables
Step6: Create clients
Step7: [Optional] Match your dataset to template
Step8: Analyze dataset
Step9: Unit price
Step10: Set parameters for LTV
Step11: Aggregate per day per customer
Step12: Check distributions
Step13: Orders are quite well distributed across the year despite a lower number in the early days of the dataset. You can keep this in mind when choosing a value for WINDOW_STEP_INITIAL.
Step14: The number of transactions per customer is distributed across a few discrete values with no clear outliers.
Step15: A few customers seems to have quite large quantities in their orders but the distribution is generally healthy.
Step16: The distribution shows a few outliers that you could investigate to improve the base model that you create in this tutorial.
Step17: Dataset
Step18: Seems like for most values, there is a long tail of records. This is something that might required additional feature preparation even if AutoML already provides some automatic engineering. You can investigate this if you want to improve the base model.
Step19: This is an example of a model evaluation
Step20: The monetary distribution analysis shows small monetary amounts for the next month compare to the overall historical value. The difference is about 3 to 4 orders of magnitude.
Step22: Set up the AdWords client
Step25: Create an AdWords user list
|
2,722
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append("..")
import numpy as np
from pstd import PSTD, PML, Medium, PointSource
from acoustics import Signal
#import seaborn as sns
%matplotlib inline
x = 30.0
y = 20.0
z = 0.0
soundspeed = 343.2
density = 1.296
maximum_frequency_target = 200.0
medium = Medium(soundspeed=soundspeed, density=density)
pml = PML(absorption_coefficient=(1000.0, 1000.0), depth=10.0)
model = PSTD(
maximum_frequency=maximum_frequency_target,
pml=pml,
medium=medium,
cfl=None,
size=[x, y]
)
source_position = (x/4.0, y/2.0)
source = model.add_object('source', 'PointSource', position=source_position,
excitation='pulse', quantity='pressure', amplitude=0.1)
receiver_position = (x*3.0/4.0, y/2.0)
receiver = model.add_object('receiver', 'Receiver', position=receiver_position, quantity='pressure')
print(model.overview())
_ = model.plot_scene()
model.run(seconds=0.002)
_ = model.plot_field()
%%prun
model.run(seconds=0.06)
_ = model.plot_field()
_ = receiver.recording().plot()
model.restart()
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
model.run(steps=10)
_ = model.plot_field()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Configuration
Step2: Create model
Step3: The model is finite in extent, and to prevent aliasing we need a Perfectly Matched Layer.
Step4: Now we create the actual model.
Step5: In this example our source excites a pulse.
Step6: We also add a receiver on the other side of the domain
Step7: Check model
Step8: To check whether the geometry is as we want it to be, we can simply draw it.
Step9: Running the simulation
Step10: Let's see how the sound pressure field looks like now.
Step11: It might happen that you realize that you actually need to calculate a bit further. This can easily be done, since the state is remembered. Simply use model.run() again and the simulation continues.
Step12: as you can see.
Step13: Recordings
Step14: If however, you want to restart the simulation you can do so with model.restart().
Step15: Show log
Step16: When we now run the simulation, you will see which step it is at.
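As a back-of-envelope aside (my own sketch, not how the `pstd` package necessarily computes it internally): a pseudospectral grid spacing can be estimated from the target maximum frequency by assuming roughly two grid points per shortest wavelength (the Nyquist limit for a spectral method):

```python
# Shortest resolvable wavelength is c / f_max; a spectral method
# needs about two points per wavelength, so dx ~ wavelength / 2.
soundspeed = 343.2            # m/s, as configured above
maximum_frequency = 200.0     # Hz, as configured above

wavelength_min = soundspeed / maximum_frequency   # ~1.716 m
spacing = wavelength_min / 2.0                    # ~0.858 m
print(spacing)
```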
|
2,723
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import GPyOpt
from numpy.random import seed
func = GPyOpt.objective_examples.experiments1d.forrester()
domain =[{'name': 'var1', 'type': 'continuous', 'domain': (0,1)}]
X_init = np.array([[0.0],[0.5],[1.0]])
Y_init = func.f(X_init)
iter_count = 10
current_iter = 0
X_step = X_init
Y_step = Y_init
while current_iter < iter_count:
bo_step = GPyOpt.methods.BayesianOptimization(f = None, domain = domain, X = X_step, Y = Y_step)
x_next = bo_step.suggest_next_locations()
y_next = func.f(x_next)
X_step = np.vstack((X_step, x_next))
Y_step = np.vstack((Y_step, y_next))
current_iter += 1
x = np.arange(0.0, 1.0, 0.01)
y = func.f(x)
plt.figure()
plt.plot(x, y)
for i, (xs, ys) in enumerate(zip(X_step, Y_step)):
plt.plot(xs, ys, 'rD', markersize=10 + 20 * (i+1)/len(X_step))
bo_loop = GPyOpt.methods.BayesianOptimization(f = func.f, domain = domain, X = X_init, Y = Y_init)
bo_loop.run_optimization(max_iter=iter_count)
X_loop = bo_loop.X
Y_loop = bo_loop.Y
plt.figure()
plt.plot(x, y)
for i, (xl, yl) in enumerate(zip(X_loop, Y_loop)):
plt.plot(xl, yl, 'rD', markersize=10 + 20 * (i+1)/len(X_step))
pending_X = np.array([[0.75]])
ignored_X = np.array([[0.15], [0.85]])
bo = GPyOpt.methods.BayesianOptimization(f = None, domain = domain, X = X_step, Y = Y_step, de_duplication = True)
bo.suggest_next_locations(pending_X = pending_X, ignored_X = ignored_X)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For the purposes of this notebook we are going to use one of the predefined objective functions that come with GPyOpt. However, the key thing to realize is that the function could be anything (e.g., the results of a physical experiment). As long as users are able to externally evaluate the suggested points somehow and provide GPyOpt with results, the library has no opinions about the objective function's origin.
Step2: Now we define the domain of the function to optimize as usual.
Step3: First we are going to run the optimization loop outside of GPyOpt, and only use GPyOpt to get the next point to evaluate our function.
Step4: Let's visualize the results. The size of the marker denotes the order in which the point was evaluated - the bigger the marker the later was the evaluation.
Step5: To compare the results, let's now execute the whole loop with GPyOpt.
Step6: Now let's print the results of this optimization and compare to the previous external evaluation run. As before, size of the marker corresponds to its evaluation order.
Step7: To allow even more control over the execution, this API allows to specify points that should be ignored (say the objetive is known to fail in certain locations), as well as points that are already pending evaluation (say in case the user is running several candidates in parallel). Here is how one can provide this information.
|
2,724
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
PROJECT_ID = 'yourProject' # Change to your project.
PROJECT_NUMBER = 'yourProjectNumber' # Change to your project number
BUCKET = 'yourBucketName' # Change to the bucket you created.
REGION = 'yourPredictionRegion' # Change to your AI Platform Prediction region.
ARTIFACTS_REPOSITORY_NAME = 'ml-serving'
EMBEDDNIG_LOOKUP_MODEL_OUTPUT_DIR = f'gs://{BUCKET}/bqml/embedding_lookup_model'
EMBEDDNIG_LOOKUP_MODEL_NAME = 'item_embedding_lookup'
EMBEDDNIG_LOOKUP_MODEL_VERSION = 'v1'
INDEX_DIR = f'gs://{BUCKET}/bqml/scann_index'
SCANN_MODEL_NAME = 'index_server'
SCANN_MODEL_VERSION = 'v1'
KIND = 'song'
!gcloud config set project $PROJECT_ID
try:
from google.colab import auth
auth.authenticate_user()
print("Colab user is authenticated.")
except: pass
!gcloud ai-platform models create {EMBEDDNIG_LOOKUP_MODEL_NAME} --region={REGION}
!gcloud ai-platform versions create {EMBEDDNIG_LOOKUP_MODEL_VERSION} \
--region={REGION} \
--model={EMBEDDNIG_LOOKUP_MODEL_NAME} \
--origin={EMBEDDNIG_LOOKUP_MODEL_OUTPUT_DIR} \
--runtime-version=2.2 \
--framework=TensorFlow \
--python-version=3.7 \
--machine-type=n1-standard-2
print("The model version is deployed to AI Platform Prediction.")
import googleapiclient.discovery
from google.api_core.client_options import ClientOptions
api_endpoint = f'https://{REGION}-ml.googleapis.com'
client_options = ClientOptions(api_endpoint=api_endpoint)
service = googleapiclient.discovery.build(
serviceName='ml', version='v1', client_options=client_options)
def caip_embedding_lookup(input_items):
request_body = {'instances': input_items}
service_name = f'projects/{PROJECT_ID}/models/{EMBEDDNIG_LOOKUP_MODEL_NAME}/versions/{EMBEDDNIG_LOOKUP_MODEL_VERSION}'
print(f'Calling : {service_name}')
response = service.projects().predict(
name=service_name, body=request_body).execute()
if 'error' in response:
raise RuntimeError(response['error'])
return response['predictions']
input_items = ['2114406', '2114402 2120788', 'abc123']
embeddings = caip_embedding_lookup(input_items)
print(f'Embeddings retrieved: {len(embeddings)}')
for idx, embedding in enumerate(embeddings):
print(f'{input_items[idx]}: {embedding[:5]}')
!gcloud beta artifacts repositories create {ARTIFACTS_REPOSITORY_NAME} \
--location={REGION} \
--repository-format=docker
!gcloud beta auth configure-docker {REGION}-docker.pkg.dev --quiet
IMAGE_URL = f'{REGION}-docker.pkg.dev/{PROJECT_ID}/{ARTIFACTS_REPOSITORY_NAME}/{SCANN_MODEL_NAME}:{SCANN_MODEL_VERSION}'
PORT=5001
SUBSTITUTIONS = ''
SUBSTITUTIONS += f'_IMAGE_URL={IMAGE_URL},'
SUBSTITUTIONS += f'_PORT={PORT}'
!gcloud builds submit --config=index_server/cloudbuild.yaml \
--substitutions={SUBSTITUTIONS} \
--timeout=1h
repository_id = f'{REGION}-docker.pkg.dev/{PROJECT_ID}/{ARTIFACTS_REPOSITORY_NAME}'
!gcloud beta artifacts docker images list {repository_id}
SERVICE_ACCOUNT_NAME = 'caip-serving'
SERVICE_ACCOUNT_EMAIL = f'{SERVICE_ACCOUNT_NAME}@{PROJECT_ID}.iam.gserviceaccount.com'
!gcloud iam service-accounts create {SERVICE_ACCOUNT_NAME} \
--description="Service account for AI Platform Prediction to access cloud resources."
!gcloud projects describe {PROJECT_ID} --format="value(projectNumber)"
!gcloud projects add-iam-policy-binding {PROJECT_ID} \
--role=roles/iam.serviceAccountAdmin \
--member=serviceAccount:service-{PROJECT_NUMBER}@cloud-ml.google.com.iam.gserviceaccount.com
!gcloud projects add-iam-policy-binding {PROJECT_ID} \
--role=roles/storage.objectViewer \
--member=serviceAccount:{SERVICE_ACCOUNT_EMAIL}
!gcloud projects add-iam-policy-binding {PROJECT_ID} \
--role=roles/ml.developer \
--member=serviceAccount:{SERVICE_ACCOUNT_EMAIL}
!gcloud ai-platform models create {SCANN_MODEL_NAME} --region={REGION}
HEALTH_ROUTE=f'/v1/models/{SCANN_MODEL_NAME}/versions/{SCANN_MODEL_VERSION}'
PREDICT_ROUTE=f'/v1/models/{SCANN_MODEL_NAME}/versions/{SCANN_MODEL_VERSION}:predict'
ENV_VARIABLES = f'PROJECT_ID={PROJECT_ID},'
ENV_VARIABLES += f'REGION={REGION},'
ENV_VARIABLES += f'INDEX_DIR={INDEX_DIR},'
ENV_VARIABLES += f'EMBEDDNIG_LOOKUP_MODEL_NAME={EMBEDDNIG_LOOKUP_MODEL_NAME},'
ENV_VARIABLES += f'EMBEDDNIG_LOOKUP_MODEL_VERSION={EMBEDDNIG_LOOKUP_MODEL_VERSION}'
!gcloud beta ai-platform versions create {SCANN_MODEL_VERSION} \
--region={REGION} \
--model={SCANN_MODEL_NAME} \
--image={IMAGE_URL} \
--ports={PORT} \
--predict-route={PREDICT_ROUTE} \
--health-route={HEALTH_ROUTE} \
--machine-type=n1-standard-4 \
--env-vars={ENV_VARIABLES} \
--service-account={SERVICE_ACCOUNT_EMAIL}
print("The model version is deployed to AI Platform Prediction.")
from google.cloud import datastore
import requests
client = datastore.Client(PROJECT_ID)
def caip_scann_match(query_items, show=10):
request_body = {
'instances': [{
'query':' '.join(query_items),
'show':show
}]
}
service_name = f'projects/{PROJECT_ID}/models/{SCANN_MODEL_NAME}/versions/{SCANN_MODEL_VERSION}'
print(f'Calling: {service_name}')
response = service.projects().predict(
name=service_name, body=request_body).execute()
if 'error' in response:
raise RuntimeError(response['error'])
match_tokens = response['predictions']
keys = [client.key(KIND, int(key)) for key in match_tokens]
items = client.get_multi(keys)
return items
songs = {
'2120788': 'Limp Bizkit: My Way',
'1086322': 'Jacques Brel: Ne Me Quitte Pas',
'833391': 'Ricky Martin: Livin\' la Vida Loca',
'1579481': 'Dr. Dre: The Next Episode',
'2954929': 'Black Sabbath: Iron Man'
}
for item_Id, desc in songs.items():
print(desc)
print("==================")
similar_items = caip_scann_match([item_Id], 5)
for similar_item in similar_items:
print(f'- {similar_item["artist"]}: {similar_item["track_title"]}')
print()
BQ_DATASET_NAME = 'recommendations'
BQML_MODEL_NAME = 'item_matching_model'
BQML_MODEL_VERSION = 'v1'
BQML_MODEL_OUTPUT_DIR = f'gs://{BUCKET}/bqml/item_matching_model'
!bq --quiet extract -m {BQ_DATASET_NAME}.{BQML_MODEL_NAME} {BQML_MODEL_OUTPUT_DIR}
!saved_model_cli show --dir {BQML_MODEL_OUTPUT_DIR} --tag_set serve --signature_def serving_default
!gcloud ai-platform models create {BQML_MODEL_NAME} --region={REGION}
!gcloud ai-platform versions create {BQML_MODEL_VERSION} \
--region={REGION} \
--model={BQML_MODEL_NAME} \
--origin={BQML_MODEL_OUTPUT_DIR} \
--runtime-version=2.2 \
--framework=TensorFlow \
--python-version=3.7 \
--machine-type=n1-standard-2
print("The model version is deployed to AI Platform Predicton.")
def caip_bqml_matching(input_items, show):
request_body = {'instances': input_items}
service_name = f'projects/{PROJECT_ID}/models/{BQML_MODEL_NAME}/versions/{BQML_MODEL_VERSION}'
print(f'Calling : {service_name}')
response = service.projects().predict(
name=service_name, body=request_body).execute()
if 'error' in response:
raise RuntimeError(response['error'])
match_tokens = response['predictions'][0]["predicted_item2_Id"][:show]
keys = [client.key(KIND, int(key)) for key in match_tokens]
items = client.get_multi(keys)
return items
for item_Id, desc in songs.items():
print(desc)
print("==================")
similar_items = caip_bqml_matching([int(item_Id)], 5)
for similar_item in similar_items:
print(f'- {similar_item["artist"]}: {similar_item["track_title"]}')
print()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Configure GCP environment settings
Step2: Authenticate your GCP account
Step3: Deploy the embedding lookup model to AI Platform Prediction
Step4: Next, deploy the model
Step5: Once the model is deployed, you can verify it in the AI Platform console.
Step6: Run the caip_embedding_lookup method to retrieve item embeddings. This method accepts item IDs, calls the embedding lookup model in AI Platform Prediction, and returns the appropriate embedding vectors.
Step7: Test the caip_embedding_lookup method with three item IDs
Step8: ScaNN matching service
Step9: Use Cloud Build to build the Docker container image
Step10: Run the following command to verify the container image has been built
Step11: Create a service account for AI Platform Prediction
Step12: Grant the Cloud ML Engine (AI Platform) service account the iam.serviceAccountAdmin privilege, and grant the caip-serving service account the privileges required by the ScaNN matching service, which are storage.objectViewer and ml.developer.
Step13: Deploy the custom container to AI Platform Prediction
Step14: Deploy the custom container to AI Platform prediction. Note that you use the env-vars parameter to pass environmental variables to the Flask application in the container.
Step15: Test the Deployed ScaNN Index Service
Step16: Call the caip_scann_match method with five item IDs and request five match items for each
Step17: (Optional) Deploy the matrix factorization model to AI Platform Prediction
Step18: Deploy the exact matching model to AI Platform Prediction
|
2,725
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
print(tf.__version__)
# Load the diabetes dataset
from sklearn.datasets import load_diabetes
diabetes_datasets = load_diabetes()
print(diabetes_datasets["DESCR"])
# Save the input and target variables
print(diabetes_datasets.keys())
data = diabetes_datasets["data"]
targets = diabetes_datasets["target"]
# The ten features are already normalised.
# Normalise the target data (this will make clearer training curves)
targets = (targets - targets.mean(axis = 0)) / targets.std()
import numpy as np
# for reproducibility
np.random.seed(8)
tf.random.set_seed(8)
# Split the data into train and test sets
from sklearn.model_selection import train_test_split
train_data, test_data, train_targets, test_targets = train_test_split(data, targets, test_size = 0.1)
print (train_data.shape)
print (test_data.shape)
print (train_targets.shape)
print (test_targets.shape)
# example of train data
train_data[0]
# example of train target
train_targets[0]
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# Build the model
firstModel = Sequential([
Dense(128, activation = "relu", input_shape=(train_data.shape[1],)),
Dense(128, activation = "relu"),
Dense(128, activation = "relu"),
Dense(128, activation = "relu"),
Dense(128, activation = "relu"),
Dense(128, activation = "relu"),
Dense(1)
])
# Print the model summary
firstModel.summary()
# Compile the model: optimiser is Adam, we measure the loss and the Mean Abosulte Error as accuracy
firstModel.compile(optimizer = "adam", loss="mse", metrics=["mae"])
# Train the model, with some of the data reserved for validation (15%)
firstTrainingHistory = firstModel.fit(train_data, train_targets, epochs = 100, validation_split = 0.15,
batch_size = 64, verbose = False)
firstTrainingHistory.history['loss'][0]
firstTrainingHistory.history['val_loss'][0]
lossU, maeU = firstModel.evaluate(test_data, test_targets, verbose=False)
print(f"Loss is {lossU}\nMAE is {maeU}")
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
def plotMetricsByEpoch(history):
frame = pd.DataFrame(history)
epochs = np.arange(len(frame))
fig = plt.figure(figsize=(12,4))
# Loss plot
ax = fig.add_subplot(121)
ax.plot(epochs, frame['loss'], label="Train")
ax.plot(epochs, frame['val_loss'], label="Validation")
ax.set_xlabel("Epochs")
ax.set_ylabel("Loss")
ax.set_title("Loss vs Epochs")
ax.legend()
# Accuracy plot
ax = fig.add_subplot(122)
ax.plot(epochs, frame['mae'], label="Train")
ax.plot(epochs, frame['val_mae'], label="Validation")
ax.set_xlabel("Epochs")
ax.set_ylabel("Mean Absolute Error")
ax.set_title("Accuracy: Mean Absolute Error vs Epochs")
ax.legend();
plotMetricsByEpoch(firstTrainingHistory.history)
from tensorflow.keras import regularizers
# we define a model that is the same as the previous one, only regularised
# wd is the weight decay
wd = 1e-5
regularisedModel = Sequential([
Dense(128, kernel_regularizer = regularizers.l2(wd), activation="relu", input_shape=(train_data.shape[1],)),
Dense(128, kernel_regularizer = regularizers.l2(wd), activation="relu"),
Dense(128, kernel_regularizer = regularizers.l2(wd), activation="relu"),
Dense(128, kernel_regularizer = regularizers.l2(wd), activation="relu"),
Dense(128, kernel_regularizer = regularizers.l2(wd), activation="relu"),
Dense(128, kernel_regularizer = regularizers.l2(wd), activation="relu"),
Dense(1)])
# Compile the model
regularisedModel.compile(optimizer="adam", loss="mse", metrics=["mae"])
# Train the model, with some of the data reserved for validation
trainingHistoryR1 = regularisedModel.fit(train_data, train_targets, epochs = 100,
validation_split = 0.15, batch_size = 64, verbose=False)
plotMetricsByEpoch(trainingHistoryR1.history)
# Evaluate the model on the test set
lossR1, maeR1 = regularisedModel.evaluate(test_data, test_targets, verbose=False)
print(f"Loss is {lossR1}\nMAE is {maeR1}")
from tensorflow.keras.layers import Dropout
# we define a function to build a regularised model that we can call later to tune the parameters
# (weight decay and dropout rate)
def getRegularisedModel(wd, rate):
model = Sequential([
Dense(128, kernel_regularizer = regularizers.l2(wd), activation="relu", input_shape=(train_data.shape[1],)),
Dropout(rate),
Dense(128, kernel_regularizer = regularizers.l2(wd), activation="relu"),
Dropout(rate),
Dense(128, kernel_regularizer = regularizers.l2(wd), activation="relu"),
Dropout(rate),
Dense(128, kernel_regularizer = regularizers.l2(wd), activation="relu"),
Dropout(rate),
Dense(128, kernel_regularizer = regularizers.l2(wd), activation="relu"),
Dropout(rate),
Dense(128, kernel_regularizer = regularizers.l2(wd), activation="relu"),
Dropout(rate),
Dense(1)
])
return model
regularisedModelD = getRegularisedModel(1e-5, 0.3)
# Compile the model
regularisedModelD.compile(optimizer="adam", loss="mse", metrics=["mae"])
# Train the model, with some of the data reserved for validation
trainingHistoryR2 = regularisedModelD.fit(train_data, train_targets, epochs = 100,
validation_split = 0.15, batch_size = 64, verbose=False)
# Evaluate the model on the test set
lossR2, maeR2 = regularisedModelD.evaluate(test_data, test_targets, verbose=False)
print(f"Loss is {lossR2}\nMAE is {maeR2}")
# Plot the training and validation loss
plotMetricsByEpoch(trainingHistoryR2.history)
from tensorflow.keras.layers import BatchNormalization
wd = 1e-8
rate = 0.2
modelBN = Sequential([
Dense(128, kernel_regularizer = regularizers.l2(wd), activation="relu", input_shape=(train_data.shape[1],)),
Dropout(rate),
Dense(128, kernel_regularizer = regularizers.l2(wd), activation="relu"),
Dropout(rate),
Dense(128, kernel_regularizer = regularizers.l2(wd), activation="relu"),
Dropout(rate),
Dense(128, kernel_regularizer = regularizers.l2(wd), activation="relu"),
Dropout(rate),
Dense(128, kernel_regularizer = regularizers.l2(wd), activation="relu"),
Dropout(rate),
Dense(128, kernel_regularizer = regularizers.l2(wd), activation="relu"),
Dropout(rate),
])
# Add a customised batch normalisation layer
modelBN.add(BatchNormalization(
momentum=0.95,
epsilon=0.005,
axis = -1,
beta_initializer = tf.keras.initializers.RandomNormal(mean=0.0, stddev=0.05),
gamma_initializer= tf.keras.initializers.Constant(value=0.9)
))
# Add the output layer
modelBN.add(Dense(1))
# Print the model summary
modelBN.summary()
# Compile the model
modelBN.compile(optimizer='adam',
loss='mse',
metrics=['mae'])
# Train the model
trainingHistoryBN = modelBN.fit(train_data, train_targets, epochs=100, validation_split=0.15, batch_size=64,
verbose=False)
# Evaluate the model on the test set
lossBN, maeBN = modelBN.evaluate(test_data, test_targets, verbose=False)
print(f"Loss is {lossBN}\nMAE is {maeBN}")
# Plot the training and validation loss
plotMetricsByEpoch(trainingHistoryBN.history)
modelBN2 = Sequential([
Dense(128, kernel_regularizer = regularizers.l2(wd), activation="relu", input_shape=(train_data.shape[1],)),
BatchNormalization(),
Dropout(rate),
Dense(128, kernel_regularizer = regularizers.l2(wd), activation="relu"),
BatchNormalization(),
Dropout(rate),
Dense(128, kernel_regularizer = regularizers.l2(wd), activation="relu"),
BatchNormalization(),
Dropout(rate),
Dense(128, kernel_regularizer = regularizers.l2(wd), activation="relu"),
BatchNormalization(),
Dropout(rate),
Dense(128, kernel_regularizer = regularizers.l2(wd), activation="relu"),
BatchNormalization(),
Dropout(rate),
Dense(128, kernel_regularizer = regularizers.l2(wd), activation="relu"),
BatchNormalization(
momentum=0.95,
epsilon=0.005,
beta_initializer=tf.keras.initializers.RandomNormal(mean=0.0, stddev=0.05),
gamma_initializer=tf.keras.initializers.Constant(value=0.9)),
Dense(1)
])
# Compile the model
modelBN2.compile(optimizer='adam',
loss='mse',
metrics=['mae'])
# Train the model
trainingHistoryBN2 = modelBN2.fit(train_data, train_targets, epochs=100, validation_split=0.15, batch_size=64,
verbose=False)
# Evaluate the model on the test set
lossBN2, maeBN2 = modelBN2.evaluate(test_data, test_targets, verbose=False)
print(f"Loss is {lossBN2}\nMAE is {maeBN2}")
# Plot the training and validation loss
plotMetricsByEpoch(trainingHistoryBN2.history)
# Write a custom callback
from tensorflow.keras.callbacks import Callback
class TrainingCallback(Callback):
def on_train_begin(self, logs=None):
print("Starting Training ...")
# Print the loss and mean absolute error after each epoch
def on_epoch_end(self, epoch, logs=None):
print('Epoch {}: Average loss is {:7.2f}, mean absolute error is {:7.2f}.'.format(epoch, logs['loss'], logs['mae']))
def on_train_end(self, logs=None):
print("Finished training!")
# Train the model, with some of the data reserved for validation
regularisedModel.fit(train_data, train_targets, epochs = 3, batch_size=128, verbose=False, callbacks=[TrainingCallback()])
# Write more custom callbacks
class LossAndMetricCallback(Callback):
def on_test_begin(self, logs=None):
print("Starting Testing ...")
# Print the loss after each batch in the test set
def on_test_batch_end(self, batch, logs=None):
print('\n After batch {}, the loss is {:7.2f}.'.format(batch, logs['loss']))
def on_test_end(self, logs=None):
print("Finished testing!")
def on_predict_begin(self, logs=None):
print("Starting prediction ...")
# Notify the user when prediction has finished on each batch
def on_predict_batch_end(self,batch, logs=None):
print("Finished prediction on batch {}!".format(batch))
def on_predict_end(self, logs=None):
print("Finished Prediction!")
# Evaluate the model
regularisedModel.evaluate(test_data, test_targets, verbose=False, callbacks=[LossAndMetricCallback()])
# Make predictions with the model
regularisedModel.predict(test_data, verbose=False, callbacks=[LossAndMetricCallback()])
from tensorflow.keras.callbacks import EarlyStopping
# Re-train the regularised model
regularisedModelES = getRegularisedModel(1e-8, 0.2)
regularisedModelES.compile(optimizer="adam", loss="mse", metrics=["mae"])
reg_history = regularisedModelES.fit(train_data, train_targets, epochs=100,
validation_split=0.15, batch_size=64, verbose=False,
callbacks= [EarlyStopping(patience=3)])
# Evaluate the model on the test set
lossR2, maeR2 = regularisedModelES.evaluate(test_data, test_targets, verbose=False)
print(f"Loss is {lossR2}\nMAE is {maeR2}")
# Plot the training and validation loss
plotMetricsByEpoch(reg_history.history)
# Re-train the regularised model
regularisedModelES = getRegularisedModel(1e-8, 0.2)
regularisedModelES.compile(optimizer="adam", loss="mse", metrics=["mae"])
reg_history = regularisedModelES.fit(train_data, train_targets, epochs=100,
validation_split=0.15, batch_size=64, verbose=False,
callbacks= [EarlyStopping(monitor='val_mae', patience=3)])
# Evaluate the model on the test set
lossR2, maeR2 = regularisedModelES.evaluate(test_data, test_targets, verbose=False)
print(f"Loss is {lossR2}\nMAE is {maeR2}")
# Plot the training and validation loss
plotMetricsByEpoch(reg_history.history)
# Define the learning rate schedule. The tuples below are (start_epoch, new_learning_rate)
lr_schedule = [
(0, 0.01), (4, 0.007), (7, 0.005), (11, 0.003), (15, 0.001)
]
def get_new_epoch_lr(epoch, lr):
# Checks to see if the input epoch is listed in the learning rate schedule
# and if so, returns index in lr_schedule
epoch_in_sched = [i for i in range(len(lr_schedule)) if lr_schedule[i][0]==int(epoch)]
if len(epoch_in_sched)>0:
# If it is, return the learning rate corresponding to the epoch
return lr_schedule[epoch_in_sched[0]][1]
else:
# Otherwise, return the existing learning rate
return lr
# Define the custom callback
class LRScheduler(tf.keras.callbacks.Callback):
def __init__(self, new_lr):
super(LRScheduler, self).__init__()
# Add the new learning rate function to our callback
self.new_lr = new_lr
def on_epoch_begin(self, epoch, logs=None):
# Make sure that the optimizer we have chosen has a learning rate, and raise an error if not
if not hasattr(self.model.optimizer, 'lr'):
raise ValueError('Error: Optimizer does not have a learning rate.')
# Get the current learning rate
curr_rate = float(tf.keras.backend.get_value(self.model.optimizer.lr))
# Call the auxillary function to get the scheduled learning rate for the current epoch
scheduled_rate = self.new_lr(epoch, curr_rate)
# Set the learning rate to the scheduled learning rate
tf.keras.backend.set_value(self.model.optimizer.lr, scheduled_rate)
print('Learning rate for epoch {} is {:7.3f}'.format(epoch, scheduled_rate))
regularisedModelLR = getRegularisedModel(1e-8, 0.2)
# Print the model summary
regularisedModelLR.summary()
# Compile the model
regularisedModelLR.compile(optimizer = "adam", loss="mse", metrics=["mae"])
# Fit the model with our learning rate scheduler callback
new_history = regularisedModelLR.fit(train_data, train_targets, epochs=20,
validation_split=0.15, batch_size=64, callbacks=[LRScheduler(get_new_epoch_lr)], verbose=False)
# Evaluate the model on the test set
lossULT, maeULT = regularisedModelLR.evaluate(test_data, test_targets, verbose=False)
print(f"Loss is {lossULT}\nMAE is {maeULT}")
# Plot the training and validation loss
plotMetricsByEpoch(new_history.history)
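The stepwise behaviour of the callback can be replayed outside Keras. Below is a minimal pure-Python sketch of the same schedule logic (no TensorFlow needed; `scheduled_lr` mirrors `get_new_epoch_lr` above):

```python
lr_schedule = [(0, 0.01), (4, 0.007), (7, 0.005), (11, 0.003), (15, 0.001)]

def scheduled_lr(epoch, current):
    # Return the scheduled rate if this epoch is listed, otherwise keep the current rate
    hits = [rate for (e, rate) in lr_schedule if e == int(epoch)]
    return hits[0] if hits else current

lr = 0.0
trajectory = []
for epoch in range(20):
    lr = scheduled_lr(epoch, lr)
    trajectory.append(lr)
# the rate holds between scheduled epochs and steps down at epochs 4, 7, 11 and 15
```

Replaying the schedule this way makes it easy to verify the trajectory before paying for a full training run.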
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and pre-process the data
Step2: Split the data into train and test sets
Step3: First model
Step4: Compile and train the first unregularised model
Step5: Let's say then that we want
Step6: In the previous post we have seen that the model.fit method returns the
Step7: Evaluate the model on the test set
Step8: Loss = 0.68
Step9: One interesting outcome from the curves is that the metric on the validation dataset (the orange curve) is not improving at all after the first few epochs; it even diverges.
Step10: The loss improved to 0.55, but the results on the validation dataset show it's still overfitting.
Step11: Loss = 0.45
Step12: The validation curve is now much closer to the training one and keeps improving across the epochs
Step13: There are some parameters and hyperparameters associated with batch normalisation
Step14: Let's now compile and fit our model with batch normalisation, and track the progress on training and validation sets.
Step15: Batch normalisation can be applied to all layers, not only before the last one, and this usually improves the performance
Step16: Plot the learning curves
Step17: Introduction to callbacks
Step18: You can see that on_epoch_end has been overwritten to print the current loss and accuracy, after the end of the epoch.
Step19: There are quite a few
Step20: Plot the learning curves
Step21: You can see that the model stopped training after 20 epochs (that's a fifth of the original one!) because the error on training wasn't improving anymore.
Step22: Plot the learning curves
Step23: The process of setting the hyper-parameters requires expertise and extensive trial and error. There are no simple and easy ways to set hyper-parameters — specifically, learning rate, batch size, momentum, and weight decay.
Step24: We can apply the callback to the regularised model and see how the learning rate parameter gets adapted.
Step25: You can see that the learning rate was constantly adapted and diminished.
Step26: Plot the learning curves
|
2,726
|
<ASSISTANT_TASK:>
Python Code:
%pdb
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
view_sentence_range = (1000, 1010)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines/sentences: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
import numpy as np
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
words_counter = Counter(text)
words_sorted = sorted(words_counter, key=words_counter.get,reverse=True)
int_to_vocab = dict([i,words_sorted[i]] for i in range(len(words_sorted)))
vocab_to_int = dict([words_sorted[i],i] for i in range(len(words_sorted)))
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
token_lookup_dict = {
"." : "||PERIOD||",
"," : "||COMMA||",
"\"" : "||QUOTATION||",
";" : "||SEMICOLON||",
"!" : "||EXLAMATIONMARK||",
"?" : "||QUESTIONMARK||",
"(" : "||LEFTPARENTHESIS||",
")" : "||RIGHTPARENTHESIS||",
"--" : "||DASH||",
"\n" : "||RETURN||"
}
return token_lookup_dict
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
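Downstream, the preprocessing helper applies this map with simple string replacement so each punctuation mark becomes its own "word". A minimal sketch of that substitution (the helper's internals are assumed, not shown in this notebook):

```python
token_lookup_dict = {'.': '||PERIOD||', ',': '||COMMA||', '!': '||EXLAMATIONMARK||'}

def tokenize(text, table):
    # Pad each token with spaces so split() separates it from neighbouring words
    for punct, token in table.items():
        text = text.replace(punct, ' {} '.format(token))
    return text.split()

words = tokenize('hello, world!', token_lookup_dict)
# -> ['hello', '||COMMA||', 'world', '||EXLAMATIONMARK||']
```

Because the tokens contain no punctuation themselves, the substitution is trivially reversible at generation time.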
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
input = tf.placeholder(tf.int32,[None,None], name='input')
targets = tf.placeholder(tf.int32,[None,None], name='targets')
learning_rate = tf.placeholder(tf.float32,shape=(), name = 'learning_rate')
return input, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
def get_init_cell(batch_size, rnn_size, keep_prob = 0.5, num_layers = 1):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop]*num_layers)  # stack the dropout-wrapped cell; using lstm here would silently ignore keep_prob
initial_state = cell.zero_state(batch_size = batch_size, dtype=tf.float32)
initial_state = tf.identity(initial_state, name="initial_state")
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
embedding = tf.Variable(tf.random_uniform([vocab_size,embed_dim],-0.5,0.5), name = 'embedding')
embed = tf.nn.embedding_lookup(embedding,input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
output,final_state = tf.nn.dynamic_rnn(cell,inputs,dtype=tf.float32)
final_state = tf.identity(final_state,"final_state")
return output, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
#get embeddings
embed_layer = get_embed(input_data,vocab_size,300)
#get the rnn
output, final_state = build_rnn(cell,embed_layer)
#fully connected layer
logits_pre = tf.contrib.layers.fully_connected(output,
300,
activation_fn=tf.nn.relu,
weights_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.05),
biases_initializer=tf.zeros_initializer()
)
#adding an extra layer
logits = tf.contrib.layers.fully_connected(logits_pre,
vocab_size,
activation_fn=None,  # logits must stay linear; softmax is applied later by the loss
weights_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.05),
biases_initializer=tf.zeros_initializer()
)
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
#%pdb
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
#print("length int_text before: ",len(int_text))
n_batches = len(int_text)//(batch_size * seq_length)
print("Losing",len(int_text)%(batch_size * seq_length),"characters")
int_text = int_text[:n_batches*batch_size*seq_length +1]
#print("length int_text after: ",len(int_text))
#print("batch size = ",batch_size, " , seq_length = ",seq_length , " n_batches = ", n_)
batches = np.zeros(shape=(n_batches,2,batch_size,seq_length),dtype=int)
batch_served = 0
for i in range(0,len(int_text)-2,seq_length):
#print("i: ",i," batch served: ",batch_served)
batches[batch_served,0,i//(seq_length*n_batches)]= int_text[i:i+seq_length]
batches[batch_served,1,i//(seq_length*n_batches)] =int_text[i+1:i+seq_length+1]
batch_served+=1
if(batch_served == n_batches):
batch_served =0
return batches
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
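For intuition on the layout `get_batches` must produce, here is one conventional pure-Python sketch (an illustration of the target shape, not the exact indexing scheme used above): targets are the inputs shifted by one token, and each of the `batch_size` rows stays contiguous from batch to batch.

```python
def make_batches(tokens, batch_size, seq_len):
    # Keep only full batches; the target for position i is token i + 1
    n_batches = len(tokens) // (batch_size * seq_len)
    keep = n_batches * batch_size * seq_len
    xs, ys = tokens[:keep], tokens[1:keep + 1]
    # Split into batch_size rows so each row is contiguous across batches
    row_len = keep // batch_size
    x_rows = [xs[r * row_len:(r + 1) * row_len] for r in range(batch_size)]
    y_rows = [ys[r * row_len:(r + 1) * row_len] for r in range(batch_size)]
    return [([r[b * seq_len:(b + 1) * seq_len] for r in x_rows],
             [r[b * seq_len:(b + 1) * seq_len] for r in y_rows])
            for b in range(n_batches)]

batches = make_batches(list(range(13)), batch_size=2, seq_len=3)
# batches[0] -> ([[0, 1, 2], [6, 7, 8]], [[1, 2, 3], [7, 8, 9]])
```

Keeping rows contiguous is what lets the LSTM state carry over meaningfully from one batch to the next during training.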
# Number of Epochs
num_epochs = 150
# Batch Size
batch_size = 64
# RNN Size
rnn_size = 512
# Sequence Length
seq_length = 16
# Learning Rate
learning_rate = 0.001
# Show stats for every n number of batches
show_every_n_batches = 50
#adding keep prob
keep_prob = 0.8
#adding number of LSTM layers
num_layers = 1
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size, keep_prob,num_layers)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
return loaded_graph.get_tensor_by_name("input:0"), loaded_graph.get_tensor_by_name("initial_state:0"), loaded_graph.get_tensor_by_name("final_state:0"), loaded_graph.get_tensor_by_name("probs:0")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
return int_to_vocab[np.argmax(probabilities)]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
gen_length = 500
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step9: Tokenize Punctuation
Step11: Preprocess all the data and save it
Step13: Check Point
Step15: Build the Neural Network
Step18: Input
Step21: Build RNN Cell and Initialize
Step24: Word Embedding
Step27: Build RNN
Step30: Build the Neural Network
Step33: Batches
Step35: Neural Network Training
Step37: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Implement Generate Functions
Step49: Choose Word
Step51: Generate TV Script
|
2,727
|
<ASSISTANT_TASK:>
Python Code:
xi, l, rho = symbols('xi, l, rho')
# Shape functions
S = Matrix(np.zeros((4, 12)))
x2 = (1 - xi)
S[0, 0 ] = x2 # extension
S[0, 6 ] = xi
S[1, 1 ] = x2**2 * (3 - 2*x2) # y-deflection
S[1, 7 ] = xi**2 * (3 - 2*xi)
S[1, 5 ] = -x2**2 * (x2 - 1) * l
S[1, 11] = xi**2 * (xi - 1) * l
S[2, 2 ] = x2**2 * (3 - 2*x2) # z-deflection
S[2, 8 ] = xi**2 * (3 - 2*xi)
S[2, 4 ] = x2**2 * (x2 - 1) * l
S[2, 10] = -xi**2 * (xi - 1) * l
S[3, 3 ] = x2 # torsion
S[3, 9 ] = xi
#S[4, 2 ] = 6 * x2 * (x2 - 1) / l # y-rotation
#S[4, 8 ] = 6 * xi * (xi - 1) / l
#S[4, 4 ] = -x2 * (3*x2 - 2)
#S[4, 10] = xi * (3*xi - 2)
#S[5, 1 ] = -6 * x2 * (x2 - 1) / l # z-rotation
#S[5, 7 ] = -6 * xi * (xi - 1) / l
#S[5, 5 ] = x2 * (3*x2 - 2)
#S[5, 11] = xi * (3*xi - 2)
S[:3, :].T
titles = ['x-defl', 'y-defl', 'z-defl', 'torsion']
for i in range(4):
sympy.plot(*([xx.subs(l, 2) for xx in S[i,:] if xx != 0] + [(xi, 0, 1)]),
title=titles[i])
rho1, rho2 = symbols('rho_1, rho_2')
rho = (1 - xi)*rho1 + xi*rho2
rho
def sym_me():
m = Matrix(np.diag([rho, rho, rho, 0]))
integrand = S.T * m * S
me = integrand.applyfunc(
lambda xxx: l * sympy.integrate(xxx, (xi, 0, 1)).expand().factor()
)
return me
me = sym_me()
me.shape
me[0,:]
me[6,:]
me[1,:]
me.subs({rho2: rho1})/rho1
mass = l * sympy.integrate(rho, (xi, 0, 1)).factor()
mass
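As an independent sanity check on the symbolic result, the classic uniform-beam coefficient 13/35 that appears in the simplified mass matrix (the diagonal transverse-deflection term, cf. Rao) can be recovered by numerically integrating the square of the first Hermite deflection shape function. A pure-Python sketch using the trapezoidal rule (no SymPy assumed):

```python
def n1(xi):
    # Hermite shape function for the transverse deflection at node 1
    return 1.0 - 3.0 * xi**2 + 2.0 * xi**3

# Trapezoidal integration of n1(xi)^2 over [0, 1]
steps = 100000
h = 1.0 / steps
integral = sum(h * 0.5 * (n1(i * h)**2 + n1((i + 1) * h)**2) for i in range(steps))
# integral -> 13/35 ~= 0.3714, the factor multiplying rho*l in the uniform-density mass matrix
```

This matches the 13 l/35 entry visible in `me.subs({rho2: rho1})/rho1` above.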
shape_integral_1 = S[:3, :].applyfunc(
lambda xxx: l * sympy.integrate(rho * xxx, (xi, 0, 1)).expand().simplify()
)
shape_integral_1.T
shape_integral_2 = [
[l * (S[i, :].T * S[j, :]).applyfunc(
lambda xxx: sympy.integrate(rho * xxx, (xi, 0, 1)).expand().simplify())
for j in range(3)]
for i in range(3)
]
(shape_integral_2[0][0] + shape_integral_2[1][1] + shape_integral_2[2][2] - me).expand()
B = Matrix(np.zeros((4, 12)))
B[0, :] = S[0, :].diff(xi, 1) / l
B[1, :] = S[1, :].diff(xi, 2) / l**2
B[2, :] = S[2, :].diff(xi, 2) / l**2
B[3, :] = S[3, :].diff(xi, 1) / l
B.simplify()
B[:, :6]
B[:, 6:]
titles = ['x-defl', 'y-defl', 'z-defl', 'torsion']
for i in range(4):
sympy.plot(*([xx.subs(l, 2) for xx in B[i,:] if xx != 0] + [(xi, 0, 1)]),
title=titles[i])
EA1, EA2, EIy1, EIy2, EIz1, EIz2, GJ1, GJ2 = symbols('EA_1, EA_2, EIy_1, EIy_2, EIz_1, EIz_2, GJ_1, GJ_2')
EA = (1 - xi)*EA1 + xi*EA2
EIy = (1 - xi)*EIy1 + xi*EIy2
EIz = (1 - xi)*EIz1 + xi*EIz2
GJ = (1 - xi)*GJ1 + xi*GJ2
EA, EIy, EIz, GJ
Ex, Ey, Ez, Gx = symbols('E_x, E_y, E_z, G_x')
def sym_ke():
# Note the order -- y deflections depend on EIz etc
E = Matrix(np.diag([EA, EIz, EIy, GJ]))
integrand = B.T * E * B
ke = integrand.applyfunc(
lambda xxx: l * sympy.integrate(xxx, (xi, 0, 1)).factor() #.subs((EA1+EA2), 2*Ex) #.expand().factor()
)
return ke
ke = sym_ke()
def simplify_ke(ke, EI=False):
result = ke.applyfunc(
lambda xxx: xxx.subs((EA1+EA2), 2*Ex).subs((GJ1+GJ2), 2*Gx))
if EI:
result = result.applyfunc(
lambda xxx: xxx.subs((EIy1+EIy2), 2*Ey).subs((EIz1+EIz2), 2*Ez))
return result
kem = simplify_ke(ke, True)
kem[:, :6]
kem[:, 6:]
kem.subs({EA1: Ex, EA2: Ex, EIy1: Ey, EIy2: Ey, EIz1: Ez, EIz2: Ez})
# G = Matrix(np.zeros((3, 12)))
# G[0, :] = S[0, :].diff(xi, 1) / l
# G[1, :] = S[1, :].diff(xi, 1) / l
# G[2, :] = S[2, :].diff(xi, 1) / l
G = S[:3, :].diff(xi, 1) / l
G.simplify()
G[:, :6]
G[:, 6:]
def sym_ks_axial_force():
# Unit axial force (absorbing area from integral), block matrix 3 times
#smat3 = Matrix(9, 9, lambda i, j: 1 if i == j and i % 3 == 0 else 0)
#integrand = G.T * smat3 * G
integrand = G.T * G
ks = integrand.applyfunc(
lambda xxx: l * sympy.integrate(xxx, (xi, 0, 1)).factor() #.subs((EA1+EA2), 2*Ex) #.expand().factor()
)
return ks
ks = sym_ks_axial_force()
ks * 30*l
fx1, fx2, fy1, fy2, fz1, fz2 = symbols('f_x1, f_x2, f_y1, f_y2, f_z1, f_z2')
fx = (1 - xi)*fx1 + xi*fx2
fy = (1 - xi)*fy1 + xi*fy2
fz = (1 - xi)*fz1 + xi*fz2
f = Matrix([fx, fy, fz])
f
# Shape functions for applied force -- linear
SF = Matrix(np.zeros((3, 12)))
SF[0, 0 ] = x2 # x
SF[0, 6 ] = xi
SF[1, 1 ] = x2 # y
SF[1, 7 ] = xi
SF[2, 2 ] = x2 # z
SF[2, 8 ] = xi
SF
shape_integral_F1 = SF.applyfunc(
lambda xxx: l * sympy.integrate(xxx, (xi, 0, 1)).expand().simplify()
)
shape_integral_F1
shape_integral_F2 = [
[l * (S[i, :].T * SF[j, :]).applyfunc(
lambda xxx: sympy.integrate(xxx, (xi, 0, 1)).expand().simplify())
for j in range(3)]
for i in range(3)
]
F = shape_integral_F2[0][0] + shape_integral_F2[1][1] + shape_integral_F2[2][2]
F * Matrix([fx1, fy1, fz1, 0, 0, 0, fx1, fy1, fz1, 0, 0, 0])
F * Matrix([0, 0, fz1, 0, 0, 0, 0, 0, fz1, 0, 0, 0]) / (l/12*fz1)
def numpy_array_str(expr):
return str(expr) \
.replace('Matrix([', 'array([\n') \
.replace('], [', '],\n[') \
.replace(']])', ']\n])')
def numpy_array_str_2x2(arr):
return ',\n'.join(['[{}]'.format(',\n'.join([numpy_array_str(arr[i][j])
for j in range(3)]))
for i in range(3)])
import datetime
code =
# Automatically generated from SymPy in notebook
# {date}
from __future__ import division
from numpy import array
def mass(l, rho_1, rho_2):
return {mass}
def S1(l, rho_1, rho_2):
return {S1}
def S2(l, rho_1, rho_2):
return [
{S2}
]
def K(l, E_x, G_x, EIy_1, EIy_2, EIz_1, EIz_2):
return {K}
def Ks(l):
return {Ks}
def F1(l):
return {F1}
def F2(l):
return [
{F2}
]
.format(
date=datetime.datetime.now(),
mass=mass,
S1=numpy_array_str(shape_integral_1),
S2=numpy_array_str_2x2(shape_integral_2),
K=numpy_array_str(simplify_ke(ke)),
Ks=numpy_array_str(ks),
F1=numpy_array_str(shape_integral_F1),
F2=numpy_array_str_2x2(shape_integral_F2)
)
with open('../beamfe/tapered_beam_element_integrals.py', 'wt') as f:
f.write(code)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Mass matrix
Step2: Integrate the density distribution with the shape functions.
Step3: Special case
Step4: Shape integrals
Step5: First shape integral
Step6: The mass matrix should be the same as the trace of the 2nd shape integrals
Step7: Stiffness matrix
Step8: Define the stiffness distribution (linear)
Step9: Note that $EI_y$ refers to the stiffness for bending in the $y$-direction, not about the $y$ axis.
Step10: Special case
Step11: This is the same as Rao2004 p.326
Step12: This agrees with Cook1989, p.434, for the transverse directions.
Step13: First shape integral for applied forces
Step14: Second shape integral for applied forces
Step15: The generalised nodal forces are given by the trace of this (the other parts of it are used to find the moments on the whole body directly...)
Step16: Special case -- constant force
Step17: Another example -- a uniformly distributed force in the z direction
Step19: Output
|
2,728
|
<ASSISTANT_TASK:>
Python Code:
b = 5
b = 6
assert b == 6
# Here's a comment!
b = 5
# Here's another comment! Neither of these comments are evaluated by Python!
# this is my comment
b = 6
print(b)
print(500)
# Integer
i = 1
# String
s = "Hello World"
# Float
f = 55.55
hundred_integer = 100
hundred_string = "hundred"
hundred_float = 100.5
assert hundred_integer == 100
a = type(5)
# The type is assigned to a. When you print the type, it is abbreviated to `str`
print(a)
c = type(10)
print(c)
five = 5
twenty_five = five * 5
negative_five = -five
assert twenty_five == 25
assert negative_five == -5
ten = "10"
eight = 8
int_ten = int(ten)
str_eight = str(eight)
assert int_ten == 10
assert str_eight == u"8"
l = []
# Print the type of `l` to confirm it's a list.
print(type(l))
l.append("January")
l.append("February")
l.append("March")
l.append("April")
print(l)
l = ["January", "February", "March", "April"]
m = [0,1,2,3]
years = [_ for _ in range(2010, 2015)]
print(years)
o = [u"Jan", 5., 1., u"uary", 10]
print(o)
int_months = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
index_four = int_months[4]
last_value = int_months[-1]
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov"]
second_last = months[-2]
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sept", "Oct", "Nov", "Dec"]
eight_eleven = months[8:12]
ending_index = len(months)
eight_eleven = months[8:ending_index]
five_nine = months[5:10]
print(months[:5])
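A compact recap of the indexing and slicing rules used above (month names assumed as before):

```python
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sept", "Oct", "Nov", "Dec"]

first = months[0]        # indexing starts at 0 -> "Jan"
last = months[-1]        # negative indices count from the end -> "Dec"
spring = months[2:5]     # slice is inclusive start, exclusive stop -> ["Mar", "Apr", "May"]
head = months[:3]        # omitted start defaults to 0 -> ["Jan", "Feb", "Mar"]
tail = months[9:]        # omitted stop defaults to len(months) -> ["Oct", "Nov", "Dec"]
```

The same rules apply to any Python sequence, including strings and tuples.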
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 4
Step2: 5
Step3: 6
Step4: 7
Step5: 8
Step6: 9
Step7: 11
Step8: 12
Step9: 13
Step10: 14
Step11: 15
Step12: 16
|
2,729
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%load_ext autoreload
%autoreload 2
import sys
sys.path.append('../../')
import imp
import macrodensity as md
import math
import numpy as np
import matplotlib.pyplot as plt
import os
if os.path.isfile('LOCPOT'):
print('LOCPOT already exists')
else:
os.system('bunzip2 LOCPOT.bz2')
input_file = 'LOCPOT'
#=== No need to edit below
vasp_pot, NGX, NGY, NGZ, Lattice = md.read_vasp_density(input_file)
vector_a,vector_b,vector_c,av,bv,cv = md.matrix_2_abc(Lattice)
resolution_x = vector_a/NGX
resolution_y = vector_b/NGY
resolution_z = vector_c/NGZ
grid_pot, electrons = md.density_2_grid(vasp_pot,NGX,NGY,NGZ)
cube_origin = [1,1,1]
travelled = [0,0,0]
int(cube_origin[0]*NGX)
dim = [1,10,20,40,60,80,100]
print("Dimension Potential Variance")
print("--------------------------------")
for d in dim:
cube = [d, d, d]
cube_potential, cube_var = md.volume_average(cube_origin, cube,grid_pot, NGX, NGY, NGZ, travelled=travelled)
print(" %3i %10.4f %10.6f"%(d, cube_potential, cube_var))
print("IP: %3.4f eV" % (2.3068 -- 2.4396 ))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read the potential
Step2: Look for pore centre points
Step3: We want to try a range of sampling area sizes.
Step4: From the OUTCAR the VBM is at -2.4396 V
|
2,730
|
<ASSISTANT_TASK:>
Python Code:
from geopy.geocoders import GoogleV3
geolocator = GoogleV3()
# geolocator = GoogleV3(api_key=<your_google_api_key>)
t = pd.read_csv('https://data.cityofnewyork.us/api/views/43nn-pn8j/rows.csv?accessType=DOWNLOAD', header=0, sep=',', dtype={'PHONE':str, 'INSPECTION DATE':str});
## Helper Functions
from datetime import datetime
def str_to_iso(text):
if text != '':
for fmt in (['%m/%d/%Y']):
try:
#print(fmt)
#print(datetime.strptime(text, fmt))
return datetime.isoformat(datetime.strptime(text, fmt))
except ValueError:
#print(text)
pass
#raise ValueError('Changing date')
else:
return None
def getLatLon(row):
if row['Address'] != '':
location = geolocator.geocode(row['Address'], timeout=10000, sensor=False)
if location != None:
lat = location.latitude
lon = location.longitude
#print(lat,lon)
return [lon, lat]
elif row['Zipcode'] != '':  # fall back to the zipcode; testing only it avoids referencing location before assignment
location = geolocator.geocode(row['Zipcode'], timeout=10000, sensor=False)
if location != None:
lat = location.latitude
lon = location.longitude
#print(lat,lon)
return [lon, lat]
else:
return None
def getAddress(row):
if row['Building'] != '' and row['Street'] != '' and row['Boro'] != '':
x = row['Building']+' '+row['Street']+' '+row['Boro']+',NY'
x = re.sub(' +',' ',x)
return x
else:
return ''
def combineCT(x):
return str(x['Inspection_Date'][0][0:10])+'_'+str(x['Camis'])
# process column names: remove spaces & use title casing
t.columns = map(str.title, t.columns)
t.columns = map(lambda x: x.replace(' ', '_'), t.columns)
# replace nan with ''
t.fillna('', inplace=True)
# Convert date to ISO format
t['Inspection_Date'] = t['Inspection_Date'].map(lambda x: str_to_iso(x))
t['Record_Date'] = t['Record_Date'].map(lambda x: str_to_iso(x))
t['Grade_Date'] = t['Grade_Date'].map(lambda x: str_to_iso(x))
#t['Inspection_Date'] = t['Inspection_Date'].map(lambda x: x.split('/'))
# Combine Street, Building and Boro information to create Address string
t['Address'] = t.apply(getAddress, axis=1)
addDict = t[['Address','Zipcode']].copy(deep=True)
addDict = addDict.drop_duplicates()
addDict['Coord'] = [None]* len(addDict)
for item_id, item in addDict.iterrows():
if item_id % 100 == 0:
print(item_id)
if addDict['Coord'][item_id] == None:
addDict['Coord'][item_id] = getLatLon(item)
#print(addDict.loc[item_id]['Coord'])
# Save address dictionary to CSV
#addDict.to_csv('./dict_final.csv')
# Merge coordinates into original table
t1 = t.merge(addDict[['Address', 'Coord']])
# Keep only 1 value of score and grade per inspection
t2=t1.copy(deep=True)
t2['raw_num'] = t2.index
t2['RI'] = t2.apply(combineCT, axis=1)
yy = t2.groupby('RI').first().reset_index()['raw_num']
t2['Unique_Score'] = None
t2['Unique_Score'].loc[yy.values] = t2['Score'].loc[yy.values]
t2['Unique_Grade'] = None
t2['Unique_Grade'].loc[yy.values] = t2['Grade'].loc[yy.values]
del(t2['RI'])
del(t2['raw_num'])
del(t2['Grade'])
del(t2['Score'])
t2.rename(columns={'Unique_Grade' : 'Grade','Unique_Score':'Score'}, inplace=True)
t2['Grade'].fillna('', inplace=True)
t2.iloc[1]
### Create and configure Elasticsearch index
# Name of index and document type
index_name = 'nyc_restaurants';
doc_name = 'inspection'
# Delete donorschoose index if one does exist
if es.indices.exists(index_name):
es.indices.delete(index_name)
# Create donorschoose index
es.indices.create(index_name)
# Add mapping
with open('./inspection_mapping.json') as json_mapping:
d = json.load(json_mapping)
es.indices.put_mapping(index=index_name, doc_type=doc_name, body=d)
# Index data
for item_id, item in t2.iterrows():
if item_id % 1000 == 0:
print(item_id)
thisItem = item.to_dict()
#thisItem['Coord'] = getLatLon(thisItem)
thisDoc = json.dumps(thisItem);
#pprint.pprint(thisItem)
# write to elasticsearch
es.index(index=index_name, doc_type=doc_name, id=item_id, body=thisDoc)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import Data
Step2: Data preprocessing
Step3: Create a dictionary of unique Addresses. We do this to avoid calling the Google geocoding api multiple times for the same address
Step4: Get the geolocation for each address. This step can take a while because it calls the Google geocoding API once per unique address.
Step5: Index Data
|
2,731
|
<ASSISTANT_TASK:>
Python Code:
def sort(L):
if len(L) <= 1:
return L
x, y, R = L[0], L[-1], L[1:-1]
p1, p2 = min(x, y), max(x, y)
L1, L2, L3 = partition(p1, p2, R)
if p1 == p2:
return sort(L1) + [p1] + L2 + [p2] + sort(L3)
else:
return sort(L1) + [p1] + sort(L2) + [p2] + sort(L3)
def partition(p1, p2, L):
if L == []:
return [], [], []
x, *R = L
R1, R2, R3 = partition(p1, p2, R)
if x < p1:
return [x] + R1, R2, R3
if x <= p2:
return R1, [x] + R2, R3
else:
return R1, R2, [x] + R3
partition(5, 13, [1, 19, 27, 2, 5, 6, 4, 7, 8, 5, 8, 17, 13])
sort([1, 19, 27, 2, 5, 6, 4, 7, 8, 5, 8, 17, 13])
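The invariant that makes the recursion correct (three disjoint bands whose union preserves the multiset) can be checked directly; `partition` is repeated here so the sketch is self-contained:

```python
def partition(p1, p2, L):
    # Three-way split: strictly below p1, between p1 and p2 inclusive, strictly above p2
    if L == []:
        return [], [], []
    x, *R = L
    R1, R2, R3 = partition(p1, p2, R)
    if x < p1:
        return [x] + R1, R2, R3
    if x <= p2:
        return R1, [x] + R2, R3
    return R1, R2, [x] + R3

data = [1, 19, 27, 2, 5, 6, 4, 7, 8, 5, 8, 17, 13]
L1, L2, L3 = partition(5, 13, data)
# every element lands in exactly one band and no element is lost or duplicated
```

When the two sampled pivots are equal, the middle band contains only copies of that pivot, which is why `sort` can return `L2` unsorted in that branch.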
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The function partition receives three arguments
|
2,732
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
Xtrain = pd.read_csv('./data/multivariate_tr.csv')
Xtrain.head()
Xtrain.describe()
Xtest = pd.read_csv('./data/multivariate_ts.csv')
Xtest.head()
Xtest.describe()
from scipy.stats import multivariate_normal
def check_mult_norm(data, levels):
cov = np.cov(data.values.T)
mu = np.mean(data).values
x = np.linspace(min(data.iloc[:, 0])-0.5, max(data.iloc[:, 0])+0.5, 1000)
y = np.linspace(min(data.iloc[:, 1])-0.5, max(data.iloc[:, 1])+0.5, 1000)
x, y = np.meshgrid(x, y)
mult = multivariate_normal(mean=mu, cov=cov)
z = mult.pdf(np.array([x.ravel(), y.ravel()]).T).reshape(x.shape)
plt.figure(figsize=(16, 10))
plt.scatter(data.iloc[:, 0], data.iloc[:, 1])
plt.contour(x, y, z, levels=levels)  # use the levels argument instead of a hard-coded range
check_mult_norm(Xtrain, np.arange(0, 0.5, 0.005))
check_mult_norm(Xtest, np.arange(0, 0.5, 0.005))
def calc_rolling_var(data, variable, variable_pos):
rolling_mean = data[variable].cumsum() / np.arange(1, len(data)+1)
rolling_var = []
for i in np.arange(2, len(data)+1):
rolling_var.append(np.sum((data.iloc[:i, variable_pos] - rolling_mean[i-1])**2) / i)
return rolling_mean, np.array(rolling_var)
rolling_mean_x1, rolling_var_x1 = calc_rolling_var(Xtrain, 'X1', 0)
rolling_mean_x2, rolling_var_x2 = calc_rolling_var(Xtrain, 'X2', 1)
def plot_rollings(data1, data2, window, title, xlab, ylab1, ylab2):
fig, ax = plt.subplots(2, 1, figsize=(16, 15))
ax[0].plot(data1[window])
ax[0].set_xlabel(xlab)
ax[0].set_ylabel(ylab1)
ax[0].set_title(title, size=25)
ax[1].plot(data2[window])
ax[1].set_xlabel(xlab)
ax[1].set_ylabel(ylab2)
plot_rollings(rolling_mean_x1, rolling_mean_x2, np.arange(len(Xtrain)), 'Rolling Mean over Time', 'Epochs', 'X1 Measurements', 'X2 Measurements');
plot_rollings(rolling_var_x1, rolling_var_x2, np.arange(len(Xtrain)-1), 'Rolling Variance over Time', 'Epochs', 'X1 Measurements', 'X2 Measurements');
plot_rollings(rolling_mean_x1, rolling_mean_x2, np.arange(3000, 3101), 'Rolling Mean over Time', 'Epochs', 'X1 Measurements', 'X2 Measurements')
rolling_mean = []
# NB: for i < 10 the slice start i-10 goes negative, so iloc returns a short
# or empty window and the first few entries of this rolling mean are unreliable.
for i in np.arange(1, len(Xtrain)+1):
rolling_mean.append(np.sum(Xtrain.iloc[i-10:i, 0]) / 10)
rolling_mean = np.array(rolling_mean)
rolling_mean[10:20]
def calc_rolling_var_windowed(data, variable_pos, window):
# Trailing-window mean/variance; for i < window the slice start i-window goes
# negative, so the earliest values are computed from too few points.
rolling_mean = []
for i in np.arange(1, len(data)+1):
rolling_mean.append(np.sum(data.iloc[i-window:i, variable_pos]) / window)
rolling_mean = np.array(rolling_mean)
rolling_var = []
for i in np.arange(2, len(data)+1):
rolling_var.append(np.sum((data.iloc[i-window:i, variable_pos] - rolling_mean[i-1])**2) / window)
return rolling_mean, np.array(rolling_var)
rolling_mean_x1_1000, rolling_var_x1_1000 = calc_rolling_var_windowed(Xtrain, 0, 1000)
rolling_mean_x2_1000, rolling_var_x2_1000 = calc_rolling_var_windowed(Xtrain, 1, 1000)
plot_rollings(rolling_mean_x1_1000, rolling_mean_x2_1000, np.arange(len(rolling_mean_x1_1000)), 'Rolling Mean over Time', 'Epochs', 'X1 Measurements', 'X2 Measurements');
plot_rollings(rolling_var_x1_1000, rolling_var_x2_1000, np.arange(len(rolling_var_x1_1000)), 'Rolling Variance over Time', 'Epochs', 'X1 Measurements', 'X2 Measurements');
def estd_multi(data):
cov = np.cov(data.values.T)
mu = np.mean(data).values
return multivariate_normal(mean=mu, cov=cov)
def is_outlier(obs, distr, threshold):
prob = distr.pdf(obs)
if prob < threshold:
return 1, prob
else:
return 0, prob
multi = estd_multi(Xtrain.iloc[:100, ])
print(is_outlier(Xtrain.iloc[152, ], multi, threshold=0.01))
print(is_outlier(Xtrain.iloc[1523, ], multi, threshold=0.01))
def live_feed(stream_data, past_data, window, threshold = 0.0001):
is_out = []
for i in np.arange(len(stream_data)):
data = past_data.iloc[-window:, ]
multi = estd_multi(data)
flag, _ = is_outlier(stream_data.iloc[i, ], multi, threshold)
is_out.append(flag)
past_data = past_data.append(stream_data.iloc[i, ])
return is_out, past_data
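The sliding-window idea in `live_feed` above can be sketched as a self-contained example: refit a Gaussian on the most recent `window` points and flag any new observation whose density falls below a threshold. Everything below (the `stream_outliers` name, the synthetic data, and the seed) is illustrative and not part of the original notebook.

```python
import numpy as np
from scipy.stats import multivariate_normal

def stream_outliers(stream, history, window=100, threshold=1e-4):
    # For each streamed observation: fit a multivariate normal to the last
    # `window` points of history, flag low-density points, then append the
    # observation so later fits can use it.
    flags = []
    for obs in stream:
        recent = history[-window:]
        dist = multivariate_normal(mean=recent.mean(axis=0),
                                   cov=np.cov(recent.T))
        flags.append(dist.pdf(obs) < threshold)
        history = np.vstack([history, obs])
    return flags, history

rng = np.random.default_rng(0)
hist = rng.normal(size=(200, 2))                          # past data ~ N(0, I)
stream = np.vstack([rng.normal(size=(20, 2)),
                    [[8.0, 8.0]]])                        # last point is far out
flags, _ = stream_outliers(stream, hist)
```

The final point at (8, 8) sits many standard deviations from the fitted mean, so its density is far below the threshold and it is flagged.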
past = Xtest.iloc[:100, ]
is_out, past = live_feed(Xtest.iloc[100:200, ], past, 10)
past.iloc[np.append(np.repeat(False, 100), (np.array(is_out)))==1, ]
is_out, past = live_feed(Xtest.iloc[200:300, ], past, 10)
past.iloc[np.append(np.repeat(False, 200), (np.array(is_out)))==1, ]
past = Xtest.iloc[:1000, ]
is_out, _ = live_feed(Xtest.iloc[1000:, ], past, 1000)
np.append(np.repeat(False, 1000), (np.array(is_out)))[1000:1100]
plt.figure(figsize=(16, 10))
plt.scatter(Xtest.iloc[np.append(np.repeat(False, 1000), (np.array(is_out)))==0, 0],
Xtest.iloc[np.append(np.repeat(False, 1000), (np.array(is_out)))==0, 1], marker='o')
plt.scatter(Xtest.iloc[np.append(np.repeat(False, 1000), (np.array(is_out)))==1, 0],
Xtest.iloc[np.append(np.repeat(False, 1000), (np.array(is_out)))==1, 1], marker='x')
# The commented-out loop below would draw ten local window distributions; as
# written, only the contour of a multivariate normal fit to the full test set is drawn.
x = np.linspace(min(Xtest.iloc[:, 0])-0.5, max(Xtest.iloc[:, 0])+0.5, 1000)
y = np.linspace(min(Xtest.iloc[:, 1])-0.5, max(Xtest.iloc[:, 1])+0.5, 1000)
x, y = np.meshgrid(x, y)
# for i in range(10):
# cov = np.cov(Xtest.iloc[i*2550:i*2550+10, ].values.T)
# mu = np.mean(Xtest.iloc[i*2550:i*2550+10, ]).values
# mult = multivariate_normal(mean=mu, cov=cov)
# z = mult.pdf(np.array([x.ravel(), y.ravel()]).T).reshape(x.shape)
# plt.contour(x, y, z, levels=[0.0001]);
cov = np.cov(Xtest.values.T)
mu = np.mean(Xtest).values
mult = multivariate_normal(mean=mu, cov=cov)
z = mult.pdf(np.array([x.ravel(), y.ravel()]).T).reshape(x.shape)
plt.contour(x, y, z);
errors = pd.read_csv('./data/errors.csv')
errors.head()
errors.describe()
threshold = 0.01
n = 1000
p = 5/1000
n_blocks = int(np.ceil(len(errors) / 1000))
from scipy.stats import binom
distr = binom(p=p, n=n)
blocks = []
is_out = []
for i in np.arange(n_blocks):
curr_count = errors[n*i:n*(i+1)].sum()[0]
blocks.append(curr_count)
is_out.append(distr.pmf(curr_count) < threshold)
print(is_out[:10])
print(blocks[:10])
fig, ax = plt.subplots(figsize=(16, 10))
ax.bar(np.arange(n_blocks), blocks, color=['red' if out else 'blue' for out in is_out])
ax.set_title('Error per Block of 1000 Obs')
ax.set_xlabel('Block No.')
ax.set_ylabel('Errors')
ax.set_xlim(-1.5, n_blocks+0.5)
ax.hlines(10.5, -1.5, n_blocks+0.5, color='r', alpha=0.7, linestyles='--')
ax.xaxis.set_major_locator(plt.MaxNLocator(20));
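The per-block test above can be sketched in isolation: a block of `n` Bernoulli(`p`) trials is flagged when the probability mass of its observed error count drops below the threshold. The counts below are made up for illustration.

```python
from scipy.stats import binom

n, p, threshold = 1000, 5 / 1000, 0.01   # same parameters as the notebook
distr = binom(n=n, p=p)                  # expected errors per block: n*p = 5

counts = [4, 5, 6, 15, 0]                # hypothetical per-block error totals
flags = [distr.pmf(c) < threshold for c in counts]
```

Counts near the expected value of 5 keep substantial probability mass and pass, while both an unusually high count (15) and an unusually low one (0) fall below the threshold and are flagged.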
iris_train = pd.read_csv('./data/iris_train.csv')
iris_test = pd.read_csv('./data/iris_test.csv')
iris_train.head()
iris_train.describe()
from sklearn.cluster import KMeans
km = KMeans(n_clusters=3)
km.fit(iris_train.iloc[:, :4])
print(km.labels_)
print(km.cluster_centers_)
centers = np.array([list(km.cluster_centers_[lab]) for lab in km.labels_])
distances = np.sqrt(np.sum((iris_train.iloc[:, :4] - centers)**2, axis=1))
percentiles = np.array([np.percentile(distances[km.labels_==i], 98) for i in range(3)])
percentiles
pred = km.predict(iris_test.iloc[:, :4])
centers = np.array([list(km.cluster_centers_[lab]) for lab in pred])
distances = np.sqrt(np.sum((iris_test.iloc[:, :4] - centers)**2, axis=1))
is_out = [distances[i] > percentiles[lab] for i, lab in enumerate(pred)]
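The distance-percentile rule used above can be shown on synthetic data: points farther from their assigned centroid than that cluster's 98th-percentile training distance are treated as outliers. The blobs, seed, and test points below are illustrative only.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
train = np.vstack([rng.normal(0, 0.3, (100, 2)),      # blob around (0, 0)
                   rng.normal(5, 0.3, (100, 2))])     # blob around (5, 5)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(train)

# 98th-percentile distance to the assigned centroid, per cluster
d_train = np.linalg.norm(train - km.cluster_centers_[km.labels_], axis=1)
cutoffs = np.array([np.percentile(d_train[km.labels_ == k], 98)
                    for k in range(2)])

test = np.array([[0.1, -0.1], [5.1, 4.9], [2.5, 2.5]])  # last point lies between blobs
lab = km.predict(test)
d_test = np.linalg.norm(test - km.cluster_centers_[lab], axis=1)
is_out = d_test > cutoffs[lab]
```

The two points near the blob centers fall inside their cluster's cutoff, while the mid-point between the blobs exceeds it and is flagged.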
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(16, 10))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(iris_test.iloc[:, 0],
iris_test.iloc[:, 1],
iris_test.iloc[:, 2],
color=['red' if out else 'blue' for out in is_out]);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1 - Check that the training data is suitable for a multivariate modeling approach (multivariate_tr.csv & multivariate_ts.csv)
Step2: 2 - Check for any drift in the training data mean and std over time
Step3: I think $N=10$ is a good window...
Step4: 3 - Create the following functions
Step5: 4 - Create a function to simulate a live feed (a stream) of the test data
Step6: 5 - Create a single plot that has
Step7: 6 - How do you think the threshold could be optimized?
Step8: 2 - Create a script that will simulate this data as a feed
Step9: 3 - Create a plot with the following properties
Step10: Anomaly Detection Using Cluster Analysis
Step11: 1 - Fit the training data using k-means clustering (without labels) using a number a clusters determined by the label values
Step12: 2 - Get the 98th percentile for distances from cluster centers for the training data
Step13: 3 - Use these percentiles to determine outliers in the test data and create a 3d plot using the first three attributes and highlight outliers
|
2,733
|
<ASSISTANT_TASK:>
Python Code:
from symbulate import *
%matplotlib inline
cards = ['club', 'diamond', 'heart', 'spade'] * 4 # 4 cards of each suit
len(cards)
P = BoxModel(cards, size=2, replace=False, order_matters=True)
P.draw()
sims = P.sim(10000)
sims
sims = P.sim(10000)
sims.tabulate()
def first_is_heart(x):
return (x[0] == 'heart')
first_is_heart(('heart', 'club'))
first_is_heart(('club', 'heart'))
sims_first_is_heart = sims.filter(first_is_heart)
sims_first_is_heart.tabulate()
len(sims_first_is_heart) / len(sims)
# Type your Symbulate commands in this cell.
# Type your Symbulate commands in this cell.
n = 10
prizes = list(range(n))
prizes
def number_collected(x):
return len(set(x))
# For example
number_collected([2, 1, 2, 0, 2, 2, 0])
n = 3
prizes = list(range(n))
prizes
# Type your Symbulate commands in this cell.
# Type your Symbulate commands in this cell.
# Type your Symbulate commands in this cell.
# Type your Symbulate commands in this cell.
# Type your Symbulate commands in this cell.
# Type your relevant code in this cell for 0.5
# Type your relevant code in this cell for 0.9
# Type your relevant code in this cell for 0.99
# Type your Symbulate commands in this cell.
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part I. Introduction to Symbulate, and conditional versus unconditional probability
Step2: Now we define a BoxModel probability space corresponding to drawing two cards (size=2) from the deck at random. We'll assume that the cards are drawn without replacement (replace=False). We also want to keep track of which card was drawn first and which second (order_matters=True).
Step3: The .draw() method simulates a single outcome from the probability space. Note that each outcome is an ordered pair of cards.
Step4: Many outcomes can be simulated using .sim(). The following simulates 10000 draws and stores the results in the variable sims.
Step5: We can summarize the simulation results with .tabulate(). Note that ('heart', 'club') is counted as a separate outcome than ('club', 'heart') because the order matters.
Step6: The above table could be used to estimate the probabilities in question. Instead, we will illustrate several other tools available in Symbulate to summarize simulation output.
Step7: Now we filter the simulated outcomes to create the subset of outcomes for which first_is_heart returns True.
Step8: Returning to question 1, we can estimate the probability that the first card is a heart by dividing the number of simulated draws for which the first card is a heart divided by the total number of simulated draws (using the length function len to count.)
Step9: The true probability is 4/16 = 0.25. Your simulated probability should be close to 0.25, but there will be some natural variability due to the randomness in the simulation. Very roughly, the margin of error of a probability estimate based on $N$ simulated repetitions is about $1/\sqrt{N}$, so about 0.01 for 10000 repetitions. The interval constructed by adding $\pm 0.01$ to your estimate will likely contain 0.25.
Step10: b)
Step11: c)
Step12: And here is a function that returns the number of distinct prizes collected among a set of prizes.
Step13: Aside from the above functions, you should use Symbulate commands exclusively for Part II.
Step14: a)
Step15: b)
Step16: c)
Step17: d)
Step18: Problem 2.
Step19: Problem 3.
Step20: Problem 4.
|
2,734
|
<ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.2,<2.3"
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('lc', dataset='lc01')
b.add_dataset('mesh', times=[0], columns=['intensities*'])
print(b['gravb_bol'])
print(b['gravb_bol@primary'])
print(b.run_checks())
b['teff@primary'] = 8500
b['gravb_bol@primary'] = 0.8
print(b.run_checks())
b['teff@primary'] = 7000
b['gravb_bol@primary'] = 0.2
print(b.run_checks())
b['teff@primary'] = 6000
b['gravb_bol@primary'] = 1.0
print(b.run_checks())
b['teff@primary'] = 6000
b['gravb_bol@primary'] = 0.32
b.run_compute(model='gravb_bol_32')
afig, mplfig = b['primary@mesh01@gravb_bol_32'].plot(fc='intensities', ec='None', show=True)
b['gravb_bol@primary'] = 1.0
b.run_compute(model='gravb_bol_10')
afig, mplfig = b['primary@mesh01@gravb_bol_10'].plot(fc='intensities', ec='None', show=True)
np.nanmax((b.get_value('intensities', component='primary', model='gravb_bol_32') - b.get_value('intensities', component='primary', model='gravb_bol_10'))/b.get_value('intensities', component='primary', model='gravb_bol_10'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Relevant Parameters
Step3: If you have a logger enabled, PHOEBE will print a warning if the value of gravb_bol is outside the "suggested" ranges. Note that this is strictly a warning, and will never turn into an error at b.run_compute().
Step4: Influence on Intensities
Step5: Comparing these two plots, it is essentially impossible to notice any difference between the two models. But if we compare the intensities directly, we can see that there is a subtle difference, with a maximum difference of about 3%.
|
2,735
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'uhh', 'sandbox-1', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Mmr
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Step59: 14.2. Shortwave Bands
Step60: 14.3. Longwave Bands
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Step62: 15.2. Twomey
Step63: 15.3. Twomey Minimum Ccn
Step64: 15.4. Drizzle
Step65: 15.5. Cloud Lifetime
Step66: 15.6. Longwave Bands
Step67: 16. Model
Step68: 16.2. Processes
Step69: 16.3. Coupling
Step70: 16.4. Gas Phase Precursors
Step71: 16.5. Scheme Type
Step72: 16.6. Bulk Scheme Species
|
2,736
|
<ASSISTANT_TASK:>
Python Code:
from theano.sandbox import cuda
%matplotlib inline
import utils; reload(utils)
from utils import *
from __future__ import division, print_function
path = get_file('nietzsche.txt', origin="https://s3.amazonaws.com/text-datasets/nietzsche.txt")
text = open(path).read()
print('corpus length:', len(text))
chars = sorted(list(set(text)))
vocab_size = len(chars)+1
print('total chars:', vocab_size)
chars.insert(0, "\0")
''.join(chars[1:-6])
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
idx = [char_indices[c] for c in text]
idx[:10]
''.join(indices_char[i] for i in idx[:70])
cs=3
c1_dat = [idx[i] for i in xrange(0, len(idx)-1-cs, cs)]
c2_dat = [idx[i+1] for i in xrange(0, len(idx)-1-cs, cs)]
c3_dat = [idx[i+2] for i in xrange(0, len(idx)-1-cs, cs)]
c4_dat = [idx[i+3] for i in xrange(0, len(idx)-1-cs, cs)]
x1 = np.stack(c1_dat[:-2])
x2 = np.stack(c2_dat[:-2])
x3 = np.stack(c3_dat[:-2])
y = np.stack(c4_dat[:-2])
x1[:4], x2[:4], x3[:4]
y[:4]
x1.shape, y.shape
n_fac = 42
def embedding_input(name, n_in, n_out):
inp = Input(shape=(1,), dtype='int64', name=name)
emb = Embedding(n_in, n_out, input_length=1)(inp)
return inp, Flatten()(emb)
c1_in, c1 = embedding_input('c1', vocab_size, n_fac)
c2_in, c2 = embedding_input('c2', vocab_size, n_fac)
c3_in, c3 = embedding_input('c3', vocab_size, n_fac)
n_hidden = 256
dense_in = Dense(n_hidden, activation='relu')
c1_hidden = dense_in(c1)
dense_hidden = Dense(n_hidden, activation='tanh')
c2_dense = dense_in(c2)
hidden_2 = dense_hidden(c1_hidden)
c2_hidden = merge([c2_dense, hidden_2])
c3_dense = dense_in(c3)
hidden_3 = dense_hidden(c2_hidden)
c3_hidden = merge([c3_dense, hidden_3])
dense_out = Dense(vocab_size, activation='softmax')
c4_out = dense_out(c3_hidden)
model = Model([c1_in, c2_in, c3_in], c4_out)
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
model.optimizer.lr=0.000001
model.fit([x1, x2, x3], y, batch_size=64, nb_epoch=4, verbose=2)
model.optimizer.lr=0.01
model.fit([x1, x2, x3], y, batch_size=64, nb_epoch=4, verbose=2)
model.optimizer.lr = 0.000001
model.fit([x1, x2, x3], y, batch_size=64, nb_epoch=4, verbose=2)
model.optimizer.lr = 0.01
model.fit([x1, x2, x3], y, batch_size=64, nb_epoch=4, verbose=2)
def get_next(inp):
idxs = [char_indices[c] for c in inp]
arrs = [np.array(i)[np.newaxis] for i in idxs]
p = model.predict(arrs)
i = np.argmax(p)
return chars[i]
get_next('phi')
get_next(' th')
get_next(' an')
cs=8
c_in_dat = [[idx[i+n] for i in xrange(0, len(idx)-1-cs, cs)]
for n in range(cs)]
c_out_dat = [idx[i+cs] for i in xrange(0, len(idx)-1-cs, cs)]
xs = [np.stack(c[:-2]) for c in c_in_dat]
len(xs), xs[0].shape
y = np.stack(c_out_dat[:-2])
[xs[n][:cs] for n in range(cs)]
y[:cs]
n_fac = 42
def embedding_input(name, n_in, n_out):
inp = Input(shape=(1,), dtype='int64', name=name+'_in')
emb = Embedding(n_in, n_out, input_length=1, name=name+'_emb')(inp)
return inp, Flatten()(emb)
c_ins = [embedding_input('c'+str(n), vocab_size, n_fac) for n in range(cs)]
n_hidden = 256
dense_in = Dense(n_hidden, activation='relu')
dense_hidden = Dense(n_hidden, activation='relu', init='identity')
dense_out = Dense(vocab_size, activation='softmax')
hidden = dense_in(c_ins[0][1])
for i in range(1,cs):
c_dense = dense_in(c_ins[i][1])
hidden = dense_hidden(hidden)
hidden = merge([c_dense, hidden])
c_out = dense_out(hidden)
model = Model([c[0] for c in c_ins], c_out)
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
model.fit(xs, y, batch_size=64, nb_epoch=12, verbose=2)
def get_next(inp):
idxs = [np.array(char_indices[c])[np.newaxis] for c in inp]
p = model.predict(idxs)
return chars[np.argmax(p)]
get_next('for thos')
get_next('part of ')
get_next('queens a')
n_hidden, n_fac, cs, vocab_size = (256, 42, 8, 86)
model=Sequential([
Embedding(vocab_size, n_fac, input_length=cs),
SimpleRNN(n_hidden, activation='relu', inner_init='identity'),
Dense(vocab_size, activation='softmax')
])
model.summary()
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
model.fit(np.concatenate(xs,axis=1), y, batch_size=64, nb_epoch=8, verbose=2)
def get_next_keras(inp):
idxs = [char_indices[c] for c in inp]
arrs = np.array(idxs)[np.newaxis,:]
p = model.predict(arrs)[0]
return chars[np.argmax(p)]
get_next_keras('this is ')
get_next_keras('part of ')
get_next_keras('queens a')
#c_in_dat = [[idx[i+n] for i in xrange(0, len(idx)-1-cs, cs)]
# for n in range(cs)]
c_out_dat = [[idx[i+n] for i in xrange(1, len(idx)-cs, cs)]
for n in range(cs)]
ys = [np.stack(c[:-2]) for c in c_out_dat]
[xs[n][:cs] for n in range(cs)]
[ys[n][:cs] for n in range(cs)]
dense_in = Dense(n_hidden, activation='relu')
dense_hidden = Dense(n_hidden, activation='relu', init='identity')
dense_out = Dense(vocab_size, activation='softmax', name='output')
inp1 = Input(shape=(n_fac,), name='zeros')
hidden = dense_in(inp1)
outs = []
for i in range(cs):
c_dense = dense_in(c_ins[i][1])
hidden = dense_hidden(hidden)
hidden = merge([c_dense, hidden], mode='sum')
# every layer now has an output
outs.append(dense_out(hidden))
model = Model([inp1] + [c[0] for c in c_ins], outs)
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
zeros = np.tile(np.zeros(n_fac), (len(xs[0]),1))
zeros.shape
model.fit([zeros]+xs, ys, batch_size=64, nb_epoch=12, verbose=2)
def get_nexts(inp):
idxs = [char_indices[c] for c in inp]
arrs = [np.array(i)[np.newaxis] for i in idxs]
p = model.predict([np.zeros(n_fac)[np.newaxis,:]] + arrs)
print(list(inp))
return [chars[np.argmax(o)] for o in p]
get_nexts(' this is')
get_nexts(' part of')
n_hidden, n_fac, cs, vocab_size
model=Sequential([
Embedding(vocab_size, n_fac, input_length=cs),
SimpleRNN(n_hidden, return_sequences=True, activation='relu', inner_init='identity'),
TimeDistributed(Dense(vocab_size, activation='softmax')),
])
model.summary()
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
xs[0].shape
x_rnn=np.stack(xs, axis=1)
y_rnn=np.expand_dims(np.stack(ys, axis=1), -1)
x_rnn.shape, y_rnn.shape
model.fit(x_rnn[:,:,0], y_rnn[:,:,0], batch_size=64, nb_epoch=8, verbose=2)
def get_nexts_keras(inp):
idxs = [char_indices[c] for c in inp]
arr = np.array(idxs)[np.newaxis,:]
p = model.predict(arr)[0]
print(list(inp))
return [chars[np.argmax(o)] for o in p]
get_nexts_keras(' this is')
model=Sequential([
SimpleRNN(n_hidden, return_sequences=True, input_shape=(cs, vocab_size),
activation='relu', inner_init='identity'),
TimeDistributed(Dense(vocab_size, activation='softmax')),
])
model.compile(loss='categorical_crossentropy', optimizer=Adam())
oh_ys = [to_categorical(o, vocab_size) for o in ys]
oh_y_rnn=np.stack(oh_ys, axis=1)
oh_xs = [to_categorical(o, vocab_size) for o in xs]
oh_x_rnn=np.stack(oh_xs, axis=1)
oh_x_rnn.shape, oh_y_rnn.shape
model.fit(oh_x_rnn, oh_y_rnn, batch_size=64, nb_epoch=8, verbose=2)
def get_nexts_oh(inp):
idxs = np.array([char_indices[c] for c in inp])
arr = to_categorical(idxs, vocab_size)
p = model.predict(arr[np.newaxis,:])[0]
print(list(inp))
return [chars[np.argmax(o)] for o in p]
get_nexts_oh(' this is')
bs=64
model=Sequential([
Embedding(vocab_size, n_fac, input_length=cs, batch_input_shape=(bs,8)),
BatchNormalization(),
LSTM(n_hidden, return_sequences=True, stateful=True),
TimeDistributed(Dense(vocab_size, activation='softmax')),
])
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
mx = len(x_rnn)//bs*bs
model.fit(x_rnn[:mx, :, 0], y_rnn[:mx, :, :, 0], batch_size=bs, nb_epoch=4, shuffle=False, verbose=2)
model.optimizer.lr=1e-4
model.fit(x_rnn[:mx, :, 0], y_rnn[:mx, :, :, 0], batch_size=bs, nb_epoch=4, shuffle=False, verbose=2)
model.fit(x_rnn[:mx, :, 0], y_rnn[:mx, :, :, 0], batch_size=bs, nb_epoch=4, shuffle=False, verbose=2)
n_input = vocab_size
n_output = vocab_size
def init_wgts(rows, cols):
scale = math.sqrt(2/rows)
return shared(normal(scale=scale, size=(rows, cols)).astype(np.float32))
def init_bias(rows):
return shared(np.zeros(rows, dtype=np.float32))
def wgts_and_bias(n_in, n_out):
return init_wgts(n_in, n_out), init_bias(n_out)
def id_and_bias(n):
return shared(np.eye(n, dtype=np.float32)), init_bias(n)
t_inp = T.matrix('inp')
t_outp = T.matrix('outp')
t_h0 = T.vector('h0')
lr = T.scalar('lr')
all_args = [t_h0, t_inp, t_outp, lr]
W_h = id_and_bias(n_hidden)
W_x = wgts_and_bias(n_input, n_hidden)
W_y = wgts_and_bias(n_hidden, n_output)
w_all = list(chain.from_iterable([W_h, W_x, W_y]))
def step(x, h, W_h, b_h, W_x, b_x, W_y, b_y):
# Calculate the hidden activations
h = nnet.relu(T.dot(x, W_x) + b_x + T.dot(h, W_h) + b_h)
# Calculate the output activations
y = nnet.softmax(T.dot(h, W_y) + b_y)
# Return both (the 'Flatten()' is to work around a theano bug)
return h, T.flatten(y, 1)
[v_h, v_y], _ = theano.scan(step, sequences=t_inp,
outputs_info=[t_h0, None], non_sequences=w_all)
error = nnet.categorical_crossentropy(v_y, t_outp).sum()
g_all = T.grad(error, w_all)
def upd_dict(wgts, grads, lr):
return OrderedDict({w: w-g*lr for (w,g) in zip(wgts,grads)})
upd = upd_dict(w_all, g_all, lr)
fn = theano.function(all_args, error, updates=upd, allow_input_downcast=True)
X = oh_x_rnn
Y = oh_y_rnn
X.shape, Y.shape
err=0.0; l_rate=0.01
for i in range(len(X)):
err+=fn(np.zeros(n_hidden), X[i], Y[i], l_rate)
if i % 1000 == 999:
print ("Error:{:.3f}".format(err/1000))
err=0.0
f_y = theano.function([t_h0, t_inp], v_y, allow_input_downcast=True)
pred = np.argmax(f_y(np.zeros(n_hidden), X[6]), axis=1)
act = np.argmax(X[6], axis=1)
[indices_char[o] for o in act]
[indices_char[o] for o in pred]
def sigmoid(x): return 1/(1+np.exp(-x))
def sigmoid_d(x):
output = sigmoid(x)
return output*(1-output)
def relu(x): return np.maximum(0., x)
def relu_d(x): return (x > 0.)*1.
relu(np.array([3.,-3.])), relu_d(np.array([3.,-3.]))
def dist(a,b): return pow(a-b,2)
def dist_d(a,b): return 2*(a-b)
import pdb
eps = 1e-7
def x_entropy(pred, actual):
return -np.sum(actual * np.log(np.clip(pred, eps, 1-eps)))
def x_entropy_d(pred, actual): return -actual/pred
def softmax(x): return np.exp(x)/np.exp(x).sum()
def softmax_d(x):
sm = softmax(x)
res = np.expand_dims(-sm,-1)*sm
res[np.diag_indices_from(res)] = sm*(1-sm)
return res
test_preds = np.array([0.2,0.7,0.1])
test_actuals = np.array([0.,1.,0.])
nnet.categorical_crossentropy(test_preds, test_actuals).eval()
x_entropy(test_preds, test_actuals)
test_inp = T.dvector()
test_out = nnet.categorical_crossentropy(test_inp, test_actuals)
test_grad = theano.function([test_inp], T.grad(test_out, test_inp))
test_grad(test_preds)
x_entropy_d(test_preds, test_actuals)
pre_pred = random(oh_x_rnn[0][0].shape)
preds = softmax(pre_pred)
actual = oh_x_rnn[0][0]
loss_d=x_entropy_d
np.allclose(softmax_d(pre_pred).dot(loss_d(preds,actual)), preds-actual)
softmax(test_preds)
nnet.softmax(test_preds).eval()
test_out = T.flatten(nnet.softmax(test_inp))
test_grad = theano.function([test_inp], theano.gradient.jacobian(test_out, test_inp))
test_grad(test_preds)
softmax_d(test_preds)
act=relu
act_d = relu_d
loss=x_entropy
def scan(fn, start, seq):
res = []
prev = start
for s in seq:
app = fn(prev, s)
res.append(app)
prev = app
return res
scan(lambda prev,curr: prev+curr, 0, range(5))
inp = oh_x_rnn
outp = oh_y_rnn
n_input = vocab_size
n_output = vocab_size
inp.shape, outp.shape
def one_char(prev, item):
# Previous state
tot_loss, pre_hidden, pre_pred, hidden, ypred = prev
# Current inputs and output
x, y = item
pre_hidden = np.dot(x,w_x) + np.dot(hidden,w_h)
hidden = act(pre_hidden)
pre_pred = np.dot(hidden,w_y)
ypred = softmax(pre_pred)
return (
# Keep track of loss so we can report it
tot_loss+loss(ypred, y),
# Used in backprop
pre_hidden, pre_pred,
# Used in next iteration
hidden,
# To provide predictions
ypred)
def get_chars(n): return zip(inp[n], outp[n])
def one_fwd(n): return scan(one_char, (0,0,0,np.zeros(n_hidden),0), get_chars(n))
# "Columnify" a vector
def col(x): return x[:,newaxis]
def one_bkwd(args, n):
global w_x,w_y,w_h
i=inp[n] # 8x86
o=outp[n] # 8x86
d_pre_hidden = np.zeros(n_hidden) # 256
for p in reversed(range(len(i))):
totloss, pre_hidden, pre_pred, hidden, ypred = args[p]
x=i[p] # 86
y=o[p] # 86
d_pre_pred = softmax_d(pre_pred).dot(loss_d(ypred,y)) # 86
d_pre_hidden = (np.dot(d_pre_hidden, w_h.T)
+ np.dot(d_pre_pred,w_y.T)) * act_d(pre_hidden) # 256
# d(loss)/d(w_y) = d(loss)/d(pre_pred) * d(pre_pred)/d(w_y)
w_y -= col(hidden) * d_pre_pred * alpha
# d(loss)/d(w_h) = d(loss)/d(pre_hidden[p-1]) * d(pre_hidden[p-1])/d(w_h)
if (p>0): w_h -= args[p-1][3].dot(d_pre_hidden) * alpha
w_x -= col(x)*d_pre_hidden * alpha
return d_pre_hidden
scale=math.sqrt(2./n_input)
w_x = normal(scale=scale, size=(n_input,n_hidden))
w_y = normal(scale=scale, size=(n_hidden, n_output))
w_h = np.eye(n_hidden, dtype=np.float32)
overallError=0
alpha=0.0001
for n in range(10000):
res = one_fwd(n)
overallError+=res[-1][0]
deriv = one_bkwd(res, n)
if(n % 1000 == 999):
print ("Error:{:.4f}; Gradient:{:.5f}".format(
overallError/1000, np.linalg.norm(deriv)))
overallError=0
model=Sequential([
GRU(n_hidden, return_sequences=True, input_shape=(cs, vocab_size),
activation='relu', inner_init='identity'),
TimeDistributed(Dense(vocab_size, activation='softmax')),
])
model.compile(loss='categorical_crossentropy', optimizer=Adam())
model.fit(oh_x_rnn, oh_y_rnn, batch_size=64, nb_epoch=8, verbose=2)
get_nexts_oh(' this is')
W_h = id_and_bias(n_hidden)
W_x = init_wgts(n_input, n_hidden)
W_y = wgts_and_bias(n_hidden, n_output)
rW_h = init_wgts(n_hidden, n_hidden)
rW_x = wgts_and_bias(n_input, n_hidden)
uW_h = init_wgts(n_hidden, n_hidden)
uW_x = wgts_and_bias(n_input, n_hidden)
w_all = list(chain.from_iterable([W_h, W_y, uW_x, rW_x]))
w_all.extend([W_x, uW_h, rW_h])
def gate(x, h, W_h, W_x, b_x):
return nnet.sigmoid(T.dot(x, W_x) + b_x + T.dot(h, W_h))
def step(x, h, W_h, b_h, W_y, b_y, uW_x, ub_x, rW_x, rb_x, W_x, uW_h, rW_h):
reset = gate(x, h, rW_h, rW_x, rb_x)
update = gate(x, h, uW_h, uW_x, ub_x)
h_new = gate(x, h * reset, W_h, W_x, b_h)
h = update*h + (1-update)*h_new
y = nnet.softmax(T.dot(h, W_y) + b_y)
return h, T.flatten(y, 1)
[v_h, v_y], _ = theano.scan(step, sequences=t_inp,
outputs_info=[t_h0, None], non_sequences=w_all)
error = nnet.categorical_crossentropy(v_y, t_outp).sum()
g_all = T.grad(error, w_all)
upd = upd_dict(w_all, g_all, lr)
fn = theano.function(all_args, error, updates=upd, allow_input_downcast=True)
err=0.0; l_rate=0.1
for i in range(len(X)):
err+=fn(np.zeros(n_hidden), X[i], Y[i], l_rate)
if i % 1000 == 999:
l_rate *= 0.95
print ("Error:{:.2f}".format(err/1000))
err=0.0
W = (shared(np.concatenate([np.eye(n_hidden), normal(size=(n_input, n_hidden))])
.astype(np.float32)), init_bias(n_hidden))
rW = wgts_and_bias(n_input+n_hidden, n_hidden)
uW = wgts_and_bias(n_input+n_hidden, n_hidden)
W_y = wgts_and_bias(n_hidden, n_output)
w_all = list(chain.from_iterable([W, W_y, uW, rW]))
def gate(m, W, b): return nnet.sigmoid(T.dot(m, W) + b)
def step(x, h, W, b, W_y, b_y, uW, ub, rW, rb):
m = T.concatenate([h, x])
reset = gate(m, rW, rb)
update = gate(m, uW, ub)
m = T.concatenate([h*reset, x])
h_new = gate(m, W, b)
h = update*h + (1-update)*h_new
y = nnet.softmax(T.dot(h, W_y) + b_y)
return h, T.flatten(y, 1)
[v_h, v_y], _ = theano.scan(step, sequences=t_inp,
outputs_info=[t_h0, None], non_sequences=w_all)
def upd_dict(wgts, grads, lr):
return OrderedDict({w: w-g*lr for (w,g) in zip(wgts,grads)})
error = nnet.categorical_crossentropy(v_y, t_outp).sum()
g_all = T.grad(error, w_all)
upd = upd_dict(w_all, g_all, lr)
fn = theano.function(all_args, error, updates=upd, allow_input_downcast=True)
err=0.0; l_rate=0.01
for i in range(len(X)):
err+=fn(np.zeros(n_hidden), X[i], Y[i], l_rate)
if i % 1000 == 999:
print ("Error:{:.2f}".format(err/1000))
err=0.0
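For reference, the combined-weights GRU step compiled above can be sketched in plain numpy. The gate arithmetic mirrors the theano `step` (gates see the concatenation of hidden state and input, and the candidate state uses a sigmoid, as in the notebook); the sizes and the 0.1 weight scale below are illustrative assumptions, not the notebook's values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def gru_step(x, h, W, b, W_y, b_y, uW, ub, rW, rb):
    """One combined-weights GRU step: reset/update gates over [h, x]."""
    m = np.concatenate([h, x])
    reset = sigmoid(m @ rW + rb)
    update = sigmoid(m @ uW + ub)
    m = np.concatenate([h * reset, x])
    h_new = sigmoid(m @ W + b)            # candidate state (sigmoid, as above)
    h = update * h + (1 - update) * h_new # blend old state and candidate
    return h, softmax(h @ W_y + b_y)

rng = np.random.RandomState(0)
n_input, n_hidden, n_output = 5, 4, 5     # illustrative sizes
W, uW, rW = (rng.randn(n_input + n_hidden, n_hidden) * 0.1 for _ in range(3))
b = ub = rb = np.zeros(n_hidden)
W_y, b_y = rng.randn(n_hidden, n_output) * 0.1, np.zeros(n_output)
h = np.zeros(n_hidden)
x = np.eye(n_input)[0]                    # one-hot input character
h, y = gru_step(x, h, W, b, W_y, b_y, uW, ub, rW, rb)
```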
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup
Step2: Sometimes it's useful to have a zero value in the dataset, e.g. for padding
Step3: Map from chars to indices and back again
Step4: idx will be the data we use from now on - it simply converts all the characters to their index (based on the mapping above)
Step5: 3 char model
Step6: Our inputs
Step7: Our output
Step8: The first 4 inputs and outputs
Step9: The number of latent factors to create (i.e. the size of the embedding matrix)
Step10: Create inputs and embedding outputs for each of our 3 character inputs
Step11: Create and train model
Step12: This is the 'green arrow' from our diagram - the layer operation from input to hidden.
Step13: Our first hidden activation is simply this function applied to the result of the embedding of the first character.
Step14: This is the 'orange arrow' from our diagram - the layer operation from hidden to hidden.
Step15: Our second and third hidden activations add the previous hidden state (after applying dense_hidden) to the new input state.
Step16: This is the 'blue arrow' from our diagram - the layer operation from hidden to output.
Step17: The third hidden state is the input to our output layer.
Step18: Test model
Step19: Our first RNN!
Step20: For each of 0 through 7, create a list of every 8th character with that starting point. These will be the 8 inputs to our model.
Step21: Then create a list of the next character in each of these series. This will be the labels for our model.
Step22: So each column below is one series of 8 characters from the text.
Step23: ...and this is the next character after each sequence.
Step24: Create and train model
Step25: The first character of each sequence goes through dense_in(), to create our first hidden activations.
Step26: Then for each successive layer we combine the output of dense_in() on the next character with the output of dense_hidden() on the current hidden state, to create the new hidden state.
Step27: Putting the final hidden state through dense_out() gives us our output.
Step28: So now we can create our model.
Step29: Test model
Step30: Our first RNN with keras!
Step31: This is nearly exactly equivalent to the RNN we built ourselves in the previous section.
Step32: Returning sequences
Step33: Reading down each column shows one set of inputs and outputs.
Step34: Create and train model
Step35: We're going to pass a vector of all zeros as our starting point - here's our input layers for that
Step36: Test model
Step37: Sequence model with keras
Step38: To convert our previous keras model into a sequence model, simply add the 'return_sequences=True' parameter, and add TimeDistributed() around our dense layer.
Step39: One-hot sequence model with keras
Step40: Stateful model with keras
Step41: A stateful model is easy to create (just add "stateful=True") but harder to train. We had to add batchnorm and use LSTM to get reasonable results.
Step42: Since we're using a fixed batch shape, we have to ensure our inputs and outputs are an even multiple of the batch size.
Step43: Theano RNN
Step44: Using raw theano, we have to create our weight matrices and bias vectors ourselves - here are the functions we'll use to do so (using Glorot initialization).
Step45: We return the weights and biases together as a tuple. For the hidden weights, we'll use an identity initialization (as recommended by Hinton).
Step46: Theano doesn't actually do any computations until we explicitly compile and evaluate the function (at which point it'll be turned into CUDA code and sent off to the GPU). So our job is to describe the computations that we'll want theano to do - the first step is to tell theano what inputs we'll be providing to our computation
Step47: Now we're ready to create our initial weight matrices.
Step48: Theano handles looping by using the GPU scan operation. We have to tell theano what to do at each step through the scan - this is the function we'll use, which does a single forward pass for one character
Step49: Now we can provide everything necessary for the scan operation, so we can setup that up - we have to pass in the function to call at each step, the sequence to step through, the initial values of the outputs, and any other arguments to pass to the step function.
Step50: We can now calculate our loss function, and all of our gradients, with just a couple of lines of code!
Step51: We even have to show theano how to do SGD - so we set up this dictionary of updates to complete after every forward pass, which applies the standard SGD update rule to every weight.
Step52: We're finally ready to compile the function!
Step53: To use it, we simply loop through our input data, calling the function compiled above, and printing our progress from time to time.
Step54: Pure python RNN!
Step55: We also have to define our own scan function. Since we're not worrying about running things in parallel, it's very simple to implement
Step56: ...for instance, scan on + is the cumulative sum.
Step57: Set up training
Step58: Here's the function to do a single forward pass of an RNN, for a single character.
Step59: We use scan to apply the above to a whole sequence of characters.
Step60: Now we can define the backward step. We use a loop to go through every element of the sequence. The derivatives apply the chain rule to each step and accumulate the gradients across the sequence.
Step61: Now we can set up our initial weight matrices. Note that we're not using bias at all in this example, in order to keep things simpler.
Step62: Our loop looks much like the theano loop in the previous section, except that we have to call the backwards step ourselves.
Step63: Keras GRU
Step64: Theano GRU
Step65: Here's the definition of a gate - it's just a sigmoid applied to the sum of the dot products of the input vectors.
Step66: Our step is nearly identical to before, except that we multiply our hidden state by our reset gate, and we update our hidden state based on the update gate.
Step67: Everything from here on is identical to our simple RNN in theano.
Step68: Combined weights
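The per-character forward step the notebook builds (first in theano, then in pure python) boils down to a few lines of numpy. This sketch uses the notebook's names W_x, W_h, W_y and its identity initialization for the hidden weights; the sizes and 0.1 weight scale are illustrative assumptions:

```python
import numpy as np

def rnn_step(x, h, W_x, W_h, W_y):
    """Single RNN step: relu hidden update, softmax over the output classes."""
    h = np.maximum(0.0, x @ W_x + h @ W_h)  # new hidden state
    logits = h @ W_y
    e = np.exp(logits - logits.max())
    return h, e / e.sum()                   # (hidden state, char distribution)

rng = np.random.RandomState(1)
n_input, n_hidden = 8, 6                    # illustrative, not vocab_size=86
W_x = rng.randn(n_input, n_hidden) * 0.1
W_h = np.eye(n_hidden)                      # identity init for hidden weights
W_y = rng.randn(n_hidden, n_input) * 0.1
h = np.zeros(n_hidden)
for t in range(3):                          # scan over a short one-hot sequence
    h, y = rnn_step(np.eye(n_input)[t], h, W_x, W_h, W_y)
```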
|
2,737
|
<ASSISTANT_TASK:>
Python Code:
from search import *
%psource Problem
%psource GraphProblem
romania_map = UndirectedGraph(dict(
Arad=dict(Zerind=75, Sibiu=140, Timisoara=118),
Bucharest=dict(Urziceni=85, Pitesti=101, Giurgiu=90, Fagaras=211),
Craiova=dict(Drobeta=120, Rimnicu=146, Pitesti=138),
Drobeta=dict(Mehadia=75),
Eforie=dict(Hirsova=86),
Fagaras=dict(Sibiu=99),
Hirsova=dict(Urziceni=98),
Iasi=dict(Vaslui=92, Neamt=87),
Lugoj=dict(Timisoara=111, Mehadia=70),
Oradea=dict(Zerind=71, Sibiu=151),
Pitesti=dict(Rimnicu=97),
Rimnicu=dict(Sibiu=80),
Urziceni=dict(Vaslui=142)))
romania_map.locations = dict(
Arad=(91, 492), Bucharest=(400, 327), Craiova=(253, 288),
Drobeta=(165, 299), Eforie=(562, 293), Fagaras=(305, 449),
Giurgiu=(375, 270), Hirsova=(534, 350), Iasi=(473, 506),
Lugoj=(165, 379), Mehadia=(168, 339), Neamt=(406, 537),
Oradea=(131, 571), Pitesti=(320, 368), Rimnicu=(233, 410),
Sibiu=(207, 457), Timisoara=(94, 410), Urziceni=(456, 350),
Vaslui=(509, 444), Zerind=(108, 531))
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
romania_locations = romania_map.locations
print(romania_locations)
%matplotlib inline
import networkx as nx
import matplotlib.pyplot as plt
from matplotlib import lines
from ipywidgets import interact
import ipywidgets as widgets
from IPython.display import display
import time
# initialise a graph
G = nx.Graph()
# use this while labeling nodes in the map
node_labels = dict()
# use this to modify colors of nodes while exploring the graph.
# This is the only dict we send to `show_map(node_colors)` while drawing the map
node_colors = dict()
for n, p in romania_locations.items():
# add nodes from romania_locations
G.add_node(n)
# add nodes to node_labels
node_labels[n] = n
# node_colors to color nodes while exploring romania map
node_colors[n] = "white"
# we'll save the initial node colors to a dict to use later
initial_node_colors = dict(node_colors)
# positions for node labels
node_label_pos = {k:[v[0],v[1]-10] for k,v in romania_locations.items()}
# use this while labeling edges
edge_labels = dict()
# add edges between cities in romania map - UndirectedGraph defined in search.py
for node in romania_map.nodes():
connections = romania_map.get(node)
for connection in connections.keys():
distance = connections[connection]
# add edges to the graph
G.add_edge(node, connection)
# add distances to edge_labels
edge_labels[(node, connection)] = distance
def show_map(node_colors):
# set the size of the plot
plt.figure(figsize=(18,13))
# draw the graph (both nodes and edges) with locations from romania_locations
nx.draw(G, pos = romania_locations, node_color = [node_colors[node] for node in G.nodes()])
# draw labels for nodes
node_label_handles = nx.draw_networkx_labels(G, pos = node_label_pos, labels = node_labels, font_size = 14)
# add a white bounding box behind the node labels
[label.set_bbox(dict(facecolor='white', edgecolor='none')) for label in node_label_handles.values()]
    # add edge labels to the graph
nx.draw_networkx_edge_labels(G, pos = romania_locations, edge_labels=edge_labels, font_size = 14)
# add a legend
white_circle = lines.Line2D([], [], color="white", marker='o', markersize=15, markerfacecolor="white")
orange_circle = lines.Line2D([], [], color="white", marker='o', markersize=15, markerfacecolor="orange")
red_circle = lines.Line2D([], [], color="white", marker='o', markersize=15, markerfacecolor="red")
gray_circle = lines.Line2D([], [], color="white", marker='o', markersize=15, markerfacecolor="gray")
plt.legend((white_circle, orange_circle, red_circle, gray_circle),
('Un-explored', 'Frontier', 'Currently exploring', 'Explored'),
numpoints=1,prop={'size':16}, loc=(.8,.75))
    # show the plot (not strictly needed in notebooks, since nx.draw displays the graph itself)
plt.show()
show_map(node_colors)
def final_path_colors(problem, solution):
"returns a node_colors dict of the final path provided the problem and solution"
# get initial node colors
final_colors = dict(initial_node_colors)
# color all the nodes in solution and starting node to green
final_colors[problem.initial] = "green"
for node in solution:
final_colors[node] = "green"
return final_colors
def display_visual(user_input, algorithm=None, problem=None):
if user_input == False:
def slider_callback(iteration):
            # don't show the graph the first time the cell calling this function runs
try:
show_map(all_node_colors[iteration])
except:
pass
def visualize_callback(Visualize):
if Visualize is True:
button.value = False
global all_node_colors
iterations, all_node_colors, node = algorithm(problem)
solution = node.solution()
all_node_colors.append(final_path_colors(problem, solution))
slider.max = len(all_node_colors) - 1
for i in range(slider.max + 1):
slider.value = i
# time.sleep(.5)
slider = widgets.IntSlider(min=0, max=1, step=1, value=0)
slider_visual = widgets.interactive(slider_callback, iteration = slider)
display(slider_visual)
button = widgets.ToggleButton(value = False)
button_visual = widgets.interactive(visualize_callback, Visualize = button)
display(button_visual)
if user_input == True:
node_colors = dict(initial_node_colors)
if algorithm == None:
algorithms = {"Breadth First Tree Search": breadth_first_tree_search, "Breadth First Search": breadth_first_search, "Uniform Cost Search": uniform_cost_search, "A-star Search": astar_search}
algo_dropdown = widgets.Dropdown(description = "Search algorithm: ", options = sorted(list(algorithms.keys())), value = "Breadth First Tree Search")
display(algo_dropdown)
def slider_callback(iteration):
            # don't show the graph the first time the cell calling this function runs
try:
show_map(all_node_colors[iteration])
except:
pass
def visualize_callback(Visualize):
if Visualize is True:
button.value = False
problem = GraphProblem(start_dropdown.value, end_dropdown.value, romania_map)
global all_node_colors
if algorithm == None:
user_algorithm = algorithms[algo_dropdown.value]
# print(user_algorithm)
# print(problem)
iterations, all_node_colors, node = user_algorithm(problem)
solution = node.solution()
all_node_colors.append(final_path_colors(problem, solution))
slider.max = len(all_node_colors) - 1
for i in range(slider.max + 1):
slider.value = i
# time.sleep(.5)
start_dropdown = widgets.Dropdown(description = "Start city: ", options = sorted(list(node_colors.keys())), value = "Arad")
display(start_dropdown)
end_dropdown = widgets.Dropdown(description = "Goal city: ", options = sorted(list(node_colors.keys())), value = "Fagaras")
display(end_dropdown)
button = widgets.ToggleButton(value = False)
button_visual = widgets.interactive(visualize_callback, Visualize = button)
display(button_visual)
slider = widgets.IntSlider(min=0, max=1, step=1, value=0)
slider_visual = widgets.interactive(slider_callback, iteration = slider)
display(slider_visual)
def tree_search(problem, frontier):
    """Search through the successors of a problem to find a goal.
    The argument frontier should be an empty queue.
    Don't worry about repeated paths to a state. [Figure 3.7]"""
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = dict(initial_node_colors)
frontier.append(Node(problem.initial))
node_colors[Node(problem.initial).state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
while frontier:
node = frontier.pop()
# modify the currently searching node to red
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
# modify goal node to green after reaching the goal
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
frontier.extend(node.expand(problem))
for n in node.expand(problem):
node_colors[n.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
# modify the color of explored nodes to gray
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return None
def breadth_first_tree_search(problem):
"Search the shallowest nodes in the search tree first."
iterations, all_node_colors, node = tree_search(problem, FIFOQueue())
return(iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Fagaras', romania_map)
display_visual(user_input = False, algorithm = breadth_first_tree_search, problem = romania_problem)
def breadth_first_search(problem):
"[Figure 3.11]"
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = dict(initial_node_colors)
node = Node(problem.initial)
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
frontier = FIFOQueue()
frontier.append(node)
    # modify the color of frontier nodes to orange
node_colors[node.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
explored = set()
while frontier:
node = frontier.pop()
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
explored.add(node.state)
for child in node.expand(problem):
if child.state not in explored and child not in frontier:
if problem.goal_test(child.state):
node_colors[child.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, child)
frontier.append(child)
node_colors[child.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return None
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(user_input = False, algorithm = breadth_first_search, problem = romania_problem)
def best_first_graph_search(problem, f):
    """Search the nodes with the lowest f scores first.
    You specify the function f(node) that you want to minimize; for example,
    if f is a heuristic estimate to the goal, then we have greedy best
    first search; if f is node.depth then we have breadth-first search.
    There is a subtlety: the line "f = memoize(f, 'f')" means that the f
    values will be cached on the nodes as they are computed. So after doing
    a best first search you can examine the f values of the path returned."""
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = dict(initial_node_colors)
f = memoize(f, 'f')
node = Node(problem.initial)
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
frontier = PriorityQueue(min, f)
frontier.append(node)
node_colors[node.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
explored = set()
while frontier:
node = frontier.pop()
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
explored.add(node.state)
for child in node.expand(problem):
if child.state not in explored and child not in frontier:
frontier.append(child)
node_colors[child.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
elif child in frontier:
incumbent = frontier[child]
if f(child) < f(incumbent):
del frontier[incumbent]
frontier.append(child)
node_colors[child.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return None
def uniform_cost_search(problem):
"[Figure 3.14]"
iterations, all_node_colors, node = best_first_graph_search(problem, lambda node: node.path_cost)
return(iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(user_input = False, algorithm = uniform_cost_search, problem = romania_problem)
def best_first_graph_search(problem, f):
Search the nodes with the lowest f scores first.
You specify the function f(node) that you want to minimize; for example,
if f is a heuristic estimate to the goal, then we have greedy best
first search; if f is node.depth then we have breadth-first search.
There is a subtlety: the line "f = memoize(f, 'f')" means that the f
values will be cached on the nodes as they are computed. So after doing
a best first search you can examine the f values of the path returned.
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = dict(initial_node_colors)
f = memoize(f, 'f')
node = Node(problem.initial)
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
frontier = PriorityQueue(min, f)
frontier.append(node)
node_colors[node.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
explored = set()
while frontier:
node = frontier.pop()
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
explored.add(node.state)
for child in node.expand(problem):
if child.state not in explored and child not in frontier:
frontier.append(child)
node_colors[child.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
elif child in frontier:
incumbent = frontier[child]
if f(child) < f(incumbent):
del frontier[incumbent]
frontier.append(child)
node_colors[child.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return None
def astar_search(problem, h=None):
A* search is best-first graph search with f(n) = g(n)+h(n).
You need to specify the h function when you call astar_search, or
else in your Problem subclass.
h = memoize(h or problem.h, 'h')
iterations, all_node_colors, node = best_first_graph_search(problem, lambda n: n.path_cost + h(n))
return(iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(user_input = False, algorithm = astar_search, problem = romania_problem)
all_node_colors = []
# display_visual(user_input = True, algorithm = breadth_first_tree_search)
display_visual(user_input = True)
penguin_problem = GraphProblem('Start', 'Penguin', museum_graph)
penguin_ucs_node = uniform_cost_search(penguin_problem)
penguin_ucs_node.solution()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Review
Step2: The Problem class has six methods.
Step3: Now it's time to define our problem. We will define it by passing initial, goal, graph to GraphProblem. So, our problem is to find the goal state starting from the given initial state on the provided graph. Have a look at our romania_map, which is an Undirected Graph containing a dict of nodes as keys and neighbours as values.
Step4: It is pretty straightforward to understand this romania_map. The first node, Arad, has three neighbours named Zerind, Sibiu and Timisoara, which are 75, 140 and 118 units away from Arad respectively. The same goes for the other nodes.
Step5: Romania map visualisation
Step6: Let's start the visualisations by importing necessary modules. We use networkx and matplotlib to show the map in notebook and we use ipywidgets to interact with the map to see how the searching algorithm works.
Step7: Let's get started by initializing an empty graph. We will add nodes, place the nodes in their location as shown in the book, add edges to the graph.
Step8: We have completed building our graph based on romania_map and its locations. It's time to display it here in the notebook. The function show_map(node_colors) helps us do that. We will be calling this function later on to display the map at every intermediate step while searching with a variety of algorithms from the book.
Step9: We can simply call the function with node_colors dictionary object to display it.
Step10: Voila! You see the Romania map as shown in Figure 3.2 of the book. Now, let's see how different search algorithms perform on our problem statement.
Step12: Breadth first tree search
Step13: Now, we use ipywidgets to display a slider, a button and our romania map. By sliding the slider we can have a look at all the intermediate steps of a particular search algorithm. By pressing the button Visualize, you can see all the steps without interacting with the slider. These two helper functions are the callbacks that are called when we interact with the slider and the button.
Step14: Breadth first search
Step16: Uniform cost search
Step19: A* search
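The search strategies above — best-first search with f(n) = g(n) + h(n), which reduces to uniform-cost search when h = 0 — can be sketched standalone in a few lines. This is a minimal illustration assuming a plain dict-of-dicts graph, not the aimacode `GraphProblem`/`PriorityQueue` API used in the notebook; the distances and straight-line heuristics are the book's Romania values:

```python
import heapq

def astar(graph, h, start, goal):
    # Best-first graph search ordered by f(n) = g(n) + h(n).
    # graph: {node: {neighbour: step_cost}}; h: {node: heuristic estimate}.
    frontier = [(h[start], 0, start, [start])]  # (f, g, node, path)
    explored = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in explored:          # skip stale heap entries
            continue
        explored.add(node)
        for nbr, cost in graph[node].items():
            if nbr not in explored:
                heapq.heappush(frontier, (g + cost + h[nbr], g + cost, nbr, path + [nbr]))
    return None

# A toy fragment of the Romania map (distances from the book).
toy_map = {
    'Arad': {'Sibiu': 140, 'Timisoara': 118, 'Zerind': 75},
    'Zerind': {'Arad': 75, 'Oradea': 71},
    'Oradea': {'Zerind': 71, 'Sibiu': 151},
    'Timisoara': {'Arad': 118, 'Lugoj': 111},
    'Lugoj': {'Timisoara': 111},
    'Sibiu': {'Arad': 140, 'Oradea': 151, 'Fagaras': 99, 'Rimnicu': 80},
    'Fagaras': {'Sibiu': 99, 'Bucharest': 211},
    'Rimnicu': {'Sibiu': 80, 'Pitesti': 97},
    'Pitesti': {'Rimnicu': 97, 'Bucharest': 101},
    'Bucharest': {'Fagaras': 211, 'Pitesti': 101},
}
# Straight-line distances to Bucharest (the book's h values).
sld = {'Arad': 366, 'Zerind': 374, 'Oradea': 380, 'Timisoara': 329, 'Lugoj': 244,
       'Sibiu': 253, 'Fagaras': 176, 'Rimnicu': 193, 'Pitesti': 100, 'Bucharest': 0}

print(astar(toy_map, sld, 'Arad', 'Bucharest'))
# → ['Arad', 'Sibiu', 'Rimnicu', 'Pitesti', 'Bucharest'] (cost 418)
```

Uniform-cost search falls out of the same routine by passing a heuristic of all zeros.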
|
2,738
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy.misc import factorial
from scipy.optimize import curve_fit
import scipy.constants as cst
import matplotlib.pyplot as plt
%matplotlib inline
ao = cst.physical_constants["Bohr radius"][0] / cst.angstrom
print("a0 = {} A".format(ao))
def rms(y_th, y):
compute the root mean square between y and y_th values
return np.sqrt(((np.array(y) - np.array(y_th))**2).mean())
def OA4s(r, z=19, ao=ao):
Return values of the radial part of a 4s AO of an hydrogen like specie
Args:
r : float or ndarray of float in the same unit as ao (A by default)
z (int) : atomic number
ao (float): Bohr radius (in A by default)
poly = 24 - 18 * z * r / ao + 3 * z**2 * r**2 / ao**2 - z**3 * r**3 / (8 * ao**3)
return (z / ao)**(3/2) / 96 * poly * np.exp(- z * r / (4 * ao))
def OA3s(r, z=11, ao=ao):
Return values of the radial part of a 3s AO of a hydrogen-like species
Args:
r : float or ndarray of float in the same unit as ao (A by default)
z (int) : atomic number
ao (float): Bohr radius (in A by default)
poly = 6 - 4 * z * r / ao + 4 * z**2 * r**2 / (9 * ao**2)
return (z / ao)**(3/2) / (9. * np.sqrt(3)) * poly * np.exp(-z * r / (3 * ao))
def slater_ns(r, n=3, z=11, ao=ao):
Return values of the radial part of a Slater-type AO of a hydrogen-like species
Args:
r : float or ndarray of float in the same unit as ao (A by default)
n (int): principal quantum number
z (int) : atomic number
ao (float): Bohr radius (in A by default)
fact = 1 / np.sqrt(factorial(2 * n)) * (2 * z / (n * ao))**(n + 0.5)
return fact * r**(n-1) * np.exp(- z * r / (n * ao))
def smooth_3s(r, c=14, alpha=1.9, ao=ao):
Return values of the radial part of a smooth AO, without any oscillation close to the nucleus.
Args:
r : float or ndarray of float in the same unit as ao (A by default)
c (float): amplitude prefactor
alpha (float): exponential decay parameter
ao (float): Bohr radius (in A by default)
return c * np.exp(- alpha * r / ao)
r = np.linspace(0, 1.4, 200)
plt.plot(r, OA3s(r, z=11), "b-", label="OA 3s", lw=2)
#plt.plot(r, smooth_3s(r), "m-", label="3s smooth", lw=2)
ymin, ymax = plt.ylim()
plt.title("Atomic orbitals")
plt.xlabel("r / $\AA$")
plt.ylabel("wavefunction")
plt.legend()
plt.savefig("orbitals_1.pdf")
plt.plot(r, slater_ns(r, n=3, z=11), "r-", label="Slater 3s", lw=2)
plt.vlines(0.5, ymin, ymax, lw=2)
plt.fill_between(x=[0, .5], y1=[ymin, ymin], y2=[ymax, ymax], color="k", alpha=.15)
plt.annotate("Core part", xy=(0.25, 27), horizontalalignment="center")
plt.annotate("Valence part", xy=(0.9, 27), horizontalalignment="center")
plt.savefig("orbitals_2.pdf")
def pw_1D(x, u, box=1, phi=0):
Compute the real part of a 1D plane wave at position x. The length of the box is assume to be a.
Args:
x: float or array of float
u: coordinate of the wave vector in reciprocal space
box: length of the box
phi: phase
return 1 / np.sqrt(box) * np.cos(2 * np.pi * u * x / box + phi)
def e_kinet_pw(u, box=1):
Return the kinetic energy of a plane wave in electron volt defined by its reciprocal space vectors G.
Args:
u (int): plane waves index
g = 2 * np.pi * u / (box * cst.angstrom)
return cst.hbar**2 * g**2 / (2 * cst.m_e) / cst.eV
BOXLENGTH = 5
r = np.linspace(0, BOXLENGTH, 200)
spw = np.zeros(200)
for u in range(1, 9):
pw = pw_1D(r, u, box=BOXLENGTH)
spw += pw
if u in [1, 2, 4, 8]:
plt.plot(r, pw, label="{:5.1f} eV".format(e_kinet_pw(u, box=BOXLENGTH)), lw=1)
plt.plot(r, spw/7, label="sum", lw=3)
plt.title("Plane waves")
plt.xlabel("r / $\AA$")
plt.ylabel("wavefunction")
plt.legend()
plt.savefig("planewaves.pdf")
BOXLENGTH = 5
rmax = 2
r = np.linspace(0, rmax, 100)
def pw_vector(x, *args):
return the linear combination of plane waves
npar = len(args)
return np.sum([args[u] * pw_1D(x, u + 1, box=BOXLENGTH, phi=0) for u in range(npar)], axis=0)
rp = np.linspace(0, rmax, 50)
for umax in [1, 5, 10, 20, 40, 50]:
popt, pcov = curve_fit(pw_vector, r, OA3s(r), p0=np.ones(umax))
ek = e_kinet_pw(umax, box=BOXLENGTH)
valrms = rms(OA3s(r), pw_vector(r, *popt))
print("{:3d} plane waves, ENCUT = {:8.2f} eV rms = {:8.4f}".format(umax, ek, valrms))
plt.plot(rp, pw_vector(rp, *popt), "r-", label="{} plane waves".format(umax), lw=2)
plt.plot(rp, OA3s(rp), "ko--", label="AO 3s", alpha=.75)
plt.legend()
plt.title("Fit a 3s hydrogen like AO with plane waves")
plt.xlabel("r / $\AA$")
plt.ylabel("wavefunction")
plt.ylim(-10, 40)
plt.savefig("fit_AO3s_{}.pdf".format(umax))
plt.clf()
plt.plot(rp, pw_vector(rp, *popt), "r-", label="{} plane waves".format(umax), lw=2)
plt.plot(rp, OA3s(rp), "ko--", label="AO 3s", alpha=.75)
plt.legend()
plt.ylim(-10, 40)
plt.title("Fit a 3s hydrogen like AO with plane waves")
plt.xlabel("r / $\AA$")
plt.ylabel("wavefunction")
rp = np.linspace(0, rmax, 50)
for umax in [1, 5, 10, 15]:
popt, pcov = curve_fit(pw_vector, r, smooth_3s(r), p0=np.ones(umax))
ek = e_kinet_pw(umax, box=BOXLENGTH)
valrms = rms(smooth_3s(r), pw_vector(r, *popt))
print("{:3d} plane waves, ENCUT = {:8.2f} eV r^2 = {:8.4f}".format(umax, ek, valrms))
plt.plot(rp, pw_vector(rp, *popt), "r-", label="{} plane waves".format(umax), lw=2)
plt.plot(rp, OA3s(rp), "ko--", label="AO 3s", alpha=.75)
plt.legend()
plt.title("Fit a smooth 3s AO with plane waves")
plt.xlabel("r / $\AA$")
plt.ylabel("wavefunction")
plt.savefig("fit_smooth3s_{}.pdf".format(umax))
plt.clf()
plt.plot(rp, pw_vector(rp, *popt), "r-", label="{} plane waves".format(umax), lw=2)
plt.plot(rp, smooth_3s(rp), "ko--", label="AO 3s", alpha=.75)
plt.legend()
plt.title("Fit a smooth 3s AO with plane waves")
plt.xlabel("r / $\AA$")
plt.ylabel("wavefunction")
smooth_popt = popt
rp = np.linspace(0, rmax, 50)
for umax in [1, 5, 10, 15, 20]:
popt, pcov = curve_fit(pw_vector, r, slater_ns(r, z=11, n=3), p0=np.ones(umax))
ek = e_kinet_pw(umax, box=BOXLENGTH)
valrms = rms(slater_ns(r, z=11, n=3), pw_vector(r, *popt))
print("{:3d} plane waves, ENCUT = {:8.2f} eV r^2 = {:8.4f}".format(umax, ek, valrms))
plt.plot(rp, pw_vector(rp, *popt), "r-", label="{} plane waves".format(umax), lw=2)
plt.plot(rp, slater_ns(rp, z=11, n=3), "ko--", label="AO 3s", alpha=.75)
plt.legend()
plt.title("Fit a slater type 3s AO with plane waves")
plt.xlabel("r / $\AA$")
plt.ylabel("wavefunction")
plt.ylim(-1, 5)
plt.savefig("fit_slater3s_{}.pdf".format(umax))
plt.clf()
plt.plot(rp, pw_vector(rp, *popt), "r-", label="{} plane waves".format(umax), lw=2)
plt.plot(rp, slater_ns(rp, z=11, n=3), "ko--", label="AO 3s", alpha=.75)
plt.legend()
plt.title("Fit a slater type 3s AO with plane waves")
plt.xlabel("r / $\AA$")
plt.ylabel("wavefunction")
plt.ylim(-1, 5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Matplotlib is used in order to make plots
Step2: The Bohr radius is defined in angstroms via the scipy.constants module.
Step4: A root mean square function is defined in order to measure the quality of the fit with plane waves.
Step6: Atomic orbitals
Step8: We define the radial part of a 4s AO of a hydrogen-like species.
Step10: For comparison we also define a Slater-type orbital
Step12: A smooth AO is defined for comparison, without oscillations close to the nucleus.
Step13: Make some simple plots for comparison. As expected, the Slater-type orbital is smoother than the hydrogen-like one.
Step15: Plane-wave basis set
Step17: Kinetic energy of a plane wave
Step18: Example
Step19: Fit of a 3s atomic orbital
Step21: The fit function is defined so that the number of parameters sets the number of plane waves used.
Step22: Hydrogen like atomic orbital
Step23: smooth 3s atomic orbital
Step24: Slater type atomic orbital
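The plane-wave kinetic-energy relation above, E = ħ²G²/(2mₑ) with G = 2πu/L, can be checked in a few lines of plain Python. The physical constants are hard-coded to approximate CODATA values rather than pulled from scipy.constants, so the numbers are illustrative:

```python
import math

HBAR = 1.054571817e-34   # J*s, reduced Planck constant (approx. CODATA)
M_E = 9.1093837015e-31   # kg, electron mass
EV = 1.602176634e-19     # J per eV
ANGSTROM = 1e-10         # m

def pw_kinetic_eV(u, box=5.0):
    # Kinetic energy (eV) of the plane wave with index u in a box of `box` angstroms.
    g = 2.0 * math.pi * u / (box * ANGSTROM)   # reciprocal-space vector, 1/m
    return HBAR**2 * g**2 / (2.0 * M_E) / EV

for u in (1, 5, 10, 20):
    print(u, round(pw_kinetic_eV(u), 1))
# The energy grows as u**2: about 6 eV for u = 1 and roughly 2400 eV for
# u = 20, which is why the oscillating hydrogen-like 3s AO needs a much
# higher cutoff than the smooth pseudo-orbital.
```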
|
2,739
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
from mpl_toolkits.mplot3d import Axes3D
import plotBL
HTML('../style/code_toggle.html')
ant1 = np.array([-500e3,500e3,0]) # in m
ant2 = np.array([500e3,-500e3,+10]) # in m
b_ENU = ant2-ant1 # baseline
D = np.sqrt(np.sum((b_ENU)**2)) # |b|
print str(D/1000)+" km"
L = (np.pi/180)*(45+0./60+0./3600) # Latitude in radians
A = np.arctan2(b_ENU[0],b_ENU[1])
print "Baseline Azimuth="+str(np.degrees(A))+"°"
E = np.arcsin(b_ENU[2]/D)
print "Baseline Elevation="+str(np.degrees(E))+"°"
%matplotlib nbagg
plotBL.sphere(ant1,ant2,A,E,D,L)
# Observation parameters
c = 3e8 # Speed of light
f = 1420e9 # Frequency
lam = c/f # Wavelength
dec = (np.pi/180)*(-30-43.0/60-17.34/3600) # Declination
time_steps = 600 # time steps
h = np.linspace(-4,4,num=time_steps)*np.pi/12 # Hour angle window
ant1 = np.array([25.095,-9.095,0.045])
ant2 = np.array([90.284,26.380,-0.226])
b_ENU = ant2-ant1
D = np.sqrt(np.sum((b_ENU)**2))
L = (np.pi/180)*(-30-43.0/60-17.34/3600)
A=np.arctan2(b_ENU[0],b_ENU[1])
print "Azimuth=",A*(180/np.pi)
E=np.arcsin(b_ENU[2]/D)
print "Elevation=",E*(180/np.pi)
X = D*(np.cos(L)*np.sin(E)-np.sin(L)*np.cos(E)*np.cos(A))
Y = D*np.cos(E)*np.sin(A)
Z = D*(np.sin(L)*np.sin(E)+np.cos(L)*np.cos(E)*np.cos(A))
u = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3
v = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3
w = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3
%matplotlib nbagg
plotBL.UV(u,v,w)
%matplotlib inline
from matplotlib.patches import Ellipse
# parameters of the UVtrack as an ellipse
a=np.sqrt(X**2+Y**2)/lam/1e3 # major axis
b=a*np.sin(dec) # minor axis
v0=Z/lam*np.cos(dec)/1e3 # center of ellipse
plotBL.UVellipse(u,v,w,a,b,v0)
L=np.radians(90.)
ant1 = np.array([25.095,-9.095,0.045])
ant2 = np.array([90.284,26.380,-0.226])
b_ENU = ant2-ant1
D = np.sqrt(np.sum((b_ENU)**2))
A=np.arctan2(b_ENU[0],b_ENU[1])
print "Azimuth=",A*(180/np.pi)
E=np.arcsin(b_ENU[2]/D)
print "Elevation=",E*(180/np.pi)
X = D*(np.cos(L)*np.sin(E)-np.sin(L)*np.cos(E)*np.cos(A))
Y = D*np.cos(E)*np.sin(A)
Z = D*(np.sin(L)*np.sin(E)+np.cos(L)*np.cos(E)*np.cos(A))
dec=np.radians(90.)
uNCP = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3
vNCP = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3
wNCP = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3
# parameters of the UVtrack as an ellipse
aNCP=np.sqrt(X**2+Y**2)/lam/1e3 # major axis
bNCP=aNCP*np.sin(dec) # minor axis
v0NCP=Z/lam*np.cos(dec)/1e3 # center of ellipse
dec=np.radians(30.)
u30 = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3
v30 = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3
w30 = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3
a30=np.sqrt(X**2+Y**2)/lam/1e3 # major axis
b30=a30*np.sin(dec) # minor axis
v030=Z/lam*np.cos(dec)/1e3 # center of ellipse
%matplotlib inline
plotBL.UVellipse(u30,v30,w30,a30,b30,v030)
plotBL.UVellipse(uNCP,vNCP,wNCP,aNCP,bNCP,v0NCP)
L=np.radians(90.)
X = D*(np.cos(L)*np.sin(E)-np.sin(L)*np.cos(E)*np.cos(A))
Y = D*np.cos(E)*np.sin(A)
Z = D*(np.sin(L)*np.sin(E)+np.cos(L)*np.cos(E)*np.cos(A))
# At local zenith == Celestial Equator
dec=np.radians(0.)
uEQ = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3
vEQ = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3
wEQ = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3
# parameters of the UVtrack as an ellipse
aEQ=np.sqrt(X**2+Y**2)/lam/1e3 # major axis
bEQ=aEQ*np.sin(dec) # minor axis
v0EQ=Z/lam*np.cos(dec)/1e3 # center of ellipse
# Close to Zenith
dec=np.radians(10.)
u10 = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3
v10 = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3
w10 = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3
a10=np.sqrt(X**2+Y**2)/lam/1e3 # major axis
b10=a10*np.sin(dec) # minor axis
v010=Z/lam*np.cos(dec)/1e3 # center of ellipse
%matplotlib inline
plotBL.UVellipse(u10,v10,w10,a10,b10,v010)
plotBL.UVellipse(uEQ,vEQ,wEQ,aEQ,bEQ,v0EQ)
H = np.linspace(-6,6,600)*(np.pi/12) #Hour angle in radians
d = 100 #We assume that we have already divided by wavelength
delta = 60*(np.pi/180) #Declination in degrees
u_60 = d*np.cos(H)
v_60 = d*np.sin(H)*np.sin(delta)
RA_sources = np.array([5+30.0/60,5+32.0/60+0.4/3600,5+36.0/60+12.8/3600,5+40.0/60+45.5/3600])
DEC_sources = np.array([60,60+17.0/60+57.0/3600,61+12.0/60+6.9/3600,61+56.0/60+34.0/3600])
Flux_sources_labels = np.array(["","1 Jy","0.5 Jy","0.2 Jy"])
Flux_sources = np.array([1,0.5,0.2]) #in Jy, matching the source labels above
step_size = 200
print "Phase center Source 1 Source 2 Source3"
print repr("RA="+str(RA_sources)).ljust(2)
print "DEC="+str(DEC_sources)
RA_rad = np.array(RA_sources)*(np.pi/12)
DEC_rad = np.array(DEC_sources)*(np.pi/180)
RA_delta_rad = RA_rad-RA_rad[0]
l = np.cos(DEC_rad)*np.sin(RA_delta_rad)
m = (np.sin(DEC_rad)*np.cos(DEC_rad[0])-np.cos(DEC_rad)*np.sin(DEC_rad[0])*np.cos(RA_delta_rad))
print "l=",l*(180/np.pi)
print "m=",m*(180/np.pi)
point_sources = np.zeros((len(RA_sources)-1,3))
point_sources[:,0] = Flux_sources
point_sources[:,1] = l[1:]
point_sources[:,2] = m[1:]
%matplotlib inline
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
plt.xlim([-4,4])
plt.ylim([-4,4])
plt.xlabel("$l$ [degrees]")
plt.ylabel("$m$ [degrees]")
plt.plot(l[0],m[0],"bx")
# successive plot calls overlay by default (plt.hold was removed in modern matplotlib)
plt.plot(l[1:]*(180/np.pi),m[1:]*(180/np.pi),"ro")
counter = 1
for xy in zip(l[1:]*(180/np.pi)+0.25, m[1:]*(180/np.pi)+0.25):
ax.annotate(Flux_sources_labels[counter], xy=xy, textcoords='offset points',horizontalalignment='right',
verticalalignment='bottom')
counter = counter + 1
plt.grid()
u = np.linspace(-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10, num=step_size, endpoint=True)
v = np.linspace(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10, num=step_size, endpoint=True)
uu, vv = np.meshgrid(u, v)
zz = np.zeros(uu.shape).astype(complex)
s = point_sources.shape
for counter in xrange(1, s[0]+1):
A_i = point_sources[counter-1,0]
l_i = point_sources[counter-1,1]
m_i = point_sources[counter-1,2]
zz += A_i*np.exp(-2*np.pi*1j*(uu*l_i+vv*m_i))
zz = zz[:,::-1]
u_track = u_60
v_track = v_60
z = np.zeros(u_track.shape).astype(complex)
s = point_sources.shape
for counter in xrange(1, s[0]+1):
A_i = point_sources[counter-1,0]
l_i = point_sources[counter-1,1]
m_i = point_sources[counter-1,2]
z += A_i*np.exp(-1*2*np.pi*1j*(u_track*l_i+v_track*m_i))
plt.subplot(121)
plt.imshow(zz.real,extent=[-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10,-1*(np.amax(abs(v_60)))-10, \
np.amax(abs(v_60))+10])
plt.plot(u_60,v_60,"k")
plt.xlim([-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10])
plt.ylim(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10)
plt.xlabel("u")
plt.ylabel("v")
plt.title("Real part of visibilities")
plt.subplot(122)
plt.imshow(zz.imag,extent=[-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10,-1*(np.amax(abs(v_60)))-10, \
np.amax(abs(v_60))+10])
plt.plot(u_60,v_60,"k")
plt.xlim([-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10])
plt.ylim(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10)
plt.xlabel("u")
plt.ylabel("v")
plt.title("Imaginary part of visibilities")
plt.subplot(121)
plt.plot(z.real)
plt.xlabel("Timeslots")
plt.ylabel("Jy")
plt.title("Real: sampled visibilities")
plt.subplot(122)
plt.plot(z.imag)
plt.xlabel("Timeslots")
plt.ylabel("Jy")
plt.title("Imag: sampled visibilities")
plt.subplot(121)
plt.imshow(abs(zz),
extent=[-1*(np.amax(np.abs(u_60)))-10,
np.amax(np.abs(u_60))+10,
-1*(np.amax(abs(v_60)))-10,
np.amax(abs(v_60))+10])
plt.plot(u_60,v_60,"k")
plt.xlim([-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10])
plt.ylim(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10)
plt.xlabel("u")
plt.ylabel("v")
plt.title("Amplitude of visibilities")
plt.subplot(122)
plt.imshow(np.angle(zz),
extent=[-1*(np.amax(np.abs(u_60)))-10,
np.amax(np.abs(u_60))+10,
-1*(np.amax(abs(v_60)))-10,
np.amax(abs(v_60))+10])
plt.plot(u_60,v_60,"k")
plt.xlim([-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10])
plt.ylim(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10)
plt.xlabel("u")
plt.ylabel("v")
plt.title("Phase of visibilities")
plt.subplot(121)
plt.plot(abs(z))
plt.xlabel("Timeslots")
plt.ylabel("Jy")
plt.title("Abs: sampled visibilities")
plt.subplot(122)
plt.plot(np.angle(z))
plt.xlabel("Timeslots")
plt.ylabel("Jy")
plt.title("Phase: sampled visibilities")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import section specific modules
Step2: 4.4.1 UV coverage
Step3: Let's express the corresponding physical baseline in ENU coordinates.
Step4: Let's place the interferometer at a latitude $L_a=+45^\circ00'00''$.
Step5: Figure 4.4.1
Step6: Computation of the projected baselines in ($u$,$v$,$w$) coordinates as a function of time
Step7: As the $u$, $v$, $w$ coordinates depend explicitly on $H$, we must evaluate them for each time step of the observation. We will use the equations defined in $\S$ 4.1.2 ➞
Step8: We now have everything that describes the uvw-track of the baseline over an 8-hour observation period. In this form it is still unclear what shape the uvw track takes, so let's plot it in uvw space and as its projection onto the uv plane.
Step9: Figure 4.4.2
Step10: Figure 4.4.3
Step11: Let's compute the $uv$ tracks when observing the NCP ($\delta=90^\circ$)
Step12: Let's compute the uv tracks when observing a source at $\delta=30^\circ$
Step13: Figure 4.4.4
Step14: Figure 4.4.5
Step15: Simulating a sky
Step16: We then convert the ($\alpha$,$\delta$) to $l,m$ with
Step17: The coordinates of the sources and the phase center are now in degrees.
Step18: Figure 4.4.6
Step19: We create the dimensions of our visibility plane.
Step20: We create our completely filled-in visibility plane. If we had a perfect interferometer we could sample the entire $uv$-plane, but because we only have a finite number of antennas this is not possible. Recall that our sky brightness $I(l,m)$ is related to our visibilities $V(u,v)$ via the Fourier transform. For a bunch of point sources we can therefore write
Step21: Below we sample our visibility plane on the $uv$-track derived in the first section, i.e. $V(u_t,v_t)$.
Step22: Figure 4.4.7
Step23: Figure 4.4.8
Step24: Figure 4.4.9
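The point-source visibility relation above, $V(u,v) = \sum_i A_i e^{-2\pi i(u l_i + v m_i)}$, can be sketched without numpy. The toy source list below is made up for illustration and is not the Bullseye field simulated in the notebook:

```python
import cmath

def visibility(u, v, sources):
    # Sum A_i * exp(-2*pi*j*(u*l_i + v*m_i)) over point sources (A, l, m),
    # with (l, m) in radians and (u, v) in wavelengths.
    return sum(A * cmath.exp(-2j * cmath.pi * (u * l + v * m))
               for A, l, m in sources)

# A single 1 Jy source at the phase centre gives a constant, purely real
# visibility everywhere in the uv plane:
print(visibility(100.0, 50.0, [(1.0, 0.0, 0.0)]))    # → (1+0j)

# Off-centre sources add fringes: amplitude and phase vary with (u, v).
toy = [(1.0, 0.01, 0.0), (0.5, 0.0, 0.02)]
V = visibility(30.0, 10.0, toy)
print(abs(V), cmath.phase(V))
```

Sampling this function along the (u, v) track of a baseline is exactly what the `z` array in the code above does.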
|
2,740
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns; sns.set() # for plot styling
import numpy as np
from sklearn.datasets import make_blobs  # the samples_generator path was removed in newer scikit-learn
X, y_true = make_blobs(n_samples=300, centers=4,
cluster_std=0.60, random_state=0)
plt.scatter(X[:, 0], X[:, 1], s=50);
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=4)
kmeans.fit(X)
y_kmeans = kmeans.predict(X)
plt.scatter(X[:, 0], X[:, 1], c=y_kmeans, s=50, cmap='viridis')
centers = kmeans.cluster_centers_
plt.scatter(centers[:, 0], centers[:, 1], c='black', s=200, alpha=0.5);
from sklearn.metrics import pairwise_distances_argmin
def find_clusters(X, n_clusters, rseed=2):
# 1. Randomly choose clusters
rng = np.random.RandomState(rseed)
i = rng.permutation(X.shape[0])[:n_clusters]
centers = X[i]
while True:
# 2a. Assign labels based on closest center
labels = pairwise_distances_argmin(X, centers)
# 2b. Find new centers from means of points
new_centers = np.array([X[labels == i].mean(0)
for i in range(n_clusters)])
# 2c. Check for convergence
if np.all(centers == new_centers):
break
centers = new_centers
return centers, labels
centers, labels = find_clusters(X, 4)
plt.scatter(X[:, 0], X[:, 1], c=labels,
s=50, cmap='viridis');
centers, labels = find_clusters(X, 4, rseed=0)
plt.scatter(X[:, 0], X[:, 1], c=labels,
s=50, cmap='viridis');
labels = KMeans(6, random_state=0).fit_predict(X)
plt.scatter(X[:, 0], X[:, 1], c=labels,
s=50, cmap='viridis');
from sklearn.datasets import make_moons
X, y = make_moons(200, noise=.05, random_state=0)
labels = KMeans(2, random_state=0).fit_predict(X)
plt.scatter(X[:, 0], X[:, 1], c=labels,
s=50, cmap='viridis');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: k-Means Algorithm
Step2: Caveats of expectation–maximization
Step3: Here the E–M approach has converged, but has not converged to a globally optimal configuration. For this reason, it is common for the algorithm to be run for multiple starting guesses, as indeed Scikit-Learn does by default (set by the n_init parameter, which defaults to 10).
Step4: Whether the result is meaningful is a question that is difficult to answer definitively; one approach that is rather intuitive, but that we won't discuss further here, is called silhouette analysis.
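The expectation–maximization loop described above can be sketched in plain Python on 1-D data; the toy points and starting centres below are made up for illustration:

```python
def kmeans_1d(points, centers, n_iter=10):
    # Plain-Python k-means on 1-D data: alternate assignment (E) and mean update (M).
    for _ in range(n_iter):
        # E-step: label each point with the index of its nearest centre
        labels = [min(range(len(centers)), key=lambda k: abs(p - centers[k]))
                  for p in points]
        # M-step: move each centre to the mean of the points assigned to it
        new = [sum(p for p, l in zip(points, labels) if l == k) /
               max(1, sum(1 for l in labels if l == k))
               for k in range(len(centers))]
        if new == centers:          # converged: centres stopped moving
            break
        centers = new
    return centers, labels

centers, labels = kmeans_1d([0.0, 0.2, 0.1, 9.8, 10.0, 10.2], [0.5, 9.0])
print(centers)   # ≈ [0.1, 10.0]
```

This is exactly the expectation–maximization structure: convergence is guaranteed, but only to a local optimum, which is why scikit-learn re-runs the algorithm n_init times from different starting guesses.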
|
2,741
|
<ASSISTANT_TASK:>
Python Code:
import pandas
raw_elections_2015 = pandas.read_csv("test2015.csv",delimiter='\t')
raw_elections_2016 = pandas.read_csv("test2016.csv",delimiter='\t')
def find_PODEMOS(words):
if 'PODEMOS' in words:
return 'PODEMOS'
if 'EN COMU' == words:
return 'PODEMOS'
return words
elections_2015 = raw_elections_2015
elections_2016 = raw_elections_2016
elections_2015['Partido'] = elections_2015['Partido'].apply(find_PODEMOS)
elections_2016['Partido'] = elections_2016['Partido'].apply(find_PODEMOS)
print(elections_2015.columns)
# The data uses the Spanish number format: '.' as thousands separator, ',' as decimal mark (e.g. 10,0 and 1.000)
def replace_dots(word):
return word.replace('.','')
def replace_commas(word):
return word.replace(',','.')
elections_2015['Votos'] = elections_2015['Votos'].apply(replace_dots)
elections_2015['%'] = elections_2015['%'].apply(replace_commas)
elections_2016['Votos'] = elections_2016['Votos'].apply(replace_dots)
elections_2016['%'] = elections_2016['%'].apply(replace_commas)
elections_2015.info()
elections_2015['Votos'] = pandas.to_numeric(elections_2015['Votos'], errors='coerce')
elections_2015['Escannos'] = pandas.to_numeric(elections_2015['Escannos'], errors='coerce')
elections_2015['%'] = pandas.to_numeric(elections_2015['%'], errors='coerce')
elections_2016['Votos'] = pandas.to_numeric(elections_2016['Votos'], errors='coerce')
elections_2016['Escannos'] = pandas.to_numeric(elections_2016['Escannos'], errors='coerce')
elections_2016['%'] = pandas.to_numeric(elections_2016['%'], errors='coerce')
print(elections_2015.head())
print(elections_2016.head())
elections_2015.describe()
elections_2016 = elections_2016.groupby('Partido',as_index=False).sum()
elections_2015 = elections_2015.groupby('Partido',as_index=False).sum()
elections_2015.columns=['Partido','Votos 2015','% 2015', 'Escanos 2015']
elections_2016.columns=['Partido','Votos 2016','% 2016', 'Escanos 2016']
belections = pandas.merge(elections_2015, elections_2016, on='Partido',how='outer')
elections = pandas.merge(elections_2015, elections_2016, on='Partido',how='inner')
elections.sort_values('Votos 2015', ascending = False)
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set_style("whitegrid")
sns.set_style({"grid.color": "1."})
sns.set_context("notebook", font_scale=1.5)
election_filtred = elections[elections['% 2015'] > 1.01]
election_filtred = election_filtred.sort_values('% 2016', ascending=False)
columns = election_filtred['Partido']
#Dataframe transposition for graph
dict_pre = {}
for i in range(election_filtred.shape[0]):
#print(i)
dict_pre[election_filtred.iloc[i,0]] = [election_filtred.iloc[i,1],election_filtred.iloc[i,4]]
election_graph = pandas.DataFrame(dict_pre, index=[2015,2016])
election_graph.columns = columns
election_graph.plot(colormap='Paired')
#elections.to_csv("elections_clean.csv")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <p>All the data are strings</p>
Step2: <h3>Despite all the corruption cases involving the PP, only they grew their vote in 2016</h3>
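The Spanish-format number cleaning described above (replace_dots followed by replace_commas) can be folded into a single helper. This is a plain-Python sketch assuming the strings always use '.' for thousands and ',' for decimals, not a pandas-specific utility:

```python
def parse_es_number(s):
    # Convert a Spanish-formatted number string: '.' is the thousands
    # separator and ',' the decimal mark (e.g. '1.234,56' -> 1234.56).
    return float(s.replace('.', '').replace(',', '.'))

for raw in ('1.000', '10,0', '7.215.752', '28,71'):
    print(raw, '->', parse_es_number(raw))
# 1.000 -> 1000.0, 10,0 -> 10.0, 7.215.752 -> 7215752.0, 28,71 -> 28.71
```

In the notebook this is applied column-wise with Series.apply before pandas.to_numeric.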
|
2,742
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
#print(text)
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
vocab = set(text)
vocab_to_int = {word: index for index,word in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
symbols_dict = {'.' : '||Period||' , ',' : '||Comma||' , '"' : '||QuotationMark||' ,
';' : '||Semicolon||' , '!' : '||Exclamationmark||' , '?' : '||Questionmark||',
'(' : '||LeftParentheses||' , ')' : '||RightParentheses||' ,
'--' : '||Dash||' , '\n' : '||Return||'
}
return symbols_dict
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
# Create the graph object
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return inputs, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
def lstm_cell(state_size , output_keep_prob):
cell = tf.contrib.rnn.BasicLSTMCell(
state_size, forget_bias=0.0, state_is_tuple=True, reuse=tf.get_variable_scope().reuse)
return cell
def get_init_cell(batch_size, rnn_size,keep_prob=0.75 ):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# TODO: Implement Function
# Stack up multiple LSTM layers, for deep learning
num_layers = 3
cell = tf.contrib.rnn.MultiRNNCell([lstm_cell( rnn_size, keep_prob) for _ in range(num_layers)], state_is_tuple = True)
initial_state=cell.zero_state(batch_size, tf.float32)
initial_state=tf.identity(initial_state, name='initial_state')
return cell,initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Function
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = embed_dim
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# TODO: Implement Function
n_vocab = len(int_to_vocab)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
# TODO: Implement Function
embedding = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, embedding)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None,
weights_initializer=tf.truncated_normal_initializer(stddev=0.1),
biases_initializer=tf.zeros_initializer())
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
elements_per_batch = batch_size * seq_length
num_batches = len(int_text)//elements_per_batch
#Keep only enough elements to make full batches
int_text = int_text[:num_batches * elements_per_batch]
int_text = np.array(int_text).reshape((batch_size, -1))
batches = np.zeros((num_batches, 2, batch_size, seq_length), dtype=np.int32)
batch_num = 0
for n in range(0, int_text.shape[1], seq_length):
# The features
x = int_text[:, n:n+seq_length]
# The target shifted by one
if ((n+seq_length)%int_text.shape[1] == 0):
y = np.zeros_like(x)
y[:,:-1] = x[:,1:]
for batch in range(num_batches):
y[batch,-1] = int_text[(batch+1)%num_batches,0]
else:
y = int_text[:, n+1:n+seq_length+1]
batches[batch_num][0] = x
batches[batch_num][1] = y
batch_num += 1
return batches
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 256
# Sequence Length
seq_length = 16
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 20
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
return loaded_graph.get_tensor_by_name("input:0"), loaded_graph.get_tensor_by_name("initial_state:0"), loaded_graph.get_tensor_by_name("final_state:0"), loaded_graph.get_tensor_by_name("probs:0")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
word_picked = np.random.choice(list(int_to_vocab.values()), p=probabilities)
return word_picked
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
gen_length = 1200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step9: Tokenize Punctuation
Step11: Preprocess all the data and save it
Step13: Check Point
Step15: Build the Neural Network
Step18: Input
Step21: Build RNN Cell and Initialize
Step24: Word Embedding
Step27: Build RNN
Step30: Build the Neural Network
Step33: Batches
Step35: Neural Network Training
Step37: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Implement Generate Functions
Step49: Choose Word
Step51: Generate TV Script
|
2,743
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import networkx as nx
K_5=nx.complete_graph(5)
nx.draw(K_5)
def complete_deg(n):
a = np.identity(n, dtype=int)  # np.int was removed from NumPy; use the builtin int
for element in np.nditer(a, op_flags=['readwrite']):
if element > 0:
element[...] = element + n - 2
return a
complete_deg(3)
D = complete_deg(5)
assert D.shape==(5,5)
assert D.dtype==np.dtype(int)
assert np.all(D.diagonal()==4*np.ones(5))
assert np.all(D-np.diag(D.diagonal())==np.zeros((5,5),dtype=int))
def complete_adj(n):
b = np.identity(n, dtype=int)  # np.int was removed from NumPy; use the builtin int
for element in np.nditer(b, op_flags=['readwrite']):
if element == 0:
element[...] = 1
else:
element[...] = 0
return b
complete_adj(3)
A = complete_adj(5)
assert A.shape==(5,5)
assert A.dtype==np.dtype(int)
assert np.all(A+np.eye(5,dtype=int)==np.ones((5,5),dtype=int))
def L(n):
L = complete_deg(n) - complete_adj(n)
return L
print(L(1))
print(L(2))
print(L(3))
print(L(4))
print(L(5))
print(L(6))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Complete graph Laplacian
Step2: The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$. Where $D$ is the degree matrix and $A$ is the adjecency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.
Step3: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
Step4: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$.
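Step 4 can also be checked by hand: for $K_n$, the Laplacian sends the all-ones vector to zero and any difference $e_i - e_j$ to $n(e_i - e_j)$, so the spectrum is $0$ once and $n$ with multiplicity $n-1$. A pure-Python sketch of that check (the helper names are illustrative):

```python
def laplacian(n):
    # L = D - A for the complete graph K_n: (n-1) on the diagonal, -1 elsewhere
    return [[(n - 1) if i == j else -1 for j in range(n)] for i in range(n)]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

n = 5
L = laplacian(n)
assert matvec(L, [1] * n) == [0] * n                     # eigenvalue 0 for (1, ..., 1)
assert matvec(L, [1, -1, 0, 0, 0]) == [n, -n, 0, 0, 0]   # eigenvalue n
```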
|
2,744
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
fake_depth = np.linspace(100, 150, 101)
fake_log = np.array([np.random.choice([0, 1]) for _ in fake_depth])
plt.figure(figsize=(15, 1))
plt.plot(fake_depth, fake_log, 'o-')
from striplog import Striplog, Component
comps = [
Component({'pay': True}),
Component({'pay': False})
]
s = Striplog.from_log(fake_log, cutoff=0.5, components=comps, basis=fake_depth)
s[-1].base.middle = 150.5 # Adjust the bottom thickness... not sure if this is a bug.
s[0]
from striplog import Legend
legend = Legend.random(comps)
legend.get_decor(comps[-1]).width = 0.2
legend.plot()
legend_csv = """colour,hatch,width,component pay
#48cc0e,None,1,True
#5779e2,None,0.2,False"""
legend = Legend.from_csv(text=legend_csv)
legend.plot()
s.plot(legend=legend, aspect=5)
pruned = s.prune(limit=1.0, keep_ends=True)
annealed = pruned.anneal()
merged = annealed.merge_neighbours() # Anneal works on a copy
fig, axs = plt.subplots(ncols=4, figsize=(6, 10))
axs[0] = s.plot(legend=legend, ax=axs[0], lw=1, aspect=5)
axs[0].set_title('Original')
axs[1] = pruned.plot(legend=legend, ax=axs[1], lw=1, aspect=5)
axs[1].set_yticklabels([])
axs[1].set_title('Pruned')
axs[2] = annealed.plot(legend=legend, ax=axs[2], lw=1, aspect=5)
axs[2].set_yticklabels([])
axs[2].set_title('Annealed')
axs[3] = merged.plot(legend=legend, ax=axs[3], lw=1, aspect=5)
axs[3].set_yticklabels([])
axs[3].set_title('Merged')
plt.show()
fig, axs = plt.subplots(ncols=4, figsize=(6, 10))
opening = s.binary_morphology('pay', 'opening', step=0.1, p=7)
closing = s.binary_morphology('pay', 'closing', step=0.1, p=7)
axs[0] = s.plot(legend=legend, ax=axs[0], lw=1, aspect=5)
ntg = s.net_to_gross('pay')
axs[0].set_title(f'Original\n{ntg:.2f}')
axs[1] = merged.plot(legend=legend, ax=axs[1], lw=1, aspect=5)
axs[1].set_yticklabels([])
ntg = merged.net_to_gross('pay')
axs[1].set_title(f'PAM\n{ntg:.2f}') # Prune-anneal-merge
axs[2] = opening.plot(legend=legend, ax=axs[2], lw=1, aspect=5)
axs[2].set_yticklabels([])
ntg = opening.net_to_gross('pay')
axs[2].set_title(f'Opening\n{ntg:.2f}')
axs[3] = closing.plot(legend=legend, ax=axs[3], lw=1, aspect=5)
axs[3].set_yticklabels([])
ntg = closing.net_to_gross('pay')
axs[3].set_title(f'Closing\n{ntg:.2f}')
plt.show()
s.unique
s.thickest()
s.thickest(5).plot(legend=legend, lw=1, aspect=5)
s.bar(legend=legend)
s.bar(legend=legend, sort=True)
n, ents, ax = s.hist(legend=legend)
s
legend
data = [c[1] for c in s.unique]
colors = [c['_colour'] for c in legend.table]
fig, axs = plt.subplots(ncols=2,
gridspec_kw={'width_ratios': [1, 3]})
axs[0] = s.plot(ax=axs[0], legend=legend)
axs[0].set_title("Striplog")
axs[1].pie(data, colors=colors)
axs[1].set_title("Pay proportions")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Make a striplog
Step2: Each Interval in the striplog looks like
Step3: Plot the intervals
Step5: Or we can make one with a bit more control
Step6: Remove thin things
Step7: Now we can anneal the gaps
Step8: Then merge the adjacent intervals that are alike...
Step9: We could have chained these commands
Step10: Dilate and erode
Step11: Some statistics
Step12: We can get at the thickest (and thinnest, with .thinnest()) intervals
Step13: These functions optionally take an integer argument n specifying how many of the thickest or thinnest intervals you want to see. If n is greater than 1, a Striplog object is returned so you can see the positions of those items
Step14: Bar plots and histograms
Step15: More interesting is to sort the thicknesses
Step16: Finally, we can make a thickness histogram of the various types of component present in the log.
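The opening/closing step above can be understood with a pure-Python sketch of binary erosion and dilation on a 0/1 log. This illustrates the morphology idea only, not striplog's actual `binary_morphology` implementation; the window half-width `p` is an assumption for the sketch:

```python
def erode(bits, p=1):
    # a sample survives only if every sample within +/- p is 1
    return [1 if all(bits[max(0, i - p):i + p + 1]) else 0 for i in range(len(bits))]

def dilate(bits, p=1):
    # a sample turns on if any sample within +/- p is 1
    return [1 if any(bits[max(0, i - p):i + p + 1]) else 0 for i in range(len(bits))]

log = [0, 0, 1, 0, 0, 1, 1, 1, 0, 0]
opening = dilate(erode(log))   # erode then dilate: removes the thin "pay" spike
closing = erode(dilate(log))   # dilate then erode: fills the thin gap
```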
|
2,745
|
<ASSISTANT_TASK:>
Python Code:
# Required imports for this notebook (not shown in the original cells)
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import PolynomialFeatures

# Function to generate target value for a given x.
true_func = lambda X: np.cos(1.5 * np.pi * X)
np.random.seed(0)
# Training Set: No. of random samples used for training the model
n_samples = 30
x = np.sort(np.random.rand(n_samples))
y = true_func(x) + np.random.randn(n_samples) * 0.1
# Test Set: 100 samples for which we want the model to predict value
n_test = 100
x_test = np.linspace(0, 1, n_test)
y_test_actual = true_func(x_test) + np.random.randn(n_test) * 0.1
x[:5]
x[:5],y[:5]
# Function to add more features from existing features
# in this case degree is the desired order of polynomials
# Example degree 3 with 1 feature x would output: x,x^2,x^3
# Similary, if there are multiple features x1,x2: x1,x2,x1*x2,x1^2,x1^2*x2,x1^3....and so forth
def generate_higher_order(degrees, x):
# Generate higher order features from a given set of features.
poly = PolynomialFeatures(degree = degrees,
include_bias = False)
x_new = poly.fit_transform(x)
df = pd.DataFrame(x_new)
df.columns = df.columns.map(lambda n: 'x' + str(n))
return df
data_path = r'..\Data\RegressionExamples\under_over_fit_30samples'
# degrees for feature generation
degrees = [1, 4, 15]
# Generate training set for each of the degree
for d in degrees:
df = generate_higher_order(d, x.reshape((n_samples, 1)))
df['y'] = y
df.to_csv(os.path.join(data_path,'fit_degree_{0}_example_train{1}.csv'.format(d,n_samples)),
index = True,
index_label = 'Row')
# Generate Evaluation set. Contains all the features and target.
# Generate Test set. Contains only the features. AWSML would predict the target
for d in degrees:
df = generate_higher_order(d, x_test.reshape((n_test, 1)))
df.to_csv(os.path.join(data_path,'fit_degree_{0}_example_test{1}.csv'.format(d, n_samples)),
index = True,
index_label = 'Row')
df['y'] = y_test_actual
df.to_csv(os.path.join(data_path,'fit_degree_{0}_example_eval{1}.csv'.format(d, n_samples)),
index = True,
index_label = 'Row')
# Pull Predictions
df_samples = pd.read_csv(os.path.join(data_path, 'fit_degree_1_example_train30.csv'),
index_col = 'Row')
df_actual = pd.read_csv(os.path.join(data_path, 'fit_degree_1_example_eval30.csv'),
index_col = 'Row')
df_d1_predicted = pd.read_csv(
os.path.join(data_path,'output_deg_1',
'bp-aYBztCIPIdb-fit_degree_1_example_test30.csv.gz'))
df_d1_predicted.columns = ["Row","y_predicted"]
fig = plt.figure(figsize = (12, 8))
plt.scatter(x = df_samples['x0'],
y = df_samples['y'],
color = 'b',
label='samples')
plt.scatter(x = df_actual['x0'],
y = df_actual['y'],
color = 'r',
label = 'true function')
plt.scatter(x = df_actual['x0'],
y = df_d1_predicted['y_predicted'],
color = 'g',
label = 'predicted with degree 1')
plt.title('Model with degree 1 feature - Underfit')
plt.grid(True)
plt.legend()
fig = plt.figure(figsize = (12, 8))
plt.boxplot([df_actual['y'],
df_d1_predicted['y_predicted']],
labels=['actual','predicted with deg1'])
plt.title('Box Plot - Actual, Predicted')
plt.ylabel('Target Attribute')
plt.grid(True)
df_d4_predicted = pd.read_csv(
os.path.join(data_path,'output_deg_4',
'bp-W4oBOhwClbH-fit_degree_4_example_test30.csv.gz'))
df_d4_predicted.columns = ["Row","y_predicted"]
fig = plt.figure(figsize = (12, 8))
plt.scatter(x = df_samples['x0'],
y = df_samples['y'],
color = 'b',
label = 'samples')
plt.scatter(x = df_actual['x0'],
y = df_actual['y'],
color = 'r',
label = 'true function')
plt.scatter(x = df_actual['x0'],
y = df_d4_predicted['y_predicted'],
color = 'g',
label = 'predicted with degree 4')
plt.title('Model with degree 4 features - normal fit')
plt.grid(True)
plt.legend()
fig = plt.figure(figsize = (12, 8))
plt.boxplot([df_actual['y'],
df_d1_predicted['y_predicted'],
df_d4_predicted['y_predicted']],
labels=['actual','predicted with deg1','predicted with deg4'])
plt.title('Box Plot - Actual, Predicted')
plt.ylabel('Target Attribute')
plt.grid(True)
df_d15_predicted = pd.read_csv(
os.path.join(data_path,'output_deg_15',
'bp-rBWxcnPN3zu-fit_degree_15_example_test30.csv.gz'))
df_d15_predicted.columns = ["Row","y_predicted"]
fig = plt.figure(figsize = (12, 8))
plt.scatter(x = df_samples['x0'],
y = df_samples['y'],
color = 'b',
label = 'samples')
plt.scatter(x = df_actual['x0'],
y = df_actual['y'],
color = 'r',
label = 'true function')
plt.scatter(x = df_actual['x0'],
y = df_d15_predicted['y_predicted'],
color = 'g',
label = 'predicted with degree 15')
plt.grid(True)
plt.legend()
fig = plt.figure(figsize = (12, 8))
plt.boxplot([df_actual['y'],
df_d1_predicted['y_predicted'],
df_d4_predicted['y_predicted'],
df_d15_predicted['y_predicted']],
labels = ['actual','predicted deg1','predicted deg4','predicted deg15'])
plt.title('Box Plot - Actual, Predicted')
plt.ylabel('Target Attribute')
plt.grid(True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Polynomial with degree 1 is a straight line - Underfitting<br>
Step2: <h4>Model with degree 4 features</h4>
Step3: Good Fit with degree 4 polynomial<br>
Step4: <h4>Model with degree 15 features</h4>
Step5: Not quite over fitting as shown in sci-kit example; fit is actually pretty good here.<br>
|
2,746
|
<ASSISTANT_TASK:>
Python Code:
# Author: Ivana Kojcic <ivana.kojcic@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
# Kostiantyn Maksymenko <kostiantyn.maksymenko@gmail.com>
# Samuel Deslauriers-Gauthier <sam.deslauriers@gmail.com>
# License: BSD (3-clause)
import os.path as op
import numpy as np
import mne
from mne.datasets import sample
print(__doc__)
# In this example, raw data will be simulated for the sample subject, so its
# information needs to be loaded. This step will download the data if it not
# already on your machine. Subjects directory is also set so it doesn't need
# to be given to functions.
data_path = sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
subject = 'sample'
meg_path = op.join(data_path, 'MEG', subject)
# First, we get an info structure from the sample subject.
fname_info = op.join(meg_path, 'sample_audvis_raw.fif')
info = mne.io.read_info(fname_info)
tstep = 1 / info['sfreq']
# To simulate sources, we also need a source space. It can be obtained from the
# forward solution of the sample subject.
fwd_fname = op.join(meg_path, 'sample_audvis-meg-eeg-oct-6-fwd.fif')
fwd = mne.read_forward_solution(fwd_fname)
src = fwd['src']
# To simulate raw data, we need to define when the activity occurs using events
# matrix and specify the IDs of each event.
# Noise covariance matrix also needs to be defined.
# Here, both are loaded from the sample dataset, but they can also be specified
# by the user.
fname_event = op.join(meg_path, 'sample_audvis_raw-eve.fif')
fname_cov = op.join(meg_path, 'sample_audvis-cov.fif')
events = mne.read_events(fname_event)
noise_cov = mne.read_cov(fname_cov)
# Standard sample event IDs. These values will correspond to the third column
# in the events matrix.
event_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'button': 32}
# Take only a few events for speed
events = events[:80]
activations = {
'auditory/left':
[('G_temp_sup-G_T_transv-lh', 30), # label, activation (nAm)
('G_temp_sup-G_T_transv-rh', 60)],
'auditory/right':
[('G_temp_sup-G_T_transv-lh', 60),
('G_temp_sup-G_T_transv-rh', 30)],
'visual/left':
[('S_calcarine-lh', 30),
('S_calcarine-rh', 60)],
'visual/right':
[('S_calcarine-lh', 60),
('S_calcarine-rh', 30)],
}
annot = 'aparc.a2009s'
# Load the 4 necessary label names.
label_names = sorted(set(activation[0]
for activation_list in activations.values()
for activation in activation_list))
region_names = list(activations.keys())
def data_fun(times, latency, duration):
"""Function to generate source time courses for evoked responses,
parametrized by latency and duration."""
f = 15 # oscillating frequency, beta band [Hz]
sigma = 0.375 * duration
sinusoid = np.sin(2 * np.pi * f * (times - latency))
gf = np.exp(- (times - latency - (sigma / 4.) * rng.rand(1)) ** 2 /
(2 * (sigma ** 2)))
return 1e-9 * sinusoid * gf
times = np.arange(150, dtype=np.float64) / info['sfreq']
duration = 0.03
rng = np.random.RandomState(7)
source_simulator = mne.simulation.SourceSimulator(src, tstep=tstep)
for region_id, region_name in enumerate(region_names, 1):
events_tmp = events[np.where(events[:, 2] == region_id)[0], :]
for i in range(2):
label_name = activations[region_name][i][0]
label_tmp = mne.read_labels_from_annot(subject, annot,
subjects_dir=subjects_dir,
regexp=label_name,
verbose=False)
label_tmp = label_tmp[0]
amplitude_tmp = activations[region_name][i][1]
if region_name.split('/')[1][0] == label_tmp.hemi[0]:
latency_tmp = 0.115
else:
latency_tmp = 0.1
wf_tmp = data_fun(times, latency_tmp, duration)
source_simulator.add_data(label_tmp,
amplitude_tmp * wf_tmp,
events_tmp)
# To obtain a SourceEstimate object, we need to use `get_stc()` method of
# SourceSimulator class.
stc_data = source_simulator.get_stc()
raw_sim = mne.simulation.simulate_raw(info, source_simulator, forward=fwd)
raw_sim.set_eeg_reference(projection=True)
mne.simulation.add_noise(raw_sim, cov=noise_cov, random_state=0)
mne.simulation.add_eog(raw_sim, random_state=0)
mne.simulation.add_ecg(raw_sim, random_state=0)
# Plot original and simulated raw data.
raw_sim.plot(title='Simulated raw data')
epochs = mne.Epochs(raw_sim, events, event_id, tmin=-0.2, tmax=0.3,
baseline=(None, 0))
evoked_aud_left = epochs['auditory/left'].average()
evoked_vis_right = epochs['visual/right'].average()
# Visualize the evoked data
evoked_aud_left.plot(spatial_colors=True)
evoked_vis_right.plot(spatial_colors=True)
method, lambda2 = 'dSPM', 1. / 9.
inv = mne.minimum_norm.make_inverse_operator(epochs.info, fwd, noise_cov)
stc_aud = mne.minimum_norm.apply_inverse(
evoked_aud_left, inv, lambda2, method)
stc_vis = mne.minimum_norm.apply_inverse(
evoked_vis_right, inv, lambda2, method)
stc_diff = stc_aud - stc_vis
brain = stc_diff.plot(subjects_dir=subjects_dir, initial_time=0.1,
hemi='split', views=['lat', 'med'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In order to simulate source time courses, labels of desired active regions
Step3: Create simulated source activity
Step4: Here,
Step5: Simulate raw data
Step6: Extract epochs and compute evoked responsses
Step7: Reconstruct simulated source time courses using dSPM inverse operator
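The simulated source activity in Step 3 is a sinusoid under a Gaussian envelope. A deterministic pure-Python variant of the notebook's `data_fun` is sketched below; the random jitter of the envelope centre is dropped here for clarity, so this is an approximation, not the exact waveform used above:

```python
import math

def waveform(t, latency, duration, f=15.0, amp=1e-9):
    # beta-band sinusoid modulated by a Gaussian window centred on the latency
    sigma = 0.375 * duration
    envelope = math.exp(-((t - latency) ** 2) / (2 * sigma ** 2))
    return amp * math.sin(2 * math.pi * f * (t - latency)) * envelope

sample = waveform(0.11, 0.1, 0.03)   # small, nonzero value near the latency
```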
|
2,747
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD-3-Clause
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import tfr_morlet
from mne.stats import permutation_cluster_1samp_test
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
tmin, tmax, event_id = -0.3, 0.6, 1
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.find_events(raw, stim_channel='STI 014')
include = []
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
stim=False, include=include, exclude='bads')
# Load condition 1
event_id = 1
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), preload=True,
reject=dict(grad=4000e-13, eog=150e-6))
# just use right temporal sensors for speed
epochs.pick_channels(mne.read_vectorview_selection('Right-temporal'))
evoked = epochs.average()
# Factor to down-sample the temporal dimension of the TFR computed by
# tfr_morlet. Decimation occurs after frequency decomposition and can
# be used to reduce memory usage (and possibly computational time of downstream
# operations such as nonparametric statistics) if you don't need high
# spectrotemporal resolution.
decim = 5
freqs = np.arange(8, 40, 2) # define frequencies of interest
sfreq = raw.info['sfreq'] # sampling in Hz
tfr_epochs = tfr_morlet(epochs, freqs, n_cycles=4., decim=decim,
average=False, return_itc=False, n_jobs=1)
# Baseline power
tfr_epochs.apply_baseline(mode='logratio', baseline=(-.100, 0))
# Crop in time to keep only what is between 0 and 400 ms
evoked.crop(-0.1, 0.4)
tfr_epochs.crop(-0.1, 0.4)
epochs_power = tfr_epochs.data
sensor_adjacency, ch_names = mne.channels.find_ch_adjacency(
tfr_epochs.info, 'grad')
# Subselect the channels we are actually using
use_idx = [ch_names.index(ch_name.replace(' ', ''))
for ch_name in tfr_epochs.ch_names]
sensor_adjacency = sensor_adjacency[use_idx][:, use_idx]
assert sensor_adjacency.shape == \
(len(tfr_epochs.ch_names), len(tfr_epochs.ch_names))
assert epochs_power.data.shape == (
len(epochs), len(tfr_epochs.ch_names),
len(tfr_epochs.freqs), len(tfr_epochs.times))
adjacency = mne.stats.combine_adjacency(
sensor_adjacency, len(tfr_epochs.freqs), len(tfr_epochs.times))
# our adjacency is square with each dim matching the data size
assert adjacency.shape[0] == adjacency.shape[1] == \
len(tfr_epochs.ch_names) * len(tfr_epochs.freqs) * len(tfr_epochs.times)
threshold = 3.
n_permutations = 50 # Warning: 50 is way too small for real-world analysis.
T_obs, clusters, cluster_p_values, H0 = \
permutation_cluster_1samp_test(epochs_power, n_permutations=n_permutations,
threshold=threshold, tail=0,
adjacency=adjacency,
out_type='mask', verbose=True)
evoked_data = evoked.data
times = 1e3 * evoked.times
plt.figure()
plt.subplots_adjust(0.12, 0.08, 0.96, 0.94, 0.2, 0.43)
# Create new stats image with only significant clusters
T_obs_plot = np.nan * np.ones_like(T_obs)
for c, p_val in zip(clusters, cluster_p_values):
if p_val <= 0.05:
T_obs_plot[c] = T_obs[c]
# Just plot one channel's data
ch_idx, f_idx, t_idx = np.unravel_index(
np.nanargmax(np.abs(T_obs_plot)), epochs_power.shape[1:])
# ch_idx = tfr_epochs.ch_names.index('MEG 1332') # to show a specific one
vmax = np.max(np.abs(T_obs))
vmin = -vmax
plt.subplot(2, 1, 1)
plt.imshow(T_obs[ch_idx], cmap=plt.cm.gray,
extent=[times[0], times[-1], freqs[0], freqs[-1]],
aspect='auto', origin='lower', vmin=vmin, vmax=vmax)
plt.imshow(T_obs_plot[ch_idx], cmap=plt.cm.RdBu_r,
extent=[times[0], times[-1], freqs[0], freqs[-1]],
aspect='auto', origin='lower', vmin=vmin, vmax=vmax)
plt.colorbar()
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title(f'Induced power ({tfr_epochs.ch_names[ch_idx]})')
ax2 = plt.subplot(2, 1, 2)
evoked.plot(axes=[ax2], time_unit='s')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Define adjacency for statistics
Step3: Compute statistic
Step4: View time-frequency plots
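The core of the one-sample permutation test, sign-flipping under the null hypothesis that the mean is zero, can be sketched for a single time-frequency "pixel" in pure Python. This is illustrative only; the MNE routine additionally forms clusters over the sensor/frequency/time adjacency structure:

```python
import random
import statistics

def sign_flip_p(samples, n_perm=500, seed=42):
    # p-value: fraction of sign-flipped resamples whose |mean| reaches the observed |mean|
    rng = random.Random(seed)
    observed = abs(statistics.mean(samples))
    hits = 0
    for _ in range(n_perm):
        flipped = [x if rng.random() < 0.5 else -x for x in samples]
        if abs(statistics.mean(flipped)) >= observed:
            hits += 1
    return hits / n_perm
```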
|
2,748
|
<ASSISTANT_TASK:>
Python Code:
#On windows
#import findspark
#findspark.init(spark_home="C:/Users/me/software/spark-1.6.3-bin-hadoop2.6/")
import pyspark
import numpy as np # we'll be using numpy for some numeric operations
sc = pyspark.SparkContext(master="local[*]", appName="tour")
sc.stop()
# To try the SparkContext with other masters first stop the one that is already running
# sc.stop()
f = lambda line: 'Spark' in line
f("we are learning park")
def f(line):
return 'Spark' in line
f("we are learning Spark")
my_list = [0,1,2,3,4,5,6,7,8,9]
data = sc.parallelize(my_list) # create RDD from Python collection
data_squared = data.map(lambda num: num ** 2) # transformation
data_squared.collect()
data = sc.parallelize([0,1,2,3,4,5,6,7,8,9]) # creation of RDD
data_squared = data.map(lambda num: num ** 2) # transformation
data_squared.collect() # action
text = sc.textFile("myfile.txt",1) # load data
text = sc.textFile("myfile.txt") # load data
# count only lines that mention "Spark"
spark_lines = text.filter(lambda line: 'Spark' in line)
spark_lines.collect()
def is_prime(num):
"""Return True if num is prime, False otherwise."""
if num < 1 or num % 1 != 0:
raise Exception("invalid argument")
for d in range(2, int(np.sqrt(num) + 1)):
if num % d == 0:
return False
return True
numbersRDD = sc.parallelize(range(1, 1000000)) # creation of RDD
primesRDD = numbersRDD.filter(is_prime) # transformation
# primesRDD has not been materialized until this point
primes = primesRDD.collect() # action
type(primesRDD)
print(primes[0:15])
print(primesRDD.take(15))
primesRDD.persist()
primesRDD.persist() # we're asking Spark to keep this RDD in memory. Note that cache() is equivalent to persist() with the default memory-only storage level; persist() additionally lets you choose other storage levels
print("Found", primesRDD.count(), "prime numbers") # first action -- causes primesRDD to be materialized
print("Here are some of them:")
print(primesRDD.take(20)) # second action - RDD is already in memory
primesRDD.unpersist()
%%timeit
primes = primesRDD.collect()
primesRDD.persist()
%%timeit
primes = primesRDD.collect()
data = sc.parallelize(range(10))
squares = data.map(lambda x: x**2)
squares.collect()
def f(x):
"""Return the square of a number."""
return x**2
data = sc.parallelize(range(10))
squares = data.map(f)
squares.collect()
class SearchFunctions(object):
def __init__(self, query):
self.query = query
def is_match(self, s):
return self.query in s
def get_matches_in_rdd_v1(self, rdd):
return rdd.filter(self.is_match) # the function is an object method
def get_matches_in_rdd_v2(self, rdd):
return rdd.filter(lambda x: self.query in x) # the function references an object field
class SearchFunctions(object):
def __init__(self, query):
self.query = query
def is_match(self, s):
return self.query in s
def get_matches_in_rdd(self, rdd):
query = self.query
return rdd.filter(lambda x: query in x)
phrases = sc.parallelize(["hello world", "terve terve", "how are you"])
words_map = phrases.map(lambda phrase: phrase.split(" "))
words_map.collect() # This returns a list of lists
phrases = sc.parallelize(["hello world", "terve terve", "how are you"])
words_flatmap = phrases.flatMap(lambda phrase: phrase.split(" "))
words_flatmap.collect() # This returns a single list with the combined elements of the lists
# We can use the flatmap to make a word count
words_flatmap.map(
lambda x: (x,1)
).reduceByKey(
lambda x,y: x+y
).collect()
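The flatMap, map and reduceByKey pipeline above is a distributed word count. As a sanity check, no Spark required, here is a plain-Python sketch of the same computation using `collections.Counter`:

```python
from collections import Counter

phrases = ["hello world", "terve terve", "how are you"]

# flatMap: split each phrase and flatten into one list of words
words = [w for phrase in phrases for w in phrase.split(" ")]

# map + reduceByKey: pair each word with 1, then sum the counts per key
word_counts = Counter(words)
```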
oneRDD = sc.parallelize([1, 1, 1, 2, 3, 3, 4, 4])
oneRDD.persist()
otherRDD = sc.parallelize([1, 4, 4, 7])
otherRDD.persist()
unionRDD = oneRDD.union(otherRDD)
unionRDD.persist()
oneRDD.subtract(otherRDD).collect()
oneRDD.distinct().collect()
oneRDD.intersection(otherRDD).collect() # removes duplicates
oneRDD.cartesian(otherRDD).collect()[:5]
np.sum([1,43,62,23,52])
data = sc.parallelize([1,43,62,23,52])
data.reduce(lambda x, y: x + y)
data.reduce(lambda x, y: x * y)
data.reduce(lambda x, y: x**2 + y**2) # this does NOT compute the sum of squares of RDD elements
((((1 ** 2 + 43 ** 2) ** 2 + 62 ** 2) **2 + 23 ** 2) **2 + 52 **2)
data.reduce(lambda x, y: np.sqrt(x**2 + y**2)) ** 2
np.sum(np.array([1,43,62,23,52]) ** 2)
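The function passed to `reduce` must be associative and commutative, which is why `lambda x, y: x**2 + y**2` fails: it squares the running total at every step, not just the elements. A plain `functools.reduce` sketch makes the pitfall concrete:

```python
from functools import reduce

data = [1, 43, 62, 23, 52]

# Naive attempt: squares the running total at every step
wrong = reduce(lambda x, y: x**2 + y**2, data)

# Correct: square first (a "map" step), then reduce with a plain sum
right = reduce(lambda x, y: x + y, (v**2 for v in data))
```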
help(data.aggregate)
def seq(x,y):
return x[0] + y, x[1] + 1
def comb(x,y):
print(x,y,"comb")
return x[0] + y[0], x[1] + y[1]
data = sc.parallelize([1,43,62,23,52], 1) # Try different levels of paralellism
aggr = data.aggregate(zeroValue = (0,0),
seqOp = seq,
combOp = comb)
aggr
aggr[0] / aggr[1] # average value of RDD elements
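For reference, `aggregate(zeroValue, seqOp, combOp)` can be emulated in plain Python by folding `seqOp` within each simulated partition and then merging the partial accumulators with `combOp`. A sketch, assuming a two-partition split:

```python
from functools import reduce

data = [1, 43, 62, 23, 52]
partitions = [data[:2], data[2:]]    # pretend the RDD has two partitions

def seq(acc, value):                 # runs within a partition
    return acc[0] + value, acc[1] + 1

def comb(acc1, acc2):                # merges per-partition accumulators
    return acc1[0] + acc2[0], acc1[1] + acc2[1]

per_part = [reduce(seq, part, (0, 0)) for part in partitions]
total, count = reduce(comb, per_part, (0, 0))
average = total / count
```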
help(pairRDD.reduceByKey)
pairRDD = sc.parallelize([('$APPL', 100.64),
('$APPL', 100.52),
('$GOOG', 706.2),
('$AMZN', 552.32),
('$AMZN', 552.32) ])
pairRDD.reduceByKey(lambda x,y: x + y).collect() # sum of values per key
help(pairRDD.combineByKey)
pairRDD = sc.parallelize([ ('$APPL', 100.64), ('$GOOG', 706.2), ('$AMZN', 552.32), ('$APPL', 100.52), ('$AMZN', 552.32) ])
aggr = pairRDD.combineByKey(createCombiner = lambda x: (x, 1),
mergeValue = lambda x,y: (x[0] + y, x[1] + 1),
mergeCombiners = lambda x,y: (x[0] + y[0], x[1] + y[1]))
aggr.collect()
aggr.map(lambda x: (x[0], x[1][0]/x[1][1])).collect() # average value per key
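The `combineByKey` per-key average can likewise be checked against a plain-Python accumulator keyed by ticker (a sketch of the same logic, not Spark code):

```python
pairs = [('$APPL', 100.64), ('$GOOG', 706.2), ('$AMZN', 552.32),
         ('$APPL', 100.52), ('$AMZN', 552.32)]

acc = {}  # key -> (running_sum, count), mirroring the (sum, 1) combiner
for key, value in pairs:
    s, n = acc.get(key, (0.0, 0))    # createCombiner / mergeValue in one step
    acc[key] = (s + value, n + 1)

averages = {key: s / n for key, (s, n) in acc.items()}
```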
help(course_a.join)
course_a = sc.parallelize([ ("Antti", 8), ("Tuukka", 10), ("Leena", 9)])
course_b = sc.parallelize([ ("Leena", 10), ("Tuukka", 10)])
result = course_a.join(course_b)
result.collect()
text = sc.textFile("myfile.txt")
long_lines = sc.accumulator(0) # create accumulator
def line_len(line):
global long_lines # to reference an accumulator, declare it as global variable
length = len(line)
if length > 30:
long_lines += 1 # update the accumulator
return length
llengthRDD = text.map(line_len)
llengthRDD.count()
long_lines.value # this is how we obtain the value of the accumulator in the driver program
help(long_lines)
long_lines.value # this is how we obtain the value of the accumulator in the driver program
text = sc.textFile("myfile.txt")
long_lines_2 = sc.accumulator(0)
def line_len(line):
global long_lines_2
length = len(line)
if length > 30:
long_lines_2 += 1
text.foreach(line_len)
long_lines_2.value
def load_address_table():
return {"Anu": "Chem. A143", "Karmen": "VTT, 74", "Michael": "OIH, B253.2",
"Anwar": "T, B103", "Orestis": "T, A341", "Darshan": "T, A325"}
address_table = sc.broadcast(load_address_table())
def find_address(name):
res = None
if name in address_table.value:
res = address_table.value[name]
return res
people = sc.parallelize(["Anwar", "Michael", "Orestis", "Darshan"])
pairRDD = people.map(lambda name: (name, find_address(name))) # first operation that uses the address table
print(pairRDD.collectAsMap())
other_people = sc.parallelize(["Karmen", "Michael", "Anu"])
pairRDD = other_people.map(lambda name: (name, find_address(name))) # second operation that uses the address table
print(pairRDD.collectAsMap())
sc.stop()
import random
NUM_SAMPLES = 10000000
def inside(p):
x, y = random.random(), random.random()
return x*x + y*y < 1
count = sc.parallelize(range(0, NUM_SAMPLES)).filter(inside).count()
print("Pi is roughly %f" % (4.0 * count / NUM_SAMPLES))
# Details of the algorithm can be found here: http://www.cs.princeton.edu/~chazelle/courses/BIB/pagerank.htm
iterations = 5
def computeContribs(urls, rank):
"""Calculates URL contributions to the rank of other URLs."""
num_urls = len(urls)
for url in urls:
yield (url, rank / num_urls)
def parseNeighbors(urls):
"""Parses a urls pair string into urls pair."""
parts = urls.split(',')
return parts[0], parts[1]
# Read the lines
lines = sc.textFile("higgs-mention_network.txt").persist()
lines.collect()
# Loads all URLs from input file and initialize their neighbors.
links = lines.map(lambda urls: parseNeighbors(urls)).distinct().groupByKey().cache()
links.collect()
# Loads all URLs with other URL(s) link to from input file and initialize ranks of them to one.
ranks = links.map(lambda url_neighbors: (url_neighbors[0], 1.0))
ranks.collect()
# Calculates and updates URL ranks continuously using PageRank algorithm.
for iteration in range(iterations):
# Calculates URL contributions to the rank of other URLs.
contribs = links.join(ranks).flatMap(
lambda url_urls_rank: computeContribs(url_urls_rank[1][0], url_urls_rank[1][1]))
# Re-calculates URL ranks based on neighbor contributions.
ranks = contribs.reduceByKey(lambda x,y: x+y).mapValues(lambda rank: rank * 0.85 + 0.15)
# Collects all URL ranks and dump them to console.
for (link, rank) in ranks.collect():
print("%s has rank: %s." % (link, rank))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: local
Step2: Lambda functions
Step3: Creating RDDS
Step4: RDD operations
Step5:
Step7: Lazy evaluation
Step8: Persistence
Step9: If we do not need primesRDD in memory anymore, we can tell Spark to discard it.
Step10: How long does it take to collect primesRDD? Let's time the operation.
Step11: When I ran the above on my laptop, it took about more than 10s. That's because Spark had to evaluate primesRDD before performing collect on it.
Step13: When I ran the above on my laptop, it took about 1s to collect primesRDD - that's almost $10$ times faster compared to when the RDD had to be recomputed from scratch.
Step14: Be careful, though
Step15: The following is a better way to implement the two methods above (get_matches_in_rdd_v1 and get_matches_in_rdd_v2), if we want to avoid sending a SearchFunctions object for computation to the cluster.
Step16: map and flatmap
Step17: (Pseudo) set operations
Step18: reduce
Step19: aggregate
Step20: reduceByKey
Step21: From https
Step22: (inner) join
Step23: Accumulators
Step24: Warning
Step25: Broadcast variable
Step26: Stopping
Step27: Example
Step30: Example
|
2,749
|
<ASSISTANT_TASK:>
Python Code:
plt.plot(pop_x, pop_y, 'o')
plt.xlabel('Year')
plt.ylabel('Population [Millions]')
plt.show()
graphene_used = np.concatenate( (np.ones(5), np.zeros(5)) )
temperature = np.concatenate( (T, T) )
intercept = np.ones(10)
x_mat = np.column_stack( (intercept, temperature, graphene_used) )
print(x_mat)
#Ignore these
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
import scipy.stats as ss
pop_x = np.arange(1998, 2016)
pop_y = 275.9 * np.exp((pop_x - 1998) * 0.005) + ss.norm.rvs(size=len(pop_x)) * 0.2
T = np.arange(280, 280 + 5 * 5, 5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example 1 Answer
Step2: Example 3
|
2,750
|
<ASSISTANT_TASK:>
Python Code:
!pip install -qq optax
import numpy as np
import jax
from jax import numpy as jnp
from jax import grad, jit, vmap, random
try:
import optax
except ModuleNotFoundError:
%pip install -qq optax
import optax
try:
import tensorflow_datasets as tfds
except ModuleNotFoundError:
%pip install -qq tensorflow tensorflow_datasets
import tensorflow_datasets as tfds
from sklearn.linear_model import LogisticRegression
from matplotlib import pyplot as plt
import matplotlib.gridspec as gridspec
def plot_digit(img, label=None, ax=None):
"""Plot MNIST Digit."""
if ax is None:
fig, ax = plt.subplots()
if img.ndim == 1:
img = img.reshape(28, 28)
ax.imshow(img.squeeze(), cmap="Greys_r")
ax.axis("off")
if label is not None:
ax.set_title(f"Label:{label}", fontsize=10, pad=1.3)
return ax
def grid_plot_imgs(imgs, dim=None, axs=None, labels=None, figsize=(5, 5)):
"""Plot a series of digits in a grid."""
if dim is None:
if axs is None:
n_imgs = len(imgs)
dim = np.sqrt(n_imgs)
if not dim.is_integer():
raise ValueError("If dim not specified `len(imgs)` must be a square number.")
else:
dim = int(dim)
else:
dim = len(axs)
if axs is None:
gridspec_kw = {"hspace": 0.05, "wspace": 0.05}
if labels is not None:
gridspec_kw["hspace"] = 0.25
fig, axs = plt.subplots(dim, dim, figsize=figsize, gridspec_kw=gridspec_kw)
for n in range(dim**2):
img = imgs[n]
row_idx = n // dim
col_idx = n % dim
axi = axs[row_idx, col_idx]
if labels is not None:
ax_label = labels[n]
else:
ax_label = None
plot_digit(img, ax=axi, label=ax_label)
return axs
def gridspec_plot_imgs(imgs, gs_base, title=None, dim=5):
"""Plot digits into a gridspec subgrid.
Args:
imgs - images to plot.
gs_base - from `gridspec.GridSpec`
title - subgrid title.
Note that, in general, this type of plotting is considerably simpler
with `fig.subfigures()`; however, that requires matplotlib >=3.4,
which has some conflicts with the default colab setup as of the time of
writing.
"""
gs0 = gs_base.subgridspec(dim, dim)
for i in range(dim):
for j in range(dim):
ax = fig.add_subplot(gs0[i, j])
plot_digit(imgs[i * dim + j], ax=ax)
if (i == 0) and (j == 2):
if title is not None:
ax.set_title(title)
def initialise_params(N_vis, N_hid, key):
"""Initialise the parameters.
Args:
N_vis - number of visible units.
N_hid - number of hidden units.
key - PRNG key.
Returns:
params - (W, a, b), Weights and biases for network.
"""
W_key, a_key, b_key = random.split(key, 3)
W = random.normal(W_key, (N_vis, N_hid)) * 0.01
a = random.normal(a_key, (N_hid,)) * 0.01
b = random.normal(b_key, (N_vis,)) * 0.01
return (W, a, b)
@jit
def sample_hidden(vis, params, key):
"""Performs the hidden layer sampling, P(h|v;θ).
Args:
vis - state of the visible units.
params - (W, a, b), Weights and biases for network.
key - PRNG key.
Returns:
The probabilities and states of the hidden layer sampling.
"""
W, a, _ = params
activation = jnp.dot(vis, W) + a
hid_probs = jax.nn.sigmoid(activation)
hid_states = random.bernoulli(key, hid_probs).astype("int8")
return hid_probs, hid_states
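The conditional P(h|v) used above is a sigmoid of an affine map followed by Bernoulli draws. A NumPy analogue (a sketch outside JAX, with arbitrarily chosen sizes) shows the shapes involved:

```python
import numpy as np

rng = np.random.default_rng(0)
N_vis, N_hid = 6, 4
W = rng.normal(size=(N_vis, N_hid)) * 0.01
a = rng.normal(size=N_hid) * 0.01
vis = rng.integers(0, 2, size=N_vis).astype(float)

# Same recipe as sample_hidden: affine map, sigmoid, Bernoulli draw
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
hid_probs = sigmoid(vis @ W + a)
hid_states = (rng.random(N_hid) < hid_probs).astype(np.int8)
```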
@jit
def sample_visible(hid, params, key):
"""Performs the visible layer sampling, P(v|h;θ).
Args:
hid - state of the hidden units.
params - (W, a, b), Weights and biases for network.
key - PRNG key.
Returns:
The probabilities and states of the visible layer sampling.
"""
W, _, b = params
activation = jnp.dot(hid, W.T) + b
vis_probs = jax.nn.sigmoid(activation)
vis_states = random.bernoulli(key, vis_probs).astype("int8")
return vis_probs, vis_states
@jit
def CD1(vis_sample, params, key):
"""The one-step contrastive divergence algorithm.
Can handle batches of training data.
Args:
vis_sample - sample of visible states from data.
params - (W, a, b), Weights and biases for network.
key - PRNG key.
Returns:
An estimate of the gradient of the log likelihood with respect
to the parameters.
"""
key, subkey = random.split(key)
hid_prob0, hid_state0 = sample_hidden(vis_sample, params, subkey)
key, subkey = random.split(key)
vis_prob1, vis_state1 = sample_visible(hid_state0, params, subkey)
key, subkey = random.split(key)
# It would be more efficient here to not actual sample the unused states.
hid_prob1, _ = sample_hidden(vis_state1, params, subkey)
delta_W = jnp.einsum("...j,...k->...jk", vis_sample, hid_prob0) - jnp.einsum(
"...j,...k->...jk", vis_state1, hid_prob1
)
delta_a = hid_prob0 - hid_prob1
delta_b = vis_sample - vis_state1
return (delta_W, delta_a, delta_b)
@jit
def reconstruct_vis(vis_sample, params, key):
"""Reconstruct the visible state from a conditional sample of the hidden
units.
Returns:
Reconstruction probabilities.
"""
subkey1, subkey2 = random.split(key, 2)
_, hid_state = sample_hidden(vis_sample, params, subkey1)
vis_recon_prob, _ = sample_visible(hid_state, params, subkey2)
return vis_recon_prob
@jit
def reconstruction_loss(vis_samples, params, key):
Calculate the L2 loss between a batch of visible samples and their
reconstructions.
Note this is a heuristic for evaluating training progress, not an objective
function.
reconstructed_samples = reconstruct_vis(vis_samples, params, key)
loss = optax.l2_loss(vis_samples.astype("float32"), reconstructed_samples).mean()
return loss
@jit
def vis_free_energy(vis_state, params):
"""Calculate the free energy of a visible state.
The free energy of a visible state is defined so that exp(-F(v)) equals
the sum of exp(-E(v, h)) over all configurations of the total state
(hidden + visible) which contain that visible state.
Args:
vis_state - state of the visible units.
params - (W, a, b), Weights and biases for network.
Returns:
The free energy of the visible state.
"""
W, a, b = params
activation = jnp.dot(vis_state, W) + a
return -jnp.dot(vis_state, b) - jnp.sum(jax.nn.softplus(activation))
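The softplus form of the free energy can be verified by brute force: for a tiny RBM, exp(-F(v)) must equal the sum of exp(-E(v, h)) over all 2^N_hid hidden configurations. A NumPy sketch (sizes and the test state chosen arbitrarily):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
N_vis, N_hid = 4, 3
W = rng.normal(size=(N_vis, N_hid)) * 0.1
a = rng.normal(size=N_hid) * 0.1
b = rng.normal(size=N_vis) * 0.1
v = np.array([1, 0, 1, 1], dtype=float)

# Closed form used above: F(v) = -v.b - sum_k softplus(a_k + (v W)_k)
softplus = lambda x: np.log1p(np.exp(x))
free_energy = -v @ b - softplus(v @ W + a).sum()

# Brute force: marginalise the Boltzmann weight over all hidden configurations
def energy(v, h):
    return -v @ b - h @ a - v @ W @ h

Z_v = sum(np.exp(-energy(v, np.array(h, dtype=float)))
          for h in itertools.product([0, 1], repeat=N_hid))
brute_free_energy = -np.log(Z_v)
```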
@jit
def free_energy_gap(vis_train_samples, vis_test_samples, params):
"""Calculate the average difference in free energies between train and test
data.
The free energy gap can be used to evaluate overfitting. If the model
starts to overfit the training data the free energy gap will start to
become increasingly negative.
Args:
vis_train_samples - samples of visible states from training data.
vis_test_samples - samples of visible states from validation data.
params - (W, a, b), Weights and biases for network.
Returns:
The difference between the mean train and test free energies.
"""
train_FE = vmap(vis_free_energy, (0, None))(vis_train_samples, params)
test_FE = vmap(vis_free_energy, (0, None))(vis_test_samples, params)
return train_FE.mean() - test_FE.mean()
@jit
def evaluate_params(train_samples, test_samples, params, key):
"""Calculate performance measures of parameters."""
train_key, test_key = random.split(key)
train_recon_loss = reconstruction_loss(train_samples, params, train_key)
test_recon_loss = reconstruction_loss(test_samples, params, test_key)
FE_gap = free_energy_gap(train_samples, test_samples, params)
return train_recon_loss, test_recon_loss, FE_gap
def preprocess_images(images):
images = images.reshape((len(images), -1))
return jnp.array(images > (255 / 2), dtype="float32")
def load_mnist(split):
images, labels = tfds.as_numpy(tfds.load("mnist", split=split, batch_size=-1, as_supervised=True))
procced_images = preprocess_images(images)
return procced_images, labels
mnist_train_imgs, mnist_train_labels = load_mnist("train")
mnist_test_imgs, mnist_test_labels = load_mnist("test")
def train_RBM(params, train_data, optimizer, key, eval_samples, n_epochs=5, batch_size=20):
"""Optimize parameters of RBM using the CD1 algorithm."""
@jit
def batch_step(params, opt_state, batch, key):
grads = jax.tree_map(lambda x: x.mean(0), CD1(batch, params, key))
updates, opt_state = optimizer.update(grads, opt_state, params)
params = jax.tree_map(lambda p, u: p - u, params, updates)
return params, opt_state
opt_state = optimizer.init(params)
metric_list = []
param_list = [params]
n_batches = len(train_data) // batch_size
for _ in range(n_epochs):
key, subkey = random.split(key)
perms = random.permutation(subkey, len(train_data))
perms = perms[: batch_size * n_batches] # Skip incomplete batch
perms = perms.reshape((n_batches, -1))
for n, perm in enumerate(perms):
batch = train_data[perm, ...]
key, subkey = random.split(key)
params, opt_state = batch_step(params, opt_state, batch, subkey)
if n % 200 == 0:
key, eval_key = random.split(key)
batch_metrics = evaluate_params(*eval_samples, params, eval_key)
metric_list.append(batch_metrics)
param_list.append(params)
return params, metric_list, param_list
# In practice you can use many more than 100 hidden units, up to 1000-2000.
# A small number is chosen here so that training is fast.
N_vis, N_hid = mnist_train_imgs.shape[-1], 100
key = random.PRNGKey(111)
key, subkey = random.split(key)
init_params = initialise_params(N_vis, N_hid, subkey)
optimizer = optax.sgd(learning_rate=0.05, momentum=0.9)
eval_samples = (mnist_train_imgs[:1000], mnist_test_imgs[:1000])
params, metric_list, param_list = train_RBM(init_params, mnist_train_imgs, optimizer, key, eval_samples)
train_recon_loss, test_recon_loss, FE_gap = list(zip(*metric_list))
epoch_progress = np.linspace(0, 5, len(train_recon_loss))
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))
ax1.plot(epoch_progress, train_recon_loss, label="Train Reconstruction Loss")
ax1.plot(epoch_progress, test_recon_loss, label="Test Reconstruction Loss")
ax1.legend()
ax1.set_xlabel("Epoch")
ax1.set_ylabel("Loss")
ax2.plot(epoch_progress, FE_gap)
ax2.set_xlabel("Epoch")
ax2.set_ylabel("Free Energy Gap");
vis_data_samples = mnist_test_imgs[:25]
fig = plt.figure(figsize=(15, 5))
gs_bases = gridspec.GridSpec(1, 3, figure=fig)
recon_params = (param_list[0], param_list[1], param_list[-1])
subfig_titles = ("Initial", "Epoch 1", "Epoch 5")
key, subkey = random.split(key)
for gs_base, epoch_param, sf_title in zip(gs_bases, recon_params, subfig_titles):
# Use the same subkey for all parameter sets.
vis_recon_probs = reconstruct_vis(vis_data_samples, epoch_param, subkey)
title = f"{sf_title} Parameters"
gridspec_plot_imgs(vis_recon_probs, gs_base, title)
fig.suptitle("Reconstruction Samples", fontsize=20);
class RBM_LogReg:
"""Perform logistic regression on samples transformed to RBM hidden
representation with `params`.
"""
def __init__(self, params):
self.params = params
self.LR = LogisticRegression(solver="saga", tol=0.1)
def _transform(self, samples):
W, a, _ = self.params
activation = jnp.dot(samples, W) + a
hidden_probs = jax.nn.sigmoid(activation)
return hidden_probs
def fit(self, train_samples, train_labels):
transformed_samples = self._transform(train_samples)
self.LR.fit(transformed_samples, train_labels)
def score(self, test_samples, test_labels):
transformed_samples = self._transform(test_samples)
return self.LR.score(transformed_samples, test_labels)
def predict(self, test_samples):
transformed_samples = self._transform(test_samples)
return self.LR.predict(transformed_samples)
def reconstruct_samples(self, samples, key):
return reconstruct_vis(samples, self.params, key)
train_data = (mnist_train_imgs, mnist_train_labels)
test_data = (mnist_test_imgs, mnist_test_labels)
# Train LR classifier on the raw pixel data for comparison.
LR_raw = LogisticRegression(solver="saga", tol=0.1)
LR_raw.fit(*train_data)
# LR classifier trained on hidden representations after 1 Epoch of training.
rbm_lr1 = RBM_LogReg(param_list[1])
rbm_lr1.fit(*train_data)
# LR classifier trained on hidden representations after 5 Epochs of training.
rbm_lr5 = RBM_LogReg(param_list[-1])
rbm_lr5.fit(*train_data)
print("Logistic Regression Accuracy:")
print(f"\tRaw Data: {LR_raw.score(*test_data)}")
print(f"\tHidden Units Epoch-1: {rbm_lr1.score(*test_data)}")
print(f"\tHidden Units Epoch-5: {rbm_lr5.score(*test_data)}")
class1_correct = rbm_lr1.predict(mnist_test_imgs) == mnist_test_labels
class5_correct = rbm_lr5.predict(mnist_test_imgs) == mnist_test_labels
diff_class_img_idxs = np.where(class5_correct & ~class1_correct)[0]
print(f"There are {len(diff_class_img_idxs)} images which were correctly labelled after >1 Epochs of training.")
key = random.PRNGKey(100)
# Try out different subsets of img indices.
idx_list = diff_class_img_idxs[100:]
n_rows = 5
fig, axs = plt.subplots(n_rows, 3, figsize=(9, 20))
for img_idx, ax_row in zip(idx_list, axs):
ax1, ax2, ax3 = ax_row
img = mnist_test_imgs[img_idx]
plot_digit(img, ax=ax1)
true_label = mnist_test_labels[img_idx]
ax1.set_title(f"Raw Image\nTrue Label: {true_label}")
epoch1_recon = rbm_lr1.reconstruct_samples(img, key)
plot_digit(epoch1_recon, ax=ax2)
hid1_label = rbm_lr1.predict(img[None, :])[0]
ax2.set_title(f"Epoch 1 Reconstruction\nPredicted Label: {hid1_label} (incorrect)")
epoch5_recon = rbm_lr5.reconstruct_samples(img, key)
hid5_label = rbm_lr5.predict(img[None, :])[0]
plot_digit(epoch5_recon, ax=ax3)
ax3.set_title(f"Epoch 5 Reconstruction\nPredicted Label: {hid5_label} (correct)");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Plotting functions
Step14: Restricted Boltzmann Machines
Step15: Load MNIST
Step17: Training with optax
Step18: Evaluating Training
Step20: Classification
Step21: The increase in accuracy here is modest because of the small number of hidden units. When 1000 hidden units are used the Epoch-5 accuracy approaches 97.5%.
Step22: We can explore the quality of the learned hidden tranformation by inspecting reconstructions of these test images.
|
2,751
|
<ASSISTANT_TASK:>
Python Code:
import numpy
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
# fix random seed for reproducibility
numpy.random.seed(7)
# load the dataset but only keep the top n words, zero the rest
top_words = 5000
(X_train, y_train), (X_test, y_test) = imdb.load_data(nb_words=top_words)
# truncate and pad input sequences
max_review_length = 500
X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
X_test = sequence.pad_sequences(X_test, maxlen=max_review_length)
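`pad_sequences` pre-pads short reviews with zeros and pre-truncates long ones (the Keras defaults, as assumed here); a minimal plain-Python sketch of that behaviour:

```python
def pad_sequence(seq, maxlen, value=0):
    # Mirrors the assumed Keras defaults: pre-padding and pre-truncation
    if len(seq) >= maxlen:
        return seq[-maxlen:]          # keep the most recent tokens
    return [value] * (maxlen - len(seq)) + seq

padded = pad_sequence([5, 8, 13], maxlen=5)
truncated = pad_sequence([1, 2, 3, 4, 5, 6], maxlen=4)
```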
# create the model
embedding_vecor_length = 32
model = Sequential()
model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length))
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
%%capture output
model.fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=1, batch_size=64)
output.show()
# step redundant; evaluation on the test set already happens during training
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
from keras.layers import Dropout
model = Sequential()
model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length, dropout=0.2))
model.add(Dropout(0.2))
model.add(LSTM(100))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
%%capture output
model.fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=1, batch_size=64)
output.show()
# Final evaluation of the model
#scores = model.evaluate(X_test, y_test, verbose=0)
#print("Accuracy: %.2f%%" % (scores[1]*100))
model = Sequential()
model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length, dropout=0.2))
model.add(LSTM(100, dropout_W=0.2, dropout_U=0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
%%capture output
model.fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=1, batch_size=64)
output.show()
from keras.layers.convolutional import Convolution1D
from keras.layers.convolutional import MaxPooling1D
model = Sequential()
model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length))
model.add(Convolution1D(nb_filter=32, filter_length=3, border_mode='same', activation='relu'))
model.add(MaxPooling1D(pool_length=2))
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
%%capture output
model.fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=1, batch_size=64)
output.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can see that this simple LSTM with little tuning achieves near state-of-the-art results on the IMDB problem. Importantly, this is a template that you can use to apply LSTM networks to your own sequence classification problems.
Step2: We can see dropout having the desired impact on training with a slightly slower trend in convergence and in this case a lower final accuracy. The model could probably use a few more epochs of training and may achieve a higher skill (try it and see).
Step3: We can see that the LSTM specific dropout has a more pronounced effect on the convergence of the network than the layer-wise dropout. As above, the number of epochs was kept constant and could be increased to see if the skill of the model can be further lifted.
|
2,752
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
ints = np.arange(1,21)
pows = 2**ints
print(pows)
print(pows[9], pows[19])
sum = 0
for i in range(1, 100):
sum += i
if sum > 200:
print(i, sum)
break
from scipy.special import factorial
def fxn(n, k):
'''Computes the number of permutations of n objects in a k-length sequence.
Args:
n: The number of objects
k: The sequence length
Retunrs:
The number of permutations of fixed length.
'''
return factorial(n) / factorial(n - k)
cats = 1
while fxn(n=30, k=cats) < 10**6:
cats += 1
print(cats)
0.75 * 0.25
gsum = 0
n = 0
p = 0.75
while gsum < 0.99:
n += 1
gsum += (1 - p)**(n - 1) * p
print(n, gsum)
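The loop sums the geometric pmf term by term; the closed-form CDF 1 - (1 - p)^n gives the same answer directly:

```python
p = 0.75

def geom_cdf(n, p):
    # P(first success within n trials) = 1 - (1 - p)**n
    return 1 - (1 - p)**n

# Smallest n with cumulative probability above 0.99
n = 1
while geom_cdf(n, p) < 0.99:
    n += 1
```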
from scipy import stats as ss
mu = 27500
sig = 15000
Z = (0 - mu) / sig
print(ss.norm.cdf(Z))
Z = (67000 - mu) / sig
print(1 - ss.norm.cdf(Z))
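The same two tail probabilities follow from the closed-form normal CDF via the error function, with no SciPy needed:

```python
import math

mu, sig = 27500.0, 15000.0

def norm_cdf(x, mu, sig):
    # Standard-normal CDF expressed through the error function
    z = (x - mu) / (sig * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))

p_negative = norm_cdf(0.0, mu, sig)             # P(income < 0)
p_above_67k = 1.0 - norm_cdf(67000.0, mu, sig)  # P(income > 67000)
```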
ss.norm.ppf(0.99, scale=sig, loc=mu)
def annual(P, r=0.05, W=30000):
'''Computes the change in principal after one year
Args:
P: The principal - amount of money at the beginning of the year
r: The rate of return from principal
W: The amount withdrawn
Returns:
The new principal'''
P -= W
P *= (r + 1)
return P
def terminator(P, r=0.05, W=30000, upper_limit=50):
'''Finds the number of years before the principal is exhausted.
Args:
P: The principal - amount of money at the beginning of the year
r: The rate of return from principal
W: The amount withdrawn
upper_limit: The maximum iterations before giving up.
Returns:
The new principal'''
for i in range(upper_limit):
if(P < 0):
break
P = annual(P, r, W)
return i
terminator(250000)
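Because the recurrence P' = (P - W)(1 + r) is affine, the depletion year also has a closed form via the fixed point P* = W(1 + r)/r, below which the principal decays geometrically. A sketch checking the closed form against direct simulation of the same recurrence (the loop-index convention of `terminator` above is assumed to match):

```python
import math

P0, r, W = 250_000.0, 0.05, 30_000.0

# Fixed point of P' = (P - W)(1 + r); below it the principal shrinks
P_star = W * (1 + r) / r

# P_k = P_star + (1 + r)**k * (P0 - P_star); solve P_k < 0 for k
k_closed = math.ceil(math.log(P_star / (P_star - P0)) / math.log(1 + r))

# Compare with a direct simulation of the recurrence
P, k_sim = P0, 0
while P >= 0:
    P = (P - W) * (1 + r)
    k_sim += 1
```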
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
ps = [i / 1000. for i in range(10**5, int(5 * 10**5), 100)]
ys = [terminator(p * 1000) for p in ps]
plt.plot(ps, ys)
plt.xlabel('Thousands of Dollars')
plt.ylabel('Years')
plt.show()
ps = [i / 1000. for i in range(10**5, int(7.5 * 10**5), 100)]
ys = [terminator(p * 1000, upper_limit=1000) for p in ps]
plt.plot(ps, ys)
plt.show()
from scipy import stats as ss
def s_annual(P, r=0.05, W=30000, sig_r=0.03, sig_W=10000):
'''Computes the change in principal after one year with stochastic rate of return and withdrawal amount.
Args:
P: The principal - amount of money at the beginning of the year
r: The rate of return from principal
W: The amount withdrawn
Returns:
The new principal'''
P -= ss.norm.rvs(size=1,scale=sig_W, loc=W)
P *= (ss.norm.rvs(size=1, scale=sig_r, loc=r) + 1)
return P
def s_terminator(P, r=0.05, W=30000, upper_limit=50):
for i in range(upper_limit):
if(P < 0):
break
P = s_annual(P, r, W)
return i
samples = []
for i in range(1000):
samples.append(s_terminator(2.5 * 10**5))
plt.hist(samples)
plt.show()
def s_threshold(P, y):
'''Returns the fraction of times the principal P lasts longer than y'''
success = 0
for i in range(1000):
if s_terminator(P) > y:
success += 1
return success / 1000
s_threshold(2.5 * 10**5, 10)
p = np.linspace(1 * 10**5, 10 * 10**5, 200)
for pi in p:
if s_threshold(pi, 25) > 0.95:
print(pi)
break
import random
import scipy.stats
scipy.stats.geom?
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: $$2^{10} \approx 10^3$$
Step2: 1.3 Answer
Step3: 1.4 Answer
Step4: 2. Watching Youtube with the Geometric Distribution (15 Points)
Step5: 2.4 Answer
Step6: You will return after watching the 4th video
Step7: The assumption is OK, only 3% of our probability is in "impossible" values of negative numbers
Step8: The probability is 0.4%. It appears this is a bad model since income is much more spread than this.
Step9: The top 1% of earners is anyone above $62,395
Step10: 5. The Excellent Retirement Simulator - Stochastic (9 Points)
|
2,753
|
<ASSISTANT_TASK:>
Python Code:
# Execute this cell
import numpy as np
import scipy.stats
from astroML import stats as astroMLstats
data = np.random.random(1000)
# Execute this cell
mean = np.mean(data)
print mean
# Execute this cell. Think about what it is doing.
median = np.median(data)
mask = data>0.75
data[mask] = data[mask]*2
newmedian = np.median(data)
newmean = np.mean(data)
print median,newmedian
print mean,newmean
# Execute this cell
var = np.var(data)
# Execute this cell
std = np.std(data)
# Complete this cell and execute
q25,q50,q75 = # Complete
# Execute this cell
astroMLstats.sigmaG(data)
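`sigmaG` is the rank-based width estimate 0.7413 * (q75 - q25); the constant is the reciprocal of the standard-normal interquartile range, which can be checked with the stdlib `statistics.NormalDist`:

```python
from statistics import NormalDist

std_normal = NormalDist()
# Normalising constant: reciprocal of the standard-normal IQR
C = 1.0 / (std_normal.inv_cdf(0.75) - std_normal.inv_cdf(0.25))

# For any Gaussian, C * (q75 - q25) recovers sigma exactly
dist = NormalDist(mu=5.0, sigma=2.0)
sigmaG = C * (dist.inv_cdf(0.75) - dist.inv_cdf(0.25))
```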
# Execute this cell
mode = scipy.stats.mode(data)
# Execute this cell
modealt = 3*q50 - 2*mean
# Execute this cell
skew = scipy.stats.skew(data)
kurt = scipy.stats.kurtosis(data)
# Excute this cell
print mean, median, var, std, skew, kurt, mode.mode, modealt, q25, q50, q75
# Complete and Execute this cell
ndata = np.random.normal(0, 1, 10000) # a normal distribution with mean=0, sigma=1, sample size 10000
# Compute all the above stats for this distribution
print np.mean(ndata), np.median(ndata), np.var(ndata), np.std(ndata)
print scipy.stats.skew(ndata), scipy.stats.kurtosis(ndata), scipy.stats.mode(ndata).mode
print np.percentile(ndata, [25,50,75])
# Execute this cell
%matplotlib inline
%run code/fig_uniform_distribution.py
# Complete and execute this cell
from scipy import stats
dist = stats.uniform(0, 2) # left edge = 0, width = 2
r = dist.rvs(10) # 10 random draws
print r
p = dist.pdf(1) # pdf evaluated at x=1
print p
# Execute this cell
%run code/fig_gaussian_distribution.py
# Complete and execute this cell
from scipy import stats
dist = stats.norm(0, 1) # Normal distribution with mean = 0, stdev = 1
r = dist.rvs(10) # 10 random draws
p = dist.pdf(0) # pdf evaluated at x=0
print p,r
# Uncomment the next line and run
# I just want you to know that this magic function exists.
#%load code/fig_gaussian_distribution.py
# Complete and execute this cell
N=10000
mu=0
sigma=1
dist = stats.norm(mu, sigma)
v = np.linspace(mu - 5*sigma, mu + 5*sigma, N)
prob = dist.pdf(v) * (v[1] - v[0]) # pdf times grid spacing approximates the probability of each bin
print prob.sum()
# Execute this cell
x = stats.norm(0,1) # mean = 0, stdev = 1
y = np.exp(x)
print y.mean()
print x
# Complete and execute this cell
dist = stats.norm(0,1) # mean = 0, stdev = 1
x = dist.rvs(10000) # draw samples instead of exponentiating the frozen object
y = np.exp(x)
print x.mean(),np.log(y.mean())
# Execute this cell
%run code/fig_chi2_distribution.py
# Execute this cell
%run code/fig_student_t_distribution.py
import numpy as np
from matplotlib import pyplot as plt
from scipy.stats import norm
N = 2 # number of random draws
mu = 0
sigma = 1
u = np.linspace(mu - 5*sigma, mu + 5*sigma, 1000)
dist = norm(mu, sigma)
plt.plot(u, dist.pdf(u))
x = dist.rvs(N)
plt.plot(x, 0*x, '|', markersize=50)
# Copy your code from above
# Add a histogram that is the 2-sample mean of 1,000,000 draws
yy = dist.rvs((1000000, 2)).mean(axis=1) # mean of 2 draws, repeated 1,000,000 times (dist from the cell above)
plt.hist(yy, bins=100, normed=True, histtype='stepfilled', alpha=0.5)
# Copy your code from above and edit accordingly (or just edit your code from above)
# Execute this cell
%run code/fig_central_limit.py
# Base code drawn from Ivezic, Figure 3.22, edited by G. Richards to simplify the example
from matplotlib.patches import Ellipse
from astroML.stats.random import bivariate_normal
from astroML.stats import fit_bivariate_normal
#------------------------------------------------------------
# Create 10,000 points from a multivariate normal distribution
mean = [0, 0]
cov = [[1, 0.3], [0.3, 1]]
x, y = np.random.multivariate_normal(mean, cov, 10000).T
# Fit those data with a bivariate normal distribution
mean, sigma_x, sigma_y, alpha = fit_bivariate_normal(x,y)
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(5, 5))
ax = fig.add_subplot(111)
plt.scatter(x,y,s=2,edgecolor='none')
# draw 1, 2, 3-sigma ellipses over the distribution
for N in (1, 2, 3):
ax.add_patch(Ellipse(mean, N * sigma_x, N * sigma_y, angle=alpha * 180./np.pi, lw=1, ec='k', fc='none'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The arithmetic mean (or Expectation value) is
Step2: While it is perhaps most common to compute the mean, the median is a more robust estimator of the (true) mean location of the distribution. That's because it is less affected by outliers.
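A deterministic, standard-library-only illustration of that robustness: a single large outlier drags the mean far away while the median barely moves.

```python
from statistics import mean, median

data = list(range(1, 10))          # 1..9: mean = 5, median = 5
contaminated = data + [1000]       # add one extreme outlier

# the mean jumps to 104.5, the median only to 5.5
print(mean(data), median(data), mean(contaminated), median(contaminated))
```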
Step3: In addition to the "average", we'd like to know something about deviations from the average. The simplest thing to compute is $$d_i = x_i - \mu.$$ However, the average deviation is zero by definition of the mean. The next simplest thing to do is to compute the mean absolute deviation
Step4: And we define the standard deviation as
Step5: There is also the Median Absolute Deviation (MAD) given by
Step6: Where we call the difference between the 25th and 75th percentiles, $q_{75} - q_{25}$, the interquartile range.
Step7: The mode is the most probable value, determined from the peak of the distribution, which is the value where the derivative is 0
Step8: Another way to estimate the mode (at least for a Gaussian distribution) is
Step9: Other useful measures include the "higher order" moments (the skewness and kurtosis)
Step10: We could do the same with a normal distribution
Step11: Sample vs. Population Statistics
Step12: We can implement uniform in scipy as follows. Use the methods listed at the bottom of the link to complete the cell.
Step13: Did you expect that answer for the pdf? Why? What would the pdf be if you changed the width to 4?
Step14: Note that the convolution of two Gaussians results in a Gaussian. So $\mathscr{N}(\mu,\sigma)$ convolved with $\mathscr{N}(\nu,\rho)$ is $\mathscr{N}(\mu+\nu,\sqrt{\sigma^2+\rho^2})$
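A quick sanity check of that closure property by sampling (standard library only; the particular parameters are arbitrary choices for illustration): sums of $N(1, 0.5)$ and $N(2, 1.2)$ draws should behave like $N(3, \sqrt{0.5^2 + 1.2^2}) = N(3, 1.3)$.

```python
import math
import random

rng = random.Random(1)
n = 200000
s = [rng.gauss(1.0, 0.5) + rng.gauss(2.0, 1.2) for _ in range(n)]

m = sum(s) / n
sd = math.sqrt(sum((v - m) ** 2 for v in s) / n)
# expect m close to 1 + 2 = 3 and sd close to sqrt(0.25 + 1.44) = 1.3
print(m, sd)
```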
Step15: Log Normal
Step16: The catch here is that stats.norm(0,1) returns an object and not something that we can just do math on in the expected manner. What can you do with it? Try dir(x) to get a list of all the methods and properties.
Step17: $\chi^2$ Distribution
Step18: Student's $t$ Distribution
Step19: What's the point?
Step20: Now let's average those two draws and plot the result (in the same panel). Do it as a histogram for 1,000,000 samples (of 2 each). Use a stepfilled histogram that is normalized with 50% transparency and 100 bins.
Step21: Now instead of averaging 2 draws, average 3. Then do it for 10. Then for 100. Each time for 1,000,000 samples.
Step22: For 100 you will note that your draws are clearly sampling the full range, but the means of those draws are in a much more restrictred range. Moreover they are very closely following a Normal Distribution. This is the power of the Central Limit Theorem. We'll see this more later when we talk about maximum likelihood.
Step23: If you are confused, then watch this video from the Khan Academy
|
2,754
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
inputs = keras.Input(shape=(784,))
# Just for demonstration purposes.
img_inputs = keras.Input(shape=(32, 32, 3))
inputs.shape
inputs.dtype
dense = layers.Dense(64, activation="relu")
x = dense(inputs)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(10)(x)
model = keras.Model(inputs=inputs, outputs=outputs, name="mnist_model")
model.summary()
keras.utils.plot_model(model, "my_first_model.png")
keras.utils.plot_model(model, "my_first_model_with_shape_info.png", show_shapes=True)
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype("float32") / 255
x_test = x_test.reshape(10000, 784).astype("float32") / 255
model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=keras.optimizers.RMSprop(),
metrics=["accuracy"],
)
history = model.fit(x_train, y_train, batch_size=64, epochs=2, validation_split=0.2)
test_scores = model.evaluate(x_test, y_test, verbose=2)
print("Test loss:", test_scores[0])
print("Test accuracy:", test_scores[1])
model.save("path_to_my_model")
del model
# Recreate the exact same model purely from the file:
model = keras.models.load_model("path_to_my_model")
encoder_input = keras.Input(shape=(28, 28, 1), name="img")
x = layers.Conv2D(16, 3, activation="relu")(encoder_input)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.Conv2D(16, 3, activation="relu")(x)
encoder_output = layers.GlobalMaxPooling2D()(x)
encoder = keras.Model(encoder_input, encoder_output, name="encoder")
encoder.summary()
x = layers.Reshape((4, 4, 1))(encoder_output)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, activation="relu")(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation="relu")(x)
autoencoder = keras.Model(encoder_input, decoder_output, name="autoencoder")
autoencoder.summary()
encoder_input = keras.Input(shape=(28, 28, 1), name="original_img")
x = layers.Conv2D(16, 3, activation="relu")(encoder_input)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.Conv2D(16, 3, activation="relu")(x)
encoder_output = layers.GlobalMaxPooling2D()(x)
encoder = keras.Model(encoder_input, encoder_output, name="encoder")
encoder.summary()
decoder_input = keras.Input(shape=(16,), name="encoded_img")
x = layers.Reshape((4, 4, 1))(decoder_input)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, activation="relu")(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation="relu")(x)
decoder = keras.Model(decoder_input, decoder_output, name="decoder")
decoder.summary()
autoencoder_input = keras.Input(shape=(28, 28, 1), name="img")
encoded_img = encoder(autoencoder_input)
decoded_img = decoder(encoded_img)
autoencoder = keras.Model(autoencoder_input, decoded_img, name="autoencoder")
autoencoder.summary()
def get_model():
inputs = keras.Input(shape=(128,))
outputs = layers.Dense(1)(inputs)
return keras.Model(inputs, outputs)
model1 = get_model()
model2 = get_model()
model3 = get_model()
inputs = keras.Input(shape=(128,))
y1 = model1(inputs)
y2 = model2(inputs)
y3 = model3(inputs)
outputs = layers.average([y1, y2, y3])
ensemble_model = keras.Model(inputs=inputs, outputs=outputs)
num_tags = 12 # Number of unique issue tags
num_words = 10000 # Size of vocabulary obtained when preprocessing text data
num_departments = 4 # Number of departments for predictions
title_input = keras.Input(
shape=(None,), name="title"
) # Variable-length sequence of ints
body_input = keras.Input(shape=(None,), name="body") # Variable-length sequence of ints
tags_input = keras.Input(
shape=(num_tags,), name="tags"
) # Binary vectors of size `num_tags`
# Embed each word in the title into a 64-dimensional vector
title_features = layers.Embedding(num_words, 64)(title_input)
# Embed each word in the text into a 64-dimensional vector
body_features = layers.Embedding(num_words, 64)(body_input)
# Reduce sequence of embedded words in the title into a single 128-dimensional vector
title_features = layers.LSTM(128)(title_features)
# Reduce sequence of embedded words in the body into a single 32-dimensional vector
body_features = layers.LSTM(32)(body_features)
# Merge all available features into a single large vector via concatenation
x = layers.concatenate([title_features, body_features, tags_input])
# Stick a logistic regression for priority prediction on top of the features
priority_pred = layers.Dense(1, name="priority")(x)
# Stick a department classifier on top of the features
department_pred = layers.Dense(num_departments, name="department")(x)
# Instantiate an end-to-end model predicting both priority and department
model = keras.Model(
inputs=[title_input, body_input, tags_input],
outputs=[priority_pred, department_pred],
)
keras.utils.plot_model(model, "multi_input_and_output_model.png", show_shapes=True)
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[
keras.losses.BinaryCrossentropy(from_logits=True),
keras.losses.CategoricalCrossentropy(from_logits=True),
],
loss_weights=[1.0, 0.2],
)
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={
"priority": keras.losses.BinaryCrossentropy(from_logits=True),
"department": keras.losses.CategoricalCrossentropy(from_logits=True),
},
loss_weights={"priority": 1.0, "department": 0.2},
)
# Dummy input data
title_data = np.random.randint(num_words, size=(1280, 10))
body_data = np.random.randint(num_words, size=(1280, 100))
tags_data = np.random.randint(2, size=(1280, num_tags)).astype("float32")
# Dummy target data
priority_targets = np.random.random(size=(1280, 1))
dept_targets = np.random.randint(2, size=(1280, num_departments))
model.fit(
{"title": title_data, "body": body_data, "tags": tags_data},
{"priority": priority_targets, "department": dept_targets},
epochs=2,
batch_size=32,
)
inputs = keras.Input(shape=(32, 32, 3), name="img")
x = layers.Conv2D(32, 3, activation="relu")(inputs)
x = layers.Conv2D(64, 3, activation="relu")(x)
block_1_output = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(block_1_output)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
block_2_output = layers.add([x, block_1_output])
x = layers.Conv2D(64, 3, activation="relu", padding="same")(block_2_output)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
block_3_output = layers.add([x, block_2_output])
x = layers.Conv2D(64, 3, activation="relu")(block_3_output)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(10)(x)
model = keras.Model(inputs, outputs, name="toy_resnet")
model.summary()
keras.utils.plot_model(model, "mini_resnet.png", show_shapes=True)
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=keras.losses.CategoricalCrossentropy(from_logits=True),
metrics=["acc"],
)
# We restrict the data to the first 1000 samples so as to limit execution time
# on Colab. Try to train on the entire dataset until convergence!
model.fit(x_train[:1000], y_train[:1000], batch_size=64, epochs=1, validation_split=0.2)
# Embedding for 1000 unique words mapped to 128-dimensional vectors
shared_embedding = layers.Embedding(1000, 128)
# Variable-length sequence of integers
text_input_a = keras.Input(shape=(None,), dtype="int32")
# Variable-length sequence of integers
text_input_b = keras.Input(shape=(None,), dtype="int32")
# Reuse the same layer to encode both inputs
encoded_input_a = shared_embedding(text_input_a)
encoded_input_b = shared_embedding(text_input_b)
vgg19 = tf.keras.applications.VGG19()
features_list = [layer.output for layer in vgg19.layers]
feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)
img = np.random.random((1, 224, 224, 3)).astype("float32")
extracted_features = feat_extraction_model(img)
class CustomDense(layers.Layer):
def __init__(self, units=32):
super(CustomDense, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)
class CustomDense(layers.Layer):
def __init__(self, units=32):
super(CustomDense, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
return {"units": self.units}
inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)
config = model.get_config()
new_model = keras.Model.from_config(config, custom_objects={"CustomDense": CustomDense})
units = 32
timesteps = 10
input_dim = 5
# Define a Functional model
inputs = keras.Input((None, units))
x = layers.GlobalAveragePooling1D()(inputs)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
class CustomRNN(layers.Layer):
def __init__(self):
super(CustomRNN, self).__init__()
self.units = units
self.projection_1 = layers.Dense(units=units, activation="tanh")
self.projection_2 = layers.Dense(units=units, activation="tanh")
# Our previously-defined Functional model
self.classifier = model
def call(self, inputs):
outputs = []
state = tf.zeros(shape=(inputs.shape[0], self.units))
for t in range(inputs.shape[1]):
x = inputs[:, t, :]
h = self.projection_1(x)
y = h + self.projection_2(state)
state = y
outputs.append(y)
features = tf.stack(outputs, axis=1)
print(features.shape)
return self.classifier(features)
rnn_model = CustomRNN()
_ = rnn_model(tf.zeros((1, timesteps, input_dim)))
units = 32
timesteps = 10
input_dim = 5
batch_size = 16
class CustomRNN(layers.Layer):
def __init__(self):
super(CustomRNN, self).__init__()
self.units = units
self.projection_1 = layers.Dense(units=units, activation="tanh")
self.projection_2 = layers.Dense(units=units, activation="tanh")
self.classifier = layers.Dense(1)
def call(self, inputs):
outputs = []
state = tf.zeros(shape=(inputs.shape[0], self.units))
for t in range(inputs.shape[1]):
x = inputs[:, t, :]
h = self.projection_1(x)
y = h + self.projection_2(state)
state = y
outputs.append(y)
features = tf.stack(outputs, axis=1)
return self.classifier(features)
# Note that you specify a static batch size for the inputs with the `batch_shape`
# arg, because the inner computation of `CustomRNN` requires a static batch size
# (when you create the `state` zeros tensor).
inputs = keras.Input(batch_shape=(batch_size, timesteps, input_dim))
x = layers.Conv1D(32, 3)(inputs)
outputs = CustomRNN()(x)
model = keras.Model(inputs, outputs)
rnn_model = CustomRNN()
_ = rnn_model(tf.zeros((1, 10, 5)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Functional API
Step2: Introduction
Step3: The data shape is specified as a 784-dimensional vector. The batch size is always omitted, since only the shape of each sample is specified.
Step4: The returned inputs contains information about the shape and dtype of the input data you feed to the model. The shape is as follows.
Step5: The dtype is as follows.
Step6: You create a new node in the graph of layers by calling a layer on this inputs object.
Step7: The "layer call" action is like drawing an arrow from "inputs" to the layer you created. You "pass" the inputs to the dense layer and get x back.
Step8: At this point, you can create a Model by specifying its inputs and outputs in the graph of layers.
Step9: Let's check out what the model summary looks like.
Step10: You can also plot the model as a graph.
Step11: And, optionally, display the input and output shapes of each layer in the plotted graph.
Step12: This figure and the code are almost identical. In the code version, the connection arrows are replaced by the call operations.
Step13: For further reading, see the training and evaluation guide.
Step14: For details, see the model serialization and saving guide.
Step15: Here, the output shape is the same as the input shape (28, 28, 1), since the decoding architecture is strictly symmetrical to the encoding architecture.
Step16: As you can see, models can be nested: a model can contain sub-models (since a model is just like a layer). A common use case for model nesting is ensembling. Here is an example of how to ensemble a set of models into a single model that averages their predictions.
Step17: Manipulating complex graph topologies
Step18: Now let's plot the model.
Step19: When compiling this model, you can assign different losses to each output. You can even assign different weights to each loss, to modulate their contribution to the total training loss.
Step20: Since the output layers have different names, you can also specify the losses and loss weights with the corresponding layer names.
Step21: Train the model by passing lists of NumPy arrays of inputs and targets.
Step22: When calling fit with a Dataset object, it should yield either a tuple of lists such as ([title_data, body_data, tags_data], [priority_targets, dept_targets]), or a tuple of dictionaries such as ({'title'
Step23: Plot the model.
Step24: Train the model.
Step25: Shared layers
Step26: Extract and reuse nodes in the graph of layers
Step27: And these are the intermediate activations of the model, obtained by querying the graph data structure.
Step28: Use these features to create a new feature-extraction model that returns the values of the intermediate layer activations.
Step29: This comes in especially handy for tasks like neural style transfer.
Step30: To support serialization in your custom layer, define a get_config method that returns the constructor arguments of the layer instance.
Step31: Optionally, implement the class method from_config(cls, config), which is used to recreate a layer instance given its config dictionary. The default implementation of from_config is as follows.
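A plain-Python sketch of the get_config / from_config round trip (this is a toy stand-in for illustration, not the actual Keras base class): the default from_config simply unpacks the config dict into the constructor.

```python
class ConfigurableLayer:
    """Toy stand-in for a layer that supports config round-tripping."""

    def __init__(self, units=32):
        self.units = units

    def get_config(self):
        # return the constructor arguments needed to rebuild this instance
        return {"units": self.units}

    @classmethod
    def from_config(cls, config):
        # the default implementation just forwards the config dict
        return cls(**config)

layer = ConfigurableLayer(units=10)
clone = ConfigurableLayer.from_config(layer.get_config())
print(clone.units)
```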
Step32: You can use any subclassed layer or model in the Functional API, as long as it implements a call method that follows one of the following patterns.
|
2,755
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy.linalg as la
a = np.array([[1,2,3], [4,5,6], [7,8,9]])
m = np.matrix([[1,2,3], [4,5,6]])
print('a=', a)
print(19*'-')
print('m=', m)
print(19*'-')
print('ndim(a) = ', np.ndim(a))
print('a.ndim = ', a.ndim)
print(19*'-')
print('np.ndim(m) = ', np.ndim(m))
print('m.ndim = ', m.ndim)
a = np.array([[1,2,3], [4,5,6], [7,8,9]])
m = np.matrix([[1,2,3], [4,5,6]])
print('a=', a)
print(19*'-')
print('m=', m)
print(19*'-')
print('size(a) = ', np.size(a))
print('a.size = ', a.size)
print(19*'-')
print('np.size(m) = ', np.size(m))
print('m.size = ', m.size)
a = np.array([[1,2,3], [4,5,6], [7,8,9]])
m = np.matrix([[1,2,3], [4,5,6]])
print('a=', a)
print(19*'-')
print('m=', m)
print(19*'-')
print('shape(a) = ', np.shape(a))
print('a.shape = ', a.shape)
print(19*'-')
print('np.shape(m) = ', np.shape(m))
print('m.shape = ', m.shape)
a = np.array([[1,2,3], [4,5,6], [7,8,9]])
m = np.matrix([[1,2,3], [4,5,6]])
print('a=', a)
print(19*'-')
print('m=', m)
print(19*'-')
print('a.shape[2-1] = ', a.shape[2-1])
print(19*'-')
print('m.shape[2-1] = ', m.shape[2-1])
a = np.array([[1.,2.,3.], [4.,5.,6.]])
m = np.matrix([[1.,2.,3.], [4.,5.,6.]])
print('a=', a)
print(19*'-')
print('m=', m)
a = np.array([[1.,2.,3.], [4.,5.,6.]])
b = np.array([[10.,20.,30.], [40.,50.,60.]])
c = np.array([[11.,12.,13.], [14.,15.,16.]])
d = np.array([[19.,29.,39.], [49.,59.,69.]])
m = np.matrix([[1.,2.,3.], [4.,5.,6.]])
n = np.matrix([[10.,20.,30.], [40.,50.,60.]])
r = np.matrix([[11.,12.,13.], [14.,15.,16.]])
p = np.matrix([[19.,29.,39.], [49.,59.,69.]])
e = np.vstack([np.hstack([a,b]), np.hstack([c,d])])
q = np.vstack([np.hstack([m,n]), np.hstack([r,p])])
print ('e = ', e)
print ('q = ', q)
f = np.bmat('a b; c d').A
s = np.bmat('m n; r p').A
print ('f = ', f)
print ('s = ', s)
a = np.array([1,2,3,4])
print(a[-1])
a = np.array([[1.,2.,3.,4.,5.], [6.,7.,8.,9.,10.],[11.,12.,13.,14.,15.]])
m = np.matrix([[1.,2.,3.,4.,5.], [6.,7.,8.,9.,10.],[11.,12.,13.,14.,15.]])
print('a=', a)
print(39*'-')
print('m=', m)
print(39*'-')
print(a[1,4])
print(39*'-')
print(m[1,4])
a = np.array([[1.,2.,3.,4.,5.], [6.,7.,8.,9.,10.],[11.,12.,13.,14.,15.]])
m = np.matrix([[1.,2.,3.,4.,5.], [6.,7.,8.,9.,10.],[11.,12.,13.,14.,15.]])
print('a=', a)
print(39*'-')
print('m=', m)
print(39*'-')
print('2nd row of a: a[1] = ', a[1])
print('2nd row of a: a[1:] = ', a[1,:])
print(39*'-')
print('2nd row of m: m[1] = ', m[1])
print('2nd row of m: m[1:]', m[1,:])
a=np.random.rand(8,5)
print(a)
a[0:5]
a[:5]
a[0:5,:]
m = np.mat(a) #or np.asmatrix(a)
m
m[0:5]
m[:5]
m[0:5,:]
a[-5:]
m=np.mat(a)
m
m[-5:]
a=np.random.rand(5,10)
b= a.copy()
print(a)
a[0:3][:,4:9]
a[np.ix_([1,3,4],[0,2])]
a[2:21:2,:]
a[::2,:]
a[ ::-1,:]
a[np.r_[:len(a),0]]
a.transpose()
a.T
a.conj().T
a.conj().transpose()
a.dot(np.asarray(m))
a*a
a/a
a**3
a>0.5
np.nonzero(a>0.5)
v = a[:,4].T
a[:,np.nonzero(v>0.5)[0]]
a[:,v.T>0.5]
a = b.copy()
a[a<0.5]=0
print(a)
a = b.copy()
a * (a>0.5)
a[:] = 3
a
a=np.random.rand(2,2)
c = a.copy()
b = a
print(a), print(b)
# changing b now changes a (anf vise veras)
b[1,1] = 0
print(a)
print(b)
a[0,0] = -1
print(a)
print(b)
# changing c does not change a (and vise versa)
a[0,1] = -10
print(a)
print(c)
c[1,0] = 20
print(a)
print(c)
y=a[1,:]
print(a)
print(y)
a[1,1]=1
print(y)
a.flatten()
a = np.arange(1,11)
a = np.arange(10)
a = np.zeros((3,5))
print(a)
a = np.zeros((2,3,5))
print(a)
a = np.ones((3,4))
print(a)
a = np.eye(3)
print(a)
a=np.random.rand(4,4)
b = np.diag(a)
print(a)
print(b)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1:
Step2:
Step3:
Step4:
Step5:
Step6:
Step7:
Step8:
Step9:
Step10:
Step11:
Step12:
Step13:
Step14:
Step15:
Step16:
Step17:
Step18:
Step19:
Step20:
Step21:
Step22:
Step23:
Step24:
Step25:
Step26:
Step27:
Step28:
Step29:
Step30:
Step31:
Step32:
Step33:
Step34:
Step35:
Step36:
Step37:
Step38:
Step39:
Step40:
|
2,756
|
<ASSISTANT_TASK:>
Python Code:
ipd.display( ipd.YouTubeVideo("ajCYQL8ouqw") )
ipd.display( ipd.YouTubeVideo("PrVu9WKs498", start=8) )
ipd.display( ipd.YouTubeVideo("Cxj8vSS2ELU", start=540) )
ipd.display( ipd.YouTubeVideo("ECvinPjmBVE", start=6) )
ipd.display( ipd.YouTubeVideo("DiW6XVFeFgo", start=60))
ipd.display( ipd.YouTubeVideo("YDjsoZKlG04", start=155))
ipd.display( ipd.YouTubeVideo("TG-ivjyyYhM", start=35))
ipd.display( ipd.VimeoVideo("96140435") )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: (Chords)
Step2: One more
Step3: Why MIR?
Step4: Example
Step5: Example
Step6: Example
Step7: Example
|
2,757
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as st
import seaborn
from IPython.html.widgets import interact, interactive, fixed # functions needed for interactivity
from IPython.display import clear_output, display, HTML
rcdef = plt.rcParams.copy()
plt.rcParams['figure.figsize'] = 5, 3
seaborn.palplot(seaborn.color_palette())
colors=seaborn.color_palette() # colors[i] will be color i, i=0,...,5
theta=2.0
V=st.expon(scale=theta) # declare the exponentially distributed random variable
# with parameter theta=2
x=np.linspace(0,20, 100)
y=V.pdf(x)
plt.title('Probability density of the Exp(2) distribution')
plt.plot(x, y)
print 'Mean M(V)=', V.mean()
theta=1.5
x=np.linspace(0,20, 100)
y=st.expon.cdf(x,scale=theta)
plt.title('Cumulative distribution function of Exp(1.5)')
plt.plot(x,y)
print 'Variance of Exp(1.5):', st.expon.var(scale=theta)
def Expopdf(theta):
V=st.expon(scale=theta)
x=np.linspace(0,10,100)
y=V.pdf(x)#
fig, ax = plt.subplots()
ax.plot(x,y)
ax.fill_between(x, y, alpha=0.3)
ax.set_xlim(0,10)
ax.set_ylim(0,2)
ax.set_title(r'Variation of the $Exp(\theta)$ probability densities')
interact(Expopdf, theta=(0.5, 5.0, 0.5))
def Expocdf(theta):
V=st.expon(scale=theta)
x=np.linspace(0,10,100)
y=V.cdf(x)
fig, ax = plt.subplots()
ax.plot(x,y)
ax.fill_between(x, y, alpha=.33)
ax.set_title(r'Variation of the $Exp(\theta)$ cumulative distribution functions')
ax.set_xlim(0,10)
ax.set_ylim(0, 1.0)
interact(Expocdf, theta=(0.2, 3.2))
print 'The mean and the variance are:', st.expon.mean(scale=2), st.expon.var(scale=2)
vals=V.rvs(size=1000) # calls the sampler, which implements the inversion method for Exp(theta)
histo=plt.hist(vals, bins=50, normed=True)
valM=np.max(vals)
x=np.linspace(0,valM, 100)
plt.plot(x, V.pdf(x), color=colors[2])
alpha=3.0
V=st.pareto(alpha)#
beta=2.0
x=np.linspace(0,20, 100)
y=(1.0/beta)*V.pdf(x/beta)
plt.plot(x,y)
plt.fill_between(x, y, alpha=.33)
plt.title('Pareto probability density')
def Paretopdf(beta, alpha):
x=np.linspace(0,20,100)
y=(1.0/beta)*st.pareto.pdf(x/beta, alpha)
fig, ax = plt.subplots()
ax.plot(x,y)
ax.fill_between(x, y, alpha=.33)
ax.set_title(r'Variation of the $Pareto(\alpha, \beta)$ probability densities')
ax.set_ylim(0,3)
interact(Paretopdf, beta=(0.2, 2, 0.2), alpha=(0.1, 3, 0.3))
plt.rcParams['figure.figsize'] = 8, 5
fig, ax = plt.subplots()
beta=1
alphaVals=np.arange(0.2, 2.2, 0.4)
for alpha in alphaVals:
x=np.linspace(beta,20, 100)
y=st.pareto.pdf(x,alpha)
ax.plot(x,y, lw=2, label=r'$\alpha = %.1f $'%alpha)
ax.set_title(r'Probability densities of the $Pareto(\alpha, 1)$ distributions')
ax.legend()
fig, ax = plt.subplots()
alpha=0.8
betaVals=np.arange(0.2, 2.2, 0.4)
for beta in betaVals:
x=np.linspace(beta,20, 200)
y=(1.0/beta)*st.pareto.pdf(x/beta,alpha)
ax.plot(x,y, lw=2, label=r'$\beta = %.1f $'%beta)
ax.set_title(r'Probability densities of the $Pareto(0.8, \beta)$ distributions')
ax.legend()
plt.rcParams['figure.figsize'] = 5, 3
beta=0.5
def ParetoMeanVar(alpha):
V=st.pareto(alpha) # beta set to 1
print 'mean=', beta*V.mean()
print 'variance=', beta**2*V.var()
interact(ParetoMeanVar, beta=0.5, alpha=(0.1, 3))
plt.rcParams['figure.figsize'] = 6, 4
V=st.pareto(3.0) # alpha was set to 3.0
beta=1.5
c=1.0/beta
fX=lambda x: c*V.pdf(c*x)
vals=beta*V.rvs(size=1000)
fig, ax=plt.subplots()
ax.hist(vals, bins=50, normed=True)
x=np.linspace(beta,20, 500)
ax.plot(x, fX(x) , color=colors[2])#seaborn dark red
V=st.norm()
plt.rcParams['figure.figsize'] = 5, 3
x=np.linspace(-4, 4,100)
plt.title('Probability density of N(0,1)')
plt.plot(x, V.pdf(x))
m=2
sigma=0.75
W=st.norm(loc=m, scale=sigma)
xx=np.linspace(m-4*sigma, m+4*sigma, 100)
plt.title('Probability density of N(2, 0.75)')
plt.plot(xx,W.pdf(xx))
def MedieVar(m):
sigma=1
x=np.linspace(m-4*sigma, m+4*sigma, 100)
y=st.norm.pdf(x,loc=m, scale=sigma)
fig, ax = plt.subplots()
ax.plot(x,y)
ax.fill_between(x, y, alpha=.33)
ax.set_title(r'Variation of the $N(m,1)$ density plot')
ax.set_xlim(-4, 6)
interact(MedieVar, m=(0,2, 0.2))
plt.rcParams['figure.figsize'] = 8, 5
fig, ax = plt.subplots()
m=0
sigmaVals=np.arange(0.2, 2.2, 0.4)
x=np.linspace(-7,7, 300)
for sigma in sigmaVals:
y=st.norm.pdf(x,loc=0, scale=sigma)
ax.plot(x,y, label=r'$\sigma = %.1f $'%sigma)
ax.legend()
plt.rcParams['figure.figsize'] = 10, 4
fig=plt.figure()
ax1=fig.add_subplot(121)
x=np.linspace(-4,4, 200)
y=st.norm.cdf(x)
ax1.set_title('Plot of the cumulative distribution function $\Phi$')
ax1.plot(x,y)
ax2=fig.add_subplot(122)
u=np.linspace(0,1,100)
xx=st.norm.ppf(u)
ax2.set_title('Plot of the inverse, $\Phi^{-1}$')
ax2.plot(u,xx)
print 'The 0.4 and 0.78 quantiles are:', st.norm.ppf([0.4, 0.78])
plt.rcParams['figure.figsize'] = 5, 3
X=st.norm(loc=1.5, scale=0.8)
fig,ax=plt.subplots()
ax.set_xlim(-1.5, 4.5)
vals=X.rvs(size=1000)
histo=plt.hist(vals, bins=50, normed=True)
x=np.linspace(np.min(vals), np.max(vals), 100)
ax.plot(x, X.pdf(x), color=colors[2])
p=[0.3, 0.48, 0.22]#probabilitatile din definitia mixturii
m=[-4.8, -0.5, 3.5]#lista mediilor celor 3 distributii
sigma=[0.8, 1.0, 1.7]# lista abaterilor standard
x=np.linspace(-10,10, 1000)
y=np.zeros(x.shape)
for i in range(3):
y+=p[i]*st.norm.pdf(x,loc=m[i], scale=sigma[i])
plt.plot(x,y)
plt.fill_between(x, y, alpha=.33)
def simDiscrete(pr):
k=0
F=pr[0]
u=np.random.random()
while(u>F):
k+=1
F=F+pr[k]
return k
def GaussianMixture1D(N, p, m,sig):
dis=[simDiscrete(p) for i in xrange(N)] # generate N observations of the discrete distribution
vals=np.empty(N, dtype=float) # vals will hold the generated values of the mixture
n=len(p)
for k in range(n):
I=[j for j in range(N) if dis[j] == k] # the list of indices of elements of dis equal
# to k (k=0, 1, ..., m-1)
s=len(I)
va=st.norm.rvs(size=s, loc=m[k], scale=sig[k]) # generate as many values from f_k
# as the length of I
vals[I]=va # copy the generated values into positions I
return vals
N=1000
vals=GaussianMixture1D(N,p, m, sigma)
histo=plt.hist(vals, bins=50, normed=True)
plt.plot(x,y, color=colors[2])
from IPython.core.display import HTML
def css_styling():
styles = open("./custom.css", "r").read()
return HTML(styles)
css_styling()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: seaborn is a library for statistical visualization. It is not included in the Anaconda distribution. It is based on matplotlib and is installed using pip. In cmd or a terminal, run the command
Step2: To use them in this notebook instead of the colors offered by matplotlib, we save them in
Step3: Next we illustrate how to work with continuous random variables, which are instances of the class scipy.stats.DistributionName.
Step4: Such a definition of a random variable is called "frozen" (a frozen random variable), in the sense
Step5: Another way is not to freeze the definition, but to call the methods for a random variable,
Step6: To visualize the dependence of the $Exp(\theta)$ density on the parameter $\theta$, we use the interact function
Step7: We call the interact function, which generates a widget and allows interaction with the plot of the density
Step8: Note that at $0$ the density takes the value $1/\theta$. So as $\theta$ increases,
Step9: The mean of the $Exp(\theta)$ distribution is $M(V)=\theta$, and the variance
Step10: Let us now simulate the random variable $Exp(\theta=2.0)$, generating 1000 observed values.
Step11: Distributia Pareto
Step12: Daca insa studiem o variabila aleatoare $X\sim Pareto(\alpha, \beta\neq 1)$, atunci exploatam relatiile de mai sus.
Step13: Sa ilustram acum dependenta graficului densitatii Pareto de cei doi parametri, $\beta$ si $\alpha$
Step14: O alta modalitate de a vizualiza variatia graficului densitatii, cand unul din parametri este fix
Step15: Fixand $\alpha=0.8$ si variind $\beta$ avem
Step16: Am analizat in Cursul 15 media si dispersia distributiei Pareto si am concluzionat ca o variabila aleatoare
Step17: Simulatorul distributiei Pareto standard implementeaza metoda inversarii,
Step18: Distributia Pareto se mai numeste si distributie cu coada lunga (long tail distribution),
Step19: O variabila aleatoare, $X\sim N(m,\sigma)$, normal distribuita de parametri $m, \sigma$, arbitrari, are densitatea de probabilitate
Step20: Fixand parametrul $\sigma$ si variind media, $m$, graficul densitatii corespunzatoare se translateaza
Step21: Fixand acum media pe 0 si variind abaterea standard, $\sigma$ avem
Step22: Se observa ca pe masura ce abaterea standard creste, valorile unei varabile aleatoare $N(0,\sigma)$ sunt mai imprastiate in jurul mediei.
Step23: Sa calculam $\alpha$-cvantila distributiei $N(0,1)$, pentru $\alpha=0.4$ si $\alpha=0.78$
Step24: Prin urmare $z_{0.4}\approx-0.25$, iar $z_{0.78}\approx 0.77$.
Step25: Mixtura Gaussiana 1D
Step26: Generam $N=1000$ de valori din mixtura de mai sus
Step27: Histograma valorilor generate are alura graficului densitatii mixturii.
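As a supplementary illustration of the inversion method mentioned in Step17, here is a minimal sketch (the function name is illustrative; it assumes the standard Pareto CDF $F(x)=1-x^{-\alpha}$, $x\ge 1$, whose inverse is $F^{-1}(u)=(1-u)^{-1/\alpha}$):

```python
import numpy as np

def pareto_inversion(alpha, size):
    # Inversion method: if U ~ Uniform(0, 1), then F^{-1}(U) = (1 - U)**(-1/alpha)
    # follows the standard Pareto(alpha) distribution with CDF F(x) = 1 - x**(-alpha), x >= 1.
    u = np.random.random(size)
    return (1.0 - u) ** (-1.0 / alpha)

samples = pareto_inversion(2.5, 10000)
# For alpha > 1 the sample mean should approach alpha / (alpha - 1).
```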
|
2,758
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from matplotlib import pyplot
%matplotlib inline
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
a=0.3 # diffusion constant
L = 1. # length of domain
J = 41 # number of grid points
dx = float(L)/float(J-1) # mesh size h = dx
#x_grid = np.array([j*dx for j in range(J)]) # spatial grid points
x_grid = np.linspace(0,1.0,J) #
T = 1. # length of time
N = 20 # number of time steps
#dt = float(T)/float(N-1) # time step size
sigma = .4 # stability if a dt/dx <= 1/2 ==> dt <= dx**2/(2*a)
# hence, sigma < 0.5
dt = sigma*dx**2/a # stability: dt <= dx**2/(2*a)
u = np.ones(J) #numpy function ones()
lbound = np.where(x_grid >= 0.125)
ubound = np.where(x_grid <= 0.25)
bounds = np.intersect1d(lbound, ubound)
u[bounds]=2 #setting u = 2 between 0.5 and 1 as per our I.C.s
u0 = np.ones(J)
pyplot.plot(x_grid, u, color='#003366', ls='--', lw=3)
pyplot.ylim(0,2.5);
for n in range(N):
u0 = u.copy()
u[1:-1] = u0[1:-1] + a*dt/dx**2*(u0[2:] -2*u0[1:-1] +u0[0:-2])
# Set Dirichlet boundary conditions
u[0] = 1; u[-1] = 1  # right boundary is the last of the J grid points (u[N] with N=20 would sit mid-domain)
if n%3 == 0:
pyplot.plot(x_grid, u, color='#003366', ls='--', lw=3)
pyplot.ylim(0,2.5);
pyplot.plot(x_grid, u, color='#003366', ls='--', lw=3)
pyplot.ylim(0.8,2.2);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Physical parameters
Step2: Specify spatial grid in Python
Step3: Specify temporal grid in Python
Step4: Goal
Step5: That leaves us with two vectors
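The explicit (FTCS) scheme used above is only stable when $\sigma = a\,\Delta t/\Delta x^2 \le 1/2$, which is exactly how `dt` is chosen in the code. A minimal sketch reusing the same grid parameters to verify the criterion:

```python
# FTCS diffusion stability check: sigma = a*dt/dx**2 must not exceed 0.5.
a = 0.3               # diffusion constant, as in the code above
L = 1.0
J = 41
dx = L / (J - 1)
sigma = 0.4           # chosen below the stability limit
dt = sigma * dx**2 / a          # largest time step consistent with this sigma
stable = a * dt / dx**2 <= 0.5  # stability criterion holds by construction
```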
|
2,759
|
<ASSISTANT_TASK:>
Python Code:
#relatively fast networks package (pip install python-igraph) that I used for these homeworks
import igraph
# slow-and-steady networks package. fewer bugs, easier drawing
import networkx as nx
# plots!
import matplotlib.pyplot as plt
from matplotlib import style
%matplotlib inline
# other packages
from __future__ import division
from random import random, shuffle
from numpy import percentile
from operator import itemgetter
from tabulate import tabulate
from collections import Counter
real_graph = nx.karate_club_graph()
positions = nx.spring_layout(real_graph)
nx.draw(real_graph, node_color = 'blue', pos = positions)
# Use the same number of nodes for each example
num_nodes = 500
# list of the sizes of the largest components
big_comp = []
# number of nodes in the graph
#num_nodes = 500
# vector of edge probabilities
p_values = [(1-x*.0001) for x in xrange(9850,10000)]
# try it a few times to get a smoother curve
iterations = 10
for p in p_values:
size_comps = []
for h in xrange(0, iterations):
edge_list = []
for i in xrange(0,num_nodes):
for j in xrange(i,num_nodes):
if (random() < p):
edge_list.append((i,j))
G = igraph.Graph(directed = False)
G.add_vertices(num_nodes)
G.add_edges(edge_list)
comps = [len(x) for x in G.clusters()]
size_comps.append(comps)
big_comp.append((sum([max(x) for x in size_comps])/len(size_comps)/float(num_nodes)))
plt.plot([x*(num_nodes-1) for x in p_values], big_comp, '.')
plt.title("Phase transitions in connectedness")
plt.ylabel("Fraction of nodes in the largest component")
plt.xlabel("Average degree (k = p(n-1)), {} < p < {}".format(p_values[99],p_values[0]))
# vector of edge probabilities
p_values_clustering = [x*.01 for x in xrange(0,100)]
# try it a few times to get a smoother curve
iterations = 1
# store the clustering coefficient
clustering = []
for p in p_values_clustering:
size_comps = []
for h in xrange(0, iterations):
edge_list = []
for i in xrange(0,num_nodes):
for j in xrange(i,num_nodes):
if (random() < p):
edge_list.append((i,j))
G = igraph.Graph(directed = False)
G.add_vertices(num_nodes)
G.add_edges(edge_list)
clustering.append((p, G.transitivity_undirected(mode="zero")))
plt.plot([x[0]*(num_nodes-1) for x in clustering], [x[1] for x in clustering], '.')
plt.title("Clustering coeff vs avg degree in a random graph")
plt.ylabel("Clustering coefficient")
plt.xlabel("Average degree (k = (n-1)p), 0 < p < 1")
# list of the average (over X iterations) diameters of the largest components
diam = []
# the degree distribution of the network for each average degree
degrees = {}
# vector of edge probabilities
p_values = [(1-x*.0001) for x in xrange(9850,10000)]
# try it a few times to get a smoother curve
iterations = 10
for p in p_values:
size_comps = []
diameters = []
for h in xrange(0, iterations):
edge_list = []
for i in xrange(0,num_nodes):
for j in xrange(i,num_nodes):
if (random() < p):
edge_list.append((i,j))
G = igraph.Graph(directed = False)
G.add_vertices(num_nodes)
G.add_edges(edge_list)
diameters.append(G.diameter())
degrees[p*(num_nodes-1)] = G.degree()
diam.append(sum(diameters)/len(diameters))
fig, ax1 = plt.subplots(figsize = (8,6))
plt.title("Graph metrics vs avg degree in a random graph", size = 16)
ax1.plot([x*(num_nodes-1) for x in p_values], big_comp, 'o', color = "red", markersize=4)
ax1.set_xlabel('Average degree (k = (n-1)p), 0 < p < 1', size = 16)
ax1.set_ylim(0,1.01)
ax1.set_xlim(0,6)
# Make the y-axis label and tick labels match the line color.
ax1.set_ylabel('Fraction of nodes in giant component', color='red', size = 16)
ax1.grid(True)
for tl in ax1.get_yticklabels():
tl.set_color('red')
tl.set_size(16)
ax2 = ax1.twinx()
ax2.set_xlim(0,6)
ax2.plot([x*(num_nodes-1) for x in p_values], diam, 's', color = "blue", markersize=4)
ax2.set_ylabel('Diameter of the giant component', color='blue', size = 16)
for tl in ax2.get_yticklabels():
tl.set_color('blue')
tl.set_size(16)
avg_degree_near_5 = min(degrees.keys(), key = lambda x: abs(x-5))
xy = Counter(degrees[avg_degree_near_5]).items()
plt.bar([x[0] for x in xy], [x[1] for x in xy], edgecolor = "none", color = "blue")
plt.ylabel("# of nodes with degree X", size = 16)
plt.xlabel("Degree", size = 16)
plt.title("Degree distribution of the random graph", size = 16)
print("The number of nodes in the graph (all are connected): {}".format(len(real_graph.nodes())))
print("The number of edges in the graph: {}".format(len(real_graph.edges())))
print("The average degree: {}".format(sum(nx.degree(real_graph).values())/len(real_graph.nodes())))
print("The clustering coefficient: {}".format(nx.average_clustering(real_graph)))
print("The clustering coefficient that a random graph with the same degree would predict (k/(n-1)): {}"
.format(sum(nx.degree(real_graph).values())/len(real_graph.nodes())/(len(real_graph.nodes())-1)))
print("The diameter of the graph: {}".format(nx.diameter(real_graph)))
A = []
for v in real_graph.nodes():
for x in range(0, real_graph.degree(v)):
A.append(v)
shuffle(A)
# make the edge list
_E = [(A[2*x], A[2*x+1]) for x in range(0,int(len(A)/2))]
E = set([x for x in _E if x[0]!=x[1]])
# add the edges to a new graph with the name node list
C = real_graph.copy()
C.remove_edges_from(real_graph.edges())
C.add_edges_from(E)
nx.draw(C, node_color = 'blue', pos = positions)
print("The number of nodes in the graph (all are connected): {}".format(len(C.nodes())))
print("The number of edges in the graph: {}".format(len(C.edges())))
print("The average degree: {}".format(sum(nx.degree(C).values())/len(C.nodes())))
print("The clustering coefficient: {}".format(nx.average_clustering(C)))
print("The clustering coefficient that a random graph with the same degree would predict (k/(n-1)): {}"
.format(sum(nx.degree(real_graph).values())/len(C.nodes())/(len(C.nodes())-1)))
print("The diameter of the graph: {}".format(nx.diameter(C)))
# get the graph
florentine_families = igraph.Nexus.get("padgett")["PADGB"]
# degree centrality
d = florentine_families.degree()
d_rank = [(x, florentine_families.vs[x]['name'], d[x]) for x in range(0,len(florentine_families.vs()))]
d_rank.sort(key = itemgetter(2), reverse = True)
# harmonic centrality
distances = florentine_families.shortest_paths_dijkstra()
h = [sum([1/x for x in dist if x != 0])/(len(distances)-1) for dist in distances]
h_rank = [(x, florentine_families.vs[x]['name'], h[x]) for x in range(0,len(florentine_families.vs()))]
h_rank.sort(key = itemgetter(2), reverse = True)
# make the table
d_table = []
d_table.append(["Rank (by degree)", "degree", "Rank (h centrality)", "harmonic"])
for n in xrange(0,len(florentine_families.vs())):
table_row = []
table_row.extend([d_rank[n][1], str(d_rank[n][2])[0:5]])
table_row.extend([h_rank[n][1], str(h_rank[n][2])[0:5]])
#table_row.extend([e_rank[n][1], str(e_rank[n][2])[0:5]])
#table_row.extend([b_rank[n][1], str(b_rank[n][2])[0:5]])
d_table.append(table_row)
print tabulate(d_table)
config_model_centrality = [[] for x in florentine_families.vs()]
config_model_means = []
hc_differences = [[] for x in range(0,16)]
for i in xrange(0,1000):
# build a random graph based on the configuration model
C = florentine_families.copy()
# graph with the same edge list as G
C.delete_edges(None)
# print C.summary()
# Add random edges
# vertex list A
A = []
for v in florentine_families.vs().indices:
for x in range(0,florentine_families.degree(v)):
A.append(v)
shuffle(A)
# print A
# make the edge list
_E = [(A[2*x], A[2*x+1]) for x in range(0,int(len(A)/2))]
E = set([x for x in _E if x[0]!=x[1]])
# add the edges to C
# print E
C.add_edges(E)
# rank the vertices by harmonic centrality
C_distances = C.shortest_paths_dijkstra()
C_h = [sum([1/x for x in dist if x != 0])/(len(C_distances)-1) for dist in C_distances]
del C
for vertex in range(0,16):
hc_differences[vertex].append(h[vertex] - C_h[vertex])
plt.plot([percentile(diff, 50) for diff in hc_differences], '--')
plt.plot([percentile(diff, 25) for diff in hc_differences], 'r--')
plt.plot([percentile(diff, 75) for diff in hc_differences], 'g--')
plt.xticks(range(0,16))
plt.gca().set_xticklabels(florentine_families.vs()['name'])
plt.xticks(rotation = 90)
plt.gca().grid(True)
plt.ylabel("(centrality) - (centrality on the null model)")
plt.title("How much of harmonic centrality is explained by degree?")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Graphs!
Step2: Now. What's the difference between that (^) drawing of nodes and edges and a completely random assembly of dots and lines? How can we quantify the difference between a social network, which we think probably has important structure, and a completely random network, whose structure contains very little useful information? Which aspects of a network can be explained by simple statistics like average degree, the number of nodes, or the degree distribution? Which characteristics of a network depend on a structure or generative process that could reveal an underlying truth about the way the network came about?
Step3: Clustering coefficient
Step4: Small diameter graphs
Step5: A comparison with a real social graph
Step6: The Configuration Model
Step7: Asking questions using a null model
Step8: First, let's show the relative rankings of the families with respect to vertex degree in the network and with respect to our chosen centrality measure, harmonic centrality. I won't go into various centrality measures here, beyond saying that harmonic centrality is formulated
Step9: Now the fun (?) part. Create a bunch of different random configuration models based on the florentine families graph, then measure the harmonic centrality on those graphs. The harmonic centrality of a node on the null model will depend only on its degree (as the graph structure is now random).
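The stub-matching procedure behind the configuration model can be sketched in isolation (a toy degree sequence, not the karate-club or Florentine data; self-loops are dropped as in the text, so realized degrees can only fall at or below the targets):

```python
from random import shuffle
from collections import Counter

# Degree sequence we want to preserve (must sum to an even number).
degrees = {0: 2, 1: 2, 2: 3, 3: 1}

# Build the stub list: each node appears once per unit of degree.
stubs = []
for node, deg in degrees.items():
    stubs.extend([node] * deg)
shuffle(stubs)

# Pair consecutive stubs into edges; drop self-loops as in the text.
edges = [(stubs[2 * i], stubs[2 * i + 1]) for i in range(len(stubs) // 2)]
edges = [e for e in edges if e[0] != e[1]]

# Each kept edge contributes one unit of degree to both endpoints.
realized = Counter()
for u, v in edges:
    realized[u] += 1
    realized[v] += 1
```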
|
2,760
|
<ASSISTANT_TASK:>
Python Code:
import ipyrad as ip
## this is a comment, it is not executed, but the code below it is.
import ipyrad as ip
## here we print the version
print ip.__version__
## create an Assembly object named data1.
data1 = ip.Assembly("data1")
## setting/modifying parameters for this Assembly object
data1.set_params('project_dir', "pedicularis")
data1.set_params('sorted_fastq_path', "./example_empirical_rad/*.gz")
data1.set_params('filter_adapters', 2)
data1.set_params('datatype', 'rad')
## prints the parameters to the screen
data1.get_params()
## this should raise an error, since clust_threshold cannot be 2.0
data1.set_params("clust_threshold", 2.0)
print data1.name
## another example attribute listing directories
## associated with this object. Most are empty b/c
## we haven't started creating files yet. But you
## can see that it shows the fastq directory.
print data1.dirs
## run step 1 to create Samples objects
data1.run("1")
## The force flag allows you to re-run a step that is already finished
data1.run("1", force=True)
## this is the explicit way to connect to ipcluster
import ipyparallel
## connect to a running ipcluster instance
ipyclient = ipyparallel.Client()
## or, if you used a named profile then enter that
ipyclient = ipyparallel.Client(profile="default")
## call the run function of ipyrad and pass it the ipyclient
## process that you want the work distributed on.
data1.run("1", ipyclient=ipyclient, force=True)
## Sample objects stored as a dictionary
data1.samples
## run step 1 to create Samples objects
data1.run("1", show_cluster=True, force=True)
## print full stats summary
print data1.stats
## print full stats for step 1 (in this case it's the same but for other
## steps the stats_dfs often contains more information.)
print data1.stats_dfs.s1
## access all Sample names in data1
allsamples = data1.samples.keys()
print "Samples in data1:\n", "\n".join(allsamples)
## Drop the two samples from this list that have "prz" in their names.
## This is a programmatic way to remove the outgroup samples.
subs = [i for i in allsamples if "prz" not in i]
## use branching to create new Assembly named 'data2'
## with only Samples whose name is in the subs list
data2 = data1.branch("data2", subsamples=subs)
print "Samples in data2:\n", "\n".join(data2.samples)
## Start by creating an initial assembly, setting the path to your data,
## and running step1. I set a project-dir so that all of our data sets
## will be grouped into a single directory called 'branch-test'.
data = ip.Assembly("base")
data.set_params("project_dir", "branch-test")
data.set_params("raw_fastq_path", "./ipsimdata/rad_example_R1_.fastq.gz")
data.set_params("barcodes_path", "./ipsimdata/rad_example_barcodes.txt")
## step 1: load in the data
data.run('1')
## let's create a dictionary to hold the finished assemblies
adict = {}
## iterate over parameters settings creating a new named assembly
for filter_setting in [1, 2]:
## create a new name for the assembly and branch
newname = data.name + "_f{}".format(filter_setting)
child1 = data.branch(newname)
child1.set_params("filter_adapters", filter_setting)
child1.run("2")
## iterate over clust thresholds
for clust_threshold in ['0.85', '0.90']:
newname = child1.name + "_c{}".format(clust_threshold[2:])
child2 = child1.branch(newname)
child2.set_params("clust_threshold", clust_threshold)
child2.run("3456")
## iterate over min_sample coverage
for min_samples_locus in [4, 12]:
newname = child2.name + "_m{}".format(min_samples_locus)
child3 = child2.branch(newname)
child3.set_params("min_samples_locus", min_samples_locus)
child3.run("7")
## store the complete assembly in the dictionary by its name
## so it is easy for us to access and retrieve, since we wrote
## over the variable name 'child' during the loop. You can do
## this using dictionaries, lists, etc., or, as you'll see below,
## we can use the 'load_json()' command to load a finished assembly
## from its saved file object.
adict[newname] = child3
## run an assembly up to step 3
data.run("123", force=True)
## select clade 1 from the sample names
subs = [i for i in data.samples if "1" in i]
## branch selecting only those samples
data1 = data.branch("data1", subs)
## select clade 2 from the sample names
subs = [i for i in data.samples if "2" in i]
## branch selecting only those samples
data2 = data.branch("data2", subs)
## make diploid base calls on 'data1' samples
data1.set_params("max_alleles_consens", 2)
## make haploid base calls on 'data2' samples
data2.set_params("max_alleles_consens", 1)
## run both assemblies through base-calling steps
data1.run("45", force=True)
data2.run("45", force=True)
## merge assemblies back together for across-sample steps
data3 = ip.merge("data3", [data1, data2])
data3.run("67")
## create a branch for a population-filtered assembly
pops = data3.branch("populations")
## assign samples to populations
pops.populations = {
"clade1": (1, [i for i in pops.samples if "1" in i]),
"clade2": (1, [i for i in pops.samples if "2" in i]),
}
## print the population dictionary
pops.populations
## run assembly
pops.run("7")
## save assembly object (also auto-saves after every run() command)
data1.save()
## load assembly object
data1 = ip.load_json("pedicularis/data1.json")
## write params file for use by the CLI
data1.write_params(force=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting started with Jupyter notebooks
Step2: The ipyrad API data structures
Step3: Setting parameters
Step4: Instantaneous parameter (and error) checking
Step5: Attributes of Assembly objects
Step6: Sample Class objects
Step7: The .run() command
Step8: The run command will automatically parallelize work across all cores of a running ipcluster instance (remember, you should have started this outside of the notebook, or you can start it now). If ipcluster is running on the default profile then ipyrad will detect and use it when the run command is called. However, if you start an ipcluster instance with a specific profile name then you will need to connect to it using the ipyparallel library and then pass the connection client object to ipyrad. I'll show an example of that here.
Step9: Samples stored in an Assembly
Step10: The progress bar
Step11: Viewing results of Assembly steps
Step12: Branching to subsample taxa
Step13: Branching to iterate over parameter settings
Step14: Working with your data programmatically
Step15: Population assignments
Step16: Saving Assembly objects
|
2,761
|
<ASSISTANT_TASK:>
Python Code:
import helper
source_path = 'data/letters_source.txt'
target_path = 'data/letters_target.txt'
source_sentences = helper.load_data(source_path)
target_sentences = helper.load_data(target_path)
source_sentences[:50].split('\n')
target_sentences[:50].split('\n')
def extract_character_vocab(data):
special_words = ['<pad>', '<unk>', '<s>', '<\s>']
set_words = set([character for line in data.split('\n') for character in line])
int_to_vocab = {word_i: word for word_i, word in enumerate(special_words + list(set_words))}
vocab_to_int = {word: word_i for word_i, word in int_to_vocab.items()}
return int_to_vocab, vocab_to_int
# Build int2letter and letter2int dicts
source_int_to_letter, source_letter_to_int = extract_character_vocab(source_sentences)
target_int_to_letter, target_letter_to_int = extract_character_vocab(target_sentences)
# Convert characters to ids
source_letter_ids = [[source_letter_to_int.get(letter, source_letter_to_int['<unk>']) for letter in line] for line in source_sentences.split('\n')]
target_letter_ids = [[target_letter_to_int.get(letter, target_letter_to_int['<unk>']) for letter in line] for line in target_sentences.split('\n')]
print("Example source sequence")
print(source_letter_ids[:3])
print("\n")
print("Example target sequence")
print(target_letter_ids[:3])
def pad_id_sequences(source_ids, source_letter_to_int, target_ids, target_letter_to_int, sequence_length):
new_source_ids = [sentence + [source_letter_to_int['<pad>']] * (sequence_length - len(sentence)) \
for sentence in source_ids]
new_target_ids = [sentence + [target_letter_to_int['<pad>']] * (sequence_length - len(sentence)) \
for sentence in target_ids]
return new_source_ids, new_target_ids
# Use the longest sequence as sequence length
sequence_length = max(
[len(sentence) for sentence in source_letter_ids] + [len(sentence) for sentence in target_letter_ids])
# Pad all sequences up to sequence length
source_ids, target_ids = pad_id_sequences(source_letter_ids, source_letter_to_int,
target_letter_ids, target_letter_to_int, sequence_length)
print("Sequence Length")
print(sequence_length)
print("\n")
print("Input sequence example")
print(source_ids[:3])
print("\n")
print("Target sequence example")
print(target_ids[:3])
from distutils.version import LooseVersion
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Number of Epochs
epochs = 60
# Batch Size
batch_size = 32
# RNN Size
rnn_size = 50
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 13
decoding_embedding_size = 13
# Learning Rate
learning_rate = 0.001
input_data = tf.placeholder(tf.int32, [batch_size, sequence_length])
targets = tf.placeholder(tf.int32, [batch_size, sequence_length])
lr = tf.placeholder(tf.float32)
source_vocab_size = len(source_letter_to_int)
# Encoder embedding
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, encoding_embedding_size)
# Encoder
enc_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)
_, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, dtype=tf.float32)
import numpy as np
# Process the input we'll feed to the decoder
ending = tf.strided_slice(targets, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_letter_to_int['<s>']), ending], 1)
demonstration_outputs = np.reshape(range(batch_size * sequence_length), (batch_size, sequence_length))
sess = tf.InteractiveSession()
print("Targets")
print(demonstration_outputs[:2])
print("\n")
print("Processed Decoding Input")
print(sess.run(dec_input, {targets: demonstration_outputs})[:2])
target_vocab_size = len(target_letter_to_int)
# Decoder Embedding
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
# Decoder RNNs
dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)
with tf.variable_scope("decoding") as decoding_scope:
# Output Layer
output_fn = lambda x: tf.contrib.layers.fully_connected(x, target_vocab_size, None, scope=decoding_scope)
with tf.variable_scope("decoding") as decoding_scope:
# Training Decoder
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(enc_state)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
# Apply output function
train_logits = output_fn(train_pred)
with tf.variable_scope("decoding", reuse=True) as decoding_scope:
# Inference Decoder
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn, enc_state, dec_embeddings, target_letter_to_int['<s>'], target_letter_to_int['<\s>'],
sequence_length - 1, target_vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([batch_size, sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
import numpy as np
train_source = source_ids[batch_size:]
train_target = target_ids[batch_size:]
valid_source = source_ids[:batch_size]
valid_target = target_ids[:batch_size]
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch, targets: target_batch, lr: learning_rate})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source})
train_acc = np.mean(np.equal(target_batch, np.argmax(batch_train_logits, 2)))
valid_acc = np.mean(np.equal(valid_target, np.argmax(batch_valid_logits, 2)))
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_ids) // batch_size, train_acc, valid_acc, loss))
input_sentence = 'hello'
input_sentence = [source_letter_to_int.get(word, source_letter_to_int['<unk>']) for word in input_sentence.lower()]
input_sentence = input_sentence + [0] * (sequence_length - len(input_sentence))
batch_shell = np.zeros((batch_size, sequence_length))
batch_shell[0] = input_sentence
chatbot_logits = sess.run(inference_logits, {input_data: batch_shell})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in input_sentence]))
print(' Input Words: {}'.format([source_int_to_letter[i] for i in input_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(chatbot_logits, 1)]))
print(' Chatbot Answer Words: {}'.format([target_int_to_letter[i] for i in np.argmax(chatbot_logits, 1)]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's start by examining the current state of the dataset. source_sentences contains the entire input sequence file as text delimited by newline symbols.
Step2: target_sentences contains the entire output sequence file as text delimited by newline symbols. Each line corresponds to the line from source_sentences. target_sentences contains the sorted characters of the line.
Step3: Preprocess
Step4: The last step in the preprocessing stage is to determine the longest sequence size in the dataset we'll be using, then pad all the sequences to that length.
Step5: This is the final shape we need them to be in. We can now proceed to building the model.
Step6: Hyperparameters
Step7: Input
Step8: Sequence to Sequence
Step9: Process Decoding Input
Step10: Decoding
Step11: Decoder During Training
Step12: Decoder During Inference
Step13: Optimization
Step14: Train
Step15: Prediction
|
2,762
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import sys
from casadi import *
import os
import time
# Add do_mpc to path. This is not necessary if it was installed via pip
sys.path.append('../../../')
# Import do_mpc package:
import do_mpc
import matplotlib.pyplot as plt
import pandas as pd
sp = do_mpc.sampling.SamplingPlanner()
sp.set_param(overwrite = True)
# This generates the directory, if it does not exist already.
sp.data_dir = './sampling_test/'
sp.set_sampling_var('alpha', np.random.randn)
sp.set_sampling_var('beta', lambda: np.random.randint(0,5))
plan = sp.gen_sampling_plan(n_samples=10)
pd.DataFrame(plan)
plan = sp.add_sampling_case(alpha=1, beta=-0.5)
print(plan[-1])
sampler = do_mpc.sampling.Sampler(plan)
sampler.set_param(overwrite = True)
def sample_function(alpha, beta):
time.sleep(0.1)
return alpha*beta
sampler.set_sample_function(sample_function)
sampler.data_dir = './sampling_test/'
sampler.set_param(sample_name = 'dummy_sample')
sampler.sample_data()
ls = os.listdir('./sampling_test/')
ls.sort()
ls
dh = do_mpc.sampling.DataHandler(plan)
dh.data_dir = './sampling_test/'
dh.set_param(sample_name = 'dummy_sample')
dh.set_post_processing('res_1', lambda x: x)
dh.set_post_processing('res_2', lambda x: x**2)
pd.DataFrame(dh[:3])
pd.DataFrame(dh.filter(input_filter = lambda alpha: alpha<0))
pd.DataFrame(dh.filter(output_filter = lambda res_2: res_2>10))
sys.path.append('../../../examples/oscillating_masses_discrete/')
from template_model import template_model
from template_mpc import template_mpc
from template_simulator import template_simulator
# Initialize sampling planner
sp = do_mpc.sampling.SamplingPlanner()
sp.set_param(overwrite=True)
# Sample random feasible initial states
def gen_initial_states():
x0 = np.random.uniform(-3*np.ones((4,1)),3*np.ones((4,1)))
return x0
# Add sampling variable including the corresponding evaluation function
sp.set_sampling_var('X0', gen_initial_states)
plan = sp.gen_sampling_plan(n_samples=9)
model = template_model()
mpc = template_mpc(model)
estimator = do_mpc.estimator.StateFeedback(model)
simulator = template_simulator(model)
def run_closed_loop(X0):
mpc.reset_history()
simulator.reset_history()
estimator.reset_history()
# set initial values and guess
x0 = X0
mpc.x0 = x0
simulator.x0 = x0
estimator.x0 = x0
mpc.set_initial_guess()
# run the closed loop for 150 steps
for k in range(100):
u0 = mpc.make_step(x0)
y_next = simulator.make_step(u0)
x0 = estimator.make_step(y_next)
# we return the complete data structure that we have obtained during the closed-loop run
return simulator.data
%%capture
# Initialize sampler with generated plan
sampler = do_mpc.sampling.Sampler(plan)
# Set directory to store the results:
sampler.data_dir = './sampling_closed_loop/'
sampler.set_param(overwrite=True)
# Set the sampling function
sampler.set_sample_function(run_closed_loop)
# Generate the data
sampler.sample_data()
# Initialize DataHandler
dh = do_mpc.sampling.DataHandler(plan)
dh.data_dir = './sampling_closed_loop/'
dh.set_post_processing('input', lambda data: data['_u', 'u'])
dh.set_post_processing('state', lambda data: data['_x', 'x'])
res = dh[:]
n_res = min(len(res),80)
n_row = int(np.ceil(np.sqrt(n_res)))
n_col = int(np.ceil(n_res/n_row))
fig, ax = plt.subplots(n_row, n_col, sharex=True, sharey=True, figsize=(8,8))
for i, res_i in enumerate(res):
ax[i//n_col, np.mod(i,n_col)].plot(res_i['state'][:,1],res_i['state'][:,0])
for i in range(ax.size):
ax[i//n_col, np.mod(i,n_col)].axis('off')
fig.tight_layout(pad=0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Toy example
Step 1
Step2: We then introduce new variables to the SamplingPlanner which will later jointly define a sampling case. Think of header rows in a table (see figure above).
Step3: In this example we have two variables alpha and beta. We have
Step4: We can inspect the plan conveniently by converting it to a pandas DataFrame. Natively, the plan is a list of dictionaries.
Step5: If we do not wish to automatically generate a sampling plan, we can also add sampling cases one by one with
Step6: Typically, we finish the process of generating the sampling plan by saving it to the disc. This is simply done with
Step7: The most important setting of the sampler is the sample_function. This function takes as arguments the previously defined sampling_var (from the configuration of the SamplingPlanner).
Step8: Before we sample, we want to set the directory for the created files and a name
Step9: Now we can actually create all the samples
Step10: The sampler will now create the sampling results as a new file for each result and store them in a subfolder with the same name as the sampling_plan
Step11: Step 3
Step12: We then need to point out where the data is stored and how the samples are called
Step13: Next, we define the post-processing functions. For this toy example we do some "dummy" post-processing and request to compute two results
Step14: The interface of DataHandler.set_post_processing requires a name that we will see again later and a function that processes the output of the previously defined sample_function.
Step15: Or we use a more complex filter with the DataHandler.filter method. This method requires either an input or an output filter in the form of a function.
Step16: Or we can filter by outputs, e.g. with
Step17: Sampling closed-loop trajectories
Step18: Step 1
Step19: This implementation is sufficient to generate the sampling plan
Step20: Since we want to run the system in the closed-loop in our sample function, we need to load the corresponding configuration
Step21: We can now define the sampling function
Step22: Now we have all the ingredients to make our sampler
Step23: Step 3
Step24: In this case, we are interested in the states and the inputs of all trajectories. We define the following post processing functions
Step25: To retrieve all post-processed data from the datahandler we use slicing. The result is stored in res.
Step26: To inspect the sampled closed-loop trajectories, we create an array of plots where in each plot $x_2$ is plotted over $x_1$. This shows the different behavior, based on the sampled initial conditions
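Step4 above notes that, natively, a sampling plan is simply a list of dictionaries (one per sampling case). A minimal stand-alone sketch of that structure and its DataFrame view, with illustrative field names mirroring the toy example:

```python
import pandas as pd

# A sampling plan is natively a list of dicts, one per sampling case
# ('id', 'alpha', 'beta' are illustrative names from the toy example):
plan = [
    {'id': '000', 'alpha': 0.31, 'beta': 2},
    {'id': '001', 'alpha': -0.74, 'beta': 5},
]
# Inspect it conveniently as a pandas DataFrame:
df = pd.DataFrame(plan)
print(df)
```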
|
2,763
|
<ASSISTANT_TASK:>
Python Code:
!rm -rf *
!rm -rf .config
!rm -rf .git
!git clone https://github.com/google-research/scenic.git .
!python -m pip install -q .
!python -m pip install -r scenic/projects/baselines/clip/requirements.txt
!echo "Done."
import os
import jax
from matplotlib import pyplot as plt
import numpy as np
from scenic.projects.owl_vit import models
from scenic.projects.owl_vit.configs import clip_b32
from scipy.special import expit as sigmoid
import skimage
from skimage import io as skimage_io
from skimage import transform as skimage_transform
config = clip_b32.get_config()
module = models.TextZeroShotDetectionModule(
body_configs=config.model.body,
normalize=config.model.normalize,
box_bias=config.model.box_bias)
variables = module.load_variables(config.init_from.checkpoint_path)
# Load example image:
filename = os.path.join(skimage.data_dir, 'astronaut.png')
image_uint8 = skimage_io.imread(filename)
image = image_uint8.astype(np.float32) / 255.0
# Pad to square with gray pixels on bottom and right:
h, w, _ = image.shape
size = max(h, w)
image_padded = np.pad(
image, ((0, size - h), (0, size - w), (0, 0)), constant_values=0.5)
# Resize to model input size:
input_image = skimage.transform.resize(
image_padded,
(config.dataset_configs.input_size, config.dataset_configs.input_size),
anti_aliasing=True)
text_queries = ['human face', 'rocket', 'nasa badge', 'star-spangled banner']
tokenized_queries = np.array([
module.tokenize(q, config.dataset_configs.max_query_length)
for q in text_queries
])
# Pad tokenized queries to avoid recompilation if number of queries changes:
tokenized_queries = np.pad(
tokenized_queries,
pad_width=((0, 100 - len(text_queries)), (0, 0)),
constant_values=0)
# Note: The model expects a batch dimension.
predictions = module.apply(
variables,
input_image[None, ...],
tokenized_queries[None, ...],
train=False)
# Remove batch dimension and convert to numpy:
predictions = jax.tree_map(lambda x: np.array(x[0]), predictions)
%matplotlib inline
score_threshold = 0.1
logits = predictions['pred_logits'][..., :len(text_queries)] # Remove padding.
scores = sigmoid(np.max(logits, axis=-1))
labels = np.argmax(predictions['pred_logits'], axis=-1)
boxes = predictions['pred_boxes']
fig, ax = plt.subplots(1, 1, figsize=(8, 8))
ax.imshow(input_image, extent=(0, 1, 1, 0))
ax.set_axis_off()
for score, box, label in zip(scores, boxes, labels):
if score < score_threshold:
continue
cx, cy, w, h = box
ax.plot([cx - w / 2, cx + w / 2, cx + w / 2, cx - w / 2, cx - w / 2],
[cy - h / 2, cy - h / 2, cy + h / 2, cy + h / 2, cy - h / 2], 'r')
ax.text(
cx - w / 2,
cy + h / 2 + 0.015,
f'{text_queries[label]}: {score:1.2f}',
ha='left',
va='top',
color='red',
bbox={
'facecolor': 'white',
'edgecolor': 'red',
'boxstyle': 'square,pad=.3'
})
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Choose config
Step2: Load the model and variables
Step3: Prepare image
Step4: Prepare text queries
Step5: Get predictions
Step6: Plot predictions
|
2,764
|
<ASSISTANT_TASK:>
Python Code:
peyton_dataset_url = 'https://github.com/facebookincubator/prophet/blob/master/examples/example_wp_peyton_manning.csv'
peyton_filename = '../datasets/example_wp_peyton_manning.csv'
import pandas as pd
import numpy as np
from fbprophet import Prophet
# NB: this didn't work as of 8/22/17
#import io
#import requests
#s=requests.get(peyton_dataset_url).content
#df=pd.read_csv(io.StringIO(s.decode('utf-8')))#df = pd.read_csv(peyton_dataset_url)
df = pd.read_csv(peyton_filename)
# transform to log scale
df['y']=np.log(df['y'])
df.head()
m = Prophet()
m.fit(df);
future = m.make_future_dataframe(periods=365)
future.tail()
forecast = m.predict(future)
forecast[['ds','yhat','yhat_lower','yhat_upper']].tail()
m.plot(forecast)
m.plot_components(forecast)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fit the model by instantiating a new Prophet object. Any settings required for the forecasting procedure are passed to this object upon construction. You can then call this object's fit method and pass in the historical dataframe. Fitting should take 1-5 seconds.
Step2: Predictions are then made on a dataframe with a column ds containing the dates for which a prediction is to be made. You can get a suitable dataframe that extends into the future a specified number of days using the helper method Prophet.make_future_dataframe. By default it will also include the dates from the history, so we will see the model fit as well.
Step3: The predict method will assign each row in future a predicted value which it names yhat. If you pass in historical dates, it will provide an in-sample fit. The forecast object here is a new dataframe that includes a column yhat with the forecast, as well as columns for components and uncertainty intervals.
Step4: You can plot the forecast by calling the Prophet.plot method and passing in your forecast dataframe
Step5: If you want to see the forecast components, you can use the Prophet.plot_components method. By default you’ll see the trend, yearly seasonality, and weekly seasonality of the time series.
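Note that because y was log-transformed before fitting, yhat and its uncertainty bounds are on the log scale; a minimal sketch of the back-transform to the original scale (the yhat values below are hypothetical):

```python
import numpy as np
import pandas as pd

# Hypothetical yhat values on the log scale:
yhat_log = pd.Series([7.5, 7.6, 7.4])
# Exponentiate to return to the original scale of the data:
yhat = np.exp(yhat_log)
# The round trip recovers the log-scale values:
assert np.allclose(np.log(yhat), yhat_log)
```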
|
2,765
|
<ASSISTANT_TASK:>
Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
data = pd.read_csv('data/glucose_insulin.csv', index_col='time');
I = interpolate(data.insulin)
params = Params(G0 = 290,
k1 = 0.03,
k2 = 0.02,
k3 = 1e-05)
def make_system(params, data):
    """Makes a System object with the given parameters.
    params: sequence of G0, k1, k2, k3
    data: DataFrame with `glucose` and `insulin`
    returns: System object
    """
G0, k1, k2, k3 = params
Gb = data.glucose[0]
Ib = data.insulin[0]
I = interpolate(data.insulin)
t_0 = get_first_label(data)
t_end = get_last_label(data)
init = State(G=G0, X=0)
return System(params,
init=init, Gb=Gb, Ib=Ib, I=I,
t_0=t_0, t_end=t_end, dt=2)
system = make_system(params, data)
def update_func(state, t, system):
    """Updates the glucose minimal model.
    state: State object
    t: time in min
    system: System object
    returns: State object
    """
G, X = state
k1, k2, k3 = system.k1, system.k2, system.k3
I, Ib, Gb = system.I, system.Ib, system.Gb
dt = system.dt
dGdt = -k1 * (G - Gb) - X*G
dXdt = k3 * (I(t) - Ib) - k2 * X
G += dGdt * dt
X += dXdt * dt
return State(G=G, X=X)
update_func(system.init, system.t_0, system)
def run_simulation(system, update_func):
    """Runs a simulation of the system.
    system: System object
    update_func: function that updates state
    returns: TimeFrame
    """
init = system.init
t_0, t_end, dt = system.t_0, system.t_end, system.dt
frame = TimeFrame(columns=init.index)
frame.row[t_0] = init
ts = linrange(t_0, t_end, dt)
for t in ts:
frame.row[t+dt] = update_func(frame.row[t], t, system)
return frame
results = run_simulation(system, update_func);
results
subplot(2, 1, 1)
plot(results.G, 'b-', label='simulation')
plot(data.glucose, 'bo', label='glucose data')
decorate(ylabel='Concentration (mg/dL)')
subplot(2, 1, 2)
plot(results.X, 'C1', label='remote insulin')
decorate(xlabel='Time (min)',
ylabel='Concentration (arbitrary units)')
savefig('figs/chap18-fig01.pdf')
def slope_func(state, t, system):
    """Computes derivatives of the glucose minimal model.
    state: State object
    t: time in min
    system: System object
    returns: derivatives of G and X
    """
G, X = state
k1, k2, k3 = system.k1, system.k2, system.k3
I, Ib, Gb = system.I, system.Ib, system.Gb
dGdt = -k1 * (G - Gb) - X*G
dXdt = k3 * (I(t) - Ib) - k2 * X
return dGdt, dXdt
slope_func(system.init, 0, system)
results2, details = run_ode_solver(system, slope_func)
details
results2
plot(results.G, 'C0', label='run_simulation')
plot(results2.G, 'C2--', label='run_ode_solver')
decorate(xlabel='Time (min)', ylabel='Concentration (mg/dL)')
savefig('figs/chap18-fig02.pdf')
diff = results.G - results2.G
percent_diff = diff / results2.G * 100
percent_diff
max(abs(percent_diff))
# Solution
system3 = System(system, dt=1)
results3, details = run_ode_solver(system3, slope_func)
details
# Solution
plot(results2.G, 'C2--', label='run_ode_solver (dt=2)')
plot(results3.G, 'C3:', label='run_ode_solver (dt=1)')
decorate(xlabel='Time (m)', ylabel='mg/dL')
# Solution
diff = (results2.G - results3.G).dropna()
percent_diff = diff / results2.G * 100
# Solution
max(abs(percent_diff))
source_code(run_ode_solver)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Code from the previous chapter
Step2: Interpolate the insulin data.
Step3: The glucose minimal model
Step5: Here's a version of make_system that takes the parameters and data
Step7: And here's the update function.
Step8: Before running the simulation, it is always a good idea to test the update function using the initial conditions. In this case we can veryify that the results are at least qualitatively correct.
Step10: Now run_simulation is pretty much the same as it always is.
Step11: And here's how we run it.
Step12: The results are in a TimeFrame object with one column per state variable.
Step13: The following plot shows the results of the simulation along with the actual glucose data.
Step15: Numerical solution
Step16: We can test the slope function with the initial conditions.
Step17: Here's how we run the ODE solver.
Step18: details is a ModSimSeries object with information about how the solver worked.
Step19: results is a TimeFrame with one row for each time step and one column for each state variable
Step20: Plotting the results from run_simulation and run_ode_solver, we can see that they are not very different.
Step21: The differences in G are less than 2%.
Step22: Exercises
Step23: Under the hood
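The update functions above advance the state with simple forward-Euler steps (G += dGdt * dt). A self-contained sketch of that scheme applied to just the linear decay term of the model, showing its first-order error against the exact exponential:

```python
import math

# Forward Euler on dG/dt = -k1*G (the linear decay term of the model)
k1, dt, G = 0.03, 2.0, 290.0
steps = 50
for _ in range(steps):
    G += (-k1 * G) * dt          # same pattern as update_func above
exact = 290.0 * math.exp(-k1 * steps * dt)
# Euler is only first-order accurate, so expect a modest relative error:
rel_err = abs(G - exact) / exact
assert rel_err < 0.15
```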
|
2,766
|
<ASSISTANT_TASK:>
Python Code:
# Imports assumed from the original notebook's setup (e.g. a %pylab-style preamble):
import numpy as np
import matplotlib.pyplot as plt
from numpy import pi, sin, cos, sqrt, exp
m = 1.00
k = 4*pi*pi
wn = 2*pi
T = 1.0
z = 0.02
wd = wn*sqrt(1-z*z)
c = 2*z*wn*m
NSTEPS = 200 # steps per second
h = 1.0 / NSTEPS
def load(t):
return np.where(t<0, 0, np.where(t<5, sin(0.5*wn*t)**2, 0))
t = np.linspace(-1, 6, 7*NSTEPS+1)
plt.plot(t, load(t))
plt.ylim((-0.05, 1.05));
kstar = k + 2*c/h + 4*m/h/h
astar = 2*m
vstar = 2*c + 4*m/h
t = np.linspace(0, 8+h, NSTEPS*8+2)
P = load(t)
DP = P[+1:]-P[:-1]
x, v, a = [], [], []
x0, v0 = 0.0, 0.0
for p, dp in zip(P, DP):
a0 = (p - k*x0 - c*v0)/m
x.append(x0), v.append(v0), a.append(a0)
dx = (dp + astar*a0 + vstar*v0)/kstar
dv = 2*(dx/h-v0)
x0, v0 = x0+dx, v0+dv
x, v = np.array(x), np.array(v)
plt.plot(t[:-1],x)
xe = (((1 - 2*z**2)*sin(wd*t) / (2*z*sqrt(1-z*z)) - cos(wd*t)) *exp(-z*wn*t) + 1. - sin(wn*t)/(2*z))/2/k
plt.plot(t[:1001], xe[:1001], lw=2)
plt.plot(t[:1001:10], x[:1001:10], 'ko')
plt.plot(t[:1001], x[:1001]-xe[:1001])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define the Loading
Step2: Numerical Constants
Step3: Vectorize the time and the load
Step4: Integration
Step5: Results
Step6: Comparison
Step7: Finally, we plot the difference between the exact and the approximate response.
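As a sanity check on the incremental scheme above, one can run it load-free on the undamped oscillator, whose exact free-vibration solution for x0 = 1, v0 = 0 is cos(2πt), so the response must return to x = 1 after one period; a self-contained sketch:

```python
import numpy as np

# Undamped SDOF: wn = 2*pi, period T = 1 s; no damping, no load
m, k, c = 1.0, 4*np.pi**2, 0.0
h = 1.0 / 200
kstar = k + 2*c/h + 4*m/(h*h)
astar = 2*m
vstar = 2*c + 4*m/h

x0, v0 = 1.0, 0.0
for _ in range(200):                       # one full period, p = dp = 0
    a0 = (0.0 - k*x0 - c*v0) / m
    dx = (0.0 + astar*a0 + vstar*v0) / kstar
    dv = 2*(dx/h - v0)
    x0, v0 = x0 + dx, v0 + dv

assert abs(x0 - 1.0) < 1e-3                # back to x = 1 after one period
```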
|
2,767
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
scatx=np.random.rand(50)
scaty=np.random.randn(50)
f= plt.figure(figsize=(9,6))
plt.scatter(scatx,scaty,c=u'k',marker=u'o',alpha=1)
plt.xlabel('X')
plt.ylabel('Y')
plt.title("Scatter Plot of A Set of Random Data")
x= np.random.rand(50)
plt.hist(x)
plt.xlabel("X")
plt.ylabel("Y")
plt.title("One Dimensional Histogram of a Random Set of Data")
plt.hist(x, bins=50, density=True, facecolor="r", stacked=False)  # 'normed' was removed in newer matplotlib; use 'density'
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Scatter plots
Step2: Histogram
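On normalization: recent matplotlib versions use density= (the older normed= keyword was removed), which scales bin heights so the total bar area integrates to 1. A quick numpy check of that property:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(1000)
# density=True divides counts by (n * bin width), so the area sums to 1
counts, edges = np.histogram(x, bins=50, density=True)
area = float(np.sum(counts * np.diff(edges)))
assert abs(area - 1.0) < 1e-9
```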
|
2,768
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
df = pd.read_csv('../data/train.csv')
df.head(10)
df = df.drop(['Name', 'Ticket', 'Cabin'], axis=1)
df.info()
df = df.dropna()
df['Sex'].unique()
df['Gender'] = df['Sex'].map({'female': 0, 'male':1}).astype(int)
df['Embarked'].unique()
df['Port'] = df['Embarked'].map({'C':1, 'S':2, 'Q':3}).astype(int)
df = df.drop(['Sex', 'Embarked'], axis=1)
cols = df.columns.tolist()
print(cols)
cols = [cols[1]] + cols[0:1] + cols[2:]
df = df[cols]
df.head(10)
df.info()
train_data = df.values
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(n_estimators = 100)
model = model.fit(train_data[0:,2:],train_data[0:,0])
df_test = pd.read_csv('../data/test.csv')
df_test.head(10)
df_test = df_test.drop(['Name', 'Ticket', 'Cabin'], axis=1)
df_test = df_test.dropna()
df_test['Gender'] = df_test['Sex'].map({'female': 0, 'male':1})
df_test['Port'] = df_test['Embarked'].map({'C':1, 'S':2, 'Q':3})
df_test = df_test.drop(['Sex', 'Embarked'], axis=1)
test_data = df_test.values
output = model.predict(test_data[:,1:])
result = np.c_[test_data[:,0].astype(int), output.astype(int)]
df_result = pd.DataFrame(result[:,0:2], columns=['PassengerId', 'Survived'])
df_result.head(10)
df_result.to_csv('../results/titanic_1-0.csv', index=False)
df_result.shape
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pandas - Cleaning data
Step2: We notice that the columns describe features of the Titanic passengers, such as age, sex, and class. Of particular interest is the column Survived, which describes whether or not the passenger survived. When training our model, what we are essentially doing is assessing how each feature impacts whether or not the passenger survived (or if the feature makes an impact at all).
Step3: Next, we review the type of data in the columns, and their respective counts.
Step4: We notice that the columns Age and Embarked have NAs or missing values. As previously discussed, we take the approach of simply removing the rows with missing values.
Step5: Question
Step6: Similarly for Embarked, we review the range of values and create a new column called Port that represents, as a numerical value, where each passenger embarks from.
Step7: Question
Step8: We review the columns our final, processed data set.
Step9: For convenience, we move the column Survived to the left-most column. We note that the left-most column is indexed as 0.
Step10: In our final review of our training data, we check that (1) the column Survived is the left-most column, (2) there are no NA values, and (3) all the values are in numerical form.
Step11: Finally, we convert the processed training data from a Pandas dataframe into a numerical (Numpy) array.
Step12: Scikit-learn - Training the model
Step13: We use the processed training data to 'train' (or 'fit') our model. The column Survived will be our first input, and the set of other features (with the column PassengerId omitted) as the second.
Step14: Scikit-learn - Making predictions
Step15: We then review a selection of the data.
Step16: We notice that test data has columns similar to our training data, but not the column Survived. We'll use our trained model to predict values for the column Survived.
Step17: We now apply the trained model to the test data (omitting the column PassengerId) to produce an output of predictions.
Step18: Pandas - Preparing submission
Step19: We briefly review our predictions.
Step20: Finally, we output our results to a .csv file.
Step21: However, it appears that we have a problem. The Kaggle submission website expects "the solution file to have 418 predictions."
|
2,769
|
<ASSISTANT_TASK:>
Python Code:
from k2datascience import hr_analytics
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
%matplotlib inline
hr = hr_analytics.HR()
print(f'Data Shape\n\n{hr.data.shape}')
print('\n\nColumns\n\n{}'.format('\n'.join(hr.data.columns)))
hr.data.head()
hr.box_plot();
print(f'P(employee left the company) = {hr.p_left_company:.3f}')
print(f'P(employee experienced a work accident) = {hr.p_work_accident:.3f}')
print(f'P(employee experienced accident and left company) = {hr.p_left_and_accident:.3f}')
hr.compare_satisfaction()
hours_variance, hours_std = hr.calc_hours_stats()
print(f'Hours Worked Variance: {hours_variance:.3f}')
print(f'Hours Worked Standard Deviation: {hours_std:.3f}')
satisfaction_ex, satisfaction_current = hr.compare_satisfaction_variance()
print(f'Ex-Employee Job Satisfaction Variance: {satisfaction_ex:.3f}')
print(f'Current Employee Job Satisfaction Variance: {satisfaction_current:.3f}')
hr.calc_satisfaction_salary()
hr.calc_p_hours_salary()
hr.calc_p_left_salary()
hr.calc_p_salary_promotion()
print(f'Approximate Sample Satisfaction Mean: {hr.data.satisfaction.mean():.3f}')
sample_n = 10
sample_means = []
for n in range(sample_n):
sample_means.append(hr.calc_satisfaction_random_sample(50))
sample_mean = '\n'.join([f'{x:.3f}' for x in sample_means])
print(f'Actual Sample Satisfaction Mean: {sum(sample_means) / sample_n}')
print('Actual Sample Satisfaction Values:')
print(f'{sample_mean}')
print('\n'.join(hr.bernoulli_vars))
hr.calc_p_bernoulli()
hr.calc_bernoulli_variance()
hr.calc_p_bernoulli_k()
hr.calc_p_bernoulli_k(cumulative=True)
hr.bernoulli_plot()
print('\n'.join(hr.normal_vars))
hr.gaussian_plot()
hr.norm_stats
hr.gaussian_plot(normal_overlay=True)
print('\n'.join(hr.poisson_vars))
poisson = hr.poisson_distributions()
poisson
poisson[[f'p_{x}' for x in hr.poisson_vars]]
print('\n'.join(hr.central_limit_vars))
hr.central_limit_plot()
left = hr.data.query('left == 1').satisfaction
stayed = hr.data.query('left == 0').satisfaction
comparison = hr.compare_confidence(left, 'left', stayed, 'stayed', 0.95)
comparison
ttest = hr.t_test(left, 'left', stayed, 'stayed')
mean_diff = abs(left.mean() - stayed.mean())
variance_diff = abs(left.var() - stayed.var())
print(f'T-test P-value: {ttest[1]:.3f}')
print(f'Difference of Means: {mean_diff:.3f}')
print(f'Difference of Variances: {variance_diff:.3f}')
low = hr.data.query('salary == "low"').satisfaction
medium = hr.data.query('salary == "medium"').satisfaction
high = hr.data.query('salary == "high"').satisfaction
hr.t_test(low, 'low',
hr.data.satisfaction, 'All Satisfaction Data',
independent_vars=False)
hr.t_test(medium, 'medium',
hr.data.satisfaction, 'All Satisfaction Data',
independent_vars=False)
hr.t_test(high, 'high',
hr.data.satisfaction, 'All Satisfaction Data',
independent_vars=False)
hr.t_test(low, 'low', medium, 'medium', high, 'high');
low = hr.data.query('salary == "low"').evaluation
medium = hr.data.query('salary == "medium"').evaluation
high = hr.data.query('salary == "high"').evaluation
hr.t_test(low, 'low',
hr.data.evaluation, 'All Evaluation Data',
independent_vars=False)
hr.t_test(medium, 'medium',
hr.data.evaluation, 'All Evaluation Data',
independent_vars=False)
hr.t_test(high, 'high',
hr.data.evaluation, 'All Evaluation Data',
independent_vars=False)
hr.t_test(low, 'low', medium, 'medium', high, 'high');
high.mean()
hr.data.evaluation.mean()
medium = hr.data.query('salary == "medium"').satisfaction
high = hr.data.query('salary == "high"').satisfaction
hr.calc_power(medium, high)
hr.bootstrap(hr.data.satisfaction, n=100, sets=100)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Data
Step2: Explore the data
Step3: Probability, Expectation Values, and Variance
Step4: Compute the 25th, 50th, and 90th percentiles for the satisfaction level score for all employees that left the company. Compare these results to the same percentiles for those that did not leave. What can you say about the results?
Step5: Findings
Step6: Compare the variance between the satisfaction levels of employees who left versus those who stayed. Which is larger? What does this mean?
Step7: Findings
Step8: Findings
Step9: Findings
Step10: Findings
Step11: Findings
Step12: Distributions and The Central Limit Theorem
Step13: For the k variables you identified in part 1, compute the probabilities $p_k$, of each having a positive $(x = 1)$ result.
Step14: Compute the variance of each of the variables in part 2 using $p_k$ as described above.
Step15: For each of the k variables, compute the probability of randomly selecting 3500 employees with a positive result. Comment on your answer.
Step16: Findings
Step17: Findings
Step18: The Normal Distribution
Step19: For the variables in part 1, plot some histograms.
Step20: Compute the mean and variance for each of the variables used in parts 1 and 2.
Step21: Using the mean and variance in part 3, construct normal distributions for each and overlay them on top of the histograms you made in part one. Are they well approximated by normals?
Step22: Findings
Step23: For each variable in part 1, divide each by salary and fit a Poisson distribution to each.
Step24: Findings
Step25: Using the variables chosen in part 1, randomly select a set of n = 10, n = 100, n = 500 and n = 1000 samples and take the mean. Repeat this 1000 times for each variable.
Step26: Findings
Step27: Findings
Step28: Findings
Step29: Findings
Step30: Findings
Step31: Bootstrapping
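The internals of the hr.bootstrap helper are not shown here; a generic numpy sketch of the technique itself (resample with replacement, recompute the statistic, read off a percentile interval), making no assumptions about that API:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=0.6, scale=0.2, size=500)   # stand-in for satisfaction scores

# Bootstrap the mean: resample with replacement and recompute the statistic
boot_means = np.array([rng.choice(data, size=data.size, replace=True).mean()
                       for _ in range(1000)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])   # 95% percentile interval
assert lo < data.mean() < hi
```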
|
2,770
|
<ASSISTANT_TASK:>
Python Code:
chain = sisl.Geometry([0]*3, sisl.Atom(1, R=1.), sc=[1, 1, 10])
chain.set_nsc([3, 3, 1])
# Transport along y-direction
chain = chain.tile(20, 0)
He = sisl.Hamiltonian(chain)
He.construct(([0.1, 1.1], [0, -1]))
Hd = He.tile(20, 1)
He.write('ELEC.nc')
Hd.write('DEVICE.nc')
with open('RUN.fdf', 'w') as f:
    f.write("""
TBT.k [ 3 1 1 ]
TBT.DOS.A
TBT.Current.Orb
TBT.HS DEVICE.nc
%block TBT.Elec.Left
HS ELEC.nc
semi-inf-direction -a2
electrode-position 1
%endblock
%block TBT.Elec.Right
HS ELEC.nc
semi-inf-direction +a2
electrode-position end -1
%endblock
""")
trs = sisl.get_sile('TRS/siesta.TBT.nc')
no_trs = sisl.get_sile('NO_TRS/siesta.TBT.nc')
def plot_bond(tbt, E):
xy = tbt.geometry.xyz[:, :2]
vector = tbt.vector_current(0, E)[:, :2]
atom = tbt.atom_current(0, E)
# Normalize atomic current
atom += 1
atom *= 10 / atom.max()
plt.scatter(xy[:, 0], xy[:, 1], atom);
plt.quiver(xy[:, 0], xy[:, 1], vector[:, 0], vector[:, 1]);
plt.gca().get_xaxis().set_visible(False)
plt.gca().get_yaxis().set_visible(False)
plt.tight_layout()
plot_bond(trs, 1.)
plt.savefig('fig/trs.png')
plot_bond(no_trs, 1.)
plt.savefig('fig/no_trs.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example of the $k$-point sampling for TBtrans.
Step2: Run these two executables
|
2,771
|
<ASSISTANT_TASK:>
Python Code::
def load_doc(filename):
# open the file as read only
file = open(filename, 'r')
# read all text
text = file.read()
# close the file
file.close()
return text
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
2,772
|
<ASSISTANT_TASK:>
Python Code:
#import tm1 service module
from TM1py.Services import TM1Service
#import tm1 utils module
from TM1py.Utils import Utils
#import pandas
import pandas as pd
#import matplotlib
import matplotlib.pyplot as plt
#inline plotting for matplotlib
%matplotlib inline
#import statsmodels package
from fbprophet import Prophet
#Server address
address = 'localhost'
#HTTP port number - this can be found in your config file
port = '8892'
#username
user = 'admin'
#password
password = 'apple'
#SSL parameter - this can be found in your config file
ssl = True
#specify the cube
cube_name = 'Retail'
#specify the view
view_name = 'Time Series'
with TM1Service(address= address, port=port, user=user, password=password, ssl=ssl) as tm1:
# Extract pnl data from specified cube view
raw_data = tm1.cubes.cells.get_view_content(cube_name=cube_name, view_name=view_name, private=False)
# Build pandas DataFrame fram raw cellset data
df = Utils.build_pandas_dataframe_from_cellset(raw_data, multiindex=False)
ts = df
ts.dtypes
ts['Date'] = ts['Year'] + '-' + ts['Period']
ts['Date'] = pd.DatetimeIndex(ts['Date'])
ts = ts.rename(columns={'Date': 'ds',
'Values': 'y'})
region = '13'
product = '315'
sub = ts[ts['Region']==region]
sub = sub[sub['Product']==product]
sub_model = Prophet(interval_width=0.95)
sub_model.fit(sub)
#specify the number of future periods
future_dates = sub_model.make_future_dataframe(periods=24, freq='MS')
forecast = sub_model.predict(future_dates)
forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
sub_model.plot(forecast,
uncertainty=True)
sub_model.plot_components(forecast)
#Combine the forecast dataframe and our original subset
sub = sub.set_index('ds')
forecast = forecast.set_index('ds')
result = pd.concat([sub, forecast], axis = 1)
#fill out other dimensions
result['Version'] = result.Version.fillna('3')
result['Currency'] = result.Currency.fillna('Local')
result['Region'] = result.Region.fillna(region)
result['Product'] = result.Product.fillna(product)
result['Retail Measure'] = result['Retail Measure'].fillna('Sales Amount')
#make ds accessible
result = result.reset_index()
result['Year'] = pd.DatetimeIndex(result['ds']).year
result['Period'] = pd.DatetimeIndex(result['ds']).month
with TM1Service(address= address, port=port, user=user, password=password, ssl=ssl) as tm1:
# cellset to store the new data
cellset = {}
# Populate cellset with coordinates and value pairs
for index, row in result.iterrows():
cellset[(row['Year'], row['Period'], 'Forecast', row['Product'], row['Currency'], row['Region'],row['Retail Measure'])] = row['yhat']
tm1.cubes.cells.write_values('Retail', cellset)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 2
Step2: Step 3
Step3: Step 4
Step4: Rather than looping through each region and product within the data set, the following cell creates a subset of data to forecast. If you'd like to forecast for an alternate region/product combination this is where you're able to make the change by subbing out Region 13 and Product 315.
Step5: Step 5
|
2,773
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import numpy as np
import statsmodels.api as sm
from statsmodels.formula.api import ols
sm.formula.ols
import statsmodels.formula.api as smf
sm.OLS.from_formula
dta = sm.datasets.get_rdataset("Guerry", "HistData", cache=True)
df = dta.data[['Lottery', 'Literacy', 'Wealth', 'Region']].dropna()
df.head()
mod = ols(formula='Lottery ~ Literacy + Wealth + Region', data=df)
res = mod.fit()
print(res.summary())
res = ols(formula='Lottery ~ Literacy + Wealth + C(Region)', data=df).fit()
print(res.params)
res = ols(formula='Lottery ~ Literacy + Wealth + C(Region) -1 ', data=df).fit()
print(res.params)
res1 = ols(formula='Lottery ~ Literacy : Wealth - 1', data=df).fit()
res2 = ols(formula='Lottery ~ Literacy * Wealth - 1', data=df).fit()
print(res1.params, '\n')
print(res2.params)
res = smf.ols(formula='Lottery ~ np.log(Literacy)', data=df).fit()
print(res.params)
def log_plus_1(x):
return np.log(x) + 1.
res = smf.ols(formula='Lottery ~ log_plus_1(Literacy)', data=df).fit()
print(res.params)
import patsy
f = 'Lottery ~ Literacy * Wealth'
y,X = patsy.dmatrices(f, df, return_type='matrix')
print(y[:5])
print(X[:5])
f = 'Lottery ~ Literacy * Wealth'
y,X = patsy.dmatrices(f, df, return_type='dataframe')
print(y[:5])
print(X[:5])
print(sm.OLS(y, X).fit().summary())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import convention
Step2: Alternatively, you can just use the formula namespace of the main statsmodels.api.
Step3: Or you can use the following convention
Step4: These names are just a convenient way to get access to each model's from_formula classmethod. See, for instance
Step5: All of the lower case models accept formula and data arguments, whereas upper case ones take endog and exog design matrices. formula accepts a string which describes the model in terms of a patsy formula. data takes a pandas data frame or any other data structure that defines a __getitem__ for variable names like a structured array or a dictionary of variables.
Step6: Fit the model
Step7: Categorical variables
Step8: Patsy's more advanced features for categorical variables are discussed in
Step9: Multiplicative interactions
Step10: Many other things are possible with operators. Please consult the patsy docs to learn more.
Step11: Define a custom function
Step12: Any function that is in the calling namespace is available to the formula.
Step13: To generate pandas data frames
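The formula interface described above ultimately builds a design matrix and solves a least-squares problem. That can be sketched with numpy alone (the data values here are illustrative, not from the Guerry dataset):

```python
import numpy as np

# Toy data standing in for 'Lottery ~ Literacy'
literacy = np.array([35.0, 51.0, 13.0, 46.0, 69.0])
lottery = np.array([41.0, 38.0, 66.0, 80.0, 79.0])

# Design matrix with an intercept column, like patsy produces by default
X = np.column_stack([np.ones_like(literacy), literacy])
beta, *_ = np.linalg.lstsq(X, lottery, rcond=None)
intercept, slope = beta
print(intercept, slope)
```

The fitted coefficients correspond to `res.params` from the formula call; the residuals are orthogonal to the columns of the design matrix.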
|
2,774
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
from sympy import init_printing
from sympy import S
from sympy import sin, cos, tanh, exp, pi, sqrt, log
from boutdata.mms import x, y, z, t
from boutdata.mms import DDX
import os, sys
# If we add to sys.path, then it must be an absolute path
common_dir = os.path.abspath('./../../../../common')
# Sys path is a list of system paths
sys.path.append(common_dir)
from CELMAPy.MES import make_plot, BOUT_print
init_printing()
folder = '../gaussianWSinAndParabola/'
# Initialization
the_vars = {}
# We need Lx
from boututils.options import BOUTOptions
myOpts = BOUTOptions(folder)
Lx = eval(myOpts.geom['Lx'])
Ly = eval(myOpts.geom['Ly'])
mu = eval(myOpts.cst['mu']) # Needed for Lambda
Lambda = eval(myOpts.cst['Lambda'])
phiRef = eval(myOpts.cst['phiRef'])
# The potential
# The skew sinus
# In cartesian coordinates we would like a sinus with with a wave-vector in the direction
# 45 degrees with respect to the first quadrant. This can be achieved with a wave vector
# k = [1/sqrt(2), 1/sqrt(2)]
# sin((1/sqrt(2))*(x + y))
# We would like 2 nodes, so we may write
# sin((1/sqrt(2))*(x + y)*(2*pi/(2*Lx)))
the_vars['phi'] = sin((1/sqrt(2))*(x + y)*(2*pi/(2*Lx)))
# The density
the_vars['n'] = 1
# The current
the_vars['uIPar'] = 0
# The parallel electron velocity, given by the sheath boundary condition
the_vars['uEPar'] = exp(Lambda-(phiRef+the_vars['phi']))
# The parallel velocity, given by the sheath boundary condition
the_vars['jPar'] = the_vars['n']*(the_vars['uIPar']-the_vars['uEPar'])
#make_plot(folder=folder, the_vars=the_vars, plot2d=True, include_aux=False, direction='y')
BOUT_print(the_vars, rational=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initialize
Step2: Define the variables
Step3: Plot
Step4: Print the variables in BOUT++ format
|
2,775
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import glob
import os
from matplotlib.patches import Rectangle
# define all variables for convergence script
# these will pass to the bash magic below used to call plumed sum_hills
dir="MetaD_converge" #where the intermediate fes will be stored
hills="other/HILLS" #your HILLS file from the simulation
finalfes='other/fes.dat' #the final fes.dat file
stride=1000
kT=8.314e-3*300 #throughout we convert to kcal, but the HILLS are assumed to be in GROMACS units (kJ)
## here is where you set the boxes to define convergence regions
C1=[-1.5,1.0] #center of box 1
C2=[1.0,-.5]
edge1=1.0 #edge of box1
edge2=1.0
%%bash -s "$dir" "$hills" "$stride" "$kT"
# calling sum hills and output to devnul
HILLSFILE=HILLS
rm -rf $1
mkdir $1
cp $2 $1
cd $1
plumed sum_hills --hills $HILLSFILE --kt $4 --stride $3 >& /dev/null
%matplotlib inline
#read the data in from a text file
fesdata = np.genfromtxt(finalfes,comments='#');
fesdata = fesdata[:,0:3]
#what was your grid size? this calculates it
dim=int(np.sqrt(np.size(fesdata)/3))
#some post-processing to be compatible with contourf
X=np.reshape(fesdata[:,0],[dim,dim],order="F") #order F was 20% faster than A/C
Y=np.reshape(fesdata[:,1],[dim,dim],order="F")
Z=np.reshape((fesdata[:,2]-np.min(fesdata[:,2]))/4.184,[dim,dim],order="F") #convert to kcal/mol
#what spacing do you want? Z was converted to kcal/mol above
spacer=1 #this means 1 kcal/mol spacing
lines=20
levels=np.linspace(0,lines*spacer,num=(lines+1),endpoint=True)
fig=plt.figure(figsize=(8,6))
axes = fig.add_subplot(111)
xlabel='$\Phi$'
ylabel='$\Psi$'
plt.contourf(X, Y, Z, levels, cmap=plt.cm.bone,)
plt.colorbar()
axes.set_xlabel(xlabel, fontsize=20)
axes.set_ylabel(ylabel, fontsize=20)
currentAxis = plt.gca()
currentAxis.add_patch(Rectangle((C1[0]-edge1/2, C1[1]-edge1/2), edge1, edge1,facecolor='none',edgecolor='yellow',linewidth='3'))
currentAxis.add_patch(Rectangle((C2[0]-edge2/2, C2[1]-edge2/2), edge2, edge2,facecolor='none',edgecolor='yellow',linewidth='3'))
plt.show()
def diffNP(file):
#read the data in from a text file
# note - this is very slow
fesdata = np.genfromtxt(file,comments='#');
A=0.0
B=0.0
dim=np.shape(fesdata)[0]
for i in range(0, dim):
x=fesdata[i][0]
y=fesdata[i][1]
z=fesdata[i][2]
if x < C1[0]+edge1/2 and x > C1[0]-edge1/2 and y > C1[1]-edge1/2 and y < C1[1]+edge1/2:
A+=np.exp(-z/kT)
if x < C2[0]+edge2/2 and x > C2[0]-edge2/2 and y > C2[1]-edge2/2 and y < C2[1]+edge2/2:
B+=np.exp(-z/kT)
A=-kT*np.log(A)
B=-kT*np.log(B)
diff=(A-B)/4.184 #output in kcal
return diff
def diff(file):
kT=8.314e-3*300
A=0.0
B=0.0
f = open(file, 'r')
for line in f:
if line[:1] != '#':
line=line.strip()
if line:
columns = line.split()
x=float(columns[0])
y=float(columns[1])
z=float(columns[2])
if x < C1[0]+edge1/2 and x > C1[0]-edge1/2 and y > C1[1]-edge1/2 and y < C1[1]+edge1/2:
A+=np.exp(-z/kT)
if x < C2[0]+edge2/2 and x > C2[0]-edge2/2 and y > C2[1]-edge2/2 and y < C2[1]+edge2/2:
B+=np.exp(-z/kT)
f.close
A=-kT*np.log(A)
B=-kT*np.log(B)
diff=(A-B)/4.184
return diff
diffvec=None
rootdir = '/Users/jpfaendt/Learning/Python/ALA2_MetaD/MetaD_converge'
i=0
diffvec=np.zeros((1,2))
#the variable func defines which function you are going to call to read in your data files fes_*.dat
#func=diffNP uses the numpy read in (SLOW)
#func=diff streams in data from a text file
#to experience the difference, uncomment the print statements and run each way
func=diff
for infile in glob.glob( os.path.join(rootdir, 'fes_?.dat') ):
if i >= i:
diffvec.resize((i+1,2))
#print "current file is: " + infile
diffvec[i][0]=i*1.0
diffvec[i][1]=func(infile)
i+=1
for infile in glob.glob( os.path.join(rootdir, 'fes_??.dat') ):
if i >= i:
diffvec.resize((i+1,2))
#print "current file is: " + infile
diffvec[i][0]=i*1.0
diffvec[i][1]=func(infile)
i+=1
for infile in glob.glob( os.path.join(rootdir, 'fes_???.dat') ):
if i >= i:
diffvec.resize((i+1,2))
#print "current file is: " + infile
diffvec[i][0]=i*1.0
diffvec[i][1]=func(infile)
i+=1
fig = plt.figure(figsize=(6,6))
axes = fig.add_subplot(111)
xlabel='time (generic)'
ylabel='diff (A-B) (kcal/mol)'
axes.plot(diffvec[:,0],diffvec[:,1])
axes.set_xlabel(xlabel, fontsize=20)
axes.set_ylabel(ylabel, fontsize=20)
plt.show()
##
#read the data in from a text file using genfrom txt
fesdata = np.genfromtxt('MetaD_converge/fes_1.dat',comments='#');
kT=8.314e-3*300
A=0.0
B=0.0
dim=np.shape(fesdata)[0]
for i in range(0, dim):
x=fesdata[i][0]
y=fesdata[i][1]
z=fesdata[i][2]
if x < C1[0]+edge1/2 and x > C1[0]-edge1/2 and y > C1[1]-edge1/2 and y < C1[1]+edge1/2:
A+=np.exp(-z/kT)
if x < C2[0]+edge2/2 and x > C2[0]-edge2/2 and y > C2[1]-edge2/2 and y < C2[1]+edge2/2:
B+=np.exp(-z/kT)
A=-kT*np.log(A)
B=-kT*np.log(B)
diff=(A-B)/4.184
diff
##
#read the data in from a text file using read in commands
kT=8.314e-3*300
A=0.0
B=0.0
f = open('MetaD_converge/fes_1.dat', 'r')
for line in f:
if line[:1] != '#':
line=line.strip()
if line:
columns = line.split()
x=float(columns[0])
y=float(columns[1])
z=float(columns[2])
if x < C1[0]+edge1/2 and x > C1[0]-edge1/2 and y > C1[1]-edge1/2 and y < C1[1]+edge1/2:
A+=np.exp(-z/kT)
if x < C2[0]+edge2/2 and x > C2[0]-edge2/2 and y > C2[1]-edge2/2 and y < C2[1]+edge2/2:
B+=np.exp(-z/kT)
f.close
A=-kT*np.log(A)
B=-kT*np.log(B)
diff=(A-B)/4.184
diff
file='MetaD/fes.dat'
%timeit diffNP(file)
%timeit diff(file)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Graph the final FES and plot the two squares on top of it
Step2: The two functions below calculate the average free energy of a region by integrating over whichever boxes you defined above. Since the FES is discrete and points are equally spaced, this is trivially taken as a summation
Step3: Below this is all testing of different read-in options
Step4: Profiling speed of different read in options
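The box-averaged free-energy difference described in Step 2 can also be written as a vectorized numpy operation instead of a line-by-line loop (a sketch with a small synthetic grid; the kT constant mirrors the script above, but the grid values are invented):

```python
import numpy as np

kT = 8.314e-3 * 300  # kJ/mol, as in the script above

def region_free_energy(xyz, center, edge, kT=kT):
    # Boltzmann-sum the FES values inside a square box, then take -kT * log
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    half = edge / 2.0
    mask = (np.abs(x - center[0]) < half) & (np.abs(y - center[1]) < half)
    return -kT * np.log(np.exp(-z[mask] / kT).sum())

# synthetic FES points: two flat basins at 0 and 2 kJ/mol
grid = np.array([[-1.0, -1.0, 0.0],
                 [-1.0, -1.1, 0.0],
                 [1.0, 1.0, 2.0],
                 [1.1, 1.0, 2.0]])
dA = region_free_energy(grid, (-1.0, -1.0), 1.0)
dB = region_free_energy(grid, (1.0, 1.0), 1.0)
diff_kcal = (dA - dB) / 4.184  # convert kJ to kcal as in the script
print(diff_kcal)
```

Because the two synthetic basins differ by exactly 2 kJ/mol and contain the same number of points, the constant -kT·log(2) cancels and the difference reduces to -2 kJ/mol.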
|
2,776
|
<ASSISTANT_TASK:>
Python Code:
# Libraries
import pandas as pd
df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']},
index=[0, 1, 2, 3])
df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],
'B': ['B4', 'B5', 'B6', 'B7'],
'C': ['C4', 'C5', 'C6', 'C7'],
'D': ['D4', 'D5', 'D6', 'D7']},
index=[4, 5, 6, 7])
df3 = pd.DataFrame({'A': ['A8', 'A9', 'A10', 'A11'],
'B': ['B8', 'B9', 'B10', 'B11'],
'C': ['C8', 'C9', 'C10', 'C11'],
'D': ['D8', 'D9', 'D10', 'D11']},
index=[8, 9, 10, 11])
df1
df2
df3
pd.concat([df1,df2,df3])
pd.concat([df1,df2,df3],axis=1)
left = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']})
left
right
pd.merge(left,right,how='inner',on='key')
left = pd.DataFrame({'key1': ['K0', 'K0', 'K1', 'K2'],
'key2': ['K0', 'K1', 'K0', 'K1'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'],
'key2': ['K0', 'K0', 'K0', 'K0'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']})
pd.merge(left, right, on=['key1', 'key2'])
pd.merge(left, right, how='outer', on=['key1', 'key2'])
pd.merge(left, right, how='right', on=['key1', 'key2'])
pd.merge(left, right, how='left', on=['key1', 'key2'])
left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
'B': ['B0', 'B1', 'B2']},
index=['K0', 'K1', 'K2'])
right = pd.DataFrame({'C': ['C0', 'C2', 'C3'],
'D': ['D0', 'D2', 'D3']},
index=['K0', 'K2', 'K3'])
left.join(right)
left.join(right, how='outer')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Concatenation
Step2: Example DataFrames
Step3: Merge
Step4: A more involved example
Step5: Joining
|
2,777
|
<ASSISTANT_TASK:>
Python Code:
# Step 1: Configure your cluster with gcloud
# `gcloud container clusters get-credentials <cluster_name> --zone <cluster-zone> --project <project-id>
# Step 2: Get the port where the gRPC service is running on the cluster
# `kubectl get configmap metadata-grpc-configmap -o jsonpath={.data}`
# Use `METADATA_GRPC_SERVICE_PORT` in the next step. The default port used is 8080.
# Step 3: Port forwarding
# `kubectl port-forward deployment/metadata-grpc-deployment 9898:<METADATA_GRPC_SERVICE_PORT>`
# Troubleshooting
# If getting error related to Metadata (For examples, Transaction already open). Try restarting the metadata-grpc-service using:
# `kubectl rollout restart deployment metadata-grpc-deployment`
import sys, os
PROJECT_DIR=os.path.join(sys.path[0], '..')
%cd {PROJECT_DIR}
import json
from examples import config as cloud_config
import examples.tuner_data_utils as tuner_utils
from ml_metadata.proto import metadata_store_pb2
from ml_metadata.metadata_store import metadata_store
from nitroml.benchmark import results
import seaborn as sns
import tensorflow as tf
import qgrid
sns.set()
connection_config = metadata_store_pb2.MetadataStoreClientConfig()
connection_config.host = 'localhost'
connection_config.port = 9898
store = metadata_store.MetadataStore(connection_config)
# Name of the dataset/subbenchmark
# This is used to filter out the component path.
testdata = 'ilpd'
def get_metalearning_data(meta_algorithm: str = '', test_dataset: str = '', multiple_runs: bool = True):
d_list = []
execs = store.get_executions_by_type('nitroml.automl.metalearning.tuner.component.AugmentedTuner')
model_dir_map = {}
for tuner_exec in execs:
run_id = tuner_exec.properties['run_id'].string_value
pipeline_root = tuner_exec.properties['pipeline_root'].string_value
component_id = tuner_exec.properties['component_id'].string_value
pipeline_name = tuner_exec.properties['pipeline_name'].string_value
if multiple_runs:
if '.run_' not in component_id:
continue
if test_dataset not in component_id:
continue
if f'metalearning_benchmark' != pipeline_name and meta_algorithm not in pipeline_name:
continue
config_path = os.path.join(pipeline_root, component_id, 'trial_summary_plot', str(tuner_exec.id))
model_dir_map[tuner_exec.id] = config_path
d_list.append(config_path)
return d_list
# Specify the path to tuner_dir from above
# You can get the list of tuner_dirs by calling: get_metalearning_data(multiple_runs=False)
example_plot = ''
if not example_plot:
raise ValueError('Please specify the path to the tuner plot dir.')
with tf.io.gfile.GFile(os.path.join(example_plot, 'tuner_plot_data.txt'), mode='r') as fin:
data = json.load(fin)
tuner_utils.display_tuner_data(data, save_plot=False)
algorithm = 'majority_voting'
d_list = get_metalearning_data(algorithm, testdata)
d_list
# Select the runs from `d_list` to visualize.
data_list = []
for d in d_list:
with tf.io.gfile.GFile(os.path.join(d, 'tuner_plot_data.txt'), mode='r') as fin:
data_list.append(json.load(fin))
tuner_utils.display_tuner_data_with_error_bars(data_list, save_plot=True)
algorithm = 'nearest_neighbor'
d_list = get_metalearning_data(algorithm, testdata)
d_list
# Select the runs from `d_list` to visualize.
data_list = []
for d in d_list:
with tf.io.gfile.GFile(os.path.join(d, 'tuner_plot_data.txt'), mode='r') as fin:
data_list.append(json.load(fin))
tuner_utils.display_tuner_data_with_error_bars(data_list, save_plot=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Connect to the ML Metadata (MLMD) database
Step2: Get trial summary data (used to plot Area under Learning Curve) stored as AugmentedTuner artifacts.
Step3: Majority Voting
Step4: Nearest Neighbor
|
2,778
|
<ASSISTANT_TASK:>
Python Code:
# Import the financial library.
# The import only needs to be run once.
import cashflows as cf
cflo = cf.cashflow(const_value=[100] * 11 + [0], start='2016-1', freq='M')
cflo
nrate = cf.interest_rate([24] * 12, start='2016-1', freq='M')
nrate
cf.savings(deposits = cflo, # periodic deposit
initbal = 100, # initial balance
nrate = nrate) # nominal interest rate
x = cf.savings(deposits = cflo, # periodic deposit
initbal = 100, # initial balance
nrate = nrate) # nominal interest rate
x.Earned_Interest
## interest as a list
x.Earned_Interest.tolist()
## arithmetic sum of the interest
sum(x.Earned_Interest)
# final balance
x.Ending_Balance[-1]
## interest rate
nrate = cf.interest_rate([24] * 12, start='2016-1', freq='M', chgpts={5:16})
## deposits
cflo = cf.cashflow(const_value=[100]*11 + [0], start='2016-1', freq='M')
## model
x = cf.savings(deposits = cflo, # periodic deposit
initbal = 100, # initial balance
nrate = nrate) # monthly interest rate
x
nrate = cf.interest_rate(const_value=[36]*24, start='2000-01', freq='M', chgpts={'2001-01':24})
nrate
cflo = cf.cashflow(const_value=0, periods=24, start='2000-01', freq='M')
cflo[[3*t-1 for t in range(1, 9)]] = 50
cflo
x = cf.savings(deposits = cflo, # periodic deposit
initbal = 100, # initial balance
nrate = nrate) # monthly interest rate
x
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise.-- Solve the following problem using Microsoft Excel or another tool
Step2: *Example.--* Repeat the previous example, but assuming the nominal interest rate changes to 16% starting 2016-06.
Step3: *Example.--* An account has an initial balance of $100. Deposits of $50 are made at the end of each quarter (the first deposit in 3 months). The nominal rate is 36% compounded monthly and changes to 24% from month 13 (inclusive) onward. What will the balance be at the end of month 24?
|
2,779
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
np.random.seed(29384924)
data = np.random.randint(10, size = 100) # 100 random numbers, from 0 to 9
print(data)
print("Number of data points: ", data.shape[0]) # Remember our friend, ndarray.shape?
print("Largest value: ", data.max())
print("Smallest value: ", data.min())
print(data.mean()) # Mean
outlier = np.array([1, 1, 2, 3, 2, 1, 3, 2, 38]) # Note the outlier of 38 at the end.
print(outlier.mean())
print(np.median(data))
print(outlier)
print(np.median(outlier))
print(np.mean(outlier))
print(data)
print(data.var())
print(np.sqrt(data.var()))
print(data.std())
print(np.percentile(data, 75) - np.percentile(data, 25))
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
# Observe 100 data points from a Gaussian random variable with mean = 0.0 and variance = 1.0.
observations = np.random.normal(size = 100)
_ = plt.hist(observations)
# Observe **1000** data points from a Gaussian random variable with mean = 0.0 and variance = 1.0.
observations = np.random.normal(size = 1000)
_ = plt.hist(observations)
# Observe **** 10,000 **** data points from a Gaussian random variable with mean = 0.0 and variance = 1.0.
observations = np.random.normal(size = 10000)
_ = plt.hist(observations, bins = 25)
print(observations)
print("Mean: {:.2f}".format(observations.mean()))
print("Variance: {:.2f}".format(observations.var()))
from scipy.stats import norm
xs = np.linspace(-5, 5, 100)
plt.plot(xs, norm.pdf(xs, loc = 0, scale = 1), '-', label = "mean=0, var=1")
plt.plot(xs, norm.pdf(xs, loc = 0, scale = 2), '--', label = "mean=0, var=2")
plt.plot(xs, norm.pdf(xs, loc = 0, scale = 0.5), ':', label = "mean=0, var=0.5")
plt.plot(xs, norm.pdf(xs, loc = -1, scale = 1), '-.', label = "mean=-1, var=1")
plt.legend(loc = 0)
plt.title("Various Normal Distributions")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Some very straightforward statistics are the number of data points, the largest value, and the smallest value. These shouldn't be immediately ignored, but they are of limited utility.
Step2: Mean
Step3: The mean is very simple to compute
Step4: The mean is sensitive to outliers, meaning one or two data points that lie well beyond all the others can disproportionately affect the value of the mean. In the above simple example, the lone outlier of 38 pulls the mean to be larger than all the other data points except for 38; not exactly a representative statistic!
Step5: The median is computed by
Step6: For comparison
Step7: Quite a difference! But which is more representative of data will, ultimately, depend on your data and what you're trying to do with it.
Step8: The variance is computed by subtracting each individual data point from the average of the whole data set, squaring this difference, and summing all these differences together before finally dividing by the number of data points.
Step9: (they should indeed both show the same number--this is just to show that the standard deviation is defined precisely as the square root of the variance)
Step10: This, like the median, is robust to outliers. But also like the median, it relies on sorting the data first, then picking out the value 1/4 of the way down the dataset and subtracting it from the value 3/4 of the way down the dataset. This can be expensive in large datasets.
Step11: It's tough to see, isn't it? Let's try 1000 observations.
Step12: That looks a little better! Maybe 10,000 data points, just for grins?
Step13: There's the bell curve we know and love!
Step14: This could be any old dataset! In fact, forget for a moment that we generated this dataset ourselves, and instead think that this could be a dataset we picked up from the web.
Step15: You'll notice the mean is very close to 0, and the variance is likewise very close to 1. Since we ourselves set the mean and variance for the random number generator, we know that these are very close to the true mean and true variance, but in general we wouldn't necessarily know that.
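The observation that the sample mean and variance approach the true values can be checked directly by growing the sample size (a small numpy sketch; the fixed seed is arbitrary and only makes the run repeatable):

```python
import numpy as np

rng = np.random.RandomState(0)  # fixed seed so the experiment is repeatable
errors = {}
for n in (100, 10000):
    sample = rng.normal(loc=0.0, scale=1.0, size=n)
    errors[n] = abs(sample.mean())  # distance from the true mean of 0

# the standard error of the mean shrinks like 1/sqrt(n),
# so the 10,000-point estimate should sit much closer to 0
print(errors)
```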
|
2,780
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('fivethirtyeight')
plt.rcParams['figure.figsize'] = (9,6)
df = pd.read_csv("data/creditRisk.csv")
df.head()
import seaborn as sns
sns.stripplot(data = df, x = "Income", y = "Credit History", hue = "Risk", size = 10)
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
df['Credit History'].unique()
df['Credit History'].unique()
le.fit(df['Credit History'].unique())
df['Credit History'].tail()
# Converting the categorical data using label encoder
df['Credit History'] = le.transform(df['Credit History'])
df['Credit History'].tail()
le.classes_
df.Risk.unique()
Risk_mapping = {
'High': 2,
'Moderate': 1,
'Low': 0}
df.Risk.tail()
df['Risk'] = df['Risk'].map(Risk_mapping)
df.Risk.tail()
df.head()
data = df.iloc[:,0:2]
target = df.iloc[:,2:3]
from sklearn import tree
clf = tree.DecisionTreeClassifier()
clf
clf = clf.fit(data, target)
import pydotplus
from IPython.display import Image
dot_data = tree.export_graphviz(clf, out_file='tree.dot', feature_names=data.columns,
class_names=['Low', 'Moderate', 'High'], filled=True,
rounded=True, special_characters=True)
graph = pydotplus.graph_from_dot_file('tree.dot')
Image(graph.create_png())
import modelvis
modelvis.render_tree(clf, feature_names=data.columns,class_names=['Low', 'Moderate', 'High'] )
modelvis.print_tree_as_code(clf)
def plot_classifier_2d(clf, data, target):
x_min, x_max = data.iloc[:,0].min(), data.iloc[:,0].max()
y_min, y_max = data.iloc[:,1].min(), data.iloc[:,1].max()
xx, yy = np.meshgrid(
np.arange(x_min, x_max, (x_max - x_min)/100),
np.arange(y_min, y_max, (y_max - y_min)/100))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap="viridis", alpha = 0.5)
plt.colorbar(cs)
plt.scatter(x = data.iloc[:,0], y = data.iloc[:,1], c = target, s = 100, cmap="magma")
modelvis.plot_decision_boundaries(clf, data, target, feature_names=data.columns, show_input=True,
class_names=['Low', 'Moderate', 'High'])
plot_classifier_2d(clf, data,target)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plotting the Data
Step2: Preparing Data
Step3: Lets use a dictionary for encoding nominal variable
Step4: Decision Tree Classifier
Step5: Visualise the Tree
Step6: Now we need to first define an Impurity measure. The three popular impurity measures are
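Two of the standard impurity measures can be written in a few lines (a numpy sketch mirroring the textbook definitions, not scikit-learn's internals):

```python
import numpy as np

def gini(p):
    # Gini impurity: 1 - sum of squared class probabilities
    p = np.asarray(p, dtype=float)
    return 1.0 - np.sum(p ** 2)

def entropy(p):
    # Shannon entropy in bits; 0 * log(0) is treated as 0
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# a perfectly mixed two-class node vs. a pure node
print(gini([0.5, 0.5]), entropy([0.5, 0.5]))
print(gini([1.0, 0.0]), entropy([1.0, 0.0]))
```

Both measures are maximal for a uniform class distribution and zero for a pure node, which is why either can drive the split criterion of a decision tree.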
|
2,781
|
<ASSISTANT_TASK:>
Python Code:
person_name = "Mike"
person_age = 50
person_faculty = True
person = {}
person['name'] = "Mike"
person['age'] = 50
person['faculty'] = True
print(person)
'dob' in person.keys()
if 'dob' not in person.keys():
dob = input("Enter your DOB")
person['dob'] = dob
person
person['hjklsdagkljhfaSDGfkjasd'] = 10
print(person)
50 in person.values()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: More Lab Questions...
|
2,782
|
<ASSISTANT_TASK:>
Python Code:
# Create the message variable and assign the value "Hello World" to it
message="Hello World"
# Use the variable in a print statement
# The print statement retrieves the value assigned to the variable and displays the value
print(message)
message
#Assign raw numbers to variables
apples=5
oranges=10
#Do a sum with the values represented by the variables and assign the result to a new variable
items_in_basket = apples + oranges
#Display the resulting value as the cell output
items_in_basket
%run 'Set-up.ipynb'
%run 'Loading scenes.ipynb'
%run 'vrep_models/PioneerP3DX.ipynb'
%%vrepsim '../scenes/OU_Pioneer.ttt' PioneerP3DX
import time
#side 1
robot.move_forward()
time.sleep(1)
#turn 1
robot.rotate_left(1.8)
time.sleep(0.45)
#side 2
robot.move_forward()
time.sleep(1)
#turn 2
robot.rotate_left(1.8)
time.sleep(0.45)
#side 3
robot.move_forward()
time.sleep(1)
#turn 3
robot.rotate_left(1.8)
time.sleep(0.45)
#side 4
robot.move_forward()
time.sleep(1)
%%vrepsim '../scenes/OU_Pioneer.ttt' PioneerP3DX
import time
#YOUR CODE HERE
%%vrepsim '../scenes/OU_Pioneer.ttt' PioneerP3DX
import time
side_length_time=1
turn_speed=1.8
turn_time=0.45
#side 1
robot.move_forward()
time.sleep(side_length_time)
#turn 1
robot.rotate_left(turn_speed)
time.sleep(turn_time)
#side 2
robot.move_forward()
time.sleep(side_length_time)
#turn 2
robot.rotate_left(turn_speed)
time.sleep(turn_time)
#side 3
robot.move_forward()
time.sleep(side_length_time)
#turn 3
robot.rotate_left(turn_speed)
time.sleep(turn_time)
#side 4
robot.move_forward()
time.sleep(side_length_time)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Try changing the message in the previous code cell and re-running it. Does it behave as you expect?
Step2: You can assign whatever object you like to a variable.
Step3: See if you can add the count of a new set of purchases to the number of items in your basket in the cell above. For example, what if you also bought 3 pears. And a bunch of bananas.
Step4: The original programme appears in the code cell below.
Step5: Using the above programme as a guide, see if you can write a programme in the code cell below that makes it easier to maintain and simplifies the act of changing the numerical parameter values.
Step6: How did you get on?
|
2,783
|
<ASSISTANT_TASK:>
Python Code:
# Constants
D = 2
N = 100
K = 2
w = np.random.randn(D)
w = normalize(w)
theta = np.arctan2(w[0], w[1])
X = np.random.randn(N,D,K)
y = np.zeros(N)
for i in range(N):
m = w.dot(X[i])
X[i] = X[i][:,np.argsort(-m)]
y[i] = np.sign(max(m))
# Visualize data
plt.plot(np.arange(-3,3), -w[0]/w[1] * np.arange(-3,3), c='black')
plt.scatter(X[y<0,0,0], X[y<0,1,0], c='r', marker='o')
plt.scatter(X[y<0,0,1], X[y<0,1,1], c='r', marker='o')
plt.scatter(X[y>0,0,0], X[y>0,1,0], c='b', marker='D')
plt.scatter(X[y>0,0,1], X[y>0,1,1], c='b', marker='D')
# Split data for convenience
X_ = np.concatenate((X[y<0,:,0],X[y<0,:,1]))
Xa = X[y>0,:,0]
Xb = X[y>0,:,1]
Xp = np.concatenate((Xa, Xb))
Na, N_, Np = y[y>0].shape[0], 2*y[y<0].shape[0], 2*y[y>0].shape[0]
Nb = Na
Na, N_, Np
from cvxpy import *
# Beta is the coefficients, and e is the slack.
beta = Variable(D)
ea, e_ = Variable(Na), Variable(N_)
loss = 0.5 * norm(beta, 2) ** 2 + 1*sum_entries(e_)/N_ + 1* sum_entries(ea)/Na
constr = [mul_elemwise(-1,X_*beta) > 1 - e_, mul_elemwise(1,Xa*beta) > 1 - ea]
prob = Problem(Minimize(loss), constr)
print("loss", prob.solve())
w_ = np.array(beta.value).flatten()
w_ = w_ / np.linalg.norm(w_) # Until I add a 0.
print("error", np.linalg.norm(w-w_))
# Visualize data
plt.plot(np.arange(-3,3), -w[0]/w[1] * np.arange(-3,3), linestyle='dashed', c='black')
plt.plot(np.arange(-3,3), -w_[0]/w_[1] * np.arange(-3,3), c='green')
plt.scatter(X[y<0,0,0], X[y<0,1,0], c='r', marker='o')
plt.scatter(X[y<0,0,1], X[y<0,1,1], c='r', marker='o')
plt.scatter(X[y>0,0,0], X[y>0,1,0], c='b', marker='D')
plt.scatter(X[y>0,0,1], X[y>0,1,1], c='b', marker='D')
# Beta is the coefficients, and e is the slack.
beta = Variable(D)
ep, e_ = Variable(Np), Variable(N_)
loss = 0.5 * norm(beta, 2) ** 2 + 1*sum_entries(e_)/N_ + 1* sum_entries(ep)/Np
constr = [mul_elemwise(-1,X_*beta) > 1 - e_, mul_elemwise(1,Xp*beta) > 1 - ep]
prob = Problem(Minimize(loss), constr)
print("loss", prob.solve())
w_ = np.array(beta.value).flatten()
w_ = w_ / np.linalg.norm(w_) # Until I add a 0.
print("error", np.linalg.norm(w-w_))
# Visualize data
plt.plot(np.arange(-3,3), -w[0]/w[1] * np.arange(-3,3), linestyle='dashed', c='black')
plt.plot(np.arange(-3,3), -w_[0]/w_[1] * np.arange(-3,3), c='green')
plt.scatter(X[y<0,0,0], X[y<0,1,0], c='r', marker='o')
plt.scatter(X[y<0,0,1], X[y<0,1,1], c='r', marker='o')
plt.scatter(X[y>0,0,0], X[y>0,1,0], c='b', marker='D')
plt.scatter(X[y>0,0,1], X[y>0,1,1], c='b', marker='D')
# Beta is the coefficients, and e is the slack.
Da, = y[y>0].shape
beta = Variable(D)
e = Variable()
loss = 0.5 * norm(beta, 2) ** 2 + 1 * e
constr = [mul_elemwise(-1,X_*beta) > 1 - e,
Xa*beta + Xb*beta > 1 - e,
]
prob = Problem(Minimize(loss), constr)
print("loss", prob.solve())
w_ = np.array(beta.value).flatten()
w_ = w_ / np.linalg.norm(w_) # Until I add a 0.
print("error", np.linalg.norm(w-w_))
# Visualize data
plt.plot(np.arange(-3,3), -w[0]/w[1] * np.arange(-3,3), linestyle='dashed', c='black')
plt.plot(np.arange(-3,3), -w_[0]/w_[1] * np.arange(-3,3), c='green')
plt.scatter(X[y<0,0,0], X[y<0,1,0], c='r', marker='o')
plt.scatter(X[y<0,0,1], X[y<0,1,1], c='r', marker='o')
plt.scatter(X[y>0,0,0], X[y>0,1,0], c='b', marker='D')
plt.scatter(X[y>0,0,1], X[y>0,1,1], c='b', marker='D')
# Beta is the coefficients, and e is the slack.
Da, = y[y>0].shape
beta = Variable(D)
e = Variable()
C = 1.
C_ = 1
loss = 0.5 * norm(beta, 2) ** 2 + C * e + C_ * sum_entries(pos(Xa*beta) + pos(Xb*beta) - 1) / Na
constr = [mul_elemwise(-1,X_*beta) > 1 - e,
]
prob = Problem(Minimize(loss), constr)
print("loss", prob.solve())
w_ = np.array(beta.value).flatten()
w_ = w_ / np.linalg.norm(w_) # Until I add a 0.
print("error", np.linalg.norm(w-w_))
# Visualize data
plt.plot(np.arange(-3,3), -w[0]/w[1] * np.arange(-3,3), linestyle='dashed', c='black')
plt.plot(np.arange(-3,3), -w_[0]/w_[1] * np.arange(-3,3), c='green')
plt.scatter(X[y<0,0,0], X[y<0,1,0], c='r', marker='o')
plt.scatter(X[y<0,0,1], X[y<0,1,1], c='r', marker='o')
plt.scatter(X[y>0,0,0], X[y>0,1,0], c='b', marker='D')
plt.scatter(X[y>0,0,1], X[y>0,1,1], c='b', marker='D')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Method
Step2: Exact SVM solution
Step3: Naive
Step4: As expected, the naive approach does really poorly.
Step5: Proposed 2
|
2,784
|
<ASSISTANT_TASK:>
Python Code:
# This will take a few minutes
r = requests.get("http://www.transtats.bts.gov/Download/On_Time_On_Time_Performance_2015_1.zip",
stream=True)
with open("otp-1.zip", "wb") as f:
for chunk in r.iter_content(chunk_size=1024):
f.write(chunk)
f.flush()
r.close()
z = zipfile.ZipFile("otp-1.zip")
fp = z.extract('On_Time_On_Time_Performance_2015_1.csv')
columns = ['FlightDate', 'Carrier', 'TailNum', 'FlightNum',
'Origin', 'OriginCityName', 'OriginStateName',
'Dest', 'DestCityName', 'DestStateName',
'DepTime', 'DepDelay', 'TaxiOut', 'WheelsOn',
'WheelsOn', 'TaxiIn', 'ArrTime', 'ArrDelay',
'Cancelled', 'Diverted', 'ActualElapsedTime',
'AirTime', 'Distance', 'CarrierDelay', 'WeatherDelay',
'NASDelay', 'SecurityDelay', 'LateAircraftDelay']
df = pd.read_csv('On_Time_On_Time_Performance_2015_1.csv', usecols=columns,
dtype={'DepTime': str})
dep_time = df.DepTime.fillna('').str.pad(4, side='left', fillchar='0')
df['ts'] = pd.to_datetime(df.FlightDate + 'T' + dep_time,
format='%Y-%m-%dT%H%M%S')
df = df.drop(['FlightDate', 'DepTime'], axis=1)
carriers = ['AA', 'AS', 'B6', 'DL', 'US', 'VX', 'WN', 'UA', 'NK', 'MQ', 'OO',
'EV', 'HA', 'F9']
df.pipe(ck.within_set, items={'Carrier': carriers}).Carrier.value_counts().head()
df.pipe(ck.none_missing, columns=['Carrier', 'TailNum', 'FlightNum'])
import engarde.decorators as ed
@ed.within_range({'Counts!': (4000, 110000)})
@ed.within_n_std(3)
def pretty_counts(df):
return df.Carrier.value_counts().to_frame(name='Counts!')
pretty_counts(df)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's suppose that down the road our program can only handle certain carriers; an update to the data adding a new carrier would violate an assumption we hold. We'll use the within_set method to check our assumption
Step2: Great, our assumption was true (at least for now).
Step3: Note
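These checks can also be written by hand; below is a minimal sketch of the kind of runtime assertion engarde performs, using plain pandas on a hypothetical toy frame (the helper names mirror, but are not, the engarde API):

```python
import pandas as pd

def none_missing(df, columns):
    # Raise if any of the given columns contain missing values; return df for chaining.
    bad = [c for c in columns if df[c].isnull().any()]
    if bad:
        raise AssertionError("missing values in columns: %s" % bad)
    return df

def within_set(df, items):
    # Raise if a column contains values outside its allowed set.
    for col, allowed in items.items():
        extra = set(df[col]) - set(allowed)
        if extra:
            raise AssertionError("unexpected values in %s: %s" % (col, extra))
    return df

toy = pd.DataFrame({"Carrier": ["AA", "DL", "AA"], "FlightNum": [1, 2, 3]})
# Both checks pass the frame through unchanged, so they compose with .pipe().
toy = toy.pipe(none_missing, columns=["Carrier", "FlightNum"]) \
         .pipe(within_set, items={"Carrier": ["AA", "DL", "WN"]})
```

A failing check raises immediately, which is the same fail-fast behaviour the decorators give you.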
|
2,785
|
<ASSISTANT_TASK:>
Python Code:
import math
import os
import pandas as pd
import numpy as np
from datetime import datetime
import tensorflow as tf
from tensorflow import data
print "TensorFlow : {}".format(tf.__version__)
SEED = 19831060
DATA_DIR='data'
# !mkdir $DATA_DIR
# !gsutil cp gs://cloud-samples-data/ml-engine/census/data/adult.data.csv $DATA_DIR
# !gsutil cp gs://cloud-samples-data/ml-engine/census/data/adult.test.csv $DATA_DIR
TRAIN_DATA_FILE = os.path.join(DATA_DIR, 'adult.data.csv')
EVAL_DATA_FILE = os.path.join(DATA_DIR, 'adult.test.csv')
TRAIN_DATA_SIZE = 32561
EVAL_DATA_SIZE = 16278
HEADER = ['age', 'workclass', 'fnlwgt', 'education', 'education_num',
'marital_status', 'occupation', 'relationship', 'race', 'gender',
'capital_gain', 'capital_loss', 'hours_per_week',
'native_country', 'income_bracket']
HEADER_DEFAULTS = [[0], [''], [0], [''], [0], [''], [''], [''], [''], [''],
[0], [0], [0], [''], ['']]
NUMERIC_FEATURE_NAMES = ['age', 'education_num', 'capital_gain', 'capital_loss', 'hours_per_week']
CATEGORICAL_FEATURE_NAMES = ['gender', 'race', 'education', 'marital_status', 'relationship',
'workclass', 'occupation', 'native_country']
FEATURE_NAMES = NUMERIC_FEATURE_NAMES + CATEGORICAL_FEATURE_NAMES
TARGET_NAME = 'income_bracket'
TARGET_LABELS = [' <=50K', ' >50K']
WEIGHT_COLUMN_NAME = 'fnlwgt'
NUM_CLASSES = len(TARGET_LABELS)
def get_categorical_features_vocabulary():
    data = pd.read_csv(TRAIN_DATA_FILE, names=HEADER)
    return {
        column: list(data[column].unique())
        for column in data.columns if column in CATEGORICAL_FEATURE_NAMES
    }
feature_vocabulary = get_categorical_features_vocabulary()
print(feature_vocabulary)
def create_feature_columns():
    feature_columns = []
    for column in NUMERIC_FEATURE_NAMES:
        feature_column = tf.feature_column.numeric_column(column)
        feature_columns.append(feature_column)
    for column in CATEGORICAL_FEATURE_NAMES:
        vocabulary = feature_vocabulary[column]
        embed_size = int(round(math.sqrt(len(vocabulary)) * 1.5))
        feature_column = tf.feature_column.embedding_column(
            tf.feature_column.categorical_column_with_vocabulary_list(column, vocabulary),
            embed_size)
        feature_columns.append(feature_column)
    return feature_columns
from tensorflow.python.ops import math_ops
def find_learning_rate(params):
training_step = tf.cast(tf.train.get_global_step(), tf.float32)
factor = tf.cast(tf.multiply(1.e-5, training_step*training_step), tf.float32)
learning_rate = tf.add(params.learning_rate, factor)
return learning_rate
def update_learning_rate(params):
training_step = tf.cast(tf.train.get_global_step(), tf.int32)
base_cycle = tf.floordiv(training_step, params.cycle_length)
current_cycle = tf.cast(tf.round(tf.sqrt(tf.cast(base_cycle, tf.float32))) + 1, tf.int32)
current_cycle_length = tf.cast(tf.multiply(current_cycle, params.cycle_length), tf.int32)
cycle_step = tf.mod(training_step, current_cycle_length)
learning_rate = tf.cond(
tf.equal(cycle_step, 0),
lambda: params.learning_rate,
lambda: tf.train.cosine_decay(
learning_rate=params.learning_rate,
global_step=cycle_step,
decay_steps=current_cycle_length,
alpha=0.0,
)
)
tf.summary.scalar('base_cycle', base_cycle)
tf.summary.scalar('current_cycle', current_cycle)
tf.summary.scalar('current_cycle_length', current_cycle_length)
tf.summary.scalar('cycle_step', cycle_step)
tf.summary.scalar('learning_rate', learning_rate)
return learning_rate
def model_fn(features, labels, mode, params):
is_training = True if mode == tf.estimator.ModeKeys.TRAIN else False
# model body
def _inference(features, mode, params):
feature_columns = create_feature_columns()
input_layer = tf.feature_column.input_layer(features=features, feature_columns=feature_columns)
dense_inputs = input_layer
for i in range(len(params.hidden_units)):
dense = tf.keras.layers.Dense(params.hidden_units[i], activation='relu')(dense_inputs)
dense_dropout = tf.keras.layers.Dropout(params.dropout_prob)(dense, training=is_training)
dense_inputs = dense_dropout
fully_connected = dense_inputs
logits = tf.keras.layers.Dense(units=1, name='logits', activation=None)(fully_connected)
return logits
# model head
head = tf.contrib.estimator.binary_classification_head(
label_vocabulary=TARGET_LABELS,
weight_column=WEIGHT_COLUMN_NAME
)
learning_rate = find_learning_rate(params) if params.lr_search else update_learning_rate(params)
return head.create_estimator_spec(
features=features,
mode=mode,
logits=_inference(features, mode, params),
labels=labels,
optimizer=tf.train.AdamOptimizer(learning_rate)
)
def create_estimator(params, run_config):
feature_columns = create_feature_columns()
estimator = tf.estimator.Estimator(
model_fn,
params=params,
config=run_config
)
return estimator
def make_input_fn(file_pattern, batch_size, num_epochs,
mode=tf.estimator.ModeKeys.EVAL):
def _input_fn():
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=file_pattern,
batch_size=batch_size,
column_names=HEADER,
column_defaults=HEADER_DEFAULTS,
label_name=TARGET_NAME,
field_delim=',',
use_quote_delim=True,
header=False,
num_epochs=num_epochs,
shuffle=(mode==tf.estimator.ModeKeys.TRAIN)
)
iterator = dataset.make_one_shot_iterator()
features, target = iterator.get_next()
return features, target
return _input_fn
def train_and_evaluate_experiment(params, run_config):
# TrainSpec ####################################
train_input_fn = make_input_fn(
TRAIN_DATA_FILE,
batch_size=params.batch_size,
num_epochs=None,
mode=tf.estimator.ModeKeys.TRAIN
)
train_spec = tf.estimator.TrainSpec(
input_fn = train_input_fn,
max_steps=params.traning_steps
)
###############################################
# EvalSpec ####################################
eval_input_fn = make_input_fn(
EVAL_DATA_FILE,
num_epochs=1,
batch_size=params.batch_size,
)
eval_spec = tf.estimator.EvalSpec(
name=datetime.utcnow().strftime("%H%M%S"),
input_fn = eval_input_fn,
steps=None,
start_delay_secs=0,
throttle_secs=params.eval_throttle_secs
)
###############################################
tf.logging.set_verbosity(tf.logging.INFO)
if tf.gfile.Exists(run_config.model_dir):
print("Removing previous artefacts...")
tf.gfile.DeleteRecursively(run_config.model_dir)
print ''
estimator = create_estimator(params, run_config)
print ''
time_start = datetime.utcnow()
print("Experiment started at {}".format(time_start.strftime("%H:%M:%S")))
print(".......................................")
# tf.estimator.train_and_evaluate(
# estimator=estimator,
# train_spec=train_spec,
# eval_spec=eval_spec
# )
estimator.train(train_input_fn, steps=params.traning_steps)
time_end = datetime.utcnow()
print(".......................................")
print("Experiment finished at {}".format(time_end.strftime("%H:%M:%S")))
print("")
time_elapsed = time_end - time_start
print("Experiment elapsed time: {} seconds".format(time_elapsed.total_seconds()))
MODELS_LOCATION = 'models/census'
MODEL_NAME = 'dnn_classifier-01'
model_dir = os.path.join(MODELS_LOCATION, MODEL_NAME)
BATCH_SIZE = 64
NUM_EPOCHS = 10
steps_per_epoch = int(math.ceil(TRAIN_DATA_SIZE / float(BATCH_SIZE)))
training_steps = int(steps_per_epoch * NUM_EPOCHS)
print("Training data size: {}".format(TRAIN_DATA_SIZE))
print("Btach data size: {}".format(BATCH_SIZE))
print("Steps per epoch: {}".format(steps_per_epoch))
print("Traing epochs: {}".format(NUM_EPOCHS))
print("Training steps: {}".format(training_steps))
params = tf.contrib.training.HParams(
batch_size=BATCH_SIZE,
traning_steps=training_steps,
hidden_units=[64, 32],
learning_rate=1.e-3,
cycle_length=500,
dropout_prob=0.1,
eval_throttle_secs=0,
lr_search=False
)
run_config = tf.estimator.RunConfig(
tf_random_seed=SEED,
save_checkpoints_steps=steps_per_epoch,
log_step_count_steps=100,
save_summary_steps=1,
keep_checkpoint_max=3,
model_dir=model_dir,
)
train_and_evaluate_experiment(params, run_config)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download the Data
Step2: Dataset Metadata
Step3: Building a TensorFlow Custom Estimator
Step4: 2. Create model_fn
Step5: 3. Create estimator
Step6: 4. Data Input Function
Step7: 5. Experiment Definition
Step8: 6. Run Experiment with Parameters
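The cyclical learning-rate logic in update_learning_rate can be sketched outside TensorFlow; this simplified NumPy version assumes a fixed cycle length (the notebook additionally grows the cycle over time):

```python
import numpy as np

def cosine_restart_lr(step, base_lr=1e-3, cycle_length=500):
    # Cosine decay from base_lr toward 0 within a cycle, restarting at each cycle boundary.
    cycle_step = step % cycle_length
    return 0.5 * base_lr * (1.0 + np.cos(np.pi * cycle_step / cycle_length))

# Peaks at every multiple of cycle_length, decays toward 0 in between.
schedule = [cosine_restart_lr(s) for s in (0, 250, 499, 500)]
```

The restart at step 500 back to the full base rate is what the tf.cond branch in the model function implements.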
|
2,786
|
<ASSISTANT_TASK:>
Python Code:
number = 3.14159265359
number = "1.7724538509055743"
number = 3.14159265359
number = number ** 0.5 # Raise to the 0.5, which means square root.
number = str(number) # Cast to a string.
def first_negative(numbers):
num = 0
index = 0
while numbers[index] > 0:
index += 1
num = numbers[index]
return num
first_negative([1, 2, 3, -1])
first_negative([10, -10, -100, -50, -75, 10])
def compute_3dmagnitudes(X, Y, Z):
magnitudes = []
### BEGIN SOLUTION
### END SOLUTION
return magnitudes
def compute_3dmagnitudes(X, Y, Z):
magnitudes = []
length = len(X)
for i in range(length):
# Pull out the corresponding (x, y, z) coordinates.
x = X[i]
y = Y[i]
z = Z[i]
### Do the magnitude computation ###
return magnitudes
def compute_3dmagnitudes(X, Y, Z):
magnitudes = []
for x, y, z in zip(X, Y, Z):
pass
### Do the magnitude computation ###
return magnitudes
def list_of_positive_indices(numbers):
indices = []
for index, element in enumerate(numbers):
if element > 0:
indices.append(index)
else:
pass # Why are we here? What is our purpose? Do we even exist?
return indices
def list_of_positive_indices(numbers):
indices = []
for index, element in enumerate(numbers):
if element > 0:
indices.append(index)
return indices
def make_matrix(rows, cols):
pre_built_row = []
# Build a single row that has <cols> 0s.
for j in range(cols):
pre_built_row.append(0)
# Now build a list of the rows.
matrix = []
for i in range(rows):
matrix.append(pre_built_row)
return matrix
m = make_matrix(3, 4)
print(m)
m[0][0] = 99
print(m)
def make_matrix(rows, cols):
matrix = []
for i in range(rows):
matrix.append([]) # First, append an empty list for the new row.
for j in range(cols):
matrix[i].append(0) # Now grow that empty list.
return matrix
m = make_matrix(3, 4)
print(m)
m[0][0] = 99
print(m)
# Some test data
import numpy as np
x = np.random.random(10)
y = np.random.random(10)
len(x) == len(y)
x = np.random.random((5, 5)) # A 5x5 matrix
y = np.random.random((5, 10)) # A 5x10 matrix
len(x) == len(y)
x = np.random.random((5, 5)) # A 5x5 matrix
y = np.random.random((5, 10)) # A 5x10 matrix
x.shape == y.shape
import numpy as np
# Generate a random list to work with as an example.
some_list = np.random.random(10).tolist()
print(some_list)
for element in some_list:
print(element)
list_length = len(some_list)
for index in range(list_length): # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
element = some_list[index]
print(element)
def count_substring(base_string, substring, case_insensitive=True):
    count = 0
    if case_insensitive:
        base_string = base_string.lower()
        substring = substring.lower()
length = len(substring)
index = 0
    while (index + length) <= len(base_string):
# Sliding window.
substring_to_test = base_string[index : (index + length)]
if substring_to_test == substring:
count += 1
index += 1
return count
numbers = [10, 20, 30, 40]
print(sum(numbers))
numbers = [10/100, 20/100, 30/100, 40/100]
# (0.1 + 0.2 + 0.3 + 0.4) = 1.0
print(sum(numbers))
import numpy as np
def normalize(something):
# Compute the normalizing constant
s = something.sum()
# Use vectorized programming (broadcasting) to normalize each element
# without the need for any loops
normalized = (something / s)
return normalized
import numpy as np # 1
def normalize(something): # 2
return something / something.sum() # 3
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Answering this is not simply taking what's in the autograder and copy-pasting it into your solution
Step2: The whole point is that your code should generalize to any possible input.
Step3: With great looping power comes great looping responsibility.
Step4: From A2
Step5: Since all three lists--X, Y, and Z--are the same length, you could run a for loop with an index through one of them, and use that index across all three. That would work just fine.
Step6: ...but it's very verbose, and can get a bit difficult to follow.
Step7: Look how much cleaner that is!
Step8: if statements are adults; they can handle being short-staffed, as it were. If there's literally nothing to do in an else clause, you're perfectly able to omit it entirely
Step9: An actual example of reference versus value.
Step10: Certainly looks ok--3 "rows" (i.e. lists), each with 4 0s in them. Why is this a problem?
Step11: Now if we print this... what do you think we'll see?
Step12: cue The Thriller
Step13: From A4
Step14: which works, but only for one-dimensional arrays.
Step15: These definitely are not equal in length. But that's because len doesn't measure length of matrices...it only measures the number of rows (i.e., the first axis--which in this case is 5 in both, hence it thinks they're equal).
Step16: We get the answer we expect.
Step17: 1
Step18: 2
Step19: In general
Step20: Normalization by any other name
Step21: This can then be condensed into the 3 lines required by the question
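The zip-based pattern from the magnitude exercise above can be completed as follows (one possible solution, not the only one):

```python
def compute_3dmagnitudes(X, Y, Z):
    # Walk the three coordinate lists in lockstep with zip.
    magnitudes = []
    for x, y, z in zip(X, Y, Z):
        magnitudes.append((x ** 2 + y ** 2 + z ** 2) ** 0.5)
    return magnitudes
```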
|
2,787
|
<ASSISTANT_TASK:>
Python Code:
#$HIDE_INPUT$
import pandas as pd
autos = pd.read_csv("../input/fe-course-data/autos.csv")
autos["make_encoded"] = autos.groupby("make")["price"].transform("mean")
autos[["make", "price", "make_encoded"]].head(10)
#$HIDE_INPUT$
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import warnings
plt.style.use("seaborn-whitegrid")
plt.rc("figure", autolayout=True)
plt.rc(
"axes",
labelweight="bold",
labelsize="large",
titleweight="bold",
titlesize=14,
titlepad=10,
)
warnings.filterwarnings('ignore')
df = pd.read_csv("../input/fe-course-data/movielens1m.csv")
df = df.astype(np.uint8, errors='ignore') # reduce memory footprint
print("Number of Unique Zipcodes: {}".format(df["Zipcode"].nunique()))
X = df.copy()
y = X.pop('Rating')
X_encode = X.sample(frac=0.25)
y_encode = y[X_encode.index]
X_pretrain = X.drop(X_encode.index)
y_train = y[X_pretrain.index]
from category_encoders import MEstimateEncoder
# Create the encoder instance. Choose m to control noise.
encoder = MEstimateEncoder(cols=["Zipcode"], m=5.0)
# Fit the encoder on the encoding split.
encoder.fit(X_encode, y_encode)
# Encode the Zipcode column to create the final training data
X_train = encoder.transform(X_pretrain)
plt.figure(dpi=90)
ax = sns.distplot(y, kde=False, norm_hist=True)
ax = sns.kdeplot(X_train.Zipcode, color='r', ax=ax)
ax.set_xlabel("Rating")
ax.legend(labels=['Zipcode', 'Rating']);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Target Encoding
Step2: This kind of target encoding is sometimes called a mean encoding. Applied to a binary target, it's also called bin counting. (Other names you might come across include
Step3: With over 3000 categories, the Zipcode feature makes a good candidate for target encoding, and the size of this dataset (over one-million rows) means we can spare some data to create the encoding.
Step4: The category_encoders package in scikit-learn-contrib implements an m-estimate encoder, which we'll use to encode our Zipcode feature.
Step5: Let's compare the encoded values to the target to see how informative our encoding might be.
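The m-estimate formula that MEstimateEncoder applies can also be written by hand in pandas; this hypothetical helper shows the smoothing (n * category_mean + m * global_mean) / (n + m):

```python
import pandas as pd

def m_estimate_encode(df, cat, target, m=5.0):
    # Blend each category's target mean with the global mean, weighted by category size.
    global_mean = df[target].mean()
    stats = df.groupby(cat)[target].agg(["mean", "count"])
    smooth = (stats["count"] * stats["mean"] + m * global_mean) / (stats["count"] + m)
    return df[cat].map(smooth)

toy = pd.DataFrame({"zip": ["a", "a", "b"], "rating": [4.0, 5.0, 1.0]})
encoded = m_estimate_encode(toy, "zip", "rating", m=1.0)
```

Rare categories (like "b" here) are pulled strongly toward the global mean, which is exactly the overfitting protection discussed above.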
|
2,788
|
<ASSISTANT_TASK:>
Python Code:
import vcsn
def aut(e):
return vcsn.context('lal_char, b').expression(e, 'binary').standard()
a1 = aut('a*+b*'); a1
a2 = aut('b*+a*'); a2
a1.is_isomorphic(a2), a1 == a2
%%automaton -s a1
$ -> 0
0 -> 1 a
%%automaton -s a2
$ -> 0
0 -> 1 b
a1.is_isomorphic(a1), a1.is_isomorphic(a2)
a1 = aut('a+a')
a2 = aut('a')
a1.is_isomorphic(a2), a1.is_equivalent(a2)
def aut(e):
return vcsn.context('lal_char, z').expression(e, 'binary').standard()
a1 = aut('<2>a+<3>b')
a2 = aut('<3>b+<2>a')
a1.is_isomorphic(a2)
a1 = aut('<2>a')
a2 = aut('a+a')
a1.is_isomorphic(a2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The automata must be accessible, but coaccessibility is not required.
Step2: Equivalent automata can be non isomorphic.
Step3: Weighted Automata
|
2,789
|
<ASSISTANT_TASK:>
Python Code:
df = pd.DataFrame(np.fromfile("./output.bni", dtype=np.uint16).astype(np.float32) * (3300 / 2**12))
#df.describe()
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
df[20000:20100].plot(ax=ax)
df_r = df.groupby(df.index//10).mean()
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
df_r[:30000].plot(ax=ax)
df_r1000 = df.groupby(df.index//1000).mean()
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
df_r1000.plot(ax=ax)
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
df_r1000[40000:41000].plot(ax=ax)
times = [
51540,
52010,
52502,
52857,
53317,
53660,
54118,
54504,
54966,
55270,
60000 + 1916, # PageLoadStarted
60000 + 3453, # DownloadStarted
60000 + 9147, # DownloadFinished
60000 + 13336,
60000 + 13691,
60000 + 14051,
60000 + 14377,
60000 + 14783,
60000 + 15089,
60000 + 32190,
60000 + 34015,
60000 + 37349,
60000 + 37491,
]
sync = 40200
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
sync = 40205
df_r1000[40000:43000].plot(ax=ax)
for t in times:
sns.plt.axvline(sync + t - times[0])
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
df[41100000:41250000].plot(ax=ax)
sns.plt.axvline(40200000 + 470000 + 498000 + 5000)
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
df[40100000:40250000].plot(ax=ax)
sns.plt.axvline(40200000 + 5000)
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
sync = 40205
df_r1000[40000:65000].plot(ax=ax)
for t in times:
sns.plt.axvline(sync + t - times[0])
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
sync = 40205
df_r1000[52000:58000].plot(ax=ax)
for t in times:
sns.plt.axvline(sync + t - times[0])
df[1010000:1025000].plot()
for i in range(5):
df[10230+256*i:10250+256*(i+1)].plot()
df[:2048].plot()
df4 = pd.DataFrame(np.fromfile("./output3.bin", dtype=np.uint16).astype(np.float32) * (3300 / 2**12))
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
df4[:16384].plot(ax=ax)
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
df4[6000000:6001000].plot(ax=ax)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: There is a lot of data: a million samples per second, and we collected almost 70 million samples. If we plot them all at once, Python will grind to a halt, so we plot small chunks: 100 samples, i.e. 100 microseconds
Step2: Let's move to a coarser scale: group the data into 10 µs bins and take the mean
Step3: Let's look at the timestamps in logcat. We have three events from the Browser; the rest are flashlight on/off toggles.
Step4: The interesting consumption spikes start around the 40000th millisecond (there are five in a row; we blinked the flashlight five times).
Step5: We assume the first spike occurred at the 40200th millisecond. Now let's compute the relative times
Step6: And draw them on our plot
Step7: The second flash has a sharper rising edge, so let's try to synchronize on it more precisely (using the microsecond data)
Step8: The same for the first flash; you can see its edge is blurred
Step9: Now plot the data for the whole test case, taking the synchronization shift into account
Step10: And zoom in to the file-download period
Step11: Fixing a nasty bug
Step12: The distance between peaks is 256 samples
Step13: At the very beginning there are empty samples with spikes, one per buffer. The value is "predicted" ahead by the same amount
Step14: It turned out there is a bug in the source code; the description is here
Step15: One millisecond, as we see it
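The groupby(index // k).mean() downsampling trick used throughout this analysis can be illustrated on a toy series:

```python
import pandas as pd

df_toy = pd.DataFrame({"mv": [1.0, 3.0, 5.0, 7.0, 9.0, 11.0]})
# Average every 2 consecutive samples into one, halving the sample rate.
df_down = df_toy.groupby(df_toy.index // 2).mean()
```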
|
2,790
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy.optimize as sciopt
x = np.array([[ 1247.04, 1274.9 , 1277.81, 1259.51, 1246.06, 1230.2 ,
1207.37, 1192. , 1180.84, 1182.76, 1194.76, 1222.65],
[ 589. , 581.29, 576.1 , 570.28, 566.45, 575.99,
601.1 , 620.6 , 637.04, 631.68, 611.79, 599.19]])
y = np.array([ 1872.81, 1875.41, 1871.43, 1865.94, 1854.8 , 1839.2 ,
1827.82, 1831.73, 1846.68, 1856.56, 1861.02, 1867.15])
fp = lambda p, x: p[0]*x[0]+p[1]*x[1]
e = lambda p, x, y: ((fp(p,x)-y)**2).sum()
pmin = np.array([0.5,0.7]) # minimum bounds
pmax = np.array([1.5,1.8]) # maximum bounds
p_guess = (pmin + pmax)/2
bounds = np.c_[pmin, pmax]
sol = sciopt.minimize(e, p_guess, bounds=bounds, args=(x,y))
result = sol.x
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
2,791
|
<ASSISTANT_TASK:>
Python Code:
import main
raw_data = main.load_raw_data([]) # assumes the cache
gdf = main.get_grouped_dataset(raw_data, level=4)
m, dfX = main.get_model_to_draw(4)
import model
figure()
distr = model.ConditionalDistribution(gdf['131_pct'], gdf['135_pct']).fit()
distr.draw_joint()
xlabel('Proporción de votos a Scioli'.decode('utf8'))
ylabel('Proporción de votos a Macri'.decode('utf8'))
figure()
distr = model.ConditionalDistribution(gdf['138_pct'], gdf['131_pct']).fit()
distr.draw_joint()
xlabel('Proporción de votos a Massa'.decode('utf8'))
ylabel('Proporción de votos a Scioli'.decode('utf8'))
figure()
distr = model.ConditionalDistribution(gdf['138_pct'], gdf['135_pct']).fit()
distr.draw_joint()
xlabel('Proporción de votos a Massa'.decode('utf8'))
ylabel('Proporción de votos a Macri'.decode('utf8'))
level = 4
raw_data = main.load_raw_data([]) # assumes the cache
gdf = main.get_grouped_dataset(raw_data, level=level)
totals = {}
for k in u'131', u'132', u'133', u'135', u'137', u'138':
totals[k] = gdf[k].sum()
s = sum(totals.values())
# Since the process is slow, instead of running it in this notebook,
# it runs in a separate process that stores the results in a MongoDB collection
import main
# row_id = max(main.save_collection.find({'predict_level': level}).distinct('row_id'))
# docs = list(main.save_collection.find({'predict_level': level, 'row_id': row_id}))
row_id = max(main.save_collection.find({'level': level}).distinct('row_id'))
docs = list(main.save_collection.find({'level': level, 'row_id': row_id}))
print len(docs)
import pandas as pd
df = pd.DataFrame([e['pred'] for e in docs])
df['131_pct'] += totals['131']
df['135_pct'] += totals['135']
s = df['135_pct'] + df['131_pct']
df['131_pct_pct'] = df['131_pct'] / s
df['135_pct_pct'] = df['135_pct'] / s
figure()
(df['131_pct_pct']).hist(alpha=0.65, color='b')
(df['135_pct_pct']).hist(alpha=0.65, color='y')
legend(('Scioli', 'Macri'))
print np.percentile(df['131_pct_pct'], [.5, 99.5])
print np.percentile(df['135_pct_pct'], [.5, 99.5])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Voting patterns
Step2: There is an inverse relationship between the people who voted for Macri and for Scioli.
Step3: This pattern is very interesting: there seem to be two sections, one where Massa gets around 15%, and then a tail where Massa gets more votes, and that larger vote count correlates with the number of votes for Scioli.
Step4: This is another very interesting pattern; it clearly shows a negative correlation
Step5: This result is quite surprising. There are two possible interpretations
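The interval reported above comes from np.percentile with [.5, 99.5], i.e. a 99% interval over the simulated outcomes; a sketch with hypothetical draws:

```python
import numpy as np

np.random.seed(0)
# Hypothetical simulated draws of one candidate's final vote share.
draws = np.random.normal(loc=0.51, scale=0.01, size=10000)
low, high = np.percentile(draws, [0.5, 99.5])  # 99% interval, as in the notebook
```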
|
2,792
|
<ASSISTANT_TASK:>
Python Code:
# Pykep imports
from pykep.trajopt import mga_1dsm, launchers
from pykep.planet import jpl_lp
from pykep import epoch
from pykep.core import lambert_problem, propagate_lagrangian, fb_prop
from pykep import DAY2SEC, DAY2YEAR, AU, RAD2DEG, ic2par
from pykep.trajopt.gym import solar_orbiter_resdsm, solar_orbiter_1dsm
from pykep.trajopt.gym._solar_orbiter import _solar_orbiter_udp
# Other imports
import numpy as np
from numpy.linalg import norm
from numpy import sign
from math import acos, asin, cos, exp, log, pi, sin
import matplotlib.pyplot as plt
from copy import deepcopy
import pygmo
import time
%matplotlib notebook
# Solution found by Dietmar Wolz
x = [7454.239968096991, 404.2427682075494, 224.7007818627643, 166.76492404089777, 332.1943747024565, 224.70067465717648, 224.70064854708585, 674.102011793403, 449.4013940418292, 0.46175426496713623, 0.15595624336471145, 1.057897958249542, 0.5111221499713654, -0.23551244689346862, 1.0578678140141846, 0.6126539084565384, 0.0363223837410434, 1.0578389473452743, 0.15966690593566818, 0.9904935428678754, 1.05789170937413, 0.1958888817342986, 0.4998253324280215, 1.0580079586797548, 0.676246650510244, 1.0581815819568168]
earth = jpl_lp("earth")
venus = jpl_lp("venus")
seq = [earth, venus, venus, earth, venus, venus, venus, venus, venus]
solar_orbiter = _solar_orbiter_udp([7000, 8000], seq=seq, max_revs=3, dummy_DSMs=True)
# Show angle, delta v, mass and sun distance constraints
print (str(solar_orbiter.fitness(x)))
solar_orbiter.pretty(x)
solar_orbiter.plot(x)
solar_orbiter.plot_distance_and_flybys(x)
plt.show()
# Plot inclination and distance
timeframe = range(1,5*365)
distances = []
inclinations = []
for i in timeframe:
epoch = x[0]+i
pos, _ = solar_orbiter.eph(x, epoch)
inclination = sign(pos[2])*acos(norm(pos[:2]) / norm(pos))
distances.append(norm(pos) / AU)
inclinations.append(inclination)
color = 'tab:red'
fig2, ax2 = plt.subplots()
ax2.plot(list(timeframe), inclinations, color=color)
ax2.set_ylabel("Inclination", color=color)
ax2.set_xlabel("Days")
ax2.set_title("Distance and Inclination")
ax3 = ax2.twinx() # instantiate a second axes that shares the same x-axis
color = 'tab:blue'
ax3.set_ylabel('AU', color=color)
ax3.plot(list(timeframe), distances, color=color)
ax3.tick_params(axis='y', labelcolor=color)
fig2.tight_layout() # otherwise the right y-label is slightly clipped
plt.show()
# Solution from Dietmar Wolz, at https://gist.github.com/dietmarwo/86f24e1b9a702e18615b767e226e883f
x = [7454.227192641146, 0.4423260291238188, 0.6302883057062232, 5599.894458722009, 0.7401095862496564, 404.2492226073736, 0.15597148623687598, 1.0580076618074068, 0.5101575847266027, 224.70077856657431, -2.4933856179307377, 1.6963855192653385, 0.2901090720064779, 166.75556035570827, 1.6526396514193433, 1.0991472801826805, 0.7121915384668397, 332.20042106080774, -0.23546201156899235, 1.0578347798406122, 0.4655083705172811, 224.70070445614851, 0.036302301169642275, 1.0585127275180912, 0.5295285687741118, 224.70066944051027, 0.9905167586625055, 1.0579496308868774, 0.40373301194586536, 674.102018785453, 0.4999699781508085, 1.0585074637313125, 0.862087191395979, 449.4013947111085, 0.09134030084117842, 1.0611243930405114, 0.506628071700843, 449.40141784746976, 1.042759861414433, 1.5785454129349583]
seq = [earth, venus, venus, earth, venus, venus, venus, venus, venus, venus]
solar_orbiter = solar_orbiter_1dsm#([7000, 8000], seq=seq, max_revs=3)
print (str(solar_orbiter.fitness(x)))
solar_orbiter.pretty(x)
solar_orbiter.plot(x)
solar_orbiter.plot_distance_and_flybys(x)
plt.show()
# Plot inclination and distance
distances = []
inclinations = []
eph_function = solar_orbiter.get_eph_function(x)
for i in timeframe:
epoch = x[0]+i
pos, _ = eph_function(epoch)
inclination = sign(pos[2])*acos(norm(pos[:2]) / norm(pos))
distances.append(norm(pos) / AU)
inclinations.append(inclination)
color = 'tab:red'
fig2, ax2 = plt.subplots()
ax2.plot(list(timeframe), inclinations, color=color)
ax2.set_ylabel("Inclination", color=color)
ax2.set_xlabel("Days")
ax2.set_title("Distance and Inclination")
ax3 = ax2.twinx() # instantiate a second axes that shares the same x-axis
color = 'tab:blue'
ax3.set_ylabel('AU', color=color)
ax3.plot(list(timeframe), distances, color=color)
ax3.tick_params(axis='y', labelcolor=color)
fig2.tight_layout() # otherwise the right y-label is slightly clipped
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Solar Orbiter, modelled with DSMs only on resonant arcs
Step2: Solar Orbiter modeled as mga_1dsm
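Both formulations compute the spacecraft's inclination from its heliocentric position as sign(z) * acos(|xy| / |xyz|); isolated as a small helper:

```python
import numpy as np
from numpy.linalg import norm

def inclination_angle(pos):
    # Signed angle of the position vector above the reference (x, y) plane.
    pos = np.asarray(pos, dtype=float)
    return np.sign(pos[2]) * np.arccos(norm(pos[:2]) / norm(pos))
```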
|
2,793
|
<ASSISTANT_TASK:>
Python Code:
import folium
from folium.features import ClickForMarker, LatLngPopup, ClickForLatLng
folium.Map().add_child(ClickForMarker())
folium.Map().add_child(LatLngPopup())
folium.Map().add_child(ClickForLatLng(format_str='"[" + lat + "," + lng + "]"'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Click on the map to see the effects
Step2: Click on the map to see the effects
|
2,794
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
from math import log, exp
%matplotlib inline
# Evaluate beta for this sensor
T_0=273.15+20;
N=(1/273.15-1/293.15)-(1/298.15-1/293.15);
beta= log(3000/1000)/N;
R_0=1000/exp(beta*((1/298.15)-(1/293.15)));
## Results
print('Beta for this sensor = %2.2f and the resistance of the sensor at temperature T0 is R_0 = = %2.2f' % (beta, R_0))
# Plot the sensor transfer function for the intended span.
# T= np.arange(start = -45, stop = 121, step = 1)+273.15;
T = np.linspace(-45,120)+273.15
R_T= R_0*np.exp(beta*(1/T-1/T_0));
# Plot
# plt.plot(T,R_T,T[45],R_T[45],'ro',T[45+25],R_T[45+25],'ro')
plt.plot(T,R_T)
plt.ylabel('Resistance of the sensor[ohm]')
plt.xlabel('Temperature [K]')
plt.show()
# Linear fit with just end points
a, b = np.polyfit(np.array([T[0],T[-1]]),np.array([R_T[0],R_T[-1]]),1)
print('The coefficients are a = %2.4f and b = %2.4f' % (a, b))
# Linear approximation
R_T_linear = a*T+b
# Plot
plt.plot(T,R_T_linear,'b:',label='Linear approximation')
plt.plot(T,R_T,label='Transfer function')
plt.ylabel('Resistance of the sensor[ohm]')
plt.xlabel('Temperature [K]')
plt.legend(loc='upper right')
plt.show()
# Output Full scale
FS = np.abs(np.max(R_T)-np.min(R_T))
error=np.abs(R_T-R_T_linear)/FS*100;
# error_X=np.abs(error_Y/a2);
plt.ylabel('error [%]')
plt.plot(T,error)
plt.xlabel('Temperature [K]')
plt.show()
print('The maximum error expected as a percentage of full scale is = %2.2f %%' % (np.max(error)))
# polyfit computes the coefficients a and b of degree=1
a,b = np.polyfit(T,R_T,1)
# Linear approximation
R_T_lsq = a*T+b
# Plot
plt.plot(T,R_T_lsq,'b:',label='Least Squares fit')
plt.plot(T,R_T,label='Transfer function')
plt.ylabel('Resistance of sensor [ohm]')
plt.xlabel('Temperature [K]')
plt.legend(loc='upper right')
plt.show()
error=np.abs(R_T-R_T_lsq)/FS*100;
# error_X=np.abs(error_Y/a2);
plt.ylabel('error [%]')
plt.plot(T,error)
plt.xlabel('Temperature [K]')
plt.show()
print('The maximum error expected as a percentage of full scale is = %2.1f %%' % (np.max(error)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The plot shows the nonlinear behaviour of the sensor and the two points used for estimating the curve.
Step2: Note how the error starts from zero, reaches a maximum of 66.5 %, and comes back down to zero at the other end point, as expected.
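The beta and R_0 derivation can be checked in isolation; the two calibration points (3000 ohm at 0 °C, 1000 ohm at 25 °C) are the ones the notebook uses, and its expression for N simplifies to 1/273.15 - 1/298.15:

```python
from math import exp, log

# Two calibration points: R = 3000 ohm at 273.15 K and R = 1000 ohm at 298.15 K.
N = (1 / 273.15) - (1 / 298.15)
beta = log(3000 / 1000) / N
# Resistance at the reference temperature T_0 = 293.15 K (20 degrees C).
R_0 = 1000 / exp(beta * ((1 / 298.15) - (1 / 293.15)))
```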
|
2,795
|
<ASSISTANT_TASK:>
Python Code:
import ipywidgets
import IPython.display
args = ipywidgets.Text(
description='Input string:',
value='cube')
IPython.display.display(args)
print args.value
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Construct the widget with a default value and display it with an IPython display call.
Step2: Print the widget's value for clarity.
|
2,796
|
<ASSISTANT_TASK:>
Python Code:
import os
with open(os.path.join("datasets", "smsspam", "SMSSpamCollection")) as f:
lines = [line.strip().split("\t") for line in f.readlines()]
text = [x[1] for x in lines]
y = [x[0] == "ham" for x in lines]
text[:10]
y[:10]
type(text)
type(y)
from sklearn.cross_validation import train_test_split
text_train, text_test, y_train, y_test = train_test_split(text, y, random_state=42)
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
vectorizer.fit(text_train)
X_train = vectorizer.transform(text_train)
X_test = vectorizer.transform(text_test)
print(len(vectorizer.vocabulary_))
X_train.shape
print(vectorizer.get_feature_names()[:20])
print(vectorizer.get_feature_names()[3000:3020])
print(X_train.shape)
print(X_test.shape)
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
clf
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
clf.score(X_train, y_train)
def visualize_coefficients(classifier, feature_names, n_top_features=25):
# get coefficients with large absolute values
coef = classifier.coef_.ravel()
positive_coefficients = np.argsort(coef)[-n_top_features:]
negative_coefficients = np.argsort(coef)[:n_top_features]
interesting_coefficients = np.hstack([negative_coefficients, positive_coefficients])
# plot them
plt.figure(figsize=(15, 5))
colors = ["red" if c < 0 else "blue" for c in coef[interesting_coefficients]]
plt.bar(np.arange(50), coef[interesting_coefficients], color=colors)
feature_names = np.array(feature_names)
plt.xticks(np.arange(1, 51), feature_names[interesting_coefficients], rotation=60, ha="right");
visualize_coefficients(clf, vectorizer.get_feature_names())
vectorizer = CountVectorizer(min_df=2)
vectorizer.fit(text_train)
X_train = vectorizer.transform(text_train)
X_test = vectorizer.transform(text_test)
clf = LogisticRegression()
clf.fit(X_train, y_train)
print(clf.score(X_train, y_train))
print(clf.score(X_test, y_test))
visualize_coefficients(clf, vectorizer.get_feature_names())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Training a Classifier on Text Features
Step2: We can now evaluate the classifier on the testing set. Let's first use the builtin score function, which is the rate of correct classification in the test set
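A minimal sketch of that accuracy computation, using made-up labels rather than the SMS data, shows what the score actually measures:

```python
# Hand-rolled classification accuracy (fraction of correct predictions)
y_true = [True, True, False, True, False, False]
y_pred = [True, False, False, True, False, True]

correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)
print(accuracy)  # 4 of 6 predictions match -> 0.666...
```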
Step3: We can also compute the score on the training set, to see how well we do there
Step4: Visualizing important features
|
2,797
|
<ASSISTANT_TASK:>
Python Code:
import imaginet.task
model = imaginet.task.load(path="model-ipa.zip")
emb = imaginet.task.embeddings(model)
print(emb.shape)
symb = imaginet.task.symbols(model)
print " ".join(symb.values())
%pylab inline
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
xy = pca.fit_transform(emb)
pylab.rc('font', family='DejaVu Sans')
pylab.figure(figsize=(8,8))
pylab.scatter(xy[:,0], xy[:,1], alpha=0.1)
for j,symb_j in symb.items():
pylab.text(xy[j,0], xy[j,1], symb_j)
from imaginet.data_provider import getDataProvider
# Adjust the root to point to the directory above data
prov = getDataProvider('coco', root="..")
sents = list(prov.iterSentences(split='val'))
from imaginet.simple_data import phonemes
sents_ipa = [ phonemes(sent) for sent in sents ]
reps = imaginet.task.representation(model, sents_ipa)
from scipy.spatial.distance import cdist
distance = cdist(reps, reps, metric='cosine')
import numpy
def neighbors(k, distance=distance):
nn = numpy.argsort(distance[k,:])[1:5]
print sents[k]['raw'], ''.join(sents_ipa[k])
for n in nn:
print u"✔" if sents[n]['imgid']==sents[k]['imgid'] else u"✘", \
sents[n]['raw'], ''.join(sents_ipa[n])
import random
for _ in range(10):
neighbors(random.randint(0, len(sents)))
print
import imaginet.tracer
reload(imaginet.tracer)
tr = imaginet.tracer.Tracer()
tr.fit(reps)
tr.proj.explained_variance_
from subprocess import check_output
def espeak(words):
return phon(check_output(["espeak", "-q", "--ipa",
'-v', 'en-us',
words]).decode('utf-8'))
def phon(inp):
return list(''.join(inp.split()))
def trace(orths, tracer=tr, model=model, eos=True, size=(6,6)):
ipas = [ espeak(orth) for orth in orths ]
states = [ imaginet.task.states(model, ipa) for ipa in ipas ]
pylab.figure(figsize=size)
tracer.traces(ipas, states, eos=eos)
trace(["A bowl of salad","A plate of pizza","A brown dog", "A black cat"])
trace(["a girl skiing", "a girl wind surfing", "a girl water skiing",])
trace(["a cow", "a baby cow","a tiny baby cow"])
trace(["some food on a table","a computer on a table","a table with food"])
pylab.axis('off')
trace(["a bear in a cage", "a brown bear in the zoo","a teddy bear on a chair"])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the model
Step2: Symbol embeddings
Step3: The table of IPA symbols corresponding to the 49 dimensions
Step4: Let's display the embeddings projected to 2D via PCA
Step5: Seems mostly random...
Step6: Project sentences to state space
Step7: Find similar sentences in state space
Step8: Display neighbors for a sentence
Step9: Tracing the evolution of states
Step10: Use espeak to convert graphemes to phonemes
Step11: Plot traces of example sentences
|
2,798
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
A = np.array([[0, 0, 0, 0],
[1, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, 0]])
mat1 = (A.T).dot(A)
print mat1
a = np.array([.25, .25, .25, .25])
for i in xrange(3):
a = mat1.dot(a)
print a
mat2 = (A).dot(A.T)
print mat2
h = np.array([.25, .25, .25, .25])
for i in xrange(3):
h = mat2.dot(h)
print h
A = np.array([[0,1,1,1,1],
[1,0,1,1,1],
[1,1,0,1,1],
[1,1,1,0,1],
[1,1,1,1,0]])
D = np.array([[4,0,0,0,0],
[0,4,0,0,0],
[0,0,4,0,0],
[0,0,0,4,0],
[0,0,0,0,4]])
L = D - A
s = 0
## output the sum of squares of the entries of the Laplacian matrix of G
for row in L:
s += row.dot(row)
print s
import math
def phi(bid, spent, budget):
frac = 1 - float(spent) / budget
return bid * (1 - math.exp( - frac) )
print "A's phi value = {}".format(phi(1, 20, 100))
print "B's phi value = {}".format(phi(2, 40, 100))
print "C's phi value = {}".format(phi(3, 60, 100))
print "D's phi value = {}".format(phi(4, 80, 100))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Question 8.
Step2: Question 9.
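As a sanity check on the budget-smoothed bidding rule used in the code above, phi = bid * (1 - exp(-(1 - spent/budget))), a quick sketch ranking the four advertisers with the same numbers shows that C wins even though D has the highest raw bid:

```python
import math

def phi(bid, spent, budget):
    # a bid discounted by how much of the budget is already spent
    remaining = 1 - spent / budget
    return bid * (1 - math.exp(-remaining))

# (bid, spent) pairs for advertisers A-D, all with budget 100
advertisers = {"A": (1, 20), "B": (2, 40), "C": (3, 60), "D": (4, 80)}
scores = {name: phi(bid, spent, 100) for name, (bid, spent) in advertisers.items()}

winner = max(scores, key=scores.get)
print(winner, round(scores[winner], 4))  # C 0.989
```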
|
2,799
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
import thinkstats2
import thinkplot
%matplotlib inline
import scipy.stats
mu = 178
sigma = 7.7
dist = scipy.stats.norm(loc=mu, scale=sigma)
type(dist)
dist.mean(), dist.std()
dist.cdf(mu-sigma)
low = dist.cdf(177.8)
high = dist.cdf(185.4)
print("177.8 - 185.4 : ", dist.cdf(185.4) - dist.cdf(177.8))
# 5'10'' (177.8cm), 6'1'' (185.4cm)
alpha = 1.7
xmin = 1
dist = scipy.stats.pareto(b=alpha, scale=xmin)
dist.median()
xs, ps = thinkstats2.RenderParetoCdf(xmin, alpha, 0, 10.0, n=100)
thinkplot.Plot(xs, ps, label=r'$\alpha=%g$' % alpha)
thinkplot.Config(xlabel='height (m)', ylabel='CDF')
dist.mean()
dist.cdf(dist.mean())
(1- dist.cdf(1000)) * 7e9 # 7 billion = 7e9
dist.sf(1000) * 7e9
dist.sf(600000) * 7e9
# sf is the survival function; it is more accurate than 1-cdf. (http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.pareto.html#scipy.stats.pareto)
import random
import thinkstats2
import thinkplot
sample = [random.weibullvariate(2,1) for _ in range(1000)]
cdf = thinkstats2.Cdf(sample)
thinkplot.Cdf(cdf, transform='weibull')
thinkplot.Show(legend=False)
cdf.Random()
cdf.Sample(10)
prs = [cdf.PercentileRank(x) for x in cdf.Sample(1000)]
pr_cdf = thinkstats2.Cdf(prs)
thinkplot.Cdf(pr_cdf)
values = [random.random() for _ in range(1000)]
pmf = thinkstats2.Pmf(values)
thinkplot.Pmf(pmf, linewidth=0.1)
cdf = thinkstats2.Cdf(values)
thinkplot.Cdf(cdf)
thinkplot.Show(legend=False)
import analytic
df = analytic.ReadBabyBoom()
diffs = df.minutes.diff()
cdf = thinkstats2.Cdf(diffs, label='actual')
n = len(diffs)
lam = 44.0 / 24 / 60  # 44 births per day, converted to births per minute
sample = [random.expovariate(lam) for _ in range(n)]
model = thinkstats2.Cdf(sample, label='model')
thinkplot.PrePlot(2)
thinkplot.Cdfs([cdf,model], complement=True)
thinkplot.Show(title='Time between births',
xlabel='minutes',
ylabel='CCDF',
yscale='log')
from mystery import *
funcs = [uniform_sample, triangular_sample, expo_sample,
gauss_sample, lognorm_sample, pareto_sample,
weibull_sample, gumbel_sample]
for i in range(len(funcs)):
sample = funcs[i](1000)
filename = 'mystery%d.dat' % i
print(filename, funcs[i].__name__)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 5.1
Step2: For example, <tt>scipy.stats.norm</tt> represents the normal distribution.
Step3: A "frozen random variable" can compute its mean and standard deviation.
Step4: It can also evaluate the CDF. What fraction of people are more than one standard deviation below the mean? About 16%.
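The ~16% figure can be reproduced without scipy, since the normal CDF has a closed form in terms of the error function; a minimal sketch with the notebook's parameters:

```python
import math

def norm_cdf(x, mu, sigma):
    # standard normal CDF expressed via the error function
    z = (x - mu) / (sigma * math.sqrt(2))
    return 0.5 * (1 + math.erf(z))

mu, sigma = 178, 7.7  # same parameters as in the notebook
below = norm_cdf(mu - sigma, mu, sigma)
print(round(below, 4))  # 0.1587 -- about 16% are more than 1 sigma below the mean
```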
Step5: What fraction of people are between 5'10" and 6'1"?
Step6: Exercise 5.2
Step7: In Pareto world, what is the mean height?
Step8: What fraction of people are shorter than the mean?
Step9: Out of 7 billion people, how many do we expect to be taller than 1 km? Use <tt>dist.cdf</tt> or <tt>dist.sf</tt>.
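For the Pareto distribution the survival function has the closed form sf(x) = (x/xmin)**(-alpha), so the head count can be sketched without scipy (same alpha = 1.7, xmin = 1 m as in the notebook):

```python
alpha = 1.7
xmin = 1.0
population = 7e9

def pareto_sf(x):
    # survival function of a Pareto(alpha, xmin) variable: P(X > x)
    return (x / xmin) ** (-alpha)

taller_than_1km = population * pareto_sf(1000)
print(round(taller_than_1km))  # 55603 -- roughly 55,603 people taller than 1 km
```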
Step10: How tall do we expect the tallest person to be? Hint
Step11: Exercise 5.3
Step12: The thinkplot.Cdf method provides a transform that makes a Weibull CDF look like a straight line.
Step13: Make a random selection from the cdf.
Step14: Draw a random sample from the cdf.
Step15: Draw a random sample from the cdf, compute the percentile rank of each value, and plot the distribution of the percentile ranks.
Step16: Generate 1000 random numbers with the random.random() method and plot the PMF of the sample.
Step17: Suppose the PMF does not work well, and try plotting the CDF instead.
Step18: Exercise 5.4
Step19: Exercise 5.5
|