maojrs/riemann_book | Kitchen_sink_problem.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
from scipy.optimize import fsolve
from scipy import integrate
import matplotlib.pyplot as plt
from clawpack import pyclaw
from clawpack import riemann
from clawpack.visclaw.ianimate import ianimate
import matplotlib
plt.style.use('seaborn-talk')
from IPython.display import HTML
"""
Explanation: The "kitchen sink" problem
End of explanation
"""
HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/V_obYAebT8g" frameborder="0" allowfullscreen></iframe>')
"""
Explanation: Our next example is something you can experiment with in your own home. Go to the kitchen sink, position the faucet over a flat part of the sink (not over the drain), and turn it on. Carefully examine the flow of water in the sink. You will see that near the jet of water coming from the faucet, there is a region of very shallow, very fast-flowing water. Then at a certain distance, the water suddenly becomes deeper and slower moving. This jump is relatively stable and will approximately form a circle if the bottom of the sink is flat around the area under the faucet.
Here's a demonstration in case you don't have a sink handy:
End of explanation
"""
def steady_rhs(h,r,alpha,g=1.):
return h/(g/alpha**2 * r**3 * h**3 - r)
r = np.linspace(0.5, 10)
h0 = 1.; u0 = 1.; alpha = r[0]*h0*u0
h = np.squeeze(integrate.odeint(steady_rhs,h0,r,args=(alpha,0.))) # Zero gravity
plt.plot(r,h); plt.title(r'$F=\infty$'); plt.xlabel('r'); plt.ylabel('h');
"""
Explanation: This jump (known as a hydraulic jump) is a shock wave and the entire flow can be modeled as a sort of multidimensional Riemann problem. Instead of left and right states, we have inner and outer states. To investigate this phenomenon we'll again use the shallow water equations. We'll assume the flow has cylindrical symmetry -- in other words, it depends on the distance away from the center (where water falls from the faucet), but not on the angular coordinate.
Shallow water flow in cylindrical symmetry
The amount of water contained in an annular region $r_1< r <r_2$ is proportional to $r$ and to the depth, so in cylindrical coordinates the conserved mass is $rh$. Similarly, the conserved momentum is $rhu$. The conservation laws for these two quantities read
\begin{align}
(rh)_t + (rhu)_r & = 0 \label{mass1} \\
(rhu)_t + (rhu^2)_r + r \left(\frac{1}{2}gh^2\right)_r & = 0. \label{mom1}
\end{align}
We have placed the coordinate $r$ inside the time derivative in order to emphasize what the conserved quantities are; of course, $r$ does not depend on $t$. We can rewrite the equations above so that the left hand side is identical to the 1D shallow water equations, but at the cost of introducing geometric source terms on the right:
\begin{align}
h_t + (hu)_r & = -\frac{hu}{r} \label{mass2} \\
(hu)_t + \left(hu^2 + \frac{1}{2}gh^2\right)_r & = -\frac{hu^2}{r} \label{mom2}
\end{align}
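As a quick numerical sanity check that the two forms are equivalent, we can compare them with finite differences at an arbitrary point (the smooth test fields below are chosen only for this check):

```python
import numpy as np

# Arbitrary smooth test fields h(r,t), u(r,t), used only for this check
h = lambda r, t: 1.0 + 0.3*np.sin(r) + 0.1*np.cos(t)
u = lambda r, t: 0.5 + 0.2*np.cos(r)*np.sin(t)
g, d = 1.0, 1e-5
ddr = lambda f, r, t: (f(r+d, t) - f(r-d, t))/(2*d)   # central difference in r
ddt = lambda f, r, t: (f(r, t+d) - f(r, t-d))/(2*d)   # central difference in t

r0, t0 = 2.0, 0.7
hu = lambda r, t: h(r, t)*u(r, t)
# Mass: (r h)_t + (r h u)_r  versus  r [ h_t + (hu)_r + hu/r ]
lhs = ddt(lambda r, t: r*h(r, t), r0, t0) + ddr(lambda r, t: r*hu(r, t), r0, t0)
rhs = r0*(ddt(h, r0, t0) + ddr(hu, r0, t0) + hu(r0, t0)/r0)
assert abs(lhs - rhs) < 1e-7
# Momentum: (r h u)_t + (r h u^2)_r + r (g h^2/2)_r
#   versus  r [ (hu)_t + (h u^2 + g h^2/2)_r + h u^2/r ]
hu2 = lambda r, t: h(r, t)*u(r, t)**2
gh2 = lambda r, t: 0.5*g*h(r, t)**2
lhs = (ddt(lambda r, t: r*hu(r, t), r0, t0)
       + ddr(lambda r, t: r*hu2(r, t), r0, t0) + r0*ddr(gh2, r0, t0))
rhs = r0*(ddt(hu, r0, t0)
          + ddr(lambda r, t: hu2(r, t) + gh2(r, t), r0, t0) + hu2(r0, t0)/r0)
assert abs(lhs - rhs) < 1e-7
```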
Steady profiles
Let us first look for time-independent solutions of the equations above. Setting the time derivatives to zero, we obtain
\begin{align}
(rhu)_r & = 0 \label{constant_mass} \\
\left(hu^2 + \frac{1}{2}gh^2\right)_r & = -\frac{hu^2}{r} \label{constant_2}
\end{align}
Equation (\ref{constant_mass}) can be integrated to obtain $rhu = \beta$ where $\beta$ (evidently the flux through a circle) is an arbitrary constant. Using this to eliminate $u$ in equation (\ref{constant_2}) leads to the ODE
\begin{align} \label{hdiff0}
h'(r) = \frac{h}{\frac{g}{\beta^2} r^3 h^3 -r}.
\end{align}
Let us define the Froude number, which is a measure of the ratio of fluid velocity to gravitational effects:
$$
F = \frac{|u|}{\sqrt{gh}}
$$
We say that the flow is subcritical if $F<1$, and supercritical if $F>1$. Since the characteristic speeds for the system are $u \pm \sqrt{gh}$, in generic terms the flow is subsonic if $F<1$ and supersonic if $F>1$. We can rewrite (\ref{hdiff0}) as
\begin{align} \label{hdiff}
h'(r) = \frac{h}{r} \cdot \frac{F^2}{1-F^2}.
\end{align}
The sign of $h'(r)$ thus depends entirely on the Froude number. Notice that in the limit $F\to\infty$ (i.e., in the absence of gravity), we have simply $h'(r) = -h/r$, with solution $h \propto 1/r$. This corresponds to the purely geometric effect of water spreading as it flows outward at constant velocity.
Notice also that the RHS of \eqref{hdiff} blows up as $|F|$ approaches unity. This means that a smooth steady flow must be either subsonic everywhere or supersonic everywhere; there is no smooth way to transition between the two.
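The two forms \eqref{hdiff0} and \eqref{hdiff} of the ODE can be checked against each other numerically at an arbitrary test point (the values below are assumed, chosen only for the check):

```python
import numpy as np

g, beta = 1.0, 0.7
r, h = 2.3, 0.9                 # arbitrary test point
u = beta/(r*h)                  # from r h u = beta
F = u/np.sqrt(g*h)              # Froude number

form_hdiff0 = h/(g/beta**2 * r**3 * h**3 - r)   # as coded in steady_rhs
form_hdiff  = (h/r) * F**2/(1 - F**2)           # the Froude-number form
assert np.isclose(form_hdiff0, form_hdiff)
```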
End of explanation
"""
h0 = 1.; u0 = 2.; alpha = r[0]*h0*u0; g=1.
h = np.squeeze(integrate.odeint(steady_rhs,h0,r,args=(alpha,g)));
u = alpha/(h*r)
plt.figure(figsize=(12,4));
plt.subplot(1,3,1); plt.title('Depth');
plt.xlabel('r'); plt.ylabel('h');
plt.plot(r,h);
plt.subplot(1,3,2); plt.title('Velocity');
plt.xlabel('r'); plt.ylabel('u');
plt.plot(r,alpha/(r*h));
plt.subplot(1,3,3); plt.title('Froude number');
plt.xlabel('r'); plt.ylabel('$F$');
plt.plot(r,alpha/(r*h*np.sqrt(g*h)));
plt.tight_layout();
"""
Explanation: Supercritical flow ($F>1$)
Suppose we have a steady flow that is supercritical (everywhere, by the argument above).
In the presence of gravity, $h$ is still a decreasing function of $r$ if $F>1$. We see that the depth $h$ falls off somewhat faster than $1/r$. Since $rhu$ is constant, this means that the velocity $u=\alpha/(rh)$ must increase with $r$. Hence the flow becomes shallower and faster as it moves outward; the Froude number increases. Asymptotically, the falloff in depth approaches the $1/r$ rate and the velocity approaches a constant value.
End of explanation
"""
h0 = 1.; u0 = 0.5; alpha = r[0]*h0*u0; g=1.
h = np.squeeze(integrate.odeint(steady_rhs,h0,r,args=(alpha,g)));
u = alpha/(h*r)
plt.figure(figsize=(12,4));
plt.subplot(1,3,1); plt.title('Depth');
plt.xlabel('r'); plt.ylabel('h');
plt.plot(r,h);
plt.subplot(1,3,2); plt.title('Velocity');
plt.xlabel('r'); plt.ylabel('u');
plt.plot(r,alpha/(r*h));
plt.subplot(1,3,3); plt.title('Froude number');
plt.xlabel('r'); plt.ylabel('$F$');
plt.plot(r,alpha/(r*h*np.sqrt(g*h)));
plt.tight_layout();
"""
Explanation: Subcritical flow ($F<1$)
Meanwhile, if the flow is subcritical at the inner radius then $h(r)$ is increasing, and the steady profile remains subcritical everywhere.
End of explanation
"""
def initial_and_boundary_data(r_jump = 1.,r_inner = 0.5,r_outer = 5.,
num_cells = 501,g=1.,h_in=1.,u_in=2.):
r = pyclaw.Dimension(r_inner, r_outer, num_cells, name='r')
rc = r.centers
i_jump = np.argmin(np.abs(rc-r_jump))
# Left boundary
h_inner = h_in
u_inner = u_in
beta_inner = r_inner*h_inner*u_inner
h = 0*rc
u = 0*rc
d = r.delta
rvals = np.insert(rc[:i_jump+1],(0),[rc[0]-2*d,rc[0]-d])
beta = rvals[0]*h_inner*u_inner
hh = integrate.odeint(steady_rhs,h_inner,rvals,args=(beta,g))
hh = np.squeeze(hh)
uu = beta/(hh*rvals)
h[:i_jump+1] = np.squeeze(hh[2:])
u[:i_jump+1] = uu[2:]
lower_bc_data = [hh[:2], uu[:2]]
# Jump in h
# Left side of jump
h_m = h[i_jump]; u_m = u[i_jump]
aleph = (-3*h_m+np.sqrt(h_m**2+8*h_m*u_m**2/g))/2.
# Right side of jump
h_p = h_m + aleph; u_p = h_m*u_m/h_p
h[i_jump+1] = h_p; u[i_jump+1] = u_p
# Outer part of solution
beta_outer = rc[i_jump+1]*h[i_jump+1]*u[i_jump+1]
rvals = np.append(rc[i_jump+1:],[rc[-1]+d,rc[-1]+2*d])
hh = integrate.odeint(steady_rhs,h_p,rvals,args=(beta_outer,g))
hh = np.squeeze(hh)
uu = beta_outer/(rvals*hh)
h[i_jump+1:] = hh[:-2]
u[i_jump+1:] = uu[:-2]
upper_bc_data = [hh[-2:],uu[-2:]]
return h, u, upper_bc_data, lower_bc_data, rc
"""
Explanation: A different and complementary approach to deriving steady profiles (see <cite data-cite="Ivings1998"><a href="riemann.html#zobeyer2013radial">(Zobeyer 2013)</a></cite>) is to recognize that in such a solution the energy per unit mass, $gh + \frac{u^2}{2}$, is constant. Dividing by $g$,
$$
h + \frac{u^2}{2g} = \gamma,
$$
for some constant $\gamma$. Combining this with conservation of mass yields a cubic equation for the depth:
$$
h^3 - \gamma h^2 + \frac{\beta^2}{2gr^2}=0.
$$
In non-dimensionalized coordinates, with $H=h/h_0$, $R = r/r_0$, and letting $F_0$ denote the Froude number at $r_0$, this becomes simply
$$
H^3 - \left(1+\frac{1}{2}F_0^2\right)H^2 + \frac{F_0^2}{2R^2} = 0.
$$
This can also be solved to obtain the depth as a function of radius; the result, of course, agrees with that obtained from the differential equation above. The supercritical and subcritical flows correspond to different roots of the cubic.
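For instance, the roots of the non-dimensional cubic can be computed directly at a sample radius (the values of $F_0$ and $R$ below are arbitrary):

```python
import numpy as np

F0, R = 2.0, 3.0    # arbitrary supercritical inflow Froude number and radius
# H^3 - (1 + F0^2/2) H^2 + 0*H + F0^2/(2 R^2) = 0
coeffs = [1.0, -(1.0 + 0.5*F0**2), 0.0, F0**2/(2*R**2)]
roots = np.roots(coeffs)
roots = np.sort(roots.real[np.abs(roots.imag) < 1e-9])
# One root is negative (unphysical); the shallow positive root is the
# supercritical branch and the deep one the subcritical branch.
H_super, H_sub = roots[1], roots[2]
assert 0 < H_super < H_sub
```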
The hydraulic jump
To transition from supercritical to subcritical flow, the solution must jump across the pole of the ODE above by means of a hydraulic jump. This is a standing shock wave; since we have an outward-oriented flow, this jump must be a 1-shock (in order to be stationary).
Rankine-Hugoniot jump conditions
The jump condition arising from the continuity equation is the same as that obtained for shallow water in one dimension, since the value of $r$ at the shock location simply divides out:
$$
s (h_r - h_l) = h_r u_r - h_l u_l.
$$
The momentum equation (\ref{mom1}) seems harder to deal with. We might try to derive appropriate conditions from (\ref{mom2}) by using some averaged values of $h$ and $u$ to model the source term as a delta function (this makes sense for the shallow water equations with a bathymetric source term). A simpler tactic is to return to (\ref{mom1}), which yields the jump condition
$$
s (r h_r u_r - r h_l u_l) = r h_r u_r^2 + \frac{gr}{2}h_r^2 - r h_l u_l^2 - \frac{gr}{2}h_l^2.
$$
Again, we can divide through by $r$ to obtain the same jump condition that is familiar from the one-dimensional shallow water equations:
$$
s (h_r u_r - h_l u_l) = h_r u_r^2 - h_l u_l^2 + \frac{g}{2}(h_r^2 - h_l^2).
$$
It makes sense that the jump conditions for the cylindrical SW system are the same as those for the 1D SW system, since a cylindrical shock occurs at a single value of $r$.
Unlike the 1D case, however, it does not make sense to consider a Riemann problem in which the left and right states are uniform in space, since those are not temporally steady states of the system. Instead, we can consider two steady profiles with a jump between them. For infinitesimal times, in a neighborhood of the initial jump, the solution would then be close to the solution of the 1D problem; at later times it could be much more complex, as the waves from the Riemann problem interact with the structure of the steady states. But if the Riemann solution consists of a single stationary shock, then the solution will be steady for all time. This is just the kind of solution that is relevant to our kitchen sink experiment.
A stationary 1-shock
We know from the analysis above that the hydraulic jump we are looking for is a stationary 1-shock (if it were a 2-shock it would necessarily move outward, since the fluid velocity $u>0$). In this case the shock speed $s=0$ and the first jump condition is simply
$$
h_r u_r = h_l u_l.
$$
From (LeVeque, pp. 265-266, Eqn. (13.18)), we have that for 1-shocks,
$$
h_r u_r = h_l u_l + \alpha\left[ u_l - \sqrt{gh_l\left(1+\frac{\alpha}{h_l}\right)\left(1+\frac{\alpha}{2h_l}\right)}\right],
$$
where $\alpha = h_r - h_l$. We can find a shock that satisfies the jump condition either by setting $\alpha=0$ (which is the uninteresting case where there is no jump) or by setting the quantity in brackets equal to zero. The latter condition yields
$$
\alpha = \frac{-3h_l \pm \sqrt{h_l^2 + 8 h_l u_l^2/g}}{2}
$$
Since we know the depth should increase (from left to right) at the hydraulic jump, we take the plus sign. Then the value above can be written in terms of the Froude number as
\begin{align}
\alpha = \frac{3h_l}{2}\left(\sqrt{1+\frac{8}{9}(F_l^2-1)}-1\right), \label{depth_jump}
\end{align}
where $F_l = u_l/\sqrt{gh_l}$ is the Froude number of the left state.
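We can verify numerically that a jump of this size satisfies both Rankine-Hugoniot conditions with $s=0$ (the left state below is arbitrary, chosen only to be supercritical):

```python
import numpy as np

g = 1.0
h_l, u_l = 0.2, 2.0            # arbitrary supercritical left state
F_l = u_l/np.sqrt(g*h_l)
assert F_l > 1                 # left state must be supercritical

alpha = 1.5*h_l*(np.sqrt(1.0 + 8.0/9.0*(F_l**2 - 1.0)) - 1.0)
h_r = h_l + alpha
u_r = h_l*u_l/h_r              # from the s = 0 mass jump condition

# Both jump conditions hold across the stationary shock:
assert np.isclose(h_r*u_r, h_l*u_l)                        # mass
assert np.isclose(h_r*u_r**2 + 0.5*g*h_r**2,
                  h_l*u_l**2 + 0.5*g*h_l**2)               # momentum
```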
A steady solution with a hydraulic jump
To find a steady solution with a hydraulic jump, we impose steady supercritical flow for $r<r_0$, a jump defined by \eqref{depth_jump} at $r=r_0$ (with $h_l$ and $F_l$ being the depth and Froude number just inside the jump radius), and steady subcritical flow for $r>r_0$. The steady flow profiles can be obtained by numerically integrating \eqref{hdiff0}.
The code below implements this solution. The parameters $h_{in}, u_{in}$ are the depth and velocity at the inner radius of the domain. The function returns the depth and velocity at equispaced points in $r$, as well as values of $h$ and $u$ that will be needed to impose appropriate boundary conditions via ghost cells in a finite volume simulation.
End of explanation
"""
h, u, _, _, rc = initial_and_boundary_data()
plt.plot(rc, h)
plt.xlim(0.5,5)
"""
Explanation: Here's an example of a solution:
End of explanation
"""
def step_radial_src(solver,state,dt):
"""
Geometric source terms for SW equations with cylindrical symmetry.
Integrated using a 2-stage, 2nd-order Runge-Kutta method.
This is a Clawpack-style source term routine, which approximates
the integral of the source terms over a step.
"""
dt2 = dt/2.
q = state.q
rad = state.grid.r.centers
h = q[0,:]
u = q[1,:]/h
qstar = np.empty(q.shape)
qstar[0,:] = q[0,:] - dt2/rad * h*u
qstar[1,:] = q[1,:] - dt2/rad * h*u*u
h = qstar[0,:]
u = qstar[1,:]/h
q[0,:] = q[0,:] - dt/rad * h*u
q[1,:] = q[1,:] - dt/rad * h*u*u
def inner_state(state,dim,t,qbc,auxbc,num_ghost):
h = state.problem_data['lower_bc_data'][0]
u = state.problem_data['lower_bc_data'][1]
qbc[0,:num_ghost] = h
qbc[1,:num_ghost] = h*u
def outer_state(state,dim,t,qbc,auxbc,num_ghost):
h = state.problem_data['upper_bc_data'][0]
u = state.problem_data['upper_bc_data'][1]
qbc[0,-num_ghost:] = h
qbc[1,-num_ghost:] = h*u
def setup(r_jump=1.,r_inner=0.5,r_outer=3.,num_cells=501,g=1.):
r = pyclaw.Dimension(r_inner, r_outer, num_cells=num_cells, name='r')
h, u, upper_bc_data, lower_bc_data, _ = \
initial_and_boundary_data(r_jump=r_jump,g=g, r_inner=r_inner,
r_outer=r_outer, num_cells=num_cells)
solver = pyclaw.ClawSolver1D(riemann_solver=riemann.shallow_roe_with_efix_1D)
solver.bc_lower[0] = pyclaw.BC.custom
solver.user_bc_lower = inner_state
solver.bc_upper[0] = pyclaw.BC.custom
solver.user_bc_upper = outer_state
solver.step_source = step_radial_src
domain = pyclaw.Domain([r])
state = pyclaw.State(domain,solver.num_eqn)
state.problem_data['grav'] = g
state.problem_data['lower_bc_data'] = lower_bc_data
state.problem_data['upper_bc_data'] = upper_bc_data
state.q[0,:] = h
state.q[1,:] = h*u
claw = pyclaw.Controller()
claw.solver = solver
claw.solution = pyclaw.Solution(state,domain)
claw.tfinal = 15.0
claw.keep_copy = True
claw.num_output_times = 50
return claw
claw = setup()
claw.verbosity=0
claw.run()
anim = ianimate(claw)
plt.close('all')
HTML(anim.to_jshtml())
"""
Explanation: Finite volume simulation
To check that the solution we've obtained is truly steady, we set up a finite volume simulation using PyClaw.
End of explanation
"""
def setup_constant_initial_data(r_jump=1.,r_inner=0.5,r_outer=3.,
num_cells=501,g=1.):
r = pyclaw.Dimension(r_inner, r_outer, num_cells=num_cells, name='r')
solver = pyclaw.ClawSolver1D(riemann_solver=riemann.shallow_roe_with_efix_1D)
solver.bc_lower[0] = pyclaw.BC.custom
solver.user_bc_lower = inner_state
solver.bc_upper[0] = pyclaw.BC.custom
solver.user_bc_upper = outer_state
solver.step_source = step_radial_src
domain = pyclaw.Domain([r])
state = pyclaw.State(domain,solver.num_eqn)
state.problem_data['grav'] = g
hl = 0.5; hul = 3.
hr = 2.; hur = 0.1
state.problem_data['lower_bc_data'] = np.array([[hl,hl],[hul,hul]])
state.problem_data['upper_bc_data'] = np.array([[hr,hr],[hur,hur]])
state.q[0,:] = 1.
state.q[1,:] = 0.
claw = pyclaw.Controller()
claw.solver = solver
claw.solution = pyclaw.Solution(state,domain)
claw.tfinal = 15.0
claw.keep_copy = True
claw.num_output_times = 50
return claw
claw = setup_constant_initial_data()
claw.verbosity=0
claw.run()
anim = ianimate(claw)
HTML(anim.to_jshtml())
"""
Explanation: Dirichlet BCs with transition from super- to subsonic flow generically create a hydraulic jump
Although the result above took a lot of work, it is not very impressive -- it doesn't do anything! You might be wondering whether this steady solution is dynamically stable -- i.e., whether the flow will converge to this state if it is initially different. The answer is yes; in fact, any boundary data that implies a transition from supercritical to subcritical flow will lead to a hydraulic jump. In the example below, we impose such boundary data but initialize $h$ and $u$ with constant values, to show the emergence of the jump.
End of explanation
"""
def compute_inner_values(Q,a,r0):
"""
Q: flow rate
a: jet radius
r0: inner domain radius
"""
assert r0 >= a
h0 = a**2/(2*r0)
u0 = Q/(2*np.pi*r0*h0)
return h0, u0
"""
Explanation: Comparison with experimental results
We may ask how well our model corresponds to reality. There are many approximations made in deriving the shallow water equations used here; perhaps most notably, we have completely neglected viscosity and surface tension. Viscosity in particular is believed to be very important in the very shallow flow just inside the jump radius.
Inflow conditions
Experimentally it is difficult to measure the depth near the jet. We can eliminate that dependence (and the apparent dependence on our choice of inner radius) by considering the radius of the vertical jet, which we denote by $a$, and the flow rate, denoted by $Q$. Then
$$
Q = u_{jet} \pi a^2 = 2 \pi r u(r) h(r)
$$
The first expression comes from considering flow in the jet, while the second comes from considering flow through a circle anywhere outside the jet (centered on the jet). If we suppose that $u$ is approximately constant (recall that it actually increases somewhat with $r$) then we have $u(r)\approx u_{jet}$, and consequently
$$
h(r) = a^2/(2r).
$$
Using these equations, with a specified flow rate $Q$ and jet radius $a$, along with a chosen inner radius $r_0\ge a$, we can determine the correct values of $h_0$ and $u_0.$ It can be shown that the results obtained in this way are only very weakly sensitive to our choice of $r_0$.
End of explanation
"""
def jump_location(Q,r_jet,h_inf,r_inf=100.,g=1.,r0=None,tol=1./10000):
r"""Predict location of hydraulic jump for given inner
radius flow and asymptotic depth."""
if r0 is None:
r0 = r_jet
h0, u0 = compute_inner_values(Q,r_jet,r0)
F_in = u0/np.sqrt(g*h0) # Inflow Froude number
assert F_in > 1 # Initial flow must be supercritical
r = np.linspace(r0,r_inf,int(round(1./tol)))
beta = r0 * h0 * u0
u_inf = u0 * (r0/r_inf) * (h0/h_inf)
F_inf = u_inf/np.sqrt(g*h_inf) # Far-field Froude number
assert F_inf < 1 # Far field flow must be subcritical
# Integrate outward
hh_in = np.squeeze(integrate.odeint(steady_rhs,h0,r,args=(beta,g)))
uu_in = beta/(r*hh_in)
hh_out = np.squeeze(integrate.odeint(steady_rhs,h_inf,-r[::-1],args=(beta,g)))
hh_out = hh_out[::-1]
F_l = uu_in/np.sqrt(g*hh_in) # Froude number for left state
phi = hh_in - hh_out + 1.5*hh_in*(np.sqrt(1.+8./9.*(F_l**2-1.))-1)
jump_loc = np.argmin(np.abs(phi))
profile = 0*r
profile[:jump_loc] = hh_in[:jump_loc]
profile[jump_loc:] = hh_out[jump_loc:]
return r[jump_loc], r, profile
r_jump, r, profile = jump_location(Q=200.,r_jet=1.,h_inf=1.,g=980.,tol=1.e-6)
print('Jump radius: '+str(r_jump)+' cm')
plt.clf()
plt.plot(r,profile)
plt.xlim(r[0],10);
"""
Explanation: Locating the jump
In the examples above, we selected the boundary data based on a prescribed jump location. But in practice we can't choose where the jump is -- we'd like to predict that!
We can predict the location of the jump based on prescribed inflow conditions ($r_0, h_0, u_0$ and a prescribed far-field depth $h_\infty$) as follows:
Set $\beta = r_0 h_0 u_0$. Choose a finite outer radius $r_\infty \gg r_0$. Set $u_\infty$ so that $r_\infty h_\infty u_\infty = \beta$.
Integrate (\ref{hdiff}) outward from $(r_0,h_0)$ to obtain a profile $h_\text{outward}$.
Integrate (\ref{hdiff}) inward from $(r_\infty, h_\infty)$ to obtain a profile $h_\text{inward}$.
Compute $\phi(r) = h_\text{inward} - h_\text{outward} - \alpha$, with $\alpha$ given by (\ref{depth_jump}) evaluated from the outward (supercritical) profile, and find the radius at which $\phi(r) = 0$.
End of explanation
"""
Q = 202. # Flow rate (in cm^3/s)
r_jet = 0.3175 # Nozzle radius (in cm)
h_inf = 0.343 # Depth at infinity (in cm)
g = 980. # Gravity (in cm/s^2)
r_jump, r, profile = jump_location(Q,r_jet,h_inf,r_inf=500.,g=980.,tol=1.e-6)
print('Predicted jump radius: '+str(r_jump)+' cm')
print('Measured jump radius: 17 cm')
plt.plot(r,profile)
plt.xlim(r[0],r[-1]);
"""
Explanation: Watson's experiment
Here we use data from an experiment conducted in <cite data-cite="watson1964radial"><a href="riemann.html#watson1964radial">(Watson, 1964)</a></cite>; see p. 496 therein. The unit of length therein is feet; here we have converted everything to centimeters. The experimental jump location is at a radius of about 17 cm. Let's see what our model gives.
End of explanation
"""
Q = 4.48
r_jet = 0.1
h_inf = 0.18
r_jump, r, profile = jump_location(Q,r_jet,h_inf,r_inf=50.,g=980.,tol=1.e-6)
print('Predicted jump radius: '+str(r_jump)+' cm')
print('Measured jump radius: 1.2 cm')
plt.plot(r,profile)
plt.xlim(r[0],r[-1]);
Q = 26.
r_jet = 0.215
h_inf = 0.33
r_jump, r, profile = jump_location(Q,r_jet,h_inf,r_inf=200.,g=980.,tol=1.e-6)
print('Predicted jump radius: '+str(r_jump)+' cm')
print('Measured jump radius: 2.3 cm')
plt.plot(r,profile)
plt.xlim(r[0],r[-1]);
"""
Explanation: Clearly, some of the effects we have ignored must be important! In particular, as Watson (and others) argue, viscosity or friction becomes very significant in the shallow flow before the jump, causing the jump to emerge much closer to the jet than this inviscid model predicts.
Experiments of Craik et al.
Here we compare with two more experiments; see Table 1 of <cite data-cite="craik1981circular"><a href="riemann.html#craik1981circular">(Craik et al., 1981)</a></cite>.
End of explanation
"""
claw = setup()
claw.solver.bc_upper[0] = pyclaw.BC.extrap
claw.verbosity = 0
claw.run()
anim = ianimate(claw);
plt.close('all')
HTML(anim.to_jshtml())
"""
Explanation: The difference is less extreme, but still very substantial. It is worth noting also that if we consider water flowing onto an infinite flat plate, the purely hyperbolic model (with no viscosity or friction) doesn't predict any jump at all, because there is no mechanism forcing the flow to transition to a subsonic state. We can observe this in the simulation if we set the boundary condition at the outer radius to outflow.
In the simulation below, we start with a steady-state solution involving a hydraulic jump, but allow outflow at the outer boundary (here this is imposed approximately using zero-order extrapolation).
End of explanation
"""
|
mikarubi/notebooks | worker/notebooks/neurofinder/tutorials/custom-example-thunder.ipynb | mit | %matplotlib inline
from thunder import Colorize
image = Colorize.image
tile = Colorize.tile
"""
Explanation: Writing an algorithm (using Spark/Thunder)
In this notebook, we show how to write an algorithm and put it in a function that can be submitted to the NeuroFinder challenge. In these examples, the algorithms will use functionality from Spark / Thunder for distributed image and time series processing. See the other tutorials for an example submission that does the entire job using only the core Python scientific stack (numpy, scipy, etc.)
Setup plotting
End of explanation
"""
bucket = "s3n://neuro.datasets/"
path = "challenges/neurofinder/01.00/"
images = tsc.loadImages(bucket + path + 'images', startIdx=0, stopIdx=100)
"""
Explanation: Load the data
First, let's load some example data so we have something to play with. We'll load the first 100 images from one of the data sets.
End of explanation
"""
images.cache()
images.count()
ref = images.mean()
"""
Explanation: Our images variable is an instance of a Thunder class for representing time-varying image sequences. Let's cache and count it, which forces it to be loaded and cached in memory, and we'll also compute a reference mean image, which will be useful for displays
End of explanation
"""
sources = tsc.loadSources(bucket + path + 'sources')
info = tsc.loadJSON(bucket + path + 'info.json')
"""
Explanation: We'll also load the ground truth and the metadata for this data set
End of explanation
"""
def run(data, info=None):
# do an analysis on the images
# optionally make use of the metadata
# return a set of sources
pass
"""
Explanation: Algorithm structure
We're going to write a function that takes the images variable as an input, as well as an info dictionary with data-set specific metadata, and returns identified sources as an output. It'll look like this (for now our function will just pass and thus do nothing):
End of explanation
"""
def run(data, info):
from thunder import SourceExtraction
method = SourceExtraction('localmax')
result = method.fit(data)
return result
"""
Explanation: The first thing we could do is use one of Thunder's built-in methods for spatio-temporal feature detection, for example, the localmax algorithm. This is a simple algorithm that computes the mean image and then applies basic image processing to detect local peaks.
End of explanation
"""
out = run(images, info)
image(out.masks((512,512), base=ref, outline=True))
"""
Explanation: Let's run our function on the example data and inspect the output
End of explanation
"""
recall, precision, score = sources.similarity(out, metric='distance', minDistance=5)
print('score: %.2f' % score)
"""
Explanation: Let's see how well it did on the example data
End of explanation
"""
from thunder.extraction.feature.methods.localmax import LocalMaxFeatureAlgorithm
LocalMaxFeatureAlgorithm?
"""
Explanation: This algorithm isn't doing particularly well, but you could submit this right now to the challenge. Take the run function we wrote, put it in a file run.py in a folder called run, and add an empty __init__.py file in the same folder. Then fork the neurofinder repository on GitHub and add this folder inside submissions. See here for more detailed instructions.
Tweaking a built-in algorithm
Let's try to improve the algorithm a bit. One option is to use the same algorithm, but just tweak the parameters. We can inspect the algorithm we used with ? to see all the available parameters.
End of explanation
"""
def run(data, info):
from thunder import SourceExtraction
method = SourceExtraction('localmax', maxSources=500, minDistance=5)
result = method.fit(data)
return result
out = run(images, info)
image(out.masks((512,512), base=ref, outline=True))
recall, precision, score = sources.similarity(out, metric='distance', minDistance=5)
print('score: %.2f' % score)
"""
Explanation: Try increasing the maximum number of sources and decreasing the minimum distance
End of explanation
"""
print('precision: %.2f' % precision)
"""
Explanation: Hmm, that did a bit better, but still not great. Note that the precision (which penalizes extra sources the algorithm found) is particularly bad.
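The distance-based matching behind these recall and precision numbers can be sketched in plain numpy; this is a simplified stand-in, not Thunder's exact similarity metric, and the function name and matching rule here are our own:

```python
import numpy as np

def match_score(true_centers, found_centers, min_distance=5.0):
    # Pairwise distances; a source counts as matched if any counterpart
    # lies within min_distance (nearest-neighbor, not one-to-one matching)
    D = np.linalg.norm(true_centers[:, None, :] - found_centers[None, :, :],
                       axis=-1)
    recall = np.mean(D.min(axis=1) <= min_distance)     # true sources recovered
    precision = np.mean(D.min(axis=0) <= min_distance)  # found sources near a true one
    return recall, precision

true = np.array([[10., 10.], [30., 30.], [50., 50.]])
found = np.array([[11., 10.], [30., 29.], [80., 80.], [90., 10.]])
recall, precision = match_score(true, found)  # recall = 2/3, precision = 1/2
```

Extra spurious detections lower the precision without affecting the recall, which is exactly the failure mode seen above.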
End of explanation
"""
def run(data, info):
from thunder import SourceExtraction
from thunder.extraction import OverlapBlockMerger
merger = OverlapBlockMerger(0.1)
method = SourceExtraction('nmf', merger=merger, componentsPerBlock=5, percentile=95, minArea=100, maxArea=500)
result = method.fit(data, size=(32, 32), padding=8)
return result
"""
Explanation: You probably don't want to submit this one, but using and tweaking the existing algorithms is a perfectly valid way to submit algorithms! You might end up with something that works really well.
Trying a different algorithm class
Most likely the algorithm about just isn't the right algorithm for these data. Let's try a block algorithm, which does more complex spatio-temporal feature extraction on sub-regions, or blocks, of the full movie
End of explanation
"""
out = run(images, info)
"""
Explanation: Let's run this algorithm. It'll take a little longer because it's more complex; that's one of the reasons we try to parallelize these computations!
End of explanation
"""
image(out.masks((512,512), base=ref, outline=True))
recall, precision, score = sources.similarity(out, metric='distance', minDistance=5)
print('score: %.2f' % score)
"""
Explanation: Inspect the result
End of explanation
"""
print('precision: %.2f' % precision)
"""
Explanation: The overall score is worse, but note that the precision is incredibly high. We missed a lot of sources, but the ones we found are all good. You can see that in the image above: every identified region does indeed look like it found a neuron.
End of explanation
"""
b = images.toBlocks(size=(40,40)).values().filter(lambda x: x.std() > 1000).first()
"""
Explanation: Writing a custom block algorithm
For our final example, we'll build a custom algorithm from strach using the constructors from Thunder. First, we'll define a function to run on each block. For testing and debugging our function, we'll grab a single block. We'll pick one with a large total standard deviation (in both space and time), so it's likely to have some structure.
End of explanation
"""
b.shape
"""
Explanation: This should be a single numpy array with shape (100,40,40), corresponding to the dimensions in time and space.
End of explanation
"""
def stdpeak(block):
# compute the standard deviation over time
s = block.std(axis=0)
# get the indices of the peak
from numpy import where
r, c = where(s == s.max())
# define a circle around the center, clipping at the boundaries
from skimage.draw import circle
rr, cc = circle(r[0], c[0], 10, shape=block.shape[1:])
coords = list(zip(rr, cc))
# return as a list of sources (in this case it's just one)
from thunder.extraction.source import Source
if len(coords) > 0:
return [Source(coords)]
else:
return []
"""
Explanation: Let's write a function that computes the standard deviation over time, finds the index of the max, draws a circle around the peak, and returns it as a Source.
End of explanation
"""
s = stdpeak(b)
tile([s[0].mask((40,40)), b.std(axis=0)])
"""
Explanation: Test that our function does something reasonable on the test block, showing the recovered source and the standard deviation of the block over time side by side
End of explanation
"""
def run(data, info):
# import the classes we need for construction
from thunder.extraction.block.base import BlockAlgorithm, BlockMethod
# create a custom class by extending the base method
class TestBlockAlgorithm(BlockAlgorithm):
# write an extract function which draws a circle around the pixel
# in each block with peak standard deviation
def extract(self, block):
return stdpeak(block)
# now instantiate our new method and use it to fit the data
method = BlockMethod(algorithm=TestBlockAlgorithm())
result = method.fit(data, size=(40, 40))
return result
"""
Explanation: Now we can build a block method that uses this function. We just need to import the classes for constructing block methods, and define an extract function to run on each block. In this case, we'll just call our stdpeak function from above, but to form a complete submission you'd need to include this function alongside run. See the inline comments for what we're doing at each step.
End of explanation
"""
out = run(images, info)
image(out.masks((512,512), base=sources, outline=True))
recall, precision, score = sources.similarity(out, metric='distance', minDistance=5)
print('score: %.2f' % score)
"""
Explanation: Now run and evaluate the algorithm
End of explanation
"""
|
the-deep-learners/nyc-ds-academy | notebooks/deep_net_in_tensorflow.ipynb | mit | import numpy as np
np.random.seed(42)
import tensorflow as tf
tf.set_random_seed(42)
"""
Explanation: Deep Neural Network in TensorFlow
In this notebook, we convert our intermediate-depth MNIST-classifying neural network from Keras to TensorFlow (compare them side by side) following Aymeric Damien's Multi-Layer Perceptron Notebook style.
Load dependencies
End of explanation
"""
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
"""
Explanation: Load data
End of explanation
"""
lr = 0.1
epochs = 10
batch_size = 128
weight_initializer = tf.contrib.layers.xavier_initializer()
"""
Explanation: Set neural network hyperparameters (tidier at top of file!)
End of explanation
"""
n_input = 784
n_dense_1 = 64
n_dense_2 = 64
n_dense_3 = 64
n_classes = 10
"""
Explanation: Set number of neurons for each layer
End of explanation
"""
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])
"""
Explanation: Define placeholder Tensors for inputs and labels
End of explanation
"""
# dense layer with ReLU activation:
def dense(x, W, b):
z = tf.add(tf.matmul(x, W), b)
a = tf.nn.relu(z)
return a
"""
Explanation: Define types of layers
End of explanation
"""
bias_dict = {
'b1': tf.Variable(tf.zeros([n_dense_1])),
'b2': tf.Variable(tf.zeros([n_dense_2])),
'b3': tf.Variable(tf.zeros([n_dense_3])),
'b_out': tf.Variable(tf.zeros([n_classes]))
}
weight_dict = {
'W1': tf.get_variable('W1', [n_input, n_dense_1], initializer=weight_initializer),
'W2': tf.get_variable('W2', [n_dense_1, n_dense_2], initializer=weight_initializer),
'W3': tf.get_variable('W3', [n_dense_2, n_dense_3], initializer=weight_initializer),
'W_out': tf.get_variable('W_out', [n_dense_3, n_classes], initializer=weight_initializer),
}
"""
Explanation: Define dictionaries for storing weights and biases for each layer -- and initialize
End of explanation
"""
def network(x, weights, biases):
    # three dense hidden layers:
dense_1 = dense(x, weights['W1'], biases['b1'])
dense_2 = dense(dense_1, weights['W2'], biases['b2'])
dense_3 = dense(dense_2, weights['W3'], biases['b3'])
# linear output layer (softmax):
out_layer_z = tf.add(tf.matmul(dense_3, weights['W_out']), biases['b_out'])
return out_layer_z
"""
Explanation: Design neural network architecture
End of explanation
"""
predictions = network(x, weights=weight_dict, biases=bias_dict)
"""
Explanation: Build model
End of explanation
"""
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=predictions, labels=y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=lr).minimize(cost)
"""
Explanation: Define model's loss and its optimizer
End of explanation
"""
# calculate accuracy by identifying test cases where the model's highest-probability class matches the true y label:
correct_prediction = tf.equal(tf.argmax(predictions, 1), tf.argmax(y, 1))
accuracy_pct = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) * 100
"""
Explanation: Define evaluation metrics
End of explanation
"""
initializer_op = tf.global_variables_initializer()
"""
Explanation: Create op for variable initialization
End of explanation
"""
with tf.Session() as session:
session.run(initializer_op)
print("Training for", epochs, "epochs.")
# loop over epochs:
for epoch in range(epochs):
avg_cost = 0.0 # track cost to monitor performance during training
avg_accuracy_pct = 0.0
# loop over all batches of the epoch:
n_batches = int(mnist.train.num_examples / batch_size)
for i in range(n_batches):
batch_x, batch_y = mnist.train.next_batch(batch_size)
# feed batch data to run optimization and fetching cost and accuracy:
_, batch_cost, batch_acc = session.run([optimizer, cost, accuracy_pct], feed_dict={x: batch_x, y: batch_y})
# accumulate mean loss and accuracy over epoch:
avg_cost += batch_cost / n_batches
avg_accuracy_pct += batch_acc / n_batches
# output logs at end of each epoch of training:
print("Epoch ", '%03d' % (epoch+1),
": cost = ", '{:.3f}'.format(avg_cost),
", accuracy = ", '{:.2f}'.format(avg_accuracy_pct), "%",
sep='')
print("Training Complete. Testing Model.\n")
test_cost = cost.eval({x: mnist.test.images, y: mnist.test.labels})
test_accuracy_pct = accuracy_pct.eval({x: mnist.test.images, y: mnist.test.labels})
print("Test Cost:", '{:.3f}'.format(test_cost))
print("Test Accuracy: ", '{:.2f}'.format(test_accuracy_pct), "%", sep='')
"""
Explanation: Train the network in a session
End of explanation
"""
|
kescobo/gender-comp-bio | notebooks/gender_detection.ipynb | gpl-3.0 | import os
os.chdir("../data/pubdata")
names = []
with open("comp.csv") as infile:
for line in infile:
names.append(line.split(",")[5])
"""
Explanation: 2. Gender Detection
Figuring out genders from names
We're going to use 3 different methods, all of which use a similar philosophy. Essentially, each of these services has built a database from datasets where genders are known or can be identified, for example national census data and social media profiles.
GenderDetector can be run locally, but only provides "male", "female" or "unknown", and has a limited number of names in its database.
genderize.io and Gender API are web services that allow us to query names and return genders
Each of these services provides a "probability" that the gender is correct (so if "Jamie" shows up 80 times in their data as a female name, and 20 times as a male name, they'll say it's "female" with a probability of 0.8)
They also tell us how certain we can be of that gender by telling us how many times that name shows up (in the above example, the count would be 100). This is useful because some names might only have 1 or 2 entries, in which case a 100% probability of being male would be less reliable than a name that has 1000 entries.
The web APIs have superior data, but the problem is that they are services that require you to pay if you make more than a certain number of queries in a short period of time. The owners of both services have generously provided me with enough queries to do this research for free.
Getting names to query
First, we'll take the names from our pubmed queries and collapse them into sets. We don't really need to query the
name "John" a thousand times - once will do. I'm going to loop through the csv we wrote out in the last section and pull the sixth column (index 5), which contains our author name.
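To make the probability/count idea concrete, here is a toy sketch of how such a score could be computed from raw counts (the function name `score_name` and the numbers are mine, purely for illustration; this is not code from any of these services):

```python
# Hypothetical illustration: given raw counts of a name appearing as male
# vs. female, the reported "probability" is just the majority share and the
# "count" is the total number of samples for that name.
def score_name(male_count, female_count):
    total = male_count + female_count
    if total == 0:
        return {"gender": None, "probability": None, "count": 0}
    gender = "male" if male_count >= female_count else "female"
    probability = max(male_count, female_count) / float(total)
    return {"gender": gender, "probability": probability, "count": total}

print(score_name(20, 80))  # female with probability 0.8 over 100 samples
```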
End of explanation
"""
print(len(names))
names = set(names)
print(len(names))
"""
Explanation: Then we'll convert the list to a set, which is an unordered collection of unique values (so it removes duplicates)
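A quick illustration of the deduplication (toy list, not our real data):

```python
# set() keeps exactly one copy of each value
dupes = ["john", "mary", "john", "mary", "ajasja"]
unique = set(dupes)
print(len(dupes), len(unique))  # 5 3
```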
End of explanation
"""
def get_unique_names(csv_file):
names = []
with open(csv_file) as infile:
for line in infile:
names.append(line.split(",")[5])
return set(names)
"""
Explanation: Here's a function that does the same thing.
End of explanation
"""
names = names.union(get_unique_names("bio.csv"))
print(len(names))
"""
Explanation: The set.union() function will merge 2 sets into a single set, so we'll do this with our other datasets.
End of explanation
"""
from gender_detector import GenderDetector
detector = GenderDetector('us')
print(detector.guess("kevin"))
print(detector.guess("melanie"))
print(detector.guess("ajasja"))
gender_dict = {}
counter = 0
for name in names:
try:
gender = detector.guess(name)
gender_dict[name] = gender
except:
print(name)
print(len(gender_dict))
print(sum([1 for x in gender_dict if gender_dict[x] == 'unknown']))
print(sum([1 for x in gender_dict if gender_dict[x] != 'unknown']))
"""
Explanation: Getting genders from names
GenderDetector
First up - GenderDetector. The usage is pretty straightforward:
End of explanation
"""
import json
with open("GenderDetector_genders.json", "w+") as outfile:
outfile.write(json.dumps(gender_dict, indent=4))
"""
Explanation: Output datasets
End of explanation
"""
from api_keys import genderize_key
from genderize import Genderize
all_names = list(names)
genderize = Genderize(
user_agent='Kevin_Bonham',
api_key=genderize_key)
genderize_dict = {}
for i in range(0, len(all_names), 10):
query = all_names[i:i+10]
genders = genderize.get(query)
for gender in genders:
n = gender["name"]
g = gender["gender"]
if g != None:
p = gender["probability"]
c = gender["count"]
else:
p = None
c = 0
genderize_dict[n] = {"gender":g, "probability":p, "count": c}
with open("genderize_genders.json", "w+") as outfile:
outfile.write(json.dumps(genderize_dict, indent=4))
print(len(genderize_dict))
print(sum([1 for x in genderize_dict if genderize_dict[x]["gender"] == 'unknown']))
print(sum([1 for x in genderize_dict if genderize_dict[x]["gender"] != 'unknown']))
"""
Explanation: Genderize.io
This one is a bit more complicated, since we have to make a call to the web api, and then parse the json that's returned. Happily, someone already wrote a python package to do most of the work. We can query 10 names at a time rather than each one individually, and we'll get back a list of dictionaries, one for each query:
[{u'count': 1037, u'gender': u'male', u'name': u'James', u'probability': 0.99},
{u'count': 234, u'gender': u'female', u'name': u'Eva', u'probability': 1.0},
{u'gender': None, u'name': u'Thunderhorse'}]
I will turn that into a dictionary of dictionaries, where the name is the key, and the other elements are stored under it. E.g.:
{
u'James':{
u'count': 1037,
u'gender': u'male',
u'probability': 0.99
},
u'Eva':{
u'count': 234,
u'gender': u'female',
u'probability': 1.0
},
u'Thunderhorse':{
u'count': 0,
u'gender': None,
u'probability': None
}
}
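As a sketch of that reshaping step, here is a small helper (the name to_name_dict is mine, not part of the genderize package) applied to the example records above:

```python
# Reshape a genderize-style list of per-name dicts into a dict keyed by name,
# normalising missing genders to probability None and count 0.
def to_name_dict(records):
    out = {}
    for rec in records:
        g = rec.get("gender")
        out[rec["name"]] = {
            "gender": g,
            "probability": rec.get("probability") if g is not None else None,
            "count": rec.get("count", 0) if g is not None else 0,
        }
    return out

records = [
    {"count": 1037, "gender": "male", "name": "James", "probability": 0.99},
    {"gender": None, "name": "Thunderhorse"},
]
result = to_name_dict(records)
```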
Note:
I've got an API key stored in a separate file called api_keys.py (that I'm not putting on git because you can't have my queries!) that looks like this:
genderize_key = "s0m3numb3rsandl3tt3rs"
genderAPI_key = "0th3rnumb3rsandl3tt3rs"
You can get a key from both services for free, but you'll be limited in the number of queries you can make. Just make a similar file, or add them in below in place of the proper variables.
End of explanation
"""
from api_keys import genderAPI_key
import urllib2
genderAPI_dict = {}
counter = 0
for i in range(counter, len(all_names), 20):
names = all_names[i:i+20]
query = ";".join(names)
data = json.load(urllib2.urlopen("https://gender-api.com/get?key={}&name={}".format(genderAPI_key, query)))
for r in data['result']:
n = r["name"]
g = r["gender"]
if g != u"unknown":
p = float(r["accuracy"]) / 100
c = r["samples"]
else:
p = None
c = 0
genderAPI_dict[n] = {"gender":g, "probability":p, "count": c}
with open("../data/pubs/genderAPI_genders.json", "w+") as outfile:
outfile.write(json.dumps(genderAPI_dict, indent=4))
"""
Explanation: Gender-API
This is a similar service, but I didn't find a python package for it. Thankfully, it's pretty easy too. The following code is for python2, but you can find the python3 code on the website. The vaule that gets returned comes in the form of a dictionary as well:
{u'accuracy': 99,
u'duration': u'26ms',
u'gender': u'male',
u'name': u'markus',
u'samples': 26354}
I'll convert this to the same keys and value types used for genderize above (e.g. "probability" instead of "accuracy", "count" instead of "samples", and 0.99 instead of 99).
End of explanation
"""
|
phasedchirp/Assorted-Data-Analysis | exercises/SlideRule-DS-Intensive/Inferential Statistics/sliderule_dsi_inferential_statistics_exercise_2.ipynb | gpl-2.0 | %matplotlib inline
from __future__ import division
import matplotlib
matplotlib.rcParams['figure.figsize'] = (15.0,5.0)
import pandas as pd
import numpy as np
from scipy import stats
data = pd.io.stata.read_stata('data/us_job_market_discrimination.dta')
print "Total count: ",len(data)
print "race == 'b': ",len(data[data.race=='b'])
print "race == 'w': ",len(data[data.race=='w'])
data.head()
# number of callbacks and proportion of callbacks
print "Callback count for black-sounding names: ",sum(data[data.race=='b'].call)
print "Callback proportion for black-sounding names: ",sum(data[data.race=='b'].call)/len(data[data.race=='b'])
print "Callback count for white-sounding names: ",sum(data[data.race=='w'].call)
print "Callback proportion for white-sounding names: ",sum(data[data.race=='w'].call)/len(data[data.race=='w'])
"""
Explanation: Examining racial discrimination in the US job market
Background
Racial discrimination continues to be pervasive in cultures throughout the world. Researchers examined the level of racial discrimination in the United States labor market by randomly assigning identical résumés black-sounding or white-sounding names and observing the impact on requests for interviews from employers.
Data
In the dataset provided, each row represents a resume. The 'race' column has two values, 'b' and 'w', indicating black-sounding and white-sounding. The column 'call' has two values, 1 and 0, indicating whether the resume received a call from employers or not.
Note that the 'b' and 'w' values in race are assigned randomly to the resumes.
Exercise
You will perform a statistical analysis to establish whether race has a significant impact on the rate of callbacks for resumes.
Answer the following questions in this notebook below and submit to your Github account.
What test is appropriate for this problem? Does CLT apply?
What are the null and alternate hypotheses?
Compute margin of error, confidence interval, and p-value.
Discuss statistical significance.
You can include written notes in notebook cells using Markdown:
- In the control panel at the top, choose Cell > Cell Type > Markdown
- Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
Resources
Experiment information and data source: http://www.povertyactionlab.org/evaluation/discrimination-job-market-united-states
Scipy statistical methods: http://docs.scipy.org/doc/scipy/reference/stats.html
Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
End of explanation
"""
xb = sum(data[data.race=='b'].call)
nb = len(data[data.race=='b'])
xw = sum(data[data.race=='w'].call)
nw = len(data[data.race=='w'])
pHat = (nb*(xb/nb) + nw*(xw/nw))/(nb+nw)
se = np.sqrt(pHat*(1-pHat)*(1/nb + 1/nw))
z = (xb/nb -xw/nw)/se
print "z-score:",round(z,3),"p =", round(stats.norm.sf(abs(z))*2,6)
"""
Explanation: The outcome variable here is binary, so this might be treated in several ways. First, it might be possible to apply the normal approximation to the binomial distribution. In this case, the distribution of the number of successes is approximately $\mathcal{N}(np,\,np(1-p))$
There are a number of guidelines as to whether this is a suitable approximation (see Wikipedia for a list of such conditions), some of which include:
n > 20 (or 30)
np > 5, np(1-p) > 5 (or 10)
But these conditions can be roughly summed up as not too small of a sample and an estimated proportion far enough from 0 and 1 that the distribution isn't overly skewed. If the normal approximation is reasonable, a z-test can be used, with the following standard error calculation:
$$SE = \sqrt{\hat{p}(1-\hat{p})\left(\frac{1}{n_1}+\frac{1}{n_2}\right)}$$
where $$\hat{p}=\frac{n_1 p_1+n_2 p_2}{n_1+n_2}$$
giving
$$z = \frac{p_1-p_2}{SE}$$
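For reference, the formulas above can be collected into a single standard-library-only function (a sketch; the scipy-based cell above is what this notebook actually uses, and the counts below are purely illustrative):

```python
from math import sqrt, erfc

def two_proportion_ztest(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p1, p2 = x1 / float(n1), x2 / float(n2)
    p_hat = (x1 + x2) / float(n1 + n2)
    se = sqrt(p_hat * (1 - p_hat) * (1.0 / n1 + 1.0 / n2))
    z = (p1 - p2) / se
    # two-sided p-value: 2 * normal survival function = erfc(|z| / sqrt(2))
    return z, erfc(abs(z) / sqrt(2))

# illustrative counts only:
z, p = two_proportion_ztest(10, 100, 20, 100)
```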
End of explanation
"""
pb = xb/nb
x = np.arange(110,210)
matplotlib.pyplot.vlines(x,0,stats.binom.pmf(x,nb,pb))
"""
Explanation: So, the difference in probability of a call-back is statistically significant here.
Plotting the distribution for call-backs with black-sounding names, it looks fairly symmetrical and well-behaved, so it's quite likely that the normal approximation is fairly reasonable here.
End of explanation
"""
intervalB = (stats.beta.ppf(0.025,xb+0.5,nb-xb+0.5),stats.beta.ppf(0.975,xb+0.5,nb-xb+0.5))
intervalW = (stats.beta.ppf(0.025,xw+0.5,nw-xw+0.5),stats.beta.ppf(0.975,xw+0.5,nw-xw+0.5))
print "Interval for black-sounding names: ",map(lambda x: round(x,3),intervalB)
print "Interval for white-sounding names: ",map(lambda x: round(x,3),intervalW)
"""
Explanation: Alternatives
Because the normal distribution is only an approximation, the assumptions don't always work out for a particular data set. There are several methods for calculating confidence intervals around the estimated proportion. For example, with a significance level of $\alpha$, the Jeffreys interval is defined as the $\frac{\alpha}{2}$ and $1-\frac{\alpha}{2}$ quantiles of a beta$(x+\frac{1}{2}, n-x+\frac{1}{2})$ distribution. Using scipy:
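The same calculation wrapped as a small reusable helper (a sketch; jeffreys_interval is my name for it, not a scipy function):

```python
from scipy import stats

def jeffreys_interval(x, n, alpha=0.05):
    """Equal-tailed Jeffreys interval for a binomial proportion."""
    a, b = x + 0.5, n - x + 0.5
    return (stats.beta.ppf(alpha / 2.0, a, b),
            stats.beta.ppf(1.0 - alpha / 2.0, a, b))
```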
End of explanation
"""
import pystan
modelCode = '''
data {
int<lower=0> N;
int<lower=1,upper=2> G[N];
int<lower=0,upper=1> y[N];
}
parameters {
real<lower=0,upper=1> theta[2];
}
model {
# beta(0.5,0.5) prior
theta ~ beta(0.5,0.5);
# bernoulli likelihood
# This could be modified to use a binomial with successes and counts instead
for (i in 1:N)
y[i] ~ bernoulli(theta[G[i]]);
}
generated quantities {
real diff;
// difference in proportions:
diff <- theta[1]-theta[2];
}
'''
model = pystan.StanModel(model_code=modelCode)
dataDict = dict(N=len(data),G=np.where(data.race=='b',1,2),y=map(int,data.call))
fit = model.sampling(data=dataDict)
print fit
samples = fit.extract(permuted=True)
MCMCIntervalB = np.percentile(samples['theta'].transpose()[0],[2.5,97.5])
MCMCIntervalW = np.percentile(samples['theta'].transpose()[1],[2.5,97.5])
fit.plot().show()
"""
Explanation: The complete lack of overlap in the intervals here implies a significant difference with $p\lt 0.05$ (Cumming & Finch, 2005). Given that this particular interval can be interpreted as a Bayesian credible interval, this is a fairly comfortable conclusion.
Calculating credible intervals using Markov Chain Monte Carlo
A slightly different method of calculating approximately the same thing (the beta distribution used above is the posterior distribution of the proportion given the observations, under a Jeffreys prior):
End of explanation
"""
print map(lambda x: round(x,3),MCMCIntervalB)
print map(lambda x: round(x,3),MCMCIntervalW)
"""
Explanation: Estimating rough 95% credible intervals:
End of explanation
"""
print map(lambda x: round(x,3),np.percentile(samples['diff'],[2.5,97.5]))
"""
Explanation: So, this method gives a result that fits quite nicely with previous results, while allowing more flexible specification of priors.
Interval for sampled differences in proportions:
End of explanation
"""
data.columns
# The data is balanced by design, and this mostly isn't a problem for relatively simple models.
# For example:
pd.crosstab(data.computerskills,data.race)
import statsmodels.formula.api as smf
"""
Explanation: And this interval does not include 0, so we're left fairly confident that black-sounding names get fewer call-backs, although the estimated difference in proportions is fairly small (significant in the technical sense isn't really the right word to describe this part).
Accounting for additional factors:
A next step here would be to check whether other factors influence the proportion of call-backs. This can be done using logistic regression, although there will be a limit to the complexity of the model to be fit, given that the proportion of call-backs is quite small, potentially leading to small cell-counts and unstable estimates (one rule of thumb being n>30 per cell is reasonably safe).
End of explanation
"""
glm = smf.Logit.from_formula(formula="call~race+computerskills",data=data).fit()
glm.summary()
"""
Explanation: Checking to see if computer skills have a significant effect on call-backs:
End of explanation
"""
glm2 = smf.Logit.from_formula(formula="call~race*computerskills",data=data).fit()
glm2.summary()
"""
Explanation: The effect might be described as marginal, but probably best not to over-interpret. But maybe the combination of race and computer skills makes a difference? Apparently not in this data (not even an improvement to the model log-likelihood or other measures of model fit):
End of explanation
"""
|
mjasher/gac | original_libraries/flopy-master/examples/Notebooks/flopy3_Zaidel_example.ipynb | gpl-2.0 | %matplotlib inline
import sys
import os
import platform
import numpy as np
import matplotlib.pyplot as plt
import flopy
import flopy.utils as fputl
#Set name of MODFLOW exe
# assumes executable is in users path statement
exe_name = 'mfusg'
if platform.system() == 'Windows':
exe_name = 'mfusg.exe'
mfexe = exe_name
modelpth = os.path.join('data')
modelname = 'zaidel'
#make sure modelpth directory exists
if not os.path.exists(modelpth):
os.makedirs(modelpth)
"""
Explanation: FloPy3
MODFLOW-USG $-$ Discontinuous water table configuration over a stairway impervious base
One of the most challenging numerical cases for MODFLOW arises from drying-rewetting problems often associated with abrupt changes in the elevations of impervious base of a thin unconfined aquifer. This problem simulates a discontinuous water table configuration over a stairway impervious base and flow between constant-head boundaries in column 1 and 200. This problem is based on
Zaidel, J. (2013), Discontinuous Steady-State Analytical Solutions of the Boussinesq Equation and Their Numerical Representation by Modflow. Groundwater, 51: 952–959. doi: 10.1111/gwat.12019
The model consistes of a grid of 200 columns, 1 row, and 1 layer; a bottom altitude of ranging from 20 to 0 m; constant heads of 23 and 5 m in column 1 and 200, respectively; and a horizontal hydraulic conductivity of $1x10^{-4}$ m/d. The discretization is 5 m in the row direction for all cells.
In this example results from MODFLOW-USG will be evaluated.
End of explanation
"""
#--model dimensions
nlay, nrow, ncol = 1, 1, 200
delr = 50.
delc = 1.
#--boundary heads
h1 = 23.
h2 = 5.
#--cell centroid locations
x = np.arange(0., float(ncol)*delr, delr) + delr / 2.
#--ibound
ibound = np.ones((nlay, nrow, ncol), dtype=np.int)
ibound[:, :, 0] = -1
ibound[:, :, -1] = -1
#--bottom of the model
botm = 25 * np.ones((nlay + 1, nrow, ncol), dtype=np.float)
base = 20.
for j in xrange(ncol):
botm[1, :, j] = base
#if j > 0 and j % 40 == 0:
if j+1 in [40,80,120,160]:
base -= 5
#--starting heads
strt = h1 * np.ones((nlay, nrow, ncol), dtype=np.float)
strt[:, :, -1] = h2
"""
Explanation: Model parameters
End of explanation
"""
#make the flopy model
mf = flopy.modflow.Modflow(modelname=modelname, exe_name=mfexe, model_ws=modelpth)
dis = flopy.modflow.ModflowDis(mf, nlay, nrow, ncol,
delr=delr, delc=delc,
top=botm[0, :, :], botm=botm[1:, :, :],
perlen=1, nstp=1, steady=True)
bas = flopy.modflow.ModflowBas(mf, ibound=ibound, strt=strt)
lpf = flopy.modflow.ModflowLpf(mf, hk=0.0001, laytyp=4)
oc = flopy.modflow.ModflowOc(mf,
stress_period_data={(0,0): ['print budget', 'print head',
'save head', 'save budget']})
sms = flopy.modflow.ModflowSms(mf, nonlinmeth=1, linmeth=1,
numtrack=50, btol=1.1, breduc=0.70, reslim = 0.0,
theta=0.85, akappa=0.0001, gamma=0., amomentum=0.1,
iacl=2, norder=0, level=5, north=7, iredsys=0, rrctol=0.,
idroptol=1, epsrn=1.e-5,
mxiter=500, hclose=1.e-3, hiclose=1.e-3, iter1=50)
mf.write_input()
#--remove any existing head files
try:
os.remove(os.path.join(modelpth, '{0}.hds'.format(modelname)))
except:
pass
#--run the model
mf.run_model()
"""
Explanation: Create and run the MODFLOW-USG model
End of explanation
"""
#--Create the mfusg headfile object
headfile = os.path.join(modelpth, '{0}.hds'.format(modelname))
headobj = fputl.HeadFile(headfile, precision='single')
times = headobj.get_times()
mfusghead = headobj.get_data(totim=times[-1])
"""
Explanation: Read the simulated MODFLOW-USG model results
End of explanation
"""
fig = plt.figure(figsize=(8,6))
fig.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=0.25, hspace=0.25)
ax = fig.add_subplot(1, 1, 1)
ax.plot(x, mfusghead[0, 0, :], linewidth=0.75, color='blue', label='MODFLOW-USG')
ax.fill_between(x, y1=botm[1, 0, :], y2=-5, color='0.5', alpha=0.5)
leg = ax.legend(loc='upper right')
leg.draw_frame(False)
ax.set_xlabel('Horizontal distance, in m')
ax.set_ylabel('Head, in m')
ax.set_ylim(-5,25);
"""
Explanation: Plot MODFLOW-USG results
End of explanation
"""
|
mp4096/controlboros | examples/simple_control_loop.ipynb | bsd-3-clause | from controlboros import StateSpaceBuilder
import matplotlib.pyplot as plt
import numpy as np
from scipy import signal
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
"""
Explanation: Simulating a simple control loop
Mikhail Pak, 2017
End of explanation
"""
t_begin, t_end = 0.0, 10.0
dt = 0.1
"""
Explanation: In this notebook, we shall simulate the step response of a simple control loop using scipy.signal and controlboros.
The control loop consists of a controller $C(s)$ and a plant $H(s)$ defined by the following transfer functions:
$$
\begin{aligned}
C(s) &= \frac{0.1 (5 s + 1)}{s + 1} & H(s) &= \frac{1}{(5 s + 1)(s^2 + 0.2 s + 1)} = \frac{1}{5 s^3 + 2 s^2 + 5.2 s + 1}
\end{aligned}
$$
The closed-loop transfer function is given by:
$$T(s) = \frac{C(s) H(s)}{1 + C(s) H(s)} = \frac{0.1}{s^3 + 1.2 s^2 + 1.2 s + 1.1}$$
We want to simulate from 0 to 10 seconds and we use the same coarse sample time (100 ms) for both scipy.signal and controlboros:
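A quick check of the hand-expanded polynomials using numpy (np.polymul convolves coefficient lists, highest power first):

```python
import numpy as np

# denominator of H(s): (5s + 1)(s^2 + 0.2s + 1) -> 5, 2, 5.2, 1
den_h = np.polymul([5.0, 1.0], [1.0, 0.2, 1.0])

# denominator of T(s): (s + 1)(s^2 + 0.2s + 1) + 0.1 -> 1, 1.2, 1.2, 1.1
den_t = np.polyadd(np.polymul([1.0, 1.0], [1.0, 0.2, 1.0]), [0.1])

print(den_h, den_t)
```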
End of explanation
"""
t_ref, y_ref = signal.step(
([0.1], [1.0, 1.2, 1.2, 1.1]),
T=np.arange(t_begin, t_end, dt),
)
"""
Explanation: Ok. Now we compute the reference step response using scipy.signal:
End of explanation
"""
ctrl = StateSpaceBuilder().from_tf([0.5, 0.1], [1.0, 1.0])\
.discretise(dt)\
.build()
plant = StateSpaceBuilder().from_tf([1.0], [5.0, 2.0, 5.2, 1.0])\
.discretise(dt)\
.build()
"""
Explanation: Now we create two controlboros.StateSpace models for the plant and controller using the builder pattern.
Notice that discretise() is just a wrapper around scipy.signal.cont2discrete(). It uses the zero-order hold method by default. You can play around and see the difference e.g. when using Tustin's approximation (method="bilinear").
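For reference, here is scipy.signal.cont2discrete called directly on the trivial system x' = -x, whose exact zero-order-hold discretisation is known in closed form (this toy system is mine, chosen only so the result can be checked by hand):

```python
import numpy as np
from scipy.signal import cont2discrete

dt = 0.1
A = np.array([[-1.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])
D = np.array([[0.0]])

# cont2discrete returns (Ad, Bd, Cd, Dd, dt)
Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), dt, method="zoh")
# exact ZOH for x' = -x: Ad = exp(-dt), Bd = 1 - exp(-dt)
```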
End of explanation
"""
t_cb = np.arange(t_begin, t_end, dt)
setpoint = np.array([1.0]) # Setpoint, constantly 1.0 because step response
command = np.array([0.0]) # Controller output
feedback = np.array([0.0]) # Feedback value
y_cb = np.zeros((len(t_cb),)) # Array for the step response
"""
Explanation: And we initialise variables and arrays for our signals:
End of explanation
"""
# Reset the initial state of the systems,
# helpful if you run this cell multiple times!
ctrl.set_state_to_zero()
plant.set_state_to_zero()
for i in range(len(t_cb)):
command = ctrl.push_stateful(setpoint - feedback)
y_cb[i] = plant.push_stateful(command)
feedback = y_cb[i] # Unit delay!
"""
Explanation: We're ready to run the main loop. Notice how we resolve the control loop by using a unit delay:
End of explanation
"""
plt.figure(figsize=(7, 5))
plt.plot(t_ref, y_ref)
plt.step(t_cb, y_cb, where="post")
plt.xlabel("Time (s)")
plt.ylabel("Output")
plt.legend(["scipy.signal", "controlboros"])
plt.grid()
plt.show()
"""
Explanation: Plot and compare results:
End of explanation
"""
dt_fine = 1.0e-3
ctrl_fine = StateSpaceBuilder().from_tf([0.5, 0.1], [1.0, 1.0])\
.discretise(dt_fine)\
.build()
plant_fine = StateSpaceBuilder().from_tf([1.0], [5.0, 2.0, 5.2, 1.0])\
.discretise(dt_fine)\
.build()
t_cb_fine = np.arange(t_begin, t_end, dt_fine)
setpoint = np.array([1.0]) # Setpoint, constantly 1.0 because step response
command = np.array([0.0]) # Controller output
feedback = np.array([0.0]) # Feedback value
y_cb_fine = np.zeros((len(t_cb_fine),)) # Array for the step response
# Reset the initial state of the systems
ctrl_fine.set_state_to_zero()
plant_fine.set_state_to_zero()
for i in range(len(t_cb_fine)):
command = ctrl_fine.push_stateful(setpoint - feedback)
y_cb_fine[i] = plant_fine.push_stateful(command)
feedback = y_cb_fine[i]
"""
Explanation: Obviously, the step response simulated with controlboros is very inaccurate due to the 100 ms time delay in the feedback loop. We can alleviate (but never solve!) this problem by using a finer sample time, e.g. 1 ms:
End of explanation
"""
plt.figure(figsize=(7, 5))
plt.plot(t_ref, y_ref)
plt.step(t_cb, y_cb, where="post")
plt.step(t_cb_fine, y_cb_fine, "r", where="post")
plt.xlabel("Time (s)")
plt.ylabel("Output")
plt.legend(["scipy.signal", "controlboros, 100 ms", "controlboros, 1 ms"])
plt.grid()
plt.show()
"""
Explanation: We plot the results and see that the controlboros solution with 1 ms step size is very close to the one computed with scipy.signal:
End of explanation
"""
|
kingb12/languagemodelRNN | model_comparisons/noingX_compared.ipynb | mit | report_files = ["/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing6_200_512_04drb/encdec_noing6_200_512_04drb.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drb/encdec_noing10_200_512_04drb.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing15_200_512_04drb/encdec_noing15_200_512_04drb.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing23_200_512_04drb/encdec_noing23_200_512_04drb.json"]
log_files = ["/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing6_200_512_04drb/encdec_noing6_200_512_04drb_logs.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drb/encdec_noing10_200_512_04drb_logs.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing15_200_512_04drb/encdec_noing15_200_512_04drb_logs.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing23_200_512_04drb/encdec_noing23_200_512_04drb_logs.json"]
reports = []
logs = []
import json
import matplotlib.pyplot as plt
import numpy as np
for report_file in report_files:
with open(report_file) as f:
reports.append((report_file.split('/')[-1].split('.json')[0], json.loads(f.read())))
for log_file in log_files:
with open(log_file) as f:
logs.append((log_file.split('/')[-1].split('.json')[0], json.loads(f.read())))
for report_name, report in reports:
print '\n', report_name, '\n'
print 'Encoder: \n', report['architecture']['encoder']
print 'Decoder: \n', report['architecture']['decoder']
"""
Explanation: Comparing Encoder-Decoders Analysis
Model Architecture
End of explanation
"""
%matplotlib inline
from IPython.display import HTML, display
def display_table(data):
display(HTML(
u'<table><tr>{}</tr></table>'.format(
u'</tr><tr>'.join(
u'<td>{}</td>'.format('</td><td>'.join(unicode(_) for _ in row)) for row in data)
)
))
def bar_chart(data):
n_groups = len(data)
train_perps = [d[1] for d in data]
valid_perps = [d[2] for d in data]
test_perps = [d[3] for d in data]
fig, ax = plt.subplots(figsize=(10,8))
index = np.arange(n_groups)
bar_width = 0.3
opacity = 0.4
error_config = {'ecolor': '0.3'}
train_bars = plt.bar(index, train_perps, bar_width,
alpha=opacity,
color='b',
error_kw=error_config,
label='Training Perplexity')
valid_bars = plt.bar(index + bar_width, valid_perps, bar_width,
alpha=opacity,
color='r',
error_kw=error_config,
label='Valid Perplexity')
test_bars = plt.bar(index + 2*bar_width, test_perps, bar_width,
alpha=opacity,
color='g',
error_kw=error_config,
label='Test Perplexity')
plt.xlabel('Model')
plt.ylabel('Scores')
plt.title('Perplexity by Model and Dataset')
plt.xticks(index + bar_width, [d[0] for d in data])
plt.legend()
plt.tight_layout()
plt.show()
data = [['<b>Model</b>', '<b>Train Perplexity</b>', '<b>Valid Perplexity</b>', '<b>Test Perplexity</b>']]
for rname, report in reports:
data.append([rname, report['train_perplexity'], report['valid_perplexity'], report['test_perplexity']])
display_table(data)
bar_chart(data[1:])
"""
Explanation: Perplexity on Each Dataset
End of explanation
"""
%matplotlib inline
plt.figure(figsize=(10, 8))
for rname, l in logs:
for k in l.keys():
plt.plot(l[k][0], l[k][1], label=str(k) + ' ' + rname + ' (train)')
plt.plot(l[k][0], l[k][2], label=str(k) + ' ' + rname + ' (valid)')
plt.title('Loss v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
"""
Explanation: Loss vs. Epoch
End of explanation
"""
%matplotlib inline
plt.figure(figsize=(10, 8))
for rname, l in logs:
for k in l.keys():
plt.plot(l[k][0], l[k][3], label=str(k) + ' ' + rname + ' (train)')
plt.plot(l[k][0], l[k][4], label=str(k) + ' ' + rname + ' (valid)')
plt.title('Perplexity v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Perplexity')
plt.legend()
plt.show()
"""
Explanation: Perplexity vs. Epoch
End of explanation
"""
def print_sample(sample, best_bleu=None):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
print('Input: '+ enc_input + '\n')
print('Gend: ' + sample['generated'] + '\n')
print('True: ' + gold + '\n')
if best_bleu is not None:
cbm = ' '.join([w for w in best_bleu['best_match'].split(' ') if w != '<mask>'])
print('Closest BLEU Match: ' + cbm + '\n')
print('Closest BLEU Score: ' + str(best_bleu['best_score']) + '\n')
print('\n')
def display_sample(samples, best_bleu=False):
for enc_input in samples:
data = []
for rname, sample in samples[enc_input]:
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
data.append([rname, '<b>Generated: </b>' + sample['generated']])
if best_bleu:
cbm = ' '.join([w for w in sample['best_match'].split(' ') if w != '<mask>'])
data.append([rname, '<b>Closest BLEU Match: </b>' + cbm + ' (Score: ' + str(sample['best_score']) + ')'])
data.insert(0, ['<u><b>' + enc_input + '</b></u>', '<b>True: ' + gold+ '</b>'])
display_table(data)
def process_samples(samples):
# consolidate samples with identical inputs
result = {}
for rname, t_samples, t_cbms in samples:
for i, sample in enumerate(t_samples):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
if t_cbms is not None:
sample.update(t_cbms[i])
if enc_input in result:
result[enc_input].append((rname, sample))
else:
result[enc_input] = [(rname, sample)]
return result
samples = process_samples([(rname, r['train_samples'], r['best_bleu_matches_train'] if 'best_bleu_matches_train' in r else None) for (rname, r) in reports])
display_sample(samples, best_bleu='best_bleu_matches_train' in reports[1][1])
samples = process_samples([(rname, r['valid_samples'], r['best_bleu_matches_valid'] if 'best_bleu_matches_valid' in r else None) for (rname, r) in reports])
display_sample(samples, best_bleu='best_bleu_matches_valid' in reports[1][1])
samples = process_samples([(rname, r['test_samples'], r['best_bleu_matches_test'] if 'best_bleu_matches_test' in r else None) for (rname, r) in reports])
display_sample(samples, best_bleu='best_bleu_matches_test' in reports[1][1])
"""
Explanation: Generations
End of explanation
"""
def print_bleu(bleu_structs):
    data = [['<b>Model</b>', '<b>Overall Score</b>', '<b>1-gram Score</b>', '<b>2-gram Score</b>', '<b>3-gram Score</b>', '<b>4-gram Score</b>']]
    for rname, bleu_struct in bleu_structs:
        data.append([rname, bleu_struct['score'], bleu_struct['components']['1'], bleu_struct['components']['2'], bleu_struct['components']['3'], bleu_struct['components']['4']])
display_table(data)
# Training Set BLEU Scores
print_bleu([(rname, report['train_bleu']) for (rname, report) in reports])
# Validation Set BLEU Scores
print_bleu([(rname, report['valid_bleu']) for (rname, report) in reports])
# Test Set BLEU Scores
print_bleu([(rname, report['test_bleu']) for (rname, report) in reports])
# All Data BLEU Scores
print_bleu([(rname, report['combined_bleu']) for (rname, report) in reports])
"""
Explanation: BLEU Analysis
End of explanation
"""
# Training Set BLEU n-pairs Scores
print_bleu([(rname, report['n_pairs_bleu_train']) for (rname, report) in reports])
# Validation Set n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_valid']) for (rname, report) in reports])
# Test Set n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_test']) for (rname, report) in reports])
# Combined n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_all']) for (rname, report) in reports])
# Ground Truth n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_gold']) for (rname, report) in reports])
"""
Explanation: N-pairs BLEU Analysis
This analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations of one another, computing their BLEU score. We expect very low scores on the ground truth, while high scores can expose hyper-common generations
End of explanation
"""
def print_align(reports):
data= [['<b>Model</b>', '<b>Average (Train) Generated Score</b>','<b>Average (Valid) Generated Score</b>','<b>Average (Test) Generated Score</b>','<b>Average (All) Generated Score</b>', '<b>Average (Gold) Score</b>']]
for rname, report in reports:
data.append([rname, report['average_alignment_train'], report['average_alignment_valid'], report['average_alignment_test'], report['average_alignment_all'], report['average_alignment_gold']])
display_table(data)
print_align(reports)
"""
Explanation: Alignment Analysis
This analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as n-pairs BLEU: we expect low scores on the ground truth, and hyper-common generations will raise the scores
End of explanation
"""
|
mne-tools/mne-tools.github.io | stable/_downloads/9e70404d3a55a6b6d1c1877784347c14/mixed_source_space_inverse.ipynb | bsd-3-clause | # Author: Annalisa Pascarella <a.pascarella@iac.cnr.it>
#
# License: BSD-3-Clause
import os.path as op
import matplotlib.pyplot as plt
from nilearn import plotting
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse
# Set dir
data_path = mne.datasets.sample.data_path()
subject = 'sample'
data_dir = op.join(data_path, 'MEG', subject)
subjects_dir = op.join(data_path, 'subjects')
bem_dir = op.join(subjects_dir, subject, 'bem')
# Set file names
fname_mixed_src = op.join(bem_dir, '%s-oct-6-mixed-src.fif' % subject)
fname_aseg = op.join(subjects_dir, subject, 'mri', 'aseg.mgz')
fname_model = op.join(bem_dir, '%s-5120-bem.fif' % subject)
fname_bem = op.join(bem_dir, '%s-5120-bem-sol.fif' % subject)
fname_evoked = data_dir + '/sample_audvis-ave.fif'
fname_trans = data_dir + '/sample_audvis_raw-trans.fif'
fname_fwd = data_dir + '/sample_audvis-meg-oct-6-mixed-fwd.fif'
fname_cov = data_dir + '/sample_audvis-shrunk-cov.fif'
"""
Explanation: Compute MNE inverse solution on evoked data with a mixed source space
Create a mixed source space and compute an MNE inverse solution on an evoked
dataset.
End of explanation
"""
labels_vol = ['Left-Amygdala',
'Left-Thalamus-Proper',
'Left-Cerebellum-Cortex',
'Brain-Stem',
'Right-Amygdala',
'Right-Thalamus-Proper',
'Right-Cerebellum-Cortex']
"""
Explanation: Set up our source space
List substructures we are interested in. We select only the
sub structures we want to include in the source space:
End of explanation
"""
src = mne.setup_source_space(subject, spacing='oct5',
add_dist=False, subjects_dir=subjects_dir)
"""
Explanation: Get a surface-based source space, here with few source points for speed
in this demonstration, in general you should use oct6 spacing!
End of explanation
"""
vol_src = mne.setup_volume_source_space(
subject, mri=fname_aseg, pos=10.0, bem=fname_model,
volume_label=labels_vol, subjects_dir=subjects_dir,
add_interpolator=False, # just for speed, usually this should be True
verbose=True)
# Generate the mixed source space
src += vol_src
print(f"The source space contains {len(src)} spaces and "
f"{sum(s['nuse'] for s in src)} vertices")
"""
Explanation: Now we create a mixed src space by adding the volume regions specified in the
list labels_vol. First, read the aseg file and the source space bounds
using the inner skull surface (here using 10mm spacing to save time,
we recommend something smaller like 5.0 in actual analyses):
End of explanation
"""
src.plot(subjects_dir=subjects_dir)
"""
Explanation: View the source space
End of explanation
"""
nii_fname = op.join(bem_dir, '%s-mixed-src.nii' % subject)
src.export_volume(nii_fname, mri_resolution=True, overwrite=True)
plotting.plot_img(nii_fname, cmap='nipy_spectral')
"""
Explanation: We could write the mixed source space with::
write_source_spaces(fname_mixed_src, src, overwrite=True)
We can also export source positions to NIfTI file and visualize it again:
End of explanation
"""
fwd = mne.make_forward_solution(
fname_evoked, fname_trans, src, fname_bem,
mindist=5.0, # ignore sources<=5mm from innerskull
meg=True, eeg=False, n_jobs=1)
del src # save memory
leadfield = fwd['sol']['data']
print("Leadfield size : %d sensors x %d dipoles" % leadfield.shape)
print(f"The fwd source space contains {len(fwd['src'])} spaces and "
f"{sum(s['nuse'] for s in fwd['src'])} vertices")
# Load data
condition = 'Left Auditory'
evoked = mne.read_evokeds(fname_evoked, condition=condition,
baseline=(None, 0))
noise_cov = mne.read_cov(fname_cov)
"""
Explanation: Compute the fwd matrix
End of explanation
"""
snr = 3.0 # use smaller SNR for raw data
inv_method = 'dSPM' # sLORETA, MNE, dSPM
parc = 'aparc' # the parcellation to use, e.g., 'aparc' 'aparc.a2009s'
loose = dict(surface=0.2, volume=1.)
lambda2 = 1.0 / snr ** 2
inverse_operator = make_inverse_operator(
evoked.info, fwd, noise_cov, depth=None, loose=loose, verbose=True)
del fwd
stc = apply_inverse(evoked, inverse_operator, lambda2, inv_method,
pick_ori=None)
src = inverse_operator['src']
"""
Explanation: Compute inverse solution
End of explanation
"""
initial_time = 0.1
stc_vec = apply_inverse(evoked, inverse_operator, lambda2, inv_method,
pick_ori='vector')
brain = stc_vec.plot(
hemi='both', src=inverse_operator['src'], views='coronal',
initial_time=initial_time, subjects_dir=subjects_dir,
brain_kwargs=dict(silhouette=True), smoothing_steps=7)
"""
Explanation: Plot the mixed source estimate
End of explanation
"""
brain = stc.surface().plot(initial_time=initial_time,
subjects_dir=subjects_dir, smoothing_steps=7)
"""
Explanation: Plot the surface
End of explanation
"""
fig = stc.volume().plot(initial_time=initial_time, src=src,
subjects_dir=subjects_dir)
"""
Explanation: Plot the volume
End of explanation
"""
# Get labels for FreeSurfer 'aparc' cortical parcellation with 34 labels/hemi
labels_parc = mne.read_labels_from_annot(
subject, parc=parc, subjects_dir=subjects_dir)
label_ts = mne.extract_label_time_course(
[stc], labels_parc, src, mode='mean', allow_empty=True)
# plot the times series of 2 labels
fig, axes = plt.subplots(1)
axes.plot(1e3 * stc.times, label_ts[0][0, :], 'k', label='bankssts-lh')
axes.plot(1e3 * stc.times, label_ts[0][-1, :].T, 'r', label='Brain-stem')
axes.set(xlabel='Time (ms)', ylabel='MNE current (nAm)')
axes.legend()
mne.viz.tight_layout()
"""
Explanation: Process labels
Average the source estimates within each label of the cortical parcellation
and each sub structure contained in the src space
End of explanation
"""
|
wasit7/cs439_python | week03/Class.ipynb | bsd-3-clause | class MyClass:
"""A simple example class"""
i = 12345
def f(self):
return 'hello world'
m = MyClass()  # instantiate before inspecting the object
dir(m)
m.__doc__
m.i
m.f()
"""
Explanation: References
https://docs.python.org/2/tutorial/classes.html
Simple Python Class Components
End of explanation
"""
class AnotherClass:
def __init__(self,i=1234):
self.i=i
a=AnotherClass()
a.i
a=AnotherClass(456)
a.i
class Rect:
def __init__(self,x=0,y=0,w=0,h=0):
self.x=x
self.y=y
self.w=w
self.h=h
r=Rect(1)
r=Rect(7,8,9,10)
"""
Explanation: Initialization
End of explanation
"""
class Rect(object):
def __init__(self,x=0,y=0,w=0,h=0):
self.x=x
self.y=y
self.w=w
self.h=h
def __str__(self):
return "x: %s, y: %s, w: %s, h: %s"%(self.x, self.y, self.w, self.h)
r=Rect(7,8,9,10)
print r
"""
Explanation: Representation
End of explanation
"""
class Square(Rect):
def __init__(self,x=0,y=0,w=0):
super(Square,self).__init__(x,y,w,w)
def __str__(self):
return super(Square,self).__str__()
s=Square(2,3,4)
print s
r.__str__()
x.i
y.i
z=x+y
type(z)
"""
Explanation: Inheritance
End of explanation
"""
x="Hello"
y="World"
x+" "+y
"""
Explanation: String
End of explanation
"""
def reverse(data):
for index in range(len(data)-1, -1, -1):
yield data[index]
x=[1,2,3,4,5,6]
for i in reverse(x):
print i
"""
Explanation: Generator
End of explanation
"""
class Node(object):
L=None
R=None
def __init__(self, x=[],depth=0, side="T"):
self.depth=depth
self.side=side
self.x=x
if 1<len(x):
index=len(x)/2
self.theta=x[index]
self.L=Node(x[:index], depth+1, side="L")
self.R=Node(x[index:], depth+1, side="R")
def show(self):
if self is not None:
print self
if self.L:
self.L.show()
if self.R:
self.R.show()
def __str__(self):
if len(self.x)==1:
return "%s%s, x: %s"%(" "*self.depth, self.side, self.x)
else:
return "%s%s, theta: %s"%(" "*self.depth, self.side, self.theta)
n=Node([1,2,3,4,5,6,7,8,9,10])
n.show()
"""
Explanation: Binary Tree
End of explanation
"""
class Book(object):
def __init__(self,name="untitled", pages=0):
self.name=name
self.pages=pages
def __str__(self):
return "name: %s, pages: %s"%(self.name, self.pages)
def __repr__(self):
return "name: %s, pages: %s"%(self.name, self.pages)
def __add__(self, other):
return Book(name=self.name+"&"+other.name, pages=self.pages+other.pages)
math=Book("Mathematics",306)
phy=Book("Physics",210)
math+phy
"""
Explanation: Operator Overloading
End of explanation
"""
|
tensorflow/docs-l10n | site/ja/neural_structured_learning/tutorials/adversarial_keras_cnn_mnist.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Neural Structured Learning Authors
End of explanation
"""
!pip install --quiet neural-structured-learning
"""
Explanation: Adversarial regularization for image classification
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/neural_structured_learning/tutorials/adversarial_keras_cnn_mnist"><img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/neural_structured_learning/tutorials/adversarial_keras_cnn_mnist.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/neural_structured_learning/tutorials/adversarial_keras_cnn_mnist.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/neural_structured_learning/tutorials/adversarial_keras_cnn_mnist.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
Overview
This tutorial explores the use of adversarial learning (Goodfellow et al., 2014) for image classification using the Neural Structured Learning (NSL) framework.
The core idea of adversarial learning is to train a model with adversarially perturbed data (called adversarial examples) in addition to the organic training data. To the human eye, these adversarial examples look the same as the original data, but the perturbation causes the model to become confused and make incorrect predictions or classifications. Adversarial examples are constructed to intentionally mislead the model. By training with such examples, the model learns to be robust against adversarial perturbation when making predictions.
In this tutorial, we illustrate the following procedure for applying adversarial learning to obtain robust models using the Neural Structured Learning framework:
Create a neural network as a base model. In this tutorial, the base model is created with the tf.keras functional API; this procedure is also compatible with models created by the tf.keras sequential and subclassing APIs. For more on Keras models in TensorFlow, see this documentation.
Wrap the base model with the AdversarialRegularization wrapper class, provided by the NSL framework, to create a new tf.keras.Model instance. This new model includes the adversarial loss as a regularization term in its training objective.
Convert the examples in the training data to feature dictionaries.
Train and evaluate the new model.
Recap for beginners
There is a corresponding video explanation of adversarial learning for image classification in the TensorFlow Neural Structured Learning YouTube series. Below, we summarize the key concepts explained in that video, expanding on the Overview section above.
The NSL framework jointly optimizes image features and structured signals to help neural networks learn better. However, what if there is no explicit structure available to train the neural network? This tutorial explains one approach involving the creation of adversarial neighbors (modified from the original sample), which dynamically construct a structure.
First, adversarial neighbors are defined as modified versions of a sample image with small perturbations applied that mislead a neural net into outputting inaccurate classifications. These carefully designed perturbations are typically based on the reverse gradient direction and are meant to confuse the neural net during training. Humans cannot tell the difference between a sample image and its generated adversarial neighbor, yet to the neural net the applied perturbation effectively leads to an inaccurate conclusion.
The generated adversarial neighbors are then connected to the sample, dynamically constructing a structure edge by edge. Using these connections, the neural net learns to maintain the similarity between the sample and its adversarial neighbors while avoiding confusion from misclassification, improving the overall quality and accuracy of the network.
The code segment below gives a high-level overview of the steps involved, while the rest of the tutorial goes into more technical depth.
Read and prepare the data. Load the MNIST dataset and normalize the feature values so they lie in the range [0, 1].
```
import neural_structured_learning as nsl
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
```
Build the neural network. This example uses a Sequential Keras base model.
model = tf.keras.Sequential(...)
Configure the adversarial model. This includes hyperparameters such as the multiplier applied to the adversarial regularization and empirically chosen values for the step size/learning rate. Invoke adversarial regularization with a wrapper class around the constructed neural network.
adv_config = nsl.configs.make_adv_reg_config(multiplier=0.2, adv_step_size=0.05)
adv_model = nsl.keras.AdversarialRegularization(model, adv_config)
Finish with the standard Keras workflow: compile, fit, evaluate.
adv_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
adv_model.fit({'feature': x_train, 'label': y_train}, epochs=5)
adv_model.evaluate({'feature': x_test, 'label': y_test})
Here we can see that adversarial learning is enabled in two steps and three simple lines of code. This is the simplicity of the Neural Structured Learning framework. The following sections expand on this procedure.
Setup
Install the Neural Structured Learning package.
End of explanation
"""
import matplotlib.pyplot as plt
import neural_structured_learning as nsl
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
"""
Explanation: Import the required libraries. We abbreviate neural_structured_learning as nsl.
End of explanation
"""
class HParams(object):
def __init__(self):
self.input_shape = [28, 28, 1]
self.num_classes = 10
self.conv_filters = [32, 64, 64]
self.kernel_size = (3, 3)
self.pool_size = (2, 2)
self.num_fc_units = [64]
self.batch_size = 32
self.epochs = 5
self.adv_multiplier = 0.2
self.adv_step_size = 0.2
self.adv_grad_norm = 'infinity'
HPARAMS = HParams()
"""
Explanation: Hyperparameters
We collect and explain the hyperparameters (in the HParams object) for model training and evaluation.
Input/Output:
input_shape: the shape of the input tensor. Each image is 28-by-28 pixels with 1 channel.
num_classes: there are a total of 10 classes, corresponding to the digits [0-9].
Model architecture:
conv_filters: a list of numbers, each specifying the number of filters in a convolutional layer.
kernel_size: the size of the 2D convolution window, shared by all convolutional layers.
pool_size: the factors by which to downscale the image in each max-pooling layer.
num_fc_units: the number of units (i.e. width) of each fully-connected layer.
Training and evaluation:
batch_size: the batch size used for training and evaluation.
epochs: the number of training epochs.
Adversarial learning:
adv_multiplier: the weight of the adversarial loss in the training objective, relative to the labeled loss.
adv_step_size: the magnitude of the adversarial perturbation.
adv_grad_norm: the norm used to measure the magnitude of the adversarial perturbation.
End of explanation
"""
datasets = tfds.load('mnist')
train_dataset = datasets['train']
test_dataset = datasets['test']
IMAGE_INPUT_NAME = 'image'
LABEL_INPUT_NAME = 'label'
"""
Explanation: MNIST dataset
The MNIST dataset contains grayscale images of handwritten digits (from '0' to '9'). Each image shows one digit at low resolution (28-by-28 pixels). The task is to classify the images into 10 categories, one per digit.
Here we load the MNIST dataset from TensorFlow Datasets, which handles downloading the data and constructing a tf.data.Dataset. The loaded dataset has two subsets:
train with 60,000 examples, and
test with 10,000 examples.
Examples in both subsets are stored in feature dictionaries with the following two keys:
image: an array of pixel values, ranging from 0 to 255.
label: the ground-truth label, ranging from 0 to 9.
End of explanation
"""
def normalize(features):
features[IMAGE_INPUT_NAME] = tf.cast(
features[IMAGE_INPUT_NAME], dtype=tf.float32) / 255.0
return features
def convert_to_tuples(features):
return features[IMAGE_INPUT_NAME], features[LABEL_INPUT_NAME]
def convert_to_dictionaries(image, label):
return {IMAGE_INPUT_NAME: image, LABEL_INPUT_NAME: label}
train_dataset = train_dataset.map(normalize).shuffle(10000).batch(HPARAMS.batch_size).map(convert_to_tuples)
test_dataset = test_dataset.map(normalize).batch(HPARAMS.batch_size).map(convert_to_tuples)
"""
Explanation: To make the model numerically stable, we normalize the pixel values to [0, 1] by mapping the dataset over the normalize function. After shuffling and batching the training set, we convert the examples to feature tuples (image, label) for training the base model. We also provide a function to convert tuples back to dictionaries for later use.
End of explanation
"""
def build_base_model(hparams):
"""Builds a model according to the architecture defined in `hparams`."""
inputs = tf.keras.Input(
shape=hparams.input_shape, dtype=tf.float32, name=IMAGE_INPUT_NAME)
x = inputs
for i, num_filters in enumerate(hparams.conv_filters):
x = tf.keras.layers.Conv2D(
num_filters, hparams.kernel_size, activation='relu')(
x)
if i < len(hparams.conv_filters) - 1:
# max pooling between convolutional layers
x = tf.keras.layers.MaxPooling2D(hparams.pool_size)(x)
x = tf.keras.layers.Flatten()(x)
for num_units in hparams.num_fc_units:
x = tf.keras.layers.Dense(num_units, activation='relu')(x)
pred = tf.keras.layers.Dense(hparams.num_classes)(x)
model = tf.keras.Model(inputs=inputs, outputs=pred)
return model
base_model = build_base_model(HPARAMS)
base_model.summary()
"""
Explanation: Base model
Our base model here is a neural network consisting of 3 convolutional layers and 2 fully-connected layers (as defined in HPARAMS). It is defined using the Keras functional API. Feel free to try other APIs or model architectures (e.g. subclassing); note that the NSL framework supports all three Keras APIs.
End of explanation
"""
base_model.compile(
optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['acc'])
base_model.fit(train_dataset, epochs=HPARAMS.epochs)
results = base_model.evaluate(test_dataset)
named_results = dict(zip(base_model.metrics_names, results))
print('\naccuracy:', named_results['acc'])
"""
Explanation: Next we train and evaluate the base model.
End of explanation
"""
adv_config = nsl.configs.make_adv_reg_config(
multiplier=HPARAMS.adv_multiplier,
adv_step_size=HPARAMS.adv_step_size,
adv_grad_norm=HPARAMS.adv_grad_norm
)
"""
Explanation: We can see that the base model achieves 99% accuracy on the test set. We will see how robust it is under adversarial perturbation in the section "Robustness under adversarial perturbations" below.
Adversarial-regularized model
Here we show how to incorporate adversarial training into a Keras model with a few lines of code, using the NSL framework. The base model is wrapped to create a new tf.Keras.Model, whose training objective includes adversarial regularization.
First, we create a config object with all the relevant hyperparameters using the helper function nsl.configs.make_adv_reg_config.
End of explanation
"""
base_adv_model = build_base_model(HPARAMS)
adv_model = nsl.keras.AdversarialRegularization(
base_adv_model,
label_keys=[LABEL_INPUT_NAME],
adv_config=adv_config
)
train_set_for_adv_model = train_dataset.map(convert_to_dictionaries)
test_set_for_adv_model = test_dataset.map(convert_to_dictionaries)
"""
Explanation: Now we can wrap the base model with AdversarialRegularization. Here we create a new base model (base_adv_model) so that the existing one (base_model) can be used later for comparison.
The returned adv_model is a tf.keras.Model object whose training objective includes a regularization term for the adversarial loss. To compute that loss, the model needs access to the label information (the label feature) in addition to the regular input (the image feature). For this reason, we convert the examples in the datasets from tuples back to dictionaries, and we tell the model which feature contains the label information via the label_keys parameter.
End of explanation
"""
adv_model.compile(
optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['acc'])
adv_model.fit(train_set_for_adv_model, epochs=HPARAMS.epochs)
results = adv_model.evaluate(test_set_for_adv_model)
named_results = dict(zip(adv_model.metrics_names, results))
print('\naccuracy:', named_results['sparse_categorical_accuracy'])
"""
Explanation: Next we compile, train, and evaluate the adversarial-regularized model. You might see warnings like "Output missing from loss dictionary"; this is fine because adv_model computes the total loss without relying on the base implementation.
End of explanation
"""
reference_model = nsl.keras.AdversarialRegularization(
base_model, label_keys=[LABEL_INPUT_NAME], adv_config=adv_config)
reference_model.compile(
optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['acc'])
"""
Explanation: We can see that the adversarial-regularized model also performs very well (99% accuracy) on the test set.
Robustness under adversarial perturbations
Here we compare the base model and the adversarial-regularized model for robustness under adversarial perturbation.
We use the AdversarialRegularization.perturb_on_batch function to generate adversarially perturbed examples, with the generation based on the base model. To do so, we wrap the base model with AdversarialRegularization. Note that as long as we don't invoke training (Model.fit), the learned variables in the model won't change, so the model remains the same as in the "Base model" section.
End of explanation
"""
models_to_eval = {
'base': base_model,
'adv-regularized': adv_model.base_model
}
metrics = {
name: tf.keras.metrics.SparseCategoricalAccuracy()
for name in models_to_eval.keys()
}
"""
Explanation: We collect the models to be evaluated in a dictionary, and create a metric object for each of them.
Note that we take adv_model.base_model in order to have the same input format as the base model (not requiring label information). The learned variables in adv_model.base_model are the same as those in adv_model.
End of explanation
"""
perturbed_images, labels, predictions = [], [], []
for batch in test_set_for_adv_model:
perturbed_batch = reference_model.perturb_on_batch(batch)
# Clipping makes perturbed examples have the same range as regular ones.
perturbed_batch[IMAGE_INPUT_NAME] = tf.clip_by_value(
perturbed_batch[IMAGE_INPUT_NAME], 0.0, 1.0)
y_true = perturbed_batch.pop(LABEL_INPUT_NAME)
perturbed_images.append(perturbed_batch[IMAGE_INPUT_NAME].numpy())
labels.append(y_true.numpy())
predictions.append({})
for name, model in models_to_eval.items():
y_pred = model(perturbed_batch)
metrics[name](y_true, y_pred)
predictions[-1][name] = tf.argmax(y_pred, axis=-1).numpy()
for name, metric in metrics.items():
print('%s model accuracy: %f' % (name, metric.result().numpy()))
"""
Explanation: Here is the loop that generates perturbed examples and evaluates the models on them. The perturbed images, labels, and predictions are saved for visualization in the next section.
End of explanation
"""
batch_index = 0
batch_image = perturbed_images[batch_index]
batch_label = labels[batch_index]
batch_pred = predictions[batch_index]
batch_size = HPARAMS.batch_size
n_col = 4
n_row = (batch_size + n_col - 1) // n_col
print('accuracy in batch %d:' % batch_index)
for name, pred in batch_pred.items():
print('%s model: %d / %d' % (name, np.sum(batch_label == pred), batch_size))
plt.figure(figsize=(15, 15))
for i, (image, y) in enumerate(zip(batch_image, batch_label)):
y_base = batch_pred['base'][i]
y_adv = batch_pred['adv-regularized'][i]
plt.subplot(n_row, n_col, i+1)
plt.title('true: %d, base: %d, adv: %d' % (y, y_base, y_adv))
plt.imshow(tf.keras.utils.array_to_img(image), cmap='gray')
plt.axis('off')
plt.show()
"""
Explanation: We can see that the accuracy of the base model drops dramatically (from 99% to about 50%) when the input is adversarially perturbed. On the other hand, the accuracy of the adversarial-regularized model degrades only slightly (from 99% to 95%). This demonstrates the effectiveness of adversarial learning in improving a model's robustness.
Examples of adversarially perturbed images
Here we take a look at the adversarially perturbed images. We can see that the perturbed images still show digits recognizable by humans, yet they successfully fool the base model.
End of explanation
"""
|
junpenglao/Bayesian-Cognitive-Modeling-in-Pymc3 | CaseStudies/TheBARTModelofRiskTaking.ipynb | gpl-3.0 | p = .15 # (Belief of) bursting probability
ntrials = 90 # Number of trials for the BART
Data = pd.read_csv('data/GeorgeSober.txt', sep='\t')
# Data.head()
cash = np.asarray(Data['cash']!=0, dtype=int)
npumps = np.asarray(Data['pumps'], dtype=int)
options = cash + npumps
d = np.full([ntrials,30], np.nan)
k = np.full([ntrials,30], np.nan)
# response vector
for j, ipumps in enumerate(npumps):
inds = np.arange(options[j],dtype=int)
k[j,inds] = inds+1
if ipumps > 0:
d[j,0:ipumps] = 0
if cash[j] == 1:
d[j,ipumps] = 1
indexmask = np.isfinite(d)
d = d[indexmask]
k = k[indexmask]
with pm.Model():
gammap = pm.Uniform('gammap', lower=0, upper=10, testval=1.2)
beta = pm.Uniform('beta', lower=0, upper=10, testval=.5)
omega = pm.Deterministic('omega', -gammap/np.log(1-p))
thetajk = 1 - pm.math.invlogit(- beta * (k - omega))
djk = pm.Bernoulli('djk', p=thetajk, observed=d)
trace = pm.sample(3e3, njobs=2)
pm.traceplot(trace, varnames=['gammap', 'beta']);
from scipy.stats.kde import gaussian_kde
burnin=2000
gammaplus = trace['gammap'][burnin:]
beta = trace['beta'][burnin:]
fig = plt.figure(figsize=(15, 5))
gs = gridspec.GridSpec(1, 3)
ax0 = plt.subplot(gs[0])
ax0.hist(npumps, bins=range(1, 9), rwidth=.8, align='left')
plt.xlabel('Number of Pumps', fontsize=12)
plt.ylabel('Frequency', fontsize=12)
ax1 = plt.subplot(gs[1])
my_pdf1 = gaussian_kde(gammaplus)
x1=np.linspace(.5, 1, 200)
ax1.plot(x1, my_pdf1(x1), 'k', lw=2.5, alpha=0.6) # distribution function
plt.xlim((.5, 1))
plt.xlabel(r'$\gamma^+$', fontsize=15)
plt.ylabel('Posterior Density', fontsize=12)
ax2 = plt.subplot(gs[2])
my_pdf2 = gaussian_kde(beta)
x2=np.linspace(0.3, 1.3, 200)
ax2.plot(x2, my_pdf2(x2), 'k', lw=2.5, alpha=0.6,) # distribution function
plt.xlim((0.3, 1.3))
plt.xlabel(r'$\beta$', fontsize=15)
plt.ylabel('Posterior Density', fontsize=12);
"""
Explanation: Chapter 16 - The BART model of risk taking
16.1 The BART model
Balloon Analogue Risk Task (BART: Lejuez et al., 2002): Every trial in this task starts by showing a balloon representing a small monetary value. The subject can then either transfer the money to a virtual bank account, or choose to pump, which adds a small amount of air to the balloon, and increases its value. There is some probability, however, that pumping the balloon will cause it to burst, causing all the money to be lost. A trial finishes when either the subject has transferred the money, or the balloon has burst.
$$ \gamma^{+} \sim \text{Uniform}(0,10) $$
$$ \beta \sim \text{Uniform}(0,10) $$
$$ \omega = -\gamma^{+} \,/\,\text{log}(1-p) $$
$$ \theta_{jk} = \frac{1}{1+e^{-\beta(k-\omega)}} $$
$$ d_{jk} \sim \text{Bernoulli}(\theta_{jk}) $$
End of explanation
"""
p = .15 # (Belief of) bursting probability
ntrials = 90 # Number of trials for the BART
Ncond = 3
dall = np.full([Ncond,ntrials,30], np.nan)
options = np.zeros((Ncond,ntrials))
kall = np.full([Ncond,ntrials,30], np.nan)
npumps_ = np.zeros((Ncond,ntrials))
for icondi in range(Ncond):
if icondi == 0:
Data = pd.read_csv('data/GeorgeSober.txt',sep='\t')
elif icondi == 1:
Data = pd.read_csv('data/GeorgeTipsy.txt',sep='\t')
elif icondi == 2:
Data = pd.read_csv('data/GeorgeDrunk.txt',sep='\t')
# Data.head()
cash = np.asarray(Data['cash']!=0, dtype=int)
npumps = np.asarray(Data['pumps'], dtype=int)
npumps_[icondi,:] = npumps
options[icondi,:] = cash + npumps
# response vector
for j, ipumps in enumerate(npumps):
inds = np.arange(options[icondi,j],dtype=int)
kall[icondi,j,inds] = inds+1
if ipumps > 0:
dall[icondi,j,0:ipumps] = 0
if cash[j] == 1:
dall[icondi,j,ipumps] = 1
indexmask = np.isfinite(dall)
dij = dall[indexmask]
kij = kall[indexmask]
condall = np.tile(np.arange(Ncond,dtype=int),(30,ntrials,1))
condall = np.swapaxes(condall,0,2)
cij = condall[indexmask]
with pm.Model() as model2:
mu_g = pm.Uniform('mu_g', lower=0, upper=10)
sigma_g = pm.Uniform('sigma_g', lower=0, upper=10)
mu_b = pm.Uniform('mu_b', lower=0, upper=10)
sigma_b = pm.Uniform('sigma_b', lower=0, upper=10)
gammap = pm.Normal('gammap', mu=mu_g, sd=sigma_g, shape=Ncond)
beta = pm.Normal('beta', mu=mu_b, sd=sigma_b, shape=Ncond)
omega = -gammap[cij]/np.log(1-p)
thetajk = 1 - pm.math.invlogit(- beta[cij] * (kij - omega))
djk = pm.Bernoulli("djk", p=thetajk, observed=dij)
approx = pm.fit(n=100000, method='advi',
obj_optimizer=pm.adagrad_window
) # type: pm.MeanField
start = approx.sample(draws=2, include_transformed=True)
trace2 = pm.sample(3e3, njobs=2, init='adapt_diag', start=list(start))
pm.traceplot(trace2, varnames=['gammap', 'beta']);
burnin=1000
gammaplus = trace2['gammap'][burnin:]
beta = trace2['beta'][burnin:]
ylabels = ['Sober', 'Tipsy', 'Drunk']
fig = plt.figure(figsize=(15, 12))
gs = gridspec.GridSpec(3, 3)
for ic in range(Ncond):
ax0 = plt.subplot(gs[0+ic*3])
ax0.hist(npumps_[ic], bins=range(1, 10), rwidth=.8, align='left')
plt.xlabel('Number of Pumps', fontsize=12)
plt.ylabel(ylabels[ic], fontsize=12)
ax1 = plt.subplot(gs[1+ic*3])
my_pdf1 = gaussian_kde(gammaplus[:, ic])
x1=np.linspace(.5, 1.8, 200)
ax1.plot(x1, my_pdf1(x1), 'k', lw=2.5, alpha=0.6) # distribution function
plt.xlim((.5, 1.8))
plt.xlabel(r'$\gamma^+$', fontsize=15)
plt.ylabel('Posterior Density', fontsize=12)
ax2 = plt.subplot(gs[2+ic*3])
my_pdf2 = gaussian_kde(beta[:, ic])
x2=np.linspace(0.1, 1.5, 200)
ax2.plot(x2, my_pdf2(x2), 'k', lw=2.5, alpha=0.6) # distribution function
plt.xlim((0.1, 1.5))
plt.xlabel(r'$\beta$', fontsize=15)
plt.ylabel('Posterior Density', fontsize=12);
"""
Explanation: 16.2 A hierarchical extension of the BART model
$$ \mu_{\gamma^{+}} \sim \text{Uniform}(0,10) $$
$$ \sigma_{\gamma^{+}} \sim \text{Uniform}(0,10) $$
$$ \mu_{\beta} \sim \text{Uniform}(0,10) $$
$$ \sigma_{\beta} \sim \text{Uniform}(0,10) $$
$$ \gamma^{+}_i \sim \text{Gaussian}(\mu_{\gamma^{+}}, 1/\sigma_{\gamma^{+}}^2) $$
$$ \beta_i \sim \text{Gaussian}(\mu_{\beta}, 1/\sigma_{\beta}^2) $$
$$ \omega_i = -\gamma^{+}_i \,/\,\text{log}(1-p) $$
$$ \theta_{ijk} = \frac{1}{1+e^{-\beta_i(k-\omega_i)}} $$
$$ d_{ijk} \sim \text{Bernoulli}(\theta_{ijk}) $$
End of explanation
"""
|
LxMLS/lxmls-toolkit | labs/notebooks/basic_tutorials/python_basics.ipynb | mit | print('Hello World!')
"""
Explanation: Installation
Make sure to have all the required software installed after proceeding.
For installation help, please consult the school guide.
Python Basics
End of explanation
"""
print(3 + 5)
print(3 - 5)
print(3 * 5)
print(3 ** 5)
# Observation: this code gives different results for python2 and python3
# because of the behaviour for the division operator
print(3 / 5.0)
print(3 / 5)
# for compatibility, make sure to use the follow statement
from __future__ import division
print(3 / 5.0)
print(3 / 5)
"""
Explanation: Basic Math Operations
End of explanation
"""
countries = ['Portugal','Spain','United Kingdom']
print(countries)
"""
Explanation: Data Strutures
End of explanation
"""
countries[0:2]
"""
Explanation: Exercise 0.1
Use L[i:j] to return the countries in the Iberian Peninsula.
End of explanation
"""
i = 2
while i < 10:
print(i)
i += 2
for i in range(2,10,2):
print(i)
a=1
while a <= 3:
print(a)
a += 1
"""
Explanation: Loops and Indentation
End of explanation
"""
a=1
while a <= 3:
print(a)
a += 1
"""
Explanation: Exercise 0.2
Can you then predict the output of the following code?:
End of explanation
"""
hour = 16
if hour < 12:
print('Good morning!')
elif hour >= 12 and hour < 20:
print('Good afternoon!')
else:
print('Good evening!')
"""
Explanation: Control Flow
End of explanation
"""
def greet(hour):
if hour < 12:
print('Good morning!')
elif hour >= 12 and hour < 20:
print('Good afternoon!')
else:
print('Good evening!')
"""
Explanation: Functions
End of explanation
"""
greet(50)
greet(-5)
"""
Explanation: Exercise 0.3
Note that the previous code allows the hour to be less than 0 or more than 24. Change the code in order to
indicate that the hour given as input is invalid. Your output should be something like:
greet(50)
Invalid hour: it should be between 0 and 24.
greet(-5)
Invalid hour: it should be between 0 and 24.
End of explanation
"""
%prun greet(22)
"""
Explanation: Profiling
End of explanation
"""
def greet2(hour):
if hour < 12:
print('Good morning!')
elif hour >= 12 and hour < 20:
print('Good afternoon!')
else:
import pdb; pdb.set_trace()
print('Good evening!')
# try: greet2(22)
"""
Explanation: Debugging in Python
End of explanation
"""
raise ValueError("Invalid input value.")
while True:
try:
x = int(input("Please enter a number: "))
break
except ValueError:
print("Oops! That was no valid number. Try again...")
"""
Explanation: Exceptions
for a complete list of built-in exceptions, see http://docs.python.org/2/library/exceptions.html
End of explanation
"""
import numpy as np
np.var?
np.random.normal?
"""
Explanation: Extending basic Functionalities with Modules
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
X = np.linspace(-4, 4, 1000)
plt.plot(X, X**2*np.cos(X**2))
plt.savefig("simple.pdf")
"""
Explanation: Organizing your Code with your own modules
See details in guide
Matplotlib – Plotting in Python
End of explanation
"""
# This will import the numpy library
# and give it the np abbreviation
import numpy as np
# This will import the plotting library
import matplotlib.pyplot as plt
# Linspace will return 1000 points,
# evenly spaced between -4 and +4
X = np.linspace(-4, 4, 1000)
# Y[i] = X[i]**2
Y = X**2
# Plot using a red line ('r')
plt.plot(X, Y, 'r')
# arange returns integers ranging from -4 to +4
# (the upper argument is excluded!)
Ints = np.arange(-4,5)
# We plot these on top of the previous plot
# using blue circles (o means a little circle)
plt.plot(Ints, Ints**2, 'bo')
# You may notice that the plot is tight around the line
# Set the display limits to see better
plt.xlim(-4.5,4.5)
plt.ylim(-1,17)
plt.show()
import matplotlib.pyplot as plt
import numpy as np
X = np.linspace(0, 4 * np.pi, 1000)
C = np.cos(X)
S = np.sin(X)
plt.plot(X, C)
plt.plot(X, S)
plt.show()
"""
Explanation: Exercise 0.5
Try running the following on Jupyter, which will introduce you to some of the basic numeric and plotting
operations.
End of explanation
"""
A = np.arange(100)
# These two lines do exactly the same thing
print(np.mean(A))
print(A.mean())
np.ptp?
"""
Explanation: Exercise 0.6
Run the following example and lookup the ptp function/method (use the ? functionality in Jupyter)
End of explanation
"""
def f(x):
return(x**2)
sum([f(x*1./1000)/1000 for x in range(0,1000)])
"""
Explanation: Exercise 0.7
Consider the following approximation to compute an integral
\begin{equation}
\int_0^1 f(x) dx \approx \sum_{i=0}^{999} \frac{f(i/1000)}{1000}
\end{equation}
Use numpy to implement this for $f(x) = x^2$. You should not need to use any loops. Note that integer division in Python 2.x performs floor division (use floats – e.g. 5.0/2.0 – to obtain rationals). The exact value is 1/3. How close
is the approximation?
End of explanation
"""
import numpy as np
m = 3
n = 2
a = np.zeros([m,n])
print(a)
"""
Explanation: Exercise 0.8
In the rest of the school we will represent both matrices and vectors as numpy arrays. You can create arrays
in different ways, one possible way is to create an array of zeros.
End of explanation
"""
print(a.shape)
print(a.dtype.name)
"""
Explanation: You can check the shape and the data type of your array using the following commands:
End of explanation
"""
a = np.zeros([m,n],dtype=int)
print(a.dtype)
"""
Explanation: This shows you that “a” is an 3*2 array of type float64. By default, arrays contain 64 bit6 floating point numbers. You
can specify the particular array type by using the keyword dtype.
End of explanation
"""
a = np.array([[2,3],[3,4]])
print(a)
"""
Explanation: You can also create arrays from lists of numbers:
End of explanation
"""
a = np.array([[2,3],[3,4]])
b = np.array([[1,1],[1,1]])
a_dim1, a_dim2 = a.shape
b_dim1, b_dim2 = b.shape
c = np.zeros([a_dim1,b_dim2])
for i in range(a_dim1):
for j in range(b_dim2):
for k in range(a_dim2):
c[i,j] += a[i,k]*b[k,j]
print(c)
"""
Explanation: Exercise 0.9
You can multiply two matrices by looping over both indexes and multiplying the individual entries.
End of explanation
"""
d = np.dot(a,b)
print(d)
a = np.array([1,2])
b = np.array([1,1])
np.dot(a,b)
np.outer(a,b)
I = np.eye(2)
x = np.array([2.3, 3.4])
print(I)
print(np.dot(I,x))
A = np.array([ [1, 2], [3, 4] ])
print(A)
print(A.T)
"""
Explanation: This is, however, cumbersome and inefficient. Numpy supports matrix multiplication with the dot function:
End of explanation
"""
|
vbarua/PythonWorkshop | Code/Introduction To Python/2 - Tuples and Lists.ipynb | mit | ('x', 'y', 'z')
"""
Explanation: Tuples and Lists
Tuples
A Python Tuple is an immutable
sequence of fixed sized. They are created using round brackets () with commas to separate the elements.
End of explanation
"""
(1, 'b', 2.5)
"""
Explanation: The elements of a tuple need not have the same type.
End of explanation
"""
# Assigning a tuple to the variable tup
tup = ('first', 'second', 'third')
tup[0] # Extract the first element.
tup[1] # Extract the second element.
tup[2] # Extract the third element.
tup[3] # Extracting a non-existent element.
"""
Explanation: Extracting Elements from Tuples
Given a tuple it is possible to extract the elements from it in various ways. Note that Python uses 0-based indexing, meaning that the first element of a tuple is at position 0, the second element at position 1, and so on.
End of explanation
"""
a, b, c = tup
print(a)
print(b)
print(c)
"""
Explanation: Note that this last example results in an error from attempting to extract an element that doesn't exist. It is also possible to extract the elements of a tuple as follows.
End of explanation
"""
["a", "b", "c"]
"""
Explanation: The immutable aspect of tuples will be explained in a bit.
Lists
A Python List is a mutable sequence. Unlike tuples they don't have a fixed size. They are created using square brackets [] with commas to separate the elements.
End of explanation
"""
[1, 2] + [3, 4] + [5, 6]
"""
Explanation: Lists can be added together to create larger lists.
End of explanation
"""
# Creating a list and assigning it to the variable x.
lis = [1, 2, 3, 4, 5]
lis
lis[0] # Extract the first element.
lis[1] # Extract the second element.
lis[-1] # Extract the last element.
lis[-2] # Extract the second to last element.
"""
Explanation: Extracting Elements from Lists
Given a list it is possible to extract its elements in much the same way you would a tuple.
End of explanation
"""
lis[:3] # Extract the first three elements or equivalently
# extract elements up to (but not including) the fourth element.
lis[3:] # Drop the first three elements and return the rest or equivalently
# extract elements from the fourth element onwards.
lis[1:4] # Extract elements from the second element up to
# (but not including the fifth).
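Slices also accept a third *step* argument, which the examples above don't show:

```python
lis = [1, 2, 3, 4, 5]
print(lis[::2])   # every other element: [1, 3, 5]
print(lis[::-1])  # the list reversed: [5, 4, 3, 2, 1]
```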
"""
Explanation: List Slicing
It's also possible to slice out chunks of a list.
End of explanation
"""
lis
# Adding an element to the end of a list.
lis.append(6)
lis
# Adding a list to the end of a list.
lis.extend([7,8,9])
lis
# Removing an element from the end of a list.
element = lis.pop()
(element, lis)
# Changing an element in a list.
lis[3] = 42
lis
"""
Explanation: Mutability
Lists are mutable, so let's mutate (ie. change) them.
End of explanation
"""
tup[0] = 0
tup.append("fourth")
"""
Explanation: Tuples and Mutability
Compare this behaviour to that of tuples.
End of explanation
"""
list(range(10))
list(range(5, 15))
list(range(4, 24, 2))
"""
Explanation: Trying to add or change an element in a tuple results in an error. Tuples cannot be changed after they are constructed, hence they are immutable unlike lists.
Useful List Functions
range
The range function can be used to generate lists of equidistantly spaced integers in various forms.
End of explanation
"""
x = ["a", "b", "c"]
y = [1 , 2, 3]
list(zip(x, y))  # wrap in list() to see the pairs under Python 3
"""
Explanation: zip
The zip function takes two or more lists and zips them together. This is easier to understand with an example.
End of explanation
"""
zip(x, y, ["Do", "Re", "Mi"])
"""
Explanation: Notice how the first elements of x and y are "zipped" together into a tuple in the new list, as are the second elements, and the third elements.
End of explanation
"""
x
list(enumerate(x))
"""
Explanation: enumerate
The enumerate functions generates a list of of pairs (two element tuples) in which the first element is the index/position of the element and the second element is the element in the original list.
End of explanation
"""
lis
lis_copy = lis
lis_copy.append(9)
lis_copy
"""
Explanation: Mutability Gotchas
End of explanation
"""
lis
"""
Explanation: As expected lis_copy now has 9 at the end of it.
End of explanation
"""
lis_copy = lis[:]
lis_copy.pop()
print(lis)
print(lis_copy)
"""
Explanation: However now lis also has 9 at the end of it. The line
lis_copy = lis
makes lis_copy point to the same underlying list as lis. What's needed here is a copy of the list. There are many ways of copying a list in Python, one of which follows.
End of explanation
"""
|
JanetMatsen/Machine_Learning_CSE_546 | HW2/notebooks/Q-1-2_Neural_Nets_with_a_random_first_layer.ipynb | mit | import numpy as np
import matplotlib as mpl
%matplotlib inline
import time
import pandas as pd
import seaborn as sns
from mnist import MNIST # public package for making arrays out of MINST data.
import sys
sys.path.append('../code/')
from ridge_regression import RidgeMulti
from hyperparameter_explorer import HyperparameterExplorer
from mnist_helpers import mnist_training, mnist_testing
import matplotlib.pyplot as plt
from pylab import rcParams
rcParams['figure.figsize'] = 4, 3
"""
Explanation: Q-1-2_Neural_Nets_with_a_random_first_layer
Janet Matsen
Code notes:
End of explanation
"""
train_X, train_y = mnist_training()
test_X, test_y = mnist_testing()
"""
Explanation: Prepare MNIST training data
End of explanation
"""
hyper_explorer = HyperparameterExplorer(X=train_X, y=train_y,
model=RidgeMulti,
validation_split=0.1, score_name = 'training RMSE',
use_prev_best_weights=False,
test_X = test_X, test_y = test_y)
for lam in [1e10, 1e8, 1e7, 1e6, 1e5, 1e4, 1e3, 1e2,
            1e1, 1e0, 1e-1, 1e-2, 1e-3, 1e-4, 1e-5]:
    hyper_explorer.train_model(lam=lam, verbose=False)
hyper_explorer.summary
hyper_explorer.plot_fits()
t = time.localtime(time.time())
hyper_explorer.plot_fits(filename = "Q-1-1-3_val_and_train_RMSE_{}-{}".format(t.tm_mon, t.tm_mday))
hyper_explorer.plot_fits(ylim=(.6,.7),
filename = "Q-1-1-3_val_and_train_RMSE_zoomed_in{}-{}".format(t.tm_mon, t.tm_mday))
hyper_explorer.best('score')
hyper_explorer.best('summary')
hyper_explorer.best('best score')
hyper_explorer.train_on_whole_training_set(lam=1e7)
hyper_explorer.final_model.results_row()
hyper_explorer.evaluate_test_data()
"""
Explanation: Explore hyperparameters before training model on all of the training data.
End of explanation
"""
|
khrapovs/metrix | notebooks/basic_data_io_analysis.ipynb | mit | import re
import requests
import zipfile
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.formula.api as sm
sns.set_context('talk')
pd.set_option('float_format', '{:6.2f}'.format)
%matplotlib inline
"""
Explanation: Basic data IO and analysis
First, we need to import all the necessary libraries and set up some environment variables.
End of explanation
"""
url = 'http://databank.worldbank.org/data/download/Edstats_csv.zip'
path = '../data/WorldBank/Edstats_csv.zip'
response = requests.get(url)
with open(path, "wb") as file:
file.write(response.content)
"""
Explanation: Load the zip file from the web and save it to your hard drive.
End of explanation
"""
zf = zipfile.ZipFile(path)
files = zf.namelist()
print(files)
"""
Explanation: Show contents of the zip file.
End of explanation
"""
data = pd.read_csv(zf.open(files[0]))
series = pd.read_csv(zf.open(files[2]))
series.rename(columns={series.columns[0]: 'Series Code'}, inplace=True)
data.rename(columns={data.columns[0]: 'Country Name'}, inplace=True)
print(series.columns)
"""
Explanation: Read csv-formatted data directly from the zip file into pandas DataFrame. Also rename some columns for prettier output.
End of explanation
"""
print(series['Topic'].unique())
"""
Explanation: Show unique values of the Topic column.
End of explanation
"""
subset = series.query("Topic == 'Expenditures'")[['Series Code', 'Indicator Name']]
subset = subset[subset['Indicator Name'].str.contains('PPP')]
print(subset.values)
xvar = {'UIS.XUNIT.PPP.1.FSGOV': 'Expenditure per student'}
"""
Explanation: Leave only those rows that have Expenditures in the column Topic. Next, leave only those that contain PPP in the Indicator Name column values. Finally, create a dictionary with a pair of variable key and its meaningful name.
End of explanation
"""
subset = series.query("Topic == 'Attainment'")[['Series Code', 'Indicator Name']]
subset = subset[subset['Indicator Name'].str.contains('(?=.*with primary schooling)(?=.*15)')]
print(subset.values)
yvar = {'BAR.PRM.CMPT.15UP.ZS': 'Pct with schooling'}
"""
Explanation: Do the same for Attainment among Topic values and slightly more involved subset of Indicator Name. Here we require that it contains both strings, with primary schooling and 15.
End of explanation
"""
print(data.columns)
"""
Explanation: Now show all column names in the primary data set.
End of explanation
"""
renames = xvar.copy()
renames.update(yvar)
print(renames)
"""
Explanation: Combine two dictionaries into one.
End of explanation
"""
cols = ['Country Name', 'Indicator Code', '2010']
data_sub = data.loc[data['Indicator Code'].isin(renames.keys()), cols].dropna()
data_sub.replace({'Indicator Code': renames}, inplace=True)
data_sub.set_index(cols[:2], inplace=True)
data_sub = data_sub[cols[-1]].unstack(cols[1]).dropna()
data_sub.columns.name = 'Indicator'
data_sub.index.name = 'Country'
print(data_sub.head())
"""
Explanation: Subset the data to include only three interesting columns that we have found above and only for the year 2010.
End of explanation
"""
data_sub.to_excel('../data/WorldBank/education.xlsx', sheet_name='data')
"""
Explanation: Export data to Excel.
End of explanation
"""
education = pd.read_excel('../data/WorldBank/education.xlsx', sheet_name='data', index_col=0)
print(education.head())
"""
Explanation: Now suppose we already have the data saved in the Excel file. Let's read it from scratch into pandas DataFrame.
End of explanation
"""
education['Expenditure per student (log)'] = np.log(education['Expenditure per student'])
fig = plt.figure(figsize=(8, 6))
sns.regplot(x='Expenditure per student (log)', y='Pct with schooling',
data=education, ax=fig.gca())
plt.savefig('../plots/education.pdf')
plt.show()
"""
Explanation: Let's see how percentage of educated population depends on government expenditures on primary students. Also, save the picture to the pdf file.
End of explanation
"""
formula = 'Q("Pct with schooling") ~ np.log(Q("Expenditure per student"))'
result = sm.ols(formula=formula, data=education).fit()
print(result.summary())
"""
Explanation: To be more precise we can quantify the effect of expenditures on schooling via simple OLS regression.
End of explanation
"""
out = pd.DataFrame({'Parameter': result.params, 't-stat': result.tvalues})
out.to_latex('../tables/education_ols.tex')
print(out)
"""
Explanation: And save the key result to the LaTeX table.
End of explanation
"""
|
jmcs/ecological | tutorial.ipynb | mit | import os
os.environ["INTEGER_LIST"] = "[1, 2, 3, 4, 5]"
os.environ["DICTIONARY"] = "{'key': 'value'}"
os.environ["INTEGER"] = "42"
os.environ["BOOLEAN"] = "False"
os.environ["OVERRIDE_DEFAULT"] = "This is NOT the default value"
"""
Explanation: Ecological Tutorial
Getting Started
Before we start to set some environment variables, note than in a real application this would be set outside of your application.
End of explanation
"""
import ecological
class Configuration(ecological.Config):
integer_list: list
integer: int
dictionary: dict
boolean: bool
with_default: str = "This is the default value"
override_default: str = "This is the default value"
"""
Explanation: Now let's create a configuration class:
End of explanation
"""
print(repr(Configuration.integer_list))
print(type(Configuration.integer_list))
print(repr(Configuration.integer))
print(type(Configuration.integer))
print(repr(Configuration.dictionary))
print(type(Configuration.dictionary))
print(repr(Configuration.boolean))
print(type(Configuration.boolean))
print(repr(Configuration.with_default))
print(type(Configuration.with_default))
print(repr(Configuration.override_default))
print(type(Configuration.override_default))
"""
Explanation: Easy right?
Now that we created the configuration class. Let's look at what's inside:
End of explanation
"""
from typing import List, Dict
class ConfigurationTyping(ecological.Config):
integer_list: List
dictionary: Dict
"""
Explanation: As you can see all the values where cast from str to the expected types, and if a default value is set it will be used if the corresponding environment variable doesn't exist.
Typing Support
Ecological also supports some of the types defined in PEP 484, for example:
End of explanation
"""
print(repr(ConfigurationTyping.integer_list))
print(type(ConfigurationTyping.integer_list))
print(repr(ConfigurationTyping.dictionary))
print(type(ConfigurationTyping.dictionary))
"""
Explanation: As expected the variables were converted to the real types:
End of explanation
"""
os.environ["HOME"] = "/home/myuser/"
os.environ["VALUE"] = "Not Prefixed"
os.environ["CONFIG_HOME"] = "/app/home"
os.environ["CONFIG_VALUE"] = "Prefixed"
class ConfigurationPrefix(ecological.Config, prefix="config"):
home: str
value: str
"""
Explanation: Prefixed Configuration
You can also decide to prefix your application configuration, for example, to avoid collisions:
End of explanation
"""
print(repr(ConfigurationPrefix.home))
print(repr(ConfigurationPrefix.value))
"""
Explanation: In this case the home and value properties will be fetched from the CONFIG_HOME and CONFIG_VALUE environment properties:
End of explanation
"""
os.environ["Integer"] = "42"
def times_2(value, wanted_type):
assert wanted_type is int
return int(value) * 2
class ConfigurationVariable(ecological.Config, prefix="this_is_going_to_be_ignored"):
integer = ecological.Variable("Integer", transform=lambda v, wt: int(v))
integer_x2: int = ecological.Variable("Integer", transform=times_2)
integer_as_str: str = ecological.Variable("Integer", transform=lambda v, wt: v)
boolean: bool = ecological.Variable("404", default=False)
"""
Explanation: Fine-grained control
You can control how the configuration properties are set by providing a ecological.Variable instance as the default value.
ecological.Variable receives the following parameters:
variable_name (optional) - exact name of the environment variable that will be used.
default (optional) - default value for the property if it isn't set.
transform (optional) - function that converts the string in the environment to the value and type you expect in your application. The default transform function will try to cast the string to the annotation type of the property.
Transformation function
The transformation function receive two parameters, a string representation with the raw value, and a wanted_type with the value of the annotation (usually, but not necessarily a type).
End of explanation
"""
print(repr(ConfigurationVariable.integer))
print(repr(ConfigurationVariable.integer_x2))
print(repr(ConfigurationVariable.integer_as_str))
"""
Explanation: integer, integer_x2 and integer_as_str will use the same enviroment variable but return different values:
End of explanation
"""
print(repr(ConfigurationVariable.boolean))
"""
Explanation: Because the environment variable 404 is not set, boolean will have the default value:
End of explanation
"""
os.environ["INTEGER"] = "42"
os.environ["NESTED_BOOLEAN"] = "True"
class ConfigurationNested(ecological.Config):
integer: int
class Nested(ecological.Config, prefix='nested'):
boolean: bool
"""
Explanation: Nested Configuration
ecological.Config also supports nested configurations, for example:
End of explanation
"""
print(repr(ConfigurationNested.integer))
print(repr(ConfigurationNested.Nested.boolean))
"""
Explanation: This way you can group related configuration properties hierarchically:
End of explanation
"""
|
damienstanton/nanodegree | p1_lessons/5 - Hough Transform.ipynb | mit | import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
# convert to grayscale and smooth with a Gaussian
img = mpimg.imread('testimg.jpg')
gray_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
kernel_size = 5
blurred = cv2.GaussianBlur(gray_img, (kernel_size, kernel_size), 0)
# edge detect with Canny
low = 50
high = 150
edges = cv2.Canny(blurred, low, high)
# build lines with Hough transform
rho = 1
theta = np.pi/180
threshold = 1
min_line_length = 10
max_line_gap = 1
line_img = np.copy(img) * 0 # blank of same dim as our img
lines = cv2.HoughLinesP(edges, rho, theta,
threshold,np.array([]), min_line_length, max_line_gap)
# draw!
for line in lines:
for x1, y1, x2, y2 in line:
cv2.line(line_img, (x1,y1), (x2,y2), (255,0,0), 10)
# colorized binary image
colorized = np.dstack((edges, edges, edges))
# draw the colorized lines
combined = cv2.addWeighted(colorized, 0.8, line_img, 1, 0)
plt.imshow(combined)
plt.show()
"""
Explanation: Hough Transform
End of explanation
"""
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
# convert to grayscale and smooth with a Gaussian
img = mpimg.imread('testimg.jpg')
gray_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
kernel_size = 5
blurred = cv2.GaussianBlur(gray_img, (kernel_size, kernel_size), 0)
# edge detect with Canny
low = 50
high = 150
edges = cv2.Canny(blurred, low, high)
# build masked edge
mask = np.zeros_like(edges)
mask_ignored = 255
imshape = img.shape
# TODO turn these knobs until the mask selects the lane area
#def draw(n1, n2, n3, n4):
vertices = np.array([[(275,imshape[0]),(650, 200),
(imshape[1], 1000),
(imshape[1],imshape[0])]], dtype=np.int32)
cv2.fillPoly(mask, vertices, mask_ignored)
masked_edges = cv2.bitwise_and(edges, mask)
# build lines with Hough transform
rho = 1
theta = np.pi/180
threshold = 1
min_line_length = 5
max_line_gap = 1
line_img = np.copy(img) * 0 # blank of same dim as our img
lines = cv2.HoughLinesP(masked_edges, rho, theta,
threshold, np.array([]),
min_line_length, max_line_gap)
# draw!
for line in lines:
for x1, y1, x2, y2 in line:
cv2.line(line_img, (x1,y1), (x2,y2), (255,0,0), 10)
# colorized binary image
colorized = np.dstack((edges, edges, edges))
# draw the colorized lines
combined = cv2.addWeighted(colorized, 0.8, line_img, 1, 0)
plt.imshow(combined)
plt.show()
"""
Explanation: Hough transform combined with a polygonal mask
Notice that lines are more well-defined
End of explanation
"""
|
ocefpaf/intro_python_notebooks | 02-NumPy.ipynb | mit | a = [0.1, 0.25, 0.03]
b = [400, 5000, 6e4]
c = a + b
c
[e1+e2 for e1, e2 in zip(a, b)]
import math
math.tanh(c)
[math.tanh(e) for e in c]
"""
Explanation: Aula 02 - NumPy
Objetivos
Apresentar o objeto array de N-dimensões
Guia de funções sofisticadas (broadcasting)
Tour nos sub-módulos para: Álgebra Linear, transformada de Fourier, números aleatórios, etc
Uso para integrar código C/C++ e Fortran
End of explanation
"""
import numpy as np
"""
Explanation: Python é uma linguagem excelente para "propósitos gerais", com uma sintaxe
clara e legível, tipos de dados (data types) funcionais (strings, lists, sets,
dictionaries, etc) e uma biblioteca padrão vasta.
Entretanto não é uma linguagem desenhada especificamente para matemática e
computação científica. Não há forma fácil de representar conjuntos de dados
multidimensionais nem ferramentas para álgebra linear e manipulação de
matrizes.
(Os blocos essenciais para quase todos os problemas de computação
científica.)
É por essas razões que o NumPy existe. Em geral, importamos o NumPy como np:
End of explanation
"""
lst = [10, 20, 30, 40]
arr = np.array([10, 20, 30, 40])
print(lst)
print(arr)
print(lst[0], arr[0])
print(lst[-1], arr[-1])
print(lst[2:], arr[2:])
"""
Explanation: NumPy, em seu núcleo, fornece apenas um objeto array.
<img height="300" src="files/anatomyarray.png" >
End of explanation
"""
lst[-1] = 'Um string'
lst
arr[-1] = 'Um string'
arr
arr.dtype
arr[-1] = 1.234
arr
"""
Explanation: A diferença entre list e array é que a arrays são homógenas!
End of explanation
"""
a = [0.1, 0.25, 0.03]
b = [400, 5000, 6e4]
a = np.array(a)
b = np.array(b)
c = a + b
c
np.tanh([a, b])
a * b
np.dot(a, b)
np.matrix(a) * np.matrix(b).T
"""
Explanation: Voltando às nossas listas a e b
End of explanation
"""
np.array(255, dtype=np.uint8)
float_info = '{finfo.dtype}: max={finfo.max:<18}, approx decimal precision={finfo.precision};'
print(float_info.format(finfo=np.finfo(np.float32)))
print(float_info.format(finfo=np.finfo(np.float64)))
"""
Explanation: Data types
bool
uint8
int (Em Python2 é machine dependent)
int8
int32
int64
float (Sempre é machine dependent Matlab double)
float32
float64
(http://docs.scipy.org/doc/numpy/user/basics.types.html.)
Curiosidades...
End of explanation
"""
np.zeros(3, dtype=int)
np.zeros(5, dtype=float)
np.ones(5, dtype=complex)
a = np.empty([3, 3])
a
a.fill(np.NaN)
a
"""
Explanation: https://en.wikipedia.org/wiki/Floating_point
Criando arrays:
End of explanation
"""
a = np.array([[1, 2, 3], [1, 2, 3]])
a
print('Tipo de dados : {}'.format(a.dtype))
print('Número total de elementos : {}'.format(a.size))
print('Número de dimensões : {}'.format(a.ndim))
print('Forma : {}'.format(a.shape))
print('Memória em bytes : {}'.format(a.nbytes))
"""
Explanation: Métodos das arrays
End of explanation
"""
print('Máximo e mínimo : {} e {}'.format(a.min(), a.max()))
print('Soma e produto de todos os elementos : {} e {}'.format(a.sum(), a.prod()))
print('Média e desvio padrão : {} e {}'.format(a.mean(), a.std()))
a.mean(axis=0)
a.mean(axis=1)
"""
Explanation: Outros métodos matemáticos/estatísticos úteis:
End of explanation
"""
np.zeros(a.shape) == np.zeros_like(a)
np.arange(1, 2, 0.2)
a = np.linspace(1, 10, 5) # Olhe também `np.logspace`
a
"""
Explanation: Métodos que auxiliam na criação de arrays.
End of explanation
"""
np.random.randn(5)
"""
Explanation: 5 amostras aleatórias tiradas da distribuição normal de média 0 e variância 1.
End of explanation
"""
np.random.normal(10, 3, 5)
"""
Explanation: 5 amostras aleatórias tiradas da distribuição normal de média 10 e variância 3.
End of explanation
"""
mask = np.where(a <= 5) # Para quem ainda vive em MatlabTM world.
mask
mask = a <= 5 # Melhor não?
mask
a[mask]
"""
Explanation: Máscara condicional
End of explanation
"""
import numpy.ma as ma
ma.masked_array(a, mask)
"""
Explanation: Temos também as masked_arrays
End of explanation
"""
a = np.random.rand(10)
b = np.linspace(0, 10, 10)
np.save('arquivo_a', a)
np.save('arquivo_b', b)
np.savez('arquivo_ab', a=a, b=b)
%%bash
ls *.np*
c = np.load('arquivo_ab.npz')
c.files
"""
Explanation: Salvando e carregando novamente os dados:
np.save
np.savez
np.load
End of explanation
"""
c['b'] // c['a']
a = np.array([1, 2, 3])
a **= 2
a
"""
Explanation: Operações: +, -, , /, //, *, %
End of explanation
"""
np.loadtxt("./data/dados_pirata.csv", delimiter=',')
!head -3 ./data/dados_pirata.csv
data = np.loadtxt("./data/dados_pirata.csv", skiprows=1, usecols=range(2, 16), delimiter=',')
data.shape, data.dtype
data[data == -99999.] = np.NaN
data
data.max(), data.min()
np.nanmax(data), np.nanmin(data)
np.nanargmax(data), np.nanargmin(data)
np.unravel_index(np.nanargmax(data), data.shape), np.unravel_index(np.nanargmin(data), data.shape)
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.plot(data[:, 0])
ax.plot(data[:, -1])
"""
Explanation: Manipulando dados reais
Vamos utilizar os dados do programa de observação do oceano Pirata.
http://www.goosbrasil.org/pirata/dados/
End of explanation
"""
plt.pcolormesh(data)
import numpy.ma as ma
data = ma.masked_invalid(data)
plt.pcolormesh(np.flipud(data.T))
plt.colorbar()
data.max(), data.min(), data.mean()
z = [1, 10, 100, 120, 13, 140, 180, 20, 300, 40, 5, 500, 60, 80]
fig, ax = plt.subplots()
ax.plot(data[42, :], z, 'ko')
ax.invert_yaxis()
"""
Explanation: Dados com máscara (Masked arrays)
End of explanation
"""
|
KUrushi/knocks | 02/chapter2.ipynb | mit | hightemp = "".join(map(str, [i.replace('\t', ' ') for i in open('hightemp.txt', 'r')]))
print(hightemp)
"""
Explanation: 11. タブをスペースに置換
タブ1文字につきスペース1文字に置換せよ.確認にはsedコマンド,trコマンド,もしくはexpandコマンドを用いよ.
End of explanation
"""
col1 = open('col1.txt', 'w')
col2 = open('col2.txt', 'w')
hightemp = [i.replace('\t', ' ').split() for i in open('hightemp.txt', 'r')]
col1.write("\n".join(map(str, [i[0] for i in hightemp])))
col1.close()
col2.write("\n".join(map(str, [i[1] for i in hightemp])))
col2.close()
"""
Explanation: 12. 1列目をcol1.txtに,2列目をcol2.txtに保存
各行の1列目だけを抜き出したものをcol1.txtに,
2列目だけを抜き出したものをcol2.txtとしてファイルに保存せよ.
確認にはcutコマンドを用いよ.
End of explanation
"""
col3 = open('col3.txt', 'w')
f1 = [i.rstrip('\n') for i in open('col1.txt', 'r')]
f2 = [i.rstrip('\n') for i in open('col2.txt', 'r')]
col3.write("\n".join(i + '\t' + j for i, j in zip(f1, f2)))
col3.close()
"""
Explanation: 13. col1.txtとcol2.txtをマージ
12で作ったcol1.txtとcol2.txtを結合し,
元のファイルの1列目と2列目をタブ区切りで並べたテキストファイルを作成せよ.
確認にはpasteコマンドを用いよ.
End of explanation
"""
def display_nline(n, filename):
return "".join(map(str, [i for i in open(filename, 'r')][:n]))
display = display_nline(5, "col3.txt")
print(display)
"""
Explanation: 14. 先頭からN行を出力
自然数Nをコマンドライン引数などの手段で受け取り,
入力のうち先頭のN行だけを表示せよ.
確認にはheadコマンドを用いよ.
End of explanation
"""
def display_back_nline(n, filename):
    return "".join([i for i in open(filename, 'r')][-n:])
display = display_back_nline(5, "col3.txt")
print(display)
"""
Explanation: 15. Output the last N lines
Receive a natural number N via a command-line argument or similar means, and display only the last N lines of the input.
Verify the result with the tail command.
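The tail check, sketched on the same kind of hypothetical sample file:

```shell
printf 'line1\nline2\nline3\nline4\nline5\nline6\n' > sample.txt
tail -n 5 sample.txt    # last 5 lines
```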
End of explanation
"""
def split_file(n, filename):
    line = [i.strip('\n') for i in open(filename, 'r')]
    length = len(line)
    size = -(-length // n)  # ceiling division, so the file splits into at most n chunks
    return [line[i:i+size] for i in range(0, length, size)]
split_file(2, "col1.txt")
"""
Explanation: 16. Split a file into N pieces
Receive a natural number N via a command-line argument or similar means, and split the input file line-wise into N pieces.
Achieve the same result with the split command.
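The split command version can be sketched like this on a hypothetical four-line file (split names its output pieces with alphabetic suffixes):

```shell
printf 'line1\nline2\nline3\nline4\n' > sample.txt
# 4 lines / 2 pieces -> 2 lines per piece; produces piece_aa and piece_ab
split -l 2 sample.txt piece_
ls piece_*
```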
End of explanation
"""
def first_col(filename):
    # take the first tab-separated column of each line
    return set(i.split('\t')[0] for i in open(filename, 'r'))
print(first_col('hightemp.txt'))
"""
Explanation: 17. Distinct strings in the first column
Find the set of distinct strings (the kinds of strings) in the first column.
Verify the result with the sort and uniq commands.
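The sort/uniq check, sketched on a small hypothetical sample:

```shell
printf 'kochi\tx\nsaitama\ty\nkochi\tz\n' > sample.txt
cut -f 1 sample.txt | sort | uniq   # distinct first-column strings
```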
End of explanation
"""
%%timeit
def column_sort(sort_key, filename):
    # sort numerically on the given (1-based) column, in descending order
    return sorted([i.split() for i in open(filename, 'r')],
                  key=lambda row: float(row[sort_key - 1]), reverse=True)
column_sort(3, 'hightemp.txt')
"""
Explanation: 18. Sort each line in descending order of the numeric value in the third column
Sort the lines in reverse numerical order of their third column
(note: rearrange the lines without modifying their contents).
Verify with the sort command (the result need not match the command's output exactly).
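The sort check can be sketched like this on made-up three-column data:

```shell
printf 'a\tx\t3\nb\ty\t10\nc\tz\t7\n' > sample.txt
sort -k 3,3 -n -r sample.txt   # numeric sort on column 3, descending
```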
End of explanation
"""
%%timeit
from operator import itemgetter
def frequency(filename):
    first_col = [i.split('\t')[0] for i in open(filename, 'r')]
    dictionary = set([(i, first_col.count(i)) for i in first_col])
    return sorted(dictionary, key=itemgetter(1), reverse=True)
frequency('hightemp.txt')
"""
Explanation: 19. Find the frequency of the strings in the first column and sort them in descending order of frequency
Compute how often each string appears in the first column, and display the strings in descending order of frequency.
Verify with the cut, uniq, and sort commands.
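The full cut/sort/uniq pipeline can be sketched on a small hypothetical sample:

```shell
printf 'kochi\tx\nsaitama\ty\nkochi\tz\n' > sample.txt
cut -f 1 sample.txt | sort | uniq -c | sort -n -r   # counts, most frequent first
```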
End of explanation
"""
|
smorton2/think-stats | code/chap02ex.ipynb | gpl-3.0 | from __future__ import print_function, division
%matplotlib inline
import numpy as np
import nsfg
import first
"""
Explanation: Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
"""
t = [1, 2, 2, 3, 5]
"""
Explanation: Given a list of values, there are several ways to count the frequency of each value.
End of explanation
"""
hist = {}
for x in t:
hist[x] = hist.get(x, 0) + 1
hist
"""
Explanation: You can use a Python dictionary:
End of explanation
"""
from collections import Counter
counter = Counter(t)
counter
"""
Explanation: You can use a Counter (which is a dictionary with additional methods):
End of explanation
"""
import thinkstats2
hist = thinkstats2.Hist([1, 2, 2, 3, 5])
hist
"""
Explanation: Or you can use the Hist object provided by thinkstats2:
End of explanation
"""
hist.Freq(2)
"""
Explanation: Hist provides Freq, which looks up the frequency of a value.
End of explanation
"""
hist[2]
"""
Explanation: You can also use the bracket operator, which does the same thing.
End of explanation
"""
hist[4]
"""
Explanation: If the value does not appear, it has frequency 0.
End of explanation
"""
hist.Values()
"""
Explanation: The Values method returns the values:
End of explanation
"""
for val in sorted(hist.Values()):
print(val, hist[val])
"""
Explanation: So you can iterate the values and their frequencies like this:
End of explanation
"""
for val, freq in hist.Items():
print(val, freq)
"""
Explanation: Or you can use the Items method:
End of explanation
"""
import thinkplot
thinkplot.Hist(hist)
thinkplot.Config(xlabel='value', ylabel='frequency')
"""
Explanation: thinkplot is a wrapper for matplotlib that provides functions that work with the objects in thinkstats2.
For example Hist plots the values and their frequencies as a bar graph.
Config takes parameters that label the x and y axes, among other things.
End of explanation
"""
preg = nsfg.ReadFemPreg()
live = preg[preg.outcome == 1]
"""
Explanation: As an example, I'll replicate some of the figures from the book.
First, I'll load the data from the pregnancy file and select the records for live births.
End of explanation
"""
hist = thinkstats2.Hist(live.birthwgt_lb, label='birthwgt_lb')
thinkplot.Hist(hist)
thinkplot.Config(xlabel='Birth weight (pounds)', ylabel='Count')
"""
Explanation: Here's the histogram of birth weights in pounds. Notice that Hist works with anything iterable, including a Pandas Series. The label attribute appears in the legend when you plot the Hist.
End of explanation
"""
ages = np.floor(live.agepreg)
hist = thinkstats2.Hist(ages, label='agepreg')
thinkplot.Hist(hist)
thinkplot.Config(xlabel='years', ylabel='Count')
"""
Explanation: Before plotting the ages, I'll apply floor to round down:
End of explanation
"""
# Solution goes here
length = live.prglngth
hist = thinkstats2.Hist(length, label='prglngth')
thinkplot.Hist(hist)
thinkplot.Config(xlabel='Weeks', ylabel='Count')
"""
Explanation: As an exercise, plot the histogram of pregnancy lengths (column prglngth).
End of explanation
"""
for weeks, freq in hist.Smallest(10):
print(weeks, freq)
"""
Explanation: Hist provides Smallest, which selects the lowest values and their frequencies.
End of explanation
"""
# Solution goes here
for weeks, freq in hist.Largest(10):
print(weeks, freq)
"""
Explanation: Use Largest to display the longest pregnancy lengths.
End of explanation
"""
firsts = live[live.birthord == 1]
others = live[live.birthord != 1]
first_hist = thinkstats2.Hist(firsts.prglngth, label='first')
other_hist = thinkstats2.Hist(others.prglngth, label='other')
"""
Explanation: From live births, we can select first babies and others using birthord, then compute histograms of pregnancy length for the two groups.
End of explanation
"""
width = 0.45
thinkplot.PrePlot(2)
thinkplot.Hist(first_hist, align='right', width=width)
thinkplot.Hist(other_hist, align='left', width=width)
thinkplot.Config(xlabel='weeks', ylabel='Count', xlim=[27, 46])
"""
Explanation: We can use width and align to plot two histograms side-by-side.
End of explanation
"""
mean = live.prglngth.mean()
var = live.prglngth.var()
std = live.prglngth.std()
"""
Explanation: Series provides methods to compute summary statistics:
End of explanation
"""
mean, std
"""
Explanation: Here are the mean and standard deviation:
End of explanation
"""
# Solution goes here
std == np.sqrt(var)
"""
Explanation: As an exercise, confirm that std is the square root of var:
End of explanation
"""
firsts.prglngth.mean(), others.prglngth.mean()
"""
Explanation: Here are the mean pregnancy lengths for first babies and others:
End of explanation
"""
firsts.prglngth.mean() - others.prglngth.mean()
"""
Explanation: And here's the difference (in weeks):
End of explanation
"""
def CohenEffectSize(group1, group2):
"""Computes Cohen's effect size for two groups.
group1: Series or DataFrame
group2: Series or DataFrame
returns: float if the arguments are Series;
Series if the arguments are DataFrames
"""
diff = group1.mean() - group2.mean()
var1 = group1.var()
var2 = group2.var()
n1, n2 = len(group1), len(group2)
pooled_var = (n1 * var1 + n2 * var2) / (n1 + n2)
d = diff / np.sqrt(pooled_var)
return d
"""
Explanation: This function computes the Cohen effect size, which is the difference in means expressed in number of standard deviations:
End of explanation
"""
# Solution goes here
CohenEffectSize(firsts.prglngth, others.prglngth)
"""
Explanation: Compute the Cohen effect size for the difference in pregnancy length for first babies and others.
End of explanation
"""
# Solution goes here
firsts_hist=thinkstats2.Hist(np.round(firsts.totalwgt_lb), label='firsts')
others_hist=thinkstats2.Hist(np.round(others.totalwgt_lb), label='others')
print('mean')
print('firsts:', firsts.totalwgt_lb.mean())
print('others:', others.totalwgt_lb.mean())
print('')
print('stdev')
print('firsts:', firsts.totalwgt_lb.std())
print('others:', others.totalwgt_lb.std())
print('')
print('median')
print('firsts:', firsts.totalwgt_lb.median())
print('others:', others.totalwgt_lb.median())
width=0.45
thinkplot.PrePlot(2)
thinkplot.Hist(firsts_hist, align='left', width=width)
thinkplot.Hist(others_hist, align='right', width=width)
thinkplot.Config(xlabel='pounds', ylabel='Count', xlim=(0, 15))
# Solution goes here
CohenEffectSize(firsts.totalwgt_lb, others.totalwgt_lb)
"""
Explanation: Exercises
Using the variable totalwgt_lb, investigate whether first babies are lighter or heavier than others.
Compute Cohen’s effect size to quantify the difference between the groups. How does it compare to the difference in pregnancy length?
End of explanation
"""
resp = nsfg.ReadFemResp()
"""
Explanation: For the next few exercises, we'll load the respondent file:
End of explanation
"""
# Solution goes here
inc_hist = thinkstats2.Hist(resp.totincr)
thinkplot.Hist(inc_hist)
"""
Explanation: Make a histogram of <tt>totincr</tt> the total income for the respondent's family. To interpret the codes see the codebook.
End of explanation
"""
# Solution goes here
age_hist = thinkstats2.Hist(resp.age_r)
thinkplot.Hist(age_hist)
"""
Explanation: Make a histogram of <tt>age_r</tt>, the respondent's age at the time of interview.
End of explanation
"""
# Solution goes here
fmhh_hist = thinkstats2.Hist(resp.numfmhh)
thinkplot.Hist(fmhh_hist)
"""
Explanation: Make a histogram of <tt>numfmhh</tt>, the number of people in the respondent's household.
End of explanation
"""
# Solution goes here
parity_hist = thinkstats2.Hist(resp.parity)
thinkplot.Hist(parity_hist)
"""
Explanation: Make a histogram of <tt>parity</tt>, the number of children borne by the respondent. How would you describe this distribution?
End of explanation
"""
# Solution goes here
parity_hist.Largest(10)
"""
Explanation: This data is right-skewed, with a long tail toward high parities. Women are about as likely to have 1 as 2 children, though parity drops off significantly after 2 children.
Use Hist.Largest to find the largest values of <tt>parity</tt>.
End of explanation
"""
# Solution goes here
hinc = resp[resp['totincr'] == 14]
other = resp[resp['totincr'] < 14]
hinc_hist = thinkstats2.Hist(hinc.parity)
thinkplot.Hist(hinc_hist)
"""
Explanation: Let's investigate whether people with higher income have higher parity. Keep in mind that in this study, we are observing different people at different times during their lives, so this data is not the best choice for answering this question. But for now let's take it at face value.
Use <tt>totincr</tt> to select the respondents with the highest income (level 14). Plot the histogram of <tt>parity</tt> for just the high income respondents.
End of explanation
"""
# Solution goes here
hinc_hist.Largest(5)
"""
Explanation: Find the largest parities for high income respondents.
End of explanation
"""
# Solution goes here
print('mean parity, high income:', hinc.parity.mean())
print('mean parity, other:', other.parity.mean())
"""
Explanation: Compare the mean <tt>parity</tt> for high income respondents and others.
End of explanation
"""
# Solution goes here
CohenEffectSize(hinc.parity, other.parity)
"""
Explanation: Compute the Cohen effect size for this difference. How does it compare with the difference in pregnancy length for first babies and others?
End of explanation
"""
|
flohorovicic/pynoddy | docs/notebooks/9-Topology.ipynb | gpl-2.0 | from IPython.core.display import HTML
css_file = 'pynoddy.css'
HTML(open(css_file, "r").read())
# Basic settings
import sys, os
import subprocess
# Now import pynoddy
import pynoddy
%matplotlib inline
# determine path of repository to set paths correctly below
repo_path = os.path.realpath('../..')
"""
Explanation: Simulation of a Noddy history and analysis of its voxel topology
Example of how the module can be used to run Noddy simulations and analyse the output.
End of explanation
"""
# Change to sandbox directory to store results
os.chdir(os.path.join(repo_path, 'sandbox'))
# Path to example directory in this repository
example_directory = os.path.join(repo_path,'examples')
# Compute noddy model for history file
history_file = 'strike_slip.his'
history = os.path.join(example_directory, history_file)
nfiles = 1
files = '_'+str(nfiles).zfill(4)
print "files", files
root_name = 'noddy_out'
output_name = root_name + files
print root_name
print output_name
# call Noddy
# NOTE: Make sure that the noddy executable is accessible in the system!!
print subprocess.Popen(['noddy.exe', history, output_name, 'TOPOLOGY'],
shell=False, stderr=subprocess.PIPE,
stdout=subprocess.PIPE).stdout.read()
print subprocess.Popen(['topology.exe', root_name, files],
shell=False, stderr=subprocess.PIPE,
stdout=subprocess.PIPE).stdout.read()
"""
Explanation: Compute the model
The simplest way to perform the Noddy simulation through Python is simply to call the executable. One way that should be fairly platform independent is to use Python's own subprocess module:
End of explanation
"""
pynoddy.compute_model(history, output_name)
pynoddy.compute_topology(root_name, files)
"""
Explanation: For convenience, the model computations are wrapped into a Python function in pynoddy:
End of explanation
"""
from matplotlib import pyplot as plt
import matplotlib.image as mpimg
import numpy as np
N1 = pynoddy.NoddyOutput(output_name)
AM= pynoddy.NoddyTopology(output_name)
am_name=root_name +'_uam.bin'
print am_name
print AM.maxlitho
image = np.empty((int(AM.maxlitho),int(AM.maxlitho)), np.uint8)
image.data[:] = open(am_name, 'rb').read()  # read the binary adjacency matrix
cmap=plt.get_cmap('Paired')
cmap.set_under('white') # Color for values less than vmin
plt.imshow(image, interpolation="nearest", vmin=1, cmap=cmap)
plt.show()
"""
Explanation: Note: The Noddy call from Python is, to date, calling Noddy through the subprocess function. In a future implementation, this call could be substituted with a full wrapper for the C-functions written in Python. Therefore, using the member function compute_model is not only easier, but also the more "future-proof" way to compute the Noddy model.
Loading Topology output files
Here we load the binary adjacency matrix for one topology calculation and display it as an image
End of explanation
"""
|
BillyLjm/CS100.1x.__CS190.1x | lab4_machine_learning_student.ipynb | mit | import sys
import os
from test_helper import Test
baseDir = os.path.join('data')
inputPath = os.path.join('cs100', 'lab4', 'small')
ratingsFilename = os.path.join(baseDir, inputPath, 'ratings.dat.gz')
moviesFilename = os.path.join(baseDir, inputPath, 'movies.dat')
"""
Explanation: version 1.0.1
Introduction to Machine Learning with Apache Spark
Predicting Movie Ratings
One of the most common uses of big data is to predict what users want. This allows Google to show you relevant ads, Amazon to recommend relevant products, and Netflix to recommend movies that you might like. This lab will demonstrate how we can use Apache Spark to recommend movies to a user. We will start with some basic techniques, and then use the Spark MLlib library's Alternating Least Squares method to make more sophisticated predictions.
For this lab, we will use a subset dataset of 500,000 ratings we have included for you into your VM (and on Databricks) from the movielens 10M stable benchmark rating dataset. However, the same code you write will work for the full dataset, or their latest dataset of 21 million ratings.
In this lab:
Part 0: Preliminaries
Part 1: Basic Recommendations
Part 2: Collaborative Filtering
Part 3: Predictions for Yourself
As mentioned during the first Learning Spark lab, think carefully before calling collect() on any datasets. When you are using a small dataset, calling collect() and then using Python to get a sense for the data locally (in the driver program) will work fine, but this will not work when you are using a large dataset that doesn't fit in memory on one machine. Solutions that call collect() and do local analysis that could have been done with Spark will likely fail in the autograder and not receive full credit.
Code
This assignment can be completed using basic Python and pySpark Transformations and Actions. Libraries other than math are not necessary. With the exception of the ML functions that we introduce in this assignment, you should be able to complete all parts of this homework using only the Spark functions you have used in prior lab exercises (although you are welcome to use more features of Spark if you like!).
End of explanation
"""
numPartitions = 2
rawRatings = sc.textFile(ratingsFilename).repartition(numPartitions)
rawMovies = sc.textFile(moviesFilename)
def get_ratings_tuple(entry):
""" Parse a line in the ratings dataset
Args:
entry (str): a line in the ratings dataset in the form of UserID::MovieID::Rating::Timestamp
Returns:
tuple: (UserID, MovieID, Rating)
"""
items = entry.split('::')
return int(items[0]), int(items[1]), float(items[2])
def get_movie_tuple(entry):
""" Parse a line in the movies dataset
Args:
entry (str): a line in the movies dataset in the form of MovieID::Title::Genres
Returns:
tuple: (MovieID, Title)
"""
items = entry.split('::')
return int(items[0]), items[1]
ratingsRDD = rawRatings.map(get_ratings_tuple).cache()
moviesRDD = rawMovies.map(get_movie_tuple).cache()
ratingsCount = ratingsRDD.count()
moviesCount = moviesRDD.count()
print 'There are %s ratings and %s movies in the datasets' % (ratingsCount, moviesCount)
print 'Ratings: %s' % ratingsRDD.take(3)
print 'Movies: %s' % moviesRDD.take(3)
assert ratingsCount == 487650
assert moviesCount == 3883
assert moviesRDD.filter(lambda (id, title): title == 'Toy Story (1995)').count() == 1
assert (ratingsRDD.takeOrdered(1, key=lambda (user, movie, rating): movie)
== [(1, 1, 5.0)])
"""
Explanation: Part 0: Preliminaries
We read in each of the files and create an RDD consisting of parsed lines.
Each line in the ratings dataset (ratings.dat.gz) is formatted as:
UserID::MovieID::Rating::Timestamp
Each line in the movies (movies.dat) dataset is formatted as:
MovieID::Title::Genres
The Genres field has the format
Genres1|Genres2|Genres3|...
The format of these files is uniform and simple, so we can use Python split() to parse their lines.
Parsing the two files yields two RDDS
For each line in the ratings dataset, we create a tuple of (UserID, MovieID, Rating). We drop the timestamp because we do not need it for this exercise.
For each line in the movies dataset, we create a tuple of (MovieID, Title). We drop the Genres because we do not need them for this exercise.
End of explanation
"""
tmp1 = [(1, u'alpha'), (2, u'alpha'), (2, u'beta'), (3, u'alpha'), (1, u'epsilon'), (1, u'delta')]
tmp2 = [(1, u'delta'), (2, u'alpha'), (2, u'beta'), (3, u'alpha'), (1, u'epsilon'), (1, u'alpha')]
oneRDD = sc.parallelize(tmp1)
twoRDD = sc.parallelize(tmp2)
oneSorted = oneRDD.sortByKey(True).collect()
twoSorted = twoRDD.sortByKey(True).collect()
print oneSorted
print twoSorted
assert set(oneSorted) == set(twoSorted) # Note that both lists have the same elements
assert twoSorted[0][0] < twoSorted[-1][0] # Check that it is sorted by the keys
assert oneSorted[0:2] != twoSorted[0:2] # Note that the subset consisting of the first two elements does not match
"""
Explanation: In this lab we will be examining subsets of the tuples we create (e.g., the top rated movies by users). Whenever we examine only a subset of a large dataset, there is the potential that the result will depend on the order we perform operations, such as joins, or how the data is partitioned across the workers. What we want to guarantee is that we always see the same results for a subset, independent of how we manipulate or store the data.
We can do that by sorting before we examine a subset. You might think that the most obvious choice when dealing with an RDD of tuples would be to use the sortByKey() method. However this choice is problematic, as we can still end up with different results if the key is not unique.
Note: It is important to use the unicode type instead of the string type as the titles are in unicode characters.
Consider the following example, and note that while the sets are equal, the printed lists are usually in different order by value, although they may randomly match up from time to time.
You can try running this multiple times. If the last assertion fails, don't worry about it: that was just the luck of the draw. And note that in some environments the results may be more deterministic.
End of explanation
"""
def sortFunction(tuple):
""" Construct the sort string (does not perform actual sorting)
Args:
tuple: (rating, MovieName)
Returns:
sortString: the value to sort with, 'rating MovieName'
"""
key = unicode('%.3f' % tuple[0])
value = tuple[1]
return (key + ' ' + value)
print oneRDD.sortBy(sortFunction, True).collect()
print twoRDD.sortBy(sortFunction, True).collect()
"""
Explanation: Even though the two lists contain identical tuples, the difference in ordering sometimes yields a different ordering for the sorted RDD (try running the cell repeatedly and see if the results change or the assertion fails). If we only examined the first two elements of the RDD (e.g., using take(2)), then we would observe different answers - that is a really bad outcome as we want identical input data to always yield identical output. A better technique is to sort the RDD by both the key and value, which we can do by combining the key and value into a single string and then sorting on that string. Since the key is an integer and the value is a unicode string, we can use a function to combine them into a single unicode string (e.g., unicode('%.3f' % key) + ' ' + value) before sorting the RDD using sortBy().
End of explanation
"""
oneSorted1 = oneRDD.takeOrdered(oneRDD.count(),key=sortFunction)
twoSorted1 = twoRDD.takeOrdered(twoRDD.count(),key=sortFunction)
print 'one is %s' % oneSorted1
print 'two is %s' % twoSorted1
assert oneSorted1 == twoSorted1
"""
Explanation: If we just want to look at the first few elements of the RDD in sorted order, we can use the takeOrdered method with the sortFunction we defined.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
# First, implement a helper function `getCountsAndAverages` using only Python
def getCountsAndAverages(IDandRatingsTuple):
""" Calculate average rating
Args:
IDandRatingsTuple: a single tuple of (MovieID, (Rating1, Rating2, Rating3, ...))
Returns:
tuple: a tuple of (MovieID, (number of ratings, averageRating))
"""
count = 0
tot = 0.0
for rating in IDandRatingsTuple[1]:
tot += rating
count += 1
return (IDandRatingsTuple[0], (count, tot/count))
# TEST Number of Ratings and Average Ratings for a Movie (1a)
Test.assertEquals(getCountsAndAverages((1, (1, 2, 3, 4))), (1, (4, 2.5)),
'incorrect getCountsAndAverages() with integer list')
Test.assertEquals(getCountsAndAverages((100, (10.0, 20.0, 30.0))), (100, (3, 20.0)),
'incorrect getCountsAndAverages() with float list')
Test.assertEquals(getCountsAndAverages((110, xrange(20))), (110, (20, 9.5)),
'incorrect getCountsAndAverages() with xrange')
"""
Explanation: Part 1: Basic Recommendations
One way to recommend movies is to always recommend the movies with the highest average rating. In this part, we will use Spark to find the name, number of ratings, and the average rating of the 20 movies with the highest average rating and more than 500 reviews. We want to filter our movies with high ratings but fewer than or equal to 500 reviews because movies with few reviews may not have broad appeal to everyone.
(1a) Number of Ratings and Average Ratings for a Movie
Using only Python, implement a helper function getCountsAndAverages() that takes a single tuple of (MovieID, (Rating1, Rating2, Rating3, ...)) and returns a tuple of (MovieID, (number of ratings, averageRating)). For example, given the tuple (100, (10.0, 20.0, 30.0)), your function should return (100, (3, 20.0))
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
# From ratingsRDD with tuples of (UserID, MovieID, Rating) create an RDD with tuples of
# the (MovieID, iterable of Ratings for that MovieID)
movieIDsWithRatingsRDD = (ratingsRDD
.map(lambda x:(x[1], x[2]))
.groupByKey())
print 'movieIDsWithRatingsRDD: %s\n' % movieIDsWithRatingsRDD.take(3)
# Using `movieIDsWithRatingsRDD`, compute the number of ratings and average rating for each movie to
# yield tuples of the form (MovieID, (number of ratings, average rating))
movieIDsWithAvgRatingsRDD = movieIDsWithRatingsRDD.map(getCountsAndAverages)
print 'movieIDsWithAvgRatingsRDD: %s\n' % movieIDsWithAvgRatingsRDD.take(3)
# To `movieIDsWithAvgRatingsRDD`, apply RDD transformations that use `moviesRDD` to get the movie
# names for `movieIDsWithAvgRatingsRDD`, yielding tuples of the form
# (average rating, movie name, number of ratings)
movieNameWithAvgRatingsRDD = (moviesRDD
.join(movieIDsWithAvgRatingsRDD)
.map(lambda x: (x[1][1][1], x[1][0], x[1][1][0])))
print 'movieNameWithAvgRatingsRDD: %s\n' % movieNameWithAvgRatingsRDD.take(3)
# TEST Movies with Highest Average Ratings (1b)
Test.assertEquals(movieIDsWithRatingsRDD.count(), 3615,
'incorrect movieIDsWithRatingsRDD.count() (expected 3615)')
movieIDsWithRatingsTakeOrdered = movieIDsWithRatingsRDD.takeOrdered(3)
Test.assertTrue(movieIDsWithRatingsTakeOrdered[0][0] == 1 and
len(list(movieIDsWithRatingsTakeOrdered[0][1])) == 993,
'incorrect count of ratings for movieIDsWithRatingsTakeOrdered[0] (expected 993)')
Test.assertTrue(movieIDsWithRatingsTakeOrdered[1][0] == 2 and
len(list(movieIDsWithRatingsTakeOrdered[1][1])) == 332,
'incorrect count of ratings for movieIDsWithRatingsTakeOrdered[1] (expected 332)')
Test.assertTrue(movieIDsWithRatingsTakeOrdered[2][0] == 3 and
len(list(movieIDsWithRatingsTakeOrdered[2][1])) == 299,
'incorrect count of ratings for movieIDsWithRatingsTakeOrdered[2] (expected 299)')
Test.assertEquals(movieIDsWithAvgRatingsRDD.count(), 3615,
'incorrect movieIDsWithAvgRatingsRDD.count() (expected 3615)')
Test.assertEquals(movieIDsWithAvgRatingsRDD.takeOrdered(3),
[(1, (993, 4.145015105740181)), (2, (332, 3.174698795180723)),
(3, (299, 3.0468227424749164))],
'incorrect movieIDsWithAvgRatingsRDD.takeOrdered(3)')
Test.assertEquals(movieNameWithAvgRatingsRDD.count(), 3615,
'incorrect movieNameWithAvgRatingsRDD.count() (expected 3615)')
Test.assertEquals(movieNameWithAvgRatingsRDD.takeOrdered(3),
[(1.0, u'Autopsy (Macchie Solari) (1975)', 1), (1.0, u'Better Living (1998)', 1),
(1.0, u'Big Squeeze, The (1996)', 3)],
'incorrect movieNameWithAvgRatingsRDD.takeOrdered(3)')
"""
Explanation: (1b) Movies with Highest Average Ratings
Now that we have a way to calculate the average ratings, we will use the getCountsAndAverages() helper function with Spark to determine movies with highest average ratings.
The steps you should perform are:
Recall that the ratingsRDD contains tuples of the form (UserID, MovieID, Rating). From ratingsRDD create an RDD with tuples of the form (MovieID, Python iterable of Ratings for that MovieID). This transformation will yield an RDD of the form: [(1, <pyspark.resultiterable.ResultIterable object at 0x7f16d50e7c90>), (2, <pyspark.resultiterable.ResultIterable object at 0x7f16d50e79d0>), (3, <pyspark.resultiterable.ResultIterable object at 0x7f16d50e7610>)]. Note that you will only need to perform two Spark transformations to do this step.
Using movieIDsWithRatingsRDD and your getCountsAndAverages() helper function, compute the number of ratings and average rating for each movie to yield tuples of the form (MovieID, (number of ratings, average rating)). This transformation will yield an RDD of the form: [(1, (993, 4.145015105740181)), (2, (332, 3.174698795180723)), (3, (299, 3.0468227424749164))]. You can do this step with one Spark transformation
We want to see movie names, instead of movie IDs. To moviesRDD, apply RDD transformations that use movieIDsWithAvgRatingsRDD to get the movie names for movieIDsWithAvgRatingsRDD, yielding tuples of the form (average rating, movie name, number of ratings). This set of transformations will yield an RDD of the form: [(1.0, u'Autopsy (Macchie Solari) (1975)', 1), (1.0, u'Better Living (1998)', 1), (1.0, u'Big Squeeze, The (1996)', 3)]. You will need to do two Spark transformations to complete this step: first use the moviesRDD with movieIDsWithAvgRatingsRDD to create a new RDD with Movie names matched to Movie IDs, then convert that RDD into the form of (average rating, movie name, number of ratings). These transformations will yield an RDD that looks like: [(3.6818181818181817, u'Happiest Millionaire, The (1967)', 22), (3.0468227424749164, u'Grumpier Old Men (1995)', 299), (2.882978723404255, u'Hocus Pocus (1993)', 94)]
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
# Apply an RDD transformation to `movieNameWithAvgRatingsRDD` to limit the results to movies with
# ratings from more than 500 people. We then use the `sortFunction()` helper function to sort by the
# average rating to get the movies in order of their rating (highest rating first)
movieLimitedAndSortedByRatingRDD = (movieNameWithAvgRatingsRDD
.filter(lambda x: x[2] > 500)
.sortBy(sortFunction, False))
print 'Movies with highest ratings: %s' % movieLimitedAndSortedByRatingRDD.take(20)
# TEST Movies with Highest Average Ratings and more than 500 Reviews (1c)
Test.assertEquals(movieLimitedAndSortedByRatingRDD.count(), 194,
'incorrect movieLimitedAndSortedByRatingRDD.count()')
Test.assertEquals(movieLimitedAndSortedByRatingRDD.take(20),
[(4.5349264705882355, u'Shawshank Redemption, The (1994)', 1088),
(4.515798462852263, u"Schindler's List (1993)", 1171),
(4.512893982808023, u'Godfather, The (1972)', 1047),
(4.510460251046025, u'Raiders of the Lost Ark (1981)', 1195),
(4.505415162454874, u'Usual Suspects, The (1995)', 831),
(4.457256461232604, u'Rear Window (1954)', 503),
(4.45468509984639, u'Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb (1963)', 651),
(4.43953006219765, u'Star Wars: Episode IV - A New Hope (1977)', 1447),
(4.4, u'Sixth Sense, The (1999)', 1110), (4.394285714285714, u'North by Northwest (1959)', 700),
(4.379506641366224, u'Citizen Kane (1941)', 527), (4.375, u'Casablanca (1942)', 776),
(4.363975155279503, u'Godfather: Part II, The (1974)', 805),
(4.358816276202219, u"One Flew Over the Cuckoo's Nest (1975)", 811),
(4.358173076923077, u'Silence of the Lambs, The (1991)', 1248),
(4.335826477187734, u'Saving Private Ryan (1998)', 1337),
(4.326241134751773, u'Chinatown (1974)', 564),
(4.325383304940375, u'Life Is Beautiful (La Vita \ufffd bella) (1997)', 587),
(4.324110671936759, u'Monty Python and the Holy Grail (1974)', 759),
(4.3096, u'Matrix, The (1999)', 1250)], 'incorrect sortedByRatingRDD.take(20)')
"""
Explanation: (1c) Movies with Highest Average Ratings and more than 500 reviews
Now that we have an RDD of the movies with highest averge ratings, we can use Spark to determine the 20 movies with highest average ratings and more than 500 reviews.
Apply a single RDD transformation to movieNameWithAvgRatingsRDD to limit the results to movies with ratings from more than 500 people. We then use the sortFunction() helper function to sort by the average rating to get the movies in order of their rating (highest rating first). You will end up with an RDD of the form: [(4.5349264705882355, u'Shawshank Redemption, The (1994)', 1088), (4.515798462852263, u"Schindler's List (1993)", 1171), (4.512893982808023, u'Godfather, The (1972)', 1047)]
End of explanation
"""
trainingRDD, validationRDD, testRDD = ratingsRDD.randomSplit([6, 2, 2], seed=0L)
print 'Training: %s, validation: %s, test: %s\n' % (trainingRDD.count(),
validationRDD.count(),
testRDD.count())
print trainingRDD.take(3)
print validationRDD.take(3)
print testRDD.take(3)
assert trainingRDD.count() == 292716
assert validationRDD.count() == 96902
assert testRDD.count() == 98032
assert trainingRDD.filter(lambda t: t == (1, 914, 3.0)).count() == 1
assert trainingRDD.filter(lambda t: t == (1, 2355, 5.0)).count() == 1
assert trainingRDD.filter(lambda t: t == (1, 595, 5.0)).count() == 1
assert validationRDD.filter(lambda t: t == (1, 1287, 5.0)).count() == 1
assert validationRDD.filter(lambda t: t == (1, 594, 4.0)).count() == 1
assert validationRDD.filter(lambda t: t == (1, 1270, 5.0)).count() == 1
assert testRDD.filter(lambda t: t == (1, 1193, 5.0)).count() == 1
assert testRDD.filter(lambda t: t == (1, 2398, 4.0)).count() == 1
assert testRDD.filter(lambda t: t == (1, 1035, 5.0)).count() == 1
"""
Explanation: Using a threshold on the number of reviews is one way to improve the recommendations, but there are many other good ways to improve quality. For example, you could weight ratings by the number of ratings.
Part 2: Collaborative Filtering
In this course, you have learned about many of the basic transformations and actions that Spark allows us to apply to distributed datasets. Spark also exposes some higher level functionality; in particular, Machine Learning using a component of Spark called MLlib. In this part, you will learn how to use MLlib to make personalized movie recommendations using the movie data we have been analyzing.
We are going to use a technique called collaborative filtering. Collaborative filtering is a method of making automatic predictions (filtering) about the interests of a user by collecting preferences or taste information from many users (collaborating). The underlying assumption of the collaborative filtering approach is that if a person A has the same opinion as a person B on an issue, A is more likely to have B's opinion on a different issue x than to have the opinion on x of a person chosen randomly. You can read more about collaborative filtering here.
The image below (from Wikipedia) shows an example of predicting a user's rating using collaborative filtering. At first, people rate different items (like videos, images, games). After that, the system makes predictions about a user's rating for an item that the user has not yet rated. These predictions are built upon the existing ratings of other users who have similar ratings to the active user. For instance, in the image below the system has predicted that the active user will not like the video.
For movie recommendations, we start with a matrix whose entries are movie ratings by users (shown in red in the diagram below). Each column represents a user (shown in green) and each row represents a particular movie (shown in blue).
Since not all users have rated all movies, we do not know all of the entries in this matrix, which is precisely why we need collaborative filtering. For each user, we have ratings for only a subset of the movies. With collaborative filtering, the idea is to approximate the ratings matrix by factorizing it as the product of two matrices: one that describes properties of each user (shown in green), and one that describes properties of each movie (shown in blue).
We want to select these two matrices such that the error for the user/movie pairs where we know the correct ratings is minimized. The Alternating Least Squares algorithm does this by first randomly filling the users matrix with values and then optimizing the values of the movies matrix such that the error is minimized. Then, it holds the movies matrix constant and optimizes the values of the users matrix. This alternation between which matrix to optimize is the reason for the "alternating" in the name.
This optimization is what's being shown on the right in the image above. Given a fixed set of user factors (i.e., values in the users matrix), we use the known ratings to find the best values for the movie factors using the optimization written at the bottom of the figure. Then we "alternate" and pick the best user factors given fixed movie factors.
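As a rough illustration of the alternating updates, here is a toy NumPy sketch (this is not the MLlib implementation; the ratings matrix, rank, and regularization value below are made up for the example):

```python
import numpy as np

# Toy ratings matrix: rows are movies, columns are users; 0 marks a missing rating.
R = np.array([[5., 4., 0.],
              [0., 1., 2.],
              [4., 0., 3.]])
known = R > 0
rank, lam = 2, 0.1
rng = np.random.RandomState(0)
users = rng.rand(R.shape[1], rank)    # user factors (the "green" matrix)
movies = rng.rand(R.shape[0], rank)   # movie factors (the "blue" matrix)

for _ in range(20):
    # Hold the movie factors fixed; solve a small regularized least-squares
    # problem for each user's factor vector.
    for j in range(R.shape[1]):
        m = movies[known[:, j]]
        A = m.T.dot(m) + lam * np.eye(rank)
        users[j] = np.linalg.solve(A, m.T.dot(R[known[:, j], j]))
    # Now hold the user factors fixed and solve for each movie.
    for i in range(R.shape[0]):
        u = users[known[i, :]]
        A = u.T.dot(u) + lam * np.eye(rank)
        movies[i] = np.linalg.solve(A, u.T.dot(R[i, known[i, :]]))

predictions = movies.dot(users.T)  # fills in the unknown entries too
```

After a few sweeps the product of the two factor matrices closely matches the known ratings, and the previously unknown entries of `predictions` are the model's recommendations.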
For a simple example of what the users and movies matrices might look like, check out the videos from Lecture 8 or the slides from Lecture 8
(2a) Creating a Training Set
Before we jump into using machine learning, we need to break up the ratingsRDD dataset into three pieces:
A training set (RDD), which we will use to train models
A validation set (RDD), which we will use to choose the best model
A test set (RDD), which we will use for our experiments
To randomly split the dataset into multiple groups, we can use the pySpark randomSplit() transformation. randomSplit() takes a list of split weights and a seed and returns multiple RDDs.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
import math
def computeError(predictedRDD, actualRDD):
""" Compute the root mean squared error between predicted and actual
Args:
predictedRDD: predicted ratings for each movie and each user where each entry is in the form
(UserID, MovieID, Rating)
actualRDD: actual ratings where each entry is in the form (UserID, MovieID, Rating)
Returns:
RMSE (float): computed RMSE value
"""
# Transform predictedRDD into the tuples of the form ((UserID, MovieID), Rating)
predictedReformattedRDD = predictedRDD.map(lambda x: ((x[0], x[1]), x[2]))
# Transform actualRDD into the tuples of the form ((UserID, MovieID), Rating)
actualReformattedRDD = actualRDD.map(lambda x: ((x[0], x[1]), x[2]))
    # Compute the squared error for each matching entry (i.e., the same (UserID, MovieID) in each
    # RDD) in the reformatted RDDs using RDD transformations - do not use collect()
squaredErrorsRDD = (predictedReformattedRDD
.join(actualReformattedRDD)
.map(lambda kv: (kv[0], (kv[1][0] - kv[1][1])**2)))
# Compute the total squared error - do not use collect()
totalError = squaredErrorsRDD.values().sum()
# Count the number of entries for which you computed the total squared error
numRatings = squaredErrorsRDD.count()
# Using the total squared error and the number of entries, compute the RMSE
return math.sqrt(float(totalError)/numRatings)
# sc.parallelize turns a Python list into a Spark RDD.
testPredicted = sc.parallelize([
(1, 1, 5),
(1, 2, 3),
(1, 3, 4),
(2, 1, 3),
(2, 2, 2),
(2, 3, 4)])
testActual = sc.parallelize([
(1, 2, 3),
(1, 3, 5),
(2, 1, 5),
(2, 2, 1)])
testPredicted2 = sc.parallelize([
(2, 2, 5),
(1, 2, 5)])
testError = computeError(testPredicted, testActual)
print 'Error for test dataset (should be 1.22474487139): %s' % testError
testError2 = computeError(testPredicted2, testActual)
print 'Error for test dataset2 (should be 3.16227766017): %s' % testError2
testError3 = computeError(testActual, testActual)
print 'Error for testActual dataset (should be 0.0): %s' % testError3
# TEST Root Mean Square Error (2b)
Test.assertTrue(abs(testError - 1.22474487139) < 0.00000001,
'incorrect testError (expected 1.22474487139)')
Test.assertTrue(abs(testError2 - 3.16227766017) < 0.00000001,
'incorrect testError2 result (expected 3.16227766017)')
Test.assertTrue(abs(testError3 - 0.0) < 0.00000001,
'incorrect testActual result (expected 0.0)')
"""
Explanation: After splitting the dataset, your training set has about 293,000 entries and the validation and test sets each have about 97,000 entries (the exact number of entries in each dataset varies slightly due to the random nature of the randomSplit() transformation).
(2b) Root Mean Square Error (RMSE)
In the next part, you will generate a few different models, and will need a way to decide which model is best. We will use the Root Mean Square Error (RMSE) or Root Mean Square Deviation (RMSD) to compute the error of each model. RMSE is a frequently used measure of the differences between values (sample and population values) predicted by a model or an estimator and the values actually observed. The RMSD represents the sample standard deviation of the differences between predicted values and observed values. These individual differences are called residuals when the calculations are performed over the data sample that was used for estimation, and are called prediction errors when computed out-of-sample. The RMSE serves to aggregate the magnitudes of the errors in predictions for various times into a single measure of predictive power. RMSE is a good measure of accuracy, but only to compare forecasting errors of different models for a particular variable and not between variables, as it is scale-dependent.
The RMSE is the square root of the average value of the square of (actual rating - predicted rating) for all users and movies for which we have the actual rating. Versions of Spark MLlib beginning with Spark 1.4 include a RegressionMetrics module that can be used to compute the RMSE. However, since we are using Spark 1.3.1, we will write our own function.
Write a function to compute the sum of squared error given predictedRDD and actualRDD RDDs. Both RDDs consist of tuples of the form (UserID, MovieID, Rating)
Given two ratings RDDs, x and y of size n, we define RMSE as follows: $ RMSE = \sqrt{\frac{\sum_{i = 1}^{n} (x_i - y_i)^2}{n}}$
To calculate RMSE, the steps you should perform are:
Transform predictedRDD into the tuples of the form ((UserID, MovieID), Rating). For example, tuples like [((1, 1), 5), ((1, 2), 3), ((1, 3), 4), ((2, 1), 3), ((2, 2), 2), ((2, 3), 4)]. You can perform this step with a single Spark transformation.
Transform actualRDD into the tuples of the form ((UserID, MovieID), Rating). For example, tuples like [((1, 2), 3), ((1, 3), 5), ((2, 1), 5), ((2, 2), 1)]. You can perform this step with a single Spark transformation.
Using only RDD transformations (you only need to perform two transformations), compute the squared error for each matching entry (i.e., the same (UserID, MovieID) in each RDD) in the reformatted RDDs - do not use collect() to perform this step. Note that not every (UserID, MovieID) pair will appear in both RDDs - if a pair does not appear in both RDDs, then it does not contribute to the RMSE. You will end up with an RDD with entries of the form $ (x_i - y_i)^2$ You might want to check out Python's math module to see how to compute these values
Using an RDD action (but not collect()), compute the total squared error: $ SE = \sum_{i = 1}^{n} (x_i - y_i)^2 $
Compute n by using an RDD action (but not collect()), to count the number of pairs for which you computed the total squared error
Using the total squared error and the number of pairs, compute the RMSE. Make sure you compute this value as a float.
Note: Your solution must only use transformations and actions on RDDs. Do not call collect() on either RDD.
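Before writing the Spark version, the computation can be sanity-checked in plain Python on the small test datasets used below (this sketch uses ordinary lists and a dict in place of RDDs):

```python
import math

# Same toy data as testPredicted / testActual in the cell above
predicted = [(1, 1, 5), (1, 2, 3), (1, 3, 4), (2, 1, 3), (2, 2, 2), (2, 3, 4)]
actual = [(1, 2, 3), (1, 3, 5), (2, 1, 5), (2, 2, 1)]

# Key the predictions by (UserID, MovieID); keep only pairs present in both datasets.
predicted_by_key = dict(((u, m), r) for (u, m, r) in predicted)
squared_errors = [(predicted_by_key[(u, m)] - r) ** 2
                  for (u, m, r) in actual if (u, m) in predicted_by_key]
rmse = math.sqrt(float(sum(squared_errors)) / len(squared_errors))
# squared errors are 0, 1, 4, 1, so rmse = sqrt(6 / 4) ~ 1.22474487139
```

This matches the expected value 1.22474487139 asserted in the test cell, which is a useful check that your join-based Spark pipeline is keying the entries correctly.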
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
from pyspark.mllib.recommendation import ALS
validationForPredictRDD = validationRDD.map(lambda x: (x[0], x[1]))
seed = 5L
iterations = 5
regularizationParameter = 0.1
ranks = [4, 8, 12]
errors = [0, 0, 0]
err = 0
tolerance = 0.02
minError = float('inf')
bestRank = -1
bestIteration = -1
for rank in ranks:
model = ALS.train(trainingRDD, rank, seed=seed, iterations=iterations,
lambda_=regularizationParameter)
predictedRatingsRDD = model.predictAll(validationForPredictRDD)
error = computeError(predictedRatingsRDD, validationRDD)
errors[err] = error
err += 1
print 'For rank %s the RMSE is %s' % (rank, error)
if error < minError:
minError = error
bestRank = rank
print 'The best model was trained with rank %s' % bestRank
# TEST Using ALS.train (2c)
Test.assertEquals(trainingRDD.getNumPartitions(), 2,
'incorrect number of partitions for trainingRDD (expected 2)')
Test.assertEquals(validationForPredictRDD.count(), 96902,
'incorrect size for validationForPredictRDD (expected 96902)')
Test.assertEquals(validationForPredictRDD.filter(lambda t: t == (1, 1907)).count(), 1,
'incorrect content for validationForPredictRDD')
Test.assertTrue(abs(errors[0] - 0.883710109497) < tolerance, 'incorrect errors[0]')
Test.assertTrue(abs(errors[1] - 0.878486305621) < tolerance, 'incorrect errors[1]')
Test.assertTrue(abs(errors[2] - 0.876832795659) < tolerance, 'incorrect errors[2]')
"""
Explanation: (2c) Using ALS.train()
In this part, we will use the MLlib implementation of Alternating Least Squares, ALS.train(). ALS takes a training dataset (RDD) and several parameters that control the model creation process. To determine the best values for the parameters, we will use ALS to train several models, and then we will select the best model and use the parameters from that model in the rest of this lab exercise.
The process we will use for determining the best model is as follows:
Pick a set of model parameters. The most important parameter to ALS.train() is the rank, which is the number of rows in the Users matrix (green in the diagram above) or the number of columns in the Movies matrix (blue in the diagram above). (In general, a lower rank will mean higher error on the training dataset, but a high rank may lead to overfitting.) We will train models with ranks of 4, 8, and 12 using the trainingRDD dataset.
Create a model using ALS.train(trainingRDD, rank, seed=seed, iterations=iterations, lambda_=regularizationParameter), passing: an RDD consisting of tuples of the form (UserID, MovieID, rating) used to train the model, an integer rank (4, 8, or 12), a number of iterations to execute (we will use 5 for the iterations parameter), and a regularization coefficient (we will use 0.1 for regularizationParameter).
For the prediction step, create an input RDD, validationForPredictRDD, consisting of (UserID, MovieID) pairs that you extract from validationRDD. You will end up with an RDD of the form: [(1, 1287), (1, 594), (1, 1270)]
Using the model and validationForPredictRDD, we can predict rating values by calling model.predictAll() with the validationForPredictRDD dataset, where model is the model we generated with ALS.train(). predictAll accepts an RDD with each entry in the format (userID, movieID) and outputs an RDD with each entry in the format (userID, movieID, rating).
Evaluate the quality of the model by using the computeError() function you wrote in part (2b) to compute the error between the predicted ratings and the actual ratings in validationRDD.
Which rank produces the best model, based on the RMSE with the validationRDD dataset?
Note: It is likely that this operation will take a noticeable amount of time (around a minute in our VM); you can observe its progress on the Spark Web UI. Probably most of the time will be spent running your computeError() function, since, unlike the Spark ALS implementation (and the Spark 1.4 RegressionMetrics module), this does not use a fast linear algebra library and needs to run some Python code for all 100k entries.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
myModel = ALS.train(trainingRDD, bestRank, seed=seed, iterations=iterations, lambda_=regularizationParameter)
testForPredictingRDD = testRDD.map(lambda x: (x[0], x[1]))
predictedTestRDD = myModel.predictAll(testForPredictingRDD)
testRMSE = computeError(predictedTestRDD, testRDD)
print 'The model had a RMSE on the test set of %s' % testRMSE
# TEST Testing Your Model (2d)
Test.assertTrue(abs(testRMSE - 0.87809838344) < tolerance, 'incorrect testRMSE')
"""
Explanation: (2d) Testing Your Model
So far, we used the trainingRDD and validationRDD datasets to select the best model. Since we used these two datasets to determine what model is best, we cannot use them to test how good the model is - otherwise we would be very vulnerable to overfitting. To decide how good our model is, we need to use the testRDD dataset. We will use the bestRank you determined in part (2c) to create a model for predicting the ratings for the test dataset and then we will compute the RMSE.
The steps you should perform are:
Train a model, using the trainingRDD, bestRank from part (2c), and the parameters you used in in part (2c): seed=seed, iterations=iterations, and lambda_=regularizationParameter - make sure you include all of the parameters.
For the prediction step, create an input RDD, testForPredictingRDD, consisting of (UserID, MovieID) pairs that you extract from testRDD. You will end up with an RDD of the form: [(1, 1287), (1, 594), (1, 1270)]
Use myModel.predictAll() to predict rating values for the test dataset.
For validation, use testRDD and the computeError() function you wrote in part (2b) to compute the RMSE between testRDD and the predictedTestRDD from the model.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
trainingAvgRating = float(trainingRDD.map(lambda x: x[2]).sum()) / trainingRDD.count()
print 'The average rating for movies in the training set is %s' % trainingAvgRating
testForAvgRDD = testRDD.map(lambda x: (x[0], x[1], trainingAvgRating))
testAvgRMSE = computeError(testForAvgRDD, testRDD)
print 'The RMSE on the average set is %s' % testAvgRMSE
# TEST Comparing Your Model (2e)
Test.assertTrue(abs(trainingAvgRating - 3.57409571052) < 0.000001,
'incorrect trainingAvgRating (expected 3.57409571052)')
Test.assertTrue(abs(testAvgRMSE - 1.12036693569) < 0.000001,
'incorrect testAvgRMSE (expected 1.12036693569)')
"""
Explanation: (2e) Comparing Your Model
Looking at the RMSE for the results predicted by the model versus the values in the test set is one way to evaluate the quality of our model. Another way to evaluate the model is to evaluate the error from a test set where every rating is the average rating for the training set.
The steps you should perform are:
Use the trainingRDD to compute the average rating across all movies in that training dataset.
Use the average rating that you just determined and the testRDD to create an RDD with entries of the form (userID, movieID, average rating).
Use your computeError function to compute the RMSE between testRDD and the testForAvgRDD that you just created.
End of explanation
"""
print 'Highest-rated movies (with more than 500 reviews):'
print '(average rating, movie name, number of reviews)'
for ratingsTuple in movieLimitedAndSortedByRatingRDD.take(50):
print ratingsTuple
"""
Explanation: You now have code to predict how users will rate movies!
Part 3: Predictions for Yourself
The ultimate goal of this lab exercise is to predict what movies to recommend to yourself. In order to do that, you will first need to add ratings for yourself to the ratingsRDD dataset.
(3a) Your Movie Ratings
To help you provide ratings for yourself, we have included the following code to list the names and movie IDs of the 50 highest-rated movies from movieLimitedAndSortedByRatingRDD which we created in part 1 of the lab.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
myUserID = 0
# Note that the movie IDs are the *last* number on each line. A common error was to use the number of ratings as the movie ID.
myRatedMovies = [
# The format of each line is (myUserID, movie ID, your rating)
# For example, to give the movie "Star Wars: Episode IV - A New Hope (1977)" a five rating, you would add the following line:
# (myUserID, 260, 5),
(myUserID, 54001, 5), # Harry Potter & the Order of the Phoenix
(myUserID, 150, 5), # Apollo 13
(myUserID, 1, 4), # Toy Story
(myUserID, 2953, 4), # Home Alone 2
(myUserID, 1882, 3), # Godzilla (1998)
(myUserID, 5313, 3), # The Scorpion King
(myUserID, 260, 2), # Star Wars: Episode IV
(myUserID, 8731, 2), # Moulin Rouge
(myUserID, 3578, 1), # Gladiator
(myUserID, 2028, 1) # Saving Private Ryan
]
myRatingsRDD = sc.parallelize(myRatedMovies)
print 'My movie ratings: %s' % myRatingsRDD.take(10)
"""
Explanation: The user ID 0 is unassigned, so we will use it for your ratings. We set the variable myUserID to 0 for you. Next, create a new RDD myRatingsRDD with your ratings for at least 10 movie ratings. Each entry should be formatted as (myUserID, movieID, rating) (i.e., each entry should be formatted in the same way as trainingRDD). As in the original dataset, ratings should be between 1 and 5 (inclusive). If you have not seen at least 10 of these movies, you can increase the parameter passed to take() in the above cell until there are 10 movies that you have seen (or you can also guess what your rating would be for movies you have not seen).
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
trainingWithMyRatingsRDD = trainingRDD.union(myRatingsRDD)
print ('The training dataset now has %s more entries than the original training dataset' %
(trainingWithMyRatingsRDD.count() - trainingRDD.count()))
assert (trainingWithMyRatingsRDD.count() - trainingRDD.count()) == myRatingsRDD.count()
"""
Explanation: (3b) Add Your Movies to Training Dataset
Now that you have ratings for yourself, you need to add your ratings to the training dataset so that the model you train will incorporate your preferences. Spark's union() transformation combines two RDDs; use union() to create a new training dataset that includes your ratings and the data in the original training dataset.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
myRatingsModel = ALS.train(trainingWithMyRatingsRDD, bestRank, seed=seed, iterations=iterations,
lambda_=regularizationParameter)
"""
Explanation: (3c) Train a Model with Your Ratings
Now, train a model with your ratings added and the parameters you used in in part (2c): bestRank, seed=seed, iterations=iterations, and lambda_=regularizationParameter - make sure you include all of the parameters.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
predictedTestMyRatingsRDD = myRatingsModel.predictAll(testForPredictingRDD)
testRMSEMyRatings = computeError(predictedTestMyRatingsRDD, testRDD)
print 'The model had a RMSE on the test set of %s' % testRMSEMyRatings
"""
Explanation: (3d) Check RMSE for the New Model with Your Ratings
Compute the RMSE for this new model on the test set.
For the prediction step, we reuse testForPredictingRDD, consisting of (UserID, MovieID) pairs that you extracted from testRDD. The RDD has the form: [(1, 1287), (1, 594), (1, 1270)]
Use myRatingsModel.predictAll() to predict rating values for the testForPredictingRDD test dataset, set this as predictedTestMyRatingsRDD
For validation, use the testRDD and your computeError function to compute the RMSE between testRDD and the predictedTestMyRatingsRDD from the model.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
# Use the Python list myRatedMovies to transform the moviesRDD into an RDD with entries that are pairs
# of the form (myUserID, Movie ID) and that does not contain any movies that you have rated.
myUnratedMoviesRDD = (moviesRDD
.filter(lambda movie: movie[0] not in [row[1] for row in myRatedMovies])
.map(lambda x: (myUserID, x[0])))
# Use the input RDD, myUnratedMoviesRDD, with myRatingsModel.predictAll() to predict your ratings for the movies
predictedRatingsRDD = myRatingsModel.predictAll(myUnratedMoviesRDD)
"""
Explanation: (3e) Predict Your Ratings
So far, we have only used the predictAll method to compute the error of the model. Here, use the predictAll to predict what ratings you would give to the movies that you did not already provide ratings for.
The steps you should perform are:
Use the Python list myRatedMovies to transform the moviesRDD into an RDD with entries that are pairs of the form (myUserID, Movie ID) and that does not contain any movies that you have rated. This transformation will yield an RDD of the form: [(0, 1), (0, 2), (0, 3), (0, 4)]. Note that you can do this step with one RDD transformation.
For the prediction step, use the input RDD, myUnratedMoviesRDD, with myRatingsModel.predictAll() to predict your ratings for the movies.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
# Transform movieIDsWithAvgRatingsRDD from part (1b), which has the form (MovieID, (number of ratings, average rating)),
# into an RDD of the form (MovieID, number of ratings)
movieCountsRDD = movieIDsWithAvgRatingsRDD.map(lambda x: (x[0], x[1][0]))
# Transform predictedRatingsRDD into an RDD with entries that are pairs of the form (Movie ID, Predicted Rating)
predictedRDD = predictedRatingsRDD.map(lambda x: (x[1], x[2]))
# Use RDD transformations with predictedRDD and movieCountsRDD to yield an RDD with tuples
# of the form (Movie ID, (Predicted Rating, number of ratings))
predictedWithCountsRDD = (predictedRDD.join(movieCountsRDD))
# Use RDD transformations with PredictedWithCountsRDD and moviesRDD to yield an RDD with tuples
# of the form (Predicted Rating, Movie Name, number of ratings), for movies with more than 75 ratings
ratingsWithNamesRDD = (predictedWithCountsRDD
.filter(lambda x: x[1][1] > 75)
.join(moviesRDD)
.map(lambda x: (x[1][0][0], x[1][1], x[1][0][1])))
predictedHighestRatedMovies = ratingsWithNamesRDD.takeOrdered(20, key=lambda x: -x[0])
print ('My highest rated movies as predicted (for movies with more than 75 reviews):\n%s' %
'\n'.join(map(str, predictedHighestRatedMovies)))
"""
Explanation: (3f) Predict Your Ratings
We have our predicted ratings. Now we can print out the 20 movies with the highest predicted ratings.
The steps you should perform are:
From Parts (1b) and (1c), we know that we should look at movies with a reasonable number of reviews (e.g., more than 75 reviews). You can experiment with a lower threshold, but fewer ratings for a movie may yield higher prediction errors. Transform movieIDsWithAvgRatingsRDD from Part (1b), which has the form (MovieID, (number of ratings, average rating)), into an RDD of the form (MovieID, number of ratings): [(2, 332), (4, 71), (6, 442)]
We want to see movie names, instead of movie IDs. Transform predictedRatingsRDD into an RDD with entries that are pairs of the form (Movie ID, Predicted Rating): [(3456, -0.5501005376936687), (1080, 1.5885892024487962), (320, -3.7952255522487865)]
Use RDD transformations with predictedRDD and movieCountsRDD to yield an RDD with tuples of the form (Movie ID, (Predicted Rating, number of ratings)): [(2050, (0.6694097486155939, 44)), (10, (5.29762541533513, 418)), (2060, (0.5055259373841172, 97))]
Use RDD transformations with predictedWithCountsRDD and moviesRDD to yield an RDD with tuples of the form (Predicted Rating, Movie Name, number of ratings), for movies with more than 75 ratings. For example: [(7.983121900375243, u'Under Siege (1992)'), (7.9769201864261285, u'Fifth Element, The (1997)')]
End of explanation
"""
AustT1996/language-recognition-with-neural-nets | Language Recognition Neural Net- Training and Testing.ipynb | mit
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import string
import math
import tabulate
import os
"""
Explanation: Summary
Use Tensorboard to build and train a neural net for recognizing
Conclusion
The Neural net has mediocre overall performance, likely because I didn't spend that much time optimizing it, and because the problem is somewhat ill-posed, since a string could be a word in multiple different languages. However, for English and Mandarin words it did better than the guessing rate of 33%, which means learning was successful.
End of explanation
"""
# Define constants for the training data
WORD_LENGTH = 20
feature_length = 26*WORD_LENGTH
languages = "english french mandarin".split()
num_of_languages = len(languages)
# Constants for saving
save_dir = '.\\nn_save\\'
# Function for converting words to vectors
# Letters are stored as a list of 26 integers, all 0 except for one, which is a 1
# E.g. a is [1, 0, 0... <25 0's>]
# E.g. z is [0, 0 ... <25 0's>, 1]
# Overall 20 letters are stored sequentially
# Punctuation and white space is ignored
def vectorize_word(word):
l_final = []
for i in range(WORD_LENGTH):
l_next = [0]*26
try:
l_next[string.ascii_lowercase.index(word[i])] = 1
except (IndexError, ValueError):  # short words and non-lowercase-ASCII characters stay all-zero
pass
l_final.extend(l_next)
return l_final
f_out = open(r'.\data\nn_params.txt', 'w')
f_out.write("{}\n".format(WORD_LENGTH))
f_out.write(save_dir+'\n')
f_out.write(" ".join(languages)+'\n')
f_out.close()
# Create training data
training_data = []
training_answers = []
for i, lang in enumerate(languages):
# Read files
f_in = open(r".\data\{}.txt".format(lang))
words = [w.strip() for w in f_in.readlines()]
f_in.close()
# Vectorize words
vector_words = [vectorize_word(w) for w in words]
# Vectorize output
l = [0]*num_of_languages
l[i] = 1
vector_language = [l for w in words]
# Add to training data
training_data.extend(vector_words)
training_answers.extend(vector_language)
# Convert data to numpy array
training_data = np.array(training_data)
training_answers = np.array(training_answers)
# Summarize training data
print("Training data shape: {}".format(training_data.shape))
"""
Explanation: Make training data
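The one-hot scheme described in the comments above can be checked directly; this is a standalone re-implementation for illustration (the function name one_hot_word is mine, but the encoding matches vectorize_word):

```python
import string

WORD_LENGTH = 20  # same constant as in the cell above

def one_hot_word(word):
    # 26 slots per letter position; padding and non-letter characters stay all-zero
    vec = []
    for i in range(WORD_LENGTH):
        slot = [0] * 26
        if i < len(word) and word[i] in string.ascii_lowercase:
            slot[string.ascii_lowercase.index(word[i])] = 1
        vec.extend(slot)
    return vec

v = one_hot_word("ab")
# v[0] == 1 encodes 'a' at position 0; v[27] == 1 encodes 'b' at position 1;
# the remaining 18 positions are all-zero padding, so len(v) == 26 * 20 == 520
```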
End of explanation
"""
# Input and output variables
x = tf.placeholder(tf.float32, [None, feature_length])
y_ = tf.placeholder(tf.float32, [None, num_of_languages])
# Define the number of neurons in each layer
layer_lengths = [feature_length, 40, num_of_languages]
# Create each layer
neural_net = []
last_output = x
for i, current_layer_length in enumerate(layer_lengths[1:]):
# Define the length of the last layer
last_layer_length = layer_lengths[i]
# Create the variables for this layer
W = tf.Variable(tf.truncated_normal([last_layer_length, current_layer_length],
stddev=1 / math.sqrt(last_layer_length)))
b = tf.Variable(tf.constant(0.1, shape=[current_layer_length]))
h = tf.sigmoid(tf.matmul(last_output, W) + b)
# Store the variables for this layer
neural_net.append((W, b, h))
# Update the last output
last_output = h
# Output layer (softmax)
y = tf.nn.softmax(last_output)
# Scoring (use cross-entropy loss)
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), axis=1))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
"""
Explanation: Make the neural net
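To make the shapes concrete, the same forward pass can be sketched in NumPy: a sigmoid hidden layer, a sigmoid output layer, and then a softmax over the final activations, mirroring the TensorFlow graph above (the random weights here are just placeholders for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # subtract max for numerical stability
    return e / e.sum(axis=1, keepdims=True)

layer_lengths = [520, 40, 3]  # 26 * 20 input features, one hidden layer, 3 languages
rng = np.random.RandomState(0)
weights = [rng.randn(m, n) / np.sqrt(m)
           for m, n in zip(layer_lengths[:-1], layer_lengths[1:])]
biases = [np.full(n, 0.1) for n in layer_lengths[1:]]

x = rng.rand(5, layer_lengths[0])  # a batch of 5 fake word vectors
a = x
for W, b in zip(weights, biases):
    a = sigmoid(a.dot(W) + b)      # every layer uses a sigmoid, as in the graph
y = softmax(a)                     # softmax turns the last activations into probabilities
```

Each row of `y` sums to 1, so it can be read as a probability distribution over the three languages.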
End of explanation
"""
# Initialize variables
init = tf.global_variables_initializer()
sess = tf.InteractiveSession()
sess.run(init)
# Initialize accuracy metrics
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
accuracy_tracker = []
# Run the training
batch_size = 500
for i in range(40000):
batch_indices = np.random.randint(training_data.shape[0], size=batch_size)
batch_xs = training_data[batch_indices]
batch_ys = training_answers[batch_indices]
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
# Possibly print readout
if (i+1) % 2000 == 0:
corr_pred = sess.run(correct_prediction, feed_dict={x: training_data, y_: training_answers})
correct, total = len(corr_pred[corr_pred]), len(corr_pred)
acc = float(correct)/total
accuracy_tracker.append((i+1, acc))
print("Batch {:0>5d}- {:.4f} ({:0>5d}/{})".format(i+1, acc, correct, total))
# Plot training accuracy improvement
plt.plot(*zip(*accuracy_tracker))
plt.xlabel("Batch number")
plt.ylabel("Accuracy")
plt.title("Training Accuracy for language recognition neural net")
plt.show()
# Function for testing words
def get_predictions(test_words):
test_words_vectorized = np.array([vectorize_word(w) for w in test_words])
# Get predictions
test_results = sess.run(y, feed_dict={x: test_words_vectorized})
return test_results
# Function that tests words and prints them to make a nice pretty table
def predictions_table(test_words, answers=None):
# test_words is a list of strings (the words)
    # answers tells the function the correct language for each word, so wrong predictions can be flagged
    # It should be a list where element i being the number n means that test_words[i] is in languages[n]
predictions = get_predictions(test_words)
table = [[w] for w in test_words] # First column of the table is the word
table = [t + ["{:.1f}".format(p*100) for p in pred] for t, pred in zip(table, predictions)] # Next column is the predictions
headers = ["Word"] + [l.title() for l in languages]
# Possibly print wrong answers
if answers is not None:
# Find the ones it answered correctly
correct = np.array([p[i] == np.max(p) for p, i in zip(predictions, answers)])
# Add an answers column to the table
for i, c in enumerate(correct):
if c:
table[i] += [""]
else:
table[i] += ["Wrong!"]
headers += ["Correct?"]
# Print the table:
print(tabulate.tabulate(table, headers=headers))
# Possibly print the accuracy
if answers is not None:
print("Accuracy: {:.2f}%".format(100.*len(correct[correct])/len(correct)))
"""
Explanation: Train the neural net and output accuracy
End of explanation
"""
# English words
english_words = "hello my dear chap let's have a bit of coffee".split()
english_words += "oh my heavens look at what this neural net can do".split()
english_words += "it looks like english words are often quite similar to french ones".split()
predictions_table(english_words, answers=[0]*len(english_words))
"""
Explanation: Test the neural net with some words I made up
This isn't the most rigorous source of test data admittedly, but oh well
End of explanation
"""
# French words
# Note the lack of accents (the vectorizer doesn't handle accents)
# Note my poor French also
french_words = "bonjour mon ami j'adore le francais. C'est une belle langue".split()
french_words += "je mange une croissant avec une baguette et du brie".split()
french_words += "ca c'est comment on fait des choses en france!".split()
predictions_table(french_words, answers=[1]*len(french_words))
"""
Explanation: It is pretty good at English.
Most words were right, with the confusion mainly being with French (which has a lot of similar words anyway).
Oddly enough it thought the word "quite" was Mandarin.
End of explanation
"""
# Mandarin Words
# Note I am typing in pinyin with no tones
mandarin_words = "xuexi zhongwen zhende hen nan".split()
mandarin_words += "wo hen xihuan pinyin yinwei bangzhu wo kanshu de bijiao rongyi".split()
mandarin_words += "sishisi jiu shi tebie nan shuochulai".split()
mandarin_words += "qilai, bu yuan zuo nuli de renmen!".split() # Gotta please the censors ;)
predictions_table(mandarin_words, answers=[2]*len(mandarin_words))
"""
Explanation: It really didn't do very well with French at all...
It seemed to mix it up with English a lot. I am not sure why the confusion is so uneven...
End of explanation
"""
# Save neural net
# saver = tf.train.Saver()
# if not os.path.exists(save_dir):
# os.makedirs(save_dir)
# save_path = saver.save(sess, save_dir)
# print(save_path)
# Close the session
# sess.close()
"""
Explanation: It did ok with Mandarin
Most of the confusion seems to be with short words (e.g. 'hen', which is a word in english too).
I found it weird that "tebie" and "renmen" were French...
End of explanation
"""
|
probml/pyprobml | notebooks/book1/22/matrix_factorization_recommender_surprise_lib.ipynb | mit | import pandas as pd
import numpy as np
import os
import matplotlib.pyplot as plt
"""
Explanation: <a href="https://colab.research.google.com/github/Nirzu97/pyprobml/blob/matrix-factorization/notebooks/matrix_factorization_recommender_surprise_lib.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Matrix Factorization for Movie Lens Recommendations using Surprise library
End of explanation
"""
!pip install surprise
import surprise
from surprise import Dataset
data = Dataset.load_builtin("ml-1m")
trainset = data.build_full_trainset()
print([trainset.n_users, trainset.n_items, trainset.n_ratings])
"""
Explanation: Surprise library for collaborative filtering
http://surpriselib.com/
Simple Python RecommendatIon System Engine
End of explanation
"""
!wget http://files.grouplens.org/datasets/movielens/ml-1m.zip
!unzip ml-1m
!ls
folder = "ml-1m"
ratings_list = [
[int(x) for x in i.strip().split("::")] for i in open(os.path.join(folder, "ratings.dat"), "r").readlines()
]
users_list = [i.strip().split("::") for i in open(os.path.join(folder, "users.dat"), "r").readlines()]
movies_list = [
i.strip().split("::") for i in open(os.path.join(folder, "movies.dat"), "r", encoding="latin-1").readlines()
]
ratings_df = pd.DataFrame(ratings_list, columns=["UserID", "MovieID", "Rating", "Timestamp"], dtype=int)
movies_df = pd.DataFrame(movies_list, columns=["MovieID", "Title", "Genres"])
movies_df["MovieID"] = movies_df["MovieID"].apply(pd.to_numeric)
movies_df.head()
def get_movie_name(movies_df, movie_id_str):
ndx = movies_df["MovieID"] == int(movie_id_str)
name = movies_df["Title"][ndx].to_numpy()[0]
return name
print(get_movie_name(movies_df, 1))
print(get_movie_name(movies_df, "527"))
def get_movie_genres(movies_df, movie_id_str):
ndx = movies_df["MovieID"] == int(movie_id_str)
name = movies_df["Genres"][ndx].to_numpy()[0]
return name
print(get_movie_genres(movies_df, 1))
print(get_movie_genres(movies_df, "527"))
ratings_df.head()
ratings_iter = trainset.all_ratings()  # avoid shadowing the built-in iter
nshow = 5
counter = 0
for item in ratings_iter:
# print(item)
(uid_inner, iid_inner, rating) = item
# Raw ids are strings that match the external ratings file
uid_raw = trainset.to_raw_uid(uid_inner)
iid_raw = trainset.to_raw_iid(iid_inner)
print(
"uid inner {}, raw {}, iid inner {}, raw {}, rating {}".format(uid_inner, uid_raw, iid_inner, iid_raw, rating)
)
counter += 1
if counter > nshow:
break
iid_raw = str(1318)
items_raw = list(trainset.to_raw_iid(i) for i in trainset.all_items())
print(items_raw[:10])
print(type(items_raw[0]))
print(len(np.unique(items_raw)))
users_raw = list(trainset.to_raw_uid(i) for i in trainset.all_users())
print(users_raw[:10])
print(len(np.unique(users_raw)))
# inspect user ratings for user 837
uid_raw = str(837)
uid_inner = trainset.to_inner_uid(uid_raw)
user_ratings = trainset.ur[uid_inner]
print(len(user_ratings))
print(user_ratings)
rated_raw = [trainset.to_raw_iid(iid) for (iid, rating) in user_ratings]
print(rated_raw)
unrated_raw = list(set(items_raw) - set(rated_raw))
print(len(unrated_raw))
"""
Explanation: Setting Up the Ratings Data
We read the data directly from MovieLens website, since they don't allow redistribution. We want to include the metadata (movie titles, etc), not just the ratings matrix.
End of explanation
"""
def get_true_ratings(uid_raw, trainset):
uid_inner = trainset.to_inner_uid(uid_raw)
user_ratings = trainset.ur[uid_inner]
item_list = [trainset.to_raw_iid(iid) for (iid, rating) in user_ratings]
rating_list = [rating for (iid, rating) in user_ratings]
item_list = np.array(item_list)
rating_list = np.array(rating_list)
    ndx = np.argsort([-r for r in rating_list])  # negating the ratings makes argsort put the highest first
return item_list[ndx], rating_list[ndx]
def make_predictions(algo, uid_raw, trainset):
uid_inner = trainset.to_inner_uid(uid_raw)
user_ratings = trainset.ur[uid_inner]
rated_raw = [trainset.to_raw_iid(iid) for (iid, rating) in user_ratings]
items_raw = list(trainset.to_raw_iid(i) for i in trainset.all_items())
unrated_raw = list(set(items_raw) - set(rated_raw))
item_list = []
rating_list = []
for iid_raw in unrated_raw:
pred = algo.predict(uid_raw, iid_raw, verbose=False)
uid_raw, iid_raw, rating_true, rating_pred, details = pred
item_list.append(iid_raw)
rating_list.append(rating_pred)
item_list = np.array(item_list)
rating_list = np.array(rating_list)
    ndx = np.argsort([-r for r in rating_list])  # negating the ratings makes argsort put the highest first
return item_list[ndx], rating_list[ndx]
def make_df(movies_df, item_list_raw, rating_list):
name_list = []
genre_list = []
for i in range(len(item_list_raw)):
item_raw = item_list_raw[i]
name = get_movie_name(movies_df, item_raw)
genre = get_movie_genres(movies_df, item_raw)
name_list.append(name)
genre_list.append(genre)
df = pd.DataFrame({"name": name_list, "genre": genre_list, "rating": rating_list, "iid": item_list_raw})
return df
uid_raw = str(837)
item_list_raw, rating_list = get_true_ratings(uid_raw, trainset)
df = make_df(movies_df, item_list_raw, rating_list)
df.head(10)
"""
Explanation: Join with meta data
End of explanation
"""
# https://surprise.readthedocs.io/en/stable/matrix_factorization.html
algo = surprise.SVD(n_factors=50, biased=True, n_epochs=20, random_state=42, verbose=True)
algo.fit(trainset)
uid_raw = str(837)
item_list_raw, rating_list = make_predictions(algo, uid_raw, trainset)
df = make_df(movies_df, item_list_raw, rating_list)
df.head(10)
"""
Explanation: Fit/ predict
End of explanation
"""
# inspect user ratings for user 837
uid_raw = str(837)
uid_inner = trainset.to_inner_uid(uid_raw)
user_ratings = trainset.ur[uid_inner]
print(len(user_ratings))
print(user_ratings)
ratings_raw = [rating for (iid, rating) in user_ratings]
rated_raw = [trainset.to_raw_iid(iid) for (iid, rating) in user_ratings]
print(rated_raw)
print(trainset.to_raw_iid(1231))
print(ratings_raw[0])
def get_rating(trainset, uid_raw, iid_raw):
uid_inner = trainset.to_inner_uid(uid_raw)
user_ratings = trainset.ur[uid_inner]
rated_iid_raw = np.array([trainset.to_raw_iid(iid) for (iid, rating) in user_ratings])
ratings = np.array([rating for (iid, rating) in user_ratings])
ndx = np.where(rated_iid_raw == iid_raw)[0]
if len(ndx) > 0:
return ratings[ndx][0]
else:
return 0
print(get_rating(trainset, "837", "1201"))
print(get_rating(trainset, "837", "0"))
users_raw = list(trainset.to_raw_uid(i) for i in trainset.all_users())
items_raw = list(trainset.to_raw_iid(i) for i in trainset.all_items())
users_raw = ["837"] + users_raw
items_raw = [str(i) for i in range(1200, 1300)]
nusers = 20
nitems = 20
Rtrue = np.zeros((nusers, nitems))
Rpred = np.zeros((nusers, nitems))
for ui in range(nusers):
for ii in range(nitems):
uid = users_raw[ui]
iid = items_raw[ii]
pred = algo.predict(uid, iid, verbose=False)
uid_raw, iid_raw, _, rating_pred, details = pred
Rpred[ui, ii] = rating_pred
Rtrue[ui, ii] = get_rating(trainset, uid_raw, iid_raw)
plt.figure()
plt.imshow(Rtrue, cmap="jet")
plt.colorbar()
plt.figure()
plt.imshow(Rpred, cmap="jet")
plt.colorbar()
"""
Explanation: Visualize matrix of predictions
End of explanation
"""
|
jpilgram/phys202-2015-work | assignments/assignment09/IntegrationEx02.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy import integrate
"""
Explanation: Integration Exercise 2
Imports
End of explanation
"""
#I worked with James Amarel on this assignment
def integrand(x, a):
return 1.0/(x**2 + a**2)
def integral_approx(a):
# Use the args keyword argument to feed extra arguments to your integrand
I, e = integrate.quad(integrand, 0, np.inf, args=(a,))
return I
def integral_exact(a):
return 0.5*np.pi/a
print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))
assert True # leave this cell to grade the above integral
"""
Explanation: Indefinite integrals
Here is a table of definite integrals. Many of these integrals have a number of parameters $a$, $b$, etc.
Find five of these integrals and perform the following steps:
Typeset the integral using LaTeX in a Markdown cell.
Define an integrand function that computes the value of the integrand.
Define an integral_approx function that uses scipy.integrate.quad to perform the integral.
Define an integral_exact function that computes the exact value of the integral.
Call and print the return value of integral_approx and integral_exact for one set of parameters.
Here is an example to show what your solutions should look like:
Example
Here is the integral I am performing:
$$ I_1 = \int_0^\infty \frac{dx}{x^2 + a^2} = \frac{\pi}{2a} $$
End of explanation
"""
# YOUR CODE HERE
#raise NotImplementedError()
def integrand(x, a):
return np.sqrt(a**2 - x**2)
def integral_approx(a):
I, e = integrate.quad(integrand, 0, a, args=(a,))
return I
def integral_exact(a):
return (np.pi*a**2)/4
print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))
assert True # leave this cell to grade the above integral
"""
Explanation: Integral 1
$$ I_1 = \int_0^a \sqrt{a^2 - x^2}\,dx = \frac{\pi a^2}{4} $$
End of explanation
"""
# YOUR CODE HERE
#raise NotImplementedError()
def integrand(x):
return np.sin(x)**2
def integral_approx():
I, e = integrate.quad(integrand, 0, np.pi/2)
return I
def integral_exact():
return np.pi/4
print("Numerical: ", integral_approx())
print("Exact : ", integral_exact())
assert True # leave this cell to grade the above integral
"""
Explanation: Integral 2
$$ I_2 = \int_0^\frac{\pi}{2} \sin^2(x)dx = \frac{\pi}{4} $$
End of explanation
"""
# YOUR CODE HERE
#raise NotImplementedError()
def integrand(x,a,b):
return 1.0/(a+b*np.sin(x))
def integral_approx(a,b):
I, e = integrate.quad(integrand, 0, 2*np.pi, args=(a,b))
return I
def integral_exact(a,b):
return (2*np.pi)/np.sqrt(a**2-b**2)
print("Numerical: ", integral_approx(10.0,1.0))
print("Exact : ", integral_exact(10.0,1.0))
assert True # leave this cell to grade the above integral
"""
Explanation: Integral 3
$$ I_3 = \int_0^{2\pi} \frac{dx}{a+b\sin(x)} = \frac{2\pi}{\sqrt{a^2-b^2}} $$
End of explanation
"""
# YOUR CODE HERE
#raise NotImplementedError()
def integrand(x, a):
return np.e**(-1.0*a*(x**2))
def integral_approx(a):
# Use the args keyword argument to feed extra arguments to your integrand
I, e = integrate.quad(integrand, 0, np.inf, args=(a,))
return I
def integral_exact(a):
return 0.5*np.sqrt(np.pi/(a))
print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))
assert True # leave this cell to grade the above integral
"""
Explanation: Integral 4
$$ I_4 = \int_0^\infty e^{-ax^2} dx = \frac{1}{2}\sqrt{\frac{\pi}{a}} $$
End of explanation
"""
# YOUR CODE HERE
#raise NotImplementedError()
def integrand(x):
return 1.0/np.cosh(x)
def integral_approx():
# Use the args keyword argument to feed extra arguments to your integrand
I, e = integrate.quad(integrand, -np.inf, np.inf)
return I
def integral_exact():
return np.pi
print("Numerical: ", integral_approx())
print("Exact : ", integral_exact())
assert True # leave this cell to grade the above integral
"""
Explanation: Integral 5
$$ I_5 = \int_{-\infty}^\infty \frac{1}{\cosh x} dx = \pi $$
End of explanation
"""
|
agile-geoscience/notebooks | The_frequency_of_a_Ricker.ipynb | apache-2.0 | T, dt, f = 0.256, 0.001, 25
import bruges
w, t = bruges.filters.ricker(T, dt, f, return_t=True)
import scipy.signal
f_W, W = scipy.signal.welch(w, fs=1/dt, nperseg=256)
fig, axs = plt.subplots(figsize=(15,5), ncols=2)
axs[0].plot(t, w)
axs[0].set_xlabel("Time [s]")
axs[1].plot(f_W[:25], W[:25], c="C1")
axs[1].set_xlabel("Frequency [Hz]")
plt.show()
"""
Explanation: The frequency of a Ricker wavelet
We often use Ricker wavelets to model seismic, for example when making a synthetic seismogram with which to help tie a well. One simple way to guesstimate the peak or central frequency of the wavelet that will model a particlar seismic section is to count the peaks per unit time in the seismic. But this tends to overestimate the actual frequency because the maximum frequency of a Ricker wavelet is more than the peak frequency. The question is, how much more?
To investigate, let's make a Ricker wavelet and see what it looks like in the time and frequency domains.
End of explanation
"""
c = np.cos(2*25*np.pi*t)
f_C, C = scipy.signal.welch(c, fs=1/dt, nperseg=256)
fig, axs = plt.subplots(figsize=(15,5), ncols=2)
axs[0].plot(t, c, c="C2")
axs[0].set_xlabel("Time [s]")
axs[1].plot(f_C[:25], C[:25], c="C1")
axs[1].set_xlabel("Frequency [Hz]")
plt.show()
"""
Explanation: When we count the peaks in a section, the assumption is that this apparent frequency — that is, the reciprocal of apparent period or distance between the extrema — tells us the dominant or peak frequency.
To help see why this assumption is wrong, let's compare the Ricker with a signal whose apparent frequency does match its peak frequency: a pure cosine:
End of explanation
"""
plt.figure(figsize=(15, 5))
plt.plot(t, c, c='C2')
plt.plot(t, w)
plt.xlabel("Time [s]")
plt.show()
"""
Explanation: Notice that the signal is much narrower in bandwidth. If we allowed more oscillations, it would be even narrower. If it lasted forever, it would be a spike in the frequency domain.
Let's overlay the signals to get a picture of the difference in the relative periods:
End of explanation
"""
def ricker(t, f):
return (1 - 2*(np.pi*f*t)**2) * np.exp(-(np.pi*f*t)**2)
"""
Explanation: The practical consequence of this is that if we estimate the peak frequency to be $f\ \mathrm{Hz}$, then we need to reduce $f$ by some factor if we want to design a wavelet to match the data. To get this factor, we need to know the apparent period of the Ricker function, as given by the time difference between the two minima.
Let's look at a couple of different ways to find those minima: numerically and analytically.
Find minima numerically
We'll use scipy.optimize.minimize to find a numerical solution. In order to use it, we'll need a slightly different expression for the Ricker function — casting it in terms of a time basis t. We'll also keep f as a variable, rather than hard-coding it in the expression, to give us the flexibility of computing the minima for different values of f.
Here's the equation we're implementing:
$$w(t, f) = (1 - 2\pi^2 f^2 t^2)\ e^{-\pi^2 f^2 t^2}$$
End of explanation
"""
f = 25
np.allclose(w, ricker(t, f=25))
plt.figure(figsize=(15, 5))
plt.plot(w, lw=3)
plt.plot(ricker(t, f), '--', c='C4', lw=3)
plt.show()
"""
Explanation: Check that the wavelet looks like it did before, by comparing the output of this function when f is 25 with the wavelet w we were using before:
End of explanation
"""
import scipy.optimize
f = 25
scipy.optimize.minimize(ricker, x0=0, args=(f))
"""
Explanation: Now we call SciPy's minimize function on our ricker function. It iteratively searches for a minimum solution, then gives us the x (which is really t in our case) at that minimum:
End of explanation
"""
(0.02 - 0.01559) / 0.02
"""
Explanation: So the minimum amplitude, given by fun, is $-0.44626$ and it occurs at an x (time) of $\pm 0.01559\ \mathrm{s}$.
In comparison, the minima of the cosine function occur at a time of $\pm 0.02\ \mathrm{s}$. In other words, the period appears to be $0.02 - 0.01559 = 0.00441\ \mathrm{s}$ shorter than the pure waveform, which is...
End of explanation
"""
import sympy as sp
t = sp.Symbol('t')
f = sp.Symbol('f')
r = (1 - 2*(sp.pi*f*t)**2) * sp.exp(-(sp.pi*f*t)**2)
"""
Explanation: ...about 22% shorter. This means that if we naively estimate frequency by counting peaks or zero crossings, we'll tend to overestimate the peak frequency of the wavelet by about 22% — assuming it is approximately Ricker-like; if it isn't we can use the same method to estimate the error for other functions.
This is good to know, but it would be interesting to know if this parameter depends on frequency, and also to have a more precise way to describe it than a decimal. To get at these questions, we need an analytic solution.
Find minima analytically
Python's SymPy package is a bit like Maple — it understands math symbolically. We'll use sympy.solve to find an analytic solution. It turns out that it needs the Ricker function writing in yet another way, using SymPy symbols and expressions for $\mathrm{e}$ and $\pi$.
End of explanation
"""
sp.solvers.solve(r, t)
"""
Explanation: Now we can easily find the solutions to the Ricker equation, that is, the times at which the function is equal to zero:
End of explanation
"""
dwdt = sp.diff(r, t)
sp.solvers.solve(dwdt, t)
"""
Explanation: But this is not quite what we want. We need the minima, not the zero-crossings.
Maybe there's a better way to do this, but here's one way. Note that the gradient (slope or derivative) of the Ricker function is zero at the minima, so let's just solve the first time derivative of the Ricker function. That will give us the three times at which the function has a gradient of zero.
End of explanation
"""
np.sqrt(6) / (2 * np.pi * 25)
"""
Explanation: In other words, the non-zero minima of the Ricker function are at:
$$\pm \frac{\sqrt{6}}{2\pi f}$$
Let's just check that this evaluates to the same answer we got from scipy.optimize, which was 0.01559.
End of explanation
"""
r.subs({t: sp.sqrt(6)/(2*sp.pi*f)})
"""
Explanation: The solutions agree.
While we're looking at this, we can also compute the analytic solution to the amplitude of the minima, which SciPy calculated as -0.446. We just substitute one of the expressions for the minimum time into the expression for r:
End of explanation
"""
(np.pi * 25) / np.sqrt(6)
"""
Explanation: Apparent frequency
So what's the result of all this? What's the correction we need to make?
The minima of the Ricker wavelet are $\sqrt{6}\ /\ \pi f_\mathrm{actual}\ \mathrm{s}$ apart — this is the apparent period. If we're assuming a pure tone, this period corresponds to an apparent frequency of $\pi f_\mathrm{actual}\ /\ \sqrt{6}\ \mathrm{Hz}$. For $f = 25\ \mathrm{Hz}$, this apparent frequency is:
End of explanation
"""
32.064 * np.sqrt(6) / (np.pi)
"""
Explanation: If we were to try to model the data with a Ricker of 32 Hz, the frequency will be too high. We need to multiply the frequency by a factor of $\sqrt{6} / \pi$, like so:
End of explanation
"""
np.sqrt(6) / np.pi
"""
Explanation: This gives the correct frequency of 25 Hz.
To sum up, rearranging the expression above:
$$f_\mathrm{actual} = f_\mathrm{apparent} \frac{\sqrt{6}}{\pi}$$
Expressed as a decimal, the factor we were seeking is therefore $\sqrt{6}\ /\ \pi$:
End of explanation
"""
|
Pybonacci/notebooks | Machine Learning/Regresión lineal.ipynb | bsd-2-clause | from sklearn import datasets
boston = datasets.load_boston()
"""
Explanation: Linear models are fundamental in both statistics and machine learning, since many methods rely on linear combinations of the variables that describe the data. The simplest thing to do is fit a straight line with LinearRegression, but we will see that we have a much wider range of tools at our disposal.
To show how these models work we will use one of the datasets that scikit-learn already includes.
End of explanation
"""
print(boston.DESCR)
"""
Explanation: The Boston dataset is a dataset for analyzing housing prices in the Boston region. With boston.DESCR we can get a description of the dataset, with information about it such as the type of attributes.
End of explanation
"""
from sklearn.linear_model import LinearRegression
lr = LinearRegression(normalize=True)
"""
Explanation: We see that we have 506 samples with 13 attributes that will help us predict the median home value. Now, not all of the attributes will be significant, nor will they all carry the same weight in determining the home price; but that is something we will work out as we gain experience and intuition.
LinearRegression
Now that we have the data, let's fit a straight line to see what trend prices follow as a function of each attribute.
The first step is to import LinearRegression and create an object.
End of explanation
"""
lr.fit(boston.data, boston.target)
"""
Explanation: Once we are clear about which model to use, the next step is to train it with the independent and dependent variables we have. For that, scikit-learn provides functions of the form:
model.fit(X, y)
End of explanation
"""
for (feature, coef) in zip(boston.feature_names, lr.coef_):
print('{:>7}: {: 9.5f}'.format(feature, coef))
"""
Explanation: Since this is a simple model with very few samples, it will take very little time to train. Once the process is complete we can look at the coefficients it has assigned to each attribute and see how each one contributes to the final home price.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
def plot_feature(feature):
f = (boston.feature_names == feature)
plt.scatter(boston.data[:,f], boston.target, c='b', alpha=0.3)
plt.plot(boston.data[:,f], boston.data[:,f]*lr.coef_[f] + lr.intercept_, 'k')
plt.legend(['Predicted value', 'Actual value'])
plt.xlabel(feature)
plt.ylabel("Median value in $1000's")
plot_feature('AGE')
"""
Explanation: With this we already have a rough idea of which factors contribute most to raising or lowering the price of a home. But let's not jump to hasty conclusions, as Reinhart and Rogoff once did, and visualize the data first.
End of explanation
"""
predictions = lr.predict(boston.data)
f, ax = plt.subplots(1)
ax.hist(boston.target - predictions, bins=50, alpha=0.7)
ax.set_title('Histogram of residuals')
ax.text(0.95, 0.90, 'Mean of residuals: {:.3e}'.format(np.mean(boston.target - predictions)),
transform=ax.transAxes, verticalalignment='top', horizontalalignment='right')
"""
Explanation: In this case we have plotted the median home value against the proportion of homes in the area built before 1940. As we can clearly see, using a single parameter (AGE) to determine the home price with a straight line does not seem ideal. But if we take all the variables into account, the predictions will probably improve.
So let's use the trained model to predict home prices. In this case, though, we will not use new data but the same data we used to train the model, so we can see the differences.
End of explanation
"""
from sklearn.datasets import make_regression
reg_data, reg_target = make_regression(n_samples=2000, n_features=3, effective_rank=2, noise=10)
"""
Explanation: We can see that the mean error is negligible and that most of the values are concentrated around 0. But how did we arrive at those values?
The idea behind linear regression is to find a set of coefficients $\beta$ that satisfy
$$y = X\beta,$$
where $X$ is our data matrix and $y$ are our target values. Since it is very unlikely that our values of $X$ will produce coefficients that fully satisfy the equation, we need to add an error term $\varepsilon$, such that
$$y = X\beta + \varepsilon.$$
In order to obtain the set of coefficients $\beta$ that relate $X$ to $y$, LinearRegression resorts to the method of least squares
$$\underset{\beta}{min\,} {|| X \beta - y||_2}^2.$$
This problem also has an analytic solution,
$$\beta = (X^T X)^{-1}X^Ty,$$
but what happens if our data are not independent? In that case $X^T X$ is not invertible, and if we have columns that are functions of other columns, or are correlated in some way, the least-squares estimate becomes highly sensitive to random errors and the variance increases.
Regularization
For those cases we will use the Ridge model, which adds a regularization factor $\alpha$, known as the Tikhonov factor.
$$\underset{\beta}{min\,} {{|| X \beta - y||_2}^2 + \alpha {||\beta||_2}^2},$$
and the analytic solution then becomes
$$\beta = (X^T X + \alpha I)^{-1}X^Ty.$$
Let's see an example. Instead of loading a dataset we will build one ourselves, with three attributes of which only two are linearly independent. For that we use the make_regression function.
End of explanation
"""
from sklearn.linear_model import RidgeCV
"""
Explanation: We will also want to optimize the value of $\alpha$. We will do that with cross-validation via the RidgeCV object, which uses a technique similar to leave-one-out cross-validation (LOOCV), i.e., leaving one sample out for testing while training on the rest.
End of explanation
"""
alphas = np.linspace(0.01, 0.5)
rcv = RidgeCV(alphas=alphas, store_cv_values=True)
rcv.fit(reg_data, reg_target)
plt.rc('text', usetex=False)
f, ax = plt.subplots()
ax.plot(alphas, rcv.cv_values_.mean(axis=0))
ax.text(0.05, 0.90, 'alpha that minimizes the error: {:.3f}'.format(rcv.alpha_),
transform=ax.transAxes)
"""
Explanation: When creating the object we pass it the values of $\alpha$ to evaluate. We also store the data obtained during cross-validation with store_cv_values=True so we can plot it.
End of explanation
"""
f, ax = plt.subplots()
x = np.linspace(0, 2*np.pi)
y = np.sin(x)
ax.plot(x, np.sin(x), 'r', label='noise-free')
# add some noise
xr = x + np.random.normal(scale=0.1, size=x.shape)
yr = y + np.random.normal(scale=0.2, size=y.shape)
ax.scatter(xr, yr, label='noisy')
ax.legend()
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
"""
Explanation: With rcv.alpha_ we get the value of $\alpha$ that our RidgeCV method considers to minimize the error, which we have also just checked graphically.
But there are many methods for linear regression, and in the scikit-learn documentation you will find a fairly complete description of each alternative.
Nonlinear regression
Now, what do we do when the relationship is not linear and we believe a polynomial would give a better fit? If we take as an example a function $f$ of the form
$$f(x) = a + bx + cx^2 $$
the function $f$ is nonlinear as a function of $x$, but it is linear as a function of the unknown parameters $a$, $b$, and $c$. Put another way: we can replace our variable $x$ with an array $z$ such that
$$ z = [1, x, x^2] $$
with which we can rewrite our function $f$ as
$$ f(z) = az_0 + bz_1 + cz_2$$
For this, scikit-learn provides the PolynomialFeatures tool. Let's see an example.
In this case we will take the sine function between 0 and 2$\pi$ and add some noise to it.
End of explanation
"""
f, ax = plt.subplots()
ax.plot(x, np.sin(x), 'r', label='noise-free')
ax.scatter(xr, yr, label='noisy')
X = xr[:, np.newaxis]
for degree in [3, 4, 5]:
model = make_pipeline(PolynomialFeatures(degree), Ridge())
    model.fit(X, y)
    y_pred = model.predict(x[:, np.newaxis])  # don't overwrite y, or later degrees fit the previous prediction
    ax.plot(x, y_pred, '--', lw=2, label="degree %d" % degree)
ax.legend()
"""
Explanation: Scikit-learn has a PolynomialFeatures object that will convert our variable $x$ into an array $z$ of the form $z = [1, x, x^2, \ldots, x^n]$, which is what we want.
The result of that transformation is passed to our Ridge model. To make this kind of workflow easier — where several steps are carried out, from data pre-processing to training and possibly post-processing — we can use Pipelines, which let us chain multiple estimators into one. This is especially useful when there is a fixed sequence of data-processing steps such as, for example, feature selection, normalization, and classification.
End of explanation
"""
f, ax = plt.subplots()
ax.plot(x, np.sin(x), 'r', label='noise-free')
ax.scatter(xr, yr, label='noisy')
X = xr[:, np.newaxis]
for degree in [3, 4, 5]:
model = make_pipeline(PolynomialFeatures(degree), RidgeCV(alphas=alphas))
    model.fit(X, y)
    y_pred = model.predict(x[:, np.newaxis])  # don't overwrite y, or later degrees fit the previous prediction
    ax.plot(x, y_pred, '--', lw=2, label="degree %d" % degree)
ax.legend()
"""
Explanation: We have just used a Ridge model that applies regularization, but without optimizing it. What happens if we optimize the regularization parameter $\alpha$ with RidgeCV?
End of explanation
"""
|
Fetisoff/Portfolio | 0. Python (Basic) Explore U.S. Births/Basics.ipynb | apache-2.0 | f = open('US_births_1994-2003_CDC_NCHS.csv', 'r')
data = f.read()
data
data_spl = data.split("\n")
data_spl
data_spl[0:10]
"""
Explanation: Explore U.S. Births
In this project, I am working with the dataset compiled by FiveThirtyEight [https://raw.githubusercontent.com/fivethirtyeight/data/master/births/US_births_1994-2003_CDC_NCHS.csv]
First things first, let's read in the CSV file and explore it. Split the string on the newline character ("\n").
End of explanation
"""
def read_csv(input_csv):
f = open(input_csv, 'r')
data = f.read()
splited = data.split('\n')
string_list = splited[1:len(splited)]
final_list = []
for each in string_list:
int_fields = []
string_fields = each.split(',')
for each in string_fields:
int_fields.append(int(each))
final_list.append(int_fields)
return final_list
cdc_list = read_csv("US_births_1994-2003_CDC_NCHS.csv")
cdc_list[0:10]
"""
Explanation: Converting Data Into A List Of Lists
We need to convert the dataset into a list of lists where each nested list contains integer values (not strings). We also need to remove the header row.
End of explanation
"""
def month_births(input_ls):
    births_per_month = {}
    for each in input_ls:
        month = each[1]
        births = each[4]
        if month in births_per_month:
            births_per_month[month] = births_per_month[month] + births
        else:
            births_per_month[month] = births
    return births_per_month
cdc_month_births = month_births(cdc_list)
cdc_month_births
"""
Explanation: Calculating Number Of Births Each Month
Now that the data is in a more usable format, we can start to analyze it. Let's calculate the total number of births that occurred in each month, across all of the years in the dataset. We'll create a dictionary where each key is a unique month and each value is the number of births that happened in that month, across all years:
End of explanation
"""
def dow_births(input_ls):
    b_per_day = {}
    for each in input_ls:
        day_of_week = each[3]
        births = each[4]
        if day_of_week in b_per_day:
            b_per_day[day_of_week] = b_per_day[day_of_week] + births
        else:
            b_per_day[day_of_week] = births
    return b_per_day
cdc_day_births = dow_births(cdc_list)
cdc_day_births
"""
Explanation: Calculating Number Of Births Each Day Of Week
End of explanation
"""
def calc_counts(input_ls, column):
    dictionary = {}
    for each in input_ls:
        births = each[4]
        key = each[column]
        if key in dictionary:
            dictionary[key] = dictionary[key] + births
        else:
            dictionary[key] = births
    return dictionary
cdc_year_births = calc_counts(cdc_list, 0)
cdc_year_births
cdc_month_births = calc_counts(cdc_list, 1)
cdc_month_births
cdc_dom_births = calc_counts(cdc_list, 2)
cdc_dom_births
cdc_dow_births = calc_counts(cdc_list, 3)
cdc_dow_births
def calc_min(dict_ls):
    # Return the largest and smallest values stored in the dictionary.
    values = dict_ls.values()
    min_value = min(values)
    max_value = max(values)
    return max_value, min_value
g = calc_min(cdc_dow_births)
g
births_values = cdc_dow_births.values()
births_values
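A related sketch: to find which day of the week had the fewest and the most births (the keys, not just the values), Python's built-in min/max with a key function works on the dictionary directly. The `extreme_days` name and the `sample` totals below are illustrative, not values from the dataset:

```python
def extreme_days(births_by_day):
    # Return the (day, births) pairs with the fewest and most births.
    lowest = min(births_by_day, key=births_by_day.get)
    highest = max(births_by_day, key=births_by_day.get)
    return (lowest, births_by_day[lowest]), (highest, births_by_day[highest])

# Hypothetical day-of-week totals (1 = Monday ... 7 = Sunday).
sample = {1: 5789166, 2: 6446961, 6: 4562111, 7: 4079723}
print(extreme_days(sample))
```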
"""
Explanation: Creating A More General Function
It's better to create a single function that works for any column and specify the column we want as a parameter each time we call the function.
End of explanation
"""
|
facemelters/data-science | Atlas/Draft Final Project.ipynb | gpl-2.0 | df = pd.read_csv('atlas-taggings.csv')
df[2:5]
"""
Explanation: First we import the table of tag-article mappings from our SQL database
(but read in as a .csv).
End of explanation
"""
articles = df[df.tagged_type == 'Article']
"""
Explanation: We only care about the content type "Article"
End of explanation
"""
articles.tag_url = articles.tag_url.apply(get_tag_name)
articles = get_dummies_and_join(articles,'tag_url')
articles = articles.drop(['tag_id','tag_url','tagged_type','tagged_id'],axis=1)
articles = unique_article_set(articles,'tagged_url')
articles = articles.reset_index().set_index('tagged_url')
"""
Explanation: But we need to get the tag name out of the url string for the tag
End of explanation
"""
pageviews = pd.read_csv('output_articles_performance.csv',header=None,names=[
'url','published','pageviews'
])
pageviews.url = ['www.atlasobscura.com/articles/' + x for x in pageviews.url]
pageviews.describe()
pageviews.set_index('url',inplace=True)
article_set = articles.join(pageviews)
article_set['ten_thousand'] = target_pageview_cutoff(10000,article_set.pageviews)
article_set['published'] = pd.to_datetime(article_set['published'])
article_set['year'] = get_year(article_set,'published')
article_set.pageviews.plot(kind='density',title='Page View Distribution, All Articles')
ax = article_set.boxplot(column='pageviews',by='year',figsize=(6,6),showfliers=False)
ax.set(title='PV distribution by year of publication, no outliers',ylabel='pageviews')
sns.factorplot(
x='year',
y='ten_thousand',
data = article_set
)
total_tagged = get_total_tagged(article_set,'num_tagged')
article_set.fillna(value=0,inplace=True)
y = article_set.ten_thousand
X = article_set.drop(['pageviews','published','ten_thousand'],axis=1)
cross_val_score = get_cross_validation_score(X,y,linear_model.LogisticRegression(penalty = 'l1'),
n_folds=5)
lr = linear_model.LogisticRegression(penalty = 'l1').fit(X,y)
lr_scores = lr.predict_proba(X)[:,1]
roc_score = get_roc_scores(y,lr_scores)
print roc_score
single_tag_probabilities = get_probabilities(lr,X)
"""
Explanation: Import the table of URLs and total pageviews
End of explanation
"""
params = {'n_neighbors' : [x for x in range(2,100,4)],
'weights' : ['distance','uniform']}
gs = GridSearchCV(estimator = KNeighborsClassifier(),param_grid=params,
n_jobs=-1,cv=10,verbose=1)
gs.fit(X,y)
print gs.best_params_
print gs.best_score_
knn = gs.best_estimator_.fit(X,y)
knn_probs = get_probabilities(knn,X)
knn_cross_val_score = get_cross_validation_score(X,y,knn,5)
knn_scores = knn.predict_proba(X)[:,1]
knn_roc_score = get_roc_scores(y,knn_scores)
params_rfc = {'max_depth': np.arange(20,100,5),
'min_samples_leaf': np.arange(90,200,5),
'n_estimators': [20],
'criterion' : ['gini','entropy']
}
gs1 = GridSearchCV(RandomForestClassifier(),param_grid=params_rfc, cv=10, scoring='roc_auc',n_jobs=-1,verbose=1)
gs1.fit(X,y)
print gs1.best_params_
print gs1.best_score_
"""
Explanation: Now we will explore how a KNN classifier does with our dataset
End of explanation
"""
rf = gs1.best_estimator_
rf.fit(X,y)
rf_cross_val_score = get_cross_validation_score(X,y,rf,5)
rf_scores = rf.predict_proba(X)[:,1]
rf_roc_score = get_roc_scores(y,rf_scores)
print "Logistic Regression Cross-validation Score: ", cross_val_score
print "K Nearest Neighbors Cross-validation Score: ", knn_cross_val_score
print "RandomForest Cross-validation Score: ", rf_cross_val_score
print "Logistic Regressions ROC AUC score: ", roc_score
print "K Nearest Neighbors ROC AUC score: ", knn_roc_score
print "RandomForest ROC AUC score: ", rf_roc_score
"""
Explanation: Now we will explore how a RandomForest classifier does with our dataset
End of explanation
"""
url, taglist = get_article_tags('http://www.atlasobscura.com/articles/the-ao-exit-interview-12-years-in-the-blue-man-group')
transformed_article = transform_article_for_prediction(url,article_set)
"""
Explanation: Prediction of a given URL
End of explanation
"""
article_set.head(1)
y1 = article_set[article_set.year >= 2016].ten_thousand
X1 = article_set[article_set.year >= 2016].drop(['pageviews','published','ten_thousand'],axis=1)
cross_val_score1 = get_cross_validation_score(X1,y1,linear_model.LogisticRegression(penalty = 'l1'),
n_folds=5)
lr1 = linear_model.LogisticRegression(penalty = 'l1').fit(X1,y1)
lr_scores1 = lr1.predict_proba(X1)[:,1]
roc_score1 = get_roc_scores(y1,lr_scores1)
print roc_score1
"""
Explanation: Refining the model
We will reproduce our results looking only at articles published in 2016 onward.
End of explanation
"""
simplereach = pd.read_csv('~/Downloads/all-content-simplereach.csv')
simplereach.Url = simplereach.Url.apply(get_simplereach_url)
simplereach = simplereach.set_index('Url')
simplereach = simplereach[['Avg Engaged Time','Social Actions','Facebook Shares','FaceBook Referrals']]
article_set2 = article_set.join(simplereach['Facebook Shares'])
article_set2['five_hundred_shares'] = target_pageview_cutoff(500,article_set2['Facebook Shares'])
"""
Explanation: Now we will rebuild our model to have it predict if an article will receive over 500 Facebook shares.
End of explanation
"""
y2 = article_set2.five_hundred_shares
X2 = article_set2.drop(['pageviews',
'published',
'ten_thousand',
'Facebook Shares',
'five_hundred_shares'
],axis=1)
cross_val_score_social = get_cross_validation_score(X2,y2,linear_model.LogisticRegression(penalty = 'l1'),
n_folds=5)
lr_social = linear_model.LogisticRegression(penalty = 'l1').fit(X2,y2)
lr_scores_social = lr_social.predict_proba(X2)[:,1]
roc_score_social = get_roc_scores(y2,lr_scores_social)
print "Cross-val score when predicting Facebook shares > 500: ", cross_val_score_social
print "ROC AUC score when predicting Facebook shares > 500: ",roc_score_social
url = 'http://www.atlasobscura.com/articles/winters-effigies-the-deviant-history-of-the-snowman'
lr_social.predict(transform_article_for_prediction(url,X2))
"""
Explanation: Logistic Regression with FB Shares > 500 as target
End of explanation
"""
params_social = {'n_neighbors' : [x for x in range(2,100,4)],
'weights' : ['distance','uniform']}
gs_social = GridSearchCV(estimator = KNeighborsClassifier(),param_grid=params,
n_jobs=-1,cv=10,verbose=1)
gs_social.fit(X2,y2)
print gs_social.best_params_
print gs_social.best_score_
knn_social = gs_social.best_estimator_.fit(X2,y2)
knn_probs_social = get_probabilities(knn_social,X2)
knn_cross_val_score_social = get_cross_validation_score(X2,y2,knn_social,5)
knn_scores_social = knn_social.predict_proba(X2)[:,1]
knn_roc_score_social = get_roc_scores(y2,knn_scores_social)
"""
Explanation: KNN with FB Shares > 500 as target
End of explanation
"""
params_rfc = {'max_depth': np.arange(20,100,5),
'min_samples_leaf': np.arange(90,200,5),
'n_estimators': [20]}
gs1_social = GridSearchCV(RandomForestClassifier(),param_grid=params_rfc, cv=10, scoring='roc_auc',n_jobs=-1,verbose=1)
gs1_social.fit(X2,y2)
rf_social = gs1_social.best_estimator_
rf_social.fit(X2,y2)
rf_cross_val_score_social = get_cross_validation_score(X2,y2,rf_social,5)
rf_scores_social = rf_social.predict_proba(X)[:,1]
rf_roc_score_social = get_roc_scores(y2,rf_scores_social)
print gs1_social.best_params_
print gs1_social.best_score_
print "Logistic Regression Cross-validation Score: ", cross_val_score_social
print "K Nearest Neighbors Cross-validation Score: ", knn_cross_val_score_social
print "RandomForest Cross-validation Score: ", rf_cross_val_score_social
print "Logistic Regressions ROC AUC score: ", roc_score_social
print "K Nearest Neighbors ROC AUC score: ", knn_roc_score_social
print "RandomForest ROC AUC score: ", rf_roc_score_social
np.mean(y)
simplereach.describe()
"""
Explanation: RandomForest with FB Shares > 500 as target
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/train_models_with_tensorFlow_decision_forests.ipynb | apache-2.0 | # Install the specified package
!pip install tensorflow_decision_forests
"""
Explanation: Building, Training and Evaluating Models with TensorFlow Decision Forests
Overview
In this lab, you use the TensorFlow Decision Forests (TF-DF) library for the training, evaluation, interpretation and inference of Decision Forest models.
Learning Objective
In this notebook, you learn how to:
Train a binary classification Random Forest on a dataset containing numerical, categorical and missing features.
Evaluate the model on a test dataset and prepare the model for TensorFlow Serving.
Examine the overall structure of the model and the importance of each feature.
Re-train the model with a different learning algorithm (Gradient Boosted Decision Trees) and use a different set of input features.
Change the hyperparameters of the model.
Preprocess the features and train a model for regression.
Train a model for ranking.
Introduction
This tutorial shows how to use the TensorFlow Decision Forests (TF-DF) library for the training, evaluation, interpretation and inference of Decision Forest models.
Decision Forests (DF) are a large family of Machine Learning algorithms for supervised classification, regression and ranking. As the name suggests, DFs use decision trees as a building block. Today, the two most popular DF training algorithms are Random Forests and Gradient Boosted Decision Trees. Both algorithms are ensemble techniques that use multiple decision trees, but differ on how they do it.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Installing TensorFlow Decision Forests
Install TF-DF by running the following cell.
End of explanation
"""
# Install the specified package
!pip install wurlitzer
"""
Explanation: Please ignore any incompatibility errors.
Install Wurlitzer to display
the detailed training logs. This is only needed in colabs.
End of explanation
"""
# Import necessary libraries
import tensorflow_decision_forests as tfdf
import os
import numpy as np
import pandas as pd
import tensorflow as tf
import math
try:
from wurlitzer import sys_pipes
except:
from colabtools.googlelog import CaptureLog as sys_pipes
from IPython.core.magic import register_line_magic
from IPython.display import Javascript
"""
Explanation: Importing libraries
End of explanation
"""
# Some of the model training logs can cover the full
# screen if not compressed to a smaller viewport.
# This magic allows setting a max height for a cell.
@register_line_magic
def set_cell_height(size):
    display(
        Javascript("google.colab.output.setIframeHeight(0, true, {maxHeight: " +
                   str(size) + "})"))
# Check the version of TensorFlow Decision Forests
print("Found TensorFlow Decision Forests v" + tfdf.__version__)
"""
Explanation: The hidden code cell limits the output height in colab.
End of explanation
"""
# Download the dataset
!gsutil cp gs://cloud-training/mlongcp/v3.0_MLonGC/toy_data/penguins_toy.csv /tmp/penguins.csv
# Load a dataset into a Pandas Dataframe.
dataset_df = pd.read_csv("/tmp/penguins.csv")
# Display the first 3 examples.
dataset_df.head(3)
"""
Explanation: Training a Random Forest model
In this section, we train, evaluate, analyse and export a binary classification Random Forest trained on the Palmer's Penguins dataset.
<center>
<img src="https://allisonhorst.github.io/palmerpenguins/man/figures/palmerpenguins.png" width="150"/></center>
Note: The dataset was exported to a csv file without pre-processing: library(palmerpenguins); write.csv(penguins, file="penguins_toy.csv", quote=F, row.names=F).
Load the dataset and convert it in a tf.Dataset
This dataset is very small (300 examples) and stored as a .csv-like file. Therefore, use Pandas to load it.
Note: Pandas is practical as you don't have to type in the names of the input features to load them. For larger datasets (>1M examples), reading the files with a
TensorFlow Dataset may be better suited.
Let's assemble the dataset into a csv file (i.e. add the header), and load it:
End of explanation
"""
# Encode the categorical label into an integer.
#
# Details:
# This stage is necessary if your classification label is represented as a
# string. Note: Keras expected classification labels to be integers.
# Name of the label column.
label = "species"
classes = dataset_df[label].unique().tolist()
print(f"Label classes: {classes}")
dataset_df[label] = dataset_df[label].map(classes.index)
"""
Explanation: The dataset contains a mix of numerical (e.g. bill_depth_mm), categorical
(e.g. island) and missing features. TF-DF supports all these feature types natively (unlike NN-based models), therefore there is no need for preprocessing in the form of one-hot encoding, normalization or an extra is_present feature.
Labels are a bit different: Keras metrics expect integers. The label (species) is stored as a string, so let's convert it into an integer.
End of explanation
"""
# Split the dataset into a training and a testing dataset.
def split_dataset(dataset, test_ratio=0.30):
    """Splits a pandas dataframe in two."""
    test_indices = np.random.rand(len(dataset)) < test_ratio
    return dataset[~test_indices], dataset[test_indices]
train_ds_pd, test_ds_pd = split_dataset(dataset_df)
print("{} examples in training, {} examples for testing.".format(
len(train_ds_pd), len(test_ds_pd)))
"""
Explanation: Next split the dataset into training and testing:
End of explanation
"""
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(train_ds_pd, label=label)
test_ds = tfdf.keras.pd_dataframe_to_tf_dataset(test_ds_pd, label=label)
"""
Explanation: And finally, convert the pandas dataframe (pd.Dataframe) into tensorflow datasets (tf.data.Dataset):
End of explanation
"""
%set_cell_height 300
# Specify the model.
model_1 = tfdf.keras.RandomForestModel()
# Optionally, add evaluation metrics.
model_1.compile(
metrics=["accuracy"])
# Train the model.
# "sys_pipes" is optional. It enables the display of the training logs.
# TODO
with sys_pipes():
model_1.fit(x=train_ds)
"""
Explanation: Notes: pd_dataframe_to_tf_dataset could have converted the label to an integer for you.
And, if you wanted to create the tf.data.Dataset yourself, there are a couple of things to remember:
The learning algorithms work with a one-epoch dataset and without shuffling.
The batch size does not impact the training algorithm, but a small value might slow down reading the dataset.
Train the model
End of explanation
"""
# TODO
# Evaluate the model
evaluation = model_1.evaluate(test_ds, return_dict=True)
print()
for name, value in evaluation.items():
print(f"{name}: {value:.4f}")
"""
Explanation: Remarks
No input features are specified. Therefore, all the columns will be used as
input features except for the label. The features used by the model are shown
in the training logs and in the model.summary().
DFs consume natively numerical, categorical, categorical-set features and
missing-values. Numerical features do not need to be normalized. Categorical
string values do not need to be encoded in a dictionary.
No training hyper-parameters are specified. Therefore the default
hyper-parameters will be used. Default hyper-parameters provide
reasonable results in most situations.
Calling compile on the model before the fit is optional. Compile can be
used to provide extra evaluation metrics.
Training algorithms do not need validation datasets. If a validation dataset
is provided, it will only be used to show metrics.
Note: A Categorical-Set feature is composed of a set of categorical values (while a Categorical is only one value). More details and examples are given later.
Evaluate the model
Let's evaluate our model on the test dataset.
End of explanation
"""
# Save the model
model_1.save("/tmp/my_saved_model")
"""
Explanation: Remark: The test accuracy is close to the Out-of-bag accuracy
shown in the training logs.
See the Model Self Evaluation section below for more evaluation methods.
Prepare this model for TensorFlow Serving.
Export the model to the SavedModel format for later re-use e.g.
TensorFlow Serving.
End of explanation
"""
# Plot the first tree of the model
tfdf.model_plotter.plot_model_in_colab(model_1, tree_idx=0, max_depth=3)
"""
Explanation: Plot the model
Plotting a decision tree and following its first branches helps in learning about decision forests. In some cases, plotting a model can even be used for debugging.
Because of the differences in the way they are trained, some models are more interesting to plot than others. Because of the noise injected during training and the depth of the trees, plotting a Random Forest is less informative than plotting a CART or the first tree of a Gradient Boosted Tree.
Nevertheless, let's plot the first tree of our Random Forest model:
End of explanation
"""
# Print the overall structure of the model
%set_cell_height 300
model_1.summary()
"""
Explanation: The root node on the left contains the first condition (bill_depth_mm >= 16.55), number of examples (240) and label distribution (the red-blue-green bar).
Examples for which bill_depth_mm >= 16.55 evaluates to true are branched to the green path. The other ones are branched to the red path.
The deeper the nodes, the purer they become, i.e. the label distribution becomes biased toward a subset of classes.
Note: Hover the mouse over the plot for details.
Model structure and feature importance
The overall structure of the model is shown with .summary(). You will see:
Type: The learning algorithm used to train the model (Random Forest in
our case).
Task: The problem solved by the model (Classification in our case).
Input Features: The input features of the model.
Variable Importance: Different measures of the importance of each
feature for the model.
Out-of-bag evaluation: The out-of-bag evaluation of the model. This is a
cheap and efficient alternative to cross-validation.
Number of {trees, nodes} and other metrics: Statistics about the
structure of the decision forests.
Remark: The summary's content depends on the learning algorithm (e.g.
Out-of-bag is only available for Random Forest) and the hyper-parameters (e.g.
the mean-decrease-in-accuracy variable importance can be disabled in the
hyper-parameters).
End of explanation
"""
# The input features
model_1.make_inspector().features()
# The feature importances
model_1.make_inspector().variable_importances()
"""
Explanation: The information in the summary is all available programmatically using the model inspector:
End of explanation
"""
# TODO
# Evaluate the model
model_1.make_inspector().evaluation()
"""
Explanation: The content of the summary and the inspector depends on the learning algorithm (tfdf.keras.RandomForestModel in this case) and its hyper-parameters (e.g. compute_oob_variable_importances=True will trigger the computation of Out-of-bag variable importances for the Random Forest learner).
Model Self Evaluation
During training TFDF models can self evaluate even if no validation dataset is provided to the fit() method. The exact logic depends on the model. For example, Random Forest will use Out-of-bag evaluation while Gradient Boosted Trees will use internal train-validation.
Note: While this evaluation is computed during training, it is NOT computed on the training dataset and can be used as a low quality evaluation.
The model self evaluation is available with the inspector's evaluation():
End of explanation
"""
%set_cell_height 150
model_1.make_inspector().training_logs()
"""
Explanation: Plotting the training logs
The training logs show the quality of the model (e.g. accuracy evaluated on the out-of-bag or validation dataset) according to the number of trees in the model. These logs are helpful to study the balance between model size and model quality.
The logs are available in multiple ways:
Displayed during training if fit() is wrapped in with sys_pipes(): (see example above).
At the end of the model summary i.e. model.summary() (see example above).
Programmatically, using the model inspector i.e. model.make_inspector().training_logs().
Using TensorBoard
Let's try the options 2 and 3:
End of explanation
"""
# Import necessary libraries
import matplotlib.pyplot as plt
logs = model_1.make_inspector().training_logs()
# Plot the logs
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot([log.num_trees for log in logs], [log.evaluation.accuracy for log in logs])
plt.xlabel("Number of trees")
plt.ylabel("Accuracy (out-of-bag)")
plt.subplot(1, 2, 2)
plt.plot([log.num_trees for log in logs], [log.evaluation.loss for log in logs])
plt.xlabel("Number of trees")
plt.ylabel("Logloss (out-of-bag)")
plt.show()
"""
Explanation: Let's plot it:
End of explanation
"""
# This cell starts TensorBoard, which can be slow.
# Load the TensorBoard notebook extension
%load_ext tensorboard
# Clear existing results (if any)
!rm -fr "/tmp/tensorboard_logs"
# Export the meta-data to tensorboard.
model_1.make_inspector().export_to_tensorboard("/tmp/tensorboard_logs")
# Start a tensorboard instance.
%tensorboard --logdir "/tmp/tensorboard_logs"
"""
Explanation: This dataset is small. You can see the model converging almost immediately.
Let's use TensorBoard:
End of explanation
"""
# List all algorithms
tfdf.keras.get_all_models()
"""
Explanation: Re-train the model with a different learning algorithm
The learning algorithm is defined by the model class. For
example, tfdf.keras.RandomForestModel() trains a Random Forest, while
tfdf.keras.GradientBoostedTreesModel() trains a Gradient Boosted Decision
Trees.
The learning algorithms are listed by calling tfdf.keras.get_all_models() or in the
learner list.
End of explanation
"""
# help works anywhere.
help(tfdf.keras.RandomForestModel)
# ? only works in ipython or notebooks, it usually opens on a separate panel.
tfdf.keras.RandomForestModel?
"""
Explanation: The descriptions of the learning algorithms and their hyper-parameters are also available in the API reference and the built-in help:
End of explanation
"""
feature_1 = tfdf.keras.FeatureUsage(name="bill_length_mm")
feature_2 = tfdf.keras.FeatureUsage(name="island")
all_features = [feature_1, feature_2]
# Note: This model is only trained with two features. It will not be as good as
# the one trained on all features.
# TODO
model_2 = tfdf.keras.GradientBoostedTreesModel(
features=all_features, exclude_non_specified_features=True)
model_2.compile(metrics=["accuracy"])
model_2.fit(x=train_ds, validation_data=test_ds)
print(model_2.evaluate(test_ds, return_dict=True))
"""
Explanation: Using a subset of features
The previous example did not specify the features, so all the columns were used
as input features (except for the label). The following example shows how to
specify input features.
End of explanation
"""
# Define the features
%set_cell_height 300
feature_1 = tfdf.keras.FeatureUsage(name="year", semantic=tfdf.keras.FeatureSemantic.CATEGORICAL)
feature_2 = tfdf.keras.FeatureUsage(name="bill_length_mm")
feature_3 = tfdf.keras.FeatureUsage(name="sex")
all_features = [feature_1, feature_2, feature_3]
model_3 = tfdf.keras.GradientBoostedTreesModel(features=all_features, exclude_non_specified_features=True)
model_3.compile( metrics=["accuracy"])
with sys_pipes():
model_3.fit(x=train_ds, validation_data=test_ds)
"""
Explanation: Note: As expected, the accuracy is lower than previously.
TF-DF attaches a semantics to each feature. This semantics controls how
the feature is used by the model. The following semantics are currently supported:
Numerical: Generally for quantities or counts with full ordering. For
example, the age of a person, or the number of items in a bag. Can be a
float or an integer. Missing values are represented with float(Nan) or with
an empty sparse tensor.
Categorical: Generally for a type/class in finite set of possible values
without ordering. For example, the color RED in the set {RED, BLUE, GREEN}.
Can be a string or an integer. Missing values are represented as "" (empty
string), value -2 or with an empty sparse tensor.
Categorical-Set: A set of categorical values. Great to represent
tokenized text. Can be a string or an integer in a sparse tensor or a
ragged tensor (recommended). The order/index of each item doesn't matter.
If not specified, the semantics is inferred from the representation type and shown in the training logs:
int, float (dense or sparse) → Numerical semantics.
str (dense or sparse) → Categorical semantics
int, str (ragged) → Categorical-Set semantics
In some cases, the inferred semantics is incorrect. For example: an Enum stored as an integer is semantically categorical, but it will be detected as numerical. In this case, you should specify the semantic argument in the input. The education_num field of the Adult dataset is a classic example.
This dataset doesn't contain such a feature. However, for the demonstration, we will make the model treat the year as a categorical feature:
End of explanation
"""
# A classic but slightly more complex model.
model_6 = tfdf.keras.GradientBoostedTreesModel(
num_trees=500, growing_strategy="BEST_FIRST_GLOBAL", max_depth=8)
model_6.fit(x=train_ds)
# TODO
# A more complex, but possibly, more accurate model.
model_7 = tfdf.keras.GradientBoostedTreesModel(
num_trees=500,
growing_strategy="BEST_FIRST_GLOBAL",
max_depth=8,
split_axis="SPARSE_OBLIQUE",
categorical_algorithm="RANDOM",
)
model_7.fit(x=train_ds)
"""
Explanation: Note that year is in the list of CATEGORICAL features (unlike the first run).
Hyper-parameters
Hyper-parameters are parameters of the training algorithm that impact
the quality of the final model. They are specified in the model class
constructor. The list of hyper-parameters is visible with the question mark colab command (e.g. ?tfdf.keras.GradientBoostedTreesModel).
Alternatively, you can find them on the TensorFlow Decision Forest Github or the Yggdrasil Decision Forest documentation.
The default hyper-parameters of each algorithm match approximately those of the initial publication. To ensure consistency, new features and their matching hyper-parameters are always disabled by default. That's why it is a good idea to tune your hyper-parameters.
End of explanation
"""
# A good template of hyper-parameters.
model_8 = tfdf.keras.GradientBoostedTreesModel(hyperparameter_template="benchmark_rank1")
model_8.fit(x=train_ds)
"""
Explanation: As new training methods are published and implemented, combinations of hyper-parameters can emerge as good, or almost always better, than the default parameters. To avoid changing the default hyper-parameter values, these good combinations are indexed and made available as hyper-parameter templates.
For example, the benchmark_rank1 template is the best combination on our internal benchmarks. These templates are versioned to allow training-configuration stability, e.g. benchmark_rank1@v1.
End of explanation
"""
# The hyper-parameter templates of the Gradient Boosted Tree model.
print(tfdf.keras.GradientBoostedTreesModel.predefined_hyperparameters())
"""
Explanation: The available templates are listed by predefined_hyperparameters. Note that different learning algorithms have different templates, even if the name is similar.
End of explanation
"""
%set_cell_height 300
body_mass_g = tf.keras.layers.Input(shape=(1,), name="body_mass_g")
body_mass_kg = body_mass_g / 1000.0
bill_length_mm = tf.keras.layers.Input(shape=(1,), name="bill_length_mm")
raw_inputs = {"body_mass_g": body_mass_g, "bill_length_mm": bill_length_mm}
processed_inputs = {"body_mass_kg": body_mass_kg, "bill_length_mm": bill_length_mm}
# "preprocessor" contains the preprocessing logic.
preprocessor = tf.keras.Model(inputs=raw_inputs, outputs=processed_inputs)
# "model_4" contains both the pre-processing logic and the decision forest.
model_4 = tfdf.keras.RandomForestModel(preprocessing=preprocessor)
model_4.fit(x=train_ds)
model_4.summary()
"""
Explanation: Feature Preprocessing
Pre-processing features is sometimes necessary to consume signals with complex
structures, to regularize the model or to apply transfer learning.
Pre-processing can be done in one of three ways:
Preprocessing on the Pandas dataframe. This solution is easy to implement
and generally suitable for experimentation. However, the
pre-processing logic will not be exported in the model by model.save().
Keras Preprocessing: While
more complex than the previous solution, Keras Preprocessing is packaged in
the model.
TensorFlow Feature Columns:
This API is part of the TF Estimator library (!= Keras) and planned for
deprecation. This solution is interesting when using existing preprocessing
code.
Note: Using TensorFlow Hub
pre-trained embeddings is often a great way to consume text and images with
TF-DF. For example, hub.KerasLayer("https://tfhub.dev/google/nnlm-en-dim128/2"). See the Intermediate tutorial for more details.
In the next example, pre-process the body_mass_g feature into body_mass_kg = body_mass_g / 1000. The bill_length_mm feature is consumed without pre-processing. Note that such
monotonic transformations generally have no impact on decision forest models.
End of explanation
"""
def g_to_kg(x):
    return x / 1000
feature_columns = [
tf.feature_column.numeric_column("body_mass_g", normalizer_fn=g_to_kg),
tf.feature_column.numeric_column("bill_length_mm"),
]
preprocessing = tf.keras.layers.DenseFeatures(feature_columns)
model_5 = tfdf.keras.RandomForestModel(preprocessing=preprocessing)
model_5.compile(metrics=["accuracy"])
model_5.fit(x=train_ds)
"""
Explanation: The following example re-implements the same logic using TensorFlow Feature
Columns.
End of explanation
"""
# Download the dataset.
!gsutil cp gs://cloud-training/mlongcp/v3.0_MLonGC/toy_data/abalone_raw_toy.csv /tmp/abalone.csv
dataset_df = pd.read_csv("/tmp/abalone.csv")
print(dataset_df.head(3))
# Split the dataset into a training and testing dataset.
train_ds_pd, test_ds_pd = split_dataset(dataset_df)
print("{} examples in training, {} examples for testing.".format(
len(train_ds_pd), len(test_ds_pd)))
# Name of the label column.
label = "Rings"
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(train_ds_pd, label=label, task=tfdf.keras.Task.REGRESSION)
test_ds = tfdf.keras.pd_dataframe_to_tf_dataset(test_ds_pd, label=label, task=tfdf.keras.Task.REGRESSION)
%set_cell_height 300
# TODO
# Configure the regression model.
model_7 = tfdf.keras.RandomForestModel(task = tfdf.keras.Task.REGRESSION)
# Optional.
model_7.compile(metrics=["mse"])
# Train the model.
with sys_pipes():
model_7.fit(x=train_ds)
# Evaluate the model on the test dataset.
evaluation = model_7.evaluate(test_ds, return_dict=True)
print(evaluation)
print()
print(f"MSE: {evaluation['mse']}")
print(f"RMSE: {math.sqrt(evaluation['mse'])}")
"""
Explanation: Training a regression model
The previous example trains a classification model (TF-DF does not differentiate
between binary classification and multi-class classification). In the next
example, train a regression model on the
Abalone dataset. The
objective of this dataset is to predict the number of shell rings of an
abalone.
Note: The csv file is assembled by appending UCI's header and data files. No preprocessing was applied.
<center>
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/33/LivingAbalone.JPG/800px-LivingAbalone.JPG" width="200"/></center>
End of explanation
"""
%set_cell_height 200
archive_path = tf.keras.utils.get_file("letor.zip",
"https://download.microsoft.com/download/E/7/E/E7EABEF1-4C7B-4E31-ACE5-73927950ED5E/Letor.zip",
extract=True)
# Path to the train and test dataset using libsvm format.
raw_dataset_path = os.path.join(os.path.dirname(archive_path),"OHSUMED/Data/All/OHSUMED.txt")
"""
Explanation: Training a ranking model
Finally, after having trained classification and regression models, train a ranking model.
The goal of ranking is to order items by relevance; the absolute "value" of
the relevance score does not matter directly. Ranking a set of documents with regard to
a user query is an example of a ranking problem: only getting the right order matters, and the top documents matter most.
TF-DF expects ranking datasets to be presented in a "flat" format. A
document+query dataset might look like this:
query | document_id | feature_1 | feature_2 | relevance/label
----- | ----------- | --------- | --------- | ---------------
cat | 1 | 0.1 | blue | 4
cat | 2 | 0.5 | green | 1
cat | 3 | 0.2 | red | 2
dog | 4 | NA | red | 0
dog | 5 | 0.2 | red | 1
dog | 6 | 0.6 | green | 1
The relevance/label is a floating point numerical value between 0 and 5
(generally between 0 and 4) where 0 means "completely unrelated", 4 means "very
relevant" and 5 means "the same as the query".
Interestingly, decision forests are often good rankers, and many
state-of-the-art ranking models are decision forests.
In this example, use a sample of the
LETOR3
dataset. More precisely, we want to download the OHSUMED.zip from the LETOR3 repo. This dataset is stored in the
libsvm format, so we will need to convert it to csv.
End of explanation
"""
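The flat format shown in the table above can be reproduced directly as a small Pandas dataframe; the values below simply mirror the table and are only an illustration:

```python
import pandas as pd

# One row per (query, document) pair; "relevance" is the label and "query"
# is the grouping column that defines the ranking groups.
flat = pd.DataFrame({
    "query":       ["cat", "cat", "cat", "dog", "dog", "dog"],
    "document_id": [1, 2, 3, 4, 5, 6],
    "feature_1":   [0.1, 0.5, 0.2, None, 0.2, 0.6],
    "feature_2":   ["blue", "green", "red", "red", "red", "green"],
    "relevance":   [4, 1, 2, 0, 1, 1],
})

# Within each group, the relevance values define the ground-truth order.
group_sizes = flat.groupby("query").size()
print(group_sizes)
```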
def convert_libsvm_to_csv(src_path, dst_path):
"""Converts a libsvm ranking dataset into a flat csv file.
Note: This code is specific to the LETOR3 dataset.
"""
dst_handle = open(dst_path, "w")
first_line = True
for src_line in open(src_path,"r"):
# Note: The last 3 items are comments.
items = src_line.split(" ")[:-3]
relevance = items[0]
group = items[1].split(":")[1]
features = [ item.split(":") for item in items[2:]]
if first_line:
# Csv header
dst_handle.write("relevance,group," + ",".join(["f_" + feature[0] for feature in features]) + "\n")
first_line = False
dst_handle.write(relevance + ",g_" + group + "," + (",".join([feature[1] for feature in features])) + "\n")
dst_handle.close()
# Convert the dataset.
!gsutil cp gs://cloud-training/mlongcp/v3.0_MLonGC/toy_data/ohsumed_toy.csv /tmp/ohsumed.csv
csv_dataset_path="/tmp/ohsumed.csv"
convert_libsvm_to_csv(raw_dataset_path, csv_dataset_path)
# Load a dataset into a Pandas Dataframe.
dataset_df = pd.read_csv(csv_dataset_path)
# Display the first 3 examples.
dataset_df.head(3)
train_ds_pd, test_ds_pd = split_dataset(dataset_df)
print("{} examples in training, {} examples for testing.".format(
len(train_ds_pd), len(test_ds_pd)))
# Display the first 3 examples of the training dataset.
train_ds_pd.head(3)
"""
Explanation: The dataset is stored as a .txt file in a specific format, so first convert it into a csv file.
End of explanation
"""
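To make the conversion above concrete, here is the same parsing logic applied to a single toy libsvm-style line. The line content is made up for illustration; the three trailing tokens stand in for the comment that the converter strips with [:-3].

```python
# A hypothetical libsvm ranking line: <relevance> qid:<group> <id>:<value>... <comment>
src_line = "2 qid:1 1:0.1 2:0.5 #docid = 9"

items = src_line.split(" ")[:-3]                     # drop the trailing comment tokens
relevance = items[0]                                 # label
group = items[1].split(":")[1]                       # ranking group id
features = [item.split(":") for item in items[2:]]   # [feature_id, value] pairs

print(relevance, group, features)
```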
# Name of the relevance and grouping columns.
relevance = "relevance"
ranking_train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(train_ds_pd, label=relevance, task=tfdf.keras.Task.RANKING)
ranking_test_ds = tfdf.keras.pd_dataframe_to_tf_dataset(test_ds_pd, label=relevance, task=tfdf.keras.Task.RANKING)
%set_cell_height 400
# TODO
# Define the ranking model
model_8 = tfdf.keras.GradientBoostedTreesModel(
task=tfdf.keras.Task.RANKING,
ranking_group="group",
num_trees=50)
with sys_pipes():
model_8.fit(x=ranking_train_ds)
"""
Explanation: In this dataset, the relevance defines the ground-truth rank among rows of the same group.
End of explanation
"""
# Print the summary of the model
%set_cell_height 400
model_8.summary()
"""
Explanation: At this point, Keras does not provide any ranking metrics. Instead, the training and validation metrics (a GBDT uses a validation dataset) are shown in the training
logs. In this case the loss is LAMBDA_MART_NDCG5, and the final (i.e. at
the end of the training) NDCG (normalized discounted cumulative gain) is 0.510136 (see the line Final model valid-loss: -0.510136).
Note that the NDCG is a value between 0 and 1. The larger the NDCG, the better
the model. For this reason, the loss is set to -NDCG.
As before, the model can be analysed:
End of explanation
"""
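For intuition about the NDCG values reported in the logs, here is a minimal, self-contained sketch of NDCG@k using the common exponential-gain formulation (gain 2^rel - 1 discounted by log2 of the rank). This illustrates the metric only; it is not TF-DF's exact implementation.

```python
import math

def dcg(relevances, k):
    """Discounted cumulative gain of relevances listed in ranked order."""
    return sum((2 ** rel - 1) / math.log2(i + 2)
               for i, rel in enumerate(relevances[:k]))

def ndcg(relevances, k=5):
    """NDCG@k: DCG of the given order divided by the DCG of the ideal order."""
    ideal = dcg(sorted(relevances, reverse=True), k)
    return dcg(relevances, k) / ideal if ideal > 0 else 0.0

print(ndcg([3, 2, 1]))  # a perfect ordering scores exactly 1.0
print(ndcg([0, 1, 3]))  # an imperfect ordering scores strictly between 0 and 1
```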
|
fdcl-gwu/MAE3134_examples | numerical_integration.ipynb | gpl-3.0 | %matplotlib inline
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
def input(t):
f = np.cos(t)
return f
def msd(state, t, m, c, k):
x, xd = state
pos_dot = xd
vel_dot = 1/m*(input(t) - c*xd - k*x)
state_dot = [pos_dot, vel_dot]
return state_dot
num_steps = 100
tf = 10
t = np.linspace(0,tf,num_steps)
x0 = [0,0]
m = 2
c = 2
k = 1
sol_ode = odeint(msd, x0, t, args=(m, c, k))
"""
Explanation: Euler's method
We look at numerically solving differential equations. Most scientific software packages already include a wide variety of numerical integrators. Here we'll write our own simple version and compare it to the built-in solutions.
Here's the built-in solution, using SciPy's odeint integrator.
End of explanation
"""
sol_euler = np.zeros((num_steps,2))
delta_t = tf/(num_steps-1)
sol_euler[0,:] = x0
for ii in range(num_steps-1):
sol_euler[ii+1,0] = sol_euler[ii,0] + sol_euler[ii,1]*delta_t
a = 1/m*(input(t[ii])-c*sol_euler[ii,1] - k*sol_euler[ii,0])
sol_euler[ii+1,1] = sol_euler[ii,1]+a*delta_t
"""
Explanation: Now we implement Euler's method
End of explanation
"""
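Euler's method is first order: its global error is proportional to the step size, so halving delta_t should roughly halve the error. A quick self-contained check on the test equation y' = -y, y(0) = 1 (exact solution e^{-t}), separate from the mass-spring-damper system above:

```python
import math

def euler_final(num_steps, tf=1.0):
    """Integrate y' = -y, y(0) = 1 with forward Euler and return y(tf)."""
    dt = tf / num_steps
    y = 1.0
    for _ in range(num_steps):
        y = y + (-y) * dt
    return y

exact = math.exp(-1.0)
err_coarse = abs(euler_final(100) - exact)
err_fine = abs(euler_final(200) - exact)
print(err_coarse / err_fine)  # roughly 2: halving dt halves the error
```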
plt.figure(figsize=(16,8))
plt.plot(t,sol_ode[:,0],label='ODE')
plt.plot(t,sol_euler[:,0],label='Euler')
plt.xlabel('Time')
plt.ylabel('Position')
plt.grid(True)
plt.legend()
plt.show()
"""
Explanation: Now let's plot the solutions
End of explanation
"""
|
infilect/ml-course1 | keras-notebooks/Transfer-Learning/5.3 Transfer Learning & Fine-Tuning.ipynb | mit | import numpy as np
import datetime
np.random.seed(1337) # for reproducibility
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
from numpy import nan
now = datetime.datetime.now
"""
Explanation: Transfer Learning and Fine Tuning
Train a simple convnet on the first 5 digits [0..4] of the MNIST dataset.
Freeze convolutional layers and fine-tune dense layers for the classification of digits [5..9].
Using GPU (highly recommended)
-> If using theano backend:
THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32
End of explanation
"""
now = datetime.datetime.now
batch_size = 128
nb_classes = 5
nb_epoch = 5
# input image dimensions
img_rows, img_cols = 28, 28
# number of convolutional filters to use
nb_filters = 32
# size of pooling area for max pooling
pool_size = 2
# convolution kernel size
kernel_size = 3
if K.image_data_format() == 'channels_first':
input_shape = (1, img_rows, img_cols)
else:
input_shape = (img_rows, img_cols, 1)
def train_model(model, train, test, nb_classes):
X_train = train[0].reshape((train[0].shape[0],) + input_shape)
X_test = test[0].reshape((test[0].shape[0],) + input_shape)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(train[1], nb_classes)
Y_test = np_utils.to_categorical(test[1], nb_classes)
model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['accuracy'])
t = now()
model.fit(X_train, Y_train,
batch_size=batch_size, nb_epoch=nb_epoch,
verbose=1,
validation_data=(X_test, Y_test))
print('Training time: %s' % (now() - t))
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
"""
Explanation: Settings
End of explanation
"""
# the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# create two datasets one with digits below 5 and one with 5 and above
X_train_lt5 = X_train[y_train < 5]
y_train_lt5 = y_train[y_train < 5]
X_test_lt5 = X_test[y_test < 5]
y_test_lt5 = y_test[y_test < 5]
X_train_gte5 = X_train[y_train >= 5]
y_train_gte5 = y_train[y_train >= 5] - 5 # make classes start at 0 for
X_test_gte5 = X_test[y_test >= 5] # np_utils.to_categorical
y_test_gte5 = y_test[y_test >= 5] - 5
# define two groups of layers: feature (convolutions) and classification (dense)
feature_layers = [
Convolution2D(nb_filters, kernel_size, kernel_size,
border_mode='valid',
input_shape=input_shape),
Activation('relu'),
Convolution2D(nb_filters, kernel_size, kernel_size),
Activation('relu'),
MaxPooling2D(pool_size=(pool_size, pool_size)),
Dropout(0.25),
Flatten(),
]
classification_layers = [
Dense(128),
Activation('relu'),
Dropout(0.5),
Dense(nb_classes),
Activation('softmax')
]
# create complete model
model = Sequential(feature_layers + classification_layers)
# train model for 5-digit classification [0..4]
train_model(model,
(X_train_lt5, y_train_lt5),
(X_test_lt5, y_test_lt5), nb_classes)
# freeze feature layers and rebuild model
for l in feature_layers:
l.trainable = False
# transfer: train dense layers for new classification task [5..9]
train_model(model,
(X_train_gte5, y_train_gte5),
(X_test_gte5, y_test_gte5), nb_classes)
"""
Explanation: Dataset Preparation
End of explanation
"""
## your code here
"""
Explanation: Your Turn
Try to Fine Tune a VGG16 Network
End of explanation
"""
|
bsafdi/NPTFit | examples/Example2_Creating_Masks.ipynb | mit | # Import relevant modules
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import healpy as hp
from NPTFit import create_mask as cm # Module for creating masks
"""
Explanation: Example 2: Creating Masks
In this example we show how to create masks using create_mask.py.
Often it is convenient to consider only a reduced Region of Interest (ROI) when analyzing the data. In order to do this we need to create a mask. The masks are boolean arrays where pixels labelled as True are masked and those labelled False are unmasked. In this notebook we give examples of how to create various masks.
The masks are created by create_mask.py and can be passed to an instance of nptfit via the function load_mask for a run, or an instance of dnds_analysis via load_mask_analysis for an analysis. If no mask is specified the code defaults to the full sky as the ROI.
NB: Before you can call functions from NPTFit, you must have it installed. Instructions to do so can be found here:
http://nptfit.readthedocs.io/
End of explanation
"""
example1 = cm.make_mask_total()
hp.mollview(example1, title='', cbar=False, min=0,max=1)
"""
Explanation: Example 1: Mask Nothing
If no options are specified, create mask returns an empty mask. In the plot here and for those below, purple represents unmasked, yellow masked.
End of explanation
"""
example2 = cm.make_mask_total(band_mask = True, band_mask_range = 30)
hp.mollview(example2, title='', cbar = False, min=0, max=1)
"""
Explanation: Example 2: Band Mask
Here we show an example of how to mask a region either side of the plane - specifically we mask 30 degrees either side
End of explanation
"""
example3a = cm.make_mask_total(l_mask = False, l_deg_min = -30, l_deg_max = 30,
b_mask = True, b_deg_min = -30, b_deg_max = 30)
hp.mollview(example3a,title='',cbar=False,min=0,max=1)
example3b = cm.make_mask_total(l_mask = True, l_deg_min = -30, l_deg_max = 30,
b_mask = False, b_deg_min = -30, b_deg_max = 30)
hp.mollview(example3b,title='',cbar=False,min=0,max=1)
example3c = cm.make_mask_total(l_mask = True, l_deg_min = -30, l_deg_max = 30,
b_mask = True, b_deg_min = -30, b_deg_max = 30)
hp.mollview(example3c,title='',cbar=False,min=0,max=1)
"""
Explanation: Example 3: Mask outside a band in b and l
This example shows several methods of masking outside specified regions in galactic longitude (l) and latitude (b). The third example shows how when two different masks are specified, the mask returned is the combination of both.
End of explanation
"""
example4a = cm.make_mask_total(mask_ring = True, inner = 0, outer = 30, ring_b = 0, ring_l = 0)
hp.mollview(example4a,title='',cbar=False,min=0,max=1)
example4b = cm.make_mask_total(mask_ring = True, inner = 30, outer = 180, ring_b = 0, ring_l = 0)
hp.mollview(example4b,title='',cbar=False,min=0,max=1)
example4c = cm.make_mask_total(mask_ring = True, inner = 30, outer = 90, ring_b = 0, ring_l = 0)
hp.mollview(example4c,title='',cbar=False,min=0,max=1)
example4d = cm.make_mask_total(mask_ring = True, inner = 0, outer = 30, ring_b = 45, ring_l = 45)
hp.mollview(example4d,title='',cbar=False,min=0,max=1)
"""
Explanation: Example 4: Ring and Annulus Mask
Next we show examples of masking outside a ring or annulus. The final example demonstrates that the ring need not be at the galactic center.
End of explanation
"""
random_custom_mask = np.random.choice(np.array([True, False]), hp.nside2npix(128))
example5 = cm.make_mask_total(custom_mask = random_custom_mask)
hp.mollview(example5,title='',cbar=False,min=0,max=1)
"""
Explanation: Example 5: Custom Mask
In addition to the options above, we can also add in custom masks. In this example we highlight this by adding a random mask.
End of explanation
"""
pscmask=np.array(np.load('fermi_data/fermidata_pscmask.npy'), dtype=bool)
example6 = cm.make_mask_total(band_mask = True, band_mask_range = 2,
mask_ring = True, inner = 0, outer = 30,
custom_mask = pscmask)
hp.mollview(example6,title='',cbar=False,min=0,max=1)
"""
Explanation: Example 6: Full Analysis Mask including Custom Point Source Catalog Mask
Finally we show an example of a full analysis mask that we will use for an analysis of the Galactic Center Excess in Example 3 and 8. Here we mask the plane with a band mask, mask outside a ring and also include a custom point source mask. The details of the point source mask are given in Example 1.
NB: before the point source mask can be loaded, the Fermi Data needs to be downloaded. See details in Example 1.
End of explanation
"""
|
amueller/nyu_ml_lectures | Linear models.ipynb | bsd-2-clause | from sklearn.datasets import make_regression
from sklearn.cross_validation import train_test_split
X, y, true_coefficient = make_regression(n_samples=80, n_features=30, n_informative=10, noise=100, coef=True, random_state=5)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=5)
print(X_train.shape)
print(y_train.shape)
"""
Explanation: Linear models for regression
y_pred = x_test[0] * coef_[0] + ... + x_test[n_features-1] * coef_[n_features-1] + intercept_
End of explanation
"""
from sklearn.linear_model import LinearRegression
linear_regression = LinearRegression().fit(X_train, y_train)
print("R^2 on training set: %f" % linear_regression.score(X_train, y_train))
print("R^2 on test set: %f" % linear_regression.score(X_test, y_test))
from sklearn.metrics import r2_score
print(r2_score(np.dot(X, true_coefficient), y))
plt.figure(figsize=(10, 5))
coefficient_sorting = np.argsort(true_coefficient)[::-1]
plt.plot(true_coefficient[coefficient_sorting], "o", label="true")
plt.plot(linear_regression.coef_[coefficient_sorting], "o", label="linear regression")
plt.legend()
"""
Explanation: Linear Regression
$$ \text{min}_{w, b} \sum_i || w^\mathsf{T}x_i + b - y_i||^2 $$
End of explanation
"""
from sklearn.linear_model import Ridge
ridge_models = {}
training_scores = []
test_scores = []
for alpha in [100, 10, 1, .01]:
ridge = Ridge(alpha=alpha).fit(X_train, y_train)
training_scores.append(ridge.score(X_train, y_train))
test_scores.append(ridge.score(X_test, y_test))
ridge_models[alpha] = ridge
plt.figure()
plt.plot(training_scores, label="training scores")
plt.plot(test_scores, label="test scores")
plt.xticks(range(4), [100, 10, 1, .01])
plt.legend(loc="best")
plt.figure(figsize=(10, 5))
plt.plot(true_coefficient[coefficient_sorting], "o", label="true", c='b')
for i, alpha in enumerate([100, 10, 1, .01]):
plt.plot(ridge_models[alpha].coef_[coefficient_sorting], "o", label="alpha = %.2f" % alpha, c=plt.cm.summer(i / 3.))
plt.legend(loc="best")
"""
Explanation: Ridge Regression (L2 penalty)
$$ \text{min}_{w,b} \sum_i || w^\mathsf{T}x_i + b - y_i||^2 + \alpha ||w||_2^2$$
End of explanation
"""
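Ridge regression also has a closed-form solution, $w = (X^\mathsf{T}X + \alpha I)^{-1}X^\mathsf{T}y$, which makes the shrinkage effect easy to see directly: larger $\alpha$ means a smaller coefficient norm. A numpy-only sketch (synthetic data, intercept ignored for simplicity):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(100, 5)
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.randn(100)

def ridge_closed_form(X, y, alpha):
    """Solve (X^T X + alpha I) w = X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

w_small = ridge_closed_form(X, y, alpha=0.01)
w_large = ridge_closed_form(X, y, alpha=1000.0)
print(np.linalg.norm(w_small), np.linalg.norm(w_large))  # the second is much smaller
```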
from sklearn.linear_model import Lasso
lasso_models = {}
training_scores = []
test_scores = []
for alpha in [30, 10, 1, .01]:
lasso = Lasso(alpha=alpha).fit(X_train, y_train)
training_scores.append(lasso.score(X_train, y_train))
test_scores.append(lasso.score(X_test, y_test))
lasso_models[alpha] = lasso
plt.figure()
plt.plot(training_scores, label="training scores")
plt.plot(test_scores, label="test scores")
plt.xticks(range(4), [30, 10, 1, .01])
plt.legend(loc="best")
plt.figure(figsize=(10, 5))
plt.plot(true_coefficient[coefficient_sorting], "o", label="true", c='b')
for i, alpha in enumerate([30, 10, 1, .01]):
plt.plot(lasso_models[alpha].coef_[coefficient_sorting], "o", label="alpha = %.2f" % alpha, c=plt.cm.summer(i / 3.))
plt.legend(loc="best")
"""
Explanation: Lasso (L1 penalty)
$$ \text{min}_{w, b} \sum_i || w^\mathsf{T}x_i + b - y_i||^2 + \alpha ||w||_1$$
End of explanation
"""
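The reason the lasso produces exact zeros is the soft-thresholding step in its optimization. A minimal numpy sketch using proximal gradient descent (ISTA) on min (1/2n)||y - Xw||^2 + alpha*||w||_1, shown as an illustration of the mechanism rather than scikit-learn's coordinate-descent solver:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1: shrink towards zero, clipping at zero."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, alpha, n_iter=500):
    n, p = X.shape
    step = 1.0 / np.linalg.eigvalsh(X.T @ X / n).max()  # 1 / Lipschitz constant
    w = np.zeros(p)
    for _ in range(n_iter):
        grad = -X.T @ (y - X @ w) / n
        w = soft_threshold(w - step * grad, step * alpha)
    return w

rng = np.random.RandomState(0)
X = rng.randn(50, 5)
y = X @ np.array([1.0, 0.0, 0.0, 2.0, 0.0]) + 0.1 * rng.randn(50)

w = lasso_ista(X, y, alpha=0.1)
print(w)  # coefficients of the irrelevant features end up at (or very near) zero
```

For alpha at or above max|X^T y|/n, the solution is identically zero, which is the extreme case of this shrinkage.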
from plots import plot_linear_svc_regularization
plot_linear_svc_regularization()
"""
Explanation: Linear models for classification
y_pred = x_test[0] * coef_[0] + ... + x_test[n_features-1] * coef_[n_features-1] + intercept_ > 0
The influence of C in LinearSVC
End of explanation
"""
from sklearn.datasets import make_blobs
plt.figure()
X, y = make_blobs(random_state=42)
plt.scatter(X[:, 0], X[:, 1], c=y)
from sklearn.svm import LinearSVC
linear_svm = LinearSVC().fit(X, y)
print(linear_svm.coef_.shape)
print(linear_svm.intercept_.shape)
plt.figure()
plt.scatter(X[:, 0], X[:, 1], c=y)
line = np.linspace(-15, 15)
for coef, intercept in zip(linear_svm.coef_, linear_svm.intercept_):
plt.plot(line, -(line * coef[0] + intercept) / coef[1])
plt.ylim(-10, 15)
plt.xlim(-10, 8)
"""
Explanation: Multi-Class linear classification
End of explanation
"""
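The multi-class rule behind the three lines plotted above is one-vs-rest: each class k gets its own row of coef_ and entry of intercept_, and the predicted class is the argmax of the decision values $w_k^\mathsf{T}x + b_k$. A tiny numpy illustration with made-up weights (not the fitted linear_svm values):

```python
import numpy as np

# Hypothetical 3-class, 2-feature weights, with the same shapes as
# linear_svm.coef_ and linear_svm.intercept_ above.
coef = np.array([[1.0, 0.0],
                 [0.0, 1.0],
                 [-1.0, -1.0]])
intercept = np.array([0.0, 0.0, 0.0])

def predict(X):
    """One-vs-rest rule: pick the class with the largest decision value."""
    return np.argmax(X @ coef.T + intercept, axis=1)

points = np.array([[5.0, 0.0], [0.0, 5.0], [-5.0, -5.0]])
print(predict(points))  # -> [0 1 2]
```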
y % 2
# %load solutions/linear_models.py
"""
Explanation: Exercises
Compare Logistic regression with l1 penalty and l2 penalty by plotting the coefficients as above for the digits dataset. Classify odd vs even digits to make it a binary task.
End of explanation
"""
|
whitead/numerical_stats | unit_8/hw_2019/homework_8_key.ipynb | gpl-3.0 | import scipy.stats as ss
import numpy as np
data_21 = [65.58, -28.15, 21.17, -0.57, 6.04, -10.21, 36.46, 10.67, 77.98, 15.97]
se = np.std(data_21, ddof=1) / np.sqrt(len(data_21))
T = ss.t.ppf(0.9, df=len(data_21) - 1)
print(np.mean(data_21), T * se)
"""
Explanation: Homework 8 Key
CHE 116: Numerical Methods and Statistics
2/21/2019
1. Short Answer (12 Points)
[2 points] If you sum together 20 numbers sampled from a binomial distribution and 10 from a Poisson distribution, how is your sum distributed?
[2 points] If you sample 25 numbers from different beta distributions, how will each of the numbers be distributed?
[4 points] Assume a HW grade is determined as the sample mean of 3 HW problems. How is the HW grade distributed if we do not know the population standard deviation? Why?
[4 points] For part 3, how could not knowing the population standard deviation change how it's distributed? How does knowledge of that number change the behavior of a random variable?
1.1
Normal
1.2
We are not summing, so the CLT does not apply. Each number is beta distributed.
1.3
t-distribution, since we do not know population standard deviation and N < 25
1.4
We have to estimate the standard error using sample standard deviation, which itself is a random variable. If we have the exact number, then we no longer have two sources of randomness.
2. Confidence Intervals (30 Points)
Report the given confidence interval for error in the mean using the data given for each problem and describe in words what the confidence interval is for each example. 6 points each
2.1
80% Double.
data_21 = [65.58, -28.15, 21.17, -0.57, 6.04, -10.21, 36.46, 10.67, 77.98, 15.97]
2.2
99% Upper (lower bound, a value such that the mean lies above that value 99% of the time)
data_22 = [-8.78, -6.06, -6.03, -6.9, -13.57, -18.76, 1.5, -8.21, -3.21, -11.85, -2.72, -10.38, -11.03, -10.85, -7.6, -7.76, -5.99, -10.02, -6.32, -8.35, -19.28, -11.53, -6.04, -0.81, -12.01, -3.22, -9.25, -4.13, -7.22, -11.0, -14.42, 1.07]
2.3
95% Double
data_23 = [14.62, 10.34, 7.68, 15.81, 14.48]
2.4
Redo part 3 with a known standard deviation of 2
2.5
95% Lower (upper bound)
data_25 = [2.47, 2.03, 1.82, 6.98, 2.41, 2.32, 7.11, 5.89, 5.77, 3.34, 2.75, 6.51]
2.1
The 80% confidence interval is $19 \pm 14$
End of explanation
"""
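The point of 1.3/1.4 can be seen numerically: the t critical value is wider than the normal one when the standard deviation must be estimated, and the two converge as the degrees of freedom grow. A quick supplementary check, using the same scipy.stats import as above:

```python
import scipy.stats as ss

z_crit = ss.norm.ppf(0.975)
t_small = ss.t.ppf(0.975, df=4)     # e.g. 5 data points
t_large = ss.t.ppf(0.975, df=1000)  # a large sample

print(z_crit, t_small, t_large)  # t with few degrees of freedom is noticeably larger
```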
data_22 = [-8.78, -6.06, -6.03, -6.9, -13.57, -18.76, 1.5, -8.21, -3.21, -11.85, -2.72, -10.38, -11.03, -10.85, -7.6, -7.76, -5.99, -10.02, -6.32, -8.35, -19.28, -11.53, -6.04, -0.81, -12.01, -3.22, -9.25, -4.13, -7.22, -11.0, -14.42, 1.07]
se = np.std(data_22, ddof=1) / np.sqrt(len(data_22))
Z = ss.norm.ppf(1 - 0.99)
print(Z * se + np.mean(data_22))
"""
Explanation: 2.2
The 99% confidence interval is $\mu > -10.1$
End of explanation
"""
data_23 = [14.62, 10.34, 7.68, 15.81, 14.48]
se = np.std(data_23, ddof=1) / np.sqrt(len(data_23))
T = ss.t.ppf(0.975, df=len(data_23) - 1)
print(np.mean(data_23), T * se)
"""
Explanation: 2.3
The 95% confidence interval is $12.5 \pm 4.3$
End of explanation
"""
data_23 = [14.62, 10.34, 7.68, 15.81, 14.48]
se = 2 / np.sqrt(len(data_23))
Z = ss.norm.ppf(0.975)
print(np.mean(data_23), Z * se)
"""
Explanation: 2.4
The 95% confidence interval is $12.5 \pm 1.8$
End of explanation
"""
data_25 = [2.47, 2.03, 1.82, 6.98, 2.41, 2.32, 7.11, 5.89, 5.77, 3.34, 2.75, 6.51]
se = np.std(data_25, ddof=1) / np.sqrt(len(data_25))
T = ss.t.ppf(0.95, df=len(data_25) - 1)
print(np.mean(data_25) + T * se)
"""
Explanation: 2.5
The 95% upper bound is $\mu < 5.2$
End of explanation
"""
|
SylvainCorlay/bqplot | examples/Marks/Object Model/Market Map.ipynb | apache-2.0 | data = pd.read_csv('../../data_files/country_codes.csv', index_col=[0])
country_codes = data.index.values
country_names = data['Name']
"""
Explanation: Get Data
End of explanation
"""
market_map = MarketMap(names=country_codes,
# basic data which needs to set for each map
ref_data=data,
# Data frame which can be used for different properties of the map
# Axis and scale for color data
tooltip_fields=['Name'],
layout=Layout(width='800px', height='600px'))
market_map
market_map.colors = ['MediumSeaGreen']
market_map.font_style = {'font-size': '16px', 'fill':'white'}
market_map.title = 'Country Map'
market_map.title_style = {'fill': 'Red'}
"""
Explanation: Basic Market Map
End of explanation
"""
gdp_data = pd.read_csv('../../data_files/gdp_per_capita.csv', index_col=[0], parse_dates=True)
gdp_data.fillna(method='backfill', inplace=True)
gdp_data.fillna(method='ffill', inplace=True)
col = ColorScale(scheme='Greens')
continents = data['Continent'].values
ax_c = ColorAxis(scale=col, label='GDP per Capita', visible=False)
data['GDP'] = gdp_data.iloc[-1]
market_map = MarketMap(names=country_codes, groups=continents, # Basic data which needs to set for each map
cols=25, row_groups=3, # Properties for the visualization
ref_data=data, # Data frame used for different properties of the map
tooltip_fields=['Name', 'Continent', 'GDP'], # Columns from data frame to be displayed as tooltip
tooltip_formats=['', '', '.1f'],
scales={'color': col}, axes=[ax_c],
layout=Layout(min_width='800px', min_height='600px')) # Axis and scale for color data
deb_output = Label()
def selected_index_changed(change):
deb_output.value = str(change.new)
market_map.observe(selected_index_changed, 'selected')
VBox([deb_output, market_map])
# Attribute to show the names of the groups, in this case the continents
market_map.show_groups = True
# Setting the selected countries
market_map.show_groups = False
market_map.selected = ['PAN', 'FRA', 'PHL']
# changing selected stroke and hovered stroke variable
market_map.selected_stroke = 'yellow'
market_map.hovered_stroke = 'violet'
"""
Explanation: GDP data with grouping by continent
World Bank national accounts data, and OECD National Accounts data files. (The World Bank: GDP per capita (current US$))
End of explanation
"""
# Adding data for color and making color axis visible
market_map.colors=['#ccc']
market_map.color = data['GDP']
ax_c.visible = True
"""
Explanation: Setting the color based on data
End of explanation
"""
# Creating the figure to be displayed as the tooltip
sc_x = DateScale()
sc_y = LinearScale()
ax_x = Axis(scale=sc_x, grid_lines='dashed', label='Date')
ax_y = Axis(scale=sc_y, orientation='vertical', grid_lines='dashed',
label='GDP', label_location='end', label_offset='-1em')
line = Lines(x= gdp_data.index.values, y=[], scales={'x': sc_x, 'y': sc_y}, colors=['orange'])
fig_tooltip = Figure(marks=[line], axes=[ax_x, ax_y])
market_map = MarketMap(names=country_codes, groups=continents,
cols=25, row_groups=3,
color=data['GDP'], scales={'color': col}, axes=[ax_c],
ref_data=data, tooltip_widget=fig_tooltip,
freeze_tooltip_location=True,
colors=['#ccc'],
layout=Layout(min_width='900px', min_height='600px'))
# Update the tooltip chart
hovered_symbol = ''
def hover_handler(self, content):
global hovered_symbol
symbol = content.get('data', '')
if(symbol != hovered_symbol):
hovered_symbol = symbol
if(gdp_data.get(hovered_symbol) is not None):
line.y = gdp_data[hovered_symbol].values
fig_tooltip.title = content.get('ref_data', {}).get('Name', '')
# Custom msg sent when a particular cell is hovered on
market_map.on_hover(hover_handler)
market_map
"""
Explanation: Adding a widget as tooltip
End of explanation
"""
|
joshnsolomon/phys202-2015-work | assignments/assignment04/TheoryAndPracticeEx01.ipynb | mit | from IPython.display import Image
"""
Explanation: Theory and Practice of Visualization Exercise 1
Imports
End of explanation
"""
# Add your filename and uncomment the following line:
Image(filename='graph1.png')
"""
Explanation: Graphical excellence and integrity
Find a data-focused visualization on one of the following websites that is a positive example of the principles that Tufte describes in The Visual Display of Quantitative Information.
Vox
Upshot
538
BuzzFeed
Upload the image for the visualization to this directory and display the image inline in this notebook.
End of explanation
"""
|
jtwalsh0/methods | Statistics.ipynb | mit | %%latex
\begin{align*}
f_X(X=x) &= cx^2, 0 \leq x \leq 2 \\
1 &= c\int_0^2 x^2 dx \\
&= c[\frac{1}{3}x^3 + d]_0^2 \\
&= c[\frac{8}{3} + d - d] \\
&= c[\frac{8}{3}] \\
f_X(X=x) &= \frac{3}{8}x^2, 0 \leq x \leq 2
\end{align*}
u = np.random.uniform(size=100000)
x = 2 * u**.3333
df = pd.DataFrame({'x':x})
print df.describe()
ggplot(aes(x='x'), data=df) + geom_histogram()
"""
Explanation: Inversion sampling example
First find the normalizing constant:
$$
\begin{align}
f_X(X=x) &= cx^2, 0 \leq x \leq 2 \\
1 &= c\int_0^2 x^2 dx \\
&= c[\frac{1}{3}x^3 + d]_0^2 \\
&= c[\frac{8}{3} + d - d] \\
&= c[\frac{8}{3}] \\
f_X(X=x) &= \frac{3}{8}x^2, 0 \leq x \leq 2
\end{align}
$$
Next find the cumulative distribution function:
* $F_X(X=x) = \int_0^x \frac{3}{8}x^2dx$
* $=\frac{3}{8}[\frac{1}{3}x^3 + d]_0^x$
* $=\frac{3}{8}[\frac{1}{3}x^3 + d - d]$
* $=\frac{1}{8}x^3$
We can randomly generate values from a standard uniform distribution and set equal to the CDF. Solve for $x$. Plug the randomly generated values into the equation and plot the histogram or density of $x$ to get the shape of the distribution:
* $u = \frac{1}{8}x^3$
* $x^3 = 8u$
* $x = 2u^{\frac{1}{3}}$
End of explanation
"""
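A quick sanity check of the sampler above (added here as a supplement): compare the sample mean against the analytic mean, $E[X] = \int_0^2 x \cdot \frac{3}{8}x^2 dx = \frac{3}{2}$.

```python
import numpy as np

rng = np.random.RandomState(0)
u = rng.uniform(size=200000)
x = 2 * u ** (1.0 / 3.0)  # inverse CDF from the derivation above

sample_mean = x.mean()
print(sample_mean)  # close to the analytic mean 1.5
```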
x = np.random.uniform(size=10000)
y = np.random.uniform(size=10000)
"""
Explanation: Joint Distribution
Find the normalizing constant:
* $f_{X,Y}(X=x,Y=y) = c(2x + y), 0 \leq x \leq 2, 0 \leq y \leq 2$
* $1 = \int_0^2 \int_0^2 c(2x + y) dy dx$
* $ = c\int_0^2 [2xy + \frac{1}{2}y^2 + d]_0^2 dx$
* $ = c\int_0^2 [4x + \frac{1}{2}(4) + d - d] dx$
* $ = c\int_0^2 (4x + 2) dx$
* $ = c[2x^2 + 2x + d]_0^2$
* $ = c[2(4) + 2(2) + d - d]$
* $ = 12c$
* $c = \frac{1}{12}$
* $f_{X,Y}(X=x,Y=y) = \frac{1}{12}(2x + y), 0 \leq x \leq 2, 0 \leq y \leq 2$
End of explanation
"""
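The unused uniform draws x and y earlier suggest sampling the joint density by rejection; this sketch is an assumption on my part, since that cell stops after the draws. Propose (x, y) uniformly on $[0,2]^2$ and accept with probability $f(x,y)/M$, where $M = f(2,2) = \frac{1}{2}$ bounds the density. The accepted x-values can be checked against the analytic marginal mean $E[X] = \frac{11}{9}$.

```python
import numpy as np

rng = np.random.RandomState(0)
n = 200000
x = rng.uniform(0, 2, size=n)
y = rng.uniform(0, 2, size=n)

f = (2 * x + y) / 12.0   # joint density from the derivation above
M = 0.5                  # its maximum, attained at (x, y) = (2, 2)
accept = rng.uniform(size=n) < f / M

x_samples = x[accept]
print(x_samples.mean())  # close to 11/9
```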
u = np.random.uniform(size=100000)
x = (-1 + (1 + 24*u)**.5) / 2
df = pd.DataFrame({'x':x})
ggplot(aes(x='x'), data=df) + geom_histogram()
"""
Explanation: Find the marginal distribution:
* $f_{X,Y}(X=x,Y=y) = \frac{1}{12}(2x + y), 0 \leq x \leq 2, 0 \leq y \leq 2$
* $f_X(X=x) = \int_0^2 \frac{1}{12}(2x + y) dy$
* $ = \frac{1}{12}[2xy + \frac{1}{2}y^2 + d]_0^2$
* $ = \frac{1}{12}[4x + 2 + d - d]$
* $ = \frac{4x + 2}{12}$
* $ = \frac{2x + 1}{6}$
Inversion sampling example:
* $F_X(X=x) = \int_0^x \dfrac{2x+1}{6}dx$
* $= \frac{1}{6}[x^2 + x + d]_0^x$
* $= \frac{x(x + 1)}{6}$
* $u = \frac{x^2 + x}{6}$
* $0 = x^2 + x - 6u$
* $x = \frac{-1 \pm \sqrt{1 + 4 \times 6u}}{2}$
End of explanation
"""
|
ComputoCienciasUniandes/MetodosComputacionalesLaboratorio | 2016-1/w04/sistemas_lineales.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Systems of linear equations
In this notebook we will look at basic concepts for solving systems of linear equations.
The structure of this presentation is based on http://nbviewer.ipython.org/github/mbakker7/exploratory_computing_with_python/blob/master/notebook_adv2/py_exp_comp_adv2_sol.ipynb
End of explanation
"""
# Using numpy we can solve systems of this type.
A = np.array([[4.0,3.0,-2.0],[1.0,2.0,1.0],[-3.0,3.0,2.0]])
b = np.array([[3.0],[2.0],[1.0]])
sol = np.linalg.solve(A,b)
print(A)
print(b)
print("sol",sol)
print(np.dot(A,sol))
# The inverse can be found as
Ainv = np.linalg.inv(A)
print("Ainv")
print(Ainv)
print("A * Ainv")
print(np.dot(A,Ainv))
"""
Explanation: Systems of linear equations
An example of a system of linear equations is the following
$
\begin{split}
a_{11} x_1 + a_{12} x_2 + a_{13} x_3 = b_1 \\
a_{21} x_1 + a_{22} x_2 + a_{23} x_3 = b_2 \\
a_{31} x_1 + a_{32} x_2 + a_{33} x_3 = b_3
\end{split}
$
which can be written in matrix form as $Ax = b$, where the solution can be written as $x=A^{-1}b$. This motivates the development of methods for finding the inverse of a matrix.
End of explanation
"""
# First we build the matrices A and b
xp = np.array([-2, 1,4])
yp = np.array([ 2,-1,4])
A = np.zeros((3,3))
b = np.zeros(3)
for i in range(3):
A[i] = xp[i]**2, xp[i], 1 # Store one row at a time
b[i] = yp[i]
print('Array A: ')
print(A)
print('b: ', b)
# now we solve the linear system and plot the solution
sol = np.linalg.solve(A,b)
print('solution is: ', sol)
print('A dot sol: ', np.dot(A,sol))
plt.plot([-2,1,4], [2,-1,4], 'ro')
x = np.linspace(-3,5,100)
y = sol[0]*x**2 + sol[1]*x + sol[2]
plt.plot(x,y,'b')
"""
Explanation: Building a system of linear equations
Now consider the following example. We have three points in the (x,y) plane and we want to find the parabola that passes through those three points.
The equation of the parabola is $y=ax^2+bx+c$; if we have three points $(x_1,y_1)$, $(x_2,y_2)$, $(x_3,y_3)$ we can define the following system of linear equations.
$
\begin{split}
x_1^2a+x_1b+c&=y_1 \\
x_2^2a+x_2b+c&=y_2 \\
x_3^2a+x_3b+c&=y_3 \\
\end{split}
$
Which in matrix notation looks like this
$
\left(
\begin{array}{ccc}
x_1^2 & x_1 & 1 \\
x_2^2 & x_2 & 1 \\
x_3^2 & x_3 & 1 \\
\end{array}
\right)
\left(
\begin{array}{c}
a \\ b \\ c \\
\end{array}
\right)
=
\left(
\begin{array}{c}
y_1 \\
y_2 \\
y_3 \\
\end{array}
\right)
$
We are going to solve this linear system, assuming that the three points are: $(x_1,y_1)=(-2,2)$, $(x_2,y_2)=(1,-1)$, $(x_3,y_3)=(4,4)$
End of explanation
"""
data = np.loadtxt("movimiento.dat")
plt.scatter(data[:,0], data[:,1])
"""
Explanation: Exercise 1
What happens if the points do not lie on a parabola?
Exercise 2
Consider measurements of a quantity $y$ at different times $t$: $(t_0,y_0)=(0,3)$, $(t_1,y_1)=(0.25,1)$, $(t_2,y_2)=(0.5,-3)$, $(t_3,y_3)=(0.75,1)$. These measurements are part of a periodic function that can be written as
$y = a\cos(\pi t) + b\cos(2\pi t) + c\cos(3\pi t) + d\cos(4\pi t)$
where $a$, $b$, $c$, and $d$ are parameters. Build a system of linear equations and find the values of these parameters. Check your answer by making a plot.
Least squares
Let us return for a moment to the parabola exercise. What would happen if we actually had 10 measurements? In that case the matrix $A$ would be 10 by 3 and we could not find an inverse. Even so, the problem of finding the parameters of the parabola from the measurements is still interesting. In this case, however, we have to give up on the parabola passing through all the experimental points, because in general it will not.
For this case we have to define a criterion for calling a set of parameters the best. One possible criterion is that the sum of the squared differences between the theoretical curve and the data be minimal. How can we then find a solution in this case?
Changing the notation a little, suppose we have a vector $d$ of data, a vector $m$ with the parameters of the model we want to find, and a matrix $G$ that summarizes the information about the model we want to use to explain the data. The problem can then be written as
$G m = d$
where $G$ in general is not invertible. But under the least-squares criterion the vector $m$ can be estimated by a vector $\hat{m}$ that satisfies the following condition
$G^T G \hat{m} = G^{T}d$
where $T$ denotes the transpose. If we now write $G^{T}G=A$, $\hat{m}=x$ and $G^{T}d=b$ we return to the problem from the beginning and can easily find $\hat{m}$
Exercise 3
The following data
https://raw.githubusercontent.com/ComputoCienciasUniandes/MetodosComputacionales/master/hands_on/lin_algebra/movimiento.dat
represent a time coordinate and a spatial coordinate of one-dimensional motion in a gravitational field. Find the best possible values of the initial position, initial velocity and gravity. Check that your values are reasonable with a plot.
End of explanation
"""
|
gdementen/larray | doc/source/tutorial/tutorial_IO.ipynb | gpl-3.0 | # first of all, import the LArray library
from larray import *
"""
Explanation: Load And Dump Arrays
The LArray library provides methods and functions to load and dump Array, Session, Axis Group objects to several formats such as Excel, CSV and HDF5. The HDF5 file format is designed to store and organize large amounts of data. It allows reading and writing data much faster than working with CSV and Excel files.
End of explanation
"""
csv_dir = get_example_filepath('examples')
# read the array population from the file 'population.csv'.
# The data of the array below is derived from a subset of the demo_pjan table from Eurostat
population = read_csv(csv_dir + '/population.csv')
population
"""
Explanation: Loading Arrays - Basic Usage (CSV, Excel, HDF5)
To read an array from a CSV file, you must use the read_csv function:
End of explanation
"""
filepath_excel = get_example_filepath('examples.xlsx')
# read the array from the sheet 'births' of the Excel file 'examples.xlsx'
# The data of the array below is derived from a subset of the demo_fasec table from Eurostat
births = read_excel(filepath_excel, 'births')
births
"""
Explanation: To read an array from a sheet of an Excel file, you can use the read_excel function:
End of explanation
"""
filepath_hdf = get_example_filepath('examples.h5')
# read the array from the file 'examples.h5' associated with the key 'deaths'
# The data of the array below is derived from a subset of the demo_magec table from Eurostat
deaths = read_hdf(filepath_hdf, 'deaths')
deaths
"""
Explanation: The open_excel function in combination with the load method allows you to load several arrays from the same Workbook without opening and closing it several times:
```python
# open the Excel file 'population.xlsx' and leave it open as long as you keep the indent.
# The Python keyword with ensures that the Excel file is properly closed even if an error occurs
with open_excel(filepath_excel) as wb:
# load the array 'population' from the sheet 'population'
population = wb['population'].load()
# load the array 'births' from the sheet 'births'
births = wb['births'].load()
# load the array 'deaths' from the sheet 'deaths'
deaths = wb['deaths'].load()
# the Workbook is automatically closed when leaving the block defined by the with statement
```
<div class="alert alert-warning">
**Warning:** `open_excel` requires Windows and the ``xlwings`` library to be installed.
</div>
The HDF5 file format is specifically designed to store and organize large amounts of data.
Reading and writing data in this file format is much faster than with CSV or Excel.
An HDF5 file can contain multiple arrays, each array being associated with a key.
To read an array from an HDF5 file, you must use the read_hdf function and provide the key associated with the array:
End of explanation
"""
# save the array population in the file 'population.csv'
population.to_csv('population.csv')
"""
Explanation: Dumping Arrays - Basic Usage (CSV, Excel, HDF5)
To write an array in a CSV file, you must use the to_csv method:
End of explanation
"""
# save the array population in the sheet 'population' of the Excel file 'population.xlsx'
population.to_excel('population.xlsx', 'population')
"""
Explanation: To write an array to a sheet of an Excel file, you can use the to_excel method:
End of explanation
"""
# add a new sheet 'births' to the file 'population.xlsx' and save the array births in it
births.to_excel('population.xlsx', 'births')
"""
Explanation: Note that to_excel creates a new Excel file if it does not exist yet.
If the file already exists, a new sheet is added after the existing ones, provided that sheet does not already exist:
End of explanation
"""
# 1. reset the file 'population.xlsx' (all sheets are removed)
# 2. create a sheet 'population' and save the array population in it
population.to_excel('population.xlsx', 'population', overwrite_file=True)
"""
Explanation: To reset an Excel file, you simply need to set the overwrite_file argument as True:
End of explanation
"""
# save the array population in the file 'population.h5' and associate it with the key 'population'
population.to_hdf('population.h5', 'population')
"""
Explanation: The open_excel function in combination with the dump() method allows you to open a Workbook and to export several arrays at once. If the Excel file doesn't exist, the overwrite_file argument must be set to True.
<div class="alert alert-warning">
**Warning:** The ``save`` method must be called at the end of the block defined by the *with* statement to actually write data in the Excel file, otherwise you will end up with an empty file.
</div>
```python
# to create a new Excel file, argument overwrite_file must be set to True
with open_excel('population.xlsx', overwrite_file=True) as wb:
# add a new sheet 'population' and dump the array population in it
wb['population'] = population.dump()
# add a new sheet 'births' and dump the array births in it
wb['births'] = births.dump()
# add a new sheet 'deaths' and dump the array deaths in it
wb['deaths'] = deaths.dump()
# actually write data in the Workbook
wb.save()
# the Workbook is automatically closed when leaving the block defined by the with statement
```
To write an array in an HDF5 file, you must use the to_hdf function and provide the key that will be associated with the array:
End of explanation
"""
# set 'wide' argument to False to indicate that the array is stored in the 'narrow' format
population_BE_FR = read_csv(csv_dir + '/population_narrow_format.csv', wide=False)
population_BE_FR
# same for the read_excel function
population_BE_FR = read_excel(filepath_excel, sheet='population_narrow_format', wide=False)
population_BE_FR
"""
Explanation: Specifying Wide VS Narrow format (CSV, Excel)
By default, all reading functions assume that arrays are stored in the wide format, meaning that their last axis is represented horizontally:
| country \ time | 2013 | 2014 | 2015 |
| --------------- | -------- | -------- | -------- |
| Belgium | 11137974 | 11180840 | 11237274 |
| France | 65600350 | 65942267 | 66456279 |
By setting the wide argument to False, reading functions will assume instead that arrays are stored in the narrow format, i.e. one column per axis plus one value column:
| country | time | value |
| ------- | ---- | -------- |
| Belgium | 2013 | 11137974 |
| Belgium | 2014 | 11180840 |
| Belgium | 2015 | 11237274 |
| France | 2013 | 65600350 |
| France | 2014 | 65942267 |
| France | 2015 | 66456279 |
End of explanation
"""
# dump the array population_BE_FR in a narrow format (one column per axis plus one value column).
# By default, the name of the column containing data is set to 'value'
population_BE_FR.to_csv('population_narrow_format.csv', wide=False)
# same but replace 'value' by 'population'
population_BE_FR.to_csv('population_narrow_format.csv', wide=False, value_name='population')
# same for the to_excel method
population_BE_FR.to_excel('population.xlsx', 'population_narrow_format', wide=False, value_name='population')
"""
Explanation: By default, writing functions will set the name of the column containing the data to 'value'. You can choose the name of this column by using the value_name argument. For example, using value_name='population' you can export the previous array as:
| country | time | population |
| ------- | ---- | ---------- |
| Belgium | 2013 | 11137974 |
| Belgium | 2014 | 11180840 |
| Belgium | 2015 | 11237274 |
| France | 2013 | 65600350 |
| France | 2014 | 65942267 |
| France | 2015 | 66456279 |
End of explanation
"""
# read the 3 x 2 x 3 array stored in the file 'population_missing_axis_name.csv' without using the 'nb_axes' argument.
population = read_csv(csv_dir + '/population_missing_axis_name.csv')
# shape and data type of the output array are not what we expected
population.info
# by setting the 'nb_axes' argument, you can indicate to read_csv the number of axes of the output array
population = read_csv(csv_dir + '/population_missing_axis_name.csv', nb_axes=3)
# give a name to the last axis
population = population.rename(-1, 'time')
# shape and data type of the output array are what we expected
population.info
# same for the read_excel function
population = read_excel(filepath_excel, sheet='population_missing_axis_name', nb_axes=3)
population = population.rename(-1, 'time')
population.info
"""
Explanation: Like with the to_excel method, it is possible to export arrays in a narrow format using open_excel.
To do so, you must set the wide argument of the dump method to False:
```python
with open_excel('population.xlsx') as wb:
# dump the array population_BE_FR in a narrow format:
# one column per axis plus one value column.
# Argument value_name can be used to change the name of the
# column containing the data (default name is 'value')
wb['population_narrow_format'] = population_BE_FR.dump(wide=False, value_name='population')
# don't forget to call save()
wb.save()
# in the sheet 'population_narrow_format', data is written as:
# | country | time | value    |
# | ------- | ---- | -------- |
# | Belgium | 2013 | 11137974 |
# | Belgium | 2014 | 11180840 |
# | Belgium | 2015 | 11237274 |
# | France  | 2013 | 65600350 |
# | France  | 2014 | 65942267 |
# | France  | 2015 | 66456279 |
```
Specifying Position in Sheet (Excel)
If you want to read an array from an Excel sheet which does not start at cell A1 (when there is more than one array stored in the same sheet for example), you will need to use the range argument.
<div class="alert alert-warning">
**Warning:** Note that the ``range`` argument is only available if you have the library ``xlwings`` installed (Windows).
</div>
```python
# the 'range' argument must be used to load data not starting at cell A1.
# This is useful when there are several arrays stored in the same sheet
births = read_excel(filepath_excel, sheet='population_births_deaths', range='A9:E15')
```
Using open_excel, ranges are passed in brackets:
```python
with open_excel(filepath_excel) as wb:
# store sheet 'population_births_deaths' in a temporary variable sh
sh = wb['population_births_deaths']
# load the array population from range A1:E7
population = sh['A1:E7'].load()
# load the array births from range A9:E15
births = sh['A9:E15'].load()
# load the array deaths from range A17:E23
deaths = sh['A17:E23'].load()
# the Workbook is automatically closed when leaving the block defined by the with statement
```
When exporting arrays to Excel files, data is written starting at cell A1 by default. Using the position argument of the to_excel method, it is possible to specify the top left cell of the dumped data. This can be useful when you want to export several arrays in the same sheet, for example.
<div class="alert alert-warning">
**Warning:** Note that the ``position`` argument is only available if you have the library ``xlwings`` installed (Windows).
</div>
```python
filename = 'population.xlsx'
sheetname = 'population_births_deaths'
# save the arrays population, births and deaths in the same sheet 'population_births_and_deaths'.
# The 'position' argument is used to shift the location of the second and third arrays to be dumped
population.to_excel(filename, sheetname)
births.to_excel(filename, sheetname, position='A9')
deaths.to_excel(filename, sheetname, position='A17')
```
Using open_excel, the position is passed in brackets (this allows you to also add extra informations):
```python
with open_excel('population.xlsx') as wb:
# add a new sheet 'population_births_deaths' and write 'population' in the first cell
# note: you can use wb['new_sheet_name'] = '' to create an empty sheet
wb['population_births_deaths'] = 'population'
# store sheet 'population_births_deaths' in a temporary variable sh
sh = wb['population_births_deaths']
# dump the array population in sheet 'population_births_deaths' starting at cell A2
sh['A2'] = population.dump()
# add 'births' in cell A10
sh['A10'] = 'births'
# dump the array births in sheet 'population_births_deaths' starting at cell A11
sh['A11'] = births.dump()
# add 'deaths' in cell A19
sh['A19'] = 'deaths'
# dump the array deaths in sheet 'population_births_deaths' starting at cell A20
sh['A20'] = deaths.dump()
# don't forget to call save()
wb.save()
# the Workbook is automatically closed when leaving the block defined by the with statement
```
Exporting data without headers (Excel)
For some reason, you may want to export only the data of an array without axes. For example, you may want to insert a new column containing extra information. As an exercise, suppose we want to add the capital city for each country present in the array containing the total population by country:
| country | capital city | 2013 | 2014 | 2015 |
| ------- | ------------ | -------- | -------- | -------- |
| Belgium | Brussels | 11137974 | 11180840 | 11237274 |
| France | Paris | 65600350 | 65942267 | 66456279 |
| Germany | Berlin | 80523746 | 80767463 | 81197537 |
Assuming you have prepared an Excel sheet as below:
| country | capital city | 2013 | 2014 | 2015 |
| ------- | ------------ | -------- | -------- | -------- |
| Belgium | Brussels | | | |
| France | Paris | | | |
| Germany | Berlin | | | |
you can then dump the data at right place by setting the header argument of to_excel to False and specifying the position of the data in sheet:
```python
population_by_country = population.sum('gender')
# export only the data of the array population_by_country starting at cell C2
population_by_country.to_excel('population.xlsx', 'population_by_country', header=False, position='C2')
```
Using open_excel, you can easily prepare the sheet and then export only data at the right place by either setting the header argument of the dump method to False or avoiding to call dump:
```python
with open_excel('population.xlsx') as wb:
# create new empty sheet 'population_by_country'
wb['population_by_country'] = ''
# store sheet 'population_by_country' in a temporary variable sh
sh = wb['population_by_country']
# write extra information (description)
sh['A1'] = 'Population at 1st January by country'
# export column names
sh['A2'] = ['country', 'capital city']
sh['C2'] = population_by_country.time.labels
# export countries as first column
sh['A3'].options(transpose=True).value = population_by_country.country.labels
# export capital cities as second column
sh['B3'].options(transpose=True).value = ['Brussels', 'Paris', 'Berlin']
# export only data of population_by_country
sh['C3'] = population_by_country.dump(header=False)
# or equivalently
sh['C3'] = population_by_country
# don't forget to call save()
wb.save()
# the Workbook is automatically closed when leaving the block defined by the with statement
```
Specifying the Number of Axes at Reading (CSV, Excel)
By default, read_csv and read_excel will search the position of the first cell containing the special character \ in the header line in order to determine the number of axes of the array to read. The special character \ is used to separate the name of the two last axes. If there is no special character \, read_csv and read_excel will consider that the array to read has only one dimension. For an array stored as:
| country | gender \ time | 2013 | 2014 | 2015 |
| ------- | -------------- | -------- | -------- | -------- |
| Belgium | Male | 5472856 | 5493792 | 5524068 |
| Belgium | Female | 5665118 | 5687048 | 5713206 |
| France | Male | 31772665 | 31936596 | 32175328 |
| France | Female | 33827685 | 34005671 | 34280951 |
| Germany | Male | 39380976 | 39556923 | 39835457 |
| Germany | Female | 41142770 | 41210540 | 41362080 |
read_csv and read_excel will find the special character \ in the second cell meaning it expects three axes (country, gender and time).
Sometimes, you need to read an array for which the name of the last axis is implicit:
| country | gender | 2013 | 2014 | 2015 |
| ------- | ------ | -------- | -------- | -------- |
| Belgium | Male | 5472856 | 5493792 | 5524068 |
| Belgium | Female | 5665118 | 5687048 | 5713206 |
| France | Male | 31772665 | 31936596 | 32175328 |
| France | Female | 33827685 | 34005671 | 34280951 |
| Germany | Male | 39380976 | 39556923 | 39835457 |
| Germany | Female | 41142770 | 41210540 | 41362080 |
For such case, you will have to inform read_csv and read_excel of the number of axes of the output array by setting the nb_axes argument:
End of explanation
"""
# by default, cells associated with missing label combinations are filled with nans.
# In that case, the output array is converted to a float array
read_csv(csv_dir + '/population_missing_values.csv')
"""
Explanation: NaNs and Missing Data Handling at Reading (CSV, Excel)
Sometimes, there is no data available for some label combinations. In the example below, the rows corresponding to France - Male and Germany - Female are missing:
| country | gender \ time | 2013 | 2014 | 2015 |
| ------- | -------------- | -------- | -------- | -------- |
| Belgium | Male | 5472856 | 5493792 | 5524068 |
| Belgium | Female | 5665118 | 5687048 | 5713206 |
| France | Female | 33827685 | 34005671 | 34280951 |
| Germany | Male | 39380976 | 39556923 | 39835457 |
By default, read_csv and read_excel will fill cells associated with missing label combinations with nans.
Be aware that, in that case, an int array will be converted to a float array.
End of explanation
"""
read_csv(csv_dir + '/population_missing_values.csv', fill_value=0)
# same for the read_excel function
read_excel(filepath_excel, sheet='population_missing_values', fill_value=0)
"""
Explanation: However, it is possible to choose which value to use to fill missing cells using the fill_value argument:
End of explanation
"""
# sort labels at reading --> Male and Female labels are inverted
read_csv(csv_dir + '/population.csv', sort_rows=True)
read_excel(filepath_excel, sheet='births', sort_rows=True)
read_hdf(filepath_hdf, key='deaths').sort_axes()
"""
Explanation: Sorting Axes at Reading (CSV, Excel, HDF5)
The sort_rows and sort_columns arguments of the reading functions allow you to sort rows and columns alphabetically:
End of explanation
"""
population.meta.title = 'Population at 1st January'
population.meta.origin = 'Table demo_pjan from Eurostat'
population.info
"""
Explanation: Metadata (HDF5)
Since version 0.29 of LArray, it is possible to add metadata to arrays:
End of explanation
"""
population.to_hdf('population.h5', 'population')
new_population = read_hdf('population.h5', 'population')
new_population.info
"""
Explanation: These metadata are automatically saved and loaded when working with the HDF5 file format:
End of explanation
"""
|
NlGG/Home | seminar/Chap05.ipynb | mit | # Simulate interest rate path by the Vasicek model
def vasicek(r0, K, theta, sigma, T=1., N=10, seed=777):
np.random.seed(seed)
dt = T/float(N)
rates = [r0]
for i in range(N):
        dr = K*(theta-rates[-1])*dt + sigma*np.random.normal()
rates.append(rates[-1] + dr)
return range(N+1), rates
x, y = vasicek(0.01875, 0.20, 0.01, 0.012, 10., 200)
plt.figure(figsize=(10,5))
plt.plot(x, y)
x, y = vasicek(0.01875, 0.20, 0.01, 0.012, 10., 200, seed=666)
plt.figure(figsize=(10,5))
plt.plot(x, y)
x, y = vasicek(0.01875, 0.20, 0.01, 0.012, 10., 200, seed=888)
plt.figure(figsize=(10,5))
plt.plot(x, y)
"""
Explanation: The Vasicek model
End of explanation
"""
# Get zero coupon bond price by Vasicek model
def exact_zcb(theta, kappa, sigma, tau, r0=0.):
B = (1 - np.exp(-kappa*tau))/kappa
A = np.exp((theta - (sigma**2)/(2*(kappa**2)))*(B - tau) - (sigma**2)/(4*kappa)*(B**2))
return A*np.exp(-r0*B)
Ts = np.r_[0.0:25.5:0.5]
zcbs = [exact_zcb(0.5, 0.02, 0.03, t, 0.015) for t in Ts]
plt.figure(figsize=(10,5))
plt.title("Zero Coupon Bond (ZCB) Value by Time")
plt.plot(Ts, zcbs, label='ZCB')
plt.ylabel("Value ($)")
plt.xlabel("Time in years")
plt.legend()
plt.grid(True)
plt.show()
"""
Explanation: Pricing a zero-coupon bond by the Vasicek model
End of explanation
"""
def exercise_value(K, R, t):
return K*math.exp(-R*t)
Ts = np.r_[0.0:25.5:0.5]
Ks = [exercise_value(0.95, 0.015, t) for t in Ts]
zcbs = [exact_zcb(0.5, 0.02, 0.03, t, 0.015) for t in Ts]
plt.figure(figsize=(10,5))
plt.title("Zero Coupon Bond (ZCB) Value by Time "
"and Strike(K) Values by Time")
plt.plot(Ts, zcbs, label='ZCB')
plt.plot(Ts, Ks, label='K', linestyle="--", marker=".")
plt.ylabel("Value ($)")
plt.xlabel("Time in years")
plt.legend()
plt.grid(True)
plt.show()
"""
Explanation: Value of early-exercise
End of explanation
"""
class VasicekCZCB:
def __init__(self):
self.norminv = st.distributions.norm.ppf
self.nor = st.distributions.norm.cdf
def vasicek_czcb_values(self, r0, R, ratio, T, sigma, kappa, theta, M, prob=1e-6, max_policy_iter=10,
grid_struct_const=0.25, rs=None):
r_min, dr, N, dtau = self.vasicek_params(r0, M, sigma, kappa, theta, T, prob, grid_struct_const, rs)
r = np.r_[0:N]*dr + r_min
v_mplus1 = np.ones(N)
for i in range(1, M+1):
K = self.exercise_call_price(R, ratio, i*dtau)
eex = np.ones(N)*K
subdiagonal, diagonal, superdiagonal = self.vasicek_diagonals(sigma, kappa, theta, r_min, dr, N, dtau)
v_mplus1, iterations = self.iterate(subdiagonal, diagonal, superdiagonal, v_mplus1, eex, max_policy_iter)
return r, v_mplus1
def vasicek_params(self, r0, M, sigma, kappa, theta, T, prob, grid_struct_const=0.25, rs=None):
(r_min, r_max) = (rs[0], rs[-1]) if not rs is None else self.vasicek_limits(r0, sigma, kappa, theta, T, prob)
dt = T/float(M)
N = self.calculate_N(grid_struct_const, dt,
sigma, r_max, r_min)
dr = (r_max-r_min)/(N-1)
return r_min, dr, N, dt
def calculate_N(self, max_structure_const, dt,
sigma, r_max, r_min):
N = 0
while True:
N += 1
grid_structure_interval = dt*(sigma**2)/(
((r_max-r_min)/float(N))**2)
if grid_structure_interval > max_structure_const:
break
return N
def vasicek_limits(self, r0, sigma, kappa,
theta, T, prob=1e-6):
er = theta+(r0-theta)*math.exp(-kappa*T)
variance = (sigma**2)*T if kappa==0 else (sigma**2)/(2*kappa)*(1-math.exp(-2*kappa*T))
stdev = math.sqrt(variance)
r_min = self.norminv(prob, er, stdev)
r_max = self.norminv(1-prob, er, stdev)
return r_min, r_max
def vasicek_diagonals(self, sigma, kappa, theta,
r_min, dr, N, dtau):
rn = np.r_[0:N]*dr + r_min
subdiagonals = kappa*(theta-rn)*dtau/(2*dr) - 0.5*(sigma**2)*dtau/(dr**2)
diagonals = 1 + rn*dtau + sigma**2*dtau/(dr**2)
superdiagonals = -kappa*(theta-rn)*dtau/(2*dr) - 0.5*(sigma**2)*dtau/(dr**2)
# Implement boundary conditions.
if N > 0:
v_subd0 = subdiagonals[0]
superdiagonals[0] = superdiagonals[0] - subdiagonals[0]
diagonals[0] += 2*v_subd0
subdiagonals[0] = 0
if N > 1:
v_superd_last = superdiagonals[-1]
superdiagonals[-1] = superdiagonals[-1] - subdiagonals[-1]
diagonals[-1] += 2*v_superd_last
superdiagonals[-1] = 0
return subdiagonals, diagonals, superdiagonals
def check_exercise(self, V, eex):
return V > eex
def exercise_call_price(self, R, ratio, tau):
K = ratio*np.exp(-R*tau)
return K
def vasicek_policy_diagonals(self, subdiagonal, diagonal,
superdiagonal, v_old, v_new,
eex):
has_early_exercise = self.check_exercise(v_new, eex)
subdiagonal[has_early_exercise] = 0
superdiagonal[has_early_exercise] = 0
policy = v_old/eex
policy_values = policy[has_early_exercise]
diagonal[has_early_exercise] = policy_values
return subdiagonal, diagonal, superdiagonal
def iterate(self, subdiagonal, diagonal, superdiagonal,
v_old, eex, max_policy_iter=10):
v_mplus1 = v_old
v_m = v_old
change = np.zeros(len(v_old))
prev_changes = np.zeros(len(v_old))
iterations = 0
while iterations <= max_policy_iter:
iterations += 1
v_mplus1 = self.tridiagonal_solve(subdiagonal, diagonal, superdiagonal, v_old)
subdiagonal, diagonal, superdiagonal = self.vasicek_policy_diagonals(subdiagonal, diagonal, superdiagonal,
v_old, v_mplus1, eex)
is_eex = self.check_exercise(v_mplus1, eex)
change[is_eex] = 1
if iterations > 1:
change[v_mplus1 != v_m] = 1
is_no_more_eex = False if True in is_eex else True
if is_no_more_eex:
break
v_mplus1[is_eex] = eex[is_eex]
changes = (change == prev_changes)
is_no_further_changes = all((x == 1) for x in changes)
if is_no_further_changes:
break
prev_changes = change
v_m = v_mplus1
return v_mplus1, (iterations-1)
def tridiagonal_solve(self, a, b, c, d):
nf = len(a) # Number of equations
ac, bc, cc, dc = map(np.array, (a, b, c, d)) # Copy the array
for it in range(1, nf):
mc = ac[it]/bc[it-1]
bc[it] = bc[it] - mc*cc[it-1]
dc[it] = dc[it] - mc*dc[it-1]
xc = ac
xc[-1] = dc[-1]/bc[-1]
for il in range(nf-2, -1, -1):
xc[il] = (dc[il]-cc[il]*xc[il+1])/bc[il]
del bc, cc, dc # Delete variables from memory
return xc
r0 = 0.05
R = 0.05
ratio = 0.95
sigma = 0.03
kappa = 0.15
theta = 0.05
prob = 1e-6
M = 250
max_policy_iter=10
grid_struct_interval = 0.25
rs = np.r_[0.0:2.0:0.1]
Vasicek = VasicekCZCB()
r, vals = Vasicek.vasicek_czcb_values(r0, R, ratio, 1., sigma, kappa, theta, M, prob,
max_policy_iter, grid_struct_interval, rs)
plt.figure(figsize=(10,5))
plt.title("Callable Zero Coupon Bond Values by r")
plt.plot(r, vals, label='1 yr')
for T in [5., 7., 10., 20.]:
r, vals = Vasicek.vasicek_czcb_values(r0, R, ratio, T, sigma, kappa, theta, M, prob,
max_policy_iter, grid_struct_interval, rs)
plt.plot(r, vals, label=str(T)+' yr', linestyle="--", marker=".")
plt.ylabel("Value ($)")
plt.xlabel("r")
plt.legend()
plt.grid(True)
plt.show()
for i in range(5, -1, -1):
print(i)
"""
Explanation: Since the issuer of the bond holds the right to exercise the call, the price of a callable zero-coupon bond becomes
$$callable~zero~coupon~bond~price~=~min(ZCB, K)$$
This bond price is an approximation given the current level of interest rates.
As a next step, early exercise can be taken into account in the form of policy iteration.
Policy iteration by finite differences
End of explanation
"""
|
gardenermike/deep-learning | tv-script-generation/dlnd_tv_script_generation.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
"""
view_sentence_range = (1000, 1050)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
    vocab_to_int = {word: i for i, word in enumerate(set(text))}
    int_to_vocab = {i: word for word, i in vocab_to_int.items()}
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
punctuation = {
'.': '||period||',
',': '||comma||',
'"': '||quotation_mark||',
';': '||semicolon||',
'!': '||exclamation_point||',
'?': '||question_mark||',
'(': '||left_parenthesis||',
')': '||right_parenthesis||',
'--': '||emdash||',
"\n": '||line_break||'
}
return punctuation
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
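A miniature version of the tokenize-then-split flow shows why the space padding matters; `token_dict` here is a hand-picked subset standing in for the full dictionary returned by `token_lookup()`:

```python
# Hand-picked subset of the punctuation tokens, for illustration
token_dict = {'.': '||period||', '!': '||exclamation_point||'}
line = 'bye! see you.'
for key, token in token_dict.items():
    # Pad with spaces so each symbol splits out as its own "word"
    line = line.replace(key, ' {} '.format(token))
print(line.split())
# ['bye', '||exclamation_point||', 'see', 'you', '||period||']
```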
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, shape=(None, None), name="input")
targets = tf.placeholder(tf.int32, shape=(None, None), name="targets")
learning_rate = tf.placeholder(tf.float32, name="learning_rate")
return inputs, targets, learning_rate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# TODO: Implement Function
layer_count = 3
# Create a separate BasicLSTMCell per layer; reusing one cell object across
# layers ([lstm] * layer_count) makes TensorFlow try to share variables and fails
stacked_lstm = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(layer_count)])
initial_state = stacked_lstm.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, name="initial_state")
return stacked_lstm, initial_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize the cell state using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform([vocab_size, embed_dim], -1.0, 1.0))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
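An embedding lookup is just row indexing into the embedding matrix. A NumPy analogue with toy sizes (the numbers are made up) makes the shapes concrete:

```python
import numpy as np

# Toy embedding matrix: one row of embed_dim values per vocabulary id
vocab_size, embed_dim = 5, 3
embedding = np.arange(vocab_size * embed_dim, dtype=float).reshape(vocab_size, embed_dim)
input_data = np.array([[0, 2, 4]])  # batch of one sequence of three word ids
embed = embedding[input_data]       # fancy indexing == embedding lookup
print(embed.shape)                  # (1, 3, 3): (batch, seq_length, embed_dim)
```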
"""
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
"""
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# TODO: Implement Function
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
state = tf.identity(state, name="final_state")
return outputs, state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
"""
# TODO: Implement Function
embedding = get_embed(input_data, vocab_size, rnn_size)
outputs, state = build_rnn(cell, embedding)
logits = tf.layers.dense(inputs=outputs, units=vocab_size)
return logits, state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
# TODO: Implement Function
n_batches = len(int_text) // (batch_size * seq_length)
# Keep only enough words to fill whole batches; targets are the inputs shifted by one
xdata = np.array(int_text[:n_batches * batch_size * seq_length])
ydata = np.array(int_text[1:n_batches * batch_size * seq_length + 1])
if len(ydata) < len(xdata):
# The last target has no following word, so wrap around to the first word
ydata = np.append(ydata, xdata[0])
x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, axis=1)
y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, axis=1)
return np.array(list(zip(x_batches, y_batches)))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
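The batch layout this function must produce can be checked with a reshape-and-split on toy data (15 word ids, batch size 2, sequence length 3), which is one way to validate any implementation:

```python
import numpy as np

int_text = list(range(1, 16))
batch_size, seq_length = 2, 3
n_batches = len(int_text) // (batch_size * seq_length)  # 2 full batches
xdata = np.array(int_text[:n_batches * batch_size * seq_length])
ydata = np.array(int_text[1:n_batches * batch_size * seq_length + 1])
# Reshape to (batch_size, n_batches * seq_length), then cut along the time axis
x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, axis=1)
y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, axis=1)
print(x_batches[0].tolist())  # [[1, 2, 3], [7, 8, 9]]
print(y_batches[0].tolist())  # [[2, 3, 4], [8, 9, 10]]
```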
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
"""
# Number of Epochs
num_epochs = 8192
# Sequence Length
seq_length = 32
# Batch Size
batch_size = int(len(int_text) / seq_length // 2) #maximize the batch size to not waste data
# RNN Size
rnn_size = 128
# Learning Rate
learning_rate = 0.001
# Show stats for every n number of batches
show_every_n_batches = 20
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
inputs = loaded_graph.get_tensor_by_name('input:0')
initial_state = loaded_graph.get_tensor_by_name('initial_state:0')
final_state = loaded_graph.get_tensor_by_name('final_state:0')
probs = loaded_graph.get_tensor_by_name('probs:0')
return inputs, initial_state, final_state, probs
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
weighted_index = np.searchsorted(np.cumsum(probabilities), np.random.rand())
return int_to_vocab[int(weighted_index)]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
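The cumulative-sum trick in pick_word samples an index in proportion to its probability. Here it is in isolation, with a made-up vocabulary and a fixed seed so the draw is reproducible:

```python
import numpy as np

np.random.seed(0)
probabilities = np.array([0.1, 0.2, 0.7])          # toy next-word distribution
int_to_vocab = {0: 'homer', 1: 'moe', 2: 'beer'}   # toy vocabulary
draw = np.random.rand()                            # ~0.5488 with this seed
# cumsum = [0.1, 0.3, 1.0]; the draw lands in the third interval
index = np.searchsorted(np.cumsum(probabilities), draw)
print(int_to_vocab[int(index)])  # beer
```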
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
"""
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
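The sliding context window used in the generation loop above (gen_sentences[-seq_length:]) can be seen in isolation; the primed sentence list here is made up:

```python
# Keep only the last seq_length words as context for the next prediction
seq_length = 3
gen_sentences = ['moe_szyslak:', 'yeah', 'sure', 'thing', 'homer']
dyn_input = [gen_sentences[-seq_length:]]
print(dyn_input)  # [['sure', 'thing', 'homer']]
```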
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/text_classification/labs/classify_text_with_bert.ipynb | apache-2.0 | # A dependency of the preprocessing for BERT inputs
!pip install -q --user tensorflow-text
"""
Explanation: Classify text with BERT
Learning Objectives
Learn how to load a pre-trained BERT model from TensorFlow Hub
Learn how to build your own model by combining with a classifier
Learn how to train your own BERT model by fine-tuning
Learn how to save your trained model and use it
Learn how to evaluate a text classification model
This lab will show you how to fine-tune BERT to perform sentiment analysis on a dataset of plain-text IMDB movie reviews.
In addition to training a model, you will learn how to preprocess text into an appropriate format.
Before you start
Please ensure you have a GPU (1 x NVIDIA Tesla K80 should be enough) attached to your Notebook instance so that training doesn't take too long.
About BERT
BERT and other Transformer encoder architectures have been wildly successful on a variety of tasks in NLP (natural language processing). They compute vector-space representations of natural language that are suitable for use in deep learning models. The BERT family of models uses the Transformer encoder architecture to process each token of input text in the full context of all tokens before and after, hence the name: Bidirectional Encoder Representations from Transformers.
BERT models are usually pre-trained on a large corpus of text, then fine-tuned for specific tasks.
Setup
End of explanation
"""
!pip install -q --user tf-models-official
import os
import shutil
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text as text
from official.nlp import optimization  # to create the AdamW optimizer
import matplotlib.pyplot as plt
tf.get_logger().setLevel('ERROR')
"""
Explanation: You will use the AdamW optimizer from tensorflow/models.
End of explanation
"""
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
"""
Explanation: To check if you have a GPU attached. Run the following.
End of explanation
"""
url = 'https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'
#TODO
#Set a path to a folder outside the git repo. This is important so data won't get indexed by git on Jupyter lab
path = #example: '/home/jupyter/'
dataset = tf.keras.utils.get_file('aclImdb_v1.tar.gz', url,
untar=True, cache_dir=path,
cache_subdir='')
dataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb')
train_dir = os.path.join(dataset_dir, 'train')
# remove unused folders to make it easier to load the data
remove_dir = os.path.join(train_dir, 'unsup')
shutil.rmtree(remove_dir)
"""
Explanation: Sentiment Analysis
This notebook trains a sentiment analysis model to classify movie reviews as positive or negative, based on the text of the review.
You'll use the Large Movie Review Dataset that contains the text of 50,000 movie reviews from the Internet Movie Database.
Download the IMDB dataset
Let's download and extract the dataset, then explore the directory structure.
TODO: Set path to a folder outside the git repo where the IMDB data will be downloaded
End of explanation
"""
AUTOTUNE = tf.data.AUTOTUNE
batch_size = 32
seed = 42
raw_train_ds = tf.keras.preprocessing.text_dataset_from_directory(
path+'aclImdb/train',
batch_size=batch_size,
validation_split=0.2,
subset='training',
seed=seed)
class_names = raw_train_ds.class_names
train_ds = raw_train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = tf.keras.preprocessing.text_dataset_from_directory(
path+'aclImdb/train',
batch_size=batch_size,
validation_split=0.2,
subset='validation',
seed=seed)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
test_ds = tf.keras.preprocessing.text_dataset_from_directory(
path+'aclImdb/test',
batch_size=batch_size)
test_ds = test_ds.cache().prefetch(buffer_size=AUTOTUNE)
"""
Explanation: Next, you will use the text_dataset_from_directory utility to create a labeled tf.data.Dataset.
The IMDB dataset has already been divided into train and test, but it lacks a validation set. Let's create a validation set using an 80:20 split of the training data by using the validation_split argument below.
Note: When using the validation_split and subset arguments, make sure to either specify a random seed, or to pass shuffle=False, so that the validation and training splits have no overlap.
End of explanation
"""
for text_batch, label_batch in train_ds.take(1):
for i in range(3):
print(f'Review: {text_batch.numpy()[i]}')
label = label_batch.numpy()[i]
print(f'Label : {label} ({class_names[label]})')
"""
Explanation: Let's take a look at a few reviews.
End of explanation
"""
bert_model_name = 'small_bert/bert_en_uncased_L-4_H-512_A-8'
"""
@param ["bert_en_uncased_L-12_H-768_A-12",
"bert_en_cased_L-12_H-768_A-12", "bert_multi_cased_L-12_H-768_A-12",
"small_bert/bert_en_uncased_L-2_H-128_A-2",
"small_bert/bert_en_uncased_L-2_H-256_A-4",
"small_bert/bert_en_uncased_L-2_H-512_A-8",
"small_bert/bert_en_uncased_L-2_H-768_A-12",
"small_bert/bert_en_uncased_L-4_H-128_A-2",
"small_bert/bert_en_uncased_L-4_H-256_A-4",
"small_bert/bert_en_uncased_L-4_H-512_A-8",
"small_bert/bert_en_uncased_L-4_H-768_A-12",
"small_bert/bert_en_uncased_L-6_H-128_A-2",
"small_bert/bert_en_uncased_L-6_H-256_A-4",
"small_bert/bert_en_uncased_L-6_H-512_A-8",
"small_bert/bert_en_uncased_L-6_H-768_A-12",
"small_bert/bert_en_uncased_L-8_H-128_A-2",
"small_bert/bert_en_uncased_L-8_H-256_A-4",
"small_bert/bert_en_uncased_L-8_H-512_A-8",
"small_bert/bert_en_uncased_L-8_H-768_A-12",
"small_bert/bert_en_uncased_L-10_H-128_A-2",
"small_bert/bert_en_uncased_L-10_H-256_A-4",
"small_bert/bert_en_uncased_L-10_H-512_A-8",
"small_bert/bert_en_uncased_L-10_H-768_A-12",
"small_bert/bert_en_uncased_L-12_H-128_A-2",
"small_bert/bert_en_uncased_L-12_H-256_A-4",
"small_bert/bert_en_uncased_L-12_H-512_A-8",
"small_bert/bert_en_uncased_L-12_H-768_A-12",
"albert_en_base", "electra_small",
"electra_base",
"experts_pubmed",
"experts_wiki_books",
"talking-heads_base"]
"""
map_name_to_handle = {
'bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3',
'bert_en_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_cased_L-12_H-768_A-12/3',
'bert_multi_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/3',
'small_bert/bert_en_uncased_L-2_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-128_A-2/1',
'small_bert/bert_en_uncased_L-2_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-256_A-4/1',
'small_bert/bert_en_uncased_L-2_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-512_A-8/1',
'small_bert/bert_en_uncased_L-2_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-768_A-12/1',
'small_bert/bert_en_uncased_L-4_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-128_A-2/1',
'small_bert/bert_en_uncased_L-4_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-256_A-4/1',
'small_bert/bert_en_uncased_L-4_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1',
'small_bert/bert_en_uncased_L-4_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-768_A-12/1',
'small_bert/bert_en_uncased_L-6_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-128_A-2/1',
'small_bert/bert_en_uncased_L-6_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-256_A-4/1',
'small_bert/bert_en_uncased_L-6_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-512_A-8/1',
'small_bert/bert_en_uncased_L-6_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-768_A-12/1',
'small_bert/bert_en_uncased_L-8_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-128_A-2/1',
'small_bert/bert_en_uncased_L-8_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-256_A-4/1',
'small_bert/bert_en_uncased_L-8_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-512_A-8/1',
'small_bert/bert_en_uncased_L-8_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-768_A-12/1',
'small_bert/bert_en_uncased_L-10_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-128_A-2/1',
'small_bert/bert_en_uncased_L-10_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-256_A-4/1',
'small_bert/bert_en_uncased_L-10_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-512_A-8/1',
'small_bert/bert_en_uncased_L-10_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-768_A-12/1',
'small_bert/bert_en_uncased_L-12_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-128_A-2/1',
'small_bert/bert_en_uncased_L-12_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-256_A-4/1',
'small_bert/bert_en_uncased_L-12_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-512_A-8/1',
'small_bert/bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-768_A-12/1',
'albert_en_base':
'https://tfhub.dev/tensorflow/albert_en_base/2',
'electra_small':
'https://tfhub.dev/google/electra_small/2',
'electra_base':
'https://tfhub.dev/google/electra_base/2',
'experts_pubmed':
'https://tfhub.dev/google/experts/bert/pubmed/2',
'experts_wiki_books':
'https://tfhub.dev/google/experts/bert/wiki_books/2',
'talking-heads_base':
'https://tfhub.dev/tensorflow/talkheads_ggelu_bert_en_base/1',
}
map_model_to_preprocess = {
'bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'bert_en_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_cased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'bert_multi_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_multi_cased_preprocess/3',
'albert_en_base':
'https://tfhub.dev/tensorflow/albert_en_preprocess/3',
'electra_small':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'electra_base':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'experts_pubmed':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'experts_wiki_books':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'talking-heads_base':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
}
tfhub_handle_encoder = map_name_to_handle[bert_model_name]
tfhub_handle_preprocess = map_model_to_preprocess[bert_model_name]
print(f'BERT model selected : {tfhub_handle_encoder}')
print(f'Preprocess model auto-selected: {tfhub_handle_preprocess}')
"""
Explanation: Loading models from TensorFlow Hub
For the purpose of this lab, we will be loading a model called Small BERT. Small BERT has the same general architecture as the original BERT but has fewer and/or smaller Transformer blocks.
Some other popular BERT models are BERT Base, ALBERT, BERT Experts, Electra. See the continued learning section at the end of this lab for more info.
Aside from the models available below, there are multiple versions of the models that are larger and can yield even better accuracy, but they are too big to be fine-tuned on a single GPU. You will be able to do that on the Solve GLUE tasks using BERT on a TPU colab.
You'll see in the code below that switching the tfhub.dev URL is enough to try any of these models, because all the differences between them are encapsulated in the SavedModels from TF Hub.
End of explanation
"""
bert_preprocess_model = #TODO: your code goes here
"""
Explanation: The preprocessing model
Text inputs need to be transformed to numeric token ids and arranged in several Tensors before being input to BERT. TensorFlow Hub provides a matching preprocessing model for each of the BERT models discussed above, which implements this transformation using TF ops from the TF.text library. It is not necessary to run pure Python code outside your TensorFlow model to preprocess text.
The preprocessing model must be the one referenced by the documentation of the BERT model, which you can read at the URL printed above. For BERT models from the drop-down above, the preprocessing model is selected automatically.
Note: You will load the preprocessing model into a hub.KerasLayer to compose your fine-tuned model. This is the preferred API to load a TF2-style SavedModel from TF Hub into a Keras model.
TODO 1: Use hub.KerasLayer to initialize the preprocessing model
End of explanation
"""
text_test = ['this is such an amazing movie!']
text_preprocessed = #TODO: Code goes here
# These print statements will help you inspect the keys in the preprocessed dictionary
print(f'Keys : {list(text_preprocessed.keys())}')
# 1. input_word_ids are the token ids of the words in the tokenized sentence
print(f'Shape : {text_preprocessed["input_word_ids"].shape}')
print(f'Word Ids : {text_preprocessed["input_word_ids"][0, :12]}')
# 2. input_mask marks which positions hold real tokens (1) versus padding (0)
print(f'Input Mask : {text_preprocessed["input_mask"][0, :12]}')
# 3. input_type_ids is the segment (sentence) id of each token in the input
print(f'Type Ids : {text_preprocessed["input_type_ids"][0, :12]}')
"""
Explanation: Let's try the preprocessing model on some text and see the output:
TODO 2: Call the preprocess model function and pass text_test
End of explanation
"""
bert_model = hub.KerasLayer(tfhub_handle_encoder)
bert_results = bert_model(text_preprocessed)
print(f'Loaded BERT: {tfhub_handle_encoder}')
print(f'Pooled Outputs Shape:{bert_results["pooled_output"].shape}')
print(f'Pooled Outputs Values:{bert_results["pooled_output"][0, :12]}')
print(f'Sequence Outputs Shape:{bert_results["sequence_output"].shape}')
print(f'Sequence Outputs Values:{bert_results["sequence_output"][0, :12]}')
"""
Explanation: As you can see, now you have the 3 outputs from the preprocessing that a BERT model would use (input_word_ids, input_mask and input_type_ids).
Some other important points:
- The input is truncated to 128 tokens.
- The input_type_ids only have one value (0) because this is a single sentence input. For a multiple sentence input, it would have one number for each input.
Since this text preprocessor is a TensorFlow model, it can be included in your model directly.
Using the BERT model
Before putting BERT into your own model, let's take a look at its outputs. You will load it from TF Hub and see the returned values.
End of explanation
"""
def build_classifier_model():
# TODO: define your model here
return tf.keras.Model(text_input, net)
#Let's check that the model runs with the output of the preprocessing model.
classifier_model = build_classifier_model()
bert_raw_result = classifier_model(tf.constant(text_test))
print(tf.sigmoid(bert_raw_result))
"""
Explanation: The BERT models return a map with 3 important keys: pooled_output, sequence_output, encoder_outputs:
pooled_output to represent each input sequence as a whole. The shape is [batch_size, H]. You can think of this as an embedding for the entire movie review.
sequence_output represents each input token in the context. The shape is [batch_size, seq_length, H]. You can think of this as a contextual embedding for every token in the movie review.
encoder_outputs are the intermediate activations of the L Transformer blocks. outputs["encoder_outputs"][i] is a Tensor of shape [batch_size, seq_length, H] with the outputs of the i-th Transformer block, for 0 <= i < L. The last value of the list is equal to sequence_output.
For the fine-tuning you are going to use the pooled_output array.
Define your model
You will create a very simple fine-tuned model, with the preprocessing model, the selected BERT model, one Dense and a Dropout layer.
Note: for more information about the base model's inputs and outputs, you can just follow the model's URL to its documentation. Here you don't need to worry about it, because the preprocessing model takes care of that for you.
TODO 3: Define your model. It should contain the preprocessing model, the selected BERT model (smallBERT), a dense layer and dropout layer
HINT The order of the layers in the model should be:
1. Input Layer
2. Pre-processing Layer
3. Encoder Layer
4. From the BERT output map, use pooled_output
5. Dropout layer
6. Dense layer
End of explanation
"""
tf.keras.utils.plot_model(classifier_model)
"""
Explanation: The output is meaningless, of course, because the model has not been trained yet.
Let's take a look at the model's structure.
End of explanation
"""
loss = #TODO: your code goes here
metrics = #TODO: your code goes here
"""
Explanation: Model training
You now have all the pieces to train a model, including the preprocessing module, BERT encoder, data, and classifier.
Loss function
Since this is a binary classification problem and the model outputs a probability (a single-unit layer), you'll use the losses.BinaryCrossentropy loss function.
TODO 4: define your loss and evaluation metric here. Since it is a binary classification use BinaryCrossentropy and BinaryAccuracy
End of explanation
"""
epochs = 5
steps_per_epoch = tf.data.experimental.cardinality(train_ds).numpy()
num_train_steps = steps_per_epoch * epochs
num_warmup_steps = int(0.1*num_train_steps)
init_lr = 3e-5
optimizer = optimization.create_optimizer(init_lr=init_lr,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps,
optimizer_type='adamw')
"""
Explanation: Optimizer
For fine-tuning, let's use the same optimizer that BERT was originally trained with: the "Adaptive Moments" (Adam). This optimizer minimizes the prediction loss and does regularization by weight decay (not using moments), which is also known as AdamW.
In past labs, we have been using the Adam optimizer, which is a popular choice. However, for this lab we will be using a new optimizer that is meant to improve generalization. The intuition and algorithm behind AdamW can be found in the paper here.
For the learning rate (init_lr), we use the same schedule as BERT pre-training: linear decay of a notional initial learning rate, prefixed with a linear warm-up phase over the first 10% of training steps (num_warmup_steps). In line with the BERT paper, the initial learning rate is smaller for fine-tuning (best of 5e-5, 3e-5, 2e-5).
End of explanation
"""
#TODO: Model compile code goes here
"""
Explanation: Loading the BERT model and training
Using the classifier_model you created earlier, you can compile the model with the loss, metric and optimizer.
TODO 5: compile the model using the optimizer, loss and metrics you defined above
End of explanation
"""
print(f'Training model with {tfhub_handle_encoder}')
history = #TODO: model fit code goes here
"""
Explanation: Note: training time will vary depending on the complexity of the BERT model you have selected.
TODO 6: write code to fit the model and start training
End of explanation
"""
loss, accuracy = classifier_model.evaluate(test_ds)
print(f'Loss: {loss}')
print(f'Accuracy: {accuracy}')
"""
Explanation: Evaluate the model
Let's see how the model performs. Two values will be returned. Loss (a number which represents the error, lower values are better), and accuracy.
End of explanation
"""
history_dict = history.history
print(history_dict.keys())
acc = history_dict['binary_accuracy']
val_acc = history_dict['val_binary_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
fig = plt.figure(figsize=(10, 6))
fig.tight_layout()
plt.subplot(2, 1, 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'r', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
# plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.subplot(2, 1, 2)
plt.plot(epochs, acc, 'r', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
"""
Explanation: Plot the accuracy and loss over time
Using the History object returned by model.fit(), you can plot the training and validation loss for comparison, as well as the training and validation accuracy:
End of explanation
"""
dataset_name = 'imdb'
saved_model_path = './{}_bert'.format(dataset_name.replace('/', '_'))
#TODO: your code goes here
"""
Explanation: In this plot, the red lines represent the training loss and accuracy, and the blue lines are the validation loss and accuracy.
Export for inference
Now you just save your fine-tuned model for later use.
TODO 7: Write code to save the model to saved_model_path
End of explanation
"""
reloaded_model = tf.saved_model.load(saved_model_path)
"""
Explanation: Let's reload the model so you can try it side by side with the model that is still in memory.
End of explanation
"""
def print_my_examples(inputs, results):
result_for_printing = \
[f'input: {inputs[i]:<30} : score: {results[i][0]:.6f}'
for i in range(len(inputs))]
print(*result_for_printing, sep='\n')
print()
examples = [
'this is such an amazing movie!', # this is the same sentence tried earlier
'The movie was great!',
'The movie was meh.',
'The movie was okish.',
'The movie was terrible...'
]
reloaded_results = tf.sigmoid(reloaded_model(tf.constant(examples)))
original_results = tf.sigmoid(classifier_model(tf.constant(examples)))
print('Results from the saved model:')
print_my_examples(examples, reloaded_results)
print('Results from the model in memory:')
print_my_examples(examples, original_results)
"""
Explanation: Here you can test your model on any sentence you want, just add to the examples variable below.
End of explanation
"""
serving_results = reloaded_model \
.signatures['serving_default'](tf.constant(examples))
serving_results = tf.sigmoid(serving_results['classifier'])
print_my_examples(examples, serving_results)
"""
Explanation: If you want to use your model on TF Serving, remember that it will call your SavedModel through one of its named signatures. In Python, you can test them as follows:
End of explanation
"""
|
kazzz24/deep-learning | autoencoder/Simple_Autoencoder_Solution.ipynb | mit | %matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
"""
Explanation: A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
End of explanation
"""
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
"""
Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
End of explanation
"""
# Size of the encoding layer (the hidden layer)
encoding_dim = 32
image_size = mnist.train.images.shape[1]
inputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, image_size), name='targets')
# Output of hidden layer
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)
# Output layer logits
logits = tf.layers.dense(encoded, image_size, activation=None)
# Sigmoid output from
decoded = tf.nn.sigmoid(logits, name='output')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
"""
Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
End of explanation
"""
# Create the session
sess = tf.Session()
"""
Explanation: Training
End of explanation
"""
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
"""
Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
"""
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
"""
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation
"""
|
JaviMerino/lisa | ipynb/wlgen/rtapp_examples.ipynb | apache-2.0 | # Let's use the local host as a target
te = TestEnv(
target_conf={
"platform": 'host',
"username": 'put_here_your_username'
})
"""
Explanation: Test environment setup
End of explanation
"""
# Create a new RTApp workload generator
rtapp = RTA(
target=te.target, # Target execution on the local machine
name='example', # This is the name of the JSON configuration file reporting
# the generated RTApp configuration
calibration={0: 10, 1: 11, 2: 12, 3: 13} # These are a set of fake
# calibration values
)
"""
Explanation: Create a new RTA workload generator object
The wlgen::RTA class is a workload generator which exposes an API to configure
RTApp-based workloads as well as to execute them on a target.
End of explanation
"""
# Configure this RTApp instance to:
rtapp.conf(
# 1. generate a "profile based" set of tasks
kind='profile',
# 2. define the "profile" of each task
params={
# 3. PERIODIC task
#
# This class defines a task which load is periodic with a configured
# period and duty-cycle.
#
# This class is a specialization of the 'pulse' class since a periodic
# load is generated as a sequence of pulse loads.
#
# Args:
    #             duty_cycle_pct (int, [0-100]): the pulses load [%]
# default: 50[%]
# duration_s (float): the duration in [s] of the entire workload
# default: 1.0[s]
# period_ms (float): the period used to define the load in [ms]
# default: 100.0[ms]
# delay_s (float): the delay in [s] before ramp start
# default: 0[s]
# sched (dict): the scheduler configuration for this task
'task_per20': Periodic(
period_ms=100, # period
duty_cycle_pct=20, # duty cycle
duration_s=5, # duration
cpus=None, # run on all CPUS
sched={
"policy": "FIFO", # Run this task as a SCHED_FIFO task
},
delay_s=0 # start at the start of RTApp
).get(),
},
# 4. use this folder for task logfiles
run_dir='/tmp'
);
"""
Explanation: Workload Generation Examples
Single periodic task
An RTApp workload is defined by specifying a kind, which represents the way
we want to defined the behavior of each task.<br>
The most common kind is profile, which allows to define each task using one
of the predefined profile supported by the RTA base class.<br>
<br>
The following example shows how to generate a "periodic" task<br>
End of explanation
"""
# Dump the configured JSON file for that task
with open("./example_00.json") as fh:
rtapp_config = json.load(fh)
print json.dumps(rtapp_config, indent=4)
"""
Explanation: The output of the previous cell reports the main properties of the generated
tasks. Thus, for example, we see that the first task is configured to be:
1. named task_per20
2. executed as a SCHED_FIFO task
3. generating a load which is calibrated with respect to CPU 0
4. defined by one single "phase" which produces a periodic load for a duration of 5[s]
5. a periodic load consisting of 50 cycles
6. each cycle having a period of 100[ms] and a duty-cycle of 20%
7. which means that, in every cycle, the task runs for 20[ms] and then sleeps for 80[ms]
All these properties are translated into a JSON configuration file for RTApp.<br>
Let's see what the generated configuration file looks like:
End of explanation
"""
# Configure this RTApp instance to:
rtapp.conf(
# 1. generate a "profile based" set of tasks
kind='profile',
# 2. define the "profile" of each task
params={
# 3. RAMP task
#
# This class defines a task which load is a ramp with a configured number
# of steps according to the input parameters.
#
# Args:
# start_pct (int, [0-100]): the initial load [%], (default 0[%])
# end_pct (int, [0-100]): the final load [%], (default 100[%])
# delta_pct (int, [0-100]): the load increase/decrease [%],
# default: 10[%]
# increase if start_prc < end_prc
# decrease if start_prc > end_prc
# time_s (float): the duration in [s] of each load step
# default: 1.0[s]
# period_ms (float): the period used to define the load in [ms]
# default: 100.0[ms]
# delay_s (float): the delay in [s] before ramp start
# default: 0[s]
# loops (int): number of time to repeat the ramp, with the
# specified delay in between
# default: 0
# sched (dict): the scheduler configuration for this task
# cpus (list): the list of CPUs on which task can run
'task_rmp20_5-60': Ramp(
period_ms=100, # period
start_pct=5, # intial load
end_pct=65, # end load
delta_pct=20, # load % increase...
time_s=1, # ... every 1[s]
cpus="0" # run just on first CPU
).get(),
# 4. STEP task
#
# This class defines a task which load is a step with a configured
# initial and final load.
#
# Args:
# start_pct (int, [0-100]): the initial load [%]
# default 0[%])
# end_pct (int, [0-100]): the final load [%]
# default 100[%]
# time_s (float): the duration in [s] of the start and end load
# default: 1.0[s]
# period_ms (float): the period used to define the load in [ms]
# default 100.0[ms]
# delay_s (float): the delay in [s] before ramp start
# default 0[s]
# loops (int): number of time to repeat the ramp, with the
# specified delay in between
# default: 0
# sched (dict): the scheduler configuration for this task
# cpus (list): the list of CPUs on which task can run
'task_stp10-50': Step(
period_ms=100, # period
            start_pct=0,       # initial load
end_pct=50, # end load
time_s=1, # ... every 1[s]
delay_s=0.5 # start .5[s] after the start of RTApp
).get(),
# 5. PULSE task
#
# This class defines a task which load is a pulse with a configured
# initial and final load.
#
# The main difference with the 'step' class is that a pulse workload is
        # by definition a 'step down', i.e. the workload switches from an initial
# load to a final one which is always lower than the initial one.
# Moreover, a pulse load does not generate a sleep phase in case of 0[%]
# load, i.e. the task ends as soon as the non null initial load has
# completed.
#
# Args:
# start_pct (int, [0-100]): the initial load [%]
# default: 0[%]
# end_pct (int, [0-100]): the final load [%]
# default: 100[%]
# NOTE: must be lower than start_pct value
# time_s (float): the duration in [s] of the start and end load
# default: 1.0[s]
# NOTE: if end_pct is 0, the task end after the
# start_pct period completed
# period_ms (float): the period used to define the load in [ms]
# default: 100.0[ms]
# delay_s (float): the delay in [s] before ramp start
# default: 0[s]
# loops (int): number of time to repeat the ramp, with the
# specified delay in between
# default: 0
# sched (dict): the scheduler configuration for this task
# cpus (list): the list of CPUs on which task can run
'task_pls5-80': Pulse(
period_ms=100, # period
            start_pct=65,      # initial load
end_pct=5, # end load
time_s=1, # ... every 1[s]
delay_s=0.5 # start .5[s] after the start of RTApp
).get(),
},
# 6. use this folder for task logfiles
run_dir='/tmp'
);
# Dump the configured JSON file for that task
with open("./example_00.json") as fh:
rtapp_config = json.load(fh)
print json.dumps(rtapp_config, indent=4)
"""
Explanation: Workload mix
Using the wlgen::RTA workload generator we can easily create multiple tasks, each one with different "profiles", which are executed once the rtapp application is started in the target.<br>
<br>
In the following example we configure a workload mix composed of a RAMP task, a STEP task and a PULSE task:
End of explanation
"""
# Initial phase and pinning parameters
ramp = Ramp(period_ms=100, start_pct=5, end_pct=65, delta_pct=20, time_s=1,
cpus="0")
# Following phases
medium_slow = Periodic(duty_cycle_pct=10, duration_s=5, period_ms=100)
high_fast = Periodic(duty_cycle_pct=60, duration_s=5, period_ms=10)
medium_fast = Periodic(duty_cycle_pct=10, duration_s=5, period_ms=1)
high_slow = Periodic(duty_cycle_pct=60, duration_s=5, period_ms=100)
#Compose the task
complex_task = ramp + medium_slow + high_fast + medium_fast + high_slow
# Configure this RTApp instance to:
rtapp.conf(
# 1. generate a "profile based" set of tasks
kind='profile',
# 2. define the "profile" of each task
params={
'complex' : complex_task.get()
},
# 6. use this folder for task logfiles
run_dir='/tmp'
)
"""
Explanation: Workload composition
End of explanation
"""
|
openhep/ackp16 | H750.ipynb | gpl-3.0 | %matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import numpy as np
import math
from sympy import *
from scipy.optimize import root, brentq
from sympy.abc import tau, sigma, x, D, T, Q, Y, N
T3, sigmaprime = symbols('T3, sigmaprime')
# local packages
from plothelp import label_line
import smgroup
from constants import *
smgroup.GUTU1 = False # we don't work here with GUT-unified value for alpha_1
"""
Explanation: <center><b><font size="5">H(750) decays to gauge boson pairs</font></b></center>
Intro and definitions
Packages and constants
End of explanation
"""
def VVfact(S1, S2, S3):
"""Factors for loop decays to VV channels.
Phase space factors for identical particles accounted for here.
"""
gg = S1 + S2
GG = (2*S3)*(alphas/alpha)*sqrt(Kfactor)
ZZ = (cw2/sw2)*S2 + (sw2/cw2)*S1
Zg = sqrt(2)*( (cw/sw)*S2 - (sw/cw)*S1 )
WW = sqrt(2) * S2 / sw2
return {'gg':gg, 'GG':GG, 'ZZ':ZZ, 'Zg':Zg, 'WW':WW}
def VVfactW(D=7, Y=0, real=True, wght=False):
"""Factors for loop decays to VV channels.
Phase space factors for identical particles accounted for here.
wght --- T3-weight factor entering sum over multiplet
"""
r, wNC, wCC = (1, 1, 1)
if real:
r = 2
if wght:
# Weights for CKP model quintuplet scalars which don't
# couple universally to H
wNC = (2-T3)/4
wCC = (3-2*T3)/8 # average of (2-T3)/4 and (1-T3)/4
T = (D-S(1))/2 # weak isospin
gg = summation((wNC*Q**2).subs(Q,T3+Y/2).evalf(), (T3, -T, T))/r
GG = 0
ZZ = summation((wNC*(T3-sw2*Q)**2).subs(Q,T3+Y/2).evalf(), (T3, -T, T))/sw2/cw2/r
Zg = sqrt(2)*summation((wNC*Q*(T3-sw2*Q)).subs(Q,T3+Y/2).evalf(), (T3, -T, T))/sw/cw/r
WW = sqrt(2) * summation((wCC*(T-T3)*(T+T3+1)/2).evalf(), (T3, -T, T))/sw2/r
return {'gg':gg, 'GG':GG, 'ZZ':ZZ, 'Zg':Zg, 'WW':WW}
def Rtogg(reps, prt=False):
"""Ratios of VV to gamma-gamma channels."""
VVs = VVfact(*smgroup.SMDynkin(reps))
gg = VVs['gg']
RGG = float((VVs['GG']/gg)**2)
RZg = float((VVs['Zg']/gg)**2)
RZZ = float((VVs['ZZ']/gg)**2)
RWW = float((VVs['WW']/gg)**2)
if prt:
print("RGG = {:.3f}, RZg = {:.3f}, RZZ = {:.3f}, RWW = {:.3f}".format(RGG,
RZg, RZZ , RWW) )
return RGG, RZg, RZZ, RWW
def RtoggW(D=7, Y=0, real=True, wght=False, prt=False):
"""Ratios of VV to gamma-gamma channels if T3-weights are needed."""
VVs = VVfactW(D, Y, real, wght)
gg = VVs['gg']
RGG = float((VVs['GG']/gg)**2)
RZg = float((VVs['Zg']/gg)**2)
RZZ = float((VVs['ZZ']/gg)**2)
RWW = float((VVs['WW']/gg)**2)
if prt:
print("RGG = {:.3f}, RZg = {:.3f}, RZZ = {:.3f}, RWW = {:.3f}".format(RGG,
RZg, RZZ , RWW) )
return RGG, RZg, RZZ, RWW
# Check of consistency of two formulas
res = Rtogg([smgroup.RealScalar(1,7,0)], prt=True)
res = RtoggW(D=7, Y=0, real=True, prt=True)
res = Rtogg([smgroup.ComplexScalar(1,5,-1)], prt=True)
res = RtoggW(D=5, Y=-1, real=False, prt=True)
"""
Explanation: One-loop decays to pairs of gauge bosons
Factors different for different VV channels
End of explanation
"""
res = Rtogg([smgroup.ComplexScalar(1,1,-1)], prt=True)
res = RtoggW(D=1, Y=-1, real=False, prt=True)
"""
Explanation: Note that the first set is consistent with the table below Fig. 4 of Strumia's arXiv:1605.09401, which has RZg=7, RZZ=12, RWW=40. We can reproduce the second row of his table with an SU(2) singlet:
End of explanation
"""
# Loop functions
def f(tau):
return asin(1/sqrt(tau))**2
def A0(tau):
return -tau*(1-tau*f(tau))
def A1(tau):
return -2-3*tau-3*tau*(2-tau)*f(tau)
def A12(tau):
return 2*tau*(1+(1-tau)*f(tau))
# numpy-approved versions
fN = lambdify(x, f(x), 'numpy')
A0N = lambdify(x, A0(x), 'numpy')
A1N = lambdify(x, A1(x), 'numpy')
A12N = lambdify(x, A12(x), 'numpy')
def tauN(m, mH=750):
return 4*m**2/mH**2
print(" A0 --> {}".format(limit(A0(tau), tau, oo)))
print("A12 --> {}".format(limit(A12(tau), tau, oo)))
print(" A1 --> {}".format(limit(A1(tau), tau, oo)))
# Numerical check of the relation of C0 and f(tau):
# LoopTools for mH=125, m=375
# fLT = 0.028038859
f(4*375**2/125**2)
"""
Explanation: Final expression for $H\to VV$ width
$$\Gamma(h\to\gamma\gamma) = B \left|\sum_i Q_i^2 A_{i}(\tau_i) \right|^2$$
$$ B = \frac{\alpha^2 g^2 m_h^3}{1024 \pi^3 m_W^2} = \frac{G_F \alpha^2 m_h^3}{
128\sqrt{2} \pi^3}$$
$$\tau_i = \frac{4m_i^2}{m_{H}^2} $$
$$A_{0}(\tau) = -\tau(1-\tau f(\tau)) \to \frac{1}{3} \quad \text{for} \quad \tau\to\infty$$
$$A_{1/2}(\tau) = 2\tau\big(1+(1-\tau)f(\tau)\big) = 2 + (4 m^2 -m_{H}^2)C_0(0,0,m_{H}^2,m^2,m^2,m^2)$$
$$f(\tau) = \arcsin^2(\sqrt{\frac{1}{\tau}}) \quad \text{for} \quad \tau\ge 1$$
$$f(\tau) = -\frac{m_H^2}{2} C_0 (0,0,m_H^2; m, m, m) $$
End of explanation
"""
Bh = (GF * alpha**2 * mh**3)/(128 * sqrt(2) * pi**3).evalf()
Bh * (A1N(tauN(mW,mh)) + 3*(2/3)**2*A12N(tauN(mt,mh)))**2
"""
Explanation: SM $h(125)\to\gamma\gamma$ width in GeV (just W and top contributions)
End of explanation
"""
def GAMHVV(VV='gg',
BSMfermions=[], BSMscalars=[], gHFF=v, mF=400, gHSS=v, mS=400):
"""Decay width of scalar H to pair of gauge bosons (generic model)"""
B = float((alpha**2 * mH**3)/(1024 * pi**3).evalf())
VVf = VVfact(*smgroup.SMDynkin(BSMfermions))[VV]
VVs = VVfact(*smgroup.SMDynkin(BSMscalars))[VV]
amp = - (2*gHFF/mF)*VVf*A12N(tauN(mF))
amp += - (gHSS/mS**2)*VVs*A0N(tauN(mS))
return B * amp**2
def GAMHckp(VV='gg', tau=1, sig=1, sigpri=1, mchi=400, mphi=400):
"""Decay width of scalar H to pair of gauge bosons (CKP model)"""
B = float((alpha**2 * mH**3)/(1024 * pi**3).evalf())
VVtau = VVfactW(D=7, Y=0, real=True)[VV]
VVsig = VVfactW(D=5, Y=-2, real=False)[VV]
VVsigpri = VVfactW(D=5, Y=-2, real=False, wght=True)[VV]
amp = tau*v*VVtau/mchi**2 * A0N(tauN(mchi))
amp += (sig*v*VVsig/mphi**2 + sigpri*v*VVsigpri/mphi**2) * A0N(tauN(mphi))
return B * amp**2
GAMHckp('gg'), GAMHckp('WW')
"""
Explanation: This is about right. (2HDMC gives 8.3e-6 GeV).
Now we define $H\to VV$ decay width expressions for a generic BSM model and for the ČKP model.
End of explanation
"""
[GAMHckp(VV, sigpri=0) for VV in ['gg', 'Zg', 'ZZ', 'WW']]
[GAMHVV(VV, BSMscalars=[smgroup.RealScalar(1,7,0), smgroup.ComplexScalar(1,5,-2)], gHSS=v) for VV in ['gg', 'Zg', 'ZZ', 'WW']]
"""
Explanation: These numbers agree with my older notebook used for initial plots. Another check: CKP model widths with $\sigma'=0$ (i.e. septuplet plus universal quintuplet contributions) can also be calculated with the generic function GAMHVV.
End of explanation
"""
bdrep = [smgroup.Dirac(3,1,S(4)/3), smgroup.Dirac(3,1,-S(2)/3), smgroup.Dirac(1,1,-2)]
Rtogg(bdrep)
"""
Explanation: Models
(For check) Bhupal Dev et al. [1512.08507] 1512.06028
End of explanation
"""
# RGG
GAMHVV('GG', BSMfermions=bdrep, gHFF=246, mF=400)/GAMHVV(BSMfermions=bdrep, gHFF=246, mF=400)
# RZg
GAMHVV('Zg', BSMfermions=bdrep, gHFF=246, mF=400)/GAMHVV(BSMfermions=bdrep, gHFF=246, mF=400)
# RZZ
GAMHVV('ZZ', BSMfermions=bdrep, gHFF=246, mF=400)/GAMHVV(BSMfermions=bdrep, gHFF=246, mF=400)
# RWW
GAMHVV('WW', BSMfermions=bdrep, gHFF=246, mF=400)/GAMHVV(BSMfermions=bdrep, gHFF=246, mF=400)
"""
Explanation: This is in good agreement with their Table 1:
RGG = 220, RZg = 0.61, RZZ = 0.091
Let's also check the decay width formula (again in ratios only):
End of explanation
"""
# Model 1
Rtogg([smgroup.Dirac(3,1,S(4)/3), smgroup.Dirac(3,1,-S(4)/3)])
# Zg above agrees with Eq. (32) of 1512.07616
2*sw2/cw2
# Model 2
Rtogg([smgroup.Dirac(3,2,S(1)/3), smgroup.Dirac(3,2,-S(1)/3)])
# Model 3
Rtogg([smgroup.Dirac(3,1,S(4)/3), smgroup.Dirac(3,1,-S(4)/3), smgroup.Dirac(3,2,S(1)/3), smgroup.Dirac(3,2,-S(1)/3), smgroup.Dirac(3,1,-S(2)/3), smgroup.Dirac(3,1,S(2)/3)])
"""
Explanation: (For check) Elllis and Ellis et al. 1512.05327
End of explanation
"""
# VLTQ model of Benbrik and al.
Rtogg([smgroup.Dirac(3,3,S(4)/3), smgroup.Dirac(3,3,-S(2)/3)])
"""
Explanation: They have e.g. for the Model 3:
RGG = 460, RZg = 1.1, RZZ = 2.8, RWW = 15.
So, their RZg and RWW look a factor of 2 too large.
(For check) Benbrik et al. 1512.06028
End of explanation
"""
bpr = [smgroup.Dirac(1,2,-1)]
smgroup.SMDynkin(bpr)
RGGbpr, RZgbpr, RZZbpr, RWWbpr = Rtogg(bpr, prt=True)
# Branching ratio to gamma gamma:
BrBPRgg = GAMHVV('gg', BSMfermions=bpr)/(GAMHVV('gg', BSMfermions=bpr)+GAMHVV('Zg', BSMfermions=bpr)+GAMHVV('ZZ', BSMfermions=bpr)+GAMHVV('WW', BSMfermions=bpr))
BrBPRgg
# OA's factor
10.8*(750/45)**2 /(64*pi.evalf()**3)**2 * 1000 # fb
"""
Explanation: Benbrik et al. have:
RGG = 40, RZg = 2.29, RZZ = 5.59, RWW = 8.88
So their RWW looks a factor of 2 too small.
"Our" one-loop (BPR) model
For the purposes of couplings to H(750) and gauge bosons, we have one Dirac doublet (times the number of generations, of course)
End of explanation
"""
# Factors relevant for H-->gamma gamma
sigmaprime = symbols('sigmaprime')
print(tau*VVfactW(D=7, Y=0, real=True)['gg'])
print(sigma*VVfactW(D=5, Y=-2, real=False)['gg'])
print(sigmaprime*VVfactW(D=5, Y=-2, real=False, wght=True)['gg'])
# Factors relevant for H--> W+ W- (with sqrt(2)/sw2 factor extracted)
print((tau*VVfactW(D=7, Y=0, real=True)['WW']*sw2/sqrt(2)).evalf())
print((sigma*VVfactW(D=5, Y=-2, real=False)['WW']*sw2/sqrt(2)).evalf())
print((sigmaprime*VVfactW(D=5, Y=-2, real=False, wght=True)['WW']*sw2/sqrt(2)).evalf())
"""
Explanation: "Our" three-loop (ČKP) model
End of explanation
"""
GAMHckp('WW', sig=0, sigpri=0)/GAMHckp('gg', sig=0, sigpri=0)
# Final ratios to gamma gamma channel for tau=sig=sigpri
GAMHckp('Zg')/GAMHckp('gg'), GAMHckp('ZZ')/GAMHckp('gg'), GAMHckp('WW')/GAMHckp('gg')
# Width for H(750) --> t tbar
GAMHtt = 3*(1/126.5)*mH*mt**2/(8*sw2*mW**2)*(1-tauN(mt))**(3/2); GAMHtt
def GAMTOTckp(tau, sig, sigpri, mchi, mphi):
"""Total width of H(750) in CKP model."""
WW = GAMHckp('WW', tau, sig, sigpri, mchi, mphi)
ZZ = GAMHckp('ZZ', tau, sig, sigpri, mchi, mphi)
Zg = GAMHckp('Zg', tau, sig, sigpri, mchi, mphi)
gg = GAMHckp('gg', tau, sig, sigpri, mchi, mphi)
TOT = GAMHtt+WW+ZZ+Zg+gg
return TOT
def GAMTOTckpD(lam, mS):
"""Total width of H(750) in CKP model with degenerate couplings and mases and Br(gg)."""
WW = GAMHckp('WW', tau=lam, sig=lam, sigpri=lam, mchi=mS, mphi=mS)
ZZ = GAMHckp('ZZ', tau=lam, sig=lam, sigpri=lam, mchi=mS, mphi=mS)
Zg = GAMHckp('Zg', tau=lam, sig=lam, sigpri=lam, mchi=mS, mphi=mS)
gg = GAMHckp('gg', tau=lam, sig=lam, sigpri=lam, mchi=mS, mphi=mS)
TOT = GAMHtt+WW+ZZ+Zg+gg
return TOT, gg/TOT
print( "GAMHTOT = {:.1f} GeV; Br(H-->gamma gamma) = {:.4}".format(*GAMTOTckpD(8, 375)) )
"""
Explanation: One possible check is the known fact that for a Y=0 model the ratio of WW to $\gamma\gamma$ decay widths is $2/s_{W}^4=36.5$. We have such a model for $\sigma=\sigma'=0$.
End of explanation
"""
xs_low = 3 # fb
xs_high = 9
"""
Explanation: Experimental constraints
Generalities
750 GeV diphoton excess:
$$\sigma(pp\to H\to \gamma\gamma)_{\rm CMS} = 4.47 \pm 1.86\;{\rm fb}$$
$$\sigma(pp\to H\to \gamma\gamma)_{\rm ATLAS} = 10.6 \pm 2.9\;{\rm fb}$$
Combination by Di Chiara et al.:
$$\sigma(pp\to H\to \gamma\gamma)_{\rm LHC} = 6.26 \pm 3.32\; {\rm fb}$$
So one could scan 3-10 fb region.
The width measured by ATLAS is 45 GeV; CMS prefers a narrower resonance.
End of explanation
"""
ggF13 = 737. # fb
ggF8 = 157.
"""
Explanation: 2HDM: for $m_H = 750\;{\rm GeV}$, $A$ and $H^\pm$ are also close to 750 GeV.
The signal is $10^4$ times stronger than for a SM-like Higgs, so the pure 2HDM is hopeless.
Higgs production by gluon-gluon fusion: the 7 and 8 TeV values are from the LHC Higgs xs WG, and the 13 TeV values are obtained by scaling with the gluon luminosity ratio (13 TeV/8 TeV), which is 2.296 for 125 GeV and 4.693 for 750 GeV.
| | 7 TeV | 8 TeV | 13 TeV
| ---- | ----- | ------ | -----
| h(125) | 15.13 pb | 19.27 pb| 44.2 pb
| h(750) | 93 fb | 157 fb | 737 fb
and we have
$$\sigma_{\gamma\gamma} = 737\,{\rm fb}\; Br(H\to\gamma\gamma)$$
End of explanation
"""
# Same prefactor in picobarn from Franceschini et al. Eq. (2)
(45*54)/750/(13000)**2 * GeV2fb / 1000
"""
Explanation: For production via photon fusion $pp \to \gamma \gamma \to H \to \gamma \gamma$ at 13 TeV, Harland-Lang et al. have
$$ \sigma = 4.1\,{\rm pb}\, \left(\frac{\Gamma_H}{45\, {\rm GeV}}\right) {\rm Br}(H\to\gamma\gamma)^2$$
while Csaki et al. have
$$ \sigma = 10.8 \,{\rm pb}\, \left(\frac{\Gamma_H}{45\, {\rm GeV}}\right) {\rm Br}(H\to\gamma\gamma)^2 .$$
Both include elastic and inelastic contributions (the latter also mixed), and work in the narrow-width approximation. For $\sigma = 3-9\,{\rm fb}$, the second choice gives a
branching ratio of 1.7-2.9%, i.e. $\Gamma(H\to\gamma\gamma) = 0.75-1.3\,{\rm GeV}$.
End of explanation
"""
## So in a pure photon-fusion production and pure VV decay scenario, the total H(750) width range in GeV is:
GAMbpr_low, GAMbpr_high = [(siggg/10800)*45/BrBPRgg**2 for siggg in (xs_low, xs_high)]
GAMbpr_low, GAMbpr_high
# Translating this into H->gamma gamma width in GeV:
GAMbpr_HGG_low, GAMbpr_HGG_high = sqrt((xs_low/10800)*GAMbpr_low*45), sqrt((xs_high/10800)*GAMbpr_high*45)
GAMbpr_HGG_low, GAMbpr_HGG_high
def lam(msig, GAM):
"""Coupling to get given width H->gamma gamma for given loop fermion mass."""
ss = ( sqrt(256 * pi**3 * GAM / (alpha**2 * mH**3)) ).evalf()
return float(msig * ss / A12N(tauN(msig)))
lamN = np.frompyfunc(lam, 2, 1)
# Checking that above inverted formula lambda(sigma) is consistent
# with "master" formula sigma(lambda):
lamN(400, GAMHVV('gg', BSMfermions=bpr, gHFF=42, mF=400))
"""
Explanation: one-loop BPR model
End of explanation
"""
lam(375, GAMbpr_HGG_low)
"""
Explanation: So, minimal possible coupling would be:
End of explanation
"""
#rgg = 1.9 # gain for photon fusion xs going from 8 to 13 TeV in Franceschini et al.
# rgg = 3.9 # value from Fichet et al.1512.05751
rgg = 3 # average value
# For 3 fb
[(xs_low*RVV)/rgg for RVV in [RWWbpr, RZZbpr, RZgbpr, 1]]
# And for 9 fb
[(xs_high*RVV)/rgg for RVV in [RWWbpr, RZZbpr, RZgbpr, 1]]
# Bounds on pp->H->VV xs from LHC 8 TeV
sig8 = {'WW' : 40, 'ZZ' : 12, 'Zg' : 11, 'gg' : 1.5} # in fb, from Franceschini
"""
Explanation: So, for $N_E=3$ and $\cos\theta_0\sim 1$ we have $\lambda/(4\pi) = 1.1$, so we are at the border of perturbativity.
Constraints from 8 TeV VV bounds. First, the cross sections for pp->H->VV at 8 TeV, assuming the 13 TeV pp->H->gg cross section is in the (3 fb, 9 fb) range:
End of explanation
"""
def lambound(VV, msig, reps=bpr):
"""Boundary value of gHFF to violate 8 TeV VV xs constraint"""
VVs = VVfact(*smgroup.SMDynkin(reps))
RVV = float((VVs[VV]/VVs['gg'])**2)
# diphoton 13 TeV xs that would mean boundary VV 8 TeV xs
gamgg = sig8[VV]*rgg/RVV
Brgg = BrBPRgg # FIXME: hardwired BPR
# H->gamma gamma width that would give above diphoton 13 TeV xs
GAMgg = gamgg*45/10800/Brgg
return lamN(msig, GAMgg)
"""
Explanation: So we see that the strongest bound comes from the photon-photon final state. It can be relaxed
by taking a larger rgg, as advocated by some.
End of explanation
"""
def sig8CKP(VV, tau, sig, sigpri, mchi, mphi):
"""xs in fb for pp-->H-->VV at 8 TeV in CKP model with degenerate couplings an masses"""
TOT = GAMTOTckp(tau, sig, sigpri, mchi, mphi)
BrVV = GAMHckp(VV, tau, sig, sigpri, mchi, mphi)/TOT
# print('GAMH = {:.1f} GeV, Br(H->{}) = {}'.format(TOT, VV, BrVV))
return ggF8 * BrVV
def fun(VV, tau, s):
return sig8CKP(VV, tau=tau, sig=s, sigpri=s, mchi=375, mphi=375) - sig8[VV]
def sigboundCKP(VV, tau, init=6):
"""Boundary value of sig=sig' to violate 8 TeV VV xs constraint"""
return root(lambda s: fun(VV, tau, s), init).x[0]
[sigboundCKP(VV, 8) for VV in ['WW', 'Zg', 'ZZ', 'gg']]
"""
Explanation: three-loop ČKP model
With ggF as dominant production mechanism we have
$$ \sigma_{VV}^{8\,{\rm TeV}} = 157\,{\rm fb} \; Br(H\to VV)$$
End of explanation
"""
def funm(VV, mchi, s):
return sig8CKP(VV, tau=10, sig=10, sigpri=10, mchi=mchi, mphi=s) - sig8[VV]
def mboundCKP(VV, mchi, init=390):
"""Boundary value of mphi to violate 8 TeV VV xs constraint"""
return root(lambda s: funm(VV, mchi, s), init).x[0]
"""
Explanation: So it is again the photon-photon channel that is most restrictive.
End of explanation
"""
SAVEPDFS = True
"""
Explanation: Plots
End of explanation
"""
def Rgg(lam=1, m=375):
"""Triplet scalar h(125)->gamma gamma enhancement."""
SM = A1N(tauN(mW,mh)) + 3*(2/3)**2*A12N(tauN(mt,mh))
BSM = lam * v**2 * A0N(tauN(m,mh)) / (2 * m**2)
return (1 + BSM/SM)**2
ms = np.linspace(375, 1000)
fig, ax = plt.subplots(figsize=(4,4))
ax.plot(ms, Rgg(lam=-20, m=ms), 'r--', label=r"$c_S=-20$")
ax.plot(ms, Rgg(lam=-10, m=ms), 'b-', label=r"$c_S=-10$")
ax.plot(ms, Rgg(lam=-5, m=ms), 'k:', label=r"$c_S=-5$")
ax.plot(ms, Rgg(lam=10, m=ms), 'g-.', label=r"$c_S=\;\;10$")
#ax.plot(ms, Rgg(lam=20, m=ms), label=r"$c_S=\;\;20$")
ax.set_xlabel(r'$m_S \;{\rm [GeV]}$', fontsize=16)
ax.set_ylabel(r"$R_{\gamma\gamma}$", fontsize=16)
props = dict(color="red", linestyle="-", linewidth=2)
ax.axhline(0.9, **props)
ax.axhline(1.44, **props)
ax.legend(loc=(0.5, 0.55)).draw_frame(0)
plt.tight_layout()
if SAVEPDFS:
plt.savefig("/home/kkumer/h125gg.pdf")
"""
Explanation: [Fig. 1a] Enhancement of $h(125)\to\gamma\gamma$ in one-loop BPR model
Enhancement from the lighter of two charged components of triplet scalar (cf. Eq. (10) from Brdar et al.)
End of explanation
"""
xmin, xmax = 375, 800
ymin, ymax = 0, 250
ms = np.linspace(xmin, xmax)
lam3 = lamN(ms, GAMbpr_HGG_low).astype(float)  # 3 fb
lam9 = lamN(ms, GAMbpr_HGG_high).astype(float)  # 9 fb
# 8 TeV bounds
lamWW = lambound('WW', ms).astype(float)
lamZZ = lambound('ZZ', ms).astype(float)
lamZg = lambound('Zg', ms).astype(float)
lamgg = lambound('gg', ms).astype(float)
fig, ax = plt.subplots(figsize=(4,4))
ax.fill_between(ms, lam3, lamgg, color='lightgreen', alpha=0.5)
ax.fill_between(ms, lamgg, ymax, color='gray', alpha=0.6)
ax.plot(ms, lam9, 'b--', label=r'$\sigma_{\gamma\gamma}= 9\,{\rm fb}$')
ax.plot(ms, lam3, 'b-', label=r'$\sigma_{\gamma\gamma}= 3\,{\rm fb}$')
lWW, = ax.plot(ms, lamWW, 'r-', label=r'$\sigma_{VV}^{8\,{\rm TeV}}\,{\rm bounds}$')
lgg, = ax.plot(ms, lamgg, 'r-')
lZg, = ax.plot(ms, lamZg, 'r-')
ax.set_ylabel(r'$g_3\, \cos\theta_0\, N_E$', fontsize=16)
ax.set_xlabel(r'$m_{E}\:{\rm [GeV]}$', fontsize=16)
ax.xaxis.set_major_locator(ticker.MultipleLocator(100))
ax.legend(loc=4).draw_frame(0)
ax.set_xlim(xmin, xmax)
ax.set_ylim(ymin, ymax)
plt.tight_layout()
# Put labels on exclusion lines
label_line(lWW, r"$WW$", near_x=650)
label_line(lgg, r"$\gamma\gamma$", near_x=650)
label_line(lZg, r"$Z\gamma$", near_x=420)
if SAVEPDFS:
plt.savefig("/home/kkumer/triplet.pdf")
"""
Explanation: [Fig 1b] $H(750)\to\gamma\gamma$ in one-loop BPR model
End of explanation
"""
resolution = 60 # of calculation grid
bmax=15
xs = np.linspace(-bmax, bmax, resolution)
ys = np.linspace(-bmax, bmax, resolution)
levels = [3, 6, 9]
X, Y = np.meshgrid(xs, ys)
#Z = X**2 + Y**2
Z = GAMHckp('gg', tau=X, sig=Y, sigpri=Y, mchi=375, mphi=375)*737./30.
fig, ax = plt.subplots(figsize=(4.5,4.5))
#ax.contour(X, Y, Z, cmap=plt.cm.viridis)
CS = plt.contour(X, Y, Z, levels, cmap=plt.cm.Dark2, linestyles='dashed')
for c in CS.collections:
c.set_dashes([(0, (8.0, 3.0))])
fig = plt.clabel(CS, inline=1, fmt=r'$%.0f \;{\rm fb}$', fontsize=12, colors='black')
ax.annotate(r'$m_\chi = m_\phi = 375\;{\rm GeV}$', xy=(0.05, 0.58), xycoords='axes fraction', fontsize=12)
ax.set_xlabel(r'$\tau$', fontsize=16)
ax.set_ylabel(r"$\sigma=\sigma'$", fontsize=16)
props = dict(color="green", linestyle="-.", linewidth=1)
ax.axvline(x=0, **props)
ax.axhline(y=0, **props)
cut = 20 # by eyeballing
gs = [sigboundCKP('gg', t, init=-20) for t in xs]
lggL, = plt.plot(xs, gs, 'r-')
ax.fill_between(xs, -bmax, gs, color='gray', alpha=0.6)
gs = [sigboundCKP('gg', t, init=20) for t in xs]
lggH, = plt.plot(xs, gs, 'r-')
ax.fill_between(xs, gs, bmax, color='gray', alpha=0.6)
plt.tight_layout()
fig = plt.ylim(-bmax, bmax)
# Put labels on exclusion lines
label_line(lggL, r"$\gamma\gamma$", near_x=-12)
label_line(lggH, r"$\gamma\gamma$", near_x=12)
if SAVEPDFS:
plt.savefig("/home/kkumer/tausig.pdf")
resolution = 60 # of calculation grid
xs = np.linspace(375, 450, resolution)
ys = np.linspace(375, 450, resolution)
levels = [3, 6, 9]
X, Y = np.meshgrid(xs, ys)
#Z = X**2 + Y**2
Z = GAMHckp('gg', tau=10, sig=10, sigpri=10, mchi=X, mphi=Y)*737./30.
fig, ax = plt.subplots(figsize=(4.5,4.5))
CS = plt.contour(X, Y, Z, levels, cmap=plt.cm.Dark2, linestyles='dashed')
for c in CS.collections:
c.set_dashes([(0, (8.0, 3.0))])
fig = plt.clabel(CS, inline=1, fmt=r'$%.0f \;{\rm fb}$', fontsize=12, colors='black')
ax.annotate(r"$\tau=\sigma=\sigma' = 10$", xy=(0.6, 0.88), xycoords='axes fraction', fontsize=14)
ax.set_xlabel(r'$m_\chi \; {\rm [GeV]}$', fontsize=16)
ax.set_ylabel(r"$m_\phi \; {\rm [GeV]}$", fontsize=16)
#props = dict(color="green", linestyle="-.", linewidth=1)
#ax.axvline(x=375, **props)
#ax.axhline(y=375, **props)
gs = [mboundCKP('gg', m, init=380) for m in xs]
lgg, = plt.plot(xs, gs, 'r-')
ax.fill_between(xs, 375, gs, color='gray', alpha=0.6)
plt.tight_layout()
# Put labels on exclusion lines
label_line(lgg, r"$\gamma\gamma$", near_x=380)
if SAVEPDFS:
plt.savefig("/home/kkumer/mm.pdf")
"""
Explanation: [Fig. 2] Allowed mass/coupling parameter ranges for ČKP model
End of explanation
"""
xmin, xmax = 375, 400
ymin, ymax = 0, 13.5
xs = np.linspace(xmin,xmax)
plt.figure(figsize=(4,4))
TOT, BR = GAMTOTckpD(8, xs)
plt.plot(xs , ggF13 * BR, label=r"$\tau=\sigma=\sigma' = 8$")
TOT, BR = GAMTOTckpD(4, xs)
plt.plot(xs , ggF13* BR, 'r--', label=r"$\tau=\sigma=\sigma' = 4$")
plt.ylabel(r'$\sigma(pp\to H\to\gamma\gamma)\;{\rm [fb]}$', fontsize=16)
plt.xlabel(r'$m_{\chi}=m_{\phi}\;{\rm [GeV]}$', fontsize=16)
plt.fill_between(xs, xs_low*np.ones(xs.shape), xs_high*np.ones(xs.shape), facecolor='lightgreen', alpha=0.5)
plt.text(385, 8, r'${\rm ATLAS+CMS}\; \sigma_{\gamma\gamma}\; {\rm range}$')
plt.legend(loc=1).draw_frame(0)
fig = plt.ylim(ymin, ymax)
plt.tight_layout()
if SAVEPDFS:
plt.savefig('/home/kkumer/diphm.pdf')
xmin, xmax = 0.2, 12
xs = np.linspace(xmin,xmax)
plt.figure(figsize=(3.7,4))
TOT, BR = GAMTOTckpD(xs, 375)
plt.plot(xs , ggF13*BR, label=r'$m_{\chi} = m_{\phi} = 375 \;{\rm GeV}$')
TOT, BR = GAMTOTckpD(xs, 400)
plt.plot(xs , ggF13*BR, 'r--', label=r'$m_{\chi} = m_{\phi} = 400 \;{\rm GeV}$')
#plt.ylabel(r'$\sigma(pp\to H\to\gamma\gamma)\;{\rm [fb]}$', fontsize=16)
plt.xlabel(r"$\tau=\sigma=\sigma'$", fontsize=16)
plt.fill_between(xs, xs_low*np.ones(xs.shape), xs_high*np.ones(xs.shape), facecolor='lightgreen', alpha=0.5)
plt.text(2.5, 8, r'${\rm ATLAS+CMS}\; \sigma_{\gamma\gamma}\; {\rm range}$')
plt.legend(loc=2).draw_frame(0)
plt.xlim(2, 8.5)
fig = plt.ylim(ymin, ymax)
plt.tight_layout()
if SAVEPDFS:
plt.savefig('/home/kkumer/diphlam.pdf')
"""
Explanation: [Fig 2] $\sigma(pp \to H(750)\to\gamma\gamma)$ in three-loop ČKP model
End of explanation
"""
xmin, xmax = 375, 400
ymin, ymax = 20, 60
xs = np.linspace(xmin,xmax)
plt.figure(figsize=(4,4))
TOT, BR = GAMTOTckpD(8, xs)
plt.plot(xs , TOT, label=r"$\tau=\sigma=\sigma' = 8$")
TOT, BR = GAMTOTckpD(4, xs)
plt.plot(xs , TOT, 'r--', label=r"$\tau=\sigma=\sigma' = 4$")
plt.ylabel(r'$\Gamma_H\;{\rm [GeV]}$', fontsize=16)
plt.xlabel(r'$m_{\chi}=m_{\phi}\;{\rm [GeV]}$', fontsize=16)
#plt.fill_between(xs, xs_low*np.ones(xs.shape), xs_high*np.ones(xs.shape), facecolor='lightgreen', alpha=0.5)
#plt.text(388, 8, r'${\rm ATLAS+CMS}\; \gamma\gamma\; {\rm range}$')
plt.legend(loc=1).draw_frame(0)
fig = plt.ylim(ymin, ymax)
plt.tight_layout()
if SAVEPDFS:
plt.savefig('/home/kkumer/diphGAMm.pdf')
xmin, xmax = 0.2, 12
xs = np.linspace(xmin,xmax)
plt.figure(figsize=(3.7,4))
TOT, BR = GAMTOTckpD(xs, 375)
plt.plot(xs , TOT, label=r'$m_{\chi} = m_{\phi} = 375 \;{\rm GeV}$')
TOT, BR = GAMTOTckpD(xs, 400)
plt.plot(xs , TOT, 'r--', label=r'$m_{\chi} = m_{\phi} = 400 \;{\rm GeV}$')
#plt.ylabel(r'$\sigma(pp\to H\to\gamma\gamma)\;{\rm [fb]}$', fontsize=16)
plt.xlabel(r"$\tau=\sigma=\sigma'$", fontsize=16)
#plt.fill_between(xs, xs_low*np.ones(xs.shape), xs_high*np.ones(xs.shape), facecolor='lightgreen', alpha=0.5)
#plt.text(2.5, 8, r'${\rm ATLAS+CMS}\; \gamma\gamma\; {\rm range}$')
plt.legend(loc=2).draw_frame(0)
#plt.xlim(xmin, xmax)
fig = plt.ylim(ymin, ymax)
plt.tight_layout()
if SAVEPDFS:
plt.savefig('/home/kkumer/diphGAMlam.pdf')
"""
Explanation: [Fig 3] $\Gamma_{H(750)}$ in three-loop ČKP model
End of explanation
"""
|
zrhans/python | exemplos/googlecode-day-python/google-python-class-day2-p1.ipynb | gpl-2.0 | # Importando o modulo de expressoes regulares
import re
"""
Syntax: match = re.search(pat, text)
"""
match = re.search('iig','camado piiig')
# The match object's group() method returns the text the search matched
match.group()
"""
Explanation: Google Python Class Day 2 Part 1
Source: YouTube
Nick Parlante - Google engEDU
Topic:
Regular Expressions
End of explanation
"""
# Pattern that does not exist
match = re.search('iigs','camado piiig')
"""
Explanation: What happens if no pattern (pat) is found? What does match return? What will match (which is a reference to an object) point to? Answer: an object of type NoneType, which has no attributes.
```python
Traceback (most recent call last)
<ipython-input-7-60588f6cca9c> in <module>()
----> 1 match.group()
AttributeError: 'NoneType' object has no attribute 'group'
```
End of explanation
"""
# Prototype of a Find function
def Find(pat, text):
match = re.search(pat,text)
if match: print(match.group())
else: print('Not found')
Find('igs','piiig')
"""
Explanation: Let's create a prototype function for searching for text patterns in a generic string.
End of explanation
"""
# Search for any 3 characters followed by a g
Find('...g','piiig')
"""
Explanation: Patterns
. (dot) any character
\w word character [a-zA-Z0-9_]
\d digit
\s whitespace
\S anything except whitespace
+ 1 or more
* 0 or more
End of explanation
"""
Find('..g','piiig muito melhor: xyzg')
# Search for a colon followed by three word characters
Find(':\w\w\w','bla :cat bla bla bla')
# Search for 3 digits in a sentence
Find('\d\d\d','bla :123xxx')
Find('\d\d\d','bla :car007xxx')
# Working with whitespace
Find('\d\s\d\s\d','bla :1 2 3')
# When there is more than one space inside the pattern, use + or *
Find('\d\s+\d\s+\d', '1 2 3')
# Find a colon (:) followed by any word
Find(':\w+', 'bla bla :este_periodo bla bla')
# Return everything from the colon (:) onward
Find(':.+', 'bla bla :este_periodo bla bla')
# Return everything from the colon (:) up to the first whitespace
Find(':\S+', 'bla bla :este_periodo123&patty=jui&m="021" bla bla')
# Checking e-mail patterns (word characters before and after the @)
Find('\w+@\w+', 'blah hans.z@gmail.com usr @ serveer 1 2 3')
"""
Explanation: The re search scans left to right and stops at the first occurrence of the pattern. See the example, where we might have expected yzg to match the pattern as well.
End of explanation
"""
Find('[\w.]+@[\w.]+', 'blah hans.z@gmail.com usr @ serveer 1 2 3')
"""
Explanation: To also match the dot, we use the set notation [], i.e. [\w.]+ matches any word character or dot, one or more times.
End of explanation
"""
# Put parentheses around the parts (groups) we are interested in
m = re.search('([\w.]+)@([\w.]+)', 'blah hans.z@gmail.com usr @ serveer 1 2 3')
#m.group()
m.group(1)
m.group(2)
# When there is more than one e-mail on the same line
re.findall('[\w.]+@[\w.]+', 'blah hans.z@gmail.com usr@serveer 1 2 3')
"""
Explanation: Extracting the user and the server from the e-mail address
End of explanation
"""
# Returning a list of user-server tuples
re.findall('([\w.]+)@([\w.]+)', 'blah hans.z@gmail.com usr@serveer 1 2 3')
"""
Explanation: Put the parentheses back in to see the result: it returns a list of user-server tuples.
End of explanation
"""
|
google-coral/tutorials | retrain_efficientdet_model_maker_tf2.ipynb | apache-2.0 | # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License")
End of explanation
"""
!pip install -q tflite-model-maker
import numpy as np
import os
from tflite_model_maker.config import ExportFormat
from tflite_model_maker import model_spec
from tflite_model_maker import object_detector
import tensorflow as tf
assert tf.__version__.startswith('2')
tf.get_logger().setLevel('ERROR')
from absl import logging
logging.set_verbosity(logging.ERROR)
"""
Explanation: Retrain EfficientDet for the Edge TPU with TensorFlow Lite Model Maker
In this tutorial, we'll retrain the EfficientDet-Lite object detection model (derived from EfficientDet) using the TensorFlow Lite Model Maker library, and then compile it to run on the Coral Edge TPU. All in about 30 minutes.
By default, we'll retrain the model using a publicly available dataset of salad photos, teaching the model to recognize a salad and some of the ingredients. But we've also provided code so you can upload your own training dataset in the Pascal VOC XML format.
Here's an example of the salad training results:
<img src="https://storage.googleapis.com/site_and_emails_static_assets/Images/efficientdet-salads.png?" width="400" hspace="0">
<a href="https://colab.research.google.com/github/google-coral/tutorials/blob/master/retrain_efficientdet_model_maker_tf2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"></a>
<a href="https://github.com/google-coral/tutorials/blob/master/retrain_efficientdet_model_maker_tf2.ipynb" target="_parent"><img src="https://img.shields.io/static/v1?logo=GitHub&label=&color=333333&style=flat&message=View%20on%20GitHub" alt="View in GitHub"></a>
If you want to run the notebook with the salad dataset, you can run the whole thing now by clicking Runtime > Run all in the Colab toolbar. But if you want to use your own dataset, then continue down to Load the training data and follow the instructions there.
Note: If using a custom dataset, beware that if your dataset includes more than 20 classes, you'll probably have slower inference speeds compared to if you have fewer classes. This is due to an aspect of the EfficientDet architecture in which a certain layer cannot compile for the Edge TPU when it carries more than 20 classes.
Import the required packages
End of explanation
"""
use_custom_dataset = False #@param ["False", "True"] {type:"raw"}
dataset_is_split = False #@param ["False", "True"] {type:"raw"}
"""
Explanation: Load the training data
To use the default salad training dataset, just run all the code below as-is.
But if you want to train with your own image dataset, follow these steps:
Be sure your dataset is annotated in Pascal VOC XML (various tools can help create VOC annotations, such as LabelImg). Then create a ZIP file with all your JPG images and XML files (JPG and XML files can all be in one directory or in separate directories).
Click the Files tab in the left panel and just drag-drop your ZIP file there to upload it.
Use the following drop-down option to set use_custom_dataset to True.
If your dataset is already split into separate directories for training, validation, and testing, also set dataset_is_split to True. (If your dataset is not split, leave it False and we'll split it below.)
Then skip to Load your own Pascal VOC dataset and follow the rest of the instructions there.
End of explanation
"""
if not use_custom_dataset:
train_data, validation_data, test_data = object_detector.DataLoader.from_csv('gs://cloud-ml-data/img/openimage/csv/salads_ml_use.csv')
"""
Explanation: Load the salads CSV dataset
Model Maker requires that we load our dataset using the DataLoader API. So in this case, we'll load it from a CSV file that defines 175 images for training, 25 images for validation, and 25 images for testing.
End of explanation
"""
if use_custom_dataset:
# The ZIP file you uploaded:
!unzip dataset.zip
# Your labels map as a dictionary (zero is reserved):
label_map = {1: 'apple', 2: 'banana'}
if dataset_is_split:
# If your dataset is already split, specify each path:
train_images_dir = 'dataset/train/images'
train_annotations_dir = 'dataset/train/annotations'
val_images_dir = 'dataset/validation/images'
val_annotations_dir = 'dataset/validation/annotations'
test_images_dir = 'dataset/test/images'
test_annotations_dir = 'dataset/test/annotations'
else:
# If it's NOT split yet, specify the path to all images and annotations
images_in = 'dataset/images'
annotations_in = 'dataset/annotations'
"""
Explanation: If you want to load your own dataset as a CSV file, you can learn more about the format in Formatting a training data CSV. You can load your CSV either from Cloud Storage (as shown above) or from a local path.
DataLoader can also load your dataset in other formats, such as from a set of TFRecord files or from a local directory using the Pascal VOC format (shown below for a custom dataset).
(Optional) Load your own Pascal VOC dataset
To use your custom dataset, you need to modify a few variables here, such as your ZIP filename, your label map, and the path to your images/annotations:
End of explanation
"""
#@markdown Be sure you run this cell. It's hiding the `split_dataset()` function used in the next code block.
import os
import random
import shutil
def split_dataset(images_path, annotations_path, val_split, test_split, out_path):
"""Splits a directory of sorted images/annotations into training, validation, and test sets.
Args:
images_path: Path to the directory with your images (JPGs).
annotations_path: Path to a directory with your VOC XML annotation files,
with filenames corresponding to image filenames. This may be the same path
used for images_path.
val_split: Fraction of data to reserve for validation (float between 0 and 1).
test_split: Fraction of data to reserve for test (float between 0 and 1).
Returns:
The paths for the split images/annotations (train_dir, val_dir, test_dir)
"""
_, dirs, _ = next(os.walk(images_path))
train_dir = os.path.join(out_path, 'train')
val_dir = os.path.join(out_path, 'validation')
test_dir = os.path.join(out_path, 'test')
IMAGES_TRAIN_DIR = os.path.join(train_dir, 'images')
IMAGES_VAL_DIR = os.path.join(val_dir, 'images')
IMAGES_TEST_DIR = os.path.join(test_dir, 'images')
os.makedirs(IMAGES_TRAIN_DIR, exist_ok=True)
os.makedirs(IMAGES_VAL_DIR, exist_ok=True)
os.makedirs(IMAGES_TEST_DIR, exist_ok=True)
ANNOT_TRAIN_DIR = os.path.join(train_dir, 'annotations')
ANNOT_VAL_DIR = os.path.join(val_dir, 'annotations')
ANNOT_TEST_DIR = os.path.join(test_dir, 'annotations')
os.makedirs(ANNOT_TRAIN_DIR, exist_ok=True)
os.makedirs(ANNOT_VAL_DIR, exist_ok=True)
os.makedirs(ANNOT_TEST_DIR, exist_ok=True)
# Get all filenames for this dir, filtered by filetype
filenames = os.listdir(os.path.join(images_path))
filenames = [os.path.join(images_path, f) for f in filenames if (f.endswith('.jpg'))]
# Shuffle the files, deterministically
filenames.sort()
random.seed(42)
random.shuffle(filenames)
# Get exact number of images for validation and test; the rest is for training
val_count = int(len(filenames) * val_split)
test_count = int(len(filenames) * test_split)
for i, file in enumerate(filenames):
source_dir, filename = os.path.split(file)
annot_file = os.path.join(annotations_path, filename.replace("jpg", "xml"))
if i < val_count:
shutil.copy(file, IMAGES_VAL_DIR)
shutil.copy(annot_file, ANNOT_VAL_DIR)
elif i < val_count + test_count:
shutil.copy(file, IMAGES_TEST_DIR)
shutil.copy(annot_file, ANNOT_TEST_DIR)
else:
shutil.copy(file, IMAGES_TRAIN_DIR)
shutil.copy(annot_file, ANNOT_TRAIN_DIR)
return (train_dir, val_dir, test_dir)
# We need to instantiate a separate DataLoader for each split dataset
if use_custom_dataset:
if dataset_is_split:
train_data = object_detector.DataLoader.from_pascal_voc(
train_images_dir, train_annotations_dir, label_map=label_map)
validation_data = object_detector.DataLoader.from_pascal_voc(
val_images_dir, val_annotations_dir, label_map=label_map)
test_data = object_detector.DataLoader.from_pascal_voc(
test_images_dir, test_annotations_dir, label_map=label_map)
else:
train_dir, val_dir, test_dir = split_dataset(images_in, annotations_in,
val_split=0.2, test_split=0.2,
out_path='split-dataset')
train_data = object_detector.DataLoader.from_pascal_voc(
os.path.join(train_dir, 'images'),
os.path.join(train_dir, 'annotations'), label_map=label_map)
validation_data = object_detector.DataLoader.from_pascal_voc(
os.path.join(val_dir, 'images'),
os.path.join(val_dir, 'annotations'), label_map=label_map)
test_data = object_detector.DataLoader.from_pascal_voc(
os.path.join(test_dir, 'images'),
os.path.join(test_dir, 'annotations'), label_map=label_map)
print(f'train count: {len(train_data)}')
print(f'validation count: {len(validation_data)}')
print(f'test count: {len(test_data)}')
"""
Explanation: Now you're ready to train the model with your custom dataset. But before you run the notebook, you should also skip to the Export to TensorFlow Lite section and change the TFLITE_FILENAME and LABELS_FILENAME for your exported files.
Then run the whole notebook by clicking Runtime > Run all.
End of explanation
"""
spec = object_detector.EfficientDetLite0Spec()
"""
Explanation: Select the model spec
Model Maker supports the EfficientDet-Lite family of object detection models that are compatible with the Edge TPU. (EfficientDet-Lite is derived from EfficientDet, which offers state-of-the-art accuracy in a small model size). There are several model sizes you can choose from:
| Model architecture | Size(MB)* | Latency(ms)** | Average Precision*** |
|--------------------|-----------|---------------|----------------------|
| EfficientDet-Lite0 | 5.7       | 37.4          | 30.4%                |
| EfficientDet-Lite1 | 7.6       | 56.3          | 34.3%                |
| EfficientDet-Lite2 | 10.2      | 104.6         | 36.0%                |
| EfficientDet-Lite3 | 14.4      | 107.6         | 39.4%                |

* File size of the compiled Edge TPU models. ** Latency measured on a desktop CPU with a Coral USB Accelerator. *** Average Precision is the mAP (mean Average Precision) on the COCO 2017 validation dataset.
Beware that the Lite2 and Lite3 models do not fit onto the Edge TPU's onboard memory, so you'll see even greater latency when using those, due to the cost of fetching data from the host system memory. Maybe this extra latency is okay for your application, but if it's not and you require the precision of the larger models, then you can pipeline the model across multiple Edge TPUs (more about this when we compile the model below).
For this tutorial, we'll use Lite0:
End of explanation
"""
model = object_detector.create(train_data=train_data,
model_spec=spec,
validation_data=validation_data,
epochs=50,
batch_size=10,
train_whole_model=True)
"""
Explanation: The EfficientDetLite0Spec constructor also supports several arguments that specify training options, such as the max number of detections (default is 25 for the TF Lite model) and whether to use Cloud TPUs for training. You can also use the constructor to specify the number of training epochs and the batch size, but you can also specify those in the next step.
Create and train the model
Now we need to create our model according to the model spec, load our dataset into the model, specify training parameters, and begin training.
Using Model Maker, we accomplished all of that with create():
End of explanation
"""
model.evaluate(test_data)
"""
Explanation: Evaluate the model
Now we'll use the test dataset to evaluate how well the model performs with data it has never seen before.
The evaluate() method provides output in the style of COCO evaluation metrics:
End of explanation
"""
TFLITE_FILENAME = 'efficientdet-lite-salad.tflite'
LABELS_FILENAME = 'salad-labels.txt'
model.export(export_dir='.', tflite_filename=TFLITE_FILENAME, label_filename=LABELS_FILENAME,
export_format=[ExportFormat.TFLITE, ExportFormat.LABEL])
"""
Explanation: Because the default batch size for EfficientDetLite models is 64, this needs only 1 step to go through all 25 images in the salad test set. You can also specify the batch_size argument when you call evaluate().
Export to TensorFlow Lite
Next, we'll export the model to the TensorFlow Lite format. By default, the export() method performs full integer post-training quantization, which is exactly what we need for compatibility with the Edge TPU. (Model Maker uses the same dataset we gave to our model spec as a representative dataset, which is required for full-int quantization.)
We just need to specify the export directory and format. By default, it exports to TF Lite, but we also want a labels file, so we declare both:
End of explanation
"""
model.evaluate_tflite(TFLITE_FILENAME, test_data)
"""
Explanation: Evaluate the TF Lite model
Exporting the model to TensorFlow Lite can affect the model accuracy, due to the reduced numerical precision from quantization and because the original TensorFlow model uses per-class non-max supression (NMS) for post-processing, while the TF Lite model uses global NMS, which is faster but less accurate.
Therefore you should always evaluate the exported TF Lite model and be sure it still meets your requirements:
End of explanation
"""
import random
# If you're using a custom dataset, we take a random image from the test set:
if use_custom_dataset:
images_path = test_images_dir if dataset_is_split else os.path.join(test_dir, "images")
filenames = os.listdir(os.path.join(images_path))
random_index = random.randint(0,len(filenames)-1)
INPUT_IMAGE = os.path.join(images_path, filenames[random_index])
else:
# Download a test salad image
INPUT_IMAGE = 'salad-test.jpg'
DOWNLOAD_URL = "https://storage.googleapis.com/cloud-ml-data/img/openimage/3/2520/3916261642_0a504acd60_o.jpg"
!wget -q -O $INPUT_IMAGE $DOWNLOAD_URL
"""
Explanation: Try the TFLite model
Just to be sure of things, let's run the model ourselves with an image from the test set.
End of explanation
"""
! python3 -m pip install --extra-index-url https://google-coral.github.io/py-repo/ pycoral
from PIL import Image
from PIL import ImageDraw
from PIL import ImageFont
import tflite_runtime.interpreter as tflite
from pycoral.adapters import common
from pycoral.adapters import detect
from pycoral.utils.dataset import read_label_file
def draw_objects(draw, objs, scale_factor, labels):
"""Draws the bounding box and label for each object."""
COLORS = np.random.randint(0, 255, size=(len(labels), 3), dtype=np.uint8)
for obj in objs:
bbox = obj.bbox
color = tuple(int(c) for c in COLORS[obj.id])
draw.rectangle([(bbox.xmin * scale_factor, bbox.ymin * scale_factor),
(bbox.xmax * scale_factor, bbox.ymax * scale_factor)],
outline=color, width=3)
font = ImageFont.truetype("LiberationSans-Regular.ttf", size=15)
draw.text((bbox.xmin * scale_factor + 4, bbox.ymin * scale_factor + 4),
'%s\n%.2f' % (labels.get(obj.id, obj.id), obj.score),
fill=color, font=font)
# Load the TF Lite model
labels = read_label_file(LABELS_FILENAME)
interpreter = tflite.Interpreter(TFLITE_FILENAME)
interpreter.allocate_tensors()
# Resize the image for input
image = Image.open(INPUT_IMAGE)
_, scale = common.set_resized_input(
interpreter, image.size, lambda size: image.resize(size, Image.ANTIALIAS))
# Run inference
interpreter.invoke()
objs = detect.get_objects(interpreter, score_threshold=0.4, image_scale=scale)
# Resize again to a reasonable size for display
display_width = 500
scale_factor = display_width / image.width
height_ratio = image.height / image.width
image = image.resize((display_width, int(display_width * height_ratio)))
draw_objects(ImageDraw.Draw(image), objs, scale_factor, labels)
image
"""
Explanation: To simplify our code, we'll use the PyCoral API:
End of explanation
"""
! curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
! echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
! sudo apt-get update
! sudo apt-get install edgetpu-compiler
"""
Explanation: Compile for the Edge TPU
First we need to download the Edge TPU Compiler:
End of explanation
"""
NUMBER_OF_TPUS = 1
!edgetpu_compiler $TFLITE_FILENAME -d --num_segments=$NUMBER_OF_TPUS
"""
Explanation: Before compiling the .tflite file for the Edge TPU, it's important to consider whether your model will fit into the Edge TPU memory.
The Edge TPU has approximately 8 MB of SRAM for caching model parameters, so any model close to or over 8 MB will not fit entirely in the Edge TPU memory. In that case, inference times are longer, because some model parameters must be fetched from the host system memory.
One way to eliminate the extra latency is to use model pipelining, which splits the model into segments that can run on separate Edge TPUs in series. This can significantly reduce the latency for big models.
The following table provides recommendations for the number of Edge TPUs to use with each EfficientDet-Lite model.
| Model architecture | Minimum TPUs | Recommended TPUs
|--------------------|-------|-------|
| EfficientDet-Lite0 | 1 | 1 |
| EfficientDet-Lite1 | 1 | 1 |
| EfficientDet-Lite2 | 1 | 2 |
| EfficientDet-Lite3 | 2 | 2 |
| EfficientDet-Lite4 | 2 | 3 |
If you need extra Edge TPUs for your model, then update NUMBER_OF_TPUS here:
End of explanation
"""
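To make the 8 MB guideline actionable, here's a rough size check you can run on the exported file before compiling. This is a sketch: the helper name and threshold are our own, and `TFLITE_FILENAME` is the variable defined earlier in this notebook.

```python
import os

def fits_edge_tpu_cache(path, cache_mb=8):
    # Rough heuristic: a .tflite file near or over ~8 MB will not fit
    # entirely in the Edge TPU's on-chip parameter cache.
    size_mb = os.path.getsize(path) / (1024 * 1024)
    return size_mb < cache_mb

# e.g. fits_edge_tpu_cache(TFLITE_FILENAME)
```

If it returns False, consider a smaller EfficientDet-Lite variant or model pipelining.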
from google.colab import files
files.download(TFLITE_FILENAME)
files.download(TFLITE_FILENAME.replace('.tflite', '_edgetpu.tflite'))
files.download(LABELS_FILENAME)
"""
Explanation: Beware when using multiple segments: the Edge TPU Compiler divides the model such that all segments have roughly equal amounts of parameter data, but that does not mean all segments have the same latency. Especially when dividing an SSD model such as EfficientDet, this results in a latency imbalance between segments, because SSD models have a large post-processing op that actually executes on the CPU, not on the Edge TPU. So although segmenting your model this way is better than running the whole model on just one Edge TPU, we recommend that you segment the EfficientDet-Lite model using our profiling-based partitioner tool, which measures each segment's latency on the Edge TPU and then iteratively adjusts the segment sizes to balance the latency across all segments.
Download the files
End of explanation
"""
|
anandha2017/udacity | nd101 Deep Learning Nanodegree Foundation/DockerImages/projects/04-language-translation/notebooks/dlnd_language_translation_v2.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
"""
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
"""
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
"""
eos = target_vocab_to_int['<EOS>']
source_sentences = [s for s in source_text.split('\n')]
target_sentences = [s for s in target_text.split('\n')]
source_id_text = [[source_vocab_to_int[w] for w in s.split()] for s in source_sentences]
target_id_text = [[target_vocab_to_int[w] for w in s.split()] + [eos] for s in target_sentences]
return source_id_text, target_id_text
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
"""
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
"""
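As a quick sanity check of the mapping text_to_ids performs, here's the same id conversion on a toy vocabulary. The vocabularies and ids below are made up purely for illustration.

```python
src_vocab = {'hello': 3, 'world': 4}
tgt_vocab = {'bonjour': 5, 'monde': 6, '<EOS>': 1}

# Each sentence becomes a list of word ids; target sentences get <EOS> appended.
source_ids = [[src_vocab[w] for w in s.split()] for s in ['hello world']]
target_ids = [[tgt_vocab[w] for w in s.split()] + [tgt_vocab['<EOS>']] for s in ['bonjour monde']]
print(source_ids, target_ids)  # [[3, 4]] [[5, 6, 1]]
```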
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
"""
def model_inputs():
"""
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
"""
input_data = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='target')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
keep_probability = tf.placeholder(tf.float32, None, name='keep_prob')
target_sequence_length = tf.placeholder(tf.int32, (None,), name='target_sequence_length')
max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len')
source_sequence_length = tf.placeholder(tf.int32, (None,), name='source_sequence_length')
return (input_data,
targets,
learning_rate,
keep_probability,
target_sequence_length,
max_target_sequence_length,
source_sequence_length)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoder_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Target sequence length placeholder named "target_sequence_length" with rank 1
Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
End of explanation
"""
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
"""
Preprocess target data for the decoder
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
"""
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return dec_input
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_encoding_input(process_decoder_input)
"""
Explanation: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the <GO> id to the beginning of each batch.
End of explanation
"""
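The slice-and-prepend transformation is easier to see on a concrete batch; here's a NumPy analog of what the strided_slice/concat pair above does (the ids are made up for illustration):

```python
import numpy as np

go_id = 2
batch = np.array([[4, 5, 6, 1],
                  [7, 8, 9, 1]])  # each target row ends with <EOS> (id 1 here)

ending = batch[:, :-1]  # what tf.strided_slice keeps: drop the last id of each row
dec_input = np.concatenate([np.full((batch.shape[0], 1), go_id), ending], axis=1)
print(dec_input)  # [[2 4 5 6]
                  #  [2 7 8 9]]
```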
def make_stacked_lstm_rnn_cell(rnn_size, num_layers, keep_prob):
def rnn_cell(rnn_size):
lstm_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
return tf.contrib.rnn.DropoutWrapper(lstm_cell, keep_prob, keep_prob, keep_prob)
stacked_lstm_cell = tf.contrib.rnn.MultiRNNCell([rnn_cell(rnn_size) for _ in range(num_layers)])
return stacked_lstm_cell
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
"""
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
"""
enc_embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
stacked_cell = make_stacked_lstm_rnn_cell(rnn_size, num_layers, keep_prob)
return tf.nn.dynamic_rnn(stacked_cell, enc_embed_input, sequence_length=source_sequence_length, dtype=tf.float32)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
"""
Explanation: Encoding
Implement encoding_layer() to create a Encoder RNN layer:
* Embed the encoder input using tf.contrib.layers.embed_sequence
* Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
* Pass cell and embedded input to tf.nn.dynamic_rnn()
End of explanation
"""
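embed_sequence is ultimately a learned row lookup: each word id indexes one row of an embedding matrix. A NumPy analog of the lookup step (the matrix values are random, purely for illustration):

```python
import numpy as np

vocab_size, embed_size = 5, 3
rng = np.random.default_rng(0)
embedding = rng.normal(size=(vocab_size, embed_size))  # one learned row per word id

ids = np.array([[0, 2, 4]])   # a batch of one 3-word sentence
embedded = embedding[ids]     # row lookup by id
print(embedded.shape)         # (1, 3, 3): (batch, time, embedding)
```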
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
"""
training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,
sequence_length=target_sequence_length,
time_major=False)
training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer)
training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder, impute_finished=True,
maximum_iterations=max_summary_length)
return training_decoder_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
"""
Explanation: Decoding - Training
Create a training decoding layer:
* Create a tf.contrib.seq2seq.TrainingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
"""
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
"""
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
"""
start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens')
inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id)
inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer)
decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,
impute_finished=True,
maximum_iterations=max_target_sequence_length)
return decoder_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
"""
Explanation: Decoding - Inference
Create inference decoder:
* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
"""
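The difference between the two helpers is what gets fed to the next decoder step: TrainingHelper feeds the ground-truth embeddings, while GreedyEmbeddingHelper feeds back the argmax of the previous step until <EOS>. A minimal pure-Python sketch of the greedy loop (the logits are made up for illustration):

```python
import numpy as np

eos_id = 0
step_logits = [np.array([0.1, 2.0, 0.3]),   # argmax -> 1
               np.array([0.2, 0.1, 1.5]),   # argmax -> 2
               np.array([3.0, 0.1, 0.2])]   # argmax -> 0 == <EOS>, stop

decoded = []
for logits in step_logits:  # in the real model, each step's logits depend on the token fed back
    token = int(np.argmax(logits))
    if token == eos_id:
        break
    decoded.append(token)
print(decoded)  # [1, 2]
```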
def decode_stacked_lstm_rnn_cell(rnn_size, num_layers, keep_prob=1.0):  # keep_prob is unused here; dropout is only applied in the encoder
def rnn_cell(rnn_size):
lstm_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
return lstm_cell
decoded_lstm_cell = tf.contrib.rnn.MultiRNNCell([rnn_cell(rnn_size) for _ in range(num_layers)])
return decoded_lstm_cell
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
"""
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
decoder_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
decoder_embeddings_input = tf.nn.embedding_lookup(decoder_embeddings, dec_input)
decoded_cell = decode_stacked_lstm_rnn_cell(rnn_size, num_layers)
output_layer = Dense(target_vocab_size,
kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1))
with tf.variable_scope("decoding") as training_scope:
training_logits = decoding_layer_train(encoder_state, decoded_cell, decoder_embeddings_input,
target_sequence_length, max_target_sequence_length,
output_layer, keep_prob)
start_of_sequence_id = target_vocab_to_int['<GO>']
end_of_sequence_id = target_vocab_to_int['<EOS>']
with tf.variable_scope("decoding", reuse=True) as inference_scope:
inference_logits = decoding_layer_infer(encoder_state, decoded_cell, decoder_embeddings,
start_of_sequence_id, end_of_sequence_id,
max_target_sequence_length, target_vocab_size,
output_layer, batch_size, keep_prob)
return training_logits, inference_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
"""
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
"""
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
"""
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
enc_output, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size, enc_embedding_size)
dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
training_output, inference_output = decoding_layer(dec_input, enc_state, target_sequence_length,
max_target_sentence_length, rnn_size, num_layers,
target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, dec_embedding_size)
return training_output, inference_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
End of explanation
"""
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 512
# Number of Layers
num_layers = 3
# Embedding Size
encoding_embedding_size = 256
decoding_embedding_size = 256
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.8
display_step = 25
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
Set display_step to state how many steps between each debug output statement
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def pad_sentence_batch(sentence_batch, pad_int):
"""Pad sentences with <PAD> so that each sentence of a batch has the same length"""
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
"""Batch targets, sources, and the lengths of their sentences together"""
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
"""
Explanation: Batch and pad the source and target sequences
End of explanation
"""
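The padding rule is simple but worth seeing on a tiny batch; here's a standalone sketch mirroring pad_sentence_batch (using 0 as the <PAD> id for illustration):

```python
def pad_batch(batch, pad_int):
    # Pad every sentence to the length of the longest sentence in the batch.
    max_len = max(len(s) for s in batch)
    return [s + [pad_int] * (max_len - len(s)) for s in batch]

print(pad_batch([[1, 2, 3], [4]], 0))  # [[1, 2, 3], [4, 0, 0]]
```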
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def get_accuracy(target, logits):
"""
Calculate accuracy
"""
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
loss_list = []
valid_acc_list = []
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
loss_list.append(loss)
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
valid_acc_list.append(valid_acc)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
# Visualize the loss and accuracy
import matplotlib.pyplot as plt
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(18, 6))
ax1.plot(loss_list, color='red')
ax1.set_title('Training Loss')
ax1.set_ylabel('Loss value')
ax2.plot(valid_acc_list)
ax2.set_xlabel('Iterations')
ax2.set_ylabel('Accuracy')
ax2.set_title('Validation Accuracy')
plt.show()
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
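Note how get_accuracy (defined in the training cell above) handles length mismatches: the shorter array is zero-padded before the element-wise comparison. A NumPy sketch of that behavior on made-up ids:

```python
import numpy as np

target = np.array([[1, 2, 3]])
logits = np.array([[1, 2]])              # prediction is one step shorter
pad = target.shape[1] - logits.shape[1]
logits = np.pad(logits, [(0, 0), (0, pad)], 'constant')  # -> [[1, 2, 0]]
acc = np.mean(np.equal(target, logits))
print(acc)  # 2 of 3 positions match -> 0.666...
```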
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)
"""
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def sentence_to_seq(sentence, vocab_to_int):
"""
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
"""
# Convert the sentence to lowercase
slower = sentence.lower()
# Convert words into ids using vocab_to_int
word_ids = []
for s in slower.split():
# Convert words not in the vocabulary, to the <UNK> word id.
if s not in vocab_to_int:
s = '<UNK>'
word_ids.append(vocab_to_int[s])
return word_ids
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
"""
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
"""
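A dict .get with a default does the same lowercase-and-map job in one line; here's a toy check of the behavior sentence_to_seq implements (the vocabulary ids are made up for illustration):

```python
vocab_to_int = {'he': 10, 'saw': 11, 'truck': 12, '<UNK>': 0}
sentence = 'He saw a truck'
word_ids = [vocab_to_int.get(w, vocab_to_int['<UNK>']) for w in sentence.lower().split()]
print(word_ids)  # [10, 11, 0, 12] -- 'a' is out of vocabulary, so it maps to <UNK>
```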
#translate_sentence = 'he saw a old yellow truck .' # il a vu un vieux camion jaune = He saw an old yellow truck
#translate_sentence = 'india is never busy during autumn' # inde est jamais occupé à l'automne , mais il est parfois = India is never busy in the autumn, but it is sometimes
#translate_sentence = 'france is never cold during september' # france ne fait jamais froid au mois de septembre , mais il = France is never cold in September, but it
#translate_sentence = 'your most feared animal is that shark' # la pomme est votre fruit le moins aimé = The apple is your least liked fruit
#translate_sentence = 'our least favorite fruit is the banana' # notre fruit préféré moins est la banane = Our favorite fruit less is banana
#translate_sentence = 'china is hot during july' # chine est chaud en juillet , mais il est calme = Chine is hot in July, but it is quiet
#translate_sentence = 'i fear yellow sharks' # aime la mangue est le français et les = Loves mango is French and
translate_sentence = 'french sharks are yellow' # le pamplemousse et les moins les mangues = Grapefruit and minus mangoes
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
"""
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation
"""
|
AllenDowney/ProbablyOverthinkingIt | dice_prob.ipynb | mit | from __future__ import print_function, division
from numpy.random import choice
from collections import Counter
from collections import defaultdict
"""
Explanation: Solution to a problem posted at
https://www.reddit.com/r/statistics/comments/4csjee/finding_pab_given_two_sets_of_data/
Copyright 2016 Allen Downey
MIT License: http://opensource.org/licenses/MIT
End of explanation
"""
def roll(die):
return choice(die, 6)
die = [1,2,3,4,5,6]
roll(die)
"""
Explanation: Roll six 6-sided dice:
End of explanation
"""
def compute_score(outcome):
    # Count how many times each face appears, then group faces by their count.
    counts = Counter(outcome)
    dd = defaultdict(list)
    for face, count in counts.items():
        dd[count].append(face)
    # Score = number of distinct faces that achieve the maximum count.
    return len(dd[max(dd)])
compute_score([1,1,1,1,1,1])
"""
Explanation: Count how many times each outcome occurs and score accordingly:
End of explanation
"""
n = 100000
scores = [compute_score(roll(die)) for _ in range(n)]
"""
Explanation: Run many times and accumulate scores:
End of explanation
"""
for score, freq in sorted(Counter(scores).items()):
print(score, 100*freq/n)
"""
Explanation: Print the percentages of each score:
End of explanation
"""
from itertools import product
die = [1,2,3,4,5,6]
counts = Counter(compute_score(list(outcome)) for outcome in product(*[die]*6))
n = sum(counts.values())
for score, freq in sorted(counts.items()):
print(score, 100*freq/n)
"""
Explanation: Or even better, just enumerate the possibilities.
End of explanation
"""
|
calroc/joypy | docs/Quadratic.ipynb | gpl-3.0 | from notebook_preamble import J, V, define
"""
Explanation: Quadratic formula
End of explanation
"""
define('quadratic == over [[[neg] dupdip sqr 4] dipd * * - sqrt [+] [-] cleave] dip 2 * [truediv] cons app2 roll< pop')
J('3 1 1 quadratic')
"""
Explanation: Cf. jp-quadratic.html
-b +/- sqrt(b^2 - 4 * a * c)
-----------------------------
2 * a
$\frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$
Write a straightforward program with variable names.
b neg b sqr 4 a c * * - sqrt [+] [-] cleave a 2 * [truediv] cons app2
Check it.
b neg b sqr 4 a c * * - sqrt [+] [-] cleave a 2 * [truediv] cons app2
-b b sqr 4 a c * * - sqrt [+] [-] cleave a 2 * [truediv] cons app2
-b b^2 4 a c * * - sqrt [+] [-] cleave a 2 * [truediv] cons app2
-b b^2 4ac - sqrt [+] [-] cleave a 2 * [truediv] cons app2
-b b^2-4ac sqrt [+] [-] cleave a 2 * [truediv] cons app2
-b sqrt(b^2-4ac) [+] [-] cleave a 2 * [truediv] cons app2
-b -b+sqrt(b^2-4ac) -b-sqrt(b^2-4ac) a 2 * [truediv] cons app2
-b -b+sqrt(b^2-4ac) -b-sqrt(b^2-4ac) 2a [truediv] cons app2
-b -b+sqrt(b^2-4ac) -b-sqrt(b^2-4ac) [2a truediv] app2
-b -b+sqrt(b^2-4ac)/2a -b-sqrt(b^2-4ac)/2a
Codicil
-b -b+sqrt(b^2-4ac)/2a -b-sqrt(b^2-4ac)/2a roll< pop
-b+sqrt(b^2-4ac)/2a -b-sqrt(b^2-4ac)/2a -b pop
-b+sqrt(b^2-4ac)/2a -b-sqrt(b^2-4ac)/2a
Derive a definition.
b neg b sqr 4 a c * * - sqrt [+] [-] cleave a 2 * [truediv] cons app2 roll< pop
b [neg] dupdip sqr 4 a c * * - sqrt [+] [-] cleave a 2 * [truediv] cons app2 roll< pop
b a c [[neg] dupdip sqr 4] dipd * * - sqrt [+] [-] cleave a 2 * [truediv] cons app2 roll< pop
b a c a [[[neg] dupdip sqr 4] dipd * * - sqrt [+] [-] cleave] dip 2 * [truediv] cons app2 roll< pop
b a c over [[[neg] dupdip sqr 4] dipd * * - sqrt [+] [-] cleave] dip 2 * [truediv] cons app2 roll< pop
End of explanation
"""
define('pm == [+] [-] cleave popdd')
"""
Explanation: Simplify
We can define a pm plus-or-minus function:
End of explanation
"""
define('quadratic == over [[[neg] dupdip sqr 4] dipd * * - sqrt pm] dip 2 * [truediv] cons app2')
J('3 1 1 quadratic')
"""
Explanation: Then quadratic becomes:
End of explanation
"""
from joy.library import SimpleFunctionWrapper
from notebook_preamble import D
@SimpleFunctionWrapper
def pm(stack):
a, (b, stack) = stack
p, m, = b + a, b - a
return m, (p, stack)
D['pm'] = pm
"""
Explanation: Define a "native" pm function.
The definition of pm above is pretty elegant, but the implementation takes a lot of steps relative to what it's accomplishing. Since we are likely to use pm more than once in the future, let's write a primitive in Python and add it to the dictionary.
End of explanation
"""
V('3 1 1 quadratic')
"""
Explanation: The resulting trace is short enough to fit on a page.
End of explanation
"""
|
diegocavalca/Studies | programming/Python/tensorflow/exercises/Sparse_Tensors-Solutions.ipynb | cc0-1.0 | from __future__ import print_function
import tensorflow as tf
import numpy as np
from datetime import date
date.today()
author = "kyubyong. https://github.com/Kyubyong/tensorflow-exercises"
tf.__version__
np.__version__
sess = tf.InteractiveSession()
"""
Explanation: Sparse Tensors
End of explanation
"""
x = tf.constant([[1, 0, 0, 0],
[0, 0, 2, 0],
[0, 0, 0, 0]], dtype=tf.int32)
sp = tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])
print(sp.eval())
"""
Explanation: Sparse Tensor Representation & Conversion
Q1. Convert tensor x into a SparseTensor.
End of explanation
"""
print("dtype:", sp.dtype)
print("indices:", sp.indices.eval())
print("dense_shape:", sp.dense_shape.eval())
print("values:", sp.values.eval())
"""
Explanation: Q2. Investigate the dtype, indices, dense_shape and values of the SparseTensor sp in Q1.
End of explanation
"""
def dense_to_sparse(tensor):
    # Locate the nonzero entries; their positions, values, and the dense
    # shape fully describe the tensor in sparse form.
    indices = tf.where(tf.not_equal(tensor, 0))
    return tf.SparseTensor(indices=indices,
                           values=tf.gather_nd(tensor, indices),
                           dense_shape=tf.to_int64(tf.shape(tensor)))
# Test
print(dense_to_sparse(x).eval())
"""
Explanation: Q3. Let's write a custom function that converts a dense Tensor to a SparseTensor. Complete it.
End of explanation
"""
output = tf.sparse_to_dense(sparse_indices=[[0, 0], [1, 2]], sparse_values=[1, 2], output_shape=[3, 4])
print(output.eval())
print("Check if this is identical with x:\n", x.eval())
"""
Explanation: Q4. Convert the SparseTensor sp to a Tensor using tf.sparse_to_dense.
End of explanation
"""
output = tf.sparse_tensor_to_dense(sp)
print(output.eval())
print("Check if this is identical with x:\n", x.eval())
"""
Explanation: Q5. Convert the SparseTensor sp to a Tensor using tf.sparse_tensor_to_dense.
End of explanation
"""
|
tcstewar/testing_notebooks | sgbc/Simple LSTM example.ipynb | gpl-2.0 | import numpy as np
import tensorflow as tf
from tensorflow.contrib import rnn
import matplotlib.pyplot as plt
t = np.arange(50)*0.05
input_data = np.sign(np.array([np.sin(2*np.pi*t),np.sin(2*np.pi*t)]).T).astype(float)
input_data += np.random.normal(size=input_data.shape)*0.1
output_data = (np.sign(np.sin(2*np.pi*t*2+np.pi)).astype(float)+1)/2
print('Input Data', input_data)
print('Output Data', output_data)
"""
Explanation: First, we create some data. In a real example, this would be loaded from a file.
In this case, input_data is two values, and output_data is one value (the thing we're trying to predict given the input_data). For the particular data I've generated here, you can't do it given only the current input_data; you can only make an accurate prediction given the previous input_data as well.
End of explanation
"""
plt.subplot(2,1,1)
plt.plot(input_data)
plt.title('input data')
plt.subplot(2,1,2)
plt.plot(output_data)
plt.title('output data')
plt.tight_layout()
plt.show()
"""
Explanation: Let's plot that data, just to make it clearer
End of explanation
"""
n_epochs = 4000 # number of times to run the training
n_units = 200 # size of the neural network
n_classes = 1 # number of values in the output
n_features = 2 # number of values in the input
"""
Explanation: Now we need to make our network and train it.
End of explanation
"""
X = tf.placeholder('float',[None,n_features])
Y = tf.placeholder('float')
weights = tf.Variable(tf.random_normal([n_units, n_classes]))
bias = tf.Variable(tf.random_normal([n_classes]))
x = tf.split(X, n_features, 1)
lstm_cell = rnn.BasicLSTMCell(n_units)
outputs, states = rnn.static_rnn(lstm_cell, x, dtype=tf.float32)
output = tf.matmul(outputs[-1], weights) + bias
output = tf.reshape(output, [-1])
cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=output, labels=Y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
"""
Explanation: Now we create our network. I don't quite understand exactly what's happening here, but I copied it from an LSTM tutorial.
End of explanation
"""
with tf.Session() as session:
# initialize the network
tf.global_variables_initializer().run()
tf.local_variables_initializer().run()
# now do the training
for epoch in range(n_epochs):
        # this does one pass through the training
_, error = session.run([optimizer, cost], feed_dict={X: input_data, Y: output_data})
# print a message every 100 epochs
if epoch % 100 == 0:
print('Epoch', epoch, 'completed out of', n_epochs, 'error:', error)
# now compute the output after training
pred = tf.round(tf.nn.sigmoid(output)).eval({X: input_data})
plt.subplot(2, 1, 1)
plt.title('ideal output')
plt.plot(output_data)
plt.subplot(2, 1, 2)
plt.title('predicted output')
plt.plot(pred)
plt.tight_layout()
plt.show()
"""
Explanation: Now we train it.
End of explanation
"""
|
xtr33me/deep-learning | intro-to-rnns/Anna_KaRNNa.ipynb | mit | import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
"""
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
"""
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
"""
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
"""
text[:100]
"""
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
"""
encoded[:100]
"""
Explanation: And we can see the characters encoded as integers.
End of explanation
"""
len(vocab)
"""
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
"""
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the batch size and number of batches we can make
batch_size = n_seqs * n_steps
n_batches = len(arr)//batch_size
# Keep only enough characters to make full batches
arr = arr[:n_batches * batch_size]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:, n:n+n_steps]
# The targets, shifted by one
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
yield x, y
"""
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/sequence_batching@1x.png" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
End of explanation
"""
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
"""
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
"""
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
"""
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.
End of explanation
"""
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
"""
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. For example,
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow will create different weight matrices for all cell objects. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Below, we implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
"""
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
x: Input tensor
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# That is, the shape should be batch_size*num_steps rows by lstm_size columns
seq_output = tf.concat(lstm_output, axis=1)
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.matmul(x, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name='predictions')
return out, logits
"""
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
End of explanation
"""
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per batch_size per step
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
# Softmax cross entropy loss
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
return loss
"""
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, since we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
End of explanation
"""
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optmizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
"""
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. With tf.clip_by_global_norm, if the combined (global) norm of the gradients exceeds that threshold, they are all rescaled so the global norm equals the threshold. This ensures the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
End of explanation
"""
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
"""
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
End of explanation
"""
batch_size = 100 # Sequences per batch
num_steps = 100 # Number of sequence steps per batch
lstm_size = 512 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.001 # Learning rate
keep_prob = 0.5 # Dropout keep probability
"""
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
"""
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
"""
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
End of explanation
"""
tf.train.get_checkpoint_state('checkpoints')
"""
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
"""
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
"""
Explanation: Sampling
Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
"""
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
"""
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation
"""
|
bashtage/statsmodels | examples/notebooks/gee_score_test_simulation.ipynb | bsd-3-clause | import pandas as pd
import numpy as np
from scipy.stats.distributions import norm, poisson
import statsmodels.api as sm
import matplotlib.pyplot as plt
"""
Explanation: GEE score tests
This notebook uses simulation to demonstrate robust GEE score tests. These tests can be used in a GEE analysis to compare nested hypotheses about the mean structure. The tests are robust to miss-specification of the working correlation model, and to certain forms of misspecification of the variance structure (e.g. as captured by the scale parameter in a quasi-Poisson analysis).
The data are simulated as clusters, where there is dependence within but not between clusters. The cluster-wise dependence is induced using a copula approach. The data marginally follow a negative binomial (gamma/Poisson) mixture.
The level and power of the tests are considered below to assess the performance of the tests.
End of explanation
"""
def negbinom(u, mu, scale):
p = (scale - 1) / scale
r = mu * (1 - p) / p
x = np.random.gamma(r, p / (1 - p), len(u))
return poisson.ppf(u, mu=x)
"""
Explanation: The function defined in the following cell uses a copula approach to simulate correlated random values that marginally follow a negative binomial distribution. The input parameter u is an array of values in (0, 1). The elements of u must be marginally uniformly distributed on (0, 1). Correlation in u will induce correlations in the returned negative binomial values. The array parameter mu gives the marginal means, and the scalar parameter scale defines the mean/variance relationship (the variance is scale times the mean). The lengths of u and mu must be the same.
End of explanation
"""
# Sample size
n = 1000
# Number of covariates (including intercept) in the alternative hypothesis model
p = 5
# Cluster size
m = 10
# Intraclass correlation (controls strength of clustering)
r = 0.5
# Group indicators
grp = np.kron(np.arange(n/m), np.ones(m))
"""
Explanation: Below are some parameters that govern the data used in the simulation.
End of explanation
"""
# Build a design matrix for the alternative (more complex) model
x = np.random.normal(size=(n, p))
x[:, 0] = 1
"""
Explanation: The simulation uses a fixed design matrix.
End of explanation
"""
x0 = x[:, 0:3]
"""
Explanation: The null design matrix is nested in the alternative design matrix. It has rank two less than the alternative design matrix.
End of explanation
"""
# Scale parameter for negative binomial distribution
scale = 10
"""
Explanation: The GEE score test is robust to dependence and overdispersion. Here we set the overdispersion parameter. The variance of the negative binomial distribution for each observation is equal to scale times its mean value.
End of explanation
"""
# The coefficients used to define the linear predictors
coeff = [[4, 0.4, -0.2], [4, 0.4, -0.2, 0, -0.04]]
# The linear predictors
lp = [np.dot(x0, coeff[0]), np.dot(x, coeff[1])]
# The mean values
mu = [np.exp(lp[0]), np.exp(lp[1])]
"""
Explanation: In the next cell, we set up the mean structures for the null and alternative models
End of explanation
"""
# hyp = 0 is the null hypothesis, hyp = 1 is the alternative hypothesis.
# cov_struct is a statsmodels covariance structure
def dosim(hyp, cov_struct=None, mcrep=500):
# Storage for the simulation results
scales = [[], []]
# P-values from the score test
pv = []
# Monte Carlo loop
for k in range(mcrep):
# Generate random "probability points" u that are uniformly
# distributed, and correlated within clusters
z = np.random.normal(size=n)
u = np.random.normal(size=n//m)
u = np.kron(u, np.ones(m))
z = r*z + np.sqrt(1-r**2)*u
u = norm.cdf(z)
# Generate the observed responses
y = negbinom(u, mu=mu[hyp], scale=scale)
# Fit the null model
m0 = sm.GEE(y, x0, groups=grp, cov_struct=cov_struct, family=sm.families.Poisson())
r0 = m0.fit(scale='X2')
scales[0].append(r0.scale)
# Fit the alternative model
m1 = sm.GEE(y, x, groups=grp, cov_struct=cov_struct, family=sm.families.Poisson())
r1 = m1.fit(scale='X2')
scales[1].append(r1.scale)
# Carry out the score test
st = m1.compare_score_test(r0)
pv.append(st["p-value"])
pv = np.asarray(pv)
rslt = [np.mean(pv), np.mean(pv < 0.1)]
return rslt, scales
"""
Explanation: Below is a function that carries out the simulation.
End of explanation
"""
rslt, scales = [], []
for hyp in 0, 1:
s, t = dosim(hyp, sm.cov_struct.Independence())
rslt.append(s)
scales.append(t)
rslt = pd.DataFrame(rslt, index=["H0", "H1"], columns=["Mean", "Prop(p<0.1)"])
print(rslt)
"""
Explanation: Run the simulation using the independence working covariance structure. We expect the mean p-value to be around 0.5 under the null hypothesis, and much lower under the alternative hypothesis. Similarly, we expect that under the null hypothesis, around 10% of the p-values are less than 0.1, and a much greater fraction of the p-values are less than 0.1 under the alternative hypothesis.
End of explanation
"""
_ = plt.boxplot([scales[0][0], scales[0][1], scales[1][0], scales[1][1]])
plt.ylabel("Estimated scale")
"""
Explanation: Next we check to make sure that the scale parameter estimates are reasonable. We are assessing the robustness of the GEE score test to dependence and overdispersion, so here we are confirming that the overdispersion is present as expected.
End of explanation
"""
rslt, scales = [], []
for hyp in 0, 1:
s, t = dosim(hyp, sm.cov_struct.Exchangeable(), mcrep=100)
rslt.append(s)
scales.append(t)
rslt = pd.DataFrame(rslt, index=["H0", "H1"], columns=["Mean", "Prop(p<0.1)"])
print(rslt)
"""
Explanation: Next we conduct the same analysis using an exchangeable working correlation model. Note that this will be slower than the example above using independent working correlation, so we use fewer Monte Carlo repetitions.
End of explanation
"""
|
GoogleCloudPlatform/vertex-ai-samples | notebooks/community/ml_ops/stage6/get_started_with_tf_serving.ipynb | apache-2.0 | import os
# The Vertex AI Workbench Notebook product has specific requirements
IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME")
IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(
"/opt/deeplearning/metadata/env_version"
)
# Vertex AI Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_WORKBENCH_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG -q
! pip3 install --upgrade google-cloud-pipeline-components $USER_FLAG -q
! pip3 install tensorflow-hub $USER_FLAG -q
"""
Explanation: E2E ML on GCP: MLOps stage 6 : Get started with TensorFlow Serving with Vertex AI Prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage6/get_started_with_tf_serving.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage6/get_started_with_tf_serving.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/community/ml_ops/stage6/get_started_with_tf_serving.ipynb">
<img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo">
Open in Vertex AI Workbench
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to serve predictions from a Vertex AI Endpoint with TensorFlow Serving serving binary.
Objective
In this tutorial, you learn how to use Vertex AI Prediction on a Vertex AI Endpoint resource with TensorFlow Serving serving binary.
This tutorial uses the following Google Cloud ML services and resources:
Vertex AI Prediction
Vertex AI Models
Vertex AI Endpoints
The steps performed include:
Download a pretrained image classification model from TensorFlow Hub.
Create a serving function to receive compressed image data and output decompressed, preprocessed data for the model input.
Upload the TensorFlow Hub model and serving function as a Vertex AI Model resource.
Create an Endpoint resource.
Deploy the Model resource to the Endpoint resource with the TensorFlow Serving serving binary.
Make an online prediction to the Model resource instance deployed to the Endpoint resource.
Dataset
This tutorial uses a pre-trained image classification model from TensorFlow Hub, which is trained on the ImageNet dataset.
Learn more about the ResNet V2 pretrained model.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installation
Install the following packages to execute this notebook.
End of explanation
"""
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
"""
Explanation: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
"""
REGION = "[your-region]" # @param {type: "string"}
if REGION == "[your-region]":
REGION = "us-central1"
"""
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions.
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
"""
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Vertex AI Workbench, then don't execute this code
IS_COLAB = False
if not os.path.exists("/opt/deeplearning/metadata/env_version") and not os.getenv(
"DL_ANACONDA_HOME"
):
if "google.colab" in sys.modules:
IS_COLAB = True
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
"""
Explanation: Authenticate your Google Cloud account
If you are using Vertex AI Workbench Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
"""
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
BUCKET_URI = f"gs://{BUCKET_NAME}"
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]":
BUCKET_URI = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
"""
! gsutil mb -l $REGION $BUCKET_URI
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil ls -al $BUCKET_URI
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
import google.cloud.aiplatform as aip
import tensorflow as tf
import tensorflow_hub as hub
"""
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
"""
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_URI)
"""
Explanation: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
End of explanation
"""
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
"""
Explanation: Set hardware accelerators
You can set hardware accelerators for training and prediction.
Set the variables DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
Otherwise specify (None, None) to use a container image to run on a CPU.
Learn more about hardware accelerator support for your region.
Note: TF releases before 2.3 for GPU support will fail to load the custom model in this tutorial. It is a known issue and fixed in TF 2.3. This is caused by static graph ops that are generated in the serving function. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support.
End of explanation
"""
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", DEPLOY_COMPUTE)
"""
Explanation: Set machine type
Next, set the machine type to use for prediction.
Set the variable DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for prediction.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
"""
! gcloud services enable artifactregistry.googleapis.com
"""
Explanation: Enable Artifact Registry API
You must enable the Artifact Registry API service for your project.
Learn more about Enabling service.
End of explanation
"""
PRIVATE_REPO = "my-docker-repo"
! gcloud artifacts repositories create {PRIVATE_REPO} --repository-format=docker --location={REGION} --description="Docker repository"
! gcloud artifacts repositories list
"""
Explanation: Create a private Docker repository
Your first step is to create your own Docker repository in Google Artifact Registry.
Run the gcloud artifacts repositories create command to create a new Docker repository with your region with the description "docker repository".
Run the gcloud artifacts repositories list command to verify that your repository was created.
End of explanation
"""
! gcloud auth configure-docker {REGION}-docker.pkg.dev --quiet
"""
Explanation: Configure authentication to your private repo
Before you push or pull container images, configure Docker to use the gcloud command-line tool to authenticate requests to Artifact Registry for your region.
End of explanation
"""
# Executes in Vertex AI Workbench
if DEPLOY_GPU:
DEPLOY_IMAGE = (
f"{REGION}-docker.pkg.dev/"
+ PROJECT_ID
+ f"/{PRIVATE_REPO}"
+ "/tf_serving:gpu"
)
TF_IMAGE = "tensorflow/serving:latest-gpu"
else:
DEPLOY_IMAGE = (
f"{REGION}-docker.pkg.dev/" + PROJECT_ID + f"/{PRIVATE_REPO}" + "/tf_serving"
)
TF_IMAGE = "tensorflow/serving:latest"
if not IS_COLAB:
if DEPLOY_GPU:
! sudo docker pull tensorflow/serving:latest-gpu
else:
! sudo docker pull tensorflow/serving:latest
! sudo docker tag tensorflow/serving $DEPLOY_IMAGE
! sudo docker push $DEPLOY_IMAGE
else:
# install docker daemon
! apt-get -qq install docker.io
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
"""
Explanation: Container (Docker) image for serving
Set the TensorFlow Serving Docker container image for serving prediction.
1. Pull the corresponding CPU or GPU Docker image for TF Serving from Docker Hub.
2. Create a tag for registering the image with Artifact Registry
3. Register the image with Artifact Registry.
Learn more about TensorFlow Serving.
End of explanation
"""
%%bash -s $IS_COLAB $DEPLOY_IMAGE $TF_IMAGE
if [ $1 == "False" ]; then
exit 0
fi
set -x
dockerd -b none --iptables=0 -l warn &
for i in $(seq 5); do [ ! -S "/var/run/docker.sock" ] && sleep 2 || break; done
docker pull $3
docker tag tensorflow/serving $2
docker push $2
kill $(jobs -p)
"""
Explanation: Executes in Colab
End of explanation
"""
tfhub_model = tf.keras.Sequential(
[hub.KerasLayer("https://tfhub.dev/google/imagenet/resnet_v2_101/classification/5")]
)
tfhub_model.build([None, 224, 224, 3])
tfhub_model.summary()
"""
Explanation: Get pretrained model from TensorFlow Hub
For demonstration purposes, this tutorial uses a pretrained model from TensorFlow Hub (TFHub), which is then uploaded to a Vertex AI Model resource. Once you have a Vertex AI Model resource, the model can be deployed to a Vertex AI Endpoint resource.
Download the pretrained model
First, you download the pretrained model from TensorFlow Hub. The model gets downloaded as a TF.Keras layer. To finalize the model, in this example, you create a Sequential() model with the downloaded TFHub model as a layer, and specify the input shape to the model.
End of explanation
"""
MODEL_DIR = BUCKET_URI + "/model/1"
tfhub_model.save(MODEL_DIR)
"""
Explanation: Save the model artifacts
At this point, the model is in memory. Next, you save the model artifacts to a Cloud Storage location.
Note: For TF Serving, the MODEL_DIR must end in a subfolder that is a number, e.g., 1.
End of explanation
"""
CONCRETE_INPUT = "numpy_inputs"
def _preprocess(bytes_input):
decoded = tf.io.decode_jpeg(bytes_input, channels=3)
decoded = tf.image.convert_image_dtype(decoded, tf.float32)
resized = tf.image.resize(decoded, size=(224, 224))
return resized
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def preprocess_fn(bytes_inputs):
decoded_images = tf.map_fn(
_preprocess, bytes_inputs, dtype=tf.float32, back_prop=False
)
return {
CONCRETE_INPUT: decoded_images
} # User needs to make sure the key matches model's input
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def serving_fn(bytes_inputs):
images = preprocess_fn(bytes_inputs)
prob = m_call(**images)
return prob
m_call = tf.function(tfhub_model.call).get_concrete_function(
[tf.TensorSpec(shape=[None, 224, 224, 3], dtype=tf.float32, name=CONCRETE_INPUT)]
)
tf.saved_model.save(tfhub_model, MODEL_DIR, signatures={"serving_default": serving_fn})
"""
Explanation: Upload the model for serving
Next, you upload your TF.Keras model to the Vertex AI Model service, which creates a Vertex AI Model resource for your model. During upload, you need to define a serving function to convert data to the format your model expects. If you send encoded data to Vertex AI, your serving function ensures that the data is decoded on the model server before it is passed as input to your model.
How does the serving function work
When you send a request to an online prediction server, the request is received by a HTTP server. The HTTP server extracts the prediction request from the HTTP request content body. The extracted prediction request is forwarded to the serving function. For Google pre-built prediction containers, the request content is passed to the serving function as a tf.string.
The serving function consists of two parts:
preprocessing function:
Converts the input (tf.string) to the input shape and data type of the underlying model (dynamic graph).
Performs the same preprocessing of the data that was done during training the underlying model -- e.g., normalizing, scaling, etc.
post-processing function:
Converts the model output to the format expected by the receiving application -- e.g., compresses the output.
Packages the output for the receiving application -- e.g., adds headings, makes a JSON object, etc.
Both the preprocessing and post-processing functions are converted to static graphs which are fused to the model. The output from the underlying model is passed to the post-processing function. The post-processing function passes the converted/packaged output back to the HTTP server. The HTTP server returns the output as the HTTP response content.
One thing to keep in mind when building serving functions for TF.Keras models is that they run as static graphs. That means you cannot use TF graph operations that require a dynamic graph. If you do, you will get an error during compilation of the serving function indicating that you are using an EagerTensor, which is not supported.
Serving function for image data
Preprocessing
To pass images to the prediction service, you encode the compressed (e.g., JPEG) image bytes into base 64 -- which makes the content safe from modification while transmitting binary data over the network. Since this deployed model expects input data as raw (uncompressed) bytes, you need to ensure that the base 64 encoded data gets converted back to raw bytes, and then preprocessed to match the model input requirements, before it is passed as input to the deployed model.
To resolve this, you define a serving function (serving_fn) and attach it to the model as a preprocessing step. Add a @tf.function decorator so the serving function is fused to the underlying model (instead of upstream on a CPU).
When you send a prediction or explanation request, the content of the request is base 64 decoded into a Tensorflow string (tf.string), which is passed to the serving function (serving_fn). The serving function preprocesses the tf.string into raw (uncompressed) numpy bytes (preprocess_fn) to match the input requirements of the model:
io.decode_jpeg - Decompresses the JPEG image, which is returned as a TensorFlow tensor with three channels (RGB).
image.convert_image_dtype - Changes integer pixel values to float 32, and rescales pixel data between 0 and 1.
image.resize - Resizes the image to match the input shape for the model.
At this point, the data can be passed to the model (m_call), via a concrete function. The serving function is a static graph, while the model is a dynamic graph. The concrete function performs the tasks of marshalling the input data from the serving function to the model, and marshalling the prediction result from the model back to the serving function.
End of explanation
"""
loaded = tf.saved_model.load(MODEL_DIR)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
"""
Explanation: Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
For your purpose, you need the signature of the serving function. Why? Well, when we send our data for prediction as a HTTP request packet, the image data is base64 encoded, and our TF.Keras model takes numpy input. Your serving function will do the conversion from base64 to a numpy array.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
End of explanation
"""
MODEL_NAME = "example_" + TIMESTAMP
model = aip.Model.upload(
display_name="example_" + TIMESTAMP,
artifact_uri=MODEL_DIR[:-2],
serving_container_image_uri=DEPLOY_IMAGE,
serving_container_health_route="/v1/models/" + MODEL_NAME,
serving_container_predict_route="/v1/models/" + MODEL_NAME + ":predict",
serving_container_command=["/usr/bin/tensorflow_model_server"],
serving_container_args=[
"--model_name=" + MODEL_NAME,
"--model_base_path=" + "$(AIP_STORAGE_URI)",
"--rest_api_port=8080",
"--port=8500",
"--file_system_poll_wait_seconds=31540000",
],
serving_container_ports=[8080],
)
print(model)
"""
Explanation: Upload the TensorFlow Hub model to a Vertex AI Model resource
Finally, you upload the model artifacts from the TFHub model and serving function into a Vertex AI Model resource. Since you are using a non Google pre-built serving binary -- i.e., TensorFlow Serving, you need to specify the following additional serving configuration settings:
serving_container_command: The serving binary (HTTP Server) to start up.
serving_container_args: The arguments to pass to the serving binary. For TensorFlow Serving, the required arguments are:
--model_name: The human readable name to assign to the model.
--model_base_path: Where to store the model artifacts in the container. The Vertex service sets the variable $(AIP_STORAGE_URI) to where the service installed the model artifacts in the container.
--rest_api_port: The port to which to send REST based prediction requests. Can either be 8080 or 8501 (default for TensorFlow Serving).
--port: The port to which to send gRPC based prediction requests. Should be 8500 for TensorFlow Serving.
serving_container_health_route: The URL for the service to periodically ping for a response to verify that the serving binary is running. For TensorFlow Serving, this will be /v1/models/\<model_name>.
serving_container_predict_route: The URL for the service to route REST-based prediction requests to. For TF Serving, this will be /v1/models/[model_name]:predict.
serving_container_ports: A list of ports for the HTTP server to listen for requests.
Uploading a model into a Vertex Model resource returns a long running operation, since it may take a few moments.
Note: You drop the ending number subfolder (e.g., /1) from the model path to upload. The Vertex service will upload the parent folder above the subfolder with the model artifacts -- which is what TensorFlow Serving binary expects.
Note: When you upload the model artifacts to a Vertex AI Model resource, you specify the corresponding deployment container image.
End of explanation
"""
endpoint = aip.Endpoint.create(
display_name="example_" + TIMESTAMP,
project=PROJECT_ID,
location=REGION,
labels={"your_key": "your_value"},
)
print(endpoint)
"""
Explanation: Creating an Endpoint resource
You create an Endpoint resource using the Endpoint.create() method. At a minimum, you specify the display name for the endpoint. Optionally, you can specify the project and location (region); otherwise the settings are inherited by the values you set when you initialized the Vertex AI SDK with the init() method.
In this example, the following parameters are specified:
display_name: A human readable name for the Endpoint resource.
project: Your project ID.
location: Your region.
labels: (optional) User defined metadata for the Endpoint in the form of key/value pairs.
This method returns an Endpoint object.
Learn more about Vertex AI Endpoints.
End of explanation
"""
response = endpoint.deploy(
model=model,
deployed_model_display_name="example_" + TIMESTAMP,
machine_type=DEPLOY_COMPUTE,
)
print(endpoint)
"""
Explanation: Deploying Model resources to an Endpoint resource.
You can deploy one or more Vertex AI Model resource instances to the same endpoint. Each Vertex AI Model resource that is deployed has its own deployment container for the serving binary.
Note: For this example, you specified the deployment container for the TFHub model in the previous step of uploading the model artifacts to a Vertex AI Model resource.
In the next example, you deploy the Vertex AI Model resource to a Vertex AI Endpoint resource. The Vertex AI Model resource already has its deployment container image defined. To deploy, you specify the following additional configuration settings:
The machine type.
The (if any) type and number of GPUs.
Static, manual or auto-scaling of VM instances.
In this example, you deploy the model with the minimal amount of specified parameters, as follows:
model: The Model resource.
deployed_model_display_name: The human-readable name for the deployed model instance.
machine_type: The machine type for each VM instance.
Due to the time required to provision the resource, this may take up to a few minutes.
End of explanation
"""
! gsutil cp gs://cloud-ml-data/img/flower_photos/daisy/100080576_f52e8ee070_n.jpg test.jpg
import base64
with open("test.jpg", "rb") as f:
data = f.read()
b64str = base64.b64encode(data).decode("utf-8")
"""
Explanation: Prepare test data for prediction
Next, you will load a compressed JPEG image into memory and then base64 encode it. For demonstration purposes, you use an image from the Flowers dataset.
End of explanation
"""
# The format of each instance should conform to the deployed model's prediction input schema.
instances = [{serving_input: {"b64": b64str}}]
prediction = endpoint.predict(instances=instances)
print(prediction)
"""
Explanation: Make the prediction
Now that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource.
Request
Since in this example your test item is in a Cloud Storage bucket, you copy it locally with gsutil and read its contents. To pass the test data to the prediction service, you base64-encode the bytes -- which keeps the binary content safe from modification while transmitting it over the network.
The format of each instance is:
{ serving_input: { 'b64': base64_encoded_bytes } }
Since the predict() method can take multiple items (instances), send your single test item as a list of one test item.
Response
The response from the predict() call is a Python dictionary with the following entries:
ids: The internally assigned unique identifiers for each prediction request.
predictions: The predicted confidence, between 0 and 1, per class label.
deployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions.
End of explanation
"""
delete_bucket = False
delete_model = True
delete_endpoint = True
if delete_endpoint:
try:
endpoint.undeploy_all()
endpoint.delete()
except Exception as e:
print(e)
if delete_model:
try:
model.delete()
except Exception as e:
print(e)
if delete_bucket or os.getenv("IS_TESTING"):
! gsutil rm -rf {BUCKET_URI}
"""
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
End of explanation
"""
|
Xilinx/meta-petalinux | recipes-multimedia/gstreamer/gstreamer-vcu-notebooks/vcu-demo-camera-encode-file.ipynb | mit | from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
"""
Explanation: Video Codec Unit (VCU) Demo Example: CAMERA->ENCODE->FILE
Introduction
The Video Codec Unit (VCU) in the ZynqMP SoC is capable of encoding and decoding AVC/HEVC compressed video streams in real time.
This notebook example shows an audio/video (AV) recording use case: capturing raw video and (optionally) audio, encoding it with the VCU, and storing the compressed stream in a file. The stored file is the recorded compressed stream.
Implementation Details
<img src="pictures/block-diagram-camera-encode-file.png" align="center" alt="Drawing" style="width: 400px; height: 200px"/>
Board Setup
Connect Ethernet cable.
Connect serial cable to monitor logs on serial console.
Connect USB camera(preferably Logitech HD camera, C920) with board.
If the board is connected to a private network, export proxy settings in the /home/root/.bashrc file as below:
Create/open the bashrc file using "vi ~/.bashrc"
Insert the lines below into the bashrc file
export http_proxy="< private network proxy address >"
export https_proxy="< private network proxy address >"
Save and close the bashrc file.
Determine audio input device names based on requirements. Please refer to the Determine Audio Device Names section.
Determine Audio Device Names
The device names of the audio source (input device) and playback device (output device) need to be determined using the arecord and aplay utilities installed on the platform.
Audio Input
ALSA sound device names for capture devices
- Run below command to get ALSA sound device names for capture devices
root@zcu106-zynqmp:~#arecord -l
It shows list of Audio Capture Hardware Devices. For e.g
- card 1: C920 [HD Pro Webcam C920], device 0: USB Audio [USB Audio]
- Subdevices: 1/1
- Subdevice #0: subdevice #0
Here the card number of the capture device is 1 and the device id is 0. Hence "hw:1,0" should be passed as the audio input device.
Pulse sound device names for capture devices
- Run below command to get PULSE sound device names for capture devices
root@zcu106-zynqmp:~#pactl list short sources
It shows list of Audio Capture Hardware Devices. For e.g
- 0 alsa_input.usb-046d_HD_Pro_Webcam_C920_758B5BFF-02.analog-stereo ...
Here "alsa_input.usb-046d_HD_Pro_Webcam_C920_758B5BFF-02.analog-stereo" is the name of audio capture device. Hence it can be passed as auido input device.
USB Camera Capabilities
Resolutions for this example need to be set based on the USB camera's capabilities
- Capabilities can be found by executing below command on board
root@zcu106-zynqmp:~#"v4l2-ctl -d < dev-id > --list-formats-ext".
< dev-id >: It can be found in the dmesg logs; typically it is something like "/dev/video0"
If v4l-utils is not installed in the pre-built image, install it using dnf or rebuild the PetaLinux image to include v4l-utils
End of explanation
"""
from ipywidgets import interact
import ipywidgets as widgets
from common import common_vcu_demo_camera_encode_file
import os
from ipywidgets import HBox, VBox, Text, Layout
"""
Explanation: Run the Demo
End of explanation
"""
video_capture_device=widgets.Text(value='',
placeholder='"/dev/video1"',
description='Camera Dev Id:',
style={'description_width': 'initial'},
#layout=Layout(width='35%', height='30px'),
disabled=False)
video_capture_device
codec_type=widgets.RadioButtons(
options=['avc', 'hevc'],
description='Codec Type:',
disabled=False)
sink_name=widgets.RadioButtons(
options=['none', 'fakevideosink'],
description='Video Sink:',
disabled=False)
video_size=widgets.RadioButtons(
options=['640x480', '1280x720', '1920x1080', '3840x2160'],
description='Resolution:',
description_tooltip='To select the values, please refer USB Camera Capabilities section',
disabled=False)
HBox([codec_type, video_size, sink_name])
"""
Explanation: Video
End of explanation
"""
device_id=Text(value='',
placeholder='(optional) "hw:1"',
description='Input Dev:',
description_tooltip='To select the values, please refer Determine Audio Device Names section',
disabled=False)
device_id
audio_sink={'none':['none'], 'aac':['auto','alsasink','pulsesink'],'vorbis':['auto','alsasink','pulsesink']}
audio_src={'none':['none'], 'aac':['auto','alsasrc','pulseaudiosrc'],'vorbis':['auto','alsasrc','pulseaudiosrc']}
#val=sorted(audio_sink, key = lambda k: (-len(audio_sink[k]), k))
def print_audio_sink(AudioSink):
pass
def print_audio_src(AudioSrc):
pass
def select_audio_sink(AudioCodec):
audio_sinkW.options = audio_sink[AudioCodec]
audio_srcW.options = audio_src[AudioCodec]
audio_codecW = widgets.RadioButtons(options=sorted(audio_sink.keys(), key=lambda k: len(audio_sink[k])), description='Audio Codec:')
init = audio_codecW.value
audio_sinkW = widgets.RadioButtons(options=audio_sink[init], description='Audio Sink:')
audio_srcW = widgets.RadioButtons(options=audio_src[init], description='Audio Src:')
#j = widgets.interactive(print_audio_sink, AudioSink=audio_sinkW)
k = widgets.interactive(print_audio_src, AudioSrc=audio_srcW)
i = widgets.interactive(select_audio_sink, AudioCodec=audio_codecW)
HBox([i, k])
"""
Explanation: Audio
End of explanation
"""
frame_rate=widgets.Text(value='',
placeholder='(optional) 15, 30, 60',
description='Frame Rate:',
disabled=False)
bit_rate=widgets.Text(value='',
placeholder='(optional) 1000, 20000',
description='Bit Rate(Kbps):',
style={'description_width': 'initial'},
disabled=False)
gop_length=widgets.Text(value='',
placeholder='(optional) 30, 60',
description='Gop Length',
disabled=False)
display(HBox([bit_rate, frame_rate, gop_length]))
no_of_frames=Text(value='',
placeholder='(optional) 1000, 2000',
description=r'<p>Frame Nos:</p>',
#layout=Layout(width='25%', height='30px'),
disabled=False)
output_path=widgets.Text(value='',
placeholder='(optional) /mnt/sata/op.ts',
description='Output Path:',
disabled=False)
entropy_buffers=widgets.Dropdown(
options=['2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15'],
value='5',
description='Entropy Buffers Nos:',
style={'description_width': 'initial'},
disabled=False,)
#entropy_buffers
#output_path
#gop_length
HBox([entropy_buffers, no_of_frames, output_path])
#entropy_buffers
show_fps=widgets.Checkbox(
value=False,
description='show-fps',
#style={'description_width': 'initial'},
disabled=False)
compressed_mode=widgets.Checkbox(
value=False,
description='compressed-mode',
disabled=False)
HBox([compressed_mode, show_fps])
from IPython.display import clear_output
from IPython.display import Javascript
def run_all(ev):
display(Javascript('IPython.notebook.execute_cells_below()'))
def clear_op(event):
clear_output(wait=True)
return
button1 = widgets.Button(
description='Clear Output',
style= {'button_color':'lightgreen'},
#style= {'button_color':'lightgreen', 'description_width': 'initial'},
layout={'width': '300px'}
)
button2 = widgets.Button(
description='',
style= {'button_color':'white'},
#style= {'button_color':'lightgreen', 'description_width': 'initial'},
layout={'width': '83px'}
)
button1.on_click(run_all)
button1.on_click(clear_op)
def start_demo(event):
#clear_output(wait=True)
arg = [];
arg = common_vcu_demo_camera_encode_file.cmd_line_args_generator(device_id.value, video_capture_device.value, video_size.value, codec_type.value, audio_codecW.value, frame_rate.value, output_path.value, no_of_frames.value, bit_rate.value, entropy_buffers.value, show_fps.value, audio_srcW.value, compressed_mode.value, gop_length.value, sink_name.value);
#!sh vcu-demo-camera-encode-decode-display.sh $arg > logs.txt 2>&1
!sh vcu-demo-camera-encode-file.sh $arg
return
button = widgets.Button(
description='click to start camera-encode-file demo',
style= {'button_color':'lightgreen'},
#style= {'button_color':'lightgreen', 'description_width': 'initial'},
layout={'width': '300px'}
)
button.on_click(start_demo)
HBox([button, button2, button1])
"""
Explanation: Advanced options:
End of explanation
"""
|
adityaka/misc_scripts | python-scripts/data_analytics_learn/link_pandas/Ex_Files_Pandas_Data/Exercise Files/04_04/Final/Universal.ipynb | bsd-3-clause | import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(10, 4), columns=['A', 'B', 'C', 'D'])
df2 = pd.DataFrame(np.random.randn(7, 3), columns=['A', 'B', 'C'])
sum_df = df + df2
sum_df
"""
Explanation: NumPy Universal Functions
If the data within a DataFrame are numeric, NumPy's universal functions can be used on/with the DataFrame.
End of explanation
"""
np.exp(sum_df)
"""
Explanation: NaNs are handled correctly by universal functions
End of explanation
"""
sum_df.T
np.transpose(sum_df.values)
"""
Explanation: Transpose is available via the T attribute
End of explanation
"""
A_df = pd.DataFrame(np.arange(15).reshape((3,5)))
B_df = pd.DataFrame(np.arange(10).reshape((5,2)))
A_df.dot(B_df)
"""
Explanation: dot method on DataFrame implements matrix multiplication
Note: row and column headers
End of explanation
"""
C_Series = pd.Series(np.arange(5,10))
C_Series.dot(C_Series)
"""
Explanation: dot method on Series implements dot product
End of explanation
"""
|
ClickSecurity/data_hacking | mdl_exploration/MDL_Data_Exploration.ipynb | mit | # This exercise is mostly for us to understand what kind of data we have and then
# run some simple stats on the fields/values in the data. Pandas will be great for that
import pandas as pd
pd.__version__
# Set default figure sizes
pylab.rcParams['figure.figsize'] = (14.0, 5.0)
# This data url can be a web location http://foo.bar.com/mydata.csv or it can be a
# a path to your disk where the data resides /full/path/to/data/mydata.csv
# Note: Be a good web citizen, download the data once and then specify a path to your local file :)
# For instance: > wget http://www.malwaredomainlist.com/mdlcsv.php -O mdl_data.csv
# data_url = 'http://www.malwaredomainlist.com/mdlcsv.php'
data_url = 'data/mdl_data.csv'
# Note: when the data was pulled it didn't have column names, so poking around
# on the website we found the column headers referenced so we're explicitly
# specifying them to the CSV reader:
# date,domain,ip,reverse,description,registrant,asn,inactive,country
dataframe = pd.read_csv(data_url, names=['date','domain','ip','reverse','description',
'registrant','asn','inactive','country'], header=None, error_bad_lines=False, low_memory=False)
dataframe.head(5)
dataframe.tail(5)
# We can see there's a blank row at the end that got filled with NaNs
# Thankfully Pandas is great about handling missing data.
print dataframe.shape
dataframe = dataframe.dropna()
dataframe.shape
# For this use case we're going to remove any rows that have a '-' in the data
# by replacing '-' with NaN and then running dropna() again
dataframe = dataframe.replace('-', np.nan)
dataframe = dataframe.dropna()
dataframe.shape
# Drilling down into one of the columns
dataframe['description']
# Pandas has a describe method
# For numerical data it give a nice set of summary statistics
# For categorical data it simply gives count, unique values
# and the most common value
dataframe['description'].describe()
# We can get a count of all the unique values by running value_counts()
dataframe['description'].value_counts()
# We noticed that the description values just differ by whitespace or capitalization
dataframe['description'] = dataframe['description'].map(lambda x: x.strip().lower())
dataframe['description']
# First thing we noticed was that many of the 'submissions' had the exact same
# date, which we're guessing means some batch jobs just threw a bunch of
# domains in and stamped them all with the same date.
# We also noticed that many values just differ by capitalization (this is common)
dataframe = dataframe.applymap(lambda x: x.strip().lower() if not isinstance(x,float64) else x)
dataframe.head()
# The domain column looks to be full URI instead of just the domain
from urlparse import urlparse
dataframe['domain'] = dataframe['domain'].astype(str)
dataframe['domain'] = dataframe['domain'].apply(lambda x: "http://" + x)
dataframe['domain'] = dataframe['domain'].apply(lambda x: urlparse(x).netloc)
"""
Explanation: Data Exploration of a publicly available dataset.
<img align="right" src="http://www.sharielf.com/gifs/zz032411pony.jpg" width="220px">
Data processing, cleaning and normalization is often 95% of the battle. Never underestimate this part of the process, if you're not careful about it your derrière will be sore later. Another good reason to spend a bit of time on understanding your data is that you may realize that the data isn't going to be useful for your task at hand. Quick pruning of fruitless branches is good.
Data as an analogy: Data is almost always a big pile of shit, the only real question is, "Is there a Pony inside?" and that's what data exploration and understanding is about.
For this exploration we're going to pull some data from the Malware Domain List website http://www.malwaredomainlist.com. We'd like to thank them for providing a great resource and making their data available to the public. In general data is messy, so even though we're going to be nit-picking quite a bit, we recognize that many datasets will have similar issues, which is why we feel this is a good 'real world' example of data.
Full database: http://www.malwaredomainlist.com/mdlcsv.php
End of explanation
"""
# Using numpy.corrcoef to compute the correlation coefficient matrix
np.corrcoef(dataframe["inactive"], dataframe["country"])
# Pandas also has a correlation method on it's dataframe which has nicer output
dataframe.corr()
# Yeah perfectly correlated, so looks like 'country'
# is just the 'inactive' column duplicated.
# So what happened here? Seems bizarre to have a replicated column.
"""
Explanation: Two columns that are a mistaken copy of each other?...
We also suspect that the 'inactive' column and the 'country' column are exactly the same, also why is there one row in the inactive column with a value of '2'?
<pre>
"Ahhh, what an awful dream. Ones and zeroes everywhere... and I thought I saw a two [shudder]."
-- Bender
"It was just a dream, Bender. There's no such thing as two".
-- Fry
</pre>
End of explanation
"""
# The data hacking repository has a simple stats module we're going to use
import data_hacking.simple_stats as ss
# Spin up our g_test class
g_test = ss.GTest()
# Here we'd like to see how various exploits (description) are related to
# the ASN (Autonomous System Number) associated with the ip/domain.
(exploits, matches, cont_table) = g_test.highest_gtest_scores(
dataframe['description'], dataframe['asn'], N=5, matches=5)
ax = exploits.T.plot(kind='bar', stacked=True)
pylab.ylabel('Exploit Occurrences')
pylab.xlabel('ASN (Autonomous System Number)')
patches, labels = ax.get_legend_handles_labels()
ax.legend(patches, labels, loc='upper right')
# The plot below is showing the number of times a particular exploit was associated with an ASN.
# Interesing to see whether exploits are highly correlated to particular ASNs.
# Now we use g_test with the 'reverse=True' argument to display those exploits
# that do not have a high correlation with a particular ASN.
exploits, matches, cont_table = g_test.highest_gtest_scores(dataframe['description'],
dataframe['asn'], N=7, reverse=True, min_volume=500, matches=15)
ax = exploits.T.plot(kind='bar', stacked=True)
pylab.ylabel('Exploit Occurrences')
pylab.xlabel('ASN (Autonomous System Number)')
patches, labels = ax.get_legend_handles_labels()
ax.legend(patches, labels, loc='best')
# The plot below is showing exploits who aren't associated with any particular ASN.
# Interesing to see exploits that are spanning many ASNs.
exploits, matches, cont_table = g_test.highest_gtest_scores(dataframe['description'],
dataframe['domain'], N=5)
ax = exploits.T.plot(kind='bar', stacked=True) #, log=True)
pylab.ylabel('Exploit Occurrences')
pylab.xlabel('Domain')
patches, labels = ax.get_legend_handles_labels()
ax.legend(patches, labels, loc='best')
# The Contingency Table below is just showing the counts of the number of times
# a particular exploit was associated with an TLD.
# Drilling down on one particular exploit
banker = dataframe[dataframe['description']=='trojan banker'] # Subset dataframe
exploits, matches, cont_table = g_test.highest_gtest_scores(banker['description'], banker['domain'], N=5)
import pprint
pprint.pprint(["Domain: %s Count: %d" % (domain,count) for domain,count in exploits.iloc[0].iteritems()])
"""
Explanation: Okay well lets try to get something out of this pile. We'd like to run some simple statistics to see what correlations the data might contain.
The G-test is a statistical test for goodness of fit to a distribution and for independence in contingency tables. It's related to the chi-squared, multinomial and Fisher's exact tests; please see http://en.wikipedia.org/wiki/G_test.
End of explanation
"""
# Add the proper timestamps to the dataframe replacing the old ones
dataframe['date'] = dataframe['date'].apply(lambda x: str(x).replace('_','T'))
dataframe['date'] = pd.to_datetime(dataframe['date'])
# Now prepare the data for plotting by pivoting on the
# description to create a new column (series) for each value
# We're going to add a new column called value (needed for pivot). This
# is a bit dorky, but needed as the new columns that get created should
# really have a value in them, also we can use this as our value to sum over.
subset = dataframe[['date','description']]
subset['count'] = 1
pivot = pd.pivot_table(subset, values='count', rows=['date'], cols=['description'], fill_value=0)
by = lambda x: lambda y: getattr(y, x)
grouped = pivot.groupby([by('year'),by('month')]).sum()
# Only pull out the top 7 desciptions (exploit types)
topN = subset['description'].value_counts()[:7].index
grouped[topN].plot()
pylab.ylabel('Exploit Occurrences')
pylab.xlabel('Date Submitted')
# The plot below shows the volume of particular exploits impacting new domains.
# Tracking the ebb and flow of exploits over time might be useful
# depending on the type of analysis you're doing.
# The rise and fall of the different exploits is intriguing but
# the taper at the end is concerning, let look at total volume of
# new malicious domains coming into the MDL database.
total_mdl = dataframe['description']
total_mdl.index=dataframe['date']
total_agg = total_mdl.groupby([by('year'),by('month')]).count()
matplotlib.pyplot.figure()
total_agg.plot(label='New Domains in MDL Database')
pylab.ylabel('Total Exploits')
pylab.xlabel('Date Submitted')
matplotlib.pyplot.legend()
"""
Explanation: So switching gears, perhaps we'll look at date range, volume over time, etc.
Pandas also has reasonably good functionality for date/range processing and plotting.
End of explanation
"""
# Only pull out the top 20 desciptions (exploit types)
topN = subset['description'].value_counts()[:20].index
corr_df = grouped[topN].corr()
# Statsmodels has a correlation plot, we expect the diagonal to have perfect
# correlation (1.0) but anything high score off the diagonal means that
# the volume of different exploits are temporally correlated.
import statsmodels.api as sm
corr_df.sort(axis=0, inplace=True) # Just sorting so exploits names are easy to find
corr_df.sort(axis=1, inplace=True)
corr_matrix = corr_df.as_matrix()
pylab.rcParams['figure.figsize'] = (8.0, 8.0)
sm.graphics.plot_corr(corr_matrix, xnames=corr_df.index.tolist())
plt.show()
"""
Explanation: That doesn't look good...
The plot above shows the total volume of ALL newly submitted domains. We see from the plot that the taper is a general overall effect due to a drop in new domain submissions into the MDL database. Given the recent anemic volume there might be another data source that has more active submissions.
Well the anemic volume issue aside we're going to carry on by looking at the correlations in volume over time. In other words are the volume of reported exploits closely related to the volume of other exploits...
Correlations of Volume Over Time
<ul>
<li>**Prof. Farnsworth:** Behold! The Deathclock!
<li>**Leela:** Does it really work?
<li>**Prof. Farnsworth:** Well, it's occasionally off by a few seconds, what with "free will" and all.
</ul>
End of explanation
"""
pylab.rcParams['figure.figsize'] = (14.0, 3.0)
print grouped[['zeus v1 trojan','zeus v1 config file','zeus v1 drop zone']].corr()
grouped[['zeus v1 trojan','zeus v1 config file','zeus v1 drop zone']].plot()
pylab.ylabel('Exploit Occurrences')
pylab.xlabel('Date Submitted')
grouped[['zeus v2 trojan','zeus v2 config file','zeus v2 drop zone']].plot()
pylab.ylabel('Exploit Occurrences')
pylab.xlabel('Date Submitted')
# Drilling down on the correlation between 'trojan' and 'phoenix exploit kit'
print grouped[['trojan','phoenix exploit kit']].corr()
grouped[['trojan','phoenix exploit kit']].plot()
pylab.ylabel('Exploit Occurrences')
pylab.xlabel('Date Submitted')
"""
Explanation: Discussion of Correlation Matrix
The two sets of 3x3 red blocks on the lower right make intuitive sense, Zeus config file, drop zone and trojan show almost perfect volume over time correlation.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/test-institute-3/cmip6/models/sandbox-2/ocnbgchem.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-3', 'sandbox-2', 'ocnbgchem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: TEST-INSTITUTE-3
Source ID: SANDBOX-2
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:46
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
"""
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the transport scheme if it differs from that of the ocean model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
"""
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from an explicit sediment model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry*
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic levels are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
"""
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Tracers --> Disolved Organic Matter
Dissolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
"""
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe whether a particle size spectrum is used to represent the distribution of particles in the water volume
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of the sinking speed of particles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
"""
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
"""
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation
"""
|
freininghaus/adventofcode | 2016/day11-python.ipynb | mit | with open("input/day11.txt", "r") as f:
inputLines = tuple(line.strip() for line in f)
import itertools
import re
"""
Explanation: Day 11: Radioisotope Thermoelectric Generators
End of explanation
"""
floors = {
"first" : 1,
"second" : 2,
"third" : 3,
"fourth" : 4,
}
"""
Explanation: Functions for parsing the initial state
Map the floors to integers
End of explanation
"""
def parseItem(item):
microchipMatch = re.fullmatch("([a-z]+)-compatible microchip", item)
if microchipMatch is not None:
return microchipMatch.group(1), "M"
generatorMatch = re.fullmatch("([a-z]+) generator", item)
assert generatorMatch is not None
return generatorMatch.group(1), "G"
assert parseItem("hydrogen-compatible microchip") == ("hydrogen", "M")
assert parseItem("lithium generator") == ("lithium", "G")
"""
Explanation: Parse an item (microchip or generator)
End of explanation
"""
def parseFloor(line):
m = re.fullmatch("The ([a-z]+) floor contains (.*).", line)
floor, itemsStr = m.groups()
return tuple(sorted(parseItem(item[2:]) + (floors[floor],)
for item in re.split(r",? and |, ", itemsStr)  # pattern must not match "": re.split splits on zero-width matches since Python 3.7
if item.startswith("a ")))
assert (parseFloor("The first floor contains a hydrogen-compatible microchip and a lithium generator.") ==
(("hydrogen", "M", 1),
("lithium", "G", 1)))
assert (parseFloor("The second floor contains a hydrogen generator, and a lithium-compatible microchip.") ==
(("hydrogen", "G", 2),
("lithium", "M", 2)))
assert (parseFloor("The second floor contains a hydrogen generator.") ==
(("hydrogen", "G", 2),))
assert (parseFloor("The third floor contains a lithium-compatible microchip.") ==
(("lithium", "M", 3),))
assert (parseFloor("The fourth floor contains nothing relevant.") ==
())
"""
Explanation: Parse all items on a floor
End of explanation
"""
initialItems = tuple(sorted(itertools.chain.from_iterable(parseFloor(line) for line in inputLines)))
print(initialItems)
"""
Explanation: Use these functions for parsing the initial items on all floors
End of explanation
"""
# Takes an iterable that yields two (element, type, floor) tuples, where
# * the element should be the same for both tuples,
# * the first item should be a generator (type 'G'),
# * the second item should be a microchip (type 'M').
# Returns a tuple that contains only the floors where the generator and the microchip are.
def tupleForElement(items):
result = tuple(floor for element, itemType, floor in items)
assert len(result) == 2
return result
assert tupleForElement((("iron", "G", 3), ("iron", "M", 1))) == (3, 1)
"""
Explanation: Compact representation of the item positions
Our current representation of the positions of the microchips and generators is inefficient. Assuming that there is exactly one microchip and one generator per element, we can make the following simplifications:
* For each element, it is sufficient to store the positions of the generator and the microchip in a tuple with two elements.
* For the solution of the problem, the element names are irrelevant. Therefore, it is sufficient to store only the tuples with the positions of the generator and the microchip for each element, and ignore the element name.
* In order to reduce the problem space, the list of tuples can be sorted: for the number of moves that are needed to solve the puzzle, it does not matter if the positions for two elements are ((2, 3), (1, 1)) or ((1, 1), (2, 3)).
Helper function that generates a position tuple for a single element: tupleForElement
End of explanation
"""
def compressedItems(items):
return tuple(sorted(tupleForElement(itemsForElement)
for _, itemsForElement in itertools.groupby(items, lambda t: t[0])))
assert (compressedItems((("copper", "G", 4), ("copper", "M", 2), ("iron", "G", 1), ("iron", "M", 3)))
== ((1, 3), (4, 2)))
"""
Explanation: This function can create the compact representation for initialItems
End of explanation
"""
initialState = (1, compressedItems(initialItems))
print(initialState)
"""
Explanation: A state is a tuple that contains the elevator position and the compressed representation of the item positions
End of explanation
"""
def isFinalState(state, targetFloor=4):
currentFloor, items = state
return currentFloor == targetFloor and all(item == (targetFloor, targetFloor) for item in items)
"""
Explanation: Functions for working with states
End of explanation
"""
def isValidState(state):
currentFloor, items = state
floorsWithGenerators = set(generatorFloor for generatorFloor, microchipFloor in items)
floorsWithVulnerableMicrochips = set(microchipFloor
for generatorFloor, microchipFloor in items
if generatorFloor != microchipFloor)
return len(floorsWithGenerators & floorsWithVulnerableMicrochips) == 0
assert isValidState((1, ((2, 2), (2, 3), (4, 3), (4, 4))))
assert not isValidState((1, ((2, 2), (2, 3), (4, 2), (4, 4))))
"""
Explanation: Check if a state is valid
A state is valid unless there is a floor
* which has at least one generator, and
* which has at least one microchip which is not accompanied by the matching generator.
End of explanation
"""
def nextStates(state):
currentFloor, items = state
# Put all item positions into a flat list for easier manipulation
flattenedPositions = tuple(itertools.chain.from_iterable(items))
# Find the index (in flattenedPositions) of all items that are on the current floor
onCurrentFloor = tuple(index
for index, pos in enumerate(flattenedPositions)
if pos == currentFloor)
# Each combination of items that can be moved by the elevator from the current floor is
# represented by a tuple in 'candidatesForMoving'.
# Note that the elevator can take either one or two items.
candidatesForMoving = (tuple((i,) for i in onCurrentFloor) +
tuple(itertools.combinations(onCurrentFloor, 2)))
# Calculate the possible new states for each direction (-1: down, +1: up)
for direction in (-1, 1):
newFloor = currentFloor + direction
if newFloor < 1 or newFloor > 4:
continue
for movedIndices in candidatesForMoving:
# 'movedIndices' is a tuple that contains either one index, or two indices (in the list
# 'flattenedPositions') of the items which are moved by the elevator.
# Find the 'flattenedPositions' for the next state if the items in 'candidate' are moved
# to 'newFloor'.
newFlattenedPositions = tuple(newFloor if index in movedIndices else pos
for index, pos in enumerate(flattenedPositions))
# Convert 'newFlattenedPositions' to the compressed format (see above) by
# * grouping neighboring items to 2-element tuples,
# * sorting the list of these tuples.
newItems = tuple(
sorted(tuple(p for _, p in ps)
for _, ps in itertools.groupby(enumerate(newFlattenedPositions),
lambda x: x[0] // 2)))
newState = (newFloor, newItems)
# Only yield the new state if it is valid.
if isValidState(newState):
yield newState
# If there are two microchips and generators on the first floor initially, the elevator can move
# * both microchips, or
# * both generators, or
# * one microchip, or
# * one microchip and its generator
# to the second floor. Moving one generator without its microchip is not possible because this would
# leave this microchip vulnerable on the first floor.
assert set(nextStates((1, ((1, 1), (1, 1))))) == {(2, ((1, 2), (1, 2))),
(2, ((2, 1), (2, 1))),
(2, ((1, 1), (1, 2))),
(2, ((1, 1), (2, 2)))}
"""
Explanation: Calculate all states that can be reached in one step
End of explanation
"""
def movesToFinish(initialState):
currentStates = {initialState}
seenStates = {initialState}
for numberOfMoves in itertools.count():
if any(isFinalState(state) for state in currentStates):
return numberOfMoves
currentStates = set(newState
for state in currentStates
for newState in nextStates(state)
if not newState in seenStates)
seenStates |= currentStates
"""
Explanation: Calculate the minimal number of moves to reach the final state
End of explanation
"""
movesToFinish(initialState)
"""
Explanation: Solution for Part one
End of explanation
"""
initialItems2 = compressedItems(initialItems) + ((1, 1), (1, 1))
initialState2 = (1, initialItems2)
movesToFinish(initialState2)
"""
Explanation: Part two: two more elements with generators and microchips on first floor
End of explanation
"""
|
tensorflow/docs | site/en/guide/migrate/validate_correctness.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
"""
!pip uninstall -y -q tensorflow
# Install tf-nightly as the DeterministicRandomTestTool is available only in
# Tensorflow 2.8
!pip install -q tf-nightly
!pip install -q tf_slim
import tensorflow as tf
import tensorflow.compat.v1 as v1
import numpy as np
import tf_slim as slim
import sys
from contextlib import contextmanager
!git clone --depth=1 https://github.com/tensorflow/models.git
import models.research.slim.nets.inception_resnet_v2 as inception
"""
Explanation: <table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/migrate/validate_correctness"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/migrate/validate_correctness.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/validate_correctness.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/migrate/validate_correctness.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Validating correctness & numerical equivalence
When migrating your TensorFlow code from TF1.x to TF2, it is a good practice to ensure that your migrated code behaves the same way in TF2 as it did in TF1.x.
This guide covers migration code examples with the tf.compat.v1.keras.utils.track_tf1_style_variables modeling shim applied to tf.keras.layers.Layer methods. Read the model mapping guide to find out more about the TF2 modeling shims.
This guide details approaches you can use to:
* Validate the correctness of the results obtained from training models using the migrated code
* Validate the numerical equivalence of your code across TensorFlow versions
Setup
End of explanation
"""
# TF1 Inception resnet v2 forward pass based on slim layers
def inception_resnet_v2(inputs, num_classes, is_training):
with slim.arg_scope(
inception.inception_resnet_v2_arg_scope(batch_norm_scale=True)):
return inception.inception_resnet_v2(inputs, num_classes, is_training=is_training)
class InceptionResnetV2(tf.keras.layers.Layer):
"""Slim InceptionResnetV2 forward pass as a Keras layer"""
def __init__(self, num_classes, **kwargs):
super().__init__(**kwargs)
self.num_classes = num_classes
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs, training=None):
is_training = training or False
# Slim does not accept `None` as a value for is_training,
# Keras will still pass `None` to layers to construct functional models
# without forcing the layer to always be in training or in inference.
# However, `None` is generally considered to run layers in inference.
with slim.arg_scope(
inception.inception_resnet_v2_arg_scope(batch_norm_scale=True)):
return inception.inception_resnet_v2(
inputs, self.num_classes, is_training=is_training)
"""
Explanation: If you're putting a nontrivial chunk of forward pass code into the shim, you want to know that it is behaving the same way as it did in TF1.x. For example, consider trying to put an entire TF-Slim Inception-Resnet-v2 model into the shim as such:
End of explanation
"""
@contextmanager
def assert_no_variable_creations():
"""Assert no variables are created in this context manager scope."""
def invalid_variable_creator(next_creator, **kwargs):
raise ValueError("Attempted to create a new variable instead of reusing an existing one. Args: {}".format(kwargs))
with tf.variable_creator_scope(invalid_variable_creator):
yield
@contextmanager
def catch_and_raise_created_variables():
"""Raise all variables created within this context manager scope (if any)."""
created_vars = []
def variable_catcher(next_creator, **kwargs):
var = next_creator(**kwargs)
created_vars.append(var)
return var
with tf.variable_creator_scope(variable_catcher):
yield
if created_vars:
raise ValueError("Created vars:", created_vars)
"""
Explanation: As it so happens, this layer actually works perfectly fine out of the box (complete with accurate regularization loss tracking).
However, this is not something you want to take for granted. Follow the below steps to verify that it is actually behaving as it did in TF1.x, down to observing perfect numerical equivalence. These steps can also help you triangulate what part of the forward pass is causing a divergence from TF1.x (identify if the divergence arises in the model forward pass as opposed to a different part of the model).
Step 1: Verify variables are only created once
The very first thing you should verify is that you have correctly built the model in a way that reuses variables in each call rather than accidentally creating and using new variables each time. For example, if your model creates a new Keras layer or calls tf.Variable in each forward pass call then it is most likely failing to capture variables and creating new ones each time.
Below are two context manager scopes you can use to detect when your model is creating new variables and debug which part of the model is doing it.
End of explanation
"""
model = InceptionResnetV2(1000)
height, width = 299, 299
num_classes = 1000
inputs = tf.ones( (1, height, width, 3))
# Create all weights on the first call
model(inputs)
# Verify that no new weights are created in followup calls
with assert_no_variable_creations():
model(inputs)
with catch_and_raise_created_variables():
model(inputs)
"""
Explanation: The first scope (assert_no_variable_creations()) will raise an error immediately once you try creating a variable within the scope. This allows you to inspect the stacktrace (and use interactive debugging) to figure out exactly what lines of code created a variable instead of reusing an existing one.
The second scope (catch_and_raise_created_variables()) will raise an exception at the end of the scope if any variables ended up being created. This exception will include the list of all variables created in the scope. This is useful for seeing the full set of weights your model creates, in case you can spot general patterns. However, it is less useful for identifying the exact lines of code where those variables got created.
Use both scopes below to verify that the shim-based InceptionResnetV2 layer does not create any new variables after the first call (presumably reusing them).
End of explanation
"""
class BrokenScalingLayer(tf.keras.layers.Layer):
"""Scaling layer that incorrectly creates new weights each time:"""
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs):
var = tf.Variable(initial_value=2.0)
bias = tf.Variable(initial_value=2.0, name='bias')
return inputs * var + bias
model = BrokenScalingLayer()
inputs = tf.ones( (1, height, width, 3))
model(inputs)
try:
with assert_no_variable_creations():
model(inputs)
except ValueError as err:
import traceback
traceback.print_exc()
model = BrokenScalingLayer()
inputs = tf.ones( (1, height, width, 3))
model(inputs)
try:
with catch_and_raise_created_variables():
model(inputs)
except ValueError as err:
print(err)
"""
Explanation: In the example below, observe how these decorators work on a layer that incorrectly creates new weights each time instead of reusing existing ones.
End of explanation
"""
class FixedScalingLayer(tf.keras.layers.Layer):
"""Scaling layer that incorrectly creates new weights each time:"""
def __init__(self):
super().__init__()
self.var = None
self.bias = None
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs):
if self.var is None:
self.var = tf.Variable(initial_value=2.0)
self.bias = tf.Variable(initial_value=2.0, name='bias')
return inputs * self.var + self.bias
model = FixedScalingLayer()
inputs = tf.ones( (1, height, width, 3))
model(inputs)
with assert_no_variable_creations():
model(inputs)
with catch_and_raise_created_variables():
model(inputs)
"""
Explanation: You can fix the layer by making sure it only creates the weights once and then reuses them each time.
End of explanation
"""
# Build the forward pass inside a TF1.x graph, and
# get the counts, shapes, and names of the variables
graph = tf.Graph()
with graph.as_default(), tf.compat.v1.Session(graph=graph) as sess:
height, width = 299, 299
num_classes = 1000
inputs = tf.ones( (1, height, width, 3))
out, endpoints = inception_resnet_v2(inputs, num_classes, is_training=False)
tf1_variable_names_and_shapes = {
var.name: (var.trainable, var.shape) for var in tf.compat.v1.global_variables()}
num_tf1_variables = len(tf.compat.v1.global_variables())
"""
Explanation: Troubleshooting
Here are some common reasons why your model might accidentally be creating new weights instead of reusing existing ones:
It makes an explicit tf.Variable call without reusing already-created tf.Variables. Fix this by first checking whether the variable has already been created, and reusing it if so.
It creates a Keras layer or model directly in the forward pass each time (as opposed to tf.compat.v1.layers). Fix this by creating the layer or model once, then reusing it on every call.
It is built on top of tf.compat.v1.layers but fails to assign all compat.v1.layers an explicit name or to wrap your compat.v1.layer usage inside of a named variable_scope, causing the autogenerated layer names to increment in each model call. Fix this by putting a named tf.compat.v1.variable_scope inside your shim-decorated method that wraps all of your tf.compat.v1.layers usage.
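The third point can be hard to visualize. The following pure-Python analogue (a hypothetical illustration, not TF code) mimics how autogenerated names increment on every call, so nothing is ever reused, while explicit names make lookups hit the same stored variable:

```python
class ScopedVariableStore:
    """Toy analogue of TF1-style variable scoping, for illustration only."""
    def __init__(self):
        self._store = {}
        self._autonum = 0

    def get(self, name=None):
        if name is None:
            # Autogenerated names increment on each call, like unnamed
            # compat.v1.layers, so a fresh variable is created every time.
            name = 'var_%d' % self._autonum
            self._autonum += 1
        # Explicit names always hit the same stored entry.
        return self._store.setdefault(name, {'value': 0.0})

store = ScopedVariableStore()
first = store.get('scale')
second = store.get('scale')
assert first is second          # explicit name: the variable is reused
store.get()
store.get()
assert len(store._store) == 3   # two unnamed calls created two extra variables
```

This is why wrapping your compat.v1.layers usage in a named variable_scope stabilizes the generated names across calls.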
Step 2: Check that variable counts, names, and shapes match
The second step is to make sure your layer running in TF2 creates the same number of weights, with the same shapes, as the corresponding code does in TF1.x.
You can do a mix of manually checking them to see that they match, and doing the checks programmatically in a unit test as shown below.
End of explanation
"""
height, width = 299, 299
num_classes = 1000
model = InceptionResnetV2(num_classes)
# The weights will not be created until you call the model
inputs = tf.ones( (1, height, width, 3))
# Call the model multiple times before checking the weights, to verify variables
# get reused rather than accidentally creating additional variables
out, endpoints = model(inputs, training=False)
out, endpoints = model(inputs, training=False)
# Grab the name: shape mapping and the total number of variables separately,
# because in TF2 variables can be created with the same name
num_tf2_variables = len(model.variables)
tf2_variable_names_and_shapes = {
var.name: (var.trainable, var.shape) for var in model.variables}
# Verify that the variable counts, names, and shapes all match:
assert num_tf1_variables == num_tf2_variables
assert tf1_variable_names_and_shapes == tf2_variable_names_and_shapes
"""
Explanation: Next, do the same for the shim-wrapped layer in TF2.
Notice that the model is also called multiple times before grabbing the weights. This is done to effectively test for variable reuse.
End of explanation
"""
graph = tf.Graph()
with graph.as_default(), tf.compat.v1.Session(graph=graph) as sess:
height, width = 299, 299
num_classes = 1000
inputs = tf.ones( (1, height, width, 3))
out, endpoints = inception_resnet_v2(inputs, num_classes, is_training=False)
# Rather than running the global variable initializers,
# reset all variables to a constant value
var_reset = tf.group([var.assign(tf.ones_like(var) * 0.001) for var in tf.compat.v1.global_variables()])
sess.run(var_reset)
# Grab the outputs & regularization loss
reg_losses = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.REGULARIZATION_LOSSES)
tf1_regularization_loss = sess.run(tf.math.add_n(reg_losses))
tf1_output = sess.run(out)
print("Regularization loss:", tf1_regularization_loss)
tf1_output[0][:5]
"""
Explanation: The shim-based InceptionResnetV2 layer passes this test. However, in the case where they don't match, you can run it through a diff (text or other) to see where the differences are.
This can provide a clue as to what part of the model isn't behaving as expected. With eager execution you can use pdb, interactive debugging, and breakpoints to dig into the parts of the model that seem suspicious, and debug what is going wrong in more depth.
Troubleshooting
Pay close attention to the names of any variables created directly by explicit tf.Variable calls and Keras layers/models as their variable name generation semantics may differ slightly between TF1.x graphs and TF2 functionality such as eager execution and tf.function even if everything else is working properly. If this is the case for you, adjust your test to account for any slightly different naming semantics.
You may sometimes find that the tf.Variables, tf.keras.layers.Layers, or tf.keras.Models created in your training loop's forward pass are missing from your TF2 variables list even if they were captured by the variables collection in TF1.x. Fix this by assigning the variables/layers/models that your forward pass creates to instance attributes in your model. See here for more info.
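To make the Step 2 comparison easier to debug, a small pure-Python helper (hypothetical, not part of the original guide) can report exactly which entries differ between the two name -> (trainable, shape) mappings:

```python
def diff_variable_specs(tf1_vars, tf2_vars):
    """Report names present in only one mapping, and names whose specs differ."""
    shared = set(tf1_vars) & set(tf2_vars)
    return {
        'only_in_tf1': sorted(set(tf1_vars) - set(tf2_vars)),
        'only_in_tf2': sorted(set(tf2_vars) - set(tf1_vars)),
        'mismatched': sorted(n for n in shared if tf1_vars[n] != tf2_vars[n]),
    }

# Toy example with made-up variable specs:
tf1_specs = {'conv/kernel:0': (True, (3, 3, 3, 8)), 'conv/bias:0': (True, (8,))}
tf2_specs = {'conv/kernel:0': (True, (3, 3, 3, 8)), 'conv/bias_1:0': (True, (8,))}
report = diff_variable_specs(tf1_specs, tf2_specs)
print(report)
```

A report like the one above (a `bias` vs `bias_1` naming drift) immediately points at the naming-semantics issues discussed in the first troubleshooting bullet.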
Step 3: Reset all variables, check numerical equivalence with all randomness disabled
The next step is to verify numerical equivalence for both the actual outputs and the regularization loss tracking when you fix the model such that there is no random number generation involved (such as during inference).
The exact way to do this may depend on your specific model, but in most models (such as this one), you can do this by:
1. Initializing the weights to the same value with no randomness. This can be done by resetting them to a fixed value after they have been created.
2. Running the model in inference mode to avoid triggering any dropout layers which can be sources of randomness.
The following code demonstrates how you can compare the TF1.x and TF2 results this way.
End of explanation
"""
height, width = 299, 299
num_classes = 1000
model = InceptionResnetV2(num_classes)
inputs = tf.ones((1, height, width, 3))
# Call the model once to create the weights
out, endpoints = model(inputs, training=False)
# Reset all variables to the same fixed value as above, with no randomness
for var in model.variables:
var.assign(tf.ones_like(var) * 0.001)
tf2_output, endpoints = model(inputs, training=False)
# Get the regularization loss
tf2_regularization_loss = tf.math.add_n(model.losses)
print("Regularization loss:", tf2_regularization_loss)
tf2_output[0][:5]
# Create a dict of tolerance values
tol_dict={'rtol':1e-06, 'atol':1e-05}
# Verify that the regularization loss and output both match
# when we fix the weights and avoid randomness by running inference:
np.testing.assert_allclose(tf1_regularization_loss, tf2_regularization_loss.numpy(), **tol_dict)
np.testing.assert_allclose(tf1_output, tf2_output.numpy(), **tol_dict)
"""
Explanation: Get the TF2 results.
End of explanation
"""
random_tool = v1.keras.utils.DeterministicRandomTestTool()
with random_tool.scope():
graph = tf.Graph()
with graph.as_default(), tf.compat.v1.Session(graph=graph) as sess:
a = tf.random.uniform(shape=(3,1))
a = a * 3
b = tf.random.uniform(shape=(3,3))
b = b * 3
c = tf.random.uniform(shape=(3,3))
c = c * 3
graph_a, graph_b, graph_c = sess.run([a, b, c])
graph_a, graph_b, graph_c
random_tool = v1.keras.utils.DeterministicRandomTestTool()
with random_tool.scope():
a = tf.random.uniform(shape=(3,1))
a = a * 3
b = tf.random.uniform(shape=(3,3))
b = b * 3
c = tf.random.uniform(shape=(3,3))
c = c * 3
a, b, c
# Demonstrate that the generated random numbers match
np.testing.assert_allclose(graph_a, a.numpy(), **tol_dict)
np.testing.assert_allclose(graph_b, b.numpy(), **tol_dict)
np.testing.assert_allclose(graph_c, c.numpy(), **tol_dict)
"""
Explanation: The numbers match between TF1.x and TF2 when you remove sources of randomness, and the TF2-compatible InceptionResnetV2 layer passes the test.
If you are observing the results diverging for your own models, you can use printing or pdb and interactive debugging to identify where and why the results start to diverge. Eager execution can make this significantly easier. You can also use an ablation approach to run only small portions of the model on fixed intermediate inputs and isolate where the divergence happens.
Conveniently, many slim nets (and other models) also expose intermediate endpoints that you can probe.
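As a sketch of that ablation approach (a hypothetical helper, assuming both models expose the same endpoint names in the same order), you can walk the endpoint dictionaries and report the first one whose values diverge:

```python
import numpy as np

def first_diverging_endpoint(tf1_endpoints, tf2_endpoints, rtol=1e-06, atol=1e-05):
    """Return the name of the first endpoint whose values differ, or None."""
    for name in tf1_endpoints:
        if not np.allclose(tf1_endpoints[name], tf2_endpoints[name],
                           rtol=rtol, atol=atol):
            return name
    return None

# Toy check with fabricated endpoint activations:
a = {'Conv2d_1a': np.zeros(4), 'Mixed_5b': np.ones(4)}
b = {'Conv2d_1a': np.zeros(4), 'Mixed_5b': np.ones(4) + 1e-02}
print(first_diverging_endpoint(a, b))  # -> Mixed_5b
```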
Step 4: Align random number generation, check numerical equivalence in both training and inference
The final step is to verify that the TF2 model numerically matches the TF1.x model, even when accounting for random number generation in variable initialization and in the forward pass itself (such as dropout layers during the forward pass).
You can do this by using the testing tool below to make random number generation semantics match between TF1.x graphs/sessions and eager execution.
TF1 legacy graphs/sessions and TF2 eager execution use different stateful random number generation semantics.
In tf.compat.v1.Sessions, if no seeds are specified, the random number generation depends on how many operations are in the graph at the time when the random operation is added, and how many times the graph is run. In eager execution, stateful random number generation depends on the global seed, the operation random seed, and how many times the operation with the given random seed is run. See
tf.random.set_seed for more info.
The following v1.keras.utils.DeterministicRandomTestTool class provides a context manager scope() that can make stateful random operations use the same seed across both TF1 graphs/sessions and eager execution.
The tool provides two testing modes:
1. constant which uses the same seed for every single operation no matter how many times it has been called and,
2. num_random_ops which uses the number of previously-observed stateful random operations as the operation seed.
This applies both to the stateful random operations used for creating and initializing variables, and to the stateful random operations used in computation (such as for dropout layers).
Generate three random tensors to show how to use this tool to make stateful random number generation match between sessions and eager execution.
End of explanation
"""
np.testing.assert_allclose(b.numpy(), c.numpy(), **tol_dict)
"""
Explanation: However, notice that in constant mode, because b and c were generated with the same seed and have the same shape, they will have exactly the same values.
End of explanation
"""
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
graph = tf.Graph()
with graph.as_default(), tf.compat.v1.Session(graph=graph) as sess:
a = tf.random.uniform(shape=(3,1))
a = a * 3
b = tf.random.uniform(shape=(3,3))
b = b * 3
c = tf.random.uniform(shape=(3,3))
c = c * 3
graph_a, graph_b, graph_c = sess.run([a, b, c])
graph_a, graph_b, graph_c
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
a = tf.random.uniform(shape=(3,1))
a = a * 3
b = tf.random.uniform(shape=(3,3))
b = b * 3
c = tf.random.uniform(shape=(3,3))
c = c * 3
a, b, c
# Demonstrate that the generated random numbers match
np.testing.assert_allclose(graph_a, a.numpy(), **tol_dict)
np.testing.assert_allclose(graph_b, b.numpy(), **tol_dict )
np.testing.assert_allclose(graph_c, c.numpy(), **tol_dict)
# Demonstrate that with the 'num_random_ops' mode,
# b & c took on different values even though
# their generated shape was the same
assert not np.allclose(b.numpy(), c.numpy(), **tol_dict)
"""
Explanation: Trace order
If you are worried about some random numbers matching in constant mode reducing your confidence in your numerical equivalence test (for example if several weights take on the same initializations), you can use the num_random_ops mode to avoid this. In the num_random_ops mode, the generated random numbers will depend on the ordering of random ops in the program.
End of explanation
"""
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
a = tf.random.uniform(shape=(3,1))
a = a * 3
b = tf.random.uniform(shape=(3,3))
b = b * 3
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
b_prime = tf.random.uniform(shape=(3,3))
b_prime = b_prime * 3
a_prime = tf.random.uniform(shape=(3,1))
a_prime = a_prime * 3
assert not np.allclose(a.numpy(), a_prime.numpy())
assert not np.allclose(b.numpy(), b_prime.numpy())
"""
Explanation: However, notice that in this mode random generation is sensitive to program order, and so the following generated random numbers do not match.
End of explanation
"""
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
print(random_tool.operation_seed)
a = tf.random.uniform(shape=(3,1))
a = a * 3
print(random_tool.operation_seed)
b = tf.random.uniform(shape=(3,3))
b = b * 3
print(random_tool.operation_seed)
"""
Explanation: To allow for debugging variations due to tracing order, DeterministicRandomTestTool in num_random_ops mode allows you to see how many random operations have been traced with the operation_seed property.
End of explanation
"""
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
print(random_tool.operation_seed)
a = tf.random.uniform(shape=(3,1))
a = a * 3
print(random_tool.operation_seed)
b = tf.random.uniform(shape=(3,3))
b = b * 3
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
random_tool.operation_seed = 1
b_prime = tf.random.uniform(shape=(3,3))
b_prime = b_prime * 3
random_tool.operation_seed = 0
a_prime = tf.random.uniform(shape=(3,1))
a_prime = a_prime * 3
np.testing.assert_allclose(a.numpy(), a_prime.numpy(), **tol_dict)
np.testing.assert_allclose(b.numpy(), b_prime.numpy(), **tol_dict)
"""
Explanation: If you need to account for varying trace order in your tests, you can even set the auto-incrementing operation_seed explicitly. For example, you can use this to make random number generation match across two different program orders.
End of explanation
"""
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
random_tool.operation_seed = 1
b_prime = tf.random.uniform(shape=(3,3))
b_prime = b_prime * 3
random_tool.operation_seed = 0
a_prime = tf.random.uniform(shape=(3,1))
a_prime = a_prime * 3
try:
c = tf.random.uniform(shape=(3,1))
raise RuntimeError("An exception should have been raised before this, " +
"because the auto-incremented operation seed will " +
"overlap an already-used value")
except ValueError as err:
print(err)
"""
Explanation: However, DeterministicRandomTestTool disallows reusing already-used operation seeds, so make sure the auto-incremented sequences cannot overlap. This is because eager execution generates different numbers for follow-on usages of the same operation seed while TF1 graphs and sessions do not, so raising an error helps keep session and eager stateful random number generation in line.
End of explanation
"""
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
graph = tf.Graph()
with graph.as_default(), tf.compat.v1.Session(graph=graph) as sess:
height, width = 299, 299
num_classes = 1000
inputs = tf.ones( (1, height, width, 3))
out, endpoints = inception_resnet_v2(inputs, num_classes, is_training=False)
# Initialize the variables
sess.run(tf.compat.v1.global_variables_initializer())
# Grab the outputs & regularization loss
reg_losses = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.REGULARIZATION_LOSSES)
tf1_regularization_loss = sess.run(tf.math.add_n(reg_losses))
tf1_output = sess.run(out)
print("Regularization loss:", tf1_regularization_loss)
height, width = 299, 299
num_classes = 1000
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
model = InceptionResnetV2(num_classes)
inputs = tf.ones((1, height, width, 3))
tf2_output, endpoints = model(inputs, training=False)
# Grab the regularization loss as well
tf2_regularization_loss = tf.math.add_n(model.losses)
print("Regularization loss:", tf2_regularization_loss)
# Verify that the regularization loss and output both match
# when using the DeterministicRandomTestTool:
np.testing.assert_allclose(tf1_regularization_loss, tf2_regularization_loss.numpy(), **tol_dict)
np.testing.assert_allclose(tf1_output, tf2_output.numpy(), **tol_dict)
"""
Explanation: Verifying Inference
You can now use the DeterministicRandomTestTool to make sure the InceptionResnetV2 model matches in inference, even when using the random weight initialization. For a stronger test condition due to matching program order, use the num_random_ops mode.
End of explanation
"""
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
graph = tf.Graph()
with graph.as_default(), tf.compat.v1.Session(graph=graph) as sess:
height, width = 299, 299
num_classes = 1000
inputs = tf.ones( (1, height, width, 3))
out, endpoints = inception_resnet_v2(inputs, num_classes, is_training=True)
# Initialize the variables
sess.run(tf.compat.v1.global_variables_initializer())
# Grab the outputs & regularization loss
reg_losses = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.REGULARIZATION_LOSSES)
tf1_regularization_loss = sess.run(tf.math.add_n(reg_losses))
tf1_output = sess.run(out)
print("Regularization loss:", tf1_regularization_loss)
height, width = 299, 299
num_classes = 1000
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
model = InceptionResnetV2(num_classes)
inputs = tf.ones((1, height, width, 3))
tf2_output, endpoints = model(inputs, training=True)
# Grab the regularization loss as well
tf2_regularization_loss = tf.math.add_n(model.losses)
print("Regularization loss:", tf2_regularization_loss)
# Verify that the regularization loss and output both match
# when using the DeterministicRandomTestTool
np.testing.assert_allclose(tf1_regularization_loss, tf2_regularization_loss.numpy(), **tol_dict)
np.testing.assert_allclose(tf1_output, tf2_output.numpy(), **tol_dict)
"""
Explanation: Verifying Training
Because DeterministicRandomTestTool works for all stateful random operations (including both weight initialization and computation such as dropout layers), you can use it to verify the models match in training mode as well. You can again use the num_random_ops mode because the program order of the stateful random ops matches.
End of explanation
"""
random_tool = v1.keras.utils.DeterministicRandomTestTool()
with random_tool.scope():
graph = tf.Graph()
with graph.as_default(), tf.compat.v1.Session(graph=graph) as sess:
height, width = 299, 299
num_classes = 1000
inputs = tf.ones( (1, height, width, 3))
out, endpoints = inception_resnet_v2(inputs, num_classes, is_training=True)
# Initialize the variables
sess.run(tf.compat.v1.global_variables_initializer())
# Get the outputs & regularization losses
reg_losses = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.REGULARIZATION_LOSSES)
tf1_regularization_loss = sess.run(tf.math.add_n(reg_losses))
tf1_output = sess.run(out)
print("Regularization loss:", tf1_regularization_loss)
height, width = 299, 299
num_classes = 1000
random_tool = v1.keras.utils.DeterministicRandomTestTool()
with random_tool.scope():
keras_input = tf.keras.Input(shape=(height, width, 3))
layer = InceptionResnetV2(num_classes)
model = tf.keras.Model(inputs=keras_input, outputs=layer(keras_input))
inputs = tf.ones((1, height, width, 3))
tf2_output, endpoints = model(inputs, training=True)
# Get the regularization loss
tf2_regularization_loss = tf.math.add_n(model.losses)
print("Regularization loss:", tf2_regularization_loss)
# Verify that the regularization loss and output both match
# when using the DeterministicRandomTestTool
np.testing.assert_allclose(tf1_regularization_loss, tf2_regularization_loss.numpy(), **tol_dict)
np.testing.assert_allclose(tf1_output, tf2_output.numpy(), **tol_dict)
"""
Explanation: You have now verified that the InceptionResnetV2 model running eagerly with decorators around tf.keras.layers.Layer numerically matches the slim network running in TF1 graphs and sessions.
Note: When using the DeterministicRandomTestTool in num_random_ops mode, it is suggested you directly use and call the tf.keras.layers.Layer method decorator when testing for numerical equivalence. Embedding it within a Keras functional model or other Keras models can produce differences in stateful random operation tracing order that can be tricky to reason about or match exactly when comparing TF1.x graphs/sessions and eager execution.
For example, calling the InceptionResnetV2 layer directly with training=True interleaves variable initialization with the dropout order according to the network creation order.
On the other hand, first putting the tf.keras.layers.Layer decorator in a Keras functional model and only then calling the model with training=True is equivalent to initializing all variables then using the dropout layer. This produces a different tracing order and a different set of random numbers.
However, the default mode='constant' is not sensitive to these differences in tracing order and will pass without extra work even when embedding the layer in a Keras functional model.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.23/_downloads/72bb0e260a352fd7c21fee1dd2f83d79/decoding_spoc_CMC.ipynb | bsd-3-clause | # Author: Alexandre Barachant <alexandre.barachant@gmail.com>
# Jean-Remi King <jeanremi.king@gmail.com>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne import Epochs
from mne.decoding import SPoC
from mne.datasets.fieldtrip_cmc import data_path
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_predict
# Define parameters
fname = data_path() + '/SubjectCMC.ds'
raw = mne.io.read_raw_ctf(fname)
raw.crop(50., 250.) # crop for memory purposes
# Filter muscular activity to only keep high frequencies
emg = raw.copy().pick_channels(['EMGlft']).load_data()
emg.filter(20., None, fir_design='firwin')
# Filter MEG data to focus on beta band
raw.pick_types(meg=True, ref_meg=True, eeg=False, eog=False).load_data()
raw.filter(15., 30., fir_design='firwin')
# Build epochs as sliding windows over the continuous raw file
events = mne.make_fixed_length_events(raw, id=1, duration=.250)
# Epoch length is 1.5 second
meg_epochs = Epochs(raw, events, tmin=0., tmax=1.500, baseline=None,
detrend=1, decim=8)
emg_epochs = Epochs(emg, events, tmin=0., tmax=1.500, baseline=None)
# Prepare classification
X = meg_epochs.get_data()
y = emg_epochs.get_data().var(axis=2)[:, 0] # target is EMG power
# Classification pipeline with SPoC spatial filtering and Ridge Regression
spoc = SPoC(n_components=2, log=True, reg='oas', rank='full')
clf = make_pipeline(spoc, Ridge())
# Define a two fold cross-validation
cv = KFold(n_splits=2, shuffle=False)
# Run cross validation
y_preds = cross_val_predict(clf, X, y, cv=cv)
# Plot the True EMG power and the EMG power predicted from MEG data
fig, ax = plt.subplots(1, 1, figsize=[10, 4])
times = raw.times[meg_epochs.events[:, 0] - raw.first_samp]
ax.plot(times, y_preds, color='b', label='Predicted EMG')
ax.plot(times, y, color='r', label='True EMG')
ax.set_xlabel('Time (s)')
ax.set_ylabel('EMG Power')
ax.set_title('SPoC MEG Predictions')
plt.legend()
mne.viz.tight_layout()
plt.show()
"""
Explanation: Continuous Target Decoding with SPoC
Source Power Comodulation (SPoC) :footcite:DahneEtAl2014 allows one to identify
the composition of
orthogonal spatial filters that maximally correlate with a continuous target.
SPoC can be seen as an extension of CSP to continuous variables.
Here, SPoC is applied to decode the (continuous) fluctuation of an
electromyogram from MEG beta activity, using data from the
Cortico-Muscular Coherence example of FieldTrip
<http://www.fieldtriptoolbox.org/tutorial/coherence>_
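The continuous target in the code above is the EMG "power", computed as the per-epoch variance of the filtered EMG signal. A minimal numpy sketch of that reduction, with fabricated data in MNE's (n_epochs, n_channels, n_times) layout:

```python
import numpy as np

rng = np.random.default_rng(0)
# Fabricated EMG epochs: 5 epochs, 1 channel, 500 time samples
emg_data = rng.standard_normal((5, 1, 500))
y = emg_data.var(axis=2)[:, 0]  # variance over time, first (only) channel
print(y.shape)  # -> (5,)
```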
End of explanation
"""
spoc.fit(X, y)
spoc.plot_patterns(meg_epochs.info)
"""
Explanation: Plot the contributions to the detected components (i.e., the forward model)
End of explanation
"""
|
oroszl/szamprob | notebooks/Package04/plotly.ipynb | gpl-3.0 | from plotly import *
from plotly.offline import *
init_notebook_mode()
"""
Explanation: ☠ Creating figures with the plotly module
Besides the matplotlib module we have used so far, many other function packages exist for making figures. The advantage of the plotly module, briefly introduced below, is that even with the default settings it produces elegant, interactive figures.
The syntax of plotly differs somewhat from what we have seen so far, however, so we aim to give a short introduction to it through a few examples.
The offline and online modes of plotly
Plotly is fundamentally a web service: after a free registration and login, anyone can upload data, process it into interactive figures, and then share the resulting figures. This approach can greatly ease collaboration within a group, since group members have access not only to the figures but also to the data themselves and to the code snippets that produced the figures. So if someone only wants to change the color of the lines in a figure, they do not have to e-mail the figure's original creator to do it, but can solve it themselves by rewriting the code.
Sometimes, however, the data under study are by their nature not suitable for uploading to a public storage space, and using private storage requires a Pro account, which is understandably not free. It is also a common problem that the data files to be processed are so large that uploading them (or moving them at all) is impractical. In such and similar cases we need a "local" plotting option with which the data do not go to plotly, but plotly comes to the data. The Offline version of Plotly was created to solve this: the figures and data are stored not on a central server but on the local machine, or in the notebook itself. The examples below use this offline version.
(Besides Python, plotly figures can also be generated in other languages, for example R or JavaScript. We do not discuss these below.)
Importing the plotly module and loading the functions of the offline version
End of explanation
"""
import numpy as np
"""
Explanation: Importing other useful, already familiar modules
End of explanation
"""
x_pontok = np.linspace(0,2*np.pi,10)
y_pontok = np.sin(x_pontok)
"""
Explanation: Creating a simple figure
Generating the data
As we saw with the matplotlib module, generating any figure starts with loading or producing the data. Consider our usual example: plot the sin(x) function between 0 and 2π.
End of explanation
"""
trace_sin_gorbe = graph_objs.Scatter(x=x_pontok, y=y_pontok, mode='lines')
trace_sin_gorbe
"""
Explanation: Building the figure
plotly behaves a bit differently from what we saw with matplotlib. Here a figure is essentially defined by various objects and nested dicts. Each property of the figure (e.g. colors, the grid, the data, etc.) has a corresponding key-value pair.
plotly sorts these properties into two categories: traces and the layout. Traces are objects that describe one data series on the figure, for example a Scatter or a Heatmap object. A figure may of course contain several traces, for example when we want to plot two kinds of data series. Traces can also be combined within a figure: a single figure can show scattered measurement points together with a bar chart. The layout properties are formatting instructions for the figure as a whole, such as the figure title, the background color, the axis labels, and further annotations (text).
The plotly documentation provides a lot of help with creating figures.
1. Creating the traces
When defining a figure, we therefore first have to turn the data into the appropriate traces. In this concrete example we want a figure where the y_pontok array is plotted against the x_pontok array, with the points connected by a continuous line. We can do this with the following command:
End of explanation
"""
trace_sin_gorbe = {'mode': 'lines',
'type' : 'scatter',
'x': x_pontok,
'y': y_pontok}
"""
Explanation: So here we created a "graph object", which happens to be of Scatter type. The printout shows that although we defined it as an object, it is really a dict in which the 'type' key holds the value 'scatter'. So the following method would also work for defining the trace above; specifying it as an object is just for our convenience.
End of explanation
"""
adatok_sin_gorbe = [trace_sin_gorbe]
"""
Explanation: Note, however, that if we choose the dict form, the key-value pairs are separated by colons, while in the graph-object form we use equals signs.
The 'lines' value given for the mode key means we want the data points connected by a line on the figure. If we only want to scatter the points onto the figure, write 'markers' there instead.
Finally, put the generated trace objects into the adatok_sin_gorbe list. Since this example needs only a single trace, this step could be skipped, but with several traces it is best to collect them all into one list:
End of explanation
"""
layout_sin_gorbe = graph_objs.Layout(title='Ez az ábra címe',
xaxis=graph_objs.XAxis(title='x'),
yaxis=graph_objs.YAxis(title='sin(x)'))
layout_sin_gorbe
"""
Explanation: 2. Defining the layout
Let us give our figure a title and put labels on the axes. We do this by specifying the Layout object as follows:
End of explanation
"""
layout_sin_gorbe = {'title': 'Ez az ábra címe',
'xaxis': {'title': 'x'},
'yaxis': {'title': 'sin(x)'}}
"""
Explanation: The xaxis variable of the Layout object is an XAxis object, in which the title variable can be set directly. We proceed similarly with the yaxis variable. Just as with the trace, however, these objects can also be replaced by nested dicts:
End of explanation
"""
figure_sin_gorbe = graph_objs.Figure(data=adatok_sin_gorbe, layout=layout_sin_gorbe)
figure_sin_gorbe
"""
Explanation: 3. Creating the Figure object
Once all the necessary objects are ready (the appropriate traces for the data and the Layout for formatting the figure), we can combine them to define the Figure object:
End of explanation
"""
iplot(figure_sin_gorbe)
"""
Explanation: Just like the objects above, this too is "only" a nested dict object, which we could have defined in the traditional way.
4. Drawing the figure
The Figure object prepared this way can now be plotted:
End of explanation
"""
from plotly.graph_objs import *
x_pontok_uj = np.linspace(0,2*np.pi,50)
y_pontok_sin = np.sin(x_pontok_uj)
y_pontok_cos = np.cos(x_pontok_uj)
trace_sin_gorbe = Scatter(x=x_pontok_uj, y=y_pontok_sin, mode='lines')
trace_cos_gorbe = Scatter(x=x_pontok_uj, y=y_pontok_cos, mode='lines')
adatok = [trace_sin_gorbe, trace_cos_gorbe]
layout_sin_gorbe = Layout(title='Ez az új ábra címe',
xaxis=XAxis(title='x'),
yaxis=YAxis(title='sin(x), cos(x)'))
figure_uj = Figure(data=adatok, layout=layout_sin_gorbe)
iplot(figure_uj)
"""
Explanation: In summary:
The above may seem complicated, but we can in fact produce the figure in a few lines. We saw that graph objects are always defined through the graph_objs submodule, so it is worth importing all functions of the whole submodule; then we no longer have to write out the graph_objs. prefix.
The figure above is of course very "angular" for a real sine curve, but this is easy to fix: increase the number of sample points and plot it again, this time together with the cosine curve. Now we have to create two trace objects.
End of explanation
"""
trace_sin_gorbe = Scatter(x=x_pontok_uj, y=y_pontok_sin, mode='lines')
trace_cos_gorbe = Bar(x=x_pontok_uj, y=y_pontok_cos)
adatok = [trace_sin_gorbe, trace_cos_gorbe]
layout_sin_gorbe = Layout(title='Ez az oszlopos ábra címe',
xaxis=XAxis(title='x'),
yaxis=YAxis(title='sin(x), cos(x)'))
figure_oszlop = Figure(data=adatok, layout=layout_sin_gorbe)
iplot(figure_oszlop)
"""
Explanation: What nice smooth curves we got! Let us check whether this is really so. Try zooming in on one of the peaks using the buttons in the upper right corner of the figure. On such a scale even these curves turn out to be angular.
A great advantage of plotly is this kind of interactive exploration. If a figure contains very many measurement points, we can only tell them apart after zooming in far enough. Clicking the line next to the trace 0 and trace 1 labels on the right temporarily hides the corresponding curve from the figure; clicking again shows it once more.
Creating a bar chart
If we want to draw the figure above differently, say the cos function as a bar chart instead of a line, we only have to change the type of the trace:
End of explanation
"""
data_file = np.loadtxt('data/plotly_3D.txt')
x_tengely = data_file[:,0]
y_tengely = data_file[:,1]
z_tengely = data_file[:,2]
"""
Explanation: Creating a 3D figure
For the next three-dimensional figure, the coordinates of the points to be plotted are in the plotly_3D.txt text file. The data come from here; data files useful for making similar figures can be found here.
In the data file the first column is the x, the second the y, and the third the z coordinate. First read in the data. For this we use the loadtxt function, found in the numpy module.
End of explanation
"""
trace_3D = Scatter3d(x=x_tengely, y=y_tengely, z=z_tengely, mode='markers', marker = dict(size=2))
adatok_3D = [trace_3D]
layout_3D = Layout(width=900,height=500,scene=dict(aspectmode='manual', aspectratio = dict(x=0.2, y=1, z=2/3)))
fig_3D = Figure(data=adatok_3D, layout=layout_3D)
iplot(fig_3D)
"""
Explanation: After this, only the plotting remains. Now we need not the Scatter object used so far, but its 3D counterpart, Scatter3d.
End of explanation
"""
r = 3
phi = np.linspace(0,2*np.pi,1000)
x = r*np.cos(phi)
y = r*np.sin(phi)
"""
Explanation: Zoom and rotate the figure as you like. Try to figure out what the aspectratio keyword appearing inside the Layout object might mean.
☠ A colored circle
Plot a circle of radius r = 3 units with scatter points, choosing the color of each point according to which quadrant it lies in.
To draw the circle, we first have to generate the (x, y) coordinate pairs that trace it out. The simplest way to do this is the following parameterization:
$$x = r\cos(\phi)$$
$$y = r\sin(\phi)$$
where $\phi$ is the angle measured from the x-axis. Choosing the values of $\phi$ uniformly on the $(0,2\pi)$ interval, the arrays containing the values of $x$ and $y$ are obtained as above:
End of explanation
"""
trace_1 = Scatter(x=x[(x<0) * (y<0)], y=y[(x<0) * (y<0)], mode='markers', marker=dict(color='red'))
trace_2 = Scatter(x=x[(x>=0) * (y<0)], y=y[(x>=0) * (y<0)], mode='markers', marker=dict(color='blue'))
trace_3 = Scatter(x=x[(x<0) * (y>=0)], y=y[(x<0) * (y>=0)], mode='markers', marker=dict(color='green'))
trace_4 = Scatter(x=x[(x>=0) * (y>=0)], y=y[(x>=0) * (y>=0)], mode='markers', marker=dict(color='orange'))
data = [trace_1, trace_2, trace_3, trace_4]
layout = Layout(title='Ez egy színes kör', width=500, height=500)
fig = Figure(data=data, layout=layout)
iplot(fig)
"""
Explanation: Depending on the signs of the given $x$ and $y$, the points fall into 4 categories (in other words, depending on which quadrant the point lies in). We always assign the axes themselves to the quadrant that lies clockwise from them.
One possible way to plot these in different colors is to generate four different trace objects as follows:
End of explanation
"""
|
damontallen/IPython-quick-ref-sheets | SVG_Table_Builder.ipynb | mit | cd Git_quickref_project/
"""
Explanation: Using Custom Magic and SVG Table Builder Classes to turn %quickref Magic into a SVG Table.
This notebook uses the SVG table classes here to build SVG tables of the %quickref text. The github project containing the external files used is here.
End of explanation
"""
from IPython.core.magic import (magics_class, line_magic)
from IPython.core.magics.basic import BasicMagics # defines _magic_docs; BasicMagics already inherits from Magics
@magics_class
class MyMagics(BasicMagics):
@line_magic
def quickref_text(self, line):
""" Return the quickref text to be assigned to a variable """
from IPython.core.usage import quick_reference
qr = quick_reference + self._magic_docs(brief=True)
return qr
ip = get_ipython()
ip.register_magics(MyMagics)
quickref_text = %quickref_text
"""
Explanation: Grab the %quickref Text with a New Magic Command
Creating a custom magic command to capture the documentation text.
End of explanation
"""
from Build_dict import build_dict #to build a dictionary out of the %quickref text file
lines = quickref_text.split('\n')
Quick_ref_dic = build_dict(lines) #this parses the text and builds an ordered dict out of it.
#make a list of all the headings
heading =list(Quick_ref_dic.keys())
"""
Explanation: Convert the %quickref Text to a Dictionary
End of explanation
"""
def quick_dict_to_SVG(Table_dict, width=500, L_width=200, x_location=0, y_location=0):
title_color = (173,216,230) #lightblue
header_color = (213,224,197) #a grey green
font_family="Arial"
font_size=12 #font_size=font_size
header_font_size = 14
line_width=1 #, line_width=line_width
#good site for color translations: http://www.yellowpipe.com/yis/tools/hex-to-rgb/color-converter.php
TL = [x_location, y_location]
row_horz_shift = 5 #shift the row text over to line up with the header text
display_text = ''
for header, values in Table_dict.items():
entries = list(values.keys())
#Types = values['_type_']
heading = True #default since most row groups have a heading
#There are at least one entry unter each header
for entry, explained in values.items():
if entry =='_type_':
#This entry indicates what type of group of rows this is.
#Select the header color
if explained == 'Heading':
bg = header_color
elif explained == 'Title':
bg = title_color
else: #for comment or other headerless row groups
heading = False #
if heading: #Build a header for the table
head=Table_Header(text=header, width=width, TL=list(TL), background=bg, font_family=font_family, size=header_font_size, line_width=line_width)
bg=header_color
display_text += head.get_SVG_header() #add the header
                TL[1]=head.bottom #Move the top left coordinates
spacer = Table_rows(font_size=3, width=width, top_left=list(TL), line_width=line_width)
spacer.set_count(1)
display_text += spacer.get_SVG_rows() #add a spacer after the header
                TL[1]=spacer.bottom #Move the top left coordinates
elif "_Comment_starts_at_" in entry: #This is a block of comments
#make a row with one column and multiple lines of text
txt = clean_text('\n'.join(explained))
text_list = [[txt]] #
rows = Table_rows(top_left=list(TL),width=width,font_family=font_family,font_size=font_size, line_width=line_width)
rows.set_text_list(text_list)
rows.x_shift=row_horz_shift
display_text += rows.get_SVG_rows()
                TL[1]=rows.bottom #Move the top left coordinates
elif "_Multiline_Flag_" in entry: #this is an example that spans multiple lines
#Make a row with two columns and multiple lines of text
multi_left=[]
multi_right=[]
#Multiline row entries are lists of tuple pairs
for line in explained:
left = clean_text(line[0])
multi_left.append(left)
right = clean_text(line[1])
multi_right.append(right)
multi_left_text = '\n'.join(multi_left)
multi_right_text = '\n'.join(multi_right)
text_list = [[multi_left_text,multi_right_text]]
rows = Table_rows(top_left=list(TL),width=width,font_family=font_family,font_size=font_size, line_width=line_width)
rows.set_text_list(text_list)
rows.column_locations=[0,L_width]
rows.x_shift=row_horz_shift
display_text += rows.get_SVG_rows()
                TL[1]=rows.bottom #Move the top left coordinates
else: #this must be rows of examples and explanation
rows = Table_rows(top_left=list(TL),width=width,font_family=font_family,font_size=font_size, line_width=line_width)
text_list = [[clean_text(entry), clean_text(explained)]]
rows.set_text_list(text_list)
rows.column_locations=[0,L_width]
rows.x_shift=row_horz_shift
display_text += rows.get_SVG_rows()
                TL[1]=rows.bottom #Move the top left coordinates
#Add a spacer at the bottom of the row group
bot_spacer = Table_rows(font_size=3, width=width, top_left=list(TL), line_width=line_width)
bot_spacer.set_count(1)
display_text += bot_spacer.get_SVG_rows()
        TL[1]=bot_spacer.bottom #Move the top left coordinates
return (display_text, width+2*line_width+TL[0], TL[1]+2*line_width)
"""
Explanation: Dictionary to SVG Table
End of explanation
"""
from IPython.display import SVG #to display the results
from collections import OrderedDict as Or_dict
#load the SVG Table classes
%run SVG_Table_Classes #used instead of import during editing
#Set the table size
width=552
Left_column_width=170
#Build the SVG text
start_stop = 4 #number of headings to have on the left
Table_dict = Or_dict()
for i in range(0,start_stop): #Grab some headings
Table_dict[heading[i]]=Quick_ref_dic[heading[i]]
display_text, right, bottom = quick_dict_to_SVG(Table_dict, width=width, L_width=Left_column_width)
running_text = display_text
Bottom = bottom
#print("Right = %d"%right)
Table_dict = Or_dict()
for i in range(start_stop,8): #Grab some headings
Table_dict[heading[i]]=Quick_ref_dic[heading[i]]
width=525
Left_column_width=130
display_text, right, bottom = quick_dict_to_SVG(Table_dict, width=width, L_width=Left_column_width, x_location=right+10)
Bottom =max(Bottom , bottom)
running_text += display_text
Text=Set_SVG_view(right, Bottom, running_text)
"""
Explanation: Make a table of the Basic commands
End of explanation
"""
#Save the result
with open("Basic_Help.svg",'w') as f:
f.write(Text)
#Display the results
SVG(Text)
"""
Explanation: Save and display the results...
End of explanation
"""
#load the SVG Table classes
%run SVG_Table_Classes #used instead of import during editing (not needed a second time but left in for clarification)
#Set the table lengths
stops = [32+4+5,65+8+10]
#Build the SVG text
#Make a temporary dictionary of the magic heading entries
magic_dict = Or_dict()
magic_dict[heading[-1]]=Quick_ref_dic[heading[-1]]
entries = list(magic_dict[heading[-1]].keys())
#First table
Table_dict = Or_dict()
magic_dict_list = Or_dict()
for i in range(0,stops[0]): #grab some of the magic entries
magic_dict_list[entries[i]] = magic_dict[heading[-1]][entries[i]]
#make a table dictionary to display
Table_dict[heading[-1]]=magic_dict_list
width=585
Left_column_width=145
display_text, right, bottom = quick_dict_to_SVG(Table_dict, width=width, L_width=Left_column_width)
Bottom = bottom #keep track of the bottom of the view port
running_text = display_text #start a running total of the display text
#Second table
Table_dict = Or_dict()
magic_dict_list = Or_dict()
for i in range(stops[0],stops[1]): #grab some of the magic entries
magic_dict_list[entries[i]] = magic_dict[heading[-1]][entries[i]]
#make a table dictionary to display
Table_dict["Continuation"]=magic_dict_list
Table_dict["Continuation"]['_type_']="continuation"
width=600
Left_column_width=110#95
display_text, right, bottom = quick_dict_to_SVG(Table_dict, width=width, L_width=Left_column_width, x_location=right+10)
Bottom =max(Bottom , bottom) #keep track of the bottom of the view port
running_text += display_text #Add the display text to the running total of text
#Third table
Table_dict = Or_dict()
magic_dict_list = Or_dict()
last = len(entries)
for i in range(stops[1],last): #grab some of the magic entries
magic_dict_list[entries[i]] = magic_dict[heading[-1]][entries[i]]
#make a table dictionary to display
Table_dict["Continuation"]=magic_dict_list
Table_dict["Continuation"]['_type_']="continuation"
width=575
Left_column_width=90
display_text, right, bottom = quick_dict_to_SVG(Table_dict, width=width, L_width=Left_column_width, x_location=right+10)
Bottom =max(Bottom , bottom) #keep track of the bottom of the view port
running_text += display_text #Add the display text to the running total of text
Text=Set_SVG_view(right, Bottom, running_text)
"""
Explanation: Make a table of the Magic commands
End of explanation
"""
#Save the result
with open("Magic_only.svg",'w') as f:
f.write(Text)
#Display the results
SVG(Text)
%quickref
"""
Explanation: Save and display the results...
End of explanation
"""
|
camm0991/ThesisProject | Scripts/02 Signal filtering/Signal filtering from csv file.ipynb | mit | from scipy.signal import butter
from scipy.signal import lfilter
from sklearn.preprocessing import StandardScaler
import random
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
"""
Explanation: Signal filtering
End of explanation
"""
def butter_bandpass(lowcut, highcut, fs, order=4):
nyq = 0.5 * fs
low = lowcut / nyq
high = highcut / nyq
b, a = butter(order, [low, high], btype='band')
return b, a
b, a = butter_bandpass(0.5, 30.0, 128.0)
"""
Explanation: Filter definition
We define our filter here, specifying the low cut (0.5 Hz), the high cut (30 Hz), the sampling frequency (128 Hz) and the order of the filter.
based on: http://scipy-cookbook.readthedocs.io/items/ButterworthBandpass.html
End of explanation
"""
path = "../../Dataset/Test/EEG_Test_Sorted.csv"
file_name = "Filtered_file.csv"
df = pd.read_csv(path)
labels = ['AF3','F7','F3','FC5','T7','P7','O1','O2','P8','T8','FC6','F4','F8','AF4']
filtered_dataset = pd.DataFrame(columns=['AF3','F7','F3','FC5','T7','P7','O1','O2','P8','T8','FC6','F4','F8','AF4','Class'])
for i in labels:
temp = lfilter(b, a, df[i])
filtered_dataset[i] = temp
filtered_dataset['Class'] = df['Class']
filtered_dataset.to_csv(file_name, index=False)
"""
Explanation: Filter application
Here we filter each one of the signal channels and add it to a new dataframe that will be written into a csv file.
End of explanation
"""
|
MATH497project/MATH497-DiabeticRetinopathy | person_profile_exploration/person_profile.ipynb | mit | temp=list(data['family_hist_list'][data['family_hist_list'].Relation.notnull()].Relation.drop_duplicates())
len(temp)
data['encounters'].head()
"""
Explanation: There are 443 different relationships
End of explanation
"""
# Create Date variable
from datetime import datetime
data['family_hist_list']['Date'] = [datetime.strftime(item, '%Y-%m-%d') for item in data['family_hist_list']['Date_Created']]
# Individual family history grouped by the relationship
# The date of collection could be omitted
family_hist_list = {k:[{'Relation':k1,
'History':[{'Code': a, 'Family_history': b} for a,b in zip(v1.Code, v1.Family_History)]}
for k1, v1 in v.groupby('Relation')]
for k,v in data['family_hist_list'].groupby('Person_Nbr')}
family_hist_list[109227]
"""
Explanation: Process family history
End of explanation
"""
# There is no person duplicated in demographics
len(data['demographics'].Person_Nbr.drop_duplicates()) == len(data['demographics'])
# Normalize zip code with only 5 digits
def clean_zip(zip):
if len(zip)<5:
return 'Null'
else:
return zip[:5]
data['demographics']['Zip'] = data['demographics'].Zip.map(lambda x: clean_zip(x))
data['demographics'].head()
# Null cases for zip code
data['demographics'][data['demographics'].Zip=='Null']
data['demographics'].to_pickle(path+'demographics_processed_Dan_20170304.pickle')
data['demographics']['Age']=data['demographics']['DOB'].map(lambda x: datetime.now().year - x.year)
demographics=data['demographics'].set_index('Person_Nbr')[['Age', 'Gender', 'Race', 'Ethnicity', 'Zip', 'Age_Censored']].T.to_dict()
demographics[109227]
# People in demographics have fully covered people in family_hist_list
set(demographics.keys())&set(family_hist_list.keys())==set(family_hist_list)
"""
Explanation: Process demographics
End of explanation
"""
# Create Date variable
#data['encounters']['Enc_Date'] = pd.to_datetime([datetime.strftime(item, '%Y-%m-%d') for item in data['encounters']['Enc_Timestamp']])
Enc_list = {k:sorted([{'Enc_Nbr': a, 'Enc_Date': b} for a,b in zip(v.Enc_Nbr, v.Enc_Timestamp)], key=lambda x:x['Enc_Date']) for k,v in data['encounters'].groupby('Person_Nbr')}
Enc_list[109227]
set(Enc_list.keys())&set(demographics.keys())==set(Enc_list)
# People in demographics have fully covered people in encouters
set(Enc_list)&set(family_hist_list) == set(family_hist_list)
# People with family history records must have encounter records; encounters fully cover family_hist_list
"""
Explanation: Process encounter list
End of explanation
"""
profile_full={}
for k,v in demographics.items():
profile_full[k]=v
    # patient may or may not have a family history
profile_full[k]['family_hist_list'] = {}
profile_full[k]['family_hist_list_count'] = 0
if k in family_hist_list.keys():
profile_full[k]['family_hist_list'] = family_hist_list[k]
profile_full[k]['family_hist_list_count'] = len(family_hist_list[k])
# patient may or may not have encounter records
profile_full[k]['Enc_list'] = {}
profile_full[k]['Enc_list_count'] = 0
profile_full[k]['Enc_list_span'] = 0
if k in Enc_list.keys():
profile_full[k]['Enc_list'] = Enc_list[k]
profile_full[k]['Enc_list_count'] = len(Enc_list[k])
profile_full[k]['Enc_list_span'] = datetime.now().year - int(datetime.strftime(datetime.date(Enc_list[k][0]['Enc_Date']), '%Y'))
profile_full[109227]
len(profile_full)
"""
Explanation: Merge into a dictionary of profile
End of explanation
"""
len(set(demographics)-set(family_hist_list))
# That is to remove 2975 patients
len(set(family_hist_list)&set(Enc_list))
# So that only 14044 patients left have both records
len(set(demographics)-set(Enc_list))
"""
Explanation: Shall we remove the profiles that have no history and no encounters?
End of explanation
"""
# Remove patients have no family or no encounter records
profile={}
for k,v in demographics.items():
if k in set(family_hist_list)&set(Enc_list):
profile[k]=v
profile[k]['family_hist_list'] = family_hist_list[k]
profile[k]['family_hist_list_count'] = len(family_hist_list[k])
profile[k]['Enc_list'] = Enc_list[k]
profile[k]['Enc_list_count'] = len(Enc_list[k])
        profile[k]['Enc_list_span'] = datetime.now().year - int(datetime.strftime(datetime.date(Enc_list[k][0]['Enc_Date']), '%Y'))
else:
continue
profile[109227]
len(profile)
"""
Explanation: We have 17019 patients in total. 510 of them have no encounter records. 2465 of them have no encounter records and no family records. If we want a profile in which everyone has both records, we need to remove all 2975 patients.
End of explanation
"""
# Remove patients have no encounter records
profile1={}
for k,v in demographics.items():
if k in set(Enc_list):
profile1[k]=v
profile1[k]['Enc_list'] = Enc_list[k]
profile1[k]['Enc_list_count'] = len(Enc_list[k])
        profile1[k]['Enc_list_span'] = datetime.now().year - int(datetime.strftime(datetime.date(Enc_list[k][0]['Enc_Date']), '%Y'))
profile1[k]['family_hist_list'] = {}
profile1[k]['family_hist_list_count'] = 0
if k in family_hist_list.keys():
profile1[k]['family_hist_list'] = family_hist_list[k]
profile1[k]['family_hist_list_count'] = len(family_hist_list[k])
else:
continue
profile1[109227]
len(profile1)
temp=pd.DataFrame.from_dict(profile1, orient='index')
temp.head()
temp.to_pickle(path+'person_profile_df.pickle')
"""
Explanation: (Or we can remove only the 510 patients that lack encounter records and omit the family part for now)
End of explanation
"""
data['SNOMED_problem_list'].head()
{k:list(v) for k,v in data['systemic_disease_list'].groupby('Person_Nbr')['Snomed_Code']}[109227]
{k:list(v) for k,v in data['SNOMED_problem_list'].groupby('Person_Nbr')['Concept_ID']}[109227]
"""
Explanation: Attempting to process the SNOMED code lists per person
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.13/_downloads/plot_lcmv_beamformer.ipynb | bsd-3-clause | # Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import numpy as np
import mne
from mne.datasets import sample
from mne.beamformer import lcmv
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
label_name = 'Aud-lh'
fname_label = data_path + '/MEG/sample/labels/%s.label' % label_name
subjects_dir = data_path + '/subjects'
"""
Explanation: Compute LCMV beamformer on evoked data
Compute LCMV beamformer solutions on an evoked dataset for three different choices
of source orientation and store the solutions in stc files for visualisation.
End of explanation
"""
event_id, tmin, tmax = 1, -0.2, 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True, proj=True)
raw.info['bads'] = ['MEG 2443', 'EEG 053']  # 2 bad channels
events = mne.read_events(event_fname)
# Set up pick list: EEG + MEG - bad channels (modify to your needs)
left_temporal_channels = mne.read_selection('Left-temporal')
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True, eog=True,
exclude='bads', selection=left_temporal_channels)
# Pick the channels of interest
raw.pick_channels([raw.ch_names[pick] for pick in picks])
# Re-normalize our empty-room projectors, so they are fine after subselection
raw.info.normalize_proj()
# Read epochs
proj = False # already applied
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
baseline=(None, 0), preload=True, proj=proj,
reject=dict(grad=4000e-13, mag=4e-12, eog=150e-6))
evoked = epochs.average()
forward = mne.read_forward_solution(fname_fwd, surf_ori=True)
# Compute regularized noise and data covariances
noise_cov = mne.compute_covariance(epochs, tmin=tmin, tmax=0, method='shrunk')
data_cov = mne.compute_covariance(epochs, tmin=0.04, tmax=0.15,
method='shrunk')
plt.close('all')
pick_oris = [None, 'normal', 'max-power']
names = ['free', 'normal', 'max-power']
descriptions = ['Free orientation', 'Normal orientation', 'Max-power '
'orientation']
colors = ['b', 'k', 'r']
for pick_ori, name, desc, color in zip(pick_oris, names, descriptions, colors):
stc = lcmv(evoked, forward, noise_cov, data_cov, reg=0.01,
pick_ori=pick_ori)
# View activation time-series
label = mne.read_label(fname_label)
stc_label = stc.in_label(label)
plt.plot(1e3 * stc_label.times, np.mean(stc_label.data, axis=0), color,
hold=True, label=desc)
plt.xlabel('Time (ms)')
plt.ylabel('LCMV value')
plt.ylim(-0.8, 2.2)
plt.title('LCMV in %s' % label_name)
plt.legend()
plt.show()
# Plot last stc in the brain in 3D with PySurfer if available
brain = stc.plot(hemi='lh', subjects_dir=subjects_dir,
initial_time=0.1, time_unit='s')
brain.show_view('lateral')
"""
Explanation: Get epochs
End of explanation
"""
|
rexthompson/data-512-a1 | hcds-a1-data-curation.ipynb | mit | import json
import matplotlib.pyplot as plt
import os
import pandas as pd
import requests
%matplotlib inline
"""
Explanation: English Wikipedia page views, 2008 - 2017
Here I retrieve, aggregate and visualize the number of monthly visitors to English Wikipedia from January 2008 through September 2017. I group the data by the nature of the visit, whether via the desktop website, or via mobile, which includes the mobile website and the mobile application. I also present total visits by month, which is simply the sum of monthly desktop and mobile visits. I visualize the results (see below) and save the output to a .csv file.
This work is meant to be fully reproducible. I welcome comments if any errors or hurdles are discovered during an attempted reproduction.
Setup
We run a few lines of code to set up the system before we get going.
End of explanation
"""
# create a dictionary to hold API response data
data_dict = dict()
# define headers to pass to the API call
headers={'User-Agent' : 'https://github.com/rexthompson', 'From' : 'rext@uw.edu'}
"""
Explanation: Data Ingest
The first step in all data projects is data retrieval. For this project we will be downloading data from Wikimedia's REST API. We'll pull from two endpoints:
Legacy Pagecounts API
Pageview API
Before we make our first call, we define an empty dictionary to hold the results. We also define a second dictionary with some contact information which we'll pass to the API calls when they are made.
End of explanation
"""
# define endpoint and access sites
endpoint = 'https://wikimedia.org/api/rest_v1/metrics/legacy/pagecounts/aggregate/{project}/{access-site}/{granularity}/{start}/{end}'
access_sites = ['desktop-site', 'mobile-site']
# repeat for each access site of interest
for access_site in access_sites:
# set filename for access-specific API call response JSON file
filename = 'data/raw/pagecounts_' + access_site + '_200807_201709.json'
# check if file already exists; load if so, create if not
if os.path.isfile(filename):
with open(filename) as json_data:
response = json.load(json_data)
print('loaded JSON data from ./' + filename)
else:
# define parameters
params = {'project' : 'en.wikipedia.org',
'access-site' : access_site, # [all-sites, desktop-site, mobile-site]
'granularity' : 'monthly', # [hourly, daily, monthly]
'start' : '2008010100',
'end' : '2017100100' # use the first day of the following month to ensure a full month of data is collected
}
# fetch and format data
        api_call = requests.get(endpoint.format(**params), headers=headers)
response = api_call.json()
# format and save output as JSON file
with open(filename, 'w') as f:
json.dump(response, f)
print('saved JSON data to ./' + filename)
# convert to dataframe
temp_df = pd.DataFrame.from_dict(response['items'])
temp_df['yyyymm'] = temp_df.timestamp.str[0:6]
col_name = 'pc_' + access_site
temp_df.rename(columns={'count': col_name}, inplace=True)
# save to dictionary for later combination
data_dict[col_name] = temp_df[['yyyymm', col_name]]
"""
Explanation: Pagecounts (Legacy data)
The code below performs the following steps on the Pagecount data, for both the desktop and mobile websites:
retrieves data from the API, or loads the data from local memory if it already exists
saves the raw data for each source as a .json file if it does not already exist
extracts the year, month and count data and converts it to a dataframe
saves the dataframe to a dictionary for later combination with other sources
I chose to perform this step in a loop since we would otherwise be repeating much of the same code.
End of explanation
"""
# define endpoint and access sites
endpoint = 'https://wikimedia.org/api/rest_v1/metrics/pageviews/aggregate/{project}/{access}/{agent}/{granularity}/{start}/{end}'
access_sites = ['desktop', 'mobile-app', 'mobile-web']
# repeat for each access site of interest
for access_site in access_sites:
# set filename for access-specific API call response JSON file
filename = 'data/raw/pageviews_' + access_site + '_200807_201709.json'
# check if file already exists; load if so, create if not
if os.path.isfile(filename):
with open(filename) as f:
response = json.load(f)
print('loaded JSON data from ./' + filename)
else:
# define parameters
params = {'project' : 'en.wikipedia.org',
'access' : access_site, # [all-access, desktop, mobile-app, mobile-web]
'agent' : 'user', # [all-agents, user, spider]
'granularity' : 'monthly', # [hourly, daily, monthly]
'start' : '2008010100',
'end' : '2017100100' # use the first day of the following month to ensure a full month of data is collected
}
# fetch and format data
api_call = requests.get(endpoint.format(**params), headers=headers)
response = api_call.json()
# format and save output as JSON file
with open(filename, 'w') as f:
json.dump(response, f)
print('saved JSON data to ./' + filename)
# convert to dataframe
temp_df = pd.DataFrame.from_dict(response['items'])
temp_df['yyyymm'] = temp_df.timestamp.str[0:6]
col_name = 'pv_' + access_site
temp_df.rename(columns={'views': col_name}, inplace=True)
# save to dictionary for later combination
data_dict[col_name] = temp_df[['yyyymm', col_name]]
"""
Explanation: Pageviews (New data)
We now repeat the same process as above, but for the Pageviews data. I chose to perform this step separate from the loop above since I felt there were enough differences between the API call parameters and schema to warrant breaking these steps into two. See Pageview API and Legacy Pagecounts API.
The loop below continues appending dataframes to our data dictionary. When complete, the dictionary contains data for all five API call endpoints.
End of explanation
"""
keys = list(data_dict.keys())
df = data_dict[keys[0]]
for i in range(1, len(keys)):
df = df.merge(data_dict[keys[i]], how='outer', on='yyyymm')
"""
Explanation: Data Aggregation
At this point our data consists of five separate dataframes, stored in a single data dictionary. We want to get this all into the same dataframe so we can work with it, export it, and plot it. To do this, we will define a new dataframe (df) which consists of the first dataframe in the dictionary. We'll then loop over the remaining dictionary elements and merge them all together.
The end result is a single dataframe with a single date column (yyyymm) and a column for each of the original five dataframes. The dataframes are merged on the date column so they are sure to align vertically.
End of explanation
"""
df = df.fillna(0)
"""
Explanation: Data Export
First we run the following line to convert missing values to zero, as per the assignment instructions.
End of explanation
"""
df.iloc[:,1:] = df.iloc[:,1:].astype(int)
"""
Explanation: We then run the following line to convert the data to integers, since we are working with discrete counts. Omitting this step would result in unnecessary decimal precision in the output .csv file.
End of explanation
"""
# create dataframe for .csv and plot
df_new = pd.DataFrame({'year':df['yyyymm'].str[0:4],
'month':df['yyyymm'].str[4:6],
'pagecount_all_views':df['pc_desktop-site'] + df['pc_mobile-site'],
'pagecount_desktop_views':df['pc_desktop-site'].astype(int),
'pagecount_mobile_views':df['pc_mobile-site'],
'pageview_all_views':df['pv_desktop'] + df['pv_mobile-app'] + df['pv_mobile-web'],
'pageview_desktop_views':df['pv_desktop'],
'pageview_mobile_views':df['pv_mobile-app'] + df['pv_mobile-web']})
# reorder columns
df_new = df_new[['year',
'month',
'pagecount_all_views',
'pagecount_desktop_views',
'pagecount_mobile_views',
'pageview_all_views',
'pageview_desktop_views',
'pageview_mobile_views']]
"""
Explanation: Now we perform a few simple string splitting and aggregation steps to create the final dataframe that we'll export to CSV.
End of explanation
"""
# set filename for combined data CSV
filename = 'data/en-wikipedia_traffic_200801-201709_prelim.csv'
# check if file already exists; load if so, create if not
if os.path.isfile(filename):
df_new = pd.read_csv(filename)
print('loaded CSV data from ./' + filename)
else:
df_new.to_csv(filename, index=False)
print('saved CSV data to ./' + filename)
"""
Explanation: Now we'll save to CSV, unless the file already exists, in which case we load it instead for reproducibility. We'll call this data "prelim" since we aren't yet sure whether there are any problems with it.
End of explanation
"""
df_new.replace(0, float('nan'), inplace=True)
"""
Explanation: Data Visualization
We want to have a look at the data. But first, we need a few cleanup steps.
We don't want missing values to clutter the plot, so we reverse our earlier step and convert the zeros back to NaN.
End of explanation
"""
yyyymm = pd.to_datetime(df_new['year'].astype(str) + df_new['month'].astype(str), format='%Y%m')
"""
Explanation: We also pull out a series that consists of timestamps. This is a necessary step since the plot function won't recognize the current date integers as dates.
End of explanation
"""
plt.rcParams["figure.figsize"] = (14, 4)
plt.plot(yyyymm, df_new['pagecount_desktop_views']/1e6, 'g--')
plt.plot(yyyymm, df_new['pagecount_mobile_views']/1e6, 'b--')
plt.plot(yyyymm, df_new['pagecount_all_views']/1e6, 'k--')
plt.legend(['main site','mobile site','total'], loc=2, framealpha=1)
plt.plot(yyyymm, df_new['pageview_desktop_views']/1e6, 'g-')
plt.plot(yyyymm, df_new['pageview_mobile_views']/1e6, 'b-')
plt.plot(yyyymm, df_new['pageview_all_views']/1e6, 'k-')
plt.grid(True)
plt.ylim((0,12000))
plt.xlim(('2008-01-01','2017-10-01'))
plt.title('Page Views on English Wikipedia (x 1,000,000)')
plt.suptitle('May 2015: a new pageview definition took effect, which eliminated all crawler traffic. Solid lines mark new definition.', y=0.04, color='#b22222');
"""
Explanation: Now we plot the data.
End of explanation
"""
df_new.loc[(pd.DatetimeIndex(yyyymm).year == 2016)]
"""
Explanation: This looks pretty good! However, we notice in the plot above that there appears to be some bad data in mid-2016.
Data Cleaning
Let's see if we can figure out what's going on here. We display the data for all of 2016 below to look for bad values.
End of explanation
"""
# update values
df_new.loc[103,['pagecount_all_views', 'pagecount_desktop_views', 'pagecount_mobile_views']] = None
df_new.iloc[100:106,]
"""
Explanation: It looks like row 103 has lower values for the first three columns. Interestingly, this month wasn't even supposed to be included in the analysis. These values are likely an artifact of the API process, since we were instructed to set the end date to the start of the following month, which pulls in a partial final month.
Thus, we set these values to NaN so they won't clutter the analysis.
End of explanation
"""
plt.rcParams["figure.figsize"] = (14, 4)
plt.plot(yyyymm, df_new['pagecount_desktop_views']/1e6, 'g--')
plt.plot(yyyymm, df_new['pagecount_mobile_views']/1e6, 'b--')
plt.plot(yyyymm, df_new['pagecount_all_views']/1e6, 'k--')
plt.legend(['main site','mobile site','total'], loc=2, framealpha=1)
plt.plot(yyyymm, df_new['pageview_desktop_views']/1e6, 'g-')
plt.plot(yyyymm, df_new['pageview_mobile_views']/1e6, 'b-')
plt.plot(yyyymm, df_new['pageview_all_views']/1e6, 'k-')
plt.grid(True)
plt.ylim((0,12000))
plt.xlim(('2008-01-01','2017-10-01'))
plt.title('Page Views on English Wikipedia (x 1,000,000)')
plt.suptitle('May 2015: a new pageview definition took effect, which eliminated all crawler traffic. Solid lines mark new definition.', y=0.04, color='#b22222')
plt.savefig('en-wikipedia_traffic_200801-201709.png', dpi=80);
"""
Explanation: That looks better. Let's plot again to see how it looks graphically.
End of explanation
"""
# data formatting updates
df_new = df_new.fillna(0)
df_new.iloc[:,1:] = df_new.iloc[:,1:].astype(int)
# set filename for combined data CSV
filename = 'data/en-wikipedia_traffic_200801-201709.csv'
# check if file already exists; load if so, create if not
if os.path.isfile(filename):
df_new = pd.read_csv(filename)
print('loaded CSV data from ./' + filename)
else:
df_new.to_csv(filename, index=False)
print('saved CSV data to ./' + filename)
"""
Explanation: That looks better!! So, it seems we have successfully reproduced the plot found here: https://wiki.communitydata.cc/upload/a/a8/PlotPageviewsEN_overlap.png
Data Export, Attempt 2
Finally, we'll clean up the data (in the same way as before) and output the final data file to .csv.
End of explanation
"""
|
computationforpolicy/lecture-examples | Solutions 4.ipynb | gpl-3.0 | base_url = "https://en.wikipedia.org"
index_ref = "/wiki/List_of_accidents_and_incidents_involving_commercial_aircraft"
index_html = urlopen(base_url + index_ref)
index = BeautifulSoup(index_html, "lxml")
"""
Explanation: Question a: Setting up the dataframe
End of explanation
"""
result = index.find_all('li')
"""
Explanation: Grab the <li> tags
From inspecting the source code, we can see that every item is within <li> tags - in some cases several tags for some days like September 11, 2001.
End of explanation
"""
result[829]
"""
Explanation: Handling special dates with multiple accidents
However, there are some elements in our list that contain another list of dates, which look something like this:
`<li><b><a href="/wiki/1950_Air_France_multiple_Douglas_DC-4_accidents" title="1950 Air France multiple Douglas DC-4 accidents">1950 Air France multiple Douglas DC-4 accidents</a></b>:
<ul>
<li>June 12 – An Air France Douglas DC-4 (F-BBDE) on a flight from Saigon to Paris crashes in the Arabian Sea while on approach to Bahrain Airport, killing 46 of 52 on board.</li>
<li>June 14 – An Air France Douglas DC-4, F-BBDM, crashes in the Arabian Sea while on approach to Bahrain Airport, killing 40 of 53 on board. This aircraft was operating on the same flight route as F-BBDE.</li>
</ul>
</li>`
We'll need to make sure we can handle these cases. Additionally, since there are <li> tags within this block, the <li>June 12...</li> and <li>June 14...</li> will appear again in our ResultSet. We should get rid of these duplicates.
Looking at the HTML above, the duplicated <li> entries won't have both links and date separators in them, so that's one way we can drop them. Then we'll save rows from the interior <li> tags, supplementing link information from the parent if necessary.
For example, the entry for September 11 is:
End of explanation
"""
result[830:834]
"""
Explanation: But then there are also separate entries for each <li> in the list inside:
End of explanation
"""
result[0].find('a').get('href')
result[0].text
result[0].text.split(' – ')
result[0].text.split(' – ')[0]
result[0].text.split(' – ')[1]
"""
Explanation: Extracting data from the HTML
Let's take a look at a single <li> entry from our results to see how to select each piece of data we want to save. We'll store these items in lists.
Here is how to extract each piece of data from the HTML:
End of explanation
"""
def get_date_separator(html_fragment):
# Date separator changes throughout the document, so let's handle both
if ' – ' in html_fragment.text:
return '–'
elif ' - ' in html_fragment.text:
return '-'
else:
return None
"""
Explanation: Iterating over all <li> elements and extract information
We need to write a function that will handle the fact that the date separator changes during the page.
End of explanation
"""
def extract_details(html_fragment):
# these lists may have one or more elements when returned
bdates, blinks, bdescrips = [], [], []
if html_fragment.find_all('li') == []:
# Then there is only one crash for this bullet
separator = get_date_separator(html_fragment)
blinks.append(html_fragment.find('a').get('href'))
bdates.append(html_fragment.text.split(separator)[0].strip())
bdescrips.append(html_fragment.text.split(separator)[1].strip())
else:
# Then there are multiple crashes for this bullet
for bullet in html_fragment.find_all('li'):
# Dates might appear in current or parent <li>
separator = get_date_separator(bullet)
if separator != None:
bdates.append(bullet.text.split(separator)[0].strip())
bdescrips.append(bullet.text.split(separator)[1].strip())
else:
parent_separator = get_date_separator(html_fragment)
bdates.append(html_fragment.text.split(parent_separator)[0].strip())
bdescrips.append(bullet.text.strip())
# Relevant link might appear in current or parent <li>
if bullet.find('a') == None:
blinks.append(html_fragment.find('a').get('href'))
else:
blinks.append(bullet.find('a').get('href'))
return bdates, blinks, bdescrips
dates_month_day, links, descriptions = [], [], []
for each_li in result:
if (' – ' in each_li.text or ' - ' in each_li.text) and each_li.find('a') != None:
lis_dates, lis_links, lis_descrips = extract_details(each_li)
dates_month_day += lis_dates
links += lis_links
descriptions += lis_descrips
else:
# If neither condition is true, then we hit duplicate or extra links
# elsewhere in the page so we can skip these and throw them away
continue
"""
Explanation: Let's write a function that takes an <li> element and extracts the crash details (month, day, link, and text description). We also need to handle the cases where there are several crashes per <li>.
End of explanation
"""
len(dates_month_day), len(links), len(descriptions)
"""
Explanation: Sanity check the lengths of each list:
End of explanation
"""
df = pd.DataFrame({'date': dates_month_day, 'link': links, 'description': descriptions})
"""
Explanation: Looks good! Time to make the DataFrame, which we can do by passing a Python dict:
End of explanation
"""
df.head()
"""
Explanation: Sanity check again:
End of explanation
"""
df[df.date == 'September 11']
"""
Explanation: Let's check that we did everything right for the weird cases by checking one of the bullets that had multiple crashes:
End of explanation
"""
df.to_csv('crashes_question_starter.csv')
df[['description', 'link']].to_csv('crashes_no_extra_credit.csv')
"""
Explanation: This looks exactly like what we expected. Next we'll follow each link so we can add in the year. We have to visit those pages anyway to extract the additional crash details, so we'll grab the years from there too.
End of explanation
"""
def try_request(url):
html = urlopen(url)
time.sleep(1)
return BeautifulSoup(html, "lxml")
"""
Explanation: Question b: Completing the dataframe with details
I'll rate limit my requests by using the function below:
End of explanation
"""
def extract_summary(trs):
date_w_year, passengers, crew, fatalities, survivors = '', 0, 0, 0, 0
registration, origins, destination = 'No data', 'No data', 'No data'
for each_tr in trs:
if each_tr.find('th', text = re.compile('Destination')) != None:
try:
destination = each_tr.td.text
except:
pass
elif each_tr.find('th', text = re.compile('Date')) != None:
try:
date_w_year = each_tr.td.text
except:
pass
elif each_tr.find('th', text = re.compile('Passengers')) != None:
try:
passengers = extract_numbers(each_tr.td.text, passengers)
except:
pass
elif each_tr.find('th', text = re.compile('Crew')) != None:
try:
crew = extract_numbers(each_tr.td.text, passengers)
except:
pass
elif each_tr.find('th', text = re.compile('Fatalities')) != None:
try:
fatalities = extract_numbers(each_tr.td.text, passengers)
except:
pass
elif each_tr.find('th', text = re.compile('Survivors')) != None:
try:
survivors = extract_numbers(each_tr.td.text, passengers)
except:
pass
elif each_tr.find('th', text = re.compile('Flight origin')) != None:
try:
origins = each_tr.td.text
except:
pass
elif each_tr.find('th', text = re.compile('Registration')) != None:
try:
registration = each_tr.td.text
except:
pass
else:
pass
return {'destination': destination,
'date': date_w_year,
'passengers': passengers,
'crew': crew,
'fatalities': fatalities,
'survivors': survivors,
'origins': origins,
'registration': registration}
"""
Explanation: Extracting elements from the summary page
We will write a function to extract elements from a list of table rows (defined by <tr>).
End of explanation
"""
def extract_numbers(td_text, passengers):
"""
Function that handles table data rows to extract numbers.
Handles special cases where there are strings like all, none, etc. in the text
"""
number_regex = re.compile('\d+')
all_regex = re.compile('ll')
none_regex = re.compile('one')
unknown_regex = re.compile('nknown')
try:
data_element = int(number_regex.findall(td_text)[0])
except:
if len(all_regex.findall(td_text)) >= 1:
data_element = passengers
elif len(none_regex.findall(td_text)) >= 1:
data_element = 0
elif len(unknown_regex.findall(td_text)) >= 1:
data_element = 0
else:
data_element = 0
return data_element
"""
Explanation: Upon inspection of the text of each summary, we can see that there are some cases where in addition to (or instead of) an integer, there is some extraneous text or just a string like "all", "unknown", or "none". Let's handle these special cases and extract numbers in a function:
End of explanation
"""
# Define lists we use to store our results
dates_w_year, passengers, crew, fatalities, survivors = [], [], [], [], []
registration, origins, destination = [], [], []
for row in links:
# Get HTML of detail page
summary_html = try_request(base_url + row)
trs = summary_html.find_all('tr')
# Extract data from summary HTML
summary = extract_summary(trs)
# Save the data for this page in our lists
dates_w_year.append(summary['date'])
passengers.append(summary['passengers'])
crew.append(summary['crew'])
fatalities.append(summary['fatalities'])
survivors.append(summary['survivors'])
origins.append(summary['origins'])
registration.append(summary['registration'])
destination.append(summary['destination'])
"""
Explanation: Scraping each page
Now let's use these functions to scrape each link.
End of explanation
"""
len(destination), len(origins), len(registration), len(dates_w_year), len(passengers), len(crew), len(fatalities), len(survivors)
df_full = pd.DataFrame({'date': dates_w_year, 'link': links, 'description': descriptions, 'passengers': passengers,
'crew': crew, 'fatalities': fatalities, 'survivors': survivors,
'registration': registration, 'flight origin': origins, 'destination': destination})
# save all this scraped stuff!
df_full.to_csv('all_data_rescraped.csv')
df_full = pd.read_csv('all_data_rescraped.csv')
dates_w_year = df_full['date']
df_full.columns
df_full.drop(['Unnamed: 0'], axis=1, inplace=True)
"""
Explanation: Let's sanity check the lengths of these lists.
End of explanation
"""
dates_w_year[0:10]
"""
Explanation: Clean up dates and format them as datetimes
The formatting of the dates is not so great, so let's just clean that up.
End of explanation
"""
cleaned_dates = [str(d).replace(',', '') for d in dates_w_year]
"""
Explanation: Let's remove commas.
End of explanation
"""
import calendar
months = list(calendar.month_name)
days = list(calendar.day_name)
dates = [str(d) for d in list(range(1, 32))]
years = [str(y) for y in list(range(1900, 2017))]
def parse_date_strings(text):
split_row = text.split()
month, day, year, date = '', '', '', ''
for each in split_row[0:4]:
if each in months:
month = each
elif each in days:
day = each
elif each in years:
year = each
elif each in dates:
date = each
else:
pass
return {'month': month,
'day': day,
'year': year,
'date': date}
def fix_dates(datecol):
correctedcol = []
for row in datecol:
parsed_date = parse_date_strings(row)
correctedcol.append('{} {} {}'.format(parsed_date['date'],
parsed_date['month'],
parsed_date['year']))
return correctedcol
datescol = fix_dates(cleaned_dates)
datescol[0:5]
"""
Explanation: Some dates have month first. Some dates have date first. Let's make them consistent while also getting rid of extraneous information appended to the end of the date (like links to references). We'll write our own function to parse dates because we like to do fun and cool things like that.
End of explanation
"""
dates_datetime = pd.to_datetime(datescol, format='%d %B %Y', errors='coerce')
df_full['date'] = dates_datetime
df_full.head()
df_full = pd.DataFrame({'date': dates_datetime, 'link': links, 'description': descriptions, 'passengers': passengers,
'crew': crew, 'fatalities': fatalities, 'survivors': survivors,
'registration': registration, 'flight origin': origins, 'destination': destination})
# save all this scraped stuff!
df_full.to_csv('final_dataframe.csv')
"""
Explanation: We can see now that our dates are nicely formatted and can create them as datetime objects:
End of explanation
"""
def extract_summaries(tables, relevant_date):
if len(tables) == 1:
result = extract_single_table_summary(tables[0])
else:
result = extract_relevant_table_summary(tables, relevant_date)
return {'destination': result['destination'],
'date': result['date'],
'passengers': result['passengers'],
'crew': result['crew'],
'fatalities': result['fatalities'],
'survivors': result['survivors'],
'origins': result['origins'],
'registration': result['registration']}
def pick_out_table(tables, relevant_date):
for table in tables:
trs = table.find_all('tr')
for each_tr in trs:
if each_tr.find('th', text = re.compile('Date')) != None:
# Clean and parse date
date = each_tr.td.text.replace(',', '')
parsed_date = parse_date_strings(date)
if (parsed_date['month'] == relevant_date.split()[0]
and parsed_date['date'] == relevant_date.split()[1]):
return table
return tables[0]
def extract_relevant_table_summary(tables, relevant_date):
date_w_year, passengers, crew, fatalities, survivors = '', 0, 0, 0, 0
registration, origins, destination = '', '', ''
table = pick_out_table(tables, relevant_date)
trs = table.find_all('tr')
for each_tr in trs:
if each_tr.find('th', text = re.compile('Destination')) != None:
try:
destination = each_tr.td.text
except:
pass
elif each_tr.find('th', text = re.compile('Date')) != None:
try:
date_w_year = each_tr.td.text
except:
pass
elif each_tr.find('th', text = re.compile('Passengers')) != None:
try:
passengers = extract_numbers(each_tr.td.text, passengers)
except:
pass
elif each_tr.find('th', text = re.compile('Crew')) != None:
try:
crew = extract_numbers(each_tr.td.text, passengers)
except:
pass
elif each_tr.find('th', text = re.compile('Fatalities')) != None:
try:
fatalities = extract_numbers(each_tr.td.text, passengers)
except:
pass
elif each_tr.find('th', text = re.compile('Survivors')) != None:
try:
survivors = extract_numbers(each_tr.td.text, passengers)
except:
pass
elif each_tr.find('th', text = re.compile('Flight origin')) != None:
try:
origins = each_tr.td.text
except:
pass
elif each_tr.find('th', text = re.compile('Registration')) != None:
try:
registration = each_tr.td.text
except:
pass
else:
continue
return {'destination': destination.strip(),
'date': date_w_year,
'passengers': passengers,
'crew': crew,
'fatalities': fatalities,
'survivors': survivors,
'origins': origins.strip(),
'registration': registration.strip()}
def extract_single_table_summary(table):
date_w_year, passengers, crew, fatalities, survivors = '', 0, 0, 0, 0
registration, origins, destination = '', '', ''
trs = table.find_all('tr')
for each_tr in trs:
if each_tr.find('th', text = re.compile('Destination')) != None:
try:
destination += ' ' + each_tr.td.text
except:
pass
elif each_tr.find('th', text = re.compile('Date')) != None:
try:
date_w_year = each_tr.td.text
except:
pass
elif each_tr.find('th', text = re.compile('Passengers')) != None:
try:
passengers += extract_numbers(each_tr.td.text, passengers)
except:
pass
elif each_tr.find('th', text = re.compile('Crew')) != None:
try:
crew += extract_numbers(each_tr.td.text, passengers)
except:
pass
elif each_tr.find('th', text = re.compile('Fatalities')) != None:
try:
fatalities += extract_numbers(each_tr.td.text, passengers)
except:
pass
elif each_tr.find('th', text = re.compile('Survivors')) != None:
try:
survivors += extract_numbers(each_tr.td.text, passengers)
except:
pass
elif each_tr.find('th', text = re.compile('Flight origin')) != None:
try:
origins += ' ' + each_tr.td.text
except:
pass
elif each_tr.find('th', text = re.compile('Registration')) != None:
try:
registration += ' ' + each_tr.td.text
except:
pass
else:
continue
return {'destination': destination.strip(),
'date': date_w_year,
'passengers': passengers,
'crew': crew,
'fatalities': fatalities,
'survivors': survivors,
'origins': origins.strip(),
'registration': registration.strip()}
"""
Explanation: Optional: Doing part b while handling the special cases where there are multiple summaries per page
I told you to ignore this for the sake of simplifying this assignment, but it is the case that there are multiple summaries on a couple pages, for example:
https://en.wikipedia.org/wiki/1950_Air_France_multiple_Douglas_DC-4_accidents
These we can separate using the dates.
However, in addition to this, there are some cases where planes crash into one another. In these cases, the details for both aircraft appear as repeated rows within a single summary table, for example:
https://en.wikipedia.org/wiki/1922_Picardie_mid-air_collision
We could handle this in two ways: Sum the numbers for passengers, fatalities, etc. Or we could instead create two rows for these crashes.
We could handle both these cases by doing the following (pseudocode):
if there are multiple tables
    get the summary details from the table matching the relevant date
else (there is a single table)
    sum the summary details across any repeated rows (this is a collision)
We will create a new function extract_summaries() that will implement this approach.
End of explanation
"""
test_collision_url = 'https://en.wikipedia.org/wiki/1922_Picardie_mid-air_collision'
summary_html = try_request(test_collision_url)
tables = summary_html.find_all('table', {"class" : "infobox vcard vevent"})
result_updated = extract_summaries(tables, 'April 7')  # the date argument is only consulted when multiple tables are present
result_updated
"""
Explanation: Let's test with the two URLs from earlier:
End of explanation
"""
test_multiple_dates_url = 'https://en.wikipedia.org/wiki/1950_Air_France_multiple_Douglas_DC-4_accidents'
summary_html = try_request(test_multiple_dates_url)
first_crash = 'June 12'
second_crash = 'June 14'
tables = summary_html.find_all('table', {"class" : "infobox vcard vevent"})
result_updated = extract_summaries(tables, first_crash)
result_updated
result_updated = extract_summaries(tables, second_crash)
result_updated
dates_w_year, passengers, crew, fatalities, survivors = [], [], [], [], []
registration, origins, destination = [], [], []
for num_row in range(len(links)):
# Get HTML of detail page
summary_html = try_request(base_url + links[num_row])
# Get tables that are in these sidebars (mostly one, but sometimes multiple)
tables = summary_html.find_all('table', {"class" : ["infobox", "vcard"]})
# Extract data from summary HTML
summary = extract_summaries(tables, dates_month_day[num_row])
# Save the data for this page in our lists
dates_w_year.append(summary['date'])
passengers.append(summary['passengers'])
crew.append(summary['crew'])
fatalities.append(summary['fatalities'])
survivors.append(summary['survivors'])
origins.append(summary['origins'])
registration.append(summary['registration'])
destination.append(summary['destination'])
# Clean dates
cleaned_dates = [str(d).replace(',', '') for d in dates_w_year]
datescol = fix_dates(cleaned_dates)
dates_datetime = pd.to_datetime(datescol, format='%d %B %Y', errors='coerce')
# Save!
df_summary = pd.DataFrame({'date': dates_datetime, 'link': links, 'description': descriptions, 'passengers': passengers,
'crew': crew, 'fatalities': fatalities, 'survivors': survivors,
'registration': registration, 'flight origin': origins, 'destination': destination})
# save all this scraped stuff!
df_summary.to_csv('final_dataframe_summary.csv')
"""
Explanation: Looks like we can correctly extract a summary table with multiple aircraft in it.
Now let's try on a page that has multiple crashes in it on different days.
End of explanation
"""
top_5_crashes = df_full.sort_values('fatalities', ascending=False)[0:5]
"""
Explanation: Question c: Which were the top 5 most deadly aviation incidents? Report the number of fatalities and the flight origin for each.
End of explanation
"""
top_5_crashes[['fatalities', 'flight origin']]
"""
Explanation: So the top 5 crashes, the number of fatalities and the flight origin was:
End of explanation
"""
top_5_crashes['description']
"""
Explanation: Let's see the description:
End of explanation
"""
df_full.date[672]
recent_incidents = df_full[672:]  # include row 672, the first 1991 incident
recent_incidents['flight origin'].value_counts()[0:5]
"""
Explanation: Question d: Which flight origin has the highest number of aviation incidents in the last 25 years?
It's 2016, so let's take accidents from 1991 and later and see which is the most common flight origin.
This crash is the first one to occur in 1991:
End of explanation
"""
df_full['flight origin'].value_counts()[0:10]
"""
Explanation: Without de-duplication, Bergen Airport, Ninoy Aquino International Airport, and Domodedovo International Airport in Moscow had the highest number of aviation incidents.
Out of curiosity, let's do this for the entire dataset:
End of explanation
"""
df_full.to_json('crashes.json')
"""
Explanation: London Heathrow and LAX (entered twice as two slightly different strings) come out on top, which is not unexpected given the number of flights these airports have.
Note that one way we could proceed with de-duplication would be to use the fact that the summary tables actually contain links to their corresponding wikipedia pages. We could link together strings that correspond to the same airport using their common link.
Question e: Output as JSON
End of explanation
"""
|
Jackporter415/phys202-2015-work | assignments/assignment12/FittingModelsEx01.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
"""
Explanation: Fitting Models Exercise 1
Imports
End of explanation
"""
a_true = 0.5
b_true = 2.0
c_true = -4.0
dy = 2.0
x = np.linspace(-5,5,30)
"""
Explanation: Fitting a quadratic curve
For this problem we are going to work with the following model:
$$ y_{model}(x) = a x^2 + b x + c $$
The true values of the model parameters are as follows:
End of explanation
"""
ydata = a_true*x**2 + b_true*x + c_true + np.random.normal(0.0, dy, size=x.size)
assert True # leave this cell for grading the raw data generation and plot
"""
Explanation: First, generate a dataset using this model using these parameters and the following characteristics:
For your $x$ data use 30 uniformly spaced points between $[-5,5]$.
Add a noise term to the $y$ value at each point that is drawn from a normal distribution with zero mean and standard deviation 2.0. Make sure you add a different random number to each point (see the size argument of np.random.normal).
After you generate the data, make a plot of the raw data (use points).
End of explanation
"""
assert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors
plt.plot(x, ydata, 'k.')
plt.xlabel('x')
plt.ylabel('y')
plt.xlim(-5,5);
plt.errorbar(x, ydata, dy,
             fmt='.k', ecolor='lightgray')

def quad_model(x, a, b, c):
    # The model in this exercise is quadratic, not exponential
    return a*x**2 + b*x + c

yfit = quad_model(x, a_true, b_true, c_true)
plt.plot(x, yfit)
plt.plot(x, ydata, 'k.')
plt.xlabel('x')
plt.ylabel('y');
"""
Explanation: Now fit the model to the dataset to recover estimates for the model's parameters:
Print out the estimates and uncertainties of each parameter.
Plot the raw data and best fit of the model.
End of explanation
"""
|
saashimi/CPO-datascience | Normalized Dataset - Testing parameters.ipynb | mit | #Import required packages
import pandas as pd
import numpy as np
import datetime
import matplotlib.pyplot as plt
def format_date(df_date):
"""
Splits Meeting Times and Dates into datetime objects where applicable using regex.
"""
df_date['Days'] = df_date['Meeting_Times'].str.extract('([^\s]+)', expand=True)
df_date['Start_Date'] = df_date['Meeting_Dates'].str.extract('([^\s]+)', expand=True)
df_date['Year'] = df_date['Term'].astype(str).str.slice(0,4)
df_date['Quarter'] = df_date['Term'].astype(str).str.slice(4,6)
df_date['Term_Date'] = pd.to_datetime(df_date['Year'] + df_date['Quarter'], format='%Y%m')
df_date['End_Date'] = df_date['Meeting_Dates'].str.extract('(?<=-)(.*)(?= )', expand=True)
df_date['Start_Time'] = df_date['Meeting_Times'].str.extract('(?<= )(.*)(?=-)', expand=True)
df_date['Start_Time'] = pd.to_datetime(df_date['Start_Time'], format='%H%M')
df_date['End_Time'] = df_date['Meeting_Times'].str.extract('((?<=-).*$)', expand=True)
df_date['End_Time'] = pd.to_datetime(df_date['End_Time'], format='%H%M')
df_date['Duration_Hr'] = ((df_date['End_Time'] - df_date['Start_Time']).dt.seconds)/3600
return df_date
def format_xlist(df_xl):
"""
revises % capacity calculations by using Max Enrollment instead of room capacity.
"""
df_xl['Cap_Diff'] = np.where(df_xl['Xlst'] != '',
df_xl['Max_Enrl'].astype(int) - df_xl['Actual_Enrl'].astype(int),
df_xl['Room_Capacity'].astype(int) - df_xl['Actual_Enrl'].astype(int))
df_xl = df_xl.loc[df_xl['Room_Capacity'].astype(int) < 999]
return df_xl
"""
Explanation: Random Forests Using Full PSU dataset
End of explanation
"""
pd.set_option('display.max_rows', None)
df = pd.read_csv('data/PSU_master_classroom_91-17.csv', dtype={'Schedule': object, 'Schedule Desc': object})
df = df.fillna('')
df = format_date(df)
# Avoid classes that only occur on a single day
df = df.loc[df['Start_Date'] != df['End_Date']]
#terms = [199104, 199204, 199304, 199404, 199504, 199604, 199704, 199804, 199904, 200004, 200104, 200204, 200304, 200404, 200504, 200604, 200704, 200804, 200904, 201004, 201104, 201204, 201304, 201404, 201504, 201604]
#terms = [200604, 200704, 200804, 200904, 201004, 201104, 201204, 201304, 201404, 201504, 201604]
#df = df.loc[df['Term'].isin(terms)]
df = df.loc[df['Online Instruct Method'] != 'Fully Online']
#dept_lst = ['MTH', 'CH', 'BI', 'CE', 'CS', 'ECE', 'EMGT' ]
#df = df.loc[df['Dept'].isin(dept_lst)]
# Calculate number of days per week and treat Sunday condition
df['Days_Per_Week'] = df['Days'].str.len()
df['Room_Capacity'] = df['Room_Capacity'].apply(lambda x: x if (x != 'No Data Available') else 0)
df['Building'] = df['ROOM'].str.extract('([^\s]+)', expand=True)
df_cl = format_xlist(df)
df_cl['%_Empty'] = df_cl['Cap_Diff'].astype(float) / df_cl['Room_Capacity'].astype(float)
# Normalize the results: this overwrites the column above with the
# filled fraction (Actual_Enrl / Room_Capacity)
df_cl['%_Empty'] = df_cl['Actual_Enrl'].astype(np.float32) / df_cl['Room_Capacity'].astype(np.float32)
df_cl = df_cl.replace([np.inf, -np.inf], np.nan).dropna()
from sklearn.preprocessing import LabelEncoder
df_cl = df_cl.sample(n = 80000)
# Save as a 1D array. Otherwise will throw errors.
y = np.asarray(df_cl['%_Empty'], dtype="|S6")
cols = df_cl[['Dept', 'Days', 'Start_Time', 'ROOM', 'Quarter', 'Room_Capacity', 'Building', 'Class', 'Instructor', 'Schedule', 'Max_Enrl']]
cat_columns = ['Dept', 'Days', 'Class', 'Start_Time', 'ROOM', 'Building', 'Instructor', 'Schedule']
#cols = df_cl[['Start_Time', 'Class', 'Instructor' ]]
#cat_columns = ['Start_Time', 'Class', 'Instructor']
for column in cat_columns:
categorical_mapping = {label: idx for idx, label in enumerate(np.unique(cols['{0}'.format(column)]))}
cols['{0}'.format(column)] = cols['{0}'.format(column)].map(categorical_mapping)
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import train_test_split
else:
from sklearn.model_selection import train_test_split
X = cols.iloc[:, :].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
"""
Explanation: Partitioning a dataset in training and test sets
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
feat_labels = cols.columns[:]
forest = RandomForestClassifier(n_estimators=20,
random_state=0,
n_jobs=-1) # -1 sets n_jobs=n_CPU cores
forest.fit(X_train, y_train)
importances = forest.feature_importances_
indices = np.argsort(importances)[::-1]
for f in range(X_train.shape[1]):
print("%2d) %-*s %f" % (f + 1, 30,
feat_labels[indices[f]],
importances[indices[f]]))
plt.title('Feature Importances')
plt.bar(range(X_train.shape[1]),
importances[indices],
color='lightblue',
align='center')
plt.xticks(range(X_train.shape[1]),
feat_labels[indices], rotation=90)
plt.xlim([-1, X_train.shape[1]])
plt.tight_layout()
plt.show()
"""
Explanation: Determine Feature Importances
Utilize Random Forests Method to determine feature importances. On the left, trees are trained independently by recursive binary partitioning of a bootstrapped sample of the input data, X . On the right, test data is dropped down through each tree and the response estimate is the average over the all the individual predictions in the forest.
Random Forests Diagram
<img src="files/Random Forests.png">
Source: ResearchGate.net
End of explanation
"""
from sklearn.metrics import accuracy_score
forest = RandomForestClassifier(n_estimators=20,
random_state=0,
n_jobs=-1) # -1 sets n_jobs=n_CPU cores
forest.fit(X_train, y_train)
# Note: predicting on the full X (training rows included) gives an
# optimistic score; use X_test/y_test for an unbiased estimate.
y_predict = forest.predict(X)
y_actual = y
print(accuracy_score(y_actual, y_predict))
"""
Explanation: Feature Importance Results
Class, Instructor, and Start Times are the three most important factors in predicting the percentage of empty seats expected.
Test Prediction
Machine-generated algorithm results in roughly .65 - .70 accuracy score.
End of explanation
"""
|
enchantner/python-zero | lesson_5/Slides.ipynb | mit | a = 1
b = 3
a + b
a.__add__(b)
type(a)
isinstance(a, int)
class Animal(object):
mammal = True # class variable
def __init__(self, name, voice, color="black"):
self.name = name
self.__voice = voice # "приватный" или "защищенный" атрибут
self._color = color # "типа приватный" атрибут
def make_sound(self):
print('{0} {1} says "{2}"'.format(self._color, self.name, self.__voice))
@classmethod
def description(cls):
print("Some animal")
Animal.mammal
Animal.description()
a = Animal("dog", "bark")
a.mammal
a.__voice  # AttributeError: name mangling stores it as a._Animal__voice
a._color
dir(a)
class Cat(Animal):
def __init__(self, color):
super().__init__(name="cat", voice="meow", color=color)
c = Cat(color="white")
isinstance(c, Animal)
c1 = Cat(color="white")
c2 = Cat(color="black")
print(c1.mammal)
c1.mammal = False
print(c1.mammal)
print(c2.mammal)
c1 = Cat(color="white")
c2 = Cat(color="black")
print(c1.mammal)
Cat.mammal = False
print(c1.mammal)
print(c2.mammal)
c._color = "green"
c.make_sound()
class Cat(Animal):
def __init__(self, color):
super().__init__(name="cat", voice="meow", color=color)
@property
def color(self):
return self._color
@color.setter
def color(self, val):
if val not in ("black", "white", "grey", "mixed"):
raise Exception("Cat can't be {0}!".format(val))
self._color = val
c = Cat("white")
c.color
c.color = "green"
c.color
"""
Explanation: Questions on the previous lesson
What will the regex "^\d+.\d{1,2}.\d{1,2}\s[^A-Z]?$" match?
What does the function filter(lambda s: s.startswith("https://"), sys.stdin) do?
Explain in your own words what yield is.
In Python 2 you could get a very strange error: ValueError: function 'func' accepts at least 2 arguments (2 given). In Python 3 the error message was made more informative, but try to guess what you had to do to get such an error.
How do you pick a random element from a list?
Classes and magic methods
End of explanation
"""
class A(object):
def __init__(self):
self.sandbox = {}
def __enter__(self):
return self.sandbox
def __exit__(self, exc_type, exc_value, traceback):
self.sandbox = {}
a = A()
with a as sbox:
sbox["foo"] = "bar"
print(sbox)
print(a.sandbox)
from contextlib import contextmanager
@contextmanager
def contextgen():
print("enter")
yield 1
print("exit")
with contextgen() as a:
print(a)
"""
Explanation: Exercise
Write a RangeCounter class that takes an initial value and a step size. The counter has a step() method that increases the value by the step size. Let's forbid changing the value directly, but allow setting the step size through a setter.
More about with
End of explanation
"""
import os
import requests
from threading import Thread
class DownloadThread(Thread):
def __init__(self, url, name):
super().__init__()
self.url = url
self.name = name
def run(self):
res = requests.get(self.url, stream=True)
res.raise_for_status()
fname = os.path.basename(self.url)
with open(fname, "wb") as savefile:
for chunk in res.iter_content(1024):
savefile.write(chunk)
print(f"{self.name} закончил загрузку {self.url} !")
def main(urls):
for item, url in enumerate(urls):
thread = DownloadThread(url, f"Поток {item + 1}")
thread.start()
main([
"http://www.irs.gov/pub/irs-pdf/f1040.pdf",
"http://www.irs.gov/pub/irs-pdf/f1040a.pdf",
"http://www.irs.gov/pub/irs-pdf/f1040ez.pdf",
"http://www.irs.gov/pub/irs-pdf/f1040es.pdf",
"http://www.irs.gov/pub/irs-pdf/f1040sb.pdf"
])
"""
In this case the interpreter waits for
all child threads to finish.
Other languages may behave differently!
"""
import queue
class DownloadThread2(Thread):
def __init__(self, queue, name):
super().__init__()
self.queue = queue
self.name = name
def run(self):
while True:
url = self.queue.get()
fname = os.path.basename(url)
res = requests.get(url, stream=True)
res.raise_for_status()
with open(fname, "wb") as savefile:
for chunk in res.iter_content(1024):
savefile.write(chunk)
self.queue.task_done()
print(f"{self.name} закончил загрузку {url} !")
def main(urls):
q = queue.Queue()
threads = [DownloadThread2(q, f"Поток {i + 1}") for i in range(2)]
for t in threads:
        # make the interpreter NOT wait for child threads to finish
t.setDaemon(True)
t.start()
for url in urls:
q.put(url)
    q.join()  # everything has been processed - exit
main([
"http://www.irs.gov/pub/irs-pdf/f1040.pdf",
"http://www.irs.gov/pub/irs-pdf/f1040a.pdf",
"http://www.irs.gov/pub/irs-pdf/f1040ez.pdf",
"http://www.irs.gov/pub/irs-pdf/f1040es.pdf",
"http://www.irs.gov/pub/irs-pdf/f1040sb.pdf"
])
"""
Explanation: What are threads?
The system scheduler hands CPU time to threads/processes, switching context between them
Processes/threads run "in parallel", ideally making use of several CPU cores
The scheduler's ways are inscrutable: you cannot predict in advance which process will get resources at a given moment
Threads must be synchronized according to the task at hand, to avoid problems with concurrent access
Example - a simple version of a web server
There are CPU-bound tasks and there are I/O-bound tasks - it is important to understand the difference
What is the GIL?
The GIL is a global mutex (a synchronization primitive) in the Python interpreter
The GIL forbids more than one thread at a time from executing Python bytecode
But this applies ONLY to Python bytecode and does not extend to I/O operations
Python threads (unlike threads in, say, Ruby) are full-fledged OS threads
End of explanation
"""
from multiprocessing import Process
from multiprocessing import Queue
"""
Explanation: Exercise
Implement "sleepsort". Suppose we have a short list of numbers from 0 to 10. To print them in sorted order, it is enough to make each thread "sleep" a number of seconds equal to the number itself, and only then print it. What is the drawback of this approach?
How to work around the GIL?
For example, use processes instead of threads.
Then the problem becomes synchronization and message passing (see pickle)
And processes are still somewhat heavier than threads. Starting a process per client is too expensive.
End of explanation
"""
import time
from concurrent.futures import ThreadPoolExecutor
# аналогично с ProcessPoolExecutor
def hold_my_beer_5_sec(beer):
time.sleep(5)
return beer
pool = ThreadPoolExecutor(3)
future = pool.submit(hold_my_beer_5_sec, ("Балтика"))
print(future.done())
time.sleep(5)
print(future.done())
print(future.result())
import concurrent.futures
import requests
def load_url(url):
fname = os.path.basename(url)
res = requests.get(url, stream=True)
res.raise_for_status()
with open(fname, "wb") as savefile:
for chunk in res.iter_content(1024):
savefile.write(chunk)
return fname
URLS = [
"http://www.irs.gov/pub/irs-pdf/f1040.pdf",
"http://www.irs.gov/pub/irs-pdf/f1040a.pdf",
"http://www.irs.gov/pub/irs-pdf/f1040ez.pdf",
"http://www.irs.gov/pub/irs-pdf/f1040es.pdf",
"http://www.irs.gov/pub/irs-pdf/f1040sb.pdf"
]
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
future_to_url = {
executor.submit(load_url, url): url
for url in URLS
}
for future in concurrent.futures.as_completed(future_to_url):
url = future_to_url[future]
print(f"URL '{future_to_url[future]}' is saved to '{future.result()}'")
"""
Explanation: Ipyparallel
0MQ + Kernels
Support for platforms like EC2
mpi4py
Task DAG
https://ipyparallel.readthedocs.io/en/latest/
End of explanation
"""
import threading

m = threading.Lock()
m.acquire()
m.release()
"""
Explanation: Synchronization primitives - the mutex
End of explanation
"""
|
zpace/zaphod | zaphod/zaphod_example.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from astropy.io import fits
from astropy import units as u
"""
Explanation: Let's make fake galaxies!
(Zach Pace, UWisc; last substantial update 2 Sept 2016)
Let's say you want to test your spectral fitting model... First, generate a CSP spectrum with a known SFH (and therefore, a known MWA). I'm using BC03. We can read in that spectrum, and fill an entire IFU with that spectrum, subject to some noise. We then feed that IFU into our spectral fitting model, and see how well the inference works at various levels of noise.
zaphod giveth confidence, and zaphod taketh confidence away.
End of explanation
"""
import sys
if '../../stellarmass_pca/' not in sys.path:
sys.path.append('../../stellarmass_pca/')
print(sys.path[-1])
# this is where all my SP fitting code lives
from find_pcs import *
import find_pcs
reload(find_pcs)
import cPickle as pkl
pca = pkl.load(open('../../stellarmass_pca/pca.pkl', 'r'))
i = 55 # pick a SFH
sfh_spec = pca.trn_spectra[i]
sfh_logl = pca.logl
spec_cov = np.load('bc03/manga_cov_for_model.npy')
param = 'MWA' # pick a parameter to focus on
param_real = pca.metadata[param][i]
print param, param_real
import fake
reload(fake)
"""
Explanation: Start by reading in an SFH, the resulting log-wavelength & spectrum, and the instrument's spectral covariance matrix...
End of explanation
"""
fake_ifu = fake.FakeIFU.SingleSFH(
logl=sfh_logl, dlogl=1.0e-4, spec=sfh_spec,
true_params=[param_real], param_names=[param],
F_max=500, K=spec_cov, SNmax=60.)
"""
Explanation: Instantiate a FakeIFU object, with the parameters you're trying to fit for. Also note that we've supplied a spectral covariance matrix (but you could give a constant or a 1d array, and a multivariate---but non-covariate---matrix would be constructed for you). Also, it's almost certain that you don't know your actual spectral covariance, but how bad it really is depends on how careful your spectrophotometric calibration is. This is a can of worms.
End of explanation
"""
cube = fake_ifu.make_datacube()
plt.plot(10.**sfh_logl, cube[:, 37, 37], label='noisy')
plt.plot(10.**sfh_logl, fake_ifu.true_spectra[:, 37, 37], label='pure')
plt.legend(loc='best')
plt.xlabel(r'$\lambda~[\textrm{\AA}]$')
plt.ylabel(r'$F_{\lambda}~[\textrm{erg/s/cm}^2\textrm{/\AA}]$')
plt.show()
dlogl = 1.0e-4
l_l = 10.**(fake_ifu.logl - dlogl/2.)
l_u = 10.**(fake_ifu.logl + dlogl/2.)
dl = l_u - l_l
plt.imshow(
(cube * dl[..., None, None]).sum(axis=0),
aspect='equal')
plt.colorbar()
plt.show()
"""
Explanation: Now make an artificial cube, with the given covariate noise, all built around a single SFH, but with a variety of amplitudes
End of explanation
"""
from cov_obs import *
import cPickle as pkl
from astropy.cosmology import WMAP9 as cosmo
cov = Cov_Obs(
cov=fake_ifu.K, lllim=10.**sfh_logl[0],
dlogl=1.0e-4, nobj=0, SB_r_mean=0.)
"""
Explanation: Now let's test my PCA SP fitter
I wrote a stellar populations fitting library in summer 2016, which uses Principal Component Analysis to perform dimensionality reduction on a library of over ten-thousand spectra (which have values for properties like stellar metallicity, optical depth of absorbing dust clouds, and stellar mass-to-light ratio precomputed). This allows us to infer those properties for spectra that have been observed. The whole process runs much faster than traditional template-weighting (and "discovers" what spectral features are important for inferring what parameters), but its accuracy is not well-known. I want to test the outputs of this library on lots of fake (but noisified) IFUs, which have known underlying parameters. If the PCA SP fitting breaks in certain use-cases, this is a good way to discover that.
For more details on the fitting method, see Chen et al. (2012) and an upcoming paper by me.
Here's some prep for my spectral fits. Ignore this.
End of explanation
"""
pca_res = find_pcs.Test_PCA_Result(
pca=pca, K_obs=cov, cosmo=cosmo, fake_ifu=fake_ifu,
objname = 'FAKE')
"""
Explanation: Instantiate an object to perform all the computation of supposed best-fit underlying parameters by the PCA model.
End of explanation
"""
param_fit = pca_res.pca.param_pct_map(
param, pca_res.w, [50.])[0, ...]
plt.imshow(param_fit, aspect='equal')
plt.colorbar()
plt.title(param)
plt.show()
plt.hist(param_fit.flatten(), 100)
plt.title(param)
plt.axvline(param_real)
"""
Explanation: We now have all the model weights available to us, so we can examine each parameter's fits one at a time! No need to re-instantiate the object, just call pca_res.pca.param_pct_map() again with a different param argument.
End of explanation
"""
|
rbiswas4/ObsCond | examples/Demo_SkyBrightness.ipynb | gpl-3.0 | fig, ax = plt.subplots()
ax.plot(hwbpDict['g'].wavelen, hwbpDict['g'].sb, 'k')
ax.plot(hwbpDict['g'].wavelen, TotbpDict['g'].sb, 'r')
pointings = pd.read_csv(os.path.join(obscond.example_data_dir, 'example_pointings.csv'), index_col='obsHistID')
skycalc = obscond.SkyCalculations(photparams="LSST", hwBandpassDict=hwbpDict)
pointings[['fieldRA', 'fieldDec', 'expMJD', 'airmass', 'FWHMeff', 'filter']].head()
plt.plot(skycalc.adb.bandpassForAirmass('g', 1.00).sb)
"""
Explanation: Check with a quick plot that Total and HardWare make sense
End of explanation
"""
skycalc.fiveSigmaDepth('g', 1.086662, 0.925184, -0.4789, 61044.077855, use_provided_airmass=False)
"""
Explanation: Demos
Try out an example where the airmass used for calculating the bandpass is calculated
End of explanation
"""
skycalc.fiveSigmaDepth('g', 1.086662, 0.925184, -0.4789, 61044.077855, use_provided_airmass=True)
skycalc.fiveSigmaDepth('g', 1.086662, 0.925184, -0.4789, 61044.077855, provided_airmass=1.008652,
use_provided_airmass=True)
"""
Explanation: Try it out when a value has not been provided but use_provided_airmass is True
End of explanation
"""
skycalc.skymag('r', 0.925, -0.4789, 59580.14)
"""
Explanation: Calculating the skymag
End of explanation
"""
skycalc.calculatePointings(pointings)
x = skycalc.calculatePointings(pointings).join(pointings, rsuffix='opsim')
x['airmass_diff'] = x.airmass - x.airmassopsim
x.airmass_diff.hist()
"""
Explanation: Calculating values for an OpSim dataframe
End of explanation
"""
|
phani-vadrevu/phani-vadrevu.github.io | markdown_generator/publications.ipynb | mit | !cat publications.tsv
"""
Explanation: Publications markdown generator for academicpages
Takes a TSV of publications with metadata and converts them for use with academicpages.github.io. This is an interactive Jupyter notebook (see more info here). The core python code is also in publications.py. Run either from the markdown_generator folder after replacing publications.tsv with one containing your data.
TODO: Make this work with BibTex and other databases of citations, rather than Stuart's non-standard TSV format and citation style.
Data format
The TSV needs to have the following columns: pub_date, title, venue, excerpt, citation, site_url, and paper_url, with a header at the top.
excerpt and paper_url can be blank, but the others must have values.
pub_date must be formatted as YYYY-MM-DD.
url_slug will be the descriptive part of the .md file and the permalink URL for the page about the paper. The .md file will be YYYY-MM-DD-[url_slug].md and the permalink will be https://[yourdomain]/publications/YYYY-MM-DD-[url_slug]
This is how the raw file looks (it doesn't look pretty, use a spreadsheet or other program to edit and create).
End of explanation
"""
import pandas as pd
"""
Explanation: Import pandas
We are using the very handy pandas library for dataframes.
End of explanation
"""
publications = pd.read_csv("publications.tsv", sep="\t", header=0)
publications
publications.columns
"""
Explanation: Import TSV
Pandas makes this easy with the read_csv function. We are using a TSV, so we specify the separator as a tab, or \t.
I found it important to put this data in a tab-separated values format, because there are a lot of commas in this kind of data and comma-separated values can get messed up. However, you can modify the import statement, as pandas also has read_excel(), read_json(), and others.
End of explanation
"""
html_escape_table = {
"&": "&",
'"': """,
"'": "'"
}
def html_escape(text):
"""Produce entities within text."""
return "".join(html_escape_table.get(c,c) for c in text)
"""
Explanation: Escape special characters
YAML is very picky about how it takes a valid string, so we are replacing single and double quotes (and ampersands) with their HTML encoded equivilents. This makes them look not so readable in raw format, but they are parsed and rendered nicely.
End of explanation
"""
import os
for row, item in publications.iterrows():
paper_name = item.paper_url.rsplit('/', 1)[1].split('.')[0]
md_filename = str(item.pub_year) + "-" + paper_name + ".md"
html_filename = str(item.pub_year) + "-" + paper_name
## YAML variables
md = "---\ntitle: \"" + item.title + '"\n'
md += """collection: publications"""
md += """\npermalink: /publication/""" + html_filename
md += "\nyear: " + str(item.pub_year)
md += "\nconference: '" + html_escape(item.conference) + "'"
md += "\nauthors: " + "[" + ", ".join(["'" + a + "'" for a in item.authors.split(', ')]) + "]"
md += "\nlocation: '" + html_escape(item.location) + "'"
md += "\naccepted: '" + str(item.accepted) + "'"
md += "\nsubmitted: '" + str(item.submitted) + "'"
if len(str(item.paper_url)) > 5:
md += "\npaper_url: '" + item.paper_url + "'"
if item.video_url != '-':
md += "\nvideo_url: '" + item.video_url + "'"
md += "\n---"
## Markdown description for individual page
#if len(str(item.paper_url)) > 5:
# md += "\n[Download paper here](" + item.paper_url + ")\n"
md_filename = os.path.basename(md_filename)
with open("../_publications/" + md_filename, 'w') as f:
f.write(md)
"""
Explanation: Creating the markdown files
This is where the heavy lifting is done. This loops through all the rows in the TSV dataframe, then starts to concatentate a big string (md) that contains the markdown for each type. It does the YAML metadata first, then does the description for the individual page.
End of explanation
"""
!ls ../_publications/
!cat ../_publications/2022-ape_humans.md
"""
Explanation: These files are in the publications directory, one directory below where we're working from.
End of explanation
"""
|
obust/Pandas-Tutorial | Pandas I - Series and DataFrames.ipynb | mit | import pandas as pd
import numpy as np
pd.set_option('max_columns', 50)
"""
Explanation: Pandas I - Series and DataFrame
Pandas introduces two new data structures to Python, both of which are built on top of NumPy (this means it's fast) :
- Series : one-dimensional object akin to an observation/row in a dataset
- DataFrame : tabular data structure akin to a database table
End of explanation
"""
# create a Series with an arbitrary list
s = pd.Series([7, 'Heisenberg', 3.14, -1789710578, 'Happy Eating!'])
s
"""
Explanation: Summary
Series<br>
1.1 Creating<br>
1.2 Selecting<br>
1.3 Editing<br>
1.4 Mathematical Operations<br>
1.5 Missing Values
DataFrame<br>
2.1 From Dictionary of Lists<br>
2.2 From/To CSV<br>
2.3 From/To Excel<br>
2.4 From/To Database<br>
2.5 From Clipboard<br>
2.6 From URL<br>
2.7 From Google Analytics API
Merge<br>
3.1 Inner Join (default)<br>
3.2 Left Outer Join<br>
3.3 Right Outer Join<br>
3.4 Full Outer Join<br>
Concatenate
1. Series
A Series is a one-dimensional object similar to an array, list, or column in a table.<br>
It will assign a labeled index to each item in the Series. By default, each item will receive an index label from 0 to N, where N is the length of the Series minus one.
1.1 Creating
End of explanation
"""
s = pd.Series([7, 'Heisenberg', 3.14, -1789710578, 'Happy Eating!'],
index=['A', 'Z', 'C', 'Y', 'E'])
s
"""
Explanation: Alternatively, you can specify an index to use when creating the Series.
End of explanation
"""
d = {'Chicago': 1000, 'New York': 1300, 'Portland': 900, 'San Francisco': 1100,
'Austin': 450, 'Boston': None}
cities = pd.Series(d)
cities
"""
Explanation: The Series constructor can convert a dictionary as well, using the keys of the dictionary as its index.
End of explanation
"""
cities['Chicago']
cities[['Chicago', 'Portland', 'San Francisco']]
"""
Explanation: 1.2 Selecting
You can use the index to select specific items from the Series ...
End of explanation
"""
cities[cities < 1000]
"""
Explanation: Or you can use boolean indexing for selection.
End of explanation
"""
less_than_1000 = cities < 1000
print less_than_1000
print '\n'
print cities[less_than_1000]
"""
Explanation: That last one might be a little weird, so let's make it more clear - cities < 1000 returns a Series of True/False values, which we then pass to our Series cities, returning the corresponding True items.
End of explanation
"""
# changing based on the index
print 'Old value:', cities['Chicago']
cities['Chicago'] = 1400
print 'New value:', cities['Chicago']
# changing values using boolean logic
print cities[cities < 1000]
print '\n'
cities[cities < 1000] = 750
print cities[cities < 1000]
"""
Explanation: 1.3 Editing
You can also change the values in a Series on the fly.
End of explanation
"""
# divide city values by 3
cities / 3
# square city values
np.square(cities)
"""
Explanation: 1.4 Mathematical Operations
Mathematical operations can be done using scalars and functions.
End of explanation
"""
print cities[['Chicago', 'New York', 'Portland']]
print '\n'
print cities[['Austin', 'New York']]
print '\n'
print cities[['Chicago', 'New York', 'Portland']] + cities[['Austin', 'New York']]
"""
Explanation: You can add two Series together, which returns a union of the two Series with the addition occurring on the shared index values. Values on either Series that did not have a shared index will produce a NULL/NaN (not a number).
End of explanation
"""
print 'Seattle' in cities
print 'San Francisco' in cities
"""
Explanation: Notice that because Austin, Chicago, and Portland were not found in both Series, they were returned with NULL/NaN values.
1.5 Missing Values
What if you aren't sure whether an item is in the Series? You can check using idiomatic Python.
End of explanation
"""
# returns a boolean series indicating which values aren't NULL
cities.notnull()
# use boolean logic to grab the NULL cities
print cities.isnull()
print '\n'
print cities[cities.isnull()]
"""
Explanation: NULL checking can be performed with isnull and notnull.
End of explanation
"""
data = {'year': [2010, 2011, 2012, 2011, 2012, 2010, 2011, 2012],
'team': ['Bears', 'Bears', 'Bears', 'Packers', 'Packers', 'Lions', 'Lions', 'Lions'],
'wins': [11, 8, 10, 15, 11, 6, 10, 4],
'losses': [5, 8, 6, 1, 5, 10, 6, 12]}
football = pd.DataFrame(data, columns=['year', 'team', 'wins', 'losses'])
print football
"""
Explanation: 2. DataFrame
A DataFrame is a tabular data structure comprised of rows and columns, akin to a spreadsheet, database table, or R's data.frame object. You can also think of a DataFrame as a group of Series objects (rows) that share an index (the column names).
2.1 From Dictionnary of Lists
To create a DataFrame out of common Python data structures, we can pass a dictionary of lists to the DataFrame constructor.
Using the columns parameter allows us to tell the constructor how we'd like the columns ordered. By default, the DataFrame constructor will order the columns alphabetically (though this isn't the case when reading from a file - more on that next).
End of explanation
"""
%cd ~/Dropbox/tutorials/pandas/
# Source: baseball-reference.com/players/r/riverma01.shtml
!head -n 5 data/mariano-rivera.csv
from_csv = pd.read_csv('data/mariano-rivera.csv')
from_csv.head()
"""
Explanation: Much more often, you'll have a dataset you want to read into a DataFrame. Let's go through several common ways of doing so.
2.2 From/To CSV
Reading a CSV is as simple as calling the read_csv() function. By default, the read_csv() function expects the column separator to be a comma, but you can change that using the sep parameter.
End of explanation
"""
# command line : read head of file
# Source: pro-football-reference.com/players/M/MannPe00/touchdowns/passing/2012/
!head -n 5 data/peyton-passing-TDs-2012.csv
cols = ['num', 'game', 'date', 'team', 'home_away', 'opponent',
'result', 'quarter', 'distance', 'receiver', 'score_before',
'score_after']
no_headers = pd.read_csv('data/peyton-passing-TDs-2012.csv', sep=',', header=None,
names=cols)
no_headers.head()
"""
Explanation: Our file had headers, which the function inferred upon reading in the file. Had we wanted to be more explicit, we could have passed header=None to the function along with a list of column names to use:
End of explanation
"""
# this is the DataFrame we created from a dictionary earlier
print football.head()
# since our index on the football DataFrame is meaningless, let's not write it
football.to_excel('data/football.xlsx', index=False)
# command line : list .xlsx files
!ls -l data/*.xlsx
# delete the DataFrame
del football
# read from Excel
football = pd.read_excel('data/football.xlsx')
print football
"""
Explanation: pandas' various reader functions have many parameters allowing you to do things like skipping lines of the file, parsing dates, or specifying how to handle NA/NULL datapoints.
Writing to CSV
There's also a set of writer functions for writing to a variety of formats (CSVs, HTML tables, JSON). They function exactly as you'd expect and are typically called to_format:
python
my_dataframe.to_csv('path_to_file.csv')
Take a look at the IO documentation to familiarize yourself with file reading/writing functionality.
2.3 From/To Excel
Know who hates VBA? Me. I bet you do, too. Thankfully, pandas allows you to read and write Excel files, so you can easily read from Excel, write your code in Python, and then write back out to Excel - no need for VBA.
Reading Excel files requires the xlrd library. You can install it via pip (pip install xlrd).
Let's first write a DataFrame to Excel.
End of explanation
"""
from pandas.io import sql
import sqlite3
conn = sqlite3.connect('/Users/greda/Dropbox/gregreda.com/_code/towed')
query = "SELECT * FROM towed WHERE make = 'FORD';"
results = sql.read_frame(query, con=conn)
print results.head()
"""
Explanation: 2.4 From/To Database
pandas also has some support for reading/writing DataFrames directly from/to a database [docs]. You'll typically just need to pass a connection object to the read_frame or write_frame functions within the pandas.io module.
Note that write_frame executes as a series of INSERT INTO statements and thus trades speed for simplicity. If you're writing a large DataFrame to a database, it might be quicker to write the DataFrame to CSV and load that directly using the database's file import arguments.
End of explanation
"""
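The `sql.read_frame` call above comes from older pandas; in current versions the equivalent is `pd.read_sql_query`. A self-contained sketch against a throwaway in-memory database with a hypothetical `towed` table (the real database file above is not assumed):

```python
import sqlite3
import pandas as pd

# throwaway in-memory database with a hypothetical 'towed' table
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE towed (make TEXT, color TEXT)")
conn.executemany("INSERT INTO towed VALUES (?, ?)",
                 [('FORD', 'blue'), ('FORD', 'red'), ('HONDA', 'grey')])

# read_frame is the legacy name; modern pandas exposes pd.read_sql_query
results = pd.read_sql_query("SELECT * FROM towed WHERE make = 'FORD';", conn)
conn.close()
```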
hank = pd.read_clipboard()
hank.head()
"""
Explanation: 2.5 From Clipboard
While the results of a query can be read directly into a DataFrame, I prefer to read the results directly from the clipboard. I'm often tweaking queries in my SQL client (Sequel Pro), so I would rather see the results before I read it into pandas. Once I'm confident I have the data I want, then I'll read it into a DataFrame.
This works just as well with any type of delimited data you've copied to your clipboard. The function does a good job of inferring the delimiter, but you can also use the sep parameter to be explicit.
Hank Aaron
End of explanation
"""
from urllib2 import urlopen
from StringIO import StringIO
# store the text from the URL response in our url variable
url = urlopen('https://raw.github.com/gjreda/best-sandwiches/master/data/best-sandwiches-geocode.tsv').read()
# treat the tab-separated text as a file with StringIO and read it into a DataFrame
from_url = pd.read_table(StringIO(url), sep='\t')
from_url.head(3)
"""
Explanation: 2.6 From URL
We can also use the Python's StringIO library to read data directly from a URL. StringIO allows you to treat a string as a file-like object.
Let's use the best sandwiches data that I wrote about scraping a while back.
End of explanation
"""
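The StringIO idea above works on any delimited string, not just a URL response. A minimal sketch with made-up tab-separated text standing in for the downloaded body (in Python 3, `StringIO` lives in the `io` module):

```python
from io import StringIO  # Python 3 home of StringIO

import pandas as pd

# tab-separated text standing in for the URL response body
text = "sandwich\tcity\nBLT\tChicago\nReuben\tNew York\n"
from_text = pd.read_csv(StringIO(text), sep='\t')
```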
left = pd.DataFrame({'key': range(5),
'left_value': ['L0', 'L1', 'L2', 'L3', 'L4']})
right = pd.DataFrame({'key': range(2, 7),
'right_value': ['R0', 'R1', 'R2', 'R3', 'R4']})
print left, '\n'
print right
"""
Explanation: 2.7 From Google Analytics API
pandas also has some integration with the Google Analytics API, though there is some setup required. I won't be covering it, but you can read more about it here and here.
3. Merge
Use the pandas.merge() function to merge/join datasets in a relational manner. (See DOC)<br>
Like SQL's JOIN clause, pandas.merge() allows two DataFrames to be joined on one or more keys.
parameter how : specify which keys are to be included in the resulting table
parameters on, left_on, right_on, left_index, right_index : to specify the columns or indexes on which to join.
how : {"inner", "left", "right", "outer"}
"left" : use keys from left frame only
"right" : use keys from right frame only
"inner" (default) : use intersection of keys from both frames
"outer" : use union of keys from both frames
There are several cases to consider which are very important to understand:
- one-to-one joins: to define these relationships, only one table is necessary (no join)
- one user has one phone number
- one phone number belongs to one user
- one-to-many joins: to define these relationships, two tables are necessary
- one post has many comments
- one comment belongs to one post
- merge(left, right, on=['key'], how='?')
- many-to-many joins: to define these relationships, three tables are necessary
- one playlist has many songs
- one song belongs to many playlists
- merge(left.reset_index(), right.reset_index(), on=['key'], how='?').set_index(['key_left','key_right'])
Below are the different joins in SQL.
End of explanation
"""
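The `how=` options listed above can be compared directly on the two overlapping-key frames. A small sketch mirroring the `left`/`right` frames defined earlier (redeclared here so the snippet is self-contained):

```python
import pandas as pd

left = pd.DataFrame({'key': range(5), 'left_value': list('abcde')})
right = pd.DataFrame({'key': range(2, 7), 'right_value': list('vwxyz')})

inner = pd.merge(left, right, on='key', how='inner')  # only keys 2, 3, 4
outer = pd.merge(left, right, on='key', how='outer')  # union of keys 0..6
```

The outer result carries NaN where one side has no match, which is how the missing keys show up.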
print pd.merge(left, right, on='key', how='inner')
"""
Explanation: 3.1 Inner Join (default)
Selects the rows from both tables with matching keys.
End of explanation
"""
print pd.merge(left, right, on='key', how='left')
"""
Explanation: If our key columns had different names, we could have used the left_on and right_on parameters to specify which fields to join from each frame.
python
pd.merge(left, right, left_on='left_key', right_on='right_key')
If our key columns were indexes, we could use the left_index or right_index parameters to specify to use the index column, with a True/False value. You can mix and match columns and indexes like so:
python
pd.merge(left, right, left_on='key', right_index=True)
3.2 Left Outer Join
Returns all rows from the left frame, with the matching rows in the right frame. The result is NULL in the right side when there is no match (NaN).
End of explanation
"""
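The `left_on`/`right_on` pattern mentioned above can be sketched with two small hypothetical frames whose key columns have different names:

```python
import pandas as pd

left2 = pd.DataFrame({'left_key': [1, 2, 3], 'lval': ['a', 'b', 'c']})
right2 = pd.DataFrame({'right_key': [2, 3, 4], 'rval': ['x', 'y', 'z']})

# key columns with different names: name each side explicitly
m = pd.merge(left2, right2, left_on='left_key', right_on='right_key')
```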
print pd.merge(left, right, on='key', how='right')
"""
Explanation: 3.3 Right Outer Join
Returns all rows from the right frame, with the matching rows in the left frame. The result is NULL in the left side when there is no match (NaN).
End of explanation
"""
print pd.merge(left, right, on='key', how='outer')
"""
Explanation: 3.4 Full Outer Join
Combines the results of both the Left Outer Join and the Right Outer Join.
End of explanation
"""
pd.concat([left, right], axis=1)
"""
Explanation: 4. Concatenate
Use the pandas.concat() function to combine Series/DataFrames into one unified object. (See DOC)
pandas.concat() takes a list of Series or DataFrames and returns a Series or DataFrame of the concatenated objects. Note that because the function takes a list, you can combine many objects at once.
Use axis parameter to define along which axis to concatenate:
axis = 0 : concatenate vertically (default)<br>
axis = 1 : concatenate side-by-side
End of explanation
"""
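The two axis choices above can be sketched with two tiny single-column frames (hypothetical data, just to show the shapes):

```python
import pandas as pd

a = pd.DataFrame({'x': [1, 2]})
b = pd.DataFrame({'x': [3, 4]})

stacked = pd.concat([a, b], axis=0)  # vertical: 4 rows, 1 column
side = pd.concat([a, b], axis=1)     # side-by-side: 2 rows, 2 columns
```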
|
AtmaMani/pyChakras | udemy_ml_bootcamp/Python-for-Data-Analysis/Pandas/DataFrames.ipynb | mit | import pandas as pd
import numpy as np
from numpy.random import randn
np.random.seed(101)
df = pd.DataFrame(randn(5,4),index='A B C D E'.split(),columns='W X Y Z'.split())
df
"""
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
DataFrames
DataFrames are the workhorse of pandas and are directly inspired by the R programming language. We can think of a DataFrame as a bunch of Series objects put together to share the same index. Let's use pandas to explore this topic!
End of explanation
"""
df['W']
# Pass a list of column names
df[['W','Z']]
# SQL Syntax (NOT RECOMMENDED!)
df.W
"""
Explanation: Selection and Indexing
Let's learn the various methods to grab data from a DataFrame
End of explanation
"""
type(df['W'])
"""
Explanation: DataFrame Columns are just Series
End of explanation
"""
df['new'] = df['W'] + df['Y']
df
"""
Explanation: Creating a new column:
End of explanation
"""
df.drop('new',axis=1)
# Not inplace unless specified!
df
df.drop('new',axis=1,inplace=True)
df
"""
Explanation: Removing Columns
End of explanation
"""
df.drop('E',axis=0)
"""
Explanation: Can also drop rows this way:
End of explanation
"""
df.loc['A']
"""
Explanation: Selecting Rows
End of explanation
"""
df.iloc[2]
"""
Explanation: Or select based off of position instead of label
End of explanation
"""
df.loc['B','Y']
df.loc[['A','B'],['W','Y']]
"""
Explanation: Selecting subset of rows and columns
End of explanation
"""
df
df>0
df[df>0]
df[df['W']>0]
df[df['W']>0]['Y']
df[df['W']>0][['Y','X']]
"""
Explanation: Conditional Selection
An important feature of pandas is conditional selection using bracket notation, very similar to numpy:
End of explanation
"""
df[(df['W']>0) & (df['Y'] > 1)]
"""
Explanation: For two conditions you can use | and & with parenthesis:
End of explanation
"""
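The parentheses in the cell above are not optional. A self-contained sketch on a small hypothetical frame showing why the grouped comparisons matter:

```python
import pandas as pd

demo = pd.DataFrame({'W': [1, -1, 2], 'Y': [0, 3, 5]})

# parentheses are required: & binds tighter than > in Python, so
# demo['W'] > 0 & demo['Y'] > 1 would not mean what you expect
both = demo[(demo['W'] > 0) & (demo['Y'] > 1)]
```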
df
# Reset to default 0,1...n index
df.reset_index()
newind = 'CA NY WY OR CO'.split()
df['States'] = newind
df
df.set_index('States')
df
df.set_index('States',inplace=True)
df
"""
Explanation: More Index Details
Let's discuss some more features of indexing, including resetting the index or setting it something else. We'll also talk about index hierarchy!
End of explanation
"""
# Index Levels
outside = ['G1','G1','G1','G2','G2','G2']
inside = [1,2,3,1,2,3]
hier_index = list(zip(outside,inside))
hier_index = pd.MultiIndex.from_tuples(hier_index)
hier_index
df = pd.DataFrame(np.random.randn(6,2),index=hier_index,columns=['A','B'])
df
"""
Explanation: Multi-Index and Index Hierarchy
Let us go over how to work with Multi-Index, first we'll create a quick example of what a Multi-Indexed DataFrame would look like:
End of explanation
"""
df.loc['G1']
df.loc['G1'].loc[1]
df.index.names
df.index.names = ['Group','Num']
df
df.xs('G1')
df.xs(['G1',1])
df.xs(1,level='Num')
"""
Explanation: Now let's show how to index this! For index hierarchy we use df.loc[]; if this were on the columns axis, you would just use normal bracket notation df[]. Calling one level of the index returns the sub-dataframe:
End of explanation
"""
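The `loc`/`xs` behavior described above can be checked on a tiny self-contained MultiIndex frame (toy values, same Group/Num structure as above):

```python
import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [('G1', 1), ('G1', 2), ('G2', 1)], names=['Group', 'Num'])
mdf = pd.DataFrame({'A': [10, 20, 30]}, index=idx)

g1 = mdf.loc['G1']             # sub-frame for one outer-level label
num1 = mdf.xs(1, level='Num')  # cross-section on the inner level
```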
|
SlipknotTN/udacity-deeplearning-nanodegree | seq2seq/sequence_to_sequence_implementation.ipynb | mit | import helper
source_path = 'data/letters_source.txt'
target_path = 'data/letters_target.txt'
source_sentences = helper.load_data(source_path)
target_sentences = helper.load_data(target_path)
"""
Explanation: Character Sequence to Sequence
In this notebook, we'll build a model that takes in a sequence of letters, and outputs a sorted version of that sequence. We'll do that using what we've learned so far about Sequence to Sequence models.
<img src="images/sequence-to-sequence.jpg"/>
Dataset
The dataset lives in the /data/ folder. At the moment, it is made up of the following files:
* letters_source.txt: The list of input letter sequences. Each sequence is its own line.
* letters_target.txt: The list of target sequences we'll use in the training process. Each sequence here is a response to the input sequence in letters_source.txt with the same line number.
End of explanation
"""
source_sentences[:50].split('\n')
"""
Explanation: Let's start by examining the current state of the dataset. source_sentences contains the entire input sequence file as text delimited by newline symbols.
End of explanation
"""
target_sentences[:50].split('\n')
"""
Explanation: target_sentences contains the entire output sequence file as text delimited by newline symbols. Each line corresponds to the same line in source_sentences and contains its characters in sorted order.
End of explanation
"""
def extract_character_vocab(data):
special_words = ['<pad>', '<unk>', '<s>', '<\s>']
set_words = set([character for line in data.split('\n') for character in line])
int_to_vocab = {word_i: word for word_i, word in enumerate(special_words + list(set_words))}
vocab_to_int = {word: word_i for word_i, word in int_to_vocab.items()}
return int_to_vocab, vocab_to_int
# Build int2letter and letter2int dicts
source_int_to_letter, source_letter_to_int = extract_character_vocab(source_sentences)
target_int_to_letter, target_letter_to_int = extract_character_vocab(target_sentences)
# Convert characters to ids
source_letter_ids = [[source_letter_to_int.get(letter, source_letter_to_int['<unk>']) for letter in line] for line in source_sentences.split('\n')]
target_letter_ids = [[target_letter_to_int.get(letter, target_letter_to_int['<unk>']) for letter in line] for line in target_sentences.split('\n')]
print("Example source sequence")
print(source_letter_ids[:3])
print("\n")
print("Example target sequence")
print(target_letter_ids[:3])
"""
Explanation: Preprocess
To do anything useful with it, we'll need to turn the characters into a list of integers:
End of explanation
"""
def pad_id_sequences(source_ids, source_letter_to_int, target_ids, target_letter_to_int, sequence_length):
new_source_ids = [sentence + [source_letter_to_int['<pad>']] * (sequence_length - len(sentence)) \
for sentence in source_ids]
new_target_ids = [sentence + [target_letter_to_int['<pad>']] * (sequence_length - len(sentence)) \
for sentence in target_ids]
return new_source_ids, new_target_ids
# Use the longest sequence as sequence length
sequence_length = max(
[len(sentence) for sentence in source_letter_ids] + [len(sentence) for sentence in target_letter_ids])
# Pad all sequences up to sequence length
source_ids, target_ids = pad_id_sequences(source_letter_ids, source_letter_to_int,
target_letter_ids, target_letter_to_int, sequence_length)
print("Sequence Length")
print(sequence_length)
print("\n")
print("Input sequence example")
print(source_ids[:3])
print("\n")
print("Target sequence example")
print(target_ids[:3])
"""
Explanation: The last step in the preprocessing stage is to determine the longest sequence size in the dataset we'll be using, then pad all the sequences to that length.
End of explanation
"""
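The padding step above is a one-line list comprehension per sequence. A pure-Python sketch with toy id sequences (pad id 0, as with the `<pad>` token):

```python
# pad every id sequence to the longest length with a pad id of 0,
# mirroring pad_id_sequences above (toy ids, not the real vocabulary)
pad_id = 0
seqs = [[5, 6], [7, 8, 9], [4]]
max_len = max(len(s) for s in seqs)
padded = [s + [pad_id] * (max_len - len(s)) for s in seqs]
```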
from distutils.version import LooseVersion
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
"""
Explanation: This is the final shape we need them to be in. We can now proceed to building the model.
Model
Check the Version of TensorFlow
This will check to make sure you have the correct version of TensorFlow
End of explanation
"""
# Number of Epochs
epochs = 60
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 50
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 13
decoding_embedding_size = 13
# Learning Rate
learning_rate = 0.001
"""
Explanation: Hyperparameters
End of explanation
"""
input_data = tf.placeholder(tf.int32, [batch_size, sequence_length])
targets = tf.placeholder(tf.int32, [batch_size, sequence_length])
lr = tf.placeholder(tf.float32)
"""
Explanation: Input
End of explanation
"""
source_vocab_size = len(source_letter_to_int)
# Encoder embedding
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, encoding_embedding_size)
# Encoder
enc_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(num_layers)])
_, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, dtype=tf.float32)
"""
Explanation: Sequence to Sequence
The decoder is probably the most complex part of this model. We need to declare a decoder for the training phase, and a decoder for the inference/prediction phase. These two decoders will share their parameters (so that all the weights and biases that are set during the training phase can be used when we deploy the model).
First, we'll need to define the type of cell we'll be using for our decoder RNNs. We opted for LSTM.
Then, we'll need to hookup a fully connected layer to the output of decoder. The output of this layer tells us which word the RNN is choosing to output at each time step.
Let's first look at the inference/prediction decoder. It is the one we'll use when we deploy our model in the wild (even though it comes second in the actual code).
<img src="images/sequence-to-sequence-inference-decoder.png"/>
We'll hand our encoder hidden state to the inference decoder and have it process its output. TensorFlow handles most of the logic for us. We just have to use tf.contrib.seq2seq.simple_decoder_fn_inference and tf.contrib.seq2seq.dynamic_rnn_decoder and supply them with the appropriate inputs.
Notice that the inference decoder feeds the output of each time step as an input to the next.
As for the training decoder, we can think of it as looking like this:
<img src="images/sequence-to-sequence-training-decoder.png"/>
The training decoder does not feed the output of each time step to the next. Rather, the inputs to the decoder time steps are the target sequence from the training dataset (the orange letters).
Encoding
Embed the input data using tf.contrib.layers.embed_sequence
Pass the embedded input into a stack of RNNs. Save the RNN state and ignore the output.
End of explanation
"""
import numpy as np
# Process the input we'll feed to the decoder
ending = tf.strided_slice(targets, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_letter_to_int['<s>']), ending], 1)
demonstration_outputs = np.reshape(range(batch_size * sequence_length), (batch_size, sequence_length))
sess = tf.InteractiveSession()
print("Targets")
print(demonstration_outputs[:2])
print("\n")
print("Processed Decoding Input")
print(sess.run(dec_input, {targets: demonstration_outputs})[:2])
"""
Explanation: Process Decoding Input
End of explanation
"""
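The `strided_slice`/`concat` pair above drops the last target token, shifts everything right, and prepends `<s>`. The same transformation in plain numpy, with a toy batch (`go_id` stands in for `target_letter_to_int['<s>']`):

```python
import numpy as np

# drop the last target token, shift right, and prepend the go symbol
go_id = 2
batch_targets = np.array([[4, 5, 6, 7],
                          [8, 9, 10, 11]])
dec_in = np.concatenate(
    [np.full((batch_targets.shape[0], 1), go_id), batch_targets[:, :-1]],
    axis=1)
```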
target_vocab_size = len(target_letter_to_int)
# Decoder Embedding
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
# Decoder RNNs
dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(num_layers)])
with tf.variable_scope("decoding") as decoding_scope:
# Output Layer
output_fn = lambda x: tf.contrib.layers.fully_connected(x, target_vocab_size, None, scope=decoding_scope)
"""
Explanation: Decoding
Embed the decoding input
Build the decoding RNNs
Build the output layer in the decoding scope, so the weight and bias can be shared between the training and inference decoders.
End of explanation
"""
with tf.variable_scope("decoding") as decoding_scope:
# Training Decoder
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(enc_state)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
# Apply output function
train_logits = output_fn(train_pred)
"""
Explanation: Decoder During Training
Build the training decoder using tf.contrib.seq2seq.simple_decoder_fn_train and tf.contrib.seq2seq.dynamic_rnn_decoder.
Apply the output layer to the output of the training decoder
End of explanation
"""
with tf.variable_scope("decoding", reuse=True) as decoding_scope:
# Inference Decoder
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn, enc_state, dec_embeddings, target_letter_to_int['<s>'], target_letter_to_int['<\s>'],
sequence_length - 1, target_vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)
"""
Explanation: Decoder During Inference
Reuse the weights and biases from the training decoder using tf.variable_scope("decoding", reuse=True)
Build the inference decoder using tf.contrib.seq2seq.simple_decoder_fn_inference and tf.contrib.seq2seq.dynamic_rnn_decoder.
The output function is applied to the output in this step
End of explanation
"""
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([batch_size, sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Optimization
Our loss function is tf.contrib.seq2seq.sequence_loss provided by the tensor flow seq2seq module. It calculates a weighted cross-entropy loss for the output logits.
End of explanation
"""
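At its core, the sequence loss is a cross-entropy of the logits against integer targets, averaged over batch and time. A numpy sketch of that computation (a simplification, ignoring the per-position weights):

```python
import numpy as np

def sequence_xent(logits, targets):
    # logits: (batch, time, vocab); targets: (batch, time) integer ids
    shifted = logits - logits.max(axis=-1, keepdims=True)  # numeric stability
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    # probability assigned to each correct id, then mean negative log
    picked = np.take_along_axis(probs, targets[..., None], axis=-1)[..., 0]
    return -np.log(picked).mean()

uniform_logits = np.zeros((2, 3, 4))           # uniform over 4 symbols
targets = np.zeros((2, 3), dtype=int)
loss = sequence_xent(uniform_logits, targets)  # -log(1/4) = log(4)
```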
import numpy as np
train_source = source_ids[batch_size:]
train_target = target_ids[batch_size:]
valid_source = source_ids[:batch_size]
valid_target = target_ids[:batch_size]
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch, targets: target_batch, lr: learning_rate})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source})
train_acc = np.mean(np.equal(target_batch, np.argmax(batch_train_logits, 2)))
valid_acc = np.mean(np.equal(valid_target, np.argmax(batch_valid_logits, 2)))
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_ids) // batch_size, train_acc, valid_acc, loss))
"""
Explanation: Train
We're now ready to train our model. If you run into OOM (out of memory) issues during training, try to decrease the batch_size.
End of explanation
"""
input_sentence = 'hello'
input_sentence = [source_letter_to_int.get(word, source_letter_to_int['<unk>']) for word in input_sentence.lower()]
input_sentence = input_sentence + [0] * (sequence_length - len(input_sentence))
batch_shell = np.zeros((batch_size, sequence_length))
batch_shell[0] = input_sentence
chatbot_logits = sess.run(inference_logits, {input_data: batch_shell})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in input_sentence]))
print(' Input Words: {}'.format([source_int_to_letter[i] for i in input_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(chatbot_logits, 1)]))
print(' Chatbot Answer Words: {}'.format([target_int_to_letter[i] for i in np.argmax(chatbot_logits, 1)]))
"""
Explanation: Prediction
End of explanation
"""
|
xdnian/pyml | code/ch10/ch10.ipynb | mit | %load_ext watermark
%watermark -a '' -u -d -v -p numpy,pandas,matplotlib,sklearn,seaborn
"""
Explanation: Copyright (c) 2015, 2016 Sebastian Raschka
<br>
Li-Yi Wei
https://github.com/1iyiwei/pyml
MIT License
Python Machine Learning - Code Examples
Chapter 10 - Predicting Continuous Target Variables with Regression Analysis
We talk only about classification so far
Regression is also important
Classification versus regression
Classification: discrete output
<img src='./images/01_03.png'>
Regression: continuous output
<img src='./images/01_04.png' width=50%>
Both are supervised learning
* require target variables
More similar than they appear
* similar principles, e.g. optimization
* similar goals, e.g. linear decision boundary for classification versus line fitting for regression
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
End of explanation
"""
from IPython.display import Image
%matplotlib inline
# Added version check for recent scikit-learn 0.18 checks
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
"""
Explanation: The use of watermark is optional. You can install this IPython extension via "pip install watermark". For more information, please see: https://github.com/rasbt/watermark.
Overview
Introducing a simple linear regression model
Exploring the Housing Dataset
Visualizing the important characteristics of a dataset
Implementing an ordinary least squares linear regression model
Solving regression for regression parameters with gradient descent
Estimating the coefficient of a regression model via scikit-learn
Fitting a robust regression model using RANSAC
Evaluating the performance of linear regression models
Using regularized methods for regression
Turning a linear regression model into a curve - polynomial regression
Modeling nonlinear relationships in the Housing Dataset
Dealing with nonlinear relationships using random forests
Decision tree regression
Random forest regression
Summary
End of explanation
"""
import pandas as pd
# online dataset
data_src = 'https://archive.ics.uci.edu/ml/machine-learning-databases/housing/housing.data'
# local dataset
data_src = '../datasets/housing/housing.data'
df = pd.read_csv(data_src,
header=None,
sep='\s+')
df.columns = ['CRIM', 'ZN', 'INDUS', 'CHAS',
'NOX', 'RM', 'AGE', 'DIS', 'RAD',
'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
df.head()
print(df.shape)
"""
Explanation: Introducing a simple linear regression model
Model:
$
y = \sum_{i=0}^n w_i x_i = \mathbf{w}^T \mathbf{x}
$
with $x_0 = 1$.
Given a collection of sample data ${\mathbf{x^{(i)}}, y^{(i)} }$, find the line $\mathbf{w}$ that minimizes the regression error:
$$
\begin{align}
L(X, Y, \mathbf{w})
= \sum_i \left( y^{(i)} - \hat{y}^{(i)} \right)^2
= \sum_i \left( y^{(i)} - \mathbf{w}^T \mathbf{x}^{(i)} \right)^2
\end{align}
$$
2D case
$
y = w_0 + w_1 x
$
<img src='./images/10_01.png' width=90%>
General regression models
We can fit different analytic models/functions (not just linear ones) to a given dataset.
Start a linear one with a real-data set first
* easier to understand and interpret (e.g. positive/negative correlation)
* less prone to over-fitting
Followed by non-linear models
Exploring the Housing dataset
Let's explore a realistic problem: predicting house prices based on their features.
This is a regression problem
* house prices are continuous variables not discrete categories
Source: https://archive.ics.uci.edu/ml/datasets/Housing
Boston suburbs
Attributes (1-13) and target (14):
<pre>
1. CRIM per capita crime rate by town
2. ZN proportion of residential land zoned for lots over 25,000 sq.ft.
3. INDUS proportion of non-retail business acres per town
4. CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
5. NOX nitric oxides concentration (parts per 10 million)
6. RM average number of rooms per dwelling
7. AGE proportion of owner-occupied units built prior to 1940
8. DIS weighted distances to five Boston employment centres
9. RAD index of accessibility to radial highways
10. TAX full-value property-tax rate per $10,000
11. PTRATIO pupil-teacher ratio by town
12. B 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
13. LSTAT % lower status of the population
14. MEDV Median value of owner-occupied homes in $1000's
</pre>
Read the dataset
End of explanation
"""
if False:
df = pd.read_csv('https://raw.githubusercontent.com/1iyiwei/pyml/master/code/datasets/housing/housing.data',
header=None, sep='\s+')
df.columns = ['CRIM', 'ZN', 'INDUS', 'CHAS',
'NOX', 'RM', 'AGE', 'DIS', 'RAD',
'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
df.head()
"""
Explanation: <hr>
Note:
If the link to the Housing dataset provided above does not work for you, you can find a local copy in this repository at ./../datasets/housing/housing.data.
Or you could fetch it via
End of explanation
"""
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='whitegrid', context='notebook')
cols = ['LSTAT', 'INDUS', 'NOX', 'RM', 'MEDV']
sns.pairplot(df[cols], size=2.5)
plt.tight_layout()
# plt.savefig('./figures/scatter.png', dpi=300)
plt.show()
"""
Explanation: Visualizing the important characteristics of a dataset
Before applying analysis and machine learning, it can be good to observate the dataset
* interesting trends that can lead to questions for analysis/ML
* issues in the datasets, such as missing entries, outliers, noises, etc.
Exploratory data analysis (EDA)
Use scatter plots to visualize the correlations between pairs of features.
In the seaborn library below, the diagonal lines are histograms for single features.
End of explanation
"""
import numpy as np
# compute correlation
cm = np.corrcoef(df[cols].values.T)
# visualize correlation matrix
sns.set(font_scale=1.5)
hm = sns.heatmap(cm,
cbar=True,
annot=True,
square=True,
fmt='.2f',
annot_kws={'size': 15},
yticklabels=cols,
xticklabels=cols)
# plt.tight_layout()
# plt.savefig('./figures/corr_mat.png', dpi=300)
plt.show()
"""
Explanation: Some observations:
* prices normally distributed with a spiky tail at high range
* number of rooms normally distributed
* prices positively correlated with number of rooms
* prices negatively correlated with low income status
* low income status distribution skewed towards the lower end
* number of rooms and low income status negatively correlated
* vertically aligned samples might be problematic (e.g. clamping values)
"You can observe a lot by just watching" - Yogi Berra
Scientific pipeline
* observation $\rightarrow$ question $\rightarrow$ assumption/model $\rightarrow$ verification $\hookleftarrow$ iteration
Correlation
A single number to summarize the visual trends.
<a href="https://en.wikipedia.org/wiki/Correlation_and_dependence">
<img src="https://upload.wikimedia.org/wikipedia/commons/d/d4/Correlation_examples2.svg" width=80%>
</a>
Correlation $r$ between pairs of underlying variables $x$ and $y$ based on their samples ${x^{(i)}, y^{(i)}}, i=1 \; to \; n $.
$$
\begin{align}
r &= \frac{\rho_{xy}}{\rho_x \rho_y}
\
\rho_{xy} &= \sum_{i=1}^n \left( x^{(i)} - \mu_x \right) \left( y^{(i)} - \mu_y \right)
\
\rho_x &= \sqrt{\sum_{i=1}^{n} \left( x^{(i)} - \mu_x \right)^2}
\
\rho_y &= \sqrt{\sum_{i=1}^{n} \left( y^{(i)} - \mu_y\right)^2}
\end{align}
$$
$\mu$: mean
$\rho_x$: std
$\rho_{xy}$: covariance
$r \in [-1, 1]$
* -1: perfect negative correlation
* +1: perfect positive correlation
* 0: no correlation
End of explanation
"""
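The formulas above match numpy's built-in `corrcoef`, which is worth checking once. A self-contained sketch on synthetic correlated data (made-up variables, not the housing features):

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.randn(200)
y = 0.5 * x + 0.1 * rng.randn(200)

mx, my = x.mean(), y.mean()
rho_xy = np.sum((x - mx) * (y - my))    # covariance term
rho_x = np.sqrt(np.sum((x - mx) ** 2))  # std term for x
rho_y = np.sqrt(np.sum((y - my) ** 2))  # std term for y
r_manual = rho_xy / (rho_x * rho_y)

r_numpy = np.corrcoef(x, y)[0, 1]       # should agree with r_manual
```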
sns.reset_orig()
%matplotlib inline
"""
Explanation: Observations:
* high positive correlation between prices and number of rooms (RM)
* high negative correlation between prices and low-income status (LSTAT)
Thus RM or LSTAT can be good candidates for linear regression
End of explanation
"""
class LinearRegressionGD(object):
def __init__(self, eta=0.001, n_iter=20):
self.eta = eta
self.n_iter = n_iter
def fit(self, X, y):
self.w_ = np.zeros(1 + X.shape[1])
self.cost_ = []
for i in range(self.n_iter):
output = self.net_input(X)
errors = (y - output)
self.w_[1:] += self.eta * X.T.dot(errors)
self.w_[0] += self.eta * errors.sum()
cost = (errors**2).sum() / 2.0
self.cost_.append(cost)
return self
def net_input(self, X):
return np.dot(X, self.w_[1:]) + self.w_[0]
def predict(self, X):
return self.net_input(X)
"""
Explanation: Implementing an ordinary least squares linear regression model
Model:
$
y = \sum_{i=0}^n w_i x_i = \mathbf{w}^T \mathbf{x}
$
with $x_0 = 1$.
Given a collection of sample data ${\mathbf{x^{(i)}}, y^{(i)} }$, find the line $\mathbf{w}$ that minimizes the regression error:
$$
\begin{align}
L(X, Y, \mathbf{w})
= \frac{1}{2} \sum_i \left( y^{(i)} - \hat{y}^{(i)} \right)^2
= \frac{1}{2} \sum_i \left( y^{(i)} - \mathbf{w}^T \mathbf{x}^{(i)} \right)^2
\end{align}
$$
As usual, the $\frac{1}{2}$ term is for convenience of differentiation, to cancel out the square terms:
$$
\begin{align}
\frac{1}{2} \frac{d x^2}{dx} = x
\end{align}
$$
This is called ordinary least squares (OLS).
Gradient descent
$$
\begin{align}
\frac{\partial L}{\partial \mathbf{w}}
=
\sum_i \mathbf{x}^{(i)} (\mathbf{w}^t \mathbf{x}^{(i)} - y^{(i)})
\end{align}
$$
$$
\mathbf{w} \leftarrow \mathbf{w} - \eta \frac{\partial L}{\partial \mathbf{w}}
$$
Solving regression for regression parameters with gradient descent
Very similar to Adaline, without the output binary quantization.
Adaline:
<img src="./images/02_09.png" width=80%>
Implementation
End of explanation
"""
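The update rule above can be verified on synthetic noise-free data, where gradient descent should recover the true line exactly. A minimal sketch with made-up data y = 1 + 2x (a small fixed learning rate is assumed):

```python
import numpy as np

rng = np.random.RandomState(1)
x = rng.randn(100)
y = 1.0 + 2.0 * x           # true intercept 1, true slope 2

w0, w1, eta = 0.0, 0.0, 0.005
for _ in range(300):
    errors = y - (w0 + w1 * x)
    w0 += eta * errors.sum()        # intercept update
    w1 += eta * (x * errors).sum()  # slope update
```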
X = df[['RM']].values
y = df['MEDV'].values
from sklearn.preprocessing import StandardScaler
sc_x = StandardScaler()
sc_y = StandardScaler()
X_std = sc_x.fit_transform(X)
#y_std = sc_y.fit_transform(y) # deprecation warning
#y_std = sc_y.fit_transform(y.reshape(-1, 1)).ravel()
y_std = sc_y.fit_transform(y[:, np.newaxis]).flatten()
lr = LinearRegressionGD()
_ = lr.fit(X_std, y_std)
plt.plot(range(1, lr.n_iter+1), lr.cost_)
plt.ylabel('SSE')
plt.xlabel('Epoch')
plt.tight_layout()
# plt.savefig('./figures/cost.png', dpi=300)
plt.show()
"""
Explanation: Apply LinearRegressionGD to the housing dataset
End of explanation
"""
def lin_regplot(X, y, model):
plt.scatter(X, y, c='lightblue')
plt.plot(X, model.predict(X), color='red', linewidth=2)
return
lin_regplot(X_std, y_std, lr)
plt.xlabel('Average number of rooms [RM] (standardized)')
plt.ylabel('Price in $1000\'s [MEDV] (standardized)')
plt.tight_layout()
# plt.savefig('./figures/gradient_fit.png', dpi=300)
plt.show()
"""
Explanation: The optimization converges after about 5 epochs.
End of explanation
"""
print('Slope: %.3f' % lr.w_[1])
print('Intercept: %.3f' % lr.w_[0])
"""
Explanation: The red line confirms the positive correlation between median prices and number of rooms.
But there are some weird things, such as a bunch of points on the celing (MEDV $\simeq$ 3,000) which indicates clipping.
End of explanation
"""
# use inverse transform to report back the original values
# num_rooms_std = sc_x.transform([[5.0]])
num_rooms_std = sc_x.transform(np.array([[5.0]]))
price_std = lr.predict(num_rooms_std)
print("Price in $1000's: %.3f" % sc_y.inverse_transform(price_std))
"""
Explanation: The correlation computed earlier is 0.7, which fits the slope value.
The intercept should be 0 for standardized data.
End of explanation
"""
from sklearn.linear_model import LinearRegression
slr = LinearRegression()
slr.fit(X, y) # no need for standardization
y_pred = slr.predict(X)
print('Slope: %.3f' % slr.coef_[0])
print('Intercept: %.3f' % slr.intercept_)
lin_regplot(X, y, slr)
plt.xlabel('Average number of rooms [RM]')
plt.ylabel('Price in $1000\'s [MEDV]')
plt.tight_layout()
# plt.savefig('./figures/scikit_lr_fit.png', dpi=300)
plt.show()
"""
Explanation: Estimating the coefficient of a regression model via scikit-learn
We don't have to write our own code for linear regression.
Scikit-learn provides various regression models.
* linear and non-linear
End of explanation
"""
# adding a column vector of "ones"
Xb = np.hstack((np.ones((X.shape[0], 1)), X))
z = np.linalg.inv(np.dot(Xb.T, Xb))
w = np.dot(z, np.dot(Xb.T, y))
print('Slope: %.3f' % w[1])
print('Intercept: %.3f' % w[0])
"""
Explanation: The normal equations provide a closed-form analytic solution, computed directly without any iteration:
End of explanation
"""
from sklearn.linear_model import RANSACRegressor
if Version(sklearn_version) < '0.18':
ransac = RANSACRegressor(LinearRegression(),
max_trials=100,
min_samples=50,
residual_metric=lambda x: np.sum(np.abs(x), axis=1),
residual_threshold=5.0,
random_state=0)
else:
ransac = RANSACRegressor(LinearRegression(),
max_trials=100,
min_samples=50,
loss='absolute_loss',
residual_threshold=5.0,
random_state=0)
ransac.fit(X, y)
inlier_mask = ransac.inlier_mask_
outlier_mask = np.logical_not(inlier_mask)
line_X = np.arange(3, 10, 1)
line_y_ransac = ransac.predict(line_X[:, np.newaxis])
plt.scatter(X[inlier_mask], y[inlier_mask],
c='blue', marker='o', label='Inliers')
plt.scatter(X[outlier_mask], y[outlier_mask],
c='lightgreen', marker='s', label='Outliers')
plt.plot(line_X, line_y_ransac, color='red')
plt.xlabel('Average number of rooms [RM]')
plt.ylabel('Price in $1000\'s [MEDV]')
plt.legend(loc='upper left')
plt.tight_layout()
# plt.savefig('./figures/ransac_fit.png', dpi=300)
plt.show()
print('Slope: %.3f' % ransac.estimator_.coef_[0])
print('Intercept: %.3f' % ransac.estimator_.intercept_)
"""
Explanation: Fitting a robust regression model using RANSAC
Linear regression sensitive to outliers
Not always easy to decide which data samples are outliers
RANSAC (random sample consensus) can deal with this
Basic idea:
1. randomly decide which samples are inliers and outliers
2. fit the line to inliers only
3. add outliers that are close enough to the line as potential inliers
4. refit using the updated inliers
5. terminate if the error is small enough or the iteration limit is reached; otherwise go back to step 1 to find a better model
Can work with different base regressors
<a href="https://commons.wikimedia.org/wiki/File%3ARANSAC_LINIE_Animiert.gif">
<img src="https://upload.wikimedia.org/wikipedia/commons/c/c0/RANSAC_LINIE_Animiert.gif" width=80%>
</a>
RANSAC in scikit-learn
End of explanation
"""
# training/test data split as usual
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import train_test_split
else:
from sklearn.model_selection import train_test_split
# use all features
X = df.iloc[:, :-1].values
y = df['MEDV'].values
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=0)
slr = LinearRegression()
slr.fit(X_train, y_train)
y_train_pred = slr.predict(X_train)
y_test_pred = slr.predict(X_test)
# plot the residuals: difference between prediction and ground truth
plt.scatter(y_train_pred, y_train_pred - y_train,
c='blue', marker='o', label='Training data')
plt.scatter(y_test_pred, y_test_pred - y_test,
c='lightgreen', marker='s', label='Test data')
plt.xlabel('Predicted values')
plt.ylabel('Residuals')
plt.legend(loc='upper left')
plt.hlines(y=0, xmin=-10, xmax=50, lw=2, color='red')
plt.xlim([-10, 50])
plt.tight_layout()
# plt.savefig('./figures/slr_residuals.png', dpi=300)
plt.show()
"""
Explanation: Evaluating the performance of linear regression models
We know how to evaluate classification models.
* training, test, validation datasets
* cross validation
* accuracy, precision, recall, etc.
* hyper-parameter tuning and selection
We can do similar for regression models.
End of explanation
"""
# plot residual against real values
plt.scatter(y_train, y_train_pred - y_train,
c='blue', marker='o', label='Training data')
plt.scatter(y_test, y_test_pred - y_test,
c='lightgreen', marker='s', label='Test data')
plt.xlabel('Real values')
plt.ylabel('Residuals')
plt.legend(loc='best')
plt.hlines(y=0, xmin=-5, xmax=55, lw=2, color='red')
plt.xlim([-5, 55])
plt.tight_layout()
# plt.savefig('./figures/slr_residuals.png', dpi=300)
plt.show()
"""
Explanation: A perfect regression would have 0 residuals (the red line).
A good regression shows residuals randomly and uniformly scattered around that red line.
Other patterns indicate potential problems.
* outliers are far away from the 0 residual line
* patterns indicate information not captured by our model
End of explanation
"""
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_error
print('MSE train: %.3f, test: %.3f' % (
mean_squared_error(y_train, y_train_pred),
mean_squared_error(y_test, y_test_pred)))
print('R^2 train: %.3f, test: %.3f' % (
r2_score(y_train, y_train_pred),
r2_score(y_test, y_test_pred)))
"""
Explanation: Statistics for regression
For n data samples with prediction $y$ and ground truth $t$:
Mean squared error (MSE)
$$
\begin{align}
MSE = \frac{1}{n} \sum_{i=1}^n \left(y^{(i)} - t^{(i)}\right)^2
\end{align}
$$
Coefficient of determination
Standardized version of MSE
$$
\begin{align}
R^2 &= 1 - \frac{SSE}{SST}
\
SSE &= \sum_{i=1}^{n} \left( y^{(i)} - t^{(i) }\right)^2
\
SST &= \sum_{i=1}^{n} \left( t^{(i)} - \mu_t \right)^2
\end{align}
$$
$$
R^2 = 1 - \frac{MSE}{Var(t)}
$$
$R^2 = 1$ for perfect fit
* for training data, $0 \leq R^2 \leq 1$
* for test data, $R^2$ can be $<0$
End of explanation
"""
from sklearn.linear_model import Ridge
ridge = Ridge(alpha=0.1) # alpha is like lambda above
ridge.fit(X_train, y_train)
y_train_pred = ridge.predict(X_train)
y_test_pred = ridge.predict(X_test)
print(ridge.coef_)
print('MSE train: %.3f, test: %.3f' % (
mean_squared_error(y_train, y_train_pred),
mean_squared_error(y_test, y_test_pred)))
print('R^2 train: %.3f, test: %.3f' % (
r2_score(y_train, y_train_pred),
r2_score(y_test, y_test_pred)))
from sklearn.linear_model import Lasso
lasso = Lasso(alpha=0.1) # alpha is like lambda above
lasso.fit(X_train, y_train)
y_train_pred = lasso.predict(X_train)
y_test_pred = lasso.predict(X_test)
print(lasso.coef_)
print('MSE train: %.3f, test: %.3f' % (
mean_squared_error(y_train, y_train_pred),
mean_squared_error(y_test, y_test_pred)))
print('R^2 train: %.3f, test: %.3f' % (
r2_score(y_train, y_train_pred),
r2_score(y_test, y_test_pred)))
from sklearn.linear_model import ElasticNet
from sklearn.metrics import mean_squared_error
alphas = [0.001, 0.01, 0.1, 1, 10, 100, 1000]
train_errors = []
test_errors = []
for alpha in alphas:
model = ElasticNet(alpha=alpha, l1_ratio=0.5)
model.fit(X_train, y_train)
y_train_pred = model.predict(X_train)
y_test_pred = model.predict(X_test)
train_errors.append(mean_squared_error(y_train, y_train_pred))
test_errors.append(mean_squared_error(y_test, y_test_pred))
print(train_errors)
print(test_errors)
"""
Explanation: Using regularized methods for regression
$$
\Phi(\mathbf{X}, \mathbf{T}, \Theta) = L\left(\mathbf{X}, \mathbf{T}, \mathbf{Y}=f(\mathbf{X}, \Theta)\right) + P(\Theta)
$$
* $\mathbf{X}$, $\mathbf{T}$: training data
* $f$: our model with parameters $\Theta$ ($\mathbf{w}$ for linear regression so far)
* $L$: loss (data-fit term)
$$
\begin{align}
L(X, Y, \mathbf{w})
= \frac{1}{2} \sum_i \left( y^{(i)} - \hat{y}^{(i)} \right)^2
= \frac{1}{2} \sum_i \left( y^{(i)} - \mathbf{w}^T \mathbf{x}^{(i)} \right)^2
\end{align}
$$
* $P$: regularization
Regularization can help simplify models and reduce overfitting
* e.g. $L_2$ for classification
Popular methods for linear regression:
* ridge regression - $L_2$
* LASSO (least absolute shrinkage and selection operator) - $L_1$
* elastic net - $L_1$ + $L_2$
Ridge regression
Essentially $L_2$ regularization:
$$
\begin{align}
P\left(\mathbf{w}\right) = \lambda \| \mathbf{w} \|^2
\end{align}
$$
$\lambda$ is the regularization strength as usual.
$$
\begin{align}
\| \mathbf{w} \|^2 = \sum_{j=1}^{m} w_j^2
\end{align}
$$
Do not regularize $w_0$, the bias term.
LASSO
Essentially $L_1$ regularization:
$$
\begin{align}
P\left(\mathbf{w}\right) = \lambda \| \mathbf{w} \|_1 = \lambda \sum_{j=1}^{m} |w_j|
\end{align}
$$
$L_1$ tends to produce more $0$ entries than $L_2$, as discussed before.
Elastic net
Combining $L_1$ and $L_2$ regularization:
$$
\begin{align}
P\left(\mathbf{w}\right) =
\lambda_1 \| \mathbf{w} \|^2
+
\lambda_2 \| \mathbf{w} \|_1
\end{align}
$$
Regularization for regression in scikit-learn
End of explanation
"""
X = np.array([258.0, 270.0, 294.0,
320.0, 342.0, 368.0,
396.0, 446.0, 480.0, 586.0])[:, np.newaxis]
y = np.array([236.4, 234.4, 252.8,
298.6, 314.2, 342.2,
360.8, 368.0, 391.2,
390.8])
from sklearn.preprocessing import PolynomialFeatures
lr = LinearRegression()
pr = LinearRegression()
quadratic = PolynomialFeatures(degree=2) # e.g. from [a, b] to [1, a, b, a*a, a*b, b*b]
X_quad = quadratic.fit_transform(X)
print(X[0, :])
print(X_quad[0, :])
print([1, X[0, :], X[0, :]**2])
# fit linear features
lr.fit(X, y)
X_fit = np.arange(250, 600, 10)[:, np.newaxis]
y_lin_fit = lr.predict(X_fit)
# fit quadratic features
pr.fit(X_quad, y)
y_quad_fit = pr.predict(quadratic.fit_transform(X_fit))
# plot results
plt.scatter(X, y, label='training points')
plt.plot(X_fit, y_lin_fit, label='linear fit', linestyle='--')
plt.plot(X_fit, y_quad_fit, label='quadratic fit')
plt.legend(loc='upper left')
plt.tight_layout()
# plt.savefig('./figures/poly_example.png', dpi=300)
plt.show()
"""
Explanation: Turning a linear regression model into a curve - polynomial regression
Not every relationship can be explained by a linear model
How to generalize?
Polynomial regression
$$
\begin{align}
y = w_0 + w_1 x + w_2 x^2 + \cdots + w_d x^d
\end{align}
$$
Still linear in terms of the weights $\mathbf{w}$
Non-linear regression in scikit-learn
Polynomial features
* recall kernel SVM
End of explanation
"""
y_lin_pred = lr.predict(X)
y_quad_pred = pr.predict(X_quad)
print('Training MSE linear: %.3f, quadratic: %.3f' % (
mean_squared_error(y, y_lin_pred),
mean_squared_error(y, y_quad_pred)))
print('Training R^2 linear: %.3f, quadratic: %.3f' % (
r2_score(y, y_lin_pred),
r2_score(y, y_quad_pred)))
"""
Explanation: Quadratic polynomial fits this dataset better than linear polynomial
Not always a good idea to use higher degree functions
* cost
* overfit
End of explanation
"""
X = df[['LSTAT']].values
y = df['MEDV'].values
regr = LinearRegression()
# create quadratic features
quadratic = PolynomialFeatures(degree=2)
cubic = PolynomialFeatures(degree=3)
X_quad = quadratic.fit_transform(X)
X_cubic = cubic.fit_transform(X)
# fit features
X_fit = np.arange(X.min(), X.max(), 1)[:, np.newaxis]
regr = regr.fit(X, y)
y_lin_fit = regr.predict(X_fit)
linear_r2 = r2_score(y, regr.predict(X))
regr = regr.fit(X_quad, y)
y_quad_fit = regr.predict(quadratic.fit_transform(X_fit))
quadratic_r2 = r2_score(y, regr.predict(X_quad))
regr = regr.fit(X_cubic, y)
y_cubic_fit = regr.predict(cubic.fit_transform(X_fit))
cubic_r2 = r2_score(y, regr.predict(X_cubic))
# plot results
plt.scatter(X, y, label='training points', color='lightgray')
plt.plot(X_fit, y_lin_fit,
label='linear (d=1), $R^2=%.2f$' % linear_r2,
color='blue',
lw=2,
linestyle=':')
plt.plot(X_fit, y_quad_fit,
label='quadratic (d=2), $R^2=%.2f$' % quadratic_r2,
color='red',
lw=2,
linestyle='-')
plt.plot(X_fit, y_cubic_fit,
label='cubic (d=3), $R^2=%.2f$' % cubic_r2,
color='green',
lw=2,
linestyle='--')
plt.xlabel('% lower status of the population [LSTAT]')
plt.ylabel('Price in $1000\'s [MEDV]')
plt.legend(loc='upper right')
plt.tight_layout()
# plt.savefig('./figures/polyhouse_example.png', dpi=300)
plt.show()
"""
Explanation: Modeling nonlinear relationships in the Housing Dataset
Regression of MEDV (median house price) versus LSTAT
Compare polynomial curves
* linear
* quadratic
* cubic
End of explanation
"""
X = df[['LSTAT']].values
y = df['MEDV'].values
# transform features
X_log = np.log(X)
y_sqrt = np.sqrt(y)
# training
regr = regr.fit(X_log, y_sqrt)
linear_r2 = r2_score(y_sqrt, regr.predict(X_log))
# fit features
X_fit = np.arange(X_log.min()-1, X_log.max()+1, 1)[:, np.newaxis]
y_lin_fit = regr.predict(X_fit)
# plot results
plt.scatter(X_log, y_sqrt, label='training points', color='lightgray')
plt.plot(X_fit, y_lin_fit,
label='linear (d=1), $R^2=%.2f$' % linear_r2,
color='blue',
lw=2)
plt.xlabel('log(% lower status of the population [LSTAT])')
plt.ylabel('$\sqrt{Price \; in \; \$1000\'s [MEDV]}$')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/transform_example.png', dpi=300)
plt.show()
"""
Explanation: Transforming the dataset based on the observations above:
$$
\begin{align}
X_{log} &= \log{X}
\
Y_{sqrt} &= \sqrt{Y}
\end{align}
$$
End of explanation
"""
from sklearn.tree import DecisionTreeRegressor
X = df[['LSTAT']].values
y = df['MEDV'].values
tree = DecisionTreeRegressor(max_depth=3)
tree.fit(X, y)
sort_idx = X.flatten().argsort() # sort from small to large for plotting below
lin_regplot(X[sort_idx], y[sort_idx], tree)
plt.xlabel('% lower status of the population [LSTAT]')
plt.ylabel('Price in $1000\'s [MEDV]')
# plt.savefig('./figures/tree_regression.png', dpi=300)
plt.show()
"""
Explanation: Dealing with nonlinear relationships using random forests
Decision trees can be applied for both
* classification (talked about this before)
* regression (next topic)
In classification, we associate a class label for each leaf node.
In regression, we fit a function for each leaf node.
In the simplest case, the function is a constant, which will be the mean of all y values within that node if the loss function is based on MSE.
In this case, the whole tree essentially fits a piecewise constant function to the training data.
Similar to building a decision tree for classification, a decision tree for regression can be built by iteratively splitting each node based on optimizing an objective function, such as MSE mentioned above.
Specifically,
$$IG(D_p) = I(D_p) - \sum_{j=1}^m \frac{N_j}{N_p} I(D_j)$$
where $IG$ is the information gain we try to maximize when splitting each parent node $p$ with dataset $D_p$, $I$ is the impurity measure for a given dataset, and $N_p$ and $N_j$ are the numbers of data vectors within the parent and child nodes.
$m = 2$ for the usual binary split.
For MSE, we have
$$I(D) = \sum_{i \in D} \left|y^{(i)} - \mu_D \right|^2$$
where $\mu_D$ is the mean of the target $y$ values of the dataset $D$.
Decision tree regression
End of explanation
"""
X = df.iloc[:, :-1].values
y = df['MEDV'].values
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.4, random_state=1)
from sklearn.ensemble import RandomForestRegressor
forest = RandomForestRegressor(n_estimators=1000,
criterion='mse',
random_state=1,
n_jobs=-1)
forest.fit(X_train, y_train)
y_train_pred = forest.predict(X_train)
y_test_pred = forest.predict(X_test)
print('MSE train: %.3f, test: %.3f' % (
mean_squared_error(y_train, y_train_pred),
mean_squared_error(y_test, y_test_pred)))
print('R^2 train: %.3f, test: %.3f' % (
r2_score(y_train, y_train_pred),
r2_score(y_test, y_test_pred)))
"""
Explanation: Notice the piecewise constant regions of the decision curve.
Random forest regression
A random forest is a collection of decision trees
* randomization of training data and features
* generalizes better than individual trees
Can be used for
* classification (talked about this before)
* regression (next topic)
End of explanation
"""
plt.scatter(y_train_pred,
y_train_pred - y_train,
c='black',
marker='o',
s=35,
alpha=0.5,
label='Training data')
plt.scatter(y_test_pred,
y_test_pred - y_test,
c='lightgreen',
marker='s',
s=35,
alpha=0.7,
label='Test data')
plt.xlabel('Predicted values')
plt.ylabel('Residuals')
plt.legend(loc='upper left')
plt.hlines(y=0, xmin=-10, xmax=50, lw=2, color='red')
plt.xlim([-10, 50])
plt.tight_layout()
# plt.savefig('./figures/slr_residuals.png', dpi=300)
plt.show()
"""
Explanation: Overfitting, but still good performance.
End of explanation
"""
|
ShinjiKatoA16/UCSY-sw-eng | Python-5 Class and object.ipynb | mit | # empty class for data container
class Test_case():
pass
def distance(tc):
'''
tc: test_case instance
return distance from (0,0) to (tc.x, tc,y)
'''
return (tc.x**2 + tc.y**2) ** (0.5)
tc1 = Test_case() # create new instance
tc1.x = 100
tc1.y = 200
tc2 = Test_case()
tc2.x = 10
tc2.y = 30
print("tc1:", distance(tc1))
print("tc2:", distance(tc2))
"""
Explanation: Class and Object (Instance)
Every object, including an Integer or a String, is a member (instance) of a class. The behavior of an object is determined by its class. For example, the + operator returns the sum of numeric values (Integer or Float), but returns a concatenated string if the operands are strings.
A class may have methods. As we saw before, String and List have many methods. Methods are functions associated with their class.
Python's classes support inheritance: when you define your own class, you can specify its parent class.
Compared with C++, Python's classes are not so strict. Private data/methods are not supported (though name mangling is possible by prefixing a variable name with __). Attributes that are not defined in the class can be added to an instance.
Class as a Data container
One use of a class is as a data container. A class instance can have attributes (variables). This usage is similar to a struct in C/C++.
End of explanation
"""
class Beatles():
def __init__(self, name):
self.name = name
self.best_friend = None
def answer(self):
print('My name is', self.name)
john = Beatles('John Lennon')
paul = Beatles('Paul McCartney')
george = Beatles('George Harrison')
ringo = Beatles('Ringo Starr')
print(john.name)
john.answer() # perform answer method on object john
john.best_friend = paul
paul.best_friend = george
george.best_friend = ringo
ringo.best_friend = john
print(john.best_friend.name) # print john's best_friend name
print(john.best_friend.best_friend.name) # john's best_friend = paul; paul's best_friend name
john.best_friend.best_friend.best_friend.answer()
"""
Explanation: Class variables and Instance variables
Each instance of a class can have its own attributes (variables); in the Test_case example above, x and y hold different values in tc1 and tc2.
A class can also have variables that belong to the class itself (every instance refers to the same value). To access a class variable, use Class_name.variable_name.
Object oriented
If classes are designed properly, a program can be simpler and easier to understand (object oriented).
The advantage of object-oriented programming is that it can bring a higher level of abstraction.
abstract <-> specific $\fallingdotseq$ overall <-> detail
End of explanation
"""
#!/usr/bin/python3
# -*- coding: utf-8 -*-
class rectangle():
def __init__(self, size_x, size_y):
self.size_x = size_x
self.size_y = size_y
self.area = size_x * size_y
def __lt__(self, other):
return self.area < other.area
def __repr__(self):
return ('Rectangle('+str(self.size_x)+'x'+str(self.size_y)+')')
sq1 = rectangle(1,11)
sq2 = rectangle(3,4)
sq3 = rectangle(2,5)
x = [sq1, sq2, sq3]
x.sort()
print(x)
"""
Explanation: Special method of class
__init__(self, ...)
__lt__
__repr__
__add__
...
End of explanation
"""
# 2017 MMCPC rehearsal problem-D
class BinTreeNode():
def __init__(self, val):
self.val = val
self.parent = None
self.left = None
self.right = None
def add_left(self, left):
left.parent = self
self.left = left
def add_right(self, right):
right.parent = self
self.right = right
def add_child(self, child):
if child.val < self.val:
if self.left == None:
self.add_left(child)
else:
self.left.add_child(child)
else:
if self.right == None:
self.add_right(child)
else:
self.right.add_child(child)
def print_node(self):
if self.parent: print('Parent:', self.parent.val, end=' ')
if self.left: print('Left:', self.left.val, end = ' ')
if self.right: print('Right', self.right.val, end = ' ')
print('Node val:', self.val)
if self.left: self.left.print_node()
if self.right: self.right.print_node()
root_node = BinTreeNode(11)
for val in (6, 19, 4, 8, 17, 43, 5, 10, 31,49):
node = BinTreeNode(val)
root_node.add_child(node)
root_node.print_node()
"""
Explanation: Class usage sample (Binary Tree)
End of explanation
"""
|
landlab/landlab | notebooks/tutorials/data_record/DataRecord_tutorial.ipynb | mit | import numpy as np
import xarray as xr
from landlab import RasterModelGrid
from landlab.data_record import DataRecord
from landlab import imshow_grid
import matplotlib.pyplot as plt
from matplotlib.pyplot import plot, subplot, xlabel, ylabel, title, legend, figure
%matplotlib inline
"""
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
DataRecord Tutorial
<hr>
<small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small>
<hr>
This tutorial illustrates how to record variables of a Landlab model using DataRecord.
What is DataRecord?
DataRecord is a data structure that can hold data variables relating to a Landlab model or to items living on the Landlab grid.
DataRecord is built on xarray's Dataset structure: a multi-dimensional, in memory, array database. Dataset implements the mapping interface with keys given by variable names and values given by DataArray objects for each variable name. DataRecord inherits all the methods and attributes from xarray.Dataset.
A DataRecord can have one or both (or none) of the following dimensions:
- time: The simulated time in the model.
- item_id: An identifier of a generic item in the model.
Coordinates are one dimensional arrays used for label-based indexing.
The examples below illustrate different use cases for DataRecord.
We start by importing the necessary libraries:
End of explanation
"""
grid_1 = RasterModelGrid((10, 10), (1.0, 1.0))
z = np.random.rand(100)
_ = grid_1.add_field("topographic__elevation", z, at="node")
"""
Explanation: Case 1. DataRecord with 1 dimension: time
Let's start with an example where we set DataRecord to have only time as a dimension.
An example variable that varies over time and relates to the Landlab grid could be the mean elevation of the topographic surface. We will store this example variable in DataRecord.
We create a Raster grid, create a field (at nodes) called topographic__elevation and populate it with random values.
End of explanation
"""
current_mean = np.mean(grid_1.at_node["topographic__elevation"])
print(current_mean)
"""
Explanation: Print the current mean elevation.
End of explanation
"""
dr_1 = DataRecord(
grid_1,
time=[0.0],
items=None,
data_vars={"mean_elevation": (["time"], ([current_mean]))},
attrs={"mean_elevation": "y"},
)
"""
Explanation: Now we will create a DataRecord that will hold the data variable mean_elevation relating to grid_1. The first value, at time=0 is the current mean elevation on the grid.
End of explanation
"""
dr_1
"""
Explanation: The input arguments passed in this case are: the grid, time (as a 1-element list), a data variable dictionary and an attributes dictionary. Note that items is not filled, we will see its use in other cases below.
Note the format of the data_vars dictionary:
python
{'variable_name_1' : (['dimensions'], variable_data_1),
'variable_name_2' : (['dimensions'], variable_data_2),
...}
The attributes dictionary attrs can be used to store metadata about the variables: in this example, we use it to store the variable units.
So far, our DataRecord dr_1 holds one variable mean_elevation with one record at time=0.
End of explanation
"""
dr_1.dataset.to_dataframe()
"""
Explanation: We can visualise this data structure as a pandas dataframe:
End of explanation
"""
total_time = 100
dt = 20
uplift_rate = 0.01 # m/y
for t in range(20, total_time, dt):
grid_1.at_node["topographic__elevation"] += uplift_rate * dt
dr_1.add_record(
time=[t],
new_record={
"mean_elevation": (
["time"],
([np.mean(grid_1.at_node["topographic__elevation"])]),
)
},
)
"""
Explanation: Now we will run a simple model where the grid surface is uplifted several times and the mean elevation is recorded at every time step. We use the method add_record to put the new value in the DataRecord dr_1:
End of explanation
"""
dr_1.dataset["mean_elevation"].values
"""
Explanation: Let's see what was recorded:
End of explanation
"""
dr_1.dataset.time.values
"""
Explanation: The corresponding time coordinates are:
End of explanation
"""
dr_1.time_coordinates
"""
Explanation: Notice the different syntax used here:
- time is a dimension and can be called by dr_1.time (or dr_1['time'])
- whereas mean_elevation is a variable and must be called by dr_1['mean_elevation']
DataRecord also has the handy property time_coordinates that returns these values as a list:
End of explanation
"""
dr_1.get_data(time=[20.0], data_variable="mean_elevation")
dr_1.set_data(time=[80.0], data_variable="mean_elevation", new_value=1.5)
dr_1.dataset["mean_elevation"]
"""
Explanation: You can use the methods get_data and set_data to access and change the data:
End of explanation
"""
grid_2 = RasterModelGrid((5, 5), (2, 2))
boulders = {"grid_element": "node", "element_id": np.array([6, 11, 12, 17, 12])}
initial_boulder_sizes = np.array([1, 1.5, 3, 1, 2])
boulder_lithologies = np.array(
["sandstone", "granite", "sandstone", "sandstone", "limestone"]
)
dr_2 = DataRecord(
grid_2,
time=None,
items=boulders,
data_vars={
"boulder_size": (["item_id"], initial_boulder_sizes),
"boulder_litho": (["item_id"], boulder_lithologies),
},
attrs={"boulder_size": "m"},
)
dr_2.dataset.to_dataframe()
"""
Explanation: Case 2. DataRecord with 1 dimension: item_id
An important feature of DataRecord is that it allows to create items that live on grid elements, and variables describing them. For instance, we can create boulders and store information about their size and lithology.
To create items, we need to instantiate a DataRecord and pass it a dictionary describing where each item lives on the Landlab grid. The format of this dictionary is:
python
{'grid_element' : [grid_element],
'element_id' : [element_id]}
where:
- grid_element is a str or number-of-items-long array containing strings of the grid element(s) on which the items live (e.g.: node, link). Valid locations depend on the grid type (my_grid.groups gives the valid locations for your grid). If grid_element is provided as a string, it is assumed that all items live on the same type of grid element.
- element_id is an array of integers identifying the grid element IDs on which each item resides. For each item, element_id must be less than the number of this item's grid_element that exist on the grid. For example, if the grid has 10 links, no item can live at link 10 or link -3 because only links 0 to 9 exist in this example.
End of explanation
"""
dr_2.add_item(
new_item={
"grid_element": np.array(["link", "node"]),
"element_id": np.array([24, 8]),
},
new_item_spec={"boulder_size": (["item_id"], np.array([1.2, 2.0]))},
)
dr_2.dataset.to_dataframe()
"""
Explanation: Each item (in this case, each boulder) is designated by an item_id, its position on the grid is described by a grid_element and an element_id.
We can use the method add_item to add new boulders to the record:
End of explanation
"""
dr_2.set_data(
data_variable="boulder_litho", item_id=[5, 6], new_value=["sandstone", "granite"]
)
dr_2.dataset.to_dataframe()
"""
Explanation: Notice that we did not specify the lithologies of the new boulders, their recorded values are thus set as NaN. We can use the set_data method to report the boulder lithologies:
End of explanation
"""
mean_size = dr_2.calc_aggregate_value(
func=xr.Dataset.mean, data_variable="boulder_size"
)
mean_size
"""
Explanation: We can use the method calc_aggregate_value to apply a function to a variable aggregated at grid elements. For example, we can calculate the mean size of boulders on each node:
End of explanation
"""
# replace nans with 0:
mean_size[np.isnan(mean_size)] = 0
# show unfiltered mean sizes on the grid:
imshow_grid(grid_2, mean_size)
"""
Explanation: Notice that boulder #5 is on a link so it is not taken into account in this calculation.
End of explanation
"""
# define a filter array:
filter_litho = dr_2.dataset["boulder_litho"] == "sandstone"
# aggregate by node and apply function numpy.mean on boulder_size
filtered_mean = dr_2.calc_aggregate_value(
func=xr.Dataset.mean,
data_variable="boulder_size",
at="node",
filter_array=filter_litho,
)
filtered_mean
"""
Explanation: Before doing this calculation we could filter by lithology and only use the 'sandstone' boulders in the calculation:
End of explanation
"""
grid_3 = RasterModelGrid((5, 5), (2, 2))
initial_boulder_sizes_3 = np.array([[10], [4], [8], [3], [5]])
# boulder_lithologies = np.array(['sandstone', 'granite', 'sandstone', 'sandstone', 'limestone']) #same as above, already run
boulders_3 = {
"grid_element": "node",
"element_id": np.array([[6], [11], [12], [17], [12]]),
}
dr_3 = DataRecord(
grid_3,
time=[0.0],
items=boulders_3,
data_vars={
"boulder_size": (["item_id", "time"], initial_boulder_sizes_3),
"boulder_litho": (["item_id"], boulder_lithologies),
},
attrs={"boulder_size": "m"},
)
dr_3
"""
Explanation: Case 3. DataRecord with 2 dimensions: item_id and time
We may want to record variables that have both dimensions time and item_id.
In the previous example, some variables that characterize the items (boulders) may not vary with time, such as boulder_lithology, although it can be interesting to keep track of the change in size through time. We will redefine the DataRecord such that the variable boulder_size varies among the items/boulders (identified by item_id) and through time, while the variable boulder_litho varies only among the items/boulders and this lithology variable does not vary through time.
End of explanation
"""
boulder_lithologies.shape, initial_boulder_sizes.shape, initial_boulder_sizes_3.shape
"""
Explanation: Note that the syntax to define the initial_boulder_sizes_3 (as well as element_id) has changed: they are number-of-items-by-1 arrays because they vary along both time and item_id (compared to boulder_lithologies which is just number-of-items long as it only varies along item_id).
End of explanation
"""
dt = 100
total_time = 100000
time_index = 1
for t in range(dt, total_time, dt):
# create a new time coordinate:
dr_3.add_record(time=np.array([t]))
# this propagates grid_element and element_id values forward in time (instead of the 'nan' default filling):
dr_3.ffill_grid_element_and_id()
for i in range(0, dr_3.number_of_items):
# value of block erodibility:
if dr_3.dataset["boulder_litho"].values[i] == "limestone":
k_b = 10 ** -5
elif dr_3.dataset["boulder_litho"].values[i] == "sandstone":
k_b = 3 * 10 ** -6
elif dr_3.dataset["boulder_litho"].values[i] == "granite":
k_b = 3 * 10 ** -7
else:
print("Unknown boulder lithology")
dr_3.dataset["boulder_size"].values[i, time_index] = (
dr_3.dataset["boulder_size"].values[i, time_index - 1]
- k_b * dr_3.dataset["boulder_size"].values[i, time_index - 1] * dt
)
time_index += 1
print("Done")
figure(figsize=(15, 8))
time = range(0, total_time, dt)
boulder_size = dr_3.dataset["boulder_size"].values
subplot(121)
plot(time, boulder_size[1], label="granite")
plot(time, boulder_size[3], label="sandstone")
plot(time, boulder_size[-1], label="limestone")
xlabel("Time (yr)")
ylabel("Boulder size (m)")
legend(loc="lower left")
title("Boulder erosion by lithology")
# normalized plots
subplot(122)
plot(time, boulder_size[1] / boulder_size[1, 0], label="granite")
plot(time, boulder_size[2] / boulder_size[2, 0], label="sandstone")
plot(time, boulder_size[-1] / boulder_size[-1, 0], label="limestone")
xlabel("Time (yr)")
ylabel("Boulder size normalized to size at t=0 (m)")
legend(loc="lower left")
title("Normalized boulder erosion by lithology")
plt.show()
"""
Explanation: Let's define a very simple erosion law for the boulders:
$$
\begin{equation}
\frac{dD}{dt} = -k_{b} . D
\end{equation}
$$
where $D$ is the boulder diameter $[L]$ (this value represents the boulder_size variable), $t$ is time, and $k_{b}$ is the block erodibility $[L.T^{-1}]$.
We will now model boulder erosion and use DataRecord to store their size through time.
End of explanation
"""
dr_3.variable_names
dr_3.number_of_items
dr_3.item_coordinates
dr_3.number_of_timesteps
dr_1.time_coordinates
dr_1.earliest_time
dr_1.latest_time
dr_1.prior_time
"""
Explanation: Other properties provided by DataRecord
End of explanation
"""
|
GoogleCloudPlatform/ml-design-patterns | 06_reproducibility/storm_reports/similar_reports.ipynb | apache-2.0 | %pip install -q google-cloud-bigquery-storage pyarrow==0.16 tfx
## CHANGE AS NEEDED
BEAM_RUNNER = 'DirectRunner' # or DataflowRunner
PROJECT='ai-analytics-solutions'
BUCKET='ai-analytics-solutions-kfpdemo'
REGION='us-west1'
"""
Explanation: Find similar storm reports
This notebook shows a TFX pipeline that does a semantic search to find duplicate storm reports.
This is an example of a Workflow Pipeline that doesn't do any training. Instead, it just sets a TFX pipeline up for inference.
The source of our data is preliminary storm reports filed by storm spotters to National Weather Service offices. This dataset has already been manually cleaned, so for illustration, we'll ignore the year/location when doing the search.
End of explanation
"""
%%bigquery
SELECT
EXTRACT(YEAR from timestamp) AS year,
EXTRACT(DAYOFYEAR from timestamp) AS julian_day,
latitude, longitude,
comments,
'wind' as type
FROM `bigquery-public-data.noaa_preliminary_severe_storms.wind_reports`
LIMIT 10
%%bigquery
SELECT
EXTRACT(YEAR from timestamp) AS year,
EXTRACT(DAYOFYEAR from timestamp) AS julian_day,
latitude, longitude,
comments,
size,
'hail' as type
FROM `bigquery-public-data.noaa_preliminary_severe_storms.hail_reports`
LIMIT 10
%%bigquery
SELECT
EXTRACT(YEAR from timestamp) AS year,
EXTRACT(DAYOFYEAR from timestamp) AS julian_day,
latitude, longitude,
LOWER(comments) AS comments,
REGEXP_EXTRACT(comments, r"\([A-Z]+\)$") AS office,
'tornado' as type
FROM `bigquery-public-data.noaa_preliminary_severe_storms.tornado_reports`
LIMIT 10
query = """
WITH wind AS (
SELECT
EXTRACT(YEAR from timestamp) AS year,
EXTRACT(DAYOFYEAR from timestamp) AS julian_day,
latitude, longitude,
comments,
'wind' as type
FROM `bigquery-public-data.noaa_preliminary_severe_storms.wind_reports`
),
hail AS (
SELECT
EXTRACT(YEAR from timestamp) AS year,
EXTRACT(DAYOFYEAR from timestamp) AS julian_day,
latitude, longitude,
comments,
'hail' as type
FROM `bigquery-public-data.noaa_preliminary_severe_storms.hail_reports`
),
tornadoes AS (
SELECT
EXTRACT(YEAR from timestamp) AS year,
EXTRACT(DAYOFYEAR from timestamp) AS julian_day,
latitude, longitude,
comments,
'tornado' as type
FROM `bigquery-public-data.noaa_preliminary_severe_storms.tornado_reports`
)
SELECT * FROM (
SELECT * FROM wind
UNION ALL
SELECT * FROM hail
UNION ALL
SELECT * FROM tornadoes
)
"""
## skip_for_export
import google.cloud.bigquery as bq
df = bq.Client().query(query).result().to_dataframe()
df.groupby('type').count()
"""
Explanation: Explore data in BigQuery
Preview data in BigQuery
End of explanation
"""
import tensorflow as tf
print('tensorflow ' + tf.__version__)
import tfx
print('tfx ' + tfx.__version__)
import apache_beam as beam
print('beam ' + beam.__version__)
from tfx.components import BigQueryExampleGen
example_gen = BigQueryExampleGen(query=query)
import os
beam_pipeline_args = [
'--runner={}'.format(BEAM_RUNNER),
'--project={}'.format(PROJECT),
'--temp_location=' + os.path.join('gs://{}/noaa_similar_reports/'.format(BUCKET), 'tmp'),
'--region=' + REGION,
# Temporary overrides of defaults.
'--disk_size_gb=50',
'--experiments=shuffle_mode=auto',
'--machine_type=n1-standard-8',
]
## skip_for_export
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
context = InteractiveContext()
## skip_for_export
ingest_result = context.run(example_gen, beam_pipeline_args=beam_pipeline_args)
## skip_for_export
context.show(ingest_result)
## skip_for_export
print(ingest_result)
"""
Explanation: Ingest data
We'll use the TFX component BigQueryExampleGen to read in the data.
End of explanation
"""
from tfx.components import StatisticsGen
stats_gen = StatisticsGen(examples=example_gen.outputs['examples'])
## skip_for_export
context.run(stats_gen, beam_pipeline_args=beam_pipeline_args)
from tfx.components import SchemaGen
# This component only generates a new schema if one doesn't already exist.
schema_gen = SchemaGen(statistics=stats_gen.outputs['statistics'],
infer_feature_shape=True)
## skip_for_export
context.run(schema_gen, beam_pipeline_args=beam_pipeline_args)
from tfx.components import ExampleValidator
example_validator = ExampleValidator(
statistics=stats_gen.outputs['statistics'],
schema=schema_gen.outputs['schema']
)
## skip_for_export
context.run(example_validator, beam_pipeline_args=beam_pipeline_args)
"""
Explanation: Validate the data
Let's generate statistics from the data
End of explanation
"""
#%%writefile preprocess.py
import tensorflow_hub as hub
swivel = hub.load("https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1")
def preprocessing_fn(inputs):
import tensorflow as tf
outputs = inputs.copy()
comments = inputs['comments']
outputs['office'] = tf.strings.substr(comments, -4, 3)
comments = tf.strings.regex_replace(comments, r"\([A-Z]+\)$", "")
outputs['comments'] = tf.strings.lower(comments)
#if len(outputs['comments'].shape) == 0:
# swivel_input = [outputs['comments']]
#else:
# swivel_input = outputs['comments']
#outputs['embed'] = swivel(swivel_input)
return outputs
## skip_for_export
inputs = df[:].iloc[3]
print(inputs['comments'])
outputs = preprocessing_fn(inputs)
print(outputs)
print(outputs['comments'])
import tensorflow_transform as tft
from tfx.components import Transform
transform = Transform(
examples=example_gen.outputs['examples'],
schema=schema_gen.outputs['schema'],
module_file=os.path.abspath('preprocess.py')
)
## skip_for_export
context.run(transform, beam_pipeline_args=beam_pipeline_args)
"""
Explanation: Preprocess data
Let's preprocess the comments:
* Pull out the office name from the comment. It's the last token of the comment, e.g. (BMX)
* Lower case the comment
End of explanation
"""
|
bhargavvader/gensim | docs/notebooks/word2vec.ipynb | lgpl-2.1 | # import modules & set up logging
import gensim, logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
sentences = [['first', 'sentence'], ['second', 'sentence']]
# train word2vec on the two sentences
model = gensim.models.Word2Vec(sentences, min_count=1)
"""
Explanation: Word2Vec Tutorial
In case you missed the buzz, word2vec is widely featured as a member of the “new wave” of machine learning algorithms based on neural networks, commonly referred to as "deep learning" (though word2vec itself is rather shallow). Using large amounts of unannotated plain text, word2vec learns relationships between words automatically. The output is a set of vectors, one vector per word, with remarkable linear relationships that allow us to do things like vec(“king”) – vec(“man”) + vec(“woman”) =~ vec(“queen”), or vec(“Montreal Canadiens”) – vec(“Montreal”) + vec(“Toronto”) resembles the vector for “Toronto Maple Leafs”.
Word2vec is very useful in automatic text tagging, recommender systems and machine translation.
Check out an online word2vec demo where you can try this vector algebra for yourself. That demo runs word2vec on the Google News dataset, of about 100 billion words.
This tutorial
In this tutorial you will learn how to train and evaluate word2vec models on your business data.
Preparing the Input
Starting from the beginning, gensim’s word2vec expects a sequence of sentences as its input. Each sentence is a list of words (utf8 strings):
End of explanation
"""
# create some toy data to use with the following example
import smart_open, os
if not os.path.exists('./data/'):
os.makedirs('./data/')
filenames = ['./data/f1.txt', './data/f2.txt']
for i, fname in enumerate(filenames):
with smart_open.smart_open(fname, 'w') as fout:
for line in sentences[i]:
fout.write(line + '\n')
class MySentences(object):
def __init__(self, dirname):
self.dirname = dirname
def __iter__(self):
for fname in os.listdir(self.dirname):
for line in open(os.path.join(self.dirname, fname)):
yield line.split()
sentences = MySentences('./data/') # a memory-friendly iterator
print(list(sentences))
# generate the Word2Vec model
model = gensim.models.Word2Vec(sentences, min_count=1)
print(model)
print(model.wv.vocab)
"""
Explanation: Keeping the input as a Python built-in list is convenient, but can use up a lot of RAM when the input is large.
Gensim only requires that the input must provide sentences sequentially, when iterated over. No need to keep everything in RAM: we can provide one sentence, process it, forget it, load another sentence…
For example, if our input is strewn across several files on disk, with one sentence per line, then instead of loading everything into an in-memory list, we can process the input file by file, line by line:
End of explanation
"""
# build the same model, making the 2 steps explicit
new_model = gensim.models.Word2Vec(min_count=1) # an empty model, no training
new_model.build_vocab(sentences) # can be a non-repeatable, 1-pass generator
new_model.train(sentences, total_examples=new_model.corpus_count, epochs=new_model.iter)
# can be a non-repeatable, 1-pass generator
print(new_model)
print(new_model.wv.vocab)
"""
Explanation: Say we want to further preprocess the words from the files — convert to unicode, lowercase, remove numbers, extract named entities… All of this can be done inside the MySentences iterator and word2vec doesn’t need to know. All that is required is that the input yields one sentence (list of utf8 words) after another.
Note to advanced users: calling Word2Vec(sentences, iter=1) will run two passes over the sentences iterator. In general it runs iter+1 passes. By the way, the default value is iter=5 to comply with Google's word2vec in C language.
1. The first pass collects words and their frequencies to build an internal dictionary tree structure.
2. The second pass trains the neural model.
These two passes can also be initiated manually, in case your input stream is non-repeatable (you can only afford one pass), and you’re able to initialize the vocabulary some other way:
End of explanation
"""
# Set file names for train and test data
test_data_dir = os.path.join(gensim.__path__[0], 'test', 'test_data') + os.sep
lee_train_file = test_data_dir + 'lee_background.cor'
class MyText(object):
def __iter__(self):
for line in open(lee_train_file):
# assume there's one document per line, tokens separated by whitespace
yield line.lower().split()
sentences = MyText()
print(sentences)
"""
Explanation: More data would be nice
For the following examples, we'll use the Lee Corpus (which you already have if you've installed gensim):
End of explanation
"""
# default value of min_count=5
model = gensim.models.Word2Vec(sentences, min_count=10)
"""
Explanation: Training
Word2Vec accepts several parameters that affect both training speed and quality.
min_count
min_count is for pruning the internal dictionary. Words that appear only once or twice in a billion-word corpus are probably uninteresting typos and garbage. In addition, there’s not enough data to make any meaningful training on those words, so it’s best to ignore them:
End of explanation
"""
# default value of size=100
model = gensim.models.Word2Vec(sentences, size=200)
"""
Explanation: size
size is the number of dimensions (N) of the N-dimensional space that gensim Word2Vec maps the words onto.
Bigger size values require more training data, but can lead to better (more accurate) models. Reasonable values are in the tens to hundreds.
End of explanation
"""
# default value of workers=3 (tutorial says 1...)
model = gensim.models.Word2Vec(sentences, workers=4)
"""
Explanation: workers
workers, the last of the major parameters (full list here) is for training parallelization, to speed up training:
End of explanation
"""
model.accuracy('./datasets/questions-words.txt')
"""
Explanation: The workers parameter only has an effect if you have Cython installed. Without Cython, you’ll only be able to use one core because of the GIL (and word2vec training will be miserably slow).
Memory
At its core, word2vec model parameters are stored as matrices (NumPy arrays). Each array is #vocabulary (controlled by min_count parameter) times #size (size parameter) of floats (single precision aka 4 bytes).
Three such matrices are held in RAM (work is underway to reduce that number to two, or even one). So if your input contains 100,000 unique words, and you asked for layer size=200, the model will require approx. 100,000*200*4*3 bytes = ~229MB.
There’s a little extra memory needed for storing the vocabulary tree (100,000 words would take a few megabytes), but unless your words are extremely loooong strings, memory footprint will be dominated by the three matrices above.
Evaluating
Word2Vec training is an unsupervised task, there’s no good way to objectively evaluate the result. Evaluation depends on your end application.
Google has released their testing set of about 20,000 syntactic and semantic test examples, following the “A is to B as C is to D” task. It is provided in the 'datasets' folder.
For example a syntactic analogy of comparative type is bad:worse;good:?. There are total of 9 types of syntactic comparisons in the dataset like plural nouns and nouns of opposite meaning.
The semantic questions contain five types of semantic analogies, such as capital cities (Paris:France;Tokyo:?) or family members (brother:sister;dad:?).
Gensim supports the same evaluation set, in exactly the same format:
End of explanation
"""
model.evaluate_word_pairs(test_data_dir + 'wordsim353.tsv')
"""
Explanation: This accuracy method takes an optional parameter restrict_vocab, which limits which test examples are considered.
In the December 2016 release of Gensim we added a better way to evaluate semantic similarity.
By default it uses an academic dataset WS-353 but one can create a dataset specific to your business based on it. It contains word pairs together with human-assigned similarity judgments. It measures the relatedness or co-occurrence of two words. For example, 'coast' and 'shore' are very similar as they appear in the same context. At the same time 'clothes' and 'closet' are less similar because they are related but not interchangeable.
End of explanation
"""
from tempfile import mkstemp
fs, temp_path = mkstemp("gensim_temp") # creates a temp file
model.save(temp_path) # save the model
new_model = gensim.models.Word2Vec.load(temp_path) # open the model
"""
Explanation: Once again, good performance on Google's or WS-353 test set doesn’t mean word2vec will work well in your application, or vice versa. It’s always best to evaluate directly on your intended task. For an example of how to use word2vec in a classifier pipeline, see this tutorial.
Storing and loading models
You can store/load models using the standard gensim methods:
End of explanation
"""
model = gensim.models.Word2Vec.load(temp_path)
more_sentences = [['Advanced', 'users', 'can', 'load', 'a', 'model', 'and', 'continue', 'training', 'it', 'with', 'more', 'sentences']]
model.build_vocab(more_sentences, update=True)
model.train(more_sentences, total_examples=model.corpus_count, epochs=model.iter)
# cleaning up temp
os.close(fs)
os.remove(temp_path)
"""
Explanation: which uses pickle internally, optionally mmap’ing the model’s internal large NumPy matrices into virtual memory directly from disk files, for inter-process memory sharing.
In addition, you can load models created by the original C tool, both using its text and binary formats:
model = gensim.models.KeyedVectors.load_word2vec_format('/tmp/vectors.txt', binary=False)
# using gzipped/bz2 input works too, no need to unzip:
model = gensim.models.KeyedVectors.load_word2vec_format('/tmp/vectors.bin.gz', binary=True)
Online training / Resuming training
Advanced users can load a model and continue training it with more sentences and new vocabulary words:
End of explanation
"""
model.most_similar(positive=['human', 'crime'], negative=['party'], topn=1)
model.doesnt_match("input is lunch he sentence cat".split())
print(model.similarity('human', 'party'))
print(model.similarity('tree', 'murder'))
"""
Explanation: You may need to tweak the total_examples (or total_words) parameter to train(), depending on what learning rate decay you want to simulate.
Note that it’s not possible to resume training with models generated by the C tool, KeyedVectors.load_word2vec_format(). You can still use them for querying/similarity, but information vital for training (the vocab tree) is missing there.
Using the model
Word2Vec supports several word similarity tasks out of the box:
End of explanation
"""
print(model.predict_output_word(['emergency', 'beacon', 'received']))
"""
Explanation: You can get the probability distribution for the center word given the context words as input:
End of explanation
"""
model['tree'] # raw NumPy vector of a word
"""
Explanation: The results here don't look good because the training corpus is very small. To get meaningful results one needs to train on 500k+ words.
If you need the raw output vectors in your application, you can access these either on a word-by-word basis:
End of explanation
"""
|
Jim00000/Numerical-Analysis | Projects/project_pi_calculation_monte_carlo.ipynb | unlicense | # Import modules
import time
import math
import numpy as np
import scipy
import matplotlib.pyplot as plt
"""
Explanation: ★ Monte Carlo Simulation To Calculate PI ★
End of explanation
"""
def linear_congruential_generator(x, a, b, m):
x = (a * x + b) % m
u = x / m
return u, x, a, b, m
def stdrand(x):
return linear_congruential_generator(x, pow(7, 5), 0, pow(2, 31) - 1)[:2]
def halton(p, n):
b = np.zeros(math.ceil(math.log(n + 1) / math.log(p)))
u = np.zeros(n)
for j in range(n):
i = 0
b[0] = b[0] + 1
while b[i] > p - 1 + np.finfo(float).eps:
b[i] = 0
i += 1
b[i] += 1
u[j] = 0
for k in range(1, b.size + 1):
u[j] = u[j] + b[k-1] * pow(p, -k)
return u
"""
Explanation: Necessary Functions for Monte Carlo Simulation
End of explanation
"""
def monte_carlo_process_std(toss):
x = time.time()
hit = 0
for i in range(toss):
u1, x = stdrand(x)
u2, x = stdrand(x)
if pow(u1, 2) + pow(u2, 2) < 1.0:
hit += 1
return hit * 4.0 / toss
pi = monte_carlo_process_std(2000000)
print('pi = %.10f, err = %.10f' %(pi, abs(pi - np.pi)))
"""
Explanation: Monte Carlo Simulation (with Minimal standard random number generator)
End of explanation
"""
def monte_carlo_process_customized(toss):
x0 = time.time()
args = (x0, 13, 0, 31)
hit = 0
for i in range(toss):
u1, *args = linear_congruential_generator(*args)
u2, *args = linear_congruential_generator(*args)
if pow(u1, 2) + pow(u2, 2) < 1.0:
hit += 1
return hit * 4.0 / toss
pi = monte_carlo_process_customized(2000000)
print('pi = %.10f, err = %.10f' %(pi, abs(pi - np.pi)))
"""
Explanation: Monte Carlo Simulation (with LCG where multiplier = 13, offset = 0 and modulus = 31)
End of explanation
"""
def monte_carlo_process_quasi(toss):
hit = 0
px = halton(2, toss)
py = halton(3, toss)
for i in range(toss):
u1 = px[i]
u2 = py[i]
if pow(u1, 2) + pow(u2, 2) < 1.0:
hit += 1
return hit * 4.0 / toss
pi = monte_carlo_process_quasi(2000000)
print('pi = %.10f, err = %.10f' %(pi, abs(pi - np.pi)))
"""
Explanation: Monte Carlo Simulation (with quasi-random numbers)
End of explanation
"""
|
chrisfilo/fmri-analysis-vm | analysis/MVPA/RSA.ipynb | mit | import numpy
import nibabel
import os
from haxby_data import HaxbyData
from nilearn.input_data import NiftiMasker
%matplotlib inline
import matplotlib.pyplot as plt
import sklearn.manifold
import scipy.cluster.hierarchy
datadir='/Users/poldrack/data_unsynced/haxby/subj1'
print 'Using data from',datadir
haxbydata=HaxbyData(datadir)
modeldir=os.path.join(datadir,'blockmodel')
try:
os.chdir(modeldir)
except:
print 'problem changing to',modeldir
print 'you may need to run the Classification Analysis script first'
use_whole_brain=False
if use_whole_brain:
maskimg=haxbydata.brainmaskfile
else:
maskimg=haxbydata.vtmaskfile
nifti_masker = NiftiMasker(mask_img=maskimg, standardize=False)
fmri_masked = nifti_masker.fit_transform(os.path.join(modeldir,'zstatdata.nii.gz'))
"""
Explanation: In this notebook we will work through a representational similarity analysis of the Haxby dataset.
End of explanation
"""
cc=numpy.zeros((8,8,12,12))
# loop through conditions
for ci in range(8):
for cj in range(8):
for i in range(12):
for j in range(12):
idx_i=numpy.where(numpy.logical_and(haxbydata.runs==i,haxbydata.condnums==ci+1))[0][0]
idx_j=numpy.where(numpy.logical_and(haxbydata.runs==j,haxbydata.condnums==cj+1))[0][0]
cc[ci,cj,i,j]=numpy.corrcoef(fmri_masked[idx_i,:],fmri_masked[idx_j,:])[0,1]
meansim=numpy.zeros((8,8))
for ci in range(8):
    for cj in range(8):
        cci=cc[ci,cj,:,:]
        meansim[ci,cj]=numpy.mean(numpy.hstack((cci[numpy.triu_indices(12,1)],
                                                cci[numpy.tril_indices(12,-1)])))
plt.imshow(meansim,interpolation='nearest')
l=scipy.cluster.hierarchy.ward(1.0 - meansim)
cl=scipy.cluster.hierarchy.dendrogram(l,labels=haxbydata.condlabels,orientation='right')
"""
Explanation: Let's ask the following question: Are cats (condition 3) more similar to human faces (condition 2) than to chairs (condition 8)? To do this, we compute the between-run similarity for all conditions against each other.
End of explanation
"""
# within-condition
face_corr={}
corr_means=[]
corr_stderr=[]
corr_stimtype=[]
for k in haxbydata.cond_dict.iterkeys():
face_corr[k]=[]
for i in range(12):
for j in range(12):
if i==j:
continue
face_corr[k].append(cc[haxbydata.cond_dict['face']-1,haxbydata.cond_dict[k]-1,i,j])
corr_means.append(numpy.mean(face_corr[k]))
corr_stderr.append(numpy.std(face_corr[k])/numpy.sqrt(len(face_corr[k])))
corr_stimtype.append(k)
idx=numpy.argsort(corr_means)[::-1]
plt.bar(numpy.arange(0.5,8.),[corr_means[i] for i in idx],yerr=[corr_stderr[i] for i in idx]) #,yerr=corr_sterr[idx])
t=plt.xticks(numpy.arange(1,9), [corr_stimtype[i] for i in idx],rotation=70)
plt.ylabel('Mean between-run correlation with faces')
import sklearn.manifold
mds=sklearn.manifold.MDS()
#mds=sklearn.manifold.TSNE(early_exaggeration=10,perplexity=70,learning_rate=100,n_iter=5000)
encoding=mds.fit_transform(fmri_masked)
plt.figure(figsize=(12,12))
ax=plt.axes() #[numpy.min(encoding[0]),numpy.max(encoding[0]),numpy.min(encoding[1]),numpy.max(encoding[1])])
ax.scatter(encoding[:,0],encoding[:,1])
offset=0.01
for i in range(encoding.shape[0]):
ax.annotate(haxbydata.conditions[i].split('-')[0],(encoding[i,0],encoding[i,1]),xytext=[encoding[i,0]+offset,encoding[i,1]+offset])
#for i in range(encoding.shape[0]):
# plt.text(encoding[i,0],encoding[i,1],'%d'%haxbydata.condnums[i])
mdsmeans=numpy.zeros((2,8))
for i in range(8):
mdsmeans[:,i]=numpy.mean(encoding[haxbydata.condnums==(i+1),:],0)
for i in range(2):
print 'Dimension %d:'%int(i+1)
idx=numpy.argsort(mdsmeans[i,:])
for j in idx:
print '%s:\t%f'%(haxbydata.condlabels[j],mdsmeans[i,j])
print ''
"""
Explanation: Let's test whether similarity is higher for faces across runs within-condition versus similarity between faces and all other categories. Note that we would generally want to compute this for each subject and do statistics on the means across subjects, rather than computing the statistics within-subject as we do below (which treats subject as a fixed effect)
End of explanation
"""
|
chrisfilo/fmri-analysis-vm | analysis/connectivity/ConnectivitySimulations.ipynb | mit | import os,sys
import numpy
%matplotlib inline
import matplotlib.pyplot as plt
sys.path.insert(0,'../utils')
from mkdesign import create_design_singlecondition
from nipy.modalities.fmri.hemodynamic_models import spm_hrf,compute_regressor
from make_data import make_continuous_data
data=make_continuous_data(N=200)
print 'correlation without activation:',numpy.corrcoef(data.T)[0,1]
plt.plot(range(data.shape[0]),data[:,0],color='blue')
plt.plot(range(data.shape[0]),data[:,1],color='red')
"""
Explanation: This notebook will perform analysis of functional connectivity on simulated data.
End of explanation
"""
design_ts,design=create_design_singlecondition(blockiness=1.0,offset=30,blocklength=20,deslength=data.shape[0])
regressor,_=compute_regressor(design,'spm',numpy.arange(0,len(design_ts)))
regressor*=50.
data_act=data+numpy.hstack((regressor,regressor))
plt.plot(range(data.shape[0]),data_act[:,0],color='blue')
plt.plot(range(data.shape[0]),data_act[:,1],color='red')
print 'correlation with activation:',numpy.corrcoef(data_act.T)[0,1]
"""
Explanation: Now let's add on an activation signal to both voxels
End of explanation
"""
X=numpy.vstack((regressor.T,numpy.ones(data.shape[0]))).T
beta_hat=numpy.linalg.inv(X.T.dot(X)).dot(X.T).dot(data_act)
y_est=X.dot(beta_hat)
resid=data_act - y_est
print 'correlation of residuals:',numpy.corrcoef(resid.T)[0,1]
"""
Explanation: How can we address this problem? A general solution is to first run a general linear model to remove the task effect and then compute the correlation on the residuals.
End of explanation
"""
regressor_td,_=compute_regressor(design,'spm_time',numpy.arange(0,len(design_ts)))
regressor_lagged=regressor_td.dot(numpy.array([1,0.5]))*50
plt.plot(regressor_lagged)
plt.plot(regressor)
data_lagged=data+numpy.vstack((regressor_lagged,regressor_lagged)).T
beta_hat_lag=numpy.linalg.inv(X.T.dot(X)).dot(X.T).dot(data_lagged)
plt.subplot(211)
y_est_lag=X.dot(beta_hat_lag)
plt.plot(y_est)
plt.plot(data_lagged)
resid=data_lagged - y_est_lag
print 'correlation of residuals:',numpy.corrcoef(resid.T)[0,1]
plt.subplot(212)
plt.plot(resid)
"""
Explanation: What happens if we get the hemodynamic model wrong? Let's use the temporal derivative model to generate an HRF that is lagged compared to the canonical.
End of explanation
"""
regressor_fir,_=compute_regressor(design,'fir',numpy.arange(0,len(design_ts)),fir_delays=range(28))
regressor_fir.shape
X_fir=numpy.vstack((regressor_fir.T,numpy.ones(data.shape[0]))).T
beta_hat_fir=numpy.linalg.inv(X_fir.T.dot(X_fir)).dot(X_fir.T).dot(data_lagged)
plt.subplot(211)
y_est_fir=X_fir.dot(beta_hat_fir)
plt.plot(y_est)
plt.plot(data_lagged)
resid=data_lagged - y_est_fir
print 'correlation of residuals:',numpy.corrcoef(resid.T)[0,1]
plt.subplot(212)
plt.plot(resid)
"""
Explanation: Let's see if using a more flexible basis set, like an FIR model, will allow us to get rid of the task-induced correlation.
End of explanation
"""
|
sdss/marvin | docs/sphinx/jupyter/my-first-query.ipynb | bsd-3-clause | # Python 2/3 compatibility
from __future__ import print_function, division, absolute_import
from marvin import config
config.setRelease('MPL-4')
from marvin.tools.query import Query
"""
Explanation: My First Query
One of the most powerful features of Marvin 2.0 is ability to query the newly created DRP and DAP databases. You can do this in two ways:
1. via the Marvin-web Search page or
2. via Python (in the terminal/notebook/script) with Marvin-tools.
The best part is that both interfaces use the same underlying query structure, so your input search will be the same. Here we will run a few queries with Marvin-tools to learn the basics of how to construct a query and also test drive some of the more advanced features that are unique to the Marvin-tools version of querying.
End of explanation
"""
myquery1 = 'nsa.sersic_mass > 3e11'
# or
myquery1 = 'nsa.sersic_logmass > 11.47'
q1 = Query(search_filter=myquery1)
r1 = q1.run()
"""
Explanation: Let's search for galaxies with M$_\star$ > 3 $\times$ 10$^{11}$ M$_\odot$.
To specify our search parameter, M$_\star$, we must know the database table and name of the parameter. In this case, MaNGA uses the NASA-Sloan Atlas (NSA) for target selection so we will use the Sersic profile determination for stellar mass, which is the sersic_mass parameter of the nsa table, so our search parameter will be nsa.sersic_mass. You can also use nsa.sersic_logmass
Generically, the search parameter will take the form table.parameter.
End of explanation
"""
# show results
r1.results
"""
Explanation: Running the query produces a Results object (r1):
End of explanation
"""
myquery2 = 'nsa.sersic_mass > 3e11 AND nsa.z < 0.1'
q2 = Query(search_filter=myquery2)
r2 = q2.run()
r2.results
"""
Explanation: We will learn how to use the features of our Results object a little bit later, but first let's revise our search to see how more complex search queries work.
Multiple Search Criteria
Let's add to our search to find only galaxies with a redshift less than 0.1.
Redshift is the z parameter and is also in the nsa table, so its full search parameter designation is nsa.z.
End of explanation
"""
myquery3 = '(nsa.sersic_mass > 3e11 AND nsa.z < 0.1) OR (ifu.name=127* AND nsa.elpetro_ba >= 0.95)'
q3 = Query(search_filter=myquery3)
r3 = q3.run()
r3.results
"""
Explanation: Compound Search Statements
We were hoping for a few more than 3 galaxies, so let's try to increase our search by broadening the criteria to also include galaxies with 127 fiber IFUs and a b/a ratio of at least 0.95.
To find 127 fiber IFUs, we'll use the name parameter of the ifu table, which means the full search parameter is ifu.name. However, ifu.name returns the IFU design name, such as 12701, so we need to to set the value to 127*.
The b/a ratio is in nsa table as the elpetro_ba parameter.
We're also going to join this to or previous query with an OR operator and use parentheses to group our individual search statements into a compound search statement.
End of explanation
"""
# Enter your search here
"""
Explanation: Design Your Own Search
OK, now it's your turn to try designing a search.
Exercise: Write a search filter that will find galaxies with a redshift less than 0.02 that were observed with the 1901 IFU?
End of explanation
"""
# You might have to do an svn update to get this to work (otherwise try the next cell)
q = Query()
q.get_available_params()
"""
Explanation: Finding the Available Parameters
Now you might want to go out and try all of the interesting queries that you've been saving up, but you don't know what the parameters are called or what database table they are in.
You can find all of the availabale parameters by:
1. clicking on the Return Parameters dropdown menu on the left side of the Marvin-web Search page,
2. reading the Marvin Docs page, or
3. via Marvin-tools (see next two cells)
End of explanation
"""
myquery5 = 'nsa.z > 0.1'
bonusparams5 = ['cube.ra', 'cube.dec']
# bonusparams5 = 'cube.ra' # This works too
q5 = Query(search_filter=myquery5, return_params=bonusparams5)
r5 = q5.run()
r5.results
"""
Explanation: Go ahead and try to create some new searches on your own from the parameter list. Please feel free to also try out the some of the same search on the Marvin-web Search page.
Returning Bonus Parameters
Often you want to run a query and see the value of parameters that you didn't explicitly search on. For instance, you want to find galaxies above a redshift of 0.1 and would like to know their RA and DECs.
In Marvin-tools, this is as easy as specifying the return_params option with either a string (for a single bonus parameter) or a list of strings (for multiple bonus parameters).
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.19/_downloads/2784a8d5822ed9797c0330f973573c10/plot_stats_cluster_erp.ipynb | bsd-3-clause | import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import ttest_ind
import mne
from mne.channels import find_ch_connectivity, make_1020_channel_selections
from mne.stats import spatio_temporal_cluster_test
np.random.seed(0)
# Load the data
path = mne.datasets.kiloword.data_path() + '/kword_metadata-epo.fif'
epochs = mne.read_epochs(path)
name = "NumberOfLetters"
# Split up the data by the median length in letters via the attached metadata
median_value = str(epochs.metadata[name].median())
long_words = epochs[name + " > " + median_value]
short_words = epochs[name + " < " + median_value]
"""
Explanation: Visualising statistical significance thresholds on EEG data
MNE-Python provides a range of tools for statistical hypothesis testing
and the visualisation of the results. Here, we show a few options for
exploratory and confirmatory tests - e.g., targeted t-tests, cluster-based
permutation approaches (here with Threshold-Free Cluster Enhancement);
and how to visualise the results.
The underlying data comes from [1]; we contrast long vs. short words.
TFCE is described in [2].
End of explanation
"""
time_windows = ((.2, .25), (.35, .45))
elecs = ["Fz", "Cz", "Pz"]
# display the EEG data in Pandas format (first 5 rows)
print(epochs.to_data_frame()[elecs].head())
report = "{elec}, time: {tmin}-{tmax} s; t({df})={t_val:.3f}, p={p:.3f}"
print("\nTargeted statistical test results:")
for (tmin, tmax) in time_windows:
long_df = long_words.copy().crop(tmin, tmax).to_data_frame()
short_df = short_words.copy().crop(tmin, tmax).to_data_frame()
for elec in elecs:
# extract data
A = long_df[elec].groupby("condition").mean()
B = short_df[elec].groupby("condition").mean()
# conduct t test
t, p = ttest_ind(A, B)
# display results
format_dict = dict(elec=elec, tmin=tmin, tmax=tmax,
df=len(epochs.events) - 2, t_val=t, p=p)
print(report.format(**format_dict))
"""
Explanation: If we have a specific point in space and time we wish to test, it can be
convenient to convert the data into Pandas Dataframe format. In this case,
the :class:mne.Epochs object has a convenient
:meth:mne.Epochs.to_data_frame method, which returns a dataframe.
This dataframe can then be queried for specific time windows and sensors.
The extracted data can be submitted to standard statistical tests. Here,
we conduct t-tests on the difference between long and short words.
End of explanation
"""
# Calculate statistical thresholds
con = find_ch_connectivity(epochs.info, "eeg")
# Extract data: transpose because the cluster test requires channels to be last
# In this case, inference is done over items. In the same manner, we could
# also conduct the test over, e.g., subjects.
X = [long_words.get_data().transpose(0, 2, 1),
short_words.get_data().transpose(0, 2, 1)]
tfce = dict(start=.2, step=.2)
t_obs, clusters, cluster_pv, h0 = spatio_temporal_cluster_test(
X, tfce, n_permutations=100) # a more standard number would be 1000+
significant_points = cluster_pv.reshape(t_obs.shape).T < .05
print(str(significant_points.sum()) + " points selected by TFCE ...")
"""
Explanation: Absent specific hypotheses, we can also conduct an exploratory
mass-univariate analysis at all sensors and time points. This requires
correcting for multiple tests.
MNE offers various methods for this; amongst them, cluster-based permutation
methods allow deriving power from the spatio-temporal correlation structure
of the data. Here, we use TFCE.
End of explanation
"""
# We need an evoked object to plot the image to be masked
evoked = mne.combine_evoked([long_words.average(), -short_words.average()],
weights='equal') # calculate difference wave
time_unit = dict(time_unit="s")
evoked.plot_joint(title="Long vs. short words", ts_args=time_unit,
topomap_args=time_unit) # show difference wave
# Create ROIs by checking channel labels
selections = make_1020_channel_selections(evoked.info, midline="12z")
# Visualize the results
fig, axes = plt.subplots(nrows=3, figsize=(8, 8))
axes = {sel: ax for sel, ax in zip(selections, axes.ravel())}
evoked.plot_image(axes=axes, group_by=selections, colorbar=False, show=False,
mask=significant_points, show_names="all", titles=None,
**time_unit)
plt.colorbar(axes["Left"].images[-1], ax=list(axes.values()), shrink=.3,
label="uV")
plt.show()
"""
Explanation: The results of these mass univariate analyses can be visualised by plotting
:class:mne.Evoked objects as images (via :class:mne.Evoked.plot_image)
and masking points for significance.
Here, we group channels by Regions of Interest to facilitate localising
effects on the head.
End of explanation
"""
|
macks22/gensim | docs/notebooks/lda_training_tips.ipynb | lgpl-2.1 | # Read data.
import os
# Folder containing all NIPS papers.
data_dir = 'nipstxt/'
# Folders containing individual NIPS papers.
yrs = ['00', '01', '02', '03', '04', '05', '06', '07', '08', '09', '10', '11', '12']
dirs = ['nips' + yr for yr in yrs]
# Read all texts into a list.
docs = []
for yr_dir in dirs:
files = os.listdir(data_dir + yr_dir)
for filen in files:
# Note: ignoring characters that cause encoding errors.
with open(data_dir + yr_dir + '/' + filen, errors='ignore') as fid:
txt = fid.read()
docs.append(txt)
"""
Explanation: Pre-processing and training LDA
The purpose of this tutorial is to show you how to pre-process text data, and how to train the LDA model on that data. This tutorial will not explain the LDA model itself, how inference is made in it, or necessarily teach you how to use Gensim's implementation. There are plenty of resources for all of those things, but what is somewhat lacking is a hands-on tutorial that helps you train an LDA model with good results... so here is my contribution towards that.
I have used a corpus of NIPS papers in this tutorial, but if you're following this tutorial just to learn about LDA I encourage you to consider picking a corpus on a subject that you are familiar with. Qualitatively evaluating the output of an LDA model is challenging and can require you to understand the subject matter of your corpus (depending on your goal with the model).
I would also encourage you to consider each step when applying the model to your data, instead of just blindly applying my solution. The different steps will depend on your data and possibly your goal with the model.
In the following sections, we will go through pre-processing the data and training the model.
Note:
This tutorial uses the nltk library, although you can replace it with something else if you want. Python 3 is used, although Python 2.7 can be used as well.
In this tutorial we will:
Load data.
Pre-process data.
Transform documents to a vectorized form.
Train an LDA model.
If you are not familiar with the LDA model or how to use it in Gensim, I suggest you read up on that before continuing with this tutorial. Basic understanding of the LDA model should suffice. Examples:
Gentle introduction to the LDA model: http://blog.echen.me/2011/08/22/introduction-to-latent-dirichlet-allocation/
Gensim's LDA API documentation: https://radimrehurek.com/gensim/models/ldamodel.html
Topic modelling in Gensim: http://radimrehurek.com/topic_modeling_tutorial/2%20-%20Topic%20Modeling.html
Data
We will be using some papers from the NIPS (Neural Information Processing Systems) conference. NIPS is a machine learning conference so the subject matter should be well suited for most of the target audience of this tutorial.
You can download the data from Sam Roweis' website (http://www.cs.nyu.edu/~roweis/data.html).
Note that the corpus contains 1740 documents, and not particularly long ones. So keep in mind that this tutorial is not geared towards efficiency, and be careful before applying the code to a large dataset.
Below we are simply reading the data.
End of explanation
"""
# Tokenize the documents.
from nltk.tokenize import RegexpTokenizer
# Split the documents into tokens.
tokenizer = RegexpTokenizer(r'\w+')
for idx in range(len(docs)):
docs[idx] = docs[idx].lower() # Convert to lowercase.
docs[idx] = tokenizer.tokenize(docs[idx]) # Split into words.
# Remove numbers, but not words that contain numbers.
docs = [[token for token in doc if not token.isnumeric()] for doc in docs]
# Remove words that are only one character.
docs = [[token for token in doc if len(token) > 1] for doc in docs]
"""
Explanation: Pre-process and vectorize the documents
Among other things, we will:
Split the documents into tokens.
Lemmatize the tokens.
Compute bigrams.
Compute a bag-of-words representation of the data.
First we tokenize the text using a regular expression tokenizer from NLTK. We remove numeric tokens and tokens that are only a single character, as they don't tend to be useful, and the dataset contains a lot of them.
End of explanation
"""
# Lemmatize the documents.
from nltk.stem.wordnet import WordNetLemmatizer
# Lemmatize all words in documents.
lemmatizer = WordNetLemmatizer()
docs = [[lemmatizer.lemmatize(token) for token in doc] for doc in docs]
"""
Explanation: We use the WordNet lemmatizer from NLTK. A lemmatizer is preferred over a stemmer in this case because it produces more readable words. Output that is easy to read is very desirable in topic modelling.
End of explanation
"""
# Compute bigrams.
from gensim.models import Phrases
# Add bigrams and trigrams to docs (only ones that appear 20 times or more).
bigram = Phrases(docs, min_count=20)
for idx in range(len(docs)):
for token in bigram[docs[idx]]:
if '_' in token:
# Token is a bigram, add to document.
docs[idx].append(token)
"""
Explanation: We find bigrams in the documents. Bigrams are sets of two adjacent words. Using bigrams we can get phrases like "machine_learning" in our output (spaces are replaced with underscores); without bigrams we would only get "machine" and "learning".
Note that in the code below, we find bigrams and then add them to the original data, because we would like to keep the words "machine" and "learning" as well as the bigram "machine_learning".
Note that computing n-grams of a large dataset can be very computationally and memory intensive.
End of explanation
"""
# Remove rare and common tokens.
from gensim.corpora import Dictionary
# Create a dictionary representation of the documents.
dictionary = Dictionary(docs)
# Filter out words that occur in fewer than 20 documents, or in more than 50% of the documents.
dictionary.filter_extremes(no_below=20, no_above=0.5)
"""
Explanation: We remove rare words and common words based on their document frequency. Below we remove words that appear in fewer than 20 documents or in more than 50% of the documents. Consider trying to remove words only based on their frequency, or maybe combining that with this approach.
End of explanation
"""
# Vectorize data.
# Bag-of-words representation of the documents.
corpus = [dictionary.doc2bow(doc) for doc in docs]
"""
Explanation: Finally, we transform the documents to a vectorized form. We simply compute the frequency of each word, including the bigrams.
End of explanation
"""
print('Number of unique tokens: %d' % len(dictionary))
print('Number of documents: %d' % len(corpus))
"""
Explanation: Let's see how many tokens and documents we have to train on.
End of explanation
"""
# Train LDA model.
from gensim.models import LdaModel
# Set training parameters.
num_topics = 10
chunksize = 2000
passes = 20
iterations = 400
eval_every = None # Don't evaluate model perplexity, takes too much time.
# Make a index to word dictionary.
temp = dictionary[0] # This is only to "load" the dictionary.
id2word = dictionary.id2token
%time model = LdaModel(corpus=corpus, id2word=id2word, chunksize=chunksize, \
alpha='auto', eta='auto', \
iterations=iterations, num_topics=num_topics, \
passes=passes, eval_every=eval_every)
"""
Explanation: Training
We are ready to train the LDA model. We will first discuss how to set some of the training parameters.
First of all, the elephant in the room: how many topics do I need? There is really no easy answer for this, it will depend on both your data and your application. I have used 10 topics here because I wanted to have a few topics that I could interpret and "label", and because that turned out to give me reasonably good results. You might not need to interpret all your topics, so you could use a large number of topics, for example 100.
The chunksize controls how many documents are processed at a time in the training algorithm. Increasing chunksize will speed up training, at least as long as the chunk of documents easily fits into memory. I've set chunksize = 2000, which is more than the number of documents, so I process all the data in one go. Chunksize can however influence the quality of the model, as discussed in Hoffman and co-authors [2], but the difference was not substantial in this case.
passes controls how often we train the model on the entire corpus. Another word for passes might be "epochs". iterations is somewhat technical, but essentially it controls how often we repeat a particular loop over each document. It is important to set the number of "passes" and "iterations" high enough.
I suggest the following way to choose iterations and passes. First, enable logging (as described in many Gensim tutorials), and set eval_every = 1 in LdaModel. When training the model look for a line in the log that looks something like this:
2016-06-21 15:40:06,753 - gensim.models.ldamodel - DEBUG - 68/1566 documents converged within 400 iterations
If you set passes = 20 you will see this line 20 times. Make sure that by the final passes, most of the documents have converged. So you want to choose both passes and iterations to be high enough for this to happen.
We set alpha = 'auto' and eta = 'auto'. Again this is somewhat technical, but essentially we are automatically learning two parameters in the model that we usually would have to specify explicitly.
End of explanation
"""
top_topics = model.top_topics(corpus, num_words=20)
# Average topic coherence is the sum of topic coherences of all topics, divided by the number of topics.
avg_topic_coherence = sum([t[1] for t in top_topics]) / num_topics
print('Average topic coherence: %.4f.' % avg_topic_coherence)
from pprint import pprint
pprint(top_topics)
"""
Explanation: We can compute the topic coherence of each topic. Below we display the average topic coherence and print the topics in order of topic coherence.
Note that we use the "Umass" topic coherence measure here (see docs, https://radimrehurek.com/gensim/models/ldamodel.html#gensim.models.ldamodel.LdaModel.top_topics), Gensim has recently obtained an implementation of the "AKSW" topic coherence measure (see accompanying blog post, http://rare-technologies.com/what-is-topic-coherence/).
If you are familiar with the subject of the articles in this dataset, you can see that the topics below make a lot of sense. However, they are not without flaws. We can see that there is substantial overlap between some topics, others are hard to interpret, and most of them have at least some terms that seem out of place. If you were able to do better, feel free to share your methods on the blog at http://rare-technologies.com/lda-training-tips/ !
End of explanation
"""
|
sangheestyle/ml2015project | howto/read_data.ipynb | mit | import csv
import yaml
reader = csv.reader(open("../data/questions.csv"))
"""
Explanation: Here, I will show you how to read the questions.csv file. Let's do it.
Read a csv formatted file without a header
First of all, you need to import csv and yaml, then read the csv formatted file.
End of explanation
"""
question_1 = reader.next()
question_1
"""
Explanation: Read the first line and see the structure.
End of explanation
"""
yaml.load(question_1[-1].replace(": u'", ": '"))
"""
Explanation: Yes, each line is converted into a list with 6 items, as expected. But how can we use the last item? It is a string, but it looks like a dictionary or JSON.
OK, let's try to convert it into a dictionary.
End of explanation
"""
reader = csv.reader(open("../data/train.csv"))
"""
Explanation: Now, you know how to convert csv files into other formats that you want. So, you can handle all the given files.
Convert csv into list
Let's try to read train.csv.
End of explanation
"""
reader.next()
"""
Explanation: However, you know that train.csv has a header, which is not data we want to use, so we need to skip the first line. Note that the reader returned by csv.reader is an iterator, not a list, so you can only iterate over it once. If you want to read the file again, you need to call csv.reader again.
End of explanation
"""
train_set = []
for row in reader:
train_set.append(row)
print len(train_set)
print len(train_set[0])
print train_set[0]
print train_set[-1]
"""
Explanation: OK, now reader is on the 2nd line of the csv flie. Try to convert it into list.
End of explanation
"""
|
KitwareMedical/ITKTubeTK | examples/TubeNumPyArrayAndPropertyHistograms.ipynb | apache-2.0 | import os
import sys
from _tubetk_numpy import tubes_from_file
tubes = tubes_from_file("data/Normal071-VascularNetwork.tre")
"""
Explanation: This notebook illustrates the TubeTK tube NumPy array data structure and how to create histograms of the properties of a VesselTube.
First, import the function for reading a tube file in as a NumPy array, and read in the file.
End of explanation
"""
print(type(tubes))
print(tubes.dtype)
"""
Explanation: The result is a NumPy Record Array where the fields of the array correspond to the properties of a VesselTubeSpatialObjectPoint.
End of explanation
"""
print(len(tubes))
print(tubes.shape)
"""
Explanation: The length of the array corresponds to the number of points that make up the tubes.
End of explanation
"""
print('Entire points 0, 2:')
print(tubes[:4:2])
print('\nPosition of points 0, 2')
print(tubes['PositionInWorldSpace'][:4:2])
"""
Explanation: Individual points can be sliced, or views can be created on individual fields.
End of explanation
"""
%pylab inline
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(16, 6))
ax = fig.add_subplot(1, 2, 1)
ax.hist(tubes['RadiusInWorldSpace'], bins=100)
ax.set_xlabel('Radius')
ax.set_ylabel('Count')
ax = fig.add_subplot(1, 2, 2, projection='3d')
subsample = 100
position = tubes['PositionInWorldSpace'][::subsample]
radius = tubes['RadiusInWorldSpace'][::subsample]
ax.scatter(position[:,0], position[:,1], position[:,2], s=(2*radius)**2)
ax.set_title('Point Positions')
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z');
"""
Explanation: We can easily create a histogram of the radii or visualize the point positions.
End of explanation
"""
|
NuGrid/NuPyCEE | regression_tests/temp/SYGMA_DTD.ipynb | bsd-3-clause | %pylab nbagg
import sygma as s
reload(s)
s.__file__
from scipy.integrate import quad
from scipy.interpolate import UnivariateSpline
import numpy as np
"""
Explanation: Input parameters for the DTDs.
Check the different inputs for the SNIa DTD.
$\odot$ Power law & Maoz
$\odot$ Gaussian
$\odot$ Exponential
End of explanation
"""
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_rate='power_law',beta_pow=-1,
imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt',
sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn')
s2=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_rate='power_law',beta_pow=-2,
imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt',
sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn')
s3_maoz=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_rate='maoz',
imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt',
sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn')
"""
Explanation: Power law & Maoz
default beta_pow = -1 # t^beta_pow
End of explanation
"""
s1.plot_sn_distr(fig=5,rate=True,rate_only='sn1a',label1='$t^{-1}$',marker1='o')
s2.plot_sn_distr(fig=5,rate=True,rate_only='sn1a',label1='$t^{-2}$',marker1='x',color1='b')
s3_maoz.plot_sn_distr(fig=5,rate=True,rate_only='sn1a',label1='$t^{-1}$, maoz',marker1='x',color1='b',shape1='--')
"""
Explanation: Maoz and the power law with β = -1 produce the same rate, as visible below.
End of explanation
"""
gauss_dtd=[1e9,6.6e8]
reload(s)
s2=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_rate='gauss',gauss_dtd=gauss_dtd,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn')
Yield_tot_sim=s2.history.ism_iso_yield_1a[-1][0]
zm_lifetime_grid=s2.zm_lifetime_grid_current
idx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0
grid_masses=zm_lifetime_grid[1][::-1]
grid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]
spline_degree1=2
smoothing1=0
boundary=[None,None]
spline = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)
g_dt1=s2
from scipy.integrate import dblquad
def spline1(x):
#x=t
return max(3.,10**spline(np.log10(x)))
def f_wd_dtd(m,t):
#print 'time ',t
#print 'mass ',m
mlim=10**spline(np.log10(t))
#print 'mlim',mlim
if mlim>8.:
#print t
#print mlim
return 0
else:
#mmin=max(3.,massfunc(t))
#mmax=8.
#imf=self.__imf(mmin,mmax,1)
#Delay time distribution function (DTD)
        # [1e9,6.6e8]  (stray no-op literal; the actual values come from gauss_dtd)
tau= gauss_dtd[0] #1e9 #3.3e9 #characteristic delay time
sigma=gauss_dtd[1] #0.66e9#0.25*tau
#sigma=0.2#narrow distribution
#sigma=0.5*tau #wide distribution
mmin=0
mmax=0
inte=0
def g2(mm):
return mm*mm**-2.35
norm=1./quad(g2,1,30)[0]
#imf normalized to 1Msun
return norm*m**-2.35* 1./np.sqrt(2*np.pi*sigma**2) * np.exp(-(t-tau)**2/(2*sigma**2))
#a= 0.0069 #normalization parameter
#if spline(np.log10(t))
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
n1a= a* dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0]
Yield_tot=n1a*1e11*0.1 #special factor
print Yield_tot_sim
print Yield_tot
print 'Should be 1: ', Yield_tot_sim/Yield_tot
s2.plot_mass(fig=6,specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)
yields1=[]
ages1=[]
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
for m1 in m:
t=ages[m.index(m1)]
yields= a* dblquad(f_wd_dtd,0,t,lambda x: spline1(x), lambda x: 8)[0] *1e11*0.1 #special factor
yields1.append(yields)
ages1.append(t)
plt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=2)
plt.show()
"""
Explanation: Gaussian
gauss_dtd=[3.3e9,6.6e8] (as used in Wiersma09); note the cell above actually uses gauss_dtd=[1e9,6.6e8]
End of explanation
"""
gauss_dtd=[4e9,2e9]
s2=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_rate='gauss',gauss_dtd=gauss_dtd,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn')
Yield_tot_sim=s2.history.ism_iso_yield_1a[-1][0]
zm_lifetime_grid=s2.zm_lifetime_grid_current
idx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0
grid_masses=zm_lifetime_grid[1][::-1]
grid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]
spline_degree1=2
smoothing1=0
boundary=[None,None]
spline = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)
g_dt2=s2
from scipy.integrate import dblquad
def spline1(x):
#x=t
return max(3.,10**spline(np.log10(x)))
def f_wd_dtd(m,t):
#print 'time ',t
#print 'mass ',m
mlim=10**spline(np.log10(t))
#print 'mlim',mlim
if mlim>8.:
#print t
#print mlim
return 0
else:
#mmin=max(3.,massfunc(t))
#mmax=8.
#imf=self.__imf(mmin,mmax,1)
#Delay time distribution function (DTD)
        # [1e9,6.6e8]  (stray no-op literal; the actual values come from gauss_dtd)
tau= gauss_dtd[0] #1e9 #3.3e9 #characteristic delay time
sigma=gauss_dtd[1] #0.66e9#0.25*tau
#sigma=0.2#narrow distribution
#sigma=0.5*tau #wide distribution
mmin=0
mmax=0
inte=0
def g2(mm):
return mm*mm**-2.35
norm=1./quad(g2,1,30)[0]
#imf normalized to 1Msun
return norm*m**-2.35* 1./np.sqrt(2*np.pi*sigma**2) * np.exp(-(t-tau)**2/(2*sigma**2))
#a= 0.0069 #normalization parameter
#if spline(np.log10(t))
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
n1a= a* dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0]
Yield_tot=n1a*1e11*0.1 #special factor
print Yield_tot_sim
print Yield_tot
print 'Should be 1: ', Yield_tot_sim/Yield_tot
s2.plot_mass(fig=7,specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)
yields1=[]
ages1=[]
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
for m1 in m:
t=ages[m.index(m1)]
yields= a* dblquad(f_wd_dtd,0,t,lambda x: spline1(x), lambda x: 8)[0] *1e11*0.1 #special factor
yields1.append(yields)
ages1.append(t)
plt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=2)
plt.show()
"""
Explanation: gauss_dtd=[4e9,3.2e9] (as mentioned in Wiersma09); note the cell above actually uses gauss_dtd=[4e9,2e9]
End of explanation
"""
g_dt1.plot_sn_distr(fig=66,rate=True,rate_only='sn1a',label1='gauss, 1',marker1='o',shape1='--')
g_dt2.plot_sn_distr(fig=66,rate=True,rate_only='sn1a',label1='gauss, 2',marker1='x',markevery=1)
print g_dt1.gauss_dtd
print g_dt2.gauss_dtd
"""
Explanation: Difference in rate
End of explanation
"""
exp_dtd=2e9
#import read_yields as ry
import sygma as s
reload(s)
#interpolate_lifetimes_grid=s22.__interpolate_lifetimes_grid
#ytables=ry.read_nugrid_yields('yield_tables/isotope_yield_table_h1.txt')
#zm_lifetime_grid=interpolate_lifetimes_grid(ytables,iolevel=0) 1e7
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_on=True,sn1a_rate='exp',exp_dtd=exp_dtd,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt', sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield_1a[-1][0]
zm_lifetime_grid=s1.zm_lifetime_grid_current
idx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0
grid_masses=zm_lifetime_grid[1][::-1]
grid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]
spline_degree1=2
smoothing1=0
boundary=[None,None]
spline_lifetime = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)
plt.plot(grid_masses,grid_lifetimes,label='spline fit grid points (SYGMA)')
plt.xlabel('Mini/Msun')
plt.ylabel('log lifetime')
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
plt.plot(np.array(m),np.log10(np.array(ages)),marker='+',markersize=20,label='input yield grid',linestyle='None')
plt.plot(10**spline_lifetime(np.log10(ages)),np.log10(ages),linestyle='--',label='spline fit SNIa')
plt.legend()
#plt.yscale('log')
e_dt1=s1
#following inside function wiersma09_efolding
#if timemin ==0:
# timemin=1
from scipy.integrate import dblquad
def spline1(x):
#x=t
minm_prog1a=3
#if minimum progenitor mass is larger than 3Msun due to IMF range:
#if self.imf_bdys[0]>3:
# minm_prog1a=self.imf_bdys[0]
return max(minm_prog1a,10**spline_lifetime(np.log10(x)))
def f_wd_dtd(m,t):
#print 'time ',t
#print 'mass ',m
mlim=10**spline_lifetime(np.log10(t))
maxm_prog1a=8
#if maximum progenitor mass is smaller than 8Msun due to IMF range:
#if 8>self.imf_bdys[1]:
# maxm_prog1a=self.imf_bdys[1]
if mlim>maxm_prog1a:
return 0
else:
#Delay time distribution function (DTD)
tau= 2e9
mmin=0
mmax=0
inte=0
#follwing is done in __imf()
def g2(mm):
return mm*mm**-2.35
norm=1./quad(g2,1,30)[0]
#print 'IMF test',norm*m**-2.35
#imf normalized to 1Msun
return norm*m**-2.35* np.exp(-t/tau)/tau
a= 0.01 #normalization parameter
#if spline(np.log10(t))
#a=1e-3/()
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
n1a= a* dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0]
# in principle since normalization is set: nb_1a_per_m the above calculation is not necessary anymore
Yield_tot=n1a*1e11*0.1 *1 #7 #special factor
print Yield_tot_sim
print Yield_tot
print 'Should be 1: ', Yield_tot_sim/Yield_tot
s1.plot_mass(fig=8,specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)
yields1=[]
ages1=[]
a= 0.01 #normalization parameter
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
for m1 in m:
t=ages[m.index(m1)]
yields= a* dblquad(f_wd_dtd,0,t,lambda x: spline1(x), lambda x: 8)[0] *1e11*0.1 #special factor
yields1.append(yields)
ages1.append(t)
plt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=4)
"""
Explanation: Exponential
exp_dtd (as used in Wiersma09) 2e9
End of explanation
"""
exp_dtd=10e9
#import read_yields as ry
import sygma as s
reload(s)
#interpolate_lifetimes_grid=s22.__interpolate_lifetimes_grid
#ytables=ry.read_nugrid_yields('yield_tables/isotope_yield_table_h1.txt')
#zm_lifetime_grid=interpolate_lifetimes_grid(ytables,iolevel=0) 1e7
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_on=True,sn1a_rate='exp',exp_dtd=exp_dtd,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt', sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield_1a[-1][0]
zm_lifetime_grid=s1.zm_lifetime_grid_current
idx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0
grid_masses=zm_lifetime_grid[1][::-1]
grid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]
spline_degree1=2
smoothing1=0
boundary=[None,None]
spline_lifetime = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)
plt.plot(grid_masses,grid_lifetimes,label='spline fit grid points (SYGMA)')
plt.xlabel('Mini/Msun')
plt.ylabel('log lifetime')
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
plt.plot(np.array(m),np.log10(np.array(ages)),marker='+',markersize=20,label='input yield grid',linestyle='None')
plt.plot(10**spline_lifetime(np.log10(ages)),np.log10(ages),linestyle='--',label='spline fit SNIa')
plt.legend()
#plt.yscale('log')
e_dt2=s1
#following inside function wiersma09_efolding
#if timemin ==0:
# timemin=1
from scipy.integrate import dblquad
def spline1(x):
#x=t
minm_prog1a=3
#if minimum progenitor mass is larger than 3Msun due to IMF range:
#if self.imf_bdys[0]>3:
# minm_prog1a=self.imf_bdys[0]
return max(minm_prog1a,10**spline_lifetime(np.log10(x)))
def f_wd_dtd(m,t):
#print 'time ',t
#print 'mass ',m
mlim=10**spline_lifetime(np.log10(t))
maxm_prog1a=8
#if maximum progenitor mass is smaller than 8Msun due to IMF range:
#if 8>self.imf_bdys[1]:
# maxm_prog1a=self.imf_bdys[1]
if mlim>maxm_prog1a:
return 0
else:
#Delay time distribution function (DTD)
tau= exp_dtd
mmin=0
mmax=0
inte=0
#follwing is done in __imf()
def g2(mm):
return mm*mm**-2.35
norm=1./quad(g2,1,30)[0]
#print 'IMF test',norm*m**-2.35
#imf normalized to 1Msun
return norm*m**-2.35* np.exp(-t/tau)/tau
a= 0.01 #normalization parameter
#if spline(np.log10(t))
#a=1e-3/()
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
n1a= a* dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0]
# in principle since normalization is set: nb_1a_per_m the above calculation is not necessary anymore
Yield_tot=n1a*1e11*0.1 *1 #7 #special factor
print Yield_tot_sim
print Yield_tot
print 'Should be 1: ', Yield_tot_sim/Yield_tot
s1.plot_mass(fig=9,specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)
yields1=[]
ages1=[]
a= 0.01 #normalization parameter
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
for m1 in m:
t=ages[m.index(m1)]
yields= a* dblquad(f_wd_dtd,0,t,lambda x: spline1(x), lambda x: 8)[0] *1e11*0.1 #special factor
yields1.append(yields)
ages1.append(t)
plt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=2)
e_dt1.plot_sn_distr(fig=77,rate=True,rate_only='sn1a',label1='exp, 1',marker1='o')
e_dt2.plot_sn_distr(fig=77,rate=True,rate_only='sn1a',label1='exp, 2',marker1='x',markevery=1)
print e_dt1.exp_dtd,
print e_dt2.exp_dtd
"""
Explanation: exp_dtd (as used in Wiersma09) 10e9
End of explanation
"""
|
poppy-project/community-notebooks | tutorials-education/poppy-humanoid_poppy-torso__vrep_installation et prise en main/poppy simulé/Ergo_simulation prise en main.ipynb | lgpl-3.0 | from poppy_ergo_jr import PoppyErgoJr
creature = PoppyErgoJr(simulator='vrep')
"""
Explanation: <img src="png/poppy.png" HEIGHT=200 WIDTH=200 ALIGN=right>
<img src="png/inria.jpg" HEIGHT=150 WIDTH=325 ALIGN=left >
First steps with a creature
8 things to know about Ergo_Jr
Open the interface
Instantiate Ergo_Jr (start the simulation)
Restart the simulation
Shut down the simulation
Ergo_Jr's motors
Moving - the goal_position & goto_position functions
Ergo_Jr's sensors
Specifics of the physical robots
1 - Open the interface
Open V-REP
Open Jupyter - ipython demo video
from a terminal, with the command: ipython notebook
from the executable: search 'all programs' for anaconda\IPython (Py 2.7) Notebook
from the 'Launcher' executable: search 'all programs' for anaconda\Launcher
For a physical Ergo: via http://poppy.local (or, if you have renamed your Poppy: http://new-name.local)
2 - Instantiate Ergo_Jr
To start the simulation, run the following commands:
End of explanation
"""
creature.reset_simulation()
"""
Explanation: 3 - Restart the simulation
End of explanation
"""
creature.stop_simulation()
import pypot
pypot.vrep.close_all_connections()
"""
Explanation: 4 - Shut down the simulation
End of explanation
"""
from poppy_ergo_jr import PoppyErgoJr
ergo = PoppyErgoJr(simulator='vrep')
"""
Explanation: Then re-instantiate Ergo_Jr
End of explanation
"""
print"Réponse:"
print "j'ai", len( ergo.motors ), "moteurs"
print "ils sont tous indexés dans une ", type( ergo.motors ), "qui s'appelle ergo.motors \n\n la voici: "
for m in ergo.motors:
print "-------------"
print "nom du moteur: ", m.name
print "position actuelle du moteur: ", m.present_position, "degrès"
"""
Explanation: 5 - Motors
Ergo, how do you work?
End of explanation
"""
# shut down the previous simulation...
import pypot
ergo.stop_simulation()
pypot.vrep.close_all_connections()
# ...before starting a new one.
from poppy_ergo_jr import PoppyErgoJr
ergo = PoppyErgoJr(simulator='vrep')
# Ergo says yes
for i in range(2):
ergo.m6.goal_position = -20
ergo.m6.goal_position = +20
ergo.m6.goal_position = 0
"""
Explanation: What happened:
Here a list, ergo.motors, is used to store the motors.
Each motor has:
a name; example: ergo.head_z.name
an id; example: ergo.head_z.id
a current position; example: ergo.head_z.present_position
Overview of all the motors:
<img src="png/moteurs.png" HEIGHT=800 WIDTH=600 ALIGN=center>
6 - Moving
The 'goal_position' function
Ergo, are you ready?
End of explanation
"""
# Poppy says yes
import time
for i in range(2):
ergo.m6.goal_position = -20
time.sleep(1)
ergo.m6.goal_position = +20
time.sleep(1)
ergo.m6.goal_position = 0
"""
Explanation: Nothing seems to happen... actually, it does!
But Ergo moves too fast; let's try this instead:
End of explanation
"""
# Poppy says yes
for i in range(2):
ergo.m6.goto_position(-20,1,)
ergo.m6.goto_position(+20,1)
ergo.m6.goal_position = 0
# Poppy says yes
for i in range(2):
ergo.m6.goto_position(-20,1,wait=True)
ergo.m6.goto_position(+20,1,wait=True)
ergo.m6.goal_position = 0
"""
Explanation: What happened:
Here we use the '<b>goal_position</b>' attribute, preceded by the motor name, itself preceded by the creature name.
It accepts position values from -180° to +180°<br>
The lines of code execute almost instantly, even if the position requested on the previous line has not been reached yet.
The 'time' module lets us wait (with the 'time.sleep' function) until the motor has reached the target position before running the next command.
The 'goto_position' function
Hello Ergo
End of explanation
"""
# try your code here
# REMINDER: to restart the simulation
ergo.reset_simulation()
"""
Explanation: What happened:
Here we use the '<b>goto_position</b>' method, preceded by the motor name, itself preceded by the creature name.
It takes 2 to 3 parameters:
- the position, in degrees
- the time, in seconds, to reach that position
- an optional 'wait=True' parameter
The 'wait=True' option waits until the position is reached before moving on to the next line.<br>
By default, 'wait=False' does not block execution, so several motors can be started at the same moment.
7 - Sensors
Ergo_Jr has a number of sensors inside its motors: current position, load, temperature, etc.
Ergo_Jr also has a webcam that lets it recognize QR codes or other specific shapes in its environment.
8 - Specifics of the physical robots
The motors can be in one of two states: compliant / non-compliant
the compliant state lets you move the motors by hand, without resistance.
the non-compliant state locks the motors.
Example: <br>
poppy.head_z.compliant = True<br>
poppy.head_z.compliant = False<br>
The motor speed can be changed with the 'moving_speed' attribute
Example: <br>
poppy.head_z.moving_speed = 150 #speed, in milliseconds
Your turn
Build a combination of movements so that Ergo_Jr says hello to you!
End of explanation
"""
# try your own code ;)
"""
Explanation: Going further
You can add interactive objects (ball, cube, etc.) - more details here
Detailed installation instructions here
other notebooks for
V-REP ;
Torso ;
Snap! ;
and the full set of notebooks on the poppy-project.org website
End of explanation
"""
|
ShubhamDebnath/Coursera-Machine-Learning | Course 4/Face Recognition for the Happy House v3.ipynb | mit | from keras.models import Sequential
from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate
from keras.models import Model
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import MaxPooling2D, AveragePooling2D
from keras.layers.merge import Concatenate
from keras.layers.core import Lambda, Flatten, Dense
from keras.initializers import glorot_uniform
from keras.engine.topology import Layer
from keras import backend as K
K.set_image_data_format('channels_first')
import cv2
import os
import numpy as np
from numpy import genfromtxt
import pandas as pd
import tensorflow as tf
from fr_utils import *
from inception_blocks_v2 import *
%matplotlib inline
%load_ext autoreload
%autoreload 2
np.set_printoptions(threshold=np.nan)
"""
Explanation: Face Recognition for the Happy House
Welcome to the first assignment of week 4! Here you will build a face recognition system. Many of the ideas presented here are from FaceNet. In lecture, we also talked about DeepFace.
Face recognition problems commonly fall into two categories:
Face Verification - "is this the claimed person?". For example, at some airports, you can pass through customs by letting a system scan your passport and then verifying that you (the person carrying the passport) are the correct person. A mobile phone that unlocks using your face is also using face verification. This is a 1:1 matching problem.
Face Recognition - "who is this person?". For example, the video lecture showed a face recognition video (https://www.youtube.com/watch?v=wr4rx0Spihs) of Baidu employees entering the office without needing to otherwise identify themselves. This is a 1:K matching problem.
FaceNet learns a neural network that encodes a face image into a vector of 128 numbers. By comparing two such vectors, you can then determine if two pictures are of the same person.
In this assignment, you will:
- Implement the triplet loss function
- Use a pretrained model to map face images into 128-dimensional encodings
- Use these encodings to perform face verification and face recognition
In this exercise, we will be using a pre-trained model which represents ConvNet activations using a "channels first" convention, as opposed to the "channels last" convention used in lecture and previous programming assignments. In other words, a batch of images will be of shape $(m, n_C, n_H, n_W)$ instead of $(m, n_H, n_W, n_C)$. Both of these conventions have a reasonable amount of traction among open-source implementations; there isn't a uniform standard yet within the deep learning community.
Let's load the required packages.
End of explanation
"""
FRmodel = faceRecoModel(input_shape=(3, 96, 96))
print("Total Params:", FRmodel.count_params())
"""
Explanation: 0 - Naive Face Verification
In Face Verification, you're given two images and you have to tell if they are of the same person. The simplest way to do this is to compare the two images pixel-by-pixel. If the distance between the raw images are less than a chosen threshold, it may be the same person!
<img src="images/pixel_comparison.png" style="width:380px;height:150px;">
<caption><center> <u> <font color='purple'> Figure 1 </u></center></caption>
Of course, this algorithm performs really poorly, since the pixel values change dramatically due to variations in lighting, orientation of the person's face, even minor changes in head position, and so on.
You'll see that rather than using the raw image, you can learn an encoding $f(img)$ so that element-wise comparisons of this encoding gives more accurate judgements as to whether two pictures are of the same person.
1 - Encoding face images into a 128-dimensional vector
1.1 - Using an ConvNet to compute encodings
The FaceNet model takes a lot of data and a long time to train. So following common practice in applied deep learning settings, let's just load weights that someone else has already trained. The network architecture follows the Inception model from Szegedy et al.. We have provided an inception network implementation. You can look in the file inception_blocks.py to see how it is implemented (do so by going to "File->Open..." at the top of the Jupyter notebook).
The key things you need to know are:
This network uses 96x96 dimensional RGB images as its input. Specifically, inputs a face image (or batch of $m$ face images) as a tensor of shape $(m, n_C, n_H, n_W) = (m, 3, 96, 96)$
It outputs a matrix of shape $(m, 128)$ that encodes each input face image into a 128-dimensional vector
Run the cell below to create the model for face images.
End of explanation
"""
# GRADED FUNCTION: triplet_loss
def triplet_loss(y_true, y_pred, alpha = 0.2):
"""
Implementation of the triplet loss as defined by formula (3)
Arguments:
y_true -- true labels, required when you define a loss in Keras, you don't need it in this function.
y_pred -- python list containing three objects:
anchor -- the encodings for the anchor images, of shape (None, 128)
positive -- the encodings for the positive images, of shape (None, 128)
negative -- the encodings for the negative images, of shape (None, 128)
Returns:
loss -- real number, value of the loss
"""
anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]
### START CODE HERE ### (≈ 4 lines)
# Step 1: Compute the (encoding) distance between the anchor and the positive, you will need to sum over axis=-1
pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis = -1)
# Step 2: Compute the (encoding) distance between the anchor and the negative, you will need to sum over axis=-1
neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis = -1)
# Step 3: subtract the two previous distances and add alpha.
basic_loss = pos_dist - neg_dist + alpha
# Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples.
loss = tf.reduce_sum(tf.maximum(basic_loss, 0.0))
### END CODE HERE ###
return loss
with tf.Session() as test:
tf.set_random_seed(1)
y_true = (None, None, None)
y_pred = (tf.random_normal([3, 128], mean=6, stddev=0.1, seed = 1),
tf.random_normal([3, 128], mean=1, stddev=1, seed = 1),
tf.random_normal([3, 128], mean=3, stddev=4, seed = 1))
loss = triplet_loss(y_true, y_pred)
print("loss = " + str(loss.eval()))
"""
Explanation: Expected Output
<table>
<center>
Total Params: 3743280
</center>
</table>
By using a 128-neuron fully connected layer as its last layer, the model ensures that the output is an encoding vector of size 128. You then use the encodings to compare two face images as follows:
<img src="images/distance_kiank.png" style="width:680px;height:250px;">
<caption><center> <u> <font color='purple'> Figure 2: <br> </u> <font color='purple'> By computing a distance between two encodings and thresholding, you can determine if the two pictures represent the same person</center></caption>
So, an encoding is a good one if:
- The encodings of two images of the same person are quite similar to each other
- The encodings of two images of different persons are very different
The triplet loss function formalizes this, and tries to "push" the encodings of two images of the same person (Anchor and Positive) closer together, while "pulling" the encodings of two images of different persons (Anchor, Negative) further apart.
<img src="images/triplet_comparison.png" style="width:280px;height:150px;">
<br>
<caption><center> <u> <font color='purple'> Figure 3: <br> </u> <font color='purple'> In the next part, we will call the pictures from left to right: Anchor (A), Positive (P), Negative (N) </center></caption>
1.2 - The Triplet Loss
For an image $x$, we denote its encoding $f(x)$, where $f$ is the function computed by the neural network.
<img src="images/f_x.png" style="width:380px;height:150px;">
<!--
We will also add a normalization step at the end of our model so that $\mid \mid f(x) \mid \mid_2 = 1$ (means the vector of encoding should be of norm 1).
!-->
Training will use triplets of images $(A, P, N)$:
A is an "Anchor" image--a picture of a person.
P is a "Positive" image--a picture of the same person as the Anchor image.
N is a "Negative" image--a picture of a different person than the Anchor image.
These triplets are picked from our training dataset. We will write $(A^{(i)}, P^{(i)}, N^{(i)})$ to denote the $i$-th training example.
You'd like to make sure that an image $A^{(i)}$ of an individual is closer to the Positive $P^{(i)}$ than to the Negative image $N^{(i)}$) by at least a margin $\alpha$:
$$\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 + \alpha < \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$$
You would thus like to minimize the following "triplet cost":
$$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \underbrace{\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2}_\text{(1)} - \underbrace{\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2}_\text{(2)} + \alpha \large ]_+ \tag{3}$$
Here, we are using the notation "$[z]_+$" to denote $max(z,0)$.
Notes:
- The term (1) is the squared distance between the anchor "A" and the positive "P" for a given triplet; you want this to be small.
- The term (2) is the squared distance between the anchor "A" and the negative "N" for a given triplet, you want this to be relatively large, so it thus makes sense to have a minus sign preceding it.
- $\alpha$ is called the margin. It is a hyperparameter that you should pick manually. We will use $\alpha = 0.2$.
Most implementations also normalize the encoding vectors to have norm equal one (i.e., $\mid \mid f(img)\mid \mid_2$=1); you won't have to worry about that here.
Exercise: Implement the triplet loss as defined by formula (3). Here are the 4 steps:
1. Compute the distance between the encodings of "anchor" and "positive": $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$
2. Compute the distance between the encodings of "anchor" and "negative": $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$
3. Compute the formula per training example: $ \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2 + \alpha$
4. Compute the full formula by taking the max with zero and summing over the training examples:
$$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2 + \alpha \large ]_+ \tag{3}$$
Useful functions: tf.reduce_sum(), tf.square(), tf.subtract(), tf.add(), tf.maximum().
For steps 1 and 2, you will need to sum over the entries of $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$ and $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$ while for step 4 you will need to sum over the training examples.
End of explanation
"""
FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy'])
load_weights_from_FaceNet(FRmodel)
"""
Explanation: Expected Output:
<table>
<tr>
<td>
**loss**
</td>
<td>
528.143
</td>
</tr>
</table>
2 - Loading the trained model
FaceNet is trained by minimizing the triplet loss. But since training requires a lot of data and a lot of computation, we won't train it from scratch here. Instead, we load a previously trained model. Load a model using the following cell; this might take a couple of minutes to run.
End of explanation
"""
database = {}
database["danielle"] = img_to_encoding("images/danielle.png", FRmodel)
database["younes"] = img_to_encoding("images/younes.jpg", FRmodel)
database["tian"] = img_to_encoding("images/tian.jpg", FRmodel)
database["andrew"] = img_to_encoding("images/andrew.jpg", FRmodel)
database["kian"] = img_to_encoding("images/kian.jpg", FRmodel)
database["dan"] = img_to_encoding("images/dan.jpg", FRmodel)
database["sebastiano"] = img_to_encoding("images/sebastiano.jpg", FRmodel)
database["bertrand"] = img_to_encoding("images/bertrand.jpg", FRmodel)
database["kevin"] = img_to_encoding("images/kevin.jpg", FRmodel)
database["felix"] = img_to_encoding("images/felix.jpg", FRmodel)
database["benoit"] = img_to_encoding("images/benoit.jpg", FRmodel)
database["arnaud"] = img_to_encoding("images/arnaud.jpg", FRmodel)
"""
Explanation: Here're some examples of distances between the encodings between three individuals:
<img src="images/distance_matrix.png" style="width:380px;height:200px;">
<br>
<caption><center> <u> <font color='purple'> Figure 4:</u> <br> <font color='purple'> Example of distance outputs between three individuals' encodings</center></caption>
Let's now use this model to perform face verification and face recognition!
3 - Applying the model
Back to the Happy House! Residents are living blissfully since you implemented happiness recognition for the house in an earlier assignment.
However, several issues keep coming up: The Happy House became so happy that every happy person in the neighborhood is coming to hang out in your living room. It is getting really crowded, which is having a negative impact on the residents of the house. All these random happy people are also eating all your food.
So, you decide to change the door entry policy, and not just let random happy people enter anymore, even if they are happy! Instead, you'd like to build a Face verification system so as to only let people from a specified list come in. To get admitted, each person has to swipe an ID card (identification card) to identify themselves at the door. The face recognition system then checks that they are who they claim to be.
3.1 - Face Verification
Let's build a database containing one encoding vector for each person allowed to enter the happy house. To generate the encoding we use img_to_encoding(image_path, model) which basically runs the forward propagation of the model on the specified image.
Run the following code to build the database (represented as a python dictionary). This database maps each person's name to a 128-dimensional encoding of their face.
End of explanation
"""
# GRADED FUNCTION: verify
def verify(image_path, identity, database, model):
"""
Function that verifies if the person on the "image_path" image is "identity".
Arguments:
image_path -- path to an image
identity -- string, name of the person you'd like to verify the identity. Has to be a resident of the Happy house.
database -- python dictionary mapping names of allowed people's names (strings) to their encodings (vectors).
model -- your Inception model instance in Keras
Returns:
dist -- distance between the image_path and the image of "identity" in the database.
door_open -- True, if the door should open. False otherwise.
"""
### START CODE HERE ###
# Step 1: Compute the encoding for the image. Use img_to_encoding() see example above. (≈ 1 line)
encoding = img_to_encoding(image_path, model)
# Step 2: Compute distance with identity's image (≈ 1 line)
dist = np.linalg.norm(encoding - database[identity])
# Step 3: Open the door if dist < 0.7, else don't open (≈ 3 lines)
if dist < 0.7:
print("It's " + str(identity) + ", welcome home!")
door_open = True
else:
print("It's not " + str(identity) + ", please go away")
door_open = False
### END CODE HERE ###
return dist, door_open
"""
Explanation: Now, when someone shows up at your front door and swipes their ID card (thus giving you their name), you can look up their encoding in the database, and use it to check if the person standing at the front door matches the name on the ID.
Exercise: Implement the verify() function which checks if the front-door camera picture (image_path) is actually the person called "identity". You will have to go through the following steps:
1. Compute the encoding of the image from image_path
2. Compute the distance between this encoding and the encoding of the identity image stored in the database
3. Open the door if the distance is less than 0.7, else do not open.
As presented above, you should use the L2 distance (np.linalg.norm). (Note: In this implementation, compare the L2 distance, not the square of the L2 distance, to the threshold 0.7.)
End of explanation
"""
verify("images/camera_0.jpg", "younes", database, FRmodel)
"""
Explanation: Younes is trying to enter the Happy House and the camera takes a picture of him ("images/camera_0.jpg"). Let's run your verification algorithm on this picture:
<img src="images/camera_0.jpg" style="width:100px;height:100px;">
End of explanation
"""
verify("images/camera_2.jpg", "kian", database, FRmodel)
"""
Explanation: Expected Output:
<table>
<tr>
<td>
**It's younes, welcome home!**
</td>
<td>
(0.65939283, True)
</td>
</tr>
</table>
Benoit, who broke the aquarium last weekend, has been banned from the house and removed from the database. He stole Kian's ID card and came back to the house to try to present himself as Kian. The front-door camera took a picture of Benoit ("images/camera_2.jpg). Let's run the verification algorithm to check if benoit can enter.
<img src="images/camera_2.jpg" style="width:100px;height:100px;">
End of explanation
"""
# GRADED FUNCTION: who_is_it
def who_is_it(image_path, database, model):
"""
Implements face recognition for the happy house by finding who is the person on the image_path image.
Arguments:
image_path -- path to an image
database -- database containing image encodings along with the name of the person on the image
model -- your Inception model instance in Keras
Returns:
min_dist -- the minimum distance between image_path encoding and the encodings from the database
identity -- string, the name prediction for the person on image_path
"""
### START CODE HERE ###
## Step 1: Compute the target "encoding" for the image. Use img_to_encoding() see example above. ## (≈ 1 line)
encoding = img_to_encoding(image_path, model)
## Step 2: Find the closest encoding ##
# Initialize "min_dist" to a large value, say 100 (≈1 line)
min_dist = 100
# Loop over the database dictionary's names and encodings.
for (name, db_enc) in database.items():
# Compute L2 distance between the target "encoding" and the current "emb" from the database. (≈ 1 line)
dist = np.linalg.norm(encoding - db_enc)
# If this distance is less than the min_dist, then set min_dist to dist, and identity to name. (≈ 3 lines)
if dist < min_dist:
min_dist = dist
identity = name
### END CODE HERE ###
if min_dist > 0.7:
print("Not in the database.")
else:
print ("it's " + str(identity) + ", the distance is " + str(min_dist))
return min_dist, identity
"""
Explanation: Expected Output:
<table>
<tr>
<td>
**It's not kian, please go away**
</td>
<td>
(0.86224014, False)
</td>
</tr>
</table>
3.2 - Face Recognition
Your face verification system is mostly working well. But since Kian got his ID card stolen, when he came back to the house that evening he couldn't get in!
To reduce such shenanigans, you'd like to change your face verification system to a face recognition system. This way, no one has to carry an ID card anymore. An authorized person can just walk up to the house, and the front door will unlock for them!
You'll implement a face recognition system that takes as input an image, and figures out if it is one of the authorized persons (and if so, who). Unlike the previous face verification system, we will no longer get a person's name as another input.
Exercise: Implement who_is_it(). You will have to go through the following steps:
1. Compute the target encoding of the image from image_path
2. Find the encoding from the database that has smallest distance with the target encoding.
- Initialize the min_dist variable to a large enough number (100). It will help you keep track of what is the closest encoding to the input's encoding.
- Loop over the database dictionary's names and encodings. To loop use for (name, db_enc) in database.items().
- Compute L2 distance between the target "encoding" and the current "encoding" from the database.
- If this distance is less than the min_dist, then set min_dist to dist, and identity to name.
End of explanation
"""
who_is_it("images/camera_0.jpg", database, FRmodel)
"""
Explanation: Younes is at the front-door and the camera takes a picture of him ("images/camera_0.jpg"). Let's see if your who_is_it() algorithm identifies Younes.
End of explanation
"""
|
zhouqifanbdh/liupengyuan.github.io | chapter2/homework/localization/4-5/201611680085(2017.4.11).ipynb | mit | m=int(input('请输入数字下界,按回车键结束'))
k=int(input('请输入数字上界,按回车键结束'))
n=int(input('请输入数字个数'))
i=0
import random
while i<n:
number=random.randint(m,k)
i+=1
print(number)
total=number+number+number
print((total/n)**(1/2))
"""
Explanation: Exercise 1: write a function that returns the square root of the mean of n random integers, each between m and k (n, m and k are entered by the user).
End of explanation
"""
m = int(input('Enter the lower bound, then press Enter: '))
k = int(input('Enter the upper bound, then press Enter: '))
n = int(input('Enter how many numbers: '))
import random
import math
sum_log = 0
sum_inv_log = 0
for i in range(n):
    number = random.randint(m, k)
    print(number)
    sum_log += math.log(number)
    sum_inv_log += 1 / math.log(number)
print(sum_log, sum_inv_log)
"""
Explanation: Write a function that draws n random integers, each between m and k (n, m and k are entered by the user), and computes 1: sigma log(random integer), 2: sigma 1/log(random integer).
End of explanation
"""
import random

m = int(input('Enter how many terms you want to sum: '))

def compute_sum(a, n):
    term = 0    # current term in the series: a, aa, aaa, ...
    total = 0
    for _ in range(n):
        term = term * 10 + a
        total += term
    print(total)

a = random.randint(1, 9)
compute_sum(a, m)
def win():
    print('Win!')
def lose():
    print('Lose!')
def game_over():
    print('Game Over!')
def show_team():
    print('wow')
def show_instruction():
    print('ok')
def menu():
    print('''=====Game Menu=====
1. Instructions
2. Start game
3. Quit game
4. Credits
=====Game Menu=====''')
def guess_game():
    n = int(input('Enter an integer greater than 0 as the upper bound, then press Enter. '))
    import random
    number = random.randint(1, n)
    print('The target number is', number)
    m = random.randint(1, n)
    print('The computer guesses', m)
    k = int(input('Compare the guess with the target: enter 0 if they are equal, 1 if the guess is larger, 2 if it is smaller: '))
    if k == 0:
        win()
    elif k == 1:
        m_1 = random.randint(1, m)
    else:
        m_1 = random.randint(m, n)
def main():
    while True:
        menu()
        choice = int(input('Enter your choice: '))
        if choice == 1:
            show_instruction()
        elif choice == 2:
            guess_game()
        elif choice == 3:
            game_over()
            break
        else:
            show_team()
if __name__ == '__main__':
    main()
"""
Explanation: Write a function that computes s = a + aa + aaa + aaaa + ... + aa...a, where a is a random integer in [1, 9]. For example, 2 + 22 + 222 + 2222 + 22222 (five terms here); the number of terms is entered from the keyboard.
End of explanation
"""
|
pylablanche/MillionSong | Exploration_of_data_in_MillionMusicSubset.ipynb | mit | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import sqlite3
import h5py as h5
%matplotlib inline
plt.rcParams['figure.figsize'] = (8,6)
sns.set_palette('Dark2')
sns.set_style('whitegrid')
path_to_data = '../MillionSongSubset/'
"""
Explanation: Required imports
End of explanation
"""
con_simi = sqlite3.connect(path_to_data+'AdditionalFiles/subset_artist_similarity.db')
con_term = sqlite3.connect(path_to_data+'AdditionalFiles/subset_artist_term.db')
con_meta = sqlite3.connect(path_to_data+'AdditionalFiles/subset_track_metadata.db')
cur_simi = con_simi.cursor()
cur_term = con_term.cursor()
cur_meta = con_meta.cursor()
"""
Explanation: Reading SQL tables
Alternatively, there is a demo available at https://labrosa.ee.columbia.edu/millionsong/sites/default/files/tutorial1.py.txt that was made specifically for reading these files
End of explanation
"""
# subset_artist_similarity.db
res = con_simi.execute("SELECT name FROM sqlite_master WHERE type='table';")
for name in res:
print(name[0])
# subset_artist_term
res = con_term.execute("SELECT name FROM sqlite_master WHERE type='table';")
for name in res:
print(name[0])
# subset_track_metadata
res = con_meta.execute("SELECT name FROM sqlite_master WHERE type='table';")
for name in res:
print(name[0])
"""
Explanation: First we need to find out the table names in each of our files:
End of explanation
"""
songs = pd.read_sql_query('SELECT * FROM songs WHERE year!=0',con_meta)
songs.head(5)
"""
Explanation: Exploring the tables
End of explanation
"""
songs.artist_hotttnesss.hist(bins=np.linspace(0.0,1.0,41));
plt.xlabel('Artist Hotness')
"""
Explanation: Histogram of artist_hotttnesss
End of explanation
"""
fig, ax = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True,
figsize=(15,8))
ax[0].scatter(songs.year, songs.artist_hotttnesss, marker='.')
ax[1].hexbin(songs.year, songs.artist_hotttnesss, cmap='viridis', gridsize=41, mincnt=1.0)
plt.subplots_adjust(wspace=0.02);
"""
Explanation: Scatter plots of artist_hotttnesss vs year
End of explanation
"""
fig, ax = plt.subplots(nrows=2, ncols=2, sharex=True, sharey=True,
figsize=(15,12))
ax[0,0].scatter(songs.year, songs.artist_familiarity, marker='.')
ax[0,1].hexbin(songs.year, songs.artist_familiarity, cmap='viridis', gridsize=41, mincnt=1.0)
ax[1,0].scatter(songs.year, songs.artist_hotttnesss, marker='.')
ax[1,1].hexbin(songs.year, songs.artist_hotttnesss, cmap='viridis', gridsize=41, mincnt=1.0)
ax[-1,-1].set_xlim(1920,songs.year.max());
plt.subplots_adjust(wspace=0.02, hspace=0.05)
"""
Explanation: Scatter plots of artist_familiarity vs year compared to artist_hotttnesss vs year
End of explanation
"""
fig, ax = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True,
figsize=(15,8))
ax[0].scatter(songs.artist_familiarity, songs.artist_hotttnesss, marker='.')
ax[1].hexbin(songs.artist_familiarity, songs.artist_hotttnesss, cmap='viridis', gridsize=51, mincnt=1.0)
"""
Explanation: Artist_hotttnesss vs artist familiarity
End of explanation
"""
plt.subplots_adjust(wspace=0.02);
### Artist_hotttnesss vs artist familiarity
sns.lmplot(data=songs, x='artist_familiarity', y='artist_hotttnesss',
markers='.', size=10);
"""
Explanation: Artist_hotttnesss vs artist familiarity with a linear fit
End of explanation
"""
tmp = songs.groupby('year').mean()
tmp[['artist_familiarity','artist_hotttnesss']].plot();
"""
Explanation: Artist_familiarity compared to artist_hotttnesss over time
End of explanation
"""
with pd.HDFStore(path_to_data+'AdditionalFiles/subset_msd_summary_file.h5') as store:
print(store)
analysis_summary = store.select('analysis/songs')
metadata_summary = store.select('metadata/songs')
musicbrainz_summary = store.select('musicbrainz/songs')
analysis_summary.head()
metadata_summary.head()
musicbrainz_summary.head()
"""
Explanation: Reading HDF5 files
End of explanation
"""
|
danresende/deep-learning | sentiment_network/.ipynb_checkpoints/Sentiment Classification - Project 3 Solution-checkpoint.ipynb | mit | def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
"""
Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
Blog: http://iamtrask.github.io
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (40% Off: traskud17)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem"
Curate a Dataset
Developing a "Predictive Theory"
PROJECT 1: Quick Theory Validation
Transforming Text to Numbers
PROJECT 2: Creating the Input/Output Data
Putting it all together in a Neural Network
PROJECT 3: Building our Neural Network
Understanding Neural Noise
PROJECT 4: Making Learning Faster by Reducing Noise
Analyzing Inefficiencies in our Network
PROJECT 5: Making our Network Train and Run Faster
Further Noise Reduction
PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary
Analysis: What's going on in the weights?
Lesson: Curate a Dataset
End of explanation
"""
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
"""
Explanation: Lesson: Develop a Predictive Theory
End of explanation
"""
from collections import Counter
import numpy as np
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
positive_counts.most_common()
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio+0.01)))
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
"""
Explanation: Project 1: Quick Theory Validation
End of explanation
"""
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
"""
Explanation: Transforming Text into Numbers
End of explanation
"""
vocab = set(total_counts.keys())
vocab_size = len(vocab)
print(vocab_size)
list(vocab)
import numpy as np
layer_0 = np.zeros((1,vocab_size))
layer_0
from IPython.display import Image
Image(filename='sentiment_network.png')
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
word2index
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
def get_target_for_label(label):
if(label == 'POSITIVE'):
return 1
else:
return 0
labels[0]
get_target_for_label(labels[0])
labels[1]
get_target_for_label(labels[1])
"""
Explanation: Project 2: Creating the Input/Output Data
End of explanation
"""
import time
import sys
import numpy as np
# Let's tweak our network from before to model these phenomena
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
# set our random number generator
np.random.seed(1)
self.pre_process_data(reviews, labels)
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
self.review_vocab = list(review_vocab)
label_vocab = set()
for label in labels:
label_vocab.add(label)
self.label_vocab = list(label_vocab)
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.learning_rate = learning_rate
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] += 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
assert(len(training_reviews) == len(training_labels))
correct_so_far = 0
start = time.time()
for i in range(len(training_reviews)):
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# TODO: Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# TODO: Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
if(np.abs(layer_2_error) < 0.5):
correct_so_far += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
correct = 0
start = time.time()
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
if(layer_2[0] > 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
# evaluate our model before training (just to show how horrible it is)
mlp.test(reviews[-1000:],labels[-1000:])
# train the network
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
# train the network
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
# train the network
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: Project 3: Building a Neural Network
Start with your neural network from the last chapter
3 layer neural network
no non-linearity in hidden layer
use our functions to create the training data
create a "pre_process_data" function to create vocabulary for our training data generating functions
modify "train" to train over the entire corpus
Where to Get Help if You Need it
Re-watch previous week's Udacity Lectures
Chapters 3-5 - Grokking Deep Learning - (40% Off: traskud17)
End of explanation
"""
|
nvoron23/word2vec | examples/word2vec.ipynb | apache-2.0 | import word2vec
"""
Explanation: word2vec
This notebook is equivalent to demo-word.sh, demo-analogy.sh, demo-phrases.sh and demo-classes.sh from Google.
Training
Download some data, for example: http://mattmahoney.net/dc/text8.zip
End of explanation
"""
word2vec.word2phrase('/Users/drodriguez/Downloads/text8', '/Users/drodriguez/Downloads/text8-phrases', verbose=True)
"""
Explanation: Run word2phrase to join frequently co-occurring words into phrases, e.g. "Los Angeles" becomes "Los_Angeles"
End of explanation
"""
word2vec.word2vec('/Users/drodriguez/Downloads/text8-phrases', '/Users/drodriguez/Downloads/text8.bin', size=100, verbose=True)
"""
Explanation: This will create a text8-phrases file that we can use as a better input for word2vec.
Note that you could easily skip this previous step and use the original data as input for word2vec.
Train the model using the word2phrase output.
End of explanation
"""
word2vec.word2clusters('/Users/drodriguez/Downloads/text8', '/Users/drodriguez/Downloads/text8-clusters.txt', 100, verbose=True)
"""
Explanation: That generated a text8.bin file containing the word vectors in a binary format.
Do the clustering of the vectors based on the trained model.
End of explanation
"""
import word2vec
"""
Explanation: That created a text8-clusters.txt file with the cluster number for every word in the vocabulary
Predictions
End of explanation
"""
model = word2vec.load('/Users/drodriguez/Downloads/text8.bin')
"""
Explanation: Load the word2vec binary file created above
End of explanation
"""
model.vocab
"""
Explanation: We can take a look at the vocabulary as a numpy array
End of explanation
"""
model.vectors.shape
model.vectors
"""
Explanation: Or take a look at the whole matrix
End of explanation
"""
model['dog'].shape
model['dog'][:10]
"""
Explanation: We can retrieve the vector of individual words
End of explanation
"""
indexes, metrics = model.cosine('socks')
indexes, metrics
"""
Explanation: We can do simple queries to retrieve words similar to "socks" based on cosine similarity:
End of explanation
"""
model.vocab[indexes]
"""
Explanation: This returned a tuple with 2 items:
1. numpy array with the indexes of the similar words in the vocabulary
2. numpy array with cosine similarity to each word
It's possible to get the words at those indexes
End of explanation
"""
model.generate_response(indexes, metrics)
"""
Explanation: There is a helper function to create a combined response: a numpy record array
End of explanation
"""
model.generate_response(indexes, metrics).tolist()
"""
Explanation: It is easy to turn that numpy array into a pure Python response:
End of explanation
"""
indexes, metrics = model.cosine('los_angeles')
model.generate_response(indexes, metrics).tolist()
"""
Explanation: Phrases
Since we trained the model with the output of word2phrase we can ask for similarity of "phrases"
End of explanation
"""
indexes, metrics = model.analogy(pos=['king', 'woman'], neg=['man'], n=10)
indexes, metrics
model.generate_response(indexes, metrics).tolist()
"""
Explanation: Analogies
It's possible to do more complex queries, like analogies such as: king - man + woman = queen
Like cosine, this method returns the indexes of the words in the vocabulary and the metric
End of explanation
"""
clusters = word2vec.load_clusters('/Users/drodriguez/Downloads/text8-clusters.txt')
"""
Explanation: Clusters
End of explanation
"""
clusters['dog']
"""
Explanation: We can get the cluster number for individual words
End of explanation
"""
clusters.get_words_on_cluster(90).shape
clusters.get_words_on_cluster(90)[:10]
"""
Explanation: We can get all the words grouped in a specific cluster
End of explanation
"""
model.clusters = clusters
indexes, metrics = model.analogy(pos=['paris', 'germany'], neg=['france'], n=10)
model.generate_response(indexes, metrics).tolist()
"""
Explanation: We can add the clusters to the word2vec model and generate a response that includes the clusters
End of explanation
"""
|