7,600 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ABC calibration of $I_\text{Kur}$ in Courtemanche model to original dataset.
Note the term $I_\text{sus}$ for sustained outward potassium current is used throughout the notebook.
Step1: Initial set-up
Load experiments of unified dataset
Step2: Plot steady-state and tau functions of original model
Step3: Activation gate ($a$) calibration
Combine model and experiments to produce
Step4: Set up prior ranges for each parameter in the model.
See the modelfile for further information on specific parameters. Prepending `log_` has the effect of setting the parameter in log space.
Step5: Run ABC calibration
Step6: Analysis of results | Python Code:
import os, tempfile
import logging
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from ionchannelABC import theoretical_population_size
from ionchannelABC import IonChannelDistance, EfficientMultivariateNormalTransition, IonChannelAcceptor
from ionchannelABC.experiment import setup
from ionchannelABC.visualization import plot_sim_results, plot_kde_matrix_custom
import myokit
from pyabc import Distribution, RV, History, ABCSMC
from pyabc.epsilon import MedianEpsilon
from pyabc.sampler import MulticoreEvalParallelSampler, SingleCoreSampler
from pyabc.populationstrategy import ConstantPopulationSize
Explanation: ABC calibration of $I_\text{Kur}$ in Courtemanche model to original dataset.
Note the term $I_\text{sus}$ for sustained outward potassium current is used throughout the notebook.
End of explanation
from experiments.isus_wang import wang_act_and_kin
from experiments.isus_courtemanche import courtemanche_deact
from experiments.isus_firek import firek_inact
from experiments.isus_nygren import (nygren_inact_kin,
nygren_rec)
modelfile = 'models/courtemanche_isus.mmt'
Explanation: Initial set-up
Load experiments of unified dataset:
- Steady-state activation [Wang1993]
- Activation time constant [Wang1993]
- Deactivation time constant [Courtemanche1998]
- Steady-state inactivation [Firek1995]
- Inactivation time constant [Nygren1998]
- Recovery time constant [Nygren1998]
End of explanation
from ionchannelABC.visualization import plot_variables
sns.set_context('talk')
V = np.arange(-100, 50, 0.01)
cou_par_map = {'ri': 'isus.a_inf',
'si': 'isus.i_inf',
'rt': 'isus.tau_a',
'st': 'isus.tau_i'}
f, ax = plot_variables(V, cou_par_map, modelfile, figshape=(2,2))
Explanation: Plot steady-state and tau functions of original model
End of explanation
observations, model, summary_statistics = setup(modelfile,
firek_inact,
nygren_inact_kin,
nygren_rec)
assert len(observations)==len(summary_statistics(model({})))
g = plot_sim_results(modelfile,
firek_inact,
nygren_inact_kin,
nygren_rec)
Explanation: Activation gate ($a$) calibration
Combine model and experiments to produce:
- observations dataframe
- model function to run experiments and return traces
- summary statistics function to accept traces
End of explanation
limits = {'isus.q1': (-200, 200),
'isus.q2': (1e-7, 50),
'log_isus.q3': (-1, 4),
'isus.q4': (-200, 200),
'isus.q5': (1e-7, 50),
'isus.q6': (-200, 0),
'isus.q7': (1e-7, 50)}
prior = Distribution(**{key: RV("uniform", a, b - a)
for key, (a,b) in limits.items()})
# Test this works correctly with set-up functions
assert len(observations) == len(summary_statistics(model(prior.rvs())))
Explanation: Set up prior ranges for each parameter in the model.
See the modelfile for further information on specific parameters. Prepending `log_` has the effect of setting the parameter in log space.
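As an illustration of that convention, here is a minimal numpy-only sketch of sampling such a prior. It assumes (consistent with the `np.log10` used for the reference values later in this notebook) that a `log_`-prefixed draw is a base-10 exponent that must be undone before the value reaches the model; the bounds shown are hypothetical excerpts from `limits` above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bounds: `log_isus.q3` is drawn in log10 space,
# `isus.q1` is drawn directly.
limits = {'isus.q1': (-200, 200), 'log_isus.q3': (-1, 4)}

def draw_sample(limits, rng):
    sample = {}
    for key, (a, b) in limits.items():
        val = rng.uniform(a, b)
        if key.startswith('log_'):
            # undo the log10 transform before handing the value to the model
            sample[key[4:]] = 10**val
        else:
            sample[key] = val
    return sample

sample = draw_sample(limits, rng)
```

The log-space draw therefore always yields a strictly positive parameter spanning several orders of magnitude, which is the usual reason for sampling rate constants this way.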
End of explanation
db_path = ("sqlite:///" + os.path.join(tempfile.gettempdir(), "courtemanche_isus_igate_unified.db"))
pop_size = theoretical_population_size(2, len(limits))
print("Theoretical minimum population size is {} particles".format(pop_size))
abc = ABCSMC(models=model,
parameter_priors=prior,
distance_function=IonChannelDistance(
exp_id=list(observations.exp_id),
variance=list(observations.variance),
delta=0.05),
population_size=ConstantPopulationSize(1000),
summary_statistics=summary_statistics,
transitions=EfficientMultivariateNormalTransition(),
eps=MedianEpsilon(initial_epsilon=100),
sampler=MulticoreEvalParallelSampler(n_procs=16),
acceptor=IonChannelAcceptor())
obs = observations.to_dict()['y']
obs = {str(k): v for k, v in obs.items()}
abc_id = abc.new(db_path, obs)
history = abc.run(minimum_epsilon=0., max_nr_populations=100, min_acceptance_rate=0.01)
Explanation: Run ABC calibration
End of explanation
history = History('sqlite:///results/courtemanche/isus/unified/courtemanche_isus_igate_unified.db')
history.all_runs() # most recent is the relevant run
df, w = history.get_distribution()
df.describe()
sns.set_context('poster')
mpl.rcParams['font.size'] = 14
mpl.rcParams['legend.fontsize'] = 14
g = plot_sim_results(modelfile,
firek_inact,
nygren_inact_kin,
nygren_rec,
df=df, w=w)
plt.tight_layout()
import pandas as pd
N = 100
cou_par_samples = df.sample(n=N, weights=w, replace=True)
cou_par_samples = cou_par_samples.set_index([pd.Index(range(N))])
cou_par_samples = cou_par_samples.to_dict(orient='records')
sns.set_context('talk')
mpl.rcParams['font.size'] = 14
mpl.rcParams['legend.fontsize'] = 14
f, ax = plot_variables(V, cou_par_map,
'models/courtemanche_isus.mmt',
[cou_par_samples],
figshape=(2,2))
plt.tight_layout()
m,_,_ = myokit.load(modelfile)
originals = {}
for name in limits.keys():
    if name.startswith("log"):
        name_ = name[4:]
    else:
        name_ = name
    val = m.value(name_)
    if name.startswith("log"):
        val_ = np.log10(val)
    else:
        val_ = val
    originals[name] = val_
sns.set_context('paper')
g = plot_kde_matrix_custom(df, w, limits=limits, refval=originals)
plt.tight_layout()
Explanation: Analysis of results
End of explanation |
7,601 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Application
Step1: Load and Subset on Individual Contributors
Step2: What proportion of contributions were made by blacks, whites, Hispanics, and Asians?
Step3: What proportion of the donors were blacks, whites, Hispanics, and Asians?
Step4: What proportion of the total donation was given by blacks, whites, Hispanics, and Asians? | Python Code:
import pandas as pd
df = pd.read_csv('/opt/names/fec_contrib/contribDB_2000.csv', nrows=100)
df.columns
from ethnicolr import census_ln
Explanation: Application: 2000/2010 Political Campaign Contributions by Race
Using ethnicolr, we look to answer three basic questions:
<ol>
<li>What proportion of contributions were made by blacks, whites, Hispanics, and Asians?
<li>What proportion of unique contributors were blacks, whites, Hispanics, and Asians?
<li>What proportion of total donations were given by blacks, whites, Hispanics, and Asians?
</ol>
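The arithmetic behind all three questions is the same: each contribution counts fractionally toward each group according to the Census last-name percentages, and the fractions are summed (optionally weighted by dollar amount). A toy sketch with made-up values — a stand-in for real `census_ln` output, illustrative only:

```python
import pandas as pd

# Toy stand-in for census_ln() output: hypothetical surnames with
# made-up race/ethnicity percentages and contribution amounts.
rdf = pd.DataFrame({
    'amount':      [100.0, 250.0, 50.0],
    'pctwhite':    [90.0, 10.0, 60.0],
    'pctblack':    [5.0, 80.0, 10.0],
    'pctapi':      [2.0, 5.0, 20.0],
    'pcthispanic': [3.0, 5.0, 10.0],
})

race_cols = ['pctwhite', 'pctblack', 'pctapi', 'pcthispanic']

# Question 1: expected share of contributions by group
contrib_share = rdf[race_cols].sum()
contrib_share = contrib_share / contrib_share.sum()

# Question 3: dollar-weighted share of total donations by group
dollar_share = rdf[race_cols].mul(rdf['amount'], axis=0).sum()
dollar_share = dollar_share / dollar_share.sum()
```

Question 2 is the same computation after `drop_duplicates` on the contributor name, exactly as done below.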
End of explanation
df = pd.read_csv('/opt/names/fec_contrib/contribDB_2000.csv', usecols=['amount', 'contributor_type', 'contributor_lname', 'contributor_fname', 'contributor_name'])
sdf = df[df.contributor_type=='I'].copy()
rdf2000 = census_ln(sdf, 'contributor_lname', 2000)
rdf2000['year'] = 2000
df = pd.read_csv('/opt/names/fec_contrib/contribDB_2010.csv.zip', usecols=['amount', 'contributor_type', 'contributor_lname', 'contributor_fname', 'contributor_name'])
sdf = df[df.contributor_type=='I'].copy()
rdf2010 = census_ln(sdf, 'contributor_lname', 2010)
rdf2010['year'] = 2010
rdf = pd.concat([rdf2000, rdf2010])
rdf.head(20)
rdf.replace('(S)', 0, inplace=True)
rdf[['pctwhite', 'pctblack', 'pctapi', 'pctaian', 'pct2prace', 'pcthispanic']] = rdf[['pctwhite', 'pctblack', 'pctapi', 'pctaian', 'pct2prace', 'pcthispanic']].astype(float)
Explanation: Load and Subset on Individual Contributors
End of explanation
rdf['white'] = rdf.pctwhite / 100.0
rdf['black'] = rdf.pctblack / 100.0
rdf['api'] = rdf.pctapi / 100.0
rdf['hispanic'] = rdf.pcthispanic / 100.0
gdf = rdf.groupby(['year']).agg({'white': 'sum', 'black': 'sum', 'api': 'sum', 'hispanic': 'sum'})
gdf.apply(lambda r: r / r.sum(), axis=1).style.format("{:.2%}")
Explanation: What proportion of contributions were made by blacks, whites, Hispanics, and Asians?
End of explanation
udf = rdf.drop_duplicates(subset=['contributor_name']).copy()
udf['white'] = udf.pctwhite / 100.0
udf['black'] = udf.pctblack / 100.0
udf['api'] = udf.pctapi / 100.0
udf['hispanic'] = udf.pcthispanic / 100.0
gdf = udf.groupby(['year']).agg({'white': 'sum', 'black': 'sum', 'api': 'sum', 'hispanic': 'sum'})
gdf.apply(lambda r: r / r.sum(), axis=1).style.format("{:.2%}")
Explanation: What proportion of the donors were blacks, whites, Hispanics, and Asians?
End of explanation
rdf['white'] = rdf.amount * rdf.pctwhite / 100.0
rdf['black'] = rdf.amount * rdf.pctblack / 100.0
rdf['api'] = rdf.amount * rdf.pctapi / 100.0
rdf['hispanic'] = rdf.amount * rdf.pcthispanic / 100.0
gdf = rdf.groupby(['year']).agg({'white': 'sum', 'black': 'sum', 'api': 'sum', 'hispanic': 'sum'}) / 10e6
gdf.style.format("{:0.2f}")
gdf.apply(lambda r: r / r.sum(), axis=1).style.format("{:.2%}")
Explanation: What proportion of the total donation was given by blacks, whites, Hispanics, and Asians?
End of explanation |
7,602 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Relax and hold steady
Many problems in physics have no time dependence, yet are rich with physical meaning
Step1: To visualize 2D data, we can use pyplot.imshow(), but a 3D plot can sometimes show a more intuitive view the solution. Or it's just prettier!
Be sure to enjoy the many examples of 3D plots in the mplot3d section of the Matplotlib Gallery.
We'll import the Axes3D library from Matplotlib and also grab the cm package, which provides different colormaps for visualizing plots.
Step2: Let's define a function for setting up our plotting environment, to avoid repeating this set-up over and over again. It will save us some typing.
Step3: Note
This plotting function uses Viridis, a new (and awesome) colormap available in Matplotlib versions 1.5 and greater. If you see an error when you try to plot using <tt>cm.viridis</tt>, just update Matplotlib using <tt>conda</tt> or <tt>pip</tt>.
Analytical solution
The Laplace equation with the boundary conditions listed above has an analytical solution, given by
\begin{equation}
p(x,y) = \frac{\sinh \left( \frac{\frac{3}{2} \pi y}{L}\right)}{\sinh \left( \frac{\frac{3}{2} \pi H}{L}\right)} \sin \left( \frac{\frac{3}{2} \pi x}{L} \right)
\end{equation}
where $L$ and $H$ are the length of the domain in the $x$ and $y$ directions, respectively.
We will use numpy.meshgrid to plot our 2D solutions. This is a function that takes two vectors (x and y, say) and returns two 2D arrays of $x$ and $y$ coordinates that we then use to create the contour plot. Always useful, linspace creates 1-row arrays of equally spaced numbers
Step4: Ok, let's try out the analytical solution and use it to test the plot_3D function we wrote above.
Step5: It worked! This is what the solution should look like when we're 'done' relaxing. (And isn't viridis a cool colormap?)
How long do we iterate?
We noted above that there is no time dependence in the Laplace equation. So it doesn't make a lot of sense to use a for loop with nt iterations.
Instead, we can use a while loop that continues to iteratively apply the relaxation scheme until the difference between two successive iterations is small enough.
But how small is small enough? That's a good question. We'll try to work that out as we go along.
To compare two successive potential fields, a good option is to use the L2 norm of the difference. It's defined as
\begin{equation}
|\textbf{x}| = \sqrt{\sum_{i=0, j=0}^k \left|p^{k+1}_{i,j} - p^k_{i,j}\right|^2}
\end{equation}
But there's one flaw with this formula. We are summing the difference between successive iterations at each point on the grid. So what happens when the grid grows? (For example, if we're refining the grid, for whatever reason.) There will be more grid points to compare and so more contributions to the sum. The norm will be a larger number just because of the grid size!
That doesn't seem right. We'll fix it by normalizing the norm, dividing the above formula by the norm of the potential field at iteration $k$.
For two successive iterations, the relative L2 norm is then calculated as
\begin{equation}
|\textbf{x}| = \frac{\sqrt{\sum_{i=0, j=0}^k \left|p^{k+1}_{i,j} - p^k_{i,j}\right|^2}}{\sqrt{\sum_{i=0, j=0}^k \left|p^k_{i,j}\right|^2}}
\end{equation}
Our Python code for this calculation is a one-line function
Step6: Now, let's define a function that will apply Jacobi's method for Laplace's equation. Three of the boundaries are Dirichlet boundaries and so we can simply leave them alone. Only the Neumann boundary needs to be explicitly calculated at each iteration, and we'll do it by discretizing the derivative in its first order approximation
Step7: Rows and columns, and index order
The physical problem has two dimensions, so we also store the temperatures in two dimensions
Step8: Now let's visualize the initial conditions using the plot_3D function, just to check we've got it right.
Step9: The p array is equal to zero everywhere, except along the boundary $y = 1$. Hopefully you can see how the relaxed solution and this initial condition are related.
Now, run the iterative solver with a target L2-norm difference between successive iterations of $10^{-8}$.
Step10: Let's make a gorgeous plot of the final field using the newly minted plot_3D function.
Step11: Awesome! That looks pretty good. But we'll need more than a simple visual check, though. The "eyeball metric" is very forgiving!
Convergence analysis
Convergence, Take 1
We want to make sure that our Jacobi function is working properly. Since we have an analytical solution, what better way than to do a grid-convergence analysis? We will run our solver for several grid sizes and look at how fast the L2 norm of the difference between consecutive iterations decreases.
Let's make our lives easier by writing a function to "reset" the initial guess for each grid so we don't have to keep copying and pasting them.
Step12: Now run Jacobi's method on the Laplace equation using four different grids, with the same exit criterion of $10^{-8}$ each time. Then, we look at the error versus the grid size in a log-log plot. What do we get?
Step13: Hmm. That doesn't look like 2nd-order convergence, but we're using second-order finite differences. What's going on? The culprit is the boundary conditions. Dirichlet conditions are order-agnostic (a set value is a set value), but the scheme we used for the Neumann boundary condition is 1st-order.
Remember when we said that the boundaries drive the problem? One boundary that's 1st-order completely tanked our spatial convergence. Let's fix it!
2nd-order Neumann BCs
Up to this point, we have used the first-order approximation of a derivative to satisfy Neumann B.C.'s. For a boundary located at $x=0$ this reads,
\begin{equation}
\frac{p^{k+1}_{1,j} - p^{k+1}_{0,j}}{\Delta x} = 0
\end{equation}
which, solving for $p^{k+1}_{0,j}$ gives us
\begin{equation}
p^{k+1}_{0,j} = p^{k+1}_{1,j}
\end{equation}
Using that Neumann condition will limit us to 1st-order convergence. Instead, we can start with a 2nd-order approximation (the central-difference approximation)
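The remaining steps of that derivation, reconstructed here to be consistent with the boundary update used in the solver code later in this row: at the boundary $x = L$ (index $i = n_x - 1$), the central-difference approximation

\begin{equation}
\frac{p^{k+1}_{i+1,j} - p^{k+1}_{i-1,j}}{2 \Delta x} = 0
\end{equation}

implies $p^{k+1}_{i+1,j} = p^{k+1}_{i-1,j}$, where $p_{i+1,j}$ is a ghost point just outside the domain. Substituting this into the five-point stencil eliminates the ghost point and gives the 2nd-order boundary update

\begin{equation}
p^{k+1}_{i,j} = \frac{1}{4} \left( 2 p^{k}_{i-1,j} + p^{k}_{i,j+1} + p^{k}_{i,j-1} \right)
\end{equation}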
Step14: Again, this is the exact same code as before, but now we're running the Jacobi solver with a 2nd-order Neumann boundary condition. Let's do a grid-refinement analysis, and plot the error versus the grid spacing.
Step15: Nice! That's much better. It might not be exactly 2nd-order, but it's awfully close. (What is "close enough" in regards to observed convergence rates is a thorny question.)
Now, notice from this plot that the error on the finest grid is around $0.0002$. Given this, perhaps we didn't need to continue iterating until a target difference between two solutions of $10^{-8}$. The spatial accuracy of the finite difference approximation is much worse than that! But we didn't know it ahead of time, did we? That's the "catch 22" of iterative solution of systems arising from discretization of PDEs.
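One way to put a number on the observed convergence rate is to fit the slope of the log-log error-versus-spacing line. A sketch with illustrative error values (not the ones computed in this notebook):

```python
import numpy as np

nx = np.array([11, 21, 41, 81])
h = 1.0 / (nx - 1)                 # grid spacing on the unit domain

# illustrative errors that roughly quarter as h halves (2nd-order behavior)
err = np.array([3.2e-2, 8.1e-3, 2.05e-3, 5.2e-4])

# slope of the log-log fit = observed order of convergence
order = np.polyfit(np.log(h), np.log(err), 1)[0]
```

A slope near 2 is what "awfully close to 2nd-order" means quantitatively.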
Final word
The Jacobi method is the simplest relaxation scheme to explain and to apply. It is also the worst iterative solver! In practice, it is seldom used on its own as a solver, although it is useful as a smoother with multi-grid methods. There are much better iterative methods! If you are curious you can find them in this lesson. | Python Code:
from matplotlib import pyplot
import numpy
%matplotlib inline
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
Explanation: Relax and hold steady
Many problems in physics have no time dependence, yet are rich with physical meaning: the gravitational field produced by a massive object, the electrostatic potential of a charge distribution, the displacement of a stretched membrane and the steady flow of fluid through a porous medium ... all these can be modeled by Poisson's equation:
\begin{equation}
\nabla^2 u = f
\end{equation}
where the unknown $u$ and the known $f$ are functions of space, in a domain $\Omega$. To find the solution, we require boundary conditions. These could be Dirichlet boundary conditions, specifying the value of the solution on the boundary,
\begin{equation}
u = b_1 \text{ on } \partial\Omega,
\end{equation}
or Neumann boundary conditions, specifying the normal derivative of the solution on the boundary,
\begin{equation}
\frac{\partial u}{\partial n} = b_2 \text{ on } \partial\Omega.
\end{equation}
A boundary-value problem consists of finding $u$, given the above information. Numerically, we can do this using relaxation methods, which start with an initial guess for $u$ and then iterate towards the solution. Let's find out how!
Laplace's equation
The particular case of $f=0$ (homogeneous case) results in Laplace's equation:
\begin{equation}
\nabla^2 u = 0
\end{equation}
For example, the equation for steady, two-dimensional heat conduction is:
\begin{equation}
\frac{\partial ^2 T}{\partial x^2} + \frac{\partial ^2 T}{\partial y^2} = 0
\end{equation}
where $T$ is a temperature that has reached steady state. The Laplace equation models the equilibrium state of a system under the supplied boundary conditions.
The study of solutions to Laplace's equation is called potential theory, and the solutions themselves are often potential fields. Let's use $p$ from now on to represent our generic dependent variable, and write Laplace's equation again (in two dimensions):
\begin{equation}
\frac{\partial ^2 p}{\partial x^2} + \frac{\partial ^2 p}{\partial y^2} = 0
\end{equation}
Like in the diffusion equation, we discretize the second-order derivatives with central differences. If you need to refresh your mind, check out this lesson and try to discretize the equation by yourself. On a two-dimensional Cartesian grid, it gives:
\begin{equation}
\frac{p_{i+1, j} - 2p_{i,j} + p_{i-1,j} }{\Delta x^2} + \frac{p_{i,j+1} - 2p_{i,j} + p_{i, j-1} }{\Delta y^2} = 0
\end{equation}
When $\Delta x = \Delta y$, we end up with the following equation:
\begin{equation}
p_{i+1, j} + p_{i-1,j} + p_{i,j+1} + p_{i, j-1}- 4 p_{i,j} = 0
\end{equation}
This tells us that the Laplacian differential operator at grid point $(i,j)$ can be evaluated discretely using the value of $p$ at that point (with a factor $-4$) and the four neighboring points to the left and right, above and below grid point $(i,j)$.
The stencil of the discrete Laplacian operator is shown in Figure 1. It is typically called the five-point stencil, for obvious reasons.
<img src="./figures/laplace.svg">
Figure 1: Laplace five-point stencil.
The discrete equation above is valid for every interior point in the domain. If we write the equations for all interior points, we have a linear system of algebraic equations. We could solve the linear system directly (e.g., with Gaussian elimination), but we can be more clever than that!
Notice that the coefficient matrix of such a linear system has mostly zeroes. For a uniform spatial grid, the matrix is block diagonal: it has diagonal blocks that are tridiagonal with $-4$ on the main diagonal and $1$ on two off-center diagonals, and two more diagonals with $1$. All of the other elements are zero. Iterative methods are particularly suited for a system with this structure, and save us from storing all those zeroes.
We will start with an initial guess for the solution, $p_{i,j}^{0}$, and use the discrete Laplacian to get an update, $p_{i,j}^{1}$, then continue on computing $p_{i,j}^{k}$ until we're happy. Note that $k$ is not a time index here, but an index corresponding to the number of iterations we perform in the relaxation scheme.
At each iteration, we compute updated values $p_{i,j}^{k+1}$ in a (hopefully) clever way so that they converge to a set of values satisfying Laplace's equation. The system will reach equilibrium only as the number of iterations tends to $\infty$, but we can approximate the equilibrium state by iterating until the change between one iteration and the next is very small.
The most intuitive method of iterative solution is known as the Jacobi method, in which the values at the grid points are replaced by the corresponding weighted averages:
\begin{equation}
p^{k+1}_{i,j} = \frac{1}{4} \left(p^{k}_{i,j-1} + p^k_{i,j+1} + p^{k}_{i-1,j} + p^k_{i+1,j} \right)
\end{equation}
This method does indeed converge to the solution of Laplace's equation. Thank you Professor Jacobi!
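A single vectorized Jacobi sweep on a toy grid (illustrative, separate from the full solver built later in this lesson) makes the update concrete:

```python
import numpy as np

p = np.zeros((5, 5))
p[-1, :] = 1.0          # a hypothetical Dirichlet condition on the top row

pn = p.copy()
# replace every interior value with the average of its four neighbors
p[1:-1, 1:-1] = 0.25 * (pn[1:-1, 2:] + pn[1:-1, :-2]
                        + pn[2:, 1:-1] + pn[:-2, 1:-1])
```

After one sweep, only the interior points adjacent to the hot boundary have changed (to 0.25); repeated sweeps propagate the boundary's influence inward.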
Challenge task
Grab a piece of paper and write out the coefficient matrix for a discretization with 7 grid points in the $x$ direction (5 interior points) and 5 points in the $y$ direction (3 interior). The system should have 15 unknowns, and the coefficient matrix three diagonal blocks. Assume prescribed Dirichlet boundary conditions on all sides (not necessarily zero).
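To check your answer, here is a sketch that assembles that coefficient matrix programmatically — a hypothetical helper, assuming lexicographic (row-by-row) ordering of the interior unknowns:

```python
import numpy as np

def laplacian_matrix(nx_int, ny_int):
    """Assemble the 2D discrete Laplacian (five-point stencil) for
    nx_int x ny_int interior points with Dirichlet boundaries,
    using lexicographic ordering of the unknowns."""
    n = nx_int * ny_int
    A = np.zeros((n, n))
    for j in range(ny_int):
        for i in range(nx_int):
            k = j * nx_int + i
            A[k, k] = -4
            if i > 0:
                A[k, k - 1] = 1          # left neighbor
            if i < nx_int - 1:
                A[k, k + 1] = 1          # right neighbor
            if j > 0:
                A[k, k - nx_int] = 1     # neighbor below
            if j < ny_int - 1:
                A[k, k + nx_int] = 1     # neighbor above
    return A

A = laplacian_matrix(5, 3)   # 7x5 grid -> 5x3 interior -> 15 unknowns
```

Note the zero at `A[4, 5]`: the last unknown of one grid row is not coupled to the first unknown of the next, which is exactly the block-diagonal structure described above.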
Boundary conditions and relaxation
Suppose we want to model steady-state heat transfer on (say) a computer chip with one side insulated (zero Neumann BC), two sides held at a fixed temperature (Dirichlet condition) and one side touching a component that has a sinusoidal distribution of temperature.
We would need to solve Laplace's equation with boundary conditions like
\begin{equation}
\begin{gathered}
p=0 \text{ at } x=0\
\frac{\partial p}{\partial x} = 0 \text{ at } x = L\
p = 0 \text{ at }y = 0 \
p = \sin \left( \frac{\frac{3}{2}\pi x}{L} \right) \text{ at } y = H.
\end{gathered}
\end{equation}
We'll take $L=1$ and $H=1$ for the sizes of the domain in the $x$ and $y$ directions.
One of the defining features of elliptic PDEs is that they are "driven" by the boundary conditions. In the iterative solution of Laplace's equation, boundary conditions are set and the solution relaxes from an initial guess to join the boundaries together smoothly, given those conditions. Our initial guess will be $p=0$ everywhere. Now, let's relax!
First, we import our usual smattering of libraries (plus a few new ones!)
End of explanation
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
Explanation: To visualize 2D data, we can use pyplot.imshow(), but a 3D plot can sometimes show a more intuitive view the solution. Or it's just prettier!
Be sure to enjoy the many examples of 3D plots in the mplot3d section of the Matplotlib Gallery.
We'll import the Axes3D library from Matplotlib and also grab the cm package, which provides different colormaps for visualizing plots.
End of explanation
def plot_3D(x, y, p):
    '''Creates 3D plot with appropriate limits and viewing angle

    Parameters:
    ----------
    x: array of float
        nodal coordinates in x
    y: array of float
        nodal coordinates in y
    p: 2D array of float
        calculated potential field
    '''
    fig = pyplot.figure(figsize=(11,7), dpi=100)
    ax = fig.gca(projection='3d')
    X, Y = numpy.meshgrid(x, y)
    surf = ax.plot_surface(X, Y, p[:], rstride=1, cstride=1, cmap=cm.viridis,
                           linewidth=0, antialiased=False)
    ax.set_xlim(0, 1)
    ax.set_ylim(0, 1)
    ax.set_xlabel('$x$')
    ax.set_ylabel('$y$')
    ax.set_zlabel('$z$')
    ax.view_init(30, 45)
End of explanation
def p_analytical(x, y):
    X, Y = numpy.meshgrid(x, y)
    p_an = numpy.sinh(1.5*numpy.pi*Y / x[-1]) /\
           (numpy.sinh(1.5*numpy.pi*y[-1]/x[-1]))*numpy.sin(1.5*numpy.pi*X/x[-1])
    return p_an
Explanation: Note
This plotting function uses Viridis, a new (and awesome) colormap available in Matplotlib versions 1.5 and greater. If you see an error when you try to plot using <tt>cm.viridis</tt>, just update Matplotlib using <tt>conda</tt> or <tt>pip</tt>.
Analytical solution
The Laplace equation with the boundary conditions listed above has an analytical solution, given by
\begin{equation}
p(x,y) = \frac{\sinh \left( \frac{\frac{3}{2} \pi y}{L}\right)}{\sinh \left( \frac{\frac{3}{2} \pi H}{L}\right)} \sin \left( \frac{\frac{3}{2} \pi x}{L} \right)
\end{equation}
where $L$ and $H$ are the length of the domain in the $x$ and $y$ directions, respectively.
We will use numpy.meshgrid to plot our 2D solutions. This is a function that takes two vectors (x and y, say) and returns two 2D arrays of $x$ and $y$ coordinates that we then use to create the contour plot. Always useful, linspace creates 1-row arrays of equally spaced numbers: it helps for defining $x$ and $y$ axes in line plots, but now we want the analytical solution plotted for every point in our domain. To do this, we'll use in the analytical solution the 2D arrays generated by numpy.meshgrid.
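A tiny demonstration (toy sizes, not this notebook's grid) of the shapes meshgrid produces:

```python
import numpy as np

x = np.linspace(0, 1, 3)        # 3 equally spaced points in x
y = np.linspace(0, 1, 2)        # 2 equally spaced points in y
X, Y = np.meshgrid(x, y)
# X repeats the x vector along each row; Y repeats y down each column,
# so X[j, i], Y[j, i] give the coordinates of grid point (i, j)
```

Both returned arrays have shape `(len(y), len(x))`, which matches the row-column storage convention used for the solution arrays below.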
End of explanation
nx = 41
ny = 41
x = numpy.linspace(0,1,nx)
y = numpy.linspace(0,1,ny)
p_an = p_analytical(x,y)
plot_3D(x,y,p_an)
Explanation: Ok, let's try out the analytical solution and use it to test the plot_3D function we wrote above.
End of explanation
def L2_error(p, pn):
    return numpy.sqrt(numpy.sum((p - pn)**2)/numpy.sum(pn**2))
Explanation: It worked! This is what the solution should look like when we're 'done' relaxing. (And isn't viridis a cool colormap?)
How long do we iterate?
We noted above that there is no time dependence in the Laplace equation. So it doesn't make a lot of sense to use a for loop with nt iterations.
Instead, we can use a while loop that continues to iteratively apply the relaxation scheme until the difference between two successive iterations is small enough.
But how small is small enough? That's a good question. We'll try to work that out as we go along.
To compare two successive potential fields, a good option is to use the L2 norm of the difference. It's defined as
\begin{equation}
|\textbf{x}| = \sqrt{\sum_{i=0, j=0}^k \left|p^{k+1}_{i,j} - p^k_{i,j}\right|^2}
\end{equation}
But there's one flaw with this formula. We are summing the difference between successive iterations at each point on the grid. So what happens when the grid grows? (For example, if we're refining the grid, for whatever reason.) There will be more grid points to compare and so more contributions to the sum. The norm will be a larger number just because of the grid size!
That doesn't seem right. We'll fix it by normalizing the norm, dividing the above formula by the norm of the potential field at iteration $k$.
For two successive iterations, the relative L2 norm is then calculated as
\begin{equation}
|\textbf{x}| = \frac{\sqrt{\sum_{i=0, j=0}^k \left|p^{k+1}_{i,j} - p^k_{i,j}\right|^2}}{\sqrt{\sum_{i=0, j=0}^k \left|p^k_{i,j}\right|^2}}
\end{equation}
Our Python code for this calculation is a one-line function:
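A quick sanity check (illustrative, not part of the original notebook) that the normalization makes the measure independent of grid size — a uniform 1% perturbation gives a relative norm of 0.01 on any grid:

```python
import numpy as np

def L2_error(p, pn):
    return np.sqrt(np.sum((p - pn)**2) / np.sum(pn**2))

small = np.ones((10, 10))
large = np.ones((100, 100))
err_small = L2_error(small + 0.01, small)
err_large = L2_error(large + 0.01, large)
```

Without the denominator, the large grid's norm would be ten times bigger purely because it has 100 times more points.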
End of explanation
def laplace2d(p, l2_target):
    '''Iteratively solves the Laplace equation using the Jacobi method

    Parameters:
    ----------
    p: 2D array of float
        Initial potential distribution
    l2_target: float
        target for the difference between consecutive solutions

    Returns:
    -------
    p: 2D array of float
        Potential distribution after relaxation
    '''
    l2norm = 1
    pn = numpy.empty_like(p)

    while l2norm > l2_target:
        pn = p.copy()
        p[1:-1, 1:-1] = .25 * (pn[1:-1, 2:] + pn[1:-1, :-2]
                               + pn[2:, 1:-1] + pn[:-2, 1:-1])

        ##Neumann B.C. along x = L
        p[1:-1, -1] = p[1:-1, -2]  # 1st order approx of a derivative

        l2norm = L2_error(p, pn)

    return p
Explanation: Now, let's define a function that will apply Jacobi's method for Laplace's equation. Three of the boundaries are Dirichlet boundaries and so we can simply leave them alone. Only the Neumann boundary needs to be explicitly calculated at each iteration, and we'll do it by discretizing the derivative in its first order approximation:
End of explanation
##variable declarations
nx = 41
ny = 41
##initial conditions
p = numpy.zeros((ny,nx)) ##create a XxY vector of 0's
##plotting aids
x = numpy.linspace(0,1,nx)
y = numpy.linspace(0,1,ny)
##Dirichlet boundary conditions
p[-1,:] = numpy.sin(1.5*numpy.pi*x/x[-1])
Explanation: Rows and columns, and index order
The physical problem has two dimensions, so we also store the temperatures in two dimensions: in a 2D array.
We chose to store it with the $y$ coordinates corresponding to the rows of the array and $x$ coordinates varying with the columns (this is just a code design decision!). If we are consistent with the stencil formula (with $x$ corresponding to index $i$ and $y$ to index $j$), then $p_{i,j}$ will be stored in array format as p[j,i].
This might be a little confusing as most of us are used to writing coordinates in the format $(x,y)$, but our preference is to have the data stored so that it matches the physical orientation of the problem. Then, when we make a plot of the solution, the visualization will make sense to us, with respect to the geometry of our set-up. That's just nicer than to have the plot rotated!
<img src="./figures/rowcolumn.svg" width="400px">
Figure 2: Row-column data storage
As you can see on Figure 2 above, if we want to access the value $18$ we would write those coordinates as $(x_2, y_3)$. You can also see that its location is the 3rd row, 2nd column, so its array address would be p[3,2].
Again, this is a design decision. However you can choose to manipulate and store your data however you like; just remember to be consistent!
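A toy array (not the notebook's temperature field) showing the convention in action:

```python
import numpy as np

ny, nx = 5, 7
p = np.arange(ny * nx).reshape(ny, nx)   # rows are y, columns are x

# the value at physical coordinates (x_2, y_3) lives at row 3, column 2
val = p[3, 2]
```

For this particular array, `val` is `3*nx + 2 = 23`; the point is simply that the row index (y) comes first and the column index (x) second.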
Let's relax!
The initial values of the potential field are zero everywhere (initial guess), except at the boundary:
$$p = \sin \left( \frac{\frac{3}{2}\pi x}{L} \right) \text{ at } y=H$$
To initialize the domain, numpy.zeros will handle everything except that one Dirichlet condition. Let's do it!
End of explanation
plot_3D(x, y, p)
Explanation: Now let's visualize the initial conditions using the plot_3D function, just to check we've got it right.
End of explanation
p = laplace2d(p.copy(), 1e-8)
Explanation: The p array is equal to zero everywhere, except along the boundary $y = 1$. Hopefully you can see how the relaxed solution and this initial condition are related.
Now, run the iterative solver with a target L2-norm difference between successive iterations of $10^{-8}$.
End of explanation
plot_3D(x,y,p)
Explanation: Let's make a gorgeous plot of the final field using the newly minted plot_3D function.
End of explanation
def laplace_IG(nx):
    '''Generates initial guess for Laplace 2D problem for a
    given number of grid points (nx) within the domain [0,1]x[0,1]

    Parameters:
    ----------
    nx: int
        number of grid points in x (and implicitly y) direction

    Returns:
    -------
    p: 2D array of float
        Pressure distribution after relaxation
    x: array of float
        linspace coordinates in x
    y: array of float
        linspace coordinates in y
    '''
    ##initial conditions
    p = numpy.zeros((nx, nx))  ##create a XxY vector of 0's

    ##plotting aids
    x = numpy.linspace(0, 1, nx)
    y = x

    ##Dirichlet boundary conditions
    p[:, 0] = 0
    p[0, :] = 0
    p[-1, :] = numpy.sin(1.5*numpy.pi*x/x[-1])

    return p, x, y
Explanation: Awesome! That looks pretty good. But we'll need more than a simple visual check, though. The "eyeball metric" is very forgiving!
Convergence analysis
Convergence, Take 1
We want to make sure that our Jacobi function is working properly. Since we have an analytical solution, what better way than to do a grid-convergence analysis? We will run our solver for several grid sizes and look at how fast the L2 norm of the difference between consecutive iterations decreases.
Let's make our lives easier by writing a function to "reset" the initial guess for each grid so we don't have to keep copying and pasting them.
End of explanation
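The convergence study also relies on two helpers defined earlier in the lesson, `L2_error` and `p_analytical`. Sketches consistent with how they are called — their exact forms here are assumptions, reconstructed from the boundary conditions of this problem:

```python
import numpy

def L2_error(p, p_ref):
    # relative L2 norm of the difference between p and a reference field
    return numpy.sqrt(numpy.sum((p - p_ref)**2) / numpy.sum(p_ref**2))

def p_analytical(x, y):
    # Separation-of-variables solution of this Laplace problem:
    # p = sin(3*pi*x/(2*L)) at y = H, p = 0 on the other Dirichlet sides,
    # dp/dx = 0 at x = L (the sine's derivative vanishes there).
    X, Y = numpy.meshgrid(x, y)
    return (numpy.sinh(1.5 * numpy.pi * Y / x[-1]) /
            numpy.sinh(1.5 * numpy.pi * y[-1] / x[-1]) *
            numpy.sin(1.5 * numpy.pi * X / x[-1]))
```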
nx_values = [11, 21, 41, 81]
l2_target = 1e-8
error = numpy.empty_like(nx_values, dtype=float)
for i, nx in enumerate(nx_values):
p, x, y = laplace_IG(nx)
p = laplace2d(p.copy(), l2_target)
p_an = p_analytical(x, y)
error[i] = L2_error(p, p_an)
pyplot.figure(figsize=(6,6))
pyplot.grid(True)
pyplot.xlabel(r'$n_x$', fontsize=18)
pyplot.ylabel(r'$L_2$-norm of the error', fontsize=18)
pyplot.loglog(nx_values, error, color='k', ls='--', lw=2, marker='o')
pyplot.axis('equal');
Explanation: Now run Jacobi's method on the Laplace equation using four different grids, with the same exit criterion of $10^{-8}$ each time. Then, we look at the error versus the grid size in a log-log plot. What do we get?
End of explanation
def laplace2d_neumann(p, l2_target):
'''Iteratively solves the Laplace equation using the Jacobi method
with second-order Neumann boundary conditions
Parameters:
----------
p: 2D array of float
Initial potential distribution
l2_target: float
target for the difference between consecutive solutions
Returns:
-------
p: 2D array of float
Potential distribution after relaxation
'''
l2norm = 1
pn = numpy.empty_like(p)
while l2norm > l2_target:
pn = p.copy()
p[1:-1,1:-1] = .25 * (pn[1:-1,2:] + pn[1:-1, :-2] \
+ pn[2:, 1:-1] + pn[:-2, 1:-1])
##2nd-order Neumann B.C. along x = L
p[1:-1,-1] = .25 * (2*pn[1:-1,-2] + pn[2:, -1] + pn[:-2, -1])
l2norm = L2_error(p, pn)
return p
Explanation: Hmm. That doesn't look like 2nd-order convergence, but we're using second-order finite differences. What's going on? The culprit is the boundary conditions. Dirichlet conditions are order-agnostic (a set value is a set value), but the scheme we used for the Neumann boundary condition is 1st-order.
Remember when we said that the boundaries drive the problem? One boundary that's 1st-order completely tanked our spatial convergence. Let's fix it!
2nd-order Neumann BCs
Up to this point, we have used the first-order approximation of a derivative to satisfy Neumann B.C.'s. For a boundary located at $x=0$ this reads,
\begin{equation}
\frac{p^{k+1}_{1,j} - p^{k+1}_{0,j}}{\Delta x} = 0
\end{equation}
which, solving for $p^{k+1}_{0,j}$ gives us
\begin{equation}
p^{k+1}_{0,j} = p^{k+1}_{1,j}
\end{equation}
Using that Neumann condition will limit us to 1st-order convergence. Instead, we can start with a 2nd-order approximation (the central-difference approximation):
\begin{equation}
\frac{p^{k+1}_{1,j} - p^{k+1}_{-1,j}}{2 \Delta x} = 0
\end{equation}
That seems problematic, since there is no grid point $p^{k}_{-1,j}$. But no matter … let's carry on. According to the 2nd-order approximation,
\begin{equation}
p^{k+1}_{-1,j} = p^{k+1}_{1,j}
\end{equation}
Recall the finite-difference Jacobi equation with $i=0$:
\begin{equation}
p^{k+1}_{0,j} = \frac{1}{4} \left(p^{k}_{0,j-1} + p^k_{0,j+1} + p^{k}_{-1,j} + p^k_{1,j} \right)
\end{equation}
Notice that the equation relies on the troublesome (nonexistent) point $p^k_{-1,j}$, but according to the equality just above, we have a value we can substitute, namely $p^k_{1,j}$. Ah! We've completed the 2nd-order Neumann condition:
\begin{equation}
p^{k+1}_{0,j} = \frac{1}{4} \left(p^{k}_{0,j-1} + p^k_{0,j+1} + 2p^{k}_{1,j} \right)
\end{equation}
That's a bit more complicated than the first-order version, but it's relatively straightforward to code.
Note
Do not confuse $p^{k+1}_{-1,j}$ with <tt>p[-1]</tt>:
<tt>p[-1]</tt> is a piece of Python code used to refer to the last element of a list or array named <tt>p</tt>. $p^{k+1}_{-1,j}$ is a 'ghost' point that describes a position that lies outside the actual domain.
Convergence, Take 2
We can copy the previous Jacobi function and replace only the line implementing the Neumann boundary condition.
Careful!
Remember that our problem has the Neumann boundary located at $x = L$ and not $x = 0$ as we assumed in the derivation above.
End of explanation
nx_values = [11, 21, 41, 81]
l2_target = 1e-8
error = numpy.empty_like(nx_values, dtype=float)
for i, nx in enumerate(nx_values):
p, x, y = laplace_IG(nx)
p = laplace2d_neumann(p.copy(), l2_target)
p_an = p_analytical(x, y)
error[i] = L2_error(p, p_an)
pyplot.figure(figsize=(6,6))
pyplot.grid(True)
pyplot.xlabel(r'$n_x$', fontsize=18)
pyplot.ylabel(r'$L_2$-norm of the error', fontsize=18)
pyplot.loglog(nx_values, error, color='k', ls='--', lw=2, marker='o')
pyplot.axis('equal');
Explanation: Again, this is the exact same code as before, but now we're running the Jacobi solver with a 2nd-order Neumann boundary condition. Let's do a grid-refinement analysis, and plot the error versus the grid spacing.
End of explanation
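The slope of that log-log plot can be reduced to a single number: for uniformly refined grids (here 11, 21, 41, 81 points, so the spacing halves each time), the observed order of convergence follows from the ratio of successive errors. A small self-contained sketch with illustrative error values (the helper name and the numbers are assumptions):

```python
import numpy

def observed_order(e_coarse, e_fine, r=2.0):
    # Observed order of convergence between two grid levels with
    # refinement ratio r: p_obs = log(e_coarse / e_fine) / log(r).
    return numpy.log(e_coarse / e_fine) / numpy.log(r)

# For a 2nd-order method, halving the spacing divides the error by ~4.
errors = numpy.array([1.6e-2, 4.0e-3, 1.0e-3, 2.5e-4])   # illustrative values
orders = [observed_order(errors[i], errors[i + 1])
          for i in range(len(errors) - 1)]
```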
#Ignore this cell; it simply loads a style for the notebook.
from IPython.core.display import HTML
def css_styling():
try:
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
except:
pass
css_styling()
Explanation: Nice! That's much better. It might not be exactly 2nd-order, but it's awfully close. (What is "close enough" in regards to observed convergence rates is a thorny question.)
Now, notice from this plot that the error on the finest grid is around $0.0002$. Given this, perhaps we didn't need to continue iterating until a target difference between two solutions of $10^{-8}$. The spatial accuracy of the finite difference approximation is much worse than that! But we didn't know it ahead of time, did we? That's the "catch 22" of iterative solution of systems arising from discretization of PDEs.
Final word
The Jacobi method is the simplest relaxation scheme to explain and to apply. It is also the worst iterative solver! In practice, it is seldom used on its own as a solver, although it is useful as a smoother with multi-grid methods. There are much better iterative methods! If you are curious you can find them in this lesson.
End of explanation |
7,603 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building a song recommender
Fire up GraphLab Create
Step1: Load music data
Step2: Explore data
Music data shows how many times a user listened to a song, as well as the details of the song.
Step3: Showing the most popular songs in the dataset
Step4: Count number of unique users in the dataset
Step5: Create a song recommender
Step6: Simple popularity-based recommender
Step7: Use the popularity model to make some predictions
A popularity model makes the same prediction for all users, so provides no personalization.
Step8: Build a song recommender with personalization
We now create a model that allows us to make personalized recommendations to each user.
Step9: Applying the personalized model to make song recommendations
As you can see, different users get different recommendations now.
Step10: We can also apply the model to find similar songs to any song in the dataset
Step11: Quantitative comparison between the models
We now formally compare the popularity and the personalized models using precision-recall curves. | Python Code:
import graphlab
Explanation: Building a song recommender
Fire up GraphLab Create
End of explanation
song_data = graphlab.SFrame('song_data.gl/')
Explanation: Load music data
End of explanation
song_data.head()
Explanation: Explore data
Music data shows how many times a user listened to a song, as well as the details of the song.
End of explanation
graphlab.canvas.set_target('ipynb')
song_data['song'].show()
len(song_data)
Explanation: Showing the most popular songs in the dataset
End of explanation
users = song_data['user_id'].unique()
for artist in ['Kanye West','Foo Fighters','Taylor Swift','Lady GaGa']:
print artist, len(song_data[song_data['artist'] == artist]['user_id'].unique())
for artist in ['Kings Of Leon','Coldplay','Taylor Swift','Lady GaGa']:
print artist, song_data[song_data['artist'] == artist]['listen_count'].sum()
pop = song_data.groupby(key_columns='artist', operations={'total_count': graphlab.aggregate.SUM('listen_count')})
pop.sort('total_count', ascending=False)
pop.sort('total_count', ascending=True)
len(users)
Explanation: Count number of unique users in the dataset
End of explanation
train_data,test_data = song_data.random_split(.8,seed=0)
Explanation: Create a song recommender
End of explanation
popularity_model = graphlab.popularity_recommender.create(train_data,
user_id='user_id',
item_id='song')
Explanation: Simple popularity-based recommender
End of explanation
popularity_model.recommend(users=[users[0]])
popularity_model.recommend(users=[users[1]])
Explanation: Use the popularity model to make some predictions
A popularity model makes the same prediction for all users, so provides no personalization.
End of explanation
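Under the hood, a popularity recommender simply ranks items by an aggregate statistic and serves that same ranking to every user. The idea can be sketched with plain pandas on toy data (an illustration of the concept, not GraphLab's internals; the toy values are made up):

```python
import pandas as pd

# Toy listen counts standing in for song_data.
plays = pd.DataFrame({
    'user_id': ['u1', 'u1', 'u2', 'u2', 'u3'],
    'song':    ['A',  'B',  'A',  'C',  'A'],
    'listen_count': [3, 1, 2, 5, 4],
})

# Rank songs by total listens; every user gets this same ranking.
popularity = (plays.groupby('song')['listen_count']
                   .sum()
                   .sort_values(ascending=False))
top_song = popularity.index[0]
```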
personalized_model = graphlab.item_similarity_recommender.create(train_data,
user_id='user_id',
item_id='song')
subset_test_users = test_data['user_id'].unique()[0:10000]
rec_songs = personalized_model.recommend(users=subset_test_users)
print len(rec_songs)
rec_1song = rec_songs[rec_songs['rank']==1]
res = rec_1song.groupby(key_columns='song', operations={'count': graphlab.aggregate.COUNT()})
print res.sort('count', ascending=False)
print len(rec_songs)
Explanation: Build a song recommender with personalization
We now create a model that allows us to make personalized recommendations to each user.
End of explanation
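An item-similarity recommender instead scores candidate songs by how similar they are to the songs a user already listened to, with similarity computed between item columns of the user-item matrix. A toy sketch using cosine similarity — one common choice, not necessarily the measure GraphLab uses by default:

```python
import numpy as np

# Toy user-item matrix: rows = users, columns = songs, entries = listen counts.
R = np.array([[3., 1., 0.],
              [2., 0., 5.],
              [4., 0., 1.]])

def cosine_similarity(a, b):
    # cosine of the angle between two item (column) vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_01 = cosine_similarity(R[:, 0], R[:, 1])   # song 0 vs song 1
sim_02 = cosine_similarity(R[:, 0], R[:, 2])   # song 0 vs song 2
```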
personalized_model.recommend(users=[users[0]])
personalized_model.recommend(users=[users[1]])
Explanation: Applying the personalized model to make song recommendations
As you can see, different users get different recommendations now.
End of explanation
personalized_model.get_similar_items(['With Or Without You - U2'])
personalized_model.get_similar_items(['Chan Chan (Live) - Buena Vista Social Club'])
Explanation: We can also apply the model to find similar songs to any song in the dataset
End of explanation
%matplotlib inline
model_performance = graphlab.recommender.util.compare_models(test_data,
[popularity_model,personalized_model],
user_sample=0.05)
Explanation: Quantitative comparison between the models
We now formally compare the popularity and the personalized models using precision-recall curves.
End of explanation |
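The two metrics behind those curves are simple to state per user: precision@k is the fraction of the k recommended songs the user actually listened to, and recall@k is the fraction of the user's listened songs that appear in the recommendations. A hand-rolled sketch on toy data (not GraphLab's implementation):

```python
def precision_recall_at_k(recommended, relevant, k):
    # precision@k: hits among the top-k recommendations
    # recall@k:    hits as a share of all relevant (listened) items
    top_k = recommended[:k]
    hits = len(set(top_k) & set(relevant))
    return hits / float(k), hits / float(len(relevant))

prec, rec = precision_recall_at_k(['A', 'B', 'C', 'D'], {'B', 'D', 'E'}, k=4)
```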
7,604 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Análisis de los datos obtenidos
Uso de ipython para el análsis y muestra de los datos obtenidos durante la producción.Se implementa un regulador experto. Los datos analizados son del día 13 de Agosto del 2015
Los datos del experimento
Step1: Representamos ambos diámetro y la velocidad de la tractora en la misma gráfica
Step2: Aumentando la velocidad se ha conseguido que disminuya el valor máxima, sin embargo ha disminuido el valor mínimo. Para la siguiente iteracción, se va a volver a las velocidades de 1.5- 3.4 y se van a añadir más reglas con unos incrementos de velocidades menores, para evitar saturar la velocidad de traccción tanto a nivel alto como nivel bajo.
Comparativa de Diametro X frente a Diametro Y para ver el ratio del filamento
Step3: Filtrado de datos
Las muestras tomadas $d_x >= 0.9$ or $d_y >= 0.9$ las asumimos como error del sensor, por ello las filtramos de las muestras tomadas.
Step4: Representación de X/Y
Step5: Analizamos datos del ratio
Step6: Límites de calidad
Calculamos el número de veces que traspasamos unos límites de calidad.
$Th^+ = 1.85$ and $Th^- = 1.65$ | Python Code:
#Import the libraries used
import numpy as np
import pandas as pd
import seaborn as sns
#Show the versions used for each library
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))
#Open the csv file with the sample data
datos = pd.read_csv('ensayo2.CSV')
%pylab inline
#Store the file columns we are going to work with in a list
columns = ['Diametro X','Diametro Y', 'RPM TRAC']
#Show a summary of the collected data
datos[columns].describe()
#datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']]
Explanation: Analysis of the collected data
Use of IPython to analyse and display the data collected during production. An expert-rule controller is implemented. The data analysed are from 13 August 2015.
The experiment data:
* Start time: 12:06
* End time: 12:26
* Extruded filament:
* $T: 150ºC$
* $V_{min}$ puller: 1.5 mm/s
* $V_{max}$ puller: 5.3 mm/s
* The speed increments in the expert-system rules are different:
* In cases 3 and 5 an increment of +2 is kept.
* In cases 4 and 6 the increment is reduced to -1.
This experiment lasts only 20 min because it is plain to see that it brings no improvement; in fact, it adds more instability to the system.
We choose to add more rules to the system and to try to keep the pull speed from reaching its limits.
End of explanation
datos.ix[:, "Diametro X":"Diametro Y"].plot(figsize=(16,10),ylim=(0.5,3)).hlines([1.85,1.65],0,3500,colors='r')
#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')
datos.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
Explanation: Plot both diameters and the puller speed on the same graph
End of explanation
plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')
Explanation: Increasing the speed has reduced the maximum value, but it has also lowered the minimum value. For the next iteration we will return to the speed range 1.5-3.4 and add more rules with smaller speed increments, to avoid saturating the pull speed at both the high and the low end.
Comparison of Diameter X against Diameter Y to check the filament ratio
End of explanation
datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]
#datos_filtrados.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
Explanation: Data filtering
We assume that samples with $d_x < 0.9$ or $d_y < 0.9$ are sensor errors, so we filter them out of the collected samples.
End of explanation
plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')
Explanation: X/Y plot
End of explanation
ratio = datos_filtrados['Diametro X']/datos_filtrados['Diametro Y']
ratio.describe()
rolling_mean = pd.rolling_mean(ratio, 50)
rolling_std = pd.rolling_std(ratio, 50)
rolling_mean.plot(figsize=(12,6))
# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)
ratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))
Explanation: Analysing the ratio data
End of explanation
Th_u = 1.85
Th_d = 1.65
data_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |
(datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]
data_violations.describe()
data_violations.plot(subplots=True, figsize=(12,12))
Explanation: Quality limits
We count the number of times the quality limits are exceeded.
$Th^+ = 1.85$ and $Th^- = 1.65$
End of explanation |
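Turning the violation count into a rate makes the quality figure comparable across runs of different lengths. A sketch on synthetic data — the column names follow the notebook, but the sample values are made up:

```python
import pandas as pd

Th_u, Th_d = 1.85, 1.65   # same quality thresholds as in the notebook

# Synthetic sample standing in for the sensor readings.
datos = pd.DataFrame({'Diametro X': [1.70, 1.90, 1.75, 1.60, 1.80],
                      'Diametro Y': [1.72, 1.80, 1.95, 1.70, 1.78]})

out_of_spec = ((datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |
               (datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d))
violation_rate = out_of_spec.mean()   # fraction of samples outside the band
```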
7,605 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Engineer Nanodegree
Model Evaluation & Validation
Project
Step1: Data Exploration
In this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results.
Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into features and the target variable. The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point. The target variable, 'MEDV', will be the variable we seek to predict. These are stored in features and prices, respectively.
Implementation
Step3: Question 1 - Feature Observation
As a reminder, we are using three features from the Boston housing dataset
Step4: Question 2 - Goodness of Fit
Assume that a dataset contains five data points and a model made the following predictions for the target variable
Step5: Answer
Step6: Question 3 - Training and Testing
What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm?
Hint
Step7: Question 4 - Learning the Data
Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model?
Hint
Step9: Question 5 - Bias-Variance Tradeoff
When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions?
Hint
Step10: Making Predictions
Once a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a decision tree regressor, the model has learned what the best questions to ask about the input data are, and can respond with a prediction for the target variable. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on.
Question 9 - Optimal Model
What maximum depth does the optimal model have? How does this result compare to your guess in Question 6?
Run the code block below to fit the decision tree regressor to the training data and produce an optimal model.
Step11: Answer
Step12: Answer | Python Code:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from sklearn.cross_validation import ShuffleSplit
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the Boston housing dataset
data = pd.read_csv('housing.csv')
prices = data['MEDV']
features = data.drop('MEDV', axis = 1)
# Success
print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape)
Explanation: Machine Learning Engineer Nanodegree
Model Evaluation & Validation
Project: Predicting Boston Housing Prices
Welcome to the first project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.
Getting Started
In this project, you will evaluate the performance and predictive power of a model that has been trained and tested on data collected from homes in suburbs of Boston, Massachusetts. A model trained on this data that is seen as a good fit could then be used to make certain predictions about a home — in particular, its monetary value. This model would prove to be invaluable for someone like a real estate agent who could make use of such information on a daily basis.
The dataset for this project originates from the UCI Machine Learning Repository. The Boston housing data was collected in 1978 and each of the 506 entries represent aggregated data about 14 features for homes from various suburbs in Boston, Massachusetts. For the purposes of this project, the following preprocessing steps have been made to the dataset:
- 16 data points have an 'MEDV' value of 50.0. These data points likely contain missing or censored values and have been removed.
- 1 data point has an 'RM' value of 8.78. This data point can be considered an outlier and has been removed.
- The features 'RM', 'LSTAT', 'PTRATIO', and 'MEDV' are essential. The remaining non-relevant features have been excluded.
- The feature 'MEDV' has been multiplicatively scaled to account for 35 years of market inflation.
Run the code cell below to load the Boston housing dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.
End of explanation
# Minimum price of the data
minimum_price = np.amin(prices)
# Maximum price of the data
maximum_price = np.amax(prices)
# Mean price of the data
mean_price = np.mean(prices)
# Median price of the data
median_price = np.median(prices)
# Standard deviation of prices of the data
std_price = np.std(prices)
# Show the calculated statistics
print "Statistics for Boston housing dataset:\n"
print "Minimum price: ${:,.2f}".format(minimum_price)
print "Maximum price: ${:,.2f}".format(maximum_price)
print "Mean price: ${:,.2f}".format(mean_price)
print "Median price ${:,.2f}".format(median_price)
print "Standard deviation of prices: ${:,.2f}".format(std_price)
Explanation: Data Exploration
In this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results.
Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into features and the target variable. The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point. The target variable, 'MEDV', will be the variable we seek to predict. These are stored in features and prices, respectively.
Implementation: Calculate Statistics
For your very first coding implementation, you will calculate descriptive statistics about the Boston housing prices. Since numpy has already been imported for you, use this library to perform the necessary calculations. These statistics will be extremely important later on to analyze various prediction results from the constructed model.
In the code cell below, you will need to implement the following:
- Calculate the minimum, maximum, mean, median, and standard deviation of 'MEDV', which is stored in prices.
- Store each calculation in their respective variable.
End of explanation
from sklearn.metrics import r2_score
def performance_metric(y_true, y_predict):
Calculates and returns the performance score between
true and predicted values based on the metric chosen.
# TODO: Calculate the performance score between 'y_true' and 'y_predict'
score = r2_score(y_true, y_predict)
# Return the score
return score
Explanation: Question 1 - Feature Observation
As a reminder, we are using three features from the Boston housing dataset: 'RM', 'LSTAT', and 'PTRATIO'. For each data point (neighborhood):
- 'RM' is the average number of rooms among homes in the neighborhood.
- 'LSTAT' is the percentage of homeowners in the neighborhood considered "lower class" (working poor).
- 'PTRATIO' is the ratio of students to teachers in primary and secondary schools in the neighborhood.
Using your intuition, for each of the three features above, do you think that an increase in the value of that feature would lead to an increase in the value of 'MEDV' or a decrease in the value of 'MEDV'? Justify your answer for each.
Hint: Would you expect a home that has an 'RM' value of 6 to be worth more or less than a home that has an 'RM' value of 7?
Answer:
- I think 'MEDV' will increase with an increase in the value of 'RM'; a larger 'RM' means larger homes, which tend to attract wealthier buyers.
- As 'LSTAT' increases, 'MEDV' decreases, for a similar reason: fewer "lower class" homeowners means more "higher class" homeowners in the neighborhood.
- As 'PTRATIO' increases, 'MEDV' decreases: when 'PTRATIO' is low, teachers can pay more attention to each student, and all parents want their children to have a good education.
Developing a Model
In this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions.
Implementation: Define a Performance Metric
It is difficult to measure the quality of a given model without quantifying its performance over training and testing. This is typically done using some type of performance metric, whether it is through calculating some type of error, the goodness of fit, or some other useful measurement. For this project, you will be calculating the coefficient of determination, R<sup>2</sup>, to quantify your model's performance. The coefficient of determination for a model is a useful statistic in regression analysis, as it often describes how "good" that model is at making predictions.
The values for R<sup>2</sup> range from 0 to 1, which captures the percentage of squared correlation between the predicted and actual values of the target variable. A model with an R<sup>2</sup> of 0 is no better than a model that always predicts the mean of the target variable, whereas a model with an R<sup>2</sup> of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable, using this model, can be explained by the features. A model can be given a negative R<sup>2</sup> as well, which indicates that the model is arbitrarily worse than one that always predicts the mean of the target variable.
For the performance_metric function in the code cell below, you will need to implement the following:
- Use r2_score from sklearn.metrics to perform a performance calculation between y_true and y_predict.
- Assign the performance score to the score variable.
End of explanation
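Two properties from this description are easy to check numerically: predicting the mean of the target gives an R² of exactly 0, and a predictor worse than the mean goes negative. A small sketch using the same formula that r2_score implements (the hand-rolled helper is a stand-in for illustration):

```python
import numpy as np

def r2(y_true, y_pred):
    # coefficient of determination: 1 - SS_res / SS_tot,
    # the same formula sklearn's r2_score implements
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

y = np.array([3.0, -0.5, 2.0, 7.0, 4.2])
r2_perfect = r2(y, y)                       # 1.0: exact predictions
r2_mean = r2(y, np.full_like(y, y.mean()))  # 0.0: mean-only baseline
r2_bad = r2(y, y[::-1])                     # negative: worse than the baseline
```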
# Calculate the performance of this model
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print "Model has a coefficient of determination, R^2, of {:.3f}.".format(score)
Explanation: Question 2 - Goodness of Fit
Assume that a dataset contains five data points and a model made the following predictions for the target variable:
| True Value | Prediction |
| :-------------: | :--------: |
| 3.0 | 2.5 |
| -0.5 | 0.0 |
| 2.0 | 2.1 |
| 7.0 | 7.8 |
| 4.2 | 5.3 |
Would you consider this model to have successfully captured the variation of the target variable? Why or why not?
Run the code cell below to use the performance_metric function and calculate this model's coefficient of determination.
End of explanation
from sklearn.model_selection import train_test_split
# Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.2, random_state=2)
# Success
print "Training and testing split was successful."
Explanation: Answer:
I think this model has successfully captured the variation of the target variable.
Reason:
The best possible score is 1.0, and the result of 0.923 is close to it.
Implementation: Shuffle and Split Data
Your next implementation requires that you take the Boston housing dataset and split the data into training and testing subsets. Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of the dataset.
For the code cell below, you will need to implement the following:
- Use train_test_split from sklearn.model_selection to shuffle and split the features and prices data into training and testing sets.
- Split the data into 80% training and 20% testing.
- Set the random_state for train_test_split to a value of your choice. This ensures results are consistent.
- Assign the train and testing splits to X_train, X_test, y_train, and y_test.
End of explanation
# Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices)
Explanation: Question 3 - Training and Testing
What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm?
Hint: What could go wrong with not having a way to test your model?
Answer:
If we don't split the data into training and testing subsets, it is like giving the answers to students who are taking an exam: of course they can get a high score, but it tells us nothing about whether the model predicts unseen data well.
Analyzing Model Performance
In this third section of the project, you'll take a look at several models' learning and testing performances on various subsets of training data. Additionally, you'll investigate one particular algorithm with an increasing 'max_depth' parameter on the full training set to observe how model complexity affects performance. Graphing your model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone.
Learning Curves
The following code cell produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and testing as the size of the training set is increased. Note that the shaded region of a learning curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing sets using R<sup>2</sup>, the coefficient of determination.
Run the code cell below and use these graphs to answer the following question.
End of explanation
vs.ModelComplexity(X_train, y_train)
Explanation: Question 4 - Learning the Data
Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model?
Hint: Are the learning curves converging to particular scores?
Answer:
I choose the second graph, max_depth=3.
1. What happens to the score of the training curve as more training points are added?
As more training points are added, the score decreases quickly at first, but the rate of decrease slows and the score tends toward a constant.
2. What about the testing curve?
The score increases quickly at first, but the rate of increase slows and the score also tends toward a constant.
3. Would having more training points benefit the model?
I don't think so; both curves have already converged.
Complexity Curves
The following code cell produces a graph for a decision tree model that has been trained and validated on the training data using different maximum depths. The graph produces two complexity curves — one for training and one for validation. Similar to the learning curves, the shaded regions of both the complexity curves denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the performance_metric function.
Run the code cell below and use this graph to answer the following two questions.
End of explanation
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer
from sklearn.grid_search import GridSearchCV
from sklearn.cross_validation import ShuffleSplit  # needed for the cv_sets below
def fit_model(X, y):
    '''Performs grid search over the 'max_depth' parameter for a
    decision tree regressor trained on the input data [X, y].'''
# Create cross-validation sets from the training data
cv_sets = ShuffleSplit(X.shape[0], n_iter = 10, test_size = 0.20, random_state = 0)
regressor = DecisionTreeRegressor()
# Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
params = {'max_depth': range(1, 11, 1)}
print params
# TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
scoring_fnc = make_scorer(performance_metric)
# Create the grid search object
grid = GridSearchCV(regressor, params, scoring_fnc, cv=cv_sets)
# Fit the grid search object to the data to compute the optimal model
grid = grid.fit(X, y)
# Return the optimal model after fitting the data
return grid.best_estimator_
Explanation: Question 5 - Bias-Variance Tradeoff
When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions?
Hint: How do you know when a model is suffering from high bias or high variance?
Answer:
1. max_depth=1: high bias
2. max_depth=10: high variance
3. When max_depth=1 the training score is smaller than 0.5, so the model must have high bias (it underfits even the training data); when max_depth=10 the training score is about 1.0 while the validation score is much lower, which means the model overfits (high variance).
Question 6 - Best-Guess Optimal Model
Which maximum depth do you think results in a model that best generalizes to unseen data? What intuition led you to this answer?
Answer:
max_depth=4 is the best,
because it has the largest validation score.
Evaluating Model Performance
In this final section of the project, you will construct a model and make a prediction on the client's feature set using an optimized model from fit_model.
Question 7 - Grid Search
What is the grid search technique and how can it be applied to optimize a learning algorithm?
Answer:
1. Grid search is an exhaustive search over specified combinations of parameter values for an estimator.
2. It can automatically run cross-validation for each of those parameter combinations, keep track of the resulting scores, and return the combination that scored best.
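To make the technique concrete, here is a minimal, dependency-free sketch of an exhaustive grid search (the toy scoring function and parameter grid below are invented for illustration; the project itself uses sklearn's GridSearchCV):

```python
from itertools import product

def grid_search(param_grid, score_fn):
    # Try every combination of parameter values and keep the best scorer.
    names = sorted(param_grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy scoring function: pretends max_depth=4 with criterion 'mse' is optimal.
def toy_score(params):
    return -abs(params['max_depth'] - 4) + (1 if params['criterion'] == 'mse' else 0)

best, score = grid_search({'max_depth': list(range(1, 11)),
                           'criterion': ['mse', 'mae']}, toy_score)
```

GridSearchCV does the same enumeration, but scores each candidate with cross-validation instead of a fixed scoring function.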
Question 8 - Cross-Validation
What is the k-fold cross-validation training technique? What benefit does this technique provide for grid search when optimizing a model?
Hint: Much like the reasoning behind having a testing set, what could go wrong with using grid search without a cross-validated set?
Answer:
The data is split into k smaller sets (folds).
Step 1: choose one of the k folds as the validation data, and the other k-1 folds as the training data.
Step 2: train the model and record its score.
Step 3: repeat Steps 1 and 2 until every fold has been used as the validation data once.
Step 4: take the mean of the k scores.
If we don't use k-fold, there are two main problems:
The first is that the results can depend on a particular random choice for the pair of (train, validation) sets.
The second is that we reduce the number of samples that can be used for learning the model.
Luckily, k-fold cross-validation fixes both problems, which makes grid search's comparison between parameter settings more reliable.
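The steps above can be sketched in pure Python — a minimal k-fold splitter (the contiguous-chunk split is an illustrative simplification; sklearn's KFold can also shuffle first):

```python
def k_fold_indices(n_samples, k):
    # Split indices into k folds; each fold is the validation set exactly once.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    for i in range(k):
        validation = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        yield train, validation

splits = list(k_fold_indices(10, 5))
```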
Implementation: Fitting a Model
Your final implementation requires that you bring everything together and train a model using the decision tree algorithm. To ensure that you are producing an optimized model, you will train the model using the grid search technique to optimize the 'max_depth' parameter for the decision tree. The 'max_depth' parameter can be thought of as how many questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are part of a class of algorithms called supervised learning algorithms.
In addition, you will find your implementation is using ShuffleSplit() for an alternative form of cross-validation (see the 'cv_sets' variable). While it is not the K-Fold cross-validation technique you describe in Question 8, this type of cross-validation technique is just as useful! The ShuffleSplit() implementation below will create 10 ('n_iter') shuffled sets, and for each shuffle, 20% ('test_size') of the data will be used as the validation set. While you're working on your implementation, think about the contrasts and similarities it has to the K-fold cross-validation technique.
For the fit_model function in the code cell below, you will need to implement the following:
- Use DecisionTreeRegressor from sklearn.tree to create a decision tree regressor object.
- Assign this object to the 'regressor' variable.
- Create a dictionary for 'max_depth' with the values from 1 to 10, and assign this to the 'params' variable.
- Use make_scorer from sklearn.metrics to create a scoring function object.
- Pass the performance_metric function as a parameter to the object.
- Assign this scoring function to the 'scoring_fnc' variable.
- Use GridSearchCV from sklearn.grid_search to create a grid search object.
- Pass the variables 'regressor', 'params', 'scoring_fnc', and 'cv_sets' as parameters to the object.
- Assign the GridSearchCV object to the 'grid' variable.
End of explanation
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
# Produce the value for 'max_depth'
print "Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth'])
Explanation: Making Predictions
Once a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a decision tree regressor, the model has learned what the best questions to ask about the input data are, and can respond with a prediction for the target variable. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on.
Question 9 - Optimal Model
What maximum depth does the optimal model have? How does this result compare to your guess in Question 6?
Run the code block below to fit the decision tree regressor to the training data and produce an optimal model.
End of explanation
# Produce a matrix for client data
client_data = [[5, 17, 15], # Client 1
[4, 32, 22], # Client 2
[8, 3, 12]] # Client 3
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
print "Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price)
Explanation: Answer:
Just like my guess in Question 6, max_depth=4 has the best performance.
Question 10 - Predicting Selling Prices
Imagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients that they wish to sell. You have collected the following information from three of your clients:
| Feature | Client 1 | Client 2 | Client 3 |
| :---: | :---: | :---: | :---: |
| Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms |
| Neighborhood poverty level (as %) | 17% | 32% | 3% |
| Student-teacher ratio of nearby schools | 15-to-1 | 22-to-1 | 12-to-1 |
What price would you recommend each client sell his/her home at? Do these prices seem reasonable given the values for the respective features?
Hint: Use the statistics you calculated in the Data Exploration section to help justify your response.
Run the code block below to have your optimized model make predictions for each client's home.
End of explanation
vs.PredictTrials(features, prices, fit_model, client_data)
Explanation: Answer:
- Client 1: $415,800.00
- Client 2: $236,478.26
- Client 3: $888,720.00
Yes, these prices seem reasonable.
Reason:
In Question 1 I explained the relationship between the features and the prices.
According to the data:
RM: Client3 > Client1 > Client2 (more rooms, higher price)
LSTAT: Client3 < Client1 < Client2 (lower poverty level, higher price)
PTRATIO: Client3 < Client1 < Client2 (lower student-teacher ratio, higher price)
So Client 3's home should have the highest price, and Client 2's home the lowest.
Sensitivity
An optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to allow a model to adequately capture the target variable — i.e., the model is underfitted. Run the code cell below to run the fit_model function ten times with different training and testing sets to see how the prediction for a specific client changes with the data it's trained on.
End of explanation |
7,606 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step7: Chapter 4 Linear Algebra
Vectors
Step9: Matrices | Python Code:
height_weight_age = [70,  # inches,
                     170, # pounds,
                     40]  # years
grades = [95, # exam1,
80, # exam2,
75, # exam3,
62] # exam4
def vector_add(v, w):
'''adds corresponding elements'''
return [v_i + w_i
for v_i, w_i in zip(v, w)]
def vector_substract(v, w):
    '''subtracts corresponding elements'''
return [v_i - w_i
for v_i, w_i in zip(v, w)]
def vector_sum(vectors):
    '''sums all corresponding elements'''
result = vectors[0] # start with the first vector
for vector in vectors[1: ]: # then loop over the others
result = vector_add(result, vector) # and add them to the result
return result
def vector_sum2(vectors):
return reduce(vector_add, vectors)
def scalar_multiply(c, v):
    '''c is a number and v is a vector'''
return [c * v_i for v_i in v]
def vector_mean(vectors):
    '''compute the vector whose ith element is the mean
    of the ith elements of the input vectors'''
    n = len(vectors)
    return scalar_multiply(1.0/n, vector_sum(vectors))  # 1.0/n avoids Python 2 integer division
def dot(v, w):
    '''v_1 * w_1 + v_2 * w_2 + ... + v_n * w_n'''
return sum(v_i * w_i
for v_i, w_i in zip(v, w))
def sum_of_squares(v):
    '''v_1 * v_1 + v_2 * v_2 + v_3 * v_3 + ... + v_n * v_n'''
return dot(v, v)
import math
def magnitude(v):
return math.sqrt(sum_of_squares(v))
def squared_distance(v, w):
    '''(v_1 - w_1)^2 + (v_2 - w_2)^2 + ... + (v_n - w_n)^2'''
    return sum_of_squares(vector_substract(v, w))
def distance1(v, w):
return math.sqrt(squared_distance(v, w))
def distance2(v, w):
    return magnitude(vector_substract(v, w))
Explanation: Chapter 4 Linear Algebra
Vectors
End of explanation
A = [[1, 2, 3], # A has 2 rows and 3 columns
[4, 5, 6]]
B = [[1, 2], # B has 3 rows and 2 columns
[3, 4],
[5, 6]]
def shape(A):
num_rows = len(A)
num_cols = len(A[0]) if A else 0
return num_rows, num_cols
def get_row(A, i):
return A[i]
def get_column(A, j):
return [A_i[j] for A_i in A]
def make_matrix(num_rows, num_cols, entry_fn):
    '''returns a num_rows x num_cols matrix
    whose (i, j)th entry is generated by function entry_fn(i, j)'''
return [[entry_fn(i, j)
for j in range(num_cols)]
for i in range(num_rows)]
def is_diagonal(i, j):
return 1 if i == j else 0
identity_matrix = make_matrix(5, 5, is_diagonal)
identity_matrix
Explanation: Matrices
End of explanation |
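A quick sanity check of the vector and matrix helpers above (they are re-declared inline here so the cell runs standalone):

```python
def vector_add(v, w):
    return [v_i + w_i for v_i, w_i in zip(v, w)]

def dot(v, w):
    return sum(v_i * w_i for v_i, w_i in zip(v, w))

def make_matrix(num_rows, num_cols, entry_fn):
    return [[entry_fn(i, j) for j in range(num_cols)] for i in range(num_rows)]

v, w = [1, 2, 3], [4, 5, 6]
s = vector_add(v, w)   # elementwise sum
d = dot(v, w)          # 1*4 + 2*5 + 3*6
eye = make_matrix(3, 3, lambda i, j: 1 if i == j else 0)  # identity matrix
```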
7,607 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generate a functional label from source estimates
Threshold source estimates and produce a functional label. The label
is typically the region of interest that contains high values.
Here we compare the average time course in the anatomical label obtained
by FreeSurfer segmentation and the average time course from the
functional label. As expected the time course in the functional
label yields higher values.
Step1: plot the time courses....
Step2: plot brain in 3D with PySurfer if available | Python Code:
# Author: Luke Bloy <luke.bloy@gmail.com>
# Alex Gramfort <alexandre.gramfort@inria.fr>
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.minimum_norm import read_inverse_operator, apply_inverse
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
subjects_dir = data_path + '/subjects'
subject = 'sample'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
# Compute a label/ROI based on the peak power between 80 and 120 ms.
# The label bankssts-lh is used for the comparison.
aparc_label_name = 'bankssts-lh'
tmin, tmax = 0.080, 0.120
# Load data
evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0))
inverse_operator = read_inverse_operator(fname_inv)
src = inverse_operator['src'] # get the source space
# Compute inverse solution
stc = apply_inverse(evoked, inverse_operator, lambda2, method,
pick_ori='normal')
# Make an STC in the time interval of interest and take the mean
stc_mean = stc.copy().crop(tmin, tmax).mean()
# use the stc_mean to generate a functional label
# region growing is halted at 60% of the peak value within the
# anatomical label / ROI specified by aparc_label_name
label = mne.read_labels_from_annot(subject, parc='aparc',
subjects_dir=subjects_dir,
regexp=aparc_label_name)[0]
stc_mean_label = stc_mean.in_label(label)
data = np.abs(stc_mean_label.data)
stc_mean_label.data[data < 0.6 * np.max(data)] = 0.
# 8.5% of original source space vertices were omitted during forward
# calculation, suppress the warning here with verbose='error'
func_labels, _ = mne.stc_to_label(stc_mean_label, src=src, smooth=True,
subjects_dir=subjects_dir, connected=True,
verbose='error')
# take first as func_labels are ordered based on maximum values in stc
func_label = func_labels[0]
# load the anatomical ROI for comparison
anat_label = mne.read_labels_from_annot(subject, parc='aparc',
subjects_dir=subjects_dir,
regexp=aparc_label_name)[0]
# extract the anatomical time course for each label
stc_anat_label = stc.in_label(anat_label)
pca_anat = stc.extract_label_time_course(anat_label, src, mode='pca_flip')[0]
stc_func_label = stc.in_label(func_label)
pca_func = stc.extract_label_time_course(func_label, src, mode='pca_flip')[0]
# flip the pca so that the max power between tmin and tmax is positive
pca_anat *= np.sign(pca_anat[np.argmax(np.abs(pca_anat))])
pca_func *= np.sign(pca_func[np.argmax(np.abs(pca_anat))])
Explanation: Generate a functional label from source estimates
Threshold source estimates and produce a functional label. The label
is typically the region of interest that contains high values.
Here we compare the average time course in the anatomical label obtained
by FreeSurfer segmentation and the average time course from the
functional label. As expected the time course in the functional
label yields higher values.
End of explanation
plt.figure()
plt.plot(1e3 * stc_anat_label.times, pca_anat, 'k',
label='Anatomical %s' % aparc_label_name)
plt.plot(1e3 * stc_func_label.times, pca_func, 'b',
label='Functional %s' % aparc_label_name)
plt.legend()
plt.show()
Explanation: plot the time courses....
End of explanation
brain = stc_mean.plot(hemi='lh', subjects_dir=subjects_dir)
brain.show_view('lateral')
# show both labels
brain.add_label(anat_label, borders=True, color='k')
brain.add_label(func_label, borders=True, color='b')
Explanation: plot brain in 3D with PySurfer if available
End of explanation |
7,608 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Test environment setup
Step1: Create a new RTA workload generator object
The wlgen
Step2: Workload Generation Examples
Single periodic task
An RTApp workload is defined by specifying a kind, which represents the way
we want to defined the behavior of each task.<br>
The most common kind is profile, which allows to define each task using one
of the predefined profile supported by the RTA base class.<br>
<br>
The following example shows how to generate a "periodic" task<br>
Step3: The output of the previous cell reports the main properties of the generated
tasks. Thus for example we see that the first task is configure to be
Step4: Workload mix
Using the wlgen
Step5: Workload composition | Python Code:
# Let's use the local host as a target
te = TestEnv(
target_conf={
"platform": 'host',
"username": 'put_here_your_username'
})
Explanation: Test environment setup
End of explanation
# Create a new RTApp workload generator
rtapp = RTA(
target=te.target, # Target execution on the local machine
name='example', # This is the name of the JSON configuration file reporting
# the generated RTApp configuration
calibration={0: 10, 1: 11, 2: 12, 3: 13} # These are a set of fake
# calibration values
)
Explanation: Create a new RTA workload generator object
The wlgen::RTA class is a workload generator which exposes an API to configure
RTApp based workload as well as to execute them on a target.
End of explanation
# Configure this RTApp instance to:
rtapp.conf(
# 1. generate a "profile based" set of tasks
kind='profile',
# 2. define the "profile" of each task
params={
# 3. PERIODIC task
#
# This class defines a task which load is periodic with a configured
# period and duty-cycle.
#
# This class is a specialization of the 'pulse' class since a periodic
# load is generated as a sequence of pulse loads.
#
# Args:
        #   duty_cycle_pct (int, [0-100]): the pulses load [%]
# default: 50[%]
# duration_s (float): the duration in [s] of the entire workload
# default: 1.0[s]
# period_ms (float): the period used to define the load in [ms]
# default: 100.0[ms]
# delay_s (float): the delay in [s] before ramp start
# default: 0[s]
# sched (dict): the scheduler configuration for this task
'task_per20': Periodic(
period_ms=100, # period
duty_cycle_pct=20, # duty cycle
duration_s=5, # duration
cpus=None, # run on all CPUS
sched={
"policy": "FIFO", # Run this task as a SCHED_FIFO task
},
delay_s=0 # start at the start of RTApp
).get(),
},
# 4. use this folder for task logfiles
run_dir='/tmp'
);
Explanation: Workload Generation Examples
Single periodic task
An RTApp workload is defined by specifying a kind, which represents the way
we want to define the behavior of each task.<br>
The most common kind is profile, which allows us to define each task using one
of the predefined profiles supported by the RTA base class.<br>
<br>
The following example shows how to generate a "periodic" task<br>
End of explanation
# Dump the configured JSON file for that task
with open("./example_00.json") as fh:
rtapp_config = json.load(fh)
print json.dumps(rtapp_config, indent=4)
Explanation: The output of the previous cell reports the main properties of the generated
tasks. Thus, for example, we see that the first task is configured to be:
1. named task_per20
2. will be executed as a SCHED_FIFO task
3. generating a load which is calibrated with respect to CPU 0
4. with one single "phase" which defines a periodic load for the duration of 5[s]
5. that periodic load consists of 50 cycles
6. each cycle has a period of 100[ms] and a duty-cycle of 20%
7. which means that the task, for every cycle, will run for 20[ms] and then sleep for 80[ms]
All these properties are translated into a JSON configuration file for RTApp.<br>
Let's see what the generated configuration file looks like:
End of explanation
# Configure this RTApp instance to:
rtapp.conf(
# 1. generate a "profile based" set of tasks
kind='profile',
# 2. define the "profile" of each task
params={
# 3. RAMP task
#
# This class defines a task which load is a ramp with a configured number
# of steps according to the input parameters.
#
# Args:
# start_pct (int, [0-100]): the initial load [%], (default 0[%])
# end_pct (int, [0-100]): the final load [%], (default 100[%])
# delta_pct (int, [0-100]): the load increase/decrease [%],
# default: 10[%]
# increase if start_prc < end_prc
# decrease if start_prc > end_prc
# time_s (float): the duration in [s] of each load step
# default: 1.0[s]
# period_ms (float): the period used to define the load in [ms]
# default: 100.0[ms]
# delay_s (float): the delay in [s] before ramp start
# default: 0[s]
# loops (int): number of time to repeat the ramp, with the
# specified delay in between
# default: 0
# sched (dict): the scheduler configuration for this task
# cpus (list): the list of CPUs on which task can run
'task_rmp20_5-60': Ramp(
period_ms=100, # period
            start_pct=5,    # initial load
end_pct=65, # end load
delta_pct=20, # load % increase...
time_s=1, # ... every 1[s]
cpus="0" # run just on first CPU
).get(),
# 4. STEP task
#
# This class defines a task which load is a step with a configured
# initial and final load.
#
# Args:
# start_pct (int, [0-100]): the initial load [%]
# default 0[%])
# end_pct (int, [0-100]): the final load [%]
# default 100[%]
# time_s (float): the duration in [s] of the start and end load
# default: 1.0[s]
# period_ms (float): the period used to define the load in [ms]
# default 100.0[ms]
# delay_s (float): the delay in [s] before ramp start
# default 0[s]
# loops (int): number of time to repeat the ramp, with the
# specified delay in between
# default: 0
# sched (dict): the scheduler configuration for this task
# cpus (list): the list of CPUs on which task can run
'task_stp10-50': Step(
period_ms=100, # period
            start_pct=0,    # initial load
end_pct=50, # end load
time_s=1, # ... every 1[s]
delay_s=0.5 # start .5[s] after the start of RTApp
).get(),
# 5. PULSE task
#
# This class defines a task which load is a pulse with a configured
# initial and final load.
#
# The main difference with the 'step' class is that a pulse workload is
# by definition a 'step down', i.e. the workload switch from an finial
# load to a final one which is always lower than the initial one.
# Moreover, a pulse load does not generate a sleep phase in case of 0[%]
# load, i.e. the task ends as soon as the non null initial load has
# completed.
#
# Args:
# start_pct (int, [0-100]): the initial load [%]
# default: 0[%]
# end_pct (int, [0-100]): the final load [%]
# default: 100[%]
# NOTE: must be lower than start_pct value
# time_s (float): the duration in [s] of the start and end load
# default: 1.0[s]
# NOTE: if end_pct is 0, the task end after the
# start_pct period completed
# period_ms (float): the period used to define the load in [ms]
# default: 100.0[ms]
# delay_s (float): the delay in [s] before ramp start
# default: 0[s]
# loops (int): number of time to repeat the ramp, with the
# specified delay in between
# default: 0
# sched (dict): the scheduler configuration for this task
# cpus (list): the list of CPUs on which task can run
'task_pls5-80': Pulse(
period_ms=100, # period
            start_pct=65,   # initial load
end_pct=5, # end load
time_s=1, # ... every 1[s]
delay_s=0.5 # start .5[s] after the start of RTApp
).get(),
},
# 6. use this folder for task logfiles
run_dir='/tmp'
);
# Dump the configured JSON file for that task
with open("./example_00.json") as fh:
rtapp_config = json.load(fh)
print json.dumps(rtapp_config, indent=4)
Explanation: Workload mix
Using the wlgen::RTA workload generator we can easily create multiple tasks, each one with different "profiles", which are executed once the rtapp application is started in the target.<br>
<br>
In the following example we configure a workload mix composed of a RAMP task, a STEP task and a PULSE task:
End of explanation
# Initial phase and pinning parameters
ramp = Ramp(period_ms=100, start_pct=5, end_pct=65, delta_pct=20, time_s=1,
cpus="0")
# Following phases
medium_slow = Periodic(duty_cycle_pct=10, duration_s=5, period_ms=100)
high_fast = Periodic(duty_cycle_pct=60, duration_s=5, period_ms=10)
medium_fast = Periodic(duty_cycle_pct=10, duration_s=5, period_ms=1)
high_slow = Periodic(duty_cycle_pct=60, duration_s=5, period_ms=100)
#Compose the task
complex_task = ramp + medium_slow + high_fast + medium_fast + high_slow
# Configure this RTApp instance to:
rtapp.conf(
# 1. generate a "profile based" set of tasks
kind='profile',
# 2. define the "profile" of each task
params={
'complex' : complex_task.get()
},
# 6. use this folder for task logfiles
run_dir='/tmp'
)
Explanation: Workload composition
End of explanation |
7,609 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This code generates a Fourier transform of a specified shape as outlined in Timmer and Koenig 1995 (A&A vol 300 p 707-710), and plots it and its corresponding time series. The user can specify a mean count rate for the light curve, fractional rms^2 variance for the power spectrum, and 'Poissonify' the light curve.
by Abigail Stevens, A.L.Stevens at uva.nl
Step1: Define a function to make a pulsation light curve
Step3: Define a function to re-bin a power spectrum in frequency by a specified constant > 1.
Step4: Define functions to make different power spectral shapes
Step5: Defining functions for applying an inverse fractional rms^2 and inverse Leahy normalization to the noise psd shape
Step6: Define some basics
Step7: Defining the psd shape of the noise
Step8: Generating a noise process with the specific shape noise_psd_shape
Step9: Plotting the noise process
Step10: Generating a pulsation using the above function 'make_pulsation'
Step11: Plotting just the pulsation
Step12: Summing together the pulsation and the noise (where the noise has a power law, QPO, etc.)
Step13: Plotting the periodic pulsation + noise process
Step14: Playing around with other stuff | Python Code:
import numpy as np
from scipy import fftpack
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator
import matplotlib.font_manager as font_manager
import itertools
## Shows the plots inline, instead of in a separate window:
%matplotlib inline
## Sets the font size for plotting
font_prop = font_manager.FontProperties(size=18)
def power_of_two(num):
## Checks if an input is a power of 2 (1 <= num < 2147483648).
n = int(num)
x = 2
assert n > 0, "ERROR: Number must be positive."
if n == 1:
return True
else:
while x < n and x < 2147483648:
x *= 2
return n == x
Explanation: This code generates a Fourier transform of a specified shape as outlined in Timmer and Koenig 1995 (A&A vol 300 p 707-710), and plots it and its corresponding time series. The user can specify a mean count rate for the light curve, fractional rms^2 variance for the power spectrum, and 'Poissonify' the light curve.
by Abigail Stevens, A.L.Stevens at uva.nl
End of explanation
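One structural point worth making explicit: for the inverse transform to produce a real-valued light curve, the negative-frequency half of the Fourier transform must be the flipped complex conjugate of the positive half (this is why, later in this notebook, FT_neg is built by conjugating and reversing FT_pos). A stdlib-only sketch with a hand-rolled inverse DFT illustrates the requirement (toy values, no physical meaning):

```python
import cmath
import random

random.seed(0)
N = 8  # number of time bins (even)

X = [0j] * N
X[0] = complex(random.gauss(0, 1), 0)       # DC bin must be purely real
for k in range(1, N // 2):
    X[k] = complex(random.gauss(0, 1), random.gauss(0, 1))
X[N // 2] = complex(random.gauss(0, 1), 0)  # Nyquist bin must be purely real

# Hermitian symmetry: negative frequencies are conjugates of positive ones.
for k in range(1, N // 2):
    X[N - k] = X[k].conjugate()

# Hand-rolled inverse DFT; the imaginary parts should vanish.
x = [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
     for n in range(N)]
max_imag = max(abs(xn.imag) for xn in x)
```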
def make_pulsation(n_bins, dt, freq, amp, mean, phase):
binning = 10
period = 1.0 / freq # in seconds
bins_per_period = period / dt
tiny_bins = np.arange(0, n_bins, 1.0/binning)
smooth_sine = amp * np.sin(2.0 * np.pi * tiny_bins / bins_per_period + phase) + mean
time_series = np.mean(np.array_split(smooth_sine, n_bins), axis=1)
return time_series
Explanation: Define a function to make a pulsation light curve
End of explanation
def geometric_rebinning(freq, power, rebin_const):
    '''Re-bins the power spectrum in frequency space by some
    re-binning constant (rebin_const > 1).'''
## Initializing variables
rb_power = np.asarray([]) # List of re-binned power
rb_freq = np.asarray([]) # List of re-binned frequencies
real_index = 1.0 # The unrounded next index in power
int_index = 1 # The int of real_index, added to current_m every iteration
current_m = 1 # Current index in power
prev_m = 0 # Previous index m
bin_power = 0.0 # The power of the current re-binned bin
bin_freq = 0.0 # The frequency of the current re-binned bin
bin_range = 0.0 # The range of un-binned bins covered by this re-binned bin
freq_min = np.asarray([])
freq_max = np.asarray([])
## Looping through the length of the array power, geometric bin by geometric bin,
## to compute the average power and frequency of that geometric bin.
## Equations for frequency, power, and error are from Adam Ingram's PhD thesis.
while current_m < len(power):
# while current_m < 100: # used for debugging
## Initializing clean variables for each iteration of the while-loop
bin_power = 0.0 # the averaged power at each index of rb_power
bin_range = 0.0
bin_freq = 0.0
## Determining the range of indices this specific geometric bin covers
bin_range = np.absolute(current_m - prev_m)
## Want mean of data points contained within one geometric bin
bin_power = np.mean(power[prev_m:current_m])
## Computing the mean frequency of a geometric bin
bin_freq = np.mean(freq[prev_m:current_m])
## Appending values to arrays
rb_power = np.append(rb_power, bin_power)
rb_freq = np.append(rb_freq, bin_freq)
freq_min = np.append(freq_min, freq[prev_m])
freq_max = np.append(freq_max, freq[current_m])
## Incrementing for the next iteration of the loop
## Since the for-loop goes from prev_m to current_m-1 (since that's how
## the range function and array slicing works) it's ok that we set
## prev_m = current_m here for the next round. This will not cause any
## double-counting bins or skipping bins.
prev_m = current_m
real_index *= rebin_const
int_index = int(round(real_index))
current_m += int_index
bin_range = None
bin_freq = None
bin_power = None
## End of while-loop
return rb_freq, rb_power, freq_min, freq_max
## End of function 'geometric_rebinning'
Explanation: Define a function to re-bin a power spectrum in frequency by a specified constant > 1.
End of explanation
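The index bookkeeping in the loop above amounts to bin widths that grow geometrically. A stdlib-only sketch of just the bin-edge computation (a hypothetical helper with a toy constant), mirroring the real_index *= rebin_const; current_m += int(round(real_index)) pattern:

```python
def geometric_bin_edges(n, rebin_const):
    # Bin-edge indices whose spacing grows by roughly rebin_const per bin.
    edges = [0, 1]
    real_index = 1.0
    while edges[-1] < n:
        real_index *= rebin_const
        edges.append(min(edges[-1] + int(round(real_index)), n))
    return edges

edges = geometric_bin_edges(100, 1.5)
```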
def powerlaw(w, beta):
## Gives a powerlaw of (1/w)^beta
pl = np.zeros(len(w))
pl[1:] = w[1:] ** (beta)
pl[0] = np.inf
return pl
def lorentzian(w, w_0, gamma):
## Gives a Lorentzian centered on w_0 with a FWHM of gamma
numerator = gamma / (np.pi * 2.0)
denominator = (w - w_0) ** 2 + (1.0/2.0 * gamma) ** 2
L = numerator / denominator
return L
def gaussian(w, mean, std_dev):
## Gives a Gaussian with a mean of mean and a standard deviation of std_dev
## FWHM = 2 * np.sqrt(2 * np.log(2))*std_dev
exp_numerator = -(w - mean)**2
exp_denominator = 2 * std_dev**2
G = np.exp(exp_numerator / exp_denominator)
return G
def powerlaw_expdecay(w, beta, alpha):
pl_exp = np.where(w != 0, (1.0 / w) ** beta * np.exp(-alpha * w), np.inf)
return pl_exp
def broken_powerlaw(w, w_b, beta_1, beta_2):
c = w_b ** (-beta_1 + beta_2) ## scale factor so that they're equal at the break frequency
pl_1 = w[np.where(w <= w_b)] ** (-beta_1)
pl_2 = c * w[np.where(w > w_b)] ** (-beta_2)
pl = np.append(pl_1, pl_2)
return pl
Explanation: Define functions to make different power spectral shapes: power law, Lorentzian, Gaussian, power law with exponential decay, broken power law
End of explanation
def inv_frac_rms2_norm(amplitudes, dt, n_bins, mean_rate):
# rms2_power = 2.0 * power * dt / float(n_bins) / (mean_rate ** 2)
inv_rms2 = amplitudes * n_bins * (mean_rate ** 2) / 2.0 / dt
return inv_rms2
def inv_leahy_norm(amplitudes, dt, n_bins, mean_rate):
# leahy_power = 2.0 * power * dt / float(n_bins) / mean_rate
inv_leahy = amplitudes * n_bins * mean_rate / 2.0 / dt
return inv_leahy
Explanation: Defining functions for applying an inverse fractional rms^2 and inverse Leahy normalization to the noise psd shape
End of explanation
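As a quick consistency check, each inverse function should undo the forward normalization quoted in the comments above. A stdlib-only round trip with plain floats (toy numbers):

```python
def frac_rms2_norm(power, dt, n_bins, mean_rate):
    # Forward fractional rms^2 normalization: 2 * P * dt / n_bins / rate^2
    return 2.0 * power * dt / float(n_bins) / (mean_rate ** 2)

def inv_frac_rms2_norm(amplitude, dt, n_bins, mean_rate):
    # Inverse: amplitude * n_bins * rate^2 / 2 / dt
    return amplitude * n_bins * (mean_rate ** 2) / 2.0 / dt

dt, n_bins, mean_rate = 64.0 / 8192.0, 8192, 500.0
raw_power = 1234.5
roundtrip = inv_frac_rms2_norm(frac_rms2_norm(raw_power, dt, n_bins, mean_rate),
                               dt, n_bins, mean_rate)
```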
n_bins = 8192
# n_bins = 64
dt = 64.0 / 8192.0
print "dt = %.15f" % dt
df = 1.0 / dt / n_bins
# print df
assert power_of_two(n_bins), "ERROR: N_bins must be a power of 2 and an even integer."
## Making an array of Fourier frequencies
frequencies = np.arange(float(-n_bins/2)+1, float(n_bins/2)+1) * df
pos_freq = frequencies[np.where(frequencies >= 0)]
## positive should have 2 more than negative, because of the 0 freq and the nyquist freq
neg_freq = frequencies[np.where(frequencies < 0)]
nyquist = pos_freq[-1]
Explanation: Define some basics: number of bins per segment, timestep between bins, and making the fourier frequencies.
End of explanation
noise_psd_variance = 0.007 ## in fractional rms^2 units
# noise_mean_rate = 1000.0 ## in count rate units
noise_mean_rate = 500
beta = -1.0 ## Slope of power law (include negative here if needed)
## For a Lorentzian QPO
# w_0 = 5.46710256 ## Centroid frequency of QPO
# fwhm = 0.80653875 ## FWHM of QPO
## For a Gaussian QPO
w_0 = 5.4
# g_stddev = 0.473032436922
# fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * g_stddev
fwhm = 0.9
pl_scale = 0.08 ## relative scale factor
qpo_scale = 1.0 ## relative scale factor
Q = w_0 / fwhm ## For QPOs, Q factor is w_0 / gamma
print("Q =", Q)
# noise_psd_shape = noise_psd_variance * powerlaw(pos_freq, beta)
noise_psd_shape = noise_psd_variance * (qpo_scale * lorentzian(pos_freq, w_0, fwhm) + pl_scale * powerlaw(pos_freq, beta))
# noise_psd_shape = noise_psd_variance * (qpo_scale * gaussian(pos_freq, w_0, g_stddev) + pl_scale * powerlaw(pos_freq, beta))
# noise_psd_shape = lorentzian(pos_freq, w_1, gamma_1) + lorentzian(pos_freq, w_2, gamma_2) + powerlaw(pos_freq, beta)
# noise_psd_shape = lorentzian(pos_freq, w_1, gamma_1) + lorentzian(pos_freq, w_2, gamma_2)
noise_psd_shape = inv_frac_rms2_norm(noise_psd_shape, dt, n_bins, noise_mean_rate)
Explanation: Defining the psd shape of the noise
End of explanation
rand_r = np.random.standard_normal(len(pos_freq))
rand_i = np.random.standard_normal(len(pos_freq)-1)
rand_i = np.append(rand_i, 0.0) # because the nyquist frequency should only have a real value
## Creating the real and imaginary values from the lists of random numbers and the frequencies
r_values = rand_r * np.sqrt(0.5 * noise_psd_shape)
i_values = rand_i * np.sqrt(0.5 * noise_psd_shape)
r_values[np.where(pos_freq == 0)] = 0
i_values[np.where(pos_freq == 0)] = 0
## Combining to make the Fourier transform
FT_pos = r_values + i_values*1j
FT_neg = np.conj(FT_pos[1:-1])
FT_neg = FT_neg[::-1] ## Need to flip direction of the negative frequency FT values so that they match up correctly
FT = np.append(FT_pos, FT_neg)
## Making the light curve from the Fourier transform and Poissonifying it
noise_lc = fftpack.ifft(FT).real + noise_mean_rate
noise_lc[np.where(noise_lc < 0)] = 0.0
# noise_lc_poiss = np.random.poisson(noise_lc * dt)
noise_lc_poiss = np.random.poisson(noise_lc * dt) / dt
## Making the power spectrum from the Poissonified light curve
real_mean = np.mean(noise_lc_poiss)
noise_power = np.absolute(fftpack.fft(noise_lc_poiss - real_mean))**2
noise_power = noise_power[0:len(pos_freq)]
## Applying the fractional rms^2 normalization to the power spectrum
noise_power *= 2.0 * dt / float(n_bins) / (real_mean **2)
noise_level = 2.0 / real_mean
noise_power -= noise_level
## Re-binning the power spectrum in frequency
rb_freq, rb_power, freq_min, freq_max = geometric_rebinning(pos_freq, noise_power, 1.01)
Explanation: Generating a noise process with the specific shape noise_psd_shape
End of explanation
super_title_noise="Noise process"
npn_noise = rb_power * rb_freq
time_bins = np.arange(n_bins)
time = time_bins * dt
# fig, ax1 = plt.subplots(1, 1, figsize=(10,5))
fig, (ax1, ax2) = plt.subplots(2,1, figsize=(10,10))
# fig.suptitle(super_title_noise, fontsize=20, y=1.03)
ax1.plot(time, noise_lc_poiss, linewidth=2.0, color='purple')
ax1.set_xlabel('Time (s)', fontproperties=font_prop)
ax1.set_ylabel('Count rate (cts/s)', fontproperties=font_prop)
# ax1.set_xlim(0,0.3)
ax1.set_xlim(np.min(time), np.max(time))
ax1.set_xlim(0,1)
# ax1.set_ylim(0,450)
ax1.tick_params(axis='x', labelsize=16, bottom=True, top=True, labelbottom=True, labeltop=False)
ax1.tick_params(axis='y', labelsize=16, left=True, right=True, labelleft=True, labelright=False)
# ax1.set_title('Light curve', fontproperties=font_prop)
ax1.set_title('Timmer & Koenig Simulated Light Curve Segment of QPO', fontproperties=font_prop)
# ax2.plot(pos_freq, pulse_power * pos_freq, linewidth=2.0)
ax2.plot(rb_freq, npn_noise, linewidth=2.0)
ax2.set_xscale('log')
ax2.set_yscale('log')
ax2.set_xlabel(r'Frequency (Hz)', fontproperties=font_prop)
ax2.set_ylabel(r'Power $\times$ frequency', fontproperties=font_prop)
ax2.set_xlim(0, nyquist)
# ax2.set_xlim(0,200)
# ax2.set_ylim(1e-5, 1e-1)
ax2.tick_params(axis='x', labelsize=16, bottom=True, top=True, labelbottom=True, labeltop=False)
ax2.tick_params(axis='y', labelsize=16, left=True, right=True, labelleft=True, labelright=False)
ax2.set_title('Power density spectrum', fontproperties=font_prop)
fig.tight_layout(pad=1.0, h_pad=2.0)
plt.savefig("FAKEGX339-BQPO_QPO_lightcurve.eps")
plt.show()
# print np.var(noise_power, ddof=1)
print("Q =", Q)
print("w_0 =", w_0)
print("fwhm =", fwhm)
print("beta =", beta)
Explanation: Plotting the noise process: light curve and power spectrum.
End of explanation
pulse_mean = 1.0 # fractional
pulse_amp = 0.05 # fractional
freq = 40
assert freq < nyquist, "ERROR: Pulsation frequency must be less than the Nyquist frequency."
period = 1.0 / freq # in seconds
bins_per_period = period / dt
pulse_lc = make_pulsation(n_bins, dt, freq, pulse_amp, pulse_mean, 0.0)
pulse_unnorm_power = np.absolute(fftpack.fft(pulse_lc)) ** 2
pulse_unnorm_power = pulse_unnorm_power[0:len(pos_freq)]
pulse_power = 2.0 * pulse_unnorm_power * dt / float(n_bins) / (pulse_mean ** 2)
Explanation: Generating a pulsation using the above function 'make_pulsation'
End of explanation
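make_pulsation itself is defined earlier in the notebook; a minimal sketch of what such a helper plausibly looks like, given the call signature used above (the sinusoidal form and the meaning of the final phase argument are assumptions):

```python
import numpy as np

def make_pulsation_sketch(n_bins, dt, freq, amp, mean, phase):
    # sinusoid with the given mean level and fractional amplitude;
    # the exact functional form of the notebook's helper is assumed
    t = np.arange(n_bins) * dt
    return mean * (1.0 + amp * np.sin(2.0 * np.pi * freq * t + phase))

lc = make_pulsation_sketch(8192, 64.0 / 8192.0, 40.0, 0.05, 1.0, 0.0)
print(lc.mean())  # ~1.0: the oscillation averages out over whole cycles
```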
super_title_pulse = "Periodic pulsation"
npn_pulse = pulse_power[1:] * pos_freq[1:]
# npn_pulse = pulse_power[1:]
fig, (ax1, ax2) = plt.subplots(2,1, figsize=(10,10))
fig.suptitle(super_title_pulse, fontsize=20, y=1.03)
ax1.plot(time, pulse_lc, linewidth=2.0, color='g')
ax1.set_xlabel('Elapsed time (seconds)', fontproperties=font_prop)
ax1.set_ylabel('Relative count rate', fontproperties=font_prop)
ax1.set_xlim(0, 0.3)
ax1.tick_params(axis='x', labelsize=16, bottom=True, top=True, labelbottom=True, labeltop=False)
ax1.tick_params(axis='y', labelsize=16, left=True, right=True, labelleft=True, labelright=False)
ax1.set_title('Light curve', fontproperties=font_prop)
ax2.plot(pos_freq[1:], npn_pulse, linewidth=2.0)
ax2.set_xlabel('Frequency (Hz)', fontproperties=font_prop)
ax2.set_ylabel(r'Power $\times$ frequency', fontproperties=font_prop)
ax2.set_xscale('log')
ax2.set_xlim(pos_freq[1], nyquist)
# ax2.set_xlim(0,200)
ax2.set_ylim(0, np.max(npn_pulse)+.5*np.max(npn_pulse))
## Setting the y-axis minor ticks. It's complicated.
y_maj_loc = ax2.get_yticks()
y_min_mult = 0.5 * (y_maj_loc[1] - y_maj_loc[0])
yLocator = MultipleLocator(y_min_mult) ## location of minor ticks on the y-axis
ax2.yaxis.set_minor_locator(yLocator)
ax2.tick_params(axis='x', labelsize=16, bottom=True, top=True, labelbottom=True, labeltop=False)
ax2.tick_params(axis='y', labelsize=16, left=True, right=True, labelleft=True, labelright=False)
ax2.set_title('Power density spectrum', fontproperties=font_prop)
fig.tight_layout(pad=1.0, h_pad=2.0)
plt.show()
print("dt = %.13f s" % dt)
print("df = %.2f Hz" % df)
print("nyquist = %.2f Hz" % nyquist)
print("n_bins = %d" % n_bins)
Explanation: Plotting just the pulsation: light curve and power spectrum.
End of explanation
total_lc = pulse_lc * noise_lc
total_lc[np.where(total_lc < 0)] = 0.0
total_lc_poiss = np.random.poisson(total_lc * dt) / dt
mean_total_lc = np.mean(total_lc_poiss)
print("Mean count rate of light curve:", mean_total_lc)
total_unnorm_power = np.absolute(fftpack.fft(total_lc_poiss - mean_total_lc)) ** 2
total_unnorm_power = total_unnorm_power[0:len(pos_freq)]
total_power = 2.0 * total_unnorm_power * dt / float(n_bins) / (mean_total_lc ** 2)
total_power -= 2.0 / mean_total_lc
print("Mean of total power:", np.mean(total_power))
total_pow_var = np.sum(total_power * df)
## Re-binning the power spectrum in frequency
rb_freq, rb_total_power, tfreq_min, tfreq_max = geometric_rebinning(pos_freq, total_power, 1.01)
Explanation: Summing together the pulsation and the noise (where the noise has a power law, QPO, etc.)
End of explanation
super_title_total = "Together"
npn_total = rb_total_power * rb_freq
# npn_total = total_power
fig, (ax1, ax2, ax3) = plt.subplots(3,1, figsize=(10,15))
fig.suptitle(super_title_total, fontsize=20, y=1.03)
## Plotting the light curve
ax1.plot(time, total_lc_poiss, linewidth=2.0, color='g')
ax1.set_xlabel('Elapsed time (seconds)', fontproperties=font_prop)
ax1.set_ylabel('Photon count rate', fontproperties=font_prop)
ax1.set_xlim(0, 0.3)
ax1.set_ylim(0,)
ax1.tick_params(axis='x', labelsize=16, bottom=True, top=True, labelbottom=True, labeltop=False)
ax1.tick_params(axis='y', labelsize=16, left=True, right=True, labelleft=True, labelright=False)
ax1.set_title('Light curve', fontproperties=font_prop)
## Linearly plotting the power spectrum
ax2.plot(pos_freq, total_power, linewidth=2.0)
ax2.set_xlabel(r'$\nu$ (Hz)', fontproperties=font_prop)
ax2.set_ylabel(r'Power (frac. rms$^{2}$)', fontproperties=font_prop)
ax2.set_xlim(0, nyquist)
ax2.set_ylim(0, np.max(total_power) + (0.1 * np.max(total_power)))
## Setting the axes' minor ticks. It's complicated.
x_maj_loc = ax2.get_xticks()
y_maj_loc = ax2.get_yticks()
x_min_mult = 0.2 * (x_maj_loc[1] - x_maj_loc[0])
y_min_mult = 0.5 * (y_maj_loc[1] - y_maj_loc[0])
xLocator = MultipleLocator(x_min_mult) ## location of minor ticks on the y-axis
yLocator = MultipleLocator(y_min_mult) ## location of minor ticks on the y-axis
ax2.xaxis.set_minor_locator(xLocator)
ax2.yaxis.set_minor_locator(yLocator)
ax2.tick_params(axis='x', labelsize=16, bottom=True, top=True, labelbottom=True, labeltop=False)
ax2.tick_params(axis='y', labelsize=16, left=True, right=True, labelleft=True, labelright=False)
ax2.set_title('Linear power density spectrum', fontproperties=font_prop)
## Logarithmically plotting the re-binned power * frequency spectrum
ax3.plot(rb_freq, npn_total, linewidth=2.0)
ax3.set_xscale('log')
ax3.set_yscale('log')
ax3.set_xlabel(r'$\nu$ (Hz)', fontproperties=font_prop)
ax3.set_ylabel(r'Power $\times$ frequency', fontproperties=font_prop)
ax3.set_xlim(0, nyquist)
# ax3.set_xlim(4000,)
# ax3.set_ylim(0,300)
ax3.tick_params(axis='x', labelsize=16, bottom=True, top=True, labelbottom=True, labeltop=False)
ax3.tick_params(axis='y', labelsize=16, left=True, right=True, labelleft=True, labelright=False)
ax3.set_title('Re-binned power density spectrum', fontproperties=font_prop)
fig.tight_layout(pad=1.0, h_pad=2.0)
plt.show()
print("dt = %.13f s" % dt)
print("df = %.2f Hz" % df)
print("nyquist = %.2f Hz" % nyquist)
print("n_bins = %d" % n_bins)
print("power variance = %.2e (frac rms2)" % total_pow_var)
Explanation: Plotting the periodic pulsation + noise process: light curve and power spectrum.
End of explanation
## Titles in unicode
# print u"\n\t\tPower law; \u03B2 = %s\n" % str(beta)
# print u"\n\t\tLorentzian; \u0393 = %s at \u03C9\u2080 = %s\n" % (str(gamma), str(w_0))
# print u"\n\t\tPower law with exponential decay; \u03B2 = %s, \u03B1 = %s\n" \
# % (str(beta), str(alpha))
# print u"\n\n\tBroken power law; \u03C9_break = %s, \u03B2\u2081 = %s, \u03B2\u2082 = %s\n" \
# % (str(w_b), str(beta_1), str(beta_2))
# super_title = r"Power law with exponential decay: $\beta$ = %s, $\alpha$ = %s" % (str(beta), str(alpha))
# super_title = r"Broken power law; $\omega_{break}$ = %.2f Hz, $\beta_1$ = %.2f, $\beta_2$ = %.2f" \
# % (w_b, beta_1, beta_2)
Explanation: Playing around with other stuff
End of explanation |
7,610 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
map(func)
Retorna um novo RDD formado pela passagem de cada elemento do RDD de origem através de uma da função func.
Exemplo
Step1: filter(func)
Retorna um novo RDD formado pela seleção daqueles elemento do RDD de origem que, quando passados para função func, retorna true.
Exemplo
Step2: flatMap(func)
Semelhante ao map, porém cada item de entrada pode ser mapeado para 0 ou mais itens de saída (assim, func deve retornar uma lista em vez de um único item).
Exemplo
Step3: intersection(otherRDD)
Retorna um novo RDD que contém a interseção dos elementos no RDD de origem e o outro RDD (argumento).
Exemplo
Step4: groupByKey()
Quando chamado em um RDD de pares (K, V), retorna um conjunto de dados de pares (K, Iterable<V>).
Exemplo
Step5: reduceByKey(func)
Quando chamado em um RDD de pares (K, V), retorna um RDD de pares (K, V) onde os valores de cada chave são agregados usando a função de redução func, que deve ser do tipo (V, V)
Step6: sortByKey([asceding])
Quando chamado em um RDD de pares (K, V) em que K é ordenável, retorna um RDD de pares (K, V) ordenados por chaves em ordem ascendente ou descendente, conforme especificado no argumento ascending.
Exemplo | Python Code:
data = sc.parallelize(range(1, 11))
def duplicar(x): return x*x
# data is an RDD
res = data.map( duplicar )
print (res.collect())
Explanation: map(func)
Returns a new RDD formed by passing each element of the source RDD through the function func.
Example:
End of explanation
data = sc.parallelize(range(1, 11))
res = data.filter(lambda x: x%2 ==1)
print(res.collect())
Explanation: filter(func)
Returns a new RDD formed by selecting those elements of the source RDD for which the function func returns true.
Example:
End of explanation
data = sc.parallelize(["Linha 1", "Linha 2"])
def partir(l): return l.split(" ")
print ('map:', data.map(partir).collect())
print ('flatMap:', data.flatMap(partir).collect())
Explanation: flatMap(func)
Similar to map, but each input item can be mapped to 0 or more output items (so func must return a list instead of a single item).
Example:
End of explanation
two_multiples = sc.parallelize(range(0, 20, 2))
three_multiples = sc.parallelize(range(0, 20, 3))
print (two_multiples.intersection(three_multiples).collect())
Explanation: intersection(otherRDD)
Returns a new RDD that contains the intersection of the elements of the source RDD and the other RDD (the argument).
Example:
End of explanation
data = sc.parallelize([ ('a', 1), ('b', 2), ('c', 3) , ('a', 2), ('b', 5), ('a', 3)])
for pair in data.groupByKey().collect():
print (pair[0], list(pair[1]))
Explanation: groupByKey()
When called on an RDD of (K, V) pairs, returns a dataset of (K, Iterable<V>) pairs.
Example:
End of explanation
data = sc.parallelize([ ('a', 1), ('b', 2), ('c', 3) , ('a', 2), ('b', 5), ('a', 3)])
res = data.reduceByKey( lambda x,y: x+y )
print (res.collect())
Explanation: reduceByKey(func)
When called on an RDD of (K, V) pairs, returns an RDD of (K, V) pairs where the values for each key are aggregated using the reduce function func, which must be of type (V, V): V (it takes two values and returns a new value).
Example:
End of explanation
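The semantics of reduceByKey can be mimicked in plain Python (no Spark needed) to see exactly what the per-key aggregation does — a sketch for illustration only, not Spark's actual implementation:

```python
def reduce_by_key(pairs, func):
    # group values by key, then fold each group with func, mirroring
    # what Spark's reduceByKey does across partitions
    out = {}
    for k, v in pairs:
        out[k] = func(out[k], v) if k in out else v
    return sorted(out.items())

data = [('a', 1), ('b', 2), ('c', 3), ('a', 2), ('b', 5), ('a', 3)]
print(reduce_by_key(data, lambda x, y: x + y))  # [('a', 6), ('b', 7), ('c', 3)]
```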
data = sc.parallelize([ ('a', 1), ('b', 2), ('c', 3) , ('a', 2), ('b', 5), ('a', 3)])
print(data.sortByKey(ascending=False).collect())
Explanation: sortByKey([ascending])
When called on an RDD of (K, V) pairs where K is orderable, returns an RDD of (K, V) pairs sorted by key in ascending or descending order, as specified in the ascending argument.
Example:
End of explanation |
7,611 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using third-party Native Libraries
Sometimes, the functionality you need is only available in third-party native libraries. These libraries can still be used from within Pythran, using Pythran's support for capsules.
Pythran Code
The pythran code requires function pointers to the third-party functions, passed as parameters to your pythran routine, as in the following
Step1: In that case libm_cbrt is expected to be a capsule containing the function pointer to libm's cbrt (cube root) function.
This capsule can be created using ctypes
Step2: The capsule is not usable from Python context (it's some kind of opaque box) but Pythran knows how to use it. beware, it does not try to do any kind of type verification. It trusts your #pythran export line.
Step3: With Pointers
Now, let's try to use the sincos function. It's C signature is void sincos(double, double*, double*). How do we pass that to Pythran?
Step4: There is some magic happening here
Step5: With Pythran
It is naturally also possible to use capsule generated by Pythran. In that case, no type shenanigans is required, we're in our small world.
One just need to use the capsule keyword to indicate we want to generate a capsule.
Step6: It's not possible to call the capsule directly, it's an opaque structure.
Step7: It's possible to pass it to the according pythran function though.
Step8: With Cython
The capsule pythran uses may come from Cython-generated code. This uses a little-known feature from cython
Step9: The cythonized module has a special dictionary that holds the capsule we're looking for. | Python Code:
import pythran
%load_ext pythran.magic
%%pythran
#pythran export pythran_cbrt(float64(float64), float64)
def pythran_cbrt(libm_cbrt, val):
return libm_cbrt(val)
Explanation: Using third-party Native Libraries
Sometimes, the functionality you need is only available in third-party native libraries. These libraries can still be used from within Pythran, using Pythran's support for capsules.
Pythran Code
The pythran code requires function pointers to the third-party functions, passed as parameters to your pythran routine, as in the following:
End of explanation
import ctypes
# capsulefactory
PyCapsule_New = ctypes.pythonapi.PyCapsule_New
PyCapsule_New.restype = ctypes.py_object
PyCapsule_New.argtypes = ctypes.c_void_p, ctypes.c_char_p, ctypes.c_void_p
# load libm
libm = ctypes.CDLL('libm.so.6')
# extract the proper symbol
cbrt = libm.cbrt
# wrap it
cbrt_capsule = PyCapsule_New(cbrt, "double(double)".encode(), None)
Explanation: In that case libm_cbrt is expected to be a capsule containing the function pointer to libm's cbrt (cube root) function.
This capsule can be created using ctypes:
End of explanation
pythran_cbrt(cbrt_capsule, 8.)
Explanation: The capsule is not usable from Python context (it's some kind of opaque box) but Pythran knows how to use it. Beware: it does not try to do any kind of type verification. It trusts your #pythran export line.
End of explanation
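Before (or instead of) handing the symbol to Pythran as a capsule, you can sanity-check it by calling it directly through ctypes with explicit restype/argtypes. This is a side check of mine, not part of the Pythran flow, and it assumes a Unix-like system where libm can be located:

```python
import ctypes
import ctypes.util

# find_library may return None on unusual systems; fall back to the
# soname used above
libm_check = ctypes.CDLL(ctypes.util.find_library('m') or 'libm.so.6')
cbrt_fn = libm_check.cbrt
cbrt_fn.restype = ctypes.c_double
cbrt_fn.argtypes = [ctypes.c_double]

print(cbrt_fn(8.0))  # 2.0
```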
%%pythran
#pythran export pythran_sincos(None(float64, float64*, float64*), float64)
def pythran_sincos(libm_sincos, val):
import numpy as np
val_sin, val_cos = np.empty(1), np.empty(1)
libm_sincos(val, val_sin, val_cos)
return val_sin[0], val_cos[0]
Explanation: With Pointers
Now, let's try to use the sincos function. Its C signature is void sincos(double, double*, double*). How do we pass that to Pythran?
End of explanation
sincos_capsule = PyCapsule_New(libm.sincos, "unchecked anyway".encode(), None)
pythran_sincos(sincos_capsule, 0.)
Explanation: There is some magic happening here:
None is used to state the function pointer does not return anything.
In order to create pointers, we actually create empty one-dimensional arrays and let pythran handle them as pointers. Beware that you're in charge of all the memory checking stuff!
Apart from that, we can now call our function with the proper capsule parameter.
End of explanation
%%pythran
## This is the capsule.
#pythran export capsule corp((int, str), str set)
def corp(param, lookup):
res, key = param
return res if key in lookup else -1
## This is some dummy callsite
#pythran export brief(int, int((int, str), str set)):
def brief(val, capsule):
return capsule((val, "doctor"), {"some"})
Explanation: With Pythran
It is naturally also possible to use capsules generated by Pythran. In that case, no type shenanigans are required; we're in our own small world.
One just needs to use the capsule keyword to indicate we want to generate a capsule.
End of explanation
try:
corp((1,"some"),set())
except TypeError as e:
print(e)
Explanation: It's not possible to call the capsule directly, it's an opaque structure.
End of explanation
brief(1, corp)
Explanation: It's possible to pass it to the according pythran function though.
End of explanation
!find -name 'cube*' -delete
%%file cube.pyx
#cython: language_level=3
cdef api double cube(double x) nogil:
return x * x * x
from setuptools import setup
from Cython.Build import cythonize
_ = setup(
name='cube',
ext_modules=cythonize("cube.pyx"),
zip_safe=False,
# fake CLI call
script_name='setup.py',
script_args=['--quiet', 'build_ext', '--inplace']
)
Explanation: With Cython
The capsule pythran uses may come from Cython-generated code. This uses a little-known feature from cython: api and __pyx_capi__. nogil is of importance here: Pythran releases the GIL, so better not call a cythonized function that uses it.
End of explanation
import sys
sys.path.insert(0, '.')
import cube
print(type(cube.__pyx_capi__['cube']))
cython_cube = cube.__pyx_capi__['cube']
pythran_cbrt(cython_cube, 2.)
Explanation: The cythonized module has a special dictionary that holds the capsule we're looking for.
End of explanation |
7,612 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Lorenz Differential Equations
Before we start, we import some preliminary libraries. We will also import (below) the accompanying lorenz.py file, which contains the actual solver and plotting routine.
Step1: We explore the Lorenz system of differential equations
Step2: For the default set of parameters, we see the trajectories swirling around two points, called attractors.
The object returned by interactive is a Widget object and it has attributes that contain the current result and arguments
Step3: After interacting with the system, we can take the result and perform further computations. In this case, we compute the average positions in \(x\), \(y\) and \(z\).
Step4: Creating histograms of the average positions (across different trajectories) show that, on average, the trajectories swirl about the attractors. | Python Code:
%matplotlib inline
from ipywidgets import interactive, fixed
Explanation: The Lorenz Differential Equations
Before we start, we import some preliminary libraries. We will also import (below) the accompanying lorenz.py file, which contains the actual solver and plotting routine.
End of explanation
from lorenz import solve_lorenz
w=interactive(solve_lorenz,sigma=(0.0,50.0),rho=(0.0,50.0))
w
Explanation: We explore the Lorenz system of differential equations:
$$
\begin{aligned}
\dot{x} & = \sigma(y-x) \\
\dot{y} & = \rho x - y - xz \\
\dot{z} & = -\beta z + xy
\end{aligned}
$$
Let's change (\(\sigma\), \(\beta\), \(\rho\)) with ipywidgets and examine the trajectories.
End of explanation
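The right-hand side of the system above can be written down directly. A standalone sketch (independent of the lorenz.py helper used here) that also checks the non-trivial fixed points at (±c, ±c, ρ−1) with c = √(β(ρ−1)):

```python
import numpy as np

sigma, beta, rho = 10.0, 8.0 / 3.0, 28.0

def lorenz_deriv(state):
    # (dx/dt, dy/dt, dz/dt) for the Lorenz system
    x, y, z = state
    return np.array([sigma * (y - x), rho * x - y - x * z, -beta * z + x * y])

c = np.sqrt(beta * (rho - 1.0))
fixed_point = np.array([c, c, rho - 1.0])
print(np.allclose(lorenz_deriv(fixed_point), 0.0))  # True
```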
t, x_t = w.result
w.kwargs
Explanation: For the default set of parameters, we see the trajectories swirling around two points, called attractors.
The object returned by interactive is a Widget object and it has attributes that contain the current result and arguments:
End of explanation
xyz_avg = x_t.mean(axis=1)
xyz_avg.shape
Explanation: After interacting with the system, we can take the result and perform further computations. In this case, we compute the average positions in \(x\), \(y\) and \(z\).
End of explanation
from matplotlib import pyplot as plt
plt.hist(xyz_avg[:,0])
plt.title('Average $x(t)$');
plt.hist(xyz_avg[:,1])
plt.title('Average $y(t)$');
Explanation: Creating histograms of the average positions (across different trajectories) shows that, on average, the trajectories swirl about the attractors.
End of explanation |
7,613 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
To understand what CUDA is and why it is important, I read
this article. It is a subtle plug for their services, but it aligns well with all that I have so far heard about GPUs in data science and analytics. I even learned who made it and what the acronym means
Step1: Yup, it looks like it is not using the GPU
Step2: I do, and the solution says it has something to do with my path not being set properly. Lets see what my path is for this notebook instance.
Step3: It doesn't include any of the CUDA or VS stuff, so I'm betting this is the problem. I'll add the path items related to CUDA or VS I have in my path to this path, and try again.
Step4: Still not working!
Step5: Doesn't work. Lets see if I can find this nvcc compiler on the system. It is on the system, and also on the system path because this command worked fine on the command line.
```bash
nvcc -V
nvcc
Step6: But I threw in a call to show the system path as well, to see what it says. I note that the system path is not updated with the CUDA variables, but it does have everything else I have on my path normally, unlike the python system path above. I think I understand now.
I started this jupyter session before I installed CUDA and VS, so it was started before I or the installers made any changes to the system path. The system path variable the jupyter notebook server grabbed and is holding in memory is the old, out of date one. If I'm right, this problem with CUDA should fix itself when I restart the notebook server.
Restarted Jupyter Notebook server
Let's try the commands and tests again.
from theano import function, config, shared, sandbox
import theano.tensor as T
import numpy
import time
vlen = 10 * 30 * 768 # 10 x #cores x # threads per core
iters = 1000
rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in range(iters):
r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
print("Result is %s" % (r,))
if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):
print('Used the cpu')
else:
print('Used the gpu')
Explanation: To understand what CUDA is and why it is important, I read
this article. It is a subtle plug for their services, but it aligns well with all that I have so far heard about GPUs in data science and analytics. I even learned who made it and what the acronym means: "Nvidia introduced ... CUDA (Compute Unified Device Architecture)."
CUDA is apparently hard to get to work on Windows, and we're going to see if that is true or not. This is a link to the tutorial/instruction set I'm going to try out.
Make sure your computer has a compatible CUDA graphics card
Head here to check it out for your own card. I found the card name in my device manager, a GeForce GTX 960M, and it is on the CUDA list. Then I confirmed I had the card my system thinks it has by running the Nvidia hardware scanner. Check! Onward!
Download CUDA
I grabbed the download from here, and chose these options:
- version 8.0
- for windows 10
- x86_64 architecture
- local installer
I opted for local in case I have to run the whole thing over again after some problem comes up. It's 1.2 GB, so the download is going to take a while.
Download and Install Visual Studio 2013 Community version
I tried to grab it from here. That failed because it wanted me to sign up for some account, and the process was painful to start. So I searched for another way to avoid having to sign up for anything, and I found this site. Downloaded the 2013 vs-community.exe, and it starts up saying it is VS 2013, hooray!
In the options box during the installer, chose to install
- "Microsoft Foundation Classes for C++"
as the only optional feature, and the download is still 9GB. I'll be back in an even longer while.
Install CUDA, then check it
VS 2013 appears to have installed just fine, so now to try CUDA. I ran the extractor and installer, no problems.
I tried to find the file, vs2013.sln, the instructions say to run, but no luck in the spot it suggested it would be. The CUDA folder isn't there. I tried searching the C drive, but no luck either. I tried re-extracting into a different folder, and as an administrator, and that turns out that was the problem.
The first time I ran it wasn't as an administrator, but it just crashed without error, which is always handy. Now running it as an admin, it ran fine. I chose the custom install when it asked, and I selected everything under CUDA and de-selecting everything else because I already had newer versions installed for my graphics card. It then installed, for real, and it reported no errors.
I ran
bash
C:\ProgramData\NVIDIA Corporation\CUDA Samples\v7.0\1_Utilities\deviceQuery\deviceQuery_vs2013.sln
in Visual studio, and VS built the run time and it reported PASS at the bottom of this report.
```bash
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "GeForce GTX 960M"
CUDA Driver Version / Runtime Version 8.0 / 8.0
CUDA Capability Major/Minor version number: 5.0
Total amount of global memory: 2048 MBytes (2147483648 bytes)
( 5) Multiprocessors, (128) CUDA Cores/MP: 640 CUDA Cores
GPU Max Clock rate: 1176 MHz (1.18 GHz)
Memory Clock rate: 2505 Mhz
Memory Bus Width: 128-bit
L2 Cache Size: 2097152 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 1 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
CUDA Device Driver Mode (TCC or WDDM): WDDM (Windows Display Driver Model)
Device supports Unified Addressing (UVA): Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = GeForce GTX 960M
Result = PASS
```
Setup System Variables
As per instruction, I added
bash
THEANO_FLAGS = "floatX=float32,device=gpu,nvcc.fastmath=True"
then added
bash
VS2013path = C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin
and finally tacked on
bash
%VS2013path%
to the system path.
Final Check
I tried importing Theano on the command line in the machine learning environment I created previously, and got the success report out the other side!
```bash
import theano
Using gpu device 0: GeForce GTX 960M (CNMeM is disabled, cuDNN not available)
```
That seems too easy, but I'll take it. Now to start playing with it in PyMC3, and see if I notice a difference.
Making Theano work with the GPU in jupyter
In the theano documentation, I found this script to test whether I am using the GPU. I'm going to run it from this notebook, and I think I know what will happen.
End of explanation
import theano.sandbox.cuda
theano.sandbox.cuda.use("gpu0")
Explanation: Yup, it looks like it is not using the GPU :( Running the same thing from the command line results in success, though.
bash
Using gpu device 0: GeForce GTX 960M (CNMeM is disabled, cuDNN not available)
[GpuElemwise{exp,no_inplace}(<CudaNdarrayType(float32, vector)>), HostFromGpu(GpuElemwise{exp,no_inplace}.0)]
Looping 1000 times took 0.875091 seconds
Result is [ 1.23178029 1.61879349 1.52278066 ..., 2.20771813 2.29967761
1.62323296]
Used the gpu
Argh!
I found this stack overflow question, and I think I have the same problem. Let's see if I get the same error message.
End of explanation
import sys
sys.path
Explanation: I do, and the solution says it has something to do with my path not being set properly. Let's see what my path is for this notebook instance.
End of explanation
cuda_paths = ['C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v8.0\\bin',
'C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v8.0\\libnvvp',
'C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common',
'C:\\Program Files (x86)\\Microsoft Visual Studio 12.0\\VC\\bin',
]
for path in cuda_paths:
sys.path.append(path)
import theano.sandbox.cuda
theano.sandbox.cuda.use("gpu0")
Explanation: It doesn't include any of the CUDA or VS stuff, so I'm betting this is the problem. I'll add the path items related to CUDA or VS I have in my path to this path, and try again.
End of explanation
sys.path
Explanation: Still not working!
End of explanation
%%cmd
nvcc -V
path
Explanation: Doesn't work. Let's see if I can find this nvcc compiler on the system. It is on the system, and also on the system path, because this command worked fine on the command line.
```bash
nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Sat_Sep__3_19:05:48_CDT_2016
Cuda compilation tools, release 8.0, V8.0.44
```
If I try running the cmd from a notebook cell, I get an error.
End of explanation
%%cmd
nvcc -V
import theano.sandbox.cuda
theano.sandbox.cuda.use("gpu0")
from theano import function, config, shared, sandbox
import theano.tensor as T
import numpy
import time
vlen = 10 * 30 * 768 # 10 x #cores x # threads per core
iters = 1000
rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in range(iters):
r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
print("Result is %s" % (r,))
if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):
print('Used the cpu')
else:
print('Used the gpu')
Explanation: But I threw in a call to show the system path as well, to see what it says. I note that the system path is not updated with the CUDA variables, but it does have everything else I have on my path normally, unlike the python system path above. I think I understand now.
I started this jupyter session before I installed CUDA and VS, so it was started before I or the installers made any changes to the system path. The system path variable the jupyter notebook server grabbed and is holding in memory is the old, out of date one. If I'm right, this problem with CUDA should fix itself when I restart the notebook server.
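As a side note, Python's `sys.path` only controls module imports; external tools like `nvcc` are located through the process environment variable `PATH`. A minimal sketch (the install path below is hypothetical) of inspecting and extending it from inside a running notebook:

```python
import os

# sys.path controls Python imports; external tools such as nvcc are found
# via the PATH environment variable of the notebook server process.
cuda_bin = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin"  # hypothetical
if cuda_bin not in os.environ.get("PATH", ""):
    os.environ["PATH"] = cuda_bin + os.pathsep + os.environ.get("PATH", "")
print(cuda_bin in os.environ["PATH"])  # True
```

Note that this only changes the current process's environment; restarting the notebook server achieves the same end by inheriting the updated system PATH.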
Restarted Jupyter Notebook server
Let's try the commands and tests again.
End of explanation |
7,614 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Processing OpenStreetMap osm files with Spark.
Spark DataFrame reference
Step1: Configure SparkConf and create the SparkContext runtime object.
Step2: Display the Spark configuration.
Step3: Spark text-RDD operations.
Read the osm JSON-format file as plain text and convert each JSON string into a dict object.
Step4: Run a keyword search over the RDD as plain text.
Step5: Spark DataFrame operations.
Use the SQL engine to build Spark DataFrame objects directly, with support for queries and other operations.
Read the osm node table.
Step6: The Spark DataFrame select() operation. The show() method can cap how many records are displayed.
Step7: Read the osm way table.
Step8: Inspect the data in the way table.
Step9: Build geometry objects for ways.
For each way record, generate a string list of node IDs, to be used in the next step to look up node coordinates.
Step10: Query the node information for several ways.
Step11: Convert the latitude/longitude coordinates into a GeoJSON geometry representation and save it back into the way's geometry field.
Step12: Search for a given keyword.
Process with a custom function. | Python Code:
from pprint import *
import pyspark
from pyspark import SparkConf, SparkContext
sc = None
print(pyspark.status)
Explanation: Processing OpenStreetMap osm files with Spark.
Spark DataFrame reference: https://spark.apache.org/docs/1.3.0/sql-programming-guide.html#interoperating-with-rdds
by openthings@163.com, 2016-4-23. License: GPL, MUST include this header.
Notes:
Use sc.read.json() to read the JSON file (converted from osm by osm-all2json) and generate a Spark DataFrame object.
Query the DataFrame built from the JSON file to create new DataFrames.
Read each way's nd index (node IDs) and construct the way's geometry object.
Future work:
Save the data to other storage systems such as MongoDB/HBase/HDFS.
Split the data into blocks and save them as regionally partitioned DataFrame collections.
Convert the DataFrame to a GeoPandas DataFrame, then save as shapefiles.
Convert the DataFrame directly to a GIScript.Dataset, then save as UDB files.
End of explanation
conf = (SparkConf()
.setMaster("local")
.setAppName("MyApp")
.set("spark.executor.memory", "1g"))
if sc is None:
sc = SparkContext(conf = conf)
print(type(sc))
print(sc)
print(sc.applicationId)
Explanation: Configure SparkConf and create the SparkContext runtime object.
End of explanation
print(conf)
conf_kv = conf.getAll()
pprint(conf_kv)
Explanation: Display the Spark configuration.
End of explanation
fl = sc.textFile("../data/muenchen.osm_node.json")
for node in fl.collect()[0:2]:
node_dict = eval(node)
pprint(node_dict)
Explanation: Spark text-RDD operations.
Read the osm JSON-format file as plain text and convert each JSON string into a dict object.
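The cell above parses each line with eval(), which will execute arbitrary expressions; a safer stdlib sketch of the same per-line conversion (the sample line below is made up) uses json.loads:

```python
import json

# json.loads parses one JSON document into a dict without evaluating code,
# unlike eval(); the sample line is illustrative only.
line = '{"id": "123", "lat": 48.14, "lon": 11.58}'
node_dict = json.loads(line)
print(node_dict["id"], node_dict["lat"])  # 123 48.14
```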
End of explanation
lines = fl.filter(lambda line: "soemisch" in line)
print(lines.count())
print(lines.collect()[0])
Explanation: Run a keyword search over the RDD as plain text.
End of explanation
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
nodeDF = sqlContext.read.json("../data/muenchen.osm_node.json")
#print(nodeDF)
nodeDF.printSchema()
Explanation: Spark DataFrame operations.
Use the SQL engine to build Spark DataFrame objects directly, with support for queries and other operations.
Read the osm node table.
End of explanation
nodeDF.select("id","lat","lon","timestamp").show(10,True)
#help(nodeDF.show)
Explanation: The Spark DataFrame select() operation. The show() method can cap how many records are displayed.
End of explanation
wayDF = sqlContext.read.json("../data/muenchen.osm_way.json")
wayDF.printSchema()
Explanation: Read the osm way table.
End of explanation
wayDF.select("id","tag","nd").show(10,True)
Explanation: Inspect the data in the way table.
End of explanation
def sepator():
print("===============================================================")
#### Extract the node-ID list from a given way's nd entries and build a filter string for the query.
def nodelist_way(nd_list):
print("WayID:",nd_list["id"],"\tNode count:",len(nd_list["nd"]))
ndFilter = "("
for nd in nd_list["nd"]:
ndFilter = ndFilter + nd["ref"] + ","
ndFilter = ndFilter.strip(',') + ")"
print(ndFilter)
return ndFilter
#### Use the way's node IDs to pull node records (lat/lon and other coordinate fields) from nodeDF.
def nodecoord_way(nodeID_list):
nodeDF.registerTempTable("nodeDF")
nodeset = sqlContext.sql("select id,lat,lon,timestamp from nodeDF where nodeDF.id in " + nodeID_list)
nodeset.show(10,True)
Explanation: Build geometry objects for ways.
For each way record, generate a string list of node IDs, to be used in the next step to look up node coordinates.
End of explanation
for wayset in wayDF.select("id","nd").collect()[4:6]:
ndFilter = nodelist_way(wayset)
nodecoord_way(ndFilter)
#pprint(nd_list["nd"])
#sepator()
Explanation: Query the node information for several ways.
End of explanation
relationDF = sqlContext.read.json("../data/muenchen.osm_relation.json")
#print(relationDF)
relationDF.printSchema()
relationDF.show(10,True)
Explanation: Convert the latitude/longitude coordinates into a GeoJSON geometry representation and save it back into the way's geometry field.
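As a sketch of the conversion this step describes (plain dicts and hypothetical coordinates; no GIS library assumed), an ordered (lon, lat) sequence maps directly onto a GeoJSON LineString:

```python
# GeoJSON stores positions as [lon, lat]; a way's ordered node coordinates
# therefore become the coordinate list of a LineString geometry.
coords = [(11.58, 48.14), (11.59, 48.15), (11.60, 48.15)]
geometry = {"type": "LineString",
            "coordinates": [[lon, lat] for lon, lat in coords]}
print(geometry["type"], len(geometry["coordinates"]))  # LineString 3
```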
End of explanation
def myFunc(s):
words = s.split()
return len(words)
#wc = fl.map(myFunc).collect()
wc = fl.map(myFunc).collect()
wc
#df = sqlContext.read.format("com.databricks.spark.xml").option("rowTag", "result").load("../data/muenchen.osm")
#df
Explanation: Search for a given keyword.
Process with a custom function.
End of explanation |
7,615 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 1
We proceed by building the algorithm for testing the accuracy of the numerical derivative
Step1: a)
Step2: We can see that down to $h = 10^{-7}$ the trend is that the absolute error diminishes, but after that it goes back up. This is due to the fact that when we compute $f(h) - f(0)$ we are using an ill-conditioned operation. In fact these two values are really close to each other.
Exercise 2
a.
We can easily see that when $\|x\| \ll 1$ we have that both $\frac{1 - x }{x + 1}$ and $\frac{1}{2 x + 1}$ are almost equal to 1, so the subtraction is ill-conditioned.
Step3: We can modify the previous expression to be well conditioned around 0. This version is well conditioned.
Step4: A comparison between the two ways of computing this value. We can clearly see that far from 0 the two methods are nearly identical, but the closer you get to 0 the more they diverge.
Step5: $\sqrt{x + \frac{1}{x}} - \sqrt{x - \frac{1}{x}} = \left(\sqrt{x + \frac{1}{x}} - \sqrt{x - \frac{1}{x}}\right) \frac{\sqrt{x + \frac{1}{x}} + \sqrt{x - \frac{1}{x}}}{\sqrt{x + \frac{1}{x}} + \sqrt{x - \frac{1}{x}}} = \frac{2/x}{\sqrt{x + \frac{1}{x}} + \sqrt{x - \frac{1}{x}}}$, since $\left(x + \frac{1}{x}\right) - \left(x - \frac{1}{x}\right) = \frac{2}{x}$
Exercise 3.
a.
If we assume we possess a 6-faced die thrown three times, each throw has six possible outcomes. So we have to take all the combinations of 6 numbers repeated 3 times. It is intuitive that our $\Omega$ will be composed of $6^3$ samples, and they will be of the type
Step6: Concerning the $\sigma$-algebra, we need to state that there is no unique $\sigma$-algebra for a given $\Omega$, but in this case a reasonable choice would be the powerset of $\Omega$.
b.
In case of a fair die we will have the discrete uniform distribution, and for computing the value of $\rho(\omega)$ we just need to take the inverse of the size of our sample space: $\rho(\omega) = \frac{1}{6^3}$
Step7: c.
If we want to determine the set $A$ we can consider its complement $A^c = {\text{not even one throw is a 6}}$. This event is analogous to the sample space of a 5-faced die, so its size will be $5^3$. For computing the size of $A$ we can simply compute $6^3 - 5^3$, and for the event itself we just take $\Omega \setminus A^c = A$ | Python Code:
def f(x):
return np.exp(np.sin(x))
def df(x):
return f(x) * np.cos(x)
def absolute_err(f, df, h):
g = (f(h) - f(0)) / h
return np.abs(df(0) - g)
hs = 10. ** -np.arange(15)
epsilons = np.empty(15)
for i, h in enumerate(hs):
epsilons[i] = absolute_err(f, df, h)
Explanation: Exercise 1
We proceed by building the algorithm for testing the accuracy of the numerical derivative
End of explanation
plt.plot(hs, epsilons, 'o')
plt.yscale('log')
plt.xscale('log')
plt.xlabel(r'h')
plt.ylabel(r'$\epsilon(h)$')
plt.grid(linestyle='dotted')
Explanation: a)
End of explanation
x_1 = symbols('x_1')
fun1 = 1 / (1 + 2*x_1) - (1 - x_1) / (1 + x_1)
fun1
Explanation: We can see that down to $h = 10^{-7}$ the trend is that the absolute error diminishes, but after that it goes back up. This is due to the fact that when we compute $f(h) - f(0)$ we are using an ill-conditioned operation. In fact these two values are really close to each other.
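A quick stdlib sanity check of where that turning point should sit (a sketch): balancing the $\mathcal{O}(h)$ truncation error against the $\mathcal{O}(\varepsilon/h)$ rounding error puts the optimum near $h \approx \sqrt{\varepsilon_{\text{mach}}}$:

```python
import math
import sys

# Machine epsilon for float64 is ~2.22e-16; the forward-difference error
# is smallest near sqrt(eps), i.e. around 1.5e-8, matching the dip in the plot.
eps = sys.float_info.epsilon
h_opt = math.sqrt(eps)
print(h_opt)
```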
Exercise 2
a.
We can easily see that when $\|x\| \ll 1$ we have that both $\frac{1 - x }{x + 1}$ and $\frac{1}{2 x + 1}$ are almost equal to 1, so the subtraction is ill-conditioned.
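A quick numerical illustration of the cancellation (stdlib floats, a sketch): near $x = 0$ the direct difference is dominated by rounding noise, while the rearranged form keeps its relative accuracy:

```python
x = 1e-9
direct = 1 / (1 + 2 * x) - (1 - x) / (1 + x)   # two values both ~1: cancellation
stable = 2 * x * x / (2 * x * x + 3 * x + 1)   # algebraically equal, well conditioned
print(stable)   # ~2e-18
print(direct)   # same order of magnitude at best; its digits are mostly rounding noise
```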
End of explanation
fun2 = simplify(fun1)
fun2
Explanation: We can modify the previous expression to be well conditioned around 0. This version is well conditioned.
End of explanation
def f1(x):
return 1 / (1 + 2*x) - (1 - x) / (1 + x)
def f2(x):
return 2*x**2/(2*x**2 + 3*x + 1)
for i, h in enumerate(hs):
epsilons[i] = (f1(h) - f2(h)) / h
plt.plot(hs, epsilons, 'o')
plt.yscale('log')
plt.xscale('log')
plt.xlabel(r'h')
plt.ylabel(r'$\epsilon(h)$')
plt.grid(linestyle='dotted')
Explanation: A comparison between the two ways of computing this value. We can clearly see that far from 0 the two methods are nearly identical, but the closer you get to 0 the more they diverge.
End of explanation
import itertools
x = [1, 2, 3, 4, 5, 6]
omega = set([p for p in itertools.product(x, repeat=3)])
print(r'Omega has', len(omega), 'elements and they are:')
print(omega)
Explanation: $\sqrt{x + \frac{1}{x}} - \sqrt{x - \frac{1}{x}} = \left(\sqrt{x + \frac{1}{x}} - \sqrt{x - \frac{1}{x}}\right) \frac{\sqrt{x + \frac{1}{x}} + \sqrt{x - \frac{1}{x}}}{\sqrt{x + \frac{1}{x}} + \sqrt{x - \frac{1}{x}}} = \frac{2/x}{\sqrt{x + \frac{1}{x}} + \sqrt{x - \frac{1}{x}}}$, since $\left(x + \frac{1}{x}\right) - \left(x - \frac{1}{x}\right) = \frac{2}{x}$
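A stdlib check of why the rationalization matters (a sketch with an arbitrary large $x$): the two square roots are nearly equal, so subtracting them directly loses precision, while the rationalized quotient does not:

```python
import math

x = 1e8
naive = math.sqrt(x + 1 / x) - math.sqrt(x - 1 / x)               # difference of nearly equal roots
stable = (2 / x) / (math.sqrt(x + 1 / x) + math.sqrt(x - 1 / x))  # rationalized form
print(stable)   # ~1e-12, accurate to full precision
print(naive)    # most (possibly all) significant digits lost to cancellation
```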
Exercise 3.
a.
If we assume we possess a 6-faced die thrown three times, each throw has six possible outcomes. So we have to take all the combinations of 6 numbers repeated 3 times. It is intuitive that our $\Omega$ will be composed of $6^3$ samples, and they will be of the type:
$(1, 1, 1), (1, 1, 2), (1, 1, 3), ... (6, 6, 5), (6, 6, 6)$
End of explanation
1/(6**3)
Explanation: Concerning the $\sigma$-algebra, we need to state that there is no unique $\sigma$-algebra for a given $\Omega$, but in this case a reasonable choice would be the powerset of $\Omega$.
b.
In case of a fair die we will have the discrete uniform distribution, and for computing the value of $\rho(\omega)$ we just need to take the inverse of the size of our sample space: $\rho(\omega) = \frac{1}{6^3}$
End of explanation
print('Size of A^c:', 5**3)
print('Size of A: ', 6 ** 3 - 5 ** 3)
36 + 5*6 + 5*5  # outcomes with at least one six: 91 = 6**3 - 5**3
x = [1, 2, 3, 4, 5]
A_c = set([p for p in itertools.product(x, repeat=3)])
print('A^c has ', len(A_c), 'elements.\nA^c =', A_c)
print('A has ', len(omega - A_c), 'elements.\nA =', omega - A_c)
Explanation: c.
If we want to determine the set $A$ we can consider its complement $A^c = {\text{not even one throw is a 6}}$. This event is analogous to the sample space of a 5-faced die, so its size will be $5^3$. For computing the size of $A$ we can simply compute $6^3 - 5^3$, and for the event itself we just take $\Omega \setminus A^c = A$
End of explanation |
7,616 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
Step1: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find with the TensorFlow API, with tf.nn.conv2d_transpose.
However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
Exercise
Step2: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
Step3: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise
Step4: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is. | Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
Explanation: Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
End of explanation
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, shape=(None, 28, 28, 1))
targets_ = tf.placeholder(tf.float32, shape=(None, 28, 28, 1))
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 28x28x16
assert conv1.get_shape().as_list() == [None, 28, 28, 16], print(conv1.get_shape().as_list())
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='SAME')
# Now 14x14x16
assert maxpool1.get_shape().as_list() == [None, 14, 14, 16], print(maxpool1.get_shape().as_list())
conv2 = tf.layers.conv2d(maxpool1, 8, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='SAME')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='SAME')
# Now 4x4x8
assert encoded.get_shape().as_list() == [None, 4, 4, 8], print(encoded.get_shape().as_list())
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
assert upsample1.get_shape().as_list() == [None, 7, 7, 8], print(upsample1.get_shape().as_list())
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
assert upsample2.get_shape().as_list() == [None, 14, 14, 8], print(upsample2.get_shape().as_list())
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3, 3), padding='SAME', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
upsample2.get_shape()
Explanation: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find with the TensorFlow API, with tf.nn.conv2d_transpose.
However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
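The resize-then-convolve idea is easy to picture; a pure-Python sketch of 2x nearest-neighbor upsampling (the first half of each "Upsample" step) just repeats every pixel along both axes:

```python
# Nearest-neighbor upsampling by a factor of 2: each pixel becomes a 2x2
# block, doubling height and width (conceptually what
# tf.image.resize_nearest_neighbor does inside the network).
def upsample2x(img):
    out = []
    for row in img:
        wide = [v for v in row for _ in range(2)]  # repeat along width
        out.append(wide)
        out.append(list(wide))                     # repeat along height
    return out

print(upsample2x([[1, 2], [3, 4]]))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```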
Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor.
End of explanation
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
Explanation: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
End of explanation
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='SAME')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='SAME')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='SAME')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3, 3), padding='SAME', activation=None)
# Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Set's how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
Explanation: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
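The noisy-input generation can be sketched with the stdlib alone (toy pixel values here; the notebook does the same with np.random.randn over whole batches):

```python
import random

random.seed(0)                      # reproducible toy example
noise_factor = 0.5
pixels = [0.0, 0.2, 0.9, 1.0]       # normalized grayscale values
noisy = [min(1.0, max(0.0, p + noise_factor * random.gauss(0, 1)))
         for p in pixels]
print(all(0.0 <= p <= 1.0 for p in noisy))  # True: clipping keeps the range
```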
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
Explanation: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
End of explanation |
7,617 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
You can write and run Python code inside notebook cells
Step1: Beyond running Python interactively like this, a major advantage of Datalab is that it integrates seamlessly with the various GCP services.
Calling BigQuery from Datalab
Use the bq magic command
Step2: Using the results of a BigQuery command as Python objects (pattern 1)
Step4: Using the results of a BigQuery command as Python objects (pattern 2) | Python Code:
# Put the cursor in this cell and press Ctrl+Enter or Shift+Enter to print 'hello world'
print('hello world')
# Control structures, classes, functions and so on all behave just as in ordinary Python programming
for i in range(10):
print(i)
# Variable definitions are scoped not to a single cell but to the whole notebook, just like ordinary Python
x = 10
# x is 10, so this becomes 10 + 20
y = x + 20
print(y)
Explanation: You can write and run Python code inside notebook cells
End of explanation
%%bq query
SELECT id, title, num_characters
FROM `publicdata.samples.wikipedia`
WHERE wp_namespace = 0
ORDER BY num_characters DESC
LIMIT 10
Explanation: Beyond running Python interactively like this, a major advantage of Datalab is that it integrates seamlessly with the various GCP services.
Calling BigQuery from Datalab
Use the bq magic command
End of explanation
%%bq query -n requests
SELECT timestamp, latency, endpoint
FROM `cloud-datalab-samples.httplogs.logs_20140615`
WHERE endpoint = 'Popular' OR endpoint = 'Recent'
import google.datalab.bigquery as bq
import pandas as pd
# Load the query result into a pandas DataFrame
df = requests.execute(output_options=bq.QueryOutput.dataframe()).result()
df.head()
Explanation: Using the results of a BigQuery command as Python objects (pattern 1)
End of explanation
import google.datalab.bigquery as bq
import pandas as pd
# The query to issue
query = """
SELECT timestamp, latency, endpoint
FROM `cloud-datalab-samples.httplogs.logs_20140615`
WHERE endpoint = 'Popular' OR endpoint = 'Recent'
"""
# Create a query object
qobj = bq.Query(query)
# Fetch the query result as a pandas DataFrame
df2 = qobj.execute(output_options=bq.QueryOutput.dataframe()).result()
# Continue with ordinary pandas operations from here
df2.head()
Explanation: Using the results of a BigQuery command as Python objects (pattern 2)
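Once the result is a DataFrame, any normal pandas analysis applies; for instance (a sketch with toy rows standing in for the real query output):

```python
import pandas as pd

# Toy stand-in for the query result: mean latency per endpoint.
df2 = pd.DataFrame({"endpoint": ["Popular", "Recent", "Popular"],
                    "latency": [120, 340, 180]})
mean_latency = df2.groupby("endpoint")["latency"].mean()
print(mean_latency["Popular"])  # 150.0
```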
End of explanation |
7,618 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ensemble Design Pattern
Stacking is an Ensemble method which combines the outputs of a collection of models to make a prediction. The initial models, which are typically of different model types, are trained to completion on the full training dataset. Then, a secondary meta-model is trained using the initial model outputs as features. This second meta-model learns how to best combine the outcomes of the initial models to decrease the training error and can be any type of machine learning model.
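As a toy sketch of the idea (hypothetical base models and fixed weights; in practice the meta-model's combination weights are learned from the base models' outputs):

```python
# Three hypothetical base models, standing in for trained first-stage models.
def base_a(x): return 2 * x
def base_b(x): return x + 3
def base_c(x): return 0.5 * x

# The meta-model combines their outputs; a fixed weighted sum stands in
# for a trained second-stage model here.
def meta(preds, weights=(0.5, 0.25, 0.25)):
    return sum(w * p for w, p in zip(weights, preds))

x = 10
print(meta([base_a(x), base_b(x), base_c(x)]))  # 14.5
```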
Create a Stacking Ensemble model
In this notebook, we'll create an Ensemble of three neural network models and train on the natality dataset.
Step1: Create our tf.data input pipeline
Step2: Check that our tf.data dataset produces the batches we expect
Step3: Create our feature columns
Step4: Create our ensemble models
We'll train three different neural network models.
Step5: The function below trains a model and reports the MSE and RMSE on the test set.
Step6: Next, we'll train each neural network and save the trained model to file.
Step7: The RMSE varies on each of the neural networks.
Load the trained models and create the stacked ensemble model.
The function below loads the trained models and returns them in a list.
Step8: We will need to freeze the layers of the pre-trained models since we won't train these models any further. The Stacked Ensemble will be trainable and will learn how to best combine the results of the ensemble members.
Step9: Lastly, we'll create our Stacked Ensemble model. It is also a neural network. We'll use the Functional Keras API.
Step10: We need to adapt our tf.data pipeline to accommodate the multiple inputs for our Stacked Ensemble model.
Step11: Lastly, we will evaluate our Stacked Ensemble against the test set. | Python Code:
import os
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow import feature_column as fc
from tensorflow.keras import layers, models, Model
df = pd.read_csv("./data/babyweight_train.csv")
df.head()
Explanation: Ensemble Design Pattern
Stacking is an Ensemble method which combines the outputs of a collection of models to make a prediction. The initial models, which are typically of different model types, are trained to completion on the full training dataset. Then, a secondary meta-model is trained using the initial model outputs as features. This second meta-model learns how to best combine the outcomes of the initial models to decrease the training error and can be any type of machine learning model.
Create a Stacking Ensemble model
In this notebook, we'll create an Ensemble of three neural network models and train on the natality dataset.
End of explanation
# Determine CSV, label, and key columns
# Create list of string column headers, make sure order matches.
CSV_COLUMNS = ["weight_pounds",
"is_male",
"mother_age",
"plurality",
"gestation_weeks",
"mother_race"]
# Add string name for label column
LABEL_COLUMN = "weight_pounds"
# Set default values for each CSV column as a list of lists.
# Treat is_male and plurality as strings.
DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0], ["0"]]
def get_dataset(file_path):
dataset = tf.data.experimental.make_csv_dataset(
file_path,
batch_size=15, # Artificially small to make examples easier to show.
label_name=LABEL_COLUMN,
select_columns=CSV_COLUMNS,
column_defaults=DEFAULTS,
num_epochs=1,
ignore_errors=True)
return dataset
train_data = get_dataset("./data/babyweight_train.csv")
test_data = get_dataset("./data/babyweight_eval.csv")
Explanation: Create our tf.data input pipeline
End of explanation
def show_batch(dataset):
for batch, label in dataset.take(1):
for key, value in batch.items():
print("{:20s}: {}".format(key,value.numpy()))
show_batch(train_data)
Explanation: Check that our tf.data dataset returns the feature and label batches we expect:
End of explanation
numeric_columns = [fc.numeric_column("mother_age"),
fc.numeric_column("gestation_weeks")]
CATEGORIES = {
'plurality': ["Single(1)", "Twins(2)", "Triplets(3)",
"Quadruplets(4)", "Quintuplets(5)", "Multiple(2+)"],
'is_male' : ["True", "False", "Unknown"],
'mother_race': [str(_) for _ in df.mother_race.unique()]
}
categorical_columns = []
for feature, vocab in CATEGORIES.items():
cat_col = fc.categorical_column_with_vocabulary_list(
key=feature, vocabulary_list=vocab)
categorical_columns.append(fc.indicator_column(cat_col))
Explanation: Create our feature columns
End of explanation
inputs = {colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="float32")
for colname in ["mother_age", "gestation_weeks"]}
inputs.update({colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="string")
for colname in ["is_male", "plurality", "mother_race"]})
dnn_inputs = layers.DenseFeatures(categorical_columns+numeric_columns)(inputs)
# model_1
model1_h1 = layers.Dense(50, activation="relu")(dnn_inputs)
model1_h2 = layers.Dense(30, activation="relu")(model1_h1)
model1_output = layers.Dense(1, activation="relu")(model1_h2)
model_1 = tf.keras.models.Model(inputs=inputs, outputs=model1_output, name="model_1")
# model_2
model2_h1 = layers.Dense(64, activation="relu")(dnn_inputs)
model2_h2 = layers.Dense(32, activation="relu")(model2_h1)
model2_output = layers.Dense(1, activation="relu")(model2_h2)
model_2 = tf.keras.models.Model(inputs=inputs, outputs=model2_output, name="model_2")
# model_3
model3_h1 = layers.Dense(32, activation="relu")(dnn_inputs)
model3_output = layers.Dense(1, activation="relu")(model3_h1)
model_3 = tf.keras.models.Model(inputs=inputs, outputs=model3_output, name="model_3")
Explanation: Create our ensemble models
We'll train three different neural network models.
End of explanation
# fit model on dataset
def fit_model(model):
# define model
model.compile(
loss=tf.keras.losses.MeanSquaredError(),
optimizer='adam', metrics=['mse'])
# fit model
model.fit(train_data.shuffle(500), epochs=1)
# evaluate model
test_loss, test_mse = model.evaluate(test_data)
print('\n\n{}:\nTest Loss {}, Test RMSE {}'.format(
model.name, test_loss, test_mse**0.5))
return model
# create directory for models
try:
os.makedirs('models')
except:
print("directory already exists")
Explanation: The function below trains a model and reports the MSE and RMSE on the test set.
End of explanation
members = [model_1, model_2, model_3]
# fit and save models
n_members = len(members)
for i in range(n_members):
# fit model
model = fit_model(members[i])
# save model
filename = 'models/model_' + str(i + 1) + '.h5'
model.save(filename, save_format='tf')
print('Saved {}\n'.format(filename))
Explanation: Next, we'll train each neural network and save the trained model to file.
End of explanation
# load trained models from file
def load_models(n_models):
all_models = []
for i in range(n_models):
filename = 'models/model_' + str(i + 1) + '.h5'
# load model from file
model = models.load_model(filename)
# add to list of members
all_models.append(model)
print('>loaded %s' % filename)
return all_models
# load all models
members = load_models(n_members)
print('Loaded %d models' % len(members))
Explanation: The RMSE varies across each of the neural networks.
Load the trained models and create the stacked ensemble model.
The function below loads the trained models and returns them in a list.
End of explanation
# update all layers in all models to not be trainable
for i in range(n_members):
model = members[i]
for layer in model.layers:
# make not trainable
layer.trainable = False
# rename to avoid 'unique layer name' issue
layer._name = 'ensemble_' + str(i+1) + '_' + layer.name
Explanation: We will need to freeze the layers of the pre-trained models since we won't train these models any further. The Stacked Ensemble will then be trainable and will learn how best to combine the results of the ensemble members.
End of explanation
member_inputs = [model.input for model in members]
# concatenate merge output from each model
member_outputs = [model.output for model in members]
merge = layers.concatenate(member_outputs)
h1 = layers.Dense(30, activation='relu')(merge)
h2 = layers.Dense(20, activation='relu')(h1)
h3 = layers.Dense(10, activation='relu')(h2)
ensemble_output = layers.Dense(1, activation='relu')(h3)
ensemble_model = Model(inputs=member_inputs, outputs=ensemble_output)
# plot graph of ensemble
tf.keras.utils.plot_model(ensemble_model, show_shapes=True, to_file='ensemble_graph.png')
# compile
ensemble_model.compile(loss='mse', optimizer='adam', metrics=['mse'])
Explanation: Lastly, we'll create our Stacked Ensemble model. It is also a neural network. We'll use the Functional Keras API.
End of explanation
FEATURES = ["is_male", "mother_age", "plurality",
"gestation_weeks", "mother_race"]
# stack input features for our tf.dataset
def stack_features(features, label):
for feature in FEATURES:
for i in range(n_members):
features['ensemble_' + str(i+1) + '_' + feature] = features[feature]
return features, label
ensemble_data = train_data.map(stack_features).repeat(1)
ensemble_model.fit(ensemble_data.shuffle(500), epochs=1)
Explanation: We need to adapt our tf.data pipeline to accommodate the multiple inputs for our Stacked Ensemble model.
End of explanation
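The effect of stack_features on a single example can be checked without tf.data at all: it simply copies each feature under the prefixed input names that the three member models expect. A plain-Python sketch with made-up values:

```python
FEATURES = ["is_male", "mother_age", "plurality",
            "gestation_weeks", "mother_race"]
n_members = 3

def stack_features(features, label):
    # Duplicate each feature under 'ensemble_<i>_<feature>' so the
    # named inputs of all three member models receive the same values.
    for feature in FEATURES:
        for i in range(n_members):
            features["ensemble_" + str(i + 1) + "_" + feature] = features[feature]
    return features, label

example = {"is_male": "True", "mother_age": 29.0, "plurality": "Single(1)",
           "gestation_weeks": 39.0, "mother_race": "1"}
stacked, label = stack_features(dict(example), 7.5)
print(len(stacked))   # 5 original keys + 3 * 5 prefixed copies = 20
```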
val_loss, val_mse = ensemble_model.evaluate(test_data.map(stack_features))
print("Validation RMSE: {}".format(val_mse**0.5))
Explanation: Lastly, we will evaluate our Stacked Ensemble against the test set.
End of explanation |
7,619 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
Step3: And we can see the characters encoded as integers.
Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
Step5: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this
Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
Step7: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
Exercise
Step8: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like
```python
def build_cell(num_units, keep_prob)
Step9: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a full connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Exercise
Step10: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Exercise
Step11: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if the gradients grow larger than that threshold, we scale them back down to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
Step12: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Exercise
Step13: Hyperparameters
Here are the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular
Step14: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Exercise
Step15: Saved checkpoints
Read up on saving and loading checkpoints here
Step16: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
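One way to implement that top-N filtering (a NumPy sketch — the name pick_top_n and its exact interface are my assumption, not code from this notebook): zero out everything below the N largest probabilities, renormalize, and sample from what remains.

```python
import numpy as np

def pick_top_n(preds, vocab_size, top_n=5):
    # Zero out all but the top_n probabilities, renormalize,
    # then sample a character index from the reduced distribution.
    p = np.squeeze(np.asarray(preds, dtype=np.float64)).copy()
    p[np.argsort(p)[:-top_n]] = 0.0
    p = p / p.sum()
    return int(np.random.choice(vocab_size, 1, p=p)[0])

probs = np.array([0.02, 0.40, 0.05, 0.30, 0.03, 0.20])
c = pick_top_n(probs, vocab_size=6, top_n=3)
print(c in {1, 3, 5})   # only the three most likely characters can be drawn
```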
Step17: Here, pass in the path to a checkpoint and sample from the network. | Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
Explanation: Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
with open('anna.txt', 'r') as f:
text=f.read()
vocab = sorted(set(text))
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
text[:100]
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
encoded[:100]
Explanation: And we can see the characters encoded as integers.
End of explanation
len(vocab)
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
def get_batches(arr, batch_size, n_steps):
'''Create a generator that returns batches of size
batch_size x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
batch_size: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the number of characters per batch and number of batches we can make
characters_per_batch =
n_batches =
# Keep only enough characters to make full batches
arr =
# Reshape into batch_size rows
arr =
for n in range(0, arr.shape[1], n_steps):
# The features
x =
# The targets, shifted by one
y =
yield x, y
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/sequence_batching@1x.png" width=500px>
<br>
We start with our text encoded as integers in one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the total number of batches, $K$, we can make from the array arr, you divide the length of arr by the number of characters per batch. Once you know the number of batches, you can get the total number of characters to keep from arr, $N * M * K$.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (batch_size below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the $N \times (M * K)$ array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character.
The way I like to do this window is to use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
Exercise: Write the code for creating batches in the function below. The exercises in this notebook will not be easy. I've provided a notebook with solutions alongside this notebook. If you get stuck, checkout the solutions. The most important thing is that you don't copy and paste the code into here, type out the solution code yourself.
End of explanation
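If you want to check your answer against something, here is one possible solution (a reference sketch — wrapping the very last target around to the window start is a common convention, not the only valid choice):

```python
import numpy as np

def get_batches(arr, batch_size, n_steps):
    '''Yield batches of size batch_size x n_steps from arr.'''
    characters_per_batch = batch_size * n_steps
    n_batches = len(arr) // characters_per_batch
    # Keep only enough characters to make full batches
    arr = arr[:n_batches * characters_per_batch]
    # Reshape into batch_size rows
    arr = arr.reshape((batch_size, -1))
    for n in range(0, arr.shape[1], n_steps):
        # The features
        x = arr[:, n:n + n_steps]
        # The targets, shifted by one (last target wraps to the window start)
        y = np.zeros_like(x)
        y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
        yield x, y

x, y = next(get_batches(np.arange(1000), 10, 50))
print(x.shape, y.shape)   # (10, 50) (10, 50)
```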
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs =
targets =
# Keep probability placeholder for drop out layers
keep_prob =
return inputs, targets, keep_prob
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
Exercise: Create the input placeholders in the function below.
End of explanation
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
# Use a basic LSTM cell
lstm =
# Add dropout to the cell outputs
drop =
# Stack up multiple LSTM layers, for deep learning
cell =
initial_state =
return cell, initial_state
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like
```python
def build_cell(num_units, keep_prob):
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])
```
Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Below, we implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
lstm_output: List of output tensors from the LSTM layer
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# Concatenate lstm_output over axis 1 (the columns)
seq_output =
# Reshape seq_output to a 2D tensor with lstm_size columns
x =
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
# Create the weight and bias variables here
softmax_w =
softmax_b =
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits =
# Use softmax to get the probabilities for predicted characters
out =
return out, logits
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Exercise: Implement the output layer in the function below.
End of explanation
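The concat-and-reshape step is where most shape mistakes happen, so it's worth checking the arithmetic outside TensorFlow first. A NumPy sketch with toy sizes (the numbers are arbitrary): $M$ per-step outputs of shape $N \times L$ collapse into one $(N * M) \times L$ matrix, ordered sequence by sequence, step by step.

```python
import numpy as np

N, M, L, C = 4, 3, 5, 7   # batch size, steps, LSTM units, character classes

# lstm_output as a list of M per-step outputs, each N x L; filling each
# step with its index makes the final row ordering easy to inspect.
lstm_output = [np.full((N, L), float(step)) for step in range(M)]

seq_output = np.concatenate(lstm_output, axis=1)   # N x (M*L)
x = seq_output.reshape(-1, L)                      # (N*M) x L

softmax_w = np.zeros((L, C))
softmax_b = np.zeros(C)
logits = x @ softmax_w + softmax_b                 # (N*M) x C

print(x.shape, logits.shape)   # (12, 5) (12, 7)
```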
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per sequence per step
y_one_hot =
y_reshaped =
# Softmax cross entropy loss
loss =
return loss
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Exercise: Implement the loss calculation in the function below.
End of explanation
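The target-side reshaping can be sanity-checked the same way. A NumPy sketch of the one-hot encode plus reshape (the cross-entropy itself is left to tf.nn.softmax_cross_entropy_with_logits):

```python
import numpy as np

N, M, C = 2, 3, 4                      # batch size, steps, classes
targets = np.array([[0, 2, 1],
                    [3, 1, 0]])        # encoded characters, N x M

y_one_hot = np.eye(C)[targets]         # N x M x C
y_reshaped = y_one_hot.reshape(-1, C)  # (N*M) x C, matching the logits

print(y_reshaped.shape)   # (6, 4)
```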
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optimizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if the gradients grow larger than that threshold, we scale them back down to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
End of explanation
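What tf.clip_by_global_norm actually does is rescale the whole set of gradients by a single factor so that their joint L2 norm is at most grad_clip. A NumPy sketch of that behaviour (an illustration of the semantics, not the TF implementation):

```python
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    # Rescale all gradients together so the joint L2 norm
    # of the whole list is at most clip_norm.
    global_norm = float(np.sqrt(sum(np.sum(g ** 2) for g in grads)))
    scale = clip_norm / max(global_norm, clip_norm)
    return [g * scale for g in grads], global_norm

grads = [np.array([3.0, 4.0]), np.array([12.0])]   # joint norm = 13
clipped, norm = clip_by_global_norm(grads, clip_norm=5.0)
print(norm)   # 13.0
```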
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob =
# Build the LSTM cell
cell, self.initial_state =
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot =
# Run each sequence step through the RNN with tf.nn.dynamic_rnn
outputs, state =
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits =
# Loss and optimizer (with gradient clipping)
self.loss =
self.optimizer =
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Exercise: Use the functions you've implemented previously and tf.nn.dynamic_rnn to build the network.
End of explanation
batch_size = 10 # Sequences per batch
num_steps = 50 # Number of sequence steps per batch
lstm_size = 128 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.01 # Learning rate
keep_prob = 0.5 # Dropout keep probability
Explanation: Hyperparameters
Here are the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
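If you want to sanity-check the printed parameter count yourself, here is a rough back-of-the-envelope estimate (a sketch only; exact counts depend on the cell implementation):

```python
def lstm_param_count(input_size, lstm_size, num_layers):
    """Rough parameter count for a stacked LSTM.

    Each layer has 4 gates; each gate has a weight matrix over the
    concatenated [input, hidden] vector plus a bias vector.
    """
    total = 0
    in_size = input_size
    for _ in range(num_layers):
        total += 4 * ((in_size + lstm_size) * lstm_size + lstm_size)
        in_size = lstm_size  # the next layer's input is this layer's output
    return total

# e.g. an 83-character vocabulary, 2 layers of 512 units
print(lstm_param_count(83, 512, 2))  # → 3319808, about 3.3 million
```

Compare that number against the size of your text file to decide whether to grow or shrink the model.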
Best models strategy
The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
epochs = 20
# Print losses every N interations
print_every_n = 50
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
if (counter % print_every_n == 0):
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Exercise: Set the hyperparameters above to train the network. Watch the training loss, it should be consistently dropping. Also, I highly advise running this on a GPU.
End of explanation
tf.train.get_checkpoint_state('checkpoints')
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
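The top-N trick itself doesn't need the network at all; here is a standalone sketch of the same idea on a made-up probability vector (the values are illustrative, not from a trained model):

```python
import numpy as np

def pick_top_n_demo(preds, vocab_size, top_n=5):
    p = np.squeeze(preds).astype(float)
    p[np.argsort(p)[:-top_n]] = 0   # zero all but the top_n probabilities
    p = p / p.sum()                 # renormalize so they sum to 1
    return np.random.choice(vocab_size, p=p)

probs = np.array([0.01, 0.02, 0.5, 0.3, 0.1, 0.04, 0.03])
c = pick_top_n_demo(probs, len(probs), top_n=3)
print(c)  # always one of 2, 3 or 4 -- the three most likely indices
```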
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation |
7,620 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simulating a Yo-Yo
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License
Step1: Yo-yo
Suppose you are holding a yo-yo with a length of string wound around its axle, and you drop it while holding the end of the string stationary. As gravity accelerates the yo-yo downward, tension in the string exerts a force upward. Since this force acts on a point offset from the center of mass, it exerts a torque that causes the yo-yo to spin.
The following diagram shows the forces on the yo-yo and the resulting torque. The outer shaded area shows the body of the yo-yo. The inner shaded area shows the rolled up string, the radius of which changes as the yo-yo unrolls.
In this system, we can't figure out the linear and angular acceleration independently; we have to solve a system of equations
Step2: The results are
$T = m g I / I^* $
$a = -m g r^2 / I^* $
$\alpha = m g r / I^* $
where $I^*$ is the augmented moment of inertia, $I + m r^2$.
You can also see the derivation of these equations in this video.
We can use these equations for $a$ and $\alpha$ to write a slope function and simulate this system.
Exercise
Step3: Rmin is the radius of the axle. Rmax is the radius of the axle plus rolled string.
Rout is the radius of the yo-yo body. mass is the total mass of the yo-yo, ignoring the string.
L is the length of the string.
g is the acceleration of gravity.
Step4: Based on these parameters, we can compute the moment of inertia for the yo-yo, modeling it as a solid cylinder with uniform density (see here).
In reality, the distribution of weight in a yo-yo is often designed to achieve desired effects. But we'll keep it simple.
Step5: And we can compute k, which is the constant that determines how the radius of the spooled string decreases as it unwinds.
Step6: The state variables we'll use are angle, theta, angular velocity, omega, the length of the spooled string, y, and the linear velocity of the yo-yo, v.
Here is a State object with the the initial conditions.
Step7: And here's a System object with init and t_end (chosen to be longer than I expect for the yo-yo to drop 1 m).
Step8: Write a slope function for this system, using these results from the book
Step9: Test your slope function with the initial conditions.
The results should be approximately
0, 180.5, 0, -2.9
Step10: Notice that the initial acceleration is substantially smaller than g because the yo-yo has to start spinning before it can fall.
Write an event function that will stop the simulation when y is 0.
Step11: Test your event function
Step12: Then run the simulation.
Step13: Check the final state. If things have gone according to plan, the final value of y should be close to 0.
Step14: How long does it take for the yo-yo to fall 1 m? Does the answer seem reasonable?
The following cells plot the results.
theta should increase and accelerate.
Step15: y should decrease and accelerate down.
Step16: Plot velocity as a function of time; is the acceleration constant?
Step17: We can use gradient to estimate the derivative of v. How does the acceleration of the yo-yo compare to g?
Step18: And we can use the formula for r to plot the radius of the spooled thread over time. | Python Code:
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
Explanation: Simulating a Yo-Yo
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
End of explanation
from sympy import symbols, Eq, solve
T, a, alpha, I, m, g, r = symbols('T a alpha I m g r')
eq1 = Eq(a, -r * alpha)
eq1
eq2 = Eq(T - m * g, m * a)
eq2
eq3 = Eq(T * r, I * alpha)
eq3
soln = solve([eq1, eq2, eq3], [T, a, alpha])
soln[T]
soln[a]
soln[alpha]
Explanation: Yo-yo
Suppose you are holding a yo-yo with a length of string wound around its axle, and you drop it while holding the end of the string stationary. As gravity accelerates the yo-yo downward, tension in the string exerts a force upward. Since this force acts on a point offset from the center of mass, it exerts a torque that causes the yo-yo to spin.
The following diagram shows the forces on the yo-yo and the resulting torque. The outer shaded area shows the body of the yo-yo. The inner shaded area shows the rolled up string, the radius of which changes as the yo-yo unrolls.
In this system, we can't figure out the linear and angular acceleration independently; we have to solve a system of equations:
$\sum F = m a $
$\sum \tau = I \alpha$
where the summations indicate that we are adding up forces and torques.
As in the previous examples, linear and angular velocity are related because of the way the string unrolls:
$\frac{dy}{dt} = -r \frac{d \theta}{dt} $
In this example, the linear and angular accelerations have opposite sign. As the yo-yo rotates counter-clockwise, $\theta$ increases and $y$, which is the length of the rolled part of the string, decreases.
Taking the derivative of both sides yields a similar relationship between linear and angular acceleration:
$\frac{d^2 y}{dt^2} = -r \frac{d^2 \theta}{dt^2} $
Which we can write more concisely:
$ a = -r \alpha $
This relationship is not a general law of nature; it is specific to scenarios like this where there is rolling without stretching or slipping.
Because of the way we've set up the problem, $y$ actually has two meanings: it represents the length of the rolled string and the height of the yo-yo, which decreases as the yo-yo falls. Similarly, $a$ represents acceleration in the length of the rolled string and the height of the yo-yo.
We can compute the acceleration of the yo-yo by adding up the linear forces:
$\sum F = T - mg = ma $
Where $T$ is positive because the tension force points up, and $mg$ is negative because gravity points down.
Because gravity acts on the center of mass, it creates no torque, so the only torque is due to tension:
$\sum \tau = T r = I \alpha $
Positive (upward) tension yields positive (counter-clockwise) angular acceleration.
Now we have three equations in three unknowns, $T$, $a$, and $\alpha$, with $I$, $m$, $g$, and $r$ as known parameters. We could solve these equations by hand, but we can also get SymPy to do it for us.
End of explanation
Rmin = 8e-3 # m
Rmax = 16e-3 # m
Rout = 35e-3 # m
mass = 50e-3 # kg
L = 1 # m
g = 9.8 # m / s**2
Explanation: The results are
$T = m g I / I^* $
$a = -m g r^2 / I^* $
$\alpha = m g r / I^* $
where $I^*$ is the augmented moment of inertia, $I + m r^2$.
You can also see the derivation of these equations in this video.
We can use these equations for $a$ and $\alpha$ to write a slope function and simulate this system.
Exercise: Simulate the descent of a yo-yo. How long does it take to reach the end of the string?
Here are the system parameters:
End of explanation
Explanation: Rmin is the radius of the axle. Rmax is the radius of the axle plus rolled string.
Rout is the radius of the yo-yo body. mass is the total mass of the yo-yo, ignoring the string.
L is the length of the string.
g is the acceleration of gravity.
End of explanation
I = mass * Rout**2 / 2
I
Explanation: Based on these parameters, we can compute the moment of inertia for the yo-yo, modeling it as a solid cylinder with uniform density (see here).
In reality, the distribution of weight in a yo-yo is often designed to achieve desired effects. But we'll keep it simple.
End of explanation
k = (Rmax**2 - Rmin**2) / 2 / L
k
Explanation: And we can compute k, which is the constant that determines how the radius of the spooled string decreases as it unwinds.
End of explanation
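A quick standalone check (re-declaring the parameters) that this k makes the spool radius $r = \sqrt{2 k y + R_{min}^2}$ interpolate between Rmin and Rmax:

```python
import numpy as np

Rmin, Rmax, L = 8e-3, 16e-3, 1.0          # same values as above
k = (Rmax**2 - Rmin**2) / 2 / L
# Fully wound (y = L) should give Rmax; fully unwound (y = 0) gives Rmin
assert np.isclose(np.sqrt(2*k*L + Rmin**2), Rmax)
assert np.isclose(np.sqrt(2*k*0 + Rmin**2), Rmin)
print(k)
```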
init = State(theta=0, omega=0, y=L, v=0)
Explanation: The state variables we'll use are angle, theta, angular velocity, omega, the length of the spooled string, y, and the linear velocity of the yo-yo, v.
Here is a State object with the the initial conditions.
End of explanation
system = System(init=init, t_end=2)
Explanation: And here's a System object with init and t_end (chosen to be longer than I expect for the yo-yo to drop 1 m).
End of explanation
# Solution
def slope_func(t, state, system):
theta, omega, y, v = state
r = np.sqrt(2*k*y + Rmin**2)
alpha = mass * g * r / (I + mass * r**2)
a = -r * alpha
return omega, alpha, v, a
Explanation: Write a slope function for this system, using these results from the book:
$ r = \sqrt{2 k y + R_{min}^2} $
$ T = m g I / I^* $
$ a = -m g r^2 / I^* $
$ \alpha = m g r / I^* $
where $I^*$ is the augmented moment of inertia, $I + m r^2$.
End of explanation
# Solution
slope_func(0, system.init, system)
Explanation: Test your slope function with the initial conditions.
The results should be approximately
0, 180.5, 0, -2.9
End of explanation
# Solution
def event_func(t, state, system):
theta, omega, y, v = state
return y
Explanation: Notice that the initial acceleration is substantially smaller than g because the yo-yo has to start spinning before it can fall.
Write an event function that will stop the simulation when y is 0.
End of explanation
# Solution
event_func(0, system.init, system)
Explanation: Test your event function:
End of explanation
# Solution
results, details = run_solve_ivp(system, slope_func,
events=event_func, max_step=0.05)
details.message
Explanation: Then run the simulation.
End of explanation
# Solution
results.tail()
Explanation: Check the final state. If things have gone according to plan, the final value of y should be close to 0.
End of explanation
results.theta.plot(color='C0', label='theta')
decorate(xlabel='Time (s)',
ylabel='Angle (rad)')
Explanation: How long does it take for the yo-yo to fall 1 m? Does the answer seem reasonable?
The following cells plot the results.
theta should increase and accelerate.
End of explanation
results.y.plot(color='C1', label='y')
decorate(xlabel='Time (s)',
ylabel='Length (m)')
Explanation: y should decrease and accelerate down.
End of explanation
results.v.plot(label='velocity', color='C3')
decorate(xlabel='Time (s)',
ylabel='Velocity (m/s)')
Explanation: Plot velocity as a function of time; is the acceleration constant?
End of explanation
a = gradient(results.v)
a.plot(label='acceleration', color='C4')
decorate(xlabel='Time (s)',
ylabel='Acceleration (m/$s^2$)')
Explanation: We can use gradient to estimate the derivative of v. How does the acceleration of the yo-yo compare to g?
End of explanation
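Here is a standalone sketch of the same numerical idea using numpy.gradient directly (independent of the modsim helper): for a velocity that grows linearly in time, the estimated derivative recovers the constant acceleration.

```python
import numpy as np

t = np.linspace(0, 2, 201)
v = -3.0 * t                  # velocity under a constant acceleration
a = np.gradient(v, t)         # numerical derivative dv/dt
print(a.min(), a.max())       # both close to -3.0
```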
r = np.sqrt(2*k*results.y + Rmin**2)
r.plot(label='radius')
decorate(xlabel='Time (s)',
ylabel='Radius of spooled thread (m)')
Explanation: And we can use the formula for r to plot the radius of the spooled thread over time.
End of explanation |
7,621 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Earth Engine REST API Quickstart
This is a demonstration notebook for using the Earth Engine REST API. See the complete guide for more information
Step1: Define service account credentials
Step2: Create an authorized session to make HTTP requests
Step3: Get a list of images at a point
Query for Sentinel-2 images at a specific location, in a specific time range and with estimated cloud cover less than 10%.
Step4: Inspect an image
Get the asset name from the previous output and request its metadata.
Step5: Get pixels from one of the images
Step6: Get a thumbnail of an image
Note that name and asset are already defined from the request to get the asset metadata. | Python Code:
# INSERT YOUR PROJECT HERE
PROJECT = 'your-project'
!gcloud auth login --project {PROJECT}
Explanation: Earth Engine REST API Quickstart
This is a demonstration notebook for using the Earth Engine REST API. See the complete guide for more information: https://developers.google.com/earth-engine/reference/Quickstart.
Authentication
The first step is to choose a project and login to Google Cloud.
End of explanation
# INSERT YOUR SERVICE ACCOUNT HERE
SERVICE_ACCOUNT='your-service-account@your-project.iam.gserviceaccount.com'
KEY = 'private-key.json'
!gcloud iam service-accounts keys create {KEY} --iam-account {SERVICE_ACCOUNT}
Explanation: Define service account credentials
End of explanation
from google.auth.transport.requests import AuthorizedSession
from google.oauth2 import service_account
credentials = service_account.Credentials.from_service_account_file(KEY)
scoped_credentials = credentials.with_scopes(
['https://www.googleapis.com/auth/cloud-platform'])
session = AuthorizedSession(scoped_credentials)
url = 'https://earthengine.googleapis.com/v1alpha/projects/earthengine-public/assets/LANDSAT'
response = session.get(url)
from pprint import pprint
import json
pprint(json.loads(response.content))
Explanation: Create an authorized session to make HTTP requests
End of explanation
import urllib
coords = [-122.085, 37.422]
project = 'projects/earthengine-public'
asset_id = 'COPERNICUS/S2'
name = '{}/assets/{}'.format(project, asset_id)
url = 'https://earthengine.googleapis.com/v1alpha/{}:listImages?{}'.format(
name, urllib.parse.urlencode({
'startTime': '2017-04-01T00:00:00.000Z',
'endTime': '2017-05-01T00:00:00.000Z',
'region': '{"type":"Point", "coordinates":' + str(coords) + '}',
'filter': 'CLOUDY_PIXEL_PERCENTAGE < 10',
}))
response = session.get(url)
content = response.content
for asset in json.loads(content)['images']:
id = asset['id']
cloud_cover = asset['properties']['CLOUDY_PIXEL_PERCENTAGE']
print('%s : %s' % (id, cloud_cover))
Explanation: Get a list of images at a point
Query for Sentinel-2 images at a specific location, in a specific time range and with estimated cloud cover less than 10%.
End of explanation
asset_id = 'COPERNICUS/S2/20170430T190351_20170430T190351_T10SEG'
name = '{}/assets/{}'.format(project, asset_id)
url = 'https://earthengine.googleapis.com/v1alpha/{}'.format(name)
response = session.get(url)
content = response.content
asset = json.loads(content)
print('Band Names: %s' % ','.join(band['id'] for band in asset['bands']))
print('First Band: %s' % json.dumps(asset['bands'][0], indent=2, sort_keys=True))
Explanation: Inspect an image
Get the asset name from the previous output and request its metadata.
End of explanation
import numpy
import io
name = '{}/assets/{}'.format(project, asset_id)
url = 'https://earthengine.googleapis.com/v1alpha/{}:getPixels'.format(name)
body = json.dumps({
'fileFormat': 'NPY',
'bandIds': ['B2', 'B3', 'B4', 'B8'],
'grid': {
'affineTransform': {
'scaleX': 10,
'scaleY': -10,
'translateX': 499980,
'translateY': 4200000,
},
'dimensions': {'width': 256, 'height': 256},
},
})
pixels_response = session.post(url, body)
pixels_content = pixels_response.content
array = numpy.load(io.BytesIO(pixels_content))
print('Shape: %s' % (array.shape,))
print('Data:')
print(array)
Explanation: Get pixels from one of the images
End of explanation
url = 'https://earthengine.googleapis.com/v1alpha/{}:getPixels'.format(name)
body = json.dumps({
'fileFormat': 'PNG',
'bandIds': ['B4', 'B3', 'B2'],
'region': asset['geometry'],
'grid': {
'dimensions': {'width': 256, 'height': 256},
},
'visualizationOptions': {
'ranges': [{'min': 0, 'max': 3000}],
},
})
image_response = session.post(url, body)
image_content = image_response.content
# Import the Image function from the IPython.display module.
from IPython.display import Image
Image(image_content)
Explanation: Get a thumbnail of an image
Note that name and asset are already defined from the request to get the asset metadata.
End of explanation |
7,622 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Prepare Notebook
To run this code, written at the very end of Chapter 3, you need a working empty database.
To move the project into a valid state, please use the command git chapter [chapter-number] to find a valid commit. A commit at the end of Chapter 3, or any commit in Chapters 4-9, should work just fine. Use git checkout [commit] to change the state of the project. Once there, delete the database (if it exists).
$ rm db.sqlite3
To (re-)create the database
Step1: Interacting With the Database
Step2: Creation and Destruction with Managers
Step3: Methods of Data Retrieval
Step4: The get method
Step5: The filter method
Step6: Chaining Calls
Step7: values and values_list
Step8: Data in Memory vs Data in the Database
Step9: Connecting Data through Relations | Python Code:
from datetime import date
from organizer.models import Tag, Startup, NewsLink
from blog.models import Post
Explanation: Prepare Notebook
To run this code, written at the very end of Chapter 3, you need a working empty database.
To move the project into a valid state, please use the command git chapter [chapter-number] to find a valid commit. A commit at the end of Chapter 3, or any commit in Chapters 4-9, should work just fine. Use git checkout [commit] to change the state of the project. Once there, delete the database (if it exists).
$ rm db.sqlite3
To (re-)create the database:
$ ./manage.py migrate
Please see the Read Me file or the actual book for more details.
End of explanation
edut = Tag(name='Education', slug='education')
edut
edut.save()
edut.delete()
edut # still in memory!
Explanation: Interacting With the Database
End of explanation
type(Tag.objects) # a model manager
Tag.objects.create(name='Video Games', slug='video-games')
# create multiple objects in a go!
Tag.objects.bulk_create([
Tag(name='Django', slug='django'),
Tag(name='Mobile', slug='mobile'),
Tag(name='Web', slug='web'),
])
Tag.objects.all()
Tag.objects.all()[0] # acts like a list
type(Tag.objects.all()) # is not a list
# managers are not accessible to model instances, only to model classes!
try:
edut.objects
except AttributeError as e:
print(e)
Explanation: Creation and Destruction with Managers
End of explanation
Tag.objects.all()
Tag.objects.count()
Explanation: Methods of Data Retrieval
End of explanation
Tag.objects.get(slug='django')
type(Tag.objects.all())
type(Tag.objects.get(slug='django'))
# case-sensitive!
try:
Tag.objects.get(slug='Django')
except Tag.DoesNotExist as e:
print(e)
# the i is for case-Insensitive
Tag.objects.get(slug__iexact='DJANGO')
Tag.objects.get(slug__istartswith='DJ')
Tag.objects.get(slug__contains='an')
# get always returns a single object
try:
# djangO, mObile, videO-games
Tag.objects.get(slug__contains='o')
except Tag.MultipleObjectsReturned as e:
print(e)
Explanation: The get method
End of explanation
## unlike get, can fetch multiple objects
Tag.objects.filter(slug__contains='o')
type(Tag.objects.filter(slug__contains='o'))
Explanation: The filter method
End of explanation
Tag.objects.filter(slug__contains='o').order_by('-name')
# first we call order_by on the manager
Tag.objects.order_by('-name')
# now we call filter on the manager, and order the resulting queryset
Tag.objects.filter(slug__contains='e').order_by('-name')
Explanation: Chaining Calls
End of explanation
Tag.objects.values_list()
type(Tag.objects.values_list())
Tag.objects.values_list('name', 'slug')
Tag.objects.values_list('name')
Tag.objects.values_list('name', flat=True)
type(Tag.objects.values_list('name', flat=True))
Explanation: values and values_list
End of explanation
jb = Startup.objects.create(
name='JamBon Software',
slug='jambon-software',
contact='django@jambonsw.com',
description='Web and Mobile Consulting.\n'
'Django Tutoring.\n',
founded_date=date(2013, 1, 18),
website='https://jambonsw.com/',
)
jb # this output only clear because of __str__()
jb.founded_date
jb.founded_date = date(2014,1,1)
# we're not calling save() !
jb.founded_date
# get version in database
jb = Startup.objects.get(slug='jambon-software')
# work above is all for nought because we didn't save()
jb.founded_date
Explanation: Data in Memory vs Data in the Database
End of explanation
djt = Post.objects.create(
title='Django Training',
slug='django-training',
text=(
"Learn Django in a classroom setting "
"with JamBon Software."),
)
djt
djt.pub_date = date(2013, 1, 18)
djt.save()
djt
type(djt.tags)
type(djt.startups)
djt.tags.all()
djt.startups.all()
django = Tag.objects.get(slug__contains='django')
djt.tags.add(django)
djt.tags.all()
django.blog_posts.all() # a "reverse" relation
django.startup_set.add(jb) # a "reverse" relation
django.startup_set.all()
jb.tags.all() # the "forward" relation
# one more time, for repetition!
djt
# "forward" relation
djt.startups.add(jb)
djt.startups.all()
jb.blog_posts.all() # "reverse" relation
Explanation: Connecting Data through Relations
End of explanation |
7,623 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Calibrations
This notebook demonstrates how to calibrate real and reciprocal space coordinates of scanning electron diffraction data. Calibrations include correcting the diffraction pattern for lens distortions and determining the rotation between the scan and diffraction planes based on data acquired from reference standards.
This functionality has been checked to run in pyxem-0.11.0 (May 2020). Bugs are always possible, do not trust the code blindly, and if you experience any issues please report them here
Step1: Download and the data for this demo from here and put in directory with notebooks
Step2: Load a VDF image of Au X-grating for scan pixel calibration
Step3: Load spatially averaged diffraction pattern from MoO3 standard for rotation calibration
Step4: Load a VDF image of MoO3 standard for rotation calibration
Step5: Initialise a CalibrationGenerator with the CalibrationDataLibrary
Step6: <a id='ids'></a>
2. Determine Lens Distortions
Lens distortions are assumed to be dominated by elliptical distortion due to the projector lens system. See, for example
Step7: Obtain residuals before and after distortion correction and plot to inspect, the aim is for any differences to be small and circularly symmetric
Step8: Plot distortion corrected diffraction pattern with adjustable reference circle for inspection
Step9: Check the affine matrix, which may be applied to other data
Step10: Inspect the ring fitting parameters
Step11: Calculate correction matrix and confirm that in this case it is equal to the affine matrix
Step12: <a href='#cal'></a>
3. Determining Real & Reciprocal Space Scales
Determine the diffraction pattern calibration in reciprocal Angstroms per pixel
Step13: Plot the calibrated diffraction data to check it looks about right
Step14: Plot the cross grating image data to define the line along which to take trace
Step15: Obtain the navigation calibration from the trace | Python Code:
%matplotlib inline
import numpy as np
import pyxem as pxm
import hyperspy.api as hs
from pyxem.libraries.calibration_library import CalibrationDataLibrary
from pyxem.generators.calibration_generator import CalibrationGenerator
Explanation: Calibrations
This notebook demonstrates how to calibrate real and reciprocal space coordinates of scanning electron diffraction data. Calibrations include correcting the diffraction pattern for lens distortions and determining the rotation between the scan and diffraction planes based on data acquired from reference standards.
This functionality has been checked to run in pyxem-0.11.0 (May 2020). Bugs are always possible, do not trust the code blindly, and if you experience any issues please report them here: https://github.com/pyxem/pyxem-demos/issues
Contents
<a href='#ini'> Load Data & Initialize Generator</a>
<a href='#dis'> Determine Lens Distortions</a>
<a href='#cal'> Determine Real & Reciprocal Space Calibrations</a>
<a href='#rot'> Determin Real & Reciprocal Space Rotation</a>
Import pyxem, required libraries and pyxem modules
End of explanation
au_dpeg = hs.load('./data/03/au_xgrating_20cm.tif')
au_dpeg.plot(vmax=1)
Explanation: Download the data for this demo from the link below and put it in the directory with the notebooks:
https://drive.google.com/drive/folders/1guzxUcHYNkB3CMClQ-Dhv9cCc1-N15Fj?usp=sharing
<a id='ini'></a>
1. Load Data & Initialize Generator
Load spatially averaged diffraction pattern from Au X-grating for distortion calibration
End of explanation
au_im = hs.load('./data/03/au_xgrating_100kX.hspy')
au_im.plot()
Explanation: Load a VDF image of Au X-grating for scan pixel calibration
End of explanation
moo3_dpeg = hs.load('./data/03/moo3_20cm.tif')
moo3_dpeg.plot(vmax=1)
Explanation: Load spatially averaged diffraction pattern from MoO3 standard for rotation calibration
End of explanation
moo3_im = hs.load('./data/03/moo3_100kX.tif')
moo3_im.plot()
Explanation: Load a VDF image of MoO3 standard for rotation calibration
End of explanation
#Calibration Standard can only be gold for now
cal = CalibrationGenerator(diffraction_pattern=au_dpeg,
grating_image=au_im)
Explanation: Initialise a CalibrationGenerator with the Au calibration data
End of explanation
cal.get_elliptical_distortion(mask_radius=10,
scale=100, amplitude=1000,
asymmetry=0.9,spread=2)
Explanation: <a id='dis'></a>
2. Determine Lens Distortions
Lens distortions are assumed to be dominated by elliptical distortion due to the projector lens system. See, for example: https://www.sciencedirect.com/science/article/pii/S0304399105001087?via%3Dihub
Distortion correction is based on measuring the ellipticity of a ring pattern obtained from an Au X-grating calibration standard in scaninng mode.
Determine distortion correction matrix by ring fitting
End of explanation
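The correction matrix the ring fit produces can be illustrated with plain numpy: an elliptical distortion is undone by rotating the ellipse axes onto x/y, rescaling the stretched axis, and rotating back. The asymmetry and angle below are invented for illustration, not the values fitted by the generator.

```python
import numpy as np

def elliptical_correction_matrix(asymmetry, angle):
    """2x2 matrix that rescales the stretched ellipse axis back to
    the unstretched one, undoing an elliptical distortion."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])   # rotate the ellipse axes onto x/y
    S = np.diag([1.0, asymmetry])     # shrink the stretched axis
    return R @ S @ R.T                # rotate back

# Hypothetical fit values: 5% ellipticity with axes rotated by 30 degrees
theta = np.deg2rad(30.0)
C = elliptical_correction_matrix(0.95, theta)

# A point sitting on the stretched axis of the distorted ring...
p = np.array([-np.sin(theta), np.cos(theta)]) / 0.95
# ...lands back on the unit circle after correction
print(np.linalg.norm(C @ p))
```

Note the resulting matrix is symmetric, as expected for a pure (rotation-free) elliptical correction.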
residuals = cal.get_distortion_residuals(mask_radius=10, spread=2)
residuals.plot(cmap='RdBu', vmax=0.04)
Explanation: Obtain residuals before and after distortion correction and plot to inspect; the aim is for any differences to be small and circularly symmetric
End of explanation
cal.plot_corrected_diffraction_pattern(vmax=0.1)
Explanation: Plot distortion corrected diffraction pattern with adjustable reference circle for inspection
End of explanation
cal.affine_matrix
Explanation: Check the affine matrix, which may be applied to other data
End of explanation
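The affine matrix is stored in homogeneous (3x3) form so it can be handed to image-warping routines. A sketch of how such a matrix acts on detector coordinates, using an invented matrix rather than the one computed above:

```python
import numpy as np

# Hypothetical 3x3 affine matrix in homogeneous form (2x2 linear part,
# no translation) -- NOT the matrix computed by the generator above.
A = np.array([[0.99, 0.01, 0.0],
              [0.01, 0.97, 0.0],
              [0.00, 0.00, 1.0]])

def transform_points(affine, xy):
    """Apply a homogeneous affine matrix to an (N, 2) array of
    detector coordinates, row-vector convention."""
    homog = np.hstack([xy, np.ones((xy.shape[0], 1))])  # (N, 3)
    return (homog @ affine.T)[:, :2]

pts = np.array([[10.0, 0.0], [0.0, 10.0]])
corrected = transform_points(A, pts)
print(corrected)
```

In practice the matrix would be applied to the pattern intensities (image warping) rather than to point coordinates, but the coordinate mapping is the same.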
cal.ring_params
Explanation: Inspect the ring fitting parameters
End of explanation
cal.get_correction_matrix()
Explanation: Calculate correction matrix and confirm that in this case it is equal to the affine matrix
End of explanation
cal.get_diffraction_calibration(mask_length=30,
linewidth=5)
Explanation: <a id='cal'></a>
3. Determining Real & Reciprocal Space Scales
Determine the diffraction pattern calibration in reciprocal Angstroms per pixel
End of explanation
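The calibration itself reduces to dividing a known reciprocal-lattice vector length by its measured ring radius in pixels. A back-of-the-envelope version for the Au {111} ring, with a made-up radius for illustration:

```python
d_111 = 2.355        # Au {111} spacing in Angstroms (a = 4.078 A / sqrt(3))
g_111 = 1.0 / d_111  # ring position in reciprocal Angstroms
r_px = 42.0          # hypothetical measured {111} ring radius in pixels

diffraction_calibration = g_111 / r_px  # reciprocal Angstroms per pixel
print(diffraction_calibration)
```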
cal.plot_calibrated_data(data_to_plot='au_x_grating_dp',
cmap='magma', vmax=0.1)
Explanation: Plot the calibrated diffraction data to check it looks about right
End of explanation
cal.grating_image.plot()
line = hs.roi.Line2DROI(x1=4.83957, y1=44.4148, x2=246.46, y2=119.159, linewidth=5.57199)
line.add_widget(cal.grating_image)
trace = line(cal.grating_image)
trace = trace.as_signal1D(spectral_axis=0)
trace.plot()
Explanation: Plot the cross grating image data to define the line along which to take trace
End of explanation
cal.get_navigation_calibration(line_roi=line, x1=40.,x2=232.,
n=3, xspace=500.)
Explanation: Obtain the navigation calibration from the trace
End of explanation |
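The navigation calibration follows the same idea in real space: the pitch spanned by n grating repeats, divided by the pixel distance between the first and last crossing along the trace, gives the scan step. The sketch below reuses the x1, x2, n, xspace values passed to get_navigation_calibration above, assuming xspace is the grating pitch in nm:

```python
# Same values as passed to get_navigation_calibration above
x1, x2 = 40.0, 232.0  # pixel positions of the first and last crossing
n = 3                 # number of grating repeats between them
xspace = 500.0        # grating pitch, assumed here to be in nm

navigation_calibration = n * xspace / (x2 - x1)  # nm per scan pixel
print(navigation_calibration)
```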
7,624 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Aod Plus Ccn
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nuist', 'sandbox-3', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: NUIST
Source ID: SANDBOX-3
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:34
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensionsal forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Aod Plus Ccn
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
7,625 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Markov switching autoregression models
This notebook provides an example of the use of Markov switching models in Statsmodels to replicate a number of results presented in Kim and Nelson (1999). It applies the Hamilton (1989) filter and the Kim (1994) smoother.
This is tested against the Markov-switching models from E-views 8, which can be found at http
Step1: Hamilton (1989) switching model of GNP
This replicates Hamilton's (1989) seminal paper introducing Markov-switching models. The model is an autoregressive model of order 4 in which the mean of the process switches between two regimes. It can be written
Step2: We plot the filtered and smoothed probabilities of a recession. Filtered refers to an estimate of the probability at time $t$ based on data up to and including time $t$ (but excluding time $t+1, ..., T$). Smoothed refers to an estimate of the probability at time $t$ using all the data in the sample.
For reference, the shaded periods represent the NBER recessions.
Step3: From the estimated transition matrix we can calculate the expected duration of a recession versus an expansion.
Step4: In this case, it is expected that a recession will last about one year (4 quarters) and an expansion about two and a half years.
Kim, Nelson, and Startz (1998) Three-state Variance Switching
This model demonstrates estimation with regime heteroskedasticity (switching of variances) and no mean effect. The dataset can be reached at http
Step5: Below we plot the probabilities of being in each of the regimes; only in a few periods is a high-variance regime probable.
Step6: Filardo (1994) Time-Varying Transition Probabilities
This model demonstrates estimation with time-varying transition probabilities. The dataset can be reached at http
Step7: The time-varying transition probabilities are specified by the exog_tvtp parameter.
Here we demonstrate another feature of model fitting - the use of a random search for MLE starting parameters. Because Markov switching models are often characterized by many local maxima of the likelihood function, performing an initial optimization step can be helpful to find the best parameters.
Below, we specify that 20 random perturbations from the starting parameter vector are examined and the best one used as the actual starting parameters. Because of the random nature of the search, we seed the random number generator beforehand to allow replication of the result.
Step8: Below we plot the smoothed probability of the economy operating in a low-production state, and again include the NBER recessions for comparison.
Step9: Using the time-varying transition probabilities, we can see how the expected duration of a low-production state changes over time | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
import requests
from io import BytesIO
# NBER recessions
from pandas_datareader.data import DataReader
from datetime import datetime
usrec = DataReader('USREC', 'fred', start=datetime(1947, 1, 1), end=datetime(2013, 4, 1))
Explanation: Markov switching autoregression models
This notebook provides an example of the use of Markov switching models in Statsmodels to replicate a number of results presented in Kim and Nelson (1999). It applies the Hamilton (1989) filter and the Kim (1994) smoother.
This is tested against the Markov-switching models from E-views 8, which can be found at http://www.eviews.com/EViews8/ev8ecswitch_n.html#MarkovAR or the Markov-switching models of Stata 14 which can be found at http://www.stata.com/manuals14/tsmswitch.pdf.
End of explanation
# Get the RGNP data to replicate Hamilton
from statsmodels.tsa.regime_switching.tests.test_markov_autoregression import rgnp
dta_hamilton = pd.Series(rgnp, index=pd.date_range('1951-04-01', '1984-10-01', freq='QS'))
# Plot the data
dta_hamilton.plot(title='Growth rate of Real GNP', figsize=(12,3))
# Fit the model
mod_hamilton = sm.tsa.MarkovAutoregression(dta_hamilton, k_regimes=2, order=4, switching_ar=False)
res_hamilton = mod_hamilton.fit()
res_hamilton.summary()
Explanation: Hamilton (1989) switching model of GNP
This replicates Hamilton's (1989) seminal paper introducing Markov-switching models. The model is an autoregressive model of order 4 in which the mean of the process switches between two regimes. It can be written:
$$
y_t = \mu_{S_t} + \phi_1 (y_{t-1} - \mu_{S_{t-1}}) + \phi_2 (y_{t-2} - \mu_{S_{t-2}}) + \phi_3 (y_{t-3} - \mu_{S_{t-3}}) + \phi_4 (y_{t-4} - \mu_{S_{t-4}}) + \varepsilon_t
$$
Each period, the regime transitions according to the following matrix of transition probabilities:
$$ P(S_t = s_t | S_{t-1} = s_{t-1}) =
\begin{bmatrix}
p_{00} & p_{10} \\
p_{01} & p_{11}
\end{bmatrix}
$$
where $p_{ij}$ is the probability of transitioning from regime $i$, to regime $j$.
The model class is MarkovAutoregression in the time-series part of Statsmodels. In order to create the model, we must specify the number of regimes with k_regimes=2, and the order of the autoregression with order=4. The default model also includes switching autoregressive coefficients, so here we also need to specify switching_ar=False to avoid that.
After creation, the model is fit via maximum likelihood estimation. Under the hood, good starting parameters are found using a number of steps of the expectation maximization (EM) algorithm, and a quasi-Newton (BFGS) algorithm is applied to quickly find the maximum.
End of explanation
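The switching-mean AR process above is easy to simulate directly: draw a regime path from the transition matrix, then build the AR deviations around the regime-dependent means. A minimal sketch with illustrative parameter values (not Hamilton's estimates):

```python
import numpy as np

def simulate_ms_ar(T, mu, P, phi, sigma, seed=0):
    """Simulate y_t = mu[S_t] + sum_j phi_j * (y_{t-j} - mu[S_{t-j}]) + eps_t."""
    rng = np.random.default_rng(seed)
    k = len(phi)
    S = np.zeros(T + k, dtype=int)
    for t in range(1, T + k):
        S[t] = rng.choice(len(mu), p=P[S[t - 1]])   # row S[t-1] gives Pr(next regime)
    dev = np.zeros(T + k)                            # AR dynamics of deviations from the regime mean
    for t in range(k, T + k):
        dev[t] = sum(phi[j] * dev[t - 1 - j] for j in range(k)) + rng.normal(0.0, sigma)
    y = np.asarray(mu)[S] + dev
    return y[k:], S[k:]

P = np.array([[0.90, 0.10],    # P[i, j] = Pr(S_t = j | S_{t-1} = i)
              [0.25, 0.75]])
y, S = simulate_ms_ar(200, mu=[1.2, -0.4], P=P,
                      phi=[0.3, 0.1, 0.0, 0.0], sigma=0.5)
```

Fitting `MarkovAutoregression` to data simulated this way is a useful way to check that the estimation recovers the regime means and transition probabilities.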
fig, axes = plt.subplots(2, figsize=(7,7))
ax = axes[0]
ax.plot(res_hamilton.filtered_marginal_probabilities[0])
ax.fill_between(usrec.index, 0, 1, where=usrec['USREC'].values, color='k', alpha=0.1)
ax.set_xlim(dta_hamilton.index[4], dta_hamilton.index[-1])
ax.set(title='Filtered probability of recession')
ax = axes[1]
ax.plot(res_hamilton.smoothed_marginal_probabilities[0])
ax.fill_between(usrec.index, 0, 1, where=usrec['USREC'].values, color='k', alpha=0.1)
ax.set_xlim(dta_hamilton.index[4], dta_hamilton.index[-1])
ax.set(title='Smoothed probability of recession')
fig.tight_layout()
Explanation: We plot the filtered and smoothed probabilities of a recession. Filtered refers to an estimate of the probability at time $t$ based on data up to and including time $t$ (but excluding time $t+1, ..., T$). Smoothed refers to an estimate of the probability at time $t$ using all the data in the sample.
For reference, the shaded periods represent the NBER recessions.
End of explanation
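Filtered probabilities are produced by the Hamilton filter, a recursive Bayes update: predict the next regime with the transition matrix, then reweight by each regime's likelihood of the new observation. A toy sketch for a two-regime switching-mean Gaussian model (illustrative values, ignoring the AR terms):

```python
import numpy as np
from scipy.stats import norm

def hamilton_filter(y, P, mu, sigma, pi0):
    """Filtered Pr(S_t | y_1..y_t) for a switching-mean Gaussian model (no AR terms)."""
    out = np.zeros((len(y), len(mu)))
    p = np.asarray(pi0, dtype=float)
    for t, yt in enumerate(y):
        pred = p @ P                              # predict: Pr(S_t | y_1..y_{t-1})
        post = pred * norm.pdf(yt, loc=mu, scale=sigma)
        p = post / post.sum()                     # update: Bayes' rule
        out[t] = p
    return out

P = np.array([[0.95, 0.05],
              [0.10, 0.90]])
probs = hamilton_filter([1.0, 1.1, -0.8, -1.0], P,
                        mu=np.array([1.0, -1.0]), sigma=0.5, pi0=[0.5, 0.5])
```

The smoothed probabilities additionally run a backward pass over the whole sample, which is why they use data from $t+1, ..., T$ as well.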
print(res_hamilton.expected_durations)
Explanation: From the estimated transition matrix we can calculate the expected duration of a recession versus an expansion.
End of explanation
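The expected-duration calculation itself is simple: if regime $i$ persists with probability $p_{ii}$ each period, its duration is geometric with mean $1/(1-p_{ii})$. A sketch with hypothetical transition probabilities chosen to mimic the quarters-scale result quoted above:

```python
import numpy as np

def expected_durations(P):
    """Mean sojourn time in regime i is geometric: 1 / (1 - p_ii)."""
    return 1.0 / (1.0 - np.diag(P))

P = np.array([[0.75, 0.25],   # regime 0 persists w.p. 0.75 -> 4 periods on average
              [0.10, 0.90]])  # regime 1 persists w.p. 0.90 -> 10 periods on average
dur = expected_durations(P)
```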
# Get the dataset
ew_excs = requests.get('http://econ.korea.ac.kr/~cjkim/MARKOV/data/ew_excs.prn').content
raw = pd.read_table(BytesIO(ew_excs), header=None, skipfooter=1, engine='python')
raw.index = pd.date_range('1926-01-01', '1995-12-01', freq='MS')
dta_kns = raw.loc[:'1986'] - raw.loc[:'1986'].mean()
# Plot the dataset
dta_kns[0].plot(title='Excess returns', figsize=(12, 3))
# Fit the model
mod_kns = sm.tsa.MarkovRegression(dta_kns, k_regimes=3, trend='nc', switching_variance=True)
res_kns = mod_kns.fit()
res_kns.summary()
Explanation: In this case, it is expected that a recession will last about one year (4 quarters) and an expansion about two and a half years.
Kim, Nelson, and Startz (1998) Three-state Variance Switching
This model demonstrates estimation with regime heteroskedasticity (switching of variances) and no mean effect. The dataset can be reached at http://econ.korea.ac.kr/~cjkim/MARKOV/data/ew_excs.prn.
The model in question is:
$$
\begin{align}
y_t & = \varepsilon_t \\
\varepsilon_t & \sim N(0, \sigma_{S_t}^2)
\end{align}
$$
Since there is no autoregressive component, this model can be fit using the MarkovRegression class. Since there is no mean effect, we specify trend='nc'. There are hypothesized to be three regimes for the switching variances, so we specify k_regimes=3 and switching_variance=True (by default, the variance is assumed to be the same across regimes).
End of explanation
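Because the model has no mean effect, its data-generating process is just zero-mean Gaussian noise whose standard deviation switches with a three-state Markov chain. A simulation sketch with made-up variances and transition probabilities:

```python
import numpy as np

rng = np.random.default_rng(42)
sigmas = np.array([0.5, 2.0, 6.0])               # low / medium / high variance regimes
P = np.array([[0.98, 0.015, 0.005],              # P[i, j] = Pr(S_t = j | S_{t-1} = i)
              [0.02, 0.97, 0.01],
              [0.02, 0.08, 0.90]])

T = 5000
S = np.zeros(T, dtype=int)
for t in range(1, T):
    S[t] = rng.choice(3, p=P[S[t - 1]])
y = rng.normal(0.0, sigmas[S])                    # y_t ~ N(0, sigma_{S_t}^2), no mean effect
```

The within-regime sample standard deviations of such a series recover the regime ordering, which is exactly what the smoothed probabilities below separate.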
fig, axes = plt.subplots(3, figsize=(10,7))
ax = axes[0]
ax.plot(res_kns.smoothed_marginal_probabilities[0])
ax.set(title='Smoothed probability of a low-variance regime for stock returns')
ax = axes[1]
ax.plot(res_kns.smoothed_marginal_probabilities[1])
ax.set(title='Smoothed probability of a medium-variance regime for stock returns')
ax = axes[2]
ax.plot(res_kns.smoothed_marginal_probabilities[2])
ax.set(title='Smoothed probability of a high-variance regime for stock returns')
fig.tight_layout()
Explanation: Below we plot the probabilities of being in each of the regimes; only in a few periods is a high-variance regime probable.
End of explanation
# Get the dataset
filardo = requests.get('http://econ.korea.ac.kr/~cjkim/MARKOV/data/filardo.prn').content
dta_filardo = pd.read_table(BytesIO(filardo), sep=' +', header=None, skipfooter=1, engine='python')
dta_filardo.columns = ['month', 'ip', 'leading']
dta_filardo.index = pd.date_range('1948-01-01', '1991-04-01', freq='MS')
dta_filardo['dlip'] = np.log(dta_filardo['ip']).diff()*100
# Deflated pre-1960 observations by ratio of std. devs.
# See hmt_tvp.opt or Filardo (1994) p. 302
std_ratio = dta_filardo['dlip']['1960-01-01':].std() / dta_filardo['dlip'][:'1959-12-01'].std()
dta_filardo.loc[:'1959-12-01', 'dlip'] = dta_filardo.loc[:'1959-12-01', 'dlip'] * std_ratio
dta_filardo['dlleading'] = np.log(dta_filardo['leading']).diff()*100
dta_filardo['dmdlleading'] = dta_filardo['dlleading'] - dta_filardo['dlleading'].mean()
# Plot the data
dta_filardo['dlip'].plot(title='Standardized growth rate of industrial production', figsize=(13,3))
plt.figure()
dta_filardo['dmdlleading'].plot(title='Leading indicator', figsize=(13,3));
Explanation: Filardo (1994) Time-Varying Transition Probabilities
This model demonstrates estimation with time-varying transition probabilities. The dataset can be reached at http://econ.korea.ac.kr/~cjkim/MARKOV/data/filardo.prn.
In the above models we have assumed that the transition probabilities are constant across time. Here we allow the probabilities to change with the state of the economy. Otherwise, the model is the same Markov autoregression as in Hamilton (1989).
Each period, the regime now transitions according to the following matrix of time-varying transition probabilities:
$$ P(S_t = s_t | S_{t-1} = s_{t-1}) =
\begin{bmatrix}
p_{00,t} & p_{10,t} \\
p_{01,t} & p_{11,t}
\end{bmatrix}
$$
where $p_{ij,t}$ is the probability of transitioning from regime $i$, to regime $j$ in period $t$, and is defined to be:
$$
p_{ij,t} = \frac{\exp{ x_{t-1}' \beta_{ij} }}{1 + \exp{ x_{t-1}' \beta_{ij} }}
$$
Instead of estimating the transition probabilities as part of maximum likelihood, the regression coefficients $\beta_{ij}$ are estimated. These coefficients relate the transition probabilities to a vector of pre-determined or exogenous regressors $x_{t-1}$.
End of explanation
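The logistic mapping from the lagged regressor to a transition probability can be evaluated directly. A sketch with hypothetical coefficients (not Filardo's estimates):

```python
import numpy as np

def tvtp(beta, x):
    """p_t = exp(b0 + b1*x) / (1 + exp(b0 + b1*x)), the logistic map used for TVTP."""
    z = beta[0] + beta[1] * np.asarray(x, dtype=float)
    return 1.0 / (1.0 + np.exp(-z))

beta = (1.5, 2.0)                  # hypothetical coefficients beta_ij = (b0, b1)
x = np.array([-2.0, 0.0, 2.0])     # lagged (demeaned) leading-indicator values
p = tvtp(beta, x)
```

With a positive slope coefficient, a stronger leading indicator raises the probability of staying in (or entering) the associated regime, which is what drives the time-varying expected durations plotted at the end of this section.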
mod_filardo = sm.tsa.MarkovAutoregression(
    dta_filardo['dlip'].iloc[2:], k_regimes=2, order=4, switching_ar=False,
    exog_tvtp=sm.add_constant(dta_filardo['dmdlleading'].iloc[1:-1]))
np.random.seed(12345)
res_filardo = mod_filardo.fit(search_reps=20)
res_filardo.summary()
Explanation: The time-varying transition probabilities are specified by the exog_tvtp parameter.
Here we demonstrate another feature of model fitting - the use of a random search for MLE starting parameters. Because Markov switching models are often characterized by many local maxima of the likelihood function, performing an initial optimization step can be helpful to find the best parameters.
Below, we specify that 20 random perturbations from the starting parameter vector are examined and the best one used as the actual starting parameters. Because of the random nature of the search, we seed the random number generator beforehand to allow replication of the result.
End of explanation
fig, ax = plt.subplots(figsize=(12,3))
ax.plot(res_filardo.smoothed_marginal_probabilities[0])
ax.fill_between(usrec.index, 0, 1, where=usrec['USREC'].values, color='gray', alpha=0.2)
ax.set_xlim(dta_filardo.index[6], dta_filardo.index[-1])
ax.set(title='Smoothed probability of a low-production state');
Explanation: Below we plot the smoothed probability of the economy operating in a low-production state, and again include the NBER recessions for comparison.
End of explanation
res_filardo.expected_durations[0].plot(
title='Expected duration of a low-production state', figsize=(12,3));
Explanation: Using the time-varying transition probabilities, we can see how the expected duration of a low-production state changes over time:
End of explanation |
7,626 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Consumer Choice and Intertemporal Choice
We set up and solve a very simple generic one-period consumer choice problem over two goods. We later specialize to the case of intertemporal trade over two periods and choice over lotteries. The consumer is assumed to have time-consistent preferences. Later, in a separate notebook, we look at a three-period model where consumers have time-inconsistent (quasi-hyperbolic) preferences.
The code to generate the static and interactive figures is at the end of this notebook (and should be run first to make this interactive).
Choice over two goods
A consumer chooses a consumption bundle to maximize Cobb-Douglas utility
$$U(c_1,c_2) = c_1^\alpha c_2^{\beta}$$
subject to the budget constraint
$$p_1 c_1 + p_2 c_2 \leq I $$
To plot the budget constraint we rearrange to get
Step1: The consumer's optimum
$$L(c_1,c_2) = U(c_1,c_2) + \lambda (I - p_1 c_1 - p_2 c_2) $$
Differentiate with respect to $c_1$ and $c_2$ and $\lambda$ to get
Step2: The expenditure function
The indirect utility function
Step3: Interactive plot with sliders (visible if running on a notebook server)
Step4: In this particular case the consumer consumes
Step5: Her endowment is
Step6: And she therefore saves
Step7: in period 1.
A borrowing case
In the diagram below the consumer is seen to be borrowing, i.e. $s_1^* = y_1 - c_1^* <0$.
Step8: <a id='codesection'></a>
Code section
Run the code below first to re-generate the figures and interactions above.
Step9: Code for simple consumer choice
Step10: Parameters for default plot
Step11: Code for intertemporal Choice model
Step12: Parameters for default plot
Step13: Interactive plot | Python Code:
consume_plot()
Explanation: Consumer Choice and Intertemporal Choice
We setup and solve a very simple generic one-period consumer choice problem over two goods. We later specialize to the case of intertemporal trade over two periods and choice over lotteries. The consumer is assumed to have time-consistent preofereces. Later, in a separate notebook, we look at a a three period model where consumers have time-inconsistent (quasi-hyperbolic) preferences.
The code to generate the static and interactive figures is at the end of this notebook (and should be run first to make this interactive).
Choice over two goods
A consumer chooses a consumption bundle to maximize Cobb-Douglas utility
$$U(c_1,c_2) = c_1^\alpha c_2^{\beta}$$
subject to the budget constraint
$$p_1 c_1 + p_2 c_2 \leq I $$
To plot the budget constraint we rearrange to get:
$$c_2 = \frac{I}{p_2} - \frac{p_1}{p_2} c_1$$
Likewise, to draw the indifference curve defined by $u(c_1,c_2) = \bar u$ we solve for $c_2$ to get:
$$c_2 = \left( \frac{\bar u}{c_1^\alpha}\right)^\frac{1}{\beta}$$
For Cobb-Douglas utility the marginal rate of substitution (MRS) between $c_2$ and $c_1$ is:
$$MRS = \frac{U_1}{U_2} = \frac{\alpha}{\beta} \frac{c_2}{c_1}$$
where $U_1 =\frac{\partial U}{\partial c_1}$ and $U_2 =\frac{\partial U}{\partial c_2}$
End of explanation
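The MRS formula can be checked numerically: the ratio of the two marginal utilities of $U = c_1^\alpha c_2^\beta$, approximated by central finite differences, should match $\frac{\alpha}{\beta}\frac{c_2}{c_1}$. A quick sketch:

```python
def U(c1, c2, alpha, beta):
    return c1**alpha * c2**beta

def mrs_numeric(c1, c2, alpha, beta, h=1e-6):
    """MRS = U1/U2 approximated by central finite differences."""
    U1 = (U(c1 + h, c2, alpha, beta) - U(c1 - h, c2, alpha, beta)) / (2 * h)
    U2 = (U(c1, c2 + h, alpha, beta) - U(c1, c2 - h, alpha, beta)) / (2 * h)
    return U1 / U2

alpha, beta, c1, c2 = 0.3, 0.7, 4.0, 9.0
mrs_closed = (alpha / beta) * (c2 / c1)           # (alpha/beta)*(c2/c1)
mrs_fd = mrs_numeric(c1, c2, alpha, beta)
```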
interact(consume_plot,p1=(pmin,pmax,0.1),p2=(pmin,pmax,0.1), I=(Imin,Imax,10),alpha=(0.05,0.95,0.05));
Explanation: The consumer's optimum
$$L(c_1,c_2) = U(c_1,c_2) + \lambda (I - p_1 c_1 - p_2 c_2) $$
Differentiate with respect to $c_1$ and $c_2$ and $\lambda$ to get:
$$ U_1 = \lambda{p_1}$$
$$ U_2 = \lambda{p_2}$$
$$ I = p_1 c_1 + p_2 c_2$$
Dividing the first equation by the second we get the familiar necessary tangency condition for an interior optimum:
$$MRS = \frac{U_1}{U_2} =\frac{p_1}{p_2}$$
Using our earlier expression for the MRS of a Cobb-Douglas indifference curve, substituting this into the budget constraint and rearranging then allows us to solve for the Marshallian demand functions:
$$c_1(p_1,p_2,I)=\frac{\alpha}{\alpha+\beta} \frac{I}{p_1}$$
$$c_2(p_1,p_2,I)=\frac{\beta}{\alpha+\beta} \frac{I}{p_2}$$
Interactive plot with sliders (visible if running on a notebook server):
End of explanation
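The closed-form Marshallian demands can be verified by maximizing utility numerically along the budget line (substituting the constraint so the problem is one-dimensional). A sketch using scipy:

```python
from scipy.optimize import minimize_scalar

def c1_demand_numeric(p1, p2, I, alpha, beta):
    """Maximize U on the budget line by substituting c2 = (I - p1*c1)/p2."""
    neg_u = lambda c1: -(c1**alpha * ((I - p1 * c1) / p2)**beta)
    res = minimize_scalar(neg_u, bounds=(1e-9, I / p1 - 1e-9), method='bounded')
    return res.x

p1, p2, I, alpha, beta = 2.0, 1.0, 100.0, 0.4, 0.6
c1_num = c1_demand_numeric(p1, p2, I, alpha, beta)
c1_closed = alpha / (alpha + beta) * I / p1       # closed form: 0.4 * 100 / 2 = 20
```

The numerical optimum agrees with the closed form to the solver's tolerance, which is a handy check before wiring the formulas into the interactive plot.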
consume_plot2(r, delta, rho, y1, y2)
Explanation: The expenditure function
The indirect utility function:
$$v(p_1,p_2,I) = u(c_1(p_1,p_2,I),c_2(p_1,p_2,I))$$
$$c_1(p_1,p_2,I)=\alpha \frac{I}{p_1}$$
$$c_1(p_1,p_2,I)=(1-\alpha)\frac{I}{p_2}$$
$$v(p_1,p_2,I) = I \cdot \alpha^\alpha (1-\alpha)^{1-\alpha} \cdot
\left [ \frac{p_2}{p_1} \right ]^\alpha \frac{1}{p_2} $$
$$E(p_z,p_2,\bar u) = \frac{\bar u}{\alpha^\alpha (1-\alpha)^{1-\alpha} \cdot
\left [ \frac{p_2}{p_1} \right ]^\alpha \frac{1}{p_2}}$$
To minimize expenditure needed to achieve level of utility $\bar u$ we solve:
$$\min_{c_1,c_2} p_1 c_1 + p_2 c_2 $$
s.t.
$$u(c_1, c_2) =\bar u$$
The first order conditions are identical to those we got for the utility maximization problem, and hence we have the same tangency
$$\frac{U_1}{U_2} = \frac{\alpha}{1-\alpha} \frac{c_2}{c_1} = \frac{p_1}{p_2}$$
From this we can solve: $$c_2 = \frac{p_1}{p_2} \frac{1-\alpha}{\alpha} c_1 $$
Now substitute this into the constraint to get:
$$ u \left ( c_1, \frac{p_1}{p_2} \frac{1-\alpha}{\alpha} c_1 \right )=\bar u$$
Intertemporal Consumption choices
We now look at the special case of intertemporal consumption, or consumption of the same good (say corn) over two periods. As modeled below the consumer's income is now given by the market value of and endowment bundle $(y_1,y_2)$.
The variables $c_1$ and $c_2$ now refer to consumption of (say corn) in period 1 and period 2.
As is typical of intertemporal maximization problems we will use a time-additive utility function. The consumer who has access to a competitive financial market (they can borrow or save at interest rate $r$) maximizes:
$$U(c_1,c_2) = u(c_1) + \delta u(c_2)$$
subject to the intertemporal budget constraint:
$$ c_1 + \frac{c_2}{1+r} = y_1 + \frac{y_2}{1+r} $$
This is just like an ordinary utility maximization problem with prices $p_1 = 1$ and $p_2 =\frac{1}{1+r}$. Think of it this way, the price of corn is \$1 per unit in each period, but \$1 in period 1 can be placed into savings that will grow to $(1+r)$ dollars in period 2. That means that from the standpoint of period 1 owning one unit of corn (or one dollar worth of corn) in period 2 is the equivalent of owning $\frac{1}{1+r}$ units of corn today (because placed in savings that amount of period 1 corn would grow to $\frac{1+r}{1+r} = 1$ units of corn in period 2).
The first order necessary condition for an interior optimum is:
$$u'(c_1^*) = \delta (1+r) u'(c_2^*)$$
Let's adopt the widely used Constant-Relative Risk Aversion (CRRA) felicity function of the form:
$$
\begin{equation}
u\left(c_{t}\right)=\begin{cases}
\frac{c^{1-\rho}}{1-\rho}, & \text{if } \rho>0 \text{ and } \rho \neq 1 \\
\ln\left(c\right) & \text{if } \rho=1
\end{cases}
\end{equation}
$$
The first order condition then becomes simply
$${c_1^*}^{-\rho} = \delta (1+r) {c_2^*}^{-\rho}$$
or
$$c_2^* = \left [\delta (1+r) \right]^\frac{1}{\rho}c_1^* $$
From the binding budget constraint we also have
$$c_2^* = \left(E[y]-c_1^*\right)(1+r)$$
where $E[y] = y_1 + \frac{y_2}{1+r}$
Solving for $c_1^*$ (from the FOC and this binding budget):
$$c_1^* = \frac{E[y]}{1+\frac{\left [\delta (1+r) \right]^\frac{1}{\rho}}{1+r}}$$
In the special case where $\delta =\frac{1}{1+r}$, so that the consumer discounts future consumption at the same rate as the market interest rate, the consumer will keep consumption flat at $c_2^* = c_1^*$. If we specialize further and assume that $r=0$ then the consumer will set $c_2^* = c_1^* =\frac{y_1+y_2}{2}$
Saving and borrowing visualized
Let us visualize the situation. The consumer has CRRA preferences as described above (summarized by parameters $\delta$ and $\rho$) and starts with an income endowment $(y_1, y_2)$. The market cost of funds is $r$.
A savings case
In the diagram below the consumer is seen to be saving, i.e. $s_1^* = y_1 - c_1^* >0$.
End of explanation
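The closed-form solution above can be sanity-checked numerically: compute $c_1^*$ and $c_2^*$ from the formulas and confirm they satisfy both the Euler equation and the intertemporal budget constraint. A sketch with illustrative parameter values:

```python
def crra_solution(r, rho, delta, y1, y2):
    Ey = y1 + y2 / (1 + r)                 # lifetime income in period-1 units
    A = (delta * (1 + r)) ** (1 / rho)     # optimal ratio c2/c1
    c1 = Ey / (1 + A / (1 + r))
    return c1, A * c1

r, rho, delta, y1, y2 = 0.05, 2.0, 0.96, 80.0, 20.0
c1, c2 = crra_solution(r, rho, delta, y1, y2)

euler_gap = c1 ** (-rho) - delta * (1 + r) * c2 ** (-rho)   # should be ~0
budget_gap = (c1 + c2 / (1 + r)) - (y1 + y2 / (1 + r))      # should be ~0
```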
interact(consume_plot2, r=(rmin,rmax,0.1), rho=fixed(rho), delta=(0.5,1,0.1), y1=(10,100,1), y2=(10,100,1));
c1e, c2e, uebar = find_opt2(r, rho, delta, y1, y2)
Explanation: Interactive plot with sliders (visible if running on a notebook server):
End of explanation
c1e, c2e
Explanation: In this particular case the consumer consumes:
End of explanation
y1,y2
Explanation: Her endowment is
End of explanation
y1-c1e
Explanation: And she therefore saves
End of explanation
y1,y2 = 20,80
consume_plot2(r, delta, rho, y1, y2)
Explanation: in period 1.
A borrowing case
In the diagram below the consumer is seen to be borrowing, i.e. $s_1^* = y_1 - c_1^* <0$.
End of explanation
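With the endowment tilted toward period 2, the closed-form solution implies $c_1^* > y_1$, i.e. negative saving. A quick check using the same formulas and this notebook's default preference parameters:

```python
def crra_c1(r, rho, delta, y1, y2):
    Ey = y1 + y2 / (1 + r)
    A = (delta * (1 + r)) ** (1 / rho)
    return Ey / (1 + A / (1 + r))

r, rho, delta = 0.0, 0.5, 1.0           # default parameter values used in this notebook
c1 = crra_c1(r, rho, delta, y1=20.0, y2=80.0)
saving = 20.0 - c1                       # s1 = y1 - c1 < 0 -> borrowing
```

With $r=0$ and $\delta=1$ consumption is perfectly smoothed, so $c_1^* = 50 > y_1 = 20$ and the consumer borrows 30 in period 1.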
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact, fixed
Explanation: <a id='codesection'></a>
Code section
Run the code below first to re-generate the figures and interactions above.
End of explanation
def U(c1, c2, alpha):
return (c1**alpha)*(c2**(1-alpha))
def budgetc(c1, p1, p2, I):
return (I/p2)-(p1/p2)*c1
def indif(c1, ubar, alpha):
return (ubar/(c1**alpha))**(1/(1-alpha))
def find_opt(p1,p2,I,alpha):
c1 = alpha * I/p1
c2 = (1-alpha)*I/p2
u = U(c1,c2,alpha)
return c1, c2, u
Explanation: Code for simple consumer choice
End of explanation
alpha = 0.5
p1, p2 = 1, 1
I = 100
pmin, pmax = 1, 4
Imin, Imax = 10, 200
cmax = (3/4)*Imax/pmin
def consume_plot(p1=p1, p2=p2, I=I, alpha=alpha):
c1 = np.linspace(0.1,cmax,num=100)
c1e, c2e, uebar = find_opt(p1, p2 ,I, alpha)
idfc = indif(c1, uebar, alpha)
budg = budgetc(c1, p1, p2, I)
fig, ax = plt.subplots(figsize=(8,8))
ax.plot(c1, budg, lw=2.5)
ax.plot(c1, idfc, lw=2.5)
ax.vlines(c1e,0,c2e, linestyles="dashed")
ax.hlines(c2e,0,c1e, linestyles="dashed")
ax.plot(c1e,c2e,'ob')
ax.set_xlim(0, cmax)
ax.set_ylim(0, cmax)
ax.set_xlabel(r'$c_1$', fontsize=16)
ax.set_ylabel('$c_2$', fontsize=16)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.grid()
plt.show()
Explanation: Parameters for default plot
End of explanation
def u(c, rho):
    # CRRA felicity u(c) = c**(1-rho)/(1-rho) for rho != 1, matching the text above
    return c**(1-rho) / (1-rho)
def U2(c1, c2, rho, delta):
return u(c1, rho) + delta*u(c2, rho)
def budget2(c1, r, y1, y2):
Ey = y1 + y2/(1+r)
return Ey*(1+r) - c1*(1+r)
def indif2(c1, ubar, rho, delta):
return ( ((1-rho)/delta)*(ubar - u(c1, rho)) )**(1/(1-rho))
def find_opt2(r, rho, delta, y1, y2):
Ey = y1 + y2/(1+r)
A = (delta*(1+r))**(1/rho)
c1 = Ey/(1+A/(1+r))
c2 = c1*A
u = U2(c1, c2, rho, delta)
return c1, c2, u
Explanation: Code for intertemporal Choice model
End of explanation
rho = 0.5
delta = 1
r = 0
y1, y2 = 80, 20
rmin, rmax = 0, 1
cmax = 150
def consume_plot2(r, delta, rho, y1, y2):
c1 = np.linspace(0.1,cmax,num=100)
c1e, c2e, uebar = find_opt2(r, rho, delta, y1, y2)
idfc = indif2(c1, uebar, rho, delta)
budg = budget2(c1, r, y1, y2)
fig, ax = plt.subplots(figsize=(8,8))
ax.plot(c1, budg, lw=2.5)
ax.plot(c1, idfc, lw=2.5)
ax.vlines(c1e,0,c2e, linestyles="dashed")
ax.hlines(c2e,0,c1e, linestyles="dashed")
ax.plot(c1e,c2e,'ob')
ax.vlines(y1,0,y2, linestyles="dashed")
ax.hlines(y2,0,y1, linestyles="dashed")
ax.plot(y1,y2,'ob')
ax.text(y1-6,y2-6,r'$y^*$',fontsize=16)
ax.set_xlim(0, cmax)
ax.set_ylim(0, cmax)
ax.set_xlabel(r'$c_1$', fontsize=16)
ax.set_ylabel('$c_2$', fontsize=16)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.grid()
plt.show()
Explanation: Parameters for default plot
End of explanation
interact(consume_plot2, r=(rmin,rmax,0.1), rho=fixed(rho), delta=(0.5,1,0.1), y1=(10,100,1), y2=(10,100,1));
Explanation: Interactive plot
End of explanation |
7,627 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Choose subject ID
Step1: Load data
Step2: Cell below opens an html report in a web-browser
Step3: Exclude ICA components
To exclude/include an ICA component, click on the mne_browse window | Python Code:
import os
import ipywidgets as widgets
from IPython.display import display

name_sel = widgets.Select(
description='Subject ID:',
options=subject_ids
)
display(name_sel)
cond_sel = widgets.RadioButtons(
description='Condition:',
options=sessions,
)
display(cond_sel)
%%capture
if cond_sel.value == sessions[0]:
session = sessions[0]
elif cond_sel.value == sessions[1]:
session = sessions[1]
subj_ID = name_sel.value
data_path = os.path.join(main_path, subj_ID)
pipeline_path = os.path.join(main_path, preproc_pipeline_name)
sbj_data_path = os.path.join(main_path, subj_ID, session, 'meg')
basename = subj_ID + '_task-rest_run-01_meg_raw_filt_dsamp'
results_folder = os.path.join('preproc_meeg', '_sess_index_' + session + '_subject_id_' + subj_ID)
raw_fname = basename + '.fif'
ica_fname = basename + '_ica.fif'
ica_TS_fname = basename + '_ica-tseries.fif'
report_fname = basename + '-report.html'
ica_solution_fname = basename + '_ica_solution.fif'
raw_file = os.path.join(pipeline_path, results_folder, 'preproc', raw_fname) # filtered data
raw_ica_file = os.path.join(pipeline_path, results_folder, 'ica', ica_fname) # cleaned data
new_raw_ica_file = os.path.join(sbj_data_path, ica_fname) # path where to save the cleaned data after inspection
ica_TS_file = os.path.join(pipeline_path, results_folder, 'ica', ica_TS_fname)
ica_solution_file = os.path.join(pipeline_path, results_folder, 'ica', ica_solution_fname)
report_file = os.path.join(pipeline_path, results_folder, 'ica', report_fname)
Explanation: Choose subject ID:
End of explanation
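The five filename variables above all follow one pattern: the basename plus a suffix, placed in either the preproc or the ica folder. As a sketch (the helper name is ours, not part of the original pipeline), that bookkeeping can live in one small function:

```python
import os

def build_derivative_paths(pipeline_path, results_folder, basename):
    """Collect the filenames derived from one basename (illustrative sketch).

    Reproduces the path bookkeeping of the cell above in a single dict.
    """
    ica_dir = os.path.join(pipeline_path, results_folder, 'ica')
    return {
        'raw': os.path.join(pipeline_path, results_folder, 'preproc', basename + '.fif'),
        'raw_ica': os.path.join(ica_dir, basename + '_ica.fif'),
        'ica_ts': os.path.join(ica_dir, basename + '_ica-tseries.fif'),
        'ica_solution': os.path.join(ica_dir, basename + '_ica_solution.fif'),
        'report': os.path.join(ica_dir, basename + '-report.html'),
    }
```

Calling it once per subject/session pair then replaces the block of repeated os.path.join lines.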
# Load data -> we load the filtered data to see the TS of all ICs
print('Load raw file -> {} \n\n'.format(raw_file))
raw = read_raw_fif(raw_file, preload=True)
ica = read_ica(ica_solution_file)
ica.labels_ = dict()
ica_TS = ica.get_sources(raw)
Explanation: Load data
End of explanation
%%bash -s "$report_file"
firefox -new-window $1
ica.exclude
ica.plot_sources(raw)
ica.plot_components(inst=raw)
# ica.exclude
if ica.exclude:
ica.plot_properties(raw, picks=ica.exclude)
Explanation: Cell below opens an html report in a web-browser
End of explanation
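The %%bash cell above hard-codes firefox. A portable alternative, sketched here with only the standard library (it assumes report_file holds the path built earlier), is to turn the path into a file:// URI and hand it to the system default browser:

```python
import webbrowser
from pathlib import Path

def report_uri(report_file):
    """Build a file:// URI for a report path (sketch)."""
    return Path(report_file).absolute().as_uri()

def open_report(report_file):
    # webbrowser picks whatever browser the platform has configured
    return webbrowser.open(report_uri(report_file))
```

This avoids depending on a specific browser binary being on the PATH.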
print('You want to exclude the following components: *** {} ***'.format(ica.exclude))
%%capture
ica.apply(raw)
raw.save(raw_ica_file, overwrite=True) # save in workflow dir
# raw.save(new_raw_ica_file, overwrite=True) # save in subject dir
ica.save(ica_solution_file)
print('You REMOVED the following components: *** {} *** \n'.format(ica.exclude))
print('You SAVED the new CLEANED file here: *** {} ***'.format(raw_ica_file))
Explanation: Exclude ICA components
To exclude/include an ICA component, click on it in the mne_browse window: the red ones will be excluded. To keep the newly excluded ICA components, CLOSE the mne_browse window!
Apply ica solution to raw data and save the result
Check in the next cells that you are excluding the components you want! You can choose to save the cleaned raw file either in the ica folder of the workflow dir (raw_ica_file) or in the subject folder (new_raw_ica_file)
End of explanation |
7,628 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
3.1. Spark DataFrames & Pandas Plotting - Python
Create Dataproc Cluster with Jupyter
This notebook is designed to be run on Google Cloud Dataproc.
Follow the links below for instructions on how to create a Dataproc Cluster with the Jupyter component installed.
Tutorial - Install and run a Jupyter notebook on a Dataproc cluster
Blog post - Apache Spark and Jupyter Notebooks made easy with Dataproc component gateway
Python 3 Kernel
Use a Python 3 kernel (not PySpark) to allow you to configure the SparkSession in the notebook and include the spark-bigquery-connector required to use the BigQuery Storage API.
Scala Version
Check what version of Scala you are running so you can include the correct spark-bigquery-connector jar
Step1: Create Spark Session
Include the correct version of the spark-bigquery-connector jar
Scala version 2.11 - 'gs
Step2: Enable repl.eagerEval
This will output the results of DataFrames in each step without the need to call df.show() and also improves the formatting of the output
Step3: Read BigQuery table into Spark DataFrame
Use filter() to query data from a partitioned table.
Step4: Select required columns and apply a filter using where() which is an alias for filter() then cache the table
Step5: Group by datehour and order by total page views to see the busiest hours
Step6: Convert Spark DataFrame to Pandas DataFrame
Convert the Spark DataFrame to Pandas DataFrame and set the datehour as the index
Step7: Plotting Pandas Dataframe
Import matplotlib
Step8: Use the Pandas plot function to create a line chart
Step9: Plot Multiple Columns
Create a new Spark DataFrame and pivot the wiki column to create multiple rows for each wiki value
Step10: Convert to Pandas DataFrame
Step11: Create plot with line for each column
Step12: Create stacked area plot | Python Code:
!scala -version
Explanation: 3.1. Spark DataFrames & Pandas Plotting - Python
Create Dataproc Cluster with Jupyter
This notebook is designed to be run on Google Cloud Dataproc.
Follow the links below for instructions on how to create a Dataproc Cluster with the Juypter component installed.
Tutorial - Install and run a Jupyter notebook on a Dataproc cluster
Blog post - Apache Spark and Jupyter Notebooks made easy with Dataproc component gateway
Python 3 Kernel
Use a Python 3 kernel (not PySpark) to allow you to configure the SparkSession in the notebook and include the spark-bigquery-connector required to use the BigQuery Storage API.
Scala Version
Check what version of Scala you are running so you can include the correct spark-bigquery-connector jar
End of explanation
from pyspark.sql import SparkSession
spark = SparkSession.builder \
.appName('Spark DataFrames & Pandas Plotting')\
.config('spark.jars', 'gs://spark-lib/bigquery/spark-bigquery-latest.jar') \
.getOrCreate()
Explanation: Create Spark Session
Include the correct version of the spark-bigquery-connector jar
Scala version 2.11 - 'gs://spark-lib/bigquery/spark-bigquery-latest.jar'.
Scala version 2.12 - 'gs://spark-lib/bigquery/spark-bigquery-latest_2.12.jar'.
End of explanation
spark.conf.set("spark.sql.repl.eagerEval.enabled",True)
Explanation: Enable repl.eagerEval
This will output the results of DataFrames in each step without the need to call df.show() and also improves the formatting of the output
End of explanation
table = "bigquery-public-data.wikipedia.pageviews_2020"
df_wiki_pageviews = spark.read \
.format("bigquery") \
.option("table", table) \
.option("filter", "datehour >= '2020-03-01' AND datehour < '2020-03-02'") \
.load()
df_wiki_pageviews.printSchema()
Explanation: Read BigQuery table into Spark DataFrame
Use filter() to query data from a partitioned table.
End of explanation
df_wiki_en = df_wiki_pageviews \
.select("datehour", "wiki", "views") \
.where("views > 1000 AND wiki in ('en', 'en.m')") \
.cache()
df_wiki_en
Explanation: Select required columns and apply a filter using where() which is an alias for filter() then cache the table
End of explanation
import pyspark.sql.functions as F
df_datehour_totals = df_wiki_en \
.groupBy("datehour") \
.agg(F.sum('views').alias('total_views'))
df_datehour_totals.orderBy('total_views', ascending=False)
Explanation: Group by datehour and order by total page views to see the busiest hours
End of explanation
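For intuition (and as a cross-check once the frame is converted to pandas later in the notebook), the same group-and-sum can be written in pandas. This is a sketch on toy data, not the BigQuery table:

```python
import pandas as pd

# Toy stand-in for df_wiki_en
toy = pd.DataFrame({
    'datehour': ['2020-03-01 00', '2020-03-01 00', '2020-03-01 01'],
    'wiki': ['en', 'en.m', 'en'],
    'views': [1000, 500, 2000],
})

# pandas counterpart of df_wiki_en.groupBy("datehour").agg(F.sum('views'))
totals = (toy.groupby('datehour', as_index=False)['views']
             .sum()
             .rename(columns={'views': 'total_views'})
             .sort_values('total_views', ascending=False))
```

The semantics match the Spark version: one row per datehour, views summed, highest totals first.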
spark.conf.set("spark.sql.execution.arrow.enabled", "true")
%time pandas_datehour_totals = df_datehour_totals.toPandas()
pandas_datehour_totals.set_index('datehour', inplace=True)
pandas_datehour_totals.head()
Explanation: Convert Spark DataFrame to Pandas DataFrame
Convert the Spark DataFrame to Pandas DataFrame and set the datehour as the index
End of explanation
import matplotlib.pyplot as plt
Explanation: Plotting Pandas DataFrame
Import matplotlib
End of explanation
pandas_datehour_totals.plot(kind='line',figsize=(12,6));
Explanation: Use the Pandas plot function to create a line chart
End of explanation
import pyspark.sql.functions as F
df_wiki_totals = df_wiki_en \
.groupBy("datehour") \
.pivot("wiki") \
.agg(F.sum('views').alias('total_views'))
df_wiki_totals
Explanation: Plot Multiple Columns
Create a new Spark DataFrame and pivot the wiki column to create multiple rows for each wiki value
End of explanation
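The Spark pivot above has a direct pandas analogue in pivot_table. A sketch on toy data, with column names chosen to mirror the Spark frame:

```python
import pandas as pd

# Toy stand-in for df_wiki_en
toy = pd.DataFrame({
    'datehour': ['h0', 'h0', 'h1'],
    'wiki': ['en', 'en.m', 'en'],
    'views': [1000, 500, 2000],
})

# pandas counterpart of .groupBy("datehour").pivot("wiki").agg(F.sum('views'))
wide = toy.pivot_table(index='datehour', columns='wiki', values='views', aggfunc='sum')
```

Missing (datehour, wiki) combinations come out as NaN, just as in the pivoted Spark frame.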
pandas_wiki_totals = df_wiki_totals.toPandas()
pandas_wiki_totals.set_index('datehour', inplace=True)
pandas_wiki_totals.head()
Explanation: Convert to Pandas DataFrame
End of explanation
pandas_wiki_totals.plot(kind='line',figsize=(12,6))
Explanation: Create plot with line for each column
End of explanation
pandas_wiki_totals.plot.area(figsize=(12,6))
Explanation: Create stacked area plot
End of explanation |
7,629 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Synthetic Data
Developed by Stijn Klop and Mark Bakker
This Notebook contains a number of examples and tests with synthetic data. The purpose of this notebook is to demonstrate the noise model of Pastas.
In this Notebook, heads are generated with a known response function. Next, Pastas is used to solve for the parameters of the model, and it is verified that Pastas finds the correct parameters back. Several different types of errors are introduced in the generated heads and it is tested whether the confidence intervals computed by Pastas are reasonable.
Step1: Load data and define functions
The rainfall and reference evaporation are read from file and truncated for the period 1980 - 2000. The rainfall and evaporation series are taken from KNMI station De Bilt. The reading of the data is done using Pastas.
Heads are generated with a Gamma response function which is defined below.
Step2: The Gamma response function requires 3 input arguments; A, n and a. The values for these parameters are defined along with the parameter d, the base groundwater level. The response function is created using the functions defined above.
Step3: Create synthetic observations
Rainfall is used as input series for this example. No errors are introduced. A Pastas model is created to test whether Pastas is able to recover the true parameters. The generated head series is purposely not generated with convolution.
Heads are computed for the period 1990 - 2000. Computations start in 1980 as a warm-up period. Convolution is not used so that it is clear how the head is computed. The computed head at day 1 is the head at the end of day 1 due to rainfall during day 1. No errors are introduced.
Step4: Create Pastas model
The next step is to create a Pastas model. The head generated using the Gamma response function is used as input for the Pastas model.
A StressModel instance is created and added to the Pastas model. The StressModel instance takes the rainfall series as input, as well as the type of response function, in this case the Gamma response function (ps.Gamma).
The Pastas model is solved without a noise model since there is no noise present in the data. The results of the Pastas model are plotted.
Step5: The results of the Pastas model show the calibrated parameters for the Gamma response function. The parameters calibrated using pastas are equal to the Atrue, ntrue, atrue and dtrue parameters defined above. The Explained Variance Percentage for this example model is 100%.
The results plots show that the Pastas simulation is identical to the observed groundwater. The residuals of the simulation are shown in the plot together with the response function and the contribution for each stress.
Below the Pastas block response and the true Gamma response function are plotted.
Step6: Test 1
Step7: Create Pastas model
A Pastas model is created using the head with noise. A stress model is added to the Pastas model and the model is solved.
Step8: The results of the simulation show that Pastas is able to filter the noise from the observed groundwater head. The simulated groundwater head and the generated synthetic head are plotted below. The parameters found with the Pastas optimization are similair to the original parameters of the Gamma response function.
Step9: Test 2
Step10: Create Pastas model
A Pastas model is created using the head with correlated noise as input. A stressmodel is added to the model and the Pastas model is solved. The results of the model are plotted.
Step11: The Pastas model is able to calibrate the model parameters fairly well. The calibrated parameters are close to the true values defined above. The noise_alpha parameter calibrated by Pastas is close the the alphatrue parameter defined for the correlated noise series.
Below the head simulated with the Pastas model is plotted together with the head series and the head series with the correlated noise. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import gammainc, gammaincinv
import pandas as pd
import pastas as ps
ps.show_versions()
Explanation: Synthetic Data
Developed by Stijn Klop and Mark Bakker
This Notebook contains a number of examples and tests with synthetic data. The purpose of this notebook is to demonstrate the noise model of Pastas.
In this Notebook, heads are generated with a known response function. Next, Pastas is used to solve for the parameters of the model, and it is verified that Pastas finds the correct parameters back. Several different types of errors are introduced in the generated heads and it is tested whether the confidence intervals computed by Pastas are reasonable.
End of explanation
rain = ps.read.read_knmi('../examples/data/etmgeg_260.txt', variables='RH').series
evap = ps.read.read_knmi('../examples/data/etmgeg_260.txt', variables='EV24').series
def gamma_tmax(A, n, a, cutoff=0.999):
return gammaincinv(n, cutoff) * a
def gamma_step(A, n, a, cutoff=0.999):
tmax = gamma_tmax(A, n, a, cutoff)
t = np.arange(0, tmax, 1)
s = A * gammainc(n, t / a)
return s
def gamma_block(A, n, a, cutoff=0.999):
# returns the gamma block response starting at t=0 with intervals of delt = 1
s = gamma_step(A, n, a, cutoff)
return np.append(s[0], s[1:] - s[:-1])
Explanation: Load data and define functions
The rainfall and reference evaporation are read from file and truncated for the period 1980 - 2000. The rainfall and evaporation series are taken from KNMI station De Bilt. The reading of the data is done using Pastas.
Heads are generated with a Gamma response function which is defined below.
End of explanation
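A quick sanity check on the helpers above: for n = 1 the regularized incomplete gamma function reduces to 1 - exp(-x), so the Gamma step response becomes the exponential step A * (1 - exp(-t/a)). The functions are restated here so the cell is self-contained:

```python
import numpy as np
from scipy.special import gammainc, gammaincinv

def gamma_tmax(A, n, a, cutoff=0.999):
    return gammaincinv(n, cutoff) * a

def gamma_step(A, n, a, cutoff=0.999):
    t = np.arange(0, gamma_tmax(A, n, a, cutoff), 1)
    return A * gammainc(n, t / a)

# For n = 1 the step response is exponential: A * (1 - exp(-t/a))
A, a = 800.0, 200.0
t = np.arange(0, gamma_tmax(A, 1.0, a), 1)
step_n1 = gamma_step(A, 1.0, a)
expected = A * (1.0 - np.exp(-t / a))
```

If the two arrays agree, the helpers implement the intended response.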
Atrue = 800
ntrue = 1.1
atrue = 200
dtrue = 20
h = gamma_block(Atrue, ntrue, atrue)
tmax = gamma_tmax(Atrue, ntrue, atrue)
plt.plot(h)
plt.xlabel('Time (days)')
plt.ylabel('Head response (m) due to 1 mm of rain in day 1')
plt.title('Gamma block response with tmax=' + str(int(tmax)));
Explanation: The Gamma response function requires 3 input arguments; A, n and a. The values for these parameters are defined along with the parameter d, the base groundwater level. The response function is created using the functions defined above.
End of explanation
step = gamma_block(Atrue, ntrue, atrue)[1:]
lenstep = len(step)
h = dtrue * np.ones(len(rain) + lenstep)
for i in range(len(rain)):
h[i:i + lenstep] += rain[i] * step
head = pd.DataFrame(index=rain.index, data=h[:len(rain)],)
head = head['1990':'1999']
plt.figure(figsize=(12,5))
plt.plot(head,'k.', label='head')
plt.legend(loc=0)
plt.ylabel('Head (m)')
plt.xlabel('Time (years)');
Explanation: Create synthetic observations
Rainfall is used as input series for this example. No errors are introduced. A Pastas model is created to test whether Pastas is able to recover the true parameters. The generated head series is purposely not generated with convolution.
Heads are computed for the period 1990 - 2000. Computations start in 1980 as a warm-up period. Convolution is not used so that it is clear how the head is computed. The computed head at day 1 is the head at the end of day 1 due to rainfall during day 1. No errors are introduced.
End of explanation
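The explicit loop above is exactly a (truncated) convolution of the stress with the block response, so np.convolve reproduces it. A self-contained sketch with a toy block response rather than the Gamma one:

```python
import numpy as np

rng = np.random.RandomState(0)
rain = rng.uniform(0.0, 5.0, size=50)   # toy stress series
step = np.array([0.4, 0.3, 0.2, 0.1])   # toy block response
d = 20.0                                # base level

# Loop formulation, as in the cell above
h = d * np.ones(len(rain) + len(step))
for i in range(len(rain)):
    h[i:i + len(step)] += rain[i] * step
head_loop = h[:len(rain)]

# Convolution formulation
head_conv = d + np.convolve(rain, step)[:len(rain)]
```

Both arrays should be identical up to floating-point rounding, which is why Pastas can use convolution internally even though the data here was generated with the loop.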
ml = ps.Model(head)
sm = ps.StressModel(rain, ps.Gamma, name='recharge', settings='prec')
ml.add_stressmodel(sm)
ml.solve(noise=False, ftol=1e-8)
ml.plots.results();
Explanation: Create Pastas model
The next step is to create a Pastas model. The head generated using the Gamma response function is used as input for the Pastas model.
A StressModel instance is created and added to the Pastas model. The StressModel instance takes the rainfall series as input, as well as the type of response function, in this case the Gamma response function (ps.Gamma).
The Pastas model is solved without a noise model since there is no noise present in the data. The results of the Pastas model are plotted.
End of explanation
plt.plot(gamma_block(Atrue, ntrue, atrue), label='Synthetic response')
plt.plot(ml.get_block_response('recharge'), '-.', label='Pastas response')
plt.legend(loc=0)
plt.ylabel('Head response (m) due to 1 m of rain in day 1')
plt.xlabel('Time (days)');
Explanation: The results of the Pastas model show the calibrated parameters for the Gamma response function. The parameters calibrated using pastas are equal to the Atrue, ntrue, atrue and dtrue parameters defined above. The Explained Variance Percentage for this example model is 100%.
The results plots show that the Pastas simulation is identical to the observed groundwater. The residuals of the simulation are shown in the plot together with the response function and the contribution for each stress.
Below the Pastas block response and the true Gamma response function are plotted.
End of explanation
random_seed = np.random.RandomState(15892)
noise = random_seed.normal(0,1,len(head)) * np.std(head.values) * 0.5
head_noise = head[0] + noise
Explanation: Test 1: Adding noise
In the next test example noise is added to the observations of the groundwater head. The noise is normally distributed noise with a mean of 0 and a standard deviation of 1 and is scaled with the standard deviation of the head.
The noise series is added to the head series created in the previous example.
End of explanation
ml2 = ps.Model(head_noise)
sm2 = ps.StressModel(rain, ps.Gamma, name='recharge', settings='prec')
ml2.add_stressmodel(sm2)
ml2.solve(noise=True)
ml2.plots.results();
Explanation: Create Pastas model
A Pastas model is created using the head with noise. A stress model is added to the Pastas model and the model is solved.
End of explanation
plt.figure(figsize=(12,5))
plt.plot(head_noise, '.k', alpha=0.1, label='Head with noise')
plt.plot(head, '.k', label='Head true')
plt.plot(ml2.simulate(), label='Pastas simulation')
plt.title('Simulated Pastas head compared with synthetic head')
plt.legend(loc=0)
plt.ylabel('Head (m)')
plt.xlabel('Date (years)');
Explanation: The results of the simulation show that Pastas is able to filter the noise from the observed groundwater head. The simulated groundwater head and the generated synthetic head are plotted below. The parameters found with the Pastas optimization are similair to the original parameters of the Gamma response function.
End of explanation
noise_corr = np.zeros(len(noise))
noise_corr[0] = noise[0]
alphatrue = 2
for i in range(1, len(noise_corr)):
noise_corr[i] = np.exp(-1/alphatrue) * noise_corr[i - 1] + noise[i]
head_noise_corr = head[0] + noise_corr
Explanation: Test 2: Adding correlated noise
In this example correlated noise is added to the observed head. The correlated noise is generated using the noise series created in the previous example. The correlated noise is implemented as exponential decay using the following formula:
$$ n_{c}(t) = e^{-1/\alpha} \cdot n_{c}(t-1) + n(t)$$
where $n_{c}$ is the correlated noise, $\alpha$ is the noise decay parameter and $n$ is the uncorrelated noise. The noise series that is created is added to the observed groundwater head.
End of explanation
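One way to check the AR(1) recursion is to feed it a constant input: with n(t) = 1 for all t, the series becomes a geometric sum, n_c(t) = (1 - phi^(t+1)) / (1 - phi) with phi = exp(-1/alpha). A self-contained sketch:

```python
import numpy as np

alpha = 2.0
phi = np.exp(-1.0 / alpha)
T = 30

# Recursion from the cell above, driven by a constant unit input
nc = np.zeros(T)
nc[0] = 1.0
for i in range(1, T):
    nc[i] = phi * nc[i - 1] + 1.0

# Closed-form geometric sum
t = np.arange(T)
closed_form = (1.0 - phi ** (t + 1)) / (1.0 - phi)
```

Agreement between the two confirms the recursion implements the stated formula.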
ml3 = ps.Model(head_noise_corr)
sm3 = ps.StressModel(rain, ps.Gamma, name='recharge', settings='prec')
ml3.add_stressmodel(sm3)
ml3.solve(noise=True)
ml3.plots.results();
Explanation: Create Pastas model
A Pastas model is created using the head with correlated noise as input. A stressmodel is added to the model and the Pastas model is solved. The results of the model are plotted.
End of explanation
plt.figure(figsize=(12,5))
plt.plot(head_noise_corr, '.k', alpha=0.1, label='Head with correlated noise')
plt.plot(head, '.k', label='Head true')
plt.plot(ml3.simulate(), label='Pastas simulation')
plt.title('Simulated Pastas head compared with synthetic head')
plt.legend(loc=0)
plt.ylabel('Head (m)')
plt.xlabel('Date (years)');
Explanation: The Pastas model is able to calibrate the model parameters fairly well. The calibrated parameters are close to the true values defined above. The noise_alpha parameter calibrated by Pastas is close to the alphatrue parameter defined for the correlated noise series.
Below the head simulated with the Pastas model is plotted together with the head series and the head series with the correlated noise.
End of explanation |
7,630 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
Step1: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
Step2: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise
Step3: Training
Step4: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
Step5: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts. | Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
Explanation: A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
End of explanation
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
End of explanation
# mnist.train.images[0]
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
image_shape = mnist.train.images.shape[1]
inputs_ = tf.placeholder(tf.float32,shape=(None,image_shape),name='inputs')
targets_ = tf.placeholder(tf.float32,shape=(None,image_shape),name='targets')
# Output of hidden layer
encoded = tf.layers.dense(inputs_,encoding_dim,activation=tf.nn.relu)
# Output layer logits
logits = tf.layers.dense(encoded,image_shape) # linear activation
# Sigmoid output from logits
decoded = tf.sigmoid(logits)
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits,labels=targets_)
# Mean of the loss
cost = tf.reduce_mean(loss)
# Adam optimizer
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
End of explanation
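Under the hood, TensorFlow documents tf.nn.sigmoid_cross_entropy_with_logits as computing max(x, 0) - x * z + log(1 + exp(-abs(x))), a numerically stable rearrangement of the naive cross-entropy on sigmoid(x). A NumPy sketch verifying the two forms agree (toy inputs, independent of the TF graph above):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def xent_naive(logits, labels):
    # Direct cross-entropy on sigmoid probabilities (overflows for large |x|)
    p = sigmoid(logits)
    return -(labels * np.log(p) + (1.0 - labels) * np.log(1.0 - p))

def xent_stable(logits, labels):
    # The rearrangement TF documents for sigmoid_cross_entropy_with_logits
    return np.maximum(logits, 0.0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

rng = np.random.RandomState(42)
x = rng.uniform(-8.0, 8.0, size=1000)  # logits, moderate range so the naive form stays finite
z = rng.uniform(0.0, 1.0, size=1000)   # targets in [0, 1], like normalized pixels
```

This is why the graph passes logits, not sigmoid outputs, to the loss.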
# Create the session
sess = tf.Session()
Explanation: Training
End of explanation
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation |
7,631 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Processing the open food database to extract a dataset to use for the visualization.
Step1: Working from the full database, because the usda_imports_filtered.csv file in the shared drive does not have brand information, which might be useful for displaying.
Step2: The code column can be problematic as it's a long number that can be truncated to a floating point representation when manipulated by certain programs. Going to append a character at the start so it will be read unambiguously as a string.
First convert the code column to a string
Step3: Quick check for NA values
Step4: Need to get rid of rows with null product_name as we will be using that later for display etc.
Step5: Now going to want to try and find similar products to a specific item. In the demo, did this by just pulling items with certain text in the name and then doing some hand sorting. Can't scale that approach. Going to try to do it based on a combination of words in the name and the ingredients.
Try out the approach of using CountVectorizer on the product_name field to see how it does.
Step6: Get the extracted words
Step7: Count occurrences by word
Step8: Ok -- challenge here is that many of the words are descriptive adjectives vs nouns. Even the ones that are nouns are going to be hard to separate e.g. rice could be rice crackers, chicken and rice, etc.
Wonder if I can get anywhere with bigrams or trigrams...try again
Step9: So some kind of bigram and trigram approach might be scalable here. But don't really have a lot of time to perfect that. However, could use these trigrams to expand the simple 'demo cat' approach by picking only certain bigrams and then using the simple algorithm from the demo approach to find a range of other products with that bigram.
Going to write out the trigrams, do some hand coding of which ones we want to use, then bring the results back in
Step10: Can drop all the rows where wanted is NaN
Step11: So now got an identifier to use to append wanted categories from
Want to append the trigram as the 'demo category' for any product which contains that trigram. It is possible that a product will fit into multiple trigrams, in which case I will choose to put it in the lowest total count trigram category. Going to do this by looping through the wanted trigrams from highest to lowest and updating the product's category as needed.
Step12: Some categories have very little variation in them. Going to check for that and drop them.
Step13: Now got a category applied to a subset of the database. Can run the same code as before to use that category to create a subset of recommendations for each category picked...
Step14: Now need to try some processing on product name as many will be too similar and uninformative without brand name.
Step15: Now going to save original product name and replace with a combination of that plus the brand
Step16: Need to add blanks for the other columns which only exist in the hand-curated demo data.
Step17: Now want to append the original demo data products and their hall of shame status so we always have them in too.
Step18: Now want to append the original demo data to the other data
Step19: Can now write this out to file. | Python Code:
import pandas as pd
import numpy as np
import re
from scipy import sparse as sparse
# SK-learn libraries for feature extraction from text.
from sklearn.feature_extraction.text import *
data_dir = "/Users/seddont/Dropbox/Tom/MIDS/W209_work/Tom_project/"
code_dir = "/Users/seddont/Dropbox/Tom/MIDS/W209_work/w209finalproject/"
Explanation: Processing the open food database to extract a dataset to use for the visualization.
End of explanation
# Get sample of the full database to understand what columns we want
smp = pd.read_csv(data_dir+"en.openfoodfacts.org.products.csv", sep = "\t", nrows = 100)
for c in smp.columns:
print(c)
# Specify what columns we need for the visualization. For speed purposes going to
# remove any we don't really need
wanted_cols = ['code', 'creator', 'product_name', 'brands', 'brands_tags', 'serving_size',
'serving_size', 'energy_100g', 'fat_100g', 'cholesterol_100g',
'carbohydrates_100g', 'sugars_100g', 'fiber_100g', 'proteins_100g', 'sodium_100g']
# Create a list of columns to drop to check it worked ok
drop_cols = [c for c in smp.columns if c not in wanted_cols]
print(drop_cols)
# Pull in full dataset, only the columns we want
df = pd.read_csv(data_dir+"en.openfoodfacts.org.products.csv", sep = "\t")
# Drop unwanted columns
df.drop(drop_cols, axis = 1, inplace = True)
# Take a quick look
df
# Drop all rows that are not from the usda ndb import
df = df[df.creator == "usda-ndb-import"]
# Drop all rows where Brands == Nan as we can't really identify those products
df = df[df.brands.notnull()]
df
Explanation: Working from the full database, because the usda_imports_filtered.csv file in the shared drive does not have brand information, which might be useful for displaying.
End of explanation
# Preview the conversion (note: .apply(str) returns a new Series and is not
# assigned back, so this line alone does not modify df)
df.code.apply(str)
# Add an N in front of the number string
df.code = "N"+df.code.astype(str)
Explanation: The code column can be problematic as it's a long number that can be truncated to a floating point representation when manipulated by certain programs. Going to convert it to a string and prepend a character at the start so it will always be read unambiguously as a string.
First convert the code column to a string
End of explanation
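A quick standalone illustration of the precision problem, using a made-up 17-digit code: integers longer than roughly 15 digits are not exactly representable as 64-bit floats, which is why prefixing the code with a letter keeps it safe through CSV round trips.

```python
# Hypothetical 17-digit barcode; any odd value above 2**53 shows the effect
code = 12345678901234567
print(int(float(code)) == code)   # False: the float round-trip loses digits
print("N" + str(code))            # 'N12345678901234567' survives as a string
```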
df.isnull().sum(axis = 0)
Explanation: Quick check for NA values
End of explanation
df = df[df.product_name.notnull()]
Explanation: Need to get rid of rows with null product_name as we will be using that later for display etc.
End of explanation
df
# Get all the non-null product name fields
name_data = df.product_name
# Pass them to the Count Vectorizer
vectorizer = CountVectorizer()
wv = vectorizer.fit_transform(name_data)
# Get some basic stats
print("Size of vocabulary:", wv.shape[1],"words")
print("Average non-zero entries per example:%5.2f" % (1.0*wv.nnz/wv.shape[0]))
Explanation: Now going to want to try and find similar products to a specific item. In the demo, did this by just pulling items with certain text in the name and then doing some hand sorting. Can't scale that approach. Going to try to do it based on a combination of words in the name and the ingredients.
Try out the approach of using CountVectorizer on the product_name field to see how it does.
End of explanation
a = vectorizer.get_feature_names()
print("First feature string:", sorted(a)[0])
print("Last feature string:", sorted(a)[len(a)-1])
Explanation: Get the extracted words
End of explanation
fn = vectorizer.get_feature_names()
wc = wv.sum(axis = 0)
word_frame = pd.DataFrame({"word": fn, "count":np.ravel(wc)})
word_frame.sort_values(by = ["count"], ascending = False)
Explanation: Count occurences by word
End of explanation
# Get all the non-null product name fields
name_data = df.product_name[df.product_name.notnull()]
# Pass them to the Count Vectorizer
vectorizer = CountVectorizer(analyzer = "word", ngram_range = (3,3))
wv = vectorizer.fit_transform(name_data)
# Get some basic stats
print("Size of vocabulary:", wv.shape[1],"words")
print("Average non-zero entries per example:%5.2f" % (1.0*wv.nnz/wv.shape[0]))
fn = vectorizer.get_feature_names()
wc = wv.sum(axis = 0)
word_frame = pd.DataFrame({"word": fn, "count":np.ravel(wc)})
word_frame.sort_values(by = ["count"], ascending = False)
Explanation: Ok -- challenge here is that many of the words are descriptive adjectives vs nouns. Even the ones that are nouns are going to be hard to separate e.g. rice could be rice crackers, chicken and rice, etc.
Wonder if I can get anywhere with bigrams or trigrams...try again
End of explanation
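For intuition, the word-trigram extraction that CountVectorizer with ngram_range=(3, 3) performs can be sketched in pure Python (the product name here is made up, and this sketch skips CountVectorizer's punctuation-stripping token pattern):

```python
def word_trigrams(text):
    # Lowercase, split on whitespace, slide a 3-word window
    words = text.lower().split()
    return [" ".join(words[i:i + 3]) for i in range(len(words) - 2)]

print(word_trigrams("Organic Brown Rice Cakes"))
# ['organic brown rice', 'brown rice cakes']
```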
word_frame.to_csv(data_dir+"trigrams_uncoded.csv")
tri_w = pd.read_csv(code_dir+"trigrams_wanted_v2.csv")
tri_w
Explanation: So some kind of bigram and trigram approach might be scalable here. But don't really have a lot of time to perfect that. However, could use these trigrams to expand the simple 'demo cat' approach by picking only certain bigrams and then using the simple algorithm from the demo approach to find a range of other products with that bigram.
Going to write out the trigrams, do some hand coding of which ones we want to use, then bring the results back in
End of explanation
tri_w = tri_w[tri_w.wanted.notnull()]
tri_w
Explanation: Can drop all the rows where wanted is NaN
End of explanation
# Initialize demo category with None
df["demo_cat"] = "None"
# loop over each trigram
for trigram in tri_w.word:
# get the index of the correct column for that trigram in the vectorized output
wv_index = fn.index(trigram)
# Get locations of matches and convert to a dense representation for indexing
matches = wv[:,wv_index] == 1
matches = np.ravel(sparse.csr_matrix.todense(matches))
# Set the 'demo_cat' field to that trigram value
df.loc[matches, ["demo_cat"]] = trigram
df[df.demo_cat != "None"]
Explanation: So now got an identifier to use to append wanted categories from
Want to append the trigram as the 'demo category' for any product which contains that trigram. It is possible that a product will fit into multiple trigrams, in which case I will choose to put it in the lowest total count trigram category. Going to do this by looping through the wanted trigrams from highest to lowest and updating the product's category as needed.
End of explanation
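The overwrite behaviour described above (a product matching several phrases ends up in whichever phrase is processed last) can be seen on a toy frame with made-up product names:

```python
import pandas as pd

df = pd.DataFrame({"product_name": ["organic brown rice cakes",
                                    "brown rice pasta",
                                    "chocolate chip cookies"]})
df["demo_cat"] = "None"
for phrase in ["brown rice", "rice cakes"]:  # hypothetical phrases, rarest last
    df.loc[df.product_name.str.contains(phrase), "demo_cat"] = phrase
print(df.demo_cat.tolist())  # ['rice cakes', 'brown rice', 'None']
```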
# loop over each trigram
enough_variance = []
for trigram in tri_w.word:
fat_var = df[df.demo_cat == trigram]["fat_100g"].var()
if fat_var > 2:
enough_variance.append(trigram)
print(enough_variance)
print(len(enough_variance))
Explanation: Some categories have very little variation in them. Going to check for that and drop them.
End of explanation
# What we want to get variation on
pick_factors = ['fat_100g', 'sugars_100g', 'proteins_100g', 'sodium_100g']
# Points we want to pick (percentiles). Can tune this to get more or fewer picks.
pick_percentiles = [0.1, 0.5, 0.9]
# pick_percentiles = [0, 0.25, 0.5, 0.75, 1.0]
demo_picks = []
# loop over each trigram that has enough variance in it
for cat in enough_variance:
# first get all the items containing the cat word
catf = df[df["demo_cat"] == cat].copy()  # .copy() so the rank columns added below don't trigger SettingWithCopyWarning
# Identify what rank each product is in that category, for each main factor
for p in pick_factors:
catf[p + "_rank"] = catf[p].rank(method = "first")
# Select products at chosen percentiles on each
high = catf[p + "_rank"].max()
pick_index = [max(1, round(n * high)) for n in pick_percentiles]
# add codes for those products
demo_picks.extend(catf[catf[p+"_rank"].isin(pick_index)].code)
demo_df = df[df.code.isin(demo_picks)]
demo_df
demo_df[demo_df.demo_cat == "pasta enriched macaroni"]
df[df.demo_cat == "pasta enriched macaroni"]
Explanation: Now got a category applied to a subset of the database. Can run the same code as before to use that category to create a subset of recommendations for each category picked...
End of explanation
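The rank-and-percentile pick logic above, isolated on a small made-up series: rank the values, then keep the items whose rank lands at the chosen percentile positions.

```python
import pandas as pd

vals = pd.Series([3.0, 1.0, 4.0, 1.5, 9.0, 2.6, 5.0, 3.5, 7.0, 0.5])
rank = vals.rank(method="first")
high = rank.max()
picks = [max(1, round(p * high)) for p in (0.1, 0.5, 0.9)]  # ranks 1, 5, 9
print(sorted(vals[rank.isin(picks)].tolist()))               # [0.5, 3.0, 7.0]
```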
def truncate_brand(s):
if type(s) != str:
return ""
elif s.find(",") == -1:
return s
else:
return s[:s.find(",")]
print(truncate_brand("Kroger, The Kroger Co."))
print(truncate_brand("Roundy's"))
demo_df.dtypes
demo_df["short_brand"] = demo_df.brands.apply(truncate_brand)
demo_df
Explanation: Now need to try some processing on product name as many will be too similar and uninformative without brand name.
End of explanation
demo_df["orig_product_name"] = demo_df.product_name
demo_df["new_product_name"] = demo_df.short_brand + " " + demo_df.product_name
demo_df.product_name = demo_df.new_product_name
Explanation: Now going to save original product name and replace with a combination of that plus the brand
End of explanation
demo_df["hos"] = 0
demo_df["image_url"] = None
Explanation: Need to add blanks for the other columns which only exist in the hand-curated demo data.
End of explanation
orig_demo = pd.read_csv(data_dir+"demo_food_data_latest.csv")
# Specify what columns we need to keep to match the regular dataframe above.
wanted_cols = ['code', 'creator', 'hos', 'image_url', 'product_name', 'brands', 'brands_tags', 'serving_size',
'serving_size', 'energy_100g', 'fat_100g', 'cholesterol_100g',
'carbohydrates_100g', 'sugars_100g', 'fiber_100g', 'proteins_100g', 'sodium_100g',
'demo_cat']
# Create a list of columns to drop to check it worked ok
drop_cols = [c for c in orig_demo.columns if c not in wanted_cols]
print(drop_cols)
# Drop unwanted columns in orig demo
orig_demo.drop(drop_cols, axis = 1, inplace = True)
orig_demo
# Drop unwanted columns in demo_df
# Create a list of columns to drop to check it worked ok
drop_cols = [c for c in demo_df.columns if c not in wanted_cols]
print(drop_cols)
demo_df.drop(drop_cols, axis = 1, inplace = True)
demo_df
missing_cols = [col for col in demo_df.columns if col not in orig_demo.columns]
print("Missing columns", missing_cols)
Explanation: Now want to append the original demo data products and their hall of shame status so we always have them in too.
End of explanation
finished = demo_df.append(orig_demo)
finished
Explanation: Now want to append the original demo data to the other data
End of explanation
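A note in passing: DataFrame.append (used below) was deprecated in pandas 1.4 and removed in 2.0; on newer pandas the same row-wise concatenation is spelled with pd.concat. Toy frames for illustration:

```python
import pandas as pd

a = pd.DataFrame({"code": ["N1"], "hos": [0]})
b = pd.DataFrame({"code": ["N2"], "hos": [1]})
finished = pd.concat([a, b], ignore_index=True)
print(finished.code.tolist())  # ['N1', 'N2']
```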
# finished.to_csv(code_dir+"demo_food_data_final.csv", index = False)
finished.to_csv(code_dir+"demo_food_data_final2.csv", index = False)
Explanation: Can now write this out to file.
End of explanation |
7,632 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Evolution d'indicateurs dans les communes
Step1: Jointure entre 2 fichiers
Step2: Il y a bien les colonnes "status", "mean_altitude", "superficie", "is_metropole" et "metropole_name"
Nombre de personnes qui utilisent la voiture pour aller travailler en métropole (pourcentage)
Step3: Il va falloir re-travailler la données pour pouvoir donner le pourcentage de personnes qui prenne la voiture en 2011 / 2012 ainsi qu'avoir la progression
Step5: Calculer une augmentation
Step6: Transport en commun en pourcentage
Step7: Transport vélo
Step8: Célib | Python Code:
commune_metropole = pd.read_csv('data/commune_metropole.csv', encoding='utf-8')
commune_metropole.shape
commune_metropole.head()
insee = pd.read_csv('data/insee.csv',
sep=";", # field separator used in the file
dtype={'COM' : np.dtype(str)}, # force the COM column to be read as a string
encoding='utf-8') # encoding
insee.shape
insee.info()
insee.head()
pd.set_option('display.max_columns', 30) # Change the number of columns displayed in the notebook
insee.head()
Explanation: Evolution of indicators in the communes:
Documentation: https://github.com/anthill/open-moulinette/blob/master/insee/documentation.md
Loading our data:
End of explanation
data = insee.merge(commune_metropole, on='COM', how='left')
data.shape
data.head()
Explanation: Joining the 2 files:
End of explanation
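A minimal sketch of the left join above, with made-up commune codes, showing that rows without a match in the metropole file keep NaN in the joined columns:

```python
import pandas as pd

insee_demo = pd.DataFrame({"COM": ["33063", "75056"], "pop": [240000, 2200000]})
metro_demo = pd.DataFrame({"COM": ["33063"], "metropole_name": ["Bordeaux Metropole"]})
joined = insee_demo.merge(metro_demo, on="COM", how="left")
print(joined.metropole_name.isna().tolist())  # [False, True]
```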
# Keys for grouping by city
key = ['CODGEO',
'LIBGEO',
'COM',
'LIBCOM',
'REG',
'DEP',
'ARR',
'CV',
'ZE2010',
'UU2010',
'TRIRIS',
'REG2016',
'status_rank']
# 'is_metropole']
# Other values
features = [col for col in data.columns if col not in key]
# Names of the columns that are not in the key list
# We want to group the data by metropole name:
# Summing all of our indicators
metropole_sum = data[features][data.is_metropole == 1].groupby('metropole_name').sum().reset_index()
metropole_sum.shape
metropole_sum
voiture_colonnes = ['metropole_name' ,'C12_ACTOCC15P_VOIT', 'C11_ACTOCC15P_VOIT','P12_ACTOCC15P', 'P11_ACTOCC15P']
voiture = metropole_sum[voiture_colonnes].copy()
voiture
Explanation: The columns "status", "mean_altitude", "superficie", "is_metropole" and "metropole_name" are indeed there
Number of people who commute to work by car, per metropolitan area (percentage):
End of explanation
voiture['pourcentage_car_11'] = (voiture["C11_ACTOCC15P_VOIT"] / voiture["P11_ACTOCC15P"])*100
voiture
voiture['pourcentage_car_12'] = (voiture["C12_ACTOCC15P_VOIT"] / voiture["P12_ACTOCC15P"])*100
voiture
Explanation: We need to rework the data to get the percentage of people who commuted by car in 2011 / 2012, as well as the change between the two years
End of explanation
def augmentation(depart, arrive):
Compute the percentage increase between 2 values:
# ( ( end value - start value ) / start value ) x 100
return ((arrive - depart) / depart) * 100
voiture['augmentation'] = augmentation(voiture['pourcentage_car_11'], voiture['pourcentage_car_12'])
# Metropolitan areas with the smallest increase in car commuting:
voiture.sort_values('augmentation')
Explanation: Computing a percentage increase:
End of explanation
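A quick sanity check of the helper (standalone copy, made-up numbers): going from 50 to 75 is a 50% increase, going from 80 to 60 is a 25% decrease.

```python
def augmentation(depart, arrive):
    # ((end value - start value) / start value) x 100
    return ((arrive - depart) / depart) * 100

print(augmentation(50, 75))  # 50.0
print(augmentation(80, 60))  # -25.0
```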
transp_com_colonnes = ['metropole_name' , 'C11_ACTOCC15P_TCOM', 'C12_ACTOCC15P_TCOM','P12_ACTOCC15P', 'P11_ACTOCC15P']
transp_com = metropole_sum[transp_com_colonnes].copy()
transp_com
transp_com['pourcentage_trans_com_12'] = (transp_com["C12_ACTOCC15P_TCOM"] / transp_com["P12_ACTOCC15P"])*100
transp_com
transp_com['pourcentage_trans_com_11'] = (transp_com["C11_ACTOCC15P_TCOM"] / transp_com["P11_ACTOCC15P"])*100
transp_com
transp_com['augmentation'] = augmentation(transp_com['pourcentage_trans_com_11'], transp_com['pourcentage_trans_com_12'])
transp_com.sort_values('augmentation')
Explanation: Public transport, as a percentage:
End of explanation
transp_velo_colonnes = ['metropole_name' , 'C11_ACTOCC15P_DROU', 'C12_ACTOCC15P_DROU','P12_ACTOCC15P', 'P11_ACTOCC15P']
transp_velo = metropole_sum[transp_velo_colonnes].copy()
transp_velo
transp_velo['pourcentage_velo_12'] = (transp_velo["C12_ACTOCC15P_DROU"] / transp_velo["P12_ACTOCC15P"])*100
transp_velo
transp_velo['pourcentage_velo_11'] = (transp_velo["C11_ACTOCC15P_DROU"] / transp_velo["P11_ACTOCC15P"])*100
transp_velo
Explanation: Two-wheeler (bike) transport:
End of explanation
data.head()
bdx = data[data.LIBCOM == "Bordeaux"]
bdx
bdx.LIBGEO.unique()
bdx[bdx.LIBGEO.str.contains("Cauderan")][['P11_POP15P_CELIB', 'P12_POP15P_CELIB', 'P12_F1529', 'P12_H1529']].sum()
bdx[bdx.LIBGEO.str.contains("Chartron")][['P11_POP15P_CELIB', 'P12_POP15P_CELIB', 'P12_F1529', 'P12_H1529']].sum()
commune_metropole.head()
Explanation: Singles: age / Bordeaux only, by neighbourhood:
End of explanation |
7,633 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic MEG and EEG data processing
MNE-Python reimplements most of MNE-C's (the original MNE command line utils)
functionality and offers transparent scripting.
On top of that it extends MNE-C's functionality considerably
(customize events, compute contrasts, group statistics, time-frequency
analysis, EEG-sensor space analyses, etc.) It uses the same files as standard
MNE unix commands
Step1: If you'd like to turn information status messages off
Step2: But it's generally a good idea to leave them on
Step3: You can set the default level in every session by setting the environment
variable "MNE_LOGGING_LEVEL", or by having mne-python write preferences to a
file with
Step4: By default logging messages print to the console, but look at
Step5: <div class="alert alert-info"><h4>Note</h4><p>The MNE sample dataset should be downloaded automatically but be
patient (approx. 2GB)</p></div>
Read data from file
Step6: Look at the channels in raw
Step7: Read and plot a segment of raw data
Step8: Save a segment of 150s of raw data (MEG only)
Step9: Define and read epochs
^^^^^^^^^^^^^^^^^^^^^^
First extract events
Step10: Note that, by default, we use stim_channel='STI 014'. If you have a different
system (e.g., a newer system that uses channel 'STI101' by default), you can
use the following to set the default stim channel to use for finding events
Step11: Events are stored as a 2D numpy array where the first column is the time
instant and the last one is the event number. It is therefore easy to
manipulate.
Define epochs parameters
Step12: Exclude some channels (original bads + 2 more)
Step13: The variable raw.info['bads'] is just a python list.
Pick the good channels, excluding raw.info['bads']
Step14: Alternatively one can restrict to magnetometers or gradiometers with
Step15: Define the baseline period
Step16: Define peak-to-peak rejection parameters for gradiometers, magnetometers
and EOG
Step17: Read epochs
Step18: Get single epochs for one condition
Step19: epochs_data is a 3D array of dimension (55 epochs, 365 channels, 106 time
instants).
Scipy supports read and write of matlab files. You can save your single
trials with
Step20: or if you want to keep all the information about the data you can save your
epochs in a fif file
Step21: and read them later with
Step22: Compute evoked responses for auditory responses by averaging and plot it
Step23: .. topic
Step24: It is also possible to read evoked data stored in a fif file
Step25: Or another one stored in the same file
Step26: Two evoked objects can be contrasted using
Step27: To do a weighted sum based on the number of averages, which will give
you what you would have gotten from pooling all trials together in
Step28: Instead of dealing with mismatches in the number of averages, we can use
trial-count equalization before computing a contrast, which can have some
benefits in inverse imaging (note that here weights='nave' will
give the same result as weights='equal')
Step29: Time-Frequency
Step30: Compute induced power and phase-locking values and plot gradiometers
Step31: Inverse modeling
Step32: Read the inverse operator
Step33: Define the inverse parameters
Step34: Compute the inverse solution
Step35: Save the source time courses to disk
Step36: Now, let's compute dSPM on a raw file within a label
Step37: Compute inverse solution during the first 15s
Step38: Save result in stc files
Step39: What else can you do?
^^^^^^^^^^^^^^^^^^^^^
- detect heart beat QRS component
- detect eye blinks and EOG artifacts
- compute SSP projections to remove ECG or EOG artifacts
- compute Independent Component Analysis (ICA) to remove artifacts or
select latent sources
- estimate noise covariance matrix from Raw and Epochs
- visualize cross-trial response dynamics using epochs images
- compute forward solutions
- estimate power in the source space
- estimate connectivity in sensor and source space
- morph stc from one brain to another for group studies
- compute mass univariate statistics base on custom contrasts
- visualize source estimates
- export raw, epochs, and evoked data to other python data analysis
libraries e.g. pandas
- and many more things ...
Want to know more ?
^^^^^^^^^^^^^^^^^^^
Browse the examples gallery <auto_examples/index.html>_. | Python Code:
import mne
Explanation: Basic MEG and EEG data processing
MNE-Python reimplements most of MNE-C's (the original MNE command line utils)
functionality and offers transparent scripting.
On top of that it extends MNE-C's functionality considerably
(customize events, compute contrasts, group statistics, time-frequency
analysis, EEG-sensor space analyses, etc.) It uses the same files as standard
MNE unix commands: no need to convert your files to a new system or database.
This package is based on the FIF file format from Neuromag. It
can read and convert CTF, BTI/4D, KIT and various EEG formats to FIF.
What you can do with MNE Python
Raw data visualization to visualize recordings, can also use
mne_browse_raw for extended functionality (see ch_browse)
Epoching: Define epochs, baseline correction, handle conditions etc.
Averaging to get Evoked data
Compute SSP projectors to remove ECG and EOG artifacts
Compute ICA to remove artifacts or select latent sources.
Maxwell filtering to remove environmental noise.
Boundary Element Modeling: single and three-layer BEM model
creation and solution computation.
Forward modeling: BEM computation and mesh creation
(see ch_forward)
Linear inverse solvers (MNE, dSPM, sLORETA, eLORETA, LCMV, DICS)
Sparse inverse solvers (L1/L2 mixed norm MxNE, Gamma Map,
Time-Frequency MxNE)
Connectivity estimation in sensor and source space
Visualization of sensor and source space data
Time-frequency analysis with Morlet wavelets (induced power,
intertrial coherence, phase lock value) also in the source space
Spectrum estimation using multi-taper method
Mixed Source Models combining cortical and subcortical structures
Dipole Fitting
Decoding multivariate pattern analysis of M/EEG topographies
Compute contrasts between conditions, between sensors, across
subjects etc.
Non-parametric statistics in time, space and frequency
(including cluster-level)
Scripting (batch and parallel computing)
What you're not supposed to do with MNE Python
Brain and head surface segmentation for use with BEM
models -- use Freesurfer.
Installation of the required materials
See install_python_and_mne_python.
From raw data to evoked data
Now, launch ipython_ (Advanced Python shell) using the QT backend, which
is best supported across systems:
.. code-block:: console
$ ipython --matplotlib=qt
<div class="alert alert-info"><h4>Note</h4><p>In IPython, you can press **shift-enter** with a given cell
selected to execute it and advance to the next cell.
Also, the standard location for the MNE-sample data is
``~/mne_data``. If you downloaded data and an example asks you
whether to download it again, make sure the data reside in the
examples directory and you run the script from its current directory.
From IPython e.g. say:
.. code-block:: IPython
In [1]: cd examples/preprocessing
In [2]: %run plot_find_ecg_artifacts.py</p></div>
First, load the mne package:
End of explanation
mne.set_log_level('WARNING')
Explanation: If you'd like to turn information status messages off:
End of explanation
mne.set_log_level('INFO')
Explanation: But it's generally a good idea to leave them on:
End of explanation
print(mne.get_config_path())
Explanation: You can set the default level in every session by setting the environment
variable "MNE_LOGGING_LEVEL", or by having mne-python write preferences to a
file with::
>>> mne.set_config('MNE_LOGGING_LEVEL', 'WARNING')
Note that the location of the mne-python preferences file (for easier manual
editing) can be found using:
End of explanation
from mne.datasets import sample # noqa
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
print(raw_fname)
Explanation: By default logging messages print to the console, but look at
:func:mne.set_log_file to save output to a file.
Access raw data
^^^^^^^^^^^^^^^
End of explanation
raw = mne.io.read_raw_fif(raw_fname)
print(raw)
print(raw.info)
Explanation: <div class="alert alert-info"><h4>Note</h4><p>The MNE sample dataset should be downloaded automatically but be
patient (approx. 2GB)</p></div>
Read data from file:
End of explanation
print(raw.ch_names)
Explanation: Look at the channels in raw:
End of explanation
start, stop = raw.time_as_index([100, 115]) # 100 s to 115 s data segment
data, times = raw[:, start:stop]
print(data.shape)
print(times.shape)
data, times = raw[2:20:3, start:stop] # access underlying data
raw.plot()
Explanation: Read and plot a segment of raw data
End of explanation
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True,
exclude='bads')
raw.save('sample_audvis_meg_raw.fif', tmin=0, tmax=150, picks=picks,
overwrite=True)
Explanation: Save a segment of 150s of raw data (MEG only):
End of explanation
events = mne.find_events(raw, stim_channel='STI 014')
print(events[:5])
Explanation: Define and read epochs
^^^^^^^^^^^^^^^^^^^^^^
First extract events:
End of explanation
mne.set_config('MNE_STIM_CHANNEL', 'STI101', set_env=True)
Explanation: Note that, by default, we use stim_channel='STI 014'. If you have a different
system (e.g., a newer system that uses channel 'STI101' by default), you can
use the following to set the default stim channel to use for finding events:
End of explanation
event_id = dict(aud_l=1, aud_r=2) # event trigger and conditions
tmin = -0.2 # start of each epoch (200ms before the trigger)
tmax = 0.5 # end of each epoch (500ms after the trigger)
Explanation: Events are stored as a 2D numpy array where the first column is the time
instant and the last one is the event number. It is therefore easy to
manipulate.
Define epochs parameters:
End of explanation
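Because the events are a plain (n_events, 3) NumPy array, selecting a condition is ordinary boolean indexing. A toy array with made-up sample indices and event codes:

```python
import numpy as np

events = np.array([[1000, 0, 1],
                   [1500, 0, 2],
                   [2000, 0, 1]])
aud_l = events[events[:, 2] == 1]  # rows whose event code is 1
print(aud_l[:, 0].tolist())        # [1000, 2000]
```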
raw.info['bads'] += ['MEG 2443', 'EEG 053']
Explanation: Exclude some channels (original bads + 2 more):
End of explanation
picks = mne.pick_types(raw.info, meg=True, eeg=True, eog=True, stim=False,
exclude='bads')
Explanation: The variable raw.info['bads'] is just a python list.
Pick the good channels, excluding raw.info['bads']:
End of explanation
mag_picks = mne.pick_types(raw.info, meg='mag', eog=True, exclude='bads')
grad_picks = mne.pick_types(raw.info, meg='grad', eog=True, exclude='bads')
Explanation: Alternatively one can restrict to magnetometers or gradiometers with:
End of explanation
baseline = (None, 0) # means from the first instant to t = 0
Explanation: Define the baseline period:
End of explanation
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
Explanation: Define peak-to-peak rejection parameters for gradiometers, magnetometers
and EOG:
End of explanation
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,
baseline=baseline, preload=False, reject=reject)
print(epochs)
Explanation: Read epochs:
End of explanation
epochs_data = epochs['aud_l'].get_data()
print(epochs_data.shape)
Explanation: Get single epochs for one condition:
End of explanation
from scipy import io # noqa
io.savemat('epochs_data.mat', dict(epochs_data=epochs_data), oned_as='row')
Explanation: epochs_data is a 3D array of dimension (55 epochs, 365 channels, 106 time
instants).
Scipy supports read and write of matlab files. You can save your single
trials with:
End of explanation
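A self-contained round-trip check of the savemat/loadmat pair, written to a temporary directory so nothing is left behind:

```python
import os
import tempfile
import numpy as np
from scipy import io

arr = np.arange(6.0).reshape(2, 3)
path = os.path.join(tempfile.mkdtemp(), "epochs_data.mat")
io.savemat(path, {"epochs_data": arr}, oned_as="row")
back = io.loadmat(path)["epochs_data"]
print(np.array_equal(arr, back))  # True
```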
epochs.save('sample-epo.fif')
Explanation: or if you want to keep all the information about the data you can save your
epochs in a fif file:
End of explanation
saved_epochs = mne.read_epochs('sample-epo.fif')
Explanation: and read them later with:
End of explanation
evoked = epochs['aud_l'].average()
print(evoked)
evoked.plot(time_unit='s')
Explanation: Compute evoked responses for auditory responses by averaging and plot it:
End of explanation
max_in_each_epoch = [e.max() for e in epochs['aud_l']] # doctest:+ELLIPSIS
print(max_in_each_epoch[:4]) # doctest:+ELLIPSIS
Explanation: .. topic:: Exercise
Extract the max value of each epoch
End of explanation
evoked_fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked1 = mne.read_evokeds(
evoked_fname, condition='Left Auditory', baseline=(None, 0), proj=True)
Explanation: It is also possible to read evoked data stored in a fif file:
End of explanation
evoked2 = mne.read_evokeds(
evoked_fname, condition='Right Auditory', baseline=(None, 0), proj=True)
Explanation: Or another one stored in the same file:
End of explanation
contrast = mne.combine_evoked([evoked1, evoked2], weights=[0.5, -0.5])
contrast = mne.combine_evoked([evoked1, -evoked2], weights='equal')
print(contrast)
Explanation: Two evoked objects can be contrasted using :func:mne.combine_evoked.
This function can use weights='equal', which provides a simple
element-by-element subtraction (and sets the
mne.Evoked.nave attribute properly based on the underlying number
of trials) using either equivalent call:
End of explanation
average = mne.combine_evoked([evoked1, evoked2], weights='nave')
print(average)
Explanation: To do a weighted sum based on the number of averages, which will give
you what you would have gotten from pooling all trials together in
:class:mne.Epochs before creating the :class:mne.Evoked instance,
you can use weights='nave':
End of explanation
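What weights='nave' computes can be checked by hand: each input is weighted by its trial count over the total, which is exactly the pooled mean. Made-up per-channel means and trial counts:

```python
import numpy as np

nave1, nave2 = 55, 61          # hypothetical trial counts
e1 = np.array([1.0, 2.0])      # hypothetical channel means
e2 = np.array([3.0, 4.0])
pooled = (nave1 * e1 + nave2 * e2) / (nave1 + nave2)
print(np.round(pooled, 3))     # [2.052 3.052]
```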
epochs_eq = epochs.copy().equalize_event_counts(['aud_l', 'aud_r'])[0]
evoked1, evoked2 = epochs_eq['aud_l'].average(), epochs_eq['aud_r'].average()
print(evoked1)
print(evoked2)
contrast = mne.combine_evoked([evoked1, -evoked2], weights='equal')
print(contrast)
Explanation: Instead of dealing with mismatches in the number of averages, we can use
trial-count equalization before computing a contrast, which can have some
benefits in inverse imaging (note that here weights='nave' will
give the same result as weights='equal'):
End of explanation
import numpy as np # noqa
n_cycles = 2 # number of cycles in Morlet wavelet
freqs = np.arange(7, 30, 3) # frequencies of interest
Explanation: Time-Frequency: Induced power and inter trial coherence
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Define parameters:
End of explanation
from mne.time_frequency import tfr_morlet # noqa
power, itc = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles,
return_itc=True, decim=3, n_jobs=1)
power.plot([power.ch_names.index('MEG 1332')])
Explanation: Compute induced power and phase-locking values and plot gradiometers:
End of explanation
from mne.minimum_norm import apply_inverse, read_inverse_operator # noqa
Explanation: Inverse modeling: MNE and dSPM on evoked and raw data
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Import the required functions:
End of explanation
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
inverse_operator = read_inverse_operator(fname_inv)
Explanation: Read the inverse operator:
End of explanation
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM"
Explanation: Define the inverse parameters:
End of explanation
stc = apply_inverse(evoked, inverse_operator, lambda2, method)
Explanation: Compute the inverse solution:
End of explanation
stc.save('mne_dSPM_inverse')
Explanation: Save the source time courses to disk:
End of explanation
fname_label = data_path + '/MEG/sample/labels/Aud-lh.label'
label = mne.read_label(fname_label)
Explanation: Now, let's compute dSPM on a raw file within a label:
End of explanation
from mne.minimum_norm import apply_inverse_raw # noqa
start, stop = raw.time_as_index([0, 15]) # read the first 15s of data
stc = apply_inverse_raw(raw, inverse_operator, lambda2, method, label,
start, stop)
Explanation: Compute inverse solution during the first 15s:
End of explanation
stc.save('mne_dSPM_raw_inverse_Aud')
Explanation: Save result in stc files:
End of explanation
print("Done!")
Explanation: What else can you do?
^^^^^^^^^^^^^^^^^^^^^
- detect heart beat QRS component
- detect eye blinks and EOG artifacts
- compute SSP projections to remove ECG or EOG artifacts
- compute Independent Component Analysis (ICA) to remove artifacts or
select latent sources
- estimate noise covariance matrix from Raw and Epochs
- visualize cross-trial response dynamics using epochs images
- compute forward solutions
- estimate power in the source space
- estimate connectivity in sensor and source space
- morph stc from one brain to another for group studies
- compute mass univariate statistics base on custom contrasts
- visualize source estimates
- export raw, epochs, and evoked data to other python data analysis
libraries e.g. pandas
- and many more things ...
Want to know more ?
^^^^^^^^^^^^^^^^^^^
Browse the examples gallery <auto_examples/index.html>_.
End of explanation |
7,634 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Run Average Time Experiment
Step1: Average Time excluding PQT Time
This is equivalent to running the algorithms using a PQT generated on the current points.
Step2: Average Time including PQT Time
This is equivalent to running the algorithms using a previously generated PQT.
Due to the fact that the number of repetitions was identical for each experiment, we may calculate the average time including PQT by merely summing the averages
Step4: Average Cost
Step5: Average Cost | Python Code:
splice_avg_times, plg_avg_times, asplice_avg_times, pqt_avg_times = \
compute_avg_times(n_pairs_li, n_reps, gen_pd_edges, p_hat, verbose=True)
Explanation: Run Average Time Experiment
End of explanation
fig, ax = plt.subplots()
ax.set_yscale('log')
ax.plot(n_pairs_li, splice_avg_times, color='b', linestyle='-',
label=r"SPLICE")
ax.plot(n_pairs_li, plg_avg_times, color='g', linestyle='-.',
label=r"PLG ($\,\hat{{p}}={}$)".format(p_hat))
ax.plot(n_pairs_li, asplice_avg_times, color='r', linestyle='--',
label=r"ASPLICE ($\,\hat{{p}}={}$)".format(p_hat))
ax.set_xlabel("Number of pd Pairs", fontsize=15)
ax.set_ylabel("Runtime (s)", fontsize=15)
ax.set_title('Algorithm Runtime vs Number of Pairs (excluding PQT)')
ax.legend(loc=2)
ax.grid(True)
fig.tight_layout()
Explanation: Average Time excluding PQT Time
This is equivalent to running the algorithms using a PQT generated on the current points.
End of explanation
splice_avg_times_inc = np.array(splice_avg_times) + np.array(pqt_avg_times)
asplice_avg_times_inc = np.array(asplice_avg_times) + np.array(pqt_avg_times)
plg_avg_times_inc = np.array(plg_avg_times) + np.array(pqt_avg_times)
fig, ax = plt.subplots()
ax.set_yscale('log')
ax.plot(n_pairs_li, splice_avg_times, color='b', linestyle='-',
label=r"SPLICE")
ax.plot(n_pairs_li, plg_avg_times, color='g', linestyle='-.',
label=r"PLG")
ax.plot(n_pairs_li, asplice_avg_times, color='r', linestyle='--',
label=r"ASPLICE")
ax.plot(n_pairs_li, plg_avg_times_inc, color='g', linestyle=':',
label=r"PLG inc")
ax.plot(n_pairs_li, asplice_avg_times_inc, color='r', linestyle=':',
label=r"ASPLICE inc")
ax.set_xlabel("Number of pd Pairs", fontsize=15)
ax.set_ylabel("Runtime (s)", fontsize=15)
ax.set_title(r"Algorithm Runtime vs Number of Pairs ($\,\hat{{p}}={}$)".format(p_hat))
ax.legend(loc=2, labelspacing=0)
ax.grid(True)
fig.tight_layout()
print()
Explanation: Average Time including PQT Time
This is equivalent to building the PQT on the current points as part of the run, so its construction time is counted.
Because the number of repetitions was identical for each experiment, we can calculate the average time including PQT construction by simply summing the two averages.
End of explanation
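The claim above — that equal repetition counts let the PQT-inclusive average be obtained by summing the two separate averages — can be checked with a tiny NumPy example (the times below are invented for illustration):

```python
import numpy as np

# Toy check: with the same number of repetitions in both measurements,
# mean(alg + pqt) == mean(alg) + mean(pqt).
alg_times = np.array([1.0, 1.2, 0.9, 1.1])      # per-rep algorithm runtimes
pqt_times = np.array([0.30, 0.40, 0.35, 0.25])  # per-rep PQT build times

combined_avg = np.mean(alg_times + pqt_times)
summed_avgs = np.mean(alg_times) + np.mean(pqt_times)
print(combined_avg, summed_avgs)  # equal up to float rounding
```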
def compute_avg_costs(n_pairs_li, n_reps, gen_pd_edges, p_hat=0.01, verbose=False):
    """
    Parameters:
        n_pairs_li - a list of the number of pairs to generate
        n_reps - the number of repetitions of the experiment for each number of pairs
        gen_pd_edges - function that generates n pairs
        p_hat - probability parameter used when building the PQT
        verbose - whether or not to print the repetitions to stdout
    """
    splice_avg_costs = []
    plg_avg_costs = []
    asplice_avg_costs = []
    # Run experiment
    for n_pairs in n_pairs_li:
        splice_cum_cost = 0.
        asplice_cum_cost = 0.
        plg_cum_cost = 0.
        for rep in range(n_reps):
            if verbose:
                print("Number of pairs: {} Rep: {} at ({})".format(n_pairs,
                    rep, time.strftime("%H:%M:%S", time.gmtime())))
                sys.stdout.flush()
            # Generate pairs
            pd_edges = gen_pd_edges(n_pairs)
            # Run SPLICE
            _, cost = splice_alg(pd_edges)
            splice_cum_cost += cost
            # Generate PQT
            pqt = PQTDecomposition().from_points(pd_edges.keys(), p_hat=p_hat)
            # Run ASPLICE
            _, cost = asplice_alg(pd_edges, pqt=pqt)
            asplice_cum_cost += cost
            # Run PLG
            _, cost = plg_alg(pd_edges, pqt=pqt)
            plg_cum_cost += cost
        # Average the cumulative costs over the repetitions
        splice_avg_costs.append(splice_cum_cost / n_reps)
        plg_avg_costs.append(plg_cum_cost / n_reps)
        asplice_avg_costs.append(asplice_cum_cost / n_reps)
    return splice_avg_costs, plg_avg_costs, asplice_avg_costs
splice_avg_costs, plg_avg_costs, asplice_avg_costs = \
compute_avg_costs(n_pairs_li, n_reps, gen_pd_edges, p_hat, verbose=True)
Explanation: Average Cost
End of explanation
fig, ax = plt.subplots()
#ax.set_yscale('log')
ax.plot(n_pairs_li, splice_avg_costs, color='b', linestyle='-',
label=r"SPLICE")
ax.plot(n_pairs_li, plg_avg_costs, color='g', linestyle='-.',
label=r"PLG")
ax.plot(n_pairs_li, asplice_avg_costs, color='r', linestyle='--',
label=r"ASPLICE")
ax.set_xlabel("Number of pd Pairs", fontsize=15)
ax.set_ylabel("Average Cost", fontsize=15)
ax.set_title(r"Algorithm Cost vs Number of Pairs ($\,\hat{{p}}={}$)".format(p_hat))
ax.legend(loc=2, labelspacing=0)
ax.grid(True)
fig.tight_layout()
Explanation: Average Cost
End of explanation |
7,635 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Prerequisites (downloading tensorflow_models and checkpoints)
Checkpoint based inference
Frozen inference
Prerequisites (downloading tensorflow_models and checkpoints)
Step1: Checkpoint based inference
Step2: Frozen inference | Python Code:
!git clone https://github.com/tensorflow/models
from __future__ import print_function
from IPython import display
checkpoint_name = 'mobilenet_v2_1.0_224' #@param
url = 'https://storage.googleapis.com/mobilenet_v2/checkpoints/' + checkpoint_name + '.tgz'
print('Downloading from ', url)
!wget {url}
print('Unpacking')
!tar -xvf {checkpoint_name}.tgz
checkpoint = checkpoint_name + '.ckpt'
display.clear_output()
print('Successfully downloaded checkpoint from ', url,
'. It is available as', checkpoint)
!wget https://upload.wikimedia.org/wikipedia/commons/f/fe/Giant_Panda_in_Beijing_Zoo_1.JPG -O panda.jpg
# setup path
import sys
sys.path.append('/content/models/research/slim')
Explanation: Prerequisites (downloading tensorflow_models and checkpoints)
Checkpoint based inference
Frozen inference
Prerequisites (downloading tensorflow_models and checkpoints)
End of explanation
import tensorflow.compat.v1 as tf
from nets.mobilenet import mobilenet_v2
tf.reset_default_graph()
# For simplicity we just decode jpeg inside tensorflow.
# But one can provide any input obviously.
file_input = tf.placeholder(tf.string, ())
image = tf.image.decode_jpeg(tf.read_file(file_input))
images = tf.expand_dims(image, 0)
images = tf.cast(images, tf.float32) / 128. - 1
images.set_shape((None, None, None, 3))
images = tf.image.resize_images(images, (224, 224))
# Note: arg_scope is optional for inference.
with tf.contrib.slim.arg_scope(mobilenet_v2.training_scope(is_training=False)):
logits, endpoints = mobilenet_v2.mobilenet(images)
# Restore using exponential moving average since it produces (1.5-2%) higher
# accuracy
ema = tf.train.ExponentialMovingAverage(0.999)
vars = ema.variables_to_restore()
saver = tf.train.Saver(vars)
from IPython import display
import pylab
from datasets import imagenet
import PIL
display.display(display.Image('panda.jpg'))
with tf.Session() as sess:
saver.restore(sess, checkpoint)
x = endpoints['Predictions'].eval(feed_dict={file_input: 'panda.jpg'})
label_map = imagenet.create_readable_names_for_imagenet_labels()
print("Top 1 prediction: ", x.argmax(),label_map[x.argmax()], x.max())
Explanation: Checkpoint based inference
End of explanation
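Both inference paths in this notebook scale pixels with `/ 128 - 1`. A standalone NumPy sketch of that normalization (no TensorFlow needed) makes the resulting value range explicit:

```python
import numpy as np

# MobileNet-style preprocessing: map uint8 pixels [0, 255] into roughly [-1, 1).
pixels = np.array([0, 128, 255], dtype=np.uint8)
normalized = pixels.astype(np.float32) / 128.0 - 1.0
print(normalized)  # -1.0, 0.0, ~0.992
```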
import numpy as np
img = np.array(PIL.Image.open('panda.jpg').resize((224, 224))).astype(np.float32) / 128 - 1
gd = tf.GraphDef.FromString(open(checkpoint_name + '_frozen.pb', 'rb').read())
inp, predictions = tf.import_graph_def(gd, return_elements = ['input:0', 'MobilenetV2/Predictions/Reshape_1:0'])
with tf.Session(graph=inp.graph):
x = predictions.eval(feed_dict={inp: img.reshape(1, 224,224, 3)})
label_map = imagenet.create_readable_names_for_imagenet_labels()
print("Top 1 Prediction: ", x.argmax(),label_map[x.argmax()], x.max())
Explanation: Frozen inference
End of explanation |
7,636 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I am trying to convert some MATLAB code to Python. I don't know how to initialize an empty matrix in Python. | Problem:
import numpy as np
result = np.array([]) |
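`np.array([])` gives a 1-D empty array. Coming from MATLAB, one often also wants to grow a matrix row by row; a hedged sketch of one idiomatic NumPy way to do that (the column count below is purely illustrative):

```python
import numpy as np

# Mimic MATLAB's growable empty matrix: start with 0 rows, stack rows on.
result = np.empty((0, 3))          # 0 rows, 3 columns
row = np.array([[1.0, 2.0, 3.0]])
result = np.vstack([result, row])  # now 1 row
print(result.shape)  # (1, 3)
```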
7,637 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="../../img/logo_white_bkg_small.png" align="left" />
Worksheet 3
Step1: Exercise 1
Step2: Exercise 2
Step3: Exercise 3
Step4: Exercise 4
Step5: Part 1
Step6: Part 2
Step7: Part 3
Step8: Part 4 | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
%pylab inline
data = pd.read_csv( '../../data/dailybots.csv' )
#Look at a summary of the data
data.describe()
Explanation: <img src="../../img/logo_white_bkg_small.png" align="left" />
Worksheet 3: EDA Worksheet
This worksheet covers concepts covered in the first half of Module 1 - Exploratory Data Analysis in One Dimension. It should take no more than 20-30 minutes to complete. Please raise your hand if you get stuck.
There are many ways to accomplish the tasks that you are presented with, however you will find that by using the techniques covered in class, the exercises should be relatively simple.
Import the Libraries
For this exercise, we will be using:
* Pandas (http://pandas.pydata.org/pandas-docs/stable/)
* Numpy (https://docs.scipy.org/doc/numpy/reference/)
* Matplotlib (http://matplotlib.org/api/pyplot_api.html)
End of explanation
#Generate a series of random numbers between 1 and 100.
random_numbers = pd.Series( np.random.randint(1, 101, 50) )
#Your code here...
#Filter the Series
random_numbers = random_numbers[random_numbers >= 10]
#Sort the Series
random_numbers.sort_values(inplace=True)
#Calculate the Tukey 5 Number Summary
random_numbers.describe()
#Count the number of even and odd numbers
even_numbers = random_numbers[random_numbers % 2 == 0].count()
odd_numbers = random_numbers[random_numbers % 2 != 0].count()
print( "Even numbers: " + str(even_numbers))
print( "Odd numbers: " + str(odd_numbers))
#Find the five largest and smallest numbers
print( "Smallest Numbers:")
print( random_numbers.head(5))
print( "Largest Numbers:")
print( random_numbers.tail(5))
Explanation: Exercise 1: Summarize the Data
For this exercise, you are given a Series of random numbers creatively named random_numbers. For the first exercise please do the following:
Remove all the numbers less than 10
Sort the series
Calculate the Tukey 5 number summary for this dataset
Count the number of even and odd numbers
Find the five largest and five smallest numbers in the series
End of explanation
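Note that `Series.describe()` above reports count, mean, and std in addition to the five-number summary. A small sketch computing just the Tukey five numbers directly (the sample values are made up):

```python
import numpy as np

# Tukey five-number summary (min, Q1, median, Q3, max) via percentiles;
# describe() adds count, mean, and std on top of these.
values = np.array([12, 18, 25, 31, 44, 50, 67, 71, 88, 93])
summary = np.percentile(values, [0, 25, 50, 75, 100])
print(summary)  # min=12, median=47.0, max=93
```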
#Your code here...
random_numbers.hist(bins=10)
Explanation: Exercise 2: Creating a Histogram
Using the random number Series create a histogram with 10 bins.
End of explanation
phone_numbers = [
'(833) 759-6854',
'(811) 268-9951',
'(855) 449-4648',
'(833) 212-2929',
'(833) 893-7475',
'(822) 346-3086',
'(844) 259-9074',
'(855) 975-8945',
'(811) 385-8515',
'(811) 523-5090',
'(844) 593-5677',
'(833) 534-5793',
'(899) 898-3043',
'(833) 662-7621',
'(899) 146-8244',
'(822) 793-4965',
'(822) 641-7853',
'(833) 153-7848',
'(811) 958-2930',
'(822) 332-3070',
'(833) 223-1776',
'(811) 397-1451',
'(844) 096-0377',
'(822) 000-0717',
'(899) 311-1880']
#Your code here...
phone_number_series = pd.Series(phone_numbers)
area_codes = phone_number_series.str.slice(1,4)
area_codes2 = phone_number_series.str.extract( '\((\d{3})\)', expand=False)
area_codes2.value_counts()
Explanation: Exercise 3: Counting Values
You have been given a list of US phone numbers. The area code is the first three digits. Your task is to produce a summary of how many times each area code appears in the list. To do this you will need to:
1. Extract the area code from each phone number
2. Count the unique occurrences.
End of explanation
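The same extraction can also be done without pandas, using the `re` module and `collections.Counter` (the numbers below are a small subset for illustration):

```python
import re
from collections import Counter

# Same area-code tally without pandas: capture the "(NNN)" prefix.
numbers = ['(833) 759-6854', '(811) 268-9951', '(833) 212-2929']
codes = [re.match(r'\((\d{3})\)', n).group(1) for n in numbers]
print(Counter(codes))  # 833 appears twice, 811 once
```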
data = pd.read_csv( '../../data/dailybots.csv' )
data.head()
data.describe()
data.info()
data['botfam'].value_counts()
Explanation: Exercise 4: Putting it all together: Bot Analysis
First you're going to want to create a data frame from the dailybots.csv file, which can be found in the data directory. You should be able to do this with the pd.read_csv() function. Take a minute to look at the dataframe, because we are going to be using it to answer several different questions.
End of explanation
grouped_df = data[data.botfam == "Ramnit"].groupby(['industry'])
grouped_df[['hosts']].sum()
Explanation: Part 1: Which industry sees the most Ramnit infections? Least?
Count the number of infected days for "Ramnit" in each industry.
How:
1. First filter the data to remove all the infections we don't care about
2. Aggregate the data on the column of interest. HINT: You might want to use the groupby() function
3. Add up the results
End of explanation
group2 = data[['botfam','orgs']].groupby( ['botfam'])
summary = group2.agg(['min', 'max', 'mean', 'median', 'std'])
summary.sort_values( [('orgs', 'median')], ascending=False)
Explanation: Part 2: Calculate the min, max, median and mean infected orgs for each bot family, sort by median
In this exercise, you are asked to calculate the min, max, median and mean of infected orgs for each bot family sorted by median. HINT:
1. Using the groupby() function, create a grouped data frame
2. You can do this one metric at a time OR you can use the .agg() function. You might want to refer to the documentation here: http://pandas.pydata.org/pandas-docs/stable/groupby.html#applying-multiple-functions-at-once
3. Sort the values (HINT HINT) by the median column
End of explanation
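Aggregating with several functions produces MultiIndex columns, which is why the sort above addresses the column as the tuple `('orgs', 'median')`. A tiny self-contained illustration with made-up data:

```python
import pandas as pd

# Toy frame mirroring the shape above: groupby + multi-function agg
# yields MultiIndex columns, addressed as (column, stat) tuples.
df = pd.DataFrame({'botfam': ['A', 'A', 'B', 'B'],
                   'orgs': [10, 30, 5, 7]})
summary = df.groupby('botfam').agg(['min', 'median'])
ordered = summary.sort_values([('orgs', 'median')], ascending=False)
print(ordered)  # A first: its median (20.0) exceeds B's (6.0)
```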
df3 = data[['date','hosts']].groupby('date').sum()
df3.sort_values(by='hosts', ascending=False).head(10)
Explanation: Part 3: Which date had the most total bot infections, and how many infections occurred on that day?
For the next step, aggregate and sum the number of infections (hosts) by date. Once you've done that, sort the result in descending order.
End of explanation
filteredData = data[ data['botfam'].isin(['Necurs', 'Ramnit', 'PushDo']) ][['date', 'botfam', 'hosts']]
groupedFilteredData = filteredData.groupby( ['date', 'botfam']).sum()
groupedFilteredData.unstack(level=1).plot(kind='line', subplots=False)
Explanation: Part 4: Plot the daily infected hosts for Necurs, Ramnit and PushDo
For the final step, you're going to plot the daily infected hosts for three infection types. In order to do this, you'll need to do the following steps:
1. Filter the data to remove the botfamilies we don't care about.
2. Use groupby() to aggregate the data by date and family, then sum up the hosts in each group
3. Plot the data. Hint: You might want to use the unstack() function to prepare the data for plotting.
4. Use the plot() method to plot the results.
End of explanation |
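The groupby-sum-unstack reshaping used above can be seen on a toy frame (values invented for illustration):

```python
import pandas as pd

# Minimal illustration of groupby(['date', 'botfam']).sum().unstack(level=1):
# long (date, botfam) rows become one row per date, one column per family.
df = pd.DataFrame({'date': ['d1', 'd1', 'd2', 'd2'],
                   'botfam': ['X', 'Y', 'X', 'Y'],
                   'hosts': [3, 5, 4, 6]})
wide = df.groupby(['date', 'botfam']).sum().unstack(level=1)
print(wide)  # 2 rows (dates) x 2 columns (families)
```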
7,638 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'fio-ronm', 'sandbox-1', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: FIO-RONM
Source ID: SANDBOX-1
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:01
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
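ENUM properties with Cardinality 1.N require at least one selection, each drawn from the valid choices listed above. A small, library-independent sketch of checking a selection before passing it to `DOC.set_value` (the chosen value "C" is only a hypothetical example):

```python
VALID_CHOICES = {"N/A", "M", "Y", "E", "ES", "C", "Other: [Please specify]"}


def valid_enum_1n(selections):
    """True if the selection satisfies ENUM Cardinality 1.N for these choices."""
    return len(selections) >= 1 and all(s in VALID_CHOICES for s in selections)


print(valid_enum_1n(["C"]))   # a single valid choice satisfies 1.N
print(valid_enum_1n([]))      # an empty selection violates the 1.N cardinality
```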
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
7,639 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Demonstration of the topic coherence pipeline in Gensim
Introduction
We will be using the u_mass and c_v coherence for two different LDA models
Step1: Set up corpus
As stated in table 2 from this paper, this corpus essentially has two classes of documents. The first five are about human-computer interaction and the other four are about graphs. We will be setting up two LDA models. One with 50 iterations of training and the other with just 1. Hence the one with 50 iterations ("better" model) should be able to capture this underlying pattern of the corpus better than the "bad" LDA model. Therefore, in theory, our topic coherence for the good LDA model should be greater than the one for the bad LDA model.
Step2: Set up two topic models
We'll be setting up two different LDA topic models: a good one and a bad one. To build a "good" topic model, we'll simply train it using more iterations than the bad one. Therefore the u_mass coherence should in theory be better for the good model than for the bad one, since it would produce more "human-interpretable" topics.
Step3: Using U_Mass Coherence
Step4: View the pipeline parameters for one coherence model
Following are the pipeline parameters for u_mass coherence. By pipeline parameters, we mean the functions being used to calculate segmentation, probability estimation, confirmation measure and aggregation as shown in figure 1 in this paper.
Step5: Interpreting the topics
As we will see below using LDA visualization, the better model comes up with two topics composed of the following words
Step6: Using C_V coherence
Step7: Pipeline parameters for C_V coherence
Step8: Print coherence values
Step9: Support for wrappers
This API supports gensim's ldavowpalwabbit and ldamallet wrappers as input parameter to model.
Step10: Support for other topic models
The gensim topics coherence pipeline can be used with other topics models too. Only the tokenized topics should be made available for the pipeline. Eg. with the gensim HDP model | Python Code:
from __future__ import print_function
import os
import logging
import json
import warnings
try:
import pyLDAvis.gensim
CAN_VISUALIZE = True
pyLDAvis.enable_notebook()
from IPython.display import display
except ImportError:
    print("SKIP: please install pyLDAvis")
CAN_VISUALIZE = False
import numpy as np
from gensim.models import CoherenceModel, LdaModel, HdpModel
from gensim.models.wrappers import LdaVowpalWabbit, LdaMallet
from gensim.corpora import Dictionary
warnings.filterwarnings('ignore') # To ignore all warnings that arise here to enhance clarity
Explanation: Demonstration of the topic coherence pipeline in Gensim
Introduction
We will be using the u_mass and c_v coherence for two different LDA models: a "good" and a "bad" LDA model. The good LDA model will be trained over 50 iterations and the bad one for 1 iteration. Hence, in theory, the good LDA model will be able to come up with better, more human-understandable topics. Therefore, the coherence measure output for the good LDA model should be higher (better) than that for the bad LDA model. This is because, simply, the good LDA model usually comes up with better topics that are more human-interpretable.
End of explanation
texts = [['human', 'interface', 'computer'],
['survey', 'user', 'computer', 'system', 'response', 'time'],
['eps', 'user', 'interface', 'system'],
['system', 'human', 'system', 'eps'],
['user', 'response', 'time'],
['trees'],
['graph', 'trees'],
['graph', 'minors', 'trees'],
['graph', 'minors', 'survey']]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
Explanation: Set up corpus
As stated in table 2 from this paper, this corpus essentially has two classes of documents. First five are about human-computer interaction and the other four are about graphs. We will be setting up two LDA models. One with 50 iterations of training and the other with just 1. Hence the one with 50 iterations ("better" model) should be able to capture this underlying pattern of the corpus better than the "bad" LDA model. Therefore, in theory, our topic coherence for the good LDA model should be greater than the one for the bad LDA model.
End of explanation
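As a side note, the (token_id, count) pairs that doc2bow produces can be demystified with a few lines of pure Python. This is only a rough conceptual sketch of the idea — gensim's actual id assignment may differ:

```python
# Rough sketch of what Dictionary + doc2bow do conceptually
# (gensim's real id assignment may differ).
from collections import Counter

def build_token2id(texts):
    token2id = {}
    for doc in texts:
        for token in doc:
            if token not in token2id:
                token2id[token] = len(token2id)
    return token2id

def doc2bow(doc, token2id):
    counts = Counter(token2id[t] for t in doc if t in token2id)
    return sorted(counts.items())

texts = [['human', 'interface', 'computer'],
         ['system', 'human', 'system', 'eps']]
t2i = build_token2id(texts)
print(doc2bow(['system', 'human', 'system'], t2i))
```

Each document becomes a sparse list of (token id, in-document count) pairs, which is exactly the shape the LDA models below consume.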
goodLdaModel = LdaModel(corpus=corpus, id2word=dictionary, iterations=50, num_topics=2)
badLdaModel = LdaModel(corpus=corpus, id2word=dictionary, iterations=1, num_topics=2)
Explanation: Set up two topic models
We'll be setting up two different LDA topic models: a good one and a bad one. To build a "good" topic model, we'll simply train it using more iterations than the bad one. Therefore the u_mass coherence should in theory be better for the good model than for the bad one, since it would be producing more "human-interpretable" topics.
End of explanation
goodcm = CoherenceModel(model=goodLdaModel, corpus=corpus, dictionary=dictionary, coherence='u_mass')
badcm = CoherenceModel(model=badLdaModel, corpus=corpus, dictionary=dictionary, coherence='u_mass')
Explanation: Using U_Mass Coherence
End of explanation
print(goodcm)
Explanation: View the pipeline parameters for one coherence model
Following are the pipeline parameters for u_mass coherence. By pipeline parameters, we mean the functions being used to calculate segmentation, probability estimation, confirmation measure and aggregation as shown in figure 1 in this paper.
End of explanation
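For intuition about the confirmation-measure stage, u_mass scores each pair of top topic words by document co-occurrence: each lower-ranked word is scored against each higher-ranked one as log((co-document-frequency + 1) / document-frequency of the higher-ranked word). A hedged pure-Python sketch of the idea — gensim's exact segmentation and aggregation may differ slightly:

```python
import math
from itertools import combinations

def u_mass_coherence(topic_words, documents):
    """Rough u_mass sketch: average pairwise log((D(wi, wj) + 1) / D(wj)).

    topic_words are assumed ordered from most to least probable, and each
    word is assumed to occur in at least one document.
    """
    doc_sets = [set(doc) for doc in documents]

    def doc_freq(*words):
        # Number of documents containing all the given words.
        return sum(1 for d in doc_sets if all(w in d for w in words))

    scores = []
    for i, j in combinations(range(len(topic_words)), 2):
        earlier, later = topic_words[i], topic_words[j]
        scores.append(math.log((doc_freq(earlier, later) + 1) / doc_freq(earlier)))
    return sum(scores) / len(scores)

toy_docs = [['human', 'interface', 'computer'],
            ['eps', 'user', 'interface', 'system'],
            ['system', 'human', 'system', 'eps']]
print(u_mass_coherence(['system', 'eps'], toy_docs))
```

Word pairs that co-occur in most of the documents containing the higher-ranked word score near (or above) zero; pairs that never co-occur are penalised.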
if CAN_VISUALIZE:
prepared = pyLDAvis.gensim.prepare(goodLdaModel, corpus, dictionary)
display(pyLDAvis.display(prepared))
if CAN_VISUALIZE:
prepared = pyLDAvis.gensim.prepare(badLdaModel, corpus, dictionary)
display(pyLDAvis.display(prepared))
print(goodcm.get_coherence())
print(badcm.get_coherence())
Explanation: Interpreting the topics
As we will see below using LDA visualization, the better model comes up with two topics composed of the following words:
1. goodLdaModel:
- Topic 1: More weightage assigned to words such as "system", "user", "eps", "interface" etc which captures the first set of documents.
- Topic 2: More weightage assigned to words such as "graph", "trees", "survey" which captures the topic in the second set of documents.
2. badLdaModel:
- Topic 1: More weightage assigned to words such as "system", "user", "trees", "graph" which doesn't make the topic clear enough.
- Topic 2: More weightage assigned to words such as "system", "trees", "graph", "user" which is similar to the first topic. Hence both topics are not human-interpretable.
Therefore, the topic coherence for the goodLdaModel should be greater for this than the badLdaModel since the topics it comes up with are more human-interpretable. We will see this using u_mass and c_v topic coherence measures.
Visualize topic models
End of explanation
goodcm = CoherenceModel(model=goodLdaModel, texts=texts, dictionary=dictionary, coherence='c_v')
badcm = CoherenceModel(model=badLdaModel, texts=texts, dictionary=dictionary, coherence='c_v')
Explanation: Using C_V coherence
End of explanation
print(goodcm)
Explanation: Pipeline parameters for C_V coherence
End of explanation
print(goodcm.get_coherence())
print(badcm.get_coherence())
Explanation: Print coherence values
End of explanation
# Replace with path to your Vowpal Wabbit installation
vw_path = '/usr/local/bin/vw'
# Replace with path to your Mallet installation
home = os.path.expanduser('~')
mallet_path = os.path.join(home, 'mallet-2.0.8', 'bin', 'mallet')
model1 = LdaVowpalWabbit(vw_path, corpus=corpus, num_topics=2, id2word=dictionary, passes=50)
model2 = LdaVowpalWabbit(vw_path, corpus=corpus, num_topics=2, id2word=dictionary, passes=1)
cm1 = CoherenceModel(model=model1, corpus=corpus, coherence='u_mass')
cm2 = CoherenceModel(model=model2, corpus=corpus, coherence='u_mass')
print(cm1.get_coherence())
print(cm2.get_coherence())
model1 = LdaMallet(mallet_path, corpus=corpus, num_topics=2, id2word=dictionary, iterations=50)
model2 = LdaMallet(mallet_path, corpus=corpus, num_topics=2, id2word=dictionary, iterations=1)
cm1 = CoherenceModel(model=model1, texts=texts, coherence='c_v')
cm2 = CoherenceModel(model=model2, texts=texts, coherence='c_v')
print(cm1.get_coherence())
print(cm2.get_coherence())
Explanation: Support for wrappers
This API supports gensim's ldavowpalwabbit and ldamallet wrappers as input parameter to model.
End of explanation
hm = HdpModel(corpus=corpus, id2word=dictionary)
# To get the topic words from the model
topics = []
for topic_id, topic in hm.show_topics(num_topics=10, formatted=False):
topic = [word for word, _ in topic]
topics.append(topic)
topics[:2]
# Initialize CoherenceModel using `topics` parameter
cm = CoherenceModel(topics=topics, corpus=corpus, dictionary=dictionary, coherence='u_mass')
cm.get_coherence()
Explanation: Support for other topic models
The gensim topic coherence pipeline can be used with other topic models too. Only the tokenized topics need to be made available to the pipeline, e.g. with the gensim HDP model
End of explanation
7,640 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chaos Theory and the Logistic Map
In this tutorial, we will see how to implement Geoff Boeing's
excellent blog post on Chaos Theory and the Logistic
Map using
our newly release library,
HoloViews. For an example of how
this material may be approached using pandas and matplotlib directly
please see Geoff's original
notebook.
We will see how using HoloViews allows the same content to be
expressed in a far more succinct way that also makes the material
presented easier to understand for the reader. In particular, we will
show how composing Layout and Overlay objects makes it easier
to compare data side-by-side, without needing to scroll vertically
between plots.
We now start by importing numpy and the classes we need from HoloViews
before loading the IPython extension
Step1: The Logistic Model
Here we define a very simple logistic_map function that is defined
by the difference equation $x_{t+1} = rx_t(1-x_t)$. This is the
logistic map, a very
simple model of population dynamics with chaotic behavior
Step2: We will shortly look a few curve plots of this function, but before we
do we will declare that all
Curve
objects will be indexed by the 'Generation' as the key dimension
(i.e x-axis) with a corresponding 'Population' calue dimension for
the y-axis
Step3: Now with the
NdOverlay
class, we can quickly visualize the population evolution for different
growth rates
Step4: As described in the original
tutorial
we see examples of population collapse, one perfectly static
population trace and irregular oscillations in the population value.
Bifurcation diagrams
Now we will plot some bifurcation
diagrams using the
Points
class. We will lower the default
Points
size to $1$ and set the dimension labels as we did for
Curve
Step5: Now we look at the set of population values over growth rate
Step6: In plot A, only discarding the initial population value is
discarded. In B, the initial hundred population values are
discarded. In C we overlay B on top of A to confirm that
B is a subset of A.
Note how HoloViews makes it easy to present this information in a
compact form that allows immediate comparison across subfigures with
convenient sub-figure labels to refer to.
Looking at chaos in more detail
Now we will zoom in on the first bifurcation point and examine a chaotic region in more detail
Step7: Again, we use an
Overlay
to view the region of A expanded in B. Note that the
declaration of +axiswise at the start of this section has decoupled
the axes in A and B. Try setting -axiswise in the %opts
declaration above to see A and B in directly comparable
coordinate spaces.
Self-similarity
The next part of the tutorial looks at self-similarity, by zooming
into a small portion of the first bifurcation diagram shown above
Step8: Here we see qualitatively similar patterns on two very different
scales, demonstrating a nice example of
self-similarity.
Sensitivity to initial conditions
Chaotic systems are well known for their sensitive dependence on
initial conditions. Here we look at sensitivity to both the population
growth rate and the initial population value
Step9: In A we see how a tiny difference in the growth rate eventually
results in wildly diverging behaviours. In B we see how tiny
differences in the initial population value also eventually results in
divergence.
In this example, we used a
Layout
container to place A next to B where each subfigure is an
NdOverlay
of
Curve
objects.
Poincaré Plots
Now we will examine Poincaré plots for different growth rates. First,
we will redefine the default dimensions associated with Points
objects and set suitable, normalized soft-ranges
Step10: Now we use an NdLayout to view the Poincaré plots for four different growth rates
Step11: As the chaotic regime is the most interesting, let's look at the
Poincaré plots for 50 growth values equally spaced between $3.6$ and
$4.0$. To distinguish all these curves, we will use a Palette
using the 'hot' color map
Step12: What is fascinating about this family of parabolas is that they never
overlap; otherwise two different growth rates starting with the same
initial population would end up with identical evolution. The
logistic_map function is determinstic after all. This type of
non-overlapping, non-repeating yet structured evolution is a general
feature of fractal geometries. In the next section, we will constrast
chaotic behaviour to random behaviour.
Chaos versus randomness
At first glance, the evolution of a chaotic system can be difficult to
tell apart from a set of samples drawn from a random distribution
Step13: The Poincaré plots from the previous section do provide a clear way of
distinguishing chaotic evolution from randomness
Step14: In this example, we index into one element of the
NdOverlay
of Poincaré plots defined earlier (A) in order to contrast it to
the random case shown in B. Using these plots, the two sources of
data can be clearly distinguished.
The 3D attractor
Finally, we generalize the 2D plots shown above into a
three-dimensional space using the
Scatter3D
element, plotting the values at time $t$ against $t+1$ and $t+2$
Step15: As we can see, our chaotic system is constrained to a limited subspace
(Note | Python Code:
import numpy as np
import holoviews as hv
from holoviews import Dimension
hv.notebook_extension()
Explanation: Chaos Theory and the Logistic Map
In this tutorial, we will see how to implement Geoff Boeing's
excellent blog post on Chaos Theory and the Logistic
Map using
our newly released library,
HoloViews. For an example of how
this material may be approached using pandas and matplotlib directly
please see Geoff's original
notebook.
We will see how using HoloViews allows the same content to be
expressed in a far more succinct way that also makes the material
presented easier to understand for the reader. In particular, we will
show how composing Layout and Overlay objects makes it easier
to compare data side-by-side, without needing to scroll vertically
between plots.
We now start by importing numpy and the classes we need from HoloViews
before loading the IPython extension:
End of explanation
def logistic_map(gens=20, init=0.5, growth=0.5):
population = [init]
for gen in range(gens-1):
current = population[gen]
population.append(current * growth * (1 - current))
return population
Explanation: The Logistic Model
Here we define a very simple logistic_map function that is defined
by the difference equation $x_{t+1} = rx_t(1-x_t)$. This is the
logistic map, a very
simple model of population dynamics with chaotic behavior:
End of explanation
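Two quick numeric sanity checks on this function (with a standalone copy of it so the snippet runs on its own): for growth rates below 1 the population collapses to zero, and at growth rate 2 it settles on the fixed point $1 - 1/r = 0.5$.

```python
# Standalone copy of the logistic map for a quick numeric sanity check.
def logistic_map(gens=20, init=0.5, growth=0.5):
    population = [init]
    for gen in range(gens - 1):
        current = population[gen]
        population.append(current * growth * (1 - current))
    return population

# growth < 1: the population collapses towards 0.
print(logistic_map(gens=100, growth=0.8)[-1])
# growth = 2: the population settles on the fixed point 1 - 1/r = 0.5.
print(logistic_map(gens=100, init=0.3, growth=2.0)[-1])
```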
hv.Curve.kdims = [Dimension('Generation')]
hv.Curve.vdims = [Dimension('Population')]
Explanation: We will shortly look at a few curve plots of this function, but before we
do we will declare that all
Curve
objects will be indexed by 'Generation' as the key dimension
(i.e. the x-axis) with a corresponding 'Population' value dimension for
the y-axis:
End of explanation
%%opts Curve (color=Palette('jet')) NdOverlay [figure_size=200 aspect=1.5 legend_position='right']
hv.NdOverlay({growth: hv.Curve(enumerate(logistic_map(growth=growth)))
for growth in [round(el, 3) for el in np.linspace(0.5, 3.5, 7)]},
label = 'Logistic model results, by growth rate',
kdims=['Growth rate'])
Explanation: Now with the
NdOverlay
class, we can quickly visualize the population evolution for different
growth rates:
End of explanation
%opts Points (s=1) {+axiswise}
hv.Points.kdims = [Dimension('Growth rate'), Dimension('Population')]
Explanation: As described in the original
tutorial
we see examples of population collapse, one perfectly static
population trace and irregular oscillations in the population value.
Bifurcation diagrams
Now we will plot some bifurcation
diagrams using the
Points
class. We will lower the default
Points
size to $1$ and set the dimension labels as we did for
Curve:
End of explanation
growth_rates = np.linspace(0, 4, 1000)
p1 = hv.Points([(rate, pop) for rate in growth_rates for
(gen, pop) in enumerate(logistic_map(gens=100, growth=rate))
if gen!=0]) # Discard the initial generation (where population is 0.5)
p2 = hv.Points([(rate, pop) for rate in growth_rates for
(gen, pop) in enumerate(logistic_map(gens=200, growth=rate))
if gen>=100]) # Discard the first 100 generations to view attractors
(p1.relabel('Discarding the first generation')
+ p2.relabel('Discarding the first 100 generations') + (p1 * p2).relabel('Overlay of B on A'))
Explanation: Now we look at the set of population values over growth rate:
End of explanation
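The effect of discarding the transient can also be checked without a plot: after the first thousand generations, the number of distinct values the orbit visits reveals the attractor's period. A small sketch (the transient length and rounding precision are arbitrary choices):

```python
# Standalone copy of the map, plus a helper that counts attractor values.
def logistic_map(gens=20, init=0.5, growth=0.5):
    population = [init]
    for gen in range(gens - 1):
        population.append(population[gen] * growth * (1 - population[gen]))
    return population

def attractor(growth, gens=2000, discard=1000, decimals=4):
    # Discard the transient, then collect the distinct values the orbit visits.
    tail = logistic_map(gens=gens, growth=growth)[discard:]
    return sorted(set(round(x, decimals) for x in tail))

print(attractor(2.9))  # a single fixed point
print(attractor(3.2))  # a period-2 cycle
print(attractor(3.5))  # a period-4 cycle
```

Each period-doubling visible in the bifurcation diagram shows up here as a doubling of the list length.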
growth_rates = np.linspace(2.8, 4, 1000)
p3 = hv.Points([(rate, pop) for rate in growth_rates for
(gen, pop) in enumerate(logistic_map(gens=300, growth=rate))
if gen>=200])
growth_rates = np.linspace(3.7, 3.9, 1000)
p4 = hv.Points([(rate, pop) for rate in growth_rates for
(gen, pop) in enumerate(logistic_map(gens=200, growth=rate))
if gen>=100])
(p3 * p4) + p4
Explanation: In plot A, only the initial population value is
discarded. In B, the first hundred population values are
discarded. In C we overlay B on top of A to confirm that
B is a subset of A.
Note how HoloViews makes it easy to present this information in a
compact form that allows immediate comparison across subfigures with
convenient sub-figure labels to refer to.
Looking at chaos in more detail
Now we will zoom in on the first bifurcation point and examine a chaotic region in more detail:
End of explanation
growth_rates = np.linspace(3.84, 3.856, 1000)
p5 = hv.Points([(rate, pop) for rate in growth_rates for
(gen, pop) in enumerate(logistic_map(gens=500, growth=rate))
if gen>=300])[:, 0.445:0.552]
(p1 * p5) + p5
Explanation: Again, we use an
Overlay
to view the region of A expanded in B. Note that the
declaration of +axiswise at the start of this section has decoupled
the axes in A and B. Try setting -axiswise in the %opts
declaration above to see A and B in directly comparable
coordinate spaces.
Self-similarity
The next part of the tutorial looks at self-similarity, by zooming
into a small portion of the first bifurcation diagram shown above:
End of explanation
%%opts Curve {+axiswise} NdOverlay [aspect=1.5] Layout [figure_size=150]
plot1 = hv.NdOverlay({str(growth): hv.Curve(enumerate(logistic_map(gens=30, growth=growth)))
for growth in [3.9, 3.90001]},
kdims=['Growth rate'],
label = 'Sensitivity to the growth rate')
plot2 = hv.NdOverlay({str(init): hv.Curve(enumerate(logistic_map(gens=50, growth=3.9, init=init)))
for init in [0.5, 0.50001]},
kdims=['Initial population'],
label = 'Sensitivity to the initial conditions')
(plot1 + plot2)
Explanation: Here we see qualitatively similar patterns on two very different
scales, demonstrating a nice example of
self-similarity.
Sensitivity to initial conditions
Chaotic systems are well known for their sensitive dependence on
initial conditions. Here we look at sensitivity to both the population
growth rate and the initial population value:
End of explanation
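A standard way to quantify this sensitivity is the Lyapunov exponent $\lambda \approx \frac{1}{n}\sum_t \ln|f'(x_t)|$, with $f'(x) = r(1 - 2x)$ for the logistic map: positive $\lambda$ means nearby trajectories separate exponentially. A rough numeric sketch (the iteration counts are arbitrary choices):

```python
import math

def lyapunov(growth, n=10000, discard=100, x=0.4):
    # Iterate the map, discard a transient, and average log|f'(x)| along the orbit.
    total = 0.0
    for i in range(discard + n):
        deriv = abs(growth * (1 - 2 * x))
        if i >= discard:
            total += math.log(max(deriv, 1e-12))  # guard against x == 0.5
        x = growth * x * (1 - x)
    return total / n

print(lyapunov(2.9))  # negative: orbits converge to the stable fixed point
print(lyapunov(3.9))  # positive: chaos, nearby orbits diverge
```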
%opts NdLayout [title_format='Poincaré Plots']
hv.Points.kdims = [hv.Dimension('Population (t)', soft_range=(0,1)),
hv.Dimension('Population (t+1)', soft_range=(0,1))]
Explanation: In A we see how a tiny difference in the growth rate eventually
results in wildly diverging behaviours. In B we see how tiny
differences in the initial population value also eventually results in
divergence.
In this example, we used a
Layout
container to place A next to B where each subfigure is an
NdOverlay
of
Curve
objects.
Poincaré Plots
Now we will examine Poincaré plots for different growth rates. First,
we will redefine the default dimensions associated with Points
objects and set suitable, normalized soft-ranges:
End of explanation
%%opts Points (s=5)
layout = hv.NdLayout({rate: hv.Points(zip(logistic_map(gens=500, growth=rate)[1:],
logistic_map(gens=500, growth=rate)[2:]))
for rate in [2.9, 3.2, 3.5, 3.9]}, key_dimensions=['Growth rate']).cols(2)
layout
Explanation: Now we use an NdLayout to view the Poincaré plots for four different growth rates:
End of explanation
%%opts NdOverlay [show_legend=False figure_size=200] Points (s=1 color=Palette('hot'))
hv.NdOverlay({rate: hv.Points(zip(logistic_map(gens=300, growth=rate)[1:],
logistic_map(gens=300, growth=rate)[2:]), extents=(0.0001,0.0001,1,1))
for rate in np.linspace(3.6, 4.0, 50)}, key_dimensions=['Growth rate'])
Explanation: As the chaotic regime is the most interesting, let's look at the
Poincaré plots for 50 growth values equally spaced between $3.6$ and
$4.0$. To distinguish all these curves, we will use a Palette
using the 'hot' color map:
End of explanation
%%opts NdOverlay [figure_size=200 aspect=1.5]
chaotic = hv.Curve([(gen, pop) for gen, pop in enumerate(logistic_map(gens=100, growth=3.999))],
kdims=['Generation'])[40:90]
random = hv.Curve([(gen, np.random.random()) for gen in range(0, 100)],
kdims=['Generation'])[40:90]
hv.NdOverlay({'chaotic':chaotic, 'random':random},
label='Time series, deterministic versus random data')
Explanation: What is fascinating about this family of parabolas is that they never
overlap; otherwise two different growth rates starting with the same
initial population would end up with identical evolution. The
logistic_map function is deterministic after all. This type of
non-overlapping, non-repeating yet structured evolution is a general
feature of fractal geometries. In the next section, we will contrast
chaotic behaviour to random behaviour.
Chaos versus randomness
At first glance, the evolution of a chaotic system can be difficult to
tell apart from a set of samples drawn from a random distribution:
End of explanation
randpoints = [np.random.random() for _ in range(0, 1000)]
poincare_random = hv.Points(zip(randpoints[1:], randpoints[2:]))
layout[3.9] + poincare_random
Explanation: The Poincaré plots from the previous section do provide a clear way of
distinguishing chaotic evolution from randomness:
End of explanation
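The same distinction can be made without a plot: since the chaotic series obeys $x_{t+1} = r x_t(1-x_t)$ exactly, its one-step prediction residual is zero, while random samples admit no such one-step law. A small sketch (growth rate 3.999 as in the time-series cell above):

```python
import random

def logistic_series(n, r=3.999, x=0.5):
    out = [x]
    for _ in range(n - 1):
        x = r * x * (1 - x)
        out.append(x)
    return out

def mean_residual(series, r=3.999):
    # Mean absolute error when predicting each value from its predecessor.
    errs = [abs(series[t + 1] - r * series[t] * (1 - series[t]))
            for t in range(len(series) - 1)]
    return sum(errs) / len(errs)

random.seed(0)
print(mean_residual(logistic_series(1000)))               # ~0: deterministic
print(mean_residual([random.random() for _ in range(1000)]))  # large: no one-step law
```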
%opts Scatter3D [title_format='3D Poincaré Plot'] (s=1 color='r') Layout [figure_size=200]
population = logistic_map(2000, 0.5, 3.99)
attractor3D = hv.Scatter3D(zip(population[1:], population[2:], population[3:]),
kdims=['Population (t)', 'Population (t+1)', 'Population (t+2)'])
random = [np.random.random() for _ in range(0, 2000)]
rand3D = hv.Scatter3D(zip(random[1:], random[2:], random[3:]),
kdims=['Value (t)', 'Value (t+1)', 'Value (t+2)'])
attractor3D + rand3D
Explanation: In this example, we index into one element of the
NdOverlay
of Poincaré plots defined earlier (A) in order to contrast it to
the random case shown in B. Using these plots, the two sources of
data can be clearly distinguished.
The 3D attractor
Finally, we generalize the 2D plots shown above into a
three-dimensional space using the
Scatter3D
element, plotting the values at time $t$ against $t+1$ and $t+2$:
End of explanation
hv.Points.kdims = [hv.Dimension('Growth rate', soft_range=(0,4)),
hv.Dimension('Population', soft_range=(0,1))]
hv.HoloMap({(gens,cutoff): hv.Points([(rate, pop) for rate in np.linspace(0, 4, 1000) for
(gen, pop) in enumerate(logistic_map(gens=gens, growth=rate))
if gen>=cutoff])
for gens in [20,40,60,80,100] for cutoff in [1,5,10,15]},
kdims=['Generations', 'Cutoff'])
Explanation: As we can see, our chaotic system is constrained to a limited subspace
(Note: the bug where the y and z axes have the same label has been
corrected on the HoloViews master branch and will be fixed in the next
release).
Further exploration
Throughout the tutorial we have been running the logistic_map for
some arbitrary number of generations, without justifying the values
used. In addition, we have used a cutoff value, only examining the
population evolution after some number of generations has elapsed
(e.g. by default the initial population value is a constant of $0.5$
which would result in a distracting horizontal line).
With HoloViews, you can easily animate and interact with your data
with as many dimensions as you like. We will now interactively
investigate the effect of these two parameters using a
HoloMap:
End of explanation
7,641 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
... and so we begin
Critical information
First steps
Order of the day
Learn to use Jupyter / iPython Notebook
Get familiar with basic Python
Start with Spyder, a traditional editor
Fundamental Python-in-Science skills
What is Jupyter
(previously iPython Notebook)
An interactive Q&A-style Python prompt, with output in formatted text, images, graphs and more (and it even works with other languages too)
A bit like a cross between Mathematica and Wolfram Alpha that runs in your browser, but lets you save all your worksheets locally. We will explore this, as it is very useful for doing quick calculations, collaborating on research, working at a whiteboard, having nonlinear discussions where you can adjust graphs or calculations, teaching (I hope), or simply storing your train of thought in computations and notes. This series of slides was prepared in Jupyter, which is why I can do this...
It lets you do things like...
Step1: You want to add in .weekday()
Lets you output LaTeX-style (formatted) maths
Example calculating the output of $ \int x^3 dx $ | Python Code:
import datetime
print(datetime.date.today())
Explanation: ... and so we begin
Critical information
First steps
Order of the day
Learn to use Jupyter / iPython Notebook
Get familiar with basic Python
Start with Spyder, a traditional editor
Fundamental Python-in-Science skills
What is Jupyter
(previously iPython Notebook)
An interactive Q&A-style Python prompt, with output in formatted text, images, graphs and more (and it even works with other languages too)
A bit like a cross between Mathematica and Wolfram Alpha that runs in your browser, but lets you save all your worksheets locally. We will explore this, as it is very useful for doing quick calculations, collaborating on research, working at a whiteboard, having nonlinear discussions where you can adjust graphs or calculations, teaching (I hope), or simply storing your train of thought in computations and notes. This series of slides was prepared in Jupyter, which is why I can do this...
It lets you do things like...
End of explanation
from sympy import *
init_printing()
x = Symbol("x")
integrate(x ** 3, x)
Explanation: You want to add in .weekday()
Lets you output LaTeX-style (formatted) maths
Example calculating the output of $ \int x^3 dx $:
End of explanation |
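Two quick standard-library follow-ups to the cells above: the .weekday() addition mentioned earlier, and a numeric cross-check of the symbolic result $\int x^3 dx = x^4/4$ on $[0, 1]$ via a simple midpoint rule (no sympy required):

```python
import datetime

today = datetime.date.today()
print(today, '-> weekday', today.weekday())  # 0 = Monday ... 6 = Sunday

# Numeric cross-check: the integral of x**3 over [0, 1] should be 1/4.
def midpoint_integral(f, a, b, n=100000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

print(midpoint_integral(lambda x: x ** 3, 0.0, 1.0))
```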
7,642 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Obtain all the data for the Master students, starting from 2007. Compute how many months it took each master student to complete their master, for those that completed it. Partition the data between male and female students, and compute the average -- is the difference in average statistically significant?
Notice that master students' data is more tricky than the bachelors' one, as there are many missing records in the IS-Academia database. Therefore, try to guess how much time a master student spent at EPFL by at least checking the distance in months between Master semestre 1 and Master semestre 2. If the Mineur field is not empty, the student should also appear registered in Master semestre 3. Last but not the least, don't forget to check if the student has an entry also in the Projet Master tables. Once you can handle well this data, compute the "average stay at EPFL" for master students. Now extract all the students with a Spécialisation and compute the "average stay" per each category of that attribute -- compared to the general average, can you find any specialization for which the difference in average is statistically significant?
Step1: Let's get the first page in which we will be able to extract some interesting content !
Step2: Now we need to make other requests to IS Academia, which specify every parameter
Step3: Now, we got all the information to get all the master students !
Let's make all the requests we need to build our data.
We will try to do requests such as
Step4: The requests are now ready to be sent to IS Academia. Let's try it out !
TIME OUT
Step5: DON'T RUN THE NEXT CELL OR IT WILL CRASH ! | Python Code:
# Requests : make http requests to websites
import requests
# BeautifulSoup : parser to manipulate easily html content
from bs4 import BeautifulSoup
# Regular expressions
import re
# Aren't pandas awesome ?
import pandas as pd
Explanation: Obtain all the data for the Master students, starting from 2007. Compute how many months it took each master student to complete their master, for those that completed it. Partition the data between male and female students, and compute the average -- is the difference in average statistically significant?
Notice that master students' data is more tricky than the bachelors' one, as there are many missing records in the IS-Academia database. Therefore, try to guess how much time a master student spent at EPFL by at least checking the distance in months between Master semestre 1 and Master semestre 2. If the Mineur field is not empty, the student should also appear registered in Master semestre 3. Last but not the least, don't forget to check if the student has an entry also in the Projet Master tables. Once you can handle well this data, compute the "average stay at EPFL" for master students. Now extract all the students with a Spécialisation and compute the "average stay" per each category of that attribute -- compared to the general average, can you find any specialization for which the difference in average is statistically significant?
End of explanation
# Ask for the first page on IS Academia. To see it, just type it on your browser address bar : http://isa.epfl.ch/imoniteur_ISAP/!GEDPUBLICREPORTS.filter?ww_i_reportModel=133685247
r = requests.get('http://isa.epfl.ch/imoniteur_ISAP/!GEDPUBLICREPORTS.filter?ww_i_reportModel=133685247')
htmlContent = BeautifulSoup(r.content, 'html.parser')
print(htmlContent.prettify())
Explanation: Let's get the first page in which we will be able to extract some interesting content!
End of explanation
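The extraction loops below all do the same thing: pull the value/text pairs out of one named select element. As a dependency-free illustration of that idea (assuming the markup shape seen in the prettified output), here is a sketch using only the standard library's html.parser — with BeautifulSoup already at hand, the equivalent would be a small helper around find('select', ...) and findAll('option'):

```python
from html.parser import HTMLParser

class OptionExtractor(HTMLParser):
    """Collect (value, text) pairs of <option> tags inside a named <select>."""
    def __init__(self, select_name):
        super().__init__()
        self.select_name = select_name
        self.in_select = False
        self.current_value = None
        self.options = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == 'select':
            self.in_select = (attrs.get('name') == self.select_name)
        elif tag == 'option' and self.in_select:
            self.current_value = attrs.get('value')

    def handle_endtag(self, tag):
        if tag == 'select':
            self.in_select = False
        elif tag == 'option':
            self.current_value = None

    def handle_data(self, data):
        # Skip text outside options and the "null" placeholder entries.
        if self.current_value not in (None, 'null'):
            self.options.append((self.current_value, data.strip()))

def extract_options(html, select_name):
    parser = OptionExtractor(select_name)
    parser.feed(html)
    return parser.options
```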
# We first get the "Computer science" value
computerScienceField = htmlContent.find('option', text='Informatique')
computerScienceField
computerScienceValue = computerScienceField.get('value')
computerScienceValue
# Then, we're going to need all the academic years values.
academicYearsField = htmlContent.find('select', attrs={'name':'ww_x_PERIODE_ACAD'})
academicYearsSet = academicYearsField.findAll('option')
# Since there are several years to remember, we're storing all of them in a table to use them later
academicYearValues = []
# We'll put the textual content in a table as well
academicYearContent = []
for option in academicYearsSet:
value = option.get('value')
# However, we don't want any "null" value
if value != 'null':
academicYearValues.append(value)
academicYearContent.append(option.text)
# Now, we have all the academic years that might interest us. We wrangle them a little bit so we can make requests more easily later.
academicYearValues_series = pd.Series(academicYearValues)
academicYearContent_series = pd.Series(academicYearContent)
academicYear_df = pd.concat([academicYearContent_series, academicYearValues_series], axis = 1)
academicYear_df.columns= ['Academic_year', 'Value']
academicYear_df = academicYear_df.sort_values(['Academic_year', 'Value'], ascending=[1, 0])
academicYear_df
# Then, let's get all the pedagogic periods we need. It's a little bit more complicated here because we need to link the pedagogic period with a season (e.g. Bachelor 1 is autumn, Bachelor 2 is spring, etc.)
# Thus, we need more than the pedagogic values. For doing some tests to associate them with the right season, we need the actual textual value ("Bachelor semestre 1", "Bachelor semestre 2" etc.)
pedagogicPeriodsField = htmlContent.find('select', attrs={'name':'ww_x_PERIODE_PEDAGO'})
pedagogicPeriodsSet = pedagogicPeriodsField.findAll('option')
# Same as above, we'll store the values in a table
pedagogicPeriodValues = []
# We'll put the textual content in a table as well ("Master semestre 1", "Master semestre 2"...)
pedagogicPeriodContent = []
for option in pedagogicPeriodsSet:
value = option.get('value')
if value != 'null':
pedagogicPeriodValues.append(value)
pedagogicPeriodContent.append(option.text)
# Let's make the values and content meet each other
pedagogicPeriodContent_series = pd.Series(pedagogicPeriodContent)
pedagogicPeriodValues_series = pd.Series(pedagogicPeriodValues)
pedagogicPeriod_df = pd.concat([pedagogicPeriodContent_series, pedagogicPeriodValues_series], axis = 1);
pedagogicPeriod_df.columns = ['Pedagogic_period', 'Value']
# We keep all semesters related to master students
pedagogicPeriod_df_master = pedagogicPeriod_df[[period.startswith('Master') for period in pedagogicPeriod_df.Pedagogic_period]]
pedagogicPeriod_df_minor = pedagogicPeriod_df[[period.startswith('Mineur') for period in pedagogicPeriod_df.Pedagogic_period]]
pedagogicPeriod_df_project = pedagogicPeriod_df[[period.startswith('Projet Master') for period in pedagogicPeriod_df.Pedagogic_period]]
pedagogicPeriod_df = pd.concat([pedagogicPeriod_df_master, pedagogicPeriod_df_minor, pedagogicPeriod_df_project])
pedagogicPeriod_df
# Lastly, we need to extract the values associated with autumn and spring semesters.
semesterTypeField = htmlContent.find('select', attrs={'name':'ww_x_HIVERETE'})
semesterTypeSet = semesterTypeField.findAll('option')
# Again, we need to store the values in a table
semesterTypeValues = []
# We'll put the textual content in a table as well
semesterTypeContent = []
for option in semesterTypeSet:
value = option.get('value')
if value != 'null':
semesterTypeValues.append(value)
semesterTypeContent.append(option.text)
# Here are the values for autumn and spring semester :
semesterTypeValues_series = pd.Series(semesterTypeValues)
semesterTypeContent_series = pd.Series(semesterTypeContent)
semesterType_df = pd.concat([semesterTypeContent_series, semesterTypeValues_series], axis = 1)
semesterType_df.columns = ['Semester_type', 'Value']
semesterType_df
Explanation: Now we need to make other requests to IS Academia, which specify every parameter : computer science students, all the years, and all the semesters (each a pair of values : pedagogic period and semester type). Thus, we're going to get all the parameters we need to make the next requests :
End of explanation
# Let's put the semester types aside, because we're going to need them
autumn_semester_value = semesterType_df.loc[semesterType_df['Semester_type'] == 'Semestre d\'automne', 'Value']
autumn_semester_value = autumn_semester_value.iloc[0]
spring_semester_value = semesterType_df.loc[semesterType_df['Semester_type'] == 'Semestre de printemps', 'Value']
spring_semester_value = spring_semester_value.iloc[0]
# Here is the list of the GET requests we will send to IS Academia
requestsToISAcademia = []
# We'll also keep the information associated with each request, to help wrangle the data later :
academicYearRequests = []
pedagogicPeriodRequests = []
semesterTypeRequests = []
# Go over all the years ('2007-2008', '2008-2009' and so on)
for academicYear_row in academicYear_df.itertuples(index=True, name='Academic_year'):
# The year (eg: '2007-2008')
academicYear = academicYear_row.Academic_year
# The associated value (eg: '978181')
academicYear_value = academicYear_row.Value
# We get all the pedagogic periods associated with this academic year
for pegagogicPeriod_row in pedagogicPeriod_df.itertuples(index=True, name='Pedagogic_period'):
# The period (eg: 'Master semestre 1')
pedagogicPeriod = pegagogicPeriod_row.Pedagogic_period
# The associated value (eg: '2230106')
pegagogicPeriod_Value = pegagogicPeriod_row.Value
# We need to associate the corresponding semester type (eg: Master semester 1 is autumn, but Master semester 2 will be spring)
if (pedagogicPeriod.endswith('1') or pedagogicPeriod.endswith('3') or pedagogicPeriod.endswith('automne')):
semester_Value = autumn_semester_value
semester = 'Autumn'
else:
semester_Value = spring_semester_value
semester = 'Spring'
# This print line is only for debugging if you want to check something
# print("academic year = " + academicYear_value + ", pedagogic value = " + pegagogicPeriod_Value + ", pedagogic period is " + pedagogicPeriod + " (semester type value = " + semester_Value + ")")
# We're ready to cook the request !
request = 'http://isa.epfl.ch/imoniteur_ISAP/!GEDPUBLICREPORTS.html?ww_x_GPS=-1&ww_i_reportModel=133685247&ww_i_reportModelXsl=133685270&ww_x_UNITE_ACAD=' + computerScienceValue
request = request + '&ww_x_PERIODE_ACAD=' + academicYear_value
request = request + '&ww_x_PERIODE_PEDAGO=' + pegagogicPeriod_Value
request = request + '&ww_x_HIVERETE=' + semester_Value
# Add the newly created request to our wish list...
requestsToISAcademia.append(request)
# And we save the corresponding information for each request
pedagogicPeriodRequests.append(pedagogicPeriod)
academicYearRequests.append(academicYear)
semesterTypeRequests.append(semester)
# Here is the list of all the requests we have to send !
# requestsToISAcademia
# Here are the corresponding years for each request
# academicYearRequests
# Same for associated pedagogic periods
# pedagogicPeriodRequests
# Last but not the least, the semester types
# semesterTypeRequests
academicYearRequests_series = pd.Series(academicYearRequests)
pedagogicPeriodRequests_series = pd.Series(pedagogicPeriodRequests)
requestsToISAcademia_series = pd.Series(requestsToISAcademia)
# Let's summarize everything in a dataframe...
requests_df = pd.concat([academicYearRequests_series, pedagogicPeriodRequests_series, requestsToISAcademia_series], axis = 1)
requests_df.columns = ['Academic_year', 'Pedagogic_period', 'Request']
requests_df
Explanation: Now, we got all the information to get all the master students !
Let's make all the requests we need to build our data.
We will try to do requests such as :
- Get students from master semester 1 of 2007-2008
- ...
- Get students from master semester 4 of 2007-2008
- Get students from mineur semester 1 of 2007-2008
- Get students from mineur semester 2 of 2007-2008
- Get students from master project semester 1 of 2007-2008
- Get students from master project semester 2 of 2007-2008
... and so on for each academic year until 2015-2016, the last complete year.
We can even take the first semester of 2016-2017 into account, to check if some students we thought had finished last year are actually still studying. This can be for different reasons : doing a mineur, a project, repeating a semester...
We can ask for a list of students in two formats : HTML or CSV.
We chose to get them in HTML format because this is the first time that we wrangle data in HTML format, and that may be really useful to learn in order to work with most websites in the future !
The request sent by the browser to IS Academia, to get a list of students in HTML format, looks like this :
http://isa.epfl.ch/imoniteur_ISAP/!GEDPUBLICREPORTS.html?arg1=xxx&arg2=yyy
With "xxx" the value associated with the argument named "arg1", "yyy" the value associated with the argument named "arg2" etc. It uses to have a lot more arguments.
For instance, we tried to send a request as a "human" through our browser and intercepted it with Postman interceptor.
We found that the following arguments have to be sent :
ww_x_GPS = -1
ww_i_reportModel = 133685247
ww_i_reportModelXsl = 133685270
ww_x_UNITE_ACAD = 249847 (which is the value of computer science !)
ww_x_PERIODE_ACAD = X (eg : the value corresponding to 2007-2008 would be 978181)
ww_x_PERIODE_PEDAGO = Y (eg : 2230106 for Master semestre 1)
ww_x_HIVERETE = Z (eg : 2936286 for autumn semester)
The last three values X, Y and Z must be replaced with the ones we extracted previously. For instance, if we want to get students from Master, semester 1 (which is necessarily autumn semester) of 2007-2008, the "GET Request" would be the following :
http://isa.epfl.ch/imoniteur_ISAP/!GEDPUBLICREPORTS.html?ww_x_GPS=-1&ww_i_reportModel=133685247&ww_i_reportModelXsl=133685270&ww_x_UNITE_ACAD=249847&ww_x_PERIODE_ACAD=978181&ww_x_PERIODE_PEDAGO=2230106&ww_x_HIVERETE=2936286
So let's cook all the requests we're going to send !
End of explanation
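The same GET request can be assembled from a dict of parameters instead of concatenating strings by hand. A small sketch using only the standard library; the numeric values are the ones quoted above:

```python
from urllib.parse import urlencode

BASE = 'http://isa.epfl.ch/imoniteur_ISAP/!GEDPUBLICREPORTS.html'

# the arguments listed above, gathered in one dict
params = {
    'ww_x_GPS': -1,
    'ww_i_reportModel': 133685247,
    'ww_i_reportModelXsl': 133685270,
    'ww_x_UNITE_ACAD': 249847,       # computer science
    'ww_x_PERIODE_ACAD': 978181,     # 2007-2008
    'ww_x_PERIODE_PEDAGO': 2230106,  # Master semestre 1
    'ww_x_HIVERETE': 2936286,        # autumn semester
}

url = BASE + '?' + urlencode(params)
print(url)
```

With requests, the dict can also be passed directly as requests.get(BASE, params=params), which takes care of the encoding for us.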
# WARNING : NEXT LINE IS COMMENTED FOR DEBUGGING THE FIRST REQUEST ONLY. UNCOMMENT IT AND INDENT THE CODE CORRECTLY TO MAKE ALL THE REQUESTS
#for request in requestsToISAcademia: # LINE TO UNCOMMENT TO SEND ALL REQUESTS
request = requestsToISAcademia[0] # LINE TO COMMENT TO SEND ALL REQUESTS
print(request)
# Send the request to IS Academia
r = requests.get(request)
# Here is the HTML content of IS Academia's response
htmlContent = BeautifulSoup(r.content, 'html.parser')
# Let's extract some data...
computerScienceField = htmlContent.find('option', text='Informatique')
# Getting the table of students
# Let's make the columns
columns = []
table = htmlContent.find('table')
th = table.find('th', text='Civilité')
columns.append(th.text)
# Go through the table until the last column
while th.find_next().name == 'th':
th = th.find_next()
columns.append(th.text)
# This array will contain all the students
studentsTable = []
Explanation: The requests are now ready to be sent to IS Academia. Let's try it out !
TIME OUT : We stopped right here for our homework. What is below should look like the beginning of a loop that gets students lists from IS Academia. It's not finished at all :(
End of explanation
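For reference, here is one way the row loop could be completed. The HTML below is a hand-made miniature of a students table; IS Academia's real markup may differ:

```python
from bs4 import BeautifulSoup

# hypothetical stand-in for the table IS Academia returns
html = """
<table>
<tr><th>Civilité</th><th>Nom Prénom</th></tr>
<tr><td>Monsieur</td><td>Dupont Jean</td></tr>
<tr><td>Madame</td><td>Martin Claire</td></tr>
</table>
"""
soup = BeautifulSoup(html, 'html.parser')
rows = soup.find('table').find_all('tr')

# first row holds the column names, the rest are one student each
columns = [th.text for th in rows[0].find_all('th')]
studentsTable = [[td.text for td in row.find_all('td')] for row in rows[1:]]
print(columns)
print(studentsTable)
```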
# Getting the information about the student we're "looping on"
currentStudent = []
tr = th.findNext('tr')
children = tr.children
for child in children:
currentStudent.append(child.text)
# Add the student to the array
studentsTable.append(currentStudent)
a = tr.findNext('tr')
a
while tr.findNext('tr') is not None:
tr = tr.findNext('tr') # advance to the next row
currentStudent = [] # start a fresh record for each row
for child in tr.children:
currentStudent.append(child.text)
studentsTable.append(currentStudent)
studentsTable
#tr = th.parent
#td = th.findNext('td')
#td.text
#th.findNext('th')
#th.findNext('th')
#tr = tr.findNext('tr')
#tr
print(htmlContent.prettify())
Explanation: BE CAREFUL RUNNING THE NEXT CELL : THE ROW LOOP IS EASY TO GET WRONG AND CAN RUN FOREVER ! :x
End of explanation |
7,643 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced Pandas
Step1: <a id=movielens></a>
MovieLens data
The data comes as a zip file that contains several csv's. We get the details from the README inside. (It's written in Markdown, so it's easier to read if we use a browser to format it. Or we could cut and paste into a Markdown cell in an IPython notebook.)
The file descriptions are
Step2: Exercise. Something to do together. suppose we wanted to save the files on our computer. How would we do it? Would we prefer individual csv's or a single zip?
Step3: <a id=merge-movies></a>
Merging ratings and movie titles
The movie ratings in the dataframe ratings give us individual opinions about movies, but they don't include the name of the movie. Why not? Rather than include the name every time a movie is rated, the MovieLens data associates each rating with a movie code, than stores the names of movies associatd with each movie code in the dataframe movies. We run across this a lot
Step4: Merging
Here's roughly what's involved in what we're doing. We take the movieId variable from ratings and look it up in movies. When we find it, we look up the title and add it as a column in ratings. The variable movieId is common, so we can use it to link the two dataframes.
Step5: Exercise. Some of these we know how to do, the others we don't. For the ones we know, what is the answer? For the others, what (in loose terms) do we need to be able to do to come up with an answer?
What is the overall average rating?
What is the overall distribution of ratings?
What is the average rating of each movie?
How many ratings does each movie get?
Step6: <a id=population></a>
Population "estimates" and "projections"
We look (again) at the UN's population data, specifically the age distribution of the population. The data comes in two sheets
Step7: Comment. Note that they have different numbers of columns. Let's see where that comes from.
Step9: Clean data
Pick a useable subset and fix extra column so that we can combine them. The problem here is that until 1990, the highest age category was '80+. From 1990 on, we have a finer breakdown.
We fix this by reassigning '80+' to '80-84' and not worrying that some of these people are 85 or older. Note that df.fillna(0.0) replaces missing values with zeros.
Step10: Merge estimates and projections
If we have two blocks of data, and just want to put them on top of each other, we use the Pandas' concatenate function. Ditto two blocks next to each other.
But first we need to fix the difference in the columns of the two dataframes.
Step11: Exercise. What happens if we try to merge the original dataframes, including the one with the extra 80+ column? Run the code below and comment on what you get.
Step12: Shape data
We want age categories in the index (the default x axis in a plot) and the years in the columns. The country we don't care about because there's only one.
Step13: Exercise. Use set_index, stack, and unstack to shape the dataframe popi into popt. | Python Code:
%matplotlib inline
import pandas as pd # data package
import matplotlib.pyplot as plt # graphics
import datetime as dt # date tools, used to note current date
# these are new
import os # operating system tools (check files)
import requests, io # internet and input tools
import zipfile as zf # zip file tools
import shutil # file management tools
Explanation: Advanced Pandas: Combining data
Sometimes we need to combine data from two or more dataframes. That's colloquially known as a merge or a join. There are lots of ways to do this. We do a couple but supply references to more at the end.
Along the way we take an extended detour to review methods for downloading and unzipping compressed files. The tools we use here have a broad range of other applications, including web scraping.
Outline:
MovieLens data. A collection of movies and individual ratings.
Automate file download. Use the requests package to get a zipped file, then other tools to unzip it and read in the contents.
Merge movie names and ratings. Merge information from two dataframes with Pandas' merge function.
UN population data. We merge ("concatenate") estimates from the past with projections of the future.
Note: requires internet access to run.
This IPython notebook was created by Dave Backus, Chase Coleman, Brian LeBlanc, and Spencer Lyon for the NYU Stern course Data Bootcamp.
<a id=prelims></a>
Preliminaries
Import packages, etc.
End of explanation
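The two ways of combining data that this outline announces can be previewed on tiny made-up dataframes:

```python
import pandas as pd

left = pd.DataFrame({'key': [1, 2], 'x': ['a', 'b']})
right = pd.DataFrame({'key': [2, 3], 'y': ['c', 'd']})

# merge: match rows of two dataframes on a shared column
merged = pd.merge(left, right, how='inner', on='key')

# concatenate: stack blocks of data on top of each other
stacked = pd.concat([left, left], axis=0)

print(merged)
print(stacked.shape)
```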
# get "response" from url
url = 'http://files.grouplens.org/datasets/movielens/ml-latest-small.zip'
r = requests.get(url)
# describe response
print('Response status code:', r.status_code)
print('Response type:', type(r))
print('Response .content:', type(r.content))
print('Response headers:\n', r.headers, sep='')
# convert bytes to zip file
mlz = zf.ZipFile(io.BytesIO(r.content))
print('Type of zipfile object:', type(mlz))
# what's in the zip file?
mlz.namelist()
mlz.open('ml-latest-small/links.csv')
pd.read_csv(mlz.open('ml-latest-small/links.csv'))
# extract and read csv's
movies = pd.read_csv(mlz.open(mlz.namelist()[2]))
ratings = pd.read_csv(mlz.open(mlz.namelist()[3]))
# what do we have?
for df in [movies, ratings]:
print('Type:', type(df))
print('Dimensions:', df.shape, '\n')
print('Variables:', df.columns.tolist(), '\n')
print('First few rows', df.head(3), '\n')
Explanation: <a id=movielens></a>
MovieLens data
The data comes as a zip file that contains several csv's. We get the details from the README inside. (It's written in Markdown, so it's easier to read if we use a browser to format it. Or we could cut and paste into a Markdown cell in an IPython notebook.)
The file descriptions are:
ratings.csv: each line is an individual film rating with the rater and movie id's and the rating. Order: userId, movieId, rating, timestamp.
tags.csv: each line is a tag on a specific film. Order: userId, movieId, tag, timestamp.
movies.csv: each line is a movie name, its id, and its genre. Order: movieId, title, genres. Multiple genres are separated by "pipes" |.
links.csv: each line contains the movie id and corresponding id's at IMBd and TMDb.
The easy way to input this data is to download the zip file onto our computer, unzip it, and read the individual csv files using read_csv(). But anyone can do it the easy way. We want to automate this, so we can redo it without any manual steps. This takes some effort, but once we have it down we can apply it to lots of other data sources.
<a id=requests></a>
Automate file download
We're looking for an automated way, so that if we do this again, possibly with updated data, the whole process is in our code. Automated data entry involves these steps:
Get the file. We use the requests package, which handles internet files and comes pre-installed with Anaconda. This kind of thing was hidden behind the scenes in the Pandas read_csv function, but here we need to do it for ourselves. The package authors add:
Recreational use of other HTTP libraries may result in dangerous side-effects, including: security vulnerabilities, verbose code, reinventing the wheel, constantly reading documentation, depression, headaches, or even death.
Convert to zip. Requests simply loads whatever's at the given url. The io module's io.Bytes reconstructs it as a file, here a zip file.
Unzip the file. We use the zipfile module, which is part of core Python, to extract the files inside.
Read in the csv's. Now that we've extracted the csv files, we use read_csv as usual.
We found this Stack Overflow exchange helpful.
Digression. This is probably more than you want to know, but it's a reminder of what goes on behind the scenes when we apply read_csv to a url. Here we grab whatever is at the url. Then we get its contents, convert it to bytes, identify it as a zip file, and read its components using read_csv. It's a lot easier when this happens automatically, but a reminder what's involved if we ever have to look into the details.
End of explanation
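The four steps can be folded into one helper. To keep this sketch testable without a network connection it takes the raw bytes (what r.content returns) instead of a url; the function name is our own invention:

```python
import io
import zipfile as zf

import pandas as pd

def read_zipped_csvs(raw_bytes):
    """Return a {member name: DataFrame} dict for every csv inside zipped bytes."""
    archive = zf.ZipFile(io.BytesIO(raw_bytes))
    return {name: pd.read_csv(archive.open(name))
            for name in archive.namelist() if name.endswith('.csv')}

# exercise the helper on a tiny zip built in memory
buf = io.BytesIO()
with zf.ZipFile(buf, 'w') as z:
    z.writestr('demo/movies.csv', 'movieId,title\n1,Toy Story\n')

frames = read_zipped_csvs(buf.getvalue())
print(frames['demo/movies.csv'])
```

For the MovieLens data this would simply be frames = read_zipped_csvs(r.content).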
# writing csv (specify different location)
with open('test_01.csv', 'wb') as out_file:
shutil.copyfileobj(mlz.open(mlz.namelist()[2]), out_file)
# experiment via http://stackoverflow.com/a/18043472/804513
with open('test.zip', 'wb') as out_file:
shutil.copyfileobj(io.BytesIO(r.content), out_file)
Explanation: Exercise. Something to do together. Suppose we wanted to save the files on our computer. How would we do it? Would we prefer individual csv's or a single zip?
End of explanation
ratings.head(3)
movies.head(3)
Explanation: <a id=merge-movies></a>
Merging ratings and movie titles
The movie ratings in the dataframe ratings give us individual opinions about movies, but they don't include the name of the movie. Why not? Rather than include the name every time a movie is rated, the MovieLens data associates each rating with a movie code, then stores the names of movies associated with each movie code in the dataframe movies. We run across this a lot: some information is in one data table, other information is in another.
Our want is therefore to add the movie name to the ratings dataframe. We say we merge the two dataframes. There are lots of ways to merge. Here we do one as an illustration.
Let's start by reminding ourselves what we have.
End of explanation
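Before running the merge, note that the "look it up" idea can also be written directly with map, which treats movies as a lookup table (toy values):

```python
import pandas as pd

movies = pd.DataFrame({'movieId': [1, 2], 'title': ['Toy Story', 'Jumanji']})
ratings = pd.DataFrame({'movieId': [2, 1, 2], 'rating': [4.0, 5.0, 3.0]})

# index the titles by movieId, then look each rating's movieId up
lookup = movies.set_index('movieId')['title']
ratings['title'] = ratings['movieId'].map(lookup)
print(ratings)
```

merge does the same job and scales better when we want to bring over several columns at once.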
combo = pd.merge(ratings, movies, # left and right df's
how='left', # add to left
on='movieId' # link with this variable/column
)
print('Dimensions of ratings:', ratings.shape)
print('Dimensions of movies:', movies.shape)
print('Dimensions of new df:', combo.shape)
combo.head(20)
combo_1 = ratings.merge(movies, how='left', on='movieId')
combo_1.head()
combo_2 = ratings.merge(movies, how='inner', on='movieId')
combo_2.shape
combo_3 = movies.merge(ratings, how='right', on='movieId')
combo_3.shape
# save as csv file for future use
combo.to_csv('mlcombined.csv')
count_2 = movies['movieId'].isin(ratings['movieId'])
count_2.sum()
print('Current directory:\n', os.getcwd(), sep='')
print('List of files:', os.listdir(), sep='\n')
Explanation: Merging
Here's roughly what's involved in what we're doing. We take the movieId variable from ratings and look it up in movies. When we find it, we look up the title and add it as a column in ratings. The variable movieId is common, so we can use it to link the two dataframes.
End of explanation
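What how='left' commits us to is easiest to see when a movieId has no match; the unmatched row is kept and its title is left missing (toy values):

```python
import pandas as pd

ratings = pd.DataFrame({'movieId': [1, 99], 'rating': [5.0, 4.0]})
movies = pd.DataFrame({'movieId': [1], 'title': ['Toy Story']})

# every row of the left dataframe survives; missing lookups become NaN
combo = pd.merge(ratings, movies, how='left', on='movieId')
print(combo)
```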
combo['rating'].mean()
fig, ax = plt.subplots()
bins = [bin/100 for bin in list(range(25, 575, 50))]
print(bins)
combo['rating'].plot(kind='hist', ax=ax, bins=bins, color='blue', alpha=0.5)
ax.set_xlim(0,5.5)
ax.set_ylabel('Number')
ax.set_xlabel('Rating')
plt.show()
from plotly.offline import iplot # plotting functions
import plotly.graph_objs as go # ditto
import plotly
plotly.offline.init_notebook_mode(connected=True)
trace = go.Histogram(
x=combo['rating'],
histnorm='count',
name='control',
autobinx=False,
xbins=dict(
start=.5,
end=5.0,
size=0.5
),
marker=dict(
color='Blue',
),
opacity=0.75
)
layout = go.Layout(
title='Distribution of ratings',
xaxis=dict(
title='Rating value'
),
yaxis=dict(
title='Count'
),
bargap=0.01,
bargroupgap=0.1
)
iplot(go.Figure(data=[trace], layout=layout))
combo[combo['movieId']==31]['rating'].mean()
ave_mov = combo['rating'].groupby(combo['movieId']).mean()
ave_mov = ave_mov.reset_index()
ave_mov = ave_mov.rename(columns={"rating": "average rating"})
combo2 = combo.merge(ave_mov, how='left', on='movieId')
combo2.shape
combo2.head(3)
combo['ave'] = combo['rating'].groupby(combo['movieId']).transform('mean')
combo.head()
combo2[combo['movieId']==1129]
combo['count'] = combo['rating'].groupby(combo['movieId']).transform('count')
combo.head()
Explanation: Exercise. Some of these we know how to do, the others we don't. For the ones we know, what is the answer? For the others, what (in loose terms) do we need to be able to do to come up with an answer?
What is the overall average rating?
What is the overall distribution of ratings?
What is the average rating of each movie?
How many ratings does each movie get?
End of explanation
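The last two questions can be answered in one pass with groupby plus agg; a sketch on a toy stand-in for combo:

```python
import pandas as pd

combo = pd.DataFrame({'title': ['A', 'A', 'B'],
                      'rating': [4.0, 5.0, 3.0]})

overall = combo['rating'].mean()                                     # one number
per_movie = combo.groupby('title')['rating'].agg(['mean', 'count'])  # per film
print(overall)
print(per_movie)
```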
url1 = 'http://esa.un.org/unpd/wpp/DVD/Files/'
url2 = '1_Indicators%20(Standard)/EXCEL_FILES/1_Population/'
url3 = 'WPP2015_POP_F07_1_POPULATION_BY_AGE_BOTH_SEXES.XLS'
url = url1 + url2 + url3
cols = [2, 5] + list(range(6,28))
est = pd.read_excel(url, sheetname=0, skiprows=16, parse_cols=cols, na_values=['…'])
prj = pd.read_excel(url, sheetname=1, skiprows=16, parse_cols=cols, na_values=['…'])
print('Dimensions and dtypes of estimates: ', est.shape, '\n', est.dtypes.head(), sep='')
print('\nDimensions and dtypes of projections: ', prj.shape, '\n', prj.dtypes.head(), sep='')
est.to_csv('un_pop_est.csv')
prj.to_csv('un_pop_proj.csv')
Explanation: <a id=population></a>
Population "estimates" and "projections"
We look (again) at the UN's population data, specifically the age distribution of the population. The data comes in two sheets: estimates that cover the period 1950-2015 and projections that cover 2016-2100. Our mission is to combine them.
Load data
We start, as usual, by loading the data. This takes a minute or so.
End of explanation
list(est)[15:]
list(prj)[15:]
est.head()
prj.head()
Explanation: Comment. Note that they have different numbers of columns. Let's see where that comes from.
End of explanation
def cleanpop(df, countries, years):
"""take df as input and select countries and years"""
# rename first two columns
names = list(df)
df = df.rename(columns={names[0]: 'Country', names[1]: 'Year'})
# select countries and years
newdf = df[df['Country'].isin(countries) & df['Year'].isin(years)]
return newdf
countries = ['Japan']
past = [1950, 2000]
future = [2050, 2100]
e = cleanpop(est, countries, past)
p = cleanpop(prj, countries, future)
# make copies for later use
ealt = e.copy()
palt = p.copy()
# fix top-coding in estimates
e['80-84'] = e['80-84'].fillna(0.0) + e['80+'].fillna(0.0)
e = e.drop(['80+'], axis=1)
# check dimensions again
print('Dimensions of cleaned estimates: ', e.shape)
print('Dimensions of cleaned projections: ', p.shape)
# check to see if we have the same variables
list(e) == list(p)
ealt.head()
e.head()
Explanation: Clean data
Pick a usable subset and fix the extra column so that we can combine them. The problem here is that until 1990, the highest age category was '80+'. From 1990 on, we have a finer breakdown.
We fix this by reassigning '80+' to '80-84' and not worrying that some of these people are 85 or older. Note that df.fillna(0.0) replaces missing values with zeros.
End of explanation
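The reassignment can be sanity-checked on a two-row toy frame, where only the older vintage uses the '80+' category (made-up numbers):

```python
import pandas as pd

df = pd.DataFrame({'80-84': [10.0, None], '80+': [None, 7.0]})

# fold the old top category into '80-84', treating missing values as zero
df['80-84'] = df['80-84'].fillna(0.0) + df['80+'].fillna(0.0)
df = df.drop(['80+'], axis=1)
print(df)
```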
pop = pd.concat([e, p], axis=0).fillna(0.0)
pop
Explanation: Merge estimates and projections
If we have two blocks of data, and just want to put them on top of each other, we use Pandas' concatenate function. Ditto two blocks next to each other.
But first we need to fix the difference in the columns of the two dataframes.
End of explanation
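Both layouts, stacking with axis=0 and side by side with axis=1, on toy frames; note how axis=0 fills columns that exist in only one block with NaN, which is exactly why the '80+' column had to be fixed first:

```python
import pandas as pd

top = pd.DataFrame({'Year': [1950], '80+': [5.0]})
bottom = pd.DataFrame({'Year': [2050], '80-84': [6.0]})

stacked = pd.concat([top, bottom], axis=0)   # on top of each other
side = pd.concat([top.reset_index(drop=True),
                  bottom.reset_index(drop=True)], axis=1)  # next to each other
print(stacked)
print(side.shape)
```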
popalt = pd.concat([ealt, palt], axis=0)
popalt
Explanation: Exercise. What happens if we try to merge the original dataframes, including the one with the extra 80+ column? Run the code below and comment on what you get.
End of explanation
pop = pop.drop('Country', axis=1)
popi = pop.set_index('Year')
popi
popi.columns.name = 'Age'
popt = popi.T
popt.head()
ax = popt.plot(kind='bar', color='blue', alpha=0.5,
subplots=True,
sharey=True,
figsize=(8,12))
Explanation: Shape data
We want age categories in the index (the default x axis in a plot) and the years in the columns. The country we don't care about because there's only one.
End of explanation
popi
popi.stack().unstack(level='Year')
list(range(1950, 2016, 5))
countries = ['United States of America', 'Japan']
past = list(range(1950, 2016, 5))
future = list(range(2015, 2101, 5))
e_US_J = cleanpop(est, countries, past)
p_US_J = cleanpop(prj, countries, future)
# fix top-coding in estimates
e_US_J['80-84'] = e_US_J['80-84'].fillna(0.0) + e_US_J['80+'].fillna(0.0)
e_US_J = e_US_J.drop(['80+'], axis=1)
e_US_J.head()
p_US_J[p_US_J['Country']=='United States of America'].head()
pop_US_J = pd.concat([e_US_J, p_US_J], axis=0)#.fillna(0.0)
pop_US_J.shape
pop_US_J
pop_i = pop_US_J.set_index(['Country', 'Year'])
pop_i.index
pop_i.columns.name = 'Age'
pop_st = pop_i.stack()
pop_st.head()
fig, ax = plt.subplots(2, 1, figsize=(12, 8))
pop_st.reorder_levels([2, 0, 1])['5-9']['United States of America'].plot(ax=ax[0], kind='line')
pop_st.reorder_levels([2, 0, 1])['5-9']['Japan'].plot(ax=ax[1], kind='line')
plt.show()
pop_st.head()
pop_st.loc[('Japan', 2100, slice(None))].plot(kind='bar')
plt.show()
pop_st.loc[('Japan', slice(None), '0-4')]
Explanation: Exercise. Use set_index, stack, and unstack to shape the dataframe popi into popt.
End of explanation |
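One possible solution sketch for the exercise, on a toy version of popi (years in the index, age groups in the columns):

```python
import pandas as pd

popi = pd.DataFrame({'0-4': [100, 90], '5-9': [80, 85]},
                    index=pd.Index([1950, 2000], name='Year'))
popi.columns.name = 'Age'

# stack melts the columns into the index, unstack pivots the years back out
popt = popi.stack().unstack(level='Year')
print(popt)
```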
7,644 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to SimpleITKv4 Registration <a href="https
Step1: Utility functions
A number of utility callback functions for image display and for plotting the similarity metric during registration.
Step2: Read images
We first read the images, casting the pixel type to that required for registration (Float32 or Float64) and look at them.
Step3: Initial Alignment
Use the CenteredTransformInitializer to align the centers of the two volumes and set the center of rotation to the center of the fixed image.
Step4: Registration
The specific registration task at hand estimates a 3D rigid transformation between images of different modalities. There are multiple components from each group (optimizers, similarity metrics, interpolators) that are appropriate for the task. Note that each component selection requires setting some parameter values. We have made the following choices
Step5: Post registration analysis
Query the registration method to see the metric value and the reason the optimization terminated.
The metric value allows us to compare multiple registration runs as there is a probabilistic aspect to our registration, we are using random sampling to estimate the similarity metric.
Always remember to query why the optimizer terminated. This will help you understand whether termination is too early, either due to thresholds being too tight, early termination due to small number of iterations - numberOfIterations, or too loose, early termination due to large value for minimal change in similarity measure - convergenceMinimumValue)
Step6: Now visually inspect the results.
Step7: If we are satisfied with the results, save them to file. | Python Code:
import SimpleITK as sitk
# Utility method that either downloads data from the Girder repository or
# if already downloaded returns the file name for reading from disk (cached data).
%run update_path_to_download_script
from downloaddata import fetch_data as fdata
# Always write output to a separate directory, we don't want to pollute the source directory.
import os
OUTPUT_DIR = "Output"
Explanation: Introduction to SimpleITKv4 Registration <a href="https://mybinder.org/v2/gh/InsightSoftwareConsortium/SimpleITK-Notebooks/master?filepath=Python%2F60_Registration_Introduction.ipynb"><img style="float: right;" src="https://mybinder.org/badge_logo.svg"></a>
<table width="100%">
<tr style="background-color: red;"><td><font color="white">SimpleITK conventions:</font></td></tr>
<tr><td>
<ul>
<li>Dimensionality and pixel type of registered images is required to be the same (2D/2D or 3D/3D).</li>
<li>Supported pixel types are sitkFloat32 and sitkFloat64 (use the SimpleITK <a href="http://www.simpleitk.org/doxygen/latest/html/namespaceitk_1_1simple.html#af8c9d7cc96a299a05890e9c3db911885">Cast()</a> function if your image's pixel type is something else).</li>
</ul>
</td></tr>
</table>
Registration Components
<img src="ITKv4RegistrationComponentsDiagram.svg" style="width:700px"/><br><br>
There are many options for creating an instance of the registration framework, all of which are configured in SimpleITK via methods of the <a href="http://www.simpleitk.org/doxygen/latest/html/classitk_1_1simple_1_1ImageRegistrationMethod.html">ImageRegistrationMethod</a> class. This class encapsulates many of the components available in ITK for constructing a registration instance.
Currently, the available choices from the following groups of ITK components are:
Optimizers
The SimpleITK registration framework supports several optimizer types via the SetOptimizerAsX() methods, these include:
<ul>
<li>
<a href="http://www.itk.org/Doxygen/html/classitk_1_1ExhaustiveOptimizerv4.html">Exhaustive</a>
</li>
<li>
<a href="http://www.itk.org/Doxygen/html/classitk_1_1AmoebaOptimizerv4.html">Nelder-Mead downhill simplex</a>, a.k.a. Amoeba.
</li>
<li>
<a href="https://itk.org/Doxygen/html/classitk_1_1PowellOptimizerv4.html">Powell optimizer</a>.
</li>
<li>
<a href="https://itk.org/Doxygen/html/classitk_1_1OnePlusOneEvolutionaryOptimizerv4.html">1+1 evolutionary optimizer</a>.
</li>
<li>
Variations on gradient descent:
<ul>
<li>
<a href="http://www.itk.org/Doxygen/html/classitk_1_1GradientDescentOptimizerv4Template.html">GradientDescent</a>
</li>
<li>
<a href="http://www.itk.org/Doxygen/html/classitk_1_1GradientDescentLineSearchOptimizerv4Template.html">GradientDescentLineSearch</a>
</li>
<li>
<a href="http://www.itk.org/Doxygen/html/classitk_1_1RegularStepGradientDescentOptimizerv4.html">RegularStepGradientDescent</a>
</li>
</ul>
</li>
<li>
<a href="http://www.itk.org/Doxygen/html/classitk_1_1ConjugateGradientLineSearchOptimizerv4Template.html">ConjugateGradientLineSearch</a>
</li>
<li>
<a href="http://www.itk.org/Doxygen/html/classitk_1_1LBFGSBOptimizerv4.html">L-BFGS-B</a> (Limited memory Broyden, Fletcher,Goldfarb,Shannon-Bound Constrained) - supports the use of simple constraints ($l\leq x \leq u$)
</li>
<li>
<a href="https://itk.org/Doxygen/html/classitk_1_1LBFGS2Optimizerv4.html">L-BFGS2</a> (Limited memory Broyden, Fletcher, Goldfarb, Shannon)
</li>
</ul>
Similarity metrics
The SimpleITK registration framework supports several metric types via the SetMetricAsX() methods; these include:
<ul>
<li>
<a href="http://www.itk.org/Doxygen/html/classitk_1_1MeanSquaresImageToImageMetricv4.html">MeanSquares</a>
</li>
<li>
<a href="http://www.itk.org/Doxygen/html/classitk_1_1DemonsImageToImageMetricv4.html">Demons</a>
</li>
<li>
<a href="http://www.itk.org/Doxygen/html/classitk_1_1CorrelationImageToImageMetricv4.html">Correlation</a>
</li>
<li>
<a href="http://www.itk.org/Doxygen/html/classitk_1_1ANTSNeighborhoodCorrelationImageToImageMetricv4.html">ANTSNeighborhoodCorrelation</a>
</li>
<li>
<a href="http://www.itk.org/Doxygen/html/classitk_1_1JointHistogramMutualInformationImageToImageMetricv4.html">JointHistogramMutualInformation</a>
</li>
<li>
<a href="http://www.itk.org/Doxygen/html/classitk_1_1MattesMutualInformationImageToImageMetricv4.html">MattesMutualInformation</a>
</li>
</ul>
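The information-theoretic metrics above reward consistent intensity co-occurrence rather than intensity equality, which is what makes them suitable for multi-modality registration. As a rough, self-contained sketch (plain Python, not the SimpleITK API), mutual information can be computed from a joint intensity histogram like this:

```python
import math

def mutual_information(joint_hist):
    # joint_hist: 2D list of co-occurrence counts of (fixed, moving) intensity bins.
    total = float(sum(sum(row) for row in joint_hist))
    p_fixed = [sum(row) / total for row in joint_hist]
    p_moving = [sum(col) / total for col in zip(*joint_hist)]
    mi = 0.0
    for i, row in enumerate(joint_hist):
        for j, count in enumerate(row):
            if count:
                p_ij = count / total
                mi += p_ij * math.log(p_ij / (p_fixed[i] * p_moving[j]))
    return mi

# Perfectly co-occurring intensities carry maximal information; independent ones carry none.
print(mutual_information([[5, 0], [0, 5]]))  # ~0.6931 (= ln 2)
print(mutual_information([[2, 2], [2, 2]]))  # 0.0
```

Mattes MI additionally uses B-spline Parzen windowing and random sampling to estimate this quantity efficiently, but the underlying idea is the same.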
Interpolators
The SimpleITK registration framework supports several interpolators via the SetInterpolator() method, which receives one of
the <a href="http://www.simpleitk.org/doxygen/latest/html/namespaceitk_1_1simple.html#a7cb1ef8bd02c669c02ea2f9f5aa374e5">following enumerations</a>:
<ul>
<li> sitkNearestNeighbor </li>
<li> sitkLinear </li>
<li> sitkBSpline </li>
<li> sitkGaussian </li>
<li> sitkHammingWindowedSinc </li>
<li> sitkCosineWindowedSinc </li>
<li> sitkWelchWindowedSinc </li>
<li> sitkLanczosWindowedSinc </li>
<li> sitkBlackmanWindowedSinc </li>
</ul>
Data - Retrospective Image Registration Evaluation
We will be using part of the training data from the Retrospective Image Registration Evaluation (<a href="http://www.insight-journal.org/rire/">RIRE</a>) project.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
from ipywidgets import interact, fixed
from IPython.display import clear_output

# The cells below also rely on these (normally defined in the notebook's setup cell):
import os
import SimpleITK as sitk
from downloaddata import fetch_data as fdata  # helper shipped with the SimpleITK-Notebooks repository

OUTPUT_DIR = "output"  # assumed output directory for saved results
# Callback invoked by the interact IPython method for scrolling through the image stacks of
# the two images (moving and fixed).
def display_images(fixed_image_z, moving_image_z, fixed_npa, moving_npa):
    # Create a figure with two subplots and the specified size.
    plt.subplots(1, 2, figsize=(10, 8))

    # Draw the fixed image in the first subplot.
    plt.subplot(1, 2, 1)
    plt.imshow(fixed_npa[fixed_image_z, :, :], cmap=plt.cm.Greys_r)
    plt.title("fixed image")
    plt.axis("off")

    # Draw the moving image in the second subplot.
    plt.subplot(1, 2, 2)
    plt.imshow(moving_npa[moving_image_z, :, :], cmap=plt.cm.Greys_r)
    plt.title("moving image")
    plt.axis("off")

    plt.show()
# Callback invoked by the IPython interact method for scrolling and modifying the alpha blending
# of an image stack of two images that occupy the same physical space.
def display_images_with_alpha(image_z, alpha, fixed, moving):
    img = (1.0 - alpha) * fixed[:, :, image_z] + alpha * moving[:, :, image_z]
    plt.imshow(sitk.GetArrayViewFromImage(img), cmap=plt.cm.Greys_r)
    plt.axis("off")
    plt.show()
# Callback invoked when the StartEvent happens, sets up our new data.
def start_plot():
    global metric_values, multires_iterations

    metric_values = []
    multires_iterations = []
# Callback invoked when the EndEvent happens, do cleanup of data and figure.
def end_plot():
    global metric_values, multires_iterations

    del metric_values
    del multires_iterations
    # Close the figure; we don't want to get a duplicate of the plot later on.
    plt.close()
# Callback invoked when the IterationEvent happens, update our data and display new figure.
def plot_values(registration_method):
    global metric_values, multires_iterations

    metric_values.append(registration_method.GetMetricValue())
    # Clear the output area (wait=True, to reduce flickering), and plot current data.
    clear_output(wait=True)
    # Plot the similarity metric values.
    plt.plot(metric_values, "r")
    plt.plot(
        multires_iterations,
        [metric_values[index] for index in multires_iterations],
        "b*",
    )
    plt.xlabel("Iteration Number", fontsize=12)
    plt.ylabel("Metric Value", fontsize=12)
    plt.show()
# Callback invoked when the sitkMultiResolutionIterationEvent happens, update the index into the
# metric_values list.
def update_multires_iterations():
    global metric_values, multires_iterations
    multires_iterations.append(len(metric_values))
Explanation: Utility functions
A number of utility callback functions for image display and for plotting the similarity metric during registration.
End of explanation
fixed_image = sitk.ReadImage(fdata("training_001_ct.mha"), sitk.sitkFloat32)
moving_image = sitk.ReadImage(fdata("training_001_mr_T1.mha"), sitk.sitkFloat32)
interact(
    display_images,
    fixed_image_z=(0, fixed_image.GetSize()[2] - 1),
    moving_image_z=(0, moving_image.GetSize()[2] - 1),
    fixed_npa=fixed(sitk.GetArrayViewFromImage(fixed_image)),
    moving_npa=fixed(sitk.GetArrayViewFromImage(moving_image)),
);
Explanation: Read images
We first read the images, casting the pixel type to that required for registration (Float32 or Float64) and look at them.
End of explanation
initial_transform = sitk.CenteredTransformInitializer(
    fixed_image,
    moving_image,
    sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY,
)

moving_resampled = sitk.Resample(
    moving_image,
    fixed_image,
    initial_transform,
    sitk.sitkLinear,
    0.0,
    moving_image.GetPixelID(),
)

interact(
    display_images_with_alpha,
    image_z=(0, fixed_image.GetSize()[2] - 1),
    alpha=(0.0, 1.0, 0.05),
    fixed=fixed(fixed_image),
    moving=fixed(moving_resampled),
);
Explanation: Initial Alignment
Use the CenteredTransformInitializer to align the centers of the two volumes and set the center of rotation to the center of the fixed image.
End of explanation
registration_method = sitk.ImageRegistrationMethod()
# Similarity metric settings.
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)
registration_method.SetInterpolator(sitk.sitkLinear)
# Optimizer settings.
registration_method.SetOptimizerAsGradientDescent(
    learningRate=1.0,
    numberOfIterations=100,
    convergenceMinimumValue=1e-6,
    convergenceWindowSize=10,
)
registration_method.SetOptimizerScalesFromPhysicalShift()
# Setup for the multi-resolution framework.
registration_method.SetShrinkFactorsPerLevel(shrinkFactors=[4, 2, 1])
registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas=[2, 1, 0])
registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()
# Don't optimize in-place, we would possibly like to run this cell multiple times.
registration_method.SetInitialTransform(initial_transform, inPlace=False)
# Connect all of the observers so that we can perform plotting during registration.
registration_method.AddCommand(sitk.sitkStartEvent, start_plot)
registration_method.AddCommand(sitk.sitkEndEvent, end_plot)
registration_method.AddCommand(
    sitk.sitkMultiResolutionIterationEvent, update_multires_iterations
)
registration_method.AddCommand(
    sitk.sitkIterationEvent, lambda: plot_values(registration_method)
)

final_transform = registration_method.Execute(
    sitk.Cast(fixed_image, sitk.sitkFloat32), sitk.Cast(moving_image, sitk.sitkFloat32)
)
Explanation: Registration
The specific registration task at hand estimates a 3D rigid transformation between images of different modalities. There are multiple components from each group (optimizers, similarity metrics, interpolators) that are appropriate for the task. Note that each component selection requires setting some parameter values. We have made the following choices:
<ul>
<li>Similarity metric, mutual information (Mattes MI):
<ul>
<li>Number of histogram bins, 50.</li>
<li>Sampling strategy, random.</li>
<li>Sampling percentage, 1%.</li>
</ul>
</li>
<li>Interpolator, sitkLinear.</li>
<li>Optimizer, gradient descent:
<ul>
<li>Learning rate, the step size along the traversal direction in parameter space, 1.0.</li>
<li>Number of iterations, maximal number of iterations, 100.</li>
<li>Convergence minimum value, value used for convergence checking in conjunction with the energy profile of the similarity metric that is estimated in the given window size, 1e-6.</li>
<li>Convergence window size, number of values of the similarity metric which are used to estimate the energy profile of the similarity metric, 10.</li>
</ul>
</li>
</ul>
Perform the registration using the settings given above, taking advantage of the built-in multi-resolution framework with a three-tier pyramid.
In this example we plot the similarity metric's value during registration. Note that the change of scales in the multi-resolution framework is readily visible.
End of explanation
print(f"Final metric value: {registration_method.GetMetricValue()}")
print(
    f"Optimizer's stopping condition, {registration_method.GetOptimizerStopConditionDescription()}"
)
Explanation: Post registration analysis
Query the registration method to see the metric value and the reason the optimization terminated.
The metric value allows us to compare multiple registration runs, as there is a probabilistic aspect to our registration: we use random sampling to estimate the similarity metric.
Always remember to query why the optimizer terminated. This will help you understand whether termination was premature, either because the thresholds were too tight (a small number of iterations, numberOfIterations) or too loose (a large value for the minimal change in the similarity measure, convergenceMinimumValue).
End of explanation
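To give intuition for how convergenceMinimumValue and convergenceWindowSize interact, here is a minimal, plain-Python sketch of window-based convergence checking (the actual ITK implementation estimates an energy profile over the window rather than taking a simple difference):

```python
def has_converged(metric_values, convergence_window_size, convergence_minimum_value):
    # Declare convergence once the change of the metric across the last
    # `convergence_window_size` samples falls below the minimum value.
    if len(metric_values) < convergence_window_size:
        return False
    window = metric_values[-convergence_window_size:]
    return abs(window[-1] - window[0]) < convergence_minimum_value

print(has_converged([0.5, 0.2, 0.1], 10, 1e-6))         # False: window not filled yet
print(has_converged([0.5] * 5 + [0.1] * 10, 10, 1e-6))  # True: metric is flat over the window
```

A window that is too small or a minimum value that is too large makes the check trigger early, which is exactly the premature-termination scenario described above.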
moving_resampled = sitk.Resample(
    moving_image,
    fixed_image,
    final_transform,
    sitk.sitkLinear,
    0.0,
    moving_image.GetPixelID(),
)

interact(
    display_images_with_alpha,
    image_z=(0, fixed_image.GetSize()[2] - 1),
    alpha=(0.0, 1.0, 0.05),
    fixed=fixed(fixed_image),
    moving=fixed(moving_resampled),
);
Explanation: Now visually inspect the results.
End of explanation
sitk.WriteImage(
    moving_resampled, os.path.join(OUTPUT_DIR, "RIRE_training_001_mr_T1_resampled.mha")
)
sitk.WriteTransform(
    final_transform, os.path.join(OUTPUT_DIR, "RIRE_training_001_CT_2_mr_T1.tfm")
)
Explanation: If we are satisfied with the results, save them to file.
End of explanation |
7,645 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
KMeans Clustering
K = 5
Step1: Bisecting K-Means
Step2: Cutting the tree structure
Cut the tree to get a clustering with a new n_clusters
bkm.cut(n_clusters=4)
It returns a tuple | Python Code:
import pyclust  # `df` and `plot_scatter` are defined in the notebook's earlier setup cells

km = pyclust.KMeans(n_clusters=5)
km.fit(df.iloc[:,0:2].values)
print(km.centers_)
plot_scatter(df.iloc[:,0:2].values, labels=km.labels_, title="Scatter Plot: K-Means")
Explanation: KMeans Clustering
K = 5
End of explanation
bkm = pyclust.BisectKMeans(n_clusters=5)
bkm.fit(df.iloc[:,0:2].values)
print(bkm.labels_)
plot_scatter(df.iloc[:,0:2].values, labels=bkm.labels_, title="Scatter Plot: Bisecting K-Means")
bkm.tree_.show(line_type='ascii')
Explanation: Bisecting K-Means
End of explanation
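Bisecting K-Means builds its tree by repeatedly splitting one cluster in two until n_clusters is reached. As a rough, self-contained illustration of that idea (pure Python on 1-D points, not the pyclust implementation):

```python
def two_means_1d(points, iters=10):
    # A tiny 1-D 2-means: seed the centres at the extremes, then alternate
    # assignment and centre updates.
    c1, c2 = min(points), max(points)
    left, right = [], []
    for _ in range(iters):
        left = [p for p in points if abs(p - c1) <= abs(p - c2)]
        right = [p for p in points if abs(p - c1) > abs(p - c2)]
        if left:
            c1 = sum(left) / float(len(left))
        if right:
            c2 = sum(right) / float(len(right))
    return left, right

def sse(cluster):
    mean = sum(cluster) / float(len(cluster))
    return sum((p - mean) ** 2 for p in cluster)

def bisecting_kmeans_1d(points, n_clusters):
    clusters = [list(points)]
    while len(clusters) < n_clusters:
        target = max(clusters, key=sse)   # split the loosest cluster
        clusters.remove(target)
        clusters.extend(two_means_1d(target))
    return sorted(clusters, key=min)

print(bisecting_kmeans_1d([1, 2, 3, 10, 11, 12, 30, 31, 32], 3))
# → [[1, 2, 3], [10, 11, 12], [30, 31, 32]]
```

Because each split is recorded, the resulting hierarchy can later be cut at any level, which is what `bkm.cut()` exposes below.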
plot_scatter(df.iloc[:,0:2].values, labels=bkm.cut(2)[0], title="Scatter Plot: Bisecting K-Means (2)")
plot_scatter(df.iloc[:,0:2].values, labels=bkm.cut(3)[0], title="Scatter Plot: Bisecting K-Means (3)")
plot_scatter(df.iloc[:,0:2].values, labels=bkm.cut(4)[0], title="Scatter Plot: Bisecting K-Means (4)")
Explanation: Cutting the tree structure
Cut the tree to get a clustering with a new n_clusters
bkm.cut(n_clusters=4)
It returns a tuple:
the first element is the new cluster memberships;
the second element is a dictionary mapping each cluster label to its centroid.
Example
```python
bkm.cut(3)
(array([4, 4, 2, 2, 4, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 4, 2, 2, 2, 2, 4, 2, 3,
3, 2, 3, 4, 3, 4, 4, 3, 4, 3, 4, 2, 4, 2, 4, 3, 2, 3, 2, 2, 4, 3, 2,
2, 4, 2, 4, 4, 2, 2, 4, 3, 3, 2, 4, 4, 4, 3, 3, 2, 2, 4, 2, 2, 2, 3,
4, 3, 2, 2, 4, 3, 2, 2, 3, 2, 3, 3, 2, 3, 3, 2, 3, 2, 2, 2, 2, 3, 2,
2, 2, 2, 3, 4, 4, 2, 3, 2, 4, 2, 2, 2, 2, 2, 3, 4, 4, 4, 2, 2, 3, 4,
2, 2, 2, 4, 3]),
{2: [-8.1686500000000013, 4.1619483333333331],
3: [-9.2501724137931021, -2.0435517241379313],
4: [8.3429774193548365, -0.30114193548387092]})
```
End of explanation |
7,646 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
XMAP plotter
Helping hands
http
Step1: Definitions
Step2: Setup
Figure sizes controller
Step3: Column type definition
Step4: Read XMAP
http
Step7: Add length column
Step8: More stats
Step9: Good part
http
Step10: Global statistics
Step11: List of chromosomes
Step12: Quality distribution
Step13: Quality distribution per chromosome
Step14: Position distribution
Step15: Position distribution per chromosome
Step16: Length distribution
Step17: Length distribution per chromosome | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
#import matplotlib as plt
#plt.use('TkAgg')
import operator
import re
from collections import defaultdict
import pylab
pylab.show()
%pylab inline
Explanation: XMAP plotter
Helping hands
http://nbviewer.ipython.org/github/herrfz/dataanalysis/blob/master/week2/getting_data.ipynb
http://nbviewer.ipython.org/github/jvns/pandas-cookbook/blob/master/cookbook/Chapter%201%20-%20Reading%20from%20a%20CSV.ipynb
Imports
End of explanation
fileUrl = "../S_lycopersicum_chromosomes.2.50.BspQI_to_EXP_REFINEFINAL1_xmap.txt"
MIN_CONF = 10.0
FULL_FIG_W , FULL_FIG_H = 16, 8
CHROM_FIG_W, CHROM_FIG_H = FULL_FIG_W, 20
Explanation: Definitions
End of explanation
class size_controller(object):
    def __init__(self, w, h):
        self.w = w
        self.h = h

    def __enter__(self):
        self.o = rcParams['figure.figsize']
        rcParams['figure.figsize'] = self.w, self.h
        return None

    def __exit__(self, type, value, traceback):
        rcParams['figure.figsize'] = self.o
Explanation: Setup
Figure sizes controller
End of explanation
col_type_int = np.int64
col_type_flo = np.float64
col_type_str = np.object
col_info =[
[ "XmapEntryID" , col_type_int ],
[ "QryContigID" , col_type_int ],
[ "RefContigID" , col_type_int ],
[ "QryStartPos" , col_type_flo ],
[ "QryEndPos" , col_type_flo ],
[ "RefStartPos" , col_type_flo ],
[ "RefEndPos" , col_type_flo ],
[ "Orientation" , col_type_str ],
[ "Confidence" , col_type_flo ],
[ "HitEnum" , col_type_str ],
[ "QryLen" , col_type_flo ],
[ "RefLen" , col_type_flo ],
[ "LabelChannel", col_type_str ],
[ "Alignment" , col_type_str ],
]
col_names=[cf[0] for cf in col_info]
col_types=dict(zip([c[0] for c in col_info], [c[1] for c in col_info]))
col_types
Explanation: Column type definition
End of explanation
# The converter mapping below is unused by the read_csv call and `filter_conv`
# is not defined in this notebook, so it is left commented out.
# CONVERTERS = {'info': filter_conv}
SKIP_ROWS = 9
NROWS = None
gffData = pd.read_csv(fileUrl, names=col_names, index_col='XmapEntryID', dtype=col_types, header=None, skiprows=SKIP_ROWS, delimiter="\t", comment="#", verbose=True, nrows=NROWS)
gffData.head()
Explanation: Read XMAP
http://nbviewer.ipython.org/github/herrfz/dataanalysis/blob/master/week2/getting_data.ipynb
End of explanation
gffData['qry_match_len'] = abs(gffData['QryEndPos'] - gffData['QryStartPos'])
gffData['ref_match_len'] = abs(gffData['RefEndPos'] - gffData['RefStartPos'])
gffData['match_prop' ] = gffData['qry_match_len'] / gffData['ref_match_len']
gffData = gffData[gffData['Confidence'] >= MIN_CONF]
del gffData['LabelChannel']
gffData.head()
re_matches = re.compile("(\d+)M")
re_insertions = re.compile("(\d+)I")
re_deletions = re.compile("(\d+)D")
def process_cigar(cigar, **kwargs):
    """2M3D1M1D1M1D4M1I2M1D2M1D1M2I2D9M3I3M1D6M1D2M2D1M1D6M1D1M1D1M2D2M2D1M1I1D1M1D5M2D4M2D1M2D2M1D2M1D3M1D1M1D2M3I3D1M1D1M3D2M3D1M2I1D1M2D1M1D1M1I2D3M2I1M1D2M1D1M1D1M2I3D3M3D1M2D1M1D1M1D5M2D12M"""
    assert(set([x for x in cigar]) <= set(['M', 'D', 'I', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9']))

    cigar_matches = 0
    cigar_insertions = 0
    cigar_deletions = 0

    i_matches = re_matches.finditer(cigar)
    i_inserts = re_insertions.finditer(cigar)
    i_deletes = re_deletions.finditer(cigar)

    for i in i_matches:
        n = i.group(1)
        cigar_matches += int(n)

    for i in i_inserts:
        n = i.group(1)
        cigar_insertions += int(n)

    for i in i_deletes:
        n = i.group(1)
        cigar_deletions += int(n)

    return cigar_matches, cigar_insertions, cigar_deletions
gffData[['cigar_matches', 'cigar_insertions', 'cigar_deletions']] = gffData['HitEnum'].apply(process_cigar, axis=1).apply(pd.Series, 1)
del gffData['HitEnum']
gffData.head()
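For illustration, the same per-operation tally can be written as one self-contained function with a single combined pattern (independent of the notebook's gffData; the dictionary keys mirror the M/I/D columns above):

```python
import re

def tally_cigar(cigar):
    # Sum run lengths per operation: M (match), I (insertion), D (deletion).
    counts = {"M": 0, "I": 0, "D": 0}
    for length, op in re.findall(r"(\d+)([MID])", cigar):
        counts[op] += int(length)
    return counts

print(tally_cigar("2M3D1M1D1M1I4M"))  # {'M': 8, 'I': 1, 'D': 4}
```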
re_alignment = re.compile("\((\d+),(\d+)\)")
def process_alignment(alignment, **kwargs):
    """Alignment (4862,48)(4863,48)(4864,47)(4865,46)(4866,45)(4867,44)(4870,43)(4873,42)(4874,41)(4875,40)(4877,40)(4878,39)(4879,38)(4880,37)(4883,36)(4884,36)(4885,35)(4886,34)(4887,33)(4888,33)(4889,32)(4890,30)(4891,30)(4892,29)(4893,28)(4894,28)(4899,27)(4900,26)(4901,25)(4902,24)(4903,23)(4904,22)(4906,21)(4907,21)(4908,20)(4910,19)(4911,18)(4912,17)(4913,16)(4915,15)(4917,14)(4918,13)(4919,12)(4920,11)(4922,10)(4923,9)(4925,8)(4927,7)(4930,6)(4931,5)(4932,3)(4933,2)(4934,1)"""
    assert(set([x for x in alignment]) <= set(['(', ')', ',', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9']))

    count_refs = defaultdict(int)
    count_queries = defaultdict(int)
    count_refs_colapses = 0
    count_queries_colapses = 0

    i_alignment = re_alignment.finditer(alignment)
    for i in i_alignment:
        c_r = int(i.group(1))
        c_q = int(i.group(2))
        count_refs[c_r] += 1
        count_queries[c_q] += 1

    count_refs_colapses = sum([count_refs[x] for x in count_refs if count_refs[x] > 1])
    count_queries_colapses = sum([count_queries[x] for x in count_queries if count_queries[x] > 1])

    return len(count_refs), len(count_queries), count_refs_colapses, count_queries_colapses
gffData[['len_count_refs', 'len_count_queries', 'count_refs_colapses', 'count_queries_colapses']] = gffData['Alignment'].apply(process_alignment, axis=1).apply(pd.Series, 1)
del gffData['Alignment']
gffData.head()
Explanation: Add length column
End of explanation
ref_qry = gffData[['RefContigID', 'QryContigID']]
ref_qry = ref_qry.sort_values('RefContigID')  # DataFrame.sort() was deprecated/removed in newer pandas
print(ref_qry.head())
ref_qry_grpby_ref = ref_qry.groupby('RefContigID', sort=True)
ref_qry_grpby_ref.head()
qry_ref = gffData[['QryContigID', 'RefContigID']]
qry_ref = qry_ref.sort_values('QryContigID')
print(qry_ref.head())
qry_ref_grpby_qry = qry_ref.groupby('QryContigID', sort=True)
qry_ref_grpby_qry.head()
def stats_from_data_vals(RefContigID, QryContigID, groups, indexer, data, data_vals, valid_data_poses):
    ref_lens = [(x["RefStartPos"], x["RefEndPos"]) for x in data_vals]
    qry_lens = [(x["QryStartPos"], x["QryEndPos"]) for x in data_vals]

    num_qry_matches = []
    for RefContigID_l in groups["QryContigID_RefContigID"][QryContigID]:
        for match_pos in groups["QryContigID_RefContigID"][QryContigID][RefContigID_l]:
            if match_pos in valid_data_poses:
                num_qry_matches.append(RefContigID_l)
    #num_qry_matches = len(groups["QryContigID_RefContigID"][QryContigID])
    num_qry_matches = len(set(num_qry_matches))

    num_orientations = len(set([x["Orientation"] for x in data_vals]))

    ref_no_gap_len = sum([max(x) - min(x) for x in ref_lens])
    ref_min_coord = min([min(x) for x in ref_lens])
    ref_max_coord = max([max(x) for x in ref_lens])
    ref_gap_len = ref_max_coord - ref_min_coord

    qry_no_gap_len = sum([max(x) - min(x) for x in qry_lens])
    qry_min_coord = min([min(x) for x in qry_lens])
    qry_max_coord = max([max(x) for x in qry_lens])
    qry_gap_len = qry_max_coord - qry_min_coord

    XmapEntryIDs = groups["QryContigID_XmapEntryID"][QryContigID].keys()

    Confidences = []
    for XmapEntryID in XmapEntryIDs:
        data_pos = list(indexer["XmapEntryID"][XmapEntryID])[0]
        if data_pos not in valid_data_poses:
            continue
        Confidences.append([data[data_pos]["Confidence"], data[data_pos]["RefContigID"]])

    max_confidence = max([x[0] for x in Confidences])
    max_confidence_chrom = [x[1] for x in Confidences if x[0] == max_confidence][0]

    stats = {}
    stats["_meta_is_max_confidence_for_qry_chrom"] = max_confidence_chrom == RefContigID
    stats["_meta_len_ref_match_gapped"] = ref_gap_len
    stats["_meta_len_ref_match_no_gap"] = ref_no_gap_len
    stats["_meta_len_qry_match_gapped"] = qry_gap_len
    stats["_meta_len_qry_match_no_gap"] = qry_no_gap_len
    stats["_meta_max_confidence_for_qry"] = max_confidence
    stats["_meta_max_confidence_for_qry_chrom"] = max_confidence_chrom
    stats["_meta_num_orientations"] = num_orientations
    stats["_meta_num_qry_matches"] = num_qry_matches
    stats["_meta_qry_matches"] = ','.join([str(x) for x in sorted(list(set([x[1] for x in Confidences])))])
    stats["_meta_proportion_sizes_gapped"] = (ref_gap_len * 1.0) / qry_gap_len
    stats["_meta_proportion_sizes_no_gap"] = (ref_no_gap_len * 1.0) / qry_no_gap_len

    return stats


# NOTE: this loop is excerpted from the companion script; `RefContigID`, `groups`,
# `indexer`, `data`, `QryContigIDs`, `valid_fields` and `reporter` are built elsewhere.
for QryContigID in sorted(QryContigIDs):
    data_poses = list(groups["RefContigID_QryContigID"][RefContigID][QryContigID])
    all_data_poses = list(indexer["QryContigID"][QryContigID])
    data_vals = [data[x] for x in data_poses]

    stats = stats_from_data_vals(RefContigID, QryContigID, groups, indexer, data, data_vals, all_data_poses)
    #print "RefContigID %4d QryContigID %6d" % (RefContigID, QryContigID)

    for data_val in data_vals:
        cigar = data_val["HitEnum"]
        cigar_matches, cigar_insertions, cigar_deletions = process_cigar(cigar)

        Alignment = data_val["Alignment"]
        alignment_count_queries, alignment_count_refs, alignment_count_refs_colapses, alignment_count_queries_colapses = process_alignment(Alignment)

        for stat in stats:
            data_val[stat] = stats[stat]

        data_val["_meta_proportion_query_len_gapped"] = (data_val['_meta_len_qry_match_gapped'] * 1.0) / data_val["QryLen"]
        data_val["_meta_proportion_query_len_no_gap"] = (data_val['_meta_len_qry_match_no_gap'] * 1.0) / data_val["QryLen"]

        #print " ", " ".join(["%s %s" % (x, str(data_val[x])) for x in sorted(data_val)])
        reporter.write("\t".join([str(data_val[x]) for x in valid_fields['names']]) + "\n")
Explanation: More stats
End of explanation
gffData.dtypes
Explanation: Good part
http://nbviewer.ipython.org/github/jvns/pandas-cookbook/blob/master/cookbook/Chapter%201%20-%20Reading%20from%20a%20CSV.ipynb
http://pandas.pydata.org/pandas-docs/dev/visualization.html
https://bespokeblog.wordpress.com/2011/07/11/basic-data-plotting-with-matplotlib-part-3-histograms/
http://nbviewer.ipython.org/github/mwaskom/seaborn/blob/master/examples/plotting_distributions.ipynb
http://nbviewer.ipython.org/github/herrfz/dataanalysis/blob/master/week3/exploratory_graphs.ipynb
http://pandas.pydata.org/pandas-docs/version/0.15.0/visualization.html
http://www.gregreda.com/2013/10/26/working-with-pandas-dataframes/
Column types
End of explanation
gffData[['Confidence', 'QryLen', 'qry_match_len', 'ref_match_len', 'match_prop']].describe()
Explanation: Global statistics
End of explanation
chromosomes = np.unique(gffData['RefContigID'].values)
chromosomes
Explanation: List of chromosomes
End of explanation
with size_controller(FULL_FIG_W, FULL_FIG_H):
    bq = gffData.boxplot(column='Confidence')
Explanation: Quality distribution
End of explanation
with size_controller(FULL_FIG_W, FULL_FIG_H):
    bqc = gffData.boxplot(column='Confidence', by='RefContigID')
Explanation: Quality distribution per chromosome
End of explanation
with size_controller(FULL_FIG_W, FULL_FIG_H):
    hs = gffData['RefStartPos'].hist()
Explanation: Position distribution
End of explanation
hsc = gffData['qry_match_len'].hist(by=gffData['RefContigID'], figsize=(CHROM_FIG_W, CHROM_FIG_H), layout=(len(chromosomes),1))
hsc = gffData['RefStartPos'].hist(by=gffData['RefContigID'], figsize=(CHROM_FIG_W, CHROM_FIG_H), layout=(len(chromosomes),1))
Explanation: Position distribution per chromosome
End of explanation
with size_controller(FULL_FIG_W, FULL_FIG_H):
    hl = gffData['qry_match_len'].hist()
Explanation: Length distribution
End of explanation
hlc = gffData['qry_match_len'].hist(by=gffData['RefContigID'], figsize=(CHROM_FIG_W, CHROM_FIG_H), layout=(len(chromosomes),1))
Explanation: Length distribution per chromosome
End of explanation |
7,647 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Identifiers aka Variables
In Python, variable names are a kind of tag/pointer to the memory location that hosts the data. We can also think of a variable as a labeled container that can store a single value. That single value can be of practically any data type.
Storing Values in Variables
Step1: In the above example, current_month is the variable name and "MAY" is the value associated with it. The operation performed in the first line is called assignment, and such statements are called assignment statements. Let's discuss them in detail.
Assignment Statements
You’ll store values in variables with an assignment statement. An assignment statement consists of a variable name, an equal sign (called the assignment operator), and the value to be stored. If you enter the assignment statement current_month = "MAY", then a variable named current_month will be pointing to a memory location which has the string value "MAY" stored in it.
In Python, we do not need to declare variables explicitly. They are declared automatically when any value is assigned. The assignment is done using the equal (=) operator, as shown in the example below:
Step2: The pictorial representation of variables from above example.
<img src="files/variables.png">
Now let's perform some actions on the variable current_month and observe the changes happening to it.
In the example shown below, we will reassign a new value, JUNE, to the variable current_month and observe the effects.
The image below shows the process of reassignment. Note that a new memory location is assigned to the variable instead of reusing the existing one.
Step3: current_month was initially pointing to the memory location containing the value MAY; after reassignment, it points to a new memory location containing the value JUNE, and if nothing else references the previous value, Python's garbage collector will automatically clean it up at some future time.
Step4: Note
Step5: NOTE
Step6: In the above example, x, y and z all point to the same memory location containing 1000, which we can verify by checking the id of each variable. Since they point to the same memory location, the id values for all three are the same.
Step7: Now, let's change the value of one variable and check the respective ids again.
Step8: Now, let's test something else: can different data types affect Python's memory-optimization behavior? We will first test it with an integer, then a string, and then a list.
Step9: Check the ids of x and z: they are the same, but y's is not.
Step10: 2. Assigning multiple values to multiple variables
Step11: Variable Names & Naming Conventions
There are a couple of naming conventions in use in Python
Step12: Options can be used to override the default regular expression associated to each type. The table below lists the types, their associated options, and their default regular expressions.
| Type | Default Expression |
Step13: Good Variable Name
Choose a meaningful name instead of a short one: roll_no is better than rn.
Keep variable-name length reasonable; isn't Roll_no_of_a_student too long?
Be consistent: roll_no or RollNo, not a mix of both.
Begin a variable name with an underscore (_) character only for special cases.
Exercises
Q 1. Find the valid and invalid variable names among the following | Python Code:
current_month = "MAY"
print(current_month)
Explanation: Python Identifiers aka Variables
In Python, variable names are a kind of tag/pointer to the memory location that hosts the data. We can also think of a variable as a labeled container that can store a single value. That single value can be of practically any data type.
Storing Values in Variables:
In Python, the declaration and assignment of a value to a variable happen at the same time, i.e. as soon as we assign a value to a non-existing or existing variable, the required memory location is allocated and populated with the data.
NOTE: Storing values is one of the most important concepts in Python and should be understood with great care.
End of explanation
current_month = "MAY" # A comment.
date = 10
Explanation: In the above example, current_month is the variable name and "MAY" is the value associated with it. The operation performed in the first line is called assignment, and such statements are called assignment statements. Let's discuss them in detail.
Assignment Statements
You’ll store values in variables with an assignment statement. An assignment statement consists of a variable name, an equal sign (called the assignment operator), and the value to be stored. If you enter the assignment statement current_month = "MAY", then a variable named current_month will be pointing to a memory location which has the string value "MAY" stored in it.
In Python, we do not need to declare variables explicitly. They are declared automatically when any value is assigned. The assignment is done using the equal (=) operator, as shown in the example below:
End of explanation
current_month = "JUNE"
Explanation: The pictorial representation of variables from above example.
<img src="files/variables.png">
Now let's perform some actions on the variable current_month and observe the changes happening to it.
In the example shown below, we will reassign a new value, JUNE, to the variable current_month and observe the effects.
The image below shows the process of reassignment. Note that a new memory location is assigned to the variable instead of reusing the existing one.
End of explanation
current_month = "JUNE"
print(id(current_month))
next_month = "JUNE"
print(id(next_month))
next_month = "June"
print(id(next_month))
Explanation: current_month was initially pointing to the memory location containing the value MAY; after reassignment, it points to a new memory location containing the value JUNE, and if nothing else references the previous value, Python's garbage collector will automatically clean it up at some future time.
End of explanation
########## Reference count ###################
# NOTE: Please test the below code by saving
# it as a file and executing it instead
# of running it here.
#############################################
import sys
new_var = 10101010101000
print(sys.getrefcount(new_var))
Explanation: Note that the value MAY was not updated; instead, new memory was allocated for the value JUNE and the variable now points to it.
Later in the chapter, we will show the above scenario with more examples.
How to find the reference count of a value
End of explanation
x=y=z=1000
print(x, y, z)
Explanation: NOTE:
The value of the refcount will almost always be higher than you expect. This is done internally by Python to optimize the code. I will be adding more details about it in "Section 2 -> Chapter: GC & Cleanup".
Multiple Assignment:
In multiple assignment, multiple variables are assigned values in a single line. This can be done in two ways: in the first form, all the variables point to the same value; in the second, each variable points to its own value.
1. Assigning a single value to multiple variables:
End of explanation
print(id(x))
print(id(y))
print(id(z))
Explanation: In the above example, x, y and z all point to the same memory location, which contains 1000; we can verify this by checking the id of each variable. Because they point to the same memory location, the id values for all three are the same.
End of explanation
x = 200
print(x)
print(y)
print(z)
print(id(x))
print(id(y))
print(id(z))
Explanation: Now, let's change the value of one variable and check the respective ids again.
End of explanation
### INTEGER
x=1000
y=1000
z=1000
print(x)
print(y)
print(z)
print(id(x))
print(id(y))
print(id(z))
### INTEGER
x=24
y=24
z=24
print(x)
print(y)
print(z)
print(id(x))
print(id(y))
print(id(z))
### String
x="1000"
y=1000
z="1000"
print(x)
print(y)
print(z)
print(id(x))
print(id(y))
print(id(z))
Explanation: Now, let's test something else: can different data types affect Python's memory-optimization behavior? We will first test it with an integer, then a string, and then a list.
End of explanation
### list
x = ["1000"]
y = [1000]
z = ["1000"]
a = [1000]
print(x)
print(y)
print(z)
print(a)
print(id(x))
print(id(y))
print(id(z))
print(id(a))
Explanation: Check the ids of x and z: they are the same, but y's is not.
End of explanation
x, y, z = 10, 20, 30
print(x)
print(y)
print(z)
print(id(x))
print(id(y))
print(id(z))
x, y, z = 10, 120, 10
print(x)
print(y)
print(z)
print(id(x))
print(id(y))
print(id(z))
Explanation: 2. Assigning multiple values to multiple variables:
End of explanation
pm_name = "Narendra Modi"
prime_minister = "Narendra Modi"
cong_p_name = "Rahul Gandhi"
corrent_name_of_cong_president = "Rahul Gandhi"
cong_president = "Rahul Gandhi"
cname = "RG"
Explanation: Variable Names & Naming Conventions
There are a couple of naming conventions in use in Python:
- lower_with_underscores: Uses only lower case letters and connects multiple words with underscores.
- UPPER_WITH_UNDERSCORES: Uses only upper case letters and connects multiple words with underscores.
- CapitalWords: Capitalize the first letter of each word; no underscores. With these conventions in mind, here are the naming conventions in use.
Variable Names: lower_with_underscores
Constants: UPPER_WITH_UNDERSCORES
Function Names: lower_with_underscores
Function Parameters: lower_with_underscores
Class Names: CapitalWords
Method Names: lower_with_underscores
Method Parameters and Variables: lower_with_underscores
Always use self as the first parameter to a method
To indicate privacy, precede name with a single underscore.
End of explanation
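A short sketch putting the conventions above together (all names here — `MAX_RETRIES`, `BankAccount`, `open_account` — are invented purely for illustration):

```python
# Constant: UPPER_WITH_UNDERSCORES
MAX_RETRIES = 3

# Class name: CapitalWords
class BankAccount:
    # Method names and parameters: lower_with_underscores; first parameter is self
    def __init__(self, opening_balance):
        self._balance = opening_balance  # leading underscore signals privacy

    def deposit(self, amount):
        self._balance += amount
        return self._balance

# Function name and variables: lower_with_underscores
def open_account(opening_balance):
    return BankAccount(opening_balance)

current_account = open_account(100)
print(current_account.deposit(50))
```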
this_is_my_number
THIS_IS_MY_NUMBER
ThisIsMyNumber
this_is_number
anotherVarible
This1
this1home
1This
__sd__
_sd
Explanation: Options can be used to override the default regular expression associated with each type. The table below lists the types, their associated options, and their default regular expressions.
| Type | Default Expression |
|:-----------------:|:-----------------------------------------:|
| Argument | `[a-z_][a-z0-9_]*` |
| Attribute | `[a-z_][a-z0-9_]*` |
| Class | `[A-Z_][a-zA-Z0-9]+` |
| Constant | `(([A-Z_][A-Z0-9_]*)\|(__.*__))` |
| Function | `[a-z_][a-z0-9_]*` |
| Method | `[a-z_][a-z0-9_]*` |
| Module | `(([a-z_][a-z0-9_]*), ([A-Z][a-zA-Z0-9]+))` |
| Variable | `[a-z_][a-z0-9_]*` |
| Variable, inline1 | `[A-Za-z_][A-Za-z0-9_]*` |
Please find the invalid variable names in the list below
End of explanation
_ is used
* as a conventional name for 'Internationalization (i18n)' or 'Localization (l10n)' functions.
Explanation: Good Variable Name
Choose a meaningful name instead of a short one: roll_no is better than rn.
Keep variable names to a reasonable length: roll_no_of_a_student is too long.
Be consistent: roll_no or RollNo, not a mix of both.
Begin a variable name with an underscore (_) character only for special cases.
Exercises
Q 1. Find the valid and invalid variable names from the following:
balance
current-balance
current balance
current_balance
4account
_spam
42
SPAM
total_$um
account4
'hello'
Q 2. Multiple Choice Questions & Answers
Is Python case sensitive when dealing with identifiers?
a) yes
b) no
c) machine dependent
d) none of the mentioned
What is the maximum possible length of an identifier?
a) 31 characters
b) 63 characters
c) 79 characters
d) none of the mentioned
What does local variable names beginning with an underscore mean?
a) they are used to indicate a private variables of a class
b) they confuse the interpreter
c) they are used to indicate global variables
d) None of the mentioned
Which of the following is true for variable names in Python?
a) unlimited length
b) Only _ and $ special characters allowed in variable name
c) private members should have leading & trailing underscores
d) None of the above
End of explanation |
7,648 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises
1.
Implement softmax. Softmax is a vector-vector function such that
$$
x_i \mapsto \frac{\exp(x_i)}{\sum_{j=1}^n \exp(x_j)}
$$
Avoid using for loops, use vectorization. The solution below is a bad solution.
Step1: 2.
Write a function which has one parameter, a 2D array and it returns a vector of row-wise Euclidean norms of the input. Use numpy operations and vectorization, avoid for loops. The solution below is a bad solution.
Step2: 3.
Write a function which has one parameter, a positive integer $n$, and returns an $n\times n$ array of $\pm1$ values like a chessboard
Step3: 4.*
Write a function which numerically differentiates a $\mathbb{R}\mapsto\mathbb{R}$ function. Use the forward finite difference.
The input is a 1D array of function values, and optionally a 1D vector of abscissa values. If not provided then the abscissa values are unit steps.
The result is a 1D array with the length of one less than the input array.
Use numpy operations instead of for loop in contrast to the solution below.
Step4: 4. b)
Make a random (1D) grid and use that as abscissa values!
Step5: 5.*
Implement Horner's method for evaluating polynomials. The first input is a 1D array of numbers, the coefficients, from the constant coefficient to the highest order coefficient. The second input is the variable $x$ to substitute. The function should work for all types of variables
Step6: With a slight modification, you can implement matrix polynomials!
Step7: 6.*
Plot the $z\mapsto \exp(z)$ complex function on $[-2, 2]\times i [-2, 2]$. Use matplotlib.pyplot.imshow and the red and green color channgels for real and imaginary parts. | Python Code:
def softmax(X):
X = numpy.array(X)
Y = numpy.exp(X)
return Y/Y.sum()
print(softmax([-1,0,1]))
Explanation: Exercises
1.
Implement softmax. Softmax is a vector-vector function such that
$$
x_i \mapsto \frac{\exp(x_i)}{\sum_{j=1}^n \exp(x_j)}
$$
Avoid using for loops, use vectorization. The solution below is a bad solution.
End of explanation
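One common refinement beyond the exercise statement (the name `stable_softmax` is invented here): subtracting the maximum before exponentiation leaves the result mathematically unchanged but avoids overflow for large inputs:

```python
import numpy

def stable_softmax(X):
    X = numpy.asarray(X, dtype=float)
    # Shifting by the max leaves the ratio of exponentials unchanged.
    Y = numpy.exp(X - X.max())
    return Y / Y.sum()

print(stable_softmax([-1, 0, 1]))
print(stable_softmax([1000, 1001, 1002]))  # would overflow without the shift
```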
def rowwise_norm(X):
XX = numpy.array(X)**2
return numpy.sqrt(XX.sum(1))
X = numpy.arange(5)[:, None]*numpy.ones((5, 3));
print(X)
print(rowwise_norm(X))
print(rowwise_norm([[1], [-1], [1], [-1]]))
Explanation: 2.
Write a function which has one parameter, a 2D array and it returns a vector of row-wise Euclidean norms of the input. Use numpy operations and vectorization, avoid for loops. The solution below is a bad solution.
End of explanation
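NumPy also ships a direct routine for this: `numpy.linalg.norm` with `axis=1` gives the same row-wise norms (the wrapper name below is invented for illustration):

```python
import numpy

def rowwise_norm_linalg(X):
    # axis=1 collapses each row to its Euclidean norm
    return numpy.linalg.norm(numpy.asarray(X, dtype=float), axis=1)

X = numpy.arange(5)[:, None] * numpy.ones((5, 3))
print(rowwise_norm_linalg(X))
print(rowwise_norm_linalg([[1], [-1], [1], [-1]]))
```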
print((numpy.arange(5) % 2) * 2 - 1)
print("or")
print((-1)**numpy.arange(5))
print("rather")
print(-(-1)**numpy.arange(5))
def chessboard(n):
a = -(-1)**numpy.arange(n)
return a.reshape((-1, 1)) * a.reshape((1, -1))
    return a[:, None] * a[None, :] # the same as above, just shorter
print(chessboard(5))
Explanation: 3.
Write a function which has one parameter, a positive integer $n$, and returns an $n\times n$ array of $\pm1$ values like a chessboard: $M_{i,j} = (-1)^{i+j}$.
Use numpy operations, avoid for loops!
First just a vector of alternating minus ones and ones.
End of explanation
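The same board can also be built from index grids with `numpy.indices`, matching $M_{i,j}=(-1)^{i+j}$ literally (the function name is invented for illustration):

```python
import numpy

def chessboard_indices(n):
    # i and j hold the row and column index of every cell
    i, j = numpy.indices((n, n))
    return (-1) ** (i + j)

print(chessboard_indices(5))
```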
def derivate(f, x=None):
if x is None:
x = numpy.arange(len(f))
df = f[:-1] - f[1:] # vector of differences of y values
dx = x[:-1] - x[1:] # vector of differences of x (abscissa) values
return df/dx
print(derivate(numpy.arange(10)**2))
x=numpy.arange(0,1,0.1)
matplotlib.pyplot.plot(x, x**2)
matplotlib.pyplot.plot(x[:-1], derivate(x**2, x))
Explanation: 4.*
Write a function which numerically differentiates a $\mathbb{R}\mapsto\mathbb{R}$ function. Use the forward finite difference.
The input is a 1D array of function values, and optionally a 1D vector of abscissa values. If not provided then the abscissa values are unit steps.
The result is a 1D array with the length of one less than the input array.
Use numpy operations instead of for loop in contrast to the solution below.
End of explanation
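`numpy.diff` computes exactly these neighbouring differences, so the derivative can also be written more compactly (the name `derivate_diff` is invented for illustration):

```python
import numpy

def derivate_diff(f, x=None):
    f = numpy.asarray(f, dtype=float)
    if x is None:
        x = numpy.arange(len(f))
    # numpy.diff gives consecutive differences: the finite-difference
    # numerator and denominator in one call each.
    return numpy.diff(f) / numpy.diff(x)

print(derivate_diff(numpy.arange(10) ** 2))
```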
x = numpy.sort(numpy.random.rand(100))
x = numpy.concatenate([[0], x, [1]]) # the grid contains 0, a hundered random numbers, and 1 in an increasing order
f = x**2
matplotlib.pyplot.plot(x, f)
matplotlib.pyplot.plot(x[:-1], derivate(f, x))
Explanation: 4. b)
Make a random (1D) grid and use that as abscissa values!
End of explanation
def horner(C, x):
y = numpy.zeros_like(x)
for c in C[::-1]:
y = y*x + c
return y
C = [2, 0, 1] # 2 + 0*x + 1*x^2
print(horner(C, 3)) # 2 + 3^2
print(horner(C, [3, 3]))
print(horner(C, numpy.arange(9).reshape((3,3))))
Explanation: 5.*
Implement Horner's method for evaluating polynomials. The first input is a 1D array of numbers, the coefficients, from the constant coefficient to the highest order coefficient. The second input is the variable $x$ to substitute. The function should work for all types of variables: numbers, arrays; the output should be the same type array as the input, containing the elementwise polynomial values.
End of explanation
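As a cross-check, NumPy's own `numpy.polyval` evaluates polynomials too, but expects coefficients ordered from the highest power down — the reverse of the ordering used in this exercise:

```python
import numpy

C = [2, 0, 1]                        # constant-first: 2 + 0*x + 1*x**2
x = numpy.array([3.0, 4.0, 5.0])
values = numpy.polyval(C[::-1], x)   # polyval wants highest-order first
print(values)
```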
def matrix_horner(C, x):
assert(len(x.shape) == 2) # matrices
assert(x.shape[0] == x.shape[1]) # square matrices
eye = numpy.eye(x.shape[0], dtype=x.dtype)
y = numpy.zeros_like(x)
for c in C[::-1]:
y = y.dot(x) + c*eye
return y
C = [2, 0, 1] # 2 + 0*x + 1*x^2
M = numpy.arange(9).reshape((3,3))
print(M)
print()
print(M.dot(M))
print()
print(2*numpy.eye(3) + M.dot(M))
print(matrix_horner(C, M))
Explanation: With a slight modification, you can implement matrix polynomials!
End of explanation
z = -1j*numpy.linspace(-2,2,100)[:, None] + numpy.linspace(-2,2,100)[None, :] # complex grid
expz = numpy.exp(z) # function values
real_part = expz.real
imag_part = expz.imag
# for scaling to [0,1]
max_real = real_part.max()
min_real = real_part.min()
max_imag = imag_part.max()
min_imag = imag_part.min()
# RGB
colors = numpy.stack([
(real_part - min_real) / (max_real - min_real),
(imag_part - min_imag) / (max_imag - min_imag),
numpy.zeros_like(real_part)
], axis=2)
# plotting
matplotlib.pyplot.imshow(colors)
Explanation: 6.*
Plot the $z\mapsto \exp(z)$ complex function on $[-2, 2]\times i [-2, 2]$. Use matplotlib.pyplot.imshow and the red and green color channels for real and imaginary parts.
End of explanation |
7,649 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Trip S&S
Maps for https://mexico.werthmuller.org/besucherreisen/simon.
Step1: 1. Map
Step2: 2. Profile
Route drawn in Google Maps and converted in http | Python Code:
import numpy as np
from scipy.interpolate import interp1d
import travelmaps2 as tm
from matplotlib import pyplot as plt
tm.setup(dpi=200)
Explanation: Trip S&S
Maps for https://mexico.werthmuller.org/besucherreisen/simon.
End of explanation
fig_x = tm.plt.figure(figsize=(tm.cm2in([11, 6])))
# Locations
MDF = [19.433333, -99.133333] # Mexico City
OAX = [16.898056, -96.414167] # Oaxaca
PES = [15.861944, -97.067222] # Puerto Escondido
ACA = [16.863611, -99.8825] # Acapulco
PBL = [19., -97.883333] # Puebla
# Create basemap
m_x = tm.Basemap(width=3500000, height=2300000, resolution='c', projection='tmerc', lat_0=24, lon_0=-102)
# Plot image
###m_x.warpimage('./data/TravelMap/HYP_HR_SR_OB_DR/HYP_HR_SR_OB_DR.tif')
# Put a shade over non-Mexican countries
countries = ['USA', 'BLZ', 'GTM', 'HND', 'SLV', 'NIC', 'CUB']
tm.country(countries, m_x, fc='.8', ec='.3', lw=.5, alpha=.6)
# Fill states
fcs = 32*['none']
ecs = 32*['k']
lws = 32*[.2,]
tm.country('MEX', bmap=m_x, fc=fcs, ec=ecs, lw=lws, adm=1)
ecs = 32*['none']
#ecs[19] = 'r'
lws = 32*[1,]
tm.country('MEX', bmap=m_x, fc=fcs, ec=ecs, lw=lws, adm=1)
# Add arrows
tm.arrow(MDF, ACA, m_x, rad=.3)
tm.arrow(ACA, PES, m_x, rad=.3)
#tm.arrow(PES, OAX, m_x, rad=-.3)
tm.arrow(OAX, PBL, m_x, rad=.3)
#tm.arrow(PBL, MDF, m_x, rad=.3)
# Add visited cities
tm.city(OAX, 'Oaxaca', m_x, offs=[.6, 0])
tm.city(MDF, 'Mexiko-Stadt', m_x, offs=[-.6, .6], halign="right")
tm.city(PES, 'Puerto Escondido', m_x, offs=[-2, -1.5])
tm.city(ACA, 'Acapulco', m_x, offs=[-.8, 0], halign="right")
tm.city(PBL, 'Puebla', m_x, offs=[.6, .6])
# Save-path
#fpath = '../mexico.werthmuller.org/content/images/simon/'
#tm.plt.savefig(fpath+'MapSSTrip.png', bbox_inches='tight')
tm.plt.show()
Explanation: 1. Map
End of explanation
fig_p,ax = plt.subplots(figsize=(tm.cm2in([10.8, 5])))
# Switch off axis and ticks
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('none')
# Get data
pdat = np.loadtxt('./data/Mexico/SSTripData.txt', skiprows=1)
# Plot City names and kilometers
opt = {'horizontalalignment': 'center', 'verticalalignment': 'bottom', 'rotation': 'vertical'}
plt.annotate('Mexiko-Stadt', (0, 3000), **opt)
plt.annotate('Acapulco', (373, 600), **opt)
plt.annotate('Puerto Escondido', (773, 600), **opt)
plt.annotate('V. Sola de Vega', (890, 2200), **opt)
plt.annotate('Oaxaca', (1032, 2200), **opt)
plt.annotate('Puebla', (1368, 2600), **opt)
plt.annotate('Mexiko-Stadt', (1501, 3000), **opt)
# Ticks, hlines, axis
plt.xticks(np.arange(7)*250, ('0 km', '', '500 km', '', '1000 km', '', '1500 km'))
plt.yticks(np.arange(8)*500, ('0 m', '', '1000 m', '', '2000 m', '', '3000 m', ''))
plt.hlines([0, 1000, 2000, 3000], -100, 1600, colors='.8')
plt.hlines([500, 1500, 2500, 3500], -100, 1600, colors='.8', lw=.5)
plt.axis([-50, 1550, -300, 6000])
# Sum up differences to get distance, distance starts now at every waypoint
distance = np.cumsum(pdat[:,4])/1000 # 41443 data points
# Reduce data points to 1500
reduced = interp1d(distance, pdat[:,2], 'slinear')
ndist = np.linspace(0, 1500, 1500)
nelev = reduced(ndist)
# Plot data
plt.plot(ndist, nelev)
# Save-path
#fpath = '../mexico.werthmuller.org/content/images/simon/'
#plt.savefig(fpath+'Profile.png', bbox_inches='tight')
plt.show()
Explanation: 2. Profile
Route drawn in Google Maps and converted in http://www.gpsvisualizer.com.
End of explanation |
7,650 | Given the following text description, write Python code to implement the functionality described.
Description:
Count quadruples (i, j, k, l) in an array such that i < j < k < l and arr[i] = arr[k] and arr[j] = arr[l]
Function to count total number of required tuples ; Initialize unordered map ; Find the pairs (j, l) such that arr[j] = arr[l] and j < l ; Elements are equal ; Update the count ; Add the frequency of arr[l] to val ; Update the frequency of element arr[j] ; Return the answer ; Driver code ; Given array arr[] ; Function call
| Python Code:
def countTuples(arr, N):
    ans = 0
    freq = {}
    for j in range(N - 2):
        val = 0
        # Find the pairs (j, l) such that arr[j] == arr[l] and j < l
        for l in range(j + 1, N):
            if arr[j] == arr[l]:
                # Elements are equal: update the count
                ans += val
            # Add the frequency of arr[l] (occurrences at indices < j) to val
            if arr[l] in freq:
                val += freq[arr[l]]
        # Update the frequency of element arr[j]
        freq[arr[j]] = freq.get(arr[j], 0) + 1
    return ans

if __name__ == '__main__':
    arr = [1, 2, 1, 2, 2, 2]
    N = len(arr)
    print(countTuples(arr, N))
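A brute-force O(n^4) enumeration (an illustrative helper, not part of the original solution) simply restates the definition and is handy for validating the optimized routine on small inputs:

```python
from itertools import combinations

def count_tuples_bruteforce(arr):
    # Enumerate every i < j < k < l and test the two equality constraints directly.
    return sum(
        1
        for i, j, k, l in combinations(range(len(arr)), 4)
        if arr[i] == arr[k] and arr[j] == arr[l]
    )

print(count_tuples_bruteforce([1, 2, 1, 2, 2, 2]))
```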
|
7,651 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spectrum
Colour is defined as the characteristic of visual perception that can be described by attributes of hue, brightness (or lightness) and colourfulness (or saturation or chroma).
When necessary, to avoid confusion between other meanings of the word, the term "perceived colour" may be used.
Perceived colour depends on the spectral distribution of the colour stimulus, on the size, shape, structure and surround of the stimulus area, on the state of adaptation of the observer's visual system, and on the observer's experience of the prevailing and similar situations of observation. <a name="back_reference_1"></a><a href="#reference_1">[1]</a>
Light is the electromagnetic radiation that is considered from the point of view of its ability to excite the human visual system. <a name="back_reference_2"></a><a href="#reference_2">[2]</a>
The portion of the electromagnetic radiation frequencies perceived in the approximate wavelength range 360-780 nanometres (nm) is called the visible spectrum.
Step1: The spectrum is defined as the display or specification of the monochromatic components of the radiation considered. <a name="back_reference_3"></a><a href="#reference_3">[3]</a>
At the core of Colour is the colour.colorimetry sub-package, it defines the objects needed for spectral related computations and many others
Step2: Note
Step3: Note
Step4: The sample spectral power distribution can be easily plotted against the visible spectrum
Step5: With the sample spectral power distribution defined, we can retrieve its shape
Step6: The shape returned is an instance of colour.SpectralShape class
Step7: colour.SpectralShape is used throughout Colour to define spectral dimensions and is instantiated as follows
Step8: Colour defines three convenient objects to create constant spectral power distributions
Step9: By default the shape used by colour.constant_spd, colour.zeros_spd and colour.ones_spd is the one defined by colour.DEFAULT_SPECTRAL_SHAPE attribute using the CIE 1931 2° Standard Observer shape.
Step10: A custom shape can be passed to construct a constant spectral power distribution with tailored dimensions
Step11: Often interpolation of the spectral power distribution is needed, this is achieved with the colour.SpectralPowerDistribution.interpolate method. Depending on the wavelengths uniformity, the default interpolation method will differ. Following CIE 167
Step12: Since the sample spectral power distribution is uniform the interpolation will be using the colour.SpragueInterpolator interpolator.
Note
Step13: Extrapolation although dangerous can be used to help aligning two spectral power distributions together. CIE 015
Step14: The underlying interpolator can be swapped for any of the Colour interpolators.
Step15: The extrapolation behaviour can be changed for Linear method instead of the Constant default method or even use arbitrary constant left and right values
Step16: Aligning a spectral power distribution is a convenient way to first interpolate the current data within its original bounds then if needed extrapolates any missing values to match the requested shape
Step17: The colour.SpectralPowerDistribution class also supports various arithmetic operations like addition, subtraction, multiplication, division or exponentiation with numeric and array_like variables or other colour.SpectralPowerDistribution class instances
Step18: The spectral power distribution can be normalised with an arbitrary factor
Step19: Colour Matching Functions
In the late 1920's, Wright (1928) and Guild (1931) independently conducted a series of colour matching experiments to quantify the colour ability of an average human observer which laid the foundation for the specification of the CIE XYZ colourspace. The results obtained were summarized by the Wright & Guild 1931 2° RGB CMFs $\bar{r}(\lambda)$,$\bar{g}(\lambda)$,$\bar{b}(\lambda)$ colour matching functions
Step20: With an RGB model of human vision based on Wright & Guild 1931 2° RGB CMFs $\bar{r}(\lambda)$,$\bar{g}(\lambda)$,$\bar{b}(\lambda)$ colour matching functions and for pragmatic reasons the CIE members developed a new colour space that would relate to the CIE RGB colourspace but for which all tristimulus values would be positive for real colours
Step21: In the 1960's it appeared that cones were present in a larger region of eye than the one initially covered by the experiments that lead to the CIE 1931 2° Standard Observer specification.
As a result, colour computations done with the CIE 1931 2° Standard Observer do not always correlate with visual observation.
In 1964, the CIE defined an additional standard observer
Step22: Note
Step23: Retrieving the CIE XYZ tristimulus values of any wavelength from colour matching functions is done using the colour.wavelength_to_XYZ definition, if the value requested is not available, the colour matching functions will be interpolated following CIE 167 | Python Code:
%matplotlib inline
import colour
from colour.plotting import *
colour.filter_warnings(True, False)
colour_plotting_defaults()
# Plotting the visible spectrum.
visible_spectrum_plot()
Explanation: Spectrum
Colour is defined as the characteristic of visual perception that can be described by attributes of hue, brightness (or lightness) and colourfulness (or saturation or chroma).
When necessary, to avoid confusion between other meanings of the word, the term "perceived colour" may be used.
Perceived colour depends on the spectral distribution of the colour stimulus, on the size, shape, structure and surround of the stimulus area, on the state of adaptation of the observer's visual system, and on the observer's experience of the prevailing and similar situations of observation. <a name="back_reference_1"></a><a href="#reference_1">[1]</a>
Light is the electromagnetic radiation that is considered from the point of view of its ability to excite the human visual system. <a name="back_reference_2"></a><a href="#reference_2">[2]</a>
The portion of the electromagnetic radiation frequencies perceived in the approximate wavelength range 360-780 nanometres (nm) is called the visible spectrum.
End of explanation
from pprint import pprint
import colour.colorimetry as colorimetry
pprint(colorimetry.__all__)
Explanation: The spectrum is defined as the display or specification of the monochromatic components of the radiation considered. <a name="back_reference_3"></a><a href="#reference_3">[3]</a>
At the core of Colour is the colour.colorimetry sub-package, it defines the objects needed for spectral related computations and many others:
End of explanation
import colour.colorimetry.dataset as dataset
pprint(dataset.__all__)
Explanation: Note: colour.colorimetry sub-package public API is directly available from colour namespace.
Colour computations are based on a comprehensive dataset available in pretty much each sub-packages, for example colour.colorimetry.dataset defines the following data:
End of explanation
import colour
# Defining a sample spectral power distribution data.
sample_spd_data = {
380: 0.048,
385: 0.051,
390: 0.055,
395: 0.06,
400: 0.065,
405: 0.068,
410: 0.068,
415: 0.067,
420: 0.064,
425: 0.062,
430: 0.059,
435: 0.057,
440: 0.055,
445: 0.054,
450: 0.053,
455: 0.053,
460: 0.052,
465: 0.052,
470: 0.052,
475: 0.053,
480: 0.054,
485: 0.055,
490: 0.057,
495: 0.059,
500: 0.061,
505: 0.062,
510: 0.065,
515: 0.067,
520: 0.070,
525: 0.072,
530: 0.074,
535: 0.075,
540: 0.076,
545: 0.078,
550: 0.079,
555: 0.082,
560: 0.087,
565: 0.092,
570: 0.100,
575: 0.107,
580: 0.115,
585: 0.122,
590: 0.129,
595: 0.134,
600: 0.138,
605: 0.142,
610: 0.146,
615: 0.150,
620: 0.154,
625: 0.158,
630: 0.163,
635: 0.167,
640: 0.173,
645: 0.180,
650: 0.188,
655: 0.196,
660: 0.204,
665: 0.213,
670: 0.222,
675: 0.231,
680: 0.242,
685: 0.251,
690: 0.261,
695: 0.271,
700: 0.282,
705: 0.294,
710: 0.305,
715: 0.318,
720: 0.334,
725: 0.354,
730: 0.372,
735: 0.392,
740: 0.409,
745: 0.420,
750: 0.436,
755: 0.450,
760: 0.462,
765: 0.465,
770: 0.448,
775: 0.432,
780: 0.421}
spd = colour.SpectralPowerDistribution(sample_spd_data, name='Sample')
print(spd)
Explanation: Note: colour.colorimetry.dataset sub-package public API is directly available from colour namespace.
Spectral Power Distribution
Whether it be a sample spectral power distribution, colour matching functions or illuminants, spectral data is manipulated using an object built with the colour.SpectralPowerDistribution class or based on it:
End of explanation
# Plotting the sample spectral power distribution.
single_spd_plot(spd)
Explanation: The sample spectral power distribution can be easily plotted against the visible spectrum:
End of explanation
# Displaying the sample spectral power distribution shape.
print(spd.shape)
Explanation: With the sample spectral power distribution defined, we can retrieve its shape:
End of explanation
repr(spd.shape)
Explanation: The shape returned is an instance of colour.SpectralShape class:
End of explanation
# Using *colour.SpectralShape* with iteration.
shape = colour.SpectralShape(start=0, end=10, interval=1)
for wavelength in shape:
print(wavelength)
# *colour.SpectralShape.range* method is providing the complete range of values.
shape = colour.SpectralShape(0, 10, 0.5)
shape.range()
Explanation: colour.SpectralShape is used throughout Colour to define spectral dimensions and is instantiated as follows:
End of explanation
# Defining a constant spectral power distribution.
constant_spd = colour.constant_spd(100)
print('"Constant Spectral Power Distribution"')
print(constant_spd.shape)
print(constant_spd[400])
# Defining a zeros filled spectral power distribution.
print('\n"Zeros Filled Spectral Power Distribution"')
zeros_spd = colour.zeros_spd()
print(zeros_spd.shape)
print(zeros_spd[400])
# Defining a ones filled spectral power distribution.
print('\n"Ones Filled Spectral Power Distribution"')
ones_spd = colour.ones_spd()
print(ones_spd.shape)
print(ones_spd[400])
Explanation: Colour defines three convenient objects to create constant spectral power distributions:
colour.constant_spd
colour.zeros_spd
colour.ones_spd
End of explanation
print(repr(colour.DEFAULT_SPECTRAL_SHAPE))
Explanation: By default the shape used by colour.constant_spd, colour.zeros_spd and colour.ones_spd is the one defined by colour.DEFAULT_SPECTRAL_SHAPE attribute using the CIE 1931 2° Standard Observer shape.
End of explanation
colour.ones_spd(colour.SpectralShape(400, 700, 5))[450]
Explanation: A custom shape can be passed to construct a constant spectral power distribution with tailored dimensions:
End of explanation
# Checking the sample spectral power distribution uniformity.
print(spd.is_uniform())
Explanation: Often interpolation of the spectral power distribution is needed, this is achieved with the colour.SpectralPowerDistribution.interpolate method. Depending on the wavelengths uniformity, the default interpolation method will differ. Following CIE 167:2005 recommendation: The method developed by Sprague (1880) should be used for interpolating functions having a uniformly spaced independent variable and a Cubic Spline method for non-uniformly spaced independent variable. <a name="back_reference_4"></a><a href="#reference_4">[4]</a>
We can check the uniformity of the sample spectral power distribution:
End of explanation
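The uniformity test reduces to checking that consecutive wavelength intervals are all equal — a plain-NumPy sketch of the idea (the real check is `colour.SpectralPowerDistribution.is_uniform`; the helper name below is invented):

```python
import numpy

def is_uniform(wavelengths):
    # Uniform means every consecutive interval equals the first one.
    intervals = numpy.diff(numpy.sort(numpy.asarray(wavelengths, dtype=float)))
    return bool(numpy.all(numpy.isclose(intervals, intervals[0])))

print(is_uniform([380, 385, 390, 395]))  # constant 5 nm steps
print(is_uniform([380, 385, 395, 400]))  # a 10 nm gap breaks uniformity
```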
# Copying the sample spectral power distribution.
spd_copy = spd.copy()
# Interpolating the copied sample spectral power distribution.
spd_copy.interpolate(colour.SpectralShape(400, 770, 1))
spd_copy[401]
# Comparing the interpolated spectral power distribution with the original one.
multi_spd_plot([spd, spd_copy], bounding_box=[730,780, 0.1, 0.5])
Explanation: Since the sample spectral power distribution is uniform the interpolation will be using the colour.SpragueInterpolator interpolator.
Note: Interpolation happens in place and may alter your original data; use the colour.SpectralPowerDistribution.copy method to produce a copy of your spectral power distribution before interpolation.
End of explanation
# Extrapolating the copied sample spectral power distribution.
spd_copy.extrapolate(colour.SpectralShape(340, 830))
spd_copy[340], spd_copy[830]
Explanation: Extrapolation although dangerous can be used to help aligning two spectral power distributions together. CIE 015:2004 Colorimetry, 3rd Edition recommends that unmeasured values may be set equal to the nearest measured value of the appropriate quantity in truncation: <a name="back_reference_5"></a><a href="#reference_5">[5]</a>
End of explanation
pprint([
export for export in colour.algebra.interpolation.__all__
if 'Interpolator' in export
])
# Changing interpolator while trimming the copied spectral power distribution.
spd_copy.interpolate(
colour.SpectralShape(400, 700, 10), interpolator=colour.LinearInterpolator)
Explanation: The underlying interpolator can be swapped for any of the Colour interpolators.
End of explanation
# Extrapolating the copied sample spectral power distribution with *Linear* method.
spd_copy.extrapolate(
colour.SpectralShape(340, 830),
extrapolator_args={'method': 'Linear',
'right': 0})
spd_copy[340], spd_copy[830]
Explanation: The extrapolation behaviour can be changed for Linear method instead of the Constant default method or even use arbitrary constant left and right values:
End of explanation
# Aligning the cloned sample spectral power distribution.
# We first trim the spectral power distribution as above.
spd_copy.interpolate(colour.SpectralShape(400, 700))
spd_copy.align(colour.SpectralShape(340, 830, 5))
spd_copy[340], spd_copy[830]
Explanation: Aligning a spectral power distribution is a convenient way to first interpolate the current data within its original bounds then if needed extrapolates any missing values to match the requested shape:
End of explanation
spd = colour.SpectralPowerDistribution({
410: 0.25,
420: 0.50,
430: 0.75,
440: 1.0,
450: 0.75,
460: 0.50,
480: 0.25
})
print((spd.copy() + 1).values)
print((spd.copy() * 2).values)
print((spd * [0.35, 1.55, 0.75, 2.55, 0.95, 0.65, 0.15]).values)
print((spd * colour.constant_spd(2, spd.shape) * colour.constant_spd(3, spd.shape)).values)
Explanation: The colour.SpectralPowerDistribution class also supports various arithmetic operations like addition, subtraction, multiplication, division or exponentiation with numeric and array_like variables or other colour.SpectralPowerDistribution class instances:
End of explanation
print(spd.normalise().values)
print(spd.normalise(100).values)
Explanation: The spectral power distribution can be normalised with an arbitrary factor:
End of explanation
# Plotting *Wright & Guild 1931 2 Degree RGB CMFs* colour matching functions.
single_cmfs_plot('Wright & Guild 1931 2 Degree RGB CMFs')
Explanation: Colour Matching Functions
In the late 1920's, Wright (1928) and Guild (1931) independently conducted a series of colour matching experiments to quantify the colour ability of an average human observer which laid the foundation for the specification of the CIE XYZ colourspace. The results obtained were summarized by the Wright & Guild 1931 2° RGB CMFs $\bar{r}(\lambda)$,$\bar{g}(\lambda)$,$\bar{b}(\lambda)$ colour matching functions: they represent the amounts of three monochromatic primary colours $\textbf{R}$,$\textbf{G}$,$\textbf{B}$ needed to match the test colour at a single wavelength of light.
See Also: The Colour Matching Functions notebook for in-depth information about the colour matching functions.
End of explanation
# Plotting *CIE XYZ 1931 2 Degree Standard Observer* colour matching functions.
single_cmfs_plot('CIE 1931 2 Degree Standard Observer')
Explanation: With an RGB model of human vision based on Wright & Guild 1931 2° RGB CMFs $\bar{r}(\lambda)$,$\bar{g}(\lambda)$,$\bar{b}(\lambda)$ colour matching functions and for pragmatic reasons the CIE members developed a new colour space that would relate to the CIE RGB colourspace but for which all tristimulus values would be positive for real colours: CIE XYZ described with $\bar{x}(\lambda)$,$\bar{y}(\lambda)$,$\bar{z}(\lambda)$ colour matching functions.
End of explanation
spd = colour.SpectralPowerDistribution(sample_spd_data, name='Sample')
cmfs = colour.STANDARD_OBSERVERS_CMFS['CIE 1931 2 Degree Standard Observer']
illuminant = colour.ILLUMINANTS_RELATIVE_SPDS['A']
# Calculating the sample spectral power distribution *CIE XYZ* tristimulus values.
colour.spectral_to_XYZ(spd, cmfs, illuminant)
Explanation: In the 1960's it appeared that cones were present in a larger region of the eye than the one initially covered by the experiments that led to the CIE 1931 2° Standard Observer specification.
As a result, colour computations done with the CIE 1931 2° Standard Observer do not always correlate with visual observation.
In 1964, the CIE defined an additional standard observer: the CIE 1964 10° Standard Observer derived from the work of Stiles and Burch (1959), and Speranskaya (1959). The CIE 1964 10° Standard Observer is believed to be a better representation of the human vision spectral response and recommended when dealing with a field of view of more than 4°.
For example and as per CIE recommendation, the CIE 1964 10° Standard Observer is commonly used with spectrophotometers for colour measurements whereas colorimeters generally use the CIE 1931 2° Standard Observer for quality control and other colour evaluation applications.
CIE XYZ Tristimulus Values
The CIE XYZ tristimulus values specify a colour stimulus in terms of the visual system. Their values for colour of a surface with spectral reflectance $\beta(\lambda)$ under an illuminant of relative spectral power $S(\lambda)$ are calculated using the following equations: <a name="back_reference_6"></a><a href="#reference_6">[6]</a>
$$
\begin{equation}
X=k\int_{\lambda}\beta(\lambda)S(\lambda)\bar{x}(\lambda)d\lambda\\
Y=k\int_{\lambda}\beta(\lambda)S(\lambda)\bar{y}(\lambda)d\lambda\\
Z=k\int_{\lambda}\beta(\lambda)S(\lambda)\bar{z}(\lambda)d\lambda
\end{equation}
$$
where
$$
\begin{equation}
k=\cfrac{100}{\int_{\lambda}S(\lambda)\bar{y}(\lambda)d\lambda}
\end{equation}
$$
However in virtually all practical computations of CIE XYZ tristimulus values, the integrals are replaced by summations:
$$
\begin{equation}
X=k\sum\limits_{\lambda=\lambda_a}^{\lambda_b}\beta(\lambda)S(\lambda)\bar{x}(\lambda)\Delta\lambda\\
Y=k\sum\limits_{\lambda=\lambda_a}^{\lambda_b}\beta(\lambda)S(\lambda)\bar{y}(\lambda)\Delta\lambda\\
Z=k\sum\limits_{\lambda=\lambda_a}^{\lambda_b}\beta(\lambda)S(\lambda)\bar{z}(\lambda)\Delta\lambda
\end{equation}
$$
where
$$
\begin{equation}
k=\cfrac{100}{\sum\limits_{\lambda=\lambda_a}^{\lambda_b}S(\lambda)\bar{y}(\lambda)\Delta\lambda}
\end{equation}
$$
Calculating the CIE XYZ tristimulus values of a colour stimulus is done using the colour.spectral_to_XYZ definition, which follows the computation method of the ASTM E2022–11 and ASTM E308–15 practises:
End of explanation
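The discrete summations above can be checked with a few lines of plain NumPy. The sample values below are made up purely for illustration (they are not real illuminant or CMF data):

```python
import numpy as np

# Hypothetical 4-sample data standing in for tabulated values of the
# illuminant S, the reflectance beta and the x/y/z-bar colour matching functions.
S    = np.array([90.0, 100.0, 110.0, 105.0])
beta = np.array([0.5, 0.6, 0.7, 0.6])
xbar = np.array([0.2, 0.8, 1.0, 0.4])
ybar = np.array([0.1, 0.6, 0.9, 0.5])
zbar = np.array([1.0, 0.5, 0.1, 0.0])

# k normalises the scale so that a perfect reflector (beta = 1) has Y = 100.
k = 100.0 / np.sum(S * ybar)
X = k * np.sum(beta * S * xbar)
Y = k * np.sum(beta * S * ybar)
Z = k * np.sum(beta * S * zbar)
```

Since beta < 1 at every sample here, Y necessarily comes out below 100, which is the point of the normalisation constant k.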
import pylab
# Plotting the *CIE 1931 Chromaticity Diagram*.
# The argument *standalone=False* is passed so that the plot doesn't get displayed
# and can be used as a basis for other plots.
chromaticity_diagram_plot_CIE1931(standalone=False)
# Calculating the *xy* chromaticity coordinates.
# The output domain of *colour.spectral_to_XYZ* is [0, 100] and
# the input domain of *colour.XYZ_to_sRGB* is [0, 1].
# We need to take this into account and rescale the input *CIE XYZ* colourspace matrix.
x, y = colour.XYZ_to_xy(colour.spectral_to_XYZ(spd, cmfs, illuminant) / 100)
# Plotting the *xy* chromaticity coordinates.
pylab.plot(x, y, 'o-', color='white')
# Annotating the plot.
pylab.annotate(spd.name,
xy=(x, y),
xytext=(-50, 30),
textcoords='offset points',
arrowprops=dict(arrowstyle='->', connectionstyle='arc3, rad=-0.2'))
# Displaying the plot.
render(standalone=True)
Explanation: Note: Output CIE XYZ colourspace matrix is in domain [0, 100].
CIE XYZ tristimulus values can be plotted into the CIE 1931 Chromaticity Diagram:
End of explanation
colour.wavelength_to_XYZ(546.1, colour.STANDARD_OBSERVERS_CMFS['CIE 1931 2 Degree Standard Observer'])
Explanation: Retrieving the CIE XYZ tristimulus values of any wavelength from colour matching functions is done using the colour.wavelength_to_XYZ definition, if the value requested is not available, the colour matching functions will be interpolated following CIE 167:2005 recommendation:
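The interpolation idea can be illustrated with plain NumPy. The wavelength grid and $\bar{y}(\lambda)$ samples below are invented for illustration, and np.interp is only a linear stand-in for the interpolation methods CIE 167:2005 actually recommends:

```python
import numpy as np

# Illustrative 5 nm-spaced samples around 546.1 nm (not real CIE data):
wavelengths = np.array([540.0, 545.0, 550.0])
ybar        = np.array([0.954, 0.980, 0.995])

# 546.1 nm falls between two tabulated points, so its value is interpolated:
y_546_1 = np.interp(546.1, wavelengths, ybar)
```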
End of explanation |
7,652 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This tutorial shows various methods of reusing your Python code. The follow-up tutorial on packaging code will explore ways to make code reusable by others.
Step1: Step 0
Step2: To make this function useful, we should return some objects to work with. In this case, we want fig and ax
Step3: Step 1
Step4: Our functions now requires two arguments to be passed, rows and cols. Use arguments if you expect to change the value often
Step5: Our figure also has a kwarg, title. Use kwargs for things you do not expect to change often. The default can be a typical value or an empty string or list. For example, I rarely use plot title
Step6: Step 2
Step7: If you have scripts nested in directories, you can go into them with a . just as you would for sub-modules like matplotlib.pyplot
Step8: Step 3
Step9: Notice that this includes the location of packages in your environments (an anaconda environment called sci in this case) and the folder of the script itself. This is how we can import from scripts next to ours or in subdirectories of the current directory, Python already knows about them.
sys.path is just a list, so we can append directories to it.
I have a set of scripts with handy functions in a directory called python_scripts. Let's add it to the path.
Step10: Note | Python Code:
import numpy as np
import matplotlib.pyplot as plt
Explanation: This tutorial shows various methods of reusing your Python code. The follow-up tutorial on packaging code will explore ways to make code reusable by others.
End of explanation
def go_figure():
figure, axes = plt.subplots(2, 2, figsize=(10,6))
axes = axes.ravel()
go_figure();
Explanation: Step 0: functions
Functions are the simplest way to cut down duplicated code. The simple example we will use sets up a specifically scaled multi-axes figure
End of explanation
def go_figure():
figure, axes = plt.subplots(2, 2, figsize=(10,6), sharex='col', sharey='row')
axes = axes.ravel()
return figure, axes
# Some data to plot
t = np.arange(0,50,0.1)
y0 = np.sin(t)
y1 = np.cos(t/2)
fig, ax = go_figure();
ax[0].plot(t,y0)
ax[1].plot(t,y1)
ax[2].plot(t ,y0 + y1)
ax[3].plot(t ,y0 * y1);
Explanation: To make this function useful, we should return some objects to work with. In this case, we want fig and ax
End of explanation
def go_figure(rows, cols, title=''):
    figure, axes = plt.subplots(rows, cols, figsize=(10,6), sharex='col', sharey='row')
    if isinstance(axes, np.ndarray):
        axes = axes.ravel()
    figure.suptitle(title, fontsize=18)
    return figure, axes
Explanation: Step 1: arguments
We can extend our functions with arguments (mandatory) and key word arguments (optional). You will often see the short-hand args and kwargs used to refer to these.
The pattern is:
python
def function(arg0, arg1, kwarg0=default0, kwarg1=default1):
do stuff
return value0, value1
End of explanation
fig, ax = go_figure(2,2)
Explanation: Our functions now requires two arguments to be passed, rows and cols. Use arguments if you expect to change the value often
End of explanation
fig, ax = go_figure(2, 3);
Explanation: Our figure also has a kwarg, title. Use kwargs for things you do not expect to change often. The default can be a typical value or an empty string or list. For example, I rarely use plot title
End of explanation
from waves import sine_combo
t, y0 = sine_combo()
__, y1 = sine_combo(f=(1,1.2))
fig, ax = go_figure(2,1)
ax[0].plot(t,y0)
ax[1].plot(t,y1);
Explanation: Step 2: share between scripts in directory
I have created another function sine_combo in a script called waves.py in the same directory as this one. Note that this must be a plain Python file, not a jupyter notebook.
We can import this just as we would from a downloaded Python package
End of explanation
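The contents of waves.py aren't shown in this tutorial; a minimal sketch consistent with how sine_combo is called above might look like the following (only the signature is inferred from the calls, the body itself is an assumption):

```python
# waves.py (hypothetical contents)
import numpy as np

def sine_combo(f=(1, 1.2), t_max=50, dt=0.1):
    """Return a time vector and the sum of sine waves at the frequencies in f."""
    t = np.arange(0, t_max, dt)
    y = sum(np.sin(fi * t) for fi in f)
    return t, y
```

With a definition like this in waves.py, from waves import sine_combo works from any script in the same directory.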
from adjacent_dir.more_waves import cosine_combo
t, y0 = cosine_combo()
t, y1 = cosine_combo(f=(1,1.1))
fig, ax = go_figure(2,1)
ax[0].plot(t,y0)
ax[1].plot(t,y1);
Explanation: If you have scripts nested in directories, you can go into them with a . just as you would for sub-modules like matplotlib.pyplot
End of explanation
import sys
sys.path
Explanation: Step 3: scripts in distant directories
If your script is located somewhere distant on your computer, Python won't know where it is.
By default, python only knows of locations in sys.path
End of explanation
sys.path.append('/home/callum/Documents/coding/python_scripts')
sys.path
from glider_utils import labels
labels
Explanation: Notice that this includes the location of packages in your environments (an anaconda environment called sci in this case) and the folder of the script itself. This is how we can import from scripts next to ours or in subdirectories of the current directory, Python already knows about them.
sys.path is just a list, so we can append directories to it.
I have a set of scripts with handy functions in a directory called python_scripts. Let's add it to the path.
End of explanation
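To experiment with this safely, the hard-coded path can be replaced by a throwaway directory; the directory and module below are invented purely for the demonstration:

```python
import sys
import tempfile
from pathlib import Path

# Create a stand-in for a "distant" scripts directory containing one module.
demo_dir = Path(tempfile.mkdtemp())
(demo_dir / "handy.py").write_text("GREETING = 'hello from handy'\n")

# Once the directory is on sys.path, the module imports like any other.
sys.path.append(str(demo_dir))
import handy

print(handy.GREETING)  # -> hello from handy
```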
HTML(html)
Explanation: Note: this cross script sharing is not limited to functions, although these are the most common things we share. You can import any Python object from one project to another. You can share a lot of useful data in dictionaries, e.g. TeX formatted strings for consistent axis labels. The main value comes with reuse, if you decide to change the way you represent the $^{\circ}$ sign in your axes labels, you'll only need to change it in one place
A word of caution. It may be tempting to add a load of directories to your path, to ensure your favourite scripts and functions are always within reach. However, unless you are careful with your naming scheme, you may overwrite the namespace of Python. That is, if you have two locations where the function plot_data is included in the script useful_functions and you have added both of these locations to your path, performing from useful_functions import plot_data could give unexpected results.
Only add the paths you need to sys.path and try to ensure unique naming strategies for scripts and functions.
Step 4: Create a package
A package is the highest unit of organisation in Python. Some popular packages include numpy, pandas and xarray. Packages are not the sole domain of venerable Python mystics however, you too can create and use modules!
Here it is in three easy steps:
1. Put functions in a .py file and put the file in a directory
2. Put that directory on the internet
3. Profit
Making these is so easy, we've already done it! A Python script in a directory is a package for all intents and purposes.
To show just how easy it is to make and share packages in Python, we'll be doing it live next week.
Footnotes
Use of __main__
In Python scripts, you will often see a small if statment with __main__ like so:
```python
if __name__ == '__main__':
    function_name()
```
This allows for dual-use code. The script's functions and variables can be imported from other scripts as we have practiced today. The script can also be run from the command line, in which case the contents of this if statement are executed. The use of __main__ allows the script to distinguish when it is being used in an import statement or called directly and execute accordingly.
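The behaviour is easy to verify with a throwaway script (the file and function names here are invented for the demonstration):

```python
import subprocess
import sys
import tempfile
from pathlib import Path

# A script that uses the guard: it prints only when run directly.
script = Path(tempfile.mkdtemp()) / "dual_use.py"
script.write_text(
    "def greet():\n"
    "    print('hello')\n"
    "\n"
    "if __name__ == '__main__':\n"
    "    greet()\n"
)

# Run directly: __name__ is '__main__', so the guard fires.
result = subprocess.run([sys.executable, str(script)],
                        capture_output=True, text=True)
print(result.stdout.strip())  # -> hello

# Imported: __name__ is 'dual_use', so nothing is printed on import.
sys.path.append(str(script.parent))
import dual_use
```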
What about __init__.py?
Reading about modules and imports in Python, you will often come across use of an empty Python file called __init__.py. This used to be a requirement for Python to treat a directory as a module name, to enable imports from scripts and directories within that directory. This is no longer necessary as of Python 3.3
Resources
https://docs.python.org/3/tutorial/modules.html
https://stackoverflow.com/questions/37139786/is-init-py-not-required-for-packages-in-python-3-3
End of explanation |
7,653 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Correlation Functions
Contents
Two-Time Correlation Functions
Steady State Correlation Functions
Emission Spectrum
Non-Steady State Correlation Function
Step1: <a id='twotime'></a>
Two-Time Correlation Functions
With the QuTiP time-evolution functions (for example mesolve and mcsolve), a state vector or density matrix can be evolved from an initial state at
Step2: <a id='emission'></a>
Emission Spectrum
Given a correlation function $\left<A(\tau)B(0)\right>$ we can define the corresponding power spectrum as
$$
S(\omega) = \int_{-\infty}^{\infty} \left<A(\tau)B(0)\right> e^{-i\omega\tau} d\tau.
$$
In QuTiP, we can calculate $S(\omega)$ using either spectrum, which first calculates the correlation function using the essolve solver and then performs the Fourier transform semi-analytically, or we can use the function spectrum_correlation_fft to numerically calculate the Fourier transform of a given correlation data using FFT.
The following example demonstrates how these two functions can be used to obtain the emission power spectrum.
Step3: <a id='nonsteady'></a>
Non-Steady State Correlation Function
More generally, we can also calculate correlation functions of the kind $\left<A(t_1+t_2)B(t_1)\right>$, i.e., the correlation function of a system that is not in its steady state. In QuTiP, we can evaluate such correlation functions using the function correlation_2op_2t. The default behavior of this function is to return a matrix with the correlations as a function of the two time coordinates ($t_1$ and $t_2$).
Step4: However, in some cases we might be interested in correlation functions of the form $\left<A(t_1+t_2)B(t_1)\right>$, but only as a function of the time coordinate $t_2$. In this case we can also use the correlation_2op_2t function, if we pass the density matrix at time $t_1$ as second argument, and None as third argument. The correlation_2op_2t function then returns a vector with the correlation values corresponding to the times in taulist (the fourth argument).
Ex
Step5: For convenience, the steps for calculating the first-order coherence function have been collected in the function coherence_function_g1.
Example
Step6: For convenience, the steps for calculating the second-order coherence function have been collected in the function coherence_function_g2. | Python Code:
%matplotlib inline
import numpy as np
from pylab import *
from qutip import *
Explanation: Correlation Functions
Contents
Two-Time Correlation Functions
Steady State Correlation Functions
Emission Spectrum
Non-Steady State Correlation Function
End of explanation
times = np.linspace(0,10.0,200)
a = destroy(10)
x = a.dag() + a
H = a.dag() * a
corr1 = correlation_2op_1t(H, None, times, [np.sqrt(0.5) * a], x, x)
corr2 = correlation_2op_1t(H, None, times, [np.sqrt(1.0) * a], x, x)
corr3 = correlation_2op_1t(H, None, times, [np.sqrt(2.0) * a], x, x)
plot(times, np.real(corr1), times, np.real(corr2), times, np.real(corr3))
legend(['0.5','1.0','2.0'])
xlabel(r'Time $t$')
ylabel(r'Correlation $\left<x(t)x(0)\right>$')
show()
Explanation: <a id='twotime'></a>
Two-Time Correlation Functions
With the QuTiP time-evolution functions (for example mesolve and mcsolve), a state vector or density matrix can be evolved from an initial state at $t_0$ to an arbitrary time $t$, $\rho(t)=V(t, t_0)\left\{\rho(t_0)\right\}$, where $V(t, t_0)$ is the propagator defined by the equation of motion. The resulting density matrix can then be used to evaluate the expectation values of arbitrary combinations of same-time operators.
To calculate two-time correlation functions on the form $\left<A(t+\tau)B(t)\right>$, we can use the quantum regression theorem to write
$$
\left<A(t+\tau)B(t)\right> = {\rm Tr}\left[A V(t+\tau, t)\left{B\rho(t)\right}\right]
= {\rm Tr}\left[A V(t+\tau, t)\left{BV(t, 0)\left{\rho(0)\right}\right}\right]
$$
We therefore first calculate $\rho(t)=V(t, 0)\left{\rho(0)\right}$ using one of the QuTiP evolution solvers with $\rho(0)$ as initial state, and then again use the same solver to calculate $V(t+\tau, t)\left{B\rho(t)\right}$ using $B\rho(t)$ as the initial state. Note that if the intial state is the steady state, then $\rho(t)=V(t, 0)\left{\rho_{\rm ss}\right}=\rho_{\rm ss}$ and
$$
\left<A(t+\tau)B(t)\right> = {\rm Tr}\left[A V(t+\tau, t)\left{B\rho_{\rm ss}\right}\right]
= {\rm Tr}\left[A V(\tau, 0)\left{B\rho_{\rm ss}\right}\right] = \left<A(\tau)B(0)\right>,
$$
which is independent of $t$, so that we only have one time coordinate $\tau$.
QuTiP provides a family of functions that assists in the process of calculating two-time correlation functions. The available functions and their usage is show in the table below. Each of these functions can use one of the following evolution solvers: Master-equation, Exponential series and the Monte-Carlo. The choice of solver is defined by the optional argument solver.
<table>
<tr>
<th>QuTiP Function</th>
<th>Correlation Function Type</th>
</tr>
<tr>
<td>`correlation` or `correlation_2op_2t`</td>
<td>$\left<A(t+\tau)B(t)\right>$ or $\left<A(t)B(t+\tau)\right>$. </td>
</tr>
<tr>
<td>`correlation_ss` or `correlation_2op_1t`</td>
<td>$\left<A(\tau)B(0)\right>$ or $\left<A(0)B(\tau)\right>$.</td>
</tr>
<tr>
<td>`correlation_3op_1t`</td>
<td>$\left<A(0)B(\tau)C(0)\right>$.</td>
</tr>
<tr>
<td>`correlation_3op_2t`</td>
<td>$\left<A(t)B(t+\tau)C(t)\right>$.</td>
</tr>
<tr>
<td>`correlation_4op_1t` <font color='red'>(Deprecated)</font></td>
<td>$\left<A(0)B(\tau)C(\tau)D(0)\right>$</td>
</tr>
<tr>
<td>`correlation_4op_2t` <font color='red'>(Deprecated)</font></td>
<td style='min-width:200px'>$\left<A(t)B(t+\tau)C(t+\tau)D(t)\right>$ </td>
</tr>
</table>
The most common use-case is to calculate correlation functions of the kind $\left<A(\tau)B(0)\right>$, in which case we use the correlation function solvers that start from the steady state, e.g., the correlation_2op_1t function. These correlation function solvers return a vector or matrix (in general complex) with the correlations as a function of the delay times.
<a id='steady'></a>
Steady State Correlation Function
The following code demonstrates how to calculate the $\left<x(t)x(0)\right>$ correlation for a leaky cavity with three different relaxation rates.
End of explanation
N = 4 # number of cavity fock states
wc = wa = 1.0 * 2 * np.pi # cavity and atom frequency
g = 0.1 * 2 * np.pi # coupling strength
kappa = 0.75 # cavity dissipation rate
gamma = 0.25 # atom dissipation rate
# Jaynes-Cummings Hamiltonian
a = tensor(destroy(N), qeye(2))
sm = tensor(qeye(N), destroy(2))
H = wc * a.dag() * a + wa * sm.dag() * sm + g * (a.dag() * sm + a * sm.dag())
# collapse operators
n_th = 0.25
c_ops = [np.sqrt(kappa * (1 + n_th)) * a,
np.sqrt(kappa * n_th) * a.dag(), np.sqrt(gamma) * sm]
# calculate the correlation function using the mesolve solver, and then fft to
# obtain the spectrum. Here we need to make sure to evaluate the correlation
# function for a sufficient long time and sufficiently high sampling rate so
# that the discrete Fourier transform (FFT) captures all the features in the
# resulting spectrum.
tlist = np.linspace(0, 100, 5000)
corr = correlation_2op_1t(H, None, tlist, c_ops, a.dag(), a)
wlist1, spec1 = spectrum_correlation_fft(tlist, corr)
# calculate the power spectrum using spectrum, which internally uses essolve
# to solve for the dynamics (by default)
wlist2 = np.linspace(0.25, 1.75, 200) * 2 * np.pi
spec2 = spectrum(H, wlist2, c_ops, a.dag(), a)
# plot the spectra
fig, ax = subplots(1, 1)
ax.plot(wlist1 / (2 * np.pi), spec1, 'b', lw=2, label='eseries method')
ax.plot(wlist2 / (2 * np.pi), spec2, 'r--', lw=2, label='me+fft method')
ax.legend()
ax.set_xlabel('Frequency')
ax.set_ylabel('Power spectrum')
ax.set_title('Vacuum Rabi splitting')
ax.set_xlim(wlist2[0]/(2*np.pi), wlist2[-1]/(2*np.pi))
show()
Explanation: <a id='emission'></a>
Emission Spectrum
Given a correlation function $\left<A(\tau)B(0)\right>$ we can define the corresponding power spectrum as
$$
S(\omega) = \int_{-\infty}^{\infty} \left<A(\tau)B(0)\right> e^{-i\omega\tau} d\tau.
$$
In QuTiP, we can calculate $S(\omega)$ using either spectrum, which first calculates the correlation function using the essolve solver and then performs the Fourier transform semi-analytically, or we can use the function spectrum_correlation_fft to numerically calculate the Fourier transform of a given correlation data using FFT.
The following example demonstrates how these two functions can be used to obtain the emission power spectrum.
End of explanation
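The Fourier-transform relation itself can be sanity-checked without QuTiP: a correlation function decaying as $e^{-\gamma|\tau|}$ should transform into the Lorentzian $S(\omega) = 2\gamma/(\gamma^2+\omega^2)$, whose half width at half maximum is $\gamma$. A plain NumPy sketch with toy data:

```python
import numpy as np

gamma = 0.5
tau = np.linspace(0, 200, 20000)
dt = tau[1] - tau[0]
corr = np.exp(-gamma * tau)

# One-sided FFT; the factor 2 accounts for the tau < 0 half of the integral.
omega = 2 * np.pi * np.fft.rfftfreq(len(tau), d=dt)
spec = 2 * dt * np.real(np.fft.rfft(corr))

peak = spec[0]                                    # analytically 2 / gamma
hwhm = omega[np.argmin(np.abs(spec - peak / 2))]  # analytically gamma
```

Here peak comes out close to 2/γ = 4 and hwhm close to γ = 0.5, up to discretisation error.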
times = np.linspace(0, 10.0, 200)
a = destroy(10)
x = a.dag() + a
H = a.dag() * a
alpha = 2.5
rho0 = coherent_dm(10, alpha)
corr = correlation_2op_2t(H, rho0, times, times, [np.sqrt(0.25) * a], x, x)
pcolor(np.real(corr))
colorbar()
xlabel(r'Time $t_2$')
ylabel(r'Time $t_1$')
title(r'Correlation $\left<x(t)x(0)\right>$')
show()
Explanation: <a id='nonsteady'></a>
Non-Steady State Correlation Function
More generally, we can also calculate correlation functions of the kind $\left<A(t_1+t_2)B(t_1)\right>$, i.e., the correlation function of a system that is not in its steady state. In QuTiP, we can evaluate such correlation functions using the function correlation_2op_2t. The default behavior of this function is to return a matrix with the correlations as a function of the two time coordinates ($t_1$ and $t_2$).
End of explanation
N = 15
taus = np.linspace(0,10.0,200)
a = destroy(N)
H = 2 * np.pi * a.dag() * a
# collapse operator
G1 = 0.75
n_th = 2.00 # bath temperature in terms of excitation number
c_ops = [np.sqrt(G1 * (1 + n_th)) * a, np.sqrt(G1 * n_th) * a.dag()]
# start with a coherent state
rho0 = coherent_dm(N, 2.0)
# first calculate the occupation number as a function of time
n = mesolve(H, rho0, taus, c_ops, [a.dag() * a]).expect[0]
# calculate the correlation function G1 and normalize with n to obtain g1
G1 = correlation_2op_2t(H, rho0, None, taus, c_ops, a.dag(), a)
g1 = G1 / np.sqrt(n[0] * n)
plot(taus, np.real(g1), 'b')
plot(taus, n, 'r')
title('Decay of a coherent state to an incoherent (thermal) state')
xlabel(r'$\tau$')
legend((r'First-order coherence function $g^{(1)}(\tau)$',
r'occupation number $n(\tau)$'))
show()
Explanation: However, in some cases we might be interested in correlation functions of the form $\left<A(t_1+t_2)B(t_1)\right>$, but only as a function of the time coordinate $t_2$. In this case we can also use the correlation_2op_2t function, if we pass the density matrix at time $t_1$ as second argument, and None as third argument. The correlation_2op_2t function then returns a vector with the correlation values corresponding to the times in taulist (the fourth argument).
Ex: First-Order Optical Coherence Function
This example demonstrates how to calculate a correlation function of the form $\left<A(\tau)B(0)\right>$ for a non-steady initial state. Consider an oscillator that is interacting with a thermal environment. If the oscillator is initially in a coherent state, it will gradually decay to a thermal (incoherent) state. The amount of coherence can be quantified using the first-order optical coherence function
$$
g^{(1)}(\tau) = \frac{\left<a^\dagger(\tau)a(0)\right>}{\sqrt{\left<a^\dagger(\tau)a(\tau)\right>\left<a^\dagger(0)a(0)\right>}}.
$$
For a coherent state $|g^{(1)}(\tau)| = 1$, and for a completely incoherent (thermal) state $g^{(1)}(\tau) = 0$. The following code calculates and plots $g^{(1)}(\tau)$ as a function of $\tau$.
End of explanation
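The $|g^{(1)}(\tau)| = 1$ limit for a coherent state can be illustrated in a few lines of NumPy, using the fact that for a freely evolving coherent amplitude $\alpha e^{-i\omega t}$ the correlation factorises (a toy illustration, not a QuTiP calculation):

```python
import numpy as np

omega = 2 * np.pi
alpha = 2.5
taus = np.linspace(0, 10, 200)

# <a†(tau) a(0)> = conj(alpha) e^{i omega tau} alpha, and both occupation
# factors in the denominator equal |alpha|^2, so g1 is a pure phase.
g1 = np.conj(alpha) * np.exp(1j * omega * taus) * alpha / np.abs(alpha) ** 2

print(np.allclose(np.abs(g1), 1.0))  # -> True
```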
N = 25
taus = np.linspace(0, 25.0, 200)
a = destroy(N)
H = 2 * np.pi * a.dag() * a
kappa = 0.25
n_th = 2.0 # bath temperature in terms of excitation number
c_ops = [np.sqrt(kappa * (1 + n_th)) * a, np.sqrt(kappa * n_th) * a.dag()]
states = [{'state': coherent_dm(N, np.sqrt(2)), 'label': "coherent state"},
{'state': thermal_dm(N, 2), 'label': "thermal state"},
{'state': fock_dm(N, 2), 'label': "Fock state"}]
fig, ax = subplots(1, 1)
for state in states:
rho0 = state['state']
# first calculate the occupation number as a function of time
n = mesolve(H, rho0, taus, c_ops, [a.dag() * a]).expect[0]
# calculate the correlation function G2 and normalize with n(0)n(t) to
# obtain g2
G2 = correlation_3op_1t(H, rho0, taus, c_ops, a.dag(), a.dag() * a, a)
g2 = G2 / (n[0] * n)
ax.plot(taus, np.real(g2), label=state['label'], lw=2)
ax.legend(loc=0)
ax.set_xlabel(r'$\tau$')
ax.set_ylabel(r'$g^{(2)}(\tau)$')
show()
Explanation: For convenience, the steps for calculating the first-order coherence function have been collected in the function coherence_function_g1.
Example: Second-Order Optical Coherence Function
The second-order optical coherence function, with time-delay $\tau$, is defined as
$$
\displaystyle g^{(2)}(\tau) = \frac{\langle a^\dagger(0)a^\dagger(\tau)a(\tau)a(0)\rangle}{\langle a^\dagger(0)a(0)\rangle^2}
$$
For a coherent state $g^{(2)}(\tau) = 1$, for a thermal state $g^{(2)}(\tau=0) = 2$ and it decreases as a function of time (bunched photons, they tend to appear together), and for a Fock state with $n$ photons $g^{(2)}(\tau = 0) = n(n - 1)/n^2 < 1$ and it increases with time (anti-bunched photons, more likely to arrive separated in time).
To calculate this type of correlation function with QuTiP, we could use correlation_4op_1t, which computes a correlation function of the form $\left<A(0)B(\tau)C(\tau)D(0)\right>$ (four operators, one delay-time vector). However, the middle pair of operators is evaluated at the same time $\tau$, and thus can be simplified to a single operator $E(\tau)=B(\tau)C(\tau)$, and we can instead call the correlation_3op_1t function to compute $\left<A(0)E(\tau)D(0)\right>$. This simplification is done automatically inside the deprecated correlation_4op_1t function that calls correlation_3op_1t internally.
The following code calculates and plots $g^{(2)}(\tau)$ as a function of $\tau$ for coherent, thermal and fock states.
End of explanation
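The zero-delay values quoted above follow from photon statistics alone and can be checked without QuTiP, using $g^{(2)}(0) = \langle n(n-1)\rangle/\langle n\rangle^2$ for a photon-number distribution (the helper below is a sketch, not a QuTiP function):

```python
import numpy as np

def g2_zero(p):
    """g2(0) = <n(n-1)> / <n>^2 for photon-number probabilities p[0], p[1], ..."""
    n = np.arange(len(p))
    p = np.asarray(p, dtype=float)
    return np.sum(p * n * (n - 1)) / np.sum(p * n) ** 2

fock2 = [0, 0, 1]                                   # exactly two photons
nbar = 2.0
thermal = [nbar ** m / (nbar + 1) ** (m + 1) for m in range(200)]

print(g2_zero(fock2))    # -> 0.5, i.e. n(n-1)/n^2 with n = 2
print(g2_zero(thermal))  # close to 2.0
```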
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/guide.css", "r").read()
return HTML(styles)
css_styling()
Explanation: For convenience, the steps for calculating the second-order coherence function have been collected in the function coherence_function_g2.
End of explanation |
7,654 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Project Euler
Step2: Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.
Step4: Now define a count_letters(n) that returns the number of letters used to write out the words for all of the numbers 1 to n inclusive.
Step5: Now write a set of assert tests for your count_letters function that verifies that it is working as expected.
Step6: Finally use your count_letters function to solve the original question. | Python Code:
def number_to_words(n):
    """Given a number n between 1-1000 inclusive return a list of words for the number."""
    ones = ['', 'one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine', 'ten',
            'eleven', 'twelve', 'thirteen', 'fourteen', 'fifteen', 'sixteen', 'seventeen', 'eighteen', 'nineteen']
    tens = ['', '', 'twenty', 'thirty', 'forty', 'fifty', 'sixty', 'seventy', 'eighty', 'ninety']
    if n == 1000:
        return ['one', 'thousand']
    words = []
    if n >= 100:
        words += [ones[n // 100], 'hundred'] + (['and'] if n % 100 else [])
        n %= 100
    if n >= 20:
        words.append(tens[n // 10] + ('-' + ones[n % 10] if n % 10 else ''))
    elif n:
        words.append(ones[n])
    return words
Explanation: Project Euler: Problem 17
https://projecteuler.net/problem=17
If the numbers 1 to 5 are written out in words: one, two, three, four, five, then there are 3 + 3 + 5 + 4 + 4 = 19 letters used in total.
If all the numbers from 1 to 1000 (one thousand) inclusive were written out in words, how many letters would be used?
NOTE: Do not count spaces or hyphens. For example, 342 (three hundred and forty-two) contains 23 letters and 115 (one hundred and fifteen) contains 20 letters. The use of "and" when writing out numbers is in compliance with British usage.
First write a number_to_words(n) function that takes an integer n between 1 and 1000 inclusive and returns a list of words for the number as described above
End of explanation
assert number_to_words(3) == ['three']
assert number_to_words(342) == ['three', 'hundred', 'and', 'forty-two']
assert True # use this for grading the number_to_words tests.
Explanation: Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.
End of explanation
def count_letters(n):
    """Count the number of letters used to write out the words for 1-n inclusive."""
    return sum(len(word.replace('-', ''))
               for i in range(1, n + 1) for word in number_to_words(i))
Explanation: Now define a count_letters(n) that returns the number of letters used to write out the words for all of the numbers 1 to n inclusive.
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
assert True # use this for grading the count_letters tests.
Explanation: Now write a set of assert tests for your count_letters function that verifies that it is working as expected.
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
assert True # use this for grading the answer to the original question.
Explanation: Finally use your count_letters function to solve the original question.
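As a cross-check of this final step, here is a self-contained letter-count sketch, deliberately independent of the grading cells above so it can be run on its own:

```python
# Letter counts only; spaces and hyphens are never counted.
ones  = [0, 3, 3, 5, 4, 4, 3, 5, 5, 4]   # '', one .. nine
teens = [3, 6, 6, 8, 8, 7, 7, 9, 8, 8]   # ten .. nineteen
tens  = [0, 0, 6, 6, 5, 5, 5, 7, 6, 6]   # '', '', twenty .. ninety

def letters(n):
    if n == 1000:
        return 3 + 8                      # 'one' + 'thousand'
    total = 0
    if n >= 100:
        total += ones[n // 100] + 7       # '<x>' + 'hundred'
        if n % 100:
            total += 3                    # 'and'
        n %= 100
    if 10 <= n <= 19:
        return total + teens[n - 10]
    return total + tens[n // 10] + ones[n % 10]

print(letters(342), letters(115))               # -> 23 20
print(sum(letters(i) for i in range(1, 1001)))  # -> 21124
```

The spot checks 23 and 20 match the examples given in the problem statement.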
End of explanation |
7,655 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DKRZ data ingest workflow information update
(Disclaimer
Step1: demo examples - step by step
The following examples can be adapted to the data managers' needs, e.g. by creating targeted jupyter notebooks or python scripts. Data managers have two separate application scenarios for data ingest information management
Step2: Step 2
Step3: Step 3
Step4: interactive "help"
Step5: Display status of report
Step6: Display status of form
Step7: Appendix
Sometimes it is necessary to modify specific information and not rely on the generic steps described above
here are some examples
Attention
Step8: add data ingest step related information
Comment
Step9: workflow step
Step10: workflow step | Python Code:
# import necessary packages
from dkrz_forms import form_handler, utils, wflow_handler, checks
from datetime import datetime
from pprint import pprint
Explanation: DKRZ data ingest workflow information update
(Disclaimer: This demo notebook is for data managers only !)
Updating information with respect to the data ingest workflow (e.g. adding quality assurance information or data publication related information) should be done in a well-structured way - based on well-defined steps.
These steps update consistent information sets with respect to specific workflow action (e.g. data publication)
Thus the submission_forms package provides a collection of components to support these activities.
A consistent update step normally consists of
* update on who did what, when (e.g. data manager A quality checked data B at time C ..)
* update on additional information on the activity (e.g. add the quality assurance record)
* update the status of the individual workflow step (open, paused, action-required, closed etc.)
The following generic status states are defined:
ACTIVITY_STATUS = "0:open, 1:in-progress ,2:action-required, 3:paused,4:closed"
ERROR_STATUS = "0:open,1:ok,2:error"
ENTITY_STATUS = "0:open,1:stored,2:submitted,3:re-opened,4:closed"
CHECK_STATUS = "0:open,1:warning, 2:error,3:ok"
End of explanation
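The status vocabularies above are plain "index:label" strings; a small helper (a sketch, not part of the dkrz_forms package) can turn one into a lookup dict:

```python
ACTIVITY_STATUS = "0:open, 1:in-progress ,2:action-required, 3:paused,4:closed"

def parse_status(spec):
    """Parse an 'index:label' status string into a {int: label} dict."""
    return {int(key): label.strip()
            for key, label in (item.split(":") for item in spec.split(","))}

print(parse_status(ACTIVITY_STATUS))
# -> {0: 'open', 1: 'in-progress', 2: 'action-required', 3: 'paused', 4: 'closed'}
```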
# load workflow form object
info_file = "path_to_file.json"
my_form = utils.load_workflow_form(info_file)
# show the workflow steps for this form (long-name, short-name)
# to select a specific action, you can use the long name, e.g. 'data_ingest' or the related short name e.g. 'ing'
wflow_dict = wflow_handler.get_wflow_description(my_form)
pprint(wflow_dict)
Explanation: demo examples - step by step
The following examples can be adapted to the data managers' needs, e.g. by creating targeted jupyter notebooks or python scripts. Data managers have two separate application scenarios for data ingest information management:
Step 1: find and load a specific data ingest activity related form
Alternative A)
check out our git repo https://gitlab.dkrz.de/DKRZ-CMIP-Pool/data_forms_repo
this repo contains all completed submission forms
all data manager related changes are also committed there
subdirectories in this repo relate to the individual projects (e.g. CMIP6, CORDEX, ESGF_replication, ..)
each entry there contains the last name of the data submission originator
Alternative B) (not yet documented, only prototype)
use the search interface and API of the search index on all submission forms
End of explanation
# 'start_action' updates the form with information on who is currently working on the form
# internal information on this (timestamp, status information) is automatically set ..
# the resulting 'working version' of the form is committed to the work repository
wflow_handler.start_action('data_submission_review',my_form,"stephan kindermann")
Explanation: Step 2: indicate who is working on which workflow step
End of explanation
review_report = {}
review_report['comment'] = 'needed to change and correct submission form'
review_report['additional_info'] = "mail exchange with a@b with respect to question ..."
my_form = wflow_handler.finish_action('data_submission_review',my_form,"stephan kindermann",review_report)
Explanation: Step 3: indicate the update and closure of a specific workflow step
End of explanation
my_form.rev.entity_out.report
Explanation: interactive "help": use ?form.part and tab completion:
End of explanation
report = checks.check_report(my_form,"sub")
checks.display_report(report)
my_form.rev.entity_in.check_status
Explanation: Display status of report
End of explanation
my_form.sub.activity.ticket_url
part = checks.check_step_form(my_form,"sub")
checks.display_check(part,"sub")
## global check
res = checks.check_generic_form(my_form)
checks.display_checks(my_form,res)
print(my_form.sub.entity_out.status)
print(my_form.rev.entity_in.form_json)
print(my_form.sub.activity.ticket_id)
pprint(my_form.workflow)
Explanation: Display status of form
End of explanation
workflow_form = utils.load_workflow_form(info_file)
review = workflow_form.rev
# any additional information keys can be added,
# yet they are invisible to generic information management tools ..
workflow_form.status = "review"
review.activity.status = "1:in-review"
review.activity.start_time = str(datetime.now())
review.activity.review_comment = "data volume check to be done"
review.agent.responsible_person = "sk"
sf = form_handler.save_form(workflow_form, "sk: review started")
review.activity.status = "3:accepted"
review.activity.ticket_id = "25389"
review.activity.end_time = str(datetime.now())
review.entity_out.comment = "This submission is related to submission abc_cde"
review.entity_out.tag = "sub:abc_cde" # tags are used to relate different forms to each other
review.entity_out.report = {'x':'y'} # result of validation in a dict (self defined properties)
# ToDo: test and document save_form for data managers (config setting for repo)
sf = form_handler.save_form(workflow_form, "kindermann: form_review()")
Explanation: Appendix
Sometimes it is necessary to modify specific information and not rely on the generic steps described above
here are some examples
Attention: this section has to be refined and the status information flags have to be revised and adapted to the actual needs
End of explanation
workflow_form = utils.load_workflow_form(info_file)
ingest = workflow_form.ing
?ingest.entity_out
# agent related info
workflow_form.status = "ingest"
ingest.activity.status = "started"
ingest.agent.responsible_person = "hdh"
ingest.activity.start_time=str(datetime.now())
# activity related info
ingest.activity.comment = "data pull: credentials needed for remote site"
sf = form_handler.save_form(workflow_form, "kindermann: form_review()")
ingest.activity.status = "completed"
ingest.activity.end_time = str(datetime.now())
# report of the ingest process (entity_out of ingest workflow step)
ingest_report = ingest.entity_out
ingest_report.tag = "a:b:c" # tag structure to be defined
ingest_report.status = "completed"
# free entries for detailed report information
ingest_report.report.remote_server = "gridftp.awi.de://export/data/CMIP6/test"
ingest_report.report.server_credentials = "in server_cred.krb keypass"
ingest_report.report.target_path = ".."
sf = form_handler.save_form(workflow_form, "kindermann: form_review()")
# ingest_report.report.<TAB>   # tab completion lists the report fields set above
Explanation: add data ingest step related information
Comment: alternatively in tools workflow_step related information could also be
directly given and assigned via dictionaries, yet this is only
recommended for data managers making sure the structure is consistent with
the preconfigured one given in config/project_config.py
* example validation.activity.__dict__ = data_manager_generated_dict
End of explanation
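The `__dict__` bulk assignment mentioned in the example above can be sketched in isolation; `ActivityRecord` and the dictionary contents are made-up stand-ins for what a data-manager tool might generate:

```python
class ActivityRecord(object):
    """Stand-in for a workflow_step.activity object."""
    pass

# dictionary as a data-manager tool might generate it (invented values)
data_manager_generated_dict = {
    'status': 'completed',
    'responsible_person': 'hdh',
    'comment': 'assigned in bulk via __dict__',
}

activity = ActivityRecord()
activity.__dict__ = data_manager_generated_dict  # replaces all attributes at once
```

As the text warns, this is only safe when the dictionary's structure matches the preconfigured one in config/project_config.py.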
from datetime import datetime
workflow_form = utils.load_workflow_form(info_file)
qua = workflow_form.qua
workflow_form.status = "quality assurance"
qua.agent.responsible_person = "hdh"
qua.activity.status = "starting"
qua.activity.start_time = str(datetime.now())
sf = form_handler.save_form(workflow_form, "hdh: qa start")
qua.entity_out.status = "completed"
qua.entity_out.report = {
"QA_conclusion": "PASS",
"project": "CORDEX",
"institute": "CLMcom",
"model": "CLMcom-CCLM4-8-17-CLM3-5",
"domain": "AUS-44",
"driving_experiment": [ "ICHEC-EC-EARTH"],
"experiment": [ "history", "rcp45", "rcp85"],
"ensemble_member": [ "r12i1p1" ],
"frequency": [ "day", "mon", "sem" ],
"annotation":
[
{
"scope": ["mon", "sem"],
"variable": [ "tasmax", "tasmin", "sfcWindmax" ],
"caption": "attribute <variable>:cell_methods for climatologies requires <time>:climatology instead of time_bnds",
"comment": "due to the format of the data, climatology is equivalent to time_bnds",
"severity": "note"
}
]
}
sf = form_handler.save_form(workflow_form, "hdh: qua complete")
Explanation: workflow step: data quality assurance
End of explanation
workflow_form = utils.load_workflow_form(info_file)
workflow_form.status = "publishing"
pub = workflow_form.pub
pub.agent.responsible_person = "katharina"
pub.activity.status = "starting"
pub.activity.start_time = str(datetime.now())
sf = form_handler.save_form(workflow_form, "kb: publishing")
pub.activity.status = "completed"
pub.activity.comment = "..."
pub.activity.end_time = ".."
pub.activity.report = {'model':"MPI-M"} # activity related report information
pub.entity_out.report = {'model':"MPI-M"} # the report of the publication action - all info characterizing the publication
sf = form_handler.save_form(workflow_form, "kb: published")
sf = form_handler.save_form(workflow_form, "kindermann: form demo run 1")
sf.sub.activity.commit_hash
Explanation: workflow step: data publication
End of explanation |
7,656 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center>
<img src="../img/ods_stickers.jpg">
Open Machine Learning Course. Session № 2
</center>
Author of the material
Step1: Read in the training set.
Step2: Drop the Cabin feature, and then all rows with missing values.
Step3: Plot pairwise relationships between the features Age, Fare, Pclass, Sex, SibSp, Parch, Embarked and Survived (the Pandas scatter_matrix method or Seaborn pairplot).
Step4: How does the ticket fare (Fare) depend on the cabin class (Pclass)? Build a boxplot.
Step5: Such a boxplot does not look very nice because of outliers.
Optionally
Step6: What is the ratio of those who died to those who survived, depending on sex? Display it with Seaborn countplot using the hue argument.
Step7: What is the ratio of those who died to those who survived, depending on cabin class? Display it with Seaborn countplot using the hue argument.
Step8: How does survival depend on passenger age? Check (graphically) the assumption that young people survived more often. Conventionally, let the young be under 30 and the elderly over 60.
Step9: It is hard to tell anything from such a picture. Let's try another way. | Python Code:
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: <center>
<img src="../img/ods_stickers.jpg">
Open Machine Learning Course. Session № 2
</center>
Author of the material: Yury Kashnitsky, research programmer at Mail.ru Group and senior lecturer at the Faculty of Computer Science, HSE. The material is distributed under the terms of the Creative Commons CC BY-NC-SA 4.0 license. It may be used for any purpose (edited, corrected, or taken as a basis), except commercial ones, with mandatory attribution of the author.
<center>Topic 2. Visual data analysis
<center>Practical assignment. Visual analysis of the Titanic passenger data. Solution
<a href="https://www.kaggle.com/c/titanic">Competition</a> Kaggle "Titanic: Machine Learning from Disaster".
End of explanation
train_df = pd.read_csv("../data/titanic_train.csv",
index_col='PassengerId')
train_df.head(2)
train_df.describe(include='all')
train_df.info()
Explanation: Read in the training set.
End of explanation
train_df = train_df.drop('Cabin', axis=1).dropna()
Explanation: Drop the Cabin feature, and then all rows with missing values.
End of explanation
sns.pairplot(train_df[['Survived', 'Age', 'Fare',
'Pclass', 'Sex', 'SibSp',
'Parch', 'Embarked']]);
Explanation: Plot pairwise relationships between the features Age, Fare, Pclass, Sex, SibSp, Parch, Embarked and Survived (the Pandas scatter_matrix method or Seaborn pairplot).
End of explanation
sns.boxplot(x='Pclass', y='Fare', data=train_df);
Explanation: How does the ticket fare (Fare) depend on the cabin class (Pclass)? Build a boxplot.
End of explanation
train_df['Fare_no_out'] = train_df['Fare']
fare_pclass1 = train_df[train_df['Pclass'] == 1]['Fare']
fare_pclass2 = train_df[train_df['Pclass'] == 2]['Fare']
fare_pclass3 = train_df[train_df['Pclass'] == 3]['Fare']
fare_pclass1_no_out = fare_pclass1[(fare_pclass1 -
fare_pclass1.mean()).abs()
< 2 * fare_pclass1.std()]
fare_pclass2_no_out = fare_pclass2[(fare_pclass2 -
fare_pclass2.mean()).abs()
< 2 * fare_pclass2.std()]
fare_pclass3_no_out = fare_pclass3[(fare_pclass3 -
fare_pclass3.mean()).abs()
< 2 * fare_pclass3.std()]
train_df['Fare_no_out'] = fare_pclass1_no_out.append(fare_pclass2_no_out)\
.append(fare_pclass3_no_out)
sns.boxplot(x='Pclass', y='Fare_no_out', data=train_df);
Explanation: Such a boxplot does not look very nice because of outliers.
Optionally: create a Fare_no_out feature (fares without outliers) in which fares that differ from the class mean by more than 2 standard deviations are excluded. Important: outliers must be excluded per cabin class. Otherwise only the largest (1st class) and smallest (3rd class) fares would be excluded.
End of explanation
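The 2-standard-deviation rule used above can be shown on plain numbers (illustrative fares, not the real dataset; note this sketch uses the population standard deviation, while pandas `.std()` defaults to the sample estimate):

```python
def drop_outliers(values, n_std=2.0):
    """Keep only values within n_std population standard deviations of the mean."""
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [v for v in values if abs(v - mean) < n_std * std]

fares = [7.25, 8.05, 7.9, 9.5, 8.3, 512.33]  # one extreme fare
clean = drop_outliers(fares)                 # the 512.33 outlier is removed
```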
pd.crosstab(train_df['Sex'], train_df['Survived'])
sns.countplot(x="Sex", hue="Survived", data=train_df);
Explanation: What is the ratio of those who died to those who survived, depending on sex? Display it with Seaborn countplot using the hue argument.
End of explanation
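pd.crosstab above is, at its core, a co-occurrence count; a minimal dictionary version with invented rows makes that explicit:

```python
from collections import Counter

def crosstab(rows, cols):
    """Minimal pd.crosstab: count co-occurrences of (row, col) pairs."""
    return Counter(zip(rows, cols))

sex = ['male', 'female', 'male', 'female', 'male']
survived = [0, 1, 0, 1, 1]
table = crosstab(sex, survived)
```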
sns.countplot(x="Pclass", hue="Survived", data=train_df);
Explanation: What is the ratio of those who died to those who survived, depending on cabin class? Display it with Seaborn countplot using the hue argument.
End of explanation
sns.boxplot(x='Survived', y='Age', data=train_df);
Explanation: How does survival depend on passenger age? Check (graphically) the assumption that young people survived more often. Conventionally, let the young be under 30 and the elderly over 60.
End of explanation
train_df['age_cat'] = train_df['Age'].apply(lambda age: 1 if age < 30
else 3 if age > 60 else 2);
sns.countplot(x='age_cat', hue='Survived', data=train_df);
Explanation: It is hard to tell anything from such a picture. Let's try another way.
End of explanation |
7,657 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute MxNE with time-frequency sparse prior
The TF-MxNE solver is a distributed inverse method (like dSPM or sLORETA)
that promotes focal (sparse) sources (such as dipole fitting techniques)
Step1: Run solver
Step2: Plot dipole activations
Step3: Plot location of the strongest dipole with MRI slices
Step4: Show the evoked response and the residual for gradiometers
Step5: Generate stc from dipoles
Step6: View in 2D and 3D ("glass" brain like 3D plot) | Python Code:
# Author: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Daniel Strohmeier <daniel.strohmeier@tu-ilmenau.de>
#
# License: BSD-3-Clause
import numpy as np
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
from mne.inverse_sparse import tf_mixed_norm, make_stc_from_dipoles
from mne.viz import (plot_sparse_source_estimates,
plot_dipole_locations, plot_dipole_amplitudes)
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path / 'subjects'
meg_path = data_path / 'MEG' / 'sample'
fwd_fname = meg_path / 'sample_audvis-meg-eeg-oct-6-fwd.fif'
ave_fname = meg_path / 'sample_audvis-no-filter-ave.fif'
cov_fname = meg_path / 'sample_audvis-shrunk-cov.fif'
# Read noise covariance matrix
cov = mne.read_cov(cov_fname)
# Handling average file
condition = 'Left visual'
evoked = mne.read_evokeds(ave_fname, condition=condition, baseline=(None, 0))
evoked = mne.pick_channels_evoked(evoked)
# We make the window slightly larger than what you'll eventually be interested
# in ([-0.05, 0.3]) to avoid edge effects.
evoked.crop(tmin=-0.1, tmax=0.4)
# Handling forward solution
forward = mne.read_forward_solution(fwd_fname)
Explanation: Compute MxNE with time-frequency sparse prior
The TF-MxNE solver is a distributed inverse method (like dSPM or sLORETA)
that promotes focal (sparse) sources (such as dipole fitting techniques)
:footcite:GramfortEtAl2013b,GramfortEtAl2011.
The benefit of this approach is that:
it is spatio-temporal without assuming stationarity (sources properties
can vary over time)
activations are localized in space, time and frequency in one step.
with a built-in filtering process based on a short time Fourier
transform (STFT), data does not need to be low passed (just high pass
to make the signals zero mean).
the solver solves a convex optimization problem, hence cannot be
trapped in local minima.
End of explanation
# alpha parameter is between 0 and 100 (100 gives 0 active source)
alpha = 40. # general regularization parameter
# l1_ratio parameter between 0 and 1 promotes temporal smoothness
# (0 means no temporal regularization)
l1_ratio = 0.03 # temporal regularization parameter
loose, depth = 0.2, 0.9 # loose orientation & depth weighting
# Compute dSPM solution to be used as weights in MxNE
inverse_operator = make_inverse_operator(evoked.info, forward, cov,
loose=loose, depth=depth)
stc_dspm = apply_inverse(evoked, inverse_operator, lambda2=1. / 9.,
method='dSPM')
# Compute TF-MxNE inverse solution with dipole output
dipoles, residual = tf_mixed_norm(
evoked, forward, cov, alpha=alpha, l1_ratio=l1_ratio, loose=loose,
depth=depth, maxit=200, tol=1e-6, weights=stc_dspm, weights_min=8.,
debias=True, wsize=16, tstep=4, window=0.05, return_as_dipoles=True,
return_residual=True)
# Crop to remove edges
for dip in dipoles:
    dip.crop(tmin=-0.05, tmax=0.3)
evoked.crop(tmin=-0.05, tmax=0.3)
residual.crop(tmin=-0.05, tmax=0.3)
Explanation: Run solver
End of explanation
plot_dipole_amplitudes(dipoles)
Explanation: Plot dipole activations
End of explanation
idx = np.argmax([np.max(np.abs(dip.amplitude)) for dip in dipoles])
plot_dipole_locations(dipoles[idx], forward['mri_head_t'], 'sample',
subjects_dir=subjects_dir, mode='orthoview',
idx='amplitude')
# # Plot dipole locations of all dipoles with MRI slices:
# for dip in dipoles:
# plot_dipole_locations(dip, forward['mri_head_t'], 'sample',
# subjects_dir=subjects_dir, mode='orthoview',
# idx='amplitude')
Explanation: Plot location of the strongest dipole with MRI slices
End of explanation
ylim = dict(grad=[-120, 120])
evoked.pick_types(meg='grad', exclude='bads')
evoked.plot(titles=dict(grad='Evoked Response: Gradiometers'), ylim=ylim,
proj=True, time_unit='s')
residual.pick_types(meg='grad', exclude='bads')
residual.plot(titles=dict(grad='Residuals: Gradiometers'), ylim=ylim,
proj=True, time_unit='s')
Explanation: Show the evoked response and the residual for gradiometers
End of explanation
stc = make_stc_from_dipoles(dipoles, forward['src'])
Explanation: Generate stc from dipoles
End of explanation
plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1),
opacity=0.1, fig_name="TF-MxNE (cond %s)"
% condition, modes=['sphere'], scale_factors=[1.])
time_label = 'TF-MxNE time=%0.2f ms'
clim = dict(kind='value', lims=[10e-9, 15e-9, 20e-9])
brain = stc.plot('sample', 'inflated', 'rh', views='medial',
clim=clim, time_label=time_label, smoothing_steps=5,
subjects_dir=subjects_dir, initial_time=150, time_unit='ms')
brain.add_label("V1", color="yellow", scalar_thresh=.5, borders=True)
brain.add_label("V2", color="red", scalar_thresh=.5, borders=True)
Explanation: View in 2D and 3D ("glass" brain like 3D plot)
End of explanation |
7,658 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook investigates the test power vs. the number of test locations J in an incremental way. Specifically, we conjectured that the test power using $\mathcal{T}$, the set of $J$ locations, should not be higher than the test power obtained by using $\mathcal{T} \cup \{t_{J+1}\}$.
Step1: $\hat{\lambda}_n$ vs $J$
Step2: p-values vs J
Step3: test threshold vs J
Step5: The test threshold $T_\alpha$ seems to increase approximately linearly with respect to $J$ for any value of $\alpha$. The slope is roughly constant for all $\alpha$.
Test power vs. J | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
#%config InlineBackend.figure_format = 'svg'
#%config InlineBackend.figure_format = 'pdf'
import freqopttest.util as util
import freqopttest.data as data
import freqopttest.ex.exglobal as exglo
import freqopttest.kernel as kernel
import freqopttest.tst as tst
import freqopttest.glo as glo
import freqopttest.plot as plot
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
import sys
# font options
font = {
#'family' : 'normal',
#'weight' : 'bold',
'size' : 18
}
plt.rc('font', **font)
plt.rc('lines', linewidth=2)
# sample source
n = 500
dim = 30
seed = 13
#ss = data.SSGaussMeanDiff(dim, my=0.5)
ss = data.SSGaussVarDiff(dim)
#ss = data.SSSameGauss(dim)
#ss = data.SSBlobs()
dim = ss.dim()
tst_data = ss.sample(n, seed=seed)
tr, te = tst_data.split_tr_te(tr_proportion=0.5, seed=seed+82)
J = 2
alpha = 0.01
T = tst.MeanEmbeddingTest.init_locs_2randn(tr, J, seed=seed+1)
#T = np.random.randn(J, dim)
med = util.meddistance(tr.stack_xy(), 800)
list_gwidth = np.hstack( ( (med**2) *(2.0**np.linspace(-5, 5, 30) ) ) )
list_gwidth.sort()
besti, powers = tst.MeanEmbeddingTest.grid_search_gwidth(tr, T, list_gwidth, alpha)
# test with the best Gaussian with
best_width = list_gwidth[besti]
met_grid = tst.MeanEmbeddingTest(T, best_width, alpha)
met_grid.perform_test(te)
Explanation: This notebook investigates the test power vs. the number of test locations J in an incremental way. Specifically, we conjectured that the test power using $\mathcal{T}$, the set of $J$ locations, should not be higher than the test power obtained by using $\mathcal{T} \cup \{t_{J+1}\}$.
End of explanation
def draw_t(tst_data, seed=None):
    # Fit one Gaussian to the X,Y data.
    if seed is not None:
        rand_state = np.random.get_state()
        np.random.seed(seed)
    xy = tst_data.stack_xy()
    # fit a Gaussian to each of X, Y
    m = np.mean(xy, 0)
    cov = np.cov(xy.T)
    t = np.random.multivariate_normal(m, cov, 1)
    # reset the seed back
    if seed is not None:
        np.random.set_state(rand_state)
    return t
def simulate_stats_trajectory(T):
    Tn = T
    # add one new test location at a time.
    trials = 30
    test_stats = np.zeros(trials)
    for i in range(trials):
        # draw new location
        t = draw_t(tr)
        Tn = np.vstack((Tn, t))
        met = tst.MeanEmbeddingTest(Tn, best_width, alpha)
        tresult = met.perform_test(te)
        test_stats[i] = tresult['test_stat']
    return test_stats, Tn
for rep in range(6):
    test_stats, Tn = simulate_stats_trajectory(T)
    plt.plot(np.arange(len(T), len(Tn)), test_stats)
    print('stats increasing: %s' % np.all(np.diff(test_stats) >= 0))
plt.xlabel('$J$')
plt.title('$\hat{\lambda}_n$ as J increases')
Explanation: $\hat{\lambda}_n$ vs $J$
End of explanation
# plot p-value.
for r in range(6):
    test_stats, Tn = simulate_stats_trajectory(T)
    Js = np.arange(len(T), len(Tn))
    pvals = [stats.chi2.sf(s, df=J) for s, J in zip(test_stats, Js)]
    plt.plot(Js, pvals)
plt.xlabel('$J$')
plt.title('p-values as J increases')
Explanation: p-values vs J
End of explanation
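stats.chi2.sf above is the chi-square survival function; for even degrees of freedom it has a closed form (exp(-x/2) times a truncated Poisson sum), which this pure-Python sketch implements for checking intuition:

```python
import math

def chi2_sf_even_df(x, df):
    """Chi-square survival function for even df, via the Erlang/Poisson identity."""
    assert df > 0 and df % 2 == 0
    k = df // 2
    term, total = 1.0, 0.0
    for i in range(k):
        if i > 0:
            term *= (x / 2.0) / i   # builds (x/2)^i / i!
        total += term
    return math.exp(-x / 2.0) * total

p = chi2_sf_even_df(5.99, 2)   # the classic ~0.05 threshold for df=2
```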
Js = range(1, 30)
alphas = [1e-6, 0.005, 0.01, 0.05, 0.1]
for i, al in enumerate(alphas):
    threshs = [stats.chi2.isf(al, df=J) for J in Js]
    plt.plot(Js, threshs, '-', label='$\\alpha = %.3g$'%(al))
plt.xlabel('J')
plt.ylabel('$T_\\alpha$')
plt.legend(loc='best')
Explanation: test threshold vs J
End of explanation
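The "test power" measured in the next section is just the rejection rate over repeated draws from the alternative. A toy Monte-Carlo version with a simple one-sample z-test (all numbers invented, not the MeanEmbeddingTest itself):

```python
import math
import random

def empirical_power(sample_fn, reject_fn, reps=200, seed=1):
    """Fraction of repetitions in which H0 is rejected."""
    rng = random.Random(seed)
    return sum(reject_fn(sample_fn(rng)) for _ in range(reps)) / float(reps)

n = 100
def sample_shifted(rng):          # H0 (mean 0) is false here: data ~ N(0.5, 1)
    return [rng.gauss(0.5, 1.0) for _ in range(n)]

def z_reject(data):               # two-sided z-test at alpha ~ 0.05
    return abs(sum(data) / len(data)) > 1.96 / math.sqrt(len(data))

power = empirical_power(sample_shifted, z_reject)
```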
# sample source
n = 1000
d = 2
seed = 13
np.random.seed(seed)
ss = data.SSGaussMeanDiff(d, my=1.0)
J = 2
alpha = 0.01
def eval_test_locs(T, ss, n, rep, seed_start=1, alpha=0.01):
    """Return an empirical test power"""
    rejs = np.zeros(rep)
    dat = ss.sample(1000, seed=298)
    gwidth2 = util.meddistance(dat.stack_xy())**2
    for r in range(rep):
        te = ss.sample(n, seed=seed_start+r)
        met = tst.MeanEmbeddingTest(T, gwidth2, alpha)
        result = met.perform_test(te)
        h0_rejected = result['h0_rejected']
        rejs[r] = h0_rejected
        print('rep %d: rej: %s'%(r, h0_rejected))
    power = np.mean(rejs)
    return power
# define a set of locations
#mid = np.zeros(d)
#T_1 = mid[np.newaxis, :]
#T_2 = np.vstack((T_1, np.hstack((np.zeros(d-1), 20)) ))
#T_3 = np.vstack((T_2, np.hstack((np.zeros(d-1), 40)) ))
T = np.random.randn(270, d)
eval_test_locs(T, ss, n=n, rep=100, seed_start=1, alpha=alpha)
# plot one instance of the data in 2d.
te = ss.sample(n, seed=seed)
X, Y = te.xy()
plt.plot(X[:, 0], X[:, 1], 'ob')
plt.plot(Y[:, 0], Y[:, 1], 'or')
Explanation: The test threshold $T_\alpha$ seems to increase approximately linearly with respect to $J$ for any value of $\alpha$. The slope is roughly constant for all $\alpha$.
Test power vs. J: 2d Gaussian mean diff problem
For this example, we will consider a 2d Gaussian example where both P, Q are Gaussian with unit variance. P has mean [0, 0] and Q has mean [0, 1]. We will consider two ways to add test locations. Firstly we will add test locations in regions which reveal the difference of P, Q. Then, we will add test locations in uninformative regions to show that more locations do not necessarily increase the test power.
End of explanation |
7,659 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SQLAlchemy
What is it?
Object-Relational Mapper -- A technique that connects the objects of an application to tables in an RDB
multi-level -- can interact with DBs as multiple levels of abstraction
Why is it useful?
There are some reasons you should consider SQLAlchemy
Uses less database specific code
Generalizes easily to different databases
Integrates nicely with Django, Pylons/Pyramid and Flask
There are also a few organizations you may have heard of that use SQLAlchemy.
Database setup
On OSX use only second command
bash
$ sudo su - postgres
$ psql -U postgres
Then
sql
CREATE USER ender WITH ENCRYPTED PASSWORD 'bugger';
CREATE DATABASE foo WITH OWNER ender;
\q
You can ensure that your database works
Create a DB
Step1: Add some content to the DB
Step5: A DbWrapper for your convenience
Step6: Using DbWrapper | Python Code:
import os,sys,getpass,datetime
from sqlalchemy import Column, ForeignKey, Integer, String, Float, DateTime
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
Base = declarative_base()
class Member(Base):
    __tablename__ = 'member'
    id = Column(Integer, primary_key=True)
    name = Column(String(250), nullable=False)
    address = Column(String(250), nullable=False)
    signup_store = Column(String(250))

class Purchase(Base):
    __tablename__ = 'purchase'
    id = Column(Integer, primary_key=True)
    item_number = Column(Integer, nullable=False)
    item_category = Column(String(250), nullable=False)
    item_name = Column(String(250), nullable=False)
    item_amount = Column(Float, nullable=False)
    purchace_date = Column(DateTime, default=datetime.datetime.utcnow)
    member_id = Column(Integer, ForeignKey('member.id'))
    member = relationship(Member)

class Game(Base):
    __tablename__ = 'game'
    id = Column(Integer, primary_key=True)
    game_name = Column(String(250))
    game_type = Column(String(250))
    game_maker = Column(String(250), nullable=False)
    game_date = Column(DateTime, default=datetime.datetime.utcnow)
    member_id = Column(Integer, ForeignKey('member.id'))
    member = relationship(Member)
## Create an engine
uname = 'ender'
upass = getpass.getpass()
dbname = 'foo'
dbhost = 'localhost'
port = '5432'
engine = create_engine('postgresql://%s:%s@%s:%s/%s'%(uname,upass,dbhost,port,dbname))
## erase the tables if they exist (CAREFUL the drop_all!!!)
#Base.metadata.reflect(bind=engine)
#Base.metadata.drop_all(engine)
## Create all tables in the engine. This is equivalent to "Create Table"
#Base.metadata.create_all(engine)
Explanation: SQLAlchemy
What is it?
Object-Relational Mapper -- A technique that connects the objects of an application to tables in an RDB
multi-level -- can interact with DBs as multiple levels of abstraction
Why is it useful?
There are some reasons you should consider SQLAlchemy
Uses less database specific code
Generalizes easily to different databases
Integrates nicely with Django, Pylons/Pyramid and Flask
There are also a few organizations you may have heard of that use SQLAlchemy.
Database setup
On OSX use only second command
bash
$ sudo su - postgres
$ psql -U postgres
Then
sql
CREATE USER ender WITH ENCRYPTED PASSWORD 'bugger';
CREATE DATABASE foo WITH OWNER ender;
\q
You can ensure that your database works
Create a DB
End of explanation
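For contrast with the ORM above, here is the same "object to row" mapping done by hand with the standard-library sqlite3 module — roughly the boilerplate an ORM generates for you. The class, table, and values are illustrative, not part of the tutorial's schema:

```python
import sqlite3

class PlainMember(object):
    def __init__(self, name, address, signup_store=None, id=None):
        self.id, self.name, self.address, self.signup_store = id, name, address, signup_store

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE member ("
             "id INTEGER PRIMARY KEY, name TEXT, address TEXT, signup_store TEXT)")

def save(member):
    # hand-written INSERT: the ORM would derive this from the class definition
    cur = conn.execute(
        "INSERT INTO member (name, address, signup_store) VALUES (?, ?, ?)",
        (member.name, member.address, member.signup_store))
    member.id = cur.lastrowid

def load(member_id):
    row = conn.execute("SELECT id, name, address, signup_store FROM member "
                       "WHERE id = ?", (member_id,)).fetchone()
    return PlainMember(row[1], row[2], row[3], id=row[0])

m = PlainMember('pipin', 'west shire', 'prancing pony')
save(m)
```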
## create a session (staging zone)
Base.metadata.bind = engine
DBSession = sessionmaker(bind=engine)
session = DBSession()
## add some members
new_member_1 = Member(name='pipin',address='west shire',signup_store='prancing pony')
new_member_2 = Member(name='peregrin',address='south shire',signup_store='prancing pony')
session.add(new_member_1)
session.add(new_member_2)
session.commit()
## add some purchases
new_purchase = Purchase(item_number=1234,
                        item_category='role playing',
                        item_name='playing mat',
                        item_amount=10.45,
                        purchace_date=datetime.datetime.utcnow(),
                        member_id=new_member_1.id
                        )
session.add(new_purchase)
session.commit()
print('done')
Explanation: Add some content to the DB
End of explanation
import sys,os,csv,re
from sqlalchemy.ext.automap import automap_base
from sqlalchemy import create_engine
from sqlalchemy.orm import Session,relationship
from sqlalchemy import MetaData,Table,Column,Sequence,ForeignKey, Integer, String
from sqlalchemy.inspection import inspect
from sqlalchemy.sql import select
from sqlalchemy.ext.declarative import declarative_base
try:
    from sqlalchemy_schemadisplay import create_schema_graph
    createGraph = True
except:
    createGraph = False
class DbWrapper(object):
    """interface with a generic database"""
    def __init__(self,uname,upass,dbname,dbhost='localhost',port='5432',reflect=False):
        """
        Constructor
        uname  - database username
        upass  - database password
        dbname - database name
        dbhost - database host address
        port   - database port
        """
        ## db variables
        self.uname = uname
        self.upass = upass
        self.dbname = dbname
        self.dbhost = dbhost
        self.port = port
        ## initialize
        self.connect()

    def connect(self):
        ## basic connection
        self.Base = automap_base()
        self.engine = create_engine('postgresql://%s:%s@%s:%s/%s'%(self.uname,self.upass,self.dbhost,self.port,self.dbname))
        self.conn = self.engine.connect()
        self.session = Session(self.engine)
        self.meta = MetaData()
        self.tables = {}
        ## reflect the tables
        self.meta.reflect(bind=self.engine)
        for tname in self.engine.table_names():
            tbl = Table(tname,self.meta,autoload=True,autoload_with=self.engine)
            self.tables[tname] = tbl

    def print_summary(self):
        """print a list of the tables"""
        print("-----------------------")
        print("%s"%(self.dbname))
        print("%s tables"%len(self.tables.keys()))
        for tname,tbl in self.tables.iteritems():
            print("\t %s"%(tname))
            print("\t\tPK: %s "%";".join([key.name for key in inspect(tbl).primary_key]))
            for col in tbl.columns:
                print("\t\t%s"%col)

    def draw_schema(self,filename="schema.png"):
        if createGraph:
            # create the pydot graph object by autoloading all tables via a bound metadata object
            graph = create_schema_graph(metadata=self.meta,
                                        show_datatypes=False,  # can get large with datatypes
                                        show_indexes=False,    # ditto for indexes
                                        rankdir='LRA',         # From left to right (LR), top to bottom (TB)
                                        concentrate=False      # Don't try to join the relation lines together
                                        )
            if re.search("\.png",filename):
                graph.write_png(filename)
            elif re.search("\.svg",filename):
                graph.write_svg(filename)
            else:
                raise Exception("invalid filename specified [*.png or *.svg")
            print("...%s created"%filename)
        else:
            print("Not creating schema figure because 'sqlalchemy_schemadisplay' is not installed")
Explanation: A DbWrapper for your convenience
End of explanation
from IPython.display import Image
Image(filename='schema.png',width=400)
## connect and use the built in methods
db = DbWrapper('ender','bugger','foo')
db.print_summary()
db.draw_schema()
## sqlalchemy ORM queries
Member = db.tables['member']
all_members = db.session.query(Member).all()
specific_rows = db.session.query(Member).filter_by(name="pipin").all()
print(all_members)
print(specific_rows)
## sqlalchemy core queries
s = select([Member])
_result = db.conn.execute(s)
result = _result.fetchall()
print(str(s))
print(result[0])
Explanation: Using DbWrapper
End of explanation |
7,660 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NoSQL Technologies -- Tutorial at JISBD 2017
All the information for this tutorial is available at https
Step1: http
Step2: We create a presentations database
Step3: And the jisbd17 collection
Step4: I am going to add all the images of the presentation to the database
First all the files are located, and then the update_one() function is used to add or update the values in the database (we had already inserted partial information for jisbd17-000).
Step5: Adding the images to the database...
Step6: I add the JISBD 2017 presentation to the presentations collection.
Step7: Introduction to NoSQL
Step8: Scalability!
Step9: Schemaless
Step10: Data modeling in NoSQL
Step11: Raw efficiency
Step12: Types of NoSQL systems
MongoDB (documents)
Document database that we will use as an example. One of the most widespread
Step13: To use the mongo shell in Javascript
Step14: The find() function has a large number of options to specify the search. Complex qualifiers can be used, such as
Step15: It also allows showing the execution plan
Step16: An index can be created if searching by that field is going to be critical. More indexes can be created, of types ASCENDING, DESCENDING, HASHED, and other geospatial ones. https
Step17: Map-Reduce
Step18: Mongodb includes two APIs to process and search documents
Step19: As a plot
Step20: Or a histogram
Step21: Aggregation Framework
Aggregation framework
Step22: JOIN simulation
Step23: HBase (wide-column)
I will use the HBase docker image from here on
Step24: It can also be connected to remotely. From Python, we will use the happybase package
Step25: Copy the jisbd17 table from mongo
It will be done respecting the column families created. In particular, the xref field will be left as is for now; an optimization for it will be shown later.
Step26: For the xref case we will use an optimization possible in HBase
Step27: And finally the inverted index. It is very efficient since the ROWCOL Bloom filter has been used for that xref column family.
Step28: Finally, in HBase, a scan is a waste of time. The inverse reference should be precomputed and included in each slide. The lookup is then O(1).
Getting a row with happybase
Step29: Examples of filters with happybase
Step30: Neo4j (Graphs)
Neo4j's own interface can also be used at the address http
Step31: We are going to load the ipython-cypher extension to be able to launch Cypher queries directly from the notebook. I have started the Neo4j image without authentication, for local testing.
We will use a Jupyter Notebook extension called ipython-cypher. It is installed in the virtual machine. If not, it could be installed with
Step32: We import all the slides from MongoDB
Step33: We will create the relationship
Step34: The Cypher language
Step35: Ipython-cypher
Step36: We are going to add the xref relationships present in the slides. For now there were only a few set by hand. For slides with no references, I add one at random.
Step37: Our research work | Python Code:
%load extra/utils/functions.py
ds(1,2)
ds(3)
yoda(u"A SQL vs. NoSQL war you must not start")
Explanation: NoSQL Technologies -- Tutorial at JISBD 2017
All the information for this tutorial is available at https://github.com/dsevilla/jisbd17-nosql.
Diego Sevilla Ruiz, dsevilla@um.es.
End of explanation
ds(4)
%%bash
sudo docker pull mongo
pip install --upgrade pymongo
!sudo docker run --rm -d --name mongo -p 27017:27017 mongo
import pymongo
from pymongo import MongoClient
client = MongoClient("localhost", 27017)
client
Explanation: http://www.nosql-vs-sql.com/
End of explanation
db = client.presentations
Explanation: We create a presentations database:
End of explanation
jisbd17 = db.jisbd17
jisbd17
jisbd17.insert_one({'_id' : 'jisbd17-000',
'title': 'blah',
'text' : '',
'image': None,
'references' :
[{'type' : 'web',
'ref' : 'http://nosql-database.org'},
{'type' : 'book',
'ref' : 'Sadalage, Fowler. NoSQL Distilled'}
],
'xref' : ['jisbd17-010', 'jisbd17-002'],
'notes': 'blah blah'
})
client.database_names()
DictTable(jisbd17.find_one())
Explanation: And the jisbd17 collection:
End of explanation
import os
import os.path
import glob
files = glob.glob(os.path.join('slides','slides-dir','*.png'))
Explanation: I'm going to add all the presentation images to the database
First all the files are found, then the update_one() function is used to add or update the values in the database (we had already inserted partial information for jisbd17-000).
End of explanation
from bson.binary import Binary
for file in files:
img = load_img(file)
img_to_thumbnail(img)
slidename = os.path.basename(os.path.splitext(file)[0])
jisbd17.update_one({'_id': slidename},
{'$set' : {'image': Binary(img_to_bytebuffer(img))}},
True)
for slide in jisbd17.find():
print(slide['_id'], slide.get('title'))
slide0 = jisbd17.find_one({'_id': 'jisbd17-000'})
img_from_bytebuffer(slide0['image'])
Explanation: Adding the images to the database...
End of explanation
presentations = db.presentations
slides = [r['_id'] for r in jisbd17.find({'_id' : {'$regex' : '^jisbd17-'}},projection={'_id' : True}).sort('_id', 1)]
presentations.insert_one({'name' : 'Tecnologías NoSQL. JISBD 2017',
'slides' : slides
})
presentations.find_one()
yoda(u'Modelado de datos tú no hacer...')
inciso_slide = 9
ds(inciso_slide,3)
jisbd17.find_one({'_id': 'jisbd17-000'})
Explanation: I add the JISBD 2017 presentation to the presentations collection.
End of explanation
ds(13,7)
ds(20,5)
Explanation: Introduction to NoSQL
End of explanation
ds(25,6)
Explanation: Scalability!
End of explanation
ds(31,6)
ds(37)
Explanation: Schemaless
End of explanation
ds(38,11)
Explanation: Data modeling in NoSQL
End of explanation
ds(49,6)
Explanation: Raw efficiency
End of explanation
import re
def read_slides():
in_slide = False
slidetitle = ''
slidetext = ''
slidenum = 0
with open('slides/slides.tex', 'r') as f:
for line in f:
# Remove comments
line = line.split('%')[0]
if not in_slide:
if '\\begin{frame}' in line:
in_slide = True
elif '\\frametitle' in line:
q = re.search('\\\\frametitle{([^}]+)',line)
slidetitle = q.group(1)
continue
elif '\\framebreak' in line or re.match('\\\\only<[^1]',line) or '\\end{frame}' in line:
                    # Add the slide to the list
slideid = 'jisbd17-{:03d}'.format(slidenum)
print(slideid)
jisbd17.update_one({'_id': slideid},
{'$set' : {'title': slidetitle,
'text' : slidetext
}},
True)
# Next
slidetext = ''
slidenum += 1
if '\\end{frame}' in line:
in_slide = False
slidetitle = ''
else:
slidetext += line
# Call the function
read_slides()
Explanation: Types of NoSQL Systems
MongoDB (documents)
Document database that we will use as an example. One of the most widespread:
JSON document model (BSON, in binary, used for efficiency)
Map-Reduce for database transformations and queries
Its own database manipulation language, the so-called "aggregation" language (aggregate)
Supports sharding (distributing parts of the DB across different nodes)
Supports replication (synchronized master-slave copies on different nodes)
Does not support ACID
Transactions happen at the DOCUMENT level
We will use pymongo from Python. To install it:
sudo pip install --upgrade pymongo
Text and title of the slides
Since the jisbd17 collection is already populated, we can update the documents to add the title and text of each slide. We will extract them from the slides.tex file.
slides = jisbd17.find(filter={},projection={'text': True})
df = pd.DataFrame([len(s.get('text','')) for s in slides])
df.plot()
Explanation: To use the mongo shell in JavaScript:
docker exec -it mongo mongo
Simple queries
Distribution of the text size of the slides.
End of explanation
jisbd17.find_one({'text': {'$regex' : '[Mm]ongo'}})['_id']
Explanation: The find() function offers a great number of ways to specify a search. Complex qualifiers such as the following can be used:
$and
$or
$not
These qualifiers join "objects", not values. There are also other qualifiers that apply to values:
$lt (less than)
$lte (less than or equal)
$gt (greater than)
$gte (greater than or equal)
$regex (regular expression)
End of explanation
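As an illustration (the field values below are made up), a filter combining these qualifiers is just a nested Python dict that could be passed to find():

```python
# Hypothetical filter: slides whose text mentions Mongo AND whose _id
# falls in a given range. It could be passed as jisbd17.find(query).
query = {
    "$and": [
        {"text": {"$regex": "[Mm]ongo"}},
        {"_id": {"$gte": "jisbd17-010", "$lt": "jisbd17-050"}},
    ]
}
print(query["$and"][0])  # {'text': {'$regex': '[Mm]ongo'}}
```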
jisbd17.find({'title' : 'jisbd17-001'}).explain()
Explanation: It can also show the execution plan:
End of explanation
jisbd17.create_index([('title', pymongo.HASHED)])
jisbd17.find({'title' : 'jisbd17-001'}).explain()
Explanation: An index can be created if searching by that field will be critical. More indexes can be created, of types ASCENDING, DESCENDING, HASHED, plus geospatial and others. https://api.mongodb.com/python/current/api/pymongo/collection.html#pymongo.collection.Collection.create_index
End of explanation
ds(59,9)
Explanation: Map-Reduce
End of explanation
from bson.code import Code
map = Code(
'''function () {
if ('text' in this)
emit(this.text.length, 1)
else
emit(0,1)
}''')
reduce = Code(
'''function (key, values) {
return Array.sum(values);
}''')
results = jisbd17.map_reduce(map, reduce, "myresults")
results = list(results.find())
results
Explanation: MongoDB includes two APIs for processing and searching documents: the Map-Reduce API and the aggregation API. We will look at Map-Reduce first. Manual: https://docs.mongodb.com/manual/aggregation/#map-reduce
Histogram of the slides' text size
With Map-Reduce we show the text size of each slide, and the number of slides that have that text size.
End of explanation
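The map/reduce pattern itself can be sketched in pure Python (a toy stand-in for what MongoDB executes server-side in JavaScript):

```python
from collections import defaultdict

# Toy Map-Reduce: the mapper emits (text_length, 1) per document and the
# reducer sums the 1s per key, like the JavaScript map/reduce pair above.
def map_reduce(docs, mapper, reducer):
    groups = defaultdict(list)
    for doc in docs:
        for key, value in mapper(doc):
            groups[key].append(value)
    return {key: reducer(values) for key, values in groups.items()}

docs = [{"text": "abc"}, {"text": "ab"}, {}, {"text": "xyz"}]
result = map_reduce(
    docs,
    mapper=lambda d: [(len(d.get("text", "")), 1)],
    reducer=sum,
)
print(result)  # {3: 2, 2: 1, 0: 1}
```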
df = pd.DataFrame(data = [int(r['value']) for r in results],
index = [int(r['_id']) for r in results],
columns=['posts per length'])
df.plot(kind='bar',figsize=(30,10))
Explanation: As a plot:
End of explanation
df.hist()
Explanation: Or a histogram:
End of explanation
list(jisbd17.aggregate( [ {'$project' : { 'Id' : 1 }}, {'$limit': 20} ]))
nposts_by_length = jisbd17.aggregate( [
#{'$match': { 'text' : {'$regex': 'HBase'}}},
{'$project': {
'text' : {'$ifNull' : ['$text', '']}
}},
{'$project' : {
'id' : {'$strLenBytes': '$text'},
'value' : {'$literal' : 1}
}
},
{'$group' : {
'_id' : '$id',
'count' : {'$sum' : '$value'}
}
},
{'$sort' : { '_id' : 1}}
])
list(nposts_by_length)
Explanation: Aggregation Framework
Aggregation framework: https://docs.mongodb.com/manual/reference/operator/aggregation/. And here is an interesting presentation on the topic: https://www.mongodb.com/presentations/aggregation-framework-0?jmp=docs&_ga=1.223708571.1466850754.1477658152
End of explanation
list(jisbd17.aggregate( [
{'$lookup' : {
"from": "jisbd17",
"localField": "xref",
"foreignField": "_id",
"as": "xrefTitles"
}},
{'$project' : {
'_id' : True,
'xref' : True,
'xrefTitles.title' : True
}}
]))
Explanation: Simulating a JOIN: $lookup
The aggregation framework also introduced a construct equivalent to SQL's JOIN. For example, we can show the titles of the referenced slides in addition to their identifiers:
End of explanation
%%bash
cd /tmp && git clone https://github.com/dsevilla/hadoop-hbase-docker.git
ds(84,11)
ds(97,2)
Explanation: HBase (wide-column)
From here on I'll use the HBase Docker image from https://github.com/krejcmat/hadoop-hbase-docker, slightly modified. To start the containers (one master and two "slave"):
git clone https://github.com/dsevilla/hadoop-hbase-docker.git
cd hadoop-hbase-docker
./start-container.sh latest 2
# One master container and 2 slaves simulate a distributed three-node cluster
# The containers start up, the shell enters the master:
./configure-slaves.sh
./start-hadoop.sh
hbase-daemon.sh start thrift # Server for external connections
./start-hbase.sh
Now we can connect to the database. Inside the container, running hbase shell brings up the shell again. In it, we can run queries, create tables, etc.:
status
# Create a table
# Put
# Simple queries
End of explanation
!pip install --upgrade happybase
import happybase
happybase.__version__
host = '127.0.0.1'
hbasecon = happybase.Connection(host)
hbasecon.tables()
ds(103,3)
try:
hbasecon.create_table(
"jisbd17",
{
'slide': dict(bloom_filter_type='ROW',max_versions=1),
'image' : dict(compression='GZ',max_versions=1),
'text' : dict(compression='GZ',max_versions=1),
'xref' : dict(bloom_filter_type='ROWCOL',max_versions=1)
})
except:
print ("Database slides already exists.")
pass
hbasecon.tables()
Explanation: It can also be connected to remotely. From Python, we will use the happybase package:
End of explanation
h_jisbd17 = hbasecon.table('jisbd17')
with h_jisbd17.batch(batch_size=100) as b:
for doc in jisbd17.find():
b.put(doc['_id'], {
'slide:title' : doc.get('title',''),
'slide:notes' : doc.get('notes',''),
'text:' : doc.get('text', ''),
'image:' : str(doc.get('image',''))
})
Explanation: Copying the jisbd17 table from Mongo
It will be done respecting the column families created. In particular, the xref field is left as-is for now; an optimization for it is shown later.
End of explanation
with h_jisbd17.batch(batch_size=100) as b:
for doc in jisbd17.find():
if 'xref' in doc:
for ref in doc['xref']:
b.put(doc['_id'], {
'xref:'+ref : ''
})
list(h_jisbd17.scan(columns=['xref']))
Explanation: For xref we will use an optimization that HBase makes possible:
Rows can grow as much as needed, also in number of columns
The ROWCOL Bloom filter makes looking up a particular column very efficient
IDEA: Use the array elements as column names. This automatically turns that column family into an inverted index:
End of explanation
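The effect of that design can be sketched in pure Python (a toy illustration, not the HBase API): treating each referenced slide id as a key makes "who references slide X?" a direct lookup instead of a scan.

```python
# Toy inverted index over xref, mirroring the HBase column-name trick:
# the answer to "who references jisbd17-002?" is a single dict lookup.
def build_inverted_index(docs):
    index = {}
    for doc in docs:
        for ref in doc.get("xref", []):
            index.setdefault(ref, set()).add(doc["_id"])
    return index

docs = [
    {"_id": "jisbd17-000", "xref": ["jisbd17-010", "jisbd17-002"]},
    {"_id": "jisbd17-005", "xref": ["jisbd17-002"]},
]
index = build_inverted_index(docs)
print(sorted(index["jisbd17-002"]))  # ['jisbd17-000', 'jisbd17-005']
```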
list(h_jisbd17.scan(columns=['xref:jisbd17-002']))
Explanation: And finally the inverted index. It is very efficient because the ROWCOL Bloom filter was used for the xref column family.
End of explanation
h_jisbd17.row('jisbd17-001')
Explanation: Finally, in HBase a scan is a waste of time. The inverse reference should be precomputed and stored in each slide; the lookup then becomes O(1).
Fetching a row with happybase
End of explanation
ds(114)
list(h_jisbd17.scan(filter="KeyOnlyFilter()"))
list(h_jisbd17.scan(filter="PrefixFilter('jisbd17-0')",limit=5))
list(h_jisbd17.scan(filter="ColumnPrefixFilter('t')"))
list(h_jisbd17.scan(filter="RowFilter(<,'binary:jisbd17-1')",limit=5))
list(h_jisbd17.scan(filter="SingleColumnValueFilter('slide', 'title', =,'binary:HBase')"))
Explanation: Filter examples with happybase
End of explanation
%%bash
sudo docker pull neo4j
sudo docker run -d --rm --name neo4j -p 7474:7474 -p 7687:7687 --env NEO4J_AUTH=none neo4j
Explanation: Neo4j (Graphs)
Neo4j's own interface can also be used at http://127.0.0.1:7474.
End of explanation
%%bash
pip install ipython-cypher
pip install py2neo
ds(135,3)
ds(139,2)
ds(148)
ds(150,4)
from py2neo import Graph
graph = Graph('http://localhost:7474/db/data/')
graph.delete_all()
Explanation: We will load the ipython-cypher extension so we can run Cypher queries directly from the notebook. I started the Neo4j image without authentication, for local testing.
We will use a Jupyter Notebook extension called ipython-cypher. It is installed in the virtual machine. Otherwise, it could be installed with:
pip install ipython-cypher
Afterwards, every cell starting with %%cypher and every Python instruction starting with %cypher will be sent to Neo4j for interpretation. We will also use the py2neo library to create the graph:
pip install py2neo
End of explanation
from py2neo import Node
for doc in jisbd17.find():
node = Node("Slide",
name = doc.get('_id'),
title = doc.get('title',''),
notes = doc.get('notes',''),
text = doc.get('text', ''))
graph.create(node)
graph.find_one('Slide', property_key='name', property_value='jisbd17-001')
from py2neo import NodeSelection
NodeSelection(graph,conditions=["_.name='jisbd17-001'"]).first()['title']
Explanation: We import all the slides from MongoDB:
End of explanation
from py2neo import Relationship
for i in range(jisbd17.count() - 1):
slide_pre = NodeSelection(graph,conditions=[
'_.name = \'jisbd17-{:03d}\''.format(i)]).first()
slide_next = NodeSelection(graph).where(
'_.name = \'jisbd17-{:03d}\''.format(i+1)).first()
graph.create(Relationship(slide_pre, "NEXT", slide_next))
Explanation: We will create the :NEXT relationship to indicate the next slide. It is done now with py2neo and later with ipython-cypher.
End of explanation
ds(154,7)
Explanation: The Cypher language
End of explanation
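It can help to remember that, from Python, a Cypher statement is just a string sent through %cypher or py2neo. A hypothetical helper rendering the MERGE pattern used later in this notebook:

```python
# Hypothetical helper: build the Cypher MERGE statement linking two
# slides with a REF relationship, as a plain Python string.
def ref_query(src, dst):
    return (
        "MATCH (f:Slide {name: '%s'}), (t:Slide {name: '%s'}) "
        "MERGE (f)-[:REF]->(t)" % (src, dst)
    )

print(ref_query("jisbd17-000", "jisbd17-002"))
```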
%load_ext cypher
%config CypherMagic.auto_html=False
%config CypherMagic.auto_pandas=True
%%cypher
match (n) return n;
%%cypher
match (n) return n.name;
Explanation: Ipython-cypher
End of explanation
import random
nslides = jisbd17.count()
for doc in jisbd17.find():
for ref in doc.get('xref',['jisbd17-{:03d}'.format(random.randint(1,nslides))]):
slide_from = doc['_id']
slide_to = ref
%cypher MATCH (f:Slide {name: {slide_from}}), (t:Slide {name: {slide_to}}) MERGE (f)-[:REF]->(t)
%config CypherMagic.auto_networkx=False
%config CypherMagic.auto_pandas=False
%%cypher
MATCH p=shortestPath(
(s:Slide {name:"jisbd17-004"})-[*]->(r:Slide {name:"jisbd17-025"})
)
RETURN p
# Slide topics with regular expressions
import cypher
cypher.run("MATCH (n) RETURN n")
!sudo docker stop neo4j
!sudo docker stop mongo
Explanation: We will add the xref relationships present in the slides. For now there were only a few added by hand. For slides with no references, I add one at random.
End of explanation
ds(164,6)
Explanation: Our research work
End of explanation |
7,661 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple RNN Encode-Decoder for Translation
Learning Objectives
1. Learn how to create a tf.data.Dataset for seq2seq problems
1. Learn how to train an encoder-decoder model in Keras
1. Learn how to save the encoder and the decoder as separate models
1. Learn how to piece together the trained encoder and decoder into a translation function
1. Learn how to use the BLEU score to evaluate a translation model
Introduction
In this lab we'll build a translation model from Spanish to English using an RNN encoder-decoder model architecture.
We will start by creating train and eval datasets (using the tf.data.Dataset API) that are typical for seq2seq problems. Then we will use the Keras functional API to train an RNN encoder-decoder model, which will save as two separate models, the encoder and decoder model. Using these two separate pieces we will implement the translation function.
At last, we'll benchmark our results using the industry standard BLEU score.
Step1: Downloading the Data
We'll use a language dataset provided by http
Step2: From the utils_preproc package we have written for you,
we will use the following functions to pre-process our dataset of sentence pairs.
Sentence Preprocessing
The utils_preproc.preprocess_sentence() method does the following
Step3: Sentence Integerizing
The utils_preproc.tokenize() method does the following
Step4: The outputted tokenizer can be used to get back the actual works
from the integers representing them
Step5: Creating the tf.data.Dataset
load_and_preprocess
Exercise 1
Implement a function that will read the raw sentence-pair file
and preprocess the sentences with utils_preproc.preprocess_sentence.
The load_and_preprocess function takes as input
- the path where the sentence-pair file is located
- the number of examples one wants to read in
It returns a tuple whose first component contains the english
preprocessed sentences, while the second component contains the
spanish ones
Step6: load_and_integerize
Exercise 2
Using utils_preproc.tokenize, implement the function load_and_integerize that takes as input the data path along with the number of examples we want to read in and returns the following tuple
Step7: Train and eval splits
We'll split this data 80/20 into train and validation, and we'll use only the first 30K examples, since we'll be training on a single GPU.
Let us set variable for that
Step8: Now let's load and integerize the sentence paris and store the tokenizer for the source and the target language into the int_lang and targ_lang variable respectively
Step9: Let us store the maximal sentence length of both languages into two variables
Step10: We are now using scikit-learn train_test_split to create our splits
Step11: Let's make sure the number of example in each split looks good
Step12: The utils_preproc.int2word function allows you to transform back the integerized sentences into words. Note that the <start> token is alwasy encoded as 1, while the <end> token is always encoded as 0
Step13: Create tf.data dataset for train and eval
Exercise 3
Implement the create_dataset function that takes as input
* encoder_input which is an integer tensor of shape (num_examples, max_length_inp) containing the integerized versions of the source language sentences
* decoder_input which is an integer tensor of shape (num_examples, max_length_targ)containing the integerized versions of the target language sentences
It returns a tf.data.Dataset containing examples for the form
python
((source_sentence, target_sentence), shifted_target_sentence)
where source_sentence and target_sentence are the integer versions of source-target language pairs and shifted_target is the same as target_sentence but with indices shifted by 1.
Remark
Step14: Let's now create the actual train and eval dataset using the function above
Step15: Training the RNN encoder-decoder model
We use an encoder-decoder architecture, however we embed our words into a latent space prior to feeding them into the RNN.
Step16: Exercise 4
Implement the encoder network with Keras functional API. It will
* start with an Input layer that will consume the source language integerized sentences
* then feed them to an Embedding layer of EMBEDDING_DIM dimensions
* which in turn will pass the embeddings to a GRU recurrent layer with HIDDEN_UNITS
The output of the encoder will be the encoder_outputs and the encoder_state.
Step17: Exercise 5
Implement the decoder network, which is very similar to the encoder network.
It will
* start with an Input layer that will consume the target language integerized sentences
* then feed that input to an Embedding layer of EMBEDDING_DIM dimensions
* which in turn will pass the embeddings to a GRU recurrent layer with HIDDEN_UNITS
Important
Step18: The last part of the encoder-decoder architecture is a softmax Dense layer that will create the next word probability vector or next word predictions from the decoder_output
Step19: Exercise 6
To be able to train the encoder-decoder network defined above, create a trainable Keras Model by specifying which are the inputs and the outputs of our problem. They should correspond exactly to what the type of input/output in our train and eval tf.data.Dataset since that's what will be fed to the inputs and outputs we declare while instantiating the Keras Model.
While compiling our model, we should make sure that the loss is the sparse_categorical_crossentropy so that we can compare the true word indices for the target language as outputted by our train tf.data.Dataset with the next word predictions vector as outputted by the decoder
Step20: Let's now train the model!
Step21: Implementing the translation (or decoding) function
We can't just use model.predict(), because we don't know all the inputs we used during training. We only know the encoder_input (source language) but not the decoder_input (target language), which is what we want to predict (i.e., the translation of the source language)!
We do however know the first token of the decoder input, which is the <start> token. So using this plus the state of the encoder RNN, we can predict the next token. We will then use that token to be the second token of decoder input, and continue like this until we predict the <end> token, or we reach some defined max length.
So, the strategy now is to split our trained network into two independent Keras models
Step23: Exercise 8
Now that we have a separate encoder and a separate decoder, implement a translation function, to which we will give the generic name of decode_sequences (to stress that this procedure is general to all seq2seq problems).
decode_sequences will take as input
* input_seqs which is the integerized source language sentence tensor that the encoder can consume
* output_tokenizer which is the target languague tokenizer we will need to extract back words from predicted word integers
* max_decode_length which is the length after which we stop decoding if the <stop> token has not been predicted
Note
Step24: Now we're ready to predict!
Step25: Checkpoint Model
Exercise 9
Save
* model to disk as the file model.h5
* encoder_model to disk as the file encoder_model.h5
* decoder_model to disk as the file decoder_model.h5
Step26: Evaluation Metric (BLEU)
Unlike say, image classification, there is no one right answer for a machine translation. However our current loss metric, cross entropy, only gives credit when the machine translation matches the exact same word in the same order as the reference translation.
Many attempts have been made to develop a better metric for natural language evaluation. The most popular currently is Bilingual Evaluation Understudy (BLEU).
It is quick and inexpensive to calculate.
It allows flexibility for the ordering of words and phrases.
It is easy to understand.
It is language independent.
It correlates highly with human evaluation.
It has been widely adopted.
The score is from 0 to 1, where 1 is an exact match.
It works by counting matching n-grams between the machine and reference texts, regardless of order. BLEU-4 counts matching n-grams from 1-4 (1-gram, 2-gram, 3-gram and 4-gram). It is common to report both BLEU-1 and BLEU-4
It still is imperfect, since it gives no credit to synonyms and so human evaluation is still best when feasible. However BLEU is commonly considered the best among bad options for an automated metric.
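The clipped n-gram counting at the heart of BLEU can be sketched in a few lines of plain Python (a toy illustration of the idea, not the NLTK implementation used later):

```python
from collections import Counter

def modified_ngram_precision(reference, candidate, n):
    # Clip each candidate n-gram count by its count in the reference,
    # then divide by the total number of candidate n-grams.
    ref = Counter(zip(*[reference[i:] for i in range(n)]))
    cand = Counter(zip(*[candidate[i:] for i in range(n)]))
    overlap = sum(min(c, ref[g]) for g, c in cand.items())
    return overlap / max(sum(cand.values()), 1)

reference = "the cat is on the mat".split()
candidate = "the cat sat on the mat".split()
print(modified_ngram_precision(reference, candidate, 1))  # 5/6 ~ 0.833
print(modified_ngram_precision(reference, candidate, 2))  # 3/5 = 0.6
```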
The NLTK framework has an implementation that we will use.
We can't calculate BLEU during training, because at that time the correct decoder input is used.
For more info
Step27: Exercise 10
Let's now average the bleu_1 and bleu_4 scores for all the sentence pairs in the eval set. The next cell takes some time to run, the bulk of which is decoding the 6000 sentences in the validation set. Please wait until it completes. | Python Code:
import os
import pickle
import sys
import nltk
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
import tensorflow as tf
from tensorflow.keras.layers import (
Dense,
Embedding,
GRU,
Input,
)
from tensorflow.keras.models import (
load_model,
Model,
)
import utils_preproc
print(tf.__version__)
SEED = 0
MODEL_PATH = 'translate_models/baseline'
DATA_URL = 'http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip'
LOAD_CHECKPOINT = False
tf.random.set_seed(SEED)
Explanation: Simple RNN Encoder-Decoder for Translation
Learning Objectives
1. Learn how to create a tf.data.Dataset for seq2seq problems
1. Learn how to train an encoder-decoder model in Keras
1. Learn how to save the encoder and the decoder as separate models
1. Learn how to piece together the trained encoder and decoder into a translation function
1. Learn how to use the BLUE score to evaluate a translation model
Introduction
In this lab we'll build a translation model from Spanish to English using an RNN encoder-decoder model architecture.
We will start by creating train and eval datasets (using the tf.data.Dataset API) that are typical for seq2seq problems. Then we will use the Keras functional API to train an RNN encoder-decoder model, which will save as two separate models, the encoder and decoder model. Using these two separate pieces we will implement the translation function.
At last, we'll benchmark our results using the industry standard BLEU score.
End of explanation
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin=DATA_URL, extract=True)
path_to_file = os.path.join(
os.path.dirname(path_to_zip),
"spa-eng/spa.txt"
)
print("Translation data stored at:", path_to_file)
data = pd.read_csv(
path_to_file, sep='\t', header=None, names=['english', 'spanish'])
data.sample(3)
Explanation: Downloading the Data
We'll use a language dataset provided by http://www.manythings.org/anki/. The dataset contains Spanish-English translation pairs in the format:
May I borrow this book? ¿Puedo tomar prestado este libro?
The dataset is a curated list of 120K translation pairs from http://tatoeba.org/, a platform for community contributed translations by native speakers.
End of explanation
raw = [
"No estamos comiendo.",
"Está llegando el invierno.",
"El invierno se acerca.",
"Tom no comio nada.",
"Su pierna mala le impidió ganar la carrera.",
"Su respuesta es erronea.",
"¿Qué tal si damos un paseo después del almuerzo?"
]
processed = [utils_preproc.preprocess_sentence(s) for s in raw]
processed
Explanation: From the utils_preproc package we have written for you,
we will use the following functions to pre-process our dataset of sentence pairs.
Sentence Preprocessing
The utils_preproc.preprocess_sentence() method does the following:
1. Converts sentence to lower case
2. Adds a space between punctuation and words
3. Replaces tokens that aren't a-z or punctuation with space
4. Adds <start> and <end> tokens
For example:
End of explanation
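A minimal re-implementation sketch of those four steps, assuming the behavior described above (the real function lives in utils_preproc):

```python
import re

def preprocess_sentence(sentence):
    # Hypothetical re-implementation of the four steps described above.
    s = sentence.lower().strip()
    s = re.sub(r"([?.!,¿])", r" \1 ", s)   # space between punctuation and words
    s = re.sub(r"[^a-z?.!,¿]+", " ", s)    # everything else becomes a space
    s = re.sub(r"\s+", " ", s).strip()
    return "<start> " + s + " <end>"

print(preprocess_sentence("May I borrow this book?"))
# <start> may i borrow this book ? <end>
```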
integerized, tokenizer = utils_preproc.tokenize(processed)
integerized
Explanation: Sentence Integerizing
The utils_preproc.tokenize() method does the following:
Splits each sentence into a token list
Maps each token to an integer
Pads to length of longest sentence
It returns an instance of a Keras Tokenizer
containing the token-integer mapping along with the integerized sentences:
End of explanation
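The word-to-integer mapping plus padding can be sketched without Keras (a toy stand-in for the Tokenizer; index 0 is reserved for padding):

```python
def tokenize(sentences):
    # Assign integers in order of first appearance (starting at 1;
    # 0 is reserved for padding), then right-pad every sentence to
    # the length of the longest one.
    vocab, integerized = {}, []
    for s in sentences:
        integerized.append(
            [vocab.setdefault(tok, len(vocab) + 1) for tok in s.split()])
    max_len = max(len(ids) for ids in integerized)
    padded = [ids + [0] * (max_len - len(ids)) for ids in integerized]
    return padded, vocab

padded, vocab = tokenize(["<start> hola <end>", "<start> hola mundo <end>"])
print(padded)  # [[1, 2, 3, 0], [1, 2, 4, 3]]
```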
tokenizer.sequences_to_texts(integerized)
Explanation: The outputted tokenizer can be used to get back the actual words
from the integers representing them:
End of explanation
def load_and_preprocess(path, num_examples):
with open(path_to_file, 'r') as fp:
lines = fp.read().strip().split('\n')
sentence_pairs = # TODO 1a
return zip(*sentence_pairs)
en, sp = load_and_preprocess(path_to_file, num_examples=10)
print(en[-1])
print(sp[-1])
Explanation: Creating the tf.data.Dataset
load_and_preprocess
Exercise 1
Implement a function that will read the raw sentence-pair file
and preprocess the sentences with utils_preproc.preprocess_sentence.
The load_and_preprocess function takes as input
- the path where the sentence-pair file is located
- the number of examples one wants to read in
It returns a tuple whose first component contains the english
preprocessed sentences, while the second component contains the
spanish ones:
End of explanation
def load_and_integerize(path, num_examples=None):
targ_lang, inp_lang = load_and_preprocess(path, num_examples)
# TODO 1b
input_tensor, inp_lang_tokenizer = # TODO
target_tensor, targ_lang_tokenizer = # TODO
return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer
Explanation: load_and_integerize
Exercise 2
Using utils_preproc.tokenize, implement the function load_and_integerize that takes as input the data path along with the number of examples we want to read in and returns the following tuple:
python
(input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer)
where
input_tensor is an integer tensor of shape (num_examples, max_length_inp) containing the integerized versions of the source language sentences
target_tensor is an integer tensor of shape (num_examples, max_length_targ) containing the integerized versions of the target language sentences
inp_lang_tokenizer is the source language tokenizer
targ_lang_tokenizer is the target language tokenizer
End of explanation
TEST_PROP = 0.2
NUM_EXAMPLES = 30000
Explanation: Train and eval splits
We'll split this data 80/20 into train and validation, and we'll use only the first 30K examples, since we'll be training on a single GPU.
Let us set variable for that:
End of explanation
input_tensor, target_tensor, inp_lang, targ_lang = load_and_integerize(
path_to_file, NUM_EXAMPLES)
Explanation: Now let's load and integerize the sentence pairs and store the tokenizers for the source and the target language in the inp_lang and targ_lang variables respectively:
End of explanation
max_length_targ = target_tensor.shape[1]
max_length_inp = input_tensor.shape[1]
Explanation: Let us store the maximal sentence length of both languages into two variables:
End of explanation
splits = train_test_split(
input_tensor, target_tensor, test_size=TEST_PROP, random_state=SEED)
input_tensor_train = splits[0]
input_tensor_val = splits[1]
target_tensor_train = splits[2]
target_tensor_val = splits[3]
Explanation: We are now using scikit-learn train_test_split to create our splits:
End of explanation
(len(input_tensor_train), len(target_tensor_train),
len(input_tensor_val), len(target_tensor_val))
Explanation: Let's make sure the number of examples in each split looks good:
End of explanation
print("Input Language; int to word mapping")
print(input_tensor_train[0])
print(utils_preproc.int2word(inp_lang, input_tensor_train[0]), '\n')
print("Target Language; int to word mapping")
print(target_tensor_train[0])
print(utils_preproc.int2word(targ_lang, target_tensor_train[0]))
Explanation: The utils_preproc.int2word function allows you to transform back the integerized sentences into words. Note that the <start> token is always encoded as 1, while the <end> token is always encoded as 0:
End of explanation
def create_dataset(encoder_input, decoder_input):
# shift ahead by 1
target = tf.roll(decoder_input, -1, 1)
# replace last column with 0s
zeros = tf.zeros([target.shape[0], 1], dtype=tf.int32)
target = tf.concat((target[:, :-1], zeros), axis=-1)
dataset = # TODO
return dataset
Explanation: Create tf.data dataset for train and eval
Exercise 3
Implement the create_dataset function that takes as input
* encoder_input which is an integer tensor of shape (num_examples, max_length_inp) containing the integerized versions of the source language sentences
* decoder_input which is an integer tensor of shape (num_examples, max_length_targ)containing the integerized versions of the target language sentences
It returns a tf.data.Dataset containing examples for the form
python
((source_sentence, target_sentence), shifted_target_sentence)
where source_sentence and target_sentence are the integer versions of source-target language pairs and shifted_target is the same as target_sentence but with indices shifted by 1.
Remark: In the training code, source_sentence (resp. target_sentence) will be fed as the encoder (resp. decoder) input, while shifted_target will be used to compute the cross-entropy loss by comparing the decoder output with the shifted target sentences.
End of explanation
BUFFER_SIZE = len(input_tensor_train)
BATCH_SIZE = 64
train_dataset = create_dataset(
input_tensor_train, target_tensor_train).shuffle(
BUFFER_SIZE).repeat().batch(BATCH_SIZE, drop_remainder=True)
eval_dataset = create_dataset(
input_tensor_val, target_tensor_val).batch(
BATCH_SIZE, drop_remainder=True)
Explanation: Let's now create the actual train and eval dataset using the function above:
End of explanation
EMBEDDING_DIM = 256
HIDDEN_UNITS = 1024
INPUT_VOCAB_SIZE = len(inp_lang.word_index) + 1
TARGET_VOCAB_SIZE = len(targ_lang.word_index) + 1
Explanation: Training the RNN encoder-decoder model
We use an encoder-decoder architecture; however, we embed our words into a latent space prior to feeding them into the RNN.
End of explanation
encoder_inputs = Input(shape=(None,), name="encoder_input")
encoder_inputs_embedded = # TODO
encoder_rnn = # TODO
encoder_outputs, encoder_state = encoder_rnn(encoder_inputs_embedded)
Explanation: Exercise 4
Implement the encoder network with Keras functional API. It will
* start with an Input layer that will consume the source language integerized sentences
* then feed them to an Embedding layer of EMBEDDING_DIM dimensions
* which in turn will pass the embeddings to a GRU recurrent layer with HIDDEN_UNITS
The output of the encoder will be the encoder_outputs and the encoder_state.
End of explanation
decoder_inputs = Input(shape=(None,), name="decoder_input")
decoder_inputs_embedded = # TODO
decoder_rnn = # TODO
decoder_outputs, decoder_state = decoder_rnn(
decoder_inputs_embedded, initial_state=encoder_state)
Explanation: Exercise 5
Implement the decoder network, which is very similar to the encoder network.
It will
* start with an Input layer that will consume the source language integerized sentences
* then feed that input to an Embedding layer of EMBEDDING_DIM dimensions
* which in turn will pass the embeddings to a GRU recurrent layer with HIDDEN_UNITS
Important: The main difference from the encoder is that the recurrent GRU layer will take as input not only the decoder input embeddings, but also the encoder_state output by the encoder above. This is where the two networks are linked!
The output of the decoder will be the decoder_outputs and the decoder_state.
End of explanation
decoder_dense = Dense(TARGET_VOCAB_SIZE, activation='softmax')
predictions = decoder_dense(decoder_outputs)
Explanation: The last part of the encoder-decoder architecture is a softmax Dense layer that will create the next-word probability vector (the next-word predictions) from the decoder_outputs:
End of explanation
model = # TODO
model.compile(# TODO)
model.summary()
Explanation: Exercise 6
To be able to train the encoder-decoder network defined above, create a trainable Keras Model by specifying which are the inputs and the outputs of our problem. They should correspond exactly to the types of input/output in our train and eval tf.data.Dataset, since that's what will be fed to the inputs and outputs we declare while instantiating the Keras Model.
While compiling our model, we should make sure that the loss is the sparse_categorical_crossentropy so that we can compare the true word indices for the target language as outputted by our train tf.data.Dataset with the next word predictions vector as outputted by the decoder:
End of explanation
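To see what the sparse categorical cross-entropy loss computes per token, here is a hand-rolled sketch (illustrative only — Keras applies this over whole batches of predictions):

```python
import math

def sparse_categorical_crossentropy(true_index, predicted_probs):
    # Negative log of the probability the model assigned to the true
    # next word's integer index.
    return -math.log(predicted_probs[true_index])

# the model assigns probability 0.5 to the correct next word (index 2)
print(round(sparse_categorical_crossentropy(2, [0.1, 0.2, 0.5, 0.2]), 4))  # 0.6931
```

The "sparse" part means the true labels are plain integer indices, which is exactly what our integerized target sentences contain.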
STEPS_PER_EPOCH = len(input_tensor_train)//BATCH_SIZE
EPOCHS = 1
history = model.fit(
train_dataset,
steps_per_epoch=STEPS_PER_EPOCH,
validation_data=eval_dataset,
epochs=EPOCHS
)
Explanation: Let's now train the model!
End of explanation
if LOAD_CHECKPOINT:
encoder_model = load_model(os.path.join(MODEL_PATH, 'encoder_model.h5'))
decoder_model = load_model(os.path.join(MODEL_PATH, 'decoder_model.h5'))
else:
encoder_model = # TODO
decoder_state_input = Input(shape=(HIDDEN_UNITS,), name="decoder_state_input")
# Reuses weights from the decoder_rnn layer
decoder_outputs, decoder_state = decoder_rnn(
decoder_inputs_embedded, initial_state=decoder_state_input)
# Reuses weights from the decoder_dense layer
predictions = decoder_dense(decoder_outputs)
decoder_model = # TODO
Explanation: Implementing the translation (or decoding) function
We can't just use model.predict(), because we don't know all the inputs we used during training. We only know the encoder_input (source language) but not the decoder_input (target language), which is what we want to predict (i.e., the translation of the source language)!
We do however know the first token of the decoder input, which is the <start> token. So using this plus the state of the encoder RNN, we can predict the next token. We will then use that token as the second token of the decoder input, and continue like this until we predict the <end> token, or we reach some defined max length.
So, the strategy now is to split our trained network into two independent Keras models:
an encoder model with signature encoder_inputs -> encoder_state
a decoder model with signature [decoder_inputs, decoder_state_input] -> [predictions, decoder_state]
This way, we will be able to encode the source language sentence into the vector encoder_state using the encoder and feed it to the decoder model along with the <start> token at step 1.
Given that input, the decoder will produce the first word of the translation, by sampling from the predictions vector (for simplicity, our sampling strategy here will be to take the next word to be the one whose index has the maximum probability in the predictions vector) along with a new state vector, the decoder_state.
At this point, we can feed the predicted first word as well as the new decoder_state back to the decoder to predict the translation's second word.
This process can be continued until the decoder produces the <end> token.
This is how we will implement our translation (or decoding) function, but let us first extract a separate encoder and a separate decoder from our trained encoder-decoder model.
Remark: If we have already trained and saved the models (i.e, LOAD_CHECKPOINT is True) we will just load the models, otherwise, we extract them from the trained network above by explicitly creating the encoder and decoder Keras Models with the signature we want.
Exercise 7
Create the Keras Model encoder_model with signature encoder_inputs -> encoder_state and the Keras Model decoder_model with signature [decoder_inputs, decoder_state_input] -> [predictions, decoder_state].
End of explanation
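The greedy sampling strategy described above — take the word whose index has the maximum probability — can be sketched in plain Python (a stand-in for an argmax over one row of the predictions tensor; the function name is illustrative):

```python
def greedy_next_token(prediction_row):
    # Return the index with the highest probability.
    best_idx, best_p = 0, prediction_row[0]
    for idx, p in enumerate(prediction_row):
        if p > best_p:
            best_idx, best_p = idx, p
    return best_idx

# toy next-word distribution over a 4-word vocabulary
print(greedy_next_token([0.1, 0.2, 0.6, 0.1]))  # 2
```

In the actual decode loop this is applied to every sentence in the batch at every time step.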
def decode_sequences(input_seqs, output_tokenizer, max_decode_length=50):
    """
    Arguments:
        input_seqs: int tensor of shape (BATCH_SIZE, SEQ_LEN)
        output_tokenizer: Tokenizer used to convert from int to words

    Returns translated sentences
    """
# Encode the input as state vectors.
states_value = encoder_model.predict(input_seqs)
# Populate the first character of target sequence with the start character.
batch_size = input_seqs.shape[0]
target_seq = tf.ones([batch_size, 1])
decoded_sentences = [[] for _ in range(batch_size)]
for i in range(max_decode_length):
output_tokens, decoder_state = decoder_model.predict(
[target_seq, states_value])
# Sample a token
sampled_token_index = # TODO
tokens = # TODO
for j in range(batch_size):
decoded_sentences[j].append(tokens[j])
# Update the target sequence (of length 1).
target_seq = tf.expand_dims(tf.constant(sampled_token_index), axis=-1)
# Update states
states_value = decoder_state
return decoded_sentences
Explanation: Exercise 8
Now that we have a separate encoder and a separate decoder, implement a translation function, to which we will give the generic name of decode_sequences (to stress that this procedure is general to all seq2seq problems).
decode_sequences will take as input
* input_seqs which is the integerized source language sentence tensor that the encoder can consume
* output_tokenizer which is the target language tokenizer we will need to extract back words from the predicted word integers
* max_decode_length which is the length after which we stop decoding if the <end> token has not been predicted
Note: Now that the encoder and decoder have been turned into Keras models, to feed them their input, we need to use the .predict method.
End of explanation
sentences = [
"No estamos comiendo.",
"Está llegando el invierno.",
"El invierno se acerca.",
"Tom no comio nada.",
"Su pierna mala le impidió ganar la carrera.",
"Su respuesta es erronea.",
"¿Qué tal si damos un paseo después del almuerzo?"
]
reference_translations = [
"We're not eating.",
"Winter is coming.",
"Winter is coming.",
"Tom ate nothing.",
"His bad leg prevented him from winning the race.",
"Your answer is wrong.",
"How about going for a walk after lunch?"
]
machine_translations = decode_sequences(
utils_preproc.preprocess(sentences, inp_lang),
targ_lang,
max_length_targ
)
for i in range(len(sentences)):
print('-')
print('INPUT:')
print(sentences[i])
print('REFERENCE TRANSLATION:')
print(reference_translations[i])
print('MACHINE TRANSLATION:')
print(machine_translations[i])
Explanation: Now we're ready to predict!
End of explanation
if not LOAD_CHECKPOINT:
os.makedirs(MODEL_PATH, exist_ok=True)
# TODO
with open(os.path.join(MODEL_PATH, 'encoder_tokenizer.pkl'), 'wb') as fp:
pickle.dump(inp_lang, fp)
with open(os.path.join(MODEL_PATH, 'decoder_tokenizer.pkl'), 'wb') as fp:
pickle.dump(targ_lang, fp)
Explanation: Checkpoint Model
Exercise 9
Save
* model to disk as the file model.h5
* encoder_model to disk as the file encoder_model.h5
* decoder_model to disk as the file decoder_model.h5
End of explanation
def bleu_1(reference, candidate):
reference = list(filter(lambda x: x != '', reference)) # remove padding
candidate = list(filter(lambda x: x != '', candidate)) # remove padding
smoothing_function = nltk.translate.bleu_score.SmoothingFunction().method1
return nltk.translate.bleu_score.sentence_bleu(
reference, candidate, (1,), smoothing_function)
def bleu_4(reference, candidate):
reference = list(filter(lambda x: x != '', reference)) # remove padding
candidate = list(filter(lambda x: x != '', candidate)) # remove padding
smoothing_function = nltk.translate.bleu_score.SmoothingFunction().method1
return nltk.translate.bleu_score.sentence_bleu(
reference, candidate, (.25, .25, .25, .25), smoothing_function)
Explanation: Evaluation Metric (BLEU)
Unlike, say, image classification, there is no single right answer for a machine translation. However, our current loss metric, cross entropy, only gives credit when the machine translation matches the exact same word in the same order as the reference translation.
Many attempts have been made to develop a better metric for natural language evaluation. The most popular currently is Bilingual Evaluation Understudy (BLEU).
It is quick and inexpensive to calculate.
It allows flexibility for the ordering of words and phrases.
It is easy to understand.
It is language independent.
It correlates highly with human evaluation.
It has been widely adopted.
The score is from 0 to 1, where 1 is an exact match.
It works by counting matching n-grams between the machine and reference texts, regardless of order. BLEU-4 counts matching n-grams from 1-4 (1-gram, 2-gram, 3-gram and 4-gram). It is common to report both BLEU-1 and BLEU-4
It still is imperfect, since it gives no credit to synonyms and so human evaluation is still best when feasible. However BLEU is commonly considered the best among bad options for an automated metric.
The NLTK framework has an implementation that we will use.
We can't calculate BLEU during training, because at that time the correct decoder input is used. Instead we'll calculate it now.
For more info: https://machinelearningmastery.com/calculate-bleu-score-for-text-python/
End of explanation
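The 1-gram matching idea behind BLEU-1 can be illustrated with a minimal clipped unigram precision (a sketch only — it omits the brevity penalty and the smoothing that nltk adds for us):

```python
def unigram_precision(reference, candidate):
    # Fraction of candidate words that also appear in the reference,
    # with each reference word allowed to match at most once (clipping).
    remaining = list(reference)
    matches = 0
    for word in candidate:
        if word in remaining:
            remaining.remove(word)
            matches += 1
    return matches / len(candidate)

print(unigram_precision(["winter", "is", "coming"],
                        ["winter", "is", "here"]))  # 0.666...
```

Two of the three candidate words match the reference regardless of order, giving 2/3 — word order only starts to matter for the higher n-grams in BLEU-4.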
%%time
num_examples = len(input_tensor_val)
bleu_1_total = 0
bleu_4_total = 0
for idx in range(num_examples):
reference_sentence = utils_preproc.int2word(
targ_lang, target_tensor_val[idx][1:])
decoded_sentence = decode_sequences(
input_tensor_val[idx:idx+1], targ_lang, max_length_targ)[0]
bleu_1_total += # TODO
bleu_4_total += # TODO
print('BLEU 1: {}'.format(bleu_1_total/num_examples))
print('BLEU 4: {}'.format(bleu_4_total/num_examples))
Explanation: Exercise 10
Let's now average the bleu_1 and bleu_4 scores for all the sentence pairs in the eval set. The next cell takes some time to run; the bulk of it is spent decoding the 6000 sentences in the validation set. Please wait until it completes.
End of explanation |
7,662 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab #0 - jupyter notebook autograder test
Copyright 2016 © document created by TeamLab.Gachon@gmail.com
Introduction
After the brief surprise of a very different programming environment, let's get through the first Lab. The first Lab is not difficult at all; you will simply learn how to submit a Lab in the jupyter notebook (a.k.a. ipython notebook) environment rather than in a Linux environment. Students who have taken Gachon CS50 - Introduction to Programming with Python should be able to submit it without any difficulty.
Downloading the assignment
First, to download the Lab, install gachon-autograder-client, the assignment-download program, into your python interpreter as shown below. Note that this installation only needs to be performed once for all future Labs.
bash
pip install git+https://github.com/TeamLab/gachon-autograder-client.git
Step1: If you enter the source code above into a .py file or a jupyter notebook and run it with Python, a file named "nb_arithmetic_functions.ipynb" will be created. Open it with jupyter notebook, or move to the folder containing the file in a console window (cmd) and enter the following to run it.
jupyter notebook nb_arithmetic_functions.ipynb
nb_arithmetic_functions code structure
This Lab was made simply to test the autograder, so you only need to write the two functions below.
Function name | Role
-------- | ---
addition | Returns the sum of the two variables a, b, given as integers or floats
minus | Returns the difference of the two variables a, b, given as integers or floats
Now modify the following functions so that the two functions below produce the correct results.
Problem #1 - the addition function
Step2: Problem #2 - the minus function
Step3: Submitting your results
Since this Lab is easy, submitting the assignment is not hard either. To submit this Lab, simply run the following code and your grading results will be returned automatically.
If you submit the assignment without problems, all the results below will be marked PASS. | Python Code:
import gachon_autograder_client as g_autograder
EMAIL = "#YOUR_EMAIL"
PASSWORD = "#YOUR_PASSWORD"
ASSIGNMENT_NAME = "nb_test"
g_autograder.get_assignment(EMAIL, PASSWORD, ASSIGNMENT_NAME)
Explanation: Lab #0 - jupyter notebook autograder test
Copyright 2016 © document created by TeamLab.Gachon@gmail.com
Introduction
After the brief surprise of a very different programming environment, let's get through the first Lab. The first Lab is not difficult at all; you will simply learn how to submit a Lab in the jupyter notebook (a.k.a. ipython notebook) environment rather than in a Linux environment. Students who have taken Gachon CS50 - Introduction to Programming with Python should be able to submit it without any difficulty.
Downloading the assignment
First, to download the Lab, install gachon-autograder-client, the assignment-download program, into your python interpreter as shown below. Note that this installation only needs to be performed once for all future Labs.
bash
pip install git+https://github.com/TeamLab/gachon-autograder-client.git
Next, let's download the Lab template file. To download it, create a python file or a jupyter notebook file and run the code below.
End of explanation
def addition(a, b):
result = None
return result
# 실행결과
print (addition(5, 3))
print (addition(10, 5))
Explanation: If you enter the source code above into a .py file or a jupyter notebook and run it with Python, a file named "nb_arithmetic_functions.ipynb" will be created. Open it with jupyter notebook, or move to the folder containing the file in a console window (cmd) and enter the following to run it.
jupyter notebook nb_arithmetic_functions.ipynb
nb_arithmetic_functions code structure
This Lab was made simply to test the autograder, so you only need to write the two functions below.
Function name | Role
-------- | ---
addition | Returns the sum of the two variables a, b, given as integers or floats
minus | Returns the difference of the two variables a, b, given as integers or floats
Now modify the following functions so that the two functions below produce the correct results.
Problem #1 - the addition function
End of explanation
def minus(a, b):
result = None
return result
# 실행결과
print (minus(5, 3))
print (minus(10, 5))
Explanation: Problem #2 - minus 함수
End of explanation
import gachon_autograder_client as g_autograder
EMAIL = "#YOUR_EMAIL"
PASSWORD = "#YOUR_PASSWORD"
ASSIGNMENT_FILE_NAME = "nb_arithmetic_functions.ipynb"
g_autograder.submit_assignment(EMAIL, PASSWORD, ASSIGNMENT_FILE_NAME)
Explanation: Submitting your results
Since this Lab is easy, submitting the assignment is not hard either. To submit this Lab, simply run the following code and your grading results will be returned automatically.
If you submit the assignment without problems, all the results below will be marked PASS.
End of explanation |
7,663 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interactive Image Processing with Numba and Bokeh
This demo shows off how interactive image processing can be done in the notebook, using Numba for numerics, Bokeh for plotting, and IPython interactors for widgets. The demo runs entirely inside the IPython notebook, with no Bokeh server required.
Numba must be installed in order to run this demo. To run, click on Cell->Run All in the top menu, then scroll down to individual examples and play around with their controls.
Step1: Gaussian Blur
This first section demonstrates performing a simple Gaussian blur on an image. It presents the image, as well as a slider that controls how much blur is applied. Numba is used to compile the python blur kernel, which is invoked when the user modifies the slider.
Note
Step2: 3x3 Image Kernels
Many image processing filters can be expressed as 3x3 matrices. This more sophisticated example demonstrates how numba can be used to compile kernels for arbitrary 3x3 kernels, and then provides several predefined kernels for the user to experiment with.
The UI presents the image to process (along with a dropdown to select a different image) as well as a dropdown that lets the user select which kernel to apply. Additionally, there are sliders that permit adjustment of the bias and scale of the final greyscale image.
Note
Step4: Wavelet Decomposition
This last example demostrates a Haar wavelet decomposition using a Numba-compiled function. Play around with the slider to see differnet levels of decomposition of the image. | Python Code:
from __future__ import print_function, division
from timeit import default_timer as timer
from bokeh.plotting import figure, show, output_notebook
from bokeh.models import GlyphRenderer, LinearColorMapper
from numba import jit, njit
from IPython.html.widgets import interact
import numpy as np
import scipy.misc
output_notebook()
Explanation: Interactive Image Processing with Numba and Bokeh
This demo shows off how interactive image processing can be done in the notebook, using Numba for numerics, Bokeh for plotting, and IPython interactors for widgets. The demo runs entirely inside the IPython notebook, with no Bokeh server required.
Numba must be installed in order to run this demo. To run, click on Cell->Run All in the top menu, then scroll down to individual examples and play around with their controls.
End of explanation
# smaller image
img_blur = (scipy.misc.ascent()[::-1,:]/255.0)[:250, :250].copy(order='C')
palette = ['#%02x%02x%02x' %(i,i,i) for i in range(256)]
width, height = img_blur.shape
p_blur = figure(x_range=(0, width), y_range=(0, height))
p_blur.image(image=[img_blur], x=[0], y=[0], dw=[width], dh=[height], palette=palette, name='blur')
@njit
def blur(outimg, img, amt):
iw, ih = img.shape
for i in range(amt, iw-amt):
for j in range(amt, ih-amt):
px = 0.
for w in range(-amt//2, amt//2):
for h in range(-amt//2, amt//2):
px += img[i+w, j+h]
outimg[i, j]= px/(amt*amt)
def update(i=0):
level = 2*i + 1
out = img_blur.copy()
ts = timer()
blur(out, img_blur, level)
te = timer()
print('blur takes:', te - ts)
renderer = p_blur.select(dict(name="blur", type=GlyphRenderer))
ds = renderer[0].data_source
ds.data['image'] = [out]
ds.push_notebook()
show(p_blur)
interact(update, i=(0, 10))
Explanation: Gaussian Blur
This first section demonstrates performing a simple Gaussian blur on an image. It presents the image, as well as a slider that controls how much blur is applied. Numba is used to compile the python blur kernel, which is invoked when the user modifies the slider.
Note: This simple example does not handle the edge case, so the edge of the image will remain unblurred as the slider is increased.
End of explanation
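The averaging that the blur kernel performs can be seen in one dimension. This pure-Python sketch (illustrative, not part of the demo) replaces each interior sample with the mean of an amt-wide window, leaving the edges untouched just like the demo does:

```python
def box_average_1d(signal, amt):
    # 1-D analogue of blur(): average an `amt`-wide window around each
    # interior sample; edge samples are left as-is.
    out = list(signal)
    half = amt // 2
    for i in range(half, len(signal) - half):
        window = signal[i - half:i + half + 1]
        out[i] = sum(window) / len(window)
    return out

# a single spike gets spread across its neighbours
print(box_average_1d([0, 0, 9, 0, 0], 3))  # [0, 3.0, 3.0, 3.0, 0]
```

Increasing amt widens the window, which is exactly what moving the slider does to the 2-D version.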
@jit
def getitem(img, x, y):
w, h = img.shape
if x >= w:
x = w - 1 - (x - w)
if y >= h:
y = h - 1 - (y - h)
return img[x, y]
def filter_factory(kernel):
ksum = np.sum(kernel)
if ksum == 0:
ksum = 1
k9 = kernel / ksum
@jit
def kernel_apply(img, out, x, y):
tmp = 0
for i in range(3):
for j in range(3):
tmp += img[x+i-1, y+j-1] * k9[i, j]
out[x, y] = tmp
@jit
def kernel_apply_edge(img, out, x, y):
tmp = 0
for i in range(3):
for j in range(3):
tmp += getitem(img, x+i-1, y+j-1) * k9[i, j]
out[x, y] = tmp
@jit
def kernel_k9(img, out):
# Loop through all internals
for x in range(1, img.shape[0] -1):
for y in range(1, img.shape[1] -1):
kernel_apply(img, out, x, y)
# Loop through all the edges
for x in range(img.shape[0]):
kernel_apply_edge(img, out, x, 0)
kernel_apply_edge(img, out, x, img.shape[1] - 1)
for y in range(img.shape[1]):
kernel_apply_edge(img, out, 0, y)
kernel_apply_edge(img, out, img.shape[0] - 1, y)
return kernel_k9
average = np.array([
[1, 1, 1],
[1, 1, 1],
[1, 1, 1],
], dtype=np.float32)
sharpen = np.array([
[-1, -1, -1],
[-1, 12, -1],
[-1, -1, -1],
], dtype=np.float32)
edge = np.array([
[ 0, -1, 0],
[-1, 4, -1],
[ 0, -1, 0],
], dtype=np.float32)
edge_h = np.array([
[ 0, 0, 0],
[-1, 2, -1],
[ 0, 0, 0],
], dtype=np.float32)
edge_v = np.array([
[0, -1, 0],
[0, 2, 0],
[0, -1, 0],
], dtype=np.float32)
gradient_h = np.array([
[-1, -1, -1],
[ 0, 0, 0],
[ 1, 1, 1],
], dtype=np.float32)
gradient_v = np.array([
[-1, 0, 1],
[-1, 0, 1],
[-1, 0, 1],
], dtype=np.float32)
sobol_h = np.array([
[ 1, 2, 1],
[ 0, 0, 0],
[-1, -2, -1],
], dtype=np.float32)
sobol_v = np.array([
[-1, 0, 1],
[-2, 0, 2],
[-1, 0, 1],
], dtype=np.float32)
emboss = np.array([
[-2, -1, 0],
[-1, 1, 1],
[ 0, 1, 2],
], dtype=np.float32)
kernels = {
"average" : filter_factory(average),
"sharpen" : filter_factory(sharpen),
"edge (both)" : filter_factory(edge),
"edge (horizontal)" : filter_factory(edge_h),
"edge (vertical)" : filter_factory(edge_v),
"gradient (horizontal)" : filter_factory(gradient_h),
"gradient (vertical)" : filter_factory(gradient_v),
"sobol (horizontal)" : filter_factory(sobol_h),
"sobol (vertical)" : filter_factory(sobol_v),
"emboss" : filter_factory(emboss),
}
images = {
"ascent" : np.copy(scipy.misc.ascent().astype(np.float32)[::-1, :]),
"face" : np.copy(scipy.misc.face(gray=True).astype(np.float32)[::-1, :]),
}
palette = ['#%02x%02x%02x' %(i,i,i) for i in range(256)]
cm = LinearColorMapper(palette=palette, low=0, high=256)
width, height = images['ascent'].shape
p_kernel = figure(x_range=(0, width), y_range=(0, height))
p_kernel.image(image=[images['ascent']], x=[0], y=[0], dw=[width], dh=[height], color_mapper=cm, name="kernel")
def update(image="ascent", kernel_name="none", scale=100, bias=0):
global _last_kname
global _last_out
img_kernel = images.get(image)
kernel = kernels.get(kernel_name, None)
    if kernel is None:
out = np.copy(img_kernel)
else:
out = np.zeros_like(img_kernel)
ts = timer()
kernel(img_kernel, out)
te = timer()
print('kernel takes:', te - ts)
out *= scale / 100.0
out += bias
print(out.min(), out.max())
renderer = p_kernel.select(dict(name="kernel", type=GlyphRenderer))
ds = renderer[0].data_source
ds.data['image'] = [out]
ds.push_notebook()
show(p_kernel)
knames = ["none"] + sorted(kernels.keys())
interact(update, image=["ascent" ,"face"], kernel_name=knames, scale=(10, 100, 10), bias=(0, 255))
Explanation: 3x3 Image Kernels
Many image processing filters can be expressed as 3x3 matrices. This more sophisticated example demonstrates how numba can be used to compile kernels for arbitrary 3x3 kernels, and then provides several predefined kernels for the user to experiment with.
The UI presents the image to process (along with a dropdown to select a different image) as well as a dropdown that lets the user select which kernel to apply. Additionally, there are sliders that permit adjustment of the bias and scale of the final greyscale image.
Note: Right now, adjusting the scale and bias is not as efficient as possible, because the update function always also applies the kernel (even if it has not changed). A better implementation might have a class that keeps track of the current kernel and output image so that bias and scale can be applied by themselves.
End of explanation
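The core operation — a weighted 3x3 sum centred on one pixel — can be sketched without numba (illustrative names; the demo's kernel_apply does the same thing on numpy arrays):

```python
def apply_kernel_at(img, kernel, x, y):
    # Weighted sum of the 3x3 neighbourhood centred on (x, y).
    total = 0.0
    for i in range(3):
        for j in range(3):
            total += img[x + i - 1][y + j - 1] * kernel[i][j]
    return total

# the identity kernel leaves the centre pixel unchanged
identity = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(apply_kernel_at(img, identity, 1, 1))  # 5.0
```

Swapping in the sharpen or edge matrices defined above changes only the weights, not the loop structure.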
@njit
def wavelet_decomposition(img, tmp):
    """Perform in-place wavelet decomposition on `img` with `tmp` as
    a temporary buffer.

    This is a very simple wavelet for demonstration.
    """
w, h = img.shape
halfwidth, halfheight = w//2, h//2
lefthalf, righthalf = tmp[:halfwidth, :], tmp[halfwidth:, :]
# Along first dimension
for x in range(halfwidth):
for y in range(h):
lefthalf[x, y] = (img[2 * x, y] + img[2 * x + 1, y]) / 2
righthalf[x, y] = img[2 * x, y] - img[2 * x + 1, y]
# Swap buffer
img, tmp = tmp, img
tophalf, bottomhalf = tmp[:, :halfheight], tmp[:, halfheight:]
# Along second dimension
for y in range(halfheight):
for x in range(w):
tophalf[x, y] = (img[x, 2 * y] + img[x, 2 * y + 1]) / 2
bottomhalf[x, y] = img[x, 2 * y] - img[x, 2 * y + 1]
return halfwidth, halfheight
img_wavelet = np.copy(scipy.misc.face(gray=True)[::-1, :])
palette = ['#%02x%02x%02x' %(i,i,i) for i in range(256)]
width, height = img_wavelet.shape
p_wavelet = figure(x_range=(0, width), y_range=(0, height))
p_wavelet.image(image=[img_wavelet], x=[0], y=[0], dw=[width], dh=[height], palette=palette, name="wavelet")
def update(level=0):
out = np.copy(img_wavelet)
tmp = np.zeros_like(img_wavelet)
ts = timer()
hw, hh = img_wavelet.shape
while level > 0 and hw > 1 and hh > 1:
hw, hh = wavelet_decomposition(out[:hw, :hh], tmp[:hw, :hh])
level -= 1
te = timer()
print('wavelet takes:', te - ts)
renderer = p_wavelet.select(dict(name="wavelet", type=GlyphRenderer))
ds = renderer[0].data_source
ds.data['image'] = [out]
ds.push_notebook()
show(p_wavelet)
interact(update, level=(0, 7))
Explanation: Wavelet Decomposition
This last example demonstrates a Haar wavelet decomposition using a Numba-compiled function. Play around with the slider to see different levels of decomposition of the image.
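One level of the Haar step — pairwise averages first, pairwise differences second — can be sketched in one dimension (a pure-Python illustration of what the decomposition loops do along each axis):

```python
def haar_1d(samples):
    # One decomposition level: first half holds pair averages,
    # second half holds pair differences.
    n = len(samples) // 2
    averages = [(samples[2 * i] + samples[2 * i + 1]) / 2 for i in range(n)]
    details = [samples[2 * i] - samples[2 * i + 1] for i in range(n)]
    return averages + details

print(haar_1d([4, 2, 5, 5]))  # [3.0, 5.0, 2, 0]
```

Each extra slider level re-applies the same step to the shrinking averages block, which is why the loop halves hw and hh every iteration.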
End of explanation |
7,664 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Usage
Step1: Setting up the Pulsar object
enterprise uses a specific Pulsar object to store all of the relevant pulsar information (i.e. TOAs, residuals, error bars, flags, etc) from the timing package. Eventually enterprise will support both PINT and tempo2; however, for the moment it only supports tempo2 through the libstempo package. This object is then used to initialize Signals that define the generative model for the pulsar residuals. This is in keeping with the overall enterprise philosophy that the pulsar data should be as loosely coupled as possible to the pulsar model.
Below we initialize a pulsar class with NANOGrav B1855+09 data by passing it the par and tim file.
Step2: Parameters
In enterprise signal parameters are set by specifying a prior distribution (i.e., Uniform, Normal, etc.). Below we will give an example of this functionality.
Step3: This is an abstract parameter class in that it is not yet initialized. It is equivalent to defining the class via the standard nomenclature class efac(object)... The parameter is then initialized via a name. This way, a single parameter class can be initialized for multiple signal parameters with different names (i.e. EFAC per observing backend, etc). Once the parameter is initialized, you then have access to many useful methods.
Step4: Set up a basic pulsar noise model
For our basic noise model we will use standard EFAC, EQUAD, and ECORR white noise with a powerlaw red noise parameterized by an amplitude and spectral index. Using the methods described above we define our parameters for our noise model below.
Step5: White noise signals
White noise signals are straightforward to initialize
Step6: Again, these are abstract classes that will be initialized when passed a Pulsar object. This, again, makes for ease of use when constructing pulsar signal models, in that these classes are created on the fly and can be re-initialized with different pulsars.
Red noise signals
Red noise signals are handled somewhat differently than other signals in that we do not create the class by passing the parameters directly. Instead we use the Function factory (creates a class, not an instance) to set the red noise PSD used (i.e. powerlaw, spectrum, broken, etc). This allows the user to define custom PSDs with no extra coding overhead other than the PSD definition itself.
Step7: Here we have defined a power-law function class that will be initialized when the red noise class is initialized. The red noise signal model is then a powerlaw red noise process modeled via a Fourier basis Gaussian Process with 30 components.
Linear timing model
We must include the timing model in all of our analyses. In this case we treat it as a gaussian process with very large variances. Thus, this is equivalent to marginalizing over the linear timing model coefficients assuming uniform priors. In enterprise this is set up via
Step8: Initializing the model
Now that we have all of our signals defined we can define our total noise model as the sum of all of the components and intialize by passing that combined signal class the pulsar object. Is that awesome or what! | Python Code:
% matplotlib inline
%config InlineBackend.figure_format = 'retina'
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
from enterprise.pulsar import Pulsar
import enterprise.signals.parameter as parameter
from enterprise.signals import utils
from enterprise.signals import signal_base
from enterprise.signals import selections
from enterprise.signals.selections import Selection
from tests.enterprise_test_data import datadir
from enterprise.signals import white_signals as ws
from enterprise.signals import gp_signals as gs
from enterprise.signals import deterministic_signals
Explanation: Usage
End of explanation
# pulsar file information
parfiles = datadir + '/B1855+09_NANOGrav_11yv0.gls.par'
timfiles = datadir + '/B1855+09_NANOGrav_11yv0.tim'
psr = Pulsar(parfiles, timfiles)
Explanation: Setting up the Pulsar object
enterprise uses a specific Pulsar object to store all of the relevant pulsar information (i.e. TOAs, residuals, error bars, flags, etc) from the timing package. Eventually enterprise will support both PINT and tempo2; however, for the moment it only supports tempo2 through the libstempo package. This object is then used to initialize Signals that define the generative model for the pulsar residuals. This is in keeping with the overall enterprise philosophy that the pulsar data should be as loosely coupled as possible to the pulsar model.
Below we initialize a pulsar class with NANOGrav B1855+09 data by passing it the par and tim file.
End of explanation
# lets define an efac parameter with a uniform prior from [0.5, 5]
efac = parameter.Uniform(0.5, 5)
Explanation: Parameters
In enterprise signal parameters are set by specifying a prior distribution (i.e., Uniform, Normal, etc.). Below we will give an example of this functionality.
End of explanation
# initialize efac parameter with name "efac_1"
efac1 = efac('efac_1')
# return parameter name
print(efac1.name)
# get pdf at a point (the log pdf is also accessible)
print(efac1.get_pdf(1.3), efac1.get_logpdf(1.3))
# return 5 samples from this prior distribution
print(efac1.sample(size=5))
Explanation: This is an abstract parameter class in that it is not yet initialized. It is equivalent to defining the class via the standard nomenclature class efac(object)... The parameter is then initialized via a name. This way, a single parameter class can be initialized for multiple signal parameters with different names (i.e. EFAC per observing backend, etc). Once the parameter is initialized, you then have access to many useful methods.
End of explanation
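The parameter pattern above — a factory returns an abstract class, and instantiating that class with a name yields a concrete named parameter — can be sketched in plain Python (an assumed simplification for illustration, not enterprise's actual implementation):

```python
def Uniform(lo, hi):
    # Factory: returns an abstract parameter class bound to [lo, hi].
    class UniformParameter:
        def __init__(self, name):
            self.name = name
        def get_pdf(self, x):
            # flat prior density inside the interval, zero outside
            return 1.0 / (hi - lo) if lo <= x <= hi else 0.0
    return UniformParameter

efac = Uniform(0.5, 5)    # abstract parameter class
efac1 = efac('efac_1')    # concrete, named parameter
print(efac1.name, round(efac1.get_pdf(1.3), 4))  # efac_1 0.2222
```

The same factory output can be instantiated many times under different names, which is how one prior definition serves an EFAC per observing backend.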
# white and red noise parameters with uniform priors
efac = parameter.Uniform(0.5, 5)
log10_equad = parameter.Uniform(-10, -5)
log10_ecorr = parameter.Uniform(-10, -5)
log10_A = parameter.Uniform(-18, -12)
gamma = parameter.Uniform(1, 7)
Explanation: Set up a basic pulsar noise model
For our basic noise model we will use standard EFAC, EQUAD, and ECORR white noise with a powerlaw red noise parameterized by an amplitude and spectral index. Using the methods described above we define our parameters for our noise model below.
End of explanation
# EFAC, EQUAD, and ECORR signals
ef = ws.MeasurementNoise(efac=efac)
eq = ws.EquadNoise(log10_equad=log10_equad)
ec = gs.EcorrBasisModel(log10_ecorr=log10_ecorr)
Explanation: White noise signals
White noise signals are straightforward to initialize
End of explanation
# Use Function object to set power-law red noise with uniform priors
pl = Function(utils.powerlaw, log10_A=log10_A, gamma=gamma)
# red noise signal using Fourier GP
rn = gs.FourierBasisGP(spectrum=pl, components=30)
Explanation: Again, these are abstract classes that will be initialized when passed a Pulsar object. This, again, makes for ease of use when constructing pulsar signal models, in that these classes are created on the fly and can be re-initialized with different pulsars.
Red noise signals
Red noise signals are handled somewhat differently than other signals in that we do not create the class by passing the parameters directly. Instead we use the Function factory (creates a class, not an instance) to set the red noise PSD used (i.e. powerlaw, spectrum, broken, etc). This allows the user to define custom PSDs with no extra coding overhead other than the PSD definition itself.
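As an illustration of "no extra coding overhead", here is what such a user-defined PSD could look like. The model below ("flat_tail") is invented for this sketch and is not an enterprise model; the factory can wrap any function whose first argument is the array of Fourier frequencies and whose remaining keyword arguments are Parameter objects.

```python
import numpy as np

# Hypothetical user-defined PSD (illustrative only): flat below a knee
# frequency, steeper above it.
def flat_tail(f, log10_A=-15, f_knee=1e-8):
    psd = (10 ** log10_A) ** 2 * np.ones_like(f)
    steep = f > f_knee
    psd[steep] *= (f[steep] / f_knee) ** -2  # steeper high-frequency tail
    return psd

# Used analogously to the powerlaw example above, e.g.:
# pl_custom = Function(flat_tail, log10_A=parameter.Uniform(-18, -12))
# rn_custom = gs.FourierBasisGP(spectrum=pl_custom, components=30)
```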
End of explanation
# timing model as GP (no parameters)
tm = gs.TimingModel()
Explanation: Here we have defined a power-law function class that will be initialized when the red noise class is initialized. The red noise signal model is then a powerlaw red noise process modeled via a Fourier basis Gaussian Process with 30 components.
Linear timing model
We must include the timing model in all of our analyses. In this case we treat it as a Gaussian process with very large variances. Thus, this is equivalent to marginalizing over the linear timing model coefficients assuming uniform priors. In enterprise this is set up via:
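As a sketch of the standard result behind this (with $M$ the timing-model design matrix, $\epsilon$ its coefficients, and $N$ the white-noise covariance), giving the coefficients a Gaussian prior of variance $\lambda$ and letting $\lambda \to \infty$ reproduces the uniform-prior marginalization:

```latex
\delta t = M\epsilon + n, \qquad \epsilon \sim \mathcal{N}(0, \lambda I)
\quad\Longrightarrow\quad
C = N + \lambda\, M M^{\top}, \qquad \lambda \to \infty
```

In this limit the likelihood becomes insensitive to the timing-model coefficients, which is exactly the marginalization described above.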
End of explanation
# create combined signal class with some metaclass magic
s = ef + ec + eq + rn + tm
# initialize model with pulsar object
pm = s(psr)
# print out the parameter names and priors
pm.params
Explanation: Initializing the model
Now that we have all of our signals defined, we can build the total noise model as the sum of all of the components and initialize it by passing the pulsar object to the combined signal class. Is that awesome or what!
End of explanation
Description:
Encoder-Decoder Analysis
Model Architecture
Step1: Perplexity on Each Dataset
Step2: Loss vs. Epoch
Step3: Perplexity vs. Epoch
Step4: Generations
Step5: BLEU Analysis
Step6: N-pairs BLEU Analysis
This analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. We can expect very low scores in the ground truth and high scores can expose hyper-common generations
Step7: Alignment Analysis
This analysis computs the average Smith-Waterman alignment score for generations, with the same intuition as N-pairs BLEU, in that we expect low scores in the ground truth and hyper-common generations to raise the scores | Python Code:
report_file = '/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_bow_200_512_04dra/encdec_noing10_bow_200_512_04dra.json'
log_file = '/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_bow_200_512_04dra/encdec_noing10_bow_200_512_04dra_logs.json'
import json
import matplotlib.pyplot as plt
with open(report_file) as f:
report = json.loads(f.read())
with open(log_file) as f:
logs = json.loads(f.read())
print 'Encoder: \n\n', report['architecture']['encoder']
print 'Decoder: \n\n', report['architecture']['decoder']
Explanation: Encoder-Decoder Analysis
Model Architecture
End of explanation
print('Train Perplexity: ', report['train_perplexity'])
print('Valid Perplexity: ', report['valid_perplexity'])
print('Test Perplexity: ', report['test_perplexity'])
Explanation: Perplexity on Each Dataset
End of explanation
%matplotlib inline
for k in logs.keys():
plt.plot(logs[k][0], logs[k][1], label=str(k) + ' (train)')
plt.plot(logs[k][0], logs[k][2], label=str(k) + ' (valid)')
plt.title('Loss v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
Explanation: Loss vs. Epoch
End of explanation
%matplotlib inline
for k in logs.keys():
plt.plot(logs[k][0], logs[k][3], label=str(k) + ' (train)')
plt.plot(logs[k][0], logs[k][4], label=str(k) + ' (valid)')
plt.title('Perplexity v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Perplexity')
plt.legend()
plt.show()
Explanation: Perplexity vs. Epoch
End of explanation
def print_sample(sample, best_bleu=None):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
print('Input: '+ enc_input + '\n')
print('Gend: ' + sample['generated'] + '\n')
print('True: ' + gold + '\n')
if best_bleu is not None:
cbm = ' '.join([w for w in best_bleu['best_match'].split(' ') if w != '<mask>'])
print('Closest BLEU Match: ' + cbm + '\n')
print('Closest BLEU Score: ' + str(best_bleu['best_score']) + '\n')
print('\n')
for i, sample in enumerate(report['train_samples']):
print_sample(sample, report['best_bleu_matches_train'][i] if 'best_bleu_matches_train' in report else None)
for i, sample in enumerate(report['valid_samples']):
print_sample(sample, report['best_bleu_matches_valid'][i] if 'best_bleu_matches_valid' in report else None)
for i, sample in enumerate(report['test_samples']):
print_sample(sample, report['best_bleu_matches_test'][i] if 'best_bleu_matches_test' in report else None)
Explanation: Generations
End of explanation
def print_bleu(blue_struct):
print 'Overall Score: ', blue_struct['score'], '\n'
print '1-gram Score: ', blue_struct['components']['1']
print '2-gram Score: ', blue_struct['components']['2']
print '3-gram Score: ', blue_struct['components']['3']
print '4-gram Score: ', blue_struct['components']['4']
# Training Set BLEU Scores
print_bleu(report['train_bleu'])
# Validation Set BLEU Scores
print_bleu(report['valid_bleu'])
# Test Set BLEU Scores
print_bleu(report['test_bleu'])
# All Data BLEU Scores
print_bleu(report['combined_bleu'])
Explanation: BLEU Analysis
End of explanation
# Training Set BLEU n-pairs Scores
print_bleu(report['n_pairs_bleu_train'])
# Validation Set n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_valid'])
# Test Set n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_test'])
# Combined n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_all'])
# Ground Truth n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_gold'])
Explanation: N-pairs BLEU Analysis
This analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. We expect very low scores when pairing ground-truth sentences, while high scores can expose hyper-common generations.
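The pairing idea can be sketched without the full BLEU machinery. In the stand-in below, clipped unigram precision replaces BLEU (an illustrative simplification, not the metric used in the report): diverse outputs should score near zero against each other, while a model that repeats a few stock sentences will score high.

```python
import random
from collections import Counter

def unigram_precision(candidate, reference):
    # clipped unigram precision, a crude stand-in for BLEU
    cand, ref = candidate.split(), Counter(reference.split())
    if not cand:
        return 0.0
    hits = sum(min(c, ref[w]) for w, c in Counter(cand).items())
    return hits / len(cand)

def n_pairs_score(sentences, n_pairs=1000, seed=0):
    rng = random.Random(seed)
    scores = [unigram_precision(*rng.sample(sentences, 2)) for _ in range(n_pairs)]
    return sum(scores) / len(scores)
```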
End of explanation
print 'Average (Train) Generated Score: ', report['average_alignment_train']
print 'Average (Valid) Generated Score: ', report['average_alignment_valid']
print 'Average (Test) Generated Score: ', report['average_alignment_test']
print 'Average (All) Generated Score: ', report['average_alignment_all']
print 'Average Gold Score: ', report['average_alignment_gold']
Explanation: Alignment Analysis
This analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as N-pairs BLEU: we expect low scores for the ground truth, while hyper-common generations raise the scores.
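As a reference for what this metric does, here is a minimal word-level Smith-Waterman local-alignment scorer (the match/mismatch/gap weights are illustrative assumptions, not necessarily those used in the report):

```python
# Smith-Waterman local alignment over word tokens; returns the best local
# alignment score (0 means no aligned region at all).
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    a, b = a.split(), b.split()
    prev = [0] * (len(b) + 1)
    best = 0
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            # local alignment: scores are floored at zero
            cur[j] = max(0, prev[j - 1] + s, prev[j] + gap, cur[j - 1] + gap)
            best = max(best, cur[j])
        prev = cur
    return best
```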
End of explanation
Description:
Step1: Exploring the Format of the Data
Step2: Setting up Vocabulary of All Words
Step3: Vectorizing the Data
Step4:
Step5: Functionalize Vectorization
Step6: Creating the Model
Step7: Placeholders for Inputs
Recall we technically have two inputs, stories and questions. So we need to use placeholders. Input() is used to instantiate a Keras tensor.
Step8: Building the Networks
To understand why we chose this setup, make sure to read the paper we are using
Step9: Input Encoder c
Step10: Question Encoder
Step11: Encode the Sequences
Step12: Use dot product to compute the match between first input vector seq and the query
Step13: Add this match matrix with the second input vector sequence
Step14: Concatenate
Step15: Saving the Model
Step16: Evaluating the Model
Plotting Out Training History
Step17: Evaluating on Given Test Set
Step18: Writing Your Own Stories and Questions
Remember you can only use words from the existing vocab | Python Code:
import pickle
import numpy as np
with open("train_qa.txt", "rb") as fp: # Unpickling
train_data = pickle.load(fp)
with open("test_qa.txt", "rb") as fp: # Unpickling
test_data = pickle.load(fp)
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Question and Answer Chat Bots
Loading the Data
We will be working with the Babi Data Set from Facebook Research.
Full Details: https://research.fb.com/downloads/babi/
Jason Weston, Antoine Bordes, Sumit Chopra, Tomas Mikolov, Alexander M. Rush,
"Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks",
http://arxiv.org/abs/1502.05698
End of explanation
type(test_data)
type(train_data)
len(test_data)
len(train_data)
train_data[0]
' '.join(train_data[0][0])
' '.join(train_data[0][1])
train_data[0][2]
Explanation: Exploring the Format of the Data
End of explanation
# Create a set that holds the vocab words
vocab = set()
all_data = test_data + train_data
for story, question , answer in all_data:
# In case you don't know what a union of sets is:
# https://www.programiz.com/python-programming/methods/set/union
vocab = vocab.union(set(story))
vocab = vocab.union(set(question))
vocab.add('no')
vocab.add('yes')
vocab
vocab_len = len(vocab) + 1 #we add an extra space to hold a 0 for Keras's pad_sequences
max_story_len = max([len(data[0]) for data in all_data])
max_story_len
max_question_len = max([len(data[1]) for data in all_data])
max_question_len
Explanation: Setting up Vocabulary of All Words
End of explanation
vocab
# Reserve 0 for pad_sequences
vocab_size = len(vocab) + 1
Explanation: Vectorizing the Data
End of explanation
from keras.preprocessing.sequence import pad_sequences
from keras.preprocessing.text import Tokenizer
# integer encode sequences of words
tokenizer = Tokenizer(filters=[])
tokenizer.fit_on_texts(vocab)
tokenizer.word_index
train_story_text = []
train_question_text = []
train_answers = []
for story,question,answer in train_data:
train_story_text.append(story)
train_question_text.append(question)
train_story_seq = tokenizer.texts_to_sequences(train_story_text)
len(train_story_text)
len(train_story_seq)
# word_index = tokenizer.word_index
Explanation:
End of explanation
def vectorize_stories(data, word_index=tokenizer.word_index, max_story_len=max_story_len,max_question_len=max_question_len):
'''
INPUT:
data: consisting of Stories,Queries,and Answers
word_index: word index dictionary from tokenizer
max_story_len: the length of the longest story (used for pad_sequences function)
max_question_len: length of the longest question (used for pad_sequences function)
OUTPUT:
    Vectorizes the stories, questions, and answers into padded sequences. We first loop over every story, query, and
    answer in the data. Then we convert the raw words to a word index value. Then we append each set to their appropriate
output list. Then once we have converted the words to numbers, we pad the sequences so they are all of equal length.
Returns this in the form of a tuple (X,Xq,Y) (padded based on max lengths)
'''
# X = STORIES
X = []
# Xq = QUERY/QUESTION
Xq = []
# Y = CORRECT ANSWER
Y = []
for story, query, answer in data:
# Grab the word index for every word in story
x = [word_index[word.lower()] for word in story]
# Grab the word index for every word in query
xq = [word_index[word.lower()] for word in query]
# Grab the Answers (either Yes/No so we don't need to use list comprehension here)
# Index 0 is reserved so we're going to use + 1
y = np.zeros(len(word_index) + 1)
# Now that y is all zeros and we know its just Yes/No , we can use numpy logic to create this assignment
#
y[word_index[answer]] = 1
# Append each set of story,query, and answer to their respective holding lists
X.append(x)
Xq.append(xq)
Y.append(y)
# Finally, pad the sequences based on their max length so the RNN can be trained on uniformly long sequences.
# RETURN TUPLE FOR UNPACKING
return (pad_sequences(X, maxlen=max_story_len),pad_sequences(Xq, maxlen=max_question_len), np.array(Y))
inputs_train, queries_train, answers_train = vectorize_stories(train_data)
inputs_test, queries_test, answers_test = vectorize_stories(test_data)
inputs_test
queries_test
answers_test
sum(answers_test)
tokenizer.word_index['yes']
tokenizer.word_index['no']
Explanation: Functionalize Vectorization
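The padding applied at the end of vectorize_stories can be illustrated with a pure-Python stand-in for Keras's defaults (left-padding with the reserved 0 and 'pre' truncation). This is illustrative only, not the Keras implementation:

```python
# left-pad each sequence to maxlen with `value`; sequences longer than
# maxlen keep only their last maxlen items (Keras's 'pre' truncation)
def left_pad(seqs, maxlen, value=0):
    return [[value] * (maxlen - len(s)) + list(s)[-maxlen:] for s in seqs]
```

For example, `left_pad([[5, 2], [7]], 4)` pads both index lists out to length 4 from the left.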
End of explanation
from keras.models import Sequential, Model
from keras.layers.embeddings import Embedding
from keras.layers import Input, Activation, Dense, Permute, Dropout
from keras.layers import add, dot, concatenate
from keras.layers import LSTM
Explanation: Creating the Model
End of explanation
input_sequence = Input((max_story_len,))
question = Input((max_question_len,))
Explanation: Placeholders for Inputs
Recall we technically have two inputs, stories and questions. So we need to use placeholders. Input() is used to instantiate a Keras tensor.
End of explanation
# Input gets embedded to a sequence of vectors
input_encoder_m = Sequential()
input_encoder_m.add(Embedding(input_dim=vocab_size,output_dim=64))
input_encoder_m.add(Dropout(0.3))
# This encoder will output:
# (samples, story_maxlen, embedding_dim)
Explanation: Building the Networks
To understand why we chose this setup, make sure to read the paper we are using:
Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, Rob Fergus,
"End-To-End Memory Networks",
http://arxiv.org/abs/1503.08895
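For reference, the core attention step from that paper, with query embedding $u$, input memories $m_i$, and output memories $c_i$, can be summarized as:

```latex
p_i = \operatorname{softmax}\left(u^{\top} m_i\right), \qquad
o = \sum_i p_i\, c_i, \qquad
\hat{a} = \operatorname{softmax}\left(W\,(o + u)\right)
```

The model built below follows the same match/weight/sum structure, but swaps the final $W(o+u)$ readout for an LSTM over the concatenated response and question.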
Encoders
Input Encoder m
End of explanation
# embed the input into a sequence of vectors of size query_maxlen
input_encoder_c = Sequential()
input_encoder_c.add(Embedding(input_dim=vocab_size,output_dim=max_question_len))
input_encoder_c.add(Dropout(0.3))
# output: (samples, story_maxlen, query_maxlen)
Explanation: Input Encoder c
End of explanation
# embed the question into a sequence of vectors
question_encoder = Sequential()
question_encoder.add(Embedding(input_dim=vocab_size,
output_dim=64,
input_length=max_question_len))
question_encoder.add(Dropout(0.3))
# output: (samples, query_maxlen, embedding_dim)
Explanation: Question Encoder
End of explanation
# encode input sequence and questions (which are indices)
# to sequences of dense vectors
input_encoded_m = input_encoder_m(input_sequence)
input_encoded_c = input_encoder_c(input_sequence)
question_encoded = question_encoder(question)
Explanation: Encode the Sequences
End of explanation
# shape: `(samples, story_maxlen, query_maxlen)`
match = dot([input_encoded_m, question_encoded], axes=(2, 2))
match = Activation('softmax')(match)
Explanation: Use dot product to compute the match between first input vector seq and the query
End of explanation
# add the match matrix with the second input vector sequence
response = add([match, input_encoded_c]) # (samples, story_maxlen, query_maxlen)
response = Permute((2, 1))(response) # (samples, query_maxlen, story_maxlen)
Explanation: Add this match matrix with the second input vector sequence
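To make the tensor bookkeeping concrete, here is the same sequence of operations in plain NumPy, shapes only (the softmax is omitted, and all sizes are illustrative): batch=2, story_maxlen=4, query_maxlen=3, embedding_dim=5.

```python
import numpy as np

m = np.random.rand(2, 4, 5)               # like input_encoded_m
u = np.random.rand(2, 3, 5)               # like question_encoded
match = np.einsum('bse,bqe->bsq', m, u)   # dot(..., axes=(2, 2))
c = np.random.rand(2, 4, 3)               # like input_encoded_c
response = np.transpose(match + c, (0, 2, 1))  # add(...) then Permute((2, 1))
```

So the match matrix has shape (samples, story_maxlen, query_maxlen) and the permuted response has shape (samples, query_maxlen, story_maxlen), matching the comments in the cells above.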
End of explanation
# concatenate the match matrix with the question vector sequence
answer = concatenate([response, question_encoded])
answer
# Reduce with RNN (LSTM)
answer = LSTM(32)(answer) # (samples, 32)
# Regularization with Dropout
answer = Dropout(0.5)(answer)
answer = Dense(vocab_size)(answer) # (samples, vocab_size)
# we output a probability distribution over the vocabulary
answer = Activation('softmax')(answer)
# build the final model
model = Model([input_sequence, question], answer)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
# train
history = model.fit([inputs_train, queries_train], answers_train,batch_size=32,epochs=120,validation_data=([inputs_test, queries_test], answers_test))
Explanation: Concatenate
End of explanation
filename = 'chatbot_120_epochs.h5'
model.save(filename)
Explanation: Saving the Model
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
print(history.history.keys())
# summarize history for accuracy
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
Explanation: Evaluating the Model
Plotting Out Training History
End of explanation
model.load_weights(filename)
pred_results = model.predict(([inputs_test, queries_test]))
test_data[0][0]
story =' '.join(word for word in test_data[0][0])
print(story)
query = ' '.join(word for word in test_data[0][1])
print(query)
print("True Test Answer from Data is:",test_data[0][2])
#Generate prediction from model
val_max = np.argmax(pred_results[0])
for key, val in tokenizer.word_index.items():
if val == val_max:
k = key
print("Predicted answer is: ", k)
print("Probability of certainty was: ", pred_results[0][val_max])
Explanation: Evaluating on Given Test Set
End of explanation
vocab
# Note the whitespace of the periods
my_story = "John left the kitchen . Sandra dropped the football in the garden ."
my_story.split()
my_question = "Is the football in the garden ?"
my_question.split()
mydata = [(my_story.split(),my_question.split(),'yes')]
my_story,my_ques,my_ans = vectorize_stories(mydata)
pred_results = model.predict(([ my_story, my_ques]))
#Generate prediction from model
val_max = np.argmax(pred_results[0])
for key, val in tokenizer.word_index.items():
if val == val_max:
k = key
print("Predicted answer is: ", k)
print("Probability of certainty was: ", pred_results[0][val_max])
Explanation: Writing Your Own Stories and Questions
Remember you can only use words from the existing vocab
End of explanation
Description:
Fitting Spatial Extension of IC443
This tutorial demonstrates how to perform a measurement of spatial extension with the extension method in the fermipy package. This tutorial assumes that you have first gone through the PG 1553 analysis tutorial.
Get the Data and Setup the Analysis
Step1: In this thread we will use a pregenerated data set which is contained in a tar archive in the data directory of the fermipy-extra repository.
Step2: We first instantiate a GTAnalysis instance using the config file in the ic443 directory and the run the setup() method. This will prepare all of the ancillary files and create the pylikelihood instance for binned analysis. Note that in this example these files have already been generated so the routines that will normally be executed to create these files will be skipped.
Step3: Print the ROI model
We can print the ROI object to see a list of sources in the model along with their distance from the ROI center (offset), TS, and number of predicted counts (Npred). Since we haven't yet fit any sources, the ts of all sources will initially be set to nan.
Step4: Now we will run the optimize() method. This method refits the spectral parameters of all sources in the ROI and gives us baseline model that we can use as a starting point for fitting the spatial extension.
Step5: To check the quality of the ROI model fit we can generate a residual map with the residmap method. This will produce smoothed maps of the counts distribution and residuals (counts-model) using a given spatial kernel. The spatial kernel can be defined with a source dictionary. In the following example we use a PointSource with a PowerLaw index of 2.0.
Step6: We can see the effect of removing sources from the model by running residmap with the exclude option. Here we generate a residual map with the source 3FGL J0621.0+2514 removed from the model.
Step7: We can get alternative assessment of the model by generating a TS map of the region. Again we see a hotspot at the position of 3FGL J0621.0+2514 which we excluded from the model.
Step8: Measuring Source Extension
After optimizing the model we are ready to run an extension analysis on IC 443. As reported in Abdo et al. 2010, this source has a spatial extension of 0.27 deg $\pm$ 0.01 (stat). We can run an extension test of this source by calling the extension method with the source name. The extension method has a number of options which can be changed at runtime by passing keyword arguments. To see the default settings we can look at the extension sub-dictionary of the config property of our GTAnalysis instance.
Step9: By default the method will use a 2D Gaussian source template and scan the width parameter between 0.00316 and 1 degrees in 26 steps. The width parameter can be used to provide an explicit vector of points for the scan. Since we know the extension of IC 443 is well localized around 0.27 deg we use a width vector centered around this point. The analysis results are returned as an output dictionary and are also written to the internal source object of the GTAnalysis instance.
Step10: To inspect the results of the analysis we can make a plot of the likelihood profile. From this we can see that the spatial extension is in good agreement with the value from Abdo et al. 2010.
Step11: As an additional cross-check we can look at what happens when we free sources and rerun the extension analysis. | Python Code:
%matplotlib inline
import os
import matplotlib.pyplot as plt
import matplotlib
import numpy as np
from fermipy.gtanalysis import GTAnalysis
from fermipy.plotting import ROIPlotter
Explanation: Fitting Spatial Extension of IC443
This tutorial demonstrates how to perform a measurement of spatial extension with the extension method in the fermipy package. This tutorial assumes that you have first gone through the PG 1553 analysis tutorial.
Get the Data and Setup the Analysis
End of explanation
if os.path.isfile('../data/ic443.tar.gz'):
!tar xzf ../data/ic443.tar.gz
else:
!curl -OL https://raw.githubusercontent.com/fermiPy/fermipy-extras/master/data/ic443.tar.gz
!tar xzf ic443.tar.gz
Explanation: In this thread we will use a pregenerated data set which is contained in a tar archive in the data directory of the fermipy-extras repository.
End of explanation
gta = GTAnalysis('ic443/config.yaml')
matplotlib.interactive(True)
gta.setup()
Explanation: We first instantiate a GTAnalysis instance using the config file in the ic443 directory and then run the setup() method. This will prepare all of the ancillary files and create the pylikelihood instance for binned analysis. Note that in this example these files have already been generated, so the routines that would normally be executed to create these files will be skipped.
End of explanation
gta.print_roi()
Explanation: Print the ROI model
We can print the ROI object to see a list of sources in the model along with their distance from the ROI center (offset), TS, and number of predicted counts (Npred). Since we haven't yet fit any sources, the ts of all sources will initially be set to nan.
End of explanation
gta.optimize()
gta.print_roi()
Explanation: Now we will run the optimize() method. This method refits the spectral parameters of all sources in the ROI and gives us a baseline model that we can use as a starting point for fitting the spatial extension.
End of explanation
resid = gta.residmap('ic443_roifit',model={'SpatialModel' : 'PointSource', 'Index' : 2.0})
o = resid
fig = plt.figure(figsize=(14,6))
ROIPlotter(o['sigma'],roi=gta.roi).plot(vmin=-5,vmax=5,levels=[-5,-3,3,5,7,9],subplot=121,cmap='RdBu_r')
plt.gca().set_title('Significance')
ROIPlotter(o['excess'],roi=gta.roi).plot(vmin=-200,vmax=200,subplot=122,cmap='RdBu_r')
plt.gca().set_title('Excess Counts')
Explanation: To check the quality of the ROI model fit we can generate a residual map with the residmap method. This will produce smoothed maps of the counts distribution and residuals (counts-model) using a given spatial kernel. The spatial kernel can be defined with a source dictionary. In the following example we use a PointSource with a PowerLaw index of 2.0.
End of explanation
resid_noj0621 = gta.residmap('ic443_roifit_noj0621',
model={'SpatialModel' : 'PointSource', 'Index' : 2.0},
exclude=['3FGL J0621.0+2514'])
o = resid_noj0621
fig = plt.figure(figsize=(14,6))
ROIPlotter(o['sigma'],roi=gta.roi).plot(vmin=-5,vmax=5,levels=[-5,-3,3,5,7,9],subplot=121,cmap='RdBu_r')
plt.gca().set_title('Significance')
ROIPlotter(o['excess'],roi=gta.roi).plot(vmin=-200,vmax=200,subplot=122,cmap='RdBu_r')
plt.gca().set_title('Excess Counts')
Explanation: We can see the effect of removing sources from the model by running residmap with the exclude option. Here we generate a residual map with the source 3FGL J0621.0+2514 removed from the model.
End of explanation
tsmap_noj0621 = gta.tsmap('ic443_noj0621',
model={'SpatialModel' : 'PointSource', 'Index' : 2.0},
exclude=['3FGL J0621.0+2514'])
o = tsmap_noj0621
fig = plt.figure(figsize=(6,6))
ROIPlotter(o['sqrt_ts'],roi=gta.roi).plot(vmin=0,vmax=5,levels=[3,5,7,9],subplot=111,cmap='magma')
plt.gca().set_title('sqrt(TS)')
Explanation: We can get an alternative assessment of the model by generating a TS map of the region. Again we see a hotspot at the position of 3FGL J0621.0+2514, which we excluded from the model.
End of explanation
import pprint
pprint.pprint(gta.config['extension'])
Explanation: Measuring Source Extension
After optimizing the model we are ready to run an extension analysis on IC 443. As reported in Abdo et al. 2010, this source has a spatial extension of 0.27 deg $\pm$ 0.01 (stat). We can run an extension test of this source by calling the extension method with the source name. The extension method has a number of options which can be changed at runtime by passing keyword arguments. To see the default settings we can look at the extension sub-dictionary of the config property of our GTAnalysis instance.
End of explanation
ext_gauss = gta.extension('3FGL J0617.2+2234e',width=np.linspace(0.25,0.30,11).tolist())
gta.write_roi('ext_gauss_fit')
Explanation: By default the method will use a 2D Gaussian source template and scan the width parameter between 0.00316 and 1 degrees in 26 steps. The width parameter can be used to provide an explicit vector of points for the scan. Since we know the extension of IC 443 is well localized around 0.27 deg we use a width vector centered around this point. The analysis results are returned as an output dictionary and are also written to the internal source object of the GTAnalysis instance.
End of explanation
plt.figure(figsize=(8,6))
plt.plot(ext_gauss['width'],ext_gauss['dloglike'],marker='o')
plt.gca().set_xlabel('Width [deg]')
plt.gca().set_ylabel('Delta Log-Likelihood')
plt.gca().axvline(ext_gauss['ext'])
plt.gca().axvspan(ext_gauss['ext']-ext_gauss['ext_err_lo'],ext_gauss['ext']+ext_gauss['ext_err_hi'],
alpha=0.2,label='This Measurement',color='b')
plt.gca().axvline(0.27,color='k')
plt.gca().axvspan(0.27-0.01,0.27+0.01,alpha=0.2,label='Abdo et al. 2010',color='k')
plt.gca().set_ylim(2030,2070)
plt.gca().set_xlim(0.20,0.34)
plt.annotate('TS$_{\mathrm{ext}}$ = %.2f\nR$_{68}$ = %.3f $\pm$ %.3f'%
(ext_gauss['ts_ext'],ext_gauss['ext'],ext_gauss['ext_err']),xy=(0.05,0.05),xycoords='axes fraction')
plt.gca().legend(frameon=False)
Explanation: To inspect the results of the analysis we can make a plot of the likelihood profile. From this we can see that the spatial extension is in good agreement with the value from Abdo et al. 2010.
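For completeness, the 1-sigma interval drawn in this plot can be recovered directly from the scan output. The sketch below assumes a unimodal profile and the `width`/`dloglike` arrays from ext_gauss above; for a one-parameter 68% interval the threshold is the maximum delta-log-likelihood minus 0.5.

```python
import numpy as np

# returns (best-fit width, lower edge, upper edge) from a unimodal
# delta-log-likelihood profile
def profile_interval(width, dloglike):
    imax = int(np.argmax(dloglike))
    thresh = dloglike[imax] - 0.5
    # interpolate the threshold crossing on each side of the maximum
    lo = np.interp(thresh, dloglike[: imax + 1], width[: imax + 1])
    hi = np.interp(thresh, dloglike[imax:][::-1], width[imax:][::-1])
    return width[imax], lo, hi
```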
End of explanation
ext_gauss_free = gta.extension('3FGL J0617.2+2234e',width=np.linspace(0.25,0.30,11).tolist(),free_radius=1.0)
print 'Fixed Sources: %f +/- %f'%(ext_gauss['ext'],ext_gauss['ext_err'])
print 'Free Sources: %f +/- %f'%(ext_gauss_free['ext'],ext_gauss_free['ext_err'])
Explanation: As an additional cross-check we can look at what happens when we free sources and rerun the extension analysis.
End of explanation
Description:
Sensitivity of enrichment analysis to quality trimming
In this sheet we explore how trimming the gene-age data by various quality measures affects enrichment analysis of gene ontology and other terms
Step1: First we'll take a look at the data and the distributions of different quality measures
Step2: Histogram of entropy values
This measure gives the Shannon entropy of the normalized distribution of age-calls from the different algorithms
Step3: Histogram of the number of algorithms contributing to each gene's age call
Step4: Histogram of bimodality values
Step5: Filtering parameters
Now we define 5 different datasets with different amounts of quality filter, ranged from least to most stringent
Dataset 1 - No filters
- All proteins included
Dataset 2 - Just entropy
- Entropy
Step6: Enrichment
Enrichment values were calculated with g:Profiler
Step7: Store the datasets in a dictionary
Step8: Create datasets that give the p-value for each term from each of the five datasets
First define a function to munge the data. It selects one of the two kingdoms, 'bacs' or 'archs' (which is all we are investigating here), and one of the enrichment databases or GO terms (default "BP"
Step9: P-value series
This is what the output looks like. Note that if the column "diff" is negative, it means that the heavy filtering on dataset 5 gave a stronger p-value.
Step10: Pick a few representative enrichment terms and plot them | Python Code:
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
%matplotlib inline
Explanation: Sensitivity of enrichment analysis to quality trimming
In this sheet we explore how trimming the gene-age data by various quality measures affects enrichment analysis of gene ontology and other terms
End of explanation
con = pd.read_csv("../main_HUMAN.csv",index_col=0)
con.head()
Explanation: First we'll take a look at the data and the distributions of different quality measures
End of explanation
con["entropy"].hist(bins=50,color='grey')
Explanation: Histogram of entropy values
This measure gives the Shannon entropy of the normalized distribution of age-calls from the different algorithms
End of explanation
con["NumDBsContributing"].hist(bins=13,color='grey')
Explanation: Histogram of the number of algorithms contributing to each gene's age call
End of explanation
con["Bimodality"].hist(bins=50,color='grey')
Explanation: Histogram of bimodality values
End of explanation
d1_archs = [prot for prot in con[con["modeAge"] == "Euk_Archaea"].index]
d1_bacs = [prot for prot in con[con["modeAge"] == "Euk+Bac"].index]
d2_archs = [prot for prot in con[(con["modeAge"] == "Euk_Archaea")
& (con["entropy"]<1)].index]
d2_bacs = [prot for prot in con[(con["modeAge"] == "Euk+Bac")
& (con["entropy"]<1)].index]
d3_archs = [prot for prot in con[(con["modeAge"] == "Euk_Archaea") &
(con["HGT_flag"] == False)].index]
d3_bacs = [prot for prot in con[(con["modeAge"] == "Euk+Bac") &
(con["HGT_flag"] == False)].index]
d4_archs = [prot for prot in con[(con["modeAge"] == "Euk_Archaea") &
(con["NumDBsContributing"] > 3)].index]
d4_bacs = [prot for prot in con[(con["modeAge"] == "Euk+Bac") &
(con["NumDBsContributing"] > 3)].index]
d5_archs = [prot for prot in con[(con["modeAge"] == "Euk_Archaea") &
(con["Bimodality"] < 5)].index]
d5_bacs = [prot for prot in con[(con["modeAge"] == "Euk+Bac") &
(con["Bimodality"] < 5)].index]
d6_archs = [prot for prot in con[(con["modeAge"] == "Euk_Archaea") &
(con["entropy"]<1) &
(con["HGT_flag"] == False) &
(con["Bimodality"] < 5) &
(con["NumDBsContributing"] > 3)].index]
d6_bacs = [prot for prot in con[(con["modeAge"] == "Euk+Bac") &
(con["entropy"]<1) &
(con["HGT_flag"] == False) &
(con["Bimodality"] < 5) &
(con["NumDBsContributing"] > 3)].index]
## Write out the data files for submission to g:Profiler
archs = [d1_archs,d2_archs,d3_archs,d4_archs,d5_archs,d6_archs]
bacs = [d1_bacs,d2_bacs,d3_bacs,d4_bacs,d5_bacs,d6_bacs]
for index,prots in enumerate(archs):
with open("d%d_archs.txt" % (index+1),'w') as out:
for i in prots:
out.write(i+"\n")
for index,prots in enumerate(bacs):
with open("d%d_bacs.txt" % (index+1),'w') as out:
for i in prots:
out.write(i+"\n")
Explanation: Filtering parameters
Now we define 5 different datasets with different amounts of quality filter, ranged from least to most stringent
Dataset 1 - No filters
- All proteins included
Dataset 2 - Just entropy
- Entropy: < 1
- HGT: No filter
- NumDBsContributing: No Filter
- Bimodality: No Filter
Dataset 3 - Just HGT flags
- Entropy: No Filter
- HGT: Remove flagged proteins
- NumDBsContributing: No Filter
- Bimodality: No Filter
Dataset 4 - Just number of algorithms contributing to final age
- Entropy: No Filter
- HGT: No Filter
- NumDBsContributing: > 3
- Bimodality: No Filter
Dataset 5 - Just bimodality metric
- Entropy: No Filter
- HGT: No Filter
- NumDBsContributing: No Filter
- Bimodality: < 5
Dataset 6 - Filter on entropy, HGT flag, Num. algorithms, and bimodality
- Entropy: < 1
- HGT: Remove flagged proteins
- NumDBsContributing: > 3
- Bimodality: < 5
End of explanation
infiles = !ls *enrichment.txt
infiles
Explanation: Enrichment
Enrichment values were calculated with g:Profiler (http://biit.cs.ut.ee/gprofiler/index.cgi)
Now we get the gProfiler output for each dataset, and see how the p-values change with trimming.
End of explanation
dsets = dict(zip(
["_".join(i.split("_")[:2]) for i in infiles],
[pd.read_table(i) for i in infiles]))
dsets['d1_archs'].head()
for df in dsets:
    dsets[df].sort_values(["t type", "p-value"], inplace=True)  # DataFrame.sort() was removed from pandas; sort_values is the replacement
## Write data files once they've been sorted
for df in dsets:
dsets[df].to_csv(df+"_sorted.csv")
Explanation: Store the datasets in a dictionary
End of explanation
def pValue_series(data_sets,kingdom='archs',t_type="BP"):
is_first = True
for df in sorted(data_sets.keys()):
if kingdom not in df: # ignore other kingdom, 'arch' or 'bac'
continue
        trimdf = data_sets[df][data_sets[df]["t type"] == t_type].copy()  # .copy() avoids SettingWithCopyWarning on the assignments below
trimdf.loc[:,"name"] = trimdf["t name"].map(lambda x: x.strip())
trimdf.loc[:,"log p-value"] = trimdf["p-value"].map(lambda x: -(np.log10(x)))
if is_first:
pvalues = trimdf[["name","log p-value"]].set_index("name")
pvalues.columns = ["log p-value" + "_" + df]
is_first = False
else:
temp = trimdf[["name","log p-value"]].set_index("name")
temp.columns = ["log p-value" + "_" + df]
pvalues = pd.concat([pvalues,temp],axis=1,copy=False)
    pvalues.loc[:,"diff"] = pvalues.iloc[:,0] - pvalues.iloc[:,-1]
    pvalues.sort_values("diff", inplace=True)
return pvalues
Explanation: Create datasets that give the p-value for each term from each of the five datasets
First define a function to munge the data. It selects one of the two kingdoms, 'bacs' or 'archs' (which is all we are investigating here), and one of the enrichment databases or GO terms (default "BP": biological process).
The p-values are converted to negative log10. It also adds a column that is the difference between dataset 1 and dataset 5.
End of explanation
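On a toy frame the two transformations reduce to two lines (the numbers below are made up purely for illustration):

```python
import numpy as np
import pandas as pd

# Hypothetical raw p-values for two terms in the least- (d1) and most-filtered (d6) datasets.
pv = pd.DataFrame({"d1": [1e-10, 1e-3], "d6": [1e-12, 1e-2]},
                  index=["ribosome biogenesis", "metabolic process"])
logp = -np.log10(pv)                              # negative log10, as in pValue_series
logp["diff"] = logp.iloc[:, 0] - logp.iloc[:, -1]  # first minus last dataset
# ribosome biogenesis: 10 - 12 = -2, i.e. heavy filtering strengthened the signal
print(logp["diff"].tolist())
```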
archs_pvalues_BP = pValue_series(dsets)
archs_pvalues_BP.head(n=10)
bacs_pvalues_BP = pValue_series(dsets,'bacs')
archs_pvalues_CC = pValue_series(dsets,'archs',"CC")
bacs_pvalues_CC = pValue_series(dsets,'bacs',"CC")
#Write out p-value series
archs_pvalues_BP.to_csv("archs_p-values_BP.csv")
bacs_pvalues_BP.to_csv("bacs_p-values_BP.csv")
archs_pvalues_CC.to_csv("archs_p-values_CC.csv")
bacs_pvalues_CC.to_csv("bacs_p-values_CC.csv")
Explanation: P-value series
This is what the output looks like. Note that if the column "diff" is negative, it means that the heavy filtering on dataset 5 gave a stronger p-value.
End of explanation
def plot_it(df,term,style='-',label=None):
df = df.fillna(0)
    row = df.loc[term].iloc[:-1]  # all dataset columns for this term, dropping the "diff" column
row.index = [i.split("_")[-2] for i in row.index]
row.plot(style=style,label=label,color='black',linewidth=2)
plt.xticks(rotation=70)
plt.xlabel("Dataset")
plt.ylabel("-log10(p-value)")
plot_it(archs_pvalues_BP,"RNA metabolic process",label="RNA metabolic process (BP)")
plot_it(archs_pvalues_BP,"ribosome biogenesis",style='--',label="ribosome biogenesis (BP)")
plot_it(archs_pvalues_CC,"nucleolus",style=':',label="nucleolus (CC)")
plot_it(archs_pvalues_CC,"cytosolic large ribosomal subunit",style='-.',label="cyto. large ribosomal subunit (CC)")
plt.legend(loc=0,prop={'size':9})
ax = plt.gca()
ax.set_ylim([0,50])
#plt.savefig("Archaea_p-values.svg")
plot_it(bacs_pvalues_BP,"amide biosynthetic process",label="amide biosynthetic process (BP)")
plot_it(bacs_pvalues_BP,"metabolic process",style='--',label="metabolic process (BP)")
plot_it(bacs_pvalues_CC,"intracellular organelle",style='-.',label="intracellular organelle (CC)")
plot_it(bacs_pvalues_BP,"catabolic process",style=':',label="catabolic process (BP)")
plt.legend(loc=2,prop={'size':9})
ax = plt.gca()
ax.set_ylim([0,100])
#plt.savefig("Bacteria_p-values.svg")
fig,axes = plt.subplots(2,2)
dfs_names = zip(["Euk_Archaea (BP)", "Euk+Bacteria (BP)","Euk_Archaea (CC)","Euk+Bacteria (CC)"],
[archs_pvalues_BP,bacs_pvalues_BP,archs_pvalues_CC,bacs_pvalues_CC])
# Greys
# flier_colors = ["#252525","#252525","#636363","#636363","#969696","#969696","#bdbdbd","#bdbdbd","#d9d9d9","#d9d9d9"]
flier_colors = ["#990000","#990000","#d7301f","#d7301f","#ef6548","#ef6548","#fc8d59","#fc8d59","#fdbb84","#fdbb84"]
index=0
for name,df in dfs_names:
colors = dict(boxes='black',caps='black',medians='black',whiskers='grey')
ax = axes.flat[index]
df = df.dropna()
    df = df.iloc[:, :-1]  # drop the "diff" column before plotting
df.columns = [i.split("_")[1] for i in df.columns]
box1 = df.plot(kind='box',ax=ax,notch=True,sym=".",color=colors,return_type='dict',bootstrap=5000)
for col,flier in zip(flier_colors,box1['fliers']):
plt.setp(flier,color=col)
ax.set_title(name)
ax.get_xaxis().set_ticks([])
index += 1
#plt.savefig("avg_p-values.svg")
Explanation: Pick a few representative enrichment terms and plot them
End of explanation |
Description:
Self-Driving Car Engineer Nanodegree
Deep Learning
Project
Step1: Step 1
Step2: Include an exploratory visualization of the dataset
Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include
Step3: Check counts of individual classes
Step4: Step 2
Step5: Augmenting training data by applying transformations to the original data
As can be seen from the visualization of test, training and validation data; the number of images per class varies greatly in each of the datasets. This means that there might be features in validation data, that are not fully encompassed in the training data. To overcome this issue, data augmentation is applied. The first step in augmentation is to make a dataset of images that are below a certain number in each of the classes. Specifically, classes with number of images less than 800 are selected to be augmented. Also, augmentation is done a way such that the final distribution of class vs count is relatively flat.
Step6: Check final size of the training dataset after augmentation and plot distribution of images per class
Step7: Model Architecture
Step8: Train, Validate and Test the Model
A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation
sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
Step9: Plot epoch vs training loss/accuracy and epoch vs validation loss/accuracy
Step10: Check accuracy on test dataset (ran only once after a final model was saved)
Step11: Step 3
Step12: Predict the Sign Type for Each Image and Analyze Performance
Step13: Output Top 5 Softmax Probabilities For Each Image Found on the Web
For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here.
The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.
tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.
Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tf.nn.top_k is used to choose the three classes with the highest probability
Step14: Project Writeup
Once you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file.
Note | Python Code:
# Load pickled data
import pickle
# TODO: Fill this in based on where you saved the training and testing data
training_file = r'C:\Users\VINOD\Google Drive\SDCND\CarND-Traffic-Sign-Classifier-Project\pickled_data\train.p'
validation_file = r'C:\Users\VINOD\Google Drive\SDCND\CarND-Traffic-Sign-Classifier-Project\pickled_data\valid.p'
testing_file = r'C:\Users\VINOD\Google Drive\SDCND\CarND-Traffic-Sign-Classifier-Project\pickled_data\test.p'
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
print('Data loaded')
Explanation: Self-Driving Car Engineer Nanodegree
Deep Learning
Project: Build a Traffic Sign Recognition Classifier
In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary.
Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.
In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing the code template and writeup template will cover all of the rubric points for this project.
The rubric contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.
Step 0: Load The Data
End of explanation
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
import numpy as np
# TODO: Number of training examples
n_train = len(X_train)
# TODO: Number of validation examples
n_validation = len(X_valid)
# TODO: Number of testing examples.
n_test = len(X_test)
# TODO: What's the shape of an traffic sign image?
image_shape = np.shape(X_train[0])
# TODO: How many unique classes/labels there are in the dataset.
n_classes = len(np.unique(y_train))
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Number of validation examples =", n_validation)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
Explanation: Step 1: Dataset Summary & Exploration
The pickled data is a dictionary with 4 key/value pairs:
'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
'sizes' is a list containing tuples, (width, height) representing the original width and height the image.
'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES
Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the pandas shape method might be useful for calculating some of the summary results.
Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
End of explanation
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import random
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
%matplotlib inline
# show image of 20 random data points
figs, axs = plt.subplots(4, 5, figsize = (16, 8))
figs.subplots_adjust(hspace = .2, wspace = .001)
axs = axs.ravel()
for i in range(20):
    index = random.randint(0, len(X_train) - 1)  # randint is inclusive on both ends; the -1 avoids an IndexError
image = X_train[index]
axs[i].axis('off')
axs[i].imshow(image)
axs[i].set_title(y_train[index])
Explanation: Include an exploratory visualization of the dataset
Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.
The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.
NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?
End of explanation
import numpy as np
train_class, train_counts = np.unique(y_train, return_counts=True)
plt.figure(figsize=(15, 5))
plt.bar(train_class, train_counts)
plt.grid()
plt.title("Training Dataset : class vs count")
plt.xlabel("Class")
plt.ylabel("Number of images")
plt.show()
test_class, test_counts = np.unique(y_test, return_counts=True)
plt.figure(figsize=(15, 5))
plt.bar(test_class, test_counts)
plt.grid()
plt.title("Testing Dataset : class vs count")
plt.xlabel("Class")
plt.ylabel("Number of images")
plt.show()
valid_class, valid_counts = np.unique(y_valid, return_counts=True)
plt.figure(figsize=(15, 5))
plt.bar(valid_class, valid_counts)
plt.grid()
plt.xlabel("Class")
plt.ylabel("Number of images")
plt.title("Validation Dataset : class vs count")
plt.show()
Explanation: Check counts of individual classes
End of explanation
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include converting
### to grayscale, etc. Feel free to use as many code cells as needed.
### Define functions to be used for pre-processing, cleaning and augmenting the data
import cv2
def normalize(dataset):
return dataset/128.0 - 1.0
def rgbtograyscale(dataset):
return np.sum(dataset/3, axis = 3, keepdims = True)
def histogram_eq(dataset):
hist_eq_dataset = []
channels = np.shape(dataset[0])[2]
for i in range(len(dataset)):
img = dataset[i]
for j in range(channels):
img[:, :, j] = cv2.equalizeHist(img[:, :, j])
hist_eq_dataset.append(img)
return hist_eq_dataset
def rand_rotation(dataset):
rand_rotation_dataset = []
rows,cols = dataset[0].shape[:2]
for i in range(len(dataset)):
img = dataset[i]
random_angle = 30.0*np.random.rand() - 15 # rotations are restricted to be within (-15, 15) degreees
M = cv2.getRotationMatrix2D((cols/2,rows/2), random_angle, 1)
dst = cv2.warpAffine(img, M, (cols, rows))
rand_rotation_dataset.append(dst)
return rand_rotation_dataset
def rand_translation(dataset):
rand_translation_dataset = []
rows,cols = dataset[0].shape[:2]
for i in range(len(dataset)):
img = dataset[i]
delx, dely = np.random.randint(-3, 3, 2)
M = np.float32([[1, 0, delx],[0, 1, dely]])
dst = cv2.warpAffine(img, M, (cols,rows))
rand_translation_dataset.append(dst)
return rand_translation_dataset
def rand_zoom(dataset):
rand_zoom_dataset = []
rows,cols = dataset[0].shape[:2]
for i in range(len(dataset)):
img = dataset[i]
px = np.random.randint(-2, 2) # transform limits
        pts1 = np.float32([[px,px],[rows-px,px],[px,cols-px],[rows-px,cols-px]]) # source corners, inset by px
        pts2 = np.float32([[0,0],[rows,0],[0,cols],[rows,cols]]) # destination corners (full image); getPerspectiveTransform maps pts1 -> pts2
M = cv2.getPerspectiveTransform(pts1,pts2)
dst = cv2.warpPerspective(img,M,(rows,cols))
rand_zoom_dataset.append(dst)
return rand_zoom_dataset
def augment_dataset(dataset, dataset_class, deficit, deficit_class):
# Note: even though there might be duplicates selected from the original X_train dataset
# in augment dataset, application of transformations like random rotation, histogram
# equalization etc. will produce new features for the network which will help
# generalize the model.
X_train_deficit = dataset[dataset_class == deficit_class]
X_train_augment = []
for i in range(deficit):
index = random.randint(0, len(X_train_deficit) - 1)
X_train_augment.append(X_train_deficit[index])
return X_train_augment, deficit_class*np.ones(len(X_train_augment))
Explanation: Step 2: Design and Test a Model Architecture
Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.
The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!
With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission.
There are various aspects to consider when thinking about this problem:
Neural network architecture (is the network over or underfitting?)
Play around preprocessing techniques (normalization, rgb to grayscale, etc)
Number of examples per label (some have more than others).
Generate fake data.
Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.
Pre-process the Data Set (normalization, grayscale, etc.)
Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, (pixel - 128)/ 128 is a quick way to approximately normalize the data and can be used in this project.
Other pre-processing steps are optional. You can try different techniques to see if it improves performance.
Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
End of explanation
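As a quick sanity check, the `normalize` helper defined in the cell above (x/128 − 1) is algebraically the same transform as the (pixel − 128)/128 rule quoted here, mapping the 8-bit range [0, 255] into roughly [−1, 1):

```python
import numpy as np

px = np.arange(256, dtype=np.float64)  # all possible 8-bit pixel values
a = (px - 128.0) / 128.0               # the rule quoted in the project instructions
b = px / 128.0 - 1.0                   # the form used by normalize() above
assert np.allclose(a, b)
print(a.min(), a.max())                # -1.0 0.9921875
```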
deficit_index = train_class[train_counts < 800] # This coincidentally is also the deficit class label
deficit_count = 800 - train_counts[deficit_index]
n_augment = np.sum(deficit_count)
print(deficit_count) # print the number of images to be added per class
print(deficit_index) # print the class of images to be added
print(n_augment) # print the total number of images to be added
X_train_augment = []
y_train_augment = []
for i in range(len(deficit_index)):
augment_data, augment_class = augment_dataset(X_train, y_train, deficit_count[i], deficit_index[i])
X_train_augment.extend(augment_data)
y_train_augment.extend(augment_class)
print(len(X_train_augment)) # verify number of images in augment dataset matches with total number of images to be added
X_train_augment = np.array(X_train_augment) # convert to np array from list
y_train_augment = np.array(y_train_augment) # convert to np array from list
print(np.shape(X_train))
print(np.shape(X_train_augment))
print(np.shape(y_train))
print(np.shape(y_train_augment))
# Once we have a set of images selected from the original dataset for augmenting the training data, we apply
# various transformations like random rotation, translation, zoom and histogram equalization on the augment dataset.
# This augment dataset is then concatenated with the original dataset.
X_train = np.concatenate((X_train, rand_translation(rand_rotation(histogram_eq(rand_zoom(X_train_augment))))), axis = 0)
y_train = np.concatenate((y_train, y_train_augment), axis = 0)
# show image of 20 random data points
figs, axs = plt.subplots(4, 5, figsize = (16, 8))
figs.subplots_adjust(hspace = .4, wspace = .001)
axs = axs.ravel()
for i in range(20):
index = random.randint(0, len(X_train) - 1)
image = X_train[index]
axs[i].axis('off')
axs[i].imshow(image)
axs[i].set_title(y_train[index])
Explanation: Augmenting training data by applying transformations to the original data
As can be seen from the visualization of test, training and validation data; the number of images per class varies greatly in each of the datasets. This means that there might be features in validation data, that are not fully encompassed in the training data. To overcome this issue, data augmentation is applied. The first step in augmentation is to make a dataset of images that are below a certain number in each of the classes. Specifically, classes with number of images less than 800 are selected to be augmented. Also, augmentation is done a way such that the final distribution of class vs count is relatively flat.
End of explanation
print(np.shape(X_train))
print(np.shape(y_train))
# plot distribution of augmented dataset
train_class, train_counts = np.unique(y_train, return_counts=True)
plt.figure(figsize=(15, 5))
plt.bar(train_class, train_counts)
plt.grid()
plt.title("Augmented training Dataset : class vs count")
plt.xlabel("Class")
plt.ylabel("Number of images")
plt.show()
X_train = np.array(X_train) # convert to np array from list
X_valid = np.array(X_valid) # convert to np array from list
X_train = normalize(rgbtograyscale(X_train)) # normalize the dataset and convert to grayscale
X_valid = normalize(rgbtograyscale(X_valid)) # normalize the dataset and convert to grayscale
# verify whether the preprocessing is applied successfully to validation and training datasets
print(np.shape(X_train))
print(np.shape(X_test))
print(np.shape(X_valid))
# show image of 20 random data points
figs, axs = plt.subplots(4, 5, figsize = (16, 8))
figs.subplots_adjust(hspace = .4, wspace = .001)
axs = axs.ravel()
for i in range(20):
index = random.randint(0, len(X_train) - 1)
image = X_train[index].squeeze()
axs[i].axis('off')
axs[i].imshow(image, cmap = 'gray')
axs[i].set_title(int(y_train[index]))
Explanation: Check final size of the training dataset after augmentation and plot distribution of images per class
End of explanation
### Define your architecture here.
### Feel free to use as many code cells as needed.
import tensorflow as tf
from tensorflow.contrib.layers import flatten
EPOCHS = 60
BATCH_SIZE = 200
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# Layer 1: Convolution. Input = 32x32x1. Output = 28x28x6.
conv1_w = tf.Variable(tf.truncated_normal(shape = (5, 5, 1, 6), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(6))
conv1 = tf.nn.conv2d(x, conv1_w, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# Activation
conv1 = tf.nn.relu(conv1)
# Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# Layer 2: Convolution. Output = 10x10x16.
conv2_w = tf.Variable(tf.truncated_normal([5, 5, 6, 16], mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, conv2_w, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# Activation
conv2 = tf.nn.relu(conv2)
# Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# Flatten. Input = 5x5x16. Output = 400.
conv2 = flatten(conv2)
    # Dropout layer 1 (note: the keep probability is hard-coded, so dropout also
    # fires during evaluation; feeding a keep_prob placeholder set to 1.0 at
    # evaluation time would be the usual fix)
    conv2 = tf.nn.dropout(conv2, 0.80)
# Layer 3: Fully Connected. Input = 400. Output = 120.
fconnected1_w = tf.Variable(tf.truncated_normal([400, 120], mean = mu, stddev = sigma))
fconnected1_b = tf.Variable(tf.zeros(120))
fconnected_1 = tf.add(tf.matmul(conv2, fconnected1_w), fconnected1_b)
# Activation.
fconnected_1 = tf.nn.relu(fconnected_1)
# Dropout layer 2
fconnected_1 = tf.nn.dropout(fconnected_1, 0.75)
# Layer 4: Fully Connected. Input = 120. Output = 84.
fconnected2_w = tf.Variable(tf.truncated_normal([120, 84], mean = mu, stddev = sigma))
fconnected2_b = tf.Variable(tf.zeros(84))
fconnected_2 = tf.add(tf.matmul(fconnected_1, fconnected2_w), fconnected2_b)
# Activation.
fconnected_2 = tf.nn.relu(fconnected_2)
# Layer 5: Fully Connected. Input = 84. Output = 43.
fconnected3_w = tf.Variable(tf.truncated_normal([84, 43], mean = mu, stddev = sigma))
fconnected3_b = tf.Variable(tf.zeros(43))
logits = tf.add(tf.matmul(fconnected_2, fconnected3_w), fconnected3_b)
return logits
Explanation: Model Architecture
End of explanation
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 43)
rate = 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
total_loss = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
loss, accuracy = sess.run([loss_operation, accuracy_operation], feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
total_loss += (loss * len(batch_x))
return total_loss/float(num_examples), total_accuracy/float(num_examples)
train_loss_history = []
valid_loss_history = []
train_accuracy_history = []
valid_accuracy_history = []
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_loss, validation_accuracy = evaluate(X_valid, y_valid)
valid_loss_history.append(validation_loss)
valid_accuracy_history.append(validation_accuracy)
train_loss, train_accuracy = evaluate(X_train, y_train)
train_loss_history.append(train_loss)
train_accuracy_history.append(train_accuracy)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print("Training Accuracy = {:.3f}".format(train_accuracy))
print("Validation loss = {:.3f}".format(validation_loss))
print("Training loss = {:.3f}".format(train_loss))
print()
saver.save(sess, './lenet')
print("Model saved")
Explanation: Train, Validate and Test the Model
A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation
sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
End of explanation
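The under-/overfitting rule of thumb above can be expressed as a tiny helper for reading the accuracy histories collected during training (the thresholds are illustrative choices, not part of the project spec):

```python
def fit_diagnosis(train_acc, valid_acc, gap_tol=0.05, low_tol=0.90):
    """Crude heuristic: both accuracies low -> underfitting;
    a large train/validation gap -> overfitting; otherwise OK."""
    if train_acc < low_tol and valid_acc < low_tol:
        return "underfitting"
    if train_acc - valid_acc > gap_tol:
        return "overfitting"
    return "looks reasonable"

print(fit_diagnosis(0.99, 0.89))  # overfitting
print(fit_diagnosis(0.80, 0.78))  # underfitting
```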
plt.figure()
loss_plot = plt.subplot(2,1,1)
loss_plot.set_title('Loss')
loss_plot.plot(train_loss_history, 'r', label='Training Loss')
loss_plot.plot(valid_loss_history, 'b', label='Validation Loss')
loss_plot.set_xlim([0, EPOCHS])
loss_plot.legend(loc = 1)
plt.figure()
accuracy_plot = plt.subplot(2,1,1)
accuracy_plot.set_title('Accuracy')
accuracy_plot.plot(train_accuracy_history, 'r', label='Training Accuracy')
accuracy_plot.plot(valid_accuracy_history, 'b', label='Validation Accuracy')
accuracy_plot.set_xlim([0, EPOCHS])
accuracy_plot.legend(loc = 4)
Explanation: Plot epoch vs training loss/accuracy and epoch vs validation loss/accuracy
End of explanation
# Normalize and grayscale test dataset
X_test = normalize(rgbtograyscale(X_test))
#Run testing
with tf.Session() as sess:
saver_new = tf.train.import_meta_graph('./lenet.meta')
saver_new.restore(sess, "./lenet")
test_loss, test_accuracy = evaluate(X_test, y_test)
print("Test Set Accuracy = {:.3f}".format(test_accuracy))
Explanation: Check accuracy on test dataset (ran only once after a final model was saved)
End of explanation
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
from scipy import misc  # note: scipy.misc.imread was removed in SciPy 1.2; imageio.imread is the modern replacement
import glob
X_test_new = []
y_test_new = np.array([0, 14, 25, 4, 38])
path = r"C:\Users\VINOD\Google Drive\SDCND\CarND-Traffic-Sign-Classifier-Project\test_images_from_web/*.png"
for image_path in sorted(glob.glob(path)):  # sorted() makes the file order deterministic so it matches y_test_new
X_test_new.append(misc.imread(image_path))
X_test_new = np.asarray(X_test_new)
print('Importing done...')
# Apply normalization, rgb to grayscale conversion and other preprocessing steps
X_test_new = normalize(rgbtograyscale(X_test_new))
print()
print("Dimensions of test dataset:")
print(X_test_new.shape)
# show new test images
figs, axs = plt.subplots(1, 5, figsize = (6, 8))
figs.subplots_adjust(hspace = .4, wspace = .01)
axs = axs.ravel()
for i in range(5):
image = X_test_new[i].squeeze()
axs[i].axis('off')
axs[i].imshow(image, cmap = 'gray')
axs[i].set_title(int(y_test_new[i]))
Explanation: Step 3: Test a Model on New Images
To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.
You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.
Load and Output the Images
End of explanation
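signnames.csv maps each class id to a readable sign name; a small sketch of loading it follows (the two-column ClassId/SignName layout is assumed from the project files, and the inline sample below stands in for the real 43-row file):

```python
import io
import pandas as pd

# Stand-in for signnames.csv; the real file has all 43 rows.
sample = io.StringIO("ClassId,SignName\n0,Speed limit (20km/h)\n14,Stop\n25,Road work\n")
sign_names = pd.read_csv(sample).set_index("ClassId")["SignName"].to_dict()
print(sign_names[14])  # Stop
```

With the real file, `pd.read_csv("signnames.csv")` in place of the sample gives the full id-to-name mapping.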
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
#Run testing
with tf.Session() as sess:
saver_new2 = tf.train.import_meta_graph('./lenet.meta')
saver_new2.restore(sess, "./lenet")
test_loss, test_accuracy = evaluate(X_test_new, y_test_new)
softmax_prob = sess.run(tf.nn.softmax(logits), feed_dict={x: X_test_new})
top5_prob = sess.run(tf.nn.top_k(softmax_prob, k = 5))
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
print("New Test Set Accuracy = {:.3f}".format(test_accuracy))
Explanation: Predict the Sign Type for Each Image and Analyze Performance
End of explanation
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
plt.figure(figsize = (8, 16))
for i in range(5):
plt.subplot(5, 2, 2*i+1)
plt.imshow(X_test_new[i].squeeze(), cmap = 'gray')
plt.title(y_test_new[i])
plt.axis('off')
plt.subplot(5, 2, 2*i+2)
plt.barh(np.arange(1, 6, 1), top5_prob.values[i, :])
plt.ylabel("Class")
plt.xlabel("Probability")
labs = top5_prob.indices[i]
plt.yticks(np.arange(1, 6, 1), labs)
plt.show()
Explanation: Output Top 5 Softmax Probabilities For Each Image Found on the Web
For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here.
The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.
tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids.
Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tf.nn.top_k is used to choose the three classes with the highest probability:
```
# (5, 6) array
a = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497,
0.12789202],
[ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401,
0.15899337],
[ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 ,
0.23892179],
[ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 ,
0.16505091],
[ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137,
0.09155967]])
```
Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:
TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202],
[ 0.28086119, 0.27569815, 0.18063401],
[ 0.26076848, 0.23892179, 0.23664738],
[ 0.29198961, 0.26234032, 0.16505091],
[ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5],
[0, 1, 4],
[0, 5, 1],
[1, 3, 5],
[1, 4, 3]], dtype=int32))
Looking just at the first row we get [ 0.34763842, 0.24879643, 0.12789202]; you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices.
End of explanation
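If you want to reproduce this selection without TensorFlow, the same top-k behaviour can be sketched in plain NumPy (using the array a from the example above):

```python
import numpy as np

a = np.array([[0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497, 0.12789202],
              [0.28086119, 0.27569815, 0.08594638, 0.01786690, 0.18063401, 0.15899337]])

k = 3
# indices of the k largest entries per row, in descending order
indices = np.argsort(a, axis=1)[:, ::-1][:, :k]
# gather the corresponding probabilities
values = np.take_along_axis(a, indices, axis=1)

print(indices[0])  # [3 0 5], matching tf.nn.top_k on the first row
print(values[0])
```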
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
# with size, normalization, ect if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
if activation_min != -1 and activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
Explanation: Project Writeup
Once you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file.
Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to \n",
"File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.
Step 4 (Optional): Visualize the Neural Network's State with Test Images
This section is not required to complete but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimuli image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.
Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimuli image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process, for instance if you wanted to see what the LeNet lab's feature maps looked like for its second convolutional layer you could enter conv2 as the tf_activation variable.
For an example of what feature map outputs look like, check out NVIDIA's results in their paper End-to-End Deep Learning for Self-Driving Cars in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.
<figure>
<img src="visualize_cnn.png" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above)</p>
</figcaption>
</figure>
<p></p>
End of explanation |
7,670 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
The competition is to predict the highest future returns for stocks that are actually traded on the Japan Exchange Group, Inc.
In this notebook, we will work with jpx_tokyo_market_prediction, which is unfamiliar to Kaggle beginners, and how to extract the relevant data in the training data.
Table of Contents
Explanation of data
jpx_tokyo_market_prediction
Create models and submit data
TODO
add sharp ratio metrics for evaluation
improve prediction with dataframe operation
pytorch gpu check
Step1: Explanation of data
Loading Modules
First, load the required modules.
In this case, we will use pandas to load the data.
Step2: Check the data
Read stock_price.csv using read_csv in pandas.
Step5: define metric
Step6: Check the form of this data (nrows, columns) and the contents.
The data contained in stock_price.csv was as follows.
* SecuritiesCode ... Securities Code (number assigned to each stock)
* Open ... Opening price (price per share at the beginning of the day (9:00 am))
Step7: Create Dataset
Create a data set that can be retrieved for each Code.
First, convert Nan in stock_price_df to 0, bool to int, and 'Date' to datetime.
Step8: Some of them contained missing values, so they were removed.
Step9: Should data preprocessing be done on all stocks together, or per stock?
Standardize the features (other than RowId, Date, and SecuritiesCode) to be used in this project using sklearn's StandardScaler (stdsc).
Step10: Store data for each issue in dictionary form and store it in such a way that it can be recalled for each issue.
Step11: Use Pytorch dataloader to recall data for each mini-batch.
Step12: Training
For each stock, LSTM training is conducted by repeatedly creating a data set and training the model.
To see the learning status, check whether the loss decreases over the epochs.
→ The loss goes down, so the model is able to learn.
Step13: Prediction
The trained model will be used to make predictions on the submitted data.
Transform the data DataFrame → ndarray → tensor to make predictions.
Step14: Submission
Perform data preparation for submission from jpx_tokyo_market_prediction. | Python Code:
# check gpu env with torch
import torch
print(torch.__version__) # current torch version
print(torch.version.cuda) # CUDA version this torch build was compiled with
print("is_cuda_available:", torch.cuda.is_available()) # whether CUDA is usable with this torch build
print('gpu count:', torch.cuda.device_count())
# inspect the capability and name of the specified GPU
device = "cuda:0"
print(f"{device} capability:", torch.cuda.get_device_capability(device))
print(f"{device} name:", torch.cuda.get_device_name(device))
Explanation: Introduction
The competition is to predict the highest future returns for stocks that are actually traded on the Japan Exchange Group, Inc.
In this notebook, we will work with jpx_tokyo_market_prediction, which is unfamiliar to Kaggle beginners, and how to extract the relevant data in the training data.
Table of Contents
Explanation of data
jpx_tokyo_market_prediction
Create models and submit data
TODO
add sharp ratio metrics for evaluation
improve prediction with dataframe operation
pytorch gpu check
End of explanation
import numpy as np
import pandas as pd
Explanation: Explanation of data
Loading Modules
First, load the required modules.
In this case, we will use pandas to load the data.
End of explanation
stock_price_df = pd.read_csv("/mnt/d/dataset/quant/kaggle22/train_files/stock_prices.csv")
test_stock_price_df = pd.read_csv("/mnt/d/dataset/quant/kaggle22/supplemental_files/stock_prices.csv")
# stock_price_df = pd.read_csv("../input/jpx-tokyo-stock-exchange-prediction//train_files/stock_prices.csv")
# test_stock_price_df = pd.read_csv("../input/jpx-tokyo-stock-exchange-prediction/supplemental_files/stock_prices.csv")
from datetime import datetime
from sklearn.preprocessing import StandardScaler
stdsc = StandardScaler()
# columns = ['Open', 'High', 'Low', 'Close', 'Volume', 'AdjustmentFactor', 'ExpectedDividend', 'SupervisionFlag']
def preprocess(df, processor, columns = ['Open', 'High', 'Low', 'Close', 'Volume', 'AdjustmentFactor', 'ExpectedDividend', 'SupervisionFlag'], is_fit=True):
df = df.copy()
df['ExpectedDividend'] = df['ExpectedDividend'].fillna(0)
df['SupervisionFlag'] = df['SupervisionFlag'].map({True: 1, False: 0})
df['Date'] = pd.to_datetime(df['Date'])
# df.info()
df = df.dropna(how='any')
df[columns] = processor.fit_transform(df[columns])
return df
train_df = preprocess(stock_price_df, stdsc, is_fit=True)
test_df = preprocess(test_stock_price_df, stdsc, is_fit=False)
train_df
Explanation: Check the data
Read stock_price.csv using read_csv in pandas.
End of explanation
def calc_spread_return_sharpe(df: pd.DataFrame, portfolio_size: int = 200, toprank_weight_ratio: float = 2, rank='Rank') -> float:
    """
    Args:
        df (pd.DataFrame): predicted results
        portfolio_size (int): # of equities to buy/sell
        toprank_weight_ratio (float): the relative weight of the most highly ranked stock compared to the least.
    Returns:
        (float): sharpe ratio
    """
def _calc_spread_return_per_day(df, portfolio_size, toprank_weight_ratio, rank='Rank'):
        """
        Args:
            df (pd.DataFrame): predicted results
            portfolio_size (int): # of equities to buy/sell
            toprank_weight_ratio (float): the relative weight of the most highly ranked stock compared to the least.
        Returns:
            (float): spread return
        """
assert df[rank].min() == 0
assert df[rank].max() == len(df[rank]) - 1
weights = np.linspace(start=toprank_weight_ratio, stop=1, num=portfolio_size)
purchase = (df.sort_values(by=rank)['Target'][:portfolio_size] * weights).sum() / weights.mean()
short = (df.sort_values(by=rank, ascending=False)['Target'][:portfolio_size] * weights).sum() / weights.mean()
return purchase - short
buf = df.groupby('Date').apply(_calc_spread_return_per_day, portfolio_size, toprank_weight_ratio, rank)
sharpe_ratio = buf.mean() / buf.std()
return sharpe_ratio
# add rank according to Target for train
train_df['Rank'] = train_df.groupby("Date")['Target'].transform('rank', ascending=False, method="first") - 1
train_df['Rank'] = train_df['Rank'].astype(int)
# print(train_df['Rank'].min())
df_astock = train_df[train_df['Date'] == '2021-12-03']
# make sure it's correct
print(df_astock['Rank'].min(), df_astock['Rank'].max())
df_astock.sort_values(by=['Target'], ascending=False)
# sharp = calc_spread_return_sharpe(train_df)
tmpdf = test_df.copy()
tmpdf["Close_shift1"] = tmpdf["Close"].shift(-1)
tmpdf["Close_shift2"] = tmpdf["Close"].shift(-2)
tmpdf["rate"] = (tmpdf["Close_shift2"] - tmpdf["Close_shift1"]) / tmpdf["Close_shift1"]
tmpdf.fillna(value={'rate': 0.}, inplace=True)
tmpdf['Rank'] = tmpdf.groupby("Date")['Target'].transform('rank', ascending=False, method="first") - 1
tmpdf['Rank'] = tmpdf['Rank'].astype(int)
tmpdf
test_df
sharp_train = calc_spread_return_sharpe(train_df)
sharp_test = calc_spread_return_sharpe(tmpdf)
print(f"train={sharp_train}, test={sharp_test}")
train_df.drop(['Rank'], axis=1, inplace=True)
Explanation: define metric: Sharpe ratio of the daily spread return, computed for the training data with known Target
End of explanation
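As a sanity check on the metric defined above, here is a toy version of the per-day spread return and the resulting Sharpe ratio on made-up daily returns (portfolio of 2 long / 2 short; note np.std uses the population std, while the pandas .std() inside calc_spread_return_sharpe uses the sample std):

```python
import numpy as np

# toy daily spread return: long the top-k ranked stocks, short the bottom-k,
# with linearly decaying weights from toprank_weight_ratio down to 1
def spread_return_day(targets_sorted_by_rank, k=2, toprank_weight_ratio=2.0):
    weights = np.linspace(toprank_weight_ratio, 1, k)
    purchase = (targets_sorted_by_rank[:k] * weights).sum() / weights.mean()
    short = (targets_sorted_by_rank[::-1][:k] * weights).sum() / weights.mean()
    return purchase - short

# two synthetic trading days, four stocks each, already sorted by rank
daily = np.array([spread_return_day(np.array([0.04, 0.02, -0.01, -0.03])),
                  spread_return_day(np.array([0.03, 0.01, 0.00, -0.02]))])
sharpe = daily.mean() / daily.std()
print(daily, sharpe)
```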
import torch
import torch.nn as nn
import torch.nn.functional as F
class LSTM(nn.Module):
def __init__(self, d_feat=6, hidden_size=64, num_layers=2, dropout=0.0):
super().__init__()
self.rnn = nn.LSTM(
input_size=d_feat,
hidden_size=hidden_size,
num_layers=num_layers,
batch_first=True,
dropout=dropout,
)
self.fc_out = nn.Linear(hidden_size, 1)
self.d_feat = d_feat
def forward(self, x):
# x: [N, F*T]
# x = x.reshape(len(x), self.d_feat, -1) # [N, F, T]
# x = x.permute(0, 2, 1) # [N, T, F]
out, _ = self.rnn(x)
return self.fc_out(out[:, -1, :]).squeeze(dim=-1)
Explanation: Check the form of this data (nrows, columns) and the contents.
The data contained in stock_price.csv was as follows.
* SecuritiesCode ... Securities Code (number assigned to each stock)
* Open ... Opening price (price per share at the beginning of the day (9:00 am))
* High ... the highest price of the day
* Low ... Low price
* Close ... Closing price
* Volume ... Volume (number of shares traded in a day)
* AdjustmentFactor ... Used to calculate the theoretical stock price and volume at the time of a stock split or reverse stock split
* ExpectedDividend ... Expected dividend on ex-rights date
* SupervisionFlag ... Flag for supervised issues and delisted issues
* Target ... Percentage change in adjusted closing price (from one day to the next)
Although many other data are available for this competition, we will implement this using only the information in stock_price.csv.
jpx_tokyo_market_prediction
Next, we will check the usage of the API named jpx_tokyo_market_prediction.
First, import it as you would any other module.
Since jpx_tokyo_market_prediction can only be executed once, we will write the image in Markdown.
python
import jpx_tokyo_market_prediction
env = jpx_tokyo_market_prediction.make_env()
iter_test = env.iter_test()
The environment was created by executing make_env() and the object was created by executing iter_test().
As shown below, looking at the type, iter_test is a generator, so we can confirm that it is an object that can be called one by one with a for statement.
python
print(type(iter_test))
[出力]
<class 'generator'>
By turning a for statement, check the operation as follows.
python
count = 0
for (prices, options, financials, trades, secondary_prices, sample_prediction) in iter_test:
print(prices.head())
env.predict(sample_prediction)
count += 1
break
[出力]
```
This version of the API is not optimized and should not be used to estimate the runtime of your code on the hidden test set.
Date RowId SecuritiesCode Open High Low Close \
0 2021-12-06 20211206_1301 1301 2982.0 2982.0 2965.0 2971.0
1 2021-12-06 20211206_1332 1332 592.0 599.0 588.0 589.0
2 2021-12-06 20211206_1333 1333 2368.0 2388.0 2360.0 2377.0
3 2021-12-06 20211206_1375 1375 1230.0 1239.0 1224.0 1224.0
4 2021-12-06 20211206_1376 1376 1339.0 1372.0 1339.0 1351.0
Volume AdjustmentFactor ExpectedDividend SupervisionFlag
0 8900 1.0 NaN False
1 1360800 1.0 NaN False
2 125900 1.0 NaN False
3 81100 1.0 NaN False
4 6200 1.0 NaN False
```
The names of each variable are as follows.
* price ... Data for each stock on the target day, the same as the information in stock_price.csv without Target.
* options ... Same information as options.csv for the target date.
* financials ... Same information as financials.csv for the target date.
* trades ... Same information as trades.csv of the target date
* secondary_prices ... Same information as secondary_stock_price.csv without Target for the target date.
* sample_prediction ... Data from sample_prediction.csv for the target date.
Thus, if we call the 2000 stocks of the target date one day at a time using jpx_tokyo_market_prediction, forecast them with the model we created, and then create the submitted data with env.predict, we can produce a score.
Create models and submit data
Here, we will create a simple training model using stock_price.csv and implement it up to submission.
Create Model(LSTM)
We use a model called LSTM (Long Short Term Memory).
LSTM is one of the RNNs used for series data and is a model that can learn long-term dependencies.
We will implement LSTM using Pytorch.
End of explanation
# stock_price_df['ExpectedDividend'] = stock_price_df['ExpectedDividend'].fillna(0)
# stock_price_df['SupervisionFlag'] = stock_price_df['SupervisionFlag'].map({True: 1, False: 0})
# stock_price_df['Date'] = pd.to_datetime(stock_price_df['Date'])
# stock_price_df.info()
Explanation: Create Dataset
Create a data set that can be retrieved for each Code.
First, convert Nan in stock_price_df to 0, bool to int, and 'Date' to datetime.
End of explanation
# stock_price_df = stock_price_df.dropna(how='any')
# # Confirmation of missing information
# stock_price_df_na = (stock_price_df.isnull().sum() / len(stock_price_df)) * 100
# stock_price_df_na = stock_price_df_na.drop(stock_price_df_na[stock_price_df_na == 0].index).sort_values(ascending=False)[:30]
# missing_data = pd.DataFrame({'Missing Ratio' :stock_price_df_na})
# missing_data.head(22)
Explanation: Some of them contained missing values, so they were removed.
End of explanation
# from sklearn.preprocessing import StandardScaler
# stdsc = StandardScaler()
# columns = ['Open', 'High', 'Low', 'Close', 'Volume', 'AdjustmentFactor', 'ExpectedDividend', 'SupervisionFlag']
# stock_price_df[columns] = stdsc.fit_transform(stock_price_df[columns])
# stock_price_df.head()
Explanation: 数据预处理是否应该把所有股票放在一起?
stdscStandardize the features (other than RowId, Date, and SecuritiesCode) to be used in this project using sklearn's StandardScaler.
End of explanation
dataset_dict = {}
for sc in train_df['SecuritiesCode'].unique():
dataset_dict[str(sc)] = train_df[train_df['SecuritiesCode'] == sc].values[:, 3:].astype(np.float32)
print(dataset_dict['1301'].shape)
Explanation: Store data for each issue in dictionary form and store it in such a way that it can be recalled for each issue.
End of explanation
from torch.utils.data.sampler import SubsetRandomSampler
class MyDataset(torch.utils.data.Dataset):
def __init__(self, X, sequence_num=31, y=None, mode='train'):
self.data = X
self.teacher = y
self.sequence_num = sequence_num
self.mode = mode
def __len__(self):
return len(self.teacher)
def __getitem__(self, idx):
out_data = self.data[idx]
if self.mode == 'train':
out_label = self.teacher[idx[-1]]
return out_data, out_label
else:
return out_data
def create_dataloader(dataset, dataset_num, sequence_num=31, input_size=8, batch_size=32, shuffle=False):
sampler = np.array([list(range(i, i+sequence_num)) for i in range(dataset_num-sequence_num+1)])
if shuffle is True:
np.random.shuffle(sampler)
dataloader = torch.utils.data.DataLoader(dataset, batch_size, sampler=sampler)
return dataloader
test_df.loc[test_df['Date'] == "2021-12-06"]
Explanation: Use Pytorch dataloader to recall data for each mini-batch.
End of explanation
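The sampler built in create_dataloader above is just a list of sliding windows of consecutive row indices; a small sketch of that index construction:

```python
import numpy as np

# sliding windows of sequence_num consecutive indices over dataset_num rows,
# mirroring the sampler inside create_dataloader
sequence_num = 4
dataset_num = 7
windows = np.array([list(range(i, i + sequence_num))
                    for i in range(dataset_num - sequence_num + 1)])
print(windows)
```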
import matplotlib.pyplot as plt
plt.plot(log_train[0][1:], log_train[1][1:])
plt.plot(log_eval[0][1:], log_eval[1][1:])
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()
from tqdm import tqdm
import time
import os
import copy
output_dir = "output_lstm"
if not os.path.exists(output_dir):
os.makedirs(output_dir)
epochs = 20
batch_size = 512
seq_len = 14
num_layers = 2
input_size = 5
lstm_dim = 64
dropout = 0.
# Check wheter GPU is available
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
# Model Instantiation
model = LSTM(d_feat=input_size, hidden_size=lstm_dim, num_layers=num_layers, dropout=dropout)
model.to(device)
model.train()
# setting optimizer
lr = 0.0001
weight_decay = 1.0e-05
optimizer = torch.optim.Adagrad(model.parameters(), lr=lr, weight_decay=weight_decay)
# setting criterion
criterion = nn.MSELoss()
def train_epoch(train_df, model, seq_len=30, batch_size=512):
groups = train_df.groupby(['SecuritiesCode'])
total_loss = 0.
iteration = 0
model.train()
def collect_batch_index(): # index for a stock with seq_len continuous days
batch_index = []
for sc, group in groups:
indices = np.arange(len(group))
for i in range(len(indices))[:: seq_len]:
if len(indices) - i < seq_len:
break
batch_index.append(group.index[i: i + seq_len])
return batch_index
batch_index = collect_batch_index()
indices = np.arange(len(batch_index))
np.random.shuffle(indices)
for i in range(len(indices))[:: batch_size]:
# if len(indices) - i < batch_size:
# break
x_train = []
y_train = []
for index in indices[i: i + batch_size]:
values = train_df.loc[batch_index[index]].values
x_train.append(values[:, 3: 3 + input_size].astype(np.float32))
y_train.append(values[:, -1][-1])
# print(y_train)
feature = torch.from_numpy(np.vstack(x_train).reshape((len(y_train), seq_len, -1))).float().to(device)
label = torch.from_numpy(np.vstack(y_train)).flatten().float().to(device)
# print(feature.size(), label.size())
pred = model(feature)
# print(pred.size(), label.size())
loss = criterion(pred, label)
total_loss += loss.item()
# if list(label.size())[0] < batch_size:
# print('train', pred.size(), label.size(), feature.size())
optimizer.zero_grad()
loss.backward()
torch.nn.utils.clip_grad_value_(model.parameters(), 3.0)
optimizer.step()
iteration += 1
return total_loss/iteration
def test_epoch(train_df, model, seq_len=30, batch_size=512):
groups = train_df.groupby(['SecuritiesCode'])
total_loss = 0.
iteration = 0
model.eval()
tmp_df = train_df.copy()
tmp_df['pred'] = 0 # rows that receive no prediction default to a return of 0
def collect_batch_index(): # index for a stock with seq_len continuous days
batch_index = []
for sc, group in groups:
indices = np.arange(len(group))
for i in range(len(indices))[:: seq_len]:
if len(indices) - i < seq_len:
break
batch_index.append(group.index[i: i + seq_len])
return batch_index
batch_index = collect_batch_index()
indices = np.arange(len(batch_index))
pre_indices = [index[-1] for index in batch_index]
pred_array = np.array([])
for i in range(len(indices))[:: batch_size]:
# if len(indices) - i < batch_size:
# break
x_train = []
y_train = []
for index in indices[i: i + batch_size]:
values = train_df.loc[batch_index[index]].values
# see the train_df format upstair
x_train.append(values[:, 3: 3 + input_size].astype(np.float32))
y_train.append(values[:, -1][-1])
feature = torch.from_numpy(np.vstack(x_train).reshape((len(y_train), seq_len, -1))).float().to(device)
label = torch.from_numpy(np.vstack(y_train)).flatten().float().to(device)
pred = model(feature)
loss = criterion(pred, label)
# if list(label.size())[0] < batch_size:
# print('test', pred.size(), label.size(), feature.size())
total_loss += loss.item()
# print(pred)
pred_array = np.append(pred_array, pred.detach().cpu().numpy())
iteration += 1
# print(len(pre_indices), len(pred_array))
tmp_df.loc[pre_indices, 'pred'] = pred_array
tmp_df['Rank'] = tmp_df.groupby("Date")['pred'].transform('rank', ascending=False, method="first") - 1
tmp_df['Rank'] = tmp_df['Rank'].astype(int)
sharp = calc_spread_return_sharpe(tmp_df)
return total_loss/iteration, sharp
log_train = [[0], [np.inf]]
log_eval = [[0], [np.inf]]
best_eval_loss = np.inf
best_model_path = 'Unknown'
if True:
model.eval()
train_loss, train_sharp = test_epoch(train_df, model, batch_size=batch_size, seq_len=seq_len)
test_loss, test_sharp = test_epoch(test_df, model, batch_size=batch_size, seq_len=seq_len)
print("with training, random train_loss={}, train_sharp={}, eval_loss={}, eval_sharp={}".format(train_loss, train_sharp, test_loss, test_sharp))
_tqdm = tqdm(range(epochs))
for epoch in _tqdm:
epoch_loss = 0.0
# set iteration counter
iteration = 0
start_time = time.time()
epoch_loss = train_epoch(train_df, model, seq_len=seq_len, batch_size=batch_size)
end_time = time.time()
# print('epoch_loss={}'.format(epoch_loss))
log_train[0].append(epoch)
log_train[1].append(epoch_loss)
# eval
eval_loss, sharp = test_epoch(test_df, model, seq_len=seq_len, batch_size=batch_size)
train_loss, train_sharp = test_epoch(train_df, model, seq_len=seq_len, batch_size=batch_size)
log_eval[0].append(epoch)
log_eval[1].append(eval_loss)
if best_eval_loss > eval_loss:
best_eval_loss = eval_loss
best_model_path = f"{output_dir}/{epoch}.pt"
# print("epoch {}, run_time={}, train loss={}, eval_loss={}".format(epoch, (end_time - start_time), epoch_loss, eval_loss))
print("epoch {}, run_time={}, train loss={}, eval_loss={}, eval_sharp={}".format(epoch, (end_time - start_time), epoch_loss, eval_loss, sharp))
print("\t train_loss={}, train_sharp={}".format(train_loss, train_sharp))
# _tqdm.set_description("epoch {}, run_time={}, train loss={}, eval_loss={}, eval_sharp={}".format(epoch, (end_time - start_time), epoch_loss, eval_loss, sharp))
# save mode
save_path = f"{output_dir}/{epoch}.pt"
param = copy.deepcopy(model.state_dict())
torch.save(param, save_path)
print(best_model_path)
# best_model_path = "output_lstm/17.pt"
model.load_state_dict(torch.load(best_model_path))
Explanation: Training
For each stock, LSTM training is conducted by repeatedly creating a data set and training the model.
To see the learning status, check whether the loss decreases over the epochs.
→ The loss goes down, so the model is able to learn.
End of explanation
from datetime import datetime
columns = ['Open', 'High', 'Low', 'Close', 'Volume', 'AdjustmentFactor', 'ExpectedDividend', 'SupervisionFlag']
def predict(model, X_df, sequence=30):
pred_df = X_df[['Date', 'SecuritiesCode']]
# Grouping by `groupby` and retrieving one by one
code_group = X_df.groupby('SecuritiesCode')
X_all = np.array([])
for sc, group in code_group:
# Standardize target data
group_std = stdsc.transform(group[columns])
# Calling up past data of the target data
X = dataset_dict[str(sc)][-1*(sequence-1):, :-1]
# concat
group_std_add = np.zeros((group_std.shape[0], group_std.shape[1]+1))
group_std_add[:, :-1] = group_std
dataset_dict[str(sc)] = np.vstack((dataset_dict[str(sc)], group_std_add))
X = np.vstack((X[:, :input_size], group_std[:, :input_size]))
X_all = np.append(X_all, X)
X_all = X_all.reshape(-1, sequence, X.shape[1])
y_pred = np.array([])
for it in range(X_all.shape[0]//512+1):
data = X_all[it*512:(it+1)*512]
data = torch.from_numpy(data.astype(np.float32)).clone()
data = data.to(torch.float32)
data = data.to(device)
print('input size', data.size())
# print(data)
output = model.forward(data)
# print(output)
# output = output.view(1, -1)
output = output.to('cpu').detach().numpy().copy()
y_pred = np.append(y_pred, output)
pred_df['target'] = y_pred
# print(y_pred, y_pred.shape)
pred_df['Rank'] = pred_df["target"].rank(ascending=False, method="first") - 1
pred_df['Rank'] = pred_df['Rank'].astype(int)
pred_df = pred_df.drop('target', axis=1)
return pred_df
Explanation: Prediction
The trained model will be used to make predictions on the submitted data.
Transform the data DataFrame → ndarray → tensor to make predictions.
End of explanation
import sys
sys.path.append("/mnt/d/dataset/quant/kaggle22/")
import jpx_tokyo_market_prediction
env = jpx_tokyo_market_prediction.make_env()
iter_test = env.iter_test()
count = 0
for (prices, options, financials, trades, secondary_prices, sample_prediction) in iter_test:
prices = prices.fillna(0)
prices['SupervisionFlag'] = prices['SupervisionFlag'].map({True: 1, False: 0})
prices['Date'] = pd.to_datetime(prices['Date'])
pred_df = predict(model, prices)
# print(pred_df)
env.predict(pred_df)
count += 1
pred_df
Explanation: Submission
Perform data preparation for submission from jpx_tokyo_market_prediction.
End of explanation |
7,671 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning 101++ in Python
Created by
Step1: 2. Linear Regression
Linear Regression assumes a linear relationship between 2 variables.
As an example we'll consider the historical page views of a web server and compare it to its CPU usage. We'll try to predict the CPU usage of the server based on the page views of the different pages.
2.1 Data import and inspection
Let's import the data and take a look at it.
Step2: The orange line on the plot above is the number of page views in blue and the orange line is the CPU load that viewing this pages generates on the server.
2.2 Simple linear regression
First, we're going to work with the total page views on the server, and compare it to the CPU usage. We can make use of a PyPlot's scatter plot to understand the relation between the total page views and the CPU usage
Step3: There clearly is a strong correlation between the page views and the CPU usage. Because of this correlation we can build a model to predict the CPU usage from the total page views. If we use a linear model we get a formula like the following
Step4: Now we need to feed the data to the model to fit it.
```
X = [[x_11, x_12, x_13, ...], y = [y_1,
[x_21, x_22, x_23, ...], y_2,
[x_31, x_32, x_33, ...], y_3,
...] ...]
```
In general, the model.fit(X,y) method takes a matrix X and vector y as arguments and tries to find coefficients that allow to predict the y_i's from the x_ij's. In our case the matrix X will consist of only one column containing the total page views. Our total_page_views variable however, is still only a one-dimensional vector, so we need to np.reshape() it into a two-dimensional array. Since there is only one feature the second dimension should be 1. You can leave one dimension unspecified by passing -1, it will be determined from the size of the data.
Then we fit our model using the the total page views and cpu. The coefficients found are automatically stored in the simple_lin_model object.
Step5: We can now inspect the coefficient $c_1$ and constant term (intercept) $c_0$ of the model
Step6: So this means that each additional page view adds about 0.11% CPU load to the server and all the other processes running on the server consume on average 0.72% CPU.
Once the model is trained we can use it to predict the outcome for a given input (or array of inputs). Note that the predict function requires a 2-dimensional array similar to the fit function.
What is the expected CPU usage when we have 880 page views per second?
Step7: What is the expected CPU usage when we have 1000 page views per second? Is this technically possible? Why does the model predict it this way?
Step8: Now we plot the linear model together with our data to verify it captures the relationship correctly (the predict method can accept the entire total_page_views array at once).
Step9: Our model can calculate the R2 score indicating how well the linear model captures the data. A score of 1 means there is perfect linear correlation and the model can fit the data perfectly, a score of 0 (or lower) means that there is no correlation at all (and it does not make sense to try to model it that way). The score method takes the same arguments as the fit method.
Step10: 2.3 Extrapolation
Now let's repeat this experiment with similar but different data. We will try to predict what the CPU usage will be if there will be 8 page views (per second).
Step11: Now let's plot what you have done.
Step12: Is this what you would expect? Can you see what's wrong?
Let's plot the time series again to get a different view at the data.
Step13: The spikes of CPU usage are actually backups that run at night and they can be ignored. So repeat the exercise again but ignore these data points.
You can subselect parts of arrays with a second array that holds True / False values like so
Step14: So what you should have learned from the previous exercise is that you should always look at your data and/or write scripts to inspect your data. Additionally, extrapolation does not always work because there are no training examples in that area.
3. Multiple linear regression
A server can host different pages and each of the page views will generate load on the CPU. This load will however not be the same for each page.
Now let us consider the separate page views and build a linear model for that. The model we try to fit takes the form
Step15: Let's have a look at this data.
Step16: We start again by creating a LinearRegression model.
Step17: Next we fit the model on the data, using multi_lin_model.fit(X,y). In contrast to the case above our page_views variable already has the correct shape to pass as the X matrix
Step18: Now, given the coefficients calculated by the model, which capture the contribution of each page view to the total CPU usage, we can start to answer some interesting questions. For example,
which page view causes most CPU usage, on a per visit basis?
For this we can generate a table of page names with their coefficients in descending order
Step19: From this table we see that 'resources/js/basket.js' consumes the most per CPU per view. It generates about 0.30% CPU load for each additional page view. 'products/science.html' on the other hand is much leaner and only consumes about 0.04% CPU per view. Does this seem to be correct if you look at the scatter plot above?
Now let us investigate the constant term again.
Step20: As you can see this term is very similar to the result achieved in single linear regression, but it is not entirely the same. This means that these models are not perfect. However, they seem to be able to give a reliable estimate.
Now let's compute the R2 score.
Step21: As you can see from the R2 score, this model performs better. It can explain 91.5% of the variance instead of just 90.5% of the variance. So this gives the impression that this model is more accurate.
4. Non-linear Regression
Sometimes linear relations don't cut it anymore, so you might want a more complex method. There are 2 approaches to this
Step22: For our training set, we will calculate 10 y values from evenly spaced x values using this function.
Step23: Now let's try to fit a model to this data with linear regression.
Step24: As you can see this fit is not optimal.
4.2. Fitting a sine function using polynomial expansion
One of the easiest ways to make your machine learning technique more intelligent is to extract relevant features from the data. These features can be anything that you can find that will make it easier for the method to be able to fit the data. This means that as a machine learning engineer it is best to know and understand your data.
As some of you might remember from math class, you can create an approximation of any function (including a sine function) using a polynomial function with the Taylor expansion. So we will use that approach to learn a better fit.
In this case we will create what we call features using a polynomial expansion. If you set the degree to 3 it will generate data of the 0th, 1st, 2nd and 3rd order (including cross products) as shown in the example below (change x and degree to see the different expansions of x to a certain degree).
Step25: As you can see above this function transforms $x$ into [$x^0$, $x^1$, $x^2$, $x^3$] with $x^0=1$ and $x^1 = x$. If you have 2 inputs it will also take the cross products so that [$x_1$, $x_2$] is transformed into
Step26: In this example we only have 1 input so the number of features is always the degree + 1.
Thanks to this polynomial feature extraction, finding the coefficients of the polynomial becomes a linear problem, so similar to the previous exercise on multiple linear regression you can find the optimal weights as follows
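For instance, a self-contained toy sketch of this two-step recipe (separate from the notebook's own cells, using made-up cubic data): expand the input with PolynomialFeatures and fit an ordinary LinearRegression on the expanded matrix.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Toy data (not the notebook's variables): points on y = x^3 - x
x = np.linspace(-2, 2, 20).reshape(-1, 1)
y = (x ** 3 - x).ravel()

# Expand x into [1, x, x^2, x^3] and fit a plain linear model on it
pol_exp = PolynomialFeatures(degree=3)
X = pol_exp.fit_transform(x)
model = LinearRegression().fit(X, y)

print(round(model.score(X, y), 3))  # 1.0: a degree-3 expansion fits this exactly
```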
Step27: Now play with the degree of the polynomial expansion function below to create better features. Search for the optimal degree.
Step28: What do you notice? When does it work better? And when does it work best?
Now let's test this on new and unseen data.
Step29: If everything is correct your score is very close to 1. Which means that we have built a model that can fit this data (almost) perfectly.
4.3. Add noise to the equation
Sadly all the data that we measure or gather doesn't have the mathematical precision of the data we used here. Quite often our measurements contain noise.
So let us repeat this process for data with more noise. Similarly as above, you have to choose the optimal degree of the polynomials.
Step30: Now let's see what this results to in the test set.
Step31: As you can clearly see, this result is not that good. Why do you think this is?
Now plot the result to see the function you created.
Step32: Is this what you expect?
Now repeat the process below a couple of times for random noise.
Step33: What did you observe? And what is the method learning? And how can you avoid this?
Try to figure out a solution for this problem without changing the noise level.
Step34: 5. Over-fitting and Cross-Validation
What you have experienced above is called over-fitting and happens when your model learns the noise that is inherent in the data.
This problem was caused because there were too many parameters in the model. The model was so complex that it became capable of learning the noise in the data by heart. Reducing the number of parameters solves this problem. But how do you know how many parameters is optimal?
(Another way to solve this problem is to use more data. With more data points and the same amount of noise per point, your model is no longer able to learn all that noise by heart, and you get a better result. Since it's often not possible to gather more data we will not take this approach.)
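A minimal stand-alone demonstration of the effect (on hypothetical noisy sine data, not the notebook's arrays): a degree-9 polynomial through 10 noisy points scores almost perfectly on the points it was trained on, but clearly worse on unseen points.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.RandomState(0)
x_train = np.linspace(0, 6, 10).reshape(-1, 1)
y_train = np.sin(x_train).ravel() + rng.normal(0, 0.3, 10)  # noisy sine samples
x_test = np.linspace(0, 6, 100).reshape(-1, 1)
y_test = np.sin(x_test).ravel()                              # noise-free truth

pol = PolynomialFeatures(degree=9)  # 10 coefficients for 10 points
model = LinearRegression().fit(pol.fit_transform(x_train), y_train)

train_score = model.score(pol.fit_transform(x_train), y_train)
test_score = model.score(pol.fit_transform(x_test), y_test)
print(round(train_score, 3))  # close to 1: the noise is learned by heart
print(round(test_score, 3))   # clearly lower on unseen points
```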
In the exercise above you had to set the number of polynomial functions to get a better result, but how can you estimate this in a reliable way without manually selecting the optimal parameters?
5.1. Validation set
A common way to solve this problem is through the use of a validation set. This means that you use a subset of the training data to train your model on, and another subset of the training data to validate your parameters. Based on the score of your model on this validation set you can select the optimal parameter.
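In outline the procedure looks like this (a sketch on hypothetical toy data; the notebook's own variables and split will differ):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical noisy sine data standing in for the notebook's arrays
rng = np.random.RandomState(0)
x = rng.uniform(0, 6, 30).reshape(-1, 1)
y = np.sin(x).ravel() + rng.normal(0, 0.1, 30)

# Hold out the last 10 points as a validation set
x_tr, x_val = x[:20], x[20:]
y_tr, y_val = y[:20], y[20:]

best_degree, best_score = None, -np.inf
for degree in range(1, 10):
    pol = PolynomialFeatures(degree=degree)
    model = LinearRegression().fit(pol.fit_transform(x_tr), y_tr)
    score = model.score(pol.transform(x_val), y_val)  # validate, don't refit
    if score > best_score:
        best_degree, best_score = degree, score

print(best_degree)  # the degree with the best validation score
```

Note that the expansion is fitted on the training part only and merely applied (transform) to the validation part.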
So use this approach to select the best number of polynomials for the noisy sine function.
Step35: Now let's train on the entire train set (including the validation set) and test this result on the test set with the following code.
Step36: As you can see this approach works to select the optimal degree. Usually the test score is lower than the validation score, but in this case it is not, because the test data doesn't contain noise.
5.2. Cross-Validation
To improve this procedure you can repeat the process above for different train and validation sets so that the optimal parameter is less dependent on the way the data was selected.
One basic strategy for this is leave-one-out cross validation, where each data point is left out of the train set once, and the model is then validated on this point. Now let's implement this. First make a 2-dimensional array results to store all your results using the np.ones() function
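The mechanics of such a loop can be sketched on a toy problem (separate from the notebook's results array; noiseless linear data, so every held-out point is predicted exactly):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy noiseless data: y = 2x + 1
x = np.arange(10, dtype=float).reshape(-1, 1)
y = 2 * x.ravel() + 1

errors = np.ones(len(x))  # pre-allocate a results array, as with np.ones()
for i in range(len(x)):
    mask = np.arange(len(x)) != i               # leave data point i out
    model = LinearRegression().fit(x[mask], y[mask])
    errors[i] = abs(model.predict(x[i:i + 1])[0] - y[i])

print(errors.max())  # ~0: each left-out point is predicted exactly
```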
Step37: Let's plot these results in a box plot to get an idea on how well the models performed on average.
Tip
Step38: Next we will compute the best degree.
Step39: Now let's train the model on the entire train set (including the validation set) and have a look at the result.
Step40: As you can see this automatic way of selecting the optimal degree has resulted in a good fit for the sine function.
5.3 Regularisation
When you have too many parameters in your model, there is a risk of over-fitting, i.e. your model learns the noise. To avoid this, techniques have been developed to make an estimation of this noise.
One of these techniques is Ridge Regression. This linear regression technique has an additional parameter called the regularisation parameter. This parameter basically sets the standard deviation of the noise you want to remove. The effect in practice is that it makes sure the weights of linear regression remain small and thus less over-fitting.
Since this is an additional parameter that needs to be set, it needs to be set using cross-validation as well. Luckily sklearn developed a method that does this for us in a computationally efficient way called sklearn.linear_model.RidgeCV()
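A minimal sketch of how RidgeCV is used (on toy data; the alphas grid shown here is an illustrative assumption, not the notebook's choice):

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.RandomState(1)
x = rng.uniform(0, 6, 30).reshape(-1, 1)
y = np.sin(x).ravel() + rng.normal(0, 0.3, 30)

# Deliberately too many features; ridge keeps the weights small
X = PolynomialFeatures(degree=9).fit_transform(x)

# RidgeCV tries each candidate alpha with internal cross-validation
model = RidgeCV(alphas=np.logspace(-4, 2, 20)).fit(X, y)
print(model.alpha_)  # the regularisation strength that was selected
```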
Step41: As you can see above, the result of Ridge Regression is not as good as reducing the number of features in this example. However it works a lot better than without regularisation (try that). In the example above you will notice that it makes the result a lot smoother and removes the unwanted spikes. It will actually make sure that if you have too many features you still get a reasonable result. So this means that it should be in your standard toolkit.
The removal of the extra features can be automated using feature selection. A very short introduction to sklearn on the topic can be found here.
Another method that is often used is sklearn.linear_model.LassoCV() which actually combines removal of features and estimation of the noise. It is however very dependent on the dataset which of the two methods performs best.
Cross-validation should be applied to any parameter you set in your function and that without looking at the test set.
Over-fitting is one of the biggest issues in machine learning and a lot of the research that is currently being done in machine learning is a search for techniques to avoid over-fitting. As a starting point we list a few of the techniques that you can use to avoid over-fitting
Step42: As you can see, the extrapolation results for non-linear regression are even worse than those of linear regression. This is because models only work well in the input space they have been trained in.
A possible way to be able to extrapolate and to use a non-linear method is to use forecasting techniques. This is handled in part 7, an optional part for those interested and going through the tutorial quite fast. Otherwise continue to the section on classification in exercise 6.
6. Classification
In classification the purpose is to separate 2 classes. As an example we will use the double spiral. It is a very common toy example in machine learning and allows you to visually show what is going on.
As shown in the graph below the purpose is to separate the blue from the red dots.
Step43: In a colored image this is easy to do, but when you remove the color it becomes much harder. Can you do the classification in the image below?
In black the samples from the train set are shown and in yellow the samples from the validation set.
Step44: As you can see, classifying is very hard to do when the colors don't give away the answer, even if you saw the solution earlier. But you will see that machine learning algorithms can solve this quite well if they can learn from examples.
This figure also illustrates that the train and validation set are from the same distribution, which is why they look very similar on the plot. If you want to put a model trained on this data set in production and the real data is from the same distribution, you can expect similar results in real life as on your validation set.
6.1 Linear classifier
Let's try to do this with a linear classifier.
Logistic regression, despite its name, is a linear model for classification rather than regression. Its sklearn implementation is sklearn.linear_model.LogisticRegression().
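A minimal usage sketch (on two hypothetical Gaussian blobs rather than the spiral data, just to show the fit/score API):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Two toy 2-D blobs standing in for the spiral data
rng = np.random.RandomState(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)),   # class 0 around (-2, -2)
               rng.normal(2, 1, (50, 2))])   # class 1 around (2, 2)
y = np.array([0] * 50 + [1] * 50)

model = LogisticRegression().fit(X, y)
print(model.score(X, y))  # close to 1.0: these blobs are linearly separable
```

On the spiral data, of course, no straight line can separate the classes, which is exactly what the plot below illustrates.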
Step45: Now let's plot the result.
Step46: As you can see a linear classifier returns a linear decision boundary.
6.2 Non-linear classification
Now let's do this better with a non-linear classifier using polynomials. Play with the degree of the polynomial expansion and look for the effect on the validation set accuracy of the LogisticRegressionCV() model. This is a more advanced version of the default LogisticRegression() that uses cross validation to tune its hyper-parameters. What gives you the best results?
If you get a lot of "failed to converge" warnings consider increasing the max_iter parameter to 1000 or so. Getting some warnings is normal.
Step47: If everything went well you should get a validation/test accuracy very close to 0.8.
6.3 Random Forests
An often used technique in machine learning is random forests. Basically they are decision trees, or in programmer's terms, if-then-else structures, like the one shown below.
<img src="images/tree.png" width=70%>
Decision trees are known to over-fit a lot because they just learn the train set by heart and store it. Random forests on the other hand combine multiple different (randomly initialized) decision trees that all over-fit in their own way. But by combining their output using a voting mechanism, they tend to cancel out each other's mistakes. This approach is called an ensemble and can be used for any combination of machine learning techniques. A schema representation of how such a random forest works is shown below.
<img src="images/random_forest.jpg">
Now let's try to use a random forest to solve the double spiral problem. (see sklearn.ensemble.RandomForestClassifier())
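A minimal sketch on a toy non-linear problem (a disc instead of the spiral; the parameter values are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy non-linear problem: points inside a disc vs. outside it
rng = np.random.RandomState(0)
X = rng.uniform(-1, 1, (300, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 0.4).astype(int)

# 100 randomly different trees vote on the class of each point
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(model.score(X, y))  # train accuracy on a boundary no line could draw
```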
Step48: As you can see they are quite powerful right out of the box without any parameter tuning. But we can get the results even better with some fine tuning.
Try changing the min_samples_leaf parameter for values between 0 and 0.5.
Step49: The min_samples_leaf parameter sets the number of data points that can create a new branch/leaf in the tree. So in practice it limits the depth of the decision tree. The bigger this parameter is, the less deep the tree will be and the less likely each tree will over-fit.
For this parameter you can set integer numbers to set the specific number of samples, or you can use values between 0 and 0.5 to express a percentage of the size of the dataset. Since you might experiment with a smaller dataset to roughly tune your parameters, it is best to use values between 0 and 0.5 so that the value you choose is not as dependent on the size of the dataset you are working with.
Now that you have found the optimal min_samples_leaf run the code again with the same parameter. Do you get the same result? Why not?
Another parameter to play with is the n_estimators parameter. Play with only this parameter to see what happens.
Step50: As you can see increasing the number of estimators improves the model and reduces over-fitting. This parameter actually sets the number of trees in the random forest. The more trees there are in the forest the better the result is. But obviously it requires more computing power so that is the limiting factor here.
This is the basic idea behind ensembles
Step51: As you have noticed by now it seems that random forests are less powerful than linear regression with polynomial feature extraction. This is because these polynomials are ideally suited for this task. This also means that you could get a better result if you would also apply polynomial expansion for random forests. Try that below.
Step52: As you may have noticed, it is hard to get results that are better than the ones obtained using logistic regression. This illustrates that linear techniques are very powerful and often underrated. But in some situations they are not powerful enough and you need something stronger like a random forest or even neural networks (check this simulator if you want to play with the latter).
There is one neat trick that can be used for random forests. If you set the n_jobs it will use more than 1 core to compute. Set it to -1 to use all the cores (including hyper-threading cores). But don't do that during this tutorial because that would block the machine you are all working on.
To avoid over-fitting, you can set the max_depth parameter for random forests which limits the maximum depth of each tree. Alternatively, you can set the min_samples_split parameter which determines how many data points you need at least before you create another split (this is an additional if-else structure) while building the tree. Or you can set the min_samples_leaf parameter, which sets the minimum number of data points you have in each leaf. All 3 parameters are dependent on the number of data points in your dataset, especially the last 2, so don't forget to adapt them if you have been playing around with a small subset of the data. (A good trick to solve this might be to use a range similar to [0.0001, 0.001, 0.01, 0.1] * len(x_train). Feel free to extend the range in any direction. It is generally good practice to construct them using a log scale like in the example, or better like this
Step53: In the graph above you can clearly see that there is a rising trend in the data.
7.1 One-step ahead prediction
This forecasting section will describe the one-step ahead prediction. In this case, this means that we will only predict the next data point i.e. the number of page views in the next hour.
Now let's first build a model that tries to predict the next data point from the previous one.
We will use a technique called teacher forcing: during training we feed the true previous value as input, as if the previous prediction had been correct. This means that we can use the original time series as input. Now we only need to align input and output so that the output corresponds to the next sample in the input time series.
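The alignment itself can be sketched with toy numbers (not the page-view series): the input at time t is the value at t, the target is the value at t + 1.

```python
import numpy as np

# Toy series standing in for the page-view data
ts = np.array([10.0, 12.0, 13.0, 15.0, 18.0])
x = ts[:-1].reshape(-1, 1)  # all values except the last one (inputs at t)
y = ts[1:]                  # the same series shifted one step ahead (targets)

print(x.ravel())  # [10. 12. 13. 15.]
print(y)          # [12. 13. 15. 18.]
```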
Step54: As you can see from the score above, the model is not perfect but it seems to get a relatively high score. Now let's make a prediction into the future and plot this.
To predict the data point after that we will use the predicted data to make a new prediction. The code below shows how this works for this data set using the linear model you used earlier. Don't forget to fill out the missing code.
Step55: As you can see from the image above the model doesn't quite seem to fit the data well. Let's see how we can improve this.
7.2 Multiple features
If your model is not smart enough there is a simple trick in machine learning to make your model more intelligent (but also more complex). This is by adding more features.
To make our model better we will use more than 1 sample from the past. To make your life easier there is a simple function below that will create a data set for you. The width parameter sets the number of hours in the past that will be used.
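A possible sketch of such a windowing helper (the notebook's actual function may differ in its details):

```python
import numpy as np

def window_time_series(ts, width):
    """Each row of X holds `width` consecutive past values; y is the next one."""
    X = np.array([ts[i:i + width] for i in range(len(ts) - width)])
    y = ts[width:]
    return X, y

ts = np.arange(10, dtype=float)  # toy series 0, 1, ..., 9
X, y = window_time_series(ts, width=3)
print(X.shape, y.shape)          # (7, 3) (7,)
print(X[0], y[0])                # [0. 1. 2.] 3.0
```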
Step56: As you can see from the print above, both x_train and y_train contain 303 data points. For x_train you see that there are now 5 features which contain the page views from the 5 past hours.
So let's have a look at what the increase from 1 to 5 features results in.
Step57: Now change the width parameter to see if you can get a better score.
7.3 Over-fitting
Now execute the code below to see the prediction of this model.
Step58: As you can see in the image above the prediction is not what you would expect from a perfect model. What happened is that the model learned the training data by heart without 'understanding' what the data is really about. This phenomenon is called over-fitting and will always occur if you make your model too complex.
Now play with the width variable below to see if you can find a more sensible width.
Step59: As you will have noticed by now, it is better to have a non-perfect score, which will give you a much better outcome. Now try the same thing for the following models
Step60: If everything is correct the LassoCV method was selected.
Now we are going to train this best model on all the data. In this way we use all the available data to build a model. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (13.0, 8.0)
%matplotlib inline
import pickle
import sklearn
import sklearn.linear_model
import sklearn.preprocessing
import sklearn.gaussian_process
import sklearn.ensemble
Explanation: Machine Learning 101++ in Python
Created by:
Pieter Buteneers (@PieterButeneers), Director of Engineering in ML & AI at sinch.com
Bart De Vylder, Senior Research Engineer at Google DeepMind
Jeroen Boeye (@JeroenBoeye), Senior Machine Learning Engineer at Faktion
Joris Boeye (@JorisBoeye), Senior Data Scientist at ZF Wind Power
1. Imports
Let's first start with importing all the necessary packages. Some imports will be repeated in the exercises but if you want to skip some parts you can just execute the imports below and start with any exercise.
End of explanation
import pickle # Pickle files allow us to easily save and load python objects.
with open('data/cpu_page_views.pickle', 'rb') as file:
cpu_usage, page_views, page_names, total_page_views = pickle.load(file, encoding='latin1')
print('Array shapes:')
print('-'*25)
print(f'cpu_usage\t {cpu_usage.shape}')
print(f'page_views\t {page_views.shape}')
print(f'page_names\t {page_names.shape}')
print(f'total_page_views {total_page_views.shape}')
plt.figure(figsize=(13, 6))
plt.plot(total_page_views, label='Total page views')
plt.plot(cpu_usage, label='CPU %')
plt.legend()
plt.show()
Explanation: 2. Linear Regression
Linear Regression assumes a linear relationship between 2 variables.
As an example we'll consider the historical page views of a web server and compare it to its CPU usage. We'll try to predict the CPU usage of the server based on the page views of the different pages.
2.1 Data import and inspection
Let's import the data and take a look at it.
End of explanation
plt.figure(figsize=(13, 6))
plt.xlabel("Total page views")
plt.ylabel("CPU usage")
### BEGIN SOLUTION
plt.scatter(total_page_views, cpu_usage)
### END SOLUTION
# plt.scatter( ? , ? )
plt.show()
Explanation: In the plot above, the blue line is the number of page views and the orange line is the CPU load that viewing these pages generates on the server.
2.2 Simple linear regression
First, we're going to work with the total page views on the server, and compare it to the CPU usage. We can make use of a PyPlot's scatter plot to understand the relation between the total page views and the CPU usage:
End of explanation
import sklearn.linear_model
simple_lin_model = sklearn.linear_model.LinearRegression()
Explanation: There clearly is a strong correlation between the page views and the CPU usage. Because of this correlation we can build a model to predict the CPU usage from the total page views. If we use a linear model we get a formula like the following:
$$ \text{cpu_usage} = c_0 + c_1 \text{total_page_views} $$
Since we don't know the exact values for $c_0$ and $c_1$ we will have to compute them. For that we'll make use of the scikit-learn machine learning library for Python and use least-squares linear regression
End of explanation
### BEGIN SOLUTION
simple_lin_model.fit(total_page_views.reshape((-1, 1)), cpu_usage)
### END SOLUTION
# simple_lin_model.fit( ? , ? )
Explanation: Now we need to feed the data to the model to fit it.
```
X = [[x_11, x_12, x_13, ...], y = [y_1,
[x_21, x_22, x_23, ...], y_2,
[x_31, x_32, x_33, ...], y_3,
...] ...]
```
In general, the model.fit(X,y) method takes a matrix X and vector y as arguments and tries to find coefficients that allow to predict the y_i's from the x_ij's. In our case the matrix X will consist of only one column containing the total page views. Our total_page_views variable however, is still only a one-dimensional vector, so we need to np.reshape() it into a two-dimensional array. Since there is only one feature the second dimension should be 1. You can leave one dimension unspecified by passing -1, it will be determined from the size of the data.
Then we fit our model using the the total page views and cpu. The coefficients found are automatically stored in the simple_lin_model object.
End of explanation
print(f"Coefficient = {simple_lin_model.coef_[0]:.2f}\nConstant term = {simple_lin_model.intercept_:.2f}")
Explanation: We can now inspect the coefficient $c_1$ and constant term (intercept) $c_0$ of the model:
End of explanation
### BEGIN SOLUTION
simple_lin_model.predict([[880]])
### END SOLUTION
# simple_lin_model.predict( [[ ? ]] )
Explanation: So this means that each additional page view adds about 0.11% CPU load to the server and all the other processes running on the server consume on average 0.72% CPU.
Once the model is trained we can use it to predict the outcome for a given input (or array of inputs). Note that the predict function requires a 2-dimensional array similar to the fit function.
What is the expected CPU usage when we have 880 page views per second?
End of explanation
### BEGIN SOLUTION
simple_lin_model.predict([[1000]])
### END SOLUTION
# simple_lin_model.predict( [[ ? ]] )
Explanation: What is the expected CPU usage when we have 1000 page views per second? Is this technically possible? Why does the model predict it this way?
End of explanation
plt.figure(figsize=(13, 6))
plt.scatter(total_page_views, cpu_usage, color='black')
plt.plot(total_page_views, simple_lin_model.predict(total_page_views.reshape((-1, 1))), color='blue', linewidth=3)
plt.xlabel("Total page views")
plt.ylabel("CPU usage")
plt.show()
Explanation: Now we plot the linear model together with our data to verify it captures the relationship correctly (the predict method can accept the entire total_page_views array at once).
End of explanation
R2 = simple_lin_model.score(total_page_views.reshape((-1, 1)), cpu_usage)
print(f'R2 = {R2:.3f}')
Explanation: Our model can calculate the R2 score indicating how well the linear model captures the data. A score of 1 means there is perfect linear correlation and the model can fit the data perfectly, a score of 0 (or lower) means that there is no correlation at all (and it does not make sense to try to model it that way). The score method takes the same arguments as the fit method.
End of explanation
with open('data/cpu_page_views_2.pickle', 'rb') as file:
cpu_usage, total_page_views = pickle.load(file, encoding='latin1')
print('Array shapes:')
print('-'*25)
print(f'cpu_usage\t {cpu_usage.shape}')
print(f'total_page_views {total_page_views.shape}')
simple_lin_model = sklearn.linear_model.LinearRegression()
simple_lin_model.fit(total_page_views, cpu_usage)
### BEGIN SOLUTION
prediction = simple_lin_model.predict([[8]])
### END SOLUTION
# prediction = simple_lin_model.predict(?)
print(f'The predicted value is: {prediction}')
assert prediction < 25
Explanation: 2.3 Extrapolation
Now let's repeat this experiment with similar but different data. We will try to predict what the CPU usage will be if there will be 8 page views (per second).
End of explanation
all_page_views = np.concatenate((total_page_views, [[8]]))
plt.figure(figsize=(13, 6))
plt.scatter(total_page_views, cpu_usage, color='black')
plt.plot(all_page_views, simple_lin_model.predict(all_page_views), color='blue', linewidth=3)
plt.axvline(8, color='r')
plt.xlabel("Total page views")
plt.ylabel("CPU usage")
plt.show()
Explanation: Now let's plot what you have done.
End of explanation
plt.figure(figsize=(16, 5))
plt.plot(total_page_views, label='Total page views')
plt.plot(cpu_usage, label='CPU %')
plt.legend()
plt.show()
x = np.array([1, 2, 3])
selection = np.array([True, False, True])
x[selection]
Explanation: Is this what you would expect? Can you see what's wrong?
Let's plot the time series again to get a different view at the data.
End of explanation
### BEGIN SOLUTION
selection = cpu_usage < 25
### END SOLUTION
# selection = ?
assert selection.dtype == np.dtype('bool'), 'The selection variable should be an array of True/False values'
assert len(selection) == len(total_page_views)
simple_lin_model = sklearn.linear_model.LinearRegression()
simple_lin_model.fit(total_page_views[selection], cpu_usage[selection])
prediction = simple_lin_model.predict([[8]])
print(f'The predicted value is: {prediction}')
all_page_views = np.concatenate((total_page_views, [[8]]))
plt.figure(figsize=(13, 6))
plt.scatter(total_page_views, cpu_usage, c=selection, cmap='RdYlGn')
plt.plot(all_page_views, simple_lin_model.predict(all_page_views), color='blue', linewidth=3)
plt.axvline(8, color='r')
plt.xlabel("Total page views")
plt.ylabel("CPU usage")
plt.show()
assert prediction > 23
Explanation: The spikes of CPU usage are actually backups that run at night and they can be ignored. So repeat the exercise again but ignore these data points.
You can subselect parts of arrays with a second array that holds True / False values like so:
x = np.array([1, 2, 3])
selection = np.array([True, False, True])
print(x[selection])
array([1, 3])
Try to create this selection array with True values where there is no backup going on and False when the backup occurs.
End of explanation
# load the data
with open('data/cpu_page_views.pickle', 'rb') as file:
cpu_usage, page_views, page_names, total_page_views = pickle.load(file, encoding='latin1')
print('Array shapes:')
print('-'*25)
print(f'cpu_usage\t {cpu_usage.shape}')
print(f'page_views\t {page_views.shape}')
print(f'page_names\t {page_names.shape}')
print(f'total_page_views {total_page_views.shape}\n')
print(page_names)
Explanation: So what you should have learned from the previous exercise is that you should always look at your data and/or write scripts to inspect your data. Additionally extrapolation does not always work because there are no training examples in that area.
3. Multiple linear regression
A server can host different pages and each of the page views will generate load on the CPU. This load will however not be the same for each page.
Now let us consider the separate page views and build a linear model for that. The model we try to fit takes the form:
$$\text{cpu_usage} = c_0 + c_1 \text{page_views}_1 + c_2 \text{page_views}_2 + \ldots + c_n \text{page_views}_n$$
where the $\text{page_views}_i$'s correspond the our different pages:
End of explanation
plt.figure(figsize=(13, 6))
for i in range(len(page_names)):
plt.plot(page_views[:,i], label=page_names[i])
plt.plot(cpu_usage, label= 'CPU %')
plt.legend()
plt.show()
plt.figure(figsize=(13, 6))
for i in range(len(page_names)):
plt.scatter(page_views[:,i], cpu_usage, label=page_names[i])
plt.xlabel("Page views")
plt.ylabel("CPU usage")
plt.legend()
plt.show()
Explanation: Let's have a look at this data.
End of explanation
multi_lin_model = sklearn.linear_model.LinearRegression()
Explanation: We start again by creating a LinearRegression model.
End of explanation
### BEGIN SOLUTION
multi_lin_model.fit(page_views, cpu_usage)
### END SOLUTION
# multi_lin_model.fit( ? , ? )
Explanation: Next we fit the model on the data, using multi_lin_model.fit(X,y). In contrast to the case above our page_views variable already has the correct shape to pass as the X matrix: it has one column per page.
End of explanation
# Some quick and dirty code to print the most consuming pages first
print('Index\tCPU (%)\t Page')
print('-'*41)
indices = np.argsort(-multi_lin_model.coef_)
for i in indices:
    print(f"{i}\t{multi_lin_model.coef_[i]:4.2f}\t {page_names[i]}")
Explanation: Now, given the coefficients calculated by the model, which capture the contribution of each page view to the total CPU usage, we can start to answer some interesting questions. For example,
which page view causes most CPU usage, on a per visit basis?
For this we can generate a table of page names with their coefficients in descending order:
End of explanation
print(f'The other processes on the server consume {multi_lin_model.intercept_:.2f}%')
Explanation: From this table we see that 'resources/js/basket.js' consumes the most CPU per view. It generates about 0.30% CPU load for each additional page view. 'products/science.html' on the other hand is much leaner and only consumes about 0.04% CPU per view. Does this seem to be correct if you look at the scatter plot above?
Now let us investigate the constant term again.
End of explanation
R2 = multi_lin_model.score(page_views, cpu_usage)
print(f'R2 = {R2:.3f}')
Explanation: As you can see this term is very similar to the result achieved in single linear regression, but it is not entirely the same. This means that these models are not perfect. However, they seem to be able to give a reliable estimate.
Now let's compute the R2 score.
End of explanation
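The R2 score returned by `model.score()` can also be computed by hand from its definition, $R^2 = 1 - SS_{res}/SS_{tot}$. A small sketch on made-up true values and predictions:

```python
import numpy as np

# made-up true values and predictions, just for illustration
y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([2.8, 5.1, 7.2, 8.7])

ss_res = np.sum((y_true - y_pred) ** 2)           # residual sum of squares
ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)  # total sum of squares
r2 = 1 - ss_res / ss_tot
print(f'R2 = {r2:.3f}')  # 0.991
```

An R2 of 1 means the predictions match the targets perfectly; 0 means the model does no better than always predicting the mean.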
x = np.arange(0,6, 0.01).reshape((-1, 1))
plt.figure(figsize=(13, 6))
plt.plot(x, np.sin(x))
plt.show()
Explanation: As you can see from the R2 score, this model performs better. It can explain 91.5% of the variance instead of just 90.5% of the variance. So this gives the impression that this model is more accurate.
4. Non-linear Regression
Sometimes linear relations don't cut it anymore, so you might want a more complex method. There are 2 approaches to this:
* Use a non-linear method (such as Neural Networks, Support Vector Machines, Random Forests and Gaussian Processes)
* Use non-linear features as pre-processing for a linear method
Actually both methods are in essence identical and there is not always a clear distinction between the two. We will use the second approach in this section since it is easier to understand what is going on.
Please note that it is very often not even necessary to use non-linear methods, since the linear methods can be extremely powerful on their own and they are quite often very stable and reliable (in contrast to non-linear methods).
4.1. Fitting a sine function with linear regression
As an example task, we'll try to fit a sine function. We will use the np.sin() function to compute the sine of the elements in a numpy array.
End of explanation
# helper function to generate the data
def sine_train_data():
x_train = np.linspace(0, 6, 10).reshape((-1, 1))
y_train = np.sin(x_train)
return x_train, y_train
x_train, y_train = sine_train_data()
plt.figure(figsize=(13, 6))
plt.scatter(x_train, y_train)
plt.show()
Explanation: For our training set, we will calculate 10 y values from evenly spaced x values using this function.
End of explanation
x_train, y_train = sine_train_data()
### BEGIN SOLUTION
model = sklearn.linear_model.LinearRegression()
model.fit(x_train, y_train)
### END SOLUTION
# model = ?
# model.fit( ? )
print(f'The R2 score of this model is: {model.score(x_train, y_train):.3}')
plt.figure(figsize=(13, 6))
plt.scatter(x_train, y_train)
plt.plot(x, model.predict(x))
plt.show()
Explanation: Now let's try to fit a model to this data with linear regression.
End of explanation
import sklearn.preprocessing
x = [[2]]
pol_exp = sklearn.preprocessing.PolynomialFeatures(degree=3)
pol_exp.fit_transform(x)
Explanation: As you can see this fit is not optimal.
4.2. Fitting a sine function using polynomial expansion
One of the easiest ways to make your machine learning technique more intelligent is to extract relevant features from the data. These features can be anything you can find that makes it easier for the method to fit the data. This means that as a machine learning engineer it is best to know and understand your data.
As some of you might remember from math class, you can create an approximation of any function (including a sine function) using a polynomial function with the Taylor expansion. So we will use that approach to learn a better fit.
In this case we will create what we call features using a polynomial expansion. If you set the degree to 3 it will generate data of the 0d, 1st, 2nd and 3rd order (including cross products) as shown in the example below (change x and degree to see the different expansions of x to a certain degree).
End of explanation
x = [[2, 3]]
pol_exp = sklearn.preprocessing.PolynomialFeatures(degree=3)
pol_exp.fit_transform(x)
Explanation: As you can see above this function transforms $x$ into [$x^0$, $x^1$, $x^2$, $x^3$] with $x^0=1$ and $x^1 = x$. If you have 2 inputs it will also take the cross products so that [$x_1$, $x_2$] is transformed into: [1, $x_1$, $x_2$, $x_1^2$, $x_1x_2$, $x_2^2$, $x_1^3$, $x_1^2x_2$, $x_1x_2^2$, $x_2^3$] as shown below.
End of explanation
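As a sanity check, the degree-2 part of this expansion can be reproduced by hand. A minimal sketch comparing a manual cross-product expansion against the PolynomialFeatures output:

```python
import numpy as np
import sklearn.preprocessing

x1, x2 = 2.0, 3.0
# manual expansion up to degree 2: [1, x1, x2, x1^2, x1*x2, x2^2]
manual = np.array([1, x1, x2, x1**2, x1 * x2, x2**2])

pol_exp_check = sklearn.preprocessing.PolynomialFeatures(degree=2)
generated = pol_exp_check.fit_transform([[x1, x2]])[0]
print(generated)  # [1. 2. 3. 4. 6. 9.]
```

The column order shown here matches sklearn's output for two inputs and degree 2.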
x_train, y_train = sine_train_data()
pol_exp = sklearn.preprocessing.PolynomialFeatures(degree=3)
pol_exp.fit_transform(x_train)
Explanation: In this example we only have 1 input so the number of features is always the degree + 1.
Thanks to this polynomial feature extraction, finding the coefficients of the polynomial becomes a linear problem, so similarly to the previous exercise on multiple linear regression you can find the optimal weights as follows:
$$y = c_0 + c_1 x + c_2 x^2 + c_3 x^3 + \cdots + c_n x^n$$
So for multiple values of $x$ and $y$ you can minimize the error of this equation using linear regression. How this is done in practice is shown below.
End of explanation
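A quick cross-check of this idea, using nothing beyond NumPy: for a 1-D polynomial, `np.polyfit` solves exactly this linear least-squares problem. The cubic below is a made-up example:

```python
import numpy as np

# fit a cubic polynomial to a handful of points sampled from x^3 - 2x
xs = np.linspace(-2, 2, 8)
ys = xs**3 - 2 * xs
coeffs = np.polyfit(xs, ys, deg=3)  # returned highest degree first
print(coeffs)  # close to [1, 0, -2, 0]
```

Because the data is an exact cubic, the recovered coefficients match the generating polynomial up to numerical noise.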
x_train, y_train = sine_train_data()
### BEGIN SOLUTION
pol_exp = sklearn.preprocessing.PolynomialFeatures(degree=9)
### END SOLUTION
# pol_exp = sklearn.preprocessing.PolynomialFeatures(degree= ? )
model = sklearn.linear_model.LinearRegression()
model.fit(pol_exp.fit_transform(x_train), y_train)
train_score = model.score(pol_exp.fit_transform(x_train), y_train)
print(f'The R2 score of this model is: {train_score:.6f}')
plt.figure(figsize=(13, 6))
plt.scatter(x_train, y_train)
x = np.arange(0,6, 0.01).reshape((-1, 1))
plt.plot(x, model.predict(pol_exp.fit_transform(x)))
plt.show()
Explanation: Now play with the degree of the polynomial expansion function below to create better features. Search for the optimal degree.
End of explanation
def sine_test_data():
x_test = 0.5 + np.arange(6).reshape((-1, 1))
y_test = np.sin(x_test)
return x_test, y_test
assert train_score > .99999, 'Adjust the degree parameter 2 cells above until the train_score > .99999'
x_test, y_test = sine_test_data()
plt.figure(figsize=(13, 6))
plt.scatter(x_train, y_train, label='train')
plt.scatter(x_test, y_test, color='r', label='test')
plt.legend()
x = np.arange(0, 6, 0.01).reshape((-1, 1))
plt.plot(x, model.predict(pol_exp.fit_transform(x)))
plt.show()
### BEGIN SOLUTION
test_score = model.score(pol_exp.fit_transform(x_test), y_test)
### END SOLUTION
# test_score = model.score( ? )
print(f'The R2 score of the model on the test set is: {test_score:.3f}')
assert test_score > 0.99
Explanation: What do you notice? When does it work better? And when does it work best?
Now let's test this on new and unseen data.
End of explanation
# a helper function to create the sine train set that can also add noise to the data
def noisy_sine_train_data(noise=None):
x_train = np.linspace(0, 6, 10).reshape((-1, 1))
y_train = np.sin(x_train)
# If fixed, set the random seed so that the next call of the
# random function always returns the same result
if noise == 'fixed':
np.random.seed(1)
x_train += np.random.randn(len(x_train)).reshape((-1, 1)) / 5
return x_train, y_train
x_train, y_train = noisy_sine_train_data(noise='fixed')
### BEGIN SOLUTION
pol_exp = sklearn.preprocessing.PolynomialFeatures(degree=9)
### END SOLUTION
# pol_exp = sklearn.preprocessing.PolynomialFeatures(degree= ? )
model = sklearn.linear_model.LinearRegression()
model.fit(pol_exp.fit_transform(x_train), y_train)
train_score = model.score(pol_exp.fit_transform(x_train), y_train)
print(f'The R2 score of this method on the train set is {train_score:.3f}')
assert train_score > 0.99
Explanation: If everything is correct your score is very close to 1, which means that we have built a model that can fit this data (almost) perfectly.
4.3. Add noise to the equation
Sadly, the data that we measure or gather rarely has the mathematical precision of the data we used here. Quite often our measurements contain noise.
So let us repeat this process for data with more noise. As above, you have to choose the optimal degree of the polynomials.
End of explanation
x_test, y_test = sine_test_data()
print(f'The R2 score of the model on the test set is: {model.score(pol_exp.fit_transform(x_test), y_test):.3f}')
Explanation: Now let's see what this results to in the test set.
End of explanation
plt.figure(figsize=(13, 6))
plt.scatter(x_train, y_train)
x = np.arange(0,6, 0.01).reshape((-1, 1))
plt.plot(x, model.predict(pol_exp.fit_transform(x)))
plt.show()
Explanation: As you can clearly see, this result is not that good. Why do you think this is?
Now plot the result to see the function you created.
End of explanation
x_train, y_train = noisy_sine_train_data()
pol_exp = sklearn.preprocessing.PolynomialFeatures(degree=9)
model = sklearn.linear_model.LinearRegression()
model.fit(pol_exp.fit_transform(x_train), y_train)
print(f'The R2 score of this method on the train set is {model.score(pol_exp.fit_transform(x_train), y_train):.3f}')
plt.figure(figsize=(13, 6))
plt.scatter(x_train, y_train)
x = np.arange(x_train[0], x_train[-1], 0.01).reshape((-1, 1))
plt.plot(x, model.predict(pol_exp.fit_transform(x)))
plt.show()
Explanation: Is this what you expect?
Now repeat the process below a couple of times for random noise.
End of explanation
x_train, y_train = noisy_sine_train_data(noise='fixed')
x_test, y_test = sine_test_data()
### BEGIN SOLUTION
pol_exp = sklearn.preprocessing.PolynomialFeatures(degree=3)
model = sklearn.linear_model.LinearRegression()
model.fit(pol_exp.fit_transform(x_train), y_train)
### END SOLUTION
# pol_exp = ?
# model = ?
# model.fit( ? )
print(f'The score of this method on the train set is: {model.score(pol_exp.fit_transform(x_train), y_train):.3f}')
plt.figure(figsize=(13, 6))
plt.scatter(x_train, y_train, label='train')
plt.scatter(x_test, y_test, color='r', label='test')
x = np.arange(0,6, 0.01).reshape((-1, 1))
plt.plot(x, model.predict(pol_exp.fit_transform(x)))
plt.legend()
plt.show()
test_score = model.score(pol_exp.fit_transform(x_test), y_test)
print(f'The score of the model on the test set is: {test_score:.3f}')
assert test_score > 0.99, 'Adjust the degree parameter until test_score > 0.99'
Explanation: What did you observe? And what is the method learning? And how can you avoid this?
Try to figure out a solution for this problem without changing the noise level.
End of explanation
# create the data in case you skipped the previous exercise
# a helper function to create the sine train set that can also add noise to the data
def noisy_sine_train_data(noise=None):
x_train = np.linspace(0, 6, 10).reshape((-1, 1))
y_train = np.sin(x_train)
# If fixed, set the random seed so that the next call of the
# random function always returns the same result
if noise == 'fixed':
np.random.seed(1)
x_train += np.random.randn(len(x_train)).reshape((-1, 1)) / 5
return x_train, y_train
def sine_test_data():
x_test = 0.5 + np.arange(6).reshape((-1, 1))
y_test = np.sin(x_test)
return x_test, y_test
x_train, y_train = noisy_sine_train_data(noise='fixed')
# we randomly pick 3 data points to get a nice validation set
train_i = [0, 1, 3, 4, 6, 7, 9]
val_i = [2, 5, 8]
# create the train and validation sets
x_train_i = x_train[train_i, :]
y_train_i = y_train[train_i]
x_val_i = x_train[val_i, :]
y_val_i = y_train[val_i]
### BEGIN SOLUTION
pol_exp = sklearn.preprocessing.PolynomialFeatures(degree=4)
### END SOLUTION
# pol_exp = sklearn.preprocessing.PolynomialFeatures(degree= ? )
model = sklearn.linear_model.LinearRegression()
model.fit(pol_exp.fit_transform(x_train_i), y_train_i)
### BEGIN SOLUTION
train_score = model.score(pol_exp.fit_transform(x_train_i), y_train_i)
validation_score = model.score(pol_exp.fit_transform(x_val_i), y_val_i)
### END SOLUTION
# train_score = model.score( ? )
# validation_score = model.score( ? )
print(f'The R2 score of this model on the train set is: {train_score:.3f}')
print(f'The R2 score of this model on the validation set is: {validation_score:.3f}')
Explanation: 5. Over-fitting and Cross-Validation
What you have experienced above is called over-fitting and happens when your model learns the noise that is inherent in the data.
This problem was caused by having too many parameters in the model. The model was so flexible that it became capable of learning the noise in the data by heart. Reducing the number of parameters solves this problem. But how do you know how many parameters is optimal?
(Another way to solve this problem is to use more data. Because if there are more data points in the data and if there is more noise, your model isn't able to learn all that noise anymore and you get a better result. Since it's often not possible to gather more data we will not take this approach.)
In the exercise above you had to set the number of polynomial functions to get a better result, but how can you estimate this in a reliable way without manually selecting the optimal parameters?
5.1. Validation set
A common way to solve this problem is through the use of a validation set. This means that you use a subset of the training data to train your model on, and another subset of the training data to validate your parameters. Based on the score of your model on this validation set you can select the optimal parameter.
So use this approach to select the best number of polynomials for the noisy sine function.
End of explanation
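Instead of slicing out a validation set by hand as above, sklearn offers `sklearn.model_selection.train_test_split`. A sketch on made-up data (the `random_state` is only there to make the split reproducible):

```python
import numpy as np
import sklearn.model_selection

x_all = np.arange(20).reshape((-1, 1)).astype(float)
y_all = np.sin(x_all)

# hold out 30% of the points as a validation set
x_tr, x_va, y_tr, y_va = sklearn.model_selection.train_test_split(
    x_all, y_all, test_size=0.3, random_state=0)
print(len(x_tr), len(x_va))  # 14 6
```

The split is random, which usually gives a more representative validation set than taking a fixed slice.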
assert pol_exp.degree < 5, 'Select a polynomial degree < 5'
model = sklearn.linear_model.LinearRegression()
model.fit(pol_exp.fit_transform(x_train), y_train)
x_test, y_test = sine_test_data()
plt.figure(figsize=(13, 6))
plt.scatter(x_train, y_train, label='train')
plt.scatter(x_test, y_test, color='r', label='test')
plt.legend()
x = np.arange(0,6, 0.01).reshape((-1, 1))
plt.plot(x, model.predict(pol_exp.fit_transform(x)))
plt.show()
print(f'The score of the model on the test set is: {model.score(pol_exp.fit_transform(x_test), y_test):.3f}')
Explanation: Now let's train on the entire train set (including the validation set) and test this result on the test set with the following code.
End of explanation
x_train, y_train = noisy_sine_train_data(noise='fixed')
### BEGIN SOLUTION
results = np.inf * np.ones((10, 10))
for i in range(10):
### END SOLUTION
# results = np.inf * np.ones(( ? , ?))
# The results array should have a shape of "the number of data points" x "the number of polynomial degrees to try"
# The ones are multiplied with a very large number, np.inf, since we are looking for the smallest error
# for i in range( ? ):
train_i = np.where(np.arange(10) != i)[0]
x_train_i = x_train[train_i, :]
y_train_i = y_train[train_i]
x_val_i = x_train[i:i+1, :]
y_val_i = y_train[i:i+1]
### BEGIN SOLUTION
for degree in range(10):
pol_exp = sklearn.preprocessing.PolynomialFeatures(degree=degree)
### END SOLUTION
# for degree in range(?):
# pol_exp = sklearn.preprocessing.PolynomialFeatures(degree= ? )
model = sklearn.linear_model.LinearRegression()
model.fit(pol_exp.fit_transform(x_train_i), y_train_i)
### BEGIN SOLUTION
results[i, degree] = sklearn.metrics.mean_squared_error(model.predict(pol_exp.fit_transform(x_val_i)), y_val_i)
### END SOLUTION
# Fill out the results for each validation set and each degree in the results matrix
# results[ ? ] = sklearn.metrics.mean_squared_error(model.predict(pol_exp.fit_transform(x_val_i)), y_val_i)
Explanation: As you can see this approach works to select the optimal degree. Usually the test score is lower than the validation score, but in this case it is not, because the test data doesn't contain noise.
5.2. Cross-Validation
To improve this procedure you can repeat the process above for different train and validation sets so that the optimal parameter is less dependent on the way the data was selected.
One basic strategy for this is leave-one-out cross validation, where each data point is left out of the train set once, and the model is then validated on this point. Now let's implement this. First make a 2-dimensional array results to store all your results using the np.ones() function: 1 dimension (row) for each validation set and 1 dimension (column) for each degree of the PolynomialFeatures() function. Then you loop over all the validation sets followed by a loop over all the degrees of the PolynomialFeatures() function you want to try out. Then set the result for that experiment in the right element of the results array.
We will use the mean squared error (MSE) instead of R2 because that is more stable. Since the MSE measures the error, smaller values are better.
Once you have your results, average them over all validation sets (using the np.mean() function over the correct axis) so that you know the average error for each degree over all validation sets. Now find the degree with the smallest error using the np.argmin() function.
Tip: Python doesn't have { and } for the beginning and end of for loops. It uses tab indentation. So make sure your indentation is set right for each of the for loops.
End of explanation
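The same leave-one-out procedure is also built into sklearn. A sketch that scores a single polynomial degree with `cross_val_score` and a `LeaveOneOut` splitter (on a clean sine set generated inline, so it is self-contained):

```python
import numpy as np
import sklearn.linear_model
import sklearn.model_selection
import sklearn.pipeline
import sklearn.preprocessing

x_all = np.linspace(0, 6, 10).reshape((-1, 1))
y_all = np.sin(x_all).ravel()

# one pipeline per degree; sklearn maximizes scores, hence the negated MSE
loo = sklearn.model_selection.LeaveOneOut()
model3 = sklearn.pipeline.make_pipeline(
    sklearn.preprocessing.PolynomialFeatures(degree=3),
    sklearn.linear_model.LinearRegression())
scores = sklearn.model_selection.cross_val_score(
    model3, x_all, y_all, cv=loo, scoring='neg_mean_squared_error')
print(np.mean(-scores))  # average leave-one-out MSE for degree 3
```

Repeating this for each candidate degree and picking the smallest average MSE reproduces the manual loop above in a few lines.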
max_degree = 10
plt.boxplot(results[:, : max_degree])
plt.xticks(range(1, max_degree + 1), range(max_degree))
plt.xlabel('Polynomial degree')
plt.ylabel('Mean Squared Error')
plt.show()
Explanation: Let's plot these results in a box plot to get an idea on how well the models performed on average.
Tip: change the max_degree variable if you want to see more details for the lower degrees.
End of explanation
### BEGIN SOLUTION
average_results = np.mean(results, axis=0)
degree = np.argmin(average_results)
### END SOLUTION
# average the results over all validation sets
# average_results = np.mean(results, axis= ? )
# find the optimal degree
# degree = np.argmin( ? )
print(f'The optimal degree for the polynomials is: {degree}')
Explanation: Next we will compute the best degree.
End of explanation
assert degree == 3
pol_exp = sklearn.preprocessing.PolynomialFeatures(degree=degree)
model = sklearn.linear_model.LinearRegression()
model.fit(pol_exp.fit_transform(x_train), y_train)
print(f'The score of this method on the train set is: {model.score(pol_exp.fit_transform(x_train), y_train):.3f}')
plt.figure(figsize=(13, 6))
plt.scatter(x_train, y_train, label='train')
plt.scatter(x_test, y_test, color='r', label='test')
plt.legend()
x = np.arange(0,6, 0.01).reshape((-1, 1))
plt.plot(x, model.predict(pol_exp.fit_transform(x)))
plt.show()
print(f'The score of the model on the test set is: {model.score(pol_exp.fit_transform(x_test), y_test):.3f}')
Explanation: Now let's train the model on the entire train set (including the validation set) and have a look at the result.
End of explanation
x_train, y_train = noisy_sine_train_data(noise='fixed')
pol_exp = sklearn.preprocessing.PolynomialFeatures(degree=9)
### BEGIN SOLUTION
model = sklearn.linear_model.RidgeCV()
### END SOLUTION
# model = sklearn.linear_model. ?
model.fit(pol_exp.fit_transform(x_train), y_train)
print(f'The R2 score of this method on the train set is: {model.score(pol_exp.fit_transform(x_train), y_train):.3f}')
plt.figure(figsize=(13,8))
plt.scatter(x_train, y_train, label='train')
plt.scatter(x_test, y_test, color='r', label='test')
plt.legend()
x = np.arange(0,6, 0.01).reshape((-1, 1))
plt.plot(x, model.predict(pol_exp.fit_transform(x)))
plt.show()
print(f'The R2 score of the model on the test set is: {model.score(pol_exp.fit_transform(x_test), y_test):.3f}')
Explanation: As you can see this automatic way of selecting the optimal degree has resulted in a good fit for the sine function.
5.3 Regularisation
When you have too many parameters in your model, there is a risk of over-fitting, i.e. your model learns the noise. To avoid this, techniques have been developed to make an estimation of this noise.
One of these techniques is Ridge Regression. This linear regression technique has an additional parameter called the regularisation parameter. This parameter basically sets the standard deviation of the noise you want to remove. The effect in practice is that it makes sure the weights of linear regression remain small and thus less over-fitting.
Since this is an additional parameter that needs to be set, it needs to be set using cross-validation as well. Luckily sklearn provides a method that does this for us in a computationally efficient way, called sklearn.linear_model.RidgeCV().
End of explanation
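RidgeCV tries a grid of regularisation strengths (called alphas) internally. A sketch that makes the grid explicit, rebuilding the same noisy-sine set-up inline so it runs on its own:

```python
import numpy as np
import sklearn.linear_model
import sklearn.preprocessing

np.random.seed(1)
x_demo = np.linspace(0, 6, 10).reshape((-1, 1))
y_demo = np.sin(x_demo).ravel()
x_demo += np.random.randn(10, 1) / 5  # add noise as in the helper above

features = sklearn.preprocessing.PolynomialFeatures(degree=9).fit_transform(x_demo)
ridge = sklearn.linear_model.RidgeCV(alphas=np.logspace(-4, 2, 20))
ridge.fit(features, y_demo)
print(ridge.alpha_)  # the regularisation strength chosen by cross-validation
```

Passing your own `alphas` array lets you widen or refine the search range when the default grid is too coarse.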
x_train, y_train = noisy_sine_train_data(noise='fixed')
pol_exp = sklearn.preprocessing.PolynomialFeatures(degree=3)
model = sklearn.linear_model.RidgeCV()
model.fit(pol_exp.fit_transform(x_train), y_train)
print('The R2 score of this method on the train set is:',
f'{model.score(pol_exp.fit_transform(x_train), y_train):.3f}')
# Now test outside the area of the training
x_test_extended = np.array([-3,-2,-1,7,8,9]).reshape((-1, 1))
y_test_extended = np.sin(x_test_extended)
plt.figure(figsize=(13, 8))
plt.scatter(x_train, y_train, label='train')
plt.scatter(x_test_extended, y_test_extended, color='r', label='test')
plt.legend()
x = np.arange(-4,10, 0.01).reshape((-1, 1))
plt.plot(x, model.predict(pol_exp.fit_transform(x)))
plt.show()
print('The R2 score of the model on the test set outside the area used for training is:',
f'{model.score(pol_exp.fit_transform(x_test_extended), y_test_extended):.3f}')
Explanation: As you can see above, the result of Ridge Regression is not as good as reducing the number of features in this example. However it works a lot better than without regularisation (try that). In the example above you will notice that it makes the result a lot smoother and removes the unwanted spikes. It will actually make sure that if you have too many features you still get a reasonable result. So this means that it should be in your standard toolkit.
The removal of the extra features can be automated using feature selection. A very short introduction to sklearn on the topic can be found here.
Another method that is often used is sklearn.linear_model.LassoCV() which actually combines removal of features and estimation of the noise. It is however very dependent on the dataset which of the two methods performs best.
Cross-validation should be applied to any parameter you set in your function, and always without looking at the test set.
Over-fitting is one of the biggest issues in machine learning and a lot of the research that is currently being done in machine learning is a search for techniques to avoid over-fitting. As a starting point we list a few of the techniques that you can use to avoid over-fitting:
* Use more data
* Artificially generate more data based on the original data
* Use a smaller model (with less parameters)
* Use less features (and thus less parameters)
* Use a regularisation parameter
* Artificially add noise to your model
* Only use linear models or make sure that the non-linearity in your model is closer to a linear function
* Combine multiple models that each over-fit in their own way into what is called an ensemble
5.4 Extrapolation
Now let's extend the range of the optimal plot you achieved from -4 to 10. What do you see? Does it look like a sine function?
End of explanation
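The LassoCV() method mentioned above can be tried in exactly the same way as RidgeCV; a sketch on the same noisy-sine set-up, rebuilt inline (coefficients driven to exactly zero correspond to discarded features):

```python
import numpy as np
import sklearn.linear_model
import sklearn.preprocessing

np.random.seed(1)
x_demo = np.linspace(0, 6, 10).reshape((-1, 1))
y_demo = np.sin(x_demo).ravel()
x_demo += np.random.randn(10, 1) / 5

features = sklearn.preprocessing.PolynomialFeatures(degree=9).fit_transform(x_demo)
# the large max_iter helps convergence on these unscaled polynomial features
lasso = sklearn.linear_model.LassoCV(cv=3, max_iter=100000)
lasso.fit(features, y_demo)
print(np.sum(lasso.coef_ == 0), 'of', len(lasso.coef_), 'coefficients were zeroed out')
```

How many coefficients get zeroed out depends on the noise and the chosen regularisation, so treat this as a qualitative illustration of the feature-removal effect rather than a tuned model.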
# Some code to generate spirals. You can ignore this for now.
# To comply with standards in machine learning we use x1 and x2 as opposed to x and y for this graph
# because y is reserved for the output in Machine Learning (= 0 or 1 in this case)
r = np.arange(0.1, 1.5, 0.0001)
theta = 2 * np.pi * r
x1_0 = r * np.cos(theta)
x2_0 = r * np.sin(theta)
x1_1 = - r * np.cos(theta)
x2_1 = - r * np.sin(theta)
perm_indices = np.random.permutation(range(len(x1_0)))
x1_0_rand = x1_0[perm_indices[ : 1000]] + np.random.randn(1000) / 5
x2_0_rand = x2_0[perm_indices[ : 1000]] + np.random.randn(1000) / 5
x1_1_rand = x1_1[perm_indices[1000 : 2000]] + np.random.randn(1000) / 5
x2_1_rand = x2_1[perm_indices[1000 : 2000]] + np.random.randn(1000) / 5
plt.figure(figsize=(8, 8))
plt.scatter(x1_0_rand, x2_0_rand, color = 'b', alpha=0.6, linewidth=0)
plt.scatter(x1_1_rand, x2_1_rand, color = 'r', alpha=0.6, linewidth=0)
plt.plot(x1_0, x2_0, color = 'b', lw=3)
plt.plot(x1_1, x2_1, color='r', lw=3)
plt.xlim(-2, 2)
plt.ylim(-2, 2)
plt.xlabel('X1')
plt.ylabel('X2')
plt.show()
Explanation: As you can see, the extrapolation results for non-linear regression are even worse than those of linear regression. This is because models only work well in the input space they have been trained in.
A possible way to be able to extrapolate and to use a non-linear method is to use forecasting techniques. This is handled in part 7, an optional part for those interested and going through the tutorial quite fast. Otherwise continue to the section on classification in exercise 6.
6. Classification
In classification the purpose is to separate 2 classes. As an example we will use the double spiral. It is a very common toy example in machine learning and allows you to visually show what is going on.
As shown in the graph below the purpose is to separate the blue from the red dots.
End of explanation
# Create a train and validation set
x_train_0 = np.concatenate((x1_0_rand[ : 800].reshape((-1,1)), x2_0_rand[ : 800].reshape((-1,1))), axis=1)
y_train_0 = np.zeros((len(x_train_0),))
x_train_1 = np.concatenate((x1_1_rand[ : 800].reshape((-1,1)), x2_1_rand[ : 800].reshape((-1,1))), axis=1)
y_train_1 = np.ones((len(x_train_1),))
x_val_0 = np.concatenate((x1_0_rand[800 : ].reshape((-1,1)), x2_0_rand[800 : ].reshape((-1,1))), axis=1)
y_val_0 = np.zeros((len(x_val_0),))
x_val_1 = np.concatenate((x1_1_rand[800 : ].reshape((-1,1)), x2_1_rand[800 : ].reshape((-1,1))), axis=1)
y_val_1 = np.ones((len(x_val_1),))
x_train = np.concatenate((x_train_0, x_train_1), axis=0)
y_train = np.concatenate((y_train_0, y_train_1), axis=0)
x_val = np.concatenate((x_val_0, x_val_1), axis=0)
y_val = np.concatenate((y_val_0, y_val_1), axis=0)
# Plot the train and test data
plt.figure(figsize=(8, 8))
plt.scatter(x_train[:, 0], x_train[:, 1], color='k', alpha=0.6, linewidth=0)
plt.scatter(x_val[:, 0], x_val[:, 1], color='y', alpha=0.6, linewidth=0)
plt.xlim(-2, 2)
plt.ylim(-2, 2)
plt.show()
Explanation: In a colored image this is easy to do, but when you remove the color it becomes much harder. Can you do the classification in the image below?
In black the samples from the train set are shown and in yellow the samples from the validation set.
End of explanation
### BEGIN SOLUTION
model = sklearn.linear_model.LogisticRegression()
model.fit(x_train, y_train)
### END SOLUTION
# model = sklearn.linear_model. ?
# model.fit( ? )
train_score = sklearn.metrics.accuracy_score(model.predict(x_train), y_train)
print(f'The train accuracy is: {train_score:.3f}')
val_score = sklearn.metrics.accuracy_score(model.predict(x_val), y_val)
print(f'The validation accuracy is: {val_score:.3f}')
assert val_score > 0.5
Explanation: As you can see, classifying is very hard to do when you don't have the colors, even if you saw the solution earlier. But you will see that machine learning algorithms can solve this quite well if they can learn from examples.
This figure also illustrates that the train and validation set are from the same distribution, which is why they look very similar on the plot. If you want to put a model trained on this data set in production and the real data is from the same distribution, you can expect similar results in real life as on your validation set.
6.1 Linear classifier
Let's try to do this with a linear classifier.
Logistic regression, despite its name, is a linear model for classification rather than regression. Its sklearn implementation is sklearn.linear_model.LogisticRegression().
End of explanation
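Besides hard 0/1 predictions, logistic regression also exposes class probabilities through `predict_proba`. A minimal sketch on made-up 1-D data:

```python
import numpy as np
import sklearn.linear_model

# toy 1-D data: class 0 on the left, class 1 on the right
x_toy = np.array([[0.0], [1.0], [2.0], [8.0], [9.0], [10.0]])
y_toy = np.array([0, 0, 0, 1, 1, 1])

clf = sklearn.linear_model.LogisticRegression()
clf.fit(x_toy, y_toy)
proba = clf.predict_proba([[1.0], [9.0]])
print(proba)  # each row sums to 1; column 1 is the probability of class 1
```

These probabilities are useful when you need a confidence estimate rather than just a class label.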
# A quick and dirty helper function to plot the decision boundaries
def plot_decision_boundary(model, pol_exp=None):
n=250
lin_space = np.linspace(-2, 2, num=n).reshape((-1, 1))
x1 = np.dot(lin_space, np.ones((1, n))).reshape((-1, 1))
x2 = np.dot(np.ones((n, 1)), lin_space.T).reshape((-1, 1))
x = np.concatenate((x1, x2), axis=1)
if pol_exp is None:
y = model.predict(x)
else:
y = model.predict(pol_exp.fit_transform(x))
i_0 = np.where(y < 0.5)
i_1 = np.where(y > 0.5)
plt.figure(figsize=(8,8))
plt.scatter(x[i_0, 0], x[i_0, 1], color='b', s=2, alpha=0.5, linewidth=0, marker='s')
plt.scatter(x[i_1, 0], x[i_1, 1], color='r',s=2, alpha=0.5, linewidth=0, marker='s')
plt.plot(x1_0, x2_0, color = 'b', lw=3)
plt.plot(x1_1, x2_1, color='r', lw=3)
plt.xlim(-2, 2)
plt.ylim(-2, 2)
# Call the function
plot_decision_boundary(model)
Explanation: Now let's plot the result.
End of explanation
### BEGIN SOLUTION
model = sklearn.linear_model.LogisticRegressionCV(max_iter=1000)
pol_exp = sklearn.preprocessing.PolynomialFeatures(degree=10)
model.fit(pol_exp.fit_transform(x_train), y_train)
### END SOLUTION
# model = sklearn.linear_model. ?
# pol_exp = sklearn.preprocessing.PolynomialFeatures(degree= ? )
# model.fit( ? )
train_score = sklearn.metrics.accuracy_score(model.predict(pol_exp.fit_transform(x_train)), y_train)
print(f'The train accuracy is: {train_score:.3f}')
val_score = sklearn.metrics.accuracy_score(model.predict(pol_exp.fit_transform(x_val)), y_val)
print(f'The validation accuracy is: {val_score:.3f}')
plot_decision_boundary(model, pol_exp=pol_exp)
assert val_score >= 0.8
Explanation: As you can see a linear classifier returns a linear decision boundary.
6.2 Non-linear classification
Now let's do this better with a non-linear classifier using polynomials. Play with the degree of the polynomial expansion and look for the effect on the validation set accuracy of the LogisticRegressionCV() model. This is a more advanced version of the default LogisticRegression() that uses cross validation to tune its hyper-parameters. What gives you the best results?
If you get a lot of "failed to converge" warnings consider increasing the max_iter parameter to 1000 or so. Getting some warnings is normal.
End of explanation
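The expand-then-fit pattern used here can also be packaged into a single sklearn Pipeline, which keeps the feature expansion and the classifier together so you call fit and predict on the raw inputs directly. A sketch on a made-up inside-versus-outside-the-unit-circle problem:

```python
import numpy as np
import sklearn.linear_model
import sklearn.pipeline
import sklearn.preprocessing

clf = sklearn.pipeline.make_pipeline(
    sklearn.preprocessing.PolynomialFeatures(degree=3),
    sklearn.linear_model.LogisticRegression(max_iter=1000))

# tiny toy problem: label points by whether they fall inside the unit circle
rng = np.random.RandomState(0)
x_toy = rng.uniform(-2, 2, size=(200, 2))
y_toy = (x_toy[:, 0]**2 + x_toy[:, 1]**2 < 1).astype(int)

clf.fit(x_toy, y_toy)
print(clf.score(x_toy, y_toy))  # training accuracy
```

Because the expansion includes the squared terms, the circular boundary becomes linearly separable in the expanded feature space.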
import sklearn.ensemble
### BEGIN SOLUTION
model = sklearn.ensemble.RandomForestClassifier()
model.fit(x_train, y_train)
### END SOLUTION
# model = ?
# model.fit( ? )
train_score = sklearn.metrics.accuracy_score(model.predict(x_train), y_train)
print(f'The train accuracy is: {train_score:.3f}')
val_score = sklearn.metrics.accuracy_score(model.predict(x_val), y_val)
print(f'The validation accuracy is: {val_score:.3f}')
plot_decision_boundary(model)
assert val_score > 0.7
Explanation: If everything went well you should get a validation/test accuracy very close to 0.8.
6.3 Random Forests
An often used technique in machine learning are random forests. Basically they are decision trees, or in programmers terms, if-then-else structures, like the one shown below.
<img src="images/tree.png" width=70%>
Decision trees are known to over-fit a lot because they just learn the train set by heart and store it. Random forests on the other hand combine multiple different (randomly initialized) decision trees that all over-fit in their own way. But by combining their output using a voting mechanism, they tend to cancel out each other's mistakes. This approach is called an ensemble and can be used for any combination of machine learning techniques. A schema representation of how such a random forest works is shown below.
<img src="images/random_forest.jpg">
Now let's try to use a random forest to solve the double spiral problem. (see sklearn.ensemble.RandomForestClassifier())
End of explanation
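The voting idea behind a random forest can be made explicit by hand: train several trees on random resamples of the data and take a majority vote. A sketch on a made-up XOR-like problem (a plain RandomForestClassifier does all of this for you internally):

```python
import numpy as np
import sklearn.tree

rng = np.random.RandomState(0)
x_toy = rng.uniform(-2, 2, size=(300, 2))
y_toy = (x_toy[:, 0] * x_toy[:, 1] > 0).astype(int)  # XOR-like quadrant labels

# train a handful of trees, each on a bootstrap resample of the data
votes = np.zeros(len(x_toy))
for seed in range(5):
    idx = np.random.RandomState(seed).choice(len(x_toy), size=200, replace=True)
    tree = sklearn.tree.DecisionTreeClassifier(random_state=seed)
    tree.fit(x_toy[idx], y_toy[idx])
    votes += tree.predict(x_toy)

ensemble_pred = (votes >= 3).astype(int)  # majority of the 5 trees
accuracy = np.mean(ensemble_pred == y_toy)
print(accuracy)
```

Each tree over-fits its own resample, but the majority vote smooths out their individual mistakes, which is exactly the ensemble effect described above.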
### BEGIN SOLUTION
model = sklearn.ensemble.RandomForestClassifier(min_samples_leaf=.02)
### END SOLUTION
# model = sklearn.ensemble.RandomForestClassifier(min_samples_leaf= ? )
model.fit(x_train, y_train)
train_score = sklearn.metrics.accuracy_score(model.predict(x_train), y_train)
print(f'The train accuracy is: {train_score:.3f}')
val_score = sklearn.metrics.accuracy_score(model.predict(x_val), y_val)
print(f'The validation accuracy is: {val_score:.3f}')
plot_decision_boundary(model)
assert val_score > 0.5
Explanation: As you can see they are quite powerful right out of the box without any parameter tuning. But we can get the results even better with some fine tuning.
Try changing the min_samples_leaf parameter for values between 0 and 0.5.
End of explanation
### BEGIN SOLUTION
model = sklearn.ensemble.RandomForestClassifier(n_estimators=100)
### END SOLUTION
# model = sklearn.ensemble.RandomForestClassifier(n_estimators= ? )
model.fit(x_train, y_train)
train_score = sklearn.metrics.accuracy_score(model.predict(x_train), y_train)
print(f'The train accuracy is: {train_score:.3f}')
val_score = sklearn.metrics.accuracy_score(model.predict(x_val), y_val)
print(f'The validation accuracy is: {val_score:.3f}')
plot_decision_boundary(model)
assert val_score > 0.7
Explanation: The min_samples_leaf parameter sets the number of data points that can create a new branch/leaf in the tree. So in practice it limits the depth of the decision tree. The bigger this parameter is, the less deep the tree will be and less likely each tree will over-fit.
For this parameter you can set integer numbers to set the specific number of samples, or you can use values between 0 and 0.5 to express a percentage of the size of the dataset. Since you might experiment with a smaller dataset to roughly tune your parameters, it is best to use values between 0 and 0.5 so that the value you choose is not as dependent on the size of the dataset you are working with.
Now that you have found the optimal min_samples_leaf, run the code again with the same parameter. Do you get the same result? Why not?
Another parameter to play with is the n_estimators parameter. Play with only this parameter to see what happens.
End of explanation
### BEGIN SOLUTION
model = sklearn.ensemble.RandomForestClassifier(n_estimators=1000, min_samples_leaf=0.02)
### END SOLUTION
# model = sklearn.ensemble.RandomForestClassifier(n_estimators= ? , min_samples_leaf= ? )
model.fit(x_train, y_train)
train_score = sklearn.metrics.accuracy_score(model.predict(x_train), y_train)
print(f'The train accuracy is: {train_score:.3f}')
val_score = sklearn.metrics.accuracy_score(model.predict(x_val), y_val)
print(f'The validation accuracy is: {val_score:.3f}')
plot_decision_boundary(model)
assert val_score > 0.7
Explanation: As you can see increasing the number of estimators improves the model and reduces over-fitting. This parameter actually sets the number of trees in the random forest. The more trees there are in the forest the better the result is. But obviously it requires more computing power so that is the limiting factor here.
This is the basic idea behind ensembles: if you combine more tools you get a good result on average.
Now try combining the n_estimators and min_samples_leaf parameter below.
End of explanation
### BEGIN SOLUTION
model = sklearn.ensemble.RandomForestClassifier(n_estimators=100, min_samples_leaf=0.01)
pol_exp = sklearn.preprocessing.PolynomialFeatures(degree=15)
model.fit(pol_exp.fit_transform(x_train), y_train)
### END SOLUTION
# model = sklearn.ensemble.RandomForestClassifier(n_estimators= ? , min_samples_leaf= ? )
# pol_exp = sklearn.preprocessing.PolynomialFeatures(degree= ?)
# model.fit( ? )
train_score = sklearn.metrics.accuracy_score(model.predict(pol_exp.fit_transform(x_train)), y_train)
print(f'The train accuracy is: {train_score:.3f}')
val_score = sklearn.metrics.accuracy_score(model.predict(pol_exp.fit_transform(x_val)), y_val)
print(f'The validation accuracy is: {val_score:.3f}')
plot_decision_boundary(model, pol_exp=pol_exp)
assert val_score > 0.7
Explanation: As you have noticed by now, it seems that random forests are less powerful than linear regression with polynomial feature extraction. This is because these polynomials are ideally suited for this task. This also means that you could get a better result if you also applied polynomial expansion for random forests. Try that below.
End of explanation
with open('data/train_set_forecasting.pickle', 'rb') as file:
train_set = pickle.load(file, encoding='latin1')
print(f'Shape of the train set = {train_set.shape}')
plt.figure(figsize=(20,4))
plt.plot(train_set)
plt.show()
Explanation: As you may have noticed, it is hard to get results that are better than the ones obtained using logistic regression. This illustrates that linear techniques are very powerful and often underrated. But in some situations they are not powerful enough and you need something stronger like a random forest or even neural networks (check this simulator if you want to play with the latter).
There is one neat trick that can be used for random forests. If you set the n_jobs parameter, it will use more than 1 core to compute. Set it to -1 to use all the cores (including hyper-threading cores). But don't do that during this tutorial because that would block the machine you are all working on.
To avoid over-fitting, you can set the max_depth parameter for random forests, which limits the maximum depth of each tree. Alternatively, you can set the min_samples_split parameter, which determines how many data points you need at least before you create another split (an additional if-else structure) while building the tree, or the min_samples_leaf parameter, which sets the minimum number of data points in each leaf. All 3 parameters depend on the number of data points in your dataset, especially the last 2, so don't forget to adapt them if you have been playing around with a small subset of the data. (A good trick to solve this might be to use a range similar to [0.0001, 0.001, 0.01, 0.1] * len(x_train). Feel free to extend the range in any direction. It is generally good practice to construct them using a log scale like in the example, or better like this: 10.0**np.arange(-5, 0, 0.5) * len(x_train).) In our experience min_samples_split or min_samples_leaf give slightly better results, and it usually doesn't make sense to combine more than 1 of these parameters.
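Constructing such a log-scaled candidate range is a one-liner; here is a sketch with an assumed training-set size of 300 (adjust n_train to your data):

```python
import numpy as np

n_train = 300  # hypothetical training-set size, just for illustration

# Log-spaced candidate fractions for min_samples_leaf / min_samples_split,
# then scaled to absolute sample counts as suggested above.
fractions = 10.0 ** np.arange(-5, 0, 0.5)
candidates = fractions * n_train

print(len(fractions))          # 10 candidate values
print(round(candidates[-1]))   # largest candidate, ~95 samples
```

Each candidate can then be fed to the estimator in a validation loop like the one used later in this notebook.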
In the previous exercises we have done a lot of the optimizations on the test set. This should of course be avoided. What you should do instead is to optimize and select your model using a validation set and of course you should automate this process as shown in one of the earlier exercises. One thing to take into account here is that you should use multiple initialisations of a random forest because its decision trees are randomly generated.
7. Forecasting (Optional)
We are going to forecast page views data, very similar to the data used in the anomaly detection section. The data contains 1 sample per hour.
End of explanation
import sklearn
import sklearn.linear_model
import sklearn.gaussian_process
model = sklearn.linear_model.LinearRegression()
# the input x_train contains all the data except the last data point
x_train = train_set[ : -1].reshape((-1, 1)) # the reshape is necessary since sklearn requires a 2 dimensional array
# the output y_train contains all the data except the first data point
y_train = train_set[1 : ]
# this code fits the model on the train data
model.fit(x_train, y_train)
# this score gives you how well it fits on the train set
# higher is better and 1.0 is perfect
print(f'The R2 train score of the linear model is {model.score(x_train, y_train):.3f}')
Explanation: In the graph above you can clearly see that there is a rising trend in the data.
7.1 One-step ahead prediction
This forecasting section will describe the one-step ahead prediction. In this case, this means that we will only predict the next data point i.e. the number of page views in the next hour.
Now let's first build a model that tries to predict the next data point from the previous one.
We will use a technique called teacher forcing where we assume that the output of the previous prediction is correct. This means that we can use the original time series as input. Now we only need to align input and output so that the output corresponds to the next sample in the input time series.
End of explanation
n_predictions = 100
import copy
# use the last data point as the first input for the predictions
x_test = copy.deepcopy(train_set[-1]) # make a copy to avoid overwriting the training data
prediction = []
for i in range(n_predictions):
# predict the next data point
y_test = model.predict([[x_test]])[0] # sklearn requires a 2 dimensional array and returns a one-dimensional one
### BEGIN SOLUTION
prediction.append(y_test)
x_test = y_test
### END SOLUTION
# prediction.append( ? )
# x_test = ?
prediction = np.array(prediction)
plt.figure(figsize=(20,4))
plt.plot(np.concatenate((train_set, prediction)), 'g')
plt.plot(train_set, 'b')
plt.show()
Explanation: As you can see from the score above, the model is not perfect but it seems to get a relatively high score. Now let's make a prediction into the future and plot this.
To predict the data point after that we will use the predicted data to make a new prediction. The code below shows how this works for this data set using the linear model you used earlier. Don't forget to fill out the missing code.
End of explanation
def convert_time_series_to_train_data(ts, width):
x_train, y_train = [], []
for i in range(len(ts) - width - 1):
x_train.append(ts[i : i + width])
y_train.append(ts[i + width])
return np.array(x_train), np.array(y_train)
width = 5
x_train, y_train = convert_time_series_to_train_data(train_set, width)
print(x_train.shape, y_train.shape)
Explanation: As you can see from the image above the model doesn't quite seem to fit the data well. Let's see how we can improve this.
7.2 Multiple features
If your model is not smart enough there is a simple trick in machine learning to make your model more intelligent (but also more complex). This is by adding more features.
To make our model better we will use more than 1 sample from the past. To make your life easier there is a simple function below that will create a data set for you. The width parameter sets the number of hours in the past that will be used.
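To see concretely what the sliding window produces, here is the same windowing logic as the convert_time_series_to_train_data helper above, applied to a tiny toy series (redefined locally so the snippet runs standalone):

```python
import numpy as np

def to_train_data(ts, width):
    # mirrors the sliding-window logic of the helper above
    x, y = [], []
    for i in range(len(ts) - width - 1):
        x.append(ts[i : i + width])
        y.append(ts[i + width])
    return np.array(x), np.array(y)

x, y = to_train_data(np.array([1, 2, 3, 4, 5]), 2)
print(x.tolist())  # [[1, 2], [2, 3]]
print(y.tolist())  # [3, 4]
```

Each row of x holds `width` consecutive past values, and the matching entry of y is the value that immediately follows them.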
End of explanation
width = 5
x_train, y_train = convert_time_series_to_train_data(train_set, width)
model = sklearn.linear_model.LinearRegression()
model.fit(x_train, y_train)
print(f'The R2 score of the linear model with width={width} is {model.score(x_train, y_train):.3f}')
Explanation: As you can see from the print above, both x_train and y_train contain 303 data points. For x_train you see that there are now 5 features, which contain the page views from the past 5 hours.
So let's have a look at what the increase from 1 to 5 features results in.
End of explanation
import copy
# this is a helper function to make the predictions
def predict(model, train_set, width, n_points):
prediction = []
# create the input data set for the first predicted output
# copy the data to make sure the original is not overwritten
x_test = copy.deepcopy(train_set[-width : ])
for i in range(n_points):
# predict only the next data point
prediction.append(model.predict(x_test.reshape((1, -1))))
# use the newly predicted data point as input for the next prediction
x_test[0 : -1] = x_test[1 : ]
x_test[-1] = prediction[-1]
return np.array(prediction)
n_predictions = 200
prediction = predict(model, train_set, width, n_predictions)
plt.figure(figsize=(20,4))
plt.plot(np.concatenate((train_set, prediction[:,0])), 'g')
plt.plot(train_set, 'b')
plt.show()
Explanation: Now change the width parameter to see if you can get a better score.
7.3 Over-fitting
Now execute the code below to see the prediction of this model.
End of explanation
### BEGIN SOLUTION
width = 22
### END SOLUTION
# width = ?
x_train, y_train = convert_time_series_to_train_data(train_set, width)
model = sklearn.linear_model.LinearRegression()
model.fit(x_train, y_train)
print(f'The R2 score of the linear model with width={width} is {model.score(x_train, y_train):.3f}')
prediction = predict(model, train_set, width, 200)
plt.figure(figsize=(20,4))
plt.plot(np.concatenate((train_set, prediction[:,0])), 'g')
plt.plot(train_set, 'b')
plt.show()
assert width > 1
Explanation: As you can see in the image above the prediction is not what you would expect from a perfect model. What happened is that the model learned the training data by heart without 'understanding' what the data is really about. This phenomenon is called over-fitting and will always occur if you make your model too complex.
Now play with the width variable below to see if you can find a more sensible width.
End of explanation
model_generators = [sklearn.linear_model.LinearRegression(),
sklearn.linear_model.RidgeCV(cv=3),
sklearn.linear_model.LassoCV(cv=3),
sklearn.ensemble.RandomForestRegressor(n_estimators=10)]
best_score = 0
### BEGIN SOLUTION
for model_gen in model_generators:
for width in range(1, 50):
### END SOLUTION
# for model_gen in ? :
# for width in range( ? , ? ):
x_train, y_train = convert_time_series_to_train_data(train_set, width)
# train the model on the first 48 hours
x_train_i, y_train_i = x_train[ : -48, :], y_train[ : -48]
# use the last 48 hours for validation
x_val_i, y_val_i = x_train[-48 : ], y_train[-48 : ]
# there is a try except clause here because some models do not converge for some data
try:
# Constructs a new, untrained, model with the same parameters
model = sklearn.base.clone(model_gen, safe=True)
### BEGIN SOLUTION
model.fit(x_train_i, y_train_i)
this_score = model.score(x_val_i, y_val_i)
### END SOLUTION
# model.fit( ? , ? )
# this_score = ?
if this_score > best_score:
best_score = this_score
# Constructs a new, untrained, model with the same parameters
best_model = sklearn.base.clone(model, safe=True)
best_width = width
except:
pass
print(f'{best_model.__class__.__name__} was selected as the best model with a width of {best_width}',
f'and a validation R2 score of {best_score:.3f}')
Explanation: As you will have noticed by now, it is better to accept a non-perfect training score, which will give you a much better outcome. Now try the same thing for the following models:
* sklearn.linear_model.RidgeCV()
* sklearn.linear_model.LassoCV()
* sklearn.ensemble.RandomForestRegressor()
The first 2 models also estimate the noise that is present in the data to avoid over-fitting. RidgeCV() will keep the weights that are found small, but it won't put them to zero. LassoCV() on the other hand will put several weights to 0. Execute model.coef_ to see the actual coefficients that have been found.
RandomForestRegressor() is the regression variant of RandomForestClassifier() and is therefore a non-linear method. This makes it a lot more complex, so it can represent more complex shapes than the linear methods. It also means that it is much more capable of learning the data by heart (and thus of over-fitting). In many cases, however, this additional complexity lets it capture the data better given the correct parameter settings (try width=25 a couple of times, since the forest is random, and see what the results are; set the n_estimators parameter to a higher number to get more stable results).
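As a sketch of the coef_ inspection mentioned above — on synthetic data rather than the page-view series, so the exact values are illustrative only — the L1 penalty of LassoCV should keep only the informative lag:

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Synthetic "lag features": the target depends only on the last column,
# so the L1 penalty should shrink the other weights toward zero.
rng = np.random.RandomState(0)
X = rng.rand(200, 5)
y = 3.0 * X[:, -1] + 0.01 * rng.randn(200)

model = LassoCV(cv=3).fit(X, y)
print(model.coef_.round(2))  # only the last weight should be clearly non-zero
```

RidgeCV, by contrast, would shrink all five weights but leave them non-zero.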
7.4 Automation
What we have done up to now is manually selecting the best outcome based on the test result. This can be considered cheating because you have just created a self-fulfilling prophecy. Additionally, it is not only cheating; it is also hard to find the exact width that gives the best result by just visually inspecting it. So we need a more objective approach to solve this.
To automate this process you can use a validation set. In this case we will use the last 48 hours of the training set to validate the score and select the best parameter value. This means that we will have to use a subset of the training set to fit the model.
End of explanation
### BEGIN SOLUTION
width = best_width
model = best_model
### END SOLUTION
# width = ?
# model = ?
x_train, y_train = convert_time_series_to_train_data(train_set, width)
### BEGIN SOLUTION
model.fit(x_train, y_train) # train on the full data set
### END SOLUTION
# model.fit( ? , ? )
n_predictions = 200
prediction = predict(model, train_set, width, n_predictions)
plt.figure(figsize=(20,4))
plt.plot(np.concatenate((train_set, prediction[:,0])), 'g')
plt.plot(train_set, 'b')
plt.show()
Explanation: If everything is correct, the LassoCV method was selected.
Now we are going to train this best model on all the data. In this way we use all the available data to build a model.
End of explanation |
7,672 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Finite-Time Air-Fuel Otto Cycles in Python
Octane Cycle Example
Module import
After installing this module, it should import normally as
Step1: Case setup and solution
1. Engine
To setup a case, first, instantiate the engine and operate as follows
Step2: From the engine, obtain a crankRod object as follows
Step3: 2. Reactive Mixture
Next, the reactive mixture is setup as follows
Step4: 3. Cumulative Reaction Function
The cumulative reaction function (of crankshaft angle) is defined as a function as below
Step5: 4. Solver setup
The solver is setup with the information previously prepared, as below
Step6: 5. Solver solution
Once everything is setup, iteratively solving is simple
Step7: 6. Solution querying
After closing, results can be queried by accessing SOL data
Step8: 6. Case saving – Data persistence
SOL object can be pickled for data persistency
Step9: Solution Plots
Once a case is solved, plots can be easily generated
Step10: $P-\alpha$
Step11: $P-v$, linear and loglog
Step12: $n-\alpha$
Step13: $j-\alpha$
Step14: $Q_r-\alpha$ | Python Code:
import FTAF
FTAF.__version__
import math
import numpy
import matplotlib
import matplotlib.pylab as plt
%matplotlib inline
Explanation: Finite-Time Air-Fuel Otto Cycles in Python
Octane Cycle Example
Module import
After installing this module, it should import normally as:
End of explanation
# Reciprocating engine instantiation
ENG = FTAF.eng.recipr.engine({
'Vd' : 2000e-6, # 2000 cc
'z' : 4, # 4 cyl.
'rDS': 1, # D/S = 1 (square)
'rLR': 3.6, # L/R
'rV' : 8, # 8:1 compression
'th' : -30, # ignition angle, deg
'N' : 2400, # rotation, RPM
})
# Engine calculation and OPTIONAL output
ENG.calc()
print(ENG.Ɛ)
Explanation: Case setup and solution
1. Engine
To setup a case, first, instantiate the engine and operate as follows:
End of explanation
# Get the crankRod object from the solved engine
CR = ENG.CR()
# At this point, mechanism data can be OPTIONALLY plotted:
α = numpy.linspace(-numpy.pi, numpy.pi, 180)
V = CR.V(α)
# System volume (m³) as function of crankshaft angle (rad):
plt.plot(α, V, 'b-')
Explanation: From the engine, obtain a crankRod object as follows:
End of explanation
# Initialize admission conditions, as well as operating and mixture parameters:
P0, T0, V0, φ, ψ, ζ = 100.00, 300, CR.V0(), 0.5, 3.76, 0
# Initialize FUEL proportions with a thermo-chemical mixture (100% octane)
# FUEL proportions don't need to add to 1 kmol. Any non-zero amount is OK.
fuelProp = FTAF.tc.tcTypes.tcMixture({
'H2' : 0.00, # 0% Hydrogen
'C8H18': 5.00, # 100% Octane - nevermind the 5 kmol amount
})
# Create the reactive mixture with FTAF.tc.tcAF.freshAFMix:
RM = FTAF.tc.tcAF.freshAFMix(P0, T0, V0, φ, ψ, fuelProp, ζ)
# The component amounts are automatically adjusted.
# Even the combustion products appear (but count as zero kmol each)
print(RM)
# The combustion reaction is also setup:
print(RM.asReaction())
# OPTIONALLY, the combustion time can be set:
# If this is set, the solver can calculate the δ parameter.
# If a value of δ is passed to the solver, it will override this.
RM.setReactionTime(2200e-6) # In seconds
print(RM.Δt)
Explanation: 2. Reactive Mixture
Next, the reactive mixture is setup as follows:
End of explanation
# This function has to have the three parameters in this order: α, θ, and δ.
def ga(α, θ, δ):
return (1 - numpy.cos((α - θ) * numpy.pi / δ)) / 2
Explanation: 3. Cumulative Reaction Function
The cumulative reaction function (of crankshaft angle) is defined as a function as below:
End of explanation
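As a quick sanity check on the cumulative reaction profile defined above (the function is redefined locally so the cell runs standalone): it is 0 at the spark angle θ, 0.5 midway, and 1 once combustion completes after δ radians.

```python
import math
import numpy

def ga(α, θ, δ):
    # same cosine burn profile as the function above
    return (1 - numpy.cos((α - θ) * numpy.pi / δ)) / 2

θ, δ = math.radians(-30), math.radians(90)
print(ga(θ, θ, δ))           # 0.0 at the spark angle
print(ga(θ + δ / 2, θ, δ))   # 0.5 at the midpoint
print(ga(θ + δ, θ, δ))       # 1.0 once combustion completes
```

Any other smooth, monotone profile with the same endpoints could be substituted here, since the solver only consumes g(α, θ, δ).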
# Instantiate the solver
SOL = FTAF.Otto.cycle.solver(
CR, # The crankRod mechanism
RM, # The reactive mixture
40, # Ns
20, # Nq
ga, # g(α, θ, δ)
P0, # Admission's P
T0, # Admission's T
δ = math.radians(90), # OPTIONAL - if given, overrides RM.Δt
)
# OPTIONAL: Initialized quantities are accessible
SOL.α # The α values reflect the Ns, Nq, θ, and δ parameters...
Explanation: 4. Solver setup
The solver is setup with the information previously prepared, as below:
End of explanation
# Solve iteratively
for i in range(len(SOL.α)-1):
# Prints the iteration number as the solution proceeds
print('{:3d}, '.format(i), end = '', flush = True)
if i % 10 == 9:
print('')
# SOL.iterate() makes one iteration
SOL.iterate()
# Closes the solution --- isochoric heat release
# ... and calculates the thermal efficiency
SOL.close()
Explanation: 5. Solver solution
Once everything is setup, iteratively solving is simple:
End of explanation
# Display the automatically calculated results
SOL.results
# All tracked properties and interactions (such as work) are available as:
SOL.W[:4]
# Specific interactions can be obtained using SOL.m0 --- the initial system mass:
(numpy.array(SOL.W) / SOL.m0)[:4]
# Example: wEnt, wRej, wNet
wEnt = sum([i for i in SOL.W if i > 0]) / SOL.m0
wRej = sum([-i for i in SOL.W if i < 0]) / SOL.m0
wNet = wRej - wEnt
print((wNet, wRej, wEnt)) # kJ/kg
# Engine mechanical power
pNet = (wNet * SOL.m0 / 2) * ENG.Ɛ['z'] * ENG.Ɛ['N'] / 60
print(pNet) # kW
Explanation: 6. Solution querying
After closing, results can be queried by accessing SOL data:
End of explanation
# Query the solution ID
SOL.ID
# Change the solution ID
SOL.ID = '03-FTAF-Octane.pickle'
# Save the solution:
SOL.save()
Explanation: 6. Case saving – Data persistence
SOL object can be pickled for data persistency:
End of explanation
matplotlib.rcParams['figure.figsize'] = (12, 4)
Explanation: Solution Plots
Once a case is solved, plots can be easily generated:
End of explanation
plt.plot(SOL.α, SOL.P)
plt.plot(SOL.α, SOL.P, 'b.')
plt.grid(dashes = (5, 3))
Explanation: $P-\alpha$:
End of explanation
plt.plot(SOL.v, SOL.P)
plt.plot(SOL.v, SOL.P, 'b.')
plt.grid(dashes = (5, 3))
plt.loglog(SOL.v, SOL.P)
plt.loglog(SOL.v, SOL.P, 'b.')
plt.grid(dashes = (5, 3))
Explanation: $P-v$, linear and loglog:
End of explanation
plt.plot(SOL.α[:-1], SOL.n)
plt.plot(SOL.α[:-1], SOL.n, 'b.')
plt.grid(dashes = (5, 3))
Explanation: $n-\alpha$: polytropic exponent
End of explanation
plt.plot(SOL.α[:-1], SOL.j)
plt.plot(SOL.α[:-1], SOL.j, 'b.')
plt.grid(dashes = (5, 3))
Explanation: $j-\alpha$: polytropic exponent convergence
End of explanation
plt.plot(SOL.α[:-1], SOL.Qr)
plt.plot(SOL.α[:-1], SOL.Qr, 'b.')
plt.grid(dashes = (5, 3))
Explanation: $Q_r-\alpha$: heat release due to reaction
End of explanation |
7,673 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Estimating School Location Choice
This notebook illustrates how to re-estimate a single model component for ActivitySim. This process
includes running ActivitySim in estimation mode to read household travel survey files and write out
the estimation data bundles used in this notebook. To review how to do so, please visit the other
notebooks in this directory.
Load libraries
Step1: We'll work in our test directory, where ActivitySim has saved the estimation data bundles.
Step2: Load data and prep model for estimation
Step3: Review data loaded from EDB
Next we can review what was read the EDB, including the coefficients, model settings, utilities specification, and chooser and alternative data.
coefficients
Step4: alt_values
Step5: chooser_data
Step6: landuse
Step7: spec
Step8: size_spec
Step9: Estimate
With the model setup for estimation, the next step is to estimate the model coefficients. Make sure to use a sufficiently large enough household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood maximizing solution. Larch has a built-in estimation methods including BHHH, and also offers access to more advanced general purpose non-linear optimizers in the scipy package, including SLSQP, which allows for bounds and constraints on parameters. BHHH is the default and typically runs faster, but does not follow constraints on parameters.
Step10: Estimated coefficients
Step11: Output Estimation Results
Step12: Write updated utility coefficients
Step13: Write updated size coefficients
Step14: Write the model estimation report, including coefficient t-statistic and log likelihood
Step15: Next Steps
The final step is to either manually or automatically copy the *_coefficients_revised.csv file and *_size_terms.csv file to the configs folder, rename them to *_coefficients.csv and destination_choice_size_terms.csv, and run ActivitySim in simulation mode. Note that all the location
and desintation choice models share the same destination_choice_size_terms.csv input file, so if you
are updating all these models, you'll need to ensure that updated sections of this file for each model
are joined together correctly. | Python Code:
import larch # !conda install larch #for estimation
import pandas as pd
import numpy as np
import yaml
import larch.util.excel
import os
Explanation: Estimating School Location Choice
This notebook illustrates how to re-estimate a single model component for ActivitySim. This process
includes running ActivitySim in estimation mode to read household travel survey files and write out
the estimation data bundles used in this notebook. To review how to do so, please visit the other
notebooks in this directory.
Load libraries
End of explanation
os.chdir('test')
Explanation: We'll work in our test directory, where ActivitySim has saved the estimation data bundles.
End of explanation
modelname="school_location"
from activitysim.estimation.larch import component_model
model, data = component_model(modelname, return_data=True)
Explanation: Load data and prep model for estimation
End of explanation
data.coefficients
Explanation: Review data loaded from EDB
Next we can review what was read the EDB, including the coefficients, model settings, utilities specification, and chooser and alternative data.
coefficients
End of explanation
data.alt_values
Explanation: alt_values
End of explanation
data.chooser_data
Explanation: chooser_data
End of explanation
data.landuse
Explanation: landuse
End of explanation
data.spec
Explanation: spec
End of explanation
data.size_spec
Explanation: size_spec
End of explanation
model.estimate(method='BHHH', options={'maxiter':1000})
Explanation: Estimate
With the model setup for estimation, the next step is to estimate the model coefficients. Make sure to use a sufficiently large enough household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood maximizing solution. Larch has a built-in estimation methods including BHHH, and also offers access to more advanced general purpose non-linear optimizers in the scipy package, including SLSQP, which allows for bounds and constraints on parameters. BHHH is the default and typically runs faster, but does not follow constraints on parameters.
End of explanation
model.parameter_summary()
Explanation: Estimated coefficients
End of explanation
from activitysim.estimation.larch import update_coefficients, update_size_spec
result_dir = data.edb_directory/"estimated"
Explanation: Output Estimation Results
End of explanation
update_coefficients(
model, data, result_dir,
output_file=f"{modelname}_coefficients_revised.csv",
);
Explanation: Write updated utility coefficients
End of explanation
update_size_spec(
model, data, result_dir,
output_file=f"{modelname}_size_terms.csv",
)
Explanation: Write updated size coefficients
End of explanation
model.to_xlsx(
result_dir/f"{modelname}_model_estimation.xlsx",
data_statistics=False,
);
Explanation: Write the model estimation report, including coefficient t-statistic and log likelihood
End of explanation
pd.read_csv(result_dir/f"{modelname}_coefficients_revised.csv")
pd.read_csv(result_dir/f"{modelname}_size_terms.csv")
Explanation: Next Steps
The final step is to either manually or automatically copy the *_coefficients_revised.csv file and *_size_terms.csv file to the configs folder, rename them to *_coefficients.csv and destination_choice_size_terms.csv, and run ActivitySim in simulation mode. Note that all the location
and desintation choice models share the same destination_choice_size_terms.csv input file, so if you
are updating all these models, you'll need to ensure that updated sections of this file for each model
are joined together correctly.
End of explanation |
7,674 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MNIST Dataset
Also known as digits if you're familiar with sklearn
Step1: Basic data analysis on the dataset
Step2: Display Images
Let's now display some of the images and see how they look
We will be using matplotlib library for displaying the image | Python Code:
import numpy as np
import keras
from keras.datasets import mnist
# Load the datasets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
Explanation: MNIST Dataset
Also known as digits if you're familiar with sklearn:
```python
from sklearn.datasets import digits
```
Problem Definition
Recognize handwritten digits
Data
The MNIST database (link) has a database of handwritten digits.
The training set has $60,000$ samples.
The test set has $10,000$ samples.
The digits are size-normalized and centered in a fixed-size image.
The data page has description on how the data was collected. It also has reports the benchmark of various algorithms on the test dataset.
Load the data
The data is available in the repo's data folder. Let's load that using the keras library.
For now, let's load the data and see how it looks.
End of explanation
# What is the type of X_train?
# What is the type of y_train?
# Find number of observations in training data
# Find number of observations in test data
# Display first 2 records of X_train
# Display the first 10 records of y_train
# Find the number of observations for each digit in the y_train dataset
# Find the number of observations for each digit in the y_test dataset
# What is the dimension of X_train?. What does that mean?
Explanation: Basic data analysis on the dataset
End of explanation
from matplotlib import pyplot
import matplotlib as mpl
%matplotlib inline
# Displaying the first training data
fig = pyplot.figure()
ax = fig.add_subplot(1,1,1)
imgplot = ax.imshow(X_train[0], cmap=mpl.cm.Greys)
imgplot.set_interpolation('nearest')
ax.xaxis.set_ticks_position('top')
ax.yaxis.set_ticks_position('left')
pyplot.show()
# Let's now display the 11th record
Explanation: Display Images
Let's now display some of the images and see how they look
We will be using matplotlib library for displaying the image
End of explanation |
7,675 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
spaCy Tutorial
(C) 2019-2020 by Damir Cavar
Version
Step1: We can load the English NLP pipeline in the following way
Step2: Tokenization
Step3: Part-of-Speech Tagging
We can tokenize and part of speech tag the individual tokens using the following code
Step4: The above output contains for every token in a line the token itself, the lemma, the Part-of-Speech tag, the dependency label, the orthographic shape (upper and lower case characters as X or x respectively), the boolean for the token being an alphanumeric string, and the boolean for it being a stopword.
Dependency Parse
Using the same approach as above for PoS-tags, we can print the Dependency Parse relations
Step5: As specified in the code, each line represents one token. The token is printed in the first column, followed by the dependency relation to it from the token in the third column, followed by its main category type.
Named Entity Recognition
Similarly to PoS-tags and Dependency Parse Relations, we can print out Named Entity labels
Step6: We can extend the input with some more entities
Step7: The corresponding NE-labels are
Step8: Pattern Matching in spaCy
Step9: spaCy is Missing
From the linguistic standpoint, when looking at the analytical output of the NLP pipeline in spaCy, there are some important components missing
Step10: We can load the visualizer
Step11: Loading the English NLP pipeline
Step12: Process an input sentence
Step13: If you want to generate a visualization running code outside of the Jupyter notebook, you could use the following code. You should not use this code, if you are running the notebook. Instead, use the function display.render two cells below.
Visualizing the Dependency Parse tree can be achieved by running the following server code and opening up a new tab on the URL http
Step14: Instead of serving the graph, one can render it directly into a Jupyter Notebook
Step16: In addition to the visualization of the Dependency Trees, we can visualize named entity annotations
Step17: Vectors
To use vectors in spaCy, you might consider installing the larger models for the particular language. The common module and language packages only come with the small models. The larger models can be installed as described on the spaCy vectors page
Step18: We can now import the English NLP pipeline to process some word list. Since the small models in spaCy only include context-sensitive tensors, we should use the downloaded large model for better word vectors. We load the large model as follows
Step19: We can process a list of words by the pipeline using the nlp object
Step20: As described in the spaCy chapter Word Vectors and Semantic Similarity, the resulting elements of Doc, Span, and Token provide a method similarity(), which returns the similarities between words
Step21: We can access the vectors of these objects using the vector attribute
Step22: The attribute has_vector returns a boolean depending on whether the token has a vector in the model or not. The token sasquatch has no vector. It is also out-of-vocabulary (OOV), as the fourth column shows. Thus, it also has a norm of $0$, that is, it has a length of $0$.
Here the token vector has a length of $300$. We can print out the vector for a token
Step23: Here just another example of similarities for some famous words
Step24: Similarities in Context
In spaCy parsing, tagging and NER models make use of vector representations of contexts that represent the meaning of words. A text meaning representation is represented as an array of floats, i.e. a tensor, computed during the NLP pipeline processing. With this approach words that have not been seen before can be typed or classified. SpaCy uses a 4-layer convolutional network for the computation of these tensors. In this approach these tensors model a context of four words left and right of any given word.
Let us use the example from the spaCy documentation and check the word labrador
Step25: We can now test for the context
Step26: Using this strategy we can compute document or text similarities as well
Step27: We can vary the word order in sentences and compare them
Step28: Custom Models
Optimization
Step29: Training Models
This example code for training an NER model is based on the training example in spaCy.
We will import some components from the __future__ module. Read its documentation here.
Step30: We import the random module for pseudo-random number generation
Step31: We import the Path object from the pathlib module
Step32: We import spaCy
Step33: We also import the minibatch and compounding helpers from spacy.util
Step34: The training data is formatted as JSON
Step35: We created a blank 'xx' model
Step36: We add the named entity labels to the NER model
Step37: Assuming that the model is empty and untrained, we reset and initialize the weights randomly using
Step38: We would not do this, if the model is supposed to be tuned or retrained on new data.
We get all pipe-names in the model that are not our NER related pipes to disable them during training
Step39: We can now disable the other pipes and train just the NER using 100 iterations
Step40: We can test the trained model
Step41: We can define the output directory where the model will be saved as the models folder in the directory where the notebook is running
Step42: Save model to output dir
Step43: To make sure everything worked out well, we can test the saved model | Python Code:
import spacy
Explanation: spaCy Tutorial
(C) 2019-2020 by Damir Cavar
Version: 1.4, February 2020
Download: This and various other Jupyter notebooks are available from my GitHub repo.
This is a tutorial related to the L665 course on Machine Learning for NLP focusing on Deep Learning, Spring 2018 at Indiana University. The following tutorial assumes that you are using a newer distribution of Python 3 and spaCy 2.2 or newer.
Introduction to spaCy
Follow the instructions on the spaCy homepage about installation of the module and language models. Your local spaCy module is correctly installed if the following command is successful:
End of explanation
nlp = spacy.load("en_core_web_sm")
Explanation: We can load the English NLP pipeline in the following way:
End of explanation
doc = nlp(u'Human ambition is the key to staying ahead of automation.')
for token in doc:
print(token.text)
Explanation: Tokenization
End of explanation
doc = nlp(u'John bought a car and Mary a motorcycle.')
for token in doc:
print("\t".join( (token.text, str(token.idx), token.lemma_, token.pos_, token.tag_, token.dep_,
token.shape_, str(token.is_alpha), str(token.is_stop) )))
Explanation: Part-of-Speech Tagging
We can tokenize and part of speech tag the individual tokens using the following code:
End of explanation
for token in doc:
print(token.text, token.dep_, token.head.text, token.head.pos_,
[child for child in token.children])
Explanation: The above output contains for every token in a line the token itself, the lemma, the Part-of-Speech tag, the dependency label, the orthographic shape (upper and lower case characters as X or x respectively), the boolean for the token being an alphanumeric string, and the boolean for it being a stopword.
Dependency Parse
Using the same approach as above for PoS-tags, we can print the Dependency Parse relations:
End of explanation
for ent in doc.ents:
print(ent.text, ent.start_char, ent.end_char, ent.label_)
Explanation: As specified in the code, each line represents one token. The token is printed in the first column, followed by the dependency relation to it from the token in the third column, followed by its main category type.
Named Entity Recognition
Similarly to PoS-tags and Dependency Parse Relations, we can print out Named Entity labels:
End of explanation
doc = nlp(u'Ali Hassan Kuban said that Apple Inc. will buy Google in May 2018.')
Explanation: We can extend the input with some more entities:
End of explanation
for ent in doc.ents:
print(ent.text, ent.start_char, ent.end_char, ent.label_)
Explanation: The corresponding NE-labels are:
End of explanation
from spacy.matcher import Matcher
matcher = Matcher(nlp.vocab)
pattern = [{'LOWER': 'hello'}, {'IS_PUNCT': True}, {'LOWER': 'world'}]
matcher.add('HelloWorld', None, pattern)
doc = nlp(u'Hello, world! Hello... world!')
matches = matcher(doc)
for match_id, start, end in matches:
string_id = nlp.vocab.strings[match_id] # Get string representation
span = doc[start:end] # The matched span
print(match_id, string_id, start, end, span.text)
print("-" * 50)
doc = nlp(u'Hello, world! Hello world!')
matches = matcher(doc)
for match_id, start, end in matches:
string_id = nlp.vocab.strings[match_id] # Get string representation
span = doc[start:end] # The matched span
print(match_id, string_id, start, end, span.text)
Explanation: Pattern Matching in spaCy
End of explanation
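Conceptually, the Matcher slides a token-level pattern over the document and reports spans whose tokens satisfy each attribute specification in turn. A pure-Python sketch of that idea, with tokens represented as plain attribute dicts (an illustration of the behaviour, not spaCy's actual implementation):

```python
def match_pattern(tokens, pattern):
    # Report (start, end) spans where consecutive tokens satisfy the
    # pattern -- each token is a plain dict of attributes here
    matches = []
    n, m = len(tokens), len(pattern)
    for start in range(n - m + 1):
        if all(all(tokens[start + i].get(key) == value
                   for key, value in spec.items())
               for i, spec in enumerate(pattern)):
            matches.append((start, start + m))
    return matches

# "Hello , world" as attribute dicts, matched against a pattern
# analogous to the one above
toks = [{"LOWER": "hello"}, {"LOWER": ",", "IS_PUNCT": True}, {"LOWER": "world"}]
pattern = [{"LOWER": "hello"}, {"IS_PUNCT": True}, {"LOWER": "world"}]
print(match_pattern(toks, pattern))  # [(0, 3)]
```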
import spacy
Explanation: spaCy is Missing
From the linguistic standpoint, when looking at the analytical output of the NLP pipeline in spaCy, there are some important components missing:
Clause boundary detection
Constituent structure trees (scope relations over constituents and phrases)
Anaphora resolution
Coreference analysis
Temporal reference resolution
...
Clause Boundary Detection
Complex sentences consist of clauses. For precise processing of semantic properties of natural language utterances we need to segment the sentences into clauses. The following sentence:
The man said that the woman claimed that the child broke the toy.
can be broken into the following clauses:
Matrix clause: [ the man said ]
Embedded clause: [ that the woman claimed ]
Embedded clause: [ that the child broke the toy ]
These clauses do not form an ordered list or flat sequence, they in fact are hierarchically organized. The matrix clause verb selects as its complement an embedded finite clause with the complementizer that. The embedded predicate claimed selects the same kind of clausal complement. We express this hierarchical relation in form of embedding in tree representations:
[ the man said [ that the woman claimed [ that the child broke the toy ] ] ]
Or using a graphical representation in form of a tree:
<img src="Embedded_Clauses_1.png" width="60%" height="60%">
The hierarchical relation of sub-clauses is relevant when it comes to semantics. The clause John sold his car can be interpreted as an assertion that describes an event with John as the agent, and the car as the object of a selling event in the past. If the clause is embedded under a matrix clause that contains a sentential negation, the proposition is assumed to NOT be true: [ Mary did not say that [ John sold his car ] ]
It is possible with additional effort to translate the Dependency Trees into clauses and reconstruct the clause hierarchy into a relevant form or data structure. SpaCy does not offer a direct data output of such relations.
One problem still remains, and this is clausal discontinuities. None of the common NLP pipelines, and spaCy in particular, can deal with any kind of discontinuities in any reasonable way. Discontinuities can be observed when syntactic structures are split over the clause or sentence, or when elements occur in a canonically different position, as in the following example:
Which car did John claim that Mary took?
The embedded clause consists of the sequence [ Mary took which car ]. One part of the sequence appears dislocated and precedes the matrix clause in the above example. Simple Dependency Parsers cannot generate any reasonable output that makes it easy to identify and reconstruct the relations of clausal elements in these structures.
Constitutent Structure Trees
Dependency Parse trees are a simplification of relations of elements in the clause. They ignore structural and hierarchical relations in a sentence or clause, as shown in the examples above. Instead the Dependency Parse trees show simple functional relations in the sense of sentential functions like subject or object of a verb.
SpaCy does not output any kind of constituent structure and more detailed relational properties of phrases and more complex structural units in a sentence or clause.
Since many semantic properties are defined or determined in terms of structural relations and hierarchies, that is scope relations, this is more complicated to reconstruct or map from the Dependency Parse trees.
Anaphora Resolution
SpaCy does not offer any anaphora resolution annotation. That is, the referent of a pronoun, as in the following examples, is not annotated in the resulting linguistic data structure:
John saw him.
John said that he saw the house.
Tim sold his house. He moved to Paris.
John saw himself in the mirror.
Knowing the restrictions of pronominal binding (in English for example), we can partially generate the potential or most likely anaphora - antecedent relations. This - however - is not part of the spaCy output.
One problem, however, is that spaCy does not provide parse trees of the constituent structure and clausal hierarchies, which are crucial for the correct analysis of pronominal anaphoric relations.
Coreference Analysis
Some NLP pipelines are capable of providing coreference analyses for constituents in clauses. For example, the two clauses should be analyzed as talking about the same subject:
The CEO of Apple, Tim Cook, decided to apply for a job at Google. Cook said that he is not satisfied with the quality of the iPhones anymore. He prefers the Pixel 2.
The constituents [ the CEO of Apple, Tim Cook ] in the first sentence, [ Cook ] in the second sentence, and [ he ] in the third, should all be tagged as referencing the same entity, that is the one mentioned in the first sentence. SpaCy does not provide such a level of analysis or annotation.
Temporal Reference
For various analysis levels it is essential to identify the time references in a sentence or utterance, for example the time the utterance is made or the time the described event happened.
Certain tenses are expressed as periphrastic constructions, including auxiliaries and main verbs. SpaCy does not provide the relevant information to identify these constructions and tenses.
Using the Dependency Parse Visualizer
More on Dependency Parse trees
End of explanation
from spacy import displacy
Explanation: We can load the visualizer:
End of explanation
nlp = spacy.load("en_core_web_sm")
Explanation: Loading the English NLP pipeline:
End of explanation
#doc = nlp(u'John said yesterday that Mary bought a new car for her older son.')
#doc = nlp(u"Dick ran and Jane danced yesterday.")
#doc = nlp(u"Tim Cook is the CEO of Apple.")
#doc = nlp(u"Born in a small town, she took the midnight train going anywhere.")
doc = nlp(u"John met Peter and Susan called Paul.")
Explanation: Process an input sentence:
End of explanation
displacy.serve(doc, style='dep')
Explanation: If you want to generate a visualization running code outside of the Jupyter notebook, you could use the following code. You should not use this code, if you are running the notebook. Instead, use the function display.render two cells below.
Visualizing the Dependency Parse tree can be achieved by running the following server code and opening up a new tab on the URL http://localhost:5000/. You can shut down the server by clicking on the stop button at the top in the notebook toolbar.
End of explanation
displacy.render(doc, style='dep', jupyter=True, options={"distance": 120})
Explanation: Instead of serving the graph, one can render it directly into a Jupyter Notebook:
End of explanation
text = """Apple decided to fire Tim Cook and hire somebody called John Doe as the new CEO.
They also discussed a merger with Google. In the long run it seems more likely that Apple
will merge with Amazon and Microsoft with Google. The companies will all relocate to
Austin in Texas before the end of the century. John Doe bought a Porsche."""
doc = nlp(text)
displacy.render(doc, style='ent', jupyter=True)
Explanation: In addition to the visualization of the Dependency Trees, we can visualize named entity annotations:
End of explanation
import spacy
Explanation: Vectors
To use vectors in spaCy, you might consider installing the larger models for the particular language. The common module and language packages only come with the small models. The larger models can be installed as described on the spaCy vectors page:
python -m spacy download en_core_web_lg
The large model en_core_web_lg contains more than 1 million unique vectors.
Let us restart all necessary modules again, in particular spaCy:
End of explanation
nlp = spacy.load('en_core_web_lg')
#nlp = spacy.load("en_core_web_sm")
Explanation: We can now import the English NLP pipeline to process some word list. Since the small models in spaCy only include context-sensitive tensors, we should use the downloaded large model for better word vectors. We load the large model as follows:
End of explanation
tokens = nlp(u'dog poodle beagle cat banana apple')
Explanation: We can process a list of words by the pipeline using the nlp object:
End of explanation
for token1 in tokens:
for token2 in tokens:
print(token1, token2, token1.similarity(token2))
Explanation: As described in the spaCy chapter Word Vectors and Semantic Similarity, the resulting elements of Doc, Span, and Token provide a method similarity(), which returns the similarities between words:
End of explanation
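Under the hood, `similarity()` is cosine similarity over the vectors. A self-contained sketch of the computation (pure Python for illustration, not spaCy's implementation):

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors: dot(u, v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0.0 or norm_v == 0.0:
        return 0.0
    return dot / (norm_u * norm_v)

print(cosine_similarity([1.0, 0.0], [2.0, 0.0]))  # 1.0 (same direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```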
tokens = nlp(u'dog cat banana sasquatch')
for token in tokens:
print(token.text, token.has_vector, token.vector_norm, token.is_oov)
Explanation: We can access the vectors of these objects using the vector attribute:
End of explanation
n = 0
print(tokens[n].text, len(tokens[n].vector), tokens[n].vector)
Explanation: The attribute has_vector returns a boolean depending on whether the token has a vector in the model or not. The token sasquatch has no vector. It is also out-of-vocabulary (OOV), as the fourth column shows. Thus, it also has a norm of $0$, that is, it has a length of $0$.
Here the token vector has a length of $300$. We can print out the vector for a token:
End of explanation
tokens = nlp(u'queen king chef')
for token1 in tokens:
for token2 in tokens:
print(token1, token2, token1.similarity(token2))
Explanation: Here just another example of similarities for some famous words:
End of explanation
tokens = nlp(u'labrador')
for token in tokens:
print(token.text, token.has_vector, token.vector_norm, token.is_oov)
Explanation: Similarities in Context
In spaCy parsing, tagging and NER models make use of vector representations of contexts that represent the meaning of words. A text meaning representation is represented as an array of floats, i.e. a tensor, computed during the NLP pipeline processing. With this approach words that have not been seen before can be typed or classified. SpaCy uses a 4-layer convolutional network for the computation of these tensors. In this approach these tensors model a context of four words left and right of any given word.
Let us use the example from the spaCy documentation and check the word labrador:
End of explanation
doc1 = nlp(u"The labrador barked.")
doc2 = nlp(u"The labrador swam.")
doc3 = nlp(u"the labrador people live in canada.")
dog = nlp(u"dog")
count = 0
for doc in [doc1, doc2, doc3]:
lab = doc[1]
count += 1
print(str(count) + ":", lab.similarity(dog))
Explanation: We can now test for the context:
End of explanation
docs = ( nlp(u"Paris is the largest city in France."),
nlp(u"Vilnius is the capital of Lithuania."),
nlp(u"An emu is a large bird.") )
for x in range(len(docs)):
for y in range(len(docs)):
print(x, y, docs[x].similarity(docs[y]))
Explanation: Using this strategy we can compute document or text similarities as well:
End of explanation
docs = [nlp(u"dog bites man"), nlp(u"man bites dog"),
nlp(u"man dog bites"), nlp(u"cat eats mouse")]
for doc in docs:
for other_doc in docs:
print('"' + doc.text + '"', '"' + other_doc.text + '"', doc.similarity(other_doc))
Explanation: We can vary the word order in sentences and compare them:
End of explanation
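The result above follows from how a `Doc` vector is formed: it is the average of the token vectors, so permuting the words leaves it unchanged. A toy sketch with made-up 2-d vectors (an illustration of the averaging, not spaCy's code):

```python
def doc_vector(word_vectors):
    # A Doc's vector in spaCy is the average of its token vectors,
    # so word order cannot change it
    dim = len(word_vectors[0])
    n = len(word_vectors)
    return [sum(vec[i] for vec in word_vectors) / n for i in range(dim)]

# Made-up 2-d vectors standing in for real word vectors
dog, bites, man = [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]
print(doc_vector([dog, bites, man]) == doc_vector([man, bites, dog]))  # True
```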
nlp = spacy.load('en_core_web_lg')
Explanation: Custom Models
Optimization
End of explanation
from __future__ import unicode_literals, print_function
Explanation: Training Models
This example code for training an NER model is based on the training example in spaCy.
We will import some components from the __future__ module. Read its documentation here.
End of explanation
import random
Explanation: We import the random module for pseudo-random number generation:
End of explanation
from pathlib import Path
Explanation: We import the Path object from the pathlib module:
End of explanation
import spacy
Explanation: We import spaCy:
End of explanation
from spacy.util import minibatch, compounding
Explanation: We also import the minibatch and compounding helpers from spacy.util:
End of explanation
TRAIN_DATA = [
("Who is Shaka Khan?", {"entities": [(7, 17, "PERSON")]}),
("I like London and Berlin.", {"entities": [(7, 13, "LOC"), (18, 24, "LOC")]}),
]
Explanation: The training data is formatted as JSON:
End of explanation
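Each entity annotation is a `(start_char, end_char, label)` triple, where slicing the text with those character offsets yields the entity string. A quick self-contained check of the offsets used above:

```python
train_data = [
    ("Who is Shaka Khan?", {"entities": [(7, 17, "PERSON")]}),
    ("I like London and Berlin.", {"entities": [(7, 13, "LOC"), (18, 24, "LOC")]}),
]

# Verify that each (start, end) span slices out the annotated entity
for text, annotations in train_data:
    for start, end, label in annotations["entities"]:
        print(label, repr(text[start:end]))
```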
nlp = spacy.blank("xx") # create blank Language class
ner = nlp.create_pipe("ner")
nlp.add_pipe(ner, last=True)
Explanation: We created a blank 'xx' model:
End of explanation
for _, annotations in TRAIN_DATA:
for ent in annotations.get("entities"):
ner.add_label(ent[2])
Explanation: We add the named entity labels to the NER model:
End of explanation
nlp.begin_training()
Explanation: Assuming that the model is empty and untrained, we reset and initialize the weights randomly using:
End of explanation
pipe_exceptions = ["ner", "trf_wordpiecer", "trf_tok2vec"]
other_pipes = [pipe for pipe in nlp.pipe_names if pipe not in pipe_exceptions]
Explanation: We would not do this, if the model is supposed to be tuned or retrained on new data.
We get all pipe-names in the model that are not our NER related pipes to disable them during training:
End of explanation
with nlp.disable_pipes(*other_pipes): # only train NER
for itn in range(100):
random.shuffle(TRAIN_DATA)
losses = {}
# batch up the examples using spaCy's minibatch
batches = minibatch(TRAIN_DATA, size=compounding(4.0, 32.0, 1.001))
for batch in batches:
texts, annotations = zip(*batch)
nlp.update(
texts, # batch of texts
annotations, # batch of annotations
drop=0.5, # dropout - make it harder to memorise data
losses=losses,
)
print("Losses", losses)
Explanation: We can now disable the other pipes and train just the NER using 100 iterations:
End of explanation
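The `compounding(4.0, 32.0, 1.001)` generator yields batch sizes that start at 4 and grow by a factor of 1.001 per batch, capped at 32; `minibatch` then cuts the shuffled data into chunks of those sizes. A pure-Python sketch of both helpers (an illustration of the behaviour, not spaCy's actual code):

```python
def compounding_sketch(start, stop, compound):
    # Yield start, then multiply by `compound` each step, capped at `stop`
    # (sketch of spacy.util.compounding for start < stop)
    value = float(start)
    while True:
        yield min(value, stop)
        value *= compound

def minibatch_sketch(items, sizes):
    # Cut `items` into consecutive batches whose sizes come from the
    # `sizes` generator (sketch of spacy.util.minibatch)
    i = 0
    while i < len(items):
        size = int(next(sizes))
        yield items[i:i + size]
        i += size

batches = list(minibatch_sketch(list(range(10)), compounding_sketch(4.0, 32.0, 1.001)))
print([len(b) for b in batches])  # [4, 4, 2]
```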
for text, _ in TRAIN_DATA:
doc = nlp(text)
print("Entities", [(ent.text, ent.label_) for ent in doc.ents])
print("Tokens", [(t.text, t.ent_type_, t.ent_iob) for t in doc])
Explanation: We can test the trained model:
End of explanation
output_dir = Path("./models/")
Explanation: We can define the output directory where the model will be saved as the models folder in the directory where the notebook is running:
End of explanation
if not output_dir.exists():
output_dir.mkdir()
nlp.to_disk(output_dir)
Explanation: Save model to output dir:
End of explanation
nlp2 = spacy.load(output_dir)
for text, _ in TRAIN_DATA:
doc = nlp2(text)
print("Entities", [(ent.text, ent.label_) for ent in doc.ents])
print("Tokens", [(t.text, t.ent_type_, t.ent_iob) for t in doc])
Explanation: To make sure everything worked out well, we can test the saved model:
End of explanation |
7,676 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear Support Vector Machines
Classification Loss vs. Hinge Loss vs. Huberized Hinge Loss vs. Square Hinge Loss
Step1: Analytic Expressions
Let $X \in R^{n \times d+1}$ and $y = (y_1,...,y_n)^T \in R^{n+1}$ and $\texttt{loss}(...) \ge 0$
Objective function
Step2: SVM Gradient Descent Usage Example
Step3: SVM Stochastic Gradient Descent Usage Example
Step4: Here we notice that non-stochastic descent walks straight to the solution, but stochastic descent randomly walks around, eventually making it.
Step5: Here we notice that both methods decrease the objective function quickly. However, non-stochastic decreases faster.
Step6: Here we notice that training error is lower than test error, which is to be expected. Stochastic error is higher than non-stochastic error, but they both converge. | Python Code:
%matplotlib nbagg
import matplotlib.pyplot as plt
plt.clf()
plt.cla()
import numpy as np
ax = plt.subplot(1,1,1)
x_plot=np.linspace(-2,2,1000)
y_plot1=x_plot.copy()
y_plot1[x_plot < 0]=1
y_plot1[x_plot == 0]=0
y_plot1[x_plot > 0]=0
plot1 = ax.plot(x_plot,y_plot1, label='Classification Loss')
y_plot2=np.maximum(np.zeros(x_plot.shape),1-x_plot.copy())
plot2 = ax.plot(x_plot,y_plot2, label='Hinge Loss')
y_plot4=np.power(np.maximum(np.zeros(x_plot.shape),1-x_plot.copy()),2)
plot4 = ax.plot(x_plot,y_plot4, label='Square Hinge Loss')
h=.8
y_plot3= -1 * np.ones(x_plot.shape)
y_plot3[x_plot > 1+h]=0
y_plot3[x_plot < 1-h]=1-x_plot[x_plot < 1-h]
y_plot3[y_plot3 == -1]= ((1+h-x_plot[y_plot3 == -1])**2)/(4*h)
plot3 = ax.plot(x_plot,y_plot3, label='Huberized Hinge Loss')
handles, labels = ax.get_legend_handles_labels()
plt.legend(handles, labels)
plt.title('Loss Comparison')
plt.ylabel('Loss')
plt.xlabel('Distance From Decision Boundary')
Explanation: Linear Support Vector Machines
Classification Loss vs. Hinge Loss vs. Huberized Hinge Loss vs. Square Hinge Loss
End of explanation
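The huberized hinge loss plotted above replaces the hinge's kink at $yt = 1$ with a quadratic piece on $[1-h, 1+h]$, so the loss stays continuous at both boundary points. A small self-contained numeric check of that continuity, with `h = 0.8` matching the plot:

```python
def huber_hinge(y, t, h):
    # Huberized hinge loss, as defined in the plot above
    if y * t > 1 + h:
        return 0.0
    if abs(1 - y * t) <= h:
        return (1 + h - y * t) ** 2 / (4 * h)
    return 1 - y * t

h, eps = 0.8, 1e-9
# Continuity at the upper boundary t = 1 + h (with y = 1)
assert abs(huber_hinge(1, 1 + h, h) - huber_hinge(1, 1 + h + eps, h)) < 1e-6
# Continuity at the lower boundary t = 1 - h
assert abs(huber_hinge(1, 1 - h, h) - huber_hinge(1, 1 - h - eps, h)) < 1e-6
print("huber hinge is continuous at both boundaries")
```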
from sklearn.datasets import make_regression
from sklearn.cross_validation import train_test_split
from sklearn.kernel_ridge import KernelRidge
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC
from sklearn.linear_model import SGDClassifier
import matplotlib.pyplot as plt
import sys
import numpy as np
from numpy.linalg import norm
from numpy.random import randint
import random
import math
import time
def loss_hinge(y,t): # const
return max(0,1-(y*t))
def loss_hinge_der(y,t): # const
if 1-(y*t) <= 0:
return 0
return -y
def loss_huber_hinge(y,t,h): # const
if y*t > 1+h:
return 0
if abs(1-(y*t)) <= h:
return pow((1+h-(y*t)),2)/(4*h)
if y*t < 1-h:
return 1 - (y*t)
def loss_huber_hinge_der(y,t,h): # const
if y*t > 1+h:
return 0
if abs(1-(y*t)) <= h:
return 2*(1+h-(y*t))*(-y)/(4*h)
if y*t < 1-h:
return -y
def loss(y,t,loss_type,h): # const
if loss_type=='hinge':
return loss_hinge(y,t)
if loss_type=='modified_huber':
return loss_huber_hinge(y,t,h)
def loss_der(y,t,loss_type,h): # const
if loss_type=='hinge':
return loss_hinge_der(y,t)
if loss_type=='modified_huber':
return loss_huber_hinge_der(y,t,h)
def compute_obj(w,C,X,y,loss_type,h): # const
ret = 0.0
assert len(X)==len(y)
assert len(X[0])==len(w)
for i in range(len(X)):
ret += loss(y[i],np.dot(X[i],w),loss_type,h)
return norm(w)**2 + C*ret/len(X)
def compute_grad(w,C,X,y,loss_type,h): # const
if len(X)==len(w):
grad = 2*w.copy()
for i in range(len(w)):
grad[i] += C*(loss_der(y,np.dot(X,w),loss_type,h)*X[i])
return grad
if len(X)==len(y) and len(X[0])==len(w):
n=len(X)
X[n-1,len(w)-1]
grad = 2*w.copy()
for i in range(len(w)):
loss_sum = 0.0
for j in range(n):
loss_sum += \
loss_der(y[j],np.dot(X[j],w),loss_type,h)*X[j,i]
grad[i] += C/n*loss_sum
return grad
assert False
def numer_grad(w,ep,delta,C,X,y,loss_type,h): # const
return (compute_obj(w+(ep*delta),C,X,y,loss_type,h) \
-compute_obj(w-(ep*delta),C,X,y,loss_type,h))/(2*ep)
def grad_checker(w0,C,X,y,loss_type,h): # const
ep=.0001
delta=0
d=len(w0)
w=[]
for i in range(d):
delta=np.zeros(w0.shape)
delta[i] = 1
w.append(numer_grad(w0,ep,delta,C,X,y,loss_type,h))
return np.asarray(w)
def score(X, y, w): # const
error = 0.0
error_comp = 0.0
for i in range(len(X)):
prediction = np.sign(np.dot(w,X[i]))
if prediction == 1 and y[i] == 1:
error += 1
elif (prediction == -1 or prediction == 0) and y[i] == -1:
error += 1
else:
error_comp += 1
return 'correct',error/len(X), 'failed',error_comp/len(X)
def my_gradient_descent(X,y,w0=None,initial_step_size=.1,max_iter=1000,C=1,
loss_type=None,h=.01,X_test=None, y_test=None,stochastic=False,back_track=True): # const
tol=10**-4 # scikit learn default
    if w0 is None:
w0 = np.zeros(len(X[0]))
if len(X) == 0:
return 'Error'
diff = -1
grad = -1
w = w0
obj_array = []
training_error_array = []
training_error_array.append(score(X, y, w=w))
testing_error_array = []
testing_error_array.append(score(X_test, y_test, w=w))
w_array = []
w_array.append(w.copy())
for i in range(max_iter):
# print 'i',i
obj=compute_obj(w,C,X,y,loss_type,h)
# print 'obj',obj
obj_array.append(obj)
w_p = w
if stochastic:
            random_index = randint(0,len(X))
grad = compute_grad(w,C,X[random_index],y[random_index],loss_type,h)
else:
grad = compute_grad(w,C,X,y,loss_type,h)
assert norm(grad-grad_checker(w,C,X,y,loss_type,h)) < 10**-2
# print 'grad',grad
if norm(grad) < tol:
break
step_size = initial_step_size
if back_track:
while obj < compute_obj(w - (step_size * grad),C,X,y,loss_type,h):
step_size = step_size/2.0
if step_size < .00000001:
break
# print 'step_size',step_size
w += - step_size * grad
# print 'w',w
w_array.append(w.copy())
training_error_array.append(score(X, y, w=w))
testing_error_array.append(score(X_test, y_test, w=w))
if training_error_array[len(training_error_array)-1][1] > 0.99:
break
diff = norm(w-w_p)
if norm(grad) > tol:
print 'Warning: Did not converge.'
return w, w_array, obj_array, training_error_array, testing_error_array
def my_sgd(X,y,w0=None,step_size=.01,max_iter=1000,C=1,loss_type=None,h=.01,X_test=None,
y_test=None,stochastic=False): # const
return my_gradient_descent(X_train, y_train,w0=w0,
loss_type=loss_type,
max_iter=max_iter,
h=h,C=C,X_test=X_test,
y_test=y_test,
stochastic=stochastic)
def my_svm(X_train, y_train,loss_type=None,max_iter=None,h=None,C=None,X_test=None, y_test=None,
stochastic=False): # const
w0=np.zeros(len(X_train[0]))
if stochastic:
w, w_array, obj_array, training_error_array, testing_error_array = my_sgd(X_train, y_train,w0=w0,
loss_type=loss_type,
max_iter=max_iter,
h=h,C=C,X_test=X_test,
y_test=y_test,stochastic=True)
else:
w, w_array, obj_array, training_error_array, testing_error_array = \
my_gradient_descent(X_train, y_train,w0=w0, loss_type=loss_type, max_iter=max_iter, h=h,C=C,
X_test=X_test, y_test=y_test, stochastic=stochastic)
return w, w_array, obj_array, training_error_array, testing_error_array
Explanation: Analytic Expressions
Let $X \in R^{n \times d+1}$ and $y = (y_1,...,y_n)^T \in R^{n+1}$ and $\texttt{loss}(...) \ge 0$
Objective function:
$$F(w) = \|w\|^2 + \frac Cn \|\texttt{loss}(y,Xw)\|_1$$
Clearly,
$$(\vec\nabla F(w))_j = 2w_j + \frac Cn \sum^{n}_{i=1}\frac{d}{d(Xw)_i}\texttt{loss}(y_i,(Xw)_i) \cdot X_{i,j} \quad \texttt{for} ~j = 1,2,...,d+1$$
For Hinge Loss:
$$l_{hinge}(y,t) := \max(0, 1 - yt)$$
Then
$$\frac{d}{dt}l_{hinge}(y,t) := \begin{cases} 0, & \mbox{if } 1-yt \lt 0\
-y, & \mbox{if } 1-yt \gt 0 \end{cases}$$
And
$$F(w) = \|w\|^2 + \frac Cn \sum^n\max(0, 1 - y(Xw))$$
And $\texttt{for j = 1,2,...,d+1}$
$$(\vec\nabla F(w))_j = \begin{cases} 2w_j, & \mbox{if } 1-y(Xw) \lt 0\
2w_j + \frac Cn \sum^{n}_{i=1} -y \cdot X_{i,j}, & \mbox{if } 1-y(Xw) \gt 0 \end{cases}$$
Where $y(Xw) \in R^{n}$<br />
For Square Hinge Loss:
$$l_{square-hinge}(y,t) := \max(0, 1 - yt)^2$$
Then
$$\frac{d}{dt}l_{square-hinge}(y,t) := \begin{cases} 0, & \mbox{if } 1-yt \lt 0\
2(1 - yt)(-y), & \mbox{if } 1-yt \gt 0 \end{cases}$$
And
$$F(w) = \|w\|^2 + \frac Cn \sum^n\max(0, 1 - y(Xw))^2$$
And $\texttt{for j = 1,2,...,d+1}$
$$(\vec\nabla F(w))_j = \begin{cases} 2w_j, & \mbox{if } 1-y(Xw) \le 0\
2w_j + \frac Cn \sum^{n}_{i=1} 2(1 - y(Xw))(-y) \cdot X_{i,j}, & \mbox{if } 1-y(Xw) \gt 0 \end{cases}$$
Where $y*(Xw) \in R^{n}$<br />
For Huberized Hinge Loss:
$$l_{huber-hinge}(y,t) := \begin{cases} 0, & \mbox{if } yt \gt 1+h\
\frac{(1+h-yt)^2}{4h}, & \mbox{if } |1-yt| \le h \
1-yt, & \mbox{if } yt \lt 1-h \end{cases}$$
$$\frac{d}{dt}l_{huber-hinge}(y,t) := \begin{cases} 0, & \mbox{if } yt \gt 1+h\
\frac{2(1+h-yt)(-y)}{4h}, & \mbox{if } |1-yt| \le h \
-y, & \mbox{if } yt \lt 1-h \end{cases}$$
We have continuity in $\frac{d}{dt}l_{huber-hinge}(y,t)$ since the derivatives limits' agree at the critical points;
$$ \lim_{t^+ \rightarrow \frac{1+h}{y} } \frac{d}{dt}l_{huber-hinge}(y,t) = 0$$
$$ \lim_{t^- \rightarrow \frac{1+h}{y} } \frac{d}{dt}l_{huber-hinge}(y,t) = \lim_{t^- \rightarrow \frac{1+h}{y} } \frac{2(1+h-yt)(-y)}{4h} = \lim_{t^- \rightarrow \frac{1+h}{y} } \frac{2(1+h-y\frac{1+h}{y})(-y)}{4h} = 0$$
So
$$ \lim_{t^+ \rightarrow \frac{1+h}{y} } \frac{d}{dt}l_{huber-hinge}(y,t) = \lim_{t^- \rightarrow \frac{1+h}{y} } \frac{d}{dt}l_{huber-hinge}(y,t)$$
And
$$ \lim_{t^+ \rightarrow \frac{1-h}{y} } \frac{d}{dt}l_{huber-hinge}(y,t) = \lim_{t^+ \rightarrow \frac{1-h}{y} } \frac{2(1+h-yt)(-y)}{4h} = \lim_{t^+ \rightarrow \frac{1-h}{y} } \frac{2(1+h-y\frac{1-h}{y})(-y)}{4h} = -y$$
$$\lim_{t^- \rightarrow \frac{1-h}{y} } \frac{d}{dt}l_{huber-hinge}(y,t) = \lim_{t^- \rightarrow \frac{1-h}{y} } -y = -y $$
So
$$ \lim_{t^+ \rightarrow \frac{1-h}{y} } \frac{d}{dt}l_{huber-hinge}(y,t) = \lim_{t^- \rightarrow \frac{1-h}{y} } \frac{d}{dt}l_{huber-hinge}(y,t)$$
Also note for Huberized Hinge Loss:
$$F(w) = \begin{cases} \|w\|^2, & \mbox{if } y(Xw) \gt 1+h\
\|w\|^2 + \frac Cn \sum^n_1 \frac{(1+h-y(Xw))^2}{4h}, & \mbox{if } |1-y(Xw)| \le h \
\|w\|^2 + \frac Cn \sum^n_1 1-y(Xw), & \mbox{if } y(Xw) \lt 1-h \end{cases}$$
Where $y(Xw) \in R^{n}$<br />
And $\texttt{for j = 1,2,...,d+1}$
$$(\vec\nabla F(w))_j = \begin{cases} 2w_j, & \mbox{if } y(Xw) \gt 1+h\
2w_j + \frac Cn \sum^n_1 \frac{2(1+h-y(Xw))(-y)}{4h} \cdot X_{i,j}, & \mbox{if } |1-y(Xw)| \le h \
2w_j + \frac Cn \sum^n_1 -y \cdot X_{i,j}, & \mbox{if } y(Xw) \lt 1-h \end{cases}$$
Where $y(Xw) \in R^{n}$<br />
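A small numeric sketch of the three losses and the huberized derivative above (scalar versions I wrote for illustration; they are separate from the notebook's vectorized implementation):

```python
def hinge_loss(y, t):
    return max(0.0, 1.0 - y * t)

def squared_hinge_loss(y, t):
    return max(0.0, 1.0 - y * t) ** 2

def huberized_hinge_loss(y, t, h=0.5):
    yt = y * t
    if yt > 1 + h:
        return 0.0
    if abs(1 - yt) <= h:
        return (1 + h - yt) ** 2 / (4 * h)
    return 1 - yt

def huberized_hinge_grad(y, t, h=0.5):
    # derivative with respect to t, matching the three cases above
    yt = y * t
    if yt > 1 + h:
        return 0.0
    if abs(1 - yt) <= h:
        return 2 * (1 + h - yt) * (-y) / (4 * h)
    return -y
```

The continuity argument above can be spot-checked numerically: at yt = 1 - h the middle-branch derivative equals -y, and at yt = 1 + h it equals 0.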
SVM Implementation
End of explanation
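The cells below call helper functions my_svm and score from the implementation above. For reference, a minimal sketch of what an accuracy-style score helper could look like (this is my assumption about its behavior, not the notebook's exact code, so it is named score_sketch to avoid shadowing the real helper):

```python
import numpy as np

def score_sketch(X, y, w):
    # fraction of points whose predicted sign(X.dot(w)) matches the {-1, +1} label
    predictions = np.sign(X.dot(w))
    return np.mean(predictions == y)
```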
# Generate data
'''Generate 2 Gaussians samples with the same covariance matrix'''
n, dim = 500, 2
np.random.seed(0)
C = np.array([[0., -0.23], [0.83, .23]])
gap = 1
X = np.r_[np.dot(np.random.randn(n, dim)+gap, C),
np.dot(np.random.randn(n, dim)-gap, C)]
# append constant dimension
X = np.column_stack((X, np.ones(X.shape[0])))
y = np.hstack((-1*np.ones(n), np.ones(n)))
assert len(X[y==-1])==len(y[y==-1]);assert len(X[y==1])==len(y[y==1]);assert len(X)==len(y)
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.5, random_state=20140210)
assert len(X_train)>1;assert len(X_test)>1;assert len(X_train)==len(y_train);assert len(X_test)==len(y_test)
max_iter=1000
C=1.0
for loss_type in ['modified_huber','hinge']:
for h_index in range(1,10):
h=(.1*(1.1**h_index))
print 'parameters: loss_type',loss_type,'h',h
w, w_array, obj_array, training_error_array, testing_error_array = my_svm(X_train, y_train,
loss_type=loss_type,
max_iter=max_iter,h=h,C=C,
X_test=X_test, y_test=y_test,
stochastic=False)
print 'Custom w =',w,' test score = ',score(X_test, y_test, w=w)
clf = SGDClassifier(loss=loss_type, penalty="l2",alpha=1/C, fit_intercept=False)
clf.fit(X_train, y_train); assert clf.intercept_ == 0
print 'SGDClassifier w = ',clf.coef_[0],' test score = ',score(X_test, y_test,
w=clf.coef_[0])
clf = LinearSVC( penalty="l2",C=C, fit_intercept=False); clf.fit(X_train, y_train); assert clf.intercept_ == 0
print 'LinearSVC w = ',clf.coef_[0],' test score = ',score(X_test, y_test, w=clf.coef_[0])
print
print
# break
# break
Explanation: SVM Gradient Descent Usage Example
End of explanation
# Generate data
'''Generate 2 Gaussians samples with the same covariance matrix'''
# n, dim = 2*(10**7), 2  # larger-scale setting, overridden below
n, dim = 500, 2
np.random.seed(0)
C = np.array([[0., -0.23], [0.83, .23]])
gap = 1
X = np.r_[np.dot(np.random.randn(n, dim)+gap, C),
np.dot(np.random.randn(n, dim)-gap, C)]
# append constant dimension
X = np.column_stack((X, np.ones(X.shape[0])))
y = np.hstack((-1*np.ones(n), np.ones(n)))
assert len(X[y==-1])==len(y[y==-1]);assert len(X[y==1])==len(y[y==1]);assert len(X)==len(y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=20140210)
assert len(X_train)>1;assert len(X_test)>1;assert len(X_train)==len(y_train);assert len(X_test)==len(y_test)
max_iter=1000
C=1.0
for loss_type in ['modified_huber','hinge']:
for h_index in range(1,10):
h=(.1*(1.1**h_index))
print 'parameters: loss_type',loss_type,'h',h
w, w_stoch_array, obj_stoch_array, training_error_stoch_array, testing_error_stoch_array = \
my_svm(X_train,
y_train, loss_type=loss_type, max_iter=max_iter,h=h,C=C, X_test=X_test, y_test=y_test,stochastic=True)
print 'Custom w =',w,' test score = ',score(X_test, y_test, w=w)
clf = SGDClassifier(loss=loss_type, penalty="l2",alpha=1/C, fit_intercept=False);clf.fit(X_train, y_train)
assert clf.intercept_ == 0
print 'SGDClassifier w = ',clf.coef_[0],' test score = ',score(X_test, y_test, w=clf.coef_[0])
clf = LinearSVC( penalty="l2",C=C, fit_intercept=False)
clf.fit(X_train, y_train)
assert clf.intercept_ == 0
print 'LinearSVC w = ',clf.coef_[0],' test score = ',score(X_test, y_test, w=clf.coef_[0])
print
print
# break
# break
%matplotlib nbagg
plt.clf()
plt.cla()
ax = plt.subplot(1,1,1)
w_array = np.asarray(w_array)
ax.scatter(w_array[:,0],w_array[:,1],marker='^',label='Non-Stochastic')
w_stoch_array = np.asarray(w_stoch_array)
ax.scatter(w_stoch_array[:,0],w_stoch_array[:,1],marker='*',label='Stochastic')
handles, labels = ax.get_legend_handles_labels()
plt.legend(handles, labels)
plt.title('First Two Dimensions of Hyperplane over iterations')
plt.ylabel('w [1]')
plt.xlabel('w [0]')
Explanation: SVM Stochastic Gradient Descent Usage Example
End of explanation
%matplotlib nbagg
plt.clf()
plt.cla()
ax = plt.subplot(1,1,1)
obj_array = np.asarray(obj_array)
ax.scatter(range(1,len(obj_array)+1),obj_array,marker='^',label='Non-Stochastic')
obj_stoch_array = np.asarray(obj_stoch_array)
ax.scatter(range(1,len(obj_stoch_array)+1),obj_stoch_array,marker='*',label='Stochastic')
handles, labels = ax.get_legend_handles_labels()
plt.legend(handles, labels)
plt.title('Objective Function over iterations')
plt.ylabel('F (w)')
plt.xlabel('Iteration')
Explanation: Here we notice that non-stochastic descent walks straight to the solution, while stochastic descent wanders around randomly before eventually reaching it.
End of explanation
%matplotlib nbagg
plt.clf()
plt.cla()
ax = plt.subplot(1,1,1)
training_error_array = np.asarray(training_error_array)
testing_error_array = np.asarray(testing_error_array)
ax.scatter(range(1,len(training_error_array)+1),training_error_array[:,1],marker='^',label='training error')
ax.scatter(range(1,len(testing_error_array)+1),testing_error_array[:,1],marker='*',label='testing error')
training_error_stoch_array = np.asarray(training_error_stoch_array)
testing_error_stoch_array = np.asarray(testing_error_stoch_array)
ax.scatter(range(1,len(training_error_stoch_array)+1),training_error_stoch_array[:,1],marker='+',
label='stochastic training error')
ax.scatter(range(1,len(testing_error_stoch_array)+1),testing_error_stoch_array[:,1],marker='x',
label='stochastic testing error')
handles, labels = ax.get_legend_handles_labels()
plt.legend(handles, labels)
plt.title('Classification Error over iterations')
plt.ylabel('Classification Error')
plt.xlabel('Iteration')
Explanation: Here we notice that both methods decrease the objective function quickly. However, non-stochastic decreases faster.
End of explanation
%matplotlib nbagg
plt.clf()
plt.cla()
ax = plt.subplot(1,1,1)
# print w
x_plot=[w[0], 0]
y_plot=[w[1], 0]
ax.plot(x_plot,y_plot)
# print clf.coef_[0]
x_plot=[clf.coef_[0][0], 0]
y_plot=[clf.coef_[0][1], 0]
ax.plot(x_plot,y_plot)
ax.scatter((X[y==-1])[:,0],(X[y==-1])[:,1],marker='*')
ax.scatter((X[y==1])[:,0],(X[y==1])[:,1],marker='^')
handles, labels = ax.get_legend_handles_labels()
plt.title('Data, Scikit-learn Hyperplane, and Our Own Hyperplane')
plt.ylabel('y')
plt.xlabel('x')
Explanation: Here we notice that training error is lower than test error, which is to be expected. Stochastic error is higher than non-stochastic error, but both converge.
End of explanation |
7,677 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This is an iPython Notebook!
Step1: Basics
All you need to know about Python is here
Step2: You can assign several variables at once
Step3: There is no "begin-end"! You use indentation to specify blocks. Here is simple IF statement
Step4: Types
Step5: Loops
for
Step6: while
Step7: Enumerate
Step8: Python code style
There is PEP 8 (Python Enhancement Proposal), which contains all wise ideas about Python code style. Let's look at some of them
Step10: String Quotes
PEP 8 quote
Step11: Some tricks
Sum all elements in an array is straightforward
Step12: However, there is no built-in function for multiplication
Step13: , so we have to write our solution. Let's start with straightforward one
Step14: There is another way to implement it. It is to write it in a functional-programming style
Step16: Python is really good for fast prototyping
Let's look at a simple problem (https
Step18: Faster solution for the problem
It is to count LCM (Least common multiple) of all numbers in a range of [1, N]
Step19: Even faster solution
(credits to Stas Minakov)
Step20: Very important references
PEP 8 - Style Guide for Python Code | Python Code:
# you can mix text and code in one place and
# run code from a Web browser
Explanation: This is an IPython Notebook!
End of explanation
a = 10
a
Explanation: Basics
All you need to know about Python is here:
You don't need to specify type of a variable
End of explanation
a, b = 1, 2
a, b
b, a = a, b
a, b
Explanation: You can assign several variables at once:
End of explanation
if a > b:
print("A is greater than B")
else:
print("B is greater than A")
Explanation: There is no "begin-end"! You use indentation to specify blocks. Here is simple IF statement:
End of explanation
# Integer
a = 1
print(a)
# Float
b = 1.0
print(b)
# String
c = "Hello world"
print(c)
# Unicode
d = u"Привет, мир!"
print(d)
# List (array)
e = [1, 2, 3]
print(e[2]) # 3
# Tuple (constant array)
f = (1, 2, 3)
print(f[0]) # 1
# Set
g = {1, 1, 1, 2}
print(g)
# Dictionary (hash table, hash map)
g = {1: 'One', 2: 'Two', 3: 'Three'}
print(g[1]) # 'One'
Explanation: Types
End of explanation
for i in range(10):
print(i)
Explanation: Loops
for
End of explanation
i = 0
while i < 10:
print(i)
i += 1
Explanation: while
End of explanation
items = ['apple', 'banana', 'strawberry', 'watermelon']
for item in items:
print(item)
for i, item in enumerate(items):
print(i, item)
Explanation: Enumerate
End of explanation
# Variable name
my_variable = 1
# Class method and function names
def my_function():
pass
# Constants
MY_CONSTANT = 1
# Class name
class MyClass(object):
    # 'protected' (internal-use) variable - a single leading underscore
    _my_variable = 1
    # 'private' (name-mangled) variable - two leading underscores
    __my_variable = 1
# magic methods
def __init__(self):
self._another_my_variable = 1
Explanation: Python code style
There is PEP 8 (Python Enhancement Proposal), which contains all wise ideas about Python code style. Let's look at some of them:
Naming
End of explanation
'string'
"another string"
"""
Multiline
string
"""
'''
Another
multiline
string
'''
Explanation: String Quotes
PEP 8 quote:
In Python, single-quoted strings and double-quoted strings are the same. PEP 8 does not make a recommendation for this. Pick a rule and stick to it. When a string contains single or double quote characters, however, use the other one to avoid backslashes in the string. It improves readability.
For triple-quoted strings, always use double quote characters to be consistent with the docstring convention in PEP 257.
My rule for single-quoted and double-quoted strings is:
1. Use single-quoted for keywords;
2. Use double-quoted for user text;
3. Use triple-double-quoted for all multiline strings and docstrings.
End of explanation
sum([1,2,3,4,5])
Explanation: Some tricks
Sum all elements in an array is straightforward:
End of explanation
mult([1,2,3,4,5])
Explanation: However, there is no built-in function for multiplication:
End of explanation
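An aside not in the original notebook: since Python 3.8 the standard library does ship a product function, math.prod, so the hand-rolled versions below are mainly an exercise:

```python
from math import prod  # available since Python 3.8

prod([1, 2, 3, 4, 5])  # 120
```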
def mult(array):
result = 1
for item in array:
result *= item
return result
mult([1,2,3,4,5])
Explanation: , so we have to write our solution. Let's start with straightforward one:
End of explanation
from functools import reduce
def mult_functional(array):
return reduce(lambda prev_result, current_item: prev_result * current_item, array, 1)
mult_functional([1,2,3,4,5])
%timeit mult(range(1, 1000))
%timeit mult_functional(range(1, 1000))
Explanation: There is another way to implement it. It is to write it in a functional-programming style:
End of explanation
def get_smallest_divisor_in_range(N):
    """Brute force all numbers from 2 to 1*2*3*4*5*6*...*N
    until we get a number that divides evenly by all numbers
    in the range [1, N].
    Example:
    >>> get_smallest_divisor_in_range(10)
    2520
    """
range_multiplied = mult(range(2, N+1))
for x in range(2, range_multiplied, 2):
for divisor in range(3, N+1):
if x % divisor:
break
else:
break
return x
%time print(get_smallest_divisor_in_range(10))
%time print(get_smallest_divisor_in_range(20))
Explanation: Python is really good for fast prototyping
Let's look at a simple problem (https://projecteuler.net/problem=5):
2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder.
What is the smallest positive number that is evenly divisible by all of the numbers from 1 to 20?
Brute force solution
End of explanation
def get_smallest_divisor_in_range_fast(N):
    """Optimal solution for the problem: compute the
    LCM (least common multiple) of all numbers in
    the range [1, N]."""
prime_divisors = {}
# Loop from 2 to N.
for x in range(2, N+1):
# Find and save all prime divisors of `x`.
for divisor in range(2, int(x**0.5) + 1):
power = 0
# Find the power of the `divisor` in `x`.
while x % divisor == 0:
                x //= divisor  # floor division keeps x an integer in Python 3
power += 1
if power > 0:
# Save the `divisor` with the greatest power into our `prime_divisors` dict (hash-map).
if divisor in prime_divisors:
if prime_divisors[divisor] < power:
prime_divisors[divisor] = power
else:
prime_divisors[divisor] = power
# Stop searching more divisors if `x` is already equals to `1` (all divisors are already found).
if x == 1:
break
else:
# If `x` is prime, we won't find any divisors and
# the above `for` loop will be over without hitting `break`,
# thus we just need to save `x` as prime_divisor in power of 1.
prime_divisors[x] = 1
    # Having all prime divisors in their greatest powers, we multiply them all to get the answer.
least_common_multiple = 1
for divisor, power in prime_divisors.items():
least_common_multiple *= divisor ** power
return least_common_multiple
%time print(get_smallest_divisor_in_range_fast(10))
%time print(get_smallest_divisor_in_range_fast(20))
%time print(get_smallest_divisor_in_range_fast(10000))
Explanation: Faster solution for the problem
It is to count LCM (Least common multiple) of all numbers in a range of [1, N]:
$[a,b]=p_1^{\max(d_1,e_1)}\cdot\dots\cdot p_k^{\max(d_k,e_k)}.$
For example:
$8 = 2^3 \cdot 3^0 \cdot 5^0 \cdot 7^0$
$9 = 2^0 \cdot 3^2 \cdot 5^0 \cdot 7^0$
$21 = 2^0 \cdot 3^1 \cdot 5^0 \cdot 7^1$
$\operatorname{lcm}(8,9,21) = 2^3 \cdot 3^2 \cdot 5^0 \cdot 7^1 = 8 \cdot 9 \cdot 1 \cdot 7 = 504$
End of explanation
try:
from math import gcd
except ImportError: # Python 2.x has `gcd` in `fractions` module instead of `math`
from fractions import gcd
def get_smallest_divisor_in_range_fastest(N):
least_common_multiple = 1
for x in range(2, N + 1):
least_common_multiple = (least_common_multiple * x) // gcd(least_common_multiple, x)
return least_common_multiple
%time print(get_smallest_divisor_in_range_fastest(10))
%time print(get_smallest_divisor_in_range_fastest(20))
%time print(get_smallest_divisor_in_range_fastest(10000))
Explanation: Even faster solution
(credits to Stas Minakov)
End of explanation
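One more aside (assuming Python 3.9+, where math.lcm was added and accepts any number of arguments): the whole problem collapses to a single call.

```python
from math import lcm  # math.lcm is available since Python 3.9

lcm(*range(1, 11))  # 2520, matching the problem statement
lcm(*range(1, 21))  # 232792560
```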
import math
math?
Explanation: Very important references
PEP 8 - Style Guide for Python Code: https://www.python.org/dev/peps/pep-0008/
End of explanation |
7,678 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Grove ADC Example
This example shows how to use the Grove ADC.
A Grove I2C ADC (v1.2) and PYNQ Grove Adapter are required. An analog input is also required. In this example, the Grove slide potentiometer was used.
In the example, the ADC is initialized, a test read is done, and then the sensor is set to log a reading every 100 milliseconds. The ADC can be connected to any Grove peripheral that provides an analog voltage.
1. Using Pmod to Grove Adapter
This example uses the PYNQ Pmod to Grove adapter. The adapter is connected to PMODA, and the grove ADC is connected to group G4 on adapter.
1. Simple ADC read()
Step1: 2. Starting logging once every 100 milliseconds
Step2: 3. Try to change the input signal during the logging.
For example, if using the Grove slide potentiometer, move the slider back and forth (slowly).
Stop the logging whenever done trying to change sensor's value.
Step3: 4. Plot values over time
The voltage values can be logged and displayed.
Step4: 2. Using Arduino Shield
This example uses the PYNQ Arduino shield. The grove ADC can be connected to any of the I2C groups on the shield.
1. Instantiation and read a single value
Step5: 2. Starting logging once every 100 milliseconds
Step6: 3. Try to change the input signal during the logging.
For example, if using the Grove slide potentiometer, move the slider back and forth (slowly).
Stop the logging whenever done trying to change sensor's value.
Step7: 4. Plot values over time
The voltage values can be logged and displayed. | Python Code:
from pynq.overlays.base import BaseOverlay
base = BaseOverlay("base.bit")
from pynq.lib.pmod import Grove_ADC
from pynq.lib.pmod import PMOD_GROVE_G4
grove_adc = Grove_ADC(base.PMODA,PMOD_GROVE_G4)
print("{} V".format(round(grove_adc.read(),4)))
Explanation: Grove ADC Example
This example shows how to use the Grove ADC.
A Grove I2C ADC (v1.2) and PYNQ Grove Adapter are required. An analog input is also required. In this example, the Grove slide potentiometer was used.
In the example, the ADC is initialized, a test read is done, and then the sensor is set to log a reading every 100 milliseconds. The ADC can be connected to any Grove peripheral that provides an analog voltage.
1. Using Pmod to Grove Adapter
This example uses the PYNQ Pmod to Grove adapter. The adapter is connected to PMODA, and the grove ADC is connected to group G4 on adapter.
1. Simple ADC read()
End of explanation
grove_adc.set_log_interval_ms(100)
grove_adc.start_log()
Explanation: 2. Starting logging once every 100 milliseconds
End of explanation
log = grove_adc.get_log()
Explanation: 3. Try to change the input signal during the logging.
For example, if using the Grove slide potentiometer, move the slider back and forth (slowly).
Stop the logging when you are done changing the sensor's value.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(range(len(log)), log, 'ro')
plt.title('Grove ADC Voltage Log')
plt.axis([0, len(log), min(log), max(log)])
plt.show()
Explanation: 4. Plot values over time
The voltage values can be logged and displayed.
End of explanation
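Since the returned log is a plain Python list of voltages, quick summaries need no extra libraries. A sketch with made-up placeholder readings (not real measurements):

```python
log = [1.62, 1.71, 1.69, 1.75, 1.68]  # placeholder values; use the list from get_log()
num_samples = len(log)
mean_v = sum(log) / num_samples
print("samples: {}, mean: {:.3f} V, range: {:.3f}-{:.3f} V".format(
    num_samples, mean_v, min(log), max(log)))
```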
from pynq.lib.arduino import Grove_ADC
from pynq.lib.arduino import ARDUINO_GROVE_I2C
grove_adc = Grove_ADC(base.ARDUINO,ARDUINO_GROVE_I2C)
print("{} V".format(round(grove_adc.read(),4)))
Explanation: 2. Using Arduino Shield
This example uses the PYNQ Arduino shield. The grove ADC can be connected to any of the I2C groups on the shield.
1. Instantiation and read a single value
End of explanation
grove_adc.set_log_interval_ms(100)
grove_adc.start_log()
Explanation: 2. Starting logging once every 100 milliseconds
End of explanation
log = grove_adc.get_log()
Explanation: 3. Try to change the input signal during the logging.
For example, if using the Grove slide potentiometer, move the slider back and forth (slowly).
Stop the logging when you are done changing the sensor's value.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(range(len(log)), log, 'ro')
plt.title('Grove ADC Voltage Log')
plt.axis([0, len(log), min(log), max(log)])
plt.show()
Explanation: 4. Plot values over time
The voltage values can be logged and displayed.
End of explanation |
7,679 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Listwise ranking
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: We can then import all the necessary packages
Step3: We will continue to use the MovieLens 100K dataset. As before, we load the datasets and keep only the user id, movie title, and user rating features for this tutorial. We also do some houskeeping to prepare our vocabularies.
Step4: Data preprocessing
However, we cannot use the MovieLens dataset for list optimization directly. To perform listwise optimization, we need to have access to a list of movies each user has rated, but each example in the MovieLens 100K dataset contains only the rating of a single movie.
To get around this we transform the dataset so that each example contains a user id and a list of movies rated by that user. Some movies in the list will be ranked higher than others; the goal of our model will be to make predictions that match this ordering.
To do this, we use the tfrs.examples.movielens.movielens_to_listwise helper function. It takes the MovieLens 100K dataset and generates a dataset containing list examples as discussed above. The implementation details can be found in the source code.
Step5: We can inspect an example from the training data. The example includes a user id, a list of 10 movie ids, and their ratings by the user.
Step6: Model definition
We will train the same model with three different losses
Step7: Training the models
We can now train each of the three models.
Step8: Mean squared error model
This model is very similar to the model in the basic ranking tutorial. We train the model to minimize the mean squared error between the actual ratings and predicted ratings. Therefore, this loss is computed individually for each movie and the training is pointwise.
Step9: Pairwise hinge loss model
By minimizing the pairwise hinge loss, the model tries to maximize the difference between the model's predictions for a highly rated item and a low rated item
Step10: Listwise model
The ListMLE loss from TensorFlow Ranking expresses list maximum likelihood estimation. To calculate the ListMLE loss, we first use the user ratings to generate an optimal ranking. We then calculate the likelihood of each candidate being out-ranked by any item below it in the optimal ranking using the predicted scores. The model tries to minimize such likelihood to ensure highly rated candidates are not out-ranked by low rated candidates. You can learn more about the details of ListMLE in section 2.2 of the paper Position-aware ListMLE
Step11: Comparing the models | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
!pip install -q tensorflow-ranking
Explanation: Listwise ranking
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/recommenders/examples/list_optimization"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/recommenders/blob/main/docs/examples/list_optimization.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/recommenders/blob/main/docs/examples/list_optimization.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/recommenders/docs/examples/list_optimization.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In the basic ranking tutorial, we trained a model that can predict ratings for user/movie pairs. The model was trained to minimize the mean squared error of predicted ratings.
However, optimizing the model's predictions on individual movies is not necessarily the best method for training ranking models. We do not need ranking models to predict scores with great accuracy. Instead, we care more about the ability of the model to generate an ordered list of items that matches the user's preference ordering.
Instead of optimizing the model's predictions on individual query/item pairs, we can optimize the model's ranking of a list as a whole. This method is called listwise ranking.
In this tutorial, we will use TensorFlow Recommenders to build listwise ranking models. To do so, we will make use of ranking losses and metrics provided by TensorFlow Ranking, a TensorFlow package that focuses on learning to rank.
Preliminaries
If TensorFlow Ranking is not available in your runtime environment, you can install it using pip:
End of explanation
import pprint
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_ranking as tfr
import tensorflow_recommenders as tfrs
Explanation: We can then import all the necessary packages:
End of explanation
ratings = tfds.load("movielens/100k-ratings", split="train")
movies = tfds.load("movielens/100k-movies", split="train")
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"],
"user_rating": x["user_rating"],
})
movies = movies.map(lambda x: x["movie_title"])
unique_movie_titles = np.unique(np.concatenate(list(movies.batch(1000))))
unique_user_ids = np.unique(np.concatenate(list(ratings.batch(1_000).map(
lambda x: x["user_id"]))))
Explanation: We will continue to use the MovieLens 100K dataset. As before, we load the datasets and keep only the user id, movie title, and user rating features for this tutorial. We also do some housekeeping to prepare our vocabularies.
End of explanation
tf.random.set_seed(42)
# Split between train and tests sets, as before.
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)
train = shuffled.take(80_000)
test = shuffled.skip(80_000).take(20_000)
# We sample 50 lists for each user for the training data. For each list we
# sample 5 movies from the movies the user rated.
train = tfrs.examples.movielens.sample_listwise(
train,
num_list_per_user=50,
num_examples_per_list=5,
seed=42
)
test = tfrs.examples.movielens.sample_listwise(
test,
num_list_per_user=1,
num_examples_per_list=5,
seed=42
)
Explanation: Data preprocessing
However, we cannot use the MovieLens dataset for list optimization directly. To perform listwise optimization, we need to have access to a list of movies each user has rated, but each example in the MovieLens 100K dataset contains only the rating of a single movie.
To get around this we transform the dataset so that each example contains a user id and a list of movies rated by that user. Some movies in the list will be ranked higher than others; the goal of our model will be to make predictions that match this ordering.
To do this, we use the tfrs.examples.movielens.sample_listwise helper function. It takes the MovieLens 100K dataset and generates a dataset containing list examples as discussed above. The implementation details can be found in the source code.
End of explanation
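To picture what the transformation produces, here is a toy in-memory version written for illustration (the real sample_listwise operates on tf.data datasets; the parameter names are borrowed from the call above, everything else is my own sketch):

```python
import random
from collections import defaultdict

def to_listwise_sketch(triples, num_list_per_user=2, num_examples_per_list=2, seed=42):
    rng = random.Random(seed)
    by_user = defaultdict(list)
    for user_id, title, rating in triples:
        by_user[user_id].append((title, rating))
    examples = []
    for user_id, rated in by_user.items():
        for _ in range(num_list_per_user):
            chosen = rng.sample(rated, num_examples_per_list)
            examples.append({
                "user_id": user_id,
                "movie_title": [t for t, _ in chosen],
                "user_rating": [r for _, r in chosen],
            })
    return examples

triples = [
    ("42", "Speed (1994)", 4.0),
    ("42", "Forrest Gump (1994)", 5.0),
    ("42", "Twister (1996)", 3.0),
    ("7", "Speed (1994)", 2.0),
    ("7", "Twister (1996)", 1.0),
]
examples = to_listwise_sketch(triples)
```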
for example in train.take(1):
pprint.pprint(example)
Explanation: We can inspect an example from the training data. The example includes a user id, a list of 5 movie ids, and their ratings by the user.
End of explanation
class RankingModel(tfrs.Model):
def __init__(self, loss):
super().__init__()
embedding_dimension = 32
# Compute embeddings for users.
self.user_embeddings = tf.keras.Sequential([
tf.keras.layers.StringLookup(
vocabulary=unique_user_ids),
tf.keras.layers.Embedding(len(unique_user_ids) + 2, embedding_dimension)
])
# Compute embeddings for movies.
self.movie_embeddings = tf.keras.Sequential([
tf.keras.layers.StringLookup(
vocabulary=unique_movie_titles),
tf.keras.layers.Embedding(len(unique_movie_titles) + 2, embedding_dimension)
])
# Compute predictions.
self.score_model = tf.keras.Sequential([
# Learn multiple dense layers.
tf.keras.layers.Dense(256, activation="relu"),
tf.keras.layers.Dense(64, activation="relu"),
# Make rating predictions in the final layer.
tf.keras.layers.Dense(1)
])
self.task = tfrs.tasks.Ranking(
loss=loss,
metrics=[
tfr.keras.metrics.NDCGMetric(name="ndcg_metric"),
tf.keras.metrics.RootMeanSquaredError()
]
)
def call(self, features):
# We first convert the id features into embeddings.
# User embeddings are a [batch_size, embedding_dim] tensor.
user_embeddings = self.user_embeddings(features["user_id"])
# Movie embeddings are a [batch_size, num_movies_in_list, embedding_dim]
# tensor.
movie_embeddings = self.movie_embeddings(features["movie_title"])
        # We want to concatenate user embeddings with movie embeddings to pass
# them into the ranking model. To do so, we need to reshape the user
# embeddings to match the shape of movie embeddings.
list_length = features["movie_title"].shape[1]
user_embedding_repeated = tf.repeat(
tf.expand_dims(user_embeddings, 1), [list_length], axis=1)
# Once reshaped, we concatenate and pass into the dense layers to generate
# predictions.
concatenated_embeddings = tf.concat(
[user_embedding_repeated, movie_embeddings], 2)
return self.score_model(concatenated_embeddings)
def compute_loss(self, features, training=False):
labels = features.pop("user_rating")
scores = self(features)
return self.task(
labels=labels,
predictions=tf.squeeze(scores, axis=-1),
)
Explanation: Model definition
We will train the same model with three different losses:
mean squared error,
pairwise hinge loss, and
a listwise ListMLE loss.
These three losses correspond to pointwise, pairwise, and listwise optimization.
To evaluate the model we use normalized discounted cumulative gain (NDCG). NDCG measures a predicted ranking by taking a weighted sum of the actual rating of each candidate. The ratings of movies that are ranked lower by the model would be discounted more. As a result, a good model that ranks highly-rated movies on top would have a high NDCG result. Since this metric takes the ranked position of each candidate into account, it is a listwise metric.
End of explanation
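To make the metric concrete, here is a toy scalar version of one common NDCG variant (an illustration only; tfr.keras.metrics.NDCGMetric supports configurable gain and rank-discount functions and batched inputs):

```python
import math

def dcg(ratings_in_ranked_order):
    # items placed lower in the ranking (larger index) are discounted more
    return sum(r / math.log2(pos + 2) for pos, r in enumerate(ratings_in_ranked_order))

def ndcg(ratings_in_ranked_order):
    ideal = sorted(ratings_in_ranked_order, reverse=True)
    return dcg(ratings_in_ranked_order) / dcg(ideal)
```

Putting the highest-rated items first yields 1.0; any other order scores strictly less.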
epochs = 30
cached_train = train.shuffle(100_000).batch(8192).cache()
cached_test = test.batch(4096).cache()
Explanation: Training the models
We can now train each of the three models.
End of explanation
mse_model = RankingModel(tf.keras.losses.MeanSquaredError())
mse_model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
mse_model.fit(cached_train, epochs=epochs, verbose=False)
Explanation: Mean squared error model
This model is very similar to the model in the basic ranking tutorial. We train the model to minimize the mean squared error between the actual ratings and predicted ratings. Therefore, this loss is computed individually for each movie and the training is pointwise.
End of explanation
hinge_model = RankingModel(tfr.keras.losses.PairwiseHingeLoss())
hinge_model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
hinge_model.fit(cached_train, epochs=epochs, verbose=False)
Explanation: Pairwise hinge loss model
By minimizing the pairwise hinge loss, the model tries to maximize the difference between the model's predictions for a highly rated item and a low-rated item: the bigger that difference is, the lower the model loss. However, once the difference is large enough, the loss becomes zero, stopping the model from further optimizing this particular pair and letting it focus on other pairs that are incorrectly ranked.
This loss is not computed for individual movies, but rather for pairs of movies. Hence the training using this loss is pairwise.
End of explanation
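A toy scalar version of this idea, for intuition only (tfr.keras.losses.PairwiseHingeLoss is the real implementation, with batching, masking and weighting):

```python
def pairwise_hinge_sketch(scores, labels):
    # for every pair where item i is rated above item j, add
    # max(0, 1 - (score_i - score_j)); the term hits zero once the margin exceeds 1
    loss = 0.0
    for s_i, l_i in zip(scores, labels):
        for s_j, l_j in zip(scores, labels):
            if l_i > l_j:
                loss += max(0.0, 1.0 - (s_i - s_j))
    return loss
```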
listwise_model = RankingModel(tfr.keras.losses.ListMLELoss())
listwise_model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
listwise_model.fit(cached_train, epochs=epochs, verbose=False)
Explanation: Listwise model
The ListMLE loss from TensorFlow Ranking expresses list maximum likelihood estimation. To calculate the ListMLE loss, we first use the user ratings to generate an optimal ranking. We then calculate the likelihood of each candidate being out-ranked by any item below it in the optimal ranking using the predicted scores. The model tries to minimize such likelihood to ensure highly rated candidates are not out-ranked by low rated candidates. You can learn more about the details of ListMLE in section 2.2 of the paper Position-aware ListMLE: A Sequential Learning Process.
Note that since the likelihood is computed with respect to a candidate and all candidates below it in the optimal ranking, the loss is not pairwise but listwise. Hence the training uses list optimization.
End of explanation
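The list likelihood described above can be sketched as follows. This is a hedged NumPy illustration of the Plackett-Luce formulation behind ListMLE, not TensorFlow Ranking's actual code: sort items by label to get the optimal ranking, then accumulate the negative log-likelihood of that ordering under the predicted scores.

```python
import numpy as np

def listmle_loss(labels, scores):
    # Optimal ranking: items sorted by descending user rating.
    order = np.argsort(-np.asarray(labels, dtype=float))
    s = np.asarray(scores, dtype=float)[order]
    # Negative log-likelihood that each item out-ranks everything below it.
    loss = 0.0
    for k in range(len(s)):
        loss += np.log(np.sum(np.exp(s[k:]))) - s[k]
    return loss
```

With equal scores the model is indifferent between two items, so the loss for a two-item list is log 2; scoring the highly rated item higher drives the loss down, which is exactly the listwise behavior described above.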
mse_model_result = mse_model.evaluate(cached_test, return_dict=True)
print("NDCG of the MSE Model: {:.4f}".format(mse_model_result["ndcg_metric"]))
hinge_model_result = hinge_model.evaluate(cached_test, return_dict=True)
print("NDCG of the pairwise hinge loss model: {:.4f}".format(hinge_model_result["ndcg_metric"]))
listwise_model_result = listwise_model.evaluate(cached_test, return_dict=True)
print("NDCG of the ListMLE model: {:.4f}".format(listwise_model_result["ndcg_metric"]))
Explanation: Comparing the models
End of explanation |
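For reference, the NDCG metric used in this comparison can be sketched as follows (a hedged NumPy illustration using the plain-gain variant, not tfr's implementation): DCG discounts each relevance by log2(rank + 1), normalized by the DCG of the ideal ordering.

```python
import numpy as np

def ndcg(relevances_in_ranked_order):
    rel = np.asarray(relevances_in_ranked_order, dtype=float)
    discounts = 1.0 / np.log2(np.arange(2, len(rel) + 2))
    dcg = np.sum(rel * discounts)
    ideal = np.sum(np.sort(rel)[::-1] * discounts)
    return dcg / ideal

print(ndcg([3, 2, 1]))  # 1.0 (already ideally ordered)
```

Any deviation from the ideal ordering pushes the score below 1.0, which is why a better-ranked test set yields a higher NDCG.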
Description:
深入MNIST
TensorFlow is a very powerful library for doing large-scale numerical computation. One of the tasks at which it excels is implementing and training deep neural networks.
In this tutorial we will learn the basic steps of building a TensorFlow model, and use those steps to build a deep convolutional neural network for MNIST.
This tutorial assumes that you are already familiar with neural networks and the MNIST dataset. If you aren't, check out the beginners' guide.
About this tutorial
This tutorial first explains the code in mnist_softmax.py, a simple application of a TensorFlow model. It then shows some ways to improve the accuracy.
You can run the code in this tutorial, or simply read through it.
This tutorial will:
Create a softmax regression model that recognizes digits from input MNIST images, and train it with TensorFlow by looking at hundreds and thousands of examples (running our first TensorFlow session)
Check the model's accuracy with test data
Build, train, and test a multilayer convolutional neural network to improve the accuracy
Setup
Before we create our model, we will first load the MNIST dataset, and then start a TensorFlow session.
Load MNIST data
For your convenience, we have prepared a script that automatically downloads and imports the MNIST dataset. It creates a directory named MNIST_data to store the data.
Step1: Here, mnist is a lightweight class which stores the training, validation, and test sets as NumPy arrays. It also provides a function for iterating through data minibatches, which we will use below.
Run a TensorFlow InteractiveSession
TensorFlow relies on a highly efficient C++ backend to do its computation. The connection to this backend is called a session. The common usage for TensorFlow programs is to first create a graph and then launch it in a session.
Here we instead use the convenient InteractiveSession class, which makes TensorFlow more flexible about how you structure your code. It allows you to interleave operations which build a computation graph with ones that run the graph. This is particularly convenient when working in interactive contexts like IPython. If you are not using an InteractiveSession, then you should build the entire computation graph before starting a session and launching the graph.
Step2: Computation graph
To do efficient numerical computing in Python, we typically use libraries like NumPy, which do expensive operations such as matrix multiplication outside Python, using highly efficient code implemented in another language.
Unfortunately, there can still be a lot of overhead from switching back to Python for every operation. This overhead is especially bad if you want to run computations on GPUs or in a distributed manner, where a large part of the cost can come from transferring the data.
TensorFlow also does its heavy lifting outside Python, but it takes things a step further to avoid this overhead. Instead of running a single expensive operation independently from Python, TensorFlow lets us describe a graph of interacting operations that run entirely outside Python. This approach is similar to that used in Theano or Torch.
The role of the Python code is therefore to build this external computation graph, and to dictate which parts of the computation graph should be run. See the Computation Graph section of Basic Usage for more detail.
Build a Softmax Regression Model
In this section we will build a softmax regression model with a single linear layer. In the next section, we will extend this to the case of softmax regression with a multilayer convolutional network.
Placeholders
We start building the computation graph by creating nodes for the input images and target output classes.
Step3: Here x and y_ aren't specific values. Rather, they are each a placeholder: a value that we can input when we ask TensorFlow to run a computation.
The input images x consist of a 2d tensor of floating point numbers. Here we assign it a shape of [None, 784], where 784 is the dimensionality of a single flattened MNIST image, and None indicates that the first dimension, corresponding to the batch size, can be of any size. The target output classes y_ also consist of a 2d tensor, where each row is a one-hot 10-dimensional vector indicating which digit class the corresponding MNIST image belongs to.
The shape argument to placeholder is optional, but with it TensorFlow is able to automatically catch bugs stemming from inconsistent tensor shapes.
Variables
We now define the weights W and biases b for our model. We could imagine treating these like additional inputs, but TensorFlow has an even better way to handle them: variables. A variable is a value that lives in TensorFlow's computation graph. It can be used and even modified by the computation. In machine learning applications, the model parameters are generally represented as variables.
Step4: We pass the initial value for each parameter in the call to tf.Variable. In this case, we initialize both W and b as tensors full of zeros. W is a 784x10 matrix (because we have 784 input features and 10 outputs) and b is a 10-dimensional vector (because we have 10 classes).
Before variables can be used within a session, they must be initialized using that session. This step takes the initial values (in this case tensors full of zeros) that have already been specified, and assigns them to each variable. This can be done for all variables at once.
Step5: Predicted class and loss function
We can now implement our regression model. It only takes one line! We multiply the vectorized input images x by the weight matrix W and add the bias b.
Step6: We can specify a loss function to indicate how bad the model's prediction was on a single example; we try to minimize it while training across all the examples. Here, our loss function is the cross-entropy between the target class and the predicted class. As in the beginners' tutorial, we use the stable formulation:
Step7: Note that tf.nn.softmax_cross_entropy_with_logits internally applies the softmax to the model's unnormalized prediction and sums across all classes, and tf.reduce_mean takes the average over these sums.
Train the model
Now that we have defined our model and training loss function, it is straightforward to train with TensorFlow. Because TensorFlow knows the entire computation graph, it can use automatic differentiation to find the gradients of the loss with respect to each of the variables. TensorFlow has a variety of built-in optimization algorithms. For this example, we will use steepest gradient descent, with a step length of 0.5, to descend the cross entropy.
Step8: What TensorFlow actually did in that single line was to add new operations to the computation graph. These included ones to compute gradients, compute parameter update steps, and apply update steps to the parameters.
The returned operation train_step, when run, applies the gradient descent updates to the parameters. Training the model can therefore be accomplished by repeatedly running train_step.
Step9: Each training iteration we load 100 training examples. We then run the train_step operation, using feed_dict to replace the placeholder tensors x and y_ with the training examples.
Note that you can replace any tensor in your computation graph using feed_dict; it's not restricted to just placeholders.
Evaluate the model
How well did our model do?
First we'll figure out where we predicted the correct label. tf.argmax is an extremely useful function which gives you the index of the highest entry in a tensor along some axis. Since the label vectors consist of 0s and 1s, the index of the maximal value 1 is exactly the class label. For example, tf.argmax(y,1) is the label our model thinks is most likely for each input x, while tf.argmax(y_,1) is the true label. We can use tf.equal to check whether our prediction matches the truth (equal index positions indicate a match).
Step10: That gives us a list of booleans. To determine what fraction are correct, we cast to floating point numbers representing right/wrong and then take the mean. For example, [True, False, True, True] would become [1,0,1,1], which averages to 0.75.
Step11: Finally, we can evaluate our accuracy on the test data. This should be about 92%.
Step12: Build a Multilayer Convolutional Network
Getting only about 92% accuracy on MNIST is bad. In this section we will improve on that with a slightly more sophisticated model: a convolutional neural network. This will get us to around 99.2% accuracy. That is not state of the art, but it is respectable.
Weight Initialization
To create this model, we're going to need to create a lot of weights and biases. One should generally initialize weights with a small amount of noise for symmetry breaking, and to prevent zero gradients. Since we're using [ReLU](https
Step13: Convolution and Pooling
TensorFlow gives us a lot of flexibility in convolution and pooling operations. How do we handle the boundaries? What is our stride size? In this example, we're always going to use the vanilla version. Our convolutions use a stride of one and are zero padded so that the output is the same size as the input. Our pooling is plain old max pooling over 2x2 blocks. To keep our code cleaner, let's abstract those operations into functions.
Step14: First Convolutional Layer
We can now implement our first layer. It consists of a convolution followed by max pooling. The convolution computes 32 features for each 5x5 patch. Its weight tensor has a shape of [5, 5, 1, 32]: the first two dimensions are the patch size, the next is the number of input channels, and the last is the number of output channels. There is also a bias for each output channel.
Step15: To apply the layer, we first reshape x into a 4d tensor, with the second and third dimensions corresponding to image width and height, and the final dimension corresponding to the number of color channels (1 here because these are grayscale images; it would be 3 for RGB color images).
Step16: We then convolve x_image with the weight tensor, add the bias, apply the ReLU activation function, and finally max pool. The max_pool_2x2 method will reduce the image size to 14x14.
Step17: Second Convolutional Layer
In order to build a deep network, we stack several layers of this type. The second layer will have 64 features for each 5x5 patch.
Step18: Densely Connected Layer
Now that the image size has been reduced to 7x7, we add a fully-connected layer with 1024 neurons to allow processing on the entire image. We reshape the tensor from the pooling layer into a batch of vectors, multiply by a weight matrix, add a bias, and apply a ReLU.
Step19: Dropout
To reduce overfitting, we apply dropout before the readout layer. We create a placeholder for the probability that a neuron's output is kept during dropout. This allows us to turn dropout on during training, and turn it off during testing. TensorFlow's tf.nn.dropout op, in addition to masking neuron outputs, also automatically handles scaling them, so dropout just works without any additional scaling.<sup>1</sup>
1
Step20: Readout Layer
Finally, we add a softmax layer, just like for the one-layer softmax regression above.
Step21: Train and Evaluate the Model
How well does this model do? To train and evaluate it we use code that is nearly identical to that for the simple one-layer SoftMax network above, except that we use the more sophisticated ADAM optimizer for the gradient descent, include the additional parameter keep_prob in feed_dict to control the dropout rate, and log progress every 100th iteration.
The differences are:
We replace the steepest gradient descent optimizer with the more sophisticated ADAM (adaptive moment estimation) optimizer
We include the additional parameter keep_prob in feed_dict to control the dropout rate
We add logging to every 100th iteration in the training process
你可以随意运行这段代码,但它有20,000次迭代可能要运行一段时间(可能超过半小时)。 | Python Code:
import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
Explanation: Deep MNIST
TensorFlow is a very powerful library for doing large-scale numerical computation. One of the tasks at which it excels is implementing and training deep neural networks.
In this tutorial we will learn the basic steps of building a TensorFlow model, and use those steps to build a deep convolutional neural network for MNIST.
This tutorial assumes that you are already familiar with neural networks and the MNIST dataset. If you aren't, check out the beginners' guide.
About this tutorial
This tutorial first explains the code in mnist_softmax.py, a simple application of a TensorFlow model. It then shows some ways to improve the accuracy.
You can run the code in this tutorial, or simply read through it.
This tutorial will:
Create a softmax regression model that recognizes digits from input MNIST images, and train it with TensorFlow by looking at hundreds and thousands of examples (running our first TensorFlow session)
Check the model's accuracy with test data
Build, train, and test a multilayer convolutional neural network to improve the accuracy
Setup
Before we create our model, we will first load the MNIST dataset, and then start a TensorFlow session.
Load MNIST data
For your convenience, we have prepared a script that automatically downloads and imports the MNIST dataset. It creates a directory named MNIST_data to store the data.
End of explanation
import tensorflow as tf
sess = tf.InteractiveSession()
Explanation: Here, mnist is a lightweight class which stores the training, validation, and test sets as NumPy arrays. It also provides a function for iterating through data minibatches, which we will use below.
Run a TensorFlow InteractiveSession
TensorFlow relies on a highly efficient C++ backend to do its computation. The connection to this backend is called a session. The common usage for TensorFlow programs is to first create a graph and then launch it in a session.
Here we instead use the convenient InteractiveSession class, which makes TensorFlow more flexible about how you structure your code. It allows you to interleave operations which build a computation graph with ones that run the graph. This is particularly convenient when working in interactive contexts like IPython. If you are not using an InteractiveSession, then you should build the entire computation graph before starting a session and launching the graph.
End of explanation
x = tf.placeholder("float", shape=[None, 784])
y_ = tf.placeholder("float", shape=[None, 10])
Explanation: Computation graph
To do efficient numerical computing in Python, we typically use libraries like NumPy, which do expensive operations such as matrix multiplication outside Python, using highly efficient code implemented in another language.
Unfortunately, there can still be a lot of overhead from switching back to Python for every operation. This overhead is especially bad if you want to run computations on GPUs or in a distributed manner, where a large part of the cost can come from transferring the data.
TensorFlow also does its heavy lifting outside Python, but it takes things a step further to avoid this overhead. Instead of running a single expensive operation independently from Python, TensorFlow lets us describe a graph of interacting operations that run entirely outside Python. This approach is similar to that used in Theano or Torch.
The role of the Python code is therefore to build this external computation graph, and to dictate which parts of the computation graph should be run. See the Computation Graph section of Basic Usage for more detail.
Build a Softmax Regression Model
In this section we will build a softmax regression model with a single linear layer. In the next section, we will extend this to the case of softmax regression with a multilayer convolutional network.
Placeholders
We start building the computation graph by creating nodes for the input images and target output classes.
End of explanation
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
Explanation: Here x and y_ aren't specific values. Rather, they are each a placeholder: a value that we can input when we ask TensorFlow to run a computation.
The input images x consist of a 2d tensor of floating point numbers. Here we assign it a shape of [None, 784], where 784 is the dimensionality of a single flattened MNIST image, and None indicates that the first dimension, corresponding to the batch size, can be of any size. The target output classes y_ also consist of a 2d tensor, where each row is a one-hot 10-dimensional vector indicating which digit class the corresponding MNIST image belongs to.
The shape argument to placeholder is optional, but with it TensorFlow is able to automatically catch bugs stemming from inconsistent tensor shapes.
Variables
We now define the weights W and biases b for our model. We could imagine treating these like additional inputs, but TensorFlow has an even better way to handle them: variables. A variable is a value that lives in TensorFlow's computation graph. It can be used and even modified by the computation. In machine learning applications, the model parameters are generally represented as variables.
End of explanation
sess.run(tf.global_variables_initializer())
Explanation: We pass the initial value for each parameter in the call to tf.Variable. In this case, we initialize both W and b as tensors full of zeros. W is a 784x10 matrix (because we have 784 input features and 10 outputs) and b is a 10-dimensional vector (because we have 10 classes).
Before variables can be used within a session, they must be initialized using that session. This step takes the initial values (in this case tensors full of zeros) that have already been specified, and assigns them to each variable. This can be done for all variables at once.
End of explanation
y = tf.matmul(x,W) + b
Explanation: Predicted class and loss function
We can now implement our regression model. It only takes one line! We multiply the vectorized input images x by the weight matrix W and add the bias b.
End of explanation
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
Explanation: We can specify a loss function to indicate how bad the model's prediction was on a single example; we try to minimize it while training across all the examples. Here, our loss function is the cross-entropy between the target class and the predicted class. As in the beginners' tutorial, we use the stable formulation:
End of explanation
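To see why the stable formulation matters, here is a hedged NumPy sketch of the log-sum-exp trick that functions like tf.nn.softmax_cross_entropy_with_logits rely on (an illustration of the idea, not TensorFlow's actual code): subtracting the maximum logit before exponentiating avoids overflow without changing the result.

```python
import numpy as np

def stable_softmax_cross_entropy(logits, one_hot_labels):
    logits = np.asarray(logits, dtype=float)
    # log-softmax via the log-sum-exp trick: shift by the max logit first.
    shifted = logits - np.max(logits)
    log_softmax = shifted - np.log(np.sum(np.exp(shifted)))
    return -float(np.sum(np.asarray(one_hot_labels) * log_softmax))

# Naively computing exp(1000.0) would overflow; the shifted version is fine.
print(stable_softmax_cross_entropy([1000.0, 0.0], [1, 0]))  # essentially zero loss
```

A naive implementation that exponentiated the raw logits would produce inf or nan here, which is exactly the failure mode the stable equation guards against.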
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
Explanation: Note that tf.nn.softmax_cross_entropy_with_logits internally applies the softmax to the model's unnormalized prediction and sums across all classes, and tf.reduce_mean takes the average over these sums.
Train the model
Now that we have defined our model and training loss function, it is straightforward to train with TensorFlow. Because TensorFlow knows the entire computation graph, it can use automatic differentiation to find the gradients of the loss with respect to each of the variables. TensorFlow has a variety of built-in optimization algorithms. For this example, we will use steepest gradient descent, with a step length of 0.5, to descend the cross entropy.
End of explanation
for _ in range(1000):
batch = mnist.train.next_batch(100)
train_step.run(feed_dict={x: batch[0], y_: batch[1]})
Explanation: What TensorFlow actually did in that single line was to add new operations to the computation graph. These included ones to compute gradients, compute parameter update steps, and apply update steps to the parameters.
The returned operation train_step, when run, applies the gradient descent updates to the parameters. Training the model can therefore be accomplished by repeatedly running train_step.
End of explanation
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
Explanation: Each training iteration we load 100 training examples. We then run the train_step operation, using feed_dict to replace the placeholder tensors x and y_ with the training examples.
Note that you can replace any tensor in your computation graph using feed_dict; it's not restricted to just placeholders.
Evaluate the model
How well did our model do?
First we'll figure out where we predicted the correct label. tf.argmax is an extremely useful function which gives you the index of the highest entry in a tensor along some axis. Since the label vectors consist of 0s and 1s, the index of the maximal value 1 is exactly the class label. For example, tf.argmax(y,1) is the label our model thinks is most likely for each input x, while tf.argmax(y_,1) is the true label. We can use tf.equal to check whether our prediction matches the truth (equal index positions indicate a match).
End of explanation
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
Explanation: That gives us a list of booleans. To determine what fraction are correct, we cast to floating point numbers representing right/wrong and then take the mean. For example, [True, False, True, True] would become [1,0,1,1], which averages to 0.75.
End of explanation
print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
Explanation: Finally, we can evaluate our accuracy on the test data. This should be about 92%.
End of explanation
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
Explanation: Build a Multilayer Convolutional Network
Getting only about 92% accuracy on MNIST is bad. In this section we will improve on that with a slightly more sophisticated model: a convolutional neural network. This will get us to around 99.2% accuracy. That is not state of the art, but it is respectable.
Weight Initialization
To create this model, we're going to need to create a lot of weights and biases. One should generally initialize weights with a small amount of noise for symmetry breaking, and to prevent zero gradients. Since we're using [ReLU](https://en.wikipedia.org/wiki/Rectifier_(neural_networks) neurons, it is also good practice to initialize the biases with a slightly positive value, to avoid "dead neurons" whose output is always zero. Instead of doing this repeatedly while we build the model, let's create two handy functions to do it for us.
End of explanation
def conv2d(x, W):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME')
Explanation: Convolution and Pooling
TensorFlow gives us a lot of flexibility in convolution and pooling operations. How do we handle the boundaries? What is our stride size? In this example, we're always going to use the vanilla version. Our convolutions use a stride of one and are zero padded so that the output is the same size as the input. Our pooling is plain old max pooling over 2x2 blocks. To keep our code cleaner, let's abstract those operations into functions.
End of explanation
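As a quick sanity check on the sizes these choices produce, here is a small illustrative helper (an assumption for illustration, not part of the tutorial's own code): with 'SAME' padding the spatial output size is ceil(input / stride), so the stride-1 convolutions preserve 28x28 while each 2x2, stride-2 max pool halves it.

```python
import math

def same_padding_output_size(in_size, stride):
    # With 'SAME' padding, output size = ceil(input / stride).
    return math.ceil(in_size / stride)

size = 28
size = same_padding_output_size(size, 1)  # conv, stride 1 -> still 28
size = same_padding_output_size(size, 2)  # first max pool -> 14
size = same_padding_output_size(size, 2)  # second max pool -> 7
print(size)  # 7
```

This is where the 7x7 figure used later for the densely connected layer comes from.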
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
Explanation: First Convolutional Layer
We can now implement our first layer. It consists of a convolution followed by max pooling. The convolution computes 32 features for each 5x5 patch. Its weight tensor has a shape of [5, 5, 1, 32]: the first two dimensions are the patch size, the next is the number of input channels, and the last is the number of output channels. There is also a bias for each output channel.
End of explanation
x_image = tf.reshape(x, [-1,28,28,1])
Explanation: To apply the layer, we first reshape x into a 4d tensor, with the second and third dimensions corresponding to image width and height, and the final dimension corresponding to the number of color channels (1 here because these are grayscale images; it would be 3 for RGB color images).
End of explanation
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
Explanation: We then convolve x_image with the weight tensor, add the bias, apply the ReLU activation function, and finally max pool. The max_pool_2x2 method will reduce the image size to 14x14.
End of explanation
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
Explanation: Second Convolutional Layer
In order to build a deep network, we stack several layers of this type. The second layer will have 64 features for each 5x5 patch.
End of explanation
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
Explanation: Densely Connected Layer
Now that the image size has been reduced to 7x7, we add a fully-connected layer with 1024 neurons to allow processing on the entire image. We reshape the tensor from the pooling layer into a batch of vectors, multiply by a weight matrix, add a bias, and apply a ReLU.
End of explanation
keep_prob = tf.placeholder("float")
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
Explanation: Dropout
To reduce overfitting, we apply dropout before the readout layer. We create a placeholder for the probability that a neuron's output is kept during dropout. This allows us to turn dropout on during training, and turn it off during testing. TensorFlow's tf.nn.dropout op, in addition to masking neuron outputs, also automatically handles scaling them, so dropout just works without any additional scaling.<sup>1</sup>
1: In fact, for this small convolutional network, performance is nearly identical with and without dropout. Dropout is often very effective at reducing overfitting, but it is most useful when training very large neural networks.
End of explanation
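The automatic scaling mentioned above can be sketched as follows. This is a hedged NumPy illustration of inverted dropout, the scheme tf.nn.dropout's behavior corresponds to, not its actual implementation: kept activations are divided by keep_prob so the expected activation is unchanged, and at test time dropout is simply skipped.

```python
import numpy as np

def dropout(x, keep_prob, rng):
    # Keep each unit with probability keep_prob, and scale the survivors by
    # 1/keep_prob so the expected activation matches the no-dropout case.
    mask = rng.random(x.shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)

rng = np.random.default_rng(0)
out = dropout(np.ones(100000), 0.5, rng)
print(out.mean())  # close to 1.0 in expectation
```

Because of this rescaling, no extra correction is needed when keep_prob is set to 1.0 at test time.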
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv=tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
Explanation: Readout Layer
Finally, we add a softmax layer, just like for the one-layer softmax regression above.
End of explanation
cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
sess.run(tf.global_variables_initializer())
for i in range(20000):
batch = mnist.train.next_batch(50)
if i%100 == 0:
train_accuracy = accuracy.eval(feed_dict={x:batch[0], y_: batch[1], keep_prob: 1.0})
print("step %d, training accuracy %g" % (i, train_accuracy))
train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
# print ("test accuracy %g" % accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
# TensorFlow can throw an OOM error if accuracy is evaluated on the whole test set at once and memory is not enough
cross_accuracy = 0
for i in range(100):
testSet = mnist.test.next_batch(50)
each_accuracy = accuracy.eval(feed_dict={ x: testSet[0], y_: testSet[1], keep_prob: 1.0})
cross_accuracy += each_accuracy
print("test %d accuracy %g" % (i,each_accuracy))
print("test average accuracy %g" % (cross_accuracy/100,))
Explanation: Train and Evaluate the Model
How well does this model do? To train and evaluate it we use code that is nearly identical to that for the simple one-layer SoftMax network above, except that we use the more sophisticated ADAM optimizer for the gradient descent, include the additional parameter keep_prob in feed_dict to control the dropout rate, and log progress every 100th iteration.
The differences are:
We replace the steepest gradient descent optimizer with the more sophisticated ADAM (adaptive moment estimation) optimizer
We include the additional parameter keep_prob in feed_dict to control the dropout rate
We add logging to every 100th iteration in the training process
Feel free to run this code, but be aware that it does 20,000 training iterations and may take a while (possibly more than half an hour).
End of explanation |
Description:
Rand 2011 Bayesian Analysis
This notebook outlines how to begin duplicating the analysis of the Rand et al. 2011 study "Dynamic social networks promote cooperation in experiments with humans" Link to Paper
This notebook focuses on using a Bayesian approach. Just one example is shown. Refer to the other Cooperation Analysis notebook for the remaining regression formulas to do full replication
Spreadsheet
Stan_GLM
select-from-dataframe
summarize
This notebook also requires that bedrock-core be installed locally into the python kernel running this notebook. This can be installed via command line using
Step1: Test Connection to Bedrock Server
This code assumes a local bedrock is hosted at localhost on port 81. Change the SERVER variable to match your server's URL and port.
Step2: Check for Spreadsheet Opal
The following code block checks the Bedrock server for the Spreadsheet Opal. This Opal is used to load .csv, .xls, and other such files into a Bedrock matrix format. The code below calls the Bedrock /dataloaders/ingest endpoint to check if the opals.spreadsheet.Spreadsheet.Spreadsheet opal is installed.
If the code below shows the Opal is not installed, there are two options
Step3: Check for STAN GLM Opal
The following code block checks the Bedrock server for the STAN GLM Opal.
If the code below shows the Opal is not installed, there are two options
Step4: Check for select-from-dataframe Opal
The following code block checks the Bedrock server for the select-from-dataframe Opal. This allows you to filter by row and reduce the columns in a dataframe loaded by the server.
If the code below shows the Opal is not installed, there are two options
Step5: Check for summarize Opal
The following code block checks the Bedrock server for the summarize Opal. This allows you to summarize a matrix with an optional groupby clause.
If the code below shows the Opal is not installed, there are two options
Step6: Step 2
Step7: Now Upload the source file to the Bedrock Server
This code block uses the Spreadsheet ingest module to upload the source file to Bedrock. Note
Step8: Check available data sources for the CSV file
Call the Bedrock sources list to see available data sources. Note, that the Rand2011 data source should now be available
Step9: Create a Bedrock Matrix from the CSV Source
In order to use the data, the data source must be converted to a Bedrock matrix. The following code steps through that process. Here we are doing a simple transform of csv to matrix. There are options to apply filters (like renaming columns, excluding columns).
Step10: Look at basic statistics on the source data
Here we can see that Bedrock has computed some basic statistics on the source data.
For numeric data
The quartiles, max, mean, min, and standard deviation are provided
For non-numeric data
The label values and counts for each label are provided.
For both types
The proposed tags and data type that Bedrock is suggesting are provided
Step11: Step 3
Step12: Check that Matrix is filtered
Step13: Step 4
Step14: Visualize the output of the analysis
Here the output of the analysis is downloaded and from here can be visualized and exported | Python Code:
from bedrock.client.client import BedrockAPI
Explanation: Rand 2011 Bayesian Analysis
This notebook outlines how to begin duplicating the analysis of the Rand et al. 2011 study "Dynamic social networks promote cooperation in experiments with humans" Link to Paper
This notebook focuses on using a Bayesian approach. Just one example is shown. Refer to the other Cooperation Analysis notebook for the remaining regression formulas to do full replication
Spreadsheet
Stan_GLM
select-from-dataframe
summarize
This notebook also requires that bedrock-core be installed locally into the python kernel running this notebook. This can be installed via command line using:
pip install git+https://github.com/Bedrock-py/bedrock-core.git
The other requirements to run this notebook are:
pandas
Step 1: Check Environment
First check that Bedrock is installed locally. If the following cell does not run without error, check the install procedure above and try again. Also, ensure that the kernel selected is the same as the kernel where bedrock-core is installed
End of explanation
import requests
import pandas
import pprint
SERVER = "http://localhost:81/"
api = BedrockAPI(SERVER)
Explanation: Test Connection to Bedrock Server
This code assumes a local bedrock is hosted at localhost on port 81. Change the SERVER variable to match your server's URL and port.
End of explanation
resp = api.ingest("opals.spreadsheet.Spreadsheet.Spreadsheet")
if resp.json():
print("Spreadsheet Opal Installed!")
else:
print("Spreadsheet Opal Not Installed!")
Explanation: Check for Spreadsheet Opal
The following code block checks the Bedrock server for the Spreadsheet Opal. This Opal is used to load .csv, .xls, and other such files into a Bedrock matrix format. The code below calls the Bedrock /dataloaders/ingest endpoint to check if the opals.spreadsheet.Spreadsheet.Spreadsheet opal is installed.
If the code below shows the Opal is not installed, there are two options:
1. If you are running a local Bedrock or are the administrator of the Bedrock server, install the Spreadsheet Opal with pip on the server Spreadsheet
2. If you are not administrator of the Bedrock server, e-mail the Bedrock administrator requesting the Opal be installed
End of explanation
resp = api.analytic('opals.stan.Stan.Stan_GLM')
if resp.json():
print("Stan_GLM Opal Installed!")
else:
print("Stan_GLM Opal Not Installed!")
Explanation: Check for STAN GLM Opal
The following code block checks the Bedrock server for the STAN GLM Opal.
If the code below shows the Opal is not installed, there are two options:
1. If you are running a local Bedrock or are the administrator of the Bedrock server, install the Stan GLM Opal with pip on the server Stan GLM
2. If you are not administrator of the Bedrock server, e-mail the Bedrock administrator requesting the Opal be installed
End of explanation
resp = api.analytic('opals.select-from-dataframe.SelectByCondition.SelectByCondition')
if resp.json():
print("Select-from-dataframe Opal Installed!")
else:
print("Select-from-dataframe Opal Not Installed!")
Explanation: Check for select-from-dataframe Opal
The following code block checks the Bedrock server for the select-from-dataframe Opal. This allows you to filter by row and reduce the columns in a dataframe loaded by the server.
If the code below shows the Opal is not installed, there are two options:
1. If you are running a local Bedrock or are the administrator of the Bedrock server, install the select-from-dataframe Opal with pip on the server select-from-dataframe
2. If you are not administrator of the Bedrock server, e-mail the Bedrock administrator requesting the Opal be installed
End of explanation
resp = api.analytic('opals.summarize.Summarize.Summarize')
if resp.json():
print("Summarize Opal Installed!")
else:
print("Summarize Opal Not Installed!")
Explanation: Check for summarize Opal
The following code block checks the Bedrock server for the summarize Opal. This allows you to summarize a matrix with an optional groupby clause.
If the code below shows the Opal is not installed, there are two options:
1. If you are running a local Bedrock or are the administrator of the Bedrock server, install the summarize with pip on the server summarize
2. If you are not administrator of the Bedrock server, e-mail the Bedrock administrator requesting the Opal be installed
End of explanation
filepath = 'Rand2011PNAS_cooperation_data.csv'
datafile = pandas.read_csv('Rand2011PNAS_cooperation_data.csv')
datafile.head(10)
Explanation: Step 2: Upload Data to Bedrock and Create Matrix
Now that everything is installed, begin the workflow by uploading the csv data and creating a matrix. To understand this fully, it is useful to understand how a data loading workflow occurs in Bedrock.
Create a datasource that points to the original source file
Generate a matrix from the data source (filters can be applied during this step to pre-filter the data source on load)
Analytics work on the generated matrix
Note: Each time a matrix is generated from a data source it will create a new copy with a new UUID to represent that matrix
Check for csv file locally
The following code opens the file and prints out the first part. The file must be a csv file with a header that has labels for each column. The file is comma delimited csv.
End of explanation
ingest_id = 'opals.spreadsheet.Spreadsheet.Spreadsheet'
resp = api.put_source('Rand2011', ingest_id, 'default', {'file': open(filepath, "rb")})
if resp.status_code == 201:
source_id = resp.json()['src_id']
print('Source {0} successfully uploaded'.format(filepath))
else:
try:
print("Error in Upload: {}".format(resp.json()['msg']))
except Exception:
pass
try:
source_id = resp.json()['src_id']
print("Using existing source. If this is not the desired behavior, upload with a different name.")
except Exception:
print("No existing source id provided")
Explanation: Now Upload the source file to the Bedrock Server
This code block uses the Spreadsheet ingest module to upload the source file to Bedrock. Note: This simply copies the file to the server, but does not create a Bedrock Matrix format
If the following fails to upload. Check that the csv file is in the correct comma delimited format with headers.
End of explanation
available_sources = api.list("dataloader", "sources").json()
s = next(filter(lambda source: source['src_id'] == source_id, available_sources),'None')
if s != 'None':
pp = pprint.PrettyPrinter()
pp.pprint(s)
else:
print("Could not find source")
Explanation: Check available data sources for the CSV file
Call the Bedrock sources list to see available data sources. Note, that the Rand2011 data source should now be available
End of explanation
resp = api.create_matrix(source_id, 'rand_mtx')
mtx = resp[0]
matrix_id = mtx['id']
print(mtx)
resp
Explanation: Create a Bedrock Matrix from the CSV Source
In order to use the data, the data source must be converted to a Bedrock matrix. The following code steps through that process. Here we are doing a simple transform of csv to matrix. There are options to apply filters (like renaming columns, excluding columns).
End of explanation
analytic_id = "opals.summarize.Summarize.Summarize"
inputData = {
'matrix.csv': mtx,
'features.txt': mtx
}
paramsData = []
summary_mtx = api.run_analytic(analytic_id, mtx, 'rand_mtx_summary', input_data=inputData, parameter_data=paramsData)
output = api.download_results_matrix(matrix_id, summary_mtx['id'], 'matrix.csv')
output
Explanation: Look at basic statistics on the source data
Here we can see that Bedrock has computed some basic statistics on the source data.
For numeric data
The quartiles, max, mean, min, and standard deviation are provided
For non-numeric data
The label values and counts for each label are provided.
For both types
The proposed tags and data type that Bedrock is suggesting are provided
End of explanation
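The statistics Bedrock reports here can also be reproduced locally with pandas. The following sketch uses a tiny made-up frame purely for illustration; the calls mirror the numeric and non-numeric summaries described above.

```python
import pandas as pd

df = pd.DataFrame({"age": [15, 22, 23, 49], "sex": ["M", "M", "F", "M"]})

numeric_summary = df.describe()           # quartiles, max, mean, min, std for numeric columns
label_counts = df["sex"].value_counts()   # label values and counts for non-numeric columns

print(numeric_summary.loc["mean", "age"])  # 27.25
print(label_counts["M"])                   # 3
```

This is handy for spot-checking that the server-side summary matches what you compute on the raw csv.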
analytic_id = "opals.select-from-dataframe.SelectByCondition.SelectByCondition"
inputData = {
'matrix.csv': mtx,
'features.txt': mtx
}
paramsData = [
{"attrname":"colname","value":"condition"},
{"attrname":"comparator","value":"=="},
{"attrname":"value","value":"Static"}
]
filtered_mtx = api.run_analytic(analytic_id, mtx, 'rand_static_only', input_data=inputData, parameter_data=paramsData)
filtered_mtx
Explanation: Step 3: Filter the data based on a condition
Filter the data to only the Static Condition
End of explanation
output = api.download_results_matrix('rand_mtx', 'rand_static_only', 'matrix.csv', remote_header_file='features.txt')
output
Explanation: Check that Matrix is filtered
End of explanation
analytic_id = "opals.stan.Stan.Stan_GLM"
inputData = {
'matrix.csv': filtered_mtx,
'features.txt': filtered_mtx
}
paramsData = [
{"attrname":"formula","value":"decision0d1c ~ round_num"},
{"attrname":"family","value":'logit'},
{"attrname":"chains","value":"3"},
{"attrname":"iter","value":"3000"}
]
result_mtx = api.run_analytic(analytic_id, mtx, 'rand_bayesian1', input_data=inputData, parameter_data=paramsData)
result_mtx
Explanation: Step 4: Run Bayesian Logistic Regression
This uses Stan to perform Bayesian logistic regression comparing the effect of the round on the decision
End of explanation
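The model being fit can be written down explicitly: under the logit family, the formula decision0d1c ~ round_num corresponds to P(cooperate) = sigmoid(b0 + b1 * round_num). The following NumPy sketch is an illustration of the likelihood that Stan samples from, not the Stan code itself, and evaluates the log-likelihood for candidate coefficients.

```python
import numpy as np

def logit_log_likelihood(b0, b1, round_num, decision):
    # Linear predictor pushed through the inverse-logit (sigmoid) link.
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * np.asarray(round_num, dtype=float))))
    y = np.asarray(decision, dtype=float)
    return float(np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))

# With b0 = b1 = 0 every P(cooperate) is 0.5, so each observation contributes log(0.5).
print(logit_log_likelihood(0.0, 0.0, [1, 2, 3], [1, 0, 1]))  # 3 * log(0.5), about -2.079
```

The posterior summary returned by the Stan_GLM Opal describes the distribution of b0 and b1 that makes this likelihood (times the priors) large.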
summary_table = api.download_results_matrix('rand_mtx', 'rand_bayesian1', 'matrix.csv')
summary_table
prior_summary = api.download_results_matrix('rand_mtx', 'rand_bayesian1', 'prior_summary.txt')
print(prior_summary)
Explanation: Visualize the output of the analysis
Here the output of the analysis is downloaded and from here can be visualized and exported
End of explanation |
Description:
Title
Step1: Create Data
Step2: View Table
Step3: Delete Column
Step4: View Table | Python Code:
# Ignore
%load_ext sql
%sql sqlite://
%config SqlMagic.feedback = False
Explanation: Title: Add A Column
Slug: add_a_column
Summary: Add a column in a table in SQL.
Date: 2016-05-01 12:00
Category: SQL
Tags: Basics
Authors: Chris Albon
Note: This tutorial was written using Catherine Devlin's SQL in Jupyter Notebooks library. If you have not using a Jupyter Notebook, you can ignore the two lines of code below and any line containing %%sql. Furthermore, this tutorial uses SQLite's flavor of SQL, your version might have some differences in syntax.
For more, check out Learning SQL by Alan Beaulieu.
End of explanation
%%sql
-- Create a table of criminals
CREATE TABLE criminals (pid, name, age, sex, city, minor);
INSERT INTO criminals VALUES (412, 'James Smith', 15, 'M', 'Santa Rosa', 1);
INSERT INTO criminals VALUES (234, 'Bill James', 22, 'M', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (632, 'Stacy Miller', 23, 'F', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (621, 'Betty Bob', NULL, 'F', 'Petaluma', 1);
INSERT INTO criminals VALUES (162, 'Jaden Ado', 49, 'M', NULL, 0);
INSERT INTO criminals VALUES (901, 'Gordon Ado', 32, 'F', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (512, 'Bill Byson', 21, 'M', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (411, 'Bob Iton', NULL, 'M', 'San Francisco', 0);
Explanation: Create Data
End of explanation
%%sql
-- Select everything
SELECT *
-- From the table 'criminals'
FROM criminals
Explanation: View Table
End of explanation
%%sql
-- Edit the table
ALTER TABLE criminals
-- Add a column called 'state' that contains text with the default value being 'CA'
ADD COLUMN state text DEFAULT 'CA'
Explanation: Delete Column
End of explanation
%%sql
-- Select everything
SELECT *
-- From the table 'criminals'
FROM criminals
Explanation: View Table
End of explanation |
Description:
TPOT tutorial on the Titanic dataset
The Titanic machine learning competition on Kaggle is one of the most popular beginner's competitions on the platform. We will use that competition here to demonstrate the implementation of TPOT.
Step1: Data Exploration
Step2: Data Munging
The first and most important step in using TPOT on any data set is to rename the target class/response variable to class.
Step3: At present, TPOT requires all the data to be in numerical format. As we can see below, our data set has 5 categorical variables which contain non-numerical values
Step4: We then check the number of levels that each of the five categorical variables have.
Step5: As we can see, Sex and Embarked have few levels. Let's find out what they are.
Step6: We then code these levels manually into numerical values. For nan i.e. the missing values, we simply replace them with a placeholder value (-999). In fact, we perform this replacement for the entire data set.
Step7: Since Name and Ticket have so many levels, we drop them from our analysis for the sake of simplicity. For Cabin, we encode the levels as digits using Scikit-learn's MultiLabelBinarizer and treat them as new features.
Step8: Drop the unused features from the dataset.
Step9: We then add the encoded features to form the final dataset to be used with TPOT.
Step10: Keeping in mind that the final dataset is in the form of a numpy array, we can check the number of features in the final dataset as follows.
Step11: Finally we store the class labels, which we need to predict, in a separate variable.
Step12: Data Analysis using TPOT
To begin our analysis, we need to divide our training data into training and validation sets. The validation set is just to give us an idea of the test set error. The model selection and tuning is entirely taken care of by TPOT, so if we want to, we can skip creating this validation set.
Step13: After that, we proceed to calling the fit, score and export functions on our training dataset. To get a better idea of how these functions work, refer the TPOT documentation here.
An important TPOT parameter to set is the number of generations. Since our aim is to just illustrate the use of TPOT, we have set it to 5. On a standard laptop with 4GB RAM, it roughly takes 5 minutes per generation to run. For each added generation, it should take 5 mins more. Thus, for the default value of 100, total run time could be roughly around 8 hours.
Step14: Let's have a look at the generated code. As we can see, the random forest classifier performed the best on the given dataset out of all the other models that TPOT currently evaluates on. If we ran TPOT for more generations, then the score should improve further.
Step15: Make predictions on the submission data
Step16: The most important step here is to check for new levels in the categorical variables of the submission dataset that are absent in the training set. We identify them and set them to our placeholder value of '-999', i.e., we treat them as missing values. This ensures training consistency, as otherwise the model does not know what to do with the new levels in the submission dataset.
Step17: We then carry out the data munging steps as done earlier for the training dataset.
Step18: While calling MultiLabelBinarizer for the submission data set, we first fit on the training set again to learn the levels and then transform the submission dataset values. This further ensures that only those levels that were present in the training dataset are transformed. If new levels are still found in the submission dataset then it will return an error and we need to go back and check our earlier step of replacing new levels with the placeholder value. | Python Code:
# Import required libraries
from tpot import TPOTClassifier
from sklearn.model_selection import train_test_split
import pandas as pd
import numpy as np
# Load the data
titanic = pd.read_csv('data/titanic_train.csv')
titanic.head(5)
Explanation: TPOT tutorial on the Titanic dataset
The Titanic machine learning competition on Kaggle is one of the most popular beginner's competitions on the platform. We will use that competition here to demonstrate the implementation of TPOT.
End of explanation
titanic.groupby('Sex').Survived.value_counts()
titanic.groupby(['Pclass','Sex']).Survived.value_counts()
survived_ct = pd.crosstab([titanic.Pclass, titanic.Sex], titanic.Survived.astype(float))
survived_ct.div(survived_ct.sum(1).astype(float), 0)  # survival proportions by class and sex
Explanation: Data Exploration
End of explanation
titanic.rename(columns={'Survived': 'class'}, inplace=True)
Explanation: Data Munging
The first and most important step in using TPOT on any data set is to rename the target class/response variable to class.
End of explanation
titanic.dtypes
Explanation: At present, TPOT requires all the data to be in numerical format. As we can see below, our data set has 5 categorical variables which contain non-numerical values: Name, Sex, Ticket, Cabin and Embarked.
End of explanation
for cat in ['Name', 'Sex', 'Ticket', 'Cabin', 'Embarked']:
    print("Number of levels in category '{0}': {1}".format(cat, titanic[cat].unique().size))
Explanation: We then check the number of levels that each of the five categorical variables have.
End of explanation
for cat in ['Sex', 'Embarked']:
    print("Levels for category '{0}': {1}".format(cat, titanic[cat].unique()))
Explanation: As we can see, Sex and Embarked have few levels. Let's find out what they are.
End of explanation
titanic['Sex'] = titanic['Sex'].map({'male':0,'female':1})
titanic['Embarked'] = titanic['Embarked'].map({'S':0,'C':1,'Q':2})
titanic = titanic.fillna(-999)
pd.isnull(titanic).any()
Explanation: We then code these levels manually into numerical values. For nan i.e. the missing values, we simply replace them with a placeholder value (-999). In fact, we perform this replacement for the entire data set.
End of explanation
from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer()
CabinTrans = mlb.fit_transform([{str(val)} for val in titanic['Cabin'].values])
CabinTrans
Explanation: Since Name and Ticket have so many levels, we drop them from our analysis for the sake of simplicity. For Cabin, we encode the levels as digits using Scikit-learn's MultiLabelBinarizer and treat them as new features.
End of explanation
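The effect of the `MultiLabelBinarizer` call above can be sketched without scikit-learn. This is a minimal pure-Python stand-in for what it learns and produces; the cabin values used here are hypothetical examples, not rows from the real dataset:

```python
# Each unique level becomes one column; every row gets a 1 in the column
# matching its level. This mirrors mlb.fit_transform on single-label sets.
def binarize(values):
    classes = sorted(set(values))              # the learned "vocabulary"
    index = {c: i for i, c in enumerate(classes)}
    rows = []
    for v in values:
        row = [0] * len(classes)
        row[index[v]] = 1
        rows.append(row)
    return classes, rows

classes, encoded = binarize(['C85', '-999', 'C85', 'E46'])
print(classes)   # ['-999', 'C85', 'E46']
print(encoded)   # [[0, 1, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

This also makes clear why the assert in the next cell compares `len(mlb.classes_)` with the number of unique cabin values: one learned class per level.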
titanic_new = titanic.drop(['Name','Ticket','Cabin','class'], axis=1)
assert (len(titanic['Cabin'].unique()) == len(mlb.classes_)), "Not Equal" #check correct encoding done
Explanation: Drop the unused features from the dataset.
End of explanation
titanic_new = np.hstack((titanic_new.values,CabinTrans))
np.isnan(titanic_new).any()
Explanation: We then add the encoded features to form the final dataset to be used with TPOT.
End of explanation
titanic_new[0].size
Explanation: Keeping in mind that the final dataset is in the form of a numpy array, we can check the number of features in the final dataset as follows.
End of explanation
titanic_class = titanic['class'].values
Explanation: Finally we store the class labels, which we need to predict, in a separate variable.
End of explanation
training_indices, validation_indices = train_test_split(titanic.index, stratify=titanic_class, train_size=0.75, test_size=0.25)
training_indices.size, validation_indices.size
Explanation: Data Analysis using TPOT
To begin our analysis, we need to divide our training data into training and validation sets. The validation set is just to give us an idea of the test set error. The model selection and tuning is entirely taken care of by TPOT, so if we want to, we can skip creating this validation set.
End of explanation
tpot = TPOTClassifier(verbosity=2, max_time_mins=2, max_eval_time_mins=0.04, population_size=40)
tpot.fit(titanic_new[training_indices], titanic_class[training_indices])
tpot.score(titanic_new[validation_indices], titanic.loc[validation_indices, 'class'].values)
tpot.export('tpot_titanic_pipeline.py')
Explanation: After that, we proceed to calling the fit, score and export functions on our training dataset. To get a better idea of how these functions work, refer the TPOT documentation here.
An important TPOT parameter to set is the number of generations. Since our aim is to just illustrate the use of TPOT, we have set it to 5. On a standard laptop with 4GB RAM, it roughly takes 5 minutes per generation to run. For each added generation, it should take 5 mins more. Thus, for the default value of 100, total run time could be roughly around 8 hours.
End of explanation
# %load tpot_titanic_pipeline.py
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
# NOTE: Make sure that the class is labeled 'class' in the data file
tpot_data = np.recfromcsv('PATH/TO/DATA/FILE', delimiter='COLUMN_SEPARATOR', dtype=np.float64)
features = np.delete(tpot_data.view(np.float64).reshape(tpot_data.size, -1), tpot_data.dtype.names.index('class'), axis=1)
training_features, testing_features, training_classes, testing_classes = \
train_test_split(features, tpot_data['class'], random_state=None)
exported_pipeline = RandomForestClassifier(bootstrap=False, max_features=0.4, min_samples_leaf=1, min_samples_split=9)
exported_pipeline.fit(training_features, training_classes)
results = exported_pipeline.predict(testing_features)
Explanation: Let's have a look at the generated code. As we can see, the random forest classifier performed the best on the given dataset out of all the other models that TPOT currently evaluates on. If we ran TPOT for more generations, then the score should improve further.
End of explanation
# Read in the submission dataset
titanic_sub = pd.read_csv('data/titanic_test.csv')
titanic_sub.describe()
Explanation: Make predictions on the submission data
End of explanation
for var in ['Cabin']: #,'Name','Ticket']:
    new = list(set(titanic_sub[var]) - set(titanic[var]))
    titanic_sub.loc[titanic_sub[var].isin(new), var] = -999
Explanation: The most important step here is to check for new levels in the categorical variables of the submission dataset that are absent in the training set. We identify them and set them to our placeholder value of '-999', i.e., we treat them as missing values. This ensures training consistency, as otherwise the model does not know what to do with the new levels in the submission dataset.
End of explanation
titanic_sub['Sex'] = titanic_sub['Sex'].map({'male':0,'female':1})
titanic_sub['Embarked'] = titanic_sub['Embarked'].map({'S':0,'C':1,'Q':2})
titanic_sub = titanic_sub.fillna(-999)
pd.isnull(titanic_sub).any()
Explanation: We then carry out the data munging steps as done earlier for the training dataset.
End of explanation
from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer()
SubCabinTrans = mlb.fit([{str(val)} for val in titanic['Cabin'].values]).transform([{str(val)} for val in titanic_sub['Cabin'].values])
titanic_sub = titanic_sub.drop(['Name','Ticket','Cabin'], axis=1)
# Form the new submission data set
titanic_sub_new = np.hstack((titanic_sub.values,SubCabinTrans))
np.any(np.isnan(titanic_sub_new))
# Ensure equal number of features in both the final training and submission dataset
assert (titanic_new.shape[1] == titanic_sub_new.shape[1]), "Not Equal"
# Generate the predictions
submission = tpot.predict(titanic_sub_new)
# Create the submission file
final = pd.DataFrame({'PassengerId': titanic_sub['PassengerId'], 'Survived': submission})
final.to_csv('data/submission.csv', index = False)
final.shape
Explanation: While calling MultiLabelBinarizer for the submission data set, we first fit on the training set again to learn the levels and then transform the submission dataset values. This further ensures that only those levels that were present in the training dataset are transformed. If new levels are still found in the submission dataset then it will return an error and we need to go back and check our earlier step of replacing new levels with the placeholder value.
End of explanation |
7,684 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Audio using the Base Overlay
The PYNQ-Z1 board contains an integrated MIC, and line out connected to a 3.5mm jack. Both these interfaces are connected to the FPGA fabric of the Zynq® chip. The Microphone has a PDM interface, and the line out is a PWM driven mono output.
It is possible to play back audio from the board in a notebook, and to capture audio from other interfaces like HDMI, or a USB audio capture device. This notebook will only consider the MIC and line out interfaces on the board.
The Microphone is integrated onto the board, as indicated in the image below. The MIC hole should not be covered when capturing audio.
Audio IP in base overlay
To use audio on the PYNQ-Z1, audio controllers must be included in a hardware library or overlay. The base overlay contains the PDM capture block and the PWM driver for the two audio interfaces, as indicated in the image below
Step1: Capture audio
Capture a 4 second sample from the microphone, and save the raw pdm file to disk
Step2: Playback on the board
Connect headphones, or speakers to the 3.5mm line out and playback the captured audio
Step3: You can also playback from a pre-recorded pdm file | Python Code:
from pynq.drivers import Audio
audio = Audio()
Explanation: Audio using the Base Overlay
The PYNQ-Z1 board contains an integrated MIC, and line out connected to a 3.5mm jack. Both these interfaces are connected to the FPGA fabric of the Zynq® chip. The Microphone has a PDM interface, and the line out is a PWM driven mono output.
It is possible to play back audio from the board in a notebook, and to capture audio from other interfaces like HDMI, or a USB audio capture device. This notebook will only consider the MIC and line out interfaces on the board.
The Microphone is integrated onto the board, as indicated in the image below. The MIC hole should not be covered when capturing audio.
Audio IP in base overlay
To use audio on the PYNQ-Z1, audio controllers must be included in a hardware library or overlay. The base overlay contains a the PDM capture and PWM driver for the two audio interfaces as indicated in the image below:
The Audio IP in the base overlay consists of a PDM block to interface the MIC, and an Audio Direct IP block to drive the line out (PWM). There are three multiplexors. This allows the line out to be driven from the PS, or the MIC can be streamed directly to the output. The line out can also be disabled.
Using the MIC
To use the MIC, first create an instance of the Audio class. The audio class can be used to access both the MIC and the line out.
End of explanation
# Record a sample
audio.record(4)
# Save recorded sample
audio.save("Recording_1.pdm")
Explanation: Capture audio
Capture a 4 second sample from the microphone, and save the raw pdm file to disk:
End of explanation
# Play recorded sample
audio.play()
Explanation: Playback on the board
Connect headphones, or speakers to the 3.5mm line out and playback the captured audio:
End of explanation
# Load a sample
audio.load("/home/xilinx/pynq/drivers/tests/pynq_welcome.pdm")
# Play loaded sample
audio.play()
Explanation: You can also playback from a pre-recorded pdm file
End of explanation |
7,685 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: MarkDown input
WARNING
Step2: Create HTML document
Save notebook before creating doc
Make sure to update the notebook name if necessary
selected_cells is the list of cells that will be written in the html doc
Count cells (i.e. including code and markdown cells) starting from 0
A code cell has 2 parts | Python Code:
# path_img1 = 'data/svgclock.svg'
# path_img2 = 'data/example2.jpg'
# path_img3 = 'data/example4.png'
path_img1 = 'http://upload.wikimedia.org/wikipedia/commons/f/fd/Ghostscript_Tiger.svg'
path_img2 = 'http://upload.wikimedia.org/wikipedia/commons/thumb/3/3e/Einstein_1921_by_F_Schmutzer_-_restoration.jpg/220px-Einstein_1921_by_F_Schmutzer_-_restoration.jpg'
path_img3 = 'http://upload.wikimedia.org/wikipedia/en/thumb/f/f9/Singing_in_the_rain_poster.jpg/220px-Singing_in_the_rain_poster.jpg'
text_md = r"""
#Header One
##Header Two
###Header Three
####Header Four
<p><em>Native</em> <strong>HTML</strong> sample</p>
##Caractères avec accents
+ Un poème de Paul Verlaine
Les sanglots longs
Des violons
De l'automne
Blessent mon cœur
D'une langueur
Monotone.
Tout suffocant
Et blême, quand
Sonne l'heure,
Je me souviens
Des jours anciens
Et je pleure
Et je m'en vais
Au vent mauvais
Qui m'emporte
Deçà, delà,
Pareil à la
Feuille morte.
+ Autre example: çà avec accent, symbole de l'Euro €
+ Et aussi équations Latex inline $\mu$ ou encore $e^{i\pi}=-1$
##Multi line Latex examples
A mathjax expression, by default left-aligned.
$$
\frac{n!}{k!(n-k)!} = \binom{n}{k}
$$
A mathjax expression, centered.
<div class="math_center">
$$
\begin{equation} x = a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{a_3 + \cfrac{1}{a_4} } } } \end{equation}
$$
</div>
A mathjax expression, right-aligned.
<div class="math_right">
$$
A_{m,n} = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\
\vdots & \vdots & \ddots & \vdots \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \end{pmatrix}
$$
</div>
##List Example
Here is a list of items :
- Item no 1
- Item no 2
- etc
- Very long item that I need to write on 2 lines
azerty azerty azerty azerty azerty azerty azerty azerty azerty azerty
##A block quote
> Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.
##Tables
**[links](http://example.com)**
4. Apples
5. Oranges
6. Pears
A centered table
<div class="table_center">
| Header 1 | *Header* 2 |
| -------- | -------- |
| `Cell 1` | [Cell 2](http://example.com) link |
| Cell 3 | **Cell 4** |
</div>
Another table, left-aligned by default.
See how the mathjax expression can be left-aligned or centered in a table cell.
|title1|title2|
|-|-|
||$$\frac{n!}{k!(n-k)!} = \binom{n}{k}$$|
||<div class="math_center">$$\frac{n!}{k!(n-k)!} = \binom{n}{k}$$</div>|
|||
###Finally some variables
+ %s
+ %s
+ %s
""" % (path_img1,
path_img1,
path_img2,
path_img3,
'variableA',
'variableB',
'variableC')
text_html = md.md_to_html(text_md)
HTML(text_html)
Explanation: MarkDown input
WARNING: text string must be raw e.g. r'example string'
End of explanation
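A quick illustration of why the warning above matters for LaTeX-heavy markdown: in a normal string literal, backslash pairs such as `\t` or `\n` are escape sequences and silently corrupt LaTeX commands, while the `r` prefix keeps backslashes verbatim.

```python
# "\t" inside a normal string becomes a TAB character; a raw string keeps
# the backslash and the letter t as two separate characters.
tricky = "\table"    # 5 characters: TAB + 'able'
raw = r"\table"      # 6 characters: backslash + 'table'

print(len(tricky), len(raw))  # 5 6
print(raw)                    # \table
```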
fname = 'demo_ezmarkdown.ipynb'
output = 'saved/output.html'
tpl = md.template.TEMPLATE_OUTPUT_CELLS_ONLY
selected_cells = [1, 3]
md.write_html_doc(fname, output, selected_cells, template=tpl)
Explanation: Create HTML document
Save notebook before creating doc
Make sure to update the notebook name if necessary
selected_cells is the list of cells that will be written in the html doc
Count cells (i.e. including code and markdown cells) starting from 0
A code cell has 2 parts: In[ ] and Out[ ]
Several templates are available, accessible by autocomplete.
TEMPLATE_INPUT_AND_OUTPUT_CELLS:
TEMPLATE_INPUT_CELLS_ONLY
TEMPLATE_OUTPUT_CELLS_ONLY
TEMPLATE_INPUT_CELLS_TOGGLE_OUTPUT_CELLS
TEMPLATE_OUTPUT_CELLS_TOGGLE_INPUT_CELLS
The selected template controls how code cells will be rendered in the HTML doc. Try them out.
The resulting HTML doc is created where variable output indicates.
A subdirectory 'saved' is created is necessary.
End of explanation |
7,686 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute source power using DICS beamfomer
Compute a Dynamic Imaging of Coherent Sources (DICS) filter from single trial
activity to estimate source power for two frequencies of interest.
The original reference for DICS is
Step1: Read raw data | Python Code:
# Author: Roman Goj <roman.goj@gmail.com>
# Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
from mne.time_frequency import compute_epochs_csd
from mne.beamformer import dics_source_power
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
subjects_dir = data_path + '/subjects'
Explanation: Compute source power using DICS beamformer
Compute a Dynamic Imaging of Coherent Sources (DICS) filter from single trial
activity to estimate source power for two frequencies of interest.
The original reference for DICS is:
Gross et al. Dynamic imaging of coherent sources: Studying neural interactions
in the human brain. PNAS (2001) vol. 98 (2) pp. 694-699
End of explanation
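DICS is built on cross-spectral density (CSD) matrices, computed in this script with `compute_epochs_csd`. As a rough illustration of the statistic itself (not of the MNE implementation), the CSD between two signals at each frequency is the product of one signal's DFT with the conjugate of the other's. A tiny DFT is coded by hand so the sketch needs no external packages:

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def csd(x, y):
    # CSD(f) = X(f) * conj(Y(f)); with y == x this is the power spectrum.
    return [a * b.conjugate() for a, b in zip(dft(x), dft(y))]

x = [0.0, 1.0, 0.0, -1.0]          # one cycle over four samples
auto = csd(x, x)
print([round(abs(c), 6) for c in auto])  # power concentrated at +/- one cycle
```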
raw = mne.io.read_raw_fif(raw_fname)
raw.info['bads'] = ['MEG 2443'] # 1 bad MEG channel
# Set picks
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=False,
stim=False, exclude='bads')
# Read epochs
event_id, tmin, tmax = 1, -0.2, 0.5
events = mne.read_events(event_fname)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks, baseline=(None, 0), preload=True,
reject=dict(grad=4000e-13, mag=4e-12))
evoked = epochs.average()
# Read forward operator
forward = mne.read_forward_solution(fname_fwd, surf_ori=True)
# Computing the data and noise cross-spectral density matrices
# The time-frequency window was chosen on the basis of spectrograms from
# example time_frequency/plot_time_frequency.py
# As fsum is False compute_epochs_csd returns a list of CrossSpectralDensity
# instances than can then be passed to dics_source_power
data_csds = compute_epochs_csd(epochs, mode='multitaper', tmin=0.04, tmax=0.15,
fmin=15, fmax=30, fsum=False)
noise_csds = compute_epochs_csd(epochs, mode='multitaper', tmin=-0.11,
tmax=-0.001, fmin=15, fmax=30, fsum=False)
# Compute DICS spatial filter and estimate source power
stc = dics_source_power(epochs.info, forward, noise_csds, data_csds)
clim = dict(kind='value', lims=[1.6, 1.9, 2.2])
for i, csd in enumerate(data_csds):
message = 'DICS source power at %0.1f Hz' % csd.frequencies[0]
brain = stc.plot(surface='inflated', hemi='rh', subjects_dir=subjects_dir,
time_label=message, figure=i, clim=clim)
brain.set_data_time_index(i)
brain.show_view('lateral')
# Uncomment line below to save images
# brain.save_image('DICS_source_power_freq_%d.png' % csd.frequencies[0])
Explanation: Read raw data
End of explanation |
7,687 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The following is adapted from Visualizing TensorFlow Graphs in Jupyter Notebooks
And excuted in
bash
docker run -it -p 8888
Step1: Run the follwing
Step2:
Step3:
Step8:
Step9: The following is adapted from Visualizing CNN architectures side by side with mxnet | Python Code:
import tensorflow as tf
g = tf.Graph()
with g.as_default():
a = tf.placeholder(tf.float32, name="a")
b = tf.placeholder(tf.float32, name="b")
c = a + b
[node.name for node in g.as_graph_def().node]
g.as_graph_def().node[2].input
%%bash
export DEBIAN_FRONTEND=noninteractive
apt-get update
apt-get install -yq --no-install-recommends graphviz
%%bash
pip install graphviz
from graphviz import Digraph
dot = Digraph()
for n in g.as_graph_def().node:
# Each node has a name and a label. The name identifies the node
# while the label is what will be displayed in the graph.
# We're using the name as a label for simplicity.
dot.node(n.name, label=n.name)
for i in n.input:
# Edges are determined by the names of the nodes
dot.edge(i, n.name)
# Jupyter can automatically display the DOT graph,
# which allows us to just return it as a value.
dot
def tf_to_dot(graph):
dot = Digraph()
for n in g.as_graph_def().node:
dot.node(n.name, label=n.name)
for i in n.input:
dot.edge(i, n.name)
return dot
g = tf.Graph()
with g.as_default():
pi = tf.constant(3.14, name="pi")
r = tf.placeholder(tf.float32, name="r")
y = pi * r * r
tf_to_dot(g)
%%bash
mkdir vis_logs
Explanation: The following is adapted from Visualizing TensorFlow Graphs in Jupyter Notebooks
And executed in
bash
docker run -it -p 8888:8888 -p 6006:6006 -v `pwd`:/space/ -w /space/ --rm --name md waleedka/modern-deep-learning jupyter notebook --ip=0.0.0.0 --allow-root
End of explanation
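The `tf_to_dot` helper above leans on the graphviz package, but the DOT source it hands to Graphviz is plain text and can be produced by hand, which shows exactly what gets rendered:

```python
# Minimal sketch: emit DOT source for a list of (source, target) edges.
# The edge names here are hypothetical node names, not taken from a real graph.
def to_dot(edges):
    lines = ["digraph {"]
    for src, dst in edges:
        lines.append('  "%s" -> "%s";' % (src, dst))
    lines.append("}")
    return "\n".join(lines)

dot_src = to_dot([("a", "add"), ("b", "add")])
print(dot_src)
```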
g = tf.Graph()
with g.as_default():
pi = tf.constant(3.14, name="pi")
r = tf.placeholder(tf.float32, name="r")
y = pi * r * r
tf.summary.FileWriter("vis_logs", g).close()
Explanation: Run the following:
bash
docker exec -it md tensorboard --logdir=dl/vis_logs
And navigate to http://localhost:6006/#graphs
End of explanation
g = tf.Graph()
with g.as_default():
X = tf.placeholder(tf.float32, name="X")
W1 = tf.placeholder(tf.float32, name="W1")
b1 = tf.placeholder(tf.float32, name="b1")
a1 = tf.nn.relu(tf.matmul(X, W1) + b1)
W2 = tf.placeholder(tf.float32, name="W2")
b2 = tf.placeholder(tf.float32, name="b2")
a2 = tf.nn.relu(tf.matmul(a1, W2) + b2)
W3 = tf.placeholder(tf.float32, name="W3")
b3 = tf.placeholder(tf.float32, name="b3")
y_hat = tf.matmul(a2, W3) + b3
tf.summary.FileWriter("vis_logs", g).close()
Explanation:
End of explanation
g = tf.Graph()
with g.as_default():
X = tf.placeholder(tf.float32, name="X")
with tf.name_scope("Layer1"):
W1 = tf.placeholder(tf.float32, name="W1")
b1 = tf.placeholder(tf.float32, name="b1")
a1 = tf.nn.relu(tf.matmul(X, W1) + b1)
with tf.name_scope("Layer2"):
W2 = tf.placeholder(tf.float32, name="W2")
b2 = tf.placeholder(tf.float32, name="b2")
a2 = tf.nn.relu(tf.matmul(a1, W2) + b2)
with tf.name_scope("Layer3"):
W3 = tf.placeholder(tf.float32, name="W3")
b3 = tf.placeholder(tf.float32, name="b3")
y_hat = tf.matmul(a2, W3) + b3
tf.summary.FileWriter("vis_logs", g).close()
Explanation:
End of explanation
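What `tf.name_scope` does to node names can be mimicked with a small scope stack. This is a sketch of the naming behaviour only, not of TensorFlow's implementation:

```python
from contextlib import contextmanager

_scopes = []

@contextmanager
def name_scope(name):
    # Push a prefix for the duration of the with-block.
    _scopes.append(name)
    try:
        yield
    finally:
        _scopes.pop()

def make_name(name):
    return "/".join(_scopes + [name])

with name_scope("Layer1"):
    w1 = make_name("W1")
    with name_scope("inner"):
        b1 = make_name("b1")

print(w1, b1)  # Layer1/W1 Layer1/inner/b1
```

Nested scopes compose into slash-separated prefixes, which is why TensorBoard can collapse each `Layer` group into a single box.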
# https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/deepdream/deepdream.ipynb
# TensorFlow Graph visualizer code
import numpy as np
from IPython.display import clear_output, Image, display, HTML
def strip_consts(graph_def, max_const_size=32):
    """Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = "<stripped %d bytes>"%size
return strip_def
def show_graph(graph_def, max_const_size=32):
    """Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
    code = """
        <script src="//cdnjs.cloudflare.com/ajax/libs/polymer/0.3.3/platform.js"></script>
        <script>
          function load() {{
            document.getElementById("{id}").pbtxt = {data};
          }}
        </script>
        <link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
        <div style="height:600px">
          <tf-graph-basic id="{id}"></tf-graph-basic>
        </div>
    """.format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))

    iframe = """
        <iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe>
    """.format(code.replace('"', '&quot;'))
display(HTML(iframe))
# Simply call this to display the result. Unfortunately it doesn't save the output together with
# the Jupyter notebook, so we can only show a non-interactive image here.
show_graph(g)
Explanation:
End of explanation
%%bash
pip install mxnet
%%bash
# https://github.com/dmlc/mxnet-model-gallery/blob/master/imagenet-1k-vgg.md
wget http://data.dmlc.ml/mxnet/models/imagenet/vgg/vgg19.tar.gz
%%bash
wget http://data.dmlc.ml/models/imagenet/inception-bn/Inception-BN-symbol.json
%%bash
cat Inception-BN-symbol.json
%%bash
wget http://data.dmlc.ml/mxnet/models/imagenet/resnet/50-layers/resnet-50-symbol.json && wget http://data.dmlc.ml/mxnet/models/imagenet/resnet/50-layers/resnet-50-0000.params
import mxnet as mx
sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-50', 0)
mx.viz.plot_network(sym, node_attrs={"shape":'rect',"fixedsize":'false'}, save_format='png')
import mxnet as mx
user = mx.symbol.Variable('user')
item = mx.symbol.Variable('item')
score = mx.symbol.Variable('score')
# Set dummy dimensions
k = 64
max_user = 100
max_item = 50
# user feature lookup
user = mx.symbol.Embedding(data = user, input_dim = max_user, output_dim = k)
# item feature lookup
item = mx.symbol.Embedding(data = item, input_dim = max_item, output_dim = k)
# predict by the inner product, which is elementwise product and then sum
net = user * item
net = mx.symbol.sum_axis(data = net, axis = 1)
net = mx.symbol.Flatten(data = net)
# loss layer
net = mx.symbol.LinearRegressionOutput(data = net, label = score)
# Visualize your network
mx.viz.plot_network(net)
Explanation: The following is adapted from Visualizing CNN architectures side by side with mxnet
End of explanation |
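The matrix-factorisation score in the cell above ("elementwise product and then sum") is just the dot product of a user embedding and an item embedding. A pure-Python sketch with made-up 3-dimensional embeddings makes the prediction step concrete:

```python
# Hypothetical learned embeddings; in the mxnet model these come from the
# two Embedding layers. The predicted score is their inner product.
user_emb = {0: [1, 3, -2], 1: [0, 5, 4]}
item_emb = {0: [1, 2, 0], 7: [2, -1, 3]}

def score(user, item):
    return sum(a * b for a, b in zip(user_emb[user], item_emb[item]))

print(score(0, 0))  # 1*1 + 3*2 + (-2)*0 = 7
print(score(1, 0))  # 0*1 + 5*2 + 4*0 = 10
```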
7,688 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2
Step1: 3
Step2: 4
Step3: 5
Step4: 7
Step5: 8
Step6: 9
Step7: 10 | Python Code:
# The story is stored in the file "story.txt".
f = open("story.txt", "r")
story = f.read()
print(story)
Explanation: 2: Reading the file in
Instructions
The story is stored in the "story.txt" file. Open the file and read the contents into the story variable.
Answer
End of explanation
# We can split strings into lists with the .split() method.
# If we use a space as the input to .split(), it will split based on the space.
text = "Bears are probably better than sharks, but I can't get close enough to one to be sure."
tokenized_text = text.split(" ")
tokenized_story = story.split(" ")
print(tokenized_story)
Explanation: 3: Tokenizing the file
Instructions
The story is loaded into the story variable.
Tokenize the story, and store the tokens into the tokenized_story variable.
Answer
End of explanation
# We can use the .replace function to replace punctuation in a string.
text = "Who really shot John F. Kennedy?"
text = text.replace("?", "?!")
# The question mark has been replaced with ?!.
##print(text)
# We can replace strings with blank spaces, meaning that they are just removed.
text = text.replace("?", "")
# The question mark is gone now.
##print(text)
no_punctuation_tokens = []
for token in tokenized_story:
for p in [".", ",", "\n", "'", ";", "?", "!", "-", ":"]:
token = token.replace(p, "")
no_punctuation_tokens.append(token)
print(no_punctuation_tokens)
Explanation: 4: Replacing punctuation
Instructions
The story has been loaded into tokenized_story.
Replace all of the punctuation in each of the tokens.
You'll need to loop through tokenized_story to do so.
You'll need to use multiple replace statements, one for each punctuation character to replace.
Append the token to no_punctuation_tokens once you are done replacing characters.
Don't forget to remove newlines!
Print out no_punctuation_tokens if you want to see which types of punctuation are still in the data.
Answer
End of explanation
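The nested `replace` loop above works, but Python has a dedicated tool for this job: `str.translate` with a deletion table strips every listed character in one pass. The tokens below are hypothetical, not taken from the story file:

```python
# str.maketrans("", "", chars) builds a table mapping each listed character
# to None, i.e. deletion.
table = str.maketrans("", "", ".,\n';?!-:")
tokens = ["Hello,", "world!\n", "it's", "done."]
cleaned = [t.translate(table) for t in tokens]
print(cleaned)  # ['Hello', 'world', 'its', 'done']
```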
# We can make strings all lowercase using the .lower() method.
text = "MY CAPS LOCK IS STUCK"
text = text.lower()
# The text is much nicer to read now.
print(text)
lowercase_tokens = []
for token in no_punctuation_tokens:
lowercase_tokens.append(token.lower())
print(lowercase_tokens)
Explanation: 5: Lowercasing the words
Instructions
The tokens without punctuation have been loaded into no_punctuation_tokens.
Loop through the tokens and lowercase each one.
Append each token to lowercase_tokens when you're done lowercasing.
Answer
End of explanation
# A simple function that takes in a number of miles, and turns it into kilometers
# The input at position 0 will be put into the miles variable.
def miles_to_km(miles):
# return is a special keyword that indicates that the function will output whatever comes after it.
return miles/0.62137
# Returns the number of kilometers equivalent to one mile
print(miles_to_km(1))
# Convert a from 10 miles to kilometers
a = 10
a = miles_to_km(a)
# We can convert and assign to a different variable
b = 50
c = miles_to_km(b)
fahrenheit = 80
celsius = (fahrenheit - 32)/1.8
def f2c(f):
c = (f - 32)/1.8
return c
celsius_100 = f2c(100)
celsius_150 = f2c(150)
print(celsius_100, celsius_150)
Explanation: 7: Making a basic function
Instructions
Define a function that takes degrees in fahrenheit as an input, and returns degrees celsius.
Use it to convert 100 degrees fahrenheit to celsius. Assign the result to celsius_100.
Use it to convert 150 degrees fahrenheit to celsius. Assign the result to celsius_150.
Answer
End of explanation
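The formula can be inverted to sanity-check the conversion: fahrenheit to celsius and back should give the starting value, up to float rounding. `c2f` is a new helper added for this check, not part of the exercise:

```python
def f2c(f):
    return (f - 32) / 1.8

def c2f(c):
    return c * 1.8 + 32

roundtrip = c2f(f2c(100))
print(roundtrip)  # approximately 100.0
```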
def split_string(text):
return text.split(" ")
sally = "Sally sells seashells by the seashore."
# This splits the string into a list.
print(split_string(sally))
# We can assign the output of a function to a variable.
sally_tokens = split_string(sally)
lowercase_me = "I wish I was in ALL lowercase"
def to_lowercase(text):
return text.lower()
lowercased_string = to_lowercase(lowercase_me)
print(lowercased_string)
Explanation: 8: Practice: functions
Instructions
Make a function that takes a string as input and outputs a lowercase version.
Then use it to turn the string lowercase_me to lowercase.
Assign the result to lowercased_string.
Answer
End of explanation
# Sometimes, you will have problems with your code that cause python to throw an exception.
# Don't worry, it happens to all of us many times a day.
# An exception means that the program can't run, so you'll get an error in the results view instead of the normal output.
# There are a few different types of exceptions.
# The first we'll look at is a SyntaxError.
# This means that something is typed incorrectly (statements misspelled, quotes missing, and so on)
a = ["Errors are no fun!", "But they can be fixed", "Just fix the syntax and everything will be fine"]
b = 5
for item in a:
if b == 5:
print(item)
Explanation: 9: Types of errors
Instructions
There are multiple syntax errors in the code cell below. You can tell because of the error showing up in the results panel. Fix the errors and get the code running properly. It should print all of the items in a.
Answer
End of explanation
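Syntax errors have to be fixed in the source itself, but many other exceptions happen at runtime and can be handled with try/except, which is the usual way to recover from bad input:

```python
# Convert text to an int, returning None instead of crashing on bad input.
def safe_int(text):
    try:
        return int(text)
    except ValueError:
        return None

print(safe_int("42"))   # 42
print(safe_int("six"))  # None
```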
a = 5
if a == 6:
print("6 is obviously the best number")
print("What's going on, guys?")
else:
print("I never liked that 6")
Explanation: 10: More syntax errors
Instructions
The code below has multiple syntax errors. Fix them so the code prints out "I never liked that 6"
Answer
End of explanation |
7,689 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TOC Thematic Report - February 2019 (Part 2
Step1: 2. Calculate annual trends
Step2: 1. 1990 to 2016
Step3: There are lots of warnings printed above, but the main one of interest is
Step4: It seems that one Irish station has no associated data. This is as expected, because all the data supplied by Julian for this site comes from "near-shore" sampling (rather than "open water") and these have been omitted from the data upload - see here for details.
3. Basic checking
3.1. Boxplots
Step5: 4. Data restructuring
The code below is taken from here. It is used to generate output files in the format requested by Heleen.
4.1. Combine datasets
Step6: 7.2. Check record completeness
See e-mail from Heleen received 25/10/2016 at 15:56.
Step7: 4.3. SO4 at Abiskojaure
SO4 for this station ('station_id=36458' in the "core" dataset) should be removed. See here.
Step8: 7.4. Relative slope
Step9: 7.5. Tidy
Step10: 7.6. Convert to "wide" format | Python Code:
# Select projects
prj_grid = nivapy.da.select_resa_projects(eng)
prj_grid
prj_df = prj_grid.get_selected_df()
print(len(prj_df))
prj_df
# Get stations
stn_df = nivapy.da.select_resa_project_stations(prj_df, eng)
print(len(stn_df))
stn_df.head()
# Map
nivapy.spatial.quickmap(stn_df, popup='station_code')
Explanation: TOC Thematic Report - February 2019 (Part 2: Annual trends)
1. Get list of stations
End of explanation
# User input
# Specify projects of interest
proj_list = ['ICPWaters US', 'ICPWaters NO', 'ICPWaters CA',
'ICPWaters UK', 'ICPWaters FI', 'ICPWaters SE',
'ICPWaters CZ', 'ICPWaters IT', 'ICPWaters PL',
'ICPWaters CH', 'ICPWaters LV', 'ICPWaters EE',
'ICPWaters IE', 'ICPWaters MD', 'ICPWaters DE']
# Specify results folder
res_fold = (r'../../../Thematic_Trends_Report_2019/results')
Explanation: 2. Calculate annual trends
End of explanation
# Specify period of interest
st_yr, end_yr = 1990, 2016
# Build output paths
plot_fold = os.path.join(res_fold, 'trends_plots_%s-%s' % (st_yr, end_yr))
res_csv = os.path.join(res_fold, 'res_%s-%s.csv' % (st_yr, end_yr))
dup_csv = os.path.join(res_fold, 'dup_%s-%s.csv' % (st_yr, end_yr))
nd_csv = os.path.join(res_fold, 'nd_%s-%s.csv' % (st_yr, end_yr))
# Run analysis
res_df, dup_df, nd_df = resa2_trends.run_trend_analysis(proj_list,
eng,
st_yr=st_yr,
end_yr=end_yr,
plot=False,
fold=plot_fold)
# Delete mk_std_dev col as not relevant here
del res_df['mk_std_dev']
# Write output
res_df.to_csv(res_csv, index=False)
dup_df.to_csv(dup_csv, index=False)
if nd_df is not None:
nd_df.to_csv(nd_csv, index=False)
Explanation: 2.1. 1990 to 2016
End of explanation
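The mk_stat and sen_slp columns returned above come from a Mann-Kendall-style trend test and Sen's slope estimator. The internals of resa2_trends aren't shown here, but the two statistics themselves are simple; a minimal stdlib sketch (an illustration, not the project's actual implementation):
```python
from itertools import combinations
from statistics import median

def mann_kendall_s(y):
    # S = sum of sign(y_j - y_i) over all pairs with i < j
    return sum((yj > yi) - (yj < yi) for yi, yj in combinations(y, 2))

def sens_slope(y):
    # Median of all pairwise slopes (y_j - y_i) / (j - i)
    n = len(y)
    slopes = [(y[j] - y[i]) / (j - i) for i in range(n) for j in range(i + 1, n)]
    return median(slopes)

series = [2.0, 2.5, 2.4, 3.0, 3.2]  # e.g. annual medians for one station/parameter
s_stat = mann_kendall_s(series)
slope = sens_slope(series)
print(s_stat, slope)
```
A positive S indicates an increasing trend; Sen's slope is the robust per-year rate of change reported in sen_slp.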
# Get stations with no data
stn_df[stn_df['station_id'].isin(nd_df['station_id'])]
Explanation: There are lots of warnings printed above, but the main one of interest is:
Some stations have no relevant data in the period specified.
Which station(s) are missing data?
End of explanation
# Set up plot
fig = plt.figure(figsize=(20,10))
sn.set(style="ticks", palette="muted",
color_codes=True, font_scale=2)
# Horizontal boxplots
ax = sn.boxplot(x="mean", y="par_id", data=res_df,
whis=np.inf, color="c")
# Add "raw" data points for each observation, with some "jitter"
# to make them visible
sn.stripplot(x="mean", y="par_id", data=res_df, jitter=True,
size=3, color=".3", linewidth=0)
# Remove axis lines
sn.despine(trim=True)
Explanation: It seems that one Irish station has no associated data. This is as expected, because all the data supplied by Julian for this site comes from "near-shore" sampling (rather than "open water") and these have been omitted from the data upload - see here for details.
3. Basic checking
3.1. Boxplots
End of explanation
# Change 'period' col to 'data_period' and add 'analysis_period'
res_df['data_period'] = res_df['period']
del res_df['period']
res_df['analysis_period'] = '1990-2016'
# Join
df = pd.merge(res_df, stn_df, how='left', on='station_id')
# Re-order columns
df = df[['station_id',
'station_code', 'station_name',
'latitude', 'longitude', 'analysis_period', 'data_period',
'par_id', 'non_missing', 'n_start', 'n_end', 'mean', 'median',
'std_dev', 'mk_stat', 'norm_mk_stat', 'mk_p_val', 'trend',
'sen_slp']]
df.head()
Explanation: 4. Data restructuring
The code below is taken from here. It is used to generate output files in the format requested by Heleen.
4.1. Combine datasets
End of explanation
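The how='left' in the merge above matters: every row of res_df is kept, and any station_id with no match in stn_df simply gets NaN in the station columns (the same behaviour that surfaced for the Irish station with no data). A toy illustration:
```python
import pandas as pd

res = pd.DataFrame({'station_id': [1, 2, 3], 'mean': [0.5, 0.7, 0.2]})
stn = pd.DataFrame({'station_id': [1, 2], 'station_name': ['A', 'B']})

# Left merge: all rows of `res` survive; station 3 gets NaN for station_name
merged = pd.merge(res, stn, how='left', on='station_id')
n_unmatched = int(merged['station_name'].isna().sum())
print(merged)
```
An inner merge would instead silently drop station 3, which is why the left merge is the safer choice when you want to notice missing stations.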
def include(row):
if ((row['analysis_period'] == '1990-2016') &
(row['n_start'] >= 2) &
(row['n_end'] >= 2) &
(row['non_missing'] >= 18)):
return 'yes'
elif ((row['analysis_period'] == '1990-2004') &
(row['n_start'] >= 2) &
(row['n_end'] >= 2) &
(row['non_missing'] >= 10)):
return 'yes'
elif ((row['analysis_period'] == '2002-2016') &
(row['n_start'] >= 2) &
(row['n_end'] >= 2) &
(row['non_missing'] >= 10)):
return 'yes'
else:
return 'no'
df['include'] = df.apply(include, axis=1)
Explanation: 4.2. Check record completeness
See e-mail from Heleen received 25/10/2016 at 15:56. The 'non_missing' threshold is based on 65% of the data period (e.g. 65% of 27 years for 1990 to 2016).
End of explanation
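As a quick check, the hard-coded cut-offs in the function above are consistent with the 65% rule (rounding up):
```python
import math

# 1990-2016 inclusive is 27 years; 1990-2004 and 2002-2016 are 15 years each.
full_window = math.ceil(0.65 * 27)   # matches non_missing >= 18
short_window = math.ceil(0.65 * 15)  # matches non_missing >= 10
print(full_window, short_window)
```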
# Remove sulphate-related series at Abiskojaure
df = df.query('not((station_id==36458) and ((par_id=="ESO4") or '
'(par_id=="ESO4X") or '
'(par_id=="ESO4_ECl")))')
Explanation: 4.3. SO4 at Abiskojaure
SO4 for this station ('station_id=36458' in the "core" dataset) should be removed. See here.
End of explanation
# Relative slope
df['rel_sen_slp'] = df['sen_slp'] / df['median']
Explanation: 4.4. Relative slope
End of explanation
# Remove unwanted cols
df.drop(labels=['mean', 'n_end', 'n_start', 'mk_stat', 'norm_mk_stat'],
axis=1, inplace=True)
# Reorder columns
df = df[['station_id', 'station_code',
'station_name', 'latitude', 'longitude', 'analysis_period',
'data_period', 'par_id', 'non_missing', 'median', 'std_dev',
'mk_p_val', 'trend', 'sen_slp', 'rel_sen_slp', 'include']]
# Write to output
out_path = os.path.join(res_fold, 'toc_core_trends_long_format.csv')
df.to_csv(out_path, index=False, encoding='utf-8')
df.head()
Explanation: 4.5. Tidy
End of explanation
del df['data_period']
# Melt to "long" format
melt_df = pd.melt(df,
id_vars=['station_id', 'station_code',
'station_name', 'latitude', 'longitude',
'analysis_period', 'par_id', 'include'],
var_name='stat')
# Get only values where include='yes'
melt_df = melt_df.query('include == "yes"')
del melt_df['include']
# Build multi-index on everything except "value"
melt_df.set_index(['station_id', 'station_code',
'station_name', 'latitude', 'longitude', 'par_id',
'analysis_period',
'stat'], inplace=True)
# Unstack levels of interest to columns
wide_df = melt_df.unstack(level=['par_id', 'analysis_period', 'stat'])
# Drop unwanted "value" level in index
wide_df.columns = wide_df.columns.droplevel(0)
# Replace multi-index with separate components concatenated with '_'
wide_df.columns = ["_".join(item) for item in wide_df.columns]
# Reset multiindex on rows
wide_df = wide_df.reset_index()
# Save output
out_path = os.path.join(res_fold, 'toc_trends_wide_format.csv')
wide_df.to_csv(out_path, index=False, encoding='utf-8')
wide_df.head()
Explanation: 4.6. Convert to "wide" format
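The long-to-wide reshape above can be hard to follow at full scale; the same melt → set_index → unstack pattern on a toy table:
```python
import pandas as pd

df_long = pd.DataFrame({
    'station': ['s1', 's1', 's2', 's2'],
    'par_id':  ['TOC', 'SO4', 'TOC', 'SO4'],
    'median':  [5.0, 1.2, 7.5, 0.9],
})

# Melt the stat columns (here just 'median') into long format
melted = pd.melt(df_long, id_vars=['station', 'par_id'], var_name='stat')

# Unstack par_id and stat into columns, then flatten the column MultiIndex
wide = melted.set_index(['station', 'par_id', 'stat']).unstack(level=['par_id', 'stat'])
wide.columns = wide.columns.droplevel(0)              # drop the 'value' level
wide.columns = ["_".join(item) for item in wide.columns]
wide = wide.reset_index()

s1_toc = wide.loc[wide['station'] == 's1', 'TOC_median'].iloc[0]
print(wide)
```
Each station ends up as one row, with one column per parameter/statistic combination — exactly the shape of the toc_trends_wide_format.csv output.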
End of explanation |
7,690 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spatial Data Processing with PySAL & Pandas
PySAL has two simple ways to read in data. But, first, you need to get the path from where your notebook is running on your computer to the place the data is. For example, to find where the notebook is running
Step1: PySAL has a command that it uses to get the paths of its example datasets. Let's work with a commonly-used dataset first.
Step2: For the purposes of this part of the workshop, we'll use the NAT.dbf example data, and the usjoin.csv data.
Step3: Working with shapefiles
To read in a shapefile, we will need the path to the file.
Step4: Then, we open the file using the ps.open command
Step5: f is what we call a "file handle." That means that it only points to the data and provides ways to work with it. By itself, it does not read the whole dataset into memory. To see basic information about the file, we can use a few different methods.
For instance, the header of the file, which contains most of the metadata about the file
Step6: To actually read in the shapes from memory, you can use the following commands
Step7: So, all 3085 polygons have been read in from file. These are stored in PySAL shape objects, which can be used by PySAL and can be converted to other Python shape objects. ]
They typically have a few methods. So, since we've read in polygonal data, we can get some properties about the polygons. Let's just have a look at the first polygon
Step8: While in the Jupyter Notebook, you can examine what properties an object has by using the tab key.
Step9: Working with Data Tables
When you're working with tables of data, like a csv or dbf, you can extract your data in the following way. Let's open the dbf file we got the path for above.
Step10: Just like with the shapefile, we can examine the header of the dbf file
Step11: So, the header is a list containing the names of all of the fields we can read. If we were interested in getting the ['NAME', 'STATE_NAME', 'HR90', 'HR80'] fields.
If we just wanted to grab the data of interest, HR90, we can use either by_col or by_col_array, depending on the format we want the resulting data in
Step12: As you can see, the by_col function returns a list of data, with no shape. It can only return one column at a time
Step13: This error message is called a "traceback," as you see in the top right, and it usually provides feedback on why the previous command did not execute correctly. Here, you see that one-too-many arguments was provided to __call__, which tells us we cannot pass as many arguments as we did to by_col.
If you want to read in many columns at once and store them to an array, use by_col_array
Step14: It is best to use by_col_array on data of a single type. That is, if you read in a lot of columns, some of them numbers and some of them strings, all columns will get converted to the same datatype
Step15: Note that the numerical columns, HR90 & HR80 are now considered strings, since they show up with the single tickmarks around them, like '0.0'.
These methods work similarly for .csv files as well
Using Pandas with PySAL
A new functionality added to PySAL recently allows you to work with shapefile/dbf pairs using Pandas. This optional extension is only turned on if you have Pandas installed. The extension is the ps.pdio module
Step16: To use it, you can read in shapefile/dbf pairs using the ps.pdio.read_files command.
Step17: This reads in the entire database table and adds a column to the end, called geometry, that stores the geometries read in from the shapefile.
Now, you can work with it like a standard pandas dataframe.
Step18: The read_files function only works on shapefile/dbf pairs. If you need to read in data using CSVs, use pandas directly
Step19: The nice thing about working with pandas dataframes is that they have very powerful baked-in support for relational-style queries. By this, I mean that it is very easy to find things like
Step20: Or, to get the rows of the table that are in Arizona, we can use the query function of the dataframe
Step21: Behind the scenes, this uses a fast vectorized library, numexpr, to essentially do the following.
First, compare each row's STATE_NAME column to 'Arizona' and return True if the row matches
Step22: Then, use that to filter out rows where the condition is true
Step23: We might need this behind the scenes knowledge when we want to chain together conditions, or when we need to do spatial queries.
This is because spatial queries are somewhat more complex. Let's say, for example, we want all of the counties in the US to the West of -121 longitude. We need a way to express that question. Ideally, we want something like
Step24: If we use this as a filter on the table, we can get only the rows that match that condition, just like we did for the STATE_NAME query
Step25: This works on any type of spatial query.
For instance, if we wanted to find all of the counties that are within a threshold distance from an observation's centroid, we can do it in the following way.
First, specify the observation. Here, we'll use Cook County, IL
Step26: Moving in and out of the dataframe
Most things in PySAL will be explicit about what type their input should be. Most of the time, PySAL functions require either lists or arrays. This is why the file-handler methods are the default IO method in PySAL
Step27: To extract many columns, you must select the columns you want and call their .values attribute.
If we were interested in grabbing all of the HR variables in the dataframe, we could first select those column names
Step28: We can use this to focus only on the columns we want
Step29: With this, calling .values gives an array containing all of the entries in this subset of the table | Python Code:
!pwd
Explanation: Spatial Data Processing with PySAL & Pandas
PySAL has two simple ways to read in data. But, first, you need to get the path from where your notebook is running on your computer to the place the data is. For example, to find where the notebook is running:
End of explanation
dbf_path = ps.examples.get_path('NAT.dbf')
print(dbf_path)
Explanation: PySAL has a command that it uses to get the paths of its example datasets. Let's work with a commonly-used dataset first.
End of explanation
csv_path = ps.examples.get_path('usjoin.csv')
Explanation: For the purposes of this part of the workshop, we'll use the NAT.dbf example data, and the usjoin.csv data.
End of explanation
shp_path = ps.examples.get_path('NAT.shp')
print(shp_path)
Explanation: Working with shapefiles
To read in a shapefile, we will need the path to the file.
End of explanation
f = ps.open(shp_path)
Explanation: Then, we open the file using the ps.open command:
End of explanation
f.header
Explanation: f is what we call a "file handle." That means that it only points to the data and provides ways to work with it. By itself, it does not read the whole dataset into memory. To see basic information about the file, we can use a few different methods.
For instance, the header of the file, which contains most of the metadata about the file:
End of explanation
f.by_row(14) #gets the shape at index 14 from the file
all_polygons = f.read() #reads in all polygons from memory
len(all_polygons)
Explanation: To actually read in the shapes from memory, you can use the following commands:
End of explanation
all_polygons[0].centroid #the centroid of the first polygon
all_polygons[0].area
all_polygons[0].perimeter
Explanation: So, all 3085 polygons have been read in from file. These are stored in PySAL shape objects, which can be used by PySAL and can be converted to other Python shape objects.
They typically have a few methods. So, since we've read in polygonal data, we can get some properties about the polygons. Let's just have a look at the first polygon:
End of explanation
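The area and perimeter above are computed by PySAL from the polygon's ring coordinates; the area, for example, follows the standard shoelace formula. A stdlib sketch on a toy ring (for illustration only — use the library's own .area in practice):
```python
def shoelace_area(coords):
    # coords: list of (x, y) vertices; the ring may repeat the first point at the end
    if coords[0] == coords[-1]:
        coords = coords[:-1]
    n = len(coords)
    acc = 0.0
    for i in range(n):
        x1, y1 = coords[i]
        x2, y2 = coords[(i + 1) % n]
        acc += x1 * y2 - x2 * y1
    return abs(acc) / 2.0

unit_square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
area = shoelace_area(unit_square)
print(area)
```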
polygon = all_polygons[0]
polygon. #press tab when the cursor is right after the dot
Explanation: While in the Jupyter Notebook, you can examine what properties an object has by using the tab key.
End of explanation
f = ps.open(dbf_path)
Explanation: Working with Data Tables
When you're working with tables of data, like a csv or dbf, you can extract your data in the following way. Let's open the dbf file we got the path for above.
End of explanation
f.header
Explanation: Just like with the shapefile, we can examine the header of the dbf file
End of explanation
HR90 = f.by_col('HR90')
print(type(HR90).__name__, HR90[0:5])
HR90 = f.by_col_array('HR90')
print(type(HR90).__name__, HR90[0:5])
Explanation: So, the header is a list containing the names of all of the fields we can read. If we were interested in getting the ['NAME', 'STATE_NAME', 'HR90', 'HR80'] fields.
If we just wanted to grab the data of interest, HR90, we can use either by_col or by_col_array, depending on the format we want the resulting data in:
End of explanation
HRs = f.by_col('HR90', 'HR80')
Explanation: As you can see, the by_col function returns a list of data, with no shape. It can only return one column at a time:
End of explanation
HRs = f.by_col_array('HR90', 'HR80')
HRs
Explanation: This error message is called a "traceback," as you see in the top right, and it usually provides feedback on why the previous command did not execute correctly. Here, you see that one-too-many arguments was provided to __call__, which tells us we cannot pass as many arguments as we did to by_col.
If you want to read in many columns at once and store them to an array, use by_col_array:
End of explanation
allcolumns = f.by_col_array(['NAME', 'STATE_NAME', 'HR90', 'HR80'])
allcolumns
Explanation: It is best to use by_col_array on data of a single type. That is, if you read in a lot of columns, some of them numbers and some of them strings, all columns will get converted to the same datatype:
End of explanation
ps.pdio
Explanation: Note that the numerical columns, HR90 & HR80 are now considered strings, since they show up with the single tickmarks around them, like '0.0'.
These methods work similarly for .csv files as well
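This coercion is inherited from NumPy: an array has a single dtype, so mixing strings and numbers upcasts everything to strings. A quick check with plain NumPy:
```python
import numpy as np

# Mixing strings and floats forces a single (string) dtype for the whole array
mixed = np.array(['Lake of the Woods', 'Minnesota', 0.0, 8.2])
kind = mixed.dtype.kind  # 'U' -> unicode strings
print(mixed.dtype, mixed)
```
Note that the float 0.0 comes back as the string '0.0', which is exactly what happens to HR90 and HR80 above.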
Using Pandas with PySAL
A new functionality added to PySAL recently allows you to work with shapefile/dbf pairs using Pandas. This optional extension is only turned on if you have Pandas installed. The extension is the ps.pdio module:
End of explanation
data_table = ps.pdio.read_files(shp_path)
Explanation: To use it, you can read in shapefile/dbf pairs using the ps.pdio.read_files command.
End of explanation
data_table.head()
Explanation: This reads in the entire database table and adds a column to the end, called geometry, that stores the geometries read in from the shapefile.
Now, you can work with it like a standard pandas dataframe.
End of explanation
usjoin = pd.read_csv(csv_path)
#usjoin = ps.pdio.read_files(usjoin) #will not work, not a shp/dbf pair
Explanation: The read_files function only works on shapefile/dbf pairs. If you need to read in data using CSVs, use pandas directly:
End of explanation
data_table.groupby("STATE_NAME").size()
Explanation: The nice thing about working with pandas dataframes is that they have very powerful baked-in support for relational-style queries. By this, I mean that it is very easy to find things like:
The number of counties in each state:
End of explanation
data_table.query('STATE_NAME == "Arizona"')
Explanation: Or, to get the rows of the table that are in Arizona, we can use the query function of the dataframe:
End of explanation
data_table.STATE_NAME == 'Arizona'
Explanation: Behind the scenes, this uses a fast vectorized library, numexpr, to essentially do the following.
First, compare each row's STATE_NAME column to 'Arizona' and return True if the row matches:
End of explanation
data_table[data_table.STATE_NAME == 'Arizona']
Explanation: Then, use that to filter out rows where the condition is true:
End of explanation
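So df.query(...) and the boolean-mask form are two spellings of the same operation; on a toy frame the results are identical:
```python
import pandas as pd

df = pd.DataFrame({'STATE_NAME': ['Arizona', 'Ohio', 'Arizona'],
                   'HR90': [8.9, 5.2, 7.1]})

via_query = df.query('STATE_NAME == "Arizona"')
via_mask = df[df.STATE_NAME == 'Arizona']

# Same rows, same index, same dtypes
same = via_query.equals(via_mask)
print(same)
```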
data_table.geometry.apply(lambda poly: poly.centroid[0] < -121)
Explanation: We might need this behind the scenes knowledge when we want to chain together conditions, or when we need to do spatial queries.
This is because spatial queries are somewhat more complex. Let's say, for example, we want all of the counties in the US to the West of -121 longitude. We need a way to express that question. Ideally, we want something like:
SELECT
*
FROM
data_table
WHERE
x_centroid < -121
So, let's refer to an arbitrary polygon in the the dataframe's geometry column as poly. The centroid of a PySAL polygon is stored as an (X,Y) pair, so the longidtude is the first element of the pair, poly.centroid[0].
Then, applying this condition to each geometry, we get the same kind of filter we used above to grab only counties in Arizona:
End of explanation
data_table[data_table.geometry.apply(lambda x: x.centroid[0] < -121)]
Explanation: If we use this as a filter on the table, we can get only the rows that match that condition, just like we did for the STATE_NAME query:
End of explanation
data_table.query('(NAME == "Cook") & (STATE_NAME == "Illinois")')
geom = data_table.query('(NAME == "Cook") & (STATE_NAME == "Illinois")').geometry
geom.values[0].centroid
cook_county_centroid = geom.values[0].centroid
import scipy.spatial.distance as d
def near_target_point(polygon, target=cook_county_centroid, threshold=1):
return d.euclidean(polygon.centroid, target) < threshold
data_table[data_table.geometry.apply(near_target_point)]
Explanation: This works on any type of spatial query.
For instance, if we wanted to find all of the counties that are within a threshold distance from an observation's centroid, we can do it in the following way.
First, specify the observation. Here, we'll use Cook County, IL:
End of explanation
data_table.NAME.tolist()
Explanation: Moving in and out of the dataframe
Most things in PySAL will be explicit about what type their input should be. Most of the time, PySAL functions require either lists or arrays. This is why the file-handler methods are the default IO method in PySAL: the rest of the computational tools are built around their datatypes.
However, it is very easy to get the correct datatype from Pandas using the values and tolist commands.
tolist() will convert its entries to a list. But, it can only be called on individual columns (called Series in pandas documentation)
So, to turn the NAME column into a list:
End of explanation
HRs = [col for col in data_table.columns if col.startswith('HR')]
Explanation: To extract many columns, you must select the columns you want and call their .values attribute.
If we were interested in grabbing all of the HR variables in the dataframe, we could first select those column names:
End of explanation
data_table[HRs]
Explanation: We can use this to focus only on the columns we want:
End of explanation
data_table[HRs].values
Explanation: With this, calling .values gives an array containing all of the entries in this subset of the table:
End of explanation |
7,691 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
26 May 2016
I trained a simple fp_linear network (FingerprintLayer -> Linear Regression) to learn how to count the sum of all of the nodes. This was a sanity check before progressing further with training on actual HIV data.
I was expecting that most of the weights and biases should be some really small number close to zero, while the final linear regression weights should be something close to the array [0, 1, 2, 3, 4, ..., N] for N features.
Step1: Okay, focus in on the LinReg weights
Step2: Exactly what I was expecting! Yay!
I'm also curious to see what the weights and biases look like for score_sine. | Python Code:
import pickle as pkl
from pprint import pprint
def open_wb(path):
with open(path, 'rb') as f:
wb = pkl.load(f)
return wb
wb = open_wb('../experiments/wbs/fp_linear-cf.score_sum-5000_iters-10_wb.pkl')
pprint(wb)
Explanation: 26 May 2016
I trained a simple fp_linear network (FingerprintLayer -> Linear Regression) to learn how to count the sum of all of the nodes. This was a sanity check before progressing further with training on actual HIV data.
I was expecting that most of the weights and biases should be some really small number close to zero, while the final linear regression weights should be something close to the array [0, 1, 2, 3, 4, ..., N] for N features.
End of explanation
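To see why weights like [0, 1, 2, ..., N] would be the right answer here: if (as I understand it) feature k of the fingerprint counts items of "value" k, then the target sum is exactly the dot product of the feature vector with [0, 1, ..., N], so a linear readout can solve the task with those weights and zero bias. A toy check of that arithmetic (the feature interpretation is my assumption, not taken from the experiment code):
```python
# Hypothetical feature vector: counts[k] = number of items with value k
counts = [2, 1, 0, 3]                # two 0s, one 1, three 3s
weights = list(range(len(counts)))   # [0, 1, 2, 3]

target_sum = 2 * 0 + 1 * 1 + 3 * 3   # the sum the network should predict
linear_output = sum(w * c for w, c in zip(weights, counts))
print(linear_output)
```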
wb['layer1_LinearRegressionLayer']['linweights'].shape
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
sns.set_style('white')
sns.set_context('poster')
%matplotlib inline
plt.bar(np.arange(1, 11) - 0.35, wb['layer1_LinearRegressionLayer']['linweights'])
Explanation: Okay, focus in on the LinReg weights:
End of explanation
wb = open_wb('../experiments/wbs/')
Explanation: Exactly what I was expecting! Yay!
I'm also curious to see what the weights and biases look like for score_sine.
End of explanation |
7,692 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Report04 - Nathan Yee
This notebook contains report04 for computational Bayesian statistics, fall 2016
MIT License
Step1: Parking meter theft
From DASL(http://lib.stat.cmu.edu/DASL/Datafiles/brinkdat.html)
Step2: Next, we need to normalize the CON (contractor) collections by the amount gathered by the CITY. This will give us a ratio of contractor collections to city collections.
Step3: Next, lets see how the means of the RATIO data compare between the general contractors and BRINK.
Step5: We see that for every dollar gathered by the city, general contractors report 244.7 dollars while BRINK only reports 229.6 dollars.
To further investigate the differences between BRINK and general contractors, we will use bayesian statistics to see what values of mu and sigma of normal distributions fit our data. So, we create a class that accepts mu's and sigmas's. The likelihhod function allows us to iterate over our hypotheses and see which values of mu and sigma are most likely.
Step6: Now, we want to calculate the marginal distribution of mu for the general contractors and BRINK. We can determine a distribution of how much money was stolen by calculating the difference between these distributions.
First we will generate sequences of mu's and sigmas.
Step7: Next, create our hypotheses as pairs of these mu's and sigmas.
Step8: Next, we use a contour plot to make sure we have selected a proper range of mu's and sigmas.
Step9: We see that our values of mu and sigma fit well within cutoff range.
Now, do the same for BRINK.
Step10: Finally, to get a distribution of possible ratio values, we extract the marginal distributions of mu from both the general contractors and BRINK.
Step11: To see how much money was stolen, we compute the difference of the marginal distributions. This immediately gives us difference of the means of the ratios as we could have calcuated earlier.
Step12: To calculate the probability that money was stolen from the city, we simply look at a plot of the cdf of pmf_diff and see the probability that the difference is less than zero.
Step13: And we have answered the first question
Step14: Above we see a plot of stolen money in millions. We have also calculated a credible interval that tells us that there is a 50% chance that Brink stole between 1.4 to 3.6 million dollars. Interestingly, our distribution tells us that there is a probability that BRINK actually gave the city money. However, this is extremely unlikely and is an artifact of our normal distribution.
Smoking kills, analysis of smoking in different states
http
Step15: Data seems reasonable, now let's see if a linear regression is appropriate for predicting distributions.
Step17: Data looks pretty linear. Now let's get the slope and intercept of the line of least squares. Abstract numpy's least squares function using a function of our own.
Step18: To use our leastSquares function, we first create x and y vectors. Where x is the cig, y is the lung. Then call leastSquares to get the slope and intercept.
Step19: Now plot the line and the original data to see if it looks linear.
Step20: Nice. Based on the plot above, we can conclude that bayesian linear regression will give us reasonable distributions for predicting future values. Now we want to create our hypotheses. Each hypothesis will consist of a intercept, slope and sigma.
Step21: Create hypos and our update data.
Step23: Create least squares suite. The likelihood function will depend the data and normal distributions for each hypothesis.
Step24: Now use our hypos to create the LeastSquaresHypos suite.
Step25: Update LeastSquaresHypos with our data.
Step26: Next, use marginal distributions to see how our good our intercept, slope, and sigma guesses were. Note that I have already found values that work well. There aren't really guesses at this point.
For the intercepts, we choose to stay relatively close to our original intercept. This is the one value we sacrifice accuracy on. This ends up being OK because the slopes and sigmas are in good ranges. For a large value of x, slope and sigma diverge faster than intercept.
Step27: All of the important slopes are contained in our guesses.
Step28: All of the important sigmas are contained in our guesses.
Step31: Next, we want to sample random data from our hypotheses. We will do this by writing two functions, getY, and getRandomData. getRandomData calls getY to the random y data.
Step32: Now, lets see what our random data looks like plotted underneath the original data and line of least squares.
Step34: From the plot above, our random data seems to make a distribution in the direction of the line. However, we are unable to see exactly what this distribution looks like as many of the random blue points overlap. We will visualize exactly what is going on with a desnity function.
To get the density (intensity), we will make a bucketData function. This takes in a range of x and y values and sees how many of the points fall into each bucket.
Step36: Now create function to unpack the buckets into usable x (cigs), y (lungs), z(densities/intencities) arrays.
Step37: Use said function to unpack buckets.
Step44: Now, to make our lives easier, we will plot the density plot in Mathematica. But first, we need to export our data as CSV's.
Step46: Below we have our density plot underneath our original data / fitted line. Based on this plot, we can reason that our previous calculations seem reasonable.
x axis = Number of cigarettes sold (smoked) per capita
y axis = Death rate per 1,000 from lung cancer
<img src="cigsLungsAllPlots.png" alt="Density Plot with orignal data/fit" height="400" width="400">
Now, we want to answer our original question. If California were to lower its number of cigarette sold per capita from 28.60 to 20, how many people would die from lung cancer per 1,000 people? Now, I will do this by finding the most likeli distribution for a state at CIG = 20 and subtract the known California value (CIG = 28.60, LUNG = 22.07) from that distribution.
First make a function that makes Pmf's for a given CIG value.
Step47: Use our function to make a LUNGS distribution for a state at CIG = 20.
Step48: Now make our California lives saved prediction by subtracting 22.07 from the previous distribution
Step49: Based on the above distribution, we that California will most likely reduce its death by lung cancer per 1,000 people from 22.07 to 17.06. We can also use this distribution to see what values are credible. | Python Code:
from __future__ import print_function, division
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import math
import numpy as np
from thinkbayes2 import Pmf, Cdf, Suite, Joint, EvalNormalPdf, MakeNormalPmf, MakeMixture
import thinkplot
import matplotlib.pyplot as plt
import pandas as pd
Explanation: Report04 - Nathan Yee
This notebook contains report04 for computational Bayesian statistics, fall 2016
MIT License: https://opensource.org/licenses/MIT
End of explanation
df = pd.read_csv('parking.csv', skiprows=17, delimiter='\t')
df.head()
Explanation: Parking meter theft
From DASL(http://lib.stat.cmu.edu/DASL/Datafiles/brinkdat.html)
The variable CON in the datafile Parking Meter Theft represents monthly parking meter collections by the principal contractor in New York City from May 1977 to March 1981. In addition to contractor collections, the city made collections from a number of "control" meters close to City Hall. These are recorded under the variable CITY. From May 1978 to April 1980 the contractor was Brink's. In 1983 the city presented evidence in court that Brink's employees had been stealing parking meter moneys - delivering to the city less than the total collections. The court was satisfied that theft had taken place, but the actual amount of shortage was in question. Assume that there was no theft before or after Brink's tenure and estimate the monthly shortage and its 95% confidence limits.
So we are asking two questions. What is the probability that money has been stolen? And how much money was stolen?
This problem is similar to that of "Improving Reading Ability" by Allen Downey
This is a repeat problem from the last report, but I've spent a couple of hours revising my calculations, improving explanations, and cleaning up rushed code.
To do this, we will use a series of distributions and see which are most likely based on the parking meter data. To start, we load our data from the csv file.
End of explanation
df['RATIO'] = df['CON'] / df['CITY']
Explanation: Next, we need to normalize the CON (contractor) collections by the amount gathered by the CITY. This will give us a ratio of contractor collections to city collections.
End of explanation
grouped = df.groupby('BRINK')
for name, group in grouped:
print(name, group.RATIO.mean())
Explanation: Next, let's see how the means of the RATIO data compare between the general contractors and BRINK.
End of explanation
from scipy.stats import norm
class Normal(Suite, Joint):
def Likelihood(self, data, hypo):
Computes the likelihood of a pair of mu and sigma given data. In this case, our
data consists of the ratios of contractor collections to city collections.
Args:
data: sequence of ratios of contractor collections to city collections.
hypo: mu, sigma
Returns:
The likelihood of the data under the particular mu and sigma
mu, sigma = hypo
likes = norm.pdf(data, mu, sigma)
return np.prod(likes)
Explanation: We see that for every dollar gathered by the city, general contractors report 244.7 dollars while BRINK only reports 229.6 dollars.
To further investigate the differences between BRINK and general contractors, we will use bayesian statistics to see what values of mu and sigma of normal distributions fit our data. So, we create a class that accepts mu's and sigma's. The likelihood function allows us to iterate over our hypotheses and see which values of mu and sigma are most likely.
End of explanation
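To see the grid-likelihood idea in isolation, here is a minimal numpy-only sketch of the same computation the Likelihood method performs (the sample ratios below are made up for illustration, not taken from the dataset): a hypothesis whose mean sits near the data should score higher than one far away.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2), evaluated elementwise.
    x = np.asarray(x, dtype=float)
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def grid_likelihood(data, hypo):
    # Same idea as Normal.Likelihood: product of densities under one (mu, sigma).
    mu, sigma = hypo
    return float(np.prod(normal_pdf(data, mu, sigma)))

ratios = [230.0, 245.0, 250.0]            # illustrative values only
close = grid_likelihood(ratios, (242.0, 20.0))
far = grid_likelihood(ratios, (300.0, 20.0))
```

As expected, `close > far`, which is exactly the ordering the Suite exploits when it reweights hypotheses during an update.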
mus = np.linspace(210, 270, 301)
sigmas = np.linspace(10, 65, 301)
Explanation: Now, we want to calculate the marginal distribution of mu for the general contractors and BRINK. We can determine a distribution of how much money was stolen by calculating the difference between these distributions.
First we will generate sequences of mu's and sigmas.
End of explanation
from itertools import product
general = Normal(product(mus, sigmas))
data = df[df.BRINK==0].RATIO
general.Update(data)
Explanation: Next, create our hypotheses as pairs of these mu's and sigmas.
End of explanation
thinkplot.Contour(general, pcolor=True)
thinkplot.Config(xlabel='mu', ylabel='sigma')
Explanation: Next, we use a contour plot to make sure we have selected a proper range of mu's and sigmas.
End of explanation
brink = Normal(product(mus, sigmas))
data = df[df.BRINK==1].RATIO
brink.Update(data)
thinkplot.Contour(brink, pcolor=True)
thinkplot.Config(xlabel='mu', ylabel='sigma')
Explanation: We see that our values of mu and sigma fit well within the cutoff range.
Now, do the same for BRINK.
End of explanation
general_mu = general.Marginal(0)
thinkplot.Pdf(general_mu)
thinkplot.Config(xlabel='mu', ylabel='Pmf')
BRINK_mu = brink.Marginal(0)
thinkplot.Pdf(BRINK_mu)
thinkplot.Config(xlabel='mu', ylabel='Pmf')
Explanation: Finally, to get a distribution of possible ratio values, we extract the marginal distributions of mu from both the general contractors and BRINK.
End of explanation
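Marginalization itself is simple: a joint pmf over (mu, sigma) pairs collapses onto one coordinate by summing probabilities. Here is a hedged, dictionary-based sketch of what a call like `Marginal(0)` does under the hood, on toy numbers:

```python
def marginal(joint, axis):
    # Collapse a joint pmf {(h0, h1): prob} onto coordinate `axis`.
    out = {}
    for hypo, prob in joint.items():
        key = hypo[axis]
        out[key] = out.get(key, 0.0) + prob
    return out

toy_joint = {(210, 10): 0.1, (210, 20): 0.2, (250, 10): 0.3, (250, 20): 0.4}
mu_marginal = marginal(toy_joint, 0)   # {210: 0.3, 250: 0.7}
```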
pmf_diff = BRINK_mu - general_mu
pmf_diff.Mean()
Explanation: To see how much money was stolen, we compute the difference of the marginal distributions. This immediately gives us the difference of the means of the ratios, as we could have calculated earlier.
End of explanation
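For discrete distributions, the pmf of a difference is a convolution over all value pairs, and its mean is exactly the difference of the two means. A small self-contained sketch of what the `-` operator computes (toy numbers, not the real posteriors):

```python
from itertools import product

def pmf_mean(pmf):
    return sum(v * p for v, p in pmf.items())

def pmf_difference(pmf_a, pmf_b):
    # Distribution of A - B for independent discrete A and B.
    diff = {}
    for (a, pa), (b, pb) in product(pmf_a.items(), pmf_b.items()):
        diff[a - b] = diff.get(a - b, 0.0) + pa * pb
    return diff

brink_toy = {228: 0.25, 230: 0.50, 232: 0.25}
general_toy = {243: 0.25, 245: 0.50, 247: 0.25}
diff_toy = pmf_difference(brink_toy, general_toy)
```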
cdf_diff = pmf_diff.MakeCdf()
thinkplot.Cdf(cdf_diff)
thinkplot.Config(xlabel='ratio difference', ylabel='cumulative probability')
cdf_diff[0]
Explanation: To calculate the probability that money was stolen from the city, we simply look at a plot of the cdf of pmf_diff and see the probability that the difference is less than zero.
End of explanation
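The quantity read off the cdf is just the total probability mass below zero (up to how ties at exactly zero are handled). As a sketch, given any pmf stored as a {value: probability} dict (toy numbers below):

```python
def prob_below(pmf, threshold):
    # P(X < threshold) for a discrete pmf {value: probability}.
    return sum(p for v, p in pmf.items() if v < threshold)

toy_diff = {-20: 0.3, -10: 0.4, -1: 0.2, 5: 0.1}
p_negative = prob_below(toy_diff, 0.0)   # 0.9 for this toy distribution
```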
money_city = np.where(df['BRINK']==1, df['CITY'], 0).sum(0)
print((pmf_diff * money_city).CredibleInterval(50))
thinkplot.Pmf(pmf_diff * money_city)
thinkplot.Config(xlabel='money stolen', ylabel='probability')
Explanation: And we have answered the first question: the probability that money was stolen from the city is 93.9%
And lastly, we calculate how much money was stolen from the city. To do this, we first calculate how much money the city collected during Brink's tenure. Then we can multiply this by our pmf_diff to get a probability distribution of potential stolen money.
End of explanation
df = pd.read_csv('smokingKills.csv', skiprows=21, delimiter='\t')
df.head()
Explanation: Above we see a plot of stolen money in millions. We have also calculated a credible interval that tells us that there is a 50% chance that Brink stole between 1.4 to 3.6 million dollars. Interestingly, our distribution tells us that there is a probability that BRINK actually gave the city money. However, this is extremely unlikely and is an artifact of our normal distribution.
Smoking kills, analysis of smoking in different states
http://lib.stat.cmu.edu/DASL/Datafiles/cigcancerdat.html
The data are per capita numbers of cigarettes smoked (sold) by 43 states and the
District of Columbia in 1960 together with death rates per thouusand population from
various forms of cancer.
If California were to lower its number of cigarettes sold per capita from 28.60 to 20, how many people would die from lung cancer per 1,000 people?
To answer this question, I combine known values of California (CIG = 28.60, LUNG = 22.07) with a predictive distribution for any state at CIG = 20 to estimate the death rate from lung cancer per 1,000 people. To get this predictive distribution, I will use bayesian linear regression.
First, let's load our data into a Pandas dataframe to see what it looks like.
End of explanation
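Before moving on, note that a central credible interval like the 50% one quoted above can be read off the sorted cumulative probabilities of a pmf. One common convention, sketched on a toy uniform pmf:

```python
def credible_interval(pmf, percent=50.0):
    # Central credible interval of a discrete pmf {value: probability}.
    tail = (1.0 - percent / 100.0) / 2.0
    total, low, high = 0.0, None, None
    for v in sorted(pmf):
        total += pmf[v]
        if low is None and total >= tail:
            low = v
        if high is None and total >= 1.0 - tail:
            high = v
    return low, high

toy = {i: 0.1 for i in range(1, 11)}   # uniform over 1..10
interval = credible_interval(toy, 50)
```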
df.plot('CIG', 'LUNG', kind='scatter')
Explanation: Data seems reasonable, now let's see if a linear regression is appropriate for predicting distributions.
End of explanation
def leastSquares(x, y):
leastSquares takes in two arrays of values. Then it returns the slope and intercept
of the least squares of the two.
Args:
x (numpy array): numpy array of values.
y (numpy array): numpy array of values.
Returns:
slope, intercept (tuple): returns a tuple of floats.
A = np.vstack([x, np.ones(len(x))]).T
slope, intercept = np.linalg.lstsq(A, y)[0]
return slope, intercept
Explanation: Data looks pretty linear. Now let's get the slope and intercept of the line of least squares. Abstract numpy's least squares function using a function of our own.
End of explanation
cigs = np.array(df['CIG'])
lungs = np.array(df['LUNG'])
slope, intercept = leastSquares(cigs, lungs)
print(slope, intercept)
Explanation: To use our leastSquares function, we first create x and y vectors. Where x is the cig, y is the lung. Then call leastSquares to get the slope and intercept.
End of explanation
plt.plot(cigs, lungs, 'o', label='Original data', markersize=10)
plt.plot(cigs, slope*cigs + intercept, 'r', label='Fitted line')
plt.xlabel('Number of cigarettes sold (smoked) per capita')
plt.ylabel('Death rate per 1,000 from lung cancer')
plt.legend()
plt.show()
Explanation: Now plot the line and the original data to see if it looks linear.
End of explanation
intercepts = np.linspace(7.5, 8, 5)
slopes = np.linspace(.4, .5, 5)
sigmas = np.linspace(4.5, 5.5, 5)
Explanation: Nice. Based on the plot above, we can conclude that bayesian linear regression will give us reasonable distributions for predicting future values. Now we want to create our hypotheses. Each hypothesis will consist of an intercept, slope, and sigma.
End of explanation
hypos = ((intercept, slope, sigma) for intercept in intercepts
for slope in slopes for sigma in sigmas)
data = [(cig, lung) for cig in cigs for lung in lungs]
Explanation: Create hypos and our update data.
End of explanation
class leastSquaresHypos(Suite, Joint):
def Likelihood(self, data, hypo):
Likelihood calculates the probability of a particular line (hypo)
based on data (cigs Vs lungs) of our original dataset. This is
done with a normal pmf as each hypo also contains a sigma.
Args:
data (tuple): tuple that contains cigs (float), lungs (float)
hypo (tuple): intercept (float), slope (float), sigma (float)
Returns:
P(data|hypo)
intercept, slope, sigma = hypo
total_likelihood = 1
for cig, measured_lung in data:
hypothesized_lung = slope * cig + intercept
error = measured_lung - hypothesized_lung
total_likelihood *= EvalNormalPdf(error, mu=0, sigma=sigma)
return total_likelihood
Explanation: Create the least squares suite. The likelihood function will depend on the data and normal distributions for each hypothesis.
End of explanation
LeastSquaresHypos = leastSquaresHypos(hypos)
Explanation: Now use our hypos to create the LeastSquaresHypos suite.
End of explanation
for item in data:
LeastSquaresHypos.Update([item])
Explanation: Update LeastSquaresHypos with our data.
End of explanation
marginal_intercepts = LeastSquaresHypos.Marginal(0)
thinkplot.hist(marginal_intercepts)
thinkplot.Config(xlabel='intercept', ylabel='probability')
Explanation: Next, use marginal distributions to see how good our intercept, slope, and sigma guesses were. Note that I have already found values that work well, so these aren't really guesses at this point.
For the intercepts, we choose to stay relatively close to our original intercept. This is the one value we sacrifice accuracy on. This ends up being OK because the slopes and sigmas are in good ranges. For a large value of x, slope and sigma diverge faster than intercept.
End of explanation
marginal_slopes = LeastSquaresHypos.Marginal(1)
thinkplot.hist(marginal_slopes)
thinkplot.Config(xlabel='slope', ylabel='probability')
Explanation: All of the important slopes are contained in our guesses.
End of explanation
marginal_sigmas = LeastSquaresHypos.Marginal(2)
thinkplot.hist(marginal_sigmas)
thinkplot.Config(xlabel='sigma', ylabel='probability')
Explanation: All of the important sigmas are contained in our guesses.
End of explanation
def getY(hypo_samples, random_x):
getY takes in random hypos and random x's and returns the corresponding
random y values
Args:
hypo_samples (sequence): sampled (intercept, slope, sigma) hypos
random_x (numpy array): random x values
random_y = np.zeros(len(random_x))
for i in range(len(random_x)):
intercept = hypo_samples[i][0]
slope = hypo_samples[i][1]
sigma = hypo_samples[i][2]
x = random_x[i]
random_y[i] = np.random.normal((slope * x + intercept), sigma, 1)
return random_y
def getRandomData(start, end, n, LeastSquaresHypos):
Args:
start: (number): Starting x range of our data
end: (number): Ending x range of our data
n (int): Number of samples
LeastSquaresHypos (Suite): Contains the hypos we want to sample
random_hypos = LeastSquaresHypos.Sample(n)
random_x = np.random.uniform(start, end, n)
random_y = getY(random_hypos, random_x)
return random_x, random_y
num_samples = 10000
random_cigs, random_lungs = getRandomData(14, 43, num_samples, LeastSquaresHypos)
Explanation: Next, we want to sample random data from our hypotheses. We will do this by writing two functions, getY and getRandomData. getRandomData calls getY to generate the random y data.
End of explanation
plt.plot(random_cigs, random_lungs, 'o', label='Random Sampling')
plt.plot(cigs, lungs, 'o', label='Original data', markersize=10)
plt.plot(cigs, slope*cigs + intercept, 'r', label='Fitted line')
plt.xlabel('Number of cigarettes sold (smoked) per capita')
plt.ylabel('Death rate per 1,000 from lung cancer')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.show()
Explanation: Now, let's see what our random data looks like plotted underneath the original data and line of least squares.
End of explanation
def bucketData(num_buckets, x_range, y_range, x_data, y_data):
Computes the buckets and density of items of data for graphing pixel space
Args:
num_buckets (int): Is sqrt of number of buckets
x_range (tuple): Contains floats for x_start and x_end
y_range (tuple): Contains floats for y_start and y_end
x_data (sequence): Random x data. Could be something like ages
y_data (sequence): Random y data. Could be something like heights
Returns:
buckets (dict): Dictionary containing density of points.
x_start, x_end = x_range
y_start, y_end = y_range
# create horizontal and vertical linearly spaced ranges as buckets.
hori_range, hori_step = np.linspace(x_start, x_end, num_buckets, retstep=True)
vert_range, vert_step = np.linspace(y_start, y_end, num_buckets, retstep=True)
hori_step = hori_step / 2
vert_step = vert_step / 2
# store each bucket as a tuple in a the buckets dictionary.
buckets = dict()
keys = [(hori, vert) for hori in hori_range for vert in vert_range]
# set each bucket as empty
for key in keys:
buckets[key] = 0
# loop through the randomly sampled data
for x, y in zip(x_data, y_data):
# check each bucket and see if randomly sampled data
for key in buckets:
if x > key[0] - hori_step and x < key[0] + hori_step:
if y > key[1] - vert_step and y < key[1] + vert_step:
buckets[key] += 1
break # can only fit in a single bucket
return(buckets)
buckets = bucketData(num_buckets=40, x_range=(14, 43), y_range=(-10, 50), x_data=random_cigs, y_data=random_lungs)
Explanation: From the plot above, our random data seems to make a distribution in the direction of the line. However, we are unable to see exactly what this distribution looks like as many of the random blue points overlap. We will visualize exactly what is going on with a density function.
To get the density (intensity), we will make a bucketData function. This takes in a range of x and y values and sees how many of the points fall into each bucket.
End of explanation
def unpackBuckets(buckets):
Unpacks buckets into three new ordered lists such that zip(xNew, yNew, zNew) would
create x, y, z triples.
xNew = []
yNew = []
zNew = []
for key in buckets:
xNew.append(key[0])
yNew.append(key[1])
zNew.append(buckets[key])
return xNew, yNew, zNew
Explanation: Now create function to unpack the buckets into usable x (cigs), y (lungs), z(densities/intencities) arrays.
End of explanation
cigsNew, lungsNew, intensities = unpackBuckets(buckets)
Explanation: Use said function to unpack buckets.
End of explanation
def append_to_file(path, data):
append_to_file appends a line of data to specified file. Then adds new line
Args:
path (string): the file path
Return:
VOID
with open(path, 'a') as file:
file.write(data + '\n')
def delete_file_contents(path):
delete_file_contents deletes the contents of a file
Args:
path: (string): the file path
Return:
VOID
with open(path, 'w'):
pass
def threeSequenceCSV(x, y, z):
Writes the x, y, z arrays to a CSV
Args:
x (sequence): x data
y (sequence): y data
z (sequence): z data
file_name = 'cigsLungsIntensity.csv'
delete_file_contents(file_name)
for xi, yi, zi in zip(x, y, z):
append_to_file(file_name, "{}, {}, {}".format(xi, yi, zi))
def twoSequenceCSV(x, y):
Writes the x, y arrays to a CSV
Args:
x (sequence): x data
y (sequence): y data
file_name = 'cigsLungs.csv'
delete_file_contents(file_name)
for xi, yi in zip(x, y):
append_to_file(file_name, "{}, {}".format(xi, yi))
def fittedLineCSV(x, slope, intercept):
Writes line data to a CSV
Args:
x (sequence): x data
slope (float): slope of line
intercept (float): intercept of line
file_name = 'cigsLungsFitted.csv'
delete_file_contents(file_name)
for xi in x:
append_to_file(file_name, "{}, {}".format(xi, slope*xi + intercept))
def makeCSVData(random_x, random_y, intensities, original_x, original_y, slope, intercept):
Calls the 3 csv making functions with appropriate parameters.
threeSequenceCSV(random_x, random_y, intensities)
twoSequenceCSV(original_x, original_y)
fittedLineCSV(original_x, slope, intercept)
makeCSVData(cigsNew, lungsNew, intensities, cigs, lungs, slope, intercept)
Explanation: Now, to make our lives easier, we will plot the density plot in Mathematica. But first, we need to export our data as CSV's.
End of explanation
def MakeLungsPmf(suite, x):
MakeLungsPmf takes in a suite (intercept, slope, sigma) hypos and an x value.
It returns a mixture Pmf at that particular x value.
Args:
suite (Suite): Suite object of (intercept, slope, sigma) hypos
x (number): The value used to calculate the center of the new distribution
metapmf = Pmf()
counter=0
for (intercept, slope, sigma), prob in suite.Items():
mu = slope * x + intercept
pmf = MakeNormalPmf(mu, sigma, num_sigmas=4, n=301)
metapmf.Set(pmf, prob)
counter+=1
if counter % 100 == 0:
print(counter)
mix = MakeMixture(metapmf)
return mix
Explanation: Below we have our density plot underneath our original data / fitted line. Based on this plot, we can reason that our previous calculations seem reasonable.
x axis = Number of cigarettes sold (smoked) per capita
y axis = Death rate per 1,000 from lung cancer
<img src="cigsLungsAllPlots.png" alt="Density Plot with orignal data/fit" height="400" width="400">
Now, we want to answer our original question. If California were to lower its number of cigarettes sold per capita from 28.60 to 20, how many people would die from lung cancer per 1,000 people? I will do this by finding the most likely distribution for a state at CIG = 20 and subtracting the known California value (CIG = 28.60, LUNG = 22.07) from that distribution.
First make a function that makes Pmf's for a given CIG value.
End of explanation
cigs20 = MakeLungsPmf(LeastSquaresHypos, 20)
Explanation: Use our function to make a LUNGS distribution for a state at CIG = 20.
End of explanation
cali20_lives_predict = cigs20.AddConstant(-22.07)
thinkplot.Pmf(cali20_lives_predict)
thinkplot.Config(xlabel='lives saved per 1,000 from lung cancer', ylabel='probability')
22.07 + cali20_lives_predict.Mean()
Explanation: Now make our California lives saved prediction by subtracting 22.07 from the previous distribution
End of explanation
cali20_lives_predict.CredibleInterval(50)
Explanation: Based on the above distribution, we see that California will most likely reduce its death rate from lung cancer per 1,000 people from 22.07 to 17.06. We can also use this distribution to see what values are credible.
End of explanation |
7,693 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pipeline Tutorial with HeteroSecureBoost
install
Pipeline is distributed along with fate_client.
bash
pip install fate_client
To use Pipeline, we need to first specify which FATE Flow Service to connect to. Once fate_client is installed, one can find a command line entry point named pipeline
Step1: Assume we have a FATE Flow Service in 127.0.0.1
Step2: Hetero SecureBoost Example
Before start a modeling task, data to be used should be uploaded. Please refer to this guide.
The pipeline package provides components to compose a FATE pipeline.
Step3: Make a pipeline instance
Step4: Define a Reader to load data
Step5: Add a DataTransform component to parse raw data into Data Instance
Step6: Add a Intersection component to perform PSI for hetero-scenario
Step7: Now, we define the HeteroSecureBoost component. The following parameters will be set for all parties involved.
Step8: To show the evaluation result, an "Evaluation" component is needed.
Step9: Add components to pipeline, in order of execution
Step10: Now, submit(fit) our pipeline
Step11: Once training is done, trained model may be used for prediction. Optionally, save the trained pipeline for future use.
Step12: First, deploy needed components from train pipeline
Step13: Define new Reader components for reading prediction data
Step14: Optionally, define new Evaluation component.
Step15: Add components to predict pipeline in order of execution
Step16: Then, run prediction job | Python Code:
!pipeline --help
Explanation: Pipeline Tutorial with HeteroSecureBoost
install
Pipeline is distributed along with fate_client.
bash
pip install fate_client
To use Pipeline, we need to first specify which FATE Flow Service to connect to. Once fate_client is installed, one can find a command line entry point named pipeline:
End of explanation
!pipeline init --ip 127.0.0.1 --port 9380
Explanation: Assume we have a FATE Flow Service at 127.0.0.1:9380 (the default in standalone deployment), then exec
End of explanation
from pipeline.backend.pipeline import PipeLine
from pipeline.component import Reader, DataTransform, Intersection, HeteroSecureBoost, Evaluation
from pipeline.interface import Data
Explanation: Hetero SecureBoost Example
Before start a modeling task, data to be used should be uploaded. Please refer to this guide.
The pipeline package provides components to compose a FATE pipeline.
End of explanation
pipeline = PipeLine() \
.set_initiator(role='guest', party_id=9999) \
.set_roles(guest=9999, host=10000)
Explanation: Make a pipeline instance:
- initiator:
* role: guest
* party: 9999
- roles:
* guest: 9999
* host: 10000
End of explanation
reader_0 = Reader(name="reader_0")
# set guest parameter
reader_0.get_party_instance(role='guest', party_id=9999).component_param(
table={"name": "breast_hetero_guest", "namespace": "experiment"})
# set host parameter
reader_0.get_party_instance(role='host', party_id=10000).component_param(
table={"name": "breast_hetero_host", "namespace": "experiment"})
Explanation: Define a Reader to load data
End of explanation
data_transform_0 = DataTransform(name="data_transform_0")
# set guest parameter
data_transform_0.get_party_instance(role='guest', party_id=9999).component_param(
with_label=True)
data_transform_0.get_party_instance(role='host', party_id=[10000]).component_param(
with_label=False)
Explanation: Add a DataTransform component to parse raw data into Data Instance
End of explanation
intersect_0 = Intersection(name="intersect_0")
Explanation: Add a Intersection component to perform PSI for hetero-scenario
End of explanation
hetero_secureboost_0 = HeteroSecureBoost(name="hetero_secureboost_0",
num_trees=5,
bin_num=16,
task_type="classification",
objective_param={"objective": "cross_entropy"},
encrypt_param={"method": "paillier"},
tree_param={"max_depth": 3})
Explanation: Now, we define the HeteroSecureBoost component. The following parameters will be set for all parties involved.
End of explanation
evaluation_0 = Evaluation(name="evaluation_0", eval_type="binary")
Explanation: To show the evaluation result, an "Evaluation" component is needed.
End of explanation
pipeline.add_component(reader_0)
pipeline.add_component(data_transform_0, data=Data(data=reader_0.output.data))
pipeline.add_component(intersect_0, data=Data(data=data_transform_0.output.data))
pipeline.add_component(hetero_secureboost_0, data=Data(train_data=intersect_0.output.data))
pipeline.add_component(evaluation_0, data=Data(data=hetero_secureboost_0.output.data))
pipeline.compile();
Explanation: Add components to pipeline, in order of execution:
- data_transform_0 consumes reader_0's output data
- intersect_0 consumes data_transform_0's output data
- hetero_secureboost_0 consumes intersect_0's output data
- evaluation_0 consumes hetero_secureboost_0's prediction result on training data
Then compile our pipeline to make it ready for submission.
End of explanation
pipeline.fit()
Explanation: Now, submit(fit) our pipeline:
End of explanation
pipeline.dump("pipeline_saved.pkl");
Explanation: Once training is done, trained model may be used for prediction. Optionally, save the trained pipeline for future use.
End of explanation
pipeline = PipeLine.load_model_from_file('pipeline_saved.pkl')
pipeline.deploy_component([pipeline.data_transform_0, pipeline.intersect_0, pipeline.hetero_secureboost_0]);
Explanation: First, deploy needed components from train pipeline
End of explanation
reader_1 = Reader(name="reader_1")
reader_1.get_party_instance(role="guest", party_id=9999).component_param(table={"name": "breast_hetero_guest", "namespace": "experiment"})
reader_1.get_party_instance(role="host", party_id=10000).component_param(table={"name": "breast_hetero_host", "namespace": "experiment"})
Explanation: Define new Reader components for reading prediction data
End of explanation
evaluation_0 = Evaluation(name="evaluation_0", eval_type="binary")
Explanation: Optionally, define new Evaluation component.
End of explanation
predict_pipeline = PipeLine()
predict_pipeline.add_component(reader_1)\
.add_component(pipeline,
data=Data(predict_input={pipeline.data_transform_0.input.data: reader_1.output.data}))\
.add_component(evaluation_0, data=Data(data=pipeline.hetero_secureboost_0.output.data));
Explanation: Add components to predict pipeline in order of execution:
End of explanation
predict_pipeline.predict()
Explanation: Then, run prediction job
End of explanation |
7,694 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 01
Import
Step1: Interact basics
Write a print_sum function that prints the sum of its arguments a and b.
Step2: Use the interact function to interact with the print_sum function.
a should be a floating point slider over the interval [-10., 10.] with step sizes of 0.1
b should be an integer slider the interval [-8, 8] with step sizes of 2.
Step3: Write a function named print_string that prints a string and additionally prints the length of that string if a boolean parameter is True.
Step4: Use the interact function to interact with the print_string function.
s should be a textbox with the initial value "Hello World!".
length should be a checkbox with an initial value of True. | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
Explanation: Interact Exercise 01
Import
End of explanation
def print_sum(a=0.0, b=0):
print(a+b)
Explanation: Interact basics
Write a print_sum function that prints the sum of its arguments a and b.
End of explanation
interact(print_sum, a=(-10.,10.,.1),b=(-8,8,2));
assert True # leave this for grading the print_sum exercise
Explanation: Use the interact function to interact with the print_sum function.
a should be a floating point slider over the interval [-10., 10.] with step sizes of 0.1
b should be an integer slider over the interval [-8, 8] with step sizes of 2.
End of explanation
def print_string(s, length=True):
print(s)
if length == True:
print(len(s))
Explanation: Write a function named print_string that prints a string and additionally prints the length of that string if a boolean parameter is True.
End of explanation
interact(print_string,s='Hello World!',length=True);
assert True # leave this for grading the print_string exercise
Explanation: Use the interact function to interact with the print_string function.
s should be a textbox with the initial value "Hello World!".
length should be a checkbox with an initial value of True.
End of explanation |
7,695 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Required inputs for Akita are
Step1: Download a few Micro-C datasets, processed using distiller (https
Step2: Write out these cooler files and labels to a samples table.
Step3: Next, we want to choose genomic sequences to form batches for stochastic gradient descent, divide them into training/validation/test sets, and construct TFRecords to provide to downstream programs.
The script akita_data.py implements this procedure.
The most relevant options here are
Step4: The data for training is now saved in data/1m as tfrecords (for training, validation, and testing), where contigs.bed contains the original large contiguous regions from which training sequences were taken, and sequences.bed contains the train/valid/test sequences.
Step5: Now train a model!
(Note | Python Code:
import json
import os
import shutil
import subprocess
if not os.path.isfile('./data/hg38.ml.fa'):
print('downloading hg38.ml.fa')
subprocess.call('curl -o ./data/hg38.ml.fa.gz https://storage.googleapis.com/basenji_barnyard/hg38.ml.fa.gz', shell=True)
subprocess.call('gunzip ./data/hg38.ml.fa.gz', shell=True)
Explanation: Required inputs for Akita are:
* binned Hi-C or Micro-C data stored in cooler format (https://github.com/mirnylab/cooler)
* Genome FASTA file
First, make sure you have a FASTA file available consistent with genome used for the coolers. Either add a symlink for a the data directory or download the machine learning friendly simplified version in the next cell.
End of explanation
if not os.path.exists('./data/coolers'):
os.mkdir('./data/coolers')
if not os.path.isfile('./data/coolers/HFF_hg38_4DNFIP5EUOFX.mapq_30.2048.cool'):
subprocess.call('curl -o ./data/coolers/HFF_hg38_4DNFIP5EUOFX.mapq_30.2048.cool'+
' https://storage.googleapis.com/basenji_hic/tutorials/coolers/HFF_hg38_4DNFIP5EUOFX.mapq_30.2048.cool', shell=True)
subprocess.call('curl -o ./data/coolers/H1hESC_hg38_4DNFI1O6IL1Q.mapq_30.2048.cool'+
' https://storage.googleapis.com/basenji_hic/tutorials/coolers/H1hESC_hg38_4DNFI1O6IL1Q.mapq_30.2048.cool', shell=True)
ls ./data/coolers/
Explanation: Download a few Micro-C datasets, processed using distiller (https://github.com/mirnylab/distiller-nf), binned to 2048bp, and iteratively corrected.
End of explanation
lines = [['index','identifier','file','clip','sum_stat','description']]
lines.append(['0', 'HFF', './data/coolers/HFF_hg38_4DNFIP5EUOFX.mapq_30.2048.cool', '2', 'sum', 'HFF'])
lines.append(['1', 'H1hESC', './data/coolers/H1hESC_hg38_4DNFI1O6IL1Q.mapq_30.2048.cool', '2', 'sum', 'H1hESC'])
samples_out = open('data/microc_cools.txt', 'w')
for line in lines:
print('\t'.join(line), file=samples_out)
samples_out.close()
Explanation: Write out these cooler files and labels to a samples table.
End of explanation
if os.path.isdir('data/1m'):
shutil.rmtree('data/1m')
! akita_data.py --sample 0.05 -g ./data/hg38_gaps_binsize2048_numconseq10.bed -l 1048576 --crop 65536 --local -o ./data/1m --as_obsexp -p 8 -t .1 -v .1 -w 2048 --snap 2048 --stride_train 262144 --stride_test 32768 ./data/hg38.ml.fa ./data/microc_cools.txt
Explanation: Next, we want to choose genomic sequences to form batches for stochastic gradient descent, divide them into training/validation/test sets, and construct TFRecords to provide to downstream programs.
The script akita_data.py implements this procedure.
The most relevant options here are:
| Option/Argument | Value | Note |
|:---|:---|:---|
| --sample | 0.05 | Down-sample the genome to 5% to speed things up here. |
| -g | data/hg38_gaps_binsize2048_numconseq10.bed | Dodge large-scale unmappable regions determined from filtered cooler bins. |
| -l | 1048576 | Sequence length. |
| --crop | 65536 | Crop edges of matrix so loss is only computed over the central region. |
| --local | True | Run locally, as opposed to on a SLURM scheduler. |
| -o | data/1m | Output directory |
| -p | 8 | Uses multiple concurrent processes to read/write. |
| -t | .1 | Hold out 10% sequences for testing. |
| -v | .1 | Hold out 10% sequences for validation. |
| -w | 2048 | Pool the nucleotide-resolution values to 2048 bp bins. |
| fasta_file| data/hg38.ml.fa | FASTA file to extract sequences from. |
| targets_file | data/microc_cools.txt | Target table with cooler paths. |
Note: make sure to export BASENJIDIR as outlined in the basenji installation tips
(https://github.com/calico/basenji/tree/master/#installation).
End of explanation
! cut -f4 data/1m/sequences.bed | sort | uniq -c
! head -n3 data/1m/sequences.bed
Explanation: The data for training is now saved in data/1m as tfrecords (for training, validation, and testing), where contigs.bed contains the original large contiguous regions from which training sequences were taken, and sequences.bed contains the train/valid/test sequences.
End of explanation
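As a pure-Python counterpart to the `cut -f4 | sort | uniq -c` pipeline above, the split labels in `sequences.bed` can be tallied directly (assuming, as the file layout suggests, that the 4th column holds the train/valid/test label):

```python
from collections import Counter

def count_split_labels(bed_lines):
    # Tally the 4th (label) column of BED-like lines: chrom start end label.
    counts = Counter()
    for line in bed_lines:
        fields = line.split()
        if len(fields) >= 4:
            counts[fields[3]] += 1
    return counts

example = [
    "chr1\t0\t1048576\ttrain",
    "chr1\t262144\t1310720\ttrain",
    "chr2\t0\t1048576\tvalid",
]
label_counts = count_split_labels(example)
```

In practice one would pass `open('data/1m/sequences.bed')` instead of the toy list.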
# specify model parameters json to have only two targets
params_file = './params.json'
with open(params_file) as params_file:
params_tutorial = json.load(params_file)
params_tutorial['model']['head_hic'][-1]['units'] =2
with open('./data/1m/params_tutorial.json','w') as params_tutorial_file:
json.dump(params_tutorial,params_tutorial_file)
### note that training with default parameters requires a GPU with >12 GB RAM ###
! akita_train.py -k -o ./data/1m/train_out/ ./data/1m/params_tutorial.json ./data/1m/
Explanation: Now train a model!
(Note: for training production-level models, please remove the --sample option when generating tfrecords)
End of explanation |
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analysis of neonatal ventilator alarms
Author
Step1: Import modules containing own functions
Step2: List and set the working directory and the directories to write out data
Step3: List of the recordings
Step4: Import clinical details
Step5: Import ventilator parameters retrieved with 1/sec frequency
Step6: Calculating parameters / body weight kg
Step7: Resampling to remove half-empty rows
Step8: Save processed slow_measurements DataFrames to pickle archive
Step9: Import processed 'slow_measurements' data from pickle archive
Calculate recording durations
Step10: Visualising recording durations
Step11: Write recording times out into files
Step12: Import ventilator modes and settings
Import ventilation settings
Step13: Import ventilation modes
Step14: Save ventilation modes and settings into Excel files
Step15: Import alarm settings
Step16: Import alarm states
Step17: Calculate the total and average time of all recordings
Step18: Generate alarm events from alarm states
Step19: Using the files containing the alarm states, for each alarm category in each recording create a DataFrame with the timestamps the alarm went off and the duration of the alarm and store them in a dictionary of dictionaries
Step20: Calculate descriptive statistics for each alarm in each recording and write them to file
Step21: Visualise alarm statistics for the individual alarms in each recording
Step22: Example plots
Step23: Write all graphs to files
Generate cumulative descriptive statistics of all alarms combined in each recording
For each recording, what was the total number of alarm events and the number of events normalized for 24 hour periods
Step24: In each recording, what was the mean, median, sd, mad, min, 25pc, 75pc, max of alarm durations
Step25: Visualize cumulative statistics of recordings
Step26: Generate cumulative statistics of each alarm in all recordings combined
Step27: For each alarm, what was the number of alarm events across all recordings and the number of events normalized per 24 hour recording time
Step28: For each alarm, what was the total duration of alarm events across all recordings and normalized per 24 hour recording time
Step29: For each alarm what was the mean, median, sd, mad, min, 25pc, 75pc, max of alarm durations
Step30: Visualising cumulative statistics of alarms
Step31: Calculate cumulative descriptive statistics of all alarms in all recordings together
Step32: Visualise the duration of all alarm events as histogram
Step33: How many short alarms occurred?
Step34: Check which are the longest alarms
Step35: How many alarm events are longer than 10 minutes but shorter than 1 hour?
Step36: How many alarm events are longer than 1 minute?
Step37: Check which are the most frequent alarms
Step38: Visualise MV and RR limit alarms
Generate dictionaries with the alarm counts (absolute and per 24H recording period) for MV low and high alarms and RR high alarms for those recordings where this occurs
Step39: Investigate the relationship of MV and RR parameter readings, ventilation settings and alarm settings
Step40: Create the tables and figures of the paper
Table 1
Step41: Figure 1
Step42: Figure 2
Step43: Figure 3 | Python Code:
import IPython
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import os
import sys
import pickle
import scipy as sp
from scipy import stats
from pandas import Series, DataFrame
from datetime import datetime, timedelta
%matplotlib inline
matplotlib.style.use('classic')
matplotlib.rcParams['figure.facecolor'] = 'w'
pd.set_option('display.max_rows', 100)
pd.set_option('display.max_columns', 100)
pd.set_option('mode.chained_assignment', None)
print("Python version: {}".format(sys.version))
print("IPython version: {}".format(IPython.__version__))
print("pandas version: {}".format(pd.__version__))
print("matplotlib version: {}".format(matplotlib.__version__))
print("NumPy version: {}".format(np.__version__))
print("SciPy version: {}".format(sp.__version__))
Explanation: Analysis of neonatal ventilator alarms
Author: Dr Gusztav Belteki
This Notebook contains the code used for data processing, statistical analysis and
visualization described in the following paper:
Belteki G, Morley CJ. Frequency, duration and cause of ventilator alarms on a
neonatal intensive care unit. Arch Dis Child Fetal Neonatal Ed.
2018 Jul;103(4):F307-F311. doi: 10.1136/archdischild-2017-313493.
Epub 2017 Oct 27. PubMed PMID: 29079651.
Link to the paper: https://fn.bmj.com/content/103/4/F307.long
Contact: gusztav.belteki@addenbrookes.nhs.uk; gbelteki@aol.com
Importing the required libraries and setting options
End of explanation
from gb_loader import *
from gb_stats import *
from gb_transform import *
Explanation: Import modules containing own functions
End of explanation
# Topic of the Notebook which will also be the name of the subfolder containing results
TOPIC = 'alarms_2'
# Name of the external hard drive
DRIVE = 'GUSZTI'
# Directory containing clinical and blood gas data
CWD = '/Users/guszti/ventilation_data'
# Directory on external drive to read the ventilation data from
DIR_READ = '/Volumes/%s/ventilation_data' % DRIVE
# Directory to write results and selected images to
if not os.path.isdir('%s/%s/%s' % (CWD, 'Analyses', TOPIC)):
os.makedirs('%s/%s/%s' % (CWD, 'Analyses', TOPIC))
DIR_WRITE = '%s/%s/%s' % (CWD, 'Analyses', TOPIC)
# Images and raw data will be written on an external hard drive
if not os.path.isdir('/Volumes/%s/data_dump/%s' % (DRIVE, TOPIC)):
os.makedirs('/Volumes/%s/data_dump/%s' % (DRIVE, TOPIC))
DATA_DUMP = '/Volumes/%s/data_dump/%s' % (DRIVE, TOPIC)
os.chdir(CWD)
os.getcwd()
DIR_READ
DIR_WRITE
DATA_DUMP
Explanation: List and set the working directory and the directories to write out data
End of explanation
# One recording from each patient, each of them 24 hours long or longer
# The sub folders containing the individual recordings have the same names within cwd
recordings = ['DG001', 'DG002_1', 'DG003', 'DG004', 'DG005_1', 'DG006_2', 'DG007', 'DG008', 'DG009', 'DG010',
'DG011', 'DG013', 'DG014', 'DG015', 'DG016', 'DG017', 'DG018_1', 'DG020',
'DG021', 'DG022', 'DG023', 'DG025', 'DG026', 'DG027', 'DG028', 'DG029', 'DG030',
'DG031', 'DG032_2', 'DG033', 'DG034', 'DG035', 'DG037', 'DG038_1', 'DG039', 'DG040_1', 'DG041',
'DG042', 'DG043', 'DG044', 'DG045', 'DG046_2', 'DG047', 'DG048', 'DG049', 'DG050']
Explanation: List of the recordings
End of explanation
clinical_details = pd.read_excel('%s/data_grabber_patient_data_combined.xlsx' % CWD)
clinical_details.index = clinical_details['Recording']
clinical_details.info()
current_weights = {}
for recording in recordings:
current_weights[recording] = clinical_details.loc[recording, 'Current weight' ] / 1000
Explanation: Import clinical details
End of explanation
slow_measurements = {}
for recording in recordings:
flist = os.listdir('%s/%s' % (DIR_READ, recording))
flist = [file for file in flist if not file.startswith('.')] # There are some hidden
# files on the hard drive starting with '.'; this step is necessary to ignore them
files = slow_measurement_finder(flist)
print('Loading recording %s' % recording)
print(files)
fnames = ['%s/%s/%s' % (DIR_READ, recording, filename) for filename in files]
slow_measurements[recording] = data_loader(fnames)
# 46 recordings from 46 patients (4 recordings excluded as they were < 24 hours long)
len(slow_measurements)
Explanation: Import ventilator parameters retrieved with 1/sec frequency
End of explanation
for recording in recordings:
try:
a = slow_measurements[recording]
a['VT_kg'] = a['5001|VT [mL]'] / current_weights[recording]
a['VTi_kg'] = a['5001|VTi [mL]'] / current_weights[recording]
a['VTe_kg'] = a['5001|VTe [mL]'] / current_weights[recording]
a['VTmand_kg'] = a['5001|VTmand [mL]'] / current_weights[recording]
a['VTspon_kg'] = a['5001|VTspon [mL]'] / current_weights[recording]
a['VTimand_kg'] = a['5001|VTimand [mL]'] / current_weights[recording]
a['VTemand_kg'] = a['5001|VTemand [mL]'] / current_weights[recording]
a['VTispon_kg'] = a['5001|VTispon [mL]'] / current_weights[recording]
a['VTespon_kg'] = a['5001|VTespon [mL]'] / current_weights[recording]
except KeyError:
# print('%s does not have all of the parameters' % recording)
pass
for recording in recordings:
try:
a = slow_measurements[recording]
a['VThf_kg'] = a['5001|VThf [mL]'] / current_weights[recording]
a['DCO2_corr_kg'] = a['5001|DCO2 [10*mL^2/s]'] * 10 / (current_weights[recording]) ** 2
except KeyError:
# print('%s does not have all of the parameters' % recording)
pass
for recording in recordings:
try:
a = slow_measurements[recording]
a['MV_kg'] = a['5001|MV [L/min]'] / current_weights[recording]
a['MVi_kg'] = a['5001|MVi [L/min]'] / current_weights[recording]
a['MVe_kg'] = a['5001|MVe [L/min]'] / current_weights[recording]
a['MVemand_kg'] = a['5001|MVemand [L/min]'] / current_weights[recording]
a['MVespon_kg'] = a['5001|MVespon [L/min]'] / current_weights[recording]
a['MVleak_kg'] = a['5001|MVleak [L/min]'] / current_weights[recording]
except KeyError:
# print('%s does not have all of the parameters' % recording)
pass
Explanation: Calculating parameters / body weight kg
End of explanation
# 1/sec data are retrieved in two parts which need to be joined
# This resampling step combines the two parts
for recording in recordings:
slow_measurements[recording] = slow_measurements[recording].resample('1S').mean()
# Example
slow_measurements['DG003'].head();
Explanation: Resampling to remove half-empty rows
End of explanation
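To see why `resample('1S').mean()` removes the half-empty rows, here is a small self-contained illustration; the timestamps and column names below are made up for the demo, not taken from the recordings.

```python
import numpy as np
import pandas as pd

# Two download parts can produce two rows with the same second-resolution
# timestamp, each with a different subset of columns filled in.
idx = pd.to_datetime(['2015-01-01 00:00:00', '2015-01-01 00:00:00',
                      '2015-01-01 00:00:01'])
parts = pd.DataFrame({'PIP': [20.0, np.nan, 21.0],
                      'PEEP': [np.nan, 5.0, 5.0]}, index=idx)

# Resampling to 1-second bins and averaging merges each pair of
# half-empty rows into one complete row (mean() ignores NaNs).
merged = parts.resample('1S').mean()
print(merged)
```

The two rows sharing the 00:00:00 timestamp collapse into a single row with both columns populated.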
len(recordings)
rec1 = recordings[:15]; rec2 = recordings[15:30]; rec3 = recordings[30:40]; rec4 = recordings[40:]
Explanation: Save processed slow_measurements DataFrames to pickle archive
End of explanation
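The pickle-dump calls themselves are not shown in this cell; below is a minimal sketch of how each batch (rec1–rec4) could be written to a single archive. The `dump_batch` helper, the stand-in data and the filename are illustrative assumptions, not the notebook's own code (which wrote to DATA_DUMP).

```python
import os
import pickle
import tempfile
import pandas as pd

def dump_batch(data, batch, path):
    # Pickle a subset of the dict of per-recording DataFrames
    # (one batch of recordings) into a single archive file.
    subset = {rec: data[rec] for rec in batch}
    with open(path, 'wb') as fp:
        pickle.dump(subset, fp)

# Demo with stand-in data; in the notebook 'data' would be
# slow_measurements and 'batch' one of rec1 .. rec4.
data = {'DG001': pd.DataFrame({'MV_kg': [0.2, 0.3]}),
        'DG003': pd.DataFrame({'MV_kg': [0.25, 0.27]})}
path = os.path.join(tempfile.gettempdir(), 'slow_measurements_rec1.pickle')
dump_batch(data, ['DG001', 'DG003'], path)

# Read it back to confirm the round trip preserves the DataFrames.
with open(path, 'rb') as fp:
    restored = pickle.load(fp)
```

Splitting the 46 recordings into four batches keeps each archive small enough to load selectively later.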
# Time stamps are obtained from 'slow measurements'
recording_duration = {}
for recording in recordings:
recording_duration[recording] = slow_measurements[recording].index[-1] - slow_measurements[recording].index[0]
recording_duration_seconds = {}
recording_duration_hours = {}
for recording in recordings:
temp = recording_duration[recording]
recording_duration_seconds[recording] = temp.total_seconds()
recording_duration_hours[recording] = temp.total_seconds() / 3600
Explanation: Import processed 'slow_measurements' data from pickle archive
Calculate recording durations
End of explanation
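The corresponding reload step is likewise not shown; assuming each batch was dumped as a dict of DataFrames into its own archive, the full dictionary can be rebuilt by merging the batches. The files created below are stand-ins for the real archives in DATA_DUMP, and the variable name is kept distinct from the notebook's `slow_measurements`.

```python
import os
import pickle
import tempfile
import pandas as pd

# Create two tiny stand-in batch archives (placeholders for the
# real batch files written earlier to DATA_DUMP).
tmpdir = tempfile.mkdtemp()
batches = {'batch1.pickle': {'DG001': pd.DataFrame({'MV_kg': [0.2]})},
           'batch2.pickle': {'DG003': pd.DataFrame({'MV_kg': [0.3]})}}
for fname, subset in batches.items():
    with open(os.path.join(tmpdir, fname), 'wb') as fp:
        pickle.dump(subset, fp)

# Reassemble one dict of per-recording DataFrames from the archives.
slow_measurements_demo = {}
for fname in sorted(batches):
    with open(os.path.join(tmpdir, fname), 'rb') as fp:
        slow_measurements_demo.update(pickle.load(fp))
```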
v = list(range(1, len(recordings)+1))
w = [value for key, value in sorted(recording_duration_hours.items()) if key in recordings]
fig = plt.figure()
fig.set_size_inches(20, 10)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
ax1 = fig.add_subplot(1, 1, 1);
ax1.bar(v, w, color = 'blue')
plt.xlabel("Recordings", fontsize = 22)
plt.ylabel("Hours", fontsize = 22)
plt.title("Recording periods" , fontsize = 22)
plt.yticks(fontsize = 22)
plt.xticks([i+1.5 for i, _ in enumerate(recordings)], recordings, fontsize = 22, rotation = 'vertical');
plt.grid()
fig.savefig('%s/%s' % (DIR_WRITE, 'recording_durations.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
Explanation: Visualising recording durations
End of explanation
recording_times_frame = DataFrame([recording_duration, recording_duration_hours, recording_duration_seconds],
index = ['days', 'hours', 'seconds'])
recording_times_frame
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'recording_periods.xlsx'))
recording_times_frame.to_excel(writer,'rec_periods')
writer.save()
Explanation: Write recording times out into files
End of explanation
vent_settings = {}
for recording in recordings:
flist = os.listdir('%s/%s' % (DIR_READ, recording))
flist = [file for file in flist if not file.startswith('.')] # There are some hidden
# files on the hard drive starting with '.'; this step is necessary to ignore them
files = slow_setting_finder(flist)
# print('Loading recording %s' % recording)
# print(files)
fnames = ['%s/%s/%s' % (DIR_READ, recording, filename) for filename in files]
vent_settings[recording] = data_loader(fnames)
# remove less important ventilator settings to simplify the table
vent_settings_selected = {}
for recording in recordings:
vent_settings_selected[recording] = vent_settings_cleaner(vent_settings[recording])
# Create a another dictionary of Dataframes wit some of the ventilation settings (set VT, set RR, set Pmax)
lsts = [(['VT_weight'], ['VTi', 'VThf']), (['RR_set'], ['RR']), (['Pmax'], ['Pmax', 'Ampl hf max'])]
vent_settings_2 = {}
for recording in recordings:
frmes = []
for name, pars in lsts:
if pars in [['VTi', 'VThf']]:
ind = []
val = []
for index, row in vent_settings_selected[recording].iterrows():
if row['Id'] in pars:
ind.append(index)
val.append(row['Value New'] / current_weights[recording])
frmes.append(DataFrame(val, index = ind, columns = name))
else:
ind = []
val = []
for index, row in vent_settings_selected[recording].iterrows():
if row['Id'] in pars:
ind.append(index)
val.append(row['Value New'])
frmes.append(DataFrame(val, index = ind, columns = name))
vent_settings_2[recording] = pd.concat(frmes)
vent_settings_2[recording].drop_duplicates(inplace = True)
Explanation: Import ventilator modes and settings
Import ventilation settings
End of explanation
vent_modes = {}
for recording in recordings:
flist = os.listdir('%s/%s' % (DIR_READ, recording))
flist = [file for file in flist if not file.startswith('.')] # There are some hidden
# files on the hard drive starting with '.'; this step is necessary to ignore them
files = slow_text_finder(flist)
# print('Loading recording %s' % recording)
# print(files)
fnames = ['%s/%s/%s' % (DIR_READ, recording, filename) for filename in files]
vent_modes[recording] = data_loader(fnames)
# remove less important ventilator mode settings to simplify the table
vent_modes_selected = {}
for recording in recordings:
vent_modes_selected[recording] = vent_mode_cleaner(vent_modes[recording])
Explanation: Import ventilation modes
End of explanation
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'ventilator_settings.xlsx'))
for recording in recordings:
vent_settings[recording].to_excel(writer,'%s' % recording)
writer.save()
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'ventilator_settings_selected.xlsx'))
for recording in recordings:
vent_settings_selected[recording].to_excel(writer,'%s' % recording)
writer.save()
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'ventilator_settings_2.xlsx'))
for recording in recordings:
vent_settings_2[recording].to_excel(writer,'%s' % recording)
writer.save()
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'ventilator_modes.xlsx'))
for recording in recordings:
vent_modes[recording].to_excel(writer,'%s' % recording)
writer.save()
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'ventilator_modes_selected.xlsx'))
for recording in recordings:
vent_modes_selected[recording].to_excel(writer,'%s' % recording)
writer.save()
Explanation: Save ventilation modes and settings into Excel files
End of explanation
alarm_settings = {}
for recording in recordings:
flist = os.listdir('%s/%s' % (DIR_READ, recording))
flist = [file for file in flist if not file.startswith('.')] # There are some hidden
# files on the hard drive starting with '.'; this step is necessary to ignore them
files = alarm_setting_finder(flist)
# print('Loading recording %s' % recording)
# print(files)
fnames = ['%s/%s/%s' % (DIR_READ, recording, filename) for filename in files]
alarm_settings[recording] = data_loader(fnames)
# Remove etCO2 limits which were not used
alarm_settings_selected = {}
for recording in recordings:
alarm_settings_selected[recording] = alarm_setting_cleaner(alarm_settings[recording])
# Create a another dictionary of Dataframes with some of the alarm settings
lsts = [(['MV_high_weight'], ['MVe_HL']), (['MV_low_weight'], ['MVe_LL']),
(['PIP_high'], ['PIP_HL']), (['RR_high'], ['RR_HL'])]
alarm_settings_2 = {}
for recording in recordings:
frmes = []
for name, pars in lsts:
if pars in [['MVe_HL'], ['MVe_LL']]:
ind = []
val = []
for index, row in alarm_settings_selected[recording].iterrows():
if row['Id'] in pars:
ind.append(index)
val.append(row['Value New'] / current_weights[recording])
frmes.append(DataFrame(val, index = ind, columns = name))
else:
ind = []
val = []
for index, row in alarm_settings_selected[recording].iterrows():
if row['Id'] in pars:
ind.append(index)
val.append(row['Value New'])
frmes.append(DataFrame(val, index = ind, columns = name))
alarm_settings_2[recording] = pd.concat(frmes)
alarm_settings_2[recording].drop_duplicates(inplace = True)
# Write DataFrames containing alarm settings to a multisheet Excel file
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'alarm_settings.xlsx'))
for recording in recordings:
alarm_settings[recording].to_excel(writer,'%s' % recording)
writer.save()
# Write DataFrames containing alarm settings to a multisheet Excel file
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'alarm_settings_2.xlsx'))
for recording in recordings:
alarm_settings_2[recording].to_excel(writer,'%s' % recording)
writer.save()
Explanation: Import alarm settings
End of explanation
alarm_states = {}
for recording in recordings:
flist = os.listdir('%s/%s' % (DIR_READ, recording))
flist = [file for file in flist if not file.startswith('.')] # There are some hidden
# files on the hard drive starting with '.'; this step is necessary to ignore them
files = alarm_state_finder(flist)
# print('Loading recording %s' % recording)
# print(files)
fnames = ['%s/%s/%s' % (DIR_READ, recording, filename) for filename in files]
alarm_states[recording] = data_loader(fnames)
# Write DataFrames containing alarm states to a multisheet Excel file
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'alarm_states.xlsx'))
for recording in recordings:
alarm_states[recording].to_excel(writer,'%s' % recording)
writer.save()
Explanation: Import alarm states
End of explanation
total_recording_time = timedelta(0)
for recording in recordings:
total_recording_time += recording_duration[recording]
total_recording_time
mean_recording_time = total_recording_time / len(recordings)
mean_recording_time
Explanation: Calculate the total and average time of all recordings
End of explanation
# Define function to retrieve alarm events from alarm timing data
def alarm_events_calculator(dframe, al):
'''
DataFrame, str -> DataFrame
dframe: DataFrame containing alarm states
al: alarm category (string)
    Returns a pd.DataFrame with the timestamps when alarm 'al' went off and the
    duration (in seconds) of each alarm event
'''
alarms = dframe
alarm = alarms[alarms.Name == al]
length = len(alarm)
delta = np.array([(alarm.Date_Time[i] - alarm.Date_Time[i-1]).total_seconds()
for i in range(1, length) if alarm['State New'][i] == 'NotActive' and alarm['State New'][i-1] == 'Active'])
stamp = np.array([alarm.index[i-1]
for i in range(1, length) if alarm['State New'][i] == 'NotActive' and alarm['State New'][i-1] == 'Active'])
data = {'duration_seconds': delta, 'time_went_off': stamp,}
alarm_t = DataFrame(data, columns = ['time_went_off', 'duration_seconds'])
return alarm_t
Explanation: Generate alarm events from alarm states
End of explanation
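The pairing rule in `alarm_events_calculator` — an event ends wherever an 'Active' row is immediately followed by a 'NotActive' row — can be checked on a synthetic log; the alarm name and timestamps below are invented for the demo.

```python
import pandas as pd

# Synthetic alarm-state log: one 'MV low' alarm, active for 5 seconds.
times = pd.to_datetime(['2015-01-01 00:00:00',
                        '2015-01-01 00:00:10',
                        '2015-01-01 00:00:15'])
states = pd.DataFrame({'Name': ['MV low'] * 3,
                       'State New': ['NotActive', 'Active', 'NotActive'],
                       'Date_Time': times}, index=times)

# Same transition logic as alarm_events_calculator: the duration of an
# event is the gap between an 'Active' row and the 'NotActive' row
# that follows it.
durations = [(states['Date_Time'].iloc[i] - states['Date_Time'].iloc[i - 1]).total_seconds()
             for i in range(1, len(states))
             if states['State New'].iloc[i] == 'NotActive'
             and states['State New'].iloc[i - 1] == 'Active']
print(durations)
```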
# Create a list of alarms occurring during each recording
alarm_list = {}
for recording in recordings:
alarm_list[recording] = sorted(set(alarm_states[recording].Name))
alarm_events = {}
for recording in recordings:
alarm_events[recording] = {}
for alarm in alarm_list[recording]:
alarm_events[recording][alarm] = alarm_events_calculator(alarm_states[recording], alarm)
# Write Dataframes containing the alarm events in Excel files,
# one Excel file for each recording
for recording in recordings:
writer = pd.ExcelWriter('%s/%s%s' % (DIR_WRITE, recording, '_alarm_events.xlsx'))
for alarm in alarm_list[recording]:
alarm_events[recording][alarm].to_excel(writer, alarm[:20])
writer.save()
Explanation: Using the files containing the alarm states, for each alarm category in each recording create a DataFrame with the timestamps the alarm went off and the duration of the alarm and store them in a dictionary of dictionaries
End of explanation
def alarm_stats_calculator(dframe, rec, al):
'''
dframe: DataFrame containing alarm events
rec: recording (string)
al: alarm (string)
    Returns detailed statistics about a particular alarm (al) in a particular recording (rec):
    - number of times the alarm went off, and this count normalized to 24-hour periods
    - mean, median, standard deviation, minimum, 25th centile, 75th centile and maximum
      of the alarm durations (in seconds)
    - total time the alarm was active, and this as a percentage of the total recording time
'''
alarm = dframe[al].duration_seconds
return (alarm.size, round((alarm.size / (recording_duration_hours[rec] / 24)), 1),
round(alarm.mean() , 1), round(alarm.median(), 1), round(alarm.std(), 1), round(alarm.min() , 1),
round(alarm.quantile(0.25), 1), round(alarm.quantile(0.75), 1), round(alarm.max(), 1),
round(alarm.sum(), 1), round(alarm.sum() * 100 / recording_duration_seconds[rec] ,3))
alarm_stats = {}
for recording in recordings:
alarm_stats[recording] = {}
for alarm in alarm_list[recording]:
data = alarm_stats_calculator(alarm_events[recording], recording, alarm)
frame = DataFrame([data], columns = ['number of events', 'number of event per 24h',
'mean duration (s)', 'median duration (s)', 'SD duration (s)',
'miminum duration (s)',
'duration 25th centile (s)', 'duration 75th centile (s)',
'maximum duration (s)', 'cumulative duration (s)',
'percentage of recording length (%)'], index = [alarm])
alarm_stats[recording][alarm] = frame
# Write descriptive statistics in a multisheet Excel file
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'alarm_stats.xlsx'))
for recording in recordings:
stats = []
for alarm in alarm_stats[recording]:
stats.append(alarm_stats[recording][alarm])
stats_all = pd.concat(stats)
stats_all.to_excel(writer, recording)
writer.save()
Explanation: Calculate descriptive statistics for each alarm in each recording and write them to file
End of explanation
# Generates a plot with the cumulative times (in seconds) of the various alarms occurring during recording (rec).
# Displays the plot
def alarm_plot_1(rec):
fig = plt.figure()
fig.set_size_inches(25, 8)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None,
wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1)
xs = [i + 0.1 for i, _ in enumerate(alarm_list[rec])]
stats = []
for alarm in alarm_list[rec]:
stats.append(alarm_stats[rec][alarm]['cumulative duration (s)'])
stats_all = pd.concat(stats)
plt.barh(xs, stats_all, color = 'red')
plt.xlabel("seconds", fontsize = 24)
plt.title("Recording %s : How long was the alarm active over the %d seconds of recording?" % (rec,
recording_duration_seconds[rec]), fontsize = 22)
plt.yticks([i + 0.5 for i, _ in enumerate(alarm_list[rec])], alarm_list[rec], fontsize = 22)
plt.xticks(fontsize = 20)
# Generates a plot with the cumulative times (in seconds) of the various alarms occurring during recording (rec).
# Does not display the plot but writes it into a jpg file.
# NB: the resolution of the image is only 100 dpi - for publication quality higher is needed
def alarm_plot_1_write(rec):
fig = plt.figure()
fig.set_size_inches(25, 8)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None,
wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1)
xs = [i + 0.1 for i, _ in enumerate(alarm_list[rec])]
stats = []
for alarm in alarm_list[rec]:
stats.append(alarm_stats[rec][alarm]['cumulative duration (s)'])
stats_all = pd.concat(stats)
plt.barh(xs, stats_all, color = 'red')
plt.xlabel("seconds", fontsize = 24)
plt.title("Recording %s : How long was the alarm active over the %d seconds of recording?" % (rec,
recording_duration_seconds[rec]), fontsize = 22)
plt.yticks([i + 0.5 for i, _ in enumerate(alarm_list[rec])], alarm_list[rec], fontsize = 22)
plt.xticks(fontsize = 20)
    fig.savefig('%s/%s_%s.jpg' % (DIR_WRITE, 'alarm_durations_1', rec), dpi=100, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
plt.close(fig)
# Generates a plot with the cumulative times (expressed as percentage of the total recording time)
# of the various alarms occurring during recording (rec).
# Displays the plot
def alarm_plot_2(rec):
fig = plt.figure()
fig.set_size_inches(25, 8)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None,
wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1)
xs = [i + 0.1 for i, _ in enumerate(alarm_list[rec])]
stats = []
for alarm in alarm_list[rec]:
stats.append(alarm_stats[rec][alarm]['percentage of recording length (%)'])
stats_all = pd.concat(stats)
plt.barh(xs, stats_all, color = 'red')
plt.xlabel("% of total recording time", fontsize = 24)
    plt.title("Recording %s: How long was the alarm active over the %s hours of recording?" % (rec,
str(recording_duration[rec])), fontsize = 22)
plt.yticks([i + 0.5 for i, _ in enumerate(alarm_list[rec])], alarm_list[rec], fontsize = 22)
plt.xticks(fontsize = 20)
# Generates a plot with the cumulative times (expressed as percentage of the total recording time)
# of the various alarms occurring during recording (rec).
# Does not display the plot but writes it into a jpg file.
# NB: the resolution of the image is only 100 dpi - for publication quality higher is needed
def alarm_plot_2_write(rec):
fig = plt.figure()
fig.set_size_inches(25, 8)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None,
wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1)
xs = [i + 0.1 for i, _ in enumerate(alarm_list[rec])]
stats = []
for alarm in alarm_list[rec]:
stats.append(alarm_stats[rec][alarm]['percentage of recording length (%)'])
stats_all = pd.concat(stats)
plt.barh(xs, stats_all, color = 'red')
plt.xlabel("% of total recording time", fontsize = 24)
    plt.title("Recording %s: How long was the alarm active over the %s hours of recording?" % (rec,
str(recording_duration[rec])), fontsize = 22)
plt.yticks([i + 0.5 for i, _ in enumerate(alarm_list[rec])], alarm_list[rec], fontsize = 22)
plt.xticks(fontsize = 20)
    fig.savefig('%s/%s_%s.jpg' % (DIR_WRITE, 'alarm_durations_2', rec), dpi=100, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
plt.close(fig)
# Displays the individual alarm events of the recording (rec) along the time axis
# Displays the plot
def alarm_plot_3(rec):
alarm_state = alarm_states[rec]
numbered = Series(np.zeros(len(alarm_state)), index = alarm_state.index)
for i in range(1, len(alarm_state)):
if alarm_state.iloc[i]['State New'] == 'Active':
numbered[i] = alarm_list[rec].index(alarm_state.iloc[i]['Id']) + 1
fig = plt.figure()
fig.set_size_inches(17, 8)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None,
wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.plot(alarm_state.index, numbered, '|', color = 'red', markersize = 16, markeredgewidth = 1 )
plt.xlabel("Time", fontsize = 20)
plt.title("Alarm events during recording %s" % rec , fontsize = 24)
plt.yticks([i+1 for i, _ in enumerate(alarm_list[rec])], alarm_list[rec], fontsize = 18);
plt.xticks(fontsize = 14, rotation = 30)
plt.ylim(0.5, len(alarm_list[rec]) + 0.5);
# Displays the individual alarm events of recording (rec) along the time axis
# Does not display the plot but writes it into a jpg file.
# NB: the resolution of the image is only 100 dpi - for publication quality higher is needed
def alarm_plot_3_write(rec):
alarm_state = alarm_states[rec]
numbered = Series(np.zeros(len(alarm_state)), index = alarm_state.index)
for i in range(1, len(alarm_state)):
if alarm_state.iloc[i]['State New'] == 'Active':
numbered[i] = alarm_list[rec].index(alarm_state.iloc[i]['Id']) + 1
fig = plt.figure()
fig.set_size_inches(17, 8)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None,
wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.plot(alarm_state.index, numbered, '|', color = 'red', markersize = 16, markeredgewidth = 1 )
plt.xlabel("Time", fontsize = 20)
plt.title("Alarm events during recording %s" % rec , fontsize = 24)
plt.yticks([i+1 for i, _ in enumerate(alarm_list[rec])], alarm_list[rec], fontsize = 18);
plt.xticks(fontsize = 14, rotation = 30)
plt.ylim(0.5, len(alarm_list[rec]) + 0.5)
    fig.savefig('%s/%s_%s.pdf' % (DIR_WRITE, 'individual_alarms', rec), dpi=100, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='pdf',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
plt.close(fig)
Explanation: Visualise alarm statistics for the individual alarms in each recording
End of explanation
alarm_plot_1('DG032_2')
alarm_plot_2('DG032_2')
alarm_plot_3('DG032_2')
Explanation: Example plots
End of explanation
total_alarm_number_recordings = {} # dictionary containing the total number of alarm events in each recording
for recording in recordings:
total = 0
for alarm in alarm_list[recording]:
total += len(alarm_events[recording][alarm].index)
total_alarm_number_recordings[recording] = total
total_alarm_number_recordings_24H = {} # dictionary containing the total number of alarm events in each recording
# corrected for 24 hour period
for recording in recordings:
total_alarm_number_recordings_24H[recording] = (total_alarm_number_recordings[recording] /
(recording_duration[recording].total_seconds() / 86400))
Explanation: Write all graphs to files
Generate cumulative descriptive statistics of all alarms combined in each recording
For each recording, what was the total number of alarm events and the number of events normalized for 24 hour periods
End of explanation
alarm_durations_recordings = {} # a dictionary of Series. Each series contains all the alarm durations of a recording
for recording in recordings:
durations = []
for alarm in alarm_list[recording]:
durations.append(alarm_events[recording][alarm]['duration_seconds'])
durations = pd.concat(durations)
alarm_durations_recordings[recording] = durations
# Dictionaries containing various descriptive statistics for each recording
mean_alarm_duration_recordings = {}
median_alarm_duration_recordings = {}
sd_alarm_duration_recordings = {}
mad_alarm_duration_recordings = {}
min_alarm_duration_recordings = {}
pc25_alarm_duration_recordings = {}
pc75_alarm_duration_recordings = {}
max_alarm_duration_recordings = {}
for recording in recordings:
mean_alarm_duration_recordings[recording] = round(alarm_durations_recordings[recording].mean(), 4)
median_alarm_duration_recordings[recording] = round(alarm_durations_recordings[recording].median(), 4)
sd_alarm_duration_recordings[recording] = round(alarm_durations_recordings[recording].std(), 4)
mad_alarm_duration_recordings[recording] = round(alarm_durations_recordings[recording].mad(), 4)
min_alarm_duration_recordings[recording] = round(alarm_durations_recordings[recording].min(), 4)
pc25_alarm_duration_recordings[recording] = round(alarm_durations_recordings[recording].quantile(0.25), 4)
pc75_alarm_duration_recordings[recording] = round(alarm_durations_recordings[recording].quantile(0.75), 4)
max_alarm_duration_recordings[recording] = round(alarm_durations_recordings[recording].max(), 4)
# Create DataFrame containing cumulative alarm statistics for each recording
alarm_stats_cum_rec = DataFrame([total_alarm_number_recordings,
total_alarm_number_recordings_24H,
mean_alarm_duration_recordings,
median_alarm_duration_recordings,
sd_alarm_duration_recordings,
mad_alarm_duration_recordings,
min_alarm_duration_recordings,
pc25_alarm_duration_recordings,
pc75_alarm_duration_recordings,
max_alarm_duration_recordings],
index = ['count', 'count per 24h', 'mean duration (sec)', 'median duration (sec)', 'sd duration (sec)',
'mad duration (sec)', 'min duration (sec)', '25th cent duration (sec)', '75th cent duration (sec)',
'max duration (sec)'])
alarm_stats_cum_rec.round(2)
# Write statistics to Excel file
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'alarm_stats_cum_rec.xlsx'))
alarm_stats_cum_rec.round(2).to_excel(writer, 'cumulative_stats')
writer.save()
Explanation: In each recording, what was the mean, median, sd, mad, min, 25pc, 75pc, max of alarm durations
End of explanation
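The statistics above rely on `Series.mad()`, which was deprecated in pandas 1.5 and removed in pandas 2.0. If the notebook is re-run on a current pandas, the mean absolute deviation can be computed directly; a sketch with toy durations (not recording data):

```python
import pandas as pd

def mean_abs_dev(s: pd.Series) -> float:
    """Mean absolute deviation around the mean — equivalent to the old
    Series.mad(), which was removed in pandas 2.0."""
    return (s - s.mean()).abs().mean()

durations = pd.Series([10.0, 20.0, 30.0])  # toy alarm durations in seconds
mad = mean_abs_dev(durations)
```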
# Plot the absolute number of alarm events for each recording
fig = plt.figure()
fig.set_size_inches(12, 12)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.barh(list(range(1, len(recordings)+1)), alarm_stats_cum_rec.loc['count', :], color = 'blue')
plt.ylabel("Recordings", fontsize = 22)
plt.xlabel("", fontsize = 22)
plt.title("Number of alarm events" , fontsize = 26)
ax1.tick_params(which = 'both', labelsize=14)
plt.yticks([i+1.5 for i, _ in enumerate(recordings)], recordings, rotation = 'horizontal')
plt.grid()
fig.savefig('%s/%s' % (DIR_WRITE, 'number_events_rec.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
# Plot the number of alarm events in each recording normalised for 24 hour periods
fig = plt.figure()
fig.set_size_inches(12, 12)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.barh(list(range(1, len(recordings)+1)), alarm_stats_cum_rec.loc['count per 24h', :], color = 'blue')
plt.ylabel("Recordings", fontsize = 22)
plt.xlabel("", fontsize = 22)
plt.title("Number of alarm events per 24 hours" , fontsize = 26)
ax1.tick_params(which = 'both', labelsize=14)
plt.yticks([i+1.5 for i, _ in enumerate(recordings)], recordings, rotation = 'horizontal')
plt.grid()
fig.savefig('%s/%s' % (DIR_WRITE, 'number_events_24H_rec.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
# Median duration of alarm events
fig = plt.figure()
fig.set_size_inches(12, 12)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.barh(list(range(1, len(recordings)+1)), alarm_stats_cum_rec.loc['median duration (sec)', :], color = 'blue')
plt.ylabel("Recordings", fontsize = 22)
plt.xlabel("seconds", fontsize = 22)
plt.title("Median duration of alarm events" , fontsize = 26)
ax1.tick_params(which = 'both', labelsize=14)
plt.yticks([i+1.5 for i, _ in enumerate(recordings)], recordings, rotation = 'horizontal')
plt.grid()
fig.savefig('%s/%s' % (DIR_WRITE, 'median_duration_rec.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
Explanation: Visualize cumulative statistics of recordings
End of explanation
# Create a list of all alarms occurring in any recording
total_alarm_list = set()
for recording in recordings:
total_alarm_list.update(alarm_list[recording])
total_alarm_list = sorted(total_alarm_list)
# A list of all alarms occurring during the service evaluation
total_alarm_list
# Write alarm list to Excel file
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'total_alarm_list.xlsx'))
DataFrame(total_alarm_list, columns = ['alarm categories']).to_excel(writer, 'total_alarm_list')
writer.save()
Explanation: Generate cumulative statistics of each alarm in all recordings combined
End of explanation
total_alarm_number_alarms = {} # dictionary containing the number of alarm events in all recordings for the
# various alarm categories
for alarm in total_alarm_list:
total = 0
for recording in recordings:
if alarm in alarm_list[recording]:
total += len(alarm_events[recording][alarm].index)
total_alarm_number_alarms[alarm] = total
# Write alarm list to Excel file
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'total_alarm_list_numbers.xlsx'))
DataFrame([total_alarm_number_alarms]).T.to_excel(writer, 'total_alarm_list')
writer.save()
total_alarm_number_alarms_24H = {} # dictionary containing the number of alarm events in all recordings for the
# various alarm categories normalized for 24 hour recording periods
for alarm in total_alarm_list:
total_alarm_number_alarms_24H[alarm] = round(((total_alarm_number_alarms[alarm] /
(total_recording_time.total_seconds() / 86400))), 4)
Explanation: For each alarm, what was number of alarm events across all recordings and the number of events normalized per 24 hour recording time
End of explanation
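The cross-recording tally above loops over recordings and sums per-alarm counts by hand; `collections.Counter` expresses the same aggregation in one pass. A sketch with hypothetical recording and alarm names:

```python
from collections import Counter

# hypothetical per-recording event counts
counts_per_recording = {
    'DG001': {'MV low': 3, 'RR high': 1},
    'DG002': {'MV low': 2},
}

total = Counter()
for rec_counts in counts_per_recording.values():
    total.update(rec_counts)  # adds counts, key by key
```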
alarm_durations_alarms = {} # a dictionary of Series. Each Series contains all durations of a particular alarm
# in all recordings
for alarm in total_alarm_list:
durations = []
for recording in recordings:
if alarm in alarm_list[recording]:
durations.append(alarm_events[recording][alarm]['duration_seconds'])
durations = pd.concat(durations)
alarm_durations_alarms[alarm] = durations
cum_alarm_duration_alarms = {} # dictionary containing the total duration of alarms in all recordings for the
# various alarm categories
for alarm in total_alarm_list:
cum_alarm_duration_alarms[alarm] = round(alarm_durations_alarms[alarm].sum(), 4)
cum_alarm_duration_alarms_24H = {} # dictionary containing the total duration of alarms in all recordings for the
# various alarm categories normalized for 24 hour recording periods
for alarm in total_alarm_list:
cum_alarm_duration_alarms_24H[alarm] = round(((cum_alarm_duration_alarms[alarm] /
(total_recording_time.total_seconds() / 86400))), 4)
Explanation: For each alarm, what was the total duration of alarm events across all recordings and normalized per 24 hour recording time
End of explanation
# Dictionaries containing various descriptive statistics for each alarm
mean_alarm_duration_alarms = {}
median_alarm_duration_alarms = {}
sd_alarm_duration_alarms = {}
mad_alarm_duration_alarms = {}
min_alarm_duration_alarms = {}
pc25_alarm_duration_alarms = {}
pc75_alarm_duration_alarms = {}
max_alarm_duration_alarms = {}
for alarm in total_alarm_list:
mean_alarm_duration_alarms[alarm] = round(alarm_durations_alarms[alarm].mean(), 4)
median_alarm_duration_alarms[alarm] = round(alarm_durations_alarms[alarm].median(), 4)
sd_alarm_duration_alarms[alarm] = round(alarm_durations_alarms[alarm].std(), 4)
mad_alarm_duration_alarms[alarm] = round(alarm_durations_alarms[alarm].mad(), 4)
min_alarm_duration_alarms[alarm] = round(alarm_durations_alarms[alarm].min(), 4)
pc25_alarm_duration_alarms[alarm] = round(alarm_durations_alarms[alarm].quantile(0.25), 4)
pc75_alarm_duration_alarms[alarm] = round(alarm_durations_alarms[alarm].quantile(0.75), 4)
max_alarm_duration_alarms[alarm] = round(alarm_durations_alarms[alarm].max(), 4)
# Create DataFrame containing cumulative alarm statistics for each alarm
alarm_stats_cum_al = DataFrame([total_alarm_number_alarms,
total_alarm_number_alarms_24H,
cum_alarm_duration_alarms,
cum_alarm_duration_alarms_24H,
mean_alarm_duration_alarms,
median_alarm_duration_alarms,
sd_alarm_duration_alarms,
mad_alarm_duration_alarms,
min_alarm_duration_alarms,
pc25_alarm_duration_alarms,
pc75_alarm_duration_alarms,
max_alarm_duration_alarms],
index = ['count', 'count per 24h', 'total alarm duration (sec)', 'total alarm duration per 24 hours (sec)',
'mean duration (sec)', 'median duration (sec)', 'sd duration (sec)', 'mad duration (sec)',
'min duration (sec)', '25th cent duration (sec)', '75th cent duration (sec)',
'max duration (sec)'])
# DataFrame containing cumulative alarm statistics for each alarm
alarm_stats_cum_al.round(2)
# Write Dataframe containing cumulative alarm statistics for each alarm to Excel file
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'alarm_stats_cum_al.xlsx'))
alarm_stats_cum_al.round(2).to_excel(writer, 'cumulative_stats')
writer.save()
Explanation: For each alarm what was the mean, median, sd, mad, min, 25pc, 75pc, max of alarm durations
End of explanation
# Reduce a too long alarm name
total_alarm_list[0] = 'A setting, alarm limit or vent...'
# Total number of alarm events in all recordings
fig = plt.figure()
fig.set_size_inches(17, 12)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.barh(list(range(1, len(total_alarm_list)+1)), alarm_stats_cum_al.loc['count', :], color = 'blue')
plt.ylabel("Alarms", fontsize = 22)
plt.xlabel("", fontsize = 22)
plt.title("Number of alarm events" , fontsize = 26)
ax1.tick_params(which = 'both', labelsize=14)
plt.yticks([i+1.5 for i, _ in enumerate(total_alarm_list)], total_alarm_list, rotation = 'horizontal')
plt.grid()
fig.savefig('%s/%s' % (DIR_WRITE, 'number_events_al.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
# Total number of alarm events in all recordings normalized for 24 hour periods
fig = plt.figure()
fig.set_size_inches(17, 12)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.barh(list(range(1, len(total_alarm_list)+1)), alarm_stats_cum_al.loc['count per 24h', :], color = 'blue')
plt.ylabel("Alarms", fontsize = 22)
plt.xlabel("", fontsize = 22)
plt.title("Number of alarm events per 24 hour" , fontsize = 26)
ax1.tick_params(which = 'both', labelsize=14)
plt.yticks([i+1.5 for i, _ in enumerate(total_alarm_list)], total_alarm_list, rotation = 'horizontal')
plt.grid()
fig.savefig('%s/%s' % (DIR_WRITE, 'number_events_24H_al.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
# Median duration of alarm events in all recordings
fig = plt.figure()
fig.set_size_inches(17, 12)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.barh(list(range(1, len(total_alarm_list)+1)), alarm_stats_cum_al.loc['median duration (sec)', :], color = 'blue')
plt.ylabel("Alarms", fontsize = 22)
plt.xlabel("seconds", fontsize = 22)
plt.title("Median duration of alarm events" , fontsize = 26)
ax1.tick_params(which = 'both', labelsize=14)
plt.yticks([i+1.5 for i, _ in enumerate(total_alarm_list)], total_alarm_list, rotation = 'horizontal')
plt.grid()
fig.savefig('%s/%s' % (DIR_WRITE, 'median_events_al.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
Explanation: Visualising cumulative statistics of alarms
End of explanation
all_durations = [] # list of Series, concatenated below into a single Series of all alarm durations
for recording in recordings:
for alarm in alarm_list[recording]:
all_durations.append(alarm_events[recording][alarm]['duration_seconds'])
all_durations = pd.concat(all_durations)
# The total number of alarm events in all the recordings
total_count = len(all_durations)
total_count
# The total number of alarm events in all the recordings per 24 hour
total_count_24H = total_count / (total_recording_time.total_seconds() / 86400)
total_count_24H
# Calculate descriptive statistics (expressed in seconds)
mean_duration_total = round(all_durations.mean(), 4)
median_duration_total = round(all_durations.median(), 4)
sd_duration_total = round(all_durations.std(), 4)
mad_duration_total = round(all_durations.mad(), 4)
min_duration_total = round(all_durations.min(), 4)
pc25_duration_total = round(all_durations.quantile(0.25), 4)
pc75_duration_total = round(all_durations.quantile(0.75), 4)
max_duration_total = round(all_durations.max(), 4)
alarm_stats_cum_total = DataFrame([ total_count, total_count_24H,
mean_duration_total, median_duration_total,
sd_duration_total, mad_duration_total, min_duration_total,
pc25_duration_total, pc75_duration_total, max_duration_total],
columns = ['all alarms in all recordings'],
index = ['total alarm events', 'total alarm events per 24 hours',
'mean alarm duration (sec)', 'median alarm duration (sec)',
'sd alarm duration (sec)', 'mad alarm duration (sec)',
'min alarm duration (sec)', '25 centile alarm duration (sec)',
'75 centile alarm duration (sec)', 'max alarm duration (sec)'])
# Cumulative statistics of the whole datasett
alarm_stats_cum_total.round(2)
# Write cumulative statistics to Excel file
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'alarm_stats_cum_total.xlsx'))
alarm_stats_cum_total.to_excel(writer, 'cumulative_stats')
writer.save()
Explanation: Calculate cumulative descriptive statistics of all alarms in all recording together
End of explanation
# Histogram showing the number of alarms which were shorter than 1 minute
fig = plt.figure()
fig.set_size_inches(12, 6)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1)
n, bins, patches = plt.hist(all_durations, bins = range(0, 60))
plt.grid(True)
plt.xlabel('Alarm duration (seconds)', fontsize = 20)
plt.ylabel('Number of events', fontsize = 20)
plt.xticks(range(0,60,2), fontsize = 10)
plt.yticks(fontsize = 10)
plt.title('Histogram of alarm durations', fontsize = 20)
fig.savefig('%s/%s' % (DIR_WRITE, 'alarm_duration_hist_1.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
# Histogram showing the number of alarms which were shorter than 10 minutes
fig = plt.figure()
fig.set_size_inches(12, 6)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1)
n, bins, patches = plt.hist(all_durations, bins = range(0, 600))
plt.grid(True)
plt.xlabel('Alarm duration (seconds)', fontsize = 20)
plt.ylabel('Number of events', fontsize = 20)
plt.xticks(range(0, 600, 60), fontsize = 10)
plt.yticks(fontsize = 10)
plt.title('Histogram of alarm durations', fontsize = 20)
fig.savefig('%s/%s' % (DIR_WRITE, 'alarm_duration_hist_2.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
# Histogram showing all data with 1-minute bins and logarithmic X and Y axes
fig = plt.figure()
fig.set_size_inches(12, 6)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1)
n, bins, patches = plt.hist(all_durations, bins = range(0, 50000, 60))
plt.grid(True)
plt.xlabel('Alarm duration (seconds)', fontsize = 20)
plt.ylabel('Number of events', fontsize = 20)
plt.xticks(range(0, 50000, 600), fontsize = 10)
plt.yticks(fontsize = 10)
plt.xscale('log')
plt.yscale('log')
plt.title('Histogram of alarm durations', fontsize = 20)
fig.savefig('%s/%s' % (DIR_WRITE, 'alarm_duration_hist_3.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
Explanation: Visualise the duration of all alarm events as histogram
End of explanation
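Since the last histogram uses a logarithmic x-axis, the fixed 60-second bins become visually uneven; log-spaced bin edges via `numpy.logspace` are an alternative worth considering. A sketch with toy durations (not the recorded data):

```python
import numpy as np

# 30 bins spanning 1 s to 50 000 s, equally spaced in log10
log_bins = np.logspace(np.log10(1), np.log10(50000), num=31)

durations = np.array([5.0, 42.0, 600.0, 7200.0])  # toy durations in seconds
counts, edges = np.histogram(durations, bins=log_bins)
```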
under_10_sec = sorted([al for al in all_durations if al < 10])
len(under_10_sec)
under_1_min = sorted([al for al in all_durations if al <= 60])
len(under_1_min)
under_10_sec_MV_low = sorted([al for al in alarm_durations_alarms['Minute volume < low limit'] if al < 10])
under_10_sec_MV_high = sorted([al for al in alarm_durations_alarms['Minute volume > high limit'] if al < 10])
under_10_sec_RR_high = sorted([al for al in alarm_durations_alarms['Respiratory rate > high limit'] if al < 10])
len(under_10_sec_MV_low), len(under_10_sec_MV_high), len(under_10_sec_RR_high)
# Short alarms (<10 sec) in the categories where the user sets the limits
len(under_10_sec_MV_low) + len(under_10_sec_MV_high) + len(under_10_sec_RR_high)
Explanation: How many short alarms did occur?
End of explanation
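The threshold counts above can be generalised with `pd.cut`, which assigns every duration to a labelled band in one call instead of one list comprehension per threshold. A sketch with toy values (band labels are illustrative, not taken from the notebook):

```python
import pandas as pd

durations = pd.Series([4.0, 55.0, 300.0, 5000.0])  # toy durations in seconds
bands = pd.cut(durations,
               bins=[0, 10, 60, 600, 3600, float('inf')],
               labels=['<10s', '10-60s', '1-10min', '10-60min', '>1h'])
counts = bands.value_counts()  # events per duration band
```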
# How many alarm events are longer than 1 hour?
over_1_hour = sorted([al for al in all_durations if al > 3600])
len(over_1_hour)
# Which alarms were longer than one hour?
alarms_over_1_hour = []
for recording in recordings:
for alarm in alarm_list[recording]:
for event in alarm_events[recording][alarm]['duration_seconds']:
if event > 3600:
alarms_over_1_hour.append((recording, alarm, event))
alarms_over_1_hour = DataFrame(sorted(alarms_over_1_hour, key = lambda x: x[2], reverse = True),
columns = ['recording', 'alarm', 'duration (seconds)'])
alarms_over_1_hour
alarms_over_1_hour.groupby('alarm').count()
Explanation: Check which are the longest alarms
End of explanation
over_10_minutes = sorted([al for al in all_durations if al > 600 and al <= 3600])
len(over_10_minutes)
alarms_over_10_min = []
# which alarms were longer than 10 minutes but shorter than 1 hour
for recording in recordings:
for alarm in alarm_list[recording]:
for event in alarm_events[recording][alarm]['duration_seconds']:
if event > 600 and event <= 3600:
alarms_over_10_min.append((recording, alarm, event))
alarms_over_10_min = DataFrame(sorted(alarms_over_10_min, key = lambda x: x[2], reverse = True),
columns = ['recording', 'alarm', 'duration (seconds)'])
alarms_over_10_min.groupby('alarm').count()
Explanation: How many alarm events are longer than 10 minutes but shorter than 1 hour?
End of explanation
over_1_minutes = sorted([al for al in all_durations if al > 60 and al <= 600])
len(over_1_minutes)
alarms_over_1_min = []
# Which alarms were longer than 1 minute but shorter than 10 minutes
for recording in recordings:
for alarm in alarm_list[recording]:
for event in alarm_events[recording][alarm]['duration_seconds']:
if event > 60 and event <= 600:
alarms_over_1_min.append((recording, alarm, event))
alarms_over_1_min = DataFrame(sorted(alarms_over_1_min, key = lambda x: x[2], reverse = True),
columns = ['recording', 'alarm', 'duration (seconds)'])
alarms_over_1_min.groupby('alarm').count()
# Write long alarms into a multisheet Excel file
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'long_alarms.xlsx'))
alarms_over_1_hour.to_excel(writer, 'over_1hour')
alarms_over_10_min.to_excel(writer, '10min_to_1hour')
alarms_over_1_min.to_excel(writer, '1min_to_10min')
writer.save()
Explanation: How many alarm events were longer than 1 minute but shorter than 10 minutes?
End of explanation
# Identify the most frequent alarm events
frequent_alarms = alarm_stats_cum_al.loc['count'].sort_values(inplace = False, ascending = False)
# The eight most frequent alarms
frequent_alarms[:8]
# How many percent of all alarms were these 8 frequent alarms?
round(frequent_alarms[:8].sum() / frequent_alarms.sum() * 100, 1)
# Write frequent alarm in an Excel file
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'frequent_alarms.xlsx'))
DataFrame(frequent_alarms[:8]).to_excel(writer, 'frequent_alarms')
writer.save()
# Number of alarms where the user sets the limits
user_set_alarms = (frequent_alarms['Minute volume < low limit'] + frequent_alarms['Minute volume > high limit'] +
frequent_alarms['Respiratory rate > high limit'])
int(user_set_alarms)
# What proportion of all alarms were these 3 user-set alarms?
print('%.3f' % (user_set_alarms / frequent_alarms.sum()))
# Frequent alarms related to VT not achieved
other_frequent_alarms = (frequent_alarms['Tidal volume < low Limit'] + frequent_alarms['Volume not constant'] +
frequent_alarms['Tube obstructed'])
int(other_frequent_alarms)
# What proportion of all alarms were alarms related to VT not achieved?
print('%.3f' % (other_frequent_alarms / frequent_alarms.sum()))
Explanation: Check which are the most frequent alarms
End of explanation
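Selecting the top categories after `sort_values` can equally be done with `Series.nlargest`; a toy sketch of the same "share of all alarms" computation (the alarm names and counts are made up):

```python
import pandas as pd

counts = pd.Series({'MV low': 120, 'RR high': 80,
                    'Tube obstructed': 40, 'Apnea': 10})
top2 = counts.nlargest(2)           # two most frequent alarm categories
share = top2.sum() / counts.sum()   # their fraction of all alarm events
```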
MV_low_count = {}
for recording in recordings:
try:
MV_low_count[recording] = alarm_stats[recording]['Minute volume < low limit']['number of events'].iloc[0]
except KeyError:
# print('No "MV_low" alarm in recording %s' % recording)
pass
MV_low_count_24H = {}
for recording in recordings:
try:
MV_low_count_24H[recording] = \
alarm_stats[recording]['Minute volume < low limit']['number of event per 24h'].iloc[0]
except KeyError:
# print('No "MV_low" alarm in recording %s' % recording)
pass
MV_high_count = {}
for recording in recordings:
try:
MV_high_count[recording] = alarm_stats[recording]['Minute volume > high limit']['number of events'].iloc[0]
except KeyError:
# print('No "MV_high" alarm in recording %s' % recording)
pass
MV_high_count_24H = {}
for recording in recordings:
try:
MV_high_count_24H[recording] = alarm_stats[recording]['Minute volume > high limit']['number of event per 24h'].iloc[0]
except KeyError:
# print('No "MV_high" alarm in recording %s' % recording)
pass
RR_high_count = {}
for recording in recordings:
try:
RR_high_count[recording] = alarm_stats[recording]['Respiratory rate > high limit']['number of events'].iloc[0]
except KeyError:
# print('No "RR_high" alarm in recording %s' % recording)
pass
RR_high_count_24H = {}
for recording in recordings:
try:
RR_high_count_24H[recording] = alarm_stats[recording]['Respiratory rate > high limit']['number of event per 24h'].iloc[0]
except KeyError:
# print('No "RR_high" alarm in recording %s' % recording)
pass
# Plot the number of MV < low limit alarm events and write graph to file
fig = plt.figure()
fig.set_size_inches(17, 12)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.barh(list(range(1, len(MV_low_count)+1)), MV_low_count.values(), color = 'blue')
plt.ylabel("Recordings", fontsize = 16)
plt.xlabel("number of alarm events", fontsize = 16)
plt.title("MV < low limit" , fontsize = 26)
ax1.tick_params(which = 'both', labelsize=14)
plt.yticks([i+1.5 for i, _ in enumerate(MV_low_count.keys())], MV_low_count.keys(),
rotation = 'horizontal')
plt.grid()
fig.savefig('%s/%s' % (DIR_WRITE, 'MV_low.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
# Plot the number of MV < low limit alarm events normalized for 24 hours and write graph to file
fig = plt.figure()
fig.set_size_inches(17, 12)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.barh(list(range(1, len(MV_low_count_24H)+1)), MV_low_count_24H.values(), color = 'blue')
plt.ylabel("Recordings", fontsize = 16)
plt.xlabel("number of alarm events per 24 hours", fontsize = 16)
plt.title("MV < low limit" , fontsize = 26)
ax1.tick_params(which = 'both', labelsize=14)
plt.yticks([i+1.5 for i, _ in enumerate(MV_low_count_24H.keys())], MV_low_count_24H.keys(),
rotation = 'horizontal')
plt.grid()
fig.savefig('%s/%s' % (DIR_WRITE, 'MV_low_24H.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
# Plot the number of MV > high limit alarm events and write graph to file
fig = plt.figure()
fig.set_size_inches(17, 12)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.barh(list(range(1, len(MV_high_count)+1)), MV_high_count.values(), color = 'blue')
plt.ylabel("Recordings", fontsize = 16)
plt.xlabel("number of alarm events", fontsize = 16)
plt.title("MV > high limit" , fontsize = 26)
ax1.tick_params(which = 'both', labelsize=14)
plt.yticks([i+1.5 for i, _ in enumerate(MV_high_count.keys())], MV_high_count.keys(),
rotation = 'horizontal')
plt.grid()
fig.savefig('%s/%s' % (DIR_WRITE, 'MV_high.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
# Plot the number of MV > high limit alarm events normalized for 24 hours and write graph to file
fig = plt.figure()
fig.set_size_inches(17, 12)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.barh(list(range(1, len(MV_high_count_24H)+1)), MV_high_count_24H.values(), color = 'blue')
plt.ylabel("Recordings", fontsize = 16)
plt.xlabel("number of alarm events per 24 hours", fontsize = 16)
plt.title("MV > high limit" , fontsize = 26)
ax1.tick_params(which = 'both', labelsize=14)
plt.yticks([i+1.5 for i, _ in enumerate(MV_high_count_24H.keys())], MV_high_count_24H.keys(),
rotation = 'horizontal')
plt.grid()
fig.savefig('%s/%s' % (DIR_WRITE, 'MV_high_24H.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
# Plot the number of RR > high limit alarm events and write graph to file
fig = plt.figure()
fig.set_size_inches(17, 12)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.barh(list(range(1, len(RR_high_count)+1)), RR_high_count.values(), color = 'blue')
plt.ylabel("Recordings", fontsize = 16)
plt.xlabel("number of alarm events", fontsize = 16)
plt.title("RR > high limit" , fontsize = 26)
ax1.tick_params(which = 'both', labelsize=14)
plt.yticks([i+1.5 for i, _ in enumerate(RR_high_count.keys())], RR_high_count.keys(),
rotation = 'horizontal')
plt.grid()
fig.savefig('%s/%s' % (DIR_WRITE, 'RR_high.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
# Plot the number of RR > high limit alarm events normalized for 24 hours and write graph to file
fig = plt.figure()
fig.set_size_inches(17, 12)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.barh(list(range(1, len(RR_high_count_24H)+1)), RR_high_count_24H.values(), color = 'blue')
plt.ylabel("Recordings", fontsize = 16)
plt.xlabel("number of alarm events per 24 hours", fontsize = 16)
plt.title("RR > high limit" , fontsize = 26)
ax1.tick_params(which = 'both', labelsize=14)
plt.yticks([i+1.5 for i, _ in enumerate(RR_high_count_24H.keys())],
RR_high_count_24H.keys(), rotation = 'horizontal')
plt.grid()
fig.savefig('%s/%s' % (DIR_WRITE, 'RR_high_24H.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
Explanation: Visualise MV and RR limit alarms
Generate dictionaries with the alarm counts (absolute and per 24-hour recording period) for the MV low, MV high and RR high alarms, for the recordings in which each alarm occurs
End of explanation
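The try/except KeyError pattern below is repeated for each alarm type; a dictionary comprehension that skips recordings without the alarm collapses each lookup to one line. A sketch with hypothetical per-recording stats (the key name is illustrative):

```python
# hypothetical per-recording stats: the alarm key may be missing
stats = {'DG001': {'MV low': 7}, 'DG002': {}}

# keep only recordings in which the alarm actually occurred
mv_low_count = {rec: s['MV low'] for rec, s in stats.items() if 'MV low' in s}
```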
for recording in recordings:
slow_measurements[recording] = pd.concat([slow_measurements[recording],
vent_settings_2[recording], alarm_settings_2[recording]], axis = 0, join = 'outer')
slow_measurements[recording].sort_index(inplace = True)
for recording in recordings:
slow_measurements[recording] = slow_measurements[recording].fillna(method = 'pad')
def minute_volume_plotter(rec, ylim = False):
'''
Plots the total minute volume (using the data obtained with 1/sec sampling rate)
together with the "MV low" and "MV high" alarm limits
Displays the plot
'''
if ylim:
ymax = ylim
else:
ymax = slow_measurements[rec]['MV_high_weight'].max() + 0.3
fig = plt.figure()
fig.set_size_inches(12, 8)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
slow_measurements[rec]['MV_kg'].plot(ax = ax1, color = 'blue', ylim = [0, ymax] );
slow_measurements[rec]['MV_low_weight'].plot(ax = ax1, color = 'green', linewidth = 3, ylim = [0, ymax] );
slow_measurements[rec]['MV_high_weight'].plot(ax = ax1, color = 'red', linewidth = 3, ylim = [0, ymax] );
ax1.set_title('Minute volume - %s' % rec, size = 22, color = 'black')
ax1.set_xlabel('Time', size = 22, color = 'black')
ax1.set_ylabel('L/kg/min', size = 22, color = 'black')
ax1.grid('on', linestyle='-', linewidth=0.5, color = 'gray')
ax1.legend(['MV_kg', 'alarm_low', 'alarm_high']);
minute_volume_plotter('DG003')
def minute_volume_plotter_2(rec, ylim = False, version = ''):
'''
Plots the total minute volume (using the data obtained with 1/sec sampling rate)
together with the "MV low" and "MV high" alarm limits
Writes the plot to file (does not display the plot)
'''
if ylim:
ymax = ylim
else:
ymax = slow_measurements[rec]['alarm_MV_high_weight'].max() + 0.3
fig = plt.figure()
fig.set_size_inches(12, 8)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
slow_measurements[rec]['MV_kg'].plot(ax = ax1, color = 'blue', ylim = [0, ymax] );
slow_measurements[rec]['alarm_MV_low_weight'].plot(ax = ax1, color = 'green', linewidth = 3, ylim = [0, ymax] );
slow_measurements[rec]['alarm_MV_high_weight'].plot(ax = ax1, color = 'red', linewidth = 3, ylim = [0, ymax] );
ax1.set_title('Minute volume - %s' % rec, size = 22, color = 'black')
ax1.set_xlabel('Time', size = 22, color = 'black')
ax1.set_ylabel('L/kg/min', size = 22, color = 'black')
ax1.grid('on', linestyle='-', linewidth=0.5, color = 'gray')
ax1.legend(['MV_kg', 'alarm_low', 'alarm_high']);
fig.savefig('%s/%s_%s%s.jpg' % (dir_write, 'minute_volume', rec, version), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
plt.close(fig)
def resp_rate_plotter(rec, ylim = False):
'''
Plots the total respiratory rate (using the data obtained with 1/sec sampling rate)
together with the set backup rate and "RR high" alarm limits
Displays the plot
'''
if ylim:
ymax = ylim
else:
ymax = slow_measurements[rec]['5001|RR [1/min]'].max() + 10
fig = plt.figure()
fig.set_size_inches(12, 8)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
slow_measurements[rec]['5001|RR [1/min]'].plot(ax = ax1, color = 'blue', ylim = [0, ymax] );
slow_measurements[rec]['RR_high'].plot(ax = ax1, color = 'red', linewidth = 3, ylim = [0, ymax] );
slow_measurements[rec]['RR_set'].plot(ax = ax1, color = 'green', linewidth = 3, ylim = [0, ymax] );
ax1.set_title('Respiratory rate - %s' % rec, size = 22, color = 'black')
ax1.set_xlabel('Time', size = 22, color = 'black')
ax1.set_ylabel('1/min', size = 22, color = 'black')
ax1.grid('on', linestyle='-', linewidth=0.5, color = 'gray')
ax1.legend(['RR', 'alarm_high', 'RR_set']);
resp_rate_plotter('DG003')
def resp_rate_plotter_2(rec, ylim = False, version = ''):
'''
Plots the total respiratory rate (using the data obtained with 1/sec sampling rate)
together with the set backup rate and "RR high" alarm limits
Writes the plots to files (does not display the plot)
'''
if ylim:
ymax = ylim
else:
ymax = slow_measurements[rec]['5001|RR [1/min]'].max() + 10
fig = plt.figure()
fig.set_size_inches(12, 8)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
slow_measurements[rec]['5001|RR [1/min]'].plot(ax = ax1, color = 'blue', ylim = [0, ymax] );
slow_measurements[rec]['alarm_RR_high'].plot(ax = ax1, color = 'red', linewidth = 3, ylim = [0, ymax] );
slow_measurements[rec]['RR_set'].plot(ax = ax1, color = 'green', linewidth = 3, ylim = [0, ymax] );
ax1.set_title('Respiratory rate - %s' % rec, size = 22, color = 'black')
ax1.set_xlabel('Time', size = 22, color = 'black')
ax1.set_ylabel('1/min', size = 22, color = 'black')
ax1.grid('on', linestyle='-', linewidth=0.5, color = 'gray')
ax1.legend(['RR', 'alarm_high', 'RR_set'])
fig.savefig('%s/%s_%s%s.jpg' % (dir_write, 'resp_rate', rec, version), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
plt.close(fig)
Explanation: Investigate the relationship of MV and RR parameter readings, ventilation settings and alarm settings
End of explanation
clinical_details_for_paper = clinical_details[['Gestation', 'Birth weight', 'Current weight', 'Main diagnoses']]
clinical_details_for_paper = clinical_details_for_paper.loc[recordings]
# clinical_details_for_paper
vent_modes_all = {}
for recording in recordings:
vent_modes_all[recording] = vent_modes_selected[recording].Text.unique()
vent_modes_all[recording] = [mode[5:] for mode in vent_modes_all[recording] if mode.startswith(' Mode')]
vent_modes_all = DataFrame([vent_modes_all]).T
vent_modes_all.columns = ['Ventilation modes']
vent_modes_all = vent_modes_all.loc[recordings]
# vent_modes_all
recording_duration_hours_all = DataFrame([recording_duration_hours]).T
recording_duration_hours_all.columns = ['Recording duration (hours)']
Table_1 = recording_duration_hours_all.join([clinical_details_for_paper, vent_modes_all])
Table_1
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'Table_1.xlsx'))
Table_1.to_excel(writer)
writer.save()
Explanation: Create the tables and figures of the paper
Table 1
End of explanation
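The table assembly above leans on the `DataFrame([some_dict]).T` idiom to turn a dict keyed by recording ID into a one-column frame whose index is the recording ID. A minimal self-contained sketch of that idiom (the recording IDs and values here are made up for illustration):

```python
import pandas as pd

# a dict keyed by (hypothetical) recording ID, like recording_duration_hours above
durations = {'DG003': 24.5, 'DG041': 18.0}

# DataFrame([dict]) makes one ROW per dict; .T flips it so the keys become the index
table = pd.DataFrame([durations]).T
table.columns = ['Recording duration (hours)']

print(table)
```

Frames built this way share the recording-ID index, which is what makes the later `.join([...])` line up rows correctly.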
rec = 'DG032_2'
filetype = 'jpg'
dpi = 300
alarm_state = alarm_states[rec]
numbered = Series(np.zeros(len(alarm_state)), index = alarm_state.index)
for i in range(1, len(alarm_state)):
if alarm_state.iloc[i]['State New'] == 'Active':
numbered[i] = alarm_list[rec].index(alarm_state.iloc[i]['Id']) + 1
fig = plt.figure()
fig.set_size_inches(10, 4)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None, wspace=None, hspace=None)
ax1 = fig.add_subplot(1, 1, 1);
ax1.plot(alarm_state.index, numbered, '|', color = 'red', markersize = 14, markeredgewidth = 0.5 )
plt.xlabel("Time", fontsize = 14)
plt.title(rec)
plt.yticks([i+1 for i, _ in enumerate(alarm_list[rec])], alarm_list[rec], fontsize = 14);
plt.xticks(fontsize = 8)
plt.ylim(0.5, len(alarm_list[rec]) + 0.5)
fig.savefig('%s/%s.jpg' % (DIR_WRITE, 'Figure_1a'), dpi=dpi, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format= filetype,
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
rec = 'DG032_2'
filetype = 'jpg'
dpi = 300
fig = plt.figure()
fig.set_size_inches(8, 4)
fig.subplots_adjust(left=0.5, bottom=None, right=None, top=None, wspace=None, hspace= None)
ax1 = fig.add_subplot(1, 1, 1)
xs = [i + 0.1 for i, _ in enumerate(alarm_list[rec])]
stats = []
for alarm in alarm_list[rec]:
stats.append(alarm_stats[rec][alarm]['percentage of recording length (%)'])
stats_all = pd.concat(stats)
plt.barh(xs, stats_all, color = 'red')
plt.xlabel("% of total recording time", fontsize = 14)
plt.title(rec)
plt.yticks([i + 0.5 for i, _ in enumerate(alarm_list[rec])], alarm_list[rec], fontsize = 14)
plt.xticks(fontsize = 14);
fig.savefig('%s/%s.jpg' % (DIR_WRITE, 'Figure_1b'), dpi=dpi, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format= filetype,
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
rec = 'DG032_2'
filetype = 'tiff'
dpi = 300
alarm_state = alarm_states[rec]
numbered = Series(np.zeros(len(alarm_state)), index = alarm_state.index)
for i in range(1, len(alarm_state)):
if alarm_state.iloc[i]['State New'] == 'Active':
numbered[i] = alarm_list[rec].index(alarm_state.iloc[i]['Id']) + 1
fig = plt.figure()
fig.set_size_inches(9, 7)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None, wspace=None, hspace=0.3)
ax1 = fig.add_subplot(2, 1, 1);
ax1.plot(alarm_state.index, numbered, '|', color = 'red', markersize = 10, markeredgewidth = 0.5 )
plt.xlabel("Time", fontsize = 12)
plt.title(rec)
plt.yticks([i+1 for i, _ in enumerate(alarm_list[rec])], alarm_list[rec], fontsize = 12);
plt.xticks(fontsize = 8)
plt.ylim(0.5, len(alarm_list[rec]) + 0.5)
ax1 = fig.add_subplot(2, 1, 2)
xs = [i + 0.1 for i, _ in enumerate(alarm_list[rec])]
stats = []
for alarm in alarm_list[rec]:
stats.append(alarm_stats[rec][alarm]['percentage of recording length (%)'])
stats_all = pd.concat(stats)
plt.barh(xs, stats_all, color = 'red')
plt.xlabel("% of total recording time", fontsize = 12)
plt.title(rec)
plt.yticks([i + 0.5 for i, _ in enumerate(alarm_list[rec])], alarm_list[rec], fontsize = 12)
plt.xticks(fontsize = 8);
fig.savefig('%s/%s.tiff' % (DIR_WRITE, 'Figure_1'), dpi=dpi, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format= filetype,
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
Explanation: Figure 1
End of explanation
rec = 'DG003'
filetype = 'jpg'
dpi = 300
ymax = slow_measurements[rec]['MV_high_weight'].max() + 0.3
fig = plt.figure()
fig.set_size_inches(8, 6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=None, hspace=None)
ax1 = fig.add_subplot(1, 1, 1);
slow_measurements[rec]['MV_kg'].plot(ax = ax1, color = 'blue', ylim = [0, ymax] );
slow_measurements[rec]['MV_low_weight'].plot(ax = ax1, color = 'green', linewidth = 3, ylim = [0, ymax] );
slow_measurements[rec]['MV_high_weight'].plot(ax = ax1, color = 'red', linewidth = 3, ylim = [0, ymax] );
ax1.set_title(rec, size = 14, color = 'black')
ax1.set_xlabel('Time', size = 14, color = 'black')
ax1.set_ylabel('L/min/kg', size = 14, color = 'black')
ax1.tick_params(which = 'both', labelsize=12)
ax1.grid('on', linestyle='-', linewidth=0.5, color = 'gray')
ax1.legend(['MV_kg', 'alarm_low', 'alarm_high']);
fig.savefig('%s/%s.jpg' % (DIR_WRITE, 'Figure_2a_color'), dpi=dpi, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format= filetype,
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
rec = 'DG003'
filetype = 'jpg'
dpi = 300
ymax = slow_measurements[rec]['MV_high_weight'].max() + 0.3
fig = plt.figure()
fig.set_size_inches(8, 6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=None, hspace=None)
ax1 = fig.add_subplot(1, 1, 1);
slow_measurements[rec]['MV_kg'].plot(ax = ax1, color = 'black', alpha = 0.6, ylim = [0, ymax] );
slow_measurements[rec]['MV_low_weight'].plot(ax = ax1, color = 'black', linewidth = 3, ylim = [0, ymax] );
slow_measurements[rec]['MV_high_weight'].plot(ax = ax1, color = 'black', linewidth = 3, ylim = [0, ymax] );
ax1.set_title(rec, size = 14, color = 'black')
ax1.set_xlabel('Time', size = 14, color = 'black')
ax1.set_ylabel('L/min/kg', size = 14, color = 'black')
ax1.tick_params(which = 'both', labelsize=12)
ax1.grid('on', linestyle='-', linewidth=0.5, color = 'gray')
ax1.legend(['MV_kg', 'alarm_low', 'alarm_high']);
fig.savefig('%s/%s.jpg' % (DIR_WRITE, 'Figure_2a_bw'), dpi=dpi, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format= filetype,
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
rec = 'DG041'
filetype = 'jpg'
dpi = 300
ymax = slow_measurements[rec]['5001|RR [1/min]'].max() + 15
fig = plt.figure()
fig.set_size_inches(8, 6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
slow_measurements[rec]['5001|RR [1/min]'].plot(ax = ax1, color = 'blue', ylim = [0, ymax] );
slow_measurements[rec]['RR_high'].plot(ax = ax1, color = 'red', linewidth = 3, ylim = [0, ymax] );
slow_measurements[rec]['RR_set'].plot(ax = ax1, color = 'green', linewidth = 3, ylim = [0, ymax] );
ax1.set_title(rec, size = 14, color = 'black')
ax1.set_xlabel('Time', size = 14, color = 'black')
ax1.set_ylabel('1/min', size = 14, color = 'black')
ax1.grid('on', linestyle='-', linewidth=0.5, color = 'gray')
ax1.legend(['RR', 'alarm_high', 'RR_set'])
fig.savefig('%s/%s.jpg' % (DIR_WRITE, 'Figure_2b_color'), dpi=dpi, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format= filetype,
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
rec = 'DG041'
filetype = 'jpg'
dpi = 300
ymax = slow_measurements[rec]['5001|RR [1/min]'].max() + 15
fig = plt.figure()
fig.set_size_inches(8, 6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
slow_measurements[rec]['5001|RR [1/min]'].plot(ax = ax1, color = 'black', alpha = 0.6, ylim = [0, ymax] );
slow_measurements[rec]['RR_high'].plot(ax = ax1, color = 'black', linewidth = 3, ylim = [0, ymax] );
slow_measurements[rec]['RR_set'].plot(ax = ax1, color = 'black', linewidth = 3, ylim = [0, ymax] );
ax1.set_title(rec, size = 14, color = 'black')
ax1.set_xlabel('Time', size = 14, color = 'black')
ax1.set_ylabel('1/min', size = 14, color = 'black')
ax1.grid('on', linestyle='-', linewidth=0.5, color = 'gray')
ax1.legend(['RR', 'alarm_high', 'RR_set'])
fig.savefig('%s/%s.jpg' % (DIR_WRITE, 'Figure_2b_bw'), dpi=dpi, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format= filetype,
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
rec0 = 'DG003'
rec1 = 'DG041'
filetype = 'tiff'
dpi = 300
ymax0 = slow_measurements[rec0]['MV_high_weight'].max() + 0.3
ymax1 = slow_measurements[rec1]['5001|RR [1/min]'].max() + 15
fig = plt.figure()
fig.set_size_inches(6, 9)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.3)
ax0 = fig.add_subplot(2, 1, 1);
slow_measurements[rec0]['MV_kg'].plot(ax = ax0, color = 'blue', ylim = [0, ymax0] );
slow_measurements[rec0]['MV_low_weight'].plot(ax = ax0, color = 'green', linewidth = 3, ylim = [0, ymax0] );
slow_measurements[rec0]['MV_high_weight'].plot(ax = ax0, color = 'red', linewidth = 3, ylim = [0, ymax0] );
ax0.set_title(rec0, size = 12, color = 'black')
ax0.set_xlabel('', size = 12, color = 'black')
ax0.set_ylabel('L/min/kg', size = 12, color = 'black')
ax0.tick_params(which = 'both', labelsize=10)
ax0.grid('on', linestyle='-', linewidth=0.5, color = 'gray')
ax0.legend(['MV_kg', 'alarm_low', 'alarm_high']);
ax1 = fig.add_subplot(2, 1, 2);
slow_measurements[rec1]['5001|RR [1/min]'].plot(ax = ax1, color = 'blue', ylim = [0, ymax1] );
slow_measurements[rec1]['RR_high'].plot(ax = ax1, color = 'red', linewidth = 3, ylim = [0, ymax1] );
slow_measurements[rec1]['RR_set'].plot(ax = ax1, color = 'green', linewidth = 3, ylim = [0, ymax1] );
ax1.set_title(rec1, size = 12, color = 'black')
ax1.set_xlabel('Time', size = 12, color = 'black')
ax1.set_ylabel('1/min', size = 12, color = 'black')
ax1.tick_params(which = 'both', labelsize=10)
ax1.grid('on', linestyle='-', linewidth=0.5, color = 'gray')
ax1.legend(['RR', 'alarm_high', 'RR_set'], loc = 4)
fig.savefig('%s/%s.tiff' % (DIR_WRITE, 'Figure_2'), dpi=dpi, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format= filetype,
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
Explanation: Figure 2
End of explanation
# Histogram showing the number of alarms which were shorter than 1 minute
fig = plt.figure()
fig.set_size_inches(7, 5)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
ax1 = fig.add_subplot(1, 1, 1)
n, bins, patches = plt.hist(all_durations, bins = range(0, 60))
plt.grid(True)
plt.xlabel('Alarm duration (seconds)', fontsize = 12)
plt.ylabel('Number of alarm events', fontsize = 12)
plt.xticks(range(0,60,4), fontsize = 12)
plt.yticks(fontsize = 12)
plt.title('Histogram of alarm durations', fontsize = 12)
fig.savefig('%s/%s' % (DIR_WRITE, 'Figure_3.tiff'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='tiff',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
Explanation: Figure 3
End of explanation |
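`plt.hist` above both draws the figure and returns the per-bin counts. When only the numbers are needed (e.g. to report how many alarms were shorter than some threshold), `numpy.histogram` gives the same counts without creating a figure. A small sketch with made-up alarm durations:

```python
import numpy as np

durations = [2, 3, 3, 10, 45, 59]                 # hypothetical alarm durations in seconds
counts, edges = np.histogram(durations, bins=range(0, 61))

print(counts.sum())                                # every duration lands in exactly one bin
```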
7,697 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear time series analysis - AR/MA models
Lorenzo Biasi (3529646), Julius Vernie (3502879)
Task 1. AR(p) models.
1.1
Step1: We can see that simulating the data as an AR(1) model is not effective in giving us anything similar the aquired data. This is due to the fact that we made the wrong assumptions when we computed the coefficients of our data. Our data is in fact clearly not a stationary process and in particular cannot be from an AR(1) model alone, as there is a linear trend in time. The meaning of the slope that we computed shows that successive data points are strongly correlated.
Step2: 1.2
Before estimating the coefficients of the AR(1) model we remove the linear trend in time, thus making it resemble more closely the model with which we are trying to analyze it.
Step3: This time we obtain different coefficients, that we can use to simulate the data and see if they give us a similar result the real data.
Step4: In the next plot we can see that our predicted values have an error that decays exponentially the further we try to make a prediction. By the time it arrives to 5 time steps of distance it equal to the variance.
Step5: 1.4
By plotting the data we can already see that this cannot be a simple AR model. The data seems divided in 2 parts with very few data points in the middle.
Step6: We tried to simulate the data with these coefficients but it is clearly uneffective
Step7: By plotting the return plot we can better understand what is going on. The data can be divided in two parts. We can see that successive data is always around one of this two poles. If it were a real AR model we would expect something like the return plots shown below this one.
Step8: We can see that in the autocorelation plot the trend is exponential, which is what we would expect, but it is taking too long to decay for being a an AR model with small value of $p$
Step9: Task 2. Autocorrelation and partial autocorrelation.
2.1
Step10: For computing the $\hat p$ for the AR model we predicted the parameters $a_i$ for various AR(p). We find that for p = 6 we do not have any correlation between previous values and future values.
Step11: For the MA $\hat q$ could be around 4-6 | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import scipy.io as sio
from sklearn import datasets, linear_model
%matplotlib inline
def set_data(p, x):
temp = x.flatten()
n = len(temp[p:])
x_T = temp[p:].reshape((n, 1))
X_p = np.ones((n, p + 1))
for i in range(1, p + 1):
X_p[:, i] = temp[i - 1: i - 1 + n]
return X_p, x_T
def AR(coeff, init, T):
offset = coeff[0]
mult_coef = np.flip(coeff, 0)[:-1]
series = np.zeros(T)
for k, x_i in enumerate(init):
series[k] = x_i
for i in range(k + 1, T):
series[i] = np.sum(mult_coef * series[i - k - 1:i]) + np.random.normal() + offset
return series
def estimated_autocorrelation(x):
n = len(x)
mu, sigma2 = np.mean(x), np.var(x)
r = np.correlate(x - mu, x - mu, mode = 'full')[-n:]
result = r/(sigma2 * (np.arange(n, 0, -1)))
return result
def test_AR(x, coef, N):
x = x.flatten()
offset = coef[0]
slope = coef[1]
ave_err = np.empty((len(x) - N, N))
x_temp = np.empty(N)
for i in range(len(x) - N):
x_temp[0] = x[i] * slope + offset
for j in range(N -1):
x_temp[j + 1] = x_temp[j] * slope + offset
ave_err[i, :] = (x_temp - x[i:i+N])**2
return ave_err
x = sio.loadmat('Tut2_file1.mat')['x'].flatten()
plt.plot(x * 2, ',')
plt.xlabel('time')
plt.ylabel('x')
X_p, x_T = set_data(1, x)
model = linear_model.LinearRegression()
model.fit(X_p, x_T)
model.coef_
Explanation: Linear time series analysis - AR/MA models
Lorenzo Biasi (3529646), Julius Vernie (3502879)
Task 1. AR(p) models.
1.1
End of explanation
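Before trusting the helper functions above on measured data, it is worth checking that the fitting approach recovers a known coefficient from synthetic data. A self-contained sanity check (pure NumPy least squares, mirroring what `set_data` plus `LinearRegression` do; the true coefficient 0.7 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
a_true = 0.7
x = np.zeros(5000)
for t in range(1, len(x)):                     # simulate x_t = a * x_{t-1} + noise
    x[t] = a_true * x[t - 1] + rng.normal()

# regress x_t on [1, x_{t-1}], exactly like set_data + LinearRegression
X = np.column_stack([np.ones(len(x) - 1), x[:-1]])
coef, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
print(coef[1])                                 # should be close to 0.7
```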
x_1 = AR(np.append(model.coef_, 0), [0, x[0]], 50001)
plt.plot(x_1[1:], ',')
plt.xlabel('time')
plt.ylabel('x')
Explanation: We can see that simulating the data as an AR(1) model is not effective in giving us anything similar to the acquired data. This is due to the fact that we made the wrong assumptions when we computed the coefficients of our data. Our data is in fact clearly not a stationary process and in particular cannot come from an AR(1) model alone, as there is a linear trend in time. The value of the slope that we computed shows that successive data points are strongly correlated.
End of explanation
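The claim that the fitted slope reflects strong correlation between successive points can be checked directly: on a series with a deterministic linear trend, the lag-1 regression coefficient comes out close to 1 (near a unit root), which is why the naive AR(1) simulation wanders instead of reproducing the data. A sketch with synthetic trending data (slope and noise level are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(20000)
y = 0.01 * t + rng.normal(size=t.size)          # linear trend plus noise

X = np.column_stack([np.ones(y.size - 1), y[:-1]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
print(coef[1])                                  # close to 1 for a trending, non-stationary series
```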
rgr = linear_model.LinearRegression()
x = x.reshape((len(x)), 1)
t = np.arange(len(x)).reshape(x.shape)
rgr.fit(t, x)
x_star= x - rgr.predict(t)
plt.plot(x_star.flatten(), ',')
plt.xlabel('time')
plt.ylabel('x')
Explanation: 1.2
Before estimating the coefficients of the AR(1) model we remove the linear trend in time, thus making it resemble more closely the model with which we are trying to analyze it.
End of explanation
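The same detrending as above (fit a straight line in time, subtract it) can also be written compactly with `numpy.polyfit`; a self-contained sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(1000, dtype=float)
x = 3.0 + 0.05 * t + rng.normal(size=t.size)    # offset + trend + noise

slope, intercept = np.polyfit(t, x, 1)
x_star = x - (slope * t + intercept)            # detrended residual

print(slope)                                    # close to 0.05
print(x_star.mean())                            # close to 0 by construction
```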
X_p, x_T = set_data(1, x_star)
model.fit(X_p, x_T)
model.coef_
x_1 = AR(np.append(model.coef_[0], 0), [0, x_star[0]], 50000)
plt.plot(x_1, ',')
plt.xlabel('time')
plt.ylabel('x')
plt.plot(x_star[1:], x_star[:-1], ',')
plt.xlabel(r'x$_{t - 1}$')
plt.ylabel(r'x$_{t}$')
Explanation: This time we obtain different coefficients, which we can use to simulate the data and see if they give us a result similar to the real data.
End of explanation
err = test_AR(x_star, model.coef_[0], 10)
np.sum(err, axis=0) / err.shape[0]
plt.plot(np.sum(err, axis=0) / err.shape[0], 'o', label='Error')
plt.plot([0, 10.], np.ones(2)* np.var(x_star), 'r', label='Variance')
plt.grid(linestyle='dotted')
plt.xlabel(r'$\Delta t$')
plt.ylabel('Error')
Explanation: In the next plot we can see that the prediction error grows with the forecast horizon, approaching the variance of the data exponentially. By the time the prediction is 5 time steps ahead, the error is already equal to the variance.
End of explanation
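This behaviour matches the theory for a stationary AR(1): the k-step-ahead forecast error variance is $\sigma^2 (1 - a^{2k})/(1 - a^2)$, which rises monotonically toward the stationary variance $\sigma^2/(1 - a^2)$. A quick numerical check of that formula (the values of a and sigma are arbitrary):

```python
import numpy as np

a, sigma2 = 0.8, 1.0
k = np.arange(1, 11)
err_var = sigma2 * (1 - a ** (2 * k)) / (1 - a ** 2)   # k-step forecast error variance
stat_var = sigma2 / (1 - a ** 2)                        # stationary variance of the process

print(err_var)                                          # increasing, approaching stat_var
```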
x = sio.loadmat('Tut2_file2.mat')['x'].flatten()
plt.plot(x, ',')
plt.xlabel('time')
plt.ylabel('x')
np.mean(x)
X_p, x_T = set_data(1, x)
model = linear_model.LinearRegression()
model.fit(X_p, x_T)
model.coef_
Explanation: 1.4
By plotting the data we can already see that this cannot be a simple AR model. The data seem divided into two parts, with very few data points in the middle.
End of explanation
x_1 = AR(model.coef_[0], x[:1], 50001)
plt.plot(x_1[1:], ',')
plt.xlabel('time')
plt.ylabel('x')
Explanation: We tried to simulate the data with these coefficients, but it is clearly ineffective
End of explanation
plt.plot(x[1:], x[:-1], ',')
plt.xlabel(r'x$_{t - 1}$')
plt.ylabel(r'x$_{t}$')
plt.plot(x_star[1:], x_star[:-1], ',')
plt.xlabel(r'x$_{t - 1}$')
plt.ylabel(r'x$_{t}$')
Explanation: By plotting the return plot we can better understand what is going on. The data can be divided into two parts. We can see that successive data points always stay around one of these two poles. If it were a real AR model, we would expect something like the return plots shown below this one.
End of explanation
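One simple mechanism that produces a return plot with two clusters like this is a bistable (two-state) process: the series sits near one of two levels and only rarely switches between them. A sketch (the levels of plus/minus 2 and the switching probability 0.01 are arbitrary choices, not fitted to the data):

```python
import numpy as np

rng = np.random.default_rng(3)
n, level = 20000, 2.0
state = np.empty(n)
s = 1.0
for t in range(n):
    if rng.random() < 0.01:        # rare switch between the two poles
        s = -s
    state[t] = s
x = level * state + 0.3 * rng.normal(size=n)

# successive points cluster around (+2, +2) and (-2, -2), not along one line
same_pole = np.mean(np.sign(x[1:]) == np.sign(x[:-1]))
print(same_pole)                   # close to 1: consecutive samples share a pole
```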
plt.plot(estimated_autocorrelation(x)[:200])
plt.xlabel(r'$\Delta$t')
plt.ylabel(r'$\rho$')
plt.plot(estimated_autocorrelation(x_1.flatten())[:20])
plt.xlabel(r'$\Delta$t')
plt.ylabel(r'$\rho$')
Explanation: We can see that the trend in the autocorrelation plot is exponential, which is what we would expect, but it takes too long to decay for an AR model with a small value of $p$
End of explanation
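For a stationary AR(1) with coefficient a, the autocorrelation decays as $\rho(k) = a^k$, so a decay that stretches over hundreds of lags points to a coefficient extremely close to 1 rather than to a small-p AR model. A numerical check on simulated data (a = 0.8 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(4)
a, n = 0.8, 50000
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + rng.normal()

x = x - x.mean()
rho1 = np.dot(x[1:], x[:-1]) / np.dot(x, x)   # lag-1 sample autocorrelation
print(rho1)                                    # close to a = 0.8
```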
data = sio.loadmat('Tut2_file3.mat')
x_AR = data['x_AR'].flatten()
x_MA = data['x_MA'].flatten()
Explanation: Task 2. Autocorrelation and partial autocorrelation.
2.1
End of explanation
for i in range(3,7):
X_p, x_T = set_data(i, x_AR)
model = linear_model.LinearRegression()
model.fit(X_p, x_T)
plt.plot(estimated_autocorrelation((x_T - model.predict(X_p)).flatten())[:20], \
label='AR(' + str(i) + ')')
plt.xlabel(r'$\Delta$t')
plt.ylabel(r'$\rho$')
plt.legend()
Explanation: For computing the $\hat p$ for the AR model we predicted the parameters $a_i$ for various AR(p). We find that for p = 6 we do not have any correlation between previous values and future values.
End of explanation
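The same order-selection idea can be checked on synthetic data: for a true AR(2), the highest-lag coefficient of a fitted AR(k) (essentially the sample partial autocorrelation at lag k) drops to about zero once k exceeds 2. A self-contained sketch (the AR(2) coefficients 0.5 and 0.3 are arbitrary but stationary):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.5 * x[t - 1] + 0.3 * x[t - 2] + rng.normal()

def last_ar_coef(x, p):
    # fit AR(p) by least squares and return the lag-p coefficient
    rows = [x[p - i - 1: len(x) - i - 1] for i in range(p)]
    X = np.column_stack([np.ones(len(x) - p)] + rows)
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coef[-1]

print(last_ar_coef(x, 2))   # close to the true lag-2 coefficient 0.3
print(last_ar_coef(x, 3))   # close to 0: order 3 adds nothing
```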
plt.plot(estimated_autocorrelation(x_MA)[:20])
plt.xlabel(r'$\Delta$t')
plt.ylabel(r'$\rho$')
Explanation: For the MA $\hat q$ could be around 4-6
End of explanation |
7,698 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Zpracování rotačních spekter $N_2(C\,^3\Pi_g\rightarrow B\,^3\Pi_u)$
V atmosférických výbojích často pozorujeme záření molekuly dusíky v důsledku přechodu $N_2(C\,^3\Pi_g\rightarrow B\,^3\Pi_u)$. Ve starší literatuře se setkáme s označením "druhý pozitivní systém". Většina vibračních pásů tohoto systému se nachází v blízké UV oblasti a zasahuje i do viditelné části spektra. Toto záření je tedy často původcem charakteristické fialové barvy atmosférických výbojů.
S rostoucí teplotou se více populují stavy s vyšším rotačním číslem. Ve spektrech se to projevuje tak, že roste relativní intenzita části pásů směrem ke kratším vlnovým délkám (tedy směrem "doleva").
Toho se dá využít k rychlému odhady teploty, pokud máme k dispozici kalibrační křivku
$$
\frac{I_0}{ I_1}= f(T, {\rm integrační~limity})
$$
Step1: Úkoly
Projděte všechny soubory v adresáři N2spec. Každý z nich obsahuje dusíkové spektrum získané při určité teplotě. Výše popsanou metodou s využitím kalibrační křivky (N2spec/calibration.txt) přiřaďte každému souboru správnou teplotu.
Jméno každého souboru skrývá souřadnice x a y. Vyneste získané hodnoty teploty do dvourozměrného pole podle těchto souřadnic a vykreslete takto získaný obraz.
Můžeme začít tím, že zjistíme, co se v daném adresáři skrývá
Step2: Předchozí přístup vlastně nevyužívá python. Pro práci s obsahem adresářů je v pythonu např. knihovna glob. S její pomocí můžeme zjistit, kolik souborů je třeba zpracovat.
Step3: Měli bychom si ověřit, jakou mají naše soubory strukturu
Step4: Vidíme, že soubory skrývají spektra ve dvou sloupcích oddělených tabulátorem. První sloupec obsahuje údaje o vlnové délce (podle něj tedy můžeme rozhodnout, v jakém intervalu budeme integrovat), ve druhém sloupci najdeme intenzitu, tedy to, co bylo ve výše ukázaných grafech na ose y. První řádek obsahuje hlavičku a začíná znakem "#". Toto je tedy symbol označující komentáře - každý řádek od tohoto symbolu dál by měl být při načítání přeskočen.
Pro načítání takovýchto souborů nám poslouží funkce numpy.genfromtxt().
Step5: And how do we compute an integral? Let us look at the numpy.trapz function.
Step6: trapz can compute an integral, but it cannot restrict itself to a given interval. For that we will use array selection with a boolean condition.
Step7: We can therefore proceed to compute both integrals and their ratio.
Step8: Now it only remains to apply the calibration curve correctly and assign the right temperature to the ratio of the integrals. First we have to load the file. Let us first check whether we can again use numpy.genfromtxt without any special settings
Step9: But the exact value of I0_over_I1 is not in the calibration.txt file! What now?
We will have to use Google. Keywords
Step10: So now it only remains to put the above into a loop and create a suitable structure to hold the resulting temperatures in memory, e.g. a dictionary.
Step11: We need to extract the coordinate information from the file name
Step12: We can see that our resulting array should be 50x50 pixels so that the data fit into it.
Step13: Using plt.imshow() we can plot the result. | Python Code:
#the code in this cell is not part of the lesson,
#but we are not hiding it either
import massiveOES
import matplotlib.pyplot as plt
%matplotlib inline
from matplotlib import colors as mcolors
colors = dict(mcolors.BASE_COLORS, **mcolors.CSS4_COLORS)
N2 = massiveOES.SpecDB('N2CB.db')
spec_cold = N2.get_spectrum(Trot=300, Tvib=300, wmin=325, wmax=337.6)
spec_hot = N2.get_spectrum(Trot=3000, Tvib=3000, wmin=325, wmax=337.6)
dump=spec_cold.refine_mesh()
dump=spec_hot.refine_mesh()
spec_cold.convolve_with_slit_function(gauss=5e-2)
spec_hot.convolve_with_slit_function(gauss=5e-2)
plt.rcParams['font.size'] = 13
plt.rcParams['figure.figsize'] = (15,4)
fig, axs = plt.subplots(1,2)
axs[0].plot(spec_cold.x, spec_cold.y, color='blue', label = 'T$_{rot}$ = 300 K')
axs[0].plot(spec_hot.x, spec_hot.y, color='green', label = 'T$_{rot}$ = 3000 K')
axs[0].legend(loc='upper left')
axs[0].set_xlabel('wavelength [nm]')
axs[0].set_ylabel('relative photon flux [arb. u.]')
int_lims = 320, 335, 336.9
axs[1].plot(spec_hot.x, spec_hot.y, color='green', label = 'T$_{rot}$ = 3000 K')
axs[1].fill_between(spec_hot.x, spec_hot.y,
where=(spec_hot.x > int_lims[0]) & (spec_hot.x<int_lims[1]),
color='green', alpha=0.5)
axs[1].fill_between(spec_hot.x, spec_hot.y,
where=(spec_hot.x > int_lims[1]) & (spec_hot.x<int_lims[2]),
color='darkorange', alpha=0.5)
axs[1].annotate('$I_0$', xy=(334, 70), xytext = (330, 300),
arrowprops=dict(facecolor='green', width = 2, headwidth=7),
color = 'green', alpha=0.9, size=20)
axs[1].annotate('$I_1$', xy=(336, 220), xytext = (334, 500),
arrowprops=dict(facecolor='darkorange', width = 2, headwidth=7),
color = 'darkorange', alpha=1, size=20)
axs[1].set_xlabel('wavelength [nm]')
txt = axs[1].set_ylabel('relative photon flux [arb. u.]')
import numpy
import matplotlib.pyplot as plt
%matplotlib inline
cal = numpy.genfromtxt('N2spec/calibration.txt')
plt.rcParams['figure.figsize'] = (7.5,4)
ax = plt.plot(cal[:,0], cal[:,1])
plt.xlabel('temperature [K]')
plt.ylabel('I$_0$ / I$_1$')
txt = plt.title('limits = (320, 335, 336.9) nm')
Explanation: Processing rotational spectra of $N_2(C\,^3\Pi_g\rightarrow B\,^3\Pi_u)$
In atmospheric discharges we often observe radiation of the nitrogen molecule due to the $N_2(C\,^3\Pi_g\rightarrow B\,^3\Pi_u)$ transition. In the older literature this is referred to as the "second positive system". Most vibrational bands of this system lie in the near-UV region and extend into the visible part of the spectrum. This radiation is therefore often the origin of the characteristic violet color of atmospheric discharges.
With increasing temperature, states with higher rotational quantum numbers become more populated. In the spectra this manifests itself as a growing relative intensity of the part of the bands towards shorter wavelengths (i.e. towards the "left").
This can be used for a quick temperature estimate, provided a calibration curve is available
$$
\frac{I_0}{I_1} = f(T, \mathrm{integration~limits})
$$
End of explanation
!ls N2spec/ | head #!ls lists the directory contents, head limits it to the first 10 lines
Explanation: Tasks
Go through all the files in the N2spec directory. Each of them contains a nitrogen spectrum acquired at a certain temperature. Using the method described above and the calibration curve (N2spec/calibration.txt), assign the correct temperature to each file.
The name of each file encodes the x and y coordinates. Put the obtained temperature values into a two-dimensional array according to these coordinates and plot the resulting image.
We can start by finding out what the directory contains:
End of explanation
import glob
filelist = glob.glob('N2spec/y*x*.txt')
print('Today we will process ' + str(len(filelist)) + ' files')
Explanation: The previous approach does not really use Python. For working with directory contents, Python provides e.g. the glob library. With its help we can find out how many files need to be processed.
End of explanation
!head N2spec/y0x10.txt
Explanation: We should check what structure our files have:
End of explanation
import numpy
numpy.genfromtxt?
#the converters parameter makes it possible to handle non-standard inputs,
#e.g. a decimal comma instead of a decimal point
sample = numpy.genfromtxt(filelist[0], comments='#')
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(sample[:,0], sample[:,1])
#colon notation [:,0]: read it as "take all rows, column zero"
Explanation: We can see that the files contain spectra in two columns separated by tabs. The first column contains the wavelength (so it determines the interval over which we will integrate); the second column contains the intensity, i.e. what appeared on the y-axis in the plots shown above. The first row is a header and starts with the "#" character, which therefore marks comments: every line should be skipped from this symbol onward when reading the file.
The numpy.genfromtxt() function will serve us for loading such files.
End of explanation
numpy.trapz?
Explanation: And how do we compute an integral? Let us look at the numpy.trapz function.
End of explanation
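numpy.trapz implements the composite trapezoidal rule, which is exact for piecewise-linear integrands; a one-line check on a known integral:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 11)
area = np.trapz(2 * x, x)   # the integral of 2x over [0, 1] is exactly 1
print(area)
```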
condition0 = (sample[:,0] > 320) & (sample[:,0] < 335)
condition1 = (sample[:,0] > 335) & (sample[:,0] < 336.9)
condition2 = sample[:,0] > 336.9
plt.plot(sample[condition0,0], sample[condition0,1], color = 'blue')
plt.plot(sample[condition1,0], sample[condition1,1], color = 'red')
plt.plot(sample[condition2,0], sample[condition2,1], color = 'green')
Explanation: trapz can compute an integral, but it cannot restrict itself to a given interval. For that we will use array selection with a boolean condition.
End of explanation
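Combining a boolean mask with trapz is all that is needed to integrate over a sub-interval. A self-contained sketch on a made-up flat spectrum, where the answer is simply the width of the selected interval:

```python
import numpy as np

wl = np.linspace(300.0, 350.0, 501)        # a made-up wavelength axis, step 0.1 nm
intensity = np.ones_like(wl)               # flat spectrum, easy to check by eye

mask = (wl > 320.0) & (wl < 335.0)
partial = np.trapz(intensity[mask], wl[mask])
print(partial)                             # roughly the width of the interval (about 15)
```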
I0 = numpy.trapz(y = sample[condition0,1], x = sample[condition0,0])
I1 = numpy.trapz(y = sample[condition1,1], x = sample[condition1, 0])
I0_over_I1 = I0 / I1
print(I0_over_I1)
Explanation: We can therefore proceed to compute both integrals and their ratio.
End of explanation
!head N2spec/calibration.txt
#hooray, it will work without any extra options
cal = numpy.genfromtxt('N2spec/calibration.txt')
condition_temp = (cal[:,1] == I0_over_I1)
print(cal[condition_temp,0])
Explanation: Now it only remains to apply the calibration curve correctly and assign the right temperature to the ratio of the integrals. First we have to load the file. Let us first check whether we can again use numpy.genfromtxt without any special settings:
End of explanation
def find_nearest(array,value):
idx = (numpy.abs(array-value)).argmin()
return idx
nearest_index = find_nearest(cal[:,1], I0_over_I1)
print(nearest_index)
print(cal[nearest_index, 0])
Explanation: But the exact value of I0_over_I1 is not in the calibration.txt file! What now?
We will have to use Google. Keywords: numpy find nearest value.
.
.
.
.
.
.
(the picture is right here, but surely you are not going to copy it out by hand...)
<img src="http://physics.muni.cz/~janvorac/stack_overflow.png"></img>
End of explanation
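Instead of snapping to the nearest tabulated point, the calibration curve can also be interpolated, which gives a continuous temperature estimate. numpy.interp needs its x-values increasing, so we interpolate T as a function of the ratio after sorting. A sketch on a made-up monotone calibration (the numbers are invented for illustration):

```python
import numpy as np

cal_T = np.array([300.0, 1000.0, 2000.0, 3000.0])   # hypothetical temperatures [K]
cal_ratio = np.array([0.1, 0.4, 0.9, 1.6])          # hypothetical I0/I1 values

order = np.argsort(cal_ratio)                       # interp needs increasing x
T_est = np.interp(0.65, cal_ratio[order], cal_T[order])
print(T_est)                                        # between 1000 K and 2000 K
```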
temp_dict = {}
for fname in glob.glob('N2spec/y*x*.txt'):
data = numpy.genfromtxt(fname)
I0 = numpy.trapz(y = data[condition0,1], x = data[condition0,0])
I1 = numpy.trapz(y = data[condition1,1], x = data[condition1, 0])
I0_over_I1 = I0 / I1
index = find_nearest(cal[:,1], I0_over_I1)
T = cal[index][0]
temp_dict[fname] = T
Explanation: So now it only remains to put the above into a loop and create a suitable structure to hold the resulting temperatures in memory, e.g. a dictionary.
End of explanation
xs = []
ys = []
for fname in temp_dict:
#first split the file name on "/" and take only the second element [1]
#the first one will always be just the same directory
fname = fname.split('/')[1]
#to get y, split the file name on "x" and take the first element [0]
y = fname.split('x')[0]
#the file name starts with the character "y", which we skip [1:]
y = int(y[1:])
ys.append(y)
#here we split the file name on "x" and take the second element [1]
x = fname.split('x')[1]
#split the rest on "." and take the first element [0]
x = x.split('.')[0]
x = int(x)
xs.append(x)
print(numpy.unique(xs))
print(numpy.unique(ys))
Explanation: We need to extract the coordinate information from the file name:
End of explanation
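A regular expression is a more compact alternative to the chain of split calls for pulling the coordinates out of names like y0x10.txt; a sketch:

```python
import re

m = re.match(r'y(\d+)x(\d+)\.txt$', 'y0x10.txt')
y, x = int(m.group(1)), int(m.group(2))
print(y, x)
```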
result = numpy.zeros((50, 50)) #the result will go here
for i, fname in enumerate(temp_dict):
#print(i, fname)
x = xs[i]
y = ys[i]
T = temp_dict[fname]
result[y, x] = T
Explanation: We can see that our resulting array should be 50x50 pixels so that the data fit into it.
End of explanation
plt.imshow(result)
plt.colorbar()
Explanation: Using plt.imshow() we can plot the result.
End of explanation |
7,699 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
By default Prophet fits additive seasonalities, meaning the effect of the seasonality is added to the trend to get the forecast. This time series of the number of air passengers is an example of when additive seasonality does not work
Step1: This time series has a clear yearly cycle, but the seasonality in the forecast is too large at the start of the time series and too small at the end. In this time series, the seasonality is not a constant additive factor as assumed by Prophet, rather it grows with the trend. This is multiplicative seasonality.
Prophet can model multiplicative seasonality by setting seasonality_mode='multiplicative' in the input arguments
Step2: The components figure will now show the seasonality as a percent of the trend
Step3: With seasonality_mode='multiplicative', holiday effects will also be modeled as multiplicative. Any added seasonalities or extra regressors will by default use whatever seasonality_mode is set to, but can be overriden by specifying mode='additive' or mode='multiplicative' as an argument when adding the seasonality or regressor.
For example, this block sets the built-in seasonalities to multiplicative, but includes an additive quarterly seasonality and an additive regressor | Python Code:
%%R -w 10 -h 6 -u in
df <- read.csv('../examples/example_air_passengers.csv')
m <- prophet(df)
future <- make_future_dataframe(m, 50, freq = 'm')
forecast <- predict(m, future)
plot(m, forecast)
df = pd.read_csv('../examples/example_air_passengers.csv')
m = Prophet()
m.fit(df)
future = m.make_future_dataframe(50, freq='MS')
forecast = m.predict(future)
fig = m.plot(forecast)
Explanation: By default Prophet fits additive seasonalities, meaning the effect of the seasonality is added to the trend to get the forecast. This time series of the number of air passengers is an example of when additive seasonality does not work:
End of explanation
%%R -w 10 -h 6 -u in
m <- prophet(df, seasonality.mode = 'multiplicative')
forecast <- predict(m, future)
plot(m, forecast)
m = Prophet(seasonality_mode='multiplicative')
m.fit(df)
forecast = m.predict(future)
fig = m.plot(forecast)
Explanation: This time series has a clear yearly cycle, but the seasonality in the forecast is too large at the start of the time series and too small at the end. In this time series, the seasonality is not a constant additive factor as assumed by Prophet, rather it grows with the trend. This is multiplicative seasonality.
Prophet can model multiplicative seasonality by setting seasonality_mode='multiplicative' in the input arguments:
End of explanation
%%R -w 9 -h 6 -u in
prophet_plot_components(m, forecast)
fig = m.plot_components(forecast)
Explanation: The components figure will now show the seasonality as a percent of the trend:
End of explanation
%%R
m <- prophet(seasonality.mode = 'multiplicative')
m <- add_seasonality(m, 'quarterly', period = 91.25, fourier.order = 8, mode = 'additive')
m <- add_regressor(m, 'regressor', mode = 'additive')
m = Prophet(seasonality_mode='multiplicative')
m.add_seasonality('quarterly', period=91.25, fourier_order=8, mode='additive')
m.add_regressor('regressor', mode='additive')
Explanation: With seasonality_mode='multiplicative', holiday effects will also be modeled as multiplicative. Any added seasonalities or extra regressors will by default use whatever seasonality_mode is set to, but can be overridden by specifying mode='additive' or mode='multiplicative' as an argument when adding the seasonality or regressor.
For example, this block sets the built-in seasonalities to multiplicative, but includes an additive quarterly seasonality and an additive regressor:
End of explanation |
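The additive vs. multiplicative distinction can also be illustrated without Prophet at all — a short NumPy sketch with a synthetic monthly series (all numbers are made up):

```python
import numpy as np

t = np.arange(120)                       # 10 years of monthly samples
trend = 100 + 2.0 * t                    # linear upward trend
season = np.sin(2 * np.pi * t / 12)      # yearly cycle

additive = trend + 20 * season               # seasonal swing stays constant
multiplicative = trend * (1 + 0.2 * season)  # seasonal swing grows with the trend

# peak-to-peak seasonal swing in the first and the last year
add_early, add_late = np.ptp(additive[:12]), np.ptp(additive[-12:])
mul_early, mul_late = np.ptp(multiplicative[:12]), np.ptp(multiplicative[-12:])

print(add_early, add_late)   # roughly equal
print(mul_early, mul_late)   # the late swing is several times larger
```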