daniel-koehn/Theory-of-seismic-waves-II | 05_2D_acoustic_FD_modelling/6_fdac2d_marmousi_model_exercise.ipynb | gpl-3.0 | # Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())
"""
Explanation: Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 by D. Koehn. The heterogeneous models are from this Jupyter notebook by Heiner Igel (@heinerigel), Florian Wölfl and Lion Krischer (@krischer), which is supplementary material to the book Computational Seismology: A Practical Introduction; notebook style sheet by L.A. Barba, N.C. Clementi
End of explanation
"""
# Import Libraries
# ----------------
import numpy as np
from numba import jit
import matplotlib
import matplotlib.pyplot as plt
from pylab import rcParams
# Ignore Warning Messages
# -----------------------
import warnings
warnings.filterwarnings("ignore")
from mpl_toolkits.axes_grid1 import make_axes_locatable
"""
Explanation: Exercise: 2D acoustic FD modelling of the Marmousi-2 model
In this exercise, you have to apply all the knowledge about FD modelling we have covered so far. While the modelling examples in the last lesson were quite simple, we now calculate 2D acoustic wave propagation for a more realistic problem: the Marmousi-2 model.
Developed in the 1990s by the French Petroleum Institute (IFP) (Versteeg, 1994), the Marmousi model is a widely used benchmark problem for seismic imaging and inversion techniques. Besides the original acoustic version of the model, an elastic version was developed by Martin et al. (2006).
The Marmousi-2 model consists of a 460 m thick water layer above an elastic subseafloor model. The sediment model is very simple near the left and right boundaries but rather complex in the centre. At both sides, the subseafloor is approximately horizontally layered, while steep thrust faults disturb the layers in the centre of the model. Embedded in the thrust fault system and layers are small-scale hydrocarbon reservoirs.
Exercise
Setup and model the 2D acoustic wave propagation in the Marmousi-2 model:
- Define the model discretization based on the given Marmousi-2 P-wave velocity model. The maximum wave propagation time should be 6 s.
- Calculate the central frequency $f_0$ of the source wavelet based on the grid dispersion criterion
\begin{equation}
dx \le \frac{vp_{min}}{N_\lambda f_0}, \nonumber
\end{equation}
which you can use for the FD modelling run, based on the pre-defined $dx$ of the Marmousi-2 model, the minimum P-wave velocity $vp_{min}$ and $N_\lambda = 4$ gridpoints per dominant wavelength.
- Start a modelling run for the Marmousi-2 model with the 3-point spatial FD operator. Place an airgun for a central shot at x = 5000 m and a depth of 40 m below the sea surface. Do not forget to calculate an appropriate time step $dt$.
- Add an additional function update_d2px_d2pz_5pt to approximate the 2nd spatial derivatives by the 5-point FD operator in the 2D acoustic FD code derived here. Add an option op to switch between the 3-point and 5-point operators.
- Start an additional modelling run for the Marmousi-2 model with the 5-point operator.
- Imagine you place an Ocean-Bottom-Cable (OBC) on the seafloor of the Marmousi-2 model. Calculate an OBC shot gather by placing receivers at each gridpoint of the Cartesian model in x-direction at a depth of 460 m. Modify the FD code to record seismograms at each receiver position. Do not forget to return the seismograms from the modelling function.
- Plot and compare the seismograms produced by the 3- and 5-point operators, respectively.
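As a quick sketch of the dispersion criterion above (the numeric values for dx and vp_min are assumptions; use the pre-defined grid spacing and vp.min() of the loaded model in your solution), you can rearrange it for the central frequency:

```python
# Rearranging the grid dispersion criterion dx <= vp_min / (N_lambda * f0)
# for the central frequency f0. The values below are assumptions; use the
# pre-defined dx of the Marmousi-2 model and vp.min() in your solution.
dx = 20.0        # spatial grid point distance (m)
vp_min = 1500.0  # assumed minimum P-wave velocity, e.g. the water layer (m/s)
N_lambda = 4     # gridpoints per dominant wavelength

f0 = vp_min / (N_lambda * dx)  # maximum dispersion-free central frequency (Hz)
print(f0)  # 18.75
```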
End of explanation
"""
# Import Marmousi-2 Vp model
# --------------------------
# DEFINE MODEL DISCRETIZATION HERE!
nx = # number of grid points in x-direction
nz = # number of grid points in z-direction
dx = # spatial grid point distance in x-direction (m)
dz = dx # spatial grid point distance in z-direction (m)
# Define model filename
name_vp = "marmousi-2/marmousi_II_marine.vp"
# Open file and write binary data to vp
f = open(name_vp)
data_type = np.dtype('float32').newbyteorder('<')
vp = np.fromfile(f, dtype=data_type)
# Reshape (1 x nx*nz) vector to (nx x nz) matrix
vp = vp.reshape(nx,nz)
"""
Explanation: The elastic version of the Marmousi-2 model, together with FD-modelled high-frequency shot gather data, is available from here. The central part of the P-wave velocity model of Marmousi-2, with the spatial discretization:
$nx = 500$ gridpoints
$nz = 174$ gridpoints
$dx = dz = 20\; m$
is available as IEEE little endian binary file marmousi_II_marine.vp in the marmousi-2 directory. It can be imported to Python with the following code snippet:
End of explanation
"""
# Plot Marmousi-2 vp-model
# ------------------------
# Define xmax, zmax and model extension
xmax = nx * dx
zmax = nz * dz
extent = [0, xmax, zmax, 0]
fig = plt.figure(figsize=(12,3)) # define figure size
image = plt.imshow((vp.T)/1000, cmap=plt.cm.viridis, interpolation='nearest',
extent=extent)
cbar = plt.colorbar(aspect=10, pad=0.02)
cbar.set_label('Vp [km/s]', labelpad=10)
plt.title('Marmousi-2 model')
plt.xlabel('x [m]')
plt.ylabel('z [m]')
# Definition of modelling parameters
# ----------------------------------
# DEFINE MAXIMUM RECORDING TIME HERE!
tmax = # maximum wave propagation time (s)
# DEFINE YOUR SHOT POSITION HERE!
xsrc = # x-source position (m)
zsrc = # z-source position (m)
# CALCULATE DOMINANT FREQUENCY OF THE SOURCE WAVELET HERE!
f0 = # dominant frequency of the source (Hz)
print("f0 = ", f0, " Hz")
t0 = 4.0/f0 # source time shift (s)
isnap = 2 # snapshot interval (timesteps)
@jit(nopython=True) # use JIT for C-performance
def update_d2px_d2pz_3pt(p, dx, dz, nx, nz, d2px, d2pz):
for i in range(1, nx - 1):
for j in range(1, nz - 1):
d2px[i,j] = (p[i + 1,j] - 2 * p[i,j] + p[i - 1,j]) / dx**2
d2pz[i,j] = (p[i,j + 1] - 2 * p[i,j] + p[i,j - 1]) / dz**2
return d2px, d2pz
# ADD THE SPATIAL 5-POINT OPERATORS HERE!
@jit(nopython=True) # use JIT for C-performance
def update_d2px_d2pz_5pt(p, dx, dz, nx, nz, d2px, d2pz):
return d2px, d2pz
# Define simple absorbing boundary frame based on wavefield damping
# according to Cerjan et al., 1985, Geophysics, 50, 705-708
def absorb(nx,nz):
FW = 60 # thickness of absorbing frame (gridpoints)
a = 0.0053
coeff = np.zeros(FW)
# define coefficients in absorbing frame
for i in range(FW):
coeff[i] = np.exp(-(a**2 * (FW-i)**2))
# initialize array of absorbing coefficients
absorb_coeff = np.ones((nx,nz))
# compute coefficients for left grid boundaries (x-direction)
zb=0
for i in range(FW):
ze = nz - i - 1
for j in range(zb,ze):
absorb_coeff[i,j] = coeff[i]
# compute coefficients for right grid boundaries (x-direction)
zb=0
for i in range(FW):
ii = nx - i - 1
ze = nz - i - 1
for j in range(zb,ze):
absorb_coeff[ii,j] = coeff[i]
# compute coefficients for bottom grid boundaries (z-direction)
xb=0
for j in range(FW):
jj = nz - j - 1
xb = j
xe = nx - j
for i in range(xb,xe):
absorb_coeff[i,jj] = coeff[j]
return absorb_coeff
# FD_2D_acoustic code with JIT optimization
# -----------------------------------------
def FD_2D_acoustic_JIT(vp, dt,dx,dz,f0,xsrc,zsrc,op):
# calculate number of time steps nt
# ---------------------------------
nt = (int)(tmax/dt)
# locate source on Cartesian FD grid
# ----------------------------------
isrc = (int)(xsrc/dx) # source location in grid in x-direction
jsrc = (int)(zsrc/dz) # source location in grid in z-direction
# Source time function (Gaussian)
# -------------------------------
src = np.zeros(nt + 1)
time = np.linspace(0 * dt, nt * dt, nt)
# 1st derivative of Gaussian
src = -2. * (time - t0) * (f0 ** 2) * (np.exp(- (f0 ** 2) * (time - t0) ** 2))
# define clip value: 0.5 * absolute maximum value of source wavelet
clip = 0.5 * max([np.abs(src.min()), np.abs(src.max())]) / (dx*dz) * dt**2
# Define absorbing boundary frame
# -------------------------------
absorb_coeff = absorb(nx,nz)
# Define squared vp-model
# -----------------------
vp2 = vp**2
# Initialize empty pressure arrays
# --------------------------------
p = np.zeros((nx,nz)) # p at time n (now)
pold = np.zeros((nx,nz)) # p at time n-1 (past)
pnew = np.zeros((nx,nz)) # p at time n+1 (present)
d2px = np.zeros((nx,nz)) # 2nd spatial x-derivative of p
d2pz = np.zeros((nx,nz)) # 2nd spatial z-derivative of p
# INITIALIZE SEISMOGRAMS HERE!
# ----------------------------
# Initialize animation of pressure wavefield
# -----------------------------------------
fig = plt.figure(figsize=(7,3)) # define figure size
extent = [0.0,xmax,zmax,0.0] # define model extension
# Plot Vp-model
image = plt.imshow((vp.T)/1000, cmap=plt.cm.gray, interpolation='nearest',
extent=extent)
# Plot pressure wavefield movie
image1 = plt.imshow(p.T, animated=True, cmap="RdBu", alpha=.75, extent=extent,
interpolation='nearest', vmin=-clip, vmax=clip)
plt.title('Pressure wavefield')
plt.xlabel('x [m]')
plt.ylabel('z [m]')
plt.ion()
plt.show(block=False)
# Calculate Partial Derivatives
# -----------------------------
for it in range(nt):
# FD approximation of spatial derivative by 3 point operator
if(op==3):
d2px, d2pz = update_d2px_d2pz_3pt(p, dx, dz, nx, nz, d2px, d2pz)
# ADD FD APPROXIMATION OF SPATIAL DERIVATIVES BY 5 POINT OPERATOR HERE!
#if(op==5):
# Time Extrapolation
# ------------------
pnew = 2 * p - pold + vp2 * dt**2 * (d2px + d2pz)
# Add Source Term at isrc
# -----------------------
# Absolute pressure w.r.t analytical solution
pnew[isrc,jsrc] = pnew[isrc,jsrc] + src[it] / (dx * dz) * dt ** 2
# Apply absorbing boundary frame
# ------------------------------
p *= absorb_coeff
pnew *= absorb_coeff
# Remap Time Levels
# -----------------
pold, p = p, pnew
# WRITE SEISMOGRAMS HERE!
# display pressure snapshots
if (it % isnap) == 0:
image1.set_data(p.T)
fig.canvas.draw()
# DO NOT FORGET TO RETURN THE SEISMOGRAM HERE!
"""
Explanation: After reading the model into Python, we can take a look at it ...
End of explanation
"""
# Run 2D acoustic FD modelling with 3-point spatial operator
# ----------------------------------------------------------
%matplotlib notebook
op = 3 # define spatial FD operator (3-point)
# DEFINE TIME STEP HERE!
dt = # time step (s)
FD_2D_acoustic_JIT(vp,dt,dx,dz,f0,xsrc,zsrc,op)
# Run 2D acoustic FD modelling with 5-point spatial operator
# ----------------------------------------------------------
%matplotlib notebook
op = 5 # define spatial FD operator (5-point)
# DEFINE TIME STEP HERE!
dt = # time step (s)
FD_2D_acoustic_JIT(vp,dt,dx,dz,f0,xsrc,zsrc,op)
%matplotlib notebook
# PLOT YOUR MODELLED OBC SHOT GATHER HERE!
clip_seis = 1e-7
extent_seis = [0.0,xmax/1000,tmax,0.0]
plt.subplot(121)
plt.imshow(seis_marm_3pt.T, cmap=plt.cm.gray, aspect=2, vmin=-clip_seis,
vmax=clip_seis, extent=extent_seis)
plt.title('3-point operator')
plt.xlabel('x [km]')
plt.ylabel('t [s]')
ax = plt.subplot(122)
plt.imshow(seis_marm_5pt.T, cmap=plt.cm.gray, aspect=2, vmin=-clip_seis,
vmax=clip_seis, extent=extent_seis)
ax.set_yticks([])
plt.title('5-point operator')
plt.xlabel('x [km]')
#plt.ylabel('t [s]')
plt.tight_layout()
plt.show()
"""
Explanation: Model wave propagation in the Marmousi-2 model with 2D acoustic FD code
Time to model acoustic wave propagation in the Marmousi-2 model using the 3-point operator. We only have to define the time step $dt$:
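As a hedged sketch of that last step (vp_max below is an assumed placeholder; in the exercise take it from the imported model via vp.max()), a common Courant (CFL) stability criterion for this explicit 2D scheme is $dt \le dx / (\sqrt{2}\, vp_{max})$:

```python
import numpy as np

# CFL stability criterion for the explicit 2D acoustic scheme:
# dt <= dx / (sqrt(2) * vp_max). vp_max is an assumed placeholder here;
# in the exercise use vp.max() from the imported Marmousi-2 model.
dx = 20.0        # spatial grid point distance (m)
vp_max = 4700.0  # assumed maximum P-wave velocity (m/s)

dt = dx / (np.sqrt(2.0) * vp_max)  # stable time step (s)
print(dt)
```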
End of explanation
"""
tensorflow/lucid | notebooks/feature-visualization/regularization.ipynb | apache-2.0 | # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
!pip install --quiet lucid
import numpy as np
import scipy.ndimage as nd
import tensorflow as tf
import lucid.modelzoo.vision_models as models
from lucid.misc.io import show
import lucid.optvis.objectives as objectives
import lucid.optvis.param as param
import lucid.optvis.render as render
import lucid.optvis.transform as transform
# Let's import a model from the Lucid modelzoo!
model = models.InceptionV1()
model.load_graphdef()
"""
Explanation: Visualization Regularization - Feature Visualization
This notebook uses Lucid to reproduce some of the results in the section "The Enemy of Feature Visualization" of Feature Visualization.
This notebook doesn't introduce the abstractions behind lucid; you may wish to also read the Lucid tutorial.
Note: The easiest way to use this tutorial is as a colab notebook, which allows you to dive in with no setup. We recommend you enable a free GPU by going:
Runtime → Change runtime type → Hardware Accelerator: GPU
Install, Import, Load Model
End of explanation
"""
LEARNING_RATE = 0.05
optimizer = tf.train.AdamOptimizer(LEARNING_RATE)
imgs = render.render_vis(model, "mixed4b_pre_relu:452",
optimizer=optimizer,
transforms=[],
param_f=lambda: param.image(64, fft=False, decorrelate=False),
thresholds=(1, 32, 128, 256, 2048), verbose=False)
# Note that we're doubling the image scale to make artifacts more obvious
show([nd.zoom(img[0], [2,2,1], order=0) for img in imgs])
"""
Explanation: Naive Feature Visualization
The code reproducing the following diagrams uses CONSTANTS to provide input values.
<img src="https://storage.googleapis.com/lucid-static/feature-visualization/10.png" width="800"></img>
End of explanation
"""
L1 = -0.05
TV = -0.25
BLUR = -1.0
obj = objectives.channel("mixed4b_pre_relu", 452)
obj += L1 * objectives.L1(constant=.5)
obj += TV * objectives.total_variation()
obj += BLUR * objectives.blur_input_each_step()
imgs = render.render_vis(model, obj,
transforms=[],
param_f=lambda: param.image(64, fft=False, decorrelate=False),
thresholds=(1, 32, 128, 256, 2048), verbose=False)
# Note that we're doubling the image scale to make artifacts more obvious
show([nd.zoom(img[0], [2,2,1], order=0) for img in imgs])
"""
Explanation: Frequency Penalization
<img src="https://storage.googleapis.com/lucid-static/feature-visualization/12.png" width="800"></img>
End of explanation
"""
JITTER = 1
ROTATE = 5
SCALE = 1.1
transforms = [
transform.pad(2*JITTER),
transform.jitter(JITTER),
transform.random_scale([SCALE ** (n/10.) for n in range(-10, 11)]),
transform.random_rotate(range(-ROTATE, ROTATE+1))
]
imgs = render.render_vis(model, "mixed4b_pre_relu:452", transforms=transforms,
param_f=lambda: param.image(64),
thresholds=(1, 32, 128, 256, 2048), verbose=False)
# Note that we're doubling the image scale to make artifacts more obvious
show([nd.zoom(img[0], [2,2,1], order=0) for img in imgs])
"""
Explanation: Transformation Robustness
<img src="https://storage.googleapis.com/lucid-static/feature-visualization/13.png" width="800"></img>
End of explanation
"""
LEARNING_RATE = 0.05
DECORRELATE = True
ROBUSTNESS = True
# `fft` parameter controls spatial decorrelation
# `decorrelate` parameter controls channel decorrelation
param_f = lambda: param.image(64, fft=DECORRELATE, decorrelate=DECORRELATE)
if ROBUSTNESS:
transforms = transform.standard_transforms
else:
transforms = []
optimizer = tf.train.AdamOptimizer(LEARNING_RATE)
imgs = render.render_vis(model, "mixed4b_pre_relu:452",
optimizer=optimizer,
transforms=transforms,
param_f=param_f,
thresholds=(1, 32, 128, 256, 2048), verbose=False)
# Note that we're doubling the image scale to make artifacts more obvious
show([nd.zoom(img[0], [2,2,1], order=0) for img in imgs])
"""
Explanation: Preconditioning
<img src="https://storage.googleapis.com/lucid-static/feature-visualization/15.png" width="800"></img>
End of explanation
"""
jamesjia94/BIDMach | tutorials/CreateModels.ipynb | bsd-3-clause | import BIDMat.{CMat,CSMat,DMat,Dict,IDict,FMat,FND,GDMat,GMat,GIMat,GLMat,GSDMat,GSMat,
HMat,IMat,Image,LMat,Mat,SMat,SBMat,SDMat}
import BIDMat.MatFunctions._
import BIDMat.SciFunctions._
import BIDMat.Solvers._
import BIDMat.JPlotting._
import BIDMach.Learner
import BIDMach.models.{FM,GLM,KMeans,KMeansw,ICA,LDA,LDAgibbs,Model,NMF,RandomForest,SFA,SVD}
import BIDMach.datasources.{DataSource,MatSource,FileSource,SFileSource}
import BIDMach.mixins.{CosineSim,Perplexity,Top,L1Regularizer,L2Regularizer}
import BIDMach.updaters.{ADAGrad,Batch,BatchNorm,IncMult,IncNorm,Telescoping}
import BIDMach.causal.{IPTW}
Mat.checkMKL
Mat.checkCUDA
Mat.setInline
if (Mat.hasCUDA > 0) GPUmem
"""
Explanation: Creating Models
One of the main goals of BIDMat/BIDMach is to make model creation, customization and experimentation much easier.
To that end, it has reusable classes that cover the elements of learning:
Model: The core class for a learning algorithm, and often the only one you need to implement.
DataSource: A source of data, like an in-memory matrix, a set of files (possibly on HDFS) or a data iterator (for Spark).
DataSink: A target for data such as predictions, like an in-memory matrix, a set of files, or an iterator.
Updaters: Update a model using minibatch update from a Model class. Includes SGD, ADAGRAD, Monte-Carlo updates, and multiplicative updates.
Mixins: Secondary Loss functions that are added to the global gradient. Includes L1 and L2 regularizers, cluster quality metrics, factor model metrics.
Learner: Combines the classes above and provides high-level control over the learning process: iterations, stop/start/resume
When creating a new model, it's often only necessary to create a new model class. We recently needed a scalable SVD (Singular Value Decomposition) for some student projects. Let's walk through creating this from scratch.
Scalable SVD
This model works like the previous example of in-memory SVD for a matrix M. The squared singular values of M are the eigenvalues of $M M^T$, and the left singular vectors of M are its eigenvectors, so we do subspace iteration:
$$P = M M^T Q$$
$$(Q,R) = QR(P)$$
But now we want to deal with an M which is too big to fit in memory. In the minibatch context, we can write M as a horizontal concatenation of mini-batches (this assumes data samples are columns of M and features are rows):
$$M = \left[ M_1 ~ M_2 ~ \cdots ~ M_n \right]$$
and then $$P = \sum_{i=1}^n M_i M_i^T Q$$
so we can compute $P$ by operating only on the minibatches $M_i$. We still need to be able to fit $P$ and $Q$ in memory; their size is only $k \times |F|$, where $k$ is the SVD dimension and $F$ is the feature set.
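The minibatch accumulation above can be sketched in plain NumPy. This is a hedged illustration of the idea, not the BIDMach implementation; the function name, shapes and pass count are assumptions:

```python
import numpy as np

def subspace_iteration(minibatches, k, npasses=10, seed=0):
    """Estimate the top-k eigenvectors of M M^T, where M = [M_1 ... M_n]
    is given only as a list of minibatches (each nfeats x batchsize)."""
    rng = np.random.default_rng(seed)
    nfeats = minibatches[0].shape[0]
    Q = np.linalg.qr(rng.standard_normal((nfeats, k)))[0]
    evals = np.zeros(k)
    for _ in range(npasses):
        P = np.zeros((nfeats, k))
        for Mi in minibatches:
            P += Mi @ (Mi.T @ Q)       # accumulate M_i M_i^T Q, never forming M
        evals = np.sum(P * Q, axis=0)  # column dots: ~ eigenvalues of M M^T
        Q, _ = np.linalg.qr(P)         # re-orthogonalize for the next pass
    return Q, evals
```

Note that the column-wise dot products estimate the eigenvalues of $M M^T$, i.e. the squared singular values of $M$.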
Model Class
We start by defining a new model class which extends BIDMach's Model class. It will always take an Options instance as an argument:
<b>
<code style="color:blue">
class SVD(opts:SVD.Opts = new SVD.Options) extends Model(opts)
</code>
</b>
The options are defined in the "Object" associated with the class. In Scala "Object" defines a singleton which holds all of the static methods of the class. It looks like this:
<b><code style="color:blue">
object SVD {
trait Opts extends Model.Opts {
var deliciousness = 3
}
class Options extends Opts {}
...
</code></b>
Truthfully, an SVD model doesn't need a "deliciousness" option; in fact it doesn't need any Options at all, or rather, what it needs is inherited from its parent. But it's there to show how options are created. The Opts are defined as a trait rather than a class so they can be mixed in with the Options of other learning classes.
Local Variables and Initialization
There are three variables we need to keep track of:
<b><code style="color:blue">
var Q:Mat = null; // (Left) Singular vectors
var SV:Mat = null; // Singular values
var P:Mat = null; // P (accumulator)
</code></b>
and an initialization routine sets these to appropriate values.
Minibatch Update
Each minibatch update should update the stable model, which here is $P$:
<b><code style="color:blue">
def dobatch(mats:Array[Mat], ipass:Int, pos:Long):Unit = {
val M = mats(0);
P ~ P + (Q.t * M *^ M).t // Compute P = M * M^t * Q efficiently
}
</code></b>
Score Batches
The score method should return a floating point vector of scores for this minibatch.
<b><code style="color:blue">
def evalbatch(mat:Array[Mat], ipass:Int, pos:Long):FMat = {
SV ~ P ∙ Q; // Estimate the singular values
val diff = (P / SV) - Q; // residual
row(-(math.sqrt(norm(diff) / diff.length))); // return the norm of the residual
}
</code></b>
Update the Model
At the end of a pass over the data, we update $Q$. Not all models need this step, and minibatch algorithms typically don't have it.
<b><code style="color:blue">
override def updatePass(ipass:Int) = {
QRdecompt(P, Q, null); // Basic subspace iteration
P.clear; // Clear P for the next pass
}
</code></b>
Convenience Functions
We're done defining the SVD model. We can run it now, but to make that easier we'll define a couple of convenience functions.
An in-memory Learner
<b><code style="color:blue">
class MatOptions extends Learner.Options with SVD.Opts with MatSource.Opts with Batch.Opts
def learner(mat:Mat):(Learner, MatOptions) = {
val opts = new MatOptions;
opts.batchSize = math.min(100000, mat.ncols/30 + 1)
val nn = new Learner(
new MatSource(Array(mat), opts),
new SVD(opts),
null,
new Batch(opts),
null,
opts)
(nn, opts)
}
</code></b>
A File-based Learner
<b><code style="color:blue">
class FileOptions extends Learner.Options with SVD.Opts with FileSource.Opts with Batch.Opts
def learner(fnames:String):(Learner, FileOptions) = {
val opts = new FileOptions;
opts.batchSize = 10000;
opts.fnames = List(FileSource.simpleEnum(fnames, 1, 0));
implicit val threads = threadPool(4);
val nn = new Learner(
new FileSource(opts),
new SVD(opts),
null,
new Batch(opts),
null,
opts)
(nn, opts)
}
</code></b>
A Predictor
A predictor is a Learner which runs an existing model over a DataSource and outputs to a DataSink. For SVD, the predictor outputs the right singular vectors, which may be too large to fit in memory. Here's a memory-to-memory predictor:
<b><code style="color:blue">
class PredOptions extends Learner.Options with SVD.Opts with MatSource.Opts with MatSink.Opts;
// This function constructs a predictor from an existing model
def predictor(model:Model, mat1:Mat):(Learner, PredOptions) = {
val nopts = new PredOptions;
nopts.batchSize = math.min(10000, mat1.ncols/30 + 1)
nopts.dim = model.opts.dim;
val newmod = new SVD(nopts);
newmod.refresh = false
model.copyTo(newmod)
val nn = new Learner(
new MatSource(Array(mat1), nopts),
newmod,
null,
null,
new MatSink(nopts),
nopts)
(nn, nopts)
}
</code></b>
Testing
Now let's try it out! First we initialize BIDMach as before.
End of explanation
"""
val dir="../data/MNIST8M/parts/"
val (nn, opts) = SVD.learner(dir+"data%02d.fmat.lz4");
"""
Explanation: We'll run on the MNIST 8M (8 millon images) digit data, which is a large dataset distributed over multiple files
End of explanation
"""
opts.nend = 10;
opts.dim = 20;
opts.npasses = 2;
opts.batchSize = 20000;
"""
Explanation: Let's set some options:
End of explanation
"""
nn.train
"""
Explanation: and release the beast:
End of explanation
"""
val svals = FMat(nn.modelmats(1));
val svecs = FMat(nn.modelmats(0));
semilogy(svals)
"""
Explanation: The model matrices for this model hold the results. They are generic matrices, so we cast them to FMats:
End of explanation
"""
tic
val MMt = zeros(784,784);
for (i <- 0 until opts.nend) {
val Mi = loadFMat(dir+"data%02d.fmat.lz4" format i);
MMt ~ MMt + (Mi *^ Mi); // accumulate M_i * M_i^T over the file parts
print(".");
}
println;
toc
"""
Explanation: To see how well we did, we will compute the SVD directly by forming $M M^T$ and computing its eigendecomposition. Normally we can't do this because $MM^T$ is nfeats x nfeats, but for this dataset nfeats is only 784.
End of explanation
"""
val (eval, evecs) = feig(MMt);
val topvecs = evecs(?, 783 to 784-opts.dim by -1);
"""
Explanation: Now we call an eigenvalue routine to compute the eigenvalues and eigenvectors of $MM^T$, which are the squared singular values and the left singular vectors of $M$, respectively.
End of explanation
"""
val dots = svecs ∙ topvecs;
svecs ~ svecs ∘ (2*(dots>0) - 1);
"""
Explanation: Eigenvectors have a sign ambiguity, and it's common to see V or -V. So next we compute dot products between the two sets of vectors and flip signs if a dot product is negative:
"""
val onerow = topvecs.view(28,28*opts.dim);
val nc = onerow.ncols;
val tworows = onerow(?,0->(nc/2)) on onerow(?,(nc/2)->nc)
show((tworows.t*500+128) ⊗ ones(3,3))
val onerow = svecs.view(28,28*opts.dim);
val nc = onerow.ncols;
val tworows = onerow(?,0->(nc/2)) on onerow(?,(nc/2)->nc)
show((tworows.t*500+128) ⊗ ones(3,3))
"""
Explanation: Lets now look at the eigenvectors as small images, decreasing in strength from left to right. First the reference eigenvectors:
End of explanation
"""
val (pp, popts) = SVD.predictor(nn.model, dir+"data%02d.fmat.lz4", dir+"rSingVectors%02d.fmat.lz4")
popts.ofcols = 100000 // number of columns per file, here the same as the input files
popts.nend = 10 // Number of input files to process
pp.predict
"""
Explanation: Extracting Right Singular Vectors
So far, we obtained the singular values and left singular vectors from the model's modelmats array. The right singular vectors grow in size with the dataset, and in general wont fit in memory. But we can still compute them by running a predictor on the dataset. This predictor takes a parametrized input file name for the matrix $M$ and a parametrized output file name to hold the right singular vectors. A key option to set is <code>ofcols</code> the number of samples per output file:
End of explanation
"""
GoogleCloudPlatform/vertex-ai-samples | notebooks/community/ml_ops/stage6/get_started_with_matching_engine_twotowers.ipynb | apache-2.0 | import os
# The Vertex AI Workbench Notebook product has specific requirements
IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME")
IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(
"/opt/deeplearning/metadata/env_version"
)
# Vertex AI Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_WORKBENCH_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install {USER_FLAG} --upgrade tensorflow -q
! pip3 install {USER_FLAG} --upgrade google-cloud-aiplatform tensorboard-plugin-profile -q
! gcloud components update --quiet
"""
Explanation: This notebook is a revised version of a notebook by Amy Wu and Shen Zhimo.
E2E ML on GCP: MLOps stage 6 (serving): get started with Vertex AI Matching Engine and the Two-Tower built-in algorithm
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage6/get_started_with_matching_engine_twotowers.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/main/notebooks/ocommunity/ml_ops/stage6/get_started_with_matching_engine_twotowers.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/community/ml_ops/stage6/get_started_with_matching_engine_twotowers.ipynb">
<img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo">
Run in Vertex Workbench
</a>
</td>
</table>
Overview
This tutorial demonstrates how to use the Vertex AI Two-Tower built-in algorithm with Vertex AI Matching Engine.
Dataset
This tutorial uses the movielens_100k sample dataset in the public bucket gs://cloud-samples-data/vertex-ai/matching-engine/two-tower, which was generated from the MovieLens movie rating dataset. For this tutorial, the data only includes the user id feature for users, and the movie id and movie title features for movies. In this example, the user is the query object and the movie is the candidate object, and each training example in the dataset contains a user and a movie they rated (we only include positive ratings in the dataset). The two-tower model will embed the user and the movie in the same embedding space, so that given a user, the model will recommend movies it thinks the user will like.
Objective
In this notebook, you will learn how to use the Two-Tower built-in algorithm to generate embeddings for a dataset and to build a Matching Engine Index with the Vertex AI Matching Engine service.
This tutorial uses the following Google Cloud ML services:
Vertex AI Two-Tower built-in algorithm
Vertex AI Matching Engine
Vertex AI Batch Prediction
The tutorial covers the following steps:
Train the Two-Tower algorithm to generate embeddings (encoder) for the dataset.
Hyperparameter tune the trained Two-Tower encoder.
Make example predictions (embeddings) from the trained encoder.
Generate embeddings using the trained Two-Tower builtin algorithm.
Store embeddings to format supported by Matching Engine.
Create a Matching Engine Index for the embeddings.
Deploy the Matching Engine Index to a Index Endpoint.
Make a matching engine prediction request.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets
all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements.
You need the following:
The Google Cloud SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to Setting up a Python development
environment and the Jupyter
installation guide provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
Install and initialize the Cloud SDK.
Install Python 3.
Install
virtualenv
and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the
command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the packages required for executing this notebook.
End of explanation
"""
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
"""
Explanation: Before you begin
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Set your project ID
If you do not know your project ID, you may be able to get your project ID using gcloud.
End of explanation
"""
shell_output = ! gcloud projects list --filter="PROJECT_ID:'{PROJECT_ID}'" --format='value(PROJECT_NUMBER)'
PROJECT_NUMBER = shell_output[0]
print("Project Number:", PROJECT_NUMBER)
"""
Explanation: Get your project number
Now that the project ID is set, you get your corresponding project number.
End of explanation
"""
REGION = "[your-region]" # @param {type: "string"}
if REGION == "[your-region]":
REGION = "us-central1"
"""
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions.
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
End of explanation
"""
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Vertex AI Workbench, then don't execute this code
IS_COLAB = False
if not os.path.exists("/opt/deeplearning/metadata/env_version") and not os.getenv(
"DL_ANACONDA_HOME"
):
if "google.colab" in sys.modules:
IS_COLAB = True
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
"""
Explanation: Authenticate your Google Cloud account
If you are using Vertex AI Workbench Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key
page.
Click Create service account.
In the Service account name field, enter a name, and
click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI"
into the filter box, and select
Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
local environment.
Enter the path to your service account key as the
GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
"""
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
BUCKET_URI = f"gs://{BUCKET_NAME}"
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
BUCKET_URI = "gs://" + BUCKET_NAME
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
Before you submit a training job for the two-tower model, you need to upload your training data and schema to Cloud Storage. Vertex AI trains the model using this input data. In this tutorial, the Two-Tower built-in algorithm also saves the trained model that results from your job in the same bucket. Using this model artifact, you can then create Vertex AI model and endpoint resources in order to serve online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
End of explanation
"""
! gsutil mb -l $REGION $BUCKET_URI
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil ls -al $BUCKET_URI
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
import os
from google.cloud import aiplatform
%load_ext tensorboard
"""
Explanation: Import libraries and define constants
End of explanation
"""
aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_URI)
"""
Explanation: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
End of explanation
"""
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
"""
Explanation: Set machine type
Next, set the machine type to use for prediction.
Set the variable DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for prediction.
Machine types:
n1-standard: 3.75 GB of memory per vCPU
n1-highmem: 6.5 GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
Supported vCPU counts: 2, 4, 8, 16, 32, 64, or 96.
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
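As a quick sanity check of the memory figures above, a small helper (illustrative only, not part of the Two-Tower workflow) can estimate the RAM of a given n1 machine type from its vCPU count:

```python
# Illustrative helper: approximate RAM for an n1 machine type, using the
# per-vCPU figures listed above. Not used anywhere else in this tutorial.
MEMORY_PER_VCPU_GB = {
    "n1-standard": 3.75,
    "n1-highmem": 6.5,
    "n1-highcpu": 0.9,
}

def approx_memory_gb(machine_type: str) -> float:
    """Return the approximate RAM in GB for e.g. 'n1-standard-4'."""
    family, vcpus = machine_type.rsplit("-", 1)
    return MEMORY_PER_VCPU_GB[family] * int(vcpus)

print(approx_memory_gb("n1-standard-4"))  # 15.0
```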
End of explanation
"""
DATASET_NAME = "movielens_100k" # Change to your dataset name.
# Change to your data and schema paths. These are paths to the movielens_100k
# sample data.
TRAINING_DATA_PATH = f"gs://cloud-samples-data/vertex-ai/matching-engine/two-tower/{DATASET_NAME}/training_data/*"
INPUT_SCHEMA_PATH = f"gs://cloud-samples-data/vertex-ai/matching-engine/two-tower/{DATASET_NAME}/input_schema.json"
# URI of the two-tower training Docker image.
LEARNER_IMAGE_URI = "us-docker.pkg.dev/vertex-ai-restricted/builtin-algorithm/two-tower"
# Change to your output location.
OUTPUT_DIR = f"{BUCKET_URI}/experiment/output"
TRAIN_BATCH_SIZE = 100 # Batch size for training.
NUM_EPOCHS = 3 # Number of epochs for training.
print(f"Dataset name: {DATASET_NAME}")
print(f"Training data path: {TRAINING_DATA_PATH}")
print(f"Input schema path: {INPUT_SCHEMA_PATH}")
print(f"Output directory: {OUTPUT_DIR}")
print(f"Train batch size: {TRAIN_BATCH_SIZE}")
print(f"Number of epochs: {NUM_EPOCHS}")
"""
Explanation: Introduction to Two-Tower algorithm
Two-tower models learn to represent two items of various types (such as user profiles, search queries, web documents, answer passages, or images) in the same vector space, so that similar or related items are close to each other. These two items are referred to as the query and candidate object, since when paired with a nearest neighbor search service such as Vertex Matching Engine, the two-tower model can retrieve candidate objects related to an input query object. These objects are encoded by a query and candidate encoder (the two "towers") respectively, which are trained on pairs of relevant items. This built-in algorithm exports trained query and candidate encoders as model artifacts, which can be deployed in Vertex Prediction for usage in a recommendation system.
Configure training parameters for the Two-Tower builtin algorithm
The following table shows parameters that are common to all Vertex AI Training jobs created using the gcloud ai custom-jobs create command. See the official documentation for all the possible arguments.
| Parameter | Data type | Description | Required |
|--|--|--|--|
| display-name | string | Name of the job. | Yes |
| worker-pool-spec | string | Comma-separated list of arguments specifying a worker pool configuration (see below). | Yes |
| region | string | Region to submit the job to. | No |
The worker-pool-spec flag can be specified multiple times, one for each worker pool. The following table shows the arguments used to specify a worker pool.
| Parameter | Data type | Description | Required |
|--|--|--|--|
| machine-type | string | Machine type for the pool. See the official documentation for supported machines. | Yes |
| replica-count | int | The number of replicas of the machine in the pool. | No |
| container-image-uri | string | Docker image to run on each worker. | No |
The following table shows the parameters for the two-tower model training job:
| Parameter | Data type | Description | Required |
|--|--|--|--|
| training_data_path | string | Cloud Storage pattern where training data is stored. | Yes |
| input_schema_path | string | Cloud Storage path where the JSON input schema is stored. | Yes |
| input_file_format | string | The file format of input. Currently supports jsonl and tfrecord. | No - default is jsonl. |
| job_dir | string | Cloud Storage directory where the model output files will be stored. | Yes |
| eval_data_path | string | Cloud Storage pattern where eval data is stored. | No |
| candidate_data_path | string | Cloud Storage pattern where candidate data is stored. Only used for top_k_categorical_accuracy metrics. If not set, it's generated from training/eval data. | No |
| train_batch_size | int | Batch size for training. | No - Default is 100. |
| eval_batch_size | int | Batch size for evaluation. | No - Default is 100. |
| eval_split | float | Split fraction to use for the evaluation dataset, if eval_data_path is not provided. | No - Default is 0.2 |
| optimizer | string | Training optimizer. Lowercase string name of any TF2.3 Keras optimizer is supported ('sgd', 'nadam', 'ftrl', etc.). See TensorFlow documentation. | No - Default is 'adagrad'. |
| learning_rate | float | Learning rate for training. | No - Default is the default learning rate of the specified optimizer. |
| momentum | float | Momentum for optimizer, if specified. | No - Default is the default momentum value for the specified optimizer. |
| metrics | string | Metrics used to evaluate the model. Can be either auc, top_k_categorical_accuracy or precision_at_1. | No - Default is auc. |
| num_epochs | int | Number of epochs for training. | No - Default is 10. |
| num_hidden_layers | int | Number of hidden layers. | No |
| num_nodes_hidden_layer{index} | int | Num of nodes in hidden layer {index}. The range of index is 1 to 20. | No |
| output_dim | int | The output embedding dimension for each encoder tower of the two-tower model. | No - Default is 64. |
| training_steps_per_epoch | int | Number of steps per epoch to run the training for. Only needed if you are using more than 1 machine or using a master machine with more than 1 gpu. | No - Default is None. |
| eval_steps_per_epoch | int | Number of steps per epoch to run the evaluation for. Only needed if you are using more than 1 machine or using a master machine with more than 1 gpu. | No - Default is None. |
| gpu_memory_alloc | int | Amount of memory allocated per GPU (in MB). | No - Default is no limit. |
End of explanation
"""
TRAIN_COMPUTE = "n1-standard-8"
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
"""
Explanation: Train on Vertex AI Training with CPU
Submit the Two-Tower training job to Vertex AI Training. The following command uses a single CPU machine for training. When using single node training, training_steps_per_epoch and eval_steps_per_epoch do not need to be set.
Prepare your machine specification
Now define the machine specification for your custom training job. This tells Vertex AI what type of machine instance to provision for the training.
- machine_type: The type of GCP instance to provision -- e.g., n1-standard-8.
- accelerator_type: The type, if any, of hardware accelerator. This CPU-only example uses none; a GPU is configured in a later section.
- accelerator_count: The number of accelerators.
End of explanation
"""
DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard]
DISK_SIZE = 200 # GB
disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE}
"""
Explanation: Prepare your disk specification
(optional) Now define the disk specification for your custom training job. This tells Vertex AI what type and size of disk to provision in each machine instance for the training.
boot_disk_type: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD.
boot_disk_size_gb: Size of disk in GB.
End of explanation
"""
JOB_NAME = "twotowers_cpu_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_URI, JOB_NAME)
CMDARGS = [
f"--training_data_path={TRAINING_DATA_PATH}",
f"--input_schema_path={INPUT_SCHEMA_PATH}",
f"--job-dir={OUTPUT_DIR}",
f"--train_batch_size={TRAIN_BATCH_SIZE}",
f"--num_epochs={NUM_EPOCHS}",
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"container_spec": {
"image_uri": LEARNER_IMAGE_URI,
"command": [],
"args": CMDARGS,
},
}
]
"""
Explanation: Define the worker pool specification
Next, you define the worker pool specification for your custom training job. The worker pool specification consists of the following:
replica_count: The number of instances to provision of this machine type.
machine_spec: The hardware specification.
disk_spec : (optional) The disk storage specification.
container_spec: The training container containing the training package.
Let's dive deeper now into the container specification:
image_uri: The training image.
command: The command to invoke in the training image. Defaults to the command entry point specified for the training image.
args: The command line arguments to pass to the corresponding command entry point in training image.
End of explanation
"""
job = aiplatform.CustomJob(
display_name="twotower_cpu_" + TIMESTAMP, worker_pool_specs=worker_pool_spec
)
"""
Explanation: Create a custom job
Use the CustomJob class to create a custom training job with the following parameters:
display_name: A human readable name for the custom job.
worker_pool_specs: The specification for the corresponding VM instances.
End of explanation
"""
job.run()
"""
Explanation: Execute the custom job
Next, execute your custom job using the method run().
End of explanation
"""
! gsutil ls {OUTPUT_DIR}
! gsutil rm -rf {OUTPUT_DIR}/*
"""
Explanation: View output
After the job finishes successfully, you can view the output directory.
End of explanation
"""
JOB_NAME = "twotowers_gpu_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_URI, JOB_NAME)
TRAIN_COMPUTE = "n1-highmem-4"
TRAIN_GPU = "NVIDIA_TESLA_K80"
machine_spec = {
"machine_type": TRAIN_COMPUTE,
"accelerator_type": TRAIN_GPU,
"accelerator_count": 1,
}
CMDARGS = [
f"--training_data_path={TRAINING_DATA_PATH}",
f"--input_schema_path={INPUT_SCHEMA_PATH}",
f"--job-dir={OUTPUT_DIR}",
"--training_steps_per_epoch=1500",
"--eval_steps_per_epoch=1500",
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"container_spec": {
"image_uri": LEARNER_IMAGE_URI,
"command": [],
"args": CMDARGS,
},
}
]
"""
Explanation: Train on Vertex AI Training with GPU
Next, train the Two Tower model using a GPU.
End of explanation
"""
job = aiplatform.CustomJob(
    display_name="twotower_gpu_" + TIMESTAMP, worker_pool_specs=worker_pool_spec
)
job.run()
"""
Explanation: Create and execute the custom job
Next, create and execute the custom job.
End of explanation
"""
! gsutil ls {OUTPUT_DIR}
! gsutil rm -rf {OUTPUT_DIR}/*
"""
Explanation: View output
After the job finishes successfully, you can view the output directory.
End of explanation
"""
TRAINING_DATA_PATH = f"gs://cloud-samples-data/vertex-ai/matching-engine/two-tower/{DATASET_NAME}/tfrecord/*"
JOB_NAME = "twotowers_tfrec_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_URI, JOB_NAME)
TRAIN_COMPUTE = "n1-standard-8"
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
CMDARGS = [
f"--training_data_path={TRAINING_DATA_PATH}",
f"--input_schema_path={INPUT_SCHEMA_PATH}",
f"--job-dir={OUTPUT_DIR}",
f"--train_batch_size={TRAIN_BATCH_SIZE}",
f"--num_epochs={NUM_EPOCHS}",
"--input_file_format=tfrecord",
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"container_spec": {
"image_uri": LEARNER_IMAGE_URI,
"command": [],
"args": CMDARGS,
},
}
]
"""
Explanation: Train on Vertex AI Training with TFRecords
Next, train the Two-Tower model using TFRecords.
End of explanation
"""
job = aiplatform.CustomJob(
    display_name="twotower_tfrec_" + TIMESTAMP, worker_pool_specs=worker_pool_spec
)
job.run()
"""
Explanation: Create and execute the custom job
Next, create and execute the custom job.
End of explanation
"""
! gsutil ls {OUTPUT_DIR}
! gsutil rm -rf {OUTPUT_DIR}
"""
Explanation: View output
After the job finishes successfully, you can view the output directory.
End of explanation
"""
try:
TENSORBOARD_DIR = os.path.join(OUTPUT_DIR, "tensorboard")
%tensorboard --logdir {TENSORBOARD_DIR}
except Exception as e:
print(e)
"""
Explanation: Tensorboard
When the training starts, you can view the logs in TensorBoard. Colab users can use the TensorBoard widget below:
For Vertex AI Workbench notebook users, the TensorBoard widget above won't work. We recommend you launch TensorBoard through the Cloud Shell.
In your Cloud Shell, launch Tensorboard on port 8080:
export TENSORBOARD_DIR=gs://xxxxx/tensorboard
tensorboard --logdir=${TENSORBOARD_DIR} --port=8080
Click the "Web Preview" button at the top-right of the Cloud Shell window (looks like an eye in a rectangle).
Select "Preview on port 8080". This should launch the TensorBoard webpage in a new tab in your browser.
After the job finishes successfully, you can view the output directory:
End of explanation
"""
from google.cloud.aiplatform import hyperparameter_tuning as hpt
hpt_job = aiplatform.HyperparameterTuningJob(
display_name="twotowers_" + TIMESTAMP,
custom_job=job,
metric_spec={
"val_auc": "maximize",
},
parameter_spec={
"learning_rate": hpt.DoubleParameterSpec(min=0.0001, max=0.1, scale="log"),
"num_hidden_layers": hpt.IntegerParameterSpec(min=0, max=2, scale="linear"),
"num_nodes_hidden_layer1": hpt.IntegerParameterSpec(
min=1, max=128, scale="log"
),
"num_nodes_hidden_layer2": hpt.IntegerParameterSpec(
min=1, max=128, scale="log"
),
},
search_algorithm=None,
max_trial_count=8,
parallel_trial_count=1,
)
"""
Explanation: Hyperparameter tuning
You may want to optimize the hyperparameters used during training to improve your model's accuracy and performance.
For this example, the following command runs a Vertex AI hyperparameter tuning job with 8 trials that attempts to maximize the validation AUC metric. The hyperparameters it optimizes are the number of hidden layers, the size of the hidden layers, and the learning rate.
Learn more about Hyperparameter tuning overview.
End of explanation
"""
hpt_job.run()
"""
Explanation: Run the hyperparameter tuning job
Use the run() method to execute the hyperparameter tuning job.
End of explanation
"""
print(hpt_job.trials)
"""
Explanation: Display the hyperparameter tuning job trial results
After the hyperparameter tuning job has completed, the property trials will return the results for each trial.
End of explanation
"""
best = (None, None, None, 0.0)
for trial in hpt_job.trials:
# Keep track of the best outcome
if float(trial.final_measurement.metrics[0].value) > best[3]:
try:
best = (
trial.id,
float(trial.parameters[0].value),
float(trial.parameters[1].value),
float(trial.final_measurement.metrics[0].value),
)
except:
best = (
trial.id,
float(trial.parameters[0].value),
None,
float(trial.final_measurement.metrics[0].value),
)
print(best)
"""
Explanation: Best trial
Now look at which trial was the best:
End of explanation
"""
hpt_job.delete()
"""
Explanation: Delete the hyperparameter tuning job
The delete() method deletes the hyperparameter tuning job.
End of explanation
"""
BEST_MODEL = OUTPUT_DIR + "/trial_" + best[0]
! gsutil ls {BEST_MODEL}
"""
Explanation: View output
After the job finishes successfully, you can view the output directory.
End of explanation
"""
# The following imports the query (user) encoder model.
MODEL_TYPE = "query"
# Use the following instead to import the candidate (movie) encoder model.
# MODEL_TYPE = 'candidate'
DISPLAY_NAME = f"{DATASET_NAME}_{MODEL_TYPE}" # The display name of the model.
MODEL_NAME = f"{MODEL_TYPE}_model" # Used by the deployment container.
model = aiplatform.Model.upload(
display_name=DISPLAY_NAME,
artifact_uri=BEST_MODEL,
serving_container_image_uri="us-central1-docker.pkg.dev/cloud-ml-algos/two-tower/deploy",
serving_container_health_route=f"/v1/models/{MODEL_NAME}",
serving_container_predict_route=f"/v1/models/{MODEL_NAME}:predict",
serving_container_environment_variables={
"MODEL_BASE_PATH": "$(AIP_STORAGE_URI)",
"MODEL_NAME": MODEL_NAME,
},
)
"""
Explanation: Upload the model to Vertex AI Model resource
Your training job will export two TF SavedModels under gs://<job_dir>/query_model and gs://<job_dir>/candidate_model. These exported models can be used for online or batch prediction in Vertex Prediction.
First, import the query (or candidate) model using the upload() method, with the following parameters:
display_name: A human readable name for the model resource.
artifact_uri: The Cloud Storage location of the model artifacts.
serving_container_image_uri: The deployment container. In this tutorial, you use the prebuilt Two-Tower deployment container.
serving_container_health_route: The URL for the service to periodically ping for a response to verify that the serving binary is running. For Two-Towers, this will be /v1/models/[model_name].
serving_container_predict_route: The URL to which prediction requests are forwarded. For Two-Towers, this will be /v1/models/[model_name]:predict.
serving_container_environment_variables: Preset environment variables to pass into the deployment container.
Note: The underlying deployment container is built on TensorFlow Serving.
End of explanation
"""
endpoint = aiplatform.Endpoint.create(display_name=DATASET_NAME)
"""
Explanation: Deploy the model to Vertex AI Endpoint
Deploying the Vertex AI Model resource to a Vertex AI Endpoint for online predictions:
Create an Endpoint resource exposing an external interface to users consuming the model.
After the Endpoint is ready, deploy one or more instances of a model to the Endpoint. The deployed model runs the custom container image running Two-Tower encoder to serve embeddings.
Refer to Vertex AI Predictions guide to Deploy a model using the Vertex AI API for more information about the APIs used in the following cells.
Create a Vertex AI Endpoint
Next, you create the Vertex AI Endpoint, from which you subsequently deploy your Vertex AI Model resource to.
End of explanation
"""
response = endpoint.deploy(
model=model,
deployed_model_display_name=DISPLAY_NAME,
machine_type=DEPLOY_COMPUTE,
traffic_split={"0": 100},
)
print(endpoint)
"""
Explanation: Deploying Model resources to an Endpoint resource.
You can deploy one or more Vertex AI Model resource instances to the same endpoint. Each Vertex AI Model resource that is deployed will have its own deployment container for the serving binary.
In the next example, you deploy the Vertex AI Model resource to a Vertex AI Endpoint resource. The Vertex AI Model resource already has defined for it the deployment container image. To deploy, you specify the following additional configuration settings:
The machine type.
The (if any) type and number of GPUs.
Static, manual or auto-scaling of VM instances.
In this example, you deploy the model with the minimal amount of specified parameters, as follows:
model: The Model resource.
deployed_model_display_name: The human readable name for the deployed model instance.
machine_type: The machine type for each VM instance.
Due to the time required to provision the resource, this may take up to a few minutes.
End of explanation
"""
# Input items for the query model:
input_items = [
{"data": '{"user_id": ["1"]}', "key": "key1"},
{"data": '{"user_id": ["2"]}', "key": "key2"},
]
# Input items for the candidate model:
# input_items = [{
# 'data' : '{"movie_id": ["1"], "movie_title": ["fake title"]}',
# 'key': 'key1'
# }]
encodings = endpoint.predict(input_items)
print(f"Number of encodings: {len(encodings.predictions)}")
print(encodings.predictions[0]["encoding"])
"""
Explanation: Creating embeddings
Now that you have deployed the query/candidate encoder model on Vertex AI Prediction, you can call the model to generate embeddings for new data.
Make an online prediction with SDK
Online prediction is used to synchronously query a model on a small batch of instances with minimal latency. The following function calls the deployed model using Vertex AI SDK for Python.
The input data you want predicted embeddings on should be provided as a stringified JSON in the data field. Note that you should also provide a unique key field (of type str) for each input instance so that you can associate each output embedding with its corresponding input.
End of explanation
"""
import json
request = json.dumps({"instances": input_items})
with open("request.json", "w") as writer:
writer.write(f"{request}\n")
ENDPOINT_ID = endpoint.resource_name
! gcloud ai endpoints predict {ENDPOINT_ID} \
--region={REGION} \
--json-request=request.json
"""
Explanation: Make an online prediction with gcloud
You can also do online prediction using the gcloud CLI.
End of explanation
"""
QUERY_EMBEDDING_PATH = f"{BUCKET_URI}/embeddings/train.jsonl"
import tensorflow as tf
with tf.io.gfile.GFile(QUERY_EMBEDDING_PATH, "w") as f:
for i in range(0, 1000):
query = {"data": '{"user_id": ["' + str(i) + '"]}', "key": f"key{i}"}
f.write(json.dumps(query) + "\n")
print("\nNumber of embeddings: ")
! gsutil cat {QUERY_EMBEDDING_PATH} | wc -l
"""
Explanation: Make a batch prediction
Batch prediction is used to asynchronously make predictions on a batch of input data. This is recommended if you have a large input size and do not need an immediate response, such as getting embeddings for candidate objects in order to create an index for a nearest neighbor search service such as Vertex Matching Engine.
Create the batch input file
Next, you create the batch input file used to generate embeddings for the dataset, which you subsequently use to create an index with Vertex AI Matching Engine. In this example, the dataset contains 1,000 unique identifiers (0...999). You will use the trained encoder to predict an embedding for each unique identifier.
The input data needs to be on Cloud Storage and in JSONL format. You can use the sample query object file provided below. Like with online prediction, it's recommended to have the key field so that you can associate each output embedding with its corresponding input.
End of explanation
"""
MIN_NODES = 1
MAX_NODES = 4
batch_predict_job = model.batch_predict(
job_display_name=f"batch_predict_{DISPLAY_NAME}",
gcs_source=[QUERY_EMBEDDING_PATH],
gcs_destination_prefix=f"{BUCKET_URI}/embeddings/output",
machine_type=DEPLOY_COMPUTE,
starting_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
"""
Explanation: Send the prediction request
To make a batch prediction request, call the model object's batch_predict method with the following parameters:
- instances_format: The format of the batch prediction request file: "jsonl", "csv", "bigquery", "tf-record", "tf-record-gzip" or "file-list"
- prediction_format: The format of the batch prediction response file: "jsonl", "csv", "bigquery", "tf-record", "tf-record-gzip" or "file-list"
- job_display_name: The human readable name for the prediction job.
- gcs_source: A list of one or more Cloud Storage paths to your batch prediction requests.
- gcs_destination_prefix: The Cloud Storage path that the service will write the predictions to.
- model_parameters: Additional filtering parameters for serving prediction results.
- machine_type: The type of machine to use for training.
- accelerator_type: The hardware accelerator type.
- accelerator_count: The number of accelerators to attach to a worker replica.
- starting_replica_count: The number of compute instances to initially provision.
- max_replica_count: The maximum number of compute instances to scale to. In this tutorial, up to four instances may be provisioned.
Compute instance scaling
You can specify a single instance (or node) to process your batch prediction request, or several. In this tutorial, MIN_NODES is set to 1 and MAX_NODES to 4, so the service starts with one node and can scale out to four.
If you want to use multiple nodes to process your batch prediction request, set MAX_NODES to the maximum number of nodes you want to use. Vertex AI autoscales the number of nodes used to serve your predictions, up to the maximum number you set. Refer to the pricing page to understand the costs of autoscaling with multiple nodes.
End of explanation
"""
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
result_files = []
for prediction_result in prediction_results:
result_file = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
result_files.append(result_file)
print(result_files)
"""
Explanation: Get the predicted embeddings
Next, get the results from the completed batch prediction job.
The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format:
instance: The prediction request.
prediction: The prediction response.
End of explanation
"""
embeddings = []
for result_file in result_files:
with tf.io.gfile.GFile(result_file, "r") as f:
instances = list(f)
for instance in instances:
instance = instance.replace('\\"', "'")
result = json.loads(instance)
prediction = result["prediction"]
key = prediction["key"][3:]
encoding = prediction["encoding"]
embedding = {"id": key, "embedding": encoding}
embeddings.append(embedding)
print("Number of embeddings", len(embeddings))
print("Encoding Dimensions", len(embeddings[0]["embedding"]))
print("Example embedding", embeddings[0])
with open("embeddings.json", "w") as f:
for i in range(len(embeddings)):
f.write(json.dumps(embeddings[i]).replace('"', "'"))
f.write("\n")
! head -n 2 embeddings.json
"""
Explanation: Save the embeddings in JSONL format
Next, you store the predicted embeddings as a JSONL formatted file. Each embedding is stored as:
{ 'id': .., 'embedding': [ ... ] }
The embeddings for the index can be in CSV, JSON, or Avro format.
Learn more about Embedding Formats for Indexing
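Note that the cell above replaces double quotes with single quotes, so the resulting lines are Python-literal style rather than strict JSON; a quick sanity check of one record (using a hypothetical sample line) can therefore use ast.literal_eval instead of json.loads:

```python
import ast

# Sanity check of the record format written above. The sample line below is
# hypothetical; a real line comes from embeddings.json. Because the writer
# swapped double quotes for single quotes, ast.literal_eval parses it safely
# where json.loads would fail.
sample_line = "{'id': '0', 'embedding': [0.12, -0.07, 0.31]}"
record = ast.literal_eval(sample_line)

assert set(record) == {"id", "embedding"}
print(record["id"], len(record["embedding"]))  # 0 3
```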
End of explanation
"""
EMBEDDINGS_URI = f"{BUCKET_URI}/embeddings/twotower/"
! gsutil cp embeddings.json {EMBEDDINGS_URI}
"""
Explanation: Store the JSONL formatted embeddings in Cloud Storage
Next, you upload the embeddings file to your Cloud Storage bucket.
End of explanation
"""
DIMENSIONS = len(embeddings[0]["embedding"])
DISPLAY_NAME = "movies"
tree_ah_index = aiplatform.MatchingEngineIndex.create_tree_ah_index(
display_name=DISPLAY_NAME,
contents_delta_uri=EMBEDDINGS_URI,
dimensions=DIMENSIONS,
approximate_neighbors_count=50,
distance_measure_type="DOT_PRODUCT_DISTANCE",
description="Two tower generated embeddings",
labels={"label_name": "label_value"},
# TreeAH specific parameters
leaf_node_embedding_count=100,
leaf_nodes_to_search_percent=7,
)
INDEX_RESOURCE_NAME = tree_ah_index.resource_name
print(INDEX_RESOURCE_NAME)
"""
Explanation: Create Matching Engine Index
Next, you create the index for your embeddings. Currently, two indexing algorithms are supported:
create_tree_ah_index(): Shallow tree + Asymmetric hashing.
create_brute_force_index(): Linear search.
In this tutorial, you use the create_tree_ah_index() method, which scales to production workloads. The method is called with the following parameters:
display_name: A human readable name for the index.
contents_delta_uri: A Cloud Storage location for the embeddings, which are either to be inserted, updated or deleted.
dimensions: The number of dimensions of the input vector
approximate_neighbors_count: (for Tree AH) The default number of neighbors to find via approximate search before exact reordering is performed. Exact reordering is a procedure where results returned by an approximate search algorithm are reordered via a more expensive distance computation.
distance_measure_type: The distance measure used in nearest neighbor search.
SQUARED_L2_DISTANCE: Euclidean (L2) Distance
L1_DISTANCE: Manhattan (L1) Distance
COSINE_DISTANCE: Cosine Distance. Defined as 1 - cosine similarity.
DOT_PRODUCT_DISTANCE: Default value. Defined as a negative of the dot product.
description: A human readable description of the index.
labels: User metadata in the form of a dictionary.
leaf_node_embedding_count: Number of embeddings on each leaf node. The default value is 1000 if not set.
leaf_nodes_to_search_percent: The default percentage of leaf nodes that any query may be searched. Must be in range 1-100, inclusive. The default value is 10 (means 10%) if not set.
This may take up to 30 minutes.
Learn more about Configuring Matching Engine Indexes.
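The measures in the list above are simple vector formulas. A plain-Python sketch of two of them on toy vectors (this only illustrates the definitions, not the service's internals):

```python
import math

def dot_product_distance(a, b):
    # DOT_PRODUCT_DISTANCE: negative of the dot product.
    return -sum(x * y for x, y in zip(a, b))

def cosine_distance(a, b):
    # COSINE_DISTANCE: 1 - cosine similarity.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

a, b = [1.0, 0.0], [1.0, 0.0]
print(dot_product_distance(a, b))  # -1.0
print(cosine_distance(a, b))       # 0.0 (identical direction)
```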
End of explanation
"""
# This is for display only; you can name the range anything.
PEERING_RANGE_NAME = "vertex-ai-prediction-peering-range"
NETWORK = "default"
# NOTE: `prefix-length=16` means a CIDR block with mask /16 will be
# reserved for use by Google services, such as Vertex AI.
! gcloud compute addresses create $PEERING_RANGE_NAME \
--global \
--prefix-length=16 \
--description="peering range for Google service" \
--network=$NETWORK \
--purpose=VPC_PEERING
"""
Explanation: Setup VPC peering network
To use a Matching Engine Index, you set up a VPC peering network between your project and the Vertex AI Matching Engine service project. This eliminates additional hops in network traffic and allows use of the efficient gRPC protocol.
Learn more about VPC peering.
IMPORTANT: you can only set up one VPC peering to servicenetworking.googleapis.com per project.
Create VPC peering for default network
For simplicity, you set up VPC peering to the default network. You can create a different network for your project.
If you set up VPC peering with any other network, make sure that the network already exists and that your VM is running on that network.
End of explanation
"""
! gcloud services vpc-peerings connect \
--service=servicenetworking.googleapis.com \
--network=$NETWORK \
--ranges=$PEERING_RANGE_NAME \
--project=$PROJECT_ID
"""
Explanation: Create the VPC connection
Next, create the connection for VPC peering.
Note: If you get a PERMISSION DENIED error, your default service account may lack the necessary 'Compute Network Admin' role. In the Cloud Console, do the following steps.
Go to IAM & Admin
Find your service account.
Click the edit icon.
Select Add Another Role.
Enter 'Compute Network Admin'.
Select Save
End of explanation
"""
! gcloud compute networks peerings list --network $NETWORK
"""
Explanation: Check the status of your peering connections.
End of explanation
"""
full_network_name = f"projects/{PROJECT_NUMBER}/global/networks/{NETWORK}"
"""
Explanation: Construct the full network name
You need the full network resource name when you subsequently create a Matching Engine Index Endpoint resource for VPC peering.
End of explanation
"""
index_endpoint = aiplatform.MatchingEngineIndexEndpoint.create(
display_name="index_endpoint_for_demo",
description="index endpoint description",
network=full_network_name,
)
INDEX_ENDPOINT_NAME = index_endpoint.resource_name
print(INDEX_ENDPOINT_NAME)
"""
Explanation: Create an IndexEndpoint with VPC Network
Next, you create a Matching Engine Index Endpoint, similar to the concept of creating a Private Endpoint for prediction with a peer-to-peer network.
To create the Index Endpoint resource, you call the method create() with the following parameters:
display_name: A human readable name for the Index Endpoint.
description: A description for the Index Endpoint.
network: The VPC network resource name.
End of explanation
"""
DEPLOYED_INDEX_ID = "tree_ah_twotower_deployed_" + TIMESTAMP
MIN_NODES = 1
MAX_NODES = 2
DEPLOY_COMPUTE = "n1-standard-16"
index_endpoint.deploy_index(
display_name="deployed_index_for_demo",
index=tree_ah_index,
deployed_index_id=DEPLOYED_INDEX_ID,
machine_type=DEPLOY_COMPUTE,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
print(index_endpoint.deployed_indexes)
"""
Explanation: Deploy the Matching Engine Index to the Index Endpoint resource
Next, deploy your index to the Index Endpoint using the method deploy_index() with the following parameters:
display_name: A human readable name for the deployed index.
index: Your index.
deployed_index_id: A user assigned identifier for the deployed index.
machine_type: (optional) The VM instance type.
min_replica_count: (optional) Minimum number of VM instances for auto-scaling.
max_replica_count: (optional) Maximum number of VM instances for auto-scaling.
Learn more about Machine resources for Index Endpoint
End of explanation
"""
# The number of nearest neighbors to be retrieved from database for each query.
NUM_NEIGHBOURS = 10
# Test query
queries = [embeddings[0]["embedding"], embeddings[1]["embedding"]]
matches = index_endpoint.match(
deployed_index_id=DEPLOYED_INDEX_ID, queries=queries, num_neighbors=NUM_NEIGHBOURS
)
for instance in matches:
print("INSTANCE")
for match in instance:
print(match)
"""
Explanation: Create and execute an online query
Now that your index is deployed, you can make queries.
First, you construct example queries; here, the first two predicted embeddings are reused as the query vectors to return matches for.
Next, you make the matching request using the method match(), with the following parameters:
deployed_index_id: The identifier of the deployed index.
queries: A list of queries (instances).
num_neighbors: The number of closest matches to return.
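Conceptually, the deployed index approximates what a brute-force scan would compute. A toy plain-Python sketch with a made-up three-vector database (illustrative only; the real service also returns distances alongside ids):

```python
# Made-up database of id -> embedding.
database = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [0.7, 0.7]}

def brute_force_match(query, num_neighbors=2):
    # Rank by DOT_PRODUCT_DISTANCE (negative dot product), smallest first.
    def distance(key):
        return -sum(q * x for q, x in zip(query, database[key]))
    return sorted(database, key=distance)[:num_neighbors]

print(brute_force_match([1.0, 0.1]))  # ['a', 'c']
```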
End of explanation
"""
# Delete endpoint resource
endpoint.delete(force=True)
# Delete model resource
model.delete()
# Force undeployment of indexes and delete endpoint
try:
index_endpoint.delete(force=True)
except Exception as e:
print(e)
# Delete indexes
try:
tree_ah_index.delete()
brute_force_index.delete()
except Exception as e:
print(e)
# Delete Cloud Storage objects that were created
delete_bucket = False
if delete_bucket or os.getenv("IS_TESTING"):
! gsutil -m rm -r $OUTPUT_DIR
"""
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
End of explanation
"""
|
Krekelmans/Train_prediction_kaggle | BDS_Lab09_FILL_IN-plots.ipynb | mit | import os
os.getcwd()
%matplotlib inline
%pylab inline
import pandas as pd
import numpy as np
from collections import Counter, OrderedDict
import json
import matplotlib
import matplotlib.pyplot as plt
import re
from imageio import imread  # scipy.misc.imread was removed in SciPy 1.2; imageio is a common replacement
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
"""
Explanation: Predicting the occupancies of Belgian trains
In this lab, we will go over some of the typical steps in a data science pipeline:
Data processing & cleaning
Exploratory Data Analysis
Feature extraction/engineering
Model selection & hyper-parameter tuning
Data linking
...
We will make use of the following technologies and libraries:
Python3.5
Python libraries: pandas, numpy, sklearn, matplotlib, ...
Kaggle
NO SPARK!!! (next lab will deal with machine learning with Spark MLlib)
End of explanation
"""
from pandas import json_normalize  # pandas.io.json.json_normalize is deprecated since pandas 1.0
import pickle
training_json = pd.DataFrame()
with open('data/training_data.nldjson') as data_file:
for line in data_file:
training_json = training_json.append(json_normalize(json.loads(line)))
with open('data/test.nldjson') as data_file:
for line in data_file:
out_test_json = json_normalize(json.loads(line))
out_test = out_test_json
training = training_json
out_test[0:1]
"""
Explanation: 0. Create a kaggle account! https://www.kaggle.com/
The competition can be found here: https://inclass.kaggle.com/c/train-occupancy-prediction-v2/leaderboard
Create an account and form a team (shuffle II), use your names and BDS_ as a prefix in your team name
Note: you can only make 5 submissions per day
There are also student groups from Kortrijk (Master of Science in Industrial Engineering) participating. They get no help at all (you get this notebook) but this is their final lab + they have no project. THEREFORE: Let's push them down the leaderboard!!! ;)
Your deadline: the end of the kaggle competition.
Evaluation: Your work will be evaluated for 50%, your result will also matter for another 50%. The top 5 student groups get bonus points for this part of the course!
1. Loading and processing the data
Explanation: 0. Create a kaggle account! https://www.kaggle.com/ was covered above. Trains can get really crowded sometimes, so wouldn't it be great to know in advance how busy your train will be, so you can take an earlier or later one? iRail created just that. Their application, SpitsGids, shows you the occupancy of every train in Belgium. Furthermore, you can indicate the occupancy yourself. Using the collected data, machine learning models can be trained to predict what the occupancy level of a train will be.
The dataset which we will use during this lab is composed of two files:
training_data.nldjson: contains labeled training data (JSON records, separated by newlines)
test.nldjson: unlabeled data for which we will create a submission for a Kaggle competition at the end of this lab (again: JSON records, separated by newlines). Each of the records is uniquely identifiable through an id
A JSON record has the following structure:
{
"querytype": "occupancy",
"querytime": "2016-09-29T16:24:43+02:00",
"post": {
"connection": "http://irail.be/connections/008811601/20160929/S85666",
"from": "http://irail.be/stations/NMBS/008811601",
"to": "http://irail.be/stations/NMBS/008811676",
"date": "20160929",
"vehicle": "http://irail.be/vehicle/S85666",
"occupancy": "http://api.irail.be/terms/medium"
},
"user_agent": "Railer/1610 CFNetwork/808.0.2 Darwin/16.0.0"
}
This is how the first five rows of a processed DataFrame could look:
1.1: Load in both files and store the data in a pandas DataFrame, different methodologies can be applied in order to parse the JSON records (pd.io.json.json_normalize, json library, ...)
Loading the data
Loading the json files and dumping them via pickle
End of explanation
"""
training['querytime'] = pd.to_datetime(training['querytime'])
out_test['querytime'] = pd.to_datetime(out_test['querytime'])
training = training.dropna()
training['post.occupancy'] = training['post.occupancy'].apply(lambda x: x.split("http://api.irail.be/terms/",1)[1])
training['post.vehicle'] = training['post.vehicle'].apply(lambda x: x.split("http://irail.be/vehicle/",1)[1])
out_test['post.vehicle'] = out_test['post.vehicle'].apply(lambda x: x.split("http://irail.be/vehicle/",1)[1])
#create class column, eg IC058 -> IC
training['post.class'] = training['post.vehicle'].apply(lambda x: " ".join(re.findall("[a-zA-Z]+", x)))
out_test['post.class'] = out_test['post.vehicle'].apply(lambda x: " ".join(re.findall("[a-zA-Z]+", x)))
#reset the index because you have duplicate indexes now because you appended DFs in a for loop
training = training.reset_index()
stations_df = pd.read_csv('data/stations.csv')
stations_df['from'] = stations_df.index
stations_df['destination'] = stations_df['from']
stations_df[0:4]
# post.from and post.to use the same URI format as the stations_df 'URI' column
stations_df["zoekterm"]=stations_df["name"]+" trein"
stations_df.loc[stations_df['zoekterm'].str.startswith("Zaventem"), "zoekterm"] = "Zaventem trein"
stations_df.loc[stations_df['zoekterm'].str.startswith("Charleroi"), "zoekterm"] = "Charleroi trein"
stations_df.loc[stations_df['zoekterm'].str.startswith("Brussel"), "zoekterm"] = "Brussel trein"
stations_df.loc[stations_df['zoekterm'].str.startswith("Gent"), "zoekterm"] = "Gent trein"
stations_df.loc[stations_df['zoekterm'].str.startswith("Liège"), "zoekterm"] = "Luik trein"
stations_df.loc[stations_df['zoekterm'].str.startswith("Antwerpen"), "zoekterm"] = "Antwerpen trein"
druktes_df = pd.read_csv('data/station_druktes.csv')
druktes_df[0:4]
training = pd.merge(training,stations_df[["URI","from"]], left_on = 'post.from', right_on = 'URI')
training = pd.merge(training,stations_df[["URI","destination"]], left_on = 'post.to', right_on = 'URI')
training = training.drop(['URI_y', 'URI_x'], axis=1)
out_test = pd.merge(out_test,stations_df[["URI","from"]], left_on = 'post.from', right_on = 'URI')
out_test = pd.merge(out_test,stations_df[["URI","destination"]], left_on = 'post.to', right_on = 'URI')
out_test = out_test.drop(['URI_y', 'URI_x'], axis=1)
"""
Explanation: Processing the data
Cleaning the variables, and adding station variables to our dataframe
End of explanation
"""
fig, ax = plt.subplots(1,1, figsize=(5,5))
training['post.class'].value_counts().plot(kind='pie', ax=ax, autopct='%1.1f%%')
# we have a lot of null/undefined class values, especially in the test set; we can't simply throw them away
"""
Explanation: 1.2: Clean the data! Make sure the station- and vehicle-identifiers are in the right format. A station identifier consists of 9 characters (prefix = '00') and a vehicle identifier consists of the concatentation of the vehicle type (IC/L/S/P/...) and the line identifier. Try to fix as much of the records as possible, drop only the unfixable ones. How many records did you drop?
2. Exploratory Data Analysis (EDA)
Let's create some visualisations of our data in order to gain some insights. Which features are useful, which ones aren't?
We will create 3 visualisations:
* Pie chart of the class distribution
* Stacked Bar Chart depicting the distribution for one aggregated variable (such as the weekday or the vehicle type)
* Scattter plot depicting the 'crowdiness' of the stations in Belgium
For each of the visualisations, code to generate the plot has already been handed to you. You only need to prepare the data (i.e. create a new dataframe or select certain columns) such that it complies with the input specifications. If you want to create your own plotting code or extend the given code, you are free to do so!
2.1: *Create a pie_chart with the distribution of the different classes. Have a look at our webscraping lab for plotting pie charts. TIP: the value_counts() does most of the work for you!
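A sketch of what the identifier fix-ups from exercise 1.2 could look like; the helper names are hypothetical and not part of the lab solution:

```python
import re

def fix_station_id(raw):
    """Zero-pad a numeric station id to 9 characters (i.e. the '00' prefix)."""
    digits = re.sub(r"\D", "", str(raw))
    return digits.zfill(9) if digits else None

def split_vehicle_id(vehicle):
    """Split e.g. 'IC538' into the vehicle type and line identifier."""
    m = re.match(r"^([A-Za-z]+)(\d+)$", vehicle)
    return (m.group(1), m.group(2)) if m else (None, None)

print(fix_station_id("8811601"))  # 008811601
print(split_vehicle_id("IC538"))  # ('IC', '538')
```

Records where neither helper can produce a valid value would be the ones to drop.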
End of explanation
"""
training['weekday'] = training['querytime'].apply(lambda l: l.weekday())
out_test['weekday'] = out_test['querytime'].apply(lambda l: l.weekday())
print("timerange from training data:",training['querytime'].min(),training['querytime'].max())
print(training['querytime'].describe())
print(out_test['querytime'].describe())
date_training = training.set_index('querytime')
date_test = out_test.set_index('querytime')
grouper = pd.Grouper(freq="1d")  # pd.TimeGrouper is deprecated in favour of pd.Grouper
date_training = date_training.groupby(grouper).size()
date_test = date_test.groupby(grouper).size()
# plot
fig, ax = plt.subplots(1,1, figsize=(10,7))
ax.plot(date_training)
ax.plot(date_test)
fig, ax = plt.subplots(1,1, figsize=(6,6))
training['weekday'].value_counts().plot(kind='pie', ax=ax, autopct='%1.1f%%')
training['post.occupancy'].value_counts()
"""
Explanation: 2.2: *Analyze the timestamps in the training and testset. First convert the timestamps to a pandas datetime object using pd.datetime. http://pandas.pydata.org/pandas-docs/stable/timeseries.html
Have the column in this data format simplifies a lot of work, since it allows you to convert and extract time features more easily. For example:
- df['weekday'] = df['time'].apply(lambda l: l.weekday())
would map every date to a day of the week in [0,6].
A. What are the ranges of training and testset, is your challenges one of interpolating or extrapolating in the future?
TIP: The describe() function can already be helpful!
B. Plot the number of records in both training and testset per day. Have a look here on how to work with the timegrouper functionality: http://stackoverflow.com/questions/15297053/how-can-i-divide-single-values-of-a-dataframe-by-monthly-averages
C. OPTIONAL: Have insight into the time dependence can get you a long way: Make additional visualizations to make you understand how time affects train occupancy.
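The weekday convention is shared with the standard library, which makes the mapping easy to sanity-check on the sample record's timestamp:

```python
from datetime import datetime

# The querytime from the sample JSON record (timezone stripped for brevity).
ts = datetime.fromisoformat("2016-09-29T16:24:43")
print(ts.weekday())  # 3, i.e. Thursday (Monday=0 ... Sunday=6)
```

In pandas, the equivalent after pd.to_datetime is the vectorized df['querytime'].dt.weekday.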
End of explanation
"""
training[0:1]
occup = pd.crosstab(training['post.class'], training['post.occupancy'])
weekday = pd.crosstab(training['post.class'], training['weekday'])
occup = occup.drop(['BUS', 'EUR', 'EXT', 'ICE', 'ICT', 'P', 'TGV', 'THA', 'TRN', 'ic', 'null', ''])
occup = occup.apply(lambda r: r/r.sum(), axis=1)
occup[0:4]
weekday = weekday.drop(['BUS', 'EUR', 'EXT', 'ICE', 'ICT', 'P', 'TGV', 'THA', 'TRN', 'ic', 'null', ''])
weekday = weekday.apply(lambda r: r/r.sum(), axis=1)
df_occup = pd.DataFrame(occup)
df_occup.plot.bar(stacked=True);
df_weekday = pd.DataFrame(weekday)
df_weekday.plot.bar(stacked=True);
"""
Explanation: 2.3: *Create a stacked_bar_chart with the distribution of the three classes over an aggregated variable (group the data by weekday, vehicle_type, ...). More info on creating stacked bar charts can be found here: http://pandas.pydata.org/pandas-docs/stable/visualization.html#bar-plots
The dataframe you need will require your grouping variables as the index, and 1 column occupancy category, for example:
| Index | Occupancy_Low | Occupancy_Medium | Occupancy_High | Sum_Occupancy |
|-------|----------------|------------------|----------------|---------------|
| IC | 15 | 30 | 10 | 55 |
| S | 20 | 10 | 30 | 60 |
| L | 12 | 9 | 14 | 35 |
If you want the values to be relative (%), add a sum column and use it to divide the occupancy columns
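One way to build such a table is a crosstab followed by a row-wise division; a minimal sketch on made-up data (column names are illustrative):

```python
import pandas as pd

# Made-up sample in the same shape as the training data.
df = pd.DataFrame({
    "vehicle_type": ["IC", "IC", "IC", "S", "S", "L"],
    "occupancy":    ["low", "medium", "high", "low", "low", "high"],
})

table = pd.crosstab(df["vehicle_type"], df["occupancy"])
relative = table.div(table.sum(axis=1), axis=0)  # each row now sums to 1
print(relative.loc["IC"])
# relative.plot.bar(stacked=True) then draws the stacked chart
```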
End of explanation
"""
stops = stations_df[['URI','longitude','latitude']]
dest_count = training['post.to'].value_counts()
dest_count_df = pd.DataFrame({'id':dest_count.index, 'count':dest_count.values})
dest_loc = pd.merge(dest_count_df, stops, left_on = 'id', right_on = 'URI')
dest_loc = dest_loc[['id', 'count', 'latitude','longitude']]
fig, ax = plt.subplots(figsize=(12,10))
ax.scatter(dest_loc.longitude, dest_loc.latitude, s=dest_loc['count'] )
"""
Explanation: 2.4: * To have an idea about the hotspots in the railway network make a scatter plot that depicts the number of visitors per station. Aggregate on the destination station and use the GTFS dataset at iRail to find the geolocation of the stations (stops.txt): https://gtfs.irail.be/nmbs
End of explanation
"""
def get_seconds_since_midnight(x):
midnight = x.replace(hour=0, minute=0, second=0, microsecond=0)
return (x - midnight).seconds
def get_line_number(x):
pattern = re.compile("^[A-Z]+([0-9]+)$")
if pattern.match(x):
return int(pattern.match(x).group(1))
else:
return x
training['seconds_since_midnight'] = training['querytime'].apply(get_seconds_since_midnight)
training['month'] = training['querytime'].apply(lambda x: x.month)
training['occupancy'] = training['post.occupancy'].map({'low': 0, 'medium': 1, 'high': 2})
out_test['seconds_since_midnight'] = out_test['querytime'].apply(get_seconds_since_midnight)
out_test['month'] = out_test['querytime'].apply(lambda x: x.month)
fig, ax = plt.subplots(figsize=(5, 5))
corr_frame = training[['seconds_since_midnight', 'month', 'occupancy']].corr()
cax = ax.matshow(abs(corr_frame))
fig.colorbar(cax)
tickpos = np.array(range(0,len(corr_frame.columns)))
plt.xticks(tickpos,corr_frame.columns, rotation='vertical')
plt.yticks(tickpos,corr_frame.columns, rotation='horizontal')
plt.grid(None)
pd.plotting.scatter_matrix(training[['seconds_since_midnight', 'month', 'occupancy']],
                           alpha=0.2, diagonal='kde', figsize=(10, 10))  # pd.tools.plotting is deprecated
plt.grid(None)
"""
Explanation: 3. Predictive modeling: creating a baseline
Now that we have processed, cleaned and explored our data it is time to create a predictive model that predicts the occupancies of future Belgian trains. We will start with applying Logistic Regression on features extracted from our initial dataset. Some code has already been given to get you started.
Feature extraction
Some possible features include (bold ones are already implemented for you):
The day of the week
The number of seconds since midnight of the querytime
The train vehicle type (IC/P/L/...)
The line number
The line category
Information about the from- and to-station (their identifier, their coordinates, the number of visitors, ...)
The month
A binary variable indicating whether a morning (6-10AM) or evening jam (3-7PM) is ongoing
...
In order to reveal relations between these features, you can plot them with:
<a href="https://datascience.stackexchange.com/questions/10459/calculation-and-visualization-of-correlation-matrix-with-pandas"> Correlation plot </a>
<a href="http://pandas.pydata.org/pandas-docs/stable/visualization.html#visualization-scatter-matrix"> Scatter matrix </a>
These relations can be important, since some models do not perform very well when features are highly correlated
Feature normalization
Most models require the features to have a similar range, preferably [0, 1]. A minmax scaler is usually sufficient: x -> (x - xmin) / (xmax - xmin)
Scikit will be used quite extensively from now on, have a look here for preprocessing functionality: http://scikit-learn.org/stable/modules/classes.html#module-sklearn.preprocessing
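The formula is a one-liner; a stdlib sketch of it (sklearn's MinMaxScaler applies the same transform per column):

```python
def minmax_scale(values):
    # x -> (x - xmin) / (xmax - xmin), mapping the column into [0, 1].
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

seconds = [0, 21600, 43200, 86399]  # e.g. seconds since midnight
scaled = minmax_scale(seconds)
print(scaled[0], scaled[-1])  # 0.0 1.0
```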
Dealing with categorical variables
All machine learning techniques, except for tree-based methods, assume that variables are ordinal (you can define an order). For some variables, such as the day of the week or the train vehicle type, this is not true. Therefore, a pre-processing step is required that transforms these categorical variables. A few examples of such transformations are:
One-hot-encoding (supported by pandas: get_dummies )
Binary encoding: map each variable to a number, binary encode these numbers and use each bit as a feature (advantage of this technique is that it introduces a lot less new variables in contrast to one-hot-encoding)
Hash encoding
...
3.1: Extract more features than the two given ones. Make sure you extract at least one categorical variable, and transform it! What accuracy gains, relative to the given baseline (0.417339475755), do you achieve with the new features?
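A minimal sketch of the first two transformations on a made-up weekday column (the binary encoding is written by hand here, since pandas has no built-in for it):

```python
import pandas as pd

days = pd.Series([0, 1, 6, 1], name="weekday")

# One-hot encoding: one 0/1 column per observed category.
one_hot = pd.get_dummies(days, prefix="day")
print(list(one_hot.columns))  # ['day_0', 'day_1', 'day_6']

# Binary encoding: ceil(log2(#categories)) columns, one per bit.
n_bits = 3  # 3 bits are enough for 7 weekdays
binary = pd.DataFrame(
    {f"bit_{b}": days.apply(lambda d: (d >> b) & 1) for b in range(n_bits)}
)
print(binary.shape)  # (4, 3)
```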
End of explanation
"""
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=1337)  # recent scikit-learn requires shuffle=True when random_state is set
X = training[['seconds_since_midnight', 'month']]
y = training['occupancy']
cms = []
accs = []
for train_index, test_index in skf.split(X, y):
X_train, X_test = X.iloc[train_index, :], X.iloc[test_index, :]
y_train, y_test = y[train_index], y[test_index]
log_reg = LogisticRegression()
log_reg.fit(X_train, y_train)
predictions = log_reg.predict(X_test)
cm = confusion_matrix(y_test, predictions)
cms.append(cm)
accs.append(accuracy_score(y_test, predictions))
print(classification_report(y_test, predictions))
#accs.append(sum([float(cm[i][i]) for i in range(len(cm))])/np.sum(cm))
print('Confusion matrix:\n', np.mean(cms, axis=0))
print('Avg accuracy', np.mean(accs), '+-', np.std(accs))
print('Predict all lows', float(len(y[y == 0]))/float(len(y)))
"""
Explanation: We train our model on a 'training set' and evaluate it on the testset. Functionality for making this split automatically can be found <a href="http://scikit-learn.org/stable/modules/classes.html#module-sklearn.model_selection"> here </a>
Our first model is a linear logistic regression model, more information on the API <a href="http://scikit-learn.org/stable/modules/classes.html#module-sklearn.linear_model"> here </a>
The confusion matrix is part of the <a href="http://scikit-learn.org/stable/modules/classes.html#module-sklearn.metrics"> metrics functionality </a>
End of explanation
"""
training_class = training_holidays[training_holidays.class_enc != 0]
training_class = training_class[training_class.class_enc != 14]
test_class = training_holidays[(training_holidays.class_enc == 0)|(training_holidays.class_enc == 14)]
training_class["class_pred"]=training_class["class_enc"]
training_holidays_enc = pd.concat([training_class,test_class])
X_train = training_class[['seconds_since_midnight','weekday', 'month','id','id_2']]
X_test = test_class[['seconds_since_midnight','weekday', 'month','id','id_2']]
y_train = training_class['class_enc']
train.occupancy.value_counts()/train.shape[0]
test.occupancy.value_counts()/test.shape[0]
out_test_holidays_druktes.occupancy.value_counts()/out_test_holidays_druktes.shape[0]
from sklearn.linear_model import SGDClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.metrics import f1_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in scikit-learn 0.20
train, test = train_test_split(training_holidays_druktes, test_size=0.2, random_state=42)
X_train = train[['seconds_since_midnight','drukte_from','drukte_to','school','name_enc','day__0','day__1','day__2','day__3','day__4','day__5','day__6','from_lat','from_lng','des_lat','des_lng','real_trend','class__0','class__1','class__2','class__3','class__4','class__5','class__6','class__7','class__8', 'temperature', 'humidity', 'windspeed', 'weather_type', 'visibility']]
X_test = test[['seconds_since_midnight','drukte_from','drukte_to','school','name_enc','day__0','day__1','day__2','day__3','day__4','day__5','day__6','from_lat','from_lng','des_lat','des_lng','real_trend','class__0','class__1','class__2','class__3','class__4','class__5','class__6','class__7','class__8', 'temperature', 'humidity', 'windspeed', 'weather_type', 'visibility']]
y_train = train['occupancy']
y_test = test['occupancy']
# drop 'month' from the feature set if we want to predict unseen months
from xgboost import XGBClassifier
xgb = XGBClassifier(n_estimators=5000, max_depth=3, min_child_weight=6, learning_rate=0.01,
colsample_bytree=0.5, subsample=0.6, gamma=0., nthread=-1,
max_delta_step=1, objective='multi:softmax')
xgb.fit(X_train, y_train, sample_weight=[1]*len(y_train))
print(xgb.score(X_train,y_train))
print(xgb.score(X_test, y_test))
ac = AdaBoostClassifier()
ada_param_grid = {'n_estimators': [10, 30, 100, 300, 1000],
'learning_rate': [0.1, 0.3, 1.0, 3.0]}
ac_grid = GridSearchCV(ac,ada_param_grid,cv=3,
scoring='accuracy')
ac_grid.fit(X_train, y_train)
ac = ac_grid.best_estimator_
#ac.fit(X_train, y_train)
#print(ac_grid.score(X_train,y_train))
#print(ac_grid.score(X_test, y_test))
rf = RandomForestClassifier()
param_dist = {"n_estimators": [20],
"max_depth": [7, None],
"max_features": range(4, 6),
"min_samples_split": range(2, 7),
"min_samples_leaf": range(1, 7),
"bootstrap": [True, False],
"criterion": ["gini", "entropy"]}
rand = GridSearchCV(rf,param_dist,cv=3,
scoring='accuracy')
rand.fit(X_train, y_train)
rf = rand.best_estimator_
print(rand.best_estimator_)
# rf.fit(X_train, y_train)
# print(rf.score(X_train,y_train))
# print(rf.score(X_test, y_test))
from sklearn.tree import DecisionTreeClassifier
dtc = DecisionTreeClassifier(random_state=0)
dtc.fit(X_train, y_train)
print(dtc.score(X_train,y_train))
print(dtc.score(X_test, y_test))
rf2 = rand.best_estimator_
rf3 = rand.best_estimator_
rf4 = rand.best_estimator_
# voting_clf = VotingClassifier(
# estimators=[('ac', ac), ('rf', rf), ('dtc', dtc),('rf2', rf2), ('rf3', rf3), ('rf4', rf4), ('xgb', xgb)],
# voting='hard'
# )
voting_clf = VotingClassifier(
estimators=[('ac', ac), ('rf', rf), ('xgb', xgb)],
voting='hard'
)
from sklearn.metrics import accuracy_score
for clf in (ac, rf, xgb, voting_clf):
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(clf.__class__.__name__, accuracy_score(y_test, y_pred))
ac.fit(X_train, y_train)
voting_clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(clf.__class__.__name__, accuracy_score(y_test, y_pred))
pd.DataFrame([X_train.columns, rf.feature_importances_])
y_predict_test = voting_clf.predict(out_test_holidays_druktes[['seconds_since_midnight','drukte_from','drukte_to','school','name_enc','day__0','day__1','day__2','day__3','day__4','day__5','day__6','from_lat','from_lng','des_lat','des_lng','real_trend','class__0','class__1','class__2','class__3','class__4','class__5','class__6','class__7','class__8','temperature', 'humidity', 'windspeed', 'weather_type', 'visibility']])
y_predict_test = rf.predict(out_test_holidays_druktes[['seconds_since_midnight','drukte_from','drukte_to','school','name_enc','day__0','day__1','day__2','day__3','day__4','day__5','day__6','from_lat','from_lng','des_lat','des_lng','real_trend','class__0','class__1','class__2','class__3','class__4','class__5','class__6','class__7','class__8','temperature', 'humidity', 'windspeed', 'weather_type', 'visibility']])
out_test_holidays_druktes["occupancy"] = y_predict_test
out_test_holidays_druktes.occupancy.value_counts()/out_test_holidays_druktes.shape[0]
train.occupancy.value_counts()/train.shape[0]
out_test_holidays_druktes[['seconds_since_midnight','drukte_from','drukte_to','name_enc','class_enc','day__0','day__1','day__2','day__3','day__4','day__5','day__6','trend','occupancy']][0:100]
out_test_holidays_druktes[["id","occupancy"]].to_csv('predictions.csv',index=False)
"""
Explanation: Since roughly a third of the 'class' values are 'null' and we don't want to throw those records away, we can try to predict the missing labels from the other features; we get over 75% accuracy, which seems sufficient. But we can't forget to do the same for the test set!
End of explanation
"""
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=1337)  # recent scikit-learn requires shuffle=True when random_state is set
X = training[['seconds_since_midnight', 'month']]
y = training['occupancy']
cms = []
accs = []
parameters = {#'penalty': ['l1', 'l2'], # No penalty tuning, cause 'l1' is only supported by liblinear
# It can be interesting to manually take a look at 'l1' with 'liblinear', since LASSO
# provides sparse solutions (boils down to the fact that LASSO does some feature selection for you)
'solver': ['newton-cg', 'lbfgs', 'liblinear', 'sag'],
'tol': [1e-4, 1e-6, 1e-8],
'C': [1e-2, 1e-1, 1.0, 1e1],
'max_iter': [1e2, 1e3]
}
for train_index, test_index in skf.split(X, y):
X_train, X_test = X.iloc[train_index, :], X.iloc[test_index, :]
y_train, y_test = y[train_index], y[test_index]
tuned_log_reg = GridSearchCV(LogisticRegression(penalty='l2'), parameters, cv=3,
scoring='accuracy')
tuned_log_reg.fit(X_train, y_train)
print(tuned_log_reg.best_params_)
predictions = tuned_log_reg.predict(X_test)
cm = confusion_matrix(y_test, predictions)
cms.append(cm)
accs.append(accuracy_score(y_test, predictions))
print(classification_report(y_test, predictions))
print('Confusion matrix:\n', np.mean(cms, axis=0))
print('Avg accuracy', np.mean(accs), '+-', np.std(accs))
print('Predict all lows', float(len(y[y == 0]))/float(len(y)))
"""
Explanation: 4. 'Advanced' predictive modeling: model selection & hyper-parameter tuning
Model evaluation and hyper-parameter tuning
In order to evaluate your model, K-fold cross-validation (https://en.wikipedia.org/wiki/Cross-validation_(statistics) ) is often applied. Here, the data is divided into K chunks; K-1 chunks are used for training while 1 chunk is used for testing. Different metrics exist, such as accuracy, AUC, F1 score, and more. For this lab, we will use accuracy.
Some machine learning techniques, supported by sklearn:
SVMs
Decision Trees
Decision Tree Ensemble: AdaBoost, Random Forest, Gradient Boosting
Multi-Level Perceptrons/Neural Networks
Naive Bayes
K-Nearest Neighbor
...
To tune the different hyper-parameters of a machine learning model, again different techniques exist:
* Grid search: exhaustively try all possible parameter combinations (Code to tune the different parameters of our LogReg model has been given)
* Random search: try a number of random parameter combinations; this has been shown to be surprisingly competitive with exhaustive grid search
4.1: *Choose one or more machine learning techniques other than Logistic Regression and apply them to our data, with tuned hyper-parameters! You will see that switching techniques in sklearn is really simple. Which model performs best on this data?*
End of explanation
"""
holiday_pops = pd.read_json('data/holidays.json')
holidays = pd.read_json( (holiday_pops['holidays']).to_json(), orient='index')
holidays['date'] = pd.to_datetime(holidays['date'])
holidays.head(1)
training["date"] = training["querytime"].values.astype('datetime64[D]')
out_test["date"] = out_test["querytime"].values.astype('datetime64[D]')
training_holidays = pd.merge(training,holidays, how="left", on='date')
training_holidays.school = training_holidays.school.fillna(0)
training_holidays.name = training_holidays.name.fillna("geen")
training_holidays[0:1]
out_test_holidays = pd.merge(out_test,holidays, how="left", on='date')
out_test_holidays.school = out_test_holidays.school.fillna(0)
out_test_holidays.name = out_test_holidays.name.fillna("geen")
out_test_holidays[0:1]
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
#encode the names from the holidays (Summer,Christmas...)
training_holidays["name_enc"] = encoder.fit_transform(training_holidays["name"])
out_test_holidays["name_enc"] = encoder.fit_transform(out_test_holidays["name"])
#encode the classes (IC,TGV,L...)
training_holidays["class_enc"] = encoder.fit_transform(training_holidays["post.class"])
out_test_holidays["class_enc"] = encoder.fit_transform(out_test_holidays["post.class"])
training_holidays=training_holidays.rename(columns = {'too':'destination'})
out_test_holidays=out_test_holidays.rename(columns = {'too':'destination'})
stations_df = pd.merge(stations_df,druktes_df.drop(['Unnamed: 0','station'],1), left_on = 'name', right_on = 'station_link')
"""
Explanation: 5. Data augmentation with external data sources
There is an unlimited number of factors that influence the occupancy of a train! Definitely more than the limited amount of data given in the feedback logs. Therefore, we will try to create new features for our dataset using external data sources. Examples of data sources include:
Weather APIs
A holiday calendar
Event calendars
Connection and delay information of the SpitsGidsAPI
Data from the NMBS/SNCB
Twitter and other social media
many, many more
In order to save time, a few 'prepared' files have already been given to you. Of course, you are free to scrape/generate your own data as well:
Hourly weather data for all stations in Belgium, from August till April weather_data.zip
A file which contains the vehicle identifiers and the stations where this vehicle stops line_info.csv
Based on this line_info, you can construct a graph of the rail net in Belgium and apply some fancy graph features (pagerank, edge betweenness, ...) iGraph experiments.ipynb
A file containing the coordinates of a station, and the number of visitors during week/weekend for 2015 station_druktes.csv
A file with some of the holidays (this can definitely be extended) holidays.json
For event data, there is the Eventful API
5.1: Pick one (or more) external data source(s) and link your current data frame to that data source (requires some creativity in most cases). Extract features from your new, linked data source and re-train your model. How much gain did you achieve?
If we look at "training.id.value_counts()", we see that these are mostly student destinations; maybe that is because it is mainly students who use this app? We should therefore think about when they take the train, and what could influence that. Maybe incorporate the number of students per station?
End of explanation
"""
def transform_druktes(row):
start = row['from']
destination = row['destination']
day = row['weekday']
row['from_lat']=stations_df[stations_df["from"] == start]["latitude"].values[0]
row['from_lng']=stations_df[stations_df["from"] == start]["longitude"].values[0]
row['des_lat']=stations_df[stations_df["destination"] == destination]["latitude"].values[0]
row['des_lng']=stations_df[stations_df["destination"] == destination]["longitude"].values[0]
row['zoekterm']=stations_df[stations_df["destination"] == destination]["zoekterm"].values[0]
if day == 5:
row['drukte_from']=stations_df[stations_df["from"] == start]["zaterdag"].values[0]
row['drukte_to']=stations_df[stations_df["destination"] == destination]["zaterdag"].values[0]
elif day == 6:
row['drukte_from']=stations_df[stations_df["from"] == start]["zondag"].values[0]
row['drukte_to']=stations_df[stations_df["destination"] == destination]["zondag"].values[0]
# weekdays (day 0-4): scale the weekly visitor count down to a daily figure
elif day == 4:
row['drukte_from']=stations_df[stations_df["from"] == start]["week"].values[0]/5.0*1.11
row['drukte_to']=stations_df[stations_df["destination"] == destination]["week"].values[0]/5.0*1.11
elif day == 3:
row['drukte_from']=stations_df[stations_df["from"] == start]["week"].values[0]/5.0*1.21
row['drukte_to']=stations_df[stations_df["destination"] == destination]["week"].values[0]/5.0*1.21
elif day == 2:
row['drukte_from']=stations_df[stations_df["from"] == start]["week"].values[0]/5.0*0.736
row['drukte_to']=stations_df[stations_df["destination"] == destination]["week"].values[0]/5.0*0.736
elif day == 1:
row['drukte_from']=stations_df[stations_df["from"] == start]["week"].values[0]/5.0*0.92
row['drukte_to']=stations_df[stations_df["destination"] == destination]["week"].values[0]/5.0*0.92
else:
row['drukte_from']=stations_df[stations_df["from"] == start]["week"].values[0]/5.0*1.016
row['drukte_to']=stations_df[stations_df["destination"] == destination]["week"].values[0]/5.0*1.016
return row
training_holidays_druktes = training_holidays_druktes.apply(transform_druktes, axis=1)
out_test_holidays_druktes = out_test_holidays_druktes.apply(transform_druktes, axis=1)
training_holidays_druktes = pd.concat([training_holidays_druktes,
pd.get_dummies(training_holidays_druktes['weekday'], prefix="day_"),
],1)
out_test_holidays_druktes = pd.concat([out_test_holidays_druktes,
pd.get_dummies(out_test_holidays_druktes['weekday'], prefix="day_"),
],1)
trends_df = pd.DataFrame()
real_trend_df = pd.DataFrame()
from pytrends.request import TrendReq
import pandas as pd
# enter your own credentials
google_username = "davidjohansmolders@gmail.com"
google_password = "*******"
#path = ""
# Login to Google. Only need to run this once, the rest of requests will use the same session.
pytrend = TrendReq(google_username, google_password, custom_useragent='My Pytrends Script')
for i in range(0,645):
if i % 4 != 0:
continue
try:
pytrend.build_payload(kw_list=[stations_df[stations_df.destination == i].zoekterm.values[0], stations_df[stations_df.destination == i+1].zoekterm.values[0], stations_df[stations_df.destination == i+2].zoekterm.values[0], stations_df[stations_df.destination == i+3].zoekterm.values[0], "Brussel trein"],geo="BE",timeframe='2016-07-27 2017-04-05')
real_trend_df = pd.concat([real_trend_df,pytrend.interest_over_time()], axis=1)
except:
continue
no_dup_trends = real_trend_df.T.groupby(level=0).first().T  # drop columns duplicated across payloads
training_holidays_druktes = pd.merge(training_holidays_druktes,stations_df[["destination","zoekterm"]], left_on = 'destination', right_on = 'destination')
out_test_holidays_druktes = pd.merge(out_test_holidays_druktes,stations_df[["destination","zoekterm"]], left_on = 'destination', right_on = 'destination')
int(real_trend_df.loc["2016-07-28"]["Brussel trein"])
training_holidays_druktes_copy = training_holidays_druktes
out_test_holidays_druktes_copy = out_test_holidays_druktes
training_holidays_druktes = training_holidays_druktes_copy
out_test_holidays_druktes = out_test_holidays_druktes_copy
def get_trends(row):
zoek = str(row.zoekterm)
datum = str(row["date"])[0:10]
try:
row["real_trend"] = int(real_trend_df.loc[datum][zoek])
except:
row["real_trend"] = 0
return row
training_holidays_druktes = training_holidays_druktes.apply(get_trends, axis=1)
out_test_holidays_druktes = out_test_holidays_druktes.apply(get_trends, axis=1)
training_holidays_druktes = training_holidays_druktes.drop(['post.date','post.from','post.vehicle','querytype','user_agent','post.to','name','post.class'],1)
out_test_holidays_druktes = out_test_holidays_druktes.drop(['post.date','post.from','post.vehicle','querytype','user_agent','post.to','name','post.class'],1)
training_holidays_druktes["hour"] = training_holidays_druktes["querytime"].values.astype('datetime64[h]').astype('str')
out_test_holidays_druktes["hour"] = out_test_holidays_druktes["querytime"].values.astype('datetime64[h]').astype('str')
training_holidays_druktes["hour_lag"] = (training_holidays_druktes["querytime"].values.astype('datetime64[h]')-2).astype('str')
out_test_holidays_druktes["hour_lag"] = (out_test_holidays_druktes["querytime"].values.astype('datetime64[h]')-2).astype('str')
training_holidays_druktes["timeframe"] = training_holidays_druktes["hour_lag"]+" "+training_holidays_druktes["hour"]
out_test_holidays_druktes["timeframe"] = out_test_holidays_druktes["hour_lag"]+" "+out_test_holidays_druktes["hour"]
# enter your own credentials
google_username = "davidjohansmolders@gmail.com"
google_password = "*******"
#path = ""
# Login to Google. Only need to run this once, the rest of requests will use the same session.
pytrend = TrendReq(google_username, google_password, custom_useragent='My Pytrends Script')
def get_hour_trends(row):
zoek = str(row.zoekterm_x)
tijd = str(row["timeframe"])
try:
pytrend.build_payload(kw_list=[zoek],timeframe=tijd)
row["hour_trend"] = int(pytrend.interest_over_time()[zoek].sum())
except:
row["hour_trend"] = 0
return row
training_holidays_druktes = training_holidays_druktes.apply(get_hour_trends, axis=1)
out_test_holidays_druktes = out_test_holidays_druktes.apply(get_hour_trends, axis=1)
training_holidays_druktes = pd.concat([training_holidays_druktes,
pd.get_dummies(training_holidays_druktes['class_enc'], prefix="class_"),
],1)
out_test_holidays_druktes = pd.concat([out_test_holidays_druktes,
pd.get_dummies(out_test_holidays_druktes['class_enc'], prefix="class_"),
],1)
# file names for all csv files containing weather information per month
weather_csv = ['weather_data_apr_1', 'weather_data_apr_2', 'weather_data_aug_1', 'weather_data_aug_2', 'weather_data_dec_1', 'weather_data_dec_2', 'weather_data_feb_1', 'weather_data_feb_2', 'weather_data_jan_1', 'weather_data_jan_2', 'weather_data_july_1', 'weather_data_july_2', 'weather_data_mar_1', 'weather_data_mar_2', 'weather_data_nov_1', 'weather_data_nov_2', 'weather_data_oct_1', 'weather_data_oct_2', 'weather_data_sep_1', 'weather_data_sep_2']
for i in range(len(weather_csv)):
weather_csv[i] = 'data/weather_data/' + weather_csv[i] + '.csv'
# create column of station index
stations_df['station_index'] = stations_df.index
# put all weather data in an array
weather_months = []
for csv in weather_csv:
weather_month = pd.read_csv(csv)
# convert date_time to a datetime object
weather_month['date_time'] = pd.to_datetime(weather_month['date_time'])
weather_month = weather_month.drop(['Unnamed: 0','lat','lng'], 1)
weather_months.append(weather_month)
# concatenate all weather data
weather = pd.concat(weather_months)
# merge weather month with station to convert station name to index (that can be found in holiday_druktes)
weather = pd.merge(weather, stations_df[["name", "station_index"]], left_on = 'station_name', right_on = 'name')
weather = weather.drop(['station_name', 'name'], 1)
# truncate querytime to the hour in new column
import datetime
training_holidays_druktes['querytime_hour'] = training_holidays_druktes['querytime'].apply(lambda dt: datetime.datetime(dt.year, dt.month, dt.day, dt.hour))
# join training with weather data
training_holidays_druktes_weather = pd.merge(training_holidays_druktes, weather, how='left', left_on = ['destination', 'querytime_hour'], right_on = ['station_index', 'date_time'])
training_holidays_druktes_weather = training_holidays_druktes_weather.drop(['querytime_hour', 'date_time', 'station_index'], 1)
training_holidays_druktes_weather = training_holidays_druktes_weather.drop_duplicates()
# fill null rows of weather data with their mean
training_holidays_druktes_weather['temperature'].fillna(training_holidays_druktes_weather['temperature'].mean(), inplace=True)
training_holidays_druktes_weather['humidity'].fillna(training_holidays_druktes_weather['humidity'].mean(), inplace=True)
training_holidays_druktes_weather['windspeed'].fillna(training_holidays_druktes_weather['windspeed'].mean(), inplace=True)
training_holidays_druktes_weather['visibility'].fillna(training_holidays_druktes_weather['visibility'].mean(), inplace=True)
training_holidays_druktes_weather['weather_type'].fillna(training_holidays_druktes_weather['weather_type'].mean(), inplace=True)
#cast weather type to int
training_holidays_druktes_weather['weather_type'] = training_holidays_druktes_weather['weather_type'].astype(int)
# Add weather data to test data
# truncate querytime to the hour in new column
out_test_holidays_druktes['querytime_hour'] = out_test_holidays_druktes['querytime'].apply(lambda dt: datetime.datetime(dt.year, dt.month, dt.day, dt.hour))
# join test with weather data
out_test_holidays_druktes_weather = pd.merge(out_test_holidays_druktes, weather, how='left', left_on = ['destination', 'querytime_hour'], right_on = ['station_index', 'date_time'])
out_test_holidays_druktes_weather = out_test_holidays_druktes_weather.drop(['querytime_hour', 'date_time', 'station_index'], 1)
out_test_holidays_druktes_weather = out_test_holidays_druktes_weather.drop_duplicates()
# fill null rows of weather data with their mean
out_test_holidays_druktes_weather['temperature'].fillna(out_test_holidays_druktes_weather['temperature'].mean(), inplace=True)
out_test_holidays_druktes_weather['humidity'].fillna(out_test_holidays_druktes_weather['humidity'].mean(), inplace=True)
out_test_holidays_druktes_weather['windspeed'].fillna(out_test_holidays_druktes_weather['windspeed'].mean(), inplace=True)
out_test_holidays_druktes_weather['visibility'].fillna(out_test_holidays_druktes_weather['visibility'].mean(), inplace=True)
out_test_holidays_druktes_weather['weather_type'].fillna(out_test_holidays_druktes_weather['weather_type'].mean(), inplace=True)
#cast weather type to int
out_test_holidays_druktes_weather['weather_type'] = out_test_holidays_druktes_weather['weather_type'].astype(int)
# set out_test_holidays_druktes and training_holidays_druktes equal to weather counterpart such that we don't need to change all variable names above
out_test_holidays_druktes = out_test_holidays_druktes_weather
training_holidays_druktes = training_holidays_druktes_weather
"""
Explanation: Transform all null classes into a single null class, or maybe try to predict the missing class from the origin, destination and time?
End of explanation
"""
pickle.dump(training_holidays_druktes,open("temp_data/training_holidays_druktes.pkl","wb"))
pickle.dump(out_test_holidays_druktes,open("temp_data/out_test_holidays_druktes.pkl","wb"))
training_holidays_druktes = pd.read_pickle("temp_data/training_holidays_druktes.pkl")
out_test_holidays_druktes = pd.read_pickle("temp_data/out_test_holidays_druktes.pkl")
training_holidays_druktes[0:5]
"""
Explanation: 6. Generating a Kaggle submission and comparing your methodology to others
6.1: Train your best performing model on train.nldjson and generate predictions for test.nldjson. Create a file called submission.csv with format listed below and submit it on the Kaggle competition! What's your position on the leaderboard?
End of explanation
"""
|
pligor/predicting-future-product-prices | 02_preprocessing/exploration04-price_history_dfa.ipynb | agpl-3.0 | # -*- coding: UTF-8 -*-
from __future__ import division
import numpy as np
import pandas as pd
import sys
import math
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
import re
import os
import csv
from helpers.outliers import MyOutliers
from skroutz_mobile import SkroutzMobile
from sklearn.ensemble import IsolationForest
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, r2_score
from skroutz_mobile import SkroutzMobile
from sklearn.model_selection import StratifiedShuffleSplit
from helpers.my_train_test_split import MySplitTrainTest
from sklearn.preprocessing import StandardScaler
from preprocess_price_history import PreprocessPriceHistory
from price_history import PriceHistory
from dfa import dfa
import scipy.signal as ss
import nolds
%matplotlib inline
random_state = np.random.RandomState(seed=16011984)
csv_in = "../price_history_02_with_seq_start.csv"
orig_df = pd.read_csv(csv_in, index_col=0, encoding='utf-8', quoting=csv.QUOTE_ALL)
orig_df.shape
df = orig_df.drop(labels=PriceHistory.SPECIAL_COLS, axis=1)
df.shape
CSV_FILEPATH = "../price_history_02_with_seq_start.csv"
#xx = df.iloc[0, ]
ph = PriceHistory(CSV_FILEPATH)
tt = ph.extractSequenceByLocation(iloc=0)
tt.shape
tt[-1]
alpha = nolds.dfa(tt)
alpha
seqs = [ph.extractSequenceByLocation(iloc=ii) for ii in xrange(len(ph.df))]
len(seqs)
len(seqs[0])
alphas = []
for seq in seqs:
try:
alpha = nolds.dfa(seq.values)
if not np.isnan(alpha):
alphas.append(alpha)
except AssertionError, ee:
pass
#alphas = [seq for seq in seqs if len(seq) > 1 and not np.all(seq[0] == seq)]
len(alphas)
plt.figure(figsize=(17,8))
sns.distplot(alphas, rug=True,
axlabel='Alpha of Detrended Flunctuation Analysis')
plt.show()
"""
Explanation: https://cschoel.github.io/nolds/nolds.html#detrended-fluctuation-analysis
End of explanation
"""
# References
"""
Explanation: Conclusion
the estimate alpha for the Hurst parameter (alpha < 1: stationary process similar to fractional Gaussian noise with H = alpha, alpha > 1: non-stationary process similar to fractional Brownian motion with H = alpha - 1)
So most price histories are identified, as we would expect, as non-stationary processes.
End of explanation
"""
seq = seqs[0].values
plt.plot(seq)
detrendeds = [ss.detrend(seq) for seq in seqs]
len(detrendeds)
plt.plot(detrendeds[0])
detrendeds[0]
alldetr = []
for detrended in detrendeds:
alldetr += list(detrended)
len(alldetr)
fig = plt.figure( figsize=(14, 6) )
sns.distplot(alldetr, axlabel="Price Deviation from zero after detrend")
plt.show()
stdsca = StandardScaler(with_std=False)
seqs_zero_mean = [stdsca.fit_transform(seq.values.reshape(1, -1).T) for seq in seqs]
len(seqs_zero_mean), seqs_zero_mean[0].shape, seqs_zero_mean[3].shape
allzeromean = np.empty(shape=(0, 1))
for seq in seqs_zero_mean:
allzeromean = np.vstack( (allzeromean, seq) )
allzeromean.shape
fig = plt.figure( figsize=(14, 6) )
sns.distplot(allzeromean.flatten(),
axlabel="Price Deviation from zero before detrend")
plt.show()
"""
Explanation: https://cschoel.github.io/nolds/nolds.html#detrended-fluctuation-analysis
https://scholar.google.co.uk/scholar?q=Detrended+fluctuation+analysis%3A+A+scale-free+view+on+neuronal+oscillations&btnG=&hl=en&as_sdt=0%2C5
MLA format:
Hardstone, Richard, et al. "Detrended fluctuation analysis: a scale-free view on neuronal oscillations." Frontiers in physiology 3 (2012).
Price histories Detrended
End of explanation
"""
|
arcyfelix/Courses | 18-03-07-Deep Learning With Python by François Chollet/Chapter 6.2 - Understanding recurrent neural networks.ipynb | apache-2.0 | from keras.models import Sequential
from keras.layers import Embedding, SimpleRNN
model = Sequential()
model.add(Embedding(10000, 32))
model.add(SimpleRNN(32))
model.summary()
"""
Explanation: Chapter 6.2 - Understanding recurrent neural networks
Simple RNN
The SimpleRNN layer takes input of shape (batch_size, timesteps, input_features).
End of explanation
"""
model = Sequential()
model.add(Embedding(10000, 32))
model.add(SimpleRNN(32, return_sequences=True))
model.summary()
"""
Explanation: Like all recurrent layers in Keras, SimpleRNN can be run in two different modes: it can return either the full sequence of successive outputs for each timestep (a 3D tensor of shape (batch_size, timesteps, output_features)), or only the last output for each input sequence (a 2D tensor of shape (batch_size, output_features)).
End of explanation
"""
model = Sequential()
model.add(Embedding(input_dim = 10000,
output_dim = 32))
model.add(SimpleRNN(units = 32,
return_sequences = True))
model.add(SimpleRNN(units = 32,
return_sequences = True))
model.add(SimpleRNN(units = 32,
return_sequences = True))
# The last layer returns only the last outputs
model.add(SimpleRNN(32))
model.summary()
"""
Explanation: Stacking multiple recurrent layers on top of each other can have benefits, like with convolutional neural networks.
End of explanation
"""
from keras.datasets import imdb
from keras.preprocessing.sequence import pad_sequences
# Number of words to be used as features
max_features = 10000
# Cutting the review after this number of words
maxlen = 500
batch_size = 32
print('Loading data...')
(input_train, y_train), (input_test, y_test) = imdb.load_data(num_words=max_features)
print(len(input_train), 'train sequences')
print(len(input_test), 'test sequences')
print('Pad sequences (samples x time)')
input_train = pad_sequences(input_train, maxlen=maxlen)
input_test = pad_sequences(input_test, maxlen=maxlen)
print('input_train shape:', input_train.shape)
print('input_test shape:', input_test.shape)
from keras.layers import Dense
model = Sequential()
model.add(Embedding(input_dim = max_features,
output_dim = 32))
model.add(SimpleRNN(units = 32))
model.add(Dense(units = 1,
activation='sigmoid'))
model.compile(optimizer = 'rmsprop',
loss = 'binary_crossentropy',
metrics = ['acc'])
history = model.fit(x = input_train,
y = y_train,
epochs = 10,
batch_size = 128,
validation_split = 0.2)
"""
Explanation: IMDB example
End of explanation
"""
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
"""
Explanation: Visualizing the results
End of explanation
"""
from keras.layers import LSTM
model = Sequential()
model.add(Embedding(input_dim = max_features,
output_dim = 32))
model.add(LSTM(units = 32))
model.add(Dense(units = 1,
activation = 'sigmoid'))
model.compile(optimizer = 'rmsprop',
loss = 'binary_crossentropy',
metrics = ['acc'])
history = model.fit(x = input_train,
y = y_train,
epochs = 10,
batch_size = 128,
validation_split = 0.2)
"""
Explanation: LSTM
End of explanation
"""
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
"""
Explanation: Visualizing the results
End of explanation
"""
|
metpy/MetPy | v1.1/_downloads/87fd6ee8be4ea1587fa2ad7f4206407a/Combined_plotting.ipynb | bsd-3-clause | import xarray as xr
from metpy.cbook import get_test_data
from metpy.plots import ContourPlot, ImagePlot, MapPanel, PanelContainer
from metpy.units import units
# Use sample NARR data for plotting
narr = xr.open_dataset(get_test_data('narr_example.nc', as_file_obj=False))
"""
Explanation: Combined Plotting
Demonstrate the use of MetPy's simplified plotting interface combining multiple plots.
Also shows how to control the maps that are plotted. Plots sample NARR data.
End of explanation
"""
contour = ContourPlot()
contour.data = narr
contour.field = 'Temperature'
contour.level = 850 * units.hPa
contour.linecolor = 'red'
contour.contours = 15
"""
Explanation: Create a contour plot of temperature
End of explanation
"""
img = ImagePlot()
img.data = narr
img.field = 'Geopotential_height'
img.level = 850 * units.hPa
"""
Explanation: Create an image plot of Geopotential height
End of explanation
"""
panel = MapPanel()
panel.area = 'us'
panel.layers = ['coastline', 'borders', 'states', 'rivers', 'ocean', 'land']
panel.title = 'NARR Example'
panel.plots = [contour, img]
pc = PanelContainer()
pc.size = (10, 8)
pc.panels = [panel]
pc.show()
"""
Explanation: Plot the data on a map
End of explanation
"""
|
JarnoRFB/qtpyvis | notebooks/tensorflow/train.ipynb | mit | from IPython.display import clear_output, Image, display, HTML
# Helper functions for TF Graph visualization
def strip_consts(graph_def, max_const_size=32):
"""Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = tf.compat.as_bytes("<stripped %d bytes>"%size)
return strip_def
def show_graph(graph_def, max_const_size=32):
"""Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:800px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '"'))
display(HTML(iframe))
"""
Explanation: Inline visualization of TensorFlow graph from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/deepdream/deepdream.ipynb
and
http://sdsawtelle.github.io/blog/output/getting-started-with-tensorflow-in-jupyter.html
End of explanation
"""
dataset = mnist.load_data()
train_data = dataset[0][0] / 255
train_data = train_data[..., np.newaxis].astype('float32')
train_labels = np_utils.to_categorical(dataset[0][1]).astype('float32')
test_data = dataset[1][0] / 255
test_data = test_data[..., np.newaxis].astype('float32')
test_labels = np_utils.to_categorical(dataset[1][1]).astype('float32')
train_data.shape
train_labels[0]
plt.imshow(train_data[0, ..., 0])
"""
Explanation: Preprocessing the data
End of explanation
"""
def get_batch(data, labels, num_samples):
"""Get a random batch of corresponding data and labels of size `num_samples`"""
idx = np.random.choice(np.arange(0, data.shape[0]), num_samples)
return data[idx], labels[idx]
"""
Explanation: Function for providing batches
End of explanation
"""
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
def conv2d(x, W):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='VALID')
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='VALID')
graph_plain = tf.Graph()
with graph_plain.as_default():
x = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
with tf.name_scope('conv2d_1'):
W_conv1 = weight_variable([3, 3, 1, 32])
b_conv1 = bias_variable([32])
act_conv1 = tf.nn.relu(conv2d(x, W_conv1) + b_conv1)
with tf.name_scope('max_pooling2d_1'):
pool1 = max_pool_2x2(act_conv1)
with tf.name_scope('conv2d_2'):
W_conv2 = weight_variable([3, 3, 32, 32])
b_conv2 = bias_variable([32])
act_conv2 = tf.nn.relu(conv2d(pool1, W_conv2) + b_conv2)
with tf.name_scope('dropout_1'):
keep_prob1 = tf.placeholder(tf.float32)
drop1 = tf.nn.dropout(act_conv2, keep_prob=keep_prob1)
with tf.name_scope('flatten_1'):
flatten_1 = tf.reshape(drop1, [-1, 11 * 11 * 32])
with tf.name_scope('dense_1'):
W_dense1 = weight_variable([11 * 11 * 32, 64])
b_dense_1 = bias_variable([64])
act_dense1 = tf.nn.relu((flatten_1 @ W_dense1) + b_dense_1)
with tf.name_scope('dropout_2'):
keep_prob2 = tf.placeholder(tf.float32)
drop2 = tf.nn.dropout(act_dense1, keep_prob=keep_prob2)
with tf.name_scope('dense_2'):
W_dense2 = weight_variable([64, 10])
b_dense2 = bias_variable([10])
# No softmax activation here: tf.nn.softmax_cross_entropy_with_logits expects raw logits and applies softmax internally.
net_dense2 = (drop2 @ W_dense2) + b_dense2
with tf.name_scope('loss'):
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits=net_dense2, labels=y_)
)
train_step = tf.train.AdamOptimizer(1e-4).minimize(loss)
correct_prediction = tf.equal(tf.argmax(net_dense2, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Create init op and saver in the graph, so they can find the variables.
init_op_plain = tf.global_variables_initializer()
saver = tf.train.Saver()
show_graph(graph_plain)
"""
Explanation: Defining the TensorFlow model with the core API
End of explanation
"""
sess = tf.Session(graph=graph_plain)
sess.run(init_op_plain)
for i in range(1000):
batch = get_batch(train_data, train_labels, 50)
if i % 100 == 0:
train_accuracy = sess.run(
fetches=accuracy, feed_dict={x: batch[0], y_: batch[1], keep_prob1: 1.0, keep_prob2: 1.0}  # disable dropout when measuring accuracy
)
print('step %d, training accuracy %g' % (i, train_accuracy))
sess.run(train_step, feed_dict={x: batch[0], y_: batch[1], keep_prob1: 0.75, keep_prob2: 0.5})
print('test accuracy %g' % accuracy.eval(feed_dict={x: test_data, y_: test_labels, keep_prob1: 1.0, keep_prob2: 1.0},
session=sess))
# Save the model including weights.
saver.save(sess, 'tf_mnist_model_plain/tf_mnist_model.ckpt')
sess.close()
"""
Explanation: Training loop
End of explanation
"""
graph_layers = tf.Graph()
with graph_layers.as_default():
x = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
training = tf.placeholder_with_default(False, shape=(), name='training') # Switch for dropout layers.
t = tf.layers.conv2d(x, filters=32, kernel_size=(3 ,3), activation=tf.nn.relu,
kernel_initializer=tf.truncated_normal_initializer(stddev=0.1),
name='conv2d_1')
t = tf.layers.max_pooling2d(t, pool_size=(2, 2), strides=(2, 2),
name='max_pooling2d_1')
t = tf.layers.conv2d(t, filters=32, kernel_size=(3, 3), activation=tf.nn.relu,
kernel_initializer=tf.truncated_normal_initializer(stddev=0.1),
name='conv2d_2')
t = tf.layers.dropout(t, rate=0.25, training=training, name='dropout_1')
t = tf.contrib.layers.flatten(t)
# Dense does not really flatten, but behaves like tensordot
# https://docs.scipy.org/doc/numpy/reference/generated/numpy.tensordot.html
# https://github.com/tensorflow/tensorflow/issues/8175
t = tf.layers.dense(t, units=64, activation=tf.nn.relu, name='dense_1')
t = tf.layers.dropout(t, rate=0.5, training=training, name='dropout_2')
t = tf.layers.dense(t, units=10, name='dense_2')
with tf.name_scope('loss'):
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits=t, labels=y_)
)
train_step = tf.train.AdamOptimizer(1e-4).minimize(loss)
correct_prediction = tf.equal(tf.argmax(t, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Create init op and saver in the graph, so they can find the variables.
init_op_layers = tf.global_variables_initializer()
saver = tf.train.Saver()
show_graph(graph_layers)
"""
Explanation: Defining the TensorFlow model with the tf.layers API.
End of explanation
"""
sess = tf.Session(graph=graph_layers)
sess.run(init_op_layers)
for i in range(2000):
batch = get_batch(train_data, train_labels, 50)
if i % 100 == 0:
train_accuracy = sess.run(
            fetches=accuracy, feed_dict={x: batch[0], y_: batch[1]}  # training defaults to False, so dropout is disabled during evaluation
)
print('step %d, training accuracy %g' % (i, train_accuracy))
sess.run(train_step, feed_dict={x: batch[0], y_: batch[1], training: True})
print('test accuracy %g' % accuracy.eval(feed_dict={x: test_data, y_: test_labels},
session=sess))
# Save the model including weights.
saver.save(sess, 'tf_mnist_model_layers/tf_mnist_model.ckpt')
sess.close()
"""
Explanation: Training loop
End of explanation
"""
def model_fn(features, labels, mode):
training = (mode == tf.estimator.ModeKeys.TRAIN)
t = tf.layers.conv2d(features['x'], filters=32, kernel_size=(3 ,3), activation=tf.nn.relu,
kernel_initializer=tf.truncated_normal_initializer(stddev=0.1),
name='conv2d_1')
t = tf.layers.max_pooling2d(t, pool_size=(3, 3), strides=(1 ,1),
name='max_pooling2d_1')
t = tf.layers.conv2d(t, filters=32, kernel_size=(3, 3), activation=tf.nn.relu,
kernel_initializer=tf.truncated_normal_initializer(stddev=0.1),
name='conv2d_2')
t = tf.layers.dropout(t, rate=0.25, training=training, name='dropout_1')
t = tf.contrib.layers.flatten(t)
# Dense does not really flatten, but behaves like tensordot
# https://docs.scipy.org/doc/numpy/reference/generated/numpy.tensordot.html
# https://github.com/tensorflow/tensorflow/issues/8175
t = tf.layers.dense(t, units=64, activation=tf.nn.relu, name='dense_1')
t = tf.layers.dropout(t, rate=0.5, training=training, name='dropout_2')
t = tf.layers.dense(t, units=10, name='dense_2')
predictions = tf.argmax(t, axis=1)
# Provide an estimator spec for `ModeKeys.PREDICT`.
if mode == tf.estimator.ModeKeys.PREDICT:
return tf.estimator.EstimatorSpec(
mode=mode,
predictions={"numbers": predictions}
)
eval_metric_ops = {
'accuracy': tf.metrics.accuracy(predictions=predictions,
labels=tf.argmax(labels, axis=1))
}
loss = tf.losses.softmax_cross_entropy(labels, t)
train_op = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss, global_step=tf.train.get_global_step())
# Provide an estimator spec for `ModeKeys.EVAL` and `ModeKeys.TRAIN` modes.
return tf.estimator.EstimatorSpec(
mode=mode,
loss=loss,
train_op=train_op,
eval_metric_ops=eval_metric_ops
)
estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir='tf_mnist_model_estimator/')
train_input_fn = tf.estimator.inputs.numpy_input_fn(
x={'x': train_data.astype('float32')},
y=train_labels.astype('float32'),
batch_size=50,
num_epochs=1,
shuffle=True
)
estimator.train(input_fn=train_input_fn)
# The model is automatically saved when using the estimator API.
test_input_fn = tf.estimator.inputs.numpy_input_fn(
x={'x': test_data},
y=test_labels,
num_epochs=1,
shuffle=False)
estimator.evaluate(input_fn=test_input_fn)
plt.imshow(train_data[0, ..., 0])
predict_input_fn = tf.estimator.inputs.numpy_input_fn(
x={"x": train_data[0:1]},
shuffle=False)
predictions = estimator.predict(input_fn=predict_input_fn)
for pred in predictions:
print(pred)
# Restore model to look at the graph.
import tensorflow as tf
sess=tf.Session()
#First let's load meta graph and restore weights
saver = tf.train.import_meta_graph('tf_mnist_model_estimator/model.ckpt-1200.meta')
saver.restore(sess,tf.train.latest_checkpoint('tf_mnist_model_estimator/'))
show_graph(sess.graph)
"""
Explanation: Defining the TensorFlow model with the tf.estimator API.
End of explanation
"""
|
karst87/ml | dev/pyml/datacamp/kaggle-python-tutorial-on-machine-learning/01_getting-started-with-python.ipynb | mit | #Compute x = 4 * 3 and print the result
x = 4 * 3
print(x)
#Compute y = 6 * 9 and print the result
y = 6 * 9
print(y)
"""
Explanation: getting-started-with-python
https://campus.datacamp.com/courses/kaggle-python-tutorial-on-machine-learning/getting-started-with-python?ex=1
1. How it works
https://campus.datacamp.com/courses/kaggle-python-tutorial-on-machine-learning/getting-started-with-python?ex=1
Welcome to our Kaggle Machine Learning Tutorial. In this tutorial, you will explore how to tackle Kaggle Titanic competition using Python and Machine Learning. In case you're new to Python, it's recommended that you first take our free Introduction to Python for Data Science Tutorial. Furthermore, while not required, familiarity with machine learning techniques is a plus so you can get the maximum out of this tutorial.
In the editor on the right, you should type Python code to solve the exercises. When you hit the 'Submit Answer' button, every line of code is interpreted and executed by Python and you get a message whether or not your code was correct. The output of your Python code is shown in the console in the lower right corner. Python makes use of the # sign to add comments; these lines are not run as Python code, so they will not influence your result.
You can also execute Python commands straight in the console. This is a good way to experiment with Python code, as your submission is not checked for correctness.
Instructions
In the editor to the right, you see some Python code and annotations. This is what a typical exercise will look like.
To complete the exercise and see how the interactive environment works add the code to compute y and hit the Submit Answer button. Don't forget to print the result.
End of explanation
"""
# Import the Pandas library
import pandas as pd
kaggle_path = "http://s3.amazonaws.com/assets.datacamp.com/course/Kaggle/"
# Load the train and test datasets to create two DataFrames
train_url = kaggle_path + "train.csv"
train = pd.read_csv(train_url)
test_url = kaggle_path + "test.csv"
test = pd.read_csv(test_url)
#Print the `head` of the train and test dataframes
print(train.head())
print(test.head())
"""
Explanation: 2. Get the Data with Pandas
https://campus.datacamp.com/courses/kaggle-python-tutorial-on-machine-learning/getting-started-with-python?ex=2
When the Titanic sank, 1502 of the 2224 passengers and crew were killed. One of the main reasons for this high level of casualties was the lack of lifeboats on this self-proclaimed "unsinkable" ship.
Those that have seen the movie know that some individuals were more likely to survive the sinking (lucky Rose) than others (poor Jack). In this course, you will learn how to apply machine learning techniques to predict a passenger's chance of surviving using Python.
Let's start with loading in the training and testing set into your Python environment. You will use the training set to build your model, and the test set to validate it. The data is stored on the web as csv files; their URLs are already available as character strings in the sample code. You can load this data with the read_csv() method from the Pandas library.
Instructions
First, import the Pandas library as pd.
Load the test data similarly to how the train data is loaded.
Inspect the first couple rows of the loaded dataframes using the .head() method with the code provided.
End of explanation
"""
train.describe()
test.describe()
train.shape
test.shape
"""
Explanation: 3. Understanding your data
https://campus.datacamp.com/courses/kaggle-python-tutorial-on-machine-learning/getting-started-with-python?ex=3
Before starting with the actual analysis, it's important to understand the structure of your data. Both test and train are DataFrame objects, the way pandas represent datasets. You can easily explore a DataFrame using the .describe() method. .describe() summarizes the columns/features of the DataFrame, including the count of observations, mean, max and so on. Another useful trick is to look at the dimensions of the DataFrame. This is done by requesting the .shape attribute of your DataFrame object. (ex. your_data.shape)
The training and test set are already available in the workspace, as train and test. Apply .describe() method and print the .shape attribute of the training set. Which of the following statements is correct?
Possible Answers
The training set has 891 observations and 12 variables, count for Age is 714.
The training set has 418 observations and 11 variables, count for Age is 891.
The testing set has 891 observations and 11 variables, count for Age is 891.
The testing set has 418 observations and 12 variables, count for Age is 714.
End of explanation
"""
# absolute numbers
train['Survived'].value_counts()
# percentages
train['Survived'].value_counts(normalize=True)
train['Survived'][train['Sex']=='male'].value_counts()
train['Survived'][train['Sex'] =='female'].value_counts()
# Passengers that survived vs passengers that passed away
print(train['Survived'].value_counts())
# As proportions
print(train['Survived'].value_counts(normalize=True))
# Males that survived vs males that passed away
print(train['Survived'][train['Sex']=='male'].value_counts())
# Females that survived vs Females that passed away
print(train['Survived'][train['Sex']=='female'].value_counts())
# Normalized male survival
print(train['Survived'][train['Sex']=='male'].value_counts(normalize=True))
# Normalized female survival
print(train['Survived'][train['Sex']=='female'].value_counts(normalize=True))
"""
Explanation: 4. Rose vs Jack, or Female vs Male
https://campus.datacamp.com/courses/kaggle-python-tutorial-on-machine-learning/getting-started-with-python?ex=4
How many people in your training set survived the disaster with the Titanic? To see this, you can use the value_counts() method in combination with standard bracket notation to select a single column of a DataFrame:
# absolute numbers
train["Survived"].value_counts()
# percentages
train["Survived"].value_counts(normalize = True)
If you run these commands in the console, you'll see that 549 individuals died (62%) and 342 survived (38%). A simple way to predict heuristically could be: "majority wins". This would mean that you will predict every unseen observation to not survive.
To dive in a little deeper we can perform similar counts and percentage calculations on subsets of the Survived column. For example, maybe gender could play a role as well? You can explore this using the .value_counts() method for a two-way comparison on the number of males and females that survived, with this syntax:
train["Survived"][train["Sex"] == 'male'].value_counts()
train["Survived"][train["Sex"] == 'female'].value_counts()
To get proportions, you can again pass in the argument normalize = True to the .value_counts() method.
Instructions
Calculate and print the survival rates in absolute numbers using values_counts() method.
Calculate and print the survival rates as proportions by setting the normalize argument to True.
Repeat the same calculations but on subsets of survivals based on Sex
End of explanation
"""
# Create the column Child and initialize it to NaN
train["Child"] = float('NaN')
# Assign 1 to passengers under 18, 0 to those 18 or older. Print the new column.
# train['Child'][train['Age'] >= 18] = 0
# train['Child'][train['Age'] < 18] = 1
train.loc[train['Age'] >= 18, 'Child'] = 0
train.loc[train['Age'] < 18, 'Child'] = 1
print(train['Child'])
# Print normalized Survival Rates for passengers under 18
print(train["Survived"][train["Child"] == 1].value_counts(normalize = True))
# Print normalized Survival Rates for passengers 18 or older
print(train["Survived"][train["Child"] == 0].value_counts(normalize = True))
"""
Explanation: 5. Does age play a role?
https://campus.datacamp.com/courses/kaggle-python-tutorial-on-machine-learning/getting-started-with-python?ex=5
Another variable that could influence survival is age; since it's probable that children were saved first. You can test this by creating a new column with a categorical variable Child. Child will take the value 1 in cases where age is less than 18, and a value of 0 in cases where age is greater than or equal to 18.
To add this new variable you need to do two things (i) create a new column, and (ii) provide the values for each observation (i.e., row) based on the age of the passenger.
Adding a new column with Pandas in Python is easy and can be done via the following syntax:
your_data["new_var"] = 0
This code would create a new column in the train DataFrame titled new_var with 0 for each observation.
To set the values based on the age of the passenger, you make use of a boolean test inside the square bracket operator. With the []-operator you create a subset of rows and assign a value to a certain variable of that subset of observations. For example,
train["new_var"][train["Fare"] > 10] = 1
would give a value of 1 to the variable new_var for the subset of passengers whose fares greater than 10. Remember that new_var has a value of 0 for all other values (including missing values).
A new column called Child in the train data frame has been created for you that takes the value NaN for all observations.
Instructions
Set the values of Child to 1 if the passenger's age is less than 18 years.
Then assign the value 0 in the new Child column to observations where the passenger's age is greater than or equal to 18 years.
Compare the normalized survival rates for those who are <18 and those who are older. Use code similar to what you had in the previous exercise.
End of explanation
"""
# Create a copy of test: test_one
test_one = test
# Initialize a Survived column to 0
test_one['Survived'] = 0
# Set Survived to 1 if Sex equals "female" and print the `Survived` column from `test_one`
# test_one['Survived'][test_one['Sex'] == 'female'] = 1
test_one.loc[test_one['Sex'] == 'female', 'Survived'] = 1
print(test_one['Survived'])
"""
Explanation: 6. First Prediction
https://campus.datacamp.com/courses/kaggle-python-tutorial-on-machine-learning/getting-started-with-python?ex=6
In one of the previous exercises you discovered that in your training set, females had over a 50% chance of surviving and males had less than a 50% chance of surviving. Hence, you could use this information for your first prediction: all females in the test set survive and all males in the test set die.
You use your test set for validating your predictions. You might have seen that contrary to the training set, the test set has no Survived column. You add such a column using your predicted values. Next, when uploading your results, Kaggle will use this variable (= your predictions) to score your performance.
Instructions
Create a variable test_one, identical to dataset test
Add an additional column, Survived, that you initialize to zero.
Use vector subsetting like in the previous exercise to set the value of Survived to 1 for observations whose Sex equals "female".
Print the Survived column of predictions from the test_one dataset.
End of explanation
"""
|
philippgrafendorfe/stackedautoencoders | ROBO_SAE_Comments.ipynb | mit | IPython.display.Image("images/robo1_nn.png")
"""
Explanation: Title of Database: Wall-Following navigation task with mobile robot SCITOS-G5
The data were collected as the SCITOS G5 navigated through the room following the wall in a clockwise
direction, for 4 rounds. To navigate, the robot uses 24 ultrasound sensors arranged circularly around its "waist".
The numbering of the ultrasound sensors starts at the front of the robot and increases in clockwise direction.
Import and basic data inspection
The Move_Forward and Sharp-Right-Turn classes together account for nearly 80% of all observed samples. Because of this imbalance, the accuracy may still be high (around 75%) even if most of the features are eliminated.
Train Neural Net
The dimensions of the hidden layers are set arbitrarily, but some runs have shown that 30 is a good number. The input_dim variable is set to 24 because there are initially 24 features. The aim is to build the best possible neural net.
Optimizer
RMSprop is a mini-batch gradient descent algorithm which divides the gradient by a running average of its recent magnitude. More information: http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf
The weights are initialized by a normal distribution with mean 0 and standard deviation of 0.05.
End of explanation
"""
# Plot normalized confusion matrix
plt.figure(figsize=(20,10))
plot_confusion_matrix(cnf_matrix, classes=class_names, normalize=True,
title='Normalized confusion matrix')
"""
Explanation: The classifier yields a test accuracy of 0.945054945055.
End of explanation
"""
IPython.display.Image("images/2018-01-25 18_44_01-PubMed Central, Table 2_ Sensors (Basel). 2017 Mar; 17(3)_ 549. Published online.png")
"""
Explanation: Comparison
The following data is from a paper published in March 2017. You can find that here: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5375835/
End of explanation
"""
|
TimofeyBalashov/MagnetizationTunneling | Using the code.ipynb | mit | %pylab inline
from ipywidgets import interact
from pyatoms.J.SingleAtom import SingleAtom
import mpmath as mp
"""
Explanation: Calculation of atomic spectra in crystal field
This notebook introduces the code for calculating the energy spectrum of single magnetic atoms in environments of various symmetry.
Loading the libraries
End of explanation
"""
Ho = SingleAtom(8, 3)
Ho.CF.setSymmetry("C3v")
"""
Explanation: Defining the atom
We take a Holmium atom in 3-fold symmetry adsorption sites (e.g. Pt(111)).
The first argument is the value of the total angular momentum, the second one the orbital momentum of the magnetization electrons.
End of explanation
"""
Ho.CF.setCoefficient(2, 0, -140e-3)
Ho.CF.setCoefficient(4, 0, 1.06e-3)
"""
Explanation: The crystal field operator is a sum of Stevens' operators corresponding to the desired symmetry (Stevens, 1952).
We are going to use crystal field parameters published by Donati et al. (2014). The energy units are millielectronvolts by default.
End of explanation
"""
plot(Ho.Js, Ho.Es, 'ro')
ylabel("Energy (meV)")
xlabel(r"$\left<\hat{J}_z\right>$")
xlim(-8.5, 8.5)
"""
Explanation: We plot the energy spectrum as function of the expectation value of the $J_z$ operator ($\left<\hat{J}_z\right>$). As we have only defined uniaxial anisotropy so far, all the states are $J_z$ eigenstates and form dimers (except for $J_z=0$ state).
End of explanation
"""
Ho.CF.setCoefficient(4, 3, 1e-3)
plot(Ho.Js, Ho.Es, 'ro')
ylabel("Energy (meV)")
xlabel(r"$\left<\hat{J}_z\right>$")
xlim(-8.5, 8.5)
"""
Explanation: Adding a transversal term leads to the creation of singlets at every third state (C3v).
End of explanation
"""
Ho.ZT.setBz(1e-3)
plot(Ho.Js, Ho.Es, 'ro')
ylabel("Energy (meV)")
xlabel(r"$\left<\hat{J}_z\right>$")
xlim(-8.5, 8.5)
"""
Explanation: Magnetic Field
Application of magnetic field splits the singlets again.
End of explanation
"""
@interact(theta=(0, pi/2, pi/12))
def _plt(theta=0):
Ho.ZT.setBtheta(theta)
plot(Ho.Js, Ho.Es, 'ro')
ylabel("Energy (meV)")
xlabel(r"$\left<\hat{J}_z\right>$")
xlim(-8.5, 8.5)
"""
Explanation: Magnetic field can be applied in any direction. One can observe, e.g., the effect of rotating the field vector from the z axis to the x axis.
End of explanation
"""
Ho.ZT.setBxyz(0,0,0)
colors = log(abs(asarray(Ho.transitions(Ho.Jp)[0][0,:].tolist(), dtype=float)))
scatter(Ho.Js, Ho.Es, s=100, c=colors, cmap='Reds', vmin=-20, vmax=0, edgecolors="k")
ylabel("Energy (meV)")
xlabel(r"$\left<\hat{J}_z\right>$")
xlim(-8.5, 8.5)
title("Matrix element of $J_+$ operator\nbetween the ground state and the rest (on log scale)")
"""
Explanation: Transitions
We can calculate the matrix elements of an arbitrary operator between the eigenstates of the Hamiltonian. For example, we could see what states are coupled together by the $\hat{J}_+$ operator.
End of explanation
"""
Ho.ZT.setBxyz(0,0,0)
colors = log(abs(asarray(Ho.J_transitions()[0,:].tolist(), dtype=float)))
scatter(Ho.Js, Ho.Es, s=100, c=colors, cmap='Reds', vmin=-20, vmax=1, edgecolors="k")
ylabel("Energy (meV)")
xlabel(r"$\left<\hat{J}_z\right>$")
xlim(-8.5, 8.5)
title("Transition probability from the ground state to the other states (on log scale)")
"""
Explanation: To calculate transition probabilities between states through an operator of the form $\vec{\sigma}\vec{J}$, we use the function J_transitions, that uses the formula $$p_{i\to f} = \frac1{J(J+1)}\left[|\left<f\middle|J_z\middle|i\right>|^2 + \frac12\left(|\left<f\middle|J_+\middle|i\right>|^2 + |\left<f\middle|J_-\middle|i\right>|^2\right)\right]$$
(see Hirjibehedin et al., 2007)
End of explanation
"""
Ho.CF.setCoefficient(2,0,-0.239068)
Ho.CF.setCoefficient(4,0,8.59023e-5)
Ho.CF.setCoefficient(4,3,2.93446e-5)
Ho.CF.setCoefficient(6,0,1.86782e-7)
Ho.CF.setCoefficient(6,3,-1.96786e-6)
Ho.CF.setCoefficient(6,6,6.30483e-7)
plot(Ho.Js, Ho.Es, 'ro')
ylabel("Energy (meV)")
xlabel(r"$\left<\hat{J}_z\right>$")
xlim(-8.5, 8.5);
"""
Explanation: Different environment
Let us now consider a different set of crystal field parameters (Miyamachi et al. 2012). Here the ground states do not mix, so we get a doublet.
End of explanation
"""
colors = log(abs(asarray(Ho.J_transitions()[0,:].tolist(), dtype=float)))
scatter(Ho.Js, Ho.Es, s=100, c=colors, cmap='Reds', vmin=-20, vmax=1, edgecolors="k")
ylabel("Energy (meV)")
xlabel(r"$\left<\hat{J}_z\right>$")
xlim(-8.5, 8.5)
title("Transition probability from the ground state to the other states (on log scale)")
print("Transition probability within the ground doublet: {}".format(Ho.J_transitions()[0,1]))
print("Transition probability to the first excited state: {}".format(Ho.J_transitions()[0,2]))
"""
Explanation: Note that the probability of a transition to the other side of the parabola is very low.
End of explanation
"""
print("Transition probability within the ground doublet: {}".format(mp.nstr(Ho.J_transitions()[0,1], 3)))
with mp.workdps(200):
Ho = SingleAtom(8, 3)
Ho.CF.setSymmetry("C3v")
Ho.CF.setCoefficient(2, 0, -0.239068)
Ho.CF.setCoefficient(4, 0, 8.59023e-5)
Ho.CF.setCoefficient(4, 3, 2.93446e-5)
Ho.CF.setCoefficient(6, 0, 1.86782e-7)
Ho.CF.setCoefficient(6, 3, -1.96786e-6)
Ho.CF.setCoefficient(6, 6, 6.30483e-7)
print("Transition probability within the ground doublet: {}".format(mp.nstr(Ho.J_transitions()[0,1], 3)))
"""
Explanation: The actual probability of transitions within the doublet should be zero. We can increase the precision to see the probability decrease.
End of explanation
"""
Bfield = logspace(-8, -1)
probability = []
for B in Bfield:
Ho.ZT.setBz(B)
probability.append(Ho.J_transitions()[0,1])
loglog(Bfield, probability)
ylabel("B_z (T)")
xlabel("Transition probability")
xlim(-8.5, 8.5);
"""
Explanation: The transition probability within the ground doublet will increase with magnetic field.
End of explanation
"""
|
wheeler-microfluidics/teensy-minimal-rpc | teensy_minimal_rpc/notebooks/dma-examples/Example - Multi-channel ADC using DMA.ipynb | gpl-3.0 | from arduino_rpc.protobuf import resolve_field_values
from teensy_minimal_rpc import SerialProxy
import teensy_minimal_rpc.DMA as DMA
import teensy_minimal_rpc.ADC as ADC
# Disconnect from existing proxy (if available)
try:
del proxy
except NameError:
pass
proxy = SerialProxy()
"""
Explanation: Overview
Use linked DMA channels to perform a "scan" across multiple ADC input channels.
See diagram below.
Channel configuration
DMA channel $i$ copies consecutive SC1A configurations to the ADC SC1A
register. Each SC1A configuration selects an analog input channel.
Channel $i$ is initially triggered by software trigger
(i.e., DMA_SSRT = i), starting the ADC conversion for the first ADC
channel configuration.
Loading of subsequent ADC channel configurations is triggered through
minor loop linking of DMA channel $ii$ to DMA channel $i$.
DMA channel $ii$ is triggered by ADC conversion complete (i.e., COCO), and
copies the output result of the ADC to consecutive locations in the result
array.
Channel $ii$ has minor loop link set to channel $i$, which triggers the
loading of the next channel SC1A configuration to be loaded immediately
after the current ADC result has been copied to the result array.
After $n$ triggers of channel $i$, the result array contains $n$ ADC results,
one result per channel in the SC1A table.
<img src="multi-channel_ADC_using_DMA.jpg" style="max-height: 500px" />
Device
Connect to device
End of explanation
"""
import arduino_helpers.hardware.teensy as teensy
# Set ADC parameters
proxy.setAveraging(16, teensy.ADC_0)
proxy.setResolution(16, teensy.ADC_0)
proxy.setConversionSpeed(teensy.ADC_MED_SPEED, teensy.ADC_0)
proxy.setSamplingSpeed(teensy.ADC_MED_SPEED, teensy.ADC_0)
proxy.update_adc_registers(
teensy.ADC_0,
ADC.Registers(CFG2=ADC.R_CFG2(MUXSEL=ADC.R_CFG2.B)))
"""
Explanation: Configure ADC sample rate, etc.
End of explanation
"""
DMAMUX_SOURCE_ADC0 = 40 # from `kinetis.h`
DMAMUX_SOURCE_ADC1 = 41 # from `kinetis.h`
# DMAMUX0_CFGi[SOURCE] = DMAMUX_SOURCE_ADC0 // Route ADC0 as DMA channel source.
# DMAMUX0_CFGi[TRIG] = 0 // Disable periodic trigger.
# DMAMUX0_CFGi[ENBL] = 1 // Enable the DMAMUX configuration for channel.
proxy.update_dma_mux_chcfg(0, DMA.MUX_CHCFG(SOURCE=DMAMUX_SOURCE_ADC0,
TRIG=False,
ENBL=True))
# DMA request input signals and this enable request flag
# must be asserted before a channel’s hardware service
# request is accepted (21.3.3/394).
# DMA_SERQ = i
proxy.update_dma_registers(DMA.Registers(SERQ=0))
proxy.enableDMA(teensy.ADC_0)
proxy.DMA_registers().loc['']
dmamux0 = DMA.MUX_CHCFG.FromString(proxy.read_dma_mux_chcfg(0).tostring())
resolve_field_values(dmamux0)[['full_name', 'value']]
adc0 = ADC.Registers.FromString(proxy.read_adc_registers(teensy.ADC_0).tostring())
resolve_field_values(adc0)[['full_name', 'value']].loc[['CFG2', 'SC1A', 'SC3']]
"""
Explanation: Pseudo-code to set DMA channel $i$ to be triggered by ADC0 conversion complete.
DMAMUX0_CFGi[SOURCE] = DMAMUX_SOURCE_ADC0 // Route ADC0 as DMA channel source.
DMAMUX0_CFGi[TRIG] = 0 // Disable periodic trigger.
DMAMUX0_CFGi[ENBL] = 1 // Enable the DMAMUX configuration for channel.
DMA_ERQ[i] = 1 // DMA request input signals and this enable request flag
// must be asserted before a channel’s hardware service
// request is accepted (21.3.3/394).
DMA_SERQ = i // Can use memory mapped convenience register to set instead.
Set DMA mux source for channel 0 to ADC0
End of explanation
"""
import re
import numpy as np
import pandas as pd
import arduino_helpers.hardware.teensy.adc as adc
sc1a_pins = pd.Series(dict([(v, adc.CHANNEL_TO_SC1A_ADC0[getattr(teensy, v)]) for v in dir(teensy) if re.search(r'^A\d+', v)]))
channel_sc1as = np.array(sc1a_pins[['A0', 'A1', 'A0', 'A3', 'A0']].tolist(), dtype='uint32')
"""
Explanation: Analog channel list
List of channels to sample.
Map channels from Teensy references (e.g., A0, A1, etc.) to the Kinetis analog
pin numbers using the adc.CHANNEL_TO_SC1A_ADC0 mapping.
End of explanation
"""
proxy.free_all()
N = np.dtype('uint16').itemsize * channel_sc1as.size
# Allocate source array
adc_result_addr = proxy.mem_alloc(N)
# Fill result array with zeros
proxy.mem_fill_uint8(adc_result_addr, 0, N)
# Copy channel SC1A configurations to device memory
adc_sda1s_addr = proxy.mem_aligned_alloc_and_set(4, channel_sc1as.view('uint8'))
print 'ADC results:', proxy.mem_cpy_device_to_host(adc_result_addr, N).view('uint16')
print 'Analog pins:', proxy.mem_cpy_device_to_host(adc_sda1s_addr, len(channel_sc1as) *
channel_sc1as.dtype.itemsize).view('uint32')
"""
Explanation: Allocate and initialize device arrays
SC1A register configuration for each ADC channel in the channel_sc1as list.
Copy channel_sc1as list to device.
ADC result array
Initialize to zero.
End of explanation
"""
ADC0_SC1A = 0x4003B000 # ADC status and control registers 1
sda1_tcd_msg = DMA.TCD(CITER_ELINKNO=DMA.R_TCD_ITER_ELINKNO(ELINK=False, ITER=channel_sc1as.size),
BITER_ELINKNO=DMA.R_TCD_ITER_ELINKNO(ELINK=False, ITER=channel_sc1as.size),
ATTR=DMA.R_TCD_ATTR(SSIZE=DMA.R_TCD_ATTR._32_BIT,
DSIZE=DMA.R_TCD_ATTR._32_BIT),
NBYTES_MLNO=4,
SADDR=int(adc_sda1s_addr),
SOFF=4,
SLAST=-channel_sc1as.size * 4,
DADDR=int(ADC0_SC1A),
DOFF=0,
DLASTSGA=0,
CSR=DMA.R_TCD_CSR(START=0, DONE=False))
proxy.update_dma_TCD(1, sda1_tcd_msg)
"""
Explanation: Configure DMA channel $i$
End of explanation
"""
ADC0_RA = 0x4003B010 # ADC data result register
ADC0_RB = 0x4003B014 # ADC data result register
tcd_msg = DMA.TCD(CITER_ELINKYES=DMA.R_TCD_ITER_ELINKYES(ELINK=True, LINKCH=1, ITER=channel_sc1as.size),
BITER_ELINKYES=DMA.R_TCD_ITER_ELINKYES(ELINK=True, LINKCH=1, ITER=channel_sc1as.size),
ATTR=DMA.R_TCD_ATTR(SSIZE=DMA.R_TCD_ATTR._16_BIT,
DSIZE=DMA.R_TCD_ATTR._16_BIT),
NBYTES_MLNO=2,
SADDR=ADC0_RA,
SOFF=0,
SLAST=0,
DADDR=int(adc_result_addr),
DOFF=2,
DLASTSGA=-channel_sc1as.size * 2,
CSR=DMA.R_TCD_CSR(START=0, DONE=False))
proxy.update_dma_TCD(0, tcd_msg)
"""
Explanation: Configure DMA channel $ii$
End of explanation
"""
# Clear output array to zero.
proxy.mem_fill_uint8(adc_result_addr, 0, N)
# Software trigger channel $i$ to copy *first* SC1A configuration, which
# starts ADC conversion for the first channel.
#
# Conversions for subsequent ADC channels are triggered through minor-loop
# linking from DMA channel $ii$ to DMA channel $i$ (*not* through explicit
# software trigger).
proxy.update_dma_registers(DMA.Registers(SSRT=1))
# Display converted ADC values (one value per channel in `channel_sc1as` list).
print 'ADC results:', proxy.mem_cpy_device_to_host(adc_result_addr, N).view('uint16')
"""
Explanation: Trigger sample scan across selected ADC channels
End of explanation
"""
|
empet/Math | Joukowski-airfoil.ipynb | bsd-3-clause | import numpy as np
import numpy.ma as ma
import matplotlib.pyplot as plt
%matplotlib inline
def Juc(z, lam):#Joukowski transformation
return z+(lam**2)/z
def circle(C, R):
t=np.linspace(0,2*np.pi, 200)
return C+R*np.exp(1j*t)
def deg2radians(deg):
return deg*np.pi/180
plt.rcParams['figure.figsize'] = 8, 8
def streamlines(alpha=10, beta=5, V_inf=1, R=1, ratio=1.2):
#ratio=R/lam
alpha=deg2radians(alpha)# angle of attack
    beta=deg2radians(beta) # -beta is the argument of the complex number (Joukowski parameter - circle center)
if ratio<=1: #R/lam must be >1
raise ValueError('R/lambda must be >1')
lam=R/ratio#lam is the parameter of the Joukowski transformation
center_c=lam-R*np.exp(-1j*beta)# Center of the circle
x=np.arange(-3,3, 0.1)
y=np.arange(-3,3, 0.1)
x,y=np.meshgrid(x,y)
z=x+1j*y
z=ma.masked_where(np.absolute(z-center_c)<=R, z)
Z=z-center_c
Gamma=-4*np.pi*V_inf*R*np.sin(beta+alpha)#circulation
# np.log(Z) cannot be calculated correctly due to a numpy bug np.log(MaskedArray);
#https://github.com/numpy/numpy/issues/8516
# we perform an elementwise computation
U=np.zeros(Z.shape, dtype=np.complex)
with np.errstate(divide='ignore'):#avoid warning when evaluates np.log(0+1jy).
#In this case the arg is arctan(y/0)+cst
for m in range(Z.shape[0]):
for n in range(Z.shape[1]):
#U[m,n]=Gamma*np.log(Z[m,n]/R)/(2*np.pi)#
U[m,n]=Gamma*np.log((Z[m,n]*np.exp(-1j*alpha))/R)/(2*np.pi)
c_flow=V_inf*Z*np.exp(-1j*alpha) + (V_inf*np.exp(1j*alpha)*R**2)/Z - 1j*U #the complex flow
J=Juc(z, lam)#Joukovski transformation of the z-plane minus the disc D(center_c, R)
Circle=circle(center_c, R)
Airfoil=Juc(Circle, lam)# airfoil
return J, c_flow.imag, Airfoil
J, stream_func, Airfoil=streamlines()
levels=np.arange(-2.8, 3.8, 0.2).tolist()
"""
Explanation: The flow past a Joukowski airfoil
The generation of streamlines of the flow past a Joukowski airfoil follows this chapter from the Internet Book of Fluid Dynamics.
Visualization of streamlines is based on the property of the complex flow
with respect to a conformal transformation:
If w is the complex plane of the airfoil, z is the complex plane of the circle as the section in a circular cylinder,
and $w=w(z)$ is a conformal tranformatiom from the outside of the disc mapped to the airfoil,
then the complex flow, $F$, past the airfoil is related to the complex flow, $f$, past the circle(cylinder) by:
$F(w)=f(z(w))$ or equivalently $F(w(z))=f(z)$.
The streamlines of each flow are defined as contour plots of the imaginary part of the complex flow.
In our case, due to the latter relation, we plot the contours of the stream function, $Imag{(f)}$, over $w(z)$, where $w(z)$ is the Joukowski transformation, that maps a suitable circle onto the airfoil.
End of explanation
"""
fig=plt.figure()
ax=fig.add_subplot(111)
cp=ax.contour(J.real, J.imag, stream_func,levels=levels, colors='blue', linewidths=1,
linestyles='solid')# this means that the flow is evaluated at Juc(z) since c_flow(Z)=C_flow(csi(Z))
ax.plot(Airfoil.real, Airfoil.imag)
ax.set_aspect('equal')
"""
Explanation: Matplotlib plot of the streamlines:
End of explanation
"""
import plotly.plotly as py  # in current Plotly releases this module lives in chart_studio.plotly
py.sign_in('empet', 'my_api_key')
conts=cp.allsegs # get the segments of line computed via plt.contour
xline=[]
yline=[]
for cont in conts:
    if len(cont) != 0:
        for arr in cont:
            xline += arr[:, 0].tolist()
            yline += arr[:, 1].tolist()
            xline.append(None)
            yline.append(None)
flowlines=dict(x=xline,
y=yline,
type='scatter',
mode='lines',
line=dict(color='blue', width=1)
)
#define a filled path (a shape) representing the airfoil
shapes=[]
path='M'
for pt in Airfoil:
    path += str(pt.real) + ', ' + str(pt.imag) + ' L '
shapes.append(dict(line=dict(color='blue',
width=1.5
),
path= path,
type='path',
fillcolor='#edf4fe'
)
)
axis=dict(showline=True, zeroline=False, ticklen=4, mirror=True, showgrid=False)
layout=dict(title="The streamlines for the flow past a Joukowski airfoil<br>Angle of attack, alpha=10 degrees",
font=dict(family='Balto'),
showlegend=False,
autosize=False,
width=600,
height=600,
xaxis=dict(axis, **{'range': [ma.min(J.real), ma.max(J.real)]}),
yaxis=dict(axis, **{'range':[ma.min(J.imag), ma.max(J.imag)]}),
shapes=shapes,
plot_bgcolor='#c1e3ff',
hovermode='closest',
)
fig=dict(data=[flowlines],layout=layout)
py.iplot(fig, filename='Joucstreamlns')
from IPython.core.display import HTML
def css_styling():
    styles = open("./custom.css", "r").read()
    return HTML(styles)
css_styling()
"""
Explanation: Plotly plot of the streamlines:
End of explanation
"""
GoogleCloudPlatform/vertex-ai-samples | notebooks/community/vertex_endpoints/tf_hub_obj_detection/deploy_tfhub_object_detection_on_vertex_endpoints.ipynb | apache-2.0

import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
    USER_FLAG = "--user"
! pip install {USER_FLAG} --upgrade google-cloud-aiplatform
"""
Explanation: <table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/vertex_endpoints/tf_hub_obj_detection/deploy_tfhub_object_detection_on_vertex_endpoints.ipynb"">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/vertex_endpoints/tf_hub_obj_detection/deploy_tfhub_object_detection_on_vertex_endpoints.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/vertex_endpoints/tf_hub_obj_detection/deploy_tfhub_object_detection_on_vertex_endpoints.ipynb"
<img src="https://cloud.google.com/images/products/ai/ai-solutions-icon.svg" alt="Vertex AI Workbench notebook"> Open in Vertex AI Workbench
</a>
</td>
</table>
Deploying a TensorFlow Hub object detection model using Vertex AI Endpoints
Overview
This tutorial demonstrates how to take a TensorFlow Hub object detection model, add a preprocessing layer and deploy it to a Vertex AI endpoint for online prediction.
Because the object detection model accepts tensors as an input, we will add a preprocessing layer that accepts jpeg strings and decodes them. This makes it easier for clients to call the endpoint without having to implement their own TensorFlow logic.
Model
The model used for this tutorial is the CenterNet HourGlass104 Keypoints 512x512 model from the TensorFlow Hub open source model repository.
Objective
The steps performed include:
- Download an object detection model from TensorFlow Hub.
- Create a preprocessing layer using @tf.function.
- Upload the model to Vertex AI Models.
- Create a Vertex AI Endpoint.
- Call the endpoint with both the Python Vertex AI SDK and through command line using CURL.
- Undeploy the endpoint and delete the model.
Costs
This tutorial uses billable components of Google Cloud:
- Vertex AI
- Cloud Storage
Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage.
Installation
Install the latest version of Vertex SDK for Python
End of explanation
"""
!pip install -U "tensorflow>=2.7"
"""
Explanation: Install TensorFlow.
End of explanation
"""
import os
if not os.getenv("IS_TESTING"):
    # Automatically restart kernel after installs
    import IPython

    app = IPython.Application.instance()
    app.kernel.do_shutdown(True)
"""
Explanation: Restart the kernel
Once you've installed everything, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
import os
PROJECT_ID = ""
if not os.getenv("IS_TESTING"):
    # Get your Google Cloud project ID from gcloud
    shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
    PROJECT_ID = shell_output[0]
    print("Project ID: ", PROJECT_ID)
"""
Explanation: Before you begin
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Set your project ID
If you don't know your project ID, you might be able to get your project ID using gcloud.
End of explanation
"""
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "" # @param {type:"string"}
"""
Explanation: Otherwise, set your project ID here.
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
End of explanation
"""
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
    if "google.colab" in sys.modules:
        from google.colab import auth as google_auth

        google_auth.authenticate_user()
    # If you are running this notebook locally, replace the string below with the
    # path to your service account key and run this cell to authenticate your GCP
    # account.
    elif not os.getenv("IS_TESTING"):
        %env GOOGLE_APPLICATION_CREDENTIALS ''
"""
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key
page.
Click Create service account.
In the Service account name field, enter a name, and
click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI"
into the filter box, and select
Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
local environment.
Enter the path to your service account key as the
GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
"""
BUCKET_NAME = "" # @param {type:"string"}
REGION = "us-central1" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
print(BUCKET_NAME)
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
You first upload the model files to a Cloud Storage bucket. Using this model artifact, you can then
create Vertex AI model and endpoint resources in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
End of explanation
"""
! gsutil mb -p $PROJECT_ID -l $REGION $BUCKET_NAME
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil ls -al $BUCKET_NAME
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
# Download and extract model
!wget https://tfhub.dev/tensorflow/centernet/hourglass_512x512_kpts/1?tf-hub-format=compressed
!tar xvzf 1?tf-hub-format=compressed
!mkdir obj_detect_model
!mv ./saved_model.pb obj_detect_model/
!mv ./variables obj_detect_model/
"""
Explanation: Download and extract the model
There are various object detection models in TensorFlow Hub. We will be using the CenterNet HourGlass104 Keypoints 512x512.
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from PIL import Image
from six import BytesIO
# Clone the tensorflow models repository
!git clone --depth 1 https://github.com/tensorflow/models
"""
Explanation: Visualization tools
To visualize the images with the proper detected boxes, keypoints and segmentation, we will use the TensorFlow Object Detection API. To install it we will clone the repo.
End of explanation
"""
%%bash
sudo apt install -y protobuf-compiler
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
cp object_detection/packages/tf2/setup.py .
pip install .
"""
Explanation: Installing the object detection API
End of explanation
"""
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as viz_utils
%matplotlib inline
"""
Explanation: Now we can import the dependencies we will need later
End of explanation
"""
PATH_TO_LABELS = "./models/research/object_detection/data/mscoco_label_map.pbtxt"
category_index = label_map_util.create_category_index_from_labelmap(
PATH_TO_LABELS, use_display_name=True
)
print(category_index[5])
"""
Explanation: Load label map data (for plotting).
Label maps map index numbers to category names, so that when our convolutional network predicts 5, we know that this corresponds to airplane. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.
For simplicity, we load the label map from the same repository from which we cloned the Object Detection API code.
End of explanation
"""
model = tf.saved_model.load("obj_detect_model/")
"""
Explanation: Load the model
Here we will load the downloaded model into memory.
End of explanation
"""
image_path = "models/research/object_detection/test_images/image2.jpg"
def load_image_into_numpy_array(path):
    image_data = tf.io.gfile.GFile(path, "rb").read()
    image = Image.open(BytesIO(image_data))
    (width, height) = image.size
    return np.array(image.getdata()).reshape((1, height, width, 3)).astype(np.uint8)
image_np = load_image_into_numpy_array(image_path)
plt.figure(figsize=(24, 32))
plt.imshow(image_np[0])
plt.show()
results = model(image_np)
result = {key: value.numpy() for key, value in results.items()}
"""
Explanation: Load an image and use the model for inference.
End of explanation
"""
COCO17_HUMAN_POSE_KEYPOINTS = [
(0, 1),
(0, 2),
(1, 3),
(2, 4),
(0, 5),
(0, 6),
(5, 7),
(7, 9),
(6, 8),
(8, 10),
(5, 6),
(5, 11),
(6, 12),
(11, 12),
(11, 13),
(13, 15),
(12, 14),
(14, 16),
]
label_id_offset = 0
image_np_with_detections = image_np.copy()
# Use keypoints if available in detections
keypoints, keypoint_scores = None, None
if "detection_keypoints" in result:
keypoints = result["detection_keypoints"][0]
keypoint_scores = result["detection_keypoint_scores"][0]
viz_utils.visualize_boxes_and_labels_on_image_array(
image_np_with_detections[0],
result["detection_boxes"][0],
(result["detection_classes"][0] + label_id_offset).astype(int),
result["detection_scores"][0],
category_index,
use_normalized_coordinates=True,
max_boxes_to_draw=200,
min_score_thresh=0.30,
agnostic_mode=False,
keypoints=keypoints,
keypoint_scores=keypoint_scores,
keypoint_edges=COCO17_HUMAN_POSE_KEYPOINTS,
)
plt.figure(figsize=(24, 32))
plt.imshow(image_np_with_detections[0])
plt.show()
"""
Explanation: Visualize results
End of explanation
"""
VERTEX_MODEL_PATH = "obj_detect_model_vertex/"
def _preprocess(bytes_inputs):
    decoded = tf.io.decode_jpeg(bytes_inputs, channels=3)
    resized = tf.image.resize(decoded, size=(512, 512))
    return tf.cast(resized, dtype=tf.uint8)

def _get_serve_image_fn(model):
    @tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
    def serve_image_fn(bytes_inputs):
        decoded_images = tf.map_fn(_preprocess, bytes_inputs, dtype=tf.uint8)
        return model(decoded_images)
    return serve_image_fn

signatures = {
    "serving_default": _get_serve_image_fn(model).get_concrete_function(
        tf.TensorSpec(shape=[None], dtype=tf.string)
    )
}
tf.saved_model.save(model, VERTEX_MODEL_PATH, signatures=signatures)
"""
Explanation: Create a preprocessing function for Vertex AI serving.
The model expects a numpy array as an input. This creates two problems for our endpoint:
* Vertex AI public endpoints have a maximum request size of 1.5 MB. Images are much larger than this.
* It would make it more difficult for clients based in languages other than Python to build a request.
These two limitations can be solved by building a preprocessing function and attaching it to our model.
We will create a preprocessing function that takes a jpeg encoded image, resizes it to the model's minimum required input and passes this preprocessed input to the model. We will then save the model with the preprocessing function which will be ready to be uploaded to our Vertex AI endpoint.
The image will be passed to our endpoint as a base64 encoded jpeg string.
End of explanation
"""
!saved_model_cli show --dir obj_detect_model --all
!saved_model_cli show --dir obj_detect_model_vertex --all
"""
Explanation: We will verify that the input was modified correctly by using the saved_model_cli command on both the original and the Vertex AI-prepared model.
The results for the serving_default signature should be as follows.
Original model:
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['input_tensor'] tensor_info:
dtype: DT_UINT8
shape: (1, -1, -1, 3)
name: serving_default_input_tensor:0
Vertex AI model:
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['bytes_inputs'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: serving_default_bytes_inputs:0
End of explanation
"""
vertex_model = tf.saved_model.load(VERTEX_MODEL_PATH)
import base64
def encode_image(image):
    with open(image, "rb") as image_file:
        encoded_string = base64.urlsafe_b64encode(image_file.read()).decode("utf-8")
    return encoded_string
results = vertex_model([_preprocess(tf.io.decode_base64(encode_image(image_path)))])
"""
Explanation: Let's test the preprocessing function by passing it a base64-encoded jpeg image.
End of explanation
"""
# different object detection models have additional results
# all of them are explained in the documentation
result = {key: value.numpy() for key, value in results.items()}
label_id_offset = 0
image_np_with_detections = image_np.copy()
# Use keypoints if available in detections
keypoints, keypoint_scores = None, None
if "detection_keypoints" in result:
keypoints = result["detection_keypoints"][0]
keypoint_scores = result["detection_keypoint_scores"][0]
viz_utils.visualize_boxes_and_labels_on_image_array(
image_np_with_detections[0],
result["detection_boxes"][0],
(result["detection_classes"][0] + label_id_offset).astype(int),
result["detection_scores"][0],
category_index,
use_normalized_coordinates=True,
max_boxes_to_draw=200,
min_score_thresh=0.30,
agnostic_mode=False,
keypoints=keypoints,
keypoint_scores=keypoint_scores,
keypoint_edges=COCO17_HUMAN_POSE_KEYPOINTS,
)
plt.figure(figsize=(24, 32))
plt.imshow(image_np_with_detections[0])
plt.show()
"""
Explanation: View the results
End of explanation
"""
!gsutil cp -r $VERTEX_MODEL_PATH $BUCKET_NAME/obj_detection_model_vertex
!gsutil ls $BUCKET_NAME
"""
Explanation: Create a Vertex AI endpoint
In this section we will upload the model to Google Cloud Storage and reference it inside Vertex AI for endpoint deployment.
End of explanation
"""
!gcloud ai models upload \
--region=us-central1 \
--project=$PROJECT_ID \
--display-name=object-detection \
--container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-5:latest \
--artifact-uri=$BUCKET_NAME/obj_detection_model_vertex
"""
Explanation: Create a model in Vertex AI
End of explanation
"""
!gcloud ai endpoints create \
--project=$PROJECT_ID \
--region=$REGION \
--display-name=object-detection-endpoint
"""
Explanation: Create endpoint
End of explanation
"""
%%bash -s "$REGION" "$PROJECT_ID" --out MODEL_ID
MODEL_ID=`gcloud ai models list --region=$1 --project=$2 | grep object-detection`
echo $MODEL_ID | cut -d' ' -f1 | tr -d '\n'
%%bash -s "$REGION" "$PROJECT_ID" --out ENDPOINT_ID
ENDPOINT_ID=`gcloud ai endpoints list --region=$1 --project=$2 | sed -n 2p`
echo $ENDPOINT_ID | cut -d' ' -f1 | tr -d '\n'
!gcloud ai endpoints deploy-model $ENDPOINT_ID \
--project=$PROJECT_ID \
--region=$REGION \
--model=$MODEL_ID \
--display-name=object-detection-endpoint \
--traffic-split=0=100
"""
Explanation: Retrieve MODEL_ID and ENDPOINT_ID
End of explanation
"""
import os
print(os.stat(image_path).st_size)
im = Image.open(image_path)
im.save("image2.jpg", quality=95)
print(os.stat("image2.jpg").st_size)
!echo {"\""instances"\"" : [{"\""bytes_inputs"\"" : {"\""b64"\"" : "\""$(base64 "image2.jpg")"\""}}]} > instances.json
!curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://us-central1-aiplatform.googleapis.com/v1/projects/$PROJECT_ID/locations/us-central1/endpoints/$ENDPOINT_ID:predict \
-d @instances.json > results.json
"""
Explanation: Write the request to a JSON file and call the endpoint using curl.
First we need to reduce the image's memory footprint. As of Feb. 2022, Vertex AI endpoints have a maximum request size of 1.5 MB. This limit keeps the containers behind the endpoints from crashing during heavy load.
End of explanation
"""
# Get the input key
serving_input = list(
vertex_model.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving input :", serving_input)
"""
Explanation: Make Predictions using the Vertex SDK
The Vertex SDK has convenient methods to call endpoints to make predictions.
First, we get the serving input from the model. This is what the endpoint expects as a key for the base64 encoded image.
End of explanation
"""
from google.cloud import aiplatform
aip_endpoint_name = (
f"projects/{PROJECT_ID}/locations/us-central1/endpoints/{ENDPOINT_ID}"
)
endpoint = aiplatform.Endpoint(aip_endpoint_name)
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value
# Endpoints will do the base64 decoding, so we change the function to encode the image a bit.
def encode_image_bytes(image_path):
    raw = tf.io.read_file(image_path)  # 'raw' avoids shadowing the builtin bytes
    return base64.b64encode(raw.numpy()).decode("utf-8")
instances_list = [{serving_input: {"b64": encode_image_bytes("image2.jpg")}}]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
results = endpoint.predict(instances=instances)
"""
Explanation: Load an endpoint object.
End of explanation
"""
# different object detection models have additional results
# all of them are explained in the documentation
prediction_results = results.predictions[0]
result = {key: np.array([value]) for key, value in prediction_results.items()}
label_id_offset = 0
image_np_with_detections = image_np.copy()
# Use keypoints if available in detections
keypoints, keypoint_scores = None, None
if "detection_keypoints" in result:
keypoints = result["detection_keypoints"][0]
keypoint_scores = result["detection_keypoint_scores"][0]
viz_utils.visualize_boxes_and_labels_on_image_array(
image_np_with_detections[0],
result["detection_boxes"][0],
(result["detection_classes"][0] + label_id_offset).astype(int),
result["detection_scores"][0],
category_index,
use_normalized_coordinates=True,
max_boxes_to_draw=200,
min_score_thresh=0.30,
agnostic_mode=False,
keypoints=keypoints,
keypoint_scores=keypoint_scores,
keypoint_edges=COCO17_HUMAN_POSE_KEYPOINTS,
)
plt.figure(figsize=(24, 32))
plt.imshow(image_np_with_detections[0])
plt.show()
"""
Explanation: View results
End of explanation
"""
%%bash -s "$ENDPOINT_ID" "$REGION" "$PROJECT_ID" --out ENDPOINT_MODEL_ID
ENDPOINT_MODEL_ID=$(gcloud ai endpoints describe $1 --region=$2 --project=$3 | grep "id:")
ENDPOINT_MODEL_ID=`echo $ENDPOINT_MODEL_ID | cut -d' ' -f2`
echo $ENDPOINT_MODEL_ID | tr -d "'"
# Undeploy endpoint
! gcloud ai endpoints undeploy-model $ENDPOINT_ID \
--project=$PROJECT_ID \
--region=$REGION \
--deployed-model-id=$ENDPOINT_MODEL_ID \
# Delete endpoint resource
! gcloud ai endpoints delete $ENDPOINT_ID \
--project=$PROJECT_ID \
--region=$REGION \
--quiet
# Delete model resource
! gcloud ai models delete $MODEL_ID \
--project=$PROJECT_ID \
--region=$REGION \
--quiet
# Delete Cloud Storage objects that were created
#! gsutil -m rm -r $BUCKET_NAME
"""
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial.
End of explanation
"""
phoebe-project/phoebe2-docs | 2.1/tutorials/pitch_yaw.ipynb | gpl-3.0

!pip install -I "phoebe>=2.1,<2.2"
"""
Explanation: Misalignment (Pitch & Yaw)
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
"""
b.add_dataset('mesh', times=[0], dataset='mesh01', columns=['teffs'])
"""
Explanation: Now let's add a mesh dataset at a few different times so that we can see how the misalignment affect the surfaces of the stars.
End of explanation
"""
print(b['pitch@component'])
print(b['incl@constraint'])
"""
Explanation: Relevant Parameters
The 'pitch' parameter defines the misalignment of a given star in the same direction as the inclination. We can see how it is defined relative to the inclination by accessing the constraint.
Note that, by default, it is the inclination of the component that is constrained with the inclination of the orbit and the pitch as free parameters.
End of explanation
"""
print(b['yaw@component'])
print(b['long_an@constraint'])
"""
Explanation: Similarly, the 'yaw' parameter defines the misalignment in the direction of the lonigtude of the ascending node.
Note that, by default, it is the long_an of the component that is constrained with the long_an of the orbit and the yaw as free parameters.
End of explanation
"""
print(b['long_an@primary@component'].description)
"""
Explanation: The long_an of a star is a bit of an odd concept, and really is just meant to be analogous to the inclination case. In reality, it is the angle of the "equator" of the star on the sky.
End of explanation
"""
b['syncpar@secondary'] = 5.0
b['pitch@secondary'] = 0
b['yaw@secondary'] = 0
b.run_compute(irrad_method='none')
"""
Explanation: Note also that the system is aligned by default, with the pitch and yaw both set to zero.
Misaligned Systems
To create a misaligned system, we must set the pitch and/or yaw to be non-zero.
But first let's create an aligned system for comparison. In order to easily see the spin axis, we'll plot the effective temperature and spin up our star to exaggerate the effect.
End of explanation
"""
afig, mplfig = b.plot(time=0.0, fc='teffs', ec='none', x='us', y='vs', show=True)
"""
Explanation: We'll plot the mesh as it would be seen on the plane of the sky.
End of explanation
"""
afig, mplfig = b.plot(time=0.0, fc='teffs', ec='none', x='ws', y='vs', show=True)
"""
Explanation: and also with the line-of-sight along the x-axis.
End of explanation
"""
b['pitch@secondary'] = 30
b['yaw@secondary'] = 0
b.run_compute(irrad_method='none')
afig, mplfig = b.plot(time=0.0, fc='teffs', ec='none', x='us', y='vs', show=True)
afig, mplfig = b.plot(time=0.0, fc='teffs', ec='none', x='ws', y='vs', show=True)
"""
Explanation: If we set the pitch to be non-zero, we'd expect to see a change in the spin axis along the line-of-sight.
End of explanation
"""
b['pitch@secondary@component'] = 0
b['yaw@secondary@component'] = 30
b.run_compute(irrad_method='none')
afig, mplfig = b.plot(time=0.0, fc='teffs', ec='none', x='us', y='vs', show=True)
afig, mplfig = b.plot(time=0.0, fc='teffs', ec='none', x='ws', y='vs', show=True)
"""
Explanation: And if we set the yaw to be non-zero, we'll see the rotation axis rotate on the plane of the sky.
End of explanation
"""
spectralDNS/shenfun | binder/sphere-helmholtz.ipynb | bsd-2-clause

from shenfun import *
from shenfun.la import SolverGeneric1ND
import sympy as sp
"""
Explanation: Spherical coordinates in shenfun
The Helmholtz equation is given as
$$
-\nabla^2 u + \alpha u = f.
$$
In this notebook we will solve this equation on the unit sphere, using spherical coordinates. To verify the implementation we use a smooth manufactured solution.
We start the implementation by importing necessary functionality from shenfun and sympy:
End of explanation
"""
r = 1
theta, phi = psi = sp.symbols('x,y', real=True, positive=True)
rv = (r*sp.sin(theta)*sp.cos(phi), r*sp.sin(theta)*sp.sin(phi), r*sp.cos(theta))
"""
Explanation: Define spherical coordinates $(r, \theta, \phi)$
$$
\begin{align}
x &= r \sin \theta \cos \phi \\
y &= r \sin \theta \sin \phi \\
z &= r \cos \theta
\end{align}
$$
using sympy. The radius r will be constant r=1. We create the three-dimensional position vector rv as a function of the two new coordinates $(\theta, \phi)$.
End of explanation
"""
N, M = 64, 64
L0 = FunctionSpace(N, 'C', domain=(0, np.pi))
F1 = FunctionSpace(M, 'F', dtype='d')
T = TensorProductSpace(comm, (L0, F1), coordinates=(psi, rv, sp.Q.positive(sp.sin(theta))))
v = TestFunction(T)
u = TrialFunction(T)
"""
Explanation: We define bases with the domains $\theta \in [0, \pi]$ and $\phi \in [0, 2\pi]$. Also define a tensorproductspace, test- and trialfunction. Note that the new coordinates and the position vector are fed to the TensorProductSpace and not the individual spaces:
End of explanation
"""
#sph = sp.functions.special.spherical_harmonics.Ynm
#ue = sph(6, 3, theta, phi)
ue = sp.cos(8*(sp.sin(theta)*sp.cos(phi) + sp.sin(theta)*sp.sin(phi) + sp.cos(theta)))
"""
Explanation: Define a manufactured solution. A spherical harmonic could be used (left commented out in the code); here we use a smooth cosine function instead.
End of explanation
"""
alpha = 1000
g = (-div(grad(u))+alpha*u).tosympy(basis=ue, psi=psi)
gj = Array(T, buffer=g*T.coors.sg)
g_hat = Function(T)
g_hat = inner(v, gj, output_array=g_hat)
"""
Explanation: Compute the right hand side on the quadrature mesh and take the scalar product
End of explanation
"""
from IPython.display import Math
Math((-div(grad(u))+alpha*u).tolatex(funcname='u', symbol_names={theta: '\\theta', phi: '\\phi'}))
#Math((grad(u)).tolatex(funcname='u', symbol_names={theta: '\\theta', phi: '\\phi'}))
"""
Explanation: Note that we can use the shenfun operators div and grad on a trialfunction u, and then switch the trialfunction for a sympy function ue. The operators will then make use of sympy's derivative method on the function ue. Here (-div(grad(u))+alpha*u) corresponds to the equation we are trying to solve:
End of explanation
"""
mats = inner(v, (-div(grad(u))+alpha*u)*T.coors.sg)
mats[3].mats
"""
Explanation: Evaluated with u=ue and you get the exact right hand side f.
Tensor product matrices that make up the Helmholtz equation are then assembled as
End of explanation
"""
u_hat = Function(T)
Sol1 = SolverGeneric1ND(mats)
u_hat = Sol1(g_hat, u_hat)
"""
Explanation: And the linear system of equations can be solved using the generic SolverGeneric1ND, that can be used for any problem that only has non-periodic boundary conditions in one dimension.
End of explanation
"""
uj = u_hat.backward()
uq = Array(T, buffer=ue)
print('Error =', np.sqrt(inner(1, (uj-uq)**2)), np.linalg.norm(uj-uq))
np.linalg.norm(u_hat - u_hat.backward().forward())
import matplotlib.pyplot as plt
%matplotlib inline
plt.spy(Sol1.solvers1D[1].mat, markersize=0.2)
#raise RuntimeError
"""
Explanation: Transform back to real space and compute the error.
End of explanation
"""
u_hat2 = u_hat.refine([N*3, M*3])
"""
Explanation: Postprocessing
Since we used rather few quadrature points in solving this problem, we refine the solution for a nicer plot. Note that refine simply pads Functions with zeros, which gives exactly the same accuracy, but more quadrature points in real space. u_hat has NxM quadrature points; here we refine using 3 times as many points along both dimensions
End of explanation
"""
surf3D(u_hat2, wrapaxes=[1])
"""
Explanation: The periodic solution does not contain the periodic points twice, i.e., the computational mesh contains $0$, but not $2\pi$. It looks better if we wrap the periodic dimension all around to $2\pi$, and this is achieved with
End of explanation
"""
Math((div(grad(div(grad(u))))+alpha*u).tolatex(funcname='u', symbol_names={theta: '\\theta', phi: '\\phi'}))
"""
Explanation: Biharmonic equation
A biharmonic equation is given as
$$
\nabla^4 u + \alpha u = f.
$$
This equation is extremely messy in spherical coordinates. I cannot even find it posted anywhere. Nevertheless, we can solve it trivially with shenfun, and we can also see what it looks like
End of explanation
"""
g = (div(grad(div(grad(u))))+alpha*u).tosympy(basis=ue, psi=psi)
gj = Array(T, buffer=g)
# Take scalar product
g_hat = Function(T)
g_hat = inner(v, gj, output_array=g_hat)
mats = inner(v, div(grad(div(grad(u)))) + alpha*u)
# Solve
u_hat = Function(T)
Sol1 = SolverGeneric1ND(mats)
u_hat = Sol1(g_hat, u_hat)
# Transform back to real space.
uj = u_hat.backward()
uq = Array(T, buffer=ue)
print('Error =', np.sqrt(dx((uj-uq)**2)))
"""
Explanation: Remember that this equation uses constant radius r=1. We now solve the equation using the same manufactured solution as for the Helmholtz equation.
End of explanation
"""
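The manufactured-solution trick used here can be sketched with plain sympy, independent of shenfun (a 1D illustration with a made-up exact solution, not the notebook's actual `ue`): choose the exact solution first, push it through the operator to manufacture the right-hand side `f`, and the solver output can then be checked against the known solution.

```python
import sympy as sp

x = sp.Symbol('x', real=True)
alpha = 10

# Choose an exact solution u_e, then manufacture f = L(u_e)
# for the Helmholtz-type operator L(u) = u'' + alpha*u
ue = sp.sin(4 * x)
f = sp.diff(ue, x, 2) + alpha * ue

# By construction the residual L(u_e) - f vanishes identically,
# so any consistent solver fed this f should reproduce u_e
residual = sp.simplify(sp.diff(ue, x, 2) + alpha * ue - f)
print(residual)  # 0
```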
r, theta, phi = psi = sp.symbols('x,y,z', real=True, positive=True)
rv = (r*sp.sin(theta)*sp.cos(phi), r*sp.sin(theta)*sp.sin(phi), r*sp.cos(theta))
L0 = FunctionSpace(20, 'L', domain=(0, 1))
F1 = FunctionSpace(20, 'L', domain=(0, np.pi))
F2 = FunctionSpace(20, 'F', dtype='D')
T = TensorProductSpace(comm, (L0, F1, F2), coordinates=(psi, rv, sp.Q.positive(sp.sin(theta))))
p = TrialFunction(T)
Math((div(grad(div(grad(p))))).tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', phi: '\\phi'}))
q = TestFunction(T)
A = inner(div(grad(q)), div(grad(p)))
"""
Explanation: Want to see what the regular 3-dimensional biharmonic equation looks like in spherical coordinates? This is extremely tedious to derive by hand, but in shenfun you can get there with the following few lines of code
End of explanation
"""
L0 = FunctionSpace(8, 'C', domain=(0, np.pi))
F1 = FunctionSpace(8, 'F', dtype='D')
F2 = FunctionSpace(8, 'F', dtype='D')
T = TensorProductSpace(comm, (L0, F1, F2))
p = TrialFunction(T)
Math((div(grad(div(grad(p))))).tolatex(funcname='u'))
"""
Explanation: I don't know if this is actually correct, because I haven't derived it by hand and I haven't seen it printed anywhere, but at least I know the Cartesian equation is correct:
End of explanation
"""
phockett/ePSproc | notebooks/plottingDev/ITK_tests_070320.ipynb | gpl-3.0
import numpy as np
from itkwidgets import view
"""
Explanation: ITK widgets tests
See also pyVista_tests_070320.ipynb
End of explanation
"""
number_of_points = 3000
gaussian_1_mean = [0.0, 0.0, 0.0]
gaussian_1_cov = [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 0.5]]
point_set_1 = np.random.multivariate_normal(gaussian_1_mean, gaussian_1_cov,
number_of_points)
gaussian_2_mean = [4.0, 6.0, 7.0]
gaussian_2_cov = [[2.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 1.5]]
point_set_2 = np.random.multivariate_normal(gaussian_2_mean, gaussian_2_cov,
number_of_points)
gaussian_3_mean = [4.0, 0.0, 7.0]
gaussian_3_cov = [[4.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 3.5]]
point_set_3 = np.random.multivariate_normal(gaussian_3_mean, gaussian_3_cov,
number_of_points)
view(point_sets=[point_set_1, point_set_2, point_set_3])
view?
"""
Explanation: Point cloud plotting
https://github.com/InsightSoftwareConsortium/itkwidgets/blob/master/examples/NumPyArrayPointSet.ipynb
End of explanation
"""
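As a quick sanity check on point clouds like the ones above (a sketch using numpy only, with a fixed seed so the numbers are reproducible): the sample mean of a large multivariate-normal draw should sit close to the requested mean.

```python
import numpy as np

rng = np.random.default_rng(42)
mean = [4.0, 6.0, 7.0]
cov = [[2.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 1.5]]
points = rng.multivariate_normal(mean, cov, 3000)

# Standard error of the mean here is roughly sqrt(2/3000), about 0.026,
# so the sample mean should agree with `mean` to well within 0.2
print(points.shape)  # (3000, 3)
print(np.allclose(points.mean(axis=0), mean, atol=0.2))  # True
```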
import pyvista
from pyvista import examples
mesh = examples.download_st_helens().warp_by_scalar()
mesh
view(geometries=mesh)
"""
Explanation: pyVista examples
https://github.com/InsightSoftwareConsortium/itkwidgets/blob/master/examples/pyvista.StructuredGrid.ipynb
End of explanation
"""
agarwal-shubham/ACNN | deep_learning_for_3D_shape_analysis_anisotropic.ipynb | mit
import sys
import os
import numpy as np
import scipy.io
import time
import theano
import theano.tensor as T
import theano.sparse as Tsp
import lasagne as L
import lasagne.layers as LL
import lasagne.objectives as LO
from lasagne.layers.normalization import batch_norm
sys.path.append('..')
from icnn import aniso_utils_lasagne, dataset, snapshotter
"""
Explanation: Prerequisites
Install Theano and Lasagne using the following commands:
bash
pip install -r https://raw.githubusercontent.com/Lasagne/Lasagne/master/requirements.txt
pip install https://github.com/Lasagne/Lasagne/archive/master.zip
Working in a virtual environment is recommended.
Data preparation
Current code allows to generate geodesic patches from a collection of shapes represented as triangular meshes.
To get started with the pre-processing:
git clone https://github.com/jonathanmasci/ShapeNet_data_preparation_toolbox.git
The usual processing pipeline is shown in run_forrest_run.m.
We will soon update this preparation stage, so perhaps better to start with our pre-computed dataset, and stay tuned! :-)
Prepared data
All that is required to train on the FAUST_registration dataset for this demo is available for download at
https://www.dropbox.com/s/aamd98nynkvbcop/EG16_tutorial.tar.bz2?dl=0
ICNN Toolbox
bash
git clone https://github.com/jonathanmasci/EG16_tutorial.git
End of explanation
"""
base_path = '/home/shubham/Desktop/IndependentStudy/EG16_tutorial/dataset/FAUST_registrations/data/diam=200/'
# train_txt, test_txt, descs_path, patches_path, geods_path, labels_path, ...
# desc_field='desc', patch_field='M', geod_field='geods', label_field='labels', epoch_size=100
ds = dataset.ClassificationDatasetPatchesMinimal(
'FAUST_registrations_train.txt', 'FAUST_registrations_test.txt',
os.path.join(base_path, 'descs', 'shot'),
os.path.join(base_path, 'patch_aniso', 'alpha=100_nangles=016_ntvals=005_tmin=6.000_tmax=24.000_thresh=99.900_norm=L1'),
None,
os.path.join(base_path, 'labels'),
epoch_size=50)
# inp = LL.InputLayer(shape=(None, 544))
# print(inp.input_var)
# patch_op = LL.InputLayer(input_var=Tsp.csc_fmatrix('patch_op'), shape=(None, None))
# print(patch_op.shape)
# print(patch_op.input_var)
# icnn = LL.DenseLayer(inp, 16)
# print(icnn.output_shape)
# print(icnn.output_shape)
# desc_net = theano.dot(patch_op, icnn)
"""
Explanation: Data loading
End of explanation
"""
nin = 544
nclasses = 6890
l2_weight = 1e-5
def get_model(inp, patch_op):
icnn = LL.DenseLayer(inp, 16)
icnn = batch_norm(aniso_utils_lasagne.ACNNLayer([icnn, patch_op], 16, nscale=5, nangl=16))
icnn = batch_norm(aniso_utils_lasagne.ACNNLayer([icnn, patch_op], 32, nscale=5, nangl=16))
icnn = batch_norm(aniso_utils_lasagne.ACNNLayer([icnn, patch_op], 64, nscale=5, nangl=16))
ffn = batch_norm(LL.DenseLayer(icnn, 512))
    ffn = LL.DenseLayer(ffn, nclasses, nonlinearity=aniso_utils_lasagne.log_softmax)
return ffn
inp = LL.InputLayer(shape=(None, nin))
patch_op = LL.InputLayer(input_var=Tsp.csc_fmatrix('patch_op'), shape=(None, None))
ffn = get_model(inp, patch_op)
# L.layers.get_output -> theano variable representing network
output = LL.get_output(ffn)
pred = LL.get_output(ffn, deterministic=True) # in case we use dropout
# target theano variable indicating the index a vertex should be mapped to wrt the latent space
target = T.ivector('idxs')
# to work with logit predictions, better behaved numerically
cla = aniso_utils_lasagne.categorical_crossentropy_logdomain(output, target, nclasses).mean()
acc = LO.categorical_accuracy(pred, target).mean()
# a bit of regularization is commonly used
regL2 = L.regularization.regularize_network_params(ffn, L.regularization.l2)
cost = cla + l2_weight * regL2
"""
Explanation: Network definition
End of explanation
"""
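The "better behaved numerically" comment above is about the log-sum-exp trick: subtracting the per-row maximum before exponentiating keeps softmax and cross-entropy finite even for huge logits. A framework-free numpy sketch of the idea (not the `aniso_utils_lasagne.log_softmax` implementation itself):

```python
import numpy as np

def log_softmax(z):
    # Shifting by the row max is mathematically a no-op for softmax,
    # but it prevents overflow in exp()
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

logits = np.array([[1000.0, 1001.0, 1002.0]])  # naive softmax would overflow
lsm = log_softmax(logits)
nll = -lsm[0, 2]  # cross-entropy when class 2 is the target
print(np.isfinite(lsm).all())  # True
print(round(float(nll), 4))    # 0.4076
```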
params = LL.get_all_params(ffn, trainable=True)
grads = T.grad(cost, params)
# computes the L2 norm of the gradient to better inspect training
grads_norm = T.nlinalg.norm(T.concatenate([g.flatten() for g in grads]), 2)
# Adam turned out to be a very good choice for correspondence
updates = L.updates.adam(grads, params, learning_rate=0.001)
"""
Explanation: Define the update rule, how to train
End of explanation
"""
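For reference, the update rule applied by `L.updates.adam` above can be sketched in a few lines of numpy (the textbook Adam rule with default hyperparameters, not Lasagne's exact code):

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad       # 1st-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2  # 2nd-moment (uncentered variance) estimate
    m_hat = m / (1 - b1 ** t)          # bias correction for the zero init
    v_hat = v / (1 - b2 ** t)
    return param - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Minimise f(x) = x^2 (gradient 2x) starting from x = 3
x, m, v = 3.0, 0.0, 0.0
for t in range(1, 10001):
    x, m, v = adam_step(x, 2 * x, m, v, t)
print(abs(x) < 0.05)  # True: Adam walks x down to the minimum at 0
```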
funcs = dict()
funcs['train'] = theano.function([inp.input_var, patch_op.input_var, target],
[cost, cla, l2_weight * regL2, grads_norm, acc], updates=updates,
on_unused_input='warn')
funcs['acc_loss'] = theano.function([inp.input_var, patch_op.input_var, target],
[acc, cost], on_unused_input='warn')
funcs['predict'] = theano.function([inp.input_var, patch_op.input_var],
[pred], on_unused_input='warn')
"""
Explanation: Compile
End of explanation
"""
n_epochs = 50
eval_freq = 1
start_time = time.time()
best_trn = 1e5
best_tst = 1e5
kvs = snapshotter.Snapshotter('demo_training.snap')
for it_count in range(n_epochs):
tic = time.time()
b_l, b_c, b_s, b_r, b_g, b_a = [], [], [], [], [], []
for x_ in ds.train_iter():
tmp = funcs['train'](*x_)
# do some book keeping (store stuff for training curves etc)
b_l.append(tmp[0])
b_c.append(tmp[1])
b_r.append(tmp[2])
b_g.append(tmp[3])
b_a.append(tmp[4])
epoch_cost = np.asarray([np.mean(b_l), np.mean(b_c), np.mean(b_r), np.mean(b_g), np.mean(b_a)])
print(('[Epoch %03i][trn] cost %9.6f (cla %6.4f, reg %6.4f), |grad| = %.06f, acc = %7.5f %% (%.2fsec)') %
(it_count, epoch_cost[0], epoch_cost[1], epoch_cost[2], epoch_cost[3], epoch_cost[4] * 100,
time.time() - tic))
if np.isnan(epoch_cost[0]):
print("NaN in the loss function...let's stop here")
break
if (it_count % eval_freq) == 0:
v_c, v_a = [], []
for x_ in ds.test_iter():
tmp = funcs['acc_loss'](*x_)
v_a.append(tmp[0])
v_c.append(tmp[1])
test_cost = [np.mean(v_c), np.mean(v_a)]
print((' [tst] cost %9.6f, acc = %7.5f %%') % (test_cost[0], test_cost[1] * 100))
if epoch_cost[0] < best_trn:
kvs.store('best_train_params', [it_count, LL.get_all_param_values(ffn)])
best_trn = epoch_cost[0]
if test_cost[0] < best_tst:
kvs.store('best_test_params', [it_count, LL.get_all_param_values(ffn)])
best_tst = test_cost[0]
print("...done training %f" % (time.time() - start_time))
"""
Explanation: Training (a bit simplified)
End of explanation
"""
rewrite = True
out_path = '/tmp/EG16_tutorial/dumps/'
print("Saving output to: %s" % out_path)
if not os.path.isdir(out_path) or rewrite:
    try:
        os.makedirs(out_path)
    except OSError:
        pass
    a = []
    for i, d in enumerate(ds.test_iter()):
        fname = os.path.join(out_path, "%s" % ds.test_fnames[i])
        print(fname, end='')
        tmp = funcs['predict'](d[0], d[1])[0]
        a.append(np.mean(np.argmax(tmp, axis=1).flatten() == d[2].flatten()))
        scipy.io.savemat(fname, {'desc': tmp})
        print(", Acc: %7.5f %%" % (a[-1] * 100.0))
    print("\nAverage accuracy across all shapes: %7.5f %%" % (np.mean(a) * 100.0))
else:
    print("Model predictions already produced.")
"""
Explanation: Test phase
Now that the model is trained, it is enough to take the forward (predict) function and apply it to new data.
End of explanation
"""
kinnala/sp.fem | learning/Example 1 - Stokes equations.ipynb | agpl-3.0
import sys
sys.path.append('../')
import numpy as np
import matplotlib.pyplot as plt
from spfem.geometry import GeometryMeshPyTriangle
%matplotlib inline
"""
Explanation: Problem statement
The Stokes problem, which couples the fluid velocity field with a pressure field, is a classical example of a mixed problem.
Initialize
End of explanation
"""
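For reference, the strong form being discretised below is the incompressible Stokes system (a sketch with the viscosity scaled to one, matching the symmetric-gradient weak form assembled in the code; $\varepsilon(\mathbf{u})$ denotes the symmetric part of the velocity gradient):

```latex
-\nabla \cdot \left( 2\,\varepsilon(\mathbf{u}) \right) + \nabla p = \mathbf{f},
\qquad \nabla \cdot \mathbf{u} = 0,
\qquad \varepsilon(\mathbf{u}) = \tfrac{1}{2}\left(\nabla \mathbf{u} + \nabla \mathbf{u}^{T}\right).
```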
g = GeometryMeshPyTriangle(np.array([(0, 0), (1, 0), (1, 0.2), (2, 0.4), (2, 0.6), (1, 0.8), (1, 1), (0, 1)]))
m = g.mesh(0.03)
m.draw()
m.show()
"""
Explanation: Geometry and mesh generation
End of explanation
"""
from spfem.element import ElementTriP1, ElementTriP2, ElementH1Vec
from spfem.assembly import AssemblerElement
"""
Explanation: Assembly
End of explanation
"""
a = AssemblerElement(m, ElementH1Vec(ElementTriP2()))
b = AssemblerElement(m, ElementH1Vec(ElementTriP2()), ElementTriP1())
c = AssemblerElement(m, ElementTriP1())
def stokes_bilinear_a(du, dv):
def inner_product(a, b):
return a[0][0]*b[0][0] +\
a[0][1]*b[0][1] +\
a[1][0]*b[1][0] +\
a[1][1]*b[1][1]
def eps(dw): # symmetric part of the velocity gradient
import copy
dW = copy.deepcopy(dw)
dW[0][1] = .5*(dw[0][1] + dw[1][0])
dW[1][0] = dW[0][1]
return dW
return inner_product(eps(du), eps(dv))
A = a.iasm(stokes_bilinear_a) # iasm takes a function handle defining the weak form
def stokes_bilinear_b(du, v):
return (du[0][0]+du[1][1])*v
B = b.iasm(stokes_bilinear_b)
from spfem.utils import stack
from scipy.sparse import csr_matrix
eps = 1e-3
C = c.iasm(lambda u, v: u*v)
K = stack(np.array([[A, B.T], [B, -eps*C]])).tocsr()
from spfem.utils import direct
import copy
x = np.zeros(K.shape[0])
f = copy.deepcopy(x)
# find DOF sets
dirichlet_dofs, _ = a.find_dofs(lambda x, y: x >= 1.0)
inflow_dofs, inflow_locs = a.find_dofs(lambda x, y: x == 2.0, dofrows=[0])
# set inflow condition and solve with direct method
def inflow_profile(y):
return (y-0.4)*(y-0.6)
x[inflow_dofs] = inflow_profile(inflow_locs[1, :])
I = np.setdiff1d(np.arange(K.shape[0]), dirichlet_dofs)
x = direct(K, f, x=x, I=I)
m.plot(x[np.arange(C.shape[0]) + A.shape[0]])
m.plot(np.sqrt(x[a.dofnum_u.n_dof[0, :]]**2 + x[a.dofnum_u.n_dof[1, :]]**2), smooth=True)
plt.figure()
plt.quiver(m.p[0, :], m.p[1, :], x[a.dofnum_u.n_dof[0, :]], x[a.dofnum_u.n_dof[1, :]])
m.show()
"""
Explanation: Next we create assemblers for the elements. We can give different elements for the solution vector and the test function. In this case we form the blocks $A$ and $B$ separately.
End of explanation
"""
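The saddle-point block structure assembled above with spfem's stack helper can be illustrated with scipy.sparse alone (a sketch with tiny stand-in blocks, not the actual FEM matrices):

```python
import numpy as np
from scipy import sparse

# Stand-in blocks: A (n x n, velocity-velocity), B (m x n, pressure-velocity),
# C (m x m, pressure mass matrix used for stabilisation)
n, m, eps = 4, 2, 1e-3
A = sparse.identity(n, format='csr')
B = sparse.csr_matrix(np.ones((m, n)))
C = sparse.identity(m, format='csr')

# Same layout as stack(np.array([[A, B.T], [B, -eps*C]]))
K = sparse.bmat([[A, B.T], [B, -eps * C]], format='csr')
print(K.shape)  # (6, 6)
```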
WomensCodingCircle/CodingCirclePython | Lesson12_TabularData/Tabular Data.ipynb | mit
import csv
"""
Explanation: Using Tabular Data in Python
csv module
Python has a csv reader/writer as part of its built-in library. It is called csv. This is the simplest way to read tabular data (data in table format): the type of data you may have used Excel to process (hopefully you will try out Python now). It must be in text format to use the csv module, so .csv (comma separated) or .tsv (tab separated)
Here is the documentation: https://docs.python.org/2/library/csv.html
To use it, first you must import it
import csv
End of explanation
"""
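If you want to experiment without creating a file first, you can feed the reader any iterable of lines, for example an in-memory string (a small sketch; the column names here are made up):

```python
import csv
import io

text = "distance,minutes\n3.2,45\n5.0,61\n"
reader = csv.reader(io.StringIO(text), delimiter=',')
for row in reader:
    print(row)
# ['distance', 'minutes']
# ['3.2', '45']
# ['5.0', '61']
```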
with open('walks.csv', 'r') as fh:
reader = csv.reader(fh, delimiter=',')
"""
Explanation: Next we create a csv reader. You give it a file handle and optionally the dialect, the separator (usually commas or tabs), and the quote character.
with open(filename, 'r') as fh:
reader = csv.reader(fh, delimiter='\t', quotechar='"')
End of explanation
"""
with open('walks.csv', 'r') as fh:
reader = csv.reader(fh, delimiter=',')
for row in reader:
print(row)
"""
Explanation: The reader doesn't do anything yet. It is a generator that allows you to loop through the data (it is very similar to a file handle).
To loop through the data you just write a simple for loop
for row in reader:
#process row
Each row will be a list with each element corresponding to a single column.
End of explanation
"""
with open('walks.csv', 'r') as fh:
reader = csv.reader(fh, delimiter=',')
header = next(reader)
for row in reader:
print(row)
print("Header", header)
"""
Explanation: TRY IT
Open up the file workout.txt (tab delimited, tab='\t') with the csv reader and print out each row.
Doesn't that look nice?
Well there are a few problems that I can see. First the header, how do we deal with that?
Headers
The easiest way I have found is to use the next method (that is available with any generator) before the for loop and to store that in a header variable. That reads the first line and stores it (so that you can use it later) and then advances the pointer to the next line so when you run the for loop it is only on the data.
header = next(reader)
for row in reader:
#process data
End of explanation
"""
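An alternative worth knowing about is csv.DictReader, which consumes the header row for you and yields each row as a dictionary keyed by column name (a sketch on an in-memory string with made-up columns):

```python
import csv
import io

text = "distance,minutes\n3.2,45\n5.0,61\n"
for row in csv.DictReader(io.StringIO(text)):
    print(row['distance'], row['minutes'])
# 3.2 45
# 5.0 61
```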
with open('walks.csv', 'r') as fh:
reader = csv.reader(fh, delimiter=',')
header = next(reader)
for row in reader:
float_row = [float(row[0]), float(row[1])]
print(float_row)
"""
Explanation: Values are Strings
Notice that each item is a string. You'll need to remember that and convert things that actually should be numbers using the float() or int() functions.
End of explanation
"""
# Let's find the average distance for all walks.
with open('walks.csv', 'r') as fh:
reader = csv.reader(fh, delimiter=',')
header = next(reader)
# Empty list for storing all distances
walks = []
for row in reader:
#distance is in the first column
dist = row[0]
# Convert to float so we can do math
dist = float(dist)
# Append to our list
walks.append(dist)
# Use list aggregation methods to get average distance
ave_dist = sum(walks) / len(walks)
print("Average distance walked: {0:.1f}".format(ave_dist))
# Let's see our pace for each walk
with open('walks.csv', 'r') as fh:
reader = csv.reader(fh, delimiter=',')
header = next(reader)
for row in reader:
#distance is in the first column
dist = row[0]
# Convert to float so we can do math
dist = float(dist)
#time in minutes is in the second column
time_minutes = row[1]
# Convert to float so we can do math
time_minutes = float(time_minutes)
# calculate pace as minutes / kilometer
pace = time_minutes /dist
print("Pace: {0:.1f} min/km".format(pace))
# If you want a challenge, try to make this seconds/mile
# We can filter data. Let's get the ave pace only for walks longer than
# 3 km
# Let's see our pace for each walk
with open('walks.csv', 'r') as fh:
reader = csv.reader(fh, delimiter=',')
header = next(reader)
paces = []
for row in reader:
#distance is in the first column
dist = row[0]
# Convert to float so we can do math
dist = float(dist)
# Don't count short walks
if dist >= 3.0:
#time in minutes is in the second column
time_minutes = row[1]
# Convert to float so we can do math
time_minutes = float(time_minutes)
pace = time_minutes /dist
paces.append(pace)
ave_pace = sum(paces) / len(paces)
print("Average walking pace: {0:.1f} min/km".format(ave_pace))
"""
Explanation: TRY IT
Open workouts with a csv reader. Save the header line to a variable called header. Convert each value in the data rows to ints and print them out.
Analyzing our data
You can use just about everything we have learned up until this point to analyze your data: if statements, regexs, math, data structures. Let's look at some examples.
End of explanation
"""
# Let's read the csv into a dictionary of lists
with open('walks.csv', 'r') as fh:
reader = csv.reader(fh, delimiter=',')
header = next(reader)
# This is the dictionary we will put our data from the csv into
# The key's are the column headers and the values is a list of
# all the data in that column (transformed into floats)
data = {}
# Initialize our dictionary with keys from header and values
# as empty lists
for column in header:
data[column] = []
for row in reader:
# Enumerate give us the index and the value so
# we don't have to use a count variable
for index, column in enumerate(header):
# convert data point to float
data_point = float(row[index])
# append data to dictionary's list for that column
data[column].append(data_point)
# look at that beautiful data. You can do anything with that!
print(data)
"""
Explanation: Here is something I do all the time. It is a little more complicated than the above examples, so take your time trying to understand it. What I like to do is to read the csv data and transform it to a dictionary of lists. This allows me to use it in many different ways later in the code. It is most useful with larger datasets that I will be analyzing and using many different times. (You can even print it out as JSON!)
End of explanation
"""
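Once the data is in that dictionary-of-lists shape, per-column summaries fall out in one line each. A sketch with a hand-built dictionary standing in for the one read from the csv:

```python
# Stand-in for the `data` dict built from the csv above
data = {'distance': [3.2, 5.0, 4.1], 'minutes': [45.0, 61.0, 50.0]}

# One dict comprehension gives the average of every column at once
averages = {column: sum(values) / len(values) for column, values in data.items()}
print(round(averages['distance'], 2))  # 4.1
```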
import random
with open('sleep.csv', 'w') as fh:
writer = csv.writer(fh, delimiter='\t', quotechar='"')
header = ['day', 'sleep (hr)']
writer.writerow(header)
for i in range(1,11):
hr_sleep = random.randint(4,10)
writer.writerow([i, hr_sleep])
#open the file to prove you wrote it. (Open in excel for best results)
"""
Explanation: TRY IT
Find the average number of squats done from the workouts.txt file. Feel free to copy the code for opening from the previous TRY IT.
Writing CSVs
The csv module also contains code for writing csvs.
To write, you create a writer using the writer method and give it a filehandle and optionally delimiter and quotechar.
with open('my_file.csv', 'w') as fh:
writer = csv.writer(fh, delimiter=',', quotechar='"')
Then use the writerow method with a list to write as its argument.
writer.writerow([item1, item2])
End of explanation
"""
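The writing side also has a dictionary-based counterpart, csv.DictWriter, which pairs naturally with DictReader: you declare the column names once and then write one dict per row (a sketch writing to an in-memory buffer instead of a file):

```python
import csv
import io

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=['day', 'sleep (hr)'])
writer.writeheader()
writer.writerow({'day': 1, 'sleep (hr)': 8})
writer.writerow({'day': 2, 'sleep (hr)': 6})
print(buffer.getvalue())
```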
chetnapriyadarshini/deep-learning | batch-norm/Batch_Normalization_Exercises.ipynb | mit
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False)
"""
Explanation: Batch Normalization – Practice
Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.
This is not a good network for classifying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was:
1. Complicated enough that training would benefit from batch normalization.
2. Simple enough that it would train quickly, since this is meant to be a short exercise just to give you some practice adding batch normalization.
3. Simple enough that the architecture would be easy to understand without additional resources.
This notebook includes two versions of the network that you can edit. The first uses higher level functions from the tf.layers package. The second is the same network, but uses only lower level functions in the tf.nn package.
Batch Normalization with tf.layers.batch_normalization
Batch Normalization with tf.nn.batch_normalization
The following cell loads TensorFlow, downloads the MNIST dataset if necessary, and loads it into an object named mnist. You'll need to run this cell before running anything else in the notebook.
End of explanation
"""
"""
DO NOT MODIFY THIS CELL
"""
def fully_connected(prev_layer, num_units):
"""
Create a fully connectd layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
"""
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
"""
Explanation: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
This version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization
We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.
This version of the function does not include batch normalization.
End of explanation
"""
"""
DO NOT MODIFY THIS CELL
"""
def conv_layer(prev_layer, layer_depth):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)
return conv_layer
"""
Explanation: We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 2x2 on layers whose depth is a multiple of 3, and strides of 1x1 otherwise. We aren't bothering with pooling layers at all in this network.
This version of the function does not include batch normalization.
End of explanation
"""
"""
DO NOT MODIFY THIS CELL
"""
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
    # Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
"""
Explanation: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
This cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
End of explanation
"""
def fully_connected(prev_layer, num_units, is_training):
"""
Create a fully connectd layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
"""
layer = tf.layers.dense(prev_layer, num_units, activation=None)
# Apply batch normalization to the linear combination of the inputs and weights
layer = tf.layers.batch_normalization(layer, training= is_training)
layer = tf.nn.relu(layer)
return layer
"""
Explanation: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
Add batch normalization
We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference.
If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
"""
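Before wiring this into TensorFlow, it can help to see the whole batch-norm recipe in plain numpy: at training time normalize with the batch statistics while updating exponential moving averages, and at inference time reuse those stored population statistics. A framework-free sketch (not the tf.layers implementation):

```python
import numpy as np

def batch_norm(x, gamma, beta, stats, training, momentum=0.99, eps=1e-3):
    if training:
        mean, var = x.mean(axis=0), x.var(axis=0)
        # Update the running (population) statistics used at inference
        stats['mean'] = momentum * stats['mean'] + (1 - momentum) * mean
        stats['var'] = momentum * stats['var'] + (1 - momentum) * var
    else:
        mean, var = stats['mean'], stats['var']
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

features = 3
stats = {'mean': np.zeros(features), 'var': np.ones(features)}
gamma, beta = np.ones(features), np.zeros(features)

x = np.arange(12.0).reshape(4, 3) * 10       # deliberately badly scaled input
y = batch_norm(x, gamma, beta, stats, training=True)

# In training mode each feature comes out zero-mean (up to float error)
print(np.allclose(y.mean(axis=0), 0.0))      # True
```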
def conv_layer(prev_layer, layer_depth, is_training):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=None)
# Apply batch normalization to the linear combination of the inputs and weights
conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
"""
Explanation: TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
"""
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Batch normalization needs to do different calculations during training and inference,
# so we use this placeholder to tell the graph which behavior to use.
is_training = tf.placeholder(tf.bool, name="is_training")
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100,is_training)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
# Tell TensorFlow to update the population statistics while training
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys,is_training: True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys,is_training: False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
is_training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training: False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
"""
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.
End of explanation
"""
def fully_connected(prev_layer, num_units, is_training):
"""
Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
"""
layer_in = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None)
# gamma for scaling and beta for shifting
gamma = tf.Variable(tf.ones([num_units]))
beta = tf.Variable(tf.zeros([num_units]))
pop_mean = tf.Variable(tf.zeros([num_units]),trainable=False)
pop_variance = tf.Variable(tf.ones([num_units]),trainable=False)
epsilon = 1e-3
def batch_norm_training():
batch_mean,batch_variance = tf.nn.moments(layer_in,[0])
decay = 0.99
train_mean = tf.assign(pop_mean, pop_mean*decay + batch_mean*(1-decay))
train_variance = tf.assign(pop_variance,pop_variance*decay + batch_variance*(1-decay))
with tf.control_dependencies([train_mean, train_variance]):
return tf.nn.batch_normalization(layer_in, batch_mean, batch_variance, beta, gamma, epsilon)
def batch_norm_inference():
return tf.nn.batch_normalization(layer_in, pop_mean,pop_variance,beta,gamma,epsilon)
batch_normalized_output = tf.cond(is_training,batch_norm_training,batch_norm_inference)
return tf.nn.relu(batch_normalized_output)
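Stripped of the TensorFlow plumbing, tf.nn.batch_normalization computes x_hat = (x - mean) / sqrt(var + epsilon), then scales by gamma and shifts by beta. A NumPy sketch of that formula (epsilon matches the value used above; the input values are made up):

```python
import numpy as np

def batch_norm(x, gamma, beta, epsilon=1e-3):
    # Normalize over the batch axis, then scale by gamma and shift by beta
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + epsilon)
    return gamma * x_hat + beta

x = np.array([[0.0, 10.0],
              [2.0, 30.0]])
out = batch_norm(x, gamma=np.ones(2), beta=np.zeros(2))
```

With gamma = 1 and beta = 0, each column of the output has zero mean and (approximately) unit variance, regardless of the input scale.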
"""
Explanation: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output: Accuracy on 100 samples. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference.
Batch Normalization using tf.nn.batch_normalization<a id="example_2"></a>
Most of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things.
This version of the network uses tf.nn for almost everything, and expects you to implement batch normalization using tf.nn.batch_normalization.
Optional TODO: You can run the next three cells before you edit them just to see how the network performs without batch normalization. However, the results should be pretty much the same as you saw with the previous example before you added batch normalization.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
Note: For convenience, we continue to use tf.layers.dense for the fully_connected layer. By this point in the class, you should have no problem replacing that with matrix operations between the prev_layer and explicit weights and biases variables.
End of explanation
"""
def conv_layer(prev_layer, layer_depth, is_training):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
in_channels = prev_layer.get_shape().as_list()[3]
out_channels = layer_depth*4
weights = tf.Variable(
tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))
bias = tf.Variable(tf.zeros(out_channels))
layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')
gamma = tf.Variable(tf.ones([out_channels]))
beta = tf.Variable(tf.zeros([out_channels]))
pop_mean = tf.Variable(tf.zeros([out_channels]),trainable=False)
pop_variance = tf.Variable(tf.ones([out_channels]),trainable=False)
epsilon = 1e-3
def batch_norm_training():
batch_mean, batch_variance = tf.nn.moments(layer, [0,1,2], keep_dims=False)
decay = 0.99
train_mean = tf.assign(pop_mean, pop_mean * (decay) + batch_mean * (1-decay))
train_variance = tf.assign(pop_variance, pop_variance * (decay) + batch_variance * (1-decay))
with tf.control_dependencies([train_mean, train_variance]):
return tf.nn.batch_normalization(layer, batch_mean,batch_variance,beta,gamma,epsilon)
def batch_norm_inference():
return tf.nn.batch_normalization(layer, pop_mean, pop_variance, beta, gamma, epsilon)
batch_normalized_output = tf.cond(is_training, batch_norm_training, batch_norm_inference)
return tf.nn.relu(batch_normalized_output)
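The key difference from the fully connected case is the axes passed to tf.nn.moments: [0, 1, 2] averages over the batch and both spatial dimensions, leaving one mean and variance per output channel. The equivalent reduction in NumPy (shapes here are illustrative):

```python
import numpy as np

# A fake conv output of shape (batch, height, width, channels)
layer = np.arange(2 * 4 * 4 * 3, dtype=float).reshape(2, 4, 4, 3)

# Equivalent of tf.nn.moments(layer, [0, 1, 2]): one statistic per channel
batch_mean = layer.mean(axis=(0, 1, 2))
batch_var = layer.var(axis=(0, 1, 2))
```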
"""
Explanation: TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
Note: Unlike in the previous example that used tf.layers, adding batch normalization to these convolutional layers does require some slight differences to what you did in fully_connected.
End of explanation
"""
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
is_training = tf.placeholder(tf.bool)
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i,is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer,100,is_training)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels, is_training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels, is_training:False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels, is_training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]], is_training: False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
"""
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training.
End of explanation
"""
|
laic/gensim | docs/notebooks/doc2vec-IMDB.ipynb | lgpl-2.1 | import locale
import glob
import os.path
import requests
import tarfile
import sys
import codecs
dirname = 'aclImdb'
filename = 'aclImdb_v1.tar.gz'
locale.setlocale(locale.LC_ALL, 'C')
if sys.version > '3':
control_chars = [chr(0x85)]
else:
control_chars = [unichr(0x85)]
# Convert text to lower-case and strip punctuation/symbols from words
def normalize_text(text):
norm_text = text.lower()
# Replace breaks with spaces
norm_text = norm_text.replace('<br />', ' ')
# Pad punctuation with spaces on both sides
for char in ['.', '"', ',', '(', ')', '!', '?', ';', ':']:
norm_text = norm_text.replace(char, ' ' + char + ' ')
return norm_text
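For a concrete feel of what this normalization does, here is the same function applied to a small made-up review fragment (the function is repeated so the snippet stands alone):

```python
def normalize_text(text):
    # Same normalization as above: lowercase, strip <br />, pad punctuation
    norm_text = text.lower()
    norm_text = norm_text.replace('<br />', ' ')
    for char in ['.', '"', ',', '(', ')', '!', '?', ';', ':']:
        norm_text = norm_text.replace(char, ' ' + char + ' ')
    return norm_text

tokens = normalize_text('Great movie!<br />Loved it.').split()
```

Punctuation ends up as its own tokens, which is what the downstream whitespace split relies on.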
import time
start = time.clock()
if not os.path.isfile('aclImdb/alldata-id.txt'):
if not os.path.isdir(dirname):
if not os.path.isfile(filename):
# Download IMDB archive
url = u'http://ai.stanford.edu/~amaas/data/sentiment/' + filename
r = requests.get(url)
with open(filename, 'wb') as f:
f.write(r.content)
tar = tarfile.open(filename, mode='r')
tar.extractall()
tar.close()
# Concat and normalize test/train data
folders = ['train/pos', 'train/neg', 'test/pos', 'test/neg', 'train/unsup']
alldata = u''
for fol in folders:
temp = u''
output = fol.replace('/', '-') + '.txt'
# Is there a better pattern to use?
txt_files = glob.glob('/'.join([dirname, fol, '*.txt']))
for txt in txt_files:
with codecs.open(txt, 'r', encoding='utf-8') as t:
t_clean = t.read()
for c in control_chars:
t_clean = t_clean.replace(c, ' ')
temp += t_clean
temp += "\n"
temp_norm = normalize_text(temp)
with codecs.open('/'.join([dirname, output]), 'w', encoding='utf-8') as n:
n.write(temp_norm)
alldata += temp_norm
with codecs.open('/'.join([dirname, 'alldata-id.txt']), 'w', encoding='utf-8') as f:
for idx, line in enumerate(alldata.splitlines()):
num_line = u"_*{0} {1}\n".format(idx, line)
f.write(num_line)
end = time.clock()
print ("total running time: ", end-start)
import os.path
assert os.path.isfile("aclImdb/alldata-id.txt"), "alldata-id.txt unavailable"
"""
Explanation: gensim doc2vec & IMDB sentiment dataset
TODO: section on introduction & motivation
TODO: prerequisites + dependencies (statsmodels, patsy, ?)
Requirements
Following are the dependencies for this tutorial:
- testfixtures
- statsmodels
Load corpus
Fetch and prep exactly as in Mikolov's go.sh shell script. (Note this cell tests for existence of required files, so steps won't repeat once the final summary file (aclImdb/alldata-id.txt) is available alongside this notebook.)
End of explanation
"""
import gensim
from gensim.models.doc2vec import TaggedDocument
from collections import namedtuple
SentimentDocument = namedtuple('SentimentDocument', 'words tags split sentiment')
alldocs = [] # will hold all docs in original order
with open('aclImdb/alldata-id.txt', encoding='utf-8') as alldata:
for line_no, line in enumerate(alldata):
tokens = gensim.utils.to_unicode(line).split()
words = tokens[1:]
tags = [line_no] # `tags = [tokens[0]]` would also work at extra memory cost
split = ['train','test','extra','extra'][line_no//25000] # 25k train, 25k test, 50k extra
sentiment = [1.0, 0.0, 1.0, 0.0, None, None, None, None][line_no//12500] # [12.5K pos, 12.5K neg]*2 then unknown
alldocs.append(SentimentDocument(words, tags, split, sentiment))
train_docs = [doc for doc in alldocs if doc.split == 'train']
test_docs = [doc for doc in alldocs if doc.split == 'test']
doc_list = alldocs[:] # for reshuffling per pass
print('%d docs: %d train-sentiment, %d test-sentiment' % (len(doc_list), len(train_docs), len(test_docs)))
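The integer-division indexing in the loop above maps each line number to its split and sentiment; pulled out on its own (bucket boundaries as in the lists above):

```python
def bucket(line_no):
    # 25k train, 25k test, 50k extra; sentiment is known only for the first 50k lines
    split = ['train', 'test', 'extra', 'extra'][line_no // 25000]
    sentiment = [1.0, 0.0, 1.0, 0.0, None, None, None, None][line_no // 12500]
    return split, sentiment
```

So line 0 is a positive training doc, line 30000 a positive test doc, and line 80000 an unlabeled "extra" doc.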
"""
Explanation: The data is small enough to be read into memory.
End of explanation
"""
from gensim.models import Doc2Vec
import gensim.models.doc2vec
from collections import OrderedDict
import multiprocessing
cores = multiprocessing.cpu_count()
assert gensim.models.doc2vec.FAST_VERSION > -1, "this will be painfully slow otherwise"
simple_models = [
# PV-DM w/concatenation - window=5 (both sides) approximates paper's 10-word total window size
Doc2Vec(dm=1, dm_concat=1, size=100, window=5, negative=5, hs=0, min_count=2, workers=cores),
# PV-DBOW
Doc2Vec(dm=0, size=100, negative=5, hs=0, min_count=2, workers=cores),
# PV-DM w/average
Doc2Vec(dm=1, dm_mean=1, size=100, window=10, negative=5, hs=0, min_count=2, workers=cores),
]
# speed setup by sharing results of 1st model's vocabulary scan
simple_models[0].build_vocab(alldocs) # PV-DM/concat requires one special NULL word so it serves as template
print(simple_models[0])
for model in simple_models[1:]:
model.reset_from(simple_models[0])
print(model)
models_by_name = OrderedDict((str(model), model) for model in simple_models)
"""
Explanation: Set-up Doc2Vec Training & Evaluation Models
Approximating experiment of Le & Mikolov "Distributed Representations of Sentences and Documents", also with guidance from Mikolov's example go.sh:
./word2vec -train ../alldata-id.txt -output vectors.txt -cbow 0 -size 100 -window 10 -negative 5 -hs 0 -sample 1e-4 -threads 40 -binary 0 -iter 20 -min-count 1 -sentence-vectors 1
Parameter choices below vary:
100-dimensional vectors, as the 400d vectors of the paper don't seem to offer much benefit on this task
similarly, frequent word subsampling seems to decrease sentiment-prediction accuracy, so it's left out
cbow=0 means skip-gram which is equivalent to the paper's 'PV-DBOW' mode, matched in gensim with dm=0
added to that DBOW model are two DM models, one which averages context vectors (dm_mean) and one which concatenates them (dm_concat, resulting in a much larger, slower, more data-hungry model)
a min_count=2 saves quite a bit of model memory, discarding only words that appear in a single doc (and are thus no more expressive than the unique-to-each doc vectors themselves)
End of explanation
"""
from gensim.test.test_doc2vec import ConcatenatedDoc2Vec
models_by_name['dbow+dmm'] = ConcatenatedDoc2Vec([simple_models[1], simple_models[2]])
models_by_name['dbow+dmc'] = ConcatenatedDoc2Vec([simple_models[1], simple_models[0]])
"""
Explanation: Following the paper, we also evaluate models in pairs. These wrappers return the concatenation of the vectors from each model. (Only the singular models are trained.)
End of explanation
"""
import numpy as np
import statsmodels.api as sm
from random import sample
# for timing
from contextlib import contextmanager
from timeit import default_timer
import time
@contextmanager
def elapsed_timer():
start = default_timer()
elapser = lambda: default_timer() - start
yield lambda: elapser()
end = default_timer()
elapser = lambda: end-start
def logistic_predictor_from_data(train_targets, train_regressors):
logit = sm.Logit(train_targets, train_regressors)
predictor = logit.fit(disp=0)
#print(predictor.summary())
return predictor
def error_rate_for_model(test_model, train_set, test_set, infer=False, infer_steps=3, infer_alpha=0.1, infer_subsample=0.1):
"""Report error rate on test_doc sentiments, using supplied model and train_docs"""
train_targets, train_regressors = zip(*[(doc.sentiment, test_model.docvecs[doc.tags[0]]) for doc in train_set])
train_regressors = sm.add_constant(train_regressors)
predictor = logistic_predictor_from_data(train_targets, train_regressors)
test_data = test_set
if infer:
if infer_subsample < 1.0:
test_data = sample(test_data, int(infer_subsample * len(test_data)))
test_regressors = [test_model.infer_vector(doc.words, steps=infer_steps, alpha=infer_alpha) for doc in test_data]
else:
test_regressors = [test_model.docvecs[doc.tags[0]] for doc in test_docs]
test_regressors = sm.add_constant(test_regressors)
# predict & evaluate
test_predictions = predictor.predict(test_regressors)
corrects = sum(np.rint(test_predictions) == [doc.sentiment for doc in test_data])
errors = len(test_predictions) - corrects
error_rate = float(errors) / len(test_predictions)
return (error_rate, errors, len(test_predictions), predictor)
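The error-rate arithmetic at the end of error_rate_for_model is just rounding each logistic prediction to 0/1 and comparing with the true sentiment; in isolation, with illustrative numbers:

```python
import numpy as np

predictions = np.array([0.9, 0.2, 0.6, 0.4])  # hypothetical logistic outputs
targets = np.array([1.0, 0.0, 0.0, 0.0])      # true sentiments
corrects = np.sum(np.rint(predictions) == targets)
error_rate = float(len(predictions) - corrects) / len(predictions)
```

Here the third prediction rounds to 1 against a true 0, so one of four samples is wrong.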
"""
Explanation: Predictive Evaluation Methods
Helper methods for evaluating error rate.
End of explanation
"""
from collections import defaultdict
best_error = defaultdict(lambda :1.0) # to selectively-print only best errors achieved
from random import shuffle
import datetime
alpha, min_alpha, passes = (0.025, 0.001, 20)
alpha_delta = (alpha - min_alpha) / passes
print("START %s" % datetime.datetime.now())
for epoch in range(passes):
shuffle(doc_list) # shuffling gets best results
for name, train_model in models_by_name.items():
# train
duration = 'na'
train_model.alpha, train_model.min_alpha = alpha, alpha
with elapsed_timer() as elapsed:
train_model.train(doc_list, total_examples=train_model.corpus_count, epochs=train_model.iter)
duration = '%.1f' % elapsed()
# evaluate
eval_duration = ''
with elapsed_timer() as eval_elapsed:
err, err_count, test_count, predictor = error_rate_for_model(train_model, train_docs, test_docs)
eval_duration = '%.1f' % eval_elapsed()
best_indicator = ' '
if err <= best_error[name]:
best_error[name] = err
best_indicator = '*'
print("%s%f : %i passes : %s %ss %ss" % (best_indicator, err, epoch + 1, name, duration, eval_duration))
if ((epoch + 1) % 5) == 0 or epoch == 0:
eval_duration = ''
with elapsed_timer() as eval_elapsed:
infer_err, err_count, test_count, predictor = error_rate_for_model(train_model, train_docs, test_docs, infer=True)
eval_duration = '%.1f' % eval_elapsed()
best_indicator = ' '
if infer_err < best_error[name + '_inferred']:
best_error[name + '_inferred'] = infer_err
best_indicator = '*'
print("%s%f : %i passes : %s %ss %ss" % (best_indicator, infer_err, epoch + 1, name + '_inferred', duration, eval_duration))
print('completed pass %i at alpha %f' % (epoch + 1, alpha))
alpha -= alpha_delta
print("END %s" % str(datetime.datetime.now()))
"""
Explanation: Bulk Training
Using explicit multiple-pass, alpha-reduction approach as sketched in gensim doc2vec blog post – with added shuffling of corpus on each pass.
Note that vector training is occurring on all documents of the dataset, which includes all TRAIN/TEST/DEV docs.
Evaluation of each model's sentiment-predictive power is repeated after each pass, as an error rate (lower is better), to see the rates-of-relative-improvement. The base numbers reuse the TRAIN and TEST vectors stored in the models for the logistic regression, while the inferred results use newly-inferred TEST vectors.
(On a 4-core 2.6Ghz Intel Core i7, these 20 passes training and evaluating 3 main models takes about an hour.)
End of explanation
"""
# print best error rates achieved
for rate, name in sorted((rate, name) for name, rate in best_error.items()):
print("%f %s" % (rate, name))
"""
Explanation: Achieved Sentiment-Prediction Accuracy
End of explanation
"""
doc_id = np.random.randint(simple_models[0].docvecs.count) # pick random doc; re-run cell for more examples
print('for doc %d...' % doc_id)
for model in simple_models:
inferred_docvec = model.infer_vector(alldocs[doc_id].words)
print('%s:\n %s' % (model, model.docvecs.most_similar([inferred_docvec], topn=3)))
"""
Explanation: In my testing, unlike the paper's report, DBOW performs best. Concatenating vectors from different models only offers a small predictive improvement. The best results I've seen are still just under 10% error rate, still a ways from the paper's 7.42%.
Examining Results
Are inferred vectors close to the precalculated ones?
End of explanation
"""
import random
doc_id = np.random.randint(simple_models[0].docvecs.count) # pick random doc, re-run cell for more examples
model = random.choice(simple_models) # and a random model
sims = model.docvecs.most_similar(doc_id, topn=model.docvecs.count) # get *all* similar documents
print(u'TARGET (%d): «%s»\n' % (doc_id, ' '.join(alldocs[doc_id].words)))
print(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\n' % model)
for label, index in [('MOST', 0), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:
print(u'%s %s: «%s»\n' % (label, sims[index], ' '.join(alldocs[sims[index][0]].words)))
"""
Explanation: (Yes, here the stored vector from 20 epochs of training is usually one of the closest to a freshly-inferred vector for the same words. Note the defaults for inference are very abbreviated – just 3 steps starting at a high alpha – and likely need tuning for other applications.)
Do close documents seem more related than distant ones?
End of explanation
"""
word_models = simple_models[:]
import random
from IPython.display import HTML
# pick a random word with a suitable number of occurrences
while True:
word = random.choice(word_models[0].wv.index2word)
if word_models[0].wv.vocab[word].count > 10:
break
# or uncomment below line, to just pick a word from the relevant domain:
#word = 'comedy/drama'
similars_per_model = [str(model.most_similar(word, topn=20)).replace('), ','),<br>\n') for model in word_models]
similar_table = ("<table><tr><th>" +
"</th><th>".join([str(model) for model in word_models]) +
"</th></tr><tr><td>" +
"</td><td>".join(similars_per_model) +
"</td></tr></table>")
print("most similar words for '%s' (%d occurences)" % (word, simple_models[0].wv.vocab[word].count))
HTML(similar_table)
"""
Explanation: (Somewhat, in terms of reviewer tone, movie genre, etc... the MOST cosine-similar docs usually seem more like the TARGET than the MEDIAN or LEAST.)
Do the word vectors show useful similarities?
End of explanation
"""
# assuming something like
# https://word2vec.googlecode.com/svn/trunk/questions-words.txt
# is in local directory
# note: this takes many minutes
for model in word_models:
sections = model.accuracy('questions-words.txt')
correct, incorrect = len(sections[-1]['correct']), len(sections[-1]['incorrect'])
print('%s: %0.2f%% correct (%d of %d)' % (model, float(correct*100)/(correct+incorrect), correct, correct+incorrect))
"""
Explanation: Do the DBOW words look meaningless? That's because the gensim DBOW model doesn't train word vectors – they remain at their random initialized values – unless you ask with the dbow_words=1 initialization parameter. Concurrent word-training slows DBOW mode significantly, and offers little improvement (and sometimes a little worsening) of the error rate on this IMDB sentiment-prediction task.
Words from DM models tend to show meaningfully similar words when there are many examples in the training data (as with 'plot' or 'actor'). (All DM modes inherently involve word vector training concurrent with doc vector training.)
Are the word vectors from this dataset any good at analogies?
End of explanation
"""
This cell left intentionally erroneous.
"""
Explanation: Even though this is a tiny, domain-specific dataset, it shows some meager capability on the general word analogies – at least for the DM/concat and DM/mean models which actually train word vectors. (The untrained random-initialized words of the DBOW model of course fail miserably.)
Slop
End of explanation
"""
from gensim.models import KeyedVectors
w2v_g100b = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin.gz', binary=True)
w2v_g100b.compact_name = 'w2v_g100b'
word_models.append(w2v_g100b)
"""
Explanation: To mix the Google dataset (if locally available) into the word tests...
End of explanation
"""
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
rootLogger = logging.getLogger()
rootLogger.setLevel(logging.INFO)
"""
Explanation: To get copious logging output from above steps...
End of explanation
"""
%load_ext autoreload
%autoreload 2
"""
Explanation: To auto-reload python code while developing...
End of explanation
"""
|
BYUFLOWLab/MDOnotebooks | MonteCarlo.ipynb | mit | def func(x):
return x[0]**2 + 2*x[1]**2 + 3*x[2]**2
def con(x):
return x[0] + x[1] + x[2] - 3.5 # rewritten in form c <= 0
x = [1.0, 1.0, 1.0]
sigma = [0.00, 0.06, 0.2]
"""
Explanation: Monte Carlo
This is the simple Monte Carlo example you worked on in class.
Consider the following objective and constraint
\begin{align}
f(x) &= x_1^2 + 2x_2^2 + 3x_3^2\
c(x) &= x_1 + x_2 + x_3 \le 3.5
\end{align}
At the point:
$$x = [1, 1, 1]$$
the standard deviation in $x$ is (normally distributed)
$$\sigma_x = [0.0, 0.06, 0.2]$$
Compute the following:
- Output statistics for $f$ (mean, standard deviation, histogram)
- Reliability of $c$
End of explanation
"""
import numpy as np
def stats(n):
f = np.zeros(n)
c = np.zeros(n)
for i in range(n):
x1 = x[0]
x2 = x[1] + np.random.randn(1)*sigma[1]
x3 = x[2] + np.random.randn(1)*sigma[2]
f[i] = func([x1, x2, x3])
c[i] = con([x1, x2, x3])
# mean
mu = np.average(f)
# standard deviation
std = np.std(f, ddof=1) #ddof=1 gives an unbiased estimate (np.sqrt(1.0/(n-1)*(np.sum(f**2) - n*mu**2)))
return mu, std, f, c
"""
Explanation: We will use randn, which gives us a random number k sampled from a normal distribution. It is sampled from a unit normal with zero mean and a standard deviation of 1 so to translate to an arbitrary mean and standard deviation the random value will be
$$ x = \mu + k \sigma $$
If we sample enough times, we should get reliable statistics
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
nvec = np.logspace(1, 6, 20)
muvec = np.zeros(20)
stdvec = np.zeros(20)
for i, n in enumerate(nvec):
muvec[i], stdvec[i], _, _ = stats(int(n))
print(i)
plt.figure()
plt.semilogx(nvec, muvec, '-o')
plt.figure()
plt.semilogx(nvec, stdvec, '-o')
plt.show()
"""
Explanation: Let's evaluate this function for different values of n (number of samples) to see how long it takes to converge.
End of explanation
"""
n = 1e5
mu, std, f, c = stats(int(n))
print('mu =', mu)
print('sigma =', std)
plt.figure()
plt.hist(f, bins=20);
"""
Explanation: Note that it takes about 100,000 simulations for the statistics to converge. Let's rerun that case and check our the histogram and statistics.
End of explanation
"""
reliability = np.count_nonzero(c <= 0.0)/float(n)
print('reliability =', reliability*100, '%')
"""
Explanation: Notice that it skews to the right. Because of the square terms in the function, any deviation causes the function to increase.
We can also estimate our reliability simply by counting: the fraction of samples for which the constraint was satisfied (equivalently, one minus the fraction of failures).
End of explanation
"""
from pyDOE import lhs
from scipy.stats.distributions import norm
def statsLHS(n):
f = np.zeros(n)
c = np.zeros(n)
# generate latin hypercube sample points beforehand from normal dist
lhd = lhs(2, samples=n)
rpt = norm(loc=0, scale=1).ppf(lhd)
for i in range(n):
x1 = x[0]
x2 = x[1] + rpt[i, 0]*sigma[1]
x3 = x[2] + rpt[i, 1]*sigma[2]
f[i] = func([x1, x2, x3])
c[i] = con([x1, x2, x3])
# mean
mu = np.average(f)
# standard deviation
std = np.std(f, ddof=1) #ddof=1 gives an unbiased estimate (np.sqrt(1.0/(n-1)*(np.sum(f**2) - n*mu**2)))
return mu, std, f, c
muLHS = np.zeros(20)
stdLHS = np.zeros(20)
for i, n in enumerate(nvec):
muLHS[i], stdLHS[i], _, _ = statsLHS(int(n))
print(i)
plt.figure()
plt.semilogx(nvec, muvec, '-o')
plt.semilogx(nvec, muLHS, '-o')
plt.figure()
plt.semilogx(nvec, stdvec, '-o')
plt.semilogx(nvec, stdLHS, '-o')
plt.show()
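The idea behind LHS — exactly one sample per equal-probability stratum — is easy to see in one dimension, without pyDOE (lhs_1d is a hypothetical helper sketching uniform strata on [0, 1)):

```python
import numpy as np

def lhs_1d(n, seed=0):
    # One uniform draw inside each of n equal-width strata, then shuffle the order
    rng = np.random.RandomState(seed)
    points = (np.arange(n) + rng.rand(n)) / n
    rng.shuffle(points)
    return points

pts = lhs_1d(10)
```

pyDOE's lhs does the same per dimension while also pairing the strata across dimensions; the norm(...).ppf call above then maps those uniform samples onto a normal distribution.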
"""
Explanation: Monte Carlo with LHS
In our discussion on surrogate-based optimization we learned about Latin Hypercube Sampling (LHS). Let's apply LHS to Monte Carlo to see if it speeds up convergence.
End of explanation
"""
|
miaecle/deepchem | examples/notebooks/deepchem_tensorflow_eager.ipynb | mit | import tensorflow as tf
import tensorflow.contrib.eager as tfe
"""
Explanation: TensorGraph Layers and TensorFlow eager
In this tutorial we will look at the working of TensorGraph layer with TensorFlow eager.
But before that let's see what exactly is TensorFlow eager.
Eager execution is an imperative, define-by-run interface where operations are executed immediately as they are called from Python. In other words, eager execution is a feature that makes TensorFlow execute operations immediately. Concrete values are returned instead of a computational graph to be executed later.
As a result:
- It allows writing imperative coding style like numpy
- Provides fast debugging with immediate run-time errors and integration with Python tools
- Strong support for higher-order gradients
End of explanation
"""
tfe.enable_eager_execution()
"""
Explanation: After importing the necessary modules, we invoke enable_eager_execution() at program startup.
End of explanation
"""
import numpy as np
import deepchem as dc
from deepchem.models.tensorgraph import layers
"""
Explanation: Enabling eager execution changes how TensorFlow functions behave. Tensor objects return concrete values instead of being a symbolic reference to nodes in a static computational graph(non-eager mode). As a result, eager execution should be enabled at the beginning of a program.
Note that with eager execution enabled, these operations consume and return multi-dimensional arrays as Tensor objects, similar to NumPy ndarrays.
Dense layer
End of explanation
"""
# Initialize parameters
in_dim = 2
out_dim = 3
batch_size = 10
inputs = np.random.rand(batch_size, in_dim).astype(np.float32) #Input
layer = layers.Dense(out_dim) # Provide the number of output values as parameter. This creates a Dense layer
result = layer(inputs) #get the ouput tensors
print(result)
"""
Explanation: In the following snippet we describe how to create a Dense layer in eager mode. The good thing about calling a layer as a function is that we don't have to call create_tensor() directly. This is identical to tensorflow API and has no conflict. And since eager mode is enabled, it should return concrete tensors right away.
End of explanation
"""
layer2 = layers.Dense(out_dim)
result2 = layer2(inputs)
print(result2)
"""
Explanation: Creating a second Dense layer should produce different results.
End of explanation
"""
x = layers.Dense(out_dim)(inputs)
print(x)
"""
Explanation: We can also execute the layer in eager mode to compute its output as a function of inputs. If the layer defines any variables, they are created the first time it is invoked. This happens in the same exact way that we would create a single layer in non-eager mode.
The following is another way to create a layer in eager mode: create_tensor() is invoked through the __call__() method. This lets us pass the tensor directly as a parameter while constructing a TensorGraph layer.
End of explanation
"""
from deepchem.models.tensorgraph.layers import Conv1D
width = 5
in_channels = 2
filters = 3
kernel_size = 2
batch_size = 5
inputs = np.random.rand(batch_size, width, in_channels).astype(
np.float32)
layer = layers.Conv1D(filters, kernel_size)
result = layer(inputs)
print(result)
"""
Explanation: Conv1D layer
Dense layers are one of the layers defined in Deepchem. Along with it there are several others like Conv1D, Conv2D, conv3D etc. We also take a look at how to construct a Conv1D layer below.
Basically this layer creates a convolution kernel that is convolved with the layer input over a single spatial (or temporal) dimension to produce a tensor of outputs.
When using this layer as the first layer in a model, provide an input_shape argument (tuple of integers or None)
When the input_shape argument is passed as a tuple of integers, e.g. (2, 3), it means we are passing sequences of two 3-dimensional vectors.
And when it is passed as (None, 3), it means we want variable-length sequences of 3-dimensional vectors.
End of explanation
"""
_input = tf.random_normal([2, 3])
print(_input)
layer = layers.Dense(4) # A DeepChem Dense layer
result = layer(_input)
print(result)
"""
Explanation: Again it should be noted that creating a second Conv1D layer would produce different results.
So that's how we invoke different DeepChem layers in eager mode.
Another interesting point is that we can mix tensorflow layers and DeepChem layers. Since they all take tensors as inputs and return tensors as outputs, you can take the output from one kind of layer and pass it as input to a different kind of layer. It should be noted, however, that tensorflow layers can't be added to a TensorGraph.
Workflow of DeepChem layers
Now that we've generalised so much, we should actually see if deepchem supplies an identical workflow for layers to that of tensorflow. For instance, let's consider the code where we create a Dense layer.
python
y = Dense(3)(input)
What the above line does is that it creates a dense layer with three outputs. It initializes the weights and the biases. And then it multiplies the input tensor by the weights.
Let's put the above statement in some mathematical terms. A Dense layer has a matrix of weights of shape (M, N), where M is the number of outputs and N is the number of inputs. The first time we call it, the layer sets N based on the shape of the input we passed to it and creates the weight matrix.
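The lazy weight creation described above can be sketched in a few lines of plain NumPy (a toy illustration, not DeepChem's actual implementation; the kernel is stored transposed, as (N, M), so a plain matrix product works):

```python
import numpy as np

# Toy Dense layer that, like the one described above, only creates its
# weight matrix on the first call, once N is known from the input shape.
class ToyDense:
    def __init__(self, out_dim):
        self.out_dim = out_dim  # M, the number of outputs
        self.kernel = None
        self.bias = None

    def __call__(self, inputs):
        if self.kernel is None:
            in_dim = inputs.shape[-1]  # N, set from the first input
            self.kernel = np.random.randn(in_dim, self.out_dim)
            self.bias = np.zeros(self.out_dim)
        return inputs @ self.kernel + self.bias

layer = ToyDense(3)
out = layer(np.ones((2, 4)))  # first call creates a (4, 3) kernel
print(out.shape)  # (2, 3)
```

Subsequent calls reuse the same kernel, which is why calling the same layer object twice on compatible inputs gives consistent results, while a second, freshly constructed layer does not.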
End of explanation
"""
result = tf.layers.dense(_input, units=4) # A tensorflow Dense layer
print(result)
"""
Explanation: This is exactly how a tensorflow Dense layer works. It implements the same operation as DeepChem's Dense layer, i.e. outputs = activation(inputs · kernel + bias), where kernel is the weights matrix created by the layer and bias is a bias vector created by the layer.
End of explanation
"""
def dense_squared(x):
    return layers.Dense(1)(layers.Dense(1)(x))
grad = tfe.gradients_function(dense_squared)
print(dense_squared(3.0))
print(grad(3.0))
"""
Explanation: We pass an input tensor to the tensorflow Dense layer and receive an output tensor that has the same shape as the input, except that the last dimension is the size of the output space.
Gradients
Finding gradients in eager mode is very similar to the autograd API. The computational flow is clean and logical.
What happens is that different operations can occur during each call; all forward operations are recorded to a tape, which is then played backwards when computing gradients. After the gradients have been computed, the tape is discarded.
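The tape idea can be sketched with a tiny, self-contained reverse-mode example in plain Python (a toy illustration, not TensorFlow's actual machinery): each forward operation records its inputs and local derivatives, and backward() replays them in reverse.

```python
class Var:
    """Toy autodiff variable: records parents and local gradients."""
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self.parents = parents  # tuples of (parent, local_gradient)

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def backward(self, grad=1.0):
        # "play the tape backwards": accumulate into each parent
        self.grad += grad
        for parent, local in self.parents:
            parent.backward(grad * local)

x = Var(3.0)
y = x * x          # forward pass records the computation
y.backward()       # reverse pass computes d(y)/d(x)
print(y.value, x.grad)  # 9.0 6.0
```

Real tapes explicitly discard the recorded operations after the backward pass; in this sketch they are simply left for garbage collection.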
End of explanation
"""
|
ComputationalModeling/spring-2017-danielak | past-semesters/spring_2016/day-by-day/day22-traveling-salesman-problem/TravelingSalesman_Problem_SOLUTIONS.ipynb | agpl-3.0 | import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.display import display, clear_output
def calc_total_distance(table_of_distances, city_order):
'''
Calculates distances between a sequence of cities.
Inputs: N x N table containing distances between each pair of the N
cities, as well as an array of length N+1 containing the city order,
which starts and ends with the same city (ensuring that the path is
closed)
Returns: total path length for the closed loop.
'''
total_distance = 0.0
# loop over cities and sum up the path length between successive pairs
for i in range(city_order.size-1):
total_distance += table_of_distances[city_order[i]][city_order[i+1]]
return total_distance
def plot_cities(city_order,city_x,city_y):
'''
Plots cities and the path between them.
Inputs: ordering of cities, x and y coordinates of each city.
Returns: a plot showing the cities and the path between them.
'''
# first make x,y arrays
x = []
y = []
# put together arrays of x and y positions that show the order that the
# salesman traverses the cities
for i in range(0, city_order.size):
x.append(city_x[city_order[i]])
y.append(city_y[city_order[i]])
# append the first city onto the end so the loop is closed
x.append(city_x[city_order[0]])
y.append(city_y[city_order[0]])
#time.sleep(0.1)
clear_output(wait=True)
display(fig) # Reset display
fig.clear() # clear output for animation
plt.xlim(-0.2, 20.2) # give a little space around the edges of the plot
plt.ylim(-0.2, 20.2)
# plot city positions in blue, and path in red.
plt.plot(city_x,city_y, 'bo', x, y, 'r-')
"""
Explanation: The Traveling Salesman problem
Names of group members
// put your names here!
Goals of this assignment
The main goal of this assignment is to use Monte Carlo methods to find the shortest path between several cities - the "Traveling Salesman" problem. This is an example of how randomization can be used to optimize problems that would be incredibly computationally expensive (and sometimes impossible) to solve exactly.
The Traveling Salesman problem
The Traveling Salesman Problem is a classic problem in computer science where the focus is on optimization. The problem is as follows: Imagine there is a salesman who has to travel to N cities. The order is unimportant, as long as he only visits each city once on each trip, and finishes where he started. The salesman wants to keep the distance traveled (and thus travel costs) as low as possible. This problem is interesting for a variety of reasons - it applies to transportation (finding the most efficient bus routes), logistics (finding the best UPS or FedEx delivery routes for some number of packages), or in optimizing manufacturing processes to reduce cost.
The Traveling Salesman Problem is extremely difficult to solve for large numbers of cities - testing every possible combination of cities would take N! (N factorial) individual tests. For 10 cities, this would require 3,628,800 separate tests. For 20 cities, this would require 2,432,902,008,176,640,000 (approximately $2.4 \times 10^{18}$) tests - if you could test one combination per microsecond ($10^{-6}$ s) it would take approximately 76,000 years! For 30 cities, at the same rate testing every combination would take more than one billion times the age of the Universe. As a result, this is the kind of problem where a "good enough" answer is sufficient, and where randomization comes in.
A good local example of a solution to the Traveling Salesman Problem is an optimized Michigan road trip calculated by a former MSU graduate student (and one across the US). There's also a widely-used software library for solving the Traveling Salesman Problem; the website has some interesting applications of the problem!
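To make the factorial blow-up concrete, here is a minimal brute-force solver (an illustrative sketch — fine for a handful of cities, hopeless for 20):

```python
import itertools
import math

def tour_length(points, order):
    """Length of the closed tour visiting points in the given order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def brute_force_tsp(points):
    """Try every permutation; city 0 is fixed as start to skip rotations."""
    best_order, best_len = None, float('inf')
    for perm in itertools.permutations(range(1, len(points))):
        order = (0,) + perm
        length = tour_length(points, order)
        if length < best_len:
            best_order, best_len = order, length
    return best_order, best_len

square = [(0, 0), (0, 1), (1, 1), (1, 0)]
order, length = brute_force_tsp(square)
print(order, length)  # the perimeter tour, total length 4.0
```

Even after fixing the start city, this still examines (N-1)! tours, which is exactly why the Monte Carlo approach below only ever tests random swaps.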
End of explanation
"""
# number of cities we'll use.
number_of_cities = 30
# seed for random number generator so we get the same value every time!
np.random.seed(2024561414)
# create random x,y positions for our current number of cities. (Distance scaling is arbitrary.)
city_x = np.random.random(size=number_of_cities)*20.0
city_y = np.random.random(size=number_of_cities)*20.0
# table of city distances - empty for the moment
city_distances = np.zeros((number_of_cities,number_of_cities))
# calculate the distance between each pair of cities and store it in the table.
# technically we're calculating 2x as many things as we need (as well as the
# diagonal, which should all be zeros), but whatever, it's cheap.
for a in range(number_of_cities):
for b in range(number_of_cities):
city_distances[a][b] = ((city_x[a]-city_x[b])**2 + (city_y[a]-city_y[b])**2 )**0.5
# create the array of cities in the order we're going to go through them
city_order = np.arange(city_distances.shape[0])
# tack on the first city to the end of the array, since that ensures a closed loop
city_order = np.append(city_order, city_order[0])
"""
Explanation: This code sets up everything we need
Given a number of cities, set up random x and y positions and calculate a table of distances between pairs of cities (used for calculating the total trip distance). Then set up an array that controls the order that the salesman travels between cities, and plots out the initial path.
End of explanation
"""
fig = plt.figure()
# Put your code here!
# number of steps we'll take
N_steps = 1000
step = [0]
distance = [calc_total_distance(city_distances,city_order)]
for i in range(N_steps):
swap1 = np.random.randint(1,city_order.shape[0]-2)
swap2 = np.random.randint(1,city_order.shape[0]-2)
orig_distance = calc_total_distance(city_distances,city_order)
new_city_order = np.copy(city_order)
hold = new_city_order[swap1]
new_city_order[swap1] = new_city_order[swap2]
new_city_order[swap2] = hold
new_distance = calc_total_distance(city_distances,new_city_order)
if new_distance < orig_distance:
city_order = np.copy(new_city_order)
step.append(i)
distance.append(new_distance)
plot_cities(city_order,city_x,city_y)
plt.plot(step,distance)
"""
Explanation: Put your code below this!
Your code should take some number of steps, doing the following at each step:
Randomly swap two cities in the array of cities (except for the first/last city)
Check the total distance traversed by the salesman
If the new ordering results in a shorter path, keep it. If not, throw it away.
Plot the shorter of the two paths (the original one or the new one)
Also, keep track of the steps and the minimum distance traveled as a function of number of steps and plot out the minimum distance as a function of step!
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.14/_downloads/plot_object_evoked.ipynb | bsd-3-clause | import os.path as op
import mne
"""
Explanation: The :class:Evoked <mne.Evoked> data structure: evoked/averaged data
End of explanation
"""
data_path = mne.datasets.sample.data_path()
fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')
evokeds = mne.read_evokeds(fname, baseline=(None, 0), proj=True)
print(evokeds)
"""
Explanation: The :class:Evoked <mne.Evoked> data structure is mainly used for storing
averaged data over trials. In MNE the evoked objects are created by averaging
epochs data with :func:mne.Epochs.average. Here we read the evoked dataset
from a file.
End of explanation
"""
evoked = mne.read_evokeds(fname, condition='Left Auditory')
evoked.apply_baseline((None, 0)).apply_proj()
print(evoked)
"""
Explanation: Notice that the reader function returned a list of evoked instances. This is
because you can store multiple categories into a single file. Here we have
categories of
['Left Auditory', 'Right Auditory', 'Left Visual', 'Right Visual'].
We can also use condition parameter to read in only one category.
End of explanation
"""
print(evoked.info)
print(evoked.times)
"""
Explanation: If you've gone through the tutorials of raw and epochs datasets, you're
probably already familiar with the :class:Info <mne.Info> attribute.
There is nothing new or special with the evoked.info. All the relevant
info is still there.
End of explanation
"""
print(evoked.nave) # Number of averaged epochs.
print(evoked.first) # First time sample.
print(evoked.last) # Last time sample.
print(evoked.comment) # Comment on dataset. Usually the condition.
print(evoked.kind) # Type of data, either average or standard_error.
"""
Explanation: The evoked data structure also contains some new attributes easily
accessible:
End of explanation
"""
data = evoked.data
print(data.shape)
"""
Explanation: The data is also easily accessible. Since the evoked data arrays are usually
much smaller than raw or epochs datasets, they are preloaded into the memory
when the evoked object is constructed. You can access the data as a numpy
array.
End of explanation
"""
print('Data from channel {0}:'.format(evoked.ch_names[10]))
print(data[10])
"""
Explanation: The data is arranged in an array of shape (n_channels, n_times). Notice
that unlike epochs, evoked object does not support indexing. This means that
to access the data of a specific channel you must use the data array
directly.
End of explanation
"""
evoked = mne.EvokedArray(data, evoked.info, tmin=evoked.times[0])
evoked.plot()
"""
Explanation: If you want to import evoked data from some other system and you have it in a
numpy array you can use :class:mne.EvokedArray for that. All you need is
the data and some info about the evoked data. For more information, see
tut_creating_data_structures.
End of explanation
"""
|
hannorein/rebound | ipython_examples/HybridIntegrationsWithMercurius.ipynb | gpl-3.0 | import math
import rebound, rebound.data
%matplotlib inline
sim = rebound.Simulation()
rebound.data.add_outer_solar_system(sim) # add some particles for testing
for i in range(1,sim.N):
sim.particles[i].m *= 50.
sim.integrator = "WHFast" # This will end badly!
sim.dt = sim.particles[1].P * 0.002 # Timestep a small fraction of innermost planet's period
sim.move_to_com()
E0 = sim.calculate_energy() # Calculate initial energy
rebound.OrbitPlot(sim);
"""
Explanation: Hybrid integrations with MERCURIUS
REBOUND comes with several integrators, each of which has its own advantages and disadvantages. MERCURIUS is a hybrid integrator that is very similar to the hybrid integrator in John Chambers' Mercury code (J. E. Chambers 1999). It uses a symplectic Wisdom-Holman integrator when particles are far apart from each other and switches over to a high order integrator during close encounters. Specifically, MERCURIUS uses the efficient WHFast and IAS15 integrators internally.
Let's start out by showcasing the problem with traditional fixed timestep integrators such as WHFast. We setup a simulation of the outer solar system and increase the masses of the planets by a factor of 50.
End of explanation
"""
sim.integrate(600*2.*math.pi)
E1 = sim.calculate_energy()
print("Relative energy error with WHFast: %f"%((E0-E1)/E0))
"""
Explanation: Let us integrate this system for a few hundred years. An instability will occur. We can then measure the energy error, which is a good estimate as to how accurate the integration was.
End of explanation
"""
sim = rebound.Simulation()
rebound.data.add_outer_solar_system(sim) # add some particles for testing
for i in range(1,sim.N):
sim.particles[i].m *= 50.
sim.integrator = "mercurius"
sim.dt = sim.particles[1].P * 0.002 # Timestep a small fraction of innermost planet's period
sim.move_to_com()
E0 = sim.calculate_energy() # Calculate initial energy
sim.integrate(600*2.*math.pi)
E1 = sim.calculate_energy()
print("Relative energy error with MERCURIUS: %e"%((E1-E0)/E0))
"""
Explanation: An energy error that large means we basically got it completely wrong. Let's try this again but use MERCURIUS.
End of explanation
"""
# Sets the minimal timestep to a fraction of the global timestep
sim.ri_ias15.min_dt = 1e-4 * sim.dt
"""
Explanation: As you can see, MERCURIUS is able to integrate this system with much better accuracy. When a close encounter occurs, it automatically (and smoothly!) switches to the IAS15 integrator. When there is no close encounter, you still get all the benefits in terms of speed and accuracy from a symplectic integrator.
There are a few options to adjust MERCURIUS. First of all, because it uses IAS15 internally, you may want to set a minimal timestep for IAS15. This ensures that IAS15 never stalls while it tries to resolve one very close encounter, and can be done with the following command:
End of explanation
"""
sim.ri_mercurius.hillfac = 5
"""
Explanation: You also may want to change the critical distance at which MERCURIUS switches over from pure WHFast to IAS15. This is expressed in units of Hill radii. The default is 3 Hill radii, in the following we change it to 5 Hill radii:
End of explanation
"""
|
kcyu1993/ML_course_kyu | labs/ex01/solutions/taskB.ipynb | mit | np.random.seed(10)
p, q = (np.random.rand(i, 2) for i in (4, 5))
p_big, q_big = (np.random.rand(i, 80) for i in (100, 120))
print(p, "\n\n", q)
"""
Explanation: Data Generation
End of explanation
"""
def naive(p, q):
result = np.zeros((p.shape[0], q.shape[0]))
for i in range(p.shape[0]):
for j in range(q.shape[0]):
tmp = 0
for k in range(p.shape[1]):
tmp += (p[i,k]-q[j,k])**2
result[i,j] = tmp
return np.sqrt(result)
def naive_2(p, q):
result = np.zeros((p.shape[0], q.shape[0]))
for i in range(p.shape[0]):
for j in range(q.shape[0]):
result[i,j] = np.sum((p[i]-q[j])**2)
return np.sqrt(result)
"""
Explanation: Solution
End of explanation
"""
rows, cols = np.indices((p.shape[0], q.shape[0]))
print(rows, end='\n\n')
print(cols)
print(p[rows.ravel()], end='\n\n')
print(q[cols.ravel()])
def with_indices(p, q):
rows, cols = np.indices((p.shape[0], q.shape[0]))
distances = np.sqrt(np.sum((p[rows.ravel(), :] - q[cols.ravel(), :])**2, axis=1))
return distances.reshape((p.shape[0], q.shape[0]))
def with_indices_2(p, q):
rows, cols = np.indices((p.shape[0], q.shape[0]))
distances = np.sqrt(np.sum((p[rows, :] - q[cols, :])**2, axis=2))
return distances
"""
Explanation: Use matching indices
Instead of iterating through indices, one can use them directly to parallelize the operations with Numpy.
End of explanation
"""
from scipy.spatial.distance import cdist
def scipy_version(p, q):
return cdist(p, q)
"""
Explanation: Use a library
scipy is the equivalent of matlab toolboxes and have a lot to offer. Actually the pairwise computation is part of the library through the spatial module.
End of explanation
"""
def tensor_broadcasting(p, q):
return np.sqrt(np.sum((p[:,np.newaxis,:]-q[np.newaxis,:,:])**2, axis=2))
"""
Explanation: Numpy Magic
End of explanation
"""
methods = [naive, naive_2, with_indices, with_indices_2, scipy_version, tensor_broadcasting]
timers = []
for f in methods:
r = %timeit -o f(p_big, q_big)
timers.append(r)
plt.figure(figsize=(10,6))
plt.bar(np.arange(len(methods)), [r.best*1000 for r in timers], log=False) # Set log to True for logarithmic scale
plt.xticks(np.arange(len(methods))+0.2, [f.__name__ for f in methods], rotation=30)
plt.xlabel('Method')
plt.ylabel('Time (ms)')
"""
Explanation: Compare methods
End of explanation
"""
|
mayankjohri/LetsExplorePython | Section 1 - Core Python/Chapter 06 - Functions/1. Functions.ipynb | gpl-3.0 | def caps(val):
"""
caps returns double the value of the provided value
"""
return val*2
a = caps("TEST ")
print(a)
print(caps.__doc__)
"""
Explanation: Functions
Functions are blocks of code identified by a name, which can receive ""predetermined"" parameters or not ;).
In Python, functions:
may or may not return objects.
can provide documentation using doc strings.
can have their properties changed (usually by decorators).
have their own namespace (local scope), and therefore may shadow definitions in the global scope.
allow parameters to be passed by name; in this case, the parameters can be passed in any order.
allow optional parameters (with pre-defined defaults), so if no value is provided, the pre-defined default is used.
Syntax:
python
def func_name(parameter_one, parameter_two=default_value):
"""
Doc String
"""
<code block>
return value
NOTE: The parameters with default value must be declared after the ones without default value.
End of explanation
"""
a = caps(1234)
print(a)
"""
Explanation: In the above example, we have caps as a function, which takes val as an argument and returns val * 2.
End of explanation
"""
def is_valid(data):
if 10 in data:
return True
return False
a = is_valid([10, 200, 33, "asf"])
print(a)
a = is_valid((10,))
print(a)
is_valid((10,))
a = is_valid((110,))
print(a)
def is_valid_new(data):
return 10 in data
print(is_valid_new([10, 200, 33, "asf"]))
a = is_valid_new((110,))
print(a)
"""
Explanation: Functions can return any data type, next example returns a boolean value.
End of explanation
"""
def fatorial(n):#{
n = n if n > 1 else 1
j = 1
for i in range(1, n + 1):
j = j * i
return j
#}
# Testing...
for i in range(1, 6):
print (i, '->', fatorial(i))
"""
Explanation: Example (factorial without recursion):
End of explanation
"""
def factorial(num):
"""Fatorial implemented with recursion."""
if num <= 1:
return 1
else:
return(num * factorial(num - 1))
# Testing factorial()
print (factorial(5))
# 5 * (4 * (3 * (2) * (1))
"""
Explanation: Example (factorial with recursion):
End of explanation
"""
def fib(n):
"""Fibonacci:
    fib(n) = fib(n - 1) + fib(n - 2) if n > 1
    fib(n) = 1 if n <= 1
"""
if n > 1:
return fib(n - 1) + fib(n - 2)
else:
return 1
# Show Fibonacci from 1 to 5
for i in [1, 2, 3, 4, 5]:
print (i, '=>', fib(i))
"""
Explanation: Example (Fibonacci series with recursion):
End of explanation
"""
def fib(n):
# the first two values
l = [1, 1]
# Calculating the others
for i in range(2, n + 1):
l.append(l[i -1] + l[i - 2])
return l[n]
# Show Fibonacci from 1 to 5
for i in [1, 2, 3, 4, 5]:
print (i, '=>', fib(i))
def test(a, b):
print(a, b)
return a + b
print(test(1, 2))
test(b=1, a=2)
def test_abc(a, b, c):
print(a, b, c)
return a + b + c
"""
Explanation: Example (Fibonacci series without recursion):
End of explanation
"""
test_abc(2, c=3, b=2)
test_abc(2, b=2, c=3)
try:
test_abc(2, a=12, c=3)
except Exception as e:
print(e)
"""
Explanation: python
test_abc(b=1, a=2, 3)
Output:
python
File "<ipython-input-10-e66702cbcb27>", line 2
test_abc(b=1, a=2, 3)
^
SyntaxError: positional argument follows keyword argument
NOTE: We cannot have non-keyword arguments after keyword arguments
End of explanation
"""
def test_new(a, b, c):
pass
"""
Explanation: Functions may also return nothing, as in the example below.
End of explanation
"""
def test(a, b):
print(a, b)
return a*a, b*b
x, a = test(2, 5)
print(x)
print(type(x))
print(a)
print(type(a))
print(type(test(2, 5)))
def test(a, b):
print(a, b)
return a*a, b*b, a*b
x = test(2 , 5)
print(x)
print(type(x))
def test(a, b):
print(a, b)
return a*a, b*b, "asdf"
x = test(2, 5)
print(x)
print(type(x))
def test(a=100, b=1000):
print(a, b)
return a, b
x = test(2, 5)
print(x)
print(test(10))
def test(a=100, b=1000):
print(a, b)
return a, b
print(test(b=10))
print(test(101))
def test(d, c, a=100, b=1000):
print(d, c, a, b)
return d, c, a, b
x = test(c=2, d=10, b=5)
print(x)
x = test(1, 2, 3, 4)
print(x)
print(test(10, 2))
"""
Explanation: Functions can also return multiple values, usually in the form of a tuple.
End of explanation
"""
def rgb_html(r=0, g=0, b=0):
"""Converts R, G, B to #RRGGBB"""
return '#%02x%02x%02x' % (r, g, b)
def html_rgb(color='#000000'):
"""Converts #RRGGBB em R, G, B"""
if color.startswith('#'): color = color[1:]
r = int(color[:2], 16)
g = int(color[2:4], 16)
b = int(color[4:], 16)
return r, g, b
print (rgb_html(200, 200, 255))
print (rgb_html(b=200, g=200, r=255))
print (html_rgb('#c8c8ff'))
"""
Explanation: Example (RGB conversion):
End of explanation
"""
def test(c, d, a=100, b=1000):
print(d, c, a, b)
return d, c, a, b
x = test(c=2, d=10, b=5)
print(x)
x = test(1, 2, 3, 4)
print(x)
print(test(10, 2))
"""
Explanation: Note: default arguments must always follow non-default arguments.
Example:
```python
def test(d, a=100, c, b=1000):
print(d, c, a, b)
return d, c, a, b
x = test(c=2, d=10, b=5)
print(x)
x = test(1, 2, 3, 4)
print(x)
print(test(10, 2))
```
Output:
python
File "<ipython-input-6-3d33b3561563>", line 1
def test(d, a=100, c, b=1000):
^
SyntaxError: non-default argument follows default argument
End of explanation
"""
# *args - positional arguments without names (collected into a tuple)
# **kargs - keyword arguments with names (collected into a dictionary)
def func(*args, **kargs):
print (args)
print (kargs)
func('weigh', 10, unit='k')
"""
Explanation: Observations:
The arguments with default value must come last, after the non-default arguments.
The default value for a parameter is calculated when the function is defined.
The arguments passed without an identifier are received by the function in the form of a tuple.
The arguments passed to the function with an identifier are received in the form of a dictionary.
The parameters passed to the function with an identifier should come at the end of the parameter list.
Example of how to get all parameters:
End of explanation
"""
def func(*args, **kargs):
    print (args)
    print (kargs)
a = {
"name": "Mohan kumar Shah",
"age": 24 + 1
}
func('weigh', 10, unit='k', val=a)
def func(*args):
print(args)
func('weigh', 10, "test")
data = [(4, 3), (5, 1), (7, 2), (9, 0)]
# Comparing by the last element
def _cmp(x, y):
return cmp(x[-1], y[-1])
print ('List:', data)
"""
Explanation: In the example, kargs will receive the named arguments and args will receive the others.
The interpreter has some builtin functions defined, including sorted(), which orders sequences, and cmp(), which makes comparisons between two arguments and returns -1 if the first element is greater, 0 (zero) if they are equal, or 1 if the latter is higher. This function is used by the routine of ordering, a behavior that can be modified.
Example:
End of explanation
"""
print (eval('12. / 2 + 3.3'))
def listing(lst):
for l in lst:
print(l)
d = {"Mayank Johri":40, "Janki Mohan Johri":68}
listing(d)
d = {
"name": "Mohan",
"age": 24
}
a = {
"name": "Mohan kumar Shah",
"age": 24 + 1
}
def process_dict(d=a):
print(d)
process_dict(d)
process_dict()
def test(a=[]):
a.append(1)
print(a)
test()
test()
test()
def test(a=None):
if a == None:
a = []
a.append(1)
print(a)
test()
test()
test()
"""
Explanation: Python also has a builtin function eval(), which evaluates code (source or object) and returns the value.
Example:
End of explanation
"""
|
sbenthall/bigbang | examples/experimental_notebooks/Single Word Trend.ipynb | agpl-3.0 | df = pd.DataFrame(columns=["MessageId","Date","From","In-Reply-To","Count"])
for row in archives[0].data.iterrows():
try:
w = row[1]["Body"].replace("'", "")
k = re.sub(r'[^\w]', ' ', w)
k = k.lower()
t = nltk.tokenize.word_tokenize(k)
subdict = {}
count = 0
for g in t:
try:
word = st.stem(g)
except:
print g
pass
if word == checkword:
count += 1
if count == 0:
continue
else:
subdict["MessageId"] = row[0]
subdict["Date"] = row[1]["Date"]
subdict["From"] = row[1]["From"]
subdict["In-Reply-To"] = row[1]["In-Reply-To"]
subdict["Count"] = count
df = df.append(subdict,ignore_index=True)
except:
if row[1]["Body"] is None:
print '!!! Detected an email with an empty Body field...'
else: print 'error'
df[:5] # dataframe of information about the particular word.
"""
Explanation: You'll need to download some resources for NLTK (the natural language toolkit) in order to do the kind of processing we want on all the mailing list text. In particular, for this notebook you'll need punkt, the Punkt Tokenizer Models.
To download, from an interactive Python shell, run:
import nltk
nltk.download()
And in the graphical UI that appears, choose "punkt" from the All Packages tab and Download.
End of explanation
"""
df.groupby([df.Date.dt.year, df.Date.dt.month]).agg({'Count':np.sum}).plot(y='Count')
"""
Explanation: Group the dataframe by the month and year, and aggregate the counts for the checkword during each month to get a quick histogram of how frequently that word has been used over time.
End of explanation
"""
|
sysid/nbs | LP/Introduction-to-linear-programming/Introduction to Linear Programming with Python - Part 6.ipynb | mit | def make_io_and_constraint(y1, x1, x2, target_x1, target_x2):
"""
Returns a list of constraints for a linear programming model
that will constrain y1 to 1 when
x1 = target_x1 and x2 = target_x2;
where target_x1 and target_x2 are 1 or 0
"""
binary = [0,1]
assert target_x1 in binary
assert target_x2 in binary
    if target_x1 == 1 and target_x2 == 1:
return [
y1 >= x1 + x2 - 1,
y1 <= x1,
y1 <= x2
]
    elif target_x1 == 1 and target_x2 == 0:
return [
y1 >= x1 - x2,
y1 <= x1,
y1 <= (1 - x2)
]
    elif target_x1 == 0 and target_x2 == 1:
return [
y1 >= x2 - x1,
y1 <= (1 - x1),
y1 <= x2
]
else:
return [
y1 >= - (x1 + x2 -1),
y1 <= (1 - x1),
y1 <= (1 - x2)
]
"""
Explanation: Introduction to Linear Programming with Python - Part 6
Mocking conditional statements using binary constraints
In part 5, I mentioned that in some cases it is possible to construct conditional statements using binary constraints.
We will explore not only conditional statements using binary constraints, but combining them with logical operators, 'and' and 'or'.
First we'll work through some theory, then a real world example as an extension of part 5's example at the end.
Conditional statement
To start simply, if we have the binary constraint x<sub>1</sub> and we want:
python
if x1 == 1:
y1 == 1
elif x1 == 0:
y1 == 0
We can achieve this easily using the following constraint:
python
y1 == x1
However, if we wanted the opposite:
python
if x1 == 1:
y1 == 0
elif x1 == 0:
y1 == 1
Given that they most both be 1 or 0, we just need the following constraint:
python
x1 + y1 == 1
Logical 'AND' operator
Now for something a little more complex, we can coerce a particular binary constraint to be 1 based on the states of 2 other binary constraints.
If we have binary constraints x<sub>1</sub> and x<sub>2</sub> and we want to achieve the following:
python
if x1 == 1 and x2 == 0:
y1 == 1
else:
y1 == 0
So that y<sub>1</sub> is only 1 in the case that x<sub>1</sub> is 1 and x<sub>2</sub> is 0. We can use the following 3 constraints to achieve this:
python
[
y1 >= x1 - x2,
y1 <= x1,
y1 <= (1 - x2)
]
We'll take a moment to deconstruct this. In our preferred case that x<sub>1</sub> = 1 and x<sub>2</sub> = 0, the three statements resolve to:
* y<sub>1</sub> ≥ 1
* y<sub>1</sub> ≤ 1
* y<sub>1</sub> ≤ 1
The only value of $y_1$ that fulfils each of these is 1.
In any other case, however, y<sub>1</sub> will be zero. Let's take another example, say x<sub>1</sub> = 0 and x<sub>2</sub> = 1. This resolves to:
* y<sub>1</sub> ≥ -1
* y<sub>1</sub> ≤ 0
* y<sub>1</sub> ≤ 0
Given that y<sub>1</sub> is a binary variable and must be 0 or 1, the only value of y<sub>1</sub> that can fulfil each of these is 0.
You can construct 3 constraints so that y<sub>1</sub> is equal to 1, only in the case you're interested in out of the 4 following options:
* x<sub>1</sub> = 1 and x<sub>2</sub> = 1
* x<sub>1</sub> = 1 and x<sub>2</sub> = 0
* x<sub>1</sub> = 0 and x<sub>2</sub> = 1
* x<sub>1</sub> = 0 and x<sub>2</sub> = 0
I have created a function for exactly this purpose to cover all cases:
End of explanation
"""
import pandas as pd
import pulp
factories = pd.DataFrame.from_csv('csv/factory_variables.csv', index_col=['Month', 'Factory'])
factories
demand = pd.DataFrame.from_csv('csv/monthly_demand.csv', index_col=['Month'])
demand
"""
Explanation: Logical 'OR' operator
This is all well and good for the 'and' logical operator. What about the 'or' logical operator.
If we would like the following:
python
if x1 == 1 or x2 == 1:
y1 == 1
else:
y1 == 0
We can use the following linear constraints:
python
y1 <= x1 + x2
y1 * 2 >= x1 + x2
So that:
* if x<sub>1</sub> is 1 and x<sub>2</sub> is 1:
* y<sub>1</sub> ≤ 2
* 2y<sub>1</sub> ≥ 2
* y<sub>1</sub> must equal 1
* if x<sub>1</sub> is 1 and x<sub>2</sub> is 0:
* y<sub>1</sub> ≤ 1
* 2y<sub>1</sub> ≥ 1
* y<sub>1</sub> must equal 1
* if x<sub>1</sub> is 0 and x<sub>2</sub> is 1:
* y<sub>1</sub> ≤ 1
* 2y<sub>1</sub> ≥ 1
* y<sub>1</sub> must equal 1
* if x<sub>1</sub> is 0 and x<sub>2</sub> is 0:
* y<sub>1</sub> ≤ 0
* 2y<sub>1</sub> ≥ 0
* y<sub>1</sub> must equal 0
Again, we'll consider the alternative option:
python
if x1 == 0 or x2 == 0:
y1 == 1
else:
y1 == 0
We can use the following linear constraints:
python
y1 * 2 <= 2 - (x1 + x2)
y1 >= 1 - (x1 + x2)
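We can sanity-check the first OR linearisation over all binary combinations with plain Python (no solver needed): for each (x1, x2), exactly one binary y1 satisfies both constraints, and it equals x1 or x2.

```python
def or_feasible(y1, x1, x2):
    # The two linear constraints from above, evaluated on 0/1 integers
    return y1 <= x1 + x2 and 2 * y1 >= x1 + x2

# Enumerate the truth table: the unique feasible y1 is the logical OR
for x1 in (0, 1):
    for x2 in (0, 1):
        feasible = [y for y in (0, 1) if or_feasible(y, x1, x2)]
        assert feasible == [1 if (x1 or x2) else 0]
        print(x1, x2, '->', feasible[0])
```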
An Example - Scheduling Example Extended
In our last example, we explored the scheduling of 2 factories.
Both factories had 2 costs:
* Fixed Costs - Costs incurred while the factory is running
* Variable Costs - Cost per unit of production
We're going to introduce a third cost - Start up cost.
This will be a cost incurred by turning on the machines at one of the factories.
In this example, our start-up costs will be:
* Factory A - €20,000
* Factory B - €400,000
Let's start by reminding ourselves of the input data.
End of explanation
"""
# Production
production = pulp.LpVariable.dicts("production",
((month, factory) for month, factory in factories.index),
lowBound=0,
cat='Integer')
# Factory Status, On or Off
factory_status = pulp.LpVariable.dicts("factory_status",
((month, factory) for month, factory in factories.index),
cat='Binary')
# Factory switch on or off
switch_on = pulp.LpVariable.dicts("switch_on",
((month, factory) for month, factory in factories.index),
cat='Binary')
"""
Explanation: We'll begin by defining our decision variables, we have an additional binary variable for switching on the factory.
End of explanation
"""
# Instantiate the model
model = pulp.LpProblem("Cost minimising scheduling problem", pulp.LpMinimize)
# Select index on factory A or B
factory_A_index = [tpl for tpl in factories.index if tpl[1] == 'A']
factory_B_index = [tpl for tpl in factories.index if tpl[1] == 'B']
# Define objective function
model += pulp.lpSum(
[production[m, f] * factories.loc[(m, f), 'Variable_Costs'] for m, f in factories.index]
+ [factory_status[m, f] * factories.loc[(m, f), 'Fixed_Costs'] for m, f in factories.index]
+ [switch_on[m, f] * 20000 for m, f in factory_A_index]
+ [switch_on[m, f] * 400000 for m, f in factory_B_index]
)
"""
Explanation: We instantiate our model and define our objective function, including start-up costs.
End of explanation
"""
# Production in any month must be equal to demand
months = demand.index
for month in months:
model += production[(month, 'A')] + production[(month, 'B')] == demand.loc[month, 'Demand']
# Production in any month must be between minimum and maximum capacity, or zero.
for month, factory in factories.index:
min_production = factories.loc[(month, factory), 'Min_Capacity']
max_production = factories.loc[(month, factory), 'Max_Capacity']
model += production[(month, factory)] >= min_production * factory_status[month, factory]
model += production[(month, factory)] <= max_production * factory_status[month, factory]
# Factory B is off in May
model += factory_status[5, 'B'] == 0
model += production[5, 'B'] == 0
"""
Explanation: Now we begin to build up our constraints as in Part 5
End of explanation
"""
for month, factory in factories.index:
    # In month 1, if the factory is on, we assume it switched on that month
if month == 1:
model += switch_on[month, factory] == factory_status[month, factory]
# In other months, if the factory is on in the current month AND off in the previous month, switch on = 1
else:
model += switch_on[month, factory] >= factory_status[month, factory] - factory_status[month-1, factory]
model += switch_on[month, factory] <= 1 - factory_status[month-1, factory]
model += switch_on[month, factory] <= factory_status[month, factory]
"""
Explanation: But now we want to add in our constraints for switching on.
A factory switches on if:
* It is off in the previous month (m-1)
* AND it is on in the current month (m).
As we don't know if the factory is on before month 0, we'll assume that the factory has switched on if it is on in month 1.
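These three constraints can be verified by enumerating every binary combination; the sketch below (an illustrative addition, not from the original notebook) confirms that they pin the switch-on variable to exactly 1 when the factory goes from off to on, and 0 otherwise:

```python
# Enumerate all binary (previous, current) status pairs and confirm the
# constraints  s >= curr - prev,  s <= 1 - prev,  s <= curr  admit exactly
# one feasible binary s: 1 only when the factory goes from off to on.
def feasible_switch(prev, curr):
    feasible = [s for s in (0, 1)
                if s >= curr - prev and s <= 1 - prev and s <= curr]
    assert len(feasible) == 1
    return feasible[0]

for prev in (0, 1):
    for curr in (0, 1):
        assert feasible_switch(prev, curr) == curr * (1 - prev)
```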
End of explanation
"""
model.solve()
pulp.LpStatus[model.status]
output = []
for month, factory in production:
var_output = {
'Month': month,
'Factory': factory,
'Production': production[(month, factory)].varValue,
'Factory Status': factory_status[(month, factory)].varValue,
'Switch On': switch_on[(month, factory)].varValue
}
output.append(var_output)
output_df = pd.DataFrame.from_records(output).sort_values(['Month', 'Factory'])
output_df.set_index(['Month', 'Factory'], inplace=True)
output_df
"""
Explanation: We'll then solve our model
End of explanation
"""
|
JannesKlaas/MLiFC | Week 1/Ch. 5 - Multiclass Regression.ipynb | mit | # Package imports
# Matplotlib is a matlab like plotting library
import matplotlib
import matplotlib.pyplot as plt
# Numpy handles matrix operations
import numpy as np
# SciKitLearn is a useful machine learning utilities library
import sklearn
# The sklearn dataset module helps generating datasets
import sklearn.datasets
import sklearn.linear_model
from sklearn.preprocessing import OneHotEncoder
from sklearn.metrics import accuracy_score
# Display plots inline and change default figure size
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
# Helper function to plot a decision boundary.
# If you don't fully understand this function don't worry, it just generates decision boundary plot
def plot_decision_boundary(pred_func):
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    # Predict the function value for the whole grid
Z = pred_func(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Accent)
"""
Explanation: Ch. 5 - Multiclass Regression
In this chapter we will take a big step from a binary classification which can only differentiate between two classes to multiclass classification which can differentiate between any number of classes. As a brief motivation check out this clip from the HBO show 'Silicon Valley':
The app in the show can only differentiate between Hot Dog and Not Hot Dog, a classic binary classification task. The HBO team actually made a Hot Dog / Not Hot Dog app, but of course it is not very useful if our classifier can only separate two things. We need a way of doing multiclass regression.
This week's challenge will also involve multiclass regression with the Zalando Fashion MNIST dataset. In this chapter we will take a closer look at the underlying technology with a simpler problem.
End of explanation
"""
# Generate a dataset and plot it
np.random.seed(0)
X, y = sklearn.datasets.make_blobs(n_samples=200,centers=3,cluster_std=0.8)
plt.scatter(X[:,0], X[:,1], s=40, c=y, cmap=plt.cm.Accent)
"""
Explanation: The problem
We have three distinct groups of customers: men, women and children. Our fantastic feature engineering team has already come up with two metrics to separate them. What is needed is an automatic way to distinguish whether a customer is a man, a woman or a child. Our task today is to build a neural network that can handle this separation.
End of explanation
"""
# Print data for the first ten customers
print('X')
print(X[:10])
# Print the classes for the first ten customers
print('y:')
print(y[:10])
"""
Explanation: One hot encoding
Let's have a look at the data.
End of explanation
"""
# Reshape from array to vector
y = y.reshape(200,1)
# Generate one hot encoding
enc = OneHotEncoder()
onehot = enc.fit_transform(y)
# Convert to numpy vector
y = onehot.toarray()
"""
Explanation: The input matrix X is already well formatted and ready for our neural network, but there is a problem with y. The classes are coded from 0 to 2, yet they are categorical. We need a categorical representation like this:
$$0 \rightarrow man \rightarrow \begin{pmatrix}1 & 0 & 0\end{pmatrix}$$
$$1 \rightarrow woman \rightarrow \begin{pmatrix}0 & 1 & 0\end{pmatrix}$$
$$2 \rightarrow child \rightarrow \begin{pmatrix}0 & 0 & 1\end{pmatrix}$$
This encoding is called one hot encoding since one element of the resulting vector is 'hot', representing the category. Luckily, SciKitLearn has a handy function for this purpose:
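For intuition, the transformation itself is tiny; here is a plain-Python equivalent (for illustration only, the notebook uses sklearn's OneHotEncoder):

```python
# A hand-rolled one-hot encoder (illustrative stand-in for OneHotEncoder)
def one_hot(labels, num_classes):
    """Map integer class labels to one-hot row vectors."""
    vectors = []
    for label in labels:
        row = [0] * num_classes
        row[label] = 1
        vectors.append(row)
    return vectors

encoded = one_hot([0, 1, 2], 3)
# encoded == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```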
End of explanation
"""
def softmax(z):
'''
Calculates the softmax activation of a given input x
See: https://en.wikipedia.org/wiki/Softmax_function
'''
#Calculate exponent term first
exp_scores = np.exp(z)
return exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
"""
Explanation: Our neural network will therefore also have an output dimensionality of 3:
Softmax
Hawk eyes will have spotted that our new network also uses a new activation function for the final layer: the softmax activation. The softmax function is a generalized version of the sigmoid function. It works with any output vector size, enabling it to handle any number of classes $K$. Its output can be interpreted as the probability distribution over the different classes for a given example. For example, if the output of softmax for a given example is $\begin{pmatrix}0.6 & 0.3 & 0.1\end{pmatrix}$, we can interpret it as a 60% probability that the customer is a man, a 30% probability that it is a woman and a 10% probability that it is a child. The softmax function gets computed as:
$$softmax(z)_j = \frac{e^{z_j}}{\sum_{k=1}^K e^{z_k}} \quad \text{for } j = 1,...,K$$
Or in python code:
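One practical note not covered above: the exponentials can overflow for large inputs, so implementations commonly subtract the row maximum first, which leaves the result unchanged. A plain-Python sketch of that numerically stable variant (an addition for illustration, not the notebook's version):

```python
import math

def softmax_stable(z):
    """Softmax of one row; shifting by max(z) avoids overflow without
    changing the result."""
    shift = max(z)
    exps = [math.exp(v - shift) for v in z]
    total = sum(exps)
    return [v / total for v in exps]

probs = softmax_stable([2.0, 1.0, 0.1])
assert abs(sum(probs) - 1.0) < 1e-9    # a valid probability distribution
assert probs[0] > probs[1] > probs[2]  # the order of the logits is preserved
softmax_stable([1000.0, 0.0])          # no OverflowError thanks to the shift
```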
End of explanation
"""
def softmax_loss(y,y_hat):
'''
Calculates the generalized logistic loss between a prediction y_hat and the labels y
See: http://ufldl.stanford.edu/tutorial/supervised/SoftmaxRegression/
We need to clip values that get too close to zero to avoid zeroing out.
Zeroing out is when a number gets so small that the computer replaces it with 0.
Therefore, we clip numbers to a minimum value.
'''
    # Clipping value
minval = 0.000000000001
# Number of samples
m = y.shape[0]
# Loss formula, note that np.sum sums up the entire matrix and therefore does the job of two sums from the formula
loss = -1/m * np.sum(y * np.log(y_hat.clip(min=minval)))
return loss
"""
Explanation: Generalized loss function
Since we generalized the activation function we also have to generalize the loss function. The generalized loss function working with any vector with $K$ categories is as follows:
$$
\begin{aligned}
L(y,\hat{y}) = - \frac{1}{m} \sum_{i=1}^m \sum_{j=1}^K y_{i,j} \log\hat{y}_{i,j}
\end{aligned}
$$
You can show that this reduces to the log loss function from earlier in the binary case, $K = 2$, where the two one-hot entries play the roles of $y$ and $1 - y$.
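As a quick numeric sanity check (illustrative, not part of the notebook), with $K = 2$ and one-hot labels the double sum agrees with the familiar binary cross-entropy:

```python
import math

def generalized_loss(y, y_hat):
    # -1/m * sum over examples and classes of y * log(y_hat)
    m = len(y)
    return -sum(yi * math.log(pi)
                for row_y, row_p in zip(y, y_hat)
                for yi, pi in zip(row_y, row_p)) / m

def binary_log_loss(y, p):
    # the two-class log loss written with a single probability per example
    return -sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
                for yi, pi in zip(y, p)) / len(y)

labels = [1, 0, 1]
probs = [0.9, 0.2, 0.7]
one_hot_y = [[0, 1] if yi == 1 else [1, 0] for yi in labels]
two_col_p = [[1 - pi, pi] for pi in probs]
assert abs(generalized_loss(one_hot_y, two_col_p)
           - binary_log_loss(labels, probs)) < 1e-12
```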
In python code it looks like this:
End of explanation
"""
# Log loss derivative, equal to softmax loss derivative
def loss_derivative(y,y_hat):
'''
Calculates the gradient (derivative) of the loss between point y and y_hat
See: https://stats.stackexchange.com/questions/219241/gradient-for-logistic-loss-function
'''
return (y_hat-y)
"""
Explanation: Generalized loss derivative
Since element-wise subtraction works the same way regardless of the dimensionality of the matrices involved, the derivative of the generalized loss function is unchanged:
$$\frac{dL(y,\hat{y})}{d\hat{y}} = (\hat{y} - y)$$
Or in python code
End of explanation
"""
def tanh_derivative(x):
'''
    Calculates the derivative of the tanh activation. Note that it expects the tanh *output* (a = tanh(z)), since d/dz tanh(z) = 1 - tanh(z)^2.
See: https://socratic.org/questions/what-is-the-derivative-of-tanh-x
'''
return (1 - np.power(x, 2))
"""
Explanation: Building the rest of the network
After we changed the activation function and the loss function and its derivative, the rest of the network does not change very much. We just need to replace the functions in our forward and backward propagation code
End of explanation
"""
def forward_prop(model,a0):
'''
Forward propagates through the model, stores results in cache.
See: https://stats.stackexchange.com/questions/147954/neural-network-forward-propagation
A0 is the activation at layer zero, it is the same as X
'''
# Load parameters from model
W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2']
# Linear step
z1 = a0.dot(W1) + b1
# First activation function
a1 = np.tanh(z1)
# Second linear step
z2 = a1.dot(W2) + b2
# Second activation function
a2 = softmax(z2)
cache = {'a0':a0,'z1':z1,'a1':a1,'z2':z2,'a2':a2}
return cache
"""
Explanation: In forward propagation the softmax function replaces the sigmoid function
End of explanation
"""
def backward_prop(model,cache,y):
'''
Backward propagates through the model to calculate gradients.
Stores gradients in grads dictionary.
See: https://en.wikipedia.org/wiki/Backpropagation
'''
# Load parameters from model
W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2']
# Load forward propagation results
a0,a1, a2 = cache['a0'],cache['a1'],cache['a2']
# Get number of samples
m = y.shape[0]
# Backpropagation
# Calculate loss derivative with respect to output
dz2 = loss_derivative(y=y,y_hat=a2)
# Calculate loss derivative with respect to second layer weights
dW2 = 1/m*(a1.T).dot(dz2)
# Calculate loss derivative with respect to second layer bias
db2 = 1/m*np.sum(dz2, axis=0)
# Calculate loss derivative with respect to first layer
dz1 = np.multiply(dz2.dot(W2.T) ,tanh_derivative(a1))
# Calculate loss derivative with respect to first layer weights
dW1 = 1/m*np.dot(a0.T, dz1)
# Calculate loss derivative with respect to first layer bias
db1 = 1/m*np.sum(dz1, axis=0)
# Store gradients
grads = {'dW2':dW2,'db2':db2,'dW1':dW1,'db1':db1}
return grads
"""
Explanation: Since the derivative of the loss function did not change, nothing changes in the backward propagation
End of explanation
"""
def initialize_parameters(nn_input_dim,nn_hdim,nn_output_dim):
'''
Initializes weights with random number between -1 and 1
Initializes bias with 0
Assigns weights and parameters to model
'''
    # First layer weights, drawn uniformly in [-1, 1] (np.random.rand, to match the docstring)
    W1 = 2 * np.random.rand(nn_input_dim, nn_hdim) - 1
    # First layer bias
    b1 = np.zeros((1, nn_hdim))
    # Second layer weights, drawn uniformly in [-1, 1]
    W2 = 2 * np.random.rand(nn_hdim, nn_output_dim) - 1
# Second layer bias
b2 = np.zeros((1, nn_output_dim))
# Package and return model
model = { 'W1': W1, 'b1': b1, 'W2': W2, 'b2': b2}
return model
def update_parameters(model,grads,learning_rate):
'''
    Updates parameters according to the gradient descent algorithm
See: https://en.wikipedia.org/wiki/Gradient_descent
'''
# Load parameters
W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2']
# Update parameters
W1 -= learning_rate * grads['dW1']
b1 -= learning_rate * grads['db1']
W2 -= learning_rate * grads['dW2']
b2 -= learning_rate * grads['db2']
# Store and return parameters
model = { 'W1': W1, 'b1': b1, 'W2': W2, 'b2': b2}
return model
"""
Explanation: Initializing and updating parameters was also done the exact same way
End of explanation
"""
def predict(model, x):
'''
    Predicts the most likely class (0 to 2) for a given input X
'''
# Do forward pass
c = forward_prop(model,x)
#get y_hat
y_hat = np.argmax(c['a2'], axis=1)
return y_hat
"""
Explanation: Making predictions
To make predictions, we do not need the probabilities of all categories, only the most likely category. We can get this with the argmax() command from numpy, which gives us back the category with the highest likelihood. This also turns our one-hot vector back into a categorical representation with values 0 to 2, so when we calculate the accuracy, we also need to convert y back to its original representation.
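As a tiny plain-Python illustration of the idea (not from the notebook), picking the index of the largest probability in each row:

```python
def argmax_rows(prob_rows):
    """Return the index of the largest entry in each row."""
    return [row.index(max(row)) for row in prob_rows]

predictions = argmax_rows([[0.6, 0.3, 0.1],
                           [0.1, 0.2, 0.7],
                           [0.2, 0.5, 0.3]])
# predictions == [0, 2, 1]
```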
End of explanation
"""
def train(model,X_,y_,learning_rate, epochs=20000, print_loss=False):
# Gradient descent. Loop over epochs
for i in range(0, epochs):
# Forward propagation
cache = forward_prop(model,X_)
#a1, probs = cache['a1'],cache['a2']
# Backpropagation
grads = backward_prop(model,cache,y_)
# Gradient descent parameter update
# Assign new parameters to the model
model = update_parameters(model=model,grads=grads,learning_rate=learning_rate)
        # Print loss & accuracy every 100 iterations
if print_loss and i % 100 == 0:
a2 = cache['a2']
print('Loss after iteration',i,':',softmax_loss(y_,a2))
y_hat = predict(model,X_)
y_true = y_.argmax(axis=1)
print('Accuracy after iteration',i,':',accuracy_score(y_pred=y_hat,y_true=y_true)*100,'%')
return model
"""
Explanation: The training process does not change.
End of explanation
"""
# Hyper parameters
hidden_layer_size = 3
# I picked this value because it showed good results in my experiments
learning_rate = 0.01
# Initialize the parameters to random values. We need to learn these.
np.random.seed(0)
# This is what we return at the end
model = initialize_parameters(nn_input_dim=2, nn_hdim=hidden_layer_size, nn_output_dim=3)
model = train(model,X,y,learning_rate=learning_rate,epochs=1000,print_loss=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(model,x))
plt.title("Decision Boundary for hidden layer size 3")
"""
Explanation: We will train this model with a hidden layer size of 3 and a learning rate of 0.01; feel free to try different values.
End of explanation
"""
|
NYUDataBootcamp/Projects | UG_F16/Qian-DevEconCorrelation .ipynb | mit | import pandas as pd # data package
import matplotlib.pyplot as plt # graphics
import seaborn as sns # seaborn graphics package
import numpy as np # foundation for pandas
import sys # system module
import datetime as dt # date and time module
%matplotlib inline
"""
Explanation: What factors correlate with economic development?
December 2016
Author: George Qian
Contact: george.qian@stern.nyu.edu
Introduction
Many factors go into transforming a nation from a third-world country into a first-world country. As the saying goes, "correlation does not equal causation", so in this project I will only be looking at the correlation between a variable in development, such as foreign direct investment, and the country's GDP/GNI growth. I'm going to determine which of the factors I've chosen correlates most closely with a country's economic development, and I'm going to explore the correlation between those factors and the country's wealth distribution. The correlations could be further explored to determine whether there is causation.
Packages Imported
End of explanation
"""
#Importing the data from github
url = 'https://raw.githubusercontent.com/ghq201/databootcamp/master/Project%20Data.csv'
wb = pd.read_csv(url)
wb.head(153)
"""
Explanation: Dataset
Data on developing countries are usually difficult to gather, as most of those countries lack census and other data-collection infrastructure. Out of all the global agencies, the World Bank has one of the most extensive collections of data for developing countries. The data for this report were collected from the World Bank DataBank. I downloaded the datasets I needed and re-uploaded them to GitHub so others could also have access to the data used.
Countries Chosen:
Burundi, Djibouti, Ethiopia, Kenya, Madagascar, Tanzania, Uganda, Zambia, and Zimbabwe
Factors:
Foreign direct investment, net inflows
Arable land
Central government debt, total
Children out of school
Female genital mutilation prevalence
Fertilizer consumption
Firms expected to give gifts in meetings with tax officials
Health expenditure per capita, PPP
Literacy rate, adult male
Net ODA received per capita
Economic Development Indicators:
GDP
GNI
Wealth Distribution
GINI Coefficient
End of explanation
"""
#Setting the index to Series and Country
wb=wb.set_index(['Series Name', 'Country Name'])
#Removing missing values
wb=wb.replace(to_replace=['..'], value=[np.nan]).head(152)
#Converting objects to float as for some reason the type still came out as object after .replace
for i in range(2001,2016):
str_i=str(i)
wb[str_i]=wb[str_i].apply(pd.to_numeric)
wb.dtypes
#Transposing the dataset
wb=wb.T.head(152)
wb.head(152)
#separating the datasets
FDI=wb['Foreign direct investment, net inflows (BoP, current US$)']
Arable=wb['Arable land (% of land area)']
ArablePP=wb['Arable land (hectares per person)']
CGDebt=wb['Central government debt, total (% of GDP)']
CooSchool=wb['Children out of school (% of primary school age)']
FemaleGM=wb['Female genital mutilation prevalence (%)']
FertCons=wb['Fertilizer consumption (kilograms per hectare of arable land)']
Firms=wb['Firms expected to give gifts in meetings with tax officials (% of firms)']
GDP=wb['GDP (constant LCU)']
GDPPC=wb['GDP per capita (constant LCU)']
GINI=wb['GINI index (World Bank estimate)']
GNI=wb['GNI (constant LCU)']
GNIPC=wb['GNI per capita (constant LCU)']
HealthEx=wb['Health expenditure per capita, PPP (constant 2011 international $)']
LiteracyM=wb['Literacy rate, adult male (% of males ages 15 and above)']
LiteracyF=wb['Literacy rate, adult female (% of females ages 15 and above)']
ODA=wb['Net ODA received per capita (current US$)']
Factors = [FDI,Arable,ArablePP,CGDebt,CooSchool,FemaleGM,FertCons,Firms,HealthEx,LiteracyM,LiteracyF,ODA]
Economic_Indicator = [GDP,GDPPC,GINI,GNI,GNIPC]
"""
Explanation: Cleaning the dataset
As expected, there is a lot of missing data for each country. Some countries have more data than others, and most only have data for a couple of years for a given variable.
End of explanation
"""
fig, ax = plt.subplots(figsize=(12,7))
FDIGraph=FDI.plot(ax=ax,kind='line')
FDIGraph.set(ylabel="net inflows (BoP, current US$)", xlabel="Year")
FDIGraph.set_title('Foreign Direct Investment',fontsize= 30)
"""
Explanation: Country Trends
Here are some important trends that I have decided to look at for each country's development.
End of explanation
"""
fig, ax = plt.subplots(figsize=(12,7))
GDPGraph=GDPPC.plot(ax=ax,kind='line')
GDPGraph.set(ylabel="constant LCU", xlabel="Year")
GDPGraph.set_title('GDP Per Capita',fontsize= 30)
"""
Explanation: There has been a general upward trend for foreign direct investment, which could prove beneficial to a country's growth.
End of explanation
"""
fig, ax = plt.subplots(figsize=(12,7))
ODAGraph=ODA.plot(ax=ax,kind='line')
ODAGraph.set(ylabel="current US$", xlabel="Year")
ODAGraph.set_title('ODA Received Per Capita',fontsize= 30)
"""
Explanation: Over the past 15 years, Tanzania and Burundi have experienced a steady pace of growth, while other East African countries have not been as successful.
End of explanation
"""
FDI.corrwith(GDPPC)
ODA.corrwith(GDPPC)
#Creating a dictionary of variables to correlation mean
Factorname = ['FDI','Arable Land','Arable Land per person','Central Government Debt','Childern out of School',
'Female Genital Mutilation','Fertilizer Consumption','Firms Expected to Give Gifts','Health Expenditure',
'Male Literacy','Female Literacy','ODA']
GDPPC_Correlation = {}
for i in range(0,12):
corr=Factors[i].corrwith(GDPPC)
s=float(corr.mean())
GDPPC_Correlation[Factorname[i]]=s
GNIPC_Correlation = {}
for i in range(0,12):
corr=Factors[i].corrwith(GNI)
s=float(corr.mean())
GNIPC_Correlation[Factorname[i]]=s
GINIPC_Correlation = {}
for i in range(0,12):
corr=Factors[i].corrwith(GINI)
s=float(corr.mean())
GINIPC_Correlation[Factorname[i]]=s
#Converting the dictionaries to dataframe
GINIC=pd.DataFrame.from_dict(GINIPC_Correlation,orient='index')
GNIC=pd.DataFrame.from_dict(GNIPC_Correlation,orient='index')
GDPC=pd.DataFrame.from_dict(GDPPC_Correlation,orient='index')
Correlation=pd.concat([GDPC,GNIC,GINIC],axis=1)
Correlation.columns = ["GDP Correlation", "GNI Correlation", "GINI Correlation"]
print (Correlation)
"""
Explanation: ODA received per capita has been fairly constant over the past fifteen years
Correlation Analysis
Here we are looking at the correlations between the various factors and the economic development indicators. I will first do a country-by-country correlation analysis on two factors whose benefit to a country is hotly debated: FDI and ODA received. Afterwards, I'm going to take a look at the average of the correlations for each factor.
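For reference, corrwith computes pairwise Pearson correlations by default; a minimal plain-Python version of that coefficient (illustrative only) looks like this:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sy = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sx * sy)

assert abs(pearson_r([1, 2, 3], [2, 4, 6]) - 1.0) < 1e-9  # perfectly correlated
assert abs(pearson_r([1, 2, 3], [6, 4, 2]) + 1.0) < 1e-9  # perfectly anti-correlated
```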
End of explanation
"""
|
bloomberg/bqplot | examples/Tutorials/Brush Interval Selector.ipynb | apache-2.0 | import numpy as np
from ipywidgets import Layout, HTML, VBox
import bqplot.pyplot as plt
"""
Explanation: Linking Plots Using Brush Interval Selector
Details on how to use the brush interval selector can be found in this notebook.
Brush interval selectors can be used where continuous updates are not desirable (for example, in callbacks performing slower computations).
The boolean trait brushing can be used to control continuous updates in the BrushSelector. brushing will be set to False when the interval selector is not brushing. We can register callbacks by listening to the brushing trait of the brush selector, check the value of the brushing trait in the callback, and perform updates only at the end of brushing.
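The pattern (fire callbacks on every change of the flag, but only do the expensive work once the flag flips back to False) can be sketched without bqplot at all; the class below is a hypothetical stand-in for illustration, not bqplot's actual implementation:

```python
class BrushLike:
    """Hypothetical stand-in for a selector with a boolean `brushing` trait;
    it simply notifies observers whenever the trait is assigned."""
    def __init__(self):
        self._brushing = False
        self._observers = []

    def observe(self, callback):
        self._observers.append(callback)

    @property
    def brushing(self):
        return self._brushing

    @brushing.setter
    def brushing(self, value):
        self._brushing = value
        for callback in self._observers:
            callback(self)

updates = []

def on_brush_change(selector):
    # perform the (potentially slow) update only when brushing has ended
    if not selector.brushing:
        updates.append("update")

sel = BrushLike()
sel.observe(on_brush_change)
sel.brushing = True    # drag starts: no expensive update yet
sel.brushing = False   # drag ends: exactly one update fires
# updates == ["update"]
```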
Let's now look at an example of linking a time series plot to a scatter plot using a BrushIntervalSelector
End of explanation
"""
from bqplot.interacts import BrushIntervalSelector
y1, y2 = np.random.randn(2, 200).cumsum(axis=1) # two simple random walks
fig_layout = Layout(width="900px", height="500px")
time_series_fig = plt.figure(layout=fig_layout)
line = plt.plot([y1, y2])
# create a brush interval selector by passing in the X scale and the line mark on which the selector operates
intsel = BrushIntervalSelector(marks=[line], scale=line.scales["x"])
time_series_fig.interaction = intsel # set the interval selector on the figure
"""
Explanation: Let's set up an interval selector on a figure containing two time series plots. The interval selector can be activated by clicking on the figure
End of explanation
"""
scat_fig = plt.figure(
layout=fig_layout,
animation_duration=750,
title="Scatter of time series slice selected by the interval selector",
)
# set the x and y attributes to the y values of line.y
scat = plt.scatter(*line.y, colors=["red"], stroke="black")
# define a callback for the interval selector
def update_scatter(*args):
brushing = intsel.brushing
# update scatter *only* when the interval selector
# is not brushing to prevent continuous updates
if not brushing:
# interval selector is active
if line.selected is not None:
# get the start and end indices of the interval
start_ix, end_ix = line.selected[0], line.selected[-1]
else: # interval selector is *not* active
start_ix, end_ix = 0, -1
# update the x and y attributes of the scatter by slicing line.y
with scat.hold_sync():
scat.x, scat.y = line.y[:, start_ix:end_ix]
# register the callback with brushing trait of interval selector
intsel.observe(update_scatter, "brushing")
help_label = HTML(
'<div style="color: blue; font-size: 16px; margin:20px 0px 0px 50px">\
Brush on the time series plot to activate the interval selector</div>'
)
VBox([help_label, time_series_fig, scat_fig])
"""
Explanation: Let's now create a scatter plot of the two time series and stack it below the time series plot using a VBox
End of explanation
"""
|
DawesLab/LabNotebooks | Timescales in QuTiP.ipynb | mit | from qutip import *
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Timescales in QuTiP
Andrew M.C. Dawes — 2016
An overview of one frequently asked question about QuTiP.
Introduction
QuTiP is a python package, if you are new to QuTiP, you should first read the tutorial materials available. If you have used QuTiP but are unsure about timescale and time units, this document will help clarify these concepts.
It is important to note that QuTiP routines do not care what time units you use. There is no internal unit of time. Time is defined purely by the units of the other values in your problem, so it is up to you to be careful about units! QuTiP includes a set of solvers for several equations that are relevant to quantum mechanics. Those equations relate various quantities (such as energy and time) and the units of these quantities are constrained only by the equations (i.e. not by QuTiP itself).
$$i \hbar \frac{d}{dt}\left|\psi\right\rangle = H\left|\psi\right\rangle$$
python imports
We'll use qutip, numpy, matplotlib according to the following import scheme:
End of explanation
"""
Sx = 0.5 * sigmax()
Sy = 0.5 * sigmay()
Sz = 0.5 * sigmaz()
"""
Explanation: It will be useful to define the three components of a spin-1/2 in terms of the Pauli matrices:
End of explanation
"""
H = 2 * np.pi * 3 * Sz
psi0 = 1/np.sqrt(2)*Qobj([[1],[1]])
times = np.linspace(0,1,50)
result = mesolve(H, psi0, times, [], [Sx, Sy, Sz])
x = result.expect[0]
y = result.expect[1]
z = result.expect[2]
plt.plot(times,x)
plt.plot(times,y)
plt.plot(times,z)
plt.ylim(-1.2,1.2)
"""
Explanation: Now the Hamiltonian of a spin-1/2 system in an external magnetic field is $H = \boldsymbol{\omega}_L \cdot \boldsymbol{S}$ where $\boldsymbol{\omega}_L = -\gamma B$ is the Larmor frequency of precession. Here we see our first introduction to units in the system. Either by assumption (following convention) or by derivation, we see that the units of the Hamiltonian are angular frequency. In particular, the equation solved by QuTiP is:
$$i \frac{d}{dt}|\psi\rangle = \omega_L S_z |\psi\rangle$$
If we make this angular frequency explicit by entering $f = 3$:
$$ H = 2 \pi \cdot 3 \cdot S_z$$
Hypothesis
We would expect this system to undergo three complete oscillations in one unit of time.
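For this initial state, the expectation value has the closed form $\langle S_x(t)\rangle = \tfrac{1}{2}\cos(\omega_L t)$, so the oscillation count can be sanity-checked by hand without the solver (a side calculation added here for illustration):

```python
import math

f = 3.0                               # driving frequency in cycles per unit time
n = 1999
ts = [i / (n - 1) for i in range(n)]  # t from 0 to 1
sx = [0.5 * math.cos(2 * math.pi * f * t) for t in ts]

# each complete oscillation contributes two zero crossings
crossings = sum(1 for a, b in zip(sx, sx[1:]) if a * b < 0)
assert crossings == 2 * int(f)        # 6 crossings -> 3 complete oscillations
```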
End of explanation
"""
|
denglert/manuals | python/modules/matplotlib/legend/notebooks/legend_outside_figure.ipynb | mit | import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import matplotlib.lines as mlines
import numpy as np
%matplotlib inline
x = np.linspace(0.0, 2.0*np.pi, 100)
y = np.sin(x)
"""
Explanation: Legend outside the axis
References:
- https://matplotlib.org/examples/pylab_examples/figlegend_demo.html
End of explanation
"""
f = plt.figure()
ax_sidebox = f.add_axes([0.0, 0.05, 0.22, 0.9])
ax = f.add_axes([0.32, 0.05, 0.65, 0.9])
ax_sidebox.grid(False)
ax_sidebox.set_xticks(())
ax_sidebox.set_yticks(())
sidebox_text = ['text1', 'text2', 'text3']
text_xpos = 0.03
text_ypos_start = 0.85
text_vspace = 0.1
for i,text in enumerate(sidebox_text):
f.text(text_xpos, text_ypos_start-i*text_vspace, text, transform=ax.transAxes, fontsize=18)
line1 = ax.plot(x,y)
line2 = ax.plot(x,0.5*y)
ax_sidebox.legend(line1, (r'$y_{1}$',), loc=(0.1, 0.5), fontsize=15)
f.legend(line2, (r'$y_{2}$',), loc=(0.045, 0.4), fontsize=15)
"""
Explanation: Sidebox with two axes
End of explanation
"""
f,a = plt.subplots(figsize=(9,6))
a.plot(x,y)
# - Top caption box
f.subplots_adjust(top=0.8)
caption_box = mpatches.Rectangle((0.0, 1.0), 1.0, 0.2, clip_on=False, transform=a.transAxes, edgecolor='k', facecolor='white', linewidth=1.0)
a.text(0.4, 1.09, 'This text is inside the top figure caption box', horizontalalignment='center', transform=a.transAxes)
a.add_artist(caption_box)
# - Custom legend
curve = mlines.Line2D([], [], linestyle="-", color='C0', label=r'$\sin\alpha$')
legend_handles = [curve]
a.legend(handles=legend_handles, loc=(0.8, 1.07));
"""
Explanation: Top caption box with a single axis
End of explanation
"""
|
heatseeknyc/data-science | src/bryan analyses/Hack for Heat #6.ipynb | mit | #Like before, we're going to select the relevant columns from the database:
connection = psycopg2.connect('dbname= threeoneone user=threeoneoneadmin password=threeoneoneadmin')
cursor = connection.cursor()
cursor.execute('''SELECT createddate, closeddate, borough FROM service;''')
data = cursor.fetchall()
data = pd.DataFrame(data)
data.columns = ['createddate','closeddate','borough']
"""
Explanation: Hack for Heat #6: Complaint resolution time, over time and censoring
As a follow-up to the last post, we're going to try to find out whether the average resolution time has changed over time. This might be tricky, as we may run into the issue of censoring. Censoring is a problem in time-series analyses that occurs when data are missing because they lie outside the range of the measure.
In this case, we might expect to find that complaint resolution times appear shorter as we get closer to the present day. That may be true, but it may also be an artifact: cases whose problems have yet to be resolved show up as missing data. In other words, for a case that was opened in 2010, the longest possible resolution time is 5 years, whereas for a case opened yesterday, the longest possible resolution time is 1 day.
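To see the bias concretely, here is a small hand-built illustration (the numbers are invented for demonstration): cases whose resolution would finish after the observation window are never seen as closed, which drags down the observed average.

```python
# (open_day, true_resolution_days) for five invented cases; we observe a
# closed date only if the case resolves inside a 100-day window.
window_end = 100
cases = [(0, 5), (10, 120), (40, 20), (60, 90), (95, 3)]

observed = [dur for day, dur in cases if day + dur <= window_end]
true_mean = sum(dur for _, dur in cases) / len(cases)
observed_mean = sum(observed) / len(observed)

# long-running cases drop out, so the observed average is biased downward
assert observed_mean < true_mean
```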
So, as a first step, let's look at the proportion of unseen cases over time:
End of explanation
"""
data['cryear'] = [x.year for x in data['createddate']]
data['crmonth'] = [x.month for x in data['createddate']]
"""
Explanation: Let's extract years and months again:
End of explanation
"""
# filter out bad cases
import datetime
today = datetime.date(2016,5,29)
janone = datetime.date(2010,1,1)
data = data.loc[(data['closeddate'] > data['createddate']) | (data['closeddate'].isnull() == True)]
databyyear = data.groupby(by='cryear').count()
databyyear
data['timedelta'] = data['closeddate'] - data['createddate']
data['timedeltaint'] = [x.days if pd.isnull(x) == False else None for x in data['timedelta']]
data.groupby(by='cryear').mean()
"""
Explanation: And now, we're going to filter out bad cases again. However, this time, we're going to be a bit more selective. We're going to include cases where the closed date is missing.
End of explanation
"""
databyyear['propclosed'] = databyyear['closeddate']/databyyear['createddate']
databyyear
"""
Explanation: This table shows exactly what I'm talking about: as we get closer to the current day, the average resolution time falls more and more. If censoring is occurring, we would expect the proportion of cases closed to also decrease over time. This is generally the case:
End of explanation
"""
|
seg/2016-ml-contest | MandMs/03_Facies_classification-MandMs_RandomForest_EngineeredFeatures_SFSelection_ValidationCurves.ipynb | apache-2.0 | %matplotlib inline
import numpy as np
import scipy as sp
from scipy.stats import randint as sp_randint
from scipy.signal import argrelextrema
import matplotlib as mpl
import matplotlib.pyplot as plt
import pandas as pd
from sklearn import preprocessing
from sklearn.metrics import f1_score, make_scorer
from sklearn.model_selection import LeaveOneGroupOut, validation_curve
filename = 'SFS_top70_selected_engineered_features.csv'
training_data = pd.read_csv(filename)
training_data.describe()
training_data['Well Name'] = training_data['Well Name'].astype('category')
training_data['Formation'] = training_data['Formation'].astype('category')
training_data['Well Name'].unique()
"""
Explanation: Facies classification using Random forest and engineered features
Contest entry by: <a href="https://github.com/mycarta">Matteo Niccoli</a>, <a href="https://github.com/dahlmb">Mark Dahl</a>, with a contribution by Daniel Kittridge.
Original contest notebook by Brendon Hall, Enthought
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title">The code and ideas in this notebook,</span> by <span xmlns:cc="http://creativecommons.org/ns#" property="cc:attributionName">Matteo Niccoli and Mark Dahl, </span> are licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
Loading the dataset with selected set of top 70 engineered features.
We first created a large set of moments and GLCM features. The workflow is described in the 03_Facies_classification_MandMs_feature_engineering_commented.ipynb notebook (huge thanks go to Daniel Kittridge for his critically needed Pandas magic and useful suggestions).
We then selected 70 using a Sequential (Forward) Feature Selector from Sebastian Raschka's mlxtend library. Details in the 03_Facies_classification-MandMs_SFS_feature_selection.ipynb notebook.
End of explanation
"""
y = training_data['Facies'].values
print y[25:40]
print np.shape(y)
X = training_data.drop(['Formation', 'Well Name','Facies'], axis=1)
print np.shape(X)
X.describe(percentiles=[.05, .25, .50, .75, .95])
"""
Explanation: Now we extract just the feature variables we need to perform the classification. The predictor variables are the five log values and two geologic constraining variables, and we are also using depth. We also get a vector of the facies labels that correspond to each feature vector.
End of explanation
"""
scaler = preprocessing.StandardScaler().fit(X)
X = scaler.transform(X)
"""
Explanation: Preprocessing data with standard scaler
End of explanation
"""
Fscorer = make_scorer(f1_score, average = 'micro')
"""
Explanation: Make F1 performance scorers
End of explanation
"""
wells = training_data["Well Name"].values
logo = LeaveOneGroupOut()
"""
Explanation: Parameter tuning (maximum number of features and number of estimators): validation curves combined with leave-one-well-out cross validation
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
RF_clf100 = RandomForestClassifier (n_estimators=100, n_jobs=-1, random_state = 49)
RF_clf200 = RandomForestClassifier (n_estimators=200, n_jobs=-1, random_state = 49)
RF_clf300 = RandomForestClassifier (n_estimators=300, n_jobs=-1, random_state = 49)
RF_clf400 = RandomForestClassifier (n_estimators=400, n_jobs=-1, random_state = 49)
RF_clf500 = RandomForestClassifier (n_estimators=500, n_jobs=-1, random_state = 49)
RF_clf600 = RandomForestClassifier (n_estimators=600, n_jobs=-1, random_state = 49)
param_name = "max_features"
#param_range = [5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60]
param_range = [9, 12, 15, 18, 21, 24, 27, 30, 33, 36, 39, 42, 45, 48, 51]
plt.figure()
plt.suptitle('n_estimators = 100', fontsize=14, fontweight='bold')
_, test_scores = validation_curve(RF_clf100, X, y, cv=logo.split(X, y, groups=wells),
param_name=param_name, param_range=param_range,
scoring=Fscorer, n_jobs=-1)
test_scores_mean = np.mean(test_scores, axis=1)
plt.plot(param_range, test_scores_mean)
plt.xlabel(param_name)
plt.xlim(min(param_range), max(param_range))
plt.ylabel("F1")
plt.ylim(0.47, 0.57)
plt.show()
#print max(test_scores_mean[argrelextrema(test_scores_mean, np.greater)])
print np.amax(test_scores_mean)
print np.array(param_range)[test_scores_mean.argmax(axis=0)]
plt.figure()
plt.suptitle('n_estimators = 200', fontsize=14, fontweight='bold')
_, test_scores = validation_curve(RF_clf200, X, y, cv=logo.split(X, y, groups=wells),
param_name=param_name, param_range=param_range,
scoring=Fscorer, n_jobs=-1)
test_scores_mean = np.mean(test_scores, axis=1)
plt.plot(param_range, test_scores_mean)
plt.xlabel(param_name)
plt.xlim(min(param_range), max(param_range))
plt.ylabel("F1")
plt.ylim(0.47, 0.57)
plt.show()
#print max(test_scores_mean[argrelextrema(test_scores_mean, np.greater)])
print np.amax(test_scores_mean)
print np.array(param_range)[test_scores_mean.argmax(axis=0)]
plt.figure()
plt.suptitle('n_estimators = 300', fontsize=14, fontweight='bold')
_, test_scores = validation_curve(RF_clf300, X, y, cv=logo.split(X, y, groups=wells),
param_name=param_name, param_range=param_range,
scoring=Fscorer, n_jobs=-1)
test_scores_mean = np.mean(test_scores, axis=1)
plt.plot(param_range, test_scores_mean)
plt.xlabel(param_name)
plt.xlim(min(param_range), max(param_range))
plt.ylabel("F1")
plt.ylim(0.47, 0.57)
plt.show()
#print max(test_scores_mean[argrelextrema(test_scores_mean, np.greater)])
print np.amax(test_scores_mean)
print np.array(param_range)[test_scores_mean.argmax(axis=0)]
plt.figure()
plt.suptitle('n_estimators = 400', fontsize=14, fontweight='bold')
_, test_scores = validation_curve(RF_clf400, X, y, cv=logo.split(X, y, groups=wells),
param_name=param_name, param_range=param_range,
scoring=Fscorer, n_jobs=-1)
test_scores_mean = np.mean(test_scores, axis=1)
plt.plot(param_range, test_scores_mean)
plt.xlabel(param_name)
plt.xlim(min(param_range), max(param_range))
plt.ylabel("F1")
plt.ylim(0.47, 0.57)
plt.show()
#print max(test_scores_mean[argrelextrema(test_scores_mean, np.greater)])
print np.amax(test_scores_mean)
print np.array(param_range)[test_scores_mean.argmax(axis=0)]
plt.figure()
plt.suptitle('n_estimators = 500', fontsize=14, fontweight='bold')
_, test_scores = validation_curve(RF_clf500, X, y, cv=logo.split(X, y, groups=wells),
param_name=param_name, param_range=param_range,
scoring=Fscorer, n_jobs=-1)
test_scores_mean = np.mean(test_scores, axis=1)
plt.plot(param_range, test_scores_mean)
plt.xlabel(param_name)
plt.xlim(min(param_range), max(param_range))
plt.ylabel("F1")
plt.ylim(0.47, 0.57)
plt.show()
#print max(test_scores_mean[argrelextrema(test_scores_mean, np.greater)])
print np.amax(test_scores_mean)
print np.array(param_range)[test_scores_mean.argmax(axis=0)]
plt.figure()
plt.suptitle('n_estimators = 600', fontsize=14, fontweight='bold')
_, test_scores = validation_curve(RF_clf600, X, y, cv=logo.split(X, y, groups=wells),
param_name=param_name, param_range=param_range,
scoring=Fscorer, n_jobs=-1)
test_scores_mean = np.mean(test_scores, axis=1)
plt.plot(param_range, test_scores_mean)
plt.xlabel(param_name)
plt.xlim(min(param_range), max(param_range))
plt.ylabel("F1")
plt.ylim(0.47, 0.57)
plt.show()
#print max(test_scores_mean[argrelextrema(test_scores_mean, np.greater)])
print np.amax(test_scores_mean)
print np.array(param_range)[test_scores_mean.argmax(axis=0)]
"""
Explanation: Random forest classifier
In Random Forest classifiers, several decision trees (often hundreds - a forest of trees) are created and trained on random subsets of samples (drawn with replacement) and features (drawn without replacement); the decision trees work together to make a more accurate classification (description from Randal Olson's <a href="http://nbviewer.jupyter.org/github/rhiever/Data-Analysis-and-Machine-Learning-Projects/blob/master/example-data-science-notebook/Example%20Machine%20Learning%20Notebook.ipynb"> excellent notebook</a>).
End of explanation
"""
RF_clf_f1 = RandomForestClassifier (n_estimators=600, max_features = 21,
n_jobs=-1, random_state = 49)
f1_RF = []
for train, test in logo.split(X, y, groups=wells):
well_name = wells[test[0]]
RF_clf_f1.fit(X[train], y[train])
pred = RF_clf_f1.predict(X[test])
sc = f1_score(y[test], pred, labels = np.arange(10), average = 'micro')
print("{:>20s} {:.3f}".format(well_name, sc))
f1_RF.append(sc)
print "-Average leave-one-well-out F1 Score: %6f" % (sum(f1_RF)/(1.0*(len(f1_RF))))
"""
Explanation: Average test F1 score with leave one well out
End of explanation
"""
RF_clf_b = RandomForestClassifier (n_estimators=600, max_features = 21,
n_jobs=-1, random_state = 49)
blind = pd.read_csv('engineered_features_validation_set_top70.csv')
X_blind = np.array(blind.drop(['Formation', 'Well Name'], axis=1))
scaler1 = preprocessing.StandardScaler().fit(X_blind)
X_blind = scaler1.transform(X_blind)
y_pred = RF_clf_b.fit(X, y).predict(X_blind)
#blind['Facies'] = y_pred
np.save('ypred_RF_SFS_VC.npy', y_pred)
"""
Explanation: Predicting and saving facies for blind wells
End of explanation
"""
|
scheib/chromium | third_party/tensorflow-text/src/docs/tutorials/uncertainty_quantification_with_sngp_bert.ipynb | bsd-3-clause | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
"""
!pip uninstall -y tensorflow tf-text
!pip install -U tensorflow-text-nightly
!pip install -U tf-nightly
!pip install -U tf-models-nightly
import matplotlib.pyplot as plt
import sklearn.metrics
import sklearn.calibration
import tensorflow_hub as hub
import tensorflow_datasets as tfds
import numpy as np
import tensorflow as tf
import official.nlp.modeling.layers as layers
import official.nlp.optimization as optimization
"""
Explanation: Uncertainty-aware Deep Language Learning with BERT-SNGP
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/text/tutorials/uncertainty_quantification_with_sngp_bert"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/uncertainty_quantification_with_sngp_bert.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/text/blob/master/docs/tutorials/uncertainty_quantification_with_sngp_bert.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/text/docs/tutorials/uncertainty_quantification_with_sngp_bert.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
In the SNGP tutorial, you learned how to build SNGP model on top of a deep residual network to improve its ability to quantify its uncertainty. In this tutorial, you will apply SNGP to a natural language understanding (NLU) task by building it on top of a deep BERT encoder to improve the deep NLU model's ability to detect out-of-scope queries.
Specifically, you will:
* Build BERT-SNGP, a SNGP-augmented BERT model.
* Load the CLINC Out-of-scope (OOS) intent detection dataset.
* Train the BERT-SNGP model.
* Evaluate the BERT-SNGP model's performance in uncertainty calibration and out-of-domain detection.
Beyond CLINC OOS, the SNGP model has been applied to large-scale datasets such as Jigsaw toxicity detection, and to the image datasets such as CIFAR-100 and ImageNet.
For benchmark results of SNGP and other uncertainty methods, as well as high-quality implementation with end-to-end training / evaluation scripts, you can check out the Uncertainty Baselines benchmark.
Setup
End of explanation
"""
tf.__version__
gpus = tf.config.list_physical_devices('GPU')
gpus
assert gpus, """
No GPU(s) found! This tutorial will take many hours to run without a GPU.
You may hit this error if the installed tensorflow package is not
compatible with the CUDA and CUDNN versions."""
"""
Explanation: This tutorial needs the GPU to run efficiently. Check if the GPU is available.
End of explanation
"""
#@title Standard BERT model
PREPROCESS_HANDLE = 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3'
MODEL_HANDLE = 'https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3'
class BertClassifier(tf.keras.Model):
def __init__(self,
num_classes=150, inner_dim=768, dropout_rate=0.1,
**classifier_kwargs):
super().__init__()
self.classifier_kwargs = classifier_kwargs
# Initiate the BERT encoder components.
self.bert_preprocessor = hub.KerasLayer(PREPROCESS_HANDLE, name='preprocessing')
self.bert_hidden_layer = hub.KerasLayer(MODEL_HANDLE, trainable=True, name='bert_encoder')
# Defines the encoder and classification layers.
self.bert_encoder = self.make_bert_encoder()
self.classifier = self.make_classification_head(num_classes, inner_dim, dropout_rate)
def make_bert_encoder(self):
text_inputs = tf.keras.layers.Input(shape=(), dtype=tf.string, name='text')
encoder_inputs = self.bert_preprocessor(text_inputs)
encoder_outputs = self.bert_hidden_layer(encoder_inputs)
return tf.keras.Model(text_inputs, encoder_outputs)
def make_classification_head(self, num_classes, inner_dim, dropout_rate):
return layers.ClassificationHead(
num_classes=num_classes,
inner_dim=inner_dim,
dropout_rate=dropout_rate,
**self.classifier_kwargs)
def call(self, inputs, **kwargs):
encoder_outputs = self.bert_encoder(inputs)
classifier_inputs = encoder_outputs['sequence_output']
return self.classifier(classifier_inputs, **kwargs)
"""
Explanation: First implement a standard BERT classifier following the classify text with BERT tutorial. We will use the BERT-base encoder, and the built-in ClassificationHead as the classifier.
End of explanation
"""
class ResetCovarianceCallback(tf.keras.callbacks.Callback):
def on_epoch_begin(self, epoch, logs=None):
"""Resets covariance matrix at the begining of the epoch."""
if epoch > 0:
self.model.classifier.reset_covariance_matrix()
class SNGPBertClassifier(BertClassifier):
def make_classification_head(self, num_classes, inner_dim, dropout_rate):
return layers.GaussianProcessClassificationHead(
num_classes=num_classes,
inner_dim=inner_dim,
dropout_rate=dropout_rate,
gp_cov_momentum=-1,
temperature=30.,
**self.classifier_kwargs)
def fit(self, *args, **kwargs):
"""Adds ResetCovarianceCallback to model callbacks."""
kwargs['callbacks'] = list(kwargs.get('callbacks', []))
kwargs['callbacks'].append(ResetCovarianceCallback())
return super().fit(*args, **kwargs)
"""
Explanation: Build SNGP model
To implement a BERT-SNGP model, you only need to replace the ClassificationHead with the built-in GaussianProcessClassificationHead. Spectral normalization is already pre-packaged into this classification head. Like in the SNGP tutorial, add a covariance reset callback to the model, so the model automatically resets the covariance estimator at the beginning of a new epoch to avoid counting the same data twice.
End of explanation
"""
(clinc_train, clinc_test, clinc_test_oos), ds_info = tfds.load(
'clinc_oos', split=['train', 'test', 'test_oos'], with_info=True, batch_size=-1)
"""
Explanation: Note: The GaussianProcessClassificationHead takes a new argument temperature. It corresponds to the $\lambda$ parameter in the mean-field approximation introduced in the SNGP tutorial. In practice, this value is usually treated as a hyperparameter, and is fine-tuned to optimize the model's calibration performance.
Load CLINC OOS dataset
Now load the CLINC OOS intent detection dataset. This dataset contains 15000 users' spoken queries covering 150 intent classes; it also contains 1000 out-of-domain (OOD) sentences that are not covered by any of the known classes.
End of explanation
"""
train_examples = clinc_train['text']
train_labels = clinc_train['intent']
# Makes the in-domain (IND) evaluation data.
ind_eval_data = (clinc_test['text'], clinc_test['intent'])
"""
Explanation: Make the train and test data.
End of explanation
"""
test_data_size = ds_info.splits['test'].num_examples
oos_data_size = ds_info.splits['test_oos'].num_examples
# Combines the in-domain and out-of-domain test examples.
oos_texts = tf.concat([clinc_test['text'], clinc_test_oos['text']], axis=0)
oos_labels = tf.constant([0] * test_data_size + [1] * oos_data_size)
# Converts into a TF dataset.
ood_eval_dataset = tf.data.Dataset.from_tensor_slices(
{"text": oos_texts, "label": oos_labels})
"""
Explanation: Create a OOD evaluation dataset. For this, combine the in-domain test data clinc_test and the out-of-domain data clinc_test_oos. We will also assign label 0 to the in-domain examples, and label 1 to the out-of-domain examples.
End of explanation
"""
TRAIN_EPOCHS = 3
TRAIN_BATCH_SIZE = 32
EVAL_BATCH_SIZE = 256
#@title
def bert_optimizer(learning_rate,
batch_size=TRAIN_BATCH_SIZE, epochs=TRAIN_EPOCHS,
warmup_rate=0.1):
"""Creates an AdamWeightDecay optimizer with learning rate schedule."""
train_data_size = ds_info.splits['train'].num_examples
steps_per_epoch = int(train_data_size / batch_size)
num_train_steps = steps_per_epoch * epochs
num_warmup_steps = int(warmup_rate * num_train_steps)
# Creates learning schedule.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
initial_learning_rate=learning_rate,
decay_steps=num_train_steps,
end_learning_rate=0.0)
return optimization.AdamWeightDecay(
learning_rate=lr_schedule,
weight_decay_rate=0.01,
epsilon=1e-6,
exclude_from_weight_decay=['LayerNorm', 'layer_norm', 'bias'])
optimizer = bert_optimizer(learning_rate=1e-4)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metrics = tf.metrics.SparseCategoricalAccuracy()
fit_configs = dict(batch_size=TRAIN_BATCH_SIZE,
epochs=TRAIN_EPOCHS,
validation_batch_size=EVAL_BATCH_SIZE,
validation_data=ind_eval_data)
sngp_model = SNGPBertClassifier()
sngp_model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
sngp_model.fit(train_examples, train_labels, **fit_configs)
"""
Explanation: Train and evaluate
First set up the basic training configurations.
End of explanation
"""
#@title
def oos_predict(model, ood_eval_dataset, **model_kwargs):
oos_labels = []
oos_probs = []
ood_eval_dataset = ood_eval_dataset.batch(EVAL_BATCH_SIZE)
for oos_batch in ood_eval_dataset:
oos_text_batch = oos_batch["text"]
oos_label_batch = oos_batch["label"]
pred_logits = model(oos_text_batch, **model_kwargs)
pred_probs_all = tf.nn.softmax(pred_logits, axis=-1)
pred_probs = tf.reduce_max(pred_probs_all, axis=-1)
oos_labels.append(oos_label_batch)
oos_probs.append(pred_probs)
oos_probs = tf.concat(oos_probs, axis=0)
oos_labels = tf.concat(oos_labels, axis=0)
return oos_probs, oos_labels
"""
Explanation: Evaluate OOD performance
Evaluate how well the model can detect the unfamiliar out-of-domain queries. For rigorous evaluation, use the OOD evaluation dataset ood_eval_dataset built earlier.
End of explanation
"""
sngp_probs, ood_labels = oos_predict(sngp_model, ood_eval_dataset)
ood_probs = 1 - sngp_probs
"""
Explanation: Computes the OOD probabilities as $1 - p(x)$, where $p(x)=softmax(logit(x))$ is the predictive probability.
End of explanation
"""
precision, recall, _ = sklearn.metrics.precision_recall_curve(ood_labels, ood_probs)
auprc = sklearn.metrics.auc(recall, precision)
print(f'SNGP AUPRC: {auprc:.4f}')
"""
Explanation: Now evaluate how well the model's uncertainty score ood_probs predicts the out-of-domain label. First compute the Area under precision-recall curve (AUPRC) for OOD probability v.s. OOD detection accuracy.
End of explanation
"""
prob_true, prob_pred = sklearn.calibration.calibration_curve(
ood_labels, ood_probs, n_bins=10, strategy='quantile')
plt.plot(prob_pred, prob_true)
plt.plot([0., 1.], [0., 1.], c='k', linestyle="--")
plt.xlabel('Predictive Probability')
plt.ylabel('Predictive Accuracy')
plt.title('Calibration Plots, SNGP')
plt.show()
"""
Explanation: This matches the SNGP performance reported at the CLINC OOS benchmark under the Uncertainty Baselines.
Next, examine the model's quality in uncertainty calibration, i.e., whether the model's predictive probability corresponds to its predictive accuracy. A well-calibrated model is considered trustworthy since, for example, its predictive probability $p(x)=0.8$ means that the model is correct 80% of the time.
End of explanation
"""
|
griffinfoster/fundamentals_of_interferometry | 3_Positional_Astronomy/3_1_equatorial_coordinates.ipynb | gpl-2.0 | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
from IPython.display import HTML
HTML('../style/code_toggle.html')
import healpy as hp
%pylab inline
pylab.rcParams['figure.figsize'] = (15, 10)
import matplotlib
import ephem
"""
Explanation: Outline
Glossary
Positional Astronomy
Previous: Introduction
Next: Hour Angle and Local Sidereal Time
Import standard modules:
End of explanation
"""
arcturus = ephem.star('Arcturus')
arcturus.compute('2016/2/8',epoch=ephem.J2000)
print('J2000: RA:%s DEC:%s' % (arcturus.ra, arcturus.dec))
arcturus.compute('2016/2/8', epoch=ephem.B1950)
print('B1950: RA:%s DEC:%s' % (arcturus.a_ra, arcturus.a_dec))
"""
Explanation: Equatorial Coordinates (RA/DEC)
The Celestial Sphere
We can use a geographical coordinate system to uniquely identify a position on earth. We normally use the coordinates latitude $L_a$ (to measure north and south) and longitude $L_o$ (to measure east and west) to accomplish this. The geographical coordinate system is depicted in Figure 3.1.1.
</a> <img src='figures/geo.svg' width=60%>
Figure 3.1.1: The geographical coordinates latitude $L_a$ and longitude $L_o$.
We also require a coordinate system to map the celestial objects. For all intents and purposes we may think of our universe as being projected onto a sphere of arbitrary radius. This sphere surrounds the Earth and is known as the celestial sphere. This is not a true representation of our universe, but it is a very useful approximate astronomical construct. The celestial equator is obtained by projecting the equator of the earth onto the celestial sphere. The stars themselves do not move on the celestial sphere and therefore have a unique location on it. The Sun is an exception, it changes position in a periodic fashion during the year (as the Earth orbits around the Sun). The path it traverses on the celestial sphere is known as the ecliptic.
The NCP and SCP
The north celestial pole (NCP) is an important location on the celestial sphere and is obtained by projecting the north pole of the earth onto the celestial sphere. The star Polaris is very close to the NCP and serves as a reference when positioning a telescope.
The south celestial pole (SCP) is obtained in a similar way. The imaginary circle known as the celestial equator is in the same plane as the equator of the earth and is obtained by projecting the equator of the earth onto the celestial sphere. The southern hemisphere counterpart of Polaris is Sigma Octanis.
We use a specific point on the celestial equator from which we measure the location of all other celestial objects. This point is known as the first point of Aries ($\gamma$) <!--\vernal--> or the vernal equinox. The vernal equinox is the point where
the ecliptic intersects the celestial equator (south to north). We discuss the vernal equinox in more detail in <a class='pos_sec_lst_eq'></a> <!--\ref{pos:sec:lst}-->.
Coordinate Definitions
We use the equatorial coordinates to uniquely identify the location of celestial objects rotating with the celestial sphere around the SCP/NCP axis.
The Right Ascension $\alpha$ - We define the hour circle of an object as the circle on the celestial sphere that crosses the NCP and the object itself, while also perpendicularly intersecting with the celestial equator. The right ascension of an object is the angular distance between the vernal equinox and the hour circle of a celestial object measured along the celestial equator and is measured eastward. It is measured in Hours Minutes Seconds (e.g. $\alpha = 03^\text{h}13^\text{m}32.5^\text{s}$) and spans $360^\circ$ on the celestial sphere from $\alpha = 00^\text{h}00^\text{m}00^\text{s}$ (the coordinates of $\gamma$) to $\alpha = 23^\text{h}59^\text{m}59^\text{s}$.
The Declination $\delta$ - the declination of an object is the angular distance from the celestial equator measured along its hour circle (it is positive in the northern celestial hemisphere and negative in the southern celestial hemisphere). It is measured in Degrees Arcmin Arcsec (e.g. $\delta = -15^\circ23'44''$), which spans from $\delta = -90^\circ00'00''$ (SCP) to $\delta = +90^\circ00'00''$ (NCP).
The equatorial coordinates are presented graphically in Figure 3.1.2.
<div class=warn>
<b>Warning:</b> As for any spherical system, the Right Ascensions of the NCP ($\delta=+90^\circ$) and the SCP ($\delta=-90^\circ$) are ill-defined. And a source close to either celestial pole can have an unintuitive Right Ascension.
</div>
<a id='pos:fig:equatorial_coordinates'></a> <!--\label{pos:fig:equatorial_coordinates}--> <img src='figures/equatorial.svg' width=500>
Figure 3.1.2: The equatorial coordinates $\alpha$ and $\delta$. The vernal equinox $\gamma$, the equatorial reference point is also depicted. The vernal
equinox is the point where the ecliptic (the path the sun traverses over one year) intersects the celestial equator.
<div class=warn>
<b>Warning:</b> One arcminute of the declination axis (e.g. $00^\circ01'00''$) is not equal to one <em>minute</em> in right ascension axis (e.g. $00^\text{h}01^\text{m}00^\text{s}$). <br>
Indeed, in RA, the 24$^\text{h}$ circle is mapped to a 360$^\circ$ circle, meaning that 1 hour spans a section of 15$^\circ$. And as 1$^\text{h}$ is 60$^\text{m}$, 1$^\text{m}$ in RA corresponds to $1^\text{m} = \frac{1^\text{h}}{60}=\frac{15^\circ}{60}=0.25^\circ=15'$; equivalently, $1^\circ$ on the sky corresponds to 4 minutes of RA. <br>
You should be careful about this **factor of 4 between degrees and RA minutes**, and never confuse RA minutes with DEC arcminutes (i.e. $\text{RA} \; 00^\text{h}01^\text{m}00^\text{s}\neq \text{DEC} \; 00^\circ01'00''$)
</div>
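As a quick sanity check on these unit conversions, here is a small illustrative helper mapping RA time units to angular degrees (24 h of RA wraps the full 360° circle):

```python
def ra_to_degrees(h, m, s):
    """Convert Right Ascension (hours, minutes, seconds) to degrees: 24 h = 360 deg."""
    return 15.0 * (h + m / 60.0 + s / 3600.0)

print(ra_to_degrees(1, 0, 0))   # 15.0 : one RA hour is 15 degrees
print(ra_to_degrees(0, 1, 0))   # 0.25 : one RA minute is 0.25 deg (15 arcmin), not 1 arcmin
```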
B1950 and J2000
We will be making use of the <cite data-cite=''>pyephem package</cite> ⤴ in the rest of this chapter to help us clarify and better understand some theoretical concepts. The two classes we will be using are the Observer and the Body class. The Observer class acts as a proxy for an array, while the Body class represents a specific celestial object. In this section we will only make use of the Body class.
Earlier in this section I mentioned that the celestial objects do not move on the celestial sphere and therefore have fixed equatorial coordinates. This is not entirely true. Due to precession (the change in the orientation of the earth's rotational axis) the locations of the stars do in fact change minutely during the course of one generation. That is why we need to link the equatorial coordinates of a celestial object in a catalogue to a specific observational epoch (a specific instant in time). We can then easily compute the true coordinates as they would be today given the equatorial coordinates from a specific epoch as a starting point. There are two popular epochs that are often used, namely J2000 and B1950. Expressed in <cite data-cite=''>UT (Universal Time)</cite> ⤴:
* B1950 - 1949/12/31 22:09:50 UT,
* J2000 - 2000/1/1 12:00:00 UT.
The 'B' and the 'J' serve as shorthand for the Besselian year and the Julian year respectively. They indicate the length of year used when defining the exact instants in time associated with B1950 and J2000. The Besselian year is based on the concept of a <cite data-cite=''>tropical year</cite> ⤴ and is no longer used. The Julian year consists of 365.25 days. In the code snippet below we use pyephem to determine the J2000 and B1950 equatorial coordinates of Arcturus.
End of explanation
"""
haslam = hp.read_map('../data/fits/haslam/lambda_haslam408_nofilt.fits')
matplotlib.rcParams.update({'font.size': 10})
proj_map = hp.cartview(haslam,coord=['G','C'], max=2e5, xsize=2000,return_projected_map=True,title="Haslam 408 MHz with no filtering",cbar=False)
hp.graticule()
"""
Explanation: Example: The 408 MHz Haslam Map
To finish things off, let's make sure that, given the concepts we have learned in this section, we are able to interpret a radio skymap correctly. We will be plotting and inspecting the <cite data-cite=''>Haslam 408 MHz map</cite> ⤴. We load the Haslam map with read_map and view it with cartview. These two functions form part of the <cite data-cite=''>healpy package</cite> ⤴.
End of explanation
"""
fig = plt.figure()
ax = fig.add_subplot(111)
matplotlib.rcParams.update({'font.size': 22})
#replot the projected healpy map
ax.imshow(proj_map[::-1,:],vmax=2e5, extent=[12,-12,-90,90],aspect='auto')
names = np.array(["Vernal Equinox","Cassiopeia A","Sagitarius A","Cygnus A","Crab Nebula","Fornax A","Pictor A"])
ra = np.array([0,(23 + 23./60 + 24./3600)-24,(17 + 42./60 + 9./3600)-24,(19 + 59./60 + 28./3600)-24,5+34./60+32./3600,3+22./60+41.7/3600,5+19./60+49.7/3600])
dec = np.array([0,58+48./60+54./3600,-28-50./60,40+44./60+2./3600,22+52./3600,-37-12./60-30./3600,-45-46./60-44./3600])
#mark the positions of important radio sources
ax.plot(ra,dec,'ro',ms=20,mfc="None")
for k in range(len(names)):
ax.annotate(names[k], xy = (ra[k],dec[k]), xytext=(ra[k]+0.8, dec[k]+5))
#create userdefined axis labels and ticks
ax.set_xlim(12,-12)
ax.set_ylim(-90,90)
ticks = np.array([-90,-80,-70,-60,-50,-40,-30,-20,-10,0,10,20,30,40,50,60,70,80,90])
plt.yticks(ticks)
ticks = np.array([12,10,8,6,4,2,0,-2,-4,-8,-6,-10,-12])
plt.xticks(ticks)
plt.xlabel("Right Ascension [$h$]")
plt.ylabel("Declination [$^{\circ}$]")
plt.title("Haslam 408 MHz with no filtering")
#relabel the tick values
fig.canvas.draw()
labels = [item.get_text() for item in ax.get_xticklabels()]
labels = np.array(["12$^h$","10$^h$","8$^h$","6$^h$","4$^h$","2$^h$","0$^h$","22$^h$","20$^h$","18$^h$","16$^h$","14$^h$","12$^h$"])
ax.set_xticklabels(labels)
labels = [item.get_text() for item in ax.get_yticklabels()]
labels = np.array(["-90$^{\circ}$","-80$^{\circ}$","-70$^{\circ}$","-60$^{\circ}$","-50$^{\circ}$","-40$^{\circ}$","-30$^{\circ}$","-20$^{\circ}$","-10$^{\circ}$","0$^{\circ}$","10$^{\circ}$","20$^{\circ}$","30$^{\circ}$","40$^{\circ}$","50$^{\circ}$","60$^{\circ}$","70$^{\circ}$","80$^{\circ}$","90$^{\circ}$"])
ax.set_yticklabels(labels)
ax.grid('on')
"""
Explanation: The cartview function also produces a projected map as a byproduct (it takes the form of a 2D numpy array). We can now replot this projected map using matplotlib (see <a class='pos_fig_haslam_eq'></a> <!--\ref{pos:fig:haslam}-->). We do so in the code snippet that follows.
End of explanation
"""
|
piskvorky/gensim | docs/src/auto_examples/tutorials/run_annoy.ipynb | lgpl-2.1 | LOGS = False # Set to True if you want to see progress in logs.
if LOGS:
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
"""
Explanation: Fast Similarity Queries with Annoy and Word2Vec
Introduces the Annoy library for similarity queries on top of vectors learned by Word2Vec.
End of explanation
"""
import gensim.downloader as api
text8_path = api.load('text8', return_path=True)
print("Using corpus from", text8_path)
"""
Explanation: The Annoy "Approximate Nearest Neighbors Oh Yeah"
<https://github.com/spotify/annoy>_ library enables similarity queries with
a Word2Vec model. Gensim's current implementation for finding the k nearest
neighbors in a vector space uses brute force and therefore has complexity
linear in the number of indexed documents, although with extremely low
constant factors. The retrieved results are exact, which is overkill in many applications:
approximate results retrieved in sub-linear time may be enough. Annoy can
find approximate nearest neighbors much faster.
Outline
Download Text8 Corpus
Train the Word2Vec model
Construct AnnoyIndex with model & make a similarity query
Compare to the traditional indexer
Persist indices to disk
Save memory via memory-mapping indices saved to disk
Evaluate relationship of num_trees to initialization time and accuracy
Work with Google's word2vec C formats
1. Download Text8 corpus
End of explanation
"""
from gensim.models import Word2Vec, KeyedVectors
from gensim.models.word2vec import Text8Corpus
# Using params from Word2Vec_FastText_Comparison
params = {
'alpha': 0.05,
'vector_size': 100,
'window': 5,
'epochs': 5,
'min_count': 5,
'sample': 1e-4,
'sg': 1,
'hs': 0,
'negative': 5,
}
model = Word2Vec(Text8Corpus(text8_path), **params)
wv = model.wv
print("Using trained model", wv)
"""
Explanation: 2. Train the Word2Vec model
For more details, see sphx_glr_auto_examples_tutorials_run_word2vec.py.
End of explanation
"""
from gensim.similarities.annoy import AnnoyIndexer
# 100 trees are being used in this example
annoy_index = AnnoyIndexer(model, 100)
# Derive the vector for the word "science" in our model
vector = wv["science"]
# The instance of AnnoyIndexer we just created is passed
approximate_neighbors = wv.most_similar([vector], topn=11, indexer=annoy_index)
# Neatly print the approximate_neighbors and their corresponding cosine similarity values
print("Approximate Neighbors")
for neighbor in approximate_neighbors:
print(neighbor)
normal_neighbors = wv.most_similar([vector], topn=11)
print("\nExact Neighbors")
for neighbor in normal_neighbors:
print(neighbor)
"""
Explanation: 3. Construct AnnoyIndex with model & make a similarity query
An instance of AnnoyIndexer needs to be created in order to use Annoy in Gensim.
The AnnoyIndexer class is located in gensim.similarities.annoy.
AnnoyIndexer() takes two parameters:
model: A Word2Vec or Doc2Vec model.
num_trees: A positive integer. num_trees affects the build
time and the index size. A larger value will give more accurate results,
but larger indexes. More information on what trees in Annoy do can be found
here <https://github.com/spotify/annoy#how-does-it-work>__. The relationship
between num_trees\ , build time, and accuracy will be investigated later
in the tutorial.
Now that we are ready to make a query, let's find the top 5 most similar words
to "science" in the Text8 corpus. To make a similarity query we call
Word2Vec.most_similar like we would traditionally, but with an added
parameter, indexer.
Apart from Annoy, Gensim also supports the NMSLIB indexer. NMSLIB is a similar library to
Annoy – both support fast, approximate searches for similar vectors.
End of explanation
"""
# Set up the model and vector that we are using in the comparison
annoy_index = AnnoyIndexer(model, 100)
# Dry run to make sure both indexes are fully in RAM
normed_vectors = wv.get_normed_vectors()
vector = normed_vectors[0]
wv.most_similar([vector], topn=5, indexer=annoy_index)
wv.most_similar([vector], topn=5)
import time
import numpy as np
def avg_query_time(annoy_index=None, queries=1000):
"""Average query time of a most_similar method over 1000 random queries."""
total_time = 0
for _ in range(queries):
rand_vec = normed_vectors[np.random.randint(0, len(wv))]
start_time = time.process_time()
wv.most_similar([rand_vec], topn=5, indexer=annoy_index)
total_time += time.process_time() - start_time
return total_time / queries
queries = 1000
gensim_time = avg_query_time(queries=queries)
annoy_time = avg_query_time(annoy_index, queries=queries)
print("Gensim (s/query):\t{0:.5f}".format(gensim_time))
print("Annoy (s/query):\t{0:.5f}".format(annoy_time))
speed_improvement = gensim_time / annoy_time
print ("\nAnnoy is {0:.2f} times faster on average on this particular run".format(speed_improvement))
"""
Explanation: The closer the cosine similarity of a vector is to 1, the more similar that
word is to our query, which was the vector for "science". There are some
differences in the ranking of similar words and the set of words included
within the 10 most similar words.
4. Compare to the traditional indexer
End of explanation
"""
fname = '/tmp/mymodel.index'
# Persist index to disk
annoy_index.save(fname)
# Load index back
import os.path
if os.path.exists(fname):
annoy_index2 = AnnoyIndexer()
annoy_index2.load(fname)
annoy_index2.model = model
# Results should be identical to above
vector = wv["science"]
approximate_neighbors2 = wv.most_similar([vector], topn=11, indexer=annoy_index2)
for neighbor in approximate_neighbors2:
print(neighbor)
assert approximate_neighbors == approximate_neighbors2
"""
Explanation: This speedup factor is by no means constant: it varies greatly from
run to run, and it depends on this particular data set, the BLAS setup, the Annoy
parameters (as tree size increases, the speedup factor decreases), machine
specifications, and other factors.
.. Important::
Initialization time for the annoy indexer was not included in the times.
The optimal knn algorithm for you to use will depend on how many queries
you need to make and the size of the corpus. If you are making very few
similarity queries, the time taken to initialize the annoy indexer will be
longer than the time it would take the brute force method to retrieve
results. If you are making many queries however, the time it takes to
initialize the annoy indexer will be made up for by the incredibly fast
retrieval times for queries once the indexer has been initialized.
.. Important::
Gensim's 'most_similar' method uses numpy operations in the form of a
dot product, whereas Annoy's method does not. If 'numpy' on your machine is
using one of the BLAS libraries like ATLAS or LAPACK, it'll run on
multiple cores (only if your machine has multicore support). Check the SciPy
Cookbook
<http://scipy-cookbook.readthedocs.io/items/ParallelProgramming.html>_
for more details.
5. Persisting indices to disk
You can save and load your indexes from/to disk to prevent having to
construct them each time. This will create two files on disk, fname and
fname.d. Both files are needed to correctly restore all attributes. Before
loading an index, you will have to create an empty AnnoyIndexer object.
End of explanation
"""
# Remove verbosity from code below (if logging active)
if LOGS:
logging.disable(logging.CRITICAL)
from multiprocessing import Process
import os
import psutil
"""
Explanation: Be sure to load the index with the same model that was used to
create it; otherwise you will get unexpected behavior.
6. Save memory via memory-mapping indexes saved to disk
Annoy library has a useful feature that indices can be memory-mapped from
disk. It saves memory when the same index is used by several processes.
Below are two snippets of code. The first one builds a separate index for each
process. The second snippet shares the index between two processes via
memory-mapping. The second example uses less total RAM as it is shared.
End of explanation
"""
model.save('/tmp/mymodel.pkl')
def f(process_id):
print('Process Id: {}'.format(os.getpid()))
process = psutil.Process(os.getpid())
new_model = Word2Vec.load('/tmp/mymodel.pkl')
vector = new_model.wv["science"]
annoy_index = AnnoyIndexer(new_model, 100)
approximate_neighbors = new_model.wv.most_similar([vector], topn=5, indexer=annoy_index)
print('\nMemory used by process {}: {}\n---'.format(os.getpid(), process.memory_info()))
# Create and run two parallel processes; each loads the model and builds its own index.
p1 = Process(target=f, args=('1',))
p1.start()
p1.join()
p2 = Process(target=f, args=('2',))
p2.start()
p2.join()
"""
Explanation: Bad example: two processes load the Word2vec model from disk and create their
own Annoy index from that model.
End of explanation
"""
model.save('/tmp/mymodel.pkl')
def f(process_id):
print('Process Id: {}'.format(os.getpid()))
process = psutil.Process(os.getpid())
new_model = Word2Vec.load('/tmp/mymodel.pkl')
vector = new_model.wv["science"]
annoy_index = AnnoyIndexer()
annoy_index.load('/tmp/mymodel.index')
annoy_index.model = new_model
approximate_neighbors = new_model.wv.most_similar([vector], topn=5, indexer=annoy_index)
print('\nMemory used by process {}: {}\n---'.format(os.getpid(), process.memory_info()))
# Creating and running two parallel process to share the same index file.
p1 = Process(target=f, args=('1',))
p1.start()
p1.join()
p2 = Process(target=f, args=('2',))
p2.start()
p2.join()
"""
Explanation: Good example: two processes load both the Word2vec model and index from disk
and memory-map the index.
End of explanation
"""
import matplotlib.pyplot as plt
"""
Explanation: 7. Evaluate relationship of num_trees to initialization time and accuracy
End of explanation
"""
exact_results = [element[0] for element in wv.most_similar([normed_vectors[0]], topn=100)]
x_values = []
y_values_init = []
y_values_accuracy = []
for x in range(1, 300, 10):
x_values.append(x)
start_time = time.time()
annoy_index = AnnoyIndexer(model, x)
y_values_init.append(time.time() - start_time)
approximate_results = wv.most_similar([normed_vectors[0]], topn=100, indexer=annoy_index)
top_words = [result[0] for result in approximate_results]
y_values_accuracy.append(len(set(top_words).intersection(exact_results)))
"""
Explanation: Build dataset of initialization times and accuracy measures:
End of explanation
"""
plt.figure(1, figsize=(12, 6))
plt.subplot(121)
plt.plot(x_values, y_values_init)
plt.title("num_trees vs initalization time")
plt.ylabel("Initialization time (s)")
plt.xlabel("num_trees")
plt.subplot(122)
plt.plot(x_values, y_values_accuracy)
plt.title("num_trees vs accuracy")
plt.ylabel("%% accuracy")
plt.xlabel("num_trees")
plt.tight_layout()
plt.show()
"""
Explanation: Plot results:
End of explanation
"""
# To export our model as text
wv.save_word2vec_format('/tmp/vectors.txt', binary=False)
from smart_open import open
# View the first 3 lines of the exported file
# The first line has the total number of entries and the vector dimension count.
# The next lines have a key (a string) followed by its vector.
with open('/tmp/vectors.txt', encoding='utf8') as myfile:
for i in range(3):
print(myfile.readline().strip())
# To import a word2vec text model
wv = KeyedVectors.load_word2vec_format('/tmp/vectors.txt', binary=False)
# To export a model as binary
wv.save_word2vec_format('/tmp/vectors.bin', binary=True)
# To import a word2vec binary model
wv = KeyedVectors.load_word2vec_format('/tmp/vectors.bin', binary=True)
# To create and save Annoy Index from a loaded `KeyedVectors` object (with 100 trees)
annoy_index = AnnoyIndexer(wv, 100)
annoy_index.save('/tmp/mymodel.index')
# Load and test the saved word vectors and saved Annoy index
wv = KeyedVectors.load_word2vec_format('/tmp/vectors.bin', binary=True)
annoy_index = AnnoyIndexer()
annoy_index.load('/tmp/mymodel.index')
annoy_index.model = wv
vector = wv["cat"]
approximate_neighbors = wv.most_similar([vector], topn=11, indexer=annoy_index)
# Neatly print the approximate_neighbors and their corresponding cosine similarity values
print("Approximate Neighbors")
for neighbor in approximate_neighbors:
print(neighbor)
normal_neighbors = wv.most_similar([vector], topn=11)
print("\nExact Neighbors")
for neighbor in normal_neighbors:
print(neighbor)
"""
Explanation: From the above, we can see that the initialization time of the annoy indexer
increases in a linear fashion with num_trees. Initialization time will vary
from corpus to corpus; in the graph above we used the Text8 corpus.
Furthermore, in this dataset, the accuracy seems logarithmically related to
the number of trees. We see an improvement in accuracy with more trees, but
the relationship is nonlinear.
8. Work with Google's word2vec files
Our model can be exported to a word2vec C format. There is a binary and a
plain text word2vec format. Both can be read with a variety of other
software, or imported back into Gensim as a KeyedVectors object.
End of explanation
"""
|
me-surrey/dl-gym | 05_support_vector_machines.ipynb | apache-2.0 | # To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "svm"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
"""
Explanation: Chapter 5 – Support Vector Machines
This notebook contains all the sample code and solutions to the exercises in chapter 5.
Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
End of explanation
"""
from sklearn.svm import SVC
from sklearn import datasets
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = iris["target"]
setosa_or_versicolor = (y == 0) | (y == 1)
X = X[setosa_or_versicolor]
y = y[setosa_or_versicolor]
# SVM Classifier model
svm_clf = SVC(kernel="linear", C=float("inf"))
svm_clf.fit(X, y)
# Bad models
x0 = np.linspace(0, 5.5, 200)
pred_1 = 5*x0 - 20
pred_2 = x0 - 1.8
pred_3 = 0.1 * x0 + 0.5
def plot_svc_decision_boundary(svm_clf, xmin, xmax):
w = svm_clf.coef_[0]
b = svm_clf.intercept_[0]
# At the decision boundary, w0*x0 + w1*x1 + b = 0
# => x1 = -w0/w1 * x0 - b/w1
x0 = np.linspace(xmin, xmax, 200)
decision_boundary = -w[0]/w[1] * x0 - b/w[1]
margin = 1/w[1]
gutter_up = decision_boundary + margin
gutter_down = decision_boundary - margin
svs = svm_clf.support_vectors_
plt.scatter(svs[:, 0], svs[:, 1], s=180, facecolors='#FFAAAA')
plt.plot(x0, decision_boundary, "k-", linewidth=2)
plt.plot(x0, gutter_up, "k--", linewidth=2)
plt.plot(x0, gutter_down, "k--", linewidth=2)
plt.figure(figsize=(12,2.7))
plt.subplot(121)
plt.plot(x0, pred_1, "g--", linewidth=2)
plt.plot(x0, pred_2, "m-", linewidth=2)
plt.plot(x0, pred_3, "r-", linewidth=2)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs", label="Iris-Versicolor")
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo", label="Iris-Setosa")
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="upper left", fontsize=14)
plt.axis([0, 5.5, 0, 2])
plt.subplot(122)
plot_svc_decision_boundary(svm_clf, 0, 5.5)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs")
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo")
plt.xlabel("Petal length", fontsize=14)
plt.axis([0, 5.5, 0, 2])
save_fig("large_margin_classification_plot")
plt.show()
"""
Explanation: Large margin classification
The next few code cells generate the first figures in chapter 5. The first actual code sample comes after:
End of explanation
"""
Xs = np.array([[1, 50], [5, 20], [3, 80], [5, 60]]).astype(np.float64)
ys = np.array([0, 0, 1, 1])
svm_clf = SVC(kernel="linear", C=100)
svm_clf.fit(Xs, ys)
plt.figure(figsize=(12,3.2))
plt.subplot(121)
plt.plot(Xs[:, 0][ys==1], Xs[:, 1][ys==1], "bo")
plt.plot(Xs[:, 0][ys==0], Xs[:, 1][ys==0], "ms")
plot_svc_decision_boundary(svm_clf, 0, 6)
plt.xlabel("$x_0$", fontsize=20)
plt.ylabel("$x_1$ ", fontsize=20, rotation=0)
plt.title("Unscaled", fontsize=16)
plt.axis([0, 6, 0, 90])
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_scaled = scaler.fit_transform(Xs)
svm_clf.fit(X_scaled, ys)
plt.subplot(122)
plt.plot(X_scaled[:, 0][ys==1], X_scaled[:, 1][ys==1], "bo")
plt.plot(X_scaled[:, 0][ys==0], X_scaled[:, 1][ys==0], "ms")
plot_svc_decision_boundary(svm_clf, -2, 2)
plt.xlabel("$x_0$", fontsize=20)
plt.title("Scaled", fontsize=16)
plt.axis([-2, 2, -2, 2])
save_fig("sensitivity_to_feature_scales_plot")
"""
Explanation: Sensitivity to feature scales
End of explanation
"""
X_outliers = np.array([[3.4, 1.3], [3.2, 0.8]])
y_outliers = np.array([0, 0])
Xo1 = np.concatenate([X, X_outliers[:1]], axis=0)
yo1 = np.concatenate([y, y_outliers[:1]], axis=0)
Xo2 = np.concatenate([X, X_outliers[1:]], axis=0)
yo2 = np.concatenate([y, y_outliers[1:]], axis=0)
svm_clf2 = SVC(kernel="linear", C=10**9)
svm_clf2.fit(Xo2, yo2)
plt.figure(figsize=(12,2.7))
plt.subplot(121)
plt.plot(Xo1[:, 0][yo1==1], Xo1[:, 1][yo1==1], "bs")
plt.plot(Xo1[:, 0][yo1==0], Xo1[:, 1][yo1==0], "yo")
plt.text(0.3, 1.0, "Impossible!", fontsize=24, color="red")
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.annotate("Outlier",
xy=(X_outliers[0][0], X_outliers[0][1]),
xytext=(2.5, 1.7),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.1),
fontsize=16,
)
plt.axis([0, 5.5, 0, 2])
plt.subplot(122)
plt.plot(Xo2[:, 0][yo2==1], Xo2[:, 1][yo2==1], "bs")
plt.plot(Xo2[:, 0][yo2==0], Xo2[:, 1][yo2==0], "yo")
plot_svc_decision_boundary(svm_clf2, 0, 5.5)
plt.xlabel("Petal length", fontsize=14)
plt.annotate("Outlier",
xy=(X_outliers[1][0], X_outliers[1][1]),
xytext=(3.2, 0.08),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.1),
fontsize=16,
)
plt.axis([0, 5.5, 0, 2])
save_fig("sensitivity_to_outliers_plot")
plt.show()
"""
Explanation: Sensitivity to outliers
End of explanation
"""
import numpy as np
from sklearn import datasets
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = (iris["target"] == 2).astype(np.float64) # Iris-Virginica
svm_clf = Pipeline([
("scaler", StandardScaler()),
("linear_svc", LinearSVC(C=1, loss="hinge", random_state=42)),
])
svm_clf.fit(X, y)
svm_clf.predict([[5.5, 1.7]])
"""
Explanation: Large margin vs margin violations
This is the first code example in chapter 5:
End of explanation
"""
scaler = StandardScaler()
svm_clf1 = LinearSVC(C=1, loss="hinge", random_state=42)
svm_clf2 = LinearSVC(C=100, loss="hinge", random_state=42)
scaled_svm_clf1 = Pipeline([
("scaler", scaler),
("linear_svc", svm_clf1),
])
scaled_svm_clf2 = Pipeline([
("scaler", scaler),
("linear_svc", svm_clf2),
])
scaled_svm_clf1.fit(X, y)
scaled_svm_clf2.fit(X, y)
# Convert to unscaled parameters
b1 = svm_clf1.decision_function([-scaler.mean_ / scaler.scale_])
b2 = svm_clf2.decision_function([-scaler.mean_ / scaler.scale_])
w1 = svm_clf1.coef_[0] / scaler.scale_
w2 = svm_clf2.coef_[0] / scaler.scale_
svm_clf1.intercept_ = np.array([b1])
svm_clf2.intercept_ = np.array([b2])
svm_clf1.coef_ = np.array([w1])
svm_clf2.coef_ = np.array([w2])
# Find support vectors (LinearSVC does not do this automatically)
t = y * 2 - 1
support_vectors_idx1 = (t * (X.dot(w1) + b1) < 1).ravel()
support_vectors_idx2 = (t * (X.dot(w2) + b2) < 1).ravel()
svm_clf1.support_vectors_ = X[support_vectors_idx1]
svm_clf2.support_vectors_ = X[support_vectors_idx2]
plt.figure(figsize=(12,3.2))
plt.subplot(121)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^", label="Iris-Virginica")
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs", label="Iris-Versicolor")
plot_svc_decision_boundary(svm_clf1, 4, 6)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="upper left", fontsize=14)
plt.title("$C = {}$".format(svm_clf1.C), fontsize=16)
plt.axis([4, 6, 0.8, 2.8])
plt.subplot(122)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^")
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs")
plot_svc_decision_boundary(svm_clf2, 4, 6)
plt.xlabel("Petal length", fontsize=14)
plt.title("$C = {}$".format(svm_clf2.C), fontsize=16)
plt.axis([4, 6, 0.8, 2.8])
save_fig("regularization_plot")
"""
Explanation: Now let's generate the graph comparing different regularization settings:
End of explanation
"""
X1D = np.linspace(-4, 4, 9).reshape(-1, 1)
X2D = np.c_[X1D, X1D**2]
y = np.array([0, 0, 1, 1, 1, 1, 1, 0, 0])
plt.figure(figsize=(11, 4))
plt.subplot(121)
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.plot(X1D[:, 0][y==0], np.zeros(4), "bs")
plt.plot(X1D[:, 0][y==1], np.zeros(5), "g^")
plt.gca().get_yaxis().set_ticks([])
plt.xlabel(r"$x_1$", fontsize=20)
plt.axis([-4.5, 4.5, -0.2, 0.2])
plt.subplot(122)
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.plot(X2D[:, 0][y==0], X2D[:, 1][y==0], "bs")
plt.plot(X2D[:, 0][y==1], X2D[:, 1][y==1], "g^")
plt.xlabel(r"$x_1$", fontsize=20)
plt.ylabel(r"$x_2$", fontsize=20, rotation=0)
plt.gca().get_yaxis().set_ticks([0, 4, 8, 12, 16])
plt.plot([-4.5, 4.5], [6.5, 6.5], "r--", linewidth=3)
plt.axis([-4.5, 4.5, -1, 17])
plt.subplots_adjust(right=1)
save_fig("higher_dimensions_plot", tight_layout=False)
plt.show()
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, noise=0.15, random_state=42)
def plot_dataset(X, y, axes):
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs")
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^")
plt.axis(axes)
plt.grid(True, which='both')
plt.xlabel(r"$x_1$", fontsize=20)
plt.ylabel(r"$x_2$", fontsize=20, rotation=0)
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
plt.show()
from sklearn.datasets import make_moons
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
polynomial_svm_clf = Pipeline([
("poly_features", PolynomialFeatures(degree=3)),
("scaler", StandardScaler()),
("svm_clf", LinearSVC(C=10, loss="hinge", random_state=42))
])
polynomial_svm_clf.fit(X, y)
def plot_predictions(clf, axes):
x0s = np.linspace(axes[0], axes[1], 100)
x1s = np.linspace(axes[2], axes[3], 100)
x0, x1 = np.meshgrid(x0s, x1s)
X = np.c_[x0.ravel(), x1.ravel()]
y_pred = clf.predict(X).reshape(x0.shape)
y_decision = clf.decision_function(X).reshape(x0.shape)
plt.contourf(x0, x1, y_pred, cmap=plt.cm.brg, alpha=0.2)
plt.contourf(x0, x1, y_decision, cmap=plt.cm.brg, alpha=0.1)
plot_predictions(polynomial_svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
save_fig("moons_polynomial_svc_plot")
plt.show()
from sklearn.svm import SVC
poly_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="poly", degree=3, coef0=1, C=5))
])
poly_kernel_svm_clf.fit(X, y)
poly100_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="poly", degree=10, coef0=100, C=5))
])
poly100_kernel_svm_clf.fit(X, y)
plt.figure(figsize=(11, 4))
plt.subplot(121)
plot_predictions(poly_kernel_svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
plt.title(r"$d=3, r=1, C=5$", fontsize=18)
plt.subplot(122)
plot_predictions(poly100_kernel_svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
plt.title(r"$d=10, r=100, C=5$", fontsize=18)
save_fig("moons_kernelized_polynomial_svc_plot")
plt.show()
def gaussian_rbf(x, landmark, gamma):
return np.exp(-gamma * np.linalg.norm(x - landmark, axis=1)**2)
gamma = 0.3
x1s = np.linspace(-4.5, 4.5, 200).reshape(-1, 1)
x2s = gaussian_rbf(x1s, -2, gamma)
x3s = gaussian_rbf(x1s, 1, gamma)
XK = np.c_[gaussian_rbf(X1D, -2, gamma), gaussian_rbf(X1D, 1, gamma)]
yk = np.array([0, 0, 1, 1, 1, 1, 1, 0, 0])
plt.figure(figsize=(11, 4))
plt.subplot(121)
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.scatter(x=[-2, 1], y=[0, 0], s=150, alpha=0.5, c="red")
plt.plot(X1D[:, 0][yk==0], np.zeros(4), "bs")
plt.plot(X1D[:, 0][yk==1], np.zeros(5), "g^")
plt.plot(x1s, x2s, "g--")
plt.plot(x1s, x3s, "b:")
plt.gca().get_yaxis().set_ticks([0, 0.25, 0.5, 0.75, 1])
plt.xlabel(r"$x_1$", fontsize=20)
plt.ylabel(r"Similarity", fontsize=14)
plt.annotate(r'$\mathbf{x}$',
xy=(X1D[3, 0], 0),
xytext=(-0.5, 0.20),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.1),
fontsize=18,
)
plt.text(-2, 0.9, "$x_2$", ha="center", fontsize=20)
plt.text(1, 0.9, "$x_3$", ha="center", fontsize=20)
plt.axis([-4.5, 4.5, -0.1, 1.1])
plt.subplot(122)
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.plot(XK[:, 0][yk==0], XK[:, 1][yk==0], "bs")
plt.plot(XK[:, 0][yk==1], XK[:, 1][yk==1], "g^")
plt.xlabel(r"$x_2$", fontsize=20)
plt.ylabel(r"$x_3$ ", fontsize=20, rotation=0)
plt.annotate(r'$\phi\left(\mathbf{x}\right)$',
xy=(XK[3, 0], XK[3, 1]),
xytext=(0.65, 0.50),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.1),
fontsize=18,
)
plt.plot([-0.1, 1.1], [0.57, -0.1], "r--", linewidth=3)
plt.axis([-0.1, 1.1, -0.1, 1.1])
plt.subplots_adjust(right=1)
save_fig("kernel_method_plot")
plt.show()
x1_example = X1D[3, 0]
for landmark in (-2, 1):
k = gaussian_rbf(np.array([[x1_example]]), np.array([[landmark]]), gamma)
print("Phi({}, {}) = {}".format(x1_example, landmark, k))
rbf_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="rbf", gamma=5, C=0.001))
])
rbf_kernel_svm_clf.fit(X, y)
from sklearn.svm import SVC
gamma1, gamma2 = 0.1, 5
C1, C2 = 0.001, 1000
hyperparams = (gamma1, C1), (gamma1, C2), (gamma2, C1), (gamma2, C2)
svm_clfs = []
for gamma, C in hyperparams:
rbf_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="rbf", gamma=gamma, C=C))
])
rbf_kernel_svm_clf.fit(X, y)
svm_clfs.append(rbf_kernel_svm_clf)
plt.figure(figsize=(11, 7))
for i, svm_clf in enumerate(svm_clfs):
plt.subplot(221 + i)
plot_predictions(svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
gamma, C = hyperparams[i]
plt.title(r"$\gamma = {}, C = {}$".format(gamma, C), fontsize=16)
save_fig("moons_rbf_svc_plot")
plt.show()
"""
Explanation: Non-linear classification
End of explanation
"""
np.random.seed(42)
m = 50
X = 2 * np.random.rand(m, 1)
y = (4 + 3 * X + np.random.randn(m, 1)).ravel()
from sklearn.svm import LinearSVR
svm_reg = LinearSVR(epsilon=1.5, random_state=42)
svm_reg.fit(X, y)
svm_reg1 = LinearSVR(epsilon=1.5, random_state=42)
svm_reg2 = LinearSVR(epsilon=0.5, random_state=42)
svm_reg1.fit(X, y)
svm_reg2.fit(X, y)
def find_support_vectors(svm_reg, X, y):
y_pred = svm_reg.predict(X)
off_margin = (np.abs(y - y_pred) >= svm_reg.epsilon)
return np.argwhere(off_margin)
svm_reg1.support_ = find_support_vectors(svm_reg1, X, y)
svm_reg2.support_ = find_support_vectors(svm_reg2, X, y)
eps_x1 = 1
eps_y_pred = svm_reg1.predict([[eps_x1]])
def plot_svm_regression(svm_reg, X, y, axes):
x1s = np.linspace(axes[0], axes[1], 100).reshape(100, 1)
y_pred = svm_reg.predict(x1s)
plt.plot(x1s, y_pred, "k-", linewidth=2, label=r"$\hat{y}$")
plt.plot(x1s, y_pred + svm_reg.epsilon, "k--")
plt.plot(x1s, y_pred - svm_reg.epsilon, "k--")
plt.scatter(X[svm_reg.support_], y[svm_reg.support_], s=180, facecolors='#FFAAAA')
plt.plot(X, y, "bo")
plt.xlabel(r"$x_1$", fontsize=18)
plt.legend(loc="upper left", fontsize=18)
plt.axis(axes)
plt.figure(figsize=(9, 4))
plt.subplot(121)
plot_svm_regression(svm_reg1, X, y, [0, 2, 3, 11])
plt.title(r"$\epsilon = {}$".format(svm_reg1.epsilon), fontsize=18)
plt.ylabel(r"$y$", fontsize=18, rotation=0)
#plt.plot([eps_x1, eps_x1], [eps_y_pred, eps_y_pred - svm_reg1.epsilon], "k-", linewidth=2)
plt.annotate(
'', xy=(eps_x1, eps_y_pred), xycoords='data',
xytext=(eps_x1, eps_y_pred - svm_reg1.epsilon),
textcoords='data', arrowprops={'arrowstyle': '<->', 'linewidth': 1.5}
)
plt.text(0.91, 5.6, r"$\epsilon$", fontsize=20)
plt.subplot(122)
plot_svm_regression(svm_reg2, X, y, [0, 2, 3, 11])
plt.title(r"$\epsilon = {}$".format(svm_reg2.epsilon), fontsize=18)
save_fig("svm_regression_plot")
plt.show()
np.random.seed(42)
m = 100
X = 2 * np.random.rand(m, 1) - 1
y = (0.2 + 0.1 * X + 0.5 * X**2 + np.random.randn(m, 1)/10).ravel()
from sklearn.svm import SVR
svm_poly_reg = SVR(kernel="poly", degree=2, C=100, epsilon=0.1)
svm_poly_reg.fit(X, y)
from sklearn.svm import SVR
svm_poly_reg1 = SVR(kernel="poly", degree=2, C=100, epsilon=0.1)
svm_poly_reg2 = SVR(kernel="poly", degree=2, C=0.01, epsilon=0.1)
svm_poly_reg1.fit(X, y)
svm_poly_reg2.fit(X, y)
plt.figure(figsize=(9, 4))
plt.subplot(121)
plot_svm_regression(svm_poly_reg1, X, y, [-1, 1, 0, 1])
plt.title(r"$degree={}, C={}, \epsilon = {}$".format(svm_poly_reg1.degree, svm_poly_reg1.C, svm_poly_reg1.epsilon), fontsize=18)
plt.ylabel(r"$y$", fontsize=18, rotation=0)
plt.subplot(122)
plot_svm_regression(svm_poly_reg2, X, y, [-1, 1, 0, 1])
plt.title(r"$degree={}, C={}, \epsilon = {}$".format(svm_poly_reg2.degree, svm_poly_reg2.C, svm_poly_reg2.epsilon), fontsize=18)
save_fig("svm_with_polynomial_kernel_plot")
plt.show()
"""
Explanation: Regression
End of explanation
"""
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = (iris["target"] == 2).astype(np.float64) # Iris-Virginica
from mpl_toolkits.mplot3d import Axes3D
def plot_3D_decision_function(ax, w, b, x1_lim=[4, 6], x2_lim=[0.8, 2.8]):
x1_in_bounds = (X[:, 0] > x1_lim[0]) & (X[:, 0] < x1_lim[1])
X_crop = X[x1_in_bounds]
y_crop = y[x1_in_bounds]
x1s = np.linspace(x1_lim[0], x1_lim[1], 20)
x2s = np.linspace(x2_lim[0], x2_lim[1], 20)
x1, x2 = np.meshgrid(x1s, x2s)
xs = np.c_[x1.ravel(), x2.ravel()]
df = (xs.dot(w) + b).reshape(x1.shape)
m = 1 / np.linalg.norm(w)
boundary_x2s = -x1s*(w[0]/w[1])-b/w[1]
margin_x2s_1 = -x1s*(w[0]/w[1])-(b-1)/w[1]
margin_x2s_2 = -x1s*(w[0]/w[1])-(b+1)/w[1]
ax.plot_surface(x1s, x2, 0, color="b", alpha=0.2, cstride=100, rstride=100)
ax.plot(x1s, boundary_x2s, 0, "k-", linewidth=2, label=r"$h=0$")
ax.plot(x1s, margin_x2s_1, 0, "k--", linewidth=2, label=r"$h=\pm 1$")
ax.plot(x1s, margin_x2s_2, 0, "k--", linewidth=2)
ax.plot(X_crop[:, 0][y_crop==1], X_crop[:, 1][y_crop==1], 0, "g^")
ax.plot_wireframe(x1, x2, df, alpha=0.3, color="k")
ax.plot(X_crop[:, 0][y_crop==0], X_crop[:, 1][y_crop==0], 0, "bs")
ax.axis(x1_lim + x2_lim)
ax.text(4.5, 2.5, 3.8, "Decision function $h$", fontsize=15)
ax.set_xlabel(r"Petal length", fontsize=15)
ax.set_ylabel(r"Petal width", fontsize=15)
ax.set_zlabel(r"$h = \mathbf{w}^t \cdot \mathbf{x} + b$", fontsize=18)
ax.legend(loc="upper left", fontsize=16)
fig = plt.figure(figsize=(11, 6))
ax1 = fig.add_subplot(111, projection='3d')
plot_3D_decision_function(ax1, w=svm_clf2.coef_[0], b=svm_clf2.intercept_[0])
save_fig("iris_3D_plot")
plt.show()
"""
Explanation: Under the hood
End of explanation
"""
def plot_2D_decision_function(w, b, ylabel=True, x1_lim=[-3, 3]):
x1 = np.linspace(x1_lim[0], x1_lim[1], 200)
y = w * x1 + b
m = 1 / w
plt.plot(x1, y)
plt.plot(x1_lim, [1, 1], "k:")
plt.plot(x1_lim, [-1, -1], "k:")
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.plot([m, m], [0, 1], "k--")
plt.plot([-m, -m], [0, -1], "k--")
plt.plot([-m, m], [0, 0], "k-o", linewidth=3)
plt.axis(x1_lim + [-2, 2])
plt.xlabel(r"$x_1$", fontsize=16)
if ylabel:
plt.ylabel(r"$w_1 x_1$ ", rotation=0, fontsize=16)
plt.title(r"$w_1 = {}$".format(w), fontsize=16)
plt.figure(figsize=(12, 3.2))
plt.subplot(121)
plot_2D_decision_function(1, 0)
plt.subplot(122)
plot_2D_decision_function(0.5, 0, ylabel=False)
save_fig("small_w_large_margin_plot")
plt.show()
from sklearn.svm import SVC
from sklearn import datasets
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = (iris["target"] == 2).astype(np.float64) # Iris-Virginica
svm_clf = SVC(kernel="linear", C=1)
svm_clf.fit(X, y)
svm_clf.predict([[5.3, 1.3]])
"""
Explanation: A small weight vector results in a large margin
End of explanation
"""
t = np.linspace(-2, 4, 200)
h = np.where(1 - t < 0, 0, 1 - t) # max(0, 1-t)
plt.figure(figsize=(5,2.8))
plt.plot(t, h, "b-", linewidth=2, label="$max(0, 1 - t)$")
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.yticks(np.arange(-1, 2.5, 1))
plt.xlabel("$t$", fontsize=16)
plt.axis([-2, 4, -1, 2.5])
plt.legend(loc="upper right", fontsize=16)
save_fig("hinge_plot")
plt.show()
"""
Explanation: Hinge loss
End of explanation
"""
X, y = make_moons(n_samples=1000, noise=0.4, random_state=42)
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs")
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^")
import time
tol = 0.1
tols = []
times = []
for i in range(10):
svm_clf = SVC(kernel="poly", gamma=3, C=10, tol=tol, verbose=1)
t1 = time.time()
svm_clf.fit(X, y)
t2 = time.time()
times.append(t2-t1)
tols.append(tol)
print(i, tol, t2-t1)
tol /= 10
plt.semilogx(tols, times)
"""
Explanation: Extra material
Training time
End of explanation
"""
# Training set
X = iris["data"][:, (2, 3)] # petal length, petal width
y = (iris["target"] == 2).astype(np.float64).reshape(-1, 1) # Iris-Virginica
from sklearn.base import BaseEstimator
class MyLinearSVC(BaseEstimator):
def __init__(self, C=1, eta0=1, eta_d=10000, n_epochs=1000, random_state=None):
self.C = C
self.eta0 = eta0
self.n_epochs = n_epochs
self.random_state = random_state
self.eta_d = eta_d
def eta(self, epoch):
return self.eta0 / (epoch + self.eta_d)
def fit(self, X, y):
# Random initialization
if self.random_state:
np.random.seed(self.random_state)
w = np.random.randn(X.shape[1], 1) # n feature weights
b = 0
m = len(X)
t = y * 2 - 1 # -1 if t==0, +1 if t==1
X_t = X * t
self.Js=[]
# Training
for epoch in range(self.n_epochs):
support_vectors_idx = (X_t.dot(w) + t * b < 1).ravel()
X_t_sv = X_t[support_vectors_idx]
t_sv = t[support_vectors_idx]
J = 1/2 * np.sum(w * w) + self.C * (np.sum(1 - X_t_sv.dot(w)) - b * np.sum(t_sv))
self.Js.append(J)
w_gradient_vector = w - self.C * np.sum(X_t_sv, axis=0).reshape(-1, 1)
            b_derivative = -self.C * np.sum(t_sv)
w = w - self.eta(epoch) * w_gradient_vector
b = b - self.eta(epoch) * b_derivative
self.intercept_ = np.array([b])
self.coef_ = np.array([w])
        support_vectors_idx = (X_t.dot(w) + t * b < 1).ravel()
self.support_vectors_ = X[support_vectors_idx]
return self
def decision_function(self, X):
return X.dot(self.coef_[0]) + self.intercept_[0]
def predict(self, X):
return (self.decision_function(X) >= 0).astype(np.float64)
C=2
svm_clf = MyLinearSVC(C=C, eta0 = 10, eta_d = 1000, n_epochs=60000, random_state=2)
svm_clf.fit(X, y)
svm_clf.predict(np.array([[5, 2], [4, 1]]))
plt.plot(range(svm_clf.n_epochs), svm_clf.Js)
plt.axis([0, svm_clf.n_epochs, 0, 100])
print(svm_clf.intercept_, svm_clf.coef_)
svm_clf2 = SVC(kernel="linear", C=C)
svm_clf2.fit(X, y.ravel())
print(svm_clf2.intercept_, svm_clf2.coef_)
yr = y.ravel()
plt.figure(figsize=(12,3.2))
plt.subplot(121)
plt.plot(X[:, 0][yr==1], X[:, 1][yr==1], "g^", label="Iris-Virginica")
plt.plot(X[:, 0][yr==0], X[:, 1][yr==0], "bs", label="Not Iris-Virginica")
plot_svc_decision_boundary(svm_clf, 4, 6)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.title("MyLinearSVC", fontsize=14)
plt.axis([4, 6, 0.8, 2.8])
plt.subplot(122)
plt.plot(X[:, 0][yr==1], X[:, 1][yr==1], "g^")
plt.plot(X[:, 0][yr==0], X[:, 1][yr==0], "bs")
plot_svc_decision_boundary(svm_clf2, 4, 6)
plt.xlabel("Petal length", fontsize=14)
plt.title("SVC", fontsize=14)
plt.axis([4, 6, 0.8, 2.8])
from sklearn.linear_model import SGDClassifier
sgd_clf = SGDClassifier(loss="hinge", alpha = 0.017, n_iter = 50, random_state=42)
sgd_clf.fit(X, y.ravel())
m = len(X)
t = y * 2 - 1 # -1 if t==0, +1 if t==1
X_b = np.c_[np.ones((m, 1)), X] # Add bias input x0=1
X_b_t = X_b * t
sgd_theta = np.r_[sgd_clf.intercept_[0], sgd_clf.coef_[0]]
print(sgd_theta)
support_vectors_idx = (X_b_t.dot(sgd_theta) < 1).ravel()
sgd_clf.support_vectors_ = X[support_vectors_idx]
sgd_clf.C = C
plt.figure(figsize=(5.5,3.2))
plt.plot(X[:, 0][yr==1], X[:, 1][yr==1], "g^")
plt.plot(X[:, 0][yr==0], X[:, 1][yr==0], "bs")
plot_svc_decision_boundary(sgd_clf, 4, 6)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.title("SGDClassifier", fontsize=14)
plt.axis([4, 6, 0.8, 2.8])
"""
Explanation: Linear SVM classifier implementation using Batch Gradient Descent
End of explanation
"""
from sklearn import datasets
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = iris["target"]
setosa_or_versicolor = (y == 0) | (y == 1)
X = X[setosa_or_versicolor]
y = y[setosa_or_versicolor]
from sklearn.svm import SVC, LinearSVC
from sklearn.linear_model import SGDClassifier
from sklearn.preprocessing import StandardScaler
C = 5
alpha = 1 / (C * len(X))
lin_clf = LinearSVC(loss="hinge", C=C, random_state=42)
svm_clf = SVC(kernel="linear", C=C)
sgd_clf = SGDClassifier(loss="hinge", learning_rate="constant", eta0=0.001, alpha=alpha,
n_iter=100000, random_state=42)
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
lin_clf.fit(X_scaled, y)
svm_clf.fit(X_scaled, y)
sgd_clf.fit(X_scaled, y)
print("LinearSVC: ", lin_clf.intercept_, lin_clf.coef_)
print("SVC: ", svm_clf.intercept_, svm_clf.coef_)
print("SGDClassifier(alpha={:.5f}):".format(sgd_clf.alpha), sgd_clf.intercept_, sgd_clf.coef_)
"""
Explanation: Exercise solutions
1. to 7.
See appendix A.
8.
Exercise: train a LinearSVC on a linearly separable dataset. Then train an SVC and a SGDClassifier on the same dataset. See if you can get them to produce roughly the same model.
Let's use the Iris dataset: the Iris Setosa and Iris Versicolor classes are linearly separable.
End of explanation
"""
# Compute the slope and bias of each decision boundary
w1 = -lin_clf.coef_[0, 0]/lin_clf.coef_[0, 1]
b1 = -lin_clf.intercept_[0]/lin_clf.coef_[0, 1]
w2 = -svm_clf.coef_[0, 0]/svm_clf.coef_[0, 1]
b2 = -svm_clf.intercept_[0]/svm_clf.coef_[0, 1]
w3 = -sgd_clf.coef_[0, 0]/sgd_clf.coef_[0, 1]
b3 = -sgd_clf.intercept_[0]/sgd_clf.coef_[0, 1]
# Transform the decision boundary lines back to the original scale
line1 = scaler.inverse_transform([[-10, -10 * w1 + b1], [10, 10 * w1 + b1]])
line2 = scaler.inverse_transform([[-10, -10 * w2 + b2], [10, 10 * w2 + b2]])
line3 = scaler.inverse_transform([[-10, -10 * w3 + b3], [10, 10 * w3 + b3]])
# Plot all three decision boundaries
plt.figure(figsize=(11, 4))
plt.plot(line1[:, 0], line1[:, 1], "k:", label="LinearSVC")
plt.plot(line2[:, 0], line2[:, 1], "b--", linewidth=2, label="SVC")
plt.plot(line3[:, 0], line3[:, 1], "r-", label="SGDClassifier")
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs") # label="Iris-Versicolor"
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo") # label="Iris-Setosa"
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="upper center", fontsize=14)
plt.axis([0, 5.5, 0, 2])
plt.show()
"""
Explanation: Let's plot the decision boundaries of these three models:
End of explanation
"""
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata("MNIST original")
X = mnist["data"]
y = mnist["target"]
X_train = X[:60000]
y_train = y[:60000]
X_test = X[60000:]
y_test = y[60000:]
"""
Explanation: Close enough!
9.
Exercise: train an SVM classifier on the MNIST dataset. Since SVM classifiers are binary classifiers, you will need to use one-versus-all to classify all 10 digits. You may want to tune the hyperparameters using small validation sets to speed up the process. What accuracy can you reach?
First, let's load the dataset and split it into a training set and a test set. We could use train_test_split() but people usually just take the first 60,000 instances for the training set, and the last 10,000 instances for the test set (this makes it possible to compare your model's performance with others):
End of explanation
"""
np.random.seed(42)
rnd_idx = np.random.permutation(60000)
X_train = X_train[rnd_idx]
y_train = y_train[rnd_idx]
"""
Explanation: Many training algorithms are sensitive to the order of the training instances, so it's generally good practice to shuffle them first:
End of explanation
"""
lin_clf = LinearSVC(random_state=42)
lin_clf.fit(X_train, y_train)
"""
Explanation: Let's start simple, with a linear SVM classifier. It will automatically use the One-vs-All (also called One-vs-the-Rest, OvR) strategy, so there's nothing special we need to do. Easy!
End of explanation
"""
from sklearn.metrics import accuracy_score
y_pred = lin_clf.predict(X_train)
accuracy_score(y_train, y_pred)
"""
Explanation: Let's make predictions on the training set and measure the accuracy (we don't want to measure it on the test set yet, since we have not selected and trained the final model yet):
End of explanation
"""
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train.astype(np.float32))
X_test_scaled = scaler.transform(X_test.astype(np.float32))
lin_clf = LinearSVC(random_state=42)
lin_clf.fit(X_train_scaled, y_train)
y_pred = lin_clf.predict(X_train_scaled)
accuracy_score(y_train, y_pred)
"""
Explanation: Wow, 82% accuracy on MNIST is a really bad performance. This linear model is certainly too simple for MNIST, but perhaps we just needed to scale the data first:
End of explanation
"""
svm_clf = SVC(decision_function_shape="ovr")
svm_clf.fit(X_train_scaled[:10000], y_train[:10000])
y_pred = svm_clf.predict(X_train_scaled)
accuracy_score(y_train, y_pred)
"""
Explanation: That's much better (we cut the error rate in two), but still not great at all for MNIST. If we want to use an SVM, we will have to use a kernel. Let's try an SVC with an RBF kernel (the default).
Warning: if you are using Scikit-Learn ≤ 0.19, the SVC class will use the One-vs-One (OvO) strategy by default, so you must explicitly set decision_function_shape="ovr" if you want to use the OvR strategy instead (OvR is the default since 0.19).
End of explanation
"""
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import reciprocal, uniform
param_distributions = {"gamma": reciprocal(0.001, 0.1), "C": uniform(1, 10)}
rnd_search_cv = RandomizedSearchCV(svm_clf, param_distributions, n_iter=10, verbose=2)
rnd_search_cv.fit(X_train_scaled[:1000], y_train[:1000])
rnd_search_cv.best_estimator_
rnd_search_cv.best_score_
"""
Explanation: That's promising, we get better performance even though we trained the model on 6 times less data. Let's tune the hyperparameters by doing a randomized search with cross validation. We will do this on a small dataset just to speed up the process:
End of explanation
"""
rnd_search_cv.best_estimator_.fit(X_train_scaled, y_train)
y_pred = rnd_search_cv.best_estimator_.predict(X_train_scaled)
accuracy_score(y_train, y_pred)
"""
Explanation: This looks pretty low but remember we only trained the model on 1,000 instances. Let's retrain the best estimator on the whole training set (run this at night, it will take hours):
End of explanation
"""
y_pred = rnd_search_cv.best_estimator_.predict(X_test_scaled)
accuracy_score(y_test, y_pred)
"""
Explanation: Ah, this looks good! Let's select this model. Now we can test it on the test set:
End of explanation
"""
from sklearn.datasets import fetch_california_housing
housing = fetch_california_housing()
X = housing["data"]
y = housing["target"]
"""
Explanation: Not too bad, but apparently the model is overfitting slightly. It's tempting to tweak the hyperparameters a bit more (e.g. decreasing C and/or gamma), but we would run the risk of overfitting the test set. Other people have found that the hyperparameters C=5 and gamma=0.005 yield even better performance (over 98% accuracy). By running the randomized search for longer and on a larger part of the training set, you may be able to find this as well.
10.
Exercise: train an SVM regressor on the California housing dataset.
Let's load the dataset using Scikit-Learn's fetch_california_housing() function:
End of explanation
"""
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
"""
Explanation: Split it into a training set and a test set:
End of explanation
"""
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
"""
Explanation: Don't forget to scale the data:
End of explanation
"""
from sklearn.svm import LinearSVR
lin_svr = LinearSVR(random_state=42)
lin_svr.fit(X_train_scaled, y_train)
"""
Explanation: Let's train a simple LinearSVR first:
End of explanation
"""
from sklearn.metrics import mean_squared_error
y_pred = lin_svr.predict(X_train_scaled)
mse = mean_squared_error(y_train, y_pred)
mse
"""
Explanation: Let's see how it performs on the training set:
End of explanation
"""
np.sqrt(mse)
"""
Explanation: Let's look at the RMSE:
End of explanation
"""
from sklearn.svm import SVR
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import reciprocal, uniform
param_distributions = {"gamma": reciprocal(0.001, 0.1), "C": uniform(1, 10)}
rnd_search_cv = RandomizedSearchCV(SVR(), param_distributions, n_iter=10, verbose=2, random_state=42)
rnd_search_cv.fit(X_train_scaled, y_train)
rnd_search_cv.best_estimator_
"""
Explanation: In this training set, the targets are tens of thousands of dollars. The RMSE gives a rough idea of the kind of error you should expect (with a higher weight for large errors): so with this model we can expect errors somewhere around $10,000. Not great. Let's see if we can do better with an RBF Kernel. We will use randomized search with cross validation to find the appropriate hyperparameter values for C and gamma:
End of explanation
"""
y_pred = rnd_search_cv.best_estimator_.predict(X_train_scaled)
mse = mean_squared_error(y_train, y_pred)
np.sqrt(mse)
"""
Explanation: Now let's measure the RMSE on the training set:
End of explanation
"""
y_pred = rnd_search_cv.best_estimator_.predict(X_test_scaled)
mse = mean_squared_error(y_test, y_pred)
np.sqrt(mse)
"""
Explanation: Looks much better than the linear model. Let's select this model and evaluate it on the test set:
End of explanation
"""
|
pucdata/pythonclub | sessions/05-astropy/Astropy Explored.ipynb | gpl-3.0 | #Preamble. These are some standard things I like to include in IPython Notebooks.
import astropy
from astropy.table import Table, Column, MaskedColumn
import numpy as np
import matplotlib.pyplot as plt
from astropy.io import fits
from astropy import units as u
from astropy.coordinates import SkyCoord, Angle
import astropy.cosmology as cos
# special IPython command to prepare the notebook for matplotlib
%matplotlib inline
"""
Explanation: ASTROPY EXPLORATION
Astropy is a package that can be downloaded directly from Anaconda. It is hosted at astropy.org
http://docs.astropy.org/en/stable/getting_started.html
is a good place to get started, but we'll try our hand at dealing with some data.
End of explanation
"""
cat_file='data/DR9Q.fits'
hdu = fits.open(cat_file)
hdu.info()
"""
Explanation: First we'll download a fits file from SDSS. This is a fits table rather than a fits image, but the general principle for opening any of these is the same. Astropy's I/O is so advanced that it basically will do exactly what you want unless you are opening a mosaic or a multispec file, and then it is generally the case that you can find affiliated packages which can handle the proper data opening.
End of explanation
"""
cat_header=hdu[0].header
cat_header
"""
Explanation: Typically, you'll want to discover the metadata by reading in the header. For our FITS table, the header isn't as illuminating as it will be for an image file, for example.
End of explanation
"""
print(cat_header['NAXIS1'])
"""
Explanation: This metadata often contains information you will want to use or manipulate later. In that case, it's pretty straightforward to call it.
End of explanation
"""
cat_data=hdu[1].data
cat_data.columns
"""
Explanation: However, what you really want to see is the data, right? From this point on, we'll be referring explicitly to a fits datatable instead of image. One thing to keep in mind is that the data will be read into a numpy array with a zero-based index in the [y, x] order as was discussed in a previous Python club meeting.
End of explanation
"""
data = Table(cat_data)
"""
Explanation: This is a particular kind of "FITS record" rather than a data table. You might want to convert this into an astropy data table to see its functionality. Another option would be to convert this into a pandas dataframe, but we'll let Roberto handle that idea at a later time.
End of explanation
"""
data
"""
Explanation: Let's see how this differs from the other object above. You can get a sense just by doing the self call.
End of explanation
"""
len(data)
truncated_data = data[0:49]
print(truncated_data)
"""
Explanation: I note that this table is fairly large, so let's pare it down a bit.
End of explanation
"""
truncated_data.show_in_notebook()
"""
Explanation: There are other ways you may want to view your table. For example, the "show in notebook" functionality.
End of explanation
"""
truncated_data.show_in_browser()
truncated_data.show_in_browser(jsviewer=True)
"""
Explanation: There is also an option to show it in browser either as a static HTML or sortable javascript table.
End of explanation
"""
data['coordinate'] = SkyCoord(ra=data['RA']*u.degree, dec=data['DEC']*u.degree)
some_coordinate = data['coordinate'][data['SDSS_NAME'] == '000048.17+013313.6']
print(some_coordinate.galactic)
print(some_coordinate.ra.hms.h[0], some_coordinate.ra.hms.m[0], some_coordinate.ra.hms.s[0])
#Here's a way to plot the Mollweide projection of our data in degrees.
def plot_mwd(ra,dec,org=0,title='Mollweide projection', projection='mollweide', coordinate='equatorial'):
x = np.remainder(ra+360-org,360) # shift RA values
ind = x>180
x[ind] -=360 # scale conversion to [-180, 180]
x=-x # reverse the scale: East to the left
tick_labels = np.array([150, 120, 90, 60, 30, 0, 330, 300, 270, 240, 210])
tick_labels = np.remainder(tick_labels+360+org,360)
fig = plt.figure(figsize=(10, 5))
    ax = fig.add_subplot(111, projection=projection, facecolor='LightCyan')
    ax.scatter(np.radians(x), np.radians(dec), marker='o', s=5, facecolor='green', edgecolor='none', alpha=0.5) # convert degrees to radians
ax.set_xticklabels(tick_labels) # we add the scale on the x axis
ax.set_title(title)
ax.title.set_fontsize(15)
if coordinate == 'equatorial':
ax.set_xlabel("RA")
ax.set_ylabel("Dec")
elif coordinate == 'galactic':
ax.set_xlabel("Gal longitude")
ax.set_ylabel("Gal latitude")
else:
ax.set_xlabel("NA")
ax.set_ylabel("NA")
ax.xaxis.label.set_fontsize(12)
ax.yaxis.label.set_fontsize(12)
ax.grid(True)
plot_mwd(data['coordinate'].ra.degree,data['coordinate'].dec.degree, org=180.)
plot_mwd(data['coordinate'].galactic.l.degree,data['coordinate'].galactic.b.degree, org=0., coordinate='galactic')
plt.hist(data['Z_VI'])
# load a standard cosmology
cosmo = cos.WMAP9
plt.hist(cosmo.age(truncated_data['Z_VI']))
plt.xlabel('Gyrs')
# arcsec/kpc
plt.hist(cosmo.arcsec_per_kpc_proper(truncated_data['Z_VI']))
# luminosity distance
plt.hist(cosmo.luminosity_distance(truncated_data['Z_VI'])/1000)
plt.xlabel('Gpc')
plt.plot(np.sort(truncated_data['Z_VI']), np.sort(cosmo.H(truncated_data['Z_VI'])), '-')
plt.xlabel('z')
plt.ylabel(r'H in km sec$^{-1}$ Mpc$^{-1}$')
# Temp of CMB as a function of redshift
plt.plot(np.sort(truncated_data['Z_VI']), np.sort(cosmo.Tcmb(truncated_data['Z_VI'])), '-')
plt.xlabel('z')
plt.ylabel(r'$T_{CMB}$ in K')
# H_0
cosmo.H0
# What redshift is it when the Universe is 500 Myr old?
import astropy.units as u
cos.z_at_value(cosmo.age, 500 * u.Myr)
# Now, define our own cosmology, old-school H0 value, but keep the Universe
# flat
mycos = cos.FlatLambdaCDM(H0=100, Om0=0.3)
plt.hist(mycos.luminosity_distance(truncated_data['Z_VI'])/1000)
plt.xlabel('Gpc')
"""
Explanation: Astropy also contains a number of "units" objects which can be very useful. We have to add these by hand at this point (I think, but correct me if you know of a trick for SDSS fits tables... it seems like there should be one.) So we'll create a new column that is a "coordinate" object.
End of explanation
"""
|
russellclarke82/CV | Pi/String formatting for printing.ipynb | apache-2.0 | print('This is a String {}'.format('INSERTED'))
print('This is an example of MultiIndex insertions {} {} {}'.format('INSERTION1', 'INSERTION2', 'INSERTION3'))
a = 'I1'
b = 'I2'
c = 'I3'
print('This is me jumping the gun and testing a theory {} {} {}'.format(a, b, c))
print('Now testing MultiIndexed Insertions without jumping the gun:' + 'The {2} {1} {0}'.format('fox', 'brown', 'quick'))
"""
Explanation: Often we will want to insert/inject variables into a string, e.g. my_name = "Russ"
print("Hi " + my_name)
String Interpolation refers to formatting strings in various ways to permit injection.
We'll explore the .format() method and f-strings (formatted string literals), which are available in Python 3.6 and later.
End of explanation
"""
print('The {q} {g} {f}'.format(f='fox', q='quick', g='golden'))
"""
Explanation: You can also repeat an index, e.g. {0} {0} {0}, and use keyword-to-value pairs as below. Keyword names follow the usual variable-naming rules, so they cannot be numerical values or expressions.
End of explanation
"""
result = 12987/8921
result
print('The result was {}'.format(result))
print('The result was {r:1.6f}'.format(r=result))
"""
Explanation: Repeat can also be used with keys: {f} {f} {f}
Float formatting follows "{value:width.precision f}"
End of explanation
"""
name = 'Rusha'
print(f'Hi, he said his name was {name} because he drinks too much caffeine and rushes around.')
age = 3000
real_name = 'Russell'
hobbies = 'AI, Art and Coding among others.'
print(f'Hi, he said his name was {name} because he drinks a lot of caffeine and rushes about. I later found out after getting to know him, his real name is {real_name} and he is {age} years old and has a keen interest in {hobbies}')
pi = u"\U0001D70B"
print(f'He also said he likes the idea of {pi}.')
"""
Explanation: Formatted String literals AKA 'FString'.
End of explanation
"""
|
GoogleCloudPlatform/rad-lab | modules/data_science/scripts/build/notebooks/Quantum_Simulation_qsimcirq.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 Google
End of explanation
"""
try:
import cirq
except ImportError:
!pip install cirq --quiet
import cirq
try:
import qsimcirq
except ImportError:
!pip install qsimcirq --quiet
import qsimcirq
"""
Explanation: Get started with qsimcirq
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/qsim/tutorials/qsimcirq"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/qsim/blob/master/docs/tutorials/qsimcirq.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/qsim/blob/master/docs/tutorials/qsimcirq.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/qsim/docs/tutorials/qsimcirq.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
The qsim library provides a Python interface to Cirq in the qsimcirq PyPI package.
Setup
Install the Cirq and qsimcirq packages:
End of explanation
"""
# Define qubits and a short circuit.
q0, q1 = cirq.LineQubit.range(2)
circuit = cirq.Circuit(cirq.H(q0), cirq.CX(q0, q1))
print("Circuit:")
print(circuit)
print()
# Simulate the circuit with Cirq and return the full state vector.
print('Cirq results:')
cirq_simulator = cirq.Simulator()
cirq_results = cirq_simulator.simulate(circuit)
print(cirq_results)
print()
# Simulate the circuit with qsim and return the full state vector.
print('qsim results:')
qsim_simulator = qsimcirq.QSimSimulator()
qsim_results = qsim_simulator.simulate(circuit)
print(qsim_results)
"""
Explanation: Simulating Cirq circuits with qsim is easy: just define the circuit as you normally would, then create a QSimSimulator to perform the simulation. This object implements Cirq's simulator.py interfaces, so you can drop it in anywhere the basic Cirq simulator is used.
Full state-vector simulation
qsim is optimized for computing the final state vector of a circuit. Try it by running the example below.
End of explanation
"""
samples = cirq.sample_state_vector(
qsim_results.state_vector(), indices=[0, 1], repetitions=10)
print(samples)
"""
Explanation: To sample from this state, you can invoke Cirq's sample_state_vector method:
End of explanation
"""
# Define a circuit with measurements.
q0, q1 = cirq.LineQubit.range(2)
circuit = cirq.Circuit(
cirq.H(q0), cirq.X(q1), cirq.CX(q0, q1),
cirq.measure(q0, key='qubit_0'),
cirq.measure(q1, key='qubit_1'),
)
print("Circuit:")
print(circuit)
print()
# Simulate the circuit with Cirq and return just the measurement values.
print('Cirq results:')
cirq_simulator = cirq.Simulator()
cirq_results = cirq_simulator.run(circuit, repetitions=5)
print(cirq_results)
print()
# Simulate the circuit with qsim and return just the measurement values.
print('qsim results:')
qsim_simulator = qsimcirq.QSimSimulator()
qsim_results = qsim_simulator.run(circuit, repetitions=5)
print(qsim_results)
"""
Explanation: Measurement sampling
qsim also supports sampling from user-defined measurement gates.
Note: Since qsim and Cirq use different random number generators, identical runs on both simulators may give different results, even if they use the same seed.
End of explanation
"""
# Define a circuit with intermediate measurements.
q0 = cirq.LineQubit(0)
circuit = cirq.Circuit(
cirq.X(q0)**0.5, cirq.measure(q0, key='m0'),
cirq.X(q0)**0.5, cirq.measure(q0, key='m1'),
cirq.X(q0)**0.5, cirq.measure(q0, key='m2'),
)
print("Circuit:")
print(circuit)
print()
# Simulate the circuit with qsim and return just the measurement values.
print('qsim results:')
qsim_simulator = qsimcirq.QSimSimulator()
qsim_results = qsim_simulator.run(circuit, repetitions=5)
print(qsim_results)
"""
Explanation: The warning above highlights an important distinction between the simulate and run methods:
simulate only executes the circuit once.
Sampling from the resulting state is fast, but if there are intermediate measurements the final state vector depends on the results of those measurements.
run will execute the circuit once for each repetition requested.
As a result, sampling is much slower, but intermediate measurements are re-sampled for each repetition. If there are no intermediate measurements, run redirects to simulate for faster execution.
The warning goes away if intermediate measurements are present:
End of explanation
"""
# Define a simple circuit.
q0, q1 = cirq.LineQubit.range(2)
circuit = cirq.Circuit(cirq.H(q0), cirq.CX(q0, q1))
print("Circuit:")
print(circuit)
print()
# Simulate the circuit with qsim and return the amplitudes for |00) and |01).
print('Cirq results:')
cirq_simulator = cirq.Simulator()
cirq_results = cirq_simulator.compute_amplitudes(
circuit, bitstrings=[0b00, 0b01])
print(cirq_results)
print()
# Simulate the circuit with qsim and return the amplitudes for |00) and |01).
print('qsim results:')
qsim_simulator = qsimcirq.QSimSimulator()
qsim_results = qsim_simulator.compute_amplitudes(
circuit, bitstrings=[0b00, 0b01])
print(qsim_results)
"""
Explanation: Amplitude evaluation
qsim can also calculate amplitudes for specific output bitstrings.
End of explanation
"""
import time
# Get a rectangular grid of qubits.
qubits = cirq.GridQubit.rect(4, 5)
# Generates a random circuit on the provided qubits.
circuit = cirq.experiments.random_rotations_between_grid_interaction_layers_circuit(
qubits=qubits, depth=16)
# Simulate the circuit with Cirq and print the runtime.
cirq_simulator = cirq.Simulator()
cirq_start = time.time()
cirq_results = cirq_simulator.simulate(circuit)
cirq_elapsed = time.time() - cirq_start
print(f'Cirq runtime: {cirq_elapsed} seconds.')
print()
# Simulate the circuit with qsim and print the runtime.
qsim_simulator = qsimcirq.QSimSimulator()
qsim_start = time.time()
qsim_results = qsim_simulator.simulate(circuit)
qsim_elapsed = time.time() - qsim_start
print(f'qsim runtime: {qsim_elapsed} seconds.')
"""
Explanation: Performance benchmark
The code below generates a depth-16 circuit on a 4x5 qubit grid, then runs it against the basic Cirq simulator. For a circuit of this size, the difference in runtime can be significant - try it out!
End of explanation
"""
# Use eight threads to parallelize simulation.
options = {'t': 8}
qsim_simulator = qsimcirq.QSimSimulator(options)
qsim_start = time.time()
qsim_results = qsim_simulator.simulate(circuit)
qsim_elapsed = time.time() - qsim_start
print(f'qsim runtime: {qsim_elapsed} seconds.')
"""
Explanation: qsim performance can be tuned further by passing options to the simulator constructor. These options use the same format as the qsim_base binary - a full description can be found in the qsim usage doc. The example below demonstrates enabling multithreading in qsim; for best performance, use the same number of threads as the number of cores (or virtual cores) on your machine.
End of explanation
"""
# Increase maximum fused gate size to three qubits.
options = {'f': 3}
qsim_simulator = qsimcirq.QSimSimulator(options)
qsim_start = time.time()
qsim_results = qsim_simulator.simulate(circuit)
qsim_elapsed = time.time() - qsim_start
print(f'qsim runtime: {qsim_elapsed} seconds.')
"""
Explanation: Another option is to adjust the maximum number of qubits over which to fuse gates. Increasing this value (as demonstrated below) increases arithmetic intensity, which may improve performance with the right environment settings.
End of explanation
"""
# Pick a pair of qubits.
q0 = cirq.GridQubit(0, 0)
q1 = cirq.GridQubit(0, 1)
# Create a circuit that entangles the pair.
circuit = cirq.Circuit(
cirq.H(q0), cirq.CX(q0, q1), cirq.X(q1)
)
print("Circuit:")
print(circuit)
"""
Explanation: Advanced applications: Distributed execution
qsimh (qsim-hybrid) is a second library in the qsim repository that takes a slightly different approach to circuit simulation. When simulating a quantum circuit, it's possible to simplify the execution by decomposing a subset of two-qubit gates into pairs of one-qubit gates with shared indices. This operation is called "slicing" (or "cutting") the gates.
qsimh takes advantage of the "slicing" operation by selecting a set of gates to "slice" and assigning each possible value of the shared indices across a set of executors running in parallel. By adding up the results afterwards, the total state can be recovered.
End of explanation
"""
options = {}
# 'k' indicates the qubits on one side of the cut.
# We'll use qubit 0 for this.
options['k'] = [0]
# 'p' and 'r' control when values are assigned to cut indices.
# There are some intricacies in choosing values for these options,
# but for now we'll set p=1 and r=0.
# This allows us to pre-assign the value of the CX indices
# and distribute its execution to multiple jobs.
options['p'] = 1
options['r'] = 0
# 'w' indicates the value pre-assigned to the cut.
# This should change for each execution.
options['w'] = 0
# Create the qsimh simulator with those options.
qsimh_simulator = qsimcirq.QSimhSimulator(options)
results_0 = qsimh_simulator.compute_amplitudes(
circuit, bitstrings=[0b00, 0b01, 0b10, 0b11])
print(results_0)
"""
Explanation: In order to let qsimh know how we want to split up the circuit, we need to pass it some additional options. More detail on these can be found in the qsim usage doc, but the fundamentals are explained below.
End of explanation
"""
options['w'] = 1
qsimh_simulator = qsimcirq.QSimhSimulator(options)
results_1 = qsimh_simulator.compute_amplitudes(
circuit, bitstrings=[0b00, 0b01, 0b10, 0b11])
print(results_1)
"""
Explanation: Now to run the other side of the cut...
End of explanation
"""
results = [r0 + r1 for r0, r1 in zip(results_0, results_1)]
print("qsimh results:")
print(results)
qsim_simulator = qsimcirq.QSimSimulator()
qsim_results = qsim_simulator.compute_amplitudes(circuit, bitstrings=[0b00, 0b01, 0b10, 0b11])
print("qsim results:")
print(qsim_results)
"""
Explanation: ...and add the two together. The results of a normal qsim simulation are shown for comparison.
End of explanation
"""
|
tsaqib/bike-sharing-time-series-nn-numpy | cnn-tensorflow/cnn-tensorflow.ipynb | mit | from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
import helper
import numpy as np
from sklearn.preprocessing import LabelBinarizer
import pickle
import tensorflow as tf
import random
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
"""
Explanation: A Step-by-Step Convolutional Neural Network using TensorFlow
In this project, we'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. We'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. We'll apply what we've learned to build convolutional, max pooling, dropout, and fully connected layers. At the end, we'll see the network's predictions on sample images. The code lives here.
All the import and initializations
End of explanation
"""
cifar10_dataset_folder_path = 'cifar-10-batches-py'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('cifar-10-python.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
'cifar-10-python.tar.gz',
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open('cifar-10-python.tar.gz') as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
"""
Explanation: Get the Data
Let us run the following cell to download the CIFAR-10 dataset for python.
End of explanation
"""
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
"""
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains labels and images from one of the following classes:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Let us play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Let us ask ourselves "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help us preprocess the data and end up with better predictions.
End of explanation
"""
def normalize(x):
"""
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
"""
return x / 255
tests.test_normalize(normalize)
"""
Explanation: Implement Preprocess Functions
Normalize
In the cell below, we have to implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
"""
def one_hot_encode(x):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
    lb = LabelBinarizer()
    lb.fit(list(range(10)))  # fix the full label set up front so every call encodes consistently
    return lb.transform(x)
tests.test_one_hot_encode(one_hot_encode)
"""
Explanation: One-hot encode
Just like the previous code cell, let us implement another preprocessing function, one_hot_encode. This function will return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value on every call to one_hot_encode, so we need to make sure the map of encodings is fixed outside the function.
End of explanation
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""
Explanation: Randomize Data
As we have seen from exploring the data above, the order of the samples is randomized. If needed, we could shuffle the dataset at this stage, but for this image set it's already random.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file.
End of explanation
"""
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
"""
Explanation: Check Point
Next time we come back to this notebook or have to restart it, we can start from here now that the preprocessed data has been saved to disk. Just make sure to re-import all the packages by executing the first cell at the beginning.
End of explanation
"""
import tensorflow as tf
def neural_net_image_input(image_shape):
"""
    Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
return tf.placeholder(tf.float32, shape=[None, *image_shape], name="x")
def neural_net_label_input(n_classes):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
return tf.placeholder(tf.float32, shape=[None, n_classes], name="y")
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
return tf.placeholder(tf.float32, name="keep_prob")
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
"""
Explanation: Build the network
For the neural network, we'll build each layer into a function. Most of the code we've seen so far has been outside of functions. To test the code more thoroughly, we put each layer in its own function. This allows us to clearly catch simple mistakes using the unittests included in this project.
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Let us implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load the saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
"""
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: Kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: Kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
weights = tf.Variable(tf.truncated_normal([*conv_ksize, int(x_tensor.get_shape()[3]), conv_num_outputs], mean=0.0, stddev=0.05, dtype=tf.float32))
biases = tf.Variable(tf.constant(0, shape=[conv_num_outputs], dtype=tf.float32))
x = tf.nn.conv2d(x_tensor, weights, strides=[1, *conv_strides, 1], padding='SAME')
x = tf.nn.bias_add(x, biases)
x = tf.nn.relu(x)
x = tf.nn.max_pool(x, ksize=[1, *pool_ksize, 1], strides=[1, *pool_strides, 1], padding='SAME')
return x
tests.test_con_pool(conv2d_maxpool)
"""
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. In this code cell, we implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* Same padding can be used, but we're free to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* Same padding can be used, but we're free to use any padding.
Note: we won't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, so that the low-level details stay visible.
End of explanation
"""
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
return tf.contrib.layers.flatten(x_tensor)
tests.test_flatten(flatten)
"""
Explanation: Flatten Layer
Let us implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should have the shape (Batch Size, Flattened Image Size). We can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.
End of explanation
"""
def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
return tf.contrib.layers.fully_connected(x_tensor, num_outputs, activation_fn=tf.nn.relu)
tests.test_fully_conn(fully_conn)
"""
Explanation: Fully-Connected Layer
Let us implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). We can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.
End of explanation
"""
def output(x_tensor, num_outputs):
"""
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
return tf.contrib.layers.fully_connected(x_tensor, num_outputs, activation_fn=None)
tests.test_output(output)
"""
Explanation: Output Layer
Let us implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Here we can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.
Note: Activation, softmax, or cross entropy shouldn't be applied to this.
End of explanation
"""
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
"""
# Apply Convolution and Max Pool layers
conv_num_outputs = [8*3, 16*3, 32*3]
conv_ksize, conv_strides, pool_ksize, pool_strides, num_outputs = (3,3), (1,1), (2,2), (2,2), 512
conv = conv2d_maxpool(x, conv_num_outputs[0], conv_ksize, conv_strides, pool_ksize, pool_strides)
conv = conv2d_maxpool(conv, conv_num_outputs[1], conv_ksize, conv_strides, pool_ksize, pool_strides)
conv = conv2d_maxpool(conv, conv_num_outputs[2], conv_ksize, conv_strides, pool_ksize, pool_strides)
# Apply a Flatten Layer
flattened = flatten(conv)
# Apply Fully Connected Layers
fully = fully_conn(flattened, num_outputs)
fc_layer = tf.nn.dropout(fully, keep_prob)
    fully = fully_conn(fc_layer, num_outputs)
    fc_layer = tf.nn.dropout(fully, keep_prob)
    # Feed the dropped-out activations (not the pre-dropout ones) into the output layer
    return output(fc_layer, 10)
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
"""
Explanation: Create Convolutional Model
Let us implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. We can use the layers we created above to build this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
We can apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
"""
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})
pass
tests.test_train_nn(train_neural_network)
"""
Explanation: Train the Neural Network
Single Optimization
Now we can implement the function train_neural_network to do a single optimization. The optimization uses optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
"""
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
validation_accuracy = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
print('Loss: {:>10.4f}, Accuracy: {:.4f}'.format(loss, validation_accuracy))
pass
"""
Explanation: Show Stats
Let us implement the function print_stats to print loss and validation accuracy. We can use the global variables valid_features and valid_labels to calculate validation accuracy. Let us also use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
"""
epochs = 20
batch_size = 512
keep_probability = 0.5
"""
Explanation: Hyperparameters
The following parameters need to be tuned:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for. In most cases, they are set to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while we iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, we can run the model on all the data in the next section.
End of explanation
"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
"""
Explanation: Fully Train the Model
Now that we got a decent accuracy with a single CIFAR-10 batch, let us try it with all five batches.
End of explanation
"""
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""
Test the saved model against the test dataset
"""
test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
"""
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Let us test the model against the test dataset. This will be our final accuracy for this project. If the accuracy is less than 50%, we need to keep tweaking the model architecture and parameters.
End of explanation
"""
|
sdpython/ensae_teaching_cs | _doc/notebooks/sklearn_ensae_course/01_data_manipulation.ipynb | mit | # Start pylab inline mode, so figures will appear in the notebook
%matplotlib inline
"""
Explanation: 2A.ML101.1: Introduction to data manipulation with scientific Python
In this section we'll go through the basics of the scientific Python stack for data manipulation: using numpy and matplotlib.
Source: Course on machine learning with scikit-learn by Gaël Varoquaux
You can skip this section if you already know the scipy stack.
To learn the scientific Python ecosystem: http://scipy-lectures.org
End of explanation
"""
import numpy as np
# Generating a random array
X = np.random.random((3, 5)) # a 3 x 5 array
print(X)
# Accessing elements
# get a single element
print(X[0, 0])
# get a row
print(X[1])
# get a column
print(X[:, 1])
# Transposing an array
print(X.T)
# Turning a row vector into a column vector
y = np.linspace(0, 12, 5)
print(y)
# make into a column vector
print(y[:, np.newaxis])
"""
Explanation: Numpy Arrays
Manipulating numpy arrays is an important part of doing machine learning
(or, really, any type of scientific computation) in Python. This will likely
be review for most: we'll quickly go through some of the most important features.
End of explanation
"""
from scipy import sparse
# Create a random array with a lot of zeros
X = np.random.random((10, 5))
print(X)
# set the majority of elements to zero
X[X < 0.7] = 0
print(X)
# turn X into a csr (Compressed-Sparse-Row) matrix
X_csr = sparse.csr_matrix(X)
print(X_csr)
# convert the sparse matrix to a dense array
print(X_csr.toarray())
"""
Explanation: There is much, much more to know, but these few operations are fundamental to what we'll
do during this tutorial.
Scipy Sparse Matrices
We won't make very much use of these in this tutorial, but sparse matrices are very nice
in some situations. For example, in some machine learning tasks, especially those associated
with textual analysis, the data may be mostly zeros. Storing all these zeros is very
inefficient. We can create and manipulate sparse matrices as follows:
End of explanation
"""
%matplotlib inline
# Here we import the plotting functions
import matplotlib.pyplot as plt
# plotting a line
x = np.linspace(0, 10, 100)
plt.plot(x, np.sin(x));
# scatter-plot points
x = np.random.normal(size=500)
y = np.random.normal(size=500)
plt.scatter(x, y);
# showing images
x = np.linspace(1, 12, 100)
y = x[:, np.newaxis]
im = y * np.sin(x) * np.cos(y)
print(im.shape)
# imshow - note that origin is at the top-left by default!
plt.imshow(im);
# Contour plot - note that origin here is at the bottom-left by default!
plt.contour(im);
"""
Explanation: Matplotlib
Another important part of machine learning is visualization of data. The most common
tool for this in Python is matplotlib. It is an extremely flexible package, but
we will go over some basics here.
First, something special to IPython notebook. We can turn on the "IPython inline" mode,
which will make plots show up inline in the notebook.
End of explanation
"""
# %load http://matplotlib.org/mpl_examples/pylab_examples/ellipse_collection.py
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.collections import EllipseCollection
x = np.arange(10)
y = np.arange(15)
X, Y = np.meshgrid(x, y)
XY = np.hstack((X.ravel()[:, np.newaxis], Y.ravel()[:, np.newaxis]))
ww = X/10.0
hh = Y/15.0
aa = X*9
fig, ax = plt.subplots()
ec = EllipseCollection(ww, hh, aa, units='x', offsets=XY,
transOffset=ax.transData)
ec.set_array((X + Y).ravel())
ax.add_collection(ec)
ax.autoscale_view()
ax.set_xlabel('X')
ax.set_ylabel('y')
cbar = plt.colorbar(ec)
cbar.set_label('X+Y');
"""
Explanation: There are many, many more plot types available. One useful way to explore these is by
looking at the matplotlib gallery: http://matplotlib.org/gallery.html
You can test these examples out easily in the notebook: simply copy the Source Code
link on each page, and put it in a notebook using the %load magic.
For example:
End of explanation
"""
|
jpn--/larch | larch/doc/example/107_latent_class.ipynb | gpl-3.0 | import larch
import pandas
from larch.roles import P,X
"""
Explanation: 107: Latent Class Models
In this example, we will replicate the latent class example model
from Biogeme.
End of explanation
"""
from larch import data_warehouse
raw = pandas.read_csv(larch.data_warehouse.example_file('swissmetro.csv.gz'))
"""
Explanation: The swissmetro dataset used in this example is conveniently bundled with Larch,
accessible using the data_warehouse module. We'll load this file using
the pandas read_csv command.
End of explanation
"""
raw.head()
"""
Explanation: We can inspect a few rows of data to see what we have using the head method.
End of explanation
"""
raw['SM_COST'] = raw['SM_CO'] * (raw["GA"]==0)
"""
Explanation: The Biogeme code includes a variety of commands to manipulate the data
and create new variables. Because Larch sits on top of pandas, a reasonable
method to create new variables is to just create new columns in the
source pandas.DataFrame in the usual manner for any DataFrame.
End of explanation
"""
raw['TRAIN_COST'] = raw.eval("TRAIN_CO * (GA == 0)")
"""
Explanation: You can also use the eval method of pandas DataFrames.
This method takes an expression as a string
and evaluates it within a namespace that has already loaded the
column names as variables.
End of explanation
"""
raw['TRAIN_COST_SCALED'] = raw['TRAIN_COST'] / 100
raw['TRAIN_TT_SCALED'] = raw['TRAIN_TT'] / 100
raw['SM_COST_SCALED'] = raw.eval('SM_COST / 100')
raw['SM_TT_SCALED'] = raw['SM_TT'] / 100
raw['CAR_CO_SCALED'] = raw['CAR_CO'] / 100
raw['CAR_TT_SCALED'] = raw['CAR_TT'] / 100
raw['CAR_AV_SP'] = raw.eval("CAR_AV * (SP!=0)")
raw['TRAIN_AV_SP'] = raw.eval("TRAIN_AV * (SP!=0)")
"""
Explanation: This can allow for writing data
expressions more succinctly, as long as all your variable names
are strings that can also be the names of variables in Python.
If this isn't the case (e.g., if any variable names have spaces
in the name) you'll be better off if you stay away from this
feature.
We can mix and match between these two method to create new
columns in any DataFrame as needed.
End of explanation
"""
keep = raw.eval("PURPOSE in (1,3) and CHOICE != 0")
"""
Explanation: Removing some observations can also be done directly using pandas.
Here we identify a subset of observations that we want to keep.
End of explanation
"""
dfs = larch.DataFrames(raw[keep], alt_codes=[1,2,3])
"""
Explanation: You may note that we don't assign this value to a column within the
raw DataFrame. This is perfectly acceptable, as the output from
the eval method is just a normal pandas.Series, like any other
single column output you might expect to get from a pandas method.
When you've created the data you need, you can pass the dataframe to
the larch.DataFrames constructor. Since the swissmetro data is in
idco format, we'll need to explicitly identify the alternative
codes as well.
End of explanation
"""
dfs.info()
"""
Explanation: The info method of the DataFrames object gives a short summary
of the contents.
End of explanation
"""
dfs.info(verbose=True)
"""
Explanation: A longer summary is available by setting verbose to True.
End of explanation
"""
dfs.data_co.info()
"""
Explanation: You may have noticed that the info summary notes that this data is "not computation-ready".
That's because some of the data columns are stored as integers, which can be observed by
inspecting the info on the data_co dataframe.
End of explanation
"""
m1 = larch.Model(dataservice=dfs)
m1.availability_co_vars = {
1: "TRAIN_AV_SP",
2: "SM_AV",
3: "CAR_AV_SP",
}
m1.choice_co_code = 'CHOICE'
m1.utility_co[1] = P("ASC_TRAIN") + X("TRAIN_COST_SCALED") * P("B_COST")
m1.utility_co[2] = X("SM_COST_SCALED") * P("B_COST")
m1.utility_co[3] = P("ASC_CAR") + X("CAR_CO_SCALED") * P("B_COST")
m2 = larch.Model(dataservice=dfs)
m2.availability_co_vars = {
1: "TRAIN_AV_SP",
2: "SM_AV",
3: "CAR_AV_SP",
}
m2.choice_co_code = 'CHOICE'
m2.utility_co[1] = P("ASC_TRAIN") + X("TRAIN_TT_SCALED") * P("B_TIME") + X("TRAIN_COST_SCALED") * P("B_COST")
m2.utility_co[2] = X("SM_TT_SCALED") * P("B_TIME") + X("SM_COST_SCALED") * P("B_COST")
m2.utility_co[3] = P("ASC_CAR") + X("CAR_TT_SCALED") * P("B_TIME") + X("CAR_CO_SCALED") * P("B_COST")
"""
Explanation: When computations are run, we'll need all the data to be in float format, but Larch knows this and will
handle it for you later.
Class Model Setup
Having prepped our data, we're ready to set up discrete choices models
for each class in the latent class model. We'll reproduce the Biogeme
example exactly here, as a technology demonstration. Each of the two classes
will be set up with a simple MNL model.
End of explanation
"""
mk = larch.Model()
mk.utility_co[2] = P("W_OTHER")
"""
Explanation: Class Membership Model
For Larch, the class membership model will be set up as yet another discrete choice model.
In this case, the choices are not the ultimate choices, but instead are the latent classes.
To remain consistent with the Biogeme example, we'll set up this model with only a single
constant that determines class membership. Unlike Biogeme, this class membership will
be represented with an MNL model, not a simple direct probability.
End of explanation
"""
from larch.model.latentclass import LatentClassModel
m = LatentClassModel(mk, {1:m1, 2:m2})
"""
Explanation: The utility function of the first class isn't written here, which means it will implicitly
be set as 0.
Latent Class Model
Now we're ready to create the latent class model itself, by assembling the components
we created above. The constructor for the LatentClassModel takes two arguments,
a class membership model, and a dictionary of class models, where the keys in the
dictionary correspond to the identifying codes from the utility functions we wrote
for the class membership model.
End of explanation
"""
m.load_data()
m.dataframes.info(verbose=1)
"""
Explanation: Then we'll load the data needed for our models using the load_data method.
This step will assemble the data needed, and convert it to floating point
format as required.
End of explanation
"""
result = m.maximize_loglike()
result
# TEST
from pytest import approx
assert result.loglike == approx(-5208.498065961453)
"""
Explanation: Only the data actually needed by the models has been converted, which may help
keep memory usage down on larger models. You may also note that the loaded
dataframes no longer report that they are "not computation-ready".
To estimate the model, we'll use the maximize_loglike method. When run
in Jupyter, a live-view report of the parameters and log likelihood is displayed.
End of explanation
"""
m.loglike_null()
# TEST
assert _ == approx(-6964.662979191462)
"""
Explanation: To complete our analysis, we can compute the log likelihood at "null" parameters.
End of explanation
"""
m.calculate_parameter_covariance()
m.covariance_matrix
m.robust_covariance_matrix
"""
Explanation: And the parameter covariance matrices.
End of explanation
"""
report = larch.Reporter("Latent Class Example")
"""
Explanation: Reporting Results
And then generate a report of the estimation statistics. Larch includes a Reporter class
to help you assemble a report containing the relevant output you want.
End of explanation
"""
report << "# Parameter Estimates"
"""
Explanation: Pipe section headers into the report in markdown format (use one hash for top level
headings, two hashes for lower levels, etc.)
End of explanation
"""
report << m.pf
"""
Explanation: You can also pipe in dataframes directly, including the pf parameter frame from the model.
End of explanation
"""
report << "# Estimation Statistics"
report << m.estimation_statistics()
report << "# Parameter Covariance"
report << "## Typical Parameter Covariance"
report << m.covariance_matrix
report << "## Robust Parameter Covariance"
report << m.robust_covariance_matrix
report << "# Utility Functions"
report << "## Class 1"
report << "### Formulae"
report << m1.utility_functions(resolve_parameters=False)
report << "### Final Estimated Values"
report << m1.utility_functions(resolve_parameters=True)
report << "## Class 2"
report << "### Formulae"
report << m2.utility_functions(resolve_parameters=False)
report << "### Final Estimated Values"
report << m2.utility_functions(resolve_parameters=True)
"""
Explanation: And a selection of pre-formatted summary sections.
End of explanation
"""
report.save('latent-class-example-report.html', overwrite=True)
"""
Explanation: In addition to reviewing report sections in a Jupyter notebook, the
entire report can be saved to an HTML file.
End of explanation
"""
|
datactive/bigbang | examples/activity/Cohort Visualization.ipynb | mit | url = "6lo"
arx = Archive(url,archive_dir="../archives")
arx.data[:1]
"""
Explanation: One interesting question for open source communities is whether they are growing. Often the founding members of a community would like to see new participants join and become active in the community. This is important for community longevity; ultimately new members are required to take leadership roles if a project is to sustain itself over time.
The data available for community participation is very granular, as it can include the exact traces of the messages sent by participants over a long history. One way of summarizing this information to get a sense of overall community growth is a cohort visualization.
In this notebook, we will produce a visualization of changing participation over time.
End of explanation
"""
act = arx.get_activity()
"""
Explanation: Archive objects have a method that reports for each user how many emails they sent each day.
End of explanation
"""
fig = plt.figure(figsize=(12.5, 7.5))
#act.idxmax().sort_values().T.plot()
(act > 0).idxmax().sort_values().plot()
fig.axes[0].yaxis_date()
"""
Explanation: This plot will show when each sender sent their first post. A slow ascent means a period where many people joined.
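The (act > 0).idxmax() idiom deserves a note: on a boolean frame, idxmax returns, per sender column, the first row label (day) where the value is True — i.e. the day of each sender's first post. A toy example with hypothetical data:

```python
import pandas as pd

# Toy activity frame: rows are days, columns are senders, values are
# message counts (all data hypothetical).
act_demo = pd.DataFrame(
    {"alice": [0, 2, 1], "bob": [0, 0, 3]},
    index=pd.to_datetime(["2015-01-01", "2015-01-02", "2015-01-03"]))

first_post = (act_demo > 0).idxmax()   # first day each sender was active
print(first_post["alice"].date(), first_post["bob"].date())
# → 2015-01-02 2015-01-03
```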
End of explanation
"""
fig = plt.figure(figsize=(12.5, 7.5))
(act > 0).idxmax().sort_values().hist()
fig.axes[0].xaxis_date()
"""
Explanation: This is the same data, but plotted as a histogram. It's easier to see the trends here.
End of explanation
"""
n = 5
from bigbang import plot
# A series, indexed by users, of the day of their first post
# This series is ordered by time
first_post = (act > 0).idxmax().sort_values()
# Splitting the previous series into five equal parts,
# each representing a chronological quintile of list members
cohorts = np.array_split(first_post,n)
cohorts = [list(c.keys()) for c in cohorts]
plot.stack(act,partition=cohorts,smooth=10)
"""
Explanation: While this is interesting, what if we are interested in how much different "cohorts" of participants stick around and continue to participate in the community over time?
What we want to do is divide the participants into N cohorts based on the percentile of when they joined the mailing list. I.e, the first 1/N people to participate in the mailing list are the first cohort. The second 1/N people are in the second cohort. And so on.
Then we can combine the activities of each cohort and do a stackplot of how each cohort has participated over time.
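Note that np.array_split, used below, also handles member counts that don't divide evenly into N cohorts — the leading cohorts simply get one extra member:

```python
import numpy as np

# Seven hypothetical members split into three chronological cohorts
people = np.array(["a", "b", "c", "d", "e", "f", "g"])
cohorts = [list(c) for c in np.array_split(people, 3)]
print(cohorts)  # → [['a', 'b', 'c'], ['d', 'e'], ['f', 'g']]
```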
End of explanation
"""
cohorts[1].index.values
"""
Explanation: This gives us a sense of when new members are taking the lead in the community. But what if the old members are just changing their email addresses? To test that case, we should clean our data with entity resolution techniques.
End of explanation
"""
|
Zweedeend/interactive-2d-gvdw | gvdw-bokeh.ipynb | mit | import inspect
from math import sqrt, pi
import numpy as np
import pandas as pd
from bokeh.io import show, output_notebook, push_notebook
from bokeh.plotting import figure
from ipywidgets import interact
from scipy.integrate import quad
output_notebook()
"""
Explanation: Equation of State using Generalized van der Waals Theory
This notebook illustrates the generalized van der Waals theory (gvdW) for the equation of state for interacting particles. Based on the lecture notes, Properties of Molecular Fluids in Equilibrium by Sture Nordholm.
End of explanation
"""
# Debye-Huckel
def potential_Debye_Huckel(r, z, D):
r"""$\frac{\lambda_B z^2}{r} e^{-r/\lambda_D}$"""
lB = 7.0 # Bjerrum length, angstroms
return lB * z**2 * np.exp(-r/D) / r
# Lennard-Jones
def potential_Lennard_Jones(r, eps, sigma):
r"""$4\beta \varepsilon_{LJ} \left ( \left ( \frac{\sigma}{r}\right )^{12} - \left ( \frac{\sigma}{r}\right )^{6}\right )$"""
return 4 * eps * ( (sigma/r)**12 - (sigma/r)**6 )
# Total potential
def potential_Combined(r, z, D, eps, sigma):
r"""$\frac{\lambda_B z^2}{r} e^{-r/\lambda_D} + 4\beta \varepsilon_{LJ} \left ( \left ( \frac{\sigma}{r}\right )^{12} - \left ( \frac{\sigma}{r}\right )^{6}\right )$"""
return potential_Debye_Huckel(r, z, D) + potential_Lennard_Jones(r, eps, sigma)
"""
Explanation: Pair potentials
The particles are here assumed to interact via a Lennard-Jones and a screened Coulomb potential,
$$
\beta w(r) = \frac{\lambda_B z^2}{r} e^{-r/\lambda_D}
+ 4\beta \varepsilon_{LJ} \left ( \left ( \frac{\sigma}{r}\right )^{12} - \left ( \frac{\sigma}{r}\right )^{6}\right )
$$
where $\lambda_B$ and $\lambda_D$ are the Bjerrum and Debye lengths, respectively.
Any potential may in principle be given and must return the energy in units of $k_BT$.
Define your own potentials below
The name should start with potential_
The first parameter should be r
The docstring can be added for a nice displayname of the function (The raw python string like r"$ \mu $" is convenient when writing latex, because in normal strings the backslash acts as an escape character)
End of explanation
"""
def ahat(potential, **parameters):
sigma = parameters['sigma']
# extract the relevant parameters for the potential
parameters = {k:v for k,v in parameters.items() if k in inspect.signature(potential).parameters}
def integrand(r):
return potential(r, **parameters) * r**2
    integral, error = quad(integrand, sigma, np.inf, limit=50)
return -2 * pi * integral
def mu_ideal(n):
return np.log(n)
def mu_gvdw(n, z, D, eps, sigma, potential=potential_Combined):
y0 = pi*sigma**2 / 2 # excluded volume
y = 1 / n
a = ahat(potential, z=z, D=D, eps=eps, sigma=sigma)
return -np.log(y-y0) + y0 / (y - y0) - 2 * a / y
"""
Explanation: Interaction parameter
Here we integrate the above pair potential to get the average interaction energy per particle, assuming that the pair correlation function, $g(r)$, can be described by a simple step function, zero when $r<\sigma$, unity otherwise:
$$
\hat{a} = -\frac{1}{2} \int_{\sigma}^{\infty} 4\pi w(r) r^2 dr
$$
In this Notebook we simply do the integration numerically so that we can use arbitrary pair potentials.
From this we calculate the chemical potential, $\mu$, versus density, $n$, using
$$
\beta \, \mu_{gvdW} = \ln \left( \frac{1}{y - y_0} \right) + \frac{y_0}{y-y_0} - 2 \frac{\hat{a}}{y}
$$
where $y=1/n$ and $y_0=\pi \sigma^2/2$ is the particle area.
For reference we'll also plot the chemical potential of an ideal system (van 't Hoff), $\beta \mu_{ideal}= \ln (1/y)$,
where $\beta = 1/k_BT$.
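As a sanity check of the numerical integration, the integral can be done analytically for the pure Lennard-Jones potential, giving $\hat{a}_{LJ} = 16\pi\varepsilon\sigma^3/9$. Comparing against scipy's quad:

```python
from math import pi
from scipy.integrate import quad

eps, sigma = 1.0, 4.0
integrand = lambda r: 4 * eps * ((sigma / r)**12 - (sigma / r)**6) * r**2

integral, _ = quad(integrand, sigma, float("inf"))
a_numeric = -2 * pi * integral
a_exact = 16 * pi * eps * sigma**3 / 9   # closed form for pure LJ

print(abs(a_numeric / a_exact - 1) < 1e-6)  # → True
```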
End of explanation
"""
n = np.linspace(1e-4, 1e-3, 100);
def update(eps=1.0, sigma=4.0, z=1.0, Cs=0.3, potential=potential_Combined):
D = 3.04/sqrt(Cs)
gvdw_line.data_source.data["y"] = mu_gvdw(n, z, D, eps, sigma, potential)
push_notebook(handle=potfig_handle)
potfig = figure(
title="Chemical potential",
plot_height=300,
plot_width=600,
x_axis_label="Number density n",
y_axis_label="Potential (k T)")
ideal_line = potfig.line(n, mu_ideal(n), legend="Ideal")
gvdw_line = potfig.line(n, mu_gvdw(n, z=0, D=3.04/sqrt(0.3), eps=1, sigma=4, potential=potential_Combined), color="green", legend="GvdW")
potfig_handle = show(potfig, notebook_handle=True)
_potentials = {fname[10:]: func for fname, func in globals().items() if fname.startswith("potential_")}
interact(update, eps=(0.0, 10.0, 0.1), sigma=(0, 5, 0.1), z=(0.0, 3, 1.0), Cs=(1e-3, 1.0, 0.1), potential=_potentials);
"""
Explanation: Interactive plot
End of explanation
"""
with open("datafile.csv", "wt") as stream:
stream.write("""\
length potential proteins density
1.414000000000000000e+03 1.429585460731450097e-01 2.000000000000000000e+01 1.000302091231552026e+03
9.990000000000000000e+02 2.990882091428900269e-01 2.000000000000000000e+01 2.004006008010011783e+03
8.160000000000000000e+02 4.684432472751309806e-01 2.000000000000000000e+01 3.003652441368703876e+03
6.320000000000000000e+02 8.629727385734929923e-01 2.000000000000000000e+01 5.007210382951449901e+03
5.340000000000000000e+02 1.353602607621670062e+00 2.000000000000000000e+01 7.013704779138435697e+03
4.710000000000000000e+02 1.970895704549270100e+00 2.000000000000000000e+01 9.015466031977857710e+03
4.260000000000000000e+02 2.788653065634310035e+00 2.000000000000000000e+01 1.102074103462716812e+04
3.920000000000000000e+02 3.842403663548089821e+00 2.000000000000000000e+01 1.301541024573094546e+04""")
df = pd.read_csv("datafile.csv", delimiter="(?:\s+|,)", engine="python")
"""
Explanation: With Data
End of explanation
"""
|
AEW2015/PYNQ_PR_Overlay | Pynq-Z1/notebooks/Video_PR/Image_Duotone_Overlay_Filter.ipynb | bsd-3-clause | from pynq.drivers.video import HDMI
from pynq import Bitstream_Part
from pynq.board import Register
from pynq import Overlay
Overlay("demo.bit").download()
"""
Explanation: Don't forget to delete the hdmi_out and hdmi_in when finished
Image Overlay Duotone Color Filter Example
In this notebook, we will overlay an image on the output video feed. By default, an image showing Abraham Lincoln will be displayed at the top of the screen.
In order to store larger images, just two colors are allowed. These colors can be controlled by registers. Each color can also be replaced with "transparency" so that you can see the video feed behind it.
1. Download base overlay to the board
Ensure that the camera is not connected to the board. Run the following script to provide the PYNQ with its base overlay.
End of explanation
"""
hdmi_in = HDMI('in')
hdmi_out = HDMI('out', frame_list=hdmi_in.frame_list)
hdmi_out.mode(2)
hdmi_out.start()
hdmi_in.start()
"""
Explanation: 2. Connect camera
Physically connect the camera to the HDMI-in port of the PYNQ. Run the following code to instruct the PYNQ to capture the video from the camera and to begin streaming video to your monitor (connected to the HDMI-out port). The "2" represents a resolution of 1280x720, which is the output streaming resolution of the camera.
End of explanation
"""
Bitstream_Part("img_overlay_duotone_p.bit").download()
"""
Explanation: 3. Program board with 2 Color Image Overlay Filter
Run the following script to download the RGB Filter to the PYNQ. This will allow us to modify the colors of the video stream.
End of explanation
"""
import ipywidgets as widgets
from IPython.display import HTML, display
display(HTML('''<style>
.widget-label { min-width: 15ex !important; }
</style>'''))
R0 =Register(0)
R1 =Register(1)
R2 =Register(2)
R3 =Register(3)
R4 =Register(4)
R5 =Register(5)
R6 =Register(6)
R0.write(1)
R1.write(1)
R2.write(400)
R3.write(449)
R4.write(0xFFFFFF)
R5.write(0x000000)
R0_s = widgets.IntSlider(
value=1,
min=1,
max=1279,
step=1,
description='X Origin:',
disabled=False,
continuous_update=True,
orientation='horizontal',
readout=True,
readout_format='i',
slider_color='red',
width = '600px'
)
R1_s = widgets.IntSlider(
value=1,
min=1,
max=719,
step=1,
description='Y Origin:',
disabled=False,
continuous_update=True,
orientation='horizontal',
readout=True,
readout_format='i',
slider_color='green',
width = '600px'
)
R2_b = widgets.BoundedIntText(
value=200,
min=1,
max=1280,
step=1,
description='Image Width:',
disabled=True
)
R3_b = widgets.BoundedIntText(
value=200,
min=1,
max=720,
step=1,
description='Image Height:',
disabled=True
)
R4_b = widgets.ColorPicker(
concise=False,
description='Color 1:',
value='white'
)
R4_c = widgets.Checkbox(
value=False,
description='Color 1 Clear',
disabled=False
)
R5_b = widgets.ColorPicker(
concise=False,
description='Color 2:',
value='black'
)
R5_c = widgets.Checkbox(
value=False,
description='Color 2 Clear',
disabled=False
)
R6_s = widgets.Select(
options=['Abe Lincoln', 'Emma Watson', 'Giraffe'],
value='Abe Lincoln',
description='Display Image:',
disabled=False,
)
def update_r0(*args):
R0.write(R0_s.value)
R0_s.observe(update_r0, 'value')
def update_r1(*args):
R1.write(R1_s.value)
R1_s.observe(update_r1, 'value')
def update_r2(*args):
R2.write(R2_b.value)
R2_b.observe(update_r2, 'value')
def update_r3(*args):
R3.write(R3_b.value)
R3_b.observe(update_r3, 'value')
def update_r4(*args):
if R4_b.value == "black":
color1 = 0x000000
elif R4_b.value == "blue":
color1 = 0x0000FF
elif R4_b.value == "brown":
color1 = 0x996633
elif R4_b.value == "cyan":
color1 = 0x00FFFF
elif R4_b.value == "green":
color1 = 0x00FF00
elif R4_b.value == "magenta":
color1 = 0xFF00FF
elif R4_b.value == "orange":
color1 = 0xFF8000
elif R4_b.value == "purple":
color1 = 0x800080
elif R4_b.value == "red":
color1 = 0xFF0000
elif R4_b.value == "yellow":
color1 = 0xFFFF00
elif R4_b.value == "white":
color1 = 0xFFFFFF
else:
color1 = int(R4_b.value[1:],16)
if R4_c.value == True:
color1 |= 0x1000000
else:
color1 &= 0xFFFFFF
R4.write(color1)
R4_b.observe(update_r4, 'value')
R4_c.observe(update_r4, 'value')
def update_r5(*args):
if R5_b.value == "black":
color2 = 0x000000
elif R5_b.value == "blue":
color2 = 0x0000FF
elif R5_b.value == "brown":
color2 = 0x996633
elif R5_b.value == "cyan":
color2 = 0x00FFFF
elif R5_b.value == "green":
color2 = 0x00FF00
elif R5_b.value == "magenta":
color2 = 0xFF00FF
elif R5_b.value == "orange":
color2 = 0xFF8000
elif R5_b.value == "purple":
color2 = 0x800080
elif R5_b.value == "red":
color2 = 0xFF0000
elif R5_b.value == "yellow":
color2 = 0xFFFF00
elif R5_b.value == "white":
color2 = 0xFFFFFF
else:
color2 = int(R5_b.value[1:],16)
if R5_c.value == True:
color2 |= 0x1000000
else:
color2 &= 0xFFFFFF
R5.write(color2)
R5_b.observe(update_r5, 'value')
R5_c.observe(update_r5, 'value')
def update_r6(*args):
filename = "nofile.bin"
if R6_s.value == 'Abe Lincoln':
filename = "./data/lincoln.bin"
elif R6_s.value == 'Emma Watson':
filename = "./data/emma_watson.bin"
elif R6_s.value == 'Giraffe':
filename = "./data/giraffe.bin"
with open(filename, "rb") as f:
w = f.read(2)
h = f.read(2)
width = (w[1] << 8) | w[0]
height = (h[1] << 8) | h[0]
R2.write(width)
R3.write(height)
num_pixels = (width*height)/8
for i in range(0, int(num_pixels)-1):
byte = f.read(1)
b = int('{:08b}'.format(byte[0])[::-1], 2)
b = byte[0];
x = (i<<8) | b;
R6.write(x);
R6_s.observe(update_r6, 'value')
widgets.VBox([R0_s,R1_s,R4_b,R4_c,R5_b,R5_c,R6_s])
"""
Explanation: 4. Create a user interface
We will communicate with the filter using a nice user interface. Run the following code to activate that interface.
7 Registers are used to interact with this particular filter.
R0 : Origin X-Coordinate. The origin of the image is the top left corner. Writing to R0 allows you to specify where the image appears horizontally on the feed.
R1 : Origin Y-Coordinate. Writing to R1 allows you to specify where the image appears vertically on the feed.
R2 : Width. This specifies how wide (in pixels) the image is.
R3 : Height. This specifies how tall (in pixels) the image is.
R4 : This specifies the 24-bit value of the first color (the background of the overlay image). Because the color occupies bits 23:0, the 25th bit (bit 24) signifies whether this color should be transparent or not.
R5 : This specifies the 24-bit value of the second color (the foreground of the overlay image). Like with R4, bit 24 signifies whether this color should be transparent or not.
R6 : This is used to write a new image to the filter. 8 pixels are written at a time (1 byte = 8 bits = 8 pixels). The 16-bit pixel address and 1-byte representation of 8 pixels are concatenated and written to this register.
The current minimum and maximum values for the X- and Y-Coordinates as well as image width and height are based on a 1280x720 screen resolution.
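The R6 write format can be illustrated with a small hypothetical helper (mirroring the x = (i << 8) | b line in the code above): the 16-bit address of an 8-pixel group goes in the upper bits, the pixel byte in the lower 8 bits.

```python
def pack_r6_word(group_index, pixel_byte):
    # 16-bit group address | 8-bit pattern encoding 8 monochrome pixels
    assert 0 <= group_index < (1 << 16) and 0 <= pixel_byte < (1 << 8)
    return (group_index << 8) | pixel_byte

print(hex(pack_r6_word(3, 0b10110001)))  # → 0x3b1
```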
End of explanation
"""
hdmi_out.stop()
hdmi_in.stop()
del hdmi_out
del hdmi_in
"""
Explanation: 5. Exploration
Image Position.
Try moving the sliders back and forth. Moving the X Origin slider right should move the image to the right. Moving the Y Origin slider right should move the image down.
Colors.
Click on the color box for Color 1 and browse to change the color. You can also enter the hex representation of the color in the text box. You can find the codes here: http://www.rapidtables.com/web/color/RGB_Color.htm. Do the same for Color 2.
Transparency.
Check the Color 1 Clear box. All pixels of the image overlay background should become transparent. Uncheck the box and check the Color 2 Clear box. Now the foreground should become transparent.
Upload New Image.
Try selecting a different image. The new image file should be written to replace the previous image.
6. Clean up
When you are done experimenting with the filter, run the following code to stop the video stream.
End of explanation
"""
|
WNoxchi/Kaukasos | pytorch/fastai-pytorch-tutorial-scratch-notes.ipynb | mit | from pathlib import Path
import requests
data_path = Path('data')
path = data_path/'mnist'
path.mkdir(parents=True, exist_ok=True)
url = 'http://deeplearning.net/data/mnist/'
filename = 'mnist.pkl.gz'
(path/filename)
if not (path/filename).exists():
content = requests.get(url+filename).content
(path/filename).open('wb').write(content)
import pickle, gzip
with gzip.open(path/filename, 'rb') as f:
((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding='latin-1')
%matplotlib inline
from matplotlib import pyplot
import numpy as np
pyplot.imshow(x_train[0].reshape((28,28)), cmap="gray")
x_train.shape
import torch
x_train,y_train,x_valid,y_valid = map(torch.tensor, (x_train,y_train,x_valid,y_valid))
n,c = x_train.shape
x_train, x_train.shape, y_train.min(), y_train.max()
import math
weights = torch.rand(784, 10)/math.sqrt(784)
weights.requires_grad_()
bias = torch.zeros(10, requires_grad=True)
def log_softmax(x): return x - x.exp().sum(-1).log().unsqueeze(-1)
def model(xb): return log_softmax(xb @ weights + bias)
xb.shape, xb.sum(-1).shape
"""
Explanation: 2018/9/15-16 WNixalo
https://github.com/fastai/fastai_v1/blob/master/dev_nb/001a_nn_basics.ipynb
End of explanation
"""
bs = 64
xb = x_train[0:bs] # a mini-batch from x
preds = model(xb)
preds[0], preds.shape
def nll(input, target): return -input[range(target.shape[0]), target].mean()
loss_func = nll
yb = y_train[0:bs]
loss_func(preds, yb)
preds[0]
((x_train[0:bs]@weights+bias) - (x_train[0:bs]@weights+bias).exp().sum(-1).log().unsqueeze(-1))[0]
preds[0]
nll(preds, yb)
-preds[range(yb.shape[0]), yb].mean()
type(preds)
preds[range(0)]
preds[0]
preds[range(1)]
preds[range(2)]
preds[:2]
type(preds)
np.array([[range(10)]])[range(1)]
A = np.array([[range(10)]])
A.shape
A[range(2)]
A.shape
len(A[0])
A.shape[0]
A[0]
A[range(1)]
xb.sum()
xb.numpy().sum(-1)
xb.sum(-1)
"""
Explanation: the torch.Tensor.sum(dim) call takes an integer argument as the axis along which to sum. This applies to NumPy arrays as well.
In this case xb.sum(-1) will turn a 64x784 tensor into a size 64 tensor. This creates a tensor with each element being the total sum of its corresponding size 784 (28x28 flattened) image from the minibatch.
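A quick check of that shape claim, on a stand-in tensor:

```python
import torch

batch = torch.ones(64, 784)        # stand-in for a minibatch of images
sums = batch.sum(-1)               # sum over the last axis
print(sums.shape, sums[0].item())  # → torch.Size([64]) 784.0
```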
End of explanation
"""
xb.sum(-1)
xb[0].sum()
"""
Explanation: torch.unsqueeze returns a tensor with a dimension of size 1 inserted at the specified position.
the returned tensor shares the same underlying data with this tensor.
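A minimal isolated demo of that behaviour:

```python
import torch

v = torch.arange(3.)              # shape: [3]
print(v.unsqueeze(-1).shape)      # → torch.Size([3, 1])
print(v.unsqueeze(0).shape)       # → torch.Size([1, 3])
```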
End of explanation
"""
xb.exp().sum(-1).log()
xb.exp().sum(-1).log()[0]
"""
Explanation: taking a look at what .unsqueeze does; what does the tensor look like right before unsqueeze is applied to it?
End of explanation
"""
(xb.exp().sum(-1).log())[0]
xb.exp().sum(-1).log().unsqueeze(-1)[:10]
np.array([i for i in range(10)]).shape
torch.Tensor([i for i in range(10)]).shape
xb.exp().sum(-1).log().unsqueeze(-1).numpy().shape
"""
Explanation: making sure I didn't need parentheses there
End of explanation
"""
xb.exp().sum(-1).log()[:10]
"""
Explanation: Okay so .unsqueeze turns the size 64 tensor into a 64x1 tensor, so it's nicely packaged up with the first element being the 64-long vector ... or something like that right?
End of explanation
"""
preds.unsqueeze(-1).shape
"""
Explanation: The unsqueezed tensor doesn't look as 'nice'.. I guess. So it's packaged into a single column vector because we'll need that for the linear algebra we'll do to it later yeah?
End of explanation
"""
preds.unsqueeze(-1)[:2]
"""
Explanation: Oh this is cool. I was wondering how .unsqeeze worked for tensors with multiple items in multiple dimensions (ie: not just a single row vector). Well this is what it does:
End of explanation
"""
# logsoftmax(xb)
ls_xb = log_softmax(xb)
log_softmax(xb@weights+bias)[0]
(xb@weights).shape
xb.shape
(xb@weights).shape
"""
Explanation: So .unsqueeze turns our size 64x10 ... ohhhhhhhh I misread:
torch.unsqueeze returns a tensor with a dimension of size 1 inserted at the specified position.
doesn't mean it repackages the original tensor into a 1-dimensional tensor. I was wonder how it knew how long to make it (you'd have to just concatenate everything, but then in what order?).
No, a size-1 dimension is inserted where you tell it. So if it's an (X,Y) matrix, you go and give it a Z dimension, but that Z only contains the original (X,Y), ie: the only thing added is a dimension.
Okay, interesting. Not exactly sure yet why we want 3 dimensions, but I kinda get it. Is it related to our data being 28x28x1? Wait isn't PyTorch's ordering N x [C x H x W] ? So it's unrelated then? Or useful for returning 64x784 to 64x28x28? I think that's not the case? Don't know.
So what's up with the input[range(.. thing?:
End of explanation
"""
# for reference:
xb = x_train[0:bs]
yb = y_train[0:bs]
def log_softmax(x): return x - x.exp().sum(-1).log().unsqueeze(-1)
def model(xb): return log_softmax(xb @ weights + bias)
preds = model(xb)
def nll(input, target): return -input[range(target.shape[0]), target].mean()
loss = nll(preds, yb)
loss
"""
Explanation: Oh this is where I was confused. I'm not throwing xb into Log Softmax. I'm throwing xb • w + bias. The shape going into the log softmax function is not 64x784, it's 64x10. Yeah that makes sense. well duh it has to. Each value in the tensor is an activation for a class, for each image in the minibatch. So by the magic of machine learning, each activation encapsulates the effect of the weights and biases on that input element with respect to that class.
So that means that the .unsqueeze operation is not going to be giving a 64x784 vector.
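As a side check, the hand-rolled log_softmax above should agree with PyTorch's built-in F.log_softmax, at least for well-behaved inputs (the built-in additionally subtracts the row max first, for numerical stability):

```python
import torch
import torch.nn.functional as F

x = torch.randn(4, 10)
mine = x - x.exp().sum(-1).log().unsqueeze(-1)
print(torch.allclose(mine, F.log_softmax(x, dim=-1), atol=1e-6))  # → True
```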
End of explanation
"""
xb, xb.shape
"""
Explanation: Note the loss equals that in cell Out[25] above as it should.
Back to teasing this apart by hand.
The minibatch:
End of explanation
"""
(xb @ weights + bias)[:2]
(xb @ weights + bias).shape
"""
Explanation: The minibatch's activations as they head into the Log Softmax:
End of explanation
"""
log_softmax(xb@weights+bias)[:2]
log_softmax(xb@weights+bias).shape
"""
Explanation: The minibatch activations after the Log Softmax and before heading into Negative Log Likelihood:
End of explanation
"""
nll(log_softmax(xb@weights+bias), yb)
"""
Explanation: The loss value computed via NLL on the Log Softmax activations:
End of explanation
"""
[range(yb.shape[0]), yb]
"""
Explanation: Okay. Now questions. What is indexing input by [range(target.shape[0]), target] supposed to be doing? I established before that A[range(n)] is valid if n ≤ A.shape[0]. So what's going on is I'm range-indexing the 1st dimension of the LogSoftmax activations with the length of the target tensor, and the rest of the dimension indices being the ..target tensor itself?
That means the index is this:
End of explanation
"""
xb[yb]
"""
Explanation: Okay. What does it look like when I index a tensor – forget range-idx for now – with another tensor?
End of explanation
"""
xb.shape, yb.shape
array_1 = np.array([[str(j)+str(i) for i in range(10)] for j in range(5)])
array_1
array_2 = np.array([i for i in range(len(array_1[0]))])
array_2
"""
Explanation: Okay..
End of explanation
"""
array_1[range(array_2.shape[0]), array_2]
"""
Explanation: Uh, moment of truth:
End of explanation
"""
# for reference (again):
xb = x_train[0:bs]
yb = y_train[0:bs]
def log_softmax(x): return x - x.exp().sum(-1).log().unsqueeze(-1)
def model(xb): return log_softmax(xb @ weights + bias)
preds = model(xb)
def nll(input, target): return -input[range(target.shape[0]), target].mean()
loss = nll(preds, yb)
"""
Explanation: Oof course. What happened. Is it.. yes. I'm indexing the wrong array. Also no value in target is greater than the number of classes ... oh... oh ffs. Okay.
I range index by the length of target's first dim to get the entire first dim of the LogSoftmax activations, and each vector in that index is itself indexed by the value of the target.
Less-shitty English: take the first dimension of the activations; that should be batch_size x num_classes activations; so: num_classes values in each of batch_size vectors; Now for each of those vectors, pull out the value indexed by the corresponding index-value in the target tensor.
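A tiny concrete version of that indexing (hypothetical probabilities, batch of 2, three classes):

```python
import torch

logp = torch.log(torch.tensor([[0.7, 0.2, 0.1],
                               [0.1, 0.8, 0.1]]))
target = torch.tensor([0, 1])

picked = logp[range(target.shape[0]), target]  # log-prob of the correct class only
loss = -picked.mean()
print(round(loss.item(), 4))  # → 0.2899
```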
Oh I see. So just now I was confused that there was redundant work being done. yeah kinda. It's Linear Algebra. See, the weights and biases produce the entire output-activations tensor. Meaning: the dot-product & addition operation creates probabilities for every class for every image in the minibatch. Yeah that can be a lot; linalg exists in a block-like world & it's easy to get carried away (I think).
And that answers another question: the loss function here only cares about how wrong the correct class was. Looks like the incorrect classes are totally ignored (hence a bit of mental hesitation for me because it looks like 90% of the information is being thrown away (it is)). Now, that's not what's going on when the Log Softmax is being computed. Gotta think about that a moment..
could activations for non-target classes affect the target-activations during the Log Softmax step, before they're discarded in the NLL?
xb - xb.exp().sum(-1).log().unsqueeze(-1)
is the magic line (xb is x in the definition).
End of explanation
"""
xb.shape, weights.shape
np.array([[1,1,1],[2,2,2],[3,3,3]]) @ np.array([[1],[2],[3]])
np.array([[1,1,1],[2,2,2],[-11,0,3]]) @ np.array([[1],[2],[3]])
"""
Explanation: When the activations are activating, only the weights and biases are having a say. Right?
End of explanation
"""
yb.type()
# batch size of 3
xb_tmp = np.array([[1,1,1,1,1],[2,2,2,2,2],[3,3,3,3,3]])
yb_tmp = np.array([0,1,2])
# 4 classes
c = 4
w_tmp = np.array([[i for i in range(c)] for j in range(xb_tmp.shape[1])])
xb_tmp = torch.Tensor(xb_tmp)
yb_tmp = torch.tensor(yb_tmp, dtype=torch.int64) # see: https://pytorch.org/docs/stable/tensors.html#torch-tensor
w_tmp = torch.Tensor(w_tmp)
"""
Explanation: Right.
Now what about the Log Softmax operation itself? Well okay I can simulate this by hand:
End of explanation
"""
torch.tensor([[1, 2, 3]],dtype=torch.int32)
xb_tmp.shape, yb_tmp.shape, w_tmp.shape
xb.shape, yb.shape, weights.shape
actv_tmp = log_softmax(xb_tmp @ w_tmp)
actv_tmp
nll(actv_tmp, yb_tmp)
"""
Explanation: umm....
...
So it's torch.tensor not torch.Tensor? Got a lot of errors trying to specify a datatype with capital T. Alright then.
End of explanation
"""
# batch size of 3
xb_tmp = np.array([[0,1,1,0,0]])
yb_tmp = np.array([1])
# 4 classes
c = 4
w_tmp = np.array([[i for i in range(c)] for j in range(xb_tmp.shape[1])])
xb_tmp = torch.Tensor(xb_tmp)
yb_tmp = torch.tensor(yb_tmp, dtype=torch.int64) # see: https://pytorch.org/docs/stable/tensors.html#torch-tensor
w_tmp = torch.Tensor(w_tmp)
xb_tmp @ w_tmp
# LogSoftmax(activations)
actv_tmp = log_softmax(xb_tmp @ w_tmp)
actv_tmp
# NLL Loss
loss = nll(actv_tmp, yb_tmp)
loss
def cross_test(x, y):
# batch size of 3
xb_tmp = np.array(x)
yb_tmp = np.array(y)
# 4 classes
c = 4
w_tmp = np.array([[i for i in range(c)] for j in range(xb_tmp.shape[1])])
xb_tmp = torch.Tensor(xb_tmp)
yb_tmp = torch.tensor(yb_tmp, dtype=torch.int64) # see: https://pytorch.org/docs/stable/tensors.html#torch-tensor
w_tmp = torch.Tensor(w_tmp)
print(f'Activation: {xb_tmp @ w_tmp}')
# LogSoftmax(activations)
actv_tmp = log_softmax(xb_tmp @ w_tmp)
print(f'Log Softmax: {actv_tmp}')
# NLL Loss
loss = nll(actv_tmp, yb_tmp)
print(f'NLL Loss: {loss}')
w_tmp
cross_test([[1,1,1,1,1]], [1])
cross_test([[1,1,1,1,0]], [1])
cross_test([[1,1,1,0,0]], [1])
cross_test([[1,1,1,1,0]], [1])
cross_test([[1,1,0,0,0]], [1])
"""
Explanation: Good it works. Now to change things. The question was if any of the dropped values (non-target index) had any effect on the loss - since the loss was only calculated on error from the correct target. Basically: is there any lateral flow of information?
So I'll check this by editing values in the softmax activation that are not of the correct index.
Wait that shouldn't have an effect anyway. No the question is if information earlier in the stream had an effect later on. It is 4:12 am..
Aha. My question was if the activations that created the non-target class probabilities had any effect on target classes. Which is asking if there is crossing of information in the ... oh.
I confused myself with the minibatches. Ignore those, there'd be something very wrong if there was cross-talk between them. I want to know if there is cross-talk within an individual tensor as it travels through the model.
End of explanation
"""
|
feststelltaste/software-analytics | prototypes/Complexity over Time.ipynb | gpl-3.0 | import pandas as pd
diff_raw = pd.read_csv(
"../../buschmais-spring-petclinic_fork/git_diff.log",
sep="\n",
names=["raw"])
diff_raw.head(16)
"""
Explanation: The idea
In my previous blog post, we got to know the idea of "indentation-based complexity". We took a static view on the Linux kernel to spot the most complex areas.
This time, we wanna track the evolution of the indentation-based complexity of a software system over time. We are especially interested in its correlation with the lines of code: if the lines of code of our system stay more or less stable, but the amount of indentation per source code file keeps increasing, we surely have a complexity problem.
Again, this analysis is highly inspired by Adam Tornhill's book "Software Design X-Ray"
, which I currently always recommend if you want to get a deep dive into software data analysis.
The data
For the calculation of the evolution of our software system, we can use data from the version control system. In our case, we can get all changes to Java source code files with Git. We just need to say the right magic words, which is
git log -p -- *.java
This gives us data like the following:
```
commit e5254156eca3a8461fa758f17dc5fae27e738ab5
Author: Antoine Rey antoine.rey@gmail.com
Date: Fri Aug 19 18:54:56 2016 +0200
Convert Controler's integration test to unit test
diff --git a/src/test/java/org/springframework/samples/petclinic
/web/CrashControllerTests.java b/src/test/java/org/springframework/samples/petclinic/web/CrashControllerTests.java
index ee83b8a..a83255b 100644
--- a/src/test/java/org/springframework/samples/petclinic/web/CrashControllerTests.java
+++ b/src/test/java/org/springframework/samples/petclinic/web/CrashControllerTests.java
@@ -1,8 +1,5 @@
package org.springframework.samples.petclinic.web;
-import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
-import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.*;
-
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
```
We have the
* commit sha
commit e5254156eca3a8461fa758f17dc5fae27e738ab5
* author's name
Author: Antoine Rey <antoine.rey@gmail.com>
* date of the commit
Date: Fri Aug 19 18:54:56 2016 +0200
* commit message
Convert Controler's integration test to unit test
names of the changed files (before and after the change)
diff --git a/src/test/java/org/springframework/samples/petclinic/web/CrashControllerTests.java b/src/test/java/org/springframework/samples/petclinic/web/CrashControllerTests.java
* the extended index header
index ee83b8a..a83255b 100644
--- a/src/test/java/org/springframework/samples/petclinic/web/CrashControllerTests.java
+++ b/src/test/java/org/springframework/samples/petclinic/web/CrashControllerTests.java
* and the full file diff where we can see additions or modifications (+) and deletions (-)
```
package org.springframework.samples.petclinic.web;
-import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
-import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.*;
-
import org.junit.Before;
```
We "just" have to get this data into our favorite data analysis framework, which is, of course, Pandas :-). We can actually do that! Let's see how!
Advanced data wrangling
Reading in such semi-structured data is a little challenge. But we can do it with some tricks. First, we read in the whole Git diff history by standard means, using read_csv and the separator \n to get one row per line. We make sure to give the column a nice name as well.
End of explanation
"""
index_row = diff_raw.raw.str.startswith("index ")
ignored_diff_rows = (index_row.shift(1) | index_row.shift(2))
diff_raw = diff_raw[~(index_row | ignored_diff_rows)]
diff_raw.head(10)
"""
Explanation: The output is the commit data described above, where each line in the text file represents one row in the DataFrame (without blank lines).
Cleansing
We skip all the data we don't need for sure. Especially the "extended index header" with its two lines that begin with +++ and --- could easily be mixed up with the real diff data, which also begins with a + or a -. Fortunately, we can identify these rows easily: they are the two rows that directly follow a row starting with index. Using the shift operation on the mask of those index rows, we can get rid of all these lines.
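A tiny, self-contained demo of this masking trick on a made-up diff fragment (assuming pandas is available):

```python
import pandas as pd

raw = pd.Series(["diff --git a/X b/X",
                 "index ee83b8a..a83255b 100644",
                 "--- a/X",
                 "+++ b/X",
                 "+new line",
                 "-old line"])

index_row = raw.str.startswith("index ")
# the two rows after an "index" row are the ---/+++ header lines
ignored = index_row.shift(1, fill_value=False) | index_row.shift(2, fill_value=False)
cleaned = raw[~(index_row | ignored)]
print(cleaned.tolist())  # ['diff --git a/X b/X', '+new line', '-old line']
```

The real +new line and -old line rows survive, while the three header rows are dropped.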
End of explanation
"""
diff_raw['commit'] = diff_raw.raw.str.split("^commit ").str[1]
diff_raw['timestamp'] = pd.to_datetime(diff_raw.raw.str.split("^Date: ").str[1])
diff_raw['path'] = diff_raw.raw.str.extract("^diff --git.* b/(.*)", expand=True)[0]
diff_raw.head()
"""
Explanation: Extracting metadata
Next, we extract some metadata of a commit. We can identify the different entries by using a regular expression that looks up a specific key word for each line. We extract each individual information into a new Series/column because we need it for each change line during the software's history.
End of explanation
"""
diff_raw = diff_raw.fillna(method='ffill')
diff_raw.head(8)
"""
Explanation: To assign each commit's metadata to the remaining rows, we forward fill those rows with the metadata by using the fillna method.
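For illustration, a minimal forward-fill example on made-up data (assuming pandas; `.ffill()` is the modern equivalent of `fillna(method='ffill')`):

```python
import pandas as pd

df = pd.DataFrame({
    "commit": ["e525415", None, None, "a1b2c3d", None],
    "raw": ["commit e525415", "+foo", "-bar", "commit a1b2c3d", "+baz"],
})
# propagate the last known commit metadata down to the diff rows below it
df = df.ffill()
print(df["commit"].tolist())  # ['e525415', 'e525415', 'e525415', 'a1b2c3d', 'a1b2c3d']
```

Every diff row now carries the metadata of the commit it belongs to.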
End of explanation
"""
%%timeit
diff_raw.raw.str.extract("^\+( *).*$", expand=True)[0].str.len()
diff_raw["i"] = diff_raw.raw.str[1:].str.len() - diff_raw.raw.str[1:].str.lstrip().str.len()
diff_raw
%%timeit
diff_raw.raw.str[0] + diff_raw.raw.str[1:].str.lstrip().str.len()
# extract the leading spaces of added (+) and deleted (-) lines from the raw text
# (note: tab indentation is not counted by the space-only regex)
diff_raw['added'] = diff_raw.raw.str.extract(r"^\+( *).*$", expand=True)[0].str.len()
diff_raw['deleted'] = diff_raw.raw.str.extract(r"^-( *).*$", expand=True)[0].str.len()
diff_raw.head()
"""
Explanation: Identifying source code lines
We can now focus on the changed source code lines. We can identify added lines by their leading + sign and deleted lines by their leading - sign, and measure the indentation of each by extracting the leading spaces.
"""
diff_raw['line'] = diff_raw.raw.str.replace("\t", " ")
diff_raw.head()
diff = \
diff_raw[
(~diff_raw['added'].isnull()) |
(~diff_raw['deleted'].isnull())].copy()
diff.head()
diff['is_comment'] = diff.line.str[1:].str.match(r' *(//|/\*|\*).*')
diff['is_empty'] = diff.line.str[1:].str.replace(" ","").str.len() == 0
diff['is_source'] = ~(diff['is_empty'] | diff['is_comment'])
diff.head()
diff.raw.str[0].value_counts()
diff['lines_added'] = (~diff.added.isnull()).astype('int')
diff['lines_deleted'] = (~diff.deleted.isnull()).astype('int')
diff.head()
diff = diff.fillna(0)
#diff.to_excel("temp.xlsx")
diff.head()
commits_per_day = diff.set_index('timestamp').resample("D").sum()
commits_per_day.head()
%matplotlib inline
commits_per_day.cumsum().plot()
(commits_per_day.added - commits_per_day.deleted).cumsum().plot()
(commits_per_day.lines_added - commits_per_day.lines_deleted).cumsum().plot()
diff_sum = diff.sum()
diff_sum.lines_added - diff_sum.lines_deleted
3913
"""
Explanation: For our later indentation-based complexity calculation, we have to make sure that each line is measured consistently; in particular, tabs are replaced by spaces before the indentation is counted.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.24/_downloads/5f078eabe74f0448d3e1662c12313289/source_space_time_frequency.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD-3-Clause
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, source_band_induced_power
print(__doc__)
"""
Explanation: Compute induced power in the source space with dSPM
Returns STC files ie source estimates of induced power
for different bands in the source space. The inverse method
is linear based on dSPM inverse operator.
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
tmin, tmax, event_id = -0.2, 0.5, 1
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.find_events(raw, stim_channel='STI 014')
inverse_operator = read_inverse_operator(fname_inv)
include = []
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,
stim=False, include=include, exclude='bads')
# Load condition 1
event_id = 1
events = events[:10] # take 10 events to keep the computation time low
# Use linear detrend to reduce any edge artifacts
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(grad=4000e-13, eog=150e-6),
preload=True, detrend=1)
# Compute a source estimate per frequency band
bands = dict(alpha=[9, 11], beta=[18, 22])
stcs = source_band_induced_power(epochs, inverse_operator, bands, n_cycles=2,
use_fft=False, n_jobs=1)
for b, stc in stcs.items():
stc.save('induced_power_%s' % b)
"""
Explanation: Set parameters
End of explanation
"""
plt.plot(stcs['alpha'].times, stcs['alpha'].data.mean(axis=0), label='Alpha')
plt.plot(stcs['beta'].times, stcs['beta'].data.mean(axis=0), label='Beta')
plt.xlabel('Time (s)')
plt.ylabel('Power')
plt.legend()
plt.title('Mean source induced power')
plt.show()
"""
Explanation: plot mean power
End of explanation
"""
|
mohanprasath/Course-Work | coursera/python_for_data_science/4.2Writing_and_Saving_Files.ipynb | gpl-3.0 | with open('/resources/data/Example2.txt','w') as writefile:
writefile.write("This is line A")
"""
Explanation: <a href="http://cocl.us/topNotebooksPython101Coursera"><img src = "https://ibm.box.com/shared/static/yfe6h4az47ktg2mm9h05wby2n7e8kei3.png" width = 750, align = "center"></a>
<a href="https://www.bigdatauniversity.com"><img src = "https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png" width = 300, align = "center"></a>
<h1 align=center><font size = 5> Writing and Saving Files in PYTHON</font></h1>
<br>
This notebook will provide information regarding writing and saving data into .txt files.
Table of Contents
<div class="alert alert-block alert-info" style="margin-top: 20px">
<li><a href="#refw">Writing files Text Files</a></li>
<br>
<p></p>
Estimated Time Needed: <strong>15 min</strong>
</div>
<hr>
<a id="ref3"></a>
<h2 align=center>Writing Files</h2>
We can open a file object and use the method write() to write text into the file. To write to a file, the mode argument must be set to write w. Let’s write a file Example2.txt with the line: “This is line A”
End of explanation
"""
with open('/resources/data/Example2.txt','r') as testwritefile:
print(testwritefile.read())
"""
Explanation: We can read the file to see if it worked:
End of explanation
"""
with open('/resources/data/Example2.txt','w') as writefile:
writefile.write("This is line A\n")
writefile.write("This is line B\n")
"""
Explanation: We can write multiple lines:
End of explanation
"""
with open('/resources/data/Example2.txt','r') as testwritefile:
print(testwritefile.read())
"""
Explanation: The method .write() works similarly to the method .readline(), except that instead of reading a new line it writes a new line. The process is illustrated in the figure below; the different colour coding of the grid represents a new line added to the file after each method call.
<a ><img src = "https://ibm.box.com/shared/static/4d86eysjv7fiy5nocgvpbddyj2uckw6z.png" width = 500, align = "center"></a>
<h4 align=center>
An example of “.write()”, the different colour coding of the grid represents a new line added after each method call.
</h4>
You can check the file to see if your results are correct
End of explanation
"""
with open('/resources/data/Example2.txt','a') as testwritefile:
testwritefile.write("This is line C\n")
"""
Explanation: By setting the mode argument to append, a, you can append a new line as follows:
End of explanation
"""
with open('/resources/data/Example2.txt','r') as testwritefile:
print(testwritefile.read())
"""
Explanation: You can verify the file has changed by running the following cell:
End of explanation
"""
Lines=["This is line A\n","This is line B\n","This is line C\n"]
Lines
with open('Example2.txt','w') as writefile:
for line in Lines:
print(line)
writefile.write(line)
"""
Explanation: We write a list to a .txt file as follows:
End of explanation
"""
with open('Example2.txt','r') as testwritefile:
print(testwritefile.read())
"""
Explanation: We can verify the file is written by reading it and printing out the values:
End of explanation
"""
with open('Example2.txt','a') as testwritefile:
testwritefile.write("This is line D\n")
"""
Explanation: We can again append to the file by changing the second parameter to a. This appends the line:
End of explanation
"""
with open('Example2.txt','r') as testwritefile:
print(testwritefile.read())
"""
Explanation: We can see the results of appending the file:
End of explanation
"""
with open('Example2.txt','r') as readfile:
with open('Example3.txt','w') as writefile:
for line in readfile:
writefile.write(line)
"""
Explanation: Copy a file
Let's copy the file Example2.txt to the file Example3.txt:
End of explanation
"""
with open('Example3.txt','r') as testwritefile:
print(testwritefile.read())
"""
Explanation: We can read the file to see if everything works:
End of explanation
"""
# Write CSV file example
student_list = [{"Student ID": 1, "Gender": "F", "Name": "Emma"},
{"Student ID": 2, "Gender": "M", "Name": "John"},
{"Student ID": 3, "Gender": "F", "Name": "Linda"}]
# Write csv file
with open('Example_csv.csv','w') as writefile:
# Set header for each column
for col_header in list(student_list[0].keys()):
writefile.write(str(col_header) + ", ")
writefile.write("\n")
# Set value for each column
for student in student_list:
for col_ele in list(student.values()):
writefile.write(str(col_ele) + ", ")
writefile.write("\n")
# Print out the result csv
with open('Example_csv.csv','r') as testwritefile:
print(testwritefile.read())
"""
Explanation: After reading files, we can also write data into files and save them in different file formats like .txt, .csv, .xls (for Excel files), etc. Let's take a look at an example.
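As an alternative to assembling the CSV text by hand, the standard library csv module can handle the separators and quoting for us. A small sketch (the file name Example_csv2.csv is just an example, not from the notebook above):

```python
import csv

student_list = [{"Student ID": 1, "Gender": "F", "Name": "Emma"},
                {"Student ID": 2, "Gender": "M", "Name": "John"},
                {"Student ID": 3, "Gender": "F", "Name": "Linda"}]

# newline='' is recommended by the csv docs so rows are not double-spaced on Windows
with open('Example_csv2.csv', 'w', newline='') as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=list(student_list[0].keys()))
    writer.writeheader()            # writes the column headers
    writer.writerows(student_list)  # one row per dictionary

with open('Example_csv2.csv', 'r') as csvfile:
    print(csvfile.read())
```

DictWriter also quotes values containing commas automatically, which the manual string concatenation above does not.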
End of explanation
"""
|
InsightLab/data-science-cookbook | 2019/04-naive-bayes/Naive_Bayes_Tutorial_01.ipynb | mit | import csv
def loadCsv(filename):
lines = csv.reader(open(filename, "r"))
dataset = list(lines)
for i in range(len(dataset)):
dataset[i] = [float(x) for x in dataset[i]]
return dataset
"""
Explanation: Naive Bayes
Introduction
In this tutorial we will present the implementation of the Naive Bayes algorithm applied to numerical data. We will use the dataset called Pima Indians Diabetes, used to predict the onset of diabetes; see this link.
This problem consists of 768 observations of medical details of Indian patients. The records describe instantaneous measurements taken from the patient, such as her age, the number of pregnancies and blood workup. All patients are women aged 21 or older. All attributes are numeric, and their units vary from attribute to attribute.
Each record has a class value that indicates whether the patient suffered an onset of diabetes within 5 years of when the measurements were taken (1) or not (0).
This is a standard dataset that has been studied extensively in the machine learning literature. A good prediction accuracy is 70% to 76%.
Tutorial Steps
Handle Data: load the data from the CSV file and split it into training and test datasets.
Summarize Data: summarize the properties in the training dataset so that we can calculate probabilities and make predictions.
Make a Prediction: use the dataset summaries to generate a single prediction.
Make Predictions: generate predictions given a test dataset and a summarized training dataset.
Evaluate Accuracy: evaluate the accuracy of the predictions made for a test dataset as the percentage of all predictions that are correct.
1. Handle Data
1.1 Load the file
The first thing we need to do is load our data file. The data is in CSV format without a header line. We can open the file with the open function and read the data lines using the reader function in the csv module.
We also need to convert the attributes that were loaded as strings into numbers so that we can work with them. Below is the loadCsv() function for loading the Pima Indians dataset.
End of explanation
"""
### PUT YOUR ANSWER HERE
"""
Explanation: Exercise 1
Test this function by loading the dataset pima-indians-diabetes.data and printing the number of loaded instances as follows: "Loaded file pima-indians-diabetes.data with XXX rows"
End of explanation
"""
import random
def splitDataset(dataset, splitRatio):
trainSize = int(len(dataset) * splitRatio)
trainSet = []
copy = list(dataset)
while len(trainSet) < trainSize:
index = random.randrange(len(copy))
trainSet.append(copy.pop(index))
return [trainSet, copy]
"""
Explanation: 1.2 Split the file
Next, we need to split the data into a training dataset, which Naive Bayes can use to make predictions, and a test dataset that we can use to evaluate the accuracy of the model. We need to split the dataset randomly into training and test sets with a ratio of 67% training and 33% test (this is a common ratio for testing an algorithm on a dataset).
Below is the splitDataset() function that splits a given dataset according to a given split ratio.
End of explanation
"""
### PUT YOUR ANSWER HERE
"""
Explanation: Exercise 2
Test this function by defining a mock dataset with 5 instances and splitting it into training and test sets. Print the generated training and test sets, for example printing:
"Split a dataset with 5 rows into a training set with [[2], [5], [4]] and a test set with [[1], [3]]"
End of explanation
"""
def separateByClass(dataset):
separated = {}
for i in range(len(dataset)):
vector = dataset[i]
if (vector[-1] not in separated):
separated[vector[-1]] = []
separated[vector[-1]].append(vector)
return separated
"""
Explanation: 2. Summarize Data
The Naive Bayes model essentially consists of a summary of the training dataset. This summary is then used when making predictions.
The summary of the collected training data involves the mean and the standard deviation of each attribute, by class value. For example, if there are two class values and 7 numeric attributes, then we need a mean and a standard deviation for each combination of attribute (7) and class value (2), that is, 14 attribute summaries. These are needed when making predictions to calculate the probability of specific attribute values belonging to each class value.
To summarize the data we create the following subtasks:
Separate data by class
Calculate the mean
Calculate the standard deviation
Summarize the dataset
Summarize attributes by class
2.1 Separate data by class
The first task is to separate the training dataset instances by class value so that we can calculate the statistics for each class. We can do this by creating a map from each class value to a list of instances that belong to that class, and sorting the entire dataset of instances into the appropriate lists.
The separateByClass() function below does this.
End of explanation
"""
### PUT YOUR ANSWER HERE
"""
Explanation: Exercise 3
Test this function with some examples of synthetic data and print the separated classes with their respective instances. Note in the example above that the class is the last element of the vector. Here is an example output:
"Instances separated by class: {1: [[1, 20, 1], [3, 22, 1]], 0: [[2, 21, 0]]}"
End of explanation
"""
import math
def mean(numbers):
return sum(numbers)/float(len(numbers))
def stdev(numbers):
avg = mean(numbers)
variance = sum([pow(x-avg,2) for x in numbers])/float(len(numbers)-1)
return math.sqrt(variance)
"""
Explanation: 2.2 Calculate the mean and standard deviation
We need to calculate the mean of each attribute for a class value. The mean is the central or middle tendency of the data, and we will use it as the mean of our Gaussian distribution when calculating probabilities.
We also need to calculate the standard deviation of each attribute for a class value. The standard deviation describes the variation in the spread of the data, and we will use it to characterize the expected spread of each attribute in our Gaussian distribution when calculating probabilities.
The standard deviation is calculated as the square root of the variance. The variance is calculated as the average of the squared differences of each attribute value from the mean. Note that we are using the N-1 method, which subtracts 1 from the number of attribute values when calculating the variance.
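As a quick sanity check, the N-1 formulas described above agree with Python's standard statistics module. The block below re-implements the two functions so it is self-contained:

```python
import math
import statistics

def mean(numbers):
    return sum(numbers) / float(len(numbers))

def stdev(numbers):
    avg = mean(numbers)
    # N-1 in the denominator: sample variance
    variance = sum([pow(x - avg, 2) for x in numbers]) / float(len(numbers) - 1)
    return math.sqrt(variance)

numbers = [1, 2, 3, 4, 5]
print(mean(numbers), stdev(numbers))  # 3.0 1.5811388300841898
assert math.isclose(mean(numbers), statistics.mean(numbers))
assert math.isclose(stdev(numbers), statistics.stdev(numbers))
```

statistics.stdev also uses the N-1 (sample) definition, so the two implementations match.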
End of explanation
"""
### PUT YOUR ANSWER HERE
"""
Explanation: Exercise 4
Create some dummy data and test the functions created. Example:
"Computation for [1, 2, 3, 4, 5]: mean=3.0, stdev=1.5811388300841898"
End of explanation
"""
def summarize(dataset):
summaries = [(mean(attribute), stdev(attribute)) for attribute in zip(*dataset)]
del summaries[-1]
return summaries
"""
Explanation: 2.3 Summarize the dataset
Now we have the tools to summarize a dataset. For a given list of instances (for one class value), we can calculate the mean and the standard deviation of each attribute.
The zip function groups the values of each attribute in our data instances into their own lists so that we can calculate the mean and standard deviation values for the attribute.
End of explanation
"""
### PUT YOUR ANSWER HERE
"""
Explanation: Exercise 5
Create some dummy data and test the functions created. Example output:
"Attribute summary: [(2.0, 1.0), (21.0, 1.0)]"
End of explanation
"""
def summarizeByClass(dataset):
separated = separateByClass(dataset)
summaries = {}
for classValue, instances in separated.items():
summaries[classValue] = summarize(instances)
return summaries
"""
Explanation: 2.4 Summarize attributes by class
We can put everything together by separating our training dataset into instances grouped by class, using the summarizeByClass() function.
End of explanation
"""
### PUT YOUR ANSWER HERE
"""
Explanation: Exercise 6
Test the function above using a small dataset. Example output:
Summary by class: {1: [(2.0, 1.4142135623730951), (21.0, 1.4142135623730951)], 0: [(3.0, 1.4142135623730951), (21.5, 0.7071067811865476)]}
End of explanation
"""
def calculateProbability(x, mean, stdev):
    exponent = math.exp(-(math.pow(x-mean,2)/(2*math.pow(stdev,2))))
    return (1 / (math.sqrt(2*math.pi) * stdev)) * exponent
"""
Explanation: 3. Making Predictions
We are now ready to make predictions using the summaries prepared from our training data. Making predictions involves calculating the probability that a given data instance belongs to each class, and selecting the class with the largest probability as the prediction.
We can divide this part into the following tasks:
Calculate the Gaussian probability density function
Calculate class probabilities
Make a prediction
Make multiple predictions
Get the accuracy
3.1 Calculate the Gaussian probability density function
We can use a Gaussian function to estimate the probability of a given attribute value, given the known mean and standard deviation of the attribute estimated from the training data.
Given that the attribute summaries were prepared for each attribute and class value, the result is the conditional probability of a given attribute value given a class value.
See the references for the details of this equation for the Gaussian probability density function. In summary, we are plugging our known details into the Gaussian (attribute value, mean and standard deviation) and getting back the probability that our attribute value belongs to the class.
In the calculateProbability() function, we first calculate the exponent, then the main division. This lets us fit the equation nicely on two lines.
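For reference, the Gaussian probability density function being evaluated here is:

```latex
f(x \mid \mu, \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma}\, \exp\!\left(-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right)
```

where the mean μ and standard deviation σ are the per-attribute, per-class values estimated from the training data.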
End of explanation
"""
### PUT YOUR ANSWER HERE
"""
Explanation: Exercise 7
Test the function above using a small dataset. Example output:
"Probability of belonging to this class: 0.06248965759370005"
End of explanation
"""
def calculateClassProbabilities(summaries, inputVector):
probabilities = {}
for classValue, classSummaries in summaries.items():
probabilities[classValue] = 1
for i in range(len(classSummaries)):
mean, stdev = classSummaries[i]
x = inputVector[i]
probabilities[classValue] *= calculateProbability(x, mean, stdev)
return probabilities
"""
Explanation: 3.2 Calculate class probabilities
Now that we can calculate the probability of an attribute belonging to a class, we can combine the probabilities of all the attribute values of a data instance and produce a probability of the entire data instance belonging to the class.
We combine the probabilities by multiplying them together. In the calculateClassProbabilities() function below, the probability of a given data instance is calculated by multiplying the attribute probabilities for each class. The result is a map from class values to probabilities.
End of explanation
"""
### PUT YOUR ANSWER HERE
"""
Explanation: Exercise 8
Test the function above using a small dataset. Example output:
"Probabilities for each class: {0: 0.7820853879509118, 1: 6.298736258150442e-05}"
End of explanation
"""
def predict(summaries, inputVector):
probabilities = calculateClassProbabilities(summaries, inputVector)
bestLabel, bestProb = None, -1
for classValue, probability in probabilities.items():
if bestLabel is None or probability > bestProb:
bestProb = probability
bestLabel = classValue
return bestLabel
"""
Explanation: 3.3 Make a prediction
Now that we can calculate the probability of a data instance belonging to each class value, we can look for the largest probability and return the associated class.
The predict() function performs this task.
End of explanation
"""
### PUT YOUR ANSWER HERE
"""
Explanation: Exercise 9
Test the function above using a small dataset. Example output:
"Input {'A': [(1, 0.5)], 'B': [(20, 5.0)]} Query [1.1, '?'] Prediction: Class A"
End of explanation
"""
def getPredictions(summaries, testSet):
predictions = []
for i in range(len(testSet)):
result = predict(summaries, testSet[i])
predictions.append(result)
return predictions
"""
Explanation: 3.4 Make multiple predictions
Finally, we can estimate the accuracy of the model by making predictions for each data instance in our test dataset. The getPredictions() function performs this task and returns a list of predictions, one for each test instance.
End of explanation
"""
### PUT YOUR ANSWER HERE
"""
Explanation: Exercise 10
Test the function above using a small dataset. Example output:
"Predictions: Summaries {'A': [(1, 0.5)], 'B': [(20, 5.0)]} Test [[1.1, '?'], [19.1, '?']] Predicted classes ['A', 'B']"
End of explanation
"""
def getAccuracy(testSet, predictions):
correct = 0
for i in range(len(testSet)):
if testSet[i][-1] == predictions[i]:
correct += 1
return (correct/float(len(testSet))) * 100.0
"""
Explanation: 3.5 Calculate the accuracy
The predictions can be compared with the class values in the test dataset. Classification accuracy can be calculated as an accuracy ratio between 0 and 100%. The getAccuracy() function calculates this ratio.
End of explanation
"""
### PUT YOUR ANSWER HERE
"""
Explanation: Exercise 11
Test the function above using a small dataset. Example output:
"Result: Test [[1, 1, 1, 'a'], [2, 2, 2, 'a'], [3, 3, 3, 'b']] predictions ['a', 'a', 'a'] Accuracy 66.66666666666666"
End of explanation
"""
### PUT YOUR ANSWER HERE
"""
Explanation: Exercise 12
Put all of the code above together, create a main function and run the prediction on the "pima-indians-diabetes.data" dataset, observing the accuracy obtained. Run it several times and analyze how the accuracy varies across those runs.
End of explanation
"""
|
chemo-wakate/tutorial-6th | beginner/mario/MachineLearning.ipynb | mit | # Import libraries for numerical computation and DataFrame manipulation
import numpy as np
import pandas as pd
import scipy as sp
from scipy import stats
# Import the library that provides access to resources via URL.
# import urllib # for Python 2
import urllib.request # for Python 3
# Import libraries for drawing figures and graphs.
%matplotlib inline
import matplotlib.pyplot as plt
# Machine learning libraries
from sklearn.model_selection import train_test_split # split into training and test data
from sklearn.metrics import confusion_matrix # confusion matrix
from sklearn.decomposition import PCA # principal component analysis
from sklearn.linear_model import LogisticRegression # logistic regression
from sklearn.neighbors import KNeighborsClassifier # k-nearest neighbors
from sklearn.svm import SVC # support vector machine
from sklearn.tree import DecisionTreeClassifier # decision tree
from sklearn.ensemble import RandomForestClassifier # random forest
from sklearn.ensemble import AdaBoostClassifier # AdaBoost
from sklearn.naive_bayes import GaussianNB # naive Bayes
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis # linear discriminant analysis
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis # quadratic discriminant analysis
"""
Explanation: <h3 STYLE="background: #c2edff;padding: 0.5em;">Step 5. Binary classification with machine learning</h3>
<ol>
<li><a href="#1">Load the "wine quality" data</a>
<li><a href="#2">Split into two groups</a>
<li><a href="#3">Separate explanatory and target variables</a>
<li><a href="#4">Split into training and test data</a>
<li><a href="#5">Logistic regression</a>
<li><a href="#6">Compare various machine learning methods</a>
</ol>
<h4 style="border-bottom: solid 1px black;">Goal of Step 5</h4>
Perform binary classification with various machine learning methods and evaluate their performance.
<img src="fig/cv.png">
End of explanation
"""
# Specify the resource on the web
url = 'https://raw.githubusercontent.com/chemo-wakate/tutorial-6th/master/beginner/data/winequality-red.txt'
# Download the resource from the specified URL and give it a name.
# urllib.urlretrieve(url, 'winequality-red.csv') # for Python 2
urllib.request.urlretrieve(url, 'winequality-red.txt') # for Python 3
# Load the data
df1 = pd.read_csv('winequality-red.txt', sep='\t', index_col=0)
df1.head() # show only the first 5 rows
"""
Explanation: <h3 STYLE="background: #c2edff;padding: 0.5em;"><a name="1">1. Load the "wine quality" data</a></h3>
The data was obtained from the <a href="http://archive.ics.uci.edu/ml/index.php" target="_blank">UC Irvine Machine Learning Repository</a> and slightly modified.
Red wine https://raw.githubusercontent.com/chemo-wakate/tutorial-6th/master/beginner/data/winequality-red.txt
White wine https://raw.githubusercontent.com/chemo-wakate/tutorial-6th/master/beginner/data/winequality-white.txt
<h4 style="border-bottom: solid 1px black;"> <a href="http://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality.names">Details</a></h4>
<ol>
<li>fixed acidity : fixed acid concentration (mostly tartaric acid)
<li>volatile acidity : volatile acid concentration (mostly acetic acid)
<li>citric acid : citric acid concentration
<li>residual sugar : residual sugar concentration
<li>chlorides : chloride concentration
<li>free sulfur dioxide : free sulfur dioxide concentration
<li>total sulfur dioxide : total sulfur dioxide concentration
<li>density : density
<li>pH : pH
<li>sulphates : sulphate concentration
<li>alcohol : alcohol content
<li>quality (score between 0 and 10) : quality score given as a value from 0 to 10
</ol>
End of explanation
"""
# A simple example
toy_data = pd.DataFrame([[1, 4, 7, 10, 13, 16], [2, 5, 8, 11, 14, 27], [3, 6, 9, 12, 15, 17], [21, 24, 27, 20, 23, 26]],
index = ['i1', 'i2', 'i3', 'i4'],
columns = list("abcdef"))
toy_data # check the contents
# extract only the rows where the value in column f is less than 20
toy_data[toy_data['f'] < 20]
# extract only the rows where the value in column f is 20 or more
toy_data[toy_data['f'] >= 20]
# extract the rows where the value in column f is 20 or more, and get their column b
pd.DataFrame(toy_data[toy_data['f'] >= 20]['b'])
# create a column named class; store 0 if the value in column f is less than 20, otherwise 1
toy_data['class'] = [0 if i < 20 else 1 for i in toy_data['f'].tolist()]
toy_data # check the contents
"""
Explanation: <h3 STYLE="background: #c2edff;padding: 0.5em;"><a name="2">2. Split into two groups</a></h3>
Here we split the wine quality into two groups, "below 6 (not good)" and "6 or above (good)", and then use machine learning to predict the quality from variables such as pH and volatile acidity. First, let's start by splitting into two groups.
<h4 style="border-bottom: solid 1px black;">Explanation with a simple example</h4>
When splitting the data into two groups, the pandas operations are a little hard to follow, so we explain them with a simple example.
End of explanation
"""
# extract the rows where quality is below 6 and show the first 5 rows
df1[df1['quality'] < 6].head()
# extract the rows where quality is 6 or above and show the first 5 rows
df1[df1['quality'] >= 6].head()
fig, ax = plt.subplots(1, 1)
# extract the rows where quality is below 6 and scatter them as blue circles, with volatile acidity on the x axis and alcohol on the y axis
df1[df1['quality'] < 6].plot(kind='scatter', x=u'volatile acidity', y=u'alcohol', ax=ax,
                             c='blue', alpha=0.5)
# extract the rows where quality is 6 or above and scatter them as red circles, with volatile acidity on the x axis and alcohol on the y axis
df1[df1['quality'] >= 6].plot(kind='scatter', x=u'volatile acidity', y=u'alcohol', ax=ax,
                              c='red', alpha=0.5, grid=True, figsize=(5,5))
plt.show()
# draw the distribution of volatile acidity, coloring wines with quality below 6 blue and those with quality 6 or above red
df1[df1['quality'] < 6]['volatile acidity'].hist(figsize=(3, 3), bins=20, alpha=0.5, color='blue')
df1[df1['quality'] >= 6]['volatile acidity'].hist(figsize=(3, 3), bins=20, alpha=0.5, color='red')
"""
Explanation: <h4 style="border-bottom: solid 1px black;">Back to the real data</h4>
Below, we split the wines into those with quality below 6 and those with quality 6 or above, and examine how they differ.
End of explanation
"""
# unpaired t-test
significance = 0.05
X = df1[df1['quality'] < 6]['volatile acidity'].tolist()
Y = df1[df1['quality'] >= 6]['volatile acidity'].tolist()
t, p = stats.ttest_ind(X, Y)
print( "The t value is %(t)s" %locals() )
print( "The p value is %(p)s" %locals() )
if p < significance:
    print("There is a significant difference at significance level %(significance)s" %locals())
else:
    print("There is no significant difference at significance level %(significance)s" %locals())
"""
Explanation: As shown in the figure above, wines with quality below 6 and those with quality 6 or above appear to have different distributions of volatile acidity. Let's check with a t-test whether that difference is significant.
End of explanation
"""
# quality が 6 未満のものを青色、6以上のものを赤色に彩色して pH の分布を描画
df1[df1['quality'] < 6]['pH'].hist(figsize=(3, 3), bins=20, alpha=0.5, color='blue')
df1[df1['quality'] >= 6]['pH'].hist(figsize=(3, 3), bins=20, alpha=0.5, color='red')
# 対応のないt検定
significance = 0.05
X = df1[df1['quality'] <= 5]['pH'].tolist()
Y = df1[df1['quality'] > 5]['pH'].tolist()
t, p = stats.ttest_ind(X, Y)
print("The t value is %(t)s" % locals())
print("The p value is %(p)s" % locals())
if p < significance:
    print("The difference is significant at significance level %(significance)s" % locals())
else:
    print("The difference is not significant at significance level %(significance)s" % locals())
"""
Explanation: Similarly, let's examine whether the distribution of pH differs between wines with quality below 6 and those with quality 6 or above.
End of explanation
"""
df1['class'] = [0 if i <= 5 else 1 for i in df1['quality'].tolist()]
df1.head() # show the first 5 rows
"""
Explanation: <h4 style="border-bottom: solid 1px black;">Add a column representing the class</h4>
Let's add a class column that labels wines with quality below 6 as "0" and wines with quality 6 or above as "1".
End of explanation
"""
# Decide which color to assign to each class.
color_codes = {0:'#0000FF', 1:'#FF0000'}
colors = [color_codes[x] for x in df1['class'].tolist()]
"""
Explanation: Color a row blue if its class column is 0 and red if it is 1.
End of explanation
"""
pd.plotting.scatter_matrix(df1.dropna(axis=1)[df1.columns[:10]], figsize=(20, 20), color=colors, alpha=0.5)
plt.show()
"""
Explanation: Let's draw a scatter matrix with that coloring.
End of explanation
"""
dfs = df1.apply(lambda x: (x-x.mean())/x.std(), axis=0).fillna(0) # standardize the data
pca = PCA()
pca.fit(dfs.iloc[:, :10])
# Project the data onto the principal component space (dimensionality reduction)
feature = pca.transform(dfs.iloc[:, :10])
#plt.figure(figsize=(6, 6))
plt.scatter(feature[:, 0], feature[:, 1], alpha=0.5, color=colors)
plt.title("Principal Component Analysis")
plt.xlabel("The first principal component")
plt.ylabel("The second principal component")
plt.grid()
plt.show()
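To judge how much of the variance the first two components actually capture, `explained_variance_ratio_` can be inspected. A self-contained sketch on synthetic data (the wine frame is not reused here):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.randn(100, 5)
X[:, 1] = 3.0 * X[:, 0] + 0.1 * X[:, 1]  # make two columns strongly correlated

pca = PCA()
pca.fit(X)
ratios = pca.explained_variance_ratio_
# ratios sums to 1 and is sorted in decreasing order;
# the correlated pair of columns inflates the first component's share
```

If the first two ratios add up to most of the total, a 2-D scatter of the projected data is a faithful summary; if not, the picture hides a lot of structure.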
"""
Explanation: The figure above should give you a rough sense of how each variable relates to whether the quality is good or bad. Next, let's try principal component analysis.
End of explanation
"""
X = dfs.iloc[:, :10] # explanatory variables
y = df1.iloc[:, 12] # target variable
X.head() # show the first 5 rows to check
pd.DataFrame(y).T # check the target variable; transposed for display, since a tall frame is hard to read.
"""
Explanation: The result is somewhat inconclusive. Classifying and predicting wine quality does not look easy.
<h3 STYLE="background: #c2edff;padding: 0.5em;"><a name="3">3. Separate the explanatory and target variables</a></h3>
So far we have split the wines into two quality groups. Next, let's separate the data into the target variable (here, the quality) and the explanatory variables (everything else).
End of explanation
"""
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.4) # random split into training and test data
X_train.head() # show the first 5 rows to check
pd.DataFrame(y_train).T # transposed for display, since a tall frame is hard to read.
"""
Explanation: <h3 STYLE="background: #c2edff;padding: 0.5em;"><a name="4">4. Split into training and test data</a></h3>
In machine learning, to evaluate performance we split the known data into training data (also called the teacher data or training set) and test data (also called the test set). We build a predictive model by training (learning) on the training data, then evaluate its performance by how well it predicts the test data, which was not used to build the model. Such an evaluation scheme is called cross-validation. Here we use
training data (60% of all data)
X_train : explanatory variables of the training data
y_train : target variable of the training data
test data (40% of all data)
X_test : explanatory variables of the test data
y_test : target variable of the test data
and aim to learn the relationship between X_train and y_train, then predict y_test from X_test.
End of explanation
"""
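A single random split can be lucky or unlucky, so averaging the accuracy over several splits (K-fold cross-validation) gives a steadier estimate. A minimal sketch on synthetic data (the variable names here are illustrative, not the wine data):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X_demo = rng.randn(200, 4)
y_demo = (X_demo[:, 0] + X_demo[:, 1] > 0).astype(int)  # easy synthetic target

# 5-fold cross-validation: train on 4 folds, score on the held-out fold, 5 times
scores = cross_val_score(LogisticRegression(), X_demo, y_demo, cv=5)
mean_score = scores.mean()
```

Reporting the mean and the spread of `scores` is more informative than a single train/test accuracy.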
clf = LogisticRegression() # create the model
clf.fit(X_train, y_train) # train it
# accuracy (train): how well the model predicts the data it was trained on
clf.score(X_train, y_train)
# accuracy (test): how well the model predicts data it was not trained on
clf.score(X_test, y_test)
"""
Explanation: <h3 STYLE="background: #c2edff;padding: 0.5em;"><a name="5">5. Logistic regression</a></h3>
One of the best-known machine learning models is logistic regression. Whereas linear regression analysis predicts a quantitative variable, logistic regression analysis predicts a probability of occurrence. The basic idea is the same as in linear regression, but the formula and its assumptions are modified so that the predicted values stay between 0 and 1.
End of explanation
"""
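The modification that keeps logistic-regression predictions between 0 and 1 is the logistic (sigmoid) function applied to a linear score; a minimal numpy sketch:

```python
import numpy as np

def sigmoid(z):
    """Map any real-valued score z to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-5.0, 0.0, 5.0])
p = sigmoid(z)
# probabilities stay strictly between 0 and 1, with exactly 0.5 at z = 0
```

`clf.predict_proba` exposes these probabilities directly, while `clf.predict` thresholds them at 0.5.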
y_predict = clf.predict(X_test)
pd.DataFrame(y_predict).T
# Confusion matrix showing how well the predictions matched the true answers
pd.DataFrame(confusion_matrix(y_predict, y_test), index=['predicted 0', 'predicted 1'], columns=['real 0', 'real 1'])
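Note that accuracy can be read directly off a confusion matrix as the trace divided by the total count. A quick check on a toy example (the values are illustrative; rows here are true labels and columns are predictions, scikit-learn's default orientation):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]
cm = confusion_matrix(y_true, y_pred)  # rows: true labels, columns: predicted labels
acc_from_cm = np.trace(cm) / cm.sum()  # diagonal (correct) over total
```

The off-diagonal cells tell you which kind of mistake the model makes more often, which a single accuracy number hides.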
"""
Explanation: If all you need is the accuracy figure, you are done; if you want to inspect the actual predictions, do the following.
End of explanation
"""
names = ["Logistic Regression", "Nearest Neighbors",
"Linear SVM", "Polynomial SVM", "RBF SVM", "Sigmoid SVM",
"Decision Tree","Random Forest", "AdaBoost", "Naive Bayes",
"Linear Discriminant Analysis","Quadratic Discriminant Analysis"]
classifiers = [
LogisticRegression(),
KNeighborsClassifier(),
SVC(kernel="linear"),
SVC(kernel="poly"),
SVC(kernel="rbf"),
SVC(kernel="sigmoid"),
DecisionTreeClassifier(),
RandomForestClassifier(),
AdaBoostClassifier(),
GaussianNB(),
LinearDiscriminantAnalysis(),
QuadraticDiscriminantAnalysis()]
"""
Explanation: <h3 STYLE="background: #c2edff;padding: 0.5em;"><a name="6">6. Compare several machine learning methods</a></h3>
Logistic regression is not the only machine learning method (classifier) that scikit-learn provides. Well-known alternatives include the SVM (support vector machine). Let's try several and pick the best one.
First, store the various classifiers in a list named classifiers.
End of explanation
"""
result = []
for name, clf in zip(names, classifiers): # call each of the specified classifiers in turn
    clf.fit(X_train, y_train) # train
    score1 = clf.score(X_train, y_train) # compute the accuracy (train)
    score2 = clf.score(X_test, y_test) # compute the accuracy (test)
    result.append([score1, score2]) # store the result
# sort in descending order of test accuracy
df_result = pd.DataFrame(result, columns=['train', 'test'], index=names).sort_values('test', ascending=False)
df_result # check the result
# draw a bar chart
df_result.plot(kind='bar', alpha=0.5, grid=True)
"""
Explanation: Using the training data we created earlier, let's run each of these classifiers in turn and compute the accuracy (train) and the accuracy (test).
End of explanation
"""
result = []
for trial in range(20): # repeat 20 times
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.4) # generate training and test data
    for name, clf in zip(names, classifiers): # call each of the specified classifiers in turn
        clf.fit(X_train, y_train) # train
        score1 = clf.score(X_train, y_train) # compute the accuracy (train)
        score2 = clf.score(X_test, y_test) # compute the accuracy (test)
        result.append([name, score1, score2]) # store the result
df_result = pd.DataFrame(result, columns=['classifier', 'train', 'test']) # do not sort yet this time
df_result # check the result; note that the same classifier now appears multiple times.
# Group by classifier, compute the mean accuracies, and sort in descending order of mean test accuracy
df_result_mean = df_result.groupby('classifier').mean().sort_values('test', ascending=False)
df_result_mean # check the result
# Compute the standard deviations, to be used for the error bars
errors = df_result.groupby('classifier').std()
errors # check the result
# Draw a bar chart using the means and standard deviations
df_result_mean.plot(kind='bar', alpha=0.5, grid=True, yerr=errors)
"""
Explanation: Because the training data is generated randomly, the accuracy figures change every time it is regenerated, and the ranking of the classifiers can even flip. That makes a fair performance evaluation difficult, so let's regenerate the training data many times and compute the accuracies repeatedly.
End of explanation
"""
# Exercise 5.1
"""
Explanation: We have now used a variety of classifiers to predict whether a wine's quality is good or bad. Each classifier has its own parameters, and in the examples above we used the default values for all of them. With careful parameter tuning you may be able to achieve even better predictive performance, but we will stop here for now. If you are interested, do look into it.
<h4 style="padding: 0.25em 0.5em;color: #494949;background: transparent;border-left: solid 5px #7db4e6;"><a name="4">Exercise 5.1</a></h4>
Perform the same machine-learning binary classification on the white wine data (https://raw.githubusercontent.com/chemo-wakate/tutorial-6th/master/beginner/data/winequality-white.txt) as well.
End of explanation
"""
|
Yangqing/caffe2 | caffe2/python/tutorials/create_your_own_dataset.ipynb | apache-2.0 | # First let's import some necessities
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
%matplotlib inline
import urllib2 # for downloading the dataset from the web.
import numpy as np
from matplotlib import pyplot
from StringIO import StringIO
from caffe2.python import core, utils, workspace
from caffe2.proto import caffe2_pb2
f = urllib2.urlopen('https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data')
raw_data = f.read()
print('Raw data looks like this:')
print(raw_data[:100] + '...')
# load the features to a feature matrix.
features = np.loadtxt(StringIO(raw_data), dtype=np.float32, delimiter=',', usecols=(0, 1, 2, 3))
# load the labels to a feature matrix
label_converter = lambda s : {'Iris-setosa':0, 'Iris-versicolor':1, 'Iris-virginica':2}[s]
labels = np.loadtxt(StringIO(raw_data), dtype=np.int, delimiter=',', usecols=(4,), converters={4: label_converter})
"""
Explanation: How do I create my own dataset?
So Caffe2 uses a binary DB format to store the data that we would like to train models on. A Caffe2 DB is a glorified name for a key-value storage where the keys are usually randomized so that the batches are approximately i.i.d. The values are the real stuff here: they contain the serialized strings of the specific data formats that you would like your training algorithm to ingest. So, the stored DB would look (semantically) like this:
key1 value1
key2 value2
key3 value3
...
A DB treats the keys and values as strings, but you probably want structured contents. One way to do this is to use a TensorProtos protocol buffer: it essentially wraps Tensors, aka multi-dimensional arrays, together with the tensor data type and shape information. Then, one can use the TensorProtosDBInput operator to load the data in an SGD training fashion.
Here, we will show you one example of how to create your own dataset. To this end, we will use the UCI Iris dataset - which was a very popular classical dataset for classifying Iris flowers. It contains 4 real-valued features representing the dimensions of the flower, and classifies things into 3 types of Iris flowers. The dataset can be downloaded here.
End of explanation
"""
random_index = np.random.permutation(150)
features = features[random_index]
labels = labels[random_index]
train_features = features[:100]
train_labels = labels[:100]
test_features = features[100:]
test_labels = labels[100:]
# Let's plot the first two features together with the label.
# Remember, while we are plotting the testing feature distribution
# here too, you might not be supposed to do so in real research,
# because one should not peek into the testing data.
legend = ['rx', 'b+', 'go']
pyplot.title("Training data distribution, feature 0 and 1")
for i in range(3):
pyplot.plot(train_features[train_labels==i, 0], train_features[train_labels==i, 1], legend[i])
pyplot.figure()
pyplot.title("Testing data distribution, feature 0 and 1")
for i in range(3):
pyplot.plot(test_features[test_labels==i, 0], test_features[test_labels==i, 1], legend[i])
"""
Explanation: Before we do training, one thing that is often beneficial is to separate the dataset into training and testing. In this case, let's randomly shuffle the data, use the first 100 data points to do training, and the remaining 50 to do testing. For more sophisticated approaches, you can use e.g. cross validation to separate your dataset into multiple training and testing splits. Read more about cross validation here.
End of explanation
"""
# First, let's see how one can construct a TensorProtos protocol buffer from numpy arrays.
feature_and_label = caffe2_pb2.TensorProtos()
feature_and_label.protos.extend([
utils.NumpyArrayToCaffe2Tensor(features[0]),
utils.NumpyArrayToCaffe2Tensor(labels[0])])
print('This is what the tensor proto looks like for a feature and its label:')
print(str(feature_and_label))
print('This is the compact string that gets written into the db:')
print(feature_and_label.SerializeToString())
# Now, actually write the db.
def write_db(db_type, db_name, features, labels):
db = core.C.create_db(db_type, db_name, core.C.Mode.write)
transaction = db.new_transaction()
for i in range(features.shape[0]):
feature_and_label = caffe2_pb2.TensorProtos()
feature_and_label.protos.extend([
utils.NumpyArrayToCaffe2Tensor(features[i]),
utils.NumpyArrayToCaffe2Tensor(labels[i])])
transaction.put(
            'train_%03d' % i,
feature_and_label.SerializeToString())
# Close the transaction, and then close the db.
del transaction
del db
write_db("minidb", "iris_train.minidb", train_features, train_labels)
write_db("minidb", "iris_test.minidb", test_features, test_labels)
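As an aside on the key format: zero-padding the index (`%03d`) matters if anything ever relies on lexicographic key order. A quick check:

```python
# zero-padded keys keep lexicographic order aligned with numeric order
padded = ['train_{:03d}'.format(i) for i in (2, 10, 100)]
unpadded = ['train_{}'.format(i) for i in (2, 10, 100)]
# 'train_002' < 'train_010' < 'train_100' lexicographically,
# while without padding 'train_10' sorts before 'train_2'
```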
"""
Explanation: Now, as promised, let's put things into a Caffe2 DB. In this DB, what would happen is that we will use "train_xxx" as the key, and use a TensorProtos object to store two tensors for each data point: one as the feature and one as the label. We will use Caffe2's Python DB interface to do so.
End of explanation
"""
net_proto = core.Net("example_reader")
dbreader = net_proto.CreateDB([], "dbreader", db="iris_train.minidb", db_type="minidb")
net_proto.TensorProtosDBInput([dbreader], ["X", "Y"], batch_size=16)
print("The net looks like this:")
print(str(net_proto.Proto()))
workspace.CreateNet(net_proto)
# Let's run it to get batches of features.
workspace.RunNet(net_proto.Proto().name)
print("The first batch of feature is:")
print(workspace.FetchBlob("X"))
print("The first batch of label is:")
print(workspace.FetchBlob("Y"))
# Let's run again.
workspace.RunNet(net_proto.Proto().name)
print("The second batch of feature is:")
print(workspace.FetchBlob("X"))
print("The second batch of label is:")
print(workspace.FetchBlob("Y"))
"""
Explanation: Now, let's create a very simple network that only consists of one single TensorProtosDBInput operator, to showcase how we load data from the DB that we created. For training, you might want to do something more complex: creating a network, train it, get the model, and run the prediction service. To this end you can look at the MNIST tutorial for details.
End of explanation
"""
|
lmoresi/UoM-VIEPS-Intro-to-Python | Notebooks/SphericalMeshing/SphericalTriangulations/Ex7-Refinement-of-Triangulations.ipynb | mit | import stripy as stripy
import numpy as np
"""
Explanation: Example 7 - Refining a triangulation
We have seen how the standard meshes can be uniformly refined to finer resolution. The routines used for this task are available to the stripy user for non-uniform refinement as well.
Notebook contents
Uniform meshes
Refinement strategies
Visualisation
Targetted refinement
Visualisation
End of explanation
"""
ico0 = stripy.spherical_meshes.icosahedral_mesh(refinement_levels=0)
ico1 = stripy.spherical_meshes.icosahedral_mesh(refinement_levels=1)
ico2 = stripy.spherical_meshes.icosahedral_mesh(refinement_levels=2)
ico3 = stripy.spherical_meshes.icosahedral_mesh(refinement_levels=3)
ico4 = stripy.spherical_meshes.icosahedral_mesh(refinement_levels=4)
ico5 = stripy.spherical_meshes.icosahedral_mesh(refinement_levels=5)
ico6 = stripy.spherical_meshes.icosahedral_mesh(refinement_levels=6)
ico7 = stripy.spherical_meshes.icosahedral_mesh(refinement_levels=7)
print("Size of mesh - 1 {}".format(ico1.points.shape[0]))
print("Size of mesh - 2 {}".format(ico2.points.shape[0]))
print("Size of mesh - 3 {}".format(ico3.points.shape[0]))
print("Size of mesh - 4 {}".format(ico4.points.shape[0]))
print("Size of mesh - 5 {}".format(ico5.points.shape[0]))
print("Size of mesh - 6 {}".format(ico6.points.shape[0]))
print("Size of mesh - 7 {}".format(ico7.points.shape[0]))
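The printed sizes follow a closed form: each refinement level bisects every edge, so an icosahedral mesh at refinement level r has 10·4^r + 2 vertices. A quick check:

```python
def icosahedral_vertex_count(r):
    """Vertices of an icosahedral mesh after r edge-bisection refinements."""
    return 10 * 4 ** r + 2

counts = [icosahedral_vertex_count(r) for r in range(8)]
# [12, 42, 162, 642, 2562, 10242, 40962, 163842]
```

Note that the grade-5 count, 10242, is the familiar fsaverage hemisphere size used elsewhere in surface-based analyses.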
"""
Explanation: Uniform meshes by refinement
The refinement_levels parameter of the stripy meshes runs repeated refinement loops: each pass determines the bisection points of all the existing edges in the triangulation and then creates a new triangulation that includes these points as well as the original ones. The same refinement operations can also be used for non-uniform refinement.
End of explanation
"""
mlons, mlats = ico3.midpoint_refine_triangulation_by_vertices(vertices=[1,2,3,4,5,6,7,8,9,10])
ico3mv = stripy.sTriangulation(mlons, mlats)
mlons, mlats = ico3.edge_refine_triangulation_by_vertices(vertices=[1,2,3,4,5,6,7,8,9,10])
ico3ev = stripy.sTriangulation(mlons, mlats)
mlons, mlats = ico3.centroid_refine_triangulation_by_vertices(vertices=[1,2,3,4,5,6,7,8,9,10])
ico3cv = stripy.sTriangulation(mlons, mlats)
mlons, mlats = ico3.edge_refine_triangulation_by_triangles(triangles=[1,2,3,4,5,6,7,8,9,10])
ico3et = stripy.sTriangulation(mlons, mlats)
mlons, mlats = ico3.centroid_refine_triangulation_by_triangles(triangles=[1,2,3,4,5,6,7,8,9,10])
ico3ct = stripy.sTriangulation(mlons, mlats)
print ico3mv.npoints, ico3mv.simplices.shape[0]
print ico3ev.npoints, ico3ev.simplices.shape[0]
print ico3cv.npoints, ico3cv.simplices.shape[0]
print ico3et.npoints, ico3et.simplices.shape[0]
print ico3ct.npoints, ico3ct.simplices.shape[0]
"""
Explanation: Refinement strategies
Five refinement strategies:
Bisect all segments connected to a given node
Refine all triangles connected to a given node by adding a point at the centroid or bisecting all edges
Refune a given triangle by adding a point at the centroid or bisecting all edges
These are provided as follows:
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
import gdal
import cartopy
import cartopy.crs as ccrs
def mesh_fig(mesh, meshR, name):
fig = plt.figure(figsize=(10, 10), facecolor="none")
ax = plt.subplot(111, projection=ccrs.Orthographic(central_longitude=0.0, central_latitude=0.0, globe=None))
ax.coastlines(color="lightgrey")
ax.set_global()
generator = mesh
refined = meshR
lons0 = np.degrees(generator.lons)
lats0 = np.degrees(generator.lats)
lonsR = np.degrees(refined.lons)
latsR = np.degrees(refined.lats)
lst = refined.lst
lptr = refined.lptr
ax.scatter(lons0, lats0, color="Red",
marker="o", s=150.0, transform=ccrs.Geodetic())
ax.scatter(lonsR, latsR, color="DarkBlue",
marker="o", s=50.0, transform=ccrs.Geodetic())
segs = refined.identify_segments()
for s1, s2 in segs:
ax.plot( [lonsR[s1], lonsR[s2]],
[latsR[s1], latsR[s2]],
linewidth=0.5, color="black", transform=ccrs.Geodetic())
fig.savefig(name, dpi=250, transparent=True)
return
mesh_fig(ico3, ico3mv, "EdgeByVertex1to10" )
mesh_fig(ico3, ico3ev, "EdgeByVertexT1to10" )
mesh_fig(ico3, ico3cv, "CentroidByVertexT1to10" )
mesh_fig(ico3, ico3et, "EdgeByTriangle1to10" )
mesh_fig(ico3, ico3ct, "CentroidByTriangle1to10" )
"""
Explanation: Visualisation of refinement strategies
End of explanation
"""
points = np.array([[ 0.03, 0.035], [0.05, 0.055]]).T
triangulations = [ico1]
nearest, distances = triangulations[-1].nearest_vertex(points[:,0], points[:,1])
max_depth = 15
while nearest[0] == nearest[1] and max_depth > 0:
lons, lats = triangulations[-1].centroid_refine_triangulation_by_vertices(vertices=nearest[0])
new_triangulation = stripy.sTriangulation(lons, lats)
nearest, distances = new_triangulation.nearest_vertex(points[:,0], points[:,1])
triangulations.append(new_triangulation)
max_depth -= 1
print "refinement_steps =", len(triangulations)
centroid_triangulations = triangulations[:]
triangulations = [ico1]
nearest, distances = triangulations[-1].nearest_vertex(points[:,0], points[:,1])
max_depth = 15
while nearest[0] == nearest[1] and max_depth > 0:
lons, lats = triangulations[-1].edge_refine_triangulation_by_vertices(vertices=nearest[0])
new_triangulation = stripy.sTriangulation(lons, lats)
nearest, distances = new_triangulation.nearest_vertex(points[:,0], points[:,1])
triangulations.append(new_triangulation)
max_depth -= 1
print "refinement_steps =", len(triangulations)
edge_triangulations = triangulations[:]
triangulations = [ico1]
in_triangle = triangulations[-1].containing_triangle(points[:,0], points[:,1])
max_depth = 100
while in_triangle[0] == in_triangle[1] and max_depth > 0:
lons, lats = triangulations[-1].edge_refine_triangulation_by_triangles(in_triangle[0])
new_triangulation = stripy.sTriangulation(lons, lats)
in_triangle = new_triangulation.containing_triangle(points[:,0], points[:,1])
triangulations.append(new_triangulation)
print in_triangle
if in_triangle.shape[0] == 0:
break
max_depth -= 1
print "refinement_steps =", len(triangulations)
edge_t_triangulations = triangulations[:]
triangulations = [ico1]
in_triangle = triangulations[-1].containing_triangle(points[:,0], points[:,1])
max_depth = 100
while in_triangle[0] == in_triangle[1] and max_depth > 0:
lons, lats = triangulations[-1].centroid_refine_triangulation_by_triangles(in_triangle[0])
new_triangulation = stripy.sTriangulation(lons, lats)
in_triangle = new_triangulation.containing_triangle(points[:,0], points[:,1])
triangulations.append(new_triangulation)
print in_triangle
if in_triangle.shape[0] == 0:
break
max_depth -= 1
print "refinement_steps =", len(triangulations)
centroid_t_triangulations = triangulations[:]
"""
Explanation: Targetted refinement
Here we refine a triangulation to a specific criterion - resolving two points in distinct triangles or with distinct nearest neighbour vertices.
End of explanation
"""
import lavavu
## The four different triangulation strategies
t1 = edge_triangulations[-1]
t2 = edge_t_triangulations[-1]
t3 = centroid_triangulations[-1]
t4 = centroid_t_triangulations[-1]
## Fire up the viewer
lv = lavavu.Viewer(border=False, background="#FFFFFF", resolution=[1000,600], near=-10.0)
## Add the nodes to mark the original triangulation
nodes = lv.points("nodes", pointsize=10.0, pointtype="shiny", colour="#448080", opacity=0.75)
nodes.vertices(ico1.points*1.01)
nodes2 = lv.points("SplitPoints", pointsize=2.0, pointtype="shiny", colour="#FF3300", opacity=1.0)
nodes2.vertices(np.array(stripy.spherical.lonlat2xyz(points[:,0], points[:,1])).T * 1.01)
##
tris1w = lv.triangles("t1w", wireframe=True, colour="#444444", opacity=0.8)
tris1w.vertices(t1.points)
tris1w.indices(t1.simplices)
tris1s = lv.triangles("t1s", wireframe=False, colour="#77ff88", opacity=0.8)
tris1s.vertices(t1.points*0.999)
tris1s.indices(t1.simplices)
tris2w = lv.triangles("t2w", wireframe=True, colour="#444444", opacity=0.8)
tris2w.vertices(t2.points)
tris2w.indices(t2.simplices)
tris2s = lv.triangles("t2s", wireframe=False, colour="#77ff88", opacity=0.8)
tris2s.vertices(t2.points*0.999)
tris2s.indices(t2.simplices)
tris3w = lv.triangles("t3w", wireframe=True, colour="#444444", opacity=0.8)
tris3w.vertices(t3.points)
tris3w.indices(t3.simplices)
tris3s = lv.triangles("t3s", wireframe=False, colour="#77ff88", opacity=0.8)
tris3s.vertices(t3.points*0.999)
tris3s.indices(t3.simplices)
tris4w = lv.triangles("t4w", wireframe=True, colour="#444444", opacity=0.8)
tris4w.vertices(t4.points)
tris4w.indices(t4.simplices)
tris4s = lv.triangles("t4s", wireframe=False, colour="#77ff88", opacity=0.8)
tris4s.vertices(t4.points*0.999)
tris4s.indices(t4.simplices)
lv.hide("t1s")
lv.hide("t1w")
lv.hide("t2s")
lv.hide("t2w")
lv.hide("t4s")
lv.hide("t4w")
lv.translation(0.0, 0.0, -3.748)
lv.rotation(37.5, -90.0, -37.5)
lv.control.Panel()
lv.control.Button(command="hide triangles; show t1s; show t1w; redraw", label="EBV")
lv.control.Button(command="hide triangles; show t2s; show t2w; redraw", label="EBT")
lv.control.Button(command="hide triangles; show t3s; show t3w; redraw", label="CBV")
lv.control.Button(command="hide triangles; show t4s; show t4w; redraw", label="CBT")
lv.control.show()
import matplotlib.pyplot as plt
%matplotlib inline
import gdal
import cartopy
import cartopy.crs as ccrs
def mesh_fig(mesh, meshR, name):
fig = plt.figure(figsize=(10, 10), facecolor="none")
ax = plt.subplot(111, projection=ccrs.Orthographic(central_longitude=0.0, central_latitude=0.0, globe=None))
ax.coastlines(color="lightgrey")
ax.set_global()
generator = mesh
refined = meshR
lons0 = np.degrees(generator.lons)
lats0 = np.degrees(generator.lats)
lonsR = np.degrees(refined.lons)
latsR = np.degrees(refined.lats)
ax.scatter(lons0, lats0, color="Red",
marker="o", s=150.0, transform=ccrs.Geodetic())
ax.scatter(lonsR, latsR, color="DarkBlue",
marker="o", s=50.0, transform=ccrs.Geodetic())
ax.scatter(np.degrees(points[:,0]), np.degrees(points[:,1]), marker="s", s=50,
color="#885500", transform=ccrs.Geodetic())
segs = refined.identify_segments()
for s1, s2 in segs:
ax.plot( [lonsR[s1], lonsR[s2]],
[latsR[s1], latsR[s2]],
linewidth=0.5, color="black", transform=ccrs.Geodetic())
fig.savefig(name, dpi=250, transparent=True)
return
mesh_fig(edge_triangulations[0], edge_triangulations[-1], "EdgeByVertex" )
T = edge_triangulations[-1]
E = np.array(T.edge_lengths()).T
A = np.array(T.areas()).T
equant = np.max(E, axis=1) / np.min(E, axis=1)
size_ratio = np.sqrt(np.max(A) / np.min(A))
print "EBV", T.simplices.shape[0], equant.max(), equant.min(), size_ratio
mesh_fig(edge_t_triangulations[0], edge_t_triangulations[-1], "EdgeByTriangle" )
T = edge_t_triangulations[-1]
E = np.array(T.edge_lengths()).T
A = np.array(T.areas()).T
equant = np.max(E, axis=1) / np.min(E, axis=1)
size_ratio = np.sqrt(np.max(A) / np.min(A))
print "EBT", T.simplices.shape[0], equant.max(), equant.min(), size_ratio
mesh_fig(centroid_triangulations[0], centroid_triangulations[-1], "CentroidByVertex" )
T = centroid_triangulations[-1]
E = np.array(T.edge_lengths()).T
A = np.array(T.areas()).T
equant = np.max(E, axis=1) / np.min(E, axis=1)
size_ratio = np.sqrt(np.max(A) / np.min(A))
print "CBV", T.simplices.shape[0], equant.max(), equant.min(), size_ratio
mesh_fig(centroid_t_triangulations[0], centroid_t_triangulations[-1], "CentroidByTriangle" )
T = centroid_t_triangulations[-1]
E = np.array(T.edge_lengths()).T
A = np.array(T.areas()).T
equant = np.max(E, axis=1) / np.min(E, axis=1)
size_ratio = np.sqrt(np.max(A) / np.min(A))
print "CBT", T.simplices.shape[0], equant.max(), equant.min(), size_ratio
"""
Explanation: Visualisation of targetted refinement
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.15/_downloads/plot_stats_cluster_spatio_temporal_2samp.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Eric Larson <larson.eric.d@gmail.com>
# License: BSD (3-clause)
import os.path as op
import numpy as np
from scipy import stats as stats
import mne
from mne import spatial_tris_connectivity, grade_to_tris
from mne.stats import spatio_temporal_cluster_test, summarize_clusters_stc
from mne.datasets import sample
print(__doc__)
"""
Explanation: 2 samples permutation test on source data with spatio-temporal clustering
Tests if the source space data are significantly different between
2 groups of subjects (simulated here using one subject's data).
The multiple comparisons problem is addressed with a cluster-level
permutation test across space and time.
End of explanation
"""
data_path = sample.data_path()
stc_fname = data_path + '/MEG/sample/sample_audvis-meg-lh.stc'
subjects_dir = data_path + '/subjects'
# Load stc to in common cortical space (fsaverage)
stc = mne.read_source_estimate(stc_fname)
stc.resample(50, npad='auto')
stc = mne.morph_data('sample', 'fsaverage', stc, grade=5, smooth=20,
subjects_dir=subjects_dir)
n_vertices_fsave, n_times = stc.data.shape
tstep = stc.tstep
n_subjects1, n_subjects2 = 7, 9
print('Simulating data for %d and %d subjects.' % (n_subjects1, n_subjects2))
# Let's make sure our results replicate, so set the seed.
np.random.seed(0)
X1 = np.random.randn(n_vertices_fsave, n_times, n_subjects1) * 10
X2 = np.random.randn(n_vertices_fsave, n_times, n_subjects2) * 10
X1[:, :, :] += stc.data[:, :, np.newaxis]
# make the activity bigger for the second set of subjects
X2[:, :, :] += 3 * stc.data[:, :, np.newaxis]
# We want to compare the overall activity levels for each subject
X1 = np.abs(X1) # only magnitude
X2 = np.abs(X2) # only magnitude
"""
Explanation: Set parameters
End of explanation
"""
print('Computing connectivity.')
connectivity = spatial_tris_connectivity(grade_to_tris(5))
# Note that X needs to be a list of multi-dimensional array of shape
# samples (subjects_k) x time x space, so we permute dimensions
X1 = np.transpose(X1, [2, 1, 0])
X2 = np.transpose(X2, [2, 1, 0])
X = [X1, X2]
# Now let's actually do the clustering. This can take a long time...
# Here we set the threshold quite high to reduce computation.
p_threshold = 0.0001
f_threshold = stats.distributions.f.ppf(1. - p_threshold / 2.,
n_subjects1 - 1, n_subjects2 - 1)
print('Clustering.')
T_obs, clusters, cluster_p_values, H0 = clu =\
spatio_temporal_cluster_test(X, connectivity=connectivity, n_jobs=1,
threshold=f_threshold)
# Now select the clusters that are sig. at p < 0.05 (note that this value
# is multiple-comparisons corrected).
good_cluster_inds = np.where(cluster_p_values < 0.05)[0]
"""
Explanation: Compute statistic
To use an algorithm optimized for spatio-temporal clustering, we
just pass the spatial connectivity matrix (instead of spatio-temporal)
End of explanation
"""
print('Visualizing clusters.')
# Now let's build a convenient representation of each cluster, where each
# cluster becomes a "time point" in the SourceEstimate
fsave_vertices = [np.arange(10242), np.arange(10242)]
stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,
vertices=fsave_vertices,
subject='fsaverage')
# Let's actually plot the first "time point" in the SourceEstimate, which
# shows all the clusters, weighted by duration
subjects_dir = op.join(data_path, 'subjects')
# blue blobs are for condition A != condition B
brain = stc_all_cluster_vis.plot('fsaverage', hemi='both', colormap='mne',
views='lateral', subjects_dir=subjects_dir,
time_label='Duration significant (ms)')
brain.save_image('clusters.png')
"""
Explanation: Visualize the clusters
End of explanation
"""
|
GoogleCloudPlatform/mlops-on-gcp | workshops/kfp-caip-sklearn/lab-01-caip-containers/lab-01.ipynb | apache-2.0 | import json
import os
import numpy as np
import pandas as pd
import pickle
import uuid
import time
import tempfile
from googleapiclient import discovery
from googleapiclient import errors
from google.cloud import bigquery
from jinja2 import Template
from kfp.components import func_to_container_op
from typing import NamedTuple
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
"""
Explanation: Using custom containers with AI Platform Training
Learning Objectives:
1. Learn how to create a training and a validation split with BigQuery
2. Learn how to wrap a machine learning model into a Docker container and train it on AI Platform
3. Learn how to use the hyperparameter tuning engine on Google Cloud to find the best hyperparameters
4. Learn how to deploy a trained machine learning model on Google Cloud as a REST API and query it
In this lab, you develop a multi-class classification model, package the model as a docker image, and run on AI Platform Training as a training application. The training application trains a multi-class classification model that predicts the type of forest cover from cartographic data. The dataset used in the lab is based on Covertype Data Set from UCI Machine Learning Repository.
Scikit-learn is one of the most useful libraries for machine learning in Python. The training code uses scikit-learn for data pre-processing and modeling.
The code is instrumented using the hypertune package so it can be used with an AI Platform hyperparameter tuning job to search for the best combination of hyperparameter values by optimizing the metric you specify.
End of explanation
"""
%pip install gcsfs==0.8
"""
Explanation: Run the command in the cell below to install the gcsfs package.
End of explanation
"""
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
DATASET_ID='covertype_dataset'
DATASET_LOCATION='US'
TABLE_ID='covertype'
DATA_SOURCE='gs://workshop-datasets/covertype/small/dataset.csv'
SCHEMA='Elevation:INTEGER,Aspect:INTEGER,Slope:INTEGER,Horizontal_Distance_To_Hydrology:INTEGER,Vertical_Distance_To_Hydrology:INTEGER,Horizontal_Distance_To_Roadways:INTEGER,Hillshade_9am:INTEGER,Hillshade_Noon:INTEGER,Hillshade_3pm:INTEGER,Horizontal_Distance_To_Fire_Points:INTEGER,Wilderness_Area:STRING,Soil_Type:STRING,Cover_Type:INTEGER'
"""
Explanation: Prepare lab dataset
Set environment variable so that we can use them throughout the entire lab.
The pipeline ingests data from BigQuery. The cell below uploads the Covertype dataset to BigQuery.
End of explanation
"""
!bq --location=$DATASET_LOCATION --project_id=$PROJECT_ID mk --dataset $DATASET_ID
!bq --project_id=$PROJECT_ID --dataset_id=$DATASET_ID load \
--source_format=CSV \
--skip_leading_rows=1 \
--replace \
$TABLE_ID \
$DATA_SOURCE \
$SCHEMA
"""
Explanation: Next, create the BigQuery dataset and upload the Covertype csv data into a table.
End of explanation
"""
!gsutil ls
"""
Explanation: Configure environment settings
Set location paths, connection strings, and other environment settings. Make sure to update REGION and ARTIFACT_STORE with the settings reflecting your lab environment.
REGION - the compute region for AI Platform Training and Prediction
ARTIFACT_STORE - the Cloud Storage bucket created during installation of AI Platform Pipelines. The bucket name starts with the qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default prefix.
Run gsutil ls without URLs to list all of the Cloud Storage buckets under your default project ID.
End of explanation
"""
REGION = 'us-central1'
ARTIFACT_STORE = 'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default' # TO DO: REPLACE WITH YOUR ARTIFACT_STORE NAME
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
DATA_ROOT='{}/data'.format(ARTIFACT_STORE)
JOB_DIR_ROOT='{}/jobs'.format(ARTIFACT_STORE)
TRAINING_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'training', 'dataset.csv')
VALIDATION_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'validation', 'dataset.csv')
"""
Explanation: HINT: For ARTIFACT_STORE, copy the bucket name which starts with the qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default prefix from the previous cell output.
Your copied value should look like 'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default'.
End of explanation
"""
%%bigquery
SELECT *
FROM `covertype_dataset.covertype`
"""
Explanation: Explore the Covertype dataset
Run the query statement below to scan covertype_dataset.covertype table in BigQuery and return the computed result rows.
End of explanation
"""
!bq query \
-n 0 \
--destination_table covertype_dataset.training \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'
"""
Explanation: Create training and validation splits
Use BigQuery to sample training and validation splits and save them to Cloud Storage.
Create a training split
Run the query below in order to have repeatable sampling of the data in BigQuery. Note that FARM_FINGERPRINT() is used on the field that you are going to use to split your data. It creates a training split that takes 40% of the data (hash buckets 1 through 4 out of 10) using the bq command and exports this split into the BigQuery table covertype_dataset.training.
End of explanation
"""
!bq extract \
--destination_format CSV \
covertype_dataset.training \
$TRAINING_FILE_PATH
"""
Explanation: Use the bq extract command to export the BigQuery training table to GCS at $TRAINING_FILE_PATH.
End of explanation
"""
!bq query \
-n 0 \
--destination_table covertype_dataset.validation \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (8)'
!bq extract \
--destination_format CSV \
covertype_dataset.validation \
$VALIDATION_FILE_PATH
import pandas as pd

df_train = pd.read_csv(TRAINING_FILE_PATH)
df_validation = pd.read_csv(VALIDATION_FILE_PATH)
print(df_train.shape)
print(df_validation.shape)
"""
Explanation: Create a validation split
Run the query below to create a validation split that takes 10% of the data using the bq command and export this split into the BigQuery table covertype_dataset.validation.
In the second cell, use the bq command to export that BigQuery validation table to GCS at $VALIDATION_FILE_PATH.
End of explanation
"""
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log', tol=1e-3))
])
"""
Explanation: Develop a training application
Configure the sklearn training pipeline.
The training pipeline preprocesses data by standardizing all numeric features using sklearn.preprocessing.StandardScaler and encoding all categorical features using sklearn.preprocessing.OneHotEncoder. It uses stochastic gradient descent linear classifier (SGDClassifier) for modeling.
End of explanation
"""
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
"""
Explanation: Convert all numeric features to float64
To avoid warning messages from StandardScaler, all numeric features are converted to float64.
End of explanation
"""
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)
pipeline.fit(X_train, y_train)
"""
Explanation: Run the pipeline locally.
End of explanation
"""
accuracy = pipeline.score(X_validation, y_validation)
print(accuracy)
"""
Explanation: Calculate the trained model's accuracy.
End of explanation
"""
import os

TRAINING_APP_FOLDER = 'training_app'
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
"""
Explanation: Prepare the hyperparameter tuning application.
Since the training run on this dataset is computationally expensive, you can benefit from running a distributed hyperparameter tuning job on AI Platform Training.
End of explanation
"""
%%writefile {TRAINING_APP_FOLDER}/train.py
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import subprocess
import sys
import fire
import pickle
import numpy as np
import pandas as pd
import hypertune
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
def train_evaluate(job_dir, training_dataset_path, validation_dataset_path, alpha, max_iter, hptune):
df_train = pd.read_csv(training_dataset_path)
df_validation = pd.read_csv(validation_dataset_path)
if not hptune:
df_train = pd.concat([df_train, df_validation])
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log',tol=1e-3))
])
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter))
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter)
pipeline.fit(X_train, y_train)
if hptune:
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
accuracy = pipeline.score(X_validation, y_validation)
print('Model accuracy: {}'.format(accuracy))
# Log it with hypertune
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='accuracy',
metric_value=accuracy
)
# Save the model
if not hptune:
model_filename = 'model.pkl'
with open(model_filename, 'wb') as model_file:
pickle.dump(pipeline, model_file)
gcs_model_path = "{}/{}".format(job_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path], stderr=sys.stdout)
print("Saved model in: {}".format(gcs_model_path))
if __name__ == "__main__":
fire.Fire(train_evaluate)
"""
Explanation: Write the tuning script.
Notice the use of the hypertune package to report the accuracy optimization metric to AI Platform hyperparameter tuning service.
End of explanation
"""
%%writefile {TRAINING_APP_FOLDER}/Dockerfile
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
WORKDIR /app
COPY train.py .
ENTRYPOINT ["python", "train.py"]
"""
Explanation: Package the script into a docker image.
Notice that we are installing specific versions of scikit-learn and pandas in the training image. This is done to make sure that the training runtime is aligned with the serving runtime. Later in the notebook you will deploy the model to AI Platform Prediction, using the 1.15 version of AI Platform Prediction runtime.
Make sure to update the URI for the base image so that it points to your project's Container Registry.
End of explanation
"""
IMAGE_NAME='trainer_image'
IMAGE_TAG='latest'
IMAGE_URI='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, IMAGE_TAG)
!gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDER
"""
Explanation: Build the docker image.
You use Cloud Build to build the image and push it to your project's Container Registry. As you use the remote cloud service to build the image, you don't need a local installation of Docker.
End of explanation
"""
%%writefile {TRAINING_APP_FOLDER}/hptuning_config.yaml
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
trainingInput:
hyperparameters:
goal: MAXIMIZE
maxTrials: 4
maxParallelTrials: 4
hyperparameterMetricTag: accuracy
enableTrialEarlyStopping: TRUE
params:
- parameterName: max_iter
type: DISCRETE
discreteValues: [
200,
500
]
- parameterName: alpha
type: DOUBLE
minValue: 0.00001
maxValue: 0.001
scaleType: UNIT_LINEAR_SCALE
"""
Explanation: Submit an AI Platform hyperparameter tuning job
Create the hyperparameter configuration file.
Recall that the training code uses SGDClassifier. The training application has been designed to accept two hyperparameters that control SGDClassifier:
- Max iterations
- Alpha
The file below configures AI Platform hyperparameter tuning to run up to 4 trials on up to 4 nodes, choosing from two discrete values of max_iter and from the linear range between 0.00001 and 0.001 for alpha.
End of explanation
"""
import time

JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=$REGION \
--job-dir=$JOB_DIR \
--master-image-uri=$IMAGE_URI \
--scale-tier=$SCALE_TIER \
--config $TRAINING_APP_FOLDER/hptuning_config.yaml \
-- \
--training_dataset_path=$TRAINING_FILE_PATH \
--validation_dataset_path=$VALIDATION_FILE_PATH \
--hptune
"""
Explanation: Start the hyperparameter tuning job.
Use the gcloud command to start the hyperparameter tuning job.
End of explanation
"""
!gcloud ai-platform jobs describe $JOB_NAME
!gcloud ai-platform jobs stream-logs $JOB_NAME
"""
Explanation: Monitor the job.
You can monitor the job using Google Cloud console or from within the notebook using gcloud commands.
End of explanation
"""
from googleapiclient import discovery
from googleapiclient import errors

ml = discovery.build('ml', 'v1')
job_id = 'projects/{}/jobs/{}'.format(PROJECT_ID, JOB_NAME)
request = ml.projects().jobs().get(name=job_id)
try:
response = request.execute()
except errors.HttpError as err:
print(err)
except:
print("Unexpected error")
response
"""
Explanation: NOTE: The above AI platform job stream logs will take approximately 5~10 minutes to display.
Retrieve HP-tuning results.
After the job completes, you can review the results using the Google Cloud Console or programmatically by calling the AI Platform Training REST endpoint.
End of explanation
"""
response['trainingOutput']['trials'][0]
"""
Explanation: The returned run results are sorted by the value of the optimization metric. The best run is the first item in the returned list.
End of explanation
"""
alpha = response['trainingOutput']['trials'][0]['hyperparameters']['alpha']
max_iter = response['trainingOutput']['trials'][0]['hyperparameters']['max_iter']
JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=$REGION \
--job-dir=$JOB_DIR \
--master-image-uri=$IMAGE_URI \
--scale-tier=$SCALE_TIER \
-- \
--training_dataset_path=$TRAINING_FILE_PATH \
--validation_dataset_path=$VALIDATION_FILE_PATH \
--alpha=$alpha \
--max_iter=$max_iter \
--nohptune
!gcloud ai-platform jobs stream-logs $JOB_NAME
"""
Explanation: Retrain the model with the best hyperparameters
You can now retrain the model using the best hyperparameters and using combined training and validation splits as a training dataset.
Configure and run the training job
End of explanation
"""
!gsutil ls $JOB_DIR
"""
Explanation: NOTE: The above AI platform job stream logs will take approximately 5~10 minutes to display.
Examine the training output
The training script saved the trained model as the 'model.pkl' in the JOB_DIR folder on Cloud Storage.
End of explanation
"""
model_name = 'forest_cover_classifier'
labels = "task=classifier,domain=forestry"
!gcloud ai-platform models create $model_name \
--regions=$REGION \
--labels=$labels
"""
Explanation: Deploy the model to AI Platform Prediction
Create a model resource
Use the gcloud command to create a model with model_name in $REGION tagged with labels.
End of explanation
"""
model_version = 'v01'
!gcloud ai-platform versions create {model_version} \
--model={model_name} \
--origin=$JOB_DIR \
--runtime-version=1.15 \
--framework=scikit-learn \
--python-version=3.7\
--region global
"""
Explanation: Create a model version
Use the gcloud command to create a version of the model.
End of explanation
"""
import json

input_file = 'serving_instances.json'
with open(input_file, 'w') as f:
for index, row in X_validation.head().iterrows():
f.write(json.dumps(list(row.values)))
f.write('\n')
!cat $input_file
"""
Explanation: Serve predictions
Prepare the input file with JSON-formatted instances.
End of explanation
"""
!gcloud ai-platform predict \
--model $model_name \
--version $model_version \
--json-instances $input_file\
--region global
"""
Explanation: Invoke the model
Use the gcloud command to send the data in $input_file to your model deployed as a REST API.
End of explanation
"""
|
agile-geoscience/xlines | notebooks/05_Read_and_write_SHP.ipynb | apache-2.0 | import numpy as np
import fiona
import matplotlib.pyplot as plt
import folium
import pprint
with fiona.open('../data/offshore_wells_2011_Geographic_NAD27.shp') as src:
pprint.pprint(src[0])
"""
Explanation: x lines of Python
Read and write SHP files
This notebook goes with the blog post of the same name, published on 10 August 2017.
We're going to load a shapefile containing some well data. Then we'll change its CRS, make a new attribute, and save a new shapefile.
We'll lean on geopandas for help, but we'll also inspect the file with fiona, a lower-level library that geopandas uses under the hood.
Install geopandas and its dependencies (like gdal, proj, and fiona) with
conda install geopandas
conda install fiona # I had to do this too to get fiona to work.
End of explanation
"""
import geopandas as gpd
"""
Explanation: Using Geopandas
End of explanation
"""
gdf = gpd.read_file('../data/offshore_wells_2011_Geographic_NAD27.shp')
gdf.head()
gdf.plot()
"""
Explanation: Load our data into a GeoDataFrame (gdf):
End of explanation
"""
gdf.geometry[:5]
"""
Explanation: Let's look at the first 5 rows of the geometry attribute.
End of explanation
"""
print(gdf.crs)
"""
Explanation: Notice we're in lat, lon:
End of explanation
"""
gdf = gdf.to_crs({'init': 'epsg:26920'})
"""
Explanation: Visiting EPSG 4267 tells us the datum is NAD27.
Write a new file
Let's cast the SHP to a new CRS: EPSG 26920:
End of explanation
"""
gdf.geometry[:5]
"""
Explanation: Now we're in a UTM coordinate system: UTM Zone 20N, with a NAD83 datum:
End of explanation
"""
gdf['seafl_twt'] = 2 * 1000 * gdf.Water_Dept / 1485
gdf.head()
"""
Explanation: Now we'll also add a new attribute with the two-way seismic travel time to the seafloor (in milliseconds).
End of explanation
"""
gdf.describe()
gdf.to_file('../data/offshore_wells_2011_UTM20_NAD83.shp')
ls ../data/*.shp
"""
Explanation: We can also get a statistical summary of the data frame:
End of explanation
"""
# Must be geographic coords, so casting to WGS84.
gdf = gdf.to_crs({'init': 'epsg:4326'})
with open('../data/offshore_wells.geojson', 'w') as f:
f.write(gdf.to_json())
"""
Explanation: Extra: GeoJSON
GeoJSON is a standard format for encoding geospatial data. Think of it as a more web-friendly shapefile. (It's friendly because it's all contained in a single file, and the file is in JSON format, which JavaScript can process natively and other languages can easily consume).
End of explanation
"""
import folium
# Must be geographic coords, so casting to WGS84.
gdf = gdf.to_crs({'init': 'epsg:4326'})
# Make the map, add the features via GeoJSON.
mymap = folium.Map(location=[45, -62], zoom_start=7)
features = folium.features.GeoJson(gdf.to_json())
mymap.add_child(features)
mymap
"""
Explanation: You can easily load GeoJSON into QGIS for your usual GIS workflow.
It's pretty cool how GeoJSON files show up in GitHub.
Extra: Maps with folium
You can get slippy maps right in Jupyter Notebook with folium. Install it with:
conda install folium
End of explanation
"""
|
mcflugen/bmi-tutorial | notebooks/coupled_example.ipynb | mit | %matplotlib inline
import numpy as np
"""
Explanation: <img src="images/csdms_logo.jpg">
Using a BMI: Coupling Waves and Coastline Evolution Model
This example explores how to use a BMI implementation to couple the Waves component with the Coastline Evolution Model component.
Links
CEM source code: Look at the files that have deltas in their name.
CEM description on CSDMS: Detailed information on the CEM model.
Interacting with the Coastline Evolution Model BMI using Python
Some magic that allows us to view images within the notebook.
End of explanation
"""
from cmt.components import Cem, Waves
cem, waves = Cem(), Waves()
"""
Explanation: Import the Cem and Waves classes, and instantiate them. In Python, a model with a BMI will have no arguments for its constructor. Note that although the classes have been instantiated, they're not yet ready to be run. We'll get to that later!
End of explanation
"""
waves.get_output_var_names()
cem.get_input_var_names()
"""
Explanation: Even though we can't run our models yet, we can still get some information about them. Just don't try to run them. One thing we can do is get the names of their output and input variables.
End of explanation
"""
cem.initialize(None)
waves.initialize(None)
"""
Explanation: We can also get information about specific variables. Here we'll look at some info about wave direction. This is the main input of the Cem model. Notice that BMI components always use CSDMS standard names. The CSDMS Standard Name for wave angle is,
"sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity"
Quite a mouthful, I know. With that name we can get information about that variable and the grid that it is on (it's actually just a scalar, not on a grid).
OK. We're finally ready to run the model. Well, not quite. First we initialize the model with the BMI initialize method. Normally we would pass it a string that represents the name of an input file. For this example we'll pass None, which tells Cem to use some defaults.
End of explanation
"""
def plot_coast(spacing, z):
import matplotlib.pyplot as plt
xmin, xmax = 0., z.shape[1] * spacing[1] * 1e-3
ymin, ymax = 0., z.shape[0] * spacing[0] * 1e-3
plt.imshow(z, extent=[xmin, xmax, ymin, ymax], origin='lower', cmap='ocean')
plt.colorbar().ax.set_ylabel('Water Depth (m)')
plt.xlabel('Along shore (km)')
plt.ylabel('Cross shore (km)')
"""
Explanation: Here I define a convenience function for plotting the water depth and making it look pretty. You don't need to worry too much about its internals for this tutorial. It just saves us some typing later on.
End of explanation
"""
grid_id = cem.get_var_grid('sea_water__depth')
spacing = cem.get_grid_spacing(grid_id)
shape = cem.get_grid_shape(grid_id)
z = np.empty(shape)
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
"""
Explanation: It generates plots that look like this. We begin with a flat delta (green) and a linear coastline (y = 3 km). The bathymetry drops off linearly to the top of the domain.
End of explanation
"""
qs = np.zeros_like(z)
qs[0, 100] = 750
"""
Explanation: Allocate memory for the sediment discharge array and set the discharge at the coastal cell to some value.
End of explanation
"""
cem.get_var_units('land_surface_water_sediment~bedload__mass_flow_rate')
waves.set_value('sea_shoreline_wave~incoming~deepwater__ashton_et_al_approach_angle_asymmetry_parameter', .3)
waves.set_value('sea_shoreline_wave~incoming~deepwater__ashton_et_al_approach_angle_highness_parameter', .7)
cem.set_value("sea_surface_water_wave__height", 2.)
cem.set_value("sea_surface_water_wave__period", 7.)
"""
Explanation: The CSDMS Standard Name for this variable is:
"land_surface_water_sediment~bedload__mass_flow_rate"
You can get an idea of the units based on the quantity part of the name. "mass_flow_rate" indicates mass per time. You can double-check this with the BMI method function get_var_units.
End of explanation
"""
for time in range(3000):
waves.update()
angle = waves.get_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity')
cem.set_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity', angle)
cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
cem.update()
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
"""
Explanation: Set the bedload flux and run the model.
End of explanation
"""
qs[0, 150] = 500
for time in range(3750):
waves.update()
angle = waves.get_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity')
cem.set_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity', angle)
cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
cem.update()
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
"""
Explanation: Let's add another sediment source with a different flux and update the model.
End of explanation
"""
qs.fill(0.)
for time in range(4000):
waves.update()
angle = waves.get_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity')
cem.set_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity', angle)
cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
cem.update()
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
"""
Explanation: Here we shut off the sediment supply completely.
End of explanation
"""
|
rhiever/scipy_2015_sklearn_tutorial | notebooks/02.2 Supervised Learning - Regression.ipynb | cc0-1.0 | x = np.linspace(-3, 3, 100)
print(x)
rng = np.random.RandomState(42)
y = np.sin(4 * x) + x + rng.uniform(size=len(x))
plt.plot(x, y, 'o')
"""
Explanation: Regression
In regression we try to predict a continuous output variable. This can be most easily visualized in one dimension.
We will start with a very simple toy example. We will create a dataset out of a sine curve with some noise:
End of explanation
"""
print(x.shape)
X = x[:, np.newaxis]
print(X.shape)
"""
Explanation: Linear Regression
One of the simplest models again is a linear one that simply tries to predict the data as lying on a line. One way to find such a line is LinearRegression (also known as ordinary least squares).
The interface for LinearRegression is exactly the same as for the classifiers before, only that y now contains float values, instead of classes.
To apply a scikit-learn model, we need to make X be a 2d-array:
End of explanation
"""
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
"""
Explanation: We split our data in a training and a test set again:
End of explanation
"""
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
"""
Explanation: Then we can build our regression model:
End of explanation
"""
y_pred_train = regressor.predict(X_train)
plt.plot(X_train, y_train, 'o', label="data")
plt.plot(X_train, y_pred_train, 'o', label="prediction")
plt.legend(loc='best')
"""
Explanation: And predict. First let us try the training set:
End of explanation
"""
y_pred_test = regressor.predict(X_test)
plt.plot(X_test, y_test, 'o', label="data")
plt.plot(X_test, y_pred_test, 'o', label="prediction")
plt.legend(loc='best')
"""
Explanation: The line is able to capture the general slope of the data, but not many details.
Let's try the test set:
End of explanation
"""
regressor.score(X_test, y_test)
"""
Explanation: Again, scikit-learn provides an easy way to evaluate the prediction quantitatively using the score method. For regression tasks, this is the R2 score. Another popular way would be the mean squared error.
End of explanation
"""
from sklearn.neighbors import KNeighborsRegressor
kneighbor_regression = KNeighborsRegressor(n_neighbors=1)
kneighbor_regression.fit(X_train, y_train)
"""
Explanation: KNeighborsRegression
As for classification, we can also use a neighbor based method for regression. We can simply take the output of the nearest point, or we could average several nearest points. This method is less popular for regression than for classification, but still a good baseline.
End of explanation
"""
y_pred_train = kneighbor_regression.predict(X_train)
plt.plot(X_train, y_train, 'o', label="data")
plt.plot(X_train, y_pred_train, 'o', label="prediction")
plt.legend(loc='best')
"""
Explanation: Again, let us look at the behavior on training and test set:
End of explanation
"""
y_pred_test = kneighbor_regression.predict(X_test)
plt.plot(X_test, y_test, 'o', label="data")
plt.plot(X_test, y_pred_test, 'o', label="prediction")
plt.legend(loc='best')
"""
Explanation: On the training set, we do a perfect job: each point is its own nearest neighbor!
End of explanation
"""
kneighbor_regression.score(X_test, y_test)
"""
Explanation: On the test set, we also do a better job of capturing the variation, but our estimates look much messier than before.
Let us look at the R2 score:
End of explanation
"""
|
emilhe/tmm | examples.ipynb | mit | from __future__ import division, print_function, absolute_import
from tmm import (coh_tmm, unpolarized_RT, ellips,
position_resolved, find_in_structure_with_inf)
from numpy import pi, linspace, inf, array
from scipy.interpolate import interp1d
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Examples of plots and calculations using the tmm package
Imports
End of explanation
"""
try:
import colorpy.illuminants
import colorpy.colormodels
from tmm import color
colors_were_imported = True
except ImportError:
# without colorpy, you can't run sample5(), but everything else is fine.
colors_were_imported = False
# "5 * degree" is 5 degrees expressed in radians
# "1.2 / degree" is 1.2 radians expressed in degrees
degree = pi/180
"""
Explanation: Set up
End of explanation
"""
# list of layer thicknesses in nm
d_list = [inf,100,300,inf]
# list of refractive indices
n_list = [1,2.2,3.3+0.3j,1]
# list of wavenumbers to plot in nm^-1
ks=linspace(0.0001,.01,num=400)
# initialize lists of y-values to plot
Rnorm=[]
R45=[]
for k in ks:
# For normal incidence, s and p polarizations are identical.
# I arbitrarily decided to use 's'.
Rnorm.append(coh_tmm('s',n_list, d_list, 0, 1/k)['R'])
R45.append(unpolarized_RT(n_list, d_list, 45*degree, 1/k)['R'])
kcm = ks * 1e7 #ks in cm^-1 rather than nm^-1
plt.figure()
plt.plot(kcm,Rnorm,'blue',kcm,R45,'purple')
plt.xlabel('k (cm$^{-1}$)')
plt.ylabel('Fraction reflected')
plt.title('Reflection of unpolarized light at 0$^\circ$ incidence (blue), '
'45$^\circ$ (purple)');
"""
Explanation: Sample 1
Here's a thin non-absorbing layer, on top of a thick absorbing layer, with
air on both sides. Plotting reflected intensity versus wavenumber, at two
different incident angles.
End of explanation
"""
#index of refraction of my material: wavelength in nm versus index.
material_nk_data = array([[200, 2.1+0.1j],
[300, 2.4+0.3j],
[400, 2.3+0.4j],
[500, 2.2+0.4j],
[750, 2.2+0.5j]])
material_nk_fn = interp1d(material_nk_data[:,0].real,
material_nk_data[:,1], kind='quadratic')
d_list = [inf,300,inf] #in nm
lambda_list = linspace(200,750,400) #in nm
T_list = []
for lambda_vac in lambda_list:
n_list = [1, material_nk_fn(lambda_vac), 1]
T_list.append(coh_tmm('s',n_list,d_list,0,lambda_vac)['T'])
plt.figure()
plt.plot(lambda_list,T_list)
plt.xlabel('Wavelength (nm)')
plt.ylabel('Fraction of power transmitted')
plt.title('Transmission at normal incidence');
"""
Explanation: Sample 2
Here's the transmitted intensity versus wavelength through a single-layer
film which has some complicated wavelength-dependent index of refraction.
(I made these numbers up, but in real life they could be read out of a
graph / table published in the literature.) Air is on both sides of the
film, and the light is normally incident.
End of explanation
"""
n_list=[1,1.46,3.87+0.02j]
ds=linspace(0,1000,num=100) #in nm
psis=[]
Deltas=[]
for d in ds:
e_data=ellips(n_list, [inf,d,inf], 70*degree, 633) #in nm
psis.append(e_data['psi']/degree) # angle in degrees
Deltas.append(e_data['Delta']/degree) # angle in degrees
plt.figure()
plt.plot(ds,psis,ds,Deltas)
plt.xlabel('SiO2 thickness (nm)')
plt.ylabel('Ellipsometric angles (degrees)')
plt.title('Ellipsometric parameters for air/SiO2/Si, varying '
'SiO2 thickness.\n'
'@ 70$^\circ$, 633nm. '
'Should agree with Handbook of Ellipsometry Fig. 1.14');
"""
Explanation: Sample 3
Here is a calculation of the psi and Delta parameters measured in
ellipsometry. This reproduces Fig. 1.14 in Handbook of Ellipsometry by
Tompkins, 2005.
End of explanation
"""
d_list = [inf, 100, 300, inf] #in nm
n_list = [1, 2.2+0.2j, 3.3+0.3j, 1]
th_0=pi/4
lam_vac=400
pol='p'
coh_tmm_data = coh_tmm(pol,n_list,d_list,th_0,lam_vac)
ds = linspace(0,400,num=1000) #position in structure
poyn=[]
absor=[]
for d in ds:
layer, d_in_layer = find_in_structure_with_inf(d_list,d)
data=position_resolved(layer,d_in_layer,coh_tmm_data)
poyn.append(data['poyn'])
absor.append(data['absor'])
# convert data to numpy arrays for easy scaling in the plot
poyn = array(poyn)
absor = array(absor)
plt.figure()
plt.plot(ds,poyn,'blue',ds,200*absor,'purple')
plt.xlabel('depth (nm)')
plt.ylabel('AU')
plt.title('Local absorption (purple), Poynting vector (blue)');
"""
Explanation: Sample 4
Here is an example where we plot absorption and Poynting vector
as a function of depth.
End of explanation
"""
if not colors_were_imported:
print('Colorpy was not detected (or perhaps an error occurred when',
'loading it). You cannot do color calculations, sorry!',
'http://pypi.python.org/pypi/colorpy')
else:
# Crystalline silicon refractive index. Data from Palik via
# http://refractiveindex.info, I haven't checked it, but this is just for
# demonstration purposes anyway.
Si_n_data = [[400, 5.57 + 0.387j],
[450, 4.67 + 0.145j],
[500, 4.30 + 7.28e-2j],
[550, 4.08 + 4.06e-2j],
[600, 3.95 + 2.57e-2j],
[650, 3.85 + 1.64e-2j],
[700, 3.78 + 1.26e-2j]]
Si_n_data = array(Si_n_data)
Si_n_fn = interp1d(Si_n_data[:,0], Si_n_data[:,1], kind='linear')
# SiO2 refractive index (approximate): 1.46 regardless of wavelength
SiO2_n_fn = lambda wavelength : 1.46
# air refractive index
air_n_fn = lambda wavelength : 1
n_fn_list = [air_n_fn, SiO2_n_fn, Si_n_fn]
th_0 = 0
# Print the colors, and show plots, for the special case of 300nm-thick SiO2
d_list = [inf, 300, inf]
reflectances = color.calc_reflectances(n_fn_list, d_list, th_0)
illuminant = colorpy.illuminants.get_illuminant_D65()
spectrum = color.calc_spectrum(reflectances, illuminant)
color_dict = color.calc_color(spectrum)
print('air / 300nm SiO2 / Si --- rgb =', color_dict['rgb'], ', xyY =', color_dict['xyY'])
plt.figure()
color.plot_reflectances(reflectances,
title='air / 300nm SiO2 / Si -- '
'Fraction reflected at each wavelength')
plt.figure()
color.plot_spectrum(spectrum,
title='air / 300nm SiO2 / Si -- '
'Reflected spectrum under D65 illumination')
# Calculate irgb color (i.e. gamma-corrected sRGB display color rounded to
# integers 0-255) versus thickness of SiO2
max_SiO2_thickness = 600
SiO2_thickness_list = linspace(0,max_SiO2_thickness,num=80)
irgb_list = []
for SiO2_d in SiO2_thickness_list:
d_list = [inf, SiO2_d, inf]
reflectances = color.calc_reflectances(n_fn_list, d_list, th_0)
illuminant = colorpy.illuminants.get_illuminant_D65()
spectrum = color.calc_spectrum(reflectances, illuminant)
color_dict = color.calc_color(spectrum)
irgb_list.append(color_dict['irgb'])
# Plot those colors
print('Making color vs SiO2 thickness graph. Compare to (for example)')
print('http://www.htelabs.com/appnotes/sio2_color_chart_thermal_silicon_dioxide.htm')
plt.figure()
plt.plot([0,max_SiO2_thickness],[1,1])
plt.xlim(0,max_SiO2_thickness)
plt.ylim(0,1)
plt.xlabel('SiO2 thickness (nm)')
plt.yticks([])
plt.title('Air / SiO2 / Si color vs SiO2 thickness')
for i in range(len(SiO2_thickness_list)):
# One strip of each color, centered at x=SiO2_thickness_list[i]
if i==0:
x0 = 0
else:
x0 = (SiO2_thickness_list[i] + SiO2_thickness_list[i-1]) / 2
if i == len(SiO2_thickness_list) - 1:
x1 = max_SiO2_thickness
else:
x1 = (SiO2_thickness_list[i] + SiO2_thickness_list[i+1]) / 2
y0 = 0
y1 = 1
poly_x = [x0, x1, x1, x0]
poly_y = [y0, y0, y1, y1]
color_string = colorpy.colormodels.irgb_string_from_irgb(irgb_list[i])
plt.fill(poly_x, poly_y, color_string, edgecolor=color_string)
"""
Explanation: Sample 5
Color calculations: What color is a air / thin SiO2 / Si wafer?
End of explanation
"""
|
ivukotic/ML_platform_tests | PerfSONAR/AnomalyDetection/BDT/Testing BDT AD on simulated data.ipynb | gpl-3.0 | %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
matplotlib.rc('xtick', labelsize=14)
matplotlib.rc('ytick', labelsize=14)
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import AdaBoostClassifier
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_curve, auc
from pandas.tseries.offsets import *
import simulated_data
from graphviz import Source
"""
Explanation: Testing Boosted Decision Tree based Anomaly Detection on simulated data
This code generates a large dataframe containing multiple time series, randomly adds changes in both mean and variance (anomalies), and then trains a BDT to distinguish measurements belonging to the time bin under investigation from measurements in a reference time period.
End of explanation
"""
cut = 0.55
window = 24
"""
Explanation: parameters to set
End of explanation
"""
# df = simulated_data.get_simulated_data()
df = simulated_data.get_simulated_fixed_data()
df.head()
"""
Explanation: generate data
End of explanation
"""
ax = df.plot(figsize=(20,7))
ax.set_xlabel("time", fontsize=16)
plt.savefig('simulated_fixed.png')
"""
Explanation: Plot the time series. The plot can take a minute to appear due to its size and complexity.
End of explanation
"""
def check_for_anomaly(ref, sub):
y_ref = pd.Series([0] * ref.shape[0])
X_ref = ref
del X_ref['flag']
del X_ref['auc_score']
y_sub = pd.Series([1] * sub.shape[0])
X_sub=sub
del X_sub['flag']
del X_sub['auc_score']
# separate Reference and Subject into Train and Test
X_ref_train, X_ref_test, y_ref_train, y_ref_test = train_test_split(X_ref, y_ref, test_size=0.3, random_state=42)
X_sub_train, X_sub_test, y_sub_train, y_sub_test = train_test_split(X_sub, y_sub, test_size=0.3, random_state=42)
# combine training ref and sub samples
X_train = pd.concat([X_ref_train, X_sub_train])
y_train = pd.concat([y_ref_train, y_sub_train])
# combine testing ref and sub samples
X_test = pd.concat([X_ref_test, X_sub_test])
y_test = pd.concat([y_ref_test, y_sub_test])
clf = AdaBoostClassifier() #dtc
# clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),algorithm="SAMME",n_estimators=200)
#train an AdaBoost model to be able to tell the difference between the reference and subject data
clf.fit(X_train, y_train)
#Predict using the combined test data
y_predict = clf.predict(X_test)
# scores = cross_val_score(clf, X, y)
# print(scores)
fpr, tpr, thresholds = roc_curve(y_test, y_predict) # calculate the false positive rate and true positive rate
auc_score = auc(fpr, tpr) #calculate the AUC score
print ("auc_score = ", auc_score, "\tfeature importances:", clf.feature_importances_)
if auc_score > cut:
plot_roc(fpr, tpr, auc_score)
filename='tree_'+sub.index.min().strftime("%Y-%m-%d_%H")
tree.export_graphviz(clf.estimators_[0] , out_file=filename +'_1.dot')
tree.export_graphviz(clf.estimators_[1] , out_file=filename +'_2.dot')
return auc_score
def plot_roc(fpr,tpr, roc_auc):
plt.figure()
plt.plot(fpr, tpr, color='darkorange', label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.plot([0, 1], [0, 1], linestyle='--', color='r',label='Luck', alpha=.8)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.show()
"""
Explanation: functions to check for an anomaly and plot ROC curves.
The check_for_anomaly function receives both reference and subject dataframes, creates training and testing frames, performs the classification, tests it, prints the AUC results, and creates ROC curves when the score is above the cut.
No output is expected from this cell.
End of explanation
"""
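The approach used here is a classifier two-sample test: if a classifier cannot tell the reference sample from the subject sample, its test-set ROC AUC stays near 0.5, while a distribution shift pushes it toward 1. A minimal self-contained sketch on synthetic data (`two_sample_auc`, the shift size, and the sample counts are illustrative choices, not part of this notebook):

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.RandomState(0)

def two_sample_auc(ref, sub):
    """Train a classifier to separate ref (label 0) from sub (label 1)
    and return the test-set AUC; ~0.5 means 'same distribution'."""
    X = np.vstack([ref, sub])
    y = np.r_[np.zeros(len(ref)), np.ones(len(sub))]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=42, stratify=y)
    clf = AdaBoostClassifier(random_state=0).fit(X_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

ref = rng.randn(500, 4)
same = rng.randn(500, 4)            # drawn from the same distribution
shifted = rng.randn(500, 4) + 1.5   # mean shift -> an 'anomaly'

auc_same = two_sample_auc(ref, same)       # near 0.5: no detectable shift
auc_shift = two_sample_auc(ref, shifted)   # well above 0.5: shift detected
print(auc_same, auc_shift)
```

This is exactly the logic of check_for_anomaly above, minus the notebook-specific column handling and plotting.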
df['auc_score']=0.5
#find min and max timestamps
start = df.index.min()
end = df.index.max()
#round start
start = start.replace(minute=0, second=0)
ref = window * Hour()
sub = 1 * Hour()
# loop over them
ti=start+ref+sub
count=0
while ti < end + 1 * Minute():
ref_start = ti-ref-sub
ref_end = ti-sub
ref_df = df[(df.index >= ref_start) & (df.index < ref_end)]
sub_df = df[(df.index >= ref_end) & (df.index < ti)]
auc_score = check_for_anomaly(ref_df, sub_df)
df.loc[(df.index>=ref_end) & (df.index<=ti),['auc_score']] = auc_score
print(ti,"\trefes:" , ref_df.shape[0], "\tsubjects:", sub_df.shape[0], '\tauc:', auc_score)
ti = ti + sub
count=count+1
#if count>2: break
ax = df.plot(figsize=(20,7))
ax.set_xlabel("time", fontsize=14)
plt.savefig('BDT_simulated_fixed.png')
"""
Explanation: Looping over time intervals
This cell actually runs the anomaly detection over all the intervals. It takes only a few seconds per interval, but plot generation takes 10-20 seconds.
End of explanation
"""
fig, ax = plt.subplots(figsize=(20,7))
ax.set_xlabel("time", fontsize=14)
df.loc[:,'Detected'] = 0
df.loc[df.auc_score > cut, 'Detected'] = 1
df.head()
ax.plot(df.flag, 'r')
ax.plot(df.auc_score,'g')
ax.fill( df.Detected, 'b', alpha=0.3)
ax.legend(loc='upper left')
plt.show()
fig.savefig('BDT_shaded_simulated_fixed.png')
"""
Explanation: Make a plot of the created anomalies and the AUC values, and shade the periods where an anomaly is suspected.
End of explanation
"""
|
Rotvig/cs231n | Deep Learning/Exercise 1/Q2.ipynb | mit | # As usual, a bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient
from cs231n.layers import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
"""
Explanation: Modular neural nets
In the previous exercise, we computed the loss and gradient for a two-layer neural network in a single monolithic function. This isn't very difficult for a small two-layer network, but would be tedious and error-prone for larger networks. Ideally we want to build networks using a more modular design so that we can snap together different types of layers and loss functions in order to quickly experiment with different architectures.
In this exercise we will implement this approach, and develop a number of different layer types in isolation that can then be easily plugged together. For each layer we will implement forward and backward functions. The forward function will receive data, weights, and other parameters, and will return both an output and a cache object that stores data needed for the backward pass. The backward function will recieve upstream derivatives and the cache object, and will return gradients with respect to the data and all of the weights. This will allow us to write code that looks like this:
```python
def two_layer_net(X, W1, b1, W2, b2, reg):
# Forward pass; compute scores
s1, fc1_cache = affine_forward(X, W1, b1)
a1, relu_cache = relu_forward(s1)
scores, fc2_cache = affine_forward(a1, W2, b2)
# Loss functions return data loss and gradients on scores
data_loss, dscores = svm_loss(scores, y)
# Compute backward pass
da1, dW2, db2 = affine_backward(dscores, fc2_cache)
ds1 = relu_backward(da1, relu_cache)
dX, dW1, db1 = affine_backward(ds1, fc1_cache)
# A real network would add regularization here
# Return loss and gradients
  return data_loss, dW1, db1, dW2, db2
```
End of explanation
"""
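The forward/cache/backward contract described above can be demonstrated with a layer even simpler than any in the assignment. The sketch below uses a hypothetical elementwise scaling layer (`scale_forward`/`scale_backward` are illustrative names, not part of cs231n/layers.py) to show the cache pattern and a quick analytic-gradient check:

```python
import numpy as np

def scale_forward(x, gamma):
    """Elementwise scaling: out = gamma * x. Cache what backward needs."""
    out = gamma * x
    cache = (x, gamma)
    return out, cache

def scale_backward(dout, cache):
    """Given the upstream gradient dout, return gradients w.r.t. inputs."""
    x, gamma = cache
    dx = gamma * dout            # d(gamma*x)/dx = gamma
    dgamma = np.sum(dout * x)    # scalar gamma accumulates over all elements
    return dx, dgamma

# Check against the analytic gradient of f(x, gamma) = sum(gamma * x)
x = np.random.randn(3, 4)
gamma = 2.0
out, cache = scale_forward(x, gamma)
dx, dgamma = scale_backward(np.ones_like(out), cache)
assert np.allclose(dx, gamma * np.ones_like(x))
assert np.isclose(dgamma, x.sum())
```

Every layer in this exercise follows the same shape: forward returns (out, cache), backward consumes (dout, cache).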
# Test the affine_forward function
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 3
input_size = num_inputs * np.prod(input_shape)
weight_size = output_dim * np.prod(input_shape)
x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)
w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim)
b = np.linspace(-0.3, 0.1, num=output_dim)
out, _ = affine_forward(x, w, b)
correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],
[ 3.25553199, 3.5141327, 3.77273342]])
# Compare your output with ours. The error should be around 1e-9.
print 'Testing affine_forward function:'
print 'difference: ', rel_error(out, correct_out)
"""
Explanation: Affine layer: forward
Open the file cs231n/layers.py and implement the affine_forward function.
Once you are done, you can test your implementation by running the following:
End of explanation
"""
# Test the affine_backward function
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)
dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)
_, cache = affine_forward(x, w, b)
dx, dw, db = affine_backward(dout, cache)
# The error should be less than 1e-10
print 'Testing affine_backward function:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
"""
Explanation: Affine layer: backward
Now implement the affine_backward function. You can test your implementation using numeric gradient checking.
End of explanation
"""
# Test the relu_forward function
x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = relu_forward(x)
correct_out = np.array([[ 0., 0., 0., 0., ],
[ 0., 0., 0.04545455, 0.13636364,],
[ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])
# Compare your output with ours. The error should be around 1e-8
print 'Testing relu_forward function:'
print 'difference: ', rel_error(out, correct_out)
"""
Explanation: ReLU layer: forward
Implement the relu_forward function and test your implementation by running the following:
End of explanation
"""
x = np.random.randn(10, 10)
dout = np.random.randn(*x.shape)
dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)
_, cache = relu_forward(x)
dx = relu_backward(dout, cache)
# The error should be around 1e-12
print 'Testing relu_backward function:'
print 'dx error: ', rel_error(dx_num, dx)
"""
Explanation: ReLU layer: backward
Implement the relu_backward function and test your implementation using numeric gradient checking:
End of explanation
"""
num_classes, num_inputs = 10, 50
x = 0.001 * np.random.randn(num_inputs, num_classes)
y = np.random.randint(num_classes, size=num_inputs)
dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False)
loss, dx = svm_loss(x, y)
# Test svm_loss function. Loss should be around 9 and dx error should be 1e-9
print 'Testing svm_loss:'
print 'loss: ', loss
print 'dx error: ', rel_error(dx_num, dx)
dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False)
loss, dx = softmax_loss(x, y)
# Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8
print '\nTesting softmax_loss:'
print 'loss: ', loss
print 'dx error: ', rel_error(dx_num, dx)
"""
Explanation: Loss layers: Softmax and SVM
You implemented these loss functions in the last assignment, so we'll give them to you for free here. It's still a good idea to test them to make sure they work correctly.
End of explanation
"""
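For reference, a numerically stable softmax loss can be sketched as follows; this mirrors the standard formulation (the version shipped in cs231n/layers.py may differ in naming and details):

```python
import numpy as np

def softmax_loss_sketch(x, y):
    """x: (N, C) class scores, y: (N,) integer labels.
    Returns mean cross-entropy loss and the gradient on x."""
    shifted = x - x.max(axis=1, keepdims=True)   # stability: subtract row max
    exp = np.exp(shifted)
    probs = exp / exp.sum(axis=1, keepdims=True)
    N = x.shape[0]
    loss = -np.log(probs[np.arange(N), y]).mean()
    dx = probs.copy()
    dx[np.arange(N), y] -= 1                     # d loss / d scores
    dx /= N
    return loss, dx

# With near-zero scores, all classes are equally likely,
# so the loss should be close to log(C) = log(10) ~ 2.30.
x = 0.001 * np.random.randn(50, 10)
y = np.random.randint(10, size=50)
loss, dx = softmax_loss_sketch(x, y)
print(loss)
```

Subtracting the row maximum before exponentiating avoids overflow without changing the probabilities.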
|
yl565/statsmodels | examples/notebooks/statespace_dfm_coincident.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
np.set_printoptions(precision=4, suppress=True, linewidth=120)
# pandas.io.data has been split out of pandas into the pandas-datareader package
from pandas_datareader.data import DataReader
# Get the datasets from FRED
start = '1979-01-01'
end = '2014-12-01'
indprod = DataReader('IPMAN', 'fred', start=start, end=end)
income = DataReader('W875RX1', 'fred', start=start, end=end)
sales = DataReader('CMRMTSPL', 'fred', start=start, end=end)
emp = DataReader('PAYEMS', 'fred', start=start, end=end)
# dta = pd.concat((indprod, income, sales, emp), axis=1)
# dta.columns = ['indprod', 'income', 'sales', 'emp']
"""
Explanation: Dynamic factors and coincident indices
Factor models generally try to find a small number of unobserved "factors" that influence a substantial portion of the variation in a larger number of observed variables, and they are related to dimension-reduction techniques such as principal components analysis. Dynamic factor models explicitly model the transition dynamics of the unobserved factors, and so are often applied to time-series data.
Macroeconomic coincident indices are designed to capture the common component of the "business cycle"; such a component is assumed to simultaneously affect many macroeconomic variables. Although the estimation and use of coincident indices (for example the Index of Coincident Economic Indicators) pre-dates dynamic factor models, in several influential papers Stock and Watson (1989, 1991) used a dynamic factor model to provide a theoretical foundation for them.
Below, we follow the treatment found in Kim and Nelson (1999), of the Stock and Watson (1991) model, to formulate a dynamic factor model, estimate its parameters via maximum likelihood, and create a coincident index.
Macroeconomic data
The coincident index is created by considering the comovements in four macroeconomic variables (versions of these variables are available on FRED; the ID of the series used below is given in parentheses):
Industrial production (IPMAN)
Real aggregate income (excluding transfer payments) (W875RX1)
Manufacturing and trade sales (CMRMTSPL)
Employees on non-farm payrolls (PAYEMS)
In all cases, the data is at the monthly frequency and has been seasonally adjusted; the time frame considered is 1979 - 2014.
End of explanation
"""
# HMRMT = DataReader('HMRMT', 'fred', start='1967-01-01', end=end)
# CMRMT = DataReader('CMRMT', 'fred', start='1997-01-01', end=end)
# HMRMT_growth = HMRMT.diff() / HMRMT.shift()
# sales = pd.Series(np.zeros(emp.shape[0]), index=emp.index)
# # Fill in the recent entries (1997 onwards)
# sales[CMRMT.index] = CMRMT
# # Backfill the previous entries (pre 1997)
# idx = sales.ix[:'1997-01-01'].index
# for t in range(len(idx)-1, 0, -1):
# month = idx[t]
# prev_month = idx[t-1]
# sales.ix[prev_month] = sales.ix[month] / (1 + HMRMT_growth.ix[prev_month].values)
dta = pd.concat((indprod, income, sales, emp), axis=1)
dta.columns = ['indprod', 'income', 'sales', 'emp']
dta.ix[:, 'indprod':'emp'].plot(subplots=True, layout=(2, 2), figsize=(15, 6));
"""
Explanation: Note: in a recent update on FRED (8/12/15) the time series CMRMTSPL was truncated to begin in 1997; this was probably a mistake, since CMRMTSPL is a spliced series: the earlier period comes from the series HMRMT and the later period is defined by CMRMT.
This has since (02/11/16) been corrected; however, the series could also be constructed by hand from HMRMT and CMRMT, as shown below (the process is taken from the notes in the Alfred xls file).
End of explanation
"""
# Create log-differenced series
dta['dln_indprod'] = (np.log(dta.indprod)).diff() * 100
dta['dln_income'] = (np.log(dta.income)).diff() * 100
dta['dln_sales'] = (np.log(dta.sales)).diff() * 100
dta['dln_emp'] = (np.log(dta.emp)).diff() * 100
# De-mean and standardize
dta['std_indprod'] = (dta['dln_indprod'] - dta['dln_indprod'].mean()) / dta['dln_indprod'].std()
dta['std_income'] = (dta['dln_income'] - dta['dln_income'].mean()) / dta['dln_income'].std()
dta['std_sales'] = (dta['dln_sales'] - dta['dln_sales'].mean()) / dta['dln_sales'].std()
dta['std_emp'] = (dta['dln_emp'] - dta['dln_emp'].mean()) / dta['dln_emp'].std()
"""
Explanation: Stock and Watson (1991) report that for their datasets, they could not reject the null hypothesis of a unit root in each series (so the series are integrated), but they did not find strong evidence that the series were co-integrated.
As a result, they suggest estimating the model using the first differences (of the logs) of the variables, demeaned and standardized.
End of explanation
"""
# Get the endogenous data
endog = dta.ix['1979-02-01':, 'std_indprod':'std_emp']
# Create the model
mod = sm.tsa.DynamicFactor(endog, k_factors=1, factor_order=2, error_order=2)
initial_res = mod.fit(method='powell', disp=False)
res = mod.fit(initial_res.params)
"""
Explanation: Dynamic factors
A general dynamic factor model is written as:
$$
\begin{align}
y_t & = \Lambda f_t + B x_t + u_t \
f_t & = A_1 f_{t-1} + \dots + A_p f_{t-p} + \eta_t \qquad \eta_t \sim N(0, I)\
u_t & = C_1 u_{t-1} + \dots + C_q u_{t-q} + \varepsilon_t \qquad \varepsilon_t \sim N(0, \Sigma)
\end{align}
$$
where $y_t$ are observed data, $f_t$ are the unobserved factors (evolving as a vector autoregression), $x_t$ are (optional) exogenous variables, and $u_t$ is the error, or "idiosyncratic", process ($u_t$ is also optionally allowed to be autocorrelated). The $\Lambda$ matrix is often referred to as the matrix of "factor loadings". The variance of the factor error term is set to the identity matrix to ensure identification of the unobserved factors.
This model can be cast into state space form, and the unobserved factor estimated via the Kalman filter. The likelihood can be evaluated as a byproduct of the filtering recursions, and maximum likelihood estimation used to estimate the parameters.
Model specification
The specific dynamic factor model in this application has 1 unobserved factor which is assumed to follow an AR(2) process. The innovations $\varepsilon_t$ are assumed to be independent (so that $\Sigma$ is a diagonal matrix) and the error term associated with each equation, $u_{i,t}$, is assumed to follow an independent AR(2) process.
Thus the specification considered here is:
$$
\begin{align}
y_{i,t} & = \lambda_i f_t + u_{i,t} \
u_{i,t} & = c_{i,1} u_{i,t-1} + c_{i,2} u_{i,t-2} + \varepsilon_{i,t} \qquad & \varepsilon_{i,t} \sim N(0, \sigma_i^2) \
f_t & = a_1 f_{t-1} + a_2 f_{t-2} + \eta_t \qquad & \eta_t \sim N(0, I)\
\end{align}
$$
where $i$ is one of: [indprod, income, sales, emp ].
This model can be formulated using the DynamicFactor model built-in to Statsmodels. In particular, we have the following specification:
k_factors = 1 - (there is 1 unobserved factor)
factor_order = 2 - (it follows an AR(2) process)
error_var = False - (the errors evolve as independent AR processes rather than jointly as a VAR - note that this is the default option, so it is not specified below)
error_order = 2 - (the errors are autocorrelated of order 2: i.e. AR(2) processes)
error_cov_type = 'diagonal' - (the innovations are uncorrelated; this is again the default)
Once the model is created, the parameters can be estimated via maximum likelihood; this is done using the fit() method.
Note: recall that we have de-meaned and standardized the data; this will be important in interpreting the results that follow.
Aside: in their empirical example, Kim and Nelson (1999) actually consider a slightly different model in which the employment variable is allowed to also depend on lagged values of the factor - this model does not fit into the built-in DynamicFactor class, but can be accommodated by using a subclass to implement the required new parameters and restrictions - see Appendix 1, below.
Parameter estimation
Multivariate models can have a relatively large number of parameters, and it may be difficult to escape from local minima to find the maximized likelihood. In an attempt to mitigate this problem, I perform an initial maximization step (from the model-defined starting parameters) using the modified Powell method available in Scipy (see the minimize documentation for more information). The resulting parameters are then used as starting parameters in the standard LBFGS optimization method.
End of explanation
"""
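The same two-stage strategy — a derivative-free Powell pass followed by a gradient-based refinement started from the Powell solution — can be illustrated on a generic objective. A minimal sketch using the Rosenbrock test function (chosen purely for illustration, not related to the likelihood above):

```python
import numpy as np
from scipy.optimize import minimize, rosen

x0 = np.array([-1.2, 1.0])

# Stage 1: modified Powell (derivative-free), as used for the initial fit
stage1 = minimize(rosen, x0, method='Powell')

# Stage 2: start L-BFGS-B from the Powell solution for final refinement
stage2 = minimize(rosen, stage1.x, method='L-BFGS-B')

print(stage2.x)  # the Rosenbrock minimum is at (1, 1)
```

In the cell above, `mod.fit(method='powell')` plays the role of stage 1 and `mod.fit(initial_res.params)` the role of stage 2.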
print(res.summary(separate_params=False))
"""
Explanation: Estimates
Once the model has been estimated, there are two components that we can use for analysis or inference:
The estimated parameters
The estimated factor
Parameters
The estimated parameters can be helpful in understanding the implications of the model, although in models with a larger number of observed variables and / or unobserved factors they can be difficult to interpret.
One reason for this difficulty is due to identification issues between the factor loadings and the unobserved factors. One easy-to-see identification issue is the sign of the loadings and the factors: an equivalent model to the one displayed below would result from reversing the signs of all factor loadings and the unobserved factor.
Here, one of the easy-to-interpret implications in this model is the persistence of the unobserved factor: we find that it exhibits substantial persistence.
End of explanation
"""
fig, ax = plt.subplots(figsize=(13,3))
# Plot the factor
dates = endog.index._mpl_repr()
ax.plot(dates, res.factors.filtered[0], label='Factor')
ax.legend()
# Retrieve and also plot the NBER recession indicators
rec = DataReader('USREC', 'fred', start=start, end=end)
ylim = ax.get_ylim()
ax.fill_between(dates[:-3], ylim[0], ylim[1], rec.values[:,0], facecolor='k', alpha=0.1);
"""
Explanation: Estimated factors
While it can be useful to plot the unobserved factors, it is less useful here than one might think for two reasons:
The sign-related identification issue described above.
Since the data was differenced, the estimated factor explains the variation in the differenced data, not the original data.
It is for these reasons that the coincident index is created (see below).
With these reservations, the unobserved factor is plotted below, along with the NBER indicators for US recessions. It appears that the factor is successful at picking up some degree of business cycle activity.
End of explanation
"""
res.plot_coefficients_of_determination(figsize=(8,2));
"""
Explanation: Post-estimation
Although here we will be able to interpret the results of the model by constructing the coincident index, there is a useful and generic approach for getting a sense for what is being captured by the estimated factor. By taking the estimated factors as given, regressing them (and a constant) each (one at a time) on each of the observed variables, and recording the coefficients of determination ($R^2$ values), we can get a sense of the variables for which each factor explains a substantial portion of the variance and the variables for which it does not.
In models with more variables and more factors, this can sometimes lend interpretation to the factors (for example sometimes one factor will load primarily on real variables and another on nominal variables).
In this model, with only four endogenous variables and one factor, it is easy to digest a simple table of the $R^2$ values, but in larger models it is not. For this reason, a bar plot is often employed; from the plot we can easily see that the factor explains most of the variation in industrial production index and a large portion of the variation in sales and employment, it is less helpful in explaining income.
End of explanation
"""
usphci = DataReader('USPHCI', 'fred', start='1979-01-01', end='2014-12-01')['USPHCI']
usphci.plot(figsize=(13,3));
dusphci = usphci.diff()[1:].values
def compute_coincident_index(mod, res):
# Estimate W(1)
spec = res.specification
design = mod.ssm['design']
transition = mod.ssm['transition']
ss_kalman_gain = res.filter_results.kalman_gain[:,:,-1]
k_states = ss_kalman_gain.shape[0]
W1 = np.linalg.inv(np.eye(k_states) - np.dot(
np.eye(k_states) - np.dot(ss_kalman_gain, design),
transition
)).dot(ss_kalman_gain)[0]
# Compute the factor mean vector
factor_mean = np.dot(W1, dta.ix['1972-02-01':, 'dln_indprod':'dln_emp'].mean())
# Normalize the factors
factor = res.factors.filtered[0]
factor *= np.std(usphci.diff()[1:]) / np.std(factor)
# Compute the coincident index
coincident_index = np.zeros(mod.nobs+1)
# The initial value is arbitrary; here it is set to
# facilitate comparison
coincident_index[0] = usphci.iloc[0] * factor_mean / dusphci.mean()
for t in range(0, mod.nobs):
coincident_index[t+1] = coincident_index[t] + factor[t] + factor_mean
# Attach dates
coincident_index = pd.Series(coincident_index, index=dta.index).iloc[1:]
# Normalize to use the same base year as USPHCI
coincident_index *= (usphci.ix['1992-07-01'] / coincident_index.ix['1992-07-01'])
return coincident_index
"""
Explanation: Coincident Index
As described above, the goal of this model was to create an interpretable series which could be used to understand the current status of the macroeconomy. This is what the coincident index is designed to do. It is constructed below. For readers interested in an explanation of the construction, see Kim and Nelson (1999) or Stock and Watson (1991).
In essence, what is done is to reconstruct the mean of the (differenced) factor. We will compare it to the coincident index published by the Federal Reserve Bank of Philadelphia (USPHCI on FRED).
End of explanation
"""
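The accumulation step at the heart of compute_coincident_index can be seen in isolation: a (differenced) growth series is cumulated into a level index and then rebased to a chosen base period. A minimal sketch on synthetic data (the growth rates, base month, and base level are arbitrary choices for illustration):

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(0)
dates = pd.date_range('1979-02-01', periods=120, freq='MS')
growth = 0.3 + rng.randn(120) * 0.5   # synthetic monthly growth contributions

# Rebuild the level series by accumulating growth from an arbitrary start
index = pd.Series(100.0 + np.cumsum(growth), index=dates)

# Rebase so that the value in a chosen base month equals a target level
base_month = '1985-01-01'
index *= 95.0 / index.loc[base_month]
print(index.loc[base_month])  # equals the chosen base level (95.0)
```

In the function above, the "growth" is the filtered factor plus the estimated factor mean, and the rebasing uses USPHCI's 1992-07 value so the two indices are directly comparable.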
fig, ax = plt.subplots(figsize=(13,3))
# Compute the index
coincident_index = compute_coincident_index(mod, res)
# Plot the factor
dates = endog.index._mpl_repr()
ax.plot(dates, coincident_index, label='Coincident index')
ax.plot(usphci.index._mpl_repr(), usphci, label='USPHCI')
ax.legend(loc='lower right')
# Retrieve and also plot the NBER recession indicators
ylim = ax.get_ylim()
ax.fill_between(dates[:-3], ylim[0], ylim[1], rec.values[:,0], facecolor='k', alpha=0.1);
"""
Explanation: Below we plot the calculated coincident index along with the US recessions and the comparison coincident index USPHCI.
End of explanation
"""
from statsmodels.tsa.statespace import tools
class ExtendedDFM(sm.tsa.DynamicFactor):
def __init__(self, endog, **kwargs):
# Setup the model as if we had a factor order of 4
super(ExtendedDFM, self).__init__(
endog, k_factors=1, factor_order=4, error_order=2,
**kwargs)
# Note: `self.parameters` is an ordered dict with the
# keys corresponding to parameter types, and the values
# the number of parameters of that type.
# Add the new parameters
self.parameters['new_loadings'] = 3
# Cache a slice for the location of the 4 factor AR
# parameters (a_1, ..., a_4) in the full parameter vector
offset = (self.parameters['factor_loadings'] +
self.parameters['exog'] +
self.parameters['error_cov'])
self._params_factor_ar = np.s_[offset:offset+2]
self._params_factor_zero = np.s_[offset+2:offset+4]
@property
def start_params(self):
# Add three new loading parameters to the end of the parameter
# vector, initialized to zeros (for simplicity; they could
# be initialized any way you like)
return np.r_[super(ExtendedDFM, self).start_params, 0, 0, 0]
@property
def param_names(self):
# Add the corresponding names for the new loading parameters
# (the name can be anything you like)
return super(ExtendedDFM, self).param_names + [
'loading.L%d.f1.%s' % (i, self.endog_names[3]) for i in range(1,4)]
def transform_params(self, unconstrained):
# Perform the typical DFM transformation (w/o the new parameters)
constrained = super(ExtendedDFM, self).transform_params(
unconstrained[:-3])
# Redo the factor AR constraint, since we only want an AR(2),
# and the previous constraint was for an AR(4)
ar_params = unconstrained[self._params_factor_ar]
constrained[self._params_factor_ar] = (
tools.constrain_stationary_univariate(ar_params))
# Return all the parameters
return np.r_[constrained, unconstrained[-3:]]
def untransform_params(self, constrained):
# Perform the typical DFM untransformation (w/o the new parameters)
unconstrained = super(ExtendedDFM, self).untransform_params(
constrained[:-3])
# Redo the factor AR unconstraint, since we only want an AR(2),
# and the previous unconstraint was for an AR(4)
ar_params = constrained[self._params_factor_ar]
unconstrained[self._params_factor_ar] = (
tools.unconstrain_stationary_univariate(ar_params))
# Return all the parameters
return np.r_[unconstrained, constrained[-3:]]
def update(self, params, transformed=True, complex_step=False):
# Peform the transformation, if required
if not transformed:
params = self.transform_params(params)
params[self._params_factor_zero] = 0
# Now perform the usual DFM update, but exclude our new parameters
super(ExtendedDFM, self).update(params[:-3], transformed=True, complex_step=complex_step)
# Finally, set our new parameters in the design matrix
self.ssm['design', 3, 1:4] = params[-3:]
"""
Explanation: Appendix 1: Extending the dynamic factor model
Recall that the previous specification was described by:
$$
\begin{align}
y_{i,t} & = \lambda_i f_t + u_{i,t} \
u_{i,t} & = c_{i,1} u_{1,t-1} + c_{i,2} u_{i,t-2} + \varepsilon_{i,t} \qquad & \varepsilon_{i,t} \sim N(0, \sigma_i^2) \
f_t & = a_1 f_{t-1} + a_2 f_{t-2} + \eta_t \qquad & \eta_t \sim N(0, I)\
\end{align}
$$
Written in state space form, the previous specification of the model had the following observation equation:
$$
\begin{bmatrix}
y_{\text{indprod}, t} \
y_{\text{income}, t} \
y_{\text{sales}, t} \
y_{\text{emp}, t} \
\end{bmatrix} = \begin{bmatrix}
\lambda_\text{indprod} & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
\lambda_\text{income} & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \
\lambda_\text{sales} & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \
\lambda_\text{emp} & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \
\end{bmatrix}
\begin{bmatrix}
f_t \
f_{t-1} \
u_{\text{indprod}, t} \
u_{\text{income}, t} \
u_{\text{sales}, t} \
u_{\text{emp}, t} \
u_{\text{indprod}, t-1} \
u_{\text{income}, t-1} \
u_{\text{sales}, t-1} \
u_{\text{emp}, t-1} \
\end{bmatrix}
$$
and transition equation:
$$
\begin{bmatrix}
f_t \
f_{t-1} \
u_{\text{indprod}, t} \
u_{\text{income}, t} \
u_{\text{sales}, t} \
u_{\text{emp}, t} \
u_{\text{indprod}, t-1} \
u_{\text{income}, t-1} \
u_{\text{sales}, t-1} \
u_{\text{emp}, t-1} \
\end{bmatrix} = \begin{bmatrix}
a_1 & a_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & c_{\text{indprod}, 1} & 0 & 0 & 0 & c_{\text{indprod}, 2} & 0 & 0 & 0 \
0 & 0 & 0 & c_{\text{income}, 1} & 0 & 0 & 0 & c_{\text{income}, 2} & 0 & 0 \
0 & 0 & 0 & 0 & c_{\text{sales}, 1} & 0 & 0 & 0 & c_{\text{sales}, 2} & 0 \
0 & 0 & 0 & 0 & 0 & c_{\text{emp}, 1} & 0 & 0 & 0 & c_{\text{emp}, 2} \
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \
\end{bmatrix}
\begin{bmatrix}
f_{t-1} \
f_{t-2} \
u_{\text{indprod}, t-1} \
u_{\text{income}, t-1} \
u_{\text{sales}, t-1} \
u_{\text{emp}, t-1} \
u_{\text{indprod}, t-2} \
u_{\text{income}, t-2} \
u_{\text{sales}, t-2} \
u_{\text{emp}, t-2} \
\end{bmatrix}
+ R \begin{bmatrix}
\eta_t \
\varepsilon_{t}
\end{bmatrix}
$$
the DynamicFactor model handles setting up the state space representation and, in the DynamicFactor.update method, it fills in the fitted parameter values into the appropriate locations.
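The slice-style assignment that the update method performs can be illustrated with a small NumPy sketch; the shapes follow the extended model (4 observed series, 12 states), but all numeric values here are hypothetical:

```python
import numpy as np

# Toy sketch (hypothetical values): the update step writes fitted loadings
# into fixed slots of the design matrix, just like the slice assignment
# self.ssm['design', 3, 1:4] = params[-3:] used in the subclass.
k_endog, k_states = 4, 12
design = np.zeros((k_endog, k_states))

# Contemporaneous loadings for the first three series (column 0 = f_t)
design[0:3, 0] = [0.9, 0.8, 0.7]     # lambda_indprod, lambda_income, lambda_sales
design[3, 0] = 0.5                   # lambda_emp,1

# Suppose the last three fitted parameters are the extra employment loadings
params = np.array([0.3, 0.2, 0.1])   # lambda_emp,2..4 (hypothetical)
design[3, 1:4] = params              # columns 1..3 correspond to f_{t-1}..f_{t-3}

print(design[3, :4])                 # row for y_emp: loadings on f_t .. f_{t-3}
```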
The extended specification is the same as in the previous example, except that we also want to allow employment to depend on lagged values of the factor. This creates a change to the $y_{\text{emp},t}$ equation. Now we have:
$$
\begin{align}
y_{i,t} & = \lambda_i f_t + u_{i,t} \qquad & i \in \{\text{indprod}, \text{income}, \text{sales}\} \
y_{i,t} & = \lambda_{i,0} f_t + \lambda_{i,1} f_{t-1} + \lambda_{i,2} f_{t-2} + \lambda_{i,3} f_{t-3} + u_{i,t} \qquad & i = \text{emp} \
u_{i,t} & = c_{i,1} u_{i,t-1} + c_{i,2} u_{i,t-2} + \varepsilon_{i,t} \qquad & \varepsilon_{i,t} \sim N(0, \sigma_i^2) \
f_t & = a_1 f_{t-1} + a_2 f_{t-2} + \eta_t \qquad & \eta_t \sim N(0, I)\
\end{align}
$$
Now, the corresponding observation equation should look like the following:
$$
\begin{bmatrix}
y_{\text{indprod}, t} \
y_{\text{income}, t} \
y_{\text{sales}, t} \
y_{\text{emp}, t} \
\end{bmatrix} = \begin{bmatrix}
\lambda_\text{indprod} & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
\lambda_\text{income} & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \
\lambda_\text{sales} & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \
\lambda_\text{emp,1} & \lambda_\text{emp,2} & \lambda_\text{emp,3} & \lambda_\text{emp,4} & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \
\end{bmatrix}
\begin{bmatrix}
f_t \
f_{t-1} \
f_{t-2} \
f_{t-3} \
u_{\text{indprod}, t} \
u_{\text{income}, t} \
u_{\text{sales}, t} \
u_{\text{emp}, t} \
u_{\text{indprod}, t-1} \
u_{\text{income}, t-1} \
u_{\text{sales}, t-1} \
u_{\text{emp}, t-1} \
\end{bmatrix}
$$
Notice that we have introduced two new state variables, $f_{t-2}$ and $f_{t-3}$, which means we need to update the transition equation:
$$
\begin{bmatrix}
f_t \
f_{t-1} \
f_{t-2} \
f_{t-3} \
u_{\text{indprod}, t} \
u_{\text{income}, t} \
u_{\text{sales}, t} \
u_{\text{emp}, t} \
u_{\text{indprod}, t-1} \
u_{\text{income}, t-1} \
u_{\text{sales}, t-1} \
u_{\text{emp}, t-1} \
\end{bmatrix} = \begin{bmatrix}
a_1 & a_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 0 & c_{\text{indprod}, 1} & 0 & 0 & 0 & c_{\text{indprod}, 2} & 0 & 0 & 0 \
0 & 0 & 0 & 0 & 0 & c_{\text{income}, 1} & 0 & 0 & 0 & c_{\text{income}, 2} & 0 & 0 \
0 & 0 & 0 & 0 & 0 & 0 & c_{\text{sales}, 1} & 0 & 0 & 0 & c_{\text{sales}, 2} & 0 \
0 & 0 & 0 & 0 & 0 & 0 & 0 & c_{\text{emp}, 1} & 0 & 0 & 0 & c_{\text{emp}, 2} \
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \
\end{bmatrix}
\begin{bmatrix}
f_{t-1} \
f_{t-2} \
f_{t-3} \
f_{t-4} \
u_{\text{indprod}, t-1} \
u_{\text{income}, t-1} \
u_{\text{sales}, t-1} \
u_{\text{emp}, t-1} \
u_{\text{indprod}, t-2} \
u_{\text{income}, t-2} \
u_{\text{sales}, t-2} \
u_{\text{emp}, t-2} \
\end{bmatrix}
+ R \begin{bmatrix}
\eta_t \
\varepsilon_{t}
\end{bmatrix}
$$
This model cannot be handled out-of-the-box by the DynamicFactor class, but it can be handled by creating a subclass that alters the state space representation in the appropriate way.
First, notice that if we had set factor_order = 4, we would almost have what we wanted. In that case, the last line of the observation equation would be:
$$
\begin{bmatrix}
\vdots \
y_{\text{emp}, t} \
\end{bmatrix} = \begin{bmatrix}
\vdots & & & & & & & & & & & \vdots \
\lambda_\text{emp,1} & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \
\end{bmatrix}
\begin{bmatrix}
f_t \
f_{t-1} \
f_{t-2} \
f_{t-3} \
\vdots
\end{bmatrix}
$$
and the first line of the transition equation would be:
$$
\begin{bmatrix}
f_t \
\vdots
\end{bmatrix} = \begin{bmatrix}
a_1 & a_2 & a_3 & a_4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
\vdots & & & & & & & & & & & \vdots \
\end{bmatrix}
\begin{bmatrix}
f_{t-1} \
f_{t-2} \
f_{t-3} \
f_{t-4} \
\vdots
\end{bmatrix}
+ R \begin{bmatrix}
\eta_t \
\varepsilon_{t}
\end{bmatrix}
$$
Relative to what we want, we have the following differences:
In the above situation, the $\lambda_{\text{emp}, j}$ are forced to be zero for $j > 0$, and we want them to be estimated as parameters.
We only want the factor to transition according to an AR(2), but under the above situation it is an AR(4).
Our strategy will be to subclass DynamicFactor, and let it do most of the work (setting up the state space representation, etc.) where it assumes that factor_order = 4. The only things we will actually do in the subclass will be to fix those two issues.
First, here is the full code of the subclass; it is discussed below. It is important to note at the outset that none of the methods defined below could have been omitted. In fact, the methods __init__, start_params, param_names, transform_params, untransform_params, and update form the core of all state space models in Statsmodels, not just the DynamicFactor class.
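As a schematic illustration of that core-method structure (this is not the actual ExtendedDFM code, which subclasses statsmodels' DynamicFactor; here the parent is a stub so the sketch runs standalone):

```python
import numpy as np

# Schematic skeleton of the override pattern. Method names follow the
# statsmodels state space API; the parent is a stub with 5 pretend parameters.
class StubDFM:
    def __init__(self): self._params = None
    @property
    def start_params(self): return np.zeros(5)
    @property
    def param_names(self): return ['p%d' % i for i in range(5)]
    def transform_params(self, unconstrained): return unconstrained   # identity stub
    def untransform_params(self, constrained): return constrained
    def update(self, params, **kwargs): self._params = np.asarray(params)

class ExtendedStub(StubDFM):
    """Adds three extra parameters on top of the parent's five."""
    @property
    def start_params(self):
        return np.r_[super().start_params, [0.0, 0.0, 0.0]]           # 3 new loadings
    @property
    def param_names(self):
        return super().param_names + ['new.L1', 'new.L2', 'new.L3']
    def transform_params(self, unconstrained):
        # pass the extra three through untouched; the parent handles the rest
        return np.r_[super().transform_params(unconstrained[:-3]), unconstrained[-3:]]
    def untransform_params(self, constrained):
        return np.r_[super().untransform_params(constrained[:-3]), constrained[-3:]]
    def update(self, params, **kwargs):
        super().update(params[:-3], **kwargs)   # parent places its own parameters
        self._extra = params[-3:]               # then we place the three new ones

mod = ExtendedStub()
mod.update(mod.start_params)
print(len(mod.param_names))   # 8
```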
End of explanation
"""
# Create the model
extended_mod = ExtendedDFM(endog)
initial_extended_res = extended_mod.fit(maxiter=1000, disp=False)
extended_res = extended_mod.fit(initial_extended_res.params, method='nm', maxiter=1000)
print(extended_res.summary(separate_params=False))
"""
Explanation: So what did we just do?
__init__
The important step here was specifying the base dynamic factor model which we were operating with. In particular, as described above, we initialize with factor_order=4, even though we will only end up with an AR(2) model for the factor. We also performed some general setup-related tasks.
start_params
start_params are used as initial values in the optimizer. Since we are adding three new parameters, we need to pass those in. If we hadn't done this, the optimizer would use the default starting values, which would be three elements short.
param_names
param_names are used in a variety of places, but especially in the results class. Below we get a full result summary, which is only possible when all the parameters have associated names.
transform_params and untransform_params
The optimizer selects possible parameter values in an unconstrained way. That's not usually desired (since variances can't be negative, for example), and transform_params is used to transform the unconstrained values used by the optimizer to constrained values appropriate to the model. Variance terms are typically squared (to force them to be positive), and AR lag coefficients are often constrained to lead to a stationary model. untransform_params is used for the reverse operation (and is important because starting parameters are usually specified in terms of values appropriate to the model, and we need to convert them to parameters appropriate to the optimizer before we can begin the optimization routine).
Even though we don't need to transform or untransform our new parameters (the loadings can in theory take on any values), we still need to modify this function for two reasons:
The version in the DynamicFactor class is expecting 3 fewer parameters than we have now. At a minimum, we need to handle the three new parameters.
The version in the DynamicFactor class constrains the factor lag coefficients to be stationary as though it was an AR(4) model. Since we actually have an AR(2) model, we need to re-do the constraint. We also set the last two autoregressive coefficients to be zero here.
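A toy sketch of those two fixes — treating only the first two factor lags as free and zeroing the rest — might look like the following. Note the tanh bound here is only a stand-in for statsmodels' real partial-autocorrelation-based stationarity transform; it is not the library's actual mapping:

```python
import numpy as np

def transform_ar2_of_4(unconstrained):
    """Toy version of the re-constraint: treat only the first two factor lag
    coefficients as free (AR(2)) and force lags 3-4 to zero. tanh is used
    here merely to keep each coefficient inside (-1, 1) as a crude
    stand-in for a proper stationarity constraint."""
    ar = np.tanh(unconstrained[:2])
    return np.r_[ar, 0.0, 0.0]     # a_1, a_2, a_3 = 0, a_4 = 0

out = transform_ar2_of_4(np.array([0.5, -0.3, 9.9, 9.9]))
print(out)
```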
update
The most important reason we need to specify a new update method is because we have three new parameters that we need to place into the state space formulation. In particular we let the parent DynamicFactor.update class handle placing all the parameters except the three new ones in to the state space representation, and then we put the last three in manually.
End of explanation
"""
extended_res.plot_coefficients_of_determination(figsize=(8,2));
fig, ax = plt.subplots(figsize=(13,3))
# Compute the index
extended_coincident_index = compute_coincident_index(extended_mod, extended_res)
# Plot the factor
dates = endog.index._mpl_repr()
ax.plot(dates, coincident_index, '-', linewidth=1, label='Basic model')
ax.plot(dates, extended_coincident_index, '--', linewidth=3, label='Extended model')
ax.plot(usphci.index._mpl_repr(), usphci, label='USPHCI')
ax.legend(loc='lower right')
ax.set(title='Coincident indices, comparison')
# Retrieve and also plot the NBER recession indicators
ylim = ax.get_ylim()
ax.fill_between(dates[:-3], ylim[0], ylim[1], rec.values[:,0], facecolor='k', alpha=0.1);
"""
Explanation: Although this model increases the likelihood, it is not preferred by the AIC and BIC measures, which penalize the additional three parameters.
Furthermore, the qualitative results are unchanged, as we can see from the updated $R^2$ chart and the new coincident index, both of which are practically identical to the previous results.
End of explanation
"""
|
ysasaki6023/NeuralNetworkStudy | examples/visualization.ipynb | mit | def target(x):
return np.exp(-(x - 2)**2) + np.exp(-(x - 6)**2/10) + 1/ (x**2 + 1)
x = np.linspace(-2, 10, 1000)
y = target(x)
plt.plot(x, y)
"""
Explanation: Target Function
Let's create a target 1-D function with multiple local maxima to test and visualize how the BayesianOptimization package works. The target function we will try to maximize is the following:
$$f(x) = e^{-(x - 2)^2} + e^{-\frac{(x - 6)^2}{10}} + \frac{1}{x^2 + 1}, $$ its maximum is at $x = 2$ and we will restrict the interval of interest to $x \in (-2, 10)$.
End of explanation
"""
bo = BayesianOptimization(target, {'x': (-2, 10)})
"""
Explanation: Create a BayesianOptimization Object
Enter the target function to be maximized, its variable(s) and their corresponding ranges (see this example for a multi-variable case). A minimum number of 2 initial guesses is necessary to kick start the algorithms, these can either be random or user defined.
End of explanation
"""
gp_params = {'corr': 'cubic'}
bo.maximize(init_points=2, n_iter=0, acq='ucb', kappa=5, **gp_params)
"""
Explanation: In this example we will use the Upper Confidence Bound (UCB) as our utility function. It has the free parameter
$\kappa$, which controls the balance between exploration and exploitation; we will set $\kappa=5$ which, in this case, makes the algorithm quite bold. Additionally, we will use the cubic correlation in our Gaussian Process.
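A minimal sketch of the UCB score itself, mu + kappa·sigma, shows how kappa trades exploitation for exploration (toy numbers, not the package's internals):

```python
import numpy as np

# Minimal sketch of the UCB acquisition: score = mu + kappa * sigma.
# A larger kappa weights the predictive uncertainty more heavily,
# pushing the search toward unexplored regions (exploration).
def ucb(mu, sigma, kappa=5.0):
    return np.asarray(mu) + kappa * np.asarray(sigma)

mu = np.array([0.2, 0.5, 0.4])
sigma = np.array([0.30, 0.01, 0.10])
print(np.argmax(ucb(mu, sigma, kappa=5.0)))   # 0: bold, chases uncertainty
print(np.argmax(ucb(mu, sigma, kappa=0.1)))   # 1: greedy, chases the mean
```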
End of explanation
"""
def posterior(bo, xmin=-2, xmax=10):
bo.gp.fit(bo.X, bo.Y)
mu, sigma2 = bo.gp.predict(np.linspace(xmin, xmax, 1000).reshape(-1, 1), eval_MSE=True)
return mu, np.sqrt(sigma2)
def plot_gp(bo, x, y):
fig = plt.figure(figsize=(16, 10))
fig.suptitle('Gaussian Process and Utility Function After {} Steps'.format(len(bo.X)), fontdict={'size':30})
gs = gridspec.GridSpec(2, 1, height_ratios=[3, 1])
axis = plt.subplot(gs[0])
acq = plt.subplot(gs[1])
mu, sigma = posterior(bo)
axis.plot(x, y, linewidth=3, label='Target')
axis.plot(bo.X.flatten(), bo.Y, 'D', markersize=8, label=u'Observations', color='r')
axis.plot(x, mu, '--', color='k', label='Prediction')
axis.fill(np.concatenate([x, x[::-1]]),
np.concatenate([mu - 1.9600 * sigma, (mu + 1.9600 * sigma)[::-1]]),
alpha=.6, fc='c', ec='None', label='95% confidence interval')
axis.set_xlim((-2, 10))
axis.set_ylim((None, None))
axis.set_ylabel('f(x)', fontdict={'size':20})
axis.set_xlabel('x', fontdict={'size':20})
utility = bo.util.utility(x.reshape((-1, 1)), bo.gp, 0)
acq.plot(x, utility, label='Utility Function', color='purple')
acq.plot(x[np.argmax(utility)], np.max(utility), '*', markersize=15,
label=u'Next Best Guess', markerfacecolor='gold', markeredgecolor='k', markeredgewidth=1)
acq.set_xlim((-2, 10))
acq.set_ylim((0, np.max(utility) + 0.5))
acq.set_ylabel('Utility', fontdict={'size':20})
acq.set_xlabel('x', fontdict={'size':20})
axis.legend(loc=2, bbox_to_anchor=(1.01, 1), borderaxespad=0.)
acq.legend(loc=2, bbox_to_anchor=(1.01, 1), borderaxespad=0.)
"""
Explanation: Plotting and visualizing the algorithm at each step
Let's first define a couple of functions to make plotting easier
End of explanation
"""
plot_gp(bo, x, y)
"""
Explanation: Two random points
After we probe two points at random, we can fit a Gaussian Process and start the Bayesian optimization procedure. Two points should give us an uneventful posterior, with the uncertainty growing as we move further from the observations.
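A hand-rolled 1-D GP posterior (RBF kernel, made-up grid) illustrates that behavior — the predictive sigma collapses at the probed points and grows away from them. This is only a sketch, not the package's internals:

```python
import numpy as np

# Hand-rolled 1-D Gaussian Process posterior with an RBF kernel.
def rbf(a, b, ls=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)

def gp_posterior(X, y, Xs, noise=1e-8):
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks, Kss = rbf(X, Xs), rbf(Xs, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks))
    return mu, np.sqrt(np.clip(var, 0, None))

X = np.array([1.0, 7.0])          # two probed points (hypothetical)
y = np.exp(-(X - 2)**2) + np.exp(-(X - 6)**2 / 10) + 1 / (X**2 + 1)
Xs = np.linspace(-2, 10, 13)
mu, sigma = gp_posterior(X, y, Xs)
# sigma collapses near the observations and grows away from them
print(sigma.min() < 0.01, sigma.max() > 0.5)
```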
End of explanation
"""
bo.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(bo, x, y)
"""
Explanation: After one step of GP (and two random points)
End of explanation
"""
bo.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(bo, x, y)
"""
Explanation: After two steps of GP (and two random points)
End of explanation
"""
bo.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(bo, x, y)
"""
Explanation: After three steps of GP (and two random points)
End of explanation
"""
bo.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(bo, x, y)
"""
Explanation: After four steps of GP (and two random points)
End of explanation
"""
bo.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(bo, x, y)
"""
Explanation: After five steps of GP (and two random points)
End of explanation
"""
bo.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(bo, x, y)
"""
Explanation: After six steps of GP (and two random points)
End of explanation
"""
bo.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(bo, x, y)
"""
Explanation: After seven steps of GP (and two random points)
End of explanation
"""
|
folivetti/BIGDATA | Spark/Lab5b_kmeans_quantiza.ipynb | mit | import os
import numpy as np
def parseRDD(point):
""" Parser for the current dataset. It receives a data point and return
a sentence (third field).
Args:
point (str): input data point
Returns:
str: a string
"""
data = point.split('\t')
return (int(data[0]),data[2])
def notempty(point):
""" Returns whether the point string is not empty
Args:
point (str): input string
Returns:
bool: True if it is not empty
"""
return len(point[1])>0
filename = os.path.join("Data","Aula04","MovieReviews2.tsv")
rawRDD = sc.textFile(filename,100)
header = rawRDD.take(1)[0]
dataRDD = (rawRDD
#.sample(False, 0.1, seed=42)
.filter(lambda x: x!=header)
.map(parseRDD)
.filter(notempty)
#.sample( False, 0.1, 42 )
)
print 'Read {} lines'.format(dataRDD.count())
print 'Sample line: {}'.format(dataRDD.takeSample(False, 1)[0])
"""
Explanation: Lab 5b - k-Means for Feature Quantization
Clustering algorithms, besides being used in exploratory analysis to extract similarity patterns among objects, can also be used to compress the feature space.
In this notebook we will use our Sentiment Movie Reviews dataset for the experiments. First we will apply the word2vec technique, which learns a mapping from the tokens of a corpus to a feature vector. Next, we will use the k-Means algorithm to compress the information in these features and project each object into a fixed-size feature space.
The exercise cells start with the comment # EXERCICIO and the code to be completed is marked by the comments <COMPLETAR>.
In this notebook:
Part 1: Word2Vec
Part 2: k-Means for feature quantization
Part 3: Applying a k-NN
Part 0: Preliminaries
For this notebook we will use the Movie Reviews dataset, which will also be used in the second project.
The dataset has its fields separated by '\t' and the following format:
"phrase id","sentence id","Phrase","Sentiment"
For this lab we will use only the "Phrase" field.
End of explanation
"""
# EXERCICIO
import re
split_regex = r'\W+'
stopfile = os.path.join("Data","Aula04","stopwords.txt")
stopwords = set(sc.textFile(stopfile).collect())
def tokenize(string):
""" An implementation of input string tokenization that excludes stopwords
Args:
string (str): input string
Returns:
list: a list of tokens without stopwords
"""
return <COMPLETAR>
wordsRDD = dataRDD.map(lambda x: tokenize(x[1]))
print wordsRDD.take(1)[0]
# TEST Tokenize a String (1a)
assert wordsRDD.take(1)[0]==[u'quiet', u'introspective', u'entertaining', u'independent', u'worth', u'seeking'], 'lista incorreta!'
print 'ok!'
"""
Explanation: Part 1: Word2Vec
The word2vec technique uses a semantic neural network to learn a vector representation of each token in a corpus, such that semantically similar words are also similar in the vector representation.
PySpark contains an implementation of this technique; to apply it, simply pass in an RDD in which each object represents a document and each document is represented by a list of tokens in the order they originally appear in the corpus. After the training process, we can use the transform method to map each token into its vector representation.
At this point, each object in our dataset will be represented by a matrix of variable size.
(1a) Generating an RDD of tokens
Use the tokenize function from Lab4d to generate an RDD wordsRDD containing lists of tokens from our original dataset.
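As a standalone illustration of this kind of tokenizer (plain Python, with a hypothetical stopword list — an illustration only, not the graded answer):

```python
import re

# Sketch of a tokenizer: lowercase, split on non-word characters,
# drop empty strings and stopwords. The stopword set here is made up.
split_regex = r'\W+'
stopwords = {'the', 'a', 'is'}

def tokenize_sketch(string):
    tokens = re.split(split_regex, string.lower())
    return [t for t in tokens if t and t not in stopwords]

print(tokenize_sketch('The movie is quiet, introspective -- and entertaining!'))
```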
End of explanation
"""
# EXERCICIO
from pyspark.mllib.feature import Word2Vec
model = Word2Vec().<COMPLETAR>
print model.transform(u'entertaining')
print model.findSynonyms(u'entertaining', 2)
dist = np.abs(model.transform(u'entertaining')-np.array([-0.246186971664,-0.127226486802,0.0271916668862,0.0112947737798,-0.206053063273])).mean()
assert dist<1e-6, 'valores incorretos'
print 'ok!'
assert model.findSynonyms(u'entertaining', 1)[0][0] == 'affair', 'valores incorretos'
print 'ok!'
"""
Explanation: (1b) Applying the word2vec transformation
Create a word2vec model by applying the fit method to the RDD created in the previous exercise.
To apply this method, build a pipeline of methods: first call Word2Vec(), then apply setVectorSize() with the desired vector size (use size 5), followed by setSeed() with the random seed for controlled experiments (we will use 42) and, finally, fit() with our wordsRDD as the parameter.
End of explanation
"""
# EXERCICIO
uniqueWords = (wordsRDD
.<COMPLETAR>
.<COMPLETAR>
.<COMPLETAR>
.<COMPLETAR>
.collect()
)
print '{} tokens únicos'.format(len(uniqueWords))
w2v = {}
for w in uniqueWords:
w2v[w] = <COMPLETAR>
w2vb = sc.broadcast(w2v)
print 'Vetor entertaining: {}'.format( w2v[u'entertaining'])
vectorsRDD = (wordsRDD
.<COMPLETAR>
)
recs = vectorsRDD.take(2)
firstRec, secondRec = recs[0], recs[1]
print firstRec.shape, secondRec.shape
# TEST Tokenizing the small datasets (1c)
assert len(uniqueWords) == 3332, 'valor incorreto'
print 'ok!'
assert np.mean(np.abs(w2v[u'entertaining']-[-0.24618697, -0.12722649, 0.02719167, 0.01129477, -0.20605306]))<1e-6,'valor incorreto'
print 'ok!'
assert secondRec.shape == (10,5)
print 'ok!'
"""
Explanation: (1c) Generating an RDD of matrices
As a first step, we need to build a dictionary whose keys are the words and whose values are the representative vectors of those words.
To do that, we will first generate a list uniqueWords containing the unique words of the words RDD, removing those that appear fewer than 5 times $^1$. Next, we will create a dictionary w2v whose key is a token and whose value is an np.array of that token's transformed vector$^2$.
Finally, we will create an RDD called vectorsRDD in which each record is represented by a matrix where each row represents a transformed word.
1
In PySpark 1.3 the Word2Vec model uses only the tokens that appear more than 5 times in the corpus; in version 1.4 this is parameterized.
2
In PySpark 1.4 this can be done using the `getVectors()` method.
End of explanation
"""
# EXERCICIO
from pyspark.mllib.clustering import KMeans
vectors2RDD = sc.parallelize(np.array(w2v.values()),1)
print 'Sample vector: {}'.format(vectors2RDD.take(1))
modelK = KMeans.<COMPLETAR>
clustersRDD = vectors2RDD.<COMPLETAR>
print '10 first clusters allocation: {}'.format(clustersRDD.take(10))
# TEST Amazon record with the most tokens (1d)
assert clustersRDD.take(10)==[134, 126, 209, 221, 401, 485, 197, 269, 296, 265], 'valor incorreto'
print 'ok'
"""
Explanation: Part 2: k-Means for feature quantization
At this point it is easy to see that we cannot apply our supervised learning techniques to this dataset:
Logistic regression requires a fixed-size vector representing each object
k-NN requires a clear way of comparing two objects; which similarity metric should we apply?
To resolve this situation, we will apply a new transformation to our RDD. First, we will take advantage of the fact that two tokens with similar meaning are mapped to similar vectors, in order to group them into a single feature.
By applying k-Means to this set of vectors, we can create $k$ representative points and, for each document, generate a histogram counting the tokens that fall in each generated cluster.
(2a) Clustering the vectors and creating representative centers
As a first step we will generate an RDD with the values of the w2v dictionary. Next, we will apply the k-Means algorithm with $k = 200$.
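The quantization idea — nearest-centroid assignment followed by a k-bin count histogram — can be sketched in pure NumPy with made-up centroids and word vectors:

```python
import numpy as np

# Toy sketch of the quantization step: each word vector is assigned to its
# nearest of k centroids, and a document becomes a k-bin histogram of counts.
# Centroids and word vectors below are made up for illustration.
centroids = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])   # k = 3
doc_word_vectors = np.array([[0.1, 0.0], [0.9, 1.1], [0.2, 0.1], [0.1, 0.9]])

k = len(centroids)
features = np.zeros(k)
for v in doc_word_vectors:
    c = np.argmin(((centroids - v)**2).sum(axis=1))   # nearest centroid
    features[c] += 1
print(features)   # fixed-size vector regardless of document length
```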
End of explanation
"""
# EXERCICIO
def quantizador(point, model, k, w2v):
key = <COMPLETAR>
words = <COMPLETAR>
matrix = np.array( <COMPLETAR> )
features = np.zeros(k)
for v in matrix:
c = <COMPLETAR>
features[c] += 1
return (key, features)
quantRDD = dataRDD.map(lambda x: quantizador(x, modelK, 500, w2v))
print quantRDD.take(1)
# TEST Implement a TF function (2a)
assert quantRDD.take(1)[0][1].sum() == 5, 'valores incorretos'
print 'ok!'
"""
Explanation: (2b) Transforming the data matrix into quantized vectors
The next step is to transform our RDD of phrases into an RDD of pairs (id, quantized vector). For that we will create a function quantizador that receives as parameters the object, the k-means model, the value of k and the word2vec dictionary.
For each point, we will extract the id and apply the tokenize function to the string. Next, we transform the token list into a word2vec matrix. Finally, we apply each vector of that matrix to the k-Means model, generating a vector of size $k$ in which each position $i$ indicates how many tokens belong to cluster $i$.
End of explanation
"""
dataNorms = quantRDD.map(lambda rec: (rec[0],np.sqrt(rec[1].dot(rec[1]))))
dataNormsBroadcast = sc.broadcast(dataNorms.collectAsMap())
"""
Explanation: Part 3: Applying k-NN
In this part, we will test the k-NN algorithm to check whether the quantized dataset is able to capture the similarity between documents.
(4a) Pre-compute the vector norms
Just as in Lab 4c, compute the norm of the vectors of quantRDD.
End of explanation
"""
# EXERCICIO
from itertools import product
def calcsim(rec):
items = list(rec[1])
return <COMPLETAR>
newRDD = (quantRDD
.<COMPLETAR>
.<COMPLETAR>
.<COMPLETAR>
.<COMPLETAR>
.<COMPLETAR>
.cache()
)
newcount = newRDD.count()
print newcount
assert newcount==11796442, 'incorrect value'
print 'ok'
"""
Explanation: (4b) Compute the cosine similarity between pairs of records
Complete the cosineSim function, which receives a pair ( (doc1, vec1), (doc2, vec2) ) and computes the cosine similarity between these two documents.
Using the inverted index, group the dataset by key (token) and apply the genList function, which should generate a list of tuples (token, ((doc1,tfidf),(doc2,tfidf))) of all pairs of documents that have this token in common, except for the cases doc1==doc2.
Next, generate tuples of the form ( (doc1, doc2), tfidf1*tfidf2/(norm1*norm2) ) and reduce them to sum these values under the same key.
This way we will have the records of document pairs with non-zero similarity, together with their computed similarity.
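The per-pair computation reduces to the usual cosine similarity, which can be sketched in NumPy (toy vectors):

```python
import numpy as np

# With the norms pre-computed, cosine similarity reduces to summing
# tfidf1*tfidf2/(norm1*norm2) over the tokens two documents share.
def cosine_sim(vec1, vec2):
    n1, n2 = np.sqrt(vec1 @ vec1), np.sqrt(vec2 @ vec2)
    return (vec1 @ vec2) / (n1 * n2)

a = np.array([2.0, 1.0, 0.0])
b = np.array([2.0, 1.0, 0.0])
c = np.array([0.0, 0.0, 3.0])
print(cosine_sim(a, b))   # ~1.0: identical direction
print(cosine_sim(a, c))   # 0.0: no shared non-zero bins
```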
End of explanation
"""
# EXERCICIO
def genklist(rec,k):
""" Generate the list of the k most similar documents to the key
Args:
record: a pair, (doc, [(doc,sim)])
k: number of most similar elements
Returns:
pair: (doc, [(doc,sim)])
"""
<COMPLETAR>
return (key, docs[:k])
def knn(simRDD, k):
""" Generate the knn RDD for a given RDD.
Args:
simRDD: RDD of ( (doc1,doc2), sim)
k: number of most similar elements
Returns:
RDD: RDD of ( doc1, [docs, sims])
"""
ksimRDD = (simRDD
.<COMPLETAR>
.<COMPLETAR>
.<COMPLETAR>
)
return ksimRDD
ksimReviewsRDD = knn(newRDD, 3)
ksimReviewsRDD.take(3)
print dataRDD.filter(lambda x: x[0] in [55300,39009,130973,66284]).collect()
"""
Explanation: (4f) k-NN
We will now generate the list of the k most similar documents for each document.
Generate an RDD starting from commonTokens in such a way as to have ( id1, (id2, sim) )
Group by key and transform the RDD into ( id1, [ (id,sim) ] ) where the list must have k elements
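The per-key top-k step can be sketched in plain Python with toy (doc, sim) pairs:

```python
# Sketch of the per-key top-k step: sort the (doc, sim) list by similarity,
# descending, and keep the first k entries. Data below is made up.
neighbors = [(101, 0.12), (55, 0.87), (7, 0.45), (23, 0.87), (9, 0.33)]
k = 3
top_k = sorted(neighbors, key=lambda ds: ds[1], reverse=True)[:k]
print(top_k)   # [(55, 0.87), (23, 0.87), (7, 0.45)]
```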
End of explanation
"""
|
emmaqian/DataScientistBootcamp | DS_HW1_Huimin Qian_052617.ipynb | mit | # import the necessary package at the very beginning
import numpy as np
import pandas as pd
print(str(float(100*177/891)) + '%')
"""
Explanation: Data Application Lab (数据应用学院) Data Scientist Program
Hw1
End of explanation
"""
def foolOne(x): # note: assume x is a number
y = x * 2
y -= 25
return y
## Type Your Answer Below ##
foolOne_lambda = lambda x: x*2-25
# Generate a random 3*4 matrix for test
tlist = np.random.randn(3,4)
tlist
# Check if the lambda function yields same results as previous function
def test_foolOne(tlist, func1, func2):
if (func1(tlist) == func2(tlist)).all():
print("Same results!")
test_foolOne(tlist, foolOne, foolOne_lambda)
def foolTwo(x): # note: assume x here is a string
if x.startswith('g'):
return True
else:
return False
## Type Your Answer Below ##
foolTwo_lambda = lambda x: x.startswith('g')
# Generate a random 3*4 matrix of strings for test
# reference: https://pythontips.com/2013/07/28/generating-a-random-string/
# reference: http://www.programcreek.com/python/example/1246/string.ascii_lowercase
import random
import string
def random_string(size):
new_string = ''.join([random.choice(string.ascii_letters + string.digits) for n in range(size)])
return new_string
def test_foolTwo():
test_string = random_string(6)
if foolTwo_lambda(test_string) == foolTwo(test_string):
return True
for i in range(10):
if test_foolTwo() is False:
print('Different results!')
"""
Explanation: 1. Please rewrite following functions to lambda expressions
Example:
```
def AddOne(x):
y=x+1
return y
addOneLambda = lambda x: x+1
```
End of explanation
"""
## Type Your Answer Below ##
# reference: https://docs.python.org/3/tutorial/datastructures.html
# tuple is immutable. They cannot be changed once they are made.
# tuples are easier for the python interpreter to deal with and therefore might end up being easier
# tuples might indicate that each entry has a distinct meaning and their order has some meaning (e.g., year)
# Another pragmatic reason to use tuple is when you have data which you know should not be changed (e.g., constant)
# tuples can be used as keys in dictionaries
# tuples usually contain a heterogeneous sequence of elements that are accessed via unpacking or indexing (or even by attribute in the case of namedtuples).
tuple1 = (1, 2, 3, 'a', True)
print('tuple: ', tuple1)
print('1st item of tuple: ', tuple1[0])
tuple1[0] = 4 # item assignment won't work for tuple
# tuple with just one element
tuple2 = (1) # just a number, so has no elements
print(type(tuple2))
tuple2[0]
# tuple with just one element
tuple3 = (1, )
print(type(tuple3))
tuple3[0]
# Question for TA: is tuple comprehension supported?
tuple4 = (char for char in 'abcdabcdabcd' if char not in 'ac')
print(tuple4)
# Question for TA: is the following two tuples the same?
tuple4= (1,2,'a'),(True, False)
tuple5 = ((1,2,'a'),(True, False))
print(tuple4)
print(tuple5)
# lists' elements are usually homogeneous and are accessed by iterating over the list.
list1 = [1, 2, 3, 'a', True]
print('list1: ', list1)
print('1st item of list: ', list1[0])
list1[0] = 4 # item assignment works for list
# list comprehensions
list_int = [element for element in list1 if type(element)==int]
print("list_int", list2)
## Type Your Answer Below ##
# A set is an unordered collection with no duplicate elements.
# set() can be used to eliminate duplicate entries
list1 = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana']
set1 = set(list1)
print(set1)
# set can be used for membership testing
set2 = {1, 2, 'abc', True}
print('abc' in set2) # membership testing
set1[0] # set does not support indexing
# set comprehensions
set4 = {char for char in 'abcdabcdabcd' if char not in 'ac'}
print(set4)
"""
Explanation: 2. What's the difference between tuple and list?
End of explanation
"""
# Calculate the time cost differences between set and list
import time
import random
def compute_search_speed_difference(scope):
list1 = []
dic1 = {}
set1 = set(dic1)
for i in range(0,scope):
list1.append(i)
set1.add(i)
random_n = random.randint(0,100000) # look for this random integer in both list and set
list_search_starttime = time.time()
list_search = random_n in list1
list_search_endtime = time.time()
list_search_time = list_search_endtime - list_search_starttime # Calculate the look-up time in list
#print("The look up time for the list is:")
#print(list_search_time)
set_search_starttime = time.time()
set_search = random_n in set1
set_search_endtime = time.time()
set_search_time = set_search_endtime - set_search_starttime # Calculate the look-up time in set
#print("The look up time for the set is:")
#print(set_search_time)
speed_difference = list_search_time - set_search_time
return(speed_difference)
def test(testing_times, scope):
test_speed_difference = []
for i in range(0,testing_times):
test_speed_difference.append(compute_search_speed_difference(scope))
return(test_speed_difference)
#print(test(1000, 100000)) # test 10 times can print out the time cost differences
print("On average, the look up time for a list is more than a set in:")
print(np.mean(test(100, 1000)))
"""
Explanation: 3. Why set is faster than list in python?
Answers:
Set and list are implemented using two different data structures - Hash tables and Dynamic arrays.
. Python lists are implemented as dynamic arrays (which preserve insertion order), which must be searched one by one, comparing every single member for equality, with lookup speed O(n) depending on the size of the list.
. Python sets are implemented as hash tables, which can directly jump and locate the bucket (the position determined by the object's hash) using hash in a constant speed O(1), regardless of the size of the set.
End of explanation
"""
## Type Your Answer Below ##
student = np.array([0, 'Alex', 3, 'M'])
print(student) # all the values' datatype is converted to str
"""
Explanation: 4. What's the major difference between array in numpy and series in pandas?
Pandas series (which can contain values of different data types) is much more general and flexible than the one-dimensional Numpy array (which can only contain one data type).
While a Numpy array has an implicitly defined integer index used to access the values, the Pandas series has an explicitly defined index (which can be any data type) associated with the values (which gives the series object additional capabilities).
What are the relationships among Numpy, Pandas and SciPy:
. Numpy is a library for efficient array computations, modeled after Matlab. Arrays differ from plain Python lists in the way they are stored and handled. Array elements stay together in memory, so they can be quickly accessed. Numpy also supports quick subindexing (a[0,:,2]). Furthermore, Numpy provides vectorized mathematical functions (when you call numpy.sin(a), the sine function is applied on every element of array a), which are faster than a Python for loop.
. Pandas library is good for analyzing tabular data for exploratory data analysis, statistics and visualization. It's used to understand the data you have.
. Scipy provides a large menu of libraries for scientific computation, such as integration, interpolation, signal processing, linear algebra, statistics. It's built upon the infrastructure of Numpy. It's good for performing scientific and engineering calculations.
. Scikit-learn is a collection of advanced machine-learning algorithms for Python. It is built upon Numpy and SciPy. It's good to use the data you have to train a machine-learning algorithm.
End of explanation
"""
## Type Your Answer Below ##
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/pcsanwald/kaggle-titanic/master/train.csv')
df.sample(3)
df.tail(3)
df.describe()
df.info()
"""
Explanation: Questions 5-11 are related to the Titanic data (train.csv) on the Kaggle website
You can download the data from the following link:<br />https://www.kaggle.com/c/titanic/data
5. Read titanic data (train.csv) into pandas dataframe, and display a sample of data.
End of explanation
"""
## Type Your Answer Below ##
len(df[df.age.isnull()])/len(df)*100
"""
Explanation: 6. What's the percentage of null value in 'Age'?
End of explanation
"""
## Type Your Answer Below ##
df.embarked.value_counts()
print('number of classes: ', len(df.embarked.value_counts().index))
print('names of classes: ', df.embarked.value_counts().index)
# Another method
embarked_set = set(df.embarked)
print(df.embarked.unique())
"""
Explanation: 7. How many unique classes are in 'Embarked'?
End of explanation
"""
## Type Your Answer Below ##
male_survived_n = len(df.query("sex == 'male' and survived == 1"))
female_survived_n = len(df.query("sex == 'female' and survived == 1"))
df_survived = pd.DataFrame({'male': male_survived_n, 'female': female_survived_n}, index=['Survived_number'])
df_survived
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
df_survived.plot(kind='bar', title='survived female and male', legend=True)
sns.pointplot(x='embarked', y='survived', hue='sex', data=df, palette={'male':'blue', 'female':'pink'}, markers=["*", "o"], linestyles=['-', '--'])
grid = sns.FacetGrid(df, col='embarked')
grid.map(sns.pointplot, 'pclass', 'survived', 'sex', palette={'male':'blue', 'female':'pink'}, markers=["*", "o"], linestyles=['-', '--'])
grid.add_legend()
grid = sns.FacetGrid(df, col='pclass')
grid.map(sns.barplot, 'embarked', 'age', 'sex')
grid.add_legend()
"""
Explanation: 8. Compare survival chance between male and female passengers.
Please use pandas to plot a chart you think can address this question
End of explanation
"""
## Type Your Answer Below ##
df_23 = df.query('age == 23')
df_23
"""
Explanation: Observations from barplot above:
In Pclass = 1 and 2, females have a higher mean age than males, but in Pclass = 3, females have a lower mean age than males.
Passengers in Pclass = 1 have the highest average age, followed by Pclass = 2 and Pclass = 3.
The age trend across Embarked classes is not obvious.
Decisions:
Use 'Pclass' and 'Sex' when estimating missing values in 'Age'.
9. Show the table of passengers who are 23 years old.
End of explanation
"""
# first split each name into a list of strings by ' '
def format_name(df):
    df['split_name'] = df.name.apply(lambda x: x.split(' '))
    return df

df = format_name(df)
print(df.sample(3).split_name, '\n')

# for each part of a name, check whether "jack" or "rose" appears in it
for parts in df.split_name:
    for part in parts:
        if ("jack" in part.lower()) or ("rose" in part.lower()):
            print("found names that contain jack or rose: ", part)
"""
Explanation: 10. Is there a Jack or Rose in our dataset?
End of explanation
"""
## Type Your Answer Below ##
df4 = df.query('''pclass==1''')
def percent(x):
m = int(x.count())
n = m/len(df4)
return(n)
df[['survived','pclass']].query('''pclass==1''').groupby([ 'survived']).agg({'pclass':percent})
"""
Explanation: 11. What's the percentage of survival when passengers' pclass is 1?
End of explanation
"""
|
trungdong/datasets-provanalytics-dmkd | Extra 2.2 - Unbalanced Data - Application 2.ipynb | mit | import pandas as pd
df = pd.read_csv("collabmap/depgraphs.csv", index_col='id')
df.head()
df.describe()
"""
Explanation: Extra 2.2 - Unbalanced Data - Application 2: CollabMap Data Quality
Assessing the quality of crowdsourced data in CollabMap from their provenance
In this notebook, we compare the classification accuracy on the unbalanced (original) CollabMap datasets with the accuracy on balanced versions of the same datasets.
Goal: To determine if the provenance network analytics method can identify trustworthy data (i.e. buildings, routes, and route sets) contributed by crowd workers in CollabMap.
Classification labels: $\mathcal{L} = \left\{ \textit{trusted}, \textit{uncertain} \right\}$.
Training data:
Buildings: 5175
Routes: 4710
Route sets: 4997
Reading data
The CollabMap dataset is provided in the collabmap/depgraphs.csv file; each row corresponds to a building, route, or route set created in the application:
* id: the identifier of the data entity (i.e. building/route/route set).
* trust_value: the beta trust value calculated from the votes for the data entity.
* The remaining columns provide the provenance network metrics calculated from the dependency provenance graph of the entity.
End of explanation
"""
trust_threshold = 0.75
df['label'] = df.apply(lambda row: 'Trusted' if row.trust_value >= trust_threshold else 'Uncertain', axis=1)
df.head() # The new label column is the last column below
"""
Explanation: Labelling data
Based on its trust value, we categorise the data entity into two sets: trusted and uncertain. Here, the threshold for the trust value, whose range is [0, 1], is chosen to be 0.75.
End of explanation
"""
# We will not use trust value from now on
df.drop('trust_value', axis=1, inplace=True)
df.shape # the dataframe now has 23 columns (22 metrics + label)
"""
Explanation: Having used the trust value to label all the data entities, we remove the trust_value column from the data frame.
End of explanation
"""
df_buildings = df.filter(like="Building", axis=0)
df_routes = df.filter(regex="^Route\d", axis=0)
df_routesets = df.filter(like="RouteSet", axis=0)
df_buildings.shape, df_routes.shape, df_routesets.shape # The number of data points in each dataset
"""
Explanation: Filtering data
We split the dataset into three: buildings, routes, and route sets.
End of explanation
"""
from analytics import test_classification
"""
Explanation: Classification on unbalanced (original) data
We now run the cross validation tests on the three unbalanced datasets (df_buildings, df_routes, and df_routesets) using all the features (combined), only the generic network metrics (generic), and only the provenance-specific network metrics (provenance). Please refer to Cross Validation Code.ipynb for the detailed description of the cross validation code.
End of explanation
"""
# Cross validation test on building classification
res, imps = test_classification(df_buildings)
# adding the Data Type column
res['Data Type'] = 'Building'
imps['Data Type'] = 'Building'
# storing the results and importance of features
results_unb = res
importances_unb = imps
"""
Explanation: Building Classification
We test the classification of buildings, collecting the individual accuracy scores in results and the importance of every feature in each test in importances (both are Pandas DataFrames). These two tables will also be used to collect data from testing the classification of routes and route sets later.
End of explanation
"""
# Cross validation test on route classification
res, imps = test_classification(df_routes)
# adding the Data Type column
res['Data Type'] = 'Route'
imps['Data Type'] = 'Route'
# storing the results and importance of features
results_unb = results_unb.append(res, ignore_index=True)
importances_unb = importances_unb.append(imps, ignore_index=True)
"""
Explanation: Route Classification
End of explanation
"""
# Cross validation test on route set classification
res, imps = test_classification(df_routesets)
# adding the Data Type column
res['Data Type'] = 'Route Set'
imps['Data Type'] = 'Route Set'
# storing the results and importance of features
results_unb = results_unb.append(res, ignore_index=True)
importances_unb = importances_unb.append(imps, ignore_index=True)
"""
Explanation: Route Set Classification
End of explanation
"""
from analytics import balance_smote
"""
Explanation: Classification on balanced data
We repeat the same experiments, but now with balanced datasets.
Balancing Data
This section explores the balance of each of the three datasets and balances them using the SMOTE oversampling method.
End of explanation
"""
df_buildings.label.value_counts()
"""
Explanation: Buildings
End of explanation
"""
df_buildings = balance_smote(df_buildings)
"""
Explanation: Balancing the building dataset:
End of explanation
"""
df_routes.label.value_counts()
"""
Explanation: Routes
End of explanation
"""
df_routes = balance_smote(df_routes)
"""
Explanation: Balancing the route dataset:
End of explanation
"""
df_routesets.label.value_counts()
"""
Explanation: Route Sets
End of explanation
"""
df_routesets = balance_smote(df_routesets)
"""
Explanation: Balancing the route set dataset:
End of explanation
"""
# Cross validation test on building classification
res, imps = test_classification(df_buildings)
# adding the Data Type column
res['Data Type'] = 'Building'
imps['Data Type'] = 'Building'
# storing the results and importance of features
results_bal = res
importances_bal = imps
"""
Explanation: Building Classification
We test the classification of buildings, collecting the individual accuracy scores in results and the importance of every feature in each test in importances (both are Pandas DataFrames). These two tables will also be used to collect data from testing the classification of routes and route sets later.
End of explanation
"""
# Cross validation test on route classification
res, imps = test_classification(df_routes)
# adding the Data Type column
res['Data Type'] = 'Route'
imps['Data Type'] = 'Route'
# storing the results and importance of features
results_bal = results_bal.append(res, ignore_index=True)
importances_bal = importances_bal.append(imps, ignore_index=True)
"""
Explanation: Route Classification
End of explanation
"""
# Cross validation test on route set classification
res, imps = test_classification(df_routesets)
# adding the Data Type column
res['Data Type'] = 'Route Set'
imps['Data Type'] = 'Route Set'
# storing the results and importance of features
results_bal = results_bal.append(res, ignore_index=True)
importances_bal = importances_bal.append(imps, ignore_index=True)
"""
Explanation: Route Set Classification
End of explanation
"""
# Merging the two result sets
results_unb['Balanced'] = False
results_bal['Balanced'] = True
results = results_unb.append(results_bal, ignore_index=True)
"""
Explanation: Combining the results
End of explanation
"""
%matplotlib inline
import seaborn as sns
sns.set_style("whitegrid")
sns.set_context("talk")
"""
Explanation: Charting the accuracy scores
End of explanation
"""
results.Accuracy = results.Accuracy * 100
pal = sns.light_palette("seagreen", n_colors=3, reverse=True)
g = sns.factorplot(data=results, x='Data Type', y='Accuracy', hue='Metrics', col='Balanced',
kind='bar', palette=pal, aspect=1.2, errwidth=1, capsize=0.04)
g.set(ylim=(88, 98))
"""
Explanation: Converting the accuracy score from [0, 1] to a percentage, i.e. [0, 100]:
End of explanation
"""
|
lneuhaus/pyrpl | docs/source/user_guide/tutorial/tutorial.ipynb | mit | import pyrpl
print pyrpl.__file__
"""
Explanation: Introduction to pyrpl
1) Introduction
The RedPitaya is an affordable FPGA board with fast analog inputs and outputs. This makes it interesting also for quantum optics experiments. The software package PyRPL (Python RedPitaya Lockbox) is an implementation of many devices that are needed for optics experiments every day. The user interface and all high-level functionality is written in python, but an essential part of the software is hidden in a custom FPGA design (based on the official RedPitaya software version 0.95). While most users probably never want to touch the FPGA design, the Verilog source code is provided together with this package and may be modified to customize the software to your needs.
2) Table of contents
In this document, you will find the following sections:
1. Introduction
2. ToC
3. Installation
4. First steps
5. RedPitaya Modules
6. The Pyrpl class
7. The Graphical User Interface
If you are using Pyrpl for the first time, you should read sections 1-4. This will take about 15 minutes and should leave you able to communicate with your RedPitaya via python.
If you plan to use Pyrpl for a project that is not related to quantum optics, you probably want to go to section 5 and omit section 6 altogether. Conversely, if you are only interested in a powerful tool for quantum optics and don't care about the details of the implementation, go to section 6. If you plan to contribute to the repository, you should definitely read section 5 to get an idea of what this software package really does, and where help is needed. Finally, Pyrpl also comes with a Graphical User Interface (GUI) to interactively control the modules described in section 5. Please read section 7 for a quick description of the GUI.
3) Installation
Option 3: Simple clone from GitHub (developers)
If instead you plan to synchronize with github on a regular basis, you can also leave the downloaded code where it is and add the parent directory of the pyrpl folder to the PYTHONPATH environment variable as described in this thread: http://stackoverflow.com/questions/3402168/permanently-add-a-directory-to-pythonpath. For all beta-testers and developers, this is the preferred option. So the typical PYTHONPATH environment variable should look somewhat like this:
$\texttt{PYTHONPATH=C:\OTHER_MODULE;C:\GITHUB\PYRPL}$
If you are experiencing problems with the dependencies on other python packages, executing the following command in the pyrpl directory might help:
$\texttt{python setup.py install develop}$
If at a later point, you have the impression that updates from github are not reflected in the program's behavior, try this:
End of explanation
"""
!pip install pyrpl #if you look at this file in ipython notebook, just execute this cell to install pyrplockbox
"""
Explanation: Should the directory not be the one of your local github installation, you might have an older version of pyrpl installed. Just delete any such directories other than your principal github clone and everything should work.
Option 2: from GitHub using setuptools (beta version)
Download the code manually from https://github.com/lneuhaus/pyrpl/archive/master.zip and unzip it or get it directly from git by typing
$\texttt{git clone https://github.com/lneuhaus/pyrpl.git YOUR_DESTINATIONFOLDER}$
In a command line shell, navigate into your new local pyrplockbox directory and execute
$\texttt{python setup.py install}$
This copies the files into the side-package directory of python. The setup should make sure that you have the python libraries paramiko (http://www.paramiko.org/installing.html) and scp (https://pypi.python.org/pypi/scp) installed. If this is not the case you will get a corresponding error message in a later step of this tutorial.
Option 1: with pip (coming soon)
If you have pip correctly installed, executing the following line in a command line should install pyrplockbox and all dependencies:
$\texttt{pip install pyrpl}$
End of explanation
"""
from pyrpl import RedPitaya
"""
Explanation: Compiling the server application (optional)
The software comes with a precompiled version of the server application (written in C) that runs on the RedPitaya. This application is uploaded automatically when you start the connection. If you made changes to this file, you can recompile it by typing
$\texttt{python setup.py compile_server}$
For this to work, you must have gcc and the cross-compiling libraries installed. Basically, if you can compile any of the official RedPitaya software written in C, then this should work, too.
If you do not have a working cross-compiler installed on your UserPC, you can also compile directly on the RedPitaya (tested with ecosystem v0.95). To do so, you must upload the directory pyrpl/monitor_server on the redpitaya, and launch the compilation with the command
$\texttt{make CROSS_COMPILE=}$
Compiling the FPGA bitfile (optional)
If you would like to modify the FPGA code or just make sure that it can be compiled, you should have a working installation of Vivado 2015.4. For windows users it is recommended to set up a virtual machine with Ubuntu on which the compiler can be run in order to avoid any compatibility problems. For the FPGA part, you only need the /fpga subdirectory of this software. Make sure it is somewhere in the file system of the machine with the vivado installation. Then type the following commands. You should adapt the path in the first and second commands to the locations of the Vivado installation / the fpga directory in your filesystem:
$\texttt{source /opt/Xilinx/Vivado/2015.4/settings64.sh}$
$\texttt{cd /home/myusername/fpga}$
$\texttt{make}$
The compilation should take between 15 and 30 minutes. The result will be the file $\texttt{fpga/red_pitaya.bin}$. To test the new FPGA design, make sure that this file in the fpga subdirectory of your pyrpl code directory. That is, if you used a virtual machine for the compilation, you must copy the file back to the original machine on which you run pyrpl.
Unitary tests (optional)
In order to make sure that any recent changes do not affect prior functionality, a large number of automated tests have been implemented. Every push to the github repository is automatically installed and tested on an empty virtual linux system. However, the testing server currently has no RedPitaya available to run tests directly on the FPGA. Therefore it is also useful to run these tests on your local machine in case you modified the code.
Currently, the tests confirm that
- all pyrpl modules can be loaded in python
- all designated registers can be read and written
- future: functionality of all major submodules against reference benchmarks
To run the test, navigate in command line into the pyrpl directory and type
$\texttt{set REDPITAYA=192.168.1.100}$ (in windows) or
$\texttt{export REDPITAYA=192.168.1.100}$ (in linux)
$\texttt{python setup.py nosetests}$
The first command tells the test at which IP address it can find a RedPitaya. The last command runs the actual test. After a few seconds, there should be some output saying that the software has passed more than 140 tests.
After you have implemented additional features, you are encouraged to add unitary tests to consolidate the changes. If you immediately validate your changes with unitary tests, this will result in a huge productivity improvement for you. You can find all test files in the folder $\texttt{pyrpl/pyrpl/test}$, and the existing examples (notably $\texttt{test_example.py}$) should give you a good point to start. As long as you add a function starting with 'test_' in one of these files, your test should automatically run along with the others. As you add more tests, you will see the number of total tests increase when you run the test launcher.
Workflow to submit code changes (for developers)
As soon as the code will have reached version 0.9.0.3 (high-level unitary tests implemented and passing, approx. end of May 2016), we will consider the master branch of the github repository as the stable pre-release version. The goal is that the master branch will guarantee functionality at all times.
Any changes to the code, if they do not pass the unitary tests or have not been tested, are to be submitted as pull-requests in order not to endanger the stability of the master branch. We will briefly desribe how to properly submit your changes in that scenario.
Let's say you already changed the code of your local clone of pyrpl. Instead of directly committing the change to the master branch, you should create your own branch. In the windows application of github, when you are looking at the pyrpl repository, there is a small symbol looking like a steet bifurcation in the upper left corner, that says "Create new branch" when you hold the cursor over it. Click it and enter the name of your branch "leos development branch" or similar. The program will automatically switch to that branch. Now you can commit your changes, and then hit the "publish" or "sync" button in the upper right. That will upload your changes so everyone can see and test them.
You can continue working on your branch, add more commits and sync them with the online repository until your change is working. If the master branch has changed in the meantime, just click 'sync' to download them, and then the button "update from master" (upper left corner of the window) that will insert the most recent changes of the master branch into your branch. If the button doesn't work, that means that there are no changes available. This way you can benefit from the updates of the stable pre-release version, as long as they don't conflict with the changes you have been working on. If there are conflicts, github will wait for you to resolve them. In case you have been recompiling the fpga, there will always be a conflict w.r.t. the file 'red_pitaya.bin' (since it is a binary file, github cannot simply merge the differences you implemented). The best way to deal with this problem is to recompile the fpga bitfile after the 'update from master'. This way the binary file in your repository will correspond to the fpga code of the merged verilog files, and github will understand from the most recent modification date of the file that your local version of red_pitaya.bin is the one to keep.
At some point, you might want to insert your changes into the master branch, because they have been well-tested and are going to be useful for everyone else, too. To do so, after having committed and synced all recent changes to your branch, click on "Pull request" in the upper right corner, enter a title and description concerning the changes you have made, and click "Send pull request". Now your job is done. I will review and test the modifications of your code once again, possibly fix incompatibility issues, and merge it into the master branch once all is well. After the merge, you can delete your development branch. If you plan to continue working on related changes, you can also keep the branch and send pull requests later on. If you plan to work on a different feature, I recommend you create a new branch with a name related to the new feature, since this will make the evolution history of the feature more understandable for others. Or, if you would like to go back to following the master branch, click on the little downward arrow besides the name of your branch close to the street bifurcation symbol in the upper left of the github window. You will be able to choose which branch to work on, and to select master.
Let's all try to stick to this protocol. It might seem a little complicated at first, but you will quickly appreciate the fact that other people's mistakes won't be able to endanger your working code, and that by following the commits of the master branch alone, you will realize if an update is incompatible with your work.
4) First steps
If the installation went well, you should now be able to load the package in python. If that works you can pass directly to the next section 'Connecting to the RedPitaya'.
End of explanation
"""
cd c:\lneuhaus\github\pyrpl
"""
Explanation: Sometimes, python has problems finding the path to pyrplockbox. In that case you should add the pyrplockbox directory to your pythonpath environment variable (http://stackoverflow.com/questions/3402168/permanently-add-a-directory-to-pythonpath). If you do not know how to do that, just manually navigate the ipython console to the directory, for example:
End of explanation
"""
from pyrpl import RedPitaya
"""
Explanation: Now retry to load the module. It should really work now.
End of explanation
"""
#define hostname
HOSTNAME = "192.168.1.100"
from pyrpl import RedPitaya
r = RedPitaya(hostname=HOSTNAME)
"""
Explanation: Connecting to the RedPitaya
You should have a working SD card (any version of the SD card content is okay) in your RedPitaya (for instructions see http://redpitaya.com/quick-start/). The RedPitaya should be connected via ethernet to your computer. To set this up, there is plenty of instructions on the RedPitaya website (http://redpitaya.com/quick-start/). If you type the ip address of your module in a browser, you should be able to start the different apps from the manufacturer. The default address is http://192.168.1.100.
If this works, we can load the python interface of pyrplockbox by specifying the RedPitaya's ip address.
End of explanation
"""
#check the value of input1
print r.scope.voltage1
"""
Explanation: If you see at least one '>' symbol, your computer has successfully connected to your RedPitaya via SSH. This means that your connection works. The message 'Server application started on port 2222' means that your computer has sucessfully installed and started a server application on your RedPitaya. Once you get 'Client started with success', your python session has successfully connected to that server and all things are in place to get started.
Basic communication with your RedPitaya
End of explanation
"""
#see how the adc reading fluctuates over time
import time
from matplotlib import pyplot as plt
times,data = [],[]
t0 = time.time()
n = 3000
for i in range(n):
times.append(time.time()-t0)
data.append(r.scope.voltage1)
print "Rough time to read one FPGA register: ", (time.time()-t0)/n*1e6, "µs"
%matplotlib inline
f, axarr = plt.subplots(1,2, sharey=True)
axarr[0].plot(times, data, "+");
axarr[0].set_title("ADC voltage vs time");
axarr[1].hist(data, bins=10,normed=True, orientation="horizontal");
axarr[1].set_title("ADC voltage histogram");
"""
Explanation: With the last command, you have successfully retrieved a value from an FPGA register. This operation takes about 300 µs on my computer. So there is enough time to repeat the reading n times.
End of explanation
"""
#blink some leds for 5 seconds
from time import sleep
for i in range(1025):
r.hk.led=i
sleep(0.005)
# now feel free to play around a little to get familiar with binary representation by looking at the leds.
from time import sleep
r.hk.led = 0b00000001
for i in range(10):
r.hk.led = ~r.hk.led>>1
sleep(0.2)
import random
for i in range(100):
r.hk.led = random.randint(0,255)
sleep(0.02)
"""
Explanation: You see that the input values are not exactly zero. This is normal with all RedPitayas as some offsets are hard to keep zero when the environment changes (temperature etc.). So we will have to compensate for the offsets with our software. Another thing is that you see quite a bit of scatter between the points - almost so much that you cannot see that the datapoints are quantized. The conclusion here is that the input noise is typically not totally negligible. Therefore we will need to use every trick at hand to get optimal noise performance.
After reading from the RedPitaya, let's now try to write to the register controlling the first 8 yellow LED's on the board. The number written to the LED register is displayed on the LED array in binary representation. You should see some fast flashing of the yellow leds for a few seconds when you execute the next block.
End of explanation
"""
r.hk #"housekeeping" = LEDs and digital inputs/outputs
r.ams #"analog mixed signals" = auxiliary ADCs and DACs.
r.scope #oscilloscope interface
r.asg1 #"arbitrary signal generator" channel 1
r.asg2 #"arbitrary signal generator" channel 2
r.pid0 #first of four PID modules
r.pid1
r.pid2
r.pid3
r.iq0 #first of three I+Q quadrature demodulation/modulation modules
r.iq1
r.iq2
r.iir #"infinite impulse response" filter module that can realize complex transfer functions
"""
Explanation: 5) RedPitaya modules
Let's now look a bit closer at the class RedPitaya. Besides managing the communication with your board, it contains different modules that represent the different sections of the FPGA. You already encountered two of them in the example above: "hk" and "scope". Here is the full list of modules:
End of explanation
"""
asg = r.asg1 # make a shortcut
print "Trigger sources:", asg.trigger_sources
print "Output options: ", asg.output_directs
"""
Explanation: ASG and Scope module
Arbitrary Signal Generator
There are two Arbitrary Signal Generator modules: asg1 and asg2. For these modules, any waveform composed of $2^{14}$ programmable points is sent to the output with arbitrary frequency and start phase upon a trigger event.
End of explanation
"""
asg.output_direct = 'out2'
asg.setup(waveform='halframp', frequency=20e4, amplitude=0.8, offset=0, trigger_source='immediately')
"""
Explanation: Let's set up the ASG to output a sawtooth signal of amplitude 0.8 V (peak-to-peak 1.6 V) at 1 MHz on output 2:
End of explanation
"""
s = r.scope # shortcut
print "Available decimation factors:", s.decimations
print "Trigger sources:", s.trigger_sources
print "Available inputs: ", s.inputs
"""
Explanation: Oscilloscope
The scope works similar to the ASG but in reverse: Two channels are available. A table of $2^{14}$ datapoints for each channel is filled with the time series of incoming data. Downloading a full trace takes about 10 ms over standard ethernet. The rate at which the memory is filled is the sampling rate (125 MHz) divided by the value of 'decimation'. The property 'average' decides whether each datapoint is a single sample or the average of all samples over the decimation interval.
End of explanation
"""
from time import sleep
from pyrpl import RedPitaya
#reload everything
#r = RedPitaya(hostname="192.168.1.100")
asg = r.asg1
s = r.scope
# turn off asg so the scope has a chance to measure its "off-state" as well
asg.output_direct = "off"
# setup scope
s.input1 = 'asg1'
# pass asg signal through pid0 with a simple integrator - just for fun (detailed explanations for pid will follow)
r.pid0.input = 'asg1'
r.pid0.ival = 0 # reset the integrator to zero
r.pid0.i = 1000 # unity gain frequency of 1000 hz
r.pid0.p = 1.0 # proportional gain of 1.0
r.pid0.inputfilter = [0,0,0,0] # leave input filter disabled for now
# show pid output on channel2
s.input2 = 'pid0'
# trig at zero volt crossing
s.threshold_ch1 = 0
# positive/negative slope is detected by waiting for the input to
# sweep through a hysteresis interval around the trigger threshold in
# the right direction
s.hysteresis_ch1 = 0.01
# trigger on the input signal positive slope
s.trigger_source = 'ch1_positive_edge'
# take data symmetrically around the trigger event
s.trigger_delay = 0
# set decimation factor to 64 -> full scope trace is 8ns * 2^14 * decimation = 8.4 ms long
s.decimation = 64
# setup the scope for an acquisition
s.setup()
print "\nBefore turning on asg:"
print "Curve ready:", s.curve_ready() # trigger should still be armed
# turn on asg and leave enough time for the scope to record the data
asg.setup(frequency=1e3, amplitude=0.3, start_phase=90, waveform='halframp', trigger_source='immediately')
sleep(0.010)
# check that the trigger has been disarmed
print "\nAfter turning on asg:"
print "Curve ready:", s.curve_ready()
print "Trigger event age [ms]:",8e-9*((s.current_timestamp&0xFFFFFFFFFFFFFFFF) - s.trigger_timestamp)*1000
# plot the data
%matplotlib inline
plt.plot(s.times*1e3,s.curve(ch=1),s.times*1e3,s.curve(ch=2));
plt.xlabel("Time [ms]");
plt.ylabel("Voltage");
"""
Explanation: Let's have a look at a signal generated by asg1. Later we will use convenience functions to reduce the amount of code necessary to set up the scope:
End of explanation
"""
# useful functions for scope diagnostics
print "Curve ready:", s.curve_ready()
print "Trigger source:",s.trigger_source
print "Trigger threshold [V]:",s.threshold_ch1
print "Averaging:",s.average
print "Trigger delay [s]:",s.trigger_delay
print "Trace duration [s]: ",s.duration
print "Trigger hysteresis [V]", s.hysteresis_ch1
print "Current scope time [cycles]:",hex(s.current_timestamp)
print "Trigger time [cycles]:",hex(s.trigger_timestamp)
print "Current voltage on channel 1 [V]:", r.scope.voltage1
print "First point in data buffer 1 [V]:", s.ch1_firstpoint
"""
Explanation: What do we see? The blue trace for channel 1 shows just the output signal of the asg. The time=0 corresponds to the trigger event. One can see that the trigger was not activated by the constant signal of 0 at the beginning, since it did not cross the hysteresis interval. One can also see a 'bug': After setting up the asg, it outputs the first value of its data table until its waveform output is triggered. For the halframp signal, as it is implemented in pyrpl, this is the maximally negative value. However, we passed the argument start_phase=90 to the asg.setup function, which shifts the first point by a quarter period. Can you guess what happens when we set start_phase=180? You should try it out!
In green, we see the same signal, filtered through the pid module. The nonzero proportional gain leads to instant jumps along with the asg signal. The integrator is responsible for the constant decrease rate at the beginning, and for the low-pass behaviour that smooths the asg waveform a little. One can also foresee that, if we are not paying attention, too large an integrator gain will quickly saturate the outputs.
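To build intuition for the start_phase argument, here is a small hardware-free sketch. The waveform table below is only an illustrative guess, not pyrpl's exact halframp data; the point is that start_phase selects which sample of the period the asg outputs first.

```python
import numpy as np

def halframp(n=16, start_phase=0.0):
    """Illustrative half-ramp period: rises from -1 to +1 during the first
    half-period, then sits at -1 (a guess for illustration, not pyrpl's table)."""
    t = np.arange(n) / float(n)
    wave = np.where(t < 0.5, 4.0 * t - 1.0, -1.0)
    # start_phase selects which sample of the period is output first
    shift = int(round(start_phase / 360.0 * n))
    return np.roll(wave, -shift)

print(halframp(16, 0)[0])     # -1.0 -> idles at the most negative value
print(halframp(16, 90)[0])    # 0.0  -> a quarter period in, mid-ramp
```

With start_phase=180 the first sample would come from the flat second half of this sketch waveform; try the same experiment on the hardware to see what the real table does.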
End of explanation
"""
print r.pid0.help()
"""
Explanation: PID module
We have already seen some use of the pid module above. There are four PID modules available: pid0 to pid3.
End of explanation
"""
#make shortcut
pid = r.pid0
#turn off by setting gains to zero
pid.p,pid.i = 0,0
print "P/I gain when turned off:", pid.p, pid.i
# small nonzero numbers set gain to minimum value - avoids rounding off to zero gain
pid.p = 1e-100
pid.i = 1e-100
print "Minimum proportional gain: ",pid.p
print "Minimum integral unity-gain frequency [Hz]: ",pid.i
# saturation at maximum values
pid.p = 1e100
pid.i = 1e100
print "Maximum proportional gain: ",pid.p
print "Maximum integral unity-gain frequency [Hz]: ",pid.i
"""
Explanation: Proportional and integral gain
End of explanation
"""
import numpy as np
#make shortcut
pid = r.pid0
# set input to asg1
pid.input = "asg1"
# set asg to constant 0.1 Volts
r.asg1.setup(waveform="DC", offset = 0.1)
# set scope ch1 to pid0
r.scope.input1 = 'pid0'
#turn off the gains for now
pid.p,pid.i = 0, 0
#set integral value to zero
pid.ival = 0
#prepare data recording
from time import time
times, ivals, outputs = [], [], []
# turn on integrator to whatever negative gain
pid.i = -10
# set integral value above the maximum positive voltage
pid.ival = 1.5
#take 1000 points - jitter of the ethernet delay will add noise here but we don't care
for n in range(1000):
times.append(time())
ivals.append(pid.ival)
outputs.append(r.scope.voltage1)
#plot
import matplotlib.pyplot as plt
%matplotlib inline
times = np.array(times)-min(times)
plt.plot(times,ivals,times,outputs);
plt.xlabel("Time [s]");
plt.ylabel("Voltage");
"""
Explanation: Control with the integral value register
End of explanation
"""
# off by default
r.pid0.inputfilter
# minimum cutoff frequency is 2 Hz, maximum 77 kHz (for now)
r.pid0.inputfilter = [1,1e10,-1,-1e10]
print r.pid0.inputfilter
# not setting a coefficient turns that filter off
r.pid0.inputfilter = [0,4,8]
print r.pid0.inputfilter
# setting without list also works
r.pid0.inputfilter = -2000
print r.pid0.inputfilter
# turn off again
r.pid0.inputfilter = []
print r.pid0.inputfilter
"""
Explanation: Again, what do we see? We set up the pid module with a constant (positive) input from the ASG. We then turned on the integrator (with negative gain), which will inevitably lead to a slow drift of the output towards negative voltages (blue trace). We had set the integral value above the positive saturation voltage, such that it takes longer until it reaches the negative saturation voltage. The output of the pid module is bound to saturate at +/- 1 V, which is clearly visible in the green trace. The value of the integral is internally represented by a 32 bit number, so it can practically take arbitrarily large values compared to the 14 bit output. You can set it within the range from +4 to -4 V, for example if you want to exploit the delay, or even if you want to compensate it with proportional gain.
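To see why a saturated integrator behaves this way, here is a pure-software sketch of a saturated PI update. This is an illustration only, with per-sample gains rather than the Hz units of the pid module:

```python
def pi_step(ival, error, p, i_per_sample, bound=1.0):
    """One update of a saturated PI controller: a pure-software sketch of the
    behaviour described above (gains are per-sample here, not in Hz)."""
    ival += i_per_sample * error            # the integrator accumulates the error
    out = p * error + ival
    # the analog output clips at +/- 1 V, like the pid module's output
    return ival, max(-bound, min(bound, out))

# constant positive error and negative integral gain: the output drifts down,
# starting from an integral value set above the saturation voltage (ival = 1.5)
ival = 1.5
for _ in range(100):
    ival, out = pi_step(ival, error=0.1, p=0.0, i_per_sample=-0.1)
print(round(out, 3))   # 0.5 -> the integrator has drifted out of saturation
```

While ival is above 1, the clipped output stays pinned at +1 V, exactly the plateau visible in the green trace before the linear drift begins.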
Input filters
The pid module has one more feature: a bank of 4 input filters in series. Each filter can be either off (bandwidth=0), a lowpass (bandwidth positive) or a highpass (bandwidth negative). The way these filters are implemented means that the filter bandwidths can only take values that scale as powers of 2.
End of explanation
"""
#reload to make sure settings are default ones
from pyrpl import RedPitaya
r = RedPitaya(hostname="192.168.1.100")
#shortcut
iq = r.iq0
# modulation/demodulation frequency 25 MHz
# two lowpass filters with 10 and 20 kHz bandwidth
# input signal is analog input 1
# input AC-coupled with cutoff frequency near 50 kHz
# modulation amplitude 0.1 V
# modulation goes to out1
# output_signal is the demodulated quadrature 1
# quadrature_1 is amplified by 10
iq.setup(frequency=25e6, bandwidth=[10e3,20e3], gain=0.0,
phase=0, acbandwidth=50000, amplitude=0.5,
input='adc1', output_direct='out1',
output_signal='quadrature', quadrature_factor=10)
"""
Explanation: You should now go back to the Scope and ASG example above and play around with the setting of these filters to convince yourself that they do what they are supposed to.
IQ module
Demodulation of a signal means multiplying it by a sine and a cosine at the 'carrier frequency'. The two resulting signals are usually low-pass filtered and called 'quadrature I' and 'quadrature Q'. Based on this simple idea, the IQ module of pyrpl can implement several functionalities, depending on the particular setting of the various registers. In most cases, the configuration can be completely carried out through the setup function of the module.
<img src="IQmodule.png">
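The demodulation principle can be reproduced in a few lines of numpy (a hardware-free illustration, not pyrpl code): multiplying the input by the two local-oscillator quadratures and averaging recovers the amplitude and phase of the carrier.

```python
import numpy as np

fs, f = 1e6, 25e3                       # arbitrary demo sampling rate and carrier
t = np.arange(4000) / fs                # exactly 100 carrier periods
# the 'input signal': a carrier with amplitude 0.3 and a 30 degree phase shift
signal = 0.3 * np.cos(2 * np.pi * f * t + np.radians(30.0))

# demodulation: multiply by the two local-oscillator quadratures and lowpass
# (here the lowpass is simply an average over an integer number of periods)
i_quad = np.mean(signal * 2 * np.cos(2 * np.pi * f * t))
q_quad = np.mean(signal * 2 * np.sin(2 * np.pi * f * t))

amplitude = np.hypot(i_quad, q_quad)
phase = np.degrees(np.arctan2(-q_quad, i_quad))
print(round(amplitude, 3), round(phase, 1))   # 0.3 30.0
```

In the FPGA, the average is replaced by the configurable lowpass filters, and the local oscillator also serves as the modulation output.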
Lock-in detection / PDH / synchronous detection
End of explanation
"""
# shortcut for na
na = r.na
na.iq_name = 'iq1'
#take transfer functions. first: iq1 -> iq1, second iq1->out1->(your cable)->adc1
f, iq1, amplitudes = na.curve(start=1e3,stop=62.5e6,points=1001,rbw=1000,avg=1,amplitude=0.2,input='iq1',output_direct='off', acbandwidth=0)
f, adc1, amplitudes = na.curve(start=1e3,stop=62.5e6,points=1001,rbw=1000,avg=1,amplitude=0.2,input='adc1',output_direct='out1', acbandwidth=0)
#plot
from pyrpl.iir import bodeplot
%matplotlib inline
bodeplot([(f, iq1, "iq1->iq1"), (f, adc1, "iq1->out1->in1->iq1")], xlog=True)
"""
Explanation: After this setup, the demodulated quadrature is available as the output_signal of iq0, and can serve for example as the input of a PID module to stabilize the frequency of a laser to a reference cavity. The module was tested and is in daily use in our lab. Frequencies as low as 20 Hz and as high as 50 MHz have been used for this technique. At the present time, the functionality of a PDH-like detection as the one set up above cannot be conveniently tested internally. We plan to upgrade the IQ-module to VCO functionality in the near future, which will also enable testing the PDH functionality.
Network analyzer
When implementing complex functionality in the RedPitaya, the network analyzer module is by far the most useful tool for diagnostics. The network analyzer is able to probe the transfer function of any other module or external device by exciting the device with a sine of variable frequency and analyzing the resulting output from that device. This is done by demodulating the device output (=network analyzer input) with the same sine that was used for the excitation and a corresponding cosine, lowpass-filtering, and averaging the two quadratures for a well-defined number of cycles. From the two quadratures, one can extract the magnitude and phase shift of the device's transfer function at the probed frequencies. Let's illustrate the behaviour. For this example, you should connect output 1 to input 1 of your RedPitaya, such that we can compare the analog transfer function to a reference. Make sure you put a 50 Ohm terminator in parallel with input 1.
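The measurement principle can be sketched in pure Python (illustration only, no hardware): excite a simulated device under test with a sine, demodulate its response with the excitation quadratures, and obtain one complex point of the transfer function.

```python
import numpy as np

def lowpass(x, fs, fc):
    """A one-pole lowpass playing the 'device under test' (pure software)."""
    a = 1.0 - np.exp(-2 * np.pi * fc / fs)
    y, state = np.empty_like(x), 0.0
    for n, xn in enumerate(x):
        state += a * (xn - state)
        y[n] = state
    return y

def na_point(f, fs=1e6, periods=200, fc=10e3):
    """One network-analyzer point: drive with a sine, demodulate the response."""
    n = int(periods * fs / f)
    t = np.arange(n) / fs
    drive = np.sin(2 * np.pi * f * t)
    response = lowpass(drive, fs, fc)
    # discard the first half (transient), keep an integer number of periods
    t, response = t[n // 2:], response[n // 2:]
    i = 2 * np.mean(response * np.sin(2 * np.pi * f * t))
    q = 2 * np.mean(response * np.cos(2 * np.pi * f * t))
    return complex(i, q)          # complex transfer function at frequency f

# at its cutoff a one-pole lowpass is 3 dB down, with a phase lag near 45 degrees
tf = na_point(10e3)
print(round(abs(tf), 2))          # 0.71
```

Sweeping f over many points and collecting the complex values yields exactly the kind of bode plot that na.curve returns.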
End of explanation
"""
# shortcut for na and bpf (bandpass filter)
na = r.na
na.iq_name = 'iq1'
bpf = r.iq2
# setup bandpass
bpf.setup(frequency = 2.5e6, #center frequency
Q=10.0, # the filter quality factor
acbandwidth = 10e5, # ac filter to remove pot. input offsets
phase=0, # nominal phase at center frequency (propagation phase lags not accounted for)
gain=2.0, # peak gain = +6 dB
output_direct='off',
output_signal='output_direct',
input='iq1')
# take transfer function
f, tf1, ampl = na.curve(start=1e5, stop=4e6, points=201, rbw=100, avg=3,
amplitude=0.2, input='iq2',output_direct='off')
# add a phase advance of 82.3 degrees and measure transfer function
bpf.phase = 82.3
f, tf2, ampl = na.curve(start=1e5, stop=4e6, points=201, rbw=100, avg=3,
amplitude=0.2, input='iq2',output_direct='off')
#plot
from pyrpl.iir import bodeplot
%matplotlib inline
bodeplot([(f, tf1, "phase = 0.0"), (f, tf2, "phase = %.1f"%bpf.phase)])
"""
Explanation: If your cable is properly connected, you will see that both magnitudes are near 0 dB over most of the frequency range. Near the Nyquist frequency (62.5 MHz), one can see that the internal signal remains flat while the analog signal is strongly attenuated, as it should be to avoid aliasing. One can also see that the delay (phase lag) of the internal signal is much less than the one through the analog signal path.
If you have executed the last example (PDH detection) in this python session, iq0 should still send a modulation to out1, which is added to the signal of the network analyzer, and sampled by input1. In this case, you should see a little peak near the PDH modulation frequency, which was 25 MHz in the example above.
Lorentzian bandpass filter
The iq module can also be used as a bandpass filter with continuously tunable phase. Let's measure the transfer function of such a bandpass with the network analyzer:
End of explanation
"""
iq = r.iq0
# turn off pfd module for settings
iq.pfd_on = False
# local oscillator frequency
iq.frequency = 33.7e6
# local oscillator phase
iq.phase = 0
iq.input = 'adc1'
iq.output_direct = 'off'
iq.output_signal = 'pfd'
print "Before turning on:"
print "Frequency difference error integral", iq.pfd_integral
print "After turning on:"
iq.pfd_on = True
for i in range(10):
print "Frequency difference error integral", iq.pfd_integral
"""
Explanation: Frequency comparator module
To lock the frequency of a VCO (Voltage controlled oscillator) to a frequency reference defined by the RedPitaya, the IQ module contains the frequency comparator block. This is how you set it up. You have to feed the output of this module through a PID block to send it to the analog output. As you will see, if your feedback is not already enabled when you turn on the module, its integrator will rapidly saturate (-585 is the maximum value here, while a value of the order of 1e-3 indicates a reasonable frequency lock).
End of explanation
"""
#reload to make sure settings are default ones
from pyrpl import RedPitaya
r = RedPitaya(hostname="192.168.1.100")
#shortcut
iir = r.iir
#print docstring of the setup function
print iir.setup.__doc__
#prepare plot parameters
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (10, 6)
#setup a complicated transfer function
zeros = [ -4e4j-300, +4e4j-300,-2e5j-1000, +2e5j-1000, -2e6j-3000, +2e6j-3000]
poles = [ -1e6, -5e4j-300, +5e4j-300, -1e5j-3000, +1e5j-3000, -1e6j-30000, +1e6j-30000]
designdata = iir.setup(zeros, poles, loops=None, plot=True);
print "Filter sampling frequency: ", 125./iir.loops,"MHz"
"""
Explanation: IIR module
Sometimes it is interesting to realize even more complicated filters. This is the case, for example, when a piezo resonance limits the maximum gain of a feedback loop. For these situations, the IIR module can implement filters with 'Infinite Impulse Response' (https://en.wikipedia.org/wiki/Infinite_impulse_response). It is your task to choose the filter to be implemented by specifying the complex values of its poles and zeros. In the current version of pyrpl, the IIR module can implement IIR filters with the following properties:
- strictly proper transfer function (number of poles > number of zeros)
- poles (zeros) either real or complex-conjugate pairs
- no three or more identical real poles (zeros)
- no two or more identical pairs of complex conjugate poles (zeros)
- pole and zero frequencies should be larger than $\frac{f_\mathrm{nyquist}}{1000}$ (but you can optimize the Nyquist frequency of your filter by tuning the 'loops' parameter)
- the DC-gain of the filter must be 1.0. Despite the FPGA implementation being more flexible, we found this constraint rather practical. If you need a different behavior, pass the IIR signal through a PID module and use its input filter and proportional gain. If you still need a different behavior, the file iir.py is a good starting point.
- total filter order <= 16 (realizable with 8 parallel biquads)
- a remaining bug limits the dynamic range to about 30 dB before internal saturation interferes with filter performance
Filters whose poles have a positive real part are unstable by design. Zeros with positive real part lead to non-minimum phase lag. Nevertheless, the IIR module will let you implement these filters.
In general the IIR module is still fragile in the sense that you should verify the correct implementation of each filter you design. Usually you can trust the simulated transfer function. It is nevertheless a good idea to use the internal network analyzer module to actually measure the IIR transfer function with an amplitude comparable to the signal you expect to go through the filter, as to verify that no saturation of internal filter signals limits its performance.
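For a feeling of how a pole/zero specification translates into a transfer function, here is a hardware-free numpy sketch that evaluates the designed continuous-time response H(s). The pole values below are hypothetical examples for illustration, not recommended settings:

```python
import numpy as np

def tf_from_zpk(zeros, poles, f, gain=1.0):
    """Evaluate the designed response H(s) = gain * prod(s - z) / prod(s - p)
    at s = 2*pi*1j*f (continuous-time design, before discretization)."""
    s = 2j * np.pi * np.atleast_1d(f).astype(float)
    h = np.full(s.shape, gain, dtype=complex)
    for z in zeros:
        h *= s - z
    for p in poles:
        h /= s - p
    return h

# hypothetical complex-conjugate pole pair: a weakly damped resonance at 10 kHz
poles = [-100 + 2j * np.pi * 1e4, -100 - 2j * np.pi * 1e4]
# choose the gain so that the DC-gain is 1.0, as the IIR module requires
g = abs(np.prod(poles))
h = tf_from_zpk([], poles, [0.0, 1e4], gain=g)
print(round(abs(h[0]), 3))   # 1.0 at DC by construction; |h[1]| is the resonance peak
```

Comparing such a design curve against the measured curve from the network analyzer is exactly the verification step recommended above.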
End of explanation
"""
# first thing to check if the filter is not ok
print "IIR overflows before:", bool(iir.overflow)
# measure tf of iir filter
r.iir.input = 'iq1'
f, tf, ampl = r.na.curve(iq_name='iq1', start=1e4, stop=3e6, points = 301, rbw=100, avg=1,
amplitude=0.1, input='iir', output_direct='off', logscale=True)
# first thing to check if the filter is not ok
print "IIR overflows after:", bool(iir.overflow)
#plot with design data
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (10, 6)
from pyrpl.iir import bodeplot
bodeplot(designdata +[(f,tf,"measured system")],xlog=True)
"""
Explanation: If you try changing a few coefficients, you will see that your designed filter is not always properly realized. The bottleneck here is the conversion from the analytical expression (poles and zeros) to the filter coefficients, not the FPGA performance. This conversion is (among other things) limited by floating point precision. We hope to provide a more robust algorithm in future versions. If you can obtain filter coefficients by another, preferably analytical method, this might lead to better results than our generic algorithm.
Let's check if the filter is really working as it is supposed to:
End of explanation
"""
#rescale the filter by 20fold reduction of DC gain
designdata = iir.setup(zeros,poles,g=0.1,loops=None,plot=False);
# first thing to check if the filter is not ok
print "IIR overflows before:", bool(iir.overflow)
# measure tf of iir filter
r.iir.input = 'iq1'
f, tf, ampl = r.iq1.na_trace(start=1e4, stop=3e6, points = 301, rbw=100, avg=1,
amplitude=0.1, input='iir', output_direct='off', logscale=True)
# first thing to check if the filter is not ok
print "IIR overflows after:", bool(iir.overflow)
#plot with design data
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (10, 6)
from pyrpl.iir import bodeplot
bodeplot(designdata+[(f,tf,"measured system")],xlog=True)
"""
Explanation: As you can see, the filter has trouble to realize large dynamic ranges. With the current standard design software, it takes some 'practice' to design transfer functions which are properly implemented by the code. While most zeros are properly realized by the filter, you see that the first two poles suffer from some kind of saturation. We are working on an automatic rescaling of the coefficients to allow for optimum dynamic range. From the overflow register printed above the plot, you can also see that the network analyzer scan caused an internal overflow in the filter. All these are signs that different parameters should be tried.
A straightforward way to impove filter performance is to adjust the DC-gain and compensate it later with the gain of a subsequent PID module. See for yourself what the parameter g=0.1 (instead of the default value g=1.0) does here:
End of explanation
"""
iir = r.iir
# useful diagnostic functions
print "IIR on:", iir.on
print "IIR bypassed:", iir.shortcut
print "IIR copydata:", iir.copydata
print "IIR loops:", iir.loops
print "IIR overflows:", bin(iir.overflow)
print "\nCoefficients (6 per biquad):"
print iir.coefficients
# set the unity transfer function to the filter
iir._setup_unity()
"""
Explanation: You see that we have improved the second peak (and avoided internal overflows) at the cost of increased nosie in other regions. Of course this noise can be reduced by increasing the NA averaging time. But maybe it will be detrimental to your application? After all, IIR filter design is far from trivial, but this tutorial should have given you enough information to get started and maybe to improve the way we have implemented the filter in pyrpl (e.g. by implementing automated filter coefficient scaling).
If you plan to play more with the filter, these are the remaining internal iir registers:
End of explanation
"""
pid = r.pid0
print pid.help()
pid.ival #bug: help forgets about pid.ival: current integrator value [volts]
"""
Explanation: 6) The Pyrpl class
The RedPitayas in our lab are mostly used to stabilize one item or another in quantum optics experiments. To do so, the experimenter usually does not want to bother with the detailed implementation on the RedPitaya while trying to understand the physics going on in her/his experiment. For this situation, we have developed the Pyrpl class, which provides an API with high-level functions such as:
# optimial pdh-lock with setpoint 0.1 cavity bandwidth away from resonance
cavity.lock(method='pdh',detuning=0.1)
# unlock the cavity
cavity.unlock()
# calibrate the fringe height of an interferometer, and lock it at local oscillator phase 45 degrees
interferometer.lock(phase=45.0)
First attempts at locking
SECTION NOT READY YET, BECAUSE CODE NOT CLEANED YET
Now let's go for a first attempt to lock something. Say you connect the error signal (transmission or reflection) of your setup to input 1. Make sure that the peak-to-peak of the error signal coincides with the maximum voltages the RedPitaya can handle (-1 to +1 V if the jumpers are set to LV). This is important for getting optimal noise performance. If your signal is too low, amplify it. If it is too high, you should build a voltage divider with 2 resistors of the order of a few kOhm (that way, the input impedance of the RedPitaya of 1 MOhm does not interfere).
Next, connect output 1 to the standard actuator at hand, e.g. a piezo. Again, you should try to exploit the full -1 to +1 V output range. If the voltage at the actuator must be kept below 0.5 V for example, you should make another voltage divider for this. Make sure that you take the input impedance of your actuator into consideration here. If your output needs to be amplified, it is best practice to put the voltage divider after the amplifier so as to also attenuate the noise added by the amplifier. However, when this poses a problem (limited bandwidth because of the capacitance of the actuator), you have to put the voltage divider before the amplifier. Also, this is the moment when you should think about low-pass filtering the actuator voltage. Because of DAC noise, analog low-pass filters are usually more effective than digital ones. A 3dB bandwidth of the order of 100 Hz is a good starting point for most piezos.
You often need two actuators to control your cavity. This is because the output resolution of 14 bits can only realize 16384 different values. This would mean that with a finesse of 15000, you would only be able to set it to resonance or a linewidth away from it, but nothing in between. To solve this, use a coarse actuator to cover at least one free spectral range which brings you near the resonance, and a fine one whose range is 1000 or 10000 times smaller and which gives you fine graduation around the resonance. The coarse actuator should be strongly low-pass filtered (typical bandwidth of 1 Hz or even less), the fine actuator can have 100 Hz or even higher bandwidth. Do not get confused here: the unity-gain frequency of your final lock can be 10- or even 100-fold above the 3dB bandwidth of the analog filter at the output - it suffices to increase the proportional gain of the RedPitaya Lockbox.
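The arithmetic behind this coarse/fine argument is simple enough to check directly (the 1000:1 fine-range factor below is a hypothetical example):

```python
finesse = 15000.0
dac_levels = 2 ** 14        # 16384 distinct output voltages over -1..+1 V

# if the full DAC range scans exactly one free spectral range, the number of
# settable points across one cavity linewidth (FSR / finesse) is:
steps_per_linewidth = dac_levels / finesse
print(round(steps_per_linewidth, 2))       # 1.09 -> barely one step per linewidth

# a fine actuator whose range is 1000 times smaller (hypothetical divider
# ratio) multiplies the graduation around the resonance by the same factor:
print(round(steps_per_linewidth * 1000))   # 1092 steps per linewidth
```

With barely one DAC step per linewidth the coarse actuator alone cannot sit on the side of the fringe, which is exactly why the fine actuator is needed.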
Once everything is connected, let's grab a PID module, make a shortcut to it and print its helpstring. All modules have a method help() which prints all available registers and their description:
End of explanation
"""
pid.input = 'adc1'
pid.output_direct = 'out1'
#see other available options just for curiosity:
print pid.inputs
print pid.output_directs
"""
Explanation: We need to inform our RedPitaya about which connections we want to make. The cabling discussed above translates into:
End of explanation
"""
# turn on the laser
offresonant = r.scope.voltage1 #volts at analog input 1 with the unlocked cavity
# make a guess of what voltage you will measure at an optical resonance
resonant = 0.5 #Volts at analog input 1
# set the setpoint at relative reflection of 0.75 / rel. transmission of 0.25
pid.setpoint = 0.75*offresonant + 0.25*resonant
"""
Explanation: Finally, we need to define a setpoint. Let's first measure the offset when the laser is away from the resonance, and then measure or estimate how much light gets through on resonance.
End of explanation
"""
pid.i = 0 # make sure gain is off
pid.p = 0
#errorsignal = adc1 - setpoint
if resonant > offresonant: # when we are away from resonance, error is negative.
slopesign = 1.0 # therefore, near resonance, the slope is positive as the error crosses zero.
else:
slopesign = -1.0
gainsign = -slopesign #the gain must be the opposite to stabilize
# the effective gain slopesign*gainsign will in any case be -1.
#Therefore we must start at the maximum positive voltage, so the negative effective gain leads to a decreasing output
pid.ival = 1.0 #sets the integrator value = output voltage to maximum
from time import sleep
sleep(1.0) #wait for the voltage to stabilize (adjust for a few times the lowpass filter bandwidth)
#finally, turn on the integrator
pid.i = gainsign * 0.1
#with a bit of luck, this should work
from time import time
t0 = time()
while True:
relative_error = abs((r.scope.voltage1-pid.setpoint)/(offresonant-resonant))
if time()-t0 > 2: #diagnostics every 2 seconds
print "relative error:",relative_error
t0 = time()
if relative_error < 0.1:
break
sleep(0.01)
if pid.ival <= -1:
print "Resonance missed. Trying again slower.."
pid.ival = 1.2 #overshoot a little
pid.i /= 2
print "Resonance approach successful"
"""
Explanation: Now lets start to approach the resonance. We need to figure out from which side we are coming. The choice is made such that a simple integrator will naturally drift into the resonance and stay there:
End of explanation
"""
from pyrpl import RedPitaya
r = RedPitaya(hostname="192.168.1.100")
#shortcut
iq = r.iq0
iq.setup(frequency=1000e3, bandwidth=[10e3,20e3], gain=0.0,
phase=0, acbandwidth=50000, amplitude=0.4,
input='adc1', output_direct='out1',
output_signal='output_direct', quadrature_factor=0)
iq.frequency=10
r.scope.input1='adc1'
# shortcut for na
na = r.na
na.iq_name = "iq1"
# pid1 will be our device under test
pid = r.pid0
pid.input = 'iq1'
pid.i = 0
pid.ival = 0
pid.p = 1.0
pid.setpoint = 0
pid.inputfilter = []#[-1e3, 5e3, 20e3, 80e3]
# take the transfer function through pid1, this will take a few seconds...
x, y, ampl = na.curve(start=0,stop=200e3,points=101,rbw=100,avg=1,amplitude=0.5,input='iq1',output_direct='off', acbandwidth=0)
#plot
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
plt.plot(x*1e-3,np.abs(y)**2);
plt.xlabel("Frequency [kHz]");
plt.ylabel("|S21|");
r.pid0.input = 'iq1'
r.pid0.output_direct='off'
r.iq2.input='iq1'
r.iq2.setup(0,bandwidth=0,gain=1.0,phase=0,acbandwidth=100,amplitude=0,input='iq1',output_direct='out1')
r.pid0.p=0.1
x,y = na.na_trace(start=1e4,stop=1e4,points=401,rbw=100,avg=1,amplitude=0.1,input='adc1',output_direct='off', acbandwidth=0)
r.iq2.frequency=1e6
r.iq2._reads(0x140,4)
r.iq2._na_averages=125000000
r.iq0.output_direct='off'
r.scope.input2='dac2'
r.iq0.amplitude=0.5
r.iq0.amplitude
"""
Explanation: Questions to users: what parameters do you know?
finesse of the cavity? 1000
length? 1.57 m
what error signals are available? direct transmission, AC reflection -> direct analog PDH
are modulators available? n/a
what cavity length / laser frequency actuators are available? Mephisto laser PZT, DC - 10 kHz, 48 MHz optical/V, RedPitaya voltage amplified x20
laser temperature: bandwidth <1 Hz, 2.5 GHz/V, after AOM
what is known about them (displacement, bandwidth, amplifiers)?
what analog filters are present? YAG PZT at 10 kHz
impose the design of the outputs
More to come
End of explanation
"""
# Make sure the notebook was launched with the following option:
# ipython notebook --gui=qt
from pyrpl.gui import RedPitayaGui
HOSTNAME = "192.168.1.100"  # replace with your RedPitaya's address
r = RedPitayaGui(HOSTNAME)
r.gui()
"""
Explanation: 7) The Graphical User Interface
Most of the modules described in section 5 can be controlled via a graphical user interface. The graphical window can be displayed with the following:
WARNING: For the GUI to work fine within an ipython session, the option --gui=qt has to be given to the command launching ipython. This makes sure that an event loop is running.
End of explanation
"""
from pyrpl.gui import RedPitayaGui
from PyQt4 import QtCore, QtGui
class RedPitayaGuiCustom(RedPitayaGui):
"""
This is the derived class containing our customizations
"""
def customize_scope(self): #This function is called upon object instantiation
"""
By overriding this function in the child class, the user can perform custom initializations.
"""
self.scope_widget.layout_custom = QtGui.QHBoxLayout()
#Adds a horizontal layout for our extra buttons
self.scope_widget.button_scan = QtGui.QPushButton("Scan")
# creates a button "Scan"
self.scope_widget.button_lock = QtGui.QPushButton("Lock")
# creates a button "Lock"
self.scope_widget.label_setpoint = QtGui.QLabel("Setpoint")
# creates a label for the setpoint spinbox
self.scope_widget.spinbox_setpoint = QtGui.QDoubleSpinBox()
# creates a spinbox to enter the value of the setpoint
self.scope_widget.spinbox_setpoint.setDecimals(4)
# sets the desired number of decimals for the spinbox
self.scope_widget.spinbox_setpoint.setSingleStep(0.001)
# Change the step by which the setpoint is incremented when using the arrows
self.scope_widget.layout_custom.addWidget(self.scope_widget.button_scan)
self.scope_widget.layout_custom.addWidget(self.scope_widget.button_lock)
self.scope_widget.layout_custom.addWidget(self.scope_widget.label_setpoint)
self.scope_widget.layout_custom.addWidget(self.scope_widget.spinbox_setpoint)
# Adds the buttons in the layout
self.scope_widget.main_layout.addLayout(self.scope_widget.layout_custom)
# Adds the layout at the bottom of the scope layout
self.scope_widget.button_scan.clicked.connect(self.scan)
self.scope_widget.button_lock.clicked.connect(self.lock)
self.scope_widget.spinbox_setpoint.valueChanged.connect(self.change_setpoint)
# connects the buttons to the desired functions
def custom_setup(self): #This function is also called upon object instantiation
"""
By overriding this function in the child class, the user can perform custom initializations.
"""
#setup asg1 to output the desired ramp
self.asg1.offset = .5
self.asg1.scale = 0.5
self.asg1.waveform = "ramp"
self.asg1.frequency = 100
self.asg1.trigger_source = 'immediately'
#setup the scope to record approximately one period
self.scope.duration = 0.01
self.scope.input1 = 'dac1'
self.scope.input2 = 'dac2'
self.scope.trigger_source = 'asg1'
#automatically start the scope
self.scope_widget.run_continuous()
def change_setpoint(self):
"""
Directly reflects the value of the spinbox into the pid0 setpoint
"""
self.pid0.setpoint = self.scope_widget.spinbox_setpoint.value()
def lock(self): #Called when button lock is clicked
"""
Set up everything in "lock mode"
"""
# disable button lock
self.scope_widget.button_lock.setEnabled(False)
# enable button scan
self.scope_widget.button_scan.setEnabled(True)
# shut down the asg
self.asg1.output_direct = 'off'
# set pid input/outputs
self.pid0.input = 'adc1'
self.pid0.output_direct = 'out2'
#set pid parameters
self.pid0.setpoint = self.scope_widget.spinbox_setpoint.value()
self.pid0.p = 0.1
self.pid0.i = 100
self.pid0.ival = 0
def scan(self): #Called when button scan is clicked
"""
Set up everything in "scan mode"
"""
# enable button lock
self.scope_widget.button_lock.setEnabled(True)
# disable button scan
self.scope_widget.button_scan.setEnabled(False)
# switch asg on
self.asg1.output_direct = 'out2'
#switch pid off
self.pid0.output_direct = 'off'
# Instantiate the class RedPitayaGuiCustom
HOSTNAME = "192.168.1.100"  # replace with your RedPitaya's address
r = RedPitayaGuiCustom(HOSTNAME)
# launch the gui
r.gui()
"""
Explanation: The following window should open itself. Feel free to play with the button and tabs to start and stop the scope acquisition...
<img src="gui.bmp">
The window is composed of several tabs, each corresponding to a particular module. Since they generate a graphical output, the scope, network analyzer, and spectrum analyzer modules are very pleasant to use in GUI mode. For instance, the scope tab can be used to display in real-time the waveforms acquired by the redpitaya scope. Since the refresh rate is quite good, the scope tab can be used to perform optical alignments or to monitor transient signals as one would do with a standalone scope.
Subclassing RedPitayaGui to customize the GUI
It is often convenient to develop a GUI that relies heavily on the existing RedPitayaGui, but with a few more buttons or functionalities. In this case, the most convenient solution is to derive the RedPitayaGui class. The GUI is programmed using the framework PyQt4. The full documentation of the framework can be found here: http://pyqt.sourceforge.net/Docs/PyQt4/. However, to quickly start in the right direction, a simple example of how to customize the gui is given below: The following code shows how to add a few buttons at the bottom of the scope tab to switch the experiment between the two states: Scanning with asg1/Locking with pid1
End of explanation
"""
%pylab qt
from pyrpl import Pyrpl
p = Pyrpl('test') # we have to do something about the notebook initializations...
import asyncio
async def run_temperature_lock(setpoint=0.1): # coroutines can receive arguments
with p.asgs.pop("temperature") as asg: # use the context manager "with" to
# make sure the asg will be freed after the acquisition
asg.setup(frequency=0, amplitude=0, offset=0) # Use the asg as a dummy
while IS_TEMP_LOCK_ACTIVE: # The loop will run until this flag is manually changed to False
await asyncio.sleep(1) # Give way to other coroutines for 1 s
measured_temp = asg.offset # Dummy "temperature" measurement
asg.offset+= (setpoint - measured_temp)*0.1 # feedback with an integral gain
print("measured temp: ", measured_temp) # print the measured value to see how the execution flow works
async def run_n_fits(n): # a coroutine to launch n acquisitions
sa = p.spectrumanalyzer
with p.asgs.pop("fit_spectra") as asg: # use contextmanager again
asg.setup(output_direct='out1',
trigger_source='immediately')
freqs = [] # variables stay available all along the coroutine's execution
        for i in range(n): # The coroutine will be executed several times on the await statement inside this loop
asg.setup(frequency=1000*i) # Move the asg frequency
sa.setup(input=asg, avg=10, span=100e3, baseband=True) # setup the sa for the acquisition
spectrum = await sa.single_async() # wait for 10 averages to be ready
freq = sa.data_x[spectrum.argmax()] # take the max of the spectrum
            freqs.append(freq) # append it to the result
print("measured peak frequency: ", freq) # print to show how the execution goes
return freqs # Once the execution is over, the Future will be filled with the result...
from asyncio import ensure_future, get_event_loop
IS_TEMP_LOCK_ACTIVE = True
temp_future = ensure_future(run_temperature_lock(0.5)) # send temperature control task to the eventloop
fits_future = ensure_future(run_n_fits(50)) # send spectrum measurement task to the eventloop
## add the following lines if you don't already have an event_loop configured in ipython
# LOOP = get_event_loop()
# LOOP.run_until_complete(fits_future)
IS_TEMP_LOCK_ACTIVE = False # hint: you can stop the spectrum acquisition task by pressing "pause" or "stop" in the GUI
print(fits_future.result())
"""
Explanation: Now, a custom GUI with several extra buttons at the bottom of the scope tab should open by itself. You can play with the buttons "scan" and "Lock" and see the effect on the channels.
<img src="custom_gui.png">
8) Using asynchronous functions with python 3
Pyrpl uses the Qt event loop to perform asynchronous tasks, but it has been set as the default loop of asyncio, so you only need to learn how to use the standard Python module asyncio and don't need to know anything about Qt. To give you a quick overview of what can be done, the following block presents an example of two tasks running in parallel. The first one mimics a temperature control loop: it periodically measures a signal every 1 s and changes the offset of an asg based on the measured value (in this way we realize a slow and rudimentary software PID). In parallel, another task repeatedly shifts the frequency of an asg and measures an averaged spectrum on the spectrum analyzer.
Both tasks are defined by coroutines (a Python function preceded by the keyword async, which may contain the keyword await). The execution of each coroutine is suspended whenever the keyword await is encountered, giving other tasks the chance to be executed. It is only resumed once the awaited value becomes ready.
Finally, to execute the coroutines it is not enough to call my_coroutine(); we need to send the task to the event loop. For that, we use the function ensure_future from the asyncio module. This function immediately returns an object that is not the result of the task (i.e. not the object returned inside the coroutine), but rather a Future object that can be used to retrieve the actual result once it is ready (by calling future.result() later on).
If you are executing the code inside the IPython notebook, this is all you have to do, since an event loop is already running in the background (a Qt event loop if you are using the option %pylab qt). Otherwise, you have to use one of the loop methods (LOOP.run_forever(), LOOP.run_until_complete(), or LOOP.run_in_executor()) to launch the event loop.
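A minimal, self-contained sketch of the same pattern (illustrative only and independent of Pyrpl; the ticker coroutine and the explicit event loop handling here are our own):

```python
import asyncio

async def ticker(n):
    # sum 1..n, yielding control back to the event loop at every step
    total = 0
    for i in range(1, n + 1):
        await asyncio.sleep(0)   # suspension point: other tasks may run here
        total += i
    return total

async def main():
    # ensure_future schedules the coroutines and returns Future objects;
    # both tasks now advance concurrently on the event loop
    f1 = asyncio.ensure_future(ticker(3))
    f2 = asyncio.ensure_future(ticker(5))
    await asyncio.wait([f1, f2])
    return f1.result(), f2.result()   # retrieve the results once ready

loop = asyncio.new_event_loop()
results = loop.run_until_complete(main())
loop.close()
print(results)  # (6, 15)
```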
End of explanation
"""
|
egentry/lamat-2016-solutions | day5/ODE_practice.ipynb | mit | y_0 = 1
t_0 = 0
t_f = 10
def dy_dt(y):
return .5*y
def analytic_solution_1st_order(t):
return np.exp(.5*t)
dt = .5
t_array = np.arange(t_0, t_f, dt)
y_array = np.empty_like(t_array)
y_array[0] = y_0
for i in range(len(y_array)-1):
y_array[i+1] = y_array[i] + (dt * dy_dt(y_array[i]))
plt.plot(t_array, y_array, label="numeric")
plt.plot(t_array, analytic_solution_1st_order(t_array), label="analytic")
plt.legend(loc="best")
plt.xlabel("t")
plt.ylabel("y")
"""
Explanation: 1st Order ODE
Let's solve:
$$ \dot{y}(t) = .5 \cdot y(t)$$
with the initial condition:
$$ y(t=0)=1 $$
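As a quick sanity check (our addition, not part of the original exercise), a centered finite difference confirms that $y(t) = e^{0.5t}$ satisfies $\dot{y} = 0.5\,y$:

```python
from math import exp

def y(t):
    # analytic solution y(t) = exp(0.5 t)
    return exp(0.5 * t)

# centered finite difference approximation of dy/dt at t = 2
t, eps = 2.0, 1e-6
dydt = (y(t + eps) - y(t - eps)) / (2 * eps)
print(abs(dydt - 0.5 * y(t)) < 1e-6)  # True
```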
End of explanation
"""
dt = .1
t_array = np.arange(t_0, t_f, dt)
y_array = np.empty_like(t_array)
y_array[0] = y_0
for i in range(len(y_array)-1):
y_array[i+1] = y_array[i] + (dt * dy_dt(y_array[i]))
plt.plot(t_array, y_array, label="numeric")
plt.plot(t_array, analytic_solution_1st_order(t_array), label="analytic")
plt.legend(loc="best")
plt.xlabel("t")
plt.ylabel("y")
"""
Explanation: Try a smaller timestep
End of explanation
"""
y_0 = 1
dy_dt_0 = 0
#which gives us:
z_0 = (y_0, dy_dt_0)
t_0 = 0
t_f = 10
def dz_dt(z):
y, dy_dt = z #unpack the z vector
return np.array([dy_dt, -5*y])
def analytic_solution_2nd_order(t):
return np.cos(np.sqrt(5)*t)
dt = .1
t_array = np.arange(t_0, t_f, dt)
z_list = [z_0]
for i in range(len(t_array)-1):
z_list.append(z_list[i] + dt*dz_dt(z_list[i]))
z_array = np.array(z_list)
y_array = z_array[:,0]
dy_dt_array = z_array[:,1]
plt.plot(t_array, y_array, label="numeric")
plt.plot(t_array, analytic_solution_2nd_order(t_array), label="analytic")
plt.legend(loc="best")
"""
Explanation: Our numeric result is more accurate when we use a smaller timestep, but it's still not perfect
2nd Order ODE
Let's solve:
$$ \ddot{y}(t) = - 5 y(t)$$
with the initial condition:
$$ y(t=0)=1 $$
$$ \dot{y}(t=0) = 0 $$
To do this we need to convert our 2nd order ODE into two 1st order ODEs. Let's define a new vector:
$$ z \equiv (y, \dot{y}) $$
For this vector, we have its derivative:
$$ \dot{z} = (\dot{y}, - 5 y) $$
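As a sanity check (our addition), a second-order centered difference confirms that the analytic solution $y(t) = \cos(\sqrt{5}\,t)$ used in the code above indeed satisfies $\ddot{y} = -5 y$:

```python
from math import cos, sqrt

def y(t):
    # analytic solution y(t) = cos(sqrt(5) t)
    return cos(sqrt(5) * t)

# second-order centered difference approximation of the second derivative
t, eps = 1.0, 1e-4
d2y = (y(t + eps) - 2 * y(t) + y(t - eps)) / eps**2
print(abs(d2y + 5 * y(t)) < 1e-4)  # True
```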
End of explanation
"""
dt = .01
t_array = np.arange(t_0, t_f, dt)
z_list = [z_0]
for i in range(len(t_array)-1):
z_list.append(z_list[i] + dt*dz_dt(z_list[i]))
z_array = np.array(z_list)
y_array = z_array[:,0]
dy_dt_array = z_array[:,1]
plt.plot(t_array, y_array, label="numeric")
plt.plot(t_array, analytic_solution_2nd_order(t_array), label="analytic")
plt.legend(loc="best")
"""
Explanation: Try again, with a smaller timestep
End of explanation
"""
|
kit-cel/wt | wt/vorlesung/ch1_3/laplace_hypergeometric.ipynb | gpl-2.0 | # importing
import numpy as np
from scipy import special
import matplotlib.pyplot as plt
import matplotlib
# showing figures inline
%matplotlib inline
# plotting options
font = {'size' : 20}
plt.rc('font', **font)
plt.rc('text', usetex=True)
matplotlib.rc('figure', figsize=(18, 6) )
"""
Explanation: Content and Objective
Show simulation results for hypergeometric distribution ("Lotto-like") and compare with theory
Parameters to be adapted: number of balls (overall N and red R), sample size n
Import
End of explanation
"""
# number of balls overall, red balls and sample size
N = 100
R = 30
n = 10
# number of trials
N_trials = int( 1e4 )
# get analytical solution by passing through r and applying according formula
# NOTE: r is a vector, so the defining formula for Pr is applied pointwise
r = np.arange( 0, n + 1 )
Pr = special.binom( R, r ) * special.binom( N-R, n - r ) / special.binom( N, n )
### if you prefer for-loops...
#Pr = np.zeros( n + 1 )
#for ind_rho, val_rho in enumerate( r ):
# Pr[ ind_rho ] = special.binom( R, val_rho ) * special.binom( N-R, n - val_rho ) / special.binom( N, n )
"""
Explanation: Parameters and Analytical Solution
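As a stdlib-only cross-check of the closed-form computation above (our addition; math.comb requires Python >= 3.8): by the Vandermonde identity the hypergeometric probabilities must sum to 1.

```python
from math import comb

N, R, n = 100, 30, 10
# hypergeometric pmf straight from the definition, with exact integer binomials
Pr = [comb(R, r) * comb(N - R, n - r) / comb(N, n) for r in range(n + 1)]
print(abs(sum(Pr) - 1.0) < 1e-12)  # True
```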
End of explanation
"""
# initialize empty array for sampled number of red balls
numb_red = np.zeros( N_trials )
# do N_trials samples
# NOTE: _n is an auxiliary counter; n is the parameter of the distribution
for _n in np.arange( N_trials ):
# initialize box
balls = R * ['red'] + (N-R) * ['white']
# sample without replacing
sample = np.random.choice( balls, n, replace=False )
# count number of red samples
# first check whether sampled values are 'red'
# int( boolean ) in order to generate summable values
is_red = [ s == 'red' for s in sample ]
numb_red[ _n ] = np.sum( [ int(i) for i in is_red ] )
# get histogram
# NOTE: with these unit-width bins, density=True makes the histogram values sum to 1
bins = [ -.5 + k for k in np.arange( n + 2) ]
hist = np.histogram( numb_red, bins, density=True )
# printing probabilities
np.set_printoptions(precision=3)
print('Theoretical values: {}'.format( Pr ) )
print('\nSimulation values: {}'.format( hist[0] ) )
"""
Explanation: Simulation
End of explanation
"""
# plotting
plt.figure()
width = 0.2
plt.bar( r, Pr, linewidth=2.0, width=width, label='theo.')
plt.bar( r + width, hist[0], linewidth=2.0, width=width, label='sim.' )
plt.xlabel('$r$')
plt.ylabel('$P_r$')
plt.grid( True )
plt.legend( loc = 'upper right' )
"""
Explanation: Plotting
End of explanation
"""
|
phoebe-project/phoebe2-docs | 2.2/tutorials/optimizing.ipynb | gpl-3.0 | !pip install -I "phoebe>=2.2,<2.3"
import phoebe
b = phoebe.default_binary()
"""
Explanation: Advanced: Optimizing Performance with PHOEBE
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
print(phoebe.conf.interactive_checks)
phoebe.interactive_checks_off()
print(phoebe.conf.interactive_checks)
"""
Explanation: Interactivity Options
When running in an interactive Python session, PHOEBE updates all constraints and runs various checks after each command. Although this is convenient, it does take some time, and it can sometimes be advantageous to disable this to save computation time.
Interactive Checks
By default, interactive checks are enabled when PHOEBE is being run in an interactive session (either an interactive python, IPython, or Jupyter notebook session), but disabled when PHOEBE is run as a script directly from the console. When enabled, PHOEBE will re-run the system checks after every single change to the bundle, raising warnings via the logger as soon as they occur.
This default behavior can be changed via phoebe.interactive_checks_on() or phoebe.interactive_checks_off(). The current value can be accessed via phoebe.conf.interactive_checks.
End of explanation
"""
print(b.run_checks())
b.set_value('requiv', component='primary', value=50)
print(b.run_checks())
"""
Explanation: If disabled, you can always manually run the checks via b.run_checks().
End of explanation
"""
print(phoebe.conf.interactive_constraints)
print(b.filter('mass', component='primary'))
b.set_value('sma@binary', 10)
print(b.filter('mass', component='primary'))
"""
Explanation: Interactive Constraints
By default, interactive constraints are always enabled in PHOEBE, unless explicitly disabled. Whenever a value that affects a constrained value is changed in the bundle, that constraint is immediately executed and all applicable values are updated. This ensures that all constrained values are "up-to-date".
If disabled, constraints are delayed and only executed when needed by PHOEBE (when calling run_compute, for example). This can save significant time, as each value that needs updating only needs to have its constraint executed once, instead of multiple times.
This default behavior can be changed via phoebe.interactive_constraints_on() or phoebe.interactive_constraints_off(). The current value can be accessed via phoebe.conf.interactive_constraints.
Let's first look at the default behavior with interactive constraints on.
End of explanation
"""
phoebe.interactive_constraints_off()
print(phoebe.conf.interactive_constraints)
print(b.filter('mass', component='primary'))
b.set_value('sma@binary', 15)
print(b.filter('mass', component='primary'))
"""
Explanation: Note that the mass was already updated, according to the constraint, when the value of the semi-major axis was changed. If we disable interactive constraints, this will not be the case.
End of explanation
"""
b.run_delayed_constraints()
print(b.filter('mass', component='primary'))
phoebe.reset_settings()
"""
Explanation: No need to worry though - all constraints will be run automatically before passing to the backend. If you need to access the value of a constrained parameter, you can explicitly ask for all delayed constraints to be executed via b.run_delayed_constraints().
End of explanation
"""
b.add_dataset('lc')
print(b.get_dataset())
"""
Explanation: Filtering Options
check_visible
By default, everytime you call filter or set_value, PHOEBE checks to see if the current value is visible (meaning it is relevant given the value of other parameters). Although not terribly expensive, these checks can add up... so disabling these checks can save time. Note that these are automatically temporarily disabled during run_compute. If disabling these checks, be aware that changing the value of some parameters may have no affect on the resulting computations. You can always manually check the visibility/relevance of a parameter by calling parameter.is_visible.
This default behavior can be changed via phoebe.check_visible_on() or phoebe.check_visible_off().
Let's first look at the default behavior with check_visible on.
End of explanation
"""
phoebe.check_visible_off()
print(b.get_dataset())
"""
Explanation: Now if we disable check_visible, we'll see the same thing as if we passed check_visible=False to any filter call.
End of explanation
"""
print(b.get_parameter(qualifier='ld_coeffs_source', component='primary').visible_if)
"""
Explanation: Now the same filter is returning additional parameters. For example, ld_coeffs_source parameters were initially hidden because ld_mode is set to 'interp'. We can see the rules that are being followed:
End of explanation
"""
print(b.get_parameter(qualifier='ld_coeffs_source', component='primary').is_visible)
phoebe.reset_settings()
"""
Explanation: and can still manually check to see that it shouldn't be visible (it isn't currently relevant given the value of ld_mode):
End of explanation
"""
print(b.get_dataset())
print(b.get_dataset(check_default=False))
phoebe.check_default_off()
print(b.get_dataset())
phoebe.reset_settings()
"""
Explanation: check_default
Similarly, PHOEBE automatically excludes any parameter which is tagged with a '_default' tag. These parameters exist to provide default values when a new component or dataset is added to the bundle, but can usually be ignored, and so are excluded from any filter calls. Although not at all expensive, this too can be disabled at the settings level or by passing check_default=False to any filter call.
This default behavior can be changed via phoebe.check_default_on() or phoebe.check_default_off().
End of explanation
"""
phoebe.get_download_passband_defaults()
"""
Explanation: Passband Options
PHOEBE automatically fetches necessary tables from tables.phoebe-project.org. By default, only the necessary tables for each passband are fetched (except when calling download_passband manually) and the fits files are fetched uncompressed.
For more details, see the API docs on download_passband and update_passband as well as the passband updating tutorial.
The default values mentioned in the API docs for content and gzipped can be exposed via phoebe.get_download_passband_defaults and changed via phoebe.set_download_passband_defaults. Note that setting gzipped to True will minimize file storage for the passband files and will result in faster download speeds, but the files take significantly longer for PHOEBE to load, as they have to be uncompressed each time they are loaded. If you have a large number of installed passbands, this could significantly slow importing PHOEBE.
End of explanation
"""
|
giotta/EUR8217 | labo/tests/test-r-in-python.ipynb | mit | %load_ext rpy2.ipython
"""
Explanation: Jupyter, R and Python
The example below is the same as the one in test-r.ipynb, but this notebook uses the Python 3 kernel (code cells are interpreted as Python by default).
By following the procedure described in the README.md, we can use R inside this Python notebook.
The example below comes from:
* https://www.continuum.io/blog/developer/jupyter-and-conda-r
The R-in-Python trick was pointed out by deuxpi, a user of the #montrealpython channel on IRC (irc.freenode.net).
Prerequisites
Install the Python module rpy2
$ pip install rpy2
Load the IPython extension
End of explanation
"""
%%R
library(dplyr)
%%R
iris
"""
Explanation: Using R in code cells
End of explanation
"""
%%R
iris %>%
group_by(Species) %>%
summarise(Sepal.Width.Avg = mean(Sepal.Width)) %>%
arrange(Sepal.Width.Avg)
%%R
library(ggplot2)
ggplot(data=iris, aes(x=Sepal.Length, y=Sepal.Width, color=Species)) + geom_point(size=3)
"""
Explanation: The output formatting is not perfectly identical to a native R notebook:
* the output block is longer
* headers are not in bold
* ...
End of explanation
"""
variable = None  # a Python object to exchange with R
# %R -o <name>  pulls an R object back into the Python namespace
# %R -i <name>  pushes a Python object into the R namespace
%%R
install.packages("sas7bdat", repos="http://cran.rstudio.com/")
"""
Explanation: Using R inline in Python code
End of explanation
"""
|
bjshaw/phys202-2015-work | assignments/assignment10/ODEsEx01.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
"""
Explanation: Ordinary Differential Equations Exercise 1
Imports
End of explanation
"""
def solve_euler(derivs, y0, x):
"""Solve a 1d ODE using Euler's method.
Parameters
----------
derivs : function
The derivative of the diff-eq with the signature deriv(y,x) where
y and x are floats.
y0 : float
The initial condition y[0] = y(x[0]).
x : np.ndarray, list, tuple
        The array of times at which to solve the diff-eq.
Returns
-------
y : np.ndarray
Array of solutions y[i] = y(x[i])
"""
    Ytemp = np.zeros_like(x, dtype=float)  # float dtype so integer inputs do not truncate the solution
Ytemp[0] = y0
h = x[1]-x[0]
for n in range(0,len(x)-1):
Ytemp[n+1] = Ytemp[n] + h*derivs(Ytemp[n],x[n])
return Ytemp
assert np.allclose(solve_euler(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
"""
Explanation: Euler's method
Euler's method is the simplest numerical approach for solving a first order ODE numerically. Given the differential equation
$$ \frac{dy}{dx} = f(y(x), x) $$
with the initial condition:
$$ y(x_0)=y_0 $$
Euler's method performs updates using the equations:
$$ y_{n+1} = y_n + h f(y_n,x_n) $$
$$ h = x_{n+1} - x_n $$
Write a function solve_euler that implements the Euler method for a 1d ODE and follows the specification described in the docstring:
End of explanation
"""
def solve_midpoint(derivs, y0, x):
"""Solve a 1d ODE using the Midpoint method.
Parameters
----------
derivs : function
The derivative of the diff-eq with the signature deriv(y,x) where y
and x are floats.
y0 : float
The initial condition y[0] = y(x[0]).
x : np.ndarray, list, tuple
        The array of times at which to solve the diff-eq.
Returns
-------
y : np.ndarray
Array of solutions y[i] = y(x[i])
"""
    Ytemp = np.zeros_like(x, dtype=float)  # float dtype so integer inputs do not truncate the solution
Ytemp[0] = y0
h = x[1]-x[0]
for n in range(0,len(x)-1):
Ytemp[n+1] = Ytemp[n] + h*derivs(Ytemp[n]+(h)/2*derivs(Ytemp[n],x[n]),x[n]+h/2)
return Ytemp
assert np.allclose(solve_midpoint(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
"""
Explanation: The midpoint method is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation:
$$ y_{n+1} = y_n + h f\left(y_n+\frac{h}{2}f(y_n,x_n),x_n+\frac{h}{2}\right) $$
Write a function solve_midpoint that implements the midpoint method for a 1d ODE and follows the specification described in the docstring:
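To see the second-order accuracy concretely (an illustrative aside, not part of the assignment): for $\dot{y} = y$, $y(0) = 1$, a single midpoint step reproduces the Taylor series of $e^h$ through the $h^2$ term, so the one-step error is $O(h^3)$.

```python
from math import exp

# one midpoint step for dy/dx = y, i.e. f(y, x) = y, starting from y0 = 1
h = 0.1
y_mid = 1 + h * (1 + (h / 2) * 1)   # = 1 + h + h**2/2
err = abs(y_mid - exp(h))
print(err < h**3)  # True: the one-step error is smaller than h**3
```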
End of explanation
"""
def solve_exact(x):
"""compute the exact solution to dy/dx = x + 2y.
Parameters
----------
x : np.ndarray
Array of x values to compute the solution at.
Returns
-------
y : np.ndarray
Array of solutions at y[i] = y(x[i]).
"""
    y = 0.25*np.exp(2*x) - 0.5*x - 0.25
return y
assert np.allclose(solve_exact(np.array([0,1,2])),np.array([0., 1.09726402, 12.39953751]))
"""
Explanation: You are now going to solve the following differential equation:
$$
\frac{dy}{dx} = x + 2y
$$
which has the analytical solution:
$$
y(x) = 0.25 e^{2x} - 0.5 x - 0.25
$$
First, write a solve_exact function that compute the exact solution and follows the specification described in the docstring:
End of explanation
"""
x = np.linspace(0,1,11)
y0 = 0
def derivs(y,x):
dy = x + 2*y
return np.array(dy)
euler = solve_euler(derivs,y0,x)
midpoint = solve_midpoint(derivs,y0,x)
exact = solve_exact(x)
odein = odeint(derivs,y0,x)
eulerdiff = abs(euler-exact)
midpointdiff = abs(midpoint-exact)
# flatten odeint's (N, 1) output without shadowing the imported odeint function
odeint_flat = odein.flatten()
odeintdiff = abs(odeint_flat - exact)
plt.figure(figsize=(20,6))
plt.subplot(1,2,1)
plt.plot(x,euler,color='r',label='Euler')
plt.plot(x,midpoint,color='b',label='Midpoint')
plt.plot(x,exact,color='g',label='Exact')
plt.plot(x,odein,color='k',label='Odeint',alpha=.4)
plt.legend(bbox_to_anchor=(0.8,1.0),fontsize=16)
plt.xlabel('x',fontsize=16)
plt.ylabel('y(x)',fontsize=16)
plt.title('Solutions vs. x',fontsize=20)
plt.subplot(1,2,2)
plt.plot(x,eulerdiff,color='r',label='Euler')
plt.plot(x,midpointdiff,color='b',label='Midpoint')
plt.plot(x,odeintdiff,color='k',label='Odeint')
plt.legend(bbox_to_anchor=(0.7,.95),fontsize=16)
plt.ylim(-.01,.3)
plt.xlabel('x',fontsize=16)
plt.ylabel('error from exact',fontsize=16)
plt.title('Error from exact vs. x',fontsize=20)
plt.tight_layout()
assert True # leave this for grading the plots
"""
Explanation: In the following cell you are going to solve the above ODE using four different algorithms:
Euler's method
Midpoint method
odeint
Exact
Here are the details:
Generate an array of x values with $N=11$ points over the interval $[0,1]$ ($h=0.1$).
Define the derivs function for the above differential equation.
Using the solve_euler, solve_midpoint, odeint and solve_exact functions to compute
the solutions using the 4 approaches.
Visualize the solutions on a sigle figure with two subplots:
Plot the $y(x)$ versus $x$ for each of the 4 approaches.
Plot $\left|y(x)-y_{exact}(x)\right|$ versus $x$ for each of the 3 numerical approaches.
Your visualization should have legends, labeled axes, titles and be customized for beauty and effectiveness.
While your final plot will use $N=11$ points, first try making $N$ larger and smaller to see how that affects the errors of the different approaches.
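One way to see the effect quantitatively (an illustrative sketch, not part of the assignment): Euler's global error at $x=1$ should roughly halve when the step size is halved, since the method is first order.

```python
from math import exp

def euler_final_error(h):
    # integrate dy/dx = x + 2y, y(0) = 0, from x = 0 to x = 1 with Euler steps
    x, y = 0.0, 0.0
    while x < 1.0 - 1e-12:
        y += h * (x + 2 * y)
        x += h
    exact = 0.25 * exp(2 * x) - 0.5 * x - 0.25
    return abs(y - exact)

ratio = euler_final_error(0.1) / euler_final_error(0.05)
print(1.5 < ratio < 2.5)  # True: the error scales roughly like h
```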
End of explanation
"""
|
pschragger/big-data-python-class | Lectures/Week 2 - Python and Jupyter for Big-Data/Lecture 2 continued.ipynb | mit | import re
print all([
not re.match("a","cat"),
re.search("a","cat"),
not re.search("c","dog"),
3 == len(re.split("[ab]","carbs")),
"R-D-" == re.sub("[0-9]","-","R2D2")
    ]) # prints True if all are True
"""
Explanation: Lecture 2 continued
regular expressions
Provides a way to search text.
Looking for matching patterns
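Another commonly used helper in the same module (our addition, not from the original lecture) is re.findall, which returns every non-overlapping match at once:

```python
import re

# findall returns all non-overlapping matches as a list of strings
emails = re.findall(r"[\w.]+@[\w.]+", "write to ann@example.com or bob@example.org")
print(emails)  # ['ann@example.com', 'bob@example.org']
```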
End of explanation
"""
class Set:
def __init__(self, values=None):
"""This is the constructor"""
self.dict={}
if values is not None:
for value in values:
self.add(value)
def __repr__(self):
"""Sting to represent object in class like a to_string function"""
return "set:"+str(self.dict.keys())
def add(self, value):
self.dict[value] = True
def contains(self, value):
return value in self.dict
def remove(self,value):
del self.dict[value]
#using the class
s = Set([1,2,3])
s.add(4)
s.remove(3)
print s.contains(3)
print s
"""
Explanation: Object-oriented programming
Creating classes of objects with data and methods (functions) that operate on that data.
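A small extension sketch (our addition, in the spirit of the Set class above): defining special methods such as __contains__ and __len__ lets a user-defined class work with built-in syntax like in and len:

```python
class Bag(object):
    """Minimal set-like class illustrating Python's special methods."""
    def __init__(self, values=None):
        self.dict = {}
        for value in values or []:
            self.dict[value] = True
    def __contains__(self, value):   # enables:  value in bag
        return value in self.dict
    def __len__(self):               # enables:  len(bag)
        return len(self.dict)

b = Bag([1, 2, 3])
print(2 in b)   # True
print(5 in b)   # False
print(len(b))   # 3
```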
End of explanation
"""
def exp(base,power):
return base**power
#exp(2,power)
def two_to_the(power):
return exp(2,power)
print( two_to_the(3) )
def multiply(x,y): return x*y
products = map(multiply,[1,2],[4,5])
print products
"""
Explanation: Functional Tools
Sometimes you want to build new functions from existing ones by fixing some of their arguments (partial application), or apply a function over lists with tools like map.
End of explanation
"""
#example use
documents = ["a","b"]
def do_something(index,item):
print "index:"+str(index)+" Item:"+item
for i, document in enumerate(documents):
do_something(i,document)
def do_something(index ):
print "index:"+str(index)
for i, _ in enumerate(documents): do_something(i)
"""
Explanation: enumerate
enumerate produces tuples of (index, element) as you iterate over a list
End of explanation
"""
list1 = ['a','b','c']
list2 = [1,2,3]
zip(list1,list2)
pairs = [('a', 1), ('b', 2), ('c', 3)]
letters, numbers = zip(*pairs) # * performs argument unpacking
print letters
print numbers
"""
Explanation: Zip and argument unpacking
zip combines two or more lists into a single list of tuples of corresponding elements
End of explanation
"""
def doubler(f):
def g(x):
return 2 * f(x)
return g
def f1(x):
return x+1
g= doubler(f1)
print g(3)
def magic(*args, **kwargs):
print "unamed args", args
print "Key word args", kwargs
magic(1,2,key1="word1",key2="word2")
"""
Explanation: args and kwargs
args and kwargs let you write higher-order functions that accept arbitrary arguments: functions that operate on other functions.
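The payoff of args and kwargs (a sketch; doubler_correct is our own name): forwarding them lets a wrapper like doubler work for functions with any signature:

```python
def doubler_correct(f):
    """works no matter what kind of inputs f expects"""
    def g(*args, **kwargs):
        # pass every positional and keyword argument straight through to f
        return 2 * f(*args, **kwargs)
    return g

g = doubler_correct(lambda x, y: x + y)
print(g(1, 2))  # 6
```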
End of explanation
"""
|
gwtsa/gwtsa | examples/groundwater_paper/Ex2_monitoring_network/Example2.ipynb | mit | # Import the packages
import pandas as pd
import pastas as ps
import numpy as np
import os
import matplotlib.pyplot as plt
%matplotlib inline
# This notebook has been developed using Pastas version 0.9.9 and Python 3.7
print("Pastas version: {}".format(ps.__version__))
print("Pandas version: {}".format(pd.__version__))
print("Numpy version: {}".format(np.__version__))
print("Python version: {}".format(os.sys.version))
ps.set_log_level("ERROR")
"""
Explanation: <IMG SRC="https://raw.githubusercontent.com/pastas/pastas/master/doc/_static/logo.png" WIDTH=250 ALIGN="right">
Example 2: Analysis of groundwater monitoring networks using Pastas
This notebook is supplementary material to the following paper submitted to Groundwater:
R.A. Collenteur, M. Bakker, R. Calje, S. Klop, F. Schaars, (2019) Pastas: open source software for the analysis of groundwater time series, Manuscript under review.
In this second example, it is demonstrated how scripts can be used to analyze a large number of time series. Consider a pumping well field surrounded by a number of observations wells. The pumping wells are screened in the middle aquifer of a three-aquifer system. The objective is to estimate the drawdown caused by the groundwater pumping in each observation well.
1. Import the packages
End of explanation
"""
# Start an Pastas Project
pr = ps.Project(name="Example2")
# Load a metadata-file with xy-coordinates from the groundwater heads
metadata = pd.read_csv("data/metadata_heads.csv", index_col=0)
# Add the groundwater head observations to the database
for fname in os.listdir("./data/heads/"):
fname = os.path.join("./data/heads/", fname)
obs = pd.read_csv(fname, parse_dates=True, index_col=0, squeeze=True)
meta = metadata.loc[obs.name].to_dict()
pr.add_series(obs, kind="oseries", metadata=meta)
# Load a metadata-file with xy-coordinates from the explanatory variables
metadata = pd.read_csv("data/metadata_stresses.csv", index_col=0)
# Import the precipitation time series
rain = pd.read_csv("data/rain.csv", parse_dates=True, index_col=0, squeeze=True)
pr.add_series(rain, kind="prec", metadata=metadata.loc[rain.name].to_dict())
# Import the evaporation time series
evap = pd.read_csv("data/evap.csv", parse_dates=True, index_col=0, squeeze=True)
pr.add_series(evap, kind="evap", metadata=metadata.loc["Bilt"].to_dict())
# Import the well abstraction time series
well = pd.read_csv("data/well.csv", parse_dates=True, index_col=0, squeeze=True)
pr.add_series(well, kind="well", metadata=metadata.loc[well.name].to_dict())
# Use the plotting method of the Pastas Project to plot the stresses
pr.plots.stresses(["prec", "evap", "well"], cols=1, figsize=(10,5), sharex=True);
plt.xlim("1960", "2018");
"""
Explanation: 2. Importing the time series
In this codeblock the time series are imported and collected into a Pastas Project. This Project is a Pastas Class that contains methods that aim to ease the work when dealing with multiple time series. The following time series are imported:
44 time series with head observations [m] from the monitoring network;
precipitation [m/d] from KNMI station Oudenbosch;
potential evaporation [m/d] from KNMI station de Bilt;
Total pumping rate [m3/d] from well field Seppe.
End of explanation
"""
# Create folder to save the model figures
mlpath = "models"
if not os.path.exists(mlpath):
os.mkdir(mlpath)
# Choose the calibration period
tmin = "1970"
tmax = "2017-09"
num = 0
for oseries in pr.oseries.index:
    # Create a Model for each head time series
ml = pr.add_model(oseries)
# Add the RechargeModel to simulate the effect of rainfall and evaporation
rm = ps.RechargeModel(rain, evap, rfunc=ps.Gamma, name="recharge")
ml.add_stressmodel(rm)
# Add a StressModel to simulate the effect of the groundwater extractions
sm = ps.StressModel(well, rfunc=ps.Hantush, name="well", settings="well", up=False)
ml.add_stressmodel(sm)
# Since we are dealing with different measurement frequencies for the head,
# change the initial parameter for the noise model a little.
# alpha_init = ml.oseries.series.index.to_series().diff() / pd.Timedelta(1, 'd')
# ml.set_initial("noise_alpha", alpha_init.mean())
ml.set_initial("noise_alpha", 3)
# Estimate the model parameters
ml.solve(tmin=tmin, tmax=tmax, report=False, solver=ps.LmfitSolve)
# Check if the estimated effect of the groundwater extraction is significant.
# If not, delete the stressmodel and calibrate the model again.
gain, stderr = ml.parameters.loc["well_A", ["optimal", "stderr"]]
if 1.96 * stderr > -gain:
num += 1
ml.del_stressmodel("well")
ml.solve(tmin=tmin, tmax=tmax, report=False)
# Plot the results and store the plot
ml.plots.results()
path = os.path.join(mlpath, ml.name + ".png")
plt.savefig(path, bbox_inches="tight")
plt.close()
print("The number of models where the well is dropped from the model is:", str(num))
"""
Explanation: 3/4/5. Creating and optimizing the Time Series Model
For each time series of groundwater head observations a TFN model is constructed with the following model components:
- A Constant
- A NoiseModel
- A RechargeModel object to simulate the effect of recharge
- A StressModel object to simulate the effect of groundwater extraction
Calibrating all models can take a couple of minutes!!
End of explanation
"""
params = {
'axes.labelsize': 18,
'axes.axisbelow': True,
'font.size': 16,
'font.family': 'serif',
'legend.fontsize': 16,
'xtick.labelsize': 16,
'ytick.labelsize': 16,
'text.usetex': False,
'figure.figsize': [8.2, 5],
'lines.linewidth' : 2,
}
plt.rcParams.update(params)
# Save figures or not
savefig = True
figpath = "figures"
if not os.path.exists(figpath):
os.mkdir(figpath)
"""
Explanation: Make plots for publication
In the next codeblocks the Figures used in the Pastas paper are created. The first codeblock sets the matplotlib parameters to obtain publication-quality figures. The following figures are created:
Figure of the drawdown estimated for each observation well;
Figure of the decomposition of the different contributions;
Figure of the pumping rate of the well field.
End of explanation
"""
try:
from timml import ModelMaq, Well
plot_timml = True
# Values from REGIS II v2.2 (Site id B49F0240)
z = [9, -25, -83, -115, -190] # Reference to NAP
kv = np.array([1e-3, 5e-3]) # Min-Max of Vertical hydraulic conductivity for both leaky layer
D1 = z[0]-z[1] # Estimated thickness of leaky layer
c1 = D1/kv # Estimated resistance
D2 = z[2] - z[3]
c2 = D2 / kv
kh1 = np.array([1e0, 2.5e0]) # Min-Max of Horizontal hydraulic conductivity for aquifer 1
kh2 = np.array([1e1, 2.5e1]) # Min-Max of Horizontal hydraulic conductivity for aquifer 2
mlm = ModelMaq(kaq=[kh1.mean(), 35], z=z, c=[c1.max(), c2.mean()], \
topboundary='semi', hstar=0)
w = Well(mlm, 0, 0, 34791, layers=1)
mlm.solve()
x = np.linspace(100, 5000, 100)
h = mlm.headalongline(x, 0)
except:
plot_timml = False
# Get the parameters and distances to plot
param_A = pr.get_parameters(["well_A"])*well.loc["2007":].mean()
#param_A.fillna(0.0, inplace=True)
distances = pr.get_distances(oseries=param_A.index, kind="well").squeeze()
# Select model per aquifer
shallow = pr.oseries.z.loc[(pr.oseries.z<96)].index
aquifer = pr.oseries.z.loc[(pr.oseries.z<186) & (pr.oseries.z>96)].index
deep = pr.oseries.z.loc[pr.oseries.z>186].index
# Make the plot
fig = plt.figure(figsize=(8,5))
plt.grid(zorder=-10)
display_error_bars = True
if display_error_bars:
std = pr.get_parameters(["well_A"], param_value="stderr")*well.loc["2007":].mean()
plt.errorbar(distances[shallow], param_A[shallow], yerr=1.96*std[shallow], linestyle="",
elinewidth=2, marker="", markersize=10, capsize=4)
plt.errorbar(distances[aquifer], param_A[aquifer], yerr=1.96*std[aquifer], linestyle="",
elinewidth=2, marker="", capsize=4)
# plt.errorbar(distances[deep], param_A[deep], yerr=1.96*std[deep], linestyle="",
# elinewidth=2, marker="", capsize=4)
plt.scatter(distances[shallow], param_A[shallow], marker="^", s=80)
plt.scatter(distances[aquifer], param_A[aquifer], marker="s", s=80)
#plt.scatter(distances[deep], param_A[deep], marker="v", s=100)
# Plot two-layer TimML model for comparison
if plot_timml:
plt.plot(x, h[0], color="C0", linestyle="--" )
plt.plot(x, h[1], color="C1", linestyle="--" )
legend = ["TimML L1", "TimML L2", "aquifer 1", "aquifer 2"]
else:
legend = ["aquifer 1", "aquifer 2"]
plt.ylabel("steady drawdown (m)")
plt.xlabel("radial distance from the center of the well field (m)")
plt.xlim(0, 4501)
plt.ylim(-10,0)
plt.legend(legend, loc=4)
if savefig:
path = os.path.join(figpath, "drawdown.eps")
plt.savefig(path, bbox_inches="tight", dpi=300)
"""
Explanation: Figure of the drawdown estimated for each observation well
End of explanation
"""
# Select a model to plot
ml = pr.models["B49F0232_5"]
# Create the figure
[ax1, ax2, ax3] = ml.plots.decomposition(split=False, figsize=(7,6), ytick_base=1, tmin="1985")
plt.xticks(rotation=0)
ax1.set_yticks([2, 0, -2])
ax1.set_ylabel("head (m)")
ax1.legend().set_visible(False)
ax3.set_yticks([-4, -6])
ax2.set_ylabel("contributions (m) ") # Little trick to get the label right
ax3.set_xlabel("year")
ax3.set_title("pumping well")
if savefig:
path = os.path.join(figpath, ml.name + ".eps")
plt.savefig(path, bbox_inches="tight", dpi=300)
"""
Explanation: Example figure of a TFN model
End of explanation
"""
fig, ax = plt.subplots(1,1, figsize=(8,2.5), sharex=True)
ax.plot(well, color="k")
ax.set_ylabel("pumping rate\n[m$^3$/day]")
ax.set_xlabel("year")
ax.set_xlim("1951", "2018")
if savefig:
path = os.path.join(figpath, "extraction.eps")
plt.savefig(path, bbox_inches="tight", dpi=300)
"""
Explanation: Figure of the pumping rate of the well field
End of explanation
"""
|
rashikaranpuria/Machine-Learning-Specialization | Regression/Assignment_six/week-6-local-regression-assignment-blank.ipynb | mit | import graphlab
"""
Explanation: Predicting house prices using k-nearest neighbors regression
In this notebook, you will implement k-nearest neighbors regression. You will:
* Find the k-nearest neighbors of a given query input
* Predict the output for the query input using the k-nearest neighbors
* Choose the best value of k using a validation set
Fire up GraphLab Create
End of explanation
"""
sales = graphlab.SFrame('kc_house_data_small.gl/kc_house_data_small.gl')
"""
Explanation: Load in house sales data
For this notebook, we use a subset of the King County housing dataset created by randomly selecting 40% of the houses in the full dataset.
End of explanation
"""
import numpy as np # note this allows us to refer to numpy as np instead
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along with the others:
features = ['constant'] + features # this is how you combine two lists
# select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):
features_sframe = data_sframe[features]
# the following line will convert the features_SFrame into a numpy matrix:
feature_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the output to the SArray output_sarray
output_sarray = data_sframe['price']
# the following will convert the SArray into a numpy array by first converting it to a list
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
"""
Explanation: Import useful functions from previous notebooks
To efficiently compute pairwise distances among data points, we will convert the SFrame into a 2D Numpy array. First import the numpy library and then copy and paste get_numpy_data() from the second notebook of Week 2.
End of explanation
"""
def normalize_features(feature_matrix):
norms = np.linalg.norm(feature_matrix, axis=0)
features = feature_matrix / norms
return features, norms
"""
Explanation: We will also need the normalize_features() function from Week 5 that normalizes all feature columns to unit norm. Paste this function below.
End of explanation
"""
(train_and_validation, test) = sales.random_split(.8, seed=1) # initial train/test split
(train, validation) = train_and_validation.random_split(.8, seed=1) # split training set into training and validation sets
"""
Explanation: Split data into training, test, and validation sets
End of explanation
"""
feature_list = ['bedrooms',
'bathrooms',
'sqft_living',
'sqft_lot',
'floors',
'waterfront',
'view',
'condition',
'grade',
'sqft_above',
'sqft_basement',
'yr_built',
'yr_renovated',
'lat',
'long',
'sqft_living15',
'sqft_lot15']
features_train, output_train = get_numpy_data(train, feature_list, 'price')
features_test, output_test = get_numpy_data(test, feature_list, 'price')
features_valid, output_valid = get_numpy_data(validation, feature_list, 'price')
"""
Explanation: Extract features and normalize
Using all of the numerical inputs listed in feature_list, transform the training, test, and validation SFrames into Numpy arrays:
End of explanation
"""
features_train, norms = normalize_features(features_train) # normalize training set features (columns)
features_test = features_test / norms # normalize test set by training set norms
features_valid = features_valid / norms # normalize validation set by training set norms
"""
Explanation: In computing distances, it is crucial to normalize features. Otherwise, for example, the sqft_living feature (typically on the order of thousands) would exert a much larger influence on distance than the bedrooms feature (typically on the order of ones). We divide each column of the training feature matrix by its 2-norm, so that the transformed column has unit norm.
IMPORTANT: Make sure to store the norms of the features in the training set. The features in the test and validation sets must be divided by these same norms, so that the training, test, and validation sets are normalized consistently.
End of explanation
"""
print features_test[0]
"""
Explanation: Compute a single distance
To start, let's just explore computing the "distance" between two given houses. We will take our query house to be the first house of the test set and look at the distance between this house and the 10th house of the training set.
To see the features associated with the query house, print the first row (index 0) of the test feature matrix. You should get an 18-dimensional vector whose components are between 0 and 1.
End of explanation
"""
print features_train[9]
"""
Explanation: Now print the 10th row (index 9) of the training feature matrix. Again, you get an 18-dimensional vector with components between 0 and 1.
End of explanation
"""
print np.sqrt(np.sum((features_train[9]-features_test[0])**2))
"""
Explanation: QUIZ QUESTION
What is the Euclidean distance between the query house and the 10th house of the training set?
Note: Do not use the np.linalg.norm function; use np.sqrt, np.sum, and the power operator (**) instead. The latter approach is more easily adapted to computing multiple distances at once.
End of explanation
"""
for i in range(0,10):
print str(i) + " : " + str(np.sqrt(np.sum((features_train[i]-features_test[0])**2)))
"""
Explanation: Compute multiple distances
Of course, to do nearest neighbor regression, we need to compute the distance between our query house and all houses in the training set.
To visualize this nearest-neighbor search, let's first compute the distance from our query house (features_test[0]) to the first 10 houses of the training set (features_train[0:10]) and then search for the nearest neighbor within this small set of houses. By restricting ourselves to a small set of houses to begin with, we can visually scan the list of 10 distances to verify that our code for finding the nearest neighbor is working.
Write a loop to compute the Euclidean distance from the query house to each of the first 10 houses in the training set.
End of explanation
"""
for i in range(0,10):
print str(i) + " : " + str(np.sqrt(np.sum((features_train[i]-features_test[2])**2)))
"""
Explanation: QUIZ QUESTION
Among the first 10 training houses, which house is the closest to the query house?
End of explanation
"""
for i in xrange(3):
print features_train[i]-features_test[0]
# should print 3 vectors of length 18
"""
Explanation: It is computationally inefficient to loop over computing distances to all houses in our training dataset. Fortunately, many of the Numpy functions can be vectorized, applying the same operation over multiple values or vectors. We now walk through this process.
Consider the following loop that computes the element-wise difference between the features of the query house (features_test[0]) and the first 3 training houses (features_train[0:3]):
End of explanation
"""
print features_train[0:3] - features_test[0]
"""
Explanation: The subtraction operator (-) in Numpy is vectorized as follows:
End of explanation
"""
# verify that vectorization works
results = features_train[0:3] - features_test[0]
print results[0] - (features_train[0]-features_test[0])
# should print all 0's if results[0] == (features_train[0]-features_test[0])
print results[1] - (features_train[1]-features_test[0])
# should print all 0's if results[1] == (features_train[1]-features_test[0])
print results[2] - (features_train[2]-features_test[0])
# should print all 0's if results[2] == (features_train[2]-features_test[0])
"""
Explanation: Note that the output of this vectorized operation is identical to that of the loop above, which can be verified below:
End of explanation
"""
diff = features_train[0:len(features_train)] - features_test[0]
"""
Explanation: Aside: it is a good idea to write tests like this cell whenever you are vectorizing a complicated operation.
Perform 1-nearest neighbor regression
Now that we have the element-wise differences, it is not too hard to compute the Euclidean distances between our query house and all of the training houses. First, write a single-line expression to define a variable diff such that diff[i] gives the element-wise difference between the features of the query house and the i-th training house.
End of explanation
"""
print diff[-1].sum() # sum of the feature differences between the query and last training house
# should print -0.0934339605842
"""
Explanation: To test the code above, run the following cell, which should output a value -0.0934339605842:
End of explanation
"""
print np.sum(diff**2, axis=1)[15] # take sum of squares across each row, and print the 16th sum
print np.sum(diff[15]**2) # print the sum of squares for the 16th row -- should be same as above
"""
Explanation: The next step in computing the Euclidean distances is to take these feature-by-feature differences in diff, square each, and take the sum over feature indices. That is, compute the sum of square feature differences for each training house (row in diff).
By default, np.sum sums up everything in the matrix and returns a single number. To instead sum only over a row or column, we need to specify the axis parameter described in the np.sum documentation. In particular, axis=1 computes the sum across each row.
Below, we compute this sum of square feature differences for all training houses and verify that the output for the 16th house in the training set is equivalent to having examined only the 16th row of diff and computing the sum of squares on that row alone.
End of explanation
"""
distances = np.sqrt(np.sum(diff**2, axis=1))
"""
Explanation: With this result in mind, write a single-line expression to compute the Euclidean distances between the query house and all houses in the training set. Assign the result to a variable distances.
Hint: Do not forget to take the square root of the sum of squares.
End of explanation
"""
print distances[100] # Euclidean distance between the query house and the 101th training house
# should print 0.0237082324496
"""
Explanation: To test the code above, run the following cell, which should output a value 0.0237082324496:
End of explanation
"""
def compute_distances(features_instances, features_query):
diff = features_instances[0:len(features_instances)] - features_query
distances = np.sqrt(np.sum(diff**2, axis=1))
return distances
"""
Explanation: Now you are ready to write a function that computes the distances from a query house to all training houses. The function should take two parameters: (i) the matrix of training features and (ii) the single feature vector associated with the query.
End of explanation
"""
distances = compute_distances(features_train, features_test[2])
min = distances[0]
index = 0
for i in xrange(len(distances)):
if(distances[i] < min):
min = distances[i]
index = i
print min
print index
print output_train[382]
"""
Explanation: QUIZ QUESTIONS
Take the query house to be the third house of the test set (features_test[2]). What is the index of the house in the training set that is closest to this query house?
What is the predicted value of the query house based on 1-nearest neighbor regression?
End of explanation
"""
def k_nearest_neighbors(k, feature_train, features_query):
distances = compute_distances(features_train, features_query)
neighbors = np.argsort(distances)[0:k]
return neighbors
"""
Explanation: Perform k-nearest neighbor regression
For k-nearest neighbors, we need to find a set of k houses in the training set closest to a given query house. We then make predictions based on these k nearest neighbors.
Fetch k-nearest neighbors
Using the functions above, implement a function that takes in
* the value of k;
* the feature matrix for the training houses; and
* the feature vector of the query house
and returns the indices of the k closest training houses. For instance, with 2-nearest neighbor, a return value of [5, 10] would indicate that the 6th and 11th training houses are closest to the query house.
Hint: Look at the documentation for np.argsort.
End of explanation
"""
print k_nearest_neighbors(4, features_train, features_test[2])
"""
Explanation: QUIZ QUESTION
Take the query house to be the third house of the test set (features_test[2]). What are the indices of the 4 training houses closest to the query house?
End of explanation
"""
def predict_output_of_query(k, features_train, output_train, features_query):
neighbors = k_nearest_neighbors(k, features_train, features_query)
prices = output_train[neighbors]
prediction = np.sum(prices)/k
return prediction
"""
Explanation: Make a single prediction by averaging k nearest neighbor outputs
Now that we know how to find the k-nearest neighbors, write a function that predicts the value of a given query house. For simplicity, take the average of the prices of the k nearest neighbors in the training set. The function should have the following parameters:
* the value of k;
* the feature matrix for the training houses;
* the output values (prices) of the training houses; and
* the feature vector of the query house, whose price we are predicting.
The function should return a predicted value of the query house.
Hint: You can extract multiple items from a Numpy array using a list of indices. For instance, output_train[[6, 10]] returns the prices of the 7th and 11th training houses.
End of explanation
"""
print predict_output_of_query(4, features_train, output_train, features_test[2])
"""
Explanation: QUIZ QUESTION
Again taking the query house to be the third house of the test set (features_test[2]), predict the value of the query house using k-nearest neighbors with k=4 and the simple averaging method described and implemented above.
End of explanation
"""
def predict_output(k, features_train, output_train, features_query):
predictions = []
for i in xrange(len(features_query)):
prediction = predict_output_of_query(k, features_train, output_train, features_query[i])
predictions.append(prediction)
return predictions
"""
Explanation: Compare this predicted value using 4-nearest neighbors to the predicted value using 1-nearest neighbor computed earlier.
Make multiple predictions
Write a function to predict the value of each and every house in a query set. (The query set can be any subset of the dataset, be it the test set or validation set.) The idea is to have a loop where we take each house in the query set as the query house and make a prediction for that specific house. The new function should take the following parameters:
* the value of k;
* the feature matrix for the training houses;
* the output values (prices) of the training houses; and
* the feature matrix for the query set.
The function should return a set of predicted values, one for each house in the query set.
Hint: To get the number of houses in the query set, use the .shape field of the query features matrix. See the documentation.
End of explanation
"""
print predict_output(10, features_train, output_train,features_test[0:10])
"""
Explanation: QUIZ QUESTION
Make predictions for the first 10 houses in the test set using k-nearest neighbors with k=10.
What is the index of the house in this query set that has the lowest predicted value?
What is the predicted value of this house?
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
kvals = range(1, 16)
plt.plot(kvals, rss_all,'bo-')
"""
Explanation: Choosing the best value of k using a validation set
There remains a question of choosing the value of k to use in making predictions. Here, we use a validation set to choose this value. Write a loop that does the following:
For k in [1, 2, ..., 15]:
* Makes predictions for each house in the VALIDATION set using the k-nearest neighbors from the TRAINING set.
* Computes the RSS for these predictions on the VALIDATION set.
* Stores the RSS computed above in rss_all.
Report which k produced the lowest RSS on the VALIDATION set.
(Depending on your computing environment, this computation may take 10-15 minutes.)
To visualize the performance as a function of k, plot the RSS on the VALIDATION set for each considered k value:
End of explanation
"""
|
adityaka/misc_scripts | python-scripts/data_analytics_learn/link_pandas/Ex_Files_Pandas_Data/Exercise Files/05_06/Final/Data Frame Plots.ipynb | bsd-3-clause | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
"""
Explanation: Data Frame Plots
documentation: http://pandas.pydata.org/pandas-docs/stable/visualization.html
End of explanation
"""
ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))
ts = ts.cumsum()
ts.plot()
plt.show()
"""
Explanation: The plot method on Series and DataFrame is just a simple wrapper around plt.plot()
If the index consists of dates, it calls gcf().autofmt_xdate() to try to format the x-axis nicely as show in the plot window.
End of explanation
"""
df = pd.DataFrame(np.random.randn(1000, 4), index=pd.date_range('1/1/2016', periods=1000), columns=list('ABCD'))
df = df.cumsum()
plt.figure()
df.plot()
plt.show()
"""
Explanation: On DataFrame, plot() is a convenience to plot all of the columns, and include a legend within the plot.
End of explanation
"""
df3 = pd.DataFrame(np.random.randn(1000, 2), columns=['B', 'C']).cumsum()
df3['A'] = pd.Series(list(range(len(df))))
df3.plot(x='A', y='B')
plt.show()
df3.tail()
"""
Explanation: You can plot one column versus another using the x and y keywords in plot():
End of explanation
"""
plt.figure()
df.ix[5].plot(kind='bar')
plt.axhline(0, color='k')
plt.show()
df.ix[5]
"""
Explanation: Plots other than line plots
Plotting methods allow for a handful of plot styles other than the default Line plot. These methods can be provided as the kind keyword argument to plot(). These include:
‘bar’ or ‘barh’ for bar plots
‘hist’ for histogram
‘box’ for boxplot
‘kde’ or 'density' for density plots
‘area’ for area plots
‘scatter’ for scatter plots
‘hexbin’ for hexagonal bin plots
‘pie’ for pie plots
For example, a bar plot can be created the following way:
End of explanation
"""
df2 = pd.DataFrame(np.random.rand(10, 4), columns=['a', 'b', 'c', 'd'])
df2.plot.bar(stacked=True)
plt.show()
"""
Explanation: stack bar chart
End of explanation
"""
df2.plot.barh(stacked=True)
plt.show()
"""
Explanation: horizontal bar chart
End of explanation
"""
df = pd.DataFrame(np.random.rand(10, 5), columns=['A', 'B', 'C', 'D', 'E'])
df.plot.box()
plt.show()
"""
Explanation: box plot
End of explanation
"""
df = pd.DataFrame(np.random.rand(10, 4), columns=['a', 'b', 'c', 'd'])
df.plot.area()
plt.show()
"""
Explanation: area plot
End of explanation
"""
ser = pd.Series(np.random.randn(1000))
ser.plot.kde()
plt.show()
"""
Explanation: Plotting with Missing Data
Pandas tries to be pragmatic about plotting DataFrames or Series that contain missing data. Missing values are dropped, left out, or filled depending on the plot type.
| Plot Type | NaN Handling | |
|----------------|-------------------------|---|
| Line | Leave gaps at NaNs | |
| Line (stacked) | Fill 0’s | |
| Bar | Fill 0’s | |
| Scatter | Drop NaNs | |
| Histogram | Drop NaNs (column-wise) | |
| Box | Drop NaNs (column-wise) | |
| Area | Fill 0’s | |
| KDE | Drop NaNs (column-wise) | |
| Hexbin | Drop NaNs | |
| Pie | Fill 0’s | |
If any of these defaults are not what you want, or if you want to be explicit about how missing values are handled, consider using fillna() or dropna() before plotting.
density plot
End of explanation
"""
from pandas.tools.plotting import lag_plot
plt.figure()
data = pd.Series(0.1 * np.random.rand(1000) + 0.9 * np.sin(np.linspace(-99 * np.pi, 99 * np.pi, num=1000)))
lag_plot(data)
plt.show()
"""
Explanation: lag plot
Lag plots are used to check if a data set or time series is random. Random data should not exhibit any structure in the lag plot. Non-random structure implies that the underlying data are not random.
End of explanation
"""
|
harishkrao/Machine-Learning | Titanic - Machine Learning from Disaster - data analysis and visualization.ipynb | mit | sns.barplot(x='Pclass',y='Survived',data=train, hue='Sex')
"""
Explanation: The plot shows that the number of female survivors was significantly higher than the number of male survivors. There were more survivors overall in first class than in any other class.
There were also fewer survivors overall in third class than in any other class.
There were twice as many male survivors in first class as in second or third class, and twice as many female survivors in first class as in third class.
End of explanation
"""
sns.barplot(x='Sex',y='Survived',data=train, hue='Pclass')
"""
Explanation: The plot explains the above facts in a different representation.
End of explanation
"""
sns.swarmplot(x='Survived',y='Age',hue='Pclass',data=train)
"""
Explanation: The plot explains the distribution of survivors across age and class. More red on the lower part of the left swarm indicates that younger passengers in the third class had the least chance to survive.
More blue spots on the top part of the right swarm mean that elderly passengers from the first class had the best chance to survive.
The distribution of blue spots on the right swarm is uniform, indicating that, irrespective of age, first-class passengers had better chances of survival.
End of explanation
"""
sns.swarmplot(x='Survived',y='Age',hue='Sex',data=train)
"""
Explanation: The plot shows that male passengers had the least chance of survival and female passengers had the best chance of survival.
End of explanation
"""
sns.swarmplot(x='Sex',y='Age',data=train)
"""
Explanation: Same data with a different representation.
End of explanation
"""
sns.pointplot(x='Pclass',y='Fare',data=train)
"""
Explanation: Plot showing the distribution of fares among classes of travel. A first class ticket costs about 4 times as much as a second class ticket.
A third class ticket costs about 3/4 of a second class ticket.
End of explanation
"""
sns.barplot(x='Embarked',y='Fare',data=train)
"""
Explanation: The plot shows differences in fares based on the point of embarkation.
Fares from Cherbourg were the highest, in fact costing about twice as much as the fares from Southampton and about three times as much as the fares from Queenstown.
Fares from Southampton cost about twice as much as those from Queenstown.
C = Cherbourg, Q = Queenstown, S = Southampton
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.19/_downloads/006560919734f06efa76c80dc321a748/plot_object_source_estimate.ipynb | bsd-3-clause | import os
from mne import read_source_estimate
from mne.datasets import sample
print(__doc__)
# Paths to example data
sample_dir_raw = sample.data_path()
sample_dir = os.path.join(sample_dir_raw, 'MEG', 'sample')
subjects_dir = os.path.join(sample_dir_raw, 'subjects')
fname_stc = os.path.join(sample_dir, 'sample_audvis-meg')
"""
Explanation: The :class:SourceEstimate <mne.SourceEstimate> data structure
Source estimates, commonly referred to as STC (Source Time Courses),
are obtained from source localization methods.
Source localization methods solve the so-called 'inverse problem'.
MNE provides different methods for solving it:
dSPM, sLORETA, LCMV, MxNE etc.
Source localization consists of projecting the EEG/MEG sensor data into
a 3-dimensional 'source space' positioned in the individual subject's brain
anatomy. Hence the data is transformed such that the recorded time series at
each sensor location maps to time series at each spatial location of the
'source space' where our source estimates are defined.
An STC object contains the amplitudes of the sources over time.
It only stores the amplitudes of activations but
not the locations of the sources. To get access to the locations
you need to have the :class:source space <mne.SourceSpaces>
(often abbreviated src) used to compute the
:class:forward operator <mne.Forward> (often abbreviated fwd).
See tut-forward for more details on forward modeling, and
tut-inverse-methods
for an example of source localization with dSPM, sLORETA or eLORETA.
Source estimates come in different forms:
- :class:`mne.SourceEstimate`: For cortically constrained source spaces.
- :class:`mne.VolSourceEstimate`: For volumetric source spaces
- :class:`mne.VectorSourceEstimate`: For cortically constrained source
spaces with vector-valued source activations (strength and orientation)
- :class:`mne.MixedSourceEstimate`: For source spaces formed of a
combination of cortically constrained and volumetric sources.
<div class="alert alert-info"><h4>Note</h4><p>:class:`(Vector) <mne.VectorSourceEstimate>`
:class:`SourceEstimate <mne.SourceEstimate>` are surface representations
mostly used together with `FreeSurfer <tut-freesurfer>`
surface representations.</p></div>
Let's get ourselves an idea of what a :class:mne.SourceEstimate really
is. We first set up the environment and load some data:
End of explanation
"""
stc = read_source_estimate(fname_stc, subject='sample')
# Define plotting parameters
surfer_kwargs = dict(
hemi='lh', subjects_dir=subjects_dir,
clim=dict(kind='value', lims=[8, 12, 15]), views='lateral',
initial_time=0.09, time_unit='s', size=(800, 800),
smoothing_steps=5)
# Plot surface
brain = stc.plot(**surfer_kwargs)
# Add title
brain.add_text(0.1, 0.9, 'SourceEstimate', 'title', font_size=16)
"""
Explanation: Load and inspect example data
This data set contains source estimation data from an audio visual task. It
has been mapped onto the inflated cortical surface representation obtained
from FreeSurfer <tut-freesurfer>
using the dSPM method. It highlights a noticeable peak in the auditory
cortices.
Let's see what it looks like.
End of explanation
"""
shape = stc.data.shape
print('The data has %s vertex locations with %s sample points each.' % shape)
"""
Explanation: SourceEstimate (stc)
A source estimate contains the time series of activations
at spatial locations defined by the source space.
In the context of a FreeSurfer surfaces - which consist of 3D triangulations
- we could call each data point on the inflated brain
representation a vertex. If every vertex represents the spatial location
of a time series, the time series and spatial location can be written into a
matrix, where each vertex (rows) at multiple time points (columns) is
assigned a value. This value is the strength of our signal at a given point in
space and time. Exactly this matrix is stored in stc.data.
Let's have a look at the shape
End of explanation
"""
shape_lh = stc.lh_data.shape
print('The left hemisphere has %s vertex locations with %s sample points each.'
% shape_lh)
"""
Explanation: We see that stc carries 7498 time series of 25 samples each. Those time
series belong to 7498 vertices, which in turn represent locations
on the cortical surface. So where do those vertex values come from?
FreeSurfer separates both hemispheres and creates a surface
representation for the left and right hemisphere. Indices to surface locations
are stored in stc.vertices. This is a list with two arrays of integers,
that index a particular vertex of the FreeSurfer mesh. A value of 42 would
hence map to the x,y,z coordinates of the mesh with index 42.
See next section on how to get access to the positions in a
:class:mne.SourceSpaces object.
Since both hemispheres are always represented separately, both attributes
introduced above, can also be obtained by selecting the respective
hemisphere. This is done by adding the correct prefix (lh or rh).
End of explanation
"""
is_equal = stc.lh_data.shape[0] + stc.rh_data.shape[0] == stc.data.shape[0]
print('The number of vertices in stc.lh_data and stc.rh_data do ' +
('not ' if not is_equal else '') +
'sum up to the number of rows in stc.data')
"""
Explanation: Since we did not change the time representation, only the selected subset of
vertices and hence only the row size of the matrix changed. We can check if
the rows of stc.lh_data and stc.rh_data sum up to the value we had
before.
End of explanation
"""
peak_vertex, peak_time = stc.get_peak(hemi='lh', vert_as_index=True,
time_as_index=True)
"""
Explanation: Indeed, and as the mindful reader already suspected, the same can be said
about vertices. stc.lh_vertno thereby maps to the left and
stc.rh_vertno to the right inflated surface representation of
FreeSurfer.
Relationship to SourceSpaces (src)
As mentioned above, :class:src <mne.SourceSpaces> carries the mapping from
stc to the surface. The surface is built up from a
triangulated mesh <https://en.wikipedia.org/wiki/Surface_triangulation>_
for each hemisphere. Each triangle building up a face consists of 3 vertices.
Since src is a list of two source spaces (left and right hemisphere), we can
access the respective data by selecting the source space first. Faces
building up the left hemisphere can be accessed via src[0]['tris'], where
the index $0$ stands for the left and $1$ for the right
hemisphere.
The values in src[0]['tris'] refer to row indices in src[0]['rr'].
Here we find the actual coordinates of the surface mesh. Hence every index
value for vertices will select a coordinate from here. Furthermore
src[0]['vertno'] stores the same data as stc.lh_vertno,
except when working with sparse solvers such as
:func:mne.inverse_sparse.mixed_norm, as then only a fraction of
vertices actually have non-zero activations.
In other words stc.lh_vertno equals src[0]['vertno'], whereas
stc.rh_vertno equals src[1]['vertno']. Thus the Nth time series in
stc.lh_data corresponds to the Nth value in stc.lh_vertno and
src[0]['vertno'] respectively, which in turn map the time series to a
specific location on the surface, represented as the set of cartesian
coordinates stc.lh_vertno[N] in src[0]['rr'].
Let's obtain the peak amplitude of the data as vertex and time point index
End of explanation
"""
peak_vertex_surf = stc.lh_vertno[peak_vertex]
peak_value = stc.lh_data[peak_vertex, peak_time]
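The indexing chain just traversed (data row, to vertex id, to surface coordinate) can be sketched with plain NumPy arrays standing in for the MNE structures. The arrays below are toy stand-ins, not real MNE objects:

```python
import numpy as np

# Toy stand-ins for the structures described above (not real MNE objects):
rr = np.array([[0., 0., 0.],     # src[0]['rr']: coordinates of every mesh vertex
               [1., 0., 0.],
               [0., 1., 0.],
               [0., 0., 1.]])
vertno = np.array([1, 3])        # stc.lh_vertno: ids of the active vertices
lh_data = np.array([[0.2, 0.5],  # stc.lh_data: one time series per active vertex
                    [0.9, 0.1]])

# Row N of lh_data belongs to vertex vertno[N], located at rr[vertno[N]].
row, t = np.unravel_index(np.argmax(lh_data), lh_data.shape)
peak_vertex = vertno[row]        # surface vertex id of the peak
peak_coords = rr[peak_vertex]    # its Cartesian coordinates
```

The same lookup, with real data, is what the two lines above perform.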
"""
Explanation: The first value thereby indicates which vertex and the second which time
point index from within stc.lh_vertno or stc.lh_data is used. We can
use the respective information to get the index of the surface vertex
resembling the peak and its value.
End of explanation
"""
brain = stc.plot(**surfer_kwargs)
# We add the new peak coordinate (as vertex index) as an annotation dot
brain.add_foci(peak_vertex_surf, coords_as_verts=True, hemi='lh', color='blue')
# We add a title as well, stating the amplitude at this time and location
brain.add_text(0.1, 0.9, 'Peak coordinate', 'title', font_size=14)
"""
Explanation: Let's visualize this as well, using the same surfer_kwargs as in the
beginning.
End of explanation
"""
|
csadorf/signac | doc/signac_101_Getting_Started.ipynb | bsd-3-clause | import signac
assert signac.__version__ >= '0.8.0'
"""
Explanation: 1.1 Getting started
Prerequisites
Installation
This tutorial requires signac, so make sure to install the package before starting.
The easiest way to do so is using conda:
$ conda config --add channels conda-forge
$ conda install signac
or pip:
pip install signac --user
Please refer to the documentation for detailed instructions on how to install signac.
After successful installation, the following cell should execute without error:
End of explanation
"""
!rm -rf projects/tutorial/workspace
"""
Explanation: We start by removing all data which might be left-over from previous executions of this tutorial.
End of explanation
"""
def V_idg(N, kT, p):
return N * kT / p
"""
Explanation: A minimal example
For this tutorial we want to compute the volume of an ideal gas as a function of its pressure and thermal energy using the ideal gas equation
$p V = N kT$, where
$N$ refers to the system size, $p$ to the pressure, $kT$ to the thermal energy and $V$ is the volume of the system.
End of explanation
"""
import signac
project = signac.init_project('TutorialProject', 'projects/tutorial')
"""
Explanation: We can execute the complete study in just a few lines of code.
First, we initialize the project directory and get a project handle:
End of explanation
"""
for p in 0.1, 1.0, 10.0:
sp = {'p': p, 'kT': 1.0, 'N': 1000}
job = project.open_job(sp)
job.document['V'] = V_idg(**sp)
"""
Explanation: We iterate over the variable of interest p and construct a complete state point sp which contains all the meta data associated with our data.
In this simple example the meta data is very compact, but in principle the state point may be highly complex.
Next, we obtain a job handle and store the result of the calculation within the job document.
The job document is a persistent dictionary for storage of simple key-value pairs.
Here, we exploit that the state point dictionary sp can easily be passed into the V_idg() function using the keyword expansion syntax (**sp).
End of explanation
"""
for job in project:
print(job.sp.p, job.document['V'])
"""
Explanation: We can then examine our results by iterating over the data space:
End of explanation
"""
print(project.root_directory())
print(project.workspace())
"""
Explanation: That's it.
...
Ok, there's more...
Let's have a closer look at the individual components.
The Basics
The signac data management framework assists the user in managing the data space of individual projects.
All data related to one or multiple projects is stored in a workspace, which by default is a directory called workspace within the project's root directory.
End of explanation
"""
job = project.open_job({'p': 1.0, 'kT': 1.0, 'N': 1000})
"""
Explanation: The core idea is to tightly couple state points, unique sets of parameters, with their associated data.
In general, the parameter space needs to contain all parameters that will affect our data.
For the ideal gas that is a 3-dimensional space spanned by the thermal energy kT, the pressure p and the system size N.
These are the input parameters for our calculations, while the calculated volume V is the output data.
In terms of signac this relationship is represented by an instance of Job.
We use the open_job() method to get a job handle for a specific set of input parameters.
End of explanation
"""
print(job.statepoint())
print(job.workspace())
"""
Explanation: The job handle tightly couples our input parameters (p, kT, N) with the storage location of the output data.
You can inspect both the input parameters and the storage location explicitly:
End of explanation
"""
print(job.statepoint()['p'])
print(job.sp.p)
"""
Explanation: For convenience, a job's state point may also be accessed via the short-hand sp attribute.
For example, to access the pressure value p we can use either of the two following expressions:
End of explanation
"""
job2 = project.open_job({'kT': 1.0, 'N': 1000, 'p': 1.0})
print(job.get_id(), job2.get_id())
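A minimal sketch of how such an order-independent id can be derived: serialize the state point with sorted keys and hash the result. signac's actual scheme may differ in detail, so treat this as an illustration only:

```python
import hashlib
import json

def toy_job_id(statepoint):
    # Sorting the keys makes the digest independent of insertion order.
    blob = json.dumps(statepoint, sort_keys=True)
    return hashlib.md5(blob.encode()).hexdigest()

id_a = toy_job_id({'p': 1.0, 'kT': 1.0, 'N': 1000})
id_b = toy_job_id({'kT': 1.0, 'N': 1000, 'p': 1.0})
```

Both calls yield the same digest, mirroring the behavior of job.get_id() above.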
"""
Explanation: Each job has a unique id representing the state point.
This means opening a job with the exact same input parameters is guaranteed to produce the exact same id.
End of explanation
"""
print(job.workspace())
"""
Explanation: The job id is used to uniquely identify data associated with a specific state point.
Think of the job as a container that is used to store all data associated with the state point.
For example, it should be safe to assume that all files that are stored within the job's workspace directory are tightly coupled to the job's statepoint.
End of explanation
"""
import os
fn_out = os.path.join(job.workspace(), 'V.txt')
with open(fn_out, 'w') as file:
V = V_idg(** job.statepoint())
file.write(str(V) + '\n')
"""
Explanation: Let's store the volume calculated for each state point in a file called V.txt within the job's workspace.
End of explanation
"""
with open(job.fn('V.txt'), 'w') as file:
V = V_idg(** job.statepoint())
file.write(str(V) + '\n')
"""
Explanation: Because this is such a common pattern, signac allows you to short-cut this with the job.fn() method.
End of explanation
"""
with job:
with open('V.txt', 'w') as file:
file.write(str(V) + '\n')
"""
Explanation: Sometimes it is easier to temporarily switch the current working directory while storing data for a specific job.
For this purpose, we can use the Job object as context manager.
This means that we switch into the workspace directory associated with the job after entering, and switch back into the original working directory after exiting.
End of explanation
"""
job.document['V'] = V_idg(** job.statepoint())
print(job.statepoint(), job.document)
"""
Explanation: Another alternative to store light-weight data is the job document as shown in the minimal example.
The job document is a persistent JSON storage file for simple key-value pairs.
End of explanation
"""
for pressure in 0.1, 1.0, 10.0:
statepoint = {'p': pressure, 'kT': 1.0, 'N': 1000}
job = project.open_job(statepoint)
job.document['V'] = V_idg(** job.statepoint())
"""
Explanation: Since we are usually interested in more than one state point, the standard operation is to iterate over all variable(s) of interest, construct the full state point, get the associated job handle, and then either just initialize the job or perform the full operation.
End of explanation
"""
for job in project:
print(job.statepoint(), job.document)
"""
Explanation: Let's verify our result by inspecting the data.
End of explanation
"""
|
deepmind/enn | enn/colabs/epinet_demo.ipynb | apache-2.0 | # Copyright 2022 DeepMind Technologies Limited. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
!git clone https://github.com/deepmind/enn.git
!pip install -q enn/
"""
Explanation: Run a pre-trained Epinet on ImageNet
This demo shows how to run and evaluate a pre-trained Epinet on ImageNet. Epinet is a new ENN architecture that can supplement any conventional NN and be trained to estimate uncertainty.
An epinet is a neural network with privileged access to inputs and outputs of activation units in the base network.
A subset of these inputs and outputs, denoted by $\phi_\zeta(x)$, are taken as input to the epinet along with an epistemic index $z$.
For epinet parameters $\eta$, the epinet outputs $\sigma_\eta(\phi_\zeta(x), z)$.
To produce an ENN, the output of the epinet is added to that of the base network, though with a "stop gradient" written $[[\cdot]]$:
$$ f_\theta(x, z) = \mu_\zeta(x) + \sigma_\eta([[\phi_\zeta(x)]], z). $$
We can visualize this network architecture:
For more details about Epinet, refer to the paper
Epistemic Neural Networks (Osband et al., 2022).
It's recommended to use Runtime->Change Runtime Type to pick a GPU for speed.
End of explanation
"""
#@title General imports
import warnings
warnings.filterwarnings('ignore')
#@title Development imports
from typing import Callable, NamedTuple
import numpy as np
import pandas as pd
import plotnine as gg
from acme.utils.loggers.terminal import TerminalLogger
import dataclasses
import chex
import haiku as hk
import jax
import jax.numpy as jnp
import optax
import dill
#@title ENN imports
import enn
from enn import datasets
from enn.checkpoints import base as checkpoint_base
from enn.networks.epinet import base as epinet_base
from enn.checkpoints import utils
from enn.checkpoints import imagenet
from enn.checkpoints import catalog
from enn import metrics as enn_metrics
"""
Explanation: Imports
End of explanation
"""
!wget https://storage.googleapis.com/dm-enn/processed_batch.npzs --no-check-certificate
with open('processed_batch.npzs', 'rb') as file:
batch = dill.load(file)
images, labels = batch['images'], batch['labels']
"""
Explanation: Load ImageNet dataset
Our enn library provides functionality in enn/datasets to load the ImageNet, CIFAR10/100, and MNIST datasets. To load them, you need to download the raw data into the default tensorflow dataset directory of ~/tensorflow_datasets/downloads/manual/.
In this colab, we want to evaluate Epinet on only one small batch of ImageNet test images. To this end, we provide a sample batch of size 100 at https://storage.googleapis.com/dm-enn/processed_batch.npzs, which can be downloaded as follows.
End of explanation
"""
# Define a dict of metrics including `accuracy`, `marginal nll`, and `joint nll`.
evaluation_metrics = {
'accuracy': enn_metrics.make_accuracy_calculator(),
'marginal nll': enn_metrics.make_nll_marginal_calculator(),
'joint nll': enn_metrics.make_nll_polyadic_calculator(tau=10, kappa=2),
}
"""
Explanation: Define a set of evaluation metrics
Our enn library provides a set of standard metrics for evaluating the performance of neural networks. These metrics, which can be accessed from enn/metrics, can be divided into three categories:
Marginal: includes metrics like accuracy and marginal negative log-likelihood (NLL) for evaluating marginal predictions.
Joint: includes metrics for evaluating joint predictions.
Calibration: includes metrics for calculating calibration error.
Each metric takes logits and lables with the following shapes:
- logits: [num_enn_samples, batch_size, num_classes]
- labels: [batch_size, 1]
num_enn_samples specifies the number of sample logits per input image.
End of explanation
"""
# Get the Epinet checkpoint
epinet_resnet50_imagenet_ckpt = catalog.ImagenetModels.RESNET_50_FINAL_EPINET.value
epinet_resnet50_imagenet_ckpt
"""
Explanation: Load pre-trained Epinet
Pre-trained Epinet can be accessed from ImagenetModels in enn.checkpointing.catalog.py. As of now, we provide pre-trained Epinet based on ResNet-50, ResNet-101, ResNet-152, and ResNet-200. In this colab, we want to load Epinet based on ResNet-50 which can be accessed from the checkpoint RESNET_50_FINAL_EPINET.
End of explanation
"""
# Set the number of sample logits per input image
num_enn_samples = 100
# Recover the enn sampler
epinet_enn_sampler = utils.make_epinet_sampler_from_checkpoint(
epinet_resnet50_imagenet_ckpt,
num_enn_samples=num_enn_samples,)
# Get the epinet logits
key = jax.random.PRNGKey(seed=0)
epinet_logits = epinet_enn_sampler(images, key)
# epinet logits has shape [num_enn_sample, eval_batch_size, num_classes]
epinet_logits.shape
# Labels loaded from our dataset have shape [eval_batch_size,]. Our evaluation
# metrics require labels to have shape [eval_batch_size, 1].
eval_labels = labels[:, None]
# Evaluate
epinet_results = {key: float(metric(epinet_logits, eval_labels))
for key, metric in evaluation_metrics.items()}
epinet_results
"""
Explanation: From the checkpoint, we can recover an enn sampler, which is a function that takes a batch of images and one random key, and returns multiple sample logits per input image. To recover the enn sampler, we can use make_epinet_sampler_from_checkpoint (from enn/checkpoints/utils.py) which takes the checkpoint and also the number of sample logits we want per image (num_enn_samples).
End of explanation
"""
# Get the ResNet-50 checkpoint
resnet50_imagenet_ckpt = catalog.ImagenetModels.RESNET_50.value
resnet50_imagenet_ckpt
"""
Explanation: Load pre-trained ResNet
To have a better sense of how amazing Epinet is, we can compare its performance with a pretrained ResNet-50.
Pre-trained ResNets can be accessed from ImagenetModels in enn.checkpointing.catalog.py. As of now, we provide pre-trained ResNet-50, ResNet-101, ResNet-152, and ResNet-200.
End of explanation
"""
# Set the number of sample logits per input image to 1
num_enn_samples = 1
# Recover the enn sampler
resnet50_enn_sampler = utils.make_enn_sampler_from_checkpoint(
resnet50_imagenet_ckpt,
num_enn_samples=num_enn_samples,)
# Get the ResNet-50 logits
key = jax.random.PRNGKey(seed=0)
resnet50_logits = resnet50_enn_sampler(images, key)
# ResNet logits has shape [num_enn_sample, eval_batch_size, num_classes]
resnet50_logits.shape
# Labels loaded from our dataset have shape [eval_batch_size,]. Our evaluation
# metrics require labels to have shape [eval_batch_size, 1].
eval_labels = labels[:, None]
# Evaluate
resnet50_results = {key: float(metric(resnet50_logits, eval_labels))
for key, metric in evaluation_metrics.items()}
resnet50_results
"""
Explanation: From the checkpoint, we can recover an enn sampler, which is a function that takes a batch of images and one random key, and returns multiple sample logits per input image. To recover the enn sampler for ResNet-50, we can use make_enn_sampler_from_checkpoint (from enn/checkpoints/utils.py) which takes the checkpoint and also the number of sample logits we want per image (num_enn_samples). Here we set num_enn_samples=1, as having num_enn_samples > 1 just results in multiple similar sample logits per input image.
End of explanation
"""
# Make a dataframe of the results
resnet50_results['model'] = 'resnet'
epinet_results['model'] = 'epinet'
df = pd.DataFrame([resnet50_results, epinet_results])
df
# Compare the results
plt_df = pd.melt(df, id_vars=['model'], value_vars=evaluation_metrics.keys())
p = (gg.ggplot(plt_df)
+ gg.aes(x='model', y='value', fill='model')
+ gg.geom_col()
+ gg.facet_wrap('variable', scales='free',)
+ gg.theme(figure_size=(14, 4), panel_spacing=0.7)
)
p
"""
Explanation: Compare Epinet and ResNet results
End of explanation
"""
|
anhquan0412/deeplearning_fastai | deeplearning1/nbs/char-rnn.ipynb | apache-2.0 | path = get_file('nietzsche.txt', origin="https://s3.amazonaws.com/text-datasets/nietzsche.txt")
text = open(path).read().lower()
print('corpus length:', len(text))
!tail -n 25 {path}
chars = sorted(list(set(text)))
vocab_size = len(chars)+1
print('total chars:', vocab_size)
chars.insert(0, "\0")
''.join(chars[1:-6])
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
idx = [char_indices[c] for c in text]
idx[:10]
''.join(indices_char[i] for i in idx[:70])
"""
Explanation: Setup
We haven't really looked into the detail of how this works yet - so this is provided for self-study for those who are interested. We'll look at it closely next week.
End of explanation
"""
maxlen = 40
sentences = []
next_chars = []
for i in range(0, len(idx) - maxlen+1):
sentences.append(idx[i: i + maxlen])
next_chars.append(idx[i+1: i+maxlen+1])
print('nb sequences:', len(sentences))
sentences = np.concatenate([[np.array(o)] for o in sentences[:-2]])
next_chars = np.concatenate([[np.array(o)] for o in next_chars[:-2]])
sentences.shape, next_chars.shape
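The windowing above can be checked on a toy sequence: each target window is the input window shifted left by one character, and the final windows run short of maxlen, which is why the code drops the last entries with [:-2]:

```python
# Same indexing as above, on a sequence of 10 fake character ids:
toy_idx = list(range(10))
toy_maxlen = 4
n = len(toy_idx) - toy_maxlen + 1
xs = [toy_idx[i: i + toy_maxlen] for i in range(n)]
ys = [toy_idx[i + 1: i + toy_maxlen + 1] for i in range(n)]
```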
n_fac = 24
model=Sequential([
Embedding(vocab_size, n_fac, input_length=maxlen),
LSTM(units=512, input_shape=(n_fac,),return_sequences=True, dropout=0.2, recurrent_dropout=0.2,
implementation=2),
Dropout(0.2),
LSTM(512, return_sequences=True, dropout=0.2, recurrent_dropout=0.2,
implementation=2),
Dropout(0.2),
TimeDistributed(Dense(vocab_size)),
Activation('softmax')
])
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
"""
Explanation: Preprocess and create model
End of explanation
"""
def print_example():
seed_string="ethics is a basic foundation of all that"
for i in range(320):
        x=np.array([char_indices[c] for c in seed_string[-40:]])[np.newaxis,:] # [-40:] picks up the last 40 chars
preds = model.predict(x, verbose=0)[0][-1] # [-1] picks up the last char
preds = preds/np.sum(preds)
next_char = choice(chars, p=preds)
seed_string = seed_string + next_char
print(seed_string)
model.fit(sentences, np.expand_dims(next_chars,-1), batch_size=64, epochs=1)
print_example()
model.fit(sentences, np.expand_dims(next_chars,-1), batch_size=64, epochs=1)
print_example()
model.optimizer.lr=0.001
model.fit(sentences, np.expand_dims(next_chars,-1), batch_size=64, epochs=1)
print_example()
model.optimizer.lr=0.0001
model.fit(sentences, np.expand_dims(next_chars,-1), batch_size=64, epochs=1)
print_example()
model.save_weights('data/char_rnn.h5')
model.optimizer.lr=0.00001
model.fit(sentences, np.expand_dims(next_chars,-1), batch_size=64, epochs=1)
print_example()
model.fit(sentences, np.expand_dims(next_chars,-1), batch_size=64, epochs=1)
print_example()
print_example()
model.save_weights('data/char_rnn.h5')
"""
Explanation: Train
End of explanation
"""
|
google-research/google-research | activation_clustering/examples/cifar10/train.ipynb | apache-2.0 | import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
from activation_clustering import ac_model, utils
# The same dataset preprocessing as used in the baseline cifar10 model training.
def input_fn(batch_size, ds, label_key='label'):
dataset = ds.batch(batch_size, drop_remainder=True).prefetch(tf.data.experimental.AUTOTUNE)
def interface(batch):
features = tf.cast(batch['image'], tf.float32) / 255
labels = batch[label_key]
return features, labels
return dataset.map(interface)
"""
Explanation: Copyright 2020 The Google Research Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Activation Clustering Model: Training
This notebook shows how to train an activation clustering model from a trained baseline Keras model. Here we use a ResNet classification model trained on the CIFAR-10 dataset as an example. The model is included as model.h5.
End of explanation
"""
model = tf.keras.models.load_model('model.h5')
"""
Explanation: Train an activation clustering model from a baseline model
Load the baseline model for the activation clustering model to calculate activations.
End of explanation
"""
clustering_config = [
('activation', {'n_clusters': 15}),
('activation_18', {'n_clusters': 15}),
('activation_36', {'n_clusters': 15}),
('activation_54', {'n_clusters': 15})
]
# Uncomment this for shorter training time during debugging/test runs.
# clustering_config = [
# ('activation', {'n_clusters': 10}),
# ('activation_54', {'n_clusters': 10, 'filters': [16, 16, 16, 8]})
# ]
work_dir = 'new_work_dir'
new_acm = ac_model.ACModel(model, clustering_config, work_dir=work_dir)
"""
Explanation: Activation clustering model's configurations. The first entry in each pair is a layer name of the baseline model, whose output activations will be clustered. The second entry is a dict with key n_clusters specifying the number of clusters.
We use deep embedding clustering (DEC) as the clustering algorithm in this implementation, which has several other parameters that you can expose by modifying the activation_clustering library and configure here.
End of explanation
"""
new_acm.build_clustering_models()
new_acm.clustering_models
train_ds = tfds.load(
'cifar10:3.*.*',
shuffle_files=False,
split='train'
)
test_ds = tfds.load(
'cifar10:3.*.*',
shuffle_files=False,
split='test'
)
# # Uncomment this to use just a portion of data in this example for shorter training time.
# train_ds = tfds.load(
# 'cifar10:3.*.*',
# shuffle_files=False,
# split='train[:10%]'
# )
# test_ds = tfds.load(
# 'cifar10:3.*.*',
# shuffle_files=False,
# split='test[:10%]'
# )
# Cache the activations to make it easier to iterate.
batch_size = 500
ds = input_fn(batch_size, train_ds)
new_acm.cache_activations(ds, tag='train')
del ds
ds = input_fn(batch_size, test_ds)
new_acm.cache_activations(ds, tag='test')
del ds
activations_dict = new_acm.load_activations_dict(
activations_filename=work_dir+'/activations/activations_train.npz')
test_activations_dict = new_acm.load_activations_dict(
activations_filename=work_dir+'/activations/activations_test.npz')
for k, v in activations_dict.items():
print(k, v.shape)
# Here we use a small number of epochs/iterations for shorter training time.
# The activation clustering training loop handles model saving in its `work_dir`.
epochs = 15
maxiter = 980
# # Uncomment this for shorter training time
# epochs = 2
# maxiter = 280
new_acm.fit(activations_dict=activations_dict, epochs=epochs, maxiter=maxiter)
"""
Explanation: Calling build_clustering_models creates clustering models, one for each specified activation.
End of explanation
"""
|
microsoft/dowhy | docs/source/example_notebooks/do_sampler_demo.ipynb | mit | import os, sys
sys.path.append(os.path.abspath("../../../"))
import numpy as np
import pandas as pd
import dowhy.api
N = 5000
z = np.random.uniform(size=N)
d = np.random.binomial(1., p=1./(1. + np.exp(-5. * z)))
y = 2. * z + d + 0.1 * np.random.normal(size=N)
df = pd.DataFrame({'Z': z, 'D': d, 'Y': y})
(df[df.D == 1].mean() - df[df.D == 0].mean())['Y']
"""
Explanation: Do-sampler Introduction
by Adam Kelleher
The "do-sampler" is a new feature in do-why. While most potential-outcomes oriented estimators focus on estimating the specific contrast $E[Y_0 - Y_1]$, Pearlian inference focuses on more fundamental quantities like the joint distribution of a set of outcomes Y, $P(Y)$, which can be used to derive other statistics of interest.
Generally, it's hard to represent a probability distribution non-parametrically. Even if you could, you wouldn't want to gloss over finite-sample problems with you data you used to generate it. With these issues in mind, we decided to represent interventional distributions by sampling from them with an object called to "do-sampler". With these samples, we can hope to compute finite-sample statistics of our interventional data. If we bootstrap many such samples, we can even hope for good sampling distributions for these statistics.
The user should note that this is still an area of active research, so you should be careful about being too confident in bootstrapped error bars from do-samplers.
Note that do samplers sample from the outcome distribution, and so will vary significantly from sample to sample. To use them to compute outcomes, it's recommended to generate several such samples to get an idea of the posterior variance of your statistic of interest.
Pearlian Interventions
Following the notion of an intervention in a Pearlian causal model, our do-samplers implement a sequence of steps:
Disrupt causes
Make Effective
Propagate and sample
In the first stage, we imagine cutting the in-edges to all of the variables we're intervening on. In the second stage, we set the value of those variables to their interventional quantities. In the third stage, we propagate that value forward through our model to compute interventional outcomes with a sampling procedure.
In practice, there are many ways we can implement these steps. They're most explicit when we build the model as a linear bayesian network in PyMC3, which is what underlies the MCMC do sampler. In that case, we fit one bayesian network to the data, then construct a new network representing the interventional network. The structural equations are set with the parameters fit in the initial network, and we sample from that new network to get our do sample.
In the weighting do sampler, we abstractly think of "disrupting the causes" by accounting for selection into the causal state through propensity score estimation. These scores contain the information used to block back-door paths, and so have the same statistics effect as cutting edges into the causal state. We make the treatment effective by selecting the subset of our data set with the correct value of the causal state. Finally, we generated a weighted random sample using inverse propensity weighting to get our do sample.
There are other ways you could implement these three steps, but the formula is the same. We've abstracted them out as abstract class methods which you should override if you'd like to create your own do sampler!
Statefulness
The do sampler when accessed through the high-level pandas API is stateless by default.This makes it intuitive to work with, and you can generate different samples with repeated calls to the pandas.DataFrame.causal.do. It can be made stateful, which is sometimes useful.
The 3-stage process we mentioned before is implemented by passing an internal pandas.DataFrame through each of the three stages, but regarding it as temporary. The internal dataframe is reset by default before returning the result.
It can be much more efficient to maintain state in the do sampler between generating samples. This is especially true when step 1 requires fitting an expensive model, as is the case with the MCMC do sampler, the kernel density sampler, and the weighting sampler.
Instead of re-fitting the model for each sample, you'd like to fit it once, and then generate many samples from the do sampler. You can do this by setting the kwarg stateful=True when you call the pandas.DataFrame.causal.do method. To reset the state of the dataframe (deleting the model as well as the internal dataframe), you can call the pandas.DataFrame.causal.reset method.
Through the lower-level API, the sampler is stateful by default. The assumption is that a "power user" who is using the low-level API will want more control over the sampling process. In this case, state is carried by the internal dataframe self._df, which is a copy of the dataframe passed in at instantiation. The original dataframe is kept in self._data, and is used when the user resets state.
Integration
The do-sampler is built on top of the identification abstraction used throughout do-why. It uses a dowhy.CausalModel to perform identification, and builds any models it needs automatically using this identification.
Specifying Interventions
There is a kwarg on the dowhy.do_sampler.DoSampler object called keep_original_treatment. While an intervention might be to set all units' treatment values to some specific value, it's often natural to keep them set as they were, and instead remove confounding bias during effect estimation. If you'd prefer not to specify an intervention, you can set the kwarg like keep_original_treatment=True, and the second stage of the 3-stage process will be skipped. In that case, any intervention specified on sampling will be ignored.
If the keep_original_treatment flag is set to false (it is by default), then you must specify an intervention when you sample from the do sampler. For details, see the demo below!
Demo
First, let's generate some data and a causal model. Here, Z confounds our causal state, D, with the outcome, Y.
End of explanation
"""
from dowhy import CausalModel
causes = ['D']
outcomes = ['Y']
common_causes = ['Z']
model = CausalModel(df,
causes,
outcomes,
common_causes=common_causes)
"""
Explanation: So the naive effect is around 60% high. Now, let's build a causal model for this data.
End of explanation
"""
identification = model.identify_effect(proceed_when_unidentifiable=True)
"""
Explanation: Now that we have a model, we can try to identify the causal effect.
End of explanation
"""
from dowhy.do_samplers.weighting_sampler import WeightingSampler
sampler = WeightingSampler(df,
causal_model=model,
keep_original_treatment=True,
variable_types={'D': 'b', 'Z': 'c', 'Y': 'c'}
)
"""
Explanation: Identification works! We didn't actually need to do this yet, since it will happen internally with the do sampler, but it can't hurt to check that identification works before proceeding. Now, let's build the sampler.
End of explanation
"""
interventional_df = sampler.do_sample(None)
(interventional_df[interventional_df.D == 1].mean() - interventional_df[interventional_df.D == 0].mean())['Y']
"""
Explanation: Now, we can just sample from the interventional distribution! Since we set the keep_original_treatment flag to True, any treatment we pass here will be ignored. Here, we'll just pass None to acknowledge that we know we don't want to pass anything.
If you'd prefer to specify an intervention, you can just put the interventional value here instead as a list or numpy array.
End of explanation
"""
|
fdion/infographics_research | nfl_viz.ipynb | mit | !wget http://nflsavant.com/pbp_data.php?year=2015 -O pbp-2015.csv
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
sns.set_context("talk")
plt.figure(figsize=(10, 8))
df = pd.read_csv('pbp-2015.csv')
# What do we have?
df.columns
def event_to_datetime(row):
"""Calculate a datetime from date, quarter, minute and second of an event."""
mins = 15 * (row['Quarter'] - 1) + row['Minute']
hours, mins = divmod(mins, 60)
return "{} {}:{:02}:{:02}".format(row['GameDate'], hours, mins, row['Second'])
df['datetime'] = pd.to_datetime(df.apply(event_to_datetime, axis=1))
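The quarter/minute arithmetic above can be sanity-checked in isolation (a pandas-free sketch of the same divmod logic):

```python
def quarter_clock(quarter, minute, second):
    # Same arithmetic as event_to_datetime, minus the date handling.
    mins = 15 * (quarter - 1) + minute
    hours, mins = divmod(mins, 60)
    return "{}:{:02}:{:02}".format(hours, mins, second)
```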
"""
Explanation: Exploring NFL data through visualization
Today, we'll go with our first option: downloading data from http://nflsavant.com as CSV using wget. We can then load this local file using pandas read_csv. read_csv can also read the CSV data directly from the URL, but this way we don't have to download the file each time we load our data frame. Something I'm sure the owner of the website will appreciate.
End of explanation
"""
car = df[(df.OffenseTeam=='CAR')]
"""
Explanation: Local team
Happens to be the Carolina Panthers. So let's look at their offence.
End of explanation
"""
ax = car.plot(x='datetime', y='Yards')
"""
Explanation: Pandas plot does a decent job, but doesn't know about categoricals. We can't use the game date string for the x axis. Here we use the datetime we calculated from quarter, minutes, etc., but now pandas thinks it's a time series, which it does look like. Consider it instead a form of parallel plot, ignoring the slope graph between each date, since it doesn't mean anything here (pandas also has an actual parallel plot).
End of explanation
"""
g = sns.stripplot(x='GameDate', y='Yards', data=car, jitter=True)
for item in g.get_xticklabels(): item.set_rotation(60)
# We can also alter the look of the strip plot significantly
g = sns.stripplot(x='Yards', y='GameDate', data=car,
palette="Set2", size=6, marker="D", edgecolor="gray", alpha=.25)
"""
Explanation: Not bad, but not completely helpful. Sure, pandas also has bar plots. But I think something else could work better visually. Let's see what Seaborn has to offer. How about a strip plot? It is a scatter plot for categorical data. We'll add jitter on the x axis to better see the data.
End of explanation
"""
car_atl = df[(df.OffenseTeam=='CAR')|(df.OffenseTeam=='ATL')]
"""
Explanation: Dare to compare
How about comparing two teams? Say, Carolina and Atlanta. Let's see how they do in yards (losses and gains) per quarter, for this season up to 9/13.
End of explanation
"""
with sns.color_palette([sns.color_palette("muted")[2],sns.color_palette("muted")[5]]):
g = sns.stripplot(x='Quarter', y='Yards', data=car_atl, hue='OffenseTeam', jitter=True)
g.hlines(0,-1,6, color='grey')
"""
Explanation: Colors can really improve readability. The Atlanta Falcons' primary color is red and the Carolina Panthers' primary color is light blue. Using those (via a color_palette context manager):
End of explanation
"""
ax = car.boxplot(column='Yards', by='GameDate')
ax.set_title("Carolina offence Yardage by game")
"""
Explanation: Distribution
We can also look at the distribution using plot types that are specifically designed for this. One pandas dataframe method is boxplot. Does the job, although we can make something prettier than this.
End of explanation
"""
g = sns.boxplot(data=car, y='Yards', x='GameDate')
for item in g.get_xticklabels(): item.set_rotation(60)
"""
Explanation: Let's have a look at the same thing using Seaborn. We'll fix the x axis tick labels too, rotating them.
End of explanation
"""
g = sns.violinplot(data=car, x='Yards', y='GameDate', orient='h')
g.vlines(0,-1,15, alpha=0.5)
"""
Explanation: Finally, let's examine one more way to view the distribution of the data using Seaborn's violin plot.
End of explanation
"""
|
derekjchow/models | research/deeplab/deeplab_demo.ipynb | apache-2.0 | import os
from io import BytesIO
import tarfile
import tempfile
from six.moves import urllib
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
from PIL import Image
import tensorflow as tf
"""
Explanation: Overview
This colab demonstrates the steps to use the DeepLab model to perform semantic segmentation on a sample input image. Expected outputs are semantic labels overlayed on the sample image.
About DeepLab
The models used in this colab perform semantic segmentation. Semantic segmentation models focus on assigning semantic labels, such as sky, person, or car, to multiple objects and stuff in a single image.
Instructions
<h3><a href="https://cloud.google.com/tpu/"><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/tpu-hexagon.png" width="50"></a> Use a free TPU device</h3>
On the main menu, click Runtime and select Change runtime type. Set "TPU" as the hardware accelerator.
Click Runtime again and select Runtime > Run All. You can also run the cells manually with Shift-ENTER.
Import Libraries
End of explanation
"""
class DeepLabModel(object):
"""Class to load deeplab model and run inference."""
INPUT_TENSOR_NAME = 'ImageTensor:0'
OUTPUT_TENSOR_NAME = 'SemanticPredictions:0'
INPUT_SIZE = 513
FROZEN_GRAPH_NAME = 'frozen_inference_graph'
def __init__(self, tarball_path):
"""Creates and loads pretrained deeplab model."""
self.graph = tf.Graph()
graph_def = None
# Extract frozen graph from tar archive.
tar_file = tarfile.open(tarball_path)
for tar_info in tar_file.getmembers():
if self.FROZEN_GRAPH_NAME in os.path.basename(tar_info.name):
file_handle = tar_file.extractfile(tar_info)
graph_def = tf.GraphDef.FromString(file_handle.read())
break
tar_file.close()
if graph_def is None:
raise RuntimeError('Cannot find inference graph in tar archive.')
with self.graph.as_default():
tf.import_graph_def(graph_def, name='')
self.sess = tf.Session(graph=self.graph)
def run(self, image):
"""Runs inference on a single image.
Args:
image: A PIL.Image object, raw input image.
Returns:
resized_image: RGB image resized from original input image.
seg_map: Segmentation map of `resized_image`.
"""
width, height = image.size
resize_ratio = 1.0 * self.INPUT_SIZE / max(width, height)
target_size = (int(resize_ratio * width), int(resize_ratio * height))
resized_image = image.convert('RGB').resize(target_size, Image.ANTIALIAS)
batch_seg_map = self.sess.run(
self.OUTPUT_TENSOR_NAME,
feed_dict={self.INPUT_TENSOR_NAME: [np.asarray(resized_image)]})
seg_map = batch_seg_map[0]
return resized_image, seg_map
def create_pascal_label_colormap():
"""Creates a label colormap used in PASCAL VOC segmentation benchmark.
Returns:
A Colormap for visualizing segmentation results.
"""
colormap = np.zeros((256, 3), dtype=int)
ind = np.arange(256, dtype=int)
for shift in reversed(range(8)):
for channel in range(3):
colormap[:, channel] |= ((ind >> channel) & 1) << shift
ind >>= 3
return colormap
def label_to_color_image(label):
"""Adds color defined by the dataset colormap to the label.
Args:
label: A 2D array with integer type, storing the segmentation label.
Returns:
result: A 2D array with floating type. The element of the array
is the color indexed by the corresponding element in the input label
to the PASCAL color map.
Raises:
ValueError: If label is not of rank 2 or its value is larger than color
map maximum entry.
"""
if label.ndim != 2:
raise ValueError('Expect 2-D input label')
colormap = create_pascal_label_colormap()
if np.max(label) >= len(colormap):
raise ValueError('label value too large.')
return colormap[label]
def vis_segmentation(image, seg_map):
"""Visualizes input image, segmentation map and overlay view."""
plt.figure(figsize=(15, 5))
grid_spec = gridspec.GridSpec(1, 4, width_ratios=[6, 6, 6, 1])
plt.subplot(grid_spec[0])
plt.imshow(image)
plt.axis('off')
plt.title('input image')
plt.subplot(grid_spec[1])
seg_image = label_to_color_image(seg_map).astype(np.uint8)
plt.imshow(seg_image)
plt.axis('off')
plt.title('segmentation map')
plt.subplot(grid_spec[2])
plt.imshow(image)
plt.imshow(seg_image, alpha=0.7)
plt.axis('off')
plt.title('segmentation overlay')
unique_labels = np.unique(seg_map)
ax = plt.subplot(grid_spec[3])
plt.imshow(
FULL_COLOR_MAP[unique_labels].astype(np.uint8), interpolation='nearest')
ax.yaxis.tick_right()
plt.yticks(range(len(unique_labels)), LABEL_NAMES[unique_labels])
plt.xticks([], [])
ax.tick_params(width=0.0)
plt.grid('off')
plt.show()
LABEL_NAMES = np.asarray([
'background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus',
'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike',
'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tv'
])
FULL_LABEL_MAP = np.arange(len(LABEL_NAMES)).reshape(len(LABEL_NAMES), 1)
FULL_COLOR_MAP = label_to_color_image(FULL_LABEL_MAP)
"""
Explanation: Import helper methods
These methods help us perform the following tasks:
* Load the latest version of the pretrained DeepLab model
* Load the colormap from the PASCAL VOC dataset
* Add colors to various labels, such as "pink" for people, "green" for bicycle and more
* Visualize an image, and add an overlay of colors on various regions
End of explanation
"""
MODEL_NAME = 'mobilenetv2_coco_voctrainaug' # @param ['mobilenetv2_coco_voctrainaug', 'mobilenetv2_coco_voctrainval', 'xception_coco_voctrainaug', 'xception_coco_voctrainval']
_DOWNLOAD_URL_PREFIX = 'http://download.tensorflow.org/models/'
_MODEL_URLS = {
'mobilenetv2_coco_voctrainaug':
'deeplabv3_mnv2_pascal_train_aug_2018_01_29.tar.gz',
'mobilenetv2_coco_voctrainval':
'deeplabv3_mnv2_pascal_trainval_2018_01_29.tar.gz',
'xception_coco_voctrainaug':
'deeplabv3_pascal_train_aug_2018_01_04.tar.gz',
'xception_coco_voctrainval':
'deeplabv3_pascal_trainval_2018_01_04.tar.gz',
}
_TARBALL_NAME = 'deeplab_model.tar.gz'
model_dir = tempfile.mkdtemp()
tf.gfile.MakeDirs(model_dir)
download_path = os.path.join(model_dir, _TARBALL_NAME)
print('downloading model, this might take a while...')
urllib.request.urlretrieve(_DOWNLOAD_URL_PREFIX + _MODEL_URLS[MODEL_NAME],
download_path)
print('download completed! loading DeepLab model...')
MODEL = DeepLabModel(download_path)
print('model loaded successfully!')
"""
Explanation: Select a pretrained model
We have trained the DeepLab model using various backbone networks. Select one from the MODEL_NAME list.
End of explanation
"""
SAMPLE_IMAGE = 'image1' # @param ['image1', 'image2', 'image3']
IMAGE_URL = '' #@param {type:"string"}
_SAMPLE_URL = ('https://github.com/tensorflow/models/blob/master/research/'
'deeplab/g3doc/img/%s.jpg?raw=true')
def run_visualization(url):
"""Inferences DeepLab model and visualizes result."""
try:
f = urllib.request.urlopen(url)
jpeg_str = f.read()
original_im = Image.open(BytesIO(jpeg_str))
except IOError:
print('Cannot retrieve image. Please check url: ' + url)
return
print('running deeplab on image %s...' % url)
resized_im, seg_map = MODEL.run(original_im)
vis_segmentation(resized_im, seg_map)
image_url = IMAGE_URL or _SAMPLE_URL % SAMPLE_IMAGE
run_visualization(image_url)
"""
Explanation: Run on sample images
Select one of the sample images (leave IMAGE_URL empty) or feed any internet image URL for inference.
Note that this colab uses single scale inference for fast computation,
so the results may slightly differ from the visualizations in the
README file,
which uses multi-scale and left-right flipped inputs.
End of explanation
"""
|
google/starthinker | colabs/cm360_conversion_upload_from_bigquery.ipynb | apache-2.0 | !pip install git+https://github.com/google/starthinker
"""
Explanation: CM360 Conversion Upload From BigQuery
Move from BigQuery to CM.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
"""
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
"""
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
"""
FIELDS = {
'account':'',
'auth_cm':'user', # Credentials used for CM.
'floodlight_activity_id':'',
'auth_bigquery':'user', # Credentials for BigQuery.
'floodlight_conversion_type':'encryptedUserId', # Must match the values specified in the last column.
'encryption_entity_id':'', # Typically the same as the account id.
'encryption_entity_type':'DCM_ACCOUNT',
'encryption_entity_source':'DATA_TRANSFER',
'dataset':'Source containing the conversion data.',
'table':'Source containing the conversion data.',
'legacy':False, # Matters if source is a view.
}
print("Parameters Set To: %s" % FIELDS)
"""
Explanation: 3. Enter CM360 Conversion Upload From BigQuery Recipe Parameters
Specify a CM Account ID, Floodlight Activity ID and Conversion Type.
Include BigQuery dataset and table.
Columns: Ordinal, timestampMicros, quantity, value, encryptedUserId | encryptedUserIdCandidates | gclid | mobileDeviceId | matchId | dclid
Include encryption information if using encryptedUserId or encryptedUserIdCandidates.
Modify the values below for your use case, can be done multiple times, then click play.
End of explanation
"""
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'conversion_upload':{
'auth':{'field':{'name':'auth_cm','kind':'authentication','order':1,'default':'user','description':'Credentials used for CM.'}},
'account_id':{'field':{'name':'account','kind':'string','order':0,'default':''}},
'activity_id':{'field':{'name':'floodlight_activity_id','kind':'integer','order':1,'default':''}},
'conversion_type':{'field':{'name':'floodlight_conversion_type','kind':'choice','order':2,'choices':['encryptedUserId','encryptedUserIdCandidates','dclid','gclid','matchId','mobileDeviceId'],'default':'encryptedUserId','description':'Must match the values specified in the last column.'}},
'encryptionInfo':{
'encryptionEntityId':{'field':{'name':'encryption_entity_id','kind':'integer','order':3,'default':'','description':'Typically the same as the account id.'}},
'encryptionEntityType':{'field':{'name':'encryption_entity_type','kind':'choice','order':4,'choices':['ADWORDS_CUSTOMER','DBM_ADVERTISER','DBM_PARTNER','DCM_ACCOUNT','DCM_ADVERTISER','DFP_NETWORK_CODE'],'default':'DCM_ACCOUNT'}},
'encryptionSource':{'field':{'name':'encryption_entity_source','kind':'choice','order':5,'choices':['AD_SERVING','DATA_TRANSFER'],'default':'DATA_TRANSFER'}}
},
'from':{
'bigquery':{
'auth':{'field':{'name':'auth_bigquery','kind':'authentication','order':1,'default':'user','description':'Credentials for BigQuery.'}},
'dataset':{'field':{'name':'dataset','kind':'string','order':6,'default':'Source containing the conversion data.'}},
'table':{'field':{'name':'table','kind':'string','order':7,'default':'Source containing the conversion data.'}},
'legacy':{'field':{'name':'legacy','kind':'boolean','order':8,'default':False,'description':'Matters if source is a view.'}}
}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
"""
Explanation: 4. Execute CM360 Conversion Upload From BigQuery
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation
"""
|
broundy/udacity | nanodegrees/deep_learning_foundations/unit_1/lesson_11_handwriting_recognition/handwritten-digit-recognition-with-tflearn-exercise.ipynb | unlicense | # Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
"""
Explanation: Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9.
We'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.
End of explanation
"""
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
"""
Explanation: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has:
1. an image of a handwritten digit and
2. a corresponding label (a number 0-9 that identifies the image)
We'll call the images, which will be the input to our neural network, X and their corresponding labels Y.
We're going to want our labels as one-hot vectors, which are vectors that hold mostly 0's and a single 1. It's easiest to see this in an example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].
Flattened data
For this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values.
Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.
End of explanation
"""
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by it's index in the MNIST set
def show_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the first (index 0) training image
show_digit(0)
"""
Explanation: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
End of explanation
"""
# Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#Input
net = tflearn.input_data([None, 784])
#Hidden
net = tflearn.fully_connected(net, 200, activation='ReLU')
net = tflearn.fully_connected(net, 25, activation='ReLU')
#Output
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
#Make it Go
model = tflearn.DNN(net)
return model
# Build the model
model = build_model()
"""
Explanation: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define:
The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
Hidden layers, which recognize patterns in data and connect the input to the output layer, and
The output layer, which defines how the network learns and outputs a label for a given image.
Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call; it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling tflearn.fully_connected(net, n_units).
Then, to set how you train the network, use:
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with categorical cross-entropy.
Finally, you put all this together to create the model with tflearn.DNN(net).
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer.
End of explanation
"""
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=30)
"""
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
End of explanation
"""
# Compare the labels that our model predicts with the actual labels
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
test_accuracy = np.mean(predictions == testY.argmax(axis=1), axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
"""
Explanation: Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 98% accuracy! Some simple models have been known to get up to 99.7% accuracy.
End of explanation
"""
|
opentraffic/reporter-quality-testing-rig | notebooks/GPS Processing.ipynb | lgpl-3.0 | import pandas as pd
from matplotlib import pyplot as plt
import os
import sys; sys.path.insert(0, os.path.abspath('..'));
import validator.validator as val
import numpy as np
import time as t
import glob
import seaborn as sns
%matplotlib inline
"""
Explanation: Open Traffic Reporter: Speed Error Comparison with Real-World Data
End of explanation
"""
quickstop = pd.read_csv('../data/driven/Log-20170725-102138 home to quikstop.csv', skiprows=2)
quickstop.head(10)
"""
Explanation: 1. Download TrackAddict onto your mobile device
for iPhone here.
2. (optional) Install a WiFi/Bluetooth enabled OBDII Scanner into the dataport of your automobile
device used for testing can be purchased here.
3. Open the app and record your trips as you drive around town
make sure you configure the app to record speed in km/h
4. In the app, export your trips by selecting the "Share" option, and e-mailing the raw .csv files to yourself
5. Store all of your trip .csv's in a new directory on your computer
End of explanation
"""
path ='../data/driven' # path to TrackAddict exported .csv's
errors = val.computeTrackAddictSpeedErrs(path) # specify data collection method
errors.head()
val.plotDrivenVsValhallaScatter(errors, 'Oakland', saveFig=False)
val.plotSpeedErrCDFs(errors, 'Oakland', saveFig=False)
"""
Explanation: 6. Point this notebook to your new data directory and run the following code:
End of explanation
"""
|
rkburnside/python_development | bootcamp/.ipynb_checkpoints/Functions and Methods Homework-checkpoint.ipynb | gpl-2.0 | def vol(rad):
pass
"""
Explanation: Functions and Methods Homework
Complete the following exercises:
Write a function that computes the volume of a sphere given its radius.
End of explanation
"""
def ran_check(num,low,high):
pass
"""
Explanation: Write a function that checks whether a number is in a given range (inclusive of high and low).
End of explanation
"""
def ran_bool(num,low,high):
pass
ran_bool(3,1,10)
"""
Explanation: If you only wanted to return a boolean:
End of explanation
"""
def up_low(s):
pass
"""
Explanation: Write a Python function that accepts a string and calculate the number of upper case letters and lower case letters.
Sample String : 'Hello Mr. Rogers, how are you this fine Tuesday?'
Expected Output :
No. of Upper case characters : 4
No. of Lower case Characters : 33
If you feel ambitious, explore the Collections module to solve this problem!
End of explanation
"""
def unique_list(l):
pass
unique_list([1,1,1,1,2,2,3,3,3,3,4,5])
"""
Explanation: Write a Python function that takes a list and returns a new list with unique elements of the first list.
Sample List : [1,1,1,1,2,2,3,3,3,3,4,5]
Unique List : [1, 2, 3, 4, 5]
End of explanation
"""
def multiply(numbers):
pass
multiply([1,2,3,-4])
"""
Explanation: Write a Python function to multiply all the numbers in a list.
Sample List : [1, 2, 3, -4]
Expected Output : -24
End of explanation
"""
def palindrome(s):
pass
palindrome('helleh')
"""
Explanation: Write a Python function that checks whether a passed string is palindrome or not.
Note: A palindrome is a word, phrase, or sequence that reads the same backward as forward, e.g., madam or nurses run.
End of explanation
"""
import string
def ispangram(str1, alphabet=string.ascii_lowercase):
pass
ispangram("The quick brown fox jumps over the lazy dog")
string.ascii_lowercase
"""
Explanation: Hard:
Write a Python function to check whether a string is pangram or not.
Note : Pangrams are words or sentences containing every letter of the alphabet at least once.
For example : "The quick brown fox jumps over the lazy dog"
Hint: Look at the string module
End of explanation
"""
|
slundberg/shap | notebooks/tabular_examples/neural_networks/Census income classification with Keras.ipynb | mit | from sklearn.model_selection import train_test_split
from keras.layers import Input, Dense, Flatten, Concatenate, concatenate, Dropout, Lambda
from keras.models import Model
from keras.layers.embeddings import Embedding
from tqdm import tqdm
import shap
# print the JS visualization code to the notebook
shap.initjs()
"""
Explanation: Census income classification with Keras
To download a copy of this notebook visit github.
End of explanation
"""
X,y = shap.datasets.adult()
X_display,y_display = shap.datasets.adult(display=True)
# normalize data (this is important for model convergence)
dtypes = list(zip(X.dtypes.index, map(str, X.dtypes)))
for k,dtype in dtypes:
if dtype == "float32":
X[k] -= X[k].mean()
X[k] /= X[k].std()
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2, random_state=7)
"""
Explanation: Load dataset
End of explanation
"""
# build model
input_els = []
encoded_els = []
for k,dtype in dtypes:
input_els.append(Input(shape=(1,)))
if dtype == "int8":
e = Flatten()(Embedding(X_train[k].max()+1, 1)(input_els[-1]))
else:
e = input_els[-1]
encoded_els.append(e)
encoded_els = concatenate(encoded_els)
layer1 = Dropout(0.5)(Dense(100, activation="relu")(encoded_els))
out = Dense(1)(layer1)
# train model
regression = Model(inputs=input_els, outputs=[out])
regression.compile(optimizer="adam", loss='binary_crossentropy')
regression.fit(
[X_train[k].values for k,t in dtypes],
y_train,
epochs=50,
batch_size=512,
shuffle=True,
validation_data=([X_valid[k].values for k,t in dtypes], y_valid)
)
"""
Explanation: Train Keras model
End of explanation
"""
def f(X):
return regression.predict([X[:,i] for i in range(X.shape[1])]).flatten()
"""
Explanation: Explain predictions
Here we take the Keras model trained above and explain why it makes different predictions for different individuals. SHAP expects model functions to take a 2D numpy array as input, so we define a wrapper function around the original Keras predict function.
End of explanation
"""
explainer = shap.KernelExplainer(f, X.iloc[:50,:])
shap_values = explainer.shap_values(X.iloc[299,:], nsamples=500)
shap.force_plot(explainer.expected_value, shap_values, X_display.iloc[299,:])
"""
Explanation: Explain a single prediction
Here we use a selection of 50 samples from the dataset to represent "typical" feature values, and then use 500 perturbation samples to estimate the SHAP values for a given prediction. Note that this requires 500 * 50 evaluations of the model.
End of explanation
"""
shap_values50 = explainer.shap_values(X.iloc[280:330,:], nsamples=500)
shap.force_plot(explainer.expected_value, shap_values50, X_display.iloc[280:330,:])
"""
Explanation: Explain many predictions
Here we repeat the above explanation process for 50 individuals. Since we are using a sampling based approximation each explanation can take a couple seconds depending on your machine setup.
End of explanation
"""
|
joshspeagle/frankenz | demos/2 - Photometric Inference.ipynb | mit | from __future__ import print_function, division
import sys
import pickle
import numpy as np
import scipy
import matplotlib
from matplotlib import pyplot as plt
from six.moves import range
# import frankenz code
import frankenz as fz
# plot in-line within the notebook
%matplotlib inline
np.random.seed(83481)
# re-defining plotting defaults
from matplotlib import rcParams
rcParams.update({'xtick.major.pad': '7.0'})
rcParams.update({'xtick.major.size': '7.5'})
rcParams.update({'xtick.major.width': '1.5'})
rcParams.update({'xtick.minor.pad': '7.0'})
rcParams.update({'xtick.minor.size': '3.5'})
rcParams.update({'xtick.minor.width': '1.0'})
rcParams.update({'ytick.major.pad': '7.0'})
rcParams.update({'ytick.major.size': '7.5'})
rcParams.update({'ytick.major.width': '1.5'})
rcParams.update({'ytick.minor.pad': '7.0'})
rcParams.update({'ytick.minor.size': '3.5'})
rcParams.update({'ytick.minor.width': '1.0'})
rcParams.update({'axes.titlepad': '15.0'})
rcParams.update({'font.size': 30})
"""
Explanation: Photometric Inference
This notebook outlines the basics of inferring redshifts (a set of intrinsic labels) from photometry (a set of observed features).
Setup
End of explanation
"""
survey = pickle.load(open('../data/mock_sdss_cww_bpz.pkl', 'rb')) # load data
types = survey.data['types'] # type flag
templates = survey.data['templates'] # template ID
redshifts = survey.data['redshifts'] # redshift
mags = survey.data['refmags'] # magnitude (reference)
phot_obs = survey.data['phot_obs'] # observed photometry
phot_err = survey.data['phot_err'] # photometry error
phot_true = survey.data['phot_true'] # true photometry
Nobs = len(types)
"""
Explanation: Data
For our proof-of-concept tests, we will use the mock SDSS data we previously generated.
End of explanation
"""
# plotting magnitude prior
plt.figure(figsize=(14, 4))
depths = np.array([f['depth_mag5sig'] for f in survey.filters])
mdepth = depths[survey.ref_filter]
mhigh = mdepth + 2.5 * np.log10(2)
mgrid = np.arange(14., mhigh + 0.01, 0.01)
plt.plot(mgrid, survey.pm(mgrid, mdepth), lw=5, color='navy')
plt.axvline(mdepth, ls='--', lw=5, color='black')
plt.xlabel(survey.filters[survey.ref_filter]['name'] + ' (mag)')
plt.xlim([14., mhigh])
plt.ylabel('P(mag)')
plt.ylim([0., None])
plt.yticks([])
plt.tight_layout()
# plotting prior
mgrid_sub = mgrid[::20]
Nmag = len(mgrid_sub)
zgrid = np.linspace(0., 4., 1000)
pgal_colors = plt.get_cmap('Reds')(np.linspace(0, 1, Nmag)) # PGAL colors
sgal_colors = plt.get_cmap('Purples')(np.linspace(0, 1, Nmag)) # SGAL colors
sb_colors = plt.get_cmap('Blues')(np.linspace(0, 1, Nmag)) # SB colors
plt.figure(figsize=(14, 12))
for i, color in zip(range(survey.NTYPE), [pgal_colors, sgal_colors, sb_colors]):
plt.subplot(3,1,i+1)
for j, c in zip(mgrid_sub, color):
pztm = [survey.pztm(z, i, j) for z in zgrid]
plt.plot(zgrid, pztm, lw=3, color=c, alpha=0.6)
plt.xlabel('Redshift')
plt.xlim([0, 4])
plt.ylabel('P({0}|mag)'.format(survey.TYPES[i]), fontsize=24)
plt.ylim([0., None])
plt.yticks([])
plt.tight_layout()
# plotting templates
tcolors = plt.get_cmap('viridis_r')(np.linspace(0., 1., survey.NTEMPLATE)) # template colors
xlow = min([min(f['wavelength']) for f in survey.filters]) # lower bound
xhigh = max([max(f['wavelength']) for f in survey.filters]) # upper bound
plt.figure(figsize=(14, 6))
for t, c in zip(survey.templates, tcolors):
wave, fnu, name = t['wavelength'], t['fnu'], t['name']
sel = (wave > xlow) & (wave < xhigh)
plt.semilogy(wave[sel], fnu[sel], lw=3, color=c,
label=name, alpha=0.7)
plt.xlim([xlow, xhigh])
plt.xticks(np.arange(3000., 11000.+1., 2000.))
plt.xlabel(r'Wavelength ($\AA$)')
plt.ylabel(r'$F_{\nu}$ (normalized)')
plt.legend(ncol=int(survey.NTEMPLATE/6 + 1), fontsize=13, loc=4)
plt.tight_layout()
"""
Explanation: Inference with Noisy Redshifts
For every observed galaxy $g \in \mathbf{g}$ out of $N_\mathbf{g}$ galaxies, let's assume we have an associated noisy redshift estimate $\hat{z}_g$ with PDF $P(\hat{z}_g | z)$. We are interested in constructing an estimate for the population redshift distribution $N(z|\mathbf{g})$ by projecting our results onto a relevant (possibly noisy) redshift basis $\lbrace \dots, P(\hat{z}_h|z) \equiv K(z|\hat{z}_h), \dots \rbrace$ indexed by $h \in \mathbf{h}$ with $N_{\mathbf{h}}$ elements. The use of $K(\cdot|\cdot)$ instead of $P(\cdot|\cdot)$ here is meant to suggest an underlying redshift kernel. We will return to this later.
Abusing notation slightly, we can write our likelihood between $g$ and $h$ as
$$ \mathcal{L}(g|h) \equiv P(\hat{z}_g | \hat{z}_h) = \int P(\hat{z}_g | z) K(z | \hat{z}_h) dz $$
where we have marginalized over the true redshift $z$. Note that the likelihood is (by construction) unnormalized so that $\sum_g \mathcal{L}(g|h) \neq 1$.
Combined with a prior over our basis $P(h)$, we can then write the posterior between $h$ and $g$ using Bayes Theorem as
$$ P(h|g) = \frac{\mathcal{L}(g|h)\pi(h)}{\mathcal{Z}_g}
= \frac{\mathcal{L}(g|h)\pi(h)}{\sum_h \mathcal{L}(g|h) \pi(h)} $$
where $\pi(h)$ is the prior and $\mathcal{Z}_g$ is the evidence (i.e. marginal likelihood) of $g$.
We are interested in the number density of observed galaxies as a function of redshift, $N(z|\mathbf{g})$. We can define this as a weighted sum over our redshift basis
$$ N(z|\mathbf{g}) = \sum_h w_h(\mathbf{g}) \, K(z|\hat{z}_h) $$
where $w_h(\mathbf{g})$ are the associated weights. For now, we will take the ansatz that $w_h(\mathbf{g}) = \sum_g P(h|g)$, i.e. that we can estimate $N(z|\mathbf{g})$ by stacking all our galaxy PDFs. This isn't quite correct but is sufficient for our purposes here; we will illustrate how to derive these weights properly in a later notebook.
Inference with Noisy Photometry
Here, we want to do this same exercise over our set of observed $N_{\mathbf{b}}$-dimensional features $\mathbf{F}$ with PDF $P(\hat{\mathbf{F}}|g)$. The only difference from the case above is that we are dealing with observables $P(\hat{\mathbf{F}}|h)$ rather than kernels. Applying Bayes Theorem and emulating our previous example gives us
$$ \mathcal{L}(g|h) \equiv P(\mathbf{F}_g| \mathbf{F}_h)
= \int P(\hat{\mathbf{F}}_g | \mathbf{F}) P(\mathbf{F} | \hat{\mathbf{F}}_h) d\mathbf{F}
= \frac{\int P(\hat{\mathbf{F}}_g | \mathbf{F}) P(\hat{\mathbf{F}}_h | \mathbf{F}) \pi(\mathbf{F}) d\mathbf{F}}{\int P(\hat{\mathbf{F}}_h | \mathbf{F}) \pi(\mathbf{F}) d\mathbf{F}}$$
where we have now introduced $\pi(\mathbf{F})$ as an $N_{\mathbf{b}}$-dimensional prior over the true features. For our purposes, we will assume this set of features corresponds to a set of observed flux densities $\hat{F}_{i,b}$ in a set of $N_{\mathbf{b}}$ photometric bands indexed by $b \in \mathbf{b}$.
In practice, $\mathbf{g}$ constitutes a set of unlabeled objects with unknown properties while $\mathbf{h}$ is a set of labeled objects with known properties. Labeled objects might constitute a particular "training set" (in machine learning-based applications) or a set of models (in template fitting-based applications).
We are interested in inferring the redshift PDF $P(z|g)$ for our observed object $g$ based on its observed photometry $\hat{\mathbf{F}}_g$. Given our labeled objects $\mathbf{h}$ with corresponding redshift kernels $K(z|h)$, this is just
$$
P(z|g) = \sum_h K(z|h)P(h|g) = \frac{\sum_h K(z|h)\mathcal{L}(g|h)\pi(h)}{\sum_h \mathcal{L}(g|h)\pi(h)}
$$
which corresponds to a posterior-weighted mixture of the $K(z|h)$ redshift kernels.
The "Big Data" Approximation
It is important to note that we've made a pretty big assumption here: that we can reduce a continuous process over $\mathbf{F}$ to a discrete set of comparisons over our training data $\mathbf{h}$. This choice constitutes a "Big Data" approximation that necessarily introduces some (Poisson) noise into our estimates, and is designed to take advantage of datasets where many ($\gtrsim 10^4$ or so) training objects are available such that our parameter space is (relatively) densely sampled. We will come back to this assumption later.
Our Prior
In this particular case, our prior $P(h)=P(z_h,t_h,m_h)$ is defined over a series of models parameterized by magnitude, type, and redshift as described in the Mock Data notebook. These are saved within our original survey object and briefly shown below.
End of explanation
"""
# sample good example object
idx = np.random.choice(np.arange(Nobs)[(mags < 22.5) & (mags > 22)])
# compute loglikelihoods (noiseless)
ll, nb, chisq = fz.pdf.loglike(phot_obs[idx], phot_err[idx],
np.ones(survey.NFILTER),
phot_true, phot_err,
np.ones_like(phot_true),
free_scale=False, ignore_model_err=True,
dim_prior=False)
# compute loglikelihoods (noisy)
ptemp = np.random.normal(phot_true, phot_err) # re-jitter to avoid exact duplicates
ll2, nb2, chisq2 = fz.pdf.loglike(phot_obs[idx], phot_err[idx],
np.ones(survey.NFILTER),
ptemp, phot_err,
np.ones_like(phot_true),
free_scale=False, ignore_model_err=False,
dim_prior=False)
"""
Explanation: Photometric Likelihoods
For most galaxies, we can take $P(\hat{\mathbf{F}}_i|\mathbf{F})$ to be a multivariate Normal (i.e. Gaussian) distribution such that
$$
P(\hat{\mathbf{F}}_i|\mathbf{F}) = \mathcal{N}(\hat{\mathbf{F}}_i|\mathbf{F},\mathbf{C}_i)
\equiv \frac{\exp\left[-\frac{1}{2}||\hat{\mathbf{F}}_i-\mathbf{F}||_{\mathbf{C}_i}^2\right]}{|2\pi\mathbf{C}_i|^{1/2}}
$$
where
$$
||\hat{\mathbf{F}}_i-\mathbf{F}||_{\mathbf{C}_i}^2 \equiv (\hat{\mathbf{F}}_i-\mathbf{F})^{\rm T}\mathbf{C}_i^{-1}(\hat{\mathbf{F}}_i-\mathbf{F})
$$
is the squared Mahalanobis distance between $\hat{\mathbf{F}}_i$ and $\mathbf{F}$ given covariance matrix $\mathbf{C}_i$ (i.e. the photometric errors), ${\rm T}$ is the transpose operator, and $|\mathbf{C}_i|$ is the determinant of $\mathbf{C}_i$.
While we will use matrix notation for compactness, in practice we will assume all our covariances are diagonal (i.e. the errors are independent) such that
$$
||\hat{\mathbf{F}}_g-\mathbf{F}||_{\mathbf{C}_g}^2 = \sum_{b} \frac{(\hat{F}_{g,b}-F_b)^2}{\sigma^2_{g,b}}
$$
Likelihood: Magnitudes (Scale-dependent)
We first look at the simplest case: a direct observational comparison over $\mathbf{F}$ (i.e. galaxy magnitudes).
The product of two multivariate Normal distributions $\mathcal{N}(\hat{\mathbf{F}}_g|\mathbf{F},\mathbf{C}_g)$ and $\mathcal{N}(\hat{\mathbf{F}}_h|\mathbf{F},\mathbf{C}_h)$ is a scaled multivariate Normal of the form $S_{gh}\,\mathcal{N}(\mathbf{F}_{gh}|\mathbf{F},\mathbf{C}_{gh})$ where
$$
S_{gh} \equiv \mathcal{N}(\hat{\mathbf{F}}_g|\hat{\mathbf{F}}_h, \mathbf{C}_g + \mathbf{C}_h), \quad
\mathbf{F}_{gh} \equiv \mathbf{C}_{gh} \left( \mathbf{C}_g^{-1}\hat{\mathbf{F}}_g
+ \mathbf{C}_h^{-1}\hat{\mathbf{F}}_h \right), \quad
\mathbf{C}_{gh} \equiv \left(\mathbf{C}_g^{-1} + \mathbf{C}_h^{-1}\right)^{-1}
$$
If we assume a uniform prior on our flux densities $P(\mathbf{F})=1$, our likelihood then becomes
$$ \mathcal{L}(g|h) = \int P(\hat{\mathbf{F}}_g | \mathbf{F}) P(\hat{\mathbf{F}}_h | \mathbf{F}) d\mathbf{F}
= S_{gh} \int \mathcal{N}(\mathbf{F}_{gh}|\mathbf{F},\mathbf{C}_{gh}) d\mathbf{F}
= S_{gh} $$
The log-likelihood can then be written as
\begin{equation}
\boxed{
-2\ln \mathcal{L}(g|h) = ||\hat{\mathbf{F}}_g - \hat{\mathbf{F}}_h||_{\mathbf{C}_g + \mathbf{C}_h}^2 + \ln|\mathbf{C}_g + \mathbf{C}_h| + N_\mathbf{b}\ln(2\pi)
}
\end{equation}
Let's compute an example PDF using frankenz for objects in our mock catalog. Since these are sampled from the prior, we've actually introduced our prior implicitly via the distribution of objects in our labeled sample. As a result, computing likelihoods directly in magnitudes actually probes (with some noise) the full posterior distribution (as defined by BPZ).
We will compare two versions of our results:
- Noiseless case: computed using the "true" underlying photometry underlying each training object.
- Noisy case: computed using our "observed" mock photometry.
End of explanation
"""
# define plotting functions
try:
from scipy.special import logsumexp
except ImportError:
from scipy.misc import logsumexp
def plot_flux(phot_obs, phot_err, phot, logl,
ocolor='black', mcolor='blue', thresh=1e-1):
"""Plot SEDs."""
wave = np.array([f['lambda_eff'] for f in survey.filters])
wt = np.exp(logl)
wtmax = wt.max()
sel = np.arange(len(phot))[wt > thresh * wtmax]
[plt.plot(wave, phot[i], alpha=wt[i]/wtmax*0.4, lw=3,
zorder=1, color=mcolor) for i in sel]
plt.errorbar(wave, phot_obs, yerr=phot_err, lw=3, color=ocolor, zorder=2)
plt.xlabel(r'Wavelength ($\AA$)')
plt.xlim([wave.min() - 100, wave.max() + 100])
plt.ylim([(phot_obs - phot_err).min() * 0.9, (phot_obs + phot_err).max() * 1.1])
plt.ylabel(r'$F_\nu$')
plt.yticks(fontsize=24)
plt.tight_layout()
def plot_redshift(redshifts, logl, ztrue=None, color='yellow',
tcolor='red'):
"""Plot redshift PDF."""
n, _, _ = plt.hist(redshifts, bins=zgrid, weights=np.exp(logl),
histtype='stepfilled', edgecolor='black',
lw=3, color=color, alpha=0.8)
if ztrue is not None:
plt.vlines(ztrue, 0., n.max() * 1.1, color=tcolor, linestyles='--', lw=2)
plt.xlabel('Redshift')
plt.ylabel('PDF')
plt.xlim([zgrid[0], zgrid[-1]])
plt.ylim([0., n.max() * 1.1])
plt.yticks([])
plt.tight_layout()
def plot_zt(redshifts, templates, logl, ztrue=None, ttrue=None,
cmap='viridis', tcolor='red', thresh=1e-2):
"""Plot joint template-redshift PDF."""
lsum = logsumexp(logl)
wt = np.exp(logl - lsum)
plt.hist2d(redshifts, templates, bins=[zgrid, tgrid],
weights=wt,
cmin=thresh*max(wt),
cmap=cmap)
if ttrue is not None:
plt.hlines(ttrue, zgrid.min(), zgrid.max(),
color=tcolor, lw=2, linestyles='--')
if ztrue is not None:
plt.vlines(ztrue, tgrid.min(), tgrid.max(),
color=tcolor, lw=2, linestyles='--')
plt.xlabel('Redshift')
plt.ylabel('Template')
plt.xlim([zgrid[0], zgrid[-1]])
plt.ylim([tgrid[0], tgrid[-1]])
plt.tight_layout()
# plot flux distribution
plt.figure(figsize=(16, 14))
plt.subplot(3,2,1)
plot_flux(phot_obs[idx], phot_err[idx], phot_true, ll,
ocolor='black', mcolor='blue', thresh=0.5)
plt.title('Noiseless (mag)')
plt.subplot(3,2,2)
plot_flux(phot_obs[idx], phot_err[idx], ptemp, ll2,
ocolor='black', mcolor='red', thresh=0.5)
plt.title('Noisy (mag)');
# plot redshift distribution
zgrid = np.arange(0., 4. + 0.1, 0.05)
plt.subplot(3,2,3)
plot_redshift(redshifts, ll, ztrue=redshifts[idx])
plt.xticks(zgrid[::20])
plt.subplot(3,2,4)
plot_redshift(redshifts, ll2, ztrue=redshifts[idx])
plt.xticks(zgrid[::20])
# plot redshift-type joint distribution
tgrid = np.arange(survey.NTEMPLATE + 1) - 0.5
plt.subplot(3,2,5)
plot_zt(redshifts, templates, ll,
ztrue=redshifts[idx], ttrue=templates[idx])
plt.xticks(zgrid[::20])
plt.yticks(tgrid[1:] - 0.5)
plt.subplot(3,2,6)
plot_zt(redshifts, templates, ll2,
ztrue=redshifts[idx], ttrue=templates[idx])
plt.xticks(zgrid[::20])
plt.yticks(tgrid[1:] - 0.5);
"""
Explanation: Note that the log-likelihood function defined in frankenz contains a number of additional options that have been specified above. These will be discussed later.
End of explanation
"""
# compute color loglikelihoods (noiseless)
llc, nbc, chisq, s, serr = fz.pdf.loglike(phot_obs[idx], phot_err[idx],
np.ones(survey.NFILTER),
phot_true, phot_err,
np.ones_like(phot_true),
dim_prior=False, free_scale=True,
ignore_model_err=True, return_scale=True)
# compute color loglikelihoods (noisy)
llc2, nbc2, chisq2, s2, serr2 = fz.pdf.loglike(phot_obs[idx], phot_err[idx],
np.ones(survey.NFILTER),
ptemp, phot_err,
np.ones_like(phot_true),
dim_prior=False, free_scale=True,
ignore_model_err=False, return_scale=True)
# plot flux distribution
plt.figure(figsize=(16, 14))
plt.subplot(3,2,1)
plot_flux(phot_obs[idx], phot_err[idx], s[:, None] * phot_true,
llc, ocolor='black', mcolor='blue', thresh=0.5)
plt.title('Noiseless (color)')
plt.subplot(3,2,2)
plot_flux(phot_obs[idx], phot_err[idx], s2[:, None] * ptemp,
llc2, ocolor='black', mcolor='red', thresh=0.5)
plt.title('Noisy (color)');
# plot redshift distribution
plt.subplot(3,2,3)
plot_redshift(redshifts, llc, ztrue=redshifts[idx])
plt.xticks(zgrid[::20])
plt.subplot(3,2,4)
plot_redshift(redshifts, llc2, ztrue=redshifts[idx])
plt.xticks(zgrid[::20])
# plot redshift-type joint distribution
plt.subplot(3,2,5)
plot_zt(redshifts, templates, llc, thresh=1e-2,
ztrue=redshifts[idx], ttrue=templates[idx])
plt.xticks(zgrid[::20])
plt.yticks(tgrid[1:] - 0.5)
plt.subplot(3,2,6)
plot_zt(redshifts, templates, llc2, thresh=1e-2,
ztrue=redshifts[idx], ttrue=templates[idx])
plt.xticks(zgrid[::20])
plt.yticks(tgrid[1:] - 0.5);
"""
Explanation: As expected, the PDF computed from our noisy photometry is broader than the noiseless case.
Likelihood: Colors
We can also define our likelihoods in terms of flux ratios (i.e. galaxy "colors") by introducing a scaling parameter $\ell$. Assuming $P(\mathbf{F},\ell) = 1$ is uniform, this takes the form
$$
\mathcal{L}_\ell(g|h) = \int \mathcal{N}\left(\hat{\mathbf{F}}_g | \ell\hat{\mathbf{F}}_h, \mathbf{C}_g+\ell^2\mathbf{C}_h \right)\,d\ell
$$
Although this integral does not have an analytic solution, we can numerically solve for the maximum-likelihood result $\mathcal{L}(g|h, \ell_{\rm ML})$. See Leistedt & Hogg (2017) for some additional discussion related to this integral.
If we assume, however, that $\mathbf{C}_h = \mathbf{0}$ (i.e. no model errors), then there is an analytic solution with log-likelihood
\begin{equation}
\boxed{
-2\ln \mathcal{L}_\ell(g|h) = ||\hat{\mathbf{F}}_g - \ell_{\rm ML}\hat{\mathbf{F}}_h||_{\mathbf{C}_g}^2 + N_\mathbf{b}\ln(2\pi) + \ln|\mathbf{C}_g|
}
\end{equation}
where
$$
\ell_{\rm ML} = \frac{\hat{\mathbf{F}}_g^{\rm T} \mathbf{C}_g^{-1} \hat{\mathbf{F}}_h}
{\hat{\mathbf{F}}_h^{\rm T} \mathbf{C}_g^{-1} \hat{\mathbf{F}}_h}
$$
We can now repeat the above exercise using our color-based likelihoods. As above, we compare two versions:
- Noiseless case: computed using the "true" underlying photometry underlying each training object.
- Noisy case: computed using our "observed" mock photometry.
End of explanation
"""
# compute color loglikelihoods over grid
mphot = survey.models['data'].reshape(-1, survey.NFILTER)
merr = np.zeros_like(mphot)
mmask = np.ones_like(mphot)
llm, nbm, chisqm, sm, smerr = fz.pdf.loglike(phot_obs[idx], phot_err[idx],
np.ones(survey.NFILTER),
mphot, merr, mmask,
dim_prior=False, free_scale=True,
ignore_model_err=True, return_scale=True)
# compute prior
mzgrid = survey.models['zgrid']
prior = np.array([fz.priors.bpz_pz_tm(mzgrid, t, mags[idx])
for t in survey.TTYPE]).T.flatten()
# plot flux distribution
plt.figure(figsize=(24, 15))
plt.subplot(3,3,1)
plot_flux(phot_obs[idx], phot_err[idx], sm[:, None] * mphot,
llm, ocolor='black', mcolor='blue', thresh=0.5)
plt.title('Likelihood (grid)')
plt.subplot(3,3,2)
plot_flux(phot_obs[idx], phot_err[idx], sm[:, None] * mphot,
llm + np.log(prior).flatten(),
ocolor='black', mcolor='red', thresh=0.5)
plt.title('Posterior (grid)')
plt.subplot(3,3,3)
plot_flux(phot_obs[idx], phot_err[idx], phot_true, ll,
ocolor='black', mcolor='blue', thresh=0.5)
plt.title('Mag Likelihood\n(noiseless samples)')
# plot redshift distribution
mredshifts = np.array([mzgrid for i in range(survey.NTEMPLATE)]).T.flatten()
plt.subplot(3,3,4)
plot_redshift(mredshifts, llm, ztrue=redshifts[idx])
plt.xticks(zgrid[::20])
plt.subplot(3,3,5)
plot_redshift(mredshifts, llm + np.log(prior).flatten(),
ztrue=redshifts[idx])
plt.xticks(zgrid[::20])
plt.subplot(3,3,6)
plot_redshift(redshifts, ll, ztrue=redshifts[idx])
plt.xticks(zgrid[::20])
# plot redshift-type joint distribution
mtemplates = np.array([np.arange(survey.NTEMPLATE)
for i in range(len(mzgrid))]).flatten()
plt.subplot(3,3,7)
plot_zt(mredshifts, mtemplates, llm, thresh=1e-2,
ztrue=redshifts[idx], ttrue=templates[idx])
plt.xticks(zgrid[::20])
plt.yticks(tgrid[1:] - 0.5)
plt.subplot(3,3,8)
plot_zt(mredshifts, mtemplates, llm + np.log(prior).flatten(),
thresh=1e-2, ztrue=redshifts[idx], ttrue=templates[idx])
plt.xticks(zgrid[::20])
plt.yticks(tgrid[1:] - 0.5)
plt.subplot(3,3,9)
plot_zt(redshifts, templates, ll,
ztrue=redshifts[idx], ttrue=templates[idx])
plt.xticks(zgrid[::20])
plt.yticks(tgrid[1:] - 0.5);
"""
Explanation: Finally, it's useful to compare against the case where we compute our posteriors directly from our underlying model grid with the priors applied explicitly. By construction, this should agree with the "true" posterior distribution up to the approximation that for a given template $t$ and redshift $z$ we can take the model to have a magnitude based on $\ell_{\rm ML}$ rather than integrating over the full $\pi(\ell)$ distribution, i.e.
$$
\int \pi(\ell) \, \mathcal{N}\left(\hat{\mathbf{F}}_g | \ell\hat{\mathbf{F}}_h, \mathbf{C}_g+\ell^2\mathbf{C}_h \right)\,d\ell \approx \pi(\ell_{\rm ML}) \, \mathcal{L}(g|h, \ell_{\rm ML})
$$
End of explanation
"""
sel = (phot_obs / phot_err)[:, survey.ref_filter] > 5. # S/N > 5 cut
Nsel = sel.sum()
Ntrain, Ntest = 60000, 5000
train_sel = np.arange(Nobs)[sel][:Ntrain] # training set
test_sel = np.arange(Nobs)[sel][Ntrain:Ntrain+Ntest] # testing set
Nmodel = len(mphot)
print('Number of observed galaxies (all):', Nobs)
print('Number of observed galaxies (selected):', Nsel)
print('Number of models:', Nmodel)
print('Number of training galaxies:', Ntrain)
print('Number of testing galaxies:', Ntest)
"""
Explanation: As expected, the secondary solutions seen in our grid-based likelihoods are suppressed by our prior, which indicates many of these solutions are distinctly unphysical (at least given the original assumptions used when constructing our mock).
In addition, the BPZ posterior computed over our grid of models agrees quite well with the magnitude-based likelihoods computed over our noiseless samples (i.e. our labeled "training" data). This demonstrates that utilizing an unbiased, representative training set instead of a grid of models inherently gives access to complex priors that otherwise have to be modeled analytically. In other words, we can take $P(h) = 1$ for all $h \in \mathbf{h}$ since the distribution of our labeled photometric samples probes the underlying $P(z, t, m)$ distribution.
In practice, however, we do not often have access to a fully representative training sample, and often must derive an estimate of $P(\mathbf{h})$ through other means. We will return to this point later.
Population Tests
We now want to see how things look on a larger sample of objects.
End of explanation
"""
# initialize asinh magnitudes ("Luptitudes")
flux_zeropoint = 10**(-0.4 * -23.9) # AB magnitude zeropoint
fdepths = np.array([f['depth_flux1sig'] for f in survey.filters])
mag, magerr = fz.pdf.luptitude(phot_obs, phot_err, skynoise=fdepths,
zeropoints=flux_zeropoint)
# initialize magnitude dictionary
mdict = fz.pdf.PDFDict(pdf_grid=np.arange(-20., 60., 5e-3),
sigma_grid=np.linspace(0.01, 5., 500))
"""
Explanation: Sidenote: KDE in frankenz
One of the ways frankenz differs from other photometric redshift (photo-z) codes is that it tries to avoid discretizing quantities whenever and wherever possible. Since redshifts, flux densities, and many other photometric quantities are continuous with smooth PDFs, we attempt to work directly in this continuous space whenever possible instead of resorting to binning.
We accomplish this through kernel density estimation (KDE). Since almost all photometric observable PDFs are Gaussian, by connecting each observable with an associated Gaussian kernel density we can (in theory) construct a density estimate at any location in parameter space by evaluating the probability density of all kernels at that location.
In practice, such a brute-force approach is prohibitively computationally expensive. Instead, we approximate the contribution from any particular object by:
- evaluating only a small subset of "nearby" kernels,
- evaluating the overall kernel density estimates over a discrete basis, and
- evaluating only the "central regions" of our kernels.
This is implemented within the gauss_kde function in frankenz's pdf module.
In addition, we can also use a stationary pre-computed dictionary of Gaussian kernels to discretize our operations. This avoids repetitive, expensive computations at the (very small) cost of increased memory overhead and errors from imposing a minimum resolution. This is implemented via the PDFDict class and the gauss_kde_dict function. We will use this option whenever possible going forward.
Magnitude Distribution
Let's use this functionality to visualize the stacked magnitude distribution of our population.
End of explanation
"""
# plotting magnitude distribution
msmooth = 0.05
fcolors = plt.get_cmap('viridis')(np.linspace(0,1, survey.NFILTER))
plt.figure(figsize=(20, 10))
for i in range(survey.NFILTER):
plt.subplot(2, int(survey.NFILTER/2)+1, i+1)
# compute pdf (all)
magerr_t = np.sqrt(magerr[:, i]**2 + msmooth**2)
mag_pdf = fz.pdf.gauss_kde_dict(mdict, y=mag[:, i],
y_std=magerr_t)
plt.semilogy(mdict.grid, mag_pdf, lw=3,
color=fcolors[i])
# compute pdf (selected)
magsel_pdf = fz.pdf.gauss_kde_dict(mdict, y=mag[sel, i],
y_std=magerr_t[sel])
plt.semilogy(mdict.grid, magsel_pdf, lw=3,
color=fcolors[i], ls='--')
# prettify
plt.xlim([16, 30])
plt.ylim([1., mag_pdf.max() * 1.2])
plt.xticks(np.arange(16, 30, 4))
plt.xlabel(survey.filters[i]['name'] + '-band Luptitude')
plt.ylabel('log(Counts)')
plt.tight_layout()
"""
Explanation: Note that we've used asinh magnitudes (i.e. "Luptitudes"; Lupton et al. 1999) rather than $\log_{10}$ magnitudes in order to incorporate data with negative measured fluxes.
End of explanation
"""
# initialize redshift dictionary
rdict = fz.pdf.PDFDict(pdf_grid=np.arange(0., 7.+1e-5, 0.01),
sigma_grid=np.linspace(0.005, 2., 500))
# plotting redshift distribution
plt.figure(figsize=(14, 6))
rsmooth = 0.05
# all
zerr_t = np.ones_like(redshifts) * rsmooth
z_pdf = fz.pdf.gauss_kde_dict(rdict, y=redshifts,
y_std=zerr_t)
plt.plot(rdict.grid, z_pdf / z_pdf.sum(), lw=5, color='black')
plt.fill_between(rdict.grid, z_pdf / z_pdf.sum(), color='gray',
alpha=0.4, label='Underlying')
# selected
zsel_pdf = fz.pdf.gauss_kde_dict(rdict, y=redshifts[sel],
y_std=zerr_t[sel])
plt.plot(rdict.grid, zsel_pdf / zsel_pdf.sum(), lw=5, color='navy')
plt.fill_between(rdict.grid, zsel_pdf / zsel_pdf.sum(),
color='blue', alpha=0.4, label='Observed')
# prettify
plt.xlim([0, 4])
plt.ylim([0, None])
plt.yticks([])
plt.legend(fontsize=20)
plt.xlabel('Redshift')
plt.ylabel('$N(z|\mathbf{g})$')
plt.tight_layout()
"""
Explanation: Note that, by default, all KDE options implemented in frankenz use some type of thresholding/clipping to avoid including portions of the PDFs with negligible weight and objects with negligible contributions to the overall stacked PDF. The default option is weight thresholding, where objects with $w < f_\min w_\max$ are excluded (with $f_\min = 10^{-3}$ by default). An alternative option is CDF thresholding, where objects that make up the $1 - c_\min$ portion of the sorted CDF are excluded (with $c_\min = 2 \times 10^{-4}$ by default). See the documentation for more details.
Redshift Distribution
Let's now compute our effective $N(z|\mathbf{g})$.
End of explanation
"""
# initialize datasets
phot_train, phot_test = phot_obs[train_sel], phot_obs[test_sel]
err_train, err_test = phot_err[train_sel], phot_err[test_sel]
mask_train, mask_test = np.ones_like(phot_train), np.ones_like(phot_test)
"""
Explanation: Comparison 1: Mag (samples) vs Color (grid)
As a first proof of concept, we want to check whether the population distribution inferred from our samples (using magnitudes) agrees with that inferred from our underlying model grid (using colors).
End of explanation
"""
from frankenz.fitting import BruteForce
# initialize BruteForce objects
model_BF = BruteForce(mphot, merr, mmask) # model grid
train_BF = BruteForce(phot_train, err_train, mask_train) # training data
# define log(posterior) function
def lprob_bpz(x, xe, xm, ys, yes, yms,
mzgrid=None, ttypes=None, ref=None):
results = fz.pdf.loglike(x, xe, xm, ys, yes, yms,
ignore_model_err=True,
free_scale=True)
lnlike, ndim, chi2 = results
mag = -2.5 * np.log10(x[ref]) + 23.9
prior = np.array([fz.priors.bpz_pz_tm(mzgrid, t, mag)
for t in ttypes]).T.flatten()
lnprior = np.log(prior)
return lnprior, lnlike, lnlike + lnprior, ndim, chi2
"""
Explanation: To fit these objects, we will take advantage of the BruteForce object available through frankenz's fitting module.
End of explanation
"""
# fit data
model_BF.fit(phot_test, err_test, mask_test, lprob_func=lprob_bpz,
lprob_args=[mzgrid, survey.TTYPE, survey.ref_filter])
# compute posterior-weighted redshift PDFs
mredshifts = np.array([mzgrid for i in range(survey.NTEMPLATE)]).T.flatten()
pdfs_post = model_BF.predict(mredshifts, np.ones_like(mredshifts) * rsmooth,
label_dict=rdict)
# compute likelihood-weighted redshift PDFs
mredshifts = np.array([mzgrid for i in range(survey.NTEMPLATE)]).T.flatten()
pdfs_like = model_BF.predict(mredshifts, np.ones_like(mredshifts) * rsmooth,
label_dict=rdict, logwt=model_BF.fit_lnlike)
"""
Explanation: We'll start by fitting our model grid and generating posterior and likelihood-weighted redshift predictions.
End of explanation
"""
pdfs_train = train_BF.fit_predict(phot_test, err_test, mask_test,
redshifts[train_sel],
np.ones_like(train_sel) * rsmooth,
label_dict=rdict, save_fits=False)
# true distribution
zpdf0 = fz.pdf.gauss_kde_dict(rdict, y=redshifts[test_sel],
y_std=np.ones_like(test_sel) * rsmooth)
# plotting
plt.figure(figsize=(14, 6))
plt.plot(rdict.grid, zpdf0, lw=5, color='black',
label='Underlying')
plt.plot(rdict.grid, pdfs_like.sum(axis=0),
lw=5, color='gray', alpha=0.7,
label='BPZ Color Likelihood (grid)')
plt.plot(rdict.grid, pdfs_post.sum(axis=0),
lw=5, color='red', alpha=0.6,
label='BPZ Color Posterior (grid)')
plt.plot(rdict.grid, pdfs_train.sum(axis=0),
lw=5, color='blue', alpha=0.6,
label='Mag Likelihood (samples)')
plt.xlim([0., 6.])
plt.ylim([0., None])
plt.yticks([])
plt.legend(fontsize=20)
plt.xlabel('Redshift')
plt.ylabel('$N(z|\mathbf{g})$')
plt.tight_layout()
"""
Explanation: Now we'll generate predictions using our training (labeled) data. While we passed an explicit log-posterior earlier, all classes implemented in fitting default to using the logprob function from frankenz.pdf (which is just a thin wrapper for loglike that returns quantities in the proper format).
End of explanation
"""
pdfs_train_c = train_BF.fit_predict(phot_test, err_test, mask_test,
redshifts[train_sel],
np.ones_like(train_sel) * rsmooth,
lprob_kwargs={'free_scale': True,
'ignore_model_err': True},
label_dict=rdict, save_fits=False)
pdfs_train_cerr = train_BF.fit_predict(phot_test, err_test, mask_test,
redshifts[train_sel],
np.ones_like(train_sel) * rsmooth,
lprob_kwargs={'free_scale': True,
'ignore_model_err': False},
label_dict=rdict, save_fits=False)
# plotting
plt.figure(figsize=(14, 6))
plt.plot(rdict.grid, zpdf0, lw=3, color='black',
label='Underlying')
plt.plot(rdict.grid, pdfs_train.sum(axis=0),
lw=3, color='blue', alpha=0.6,
label='Mag Likelihood (samples; w/ errors)')
plt.plot(rdict.grid, pdfs_train_c.sum(axis=0),
lw=3, color='seagreen', alpha=0.6,
label='Color Likelihood (samples; w/o errors)')
plt.plot(rdict.grid, pdfs_train_cerr.sum(axis=0),
lw=3, color='firebrick', alpha=0.8,
label='Color Likelihood (samples; w/ errors)')
plt.xlim([0., 6.])
plt.ylim([0., None])
plt.yticks([])
plt.legend(fontsize=20)
plt.xlabel('Redshift')
plt.ylabel('$N(z|\mathbf{g})$')
plt.tight_layout()
"""
Explanation: We see that the population redshift distribution $N(z|\mathbf{g})$ computed from our noisy fluxes is very close to that computed by the (approximate) BPZ posterior (which is "correct" by construction). These both differ markedly from the color-based likelihoods computed over our noiseless grid, demonstrating the impact of the prior for data observed at moderate/low signal-to-noise (S/N).
Comparison 2: Mag (samples) vs Color (samples)
Just for completeness, we also show the difference between computing our results using magnitudes (as above) vs color, with and without accounting for observational errors.
End of explanation
"""
# imports assumed from an earlier (omitted) notebook cell
import numpy as np
import pandas as pd

def sierp_cube_iter(x0, x1, y0, y1, z0, z1, cur_depth, max_depth=3, n_pts=10, cur_index=0):
if cur_depth >= max_depth:
x = np.linspace(x0, x1, n_pts)
y = np.linspace(y0, y1, n_pts)
z = np.linspace(z0, z1, n_pts)
xx, yy, zz = np.meshgrid(x, y, z)
rr = np.ones(shape=xx.shape) * np.cos(xx * 5) * np.cos(yy * 5) * np.cos(zz * 5)
ii = np.ones(shape=xx.shape) * cur_index
df_res = pd.DataFrame({'x': xx.flatten(),
'y': yy.flatten(),
'z': zz.flatten(),
'r': rr.flatten(),
'i': ii.flatten()})
else:
dx = (x1 - x0) / 3
dy = (y1 - y0) / 3
dz = (z1 - z0) / 3
i_sub = 0
df_res = None
for ix in range(3):
for iy in range(3):
for iz in range(3):
if int(ix == 1) + int(iy == 1) + int(iz == 1) >= 2:
continue
print('\t' * cur_depth, ': #', i_sub + 1, '-', ix, iy, iz)
df_sub = sierp_cube_iter(x0 + ix * dx,
x0 + (ix + 1) * dx,
y0 + iy * dy,
y0 + (iy + 1) * dy,
z0 + iz * dz,
z0 + (iz + 1) * dz,
cur_depth + 1,
max_depth=max_depth,
n_pts=n_pts,
cur_index=cur_index * 20 + i_sub)
i_sub += 1
if df_res is None:
df_res = df_sub
else:
df_res = pd.concat([df_res, df_sub], axis=0)
return df_res
df_sierp = sierp_cube_iter(0, 1, 0, 1, 0, 1, 0, max_depth=2, n_pts=10)
len(df_sierp)
df_sierp.describe()
df_cut = df_sierp[df_sierp.z == 0.0]
df_cut.describe()
# `figure` and `scatter` assume a `from pylab import *` style setup in an earlier cell
figure(figsize=(8, 8))
scatter(x=df_cut.x, y=df_cut.y, c=df_cut.r, marker='o',
cmap='viridis_r', alpha=0.8)
# colorscales
# 'pairs' | 'Greys' | 'Greens' | 'Bluered' | 'Hot' | 'Picnic' | 'Portland' | 'Jet' | 'RdBu' |
# 'Blackbody' | 'Earth' | 'Electric' | 'YIOrRd' | 'YIGnBu'
# `go` (plotly.graph_objs) and `py` (plotly's online plotting module) are assumed
# imported in an earlier cell
data = []
i_side = 0
for cond in [(df_sierp.x == df_sierp.x.min()), (df_sierp.x == df_sierp.x.max()),
(df_sierp.y == df_sierp.y.min()), (df_sierp.y == df_sierp.y.max()),
(df_sierp.z == df_sierp.z.min()), (df_sierp.z == df_sierp.z.max())]:
df_sierp_plot = df_sierp[cond]
print(df_sierp_plot.shape)
trace = go.Scatter3d(
x=df_sierp_plot.x,
y=df_sierp_plot.y,
z=df_sierp_plot.z,
mode='markers',
marker=dict(
size=2,
color=df_sierp_plot.r,
colorscale='Hot',
opacity=1.0
),
name='Side %d' % (i_side + 1),
visible=True
)
data.append(trace)
i_side += 1
layout = go.Layout(
    autosize=True,
    width=800,
    height=600,
    title='Sierpinski Cube Surface (d=2) - Point Cloud Plot',
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='3d-sierpinski-cube-pt-cloud')
"""
Explanation: Sierpinski Cube
Start with a cube; on each iteration:
- divide it into 3 x 3 x 3 sub-cubes
- keep only the sub-cubes where fewer than two of the three indices equal 1 (the middle position), and recurse on those
End of explanation
"""
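The skip rule in `sierp_cube_iter` is the Menger-sponge condition: a sub-cube survives only if fewer than two of its indices are the middle value 1. A quick standalone check that exactly 20 of the 27 sub-cubes survive each iteration:

```python
from itertools import product

# Same test as in sierp_cube_iter: a sub-cube is kept only when fewer
# than two of its three indices are the middle value 1.
kept = [(ix, iy, iz)
        for ix, iy, iz in product(range(3), repeat=3)
        if int(ix == 1) + int(iy == 1) + int(iz == 1) < 2]
print(len(kept))  # 20
```

This also explains the `cur_index * 20 + i_sub` bookkeeping: each recursion level fans out into exactly 20 children.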
df_sierp.to_csv('sierp_cube.csv')
"""
Explanation: Save the data
End of explanation
"""
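The CSV written above can be read back with `pd.read_csv`; note that `to_csv` stores the DataFrame index as the first column. A small self-contained round-trip sketch (the data and file name here are just for the demo):

```python
import os
import tempfile

import pandas as pd

# to_csv writes the DataFrame index as the first column, so read it
# back with index_col=0 (or save with index=False in the first place).
df_demo = pd.DataFrame({'x': [0.0, 1.0], 'y': [0.0, 1.0], 'r': [0.5, -0.5]})
path = os.path.join(tempfile.gettempdir(), 'demo_cube.csv')
df_demo.to_csv(path)
df_back = pd.read_csv(path, index_col=0)
print(df_back.equals(df_demo))  # True
```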
def sierp_cube_mesh_iter(x0, x1, y0, y1, z0, z1, cur_depth, max_depth=3, cur_index=0):
    if cur_depth >= max_depth:
        x = [x0, x0, x1, x1, x0, x0, x1, x1]
        y = [y0, y1, y1, y0, y0, y1, y1, y0]
        z = [z0, z0, z0, z0, z1, z1, z1, z1]
        i = [7, 0, 0, 0, 4, 4, 6, 6, 4, 0, 3, 2]
        j = [3, 4, 1, 2, 5, 6, 5, 2, 0, 1, 6, 3]
        k = [0, 7, 2, 3, 6, 7, 1, 1, 5, 5, 7, 6]
        r = [2 * cur_index * cos(x[p] + y[p] + z[p]) for p in range(len(x))]  # p avoids shadowing the i list above
        return (x, y, z, i, j, k, r, len(x))
    else:
        x = []
        y = []
        z = []
        i = []
        j = []
        k = []
        r = []
        n = 0
        dx = (x1 - x0) / 3
        dy = (y1 - y0) / 3
        dz = (z1 - z0) / 3
        i_sub = 0
        for ix in range(3):
            for iy in range(3):
                for iz in range(3):
                    if int(ix == 1) + int(iy == 1) + int(iz == 1) >= 2:
                        continue
                    print('\t' * cur_depth, ': #', i_sub + 1, '-', ix, iy, iz)
                    (sub_x, sub_y, sub_z, sub_i, sub_j, sub_k, sub_r, sub_n) = \
                        sierp_cube_mesh_iter(x0 + ix * dx,
                                             x0 + (ix + 1) * dx,
                                             y0 + iy * dy,
                                             y0 + (iy + 1) * dy,
                                             z0 + iz * dz,
                                             z0 + (iz + 1) * dz,
                                             cur_depth + 1,
                                             max_depth=max_depth,
                                             cur_index=cur_index * 20 + i_sub)
                    i_sub += 1
                    i += [n + _i for _i in sub_i]
                    j += [n + _j for _j in sub_j]
                    k += [n + _k for _k in sub_k]
                    x += sub_x
                    y += sub_y
                    z += sub_z
                    r += sub_r
                    n += sub_n
        return (x, y, z, i, j, k, r, n)
def sierp_cube_mesh(x0, x1, y0, y1, z0, z1, max_depth=3):
    (x, y, z, i, j, k, r, n) = sierp_cube_mesh_iter(x0, x1, y0, y1, z0, z1, 0, max_depth=max_depth)
    mesh = go.Mesh3d(
        x=x,
        y=y,
        z=z,
        colorscale='Greens',
        intensity=r,
        i=i,
        j=j,
        k=k,
        name='Sierpinski Cube (d=%d)' % max_depth,
        showscale=True,
        lighting=dict(ambient=0.99, roughness=0.99, diffuse=0.99)
    )
    data = [mesh]
    layout = go.Layout(
        autosize=True,
        width=800,
        height=600,
        title='Sierpinski Cube (d=%d) - Mesh Plot' % max_depth,
    )
    fig = go.Figure(data=data, layout=layout)
    return fig
fig = sierp_cube_mesh(0, 1, 0, 1, 0, 1, max_depth=3)
py.iplot(fig, filename='3d-sierpinski-cube-mesh')
"""
Explanation: Get a mesh representation of the cube
End of explanation
"""
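In `go.Mesh3d`, the t-th triangle is the vertex triple `(i[t], j[t], k[t])` indexing into `x`, `y`, `z`; a unit cube needs 8 vertices and 12 triangles (two per face). A plotly-free sanity check of the hard-coded index lists used above:

```python
# Hard-coded triangle index lists from sierp_cube_mesh_iter: the t-th
# triangle is the vertex triple (i[t], j[t], k[t]) into x/y/z.
i = [7, 0, 0, 0, 4, 4, 6, 6, 4, 0, 3, 2]
j = [3, 4, 1, 2, 5, 6, 5, 2, 0, 1, 6, 3]
k = [0, 7, 2, 3, 6, 7, 1, 1, 5, 5, 7, 6]

triangles = list(zip(i, j, k))
print(len(triangles))  # 12 -- two triangles per cube face
print(all(0 <= v < 8 for t in triangles for v in t))  # True -- valid for 8 vertices
```

This is also why the recursion offsets the child indices by `n`: each merged sub-cube's triangles must point at its own block of vertices.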
|
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies | ex01-Read SST NetCDF data, subsample and save.ipynb | mit | %matplotlib inline
import numpy as np
from netCDF4 import Dataset # http://unidata.github.io/netcdf4-python/
"""
Explanation: Read SST NetCDF data, Subsample and Save
This notebook carries out some basic operations:
* Open a data file
* Check variables
Indexing to subsample a variable
* Save data
1. Load basic libraries
End of explanation
"""
#!wget ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.derived/surface_gauss/skt.sfc.mon.mean.nc
ncfile = 'data/skt.mon.mean.nc'  # forward slash works on all platforms
"""
Explanation: 2. Set input NetCDF file info
You can download the data using wget or aria2c under Linux. In this case, the data has already been downloaded and placed in the data folder.
End of explanation
"""
fh = Dataset(ncfile, mode='r') # file handle, open in read only mode
print(fh)
fh.close() # close the file
fh = Dataset(ncfile, mode='r') # file handle, open in read only mode
lon = fh.variables['lon'][:]
lat = fh.variables['lat'][:]
nctime = fh.variables['time'][:]
t_unit = fh.variables['time'].units
skt = fh.variables['skt'][:]
try:
    t_cal = fh.variables['time'].calendar
except AttributeError:  # attribute doesn't exist
    t_cal = u"gregorian"  # or standard
fh.close()  # close the file
"""
Explanation: 3. Extract variables
Open a NetCDF file and print the file handle; information about its variables appears in the last several lines.
For example:
* platform: Model
* Conventions: COARDS
* dimensions(sizes): lon(192), lat(94), time(687)
* variables(dimensions): float32 lat(lat), float32 lon(lon), float64 time(time), float32 skt(time,lat,lon)
End of explanation
"""
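The try/except around the `calendar` attribute is a common netCDF idiom, since not every file defines it; `getattr` with a default expresses the same fallback. A standalone sketch using a stand-in object (no netCDF file required):

```python
from types import SimpleNamespace

# Stand-in for fh.variables['time']; real files may or may not carry
# a 'calendar' attribute.
time_var = SimpleNamespace(units='hours since 1800-01-01 00:00:0.0')
t_cal = getattr(time_var, 'calendar', 'gregorian')  # fall back when absent
print(t_cal)  # gregorian
```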
lat[0] # Caution! Python’s indexing starts with zero
lat[-1] # gives the last value of the vector
"""
Explanation: 4. Access the first and the last value of latitude
End of explanation
"""
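The same zero-based and negative indexing works on any Python sequence; a tiny illustration with made-up latitude values:

```python
# Made-up latitude values, just to show the indexing rules.
lats = [88.5, 86.6, 84.7, -84.7, -86.6, -88.5]
print(lats[0])    # 88.5  -> first element (index 0)
print(lats[-1])   # -88.5 -> last element
print(lats[-3:])  # [-84.7, -86.6, -88.5] -> last three
```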
lat_so = lat[-21:]  # last 21 latitudes (about -50 to -88.5); [-21:-1] would drop the southernmost point
lon_so = lon
skt_so = skt[:, -21:, :]
"""
Explanation: 5. Select a subregion
Lat: -50 ~ -90
Lon: 0 ~ 360
End of explanation
"""
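Hard-coded index ranges are easy to get wrong; selecting latitudes by value with a boolean mask is a more robust alternative. A sketch with a stand-in grid (94 evenly spaced latitudes, roughly mimicking the NCEP T62 Gaussian grid; values illustrative):

```python
import numpy as np

# Stand-in grid: 94 evenly spaced latitudes from 88.5 to -88.5.
lat = np.linspace(88.5, -88.5, 94)
mask = lat <= -50.0          # boolean mask for the southern band
lat_so = lat[mask]
print(lat_so.size)           # 21
```

The same mask selects the matching latitude slab from the data cube, e.g. `skt[:, mask, :]`.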
np.savez('data/skt.so.mon.mean.npz', skt_so=skt_so, lat_so=lat_so, lon_so=lon_so)
"""
Explanation: 6. Save subregion data
Save the subregion data (several arrays) into a single file in uncompressed .npz format using np.savez.
End of explanation
"""
npzfile = np.load('data/skt.so.mon.mean.npz')
npzfile.files
"""
Explanation: Surely, you can load these data back.
End of explanation
"""
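Individual arrays are pulled out of the loaded NpzFile by key; a self-contained round-trip sketch (array and file name are just for the demo):

```python
import os
import tempfile

import numpy as np

a = np.arange(6).reshape(2, 3)
path = os.path.join(tempfile.gettempdir(), 'demo_skt.npz')
np.savez(path, skt_so=a)

npz = np.load(path)
files = npz.files            # keys are the keyword names passed to savez
shape = npz['skt_so'].shape
npz.close()                  # arrays are unavailable after close
print(files, shape)          # ['skt_so'] (2, 3)
```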
|