martinjrobins/hobo | examples/sampling/transformation-with-and-without-jacobian.ipynb | bsd-3-clause

import pints
import numpy as np
import matplotlib.pyplot as plt
# Set some random seed so this notebook can be reproduced
np.random.seed(10)
"""
Explanation: Need for Jacobian adjustment in parameter transformation
This example illustrates the importance of including a Jacobian term when sampling from a transformed parameter space (if you want to obtain samples on the untransformed space, that is).
We will show the difference between sampling under parameter transformation with and without Jacobian adjustment, and how not using Jacobian adjustment gives an incorrect result.
End of explanation
"""
from scipy.stats import beta
class BetaLogPDF(pints.LogPDF):
def n_parameters(self):
return 1
def __call__(self, x):
if 0. < x[0] < 1.:
return np.log(beta.pdf(x, a=2, b=2))[0]
else:
return -np.inf
log_pdf = BetaLogPDF()
# Generate some initial positions
x0 = np.random.uniform([0], [1], size=(4, 1))
"""
Explanation: Before we get started
We will use a 1-dimensional beta distribution as an illustration:
$$f(x; a, b) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)} x^{a-1} (1-x)^{b-1},$$
with $a=2$ and $b=2$.
End of explanation
"""
# Create an adaptive covariance MCMC routine
mcmc = pints.MCMCController(log_pdf, 4, x0, method=pints.HaarioBardenetACMC)
# Set maximum number of iterations
mcmc.set_max_iterations(10000)
# Disable logging
mcmc.set_log_to_screen(False)
# Number of chains
num_chains = 4
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# Discard warm-up
chains = [chain[1000:] for chain in chains]
"""
Explanation: We try to sample it with MCMC.
End of explanation
"""
plt.hist(chains[0], bins=15, density=True, label='samples')
x = np.linspace(0, 1, 100)
plt.plot(x, beta.pdf(x, a=2, b=2), label='true')
plt.legend()
plt.xlabel(r'$x$')
plt.ylabel('PDF')
plt.show()
"""
Explanation: Now we inspect the MCMC samples and see how they compare to the analytical form of the beta distribution.
End of explanation
"""
transformation = pints.LogitTransformation(1)
"""
Explanation: So far so good: we see that we are able to sample from the original space $x \in [0, 1]$. Let's call this the "model space", where it is meaningful for some problem of interest that values lie within $0$ and $1$.
Transformation
Now we consider a transformation; we will use the logit (or log-odds) transformation:
$$y = \text{logit}(x) = \log\left(\frac{x}{1-x}\right),$$
which transforms the constrained model parameter $x$ to an unconstrained search space $y$.
End of explanation
"""
class NaiveTransformedLogPDF(pints.LogPDF):
"""Transforming LogPDF without Jacobian adjustment"""
def __init__(self, log_pdf, transform):
self._log_pdf = log_pdf
self._transform = transform
def n_parameters(self):
return self._log_pdf.n_parameters()
def __call__(self, y):
# Transform from search space y back to model space x
x = self._transform.to_model(y)
# Then we call the log-pdf in the model space x
return self._log_pdf(x)
naive_trans_log_pdf = NaiveTransformedLogPDF(log_pdf, transformation)
# Transform the initial position to search space
y0 = [transformation.to_search(x) for x in x0]
"""
Explanation: We will now compare two transformations of the beta distribution: without and with Jacobian adjustment.
A naive transformation of the PDF without Jacobian adjustment, and how it goes wrong
The first is a naive transformation in which we simply wrap the model and transform the parameters, without applying any Jacobian adjustment.
End of explanation
"""
# Create an adaptive covariance MCMC routine
mcmc = pints.MCMCController(
naive_trans_log_pdf, # Naive transformation without Jacobian
4,
y0, # Input is in the search space y
method=pints.HaarioBardenetACMC)
# Set maximum number of iterations
mcmc.set_max_iterations(10000)
# Disable logging
mcmc.set_log_to_screen(False)
# Number of chains
num_chains = 4
# Run!
print('Running...')
naive_chains_in_y = mcmc.run()
print('Done!')
# Discard warm-up
naive_chains_in_y = [naive_chain[1000:] for naive_chain in naive_chains_in_y]
"""
Explanation: We then sample this naive transformed beta distribution using MCMC in the search space $y$.
End of explanation
"""
plt.hist(naive_chains_in_y[0], bins=15, density=True)
plt.xlabel(r'$y$')
plt.ylabel('PDF')
plt.show()
"""
Explanation: Let's have a look at what the samples look like in the search space $y$ first.
End of explanation
"""
# Transform the samples from y back to x
naive_chains_in_x = np.zeros(naive_chains_in_y[0].shape)
for i, y in enumerate(naive_chains_in_y[0]):
naive_chains_in_x[i, :] = transformation.to_model(y)
plt.hist(naive_chains_in_x, bins=15, density=True, label='samples')
x = np.linspace(0, 1, 100)
plt.plot(x, beta.pdf(x, a=2, b=2), label='true')
plt.legend()
plt.xlabel(r'$x$')
plt.ylabel('PDF (without Jacobian adjustment)')
plt.show()
"""
Explanation: OK, this is hard to interpret directly: $y$ is the search-space parameter, which we are not actually interested in!
Let's backward transform these samples directly back to the model space $x$ and hope for the best...
End of explanation
"""
pints_trans_log_pdf = pints.TransformedLogPDF(log_pdf, transformation)
"""
Explanation: The sampled distribution has gone horribly wrong using this naive transformation without Jacobian adjustment!!!
Transforming PDF using Pints (with Jacobian adjustment)
This time we will use pints.TransformedLogPDF which will handle all the necessary correction and adjustment for the transformation!
End of explanation
"""
# Create an adaptive covariance MCMC routine
mcmc = pints.MCMCController(
pints_trans_log_pdf, # Transformation using pints
4,
y0, # Again in search space y
method=pints.HaarioBardenetACMC)
# Set maximum number of iterations
mcmc.set_max_iterations(10000)
# Disable logging
mcmc.set_log_to_screen(False)
# Number of chains
num_chains = 4
# Run!
print('Running...')
pints_chains_in_y = mcmc.run()
print('Done!')
# Discard warm-up
pints_chains_in_y = [pints_chain[1000:] for pints_chain in pints_chains_in_y]
plt.hist(pints_chains_in_y[0], bins=15, density=True)
plt.xlabel(r'$y$')
plt.ylabel('PDF')
plt.show()
# Transform the samples from y back to x
pints_chains_in_x = np.zeros(pints_chains_in_y[0].shape)
for i, y in enumerate(pints_chains_in_y[0]):
pints_chains_in_x[i, :] = transformation.to_model(y)
plt.hist(pints_chains_in_x, bins=15, density=True, label='samples')
x = np.linspace(0, 1, 100)
plt.plot(x, beta.pdf(x, a=2, b=2), label='true')
plt.legend()
plt.xlabel(r'$x$')
plt.ylabel('PDF (with Jacobian adjustment)')
plt.show()
"""
Explanation: We will repeat the same procedure as before: sampling in the search space $y$.
End of explanation
"""
jhnphm/xbs_xbd | Getting_Started.ipynb | gpl-3.0

# %load startup.ipy
#! /usr/bin/env python3
import sys
sys.path.append('./python')
import logging.config
import os
import xbx.database as xbxdb
import xbx.util as xbxu
import xbx.config as xbxc
import xbx.build as xbxb
import xbx.run as xbxr
logging.config.fileConfig("logging.ini", disable_existing_loggers=False)
CONFIG_PATH="config.ini"
xbxdb.init(xbxu.get_db_path(CONFIG_PATH))
config = xbxc.Config(CONFIG_PATH)
dbsession = xbxdb.scoped_session()
"""
Explanation: Startup stuff, sourced from startup.ipy
End of explanation
"""
s=dbsession
l = s.query(xbxr.RunSession).order_by(xbxr.RunSession.timestamp.desc())
print([i.timestamp for i in l])
"""
Explanation: List RunSessions, ordered by descending timestamp
End of explanation
"""
rs=l.first()
print(rs)
"""
Explanation: Print latest RunSession
End of explanation
"""
print([i for i in rs.build_execs])
"""
Explanation: We have overridden the __repr__ function in the base class for SqlAlchemy tables to print out the type, a dictionary of contents, and a list of relations.
Let's print out all build executions:
End of explanation
"""
print([(i, i.build) for i in rs.build_execs])
"""
Explanation: Not much information here. Let's print out information on the builds associated with the build executions:
End of explanation
"""
import pprint
pp = pprint.PrettyPrinter(indent=4)
pp.pprint([(i, i.build) for i in rs.build_execs])
"""
Explanation: Not very readable. Let's use prettyprint:
End of explanation
"""
import datetime
pp.pprint([(eval(repr(i)), eval(repr(i.build))) for i in rs.build_execs])
"""
Explanation: Better, but not good. The overridden repr implementation is supposed to be evalable. Note that we need to import datetime and call repr explicitly. Let's try:
End of explanation
"""
import datetime
for i in rs.build_execs:
if not i.test_ok:
for j in i.runs:
print(j)
"""
Explanation: Much better. We can see the 0hash and icepole implementations succeeded but the aesgcm implementations failed. The log is mostly empty since we've rebuilt, so there's not much makefile output.
We want to know why the aesgcm implementations failed. To do this, we must examine the runs relation.
End of explanation
"""
import datetime
for i in rs.build_execs:
if not i.test_ok:
for j in i.runs:
print('{}: {}'.format(i.build.buildid,j.checksumfail_cause))
"""
Explanation: Too much stuff. Let's clean it up:
End of explanation
"""
shoyer/xray | examples/xarray_seasonal_means.ipynb | apache-2.0

%matplotlib inline
import numpy as np
import pandas as pd
import xarray as xr
from netCDF4 import num2date
import matplotlib.pyplot as plt
print("numpy version : ", np.__version__)
print("pandas version : ", pd.__version__)
print("xarray version : ", xr.__version__)
"""
Explanation: Calculating Seasonal Averages from Timeseries of Monthly Means
Author: Joe Hamman
The data used for this example can be found in the xray-data repository. You may need to change the path to rasm.nc below.
Suppose we have a netCDF or xarray.Dataset of monthly mean data and we want to calculate the seasonal average. To do this properly, we need to calculate the weighted average considering that each month has a different number of days.
End of explanation
"""
dpm = {'noleap': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
'365_day': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
'standard': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
'gregorian': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
'proleptic_gregorian': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
'all_leap': [0, 31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
'366_day': [0, 31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
'360_day': [0, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30]}
"""
Explanation: Some calendar information so we can support any netCDF calendar.
End of explanation
"""
def leap_year(year, calendar='standard'):
"""Determine if year is a leap year"""
leap = False
if ((calendar in ['standard', 'gregorian',
'proleptic_gregorian', 'julian']) and
(year % 4 == 0)):
leap = True
if ((calendar == 'proleptic_gregorian') and
(year % 100 == 0) and
(year % 400 != 0)):
leap = False
elif ((calendar in ['standard', 'gregorian']) and
(year % 100 == 0) and (year % 400 != 0) and
(year < 1583)):
leap = False
return leap
def get_dpm(time, calendar='standard'):
"""
Return an array of days per month corresponding to the months provided in `time`
"""
month_length = np.zeros(len(time), dtype=int)
cal_days = dpm[calendar]
for i, (month, year) in enumerate(zip(time.month, time.year)):
month_length[i] = cal_days[month]
if leap_year(year, calendar=calendar):
month_length[i] += 1
return month_length
"""
Explanation: A few calendar functions to determine the number of days in each month
If you were just using the standard calendar, it would be easy to use the standard library's calendar.monthrange function.
End of explanation
"""
ds = xr.tutorial.open_dataset('rasm').load()
print(ds)
"""
Explanation: Open the Dataset
End of explanation
"""
# Make a DataArray with the number of days in each month, size = len(time)
month_length = xr.DataArray(get_dpm(ds.time.to_index(), calendar='noleap'),
coords=[ds.time], name='month_length')
# Calculate the weights by grouping by 'time.season'.
# Conversion to float type ('astype(float)') only necessary for Python 2.x
weights = month_length.groupby('time.season') / month_length.astype(float).groupby('time.season').sum()
# Test that the sum of the weights for each season is 1.0
np.testing.assert_allclose(weights.groupby('time.season').sum().values, np.ones(4))
# Calculate the weighted average
ds_weighted = (ds * weights).groupby('time.season').sum(dim='time')
print(ds_weighted)
# only used for comparisons
ds_unweighted = ds.groupby('time.season').mean('time')
ds_diff = ds_weighted - ds_unweighted
# Quick plot to show the results
notnull = pd.notnull(ds_unweighted['Tair'][0])
fig, axes = plt.subplots(nrows=4, ncols=3, figsize=(14,12))
for i, season in enumerate(('DJF', 'MAM', 'JJA', 'SON')):
ds_weighted['Tair'].sel(season=season).where(notnull).plot.pcolormesh(
ax=axes[i, 0], vmin=-30, vmax=30, cmap='Spectral_r',
add_colorbar=True, extend='both')
ds_unweighted['Tair'].sel(season=season).where(notnull).plot.pcolormesh(
ax=axes[i, 1], vmin=-30, vmax=30, cmap='Spectral_r',
add_colorbar=True, extend='both')
ds_diff['Tair'].sel(season=season).where(notnull).plot.pcolormesh(
ax=axes[i, 2], vmin=-0.1, vmax=.1, cmap='RdBu_r',
add_colorbar=True, extend='both')
axes[i, 0].set_ylabel(season)
axes[i, 1].set_ylabel('')
axes[i, 2].set_ylabel('')
for ax in axes.flat:
ax.axes.get_xaxis().set_ticklabels([])
ax.axes.get_yaxis().set_ticklabels([])
ax.axes.axis('tight')
ax.set_xlabel('')
axes[0, 0].set_title('Weighted by DPM')
axes[0, 1].set_title('Equal Weighting')
axes[0, 2].set_title('Difference')
plt.tight_layout()
fig.suptitle('Seasonal Surface Air Temperature', fontsize=16, y=1.02)
# Wrap it into a simple function
def season_mean(ds, calendar='standard'):
# Make a DataArray of season/year groups
year_season = xr.DataArray(ds.time.to_index().to_period(freq='Q-NOV').to_timestamp(how='E'),
coords=[ds.time], name='year_season')
# Make a DataArray with the number of days in each month, size = len(time)
month_length = xr.DataArray(get_dpm(ds.time.to_index(), calendar=calendar),
coords=[ds.time], name='month_length')
# Calculate the weights by grouping by 'time.season'
weights = month_length.groupby('time.season') / month_length.groupby('time.season').sum()
# Test that the sum of the weights for each season is 1.0
np.testing.assert_allclose(weights.groupby('time.season').sum().values, np.ones(4))
# Calculate the weighted average
return (ds * weights).groupby('time.season').sum(dim='time')
"""
Explanation: Now for the heavy lifting:
We first have to come up with the weights,
- calculate the month lengths for each monthly data record
- calculate weights using groupby('time.season')
Finally, we just need to multiply our weights by the Dataset and sum along the time dimension.
End of explanation
"""
ML4DS/ML4all | R_lab2_GP/Pract_regression_professor.ipynb | mit

# Import some libraries that will be necessary for working with data and displaying plots
# To visualize plots in the notebook
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import scipy.io # To read matlab files
from scipy import spatial
import pylab
pylab.rcParams['figure.figsize'] = 8, 5
"""
Explanation: Gaussian Process regression
Authors: Miguel Lázaro Gredilla
Jerónimo Arenas García (jarenas@tsc.uc3m.es)
Jesús Cid Sueiro
Notebook version: 1.0 (Nov, 07, 2017)
Changes: v.1.0 - First version. Python version
v.1.1 - Extraction from a longer version ingluding Bayesian regresssion.
Python 3 compatibility
Pending changes:
End of explanation
"""
np.random.seed(3)
"""
Explanation: 1. Introduction
In this exercise the student will review several key concepts of Bayesian regression and Gaussian processes.
For the purpose of this exercise, the regression model is
$${s}({\bf x}) = f({\bf x}) + \varepsilon$$
where ${s}({\bf x})$ is the output corresponding to input ${\bf x}$, $f({\bf x})$ is the unobservable latent function, and $\varepsilon$ is white zero-mean Gaussian noise, i.e., $\varepsilon \sim {\cal N}(0,\sigma_\varepsilon^2)$.
Practical considerations
Though sometimes unavoidable, it is recommended not to use explicit matrix inversion whenever possible. For instance, if an operation like ${\mathbf A}^{-1} {\mathbf b}$ must be performed, it is preferable to code it using python $\mbox{numpy.linalg.lstsq}$ function (see http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html), which provides the LS solution to the overdetermined system ${\mathbf A} {\mathbf w} = {\mathbf b}$.
Sometimes, the computation of $\log|{\mathbf A}|$ (where ${\mathbf A}$ is a positive definite matrix) can overflow available precision, producing incorrect results. A numerically more stable alternative, providing the same result, is $2\sum_i \log([{\mathbf L}]_{ii})$, where $\mathbf L$ is the Cholesky decomposition of $\mathbf A$ (i.e., ${\mathbf A} = {\mathbf L} {\mathbf L}^\top$), and $[{\mathbf L}]_{ii}$ is the $i$th element of the diagonal of ${\mathbf L}$.
Non-degenerate covariance matrices, such as the ones in this exercise, are always positive definite. It may happen, as a consequence of chained rounding errors, that a matrix which was mathematically expected to be positive definite, turns out not to be so. This implies its Cholesky decomposition will not be available. A quick way to palliate this problem is by adding a small number (such as $10^{-6}$) to the diagonal of such a matrix.
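Both tips can be sketched in a few lines (illustrative values; nothing beyond numpy is assumed):

```python
import numpy as np

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # a small positive definite matrix
b = np.array([1.0, 2.0])

# Tip 1: solve A w = b with lstsq instead of forming inv(A) explicitly
w = np.linalg.lstsq(A, b, rcond=None)[0]

# Tip 2: log|A| from the Cholesky factor's diagonal, with a small
# diagonal jitter to guard against chained rounding errors
L = np.linalg.cholesky(A + 1e-6 * np.eye(A.shape[0]))
logdet = 2.0 * np.sum(np.log(np.diag(L)))
```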
Reproducibility of computations
To guarantee the exact reproducibility of the experiments, it may be useful to start your code initializing the seed of the random numbers generator, so that you can compare your results with the ones given in this notebook.
End of explanation
"""
# Load data from matlab file DatosLabReg.mat
# matvar = <FILL IN>
matvar = scipy.io.loadmat('DatosLabReg.mat')
# Take main variables, Xtrain, Xtest, Ytrain, Ytest from the corresponding dictionary entries in matvar:
# <SOL>
Xtrain = matvar['Xtrain']
Xtest = matvar['Xtest']
Ytrain = matvar['Ytrain']
Ytest = matvar['Ytest']
# </SOL>
# Data normalization
# <SOL>
mean_x = np.mean(Xtrain,axis=0)
std_x = np.std(Xtrain,axis=0)
Xtrain = (Xtrain - mean_x) / std_x
Xtest = (Xtest - mean_x) / std_x
# </SOL>
"""
Explanation: 2. The stocks dataset.
Load and properly normalize data corresponding to the evolution of the stocks of 10 airline companies. This data set is an adaptation of the Stock dataset from http://www.dcc.fc.up.pt/~ltorgo/Regression/DataSets.html, which in turn was taken from the StatLib Repository, http://lib.stat.cmu.edu/
End of explanation
"""
sigma_0 = np.std(Ytrain)
sigma_eps = sigma_0 / np.sqrt(10)
l = 8
print('sigma_0 = {0}'.format(sigma_0))
print('sigma_eps = {0}'.format(sigma_eps))
"""
Explanation: After running this code, you will have inside matrix Xtrain the evolution of (normalized) price for 9 airlines, whereas vector Ytrain will contain a single column with the price evolution of the tenth airline. The objective of the regression task is to estimate the price of the tenth airline from the prices of the other nine.
3. Non-linear regression with Gaussian Processes
3.1. Multidimensional regression
Rather than using a parametric form for $f({\mathbf x})$, in this section we will use directly the values of the latent function that we will model with a Gaussian process
$$f({\mathbf x}) \sim {\cal GP}\left(0,k_f({\mathbf x}_i,{\mathbf x}_j)\right),$$
where we are assuming a zero mean, and where we will use the Ornstein-Uhlenbeck covariance function, which is defined as:
$$k_f({\mathbf x}_i,{\mathbf x}_j) = \sigma_0^2 \exp \left( -\frac{1}{l}\|{\mathbf x}_i-{\mathbf x}_j\|\right)$$
First, we will use the following gross estimation for the hyperparameters:
End of explanation
"""
# Compute Kernel matrices.
# You may find spatial.distance.cdist() usefull to compute the euclidean distances required by Gaussian kernels.
# <SOL>
# Compute appropriate distances
dist = spatial.distance.cdist(Xtrain, Xtrain, 'euclidean')
dist_ss = spatial.distance.cdist(Xtest, Xtest, 'euclidean')
dist_s = spatial.distance.cdist(Xtest, Xtrain, 'euclidean')
# Compute Kernel matrices
K = (sigma_0**2)*np.exp(-dist/l)
K_ss = (sigma_0**2)*np.exp(-dist_ss/l)
K_s = (sigma_0**2)*np.exp(-dist_s/l)
# </SOL>
# Compute predictive mean
# m_y = <FILL IN>
m_y = K_s.dot(np.linalg.inv(K + sigma_eps**2 * np.eye(K.shape[0]))).dot((Ytrain))
# Compute predictive variance
# v_y = <FILL IN>
v_y = np.diagonal(K_ss - K_s.dot(np.linalg.inv(K + sigma_eps**2 * np.eye(K.shape[0]))).dot(K_s.T)) + sigma_eps**2
# Compute MSE
# MSE = <FILL IN>
MSE = np.mean((m_y - Ytest)**2)
# Compute NLPD
# NLPD = <FILL IN>
NLPD = 0.5 * np.mean(((Ytest - m_y)**2)/(np.matrix(v_y).T) + np.log(2*np.pi*np.matrix(v_y).T))
print(m_y.T)
"""
Explanation: As we studied in a previous session, the joint distribution of the target values in the training set, ${\mathbf s}$, and the latent values corresponding to the test points, ${\mathbf f}^\ast$, is given by
$$\left[\begin{array}{c}{\bf s}\\ {\bf f}^\ast\end{array}\right]~\sim~{\cal N}\left({\bf 0},\left[\begin{array}{cc}{\bf K} + \sigma_\varepsilon^2 {\bf I} & {\bf K}_\ast^\top \\ {\bf K}_\ast & {\bf K}_{\ast\ast} \end{array}\right]\right)$$
Using this model, obtain the posterior of ${\mathbf s}^\ast$ given ${\mathbf s}$. In particular, calculate the <i>a posteriori</i> predictive mean and standard deviations, ${\mathbb E}\left\{s({\bf x}^\ast)\mid{\bf s}\right\}$ and $\sqrt{{\mathbb V}\left\{s({\bf x}^\ast)\mid{\bf s}\right\}}$, for each test sample ${\bf x}^\ast$.
Obtain the MSE and NLPD.
End of explanation
"""
print('MSE = {0}'.format(MSE))
print('NLPD = {0}'.format(NLPD))
"""
Explanation: You should obtain the following results:
End of explanation
"""
# <SOL>
X_1d = np.matrix(Xtrain[:,0]).T
Xt_1d = np.matrix(Xtest[:,0]).T
Xt_1d = np.sort(Xt_1d,axis=0) #We sort the vector for representational purposes
dist = spatial.distance.cdist(X_1d,X_1d,'euclidean')
dist_ss = spatial.distance.cdist(Xt_1d,Xt_1d,'euclidean')
dist_s = spatial.distance.cdist(Xt_1d,X_1d,'euclidean')
K = (sigma_0**2)*np.exp(-dist/l)
K_ss = (sigma_0**2)*np.exp(-dist_ss/l)
K_s = (sigma_0**2)*np.exp(-dist_s/l)
m_y = K_s.dot(np.linalg.inv(K + sigma_eps**2 * np.eye(K.shape[0]))).dot((Ytrain))
v_f = K_ss - K_s.dot(np.linalg.inv(K + sigma_eps**2 * np.eye(K.shape[0]))).dot(K_s.T)
L = np.linalg.cholesky(v_f+1e-10*np.eye(v_f.shape[0]))
for iter in range(50):
f_ast = L.dot(np.random.randn(len(Xt_1d),1)) + m_y
plt.plot(np.array(Xt_1d)[:,0],f_ast[:,0],'c:');
# Plot as well the test points
plt.plot(np.array(Xtest[:,0]),Ytest[:,0],'r.',markersize=12);
plt.plot(np.array(Xt_1d)[:,0],m_y[:,0],'b-',linewidth=3,label='Predictive mean');
plt.legend(loc='best')
plt.xlabel('x',fontsize=18);
plt.ylabel('s',fontsize=18);
# </SOL>
"""
Explanation: 3.2. Unidimensional regression
Use now only the first company to compute the non-linear regression. Obtain the posterior
distribution of $f({\mathbf x}^\ast)$ evaluated at the test values ${\mathbf x}^\ast$, i.e, $p(f({\mathbf x}^\ast)\mid {\mathbf s})$.
This distribution is Gaussian, with mean ${\mathbb E}\left\{f({\bf x}^\ast)\mid{\bf s}\right\}$ and a covariance matrix $\text{Cov}\left[f({\bf x}^\ast)\mid{\bf s}\right]$. Sample 50 random vectors from the distribution and plot them vs. the values $x^\ast$, together with the test samples.
The Bayesian model does not provide a single function, but a pdf over functions, from which we extracted 50 possible functions.
End of explanation
"""
# <SOL>
X_1d = np.matrix(Xtrain[:,0]).T
Xt_1d = np.matrix(Xtest[:,0]).T
idx = np.argsort(Xt_1d,axis=0) # We sort the vector for representational purposes
Xt_1d = np.sort(Xt_1d,axis=0)
idx = np.array(idx).flatten().T
Ytest = Ytest[idx]
dist = spatial.distance.cdist(X_1d,X_1d,'euclidean')
dist_ss = spatial.distance.cdist(Xt_1d,Xt_1d,'euclidean')
dist_s = spatial.distance.cdist(Xt_1d,X_1d,'euclidean')
K = (sigma_0**2)*np.exp(-dist/l)
K_ss = (sigma_0**2)*np.exp(-dist_ss/l)
K_s = (sigma_0**2)*np.exp(-dist_s/l)
m_y = K_s.dot(np.linalg.inv(K + sigma_eps**2 * np.eye(K.shape[0]))).dot((Ytrain))
v_f = K_ss - K_s.dot(np.linalg.inv(K + sigma_eps**2 * np.eye(K.shape[0]))).dot(K_s.T)
v_f_diag = np.diagonal(v_f)
L = np.linalg.cholesky(v_f+1e-10*np.eye(v_f.shape[0]))
for iter in range(50):
f_ast = L.dot(np.random.randn(len(Xt_1d),1)) + m_y
plt.plot(np.array(Xt_1d)[:,0],f_ast[:,0],'c:');
# Plot as well the test points
plt.plot(np.array(Xtest[:,0]),Ytest[:,0],'r.',markersize=12);
plt.plot(np.array(Xt_1d)[:,0],m_y[:,0],'b-',linewidth=3,label='Predictive mean');
plt.plot(np.array(Xt_1d)[:,0],m_y[:,0]+2*np.sqrt(v_f_diag),'m--',label='Predictive mean of f $\pm$ 2std',linewidth=3);
plt.plot(np.array(Xt_1d)[:,0],m_y[:,0]-2*np.sqrt(v_f_diag),'m--',linewidth=3);
#Plot now the posterior mean and posterior mean \pm 2 std for s (i.e., adding the noise variance)
plt.plot(np.array(Xt_1d)[:,0],m_y[:,0]+2*np.sqrt(v_f_diag+sigma_eps**2),'m:',label='Predictive mean of s $\pm$ 2std',linewidth=3);
plt.plot(np.array(Xt_1d)[:,0],m_y[:,0]-2*np.sqrt(v_f_diag+sigma_eps**2),'m:',linewidth=3);
plt.legend(loc='best')
plt.xlabel('x',fontsize=18);
plt.ylabel('s',fontsize=18);
# </SOL>
"""
Explanation: Plot again the previous figure, this time including in your plot the confidence interval delimited by two standard deviations of the prediction. You can observe how $95.45\%$ of observed data fall within the designated area.
End of explanation
"""
# <SOL>
MSE = np.mean((m_y - Ytest)**2)
v_y = np.diagonal(v_f) + sigma_eps**2
NLPD = 0.5 * np.mean(((Ytest - m_y)**2)/(np.matrix(v_y).T) + np.log(2*np.pi*np.matrix(v_y).T))
# </SOL>
print('MSE = {0}'.format(MSE))
print('NLPD = {0}'.format(NLPD))
"""
Explanation: Compute now the MSE and NLPD of the model. The correct results are given below:
End of explanation
"""
jaeoh2/self-driving-car-nd | CarND-LaneLines-P1/P1.ipynb | mit

#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
"""
Explanation: Self-Driving Car Engineer Nanodegree
Project: Finding Lane Lines on the Road
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the rubric points for this project.
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".
The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.
<figure>
<img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
Run the cell below to import some packages. If you get an import error for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, see this forum post for more troubleshooting tips.
Import Packages
End of explanation
"""
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
"""
Explanation: Read in an Image
End of explanation
"""
import math
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def x_result(m,y,b):
return int((y-b)/m)
def draw_lines(img, lines, color=[255, 0, 0], thickness=10):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
# for line in lines:
# for x1,y1,x2,y2 in line:
# cv2.line(img, (x1, y1), (x2, y2), color, thickness)
m_list = []
m_left = []
m_right = []
b_left = []
b_right = []
for line in lines:
for x1,y1,x2,y2 in line:
m = ((y2-y1)/(x2-x1))
b = y1 - m*x1
m_list.append([m,b])
for m,b in m_list:
if m<0:
m_left.append(m)
b_left.append(b)
else:
m_right.append(m)
b_right.append(b)
ml = np.mean(m_left)
mr = np.mean(m_right)
bl = np.mean(b_left)
br = np.mean(b_right)
# img.shape[0] is the image height, i.e. the bottom y-coordinate
cv2.line(img, (x_result(ml, img.shape[0], bl), img.shape[0]),
(x_result(ml, 330, bl), 330), [255, 0, 0], 10)
cv2.line(img, (x_result(mr, img.shape[0], br), img.shape[0]),
(x_result(mr, 330, br), 330), [255, 0, 0], 10)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + λ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, λ)
"""
Explanation: Ideas for Lane Detection Pipeline
Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:
cv2.inRange() for color selection
cv2.fillPoly() for regions selection
cv2.line() to draw lines on an image given endpoints
cv2.addWeighted() to coadd / overlay two images
cv2.cvtColor() to grayscale or change color
cv2.imwrite() to output images to file
cv2.bitwise_and() to apply a mask to an image
Check out the OpenCV documentation to learn about these and discover even more awesome functionality!
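As a quick illustration of one of these, cv2.inRange keeps pixels whose channels all fall between two bounds. A NumPy sketch of the same masking operation (not OpenCV itself), useful e.g. for selecting near-white lane markings:

```python
import numpy as np

def in_range(img, lower, upper):
    """NumPy sketch of cv2.inRange: 255 where every channel of a pixel
    lies within [lower, upper], 0 elsewhere."""
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    mask = np.all((img >= lower) & (img <= upper), axis=-1)
    return (mask * 255).astype(np.uint8)

# keep near-white pixels, e.g. white lane markings
img = np.array([[[250, 250, 250], [30, 30, 30]]], dtype=np.uint8)
print(in_range(img, (200, 200, 200), (255, 255, 255)))   # -> [[255   0]]
```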
Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson!
End of explanation
"""
import os
os.listdir("test_images/")
"""
Explanation: Test Images
Build your pipeline to work on the images in the directory "test_images"
You should make sure your pipeline works well on these images before you try the videos.
End of explanation
"""
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images directory.
## Import Images
image_list = os.listdir("test_images/")
images = []
for i in range(len(image_list)):
images.append(mpimg.imread(os.path.join('test_images/',image_list[i])))
images[0].shape
## Pre-Process Images(gray,blur,cany)
pp_images = []
for i in range(len(images)):
pp_images.append(canny(gaussian_blur(grayscale(images[i]),kernel_size=5),low_threshold=50, high_threshold=150))
plt.imshow(pp_images[0],cmap='gray')
## Set ROI
imshape = pp_images[0].shape
vertices = np.array([[(100,imshape[0]),(420, 330), (550, 330), (imshape[1]-50,imshape[0])]], dtype=np.int32)
roi_images = []
for i in range(len(pp_images)):
roi_images.append(region_of_interest(pp_images[i], vertices))
plt.imshow(roi_images[5])
## Draw line
draw_images = []
for i in range(len(roi_images)):
# draw_images.append(hough_lines(roi_images[i], rho=1, theta=np.pi/180, threshold=15, min_line_len=40, max_line_gap=20))
draw_images.append(hough_lines(roi_images[i], rho=1, theta=np.pi/180, threshold=40, min_line_len=40, max_line_gap=20))
plt.subplot(1,2,1),plt.imshow(images[5])
plt.subplot(1,2,2),plt.imshow(draw_images[5])
## Overlay images
result = []
initial_img = images
for i in range(len(draw_images)):
result.append(weighted_img(draw_images[i], initial_img[i], α=0.8, β=1., λ=0.))
plt.imshow(result[5])
"""
Explanation: Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the test_images_output directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
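Saving the results into test_images_output, as suggested above, can be sketched like this (the directory and file names and the use of matplotlib's imsave are assumptions; cv2.imwrite also works but expects BGR channel order):

```python
import os
import numpy as np
import matplotlib
matplotlib.use("Agg")          # no display needed just to write files
import matplotlib.image as mpimg

out_dir = "test_images_output"                # assumed output directory
os.makedirs(out_dir, exist_ok=True)
result = (np.random.rand(54, 96, 3) * 255).astype(np.uint8)  # stand-in for a pipeline output
mpimg.imsave(os.path.join(out_dir, "solidWhiteRight_out.png"), result)
```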
End of explanation
"""
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image where lines are drawn on lanes)
pp_img = canny(gaussian_blur(grayscale(image),kernel_size=5),low_threshold=50, high_threshold=150)
imshape = pp_img.shape
vertices = np.array([[(100,imshape[0]),(420, 330), (550, 330), (imshape[1]-50,imshape[0])]], dtype=np.int32)
roi_img = region_of_interest(pp_img, vertices)
line = hough_lines(roi_img, rho=1, theta=np.pi/180, threshold=40, min_line_len=40, max_line_gap=200)
result = weighted_img(line, image, α=0.8, β=1., λ=0.)
return result
"""
Explanation: Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
solidWhiteRight.mp4
solidYellowLeft.mp4
Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, check out this forum post for more troubleshooting tips.
If you get an error that looks like this:
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
Follow the instructions in the error message and check out this forum post for more troubleshooting tips across operating systems.
End of explanation
"""
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
"""
Explanation: Let's try the one with the solid white lane on the right first ...
End of explanation
"""
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
"""
Explanation: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
End of explanation
"""
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
"""
Explanation: Improve the draw_lines() function
At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".
Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.
Now for the one with the solid yellow lane on the left. This one's more tricky!
End of explanation
"""
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
"""
Explanation: Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a link to the writeup template file.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
End of explanation
"""
| HUDataScience/StatisticalMethods2016 | notebooks/Exo8_SNeIa_HubbleDiagram.ipynb | apache-2.0 |
import warnings
# No annoying warnings
warnings.filterwarnings('ignore')
# Because we always need that
# plot within the notebook
%matplotlib inline
import numpy as np
import matplotlib.pyplot as mpl
"""
Explanation: Exercise 8 – Fit a Hubble Diagram
The SN Ia Science in short
The Type Ia Supernova event is the thermonuclear runaway of a white dwarf. This bright event is extremely stable: the peak luminosity of any explosion reaches roughly the same absolute magnitude M0. Two additional parameters, the speed of evolution of the explosion x1 (also called stretch) and the color of the event, allow the SN peak luminosity to be standardized further. Thanks to the stability of the absolute SN magnitude, the observed SN magnitude is a direct indication of the event's distance. Combined with the redshift measurement of the Supernova (the redshift traces the overall expansion of the Universe since the SN event), this gives a measurement of the expansion history of the Universe. The magnitude vs. redshift plot is called a Hubble Diagram, and the Hubble diagram of the SNe Ia enabled the discovery of the accelerated expansion of the Universe in 1999, and thereby of the existence of dark energy (Nobel Prize 2011).
Data you have to measure the density of Dark Energy
The latest compilation of Type Ia Supernova data contains ~1000 SN Ia distance measurements. The data are stored
in
notebooks/data/SNeIaCosmo/jla_lcparams.txt
The Model
In this example we will use the most straightforward analysis. The stretch and color corrected magnitude of the SNe Ia is:
$$
mb_{corr} = mb - (x1 \times \alpha + color \times \beta)
$$
The expected magnitude of the SNe Ia is:
$$
mb_{expected} = \mu \left(z, \Omega \right) + M_0
$$
where $\mu\left(z, \Omega \right)$ is the distance modulus (distance in log scale) for a given cosmology. To have access to $\mu$ use astropy.
In the flat Lambda CDM model, the only free cosmological parameter you can fit is Omega_m, the density of matter today, knowing that, in that case, the density of dark energy is 1-Omega_m
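For intuition about what astropy computes, here is a self-contained numerical sketch of the flat Lambda-CDM distance modulus (an approximation for illustration, not astropy itself):

```python
import numpy as np

C_KM_S = 299792.458   # speed of light in km/s

def distmod(z, H0=70.0, Om=0.3):
    """Numerical sketch of the flat Lambda-CDM distance modulus,
    mimicking astropy's FlatLambdaCDM(H0, Om).distmod (for z > 0)."""
    zs = np.linspace(0.0, z, 10001)
    f = 1.0 / np.sqrt(Om * (1.0 + zs) ** 3 + (1.0 - Om))   # 1/E(z)
    integral = np.sum((f[1:] + f[:-1]) * np.diff(zs)) / 2.0  # trapezoidal rule
    dc = (C_KM_S / H0) * integral      # comoving distance [Mpc]
    dl = (1.0 + z) * dc                # luminosity distance [Mpc]
    return 5.0 * np.log10(dl * 1.0e5)  # 1 Mpc = 1e5 * (10 pc)

print(distmod(0.5))   # ~ 42.26 for H0=70, Om=0.3
```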
Astropy
The website with all the information is here http://www.astropy.org/
If you installed python using Anaconda, you should have astropy. Otherwise sudo pip install astropy should work
To get the cosmology for an Hubble Constant of 70 (use that) and for a given density of matter (Omega_m, which will be one of your free parameter):
python
from astropy import cosmology
cosmo = cosmology.FlatLambdaCDM(70, Omega_m)
To get the distance modulus
python
from astropy import cosmology
mu = cosmology.FlatLambdaCDM(70, Omega_m).distmod(z).value
Your Job: find the density of Dark energy
Create a Chi2 fitting class that lets you derive the density of dark energy. You should find Omega_m ~ 0.3, i.e. a universe composed of ~70% dark energy and ~30% matter.
Plot a Hubble diagram showing the corrected SN magnitude (mb_corr) as a function of the redshift (zcmb). Overplot the model.
Remark: alpha, beta and M0 have to be fitted together with the cosmology; such parameters are called 'nuisance parameters'.
Remark 2: ignore errors on the redshift (z), but take into account the errors on mb and on the x1 and color parameters. For this example, ignore the covariance terms.
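Remark 2 amounts to standard propagation of independent errors: sigma^2(mb_corr) = sigma_mb^2 + alpha^2 * sigma_x1^2 + beta^2 * sigma_color^2. A NumPy sketch (the sign convention follows the Chi2Fit class below; all the numbers are made-up examples):

```python
import numpy as np

def mb_corr_with_err(mb, dmb, x1, dx1, color, dcolor, alpha, beta):
    """Corrected magnitude and its propagated error, assuming independent
    errors on mb, x1 and color (covariance terms ignored, as in Remark 2)."""
    mb_corr = mb - (alpha * x1 + beta * color)
    var = dmb**2 + (alpha * dx1)**2 + (beta * dcolor)**2
    return mb_corr, np.sqrt(var)

m, e = mb_corr_with_err(mb=24.0, dmb=0.1, x1=0.5, dx1=0.2,
                        color=0.05, dcolor=0.03, alpha=-0.14, beta=3.1)
```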
Correction
End of explanation
"""
from astropy import table, cosmology
"""
Explanation: First step: access the data
Imports for the cosmological analysis (and also the convenient table module)
End of explanation
"""
data = table.Table.read("data/SNeIaCosmo/jla_lcparams.txt", format="ascii")
data.colnames
"""
Explanation: Read the data
End of explanation
"""
# copy-paste of a class built before
class Chi2Fit( object ):
def __init__(self, jla_data=None):
""" init the class
Parameters:
-----------
jla_data: [astropy.table]
Astropy Table containing the Supernovae properties
(zcmb, mb, x1, color etc.)
Return
------
Void
"""
if jla_data is not None:
self.set_data(jla_data)
# ------------------- #
# Setter #
# ------------------- #
def set_data(self, datatable):
""" Set the data of the class. This must be an astropy table """
# use this tricks to forbid the user to overwrite self.data...
self._data = datatable
def setup(self, parameters):
""" Set the parameters of the class:
- alpha
- beta
- Om
- M0
"""
self._parameters = parameters
# ------------------- #
# GETTER #
# ------------------- #
def get_mbcorr(self, parameters):
""" corrected sn magnitude with its associated variance"""
return self.mb - (self.x1*parameters[0] + self.color*parameters[1]),\
self.mb_err**2 + (self.x1_err*parameters[0])**2 + (self.color_err*parameters[1])**2
def get_mbexp(self, parameters, z=None):
""" corrected sn magnitude with its associated error"""
zused = z if z is not None else self.zcmb
return cosmology.FlatLambdaCDM(70, parameters[2]).distmod(zused ).value + parameters[3]
def fit(self,guess):
""" fit the model to the data
The methods uses scipy.optimize.minize to fit the model
to the data. The fit output is saved as self.fitout, the
best fit parameters being self.fitout["x"]
Parameters
----------
guess: [array]
initial guess for the minimizer. It's size must correspond
to the amount of free parameters of the model.
Return
------
Void (create self.fitout)
"""
from scipy.optimize import minimize
bounds = [[None,None],[None,None],[0,None],[None,None]]
self.fitout = minimize(self.chi2, guess,bounds=bounds)
print self.fitout
self._bestfitparameters = self.fitout["x"]
def chi2(self,parameters):
""" The chi2 of the model with the given `parameters`
in comparison to the object's data
Return
------
float (the chi2)
"""
mbcorr, mbcorr_var = self.get_mbcorr(parameters)
mb_exp = self.get_mbexp(parameters)
chi2 = ((mbcorr-mb_exp)**2)/(mbcorr_var)
return np.sum(chi2)
def plot(self, parameters):
""" Vizualize the data and the model for the given
parameters
Return
------
Void
"""
fig = mpl.figure()
ax = fig.add_subplot(1,1,1)
self.setup(parameters)
ax.errorbar(self.zcmb,self.setted_mbcorr,
yerr=self.setted_mbcorr_err, ls="None",marker='o', color="b",
ecolor="0.7")
x = np.linspace(0.001,1.4,10000)
#print self.get_cosmo_distmod(parameters,x)
ax.plot(x,self.get_mbexp(self._parameters,x),'-r', scalex=False,scaley=False)
fig.show()
# ================== #
# Properties #
# ================== #
@property
def data(self):
""" Data table containing the data of the instance """
return self._data
@data.setter
def data(self, newdata):
""" Set the data """
# add all the relevant tests
print "You setted new data"
self._data = newdata
@property
def npoints(self):
""" number of data points """
return len(self.data)
# ----------
# - Parameters
@property
def parameters(self):
""" Current parameters of the fit """
if not self.has_parameters():
raise ValueError("No Parameters defined. See the self.setup() method")
return self._parameters
def has_parameters(self):
return "_parameters" in dir(self)
# -- Current Param prop
@property
def _alpha(self):
return self._parameters[0]
@property
def _beta(self):
return self._parameters[1]
@property
def _omegam(self):
return self._parameters[2]
@property
def _M0(self):
return self._parameters[3]
# -------
# -- Param derived properties
@property
def setted_mbcorr(self):
""" corrected hubble residuals"""
return self.get_mbcorr(self.parameters)[0]
@property
def setted_mbcorr_err(self):
""" corrected hubble residuals"""
return np.sqrt(self.get_mbcorr(self.parameters)[1])
@property
def setted_mu(self):
""" distance modulus for the given cosmology """
return cosmology.FlatLambdaCDM(70, self._omegam).distmod(self.zcmb).value
@property
def setted_M0(self):
""" absolute SN magnitude for the setted parameters """
return self._M0
@property
def setted_mbexp(self):
return self.setted_mu + self._M0
# -------
# -- Data derived properties
@property
def mb(self):
""" observed magnitude (in the b-band) of the Supernovae """
return self.data["mb"]
@property
def mb_err(self):
""" observed magnitude (in the b-band) of the Supernovae """
return self.data["dmb"]
@property
def x1(self):
""" Lightcurve stretch """
return self.data["x1"]
@property
def x1_err(self):
""" errors on the Lightcurve stretch """
return self.data["dx1"]
@property
def color(self):
""" Lightcurve color """
return self.data["color"]
@property
def color_err(self):
""" errors on the Lightcurve color """
return self.data["dcolor"]
@property
def zcmb(self):
""" cosmological redshift of the Supenovae """
return self.data["zcmb"]
c = Chi2Fit(data)
c.setup([ -0.18980149, 3.65435315, 0.32575054, -19.06810566])
c.setted_mbcorr
c.fit([0.13,3,0.2,-20])
c.plot(c._bestfitparameters)
c.setted_mbcorr
"""
Explanation: The Chi2 Method
an introduction to the "property" and "setter" decorators
Decorators are, roughly, functions that wrap other functions. Check e.g., http://thecodeship.com/patterns/guide-to-python-function-decorators/
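A minimal, self-contained illustration of the property/setter pattern used in the class above (a hypothetical Counter class, not part of the exercise):

```python
class Counter(object):
    def __init__(self):
        self._value = 0

    @property
    def value(self):
        """Read access looks like a plain attribute, but runs this method."""
        return self._value

    @value.setter
    def value(self, new):
        """Write access runs this method, so we can validate the input."""
        if new < 0:
            raise ValueError("value must be non-negative")
        self._value = new

c = Counter()
c.value = 3        # goes through the setter (validation runs)
print(c.value)     # -> 3
```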
End of explanation
"""
| josealber84/deep-learning | weight-initialization/weight_initialization.ipynb | mit |
%matplotlib inline
import tensorflow as tf
import helper
from tensorflow.examples.tutorials.mnist import input_data
print('Getting MNIST Dataset...')
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
print('Data Extracted.')
"""
Explanation: Weight Initialization
In this lesson, you'll learn how to find good initial weights for a neural network. Having good initial weights can place the neural network close to the optimal solution. This allows the neural network to come to the best solution quicker.
Testing Weights
Dataset
To see how different weights perform, we'll test on the same dataset and neural network. Let's go over the dataset and neural network.
We'll be using the MNIST dataset to demonstrate the different initial weights. As a reminder, the MNIST dataset contains images of handwritten numbers, 0-9, with normalized input (0.0 - 1.0). Run the cell below to download and load the MNIST dataset.
End of explanation
"""
# Save the shapes of weights for each layer
layer_1_weight_shape = (mnist.train.images.shape[1], 256)
layer_2_weight_shape = (256, 128)
layer_3_weight_shape = (128, mnist.train.labels.shape[1])
"""
Explanation: Neural Network
<img style="float: left" src="images/neural_network.png"/>
For the neural network, we'll test on a 3 layer neural network with ReLU activations and an Adam optimizer. The lessons you learn apply to other neural networks, including different activations and optimizers.
End of explanation
"""
all_zero_weights = [
tf.Variable(tf.zeros(layer_1_weight_shape)),
tf.Variable(tf.zeros(layer_2_weight_shape)),
tf.Variable(tf.zeros(layer_3_weight_shape))
]
all_one_weights = [
tf.Variable(tf.ones(layer_1_weight_shape)),
tf.Variable(tf.ones(layer_2_weight_shape)),
tf.Variable(tf.ones(layer_3_weight_shape))
]
helper.compare_init_weights(
mnist,
'All Zeros vs All Ones',
[
(all_zero_weights, 'All Zeros'),
(all_one_weights, 'All Ones')])
"""
Explanation: Initialize Weights
Let's start looking at some initial weights.
All Zeros or Ones
If you follow the principle of Occam's razor, you might think setting all the weights to 0 or 1 would be the best solution. This is not the case.
With every weight the same, all the neurons at each layer are producing the same output. This makes it hard to decide which weights to adjust.
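The symmetry problem is easy to see with plain NumPy (a sketch, independent of the TensorFlow network above): with identical weights, every neuron in a layer computes the same output, so backprop gives them identical updates and they can never differentiate.

```python
import numpy as np

x = np.array([0.2, -0.4, 0.7])   # one input example, 3 features
W = np.ones((3, 2))              # two hidden units with identical weights
h = x @ W                        # both units compute exactly the same value
print(h)                         # -> [0.5 0.5]
```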
Let's compare the loss with all ones and all zero weights using helper.compare_init_weights. This function will run two different initial weights on the neural network above for 2 epochs. It will plot the loss for the first 100 batches and print out stats after the 2 epochs (~860 batches). We plot the first 100 batches to better judge which weights performed better at the start.
Run the cell below to see the difference between weights of all zeros against all ones.
End of explanation
"""
helper.hist_dist('Random Uniform (minval=-3, maxval=3)', tf.random_uniform([1000], -3, 3))
"""
Explanation: As you can see the accuracy is close to guessing for both zeros and ones, around 10%.
The neural network is having a hard time determining which weights need to be changed, since the neurons have the same output for each layer. To avoid neurons with the same output, let's use unique weights. We can also randomly select these weights to avoid being stuck in a local minimum for each run.
A good solution for getting these random weights is to sample from a uniform distribution.
Uniform Distribution
A [uniform distribution](https://en.wikipedia.org/wiki/Uniform_distribution_(continuous%29) has an equal probability of picking any number from a set of numbers. We'll be picking from a continuous distribution, so the chance of picking the same number is low. We'll use TensorFlow's tf.random_uniform function to pick random numbers from a uniform distribution.
tf.random_uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None, name=None)
Outputs random values from a uniform distribution.
The generated values follow a uniform distribution in the range [minval, maxval). The lower bound minval is included in the range, while the upper bound maxval is excluded.
shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
minval: A 0-D Tensor or Python value of type dtype. The lower bound on the range of random values to generate. Defaults to 0.
maxval: A 0-D Tensor or Python value of type dtype. The upper bound on the range of random values to generate. Defaults to 1 if dtype is floating point.
dtype: The type of the output: float32, float64, int32, or int64.
seed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.
name: A name for the operation (optional).
We can visualize the uniform distribution by using a histogram. Let's map the values from tf.random_uniform([1000], -3, 3) to a histogram using the helper.hist_dist function. This will be 1000 random float values from -3 to 3, excluding the value 3.
End of explanation
"""
# Default for tf.random_uniform is minval=0 and maxval=1
basline_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape)),
tf.Variable(tf.random_uniform(layer_2_weight_shape)),
tf.Variable(tf.random_uniform(layer_3_weight_shape))
]
helper.compare_init_weights(
mnist,
'Baseline',
[(basline_weights, 'tf.random_uniform [0, 1)')])
"""
Explanation: The histogram used 500 buckets for the 1000 values. Since the chance for any single bucket is the same, there should be around 2 values for each bucket. That's exactly what we see with the histogram. Some buckets have more and some have less, but they trend around 2.
Now that you understand the tf.random_uniform function, let's apply it to some initial weights.
Baseline
Let's see how well the neural network trains using the default values for tf.random_uniform, where minval=0.0 and maxval=1.0.
End of explanation
"""
uniform_neg1to1_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -1, 1)),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -1, 1)),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -1, 1))
]
helper.compare_init_weights(
mnist,
'[0, 1) vs [-1, 1)',
[
(basline_weights, 'tf.random_uniform [0, 1)'),
(uniform_neg1to1_weights, 'tf.random_uniform [-1, 1)')])
"""
Explanation: The loss graph is showing the neural network is learning, which it didn't with all zeros or all ones. We're headed in the right direction.
General rule for setting weights
The general rule for setting the weights in a neural network is to set them close to zero without being too small. A good practice is to start your weights in the range of $[-y, y]$ where
$y=1/\sqrt{n}$ ($n$ is the number of inputs to a given neuron).
To see if this holds true, let's first center our range over zero. This will give us the range [-1, 1).
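The general rule's bound itself is a one-liner (784 is the MNIST input size used by the first layer in this notebook):

```python
import numpy as np

def init_bound(n_inputs):
    """Half-range y = 1/sqrt(n) of the recommended uniform init [-y, y]."""
    return 1.0 / np.sqrt(n_inputs)

print(init_bound(784))   # first layer: 784 inputs -> y = 1/28, about 0.0357
```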
End of explanation
"""
uniform_neg01to01_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.1, 0.1)),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.1, 0.1)),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.1, 0.1))
]
uniform_neg001to001_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.01, 0.01)),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.01, 0.01)),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.01, 0.01))
]
uniform_neg0001to0001_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.001, 0.001)),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.001, 0.001)),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.001, 0.001))
]
helper.compare_init_weights(
mnist,
'[-1, 1) vs [-0.1, 0.1) vs [-0.01, 0.01) vs [-0.001, 0.001)',
[
(uniform_neg1to1_weights, '[-1, 1)'),
(uniform_neg01to01_weights, '[-0.1, 0.1)'),
(uniform_neg001to001_weights, '[-0.01, 0.01)'),
(uniform_neg0001to0001_weights, '[-0.001, 0.001)')],
plot_n_batches=None)
"""
Explanation: We're going in the right direction, the accuracy and loss is better with [-1, 1). We still want smaller weights. How far can we go before it's too small?
Too small
Let's compare [-0.1, 0.1), [-0.01, 0.01), and [-0.001, 0.001) to see how small is too small. We'll also set plot_n_batches=None to show all the batches in the plot.
End of explanation
"""
import numpy as np
general_rule_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -1/np.sqrt(layer_1_weight_shape[0]), 1/np.sqrt(layer_1_weight_shape[0]))),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -1/np.sqrt(layer_2_weight_shape[0]), 1/np.sqrt(layer_2_weight_shape[0]))),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -1/np.sqrt(layer_3_weight_shape[0]), 1/np.sqrt(layer_3_weight_shape[0])))
]
helper.compare_init_weights(
mnist,
'[-0.1, 0.1) vs General Rule',
[
(uniform_neg01to01_weights, '[-0.1, 0.1)'),
(general_rule_weights, 'General Rule')],
plot_n_batches=None)
"""
Explanation: Looks like anything [-0.01, 0.01) or smaller is too small. Let's compare this to our typical rule of using the range $y=1/\sqrt{n}$.
End of explanation
"""
helper.hist_dist('Random Normal (mean=0.0, stddev=1.0)', tf.random_normal([1000]))
"""
Explanation: The range we found and $y=1/\sqrt{n}$ are really close.
Since the uniform distribution has the same chance to pick anything in the range, what if we used a distribution that had a higher chance of picking numbers closer to 0. Let's look at the normal distribution.
Normal Distribution
Unlike the uniform distribution, the normal distribution has a higher likelihood of picking number close to it's mean. To visualize it, let's plot values from TensorFlow's tf.random_normal function to a histogram.
tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)
Outputs random values from a normal distribution.
shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
mean: A 0-D Tensor or Python value of type dtype. The mean of the normal distribution.
stddev: A 0-D Tensor or Python value of type dtype. The standard deviation of the normal distribution.
dtype: The type of the output.
seed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.
name: A name for the operation (optional).
End of explanation
"""
normal_01_weights = [
tf.Variable(tf.random_normal(layer_1_weight_shape, stddev=0.1)),
tf.Variable(tf.random_normal(layer_2_weight_shape, stddev=0.1)),
tf.Variable(tf.random_normal(layer_3_weight_shape, stddev=0.1))
]
helper.compare_init_weights(
mnist,
'Uniform [-0.1, 0.1) vs Normal stddev 0.1',
[
(uniform_neg01to01_weights, 'Uniform [-0.1, 0.1)'),
(normal_01_weights, 'Normal stddev 0.1')])
"""
Explanation: Let's compare the normal distribution against the previous uniform distribution.
End of explanation
"""
helper.hist_dist('Truncated Normal (mean=0.0, stddev=1.0)', tf.truncated_normal([1000]))
"""
Explanation: The normal distribution gave a slight improvement in accuracy and loss. Let's move closer to 0 and drop picked numbers that are more than some number of standard deviations away. This distribution is called the Truncated Normal Distribution.
Truncated Normal Distribution
tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)
Outputs random values from a truncated normal distribution.
The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.
shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
mean: A 0-D Tensor or Python value of type dtype. The mean of the truncated normal distribution.
stddev: A 0-D Tensor or Python value of type dtype. The standard deviation of the truncated normal distribution.
dtype: The type of the output.
seed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.
name: A name for the operation (optional).
End of explanation
"""
trunc_normal_01_weights = [
tf.Variable(tf.truncated_normal(layer_1_weight_shape, stddev=0.1)),
tf.Variable(tf.truncated_normal(layer_2_weight_shape, stddev=0.1)),
tf.Variable(tf.truncated_normal(layer_3_weight_shape, stddev=0.1))
]
helper.compare_init_weights(
mnist,
'Normal vs Truncated Normal',
[
(normal_01_weights, 'Normal'),
(trunc_normal_01_weights, 'Truncated Normal')])
"""
Explanation: Again, let's compare these results against the previous distribution.
End of explanation
"""
helper.compare_init_weights(
mnist,
'Baseline vs Truncated Normal',
[
(basline_weights, 'Baseline'),
(trunc_normal_01_weights, 'Truncated Normal')])
"""
Explanation: There's no difference between the two, but that's because the neural network we're using is too small. A larger neural network will pick more points on the normal distribution, increasing the likelihood that its choices are more than 2 standard deviations from the mean.
We've come a long way from the first set of weights we tested. Let's see the difference between the weights we used then and now.
End of explanation
"""
|
chemiskyy/simmit | Examples/ODF/.ipynb_checkpoints/ODF-checkpoint.ipynb | gpl-3.0 | %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from simmit import smartplus as sim
from simmit import identify as iden
import os
dir = os.path.dirname(os.path.realpath('__file__'))
"""
Explanation: Orientation density functions
End of explanation
"""
x = np.arange(0,182,2)
path_data = dir + '/data/'
peak_file = 'Npeaks0.dat'
y = sim.get_densities(x, path_data, peak_file, False)
fig = plt.figure()
plt.grid(True)
plt.plot(x,y, c='black')
"""
Explanation: In this Python Notebook we will show how to properly run a simulation of a composite material, providing the ODF (orientation density function) of the reinforcements.
Such an identification procedure requires:
1. Proper ODF peak data
2. Proper composite properties
3. A proper numerical model (here a composite model for the laminate constitutive model)
End of explanation
"""
NPhases_file = dir + '/data/Nellipsoids0.dat'
NPhases = pd.read_csv(NPhases_file, delimiter=r'\s+', index_col=False, engine='python')
NPhases[::]
umat_name = 'MIMTN' #This is the 5 character code for the Mori-Tanaka homogenization for composites with a matrix and ellipsoidal reinforcements
nstatev = 0
rho = 1.12 #The density of the material (overall)
c_p = 1.64 #The specific heat capacity (overall)
nphases = 2 #The number of phases
num_file = 0 #The num of the file that contains the subphases
int1 = 20
int2 = 20
psi_rve = 0.
theta_rve = 0.
phi_rve = 0.
props = np.array([nphases, num_file, int1, int2, 0])
path_data = 'data'
path_results = 'results'
Nfile_init = 'Nellipsoids0.dat'
Nfile_disc = 'Nellipsoids1.dat'
nphases_rve = 36
num_phase_disc = 1
sim.ODF_discretization(nphases_rve, num_phase_disc, 0., 180., umat_name, props, path_data, peak_file, Nfile_init, Nfile_disc, 1)
NPhases_file = dir + '/data/Nellipsoids1.dat'
NPhases = pd.read_csv(NPhases_file, delimiter=r'\s+', index_col=False, engine='python')
#We plot here the five first phases
NPhases[:5]
#Plot the concentration and the angle
c, angle = np.loadtxt(NPhases_file, usecols=(4,5), skiprows=2, unpack=True)
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
# the histogram of the data
xs = np.arange(0,180,5)
rects1 = ax1.bar(xs, c, width=5, color='r', align='center')
ax2.plot(x, y, 'b-')
ax1.set_xlabel('X data')
ax1.set_ylabel('Y1 data', color='g')
ax2.set_ylabel('Y2 data', color='b')
ax1.set_ylim([0,0.025])
ax2.set_ylim([0,0.25])
plt.show()
#plt.grid(True)
#plt.plot(angle,c, c='black')
plt.show()
#Run the simulation
pathfile = 'path.txt'
nphases = 37 #The number of phases
num_file = 1 #The num of the file that contains the subphases
props = np.array([nphases, num_file, int1, int2])
outputfile = 'results_MTN.txt'
sim.solver(umat_name, props, nstatev, psi_rve, theta_rve, phi_rve, path_data, path_results, pathfile, outputfile)
fig = plt.figure()
outputfile_macro = dir + '/' + path_results + '/results_MTN_global-0.txt'
e11, e22, e33, e12, e13, e23, s11, s22, s33, s12, s13, s23 = np.loadtxt(outputfile_macro, usecols=(8,9,10,11,12,13,14,15,16,17,18,19), unpack=True)
plt.grid(True)
plt.plot(e11,s11, c='black')
for i in range(8,12):
outputfile_micro = dir + '/' + path_results + '/results_MTN_global-0-' + str(i) + '.txt'
e11, e22, e33, e12, e13, e23, s11, s22, s33, s12, s13, s23 = np.loadtxt(outputfile_micro, usecols=(8,9,10,11,12,13,14,15,16,17,18,19), unpack=True)
plt.grid(True)
plt.plot(e11,s11, c='red')
plt.xlabel('Strain')
plt.ylabel('Stress (MPa)')
plt.show()
"""
Explanation: In the previous graph we can see a multi-peak ODF (peaks are modeled using PEARSONVII functions). It actually represents the microstructure of injected plates quite well.
The next step is to discretize the ODF into phases.
The file containing the initial 2-phase microstructure contains the following information
End of explanation
"""
|
arcyfelix/Courses | 18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/03-Convolutional-Neural-Networks/02-CNN-Project-Exercise.ipynb | apache-2.0 | # Put file path as a string here
CIFAR_DIR = './data/cifar-10-batches-py/'
"""
Explanation: CNN-Project-Exercise
We'll be using the CIFAR-10 dataset, which is a very famous dataset for image recognition!
The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.
The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class.
Follow the Instructions in Bold, if you get stuck somewhere, view the solutions video! Most of the challenge with this project is actually dealing with the data and its dimensions, not from setting up the CNN itself!
Step 0: Get the Data
Note: If you have trouble with this just watch the solutions video. This doesn't really have anything to do with the exercise, it's more about setting up your data. Please make sure to watch the solutions video before posting any QA questions.
Download the data for CIFAR from here: https://www.cs.toronto.edu/~kriz/cifar.html
Specifically the CIFAR-10 python version link: https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
Remember the directory you save the file in!
End of explanation
"""
def unpickle(file):
import pickle
with open(file, 'rb') as fo:
cifar_dict = pickle.load(fo, encoding='bytes')
return cifar_dict
dirs = ['batches.meta','data_batch_1','data_batch_2','data_batch_3','data_batch_4','data_batch_5','test_batch']
all_data = [0, 1, 2, 3, 4, 5, 6]
for i,direc in zip(all_data,dirs):
all_data[i] = unpickle(CIFAR_DIR + direc)
batch_meta = all_data[0]
data_batch1 = all_data[1]
data_batch2 = all_data[2]
data_batch3 = all_data[3]
data_batch4 = all_data[4]
data_batch5 = all_data[5]
test_batch = all_data[6]
batch_meta
"""
Explanation: The archive contains the files data_batch_1, data_batch_2, ..., data_batch_5, as well as test_batch. Each of these files is a Python "pickled" object produced with cPickle.
Load the Data. Use the Code Below to load the data:
End of explanation
"""
data_batch1.keys()
"""
Explanation: Why the 'b's in front of the string?
Bytes literals are always prefixed with 'b' or 'B'; they produce an instance of the bytes type instead of the str type. They may only contain ASCII characters; bytes with a numeric value of 128 or greater must be expressed with escapes.
https://stackoverflow.com/questions/6269765/what-does-the-b-character-do-in-front-of-a-string-literal
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
X = data_batch1[b'data']
# Put the code here that transforms the X array!
X = X.reshape(10000, 3, 32, 32).transpose(0, 2, 3, 1).astype("uint8")
plt.imshow(X[0])
plt.imshow(X[1])
plt.imshow(X[4])
"""
Explanation: Loaded in this way, each of the batch files contains a dictionary with the following elements:
* data -- a 10000x3072 numpy array of uint8s. Each row of the array stores a 32x32 colour image. The first 1024 entries contain the red channel values, the next 1024 the green, and the final 1024 the blue. The image is stored in row-major order, so that the first 32 entries of the array are the red channel values of the first row of the image.
* labels -- a list of 10000 numbers in the range 0-9. The number at index i indicates the label of the ith image in the array data.
The dataset contains another file, called batches.meta. It too contains a Python dictionary object. It has the following entries:
label_names -- a 10-element list which gives meaningful names to the numeric labels in the labels array described above. For example, label_names[0] == "airplane", label_names[1] == "automobile", etc.
Display a single image using matplotlib.
Grab a single image from data_batch1 and display it with plt.imshow(). You'll need to reshape and transpose the numpy array inside the X = data_batch[b'data'] dictionary entry.
It should end up looking like this:
# Array of all images reshaped and formatted for viewing
X = X.reshape(10000, 3, 32, 32).transpose(0,2,3,1).astype("uint8")
End of explanation
"""
def one_hot_encode(vec, vals = 10):
'''
For use to one-hot encode the 10- possible labels
'''
n = len(vec)
out = np.zeros((n, vals))
out[range(n), vec] = 1
return out
class CifarHelper():
def __init__(self):
self.i = 0
# Grabs a list of all the data batches for training
self.all_train_batches = [data_batch1, data_batch2, data_batch3, data_batch4, data_batch5]
# Grabs a list of all the test batches (really just one batch)
self.test_batch = [test_batch]
# Intialize some empty variables for later on
self.training_images = None
self.training_labels = None
self.test_images = None
self.test_labels = None
def set_up_images(self):
print("Setting Up Training Images and Labels")
# Vertically stacks the training images
self.training_images = np.vstack([d[b"data"] for d in self.all_train_batches])
train_len = len(self.training_images)
# Reshapes and normalizes training images
self.training_images = self.training_images.reshape(train_len, 3, 32, 32).transpose(0, 2, 3, 1) / 255
# One hot Encodes the training labels (e.g. [0, 0, 0, 1, 0, 0, 0, 0, 0, 0])
self.training_labels = one_hot_encode(np.hstack([d[b"labels"] for d in self.all_train_batches]), 10)
print("Setting Up Test Images and Labels")
# Vertically stacks the test images
self.test_images = np.vstack([d[b"data"] for d in self.test_batch])
test_len = len(self.test_images)
# Reshapes and normalizes test images
self.test_images = self.test_images.reshape(test_len, 3, 32, 32).transpose(0, 2, 3, 1)/255
# One hot Encodes the test labels (e.g. [0, 0, 0, 1, 0, 0, 0, 0, 0, 0])
self.test_labels = one_hot_encode(np.hstack([d[b"labels"] for d in self.test_batch]), 10)
def next_batch(self, batch_size):
# Note that the 100 dimension in the reshape call is set by an assumed batch size of 100
x = self.training_images[self.i: self.i + batch_size].reshape(100, 32, 32, 3)
y = self.training_labels[self.i: self.i + batch_size]
self.i = (self.i + batch_size) % len(self.training_images)
return x, y
"""
Explanation: Helper Functions for Dealing With Data.
Use the provided code below to help with dealing with grabbing the next batch once you've gotten ready to create the Graph Session. Can you break down how it works?
End of explanation
"""
# Before Your tf.Session run these two lines
ch = CifarHelper()
ch.set_up_images()
# During your session to grab the next batch use this line
# (Just like we did for mnist.train.next_batch)
# batch = ch.next_batch(100)
"""
Explanation: How to use the above code:
End of explanation
"""
import tensorflow as tf
"""
Explanation: Creating the Model
Import tensorflow
End of explanation
"""
x = tf.placeholder(shape = [None, 32, 32, 3], dtype = tf.float32)
y = tf.placeholder(shape = [None, 10], dtype = tf.float32)
"""
Explanation: Create 2 placeholders, x and y_true. Their shapes should be:
x shape = [None,32,32,3]
y_true shape = [None,10]
End of explanation
"""
hold_prob = tf.placeholder(dtype = tf.float32)
"""
Explanation: Create one more placeholder called hold_prob. No need for shape here. This placeholder will just hold a single probability for the dropout.
End of explanation
"""
def initialize_weights(shape):
init_random_distribution = tf.truncated_normal(shape, stddev = 0.1)
return tf.Variable(init_random_distribution)
def initialize_bias(shape):
init_bias_vals = tf.constant(0.1, shape=shape)
return tf.Variable(init_bias_vals)
def conv2d(x, W):
return tf.nn.conv2d(x, W, strides = [1, 1, 1, 1], padding = 'SAME')
def max_pool_2_by_2(x):
return tf.nn.max_pool(x, ksize = [1, 2, 2, 1], strides = [1, 2, 2, 1], padding = 'SAME')
def convolutional_layer(input_x, shape):
W = initialize_weights(shape)
b = initialize_bias([shape[3]])
return tf.nn.relu(conv2d(input_x, W) + b)
def fully_connected_layer(input_layer, size):
input_size = int(input_layer.get_shape()[1])
W = initialize_weights([input_size, size])
b = initialize_bias([size])
return tf.matmul(input_layer, W) + b
"""
Explanation: Helper Functions
Grab the helper functions from MNIST with CNN (or recreate them here yourself for a hard challenge!). You'll need:
init_weights
init_bias
conv2d
max_pool_2by2
convolutional_layer
normal_full_layer
End of explanation
"""
convo_1 = convolutional_layer(x, shape = [4, 4, 3, 32])
pool_1 = max_pool_2_by_2(convo_1)
"""
Explanation: Create the Layers
Create a convolutional layer and a pooling layer as we did for MNIST.
Its up to you what the 2d size of the convolution should be, but the last two digits need to be 3 and 32 because of the 3 color channels and 32 pixels. So for example you could use:
convo_1 = convolutional_layer(x,shape=[4,4,3,32])
End of explanation
"""
convo_2 = convolutional_layer(pool_1, shape = [4, 4, 32, 64])
pool_2 = max_pool_2_by_2(convo_2)
"""
Explanation: Create the next convolutional and pooling layers. The last two dimensions of the convo_2 layer should be 32,64
End of explanation
"""
8*8*64
flattened = tf.reshape(pool_2, shape = [-1, 8 * 8 * 64])
"""
Explanation: Now create a flattened layer by reshaping the pooling layer into [-1,8 * 8 * 64] or [-1,4096]
End of explanation
"""
fully_connected_layer_1 = tf.nn.relu(fully_connected_layer(flattened,1024))
"""
Explanation: Create a new full layer using the normal_full_layer function and passing in your flattend convolutional 2 layer with size=1024. (You could also choose to reduce this to something like 512)
End of explanation
"""
fully_connected_layer_after_dropout = tf.nn.dropout(x = fully_connected_layer_1,
keep_prob = hold_prob)
"""
Explanation: Now create the dropout layer with tf.nn.dropout, remember to pass in your hold_prob placeholder.
End of explanation
"""
y_pred = fully_connected_layer(fully_connected_layer_after_dropout, size = 10)
"""
Explanation: Finally set the output to y_pred by passing in the dropout layer into the normal_full_layer function. The size should be 10 because of the 10 possible labels
End of explanation
"""
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels = y,
logits = y_pred))
"""
Explanation: Loss Function
Create a cross_entropy loss function
End of explanation
"""
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
train = optimizer.minimize(cross_entropy)
"""
Explanation: Optimizer
Create the optimizer using an Adam Optimizer.
End of explanation
"""
init = tf.global_variables_initializer()
"""
Explanation: Create a variable to intialize all the global tf variables.
End of explanation
"""
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(5000):
batch = ch.next_batch(100)
sess.run(train, feed_dict={x: batch[0],
y: batch[1],
hold_prob: 0.5})
# PRINT OUT A MESSAGE EVERY 100 STEPS
if i%100 == 0:
print('Currently on step {}'.format(i))
print('Accuracy is:')
# Test the Train Model
matches = tf.equal(tf.argmax(y_pred,1),tf.argmax(y,1))
acc = tf.reduce_mean(tf.cast(matches,tf.float32))
print(sess.run(acc,feed_dict={x: ch.test_images,
y: ch.test_labels,
hold_prob: 1.0}))
print('\n')
"""
Explanation: Graph Session
Perform the training and test print outs in a Tf session and run your model!
End of explanation
"""
|
AllenDowney/ModSimPy | soln/chap03soln.ipynb | mit | # Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim library
from modsim import *
# set the random number generator
np.random.seed(7)
"""
Explanation: Modeling and Simulation in Python
Chapter 3
Copyright 2017 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
"""
def step(state, p1, p2):
"""Simulate one minute of time.
state: bikeshare State object
p1: probability of an Olin->Wellesley customer arrival
p2: probability of a Wellesley->Olin customer arrival
"""
if flip(p1):
bike_to_wellesley(state)
if flip(p2):
bike_to_olin(state)
def bike_to_wellesley(state):
"""Move one bike from Olin to Wellesley.
state: bikeshare State object
"""
state.olin -= 1
state.wellesley += 1
def bike_to_olin(state):
"""Move one bike from Wellesley to Olin.
state: bikeshare State object
"""
state.wellesley -= 1
state.olin += 1
def decorate_bikeshare():
"""Add a title and label the axes."""
decorate(title='Olin-Wellesley Bikeshare',
xlabel='Time step (min)',
ylabel='Number of bikes')
"""
Explanation: More than one State object
Here's the code from the previous chapter, with two changes:
I've added DocStrings that explain what each function does, and what parameters it takes.
I've added a parameter named state to the functions so they work with whatever State object we give them, instead of always using bikeshare. That makes it possible to work with more than one State object.
End of explanation
"""
def run_simulation(state, p1, p2, num_steps):
"""Simulate the given number of time steps.
state: State object
p1: probability of an Olin->Wellesley customer arrival
p2: probability of a Wellesley->Olin customer arrival
num_steps: number of time steps
"""
results = TimeSeries()
for i in range(num_steps):
step(state, p1, p2)
results[i] = state.olin
plot(results, label='Olin')
"""
Explanation: And here's run_simulation, which is a solution to the exercise at the end of the previous notebook.
End of explanation
"""
bikeshare1 = State(olin=10, wellesley=2)
bikeshare2 = State(olin=2, wellesley=10)
"""
Explanation: Now we can create more than one State object:
End of explanation
"""
bike_to_olin(bikeshare1)
bike_to_wellesley(bikeshare2)
"""
Explanation: Whenever we call a function, we indicate which State object to work with:
End of explanation
"""
bikeshare1
bikeshare2
"""
Explanation: And you can confirm that the different objects are getting updated independently:
End of explanation
"""
bikeshare = State(olin=10, wellesley=2)
run_simulation(bikeshare, 0.4, 0.2, 60)
decorate_bikeshare()
"""
Explanation: Negative bikes
In the code we have so far, the number of bikes at one of the locations can go negative, and the number of bikes at the other location can exceed the actual number of bikes in the system.
If you run this simulation a few times, it happens often.
End of explanation
"""
def bike_to_wellesley(state):
"""Move one bike from Olin to Wellesley.
state: bikeshare State object
"""
if state.olin == 0:
return
state.olin -= 1
state.wellesley += 1
def bike_to_olin(state):
"""Move one bike from Wellesley to Olin.
state: bikeshare State object
"""
if state.wellesley == 0:
return
state.wellesley -= 1
state.olin += 1
"""
Explanation: We can fix this problem using the return statement to exit the function early if an update would cause negative bikes.
End of explanation
"""
bikeshare = State(olin=10, wellesley=2)
run_simulation(bikeshare, 0.4, 0.2, 60)
decorate_bikeshare()
"""
Explanation: Now if you run the simulation again, it should behave.
End of explanation
"""
x = 5
"""
Explanation: Comparison operators
The if statements in the previous section used the comparison operator ==. The other comparison operators are listed in the book.
It is easy to confuse the comparison operator == with the assignment operator =.
Remember that = creates a variable or gives an existing variable a new value.
End of explanation
"""
x == 5
"""
Explanation: Whereas == compares two values and returns True if they are equal.
End of explanation
"""
if x == 5:
print('yes, x is 5')
"""
Explanation: You can use == in an if statement.
End of explanation
"""
# If you remove the # from the if statement and run it, you'll get
# SyntaxError: invalid syntax
#if x = 5:
# print('yes, x is 5')
"""
Explanation: But if you use = in an if statement, you get an error.
End of explanation
"""
bikeshare = State(olin=10, wellesley=2,
olin_empty=0, wellesley_empty=0)
"""
Explanation: Exercise: Add an else clause to the if statement above, and print an appropriate message.
Replace the == operator with one or two of the other comparison operators, and confirm they do what you expect.
Metrics
Now that we have a working simulation, we'll use it to evaluate alternative designs and see how good or bad they are. The metric we'll use is the number of customers who arrive and find no bikes available, which might indicate a design problem.
First we'll make a new State object that creates and initializes additional state variables to keep track of the metrics.
End of explanation
"""
def bike_to_wellesley(state):
"""Move one bike from Olin to Wellesley.
state: bikeshare State object
"""
if state.olin == 0:
state.olin_empty += 1
return
state.olin -= 1
state.wellesley += 1
def bike_to_olin(state):
"""Move one bike from Wellesley to Olin.
state: bikeshare State object
"""
if state.wellesley == 0:
state.wellesley_empty += 1
return
state.wellesley -= 1
state.olin += 1
"""
Explanation: Next we need versions of bike_to_wellesley and bike_to_olin that update the metrics.
End of explanation
"""
run_simulation(bikeshare, 0.4, 0.2, 60)
decorate_bikeshare()
"""
Explanation: Now when we run a simulation, it keeps track of unhappy customers.
End of explanation
"""
bikeshare.olin_empty
bikeshare.wellesley_empty
"""
Explanation: After the simulation, we can print the number of unhappy customers at each location.
End of explanation
"""
bikeshare = State(olin=10, wellesley=2,
olin_empty=0, wellesley_empty=0,
clock=0)
# Solution
def step(state, p1, p2):
"""Simulate one minute of time.
state: bikeshare State object
p1: probability of an Olin->Wellesley customer arrival
p2: probability of a Wellesley->Olin customer arrival
"""
state.clock += 1
if flip(p1):
bike_to_wellesley(state)
if flip(p2):
bike_to_olin(state)
# Solution
run_simulation(bikeshare, 0.4, 0.2, 60)
decorate_bikeshare()
# Solution
bikeshare
"""
Explanation: Exercises
Exercise: As another metric, we might be interested in the time until the first customer arrives and doesn't find a bike. To make that work, we have to add a "clock" to keep track of how many time steps have elapsed:
Create a new State object with an additional state variable, clock, initialized to 0.
Write a modified version of step that adds one to the clock each time it is invoked.
Test your code by running the simulation and check the value of clock at the end.
End of explanation
"""
# Solution
bikeshare = State(olin=10, wellesley=2,
olin_empty=0, wellesley_empty=0,
clock=0, t_first_empty=-1)
# Solution
def step(state, p1, p2):
"""Simulate one minute of time.
state: bikeshare State object
p1: probability of an Olin->Wellesley customer arrival
p2: probability of a Wellesley->Olin customer arrival
"""
state.clock += 1
if flip(p1):
bike_to_wellesley(state)
if flip(p2):
bike_to_olin(state)
if state.t_first_empty != -1:
return
if state.olin_empty + state.wellesley_empty > 0:
state.t_first_empty = state.clock
# Solution
run_simulation(bikeshare, 0.4, 0.2, 60)
decorate_bikeshare()
# Solution
bikeshare
"""
Explanation: Exercise: Continuing the previous exercise, let's record the time when the first customer arrives and doesn't find a bike.
Create a new State object with an additional state variable, t_first_empty, initialized to -1 as a special value to indicate that it has not been set.
Write a modified version of step that checks whetherolin_empty and wellesley_empty are 0. If not, it should set t_first_empty to clock (but only if t_first_empty has not already been set).
Test your code by running the simulation and printing the values of olin_empty, wellesley_empty, and t_first_empty at the end.
End of explanation
"""
|
darkomen/TFG | medidas/04082015/.ipynb_checkpoints/Untitled-checkpoint.ipynb | cc0-1.0 | #Importamos las librerías utilizadas
import numpy as np
import pandas as pd
import seaborn as sns
# Show the version used of each library
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))
# Open the CSV file with the sample data
datos = pd.read_csv('841551.CSV')
%pylab inline
# Show a summary of the data obtained
datos.describe()
#datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']]
# Store in a list the file columns we will work with
columns = ['Diametro X', 'Diametro Y', 'RPM TRAC']
# Plot the information obtained from the test in several graphs
datos[columns].plot(subplots=True, figsize=(20,20))
"""
Explanation: Analysis of the obtained data
Using IPython for the analysis and display of the data obtained during production. The data analyzed are from bq filament on 20 July 2015
End of explanation
"""
datos.ix[:, "Diametro X [mm]":"Diametro Y [mm]"].plot(figsize=(16,3))
datos.ix[:, "Diametro X [mm]":"Diametro Y [mm]"].boxplot(return_type='axes')
"""
Explanation: We plot both diameters on the same graph
End of explanation
"""
datos[columns].rolling(50).mean().plot(subplots=True, figsize=(12,12))
"""
Explanation: We plot the rolling mean of the samples
End of explanation
"""
plt.scatter(x=datos['Diametro X [mm]'], y=datos['Diametro Y [mm]'], marker='.')
"""
Explanation: Comparison of Diameter X against Diameter Y to see the filament ratio
End of explanation
"""
datos_filtrados = datos[(datos['Diametro X [mm]'] >= 0.9) & (datos['Diametro Y [mm]'] >= 0.9)]
"""
Explanation: Data filtering
Samples with $d_x < 0.9$ or $d_y < 0.9$ are assumed to be sensor errors, so we filter them out of the measurements.
End of explanation
"""
plt.scatter(x=datos_filtrados['Diametro X [mm]'], y=datos_filtrados['Diametro Y [mm]'], marker='.')
"""
Explanation: Plot of X/Y
End of explanation
"""
ratio = datos_filtrados['Diametro X [mm]']/datos_filtrados['Diametro Y [mm]']
ratio.describe()
rolling_mean = ratio.rolling(50).mean()
rolling_std = ratio.rolling(50).std()
rolling_mean.plot(figsize=(12,6))
# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)
ratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))
"""
Explanation: Analizamos datos del ratio
End of explanation
"""
Th_u = 1.85
Th_d = 1.65
data_violations = datos[(datos['Diametro X [mm]'] > Th_u) | (datos['Diametro X [mm]'] < Th_d) |
(datos['Diametro Y [mm]'] > Th_u) | (datos['Diametro Y [mm]'] < Th_d)]
data_violations.describe()
data_violations.plot(subplots=True, figsize=(12,12))
"""
Explanation: Límites de calidad
Calculamos el número de veces que traspasamos unos límites de calidad.
$Th^+ = 1.85$ and $Th^- = 1.65$
End of explanation
"""
|
tpin3694/tpin3694.github.io | python/functions_vs_generators.ipynb | mit | # Create a function that
def function(names):
# For each name in a list of names
for name in names:
# Returns the name
return name
# Call the function and store the result
students = function(['Abe', 'Bob', 'Christina', 'Derek', 'Eleanor'])
# Show what the function returned
students
"""
Explanation: Title: Functions Vs. Generators
Slug: functions_vs_generators
Summary: Functions Vs. Generators in Python.
Date: 2016-01-23 12:00
Category: Python
Tags: Basics
Authors: Chris Albon
Interesting in learning more? Check out Fluent Python
Create A Function
End of explanation
"""
# Create a generator that
def generator(names):
# For each name in a list of names
for name in names:
# Yields a generator object
yield name
# Same as above, create a variable for the generator
students = generator(['Abe', 'Bob', 'Christina', 'Derek', 'Eleanor'])
"""
Explanation: Now we have a problem: we only got back the name of the first student. Why? Because the function hit return on the first pass through the for name in names loop, ending the iteration immediately!
Create A Generator
A generator looks like a function, but instead of returning a value it returns an iterator. The generator below is exactly the same as the function above except that I have replaced return with yield (the presence of yield is what makes a function a generator function rather than a regular function).
End of explanation
"""
# Run the generator
students
"""
Explanation: Everything has been the same so far, but now things get interesting. Above, when students came from the regular function, it held a single name. Now that students refers to a generator, it holds a generator object of names!
End of explanation
"""
# Return the next student
next(students)
# Return the next student
next(students)
# Return the next student
next(students)
"""
Explanation: What can we do with a generator object? A lot! As a generator, students will yield each student in the list of students, one at a time:
End of explanation
"""
# List all remaining students in the generator
list(students)
"""
Explanation: It is interesting to note that if we use list(students) we can see all the students still remaining in the generator object's iteration:
End of explanation
"""
|
MedievalSure/ToStudy | notebook/02_01_1DConvection.ipynb | mit | import numpy
from matplotlib import pyplot
%matplotlib inline
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
"""
Explanation: Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2014 L.A. Barba, G.F. Forsyth, C.D. Cooper. Based on CFD Python, (c)2013 L.A. Barba, also under CC-BY.
Space & Time
Introduction to numerical solution of PDEs
Welcome to Space and Time: Introduction to finite-difference solutions of PDEs, the second module of "Practical Numerical Methods with Python".
In the first module, we looked into numerical integration methods for the solution of ordinary differential equations (ODEs), using the phugoid model of glider flight as a motivation. In this module, we will study the numerical solution of partial differential equations (PDEs), where the unknown is a multi-variate function. The problem could depend on time, $t$, and one spatial dimension $x$ (or more), which means we need to build a discretization grid with each independent variable.
We will start our discussion of numerical PDEs with 1-D linear and non-linear convection equations, the 1-D diffusion equation, and 1-D Burgers' equation. We hope you will enjoy them!
1D linear convection
The one-dimensional linear convection equation is the simplest, most basic model that can be used to learn something about numerical solution of PDEs. It's surprising that this little equation can teach us so much! Here it is:
\begin{equation}\frac{\partial u}{\partial t} + c \frac{\partial u}{\partial x} = 0\end{equation}
The equation represents a wave propagating with speed $c$ in the $x$ direction, without change of shape. For that reason, it's sometimes called the one-way wave equation (sometimes also the advection equation).
With an initial condition $u(x,0)=u_0(x)$, the equation has an exact solution given by:
\begin{equation}u(x,t)=u_0(x-ct).
\end{equation}
Go on: check it. Take the time and space derivative and stick them into the equation to see that it holds!
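That check can also be done numerically. The sketch below (a sine wave stands in for $u_0$, purely as an example) approximates both partial derivatives of $u(x,t)=u_0(x-ct)$ with tiny central differences and confirms that the residual of the equation is essentially zero:

```python
import numpy as np

c = 1.0
u0 = np.sin                      # any smooth initial shape will do; sine is just an example
u = lambda x, t: u0(x - c * t)   # the proposed exact solution

# Approximate the partial derivatives with tiny central differences
h = 1e-6
x, t = 0.7, 0.3
du_dt = (u(x, t + h) - u(x, t - h)) / (2 * h)
du_dx = (u(x + h, t) - u(x - h, t)) / (2 * h)

residual = du_dt + c * du_dx     # should vanish if u solves the convection equation
print(abs(residual))
```

Any other smooth choice of $u_0$ and any point $(x,t)$ gives the same result, which is exactly what "exact solution" means.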
Look at the exact solution for a moment ... we know two things about it:
its shape does not change, being always the same as the initial wave, $u_0$, only shifted in the $x$-direction; and
it's constant along so-called characteristic curves, $x-ct=$constant. This means that for any point in space and time, you can move back along the characteristic curve to $t=0$ to know the value of the solution.
Characteristic curves for positive wave speed.
Why do we call the equations linear? PDEs can be either linear or non-linear. In a linear equation, the unknown function $u$ and its derivatives appear only in linear terms, in other words, there are no products, powers, or transcendental functions applied on them.
What is the most important feature of linear equations? Do you remember? In case you forgot: solutions can be superposed to generate new solutions that still satisfy the original equation. This is super useful!
Finite-differences
In the previous lessons, we discretized time derivatives; now we have derivatives in both space and time, so we need to discretize with respect to both these variables.
Imagine a space-time plot, where the coordinates in the vertical direction represent advancing in time—for example, from $t^n$ to $t^{n+1}$—and the coordinates in the horizontal direction move in space: consecutive points are $x_{i-1}$, $x_i$, and $x_{i+1}$. This creates a grid where a point has both a temporal and spatial index. Here is a graphical representation of the space-time grid:
\begin{matrix}
t^{n+1} & \rightarrow & \bullet && \bullet && \bullet \\
t^n & \rightarrow & \bullet && \bullet && \bullet \\
& & x_{i-1} && x_i && x_{i+1}
\end{matrix}
For the numerical solution of $u(x,t)$, we'll use subscripts to denote the spatial position, like $u_i$, and superscripts to denote the temporal instant, like $u^n$. We would then label the solution at the top-middle point in the grid above as follows:
$u^{n+1}_{i}$.
Each grid point below has an index $i$, corresponding to the spatial position and increasing to the right, and an index $n$, corresponding to the time instant and increasing upwards. A small grid segment would have the following values of the numerical solution at each point:
\begin{matrix}
& &\bullet & & \bullet & & \bullet \\
& &u^{n+1}_{i-1} & & u^{n+1}_i & & u^{n+1}_{i+1} \\
& &\bullet & & \bullet & & \bullet \\
& &u^n_{i-1} & & u^n_i & & u^n_{i+1} \\
& &\bullet & & \bullet & & \bullet \\
& &u^{n-1}_{i-1} & & u^{n-1}_i & & u^{n-1}_{i+1}
\end{matrix}
Another way to explain our discretization grid is to say that it is built with constant steps in time and space, $\Delta t$ and $\Delta x$, as follows:
\begin{eqnarray}
x_i &=& i\, \Delta x \quad \text{and} \quad t^n= n\, \Delta t \nonumber \\
u_i^n &=& u(i\, \Delta x, n\, \Delta t)
\end{eqnarray}
Discretizing our model equation
Let's see how to discretize the 1-D linear convection equation in both space and time. By definition, the partial derivative with respect to time changes only with time and not with space; its discretized form changes only the $n$ indices. Similarly, the partial derivative with respect to $x$ changes with space, not time, and only the $i$ indices are affected.
We'll discretize the spatial coordinate $x$ into points indexed from $i=0$ to $N$, and then step in discrete time intervals of size $\Delta t$.
From the definition of a derivative (and simply removing the limit), we know that for $\Delta x$ sufficiently small:
\begin{equation}\frac{\partial u}{\partial x}\approx \frac{u(x+\Delta x)-u(x)}{\Delta x}\end{equation}
This formula could be applied at any point $x_i$. But note that it's not the only way that we can estimate the derivative. The geometrical interpretation of the first derivative $\partial u/ \partial x$ at any point is that it represents the slope of the tangent to the curve $u(x)$. In the sketch below, we show a slope line at $x_i$ and mark it as "exact." If the formula written above is applied at $x_i$, it approximates the derivative using the next spatial grid point: it is then called a forward difference formula.
But as shown in the sketch below, we could also estimate the spatial derivative using the point behind $x_i$, in which case it is called a backward difference. We could even use the two points on each side of $x_i$, and obtain what's called a central difference (but in that case the denominator would be $2\Delta x$).
Three finite-difference approximations at $x_i$.
We have three possible ways to represent a discrete form of $\partial u/ \partial x$:
Forward difference: uses $x_i$ and $x_i + \Delta x$,
Backward difference: uses $x_i$ and $x_i- \Delta x$,
Central difference: uses two points on either side of $x_i$.
The sketch above also suggests that some finite-difference formulas might be better than others: it looks like the central difference approximation is closer to the slope of the "exact" derivative. Curious if this is just an effect of our exaggerated picture? We'll show you later how to make this observation rigorous!
The three formulas are:
\begin{eqnarray}
\frac{\partial u}{\partial x} & \approx & \frac{u(x_{i+1})-u(x_i)}{\Delta x} \quad\text{Forward}\\
\frac{\partial u}{\partial x} & \approx & \frac{u(x_i)-u(x_{i-1})}{\Delta x} \quad\text{Backward}\\
\frac{\partial u}{\partial x} & \approx & \frac{u(x_{i+1})-u(x_{i-1})}{2\Delta x} \quad\text{Central}
\end{eqnarray}
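To make the comparison concrete, here is a small sketch (not part of the lesson's code) that evaluates all three formulas for $u(x)=\sin x$, whose exact derivative is $\cos x$:

```python
import numpy as np

f = np.sin           # example function with known derivative cos(x)
x, dx = 1.0, 0.1

forward  = (f(x + dx) - f(x)) / dx
backward = (f(x) - f(x - dx)) / dx
central  = (f(x + dx) - f(x - dx)) / (2 * dx)
exact = np.cos(x)

for name, approx in [('forward', forward), ('backward', backward), ('central', central)]:
    print('%-8s error: %.2e' % (name, abs(approx - exact)))
```

At the same $\Delta x$, the central difference error comes out much smaller than either one-sided formula, which is the observation we'll make rigorous shortly.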
Euler's method is equivalent to using a forward-difference scheme for the time derivative. Let's stick with that, and choose the backward-difference scheme for the space derivative. Our discrete equation is then:
\begin{equation}\frac{u_i^{n+1}-u_i^n}{\Delta t} + c \frac{u_i^n - u_{i-1}^n}{\Delta x} = 0, \end{equation}
where $n$ and $n+1$ are two consecutive steps in time, while $i-1$ and $i$ are two neighboring points of the discretized $x$ coordinate. With given initial conditions, the only unknown in this discretization is $u_i^{n+1}$. We solve for this unknown to get an equation that lets us step in time, as follows:
\begin{equation}u_i^{n+1} = u_i^n - c \frac{\Delta t}{\Delta x}(u_i^n-u_{i-1}^n)\end{equation}
We like to make drawings of a grid segment, showing the grid points that influence our numerical solution. This is called a stencil. Below is the stencil for solving our model equation with the finite-difference formula we wrote above.
Stencil for the "forward-time/backward-space" scheme.
And compute!
Alright. Let's get a little Python on the road. First: we need to load our array and plotting libraries, as usual. And if you noticed in the Bonus! notebook for Module 1, we taught you a neat trick to set some global plotting parameters with the rcParams module. We like to do that.
End of explanation
"""
nx = 41 # try changing this number from 41 to 81 and Run All ... what happens?
dx = 2/(nx-1)
nt = 25
dt = .02
c = 1 #assume wavespeed of c = 1
x = numpy.linspace(0,2,nx)
"""
Explanation: As a first exercise, we'll solve the 1D linear convection equation with a square wave initial condition, defined as follows:
\begin{equation}
u(x,0)=\begin{cases}2 & \text{where } 0.5\leq x \leq 1,\\
1 & \text{everywhere else in } (0, 2)
\end{cases}
\end{equation}
We also need a boundary condition on $x$: let $u=1$ at $x=0$. Our spatial domain for the numerical solution will only cover the range $x\in (0, 2)$.
Square wave initial condition.
Now let's define a few variables; we want to make an evenly spaced grid of points within our spatial domain. In the code below, we define a variable called nx that will be the number of spatial grid points, and a variable dx that will be the distance between any pair of adjacent grid points. We also can define a step in time, dt, a number of steps, nt, and a value for the wave speed: we like to keep things simple and make $c=1$.
End of explanation
"""
u = numpy.ones(nx) #numpy function ones()
lbound = numpy.where(x >= 0.5)
ubound = numpy.where(x <= 1)
print(lbound)
print(ubound)
"""
Explanation: We also need to set up our initial conditions. Here, we use the NumPy function ones() defining an array which is nx elements long with every value equal to $1$. How useful! We then change a slice of that array to the value $u=2$, to get the square wave, and we print out the initial array just to admire it. But which values should we change? The problem states that we need to change the indices of u such that the square wave begins at $x = 0.5$ and ends at $x = 1$.
We can use the numpy.where function to return a list of indices where the vector $x$ meets (or doesn't meet) some condition.
End of explanation
"""
bounds = numpy.intersect1d(lbound, ubound)
u[bounds]=2 #setting u = 2 between 0.5 and 1 as per our I.C.s
print(u)
"""
Explanation: That leaves us with two vectors. lbound, which has the indices for $x \geq .5$ and 'ubound', which has the indices for $x \leq 1$. To combine these two, we can use an intersection, with numpy.intersect1d.
End of explanation
"""
pyplot.plot(x, u, color='#003366', ls='--', lw=3)
pyplot.ylim(0,2.5);
"""
Explanation: Remember that Python can also combine commands; we could have instead written
Python
u[numpy.intersect1d(numpy.where(x >= 0.5), numpy.where(x <= 1))] = 2
but that can be a little hard to read.
Now let's take a look at those initial conditions we've built with a handy plot.
End of explanation
"""
for n in range(1,nt):
un = u.copy()
for i in range(1,nx):
u[i] = un[i]-c*dt/dx*(un[i]-un[i-1])
"""
Explanation: It does look pretty close to what we expected. But it looks like the sides of the square wave are not perfectly vertical. Is that right? Think for a bit.
Now it's time to write some code for the discrete form of the convection equation using our chosen finite-difference scheme.
For every element of our array u, we need to perform the operation:
$$u_i^{n+1} = u_i^n - c \frac{\Delta t}{\Delta x}(u_i^n-u_{i-1}^n)$$
We'll store the result in a new (temporary) array un, which will be the solution $u$ for the next time-step. We will repeat this operation for as many time-steps as we specify and then we can see how far the wave has traveled.
We first initialize the placeholder array un to hold the values we calculate for the $n+1$ timestep, using once again the NumPy function ones().
Then, we may think we have two iterative operations: one in space and one in time (we'll learn differently later), so we may start by nesting a spatial loop inside the time loop, as shown below. You see that the code for the finite-difference scheme is a direct expression of the discrete equation:
End of explanation
"""
pyplot.plot(x, u, color='#003366', ls='--', lw=3)
pyplot.ylim(0,2.5);
"""
Explanation: Note—We will learn later that the code as written above is quite inefficient, and there are better ways to write this, Python-style. But let's carry on.
Now let's inspect our solution array after advancing in time with a line plot.
End of explanation
"""
##problem parameters
nx = 41
dx = 2/(nx-1)
nt = 10
dt = .02
##initial conditions
u = numpy.ones(nx)
u[numpy.intersect1d(lbound, ubound)]=2
"""
Explanation: That's funny. Our square wave has definitely moved to the right, but it's no longer in the shape of a top-hat. What's going on?
Dig deeper
The solution differs from the expected square wave because the discretized equation is an approximation of the continuous differential equation that we want to solve. There are errors: we knew that. But the modified shape of the initial wave is something curious. Maybe it can be improved by making the grid spacing finer. Why don't you try it? Does it help?
Spatial truncation error
Recall the finite-difference approximation we are using for the spatial derivative:
\begin{equation}\frac{\partial u}{\partial x}\approx \frac{u(x+\Delta x)-u(x)}{\Delta x}\end{equation}
We obtain it by using the definition of the derivative at a point, and simply removing the limit, in the assumption that $\Delta x$ is very small. But we already learned with Euler's method that this introduces an error, called the truncation error.
Using a Taylor series expansion for the spatial terms now, we see that the backward-difference scheme produces a first-order method, in space.
\begin{equation}
\frac{\partial u}{\partial x}(x_i) = \frac{u(x_i)-u(x_{i-1})}{\Delta x} + \frac{\Delta x}{2} \frac{\partial^2 u}{\partial x^2}(x_i) - \frac{\Delta x^2}{6} \frac{\partial^3 u}{\partial x^3}(x_i)+ \cdots
\end{equation}
The dominant term that is neglected in the finite-difference approximation is of $\mathcal{O}(\Delta x)$. We also see that the approximation converges to the exact derivative as $\Delta x \rightarrow 0$. That's good news!
In summary, the chosen "forward-time/backward-space" difference scheme is first-order in both space and time: the truncation errors are $\mathcal{O}(\Delta t, \Delta x)$. We'll come back to this!
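We can watch that first-order behavior directly with a quick sketch (again using $\sin x$ as a stand-in): halving $\Delta x$ should roughly halve the backward-difference error.

```python
import numpy as np

f, x = np.sin, 1.0
exact = np.cos(x)

errors = []
for dx in [0.1, 0.05, 0.025]:
    err = abs((f(x) - f(x - dx)) / dx - exact)
    errors.append(err)
    print('dx = %.3f  error = %.2e' % (dx, err))

# For a first-order scheme the ratio of successive errors approaches 2
print('ratios:', [errors[i] / errors[i + 1] for i in range(2)])
```

The ratios come out close to 2, confirming the $\mathcal{O}(\Delta x)$ estimate from the Taylor expansion.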
Non-linear convection
Let's move on to the non-linear convection equation, using the same methods as before. The 1-D convection equation is:
\begin{equation}\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = 0\end{equation}
The only difference with the linear case is that we've replaced the constant wave speed $c$ by the variable speed $u$. The equation is non-linear because now we have a product of the solution and one of its derivatives: the product $u\,\partial u/\partial x$. This changes everything!
We're going to use the same discretization as for linear convection: forward difference in time and backward difference in space. Here is the discretized equation:
\begin{equation}\frac{u_i^{n+1}-u_i^n}{\Delta t} + u_i^n \frac{u_i^n-u_{i-1}^n}{\Delta x} = 0\end{equation}
Solving for the only unknown term, $u_i^{n+1}$, gives an equation that can be used to advance in time:
\begin{equation}u_i^{n+1} = u_i^n - u_i^n \frac{\Delta t}{\Delta x} (u_i^n - u_{i-1}^n)\end{equation}
There is very little that needs to change from the code written so far. In fact, we'll even use the same square-wave initial condition. But let's re-initialize the variable u with the initial values, and re-enter the numerical parameters here, for convenience (we no longer need $c$, though).
End of explanation
"""
pyplot.plot(x, u, color='#003366', ls='--', lw=3)
pyplot.ylim(0,2.5);
"""
Explanation: How does it look?
End of explanation
"""
for n in range(1, nt):
un = u.copy()
u[1:] = un[1:]-un[1:]*dt/dx*(un[1:]-un[0:-1])
u[0] = 1.0
pyplot.plot(x, u, color='#003366', ls='--', lw=3)
pyplot.ylim(0,2.5);
"""
Explanation: Changing just one line of code in the solution of linear convection, we are able to now get the non-linear solution: the line that corresponds to the discrete equation now has un[i] in the place where before we just had c. So you could write something like:
Python
for n in range(1,nt):
un = u.copy()
for i in range(1,nx):
u[i] = un[i]-un[i]*dt/dx*(un[i]-un[i-1])
We're going to be more clever than that and use NumPy to update all values of the spatial grid in one fell swoop. We don't really need to write a line of code that gets executed for each value of $u$ on the spatial grid. Python can update them all at once! Study the code below, and compare it with the one above. Here is a helpful sketch, to illustrate the array operation—also called a "vectorized" operation—for $u_i-u_{i-1}$.
<br>
Sketch to explain vectorized stencil operation. Adapted from "Indices point between elements" by Nelson Elhage.
End of explanation
"""
from IPython.core.display import HTML
css_file = 'numericalmoocstyle.css'
HTML(open(css_file, "r").read())
"""
Explanation: Hmm. That's quite interesting: like in the linear case, we see that we have lost the sharp sides of our initial square wave, but there's more. Now, the wave has also lost symmetry! It seems to be lagging on the rear side, while the front of the wave is steepening. Is this another form of numerical error, do you ask? No! It's physics!
Dig deeper
Think about the effect of having replaced the constant wave speed $c$ by the variable speed given by the solution $u$. It means that different parts of the wave move at different speeds. Make a sketch of an initial wave and think about where the speed is higher and where it is lower ...
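As a hint, here is a tiny numeric illustration (the values 1, 1.5, and 2 are chosen to match our square wave): under pure advection, a point carrying the value $u$ travels a distance $u\,t$, so the top of the wave outruns its base.

```python
t = 0.5  # an arbitrary elapsed time, purely illustrative
speeds = [1.0, 1.5, 2.0]                  # base, side, and top of the square wave
distances = [u_val * t for u_val in speeds]
for u_val, d in zip(speeds, distances):
    print('u = %.1f travels %.2f' % (u_val, d))
```

The higher parts of the wave catch up to the lower parts ahead of them, which is exactly the steepening we observed in the plot.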
References
Elhage, Nelson (2015), "Indices point between elements"
The cell below loads the style of the notebook.
End of explanation
"""
|
amandalouparker/graphviz-sitemaps | Notebook/sitemap_visualization_tool.ipynb | mpl-2.0 | url = 'https://www.sportchek.ca/sitemap.xml'
page = requests.get(url)
print('Loaded page with: %s' % page)
sitemap_index = BeautifulSoup(page.content, 'html.parser')
print('Created %s object' % type(sitemap_index))
"""
Explanation: How to visualize an XML sitemap using Python
<img src="static/sitemap_graph_2_layer.png" width="700">
A rich sitemap might contain page descriptions and modification dates along with image and video metadata, but the basic purpose of a sitemap is to provide a list of the pages on a domain that are accessible to users and web crawlers. In this post, we'll use Python and a toolkit of libraries to parse, categorize, and visualize an XML sitemap. This will involve:
- extracting the page URLs
- categorizing URLs by page type
- plotting a sitemap graph tree
The scripts in this post are compatible with Python 2 and 3 and the library dependencies are Requests and BeautifulSoup4 for extracting the URLs, Pandas for categorization, and Graphviz for creating the visual sitemap. Once you have Python, these libraries can most likely be installed on any operating system with the following terminal commands:
pip install requests
pip install beautifulsoup4
pip install pandas
The Graphviz library is more difficult to install. On Mac it can be done with the help of homebrew:
brew install graphviz
pip install graphviz
For other operating systems or alternate methods, check out the installation instructions in the documentation.
Extracting URLs
We'll use the www.sportchek.ca sitemap as an example. It is hosted on their domain and open to the public. Like most large sites, the entire sitemap is split across multiple XML files. These are indexed at the /sitemap.xml page.
We start by opening the url in Python using requests and then instantiate a "soup" object containing the page content.
End of explanation
"""
sitemap_index.findAll('loc')
urls = [element.text for element in sitemap_index.findAll('loc')]
urls
"""
Explanation: Next we can pull the XML sitemap links, which live within the <loc> tags.
End of explanation
"""
%%time
def extract_links(url):
''' Open an XML sitemap and find content wrapped in <loc> tags. '''
page = requests.get(url)
soup = BeautifulSoup(page.content, 'html.parser')
links = [element.text for element in soup.findAll('loc')]
return links
sitemap_urls = []
for url in urls:
links = extract_links(url)
sitemap_urls += links
'Found {:,} URLs in the sitemap'.format(len(sitemap_urls))
"""
Explanation: With some investigation of the XML format for each file above, we again see that URLs can be identified by searching for <loc> tags. These URLs can be extracted the same as the XML links were from the index. We loop over the XML documents, appending all sitemap URLs to a list.
End of explanation
"""
with open('sitemap_urls.dat', 'w') as f:
for url in sitemap_urls:
f.write(url + '\n')
"""
Explanation: Let's write these to a file that can be opened in Excel.
End of explanation
"""
sitemap_urls = open('sitemap_urls.dat', 'r').read().splitlines()
print('Loaded {:,} URLs'.format(len(sitemap_urls)))
"""
Explanation: Categorization
Let's start by loading in the URLs we wrote to a file.
End of explanation
"""
def peel_layers(urls, layers=3):
''' Builds a dataframe containing all unique page identifiers up
to a specified depth and counting the number of sub-pages for each.
Prints results to a CSV file.
urls : list
List of page URLs.
layers : int
Depth of automated URL search. Large values for this parameter
may cause long runtimes depending on the number of URLs.
'''
# Store results in a dataframe
sitemap_layers = pd.DataFrame()
# Get base levels
bases = pd.Series([url.split('//')[-1].split('/')[0] for url in urls])
sitemap_layers[0] = bases
# Get specified number of layers
for layer in range(1, layers+1):
page_layer = []
for url, base in zip(urls, bases):
try:
page_layer.append(url.split(base)[-1].split('/')[layer])
except:
# There is nothing that deep!
page_layer.append('')
sitemap_layers[layer] = page_layer
# Count and drop duplicate rows + sort
sitemap_layers = sitemap_layers.groupby(list(range(0, layers+1)))[0].count()\
.rename('counts').reset_index()\
.sort_values('counts', ascending=False)\
.sort_values(list(range(0, layers)), ascending=True)\
.reset_index(drop=True)
# Convert column names to string types and export
sitemap_layers.columns = [str(col) for col in sitemap_layers.columns]
sitemap_layers.to_csv('sitemap_layers.csv', index=False)
# Return the dataframe
return sitemap_layers
"""
Explanation: Site-specific categorization such as identifying display listing pages and product pages can be done by applying filters over the URL list. Python is great for this because filters can be very detailed and chained together, plus your results can be reproduced by simply running the script!
On the other hand, we could take a different approach and - instead of filtering for specific URLs - apply an automated algorithm to peel back our sites layers and find the general structure.
End of explanation
"""
sitemap_layers = peel_layers(urls=sitemap_urls, layers=3)
"""
Explanation: The peel_layers function also counts the number of pages for each layer. These can be accessed by looking at the output dataframe in Python or opening the output file sitemap_layers.csv in Excel. Let's do this for three layers.
End of explanation
"""
counts = 0
for row in sitemap_layers.values:
# Check if the word "hockey" is contained in the 3rd layer
if 'hockey' in row[3]:
# Add the page counts value from the outer right column
counts += row[-1]
print('%d total hockey pages' % counts)
"""
Explanation: <img src="static/sitemap_layers_alt.png" width="700">
At this point you may be inclined to continue with further analysis in Excel, but we'll invite you to carry on in Python.
Filtering
The peel_layers function returns a Pandas DataFrame that we stored in the variable sitemap_layers. This contains the exported .csv data as a table inside Python, and it can be filtered or otherwise modified in any way. Say, for example, we are interested in the number of pages relating to hockey. We may want to run a script like this one that searches for rows with "hockey" in the third layer:
End of explanation
"""
counts = sitemap_layers[sitemap_layers['3'].apply(
lambda string: 'hockey' in string)]\
['counts'].sum()
print('%d total hockey pages' % counts)
"""
Explanation: This could also be accomplished in a single line.
End of explanation
"""
sitemap_fltr = sitemap_layers[sitemap_layers['3'].apply(lambda string: 'hockey' in string)]
sitemap_fltr
"""
Explanation: What we do here is filter the dataframe (as seen below) and then sum the counts column.
End of explanation
"""
sitemap_fltr.to_csv('hockey_pages.csv', index=False)
"""
Explanation: This table can be saved to an Excel readable format using the to_csv function.
End of explanation
"""
sitemap_fltr = sitemap_layers[sitemap_layers['3'].apply(lambda string: 'ski' in string or\
'snowboard' in string)]
sitemap_fltr
"""
Explanation: Filtering conditions can be as specific as you desire. For example if you want to find snowboard and ski pages:
End of explanation
"""
sitemap_fltr = sitemap_layers[sitemap_layers['3'].apply(lambda string: ('ski' in string or\
'snowboard' in string)\
and 'skills-dev' not in string)]
sitemap_fltr
"""
Explanation: Oops, it looks like "skills-development" is included as it contains "ski". Let's exclude this term.
End of explanation
"""
sitemap_fltr = sitemap_layers[sitemap_layers['3'].apply(lambda string: len(string.split('-')) >= 4)]
sitemap_fltr
"""
Explanation: Other useful filtering tools are the split and len functions. For instance, we could find all the pages with at least three "-" characters in the 3rd layer.
End of explanation
"""
def make_sitemap_graph(df, layers=3, limit=50, size='8,5'):
''' Make a sitemap graph up to a specified layer depth.
df : DataFrame
The dataframe created by the peel_layers function
containing sitemap information.
layers : int
Maximum depth to plot.
limit : int
The maximum number of node edge connections. Good to set this
low for visualizing deep into site maps.
'''
# Check to make sure we are not trying to plot too many layers
if layers > len(df) - 1:
layers = len(df)-1
print('There are only %d layers available to plot, setting layers=%d'
% (layers, layers))
# Initialize graph
f = graphviz.Digraph('sitemap', filename='sitemap_graph_%d_layer' % layers)
f.body.extend(['rankdir=LR', 'size="%s"' % size])
def add_branch(f, names, vals, limit, connect_to=''):
''' Adds a set of nodes and edges to nodes on the previous layer. '''
# Get the currently existing node names
node_names = [item.split('"')[1] for item in f.body if 'label' in item]
# Only add a new branch if it will connect to a previously created node
if connect_to:
if connect_to in node_names:
for name, val in list(zip(names, vals))[:limit]:
f.node(name='%s-%s' % (connect_to, name), label=name)
f.edge(connect_to, '%s-%s' % (connect_to, name), label='{:,}'.format(val))
f.attr('node', shape='rectangle') # Plot nodes as rectangles
# Add the first layer of nodes
for name, counts in df.groupby(['0'])['counts'].sum().reset_index()\
.sort_values(['counts'], ascending=False).values:
f.node(name=name, label='{} ({:,})'.format(name, counts))
if layers == 0:
return f
f.attr('node', shape='oval') # Plot nodes as ovals
f.graph_attr.update()
# Loop over each layer adding nodes and edges to prior nodes
for i in range(1, layers+1):
cols = [str(i_) for i_ in range(i)]
for k in df[cols].drop_duplicates().values:
# Compute the mask to select correct data
mask = True
for j, ki in enumerate(k):
mask &= df[str(j)] == ki
# Select the data then count branch size, sort, and truncate
data = df[mask].groupby([str(i)])['counts'].sum()\
.reset_index().sort_values(['counts'], ascending=False)
# Add to the graph
add_branch(f,
names=data[str(i)].values,
vals=data['counts'].values,
limit=limit,
connect_to='-'.join(['%s']*i) % tuple(k))
return f
def apply_style(f, style, title=''):
''' Apply the style and add a title if desired. More styling options are
documented here: http://www.graphviz.org/doc/info/attrs.html#d:style
f : graphviz.dot.Digraph
The graph object as created by graphviz.
style : str
Available styles: 'light', 'dark'
title : str
Optional title placed at the bottom of the graph.
'''
dark_style = {
'graph': {
'label': title,
'bgcolor': '#3a3a3a',
'fontname': 'Helvetica',
'fontsize': '18',
'fontcolor': 'white',
},
'nodes': {
'style': 'filled',
'color': 'white',
'fillcolor': 'black',
'fontname': 'Helvetica',
'fontsize': '14',
'fontcolor': 'white',
},
'edges': {
'color': 'white',
'arrowhead': 'open',
'fontname': 'Helvetica',
'fontsize': '12',
'fontcolor': 'white',
}
}
light_style = {
'graph': {
'label': title,
'fontname': 'Helvetica',
'fontsize': '18',
'fontcolor': 'black',
},
'nodes': {
'style': 'filled',
'color': 'black',
'fillcolor': '#dbdddd',
'fontname': 'Helvetica',
'fontsize': '14',
'fontcolor': 'black',
},
'edges': {
'color': 'black',
'arrowhead': 'open',
'fontname': 'Helvetica',
'fontsize': '12',
'fontcolor': 'black',
}
}
if style == 'light':
apply_style = light_style
elif style == 'dark':
apply_style = dark_style
f.graph_attr = apply_style['graph']
f.node_attr = apply_style['nodes']
f.edge_attr = apply_style['edges']
return f
"""
Explanation: In this example, we split the string into a list of substrings separated by the dashes and check if the list has at least four elements.
Working with Pandas DataFrames in Python can seem very complicated - especially for those new to Python - but the rewards are great.
Visualizing sitemap
Storing data in tables is the only reasonable option, but it's not always the best way to view the data. This is especially true when sharing it with others.
The sitemap dataframe we've generated can be nicely visualized using graphviz, where paths are illustrated with nodes and edges. The nodes contain site page layers and the edges are labelled by the number of sub-pages existing within that path.
End of explanation
"""
f = make_sitemap_graph(sitemap_layers, layers=2)
f = apply_style(f, 'light', title='Sport Chek Sitemap')
f.render(cleanup=True)
f
"""
Explanation: The code that builds and exports this visualization is contained within a function called make_sitemap_graph that takes in our data and the number of layers deep we wish to see. For example we can do:
End of explanation
"""
f = make_sitemap_graph(sitemap_layers, layers=2)
f = apply_style(f, 'dark')
f.render(cleanup=True)
f
"""
Explanation: Or we can use the dark style:
End of explanation
"""
f = make_sitemap_graph(sitemap_layers, layers=3, size='35')
f = apply_style(f, 'light')
f.render(cleanup=True)
f
"""
Explanation: Setting layers=3 we see that our graph is already very large! Here we set size=35 to create a higher resolution PDF file where the details are clearly visible.
End of explanation
"""
sitemap_layers = peel_layers(urls=sitemap_urls, layers=5)
f = make_sitemap_graph(sitemap_layers, layers=5, limit=3, size='25')
f = apply_style(f, 'light')
f.render(cleanup=True)
f
"""
Explanation: Another useful feature built into the graphing script is the ability to limit branch size. This can let us create deep sitemap visualizations that don't grow out of control. For example, limiting each branch to the top three (in terms of recursive page count):
End of explanation
"""
|
ml4a/ml4a-guides | examples/info_retrieval/audio-tsne.ipynb | gpl-2.0 | %matplotlib inline
from matplotlib import pyplot as plt
import matplotlib.cm as cm
import fnmatch
import os
import numpy as np
import librosa
import matplotlib.pyplot as plt
import librosa.display
from sklearn.manifold import TSNE
import json
"""
Explanation: Audio t-SNE
This notebook will show you how to create a t-SNE plot of a group of audio clips. Along the way, we'll cover a few basic audio processing and machine learning tasks.
We will make two separate t-SNE plots. The first is clustering a group of many audio files in a single directory. The second takes only a single audio track (a song) as its input, segments it into many audio chunks (and saves them to a directory), and clusters the resulting chunks.
This notebook requires numpy, matplotlib, scikit-learn, and librosa to run. To install librosa, run the following command in the terminal:
pip install librosa
After verifying you have the required libraries, verify the following import commands work.
End of explanation
"""
path = '../data/Vintage Drum Machines'
files = []
for root, dirnames, filenames in os.walk(path):
for filename in fnmatch.filter(filenames, '*.wav'):
files.append(os.path.join(root, filename))
print("found %d .wav files in %s"%(len(files),path))
"""
Explanation: First we need to scan some directory of audio files and collect all their paths into a single list. This notebook is using a free sample pack called "Vintage drum machines" which can be downloaded, along with all the other data needed for ml4a-guides, by running the script download.sh in the data folder, or downloaded and unzipped manually from http://ivcloud.de/index.php/s/QyDXk1EDYDTVYkF.
Once you've done that, or changed the path variable to another directory of audio samples on your computer, you can proceed with the next code block to load all the filepaths into memory.
End of explanation
"""
def get_features(y, sr):
y = y[0:sr] # analyze just first second
S = librosa.feature.melspectrogram(y, sr=sr, n_mels=128)
log_S = librosa.amplitude_to_db(S, ref=np.max)
mfcc = librosa.feature.mfcc(S=log_S, n_mfcc=13)
delta_mfcc = librosa.feature.delta(mfcc, mode='nearest')
delta2_mfcc = librosa.feature.delta(mfcc, order=2, mode='nearest')
feature_vector = np.concatenate((np.mean(mfcc,1), np.mean(delta_mfcc,1), np.mean(delta2_mfcc,1)))
feature_vector = (feature_vector-np.mean(feature_vector)) / np.std(feature_vector)
return feature_vector
"""
Explanation: In the next block, we're going to create a function which extracts a feature vector from an audio file. We are using librosa, a python library for audio analysis and music information retrieval, to handle the feature extraction.
The function we're creating get_features will take a waveform y at a sample rate sr, and extract features for only the first second of the audio. It is possible to use longer samples, but because we are interested in clustering according to sonic similarity, we choose to focus on short samples with relatively homogenous content over their durations. Longer samples have different sections and would require somewhat more sophisticated feature extraction.
The feature extraction will calculate the first 13 mel-frequency cepstral coefficients of the audio file, as well as their first- and second-order derivatives, and concatenate them into a single 39-element feature vector. The feature vector is then standardized to zero mean and unit variance.
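As a quick aside, the standardization at the end of get_features can be sketched on a made-up vector (the values below are hypothetical, not real MFCC output):

```python
import numpy as np

# hypothetical stand-in for a concatenated feature vector (not real MFCC output)
v = np.array([1.0, 3.0, 5.0, 9.0])
v_std = (v - np.mean(v)) / np.std(v)
# after this step the vector has zero mean and unit variance
```

This makes feature vectors from quiet and loud clips comparable when we later measure distances between them.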
End of explanation
"""
feature_vectors = []
sound_paths = []
for i,f in enumerate(files):
if i % 100 == 0:
print("get %d of %d = %s"%(i+1, len(files), f))
try:
y, sr = librosa.load(f)
if len(y) < 2:
print("error loading %s" % f)
continue
feat = get_features(y, sr)
feature_vectors.append(feat)
sound_paths.append(f)
except:
print("error loading %s" % f)
print("calculated %d feature vectors"%len(feature_vectors))
"""
Explanation: Now we will iterate through all the files, and get their feature vectors, placing them into a new list feature_vectors. We also make a new array sound_paths to index the feature vectors to the correct paths, in case some of the files are empty or corrupted (as they are in the Vintage Drum Machines sample pack), or otherwise something goes wrong in analyzing them.
End of explanation
"""
model = TSNE(n_components=2, learning_rate=150, perplexity=30, verbose=2, angle=0.1).fit_transform(feature_vectors)
"""
Explanation: Now we can run t-SNE over the feature vectors to get a 2-dimensional embedding of our audio files. We use scikit-learn's TSNE function; later, when saving the results, we will also normalize them so that they lie between 0 and 1.
End of explanation
"""
x_axis=model[:,0]
y_axis=model[:,1]
plt.figure(figsize = (10,10))
plt.scatter(x_axis, y_axis)
plt.show()
"""
Explanation: Let's plot our t-SNE points. We can use matplotlib to quickly scatter them and see their distribution.
End of explanation
"""
tsne_path = "../data/example-audio-tSNE.json"
x_norm = (x_axis - np.min(x_axis)) / (np.max(x_axis) - np.min(x_axis))
y_norm = (y_axis - np.min(y_axis)) / (np.max(y_axis) - np.min(y_axis))
data = [{"path":os.path.abspath(f), "point":[float(x), float(y)]} for f, x, y in zip(sound_paths, x_norm, y_norm)]
with open(tsne_path, 'w') as outfile:
    json.dump(data, outfile)  # numpy values cast to float above, so no custom encoder is needed
print("saved %s to disk!" % tsne_path)
"""
Explanation: We see our t-SNE plot of our audio files, but it's not particularly interesting! Since we are dealing with audio files, there's no easy way to compare neighboring audio samples to each other. We can use some other, more interactive environment to view the results of the t-SNE. One way we can do this is by saving the results to a JSON file which stores the filepaths and t-SNE assignments of all the audio files. We can then load this JSON file in another environment.
One example of this is provided in the "AudioTSNEViewer" application in ml4a-ofx. This is an openFrameworks application which loads all the audio clips into an interactive 2d grid (using the t-SNE layout) and lets you play each sample by hovering your mouse over it.
In any case, to save the t-SNE to a JSON file, we first normalize the coordinates to between 0 and 1 and save them, along with the full filepaths.
End of explanation
"""
source_audio = '/Users/gene/Downloads/bohemianrhapsody.mp3'
"""
Explanation: Now what about if we want to do this same analysis, but instead of analyzing a directory of individual audio clips, we want to cut up a single audio file into many chunks and cluster those instead.
We can add a few extra lines of code to the above in order to do this. First let's select a piece of audio. Find any song on your computer that you want to do this analysis to and set a path to it.
End of explanation
"""
hop_length = 512
y, sr = librosa.load(source_audio)
onsets = librosa.onset.onset_detect(y=y, sr=sr, hop_length=hop_length)
"""
Explanation: What we will now do is use librosa to calculate the onsets of our audio file. Onsets are the timestamps to the beginning of discrete sonic events in our audio. We set the hop_length (number of samples for each frame) to 512.
End of explanation
"""
times = [hop_length * onset / float(sr) for onset in onsets]  # frame indices -> seconds
plt.figure(figsize=(16,4))
plt.subplot(1, 1, 1)
librosa.display.waveplot(y, sr=sr)
plt.vlines(times, -1, 1, color='r', alpha=0.9, label='Onsets')
plt.title('Wavefile with %d onsets plotted' % len(times))
"""
Explanation: How do we interpret these numbers? Our original audio, at sample rate sr, is divided into frames of 512 samples each (set by hop_length in the onset detection function). The onset numbers are indices into these frames. So for example, if the first onset is 20, this corresponds to sample 20 * 512 = 10,240. Given a sample rate of 22050, this corresponds to 10240/22050 ≈ 0.46 seconds into the track.
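The same arithmetic, written out as a short sketch (the onset index 20 is a hypothetical example, and 22050 is librosa's default sample rate):

```python
sr = 22050          # librosa's default sample rate
hop_length = 512    # samples per frame, as in the onset detection above
onset_frame = 20    # hypothetical onset index returned by onset_detect
onset_sample = onset_frame * hop_length   # frame index -> sample index
onset_time = onset_sample / float(sr)     # sample index -> seconds
```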
We can view the onsets as vertical lines on top of the waveform using matplotlib and the following code.
End of explanation
"""
# where to save our new clips to
path_save_intervals = "/Users/gene/Downloads/bohemian_segments/"
# make new directory to save them
if not os.path.isdir(path_save_intervals):
os.mkdir(path_save_intervals)
# grab each interval, extract a feature vector, and save the new clip to our above path
feature_vectors = []
for i in range(len(onsets)-1):
idx_y1 = onsets[i ] * hop_length # first sample of the interval
idx_y2 = onsets[i+1] * hop_length # last sample of the interval
y_interval = y[idx_y1:idx_y2]
features = get_features(y_interval, sr) # get feature vector for the audio clip between y1 and y2
file_path = '%s/onset_%d.wav' % (path_save_intervals, i) # where to save our new audio clip
    feature_vectors.append({"file":file_path, "features":features}) # store features together with the file path
librosa.output.write_wav(file_path, y_interval, sr) # save to disk
if i % 50 == 0:
        print("analyzed %d/%d = %s" % (i+1, len(onsets)-1, file_path))
# save results to this json file
tsne_path = "../data/example-audio-tSNE-onsets.json"
# feature_vectors has both the features and file paths in it. let's pull out just the feature vectors
features_matrix = [f["features"] for f in feature_vectors]
# calculate a t-SNE and normalize it
model = TSNE(n_components=2, learning_rate=150, perplexity=30, verbose=2, angle=0.1).fit_transform(features_matrix)
x_axis, y_axis = model[:,0], model[:,1] # normalize t-SNE
x_norm = (x_axis - np.min(x_axis)) / (np.max(x_axis) - np.min(x_axis))
y_norm = (y_axis - np.min(y_axis)) / (np.max(y_axis) - np.min(y_axis))
data = [{"path":os.path.abspath(f['file']), "point":[float(x), float(y)]} for f, x, y in zip(feature_vectors, x_norm, y_norm)]
with open(tsne_path, 'w') as outfile:
json.dump(data, outfile)
print("saved %s to disk!" % tsne_path)
"""
Explanation: Now what we will do is go through each of the detected onsets and crop the original audio to the interval from that onset until the next one. We will create a new folder (the path_save_intervals directory in the code below), into which we will save all of the individual audio clips, and extract the same feature vector we described above for each new audio sample.
End of explanation
"""
colors = cm.rainbow(np.linspace(0, 1, len(x_axis)))
plt.figure(figsize = (8,6))
plt.scatter(x_axis, y_axis, color=colors)
plt.show()
"""
Explanation: Let's plot the results on another scatter plot. One nice thing we can do is color the points according to their order in the original track.
End of explanation
"""
|
ttesileanu/twostagelearning | spiking_simulations.ipynb | mit | %matplotlib inline
import matplotlib as mpl
import matplotlib.ticker as mtick
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
plt.rc('text', usetex=True)
plt.rc('font', family='serif', serif='cm')
plt.rcParams['figure.titlesize'] = 10
plt.rcParams['axes.labelsize'] = 8
plt.rcParams['axes.titlesize'] = 10
plt.rcParams['xtick.labelsize'] = 8
plt.rcParams['ytick.labelsize'] = 8
plt.rcParams['axes.labelpad'] = 3.0
from IPython.display import display, clear_output
from ipywidgets import FloatProgress
# comment out the next line if not working on a retina-display computer
import IPython
IPython.display.set_matplotlib_formats('retina')
import numpy as np
import copy
import time
import os
import cPickle as pickle
import simulation
from basic_defs import *
from helpers import *
"""
Explanation: Testing tutor-student matching with spiking simulations
End of explanation
"""
tmax = 600.0 # duration of motor program (ms)
dt = 0.2 # simulation timestep (ms)
nsteps = int(tmax/dt)
times = np.arange(0, tmax, dt)
# add some noise, but keep things reproducible
np.random.seed(0)
target_complex = 100.0*np.vstack((
np.convolve(np.sin(times/100 + 0.1*np.random.randn(len(times)))**6 +
np.cos(times/150 + 0.2*np.random.randn(len(times)) + np.random.randn())**4,
np.exp(-0.5*np.linspace(-3.0, 3.0, 200)**2)/np.sqrt(2*np.pi)/80, mode='same'),
np.convolve(np.sin(times/110 + 0.15*np.random.randn(len(times)) + np.pi/3)**6 +
np.cos(times/100 + 0.2*np.random.randn(len(times)) + np.random.randn())**4,
np.exp(-0.5*np.linspace(-3.0, 3.0, 200)**2)/np.sqrt(2*np.pi)/80, mode='same'),
))
# or start with something simple: constant target
target_const = np.vstack((70.0*np.ones(len(times)), 50.0*np.ones(len(times))))
# or something simple but not trivial: steps
target_piece = np.vstack((
    np.hstack((20.0*np.ones(len(times)//2), 100.0*np.ones(len(times)//2))),
    np.hstack((60.0*np.ones(len(times)//2), 30.0*np.ones(len(times)//2)))))
targets = {'complex': target_complex, 'piece': target_piece, 'constant': target_const}
"""
Explanation: Define target motor programs
End of explanation
"""
# choose one target
target_choice = 'complex'
#target_choice = 'constant'
target = copy.copy(targets[target_choice])
# make sure the target smoothly goes to zero at the edges
# this is to match the spiking simulation, which needs some time to ramp
# up in the beginning and time to ramp down at the end
edge_duration = 100.0 # ms
edge_len = int(edge_duration/dt)
tapering_x = np.linspace(0.0, 1.0, edge_len, endpoint=False)
tapering = (3 - 2*tapering_x)*tapering_x**2
target[:, :edge_len] *= tapering
target[:, -edge_len:] *= tapering[::-1]
"""
Explanation: Choose target
End of explanation
"""
class ProgressBar(object):
""" A callable that displays a widget progress bar and can also make a plot showing
the learning trace.
"""
def __init__(self, simulator, show_graph=True, graph_step=20, max_error=1000):
self.t0 = None
self.float = None
self.show_graph = show_graph
self.graph_step = graph_step
self.simulator = simulator
self.max_error = max_error
self.print_last = True
def __call__(self, i, n):
t = time.time()
if self.t0 is None:
self.t0 = t
t_diff = t - self.t0
current_res = self.simulator._current_res
text = 'step: {} ; time elapsed: {:.1f}s'.format(i, t_diff)
if len(current_res) > 0:
last_error = current_res[-1]['average_error']
if last_error <= self.max_error:
text += ' ; last error: {:.2f}'.format(last_error)
else:
text += ' ; last error: very large'
if self.float is None:
self.float = FloatProgress(min=0, max=100)
display(self.float)
else:
percentage = min(round(i*100.0/n), 100)
self.float.value = percentage
self.float.description = text
if self.show_graph and (i % self.graph_step == 0 or i == n):
crt_res = [_['average_error'] for _ in current_res]
plt.plot(range(len(crt_res)), crt_res, '.-k')
plt.xlim(0, n-1)
plt.xlabel('repetition')
plt.ylabel('error')
if len(crt_res) > 0:
if i < 100:
plt.ylim(np.min(crt_res) - 0.1, np.max(crt_res) + 0.1)
else:
plt.ylim(0, np.max(crt_res))
else:
plt.ylim(0, 1)
clear_output(wait=True)
if i < n:
display(plt.gcf())
if i == n:
self.float.close()
if self.print_last:
print(text)
def tracker_generator(simulator, i, n):
""" Generate some trackers. """
res = {}
if i % 10 == 0:
res['tutor'] = simulation.StateMonitor(simulator.tutor, 'out')
res['student_spike'] = simulation.EventMonitor(simulator.student)
res['motor'] = simulation.StateMonitor(simulator.motor, 'out')
return res
def snapshot_generator_pre(simulator, i, n):
""" Generate some pre-run snapshots. """
res = {}
if i % 50 == 0:
res['weights'] = np.copy(simulator.conductor_synapses.W)
return res
"""
Explanation: General definitions
End of explanation
"""
# start with the best parameters from the experiment matcher
best_params_file = 'best_params_joint.pkl'
with open(best_params_file, 'rb') as inp:
best_params_full = pickle.load(inp)
# keep the values for the juvenile bird
default_params = {}
for key, value in best_params_full.items():
pound_i = key.find('##')
if pound_i >= 0:
if int(key[pound_i+2:]) > 0:
# this is not for the juvenile
continue
key = key[:pound_i]
default_params[key] = value
# add the target, and make sure we have the right tmax and dt
default_params['target'] = target
default_params['tmax'] = tmax
default_params['dt'] = dt
# the number of student neurons per output doesn't have to be so high
default_params['n_student_per_output'] = 40
# the best_params file also has no learning, so let's set better defaults there
default_params['plasticity_learning_rate'] = 0.6e-9
default_params['plasticity_constrain_positive'] = True
default_params['plasticity_taus'] = (80.0, 40.0)
default_params['plasticity_params'] = (1.0, 0.0)
default_params.pop('tutor_rule_gain', None)
default_params['tutor_rule_gain_per_student'] = 0.5
default_params['tutor_rule_tau'] = 0.0
# the best_params also didn't care about the controller -- let's set that
default_params['controller_mode'] = 'sum'
default_params['controller_scale'] = 0.5
# save!
defaults_name = 'default_params.pkl'
if not os.path.exists(defaults_name):
with open(defaults_name, 'wb') as out:
pickle.dump(default_params, out, 2)
else:
raise Exception('File exists!')
"""
Explanation: Create default parameters file
End of explanation
"""
def tracker_generator(simulator, i, n):
""" Generate some trackers. """
res = {}
res['motor'] = simulation.StateMonitor(simulator.motor, 'out')
res['student_spike'] = simulation.EventMonitor(simulator.student)
if i % 10 == 0:
res['tutor'] = simulation.StateMonitor(simulator.tutor, 'out')
res['conductor'] = simulation.StateMonitor(simulator.conductor, 'out')
res['student'] = simulation.StateMonitor(simulator.student, 'out')
res['conductor_spike'] = simulation.EventMonitor(simulator.conductor)
return res
def snapshot_generator_pre(simulator, i, n):
""" Generate some pre-run snapshots. """
res = {}
if i % 10 == 0:
res['weights'] = np.copy(simulator.conductor_synapses.W)
return res
# load the default parameters
with open('default_params.pkl', 'rb') as inp:
default_params = pickle.load(inp)
# keep things arbitrary but reproducible
np.random.seed(12314)
actual_params = dict(default_params)
actual_params['plasticity_params'] = (1.0, 0.0)
actual_params['tutor_rule_tau'] = 80.0
actual_params['progress_indicator'] = ProgressBar
actual_params['tracker_generator'] = tracker_generator
actual_params['snapshot_generator'] = snapshot_generator_pre
simulator = SpikingLearningSimulation(**actual_params)
res = simulator.run(200)
file_name = 'save/spiking_example.pkl'
if not os.path.exists(file_name):
with open(file_name, 'wb') as out:
pickle.dump({'params': actual_params, 'res': res}, out, 2)
else:
raise Exception('File exists!')
plot_evolution(res, target, dt)
show_repetition_pattern([_['student_spike'] for _ in res[-10:]], idx=range(10), ms=2.0)
plt.xlim(0, tmax)
crt_times0 = np.asarray(res[-1]['student_spike'].t)
crt_times = crt_times0[crt_times0 < tmax]
print('Average firing rate {:.2f} Hz.'.format(len(crt_times)*1000.0/tmax/simulator.student.N))
"""
Explanation: Generate data for figures
Learning curve (blackbox)
End of explanation
"""
# keep things reproducible
np.random.seed(0)
smoothlen = 400
realTarget1 = np.zeros(len(times))
realTarget1[int_r(50.0/dt):int_r(65.0/dt)] = 90.0
realTarget1[int_r(65.0/dt):int_r(75.0/dt)] = 20.0
realTarget1[int_r(75.0/dt):int_r(100.0/dt)] = 90.0
realTarget1[int_r(125.0/dt):int_r(150.0/dt)] = 80.0
realTarget1[int_r(150.0/dt):int_r(160.0/dt)] = 40.0
realTarget1[int_r(250.0/dt):int_r(280.0/dt)] = 80.0
realTarget1[int_r(305.0/dt):int_r(320.0/dt)] = 70.0
realTarget1[int_r(350.0/dt):int_r(360.0/dt)] = 90.0
realTarget1[int_r(410.0/dt):int_r(450.0/dt)] = 100.0
realTarget1[int_r(450.0/dt):int_r(470.0/dt)] = 60.0
realTarget1[int_r(500.0/dt):int_r(540.0/dt)] = 80.0
realTarget1 = np.convolve(realTarget1,
np.exp(-0.5*np.linspace(-3.0, 3.0, smoothlen)**2)/np.sqrt(2*np.pi)/80,
mode='same')
realTarget2 = np.zeros(len(times))
realTarget2[int_r(60.0/dt):int_r(75.0/dt)] = 90.0
realTarget2[int_r(100.0/dt):int_r(115.0/dt)] = 100.0
realTarget2[int_r(265.0/dt):int_r(290.0/dt)] = 90.0
realTarget2[int_r(320.0/dt):int_r(330.0/dt)] = 40.0
realTarget2[int_r(330.0/dt):int_r(365.0/dt)] = 100.0
realTarget2[int_r(385.0/dt):int_r(400.0/dt)] = 90.0
realTarget2[int_r(415.0/dt):int_r(450.0/dt)] = 80.0
realTarget2[int_r(470.0/dt):int_r(480.0/dt)] = 80.0
realTarget2[int_r(520.0/dt):int_r(540.0/dt)] = 90.0
realTarget2 = np.convolve(realTarget2,
np.exp(-0.5*np.linspace(-3.0, 3.0, smoothlen)**2)/np.sqrt(2*np.pi)/80,
mode='same')
realTarget3 = np.zeros(len(times))
realTarget3[int_r(70.0/dt):int_r(100.0/dt)] = 100.0
realTarget3[int_r(160.0/dt):int_r(180.0/dt)] = 100.0
realTarget3[int_r(260.0/dt):int_r(275.0/dt)] = 100.0
realTarget3[int_r(285.0/dt):int_r(310.0/dt)] = 100.0
realTarget3[int_r(340.0/dt):int_r(360.0/dt)] = 100.0
realTarget3[int_r(435.0/dt):int_r(470.0/dt)] = 90.0
realTarget3[int_r(530.0/dt):int_r(540.0/dt)] = 80.0
realTarget3 = np.convolve(realTarget3,
np.exp(-0.5*np.linspace(-3.0, 3.0, smoothlen)**2)/np.sqrt(2*np.pi)/80,
mode='same')
realTarget4 = np.zeros(len(times))
realTarget4[int_r(50.0/dt):int_r(65.0/dt)] = 30.0
realTarget4[int_r(65.0/dt):int_r(85.0/dt)] = 100.0
realTarget4[int_r(135.0/dt):int_r(150.0/dt)] = 90.0
realTarget4[int_r(285.0/dt):int_r(300.0/dt)] = 90.0
realTarget4[int_r(385.0/dt):int_r(405.0/dt)] = 60.0
realTarget4[int_r(430.0/dt):int_r(450.0/dt)] = 100.0
realTarget4[int_r(525.0/dt):int_r(540.0/dt)] = 70.0
realTarget4 = np.convolve(realTarget4,
np.exp(-0.5*np.linspace(-3.0, 3.0, smoothlen)**2)/np.sqrt(2*np.pi)/80,
mode='same')
realTarget5 = np.zeros(len(times))
realTarget5[int_r(75.0/dt):int_r(85.0/dt)] = 20.0
realTarget5[int_r(115.0/dt):int_r(130.0/dt)] = 60.0
realTarget5[int_r(180.0/dt):int_r(200.0/dt)] = 90.0
realTarget5[int_r(265.0/dt):int_r(290.0/dt)] = 100.0
realTarget5[int_r(325.0/dt):int_r(350.0/dt)] = 70.0
realTarget5[int_r(410.0/dt):int_r(420.0/dt)] = 80.0
realTarget5[int_r(440.0/dt):int_r(455.0/dt)] = 70.0
realTarget5[int_r(535.0/dt):int_r(545.0/dt)] = 20.0
realTarget5 = np.convolve(realTarget5,
np.exp(-0.5*np.linspace(-3.0, 3.0, smoothlen)**2)/np.sqrt(2*np.pi)/80,
mode='same')
realTarget = np.vstack((realTarget1, realTarget2, realTarget3, realTarget4, realTarget5))
def tracker_generator(simulator, i, n):
""" Generate some trackers. """
res = {}
res['motor'] = simulation.StateMonitor(simulator.motor, 'out')
res['student_spike'] = simulation.EventMonitor(simulator.student)
if i % 10 == 0:
res['tutor'] = simulation.StateMonitor(simulator.tutor, 'out')
res['conductor'] = simulation.StateMonitor(simulator.conductor, 'out')
res['student'] = simulation.StateMonitor(simulator.student, 'out')
res['conductor_spike'] = simulation.EventMonitor(simulator.conductor)
return res
def snapshot_generator_pre(simulator, i, n):
""" Generate some pre-run snapshots. """
res = {}
if i % 10 == 0:
res['weights'] = np.copy(simulator.conductor_synapses.W)
return res
# load the default parameters
with open('default_params.pkl', 'rb') as inp:
default_params = pickle.load(inp)
# keep things arbitrary but reproducible
np.random.seed(12314)
actual_params = dict(default_params)
actual_params['target'] = realTarget
actual_params['plasticity_params'] = (1.0, 0.0)
actual_params['tutor_rule_tau'] = 80.0
actual_params['progress_indicator'] = ProgressBar
actual_params['tracker_generator'] = tracker_generator
actual_params['snapshot_generator'] = snapshot_generator_pre
actual_params['tutor_rule_gain_per_student'] = 1.0
actual_params['plasticity_learning_rate'] = 1e-9
#actual_params['n_student_per_output'] = 10
#actual_params['controller_scale'] = 0.5*4
simulator = SpikingLearningSimulation(**actual_params)
res = simulator.run(600)
file_name = 'save/spiking_example_realistic.pkl'
if not os.path.exists(file_name):
with open(file_name, 'wb') as out:
pickle.dump({'params': actual_params, 'res': res}, out, 2)
else:
raise Exception('File exists!')
plot_evolution(res, realTarget, dt)
"""
Explanation: Learning curve (blackbox), realistic target
End of explanation
"""
def tracker_generator(simulator, i, n):
""" Generate some trackers. """
res = {}
res['motor'] = simulation.StateMonitor(simulator.motor, 'out')
res['student_spike'] = simulation.EventMonitor(simulator.student)
if i % 10 == 0:
res['tutor'] = simulation.StateMonitor(simulator.tutor, 'out')
res['conductor'] = simulation.StateMonitor(simulator.conductor, 'out')
res['student'] = simulation.StateMonitor(simulator.student, 'out')
res['conductor_spike'] = simulation.EventMonitor(simulator.conductor)
return res
def snapshot_generator_pre(simulator, i, n):
""" Generate some pre-run snapshots. """
res = {}
if i % 10 == 0:
res['weights'] = np.copy(simulator.conductor_synapses.W)
return res
# load the default parameters
with open('default_params.pkl', 'rb') as inp:
default_params = pickle.load(inp)
# keep things arbitrary but reproducible
np.random.seed(12314)
actual_params = dict(default_params)
actual_params['plasticity_params'] = (1.0, 0.0)
actual_params['tutor_rule_tau'] = 80.0
actual_params['progress_indicator'] = ProgressBar
actual_params['tracker_generator'] = tracker_generator
actual_params['snapshot_generator'] = snapshot_generator_pre
actual_params['student_g_inh'] = 0
actual_params['student_i_external'] = -0.23
simulator = SpikingLearningSimulation(**actual_params)
res = simulator.run(200)
file_name = 'save/spiking_example_const_inh.pkl'
if not os.path.exists(file_name):
with open(file_name, 'wb') as out:
pickle.dump({'params': actual_params, 'res': res}, out, 2)
else:
raise Exception('File exists!')
plot_evolution(res, target, dt)
"""
Explanation: Learning curve (blackbox), constant inhibition
End of explanation
"""
def tracker_generator(simulator, i, n):
""" Generate some trackers. """
res = {}
if i % 10 == 0:
res['tutor'] = simulation.StateMonitor(simulator.tutor, 'out')
res['student_spike'] = simulation.EventMonitor(simulator.student)
res['motor'] = simulation.StateMonitor(simulator.motor, 'out')
return res
def snapshot_generator_pre(simulator, i, n):
""" Generate some pre-run snapshots. """
res = {}
if i % 50 == 0:
res['weights'] = np.copy(simulator.conductor_synapses.W)
return res
# load the default parameters
with open('default_params.pkl', 'rb') as inp:
default_params = pickle.load(inp)
# keep things arbitrary but reproducible
np.random.seed(212312)
args = dict(default_params)
args['relaxation'] = 200.0
args['relaxation_conductor'] = 200.0
args['tutor_tau_out'] = 40.0
args['tutor_rule_type'] = 'reinforcement'
args['tutor_rule_learning_rate'] = 0.004
args['tutor_rule_compress_rates'] = True
args['tutor_rule_relaxation'] = None
args['tutor_rule_tau'] = 0.0
args['plasticity_params'] = (1.0, 0.0)
args['plasticity_constrain_positive'] = True
args['plasticity_learning_rate'] = 7e-10
args_actual = dict(args)
args_actual['tracker_generator'] = tracker_generator
args_actual['snapshot_generator'] = snapshot_generator_pre
args_actual['progress_indicator'] = ProgressBar
simulator = SpikingLearningSimulation(**args_actual)
res = simulator.run(10000)
# save!
file_name = 'save/reinforcement_example_0ms.pkl'
if not os.path.exists(file_name):
with open(file_name, 'wb') as out:
pickle.dump({'res': res, 'args': args}, out, 2)
else:
raise Exception('File exists!')
"""
Explanation: Reinforcement example (0 ms)
End of explanation
"""
def tracker_generator(simulator, i, n):
""" Generate some trackers. """
res = {}
if i % 10 == 0:
res['tutor'] = simulation.StateMonitor(simulator.tutor, 'out')
res['student_spike'] = simulation.EventMonitor(simulator.student)
res['motor'] = simulation.StateMonitor(simulator.motor, 'out')
return res
def snapshot_generator_pre(simulator, i, n):
""" Generate some pre-run snapshots. """
res = {}
if i % 50 == 0:
res['weights'] = np.copy(simulator.conductor_synapses.W)
return res
# keep things arbitrary but reproducible
np.random.seed(212312)
args = dict(
target=target, tmax=tmax, dt=dt,
n_conductor=300, n_student_per_output=40,
relaxation=200.0, relaxation_conductor=200.0, # XXX different from blackbox!
conductor_rate_during_burst=769.7,
controller_mode='sum',
controller_scale=0.5,
tutor_tau_out=40.0,
tutor_rule_type='reinforcement',
tutor_rule_learning_rate=0.006,
tutor_rule_compress_rates=True,
tutor_rule_tau=80.0,
tutor_rule_relaxation=None, # XXX different from blackbox!
cs_weights_fraction=0.488, ts_weights=0.100,
plasticity_constrain_positive=True,
plasticity_learning_rate=6e-10,
plasticity_taus=(80.0, 40.0),
plasticity_params=(1.0, 0.0),
student_R=383.4, student_g_inh=1.406,
student_tau_ampa=5.390, student_tau_nmda=81.92,
student_tau_m=20.31, student_tau_ref=1.703,
student_vR=-74.39, student_v_th=-45.47
)
args_actual = dict(args)
args_actual['tracker_generator'] = tracker_generator
args_actual['snapshot_generator'] = snapshot_generator_pre
args_actual['progress_indicator'] = ProgressBar
simulator = SpikingLearningSimulation(**args_actual)
res = simulator.run(16000)
# save!
file_name = 'save/reinforcement_example_80ms.pkl'
if not os.path.exists(file_name):
with open(file_name, 'wb') as out:
pickle.dump({'res': res, 'args': args}, out, 2)
else:
raise Exception('File exists!')
plot_evolution(res, target, dt)
"""
Explanation: Reinforcement example (80 ms)
End of explanation
"""
def tracker_generator(simulator, i, n):
""" Generate some trackers. """
res = {}
if i % 10 == 0:
res['tutor'] = simulation.StateMonitor(simulator.tutor, 'out')
res['student_spike'] = simulation.EventMonitor(simulator.student)
res['motor'] = simulation.StateMonitor(simulator.motor, 'out')
return res
def snapshot_generator_pre(simulator, i, n):
""" Generate some pre-run snapshots. """
res = {}
if i % 50 == 0:
res['weights'] = np.copy(simulator.conductor_synapses.W)
return res
# keep things arbitrary but reproducible
np.random.seed(12234)
args = dict(default_params)
args['relaxation'] = 200.0
args['relaxation_conductor'] = 200.0
args['tutor_tau_out'] = 40.0
args['tutor_rule_type'] = 'reinforcement'
args['tutor_rule_learning_rate'] = 0.004
args['tutor_rule_compress_rates'] = True
args['tutor_rule_relaxation'] = None
args['tutor_rule_tau'] = 440.0
args['plasticity_params'] = (10.0, 9.0)
args['plasticity_constrain_positive'] = True
args['plasticity_learning_rate'] = 7e-10
args_actual = dict(args)
args_actual['tracker_generator'] = tracker_generator
args_actual['snapshot_generator'] = snapshot_generator_pre
args_actual['progress_indicator'] = ProgressBar
simulator = SpikingLearningSimulation(**args_actual)
res = simulator.run(10000)
# save!
file_name = 'save/reinforcement_example_a10b9_440ms.pkl'
if not os.path.exists(file_name):
with open(file_name, 'wb') as out:
pickle.dump({'res': res, 'args': args}, out, 2)
else:
raise Exception('File exists!')
plot_evolution(res, target, dt)
"""
Explanation: Reinforcement example (alpha=10, beta=9, tau=440 ms)
End of explanation
"""
file_name = 'spike_out/songspike_tscale_batch_8.8.160525.1530_summary.pkl'
with open(file_name, 'rb') as inp:
mismatch_data = pickle.load(inp)
make_heatmap_plot(mismatch_data['res_array'], args_matrix=mismatch_data['args_array'],
vmin=1.0, vmax=10, sim_idx=250)
safe_save_fig('figs/spiking_mismatch_heatmap_sum_log_8', png=False)
make_convergence_map(mismatch_data['res_array'], args_matrix=mismatch_data['args_array'],
max_steps=250)
safe_save_fig('figs/spiking_mismatch_convmap_sum_log_8', png=False)
"""
Explanation: Make figures
Tutor-student mismatch heatmap and convergence map -- blackbox spiking
The data for this needs to be generated using the summarize.py script from the results of the run_tscale_batch.py script, which is designed to run on a cluster.
End of explanation
"""
file_name = 'spike_out/songspike_tscale_batch_12.12.161122.1802_summary.pkl'
with open(file_name, 'rb') as inp:
mismatch_data = pickle.load(inp)
make_heatmap_plot(mismatch_data['res_array'], args_matrix=mismatch_data['args_array'],
vmin=0.5, vmax=10, sim_idx=999)
safe_save_fig('figs/spiking_mismatch_heatmap_sum_log_12', png=False)
make_convergence_map(mismatch_data['res_array'], args_matrix=mismatch_data['args_array'],
max_steps=999)
safe_save_fig('figs/spiking_mismatch_convmap_sum_log_12', png=False)
error_matrix = np.asarray([[_[-1] for _ in crt_res] for crt_res in mismatch_data['res_array']])
error_matrix[~np.isfinite(error_matrix)] = np.inf
tau_levels = np.asarray([_['tutor_rule_tau'] for _ in mismatch_data['args_array'][0]])
plt.semilogx(tau_levels, np.diag(error_matrix), '.-k')
"""
Explanation: Tutor-student bigger mismatch heatmap and convergence map -- blackbox spiking
The data for this needs to be generated using the summarize.py script from the results of the run_tscale_batch.py script, which is designed to run on a cluster.
End of explanation
"""
file_name = 'spike_out/song_reinf_tscale_batch_8.8.160607.1153_summary.pkl'
with open(file_name, 'rb') as inp:
mismatch_data = pickle.load(inp)
make_heatmap_plot(mismatch_data['res_array'], args_matrix=mismatch_data['args_array'],
vmin=1.0, vmax=10)
safe_save_fig('figs/reinforcement_mismatch_heatmap_sum_log_8', png=False)
make_convergence_map(mismatch_data['res_array'], args_matrix=mismatch_data['args_array'], max_error=12)
safe_save_fig('figs/reinforcement_mismatch_convmap_sum_log_8', png=False)
"""
Explanation: Tutor-student mismatch heatmap and convergence map -- reinforcement
End of explanation
"""
with open('save/spiking_example.pkl', 'rb') as inp:
spiking_example_data = pickle.load(inp)
fig = plt.figure(figsize=(3, 2))
axs = draw_convergence_plot(spiking_example_data['res'], plt.gca(), target_lw=2,
inset=True, inset_pos=[0.4, 0.4, 0.4, 0.4],
alpha=spiking_example_data['params']['plasticity_params'][0],
beta=spiking_example_data['params']['plasticity_params'][1],
tau_tutor=spiking_example_data['params']['tutor_rule_tau'],
target=spiking_example_data['params']['target'])
axs[0].set_ylim(0, 15);
safe_save_fig('figs/spiking_example_learning_curve', png=False)
plt.figure(figsize=(3, 1))
crt_res = spiking_example_data['res'][:5]
show_repetition_pattern([_['student_spike'] for _ in crt_res], idx=[1, 47, 23, 65, 78], ms=1.0)
plt.gca().spines['top'].set_color('none')
plt.gca().spines['right'].set_color('none')
plt.gca().spines['left'].set_color('none')
plt.yticks([])
plt.ylabel('')
plt.xlim(0, tmax);
print('Firing rate {:.2f} Hz.'.format(get_firing_rate(crt_res[-1],
                                      spiking_example_data['params']['tmax'])))
safe_save_fig('figs/spiking_simraster_juvenile', png=False)
plt.figure(figsize=(3, 1))
crt_res = spiking_example_data['res'][-5:]
show_repetition_pattern([_['student_spike'] for _ in crt_res], idx=[1, 47, 23, 65, 78], ms=1.0)
plt.gca().spines['top'].set_color('none')
plt.gca().spines['right'].set_color('none')
plt.gca().spines['left'].set_color('none')
plt.yticks([])
plt.ylabel('')
plt.xlim(0, tmax);
print('Firing rate {:.2f} Hz.'.format(get_firing_rate(crt_res[-1],
                                      spiking_example_data['params']['tmax'])))
safe_save_fig('figs/spiking_simraster_adult', png=False)
"""
Explanation: Spiking example learning curve and raster plots
End of explanation
"""
with open('save/spiking_example_realistic.pkl', 'rb') as inp:
spiking_example_data = pickle.load(inp)
fig = plt.figure(figsize=(3, 2))
axs = draw_convergence_plot(spiking_example_data['res'], plt.gca(), target_lw=2,
inset=True, inset_pos=[0.4, 0.45, 0.4, 0.4],
legend_pos=(0.7, 1.1),
alpha=spiking_example_data['params']['plasticity_params'][0],
beta=spiking_example_data['params']['plasticity_params'][1],
tau_tutor=spiking_example_data['params']['tutor_rule_tau'],
target=spiking_example_data['params']['target'])
axs[0].set_ylim(0, 15);
safe_save_fig('figs/spiking_example_realistic_learning_curve', png=False)
plt.figure(figsize=(3, 1))
crt_res = spiking_example_data['res'][:5]
show_repetition_pattern([_['student_spike'] for _ in crt_res], idx=[1, 47, 87, 123, 165])
plt.gca().spines['top'].set_color('none')
plt.gca().spines['right'].set_color('none')
plt.gca().spines['left'].set_color('none')
plt.yticks([])
plt.ylabel('')
plt.xlim(0, tmax);
print('Firing rate {:.2f} Hz.'.format(get_firing_rate(crt_res[-1],
                                      spiking_example_data['params']['tmax'])))
safe_save_fig('figs/spiking_simraster_realistic_juvenile', png=False)
plt.figure(figsize=(3, 1))
crt_res = spiking_example_data['res'][-5:]
show_repetition_pattern([_['student_spike'] for _ in crt_res], idx=[1, 47, 87, 123, 165])
plt.gca().spines['top'].set_color('none')
plt.gca().spines['right'].set_color('none')
plt.gca().spines['left'].set_color('none')
plt.yticks([])
plt.ylabel('')
plt.xlim(0, tmax);
print('Firing rate {:.2f} Hz.'.format(get_firing_rate(crt_res[-1],
                                      spiking_example_data['params']['tmax'])))
safe_save_fig('figs/spiking_simraster_realistic_adult', png=False)
make_convergence_movie('figs/spiking_convergence_movie_small_tau.mov',
spiking_example_data['res'], spiking_example_data['params']['target'],
idxs=range(0, 600), length=12.0,
ymax=80.0)
"""
Explanation: Spiking example learning curve and raster plots, realistic target
End of explanation
"""
with open('save/spiking_example_const_inh.pkl', 'rb') as inp:
spiking_example_data = pickle.load(inp)
fig = plt.figure(figsize=(3, 2))
axs = draw_convergence_plot(spiking_example_data['res'], plt.gca(), target_lw=2,
inset=True, inset_pos=[0.4, 0.4, 0.4, 0.4],
alpha=spiking_example_data['params']['plasticity_params'][0],
beta=spiking_example_data['params']['plasticity_params'][1],
tau_tutor=spiking_example_data['params']['tutor_rule_tau'],
target=spiking_example_data['params']['target'])
axs[0].set_ylim(0, 15);
safe_save_fig('figs/spiking_example_const_inh_learning_curve', png=False)
plt.figure(figsize=(3, 1))
crt_res = spiking_example_data['res'][:5]
show_repetition_pattern([_['student_spike'] for _ in crt_res], idx=[1, 47, 23, 65, 78], ms=1.0)
plt.gca().spines['top'].set_color('none')
plt.gca().spines['right'].set_color('none')
plt.gca().spines['left'].set_color('none')
plt.yticks([])
plt.ylabel('')
plt.xlim(0, tmax);
print('Firing rate {:.2f} Hz.'.format(get_firing_rate(crt_res[-1],
                                      spiking_example_data['params']['tmax'])))
plt.figure(figsize=(3, 1))
crt_res = spiking_example_data['res'][-5:]
show_repetition_pattern([_['student_spike'] for _ in crt_res], idx=[1, 47, 23, 65, 78], ms=1.0)
plt.gca().spines['top'].set_color('none')
plt.gca().spines['right'].set_color('none')
plt.gca().spines['left'].set_color('none')
plt.yticks([])
plt.ylabel('')
plt.xlim(0, tmax);
print('Firing rate {:.2f} Hz.'.format(get_firing_rate(crt_res[-1],
                                      spiking_example_data['params']['tmax'])))
make_convergence_movie('figs/spiking_convergence_movie_const_inh.mov',
spiking_example_data['res'], spiking_example_data['params']['target'],
idxs=range(0, 200), length=4.0,
ymax=80.0)
"""
Explanation: Spiking example, constant inhibition, learning curve and raster plots
End of explanation
"""
with open('save/reinforcement_example_0ms.pkl', 'rb') as inp:
reinf_shorttau = pickle.load(inp)
plt.imshow(reinf_shorttau['res'][7500]['weights'], aspect='auto', interpolation='nearest',
cmap='Blues', vmin=0, vmax=0.3)
plt.colorbar()
plot_evolution(reinf_shorttau['res'],
reinf_shorttau['args']['target'],
reinf_shorttau['args']['dt'])
fig = plt.figure(figsize=(3, 2))
axs = draw_convergence_plot(reinf_shorttau['res'][:-9], plt.gca(), target_lw=2,
inset=True,
alpha=reinf_shorttau['args']['plasticity_params'][0],
beta=reinf_shorttau['args']['plasticity_params'][1],
tau_tutor=reinf_shorttau['args']['tutor_rule_tau'],
target=reinf_shorttau['args']['target'],
inset_pos=[0.52, 0.45, 0.4, 0.4])
axs[0].set_xticks(range(0, 8001, 2000))
axs[0].set_ylim(0, 15);
axs[1].set_yticks(range(0, 81, 20));
safe_save_fig('figs/reinforcement_convergence_plot_small_tau', png=False)
plt.figure(figsize=(3, 1))
crt_res = reinf_shorttau['res'][:50:10]
show_repetition_pattern([_['student_spike'] for _ in crt_res], idx=[1, 45, 75, 65, 57])
plt.gca().spines['top'].set_color('none')
plt.gca().spines['right'].set_color('none')
plt.gca().spines['left'].set_color('none')
plt.yticks([])
plt.ylabel('')
crt_tmax = reinf_shorttau['args']['tmax'];
plt.xlim(0, crt_tmax);
print('Firing rate {:.2f} Hz.'.format(get_firing_rate(crt_res[-1], crt_tmax)))
safe_save_fig('figs/reinforcement_simraster_juvenile', png=False)
plt.figure(figsize=(3, 1))
crt_res = reinf_shorttau['res'][-50::10]
show_repetition_pattern([_['student_spike'] for _ in crt_res], idx=[1, 45, 75, 65, 57])
plt.gca().spines['top'].set_color('none')
plt.gca().spines['right'].set_color('none')
plt.gca().spines['left'].set_color('none')
plt.yticks([])
plt.ylabel('')
crt_tmax = reinf_shorttau['args']['tmax'];
plt.xlim(0, crt_tmax);
print('Firing rate {:.2f} Hz.'.format(get_firing_rate(crt_res[-1], crt_tmax)))
safe_save_fig('figs/reinforcement_simraster_adult', png=False)
make_convergence_movie('figs/reinforcement_convergence_movie_small_tau.mov',
reinf_shorttau['res'], reinf_shorttau['args']['target'],
idxs=range(0, 10000), length=10.0,
ymax=80.0)
"""
Explanation: Reinforcement example learning curves
Reinforcement learning curve, small tau
End of explanation
"""
with open('save/reinforcement_example_a10b9_440ms.pkl', 'rb') as inp:
reinf_longtau = pickle.load(inp)
fig = plt.figure(figsize=(3, 2))
axs = draw_convergence_plot(reinf_longtau['res'][:-9], plt.gca(), target_lw=2,
inset=True,
alpha=reinf_longtau['args']['plasticity_params'][0],
beta=reinf_longtau['args']['plasticity_params'][1],
tau_tutor=reinf_longtau['args']['tutor_rule_tau'],
target=reinf_longtau['args']['target'],
inset_pos=[0.5, 0.45, 0.4, 0.4])
axs[0].set_xticks(range(0, 8001, 2000))
axs[0].set_ylim(0, 15);
axs[1].set_yticks(range(0, 81, 20));
safe_save_fig('figs/reinforcement_convergence_plot_large_tau', png=False)
plt.figure(figsize=(3, 1))
crt_res = reinf_longtau['res'][:50:10]
show_repetition_pattern([_['student_spike'] for _ in crt_res], idx=[3, 48, 19, 62, 78], ms=1.0)
plt.gca().spines['top'].set_color('none')
plt.gca().spines['right'].set_color('none')
plt.gca().spines['left'].set_color('none')
plt.yticks([])
plt.ylabel('')
crt_tmax = reinf_longtau['args']['tmax'];
plt.xlim(0, crt_tmax);
print('Firing rate {:.2f} Hz.'.format(get_firing_rate(crt_res[-1], crt_tmax)))
plt.figure(figsize=(3, 1))
crt_res = reinf_longtau['res'][-50::10]
show_repetition_pattern([_['student_spike'] for _ in crt_res], idx=[3, 48, 19, 62, 78], ms=1.0)
plt.gca().spines['top'].set_color('none')
plt.gca().spines['right'].set_color('none')
plt.gca().spines['left'].set_color('none')
plt.yticks([])
plt.ylabel('')
crt_tmax = reinf_longtau['args']['tmax'];
plt.xlim(0, crt_tmax);
print('Firing rate {:.2f} Hz.'.format(get_firing_rate(crt_res[-1], crt_tmax)))
make_convergence_movie('figs/reinforcement_convergence_movie_large_tau.mov',
reinf_longtau['res'], reinf_longtau['args']['target'],
idxs=range(0, 10000), length=10.0,
ymax=80.0)
"""
Explanation: Reinforcement learning curve, long tau
End of explanation
"""
with open('save/reinforcement_example_0ms.pkl', 'rb') as inp:
reinf_shorttau = pickle.load(inp)
motor_idxs = range(0, len(reinf_shorttau['res']), 50)
weight_sparsity = [np.sum(reinf_shorttau['res'][_]['weights'] > 0.01)/
(reinf_shorttau['args']['n_student_per_output']*len(reinf_shorttau['args']['target']))
for _ in motor_idxs]
plt.figure(figsize=(3, 2))
plt.plot(motor_idxs, weight_sparsity, color=[0.200, 0.357, 0.400])
plt.xlabel('repetitions')
plt.ylabel('HVC inputs per RA neuron')
plt.ylim(0, 200);
plt.grid(True)
safe_save_fig('figs/inputs_per_ra_evolution_reinf')
"""
Explanation: Reinforcement learning, evolution of synapse sparsity
End of explanation
"""
|
bert9bert/statsmodels | examples/notebooks/distributed_estimation.ipynb | bsd-3-clause | import numpy as np
from statsmodels.base.distributed_estimation import DistributedModel
def _exog_gen(exog, partitions):
"""partitions exog data"""
n_exog = exog.shape[0]
n_part = np.ceil(n_exog / partitions)
ii = 0
while ii < n_exog:
jj = int(min(ii + n_part, n_exog))
yield exog[ii:jj, :]
ii += int(n_part)
def _endog_gen(endog, partitions):
"""partitions endog data"""
n_endog = endog.shape[0]
n_part = np.ceil(n_endog / partitions)
ii = 0
while ii < n_endog:
jj = int(min(ii + n_part, n_endog))
yield endog[ii:jj]
ii += int(n_part)
"""
Explanation: This notebook goes through a couple of examples to show how to use distributed_estimation. We import the DistributedModel class and make the exog and endog generators.
End of explanation
"""
X = np.random.normal(size=(1000, 25))
beta = np.random.normal(size=25)
beta *= np.random.randint(0, 2, size=25)
y = X.dot(beta) + np.random.normal(size=1000)
m = 5
"""
Explanation: Next we generate some random data to serve as an example.
End of explanation
"""
debiased_OLS_mod = DistributedModel(m)
debiased_OLS_fit = debiased_OLS_mod.fit(zip(_endog_gen(y, m), _exog_gen(X, m)),
fit_kwds={"alpha": 0.2})
"""
Explanation: This is the most basic fit, showing all of the defaults, which are to use OLS as the model class, and the debiasing procedure.
End of explanation
"""
from statsmodels.genmod.generalized_linear_model import GLM
from statsmodels.genmod.families import Binomial
debiased_GLM_mod = DistributedModel(m, model_class=GLM,
init_kwds={"family": Binomial()})
# binarize the response so it is a valid endog for a Binomial family
y_bin = (y > 0).astype(float)
debiased_GLM_fit = debiased_GLM_mod.fit(zip(_endog_gen(y_bin, m), _exog_gen(X, m)),
                                        fit_kwds={"alpha": 0.2})
"""
Explanation: Then we run through a slightly more complicated example which uses the GLM model class.
End of explanation
"""
from statsmodels.base.distributed_estimation import _est_regularized_naive, _join_naive
naive_OLS_reg_mod = DistributedModel(m, estimation_method=_est_regularized_naive,
join_method=_join_naive)
naive_OLS_reg_params = naive_OLS_reg_mod.fit(zip(_endog_gen(y, m), _exog_gen(X, m)),
fit_kwds={"alpha": 0.2})
"""
Explanation: We can also change the estimation_method and the join_method. The example below shows how this works for the standard OLS case. Here we use a naive averaging approach instead of the debiasing procedure.
End of explanation
"""
from statsmodels.base.distributed_estimation import _est_unregularized_naive, DistributedResults
naive_OLS_unreg_mod = DistributedModel(m, estimation_method=_est_unregularized_naive,
join_method=_join_naive,
results_class=DistributedResults)
naive_OLS_unreg_params = naive_OLS_unreg_mod.fit(zip(_endog_gen(y, m), _exog_gen(X, m)),
fit_kwds={"alpha": 0.2})
"""
Explanation: Finally, we can also change the results_class used. The following example shows how this works for a simple case with an unregularized model and naive averaging.
End of explanation
"""
|
vsporeddy/bigbang | examples/Auditing Fernando.ipynb | gpl-2.0 | from bigbang.archive import Archive
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
"""
Explanation: This notebook will show you how to use BigBang to investigate a particular project participant's activity.
We will focus on Fernando Perez's role within the IPython community.
First, imports.
End of explanation
"""
url = "ipython-user"
arx = Archive(url,archive_dir="../archives")
"""
Explanation: Let's get all the available data from the IPython community. For now, this is just the mailing lists. One day, BigBang will also get its issue tracker data! That will be very exciting.
End of explanation
"""
fernandos = Archive(arx.data[arx.data.From.map(lambda x: 'Fernando' in x)])
fernandos.data[:3]
"""
Explanation: Now let's isolate the messages involving Fernando Perez.
This includes both messages from Fernando, and messages to Fernando.
End of explanation
"""
[x for x in fernandos.get_activity()]
"""
Explanation: Note that our way of finding Fernando Perez was not very precise. We've picked up another Fernando.
End of explanation
"""
not_fernandos = Archive(arx.data[arx.data.From.map(lambda x: 'Fernando' not in x)])
not_fernandos.data[:3]
"""
Explanation: In future iterations, we will use a more sensitive entity recognition technique to find Fernando. This will have to do for now.
We will also need the data for all the emails that were not sent by Fernando.
End of explanation
"""
not_fernandos.get_activity().sum(1).values.shape
nf = pd.DataFrame(not_fernandos.get_activity().sum(1))
f = pd.DataFrame(fernandos.get_activity().sum(1))
both = pd.merge(nf,f,how="outer",left_index=True,right_index=True,suffixes=("_nf","_f")).fillna(0)
"""
Explanation: We now have two Archives made from the original Archive, with the same range of dates, but one with and the other without Fernando. Both contain emails from many addresses. We want to get a single metric of activity.
End of explanation
"""
fig = plt.figure(figsize=(12.5, 7.5))
fa = fernandos.get_activity()
d = np.row_stack((both['0_f'],
both['0_nf']))
plt.stackplot(both.index.values,d,linewidth=0,label='foo')
fig.axes[0].xaxis_date()
plt.show()
"""
Explanation: Let's make a stackplot of this data to see how much of the conversation on the IPython users' list has been Fernando, over time.
End of explanation
"""
|
aidiary/notebooks | pytorch/180212-dqn-tutorial.ipynb | mit | import gym
import math
import random
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from collections import namedtuple
from itertools import count
from copy import deepcopy
from PIL import Image
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.autograd import Variable
import torchvision.transforms as T
%matplotlib inline
# setup matplotlib
is_ipython = 'inline' in matplotlib.get_backend()
if is_ipython:
from IPython import display
plt.ion()
# if gpu is to be used
use_cuda = torch.cuda.is_available()
FloatTensor = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor
LongTensor = torch.cuda.LongTensor if use_cuda else torch.LongTensor
ByteTensor = torch.cuda.ByteTensor if use_cuda else torch.ByteTensor
Tensor = FloatTensor
env = gym.make('CartPole-v0').unwrapped
env
"""
Explanation: Reinforcement Learning (DQN) tutorial
http://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html
The CartPole task from OpenAI Gym
The environment state is given as four numbers (position, velocity, ...), but DQN takes an image centered on the cart as its input instead.
Strictly speaking, we will present the state as the difference between the current screen patch and the previous one. This will allow the agent to take the velocity of the pole into account from one image.
TODO: instead of DQN, train plain Q-learning using the four numeric values as the state
We use OpenAI Gym, so install it with pip install gym
End of explanation
"""
Transition = namedtuple('Transition',
('state', 'action', 'next_state', 'reward'))
# how to use namedtuple
t = Transition(1, 2, 3, 4)
print(t)
print(t.state, t.action, t.next_state, t.reward)
class ReplayMemory(object):
def __init__(self, capacity):
self.capacity = capacity
self.memory = []
self.position = 0
def push(self, *args):
"""Save a transition."""
if len(self.memory) < self.capacity:
self.memory.append(None)
self.memory[self.position] = Transition(*args)
        # once the memory is full, overwrite entries starting from the oldest
self.position = (self.position + 1) % self.capacity
def sample(self, batch_size):
return random.sample(self.memory, batch_size)
def __len__(self):
return len(self.memory)
# quick check that ReplayMemory behaves as expected
rm = ReplayMemory(3)
rm.push(1, 1, 1, 1)
rm.push(2, 2, 2, 2)
rm.push(3, 3, 3, 3)
print(len(rm))
print(rm.memory)
rm.push(4, 4, 4, 4)
print(len(rm))
print(rm.memory)
class DQN(nn.Module):
def __init__(self):
super(DQN, self).__init__()
self.conv1 = nn.Conv2d(3, 16, kernel_size=5, stride=2)
self.bn1 = nn.BatchNorm2d(16)
self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=2)
self.bn2 = nn.BatchNorm2d(32)
self.conv3 = nn.Conv2d(32, 32, kernel_size=5, stride=2)
self.bn3 = nn.BatchNorm2d(32)
self.head = nn.Linear(448, 2)
def forward(self, x):
x = F.relu(self.bn1(self.conv1(x)))
x = F.relu(self.bn2(self.conv2(x)))
x = F.relu(self.bn3(self.conv3(x)))
return self.head(x.view(x.size(0), -1))
dqn = DQN()
dqn
resize = T.Compose([T.ToPILImage(),
T.Resize((40, 40), interpolation=Image.CUBIC),
T.ToTensor()])
"""
Explanation: Experience Replay
DQNは観測を蓄積しておいてあとでシャッフルしてサンプリングして使う
Transition - a named tuple representing a single transition in our environment
ReplayMemory - a cyclic buffer of bounded size that holds the transitions observed recently. It also implements a .sample() method for selecting a random batch of transitions for training.
End of explanation
"""
screen_width = 600
def get_cart_location():
"""台車の位置をピクセル単位で返す"""
world_width = env.x_threshold * 2
scale = screen_width / world_width
return int(env.state[0] * scale + screen_width / 2.0)
"""
Explanation: https://github.com/openai/gym/wiki/CartPole-v0
state[0] = Cart Position (-2.4, 2.4)
env.x_threshold = 2.4
End of explanation
"""
def get_screen():
"""ゲーム画面を取得する"""
# env.reset() しておかないとrenderはNoneが変えるので注意
# PyTorchの (C, H, W) の順に変換する
# default: (3, 800, 1200)
screen = env.render(mode='rgb_array').transpose((2, 0, 1))
    # crop rows 320-640, removing the top and bottom of the screen
    # (keep only the horizontal strip that contains the cart)
screen = screen[:, 320:640]
    # horizontally, cut out a view_width-wide window centered on the cart
view_width = 640
cart_location = get_cart_location()
if cart_location < view_width // 2:
        # case: the window would run off the left edge of the screen
slice_range = slice(view_width)
elif cart_location > (screen_width - view_width // 2):
        # case: the window would run off the right edge of the screen
slice_range = slice(-view_width, None)
else:
        # case: the window fits entirely within the screen
slice_range = slice(cart_location - view_width // 2,
cart_location + view_width // 2)
screen = screen[:, :, slice_range]
    # TODO: is ascontiguousarray() here for speed?
screen = np.ascontiguousarray(screen, dtype=np.float32) / 255
    # convert to a Tensor
screen = torch.from_numpy(screen)
    # resize, add a batch dimension, and return a 4D tensor
return resize(screen).unsqueeze(0).type(Tensor)
get_screen()
"""
Explanation: render() does not work from inside Jupyter Notebook!
NotImplementedError: abstract
End of explanation
"""
env.reset()
patch = get_screen()
print(patch.size()) # torch.Size([1, 3, 40, 40])
# plot the cropped game screen
env.reset()
plt.figure()
# get_screen() returns a 4D tensor, so convert it back to an ndarray for plotting
patch = get_screen().cpu().squeeze(0).permute(1, 2, 0).numpy()
plt.imshow(patch, interpolation='none')
plt.title('Example extracted screen')
plt.show()
"""
Explanation: Run the screen-capture and rendering code from a console!
End of explanation
"""
BATCH_SIZE = 128
GAMMA = 0.999
EPS_START = 0.9  # initial exploration rate
EPS_END = 0.05  # final exploration rate
EPS_DECAY = 200  # smaller values make the decay steeper
model = DQN()
if use_cuda:
model.cuda()
optimizer = optim.RMSprop(model.parameters())
memory = ReplayMemory(10000)
steps_done = 0
model
"""
Explanation: Training code
End of explanation
"""
# exploration-rate schedule
eps_list = []
for steps_done in range(2000):
eps_threshold = EPS_END + (EPS_START - EPS_END) * math.exp(-1. * steps_done / EPS_DECAY)
eps_list.append(eps_threshold)
plt.plot(range(2000), eps_list)
plt.yticks(np.arange(0.0, 1.0, 0.1))
plt.xlabel('steps')
plt.ylabel('epsilon')
plt.grid()
"""
Explanation: The exploration rate is scheduled to decrease gradually as training progresses.
Let's plot the exploration-rate curve.
End of explanation
"""
|
woobe/h2o_tutorials | introduction_to_machine_learning/py_02_data_manipulation.ipynb | mit | # Start and connect to a local H2O cluster
import h2o
h2o.init(nthreads = -1)
"""
Explanation: Machine Learning with H2O - Tutorial 2: Basic Data Manipulation
<hr>
Objective:
This tutorial demonstrates basic data manipulation with H2O.
<hr>
Titanic Dataset:
Source: https://www.kaggle.com/c/titanic/data
<hr>
Full Technical Reference:
http://docs.h2o.ai/h2o/latest-stable/h2o-py/docs/frame.html
<br>
End of explanation
"""
# Import Titanic data (local CSV)
titanic = h2o.import_file("kaggle_titanic.csv")
# Explore the dataset using various functions
titanic.head(10)
"""
Explanation: <br>
End of explanation
"""
# Explore the column 'Survived'
titanic['Survived'].summary()
# Use hist() to create a histogram
titanic['Survived'].hist()
# Use table() to summarize 0s and 1s
titanic['Survived'].table()
# Convert 'Survived' to categorical variable
titanic['Survived'] = titanic['Survived'].asfactor()
# Look at the summary of 'Survived' again
# The feature is now an 'enum' (enum is Java's name for a categorical variable)
titanic['Survived'].summary()
"""
Explanation: <br>
Explain why we need to transform
<br>
End of explanation
"""
# Explore the column 'Pclass'
titanic['Pclass'].summary()
# Use hist() to create a histogram
titanic['Pclass'].hist()
# Use table() to summarize 1s, 2s and 3s
titanic['Pclass'].table()
# Convert 'Pclass' to categorical variable
titanic['Pclass'] = titanic['Pclass'].asfactor()
# Explore the column 'Pclass' again
titanic['Pclass'].summary()
"""
Explanation: <br>
Doing the same for 'Pclass'
<br>
End of explanation
"""
|
chapmanbe/utah_highschool_airquality | introducing_python/working_with_air_quality_data.ipynb | apache-2.0 | %matplotlib inline
import os
import pandas as pd
import datetime
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import YouTubeVideo
#!pip install xlrd
"""
Explanation: Examining Weather Data and Air Quality Data
In this notebook we are going to learn how to read tabular data (e.g. spreadsheets) with the Python package Pandas. Pandas is a very useful tool for data science applications and provides a number of built-in functions that make it much easier for us to write programs.
End of explanation
"""
DATADIR = os.path.join(os.path.expanduser("~"), "DATA", "TimeSeries", "EPA")
os.path.exists(DATADIR)
"""
Explanation: We need to create a variable that will tell our program where the data are located
End of explanation
"""
files = os.listdir(DATADIR)
files
"""
Explanation: What files are in the directory?
End of explanation
"""
slc = pd.read_excel(os.path.join(DATADIR, 'Salt_Lake_2016_PM25.xlsx'))
"""
Explanation: Read the air quality data
We'll use the Pandas read_excel function to read our data into a Pandas dataframe. Pandas will automatically recognize column names and data types (e.g. text, numbers). After we read in the data, we'll take a quick look at what it looks like.
End of explanation
"""
print(slc.columns)
print(slc.shape)
"""
Explanation: A dataframe is an object with attributes and methods.
Some important attributes are
columns
shape
Useful methods include head and tail
End of explanation
"""
slc.head(10)
"""
Explanation: In addition to looking at the column names, we can also look at the data
See what happens when you pass an integer to head (e.g. head(15)). Try changing head to tail.
End of explanation
"""
YouTubeVideo("A4ZysWTWXEk")
slc = slc[["Date Local", "Time Local",
"Sample Measurement", "MDL",
"Latitude", "Longitude", "Site Num"]]
"""
Explanation: There's a lot of stuff here, more than we're interested in. Throw it away
What columns do we want to keep?
What is the difference between Time local and Time GMT?
Hint: GMT is more appropriately called UTC
There are lots of ways to throw data away
We can tell Pandas what columns to read when reading in the data.
We can tell Pandas to drop particular columns
We can create a new pandas dataframe by explicitly stating which columns we want to use. (This is the approach we will use.)
End of explanation
"""
obs=20
print(slc.loc[obs]["Date Local"], slc.loc[obs]["Time Local"])
print(datetime.datetime.combine(slc.loc[obs]["Date Local"],
slc.loc[obs]["Time Local"]))
"""
Explanation: Comments:
Dates and times are split into separate columns
We have both local time and UTC time
Merging Dates and Time
Dates and times are an important data type, and a tricky one---think leap years, time zones, days of the week, and so on. Luckily, Python comes with date and time objects and lots of functions and methods for working with them.
We'll use the datetime object combine method to merge the date and time columns.
End of explanation
"""
slc["Date/Time Local"] = \
slc.apply(lambda x: datetime.datetime.combine(x["Date Local"],
x["Time Local"]),
axis=1)
slc["Date/Time Local"].tail()
"""
Explanation: Applying datetime.combine to all the dates and times in our dataframe
We're going to create a new column called "Date/Time Local" using the dataframe method apply. apply takes a function and applies it to the data in the dataframe. In this case we going to do some fancy Python and create what is called an anonymous function with a lambda statement.
See if you can describe what we are doing.
End of explanation
"""
slc[slc["Site Num"]==3006].plot(x="Date Local",
y="Sample Measurement")
"""
Explanation: Let's look at the data
Since we have two different measurement sites, we're going to select only the data for site 3006.
End of explanation
"""
slc_weather = pd.read_excel(os.path.join(DATADIR, 'SLC_Weather_2016.xlsx'))
slc_weather.head()
"""
Explanation: Read in weather data
End of explanation
"""
slc_weather = pd.read_excel(os.path.join(DATADIR,
'SLC_Weather_2016.xlsx'),
skiprows=[1],
na_values='-')
slc_weather.head()
slc_weather['Day'][0]
slc_weather.plot(x="Day", y="High")
"""
Explanation: The data file uses the 2nd row to describe the units.
This is confusing Pandas
Let's skip the second row
This file uses a "-" to indicate data are missing. We need to tell Pandas this.
End of explanation
"""
slc_day = slc.groupby("Date Local", as_index=False).aggregate(np.mean)
slc_day.head()
"""
Explanation: Our Weather Data Have Resolution of Days
Our pollutant data has resolution of hours
What should we do?
We want to aggregate the data across days.
How might we do this?
What was the maximum value?
What was the minimum value?
What was the sum of the value?
What was the average (mean) of the value?
End of explanation
"""
slc.groupby("Date Local", as_index=False).aggregate(np.sum).head()
"""
Explanation: Group and take sum?
End of explanation
"""
slc_day_all = slc_day.merge(slc_weather,
left_on="Date Local",
right_on="Day")
slc_day_all.head()
"""
Explanation: Now we need to combine the pollution data with the weather data
End of explanation
"""
f, ax1 = plt.subplots(1)
slc_day_all[slc_day_all["Site Num"]==3006].plot(x="Date Local",
y="High", ax=ax1)
slc_day_all[slc_day_all["Site Num"]==3006].plot(secondary_y=True, x="Date Local",
y="Sample Measurement", ax=ax1)
"""
Explanation: Explore the Relationship between various weather variables and Sample Measurement
End of explanation
"""
|
markovmodel/adaptivemd | examples/tutorial/4_example_advanced_tasks.ipynb | lgpl-2.1 | from adaptivemd import Project, File, PythonTask, Task
"""
Explanation: AdaptiveMD
Example 4 - Custom Task objects
0. Imports
End of explanation
"""
project = Project('tutorial')
"""
Explanation: Let's open our test project by its name. If you completed the first examples this should all work out of the box.
Open all connections to the MongoDB and Session so we can get started.
End of explanation
"""
print project.files
print project.generators
print project.models
"""
Explanation: Let's see again where we are. These numbers will depend on whether you run this notebook for the first time or just continue again. Unless you delete your project it will accumulate models and files over time, as is our ultimate goal.
End of explanation
"""
engine = project.generators['openmm']
modeller = project.generators['pyemma']
pdb_file = project.files['initial_pdb']
"""
Explanation: Now restore our old ways to generate tasks by loading the previously used generators.
End of explanation
"""
task = engine.run(project.new_trajectory(pdb_file, 100))
task.script
"""
Explanation: A simple task
A task is in essence a bash script-like description of what should be executed by the worker. It has details about files to be linked to the working directory, bash commands to be executed and some meta information about what should happen in case we succeed or fail.
The execution structure
Let's first explain briefly how a task is executed and what its components are. This was originally built to be compatible with radical.pilot and still is, so if you are familiar with radical.pilot, all of the following should sound very familiar.
A task is executed from within a unique directory that only exists for this particular task. These are located in adaptivemd/workers/ and look like
worker.0x5dcccd05097611e7829b000000000072L/
the long number is a hex representation of the UUID of the task. If you are curious, type
print hex(my_task.__uuid__)
Then we change directory to this folder, write a running.sh bash script, and execute it. This script is created from the task definition and also depends on your resource settings (which basically only contain the path to the workers directory, etc.)
The script is divided into 1 or 3 parts depending on which Task class you use. The main Task uses a single list of commands, while PrePostTask has the following structure
Pre-Exec: Things to happen before the main command (optional)
Main: the main commands are executed
Post-Exec: Things to happen after the main command (optional)
Okay, lots of theory; now some real code for running a task that generates a trajectory
End of explanation
"""
print task.description
"""
Explanation: We are linking a lot of files to the worker directory, changing the name of the .pdb in the process. Then we call the actual Python script that runs OpenMM, and finally move the output .dcd and the restart file back to the trajectory folder.
There is a way to list lots of things about tasks, and we will use it a lot to see our modifications.
End of explanation
"""
task.append('echo "This new line is pointless"')
print task.description
"""
Explanation: Modify a task
As long as a task is not saved and hence placed in the queue, it can be altered in any way. All of the 3 / 5 phases can be changed separately. You can add things to the staging phases or bash phases or change the command. So, let's do that now
Add a bash line
First, a Task is very similar to a list of bash commands and you can simply append (or prepend) a command. A text line will be interpreted as a bash command.
End of explanation
"""
traj = project.trajectories.one
transaction = traj.copy()
print transaction
"""
Explanation: As expected this line was added to the end of the script.
Add staging actions
To set staging is more difficult. The reason is that you normally have no idea where files are located, and hence writing a copy or move is impossible. This is why the staging commands are not bash lines but objects that hold information about the actual file transaction to be done. There are some task methods that help you move files, but files themselves can also generate these commands for you.
Let's move one trajectory (directory) around a little more as an example
End of explanation
"""
transaction = traj.copy('new_traj/')
print transaction
"""
Explanation: This is how it will appear in the script. The default for a copy is to place the file or folder in the worker directory under the same name, but you can give it another name/location by passing that as an argument. Note that since trajectories are directories, you need to give a directory name (which ends in a /)
End of explanation
"""
transaction = traj.copy('staging:///cached_trajs/')
print transaction
"""
Explanation: If you want to move it somewhere other than the worker directory, you have to specify the location, which you can do with the prefixes (shared://, sandbox://, staging://) explained in the previous examples.
End of explanation
"""
transaction = pdb_file.copy('staging:///delete.pdb')
print transaction
transaction = pdb_file.move('staging:///delete.pdb')
print transaction
transaction = pdb_file.link('staging:///delete.pdb')
print transaction
"""
Explanation: Besides .copy you can also .move or .link files.
End of explanation
"""
new_pdb = File('file://../files/ntl9/ntl9.pdb').load()
"""
Explanation: Local files
Let's mention these because they require special treatment: we cannot copy local files directly to the HPC; we need to store them in the DB first.
End of explanation
"""
print new_pdb.location
"""
Explanation: Make sure you use file:// to indicate that you are using a local file. The above example uses a relative path, which will be replaced by an absolute one; otherwise we would run into trouble once we open the project from a different directory.
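As a sketch of what this path normalization amounts to (plain Python, independent of adaptivemd):

```python
import os

rel = '../files/ntl9/ntl9.pdb'
# resolve relative to the current working directory, as the project
# would do when the File object is created
abs_path = os.path.abspath(rel)
print(abs_path)  # absolute, so reopening the project elsewhere is safe
```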
End of explanation
"""
print new_pdb.get_file()[:300]
"""
Explanation: Note that there are now three slashes in the filename: two from the :// and one from the root directory of your machine.
The load() at the end actually loads the file, and when you save this File now it will contain the content of the file. You can access this content as seen in the previous example.
End of explanation
"""
transaction = new_pdb.transfer()
print transaction
task.append(transaction)
print task.description
"""
Explanation: For local files you normally use .transfer, but copy, move or link work as well. In practice there is no difference, since the file now exists only in the DB, and copying from the DB to a place on the HPC results in a simple file creation.
Now, we want to add a command to the staging and see what happens.
End of explanation
"""
new_pdb.exists
"""
Explanation: We now have one more transfer command. But something else has changed: there is one more file listed as required. The task can only run if that file exists, and since we loaded it into the DB, it exists (for us). For example, the newly created trajectory 25.dcd does not exist yet; were that a requirement, the task would fail. But let's check that our file exists.
End of explanation
"""
task.append('stat ntl9.pdb')
"""
Explanation: Okay, we now have the PDB file staged, so any real bash command can work with the file ntl9.pdb. Alright, let's output its stats.
End of explanation
"""
project.queue(task)
"""
Explanation: Note that you usually place these stage commands at the top of your script.
Now we can run this task as before and see if it works. (Make sure you still have a worker running.)
End of explanation
"""
task.state
"""
Explanation: And check that the task is running
End of explanation
"""
print task.stdout
"""
Explanation: If we did not screw up the task, it should have succeeded and we can look at the STDOUT.
End of explanation
"""
from adaptivemd import WorkerScheduler
sc = WorkerScheduler(project._current_configuration)
"""
Explanation: Well, great: we have the pointless output and the stats of the newly staged file ntl9.pdb.
What does a real script look like?
Just for fun let's create the same scheduler that the adaptivemdworker uses, but from inside this notebook.
End of explanation
"""
sc.project = project
"""
Explanation: If you really wanted to use the worker, you would need to initialize it, and it would create directories and stage files for the generators, etc. For that you need to call sc.enter(project), but since we only want it to parse our tasks, we just set the project without invoking initialization. You should normally not do that.
End of explanation
"""
print '\n'.join(sc.task_to_script(task))
"""
Explanation: Now we can use a function .task_to_script that will parse a task into a bash script. So this is really what would be run on your machine now.
End of explanation
"""
task = Task()
task.append('touch staging:///my_file.txt')
print '\n'.join(sc.task_to_script(task))
"""
Explanation: Now you see that all file paths have been properly interpreted. See that there is a comment about a temporary file from the DB that is then renamed. This is a little trick to be compatible with RP's way of handling files. (TODO: We might change this to just write to the target file. Need to check if that is still consistent.)
A note on file locations
One problem with bash scripts is that when you create the tasks you have no idea where the files are actually located. To get around this, the created bash script is scanned for paths that contain the prefixes we are used to, and these are interpreted in the context of the worker / scheduler. The worker is the only instance that knows everything necessary, so this is the place to fix that problem.
Let's see that in a little example, where we create an empty file in the staging area.
End of explanation
"""
task = Task()
"""
Explanation: And voila, the path has changed to a relative path from the working directory of the worker. Note that you see here the line we added in the very beginning of example 1 to our resource!
A Task from scratch
If you want to start a new task you can begin with
End of explanation
"""
%%file my_rpc_function.py
def my_func(f):
import os
print f
return os.path.getsize(f)
"""
Explanation: as we did before.
Just start adding staging and bash commands and you are done. When you create a task you can assign it a generator; then the system will assume that this task was generated by that generator, so don't do this for your custom tasks unless you generated them in a generator. Setting this allows you to tell a worker to run only tasks of certain types.
The Python RPC Task
The tasks so far are very powerful, but they lack the ability to call a python function directly. Since we are using python here, it would be great to really pretend to call a python function from here, without the detour of writing a python bash executable with arguments, etc. An example of this is the PyEmma generator, which uses this capability.
Let's do an example of this as well. Assume we have a python function in a file (for now you need to have your code in a file so that it can be copied to the HPC if necessary). Let's create the .py file now.
End of explanation
"""
task = PythonTask(modeller)
"""
Explanation: Now create a PythonTask instead
End of explanation
"""
from my_rpc_function import my_func
"""
Explanation: and the call function has changed. Note also that you can still add all the bash and stage commands as before. A PythonTask is also a subclass of PrePostTask, so we have .pre and .post phases available.
End of explanation
"""
task.call(my_func, f=project.trajectories.one)
print task.description
"""
Explanation: We call the function my_func with one argument
End of explanation
"""
project.queue(task)
"""
Explanation: Well, interesting. What this actually does is write the input arguments for the function into a temporary .json file on the worker (in RP on the local machine, which then transfers it to the remote), rename it to input.json, and read it in _run_.py. This is still a little clumsy, but it needs to be this way to stay RP compatible, since RP only works with files! Look at the actual script.
You see that we really copy the .py file that contains the source code to the worker directory. All of that is done automatically. One caution on this: you can either write a function in a single file or use any installed package, but in the latter case the same package needs to be installed on the remote machine as well!
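A rough sketch of that round trip (the payload layout below is invented for illustration and is not the exact format adaptivemd writes):

```python
import json
import os

# serialize the call: a function name plus keyword arguments
payload = {'function': 'my_func', 'kwargs': {'f': 'alpha/000025.dcd'}}
with open('input.json', 'w') as fh:
    json.dump(payload, fh)

# a generic runner script would read it back and dispatch the call
with open('input.json') as fh:
    loaded = json.load(fh)
print(loaded['function'], loaded['kwargs'])

os.remove('input.json')  # clean up the temporary file
```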
Let's run it and see what happens.
End of explanation
"""
project.wait_until(task.is_done)
"""
Explanation: And wait until the task is done
End of explanation
"""
task.output
"""
Explanation: The default settings will automatically save the content from the resulting output.json in the DB, and you can access the data that was returned from the task at .output. In our example the result was just the size of the file in bytes.
End of explanation
"""
task = modeller.execute(project.trajectories)
task.then_func_name
"""
Explanation: And you can use this information in an adaptive script to make decisions.
success callback
The last thing we did not talk about is the possibility to also call a function with the returned data automatically on successful execution. Since this function is executed on the worker we (so far) only support function calls with the following restrictions.
You can call a function of the related generator class. For this you need to create the task using PythonTask(generator).
The function name you want to call is stored in task.then_func_name. So you can write a generator class with several possible outcomes and choose the function for each task.
The Generator needs to be part of adaptivemd
So in the case of modeller.execute we create a PythonTask that references the following functions
End of explanation
"""
help(modeller.then_func)
"""
Explanation: So we will call the default then_func of modeller, or rather of the class that modeller is an instance of.
End of explanation
"""
project.close()
"""
Explanation: These callbacks are called with the current project, the resulting data (which in the modeller case is a Model object) and an array of initial inputs.
This is the actual code of the callback
py
@staticmethod
def then_func(project, task, model, inputs):
# add the input arguments for later reference
model.data['input']['trajectories'] = inputs['kwargs']['files']
model.data['input']['pdb'] = inputs['kwargs']['topfile']
project.models.add(model)
All it does is add some of the input parameters to the model for later reference and then store the model in the project. You are free to define all sorts of actions here, even queueing new tasks.
Next, we will talk about the factories for Task objects, called generators. There we will actually write a new class that does some stuff with the results.
End of explanation
"""
# Source: qinjian623/dlnotes, cs231n/assignments/assignment1/.ipynb_checkpoints/svm-checkpoint.ipynb (license: gpl-3.0)
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the
# notebook rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
"""
Explanation: Multiclass Support Vector Machine exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
In this exercise you will:
implement a fully-vectorized loss function for the SVM
implement the fully-vectorized expression for its analytic gradient
check your implementation using numerical gradient
use a validation set to tune the learning rate and regularization strength
optimize the loss function with SGD
visualize the final learned weights
End of explanation
"""
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print 'Training data shape: ', X_train.shape
print 'Training labels shape: ', y_train.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Split the data into train, val, and test sets. In addition we will
# create a small development set as a subset of the training data;
# we can use this for development so our code runs faster.
num_training = 49000
num_validation = 1000
num_test = 1000
num_dev = 500
# Our validation set will be num_validation points from the original
# training set.
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
# Our training set will be the first num_train points from the original
# training set.
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
# We will also make a development set, which is a small subset of
# the training set.
mask = np.random.choice(num_training, num_dev, replace=False)
X_dev = X_train[mask]
y_dev = y_train[mask]
# We use the first num_test points of the original test set as our
# test set.
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
print 'Train data shape: ', X_train.shape
print 'Train labels shape: ', y_train.shape
print 'Validation data shape: ', X_val.shape
print 'Validation labels shape: ', y_val.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Preprocessing: reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_val = np.reshape(X_val, (X_val.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))
# As a sanity check, print out the shapes of the data
print 'Training data shape: ', X_train.shape
print 'Validation data shape: ', X_val.shape
print 'Test data shape: ', X_test.shape
print 'dev data shape: ', X_dev.shape
# Preprocessing: subtract the mean image
# first: compute the image mean based on the training data
mean_image = np.mean(X_train, axis=0)
print mean_image[:10] # print a few of the elements
plt.figure(figsize=(4,4))
plt.imshow(mean_image.reshape((32,32,3)).astype('uint8')) # visualize the mean image
plt.show()
# second: subtract the mean image from train and test data
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
X_dev -= mean_image
# third: append the bias dimension of ones (i.e. bias trick) so that our SVM
# only has to worry about optimizing a single weight matrix W.
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])
print X_train.shape, X_val.shape, X_test.shape, X_dev.shape
"""
Explanation: CIFAR-10 Data Loading and Preprocessing
End of explanation
"""
# Evaluate the naive implementation of the loss we provided for you:
from cs231n.classifiers.linear_svm import svm_loss_naive
import time
# generate a random SVM weight matrix of small numbers
W = np.random.randn(3073, 10) * 0.0001
loss, grad = svm_loss_naive(W, X_dev, y_dev, 0.00001)
print 'loss: %f' % (loss, )
"""
Explanation: SVM Classifier
Your code for this section will all be written inside cs231n/classifiers/linear_svm.py.
As you can see, we have prefilled the function compute_loss_naive which uses for loops to evaluate the multiclass SVM loss function.
End of explanation
"""
# Once you've implemented the gradient, recompute it with the code below
# and gradient check it with the function we provided for you
# Compute the loss and its gradient at W.
loss, grad = svm_loss_naive(W, X_dev, y_dev, 0.0)
# Numerically compute the gradient along several randomly chosen dimensions, and
# compare them with your analytically computed gradient. The numbers should match
# almost exactly along all dimensions.
from cs231n.gradient_check import grad_check_sparse
f = lambda w: svm_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad)
# do the gradient check once again with regularization turned on
# you didn't forget the regularization gradient did you?
loss, grad = svm_loss_naive(W, X_dev, y_dev, 1e2)
f = lambda w: svm_loss_naive(w, X_dev, y_dev, 1e2)[0]
grad_numerical = grad_check_sparse(f, W, grad)
"""
Explanation: The grad returned from the function above is all zero right now. Derive the gradient for the SVM cost function and implement it inline inside the function svm_loss_naive. You will find it helpful to interleave your new code with the existing function.
To check that you have implemented the gradient correctly, you can numerically estimate the gradient of the loss function and compare the numeric estimate to the gradient that you computed. We have provided code that does this for you:
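The numeric estimate behind such a check is a centered difference; here is a small self-contained sketch (this is not the provided grad_check_sparse, just the core formula):

```python
import numpy as np

def numerical_gradient(f, x, h=1e-5):
    """Centered-difference estimate of df/dx at x.

    x is perturbed in place one entry at a time and then restored.
    """
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        ix = it.multi_index
        old = x[ix]
        x[ix] = old + h
        fxph = f(x)          # f(x + h) along this dimension
        x[ix] = old - h
        fxmh = f(x)          # f(x - h) along this dimension
        x[ix] = old          # restore
        grad[ix] = (fxph - fxmh) / (2.0 * h)
        it.iternext()
    return grad

# sanity check on f(x) = sum(x**2), whose analytic gradient is 2x
x = np.array([[1.0, -2.0], [0.5, 3.0]])
g = numerical_gradient(lambda v: np.sum(v ** 2), x)
print(g)  # close to 2 * x
```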
End of explanation
"""
# Next implement the function svm_loss_vectorized; for now only compute the loss;
# we will implement the gradient in a moment.
tic = time.time()
loss_naive, grad_naive = svm_loss_naive(W, X_dev, y_dev, 0.00001)
toc = time.time()
print 'Naive loss: %e computed in %fs' % (loss_naive, toc - tic)
from cs231n.classifiers.linear_svm import svm_loss_vectorized
tic = time.time()
loss_vectorized, _ = svm_loss_vectorized(W, X_dev, y_dev, 0.00001)
toc = time.time()
print 'Vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic)
# The losses should match but your vectorized implementation should be much faster.
print 'difference: %f' % (loss_naive - loss_vectorized)
# Complete the implementation of svm_loss_vectorized, and compute the gradient
# of the loss function in a vectorized way.
# The naive implementation and the vectorized implementation should match, but
# the vectorized version should still be much faster.
tic = time.time()
_, grad_naive = svm_loss_naive(W, X_dev, y_dev, 0.00001)
toc = time.time()
print 'Naive loss and gradient: computed in %fs' % (toc - tic)
tic = time.time()
_, grad_vectorized = svm_loss_vectorized(W, X_dev, y_dev, 0.00001)
toc = time.time()
print 'Vectorized loss and gradient: computed in %fs' % (toc - tic)
# The loss is a single number, so it is easy to compare the values computed
# by the two implementations. The gradient on the other hand is a matrix, so
# we use the Frobenius norm to compare them.
difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print 'difference: %f' % difference
"""
Explanation: Inline Question 1:
It is possible that once in a while a dimension in the gradcheck will not match exactly. What could such a discrepancy be caused by? Is it a reason for concern? What is a simple example in one dimension where a gradient check could fail? Hint: the SVM loss function is not strictly speaking differentiable
Your Answer: fill this in.
End of explanation
"""
# In the file linear_classifier.py, implement SGD in the function
# LinearClassifier.train() and then run it with the code below.
from cs231n.classifiers import LinearSVM
svm = LinearSVM()
tic = time.time()
loss_hist = svm.train(X_train, y_train, learning_rate=1e-7, reg=5e4,
num_iters=1500, verbose=True)
toc = time.time()
print 'That took %fs' % (toc - tic)
# A useful debugging strategy is to plot the loss as a function of
# iteration number:
plt.plot(loss_hist)
plt.xlabel('Iteration number')
plt.ylabel('Loss value')
plt.show()
# Write the LinearSVM.predict function and evaluate the performance on both the
# training and validation set
y_train_pred = svm.predict(X_train)
print 'training accuracy: %f' % (np.mean(y_train == y_train_pred), )
y_val_pred = svm.predict(X_val)
print 'validation accuracy: %f' % (np.mean(y_val == y_val_pred), )
# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of about 0.4 on the validation set.
learning_rates = [1e-7, 5e-5]
regularization_strengths = [5e4, 1e5]
# results is dictionary mapping tuples of the form
# (learning_rate, regularization_strength) to tuples of the form
# (training_accuracy, validation_accuracy). The accuracy is simply the fraction
# of data points that are correctly classified.
results = {}
best_val = -1 # The highest validation accuracy that we have seen so far.
best_svm = None # The LinearSVM object that achieved the highest validation rate.
################################################################################
# TODO: #
# Write code that chooses the best hyperparameters by tuning on the validation #
# set. For each combination of hyperparameters, train a linear SVM on the #
# training set, compute its accuracy on the training and validation sets, and #
# store these numbers in the results dictionary. In addition, store the best #
# validation accuracy in best_val and the LinearSVM object that achieves this #
# accuracy in best_svm. #
# #
# Hint: You should use a small value for num_iters as you develop your #
# validation code so that the SVMs don't take much time to train; once you are #
# confident that your validation code works, you should rerun the validation #
# code with a larger value for num_iters. #
################################################################################
pass
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print 'lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy)
print 'best validation accuracy achieved during cross-validation: %f' % best_val
# Visualize the cross-validation results
import math
x_scatter = [math.log10(x[0]) for x in results]
y_scatter = [math.log10(x[1]) for x in results]
# plot training accuracy
marker_size = 100
colors = [results[x][0] for x in results]
plt.subplot(2, 1, 1)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 training accuracy')
# plot validation accuracy
colors = [results[x][1] for x in results] # default size of markers is 20
plt.subplot(2, 1, 2)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 validation accuracy')
plt.show()
# Evaluate the best svm on test set
y_test_pred = best_svm.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print 'linear SVM on raw pixels final test set accuracy: %f' % test_accuracy
# Visualize the learned weights for each class.
# Depending on your choice of learning rate and regularization strength, these may
# or may not be nice to look at.
w = best_svm.W[:-1,:] # strip out the bias
w = w.reshape(32, 32, 3, 10)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in xrange(10):
plt.subplot(2, 5, i + 1)
# Rescale the weights to be between 0 and 255
wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
plt.imshow(wimg.astype('uint8'))
plt.axis('off')
plt.title(classes[i])
"""
Explanation: Stochastic Gradient Descent
We now have vectorized and efficient expressions for the loss, the gradient and our gradient matches the numerical gradient. We are therefore ready to do SGD to minimize the loss.
End of explanation
"""
# Source: lowks/micromeritics, documentation/BET.ipynb (license: gpl-3.0)
%matplotlib inline
from micromeritics import bet, util, isotherm_examples as ex, plots
s = ex.carbon_black() # example isotherm of Carbon Black with N2.
min = 0.05 # 0.05 to 0.30 range for BET
max = 0.3
Q,P = util.restrict_isotherm(s.Qads, s.Prel, min, max)
plots.plotIsotherm(s.Qads, s.Prel, s.descr[s.descr.find(':')+1:], min, max )
"""
Explanation: BET Surface Area
The BET equation for determining the specific surface area from multilayer adsorption of nitrogen was first reported in 1938.
Brunauer, Stephen, Paul Hugh Emmett, and Edward Teller. "Adsorption of gases in multimolecular layers." Journal of the American Chemical Society 60, no. 2 (1938): 309-319.
BET Surface Area Calculation Description
The BET data reduction applies to isotherm data. The isotherm consists of the quantity adsorbed $Q_i$ (in cm^3/g STP) and the relative pressure $P^{rel}_i$ for each point $i$ selected for the calculation.
The BET model also requires the cross-sectional area of the adsorptive, $\sigma_{ads}$ (in nm^2).
BET transformation calculation
The first thing is to calculate the BET transform $T_i$. This is done as follows:
$\displaystyle {T_i=\frac{1}{Q_i(1/P^{rel}_i-1)}}$
Then a least-squares fit is performed on the $T_i$ vs. $P^{rel}_i$ data. This fit yields the following:
$m$: The slope of the best fit line.
$Y_0$: The Y-intercept of the best fit line.
$\sigma_m$: The uncertainty of the slope from the fit calculation.
$\sigma_{Y_0}$: The uncertainty of the Y-intercept from the fit calculation.
$r$: The correlation coefficient between $T_i$ and $P^{rel}_i$.
Calculating the BET results
The slope and intercept of the line may be used to calculate the monolayer capacity $Q_m$ and the BET $C$ constant.
The first thing to calculate is the BET $C$ value:
$\displaystyle {C = 1 + \frac{m}{Y_0}}$
From this we can calculate the monolayer capacity $Q_m$:
$\displaystyle Q_m = \frac{1}{C*Y_0}$
Finally, we can calculate the BET surface area $A_{BET}$:
$\displaystyle A_{BET} =\frac{N_A \sigma_{ads}}{V_{STP} U_{mn^2,m^2} (m + Y_0)}$
Where:
* $V_{STP}$: volume of a mole of gas at STP: $22414.0$ cm^3
* $N_A$: Avogadro's number (molecules per mole of gas): $6.02214129\times 10^{23}$.
* $U_{mn^2,m^2}$: Unit conversion from nm^2 to m^2: $10^{18}$.
Finally, we can find the uncertainty in the surface area $\sigma_{SA_{BET}}$ from the uncertainty in the line fit results:
$\displaystyle \sigma_{A_{BET}} = A_{BET} \frac{\sqrt{\sigma_m^2+\sigma_{Y_0}^2}}{m + Y_0}$
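The calculation above can be sketched directly in numpy; the isotherm numbers below are made up purely to illustrate the structure of the computation:

```python
import numpy as np

p = np.array([0.06, 0.10, 0.15, 0.20, 0.25])  # relative pressures P_rel
q = np.array([10.2, 11.0, 11.9, 12.7, 13.6])  # quantity adsorbed, cm^3/g STP

t = 1.0 / (q * (1.0 / p - 1.0))               # BET transform T_i
m, y0 = np.polyfit(p, t, 1)                   # slope m and intercept Y_0

C = 1.0 + m / y0                              # BET C constant
q_m = 1.0 / (C * y0)                          # monolayer capacity Q_m

sigma_ads = 0.162                             # N2 cross section, nm^2
N_A = 6.02214129e23
sa = (N_A * sigma_ads) / (22414.0 * 1e18 * (m + y0))  # A_BET, m^2/g
print(C, q_m, sa)
```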
BET: Example Calculation
As an example of the BET calculation, we use the reference calculation from the report-models-python repository on github from Micromeritics. This tool not only provides the reference calculations, but also provides example data to work with, and utilities that make working with the data easier.
For this example, we start with a Carbon Black reference material $N_2$ isotherm at 77K, and restrict it to the usual 0.05 to 0.3 relative pressure range.
End of explanation
"""
B = bet.bet(Q, P, 0.162)
plots.plotBET(P, B.transform, B.line_fit.slope, B.line_fit.y_intercept, max )
"""
Explanation: We then use the BET reference calculation on that restricted range (along with the molecular cross-sectional area of 0.162 nm^2) to do the calculation above. Below we show the transform plot along with its best-fit line.
End of explanation
"""
print("BET surface area: %.4f ± %.4f m²/g" % (B.sa, B.sa_err))
print("C: %.6f" % B.C)
print("Qm: %.4f cm³/g STP" % B.q_m)
"""
Explanation: Finally, we show the BET results.
End of explanation
"""
# Source: barronh/GCandPython, PNC_03DC3Eval.ipynb (license: gpl-3.0)
# Prepare my slides
%pylab inline
%cd working
"""
Explanation: ICARTT Evaluation
Author: Barron H. Henderson
This presentation will teach basics of spatial interpolation and statistical evaluation using Python with PseudoNetCDF, numpy, and scipy.
End of explanation
"""
from PseudoNetCDF import PNC
"""
Explanation: Evaluation
Step one: pairing data
Homogenize dimensions
Use test metrics
Pairing Data - Better Interpolation
In PNC_02Figures, we masked data instead of matching data.
Ideally, we would have a multi-dimensional interpolation.
For this we will use scipy KDTree
First, load the data again
End of explanation
"""
margs = PNC('--format=bpch,nogroup=("IJ-AVG-$","PEDGE-$"),vertgrid="GEOS-5-NATIVE"', \
"bpch/ctm.bpch.v10-01-public-Run0.2013050100")
mfile = margs.ifiles[0]
mlon = mfile.variables['longitude']
mlat = mfile.variables['latitude']
MLON, MLAT = np.meshgrid(mlon, mlat)
mO3 = mfile.variables['O3']
"""
Explanation: Read Model Data
End of explanation
"""
oargs = PNC("-f", "ffi1001",
"--rename=v,O3_ESRL,O3",
"--variables=Fractional_Day,O3_ESRL,LATITUDE,LONGITUDE,PRESSURE",
"--rename=v,LATITUDE,latitude",
"--expr=longitude=np.where(LONGITUDE>180,LONGITUDE-360,LONGITUDE);",
"--expr=time=Fractional_Day*24*3600;time.units=\"seconds since 2011-12-31\"",
"icartt/dc3-mrg60-dc8_merge_20120518_R7_thru20120622.ict")
ofile = oargs.ifiles[0]
olons = ofile.variables['longitude']
olats = ofile.variables['latitude']
oO3 = ofile.variables['O3']
"""
Explanation: Read Observations
End of explanation
"""
import scipy.spatial
?scipy.spatial
?scipy.spatial.KDTree
"""
Explanation: Scipy has a spatial library
End of explanation
"""
from scipy.spatial import KDTree
tree = KDTree(np.ma.array([MLAT.ravel(), MLON.ravel()]).T)
dists, idxs = tree.query(np.ma.array([olats, olons]).T)
latidxs, lonidxs = np.unravel_index(idxs, MLAT.shape)
plt.close()
mpO3 = mO3[:, :, latidxs, lonidxs]
mpO3.shape
moO3 = mpO3[:, 22]
moO3.shape
"""
Explanation: Perform 2-d nearest neighbor interpolation
End of explanation
"""
from scipy.stats.mstats import linregress
?linregress
lr = linregress(oO3[:], moO3[:])
lr
x = np.array([oO3[:].min(), oO3[:].max()])
plt.scatter(oO3[:], moO3)
plt.plot(x, lr.slope * x + lr.intercept, ls = '--')
plt.text(x.mean(), plt.ylim()[1], 'r=%.2f; p=%.2f' % (lr.rvalue, lr.pvalue))
"""
Explanation: Simple Regression
End of explanation
"""
from scipy.spatial import KDTree
from PseudoNetCDF import PNC
# Read Model Data
margs = PNC('-c', "layer73,valid,0.5,0.5", \
'--format=bpch,nogroup=("IJ-AVG-$","PEDGE-$"),'+
'vertgrid="GEOS-5-NATIVE"', \
"bpch/ctm.bpch.v10-01-public-Run0.2013050100")
mfile = margs.ifiles[0]
mlon = mfile.variables['longitude']
mlat = mfile.variables['latitude']
mpres = mfile.variables['PSURF']
mO3 = mfile.variables['O3']
# Read Observations
oargs = PNC("-f", "ffi1001", "--rename=v,O3_ESRL,O3", \
"--variables=Fractional_Day,O3_ESRL,LATITUDE,LONGITUDE,PRESSURE", \
"--expr=latitude=LATITUDE;"+\
"longitude=np.where(LONGITUDE>180,LONGITUDE-360,LONGITUDE);"+
"time=Fractional_Day*24*3600;time.units=\"seconds since 2011-12-31\"",
"icartt/dc3-mrg60-dc8_merge_20120518_R7_thru20120622.ict")
ofile = oargs.ifiles[0]
olons = ofile.variables['longitude']
olats = ofile.variables['latitude']
opres = ofile.variables['PRESSURE']
oO3 = ofile.variables['O3']
"""
Explanation: 1 Include More Variables
End of explanation
"""
MLON, MLAT = np.meshgrid(mlon, mlat)
MLAT.shape
MLON = MLON * np.ones_like(mO3[0])
MLAT = MLAT * np.ones_like(mO3[0])
MLAT.shape
tree = KDTree(np.ma.array([mpres.ravel(), MLAT.ravel(), MLON.ravel()]).T)
dists, idxs = tree.query(np.ma.array([opres, olats, olons]).T)
presidxs, latidxs, lonidxs = np.unravel_index(idxs, MLAT.shape)
moO3 = mO3[:, presidxs, latidxs, lonidxs][0]
moO3.shape
"""
Explanation: 2 Nearest Neighbor using pressure/lon/lat as coordinates
End of explanation
"""
from scipy.stats.mstats import linregress
lr = linregress(oO3[:], moO3[:])
lr
plt.scatter(oO3[:], moO3)
plt.plot([0,500], [0,500], ls = '-', lw = 3, color = 'k')
x = np.array([0, 500])
plt.plot(x, lr.slope * x + lr.intercept, ls = '--')
plt.text(200, 500, 'r=%.2f; p=%.2f' % (lr.rvalue, lr.pvalue))
plt.xlim(0, 500)
plt.ylim(0, 500)
plt.ylabel('Model')
plt.xlabel('Obs');
"""
Explanation: Plot Paired Results
End of explanation
"""
MLON, MLAT = np.meshgrid(mlon, mlat)
tree = KDTree(np.ma.array([MLAT.ravel(), MLON.ravel()]).T)
dists, idxs = tree.query(np.ma.array([olats, olons]).T)
latidxs, lonidxs = np.unravel_index(idxs, MLAT.shape)
meO3 = mO3[:, :, latidxs, lonidxs]
meO3.shape
dp = opres[None, None,:] - mpres[:, :, latidxs, lonidxs]
pidx = np.abs(dp).argmin(1)[0]
moO3 = meO3[:, pidx, np.arange(pidx.size)][0]
moO3.shape
"""
Explanation: Lon/Lat First
End of explanation
"""
from scipy.stats.mstats import linregress
lr = linregress(oO3[:], moO3[:])
plt.scatter(oO3[:], moO3)
x = np.array([0, 500])
plt.plot(x, x, ls = '-', lw = 3, color = 'k')
plt.axis('square')
plt.xlim(*x)
plt.ylim(*x)
plt.ylabel('Model')
plt.xlabel('Obs')
plt.plot(x, lr.slope * x + lr.intercept, ls = '--')
plt.text(200, 500, 'r=%.2f; p=%.2f' % (lr.rvalue, lr.pvalue));
"""
Explanation: Plot the results
End of explanation
"""
idxs = [i for i in zip(pidx, latidxs, lonidxs)]
uniqueset = set(idxs)
idxs = np.array(idxs)
ovals = []
mvals = []
for idx in uniqueset:
oi = (idxs == idx).all(1)
ovals.append(oO3[oi].mean())
mvals.append(mO3[0][idx])
print(len(ovals), len(mvals))
"""
Explanation: Remove Asymmetric Variability
The benchmark has one mean value for all months
DC3 has one value per minute
End of explanation
"""
from scipy.stats.mstats import linregress
lr = linregress(ovals[:], mvals[:])
plt.scatter(ovals, mvals)
plt.ylabel('Model'); plt.xlabel('Obs')
plt.axis('square')
x = np.array([0, 350])
plt.xlim(*x); plt.ylim(*x)
plt.plot(x, lr.slope * x + lr.intercept, ls = '--')
plt.title('r=%.2f; p=%.2f' % (lr.rvalue, lr.pvalue));
"""
Explanation: Plot Paired Time-Means
End of explanation
"""
import scipy.interpolate
?scipy.interpolate.NearestNDInterpolator
?scipy.interpolate.LinearNDInterpolator
"""
Explanation: Interpolation not KDTree
KDTree is just finding a cell
End of explanation
"""
from scipy.stats.mstats import ttest_ind
ttr = ttest_ind(ovals, mvals)
ttr
"""
Explanation: Questions
Are KDTree and NearestNDInterpolator the same?
How different would the analysis be if we had used LinearNDInterpolator?
Would time normalization still be necessary?
Testing Populations
Typical tests assume a normal distribution
Non-parametric tests
Parametric Tests
Pearson Correlation (pearsonr)
Student's t-test (ttest_ind)
Shapiro-Wilk (shapiro)
End of explanation
"""
from scipy.stats.mstats import mannwhitneyu
mwr = mannwhitneyu(ovals, mvals)
mwr
"""
Explanation: Non-Parametric Tests
Spearman Correlation (spearmanr)
Mann Whitney U (mannwhitneyu)
Wilcoxon signed-rank test (wilcoxon)
End of explanation
"""
|
ueapy/ueapy.github.io | content/notebooks/2017-05-05-matplotlib-subplots.ipynb | mit | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Today we covered a very important part of matplotlib: how to create multiple subplots in one figure.
We touched upon the following topics:
<p><div class="lev1 toc-item"><a href="#Matplotlib-Axes" data-toc-modified-id="Matplotlib-Axes-1"><span class="toc-item-num">1 </span>Matplotlib Axes</a></div><div class="lev1 toc-item"><a href="#Matplotlib-subplots" data-toc-modified-id="Matplotlib-subplots-2"><span class="toc-item-num">2 </span>Matplotlib subplots</a></div><div class="lev2 toc-item"><a href="#Simple-way:--plt.subplot()-/-plt.subplots()-/-fig.add_subplot()" data-toc-modified-id="Simple-way:--plt.subplot()-/-plt.subplots()-/-fig.add_subplot()-21"><span class="toc-item-num">2.1 </span>Simple way: <code>plt.subplot()</code> / <code>plt.subplots()</code> / <code>fig.add_subplot()</code></a></div><div class="lev2 toc-item"><a href="#More-sophisticated-way:-GridSpec" data-toc-modified-id="More-sophisticated-way:-GridSpec-22"><span class="toc-item-num">2.2 </span>More sophisticated way: <code>GridSpec</code></a></div><div class="lev2 toc-item"><a href="#GridSpec-with-Varying-Cell-Sizes" data-toc-modified-id="GridSpec-with-Varying-Cell-Sizes-23"><span class="toc-item-num">2.3 </span><code>GridSpec</code> with Varying Cell Sizes</a></div><div class="lev1 toc-item"><a href="#Manual-adjustment-for-complex-plots" data-toc-modified-id="Manual-adjustment-for-complex-plots-3"><span class="toc-item-num">3 </span>Manual adjustment for complex plots</a></div><div class="lev1 toc-item"><a href="#AxesGrid-toolbox" data-toc-modified-id="AxesGrid-toolbox-4"><span class="toc-item-num">4 </span>AxesGrid toolbox</a></div><div class="lev2 toc-item"><a href="#AxesGrid" data-toc-modified-id="AxesGrid-41"><span class="toc-item-num">4.1 </span><code>AxesGrid</code></a></div><div class="lev2 toc-item"><a href="#Blending-it-with-cartopy-to-create-maps" data-toc-modified-id="Blending-it-with-cartopy-to-create-maps-42"><span class="toc-item-num">4.2 </span>Blending it with cartopy to create maps</a></div><div class="lev2 toc-item"><a href="#AxesDivider" data-toc-modified-id="AxesDivider-43"><span class="toc-item-num">4.3 
</span><code>AxesDivider</code></a></div><div class="lev2 toc-item"><a href="#Anchored-Artists" data-toc-modified-id="Anchored-Artists-44"><span class="toc-item-num">4.4 </span>Anchored Artists</a></div><div class="lev1 toc-item"><a href="#Extra-question:-time-axis" data-toc-modified-id="Extra-question:-time-axis-5"><span class="toc-item-num">5 </span>Extra question: time axis</a></div>
OK, let's import the essentials:
End of explanation
"""
plt.rcParams['figure.facecolor'] = '0.9'
plt.rcParams['figure.figsize'] = (9, 6)
"""
Explanation: Make the figure bigger by default and use non-white facecolor to see the edges:
End of explanation
"""
txt_kw = dict(ha='left', va='center', size=16, alpha=.5)
plt.axes()
plt.xticks([]), plt.yticks([])
plt.text(0.1, 0.1, 'default axes', **txt_kw)
plt.axes([0.2, 0.7, 0.35, 0.15])
plt.xticks([]), plt.yticks([])
plt.text(0.1, 0.5, 'axes([0.2, 0.7, 0.35, 0.15])', **txt_kw)
plt.axes([0.7, 0.1, 0.4, 0.5])
plt.xticks([]), plt.yticks([])
plt.text(0.1, 0.5, 'axes([0.7, 0.1, 0.4, 0.5])',**txt_kw)
"""
Explanation: Matplotlib Axes
Axes are very similar to subplots but allow plots to be placed at any location in the figure, so if we want to put a smaller plot inside a bigger one we do so with axes.
As an argument, plt.axes() takes a sequence [left, bottom, width, height] in figure-fraction coordinates.
End of explanation
"""
ax = plt.subplot()
"""
Explanation: Matplotlib subplots
Simple way: plt.subplot() / plt.subplots() / fig.add_subplot()
The simplest way to create a subplot(s) is this:
End of explanation
"""
fig, ax = plt.subplots()
"""
Explanation: To get a figure explicitly, you need a slightly different function:
End of explanation
"""
fig = plt.figure()
ax = fig.add_subplot(111)
"""
Explanation: which is equivalent to the following two lines of code:
End of explanation
"""
plt.subplots(ncols=3, nrows=2) # , sharex=True
"""
Explanation: Of course, this function wouldn't be named subplots if it wasn't used for creating multiple subplots. You can create a grid of axes by using the special keyword arguments:
End of explanation
"""
from matplotlib.gridspec import GridSpec
"""
Explanation: Note that it returns a tuple containing a figure object and a 2x3 numpy array of axes objects.
More sophisticated way: GridSpec
End of explanation
"""
fig = plt.figure()
gs = GridSpec(3, 3)
print('Type of gs is {}'.format(type(gs)))
ax1 = fig.add_subplot(gs[0, :])
ax2 = fig.add_subplot(gs[1,:-1])
ax3 = fig.add_subplot(gs[1:, -1])
ax4 = fig.add_subplot(gs[-1,0])
ax5 = fig.add_subplot(gs[-1,-2])
for ax in fig.axes:
ax.set_xticks([])
ax.set_yticks([])
"""
Explanation: For a more complex grid of plots, you can use GridSpec. A GridSpec instance provides array-like (2d or 1d) indexing that returns the SubplotSpec instance.
For a SubplotSpec that spans multiple cells, use slice.
End of explanation
"""
gs1 = GridSpec(3, 3)
gs1.update(left=0.05, right=0.48, wspace=0.05)
ax1 = plt.subplot(gs1[:-1, :])
ax2 = plt.subplot(gs1[-1, :-1])
ax3 = plt.subplot(gs1[-1, -1])
"""
Explanation: When a GridSpec is explicitly used, you can adjust the layout parameters of subplots.
End of explanation
"""
gs = GridSpec(2, 2,
width_ratios=[1, 2],
height_ratios=[4, 1])
ax1 = plt.subplot(gs[0])
ax2 = plt.subplot(gs[1])
ax3 = plt.subplot(gs[2])
ax4 = plt.subplot(gs[3])
"""
Explanation: GridSpec with Varying Cell Sizes
By default, GridSpec creates cells of equal sizes.
You can adjust relative heights and widths of rows and columns.
Note that absolute values are meaningless, only their relative ratios matter.
End of explanation
"""
from mpl_toolkits.axes_grid1 import AxesGrid
fig = plt.figure()
axgr = AxesGrid(fig, 111,
cbar_mode="edge",
cbar_location="right",
cbar_pad=0.05,
nrows_ncols=(2, 3), axes_pad=0.4)
for ax in axgr.axes_all:
h = ax.pcolormesh(np.random.rand(100,200))
fig.colorbar(h, cax=axgr.cbar_axes[0])
# fig.colorbar(h, cax=axgr.cbar_axes[1])
"""
Explanation: Manual adjustment for complex plots
Without relying on subplots() or GridSpec, you can use axes' get_position and set_position methods to get, modify and reset coordinates of the axes manually.
Example: https://gist.github.com/dennissergeev/b7da1bcf0a347caa8d42293131c0ef5d
AxesGrid toolbox
Another useful matplotlib toolbox that allows you to create grids of subplots is AxesGrid. It is especially convenient if you need colorbars to be placed neatly beside main axes.
AxesGrid
End of explanation
"""
from mpl_toolkits.axes_grid1 import make_axes_locatable
fig, ax = plt.subplots()
im = ax.imshow(np.arange(100).reshape((10,10)))
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
plt.colorbar(im, cax=cax)
"""
Explanation: Blending it with cartopy to create maps
By default, AxesGrid creates a grid of standard matplotlib axes. However, it has an axes_class keyword that allows you to change the type of axes (all at the same time).
An example exploiting this option can be found on cartopy's website: AxesGrid example.
AxesDivider
Yet another way of attaching extra subplots to existing axes is provided by makes_axes_locatable() function.
Note: does not work with cartopy (?)
End of explanation
"""
from mpl_toolkits.axes_grid1.anchored_artists import AnchoredText
"""
Explanation: An even better use case of this function is provided on the matplotlib website.
Anchored Artists
AxesGrid toolbox offers you a collection of anchored artists. It’s a collection of matplotlib artists whose location is anchored to the (axes) bounding box, like the legend.
End of explanation
"""
fig, ax = plt.subplots()
at = AnchoredText('Figure 1a',
loc=2, prop=dict(size=18), frameon=True)
ax.add_artist(at)
"""
Explanation: For example, to create a fixed annotation in a corner of the plot we can use the following code:
End of explanation
"""
import datetime
import matplotlib.dates as mdates
x = [datetime.datetime.now() + datetime.timedelta(hours=i)
for i in range(5)]
y = np.random.rand(len(x))
x
fig, ax = plt.subplots()
ax.plot(x, y)
min_loc = mdates.MinuteLocator(interval=15)
ax.xaxis.set_major_locator(min_loc)
ax.xaxis.set_major_formatter(mdates.DateFormatter('%B %d %H'))
labels = ax.get_xticklabels()
_ = plt.setp(labels, rotation=90, fontsize=8)
"""
Explanation: panel-plots
If you are still struggling with subplots, try this little package: https://github.com/ajdawson/panel-plots
A simple-to-use system for making panel plots with matplotlib: the package makes it easy to create panels with specific sizes (either per-panel or overall figure size), which is particularly useful for journal figures that must be delivered at a given size with consistent, correct font sizing.
Extra question: time axis
There was an extra question from the group, on how to deal with time axis in matplotlib: how to change the tick frequency, the label format, etc.
End of explanation
"""
"""
Explanation: References
Matplotlib tutorial
Customizing Location of Subplot Using GridSpec
Overview of AxesGrid toolkit
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.21/_downloads/20f35983ef279d1b30aa970c81aafe26/plot_read_events.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Chris Holdgraf <choldgraf@berkeley.edu>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
"""
Explanation: Reading an event file
Read events from a file. For a more detailed discussion of events in
MNE-Python, see tut-events-vs-annotations and tut-event-arrays.
End of explanation
"""
events_1 = mne.read_events(fname, include=1)
events_1_2 = mne.read_events(fname, include=[1, 2])
events_not_4_32 = mne.read_events(fname, exclude=[4, 32])
"""
Explanation: Reading events
Below we'll read in an events file. We suggest that this file end in
-eve.fif. Note that we can read in the entire events file, or only
events corresponding to particular event types with the include and
exclude parameters.
End of explanation
"""
print(events_1[:5], '\n\n---\n\n', events_1_2[:5], '\n\n')
for ind, before, after in events_1[:5]:
print("At sample %d stim channel went from %d to %d"
% (ind, before, after))
"""
Explanation: Events objects are essentially numpy arrays with three columns:
event_sample | previous_event_id | event_id
End of explanation
"""
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
mne.viz.plot_events(events_1, axes=axs[0], show=False)
axs[0].set(title="restricted to event 1")
mne.viz.plot_events(events_1_2, axes=axs[1], show=False)
axs[1].set(title="restricted to event 1 or 2")
mne.viz.plot_events(events_not_4_32, axes=axs[2], show=False)
axs[2].set(title="keep all but 4 and 32")
plt.setp([ax.get_xticklabels() for ax in axs], rotation=45)
plt.tight_layout()
plt.show()
"""
Explanation: Plotting events
We can also plot events in order to visualize how events occur over the
course of our recording session. Below we'll plot our three event types
to see which ones were included.
End of explanation
"""
mne.write_events('example-eve.fif', events_1)
"""
Explanation: Writing events
Finally, we can write events to disk. Remember to use the naming convention
-eve.fif for your file.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/fio-ronm/cmip6/models/sandbox-1/ocean.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'fio-ronm', 'sandbox-1', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: FIO-RONM
Source ID: SANDBOX-1
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:01
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
"""
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
"""
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
"""
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
"""
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
"""
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e. Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusivity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify the order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify the coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify the order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify the coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
Properties of interior mixing on tracers in the ocean
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify the coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
Properties of interior mixing on momentum in the ocean
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify the coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Does the background interior mixing use a vertical profile for momentum (i.e. is it NOT constant)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for the sunlight penetration scheme (if applicable).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmosphere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from the atmosphere in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
"""
|
teoguso/sol_1116 | cumulant-to-pdf.ipynb | mit | !gvim data/SF_Si_bulk/invar.in
"""
Explanation: Best report ever
Everything you see here is either Markdown, LaTeX, Python, or Bash.
The spectral function
It looks like this:
\begin{equation}
A(\omega) = \frac1\pi \left| \mathrm{Im}\, G(\omega) \right|
\end{equation}
GW vs Cumulant
Mathematically very different:
\begin{equation}
G^{GW} (\omega) = \frac1{ \omega - \epsilon - \Sigma (\omega) }
\end{equation}
\begin{equation}
G^C(t_1, t_2) = G^0(t_1, t_2) e^{ i \int_{t_1}^{t_2} \int_{t'}^{t_2} dt' dt'' W (t', t'') }
\end{equation}
BUT they connect through $\mathrm{Im} W (\omega) = \frac1\pi \mathrm{Im} \Sigma ( \epsilon - \omega )$.
Implementation
Using a multi-pole representation for $\Sigma^{GW}$:
\begin{equation}
\mathrm{Im} W (\omega) = \frac1\pi \mathrm{Im} \Sigma ( \epsilon - \omega )
\end{equation}
\begin{equation}
W (\tau) = - i \lambda \bigl[ e^{ i \omega_p \tau } \theta ( - \tau ) + e^{ - i \omega_p \tau } \theta ( \tau ) \bigr]
\end{equation}
GW vs Cumulant
GW:
\begin{equation}
A(\omega) = \frac1\pi \frac{\mathrm{Im}\Sigma (\omega)}
{ [ \omega - \epsilon - \mathrm{Re}\Sigma (\omega) ]^2 +
[ \mathrm{Im}\Sigma (\omega) ]^2}
\end{equation}
Cumulant:
\begin{equation}
A(\omega) = \frac1\pi \sum_{n=0}^{\infty} \frac{a^n}{n!} \frac{\Gamma}{ (\omega - \epsilon + n \omega_p)^2 + \Gamma^2 }
\end{equation}
Now some executable code (Python)
I have implemented the formulas above in my Python code.
I can just run it from here, but first let me check
that my input file is correct...
End of explanation
"""
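The two spectral functions above can be sketched in a few lines of NumPy. This is a minimal illustration of the formulas, not the notebook's own implementation (that lives in `sf.py`); all parameter values below (quasiparticle energy `eps`, satellite weight `a`, plasmon frequency `w_p`, broadening `gamma`) are illustrative placeholders, not the bulk-Si values.

```python
import numpy as np

def cumulant_A(w, eps, a, w_p, gamma, n_max=20):
    # One-pole cumulant spectral function, truncated Poisson series:
    # A(w) = (1/pi) * sum_n a^n/n! * gamma / ((w - eps + n*w_p)^2 + gamma^2)
    A = np.zeros_like(w, dtype=float)
    fact = 1.0
    for n in range(n_max + 1):
        if n > 0:
            fact *= n  # running n!
        A += (a ** n / fact) * gamma / ((w - eps + n * w_p) ** 2 + gamma ** 2)
    return A / np.pi

def gw_A(w, eps, re_sigma, im_sigma):
    # GW spectral function for given Re/Im self-energy (scalars or arrays)
    return (1.0 / np.pi) * np.abs(im_sigma) / (
        (w - eps - re_sigma) ** 2 + im_sigma ** 2)

# Illustrative parameters (NOT the values used by sf.py)
w = np.linspace(-50.0, 10.0, 601)
A_c = cumulant_A(w, eps=-5.0, a=0.3, w_p=16.7, gamma=0.5)
```

The n-th term of the series is the n-plasmon satellite at $\epsilon - n\omega_p$ with Poisson weight $a^n/n!$, so the series converges quickly for $a < 1$ and the total spectral weight integrates to $e^a$.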
%cd data/SF_Si_bulk/
%run ../../../../../Code/SF/sf.py
"""
Explanation: Now I can run my script:
End of explanation
"""
cd ../../../
"""
Explanation: Not very elegant, I know. It's just for demo purposes.
End of explanation
"""
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
# plt.rcParams['figure.figsize'] = (9., 6.)
%matplotlib inline
"""
Explanation: First I have to import a few modules and set up a few things:
End of explanation
"""
sf_c = np.genfromtxt(
'data/SF_Si_bulk/Spfunctions/spftot_exp_kpt_1_19_bd_1_4_s1.0_p1.0_800ev_np1.dat')
sf_gw = np.genfromtxt(
'data/SF_Si_bulk/Spfunctions/spftot_gw_s1.0_p1.0_800ev.dat')
#!gvim spftot_exp_kpt_1_19_bd_1_4_s1.0_p1.0_800ev_np1.dat
"""
Explanation: Next I can read the data from a local folder:
End of explanation
"""
plt.plot(sf_c[:,0], sf_c[:,1], label='1-pole cumulant')
plt.plot(sf_gw[:,0], sf_gw[:,1], label='GW')
plt.xlim(-50, 0)
plt.ylim(0, 300)
plt.title("Bulk Si - Spectral function - ib=1, ikpt=1")
plt.xlabel("Energy (eV)")
plt.grid(); plt.legend(loc='best')
"""
Explanation: Now I can plot the stored arrays.
End of explanation
"""
!jupyter-nbconvert --to pdf cumulant-to-pdf.ipynb
pwd
!xpdf cumulant-to-pdf.pdf
"""
Explanation: Creating a PDF document
I can create a PDF version of this notebook from itself, using the command line:
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/ec-earth-consortium/cmip6/models/ec-earth3-aerchem/land.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'ec-earth3-aerchem', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: EC-EARTH-CONSORTIUM
Source ID: EC-EARTH3-AERCHEM
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:59
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, specify what the snow albedo is a function of*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? Horizontal, vertical, etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
|
retnuh/deep-learning | tensorboard/Anna_KaRNNa.ipynb | mit |
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
"""
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based on Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
"""
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
chars[:100]
"""
Explanation: First we'll load the text file and convert it into integers for our network to use.
End of explanation
"""
def split_data(chars, batch_size, num_steps, split_frac=0.9):
"""
Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
batch_size: Number of sequences in each batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
"""
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
# Split into training and validation sets, keep the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
train_x.shape
train_x[:,:10]
"""
Explanation: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
End of explanation
"""
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
if sampling:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Build the RNN layers
cells = [tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.BasicLSTMCell(lstm_size),
output_keep_prob=keep_prob) for _ in range(num_layers)]
# One cell instance per layer: reusing a single object ([drop] * num_layers)
# ties the weights across layers and raises an error in TensorFlow >= 1.1
cell = tf.contrib.rnn.MultiRNNCell(cells)
initial_state = cell.zero_state(batch_size, tf.float32)
# Run the data through the RNN layers
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one row for each cell output
seq_output = tf.concat(outputs, axis=1,name='seq_output')
output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
# Now connect the RNN outputs to a softmax layer and calculate the cost
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
name='softmax_w')
softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
logits = tf.matmul(output, softmax_w) + softmax_b
preds = tf.nn.softmax(logits, name='predictions')
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
cost = tf.reduce_mean(loss, name='cost')
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
"""
Explanation: I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size x num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window over by num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
End of explanation
"""
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
"""
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.
End of explanation
"""
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
file_writer = tf.summary.FileWriter('./logs/1', sess.graph)
"""
Explanation: Write out the graph for TensorBoard
End of explanation
"""
!mkdir -p checkpoints/anna
epochs = 1
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/anna20.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 0.5,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
tf.train.get_checkpoint_state('checkpoints/anna')
"""
Explanation: Training
Training is pretty straightforward. Here I pass in some data and get an LSTM state back. Then I pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
End of explanation
"""
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
prime = "Far"
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
"""
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one, and keep doing this to generate all new text.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/inpe/cmip6/models/besm-2-7/seaice.ipynb | gpl-3.0 |
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inpe', 'besm-2-7', 'seaice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: INPE
Source ID: BESM-2-7
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:06
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma-separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step of the sea ice model thermodynamic component, in seconds?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step of the sea ice model dynamic component, in seconds?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but assume a distribution and compute fluxes accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the ice deformation (rheology) formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations and one for the salt budget?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
"""
ethen8181/machine-learning | projects/kaggle_rossman_store_sales/rossman_data_prep.ipynb | mit | from jupyterthemes import get_themes
from jupyterthemes.stylefx import set_nb_theme
themes = get_themes()
set_nb_theme(themes[3])
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format='retina'
import os
import time
import numba
import numpy as np
import pandas as pd
%watermark -a 'Ethen' -d -t -v -p numpy,pandas,numba
"""
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Rossman-Data-Preparation" data-toc-modified-id="Rossman-Data-Preparation-1"><span class="toc-item-num">1 </span>Rossman Data Preparation</a></span><ul class="toc-item"><li><span><a href="#Individual-Data-Source" data-toc-modified-id="Individual-Data-Source-1.1"><span class="toc-item-num">1.1 </span>Individual Data Source</a></span></li><li><span><a href="#Merging-Various-Data-Source" data-toc-modified-id="Merging-Various-Data-Source-1.2"><span class="toc-item-num">1.2 </span>Merging Various Data Source</a></span></li><li><span><a href="#Final-Data" data-toc-modified-id="Final-Data-1.3"><span class="toc-item-num">1.3 </span>Final Data</a></span><ul class="toc-item"><li><span><a href="#Durations" data-toc-modified-id="Durations-1.3.1"><span class="toc-item-num">1.3.1 </span>Durations</a></span></li></ul></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
End of explanation
"""
data_dir = 'rossmann'
print('available files: ', os.listdir(data_dir))
file_names = ['train', 'store', 'store_states', 'state_names',
'googletrend', 'weather', 'test']
path_names = {file_name: os.path.join(data_dir, file_name + '.csv')
for file_name in file_names}
df_train = pd.read_csv(path_names['train'], low_memory=False)
df_test = pd.read_csv(path_names['test'], low_memory=False)
print('training data dimension: ', df_train.shape)
print('testing data dimension: ', df_test.shape)
df_train.head()
"""
Explanation: Rossman Data Preparation
Individual Data Source
In addition to the data provided by the competition, we will be using external datasets put together by participants in the Kaggle competition. We can download all of them here. Then we should untar them in the directory to which data_dir is pointing to.
End of explanation
"""
df_train['StateHoliday'] = df_train['StateHoliday'] != '0'
df_test['StateHoliday'] = df_test['StateHoliday'] != '0'
"""
Explanation: We turn state Holidays to booleans, to make them more convenient for modeling.
End of explanation
"""
df_weather = pd.read_csv(path_names['weather'])
print('weather data dimension: ', df_weather.shape)
df_weather.head()
df_state_names = pd.read_csv(path_names['state_names'])
print('state names data dimension: ', df_state_names.shape)
df_state_names.head()
df_weather = df_weather.rename(columns={'file': 'StateName'})
df_weather = df_weather.merge(df_state_names, on="StateName", how='left')
df_weather.head()
"""
Explanation: For the weather and state names data, we perform a join on a state name field and create a single dataframe.
End of explanation
"""
df_googletrend = pd.read_csv(path_names['googletrend'])
print('google trend data dimension: ', df_googletrend.shape)
df_googletrend.head()
df_googletrend['Date'] = df_googletrend['week'].str.split(' - ', expand=True)[0]
df_googletrend['State'] = df_googletrend['file'].str.split('_', expand=True)[2]
df_googletrend.loc[df_googletrend['State'] == 'NI', 'State'] = 'HB,NI'
df_googletrend.head()
"""
Explanation: For the google trend data. We're going to extract the state and date information from the raw dataset, also replace all instances of state name 'NI' to match the usage in the rest of the data: 'HB,NI'.
End of explanation
"""
DEFAULT_DT_ATTRIBUTES = [
'Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear',
'Is_month_end', 'Is_month_start', 'Is_quarter_end',
'Is_quarter_start', 'Is_year_end', 'Is_year_start'
]
def add_datepart(df, colname, drop_original_col=False,
dt_attributes=DEFAULT_DT_ATTRIBUTES,
add_elapse_col=True):
"""
Extract various date time components out of a date column, this modifies
the dataframe inplace.
References
----------
- https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#time-date-components
"""
df[colname] = pd.to_datetime(df[colname], infer_datetime_format=True)
if dt_attributes:
for attr in dt_attributes:
df[attr] = getattr(df[colname].dt, attr.lower())
# representing the number of seconds elapsed from 1970-01-01 00:00:00
# https://stackoverflow.com/questions/15203623/convert-pandas-datetimeindex-to-unix-time
if add_elapse_col:
df['Elapsed'] = df[colname].astype(np.int64) // 10 ** 9
if drop_original_col:
df = df.drop(colname, axis=1)
return df
df_weather.head()
df_weather = add_datepart(
df_weather, 'Date',
dt_attributes=None, add_elapse_col=False)
df_googletrend = add_datepart(
df_googletrend, 'Date', drop_original_col=True,
dt_attributes=['Year', 'Week'], add_elapse_col=False)
df_train = add_datepart(df_train, 'Date')
df_test = add_datepart(df_test, 'Date')
print('training data dimension: ', df_train.shape)
df_train.head()
"""
Explanation: The following code chunk extracts particular date fields from a complete datetime for the purpose of constructing categoricals.
We should always consider this feature extraction step when working with date-times. Without expanding our date-time into these additional fields, we can't capture any trend/cyclical behavior as a function of time at any of these granularities. We'll apply it to every table that has a date field.
End of explanation
"""
df_trend_de = df_googletrend.loc[df_googletrend['file'] == 'Rossmann_DE',
['Year', 'Week', 'trend']]
df_trend_de.head()
"""
Explanation: The Google trends data has a special category for the whole of Germany - we'll pull that out so we can use it explicitly.
End of explanation
"""
df_store = pd.read_csv(path_names['store'])
print('store data dimension: ', df_store.shape)
df_store.head()
df_store_states = pd.read_csv(path_names['store_states'])
print('store states data dimension: ', df_store_states.shape)
df_store_states.head()
df_store = df_store.merge(df_store_states, on='Store', how='left')
print('null count: ', len(df_store[df_store['State'].isnull()]))
df_store.head()
df_joined_train = df_train.merge(df_store, on='Store', how='left')
df_joined_test = df_test.merge(df_store, on='Store', how='left')
null_count_train = len(df_joined_train[df_joined_train['StoreType'].isnull()])
null_count_test = len(df_joined_test[df_joined_test['StoreType'].isnull()])
print('null count: ', null_count_train, null_count_test)
print('dimension: ', df_joined_train.shape)
df_joined_train.head()
df_joined_train.columns
df_joined_train = df_joined_train.merge(df_weather, on=['State', 'Date'], how='left')
df_joined_test = df_joined_test.merge(df_weather, on=['State', 'Date'], how='left')
null_count_train = len(df_joined_train[df_joined_train['Mean_TemperatureC'].isnull()])
null_count_test = len(df_joined_test[df_joined_test['Mean_TemperatureC'].isnull()])
print('null count: ', null_count_train, null_count_test)
print('dimension: ', df_joined_train.shape)
df_joined_train.head()
df_joined_train.columns
df_joined_train = df_joined_train.merge(df_googletrend,
on=['State', 'Year', 'Week'],
how='left')
df_joined_test = df_joined_test.merge(df_googletrend,
on=['State', 'Year', 'Week'],
how='left')
null_count_train = len(df_joined_train[df_joined_train['trend'].isnull()])
null_count_test = len(df_joined_test[df_joined_test['trend'].isnull()])
print('null count: ', null_count_train, null_count_test)
print('dimension: ', df_joined_train.shape)
df_joined_train.head()
df_joined_train.columns
df_joined_train = df_joined_train.merge(df_trend_de,
on=['Year', 'Week'],
suffixes=('', '_DE'),
how='left')
df_joined_test = df_joined_test.merge(df_trend_de,
on=['Year', 'Week'],
suffixes=('', '_DE'),
how='left')
null_count_train = len(df_joined_train[df_joined_train['trend_DE'].isnull()])
null_count_test = len(df_joined_test[df_joined_test['trend_DE'].isnull()])
print('null count: ', null_count_train, null_count_test)
print('dimension: ', df_joined_train.shape)
df_joined_train.head()
df_joined_train.columns
"""
Explanation: Merging Various Data Sources
Now we can outer join all of our data into a single dataframe. Recall that in outer joins, every time a value in the joining field on the left table does not have a corresponding value on the right table, the corresponding row in the new table has Null values for all right-table fields. One way to check that all records are consistent and complete is to check for Null values post-join, as we do here.
Aside: Why not just do an inner join? If we are assuming that all records are complete and match on the field we desire, an inner join will do the same thing as an outer join. However, in the event we are not sure, an outer join followed by a null-check will catch it. (Comparing before/after # of rows for inner join is an equivalent approach).
During the merging process, we'll print out the first few rows of the dataframe and the column names so we can keep track of how the dataframe evolves as we join with a new data source.
End of explanation
"""
for df in (df_joined_train, df_joined_test):
df['CompetitionOpenSinceYear'] = (df['CompetitionOpenSinceYear']
.fillna(1900)
.astype(np.int32))
df['CompetitionOpenSinceMonth'] = (df['CompetitionOpenSinceMonth']
.fillna(1)
.astype(np.int32))
df['Promo2SinceYear'] = df['Promo2SinceYear'].fillna(1900).astype(np.int32)
df['Promo2SinceWeek'] = df['Promo2SinceWeek'].fillna(1).astype(np.int32)
for df in (df_joined_train, df_joined_test):
df['CompetitionOpenSince'] = pd.to_datetime(dict(
year=df['CompetitionOpenSinceYear'],
month=df['CompetitionOpenSinceMonth'],
day=15
))
df['CompetitionDaysOpen'] = df['Date'].subtract(df['CompetitionOpenSince']).dt.days
"""
Explanation: Final Data
After merging all the various data source to create our master dataframe, we'll still perform some additional feature engineering steps including:
Some of the rows contain missing values for some columns; we'll impute them here. What values to impute is pretty subjective when we don't really know the root cause of why they are missing, so we won't spend too much time on it here. One common strategy for imputing missing categorical features is to pick an arbitrary sentinel value that otherwise doesn't appear in the data, e.g. -1, -999. Alternatively, impute with the mean or majority value and create another binary column indicating whether the value was missing in the first place.
Create some duration features from the Competition and Promo columns.
End of explanation
"""
for df in (df_joined_train, df_joined_test):
df['CompetitionMonthsOpen'] = df['CompetitionDaysOpen'] // 30
df.loc[df['CompetitionMonthsOpen'] > 24, 'CompetitionMonthsOpen'] = 24
df.loc[df['CompetitionMonthsOpen'] < -24, 'CompetitionMonthsOpen'] = -24
df_joined_train['CompetitionMonthsOpen'].unique()
"""
Explanation: For the CompetitionMonthsOpen field, we limit the maximum to 2 years to limit the number of unique categories.
End of explanation
"""
from isoweek import Week
for df in (df_joined_train, df_joined_test):
df['Promo2Since'] = pd.to_datetime(df.apply(lambda x: Week(
x.Promo2SinceYear, x.Promo2SinceWeek).monday(), axis=1))
df['Promo2Days'] = df['Date'].subtract(df['Promo2Since']).dt.days
for df in (df_joined_train, df_joined_test):
df['Promo2Weeks'] = df['Promo2Days'] // 7
df.loc[df['Promo2Weeks'] < 0, 'Promo2Weeks'] = 0
df.loc[df['Promo2Weeks'] > 25, 'Promo2Weeks'] = 25
df_joined_train['Promo2Weeks'].unique()
df_joined_train.columns
"""
Explanation: Repeat the same process for Promo
End of explanation
"""
columns = ['Date', 'Store', 'Promo', 'StateHoliday', 'SchoolHoliday']
df = df_joined_train[columns].append(df_joined_test[columns])
df['DateUnixSeconds'] = df['Date'].astype(np.int64) // 10 ** 9
df.head()
@numba.njit
def compute_duration(store_arr, date_unix_seconds_arr, field_arr):
"""
For each store, track the day since/before the occurrence of a field.
The store and date are assumed to be already sorted.
Parameters
----------
store_arr : 1d ndarray[int]
date_unix_seconds_arr : 1d ndarray[int]
The date should be represented in unix timestamp (seconds).
field_arr : 1d ndarray[bool]/ndarray[int]
The field that we're interested in. If int, it should take value
of 1/0 indicating whether the field/event occurred or not.
Returns
-------
result : list[int]
Days since/before the occurrence of a field.
"""
result = []
last_store = 0
zipped = zip(store_arr, date_unix_seconds_arr, field_arr)
for store, date_unix_seconds, field in zipped:
if store != last_store:
last_store = store
last_date = date_unix_seconds
if field:
last_date = date_unix_seconds
diff_day = (date_unix_seconds - last_date) // 86400
result.append(diff_day)
return result
df = df.sort_values(['Store', 'Date'])
start = time.time()
for col in ('SchoolHoliday', 'StateHoliday', 'Promo'):
result = compute_duration(df['Store'].values,
df['DateUnixSeconds'].values,
df[col].values)
df['After' + col] = result
end = time.time()
print('elapsed: ', end - start)
df.head(10)
"""
Explanation: Durations
It is common when working with time series data to extract features that capture relationships across rows instead of between columns, e.g. time until the next event or time since the last event.
Here, we would like to compute features such as days since the last promotion and days until the next promotion, and the same process can be repeated for state/school holidays.
End of explanation
"""
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
start = time.time()
for col in ('SchoolHoliday', 'StateHoliday', 'Promo'):
result = compute_duration(df['Store'].values,
df['DateUnixSeconds'].values,
df[col].values)
df['Before' + col] = result
end = time.time()
print('elapsed: ', end - start)
df.head(10)
"""
Explanation: If we look at the values in the AfterStateHoliday column, we can see that the first row of the StateHoliday column is True, so the corresponding AfterStateHoliday is 0, indicating it's a state holiday that day. After encountering a state holiday, the AfterStateHoliday column starts incrementing until it sees the next StateHoliday, which resets the counter.
Note that Promo starts out at 0, but AfterPromo starts accumulating until it sees the next Promo. We're not exactly sure when the last promo before 2013-01-01 was, since we don't have data for it; nonetheless we still start incrementing the counter. Another approach would be to fill these leading values with 0.
End of explanation
"""
df = df.drop(['Promo', 'StateHoliday', 'SchoolHoliday', 'DateUnixSeconds'], axis=1)
df_joined_train = df_joined_train.merge(df, on=['Date', 'Store'], how='inner')
df_joined_test = df_joined_test.merge(df, on=['Date', 'Store'], how='inner')
print('dimension: ', df_joined_train.shape)
df_joined_train.head()
df_joined_train.columns
"""
Explanation: After creating these new features, we join it back to the original dataframe.
End of explanation
"""
output_dir = 'cleaned_data'
if not os.path.isdir(output_dir):
os.makedirs(output_dir, exist_ok=True)
engine = 'pyarrow'
output_path_train = os.path.join(output_dir, 'train_clean.parquet')
output_path_test = os.path.join(output_dir, 'test_clean.parquet')
df_joined_train.to_parquet(output_path_train, engine=engine)
df_joined_test.to_parquet(output_path_test, engine=engine)
"""
Explanation: We save the cleaned data so we won't have to repeat this data preparation step again.
End of explanation
"""
|
pyqg/pyqg | docs/examples/barotropic.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import pyqg
"""
Explanation: Barotropic Model
Here we will use pyqg to reproduce the results of the paper: <br />
J. C. Mcwilliams (1984). The emergence of isolated coherent vortices in turbulent flow. Journal of Fluid Mechanics, 146, pp 21-43 doi:10.1017/S0022112084001750
End of explanation
"""
# create the model object
m = pyqg.BTModel(L=2.*np.pi, nx=256,
beta=0., H=1., rek=0., rd=None,
tmax=40, dt=0.001, taveint=1,
ntd=4)
# in this example we used ntd=4, four threads
# if your machine has more (or fewer) cores available, you could try changing it
"""
Explanation: McWilliams performed freely-evolving 2D turbulence ($R_d = \infty$, $\beta =0$) experiments on a $2\pi\times 2\pi$ periodic box.
End of explanation
"""
# generate McWilliams 84 IC condition
fk = m.wv != 0
ckappa = np.zeros_like(m.wv2)
ckappa[fk] = np.sqrt( m.wv2[fk]*(1. + (m.wv2[fk]/36.)**2) )**-1
nhx,nhy = m.wv2.shape
Pi_hat = np.random.randn(nhx,nhy)*ckappa +1j*np.random.randn(nhx,nhy)*ckappa
Pi = m.ifft( Pi_hat[np.newaxis,:,:] )
Pi = Pi - Pi.mean()
Pi_hat = m.fft( Pi )
KEaux = m.spec_var( m.wv*Pi_hat )
pih = ( Pi_hat/np.sqrt(KEaux) )
qih = -m.wv2*pih
qi = m.ifft(qih)
# initialize the model with that initial condition
m.set_q(qi)
# define a quick function for plotting and visualize the initial condition
def plot_q(m, qmax=40):
fig, ax = plt.subplots()
pc = ax.pcolormesh(m.x,m.y,m.q.squeeze(), cmap='RdBu_r')
pc.set_clim([-qmax, qmax])
ax.set_xlim([0, 2*np.pi])
ax.set_ylim([0, 2*np.pi]);
ax.set_aspect(1)
plt.colorbar(pc)
plt.title('Time = %g' % m.t)
plt.show()
plot_q(m)
"""
Explanation: Initial condition
The initial condition is random, with a prescribed spectrum
$$
|\hat{\psi}|^2 = A \,\kappa^{-1}\left[1 + \left(\frac{\kappa}{6}\right)^4\right]^{-1}\,,
$$
where $\kappa$ is the wavenumber magnitude. The constant A is determined so that the initial energy is $KE = 0.5$.
End of explanation
"""
for _ in m.run_with_snapshots(tsnapstart=0, tsnapint=10):
plot_q(m)
"""
Explanation: Running the model
Here we demonstrate how to use the run_with_snapshots feature to periodically stop the model and perform some action (in this case, visualization).
End of explanation
"""
energy = m.get_diagnostic('KEspec')
enstrophy = m.get_diagnostic('Ensspec')
# this makes it easy to calculate an isotropic spectrum
from pyqg import diagnostic_tools as tools
kr, energy_iso = tools.calc_ispec(m,energy.squeeze())
_, enstrophy_iso = tools.calc_ispec(m,enstrophy.squeeze())
ks = np.array([3.,80])
es = 5*ks**-4
plt.loglog(kr,energy_iso)
plt.loglog(ks,es,'k--')
plt.text(2.5,.0001,r'$k^{-4}$',fontsize=20)
plt.ylim(1.e-10,1.e0)
plt.xlabel('wavenumber')
plt.title('Energy Spectrum')
ks = np.array([3.,80])
es = 5*ks**(-5./3)
plt.loglog(kr,enstrophy_iso)
plt.loglog(ks,es,'k--')
plt.text(5.5,.01,r'$k^{-5/3}$',fontsize=20)
plt.ylim(1.e-3,1.e0)
plt.xlabel('wavenumber')
plt.title('Enstrophy Spectrum')
"""
Explanation: The genius of McWilliams (1984) was that he showed that the initial random vorticity field organizes itself into strong coherent vortices. This is true in a significant part of the parameter space. This was previously suspected but unproven, mainly because people did not have the computing resources to run the simulation long enough. Thirty years later we can perform such simulations in a couple of minutes on a laptop!
Also, note that the energy is nearly conserved, as it should be, and this is a nice test of the model.
Plotting spectra
End of explanation
"""
|
GoogleCloudPlatform/asl-ml-immersion | notebooks/time_series_prediction/solutions/1_optional_data_exploration.ipynb | apache-2.0 | import os
PROJECT = !(gcloud config get-value core/project)
PROJECT = PROJECT[0]
BUCKET = PROJECT
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
import numpy as np
import pandas as pd
import seaborn as sns
from google.cloud import bigquery
from IPython import get_ipython
from IPython.core.magic import register_cell_magic
from matplotlib import pyplot as plt
bq = bigquery.Client(project=PROJECT)
# Allows you to easily use Python variables in SQL queries.
@register_cell_magic("with_globals")
def with_globals(line, cell):
contents = cell.format(**globals())
if "print" in line:
print(contents)
get_ipython().run_cell(contents)
"""
Explanation: Data Exploration
Learning objectives:
Learn useful patterns for exploring data before modeling
Gain an understanding of the dataset and identify any data issues.
The goal of this notebook is to explore our base tables before we begin feature engineering and modeling. We will explore the price history of stock in the S&P 500.
Price history : Price history of stocks
S&P 500 : A list of all companies and symbols for companies in the S&P 500
For our analysis, let's limit the price history to the year 2000 onward. In general, the further back historical data goes, the lower its predictive power can be.
End of explanation
"""
!bq mk stock_src
%%bash
TABLE=price_history
SCHEMA=symbol:STRING,Date:DATE,Open:FLOAT,Close:FLOAT
test -f $TABLE.csv || unzip ../stock_src/$TABLE.csv.zip
gsutil -m cp $TABLE.csv gs://$BUCKET/stock_src/$TABLE.csv
bq load --source_format=CSV --skip_leading_rows=1 \
stock_src.$TABLE gs://$BUCKET/stock_src/$TABLE.csv $SCHEMA
%%bash
TABLE=eps
SCHEMA=date:DATE,company:STRING,symbol:STRING,surprise:STRING,reported_EPS:FLOAT,consensus_EPS:FLOAT
test -f $TABLE.csv || unzip ../stock_src/$TABLE.csv.zip
gsutil -m cp $TABLE.csv gs://$BUCKET/stock_src/$TABLE.csv
bq load --source_format=CSV --skip_leading_rows=1 \
stock_src.$TABLE gs://$BUCKET/stock_src/$TABLE.csv $SCHEMA
%%bash
TABLE=snp500
SCHEMA=company:STRING,symbol:STRING,industry:STRING
test -f $TABLE.csv || unzip ../stock_src/$TABLE.csv.zip
gsutil -m cp $TABLE.csv gs://$BUCKET/stock_src/$TABLE.csv
bq load --source_format=CSV --skip_leading_rows=1 \
stock_src.$TABLE gs://$BUCKET/stock_src/$TABLE.csv $SCHEMA
"""
Explanation: Preparing the dataset
Let's create the dataset in our project BiqQuery and import the stock data by running the following cells:
End of explanation
"""
%%with_globals
%%bigquery --project {PROJECT}
SELECT table_name, column_name, data_type
FROM `stock_src.INFORMATION_SCHEMA.COLUMNS`
ORDER BY table_name, ordinal_position
"""
Explanation: Let's look at the tables and columns we have for analysis.
Learning objective 1.
End of explanation
"""
def query_stock(symbol):
return bq.query(
"""
SELECT *
FROM `stock_src.price_history`
WHERE symbol="{}"
ORDER BY Date
""".format(
symbol
)
).to_dataframe()
df_stock = query_stock("GOOG")
df_stock.Date = pd.to_datetime(df_stock.Date)
ax = df_stock.plot(x="Date", y="Close", title="Google stock")
# Add smoothed plot.
df_stock["Close_smoothed"] = df_stock.Close.rolling(100, center=True).mean()
df_stock.plot(x="Date", y="Close_smoothed", ax=ax);
"""
Explanation: Price History
Retrieve Google's stock price history.
End of explanation
"""
df_sp = query_stock("gspc")
def plot_with_sp(symbol):
df_stock = query_stock(symbol)
df_stock.Date = pd.to_datetime(df_stock.Date)
df_stock.Date = pd.to_datetime(df_stock.Date)
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax2 = ax1.twinx()
ax = df_sp.plot(
x="Date", y="Close", label="S&P", color="green", ax=ax1, alpha=0.7
)
ax = df_stock.plot(
x="Date",
y="Close",
label=symbol,
title=symbol + " and S&P index",
ax=ax2,
alpha=0.7,
)
ax1.legend(loc=3)
ax2.legend(loc=4)
ax1.set_ylabel("S&P price")
ax2.set_ylabel(symbol + " price")
ax.set_xlim(pd.to_datetime("2004-08-05"), pd.to_datetime("2013-08-05"))
plot_with_sp("GOOG")
"""
Explanation: Compare google to S&P
End of explanation
"""
plot_with_sp("IBM")
"""
Explanation: Learning objective 2
End of explanation
"""
%%with_globals
%%bigquery df --project {PROJECT}
WITH
with_year AS
(
SELECT symbol,
EXTRACT(YEAR FROM date) AS year,
close
FROM `stock_src.price_history`
WHERE symbol in (SELECT symbol FROM `stock_src.snp500`)
),
year_aggregated AS
(
SELECT year, symbol, AVG(close) as avg_close
FROM with_year
WHERE year >= 2000
GROUP BY year, symbol
)
SELECT year, symbol, avg_close as close,
(LAG(avg_close, 1) OVER (PARTITION BY symbol order by year DESC))
AS next_yr_close
FROM year_aggregated
ORDER BY symbol, year
"""
Explanation: Let's see how the price of stocks changes over time on a yearly basis. Using the LAG function we can compute the change in stock price year-over-year.
Let's compute the average close for each year. This computation could, of course, be done in pandas. Oftentimes it's useful to use some combination of BigQuery and pandas for exploratory analysis. In general, it's most effective to let BigQuery do the heavy-duty processing and then use pandas for smaller data and visualization.
Learning objective 1, 2
End of explanation
"""
df.dropna(inplace=True)
df["percent_increase"] = (df.next_yr_close - df.close) / df.close
"""
Explanation: Compute the year-over-year percentage increase.
End of explanation
"""
def get_random_stocks(n=5):
random_stocks = df.symbol.sample(n=n, random_state=3)
rand = df.merge(random_stocks)
return rand[["year", "symbol", "percent_increase"]]
rand = get_random_stocks()
for symbol, _df in rand.groupby("symbol"):
plt.figure()
sns.barplot(x="year", y="percent_increase", data=_df)
plt.title(symbol)
"""
Explanation: Let's visualize the yearly performance of a few randomly sampled stocks.
End of explanation
"""
df.sort_values("percent_increase").head()
stock_symbol = "YHOO"
%%with_globals
%%bigquery df --project {PROJECT}
SELECT date, close
FROM `stock_src.price_history`
WHERE symbol='{stock_symbol}'
ORDER BY date
ax = df.plot(x="date", y="close")
"""
Explanation: There have been some major fluctuations in individual stocks. For example, there were major drops during the early 2000s for tech companies.
End of explanation
"""
stock_symbol = "IBM"
%%with_globals
%%bigquery df --project {PROJECT}
SELECT date, close
FROM `stock_src.price_history`
WHERE symbol='{stock_symbol}'
ORDER BY date
IBM_STOCK_SPLIT_DATE = "1979-05-10"
ax = df.plot(x="date", y="close")
ax.vlines(
pd.to_datetime(IBM_STOCK_SPLIT_DATE),
0,
500,
linestyle="dashed",
color="grey",
alpha=0.7,
);
"""
Explanation: Stock splits can also impact our data - causing a stock price to rapidly drop. In practice, we would need to clean all of our stock data to account for this. This would be a major effort! Fortunately, in the case of IBM, for example, all stock splits occurred before the year 2000.
Learning objective 2
End of explanation
"""
%%with_globals
%%bigquery df --project {PROJECT}
SELECT *
FROM `stock_src.snp500`
df.industry.value_counts().plot(kind="barh");
"""
Explanation: S&P companies list
End of explanation
"""
%%with_globals
%%bigquery df --project {PROJECT}
WITH sp_prices AS
(
SELECT a.*, b.industry
FROM `stock_src.price_history` a
JOIN `stock_src.snp500` b
USING (symbol)
WHERE date >= "2000-01-01"
)
SELECT Date, industry, AVG(close) as close
FROM sp_prices
GROUP BY Date, industry
ORDER BY industry, Date
df.head()
"""
Explanation: We can join the price histories table with the S&P 500 table to compare industries:
Learning objective 1,2
End of explanation
"""
# Pandas `unstack` to make each industry a column. Useful for plotting.
df_ind = df.set_index(["industry", "Date"]).unstack(0).dropna()
df_ind.columns = [c[1] for c in df_ind.columns]
df_ind.head()
ax = df_ind.plot(figsize=(16, 8))
# Move legend down.
ax.legend(loc="upper center", bbox_to_anchor=(0.5, -0.05), shadow=True, ncol=2)
"""
Explanation: Using pandas we can "unstack" our table so that each industry has its own column. This will be useful for plotting.
End of explanation
"""
def min_max_scale(df):
    # Min/max scaling maps each column onto [0, 1].
    return (df - df.min()) / (df.max() - df.min())
scaled = min_max_scale(df_ind)
ax = scaled.plot(figsize=(16, 8))
ax.legend(loc="upper center", bbox_to_anchor=(0.5, -0.05), shadow=True, ncol=2);
"""
Explanation: Let's scale each industry using min/max scaling. This will put all of the stocks on the same scale. Currently it can be hard to see the changes in stocks over time across industries.
Learning objective 1
End of explanation
"""
SMOOTHING_WINDOW = 30 # Days.
rolling = scaled.copy()
for col in scaled.columns:
rolling[col] = scaled[col].rolling(SMOOTHING_WINDOW).mean()
ax = rolling.plot(figsize=(16, 8))
ax.legend(loc="upper center", bbox_to_anchor=(0.5, -0.05), shadow=True, ncol=2);
"""
Explanation: We can also create a smoothed version of the plot above using a rolling mean. This is a useful transformation to make when visualizing time-series data.
End of explanation
"""
|
5hubh4m/CS231n | Assignment2/ConvolutionalNetworks.ipynb | mit | # As usual, a bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.cnn import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient
from cs231n.layers import *
from cs231n.fast_layers import *
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print '%s: ' % k, v.shape
"""
Explanation: Convolutional Networks
So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.
First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.
End of explanation
"""
x_shape = (2, 3, 4, 4)
w_shape = (3, 3, 4, 4)
x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)
w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)
b = np.linspace(-0.1, 0.2, num=3)
conv_param = {'stride': 2, 'pad': 1}
out, _ = conv_forward_naive(x, w, b, conv_param)
correct_out = np.array([[[[[-0.08759809, -0.10987781],
[-0.18387192, -0.2109216 ]],
[[ 0.21027089, 0.21661097],
[ 0.22847626, 0.23004637]],
[[ 0.50813986, 0.54309974],
[ 0.64082444, 0.67101435]]],
[[[-0.98053589, -1.03143541],
[-1.19128892, -1.24695841]],
[[ 0.69108355, 0.66880383],
[ 0.59480972, 0.56776003]],
[[ 2.36270298, 2.36904306],
[ 2.38090835, 2.38247847]]]]])
# Compare your output to ours; difference should be around 1e-8
print 'Testing conv_forward_naive'
print 'difference: ', rel_error(out, correct_out)
"""
Explanation: Convolution: Naive forward pass
The core of a convolutional network is the convolution operation. In the file cs231n/layers.py, implement the forward pass for the convolution layer in the function conv_forward_naive.
You don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear.
You can test your implementation by running the following:
End of explanation
"""
from scipy.misc import imread, imresize
kitten, puppy = imread('kitten.jpg'), imread('puppy.jpg')
# kitten is wide, and puppy is already square
d = kitten.shape[1] - kitten.shape[0]
kitten_cropped = kitten[:, d/2:-d/2, :]
img_size = 200 # Make this smaller if it runs too slow
x = np.zeros((2, 3, img_size, img_size))
x[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1))
x[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1))
# Set up a convolutional weights holding 2 filters, each 3x3
w = np.zeros((2, 3, 3, 3))
# The first filter converts the image to grayscale.
# Set up the red, green, and blue channels of the filter.
w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]
w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]
w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]
# Second filter detects horizontal edges in the blue channel.
w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]
# Vector of biases. We don't need any bias for the grayscale
# filter, but for the edge detection filter we want to add 128
# to each output so that nothing is negative.
b = np.array([0, 128])
# Compute the result of convolving each input in x with each filter in w,
# offsetting by b, and storing the results in out.
out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})
def imshow_noax(img, normalize=True):
""" Tiny helper to show images as uint8 and remove axis labels """
if normalize:
img_max, img_min = np.max(img), np.min(img)
img = 255.0 * (img - img_min) / (img_max - img_min)
plt.imshow(img.astype('uint8'))
plt.gca().axis('off')
# Show the original images and the results of the conv operation
plt.subplot(2, 3, 1)
imshow_noax(puppy, normalize=False)
plt.title('Original image')
plt.subplot(2, 3, 2)
imshow_noax(out[0, 0])
plt.title('Grayscale')
plt.subplot(2, 3, 3)
imshow_noax(out[0, 1])
plt.title('Edges')
plt.subplot(2, 3, 4)
imshow_noax(kitten_cropped, normalize=False)
plt.subplot(2, 3, 5)
imshow_noax(out[1, 0])
plt.subplot(2, 3, 6)
imshow_noax(out[1, 1])
plt.show()
"""
Explanation: Aside: Image processing via convolutions
As a fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check.
End of explanation
"""
x = np.random.randn(4, 3, 5, 5)
w = np.random.randn(2, 3, 3, 3)
b = np.random.randn(2,)
dout = np.random.randn(4, 2, 5, 5)
conv_param = {'stride': 1, 'pad': 1}
dx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout)
out, cache = conv_forward_naive(x, w, b, conv_param)
dx, dw, db = conv_backward_naive(dout, cache)
# Your errors should be around 1e-9'
print 'Testing conv_backward_naive function'
print 'dx error: ', rel_error(dx, dx_num)
print 'dw error: ', rel_error(dw, dw_num)
print 'db error: ', rel_error(db, db_num)
"""
Explanation: Convolution: Naive backward pass
Implement the backward pass for the convolution operation in the function conv_backward_naive in the file cs231n/layers.py. Again, you don't need to worry too much about computational efficiency.
When you are done, run the following to check your backward pass with a numeric gradient check.
End of explanation
"""
x_shape = (2, 3, 4, 4)
x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)
pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}
out, _ = max_pool_forward_naive(x, pool_param)
correct_out = np.array([[[[-0.26315789, -0.24842105],
[-0.20421053, -0.18947368]],
[[-0.14526316, -0.13052632],
[-0.08631579, -0.07157895]],
[[-0.02736842, -0.01263158],
[ 0.03157895, 0.04631579]]],
[[[ 0.09052632, 0.10526316],
[ 0.14947368, 0.16421053]],
[[ 0.20842105, 0.22315789],
[ 0.26736842, 0.28210526]],
[[ 0.32631579, 0.34105263],
[ 0.38526316, 0.4 ]]]])
# Compare your output with ours. Difference should be around 1e-8.
print 'Testing max_pool_forward_naive function:'
print 'difference: ', rel_error(out, correct_out)
"""
Explanation: Max pooling: Naive forward
Implement the forward pass for the max-pooling operation in the function max_pool_forward_naive in the file cs231n/layers.py. Again, don't worry too much about computational efficiency.
Check your implementation by running the following:
End of explanation
"""
x = np.random.randn(3, 2, 8, 8)
dout = np.random.randn(3, 2, 4, 4)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout)
out, cache = max_pool_forward_naive(x, pool_param)
dx = max_pool_backward_naive(dout, cache)
# Your error should be around 1e-12
print 'Testing max_pool_backward_naive function:'
print 'dx error: ', rel_error(dx, dx_num)
"""
Explanation: Max pooling: Naive backward
Implement the backward pass for the max-pooling operation in the function max_pool_backward_naive in the file cs231n/layers.py. You don't need to worry about computational efficiency.
Check your implementation with numeric gradient checking by running the following:
End of explanation
"""
from cs231n.fast_layers import conv_forward_fast, conv_backward_fast
from time import time
x = np.random.randn(100, 3, 31, 31)
w = np.random.randn(25, 3, 3, 3)
b = np.random.randn(25,)
dout = np.random.randn(100, 25, 16, 16)
conv_param = {'stride': 2, 'pad': 1}
t0 = time()
out_naive, cache_naive = conv_forward_naive(x, w, b, conv_param)
t1 = time()
out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)
t2 = time()
print 'Testing conv_forward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'Difference: ', rel_error(out_naive, out_fast)
t0 = time()
dx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive)
t1 = time()
dx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast)
t2 = time()
print '\nTesting conv_backward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'dx difference: ', rel_error(dx_naive, dx_fast)
print 'dw difference: ', rel_error(dw_naive, dw_fast)
print 'db difference: ', rel_error(db_naive, db_fast)
from cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast
x = np.random.randn(100, 3, 32, 32)
dout = np.random.randn(100, 3, 16, 16)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
t0 = time()
out_naive, cache_naive = max_pool_forward_naive(x, pool_param)
t1 = time()
out_fast, cache_fast = max_pool_forward_fast(x, pool_param)
t2 = time()
print 'Testing pool_forward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'fast: %fs' % (t2 - t1)
print 'speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'difference: ', rel_error(out_naive, out_fast)
t0 = time()
dx_naive = max_pool_backward_naive(dout, cache_naive)
t1 = time()
dx_fast = max_pool_backward_fast(dout, cache_fast)
t2 = time()
print '\nTesting pool_backward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'dx difference: ', rel_error(dx_naive, dx_fast)
"""
Explanation: Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.
The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory:
bash
python setup.py build_ext --inplace
The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass receives upstream derivatives and the cache object and produces gradients with respect to the data and weights.
NOTE: The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation.
You can compare the performance of the naive and fast versions of these layers by running the following:
End of explanation
"""
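The core idea behind the fast convolution is im2col: unroll every receptive field into a column so the convolution becomes a single matrix multiply. A minimal single-image sketch of the idea (the actual Cython implementation is more general and far faster):

```python
import numpy as np

def im2col_sketch(x, k, stride):
    # x: one image of shape (C, H, W). Each column of the result is a
    # flattened k-by-k receptive field, so conv = weight_matrix @ columns.
    C, H, W = x.shape
    out_h = (H - k) // stride + 1
    out_w = (W - k) // stride + 1
    cols = np.empty((C * k * k, out_h * out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = x[:, i*stride:i*stride+k, j*stride:j*stride+k]
            cols[:, i * out_w + j] = patch.ravel()
    return cols

x = np.arange(8.0).reshape(1, 2, 4)   # one channel, a 2x4 image
cols = im2col_sketch(x, k=2, stride=2)
```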
from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward
x = np.random.randn(2, 3, 16, 16)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)
dx, dw, db = conv_relu_pool_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)
print 'Testing conv_relu_pool'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
from cs231n.layer_utils import conv_relu_forward, conv_relu_backward
x = np.random.randn(2, 3, 8, 8)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
out, cache = conv_relu_forward(x, w, b, conv_param)
dx, dw, db = conv_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)
print 'Testing conv_relu:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
"""
Explanation: Convolutional "sandwich" layers
Previously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks.
End of explanation
"""
model = ThreeLayerConvNet()
N = 50
X = np.random.randn(N, 3, 32, 32)
y = np.random.randint(10, size=N)
loss, grads = model.loss(X, y)
print 'Initial loss (no regularization): ', loss, np.log(10)
model.reg = 0.5
loss, grads = model.loss(X, y)
print 'Initial loss (with regularization): ', loss, np.log(10)
"""
Explanation: Three-layer ConvNet
Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network.
Open the file cs231n/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug:
Sanity check loss
After you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about log(C) for C classes. When we add regularization this should go up.
End of explanation
"""
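The log(C) expectation is easy to verify in isolation: with (near-)zero scores, which tiny random weights approximate, the softmax assigns equal probability to every class, so the cross-entropy loss is exactly log(C). A self-contained check:

```python
import numpy as np

def softmax_loss_value(scores, y):
    # Average cross-entropy of softmax(scores) against the true labels y.
    shifted = scores - scores.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(y)), y].mean()

C = 10
scores = np.zeros((50, C))          # stand-in for tiny random initial scores
y = np.random.randint(C, size=50)
loss = softmax_loss_value(scores, y)
```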
num_inputs = 2
input_dim = (3, 16, 16)
reg = 0.0
num_classes = 10
X = np.random.randn(num_inputs, *input_dim)
y = np.random.randint(num_classes, size=num_inputs)
model = ThreeLayerConvNet(num_filters=3, filter_size=3,
input_dim=input_dim, hidden_dim=7,
dtype=np.float64)
loss, grads = model.loss(X, y)
for param_name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)
e = rel_error(param_grad_num, grads[param_name])
  print '%s max relative error: %e' % (param_name, e)
"""
Explanation: Gradient check
After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artificial data and a small number of neurons at each layer.
End of explanation
"""
num_train = 100
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
model = ThreeLayerConvNet(weight_scale=1e-2)
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=1)
solver.train()
"""
Explanation: Overfit small data
A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
End of explanation
"""
plt.subplot(2, 1, 1)
plt.plot(solver.loss_history, 'o')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, '-o')
plt.plot(solver.val_acc_history, '-o')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
"""
Explanation: Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:
End of explanation
"""
model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001)
solver = Solver(model, data,
num_epochs=20, batch_size=500,
update_rule='adadelta',
optim_config={
'rho': 0.95,
'epsilon': 1e-8,
'learning_rate': 0.001
},
verbose=True, print_every=20)
solver.train()
"""
Explanation: Train the net
By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set:
End of explanation
"""
from cs231n.vis_utils import visualize_grid
grid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1))
plt.imshow(grid.astype('uint8'))
plt.axis('off')
plt.gcf().set_size_inches(5, 5)
plt.show()
print 'Final accuracy on test data using adadelta: %f' % solver.check_accuracy(data['X_test'], data['y_test'])
plt.subplot(2, 1, 1)
plt.plot(solver.loss_history, '-')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, '-')
plt.plot(solver.val_acc_history, '-')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
"""
Explanation: Visualize Filters
You can visualize the first-layer convolutional filters from the trained network by running the following:
End of explanation
"""
# Check the training-time forward pass by checking means and variances
# of features both before and after spatial batch normalization
N, C, H, W = 2, 3, 4, 5
x = 4 * np.random.randn(N, C, H, W) + 10
print 'Before spatial batch normalization:'
print ' Shape: ', x.shape
print ' Means: ', x.mean(axis=(0, 2, 3))
print ' Stds: ', x.std(axis=(0, 2, 3))
# Means should be close to zero and stds close to one
gamma, beta = np.ones(C), np.zeros(C)
bn_param = {'mode': 'train'}
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print 'After spatial batch normalization:'
print ' Shape: ', out.shape
print ' Means: ', out.mean(axis=(0, 2, 3))
print ' Stds: ', out.std(axis=(0, 2, 3))
# Means should be close to beta and stds close to gamma
gamma, beta = np.asarray([3, 4, 5]), np.asarray([6, 7, 8])
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print 'After spatial batch normalization (nontrivial gamma, beta):'
print ' Shape: ', out.shape
print ' Means: ', out.mean(axis=(0, 2, 3))
print ' Stds: ', out.std(axis=(0, 2, 3))
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, C, H, W = 10, 4, 11, 12
bn_param = {'mode': 'train'}
gamma = np.ones(C)
beta = np.zeros(C)
for t in xrange(50):
x = 2.3 * np.random.randn(N, C, H, W) + 13
spatial_batchnorm_forward(x, gamma, beta, bn_param)
bn_param['mode'] = 'test'
x = 2.3 * np.random.randn(N, C, H, W) + 13
a_norm, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print 'After spatial batch normalization (test-time):'
print ' means: ', a_norm.mean(axis=(0, 2, 3))
print ' stds: ', a_norm.std(axis=(0, 2, 3))
"""
Explanation: Spatial Batch Normalization
We already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization."
Normally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map.
If the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different images and between different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W.
Spatial batch normalization: forward
In the file cs231n/layers.py, implement the forward pass for spatial batch normalization in the function spatial_batchnorm_forward. Check your implementation by running the following:
End of explanation
"""
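Because the per-channel statistics are taken over N, H, and W together, spatial batch normalization reduces to vanilla batch normalization after a transpose and reshape. A minimal sketch of the forward pass (illustrative only; the reference implementation must also track running averages for test mode):

```python
import numpy as np

def spatial_batchnorm_forward_sketch(x, gamma, beta, eps=1e-5):
    # x: (N, C, H, W). Move channels last and flatten to (N*H*W, C) so each
    # channel becomes one "feature" of a vanilla batchnorm problem.
    N, C, H, W = x.shape
    x2 = x.transpose(0, 2, 3, 1).reshape(-1, C)
    mu, var = x2.mean(axis=0), x2.var(axis=0)
    xhat = (x2 - mu) / np.sqrt(var + eps)
    out2 = gamma * xhat + beta
    # Undo the reshape/transpose to restore the (N, C, H, W) layout.
    return out2.reshape(N, H, W, C).transpose(0, 3, 1, 2)

x = 4 * np.random.randn(2, 3, 4, 5) + 10
out = spatial_batchnorm_forward_sketch(x, np.ones(3), np.zeros(3))
```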
N, C, H, W = 2, 3, 4, 5
x = 5 * np.random.randn(N, C, H, W) + 12
gamma = np.random.randn(C)
beta = np.random.randn(C)
dout = np.random.randn(N, C, H, W)
bn_param = {'mode': 'train'}
fx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fb = lambda b: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = spatial_batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = spatial_batchnorm_backward(dout, cache)
print 'dx error: ', rel_error(dx_num, dx)
print 'dgamma error: ', rel_error(da_num, dgamma)
print 'dbeta error: ', rel_error(db_num, dbeta)
"""
Explanation: Spatial batch normalization: backward
In the file cs231n/layers.py, implement the backward pass for spatial batch normalization in the function spatial_batchnorm_backward. Run the following to check your implementation using a numeric gradient check:
End of explanation
"""
model = ChaudharyNet(hidden_dim=4096, reg=0.5)
solver = Solver(model, data,
num_epochs=50, batch_size=500,
update_rule='adadelta',
optim_config={
'rho': 0.95,
'epsilon': 1e-8,
'learning_rate': 0.001
},
verbose=True, print_every=5)
solver.train()
plt.subplot(2, 1, 1)
plt.plot(solver.loss_history, '-')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, '-')
plt.plot(solver.val_acc_history, '-')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
model
"""
Explanation: Experiment!
Experiment and try to get the best performance that you can on CIFAR-10 using a ConvNet. Here are some ideas to get you started:
Things you should try:
Filter size: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient
Number of filters: Above we used 32 filters. Do more or fewer do better?
Batch normalization: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?
Network architecture: The network above has two layers of trainable parameters. Can you do better with a deeper network? You can implement alternative architectures in the file cs231n/classifiers/convnet.py. Some good architectures to try include:
[conv-relu-pool]xN - conv - relu - [affine]xM - [softmax or SVM]
[conv-relu-pool]xN - [affine]xM - [softmax or SVM]
[conv-relu-conv-relu-pool]xN - [affine]xM - [softmax or SVM]
Tips for training
For each network architecture that you try, you should tune the learning rate and regularization strength. When doing this there are a couple of important things to keep in mind:
If the parameters are working well, you should see improvement within a few hundred iterations
Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.
Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.
Going above and beyond
If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are not required to implement any of these; however they would be good things to try for extra credit.
Alternative update steps: For the assignment we implemented SGD+momentum, RMSprop, and Adam; you could try alternatives like AdaGrad or AdaDelta.
Alternative activation functions such as leaky ReLU, parametric ReLU, or MaxOut.
Model ensembles
Data augmentation
If you do decide to implement something extra, clearly describe it in the "Extra Credit Description" cell below.
What we expect
At the very least, you should be able to train a ConvNet that gets at least 65% accuracy on the validation set. This is just a lower bound - if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.
You should use the space below to experiment and train your network. The final cell in this notebook should contain the training, validation, and test set accuracies for your final trained network. In this notebook you should also write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network.
Have fun and happy training!
End of explanation
"""
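One common way to implement the coarse search described above is to sample hyperparameters log-uniformly, which covers several orders of magnitude evenly. A sketch where the score is a random stand-in for a short Solver training run:

```python
import numpy as np

np.random.seed(0)
results = []
for _ in range(20):
    lr = 10 ** np.random.uniform(-5, -2)    # learning rate over 3 decades
    reg = 10 ** np.random.uniform(-4, 0)    # regularization strength
    val_acc = np.random.rand()              # stand-in for a real short run
    results.append((val_acc, lr, reg))
# Keep the best configuration and search more finely around it afterwards.
best_acc, best_lr, best_reg = max(results)
```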
|
ptosco/rdkit | Docs/Notebooks/MolStandardize.ipynb | bsd-3-clause | import rdkit
from rdkit import Chem
from rdkit.Chem.Draw import IPythonConsole
from rdkit.Chem.MolStandardize import rdMolStandardize
"""
Explanation: MolStandardize
This is a demonstration of how to use the rdMolStandardize module within RDKit. The structure and capabilities remain largely the same as MolVS (https://molvs.readthedocs.io/en/latest/), but with extended capabilities such as user-supplied lists in some of the standardization tools.
End of explanation
"""
sm = "[Na]OC(=O)c1ccc(C[S+2]([O-])([O-]))cc1"
rdMolStandardize.StandardizeSmiles(sm)
"""
Explanation: The rdMolStandardize module contains the following convenience functions for quickly performing that standardization task:
- rdMolStandardize.StandardizeSmiles()
- rdMolStandardize.ValidateSmiles()
Other functions within the module:
- rdMolStandardize.Cleanup()
- rdMolStandardize.ChargeParent()
- rdMolStandardize.FragmentParent()
rdMolStandardize.Cleanup() is equivalent to the molvs.standardize.Standardizer().standardize() function in molvs
The rdMolStandardize module contains the following classes, which allow you to develop your own custom standardization process.
- rdMolStandardize.CleanupParameters
- rdMolStandardize.Normalize
- rdMolStandardize.MetalDisconnector
- rdMolStandardize.FragmentParent
- rdMolStandardize.FragmentRemover
- rdMolStandardize.Reionizer
- rdMolStandardize.Uncharger
- rdMolStandardize.RDKitValidation
- rdMolStandardize.MolVSValidation
- rdMolStandardize.AllowedAtomsValidation
- rdMolStandardize.DisallowedAtomsValidation
TODO
- rdMolStandardize.tautomer
rdMolStandardize.StandardizeSmiles()
The rdMolStandardize.StandardizeSmiles() convenience function contains all sensible default functionality to help get started
End of explanation
"""
mol = Chem.MolFromSmiles(sm)
Chem.SanitizeMol(mol)
mol = Chem.RemoveHs(mol)
mol = rdMolStandardize.MetalDisconnector().Disconnect(mol)
mol = rdMolStandardize.Normalize(mol)
mol = rdMolStandardize.Reionize(mol)
Chem.AssignStereochemistry(mol, force=True, cleanIt=True)
Chem.MolToSmiles(mol)
"""
Explanation: The rdMolStandardize.StandardizeSmiles() function performs the following steps.
End of explanation
"""
rdMolStandardize.ValidateSmiles("ClCCCl.c1ccccc1O")
"""
Explanation: rdMolStandardize.ValidateSmiles()
The rdMolStandardize.ValidateSmiles() function is a convenience function that quickly validates a single SMILES string. It uses the rdMolStandardize.MolVSValidation() class with its default of running all validations.
End of explanation
"""
vm = rdMolStandardize.RDKitValidation()
mol = Chem.MolFromSmiles("CO(C)C", sanitize=False)
vm.validate(mol)
"""
Explanation: rdMolStandardize validation methods
There are four different ways of validating molecules, each can by run with their different classes:
- rdMolStandardize.RDKitValidation
- rdMolStandardize.MolVSValidation
- rdMolStandardize.AllowedAtomsValidation
- rdMolStandardize.DisallowedAtomsValidation
All the classes have a validate() method to perform the validation.
The rdMolStandardize.RDKitValidation class validates the valency of every atom in the input molecule (much like RDKit automatically does).
End of explanation
"""
vm = rdMolStandardize.MolVSValidation()
mol = Chem.MolFromSmiles("COc1cccc(C=N[N-]C(N)=O)c1[O-].O.O.O.O=[U+2]=O")
vm.validate(mol)
"""
Explanation: The rdMolStandardize.MolVSValidation class goes through similar validations as the standalone MolVS.
https://molvs.readthedocs.io/en/latest/api.html#molvs-validate
The default is to do all validations:
- NoAtomValidation
- FragmentValidation
- NeutralValidation
- IsotopeValidation
Using the rdMolStandardize.MolVSValidation class rather than the convenience function rdMolStandardize.ValidateSmiles() provides more flexibility when working with multiple molecules or when a custom validation list is required
End of explanation
"""
validations = [rdMolStandardize.NoAtomValidation()]
vm = rdMolStandardize.MolVSValidation(validations)
mol = Chem.MolFromSmiles("COc1cccc(C=N[N-]C(N)=O)c1[O-].O.O.O.O=[U+2]=O")
vm.validate(mol)
"""
Explanation: Or you can specify which subset of MolVSValidations you want when initializing
End of explanation
"""
from rdkit.Chem.rdchem import Atom
atomic_no = [6,7,8]
allowed_atoms = [Atom(i) for i in atomic_no]
vm = rdMolStandardize.AllowedAtomsValidation(allowed_atoms)
mol = Chem.MolFromSmiles("CC(=O)CF")
vm.validate(mol)
atomic_no = [9, 17, 35]
disallowed_atoms = [Atom(i) for i in atomic_no]
vm = rdMolStandardize.DisallowedAtomsValidation(disallowed_atoms)
mol = Chem.MolFromSmiles("CC(=O)CF")
msg = vm.validate(mol)
"""
Explanation: The rdMolStandardize.AllowedAtomsValidation class lets the user input a list of atoms, anything not on the list throws an error.
The rdMolStandardize.DisallowedAtomsValidation class also takes an input of a list of atoms and as long as there are no atoms from the list it is deemed acceptable.
End of explanation
"""
mol = Chem.MolFromSmiles("[Na]OC(=O)c1ccc(C[S+2]([O-])([O-]))cc1")
mol
smol = rdMolStandardize.Cleanup(mol)
smol
"""
Explanation: rdMolStandardize.Cleanup
Whilst rdMolStandardize.StandardizeSmiles() provides a quick and easy way to get standardized version of a SMILES string, it's inefficient when dealing with multiple molecules and doesn't allow customization of the standardize process.
The Cleanup function provides flexibility to specify custom cleanup files and parameters.
End of explanation
"""
params = rdMolStandardize.CleanupParameters()
print("Default acidbaseFile: %s" % params.acidbaseFile)
print("Default normalizationsFile: %s" % params.normalizationsFile)
print("Default fragmentFile: %s" % params.fragmentFile)
print("Default maxRestarts: %s" % params.maxRestarts)
print("Default preferOrganic: %s" % params.preferOrganic)
"""
Explanation: rdMolStandardize.Cleanup() function can take in an optional rdMolStandardize.CleanupParameters object which you can specify your own standardization reference files to use.
The member variables that you can change are:
- CleanupParameters.acidbaseFile
- CleanupParameters.normalizationsFile
- CleanupParameters.fragmentFile
- CleanupParameters.maxRestarts
- CleanupParameters.preferOrganic
End of explanation
"""
n = rdMolStandardize.Normalizer()
n.normalize(Chem.MolFromSmiles("C[S+2]([O-])([O-])O"))
"""
Explanation: rdMolStandardize.Normalize
The rdMolStandardize.Normalizer class is used to apply a series of Normalization transforms to correct functional groups and recombine charges.
End of explanation
"""
md = rdMolStandardize.MetalDisconnector()
md.Disconnect(Chem.MolFromSmiles("CCC(=O)O[Na]"))
"""
Explanation: rdMolStandardize.MetalDisconnector
The rdMolStandardize.MetalDisconnector class disconnects metal atoms that are defined as covalently bonded to non-metals.
End of explanation
"""
lfc = rdMolStandardize.LargestFragmentChooser()
lfc.choose(Chem.MolFromSmiles("O=C(O)CCC.O=C(O)CCCC.O=C(O)CCCCC.O=C(O)CCCC"))
"""
Explanation: rdMolStandardize.LargestFragmentChooser and rdMolStandardize.FragmentRemover
rdMolStandardize.LargestFragmentChooser class gets the largest fragment
End of explanation
"""
fr = rdMolStandardize.FragmentRemover()
fr.remove(Chem.MolFromSmiles("CN(C)C.Cl.Cl.Br"))
"""
Explanation: rdMolStandardize.FragmentParent() function is similar to rdMolStandardize.LargestFragmentChooser but first performs rdMolStandardize.Cleanup()
rdMolStandardize.FragmentRemover class filters out fragments
End of explanation
"""
mol = Chem.MolFromSmiles("[Na+].O=C([O-])c1ccccc1")
lfc = rdMolStandardize.LargestFragmentChooser()
mol = lfc.choose(mol)
mol
u = rdMolStandardize.Uncharger()
mol = u.uncharge(mol)
mol
r = rdMolStandardize.Reionizer()
mol = r.reionize(mol)
mol
mol = Chem.MolFromSmiles("[Na+].O=C([O-])c1ccccc1")
rdMolStandardize.ChargeParent(mol)
"""
Explanation: rdMolStandardize.Reionizer and rdMolStandardizer.Uncharger
rdMolStandardize.Reionizer ensures that the strongest acid groups ionize first in partially ionized molecules.
rdMolStandardize.Uncharger attempts to neutralize charges by adding and/or removing hydrogens where possible.
rdMolStandardize.ChargeParent() method is the uncharged version of the fragment parent. It involves taking the fragment parent then applying Neutralize and Reionize.
End of explanation
"""
from rdkit.Chem.MolStandardize.standardize import enumerate_tautomers_smiles, canonicalize_tautomer_smiles
enumerate_tautomers_smiles("OC(C)=C(C)C")
canonicalize_tautomer_smiles("OC(C)=C(C)C")
"""
Explanation: TODO
MolStandardize.tautomer
TODO
Tautomer enumeration generates all possible tautomers using a series of transform rules. It also removes stereochemistry from double bonds that are single in at least one tautomer.
Tautomer canonicalization enumerates all possible tautomers using transform rules and uses a scoring system to determine the canonical tautomer.
End of explanation
"""
|
materialsvirtuallab/matgenb | notebooks/2019-01-11-How to plot and evaluate output files from Lobster.ipynb | bsd-3-clause | from pymatgen.electronic_structure.plotter import CohpPlotter
from pymatgen.electronic_structure.cohp import CompleteCohp
%matplotlib inline
"""
Explanation: Introduction
This notebook was written by Janine George (E-mail: janine.george@uclouvain.be Université catholique de Louvain, https://jageo.github.io/).
This notebook shows how to plot Crystal Orbital Hamilton Population (COHP) and projected densities of states calculated with the Local-Orbital Basis Suite Towards Electronic-Structure Reconstruction (LOBSTER) code. Furtheremore, the classes Icohplist and Charge to evaluate ICOHPLIST.lobster and CHARGE.lobster are explained. See http://www.cohp.de for more information. The code to plot COHP and evaluate ICOHPLIST.lobster in pymatgen was started Marco Esters and Anubhav Jain and extended by Janine George. The classes to plot DOSCAR.lobster, and to evaluate CHARGE.lobster were written by Janine George.
Written using:
- pymatgen==2019.11.11
How to plot COHPCAR.lobster
get relevant classes
End of explanation
"""
COHPCAR_path = "lobster_data/GaAs/COHPCAR.lobster"
POSCAR_path = "lobster_data/GaAs/POSCAR"
completecohp=CompleteCohp.from_file(fmt="LOBSTER",filename=COHPCAR_path,structure_file=POSCAR_path)
"""
Explanation: get a completecohp object to simplify the plotting
End of explanation
"""
#search for the number of the COHP you would like to plot in ICOHPLIST.lobster (the numbers in COHPCAR.lobster are different!)
label="16"
cp=CohpPlotter()
#get a nicer plot label
plotlabel=str(completecohp.bonds[label]['sites'][0].species_string)+'-'+str(completecohp.bonds[label]['sites'][1].species_string)
cp.add_cohp(plotlabel,completecohp.get_cohp_by_label(label=label))
#check which COHP you are plotting
print("This is a COHP between the following sites: "+str(completecohp.bonds[label]['sites'][0])+' and '+ str(completecohp.bonds[label]['sites'][1]))
x = cp.get_plot(integrated=False)
x.ylim([-10, 6])
x.show()
"""
Explanation: plot certain COHP
You have to search for the label of the COHP you would like to plot in ICOHPLIST.lobster
End of explanation
"""
#labels of the COHPs that will be summed!
labelist = ["16", "21"]
cp = CohpPlotter()
# get a nicer plot label
plotlabel = "two Ga-As bonds"
cp.add_cohp(plotlabel, completecohp.get_summed_cohp_by_label_list(label_list=labelist, divisor=1))
x = cp.get_plot(integrated=False)
x.ylim([-10, 6])
x.show()
"""
Explanation: add several COHPs
End of explanation
"""
#search for the number of the COHP you would like to plot in ICOHPLIST.lobster (the numbers in COHPCAR.lobster are different!)
label="16"
cp=CohpPlotter()
#get orbital object
from pymatgen.electronic_structure.core import Orbital
#interactions between 4s and 4py (orbitals) and between 4s and 4pz (orbitals2)
orbitals=[[4,Orbital.s], [4,Orbital.py]]
orbitals2=[[4,Orbital.s], [4,Orbital.pz]]
#get a nicer plot label
plotlabel=str(completecohp.bonds[label]['sites'][0].species_string)+'(4s)'+'-'+str(completecohp.bonds[label]['sites'][1].species_string)+'(4py)'
plotlabel2=str(completecohp.bonds[label]['sites'][0].species_string)+'(4s)'+'-'+str(completecohp.bonds[label]['sites'][1].species_string)+'(4pz)'
cp.add_cohp(plotlabel,completecohp.get_orbital_resolved_cohp(label=label, orbitals=orbitals))
cp.add_cohp(plotlabel2,completecohp.get_orbital_resolved_cohp(label=label, orbitals=orbitals2))
#check which COHP you are plotting
#with integrated=True, you can plot the integrated COHP
x = cp.get_plot(integrated=False)
x.ylim([-10, 6])
x.show()
"""
Explanation: focus on certain orbitals only
End of explanation
"""
from pymatgen.io.lobster import Icohplist
"""
Explanation: How to evaluate ICOHPLIST.lobster
get relevant classes
End of explanation
"""
icohplist=Icohplist(filename='lobster_data/GaAs/ICOHPLIST.lobster')
icohpcollection=icohplist.icohpcollection
"""
Explanation: read in ICOHPLIST.lobster and get Icohpcollection object
End of explanation
"""
#get icohp value by label (labelling according to ICOHPLIST.lobster)
#for spin polarized calculations you can also sum the spin channels
print('icohp value for certain bond by label')
label='16'
print(icohpcollection.get_icohp_by_label(label))
print()
#you can get all Icohpvalue objects for certain bond lengths
print('Icohp values for certain bonds with certain bond lengths')
for key,icohp in icohpcollection.get_icohp_dict_by_bondlengths(minbondlength=0.0, maxbondlength=3.0).items():
print(key+':'+str(icohp.icohp))
print()
#you can get all icohps for a certain site
print('ICOHP values of certain site')
for key,icohp in (icohpcollection.get_icohp_dict_of_site(site=0,minbondlength=0.0, maxbondlength=3.0).items()):
print(key+':'+str(icohp.icohp))
"""
Explanation: get interesting properties from ICOHPLIST.lobster
End of explanation
"""
#relevant classes
from pymatgen.io.lobster import Doscar
from pymatgen.electronic_structure.plotter import DosPlotter
from pymatgen.core.composition import Element
%matplotlib inline
"""
Explanation: How to plot DOSCAR.lobster:
get relevant classes
End of explanation
"""
#read in DOSCAR.lobster
doscar=Doscar(doscar="lobster_data/GaAs/DOSCAR.lobster",structure_file="lobster_data/GaAs/POSCAR")
complete_dos=doscar.completedos
#get structure object
structure=complete_dos.structure
"""
Explanation: read in DOSCAR.lobster and get structure object for later
End of explanation
"""
#plot total dos
Plotter=DosPlotter()
Plotter.add_dos("Total Dos",doscar.tdos)
Plotter.get_plot().show()
"""
Explanation: plot total density of states
End of explanation
"""
#plot DOS of s,p, and d orbitals for certain element
Plotter=DosPlotter()
el=Element("Ga")
Plotter.add_dos_dict(complete_dos.get_element_spd_dos(el=el))
Plotter.get_plot().show()
"""
Explanation: plot DOS projected on s, p, and d orbitals for certain element
End of explanation
"""
Plotter=DosPlotter()
#choose the sites you would like to plot
for isite,site in enumerate(structure[0:1]):
#name the orbitals you would like to include
#the other orbitals are named in a similar way. The orbitals are called: "s", "p_y", "p_z", "p_x", "d_xy", "d_yz", "d_z^2","d_xz", "d_x^2-y^2", "f_y(3x^2-y^2)", "f_xyz","f_yz^2", "f_z^3", "f_xz^2", "f_z(x^2-y^2)", "f_x(x^2-3y^2)"
for orbital in ["4s"]:
Plotter.add_dos("Ga"+str(isite+1)+":"+orbital,complete_dos.get_site_orbital_dos(site,orbital))
Plotter.get_plot().show()
"""
Explanation: plot DOS for cetain sites and orbitals
End of explanation
"""
from pymatgen.io.lobster import Charge
"""
Explanation: evaluate CHARGE.lobster
get relevant classes
End of explanation
"""
charge=Charge(filename='lobster_data/GaAs/CHARGE.lobster')
newstructure=charge.get_structure_with_charges(structure_filename='lobster_data/GaAs/POSCAR')
print(newstructure)
"""
Explanation: read in charge and produce a structure with the charge as a property
End of explanation
"""
from pymatgen.io.lobster import Grosspop
grosspop=Grosspop(filename="lobster_data/GaAs/GROSSPOP.lobster")
print(grosspop.list_dict_grosspop)
"""
Explanation: evaluate GROSSPOP.lobster
get relevant class
End of explanation
"""
newstructure=grosspop.get_structure_with_total_grosspop('lobster_data/GaAs/POSCAR')
print("Structure:")
print(newstructure)
"""
Explanation: get a structure with total gross populations
End of explanation
"""
from pymatgen.io.lobster import Fatband
from pymatgen.electronic_structure.plotter import BSPlotterProjected, BSDOSPlotter, BSPlotter
"""
Explanation: FATBAND plot
get relevant classes
End of explanation
"""
fatband=Fatband(filenames="lobster_data/GaAs",vasprun="lobster_data/GaAs/vasprun.xml",
Kpointsfile="lobster_data/GaAs/KPOINTS")
#get a band structure object
bssymline=fatband.get_bandstructure()
#print(bssymline.as_dict())
#this can be plotted with the classes to plot bandstructures from vasp
BSDOSPlotter(bs_projection="elements",dos_projection="elements").get_plot(bs=bssymline,dos=complete_dos).show()
#another plot type from pymatgen:
bsplotter=BSPlotterProjected(bssymline)
bsplotter.get_projected_plots_dots({"Ga":["4s","4p","3d"],"As":["4s","4p"]}).show()
"""
Explanation: get a bandstructure plot that is combined with a DOS plot
End of explanation
"""
from pymatgen.io.lobster import Lobsterout
"""
Explanation: Read lobsterout
End of explanation
"""
lobsterout=Lobsterout("lobster_data/GaAs/lobsterout")
document=lobsterout.get_doc()
"""
Explanation: get all relevant information from lobsterout
End of explanation
"""
print(document["chargespilling"])
"""
Explanation: charge spilling can be accessed easily
End of explanation
"""
from pymatgen.io.lobster import Lobsterin
"""
Explanation: Create input files for vasp and lobster automatically
relevant classes
End of explanation
"""
lobsterin = Lobsterin.standard_calculations_from_vasp_files("lobster_data/GaAs/POSCAR",
"lobster_data/GaAs/INCAR", "lobster_data/GaAs/POTCAR",
option='standard')
"""
Explanation: a Lobsterin object with standard settings is created, a standard basis is used
End of explanation
"""
lobsterin.write_lobsterin(path="lobsterin")
file=open('./lobsterin','r')
print(file.read())
"""
Explanation: writes lobsterin
End of explanation
"""
lobsterin.write_INCAR(incar_input="lobster_data/GaAs/INCAR", incar_output="INCAR.lobster",
poscar_input="lobster_data/GaAs/POSCAR", isym=-1, further_settings={"IBRION":-1})
file=open('./INCAR.lobster','r')
print(file.read())
"""
Explanation: will change ISYM to -1, NSW to 0, insert NBANDS, and set LWAVE to True in the INCAR
additional changes to the INCAR are allowed via further_settings
End of explanation
"""
lobsterin = Lobsterin.standard_calculations_from_vasp_files("lobster_data/GaAs/POSCAR", "lobster_data/GaAs/INCAR",
"lobster_data/GaAs/POTCAR", option='standard',
dict_for_basis={"Ga": '4s 4p', "As": '4s 4p'})
#writes lobsterin
lobsterin.write_lobsterin(path="lobsterin")
file=open('./lobsterin','r')
print(file.read())
"""
Explanation: a Lobsterin object with standard settings is created, a basis given by the user is used
End of explanation
"""
|
windj007/tablex-dataset | dataset_from_latex.ipynb | apache-2.0 | # frequent_errors = collections.Counter(err
# for f in glob.glob('./data/arxiv/err_logs/*.log')
# for err in {line
# for line in open(f, 'r', errors='replace')
# if "error:" in line})
# frequent_errors.most_common(10)
"""
Explanation: Analyze error logs
End of explanation
"""
# preprocess_latex_file('./data/arxiv/1/44/The_Chiral_Anomaly_Final_Posting.tex')
compile_latex('./111/tex-playground/')
# !mkdir ./data/arxiv/1/44/pages/
pages = pdf_to_pages('./111/tex-playground/playground.pdf', './111/tex-playground/pages/')
with open('./111/tex-playground/playground.tex') as f:
soup = TexSoup.TexSoup(f.read())
# test_latex = r'''
# \documentclass{llncs}
# \usepackage{graphicx}
# \usepackage{multirow}
# \usepackage{hyperref}
# \usepackage[a4paper, landscape, margin={0.1in, 0.1in}]{geometry}
# \usepackage{tabularx}
# \usepackage{makecell}
# \begin{document}
# \begin{table}
# \renewcommand{\arraystretch}{0.42}
# \setlength{\tabcolsep}{1.52pt}
# \begin{tabular}{ c c r|c|r|l|c|}
# & .Myrnvnl & \multicolumn{5}{ c }{Bd iH VXDy -aL} \\
# & & \multicolumn{2}{|c|}{AlUBLk.cv} & \multicolumn{2}{ c }{ \makecell{ nUd qLoieco jVsmTLRAf \\ UPS TJL xGIH } } & qe.V.. \\
# & & \makecell{ MG MTBSgR, \\ ,lHm Ihmd \\ lbrT } & -OfQuxW & MeY XR & kSG,dEFX & \\
# \hline \makecell{ LuekQjL NSs TVq \\ NDC } & 8.80 Mv & osw & K*Dgc & 53.16 Tr & 8.92 & 44.18 j- \\
# \hline oL & 55.67 UueS & vGkGl & -MUJhqduw & 67.86 sxRy- & 63.51 & 10.85 A*,hKg \\
# nA & 7.46 ll & yVw,P & vuege & 96.36 FuEa & 80.27 & 40.46 NeWuNVi \\
# fA & 0.47 j,Gg.Gv & TrwtXRS & yfhyTWJ & 42.20 sWdg & 8.76 & 98.68 ND \\
# \hline \makecell{ hD XXOl dMCTp Yib \\ p.IE TcBn } & 7.90 Pm & CbyWQtUTY, & FPFh.M & 22.38 Hs & 16.03 & 33.20 hU \\
# \hline \makecell{ LAxtFM cmBvrJj hCRx, \\ LiQYh } & 97.15 *a & ..pb & ejNtniag & 84.67 F.xHN & 10.31 & 23.57 R,rdK \\
# x*d afKGwJw & 82.46 REuwGLME & cIQv & iCLkFNY & 95.92 iHL & 79.26 & 80.85 L-NR \\
# \end{tabular}
# \end{table}
# \end{document}
# '''
# soup = TexSoup.TexSoup(test_latex)
# !cat -n ./data/arxiv/1/44/The_Chiral_Anomaly_Final_Posting.tex
tables = list(soup.find_all('table'))
t = tables[0]
t.tabular
qq = structurize_tabular_contents(t.tabular)
qq
list(get_all_tokens(qq.rows[8][2]))
ww = next(iter(get_all_tokens(qq.rows[6][0])))
print(ww)
print(type(ww))
src_pos = soup.char_pos_to_line(ww.position + len(ww.text) // 2)
src_pos
o = subprocess.check_output(['synctex', 'view',
'-i', '{}:{}:{}'.format(src_pos[0] + 1,
src_pos[1] + 1,
'playground.tex'),
'-o', 'playground.pdf'],
cwd='./111/tex-playground/').decode('ascii')
p = parse_synctex_output(o)
page_i, boxes = list(p.items())[0]
box = boxes[2]
print(page_i, boxes)
pdf = PdfMinerWrapper('./111/tex-playground/playground.pdf')
pdf.load()
page_info = pdf.get_page(page_i-1)
found_boxes = list(pdf.get_boxes(page_i-1, [convert_coords_to_pq(b, page_info[1].cropbox)
for b in boxes]))
print('; '.join(pdf.get_text(page_i-1,
[convert_coords_to_pq(b, page_info[1].cropbox)])
for b in boxes))
table_info = list(get_table_info(soup))[1]
page_img = load_image_opaque(pages[page_i - 1])
make_demo_mask(page_img,
[(1,
(convert_coords_from_pq(fb.bbox, page_info[1].cropbox) * POINTS_TO_PIXELS_FACTOR).astype('int'))
for fb in found_boxes] +
[(1, (numpy.array(b) * POINTS_TO_PIXELS_FACTOR).astype('int')) for b in boxes])
pdf_latex_to_samples('1',
'.',
'./111/tex-playground/playground.tex',
'./111/tex-playground/playground.pdf',
'./111/tex-playground/',
get_table_info,
boxes_aggregator=aggregate_object_bboxes,
display_demo=True)
# print('\n*********\n'.join(map(str, get_all_tokens(t.tabular))))
"""
Explanation: Debug
End of explanation
"""
# table_def = gen_table_contents()
# print('columns', len(table_def[2][0]), 'rows', len(table_def[2]))
# # %%prun
# render_table(table_def, '/notebook/templates/springer/', '/notebook/data/generated/1.pdf',
# print_latex_content=True,
# display_demo=True,
# on_wrong_parse='ignore')
def gen_and_save_table(i, seed):
numpy.random.seed(seed)
table_def = gen_table_contents()
render_table(table_def, '/notebook/templates/springer/', '/notebook/data/generated_with_char_info/big_simple_lined/src/{}'.format(i))
seeds = numpy.random.randint(0, 2000, size=2000)
joblib.Parallel(n_jobs=6)(joblib.delayed(gen_and_save_table)(i, s) for i, s in enumerate(seeds))
# for dirname in ['complex_clean', 'dense', 'lined', 'multiline_lined', 'no_lined', 'big_simple_lined', 'big_simple_no_lined']:
# print(dirname)
# for subdir in ['demo', 'src']:
# print(subdir)
# src_full_dirname = os.path.join('./data/generated', dirname, subdir)
# target_full_dirname = os.path.join('./data/generated/full', subdir)
# for fname in tqdm.tqdm(os.listdir(src_full_dirname)):
# shutil.copy2(os.path.join(src_full_dirname, fname),
# os.path.join(target_full_dirname, dirname + '_' + fname))
"""
Explanation: Generate tables
End of explanation
"""
archive_files = list(glob.glob('./data/arxiv/sources/*.tar.gz'))
print('Total downloaded', len(archive_files))
# def _get_archive_content_type(fname):
# return read_metadata(fname)['content_type']
# print('Types:\n', collections.Counter(joblib.Parallel(n_jobs=-1)(joblib.delayed(_get_archive_content_type)(archive)
# for archive in archive_files)).most_common())
# print()
"""
Explanation: Get some statistics
End of explanation
"""
good_papers = set()
bad_papers = set()
if os.path.exists('./good_papers.lst'):
with open('./good_papers.lst', 'r') as f:
good_papers = set(line.strip() for line in f)
if os.path.exists('./bad_papers.lst'):
with open('./bad_papers.lst', 'r') as f:
bad_papers = set(line.strip() for line in f)
print('Good papers', len(good_papers))
print('Bad papers', len(bad_papers))
# def check_archive_func(fname):
# return (fname,
# contains_something_interesting(fname, get_table_info))
# archive_files_with_check_res = joblib.Parallel(n_jobs=12)(joblib.delayed(check_archive_func)(fname)
# for fname in archive_files
# if not (fname in bad_papers or fname in good_papers))
# for fname, is_good in archive_files_with_check_res:
# if is_good:
# good_papers.add(fname)
# else:
# bad_papers.add(fname)
# with open('./good_papers.lst', 'w') as f:
# f.write('\n'.join(sorted(good_papers)))
# with open('./bad_papers.lst', 'w') as f:
# f.write('\n'.join(sorted(bad_papers)))
"""
Explanation: Total downloaded 208559
Types:
[('application/x-eprint-tar', 149642), ('application/x-eprint', 40360), ('application/pdf', 18292), ('application/vnd.openxmlformats-officedocument.wordprocessingml.document', 218), ('application/postscript', 47)]
End of explanation
"""
ARXIV_INOUT_PAIRS_DIR = './data/arxiv/inout_pairs/'
def _pdf2samples_mp(archive):
try:
pdf2samples(archive,
ARXIV_INOUT_PAIRS_DIR,
lambda s: get_table_info(s, extract_cells=False),
aggregate_object_bboxes)
except Exception as ex:
with open(os.path.join(ARXIV_INOUT_PAIRS_DIR, os.path.basename(archive) + '.log'), 'w') as f:
f.write(str(ex) + '\n')
f.write(traceback.format_exc())
_ = joblib.Parallel(n_jobs=10)(joblib.delayed(_pdf2samples_mp)(arc)
for arc in good_papers)
"""
Explanation: Apply pipeline to some papers
End of explanation
"""
|
statsmodels/statsmodels.github.io | v0.12.1/examples/notebooks/generated/stationarity_detrending_adf_kpss.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
"""
Explanation: Stationarity and detrending (ADF/KPSS)
Stationarity means that the statistical properties of a time series, i.e. its mean, variance, and covariance, do not change over time. Many statistical models require the series to be stationary to make effective and precise predictions.
Two statistical tests will be used to check the stationarity of a time series – the Augmented Dickey-Fuller (“ADF”) test and the Kwiatkowski-Phillips-Schmidt-Shin (“KPSS”) test. A method to convert a non-stationary time series into a stationary series will also be used.
This first cell imports standard packages and sets plots to appear inline.
End of explanation
"""
sunspots = sm.datasets.sunspots.load_pandas().data
"""
Explanation: Sunspots dataset is used. It contains yearly (1700-2008) data on sunspots from the National Geophysical Data Center.
End of explanation
"""
sunspots.index = pd.Index(sm.tsa.datetools.dates_from_range('1700', '2008'))
del sunspots["YEAR"]
"""
Explanation: Some preprocessing is carried out on the data. The "YEAR" column is used in creating index.
End of explanation
"""
sunspots.plot(figsize=(12,8))
"""
Explanation: The data is plotted now.
End of explanation
"""
from statsmodels.tsa.stattools import adfuller
def adf_test(timeseries):
print ('Results of Dickey-Fuller Test:')
dftest = adfuller(timeseries, autolag='AIC')
dfoutput = pd.Series(dftest[0:4], index=['Test Statistic','p-value','#Lags Used','Number of Observations Used'])
for key,value in dftest[4].items():
dfoutput['Critical Value (%s)'%key] = value
print (dfoutput)
"""
Explanation: ADF test
The ADF test is used to determine the presence of a unit root in the series, and hence helps in understanding whether the series is stationary. The null and alternate hypotheses of this test are:
Null Hypothesis: The series has a unit root.
Alternate Hypothesis: The series has no unit root.
If the null hypothesis fails to be rejected, this test may provide evidence that the series is non-stationary.
A function is created to carry out the ADF test on a time series.
End of explanation
"""
from statsmodels.tsa.stattools import kpss
def kpss_test(timeseries):
print ('Results of KPSS Test:')
kpsstest = kpss(timeseries, regression='c', nlags="auto")
kpss_output = pd.Series(kpsstest[0:3], index=['Test Statistic','p-value','Lags Used'])
for key,value in kpsstest[3].items():
kpss_output['Critical Value (%s)'%key] = value
print (kpss_output)
"""
Explanation: KPSS test
KPSS is another test for checking the stationarity of a time series. The null and alternate hypothesis for the KPSS test are opposite that of the ADF test.
Null Hypothesis: The process is trend stationary.
Alternate Hypothesis: The series has a unit root (series is not stationary).
A function is created to carry out the KPSS test on a time series.
End of explanation
"""
adf_test(sunspots['SUNACTIVITY'])
"""
Explanation: The ADF test gives the following results – the test statistic, the p-value, and the critical values at the 1%, 5%, and 10% confidence intervals.
The ADF test is now applied on the data.
End of explanation
"""
kpss_test(sunspots['SUNACTIVITY'])
"""
Explanation: Based upon the significance level of 0.05 and the p-value of the ADF test, the null hypothesis cannot be rejected. Hence, the series is non-stationary.
The KPSS test gives the following results – the test statistic, the p-value, and the critical values at the 1%, 5%, and 10% confidence intervals.
The KPSS test is now applied on the data.
End of explanation
"""
sunspots['SUNACTIVITY_diff'] = sunspots['SUNACTIVITY'] - sunspots['SUNACTIVITY'].shift(1)
sunspots['SUNACTIVITY_diff'].dropna().plot(figsize=(12,8))
"""
Explanation: Based upon the significance level of 0.05 and the p-value of the KPSS test, there is evidence for rejecting the null hypothesis in favor of the alternative. Hence, the series is non-stationary as per the KPSS test.
It is always better to apply both tests, so that it can be ensured that the series is truly stationary. Possible outcomes of applying these stationarity tests are as follows:
Case 1: Both tests conclude that the series is not stationary - The series is not stationary
Case 2: Both tests conclude that the series is stationary - The series is stationary
Case 3: KPSS indicates stationarity and ADF indicates non-stationarity - The series is trend stationary. The trend needs to be removed to make the series strictly stationary. The detrended series is checked for stationarity.
Case 4: KPSS indicates non-stationarity and ADF indicates stationarity - The series is difference stationary. Differencing is to be used to make the series stationary. The differenced series is checked for stationarity.
Here, due to the difference in the results from the ADF test and the KPSS test, it can be inferred that the series is trend stationary and not strictly stationary. The series can be detrended by differencing or by model fitting.
Detrending by Differencing
It is one of the simplest methods for detrending a time series. A new series is constructed where the value at the current time step is calculated as the difference between the original observation and the observation at the previous time step.
Differencing is applied on the data and the result is plotted.
End of explanation
"""
adf_test(sunspots['SUNACTIVITY_diff'].dropna())
"""
Explanation: ADF test is now applied on these detrended values and stationarity is checked.
End of explanation
"""
kpss_test(sunspots['SUNACTIVITY_diff'].dropna())
"""
Explanation: Based upon the p-value of the ADF test, there is evidence for rejecting the null hypothesis in favor of the alternative. Hence, the series is strictly stationary now.
KPSS test is now applied on these detrended values and stationarity is checked.
End of explanation
"""
|
ibmkendrick/streamsx.health | samples/HealthcareJupyterDemo/notebooks/HealthcareDemo-Distributed.ipynb | apache-2.0 | !pip install --upgrade streamsx
!pip install --upgrade "git+https://github.com/IBMStreams/streamsx.health.git#egg=healthdemo&subdirectory=samples/HealthcareJupyterDemo/package"
"""
Explanation: Healthcare Python Streaming Application Demo
This application demonstrates how users can develop Python Streaming Applications from a Jupyter Notebook. The Jupyter Notebook ultimately submits two Streams applications to a local Streams cluster. The first application is a pre-compiled SPL application that simulates patient waveform and vital data, as well as publishes the data to a topic. The second application is a Python Topology application written using the Streams Python API. This application subscribes to the topic containing the patient data, performs analysis on the waveforms and sends all of the data, including the results of the analysis, to the Streams view server.
Submitting the Python application from the Notebook allows for connecting to the Streams view server in order to retrieve the data. Once the data has been retrieved, it can be analyzed, manipulated or visualized like any other data accessed from a notebook. In the case of this demo, waveform graphs and numerical widgets are being used to display the healthcare data of the patient.
The following diagram outlines the architecture of the demo.
Cell Description
This cell is responsible for installing python modules required for running this notebook
End of explanation
"""
import getpass
from streamsx.topology.topology import Topology, schema
from streamsx.topology.context import ConfigParams, submit
from streamsx.topology import functions
from healthdemo.streamtool import Streamtool
print ('Submitting to a distributed instance.')
username = input('Username: ')
password = getpass.getpass(prompt='Password: ')
## display Streams Console link
print("Streams Console: ", end='')
streamtool = Streamtool()
streamtool.geturl()
numPatients = 3
## submit patient ingest microservice for 3 patients
streamtool.submitjob('../services/com.ibm.streamsx.health.physionet.PhysionetIngestServiceMulti.sab',
params=['-P', 'num.patients=%d' % (numPatients)])
"""
Explanation: Cell Description
This cell is responsible for building and submitting the Streams applications to the Streams cluster.
PhysionetIngestServiceMulti microservice
This microservice comes in the form of a pre-compiled SAB file. The microservice retrieves patient waveform and vital data from a Physionet database (https://www.physionet.org/). 3 different sets of data are used as source. The patient data is submitted to the ingest-physionet topic, which allows it to be consumed by downstream applications or services.
End of explanation
"""
from healthdemo.patientmonitoring_functions import streaming_rpeak
from healthdemo.healthcare_functions import PatientFilter, GenTimestamp, aggregate
from healthdemo.windows import SlidingWindow
def getPatientView(patient_id):
'''
Select data of given patient_id, perform analysis and return view.
Parameters
----------
patient_id: int
patient_id (1-based)
Returns
-------
view: topology.View
view data from Streams server
'''
topo = Topology('HealthcareDemo_Patient%d' % (patient_id))
## Ingest, preprocess and aggregate patient data
sample_rate = 125
patient_data = topo.subscribe('ingest-physionet', schema.CommonSchema.Json) \
.map(functions.identity) \
.filter(PatientFilter('patient-%d' % (patient_id))) \
.transform(GenTimestamp(sample_rate)) \
.transform(SlidingWindow(length=sample_rate, trigger=sample_rate-1)) \
.transform(aggregate)
## Calculate RPeak and RR delta
patient_data = streaming_rpeak(patient_data, sample_rate, data_label='ECG Lead II')
## Create a view of the data
patient_view = patient_data.view()
submit('DISTRIBUTED', topo, username=username, password=password)
return patient_view
# Retrieve view for a patient
patient_view = getPatientView(2)
print('DONE')
"""
Explanation: Cell Description
This cell is responsible for building and submitting the Streams applications to the Streams cluster.
Healthcare Patient Python Topology Application
This cell contains source code for the Python Topology application. As described in the above architecture, this is a Streaming Python application that ingests the patient data from the ingest-physionet topic, performs filtering and analysis on the data, and then sends the data to the Streams view server.
End of explanation
"""
from healthdemo.medgraphs import ECGGraph, PoincareGraph, NumericText, ABPNumericText
## load BokehJS visualization library (must be loaded in a separate cell)
from bokeh.io import output_notebook, push_notebook
from bokeh.resources import INLINE
output_notebook(resources=INLINE)
%autosave 0
%reload_ext autoreload
%aimport healthdemo.utils
%aimport healthdemo.medgraphs
%autoreload 1
## create the graphs ##
graphs = []
ecg_leadII_graph = ECGGraph(signal_label='ECG Lead II', title='ECG Lead II', plot_width=600, min_range=-0.5, max_range=2.0)
graphs.append(ecg_leadII_graph)
leadII_poincare = PoincareGraph(signal_label='Poincare - ECG Lead II', title='Poincare - ECG Lead II')
graphs.append(leadII_poincare)
ecg_leadV_graph = ECGGraph(signal_label='ECG Lead V', title='ECG Lead V', plot_width=600)
graphs.append(ecg_leadV_graph)
resp_graph = ECGGraph(signal_label='Resp', title='Resp', min_range=-1, max_range=3, plot_width=600)
graphs.append(resp_graph)
pleth_graph = ECGGraph(signal_label='Pleth', title='Pleth', min_range=0, max_range=5, plot_width=600)
graphs.append(pleth_graph)
hr_numeric = NumericText(signal_label='HR', title='HR', color='#7cc7ff')
graphs.append(hr_numeric)
pulse_numeric = NumericText(signal_label='PULSE', title='PULSE', color='#e71d32')
graphs.append(pulse_numeric)
spo2_numeric = NumericText(signal_label='SpO2', title='SpO2', color='#8cd211')
graphs.append(spo2_numeric)
abp_numeric = ABPNumericText(abp_sys_label='ABP Systolic', abp_dia_label='ABP Diastolic', title='ABP', color='#fdd600')
graphs.append(abp_numeric)
## retrieve data from Streams view in a background job ##
def data_collector(view, graphs):
for d in iter(view.get, None):
for g in graphs:
g.add(d)
from IPython.lib import backgroundjobs as bg
jobs = bg.BackgroundJobManager()
jobs.new(data_collector, patient_view.start_data_fetch(), graphs)
"""
Explanation: Cell Description
This cell initializes all of the graphs that will be used, as well as creates the background job that accesses the view data.
The view data is continuously retrieved from the Streams view server in a background job. Each graph object receives a copy of the data. The graph objects extracts and stores the data that is relevant for that particular graph. Each time a call to update() is made on a graph object, the next data point is retrieved and displayed. Each graph object maintains an internal queue so that each time a call to update() is made, the next element in the queue is retrieved and removed.
End of explanation
"""
import time
from bokeh.io import show
from bokeh.layouts import column, row, widgetbox
## display graphs
show(
row(
column(
ecg_leadII_graph.get_figure(),
ecg_leadV_graph.get_figure(),
resp_graph.get_figure(),
pleth_graph.get_figure()
),
column(
leadII_poincare.get_figure(),
widgetbox(hr_numeric.get_figure()),
widgetbox(pulse_numeric.get_figure()),
widgetbox(spo2_numeric.get_figure()),
widgetbox(abp_numeric.get_figure())
)
),
# If using bokeh <= 0.12.2, comment out the following argument
notebook_handle=True
)
cnt = 0
while True:
## update graphs
for g in graphs:
g.update()
## update notebook
cnt += 1
if cnt % 5 == 0:
push_notebook() ## refresh the graphs
cnt = 0
time.sleep(0.008)
"""
Explanation: Cell Description
This cell is responsible for laying out and displaying the graphs. There is an infinite loop that continuously calls the update() method on each of the graphs. After each graph has been updated, a call to push_notebook() is made, which causes the notebook to update the graphics.
End of explanation
"""
|
tensorflow/docs-l10n | site/zh-cn/tutorials/load_data/unicode.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
"""
import tensorflow as tf
"""
Explanation: Unicode strings
<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://tensorflow.google.cn/tutorials/load_data/unicode">
    <img src="https://tensorflow.google.cn/images/tf_logo_32px.png" />
    View on tensorflow.google.cn</a>
  </td>
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/load_data/unicode.ipynb">
    <img src="https://tensorflow.google.cn/images/colab_logo_32px.png" />
    Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/load_data/unicode.ipynb">
    <img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" />
    View source on GitHub</a>
  </td>
  <td>
    <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/load_data/unicode.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png" />Download notebook</a>
  </td>
</table>
Introduction
Models that process natural language often handle different languages with different character sets. Unicode is a standard encoding system that is used to represent characters from almost all languages. Every character is encoded using a unique integer code point between <code>0</code> and 0x10FFFF. A Unicode string is a sequence of zero or more code points.
This tutorial shows how to represent Unicode strings in TensorFlow and manipulate them using Unicode equivalents of standard string ops. It separates Unicode strings into tokens based on script detection.
End of explanation
"""
tf.constant(u"Thanks 😊")
"""
Explanation: The tf.string data type
The basic TensorFlow tf.string dtype allows you to build tensors of byte strings. Unicode strings are UTF-8 encoded by default.
End of explanation
"""
tf.constant([u"You're", u"welcome!"]).shape
"""
Explanation: A tf.string tensor can hold byte strings of varying lengths, because the byte strings are treated as atomic units. The string length is not included in the tensor dimensions.
End of explanation
"""
# Unicode string, represented as a UTF-8 encoded string scalar.
text_utf8 = tf.constant(u"语言处理")
text_utf8
# Unicode string, represented as a UTF-16-BE encoded string scalar.
text_utf16be = tf.constant(u"语言处理".encode("UTF-16-BE"))
text_utf16be
# Unicode string, represented as a vector of Unicode code points.
text_chars = tf.constant([ord(char) for char in u"语言处理"])
text_chars
"""
Explanation: Note: When using Python to construct strings, the handling of Unicode differs between v2 and v3. In v2, Unicode strings are indicated by the "u" prefix, as shown above. In v3, strings are Unicode-encoded by default.
Representing Unicode
There are two standard ways to represent a Unicode string in TensorFlow:
string scalar - where the sequence of code points is encoded using a known character encoding.
int32 vector - where each position contains a single code point.
For example, the following three values all represent the Unicode string "语言处理" ("language processing" in Chinese):
End of explanation
"""
tf.strings.unicode_decode(text_utf8,
input_encoding='UTF-8')
tf.strings.unicode_encode(text_chars,
output_encoding='UTF-8')
tf.strings.unicode_transcode(text_utf8,
input_encoding='UTF8',
output_encoding='UTF-16-BE')
"""
Explanation: Converting between representations
TensorFlow provides operations to convert between these different representations:
tf.strings.unicode_decode: Converts an encoded string scalar to a vector of code points.
tf.strings.unicode_encode: Converts a vector of code points to an encoded string scalar.
tf.strings.unicode_transcode: Converts an encoded string scalar to a different encoding.
End of explanation
"""
# A batch of Unicode strings, each represented as a UTF8-encoded string.
batch_utf8 = [s.encode('UTF-8') for s in
[u'hÃllo', u'What is the weather tomorrow', u'Göödnight', u'😊']]
batch_chars_ragged = tf.strings.unicode_decode(batch_utf8,
input_encoding='UTF-8')
for sentence_chars in batch_chars_ragged.to_list():
print(sentence_chars)
"""
Explanation: Batch dimensions
When decoding multiple strings, the number of characters in each string may not be equal. The return result is a tf.RaggedTensor, where the length of the innermost dimension varies depending on the number of characters in each string:
End of explanation
"""
batch_chars_padded = batch_chars_ragged.to_tensor(default_value=-1)
print(batch_chars_padded.numpy())
batch_chars_sparse = batch_chars_ragged.to_sparse()
"""
Explanation: You can use this tf.RaggedTensor directly, or convert it to a dense tf.Tensor with padding or a tf.SparseTensor using the methods tf.RaggedTensor.to_tensor and tf.RaggedTensor.to_sparse.
End of explanation
"""
tf.strings.unicode_encode([[99, 97, 116], [100, 111, 103], [ 99, 111, 119]],
output_encoding='UTF-8')
"""
Explanation: When encoding multiple strings with the same lengths, a tf.Tensor may be used as input:
End of explanation
"""
tf.strings.unicode_encode(batch_chars_ragged, output_encoding='UTF-8')
"""
Explanation: When encoding multiple strings with varying lengths, a tf.RaggedTensor should be used as input:
End of explanation
"""
tf.strings.unicode_encode(
tf.RaggedTensor.from_sparse(batch_chars_sparse),
output_encoding='UTF-8')
tf.strings.unicode_encode(
tf.RaggedTensor.from_tensor(batch_chars_padded, padding=-1),
output_encoding='UTF-8')
"""
Explanation: If you have a tensor with multiple strings in padded or sparse format, convert it to a tf.RaggedTensor before calling unicode_encode:
End of explanation
"""
# Note that the final character takes up 4 bytes in UTF8.
thanks = u'Thanks 😊'.encode('UTF-8')
num_bytes = tf.strings.length(thanks).numpy()
num_chars = tf.strings.length(thanks, unit='UTF8_CHAR').numpy()
print('{} bytes; {} UTF-8 characters'.format(num_bytes, num_chars))
"""
Explanation: Unicode operations
Character length
The tf.strings.length operation has a parameter unit, which indicates how lengths should be computed. unit defaults to "BYTE", but it can be set to other values, such as "UTF8_CHAR" or "UTF16_CHAR", to determine the number of Unicode code points in each encoded string.
End of explanation
"""
# default: unit='BYTE'. With len=1, we return a single byte.
tf.strings.substr(thanks, pos=7, len=1).numpy()
# Specifying unit='UTF8_CHAR', we return a single character, which in this case
# is 4 bytes.
print(tf.strings.substr(thanks, pos=7, len=1, unit='UTF8_CHAR').numpy())
"""
Explanation: Character substrings
Similarly, the tf.strings.substr operation accepts the "unit" parameter, and uses it to determine what kind of offsets the "pos" and "len" parameters contain.
End of explanation
"""
tf.strings.unicode_split(thanks, 'UTF-8').numpy()
"""
Explanation: Splitting Unicode strings
The tf.strings.unicode_split operation splits Unicode strings into substrings of individual characters:
End of explanation
"""
codepoints, offsets = tf.strings.unicode_decode_with_offsets(u"🎈🎉🎊", 'UTF-8')
for (codepoint, offset) in zip(codepoints.numpy(), offsets.numpy()):
print("At byte offset {}: codepoint {}".format(offset, codepoint))
"""
Explanation: Byte offsets for characters
To align the character tensor generated by tf.strings.unicode_decode with the original string, it's useful to know the offset at which each character begins. The method tf.strings.unicode_decode_with_offsets is similar to unicode_decode, except that it returns a second tensor containing the start offset of each character.
End of explanation
"""
uscript = tf.strings.unicode_script([33464, 1041]) # ['芸', 'Б']
print(uscript.numpy()) # [17, 8] == [USCRIPT_HAN, USCRIPT_CYRILLIC]
"""
Explanation: Unicode scripts
Each Unicode code point belongs to a single collection of code points known as a script. A character's script is helpful in determining which language the character might be in. For example, knowing that 'Б' is in Cyrillic script indicates that modern text containing that character is likely from a Slavic language such as Russian or Ukrainian.
TensorFlow provides the tf.strings.unicode_script operation to determine which script a given code point uses. The script codes are int32 values corresponding to International Components for Unicode (ICU) UScriptCode values.
End of explanation
"""
print(tf.strings.unicode_script(batch_chars_ragged))
"""
Explanation: The tf.strings.unicode_script operation can also be applied to multidimensional tf.Tensors or tf.RaggedTensors of code points:
End of explanation
"""
# dtype: string; shape: [num_sentences]
#
# The sentences to process. Edit this line to try out different inputs!
sentence_texts = [u'Hello, world.', u'世界こんにちは']
"""
Explanation: Example: Simple segmentation
Segmentation is the task of splitting text into word-like units. This is often easy when space characters are used to separate words, but some languages (like Chinese and Japanese) do not use spaces, and some languages (like German) contain long compounds that must be split in order to analyze their meaning. In web text, different languages and scripts are frequently mixed together, as in "NY株価" (New York Stock Exchange).
We can perform very rough segmentation (without implementing any ML models) by using changes in script to approximate word boundaries. This will work for strings like the "NY株価" example above. It will also work for most languages that use spaces, as the space characters of various scripts are all classified as USCRIPT_COMMON, a special script code that differs from that of any actual text.
End of explanation
"""
# dtype: int32; shape: [num_sentences, (num_chars_per_sentence)]
#
# sentence_char_codepoint[i, j] is the codepoint for the j'th character in
# the i'th sentence.
sentence_char_codepoint = tf.strings.unicode_decode(sentence_texts, 'UTF-8')
print(sentence_char_codepoint)
# dtype: int32; shape: [num_sentences, (num_chars_per_sentence)]
#
# sentence_char_scripts[i, j] is the unicode script of the j'th character in
# the i'th sentence.
sentence_char_script = tf.strings.unicode_script(sentence_char_codepoint)
print(sentence_char_script)
"""
Explanation: First, we decode the sentences into character code points, and find the script identifier for each character.
End of explanation
"""
# dtype: bool; shape: [num_sentences, (num_chars_per_sentence)]
#
# sentence_char_starts_word[i, j] is True if the j'th character in the i'th
# sentence is the start of a word.
sentence_char_starts_word = tf.concat(
[tf.fill([sentence_char_script.nrows(), 1], True),
tf.not_equal(sentence_char_script[:, 1:], sentence_char_script[:, :-1])],
axis=1)
# dtype: int64; shape: [num_words]
#
# word_starts[i] is the index of the character that starts the i'th word (in
# the flattened list of characters from all sentences).
word_starts = tf.squeeze(tf.where(sentence_char_starts_word.values), axis=1)
print(word_starts)
"""
Explanation: Next, we use those script identifiers to determine where word boundaries should be added. We add a word boundary at the beginning of each sentence, and for each character whose script differs from the previous character.
End of explanation
"""
# dtype: int32; shape: [num_words, (num_chars_per_word)]
#
# word_char_codepoint[i, j] is the codepoint for the j'th character in the
# i'th word.
word_char_codepoint = tf.RaggedTensor.from_row_starts(
values=sentence_char_codepoint.values,
row_starts=word_starts)
print(word_char_codepoint)
"""
Explanation: We can then use those starting offsets to build a RaggedTensor containing the list of words from all batches:
End of explanation
"""
# dtype: int64; shape: [num_sentences]
#
# sentence_num_words[i] is the number of words in the i'th sentence.
sentence_num_words = tf.reduce_sum(
tf.cast(sentence_char_starts_word, tf.int64),
axis=1)
# dtype: int32; shape: [num_sentences, (num_words_per_sentence), (num_chars_per_word)]
#
# sentence_word_char_codepoint[i, j, k] is the codepoint for the k'th character
# in the j'th word in the i'th sentence.
sentence_word_char_codepoint = tf.RaggedTensor.from_row_lengths(
values=word_char_codepoint,
row_lengths=sentence_num_words)
print(sentence_word_char_codepoint)
"""
Explanation: Finally, we can segment the word code-point RaggedTensor back into sentences:
End of explanation
"""
tf.strings.unicode_encode(sentence_word_char_codepoint, 'UTF-8').to_list()
"""
Explanation: To make the final result easier to read, we can encode it back into UTF-8 strings:
End of explanation
"""
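The script-change heuristic used above can also be sketched without TensorFlow, using only the standard library. The name-prefix trick below (the first word of unicodedata.name) is a rough stand-in for a real Unicode script lookup — an assumption for illustration, not what tf.strings.unicode_script does internally:

```python
import unicodedata

def rough_script(ch):
    # First word of the Unicode character name, e.g. 'LATIN', 'CJK', 'SPACE'.
    # This is only a rough approximation of a true script lookup.
    return unicodedata.name(ch, 'UNKNOWN').split()[0]

def rough_tokenize(text):
    # Start a new token whenever the rough "script" changes.
    tokens = []
    for ch in text:
        if tokens and rough_script(ch) == rough_script(tokens[-1][-1]):
            tokens[-1] += ch
        else:
            tokens.append(ch)
    return tokens

print(rough_tokenize(u'Hello世界'))  # ['Hello', '世界']
```

As with the TensorFlow version, punctuation and spaces start their own tokens, which is often what you want for this kind of coarse segmentation.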
|
datosgobar/pydatajson | samples/caso-uso-1-pydatajson-xlsx-justicia-valido.ipynb | mit | import arrow
import os, sys
sys.path.insert(0, os.path.abspath(".."))
from pydatajson import DataJson  # library and class
from pydatajson.readers import read_catalog  # reads the catalog (JSON or XLSX, local or URL, or a dict) and turns it into a Python dictionary
from pydatajson.writers import write_json_catalog
"""
Explanation: Use case 1 - Validation, transformation and harvesting with the Ministry of Justice catalog
Case 1: valid catalog
This test runs the full validation, transformation and harvesting process starting from an xlsx file that contains the metadata of the Ministry of Justice catalog.
Note: This is a known catalog, valid in both its structure and its metadata. File used: catalogo-justicia.xlsx
Setup
Importing methods and classes
End of explanation
"""
# fill in as appropriate
ORGANISMO = 'justicia'
catalogo_xlsx = os.path.join("archivos-tests", "excel-validos", "catalogo-justicia.xlsx")
# DO NOT MODIFY
# Create the required directory structure if it does not exist
if not os.path.isdir("archivos-generados"):
os.mkdir("archivos-generados")
for directorio in ["jsons", "reportes", "configuracion"]:
path = os.path.join("archivos-generados", directorio)
if not os.path.isdir(path):
os.mkdir(path)
# Declare some variables of interest
HOY = arrow.now().format('YYYY-MM-DD-HH_mm')
catalogo_a_json = os.path.join("archivos-generados","jsons","catalogo-{}-{}.json".format(ORGANISMO, HOY))
reporte_datasets = os.path.join("archivos-generados", "reportes", "reporte-catalogo-{}-{}.xlsx".format(ORGANISMO, HOY))
archivo_config_sin_reporte = os.path.join("archivos-generados", "configuracion", "archivo-config_-{}-{}-sinr.csv".format(ORGANISMO, HOY))
archivo_config_con_reporte = os.path.join("archivos-generados", "configuracion", "archivo-config-{}-{}-conr.csv".format(ORGANISMO, HOY))
"""
Explanation: Declaring variables and paths
End of explanation
"""
catalogo = read_catalog(catalogo_xlsx)
# To work with a remote file instead:
#catalogo = read_catalog("https://raw.githubusercontent.com/datosgobar/pydatajson/master/tests/samples/catalogo_justicia.json")
"""
Explanation: Validating the xlsx file and transforming it to JSON
Validating the xlsx catalog
End of explanation
"""
write_json_catalog(catalogo, catalogo_a_json)
## write_json_catalog(catalog, target_file) writes a dict to a JSON file
"""
Explanation: Transforming the catalog from xlsx to JSON
End of explanation
"""
dj = DataJson()
"""
Explanation: Validating the JSON catalog and harvesting
Validating the JSON catalog
Instantiating the DataJson class
End of explanation
"""
dj.is_valid_catalog(catalogo)
"""
Explanation: True/False validation of the JSON catalog
End of explanation
"""
dj.validate_catalog(catalogo)
"""
Explanation: Detailed validation of the JSON catalog
End of explanation
"""
dj.generate_datasets_report(catalogo, harvest='valid',export_path=reporte_datasets)
# process the report, 0s and 1s
"""
Explanation: Harvesting
Generating the dataset report file
End of explanation
"""
# using the report
dj.generate_harvester_config(harvest='report', report=reporte_datasets, export_path=archivo_config_con_reporte)
# without using the report
dj.generate_harvester_config(catalogs=catalogo, harvest='valid', export_path=archivo_config_sin_reporte)
#(catalogs=None, harvest=u'valid', report=None, export_path=None)
"""
Explanation: Generating the configuration file for the harvester
End of explanation
"""
|
nd1/women_in_tech_summit_DC2017 | workshop_notebooks/workshop_api_notebook.ipynb | mit | import json
import urllib.request
data = json.loads(urllib.request.urlopen('http://www.omdbapi.com/?t=Game%20of%20Thrones&Season=1').read().\
decode('utf8'))
"""
Explanation: APIs
Let's start by looking at OMDb API.
The OMDb API is a free web service to obtain movie information, all content and images on the site are contributed and maintained by our users.
The Python package urllib can be used to fetch resources from the internet.
OMDb tells us what kinds of requests we can make. We are going to do a title search. As you can see below, we have an additional parameter "&Season=1" which does not appear in the parameter tables. If you read through the change log, you will see it documented there.
Using the urllib and json packages allows us to call an API and store the results locally.
End of explanation
"""
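Incidentally, the %20 sequences in the URL above are just percent-encoded spaces; when building such URLs yourself, the standard library can do the encoding (the title string here is only an illustration):

```python
from urllib.parse import quote

title = 'Game of Thrones'
print(quote(title))  # Game%20of%20Thrones
```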
print(type(data))
"""
Explanation: What should we expect the type to be for the variable data?
End of explanation
"""
data
"""
Explanation: What do you think the data will look like?
End of explanation
"""
for episode in data['Episodes']:
print(episode['Title'], episode['imdbRating'])
"""
Explanation: We now have a dictionary object of our data. We can use python to manipulate it in a variety of ways. For example, we can print all the titles of the episodes.
End of explanation
"""
import pandas as pd
df = pd.DataFrame.from_dict(data['Episodes'])
df
"""
Explanation: We can use pandas to convert the episode information to a dataframe.
End of explanation
"""
with open('omdb_api_data.json', 'w') as f:
json.dump(data, f)
"""
Explanation: And, we can save our data locally to use later.
End of explanation
"""
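Loading it back in later is the mirror operation — a small self-contained sketch using a throwaway file and hypothetical sample data rather than the real API response:

```python
import json
import os
import tempfile

sample = {'Title': 'Game of Thrones', 'Season': '1'}

# Round-trip the dictionary through a JSON file on disk.
path = os.path.join(tempfile.gettempdir(), 'omdb_roundtrip_demo.json')
with open(path, 'w') as f:
    json.dump(sample, f)
with open(path) as f:
    reloaded = json.load(f)

print(reloaded == sample)  # True
```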
# execute this on OS X or Linux
! curl -v -XPOST http://api.dp.la/v2/api_key/nicole@nicoledonnelly.me
"""
Explanation: Let's try an API that requires an API key!
"The Digital Public Library of America brings together the riches of America’s libraries, archives, and museums, and makes them freely available to the world. It strives to contain the full breadth of human expression, from the written word, to works of art and culture, to records of America’s heritage, to the efforts and data of science."
And, they have an API.
In order to use the API, you need to request a key. You can do this with an HTTP POST request.
If you are using OS X or Linux, replace "YOUR_EMAIL@example.com" in the cell below with your email address and execute the cell. This will send the request to DPLA and they will email your API key to the email address you provided. To successfully query the API, you must include the ?api_key= parameter with the 32-character hash following.
End of explanation
"""
#execute this on Windows
Invoke-WebRequest -Uri ("http://api.dp.la/v2/api_key/YOUR_EMAIL@example.com") -Method POST -Verbose -usebasicparsing
"""
Explanation: If you are on Windows 7 or 10, open PowerShell. Replace "YOUR_EMAIL@example.com" in the cell below with your email address. Copy the code and paste it at the command prompt in PowerShell. This will send the request to DPLA and they will email your API key to the email address you provided. To successfully query the API, you must include the ?api_key= parameter with the 32-character hash following.
End of explanation
"""
with open("../api/dpla_config_secret.json") as key_file:
key = json.load(key_file)
key
"""
Explanation: You will get a response similar to what is shown below and will receive an email fairly quickly from DPLA with your key.
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
* Trying 52.2.169.251...
* Connected to api.dp.la (52.2.169.251) port 80 (#0)
> POST /v2/api_key/YOUR_EMAIL@example.com HTTP/1.1
> Host: api.dp.la
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 201 Created
< Access-Control-Allow-Origin: *
< Cache-Control: max-age=0, private, must-revalidate
< Content-Type: application/json; charset=utf-8
< Date: Thu, 20 Oct 2016 20:53:24 GMT
< ETag: "8b66d9fe7ded79e3151d5a22f0580d99"
< Server: nginx/1.1.19
< Status: 201 Created
< X-Request-Id: d61618751a376452ac3540b3157dcf48
< X-Runtime: 0.179920
< X-UA-Compatible: IE=Edge,chrome=1
< Content-Length: 89
< Connection: keep-alive
<
* Connection #0 to host api.dp.la left intact
{"message":"API key created and sent via email. Be sure to check your Spam folder, too."}
It is good practice not to put your keys in your code. You should store them in a file and read them in from there. If you are pushing your code to GitHub, make sure you put your key files in .gitignore.
I created a file on my drive called "dpla_config_secret.json". The contents of the file look like this:
{
"api_key" : "my api key here"
}
I can then write code to read the information in.
End of explanation
"""
import requests
# we are specifying our url and parameters here as variables
url = 'http://api.dp.la/v2/items/'
params = {'api_key' : key['api_key'], 'q' : 'goats+AND+cats'}
# we are creating a response object, r
r = requests.get(url, params=params)
type(r)
# we can look at the url that was created by requests with our specified variables
r.url
# we can check the status code of our request
r.status_code
"""
Explanation: Then, when I create my API query, I can use a variable in place of my actual key.
The Requests library allows us to build urls with different parameters. You build the parameters as a dictionary that contains key/value pairs for everything after the '?' in your url.
End of explanation
"""
# we can look at the content of our request
print(r.content)
"""
Explanation: HTTP Status Codes
End of explanation
"""
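The status code seen above is one of a small set of standard HTTP codes; the standard library's http.HTTPStatus enum lets you look up their meanings without any network access — a quick aside:

```python
from http import HTTPStatus

# A successful request, like the one above, returns 200.
print(int(HTTPStatus.OK), HTTPStatus.OK.phrase)                # 200 OK
print(int(HTTPStatus.NOT_FOUND), HTTPStatus.NOT_FOUND.phrase)  # 404 Not Found

# Enum members compare equal to plain integers such as r.status_code.
print(HTTPStatus.OK == 200)  # True
```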
params = {'api_key' : key['api_key'], 'q' : 'goats+AND+cats', 'page_size': 500}
r = requests.get(url, params=params)
print(r.content)
"""
Explanation: By default, DPLA returns 10 items at a time. We can see from the count value that our query has 29 results. DPLA does give us a parameter we can set to change this and get up to 500 items at a time.
End of explanation
"""
|
evanmiltenburg/python-for-text-analysis | Chapters/Chapter 07 - Sets.ipynb | apache-2.0 | a_set = {1, 2, 3}
a_set
empty_set = set() # you have to use set() to create an empty set! (we will see why later)
print(empty_set)
"""
Explanation: Chapter 7 - Sets
This chapter will introduce a different kind of container: sets. Sets are unordered collections with no duplicate entries. You might wonder why we need different types of containers. We will postpone that discussion until chapter 8.
At the end of this chapter, you will be able to:
* create a set
* add items to a set
* extract/inspect items in a set
If you want to learn more about these topics, you might find the following links useful:
* Python documentation
* A tutorial on sets
If you have questions about this chapter, please contact us (cltl.python.course@gmail.com).
1. How to create a set
It's quite simple to create a set.
End of explanation
"""
a_set = {1, 2, 1, 1}
print(a_set)
"""
Explanation: Curly brackets surround sets, and commas separate the elements in the set
A set can be empty (use set() to create it)
Sets do not allow duplicates
Sets are unordered (the order in which you add items is not important)
A set can only contain immutable objects (for now that means only strings and integers can be added)
A set cannot contain mutable objects, hence no lists or sets
Please note that sets do not allow duplicates. In the example below, the integer 1 will only be present once in the set.
End of explanation
"""
a_set = {1, 3, 2}
print(a_set)
"""
Explanation: Please note that sets are unordered. This means that when you print a set, it may look different from how you created it.
End of explanation
"""
{1, 2, 3} == {2, 3, 1}
"""
Explanation: This also means that you can check if two sets are the same even if you don't know the order in which items were put in:
End of explanation
"""
a_set = {1, 'a'}
print(a_set)
"""
Explanation: Please note that sets can only contain immutable objects. Hence the following examples will work, since we are adding immutable objects
End of explanation
"""
a_set = {1, []}
"""
Explanation: But the following example will result in an error, since we are trying to create a set with a mutable object
End of explanation
"""
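A related detail worth knowing (a short aside beyond the examples above): Python's frozenset is an immutable set, so unlike a list or a regular set it can be stored inside another set.

```python
inner = frozenset([1, 2])
outer = {inner, 3}

print(outer)
print(frozenset([1, 2]) in outer)  # True
```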
a_set = set()
a_set.add(1)
print(a_set)
a_set = set()
a_set = a_set.add(1)
print(a_set)
"""
Explanation: 2. How to add items to a set
The most common way of adding an item to a set is by using the add method. The add method has one positional parameter, namely what you are going to add to the set, and it returns None.
End of explanation
"""
dir(set)
"""
Explanation: 3. How to extract/inspect items in a set
When you use sets, you usually want to compare the elements of different sets, for instance, to determine how much overlap there is or how many of the items in set1 are not members of set2. Sets can be used to carry out mathematical set operations like union, intersection, difference, and symmetric difference. Please take a look at this website if you prefer a more visual and more complete explanation.
You can ask Python to show you all the set methods by using dir. All the methods that do not start with '__' are relevant for you.
End of explanation
"""
help(set.union)
"""
Explanation: You observe that there are many methods defined for sets! Here we explain the two most common methods. We start with the union method.
End of explanation
"""
set1 = {1, 2, 3, 4, 5}
set2 = {4, 5, 6, 7, 8}
the_union = set1.union(set2)
print(the_union)
set1 = {1, 2, 3, 4, 5}
set2 = {4, 5, 6, 7, 8}
set3 = {5, 6, 7, 8, 9}
the_union = set1.union(set2, set3)
print(the_union)
"""
Explanation: Python shows dots (...) for the parameters of the union method. Based on the docstring, we learn that we can provide any number of sets, and Python will return the union of them.
End of explanation
"""
help(set.intersection)
set1 = {1, 2, 3, 4, 5}
set2 = {4, 5, 6, 7, 8}
the_intersection = set1.intersection(set2)
print(the_intersection)
set1 = {1, 2, 3, 4, 5}
set2 = {4, 5, 6, 7, 8}
set3 = {5, 8, 9, 10}
the_intersection = set1.intersection(set2, set3)
print(the_intersection)
"""
Explanation: The intersection method works in a similar manner to the union method, but returns a new set containing only the intersection of the sets.
End of explanation
"""
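Two of the other operations mentioned earlier — difference and symmetric difference — work the same way as union and intersection:

```python
set1 = {1, 2, 3, 4, 5}
set2 = {4, 5, 6, 7, 8}

# Items in set1 but not in set2:
print(sorted(set1.difference(set2)))            # [1, 2, 3]

# Items in exactly one of the two sets:
print(sorted(set1.symmetric_difference(set2)))  # [1, 2, 3, 6, 7, 8]
```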
a_set = set()
a_set.add(1)
a_set.add(2)
a_set[0]
"""
Explanation: Since sets are unordered, you cannot use an index to extract an element from a set.
End of explanation
"""
nums = {3, 41, 12, 9, 74, 15}
print(len(nums)) # number of items in a set
print(max(nums)) # highest value in a set
print(min(nums)) # lowest value in a set
print(sum(nums)) # sum of all values in a set
"""
Explanation: 4. Using built-in functions on sets
The same range of functions that operate on lists also work with sets. We can easily get some simple calculations done with these functions:
End of explanation
"""
set_a = {1, 2, 3}
set_b = {4, 5, 6}
an_element = 4
print(set_a)
#do some operations
set_a.add(an_element) # Add an_element to set_a
print(set_a)
set_a.update(set_b) # Add the elements of set_b to set_a
print(set_a)
set_a.pop() # Remove and return an arbitrary set element. How does this compare to the list method pop?
print(set_a)
set_a.remove(an_element) # Remove an_element from set_a
print(set_a)
"""
Explanation: 5. An overview of set operations
There are many more operations which we can perform on sets. Here is an overview of some of them.
In order to get used to them, please call the help function on each of them (e.g., help(set.union)). This will give you the information about the positional parameters, keyword parameters, and what is returned by the method.
End of explanation
"""
dir(set)
"""
Explanation: Before diving into some exercises, you may want to call the dir built-in function again to see an overview of all set methods:
End of explanation
"""
set_1 = {'just', 'some', 'words'}
set_2 = {'some', 'other', 'words'}
# your code here
"""
Explanation: Exercises
Exercise 1:
Please create an empty set and use the add method to add four items to it: 'a', 'set', 'is', 'born'
Exercise 2:
Please use a built-in method to count how many items your set has
Exercise 3:
How would you remove one item from the set?
Exercise 4:
Please check which items are in both sets:
End of explanation
"""
|
bourneli/deep-learning-notes | DAT236x Deep Learning Explained/.ipynb_checkpoints/Lab1_MNIST_DataLoader-checkpoint.ipynb | mit | # Import the relevant modules to be used later
from __future__ import print_function
import gzip
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
import os
import shutil
import struct
import sys
try:
from urllib.request import urlretrieve
except ImportError:
from urllib import urlretrieve
# Config matplotlib for inline plotting
%matplotlib inline
"""
Explanation: Lab 1: MNIST Data Loader
This notebook is the first lab of the "Deep Learning Explained" course. It is derived from the tutorial numbered CNTK_103A in the CNTK repository. This notebook is used to download and pre-process the MNIST digit images to be used for building different models to recognize handwritten digits.
Note: This notebook must be run to completion before the other course notebooks can be run.
End of explanation
"""
# Functions to load MNIST images and unpack into train and test set.
# - loadData reads image data and formats into a 28x28 long array
# - loadLabels reads the corresponding labels data, 1 for each image
# - load packs the downloaded image and labels data into a combined format to be read later by
# CNTK text reader
def loadData(src, cimg):
print ('Downloading ' + src)
gzfname, h = urlretrieve(src, './delete.me')
print ('Done.')
try:
with gzip.open(gzfname) as gz:
n = struct.unpack('I', gz.read(4))
# Read magic number.
if n[0] != 0x3080000:
raise Exception('Invalid file: unexpected magic number.')
# Read number of entries.
n = struct.unpack('>I', gz.read(4))[0]
if n != cimg:
raise Exception('Invalid file: expected {0} entries.'.format(cimg))
crow = struct.unpack('>I', gz.read(4))[0]
ccol = struct.unpack('>I', gz.read(4))[0]
if crow != 28 or ccol != 28:
raise Exception('Invalid file: expected 28 rows/cols per image.')
# Read data.
            res = np.frombuffer(gz.read(cimg * crow * ccol), dtype = np.uint8)
finally:
os.remove(gzfname)
return res.reshape((cimg, crow * ccol))
def loadLabels(src, cimg):
print ('Downloading ' + src)
gzfname, h = urlretrieve(src, './delete.me')
print ('Done.')
try:
with gzip.open(gzfname) as gz:
n = struct.unpack('I', gz.read(4))
# Read magic number.
if n[0] != 0x1080000:
raise Exception('Invalid file: unexpected magic number.')
# Read number of entries.
n = struct.unpack('>I', gz.read(4))
if n[0] != cimg:
raise Exception('Invalid file: expected {0} rows.'.format(cimg))
# Read labels.
            res = np.frombuffer(gz.read(cimg), dtype = np.uint8)
finally:
os.remove(gzfname)
return res.reshape((cimg, 1))
def try_download(dataSrc, labelsSrc, cimg):
data = loadData(dataSrc, cimg)
labels = loadLabels(labelsSrc, cimg)
return np.hstack((data, labels))
"""
Explanation: Data download
We will download the data onto the local machine. The MNIST database is a standard set of handwritten digits that has been widely used for training and testing of machine learning algorithms. It has a training set of 60,000 images and a test set of 10,000 images with each image being 28 x 28 grayscale pixels. This set is easy to use, visualize, and train on with any computer.
End of explanation
"""
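The '>I' format strings in loadData and loadLabels read big-endian 32-bit unsigned integers, which is how the MNIST headers are stored; a quick illustration of why the byte order matters:

```python
import struct

# The image-file magic number 2051 (0x00000803) stored big-endian:
header = b'\x00\x00\x08\x03'
print(struct.unpack('>I', header)[0])  # 2051

# Reading the same bytes with the opposite byte order gives a scrambled value,
# which is why the magic-number checks must match how the bytes are read.
print(struct.unpack('<I', header)[0])  # 50855936
```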
# URLs for the train image and labels data
url_train_image = 'http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz'
url_train_labels = 'http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz'
num_train_samples = 60000
print("Downloading train data")
train = try_download(url_train_image, url_train_labels, num_train_samples)
url_test_image = 'http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz'
url_test_labels = 'http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz'
num_test_samples = 10000
print("Downloading test data")
test = try_download(url_test_image, url_test_labels, num_test_samples)
"""
Explanation: Download the data
In the following code, we use the functions defined above to download and unzip the MNIST data into memory. The training set has 60000 images while the test set has 10000 images.
End of explanation
"""
# Plot a random image
sample_number = 5001
plt.imshow(train[sample_number,:-1].reshape(28,28), cmap="gray_r")
plt.axis('off')
print("Image Label: ", train[sample_number,-1])
"""
Explanation: Visualize the data
Here, we use matplotlib to display one of the training images and its associated label.
End of explanation
"""
# Save the data files into a format compatible with CNTK text reader
def savetxt(filename, ndarray):
dir = os.path.dirname(filename)
if not os.path.exists(dir):
os.makedirs(dir)
if not os.path.isfile(filename):
print("Saving", filename )
with open(filename, 'w') as f:
labels = list(map(' '.join, np.eye(10, dtype=np.uint).astype(str)))
for row in ndarray:
row_str = row.astype(str)
label_str = labels[row[-1]]
feature_str = ' '.join(row_str[:-1])
f.write('|labels {} |features {}\n'.format(label_str, feature_str))
else:
print("File already exists", filename)
# Save the train and test files (prefer our default path for the data)
data_dir = os.path.join("..", "Examples", "Image", "DataSets", "MNIST")
if not os.path.exists(data_dir):
data_dir = os.path.join("data", "MNIST")
print ('Writing train text file...')
savetxt(os.path.join(data_dir, "Train-28x28_cntk_text.txt"), train)
print ('Writing test text file...')
savetxt(os.path.join(data_dir, "Test-28x28_cntk_text.txt"), test)
print('Done')
"""
Explanation: Save the images
Save the images in a local directory. While saving the data we flatten the images to a vector (28x28 image pixels becomes an array of length 784 data points).
The labels are encoded using 1-hot encoding (a label of 3 with 10 digits becomes 0001000000, where the first index corresponds to digit 0 and the last one corresponds to digit 9).
End of explanation
"""
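The 1-hot rows written by savetxt come from np.eye; the same encoding can be sketched in plain Python:

```python
def one_hot(label, num_classes=10):
    # A 1 in the position of the label, 0 everywhere else.
    return [1 if i == label else 0 for i in range(num_classes)]

print(''.join(str(bit) for bit in one_hot(3)))  # 0001000000
```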
|
jni/aspp2015 | Introduction to Cython.ipynb | bsd-3-clause | def f(x):
y = x**4 - 3*x
return y
def integrate_f(a, b, n):
dx = (b - a) / n
dx2 = dx / 2
s = f(a) * dx2
for i in range(1, n):
s += f(a + i * dx) * dx
s += f(b) * dx2
return s
"""
Explanation: Intro to Cython
Why Cython
Outline:
Speed up Python code
Interact with NumPy arrays
Release GIL and get parallel performance
Wrap C/C++ code
Part 1: speed up your Python code
We want to integrate the function $f(x) = x^4 - 3x$.
End of explanation
"""
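For reference, integrate_f implements the composite trapezoidal rule — the dx2 variable is the halved weight given to the two endpoint terms:

```latex
\int_a^b f(x)\,dx \;\approx\; \Delta x \left( \frac{f(a)}{2} + \sum_{i=1}^{n-1} f(a + i\,\Delta x) + \frac{f(b)}{2} \right),
\qquad \Delta x = \frac{b-a}{n}
```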
%timeit integrate_f(-100, 100, int(1e5))
"""
Explanation: Now, let's time this:
End of explanation
"""
%load_ext cython
%%cython
def f2(x):
y = x**4 - 3*x
return y
def integrate_f2(a, b, n):
dx = (b - a) / n
dx2 = dx / 2
s = f2(a) * dx2
for i in range(1, n):
s += f2(a + i * dx) * dx
s += f2(b) * dx2
return s
%timeit integrate_f2(-100, 100, int(1e5))
"""
Explanation: Not too bad, but this can add up. Let's see if Cython can do better:
End of explanation
"""
%%cython
def f3(double x):
y = x**4 - 3*x
return y
def integrate_f3(double a, double b, int n):
dx = (b - a) / n
dx2 = dx / 2
s = f3(a) * dx2
for i in range(1, n):
s += f3(a + i * dx) * dx
s += f3(b) * dx2
return s
%timeit integrate_f3(-100, 100, int(1e5))
"""
Explanation: That's a little bit faster, which is nice since all we did was to call Cython on the exact same code. But can we do better?
End of explanation
"""
%%cython
def f4(double x):
y = x**4 - 3*x
return y
def integrate_f4(double a, double b, int n):
cdef:
double dx = (b - a) / n
double dx2 = dx / 2
double s = f4(a) * dx2
int i = 0
for i in range(1, n):
s += f4(a + i * dx) * dx
s += f4(b) * dx2
return s
%timeit integrate_f4(-100, 100, int(1e5))
"""
Explanation: The final bit of "easy" Cython optimization is "declaring" the variables inside the function:
End of explanation
"""
%%cython -a
def f4(double x):
y = x**4 - 3*x
return y
def integrate_f4(double a, double b, int n):
cdef:
double dx = (b - a) / n
double dx2 = dx / 2
double s = f4(a) * dx2
int i = 0
for i in range(1, n):
s += f4(a + i * dx) * dx
s += f4(b) * dx2
return s
"""
Explanation: 4X speedup with so little effort is pretty nice. What else can we do?
Cython has a nice "-a" flag (for annotation) that can provide clues about why your code is slow.
End of explanation
"""
import numpy as np
def mean3filter(arr):
arr_out = np.empty_like(arr)
for i in range(1, arr.shape[0] - 1):
        arr_out[i] = np.sum(arr[i-1 : i+2]) / 3
arr_out[0] = (arr[0] + arr[1]) / 2
arr_out[-1] = (arr[-1] + arr[-2]) / 2
return arr_out
%timeit mean3filter(np.random.rand(int(1e5)))
%%cython
import cython
import numpy as np
@cython.boundscheck(False)
def mean3filter2(double[::1] arr):
cdef double[::1] arr_out = np.empty_like(arr)
cdef int i
for i in range(1, arr.shape[0]-1):
        arr_out[i] = np.sum(arr[i-1 : i+2]) / 3
arr_out[0] = (arr[0] + arr[1]) / 2
arr_out[-1] = (arr[-1] + arr[-2]) / 2
return np.asarray(arr_out)
%timeit mean3filter2(np.random.rand(int(1e5)))
"""
Explanation: That's a lot of yellow still! How do we reduce this?
Exercise: change the f4 declaration to C
Part 2: work with NumPy arrays
This is a very small subset of Python. Most scientific application deal not with single values, but with arrays of data.
End of explanation
"""
%%cython -a
import cython
from cython.parallel import prange
import numpy as np
@cython.boundscheck(False)
def mean3filter3(double[::1] arr, double[::1] out):
cdef int i, j, k = arr.shape[0]-1
with nogil:
for i in prange(1, k-1, schedule='static',
chunksize=(k-2) // 2, num_threads=2):
            for j in range(i-1, i+2):
out[i] += arr[j]
out[i] /= 3
out[0] = (arr[0] + arr[1]) / 2
out[-1] = (arr[-1] + arr[-2]) / 2
return np.asarray(out)
rin = np.random.rand(int(1e7))
rout = np.zeros_like(rin)
%timeit mean3filter2(rin)
%timeit mean3filter3(rin, rout)
"""
Explanation: Rubbish! How do we fix this?
Exercise: use %%cython -a to speed up the code
Part 3: write parallel code
Warning:: Dragons afoot.
End of explanation
"""
%%cython -a
# distutils: language=c++
import cython
from libcpp.vector cimport vector
@cython.boundscheck(False)
def build_list_with_vector(double[::1] in_arr):
cdef vector[double] out
cdef int i
for i in range(in_arr.shape[0]):
out.push_back(in_arr[i])
return out
build_list_with_vector(np.random.rand(10))
"""
Explanation: Exercise (if time)
Write a parallel matrix multiplication routine.
Part 4: interact with C/C++ code
End of explanation
"""
%%cython -a
#distutils: language=c++
from cython.operator cimport dereference as deref, preincrement as inc
from libcpp.vector cimport vector
from libcpp.map cimport map as cppmap
cdef class Graph:
cdef cppmap[int, vector[int]] _adj
cpdef int has_node(self, int node):
return self._adj.find(node) != self._adj.end()
cdef void add_node(self, int new_node):
cdef vector[int] out
if not self.has_node(new_node):
self._adj[new_node] = out
def add_edge(self, int u, int v):
self.add_node(u)
self.add_node(v)
self._adj[u].push_back(v)
self._adj[v].push_back(u)
def __getitem__(self, int u):
return self._adj[u]
cdef vector[int] _degrees(self):
cdef vector[int] deg
cdef int first = 0
cdef vector[int] edges
cdef cppmap[int, vector[int]].iterator it = self._adj.begin()
while it != self._adj.end():
deg.push_back(deref(it).second.size())
it = inc(it)
return deg
def degrees(self):
return self._degrees()
g0 = Graph()
g0.add_edge(1, 5)
g0.add_edge(1, 6)
g0[1]
g0.has_node(1)
g0.degrees()
import networkx as nx
g = nx.barabasi_albert_graph(100000, 6)
with open('graph.txt', 'w') as fout:
for u, v in g.edges_iter():
fout.write('%i,%i\n' % (u, v))
%timeit list(g.degree())
myg = Graph()
def line2edges(line):
u, v = map(int, line.rstrip().split(','))
return u, v
edges = map(line2edges, open('graph.txt'))
for u, v in edges:
myg.add_edge(u, v)
%timeit mydeg = myg.degrees()
"""
Explanation: Example: C++ int graph
End of explanation
"""
from mean3 import mean3filter
mean3filter(np.random.rand(10))
"""
Explanation: Using Cython in production code
Use setup.py to build your Cython files.
```python
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
import numpy as np
setup(
cmdclass = {'build_ext': build_ext},
ext_modules = [
Extension("prange_demo", ["prange_demo.pyx"],
include_dirs=[np.get_include()],
extra_compile_args=['-fopenmp'],
extra_link_args=['-fopenmp', '-lgomp']),
]
)
```
Exercise
Write a Cython module with a setup.py to run the mean-3 filter, then import from the notebook.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/nuist/cmip6/models/sandbox-1/ocean.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nuist', 'sandbox-1', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: NUIST
Source ID: SANDBOX-1
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:34
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
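For the "Linear" choice, a minimal sketch of what such an EOS looks like (the coefficient values below are typical illustrative magnitudes, not taken from any particular model):

```python
# Linear equation of state about a reference state (T0, S0):
#   rho = rho0 * (1 - alpha*(T - T0) + beta*(S - S0))
# Illustrative coefficient values only.
RHO0 = 1026.0        # reference density (kg/m3)
ALPHA = 2.0e-4       # thermal expansion coefficient (1/K)
BETA = 7.6e-4        # haline contraction coefficient (1/psu)
T0, S0 = 10.0, 35.0  # reference temperature (deg C) and salinity (psu)

def linear_eos(T, S):
    """Density (kg/m3) from a linear EOS: warming lowers density,
    salinification raises it."""
    return RHO0 * (1.0 - ALPHA * (T - T0) + BETA * (S - S0))
```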
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
"""
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
"""
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
"""
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
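As an illustration of such an equation, the older UNESCO (Millero, 1978) freezing-point formula is compact enough to sketch in a few lines. TEOS 2010, the first choice above, uses a different polynomial; this is shown only as an example of the salinity/pressure dependence:

```python
def freezing_point(S, p):
    """Seawater freezing point (deg C) from the UNESCO (Millero, 1978)
    formula.  S is practical salinity, p is pressure in dbar.  Shown as
    an illustration only; TEOS 2010 uses a different polynomial."""
    return (-0.0575 * S
            + 1.710523e-3 * S**1.5
            - 2.154996e-4 * S**2
            - 7.53e-4 * p)
```

At S = 35 and surface pressure this gives roughly -1.92 deg C, and increasing pressure lowers the freezing point further.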
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
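As a quick illustration of how cpocean enters diagnostics, the heat-content anomaly of a water column is rho0 * cp * dT * H. The function below is only a sketch with illustrative inputs, not any model's diagnostic:

```python
def column_heat_content_anomaly(rho0, cp, dT, H):
    """Heat-content anomaly (J/m2) of a column of depth H (m) that is
    uniformly dT (K) warmer than the reference state, given reference
    density rho0 (kg/m3) and specific heat cp (J/(kg K))."""
    return rho0 * cp * dT * H
```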
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how treatment of isolated seas is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuary-specific treatment is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 50km (Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
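As a minimal illustration of the terrain-following ("S-coordinate"/sigma-like) choices, the textbook sigma transform places a level sigma in [-1, 0] at a depth that follows the bathymetry. This is the classic sigma mapping, not any specific model's coordinate:

```python
def sigma_to_z(sigma, eta, H):
    """Depth z (m, negative downward) of a terrain-following sigma
    level.  sigma runs from 0 (surface) to -1 (bottom); eta is the
    free-surface elevation (m) and H the local water depth (m)."""
    return eta + sigma * (H + eta)
```

By construction sigma = 0 tracks the free surface and sigma = -1 tracks the bottom, whatever the local depth.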
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
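The barotropic step is constrained by the fast external gravity waves, which travel at c = sqrt(g*H). A rough CFL estimate of the admissible step can be sketched as follows (illustrative only; real models apply their own stability criteria):

```python
import math

def barotropic_cfl_dt(dx, H, g=9.81, courant=0.5):
    """Rough CFL-limited barotropic time step (s): external gravity
    waves travel at c = sqrt(g*H), so dt <~ courant * dx / c."""
    c = math.sqrt(g * H)
    return courant * dx / c
```

For a 100 km grid over a 4000 m deep ocean this gives a step of roughly 250 s, which is why the barotropic mode is sub-cycled or treated implicitly relative to the baroclinic step.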
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
"""
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from that of active tracers? If so, describe.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
"""
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
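The practical difference between the two operator orders is scale selectivity: a Fourier mode of wavenumber k is damped at a rate proportional to k**2 under a harmonic (Laplacian) operator but k**4 under a bi-harmonic one, so the bi-harmonic choice concentrates dissipation near the grid scale. A one-line sketch:

```python
def damping_rate(k, coeff, order):
    """Spectral damping rate of a Fourier mode exp(i*k*x): coeff*k**2
    for a harmonic (order=2) operator, coeff*k**4 for a bi-harmonic
    (order=4) one."""
    return coeff * k**order
```

Halving the wavelength (doubling k) multiplies the harmonic damping by 4 but the bi-harmonic damping by 16, which is why bi-harmonic friction is favoured in eddy-admitting models.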
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
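For the "Time + space varying (Smagorinsky)" choice, the viscosity is typically of the form nu = (C*dx)**2 * |D|, with |D| the horizontal deformation rate. A sketch, noting that the constant C and the exact deformation norm differ between models:

```python
def smagorinsky_viscosity(dx, du_dx, dv_dy, du_dy, dv_dx, C=0.1):
    """Smagorinsky-type eddy viscosity (m2/s): nu = (C*dx)**2 * |D|,
    where |D| combines the horizontal tension and shear of the flow.
    C and the form of |D| are illustrative, not any model's values."""
    tension = du_dx - dv_dy
    shear = du_dy + dv_dx
    deformation = (tension**2 + shear**2) ** 0.5
    return (C * dx) ** 2 * deformation
```

The coefficient thus grows with both grid spacing and local strain, concentrating friction where and when the flow is most sheared.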
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e. Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusivity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean*
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify the order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify the coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify the order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify the coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify the coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify the coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for sunlight penetration scheme (if applicable).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
"""
# Source notebook: PmagPy/2017_MagIC_Workshop_PmagPy_Tutorial / PmagPy_structure_notebook.ipynb (BSD-3-Clause)
import random
import numpy
from pmagpy.pmag import dogeo  # dodirot below calls pmag's dogeo

def fshdev(k):
"""
Generate a random draw from a Fisher distribution with mean declination
of 0 and inclination of 90 with a specified kappa.
Parameters
----------
k : kappa (precision parameter) of the distribution
Returns
----------
dec, inc : declination and inclination of random Fisher distribution draw
"""
R1=random.random()
R2=random.random()
L=numpy.exp(-2*k)
a=R1*(1-L)+L
fac=numpy.sqrt((-numpy.log(a))/(2*k))
inc=90.-2*numpy.arcsin(fac)*180./numpy.pi
dec=2*numpy.pi*R2*180./numpy.pi
return dec,inc
def dodirot(D,I,Dbar,Ibar):
"""
Rotate a declination/inclination pair by the difference between dec=0 and
inc = 90 and the provided desired mean direction
Parameters
----------
D : declination to be rotated
I : inclination to be rotated
Dbar : declination of desired mean
Ibar : inclination of desired mean
Returns
----------
drot, irot : rotated declination and inclination
"""
d,irot=dogeo(D,I,Dbar,90.-Ibar)
drot=d-180.
if drot<360.:drot=drot+360.
if drot>360.:drot=drot-360.
return drot,irot
"""
Explanation: PmagPy structure
The GUI programs
Some of the common data conversion and analysis functionality of PmagPy is exposed within the GUI programs as we saw yesterday. However, the capabilities of PmagPy extend well beyond what is in the GUIs and can be accessed through the command line programs and through functions that can be imported as Python modules.
The command line programs
As an example, let's consider a situation where we want to draw random samples from a specified Fisher distribution. This can be done with the command line program fishrot.py. We can learn a little bit about fishrot.py in the PmagPy Cookbook. The command line programs take flags. Type this command at the terminal to learn about them:
fishrot.py -h
For example, if we want to sample 10 directions from a Fisher distribution with a mean declination of 45, a mean inclination of 30 and a kappa of 20, we would use this command in terminal:
fishrot.py -k 20 -n 10 -D 45 -I 30
Let's look at the code of fishrot.py on Github: https://github.com/PmagPy/PmagPy/blob/master/programs/fishrot.py
We can see that fishrot.py uses two functions from pmag.py (pmag.fshdev and pmag.dodirot). Let's look at the source of these functions, which has been copied from pmag.py into the code cell below:
End of explanation
"""
import pmagpy.pmag as pmag
import pmagpy.ipmag as ipmag
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Using functions in the notebook
To use functions in the notebook we can import PmagPy function modules as in the code cell below. These function modules are: pmag, a module with functions for analyzing paleomagnetic and rock magnetic data that are used within the command lines programs, the GUIs and within ipmag, a module with functions that combine and extend pmag functions and exposes pmagplotlib functions in order to generate output that works well within the Jupyter notebook environment.
End of explanation
"""
ipmag.fishrot?
"""
Explanation: Putting a question mark after a function results in the docstring being displayed.
End of explanation
"""
ipmag.fishrot??
"""
Explanation: Putting two questions marks displays the docstring and the source code.
End of explanation
"""
ipmag.fishrot(k=20, n=10, dec=45, inc=30)
directions = ipmag.fishrot(k=20, n=10, dec=45, inc=30)
directions
"""
Explanation: Using ipmag.fishrot to simulate Fisher distributed directions
The docstring guides us on what parameters to pass to the function. Here let's take 10 samples from a distribution with a mean of dec=45, inc=30 and a kappa precision parameter of 20. If we simply call the function, it will print to the console as in the code cell below. In the code cell below that, we assign a variable to this block of declination/inclination values, which we call a di_block.
End of explanation
"""
ipmag.plot_di?
"""
Explanation: Plotting directions
Now let's say we want to plot these directions. Let's use the ipmag.plot_di function to do so. We can learn about that function by putting question marks after it. Shift+tab also displays such information about a function.
End of explanation
"""
fignum = 1
plt.figure(num=fignum,figsize=(4,4),dpi=160)
ipmag.plot_net(fignum)
ipmag.plot_di(di_block=directions)
"""
Explanation: Following the instructions in the documentation string, let's make a plot and plot the directions.
End of explanation
"""
decs = [100,120,115]
incs = [24,26,10]
mean = ipmag.fisher_mean(dec=decs,inc=incs)
print(mean)
lots_of_directions = ipmag.fishrot(k=50, n=100, dec=45, inc=30)
mean = ipmag.fisher_mean(di_block=lots_of_directions)
print(mean)
ipmag.print_direction_mean(mean)
"""
Explanation: Calculating and plotting a Fisher mean
Let's get more directions and then take their mean. We can calculate the mean using the function ipmag.fisher_mean. That function returns a handy Python data structure called a dictionary.
End of explanation
"""
mean['dec']
"""
Explanation: To access a single element in the dictionary, say the declination ('dec'), we would do it by putting the name of a dictionary and following it with the key related to the value we want to see. We will use this approach to extract and plot the mean dec, inc and $\alpha_{95}$ in the plot below.
End of explanation
"""
fignum = 1
plt.figure(num=fignum,figsize=(4,4),dpi=160)
ipmag.plot_net(fignum)
ipmag.plot_di(di_block=lots_of_directions)
ipmag.plot_di_mean(mean['dec'],mean['inc'],mean['alpha95'],color='r')
plt.show()
"""
Explanation: Let's plot both the directions (using ipmag.plot_di) and the mean (using ipmag.plot_di_mean).
End of explanation
"""
pmag.doigrf??
years = np.arange(1900,2020,1)
years
field = ipmag.igrf([year,0,La_Jolla_lat,La_Jolla_lon])
field
local_field = []
local_field_intensity = []
La_Jolla_lat = 32.8328
La_Jolla_lon = -117.2713
for year in years:
field = ipmag.igrf([year,0,La_Jolla_lat,La_Jolla_lon])
local_field.append(field)
local_field_intensity.append(field[2])
fignum = 1
plt.figure(num=fignum,figsize=(8,8),dpi=160)
ipmag.plot_net(fignum)
ipmag.plot_di(di_block=local_field,markersize=10,label='field since 1900 in La Jolla')
ipmag.plot_di(di_block=local_field[-2:],color='red',label='field today in La Jolla')
plt.legend()
plt.show()
plt.plot(years,local_field_intensity)
"""
Explanation: Calculating directions from the IGRF model using ipmag.igrf
Let's use another ipmag function to determine the field predicted by the IGRF model here in La Jolla since 1900.
Before we do so, let's all open up ipmag.py in a text editor. Find the function ipmag.igrf and report back what functions it uses (hint: they are from the pmag.py module).
To determine the IGRF field at a number of different years, we will first create an array of years using the numpy function np.arange. We can then take these years along with the igrf function and the location of La Jolla to calculate the local direction through time using a for loop.
End of explanation
"""
|
honux77/practice | python/csvgen/simple csv generator.ipynb | mit | import random
import string
def genID(n):
str = ''.join(random.choices(string.ascii_uppercase + string.digits, k=n))
return str
l = "김이박정최정강조윤장임한호서신권황"
m = "동해물과백두산이마르고도록하느님이보우하사우리나라만세무궁화삼천리화려강산대한사람"
def genName():
return random.choice(l) + "".join(random.choices(m, k=2))
import random
import time
def strTimeProp(start, end, format, prop):
stime = time.mktime(time.strptime(start, format))
etime = time.mktime(time.strptime(end, format))
ptime = stime + prop * (etime - stime)
return time.strftime(format, time.localtime(ptime))
def randomDate(start, end, prop):
return strTimeProp(start, end, '%Y/%m/%d %H:%M:%S', prop)
def genDate():
return randomDate("2015/1/1 9:30:00", "2018/8/9 17:00:00", random.random())
record = "{},{},{},{}".format(genID(10), genName(),genDate(), genDate())
print(record)
r1 = 20000
with open("user.csv","w") as f:
for i in range(r1):
record = "{},{},{},{}\n".format(genID(10), genName(),genDate(), genDate())
f.write(record)
!pwd
"""
Explanation: Generating user data
USER(UID, NAME, LASTDATE, STARTDATE);
GLOG(LOGID, UID, GAMEDATE, SCORE);
```SQL
DROP TABLE IF EXISTS USER;
CREATE TABLE USER(
UID CHAR(10) PRIMARY KEY,
NAME VARCHAR(32),
LASTDATE TIMESTAMP,
STARTDATE DATETIME
);
DROP TABLE IF EXISTS GLOG;
CREATE TABLE GLOG(
LOGID INT PRIMARY KEY AUTO_INCREMENT,
UID CHAR(10),
GAMEDATE DATETIME,
SCORE INT,
FOREIGN KEY (UID) REFERENCES USER(UID)
);
```
End of explanation
"""
import pandas as pd
df = pd.read_csv('./user.csv', header=None,
names=['ID','NAME','LAST','START'])
random.choice(df['ID'])
random.choice(df['ID'])
"""
Explanation: LOAD USER FILE FROM CSV
End of explanation
"""
random.randint(1000,1000000)
r2 = 3000000
with open('glog.csv','w') as gl:
for i in range(r2):
uid = random.choice(df['ID'])
date = genDate()
score = random.randint(1000,1000000)
record = ",{},{},{}\n".format(uid,date,score)
gl.write(record)
!ls
## Notes
- plain load: 22.45 seconds
- set autocommit=0, unique_checks=0,
"""
Explanation: Generating GLOG data
GLOG(LOGID, UID, GAMEDATE, SCORE);
End of explanation
"""
|
INM-6/elephant | doc/tutorials/granger_causality.ipynb | bsd-3-clause | import matplotlib.pyplot as plt
import numpy as np
from elephant.causality.granger import pairwise_granger, conditional_granger
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2)
# Indirect causal influence diagram
node1 = plt.Circle((0.2, 0.2), 0.1, color='red')
node2 = plt.Circle((0.5, 0.6), 0.1, color='red')
node3 = plt.Circle((0.8, 0.2), 0.1, color='red')
ax1.set_aspect(1)
ax1.arrow(0.28, 0.3, 0.1, 0.125, width=0.02, color='k')
ax1.arrow(0.6, 0.5, 0.1, -0.125, width=0.02, color='k')
ax1.add_artist(node1)
ax1.add_artist(node2)
ax1.add_artist(node3)
ax1.text(0.2, 0.2, 'Y', horizontalalignment='center', verticalalignment='center')
ax1.text(0.5, 0.6, 'Z', horizontalalignment='center', verticalalignment='center')
ax1.text(0.8, 0.2, 'X', horizontalalignment='center', verticalalignment='center')
ax1.set_title('Indirect only')
ax1.set_xbound((0, 1))
ax1.set_ybound((0, 0.8))
ax1.tick_params(axis='both', which='both', bottom=False, top=False, labelbottom=False,
right=False, left=False, labelleft=False)
# Both direct and indirect causal influence diagram
node1 = plt.Circle((0.2, 0.2), 0.1, color='g')
node2 = plt.Circle((0.5, 0.6), 0.1, color='g')
node3 = plt.Circle((0.8, 0.2), 0.1, color='g')
ax2.set_aspect(1)
ax2.arrow(0.28, 0.3, 0.1, 0.125, width=0.02, color='k')
ax2.arrow(0.35, 0.2, 0.2, 0.0, width=0.02, color='k')
ax2.arrow(0.6, 0.5, 0.1, -0.125, width=0.02, color='k')
ax2.add_artist(node1)
ax2.add_artist(node2)
ax2.add_artist(node3)
ax2.text(0.2, 0.2, 'Y', horizontalalignment='center', verticalalignment='center')
ax2.text(0.5, 0.6, 'Z', horizontalalignment='center', verticalalignment='center')
ax2.text(0.8, 0.2, 'X', horizontalalignment='center', verticalalignment='center')
ax2.set_xbound((0, 1))
ax2.set_ybound((0, 0.8))
ax2.tick_params(axis='both', which='both', bottom=False, top=False, labelbottom=False,
right=False, left=False, labelleft=False)
ax2.set_title('Both direct and indirect')
plt.tight_layout()
plt.show()
def generate_data(length=30000, causality_type="indirect"):
"""
Recreated from Example 2 section 5.2 of :cite:'granger-Ding06-0608035'.
Parameters
----------
length : int
The length of the signals to be generated (i.e. shape(signal) = (3, length))
causality_type: str
Type of causal influence in the data:
'indirect' for indirect causal influence only (i.e. Y -> Z -> X)
'both' for direct and indirect causal influence
Notes
-----
Taken from elephant.test.test_causality.ConditionalGrangerTestCase
"""
if causality_type == "indirect":
y_t_lag_2 = 0
elif causality_type == "both":
y_t_lag_2 = 0.2
else:
raise ValueError("causality_type should be either 'indirect' or "
"'both'")
order = 2
signal = np.zeros((3, length + order))
weights_1 = np.array([[0.8, 0, 0.4],
[0, 0.9, 0],
[0., 0.5, 0.5]])
weights_2 = np.array([[-0.5, y_t_lag_2, 0.],
[0., -0.8, 0],
[0, 0, -0.2]])
weights = np.stack((weights_1, weights_2))
noise_covariance = np.array([[0.3, 0.0, 0.0],
[0.0, 1., 0.0],
[0.0, 0.0, 0.2]])
for i in range(length):
for lag in range(order):
signal[:, i + order] += np.dot(weights[lag],
signal[:, i + 1 - lag])
rnd_var = np.random.multivariate_normal([0, 0, 0],
noise_covariance)
signal[:, i + order] += rnd_var
signal = signal[:, 2:]
return signal.T
"""
Explanation: Time-domain Granger Causality
Pairwise Granger Causality
Granger causality is a method to determine functional connectivity between time series using autoregressive modelling. In the simplest pairwise Granger causality case for signals X and Y, the data are modelled as autoregressive processes. Each of these processes has two representations: the first contains the history of the signal X itself and a prediction error (the noise, a.k.a. residual), whereas the second also incorporates the history of the other signal.
If including the history of Y alongside the history of X in the model of X reduces the prediction error compared to the history of X alone, Y is said to Granger-cause X. The same can be done with the signals interchanged to determine whether X Granger-causes Y.
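The two-model comparison can be sketched directly with ordinary least squares. This is a minimal illustration (no model-order selection, no intercept, zero-mean signals assumed), not the elephant implementation:

```python
import numpy as np

def granger(source, target, order=2):
    # log-ratio of residual variances: AR model of `target` from its own
    # history vs. from its own history plus the history of `source`
    n = len(target)
    y = target[order:]
    own = np.column_stack([target[order - k:n - k] for k in range(1, order + 1)])
    full = np.column_stack(
        [own] + [source[order - k:n - k] for k in range(1, order + 1)])

    def resid_var(design):
        coef, *_ = np.linalg.lstsq(design, y, rcond=None)
        return np.var(y - design @ coef)

    return np.log(resid_var(own) / resid_var(full))
```

A positive value indicates that including `source` improves the prediction of `target`, i.e. `source` Granger-causes `target` in this sense.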
Conditional Granger Causality
Conditional Granger causality can be used to further investigate this functional connectivity. Given signals X, Y and Z, we find that Y Granger causes X, but we want to test if this causality is mediated through Z. We can use Z as a condition for the aforementioned Granger causality.
In order to illustrate the function of time-domain Granger causality we will be using examples from Ding et al. (2006) chapter. Specifically, we will have two cases of three signals. In the first case we will have indirect connectivity only, whereas in the second case both direct and indirect connectivities will be present.
References: Ding M., Chen Y. and Bressler S.L. (2006) Granger Causality: Basic Theory and Application to Neuroscience. https://arxiv.org/abs/q-bio/0608035
End of explanation
"""
np.random.seed(1)
# Indirect causality
xyz_indirect_sig = generate_data(length=10000, causality_type='indirect')
xy_indirect_sig = xyz_indirect_sig[:, :2]
indirect_pairwise_gc = pairwise_granger(xy_indirect_sig, max_order=10, information_criterion='aic')
print(indirect_pairwise_gc)
# Indirect causality (conditioned on z)
indirect_cond_gc = conditional_granger(xyz_indirect_sig, max_order=10, information_criterion='aic')
print(indirect_cond_gc)
print('Zero value indicates total dependence on signal Z')
"""
Explanation: Indirect causality
End of explanation
"""
# Both direct and indirect causality
xyz_both_sig = generate_data(length=10000, causality_type='both')
xy_both_sig = xyz_both_sig[:, :2]
both_pairwise_gc = pairwise_granger(xy_both_sig, max_order=10, information_criterion='aic')
print(both_pairwise_gc)
# Both direct and indirect causality (conditioned on z)
both_cond_gc = conditional_granger(xyz_both_sig, max_order=10, information_criterion='aic')
print(both_cond_gc)
print('Non-zero value indicates the presence of direct Y to X influence')
"""
Explanation: Both direct and indirect causality
End of explanation
"""
|
dbouquin/AstroHackWeek2015 | day3-machine-learning/05 - Cross-validation.ipynb | gpl-2.0 | from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data
y = iris.target
from sklearn.cross_validation import cross_val_score
from sklearn.svm import LinearSVC
cross_val_score(LinearSVC(), X, y, cv=5)
cross_val_score(LinearSVC(), X, y, cv=5, scoring="f1_macro")
"""
Explanation: Cross-Validation
<img src="figures/cross_validation.svg" width=100%>
End of explanation
"""
y % 2
cross_val_score(LinearSVC(), X, y % 2)
cross_val_score(LinearSVC(), X, y % 2, scoring="average_precision")
cross_val_score(LinearSVC(), X, y % 2, scoring="roc_auc")
from sklearn.metrics.scorer import SCORERS
print(SCORERS.keys())
"""
Explanation: Let's go to a binary task for a moment
End of explanation
"""
def my_accuracy_scoring(est, X, y):
return np.mean(est.predict(X) == y)
cross_val_score(LinearSVC(), X, y, scoring=my_accuracy_scoring)
def my_super_scoring(est, X, y):
return np.mean(est.predict(X) == y) - np.mean(est.coef_ != 0)
from sklearn.grid_search import GridSearchCV
y = iris.target
grid = GridSearchCV(LinearSVC(C=.01, dual=False),
param_grid={'penalty' : ['l1', 'l2']},
scoring=my_super_scoring)
grid.fit(X, y)
print(grid.best_params_)
"""
Explanation: Implementing your own scoring metric:
End of explanation
"""
from sklearn.cross_validation import ShuffleSplit
shuffle_split = ShuffleSplit(len(X), 10, test_size=.4)
cross_val_score(LinearSVC(), X, y, cv=shuffle_split)
from sklearn.cross_validation import StratifiedKFold, KFold, ShuffleSplit
def plot_cv(cv, n_samples):
masks = []
for train, test in cv:
mask = np.zeros(n_samples, dtype=bool)
mask[test] = 1
masks.append(mask)
plt.matshow(masks)
plot_cv(StratifiedKFold(y, n_folds=5), len(y))
plot_cv(KFold(len(iris.target), n_folds=5), len(iris.target))
plot_cv(ShuffleSplit(len(iris.target), n_iter=20, test_size=.2),
len(iris.target))
"""
Explanation: There are other ways to do cross-validation
End of explanation
"""
# %load solutions/cross_validation_iris.py
"""
Explanation: Exercises
Use KFold cross validation and StratifiedKFold cross validation (3 or 5 folds) for LinearSVC on the iris dataset.
Why are the results so different? How could you get more similar results?
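A hint for the exercise: the iris targets are stored sorted by class, so an unshuffled KFold can put most (or even all) of one class into the test fold. StratifiedKFold avoids this by keeping class proportions in every fold. A toy round-robin version of that idea (not scikit-learn's implementation) looks like:

```python
import numpy as np

def stratified_kfold_indices(y, n_folds):
    # assign each class's samples round-robin to folds so that every fold
    # keeps roughly the overall class proportions
    y = np.asarray(y)
    folds = [[] for _ in range(n_folds)]
    for cls in np.unique(y):
        for i, j in enumerate(np.where(y == cls)[0]):
            folds[i % n_folds].append(j)
    return [np.array(sorted(f)) for f in folds]
```

Shuffling the data before a plain KFold achieves a similar effect in practice.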
End of explanation
"""
|
mne-tools/mne-tools.github.io | dev/_downloads/ed1a04dd775648ca869bfcffae26faca/30_mne_dspm_loreta.ipynb | bsd-3-clause | import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
"""
Explanation: Source localization with MNE, dSPM, sLORETA, and eLORETA
The aim of this tutorial is to teach you how to compute and apply a linear
minimum-norm inverse method on evoked/raw/epochs data.
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path / 'MEG' / 'sample' / 'sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname) # already has an average reference
events = mne.find_events(raw, stim_channel='STI 014')
event_id = dict(aud_l=1) # event trigger and conditions
tmin = -0.2 # start of each epoch (200ms before the trigger)
tmax = 0.5 # end of each epoch (500ms after the trigger)
raw.info['bads'] = ['MEG 2443', 'EEG 053']
baseline = (None, 0) # means from the first instant to t = 0
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=('meg', 'eog'), baseline=baseline, reject=reject)
"""
Explanation: Process MEG data
End of explanation
"""
noise_cov = mne.compute_covariance(
epochs, tmax=0., method=['shrunk', 'empirical'], rank=None, verbose=True)
fig_cov, fig_spectra = mne.viz.plot_cov(noise_cov, raw.info)
"""
Explanation: Compute regularized noise covariance
For more details see tut-compute-covariance.
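The 'shrunk' estimator named above blends the empirical covariance with a scaled identity matrix to keep it well-conditioned. A bare-bones sketch of that idea follows; MNE's actual implementation selects the shrinkage amount and handles channel types and rank deficiency far more carefully:

```python
import numpy as np

def shrink_cov(X, alpha=0.1):
    # X: (n_samples, n_channels); blend the empirical covariance with
    # a scaled identity to regularize small eigenvalues
    emp = np.cov(X, rowvar=False)
    mu = np.trace(emp) / emp.shape[0]  # average variance
    return (1.0 - alpha) * emp + alpha * mu * np.eye(emp.shape[0])
```

With any alpha > 0 the result is positive definite (given mu > 0), which is what downstream inverse computations need.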
End of explanation
"""
evoked = epochs.average().pick('meg')
evoked.plot(time_unit='s')
evoked.plot_topomap(times=np.linspace(0.05, 0.15, 5), ch_type='mag',
time_unit='s')
"""
Explanation: Compute the evoked response
Let's just use the MEG channels for simplicity.
End of explanation
"""
evoked.plot_white(noise_cov, time_unit='s')
del epochs, raw # to save memory
"""
Explanation: It's also a good idea to look at whitened data:
End of explanation
"""
fname_fwd = data_path / 'MEG' / 'sample' / 'sample_audvis-meg-oct-6-fwd.fif'
fwd = mne.read_forward_solution(fname_fwd)
"""
Explanation: Inverse modeling: MNE/dSPM on evoked and raw data
Here we first read the forward solution. You will likely need to compute
one for your own data -- see tut-forward for information on how
to do it.
End of explanation
"""
inverse_operator = make_inverse_operator(
evoked.info, fwd, noise_cov, loose=0.2, depth=0.8)
del fwd
# You can write it to disk with::
#
# >>> from mne.minimum_norm import write_inverse_operator
# >>> write_inverse_operator('sample_audvis-meg-oct-6-inv.fif',
# inverse_operator)
"""
Explanation: Next, we make an MEG inverse operator.
End of explanation
"""
method = "dSPM"
snr = 3.
lambda2 = 1. / snr ** 2
stc, residual = apply_inverse(evoked, inverse_operator, lambda2,
method=method, pick_ori=None,
return_residual=True, verbose=True)
"""
Explanation: Compute inverse solution
We can use this to compute the inverse solution and obtain source time
courses:
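Schematically, a minimum-norm inverse with this regularization is a Tikhonov-style operator built from the gain (forward) matrix, with lambda2 = 1/SNR^2 as above. The sketch below is the unweighted textbook form; MNE's actual operator additionally applies noise whitening, depth weighting, and a source covariance:

```python
import numpy as np

def minimum_norm_operator(G, noise_cov, snr=3.0):
    # M = G^T (G G^T + lambda2 * C)^-1, with lambda2 = 1 / snr**2;
    # source estimates are then M @ measurements
    lam2 = 1.0 / snr ** 2
    return G.T @ np.linalg.inv(G @ G.T + lam2 * noise_cov)
```

Larger assumed SNR means less regularization, so the estimate follows the data more closely.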
End of explanation
"""
fig, ax = plt.subplots()
ax.plot(1e3 * stc.times, stc.data[::100, :].T)
ax.set(xlabel='time (ms)', ylabel='%s value' % method)
"""
Explanation: Visualization
We can look at different dipole activations:
End of explanation
"""
fig, axes = plt.subplots(2, 1)
evoked.plot(axes=axes)
for ax in axes:
for text in list(ax.texts):
text.remove()
for line in ax.lines:
line.set_color('#98df81')
residual.plot(axes=axes)
"""
Explanation: Examine the original data and the residual after fitting:
End of explanation
"""
vertno_max, time_max = stc.get_peak(hemi='rh')
subjects_dir = data_path / 'subjects'
surfer_kwargs = dict(
hemi='rh', subjects_dir=subjects_dir,
clim=dict(kind='value', lims=[8, 12, 15]), views='lateral',
initial_time=time_max, time_unit='s', size=(800, 800), smoothing_steps=10)
brain = stc.plot(**surfer_kwargs)
brain.add_foci(vertno_max, coords_as_verts=True, hemi='rh', color='blue',
scale_factor=0.6, alpha=0.5)
brain.add_text(0.1, 0.9, 'dSPM (plus location of maximal activation)', 'title',
font_size=14)
# The documentation website's movie is generated with:
# brain.save_movie(..., tmin=0.05, tmax=0.15, interpolation='linear',
# time_dilation=20, framerate=10, time_viewer=True)
"""
Explanation: Here we use the peak getter to move the visualization to the time point of the peak
and draw a marker at the maximum peak vertex.
End of explanation
"""
|
phanrahan/magmathon | notebooks/advanced/coreir.ipynb | mit | import magma as m
# default mantle target is coreir, so no need to do this unless you want to be explicit
# m.set_mantle_target("coreir")
from mantle import Counter
from loam.boards.icestick import IceStick
icestick = IceStick()
icestick.Clock.on()
icestick.D5.on()
N = 22
main = icestick.main()
counter = Counter(N)
m.wire(counter.O[N-1], main.D5)
m.EndCircuit()
"""
Explanation: CoreIR
This notebook uses the "coreir" mantle backend on the icestick.
We begin by building a normal Magma circuit using Mantle
and the Loam IceStick board.
End of explanation
"""
m.compile("build/blink_coreir", main, output="coreir")
"""
Explanation: To compile to coreir, we simply set the output parameter of the m.compile command to "coreir".
End of explanation
"""
%cat build/blink_coreir.json
"""
Explanation: We can inspect the generated .json file.
End of explanation
"""
%%bash
coreir -i build/blink_coreir.json -o build/blink_coreir.v
"""
Explanation: We can use the coreir command line tool to generate verilog.
End of explanation
"""
%cat build/blink_coreir.v
%%bash
cd build
yosys -q -p 'synth_ice40 -top main -blif blink_coreir.blif' blink_coreir.v
arachne-pnr -q -d 1k -o blink_coreir.txt -p blink_coreir.pcf blink_coreir.blif
icepack blink_coreir.txt blink_coreir.bin
#iceprog blink_coreir.bin
"""
Explanation: And now we can inspect the generated verilog from coreir; notice that it includes the verilog implementations of all the coreir primitives.
End of explanation
"""
|
Aniruddha-Tapas/Applied-Machine-Learning | Machine Learning using GraphLab/Document Retrieval using GraphLab Create.ipynb | mit | import graphlab
"""
Explanation: Document retrieval from wikipedia data
Import GraphLab Create
End of explanation
"""
people = graphlab.SFrame('people_wiki.gl/')
"""
Explanation: Load some text data - from wikipedia, pages on people
End of explanation
"""
people.head()
len(people)
"""
Explanation: Data contains: link to wikipedia article, name of person, text of article.
End of explanation
"""
obama = people[people['name'] == 'Barack Obama']
obama
obama['text']
"""
Explanation: Explore the dataset and check out the text it contains
Exploring the entry for president Obama
End of explanation
"""
clooney = people[people['name'] == 'George Clooney']
clooney['text']
"""
Explanation: Exploring the entry for actor George Clooney
End of explanation
"""
obama['word_count'] = graphlab.text_analytics.count_words(obama['text'])
print obama['word_count']
"""
Explanation: Get the word counts for Obama article
End of explanation
"""
obama_word_count_table = obama[['word_count']].stack('word_count', new_column_name = ['word','count'])
"""
Explanation: Sort the word counts for the Obama article
Turning the dictionary of word counts into a table
End of explanation
"""
obama_word_count_table.head()
obama_word_count_table.sort('count',ascending=False)
"""
Explanation: Sorting the word counts to show most common words at the top
End of explanation
"""
people['word_count'] = graphlab.text_analytics.count_words(people['text'])
people.head()
tfidf = graphlab.text_analytics.tf_idf(people['word_count'])
# Earlier versions of GraphLab Create returned an SFrame rather than a single SArray
# This notebook was created using Graphlab Create version 1.7.1
if graphlab.version <= '1.6.1':
tfidf = tfidf['docs']
tfidf
people['tfidf'] = tfidf
"""
Explanation: Most common words include uninformative words like "the", "in", "and",...
Compute TF-IDF for the corpus
To give more weight to informative words, we weigh them by their TF-IDF scores.
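The weighting can be sketched in plain Python: term frequency multiplied by inverse document frequency. This uses the common log(N/df) convention, which may differ in details (smoothing, normalization) from GraphLab's implementation:

```python
import math
from collections import Counter

def tf_idf(docs):
    # docs: list of token lists; returns a {word: tfidf} dict per document
    n = len(docs)
    df = Counter()  # number of documents containing each word
    for doc in docs:
        df.update(set(doc))
    out = []
    for doc in docs:
        tf = Counter(doc)
        out.append({w: c * math.log(n / df[w]) for w, c in tf.items()})
    return out
```

Words that appear in every document get a weight of zero, which is exactly why uninformative words like "the" are suppressed.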
End of explanation
"""
obama = people[people['name'] == 'Barack Obama']
obama[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)
"""
Explanation: Examine the TF-IDF for the Obama article
End of explanation
"""
clinton = people[people['name'] == 'Bill Clinton']
beckham = people[people['name'] == 'David Beckham']
"""
Explanation: Words with highest TF-IDF are much more informative.
Manually compute distances between a few people
Let's manually compare the distances between the articles for a few famous people.
End of explanation
"""
graphlab.distances.cosine(obama['tfidf'][0],clinton['tfidf'][0])
graphlab.distances.cosine(obama['tfidf'][0],beckham['tfidf'][0])
"""
Explanation: Is Obama closer to Clinton than to Beckham?
We will use cosine distance, which is given by
(1-cosine_similarity)
and find that the article about president Obama is closer to the one about former president Clinton than that of footballer David Beckham.
End of explanation
"""
knn_model = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name')
"""
Explanation: Build a nearest neighbor model for document retrieval
We now create a nearest-neighbors model and apply it to document retrieval.
End of explanation
"""
knn_model.query(obama)
"""
Explanation: Applying the nearest-neighbors model for retrieval
Who is closest to Obama?
End of explanation
"""
swift = people[people['name'] == 'Taylor Swift']
knn_model.query(swift)
jolie = people[people['name'] == 'Angelina Jolie']
knn_model.query(jolie)
arnold = people[people['name'] == 'Arnold Schwarzenegger']
knn_model.query(arnold)
"""
Explanation: As we can see, president Obama's article is closest to the one about his vice-president Biden, and those of other politicians.
Other examples of document retrieval
End of explanation
"""
|
phoebe-project/phoebe2-docs | 2.3/tutorials/pblum.ipynb | gpl-3.0 | #!pip install -I "phoebe>=2.3,<2.4"
import phoebe
from phoebe import u # units
import numpy as np
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: Passband Luminosity
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
"""
b.add_dataset('lc', times=phoebe.linspace(0,1,101), dataset='lc01')
"""
Explanation: And we'll add a single light curve dataset so that we can see how passband luminosities affect the resulting synthetic light curve model.
End of explanation
"""
b.set_value('irrad_method', 'none')
b.set_value_all('ld_mode', 'manual')
b.set_value_all('ld_func', 'linear')
b.set_value_all('ld_coeffs', [0.])
b.set_value_all('ld_mode_bol', 'manual')
b.set_value_all('ld_func_bol', 'linear')
b.set_value_all('ld_coeffs_bol', [0.])
b.set_value_all('atm', 'blackbody')
"""
Explanation: Lastly, just to make things a bit easier and faster, we'll turn off irradiation (reflection), use blackbody atmospheres, and disable limb-darkening (so that we can play with weird temperatures without having to worry about falling of the grids).
End of explanation
"""
print(b.get_parameter(qualifier='pblum_mode', dataset='lc01'))
"""
Explanation: Relevant Parameters & Methods
A pblum_mode parameter exists for each LC dataset in the bundle. This parameter defines how passband luminosities are handled. The subsections below describe the use and parameters exposed depending on the value of this parameter.
End of explanation
"""
print(b.compute_pblums())
"""
Explanation: For any of these modes, you can expose the intrinsic (excluding extrinsic effects such as spots and irradiation) and extrinsic computed luminosities of each star (in each dataset) by calling b.compute_pblums.
Note that as its an aspect-dependent effect, boosting is ignored in all of these output values.
End of explanation
"""
print(b.filter(qualifier='pblum'))
print(b.get_parameter(qualifier='pblum_component'))
b.set_value('pblum_component', 'secondary')
print(b.filter(qualifier='pblum'))
"""
Explanation: For more details, see the section below on "Accessing Model Luminosities" as well as the b.compute_pblums API docs
The table below provides a brief summary of all available pblum_mode options. Details are given in the remainder of the tutorial.
| pblum_mode | intent |
|-------------------|--------|
| component-coupled | provide pblum for one star (by default L1), compute pblums for other stars from atmosphere tables |
| decoupled | provide pblums for each star independently |
| absolute | obtain unscaled pblums, in passband watts, computed from atmosphere tables |
| dataset-scaled | calculate each pblum from the scaling factor between absolute fluxes and each dataset |
| dataset-coupled | same as above, but all datasets are scaled with the same scaling factor |
pblum_mode = 'component-coupled'
pblum_mode='component-coupled' is the default option and maintains the default behavior from previous releases. Here the user provides passband luminosities for a single star in the system for the given dataset/passband, and all other stars are scaled accordingly.
By default, the value of pblum is set for the primary star in the system, but we can instead provide pblum for the secondary star by changing the value of pblum_component.
End of explanation
"""
b.set_value('pblum_component', 'primary')
print(b.get_parameter(qualifier='pblum', component='primary'))
"""
Explanation: Note that in general (for the case of a spherical star), a pblum of 4pi will result in an out-of-eclipse flux of ~1.
Now let's just reset to the default case where the primary star has a provided (default) pblum of 4pi.
End of explanation
"""
print(b.compute_pblums())
"""
Explanation: NOTE: other parameters also affect flux-levels, including limb darkening, third light, boosting, irradiation, and distance
If we call b.compute_pblums, we'll see that the computed intrinsic luminosity of the primary star (pblum@primary@lc01) matches the value of the parameter above.
End of explanation
"""
b.run_compute()
afig, mplfig = b.plot(show=True)
"""
Explanation: Let's see how changing the value of pblum affects the computed light curve. By default, pblum is set to be 4 pi, giving a total flux for the primary star of ~1.
Since the secondary star in the default binary is identical to the primary star, we'd expect an out-of-eclipse flux of the binary to be ~2.
End of explanation
"""
b.set_value('pblum', component='primary', value=2*np.pi)
print(b.compute_pblums())
b.run_compute()
afig, mplfig = b.plot(show=True)
"""
Explanation: If we now set pblum to be only 2 pi, we should expect the luminosities as well as entire light curve to be scaled in half.
End of explanation
"""
b.set_value('teff', component='secondary', value=0.5 * b.get_value('teff', component='primary'))
print(b.filter(qualifier='teff'))
print(b.compute_pblums())
b.run_compute()
afig, mplfig = b.plot(show=True)
"""
Explanation: And if we halve the temperature of the secondary star, the resulting light curve changes to the new sum of fluxes: the primary star dominates since the secondary star's flux is reduced by a factor of 16, so we expect a total out-of-eclipse flux of ~0.5 + ~0.5/16 = ~0.53.
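The quoted number follows from the Stefan-Boltzmann scaling of surface flux with temperature (at fixed radius, luminosity goes as T^4), a quick back-of-the-envelope check:

```python
# halving the temperature divides a star's flux contribution by 2**4 = 16
primary_flux = 0.5
secondary_flux = 0.5 * (0.5 ** 4)      # 0.5 / 16
total = primary_flux + secondary_flux  # 0.53125, i.e. ~0.53
```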
End of explanation
"""
b.set_value_all('teff', 6000)
b.set_value_all('pblum', 4*np.pi)
"""
Explanation: Let us undo our changes before we look at decoupled luminosities.
End of explanation
"""
b.set_value('pblum_mode', 'decoupled')
"""
Explanation: pblum_mode = 'decoupled'
The luminosities are decoupled when pblums are provided for the individual components. To accomplish this, set pblum_mode to 'decoupled'.
End of explanation
"""
print(b.filter(qualifier='pblum'))
"""
Explanation: Now we see that both pblum parameters are available and can have different values.
End of explanation
"""
b.set_value_all('pblum', 4*np.pi)
print(b.compute_pblums())
b.run_compute()
afig, mplfig = b.plot(show=True)
"""
Explanation: If we set these to 4pi, then we'd expect each star to contribute 1.0 in flux units, meaning the baseline of the light curve should be at approximately 2.0
End of explanation
"""
print(b.filter(qualifier='teff'))
b.set_value('teff', component='secondary', value=3000)
print(b.compute_pblums())
b.run_compute()
afig, mplfig = b.plot(show=True)
"""
Explanation: Now let's make a significant temperature-ratio by making a very cool secondary star. Since the luminosities are decoupled - this temperature change won't affect the resulting light curve very much (compare this to the case above with coupled luminosities). What is happening here is that even though the secondary star is cooler, its luminosity is being rescaled to the same value as the primary star, so the eclipse depth doesn't change (you would see a similar lack-of-effect if you changed the radii - although in that case the eclipse widths would still change due to the change in geometry).
End of explanation
"""
b.set_value_all('teff', 6000)
b.set_value_all('pblum', 4*np.pi)
"""
Explanation: In most cases you will not want decoupled luminosities as they can easily break the self-consistency of your model.
Now we'll just undo our changes before we look at accessing model luminosities.
End of explanation
"""
b.set_value('pblum_mode', 'absolute')
"""
Explanation: pblum_mode = 'absolute'
By setting pblum_mode to 'absolute', luminosities and fluxes will be returned in absolute units and not rescaled. Note that third light and distance will still affect the resulting flux levels.
End of explanation
"""
print(b.filter(qualifier='pblum'))
print(b.compute_pblums())
b.run_compute()
afig, mplfig = b.plot(show=True)
"""
Explanation: As we no longer provide pblum values to scale, those parameters are not visible when filtering.
End of explanation
"""
fluxes = b.get_value('fluxes', context='model') * 0.8 + (np.random.random(101) * 0.1)
b.set_value('fluxes', context='dataset', value=fluxes)
afig, mplfig = b.plot(context='dataset', show=True)
"""
Explanation: (note the exponent on the y-axis of the above figure)
pblum_mode = 'dataset-scaled'
Setting pblum_mode to 'dataset-scaled' is only allowed if fluxes are attached to the dataset itself. Let's use our existing model to generate "fake" data and then populate the dataset.
End of explanation
"""
b.set_value('pblum_mode', 'dataset-scaled')
print(b.compute_pblums())
b.run_compute()
afig, mplfig = b.plot(show=True)
"""
Explanation: Now if we set pblum_mode to 'dataset-scaled', the resulting model will be scaled to best fit the data. Note that in this mode we cannot access computed luminosities via b.compute_pblums (without providing model - we'll get back to that in a minute), nor can we access scaled intensities from the mesh.
End of explanation
"""
print(b.get_parameter(qualifier='flux_scale', context='model'))
"""
Explanation: The model stores the scaling factor used between the absolute fluxes and the relative fluxes that best fit the observational data.
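The idea of such a scaling factor can be illustrated as a one-parameter least-squares fit. This is a simplified sketch (PHOEBE's actual flux_scale computation may include per-point weights and other details):

```python
import numpy as np

def best_scale(model_flux, observed_flux):
    # least-squares multiplicative factor s minimizing |obs - s * model|^2
    model_flux = np.asarray(model_flux, dtype=float)
    observed_flux = np.asarray(observed_flux, dtype=float)
    return model_flux @ observed_flux / (model_flux @ model_flux)
```

Multiplying the absolute model fluxes by this factor brings them onto the scale of the observations.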
End of explanation
"""
print(b.compute_pblums(model='latest'))
"""
Explanation: We can then access the scaled luminosities by passing the model tag to b.compute_pblums. Keep in mind this only scales the absolute luminosities by flux_scale so assumes a fixed distance@system. This is useful though if we wanted to use 'dataset-scaled' to get an estimate for pblum before changing to 'component-coupled' and optimizing or marginalizing over pblum.
End of explanation
"""
b.set_value('pblum_mode', 'component-coupled')
b.set_value('fluxes', context='dataset', value=[])
"""
Explanation: Before moving on, let's remove our fake data (and reset pblum_mode or else PHOEBE will complain about the lack of data).
End of explanation
"""
b.add_dataset('lc', times=phoebe.linspace(0,1,101),
ld_mode='manual', ld_func='linear', ld_coeffs=[0],
passband='Johnson:B', dataset='lc02')
b.set_value('pblum_mode', dataset='lc02', value='dataset-coupled')
"""
Explanation: pblum_mode = 'dataset-coupled'
Setting pblum_mode to 'dataset-coupled' allows for the same scaling factor to be applied to two different datasets. In order to see this in action, we'll add another LC dataset in a different passband.
End of explanation
"""
print(b.filter('pblum*'))
print(b.compute_pblums())
b.run_compute()
afig, mplfig = b.plot(show=True, legend=True)
"""
Explanation: Here we see the pblum_mode@lc01 is set to 'component-coupled' meaning it will follow the rules described earlier where pblum is provided for the primary component and the secondary is coupled to that. pblum_mode@lc02 is set to 'dataset-coupled' with pblum_dataset@lc01 pointing to 'lc01'.
End of explanation
"""
print(b.compute_pblums())
"""
Explanation: Accessing Model Luminosities
Passband luminosities at t0@system per-star (including following all coupling logic) can be computed and exposed on the fly by calling compute_pblums.
End of explanation
"""
print(b.compute_pblums(dataset='lc01', component='primary'))
"""
Explanation: By default this exposes 'pblum' and 'pblum_ext' for all component-dataset pairs in the form of a dictionary. Alternatively, you can pass a label or list of labels to component and/or dataset.
End of explanation
"""
b.add_dataset('mesh', times=np.linspace(0,1,5), dataset='mesh01', columns=['areas', 'pblum_ext@lc01', 'ldint@lc01', 'ptfarea@lc01', 'abs_normal_intensities@lc01', 'normal_intensities@lc01'])
b.run_compute()
"""
Explanation: For more options, see the b.compute_pblums API docs.
Note that this same logic is applied (at t0) to initialize all passband luminosities within the backend, so there is no need to call compute_pblums before run_compute.
In order to access passband luminosities at times other than t0, you can add a mesh dataset and request the pblum_ext column to be exposed. For stars that have pblum defined (as opposed to coupled to another star or dataset), this value should be equivalent to the value of the parameter (at t0 if no features or irradiation are present, and in simple circular cases will probably be equivalent at all times).
Let's create a mesh dataset at a few times and then access the synthetic luminosities.
End of explanation
"""
print(b.filter(qualifier='pblum_ext', context='model').twigs)
"""
Explanation: Since the luminosities are passband-dependent, they are stored with the same dataset as the light curve (or RV), but with the mesh method, and are available at each of the times at which a mesh was stored.
End of explanation
"""
t0 = b.get_value('t0@system')
print(b.get_value(qualifier='pblum_ext', time=t0, component='primary', kind='mesh', context='model'))
print(b.get_value('pblum@primary@dataset'))
print(b.compute_pblums(component='primary', dataset='lc01'))
"""
Explanation: Now let's compare the value of the synthetic luminosities to those of the input pblum
End of explanation
"""
print(b.get_value(qualifier='pblum_ext', time=t0, component='primary', kind='mesh', context='model'))
print(b.get_value(qualifier='pblum_ext', time=t0, component='secondary', kind='mesh', context='model'))
"""
Explanation: In this case, since our two stars are identical, the synthetic luminosity of the secondary star should be the same as the primary (and the same as pblum@primary).
End of explanation
"""
b['teff@secondary@component'] = 3000
print(b.compute_pblums(dataset='lc01'))
b.run_compute()
print(b.get_value(qualifier='pblum_ext', time=t0, component='primary', kind='mesh', context='model'))
print(b.get_value(qualifier='pblum_ext', time=t0, component='secondary', kind='mesh', context='model'))
"""
Explanation: However, if we change the temperature of the secondary star again, since the pblums are coupled, we'd expect the synthetic luminosity of the primary to remain fixed but the secondary to decrease.
End of explanation
"""
print(b['ld_mode'])
print(b['atm'])
b.run_compute(irrad_method='horvat')
print(b.get_value(qualifier='pblum_ext', time=t0, component='primary', kind='mesh', context='model'))
print(b.get_value('pblum@primary@dataset'))
print(b.compute_pblums(dataset='lc01', irrad_method='horvat'))
"""
Explanation: And lastly, if we re-enable irradiation, we'll see that the extrinsic luminosities do not match the prescribed value of pblum (an intrinsic luminosity).
End of explanation
"""
b.set_value_all('teff@component', 6000)
"""
Explanation: Now, we'll just undo our changes before continuing
End of explanation
"""
b.run_compute()
areas = b.get_value(qualifier='areas', dataset='mesh01', time=t0, component='primary', unit='m^2')
ldint = b.get_value(qualifier='ldint', component='primary', time=t0)
ptfarea = b.get_value(qualifier='ptfarea', component='primary', time=t0)
abs_normal_intensities = b.get_value(qualifier='abs_normal_intensities', dataset='lc01', time=t0, component='primary')
normal_intensities = b.get_value(qualifier='normal_intensities', dataset='lc01', time=t0, component='primary')
"""
Explanation: Role of Pblum
Let's now look at the intensities in the mesh to see how they're being scaled under-the-hood. First we'll recompute our model with the equal temperatures and irradiation disabled (to ignore the difference between pblum and pblum_ext).
End of explanation
"""
print(np.median(abs_normal_intensities))
"""
Explanation: 'abs_normal_intensities' are the intensities per triangle in absolute units, i.e. W/m^3.
End of explanation
"""
print(np.median(normal_intensities))
"""
Explanation: The values of 'normal_intensities', however, are significantly smaller (in this case). These are the intensities in relative units, which will eventually be integrated to give us the flux for a light curve.
End of explanation
"""
pblum = b.get_value(qualifier='pblum', component='primary', context='dataset')
print(np.sum(normal_intensities * ldint * np.pi * areas) * ptfarea, pblum)
"""
Explanation: 'normal_intensities' are scaled from 'abs_normal_intensities' so that the computed luminosity matches the prescribed luminosity (pblum).
Here we compute the luminosity by summing each triangle's normal intensity weighted by its limb-darkening integral (ldint) and area, multiplying by pi to account for the intensity emitted in all directions in the solid angle, and by the passband transmission function area (ptfarea).
End of explanation
"""
# --- IS-ENES-Data/submission_forms | test/prov/old/prov-test1.ipynb | apache-2.0 ---
%load_ext autoreload
%autoreload 2
from prov.model import ProvDocument
d1 = ProvDocument()
d1.deserialize?
"""
Explanation: Representation of data submission workflow components based on W3C-PROV
End of explanation
"""
from IPython.display import display, Image
Image(filename='key-concepts.png')
from dkrz_forms import form_handler
#from project_cordex import cordex_dict
#from project_cordex import name_space
# add namespaces for submission provenance capture
#for key,value in name_space.iteritems():
# d1.add_namespace(key,value)
#d1.add_namespace()
# to do: look into some predefined vocabs, e.g. dublin core, iso19139,foaf etc.
d1.add_namespace("enes_entity",'http://www.enes.org/enes_entitiy#')
d1.add_namespace('enes_agent','http://www.enes.org/enes_agent#')
d1.add_namespace('data_collection','http://www.enes.org/enes_entity/file_collection')
d1.add_namespace('data_manager','http://www.enes.org/enes_agent/data_manager')
d1.add_namespace('data_provider','http://www.enes.org/enes_agent/data_provider')
d1.add_namespace('subm','http://www.enes.org/enes_entity/data_submsission')
d1.add_namespace('foaf','http://xmlns.com/foaf/0.1/')
"""
Explanation: The model follows the concepts described in https://www.w3.org/TR/prov-primer/
End of explanation
"""
# later: organize things in bundles
data_manager_ats = {'foaf:givenName':'Peter','foaf:mbox':'lenzen@dkzr.de'}
d1.entity('subm:empty')  # 'subm' is the prefix declared above and used as in_state below
def add_stage(agent,activity,in_state,out_state):
# in_stage exists, out_stage is generated
d1.agent(agent, data_manager_ats)
d1.activity(activity)
d1.entity(out_state)
d1.wasGeneratedBy(out_state,activity)
d1.used(activity,in_state)
d1.wasAssociatedWith(activity,agent)
d1.wasDerivedFrom(out_state,in_state)
import json
form_file = open('/home/stephan/tmp/CORDEX/Kindermann_test1.json',"r")
json_info = form_file.read()
#json_info["__type__"] = "sf",
form_file.close()
sf_dict = json.loads(json_info)
sf = form_handler.dict_to_form(sf_dict)
print sf.__dict__
data_provider = sf.first_name+'_'+sf.last_name
submission_manager = sf.sub['responsible_person']
ingest_manager = sf.ing['responsible_person']
qa_manager = sf.ing['responsible_person']
publication_manager = sf.pub['responsible_person']
add_stage(agent='data_provider:test_user_id',activity='subm:submit',in_state="subm:empty",out_state='subm:out1_sub')
add_stage(agent='data_manager:peter_lenzen_id',activity='subm:review',in_state="subm:out1_sub",out_state='subm:out1_rev')
add_stage(agent='data_manager:peter_lenzen_id',activity='subm:ingest',in_state="subm:out1_rev",out_state='subm:out1_ing')
add_stage(agent='data_manager:hdh_id',activity='subm:check',in_state="subm:out1_ing",out_state='subm:out1_che')
add_stage(agent='data_manager:katharina_b_id',activity='subm:publish',in_state="subm:out1_che",out_state='subm:out1_pub')
add_stage(agent='data_manager:lta_id',activity='subm:archive',in_state="subm:out1_pub",out_state='subm:out1_arch')
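The six add_stage calls above build a linear chain: each stage consumes the previous stage's output entity. A stdlib-only sketch (mirroring the identifiers used in the calls above) makes that invariant explicit and checkable:

```python
# (agent, activity, input_state, output_state) for each workflow stage,
# copied from the add_stage calls above.
stages = [
    ('data_provider:test_user_id',   'subm:submit',  'subm:empty',    'subm:out1_sub'),
    ('data_manager:peter_lenzen_id', 'subm:review',  'subm:out1_sub', 'subm:out1_rev'),
    ('data_manager:peter_lenzen_id', 'subm:ingest',  'subm:out1_rev', 'subm:out1_ing'),
    ('data_manager:hdh_id',          'subm:check',   'subm:out1_ing', 'subm:out1_che'),
    ('data_manager:katharina_b_id',  'subm:publish', 'subm:out1_che', 'subm:out1_pub'),
    ('data_manager:lta_id',          'subm:archive', 'subm:out1_pub', 'subm:out1_arch'),
]

# Invariant: the chain is linear -- each stage's input is the previous output.
for prev, cur in zip(stages, stages[1:]):
    assert cur[2] == prev[3]
print('chain of %d stages is well-formed' % len(stages))
```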
"""
Explanation: Example namespaces
(from DOI: 10.3390/ijgi5030038; more at https://github.com/tsunagun/vocab/blob/master/all_20130125.csv)
owl      Web Ontology Language                  http://www.w3.org/2002/07/owl#
dctype   DCMI Type Vocabulary                   http://purl.org/dc/dcmitype/
dco      DCO Ontology                           http://info.deepcarbon.net/schema#
prov     PROV Ontology                          http://www.w3.org/ns/prov#
skos     Simple Knowledge Organization System   http://www.w3.org/2004/02/skos/core#
foaf     FOAF Ontology                          http://xmlns.com/foaf/0.1/
vivo     VIVO Ontology                          http://vivoweb.org/ontology/core#
bibo     Bibliographic Ontology                 http://purl.org/ontology/bibo/
xsd      XML Schema Datatype                    http://www.w3.org/2001/XMLSchema#
rdf      Resource Description Framework         http://www.w3.org/1999/02/22-rdf-syntax-ns#
rdfs     Resource Description Framework Schema  http://www.w3.org/2000/01/rdf-schema#
End of explanation
"""
%matplotlib inline
d1.plot()
# d1.wasAttributedTo(data_submission, '????')  # placeholder: 'data_submission' is not defined in this notebook
"""
Explanation: assign information to provenance graph nodes and edges
End of explanation
"""
#d1.get_records()
submission = d1.get_record('subm:out1_sub')[0]
review = d1.get_record('subm:out1_rev')[0]
ingest = d1.get_record('subm:out1_ing')[0]
check = d1.get_record('subm:out1_che')[0]
publication = d1.get_record('subm:out1_pub')[0]
lta = d1.get_record('subm:out1_arch')[0]
res = form_handler.prefix_dict(sf.sub,'sub',sf.sub.keys())
res['sub:status']="fertig"
print res
ing = form_handler.prefix_dict(sf.ing,'ing',sf.ing.keys())
che = form_handler.prefix_dict(sf.che,'che',sf.che.keys())
pub = form_handler.prefix_dict(sf.pub,'pub',sf.pub.keys())
submission.add_attributes(res)
ingest.add_attributes(ing)
check.add_attributes(che)
publication.add_attributes(pub)
che_act = d1.get_record('subm:check')
tst = che_act[0]
test_dict = {'subm:test':'test'}
tst.add_attributes(test_dict)
print tst
tst.FORMAL_ATTRIBUTES
che_act = d1.get_record('subm:check')
#tst.formal_attributes
#tst.FORMAL_ATTRIBUTES
tst.add_attributes({'foaf:name':'tst'})
print tst.attributes
#for i in tst:
# print i
#tst.insert([('subm:givenName','sk')])
import sys
sys.path.append('/home/stephan/Repos/ENES-EUDAT/submission_forms')
from dkrz_forms import form_handler
sf,repo = form_handler.init_form("CORDEX")
init_dict = sf.__dict__
sub_form = form_handler.prefix(sf,'subm',sf.__dict__.keys())
sub_dict = sub_form.__dict__
#init_state = d1.get_record('subm:empty')[0]
#init_state.add_attributes(init_dict)
sub_state = d1.get_record('subm:out1_sub')[0]
sub_state.add_attributes(sub_dict)
tst_dict = {'test1':'val1','test2':'val2'}
tst = form_handler.submission_form(tst_dict)
print tst.__dict__
#print result.__dict__    # 'result' is not defined here; left over from exploration
#dict_from_class(sf)      # 'dict_from_class' is not defined here
"""
Explanation: Transform submission object to a provenance graph
End of explanation
"""
# --- dnc1994/MachineLearning-UW | ml-classification/module-6-decision-tree-practical-assignment-solution.ipynb | mit ---
import graphlab
"""
Explanation: Decision Trees in Practice
In this assignment we will explore various techniques for preventing overfitting in decision trees. We will extend the binary decision tree implementation from the previous assignment, so you will need your solutions from that assignment to build on.
In this assignment you will:
Implement binary decision trees with different early stopping methods.
Compare models with different stopping parameters.
Visualize the concept of overfitting in decision trees.
Let's get started!
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create.
End of explanation
"""
loans = graphlab.SFrame('lending-club-data.gl/')
"""
Explanation: Load LendingClub Dataset
This assignment will use the LendingClub dataset used in the previous two assignments.
End of explanation
"""
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.remove_column('bad_loans')
"""
Explanation: As before, we reassign the labels to have +1 for a safe loan, and -1 for a risky (bad) loan.
End of explanation
"""
features = ['grade', # grade of the loan
'term', # the term of the loan
'home_ownership', # home_ownership status: own, mortgage or rent
'emp_length', # number of years of employment
]
target = 'safe_loans'
loans = loans[features + [target]]
"""
Explanation: We will be using the same 4 categorical features as in the previous assignment:
1. grade of the loan
2. the length of the loan term
3. the home ownership status: own, mortgage, rent
4. number of years of employment.
In the dataset, each of these features is a categorical feature. Since we are building a binary decision tree, we will have to convert this to binary data in a subsequent section using 1-hot encoding.
End of explanation
"""
safe_loans_raw = loans[loans[target] == 1]
risky_loans_raw = loans[loans[target] == -1]
# Since there are less risky loans than safe loans, find the ratio of the sizes
# and use that percentage to undersample the safe loans.
percentage = len(risky_loans_raw)/float(len(safe_loans_raw))
safe_loans = safe_loans_raw.sample(percentage, seed = 1)
risky_loans = risky_loans_raw
loans_data = risky_loans.append(safe_loans)
print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data))
print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data))
print "Total number of loans in our new dataset :", len(loans_data)
"""
Explanation: Subsample dataset to make sure classes are balanced
Just as we did in the previous assignment, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We used seed = 1 so everyone gets the same results.
End of explanation
"""
loans_data = risky_loans.append(safe_loans)
for feature in features:
loans_data_one_hot_encoded = loans_data[feature].apply(lambda x: {x: 1})
loans_data_unpacked = loans_data_one_hot_encoded.unpack(column_name_prefix=feature)
# Change None's to 0's
for column in loans_data_unpacked.column_names():
loans_data_unpacked[column] = loans_data_unpacked[column].fillna(0)
loans_data.remove_column(feature)
loans_data.add_columns(loans_data_unpacked)
"""
Explanation: Note: There are many approaches for dealing with imbalanced data, including some where we modify the learning algorithm. These approaches are beyond the scope of this course, but some of them are reviewed in this paper. For this assignment, we use the simplest possible approach, where we subsample the overly represented class to get a more balanced dataset. In general, and especially when the data is highly imbalanced, we recommend using more advanced methods.
Transform categorical data into binary features
Since we are implementing binary decision trees, we transform our categorical data into binary data using 1-hot encoding, just as in the previous assignment. Here is the summary of that discussion:
For instance, the home_ownership feature represents the home ownership status of the loanee, which is either own, mortgage or rent. For example, if a data point has the feature
{'home_ownership': 'RENT'}
we want to turn this into three features:
{
'home_ownership = OWN' : 0,
'home_ownership = MORTGAGE' : 0,
'home_ownership = RENT' : 1
}
Since this code requires a few Python and GraphLab tricks, feel free to use this block of code as is. Refer to the API documentation for a deeper understanding.
End of explanation
"""
features = loans_data.column_names()
features.remove('safe_loans') # Remove the response variable
features
"""
Explanation: The feature columns now look like this:
End of explanation
"""
train_data, validation_set = loans_data.random_split(.8, seed=1)
"""
Explanation: Train-Validation split
We split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set. We use seed=1 so that everyone gets the same result.
End of explanation
"""
def reached_minimum_node_size(data, min_node_size):
# Return True if the number of data points is less than or equal to the minimum node size.
## YOUR CODE HERE
return len(data) <= min_node_size
"""
Explanation: Early stopping methods for decision trees
In this section, we will extend the binary tree implementation from the previous assignment in order to handle some early stopping conditions. Recall the 3 early stopping methods that were discussed in lecture:
Reached a maximum depth. (set by parameter max_depth).
Reached a minimum node size. (set by parameter min_node_size).
Don't split if the gain in error reduction is too small. (set by parameter min_error_reduction).
For the rest of this assignment, we will refer to these three as early stopping conditions 1, 2, and 3.
Early stopping condition 1: Maximum depth
Recall that we already implemented the maximum depth stopping condition in the previous assignment. In this assignment, we will experiment with this condition a bit more and also write code to implement the 2nd and 3rd early stopping conditions.
We will be reusing code from the previous assignment and then building upon it. We will alert you when you reach a function that was part of the previous assignment so that you can simply copy and paste your previous code.
Early stopping condition 2: Minimum node size
The function reached_minimum_node_size takes 2 arguments:
The data (from a node)
The minimum number of data points that a node is allowed to split on, min_node_size.
This function simply calculates whether the number of data points at a given node is less than or equal to the specified minimum node size. This function will be used to detect this early stopping condition in the decision_tree_create function.
Fill in the parts of the function below where you find ## YOUR CODE HERE. There is one instance in the function below.
End of explanation
"""
def error_reduction(error_before_split, error_after_split):
# Return the error before the split minus the error after the split.
## YOUR CODE HERE
return error_before_split - error_after_split
"""
Explanation: Quiz question: Given an intermediate node with 6 safe loans and 3 risky loans, if the min_node_size parameter is 10, what should the tree learning algorithm do next?
Early stopping condition 3: Minimum gain in error reduction
The function error_reduction takes 2 arguments:
The error before a split, error_before_split.
The error after a split, error_after_split.
This function computes the gain in error reduction, i.e., the difference between the error before the split and that after the split. This function will be used to detect this early stopping condition in the decision_tree_create function.
Fill in the parts of the function below where you find ## YOUR CODE HERE. There is one instance in the function below.
End of explanation
"""
def intermediate_node_num_mistakes(labels_in_node):
# Corner case: If labels_in_node is empty, return 0
if len(labels_in_node) == 0:
return 0
# Count the number of 1's (safe loans)
## YOUR CODE HERE
num_positive = sum([x == 1 for x in labels_in_node])
# Count the number of -1's (risky loans)
## YOUR CODE HERE
num_negative = sum([x == -1 for x in labels_in_node])
# Return the number of mistakes that the majority classifier makes.
## YOUR CODE HERE
return min(num_positive, num_negative)
"""
Explanation: Quiz question: Assume an intermediate node has 6 safe loans and 3 risky loans. For each of 4 possible features to split on, the error reduction is 0.0, 0.05, 0.1, and 0.14, respectively. If the minimum gain in error reduction parameter is set to 0.2, what should the tree learning algorithm do next?
Grabbing binary decision tree helper functions from past assignment
Recall from the previous assignment that we wrote a function intermediate_node_num_mistakes that calculates the number of misclassified examples when predicting the majority class. This is used to help determine which feature is best to split on at a given node of the tree.
Please copy and paste your code for intermediate_node_num_mistakes here.
End of explanation
"""
def best_splitting_feature(data, features, target):
best_feature = None # Keep track of the best feature
best_error = 10 # Keep track of the best error so far
    # Note: Since error is always <= 1, we should initialize it with something larger than 1.
# Convert to float to make sure error gets computed correctly.
num_data_points = float(len(data))
# Loop through each feature to consider splitting on that feature
for feature in features:
# The left split will have all data points where the feature value is 0
left_split = data[data[feature] == 0]
# The right split will have all data points where the feature value is 1
## YOUR CODE HERE
right_split = data[data[feature] == 1]
# Calculate the number of misclassified examples in the left split.
# Remember that we implemented a function for this! (It was called intermediate_node_num_mistakes)
# YOUR CODE HERE
left_mistakes = intermediate_node_num_mistakes(left_split['safe_loans'])
# Calculate the number of misclassified examples in the right split.
## YOUR CODE HERE
right_mistakes = intermediate_node_num_mistakes(right_split['safe_loans'])
# Compute the classification error of this split.
# Error = (# of mistakes (left) + # of mistakes (right)) / (# of data points)
## YOUR CODE HERE
error = float(left_mistakes + right_mistakes) / len(data)
# If this is the best error we have found so far, store the feature as best_feature and the error as best_error
## YOUR CODE HERE
if error < best_error:
best_feature, best_error = feature, error
return best_feature # Return the best feature we found
"""
Explanation: We then wrote a function best_splitting_feature that finds the best feature to split on given the data and a list of features to consider.
Please copy and paste your best_splitting_feature code here.
End of explanation
"""
def create_leaf(target_values):
# Create a leaf node
leaf = {'splitting_feature' : None,
'left' : None,
'right' : None,
'is_leaf': True } ## YOUR CODE HERE
# Count the number of data points that are +1 and -1 in this node.
num_ones = len(target_values[target_values == +1])
num_minus_ones = len(target_values[target_values == -1])
# For the leaf node, set the prediction to be the majority class.
# Store the predicted class (1 or -1) in leaf['prediction']
if num_ones > num_minus_ones:
leaf['prediction'] = 1 ## YOUR CODE HERE
else:
leaf['prediction'] = -1 ## YOUR CODE HERE
# Return the leaf node
return leaf
"""
Explanation: Finally, recall the function create_leaf from the previous assignment, which creates a leaf node given a set of target values.
Please copy and paste your create_leaf code here.
End of explanation
"""
def decision_tree_create(data, features, target, current_depth = 0,
max_depth = 10, min_node_size=1,
min_error_reduction=0.0):
remaining_features = features[:] # Make a copy of the features.
target_values = data[target]
print "--------------------------------------------------------------------"
print "Subtree, depth = %s (%s data points)." % (current_depth, len(target_values))
# Stopping condition 1: All nodes are of the same type.
if intermediate_node_num_mistakes(target_values) == 0:
print "Stopping condition 1 reached. All data points have the same target value."
return create_leaf(target_values)
# Stopping condition 2: No more features to split on.
if remaining_features == []:
print "Stopping condition 2 reached. No remaining features."
return create_leaf(target_values)
# Early stopping condition 1: Reached max depth limit.
if current_depth >= max_depth:
print "Early stopping condition 1 reached. Reached maximum depth."
return create_leaf(target_values)
# Early stopping condition 2: Reached the minimum node size.
# If the number of data points is less than or equal to the minimum size, return a leaf.
if reached_minimum_node_size(data, min_node_size): ## YOUR CODE HERE
print "Early stopping condition 2 reached. Reached minimum node size."
return create_leaf(target_values) ## YOUR CODE HERE
# Find the best splitting feature
splitting_feature = best_splitting_feature(data, features, target)
# Split on the best feature that we found.
left_split = data[data[splitting_feature] == 0]
right_split = data[data[splitting_feature] == 1]
# Early stopping condition 3: Minimum error reduction
# Calculate the error before splitting (number of misclassified examples
# divided by the total number of examples)
error_before_split = intermediate_node_num_mistakes(target_values) / float(len(data))
# Calculate the error after splitting (number of misclassified examples
# in both groups divided by the total number of examples)
left_mistakes = intermediate_node_num_mistakes(left_split['safe_loans']) ## YOUR CODE HERE
right_mistakes = intermediate_node_num_mistakes(right_split['safe_loans']) ## YOUR CODE HERE
error_after_split = (left_mistakes + right_mistakes) / float(len(data))
# If the error reduction is LESS THAN OR EQUAL TO min_error_reduction, return a leaf.
if error_reduction(error_before_split, error_after_split) <= min_error_reduction: ## YOUR CODE HERE
print "Early stopping condition 3 reached. Minimum error reduction."
return create_leaf(target_values) ## YOUR CODE HERE
remaining_features.remove(splitting_feature)
print "Split on feature %s. (%s, %s)" % (\
splitting_feature, len(left_split), len(right_split))
# Repeat (recurse) on left and right subtrees
left_tree = decision_tree_create(left_split, remaining_features, target,
current_depth + 1, max_depth, min_node_size, min_error_reduction)
## YOUR CODE HERE
right_tree = decision_tree_create(right_split, remaining_features, target, current_depth + 1, max_depth, min_node_size, min_error_reduction)
return {'is_leaf' : False,
'prediction' : None,
'splitting_feature': splitting_feature,
'left' : left_tree,
'right' : right_tree}
"""
Explanation: Incorporating new early stopping conditions in binary decision tree implementation
Now, you will implement a function that builds a decision tree handling the three early stopping conditions described in this assignment. In particular, you will write code to detect early stopping conditions 2 and 3. You implemented above the functions needed to detect these conditions. The 1st early stopping condition, max_depth, was implemented in the previous assigment and you will not need to reimplement this. In addition to these early stopping conditions, the typical stopping conditions of having no mistakes or no more features to split on (which we denote by "stopping conditions" 1 and 2) are also included as in the previous assignment.
Implementing early stopping condition 2: minimum node size:
Step 1: Use the function reached_minimum_node_size that you implemented earlier to write an if condition to detect whether we have hit the base case, i.e., the node does not have enough data points and should be turned into a leaf. Don't forget to use the min_node_size argument.
Step 2: Return a leaf. This line of code should be the same as the other (pre-implemented) stopping conditions.
Implementing early stopping condition 3: minimum error reduction:
Note: This has to come after finding the best splitting feature so we can calculate the error after splitting in order to calculate the error reduction.
Step 1: Calculate the classification error before splitting. Recall that classification error is defined as:
$$
\text{classification error} = \frac{\text{# mistakes}}{\text{# total examples}}
$$
* Step 2: Calculate the classification error after splitting. This requires calculating the number of mistakes in the left and right splits, and then dividing by the total number of examples.
* Step 3: Use the function error_reduction to that you implemented earlier to write an if condition to detect whether the reduction in error is less than the constant provided (min_error_reduction). Don't forget to use that argument.
* Step 4: Return a leaf. This line of code should be the same as the other (pre-implemented) stopping conditions.
Fill in the places where you find ## YOUR CODE HERE. There are seven places in this function for you to fill in.
End of explanation
"""
def count_nodes(tree):
if tree['is_leaf']:
return 1
return 1 + count_nodes(tree['left']) + count_nodes(tree['right'])
"""
Explanation: Here is a function to count the nodes in your tree:
End of explanation
"""
small_decision_tree = decision_tree_create(train_data, features, 'safe_loans', max_depth = 2,
min_node_size = 10, min_error_reduction=0.0)
if count_nodes(small_decision_tree) == 7:
print 'Test passed!'
else:
print 'Test failed... try again!'
print 'Number of nodes found :', count_nodes(small_decision_tree)
    print 'Number of nodes that should be there : 7'
"""
Explanation: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
End of explanation
"""
my_decision_tree_new = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 100, min_error_reduction=0.0)
"""
Explanation: Build a tree!
Now that your code is working, we will train a tree model on the train_data with
* max_depth = 6
* min_node_size = 100,
* min_error_reduction = 0.0
Warning: This code block may take a minute to learn.
End of explanation
"""
my_decision_tree_old = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=-1)
"""
Explanation: Let's now train a tree model ignoring early stopping conditions 2 and 3 so that we get the same tree as in the previous assignment. To ignore these conditions, we set min_node_size=0 and min_error_reduction=-1 (a negative value).
End of explanation
"""
def classify(tree, x, annotate = False):
# if the node is a leaf node.
if tree['is_leaf']:
if annotate:
print "At leaf, predicting %s" % tree['prediction']
return tree['prediction']
else:
# split on feature.
split_feature_value = x[tree['splitting_feature']]
if annotate:
print "Split on %s = %s" % (tree['splitting_feature'], split_feature_value)
if split_feature_value == 0:
return classify(tree['left'], x, annotate)
else:
return classify(tree['right'], x, annotate)
"""
Explanation: Making predictions
Recall that in the previous assignment you implemented a function classify to classify a new point x using a given tree.
Please copy and paste your classify code here.
End of explanation
"""
validation_set[0]
print 'Predicted class: %s ' % classify(my_decision_tree_new, validation_set[0])
"""
Explanation: Now, let's consider the first example of the validation set and see what the my_decision_tree_new model predicts for this data point.
End of explanation
"""
classify(my_decision_tree_new, validation_set[0], annotate = True)
"""
Explanation: Let's add some annotations to our prediction to see what the prediction path was that lead to this predicted class:
End of explanation
"""
classify(my_decision_tree_old, validation_set[0], annotate = True)
"""
Explanation: Let's now recall the prediction path for the decision tree learned in the previous assignment, which we recreated here as my_decision_tree_old.
End of explanation
"""
def evaluate_classification_error(tree, data):
# Apply the classify(tree, x) to each row in your data
prediction = data.apply(lambda x: classify(tree, x))
# Once you've made the predictions, calculate the classification error and return it
## YOUR CODE HERE
return (prediction != data['safe_loans']).sum() / float(len(data))
"""
Explanation: Quiz question: For my_decision_tree_new trained with max_depth = 6, min_node_size = 100, min_error_reduction=0.0, is the prediction path for validation_set[0] shorter, longer, or the same as for my_decision_tree_old that ignored the early stopping conditions 2 and 3?
Quiz question: For my_decision_tree_new trained with max_depth = 6, min_node_size = 100, min_error_reduction=0.0, is the prediction path for any point always shorter, always longer, always the same, shorter or the same, or longer or the same as for my_decision_tree_old that ignored the early stopping conditions 2 and 3?
Quiz question: For a tree trained on any dataset using max_depth = 6, min_node_size = 100, min_error_reduction=0.0, what is the maximum number of splits encountered while making a single prediction?
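To reason about the last quiz question, it helps to have a helper that counts how many splits a single prediction visits before reaching a leaf. Below is a minimal sketch against a hypothetical hand-built tree; it assumes only the dict layout used by decision_tree_create ('is_leaf', 'splitting_feature', 'left', 'right', 'prediction') and is not a tree learned from the loan data.

```python
# Sketch: count the number of splits along a single prediction path.
# The toy tree below is made up for illustration; feature names are
# hypothetical and only mimic the assignment's one-hot encoded features.

def leaf(prediction):
    return {'is_leaf': True, 'prediction': prediction}

def node(feature, left, right):
    return {'is_leaf': False, 'splitting_feature': feature,
            'left': left, 'right': right}

def prediction_path_length(tree, x):
    """Number of splits visited before reaching a leaf for data point x."""
    if tree['is_leaf']:
        return 0
    branch = 'left' if x[tree['splitting_feature']] == 0 else 'right'
    return 1 + prediction_path_length(tree[branch], x)

toy_tree = node('grade.A',
                node('term.36 months', leaf(-1), leaf(+1)),
                leaf(+1))

print(prediction_path_length(toy_tree, {'grade.A': 0, 'term.36 months': 1}))  # 2
print(prediction_path_length(toy_tree, {'grade.A': 1}))                        # 1
```

Whatever the early stopping settings, a tree trained with max_depth = 6 can never produce a path longer than 6 splits.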
Evaluating the model
Now let us evaluate the model that we have trained. You implemented this evaluation in the function evaluate_classification_error from the previous assignment.
Please copy and paste your evaluate_classification_error code here.
End of explanation
"""
evaluate_classification_error(my_decision_tree_new, validation_set)
"""
Explanation: Now, let's use this function to evaluate the classification error of my_decision_tree_new on the validation_set.
End of explanation
"""
evaluate_classification_error(my_decision_tree_old, validation_set)
"""
Explanation: Now, evaluate the validation error using my_decision_tree_old.
End of explanation
"""
model_1 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 2, min_node_size = 0, min_error_reduction=-1)
model_2 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6, min_node_size = 0, min_error_reduction=-1)
model_3 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 14, min_node_size = 0, min_error_reduction=-1)
"""
Explanation: Quiz question: Is the validation error of the new decision tree (using early stopping conditions 2 and 3) lower than, higher than, or the same as that of the old decision tree from the previous assignment?
Exploring the effect of max_depth
We will compare three models trained with different values of the stopping criterion. We intentionally picked models at the extreme ends (too small, just right, and too large).
Train three models with these parameters:
model_1: max_depth = 2 (too small)
model_2: max_depth = 6 (just right)
model_3: max_depth = 14 (may be too large)
For each of these three, we set min_node_size = 0 and min_error_reduction = -1.
Note: Each tree can take up to a few minutes to train. In particular, model_3 will probably take the longest to train.
End of explanation
"""
print "Training data, classification error (model 1):", evaluate_classification_error(model_1, train_data)
print "Training data, classification error (model 2):", evaluate_classification_error(model_2, train_data)
print "Training data, classification error (model 3):", evaluate_classification_error(model_3, train_data)
"""
Explanation: Evaluating the models
Let us evaluate the models on the train and validation data. Let us start by evaluating the classification error on the training data:
End of explanation
"""
print "Validation data, classification error (model 1):", evaluate_classification_error(model_1, validation_set)
print "Validation data, classification error (model 2):", evaluate_classification_error(model_2, validation_set)
print "Validation data, classification error (model 3):", evaluate_classification_error(model_3, validation_set)
"""
Explanation: Now evaluate the classification error on the validation data.
End of explanation
"""
def count_leaves(tree):
if tree['is_leaf']:
return 1
return count_leaves(tree['left']) + count_leaves(tree['right'])
"""
Explanation: Quiz Question: Which tree has the smallest error on the validation data?
Quiz Question: Does the tree with the smallest error in the training data also have the smallest error in the validation data?
Quiz Question: Is it always true that the tree with the lowest classification error on the training set will result in the lowest classification error in the validation set?
Measuring the complexity of the tree
Recall in the lecture that we talked about deeper trees being more complex. We will measure the complexity of the tree as
complexity(T) = number of leaves in the tree T
Here, we provide a function count_leaves that counts the number of leaves in a tree. Using this implementation, compute the number of leaves in model_1, model_2, and model_3.
End of explanation
"""
print count_leaves(model_1)
print count_leaves(model_2)
print count_leaves(model_3)
"""
Explanation: Compute the number of leaves in model_1, model_2, and model_3.
End of explanation
"""
model_4 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6, min_node_size = 0, min_error_reduction=-1)
model_5 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6, min_node_size = 0, min_error_reduction=0)
model_6 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6, min_node_size = 0, min_error_reduction=5)
"""
Explanation: Quiz question: Which tree has the largest complexity?
Quiz question: Is it always true that the most complex tree will result in the lowest classification error in the validation_set?
Exploring the effect of min_error
We will compare three models trained with different values of the stopping criterion. We intentionally picked models at the extreme ends (negative, just right, and too positive).
Train three models with these parameters:
1. model_4: min_error_reduction = -1 (ignoring this early stopping condition)
2. model_5: min_error_reduction = 0 (just right)
3. model_6: min_error_reduction = 5 (too positive)
For each of these three, we set max_depth = 6, and min_node_size = 0.
Note: Each tree can take up to 30 seconds to train.
End of explanation
"""
print "Validation data, classification error (model 4):", evaluate_classification_error(model_4, validation_set)
print "Validation data, classification error (model 5):", evaluate_classification_error(model_5, validation_set)
print "Validation data, classification error (model 6):", evaluate_classification_error(model_6, validation_set)
"""
Explanation: Calculate the classification error of each model (model_4, model_5, and model_6) on the validation set.
End of explanation
"""
print count_leaves(model_4)
print count_leaves(model_5)
print count_leaves(model_6)
"""
Explanation: Using the count_leaves function, compute the number of leaves in each of the models (model_4, model_5, and model_6).
End of explanation
"""
model_7 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6, min_node_size = 0, min_error_reduction=-1)
model_8 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6, min_node_size = 2000, min_error_reduction=-1)
model_9 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6, min_node_size = 50000, min_error_reduction=-1)
"""
Explanation: Quiz Question: Using the complexity definition above, which model (model_4, model_5, or model_6) has the largest complexity?
Did this match your expectation?
Quiz Question: model_4 and model_5 have similar classification error on the validation set, but model_5 has lower complexity. Should you pick model_5 over model_4?
Exploring the effect of min_node_size
We will compare three models trained with different values of the stopping criterion. Again, we intentionally picked models at the extreme ends (too small, just right, and too large).
Train three models with these parameters:
1. model_7: min_node_size = 0 (too small)
2. model_8: min_node_size = 2000 (just right)
3. model_9: min_node_size = 50000 (too large)
For each of these three, we set max_depth = 6, and min_error_reduction = -1.
Note: Each tree can take up to 30 seconds to train.
End of explanation
"""
print "Validation data, classification error (model 7):", evaluate_classification_error(model_7, validation_set)
print "Validation data, classification error (model 8):", evaluate_classification_error(model_8, validation_set)
print "Validation data, classification error (model 9):", evaluate_classification_error(model_9, validation_set)
"""
Explanation: Now, let us evaluate the models (model_7, model_8, or model_9) on the validation_set.
End of explanation
"""
print count_leaves(model_7)
print count_leaves(model_8)
print count_leaves(model_9)
"""
Explanation: Using the count_leaves function, compute the number of leaves in each of the models (model_7, model_8, and model_9).
End of explanation
"""
|
Olsthoorn/TransientGroundwaterFlow | exercises_notebooks/Strip-of-land-both-sides-equal-rise.ipynb | gpl-3.0 | # import modules we need
import matplotlib.pyplot as plt
import numpy as np
from scipy.special import erfc
"""
Explanation: <figure>
<IMG SRC="../logo/logo.png" WIDTH=250 ALIGN="right">
</figure>
Strip of land with same rise at both sides
@Theo Olsthoorn
2019-12-21
This exercise was done in class on 2019-01-10
End of explanation
"""
kD = 900 # m2/d
S = 0.2 # [-]
L = 2000 # m
b = L/2 # half width
A = 2. # m
x = np.linspace(0, 0.6 * b, 101)
times = [0.1, 0.25, 0.5, 1, 2, 4, 8] # d
plt.title('Half infinite aquifer, sudden change')
plt.xlabel('x [m]')
plt.ylabel('head [m]')
plt.grid()
for t in times:
s = A * erfc (x * np.sqrt(S / (4 * kD * t)))
plt.plot(x, s, label='t={:.2f} d'.format(t))
plt.legend(loc='center left')
plt.show()
"""
Explanation: The uniform aquifer of half-infinite extent ($0 \le x \le \infty$)
The solution is valid for a uniform aquifer of half-infinite extent, i.e. $x \ge 0$, with $kD$ and $S$ constant. The initial condition is $s(x, 0) = 0$, and the boundary condition is $s(0, t) = A$ for $t \ge 0$:
$$ s(x, t) = A \, \mathtt{erfc} \left( u \right)$$
where
$$ u = \sqrt {\frac {x^2 S} {4 kD t} } = x \sqrt{ \frac {S} {4 kD t} }, \,\,\, x \ge 0 $$
The erfc function is a well-known function in engineering, which is derived from statistics. It is mathematically defined as
$$ \mathtt{erfc}(z) = \frac 2 {\sqrt{\pi}} \intop _z ^{\infty} e^{-y^2} dy $$
We don't have to implement it, because it is available in scipy.special as erfc.
However, we need the mathematical expression to obtain its derivative, which we need to compute the flow
$$ Q = -kD \frac {\partial s} {\partial x} $$
$$ = -kD \, A \, \frac {\partial \mathtt{erfc} (u)} {\partial x} $$
$$ = kD \, A \,\frac 2 {\sqrt \pi} e^{-u^2} \frac {\partial u} {\partial x} $$
$$ = kD \, A \, \frac 2 {\sqrt \pi} e^{-u^2} \sqrt {\frac S {4 kD t}} $$
$$ Q = A \,\sqrt {\frac {kD S} {\pi t}}e^{-u^2} $$
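As a quick numerical illustration of the flow expression just derived (a sketch, using the parameter values from the code above), we can evaluate $Q$ at the boundary $x = 0$, where $u = 0$:

```python
# Sketch: evaluate Q = A * sqrt(kD*S/(pi*t)) * exp(-u**2) at x = 0,
# with the notebook's values kD = 900 m2/d, S = 0.2, A = 2 m.
import numpy as np

kD, S, A = 900.0, 0.2, 2.0

def Q(x, t):
    u = x * np.sqrt(S / (4 * kD * t))
    return A * np.sqrt(kD * S / (np.pi * t)) * np.exp(-u**2)

for t in [0.1, 1.0, 4.0]:
    print('t = {:4.1f} d -> Q(0, t) = {:6.2f} m2/d'.format(t, Q(0.0, t)))
```

The inflow decays like $1/\sqrt{t}$: multiplying $t$ by 4 halves the flow exactly.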
A half-infinite aquifer first
End of explanation
"""
plt.title('strip of width {}'.format(L))
plt.xlabel('x [m]')
plt.ylabel('s [m]')
plt.grid()
T50 = 0.28 * b**2 * S / kD # halftime of the head decline
times = np.array([0.1, 1, 2, 3, 4, 5, 6]) * T50 # multiple halftimes
x = np.linspace(-b, b, 101)
for t in times:
s = A + np.zeros_like(x)
for i in range(1, 20):
si = A *(-1)**(i-1) * (
erfc(((2 * i - 1) * b + x) * np.sqrt(S / (4 * kD * t)))
+ erfc(((2 * i - 1) * b - x) * np.sqrt(S / (4 * kD * t)))
)
s -= si
plt.plot(x, s, label='t={:.2f} d'.format(t))
plt.legend()
plt.show()
"""
Explanation: Strip of width $L = 2b$, head $s(t, x=\pm b) = A$ for $t>0$ and $s(t=0, x) = 0$
A strip of limited width requires mirroring to ensure that the head at each of the two boundaries $x = \pm b$ remains at the desired value. In this case, we implement the situation where the head at both ends of the strip is suddenly raised to $A$ at $t=0$ and is kept at that value thereafter.
By starting with $s(0, x) = A$ and subtracting the solution, we get the situation where the head starts at $A$ and is suddenly lowered to $0$ at $t=0$. This allows comparison with the example hereafter.
We show the result (head as a function of $x$) for different times. The times are chosen equal to a multiple of the half-time $T_{50\%} \approx 0.28 \frac {b^2 S} {kD}$, so that the head of each next line should be reduced by 50\% relative to the previous time.
$$ s(x, t) = A \sum _{i=1} ^\infty \left\{
(-1) ^{i-1} \left[
\mathtt{erfc}\left( \left( (2 i -1) b + x \right) \sqrt {\frac S {4 kD t}} \right)
+
\mathtt{erfc}\left( \left( (2 i -1) b - x \right) \sqrt {\frac S {4 kD t}} \right) \right]
\right\} $$
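The 0.28 factor in the half-time quoted above can be checked quickly. For large enough $t$, the decay is dominated by the slowest mode of the equivalent cosine series (see the next section), which is proportional to $\exp\left[-\left(\frac \pi 2\right)^2 \frac{kD}{b^2 S} t\right]$; the halving time is therefore $\ln 2 / (\pi/2)^2$ in units of $b^2 S / kD$. A one-line sketch:

```python
# Sketch: the half-time factor follows from the slowest-decaying mode,
# exp(-(pi/2)**2 * kD / (b**2 * S) * t), so T50 = ln(2)/(pi/2)**2 * b**2*S/kD.
import numpy as np

factor = np.log(2.0) / (np.pi / 2.0)**2
print('T50 = {:.4f} * b^2 S / kD'.format(factor))
```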
End of explanation
"""
T = b**2 * S / kD
plt.title('strip of width symmetrical {}'.format(L))
plt.xlabel('x [m]')
plt.ylabel('s [m]')
plt.grid()
for t in times:
s = np.zeros_like(x)
for i in range(1, 20):
si = ((-1)**(i - 1) / (2 * i - 1) *
np.cos((2 * i - 1) * (np.pi / 2) * x /b) *
np.exp(-(2 * i - 1)**2 * (np.pi / 2)**2 * t/T))
s += A * 4 / np.pi * si
plt.plot(x, s, label='t = {:.2f} d'.format(t))
plt.legend()
plt.show()
"""
Explanation: Symmetrical solution of a draining strip of land
This solution describes the head in a strip of land of width $L = 2b$ where the initial head is everywhere equal to $A$ and where the head at $x = \pm b$ is suddenly lowered to zero at $t=0$. Hence, the groundwater will gradually drain until the head reaches zero everywhere as $t \rightarrow \infty$. Therefore, we should get exactly the same result as in the previous example, although the solution looks completely different mathematically.
$$ s(x, t) = A \frac 4 \pi \sum _{i=1} ^\infty \left\{
\frac {(-1)^{i-1}} {2i - 1} \cos \left[ (2 i - 1) \left( \frac \pi 2\right) \frac x b \right] \exp \left[ -(2 i - 1)^2 \left( \frac \pi 2 \right) ^2
\frac {kD } {b^2 S} t \right] \right\} $$
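Since the two formulations should agree, we can sketch a quick numerical cross-check between the mirror-image (erfc) solution of the previous section and this cosine series, using the same parameter values as the notebook (kD = 900 m2/d, S = 0.2, A = 2 m, b = 1000 m):

```python
# Sketch: verify numerically that the mirror-image (erfc) solution and the
# cosine-series solution agree for the draining strip.
import numpy as np
from scipy.special import erfc

kD, S, A, b = 900.0, 0.2, 2.0, 1000.0
T = b**2 * S / kD

def s_mirror(x, t, nterms=20):
    s = A
    for i in range(1, nterms + 1):
        s -= A * (-1)**(i - 1) * (
            erfc(((2*i - 1)*b + x) * np.sqrt(S / (4*kD*t)))
            + erfc(((2*i - 1)*b - x) * np.sqrt(S / (4*kD*t))))
    return s

def s_series(x, t, nterms=200):
    s = 0.0
    for i in range(1, nterms + 1):
        s += (A * 4/np.pi * (-1)**(i - 1) / (2*i - 1)
              * np.cos((2*i - 1) * (np.pi/2) * x/b)
              * np.exp(-(2*i - 1)**2 * (np.pi/2)**2 * t/T))
    return s

for x, t in [(0.0, 0.5*T), (500.0, T), (0.0, 2.0*T)]:
    print(x, t, s_mirror(x, t), s_series(x, t))
```

The erfc series converges fastest for small times and the cosine series for large times, but in the overlapping range both give the same heads to machine precision.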
End of explanation
"""
|
AllenDowney/ThinkStats2 | solutions/chap03soln.ipynb | gpl-3.0 | from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print("Downloaded " + local)
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkstats2.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkplot.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/nsfg.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/first.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dct")
download(
"https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dat.gz"
)
import numpy as np
"""
Explanation: Chapter 3
Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
"""
import nsfg
import first
import thinkstats2
import thinkplot
preg = nsfg.ReadFemPreg()
live = preg[preg.outcome == 1]
"""
Explanation: Again, I'll load the NSFG pregnancy file and select live births:
End of explanation
"""
hist = thinkstats2.Hist(live.birthwgt_lb, label="birthwgt_lb")
thinkplot.Hist(hist)
thinkplot.Config(xlabel="Birth weight (pounds)", ylabel="Count")
"""
Explanation: Here's the histogram of birth weights:
End of explanation
"""
n = hist.Total()
pmf = hist.Copy()
for x, freq in hist.Items():
pmf[x] = freq / n
"""
Explanation: To normalize the distribution, we could divide through by the total count:
End of explanation
"""
thinkplot.Hist(pmf)
thinkplot.Config(xlabel="Birth weight (pounds)", ylabel="PMF")
"""
Explanation: The result is a Probability Mass Function (PMF).
End of explanation
"""
pmf = thinkstats2.Pmf([1, 2, 2, 3, 5])
pmf
"""
Explanation: More directly, we can create a Pmf object.
End of explanation
"""
pmf.Prob(2)
"""
Explanation: Pmf provides Prob, which looks up a value and returns its probability:
End of explanation
"""
pmf[2]
"""
Explanation: The bracket operator does the same thing.
End of explanation
"""
pmf.Incr(2, 0.2)
pmf[2]
"""
Explanation: The Incr method adds to the probability associated with a given value.
End of explanation
"""
pmf.Mult(2, 0.5)
pmf[2]
"""
Explanation: The Mult method multiplies the probability associated with a value.
End of explanation
"""
pmf.Total()
"""
Explanation: Total returns the total probability (which is no longer 1, because we changed one of the probabilities).
End of explanation
"""
pmf.Normalize()
pmf.Total()
"""
Explanation: Normalize divides through by the total probability, making it 1 again.
End of explanation
"""
pmf = thinkstats2.Pmf(live.prglngth, label="prglngth")
"""
Explanation: Here's the PMF of pregnancy length for live births.
End of explanation
"""
thinkplot.Hist(pmf)
thinkplot.Config(xlabel="Pregnancy length (weeks)", ylabel="Pmf")
"""
Explanation: Here's what it looks like plotted with Hist, which makes a bar graph.
End of explanation
"""
thinkplot.Pmf(pmf)
thinkplot.Config(xlabel="Pregnancy length (weeks)", ylabel="Pmf")
"""
Explanation: Here's what it looks like plotted with Pmf, which makes a step function.
End of explanation
"""
live, firsts, others = first.MakeFrames()
"""
Explanation: We can use MakeFrames to return DataFrames for all live births, first babies, and others.
End of explanation
"""
first_pmf = thinkstats2.Pmf(firsts.prglngth, label="firsts")
other_pmf = thinkstats2.Pmf(others.prglngth, label="others")
"""
Explanation: Here are the distributions of pregnancy length.
End of explanation
"""
width = 0.45
axis = [27, 46, 0, 0.6]
thinkplot.PrePlot(2, cols=2)
thinkplot.Hist(first_pmf, align="right", width=width)
thinkplot.Hist(other_pmf, align="left", width=width)
thinkplot.Config(xlabel="Pregnancy length(weeks)", ylabel="PMF", axis=axis)
thinkplot.PrePlot(2)
thinkplot.SubPlot(2)
thinkplot.Pmfs([first_pmf, other_pmf])
thinkplot.Config(xlabel="Pregnancy length(weeks)", axis=axis)
"""
Explanation: And here's the code that replicates one of the figures in the chapter.
End of explanation
"""
weeks = range(35, 46)
diffs = []
for week in weeks:
p1 = first_pmf.Prob(week)
p2 = other_pmf.Prob(week)
diff = 100 * (p1 - p2)
diffs.append(diff)
thinkplot.Bar(weeks, diffs)
thinkplot.Config(xlabel='Pregnancy length(weeks)', ylabel='Difference (percentage points)')
"""
Explanation: Here's the code that generates a plot of the difference in probability (in percentage points) between first babies and others, for each week of pregnancy (showing only pregnancies considered "full term").
End of explanation
"""
d = {7: 8, 12: 8, 17: 14, 22: 4, 27: 6, 32: 12, 37: 8, 42: 3, 47: 2}
pmf = thinkstats2.Pmf(d, label="actual")
"""
Explanation: Biasing and unbiasing PMFs
Here's the example in the book showing operations we can perform with Pmf objects.
Suppose we have the following distribution of class sizes.
End of explanation
"""
def BiasPmf(pmf, label):
new_pmf = pmf.Copy(label=label)
for x, p in pmf.Items():
new_pmf.Mult(x, x)
new_pmf.Normalize()
return new_pmf
"""
Explanation: This function computes the biased PMF we would get if we surveyed students and asked about the size of the classes they are in.
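The same operation can also be sketched with plain dicts, independent of thinkstats2, using the same class-size counts as above: weight each probability by its value (the class size) and renormalize.

```python
# Sketch of the biasing operation with plain dicts: p_biased(x) ~ p(x) * x.
sizes = {7: 8, 12: 8, 17: 14, 22: 4, 27: 6, 32: 12, 37: 8, 42: 3, 47: 2}
total = sum(sizes.values())
actual = {x: n / float(total) for x, n in sizes.items()}

biased = {x: p * x for x, p in actual.items()}
norm = sum(biased.values())
biased = {x: p / norm for x, p in biased.items()}

mean = lambda pmf: sum(p * x for x, p in pmf.items())
print('actual mean  ', mean(actual))
print('observed mean', mean(biased))
```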
End of explanation
"""
biased_pmf = BiasPmf(pmf, label="observed")
thinkplot.PrePlot(2)
thinkplot.Pmfs([pmf, biased_pmf])
thinkplot.Config(xlabel="Class size", ylabel="PMF")
"""
Explanation: The following graph shows the difference between the actual and observed distributions.
End of explanation
"""
print("Actual mean", pmf.Mean())
print("Observed mean", biased_pmf.Mean())
"""
Explanation: The observed mean is substantially higher than the actual.
End of explanation
"""
def UnbiasPmf(pmf, label=None):
new_pmf = pmf.Copy(label=label)
for x, p in pmf.Items():
new_pmf[x] *= 1 / x
new_pmf.Normalize()
return new_pmf
"""
Explanation: If we were only able to collect the biased sample, we could "unbias" it by applying the inverse operation.
End of explanation
"""
unbiased = UnbiasPmf(biased_pmf, label="unbiased")
print("Unbiased mean", unbiased.Mean())
"""
Explanation: We can unbias the biased PMF:
End of explanation
"""
thinkplot.PrePlot(2)
thinkplot.Pmfs([pmf, unbiased])
thinkplot.Config(xlabel="Class size", ylabel="PMF")
"""
Explanation: And plot the two distributions to confirm they are the same.
End of explanation
"""
import numpy as np
import pandas
array = np.random.randn(4, 2)
df = pandas.DataFrame(array)
df
"""
Explanation: Pandas indexing
Here's an example of a small DataFrame.
End of explanation
"""
columns = ["A", "B"]
df = pandas.DataFrame(array, columns=columns)
df
"""
Explanation: We can specify column names when we create the DataFrame:
End of explanation
"""
index = ["a", "b", "c", "d"]
df = pandas.DataFrame(array, columns=columns, index=index)
df
"""
Explanation: We can also specify an index that contains labels for the rows.
End of explanation
"""
df["A"]
"""
Explanation: Normal indexing selects columns.
End of explanation
"""
df.loc["a"]
"""
Explanation: We can use the loc attribute to select rows.
End of explanation
"""
df.iloc[0]
"""
Explanation: If you don't want to use the row labels and prefer to access the rows using integer indices, you can use the iloc attribute:
End of explanation
"""
indices = ["a", "c"]
df.loc[indices]
"""
Explanation: loc can also take a list of labels.
End of explanation
"""
df["a":"c"]
"""
Explanation: If you provide a slice of labels, DataFrame uses it to select rows.
End of explanation
"""
df[0:2]
"""
Explanation: If you provide a slice of integers, DataFrame selects rows by integer index.
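A minimal sketch (assuming a small throwaway DataFrame) makes the difference between the two slicing styles explicit: label slices with loc include the endpoint, while integer slices are half-open, as in standard Python.

```python
# Sketch: label slices are endpoint-inclusive, integer slices are not.
import numpy as np
import pandas as pd

df = pd.DataFrame(np.zeros((4, 2)), columns=['A', 'B'],
                  index=['a', 'b', 'c', 'd'])

by_label = df.loc['a':'c']   # rows a, b, c -> 3 rows
by_position = df[0:2]        # rows 0, 1    -> 2 rows
print(len(by_label), len(by_position))
```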
End of explanation
"""
def PmfMean(pmf):
"""Computes the mean of a PMF.
Returns:
float mean
"""
return sum(p * x for x, p in pmf.Items())
def PmfVar(pmf, mu=None):
"""Computes the variance of a PMF.
mu: the point around which the variance is computed;
if omitted, computes the mean
returns: float variance
"""
if mu is None:
mu = PmfMean(pmf)
return sum(p * (x - mu) ** 2 for x, p in pmf.Items())
"""
Explanation: But notice that one method includes the last element of the slice and one does not.
In general, I recommend giving labels to the rows and names to the columns, and using them consistently.
Exercises
Exercise: In Chapter 3 we computed the mean of a sample by adding up
the elements and dividing by n. If you are given a PMF, you can
still compute the mean, but the process is slightly different:
$$ \bar{x} = \sum_i p_i \, x_i $$
where the $x_i$ are the unique values in the PMF and $p_i=PMF(x_i)$.
Similarly, you can compute variance like this:
$$ S^2 = \sum_i p_i \, (x_i - \bar{x})^2 $$
Write functions called PmfMean and PmfVar that take a
Pmf object and compute the mean and variance. To test these methods,
check that they are consistent with the methods Mean and Var
provided by Pmf.
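As a sanity check, here is a plain-Python sketch (no thinkstats2 needed) that computes both quantities on the small sample used for the Pmf example earlier and compares them with the direct formulas:

```python
# Sketch: PMF-based mean/variance agree with the direct computation.
sample = [1, 2, 2, 3, 5]
n = float(len(sample))
pmf = {x: sample.count(x) / n for x in set(sample)}

pmf_mean = sum(p * x for x, p in pmf.items())
pmf_var = sum(p * (x - pmf_mean)**2 for x, p in pmf.items())

direct_mean = sum(sample) / n
direct_var = sum((x - direct_mean)**2 for x in sample) / n

print(pmf_mean, direct_mean)  # both 2.6
print(pmf_var, direct_var)    # both 1.84
```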
End of explanation
"""
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemResp.dct")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemResp.dat.gz")
resp = nsfg.ReadFemResp()
# Solution
pmf = thinkstats2.Pmf(resp.numkdhh, label="numkdhh")
# Solution
thinkplot.Pmf(pmf)
thinkplot.Config(xlabel="Number of children", ylabel="PMF")
# Solution
biased = BiasPmf(pmf, label="biased")
# Solution
thinkplot.PrePlot(2)
thinkplot.Pmfs([pmf, biased])
thinkplot.Config(xlabel="Number of children", ylabel="PMF")
# Solution
pmf.Mean()
# Solution
biased.Mean()
"""
Explanation: Exercise: Something like the class size paradox appears if you survey children and ask how many children are in their family. Families with many children are more likely to appear in your sample, and families with no children have no chance to be in the sample.
Use the NSFG respondent variable numkdhh to construct the actual distribution for the number of children under 18 in the respondents' households.
Now compute the biased distribution we would see if we surveyed the children and asked them how many children under 18 (including themselves) are in their household.
Plot the actual and biased distributions, and compute their means.
End of explanation
"""
live, firsts, others = first.MakeFrames()
preg_map = nsfg.MakePregMap(live)
# Solution
hist = thinkstats2.Hist()
for caseid, indices in preg_map.items():
if len(indices) >= 2:
pair = preg.loc[indices[0:2]].prglngth
diff = np.diff(pair)[0]
hist[diff] += 1
# Solution
thinkplot.Hist(hist)
# Solution
pmf = thinkstats2.Pmf(hist)
pmf.Mean()
"""
Explanation: Exercise: I started this book with the question, "Are first babies more likely to be late?" To address it, I computed the difference in means between groups of babies, but I ignored the possibility that there might be a difference between first babies and others for the same woman.
To address this version of the question, select respondents who have at least two live births and compute pairwise differences. Does this formulation of the question yield a different result?
Hint: use nsfg.MakePregMap:
End of explanation
"""
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/relay.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/Apr25_27thAn_set1.shtml")
import relay
results = relay.ReadResults()
speeds = relay.GetSpeeds(results)
speeds = relay.BinData(speeds, 3, 12, 100)
pmf = thinkstats2.Pmf(speeds, "actual speeds")
thinkplot.Pmf(pmf)
thinkplot.Config(xlabel="Speed (mph)", ylabel="PMF")
# Solution
def ObservedPmf(pmf, speed, label=None):
"""Returns a new Pmf representing speeds observed at a given speed.
The chance of observing a runner is proportional to the difference
in speed.
Args:
pmf: distribution of actual speeds
speed: speed of the observing runner
label: string label for the new dist
Returns:
Pmf object
"""
new = pmf.Copy(label=label)
for val in new.Values():
diff = abs(val - speed)
new[val] *= diff
new.Normalize()
return new
# Solution
biased = ObservedPmf(pmf, 7, label="observed speeds")
thinkplot.Pmf(biased)
thinkplot.Config(xlabel="Speed (mph)", ylabel="PMF")
"""
Explanation: Exercise: In most foot races, everyone starts at the same time. If you are a fast runner, you usually pass a lot of people at the beginning of the race, but after a few miles everyone around you is going at the same speed.
When I ran a long-distance (209 miles) relay race for the first time, I noticed an odd phenomenon: when I overtook another runner, I was usually much faster, and when another runner overtook me, he was usually much faster.
At first I thought that the distribution of speeds might be bimodal; that is, there were many slow runners and many fast runners, but few at my speed.
Then I realized that I was the victim of a bias similar to the effect of class size. The race was unusual in two ways: it used a staggered start, so teams started at different times; also, many teams included runners at different levels of ability.
As a result, runners were spread out along the course with little relationship between speed and location. When I joined the race, the runners near me were (pretty much) a random sample of the runners in the race.
So where does the bias come from? During my time on the course, the chance of overtaking a runner, or being overtaken, is proportional to the difference in our speeds. I am more likely to catch a slow runner, and more likely to be caught by a fast runner. But runners at the same speed are unlikely to see each other.
Write a function called ObservedPmf that takes a Pmf representing the actual distribution of runners’ speeds, and the speed of a running observer, and returns a new Pmf representing the distribution of runners’ speeds as seen by the observer.
To test your function, you can use relay.py, which reads the results from the James Joyce Ramble 10K in Dedham MA and converts the pace of each runner to mph.
Compute the distribution of speeds you would observe if you ran a relay race at 7 mph with this group of runners.
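Before working with the real relay data, the biasing idea itself can be sketched with plain Python and a made-up list of speeds (not taken from the race results): weight each speed's probability by its distance from the observer's speed and renormalize.

```python
# Sketch: chance of observing a runner ~ |runner speed - observer speed|.
speeds = [4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
n = float(len(speeds))
pmf = {v: 1.0 / n for v in speeds}

observer = 7.0
biased = {v: p * abs(v - observer) for v, p in pmf.items()}
norm = sum(biased.values())
biased = {v: p / norm for v, p in biased.items()}

print(biased[7.0])  # 0.0 -- runners at exactly your own speed are never observed
```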
End of explanation
"""
|
yangliuy/yangliuy.github.io | markdown_generator/talks.ipynb | mit | import pandas as pd
import os
"""
Explanation: Talks markdown generator for academicpages
Takes a TSV of talks with metadata and converts them for use with academicpages.github.io. This is an interactive Jupyter notebook (see more info here). The core python code is also in talks.py. Run either from the markdown_generator folder after replacing talks.tsv with one containing your data.
TODO: Make this work with BibTex and other databases, rather than Stuart's non-standard TSV format and citation style.
End of explanation
"""
!cat talks.tsv
"""
Explanation: Data format
The TSV needs to have the following columns: title, type, url_slug, venue, date, location, talk_url, description, with a header at the top. Many of these fields can be blank, but the columns must be in the TSV.
Fields that cannot be blank: title, url_slug, date. All else can be blank. type defaults to "Talk"
date must be formatted as YYYY-MM-DD.
url_slug will be the descriptive part of the .md file and the permalink URL for the page about the paper.
The .md file will be YYYY-MM-DD-[url_slug].md and the permalink will be https://[yourdomain]/talks/YYYY-MM-DD-[url_slug]
The combination of url_slug and date must be unique, as it will be the basis for your filenames
This is how the raw file looks (it doesn't look pretty, use a spreadsheet or other program to edit and create).
End of explanation
"""
talks = pd.read_csv("talks.tsv", sep="\t", header=0)
talks
"""
Explanation: Import TSV
Pandas makes this easy with the read_csv function. We are using a TSV, so we specify the separator as a tab, or \t.
I found it important to put this data in a tab-separated values format, because there are a lot of commas in this kind of data and comma-separated values can get messed up. However, you can modify the import statement, as pandas also has read_excel(), read_json(), and others.
End of explanation
"""
html_escape_table = {
    "&": "&amp;",
    '"': "&quot;",
    "'": "&apos;"
}
def html_escape(text):
if type(text) is str:
return "".join(html_escape_table.get(c,c) for c in text)
else:
return "False"
"""
Explanation: Escape special characters
YAML is very picky about how it takes a valid string, so we are replacing single and double quotes (and ampersands) with their HTML encoded equivilents. This makes them look not so readable in raw format, but they are parsed and rendered nicely.
End of explanation
"""
loc_dict = {}
for row, item in talks.iterrows():
md_filename = str(item.date) + "-" + item.url_slug + ".md"
html_filename = str(item.date) + "-" + item.url_slug
year = item.date[:4]
md = "---\ntitle: \"" + item.title + '"\n'
md += "collection: talks" + "\n"
if len(str(item.type)) > 3:
md += 'type: "' + item.type + '"\n'
else:
md += 'type: "Talk"\n'
md += "permalink: /talks/" + html_filename + "\n"
if len(str(item.venue)) > 3:
md += 'venue: "' + item.venue + '"\n'
if len(str(item.date)) > 3:
md += "date: " + str(item.date) + "\n"
if len(str(item.location)) > 3:
md += 'location: "' + str(item.location) + '"\n'
md += "---\n"
if len(str(item.talk_url)) > 3:
md += "\n[More information here](" + item.talk_url + ")\n"
if len(str(item.description)) > 3:
md += "\n" + html_escape(item.description) + "\n"
md_filename = os.path.basename(md_filename)
#print(md)
with open("../_talks/" + md_filename, 'w') as f:
f.write(md)
"""
Explanation: Creating the markdown files
This is where the heavy lifting is done. This loops through all the rows in the TSV dataframe, then starts to concatenate a big string (md) that contains the markdown for each talk. It does the YAML metadata first, then does the description for the individual page.
End of explanation
"""
!ls ../_talks
!cat ../_talks/2013-03-01-tutorial-1.md
"""
Explanation: These files are in the talks directory, one directory below where we're working from.
End of explanation
"""
|
vitojph/kschool-nlp | notebooks-py2/nltk-pos.ipynb | gpl-3.0 | from __future__ import print_function
from __future__ import division
import nltk
"""
Explanation: NLTK summary: part-of-speech tagging
This summary corresponds to chapter 5 of the NLTK Book, Categorizing and Tagging Words. Reading the chapter is highly recommended.
Part-of-speech tagging with NLTK
NLTK provides several tools for easily building part-of-speech taggers. Let's look at some examples.
To start, we need to import the nltk module, which gives us access to all of its functionality:
End of explanation
"""
oracion1 = 'This is the lost dog I found at the park'.split()
oracion2 = 'The progress of the humankind as I progress'.split()
print(nltk.pos_tag(oracion1))
print(nltk.pos_tag(oracion2))
"""
Explanation: As a first example, we can use the function nltk.pos_tag to tag an English sentence morphologically, as long as we provide it as a list of words or tokens.
End of explanation
"""
oracion3 = 'Green colorless ideas sleep furiously'.split()
print(nltk.pos_tag(oracion3))
print(nltk.pos_tag('colorless green ideas sleep furiously'.split()))
print(nltk.pos_tag(["My", "name", "is", "Prince"]))
print(nltk.pos_tag('He was born during the summer of 1988'.split()))
print(nltk.pos_tag('''She's Tony's sister'''.split()))
print(nltk.pos_tag('''My name is Sasolamatom and I have the stromkupft dog'''.split()))
"""
Explanation: The tagger works fairly well, although it obviously makes mistakes. If we try Chomsky's famous sentence, we spot some mistagged words.
End of explanation
"""
from nltk.corpus import brown
brown_sents = brown.sents(categories='news')
brown_tagged_sents = brown.tagged_sents(categories='news')
"""
Explanation: How does this tagger work? nltk.pos_tag is a part-of-speech tagger based on machine learning. From thousands of manually tagged example sentences, the system has learned, by computing frequencies and generalizing, which grammatical category is most likely for each token.
As you know, NLTK gives us access to corpora that are already tagged. We will use one you already know, the Brown corpus, to train our own taggers.
To do so, we import the Brown corpus and store the news section, in both its POS-tagged and untagged versions, in a couple of variables.
End of explanation
"""
# print the first sentence of the Brown news section
print(brown_sents[0]) # untagged
print(brown_tagged_sents[0]) # POS-tagged
"""
Explanation: To compare the two versions, we can print the first sentence of this corpus in its untagged version (note that it is just a list of tokens, nothing more) and in its tagged version (a list of tuples whose first element is the token and whose second is the POS tag).
End of explanation
"""
defaultTagger = nltk.DefaultTagger('NN')
print(defaultTagger.tag(oracion1))
print(defaultTagger.tag(oracion2))
"""
Explanation: Automatic part-of-speech tagging
Default tagger
NLTK gives us access to several kinds of POS taggers. Let's see how to use some of them.
When tackling the POS tagging of a text, we can adopt a simple strategy: tag every word with the same grammatical category by default. With NLTK we can use a DefaultTagger that tags every token as a noun. The singular-noun category (NN in the Treebank tagset) is usually the most frequent one. Let's see how well it does.
End of explanation
"""
defaultTagger.evaluate(brown_tagged_sents)
"""
Explanation: Obviously it does not work well, but note that in the previous example with oracion1 we tagged 2 out of 10 tokens correctly. Evaluating it on a larger corpus, such as the set of Brown sentences we already have, yields a precision above 13%:
End of explanation
"""
patrones = [
(r'[Aa]m$', 'BEM'), # irregular forms of 'to be'
(r'[Aa]re$', 'BER'), #
(r'[Ii]s$', 'BEZ'), #
(r'[Ww]as$', 'BEDZ'), #
(r'[Ww]ere$', 'BED'), #
(r'[Bb]een$', 'BEN'), #
(r'[Hh]ave$', 'HV'), # irregular forms of 'to be'
(r'[Hh]as$', 'HVZ'), #
(r'[Hh]ad$', 'HVD'), #
(r'^I$', 'PRP'), # personal pronouns
(r'[Yy]ou$', 'PRP'), #
(r'[Hh]e$', 'PRP'), #
(r'[Ss]he$', 'PRP'), #
(r'[Ii]t$', 'PRP'), #
(r'[Tt]hey$', 'PRP'), #
(r'[Aa]n?$', 'AT'), #
(r'[Tt]he$', 'AT'), #
(r'[Ww]h.+$', 'WP'), # wh- pronoun
(r'.*ing$', 'VBG'), # gerunds
(r'.*ed$', 'VBD'), # simple past
(r'.*es$', 'VBZ'), # 3rd singular present
(r'[Cc]an(not|n\'t)?$', 'MD'), # modals
(r'[Mm]ight$', 'MD'), #
(r'[Mm]ay$', 'MD'), #
(r'.+ould$', 'MD'), # modals: could, should, would
(r'.*ly$', 'RB'), # adverbs
(r'.*\'s$', 'NN$'), # possessive nouns
(r'.*s$', 'NNS'), # plural nouns
(r'-?[0-9]+(.[0-9]+)?$', 'CD'), # cardinal numbers
(r'^to$', 'TO'), # to
(r'^in$', 'IN'), # in prep
(r'^[A-Z]+([a-z])*$', 'NNP'), # proper nouns
(r'.*', 'NN') # nouns (default)
]
regexTagger = nltk.RegexpTagger(patrones)
print(regexTagger.tag('I was taking a sunbath in Alpedrete'.split()))
print(regexTagger.tag('She would have found 100 dollars in the bag'.split()))
print(regexTagger.tag('DSFdfdsfsd 1852 to dgdfgould fXXXdg in XXXfdg'.split()))
"""
Explanation: The .evaluate method, which we can call on any tagger by passing a reference collection that is already tagged, returns a single number: the precision, computed as the percentage of tokens correctly tagged by the tagger on the specified reference corpus.
Obviously, the results are poor. Let's try more sophisticated options.
Regular-expression-based tagger
Regular expressions are a formal language for specifying text strings. We have used them on previous occasions. Now we can try to define different morphological categories through patterns, at least for regular morphological phenomena.
Below we define the variable patrones as a list of tuples, whose first element is the regular expression we want to capture and whose second element is the grammatical category. From these regular expressions we build a RegexpTagger.
End of explanation
"""
regexTagger.evaluate(brown_tagged_sents)
"""
Explanation: When we evaluate it on a larger corpus of sentences, we see that our precision climbs above 32%.
End of explanation
"""
print(len(brown_tagged_sents))
print((len(brown_tagged_sents) * 90) / 100)
size = int(len(brown_tagged_sents) * 0.9) # make sure this is converted to an integer
corpusEntrenamiento = brown_tagged_sents[:size]
corpusTest = brown_tagged_sents[size:]
# as you can see, these corpora contain different sentences
print(corpusEntrenamiento[0])
print(corpusTest[0])
"""
Explanation: Ngram-based tagging
We said earlier that the function nltk.pos_tag relied on a POS tagger driven by statistical information. Next we will reproduce, on a small scale, the training process of a machine-learning-based POS tagger.
In general, machine-learning systems work as follows:
A collection of data is tagged manually. The larger and more representative the collection, the better the data we can extract.
The learning system processes the training collection and "learns" from the observed examples. In the case of POS tagging, the system computes frequencies and generalizes (in various ways) which grammatical categories are most likely for each word.
Once the system has been trained, its performance is evaluated on unseen data.
We have a small corpus of tagged examples: the sentences of the "news" category of the Brown corpus. The first thing we need to do is split it into training and test corpora. Here we reserve the first 90% of the sentences for training (the observed examples our tagger will learn from) and keep the remaining 10% to check how well it works.
End of explanation
"""
unigramTagger = nltk.UnigramTagger(corpusEntrenamiento)
print(unigramTagger.evaluate(corpusTest))
# how well are our example sentences tagged?
print(unigramTagger.tag(oracion1))
print(unigramTagger.tag(oracion2))
print(unigramTagger.tag(oracion3))
"""
Explanation: Next we create a tagger based on unigrams (single-word sequences, i.e. isolated words) through the nltk.UnigramTagger class, providing our corpusEntrenamiento for it to learn from. Once trained, we evaluate its performance on corpusTest.
End of explanation
"""
bigramTagger = nltk.BigramTagger(corpusEntrenamiento)
trigramTagger = nltk.TrigramTagger(corpusEntrenamiento)
# it works terribly :-(
print(bigramTagger.tag(oracion2))
print(trigramTagger.tag(oracion2))
# cheating here: I ask it to tag a sentence it has already seen during training
print(bigramTagger.tag(['The', 'Fulton', 'County', 'Grand', 'Jury', 'said', 'Friday', 'an', 'investigation', 'of',
"Atlanta's", 'recent', 'primary', 'election', 'produced', '``', 'no', 'evidence', "''", 'that', 'any', 'irregularities', 'took', 'place', '.']))
"""
Explanation: Unigram-based taggers are built by simply computing a frequency distribution for each token (word), always assigning the most likely POS tag. In our case this strategy works reasonably well: the tagger exceeds 81% precision. However, this approach struggles with homographs (the same token functioning as more than one grammatical category). If we try our oracion2, we see that the second occurrence of progress is not tagged correctly.
Intuitively, we might think we could tell the two categories apart by taking some of each word's context into account: progress is a noun when it appears after the article the and a verb when it appears after a personal pronoun such as I. If, instead of unigram frequencies, we extended the computation to sequences of two or three words, we might get better results. That is exactly why we will compute conditional frequency distributions: each token is assigned the most frequent grammatical category given the category of the immediately preceding word(s).
We create a couple of taggers based on bigrams (two-word sequences) and trigrams (three-word sequences) through the nltk.BigramTagger and nltk.TrigramTagger classes, and try them on our oracion2.
End of explanation
"""
print(bigramTagger.evaluate(corpusTest))
print(trigramTagger.evaluate(corpusTest))
"""
Explanation: As the examples show, the results are disastrous. Most tokens are left untagged and shown as None.
If we evaluate them on our test collection, we see that they barely exceed 10% precision. Worse results than our DefaultTagger.
End of explanation
"""
unigramTagger = nltk.UnigramTagger(corpusEntrenamiento, backoff=regexTagger)
bigramTagger = nltk.BigramTagger(corpusEntrenamiento, backoff=unigramTagger)
trigramTagger = nltk.TrigramTagger(corpusEntrenamiento, backoff=bigramTagger)
print(trigramTagger.evaluate(corpusTest))
print(trigramTagger.tag(oracion1))
print(trigramTagger.tag(oracion2))
print(trigramTagger.tag(oracion3))
"""
Explanation: Why does this happen? Our intuition is not wrong: if we compute conditional frequency distributions over longer word sequences, our statistics get finer-grained. However, considering longer token sequences brings the risk that those exact sequences never occur in the training corpus.
In the oracion2 example, our bigramTagger cannot tag the word progress because it found neither the bigram (The, progress) nor (I, progress) in the training corpus. Obviously, our trigramTagger did not find the trigrams (SENTENCE_START, The, progress) or (as, I, progress) either. If those sequences do not appear in the training corpus, there is nothing to learn.
Combining taggers
In these cases, the most satisfactory solution is to combine the power of all our taggers incrementally. We will create new taggers that use others as backoff.
We will use a trigram tagger that, when it has no answer for a given token, falls back on the bigram tagger. The bigram tagger, in turn, falls back on the unigram tagger when it has no answer. Finally, the unigram tagger uses the regular-expression tagger we defined earlier as backoff. This raises the precision to almost 87%.
End of explanation
"""
# write your code here
"""
Explanation: In-class exercise
Let's improve our tagger by extending the training corpus. Instead of using only the news texts, we will use the whole Brown corpus.
End of explanation
"""
|
ComputationalPhysics2015-IPM/Python-01 | Python-01.ipynb | gpl-2.0 | 3 + 4
3 * 5
4 / 3
4 // 3
4 % 3
abs(-3)
2**2024+2**1024
2.0**1024
1.0+2
2**-30
1.0e-10
"""
Explanation: Python as a calculator
Python can be used as a calculator: +, -, *, /, (), **, //, %, abs, min, max, int and float
End of explanation
"""
True
False
2 == 3
3<5
3 < 6 and 2 == 3
True + 2
True * 10
"""
Explanation: Boolean
True False ==, <, >, <=, >=, !=, and, or
End of explanation
"""
a = 3
a
a,b = 3,4
a
b
a,b = b,a
a
b
حسین = 3
حسین + b
3 = a  # this raises a SyntaxError: you cannot assign to a literal
"""
Explanation: Variable names
good practice: temp_var, tempVar, TempVar; unicode identifiers are allowed; assignment with =; swapping with a, b = b, a
End of explanation
"""
s = 'This is a python string'
"is" in s
'is' not in s
s[5]
len(s)
'String one, ' + 'String two'
3 * 'A '
"""
Explanation: String
in, not in, +, *, [i], len()
End of explanation
"""
l = ['akbar', 'mamad', 'mahmoud', 'hasan']
l
l[2]
l[2]=['mahmoud', 'ahmad']
l
len(l)
l[-1]
ln = [2,5,6,1,3,5]
max(ln)
sum(ln)
ln1 = [1,2,4,5]
ln1+ln
l+ln+ln1
"""
Explanation: List
in, not in, +, *, [], len(), min(), max(), sum(), help()
End of explanation
"""
ll = []
ll.append(1)
ll
l.reverse()
l
l.pop()
l
"""
Explanation: List
.append(), .count(), .index(), .pop(), .remove(), .reverse(), help()
End of explanation
"""
sqrt(2)
import math
math.sqrt(2)
from math import sqrt
sqrt(2)
help(math)
"""
Explanation: Math
End of explanation
"""
print(l)
print(l,ln,ln1)
'b is {1} and a is {0}, {2:5.3}.'.format(a,b, 0.3333333333)
a = input("Enter something:")
a
eval(a)
eval('math.sqrt(2)')
"""
Explanation: Input and outputs
print(), '{0:8.4}'.format(), input(), eval()
End of explanation
"""
if 2>3:
    # This is inside the if block
print('this')
print('this')
print('this')
if 2<5:
print('2<5')
else:
print('that')
print('that')
print("end")
list(range(3,10,2))
for i in range(10):
print(i)
for i in l:
print(i)
for i in [0, 0.1, 0.2, 0.3]:
print(i)
for i in range(10):
print(i*0.1)
"""
Explanation: Flow Controls
if else, for
End of explanation
"""
def f(x):
'''
Add one to x and return it.
This is a simple function to demonstrate a concept.
'''
x+=1
return x
help(f)
a = 3
f(a)
a
def g(l):
l[0] = 'saeed'
return l
l
g(l)
l
help(g)
"""
Explanation: function
comments, docstrings, numbers and lists
End of explanation
"""
|
jdvelasq/ingenieria-economica | 07-impuestos.ipynb | mit | import cashflows as cf
# representation of the cash flow
cflo = cf.cashflow(const_value=[1000]*5+[-500]*5,
start='2016',
freq='A')
cf.textplot(cflo)
## tax computation
tax_rate = cf.interest_rate(const_value=[30]*10, start=2016, freq='A')
x = cf.after_tax_cashflow(cflo, # cash flow
                          tax_rate=tax_rate) # income tax rate
cf.textplot(x)
"""
Explanation: Corporate Taxes
Juan David Velásquez Henao
jdvelasq@unal.edu.co
Universidad Nacional de Colombia, Sede Medellín
Facultad de Minas
Medellín, Colombia
Click here to access the latest online version
Click here to view the latest online version on nbviewer.
Preparation
Types of taxes.
Taxes on tangible or intangible assets.
Licenses for establishing businesses.
Consumption taxes on basic goods.
Sales taxes.
Income taxes.
Deductions.
Income tax is computed as gross income minus the allowed deductions:
Salaries.
Rents.
Interest paid.
Advertising.
Pension plans.
Research expenses.
Depreciation, amortization and depletion.
Notes on the Colombian electricity market
Industry and commerce tax: paid at a fixed rate on the installed capacity of the project, indexed by inflation ($ 5 of 1981 per kilowatt).
Article 22, Law 143 of 1994 (Electricity Law): 0.1% of the administrative expenses of generation projects (covering the cost of the regulation service).
Article 222, Law 1450 of 2011. Electricity-sector transfers: 6% of gross sales for hydroelectric plants (including the water-use fee) and 4% for thermal plants.
Property tax.
Property-tax surcharge.
CND, ASIC and other costs.
FAZNI costs.
For non-conventional energy sources, Article 11 of Law 1715 of 2014 allows up to 50% of the investment to be deducted from income tax during the five years following the year of accrual, without exceeding 50% of the tax payable computed without this incentive.
after_tax_cashflow(cflo, tax_rate)
cflo -- cash flow.
tax_rate -- tax rate.
Computes the cash flow corresponding to the taxes, expressed as a rate applied to the cash flow, and returns this tax cash flow. Note that taxes are only computed for positive values.
Example.-- Consider a constant flow of $ 1000 for periods 1 to 5 and $ -500 for periods 6 to 10. Compute the income tax for a 30% tax rate.
End of explanation
"""
cflo = cf.cashflow(const_value=[100]*10, start=2000)
tax_rate = cf.interest_rate(const_value=[25]*10, start=2000, chgpts={5:35})
x = cf.after_tax_cashflow(cflo, # cash flow
                          tax_rate=tax_rate) # income tax rate
cf.textplot(x)
"""
Explanation: Example.-- Consider a cash flow of $ 100 over 10 periods. Compute the income tax if the rate is 25% for the first five periods and 35% for the remaining ones.
End of explanation
"""
## investment cost, incurred at the end of year 0
avaluo = cf.cashflow([0.8 * 500] + [0] * 10, start = 2000)
avaluo
## book value
bookval = avaluo.cumsum()
bookval
## property tax rate
trate = cf.interest_rate([0] + [3] * 10, start=2000)
trate
## property tax
cf.after_tax_cashflow(bookval, tax_rate=trate)
"""
Explanation: Example.-- In year 0 a plot of land is bought for $ 500 to build a thermal power plant. If the appraisal used for property-tax purposes is 80% of the purchase value and the tax is 0.3% of the appraisal, build the cash flow representing the property-tax payments over the next 10 years.
End of explanation
"""
|
rhiever/crowd-machines | Crowd machines demo.ipynb | mit | import pandas as pd
import numpy as np
breast_cancer_data = pd.read_csv('data/breast-cancer-wisconsin.tsv.gz',
sep='\t',
compression='gzip')
"""
Explanation: Breast cancer data set
End of explanation
"""
from collections import Counter
Counter(breast_cancer_data['class'].values)
"""
Explanation: Class frequencies
End of explanation
"""
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.ensemble import RandomForestClassifier
cross_val_score(RandomForestClassifier(n_estimators=100, n_jobs=-1),
breast_cancer_data.drop('class', axis=1).values,
breast_cancer_data.loc[:, 'class'].values, cv=StratifiedKFold(n_splits=5, shuffle=True))
"""
Explanation: Compute the cross-validation scores
Here, the scores are accuracy on the data set.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(breast_cancer_data.drop('class', axis=1).values,
breast_cancer_data['class'].values,
stratify=breast_cancer_data['class'].values,
train_size=0.75, test_size=0.25)
clf = RandomForestRegressor(n_estimators=100, n_jobs=-1)
clf.fit(X_train, y_train)
plt.figure(figsize=(12, 7))
sb.swarmplot(y_train, clf.predict(X_train))
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
plt.xlabel('Actual status', fontsize=14)
plt.ylabel('Predicted probability', fontsize=14)
plt.ylim(-0.01, 1.01)
;
"""
Explanation: Visualize the predictions vs. actual status
Each dot corresponds to one prediction.
Training data
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(breast_cancer_data.drop('class', axis=1).values,
breast_cancer_data['class'].values,
stratify=breast_cancer_data['class'].values,
train_size=0.75, test_size=0.25)
clf = RandomForestRegressor(n_estimators=100, n_jobs=-1)
clf.fit(X_train, y_train)
plt.figure(figsize=(12, 7))
sb.swarmplot(y_test, clf.predict(X_test))
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
plt.xlabel('Actual status', fontsize=14)
plt.ylabel('Predicted probability', fontsize=14)
plt.ylim(-0.01, 1.01)
;
"""
Explanation: Testing data
End of explanation
"""
import pandas as pd
import numpy as np
from sklearn.pipeline import make_pipeline, make_union
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest
from sklearn.model_selection import cross_val_score, StratifiedKFold
breast_cancer_data = pd.read_csv('data/breast-cancer-wisconsin.tsv.gz',
sep='\t',
compression='gzip')
all_features = breast_cancer_data.drop('class', axis=1).values
all_classes = breast_cancer_data['class'].values
union_ops = [SelectKBest(k='all')]
for i, mwfl in enumerate(np.arange(0., 0.21, 0.01)):
union_ops.append(VotingClassifier(estimators=[('rf-mwfl={}'.format(mwfl),
RandomForestRegressor(n_estimators=100,
n_jobs=-1,
min_weight_fraction_leaf=mwfl))]))
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, min_weight_fraction_leaf=mwfl)
print('RF w/ mwfl={:0.2f} CV score: {:0.3f}'.format(
mwfl,
np.mean(cross_val_score(clf, all_features, all_classes, cv=StratifiedKFold(n_splits=5, shuffle=True)))))
clf = make_pipeline(make_union(*union_ops), RandomForestClassifier(n_estimators=100, n_jobs=-1))
print('Crowd machine CV score: {:0.3f}'.format(np.mean(cross_val_score(clf, all_features, all_classes, cv=StratifiedKFold(n_splits=5, shuffle=True)))))
"""
Explanation: Crowd machine
Run random forest with 15 or 20 different terminal node sizes, on the same training data, in each case getting the probability for each subject or instance;
Use the output from each as a new synthetic feature, which is then input to another (single) random forest, also run in regression mode; in this case the probability estimates from each synthetic feature will be roughly continuous, since they are probability estimates and not hard 0/1 labels;
Generate some simple plots for the crowd;
Compare the crowd results to some individual random forest runs, using some two or three terminal node settings.
End of explanation
"""
import pandas as pd
spambase_data = pd.read_csv('data/spambase.tsv.gz',
sep='\t',
compression='gzip')
"""
Explanation: Spambase data set
End of explanation
"""
from collections import Counter
Counter(spambase_data['class'].values)
"""
Explanation: Class frequencies
End of explanation
"""
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.ensemble import RandomForestClassifier
cross_val_score(RandomForestClassifier(n_estimators=100, n_jobs=-1),
spambase_data.drop('class', axis=1).values,
spambase_data.loc[:, 'class'].values,
cv=StratifiedKFold(n_splits=5, shuffle=True))
"""
Explanation: Compute the cross-validation scores
Here, the scores are accuracy on the data set.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(spambase_data.drop('class', axis=1).values,
spambase_data['class'].values,
stratify=spambase_data['class'].values,
train_size=0.75, test_size=0.25)
clf = RandomForestRegressor(n_estimators=100, n_jobs=-1)
clf.fit(X_train, y_train)
plt.figure(figsize=(12, 7))
sb.boxplot(y_train, clf.predict(X_train))
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
plt.xlabel('Actual status', fontsize=14)
plt.ylabel('Predicted probability', fontsize=14)
plt.ylim(-0.01, 1.01)
;
"""
Explanation: Visualize the predictions vs. actual status
Each dot corresponds to one prediction.
Training data
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(spambase_data.drop('class', axis=1).values,
spambase_data['class'].values,
stratify=spambase_data['class'].values,
train_size=0.75, test_size=0.25)
clf = RandomForestRegressor(n_estimators=100, n_jobs=-1)
clf.fit(X_train, y_train)
plt.figure(figsize=(12, 7))
sb.boxplot(y_test, clf.predict(X_test))
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
plt.xlabel('Actual status', fontsize=14)
plt.ylabel('Predicted probability', fontsize=14)
plt.ylim(-0.01, 1.01)
;
"""
Explanation: Testing data
End of explanation
"""
import pandas as pd
import numpy as np
from sklearn.pipeline import make_pipeline, make_union
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest
from sklearn.model_selection import cross_val_score, StratifiedKFold
spambase_data = pd.read_csv('data/spambase.tsv.gz',
sep='\t',
compression='gzip')
all_features = spambase_data.drop('class', axis=1).values
all_classes = spambase_data['class'].values
union_ops = [SelectKBest(k='all')]
for i, mwfl in enumerate(np.arange(0., 0.21, 0.01)):
union_ops.append(VotingClassifier(estimators=[('rf-mwfl={}'.format(mwfl),
RandomForestRegressor(n_estimators=100,
n_jobs=-1,
min_weight_fraction_leaf=mwfl))]))
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, min_weight_fraction_leaf=mwfl)
print('RF w/ mwfl={:0.2f} CV score: {:0.3f}'.format(
mwfl,
np.mean(cross_val_score(clf, all_features, all_classes, cv=StratifiedKFold(n_splits=5, shuffle=True)))))
clf = make_pipeline(make_union(*union_ops), RandomForestClassifier(n_estimators=100, n_jobs=-1))
print('Crowd machine CV score: {:0.3f}'.format(np.mean(cross_val_score(clf, all_features, all_classes,
cv=StratifiedKFold(n_splits=5, shuffle=True)))))
"""
Explanation: Crowd machine
Run random forest with 15 or 20 different terminal node sizes, on the same training data, in each case getting the probability for each subject or instance;
Use the output from each as a new synthetic feature, which is then input to another (single) random forest, also run in regression mode; in this case the probability estimates from each synthetic feature will be roughly continuous, since they are probability estimates and not hard 0/1 labels;
Generate some simple plots for the crowd;
Compare the crowd results to some individual random forest runs, using some two or three terminal node settings.
End of explanation
"""
|
PyPSA/PyPSA | examples/notebooks/spatial-clustering.ipynb | mit | import pypsa
import re
import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import pandas as pd
from pypsa.networkclustering import get_clustering_from_busmap, busmap_by_kmeans
n = pypsa.examples.scigrid_de()
n.lines["type"] = np.nan # delete the 'type' specifications to make this example easier
"""
Explanation: Network Clustering
In this example, we show how pypsa can deal with spatial clustering of networks.
End of explanation
"""
groups = n.buses.operator.apply(lambda x: re.split(" |,|;", x)[0])
busmap = groups.where(groups != "", n.buses.index)
"""
Explanation: The important information that pypsa needs for spatial clustering is in the busmap. It contains the mapping of which buses should be grouped together, similar to the groupby groups as we know them from pandas.
You can either calculate a busmap from the provided clustering algorithms or you can create/use your own busmap.
Cluster by custom busmap
Let's start with creating our own.
In the following, we group all buses together which belong to the same operator. Buses which do not have a specific operator just stay on their own.
End of explanation
"""
C = get_clustering_from_busmap(n, busmap)
"""
Explanation: Now we cluster the network based on the busmap.
End of explanation
"""
nc = C.network
"""
Explanation: C is a Clustering object which contains all important information.
Among others, the new network is now stored in that Clustering object.
End of explanation
"""
fig, (ax, ax1) = plt.subplots(
1, 2, subplot_kw={"projection": ccrs.EqualEarth()}, figsize=(12, 12)
)
plot_kwrgs = dict(bus_sizes=1e-3, line_widths=0.5)
n.plot(ax=ax, title="original", **plot_kwrgs)
nc.plot(ax=ax1, title="clustered by operator", **plot_kwrgs)
fig.tight_layout()
"""
Explanation: We have a look at the original and the clustered topology
End of explanation
"""
weighting = pd.Series(1, n.buses.index)
busmap2 = busmap_by_kmeans(n, bus_weightings=weighting, n_clusters=50)
"""
Explanation: Looks a bit messy as over 120 buses do not have assigned operators.
Clustering by busmap created from K-means
Let's now build a clustering based on the k-means algorithm.
To do so, we calculate the busmap from a non-weighted k-means clustering.
End of explanation
"""
C2 = get_clustering_from_busmap(n, busmap2)
nc2 = C2.network
"""
Explanation: We use this new k-means-based busmap to create a new clustered network.
End of explanation
"""
fig, (ax, ax1) = plt.subplots(
1, 2, subplot_kw={"projection": ccrs.EqualEarth()}, figsize=(12, 12)
)
plot_kwrgs = dict(bus_sizes=1e-3, line_widths=0.5)
n.plot(ax=ax, title="original", **plot_kwrgs)
nc2.plot(ax=ax1, title="clustered by kmeans", **plot_kwrgs)
fig.tight_layout()
"""
Explanation: Again, let's plot the networks to compare:
End of explanation
"""
|
atcemgil/notes | BayesianNetworks.ipynb | mit |
import numpy as np
import scipy as sc
from scipy.special import gammaln
from scipy.special import digamma
%matplotlib inline
from itertools import combinations
import pygraphviz as pgv
from IPython.display import Image
from IPython.display import display
def normalize(A, axis=None):
"""Normalize a probability table along a specified axis"""
Z = np.sum(A, axis=axis,keepdims=True)
idx = np.where(Z == 0)
Z[idx] = 1
return A/Z
def find(cond):
"""
finds indices where the given condition is satisfied.
"""
return list(np.where(cond)[0])
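A quick sanity check of the idea behind normalize, sketched in plain Python for a single row (the numpy version above does the same thing along an arbitrary axis):

```python
def normalize_row(row):
    # divide by the row sum, guarding against an all-zero row
    z = sum(row)
    return [v / z for v in row] if z else row

print(normalize_row([1.0, 3.0]))  # [0.25, 0.75]
print(normalize_row([0, 0]))      # [0, 0] -- zero rows pass through unchanged
```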
"""
Explanation: Bayesian Network in a Jupyter Notebook (BJN)
Ali Taylan Cemgil, 2018
Bogazici University
This notebook contains a self-contained Python implementation of the junction tree algorithm for exact inference in Bayesian networks with discrete probability tables. The aim is pedagogical: to provide a compact implementation so that the algorithms can be run in a notebook.
Bayesian Networks
A Bayesian Network is a probability model that has the form
$
\newcommand{\pa}[1]{ {\mathop{pa}({#1})}}
$
$$
p(x_{1:N}) = \prod_{n=1}^N p(x_n | x_\pa{n})
$$
The distribution is said to respect a directed acyclic graph (DAG) $\mathcal{G} = (V_\mathcal{G}, E_\mathcal{G})$ with $N$ nodes, where each random variable $x_n$ is associated with the node $n$ of $\mathcal{G}$.
For each node $n$, the set of (possibly empty) parents nodes is denoted by $\pa{n}$.
We assume that for each node $j$ in $\pa{n}$, we have a directed edge $j\rightarrow n$, denoted as the pair $(j,n)$. The edge set of $\mathcal{G}$ is the union of all such edges
$$
E_\mathcal{G} = \bigcup_{n \in [N]} \bigcup_{j \in \pa{n}} \{(j,n)\}
$$
To be associated with a proper probability model, the graph $\mathcal{G}$ must be acyclic, that is, it must be free of any directed cycles.
In BJN, the elements of a Bayesian Network model are:
alphabet:
The alphabet is a set of strings for each random variable $x_n$ for $n \in [N]$. For each random variable $x_n$ that we include in our model (an observable or a hidden) we have a corresponding unique identifier in the alphabet. To enforce a user specified ordering, the alphabet is represented as a list index_names where index_names[n] is the string identifier for $x_n$.
parents:
parents is a catalog that defines the edges of the directed acyclic graph $\mathcal{G}$. For each random variable $x_n$, we specify the set of parents $\pa{n}$, the variables that directly effect $x_n$. Here, each element of the alphabet is a key, and parents[index_names[n]] = $\pa{n}$ is the parent list.
states: states is a catalog, where each key is a name from the alphabet and the value is a list of strings that specify a human readable string with each state of the random variable.
In a discrete model each random variable $x_n$ has $I_n$ possible states. We derive a list cardinalities from states that is ordered according to the order specified by index_names where cardinalities[n] = $I_n$. The list cardinalities and the dictionary states must be consistent: cardinalities[n] = len(states[index_names[n]])
theta: A catalog or list of conditional probability tables, indexed by $n = 0 \dots N-1$. Each table theta[n] is an array with $N$ axes: for each variable $j \in \{n\} \cup \pa{n}$, the corresponding axis has length $I_j$; all remaining axes have length one. This convention lets numpy broadcasting implement products of tables directly.
Utility Functions
End of explanation
"""
def random_alphabet(N=20, first_letter='A'):
"""Generates unique strings to be used as index_names"""
if N<27:
alphabet = [chr(i+ord(first_letter)) for i in range(N)]
else:
alphabet = ['X'+str(i) for i in range(N)]
return alphabet
def random_parents(alphabet, max_indeg=3):
"""Random DAG generation"""
N = len(alphabet)
indeg = lambda: np.random.choice(range(1,max_indeg+1))
parents = {a:[b for b in np.random.choice(alphabet[0:(1 if i==0 else i)], replace=False, size=min(indeg(),i))] for i,a in enumerate(alphabet)}
return parents
def random_cardinalities(alphabet, cardinality_choices=[2,3,4,5]):
"""Random cardinalities"""
return [np.random.choice(cardinality_choices) for a in alphabet]
def states_from_cardinalities(alphabet, cardinalities):
"""Generate generic labels for each state"""
return {a:[a+"_state_"+str(u) for u in range(cardinalities[i])] for i,a in enumerate(alphabet)}
def cardinalities_from_states(alphabet, states):
"""Count each cardinality according to the order implied by the alphabet list"""
return [len(states[a]) for a in alphabet]
def random_observations(cardinalities, visibles):
"""
Samples a tensor of the shape of visibles. This function does not sample
from the joint distribution implied by the graph and the probability tables
"""
return np.random.choice(range(10), size=clique_shape(cardinalities, visibles))
def random_dirichlet_cp_table(gamma, cardinalities, n, pa_n):
'''
gamma : Dirichlet shape parameter
cardinalities : List of number of states of each variable
n, pa_n : Output a table of form p(n | pa_n ), n is an index, pa_n is the list of parents of n
'''
N = len(cardinalities)
cl_shape = clique_shape(cardinalities, [n]+pa_n)
U = clique_prior_marginal(cardinalities, cl_shape)
return normalize(np.random.gamma(shape=gamma*U, size=cl_shape), axis=n)
def random_cp_tables(index_names, cardinalities, parents, gamma):
"""
Samples a set of conditional probability tables consistent with the factorization
implied by the graph.
"""
N = len(index_names)
theta = [[]]*N
for n,a in enumerate(index_names):
theta[n] = random_dirichlet_cp_table(gamma, cardinalities, n, index_names_to_num(index_names, parents[a]))
#print(a, parents[a])
#print(theta[n].shape)
#print('--')
return theta
def random_model(N=10, max_indeg=4):
"""
Generates a random Bayesian Network
"""
index_names = random_alphabet(N)
parents = random_parents(index_names)
cardinalities = random_cardinalities(index_names)
states = states_from_cardinalities(index_names, cardinalities)
return index_names, parents, cardinalities, states
"""
Explanation: Random structure and parameter generators
End of explanation
"""
def clique_shape(cardinalities, family):
N = len(cardinalities)
size = [1]*N
for i in family:
size[i] = cardinalities[i]
return size
def clique_prior_marginal(cardinalities, shape):
U = 1
for a1,a2 in zip(shape, cardinalities):
U = U*a2/a1
return U
def index_names_to_num(index_names, names):
name2idx = {name: i for i,name in enumerate(index_names)}
return [name2idx[nm] for nm in names]
def show_dag_image(index_names, parents, imstr='_BJN_tempfile.png'):
name2idx = {name: i for i,name in enumerate(index_names)}
A = pgv.AGraph(directed=True)
for i_n in index_names:
A.add_node(name2idx[i_n], label=i_n)
for j_n in parents[i_n]:
A.add_edge(name2idx[j_n], name2idx[i_n])
A.layout(prog='dot')
A.draw(imstr)
display(Image(imstr))
return
def show_ug_image(UG, imstr='_BJN_tempfile.png'):
A = pgv.AGraph(directed=False)
for i_n in range(UG.shape[0]):
A.add_node(i_n, label=i_n)
for j_n in find(UG[i_n,:]):
if j_n>i_n:
A.add_edge(j_n, i_n)
A.layout(prog='dot')
A.draw(imstr)
display(Image(imstr))
return
def make_cp_tables(index_names, cardinalities, cp_tables):
N = len(index_names)
theta = [[]]*N
for c in cp_tables:
if not isinstance(c, tuple):
nums = index_names_to_num(index_names, (c,))
else:
nums = index_names_to_num(index_names, c)
#print(nums)
n = nums[0]
idx = list(reversed(nums))
theta[n] = np.einsum(np.array(cp_tables[c]), idx, sorted(idx)).reshape(clique_shape(cardinalities,idx))
return theta
def make_adjacency_matrix(index_names, parents):
nVertex = len(index_names)
name2idx = {name: i for i,name in enumerate(index_names)}
## Build Graph data structures
# Adjacency matrix
adj = np.zeros((nVertex, nVertex), dtype=int)
for i_name in parents.keys():
i = name2idx[i_name]
for m_name in parents[i_name]:
j = name2idx[m_name]
adj[i, j] = 1
return adj
def make_families(index_names, parents):
nVertex = len(index_names)
adj = make_adjacency_matrix(index_names, parents)
# Possibly check topological ordering
# toposort(adj)
# Family, Parents and Children
fa = [[]]*nVertex
#pa = [[]]*nVertex
#ch = [[]]*nVertex
for n in range(nVertex):
p = find(adj[n,:])
#pa[n] = p
fa[n] = [n]+p
#c = find(adj[:,n])
#ch[n] = c
return fa
def permute_table(index_names, cardinalities, visible_names, X):
'''
Given a network with index_names and cardinalities, reshape a table X with
the given order as in visible_names so that it fits the storage convention of BNJNB.
'''
nums = index_names_to_num(index_names, visible_names)
osize = [cardinalities[n] for n in nums]
idx = list(nums)
shape = clique_shape(cardinalities,idx)
return np.einsum(X, idx, sorted(idx)).reshape(shape)
def make_cliques(families, cardinalities, visibles=None, show_graph=False):
'''
Builds the set of cliques of a triangulated graph.
'''
N = len(families)
if visibles:
C = families+[visibles]
else:
C = families
# Moral Graph
MG = np.zeros((N, N))
for F in C:
for edge in combinations(F,2):
MG[edge[0], edge[1]] = 1
MG[edge[1], edge[0]] = 1
# if show_graph:
# show_ug_image(MG,imstr='MG.png')
elim = []
Clique = []
visited = [False]*N
# Find an elimination sequence
# Based on greedy search
# Criteria, select the minimum induced clique size
for j in range(N):
min_clique_size = np.inf
min_idx = -1
for i in range(N):
if not visited[i]:
neigh = find(MG[i,:])
nm = np.prod(clique_shape(cardinalities, neigh+[i]))
if min_clique_size > nm:
min_idx = i
min_clique_size = nm
neigh = find(MG[min_idx,:])
temp = set(neigh+[min_idx])
is_subset = False
for CC in Clique:
if temp.issubset(CC):
is_subset=True
if not is_subset:
Clique.append(temp)
# Remove the node from the moral graph
for edge in combinations(neigh,2):
MG[edge[0], edge[1]] = 1
MG[edge[1], edge[0]] = 1
MG[min_idx,:] = 0
MG[:, min_idx] = 0
elim.append(min_idx)
visited[min_idx] = True
# if show_graph:
# show_ug_image(MG,imstr='MG'+str(j)+'.png')
return Clique, elim
def topological_order(index_names, parents):
"""
returns a topological ordering of the graph
"""
adj = make_adjacency_matrix(index_names, parents)
nVertex = len(index_names)
indeg = np.sum(adj, axis = 1)
zero_in = find(indeg==0)
topo_order = []
while zero_in:
n = zero_in.pop(0)
topo_order.append(n)
for j in find(adj[:,n]):
indeg[j] -= 1
if indeg[j] == 0:
zero_in.append(j)
if len(topo_order)<nVertex:
return []
else:
return topo_order
"""
Explanation: Graph Utilities and Visualizations
End of explanation
"""
def mst(E, N):
"""
Generate a Spanning Tree of a graph with N nodes by Kruskal's algorithm,
given preordered edge set E with each edge as (weight, v1, v2)
For a minimum spanning tree, use
E.sort()
mst(E, N)
For a maximum spanning tree, use
E.sort(reverse=True)
mst(E, N)
"""
parent = list(range(N))
spanning_tree = {i:[] for i in range(N)}
def find_v(vertex):
v = vertex
while parent[v] != v:
v = parent[v]
return v
def union(v1, v2):
root1 = find_v(v1)
root2 = find_v(v2)
if root1 != root2:
parent[root2] = root1
for edge in E:
weight, v1, v2 = edge
p1, p2 = find_v(v1), find_v(v2)
if p1 != p2:
union(p1, p2)
spanning_tree[v1].append(v2)
spanning_tree[v2].append(v1)
return spanning_tree
def bfs(adj_list, root):
"""
Breadth-first search starting from the root
adj_list : A list of lists where adj_list[n] denotes the set of nodes that can be reached from node n
Returns a BFS order, and a BFS tree as an array parent[i]
The root node has parent[rootnode] = -1
"""
N = len(adj_list)
visited = [False]*N
parent = [-1]*N
queue = [root]
order = []
while queue:
v = queue.pop(0)
if not visited[v]:
visited[v] = True
for w in adj_list[v]:
if not visited[w]:
parent[w] = v
queue.append(w)
order.append(v)
return order, parent
def is_leaf(i, parent):
return not (i in parent)
def is_root(i, parent):
return parent[i] == -1
def make_list_receive_from(parent):
lst = [[] for i in range(len(parent)) ]
for i,p in enumerate(parent):
if p!= -1:
lst[p].append(i)
return lst
"""
Explanation: Spanning tree and graph traversal
End of explanation
"""
def sample_indices(index_names, parents, cardinalities, theta, num_of_samples=1):
'''
Sample directly the indices given a Bayesian network
'''
N = len(index_names)
order = topological_order(index_names, parents)
X = []
for count in range(num_of_samples):
x = [[]]*N
for n in order:
varname = index_names[n]
idx = index_names_to_num(index_names,parents[varname])
j = [0]*N
for i in idx:
j[i] = x[i]
I_n = cardinalities[n]
j[n] = tuple(range(I_n))
#print(j)
#print(theta[n][j])
x[n] = np.random.choice(I_n, p=theta[n][j].flatten())
#print(x)
X.append(x)
return X
def sample_states(var_names, states, index_names, parents, theta, num_of_samples=1):
"""
Returns a dict with keys as state_name tuples and values as counts.
This function generates each sample separately, so if
num_of_samples is large, consider using sample_counts
"""
N = len(index_names)
order = topological_order(index_names, parents)
X = dict()
nums = index_names_to_num(index_names,var_names)
cardinalities = cardinalities_from_states(index_names, states)
shape = clique_shape(cardinalities, nums)
for count in range(num_of_samples):
x = [[]]*N
for n in order:
varname = index_names[n]
idx = index_names_to_num(index_names,parents[varname])
j = [0]*N
for i in idx:
j[i] = x[i]
I_n = cardinalities[n]
j[n] = tuple(range(I_n))
#print(j)
#print(theta[n][j])
x[n] = np.random.choice(I_n, p=theta[n][j].flatten())
#print(x)
key = tuple((states[index_names[n]][x[n]] for n in nums))
X[key] = X.get(key, 0) + 1
return X
def counts_to_table(var_names, ev_counts, index_names, states):
"""
Given observed variables names as var_names and
observations as key-value pairs {state_configuration: count}
create a table of counts.
A state configuration is a tuple (state_name_0, ..., state_name_{K-1})
    where K is the length of var_names, and state_name_k is a state
from states[var_names[k]]
"""
var_nums = list(index_names_to_num(index_names, var_names))
cardinalities = cardinalities_from_states(index_names, states)
shape = clique_shape(cardinalities, var_nums)
C = np.zeros(shape=shape)
N = len(index_names)
for rec in ev_counts.keys():
conf = [0]*N
for key, val in zip(var_names, rec):
s = states[key].index(val)
n = index_names_to_num(index_names, [key])[0]
conf[n] = s
#print(conf)
# Final value is the count that the pair is observed
C[tuple(conf)] += ev_counts[rec]
return C
def table_to_counts(T, var_names, index_names, states, clamped=[], threshold = 0):
"""
Convert a table on index_names clamped on setdiff(index_names, var_names)
to a dict of counts. Keys are state configurations.
"""
var_nums = list(index_names_to_num(index_names, var_names))
M = len(index_names)
ev_count = {}
for u in zip(*(T>threshold).nonzero()):
if not clamped:
key = tuple((states[v][u[var_nums[i]]] for i,v in enumerate(var_names)))
else:
key = tuple((states[v][u[var_nums[i]]] if clamped[var_nums[i]] is None else states[v][clamped[var_nums[i]]] for i,v in enumerate(var_names)))
ev_count[key] = T[u]
return ev_count
"""
Explanation: Samplers
End of explanation
"""
def clamped_pot(X, ev_states):
"""
Returns a subslice of a table. Used for clamping conditional probability tables
to a given set of evidence.
X: table
ev_states: list of clamped states ev_states[i]==e (use None if not clamped)
"""
# var is clamped, var not clamped
# ev_states[i]==e ev_states[i]==None
# var is member idx[i] = e idx[i] = slice(0, X.shape[i])
# var not member idx[i] = None idx[i] = slice(0, X.shape[i])
card = list(X.shape)
N = len(card)
idx = [[]]*N
for i,e in enumerate(ev_states):
if e is None and X.shape[i]>1: # the variable is unclamped or it is not a member of the potential
idx[i] = slice(0, X.shape[i])
else:
if X.shape[i]==1:
idx[i] = 0
else:
idx[i] = e
card[i] = 1
return X[tuple(idx)].reshape(card)
sz = (2,4,3,1,5)
T = np.random.choice([1,2,3,4], size=sz)
print(T.shape)
#print(T[1,:,:,0,0].shape)
print('*')
print(clamped_pot(T, [1, None, None, 7, 0]).shape)
def multiply(theta, idx):
"""Multiply a subset of a given list of potentials"""
par = [f(n) for n in idx for f in (lambda n: theta[n], lambda n: range(len(theta)))]+[range(len(theta))]
return np.einsum(*par)
def condition_and_multiply(theta, idx, ev_states):
"""Multiply a subset of a given list of potentials"""
par = [f(n) for n in idx for f in (lambda n: clamped_pot(theta[n], ev_states), lambda n: range(len(theta)))]+[range(len(theta))]
return np.einsum(*par)
def marginalize(Cp, idx, cardinalities):
return np.einsum(Cp, range(len(cardinalities)), [int(s) for s in sorted(idx)]).reshape(clique_shape(cardinalities,idx))
class Engine():
def __init__(self, index_names, parents, states, theta, visible_names=[]):
self.states = states
cardinalities = [len(states[a]) for a in index_names]
families = make_families(index_names, parents)
self.cardinalities = cardinalities
self.index_names = index_names
visibles = index_names_to_num(index_names, visible_names)
self.Clique, self.elim = make_cliques(families, cardinalities, visibles)
# Assign each conditional Probability table to one of the Clique potentials
# Clique2Pot is the assignments
self.Pot = families
self.Clique2Pot = np.zeros((len(self.Clique), len(self.Pot)))
selected = [False]*len(self.Pot)
for i,c in enumerate(self.Clique):
for j,p in enumerate(self.Pot):
if not selected[j]:
self.Clique2Pot[i,j] = set(p).issubset(c)
if self.Clique2Pot[i,j]:
selected[j] = True
# Find the root clique
# In our case it will be the one where all the visibles are a subset of
self.RootClique = -1
for i,c in enumerate(self.Clique):
if set(visibles).issubset(c):
self.RootClique = i
break
# Build the junction graph and compute a spanning tree
junction_graph_edges = []
for i,p in enumerate(self.Clique):
for j,q in enumerate(self.Clique):
ln = len(p.intersection(q))
if i<j and ln>0:
junction_graph_edges.append((ln,i,j))
junction_graph_edges.sort(reverse=True)
self.mst = mst(junction_graph_edges, len(self.Clique))
self.order, self.parent = bfs(self.mst, self.RootClique)
self.receive_from = make_list_receive_from(self.parent)
self.visibles = visibles
# Setup the data structures for the Junction tree algorithm
self.SeparatorPot = dict()
self.CliquePot = dict()
self.theta = theta
self.cardinalities_clamped = []
def propagate_observation(self, observed_configuration={}):
ev_names = list(observed_configuration.keys())
observed_states = [self.states[nm].index(observed_configuration[nm]) for nm in ev_names]
nums = index_names_to_num(self.index_names, ev_names)
#cardinalities_clamped = self.cardinalities.copy()
cardinalities_clamped = [1 if i in nums else c for i,c in enumerate(self.cardinalities)]
ev_states = [None]*len(self.cardinalities)
for i,e in zip(nums, observed_states):
ev_states[i] = e
# Collect stage
for c in reversed(self.order):
self.CliquePot[c] = np.ones(clique_shape(cardinalities_clamped, self.Clique[c]))
for p in self.receive_from[c]:
self.CliquePot[c] *= self.SeparatorPot[(p,c)]
# Prepare Clique Potentials
# Find probability tables that need to be multiplied into
# the Clique potential
idx = find(self.Clique2Pot[c, :])
if idx:
#print(idx)
#print(ev_states)
self.CliquePot[c] *= condition_and_multiply(self.theta, idx, ev_states)
# Set the separator potential
if not is_root(c, self.parent):
idx = self.Clique[self.parent[c]].intersection(self.Clique[c])
self.SeparatorPot[(c,self.parent[c])] = marginalize(self.CliquePot[c], idx, cardinalities_clamped)
# Distribution Stage
for c in self.order[1:]:
idx = self.Clique[self.parent[c]].intersection(self.Clique[c])
self.CliquePot[c] *= marginalize(self.CliquePot[self.parent[c]], idx, cardinalities_clamped)/self.SeparatorPot[(c,self.parent[c])]
self.cardinalities_clamped = cardinalities_clamped
self.values_clamped = ev_states
def propagate_table(self, X=None):
# Reset
self.values_clamped = [None]*len(self.cardinalities)
# Collect stage
for c in reversed(self.order):
self.CliquePot[c] = np.ones(clique_shape(self.cardinalities, self.Clique[c]))
for p in self.receive_from[c]:
self.CliquePot[c] *= self.SeparatorPot[(p,c)]
# Prepare Clique Potentials
# Find probability tables that need to be multiplied into
# the Clique potential
idx = find(self.Clique2Pot[c, :])
if idx:
self.CliquePot[c] *= multiply(self.theta, idx)
# Set the separator potential
if not is_root(c, self.parent):
idx = self.Clique[self.parent[c]].intersection(self.Clique[c])
self.SeparatorPot[(c,self.parent[c])] = marginalize(self.CliquePot[c], idx, self.cardinalities)
if X is not None:
SepX = marginalize(self.CliquePot[self.RootClique], self.visibles, self.cardinalities)
# Note: Take care of zero divide
self.CliquePot[self.RootClique] *= X/SepX
# Distribution Stage
for c in self.order[1:]:
idx = self.Clique[self.parent[c]].intersection(self.Clique[c])
self.CliquePot[c] *= marginalize(self.CliquePot[self.parent[c]], idx, self.cardinalities)/self.SeparatorPot[(c,self.parent[c])]
# def propagate(self, ev_names=[],ev_counts=None):
#
# if ev_names:
# X = evidence_to_table(ev_names, ev_counts, self.index_names, self.cardinalities, self.states)
# else:
# X = None
def compute_ESS(self, X=[]):
"""Compute Expected Sufficient Statistics for each probability table"""
E_S = dict()
self.propagate_table(X)
for c in self.order:
for n in find(self.Clique2Pot[c, :]):
E_S[n] = marginalize(self.CliquePot[c], self.Pot[n], self.cardinalities)
return E_S
def compute_marginal(self, var_names, normalization=False):
"""
Compute a marginal table on variables in var_names
if the variables are the subset of a clique, otherwise returns None.
var_names can be forced to be a subset of a clique by specifying
Engine(..., visible_names=var_names)
"""
var_indices = index_names_to_num(self.index_names, var_names)
idx = set(var_indices)
j = None
for c in self.order:
if idx.issubset(self.Clique[c]):
j = c
break
if j is not None:
if self.cardinalities_clamped:
if normalization:
return normalize(marginalize(self.CliquePot[j], var_indices, self.cardinalities_clamped))
else:
return marginalize(self.CliquePot[j], var_indices, self.cardinalities_clamped)
else:
if normalization:
return normalize(marginalize(self.CliquePot[j], var_indices, self.cardinalities))
else:
return marginalize(self.CliquePot[j], var_indices, self.cardinalities)
else:
print('Desired marginal is not a subset of any clique')
return None
def singleton_marginals(self, var_names, normalization=False):
""" For each variable in var_names compute its marginal """
L = {}
var_indices = index_names_to_num(self.index_names, var_names)
for j, v in enumerate(var_names):
marg = self.compute_marginal([v])
if normalization:
marg = normalize(marg)
if self.values_clamped[var_indices[j]] is None:
L[v] = {self.states[v][i]: p for i,p in enumerate(marg.flatten())}
else:
L[v] = {self.states[v][self.values_clamped[var_indices[j]]]: p for i,p in enumerate(marg.flatten())}
return L
def sample_table(self, var_names, num_of_samples=1):
#self.propagate_observation({})
P = self.compute_marginal(var_names)
if P is not None:
return np.random.multinomial(num_of_samples, P.flatten()).reshape(P.shape)
else:
return None
def marginal_table(self, marg_names, normalization=False):
clamped = self.values_clamped
return table_to_counts(self.compute_marginal(marg_names, normalization), marg_names, self.index_names, self.states, clamped)
"""
Explanation: The inference Engine
End of explanation
"""
## Define the model
index_names = ['Box', 'Fruit']
parents = {'Box': [], 'Fruit': ['Box']}
show_dag_image(index_names, parents)
"""
Explanation: Examples
Example: Oranges, Apples and Bananas
Alice has two boxes, Box1 and Box2. In Box1, there are $10$ oranges, $4$ apples and $1$ banana, in Box2 there are $2$ oranges, $6$ apples and $2$ bananas. Alice chooses one of the boxes randomly, with equal probability and
selects one of the fruits randomly, with equal probability.
What is the probability of choosing a Banana?
Given a banana is chosen, what is the probability that Alice chose Box1?
Definition of the Model Structure
End of explanation
"""
states = {'Box': ['Box1', 'Box2'],
'Fruit': ['Apple', 'Orange', 'Banana']}
cardinalities = cardinalities_from_states(index_names, states)
# Conditional Probability Tables
# Rows follow states['Box']; columns follow states['Fruit'] = [Apple, Orange, Banana]
cp_tables = {('Box'): [0.5, 0.5],
             ('Fruit', 'Box'): [[4./15, 10./15, 1./15], [6./10, 2./10, 2./10]]
            }
# Initialize the correct index order for strided access by computing the necessary permutations
theta = make_cp_tables(index_names, cardinalities, cp_tables)
"""
Explanation: Definition of Model Parameters
End of explanation
"""
eng = Engine(index_names, parents, states, theta)
"""
Explanation: Creation of the Inference Engine
End of explanation
"""
# Using the BJN
eng.propagate_observation()
#eng.propagate_table()
print(eng.compute_marginal(['Fruit']))
# Independent verification
print(marginalize(multiply(theta, [0,1]), [1], cardinalities))
"""
Explanation: Formulation of the Query
Example Query: What is the probability of choosing a Banana?
We need to compute the marginal $p(\text{Fruit}) = \sum_{\text{Box}} p(\text{Fruit}|\text{Box}) p(\text{Box})$
End of explanation
"""
#obs = {'Fruit': 'Orange', 'Box': 'Box1'}
#obs = {'Fruit': 'Orange'}
obs = {'Fruit':'Banana'}
eng = Engine(index_names, parents, states, theta)
eng.propagate_observation(obs)
marg_names = ['Box']
print(normalize(eng.compute_marginal(marg_names)))
# Independent verification
#print(normalize(multiply(theta, [0,1])[:,1]))
print(normalize(multiply(theta, [0,1])[:,2]))
"""
Explanation: Example Query: Given a banana is chosen, what is the probability that Alice took it from Box1?
We need to compute the posterior $$p(\text{Box}| \text{Fruit} = \text{banana}) = \frac{ p(\text{Fruit} = \text{banana}|\text{Box}) p(\text{Box})}{p(\text{Fruit} = \text{banana})}$$
BJN implements two methods for calculating the answer for this query.
propagate_observation
propagate_table
In the first method, propagate_observation, a single observation is represented by a catalog with key-value pairs where keys are observed variable names and values are observed states. This is the standard method for inference in Bayesian networks.
End of explanation
"""
ev_names = ['Fruit']
ev_counts = {('Banana',): 1}
ev_table = counts_to_table(ev_names, ev_counts, index_names, states)
eng = Engine(index_names, parents, states, theta, visible_names=ev_names)
eng.propagate_table(ev_table)
marg_names = ['Box']
#idx = index_names_to_num(index_names, ['Box'])
print(normalize(eng.compute_marginal(marg_names)))
# Independent verification
print(normalize(multiply(theta, [0,1])[:,2]))
"""
Explanation: In the second method, propagate_table, we assume that we have obtained a collection
of observations and we are given the counts of each possible configuration.
End of explanation
"""
ev_names = ['Fruit']
ev_counts = {('Orange',):3, ('Banana',):1}
#ev_counts = {('Orange',):2, ('Banana',):5, ('Apple',):12}
#ev_counts = {('Orange',):1}
C = counts_to_table(ev_names, ev_counts, index_names, states)
eng.propagate_table(C)
marg_names = ['Box']
#idx = index_names_to_num(index_names, ['Box'])
print(normalize(eng.compute_marginal(marg_names)))
eng.marginal_table(marg_names)
ev_names = ['Fruit','Box']
ev_counts = {('Apple','Box1'):12, ('Orange','Box2'):2, ('Banana','Box2'):5}
eng = Engine(index_names, parents, states, theta, visible_names=ev_names)
C = counts_to_table(ev_names, ev_counts, index_names, states)
eng.propagate_table(C)
marg_names = ['Fruit', 'Box']
#idx = index_names_to_num(index_names, ['Box'])
print(normalize(eng.compute_marginal(marg_names)))
var_names = ['Fruit', 'Box']
ev_counts = sample_states(var_names, states, index_names, parents, theta, num_of_samples=1000)
C = counts_to_table(var_names, ev_counts, index_names, states)
#eng = Engine(index_names, visible_names, parents, cardinalities, theta)
#C = evidence_to_counts(X, index_names)
print(ev_counts)
print(C)
"""
Explanation: The advantage of using propagate_table is that if several observations are given on
the same variables, as is often the case in applications, the expected sufficient statistics can be computed in one collect-distribute iteration instead of a separate one for each observation.
End of explanation
"""
from itertools import product
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
## Define the model
index_names = ['Die1', 'Die2', 'Sum']
parents = {'Die1': [], 'Die2': [], 'Sum': ['Die1', 'Die2']}
states = {'Die1': list(range(1,7)), 'Die2': list(range(1,7)), 'Sum': list(range(2,13))}
cardinalities = cardinalities_from_states(index_names, states)
show_dag_image(index_names, parents)
S = np.zeros(cardinalities)
for i,j in product(range(1,7),repeat=2):
S[i-1,j-1,i+j-2] = 1
cp_tables = {('Die1',): [1/6, 1/6,1/6,1/6,1/6,1/6],
('Die2',): [1/6, 1/6,1/6,1/6,1/6,1/6],
('Sum','Die1','Die2'): S
}
theta = make_cp_tables(index_names, cardinalities, cp_tables )
eng = Engine(index_names, parents, states, theta)
eng.propagate_observation({'Sum':8})
marg = eng.compute_marginal(['Die1', 'Die2'], normalization=True)
marg
vname = 'Die2'
eng.propagate_observation({'Sum':9})
marg = eng.singleton_marginals([vname], normalization=True)
a = [x for x in marg[vname].keys()]
b = [marg[vname][x] for x in marg[vname].keys()]
plt.title(vname)
plt.bar(a, normalize(b))
plt.ylim([0, 1])
plt.show()
ev_names = ['Sum']
ev_counts = {(2,):1, (3,):1, (4,):1}
C = counts_to_table(ev_names, ev_counts, index_names, states)
eng.propagate_table(C)
marg_names = ['Die1']
#idx = index_names_to_num(index_names, ['Box'])
print(normalize(eng.compute_marginal(marg_names)))
#marg_names = ['Sum','Die2', 'Die1']
marg_names = ['Die2', 'Die1']
eng.propagate_observation({'Sum':9})
clamped = eng.values_clamped
table_to_counts(eng.compute_marginal(marg_names), marg_names, index_names, states, clamped)
eng.compute_marginal(marg_names)
marg_names = ['Sum','Die2', 'Die1']
eng.propagate_observation({'Sum': 11})
eng.marginal_table(marg_names, normalization=True)
marg_names = ['Sum','Die2', 'Die1']
T = eng.singleton_marginals(marg_names, normalization=True)
T
"""
Explanation: Example: Two dice
Two fair dice are thrown. Their sum comes up $9$. What is the face value of the first die?
End of explanation
"""
## Define the model
index_names = ['i', 'j', 'k']
cardinalities = [10, 20, 3]
parents = {'i': [], 'j': ['k'], 'k': ['i']}
show_dag_image(index_names, parents)
#parents = {'k': [], 'i': ['k'], 'j': ['k']}
visible_names = ['i','j']
visibles = index_names_to_num(index_names, visible_names)
"""
Explanation: Example: A Simple chain
End of explanation
"""
# [A][S][T|A][L|S][B|S][E|T:L][X|E][D|B:E]
index_names = ['A', 'S', 'T', 'L', 'B', 'E', 'X', 'D']
parents = {'A':[], 'S':[], 'T':['A'], 'L':['S'], 'B':['S'], 'E':['T','L'], 'X':['E'], 'D':['B','E']}
## A Method for systematically entering the conditional probability tables
# P(A = Yes) = 0.01
# P(S = Yes) = 0.5
# P(T=Positive | A=Yes) = 0.05
# P(T=Positive | A=No) = 0.01
# P(L=Positive | S=Yes) = 0.1
# P(L=Positive | S=No) = 0.01
# P(B=Positive | S=Yes) = 0.6
# P(B=Positive | S=No) = 0.3
# P(E=True | L=Positive, T=Positive) = 1
# P(E=True | L=Positive, T=Negative) = 1
# P(E=True | L=Negative, T=Positive) = 1
# P(E=True | L=Negative, T=Negative) = 0
# P(X=Positive | E=True) = 0.98
# P(X=Positive | E=False) = 0.05
# P(D=Positive | E=True, B=Positive) = 0.9
# P(D=Positive | E=True, B=Negative) = 0.7
# P(D=Positive | E=False, B=Positive) = 0.8
# P(D=Positive | E=False, B=Negative) = 0.1
states = {'A':['No', 'Yes'],
'S':['No', 'Yes'],
'T':['Negative','Positive'],
'L':['Negative','Positive'],
'B':['Negative','Positive'],
'E':['False', 'True'],
'X':['Negative','Positive'],
'D':['Negative','Positive']}
cardinalities = cardinalities_from_states(index_names, states)
# Conditional Probability Tables
cp_tables = {('A',): [0.99, 0.01],
('S',): [0.5, 0.5],
('T','A'): [[0.99, 0.01],[0.95,0.05]],
('L','S'): [[0.99, 0.01],[0.9,0.1]],
('B','S'): [[0.7,0.3],[0.4, 0.6]],
('E','T','L'): [[[1.,0.],[0.,1.]] , [[0.,1.],[0.,1.]]],
('X','E'): [[0.95, 0.05], [0.02, 0.98]],
('D','B','E'):[[[0.9,0.1],[0.2,0.8]],[[0.3,0.7],[0.1,0.9]]]
}
# Todo: write a converter from a standard bn format
theta = make_cp_tables(index_names, cardinalities, cp_tables)
show_dag_image(index_names, parents)
eng = Engine(index_names, parents, states, theta)
eng.propagate_observation({'X': 'Positive', 'D':'Positive', 'S':'Yes'})
#normalize(eng.compute_marginal(['T']))
eng.marginal_table(['T'])
vis_names = ['A','B']
eng = Engine(index_names, parents, states, theta, visible_names=vis_names)
eng.propagate_observation({'X': 'Positive', 'D':'Positive', 'S':'Yes'})
eng.marginal_table(vis_names)
marg = eng.compute_marginal(['A'])
marg
eng.marginal_table(['A'])
#eng.propagate_observation({'X': 'Positive', 'D':'Positive', 'S':'Yes'})
#eng.propagate_observation({'A':'Yes','S':'Yes','X':'Positive','D':'Positive'})
#eng.propagate_observation({'A':'Yes','S':'Yes'})
eng.propagate_observation({'X': 'Positive', 'A':'Yes', 'D':'Positive','S':'Yes'})
#eng.propagate_observation({'S':'Yes'})
#eng.propagate_observation({})
eng.singleton_marginals(['A','T','L','B'], normalization=True)
normalize(eng.compute_marginal(['T','L']))
eng.propagate_observation({'X':'Positive'})
var_names = ['T','L']
X = eng.sample_table(var_names, num_of_samples=10000)
table_to_counts(X, index_names, index_names, states, clamped=eng.values_clamped)
visibles = index_names_to_num(index_names, var_names)
visibles
X.shape
P = multiply(eng.theta, range(len(cardinalities)))
P /= marginalize(P, visibles, cardinalities)
E_S = X*P
marginalize(E_S, [1], cardinalities)
"""
Explanation: Example: Chest Clinic
The chest clinic model (also known as Asia) is a famous toy medical expert-system example from the classic 1988 paper of Lauritzen and Spiegelhalter.
https://www.jstor.org/stable/pdf/2345762.pdf
Shortness-of-breath (dyspnoea) may be due to tuberculosis, lung cancer or
bronchitis, or none of them, or more than one of them. A recent visit to Asia
increases the chances of tuberculosis, while smoking is known to be a risk factor
for both lung cancer and bronchitis. The results of a single chest X-ray do not
discriminate between lung cancer and tuberculosis, as neither does the presence
or absence of dyspnoea.
Also see: http://www.bnlearn.com/documentation/man/asia.html
In this toy domain, a patient can have three diseases:
T: Tuberculosis
L: Lung Cancer
B: Bronchitis
We also have the following possible a-priori effects on contracting a disease:
A: There is a Tuberculosis epidemic in Asia, so a visit may have an effect on Tuberculosis
S: Smoking history is known to have an effect both on Lung cancer and Bronchitis
Logical variables may be present, for example
* E: Either T or L or both
Symptoms or medical test results
D: The patient can show Dyspnea: Difficult breathing; shortness of breath, depending on T,L or B
X: The X-ray gives a positive response for either Lung Cancer or Tuberculosis, or both. B does not have a direct effect on the outcome of the X-ray
End of explanation
"""
#index_names = [a for a in set(['A', 'S', 'T', 'L', 'B', 'E', 'X', 'D'])]
index_names = ['A', 'S', 'T', 'L', 'B', 'E', 'X', 'D']
parents = random_parents(index_names)
show_dag_image(index_names, parents)
visible_names = ['X','A','B']
visibles = index_names_to_num(index_names, visible_names)
families = make_families(index_names, parents)
cardinalities = random_cardinalities(index_names)
states = states_from_cardinalities(index_names, cardinalities)
gamma = 0.01
theta = random_cp_tables(index_names, cardinalities, parents, gamma)
X = random_observations(cardinalities, visibles)
print(parents)
print(cardinalities)
print(states)
theta[0].shape
"""
Explanation: A Random Graph
End of explanation
"""
index_names, parents, cardinalities, states = random_model(10, max_indeg=4)
show_dag_image(index_names, parents)
#theta = make_random_cp_tables(index_names, cardinalities, parents, gamma=0.01)
visible_names = index_names[-4:]
families = make_families(index_names, parents)
visibles = index_names_to_num(index_names, visible_names)
Clique, elim_seq = make_cliques(families, cardinalities, visibles, show_graph=False)
print(elim_seq)
print(Clique)
"""
Explanation: Obsolete
Example 4: A random Model
End of explanation
"""
index_names = ['X1', 'X2', 'X3', 'X4', 'Y1', 'Y2', 'Y3', 'Y4']
parents = {'X1':[], 'Y1':['X1'], 'X2':['X1'], 'Y2':['X2'], 'X3':['X2'], 'Y3':['X3'], 'X4':['X3'], 'Y4':['X4'] }
cardinalities = [2]*8
show_dag_image(index_names, parents)
"""
Explanation: Example 5: A Hidden Markov Model
End of explanation
"""
#index_names = [a for a in set(['A', 'S', 'T', 'L', 'B', 'E', 'X', 'D'])]
index_names = ['A', 'S', 'T', 'L', 'B', 'E', 'X', 'D']
visible_names = ['X', 'A', 'D']
parents = random_parents(index_names)
cardinalities = random_cardinalities(index_names)
states = random_states(index_names, cardinalities)
show_dag_image(index_names, parents)
families = make_families(index_names, parents)
visibles = make_visibles(index_names, visible_names)
Clique, elim_seq = make_cliques(families, cardinalities, visibles=None)
gamma = 0.01
theta = make_random_cp_tables(index_names, cardinalities, parents, gamma)
index_names = ['A', 'S', 'T', 'L', 'B', 'E', 'X', 'D']
parents = {'A':[], 'S':[], 'T':['A'], 'L':['S'], 'B':['S'], 'E':['T','L'], 'X':['E'], 'D':['B','E']}
show_dag_image(index_names, parents)
cardinalities = [2,2,2,2,2,2,2,2]
states = states_from_cardinalities(index_names, cardinalities)
visible_names = ['A', 'X', 'D']
visibles = index_names_to_num(index_names, visible_names)
## Generate random potentials
gamma = 0.01
theta = random_cp_tables(index_names, cardinalities, parents, gamma)
#X = random_observations(cardinalities, visibles)
eng = Engine(index_names, parents, states, theta, visible_names)
eng.propagate_observation({})
eng.sample_table(visible_names, num_of_samples=1000 )
eng.propagate_observation()
eng.marginal_table(visible_names)
X = eng.sample_table(visible_names, num_of_samples=1000)
X
E_S_new = eng.compute_ESS()
P = multiply(eng.theta, range(len(cardinalities)))
E_S = P
E_S_new = eng.propagate(X)
P = multiply(eng.theta, range(len(cardinalities)))
Px = marginalize(P, visibles, cardinalities)
P /= Px
E_S = X*P
"""
Explanation: Building an inference engine
End of explanation
"""
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
from IPython.display import clear_output, display, HTML
from matplotlib import rc
mpl.rc('font',**{'family':'sans-serif','sans-serif':['Helvetica']})
mpl.rc('text', usetex=True)
fig = plt.figure(figsize=(5,5))
ax = plt.gca()
ln = plt.Line2D([0],[0])
ax.add_line(ln)
ax.set_xlim([-1,1])
ax.set_ylim([-1,1])
ax.set_axis_off()
plt.close(fig)
def set_line(th):
ln.set_xdata([np.cos(th), -np.cos(th)])
ln.set_ydata([np.sin(th), -np.sin(th)])
display(fig)
interact(set_line, th=(0.0, 2*np.pi,0.01))
widgets.IntSlider(
value=7,
min=0,
max=10,
step=1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='d'
)
widgets.FloatSlider(
value=7.5,
min=0,
max=10.0,
step=0.1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='vertical',
readout=True,
readout_format='.2f',
)
w = widgets.IntRangeSlider(
value=[5, 7],
min=0,
max=10,
step=1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='d',
)
display(w)
w.get_interact_value()
w = widgets.BoundedIntText(
value=7,
min=0,
max=10,
step=1,
description='Text:',
disabled=False
)
display(w)
w.value
accordion = widgets.Accordion(children=[widgets.IntSlider(), widgets.Text()])
accordion.set_title(0, 'Slider')
accordion.set_title(1, 'Text')
accordion
tab_nest = widgets.Tab()
tab_nest.children = [accordion, accordion]
tab_nest.set_title(0, 'An accordion')
tab_nest.set_title(1, 'Copy of the accordion')
tab_nest
items = [widgets.Label(str(i)) for i in range(4)]
w2 = widgets.HBox(items)
display(w2)
items = [widgets.Label(str(i)) for i in range(4)]
left_box = widgets.VBox([items[0], items[1]])
right_box = widgets.VBox([items[2], items[3]])
widgets.HBox([left_box, right_box])
w = widgets.Dropdown(
options=[('One', 1), ('Two', 2), ('Three', 3)],
value=2,
description='Number:',
)
display(w)
w = widgets.ToggleButtons(
options=['Slow', 'Regular', 'Fast','Ultra'],
description='Speed:',
disabled=False,
button_style='warning', # 'success', 'info', 'warning', 'danger' or ''
tooltips=['Description of slow', 'Description of regular', 'Description of fast'],
#icons=['check'] * 3
)
display(w)
w.value
w = widgets.Select(
options=['Linux', 'Windows', 'OSX', '?'],
value='OSX',
rows=4,
description='OS:',
disabled=False
)
w2 = widgets.Select(
options=['A', 'B', 'C','D', '?'],
value='?',
rows=5,
description='Class:',
disabled=True
)
display(w, w2)
w2.disabled = False
caption = widgets.Label(value='Move the slider; the caption reports the sign of its value')
slider = widgets.IntSlider(min=-5, max=5, value=1, description='Slider')
def handle_slider_change(change):
caption.value = 'The slider value is ' + (
'negative' if change.new < 0 else 'nonnegative'
)
slider.observe(handle_slider_change, names='value')
display(caption, slider)
from IPython.display import display
button = widgets.Button(description="Click Me!")
output = widgets.Output()
display(button, output)
def on_button_clicked(b):
with output:
print("Button clicked.")
print(b)
button.on_click(on_button_clicked)
int_range = widgets.IntSlider()
output2 = widgets.Output()
display(int_range, output2)
def on_value_change(change):
with output2:
print(change['new'])
int_range.observe(on_value_change, names='value')
"""
Explanation: https://github.com/jupyter-widgets/tutorial/blob/master/notebooks/04.00-widget-list.ipynb
End of explanation
"""
|
jdvelasq/ingenieria-economica | 2016-03/IE-01-calculos-basicos.ipynb | mit | ## manual calculation
-450 * (1 + 0.07 * 60 / 360)
"""
Explanation: Time Value of Money Models
Class notes on advanced engineering economics using Python
Juan David Velásquez Henao
jdvelasq@unal.edu.co
Universidad Nacional de Colombia, Sede Medellín
Facultad de Minas
Medellín, Colombia
Software used
This is an interactive document written as a Jupyter notebook, presenting a tutorial on corporate finance using Python. Jupyter notebooks allow code, text, graphics, and equations to be combined in a single document. The code in this notebook can be run on Linux and OS X.
Click here for detailed instructions on how to install Jupyter on Windows and Mac OS X.
Download the latest version of this document to your hard drive; then load it and run it online at Try Jupyter!
Contents
Bibliography
[1] SAS/ETS 14.1 User's Guide, 2015.
[2] hp 12c platinum financial calculator. User's guide.
[3] HP Business Consultant II Owner's manual.
[4] C.S. Park and G.P. Sharp-Bette. Advanced Engineering Economics. John Wiley & Sons, Inc., 1990.
Cash flow diagram
For financial calculations, the flow of money over time is represented with a cash flow diagram. Arrows pointing up represent money received, while arrows pointing down represent money paid out.
<img src="images/cash-flow.png" width=600>
Time value of money
Contents
A peso today is worth more than a peso tomorrow, because a peso today can be invested (today) to earn interest.
<img src="images/cashflow-1-period.png" width=600>
In general:
$PV$ -- present value
$FV$ -- future value
$r$ -- interest rate
$$FV = PV \times (1 + r)$$
$$PV= \left( \displaystyle\frac{1}{1+r}\right) \times FV $$
Components of the interest rate:
$$1+r = \left( 1 + r_\alpha\right) \left( 1 + f\right) \left( 1 + r_\pi\right)$$
$r_{α}$ -- real interest rate
$f$ -- inflation
$r_{π}$ -- risk component
Simple interest
Under simple interest, the interest on the capital is received directly, without reinvesting it. It is typical of some kinds of loans.
Example.-- [2, p. 43] A friend needs a loan of 450 for 60 days. You lend it at 7% simple interest, computed on an annual basis of 360 days. What is the total amount you should receive at the end of the period?
<img src="images/simple-interest.png" width=350>
End of explanation
"""
## manual calculation
-450 * (1 + 0.07 * 60 / 365)
"""
Explanation: Example.-- [2, p. 43] Repeat the previous example on a 365-day basis.
End of explanation
"""
## manual calculation
2300 * (1 + 0.25 * 3)
"""
Explanation: Example.-- How much money will you have after lending \$ 2300 at 25% simple annual interest for 3 years?
End of explanation
"""
-7800 / ((1 + 0.02) ** 20)
"""
Explanation: Exercise.-- How much money will you have after lending \$ 10000 at 30% simple annual interest for 2 years? (A/ \$ 16000)
Compound interest
Contents
Under simple interest, the interest payments are received directly. Under compound interest, the interest is added to the capital $(P)$, so that interest is earned on the interest of past periods.
Difference:
$$P[(1+i)^N-1]-iNP = P[(1+i)^N-(1+iN)]$$
The concept of financial equivalence
Contents
Two cash flows are equivalent at the interest rate $r$ if one can be converted into the other using the appropriate compound-interest transformations.
<img src="images/equivalencia.png" width=700>
Time value of money models
Contents
Equivalence between a present payment and a future payment.
Contents
<img src="images/equiv-pago-actual-futuro-disc.png" width=400>
$$F = - P * (1+r)^n$$
$$P = - F * (1 + r)^{-n} = -\frac{F}{(1+r)^n}$$
Solving this type of problem requires three of the four variables in the formula.
Exercise.-- Express $r$ as a function of $P$, $F$, and $n$.
Exercise.-- Express $n$ as a function of $P$, $F$, and $r$.
Example.-- How much money must be invested today to obtain \$ 7800 at the end of 5 years at a quarterly interest rate of 2%?
<img src="images/sesion-1-ejemplo-1.png" width=400>
End of explanation
"""
730 * (1 + 0.023)**12
"""
Explanation: Example.-- What is the future value of \$ 730 one year from now at a monthly interest rate of 2.3%?
End of explanation
"""
# import the financial library.
# the import only needs to be run once.
import cashflows as cf
"""
Explanation: Equivalence between a present payment and a perpetual series of equal payments.
Contents
<img src="images/equiv-pmt-inf.png" width=300>
$P$ is computed as the present value of the equal payments $A$:
$$ P=\frac{A}{(1+r)} + \frac{A}{(1+r)^2} + ... $$
Adding $A$ to both sides of the equation:
$$P+A~=~A \left [ 1 + \left(\frac{1}{1+r}\right) + \left(\frac{1}{1+r}\right) ^2 + ... \right ]~=~A\left [ \frac{1}{1-\left(\frac{1}{1+r}\right)}\right] ~=~ A + \frac{A}{r}$$
Solving for $P$ gives:
$$P=\frac{A}{r}$$
Equivalence for finite series of equal payments.
Contents
<img src="images/equiv-pmt-finitos.png" width=500>
The financial formulas for annuities are obtained by subtracting two infinite annuities; the first starts at time 0 and the second at time $n$:
$$F~=~ P\times(1+r)^n - P^* ~=~\frac{A}{r} \times (1+r)^n - \frac{A}{r} ~=~ A \left [ \frac{(1 + r)^n -1}{r} \right ]$$
Exercise.-- Derive the equation for computing the present value ($P$) as a function of $A$, $r$, and $n$.
Exercise.-- Write $A$ as a function of $P$, $r$, and $n$.
Exercise.-- Write $r$ as a function of $P$, $n$, and $A$.
Exercise.-- Write $n$ as a function of $P$, $A$, and $r$.
Exercise.-- Which interest rate makes the following cash flows equivalent?
Receive \$ 1000 today.
Receive \$ 600 at the end of period 1 and another \$ 600 at the end of period 2.
<img src="images/sesion-1-ejemplo-2.png" width=400>
Payment mode for finite annuities
Contents
For equal periodic payments, the payment can be set at the beginning or at the end of the period.
<img src="images/payment-mode.png" width=500>
Exercise.-- Derive the equivalence equations between a finite annuity-due and its equivalent future value, and solve for $A$, $F$, $r$, and $n$.
<img src="images/eqiv-pmt-ant-finita.png" width=400>
General equivalence model
Time value of money (TVM) models are based on the following schemes.
<img src="images/tvm.png" width=700>
Exercise.-- What value of A makes the two cash flows equivalent at a rate ($r$) of 8%?
<img src="images/ejercicio-A.png" width=500>
Typical cash flows
Contents
To solve a problem, first identify the typical cash flow pattern [2, p. 48]
<img src="images/flujos-tipicos.png" width=700>
Exercise.-- What is the present value of a two-year bond (obligation) whose principal is \$ 100, with quarterly interest payments at a 10% rate?
cashflows library
cashflows is a library for performing financial calculations. The functions implemented are similar to those used in Microsoft Excel, financial calculators, and other similar software.
End of explanation
"""
cf.pvfv(nrate = 7.2,  # interest rate
        pval = -2000, # present value
        fval = +3000) # future value
# Since nper is a value between 5 and 6, six years are required
# to reach a balance of at least $ 3000.
# The balance at the end of the six years is (A/ 3035.28):
cf.pvfv(nrate = 7.2,  # interest rate
        pval = -2000, # present value
        nper = 6)     # number of periods
"""
Explanation: tvm is a model for performing simple time-value-of-money calculations. The model uses internal variables to store the information and functions to perform the calculations.
Parameter nomenclature:
pval -- present value.
fval -- future value.
pmt -- periodic payment.
nper -- number of periods.
nrate -- interest rate per period.
pyr -- number of periods per year.
due -- moment of the period at which the annuity is paid: 'end' (or 0) indicates payment at the end of the period; 'begin' (or 1) indicates payment at the beginning of the period.
Nomenclature for the financial-equivalence functions:
pvfv(pval=None, fval=None, nrate=None, nper=None, pyr=1, noprint=True) -- present value - future value.
pmtfv(pmt=None, fval=None, nrate=None, nper=None, pyr=1, noprint=True) -- periodic payment - future value.
pvpmt(pmt=None, pval=None, nrate=None, nper=None, pyr=1, noprint=True) -- present value - periodic payment.
tvmm(pval=None, fval=None, pmt=None, nrate=None, nper=None, due=0, pyr=1, noprint=True) -- time-value-of-money models.
amortize(pval=None, fval=None, pmt=None, nrate=None, nper=None, due=0, pyr=1, noprint=True) -- prints the amortization table for the calculations performed.
Several usage examples are presented below.
Example (savings account).-- [3, p. 88] \$ 2000 is deposited in a savings account that pays 7.2% annual interest (compounded annually). If no other deposits are made, how long does it take for the account to hold \$ 3000? A/ 5.83
<img src="images/sesion-1-ejemplo-4.png" width=350>
End of explanation
"""
cf.pvfv(nrate = 2.0,  # interest rate
        fval = 7800,  # future value
        pyr = 1,
        nper = 5*4)   # number of periods: 5 years * 4 quarters per year
# compute the present value
"""
Explanation: Example (savings account).-- How much money must be invested today to obtain \$ 7800 at the end of 5 years at a quarterly interest rate of 2%? A/ -5249.18
<img src="images/sesion-1-ejemplo-1.png" width=400>
End of explanation
"""
cf.pvfv(nrate = 2.3,
nper = 12,
pval = -730)
"""
Explanation: Example (appreciation).-- What is the future value of \$ 730 one year from now at a monthly interest rate of 2.3%? A/ \$ 959.03
<img src="images/sesion-1-ejemplo-3.png" width=350>
End of explanation
"""
# 47 payments of $ 2400 are made at the beginning of each month.
cf.tvmm(nper = 47,
        pmt = -2400,
        due = 1,
        fval = 0,
        nrate = 1.5)
# present value of the 47 beginning-of-period payments A/ $ 81735.58
x = _ + 2400 # + additional payment at the start of the lease A/ $ 84135.58
x
x + cf.pvfv(nper = 48,
            fval = -15000,
            nrate = 1.5) # purchase option A/ $ 91476.00
"""
Explanation: Example (leasing).-- [3, p. 02] A machine is leased for 4 years (48 months) with monthly payments of \$ 2400; an additional payment of \$ 2400 must be made at the start of the lease, replacing the last payment (which would occur at the beginning of month 48). The contract includes a purchase option at the end of the lease period for \$ 15000. What is the capitalized value of the lease at a monthly rate of 1.5%?
<img src="images/sesion-1-ejemplo-5.png" width=350>
End of explanation
"""
## create an instance of the model and store it in a variable
m = cf.tvmm(nrate = 10.5/12, # interest rate
            pval = 35000,    # present value
            pmt = -325,
            fval = 0,
            due = 0)         # 'end'
m
"""
Explanation: Example (mortgage).-- [2, p. 50] A loan of \$ 35000 will be taken at an interest rate of (10.5% / 12). If monthly payments of \$ 325 are made at the end of each month, how long does it take to pay off the debt?
End of explanation
"""
# amount paid in excess of the debt
m = cf.tvmm(nrate = 10.5/12, # interest rate
            pval = 35000,    # present value
            pmt = -325,
            nper = 327,
            due = 0)         # 'end'
m
# amount paid in excess of the debt
m * (1 + 10.5/12/100) + 325
"""
Explanation: If 327 payments of \$ 325 are made, what will payment No. 328 be?
End of explanation
"""
m = cf.tvmm(nrate = 10.5/12, # interest rate
            pval = 35000,    # present value
pmt = -325,
nper = 327,
due = 0) # 'end'
m
-325 + m
"""
Explanation: If only 327 payments are made, what is the final payment required to fully cancel the debt?
End of explanation
"""
cf.tvmm(nrate = 6.25/24, # interest rate
        pval = -775,     # initial deposit
        pmt = -50,       # periodic deposits
        fval = 4000,     # final balance
        due = 0)         # deposits at the end of the period
# balance at the end of 58 periods
cf.tvmm(nrate = 6.25/24, # interest rate
        nper = 58,       # number of periods
        pval = -775,     # initial deposit
        pmt = -50,       # periodic deposits
        due = 0)         # deposits at the end of the period
"""
Explanation: Example.-- [2, p. 53] An account is opened today with a deposit of \$ 775. The interest rate is (6.25%/24). If deposits of \$ 50 continue to be made, how long does it take to reach a balance of \$ 4000?
End of explanation
"""
cf.pvfv(pval = -6000,  # initial deposit
        nper = 32,     # number of periods
        fval = 10000)  # final balance
cf.tvmm(pval = -6000,  # initial deposit
        nper = 32,     # number of periods
        pmt = 0,       # periodic payment
        fval = 10000)  # final balance
"""
Explanation: Example.-- [2, p. 55] What interest rate must be earned to accumulate \$ 10000 in 32 periods from an investment of \$ 6000? A/ 1.61%
End of explanation
"""
cf.pvpmt(pmt = -450,     # monthly payment
         nrate = 5.9/12, # interest rate
         nper = 48)      # number of periods
_ + 1500
"""
Explanation: Example.-- [2, p. 57] If a lease is taken at an interest rate of (5.9%/12) with 48 payments of \$ 450 and an initial payment of \$ 1500 when the loan is set up, what is the loan amount?
End of explanation
"""
cf.tvmm(pmt = 17500,   # annual periodic payment
        fval = 540000, # sale value
        nrate = 12.0,  # interest rate
        nper = 5)      # number of periods
"""
Explanation: Example.-- [2, p. 58] How much can be paid for a property that will generate a net annual flow of \$ 17500 for 5 years, if at the end the property can be sold for \$ 540000? (the interest rate is 12%)
End of explanation
"""
cf.pvpmt(pval = 243400,   # amount
         nrate = 5.25/12, # interest rate
         nper = 348)      # number of periods
"""
Explanation: Example.-- [2, p. 59] Compute the monthly payment on a mortgage of \$ 243400 paid over 348 months at a rate of 5.25%/12.
End of explanation
"""
cf.tvmm(pval = -3200,    # opening deposit
        fval = 60000,    # future balance
        nper = 30,       # number of periods
        nrate = 9.75/2)  # interest rate
"""
Explanation: Example.-- [2, p. 59] What periodic amount must be deposited monthly in a savings account if the initial balance is \$ 3200, the final balance is \$ 60000, the rate is 9.75%/2, and the term is 30 months?
End of explanation
"""
cf.tvmm(pval = 243400,   # amount
        nrate = 5.25/12, # interest rate
        pmt = -1363.29,  # monthly payment
        nper = 60)       # number of periods
"""
Explanation: Example.-- [2, p. 61] For a mortgage of \$ 243400 with a monthly payment of \$ 1363.29 over 348 months at a rate of 5.25%/12, what payment must be made in installment 60 to fully cancel the debt?
End of explanation
"""
cf.tvmm(pval = 0,        # amount
        nrate = 6.25/12, # interest rate
        pmt = -50.0,     # monthly payment
        due = 1,         # payment at the beginning of the period
        nper = 24)       # number of periods
"""
Explanation: Example.-- [2, p. 61] If \$ 50 is deposited at the beginning of each month in a new account paying 6.25%/12, what is the balance at the end of 24 months?
End of explanation
"""
cf.pvfv(pval = -32000,
nrate = -2.0,
nper = 6)
"""
Explanation: Example.-- [2, 62] A property is bought for \$ 32000. With depreciation of 2% per year, what will the property be worth at the end of 6 years? A/ \$ 28346.96
End of explanation
"""
cf.pvpmt(pval = 7250-1500,
nrate = 10.5/12,
nper = 36)
# if the periodic payment is to be reduced by $ 10, what interest rate would need to be obtained.
cf.pvpmt(pval = 7250-1500,
pmt = -176.89,
nper = 36)
"""
Explanation: Example.-- [3, p. 81] The purchase of a new car is financed with a three-year lease at an interest rate of 10.5%/12. The price of the car is \$ 7250. An initial payment of \$ 1500 must be made. What is the monthly payment if payments are made at the end of the month?
End of explanation
"""
cf.pvpmt(pmt = -630,
nrate = 11.5/12,
nper = 30*12) + 12000
"""
Explanation: Example.-- [3, p. 82] The maximum monthly payment for a 30-year mortgage is \$ 630. If you can make a down payment of \$ 12000 and the rate is 11.5%/12, what is the maximum price of the property?
End of explanation
"""
# (a) A/ $ 894.33
cf.pvpmt(pval = 75250,
nrate = 13.8/12,
nper = 25*12)
# (b) A/ $ -73408.81
cf.tvmm(pval = 75250,
pmt = -894.33,
nrate = 13.8/12,
nper = 4*12)
"""
Explanation: Example.-- [3, p. 83] A 25-year mortgage is taken for \$ 75250 at a monthly interest rate of 13.8%/12. (a) What is the monthly payment? (b) If the property is expected to be sold at the end of 4 years, how much must be paid to cancel the debt?
End of explanation
"""
cf.tvmm(pval = -2000,
pmt = -80,
nper = 15*12*2,
nrate = 0.346)
"""
Explanation: Example (programmed savings).-- A savings account is opened with an initial deposit of \$ 2000, and biweekly deposits of \$ 80 are then made for 15 years. The account pays interest of 0.346% per two-week period. How much money will be in the account after the last deposit? A/ 63988.44
End of explanation
"""
cf.tvmm(pval = 13500,
fval = -7500,
nper = 36,
due = 1,
nrate = 1.16)
"""
Explanation: Example (leasing).-- A 3-year lease is taken for a new car worth \$ 13500 today, with the option to buy the car for \$ 7500 at the end of the lease. What is the monthly payment if the interest rate is 1.16% per month? (note that in a lease the payment is made at the beginning of the period) A/ $ 288.49
End of explanation
"""
x = cf.pvpmt(pval = 35000,
nrate = 0.88,
pmt = -325)
x # A/ 336.77
# OPTION 1: set nper = 337 to compute how much is paid
# in excess in the last installment. A/ $ 74.51
cf.tvmm(pval = 35000,
        pmt = -325,
        nrate = 0.88,
        nper = 337)
325 - _ # Final payment, subtracting the excess. A/ \$ 250.49
# OPTION 2: an additional payment is made with
# installment 336 to cancel the loan
cf.tvmm(pval = 35000,
        pmt = -325,
        nrate = 0.88,
        nper = 336)
-325 + _ # final payment: current installment + balance to cancel
"""
Explanation: Example (loan).-- Equipment will be purchased with a loan of \$ 35000 with monthly payments of \$ 325. If the interest rate is 0.88% per month, how many periods are required to pay off the debt?
End of explanation
"""
cf.pvfv(pval = -6000,
fval = +10000,
nper = 4*8)
"""
Explanation: Example (computing the interest rate).-- If \$ 6000 is saved today and, with no further deposits, \$ 10000 is available after 8 years, what is the interest rate if interest is compounded quarterly? A/ 1.61%
End of explanation
"""
cf.tvmm(fval = 540000,
pmt = 17500,
nrate = 12.0,
nper = 5)
"""
Explanation: Example (investment).-- Warehouses will be purchased for rental. If the rent is \$ 17500 per year and the warehouses are sold at the end of year 5 for \$ 540000, what is the maximum amount that can be paid for them at a rate of 12% per year? A/ \$ 369494.09
End of explanation
"""
cf.tvmm(nrate = 2.0,
pval = 0,
nper = 24,
pmt = -100,
due = 1)
"""
Explanation: Example (programmed savings).-- If \$ 100 is saved at the beginning of each month at a rate of 2% for 24 months, how much money will there be at the end of month 24?
End of explanation
"""
cf.pvpmt(nrate = 2.0,
pval = 16700,
nper = 5*12)
"""
Explanation: Example (loan).-- Determine the value of the equal monthly installments to be paid on a loan of \$ 16700 financed over 5 years at 2% per month.
End of explanation
"""
cf.tvmm(nrate = 0.12,
pval = 243400,
nper = 5*12,
pmt = -4000)
"""
Explanation: Example (loan with additional final payment).-- A mortgage of \$ 243400 is taken for 5 years at an interest rate of 0.12% with monthly payments of \$ 4000 and an additional payment in the last month. What is the value of that additional payment?
End of explanation
"""
cf.pvfv(pval = [100, 200, 300, 400], # one of the variables can be a vector
nper = 5,
nrate = 3.0)
"""
Explanation: Example.-- What will be the future value of \$100, \$ 200, \$ 300, and \$ 400 in 5 years at an interest rate of 3% per year?
End of explanation
"""
cf.pvfv(pval = 100, # one of the variables can be a vector
nper = [1, 2, 3, 4],
nrate = 3.0)
"""
Explanation: Example.-- What will be the future value of \$100 in 1, 2, 3, and 4 years at an interest rate of 3% per year?
End of explanation
"""
cf.tvmm(pval = 100,             # if several variables are lists,
        pmt = 0,                # they must all have
        nper = [1, 2, 3],       # the same length
        nrate = [1.0, 1.5, 1.7]) #
"""
Explanation: Example.-- What will be the future value of \$100 invested for one year at 1%, for 2 years at 1.5%, and for 3 years at 1.7%, compounded annually?
End of explanation
"""
cf.tvmm(pval = [ 100, 110, 110, 105 ],
fval = [ -20, -10, -20, 0 ],
nper = [ 5, 5, 6, 7 ],
nrate = [ 2.0, 1.7, 1.6, 1.7 ])
"""
Explanation: Example.-- What is the amortization for the following loans? (fv is the residual final payment, as in a lease)
```
term 5, 5, 6, 7
pv 100, 110, 110, 105
fv -20, -10, -20, 0
rate 0.020, 0.017, 0.016, 0.017
```
End of explanation
"""
# principal, interest, payment, balance
ppal, interest, payment, balance = cf.amortize(pval=1000, fval=0, pmt=None, nrate=1.0, nper=6, due=0)
ppal
sum(ppal)
interest
sum(interest)
payment
sum(payment)
"""
Explanation: Example (amortization tables - Part 1).-- Build the amortization table (balance) for a loan of \$ 1000 over 6 months with equal monthly payments at a monthly interest rate of 1%.
End of explanation
"""
|
suresh/notebooks | Intro to Data Science - Seattle Fremont Bridge.ipynb | mit | url = 'https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD'
df = pd.read_csv(url, parse_dates=True)
df.head()
df.shape
df.index = pd.DatetimeIndex(df.Date)
df.head()
df.drop(columns=['Date'], inplace=True)
df.head()
df['total'] = df['Fremont Bridge East Sidewalk'] + df['Fremont Bridge West Sidewalk']
df.drop(columns=['Fremont Bridge East Sidewalk', 'Fremont Bridge West Sidewalk'], inplace=True)
"""
Explanation: Let's get the data from Seattle open data site
Data is published on an hourly basis from 2012 to the present and can be found at the URL below. We make a pandas dataframe from it for our analysis.
End of explanation
"""
df.resample('D').sum().plot()
df.resample('M').sum().plot()
df.groupby(df.index.hour).mean().plot()
"""
Explanation: Let's start with visualization to understand this data
End of explanation
"""
pivoted_data = df.pivot_table('total', index=df.index.hour, columns=df.index.date)
pivoted_data.iloc[:5, :5]
# plot all of the dates together
pivoted_data.plot(legend=False, alpha=0.1)
"""
Explanation: Let's look at this data on an hourly basis across days
End of explanation
"""
from sklearn.decomposition import PCA
X = pivoted_data.fillna(0).T.values;
X.shape
pca = PCA(2, svd_solver='full').fit(X)
X_PCA = pca.transform(X)
X_PCA.shape
plt.scatter(X_PCA[:, 0], X_PCA[:, 1])
dayofweek = pd.DatetimeIndex(pivoted_data.columns).dayofweek
plt.scatter(X_PCA[:, 0], X_PCA[:, 1], c=dayofweek, cmap='rainbow')
plt.colorbar();
"""
Explanation: Let's do unsupervised learning to detangle this chart
End of explanation
"""
from sklearn.mixture import GaussianMixture
gmm = GaussianMixture(2)
gmm.fit(X_PCA)
labels = gmm.predict(X_PCA)
labels
plt.scatter(X_PCA[:, 0], X_PCA[:, 1], c=labels, cmap='rainbow')
"""
Explanation: Let's do a simple clustering to identify these two groups
End of explanation
"""
|
KECB/learn | BAMM.101x/Networks_new.ipynb | mit | import networkx as nx
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
simple_network = nx.Graph()
nodes = [1,2,3,4,5,6,7,8,9]
edges = [(1,2),(2,3),(1,3),(4,5),(2,7),(1,9),(3,4),(4,5),(4,9),(5,6),(7,8),(8,9)]
simple_network.add_nodes_from(nodes)
simple_network.add_edges_from(edges)
nx.draw(simple_network)
"""
Explanation: <h1>Network Analysis with Python</h1>
<li>Networks are connected bi-directional graphs
<li>Nodes mark the entities in a network
<li>Edges mark the relationships in a network
<h2>Examples of networks</h2>
<li>Facebook friends
<li>Other social networks
<li>transportation networks
<li>Power grids
<li>Internet routers
<li>Activity networks
<li>Many others
<h2>Questions we're interested in</h2>
<li>Shortest path between two nodes
<li>Connectedness
<li>Centrality
<li>Clustering
<li>Communicability
<h1>networkx</h1>
<li>Python package for networks
<li>Nodes and edges can contain data
<li>Nodes can be (hashable!) python objects
<h3>Constructing a simple network</h3>
<b>Necessary imports</b>
End of explanation
"""
pos=nx.spring_layout(simple_network) # positions for all nodes
# nodes
nx.draw_networkx_nodes(simple_network,pos,
node_color='r',
node_size=500,
alpha=0.8)
# edges
#nx.draw_networkx_edges(sub_graph,pos,width=1.0,alpha=0.5)
nx.draw_networkx_edges(simple_network,pos,
edgelist=edges,
width=8,alpha=0.5,edge_color='b')
node_name={}
for node in simple_network.nodes():
node_name[node]=str(node)
nx.draw_networkx_labels(simple_network,pos,node_name,font_size=16)
plt.axis('off')
plt.show() # display
"""
Explanation: <h1>Add labels to the nodes</h1>
End of explanation
"""
simple_network.has_edge(2,9)
#simple_network.has_node(2)
#simple_network.number_of_edges()
#simple_network.number_of_nodes()
#simple_network.order()
#len(simple_network)
"""
Explanation: <h4>Simple queries on the network</h4>
End of explanation
"""
for n in simple_network.nodes_iter():
print(n)
for a in simple_network.adjacency_iter():
print(a)
for e in simple_network.edges_iter():
print(e)
for d in simple_network.degree_iter():
print(d)
"""
Explanation: <h3>Iterating over a network</h3>
End of explanation
"""
G = nx.Graph() #Undirected simple graph
d = nx.DiGraph()       #directed simple graph
m = nx.MultiGraph()    #undirected with parallel edges
h = nx.MultiDiGraph()  #directed with parallel edges
"""
Explanation: <h3>Types of graph</h3>
End of explanation
"""
print(nx.shortest_path(simple_network,6,8))
print(nx.shortest_path_length(simple_network,6,8))
"""
Explanation: <h4>Shortest path</h4>
End of explanation
"""
#Our geocoding data getter is useful here!
def get_json_data(response,country,types):
data = response.json()
result_list = list()
for result in data['results']:
if not country == 'ALL':
if not country in [x['long_name'] for x in result['address_components'] if 'country' in x['types']]:
continue
address = result['formatted_address']
lat = result['geometry']['location']['lat']
lng = result['geometry']['location']['lng']
if types:
result_list.append((address,lat,lng,result['types']))
else:
result_list.append((address,lat,lng))
return result_list
def get_geolocation_data(address_string,format="JSON",country="ALL",types=False):
format = format.lower()
address = '_'.join(address_string.split())
url = 'https://maps.googleapis.com/maps/api/geocode/%s?address=%s' %(format,address)
try:
import requests
response=requests.get(url)
if not response.status_code == 200: return None
func='get_'+format+'_data'
return globals()[func](response,country,types)
except:
return None
def get_lat_lon(address):
data = get_geolocation_data(address,format='JSON')
return str(data[0][1]) + ',' + str(data[0][2])
get_lat_lon('New York, NY')
"""
Explanation: <h2>Weighted Edges</h2>
<li>Example: A network of travel times between locations
<h4>We can use Google Distance Matrix API to get travel times</h4>
<li>Uses addresses to construct a distance matrix
<li>Free version uses latitudes and longitudes
<li>We can find latitudes and longitudes using the function we wrote as homework
<h4>We'll add a get_lat_lon function to our geocoding function to return lat,lon in google's required format</h4>
End of explanation
"""
addresses = [
"Columbia University, New York, NY",
"Amity Hall Uptown, Amsterdam Avenue, New York, NY",
"Ellington in the Park, Riverside Drive, New York, NY",
'Chaiwali, Lenox Avenue, New York, NY',
"Grant's Tomb, West 122nd Street, New York, NY",
'Pisticci, La Salle Street, New York, NY',
'Nicholas Roerich Museum, West 107th Street, New York, NY',
'Audubon Terrace, Broadway, New York, NY',
'Apollo Theater, New York, NY'
]
latlons=''
for address in addresses:
latlon=get_lat_lon(address)
latlons += latlon + '|'
print(latlons)
distance_url = 'https://maps.googleapis.com/maps/api/distancematrix/json?origins='
distance_url+=latlons
distance_url+='&destinations='
distance_url+=latlons
#Set the mode walking, driving, cycling
mode='walking'
distance_url+='&mode='+mode
print(distance_url)
"""
Explanation: <h4>Now we can construct the distance matrix api url</h4>
End of explanation
"""
import requests
data=requests.get(distance_url).json()
all_rows = data['rows']
address_graph=nx.Graph()
address_graph.add_nodes_from(addresses)
for i in range(len(all_rows)):
origin = addresses[i]
for j in range(len(all_rows[i]['elements'])):
duration = all_rows[i]['elements'][j]['duration']['value']
destination = addresses[j]
address_graph.add_edge(origin,destination,d=duration)
#print(origin,destination,duration)
nx.draw(address_graph)
nx.draw(address_graph)
"""
Explanation: <h4>Then let's get the distances and construct a graph</h4>
End of explanation
"""
def get_route_graph(address_list,mode='walking'):
latlons=''
    for address in address_list:
latlon=get_lat_lon(address)
latlons += latlon + '|'
distance_url = 'https://maps.googleapis.com/maps/api/distancematrix/json?origins='
distance_url+=latlons
distance_url+='&destinations='
distance_url+=latlons
    #Use the mode passed in to the function (walking, driving, bicycling)
distance_url+='&mode='+mode
import requests
data=requests.get(distance_url).json()
all_rows = data['rows']
address_graph = nx.Graph()
address_graph.add_nodes_from(addresses)
for i in range(len(all_rows)):
origin = addresses[i]
for j in range(len(all_rows[i]['elements'])):
if i==j:
continue
duration = all_rows[i]['elements'][j]['duration']['value']
destination = addresses[j]
address_graph.add_edge(origin,destination,d=duration)
return address_graph
address_graph = get_route_graph(addresses)
"""
Explanation: <h4>Functionalize this for reuse</h4>
End of explanation
"""
for edge in address_graph.edges():
print(edge,address_graph.get_edge_data(*edge))
for n in address_graph.edges_iter():
print(n)
address_graph = get_route_graph(addresses)
pos=nx.circular_layout(address_graph) # positions for all nodes
# nodes
nx.draw_networkx_nodes(address_graph,pos,
node_color='r',
node_size=2000,
alpha=0.001)
# edges
nx.draw_networkx_edges(address_graph,pos,edgelist=address_graph.edges(),width=8,alpha=0.5,edge_color='b')
nx.draw_networkx_edge_labels(address_graph,pos,font_size=10)
node_name={}
for node in address_graph.nodes():
node_name[node]=str(node)
nx.draw_networkx_labels(address_graph,pos,node_name,font_size=16)
plt.axis('off')
plt.show() # display
"""
Explanation: <h4>Test the function by drawing it with node and edge labels</h4>
End of explanation
"""
for edge in address_graph.edges():
print(edge,address_graph.get_edge_data(*edge))
"""
Explanation: <h3>Yikes! Unreadable!</h3>
<li>Let's see what the edge weights are</li>
End of explanation
"""
for edge in address_graph.edges():
duration = address_graph.get_edge_data(*edge)['d']
address_graph.get_edge_data(*edge)['d'] = int(duration/60)
print(address_graph.get_edge_data(*edge))
"""
Explanation: <h4>Let's make this readable</h4>
End of explanation
"""
pos=nx.circular_layout(address_graph) # positions for all nodes
fig=plt.figure(1,figsize=(12,12)) #Let's draw a big graph so that it is clearer
# nodes
nx.draw_networkx_nodes(address_graph,pos,
node_color='r',
node_size=2000,
alpha=0.001)
# edges
nx.draw_networkx_edges(address_graph,pos,edgelist=address_graph.edges(),width=8,alpha=0.5,edge_color='b')
nx.draw_networkx_edge_labels(address_graph,pos,font_size=10)
node_name={}
for node in address_graph.nodes():
node_name[node]=str(node)
nx.draw_networkx_labels(address_graph,pos,node_name,font_size=16)
#fig.axis('off')
fig.show() # display
def get_route_graph(address_list,mode='walking'):
latlons=''
    for address in address_list:
latlon=get_lat_lon(address)
latlons += latlon + '|'
distance_url = 'https://maps.googleapis.com/maps/api/distancematrix/json?origins='
distance_url+=latlons
distance_url+='&destinations='
distance_url+=latlons
    #Use the mode passed in to the function (walking, driving, bicycling)
distance_url+='&mode='+mode
import requests
data=requests.get(distance_url).json()
all_rows = data['rows']
address_graph = nx.Graph()
address_graph.add_nodes_from(addresses)
for i in range(len(all_rows)):
origin = addresses[i]
for j in range(len(all_rows[i]['elements'])):
if i==j:
continue
duration = all_rows[i]['elements'][j]['duration']['value']
destination = addresses[j]
address_graph.add_edge(origin,destination,d=int(duration/60))
return address_graph
address_graph = get_route_graph(addresses)
"""
Explanation: <h4>Now let's look a the graph</h4>
End of explanation
"""
for edge in address_graph.edges():
import random
r = random.random()
    if r < 0.75: #remove roughly 75% of the edges
address_graph.remove_edge(*edge)
"""
Explanation: <h4>Let's remove a few edges (randomly)</h4>
End of explanation
"""
pos=nx.circular_layout(address_graph) # positions for all nodes
plt.figure(1,figsize=(12,12)) #Let's draw a big graph so that it is clearer
# nodes
nx.draw_networkx_nodes(address_graph,pos,
node_color='r',
node_size=2000,
alpha=0.001)
# edges
nx.draw_networkx_edges(address_graph,pos,edgelist=address_graph.edges(),width=8,alpha=0.5,edge_color='b')
nx.draw_networkx_edge_labels(address_graph,pos,font_size=7)
node_name={}
for node in address_graph.nodes():
node_name[node]=str(node)
nx.draw_networkx_labels(address_graph,pos,node_name,font_size=16)
#fig.axis('off')
plt.show() # display
print(addresses)
"""
Explanation: <h4>And draw it again</h4>
End of explanation
"""
print(nx.shortest_path(address_graph,'Amity Hall Uptown, Amsterdam Avenue, New York, NY', 'Chaiwali, Lenox Avenue, New York, NY'))
print(nx.dijkstra_path(address_graph,'Amity Hall Uptown, Amsterdam Avenue, New York, NY', 'Chaiwali, Lenox Avenue, New York, NY'))
print(nx.dijkstra_path_length (address_graph,'Amity Hall Uptown, Amsterdam Avenue, New York, NY', 'Chaiwali, Lenox Avenue, New York, NY',weight='d'))
#[print(n1,n2,nx.shortest_path_length(n1,n2),nx.dijkstra_path_length(n1,n2,weight='d')) for n1 in address_graph.nodes() for n2 in address_graph.nodes()]
[print(n1,n2,
nx.shortest_path_length(address_graph,n1,n2),
nx.dijkstra_path_length(address_graph,n1,n2,weight='d'),
) for n1 in address_graph.nodes() for n2 in address_graph.nodes() if not n1 == n2]
for edge in address_graph.edges():
print(edge,address_graph.get_edge_data(*edge))
"""
Explanation: <h4>Shortest path and shortest duration</h4>
End of explanation
"""
#Divide edges into two groups based on weight
#Easily extendable to n-groups
elarge=[(u,v) for (u,v,d) in address_graph.edges(data=True) if d['d'] >5]
esmall=[(u,v) for (u,v,d) in address_graph.edges(data=True) if d['d'] <=5]
pos=nx.spring_layout(address_graph) # positions for all nodes
plt.figure(1,figsize=(12,12)) #Let's draw a big graph so that it is clearer
# nodes
nx.draw_networkx_nodes(address_graph,pos,node_size=700)
# edges. draw the larger weight edges in solid lines and smaller weight edges in dashed lines
nx.draw_networkx_edges(address_graph,pos,edgelist=elarge,
width=6)
nx.draw_networkx_edges(address_graph,pos,edgelist=esmall,
width=6,alpha=0.5,edge_color='b',style='dashed')
# labels
nx.draw_networkx_labels(address_graph,pos,font_size=20,font_family='sans-serif')
nx.draw_networkx_edge_labels(address_graph,pos,font_size=7)
plt.axis('off')
#plt.savefig("address_graph.png") # save as png if you need to use it in a report or web app
plt.show() # display
"""
Explanation: <h2>Graph drawing options</h2>
<li>networkx uses matplotlib to draw graphs
<li>limited, but useful, functionalities
<h3>Let's take a look!</h3>
<b>Differentiating edges by weight</b>
End of explanation
"""
origin = 'Amity Hall Uptown, Amsterdam Avenue, New York, NY'
destination = 'Chaiwali, Lenox Avenue, New York, NY'
shortest_path = nx.dijkstra_path(address_graph,origin,destination)
shortest_path_edges = list()
for i in range(len(shortest_path)-1):
shortest_path_edges.append((shortest_path[i],shortest_path[i+1]))
shortest_path_edges.append((shortest_path[i+1],shortest_path[i]))
path_edges=list()
other_edges=list()
node_label_list = dict()
node_label_list = {n:'' for n in address_graph.nodes()}
for edge in address_graph.edges():
if edge in shortest_path_edges:
path_edges.append(edge)
node_label_list[edge[0]] = edge[0]
node_label_list[edge[1]] = edge[1]
else:
other_edges.append(edge)
pos=nx.spring_layout(address_graph) # positions for all nodes
fig=plt.figure(1,figsize=(12,12))
# nodes
nx.draw_networkx_nodes(address_graph,pos,node_size=700)
# edges. draw the larger weight edges in solid lines and smaller weight edges in dashed lines
nx.draw_networkx_edges(address_graph,pos,edgelist=path_edges,
width=6)
nx.draw_networkx_edges(address_graph,pos,edgelist=other_edges,
width=6,alpha=0.5,edge_color='b',style='dashed')
# labels
nx.draw_networkx_labels(address_graph,pos,font_size=20,font_family='sans-serif',labels=node_label_list)
nx.draw_networkx_edge_labels(address_graph,pos,font_size=7)
plt.axis('off')
#plt.savefig("address_graph.png") # save as png if you need to use it in a report or web app
plt.show() # display
"""
Explanation: <h4>highlight the shortest path</h4>
End of explanation
"""
location = 'Amity Hall Uptown, Amsterdam Avenue, New York, NY'
distance_list = list()
for node in address_graph.nodes():
if node == location:
continue
distance = nx.dijkstra_path_length(address_graph,location,node)
distance_list.append((node,distance))
from operator import itemgetter
print(sorted(distance_list,key=itemgetter(1)))
"""
Explanation: <b>Question</b> How would you remove edge labels from all but the shortest path?
<h4>Working with a network</h4>
<b>Given an address, generate a <i>sorted by distance</i> list of all other addresses</b>
End of explanation
"""
list(nx.all_simple_paths(address_graph,'Amity Hall Uptown, Amsterdam Avenue, New York, NY','Chaiwali, Lenox Avenue, New York, NY'))
nx.all_simple_paths(address_graph,
'Amity Hall Uptown, Amsterdam Avenue, New York, NY',
'Chaiwali, Lenox Avenue, New York, NY')
"""
Explanation: <b>Get all paths from one location to another</b>
End of explanation
"""
import json
import datetime
datafile='yelp_academic_dataset_user.json'
user_id_count = 1
user_id_dict = dict()
with open(datafile,'r') as f:
for line in f:
data = json.loads(line)
user_id = data.get('user_id')
friends = data.get('friends')
try:
user_id_dict[user_id]
except:
user_id_dict[user_id] = user_id_count
user_id_count+=1
print(len(user_id_dict))
user_data=list()
friends_data=list()
with open(datafile,'r') as f:
count=0
for line in f:
data=json.loads(line)
user_id=user_id_dict[data.get('user_id')]
name=data.get('name')
review_count=data.get('review_count')
average_stars=data.get('average_stars')
try:
yelping_since=datetime.datetime.strptime(data.get('yelping_since').strip(),"%Y-%m").date()
except:
yelping_since=datetime.datetime.now()
fans=data.get('fans')
user_friends=data.get('friends')
user_friends_list = list()
for i in range(len(user_friends)):
try:
user_friends_list.append(user_id_dict[user_friends[i]])
except:
continue
user_data.append([user_id,name,review_count,yelping_since,average_stars,fans])
friends_data.append([user_id,user_friends_list])
count+=1
print(count)
friends_data[0:10]
"""
Explanation: <h2>Social networks</h2>
<br>
We will use the <a href="https://www.yelp.com/dataset_challenge">Yelp database challenge</a><br>
Data on:
users,
businesses,
reviews,
tips (try the mushroom burger!),
check-in (special offers from yelp)
<h3>We'll use the data in the users file (yelp_academic_dataset_user.json)</h3>
<h1>Important note!</h1>
<h3>The data on the yelp site has changed. If you want to follow along with the class video, do the following:</h3>
<li>Download the file "friends_graph" from the Files unit at the top of this week's class material
<li>Scroll down to the cell that says "G = nx.read_gpickle('friend_graph')" and run the rest of the notebook from that point onward
<li>If you want to use the new file, then do all the following cells EXCEPT for the "G = nx.read_gpickle('friend_graph')" cell</li>
<h4>Read the data from the data file and create several list variables to hold the data</h4>
<li>You could also use objects to store the data </li>
End of explanation
"""
#Select a random(ish) list of nodes
friends_of_list = [1,5,15,100,2200,3700,13500,23800,45901,78643,112112,198034,267123,298078,301200,353216]
node_super_set = set(friends_of_list)
#Get a superset of these nodes - the friends they are connected to
for n in friends_of_list:
friends = friends_data[n-1][1]
node_super_set = node_super_set.union({f for f in friends})
node_super_list = list(node_super_set)
#Collect node data and edges for these nodes
node_data = dict()
edge_list = list()
for node in node_super_list:
node_data[node]=user_data[node-1]
friends = friends_data[node-1][1]
edges = [(node,e) for e in friends if e in node_super_list]
edge_list.extend(edges)
print(len(edge_list),len(node_super_list),len(node_data))
for e in edge_list:
if e[0] in node_super_list:
continue
if e[1] in node_super_list:
continue
print(e[0],e[1])
"""
Explanation: <h2>Too much data for this class so let's cut it down</h2>
End of explanation
"""
import networkx as nx
friend_graph=nx.Graph()
friend_graph.add_nodes_from(node_super_list)
friend_graph.add_edges_from(edge_list)
print(friend_graph.number_of_nodes(),friend_graph.number_of_edges())
nx.write_gpickle(friend_graph,'./friend_graph')
G = nx.read_gpickle('friend_graph')
len(G.neighbors(1))
#Querying the graph
len(friend_graph.neighbors(1))
%matplotlib inline
nx.draw(friend_graph)
"""
Explanation: <h3>Make the graph</h3>
End of explanation
"""
count = 0
for n in friend_graph.nodes_iter():
if friend_graph.degree(n) == 1:
print(n)
nodes = friend_graph.nodes()
for node in nodes:
if friend_graph.degree(node) == 0:
friend_graph.remove_node(node)
pos=nx.spring_layout(friend_graph) # positions for all nodes
import matplotlib.pyplot as plt
fig = plt.figure(1,figsize=(12,12))
#pos
# nodes
nx.draw_networkx_nodes(friend_graph,pos,
node_color='r',
node_size=500,
alpha=0.8)
# edges
nx.draw_networkx_edges(friend_graph,pos,width=1.0,alpha=0.5)
nx.draw_networkx_edges(friend_graph,pos,
width=8,alpha=0.5,edge_color='b')
node_name={}
for node in friend_graph.nodes():
node_name[node]=str(node)
nx.draw_networkx_labels(friend_graph,pos,node_name,font_size=16)
fig.show()
"""
Explanation: <h4>Let's remove disconnected nodes</h4>
End of explanation
"""
nx.shortest_path(friend_graph,420079,3700)
#If using the file friend_graph, comment the line above and uncomment the line below this one
#nx.shortest_path(friend_graph,100219,19671)
nx.shortest_path_length(friend_graph,420079,928795)
#If using the file friend_graph, comment the line above and uncomment the line below this one
#nx.shortest_path_length(friend_graph,167099,47622)
"""
Explanation: <h3>Start looking at different aspects of the graph</h3>
End of explanation
"""
print(len(list(nx.connected_components(friend_graph))))
for comp in nx.connected_components(friend_graph):
print(comp)
"""
Explanation: <h3>Graph components</h3>
<li>Let's see the number of connected components
<li>And then each connected component
End of explanation
"""
largest_size=0
largest_graph = None
for g in nx.connected_component_subgraphs(friend_graph):
if len(g) > largest_size:
largest_size = len(g)
largest_graph = g
nx.draw(largest_graph)
"""
Explanation: <h4>Largest connected component subgraph</h4>
End of explanation
"""
smallest_size=100000
smallest_graph = None
for g in nx.connected_component_subgraphs(friend_graph):
if len(g) < smallest_size:
smallest_size = len(g)
smallest_graph = g
nx.draw(smallest_graph)
#Find out node degrees in the graph
nx.degree(friend_graph)
"""
Explanation: <h4>Smallest connected component</h4>
End of explanation
"""
#Highest degree
print(max(nx.degree(friend_graph).values()))
#Node with highest degree value
degrees = nx.degree(friend_graph)
print(max(degrees,key=degrees.get))
"""
Explanation: <h4>Max degree. The yelp user with the most friends</h4>
End of explanation
"""
pos=nx.spring_layout(friend_graph) # positions for all nodes
fig = plt.figure(1,figsize=(12,12))
#pos
# nodes
nx.draw_networkx_nodes(friend_graph,pos,
node_color='r',
node_size=500,
alpha=0.8)
# edges
nx.draw_networkx_edges(friend_graph,pos,width=1.0,alpha=0.5)
nx.draw_networkx_edges(friend_graph,pos,
edgelist=edges,
width=8,alpha=0.5,edge_color='b')
node_name={}
for node in friend_graph.nodes():
node_name[node]=str(node)
nx.draw_networkx_labels(friend_graph,pos,node_name,font_size=16)
fig.show()
nx.clustering(friend_graph)
nx.average_clustering(friend_graph)
G=nx.complete_graph(4)
nx.draw(G)
nx.clustering(G)
G.remove_edge(1,2)
pos=nx.spring_layout(G) # positions for all nodes
# nodes
nx.draw_networkx_nodes(G,pos,
node_color='r',
node_size=500,
alpha=0.8)
# edges
#nx.draw_networkx_edges(sub_graph,pos,width=1.0,alpha=0.5)
nx.draw_networkx_edges(G,pos,
edgelist=G.edges(),
width=8,alpha=0.5,edge_color='b')
node_name={}
for node in G.nodes():
node_name[node]=str(node)
nx.draw_networkx_labels(G,pos,node_name,font_size=16)
plt.axis('off')
plt.show() # display
nx.clustering(G)
"""
Explanation: <h2>Network analysis algorithms</h2>
https://networkx.github.io/documentation/networkx-1.10/reference/algorithms.html
<h3>Clustering</h3>
Clustering is a measure of how closely knit the nodes in a graph are. We can measure the degree to which a node belongs to a cluster and the degree to which the graph is clustered
- Node clustering coefficient: A measure that shows the degree to which a node belongs to a cluster
- Graph clustering coefficient: A measure that shows the degree to which a graph is clustered
End of explanation
"""
from networkx.algorithms.centrality import closeness_centrality, communicability
"""
Explanation: <h3>Node 0 has three neighbors: 1, 2, and 3. Of the three possible edges among them, only two are actually present. So, its clustering coefficient is 2/3 or 0.667</h3>
<h2>Centrality and communicability</h2>
<b>Centrality</b> deals with identifying the most important nodes in a graph<p>
<b>Communicability</b> measures how easy it is to send a message from node i to node j
<li>closeness_centrality: (n-1)/sum(shortest path to all other nodes)
<li>betweenness_centrality: fraction of pair shortest paths that pass through node n
<li>degree centrality: fraction of nodes that n is connected to
<li>communicability: the sum of all walks from one node to every other node
End of explanation
"""
type(closeness_centrality(friend_graph))
from collections import OrderedDict
cc = OrderedDict(sorted(
closeness_centrality(friend_graph).items(),
key = lambda x: x[1],
reverse = True))
cc
"""
Explanation: <h3>Closeness centrality is a measure of how near a node is to every other node in a network</h3>
<h3>The higher the closeness centrality, the more central a node is</h3>
<h3>Roughly, because it can get to more nodes in shorter jumps</h3>
End of explanation
"""
G=nx.complete_graph(4)
nx.closeness_centrality(G)
G.remove_edge(1,2)
pos=nx.spring_layout(G) # positions for all nodes
# nodes
nx.draw_networkx_nodes(G,pos,
node_color='r',
node_size=500,
alpha=0.8)
# edges
#nx.draw_networkx_edges(sub_graph,pos,width=1.0,alpha=0.5)
nx.draw_networkx_edges(G,pos,
edgelist=G.edges(),
width=8,alpha=0.5,edge_color='b')
node_name={}
for node in G.nodes():
node_name[node]=str(node)
nx.draw_networkx_labels(G,pos,node_name,font_size=16)
plt.axis('off')
plt.show() # display
nx.closeness_centrality(G)
"""
Explanation: <h3>Understanding closeness centrality</h3>
End of explanation
"""
G = nx.Graph([(0,1),(1,2),(1,5),(5,4),(2,4),(2,3),(4,3),(3,6)])
nx.communicability(G)
#Define a layout for the graph
pos=nx.spring_layout(G) # positions for all nodes
# draw the nodes: red, sized, transperancy
nx.draw_networkx_nodes(G,pos,
node_color='r',
node_size=500,
alpha=1)
# draw the edges
nx.draw_networkx_edges(G,pos,
width=8,alpha=0.5,edge_color='b')
node_name={}
for node in G.nodes():
node_name[node]=str(node)
nx.draw_networkx_labels(G,pos,node_name,font_size=16)
plt.axis('off')
plt.show() # display
# communicability is the sum of closed walks of different lengths between nodes.
#communicability(friend_graph) #Costly operation, we won't do this. Try it at home!
"""
Explanation: <li>n=4
<li>shortest paths from 2 (2-0:1, 2-3:1, 2-1:2)
<li> (n-1)/sum = 3/4 = 0.75
<h2>Communicability</h2>
A measure of the degree to which one node can communicate with another<p>
Takes into account all paths between pairs of nodes<p>
The more paths, the higher the communicability
End of explanation
"""
G=nx.complete_graph(4)
nx.betweenness_centrality(G)
"""
Explanation: <h2>Betweenness centrality</h2>
<h3>A measure of the extent to which a node is connected to other nodes that are not connected to each other.</h3>
<h3>It’s a measure of the degree to which a node serves as a connector</h3>
<h3>Example: a traffic bottleneck</h3>
<h4>The number of shortest paths that go through node n/total number of shortest paths</h4>
End of explanation
"""
G.remove_edge(1,2)
nx.betweenness_centrality(G)
#Define a layout for the graph
pos=nx.spring_layout(G) # positions for all nodes
# draw the nodes: red, sized, transperancy
nx.draw_networkx_nodes(G,pos,
node_color='r',
node_size=500,
alpha=1)
# draw the edges
nx.draw_networkx_edges(G,pos,
width=8,alpha=0.5,edge_color='b')
node_name={}
for node in G.nodes():
node_name[node]=str(node)
nx.draw_networkx_labels(G,pos,node_name,font_size=16)
plt.axis('off')
plt.show() # display
nx.all_pairs_shortest_path(G)
"""
Explanation: <h3>When the graph is fully connected, no shortest paths go through the node. So the numerator is zero</h3>
End of explanation
"""
nx.betweenness_centrality(friend_graph)
"""
Explanation: <h3>There are 12 shortest paths in total</h3>
<h3>Two go through 0 (1, 0, 2) and (2, 0, 1)</h3>
<h3> Betweeness centrality: 2/12</h3>
End of explanation
"""
G = nx.complete_graph(4)
nx.eccentricity(G)
G.remove_edge(1,2)
nx.eccentricity(G)
"""
Explanation: <h3>Dispersion in fully connected graphs</h3>
<li>Eccentricity: the max distance from one node to all other nodes (least eccentric is more central)
<li>diameter: the max eccentricity of all nodes in a graph (the longest shortest path)
<li>periphery: the set of nodes with eccentricity = diameter
End of explanation
"""
nx.diameter(G)
nx.periphery(G)
nx.diameter(friend_graph)
nx.periphery(friend_graph)
G = nx.complete_graph(4)
print(nx.diameter(G))
print(nx.periphery(G))
G.remove_edge(1,2)
print(nx.diameter(G))
print(nx.periphery(G))
"""
Explanation: <h2>Diameter</h2>
The longest shortest path in the graph
<h2>Periphery</h2>
The nodes with the longest shortest paths (the peripheral nodes)
End of explanation
"""
from networkx.algorithms.clique import find_cliques, cliques_containing_node
for clique in find_cliques(friend_graph):
print(clique)
cliques_containing_node(friend_graph,2)
#nx.draw(nx.make_max_clique_graph(friend_graph))
"""
Explanation: <h3>Cliques</h3>
A clique is a subgraph in which every node is connected to every other node
End of explanation
"""
from networkx.algorithms.distance_measures import center
center(largest_graph)
"""
Explanation: <h3>Center: The set of nodes that are the most central (they have the smallest distance to any other node)</h3>
Graph must be connected
End of explanation
"""
# abhipr1/DATA_SCIENCE_INTENSIVE | Data_Story_1/Adv_vs_Trade.ipynb | apache-2.0
est_m = smf.ols(formula='Sales ~ TV', data=adv).fit()
est_m.summary()
"""
Explanation: Determine whether TV advertising has any impact on sales.
End of explanation
"""
# Plot the data and fitted line
x_prime = pd.DataFrame({'TV': np.linspace(adv.TV.min(),
adv.TV.max(),
100)})
y_hat = est_m.predict(x_prime)
plt.xlabel("TV")
plt.ylabel("Sales")
plt.title("Example of Heteroskedasticity")
plt.scatter(adv.TV, adv.Sales, alpha=0.3)
plt.plot(x_prime, y_hat, 'r', linewidth=2, alpha=0.9)
# View the residuals
plt.figure()
plt.scatter(est_m.predict(adv), est_m.resid, alpha=0.3)
plt.xlabel("Predicted Sales")
plt.ylabel("Residuals")
"""
Explanation: TV advertising has an impact on sales.
End of explanation
"""
est_l = smf.ols(formula='np.log(Sales) ~ TV', data=adv).fit()
y_hat = est_l.predict(x_prime)
# Plot data
plt.figure()
plt.xlabel("TV")
plt.ylabel("log(Sales)")
plt.title("Log Transformation of y")
plt.scatter(adv.TV, np.log(adv.Sales), alpha=0.3)
plt.plot(x_prime, y_hat, 'r', linewidth=2, alpha=0.9)
plt.ylim(1.5, 3.5)
"""
Explanation: Log Transform
End of explanation
"""
est_multi = smf.ols(formula='Sales ~ TV + Radio + Newspaper', data=adv).fit()
est_multi.summary()
"""
Explanation: Multiple Regression
End of explanation
"""
est_news = smf.ols(formula='Sales ~ Newspaper', data=adv).fit()
est_news.summary()
"""
Explanation: Simple linear regression with Newspaper only
End of explanation
"""
est_tv_radio = smf.ols(formula='Sales ~ TV + Radio', data=adv).fit()
est_tv_radio.summary()
"""
Explanation: Question
-- In the simple linear regression of Sales on Newspaper, the coefficient is positive, but in the multiple regression with all media it becomes negative. What does this mean?
End of explanation
"""
# LSSTC-DSFP/LSSTC-DSFP-Sessions | Sessions/Session02/Day2/ModelSelection_ExerciseSolutions.ipynb | mit
%matplotlib inline
import matplotlib.pyplot as plt
# comment out this line if you don't have seaborn installed
import seaborn as sns
sns.set_palette("colorblind")
import numpy as np
"""
Explanation: Model Selection For Machine Learning
In this exercise, we will explore methods to do model selection in a machine learning context, in particular cross-validation and information criteria. At the same time, we'll learn about scikit-learn's class structure and how to build a pipeline.
Why Model Selection?
There are several reasons why you might want to perform model selection:
You might not be sure which machine learning algorithm is most appropriate.
The algorithm you have chosen might have a regularization parameter whose value you want to determine.
The algorithm you have chosen might have other parameters (e.g. the depth of a decision tree, the number of clusters in KMeans, ...) you would like to determine.
You might not be sure which of your features are the most useful/predictive.
Question: Can you think of other reasons and contexts in which model selection might be important?
Your decisions for how to do model selection depend very strongly (like everything else in machine learning) on the purpose of your machine learning procedure. Is your main purpose to accurately predict outcomes for new samples? Or are you trying to infer something about the system?
Inference generally restricts the number of algorithms you can reasonably use, and also the number of model selection procedures you can apply. In the following, assume that everything below works for prediction problems; I will point out methods for inference where appropriate. Additionally, assume that everything below works for supervised machine learning. We will cover unsupervised methods further below.
Imports
Let's first import some stuff we're going to need.
End of explanation
"""
# execute this line:
from astroquery.sdss import SDSS
TSquery = """SELECT TOP 10000
p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r,
p.deVMag_r, p.expMag_r, p.modelMag_r, p.cModelMag_r,
s.class
FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid
WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND s.class != 'QSO'
ORDER BY p.objid ASC
"""
SDSSts = SDSS.query_sql(TSquery)
SDSSts
"""
Explanation: First, we're going to need some data. We'll work with the star-galaxy data from the first session. This uses the astroquery package and then queries the top 10000 observations from SDSS (see this exercise for more details):
End of explanation
"""
# cross_validation and grid_search were merged into model_selection in scikit-learn 0.18
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
# set the random state
rs = 23 # we are in Chicago after all
# extract feature names, remove class
feats = list(SDSSts.columns)
feats.remove('class')
# cast astropy table to pandas, remove classes
X = np.array(SDSSts[feats].to_pandas())
# our classes are the outcomes to classify on
y = np.array(SDSSts['class'])
# let's do a split in training and test set:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = rs)
# we'll leave the test set for later.
# instantiate the random forest classifier:
RFmod = RandomForestClassifier()
# do a grid search over the free random forest parameters:
pars = {"n_estimators": [10, 100, 300],
"max_features": [1, 3, 7],
"min_samples_leaf": [1,10]}
grid_results = GridSearchCV(RandomForestClassifier(),
pars,
cv = 5)
grid_results.fit(X_train, y_train)
"""
Explanation: Exercise 1: Visualize this data set. What representation is most appropriate, do you think?
Exercise 2: Let's now do some machine learning. In this exercise, you are going to use a random forest classifier to classify this data set. Here are the steps you'll need to perform:
* Split the column with the classes (stars and galaxies) from the rest of the data
* Cast the features and the classes to numpy arrays
* Split the data into a test set and a training set. The training set will be used to train the classifier; the test set we'll reserve for the very end to test the final performance of the model (more on this on Friday). You can use the scikit-learn function test_train_split for this task
* Define a RandomForest object from the sklearn.ensemble module. Note that the RandomForest class has three parameters:
- n_estimators: The number of decision trees in the random forest
- max_features: The maximum number of features to use for the decision trees
- min_samples_leaf: The minimum number of samples that need to end up in a terminal leaf (this effectively limits the number of branchings each tree can make)
* We'll want to use cross-validation to decide between parameters. You can do this with the scikit-learn class GridSearchCV. This class takes a classifier as an input, along with a dictionary of the parameter values to search over.
In the earlier lecture, you learned about four different types of cross-validation:
* hold-out cross validation, where you take a single validation set to compare your algorithm's performance to
* k-fold cross validation, where you split your training set into k subsets, each of which holds out a different portion of the data
* leave-one-out cross validation, where you have N different subsets, each of which leaves just one sample as a validation set
* random subset cross validation, where you pick a random subset of your data points k times as your validation set.
Exercise 2a: Which of the four algorithms is most appropriate here? And why?
Answer: In this case, k-fold CV or random subset CV seem to be the most appropriate algorithms to use.
* Using hold-out cross validation leads to a percentage of the data not being used for training at all.
* Given that the data set is not too huge, using k-fold CV probably won't slow down the ML procedure too much.
* LOO CV is particularly useful for small data sets, where even training on a subset of the training data is difficult (for example because there are only very few examples of a certain class).
* Random subset CV could also yield good results, since there's no real ordering to the training data. Do not use this algorithm when the ordering matters (for example in Hidden Markov Models)
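As a rough sketch of how the four strategies map onto scikit-learn (using a tiny toy array, not the SDSS data), the relevant classes live in the model_selection module:

```python
import numpy as np
from sklearn.model_selection import KFold, LeaveOneOut, ShuffleSplit, train_test_split

X = np.arange(20).reshape(10, 2)  # 10 toy samples, 2 features

# hold-out: a single validation split
X_tr, X_val = train_test_split(X, test_size=0.3, random_state=0)

# k-fold: k validation subsets, each sample held out exactly once
kfold_splits = list(KFold(n_splits=5).split(X))

# leave-one-out: N splits, each holding out a single sample
loo_splits = list(LeaveOneOut().split(X))

# random subset: k random validation sets (samples may repeat across repetitions)
ss_splits = list(ShuffleSplit(n_splits=4, test_size=0.3, random_state=0).split(X))

print(len(kfold_splits), len(loo_splits), len(ss_splits))  # → 5 10 4
```

Each `split` call yields (train indices, validation indices) pairs, which is all a grid search needs.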
Important: One important thing to remember is that cross-validation crucially depends on your samples being independent from each other. Be sure that this is the case before using it. For example, say you want to classify images of galaxies, but your data set is small, and you're not sure whether your algorithm is rotation independent. So you might choose to use the same images multiple times in your training data set, but rotated by a random degree. In this case, you have to make sure all versions of the same image are included in the same data set (either the training, the validation or the test set), and not split across data sets! If you don't, your algorithm will be unreasonably confident in its accuracy (because you are training and validating essentially on the same data points).
Note that scikit-learn can actually deal with that! The class GroupKFold allows k-fold cross validation using an array of indices for your training data. Validation sets will only be split among samples with different indices.
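A minimal sketch of that idea (the `groups` array and the toy data are illustrative, not from any real image set): every rotated copy of an image shares a group ID, and GroupKFold guarantees that no ID ever appears on both sides of a split.

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# 12 samples: 4 "original images", each with 3 rotated copies
X = np.arange(24).reshape(12, 2)
y = np.array([0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1])
groups = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3])  # image IDs

for train_idx, val_idx in GroupKFold(n_splits=4).split(X, y, groups):
    # no image ID ever appears in both the training and the validation fold
    assert set(groups[train_idx]).isdisjoint(groups[val_idx])
```

If any rotated copy leaked into the validation fold, the in-loop assertion would fail, so running this cell silently is the check.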
But this was just an aside. Last time, you used a random forest and used k-fold cross validation to effectively do model selection for the different parameters that the random forest classifier uses.
Exercise 2b: Now follow the instructions above and implement your random forest classifier.
End of explanation
"""
# grid_scores_ was removed in scikit-learn 0.20; the per-fold results now live in cv_results_
grid_results.cv_results_
"""
Explanation: Exercise 2c: Take a look at the different validation scores for the different parameter combinations. Are they very different or are they similar?
End of explanation
"""
from sklearn.decomposition import PCA
# instantiate the PCA object
pca = PCA(n_components=2)
# fit and transform the samples:
X_pca = pca.fit_transform(X)
# make a plot object
fig, ax = plt.subplots(1, 1, figsize=(12,8))
# loop over number of classes:
for i,l in enumerate(np.unique(y)):
members = y == l
plt.scatter(X_pca[members, 0], X_pca[members, 1],
color=sns.color_palette("colorblind",8)[i],
label=l)
ax.set_xlabel("PCA Component 1")
ax.set_ylabel("PCA Component 2")
plt.legend()
"""
Explanation: It looks like the scores are very similar, and have very small variance between the different cross validation instances. It can be useful to do this kind of representation to see for example whether there is a large variance in the cross-validation results.
Cross-validating Multiple Model Components
In most machine learning applications, your machine learning algorithm might not be the only component having free parameters. You might not even be sure which machine learning algorithm to use!
For demonstration purposes, imagine you have many features, but many of them might be correlated. A standard dimensionality reduction technique to use is Principal Component Analysis.
Exercise 4: The number of features in our present data set is pretty small, but let's nevertheless attempt to reduce dimensionality with PCA. Run a PCA decomposition in 2 dimensions and plot the results. Colour-code stars versus galaxies. How well do they separate along the principal components?
Hint: Think about whether you can run PCA on training and test set separately, or whether you need to run it on both together before doing the train-test split?
End of explanation
"""
# Train PCA on training data set
X_pca_train = pca.fit_transform(X_train)
# apply to test set
X_pca_test = pca.transform(X_test)
# we'll leave the test set for later.
# instantiate the random forest classifier:
RFmod = RandomForestClassifier()
# do a grid search over the free random forest parameters:
pars = {"n_estimators": [10, 100, 300],
"max_features": [1, 2],
"min_samples_leaf": [1,10]}
grid_results = GridSearchCV(RandomForestClassifier(),
pars,
cv = 5)
grid_results.fit(X_pca_train, y_train)
grid_results.best_score_
"""
Explanation: Exercise 5: Re-do the classification on the PCA components instead of the original features.
End of explanation
"""
from sklearn.pipeline import Pipeline
# make a list of name-estimator tuples
estimators = [('pca', PCA()), ('clf', RandomForestClassifier())]
# instantiate the pipeline
pipe = Pipeline(estimators)
# make a dictionary of parameters
params = dict(pca__n_components=[2, 4, 6, 8],
clf__n_estimators=[10, 100, 300],
clf__min_samples_leaf=[1,10])
# perform the grid search
grid_search = GridSearchCV(pipe, param_grid=params)
grid_search.fit(X_train, y_train)
print(grid_search.best_score_)
print(grid_search.best_params_)
"""
Explanation: Note: In general, you should (cross-)validate both your data transformations and your classifiers!
But how do we know whether two was really the right number of components to choose? Perhaps it should have been three? Or four? Ideally, we would like to include the feature engineering in our cross validation procedure. In principle, you can do this by running a complicated for-loop. In practice, this is what scikit-learn's Pipeline is for! A Pipeline object takes a list of tuples of ("string", ScikitLearnObject) pairs as input and strings them together (your feature vector X will be put first through the first object, then the second object and so on sequentially).
Note: scikit-learn distinguishes between transformers (i.e. classes that transform the features into something else, like PCA, t-SNE, StandardScaler, ...) and predictors (i.e. classes that produce predictions, such as random forests, logistic regression, ...). In a pipeline, all but the last objects must be transformers; the last object can be either.
Exercise 6: Make a pipeline including (1) a PCA object and (2) a random forest classifier. Cross-validate both the PCA components and the parameters of the random forest classifier. What is the best number of PCA components to use?
Hint: You can also use the convenience function make_pipeline to create your pipeline.
Hint: Check the documentation for the precise notation to use for cross-validating parameters.
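As a hedged sketch of the naming convention the hints allude to: make_pipeline names each step after its lowercased class name, and cross-validated parameters are then addressed as stepname__param (double underscore). The parameter values below are illustrative.

```python
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# make_pipeline names each step after its lowercased class name
pipe = make_pipeline(PCA(), RandomForestClassifier())
print(pipe.named_steps.keys())  # → dict_keys(['pca', 'randomforestclassifier'])

# parameters for GridSearchCV are addressed as <stepname>__<param>
params = {"pca__n_components": [2, 4],
          "randomforestclassifier__n_estimators": [10, 100]}
```

Feeding `pipe` and `params` to GridSearchCV then cross-validates the transformer and the classifier jointly.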
End of explanation
"""
# First, let's redo the train-test split to split the training data
# into training and hold-out validation set
X_train_new, X_val, y_train_new, y_val = train_test_split(X_train, y_train,
test_size = 0.2,
random_state = rs)
# Now we have to re-do the PCA pipeline:
from sklearn.pipeline import Pipeline
# make a list of name-estimator tuples
estimators = [('pca', PCA()), ('clf', RandomForestClassifier())]
# instantiate the pipeline
pipe = Pipeline(estimators)
# make a dictionary of parameters
params = dict(pca__n_components=[2, 4, 6, 8],
clf__n_estimators=[10, 100, 300],
clf__min_samples_leaf=[1,10])
# perform the grid search
grid_search = GridSearchCV(pipe, param_grid=params)
grid_search.fit(X_train_new, y_train_new)
print("Best score: " + str(grid_search.best_score_))
print("Best parameter set: " + str(grid_search.best_params_))
print("Validation score for model with PCA: " + str(grid_search.score(X_val, y_val)))
# I'm going to pick locally linear embedding here:
# LLE has two free parameters:
# - the number of parameters to use `n_neighbors`
# - the number of components in the output
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.pipeline import Pipeline
# make a list of name-estimator tuples
estimators = [('lle', LocallyLinearEmbedding()), ('clf', RandomForestClassifier())]
# instantiate the pipeline
pipe2 = Pipeline(estimators)
# make a dictionary of parameters
params = dict(lle__n_components=[2, 4, 6, 8],
lle__n_neighbors=[5, 10, 100],
clf__n_estimators=[10, 100, 300],
clf__min_samples_leaf=[1,10])
# perform the grid search
grid_search2 = GridSearchCV(pipe2, param_grid=params)
grid_search2.fit(X_train_new, y_train_new)
print("Best score: " + str(grid_search2.best_score_))
print("Best parameter set: " + str(grid_search2.best_params_))
print("Validation score for model with LLE: " + str(grid_search2.score(X_val, y_val)))
"""
Explanation: It looks like n_components=6 works best.
Comparing Algorithms
So far, we've just picked PCA because it's common. But what if there's a better algorithm for dimensionality reduction out there for our problem? Or what if you'd want to compare random forests to other classifiers?
In this case, your best option is to split off a separate validation set, perform cross-validation for each algorithm separately, and then compare the results using hold-out cross validation and your validation set (Note: Do not use your test set for this! Your test set is only used for your final error estimate!)
Doing CV across algorithms is tricky, since a single grid-search object needs to know which parameters belong to which algorithm.
Exercise 7: Pick an algorithm from the manifold learning library in scikit-learn, cross-validate a random forest on top of it as well as on top of PCA, and compare the performance of the two.
Important: Do not choose t-SNE. The reason is that t-SNE does not generalize to new samples! This means while it's useful for data visualization, you cannot train a t-SNE transformation (in the scikit-learn implementation) on one part of your data and apply it to another!
End of explanation
"""
from sklearn.linear_model import LogisticRegressionCV
lr = LogisticRegressionCV(penalty="l2", Cs=10, cv=10)
lr.fit(X_train, y_train)
lr.coef_
"""
Explanation: Looks like PCA does slightly better as a dimensionality reduction method.
Challenge Problem: Interpreting Results
Earlier today, we talked about interpreting machine learning models. Let's see how you would go about this in practice.
Repeat your classification with a logistic regression model.
Is the logistic regression model easier or harder to interpret? Why?
Assume you're interested in which features are the most relevant to your classification (because they might have some bearing on the underlying physics). Would you do your classification on the original features or the PCA transformation? Why?
Change the subset of parameters used in the logistic regression models. Look at the weights. Do they change? How? Does that affect your interpretability?
End of explanation
"""
# let's leave out the first parameter and see whether the coefficients change:
lr.fit(X_train[:,1:], y_train)
lr.coef_
"""
Explanation: Answer 1: Whether the model is easier or harder to interpret depends on what type of interpretability is desired. If you are interested in how the features influence the classification, the logistic regression model is easier to interpret: because random forests is an ensemble method, it's very hard to understand in detail how a prediction comes about (since the individual decision trees may have very different structures). However, for very large feature spaces with complicated, engineered features, your linear model (the logistic regression model) loses interpretability in how the parameters affect the outcomes just as much.
Answer 2: The more feature engineering you do, the harder it will be to interpret the results. The PCA features are a linear transformation of your original eight features. But what do they mean in physical terms? Who knows?
End of explanation
"""
from sklearn.base import BaseEstimator, TransformerMixin
class RebinTimeseries(BaseEstimator, TransformerMixin):
def __init__(self, n=4, method="average"):
"""
Initialize hyperparameters
:param n: number of samples to bin
:param method: "average" or "sum" the samples within a bin?
:return:
"""
self.n = n ## save number of bins to average together
self.method = method
return
def fit(self,X):
"""
I don't really need a fit method!
"""
## set number of light curves (L) and
## number of samples per light curve (k)
return self
def transform(self, X):
self.L, self.K = X.shape
## set the number of binned samples per light curve
K_binned = int(self.K/self.n)
## if the number of samples in the original light curve
## is not divisible by n, then chop off the last few samples of
## the light curve to make it divisible
#print("X shape: " + str(X.shape))
if K_binned*self.n < self.K:
X = X[:,:self.n*K_binned]
## the array for the new, binned light curves
X_binned = np.zeros((self.L, K_binned))
if self.method in ["average", "mean"]:
method = np.mean
elif self.method == "sum":
method = np.sum
else:
raise Exception("Method not recognized!")
#print("X shape: " + str(X.shape))
#print("L: " + str(self.L))
for i in range(self.L):
t_reshape = X[i,:].reshape((K_binned, self.n))
X_binned[i,:] = method(t_reshape, axis=1)
return X_binned
def predict(self, X):
pass
def score(self, X):
pass
def fit_transform(self, X, y=None):
self.fit(X)
X_binned = self.transform(X)
return X_binned
"""
Explanation: Answer 3: Some of the coefficients just changed sign! This is one of the problems with directly interpreting linear models: they are quite sensitive to the structure of the feature space. If you took these parameters and interpreted them in a causal sense, you might get completely different causal inferences depending on which parameters you use so be careful to check how robust your model is to changes in the feature space!
Even More Challenging Challenge Problem: Implementing Your Own Estimator
Sometimes, you might want to use algorithms, for example for feature engineering, that are not implemented in scikit-learn. But perhaps these transformations still have free parameters to estimate. What to do?
scikit-learn classes inherit from certain base classes that make it easy to implement your own objects. Below is an example I wrote for a machine learning model on time series, where I wanted to re-bin the time series in different ways and and optimize the rebinning factor with respect to the classification afterwards.
End of explanation
"""
class PSFMagThreshold(BaseEstimator, TransformerMixin):
def __init__(self, p=1.45,):
"""
Initialize hyperparameters
Parameters
----------
p : float
The threshold for the magnitude - model magnitude
"""
self.p = p # store parameter in object
return
def fit(self,X):
"""
I don't really need a fit method!
"""
return self
def transform(self, X):
# extract relevant columns
psfmag = X[:,0]
c_psfmag = X[:,-1]
# compute difference
d_psfmag = psfmag - c_psfmag
# make a 1D array of length N
X_new = np.zeros(X.shape[0])
X_new[d_psfmag > self.p] = 1.0
# IMPORTANT: Your output vector must be a COLUMN vector
# You can achieve this with the numpy function atleast_2D()
# and the numpy function transpose()
return np.atleast_2d(X_new).T
def predict(self, X):
pass
def score(self, X):
pass
def fit_transform(self, X, y=None):
self.fit(X)
X_new = self.transform(X)
return X_new
pt = PSFMagThreshold(p=1.45)
X_pt = pt.fit_transform(X)
"""
Explanation: Here are the important things about writing transformer objects for use in scikit-learn:
* The class must have the following methods:
- fit: fit your training data
- transform: transform your training data into the new representation
- predict: predict new examples
- score: score predictions
- fit_transform is optional (TransformerMixin provides it automatically once fit and transform are defined)
* The __init__ method only sets up parameters. Don't put any relevant code in there (this is convention more than anything else, but it's a good one to follow!)
* The fit method is always called in a Pipeline object (either on its own or as part of fit_transform). It usually modifies the internal state of the object, so returning self (i.e. the object itself) is usually fine.
* For transformer objects, which don't need scoring and prediction methods, you can simply give those methods a pass body, as above.
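Stripped down to the bare requirements, a custom transformer can be very short. The ColumnDifference class below is a made-up example (name and logic are mine, not from the notebook): with BaseEstimator and TransformerMixin, fit and transform alone are enough to drop the object into a Pipeline.

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

class ColumnDifference(BaseEstimator, TransformerMixin):
    """Toy transformer: replace the features by (first column - last column)."""
    def fit(self, X, y=None):
        return self                                   # nothing to learn here
    def transform(self, X):
        return np.atleast_2d(X[:, 0] - X[:, -1]).T    # must be a COLUMN vector

X = np.random.RandomState(0).normal(size=(50, 4))
y = (X[:, 0] - X[:, -1] > 0).astype(int)

pipe = Pipeline([("diff", ColumnDifference()), ("clf", LogisticRegression())])
pipe.fit(X, y)
print(pipe.score(X, y))
```

Because the label here is a deterministic function of the engineered feature, the pipeline classifies the training set almost perfectly, confirming the transformer slots in correctly.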
Exercise 8: Last time, you learned that the SDSS photometric classifier uses a single hard cut to separate stars and galaxies in imaging data:
$$\mathtt{psfMag} - \mathtt{cmodelMag} \gt 0.145,$$
sources that satisfy this criterion are considered galaxies.
Implement an object that takes $\mathtt{psfMag}$ and $\mathtt{cmodelMag}$ as inputs and has a free parameter p that sets the value above which a source is considered a galaxy.
Implement a transform methods that returns a single binary feature that is one if $$\mathtt{psfMag} - \mathtt{cmodelMag} \gt p$$ and zero otherwise.
Add this feature to your optimized set of features consisting of either the PCA or your alternative representation, and run a random forest classifier on both. Run a CV on all components involved.
Hint: $\mathtt{psfMag}$ and $\mathtt{cmodelMag}$ are the first and the last column in your feature vector, respectively.
Hint: You can use FeatureUnion to combine the outputs of two transformers in a single data set. (Note that using pipeline with all three will chain them, rather than compute the feature union, followed by a classifier). You can input your FeatureUnion object into Pipeline.
End of explanation
"""
from sklearn.pipeline import FeatureUnion
transformers = [("pca", PCA(n_components=2)),
("pt", PSFMagThreshold(p=1.45))]
feat_union = FeatureUnion(transformers)
X_transformed = feat_union.fit_transform(X_train)
"""
Explanation: Now let's make a feature set that combines this feature with the PCA features:
End of explanation
"""
# combine the PCA features and the threshold feature in a FeatureUnion
transformers = [("pca", PCA()),
("pt", PSFMagThreshold(p=1.45))]
feat_union = FeatureUnion(transformers)
estimators = [("feats", feat_union),
("clf", RandomForestClassifier())]
pipe_c = Pipeline(estimators)
# make the parameter set
params = dict(feats__pca__n_components=[2, 4, 6, 8],
feats__pt__p=[0.5, 0.9, 1.45, 2.0],
clf__n_estimators=[10, 100, 300],
clf__min_samples_leaf=[1,10])
# perform the grid search
grid_search_c = GridSearchCV(pipe_c, param_grid=params)
grid_search_c.fit(X_train_new, y_train_new)
# print validation score
print("Best score: " + str(grid_search_c.best_score_))
print("Best parameter set: " + str(grid_search_c.best_params_))
print("Validation score: " + str(grid_search_c.score(X_val, y_val)))
"""
Explanation: Now we can build the pipeline:
End of explanation
"""
# all stars
star_ind = np.argwhere(y == b"STAR").T[0]
# all galaxies
galaxy_ind = np.argwhere(y == b"GALAXY").T[0]
np.random.seed(100)
# new array with much fewer stars
star_ind_new = np.random.choice(star_ind, replace=False, size=int(len(star_ind)/80.0))
X_new = np.vstack((X[galaxy_ind], X[star_ind_new]))
y_new = np.hstack((y[galaxy_ind], y[star_ind_new]))
"""
Explanation: Choosing The Right Scoring Function
As a standard, the algorithms in scikit-learn use accuracy to score results. The accuracy is basically the raw fraction of correctly classified samples in your validation or test set.
Question: Is this scoring function always the best method to use? Why (not)? Can you think of alternatives to use?
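One way to see the problem is a toy sketch (hand-made labels, not the SDSS data; `zero_division` assumes a recent scikit-learn): on a 99:1 class balance, a classifier that always predicts the majority class still scores 99% accuracy, while the F1 score exposes the failure.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# 1000 samples, only 10 positives; a "classifier" that always predicts 0
y_true = np.zeros(1000, dtype=int)
y_true[:10] = 1
y_pred = np.zeros(1000, dtype=int)

print(accuracy_score(y_true, y_pred))             # → 0.99  (looks great)
print(f1_score(y_true, y_pred, zero_division=0))  # → 0.0   (catches the failure)
```

If the rare class is the one you care about, accuracy alone can be badly misleading, which motivates the exercise below.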
Let's make a heavily biased data set:
End of explanation
"""
print(len(y_new[y_new == b"GALAXY"]))
print(len(y_new[y_new == b"STAR"]))
"""
Explanation: We have now made a really imbalanced data set with many galaxies and only a few stars:
End of explanation
"""
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, accuracy_score
X_train2, X_test2, y_train2, y_test2 = train_test_split(X_new, y_new,
test_size = 0.3,
random_state = 20)
C_all = [0.0001, 10000]
for C in C_all:
lr = LogisticRegression(penalty='l2', C=C)
lr.fit(X_train2, y_train2)
y_pred = lr.predict(X_test2)
print("The accuracy score for C = %g is %.4f"%(C, accuracy_score(y_test2, y_pred)))
cm = confusion_matrix(y_test2, y_pred, labels=np.unique(y))
print(cm)
"""
Explanation: Exercise 10: Run a logistic regression classifier on this data, for a very low regularization (0.0001) and a very large regularization (10000) parameter. Print the accuracy and a confusion matrix of the results for each run. How many mis-classified samples are in each? Where do the mis-classifications end up? If you were to run a cross validation on this, could you be sure to get a good model? Why (not)?
Hint: Our imbalanced class, the one we're interested in, is the STAR class.
End of explanation
"""
from sklearn.metrics import f1_score
for C in C_all:
lr = LogisticRegression(penalty='l2', C=C)
lr.fit(X_train2, y_train2)
y_pred = lr.predict(X_test2)
print("The accuracy score for C = %g is %.4f"%(C, accuracy_score(y_test2, y_pred)))
print("The F1 score for C = %g is %.4f"%(C, f1_score(y_test2, y_pred,
pos_label=b"STAR",
average="binary")))
cm = confusion_matrix(y_test2, y_pred, labels=np.unique(y))
print(cm)
"""
Explanation: Exercise 11: Take a look at the metrics implemented for model evaluation in scikit-learn, in particular the different versions of the F1 score. Is there a metric that may be more suited to the task above? Which one?
End of explanation
"""
amitkaps/machine-learning | RF_GBM/notebook/Bank Marketing.ipynb | mit
#Import the necessary libraries
import numpy as np
import pandas as pd
#Read the train and test data
train = pd.read_csv("../data/train.csv")
test = pd.read_csv("../data/test.csv")
"""
Explanation: Frame
The client bank XYZ is running a direct marketing campaign. It wants to identify customers who would potentially be buying their new term deposit plan.
Acquire
Data is obtained from UCI Machine Learning repository.
http://mlr.cs.umass.edu/ml/datasets/Bank+Marketing
Data from direct marketing campaign (phone calls) of a Portuguese Bank is provided.
Attribute Information:
bank client data:
age (numeric)
job : type of job (categorical: 'admin.','blue-collar','entrepreneur','housemaid','management','retired','self-employed','services','student','technician','unemployed','unknown')
marital : marital status (categorical: 'divorced','married','single','unknown'; note: 'divorced' means divorced or widowed)
education (categorical: 'basic.4y','basic.6y','basic.9y','high.school','illiterate','professional.course','university.degree','unknown')
default: has credit in default? (categorical: 'no','yes','unknown')
housing: has housing loan? (categorical: 'no','yes','unknown')
loan: has personal loan? (categorical: 'no','yes','unknown')
related with the last contact of the current campaign:
contact: contact communication type (categorical: 'cellular','telephone')
month: last contact month of year (categorical: 'jan', 'feb', 'mar', ..., 'nov', 'dec')
day_of_week: last contact day of the week (categorical: 'mon','tue','wed','thu','fri')
duration: last contact duration, in seconds (numeric). Important note: this attribute highly affects the output target (e.g., if duration=0 then y='no'). Yet, the duration is not known before a call is performed. Also, after the end of the call y is obviously known. Thus, this input should only be included for benchmark purposes and should be discarded if the intention is to have a realistic predictive model.
other attributes:
campaign: number of contacts performed during this campaign and for this client (numeric, includes last contact)
pdays: number of days that passed by after the client was last contacted from a previous campaign (numeric; 999 means client was not previously contacted)
previous: number of contacts performed before this campaign and for this client (numeric)
poutcome: outcome of the previous marketing campaign (categorical: 'failure','nonexistent','success')
social and economic context attributes
emp.var.rate: employment variation rate - quarterly indicator (numeric)
cons.price.idx: consumer price index - monthly indicator (numeric)
cons.conf.idx: consumer confidence index - monthly indicator (numeric)
euribor3m: euribor 3 month rate - daily indicator (numeric)
nr.employed: number of employees - quarterly indicator (numeric)
Output variable (desired target):
y - has the client subscribed a term deposit? (binary: 'yes','no')
The given data is randomly divided into train and test for the purpose of this workshop. Build the model for train and use it to predict on test.
Explore
End of explanation
"""
#train
#test
#Are they the same?
#Combine train and test
frames = [train, test]
input = pd.concat(frames)
#Print first 10 records of input
"""
Explanation: Exercise 1
print the number of rows and columns of train and test
Exercise 2
Print the first 10 rows of train
Exercise 3
Print the column types of train and test. Are they the same in both train and test?
End of explanation
"""
#Replace deposit with a numeric column
#First, set all labels to be 0 (use .loc here: .at only accepts scalar labels, not slices)
input.loc[:, "depositLabel"] = 0
#Now, set depositLabel to 1 whenever deposit is yes
input.loc[input.deposit=="yes", "depositLabel"] = 1
"""
Explanation: Exercise 4
Find if any column has missing value
There is a pd.isnull function. How to use that?
End of explanation
"""
#Create the labels
labels =
labels
#Drop the deposit column
input.drop(["deposit", "depositLabel"], axis=1)
"""
Explanation: Exercise 5
Find % of customers in the input dataset who have purchased the term deposit
End of explanation
"""
#Get list of columns that are continuous/integer
continuous_variables = input.dtypes[input.dtypes != "object"].index
continuous_variables
#Get list of columns that are categorical
categorical_variables = input.dtypes[input.dtypes=="object"].index
categorical_variables
"""
Explanation: Exercise 6
Did it drop? If not, what has to be done?
Exercise 7
Print columnn names of input
End of explanation
"""
inputInteger =
#print inputInteger
inputInteger.head()
inputCategorical =
#print inputCategorical
inputCategorical.head()
#Convert categorical variables into Labels using labelEncoder
inputCategorical = np.array(inputCategorical)
"""
Explanation: Exercise 8
Create inputInteger and inputCategorical - two datasets - one having integer variables and another having categorical variables
End of explanation
"""
#Load the preprocessing module
from sklearn import preprocessing
for i in range(len(categorical_variables)):
lbl = preprocessing.LabelEncoder()
lbl.fit(list(inputCategorical[:,i]))
inputCategorical[:, i] = lbl.transform(inputCategorical[:, i])
#print inputCategorical
"""
Explanation: Exercise 9
Find length of categorical_variables
End of explanation
"""
inputInteger =
inputInteger
"""
Explanation: Exercise 10
Convert inputInteger to numpy array
End of explanation
"""
inputUpdated.shape
"""
Explanation: Exercise 11
Now, create the inputUpdated array that has both inputInteger and inputCategorical concatenated
Hint Check function called vstack and hstack
End of explanation
"""
from sklearn import tree
from io import StringIO  # sklearn.externals.six was removed in scikit-learn 0.23
import pydot
bankModelDT = tree.DecisionTreeClassifier(max_depth=2)
bankModelDT.fit(inputUpdated[:train.shape[0],:], labels[:train.shape[0]])
dot_data = StringIO()
tree.export_graphviz(bankModelDT, out_file=dot_data)
graph = pydot.graph_from_dot_data(dot_data.getvalue())
graph.write_pdf("bankDT.pdf")
#Check the pdf
"""
Explanation: Train the model
Model 1: Decision Tree
End of explanation
"""
# Prediction
prediction_DT = bankModelDT.predict(inputUpdated[train.shape[0]:,:])
#Compute the error metrics
import sklearn.metrics
#auc() expects curve coordinates; for label/score pairs use roc_auc_score
sklearn.metrics.roc_auc_score(labels[train.shape[0]:], prediction_DT)
#What does that tell?
#What's the error AUC for the other Decision Tree Models
"""
Explanation: Exercise 12
Now, change the max_depth = 6 and check the results.
Then, change the max_depth= None and check the results
End of explanation
"""
#column 1 of predict_proba holds the probability of the positive class
sklearn.metrics.roc_auc_score(labels[train.shape[0]:], prediction_DT[:,1])
"""
Explanation: Exercise 13
Instead of predicting classes directly, predict the probability and check the auc
End of explanation
"""
#Precision and Recall
sklearn.metrics.precision_score(labels[train.shape[0]:], prediction_DT)
sklearn.metrics.recall_score(labels[train.shape[0]:], prediction_DT)
"""
Explanation: Accuracy Metrics
AUC
ROC
Misclassification Rate
Confusion Matrix
Precision & Recall
Confusion Matrix
<img src="img/confusion_matrix.jpg" style="width:604px;height:428px;">
Calculate True Positive Rate
TPR = TP / (TP+FN)
Calculate False Positive Rate
FPR = FP / (FP+TN)
Precision
Precision = TP / (TP+FP)
Recall
Recall = TP / (TP+FN)
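The definitions above can be checked numerically on a small hand-made label vector (toy values, not the bank data), reading TN/FP/FN/TP straight off the confusion matrix:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0, 0, 0])

# ravel order for binary labels is (tn, fp, fn, tp)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
tpr = tp / (tp + fn)        # true positive rate = recall
fpr = fp / (fp + tn)        # false positive rate
precision = tp / (tp + fp)

print(tp, fp, fn, tn)       # → 3 1 1 5
print(tpr, fpr, precision)
```

The hand-computed precision and recall match scikit-learn's precision_score and recall_score on the same arrays.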
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
bankModelRF = RandomForestClassifier(n_jobs=-1, oob_score=True)
bankModelRF.fit(inputUpdated[:train.shape[0],:], labels[:train.shape[0]])
bankModelRF.oob_score_
"""
Explanation: Ensemble Trees
<img src="img/tree_ensemble1.png" style="width:604px;height:428px;">
<br>
<br>
<br>
<br>
<br>
<br>
<img src="img/tree_ensemble2.png" style="width:604px;height:428px;">
src: http://www.slideshare.net/hustwj/scaling-up-machine-learning-the-tutorial-kdd-2011-part-iia-tree-ensembles
Random Forest
<img src="img/random_forest.jpg" style="width:604px;height:428px;">
src: http://www.slideshare.net/0xdata/jan-vitek-distributedrandomforest522013
End of explanation
"""
import xgboost as xgb
params = {}
params["min_child_weight"] = 3
params["subsample"] = 0.7
params["colsample_bytree"] = 0.7
params["scale_pos_weight"] = 1
params["silent"] = 0
params["max_depth"] = 4
params["nthread"] = 6
params["gamma"] = 1
params["objective"] = "binary:logistic"
params["eta"] = 0.005
params["base_score"] = 0.1
params["eval_metric"] = "auc"
params["seed"] = 123
plst = list(params.items())
num_rounds = 120
xgtrain_pv = xgb.DMatrix(inputUpdated[:train.shape[0],:], label=labels[:train.shape[0]])
watchlist = [(xgtrain_pv, 'train')]
bankModelXGB = xgb.train(plst, xgtrain_pv, num_rounds)
prediction_XGB = bankModelXGB.predict(xgb.DMatrix(inputUpdated[train.shape[0]:,:]))
sklearn.metrics.roc_auc_score(labels[train.shape[0]:], prediction_XGB)  # roc_auc_score, since these are predicted probabilities
"""
Explanation: Exercise 14
Do the following
Predict on test
Find accuracy metrics: AUC, Precision, Recall
How does it compare against Decision Tree
Gradient Boosting Machines
<img src="img/boosting.jpg" style="width:604px;height:428px;">
src: http://www.slideshare.net/hustwj/scaling-up-machine-learning-the-tutorial-kdd-2011-part-iia-tree-ensembles
End of explanation
"""
inputOneHot = pd.get_dummies(input)
"""
Explanation: Another way of encoding
One Hot Encoding
<img src="img/onehot.png" style="width:404px;height:228px;">
Whiteboard !
End of explanation
"""
|
bramacchino/numberSense | GraphStructure.ipynb | mit | from igraph import Graph, summary
g = Graph()
g.add_vertices(3) #0,1,2
#g.add_vertices([0,1,2])
g.add_edges([(0,1), (1,2)])
g.add_vertices(3)
#g.delete_vertices([1,2]) try it out: deleting vertices also deletes the incident edges
g.add_edges([(2,3),(3,4),(4,5),(5,3)])
g.delete_edges(g.get_eid(2,3)) #Note that edge and vertex IDs stay contiguous: when one is removed,
#the later ones are renumbered; attributes can be used to keep stable identifiers
print(g)
summary(g)
g2 = Graph.Tree(127, 2)
g2.get_edgelist()[0:10]
g.isomorphic(g2)
from igraph import *
vertices = ["one", "two", "three"]
edges = [(0,2),(2,1),(0,1)]
g = Graph(vertex_attrs={"label": vertices}, edges=edges, directed=True)
visual_style = {}
visual_style["bbox"] = (300, 300)
plot(g, **visual_style)
"""
Explanation: (NOTES, Learning Graphs in Python): material not yet organized
This notebook requires igraph and cairo; installation instructions are provided below
Graph Structure
Pros/Cons Adjacency Lists/Matrices
Pros/Cons Adjacency Lists/Matrices - When to use
Sparse (adjacency) Matrices in scipy
as explained in
https://codereview.stackexchange.com/questions/95598/order-a-list-of-tuples-or-a-numpy-array-in-a-specific-format
Great introduction (not only)
https://www.python-course.eu/graphs_python.php
Graphs as Objects:
https://triangleinequality.wordpress.com/2013/08/21/graphs-as-objects-in-python/
Graph theory libraries are available for python:
(Some useful comments on them https://www.quora.com/What-is-the-best-Python-graph-library)
http://networkx.github.io/ (slower, easiest)
http://igraph.org/ (faster, not complicated, not great documentation)
https://graph-tool.skewed.de/ (fastest, but hard to install: https://graph-tool.skewed.de/performance, and requires attention https://stackoverflow.com/questions/36193773/graph-tool-surprisingly-slow-compared-to-networkx)
Let's have a look at igraph ###
Installation instruction at http://igraph.wikidot.com/installing-python-igraph-on-linux
In Linux Mint 18 everything was pretty smooth,
>> sudo apt-get install -y libigraph0-dev
for installing the igraph libraries
>> sudo pip3 install python-igraph
for compiling python-igraph
The getting-start tutorial http://igraph.org/python/doc/tutorial/tutorial.html, the API documentation is available at http://igraph.org/python/doc/igraph-module.html (https://github.com/igraph/python-igraph/tree/master/doc/source)
End of explanation
"""
import cairo
from igraph import plot
print(g)
visual_style = {}
layout = g.layout("tree")
visual_style["bbox"] = (300, 300)
plot(g, layout = layout, **visual_style)
"""
Explanation: igraph Visualization
Based on Cairo (and pycairo, bindings).
The installation in Linux Mint 18 has been pretty smooth. The following steps were needed
sudo add-apt-repository ppa:ricotz/testing
sudo apt-get update
sudo apt-get install libcairo2-dev
cd ~
git clone https://github.com/pygobject/pycairo.git
cd pycairo
sudo python3 setup.py install
Moreover, for python3, it was necessary to slightly modify the __init__.py file in igraph/drawing (line 354), changing io.getvalue().encode('utf-8') to io.getvalue().decode('utf-8')
Indeed this is a known issue: https://github.com/igraph/python-igraph/commit1/8864b46849b031a3013764d03e167222963c0f5d?diff=split#diff-32d2a59da2ec039816077b378595d915
For a correct visualization in jupyter notebook another hack is necessary: https://stackoverflow.com/questions/30640489/issue-plotting-vertex-labels-using-igraph-in-ipython, https://stackoverflow.com/questions/30632412/python-igraph-vertex-labels-in-ipython-notebook
End of explanation
"""
g.write_adjacency('name')
g.write_lgl('adjacency_list')
"""
Explanation: Various layouts are available:
- circle, drl, lgl, fr, grid_fr (https://github.com/gephi/gephi/wiki/Fruchterman-Reingold), kk (https://pdfs.semanticscholar.org/b8d3/bca50ccc573c5cb99f7d201e8acce6618f04.pdf), random, tree (rt, http://hci.stanford.edu/courses/cs448b/f09/lectures/CS448B-20091021-GraphsAndTrees.pdf),
- fr3d, kk3d, random3d, sphere not plottable in python: https://stackoverflow.com/questions/16907564/how-can-i-plot-a-3d-graph-in-python-with-igraph
- grid_fr listed in the documentation, but not implemented
End of explanation
"""
from igraph import plot
# Create a Graph g with
g = Graph([(0,1), (0,2), (2,3), (3,4), (4,2), (2,5), (5,0), (6,3), (5,6)])
# To change the attribute of all vertices
g.vs["name"] = ["Alice", "Bob", "Claire", "Dennis", "Esther", "Frank", "George"]
# To change the attribute of all edges
g.es["is_formal"] = [False, False, True, True, True, False, True, False, False]
# To change the attribute of a single vertex
g.es[0]['is_formal'] = True
print(g.es[0].attributes()) #available also for g.vs
print(g.es[0].index) #available also for g.vs
print(g.es[0].source)
print(g.es[0].target)
print(g.es[0].tuple)
g.vs["label"] = g.vs["name"]
g["name"] = "First Graph"
del g["name"]
plot(g, layout='kk')
"""
Explanation: Your best bet is probably GraphML or GML if you want to save igraph graphs in a format that can be read from an external package and you want to preserve numeric and string attributes
Attributes: ##
attached to vertices, edges and graphs. The igraph vertex, edge and graph objects behave like a standard Python dict, where the key (a string) is the name of the attribute and the value is the attribute itself.
Note: if an attribute value is $\neq$ string or number (e.g. an object), use pickle if you wish to 'save' the graph
g.vs (vertices), g.es (edges)
End of explanation
"""
g.vs.select(_degree = g.maxdegree())["name"]
g.vs.select(name_eq="pet")
"""
Explanation: Attribute based querying:
select behave like filter (see filter,map, reduce), depending on the parameter:
_degree := positional arguments before keyword arguments
a function := vertex included if the function return true (similar to filter)
iterable := integers taken as indexes (the rest is ignored)
select accept also the following keywords:
name_eq, name_ne, name_lt, name_le, name_gt, name_ge, name_in, name_notin
That is the attribute value must be (equal, or greather then, etc..) to the value of the keyword
Note that more such keywords are available; see the igraph documentation
End of explanation
"""
# Styling graph
from igraph import *
import numpy as np
# Create the graph
vertices = ["one", "two", "three"]
edges = [(0,2),(2,1),(0,1)]
g = Graph(vertex_attrs={"label": vertices}, edges=edges, directed=True)
visual_style = {}
# Scale vertices based on degree
outdegree = g.outdegree()
visual_style["vertex_size"] = [x/max(outdegree)*50+110 for x in outdegree]
# Set bbox and margin
visual_style["bbox"] = (800,800)
visual_style["margin"] = 100
# Define colors used for outdegree visualization
colours = ['#fecc5c', '#a31a1c']
# Order vertices in bins based on outdegree
bins = np.linspace(0, max(outdegree), len(colours))
digitized_degrees = np.digitize(outdegree, bins)
# Set colors according to bins
g.vs["color"] = [colours[x-1] for x in digitized_degrees]
# Also color the edges
for ind, color in enumerate(g.vs["color"]):
edges = g.es.select(_source=ind)
edges["color"] = [color]
# Don't curve the edges
visual_style["edge_curved"] = False
visual_style["bbox"] = (500, 500)
# Plot the graph
plot(g, **visual_style)
# Community detection
# https://stackoverflow.com/questions/25254151/using-igraph-in-python-for-community-detection-and-writing-community-number-for
from igraph import *
import numpy as np
# Create the graph
vertices = [i for i in range(7)]
edges = [(0,2),(0,1),(0,3),(1,0),(1,2),(1,3),(2,0),(2,1),(2,3),(3,0),(3,1),(3,2),(2,4),(4,5),(4,6),(5,4),(5,6),(6,4),(6,5)]
g = Graph(vertex_attrs={"label":vertices}, edges=edges, directed=True)
visual_style = {}
# Scale vertices based on degree
outdegree = g.outdegree()
visual_style["vertex_size"] = [x/max(outdegree)*25+50 for x in outdegree]
# Set bbox and margin
visual_style["bbox"] = (600,600)
visual_style["margin"] = 100
# Define colors used for outdegree visualization
colours = ['#fecc5c', '#a31a1c']
# Order vertices in bins based on outdegree
bins = np.linspace(0, max(outdegree), len(colours))
digitized_degrees = np.digitize(outdegree, bins)
# Set colors according to bins
g.vs["color"] = [colours[x-1] for x in digitized_degrees]
# Also color the edges
for ind, color in enumerate(g.vs["color"]):
edges = g.es.select(_source=ind)
edges["color"] = [color]
# Don't curve the edges
visual_style["edge_curved"] = False
# Community detection
communities = g.community_edge_betweenness(directed=True)
clusters = communities.as_clustering()
# Set edge weights based on communities
weights = {v: len(c) for c in clusters for v in c}
g.es["weight"] = [weights[e.tuple[0]] + weights[e.tuple[1]] for e in g.es]
# Choose the layout
N = len(vertices)
visual_style["layout"] = g.layout_fruchterman_reingold(weights=g.es["weight"], maxiter=1000, area=N**3, repulserad=N**3)
# Plot the graph
plot(g, **visual_style)
"""
Explanation: Structural properties of a graph
degree(), betweenness(), edge_betweenness(),
Some examples#
End of explanation
"""
# OK, for some reason pop doesn't work (the attribute view is not exactly a dictionary)
from igraph import *
#http://igraph.org/python/doc/igraph.VertexSeq-class.html#attributes
g = Graph()
g.add_vertices([0,1,2])
print(g.vs.get_attribute_values('name'))
for vertex in g.vs:
print(vertex.attributes())
print(vertex.index)
for key in vertex.attributes():
vertex.attributes()[str(vertex.index)] = vertex.attributes().pop(key)
print(vertex)
g.vs[0]['ciao']=g.vs[0]['name']
del g.vs[0]['name']
print(g.vs[0])
#List of vertices (index) in the graph
vertices = [v.index for v in g.vs]
print(vertices)
#import inspect
#inspect.getsourcelines(Graph)
# import dill
# from dill.source import getsource
# print(getsource(g.vs.get_attribute_values('name')))
for vertex in g.vs:
print(vertex.degree())
g.degree(0)
"""
Explanation: igraph API:
End of explanation
"""
g.betweenness(0,1)
"""
Explanation: betweenness(vertices=None, directed=True, cutoff=None, weights=None, nobigint=True)
source code
Calculates or estimates the betweenness of vertices in a graph.
End of explanation
"""
print(g.vcount())
print(g.ecount())
a = set([])
a.add('ciao')
a
from functools import reduce
%timeit list(set(reduce(lambda x,y: x+y, [[1,2],[3,4],[1,4]])))
[1,2]+[3,4]
from itertools import chain
%timeit list(set(chain(*[[1,2],[3,4],[1,4]])))
%timeit list(set([item for sublist in [[1,2],[3,4],[1,4]] for item in sublist]))
def target(lista):
return lista[1]
incident_outward_edges = [[1,2],[1,3],[1,4],[1,6],[2,5],[1,8],[1,0],[1,2],[1,3],[1,4],[1,6],[2,5],[1,8],[1,0]]
%timeit targets = set(map(lambda x: target(x), incident_outward_edges))
targets
%timeit set([target(x) for x in incident_outward_edges])
"""
Explanation: Getting the number of vertices and edges
End of explanation
"""
|
sbalanovich/APM115Proj1 | src/Serguei Workspace.ipynb | mit | %matplotlib inline
import csv
import numpy as np
from scipy import interp
import matplotlib.pyplot as plt
from sklearn import svm, datasets
from sklearn.cross_validation import train_test_split
from sklearn.metrics import roc_curve, auc
from sklearn.cross_validation import StratifiedKFold
from sklearn import preprocessing
import seaborn as sns
def score(clf, X_test, y_test):
idx = 0
score = 0
for x in X_test:
y_pred = clf.predict([x])
if y_pred == y_test[idx]:
score += 1
idx+=1
return 100*score/idx
def score_complex(clf,X_test,Y_test):
count = 0
tp = 0
fp = 0
tn = 0
fn = 0
user = Y_test.count(1)
impostor = Y_test.count(0)
for x in X_test:
y_hat = clf.predict([x])
if Y_test[count] == 1 and y_hat == 1:
tp += 1
if Y_test[count] == 1 and y_hat == 0:
fn += 1
if Y_test[count] == 0 and y_hat == 1:
fp += 1
if Y_test[count] == 0 and y_hat == 0:
tn += 1
count += 1
return (round(tp/user*100,2),round(tn/impostor*100,2),round(fp/impostor*100,2),round(fn/user*100,2))
"""
Explanation: Keyboard Biometrics
SVM Approach
Problem Background
The problem explored in this project is the identification of a given user based on their typing patterns. Here, we used 4 datasets (found in the data folder under the names 'keystroke0' (the CMU dataset), 'keystroke1' (the standard dataset), 'keystroke2' (the multi-password dataset), and 'keystroke3' (the demographics dataset)). The latter 3 come from the GREYC lab.
In this project, we will run an SVM classifier on several configurations. First, we will attempt to use an n-class classifier, where n is the number of users in total for the dataset. Here we will try to precisely detect from the test set which of the n users it came from.
Next, we will run a two-class classifier, in an attempt to determine with a simple binary whether the input provided has come from the genuine user or from one of the impostors. This is a simpler problem because it does not need to be as specific, but it is more practical for real-world applications as keyboard biometrics are really only interesting for ensuring that the typist of the password is in fact the authenticated user.
Finally, we want to detect not only the proportion of time that the classifier gets the correct answer with respect to the genuine user and the impostor, but also to actually focus in on the false positives (when an impostor gets in) and the false negatives (when the genuine user is erroneously blocked from the system). For this portion, we will attempt to construct an ROC curve to visually present the true and false positive rates for the detections.
SVM Background
Support vector machines (SVMs) are supervised learning models. Given a set of training examples, each marked for belonging to one of two categories, an SVM training algorithm encodes every input as some p-dimensional vector and computes a (p-1)-dimensional hyperplane to separate these points into two groups. New datapoints are then assigned a similar p-dimensional vector which then falls one one or the other side of the hyperplane and the point is thus classified.
The SVM creates a margin separating the data. The margin is defined by the distance from the hyperplane to the nearest point on either side of the hyperplane. If the data is linearly separable, it will be possible to create a "hard margin" meaning the data will be evenly split into two halves and there will be no data in the margin at all. This is a very good classifier but the data is not always this nice. More commonly, a "soft margin" is used which simply has a penalty factor on the points that fall within the margin and minimizes this penalty to determine the best hyperplane.
The hyperplane's shape is determined by the "kernel" function used. The kernel can be linear, polynomial, or using a Gaussian radial basis function (rbf). Using a kernel will morph the plane from a linear shape to some curve which will often allow for better classification depending on the type of data and the parameters of the curve.
For multi-class classification, we use the standard dual-class SVM but take a one vs all or a one vs one approach to determine the one class that the given test data point fits best. In one vs all, n classifiers are created to compare the class x vs the other classes. The test point is run through each of these classifiers and the one with the highest probability of being in the class rather than being in the impostor group wins. In one vs one, each class is pitted against the others and a winner-takes-all strategy is employed until the best one is found.
SVM for Biometrics
SVMs have been proven good for text categorization problems and bioinformatics. They are also known to perform well for large datasets. They are best for linearly separable data, though it's not certain whether the datasets looked at in this project have this quality.
End of explanation
"""
filename = "../data/keystroke0.csv"
X = []
Y = []
with open(filename, 'r') as f:
reader = csv.reader(f)
first = 1
for row in reader:
if first:
first = 0
continue
X.append(row[3:])
Y.append(int(row[0][1:]))
"""
Explanation: Dataset 1 - CMU
Background
This is the basic dataset used in many keyboard biometrics studies. It contains 51 users and 20,400 total rows
Preprocessing the data
End of explanation
"""
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=42)
clf = svm.SVC(C = 1)
clf.fit(X_train, y_train)
print(str(score(clf, X_test, y_test)) + '%')
"""
Explanation: Multi-class SVM
End of explanation
"""
X = preprocessing.scale(X)
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=42)
clf = svm.SVC(C = 1)
clf.fit(X_train, y_train)
print(str(score(clf, X_test, y_test)) + '%')
# Test different Cs
results = []
Cs = [1,25,50,75,100]
for i in [1,25,50,75,100]:
results.append(score(svm.SVC(C = i).fit(X_train, y_train), X_test, y_test))
print(Cs)
print(results)
# Test different kernels
for i in ['linear','rbf','poly','sigmoid']:
print(i + ': ' + str(score(svm.SVC(kernel = i).fit(X_train, y_train), X_test, y_test)) + '%')
"""
Explanation: Now we normalize the data with mean 0 and variance 1. We get a significantly better fit. We will normalize all datasets in this way from here on.
End of explanation
"""
genuines = [2, 5, 13, 39]
ys = []
for g in genuines:
# Genuine = 1, Impostor = 0
newY = []
for elt in Y:
if elt == g:
newY.append(1)
else:
newY.append(0)
ys.append(newY)
# Make sure we have exactly 400 1s in each list
for i in range(len(genuines)):
print(sum(ys[i]))
scores = []
for y in ys:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf = svm.SVC(C = 1)
clf.fit(X_train, y_train)
scores.append(str(score(clf, X_test, y_test)) + '%')
print(scores)
"""
Explanation: RBF is clearly the best kernel in this case (which is also the default), so we will use it for all remaining calculations on this data set.
Genuine vs Impostor SVM
Clearly, SVM is fairly accurate at identifying the 51 users in the database based on their keyboard biometrics. However, much more important than this metric is the algorithm's ability to distinguish one specified user from the other 50, as would be the case if the user wanted to use this biometric as a means of authenticating his/her identity before logging into a computer, phone, etc.
End of explanation
"""
true_positives = []
false_negatives = []
true_negatives = []
false_positives = []
count = 0
for i in [x for x in range(2,58) if x in Y]:
count += 1
user = [1 if x == i else 0 for x in Y]
X_train, X_test, Y_train, Y_test = train_test_split(X, user, test_size=0.2, random_state=42)
clf = svm.SVC()
clf.fit(X_train,Y_train)
scores = score_complex(clf,X_test,Y_test)
true_positives.append(scores[0])
false_negatives.append(scores[3])
true_negatives.append(scores[1])
false_positives.append(scores[2])
# print('SUBJECT' + str(i) + ': \n True-Positive = ' + str(scores[0]) + '% \n False-Negative = ' + str(scores[3]) + '% \n True-Negative = ' + str(scores[1]) + '% \n False-Positive = ' + str(scores[2]) + '%')
print(count)
plt.figure()
sns.boxplot(true_positives,color = 'turquoise')
plt.title('True-Positive Rates for All Users')
plt.xlabel('TPR')
plt.show()
plt.figure()
sns.boxplot(false_positives,color = 'r')
plt.title('False-Positive Rates for All Users')
plt.xlabel('FPR')
plt.show()
"""
Explanation: Very weird, or maybe not that weird: they're not exactly the same, just very close, which makes sense given what you were saying the other day. It's going to be almost entirely correct because there are so many 0s relative to 1s.
End of explanation
"""
# Gonna just use the first y since it doesn't seem to matter
yyy = np.array(ys[0])
XXX = np.array(X)
# More intense cross-validation
cv = StratifiedKFold(yyy, n_folds=6)
classifier = svm.SVC(kernel='linear', probability=True)
mean_tpr = 0.0
mean_fpr = np.linspace(0, 1, 100)
all_tpr = []
for i, (train, test) in enumerate(cv):
probas_ = classifier.fit(XXX[train], yyy[train]).predict_proba(XXX[test])
# Compute ROC curve and area the curve
fpr, tpr, thresholds = roc_curve(yyy[test], probas_[:, 1])
mean_tpr += interp(mean_fpr, fpr, tpr)
mean_tpr[0] = 0.0
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, lw=1, label='ROC fold %d (area = %0.2f)' % (i, roc_auc))
plt.plot([0, 1], [0, 1], '--', color=(0.6, 0.6, 0.6), label='Luck')
mean_tpr /= len(cv)
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
plt.plot(mean_fpr, mean_tpr, 'k--',
label='Mean ROC (area = %0.2f)' % mean_auc, lw=2)
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
"""
Explanation: Using SVM to validate the identity of each of the users against the others generates some interesting results. The TPR (i.e. the percentage of time the true user is attempting to log in and is identified as such) takes on quite a large range of values between different users, with a mean of ~78%. This means that 1/5 times the true user will be rejected from his/her own account, which is less than ideal. The complementary metric, FPR (i.e. the percentage of time an impostor is attempting to log in as someone else and is incorrectly identified as the true user) takes on a much narrower range of values, and thankfully, has a much lower mean. The mean is ~5%, meaning that only 1/20 times will an impostor be able to get into someone else's account, and the data is left-skewed. This is still not a terrific metric, but is not bad considering the naivety of the approach.
ROC Curve
Below is the basic code for creating some ROC curves. In this test I just made 6 different "folds", or cross-validation splits and compared them against each other. Ideally we would be comparing different C values, kernels, and datasets using this
End of explanation
"""
filename = "../data/keystroke1.csv"
X = []
y = []
with open(filename, 'r') as f:
reader = csv.reader(f)
first = 1
for row in reader:
if first:
first = 0
continue
rw = list(map(lambda x: x/10000000., (map(int, row[5].split(' ')[1:61]))))
if len(rw) < 60:
continue
X.append(rw)
y.append(int(row[0]))
X = preprocessing.scale(X)
"""
Explanation: Dataset 2 - GREYC Standard
Background
This is the standard GREYC dataset. It contains 133 users and 7,555 total rows
Preprocessing the data
End of explanation
"""
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf = svm.SVC(C = 1)
clf.fit(X_train, y_train)
"""
Explanation: Multi-class SVM
End of explanation
"""
print(str(score(clf, X_test, y_test)) + '%')
"""
Explanation: This actually takes a stupidly long time to run.
End of explanation
"""
filename = "../data/keystroke3-1.csv"
X = []
Y = []
gender = []
age = []
handedness = []
with open(filename, 'r') as f:
reader = csv.reader(f)
first = 1
for row in reader:
if first:
first = 0
continue
rw = list(map(lambda x: x/10000000., (map(int, row[6].split(' ')[1:61]))))
if len(rw) < 5:
print(rw)
X.append(rw)
Y.append(int(row[0]))
gender.append(row[1])
age.append(int(row[2]))
handedness.append(row[3])
# normalize with mean 0 and variance 1
X = preprocessing.scale(X)
# 1 = female, 0 = male
gender = [0 if x == 'M' else 1 for x in gender]
# 1 = right, 0 = left
handedness = [0 if x == 'L' else 1 for x in handedness]
"""
Explanation: Well, this is a bit awkward. The data is very strange: I divided everything by 10,000,000 because the numbers were huge (I suspect they were in milliseconds), but it is still unclear why this took so long and yielded 0.
Dataset 4 - GREYC Demographics
Background
This is the GREYC dataset where users are also classified by demographics. It contains 110 users and each has a binary class assigned to them based on age (younger or older than 30), gender, and handedness.
The data is actually split into 2 files - 'keystroke3-1.csv' and 'keystroke3-2.csv'
Each of these has the typing of a different password
Preprocessing the data
End of explanation
"""
print('Right-Handed: ' + str(np.round(handedness.count(1)/len(handedness),2)*100) + '%')  # 1 = right-handed per the encoding above
print('Male: ' + str(np.round(gender.count(0)/len(gender),2)*100) + '%')
plt.figure()
plt.hist(age,bins = 15)
plt.title('Age of Subjects')
plt.xlabel('Age')
plt.ylabel('Frequency')
plt.show()
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=42)
clf = svm.SVC(C = 1)
clf.fit(X_train, y_train)
print(str(score(clf, X_test, y_test)) + '%')
"""
Explanation: Data set demographics:
End of explanation
"""
# test different kernels
for i in ['linear','rbf','poly','sigmoid']:
print(i + ': ' + str(score(svm.SVC(kernel = i).fit(X_train, y_train), X_test, y_test)) + '%')
"""
Explanation: This one goes a lot faster, but still: 3%? Better after normalizing, but still really low
End of explanation
"""
true_positives = []
false_negatives = []
true_negatives = []
false_positives = []
for i in [x for x in range(1,max(Y)) if x in Y]:
user = [1 if x == i else 0 for x in Y]
X_train, X_test, Y_train, Y_test = train_test_split(X, user, test_size=0.2, random_state=42)
if Y_test.count(1) == 0:
print(i)
clf = svm.SVC(kernel = 'linear')
clf.fit(X_train,Y_train)
scores = score_complex(clf,X_test,Y_test)
true_positives.append(scores[0])
false_negatives.append(scores[3])
true_negatives.append(scores[1])
false_positives.append(scores[2])
# print('SUBJECT' + str(i) + ': \n True-Positive = ' + str(scores[0]) + '% \n False-Negative = ' + str(scores[3]) + '% \n True-Negative = ' + str(scores[1]) + '% \n False-Positive = ' + str(scores[2]) + '%')
"""
Explanation: In contrast to the first data set, it appears as though a linear kernel is more appropriate for this data set, so it will be used in all further calculations.
Running into problems here... data is too limited per user. We can try a hardcode solution, but might not be worth it.
End of explanation
"""
import csv
import numpy as np
import sqlite3
from sklearn import svm
from sklearn.cross_validation import train_test_split
filename = "../data/DSL-StrongPasswordData.csv"
specs = []
users = []
with open(filename, 'r') as f:  # text mode for csv.reader under Python 3
reader = csv.reader(f)
first = 1
for row in reader:
if first:
first = 0
continue
specs.append(row[3:])
users.append(int(row[0][1:]))
users
specs
X_train, X_test, y_train, y_test = train_test_split(specs, users, test_size=0.2, random_state=42)
y_train
X_train, X_test, y_train, y_test = train_test_split(specs, users, test_size=0.2, random_state=42)
clf = svm.SVC()
clf.fit(X_train, y_train)
idx = 0
score = 0
for x in X_test:
y_pred = clf.predict([x])
if y_pred == y_test[idx]:
score += 1
idx+=1
print(str(100*score/idx) + "%")
print(score, idx)
np_specs = np.array(X_train).astype(np.float).ravel()
np_users = np.array(y_train)
print(np_specs)
print(np_users)
h = 0.02
x_min, x_max = np_specs.min() - 15, np_specs.max() + 16
y_min, y_max = np_users.min() - 15, np_users.max() + 16
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
specs[:3]
X_train, X_test, y_train, y_test = train_test_split(specs, users, test_size=0.2, random_state=42)
clf = svm.SVC()
clf.fit(X_train, y_train)
"""
Explanation: Dataset 3 - GREYC Multi-password
Background
This is the GREYC dataset where users typed multiple passwords. It contains 118 users, each with a unique password. This dataset is complex in its organization and should only be used as a stretch goal - for each password it has the user info for that password and also impostor info for trying to get at the password. Everything here lives in keystroke3.tar.gz
The dataset is here and the associated paper that uses this dataset can he found here
It's not worth looking into this until we've done good work on the other datasets
IGNORE BELOW THIS POINT, ALL SCRATCHWORK
End of explanation
"""
|
statsmodels/statsmodels.github.io | v0.13.2/examples/notebooks/generated/distributed_estimation.ipynb | bsd-3-clause | import numpy as np
from scipy.stats.distributions import norm
from statsmodels.base.distributed_estimation import DistributedModel
def _exog_gen(exog, partitions):
"""partitions exog data"""
n_exog = exog.shape[0]
n_part = np.ceil(n_exog / partitions)
ii = 0
while ii < n_exog:
jj = int(min(ii + n_part, n_exog))
yield exog[ii:jj, :]
ii += int(n_part)
def _endog_gen(endog, partitions):
"""partitions endog data"""
n_endog = endog.shape[0]
n_part = np.ceil(n_endog / partitions)
ii = 0
while ii < n_endog:
jj = int(min(ii + n_part, n_endog))
yield endog[ii:jj]
ii += int(n_part)
"""
Explanation: Distributed Estimation
This notebook goes through a couple of examples to show how to use distributed_estimation. We import the DistributedModel class and make the exog and endog generators.
End of explanation
"""
X = np.random.normal(size=(1000, 25))
beta = np.random.normal(size=25)
beta *= np.random.randint(0, 2, size=25)
y = norm.rvs(loc=X.dot(beta))
m = 5
"""
Explanation: Next we generate some random data to serve as an example.
End of explanation
"""
debiased_OLS_mod = DistributedModel(m)
debiased_OLS_fit = debiased_OLS_mod.fit(
zip(_endog_gen(y, m), _exog_gen(X, m)), fit_kwds={"alpha": 0.2}
)
"""
Explanation: This is the most basic fit, showing all of the defaults, which are to use OLS as the model class, and the debiasing procedure.
End of explanation
"""
from statsmodels.genmod.generalized_linear_model import GLM
from statsmodels.genmod.families import Gaussian
debiased_GLM_mod = DistributedModel(
m, model_class=GLM, init_kwds={"family": Gaussian()}
)
debiased_GLM_fit = debiased_GLM_mod.fit(
zip(_endog_gen(y, m), _exog_gen(X, m)), fit_kwds={"alpha": 0.2}
)
"""
Explanation: Then we run through a slightly more complicated example which uses the GLM model class.
End of explanation
"""
from statsmodels.base.distributed_estimation import _est_regularized_naive, _join_naive
naive_OLS_reg_mod = DistributedModel(
m, estimation_method=_est_regularized_naive, join_method=_join_naive
)
naive_OLS_reg_params = naive_OLS_reg_mod.fit(
zip(_endog_gen(y, m), _exog_gen(X, m)), fit_kwds={"alpha": 0.2}
)
"""
Explanation: We can also change the estimation_method and the join_method. The example below shows how this works for the standard OLS case. Here we are using a naive averaging approach instead of the debiasing procedure.
End of explanation
"""
from statsmodels.base.distributed_estimation import (
_est_unregularized_naive,
DistributedResults,
)
naive_OLS_unreg_mod = DistributedModel(
m,
estimation_method=_est_unregularized_naive,
join_method=_join_naive,
results_class=DistributedResults,
)
naive_OLS_unreg_params = naive_OLS_unreg_mod.fit(
zip(_endog_gen(y, m), _exog_gen(X, m)), fit_kwds={"alpha": 0.2}
)
"""
Explanation: Finally, we can also change the results_class used. The following example shows how this works for a simple case with an unregularized model and naive averaging.
End of explanation
"""
|
amitkaps/applied-machine-learning | Module-05a-ML-Pipeline.ipynb | mit | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use("ggplot")
%matplotlib inline
"""
Explanation: Machine Learning Supervised Pipeline
Frame
Supervised Learning - Regression
y: Predict Sale Price
X: Features about the house
score: Mean Squared Error
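The stated score is mean squared error; a quick sketch of computing it (and its root, which is in the same units as SalePrice) on made-up numbers:

```python
import numpy as np

# Hypothetical true and predicted sale prices, just to illustrate the metric.
y_true = np.array([200000.0, 150000.0, 310000.0])
y_pred = np.array([195000.0, 160000.0, 300000.0])
mse = np.mean((y_true - y_pred) ** 2)
rmse = np.sqrt(mse)  # same units as the target
```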
End of explanation
"""
data = pd.read_csv("http://bit.do/df-housing")
df = data[['SalePrice', 'OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'FullBath', 'YearBuilt', 'FireplaceQu', 'LotFrontage']]
df.head()
X = df.drop('SalePrice', axis=1)
y = df.SalePrice
"""
Explanation: Acquire
End of explanation
"""
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.impute import SimpleImputer
cat_cols = ['FireplaceQu']
num_cols = ['GrLivArea', 'GarageCars', 'TotalBsmtSF', 'FullBath', 'YearBuilt', "OverallQual" ,'LotFrontage']
cat_si_step = ('si', SimpleImputer(strategy='constant', fill_value='MISSING'))
cat_ohe_step = ('ohe', OneHotEncoder(sparse=False, handle_unknown='ignore'))
cat_steps = [cat_si_step, cat_ohe_step]
cat_pipe = Pipeline(cat_steps)
cat_transformers = [('cat', cat_pipe, cat_cols)]
num_si_step = ('si', SimpleImputer(strategy='median'))
num_ss_step = ('ss', StandardScaler())
num_steps = [num_si_step, num_ss_step]
num_pipe = Pipeline(num_steps)
num_transformers = [('num', num_pipe, num_cols)]
transformers = [('cat', cat_pipe, cat_cols),
('num', num_pipe, num_cols)]
ct = ColumnTransformer(transformers=transformers)
X_encoded = ct.fit_transform(X)
X_encoded.shape
from sklearn.linear_model import Ridge
ml_pipe = Pipeline([('transform', ct), ('ridge', Ridge())])
ml_pipe.fit(X, y)
ml_pipe.score(X, y)
from sklearn.model_selection import KFold, cross_val_score
kf = KFold(n_splits=5, shuffle=True, random_state=123)
cross_val_score(ml_pipe, X, y, cv=kf).mean()
from sklearn.model_selection import GridSearchCV
param_grid = {
'transform__num__si__strategy': ['mean', 'median'],
'ridge__alpha': [.001, 0.1, 1.0, 5, 10, 50, 100, 1000],
}
gs = GridSearchCV(ml_pipe, param_grid, cv=kf)
gs.fit(X, y)
gs.best_params_
"""
Explanation: Pipeline
Refine
Transform
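A note on the grid-search keys used above: scikit-learn routes nested parameters with double underscores, so `transform__num__si__strategy` means "the `strategy` of the `si` step inside the `num` pipeline inside the `transform` step". The key is just a path of step names ending in a parameter name:

```python
# Illustrative only: how a nested grid-search key decomposes.
key = "transform__num__si__strategy"
path = key.split("__")
# ['transform', 'num', 'si', 'strategy'] -- step names, then the parameter
```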
End of explanation
"""
|
ScienceStacks/jupyter_scisheets_widget | test_notebooks/20171017_scisheets_widget.ipynb | bsd-3-clause | import json
import numpy as np
import pandas as pd
from jupyter_scisheets_widget import scisheets_widget
"""
Explanation: Demonstration of Use Case
Users can enter step by step explanations of changes made to a SciSheet in a Jupyter notebook
Load necessary packages
End of explanation
"""
income_data = pd.read_csv('income_data.csv', sep=';')
income_data
"""
Explanation: Load data into the notebook
End of explanation
"""
tbl2 = scisheets_widget.HandsonDataFrame(income_data)
tbl2.show()
tbl2._df
tbl2._widget._model_data
tbl2._widget._model_header
"""
Explanation: Display the loaded data as a scisheet widget
End of explanation
"""
|
ml4a/ml4a-guides | examples/fundamentals/convolutional_neural_networks.ipynb | gpl-2.0 | import random
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
"""
Explanation: Convolutional networks
This is an annotated walk-through of PyTorch's example CIFAR-10 CNN code, showing how to implement convolutional networks.
The CIFAR-10 classification task is a classic machine learning benchmark. The data includes 50,000 training images belonging to 10 classes, and the task is to identify them. Along with MNIST, CIFAR-10 classification is something of a "hello world" for computer vision and convolutional networks, so a solution can be implemented quickly with an off-the-shelf machine learning library.
Since convolutional neural networks have thus far proven to be the best at computer vision tasks, we'll use the PyTorch library to implement a convolutional networks as our solution.
Note: if you have been running these notebooks on a regular laptop without a GPU until now, it's going to become more and more difficult to do so. The neural networks we will be training, starting with convolutional networks, will become increasingly memory- and processing-intensive and may slow down laptops without good graphics processing. As such, it is recommended that you follow future tutorials inside Google Colab or on a sufficiently powerful computer.
Colab does not automatically include a GPU in its runtime. Rather, by navigating to Edit > Notebook Settings > Hardware Accelerator > GPU you can enable one.
End of explanation
"""
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(32, 64)
self.fc2 = nn.Linear(64, 1)
def forward(self, x):
        x = torch.sigmoid(self.fc1(x))
x = self.fc2(x)
return x
net = Net().to(device)
"""
Explanation: Recall that a basic neural network in PyTorch can be set up like this:
Note:
The images in CIFAR-10 are 32 by 32. An easy way to troubleshoot the correct input dimension is to read the dimension mismatch error. For example, [2x35] x [1024x4] doesn't work, but [2x1024] x [1024x4] does. This is usually as simple as matching the second matrix's first number with the first matrix's second number.
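The shape rule can be checked directly; a quick sketch (NumPy used here for brevity, but the same rule applies to torch tensors):

```python
import numpy as np

a = np.zeros((2, 1024))
b = np.zeros((1024, 4))
out = a @ b  # inner dimensions agree: (2, 1024) @ (1024, 4) -> (2, 4)
```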
End of explanation
"""
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=128,
shuffle=True, num_workers=8)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=128,
shuffle=False, num_workers=8)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
"""
Explanation: We load the CIFAR-10 dataset with TorchVision, which downloads the data and applies the ToTensor and Normalize transforms for us. Here we use the DataLoader to create an instance of the training dataset and the test set. The DataLoader also has an option to shuffle the dataset, which helps avoid overfitting.
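The Normalize transform above applies (x - mean) / std per channel; with mean = std = 0.5 it maps ToTensor's [0, 1] pixel range onto [-1, 1]. A sketch of the arithmetic:

```python
import numpy as np

pixels = np.array([0.0, 0.5, 1.0])   # ToTensor output range endpoints and midpoint
normalized = (pixels - 0.5) / 0.5    # maps to [-1, 0, 1]
```

This is also why the `imshow` helper later "unnormalizes" with `img / 2 + 0.5` before plotting.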
End of explanation
"""
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()
# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
"""
Explanation: Let's see some of our samples.
End of explanation
"""
from torch import optim
optimizer = optim.SGD(net.parameters(), lr=0.01)
criterion = nn.MSELoss()
def fullPass(loader):
running_loss = 0.0
for i, data in enumerate(loader, 0):
inputs, labels = data
inputs, labels = inputs.to(device), labels.to(device)
        # zero accumulated gradients before the backward pass
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels.float())
        loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.item()
if i % inputs.size()[0] == inputs.size()[0]-1:
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / inputs.size()[0]))
running_loss = 0.0
net.train()
for epoch in range(30):
fullPass(trainloader);
print('Finished Training')
"""
Explanation: We can now compile the model — here, naively, with a mean-squared-error loss on the raw class index — and train it for 30 epochs.
End of explanation
"""
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net().to(device)
print(net)
"""
Explanation: At this point, there is practically no need to evaluate the model as our neural net is simply not cutting it. With a loss function in the millions, we cannot use a single layer simple neural network on images.
Introducing, the convolutional network... The general architecture of a convolutional neural network is this:
convolution layers, followed by pooling layers
fully-connected layers
a final fully-connected softmax layer
We'll follow this same basic structure and interweave some other components, such as [dropout](https://en.wikipedia.org/wiki/Dropout_(neural_networks), to improve performance.
To begin, we start with our convolution layers. We first need to specify some architectural hyperparemeters:
How many filters do we want for our convolution layers? Like most hyperparameters, this is chosen through a mix of intuition and tuning. A rough rule of thumb is: the more complex the task, the more filters. (Note that we don't need to use the same number of filters for each convolution layer — and in the network below we don't.)
What size should our convolution filters be? We don't want filters to be too large or the resulting matrix might not be very meaningful. For instance, a useless filter size in this task would be a 28x28 filter since it covers the whole image. We also don't want filters to be too small for a similar reason, e.g. a 1x1 filter just returns each pixel.
What size should our pooling window be? Again, we don't want pooling windows to be too large or we'll be throwing away information. However, for larger images, a larger pooling window might be appropriate (same goes for convolution filters).
We start by designing a neural network with two alternating convolutional and max-pooling layers, followed by fully-connected layers of 120 and 84 neurons and a 10-neuron output. We'll have 6 and 16 filters in the two convolutional layers, and make the input shape a full-sized image (32x32x3) instead of an unrolled vector (3072x1). We also now use ReLU activation units instead of sigmoids, to avoid vanishing gradients.
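The `16 * 5 * 5` flatten size in the code above follows directly from the layer arithmetic; a sketch of the bookkeeping (assuming stride 1 and no padding, as in the network):

```python
def conv_out(size, kernel):
    # valid convolution, stride 1
    return size - kernel + 1

def pool_out(size, window):
    # non-overlapping max pooling
    return size // window

s = 32                             # CIFAR-10 input height/width
s = pool_out(conv_out(s, 5), 2)    # conv1 (5x5) then 2x2 pool: 32 -> 28 -> 14
s = pool_out(conv_out(s, 5), 2)    # conv2 (5x5) then 2x2 pool: 14 -> 10 -> 5
flattened = 16 * s * s             # 16 channels * 5 * 5 = 400
```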
End of explanation
"""
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(30): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
inputs, labels = inputs.to(device), labels.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
        if i % 128 == 127:    # print every 128 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 128))
running_loss = 0.0
print('Finished Training')
"""
Explanation: Let's compile the model and test it again.
End of explanation
"""
correct = 0
total = 0
with torch.no_grad():
for data in testloader:
images, labels = data
images, labels = images.to(device), labels.to(device)
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
"""
Explanation: Let's evaluate the model again.
End of explanation
"""
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=0.001)
for epoch in range(5): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
inputs, labels = inputs.to(device), labels.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
        if i % 128 == 127:    # print every 128 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 128))
running_loss = 0.0
print('Finished Training')
correct = 0
total = 0
with torch.no_grad():
for data in testloader:
images, labels = data
images, labels = images.to(device), labels.to(device)
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
"""
Explanation: 58% accuracy is a big improvement on practically nothing! All of that comes simply from using convolutional layers and ReLUs.
Let's try to make the network bigger.
One problem you might notice is that the accuracy of the model is much better on the training set than on the test set. You can see that by monitoring the progress at the end of each epoch above or by evaluating it directly.
This is a symptom of "overfitting". Our model has probably tried to bend itself a little too well towards predicting the training set but does not generalize very well to unseen data. This is a very common problem.
It's normal for the training accuracy to be better than the testing accuracy to some degree, because it's hard to avoid the network being better at predicting the data it sees. But a 9% difference is too much.
One way of helping this is by doing some regularization. We can add dropout to our model after a few layers.
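Dropout isn't shown in the code here, so as a sketch of the mechanism itself (inverted dropout, written in NumPy; all names are illustrative): each unit is zeroed with probability p during training, and the survivors are rescaled so the expected activation is unchanged.

```python
import numpy as np

def dropout(x, p, rng):
    """Inverted dropout: zero units with probability p, rescale survivors."""
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

rng = np.random.default_rng(0)
activations = np.ones(8)
dropped = dropout(activations, 0.5, rng)   # entries are either 0.0 or 2.0
```

In the PyTorch model above, the equivalent would be adding e.g. `nn.Dropout(0.25)` between the fully-connected layers (illustrative placement).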
All this is arguably the most essential, albeit time-consuming, part of designing a neural network: editing hyperparameters.
We compile and train again.
Training accuracy will likely lower but our test accuracy will increase. This is more like what we expect.
Another way of improving performance is to experiment with different optimizers beyond just standard SGD. Let's try to instantiate the same network but use Adam instead of SGD.
End of explanation
"""
dataiter = iter(testloader)
images, labels = dataiter.next()
# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
outputs = net(images.to(device))
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
for j in range(4)))
"""
Explanation: 61% accuracy! Our best yet. Looks heavily overfit though...
Still a long way to go to beat the record (96%). We can make a lot of progress by making the network (much) bigger, training for (much) longer and using a lot of little tricks (like data augmentation) but that is beyond the scope of this lesson for now.
Let's also recall how to predict a single value and look at its probabilities.
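The network outputs raw logits; to read them as per-class probabilities, apply a softmax. A sketch in NumPy — in PyTorch, `F.softmax(outputs, dim=1)` does the same thing:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
# probs sums to 1; the largest logit gets the largest probability
```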
End of explanation
"""
PATH = './cifar_net.pth'
torch.save(net.state_dict(), PATH)
net.load_state_dict(torch.load(PATH))
"""
Explanation: Let's also review here how to save and load trained PyTorch models. It's easy!
End of explanation
"""
|
materialsvirtuallab/matgenb | notebooks/2018-03-09-Computing the Reaction Diagram between Two Compounds.ipynb | bsd-3-clause | from pymatgen import MPRester, Composition
from pymatgen.analysis.phase_diagram import PhaseDiagram
from pymatgen.entries.computed_entries import ComputedEntry
from pymatgen.apps.borg.hive import VaspToComputedEntryDrone
from pymatgen.entries.compatibility import MaterialsProjectCompatibility
from pymatgen.analysis.phase_diagram import ReactionDiagram, PDPlotter, PDEntry
%matplotlib inline
"""
Explanation: This notebook shows how to
* Obtain information about interface reactions between cathode and electrolyte. (VOPO4 with C3H4O3 (or EC) in this example)
* Plot formation energy as a function of mixing ratio.
Written using:
- pymatgen==2018.3.14
We use the Materials Project API to obtain energies of compounds.
End of explanation
"""
mp = MPRester()
compat = MaterialsProjectCompatibility()
chemsys = ["H", "P", "V","O", "C"]
all_entries = mp.get_entries_in_chemsys(chemsys)
"""
Explanation: Get all H, P, V, O, C entries by MPRester
End of explanation
"""
CO_entries = [e for e in all_entries if e.composition.reduced_formula == "CO"]
CO2_entries = [e for e in all_entries if e.composition.reduced_formula == "CO2"]
H2O_entries = [e for e in all_entries if e.composition.reduced_formula == "H2O"]
VPO5_entries = [e for e in all_entries if e.composition.reduced_formula == "VPO5"]
non_solid = ["CO", "CO2", "H2O", "VPO5"]
entries = list(filter(lambda e: e.composition.reduced_formula not in non_solid, all_entries))
"""
Explanation: Remove CO, CO2, H2O, VPO5 entries from all_entries, use experimental data and our own calculations
End of explanation
"""
potcars = set()
for e in all_entries:
if len(e.composition) == 1 and e.composition.reduced_formula in ["C", "H2", "O2"]:
potcars.update(e.parameters["potcar_symbols"])
"""
Explanation: Get POTCAR of C, H, O for EC to construct its ComputedEntry
End of explanation
"""
factor = 1000.0 / 6.0221409e23 / 1.60217662e-19
ec_form_energy = -682.8 * factor
ec = ComputedEntry(composition="C3H4O3", energy=0, parameters={"potcar_symbols": list(potcars)})
ec.data["oxide_type"] = "oxide"
# MaterialsProjectCompatibility
ec = compat.process_entry(ec)
pd = PhaseDiagram(all_entries)
ec.uncorrected_energy = ec_form_energy + sum([pd.el_refs[el].energy_per_atom * amt \
for el, amt in ec.composition.items()]) - ec.correction
"""
Explanation: EC solid phase:
Enthalpy of formation of solid at standard conditions = -590.9 (kJ/mol)
(http://webbook.nist.gov/cgi/cbook.cgi?ID=C96491&Units=SI&Mask=2#Thermo-Condensed)
EC liquid phase:
Enthalpy of formation of solid at standard conditions = -682.8 (kJ/mol)
(http://webbook.nist.gov/cgi/cbook.cgi?ID=C96491&Units=SI&Mask=2#Thermo-Condensed)
Get total energy by experimental formation energy, elemental energies and correction to construct EC ComputedEntry.
Apply a correction for gases such as O2. Ref: https://journals.aps.org/prb/pdf/10.1103/PhysRevB.73.195107
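The `factor` used in this notebook converts kJ/mol to eV per formula unit: divide by Avogadro's number to get kJ per formula unit, then by the electron charge to get eV. A quick check of the arithmetic:

```python
# 1 kJ/mol expressed in eV per formula unit (same constants as the notebook)
factor = 1000.0 / 6.0221409e23 / 1.60217662e-19
# factor is about 0.010364 eV per (kJ/mol)

ec_form_energy_eV = -682.8 * factor   # about -7.08 eV per formula unit of EC
```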
End of explanation
"""
vopo4 = []
vc = VaspToComputedEntryDrone()
for d in ["VOPO4/"]:
e = vc.assimilate(d)
e.data["oxide_type"] = "oxide"
e = compat.process_entry(e)
vopo4.append(e)
hxvopo4 = []
for d in ["HVOPO4/", "H2VOPO4/"]:
e = vc.assimilate(d)
e.data["oxide_type"] = "oxide"
e = compat.process_entry(e)
hxvopo4.append(e)
"""
Explanation: Use my own calculation entries
End of explanation
"""
potcars = set()
for e in all_entries:
if len(e.composition) == 1 and e.composition.reduced_formula in ["C", "O2"]:
potcars.update(e.parameters["potcar_symbols"])
co_form_energy = -110.53 * factor
co = ComputedEntry(composition="CO", energy=0, parameters={"potcar_symbols": list(potcars)})
co.data["oxide_type"] = "oxide"
co = compat.process_entry(co)
pd = PhaseDiagram(all_entries)
co.uncorrected_energy = co_form_energy + sum([pd.el_refs[el].energy_per_atom * amt \
for el, amt in co.composition.items()]) - co.correction
"""
Explanation: CO solid phase:
Enthalpy of formation of solid at standard conditions = -1.15 (eV/f.u.) = -110.5 (kJ/mol)
https://en.wikipedia.org/wiki/Carbon_monoxide_%28data_page%29
CO gas phase:
Enthalpy of formation of gas at standard conditions = -1.15 (eV/f.u.) = -110.53 (kJ/mol)
https://en.wikipedia.org/wiki/Carbon_monoxide_%28data_page%29
End of explanation
"""
potcars = set()
for e in all_entries:
if len(e.composition) == 1 and e.composition.reduced_formula in ["C", "O2"]:
potcars.update(e.parameters["potcar_symbols"])
co2_form_energy = -393.52 * factor
co2 = ComputedEntry(composition="CO2", energy=0, parameters={"potcar_symbols": list(potcars)})
co2.data["oxide_type"] = "oxide"
co2 = compat.process_entry(co2)
pd = PhaseDiagram(all_entries)
co2.uncorrected_energy = co2_form_energy + sum([pd.el_refs[el].energy_per_atom * amt
for el, amt in co2.composition.items()]) - co2.correction
"""
Explanation: CO2 gas phase:
Enthalpy of formation of gas at standard conditions = -4.43 (eV/f.u.) = −393.52 (kJ/mol)
https://en.wikipedia.org/wiki/Carbon_dioxide_%28data_page%29
End of explanation
"""
potcars = set()
for e in all_entries:
if len(e.composition) == 1 and e.composition.reduced_formula in ["H2", "O2"]:
potcars.update(e.parameters["potcar_symbols"])
h2o_form_energy = -286.629 * factor
h2o = ComputedEntry(composition="H2O", energy=0, parameters={"potcar_symbols": list(potcars)})
h2o.data["oxide_type"] = "oxide"
h2o = compat.process_entry(h2o)
pd = PhaseDiagram(all_entries)
h2o.uncorrected_energy = h2o_form_energy + sum([pd.el_refs[el].energy_per_atom * amt for el, amt in h2o.composition.items()]) - h2o.correction
entry1 = vopo4[0]
entry2 = ec
useful_entries = entries + hxvopo4 + [h2o, co2, co]
from scipy import stats
import numpy as np
%matplotlib inline
import matplotlib as mpl
mpl.rcParams['axes.linewidth']=3
mpl.rcParams['lines.markeredgewidth']=2
mpl.rcParams['lines.linewidth']=3
mpl.rcParams['lines.markersize']=13
mpl.rcParams['xtick.major.width']=3
mpl.rcParams['xtick.major.size']=8
mpl.rcParams['xtick.minor.width']=3
mpl.rcParams['xtick.minor.size']=4
mpl.rcParams['ytick.major.width']=3
mpl.rcParams['ytick.major.size']=8
mpl.rcParams['ytick.minor.width']=3
mpl.rcParams['ytick.minor.size']=4
ra = ReactionDiagram(entry1=entry1, entry2=entry2, all_entries=useful_entries)
cpd = ra.get_compound_pd()
plotter = PDPlotter(cpd, show_unstable=False)
plotter.get_plot(label_stable=False, label_unstable=False)
for i, l in ra.labels.items():
print ("%s - %s" % (i, l))
"""
Explanation: H2O liquid phase:
Enthalpy of formation of liquid at standard conditions = -286.629 (kJ/mol)
http://www1.lsbu.ac.uk/water/water_properties.html
End of explanation
"""
|
PLBMR/cmuDSCWorkshopNotebooks | okCupidInitialAnalysis.ipynb | mit | #imports
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import display, HTML
#constants
%matplotlib inline
sns.set_style("dark")
sigLev = 3
figWidth = figHeight = 5
"""
Explanation: OkCupid: Dataset Analysis
This is a notebook I am using to test out the feasibility of using the OkCupid Dataset for our first workshop.
End of explanation
"""
okCupidFrame = pd.read_csv("data/JSE_OkCupid/profiles.csv")
"""
Explanation: Dataset Information
Before starting with the analysis of this dataset, it will be important to study the codebook of variables in data/JSE_OkCupid/okcupid_codebook.txt. Some important notes I found from viewing this codebook:
Contains Profiles from 25 Mile Radius outside of San Francisco, with at least one profile picture.
Data Scraped in 2012.
Contains $n = 59946$ observations; this is a mid-sized dataset with a lot of information, although not big-data unwieldy.
Contains a mix of different demographic and lifestyle data, along with some textual data in the essay variables.
With that in mind, let's get started with the data.
End of explanation
"""
numObservations = okCupidFrame.shape[0]
numFeatures = okCupidFrame.shape[1]
"""
Explanation: Assessing Data Quality
As is usual for starting data analyses, let's assess some aspects of data quality.
End of explanation
"""
#make numMissing for a given column
def numMissing(col):
#helper that checks the number of observations missing from a given col
missingRows = col[col.isnull()]
return missingRows.shape[0]
#then apply over our feature set
missingSummaryFrame = okCupidFrame.apply(numMissing,axis = 0)
display(missingSummaryFrame)
"""
Explanation: As discussed in the codebook, we see that we have {{numObservations}} profiles in this dataset. We also have {{numFeatures}} variables in this dataset, and given that some of these variables are language data (see the dataset's codebook), this dataset can be very high-dimensional if we choose to transform it in that manner.
Let us study how many missing values we have in this dataset.
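The apply-based helper below works, but pandas also has this built in: `df.isnull().sum()` gives the same per-column missing counts in one line. A sketch on a toy frame:

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({"age": [22, np.nan, 31],
                    "essay0": [np.nan, np.nan, "hi"]})
missing_counts = toy.isnull().sum()
# age -> 1 missing, essay0 -> 2 missing
```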
End of explanation
"""
#plot age
plt.hist(okCupidFrame["age"])
#then make some labels
plt.xlabel("Age (In Years)")
plt.ylabel("Count")
plt.title("Distribution of Age")
"""
Explanation: Table 1: Counts of the number of observations missing per variable.
We see that we have variables with many missing values, and this is especially true of the language data (i.e. the essay variables). This may suggest that we want to target a particular variable that generally has fewer missing observations. Let us try out targeting the age variable.
Summary Statistics and EDA
Let us start by studying the distribution of our target variable age and some of the potentially relevant variables for determining age of someone on OkCupid.
End of explanation
"""
plt.hist(okCupidFrame["income"])
plt.xlabel("Income (In USD)")
plt.ylabel("Count")
plt.title("Distribution of Income")
"""
Explanation: Figure 1: Distribution of Age.
We see that the distribution of age is fairly young, with most people being around $20$ to $35$ years old. We see that the distribution is relatively right-skewed, with fewer observations among middle-aged individuals. This may have an effect towards a relatively imbalanced regression.
Perhaps income might be useful in determining age, as generally older persons tend to have higher incomes from being farther in their careers. Let us study the distribution of income.
End of explanation
"""
numNotReportIncome = okCupidFrame[okCupidFrame["income"] == -1].shape[0]
propNotReportIncome = float(numNotReportIncome) / okCupidFrame.shape[0]
#get percent
percentMul = 100
percentNotReportIncome = propNotReportIncome * percentMul
"""
Explanation: Figure 2: Distribution of Income (in USD).
We see that the distribution of income is heavily right-skewed, as is typical of income distributions. That being said, it is apparent that this distribution may be affected by the "Prefer not to say" category ($-1$ in this variable). Let us see how many of those observations occur.
End of explanation
"""
filteredOkCupidFrame = okCupidFrame[okCupidFrame["essay0"].notnull()]
"""
Explanation: We see that {{numNotReportIncome}} of our observations did not report their income, which is about {{int(np.round(percentNotReportIncome))}}% of the observations in our dataset. This is not ideal, and this suggests that this variable will not be strong for informing our age observations.
Another possible option is to look at the summary of individuals (essay0) to inform us of their age. It is likely that the language content, language complexity, and summary length will be strong informers of an individual's maturity, which is (hopefully) correlated with their age.
Let us filter out the observations that do not have the summary (around {{missingSummaryFrame["essay0"]}} observations) and then study that language.
End of explanation
"""
#language imports
import nltk
import collections as co
import StringIO
import re
#find full distribution of word frequencies
#write them all to a string writer
stringWriteTerm = StringIO.StringIO()
filteredOkCupidFrame["essay0"].apply(lambda x: stringWriteTerm.write(x))
#get the full string from the writer
summaryString = stringWriteTerm.getvalue()
stringWriteTerm.close()
#lower and split into series of words (tokens) on multiple split criteria
summaryString = summaryString.lower()
#split on ".", " ", ";", "-", or new line
summaryWordList = re.split("\.| |,|;|-|\n",summaryString)
#keep only legal words, and non stop-words (i.e. "." or "&")
#get counter of legal English
legalWordCounter = co.Counter(nltk.corpus.words.words())
stopWordsCounter = co.Counter(nltk.corpus.stopwords.words())
#filter narrativeWordList
filterSummaryWordList = [i for i in summaryWordList
if i in legalWordCounter and
i not in stopWordsCounter]
#counter for the legal words in our filtered list
filteredWordCounter = co.Counter(filterSummaryWordList)
"""
Explanation: We will start analyzing this data by first filtering out all the stopwords (e.g. punctuation) from our data and removing any words that are not featured in the standard English dictionary. We will use the Natural Language Toolkit (NLTK) to perform this task.
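The filtering step boils down to: tokenize, keep words that are in a vocabulary, drop stopwords, and count. A stdlib-only sketch of the same idea (the toy vocabulary and stopword set here are made up; the notebook uses NLTK's corpora instead):

```python
import re
import collections as co

text = "I love food. I love the outdoors."
vocab = {"love", "food", "outdoors", "i", "the"}   # toy stand-in for nltk.corpus.words
stopwords = {"i", "the"}                            # toy stand-in for nltk stopwords

tokens = re.split(r"\.| |,|;|-|\n", text.lower())
kept = [t for t in tokens if t in vocab and t not in stopwords]
counts = co.Counter(kept)
# counts: love -> 2, food -> 1, outdoors -> 1
```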
End of explanation
"""
#make series of word frequency ordered by most common words
wordFrequencyFrame = pd.DataFrame(filteredWordCounter.most_common(),
columns = ["Word","Frequency"])
wordFrequencyFrame["Density"] = (wordFrequencyFrame["Frequency"] /
np.sum(wordFrequencyFrame["Frequency"]))
#then plot rank-density plot
#for the sake of easier visuals, we will log the rank
desiredLineWidth = 3
plt.plot(np.log(wordFrequencyFrame.index+1),wordFrequencyFrame["Density"],
lw = desiredLineWidth)
plt.xlabel("Log-Rank")
plt.ylabel("Density")
plt.title("Log(Rank)-Density Plot\nFor Words in our Summary Set")
"""
Explanation: We see that there are {{sum(filteredWordCounter.values())}} non-distinct words that occur in the entire corpus of summaries and {{len(filteredWordCounter.values())}} distinct words in this corpus. This is actually a rather diverse set of possible words that occur. Let us look at the distribution of the words by rank.
End of explanation
"""
topLev = 10
topTenWordFrame = wordFrequencyFrame.iloc[0:topLev,:].loc[:,
["Word","Frequency"]]
#then display
display(HTML(topTenWordFrame.to_html(index = False)))
"""
Explanation: Figure 3: Our word distribution by $\log(Rank)$ of a word.
We see that this distribution dissipates after about the top $e^6 \approx 403$ most frequent words in our corpus, which suggests that this is a very sparse language. Since we have over $20000$ possible words in our corpus and only about $403$ of them are used that often, this suggests to me that if we were to do a formal model selection procedure, we would need to not consider many words that occur very rarely.
Let us see what our top ten most frequent words look like.
End of explanation
"""
#import our count vectorizer
from sklearn.feature_extraction.text import CountVectorizer
#make a vocab dictionary
counterList = filteredWordCounter.most_common()
vocabDict = {}
for i in xrange(len(counterList)):
rankWord = counterList[i][0]
vocabDict[rankWord] = i
#initialize vectorizer
vectorizer = CountVectorizer(min_df=1,stop_words=stopWordsCounter,
vocabulary = vocabDict)
#then fit and transform our summaries
bagOfWordsMatrix = vectorizer.fit_transform(filteredOkCupidFrame["essay0"])
#get language frame
langFrame = pd.DataFrame(bagOfWordsMatrix.toarray(),
columns = vectorizer.get_feature_names())
#import linear model
import sklearn.linear_model as lm
#build model
initialLinearMod = lm.LinearRegression()
initialLinearMod.fit(langFrame,filteredOkCupidFrame["age"])
"""
Explanation: Table 2: Our Top Ten Most Frequent words along with their frequencies in our corpus.
We see that our summaries are generally focused around "love", and aspects of affection ("like","enjoy","good", etc).
Prediction
Let us try to use these summaries to predict the age of an individual. We will do this by regressing age on a bag-of-words representation of our summaries. We will use Scikit-Learn for building our bag-of-words per each observation.
End of explanation
"""
#get predictions
predictionVec = initialLinearMod.predict(langFrame)
"""
Explanation: Let us first study how well our model is performing on the dataset. We will use the error metric root mean-squared error (RMSE).
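RMSE is just the square root of the mean squared residual; a sketch with made-up ages:

```python
import numpy as np

ages_true = np.array([24.0, 30.0, 41.0])   # hypothetical values
ages_pred = np.array([26.0, 28.0, 38.0])
rmse = np.sqrt(np.mean((ages_true - ages_pred) ** 2))
# rmse is in years, the same units as age
```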
End of explanation
"""
|
JanetMatsen/bacteriopop | depreciated/dmd_demo.ipynb | apache-2.0 | import matplotlib as mpl
mpl.use('TkAgg')
import matplotlib.pyplot as plt
%matplotlib inline
import bacteriopop_utils
import feature_selection_utils
import load_data
loaded_data = data = load_data.load_data()
loaded_data.shape
"""
Explanation: for some reason Janet's virtualenv is much happier with this TkAgg thing set.
End of explanation
"""
loaded_data[loaded_data['phylum'].isnull()].head(3)
loaded_data.head()
"""
Explanation: Make sure none of the phyla are NA (checking the 160304 update to load_data.py).
End of explanation
"""
bacteriopop_utils.filter_by_abundance(dataframe=loaded_data, low= 0.6).head()
bacteriopop_utils.reduce_data(dataframe=loaded_data, min_abundance= 0.6,
phylo_column='genus', oxygen='high').head()
"""
Explanation: Test filter and reduce functions using a high threshold, which selects for genus==Methylobacter
End of explanation
"""
raw_dmd_data = bacteriopop_utils.reduce_data(
dataframe=loaded_data, min_abundance= 0.01,
phylo_column='genus', oxygen='Low')
"""
Explanation: Demo of DMD data prep
End of explanation
"""
data_dict = bacteriopop_utils.break_apart_experiments(raw_dmd_data)
data_dict.keys()
# Can't view generators very easily!!!
data_dict.itervalues()
# But we can make a list from them and grab the 0th item
first_df = list(data_dict.itervalues())[0]
first_df.head(3)
first_df[first_df['genus'] == 'other'].head()
first_df[first_df['genus'] != ''].pivot(index='genus', columns='week', values='abundance')
raw_dmd_data.columns
DMD_input_dict = \
bacteriopop_utils.prepare_DMD_matrices(raw_dmd_data,
groupby_level = "genus")
type(DMD_input_dict)
"""
Explanation: Errors are thrown by the functions below if you drop min_abundance any lower. I think it is hanging up on multiple "other" rows.
End of explanation
"""
DMD_input_dict[('Low', 1)]
DMD_input_dict[('Low', 1)].shape
DMD_input_dict[('Low', 1)].groupby('week')['abundance'].sum()
"""
Explanation: We can get each dataframe out like this:
End of explanation
"""
DMD_test_matrix = DMD_input_dict[('Low', 1)]
# Who is in there?
DMD_test_matrix.reset_index()['genus'].unique()
"""
Explanation: DMD
TODO: test DMD on this abundance matrix.
End of explanation
"""
# following example 1: https://pythonhosted.org/modred/tutorial_modaldecomp.html
import modred as MR
num_modes = 1
modes, eig_vals = MR.compute_POD_matrices_snaps_method(DMD_test_matrix, range(num_modes))
modes
eig_vals
"""
Explanation: I'm stuck at the installation of modred :(
End of explanation
"""
extracted_features = bacteriopop_utils.extract_features(
dataframe = loaded_data,
column_list = ['kingdom', 'phylum', 'class', 'order', 'family', 'genus', 'oxygen', 'abundance']
# default list was: ['kingdom', 'phylum', 'class', 'order', 'family', 'genus', 'length', 'abundance', 'project']
)
extracted_features.head()
extracted_features.shape
"""
Explanation: Feature extraction and PCA
End of explanation
"""
pca_results = feature_selection_utils.pca_bacteria(
data = extracted_features.head(100), n_components = 10)
pca_results.components_
"""
Explanation: Just do PCA on a tiny bit of the data as a demo
End of explanation
"""
feature_selection_utils.calculate_features_target_correlation(
data = extracted_features.head(100),
features = extracted_features.columns.tolist(),
target='abundance',
method="Pearson")
"""
Explanation: Do correlations for a tiny subset of the data.
End of explanation
"""
|
DJCordhose/ai | notebooks/sklearn/knn.ipynb | mit | import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
%pylab inline
import pandas as pd
print(pd.__version__)
"""
Explanation: ML using KNN
End of explanation
"""
df = pd.read_csv('./insurance-customers-300.csv', sep=';')
y=df['group']
df.drop('group', axis='columns', inplace=True)
X = df.values  # .as_matrix() was deprecated and later removed from pandas
df.describe()
"""
Explanation: First Step: Load Data and disassemble for our purposes
End of explanation
"""
# ignore this, it is just technical code
# should come from a lib, consider it to appear magically
# http://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
cmap_print = ListedColormap(['#AA8888', '#004000', '#FFFFDD'])
cmap_bold = ListedColormap(['#AA4444', '#006000', '#AAAA00'])
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#FFFFDD'])
font_size=25
def meshGrid(x_data, y_data):
h = 1 # step size in the mesh
x_min, x_max = x_data.min() - 1, x_data.max() + 1
y_min, y_max = y_data.min() - 1, y_data.max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
return (xx,yy)
def plotPrediction(clf, x_data, y_data, x_label, y_label, colors, title="", mesh=True, fname=None, print=False):
xx,yy = meshGrid(x_data, y_data)
plt.figure(figsize=(20,10))
if clf and mesh:
Z = clf.predict(np.c_[yy.ravel(), xx.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
if print:
plt.scatter(x_data, y_data, c=colors, cmap=cmap_print, s=200, marker='o', edgecolors='k')
else:
plt.scatter(x_data, y_data, c=colors, cmap=cmap_bold, s=80, marker='o', edgecolors='k')
plt.xlabel(x_label, fontsize=font_size)
plt.ylabel(y_label, fontsize=font_size)
plt.title(title, fontsize=font_size)
if fname:
plt.savefig(fname)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42, stratify=y)
X_train.shape, y_train.shape, X_test.shape, y_test.shape
X_train_kmh_age = X_train[:, :2]
X_test_kmh_age = X_test[:, :2]
X_train_2_dim = X_train_kmh_age
X_test_2_dim = X_test_kmh_age
from sklearn import neighbors
clf = neighbors.KNeighborsClassifier(1)
%time clf.fit(X_train_2_dim, y_train)
plotPrediction(clf, X_train_2_dim[:, 1], X_train_2_dim[:, 0],
'Age', 'Max Speed', y_train,
title="Train Data Max Speed vs Age with Classification")
"""
Explanation: Second Step: First version using KNN
End of explanation
"""
clf.score(X_train_2_dim, y_train)
"""
Explanation: Look how great it is doing!
End of explanation
"""
plotPrediction(clf, X_test_2_dim[:, 1], X_test_2_dim[:, 0],
'Age', 'Max Speed', y_test,
title="Test Data Max Speed vs Age with Prediction")
clf.score(X_test_2_dim, y_test)
"""
Explanation: But really?
End of explanation
"""
# http://scikit-learn.org/stable/modules/cross_validation.html
from sklearn.model_selection import cross_val_score
scores = cross_val_score(clf, X[:, :2], y, cv=10)
scores
# https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule
print("Accuracy: %0.2f (+/- %0.2f for 95 percent of runs)" % (scores.mean(), scores.std() * 2))
"""
Explanation: Cross Validation is a way to make the score more telling
we run the training and scoring many times (10 in our case)
each time we use a different part of the data for validation
this way we have many runs that take out a random factor
additionally we use all data for training
only works when training time is reasonably short
End of explanation
"""
|
ebellm/ztf_summerschool_2015 | notebooks/Period_Finding.ipynb | bsd-3-clause | # point to our previously-saved data
reference_catalog = '../data/PTF_Refims_Files/PTF_d022683_f02_c06_u000114210_p12_sexcat.ctlg'
outfile = reference_catalog.split('/')[-1].replace('ctlg','shlv')
"""
Explanation: Hands-On Exercise 3: Period Finding
One of the fundamental tasks of time-domain astronomy is determining if a source is periodic, and if so, measuring the period. Period measurements are a vital first step for more detailed scientific study, which may include source classification (e.g., RR Lyrae, W Uma), lightcurve modeling (binaries), or luminosity estimation (Cepheids).
Binary stars in particular have lightcurves which may show a wide variety of shapes, depending on the nature of the stars and the system geometry.
In this workbook we will develop a basic toolset for the generic problem of finding periodic sources.
by Eric Bellm (2014-2016)
Let's use the relative-photometry corrected light curves we built in Exercise 2. We'll use the utility function source_lightcurve to load the columns MJD, magnitude, and magnitude error. Note that we will use days as our time coordinate throughout the homework.
End of explanation
"""
ra_fav, dec_fav = (312.503802, -0.706603)
mjds, mags, magerrs = source_lightcurve('../data/'+outfile, ra_fav, dec_fav)
"""
Explanation: We'll start by loading the data from our favorite star, which has coordinates $\alpha_\mathrm{J2000}, \delta_\mathrm{J2000} = (312.503802, -0.706603)$.
End of explanation
"""
import astropy.constants as const
import astropy.units as u
(const.au / const.c).to(u.minute)
"""
Explanation: Barycentering
Our times are Modified Julian Dates measured at the Earth. We need to correct them for the Earth's motion around the Sun (this is called heliocentering or barycentering). The largest this timing error can be, if we do not make the correction, is about the light travel time across one AU. We can use astropy constants to calculate this easily:
End of explanation
"""
bjds = barycenter_times(mjds,ra_fav,dec_fav)
"""
Explanation: We have provided a script to barycenter the data--note that it assumes the data come from the P48. Use the bjds (barycentered Modified Julian Date) variable throughout the remainder of this notebook.
End of explanation
"""
# define plot function
def plot_data( # COMPLETE THIS LINE
plt.errorbar( # COMPLETE THIS LINE
fmt = '_', capsize=0)
plt.xlabel('Date (MJD)')
plt.ylabel('Magnitude')
plt.gca().invert_yaxis()
# run plot function
plot_data(bjds, mags, magerrs)
"""
Explanation: Optional exercise: plot a histogram of the time differences between the barycentered and non-barycentered data.
Exercise 1: Getting started plotting
Complete this function for plotting the lightcurve:
End of explanation
"""
# documentation for the astroML lomb_scargle function
help(lomb_scargle)
"""
Explanation: The Lomb Scargle Periodogram
The Lomb-Scargle Periodogram provides a method for searching for periodicities in time-series data. It is comparable to the discrete Fourier transform, but may be applied to irregularly sampled data.
Much of this presentation follows Ch. 10 of Ivezic et al..
We use the "generalized" LS version implemented in astroML rather than the "standard" version implemented in scipy: the generalized version accounts better for cases of poor sampling.
End of explanation
"""
freq_min = # COMPLETE
print('The minimum frequency our data is sensitive to is {:.3f} radian/days, corresponding to a period of {:.3f} days'.format(freq_min, 2*np.pi/freq_min)
"""
Explanation: Exercise 2: Determining the frequency grid
One of the challenges of using the LS periodogram is determining the appropriate frequency grid to search. We have to select the minimum and maximum frequencies as well as the bin size.
If we don't include the true frequency in our search range, we can't find the period!
If the bins are too coarse, true peaks may be lost. If the bins are too fine, the periodogram becomes very slow to compute.
The first question to ask is what range of frequencies our data is sensitive to.
Exercise 2.1
What is the smallest angular frequency $\omega_{\rm min}$ our data is sensitive to? (Hint: smallest frequency => largest time)
End of explanation
"""
freq_max = # COMPLETE
print('The maximum frequency our data is sensitive to is APPROXIMATELY {:.3f} radians/day, corresponding to a period of {:.3f} days'.format(freq_max, 2*np.pi/freq_max))
"""
Explanation: Exercise 2.2
Determining the highest frequency we are sensitive to turns out to be complicated.
if $\Delta t$ is the difference between consecutive observations,
$\pi$/median($\Delta t$) is a good starting point, although in practice we may be sensitive to frequencies even higher than $2 \pi$/min($\Delta t$) depending on the details of the sampling.
What is the largest angular frequency $\omega_{\rm max}$ our data is sensitive to?
End of explanation
"""
n_bins = # COMPLETE
print(n_bins)
"""
Explanation: Exercise 2.3
We need enough bins to resolve the periodogram peaks, which have frequency width $\Delta f \sim 2\pi/ (t_{\rm max} - t_{\rm min}) = \omega_{\rm min}$.
If we want to have 5 samples of $\Delta f$, how many bins will be in our periodogram? Is this computationally feasible?
End of explanation
"""
# define frequency function
def frequency_grid(times):
freq_min = # COMPLETE
freq_max = # COMPLETE
n_bins = # COMPLETE
print('Using {} bins'.format(n_bins))
return np.linspace(freq_min, freq_max, n_bins)
# run frequency function
omegas = frequency_grid(bjds)
"""
Explanation: Exercise 2.4
Let's wrap this work up in a convenience function that takes as input a list of observation times and returns a frequency grid with decent defaults.
End of explanation
"""
# provided alternate frequency function
def alt_frequency_grid(Pmin, Pmax, n_bins = 5000):
"""Generate an angular frequency grid between Pmin and Pmax (assumed to be in days)"""
freq_max = 2*np.pi / Pmin
freq_min = 2*np.pi / Pmax
return np.linspace(freq_min, freq_max, n_bins)
"""
Explanation: In some cases you'll want to generate the frequency grid by hand, either to extend to higher frequencies (shorter periods) than found by default, to avoid generating too many bins, or to get a more precise estimate of the period. In that case use the following code. We'll use a large fixed number of bins to smoothly sample the periodogram as we zoom in.
End of explanation
"""
# calculate and plot LS periodogram
P_LS = lomb_scargle( # COMPLETE
plt.plot(omegas, P_LS)
plt.xlabel('$\omega$')
plt.ylabel('$P_{LS}$')
# provided: define function to find best period
def LS_peak_to_period(omegas, P_LS):
"""find the highest peak in the LS periodogram and return the corresponding period."""
max_freq = omegas[np.argmax(P_LS)]
return 2*np.pi/max_freq
# run function to find best period
best_period = LS_peak_to_period(omegas, P_LS)
print("Best period: {} days".format(best_period))
"""
Explanation: Exercise 3: Computing the Periodogram
Calculate the LS periodiogram and plot the power.
End of explanation
"""
# define function to phase lightcurves
def phase(time, period, t0 = None):
""" Given an input array of times and a period, return the corresponding phase."""
if t0 is None:
t0 = time[0]
return # COMPLETE
"""
Explanation: Exercise 4: Phase Calculation
Complete this function that returns the phase of an observation (in the range 0-1) given its period. For simplicity set the zero of the phase to be the time of the initial observation.
Hint: Consider the python modulus operator, %.
Add a keyword that allows your function to have an optional user-settable time of zero phase.
End of explanation
"""
# define function to plot phased lc
def plot_phased_lc(mjds, mags, magerrs, period, t0=None):
phases = # COMPLETE
plt.errorbar( #COMPLETE
fmt = '_', capsize=0)
plt.xlabel('Phase')
plt.ylabel('Magnitude')
plt.gca().invert_yaxis()
# run function to plot phased lc
plot_phased_lc(bjds, mags, magerrs, best_period)
"""
Explanation: Exercise 5: Phase Plotting
Plot the phased lightcurve at the best-fit period.
End of explanation
"""
omegas = alt_frequency_grid( # COMPLETE
P_LS = lomb_scargle( # COMPLETE
plt.plot(omegas, P_LS)
plt.xlabel('$\omega$')
plt.ylabel('$P_{LS}$')
best_period = # COMPLETE
print("Best period: {} days".format(best_period))
plot_phased_lc(bjds, mags, magerrs, best_period)
"""
Explanation: How does that look? Do you think you are close to the right period?
Try re-running your analysis using the alt_frequency_grid command, searching a narrower period range around the best-fit period.
End of explanation
"""
D = lomb_scargle_bootstrap( # COMPLETE
sig99, sig95 = np.percentile( # COMPLETE
plt.plot(omegas, P_LS)
plt.plot([omegas[0],omegas[-1]], sig99*np.ones(2),'--')
plt.plot([omegas[0],omegas[-1]], sig95*np.ones(2),'--')
plt.xlabel('$\omega$')
plt.ylabel('$P_{LS}$')
"""
Explanation: Exercise 6: Calculating significance of the period detection
Real data may have aliases--frequency components that appear because of the sampling of the data, such as once per night. Bootstrap significance tests, which shuffle the data values around but keep the times the same, can help rule these out.
Calculate the chance probability of finding a LS peak higher than the observed value in random data observed at the specified intervals: use lomb_scargle_bootstrap and np.percentile to find the 95 and 99 percent significance levels and plot them over the LS power.
End of explanation
"""
import gatspy
ls = gatspy.periodic.LombScargleFast()
ls.optimizer.period_range = ( # COMPLETE
# we have to subtract the t0 time so the model plotting has the correct phase origin
ls.fit(bjds-bjds[0],mags,magerrs)
gatspy_period = ls. # COMPLETE
print(gatspy_period)
plot_phased_lc(bjds, mags, magerrs, gatspy_period)
p = np.linspace(0,gatspy_period,100)
plt.plot(p/gatspy_period,ls.predict(p,period=gatspy_period))
"""
Explanation: Exercise 7: Find periods of other sources
Now find the periods of these sources, plot their phased lightcurves, and evaluate the significance of the period you find:
312.066287628 -0.983790357518
311.967177518 -0.886275170839
312.263445107 -0.342008023626
312.050877142 -1.0632849268
312.293550866 -0.783896411315
Suggestion: wrap the code you used above in a function that takes ra & dec as input.
[Challenge] Exercise 8: gatspy
Try using the gatspy package to search for periods. It uses a slightly different interface but has several nice features, such as automatic zooming on candidate frequency peaks.
You'll need to read the online documentation or call help(gatspy.periodic.LombScargleFast()) to learn how to which commands to use.
End of explanation
"""
ss = gatspy.periodic.SuperSmoother(fit_period=True)
ss.optimizer.period_range = ( #COMPLETE
ss.fit( # COMPLETE
gatspy_period = ss. # COMPLETE
print(gatspy_period)
plot_phased_lc(bjds, mags, magerrs, gatspy_period)
p = np.linspace(0,gatspy_period,100)
plt.plot(p/gatspy_period,ss.predict(p,period=gatspy_period))
"""
Explanation: [Challenge] Exercise 9: Alternate Algorithms
Lomb-Scargle is equivalent to fitting a sinusoid to the phased data, but many kinds of variable stars do not have phased lightcurves that are well-represented by a sinusoid. Other algorithms, such as those that attempt to minimize the dispersion within phase bins over a grid of trial phases, may be more successful in such cases. See Graham et al (2013) for a review.
End of explanation
"""
from astroML.time_series import multiterm_periodogram
omegas = alt_frequency_grid(.2,1.2)
P_mt = multiterm_periodogram( #COMPLETE
plt.plot(omegas, P_mt)
plt.xlabel('$\omega$')
plt.ylabel('$P_{mt}$')
best_period = # COMPLETE
print("Best period: {} days".format(best_period))
plot_phased_lc(bjds, mags, magerrs, best_period)
ls = gatspy.periodic.LombScargle(Nterms=4)
ls.optimizer.period_range = ( # COMPLETE
ls.fit( # COMPLETE
gatspy_period = ls. # COMPLETE
print(gatspy_period)
plot_phased_lc(bjds, mags, magerrs, gatspy_period)
p = np.linspace(0,gatspy_period,100)
plt.plot(p/gatspy_period,ls.predict(p,period=gatspy_period))
"""
Explanation: [Challenge] Exercise 10: Multi-harmonic fitting
Both AstroML and gatspy include code for including multiple Fourier components in the fit, which can better fit lightcurves that don't have a simple sinusoidal shape (like RR Lyrae).
End of explanation
"""
# open the stored data
import shelve
import astropy
shelf = shelve.open('../data/'+outfile)
all_mags = shelf['mags']
all_mjds = shelf['mjds']
all_errs = shelf['magerrs']
all_coords = shelf['ref_coords']
shelf.close()
# loop over stars
variable_inds = []
best_periods = []
best_power = []
with astropy.utils.console.ProgressBar(all_mags.shape[0],ipython_widget=True) as bar:
for i in range(all_mags.shape[0]):
# make sure there's real data
wgood = (all_mags[i,:].mask == False)
n_obs = np.sum(wgood)
# if we don't have many observations, don't bother computing periods
if n_obs < 40:
continue # the "continue" instruction tells python to skip the rest of the loop for this element and continue with the next one
# COMPLETE: make a cut so you only compute periods on variable sources
if # source is not variable:
continue
variable_inds.append(i)
bjds = barycenter_times(all_mjds[wgood],all_coords[i].ra.degree,all_coords[i].dec.degree)
# COMPLETE: calculate best period
best_periods.append( # COMPLETE
best_power.append( # COMPLETE: add the LS power here
bar.update()
# COMPLETE: now find the most promising periods and plot them!
"""
Explanation: [Challenge] Exercise 11: Compute all periods
This is a big one: can you compute periods for all of the sources in our database that show evidence of variability? How will you assess variability? How can you tell which sources are likely to have good periods?
We're giving you less guidance here than before--see how you do!
End of explanation
"""
|
huangziwei/pyMF3 | pymf3/examples/01_using_nmf_to_factorize_face_images.ipynb | mit | import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
import sys
sys.path.append('/home/Repos/pyMF3/')
import pymf3
from pymf3.datasets import CBCL_faces
from matplotlib import rcParams
rcParams['font.family'] = 'DejaVu Sans'
faces = CBCL_faces.get_CBCL_faces(scale_images=True)
"""
Explanation: Using NMF to Factorize Face Images
In this notebook, we try to reproduce the result in Lee & Seung (1999) using the standard NMF algorithm.
End of explanation
"""
rand_imgs = np.random.randint(0, faces.shape[0], size=7)
fig, ax = plt.subplots(1, len(rand_imgs), figsize=(14, 2))
for i, img in enumerate(rand_imgs):
ax[i].imshow(faces[img].reshape(19,19))
ax[i].axes.get_xaxis().set_visible(False)
ax[i].axes.get_yaxis().set_visible(False)
"""
Explanation: The dataset we use here is the Face Data, the same as the one used in Lee & Seung (1999). The function CBCL_faces.get_CBCL_faces() will download and load the data into a 2D matrix in which each row holds all the pixels from one image. All 2429 face images in the training set have been read with PIL and converted into a NumPy array. Here are some examples:
End of explanation
"""
nmf = pymf3.NMF(data=faces, num_bases=49)
nmf.factorize(niter=12000, show_progress=False)
nmf.plot_modules(module_ndims=(19,19), num_per_row=7)
nmf.plot_cost()
"""
Explanation: In this example we didn't scale the greyscale intensities to mean and std 0.25, as Lee & Seung did in their paper. Here we only normalized them to [0,1].
After 12000 iterations, we get:
End of explanation
"""
|
ktmud/deep-learning | tv-script-generation/dlnd_tv_script_generation.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
return None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation points make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused with a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.3'), 'Please use TensorFlow version 1.3 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
return None, None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# TODO: Implement Function
return None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize the cell state using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
"""
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
"""
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# TODO: Implement Function
return None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
# TODO: Implement Function
return None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
"""
# Number of Epochs
num_epochs = None
# Batch Size
batch_size = None
# RNN Size
rnn_size = None
# Embedding Dimension Size
embed_dim = None
# Sequence Length
seq_length = None
# Learning Rate
learning_rate = None
# Show stats for every n number of batches
show_every_n_batches = None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
return None, None, None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
"""
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[0][dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
"""
|
csadorf/signac | doc/signac_104_Modifying_the_Data_Space.ipynb | bsd-3-clause | import numpy as np
def V_vdW(p, kT, N, a=0, b=0):
"""Solve the van der Waals equation for V."""
coeffs = [p, - (kT * N + p * N *b), a * N**2, - a * N**3 * b]
V = sorted(np.roots(coeffs))
return np.real(V).tolist()
"""
Explanation: 1.4 Modifying the Data Space
It is very common that we discover at a later stage that we need to revise our computational protocol.
In this case we need to carefully update existing job state points and the associated data.
Let's assume that we realize that the ideal gas law is not sufficiently exact for our needs, so we're going to use the van der Waals equation (vdW) for a more exact estimation of the volume for each state point
$\left(p + \frac{N^2 a}{V^2} \right) (V - Nb) = N kT$, where
$a$ and $b$ are two additional parameters.
For $a=b=0$ this equation is identical to the ideal gas equation.
We start by implementing a function to calculate the volume for a given statepoint.
End of explanation
"""
print(V_vdW(1.0, 1.0, 1000))
"""
Explanation: You will notice that this equation is a cubic polynomial in $V$ and therefore has up to three solutions instead of only one!
End of explanation
"""
import signac
project = signac.get_project('projects/tutorial')
for job in project:
if 'a' not in job.sp:
job.sp.a = 0
if 'b' not in job.sp:
job.sp.b = 0
"""
Explanation: That is because the vdW system has a critical point and up to three possible solutions.
These solutions correspond to a liquid, a gaseous and a meta-stable phase.
We want to make the old data compatible with the new protocol, which requires two modifications of the existing data space:
We need to add parameters $a$ and $b$ to each statepoint and set them to zero.
The former value V needs to be relabeled V_gas and we add a zero-value for V_liq.
We previously learned that we can use the Job.sp attribute interface to access individual state point values.
We can use the same interface to modify existing state point parameters.
End of explanation
"""
for job in project:
if 'V' in job.document:
job.document['V_liq'] = 0
job.document['V_gas'] = job.document.pop('V')
with open(job.fn('V.txt'), 'w') as file:
file.write('{},{}\n'.format(0, job.document['V_gas']))
"""
Explanation: Please checkout the section on State Point Modifications in the reference documentation for a detailed description on how to modify state points.
Next, we need to update the existing volume data.
We check whether the job document has a V value and replace it with V_liq and V_gas.
The V.txt files will be rewritten to contain two comma-separated values.
End of explanation
"""
for job in project:
print(job.statepoint(), job.document)
"""
Explanation: Let's verify our modifications!
End of explanation
"""
vdW = {
# Source: https://en.wikipedia.org/wiki/Van_der_Waals_constants_(data_page)
'ideal gas': {'a': 0, 'b': 0},
'argon':{'a': 1.355, 'b': 0.03201},
'water': {'a': 5.536, 'b': 0.03049},
}
def calc_volume(job):
V = V_vdW(** job.statepoint())
job.document['V_liq'] = min(V)
job.document['V_gas'] = max(V)
with open(job.fn('V.txt'), 'w') as file:
file.write('{},{}\n'.format(min(V), max(V)))
for fluid in vdW:
for p in np.linspace(0.1, 10.0, 10):
sp = {'N': 1000, 'p': p, 'kT': 1.0}
sp.update(vdW[fluid])
job = project.open_job(sp)
job.document['fluid'] = fluid
calc_volume(job)
"""
Explanation: Next, we add a few state points with known parameters.
End of explanation
"""
ps = set((job.statepoint()['p'] for job in project))
for fluid in sorted(vdW):
print(fluid)
for p in sorted(ps):
jobs = project.find_jobs({'p': p}, doc_filter={'fluid': fluid})
for job in jobs:
print(round(p, 2), round(job.document['V_liq'], 4), round(job.document['V_gas'], 2))
print()
"""
Explanation: The fluid label is stored in the job document as a hint about which parameters were used; however, it is deliberately not part of the state point, since our calculation is only based on the parameters N, kT, p, a, and b.
In general, all state point variables should be independent of each other.
Let's inspect the results:
End of explanation
"""
|
fastai/course-v3 | zh-nbs/Lesson2_download.ipynb | apache-2.0 | from fastai.vision import *
"""
Explanation: Practical Deep Learning for Coders, v3
Lesson 2_download
Creating your own dataset from Google Images
从Google Images创建你自己的数据集
作者: Francisco Ingham 和 Jeremy Howard. 灵感来源于Adrian Rosebrock
In this tutorial we will see how to easily create an image dataset through Google Images. Note: You will have to repeat these steps for any new category you want to Google (e.g. once for dogs and once for cats).<br>
在本教程中,我们将看到如何从Goolge Images中轻松地创建一个图片数据集。 注意:从Google搜集任何你想要的新品类,你都必须重复这些步骤(比如,狗的数据集,还有猫的数据集,你就得把这些步骤各执行一遍)。
End of explanation
"""
folder = 'black'
file = 'urls_black.csv'
folder = 'teddys'
file = 'urls_teddys.csv'
folder = 'grizzly'
file = 'urls_grizzly.csv'
"""
Explanation: Get a list of URLs 获取URL的列表
Search and scroll 搜索并翻看
Go to Google Images and search for the images you are interested in. The more specific you are in your Google Search, the better the results and the less manual pruning you will have to do.<br>
打开Google Images页面,搜索你感兴趣的图片。你在搜索框中输入的信息越精确,那么搜索的结果就越好,而需要你手动处理的工作就越少。
Scroll down until you've seen all the images you want to download, or until you see a button that says 'Show more results'. All the images you scrolled past are now available to download. To get more, click on the button, and continue scrolling. The maximum number of images Google Images shows is 700.<br>
往下翻页直到你看到所有你想下载的图片,或者直到你看到一个“显示更多结果”的按钮为止。你刚翻看过的所有图片都是可下载的。为了获得更多的图片,点击“显示更多结果”按钮,继续翻看。Goolge Images最多可以显示700张图片。
It is a good idea to put things you want to exclude into the search query, for instance if you are searching for the Eurasian wolf, "canis lupus lupus", it might be a good idea to exclude other variants:<br>
在搜索请求框中增加一些你想排除在外的信息是个好主意。比如,如果你要搜canis lupus lupus这一类欧亚混血狼,最好筛除掉别的种类(这样返回的结果才比较靠谱)
"canis lupus lupus" -dog -arctos -familiaris -baileyi -occidentalis
You can also limit your results to show only photos by clicking on Tools and selecting Photos from the Type dropdown.<br>
你也可以限制搜索的结果,让搜索结果只显示照片,通过点击工具从Type里选择照片进行下载。
Download into file 下载到文件中
Now you must run some Javascript code in your browser which will save the URLs of all the images you want for you dataset.<br>
现在你需要在浏览器中运行一些javascript代码,浏览器将保存所有你想要放入数据集的图片的URL地址。
Press <kbd>Ctrl</kbd><kbd>Shift</kbd><kbd>J</kbd> in Windows/Linux and <kbd>Cmd</kbd><kbd>Opt</kbd><kbd>J</kbd> in Mac, and a small window the javascript 'Console' will appear. That is where you will paste the JavaScript commands.<br>
(浏览器窗口下)windows/linux系统按<kbd>Ctrl</kbd><kbd>Shift</kbd><kbd>J</kbd>,Mac系统按 <kbd>Cmd</kbd><kbd>Opt</kbd><kbd>J</kbd>,就会弹出javascript的“控制台”面板,在这个面板中,你可以把相关的javascript命令粘贴进去。
You will need to get the urls of each of the images. Before running the following commands, you may want to disable ad-blocking add-ons (e.g. uBlock) in Chrome; otherwise the window.open() command doesn't work. Then you can run the following commands:<br>
你需要获得每个图片对应的url。在运行下面的代码之前,你可能需要在Chrome中禁用广告拦截插件,否则window.open()函数将不能工作。然后你就可以运行下面的代码:
javascript
urls = Array.from(document.querySelectorAll('.rg_di .rg_meta')).map(el=>JSON.parse(el.textContent).ou);
window.open('data:text/csv;charset=utf-8,' + escape(urls.join('\n')));
Create directory and upload urls file into your server
创建一个目录并将url文件上传到服务器上
Choose an appropriate name for your labeled images. You can run these steps multiple times to create different labels.<br>
为带标签的图片选择一个合适的名字,你可以多次执行下面的步骤来创建不同的标签。
End of explanation
"""
path = Path('data/bears')
dest = path/folder
dest.mkdir(parents=True, exist_ok=True)
path.ls()
"""
Explanation: You will need to run this cell once for each category.<br>
下面的单元格,每一个品种运行一次。
End of explanation
"""
classes = ['teddys','grizzly','black']
download_images(path/file, dest, max_pics=200)
# If you have problems download, try with `max_workers=0` to see exceptions:
download_images(path/file, dest, max_pics=20, max_workers=0)
"""
Explanation: Finally, upload your urls file. You just need to press 'Upload' in your working directory and select your file, then click 'Upload' for each of the displayed files.<br>
最后,上传你的url文件。你只需要在工作区点击“Upload”按钮,然后选择你要上传的文件,再点击“Upload”即可。
Download images 下载图片
Now you will need to download your images from their respective urls.<br>
现在,你要做的是从图片对应的url地址下载这些图片。
fast.ai has a function that allows you to do just that. You just have to specify the urls filename as well as the destination folder and this function will download and save all images that can be opened. If they have some problem in being opened, they will not be saved.<br>
fast.ai提供了一个函数来完成这个工作。你只需要指定url地址文件名和目标文件夹,这个函数就能自动下载和保存可打开的图片。如果图片本身无法打开的话,对应图片也不会被保存.
Let's download our images! Notice you can choose a maximum number of images to be downloaded. In this case we will not download all the urls.<br>
我们开始下载图片吧!注意你可以设定需要下载的最大图片数量,这样我们就不会下载所有url地址了。
You will need to run this line once for every category.<br>
下面这行代码,每一个品种运行一次。
End of explanation
"""
for c in classes:
print(c)
verify_images(path/c, delete=True, max_size=500)
"""
Explanation: Then we can remove any images that can't be opened: <br>
然后我们可以删除任何不能打开的图片:
End of explanation
"""
np.random.seed(42)
data = ImageDataBunch.from_folder(path, train=".", valid_pct=0.2,
ds_tfms=get_transforms(), size=224, num_workers=4).normalize(imagenet_stats)
# If you already cleaned your data, run this cell instead of the one before
# 如果你已经清洗过你的数据,直接运行这格代码而不是上面的
# np.random.seed(42)
# data = ImageDataBunch.from_csv(path, folder=".", valid_pct=0.2, csv_labels='cleaned.csv',
# ds_tfms=get_transforms(), size=224, num_workers=4).normalize(imagenet_stats)
"""
Explanation: View data 浏览数据
End of explanation
"""
data.classes
data.show_batch(rows=3, figsize=(7,8))
data.classes, data.c, len(data.train_ds), len(data.valid_ds)
"""
Explanation: Good! Let's take a look at some of our pictures then.<br>
好!我们浏览一些照片。
End of explanation
"""
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(4)
learn.save('stage-1')
learn.unfreeze()
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(2, max_lr=slice(3e-5,3e-4))
learn.save('stage-2')
"""
Explanation: Train model 训练模型
End of explanation
"""
learn.load('stage-2');
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
"""
Explanation: Interpretation 结果解读
End of explanation
"""
from fastai.widgets import *
"""
Explanation: Cleaning Up 清理
Some of our top losses aren't due to bad performance by our model. There are images in our data set that shouldn't be.<br>
某些最大误差,不是由于模型的性能差导致的,而是由于数据集中的有些图片本身存在问题才导致的。
Using the ImageCleaner widget from fastai.widgets we can prune our top losses, removing photos that don't belong.<br>
从fastai.widgets库中导入并使用ImageCleaner小工具,我们就可以剔除那些归类错误的图片,从而减少预测失误。
End of explanation
"""
db = (ImageList.from_folder(path)
.no_split()
.label_from_folder()
.transform(get_transforms(), size=224)
.databunch()
)
# If you already cleaned your data using indexes from `from_toplosses`,
# 如果你已经从`from_toplosses`使用indexes清理了你的数据
# run this cell instead of the one before to proceed with removing duplicates.
# 运行这个单元格里面的代码(而非上面单元格的内容)以便继续删除重复项
# Otherwise all the results of the previous step would be overwritten by
# 否则前一个步骤中的结果都会被覆盖
# the new run of `ImageCleaner`.
# 下面就是要运行的`ImageCleaner`代码,请把下面的注释去掉开始运行
# db = (ImageList.from_csv(path, 'cleaned.csv', folder='.')
# .no_split()
# .label_from_df()
# .transform(get_transforms(), size=224)
# .databunch()
# )
"""
Explanation: First we need to get the file paths from our top_losses. We can do this with .from_toplosses. We then feed the top losses indexes and corresponding dataset to ImageCleaner.<br>
首先,我们可以借助.from_toplosses,从top_losses中获取我们需要的文件路径。随后喂给ImageCleaner误差高的索引以及对应的数据集参数。
Notice that the widget will not delete images directly from disk but it will create a new csv file cleaned.csv from where you can create a new ImageDataBunch with the corrected labels to continue training your model.<br>
需要注意的是,这些小工具本身并不会直接从磁盘删除图片,它会创建一个新的csv文件cleaned.csv,通过这个文件,你可以新创建一个包含准确标签信息的ImageDataBunch(图片数据堆),并继续训练你的模型。
In order to clean the entire set of images, we need to create a new dataset without the split. The video lecture demostrated the use of the ds_type param which no longer has any effect. See the thread for more details.<br>
为了清空整个图片集,我们需要创建一个新的未经分拆的数据集。视频课程里演示的ds_type 参数的用法已经不再有效。参照 the thread 来获取更多细节。
End of explanation
"""
learn_cln = cnn_learner(db, models.resnet34, metrics=error_rate)
learn_cln.load('stage-2');
ds, idxs = DatasetFormatter().from_toplosses(learn_cln)
"""
Explanation: Then we create a new learner to use our new databunch with all the images.<br>
接下来,我们要创建一个新的学习器来使用包含全部图片的新数据堆。
End of explanation
"""
ImageCleaner(ds, idxs, path)
"""
Explanation: Make sure you're running this notebook in Jupyter Notebook, not Jupyter Lab. That is accessible via /tree, not /lab. Running the ImageCleaner widget in Jupyter Lab is not currently supported.<br>
确保你在Jupyter Notebook环境下运行这个notebook,而不是在Jupyter Lab中运行。我们可以通过/tree来访问(notebook),而不是/lab。目前还不支持在Jupyter Lab中运行ImageCleaner小工具。<br>
End of explanation
"""
ds, idxs = DatasetFormatter().from_similars(learn_cln)
ImageCleaner(ds, idxs, path, duplicates=True)
"""
Explanation: Flag photos for deletion by clicking 'Delete'. Then click 'Next Batch' to delete flagged photos and keep the rest in that row. ImageCleaner will show you a new row of images until there are no more to show. In this case, the widget will show you images until there are none left from top_losses.ImageCleaner(ds, idxs).<br>
点击“Delete”标记待删除的照片,然后再点击“Next Batch”来删除已标记的照片,同时保持其他图片仍在原来的位置。ImageCleaner将显示一行新的图片,直到没有更多的图片可以展示。在这种情况下,小工具程序会为你展示图片,直到从top_losses.ImageCleaner(ds, idxs)没有更多图片输出为止。
You can also find duplicates in your dataset and delete them! To do this, you need to run .from_similars to get the potential duplicates' ids and then run ImageCleaner with duplicates=True. The API works in a similar way as with misclassified images: just choose the ones you want to delete and click 'Next Batch' until there are no more images left.<br>
你会发现在你的数据集中存在重复图片,一定要删除他们!为了做到这一点,你需要运行.from_similars来获取有潜在重复可能的图片的id,然后运行ImageCleaner并使用duplicate=True作为参数。API的工作方式和(处理)错误分类的图片相类似:你只要选中那些你想删除的图片,然后点击'Next Batch'直到没有更多的图片遗留为止。
Make sure to recreate the databunch and learn_cln from the cleaned.csv file. Otherwise the file would be overwritten from scratch, loosing all the results from cleaning the data from toplosses.<br>
确保你从cleaned.csv文件中重新创建了数据堆和learn_cln,否则文件会被完全覆盖,你将丢失所有从失误排行里清洗数据后的结果。
End of explanation
"""
learn.export()
"""
Explanation: Remember to recreate your ImageDataBunch from your cleaned.csv to include the changes you made in your data!<br>
记住从你的cleaned.csv中重新创建ImageDatabunch,以便包含你对数据的所有变更!
Putting your model in production 部署模型
First thing first, let's export the content of our Learner object for production:<br>
首先,导出我们训练好的Learner对象内容,为部署做好准备:
End of explanation
"""
defaults.device = torch.device('cpu')
img = open_image(path/'black'/'00000021.jpg')
img
"""
Explanation: This will create a file named 'export.pkl' in the directory where we were working that contains everything we need to deploy our model (the model, the weights but also some metadata like the classes or the transforms/normalization used).<br>
这个命令会在我们处理模型的目录里创建名为export.pkl的文件,该文件中包含了用于部署模型的所有信息(模型、权重,以及一些元数据,如一些类或用到的变换/归一化处理等)。<br>
You probably want to use CPU for inference, except at massive scale (and you almost certainly don't need to train in real-time). If you don't have a GPU that happens automatically. You can test your model on CPU like so:<br>
你可能想用CPU来进行推断,除了大规模的(几乎可以肯定你不需要实时训练模型),(所以)如果你没有GPU资源,你也可以使用CPU来对模型做简单的测试:
End of explanation
"""
learn = load_learner(path)
pred_class,pred_idx,outputs = learn.predict(img)
pred_class
"""
Explanation: We create our Learner in the production environment like this; just make sure that path contains the file 'export.pkl' from before.<br>
我们在这样的生产环境下创建学习器,只需确保path参数包含了前面生成好的"export.pkl"文件。
End of explanation
"""
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(1, max_lr=0.5)
"""
Explanation: So you might create a route something like this (thanks to Simon Willison for the structure of this code):<br>
你可能需要像下面的代码这样,创建一个路径, (谢谢Simon Willison提供了这些代码的架构):
python
@app.route("/classify-url", methods=["GET"])
async def classify_url(request):
bytes = await get_bytes(request.query_params["url"])
img = open_image(BytesIO(bytes))
_,_,losses = learner.predict(img)
return JSONResponse({
"predictions": sorted(
zip(cat_learner.data.classes, map(float, losses)),
key=lambda p: p[1],
reverse=True
)
})
(This example is for the Starlette web app toolkit.)<br>
(这个例子适用于 Starlette的web app工具包)
Things that can go wrong 可能出错的地方
Most of the time things will train fine with the defaults
大多数时候使用默认参数就能训练出好模型
There's not much you really need to tune (despite what you've heard!)
没有太多需要你去调整的(尽管你可能听到过一些)
Most likely are
可能就是(下面的参数)
Learning rate 学习率
Number of epochs epochs的数目
Learning rate (LR) too high 学习率(LR)太高
End of explanation
"""
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
"""
Explanation: Learning rate (LR) too low 学习率(LR)太低
End of explanation
"""
learn.fit_one_cycle(5, max_lr=1e-5)
learn.recorder.plot_losses()
"""
Explanation: Previously we had this result:<br>
前面的代码运行后,我们得到如下结果:
Total time: 00:57
epoch train_loss valid_loss error_rate
1 1.030236 0.179226 0.028369 (00:14)
2 0.561508 0.055464 0.014184 (00:13)
3 0.396103 0.053801 0.014184 (00:13)
4 0.316883 0.050197 0.021277 (00:15)
End of explanation
"""
learn = cnn_learner(data, models.resnet34, metrics=error_rate, pretrained=False)
learn.fit_one_cycle(1)
"""
Explanation: As well as taking a really long time, it's getting too many looks at each image, so may overfit.<br>
不仅运行耗时过长,而且模型对每一个图片都太过注重细节,因此可能过拟合。
Too few epochs epochs过少
End of explanation
"""
np.random.seed(42)
data = ImageDataBunch.from_folder(path, train=".", valid_pct=0.9, bs=32,
ds_tfms=get_transforms(do_flip=False, max_rotate=0, max_zoom=1, max_lighting=0, max_warp=0
),size=224, num_workers=4).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet50, metrics=error_rate, ps=0, wd=0)
learn.unfreeze()
learn.fit_one_cycle(40, slice(1e-6,1e-4))
"""
Explanation: Too many epochs epochs过多
End of explanation
"""
|
analysiscenter/dataset | examples/experiments/stochastic_depth/stochastic_depth.ipynb | apache-2.0 | import sys
import matplotlib.pyplot as plt
from tqdm import tqdm_notebook as tqn
%matplotlib inline
sys.path.append('../../..')
sys.path.append('../../utils')
import utils
from resnet_with_stochastic_depth import StochasticResNet
from batchflow import B,V,F
from batchflow.opensets import MNIST
from batchflow.models.tf import ResNet50
"""
Explanation: Stochastic depth
Dropout proved to be a working tool that improves the stability of a neural network. Essentially, dropout shuts down some neurons of a specific layer. In the article "Deep Networks with Stochastic Depth", Gao Huang, Yu Sun, Zhuang Liu et al. went further and attempted to shut down whole blocks of layers.
In this notebook, we will investigate whether stochastic depth improves the accuracy of neural networks.
Pay attention to the file if you want to know how the Stochastic ResNet is implemented.
End of explanation
"""
dset = MNIST()
"""
Explanation: In our experiments we will work with the MNIST dataset
End of explanation
"""
ResNet_config = {
'inputs': {'images': {'shape': (28, 28, 1)},
'labels': {'classes': (10),
'transform': 'ohe',
'dtype': 'int64',
'name': 'targets'}},
'input_block/inputs': 'images',
'loss': 'softmax_cross_entropy',
'optimizer': 'Adam',
'output': dict(ops=['accuracy'])
}
Stochastic_config = {**ResNet_config}
"""
Explanation: Firstly, let us define the shape of inputs of our model, loss function and an optimizer:
End of explanation
"""
res_train_ppl = (dset.train.p
.init_model('dynamic',
ResNet50,
'resnet',
config=ResNet_config)
.train_model('resnet',
feed_dict={'images': B('images'),
'labels': B('labels')}))
res_test_ppl = (dset.test.p
.init_variable('resacc', init_on_each_run=list)
.import_model('resnet', res_train_ppl)
.predict_model('resnet',
fetches='output_accuracy',
feed_dict={'images': B('images'),
'labels': B('labels')},
save_to=V('resacc'),
mode='a'))
"""
Explanation: Secondly, we create pipelines to train and test a plain ResNet50 model
End of explanation
"""
stochastic_train_ppl = (dset.train.p
.init_model('dynamic',
StochasticResNet,
'stochastic',
config=Stochastic_config)
.init_variable('stochasticacc', init_on_each_run=list)
.train_model('stochastic',
feed_dict={'images': B('images'),
'labels': B('labels')}))
stochastic_test_ppl = (dset.test.p
.init_variable('stochasticacc', init_on_each_run=list)
.import_model('stochastic', stochastic_train_ppl)
.predict_model('stochastic',
fetches='output_accuracy',
feed_dict={'images': B('images'),
'labels': B('labels')},
save_to=V('stochasticacc'),
mode='a'))
"""
Explanation: We do the same for the Stochastic ResNet model
End of explanation
"""
for i in tqn(range(1000)):
res_train_ppl.next_batch(400, n_epochs=None, shuffle=True)
res_test_ppl.next_batch(400, n_epochs=None, shuffle=True)
stochastic_train_ppl.next_batch(400, n_epochs=None, shuffle=True)
stochastic_test_ppl.next_batch(400, n_epochs=None, shuffle=True)
"""
Explanation: Let's train our models
End of explanation
"""
resnet_loss = res_test_ppl.get_variable('resacc')
stochastic_loss = stochastic_test_ppl.get_variable('stochasticacc')
utils.draw(resnet_loss, 'ResNet', stochastic_loss, 'Stochastic', window=20, type_data='accuracy')
"""
Explanation: Show test accuracy for all iterations
End of explanation
"""
|
UCSBarchlab/PyRTL | ipynb-examples/example8-verilog.ipynb | bsd-3-clause | import random
import io
import pyrtl
pyrtl.reset_working_block()
"""
Explanation: Example 8: Interfacing with Verilog.
While there is much more about PyRTL design to discuss, at some point somebody
might ask you to do something with your code other than have it print
pretty things out to the terminal. We provide import from and export to
Verilog of designs, export of waveforms to VCD, and a set of transforms
that make doing netlist-level transforms and analysis directly in pyrtl easy.
End of explanation
"""
full_adder_blif = """
# Generated by Yosys 0.3.0+ (git sha1 7e758d5, clang 3.4-1ubuntu3 -fPIC -Os)
.model full_adder
.inputs x y cin
.outputs sum cout
.names $false
.names $true
1
.names y $not$FA.v:12$3_Y
0 1
.names x $not$FA.v:11$1_Y
0 1
.names cin $not$FA.v:15$6_Y
0 1
.names ind3 ind4 sum
1- 1
-1 1
.names $not$FA.v:15$6_Y ind2 ind3
11 1
.names x $not$FA.v:12$3_Y ind1
11 1
.names ind2 $not$FA.v:16$8_Y
0 1
.names cin $not$FA.v:16$8_Y ind4
11 1
.names x y $and$FA.v:19$11_Y
11 1
.names ind0 ind1 ind2
1- 1
-1 1
.names cin ind2 $and$FA.v:19$12_Y
11 1
.names $and$FA.v:19$11_Y $and$FA.v:19$12_Y cout
1- 1
-1 1
.names $not$FA.v:11$1_Y y ind0
11 1
.end
"""
pyrtl.input_from_blif(full_adder_blif)
# have to find the actual wire vectors generated from the names in the blif file
x, y, cin = [pyrtl.working_block().get_wirevector_by_name(s) for s in ['x', 'y', 'cin']]
io_vectors = pyrtl.working_block().wirevector_subset((pyrtl.Input, pyrtl.Output))
# we are only going to trace the input and output vectors for clarity
sim_trace = pyrtl.SimulationTrace(wires_to_track=io_vectors)
"""
Explanation: Importing From Verilog
Sometimes it is useful to pull in components written in Verilog to be used
as subcomponents of PyRTL designs or to be subject to analysis written over
the PyRTL core. One standard format supported by PyRTL is "blif" format:
https://www.ece.cmu.edu/~ee760/760docs/blif.pdf
Many tools support outputting hardware designs to this format, including the
free open source project "Yosys". Blif files can then be imported either
as a string or directly from a file name by the function input_from_blif.
Here is a simple example of a 1 bit full adder imported and then simulated
from this blif format.
End of explanation
"""
sim = pyrtl.Simulation(tracer=sim_trace)
for i in range(15):
# here we actually generate random booleans for the inputs
sim.step({
'x': random.choice([0, 1]),
'y': random.choice([0, 1]),
'cin': random.choice([0, 1])
})
sim_trace.render_trace(symbol_len=5, segment_size=5)
"""
Explanation: Now simulate the logic with some random inputs
End of explanation
"""
pyrtl.reset_working_block()
zero = pyrtl.Input(1, 'zero')
counter_output = pyrtl.Output(3, 'counter_output')
counter = pyrtl.Register(3, 'counter')
counter.next <<= pyrtl.mux(zero, counter + 1, 0)
counter_output <<= counter
"""
Explanation: Exporting to Verilog
However, not only do we want to have a method to import from Verilog, we also
want a way to export it back out to Verilog as well. To demonstrate PyRTL's
ability to export in Verilog, we will create a sample 3-bit counter. However
unlike the example in example2, we extend it to be synchronously resetting.
End of explanation
"""
print("--- PyRTL Representation ---")
print(pyrtl.working_block())
print()
print("--- Verilog for the Counter ---")
with io.StringIO() as vfile:
pyrtl.OutputToVerilog(vfile)
print(vfile.getvalue())
print("--- Simulation Results ---")
sim_trace = pyrtl.SimulationTrace([counter_output, zero])
sim = pyrtl.Simulation(tracer=sim_trace)
for cycle in range(15):
sim.step({'zero': random.choice([0, 0, 0, 1])})
sim_trace.render_trace()
"""
Explanation: The counter gets 0 in the next cycle if the "zero" signal goes high, otherwise just
counter + 1. Note that both "0" and "1" are bit extended to the proper length and
here we are making use of that native add operation.
Let's dump this bad boy out
to a Verilog file and see what it looks like (here we are using StringIO just to
print it to a string for demo purposes, most likely you will want to pass a normal
open file).
End of explanation
"""
print("--- Verilog for the TestBench ---")
with io.StringIO() as tbfile:
pyrtl.output_verilog_testbench(dest_file=tbfile, simulation_trace=sim_trace)
print(tbfile.getvalue())
"""
Explanation: We already did the "hard" work of generating a test input for this simulation so
we might want to reuse that work when we take this design through a verilog toolchain.
The function output_verilog_testbench grabs the inputs used in the simulation trace
and sets them up in a standard verilog testbench.
End of explanation
"""
print("--- Optimized Single-bit Verilog for the Counter ---")
pyrtl.synthesize()
pyrtl.optimize()
with io.StringIO() as vfile:
pyrtl.OutputToVerilog(vfile)
print(vfile.getvalue())
"""
Explanation: Now let's talk about transformations of the hardware block.
Many times when you are
doing some hardware-level analysis you might wish to ignore higher-level things like
multi-bit wirevectors, adds, concatenation, etc. and just think about wires and basic
gates. PyRTL supports "lowering" of designs into this more restricted set of functionality
through the function "synthesize". Once we lower a design to this form we can then apply
basic optimizations like constant propagation and dead wire elimination as well. By
printing it out to Verilog we can see exactly how the design changed.
End of explanation
"""
|
diging/tethne-notebooks | 2. Working with data from JSTOR Data-for-Research.ipynb | gpl-3.0 | from tethne.readers import dfr
"""
Explanation: Introduction to Tethne: Working with data from JSTOR Data-for-Research
Now that we have the basics down, in this notebook we'll begin working with data from the JSTOR Data-for-Research (DfR) portal.
The JSTOR DfR portal gives researchers access to
bibliographic data and N-grams for the entire JSTOR database.
Tethne can use DfR data to generate coauthorship networks, and to improve
metadata for Web of Science records. Tethne is also able to use
N-gram counts to add information to networks, and can interface with MALLET to perform LDA topic modeling.
Methods in Digital & Computational Humanities
This notebook is part of a cluster of learning resources developed by the Laubichler Lab and the Digital Innovation Group at Arizona State University as part of an initiative for digital and computational humanities (d+cH). For more information, see our evolving online methods course at https://diging.atlassian.net/wiki/display/DCH.
Getting Help
Development of the Tethne project is led by Erick Peirson. To get help, first check our issue tracking system on GitHub. There, you can search for questions and problems reported by other users, or ask a question of your own. You can also reach Erick via e-mail at erick.peirson@asu.edu.
Getting bibliographic data from JSTOR Data-for-Research
For the purpose of this tutorial, you can use the sample dataset from https://www.dropbox.com/s/q2jy87pmy9r6bsa/tethne_workshop_data.zip?dl=0.
Access the DfR portal at http://dfr.jstor.org/ If you don't already have an
account, you will need to create a new account.
After you've logged in, perform a search using whatever criteria you please.
When you have achieved the result that you desire, create a new dataset request.
Under the "Dataset Request" menu in the upper-right corner of the page, click
"Submit new request".
On the Download Options page, select your desired Data Type. If you do
not intend to make use of the contents of the papers themselves, then "Citations
Only" is sufficient. Otherwise, choose word counts, bigrams, etc.
Output Format should be set to XML.
Give your request a title, and set the maximum number of articles. Note that
the maximum documents allowed per request is 1,000. Setting Maximum Articles
to a value less than the number of search results will yield a random sample of
your results.
Your request should now appear in your list of Data Requests. When your
request is ready (hours to days later), you will receive an e-mail with a
download link. When downloading from the Data Requests list, be sure to use
the link in the full dataset column.
When your dataset download is complete, unzip it. The contents should look
something like those shown below.
citations.XML contains bibliographic data in XML format. The bigrams,
trigrams, wordcounts folders contain N-gram counts for each document.
If you were to open one of the XML files in the wordcounts folder, say, you would see some XML that looks like this:
```
<?xml version="1.0" encoding="UTF-8"?>
<article id="10.2307/4330482" >
<wordcount weight="21" > of </wordcount>
<wordcount weight="16" > the </wordcount>
<wordcount weight="10" > university </wordcount>
<wordcount weight="10" > a </wordcount>
<wordcount weight="9" > s </wordcount>
<wordcount weight="9" > d </wordcount>
<wordcount weight="9" > harvard </wordcount>
<wordcount weight="8" > m </wordcount>
<wordcount weight="7" > and </wordcount>
<wordcount weight="6" > u </wordcount>
<wordcount weight="6" > press </wordcount>
<wordcount weight="5" > cambridge </wordcount>
<wordcount weight="5" > massachusetts </wordcount>
<wordcount weight="5" > journal </wordcount>
<wordcount weight="4" > by </wordcount>
...
<wordcount weight="1" > stephen </wordcount>
<wordcount weight="1" > liver </wordcount>
<wordcount weight="1" > committee </wordcount>
<wordcount weight="1" > school </wordcount>
<wordcount weight="1" > lewontin </wordcount>
<wordcount weight="1" > canguilhem </wordcount>
<wordcount weight="1" > assistant </wordcount>
<wordcount weight="1" > jay </wordcount>
<wordcount weight="1" > state </wordcount>
<wordcount weight="1" > morgan </wordcount>
<wordcount weight="1" > advertising </wordcount>
<wordcount weight="1" > animal </wordcount>
<wordcount weight="1" > is </wordcount>
<wordcount weight="1" > species </wordcount>
<wordcount weight="1" > claude </wordcount>
<wordcount weight="1" > review </wordcount>
<wordcount weight="1" > hunt </wordcount>
<wordcount weight="1" > founder </wordcount>
</article>
```
Each word is represented by a <wordcount></wordcount> tag. The "weight" attribute gives the number of times that the word occurs in the document, and the word itself is between the tags. We'll come back to this in just a moment.
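If you ever need to read one of these wordcount files without Tethne, the Python standard library is enough. A minimal sketch, using the tag and attribute names from the snippet above (the inline XML string here is a trimmed-down, hypothetical example):

```python
import xml.etree.ElementTree as ET

# A trimmed-down, hypothetical version of the wordcount XML shown above.
xml_snippet = ('<article id="10.2307/4330482">'
               '<wordcount weight="21"> of </wordcount>'
               '<wordcount weight="10"> university </wordcount>'
               '</article>')

root = ET.fromstring(xml_snippet)
counts = {wc.text.strip(): int(wc.get('weight'))
          for wc in root.findall('wordcount')}
print(counts)  # {'of': 21, 'university': 10}
```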
Parsing DfR datasets
Just as for WoS data, there is a module in tethne.readers for working with DfR data. We can import it with:
End of explanation
"""
dfr_corpus = dfr.read('/Users/erickpeirson/Dropbox/HSS ThatCamp Workshop/sample_data/DfR')
"""
Explanation: Once again, read() accepts a string containing a path to either a single DfR dataset, or a directory containing several. Here, "DfR dataset" refers to the folder containing the file "citations.xml", and the contents of that folder.
This will take considerably more time than loading a WoS dataset. The reason is that Tethne automatically detects and parses all of the wordcount data.
End of explanation
"""
from tethne.readers import wos
wos_corpus = wos.read('/Users/erickpeirson/Dropbox/HSS ThatCamp Workshop/sample_data/wos')
"""
Explanation: Combining DfR and WoS data
We can combine our datasets using the merge() function. First, we load our WoS data in a separate Corpus:
End of explanation
"""
len(dfr_corpus), len(wos_corpus)
"""
Explanation: Both of these datasets are for the Journal of the History of Biology. But note that the WoS and DfR corpora have different numbers of Papers:
End of explanation
"""
from tethne.readers import merge
"""
Explanation: Then import merge() from tethne.readers:
End of explanation
"""
corpus = merge(dfr_corpus, wos_corpus)
"""
Explanation: We then create a new Corpus by passing both Corpus objects to merge(). If there is conflicting information in the two corpora, the first Corpus gets priority.
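The priority rule behaves roughly like a dictionary merge in which the first corpus wins any conflict. A loose analogy in plain Python (toy records, not Tethne's actual merge logic):

```python
# Toy stand-ins for two corpora, keyed by paper ID (hypothetical records).
dfr_records = {'p1': {'title': 'A'}, 'p2': {'title': 'B'}}
wos_records = {'p2': {'title': 'B (WoS)'}, 'p3': {'title': 'C'}}

merged = dict(wos_records)    # start from the second corpus...
merged.update(dfr_records)    # ...and let the first corpus win conflicts
print(sorted(merged))         # ['p1', 'p2', 'p3']
print(merged['p2']['title'])  # B  (the first corpus took priority)
```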
End of explanation
"""
len(corpus)
"""
Explanation: merge() has combined data where possible, and discarded any duplicates in the original datasets.
End of explanation
"""
corpus.features
"""
Explanation: FeatureSets
Our wordcount data are represented by a FeatureSet. A FeatureSet is a description of how certain sets of elements are distributed across a Corpus. This is kind of like an inversion of an index. For example, we might be interested in which words (elements) are found in which Papers. We can think of authors as a FeatureSet, too.
All of the available FeatureSets are stored in the features attribute (a dictionary) of our Corpus. We can see which FeatureSets are available by inspecting it:
End of explanation
"""
corpus.features['wordcounts'].features.items()[0] # Just show data for the first Paper.
"""
Explanation: Note that citations and authors are also FeatureSets. In fact, the majority of network-building functions in Tethne operate on FeatureSets -- including the coauthors() and bibliographic_coupling() functions that we used in the WoS notebook.
Each FeatureSet has several attributes. The features attribute contains the distribution data itself. These data themselves are (element, value) tuples. In this case, the elements are words, and the values are wordcounts.
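As a mental model, you can picture the features attribute as a mapping from papers to (element, value) tuples. A toy illustration (hypothetical data, not an actual Tethne structure):

```python
# Hypothetical miniature of the idea: paper ID -> list of (element, value) tuples.
features = {
    'paper1': [('evolution', 4), ('biology', 2)],
    'paper2': [('biology', 7)],
}

# How many papers contain the word 'biology'?
doc_count = sum(1 for feats in features.values()
                if any(word == 'biology' for word, _ in feats))
print(doc_count)  # 2
```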
End of explanation
"""
print 'There are %i words in the wordcounts featureset' % len(corpus.features['wordcounts'].index)
"""
Explanation: The index contains our "vocabulary":
End of explanation
"""
import matplotlib.pyplot as plt  # needed if not already imported in this session

plt.figure(figsize=(10, 5))
plt.bar(*corpus.feature_distribution('wordcounts', 'evolutionary')) # <-- The action.
plt.ylabel('Frequency of the word ``evolutionary`` in this Corpus')
plt.xlabel('Publication Date')
plt.show()
"""
Explanation: We can use the feature_distribution() method of our Corpus to look at the distribution of words over time. In the example below I used Matplotlib to visualize the distribution.
End of explanation
"""
plt.figure(figsize=(10, 5))
plt.bar(*corpus.feature_distribution('wordcounts', 'evolutionary', mode='documentCounts')) # <-- The action.
plt.ylabel('Documents containing ``evolutionary``')
plt.xlabel('Publication Date')
plt.show()
"""
Explanation: If we add the argument mode='documentCounts', we get the number of documents in which 'evolutionary' occurs.
End of explanation
"""
plt.figure(figsize=(10, 5))
plt.bar(*corpus.distribution()) # <-- The action.
plt.ylabel('Number of Documents')
plt.xlabel('Publication Date')
plt.show()
"""
Explanation: Note that we can look at how the documents themselves are distributed using the distribution() method.
End of explanation
"""
dates, N_evolution = corpus.feature_distribution('wordcounts', 'evolutionary', mode='documentCounts')
dates, N = corpus.distribution()
normalized_frequency = [float(f) / N[i] for i, f in enumerate(N_evolution)]  # float() avoids Python 2 integer division
plt.figure(figsize=(10, 5))
plt.bar(dates, normalized_frequency) # <-- The action.
plt.ylabel('Proportion of documents containing ``evolutionary``')
plt.xlabel('Publication Date')
plt.show()
"""
Explanation: So, putting these together, we can normalize our feature_distribution() data to get a sense of the relative use of the word 'evolution'.
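The normalization itself is just an element-wise division of the two distributions. With toy numbers:

```python
dates = [1970, 1971, 1972]  # hypothetical years
n_evolution = [2, 5, 3]     # documents containing the word (hypothetical)
n_total = [10, 20, 12]      # total documents per year (hypothetical)

normalized = [float(f) / n_total[i] for i, f in enumerate(n_evolution)]
print(normalized)  # [0.2, 0.25, 0.25]
```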
End of explanation
"""
from nltk.corpus import stopwords
stoplist = stopwords.words()
"""
Explanation: Topic Modeling with DfR wordcounts
Latent Dirichlet Allocation is a popular approach to discovering latent "topics" in large corpora. Many digital humanists use a software package called MALLET to fit LDA to text data. Tethne uses MALLET to fit LDA topic models.
Before we use LDA, however, we need to do some preprocessing. "Preprocessing" refers to anything that we do to filter or transform our FeatureSet prior to analysis.
Pre-processing
Two important preprocessing steps are:
1. Removing "stopwords" -- common words like "the", "and", "but", "for", that don't yield much insight into the contents of documents.
2. Removing words that are too common or too rare. These include typos or OCR artifacts.
We can do both of these by using the transform() method on our FeatureSet.
First, we need a stoplist. NLTK provides a great stoplist.
End of explanation
"""
def apply_stoplist(f, v, c, dc):
if f in stoplist or dc > 500 or dc < 3 or len(f) < 4:
return None # Discard the element.
return v
"""
Explanation: We then need to define what elements to keep, and what elements to discard. We will use a function that evaluates whether or not to keep each word. The function should take four arguments:
f -- the feature itself (the word)
v -- the number of instances of that feature in a specific document
c -- the number of instances of that feature in the whole FeatureSet
dc -- the number of documents that contain that feature
This function will be applied to each word in each document. If it returns 0 or None, the word will be excluded. Otherwise, it should return a numeric value (in this case, the count for that document).
In addition to applying the stoplist, we'll also exclude any word that occurs in more than 500 of the documents and less than 3 documents, and is less than 4 characters in length.
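To see the rule in isolation, here is the same kind of predicate applied to a few hypothetical (word, count, corpus count, document count) rows, with a toy stoplist:

```python
stoplist = {'the', 'and'}  # toy stoplist

def keep(f, v, c, dc):
    # Same shape of rule as this notebook's apply_stoplist: drop stopwords,
    # very common words (dc > 500), very rare words (dc < 3),
    # and very short words (len < 4); otherwise keep the count.
    if f in stoplist or dc > 500 or dc < 3 or len(f) < 4:
        return None
    return v

rows = [('the', 12, 9000, 800),   # stopword -> dropped
        ('gene', 3, 150, 40),     # kept
        ('xqz', 1, 2, 1)]         # too rare and too short -> dropped
kept = {f: v for f, v, c, dc in rows if keep(f, v, c, dc) is not None}
print(kept)  # {'gene': 3}
```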
End of explanation
"""
corpus.features['wordcounts_filtered'] = corpus.features['wordcounts'].transform(apply_stoplist)
"""
Explanation: We apply the stoplist using the transform() method. FeatureSets are not modified in place; instead, a new FeatureSet is generated that reflects the specified changes. We'll call the new FeatureSet 'wordcounts_filtered'.
End of explanation
"""
print 'There are %i words in the wordcounts featureset' % len(corpus.features['wordcounts'].index)
print 'There are %i words in the wordcounts_filtered featureset' % len(corpus.features['wordcounts_filtered'].index)
"""
Explanation: There should be significantly fewer words in our new "wordcounts_filtered" FeatureSet.
End of explanation
"""
from tethne import LDAModel
"""
Explanation: The LDA topic model
Tethne provides a class called LDAModel. You should be able to import it directly from the tethne package:
End of explanation
"""
model = LDAModel(corpus, featureset_name='wordcounts_filtered')
"""
Explanation: Now we'll create a new LDAModel for our Corpus. The featureset_name parameter tells the LDAModel which FeatureSet we want to use. We'll use our filtered wordcounts.
End of explanation
"""
model.fit(Z=50, max_iter=500)
"""
Explanation: Next we'll fit the model. We need to tell MALLET how many topics to fit (the hyperparameter Z), and how many iterations (max_iter) to perform. This step may take a little while, depending on the size of your corpus.
End of explanation
"""
model.print_topics()
"""
Explanation: You can inspect the inferred topics using the model's print_topics() method. By default, this will print the top ten words for each topic.
End of explanation
"""
plt.figure(figsize=(15, 5))
for k in xrange(5): # Generates numbers k in [0, 4].
x, y = model.topic_over_time(k) # Gets topic number k.
plt.plot(x, y, label='topic {0}'.format(k), lw=2, alpha=0.7)
plt.legend(loc='best')
plt.show()
"""
Explanation: We can also look at the representation of a topic over time using the topic_over_time() method. In the example below we'll print the first five of the topics on the same plot.
End of explanation
"""
from tethne.networks import topics
"""
Explanation: Generating networks from topic models
The features module in the tethne.networks subpackage contains some useful methods for visualizing topic models as networks. You can import it just like the authors or papers modules.
End of explanation
"""
termGraph = topics.terms(model, threshold=0.01)
termGraph.order(), termGraph.size()
termGraph.name = ''
from tethne.writers.graph import to_graphml
to_graphml(termGraph, '/Users/erickpeirson/Desktop/topic_terms.graphml')
"""
Explanation: The terms function generates a network of words connected on the basis of shared affinity with a topic. If two words i and j are both associated with a topic z with $\Phi(i|z) >= 0.01$ and $\Phi(j|z) >= 0.01$, then an edge is drawn between them.
End of explanation
"""
topicCoupling = topics.topic_coupling(model, threshold=0.2)
print '%i nodes and %i edges' % (topicCoupling.order(), topicCoupling.size())
to_graphml(topicCoupling, '/Users/erickpeirson/Desktop/lda_topicCoupling.graphml')
"""
Explanation:
End of explanation
"""
|
campagnucci/api_sof | .ipynb_checkpoints/SOF_Execucao_Orcamentaria_PMSP-checkpoint.ipynb | gpl-3.0 | import pandas as pd
import requests
import json
import numpy as np
TOKEN = '198f959a5f39a1c441c7c863423264'
base_url = "https://gatewayapi.prodam.sp.gov.br:443/financas/orcamento/sof/v2.1.0"
headers={'Authorization' : str('Bearer ' + TOKEN)}
"""
Explanation: Exploring the city of São Paulo's expenses
A first-steps tutorial for accessing the municipality's budget execution using Python and the Pandas data analysis library *
Step 1. Registering with the API and getting an access token
Go to Prodam's API showcase: https://api.prodam.sp.gov.br/store/
Select the SOF API
Click "Inscrever-se" (Subscribe)
Open the "Minhas assinaturas" (My subscriptions) menu
Generate a production access key; set a negative validity value to keep it from expiring
Copy the access token
Step 2. Testing in the API Console
The API Console is an interface that lets you test the different queries and obtain a URL with the desired parameters. For example, if you want every contract of the Municipal Department of Education in 2017, just open the /consultaContrato item and enter "2017" in the anoContrato field and "16" (Education's code) in the codOrgao field. The URL resulting from this query is https://gatewayapi.prodam.sp.gov.br:443/financas/orcamento/sof/v2.1.0/consultaContrato?anoContrato=2017&codOrgao=16
Step 3. Hands on with Pandas!
This is the script that queries the API (for any URL generated above) and turns the json file obtained into a Pandas DataFrame, from which the analyses can be done. Replace the TOKEN constant with your own subscription code!
End of explanation
"""
url_orcado = '{base_url}/consultarDespesas?anoDotacao=2017&mesDotacao=08&codOrgao=84'.format(base_url=base_url)
request_orcado = requests.get(url_orcado,
headers=headers,
verify=True).json()
df_orcado = pd.DataFrame(request_orcado['lstDespesas'])
df_resumo_orcado = df_orcado[['valOrcadoInicial', 'valOrcadoAtualizado', 'valCongelado', 'valDisponivel', 'valEmpenhadoLiquido', 'valLiquidado']]
df_resumo_orcado
"""
Explanation: The budget
First, let's get an overview of what was budgeted for the Municipal Health Department this year, as well as the amounts frozen and already executed. This is possible with the "Despesas" (expenses) query.
End of explanation
"""
url_empenho = '{base_url}/consultaEmpenhos?anoEmpenho=2017&mesEmpenho=08&codOrgao=84'.format(base_url=base_url)
pagination = '&numPagina={PAGE}'
request_empenhos = requests.get(url_empenho,
headers=headers,
verify=True).json()
"""
Explanation: Commitments (empenhos)
An empenho (commitment) is the act in which the authority verifies the existence of the budget appropriation and authorizes the expense to be executed (for example, to run a procurement). From there, the amounts are liquidated and paid as a contract is carried out.
Let's see how much of its budget the Municipal Health Department committed in 2017.
End of explanation
"""
number_of_pages = request_empenhos['metadados']['qtdPaginas']
todos_empenhos = []
todos_empenhos = todos_empenhos + request_empenhos['lstEmpenhos']
if number_of_pages>1:
for p in range(2, number_of_pages+1):
request_empenhos = requests.get(url_empenho + pagination.format(PAGE=p), headers=headers, verify=True).json()
todos_empenhos = todos_empenhos + request_empenhos['lstEmpenhos']
df_empenhos = pd.DataFrame(todos_empenhos)
"""
Explanation: The API returns only one page per query. The script below checks the number of pages in the query's metadata and iterates as many times as needed to fetch every page:
End of explanation
"""
df_empenhos.tail()
"""
Explanation: With the steps above, we requested every page and converted the json file into a DataFrame. Now we can analyze these data with Pandas. To check how many records there are, let's look at the end of the list:
End of explanation
"""
modalidades = df_empenhos.groupby('txtModalidadeAplicacao')['valTotalEmpenhado', 'valLiquidado'].sum()
modalidades
# Another way to do the same operation:
#pd.pivot_table(df_empenhos, values='valTotalEmpenhado', index=['txtModalidadeAplicacao'], aggfunc=np.sum)
"""
Explanation: Spending modalities
Here we see, as an example, the amount of resources applied in Health by modality -- whether spent directly on the municipal network or transferred to social organizations. Note that the same could be done for any agency, or even for the City Hall as a whole:
End of explanation
"""
despesas = pd.pivot_table(df_empenhos,
values=['valLiquidado', 'valPagoExercicio'],
index=['numCpfCnpj', 'txtRazaoSocial', 'txtDescricaoPrograma'],
aggfunc=np.sum).sort_values('valPagoExercicio', axis=0, ascending=False, inplace=False, kind='quicksort', na_position='last')
despesas.head(15)
"""
Explanation: Largest expenses of 2017
Here we will produce the list of the 15 largest Health expenses this year:
End of explanation
"""
fonte = pd.pivot_table(df_empenhos,
values=['valLiquidado', 'valPagoExercicio'],
index=['txtDescricaoFonteRecurso'],
aggfunc=np.sum).sort_values('valPagoExercicio', axis=0, ascending=False, inplace=False, kind='quicksort', na_position='last')
fonte
"""
Explanation: Funding sources
Grouping the commitments by funding source:
End of explanation
"""
df_empenhos.to_csv('empenhos.csv')
"""
Explanation: Step 4. Want to save a csv?
The goal of this tutorial was not to do an exhaustive analysis of the database, but just to show what is possible by consuming the API. You can also save the whole commitments base to a .csv file and work in your Excel (I totally get it). Pandas helps with that too! Like this:
End of explanation
"""
|
tensorflow/docs-l10n | site/ko/agents/tutorials/8_networks_tutorial.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2021 The TF-Agents Authors.
End of explanation
"""
!pip install tf-agents
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import abc
import tensorflow as tf
import numpy as np
from tf_agents.environments import random_py_environment
from tf_agents.environments import tf_py_environment
from tf_agents.networks import encoding_network
from tf_agents.networks import network
from tf_agents.networks import utils
from tf_agents.specs import array_spec
from tf_agents.utils import common as common_utils
from tf_agents.utils import nest_utils
tf.compat.v1.enable_v2_behavior()
"""
Explanation: Networks
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/agents/tutorials/8_networks_tutorial"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/agents/tutorials/8_networks_tutorial.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/agents/tutorials/8_networks_tutorial.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/agents/tutorials/8_networks_tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
Introduction
In this colab we will cover how to define custom networks for your agents. The networks help us define the model that is trained by agents. In TF-Agents you will find several different types of networks which are useful across agents:
Main Networks
QNetwork: Used in Qlearning for environments with discrete actions, this network maps an observation to value estimates for each possible action.
CriticNetworks: Also referred to as ValueNetworks in the literature, these learn to estimate some version of a Value function, mapping a state into an estimate of the expected return of a policy. These networks estimate how good the state the agent is currently in is.
ActorNetworks: Learn a mapping from observations to actions. These networks are usually used by our policies to generate actions.
ActorDistributionNetworks: Similar to ActorNetworks, but these generate a distribution which a policy can then sample to generate actions.
Helper Networks
EncodingNetwork: Allows users to easily define a mapping of pre-processing layers to apply to a network's input.
DynamicUnrollLayer: Automatically resets the network's state on episode boundaries as it is applied over a time sequence.
ProjectionNetwork: Networks like CategoricalProjectionNetwork or NormalProjectionNetwork take inputs and generate the required parameters to produce Categorical or Normal distributions.
All examples in TF-Agents come with pre-configured networks. However, these networks are not set up to handle complex observations.
If you have an environment which exposes more than one observation/action and you need to customize your networks, then this tutorial is for you!
Setup
If you haven't installed tf-agents yet, run:
End of explanation
"""
class ActorNetwork(network.Network):
def __init__(self,
observation_spec,
action_spec,
preprocessing_layers=None,
preprocessing_combiner=None,
conv_layer_params=None,
fc_layer_params=(75, 40),
dropout_layer_params=None,
activation_fn=tf.keras.activations.relu,
enable_last_layer_zero_initializer=False,
name='ActorNetwork'):
super(ActorNetwork, self).__init__(
input_tensor_spec=observation_spec, state_spec=(), name=name)
# For simplicity we will only support a single action float output.
self._action_spec = action_spec
flat_action_spec = tf.nest.flatten(action_spec)
if len(flat_action_spec) > 1:
raise ValueError('Only a single action is supported by this network')
self._single_action_spec = flat_action_spec[0]
if self._single_action_spec.dtype not in [tf.float32, tf.float64]:
raise ValueError('Only float actions are supported by this network.')
kernel_initializer = tf.keras.initializers.VarianceScaling(
scale=1. / 3., mode='fan_in', distribution='uniform')
self._encoder = encoding_network.EncodingNetwork(
observation_spec,
preprocessing_layers=preprocessing_layers,
preprocessing_combiner=preprocessing_combiner,
conv_layer_params=conv_layer_params,
fc_layer_params=fc_layer_params,
dropout_layer_params=dropout_layer_params,
activation_fn=activation_fn,
kernel_initializer=kernel_initializer,
batch_squash=False)
initializer = tf.keras.initializers.RandomUniform(
minval=-0.003, maxval=0.003)
self._action_projection_layer = tf.keras.layers.Dense(
flat_action_spec[0].shape.num_elements(),
activation=tf.keras.activations.tanh,
kernel_initializer=initializer,
name='action')
def call(self, observations, step_type=(), network_state=()):
outer_rank = nest_utils.get_outer_rank(observations, self.input_tensor_spec)
# We use batch_squash here in case the observations have a time sequence
# compoment.
batch_squash = utils.BatchSquash(outer_rank)
observations = tf.nest.map_structure(batch_squash.flatten, observations)
state, network_state = self._encoder(
observations, step_type=step_type, network_state=network_state)
actions = self._action_projection_layer(state)
actions = common_utils.scale_to_spec(actions, self._single_action_spec)
actions = batch_squash.unflatten(actions)
return tf.nest.pack_sequence_as(self._action_spec, [actions]), network_state
"""
Explanation: Defining Networks
Network API
In TF-Agents we subclass from Keras Networks. With it we can:
Simplify the copy operations required when creating target networks.
Perform automatic variable creation when calling network.variables().
Validate inputs based on the network's input_specs.
EncodingNetwork
The EncodingNetwork is composed of the following, mostly optional, layers:
Preprocessing layers
Preprocessing combiner
Conv2D
Flatten
Dense
The special thing about encoding networks is that input preprocessing is applied. Input preprocessing is possible via the preprocessing_layers and preprocessing_combiner layers. Each of these can be specified as a nested structure. If the preprocessing_layers nest is shallower than input_tensor_spec, the layers will get the sub-nests. For example, if:
input_tensor_spec = ([TensorSpec(3)] * 2, [TensorSpec(3)] * 5)
preprocessing_layers = (Layer1(), Layer2())
then preprocessing will call:
preprocessed = [preprocessing_layers[0](observations[0]), preprocessing_layers[1](observations[1])]
However, if
preprocessing_layers = ([Layer1() for _ in range(2)], [Layer2() for _ in range(5)])
then preprocessing will call:
preprocessed = [layer(obs) for layer, obs in zip(flatten(preprocessing_layers), flatten(observations))]
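The zip-over-flattened-structures pattern can be imitated in plain Python, with toy functions standing in for layers and plain numbers for observations (an illustration only, not TF-Agents' actual nest utilities):

```python
def flatten(nested):
    # Minimal recursive flatten for lists/tuples (illustration only,
    # not tf.nest.flatten).
    out = []
    for item in nested:
        if isinstance(item, (list, tuple)):
            out.extend(flatten(item))
        else:
            out.append(item)
    return out

layers = [[lambda x: x + 1, lambda x: x + 1], [lambda x: x * 2]]
observations = [[10, 20], [3]]
preprocessed = [layer(obs) for layer, obs
                in zip(flatten(layers), flatten(observations))]
print(preprocessed)  # [11, 21, 6]
```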
Custom Networks
To create your own networks you will only have to override the __init__ and call methods. Let's use what we learned about EncodingNetworks to create a custom ActorNetwork that takes observations containing an image and a vector.
End of explanation
"""
action_spec = array_spec.BoundedArraySpec((3,), np.float32, minimum=0, maximum=10)
observation_spec = {
'image': array_spec.BoundedArraySpec((16, 16, 3), np.float32, minimum=0,
maximum=255),
'vector': array_spec.BoundedArraySpec((5,), np.float32, minimum=-100,
maximum=100)}
random_env = random_py_environment.RandomPyEnvironment(observation_spec, action_spec=action_spec)
# Convert the environment to a TFEnv to generate tensors.
tf_env = tf_py_environment.TFPyEnvironment(random_env)
"""
Explanation: Let's create a RandomPyEnvironment to generate structured observations and validate our implementation.
End of explanation
"""
preprocessing_layers = {
'image': tf.keras.models.Sequential([tf.keras.layers.Conv2D(8, 4),
tf.keras.layers.Flatten()]),
'vector': tf.keras.layers.Dense(5)
}
preprocessing_combiner = tf.keras.layers.Concatenate(axis=-1)
actor = ActorNetwork(tf_env.observation_spec(),
tf_env.action_spec(),
preprocessing_layers=preprocessing_layers,
preprocessing_combiner=preprocessing_combiner)
"""
Explanation: Since we defined the observations to be a dict, we need to create preprocessing layers to handle them.
End of explanation
"""
time_step = tf_env.reset()
actor(time_step.observation, time_step.step_type)
"""
Explanation: Now that we have the actor network, we can process observations from the environment.
End of explanation
"""
|
computational-class/cjc2016 | code/tba/DeepLearningMovies/Kaggle tutorial Part 1 Natural Language Processing.ipynb | mit | import os
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from KaggleWord2VecUtility import KaggleWord2VecUtility # in the same folader
import pandas as pd
import numpy as np
"""
Explanation: Deep learning goes to the movies
Kaggle tutorial Part 1: Natural Language Processing.
Author: Angela Chapman
Date: 8/6/2014
Bag of Words Meets Bags of Popcorn
The labeled data set consists of 50,000 IMDB movie reviews, specially selected for sentiment analysis. The sentiment of reviews is binary, meaning an IMDB rating < 5 results in a sentiment score of 0, and a rating >= 7 results in a sentiment score of 1. No individual movie has more than 30 reviews. The 25,000 review labeled training set does not include any of the same movies as the 25,000 review test set. In addition, there are another 50,000 IMDB reviews provided without any rating labels.
End of explanation
"""
traindata_path = "/Users/chengjun/bigdata/kaggle_popcorn_data/labeledTrainData.tsv"
testdata_path = "/Users/chengjun/bigdata/kaggle_popcorn_data/testData.tsv"
train = pd.read_csv(traindata_path, header=0, delimiter="\t", quoting=3)
test = pd.read_csv(testdata_path, header=0, delimiter="\t", quoting=3 )
print 'The first review is:'
print train["review"][0]
train[:3]
test[:3]
"""
Explanation: Reading the data
End of explanation
"""
import nltk
nltk.download()
# 'Download text data sets. If you already have NLTK datasets downloaded, just close the Python download window...'
# Download text data sets, including stop words
# Initialize an empty list to hold the clean reviews
clean_train_reviews = []
# Loop over each review; create an index i that goes from 0 to the length
# of the movie review list
print "Cleaning and parsing the training set movie reviews...\n"
for i in xrange( 0, len(train["review"])):
clean_train_reviews.append(" ".join(KaggleWord2VecUtility.review_to_wordlist(train["review"][i], True)))
clean_train_reviews[0]
train['review'][0]
"""
Explanation: Cleaning the data
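The `KaggleWord2VecUtility.review_to_wordlist` helper is imported rather than shown. Below is a minimal sketch of what such a cleaner typically does; the regex and the tiny stop-word set here are my own assumptions for illustration, not the actual utility's code (which relies on NLTK's stop-word list):

```python
import re

STOP_WORDS = {"the", "a", "an", "and", "is", "it"}  # assumed tiny subset for illustration

def review_to_wordlist(review, remove_stopwords=True):
    # Strip anything that looks like an HTML tag, keep letters only,
    # lowercase, split into words, and optionally drop stop words.
    text = re.sub(r"<[^>]+>", " ", review)
    text = re.sub(r"[^a-zA-Z]", " ", text).lower()
    words = text.split()
    if remove_stopwords:
        words = [w for w in words if w not in STOP_WORDS]
    return words

print(review_to_wordlist("<br />The movie was GREAT!"))  # ['movie', 'was', 'great']
```

The real utility's behavior may differ in detail, but the output is the same shape: a list of lowercase word tokens per review.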
End of explanation
"""
# ****** Create a bag of words from the training set
# Initialize the "CountVectorizer" object, which is scikit-learn's
# bag of words tool.
vectorizer = CountVectorizer(analyzer = "word", \
tokenizer = None, \
preprocessor = None, \
stop_words = None, \
max_features = 5000)
# fit_transform() does two functions: First, it fits the model
# and learns the vocabulary; second, it transforms our training data
# into feature vectors. The input to fit_transform should be a list of strings.
train_data_features = vectorizer.fit_transform(clean_train_reviews)
# Numpy arrays are easy to work with, so convert the result to an array
train_data_features = train_data_features.toarray()
type(train_data_features)
len(train_data_features)
train_data_features[1][100:105]
"""
Explanation: Computing the feature vectors (bag of words)
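It may help to see a hand-rolled sketch of what `CountVectorizer.fit_transform` produces. This toy version (my own, not scikit-learn's implementation) learns a vocabulary and counts words against it:

```python
from collections import Counter

def fit_vocabulary(docs):
    # Learn a sorted vocabulary: word -> column index.
    words = sorted({w for doc in docs for w in doc.split()})
    return {w: i for i, w in enumerate(words)}

def transform(docs, vocab):
    # One row of word counts per document, one column per vocabulary word.
    rows = []
    for doc in docs:
        counts = Counter(doc.split())
        rows.append([counts.get(w, 0) for w in vocab])
    return rows

docs = ["good movie", "bad bad movie"]
vocab = fit_vocabulary(docs)
features = transform(docs, vocab)
print(vocab)     # {'bad': 0, 'good': 1, 'movie': 2}
print(features)  # [[0, 1, 1], [2, 0, 1]]
```

scikit-learn's version additionally handles tokenization options and caps the vocabulary (here, at `max_features = 5000`), but the resulting matrix has the same meaning: one count row per review.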
End of explanation
"""
from sklearn.model_selection import cross_val_score  # was sklearn.cross_validation in older scikit-learn releases
forest_val = RandomForestClassifier(n_estimators = 100)
scores = cross_val_score(forest_val, train_data_features, train["sentiment"], cv = 3)
scores.mean()
scores
"""
Explanation: Cross validation Score of RandomForestClassifier
RandomForestClassifier
In machine learning, a random forest is a classifier made up of many decision trees, whose output class is the mode of the classes output by the individual trees. Leo Breiman and Adele Cutler developed the algorithm behind random forests, and "Random Forests" is their trademark. The term derives from the random decision forests proposed by Tin Kam Ho of Bell Labs in 1995. The method combines Breiman's "bootstrap aggregating" idea with Ho's "random subspace method" to build a collection of decision trees.
Each tree is built according to the following algorithm:
1. Let N be the number of training examples and M the number of variables.
2. We are given a number m, which determines how many variables are considered when making a decision at a node; m should be less than M.
3. Sample N times, with replacement, from the N training cases to form a training set (i.e., a bootstrap sample). Use the tree to predict the classes of the remaining cases and estimate its error.
4. For each node, randomly select m of the variables available at that node. Compute the best split based on these m variables.
5. Each tree is grown fully and is not pruned (pruning may be applied after building a normal tree classifier).
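The tree-building steps above can be sketched in a few lines. This is an illustrative skeleton of the two sources of randomness (the split-finding itself is omitted), not scikit-learn's implementation:

```python
import random

def bootstrap_sample(n, rng):
    # Step 3: sample n indices with replacement; the untouched rows form
    # the out-of-bag (OOB) set used to estimate the tree's error.
    in_bag = [rng.randrange(n) for _ in range(n)]
    oob = set(range(n)) - set(in_bag)
    return in_bag, oob

def candidate_features(M, m, rng):
    # Step 4: at each node, only m of the M variables are considered.
    return rng.sample(range(M), m)

rng = random.Random(0)
in_bag, oob = bootstrap_sample(10, rng)
feats = candidate_features(M=4, m=2, rng=rng)
assert len(in_bag) == 10 and len(feats) == 2
```

`RandomForestClassifier(n_estimators = 100)` repeats this procedure 100 times and averages (votes over) the resulting trees.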
End of explanation
"""
# ******* Train a random forest using the bag of words
# Initialize a Random Forest classifier with 100 trees
forest = RandomForestClassifier(n_estimators = 100)
# Fit the forest to the training set, using the bag of words as
# features and the sentiment labels as the response variable
# This may take a few minutes to run
forest = forest.fit( train_data_features, train["sentiment"] )
"""
Explanation: Use all train data to train a forest model
End of explanation
"""
# Create an empty list and append the clean reviews one by one
clean_test_reviews = []
for i in range(0, len(test["review"])):
clean_test_reviews.append(" ".join(KaggleWord2VecUtility.review_to_wordlist(test["review"][i], True)))
len(clean_test_reviews)
clean_test_reviews[0]
test['review'][0]
# Get a bag of words for the test set, and convert to a numpy array
test_data_features = vectorizer.transform(clean_test_reviews)
test_data_features = test_data_features.toarray()
test_data_features[3]
# Use the random forest to make sentiment label predictions
result = forest.predict(test_data_features)
# Copy the results to a pandas dataframe with an "id" column and a "sentiment" column
output = pd.DataFrame( data={"id":test["id"], "sentiment":result} )
# Use pandas to write the comma-separated output file
output.to_csv('/Users/chengjun/github/cjc2016/data/Bag_of_Words_model.csv', index=False, quoting=3)
"""
Explanation: Predict the testset
End of explanation
"""
|
tensorflow/docs-l10n | site/zh-cn/tutorials/distribute/multi_worker_with_keras.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
!pip install tf-nightly
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
"""
Explanation: Multi-worker training with Keras
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://tensorflow.google.cn/tutorials/distribute/multi_worker_with_keras"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" />View on tensorflow.google.cn</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/distribute/multi_worker_with_keras.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/distribute/multi_worker_with_keras.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/distribute/multi_worker_with_keras.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Note: The TensorFlow community translated these documents. Because community translations are best-effort, there is no guarantee that they are an accurate and up-to-date reflection of the
official English documentation. If you have suggestions to improve this translation, please submit a pull request to the
tensorflow/docs GitHub repository. To volunteer to write or review translations, join the
docs-zh-cn@tensorflow.org Google Group.
Overview
This tutorial demonstrates multi-worker distributed training with a Keras model, using the tf.distribute.Strategy API. With the strategies designed specifically for multi-worker training, a Keras model designed to run on a single worker can seamlessly work on multiple workers with minimal code changes.
The Distributed Training in TensorFlow guide is available for an overview of the distribution strategies TensorFlow supports, for anyone interested in a deeper understanding of the tf.distribute.Strategy API.
Setup
First, set up TensorFlow and the necessary imports.
End of explanation
"""
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
  # Scale the MNIST data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
datasets, info = tfds.load(name='mnist',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE)
train_datasets = make_datasets_unbatched().batch(BATCH_SIZE)
"""
Explanation: Prepare the dataset
Now, let's prepare the MNIST dataset from TensorFlow Datasets. The MNIST dataset comprises 60,000 training examples and 10,000 test examples of the handwritten digits 0-9, formatted as 28x28-pixel monochrome images.
"""
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
"""
Explanation: Build the Keras model
Here we use the tf.keras.Sequential API to build and compile a simple convolutional neural network Keras model to train with our MNIST dataset.
Note: For more detailed instructions on building Keras models, please see the TensorFlow Keras guide.
End of explanation
"""
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(x=train_datasets, epochs=3, steps_per_epoch=5)
"""
Explanation: Let's first try training the model for a small number of epochs and observe the results in a single worker to make sure everything works correctly. You should see the loss dropping and the accuracy approaching 1.0 as training progresses.
End of explanation
"""
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
"""
Explanation: Multi-worker configuration
Now let's enter the world of multi-worker training. In TensorFlow, the TF_CONFIG environment variable is required for training on multiple machines, each of which may have a different role. TF_CONFIG is used to specify the cluster configuration for each worker that is part of the cluster.
TF_CONFIG has two components: cluster and task. cluster provides information about the training cluster, which is a dict consisting of the different types of jobs, such as worker. In multi-worker training, besides the regular workers, there is usually one worker that takes on a little more responsibility, such as saving checkpoints and writing summary files for TensorBoard. Such a worker is referred to as the "chief" worker, and it is customary that the worker with index 0 is appointed as the chief worker (in fact, this is how tf.distribute.Strategy is implemented). task, on the other hand, provides information about the current task.
In this example, we set the task type to "worker" and the task index to 0. This means the machine with this setting is the first worker: it will be appointed as the chief worker and will do more work than the other workers. Note that the other machines also need to have the TF_CONFIG environment variable set; it should have the same cluster dict, but a different task type or index depending on the roles of those machines.
For illustration purposes, this tutorial shows how to set a TF_CONFIG with 2 workers on localhost. In practice, users would create multiple workers on external IP addresses/ports, and set TF_CONFIG on each worker appropriately.
Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.
os.environ['TF_CONFIG'] = json.dumps({
'cluster': {
'worker': ["localhost:12345", "localhost:23456"]
},
'task': {'type': 'worker', 'index': 0}
})
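For illustration only (this snippet is not from the original tutorial): the second machine of the same cluster would export the identical cluster dict but task index 1. The `json` and `os` modules must be imported first:

```python
import json
import os

# Hypothetical TF_CONFIG for the *second* worker (task index 1) of the
# same two-worker cluster; every worker shares the same cluster dict.
tf_config = {
    'cluster': {
        'worker': ["localhost:12345", "localhost:23456"]
    },
    'task': {'type': 'worker', 'index': 1}
}
os.environ['TF_CONFIG'] = json.dumps(tf_config)

# Each process can recover its own role by parsing the variable back.
config = json.loads(os.environ['TF_CONFIG'])
print(config['task']['index'])  # 1

# Clean up so the rest of this notebook still runs as single-machine training.
del os.environ['TF_CONFIG']
```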
Note that, while the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size.
Choose the right strategy
In TensorFlow, distributed training consists of synchronous training, where the steps of training are synced across the workers and replicas, and asynchronous training, where the training steps are not strictly synced.
MultiWorkerMirroredStrategy, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.
To train the model, use an instance of tf.distribute.experimental.MultiWorkerMirroredStrategy. MultiWorkerMirroredStrategy creates copies of all variables in the model's layers on each device across all workers. It uses CollectiveOps, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The tf.distribute.Strategy guide has more details about this strategy.
End of explanation
"""
NUM_WORKERS = 2
# The batch size here is scaled up by the number of workers, since
# `tf.data.Dataset.batch` expects the global batch size.
# Previously we used 64, and now it becomes 128.
GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS
# Creation of the dataset needs to happen after the
# MultiWorkerMirroredStrategy object is instantiated.
train_datasets = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
  # Model building/compiling needs to be within `strategy.scope()`.
multi_worker_model = build_and_compile_cnn_model()
# Keras' `model.fit()` trains the model with a specified number of epochs and steps per epoch.
# Note that the numbers here are for demonstration purposes only and are not sufficient to produce a high-quality model.
multi_worker_model.fit(x=train_datasets, epochs=3, steps_per_epoch=5)
"""
Explanation: Note: TF_CONFIG is parsed and TensorFlow's GRPC servers are started at the time MultiWorkerMirroredStrategy.__init__() is called, so the TF_CONFIG environment variable must be set before a tf.distribute.Strategy instance is created.
MultiWorkerMirroredStrategy provides multiple implementations via the CollectiveCommunication parameter. RING implements ring-based collectives using gRPC as the cross-host communication layer. NCCL uses Nvidia's NCCL to implement collectives. AUTO defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster.
Train the model with MultiWorkerMirroredStrategy
With the integration of the tf.distribute.Strategy API into tf.keras, the only change you will make to distribute the training to multiple workers is enclosing the model building and the model.compile() call inside strategy.scope(). The distribution strategy's scope dictates how and where the variables are created, and in the case of MultiWorkerMirroredStrategy, the variables created are MirroredVariables, and they are replicated on each of the workers.
Note: In this Colab, the following code can run with the expected result, but since TF_CONFIG is not set, this is effectively single-machine training. Once you have set TF_CONFIG in your own example, you should expect a speed-up from training on multiple machines.
End of explanation
"""
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
train_datasets_no_auto_shard = train_datasets.with_options(options)
"""
Explanation: Dataset sharding and batch size
In multi-worker training, sharding data into multiple parts is needed to ensure convergence and performance. However, note that in the above code snippet, the datasets are sent directly to model.fit() without needing to shard; this is because the tf.distribute.Strategy API takes care of the dataset sharding automatically in multi-worker training.
If you prefer manual sharding for your training, automatic sharding can be turned off via the tf.data.experimental.DistributeOptions API.
End of explanation
"""
# Replace the `filepath` argument with a path in the file system accessible by all workers.
callbacks = [tf.keras.callbacks.ModelCheckpoint(filepath='/tmp/keras-ckpt')]
with strategy.scope():
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(x=train_datasets,
epochs=3,
steps_per_epoch=5,
callbacks=callbacks)
"""
Explanation: Another thing to note is the batch size of the datasets. In the code snippet above, we use GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS, which is NUM_WORKERS times the single-worker size, because the effective per-worker batch size is the global batch size (the argument passed to tf.data.Dataset.batch()) divided by the number of workers. With this change, we keep the per-worker batch size the same as before.
Performance
You now have a Keras model that is all set up to run in multiple workers with MultiWorkerMirroredStrategy. You can try the following techniques to tweak the performance of multi-worker training.
MultiWorkerMirroredStrategy provides multiple collective communication implementations. RING implements ring-based collectives using gRPC as the cross-host communication layer. NCCL uses Nvidia's NCCL to implement collectives. AUTO defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify a valid value to the communication argument of MultiWorkerMirroredStrategy's constructor, e.g. communication=tf.distribute.experimental.CollectiveCommunication.NCCL.
Cast the variables to tf.float if possible. The official ResNet model includes an example of how this can be done.
Fault tolerance
In synchronous training, the cluster will fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with tf.distribute.Strategy comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. We do this by preserving the training state in the distributed file system of your choice, so that upon restart of the instance that previously failed or was preempted, the training state is recovered.
Since all the workers are kept in sync in terms of training epochs and steps, other workers will need to wait for the failed or preempted worker to restart before they can continue.
ModelCheckpoint callback
To take advantage of fault tolerance in multi-worker training, provide a tf.keras.callbacks.ModelCheckpoint instance at the call to tf.keras.Model.fit(). The callback stores the checkpoint and training state in the directory corresponding to the filepath argument of ModelCheckpoint.
End of explanation
"""
|
eds-uga/csci1360-fa16 | lectures/L13.ipynb | mit | li = ["this", "is", "a", "list"]
print(li)
print(li[1:3]) # Print element 1 (inclusive) to 3 (exclusive)
print(li[2:]) # Print element 2 and everything after that
print(li[:-1]) # Print everything BEFORE element -1 (the last one)
"""
Explanation: Lecture 13: Array Indexing, Slicing, and Broadcasting
CSCI 1360: Foundations for Informatics and Analytics
Overview and Objectives
Most of this lecture will be a review of basic indexing and slicing operations, albeit within the context of NumPy arrays. Therefore, there will be some additional functionalities that are critical to understand. By the end of this lecture, you should be able to:
Use "fancy indexing" in NumPy arrays
Create boolean masks to pull out subsets of a NumPy array
Understand array broadcasting for performing operations on subsets of NumPy arrays
Part 1: NumPy Array Indexing and Slicing
Hopefully, you recall basic indexing and slicing from L4.
End of explanation
"""
import numpy as np
x = np.array([1, 2, 3, 4, 5])
print(x)
print(x[1:3])
print(x[2:])
print(x[:-1])
"""
Explanation: With NumPy arrays, all the same functionality you know and love from lists is still there.
End of explanation
"""
python_matrix = [ [1, 2, 3], [4, 5, 6], [7, 8, 9] ]
print(python_matrix)
"""
Explanation: These operations all work whether you're using Python lists or NumPy arrays.
The first place in which Python lists and NumPy arrays differ is when we get to multidimensional arrays. We'll start with matrices.
To build matrices using Python lists, you basically needed "nested" lists, or a list containing lists:
End of explanation
"""
numpy_matrix = np.array(python_matrix)
print(numpy_matrix)
"""
Explanation: To build the NumPy equivalent, you can basically just feed the Python list-matrix into the NumPy array method:
End of explanation
"""
print(python_matrix) # The full list-of-lists
print(python_matrix[0]) # The inner-list at the 0th position of the outer-list
print(python_matrix[0][0]) # The 0th element of the 0th inner-list
"""
Explanation: The real difference, though, comes with actually indexing these elements. With Python lists, you can index individual elements only in this way:
End of explanation
"""
print(numpy_matrix)
print(numpy_matrix[0])
print(numpy_matrix[0, 0]) # Note the comma-separated format!
"""
Explanation: With NumPy arrays, you can use that same notation...or you can use comma-separated indices:
End of explanation
"""
x = np.array([ [1, 2, 3], [4, 5, 6], [7, 8, 9] ])
print(x)
print()
print(x[:, 1]) # Take ALL of axis 0, and one index of axis 1.
"""
Explanation: It's not earth-shattering, but enough to warrant a heads-up.
When you index NumPy arrays, the nomenclature used is that of an axis: you are indexing specific axes of a NumPy array object. In particular, when access the .shape attribute on a NumPy array, that tells you two things:
1: How many axes there are. This number is len(ndarray.shape), or the number of elements in the tuple returned by .shape. In our above example, numpy_matrix.shape would return (3, 3), so it would have 2 axes.
2: How many elements are in each axis. In our above example, where numpy_matrix.shape returns (3, 3), there are 2 axes (since the length of that tuple is 2), and both axes have 3 elements (hence the numbers 3).
Here's the breakdown of axis notation and indices used in a 2D NumPy array:
As with lists, if you want an entire axis, just use the colon operator all by itself:
End of explanation
"""
video = np.empty(shape = (1920, 1080, 5000))
print("Axis 0 length: {}".format(video.shape[0])) # How many rows?
print("Axis 1 length: {}".format(video.shape[1])) # How many columns?
print("Axis 2 length: {}".format(video.shape[2])) # How many frames?
"""
Explanation: Here's a great visual summary of slicing NumPy arrays, assuming you're starting from an array with shape (3, 3):
Depending on your field, it's entirely possible that you'll go beyond 2D matrices. If so, it's important to be able to recognize what these structures "look" like.
For example, a video can be thought of as a 3D cube. Put another way, it's a NumPy array with 3 axes: the first axis is height, the second axis is width, and the third axis is number of frames.
End of explanation
"""
print(video.ndim)
del video
"""
Explanation: We know video is 3D because we can also access its ndim attribute.
End of explanation
"""
tensor = np.empty(shape = (2, 640, 480, 360, 100))
print(tensor.shape)
# Axis 0: color channel--used to differentiate between fluorescent markers
# Axis 1: height--same as before
# Axis 2: width--same as before
# Axis 3: depth--capturing 3D depth at each time interval, like a 3D movie
# Axis 4: frame--same as before
"""
Explanation: Another example--to go straight to cutting-edge academic research--is 3D video microscope data of multiple tagged fluorescent markers. This would result in a five-axis NumPy object:
End of explanation
"""
print(tensor.size)
del tensor
"""
Explanation: We can also ask how many elements there are total, using the size attribute:
End of explanation
"""
example = np.empty(shape = (3, 5, 9))
print(example.shape)
sliced = example[0] # Indexed the first axis.
print(sliced.shape)
sliced_again = example[0, 0] # Indexed the first and second axes.
print(sliced_again.shape)
"""
Explanation: These are extreme examples, but they're to illustrate how flexible NumPy arrays are.
If in doubt: once you index the first axis, the NumPy array you get back has the shape of all the remaining axes.
End of explanation
"""
x = np.array([1, 2, 3, 4, 5])
x += 10
print(x)
"""
Explanation: Part 2: NumPy Array Broadcasting
"Broadcasting" is a fancy term for how Python--specifically, NumPy--handles vectorized operations when arrays of differing shapes are involved. (this is, in some sense, "how the sausage is made")
When you write code like this:
End of explanation
"""
zeros = np.zeros(shape = (3, 4))
ones = 1
zeros += ones
print(zeros)
"""
Explanation: how does Python know that you want to add the scalar value 10 to each element of the vector x? Because (in a word) broadcasting.
Broadcasting is the operation through which a low(er)-dimensional array is in some way "replicated" to be the same shape as a high(er)-dimensional array.
We saw this in our previous example: the low-dimensional scalar was replicated, or broadcast, to each element of the array x so that the addition operation could be performed element-wise.
This concept can be generalized to higher-dimensional NumPy arrays.
End of explanation
"""
x = np.zeros(shape = (3, 3))
y = np.ones(4)
x + y
"""
Explanation: In this example, the scalar value 1 is broadcast to all the elements of zeros, converting the operation to element-wise addition.
This all happens under the NumPy hood--we don't see it! It "just works"...most of the time.
There are some rules that broadcasting abides by. Essentially, dimensions of arrays need to be "compatible" in order for broadcasting to work. "Compatible" is defined as
both dimensions are of equal size (e.g., both have the same number of rows)
one of them is 1 (the scalar case)
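For instance (a quick sketch of the rules in action): a (3, 1) column and a (1, 4) row are compatible on every axis pair, so both get stretched to (3, 4):

```python
import numpy as np

# Each axis pair is either equal or contains a 1, so broadcasting
# stretches both operands to the common shape (3, 4).
col = np.arange(3).reshape(3, 1)
row = np.arange(4).reshape(1, 4)
table = col + row

print(table.shape)  # (3, 4)
print(table[2, 3])  # 2 + 3 = 5
```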
If these rules aren't met, you get all kinds of strange errors:
End of explanation
"""
x = np.zeros(shape = (3, 4))
y = np.array([1, 2, 3, 4])
z = x + y
print(z)
"""
Explanation: But on some intuitive level, this hopefully makes sense: there's no reasonable arithmetic operation that can be performed when you have one $3 \times 3$ matrix and a vector of length 4.
To be rigorous, though: it's the trailing dimensions / axes that you want to make sure line up.
End of explanation
"""
x = np.random.standard_normal(size = (7, 4))
print(x)
"""
Explanation: In this example, the shape of x is (3, 4). The shape of y is just 4. Their trailing axes are both 4, therefore the "smaller" array will be broadcast to fit the size of the larger array, and the operation (addition, in this case) is performed element-wise.
Part 3: "Fancy" Indexing
Hopefully you have at least an intuitive understanding of how indexing works so far. Unfortunately, it gets more complicated, but still retains a modicum of simplicity.
First: indexing by boolean masks.
Boolean indexing
We've already seen that you can index by integers. Using the colon operator, you can even specify ranges, slicing out entire swaths of rows and columns.
But suppose we want something very specific; data in our array which satisfies certain criteria, as opposed to data which is found at certain indices?
Put another way: can we pull data out of an array that meets certain conditions?
Let's say you have some data.
End of explanation
"""
mask = x < 0
print(mask)
"""
Explanation: This is randomly generated data, yes, but it could easily be 7 data points in 4 dimensions. That is, we have 7 observations of variables with 4 descriptors. Perhaps it's 7 people who are described by their height, weight, age, and 40-yard dash time. Or it's a matrix of data on 7 video games, each described by their PC Gamer rating, Steam downloads count, average number of active players, and total cheating complaints.
Whatever our data, a common first step before any analysis involves some kind of preprocessing. In this case, if the example we're looking at is the video game scenario from the previous slide, then we know that any negative numbers are junk. After all, how can you have a negative rating? Or a negative number of active players?
So our first course of action might be to set all negative numbers in the data to 0. We could potentially set up a pair of loops, but it's much easier (and faster) to use boolean indexing.
First, we create a mask. This is what it sounds like: it "masks" certain portions of the data we don't want to change (in this case, all the numbers greater than 0).
End of explanation
"""
x[mask] = 0
print(x)
"""
Explanation: Now, we can use our mask to access only the indices we want to set to 0.
End of explanation
"""
mask = (x < 1) & (x > 0.5) # True for any value less than 1 but greater than 0.5
x[mask] = 99
print(x)
"""
Explanation: voilà! Every negative number has been set to 0, and all the other values were left unchanged. Now we can continue with whatever analysis we may have had in mind.
One small caveat with boolean indexing.
Yes, you can string multiple boolean conditions together, as you may recall doing in the lecture with conditionals.
But... and and or DO NOT WORK. You have to use the arithmetic versions of the operators: & (for and) and | (for or).
End of explanation
"""
matrix = np.empty(shape = (8, 4))
for i in range(8):
matrix[i] = i # Broadcasting is happening here!
print(matrix)
"""
Explanation: Fancy Indexing
"Fancy" indexing is a term coined by the NumPy community to refer to this little indexing trick. To explain is simple enough: fancy indexing allows you to index arrays with other [integer] arrays.
Now, to demonstrate:
Let's build a 2D array that, for the sake of simplicity, has across each row the index of that row.
End of explanation
"""
indices = np.array([7, 0, 5, 2])
print(matrix[indices])
"""
Explanation: We have 8 rows and 4 columns, where each row is a vector of the same value repeated across the columns, and that value is the index of the row.
In addition to slicing and boolean indexing, we can also use other NumPy arrays to very selectively pick and choose what elements we want, and even the order in which we want them.
Let's say I want rows 7, 0, 5, and 2. In that order.
End of explanation
"""
matrix = np.arange(32).reshape((8, 4))
print(matrix) # This 8x4 matrix has integer elements that increment by 1 column-wise, then row-wise.
indices = ( np.array([1, 7, 4]), np.array([3, 0, 1]) ) # This is a tuple of 2 NumPy arrays!
print(matrix[indices])
"""
Explanation: Ta-daaa! Pretty spiffy!
But wait, there's more! Rather than just specifying one dimension, you can provide tuples of NumPy arrays that very explicitly pick out certain elements (in a certain order) from another NumPy array.
End of explanation
"""
( np.array([1, 7, 4]), np.array([3, 0, 1]) )
"""
Explanation: Ok, this will take a little explaining, bear with me:
When you pass in tuples as indices, they act as $(x, y)$ coordinate pairs: the first NumPy array of the tuple is the list of $x$ coordinates, while the second NumPy array is the list of corresponding $y$ coordinates.
In this way, the corresponding elements of the two NumPy arrays in the tuple give you the row and column indices to be selected from the original NumPy array.
In our previous example, this was our tuple of indices:
End of explanation
"""
|
zaqwes8811/micro-apps | self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/11-Extended-Kalman-Filters.ipynb | mit | from __future__ import division, print_function
%matplotlib inline
#format the book
import book_format
book_format.set_style()
"""
Explanation: Table of Contents
The Extended Kalman Filter
End of explanation
"""
import kf_book.ekf_internal as ekf_internal
ekf_internal.show_linearization()
"""
Explanation: We have developed the theory for the linear Kalman filter. Then, in the last two chapters we broached the topic of using Kalman filters for nonlinear problems. In this chapter we will learn the Extended Kalman filter (EKF). The EKF handles nonlinearity by linearizing the system at the point of the current estimate, and then the linear Kalman filter is used to filter this linearized system. It was one of the very first techniques used for nonlinear problems, and it remains the most common technique.
The EKF provides significant mathematical challenges to the designer of the filter; this is the most challenging chapter of the book. I do everything I can to avoid the EKF in favor of other techniques that have been developed to filter nonlinear problems. However, the topic is unavoidable; all classic papers and a majority of current papers in the field use the EKF. Even if you do not use the EKF in your own work you will need to be familiar with the topic to be able to read the literature.
Linearizing the Kalman Filter
The Kalman filter uses linear equations, so it does not work with nonlinear problems. Problems can be nonlinear in two ways. First, the process model might be nonlinear. An object falling through the atmosphere encounters drag which reduces its acceleration. The drag coefficient varies based on the velocity of the object. The resulting behavior is nonlinear - it cannot be modeled with linear equations. Second, the measurements could be nonlinear. For example, a radar gives a range and bearing to a target. We use trigonometry, which is nonlinear, to compute the position of the target.
For the linear filter we have these equations for the process and measurement models:
$$\begin{aligned}\dot{\mathbf x} &= \mathbf{Ax} + w_x\\
\mathbf z &= \mathbf{Hx} + w_z
\end{aligned}$$
where $\mathbf A$ is the system dynamics matrix. Using the state space methods covered in the Kalman Filter Math chapter, these equations can be transformed into
$$\begin{aligned}\bar{\mathbf x} &= \mathbf{Fx} \\
\mathbf z &= \mathbf{Hx}
\end{aligned}$$
where $\mathbf F$ is the fundamental matrix. The noise terms $w_x$ and $w_z$ are incorporated into the matrices $\mathbf Q$ and $\mathbf R$. This form of the equations allows us to compute the state at step $k$ given a measurement at step $k$ and the state estimate at step $k-1$. In earlier chapters I built your intuition and minimized the math by using problems describable with Newton's equations. We know how to design $\mathbf F$ based on high school physics.
For the nonlinear model the linear expression $\mathbf{Fx} + \mathbf{Bu}$ is replaced by a nonlinear function $f(\mathbf x, \mathbf u)$, and the linear expression $\mathbf{Hx}$ is replaced by a nonlinear function $h(\mathbf x)$:
$$\begin{aligned}\dot{\mathbf x} &= f(\mathbf x, \mathbf u) + w_x\\
\mathbf z &= h(\mathbf x) + w_z
\end{aligned}$$
You might imagine that we could proceed by finding a new set of Kalman filter equations that optimally solve these equations. But if you remember the charts in the Nonlinear Filtering chapter you'll recall that passing a Gaussian through a nonlinear function results in a probability distribution that is no longer Gaussian. So this will not work.
The EKF does not alter the Kalman filter's linear equations. Instead, it linearizes the nonlinear equations at the point of the current estimate, and uses this linearization in the linear Kalman filter.
Linearize means what it sounds like. We find a line that most closely matches the curve at a defined point. The graph below linearizes the parabola $f(x)=x^2 - 2x$ at $x=1.5$.
End of explanation
"""
ekf_internal.show_radar_chart()
"""
Explanation: If the curve above is the process model, then the dotted lines shows the linearization of that curve for the estimate $x=1.5$.
We linearize systems by taking the derivative, which finds the slope of a curve:
$$\begin{aligned}
f(x) &= x^2 - 2x \\
\frac{df}{dx} &= 2x - 2
\end{aligned}$$
and then evaluating it at $x$:
$$\begin{aligned}m &= f'(x=1.5) \\ &= 2(1.5) - 2 \\ &= 1\end{aligned}$$
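A quick numerical sanity check of that slope (my own sketch, using a central finite difference rather than the analytic derivative):

```python
def f(x):
    return x**2 - 2*x

def slope_at(x, h=1e-6):
    # Central finite-difference approximation of df/dx.
    return (f(x + h) - f(x - h)) / (2 * h)

print(round(slope_at(1.5), 6))  # 1.0, matching 2(1.5) - 2
```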
Linearizing systems of differential equations is similar. We linearize $f(\mathbf x, \mathbf u)$, and $h(\mathbf x)$ by taking the partial derivatives of each to evaluate $\mathbf F$ and $\mathbf H$ at the point $\mathbf x_t$ and $\mathbf u_t$. We call the partial derivative of a matrix the Jacobian. This gives us the the discrete state transition matrix and measurement model matrix:
$$
\begin{aligned}
\mathbf F
&= {\frac{\partial{f(\mathbf x_t, \mathbf u_t)}}{\partial{\mathbf x}}}\biggr|_{\mathbf x_t, \mathbf u_t} \\
\mathbf H &= \frac{\partial{h(\bar{\mathbf x}_t)}}{\partial{\bar{\mathbf x}}}\biggr|_{\bar{\mathbf x}_t}
\end{aligned}
$$
This leads to the following equations for the EKF. I put boxes around the differences from the linear filter:
$$\begin{array}{l|l}
\text{linear Kalman filter} & \text{EKF} \\
\hline
& \boxed{\mathbf F = {\frac{\partial{f(\mathbf x_t, \mathbf u_t)}}{\partial{\mathbf x}}}\biggr|_{\mathbf x_t, \mathbf u_t}} \\
\mathbf{\bar x} = \mathbf{Fx} + \mathbf{Bu} & \boxed{\mathbf{\bar x} = f(\mathbf x, \mathbf u)} \\
\mathbf{\bar P} = \mathbf{FPF}^\mathsf{T}+\mathbf Q & \mathbf{\bar P} = \mathbf{FPF}^\mathsf{T}+\mathbf Q \\
\hline
& \boxed{\mathbf H = \frac{\partial{h(\bar{\mathbf x}_t)}}{\partial{\bar{\mathbf x}}}\biggr|_{\bar{\mathbf x}_t}} \\
\textbf{y} = \mathbf z - \mathbf{H \bar{x}} & \textbf{y} = \mathbf z - \boxed{h(\bar{\mathbf x})} \\
\mathbf{K} = \mathbf{\bar{P}H}^\mathsf{T} (\mathbf{H\bar{P}H}^\mathsf{T} + \mathbf R)^{-1} & \mathbf{K} = \mathbf{\bar{P}H}^\mathsf{T} (\mathbf{H\bar{P}H}^\mathsf{T} + \mathbf R)^{-1} \\
\mathbf x=\mathbf{\bar{x}} +\mathbf{K\textbf{y}} & \mathbf x=\mathbf{\bar{x}} +\mathbf{K\textbf{y}} \\
\mathbf P= (\mathbf{I}-\mathbf{KH})\mathbf{\bar{P}} & \mathbf P= (\mathbf{I}-\mathbf{KH})\mathbf{\bar{P}}
\end{array}$$
We don't normally use $\mathbf{Fx}$ to propagate the state for the EKF as the linearization causes inaccuracies. It is typical to compute $\bar{\mathbf x}$ using a suitable numerical integration technique such as Euler or Runge Kutta. Thus I wrote $\mathbf{\bar x} = f(\mathbf x, \mathbf u)$. For the same reasons we don't use $\mathbf{H\bar{x}}$ in the computation for the residual, opting for the more accurate $h(\bar{\mathbf x})$.
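As an illustrative sketch of those two ideas - numerically linearizing $f$ around the current estimate and propagating the state with an Euler step - here is a toy falling-body system with velocity-squared drag. The function names and the drag model are mine, not part of any filtering library:

```python
import numpy as np

def euler_step(f, x, u, dt):
    # One Euler step of dx/dt = f(x, u); crude, but illustrates the idea.
    return x + f(x, u) * dt

def numerical_jacobian(f, x, u, eps=1e-6):
    # Central finite-difference Jacobian of f with respect to x at (x, u).
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (f(x + dx, u) - f(x - dx, u)) / (2 * eps)
    return J

# Toy nonlinear system: [position, velocity] of a falling object with drag.
def f(x, u):
    pos, vel = x
    return np.array([vel, -9.8 + 0.01 * vel**2])

x = np.array([1000.0, -50.0])
x_bar = euler_step(f, x, None, dt=0.1)       # nonlinear state propagation
A = numerical_jacobian(f, x, None)           # continuous-time linearization
F = np.eye(2) + A * 0.1                      # crude discrete F for small dt
```

In practice a higher-order integrator (e.g. Runge-Kutta) and an analytic Jacobian are preferred; this sketch only shows where each piece plugs into the boxed equations above.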
I think the easiest way to understand the EKF is to start off with an example. Later you may want to come back and reread this section.
Example: Tracking a Airplane
This example tracks an airplane using ground based radar. We implemented a UKF for this problem in the last chapter. Now we will implement an EKF for the same problem so we can compare both the filter performance and the level of effort required to implement the filter.
Radars work by emitting a beam of radio waves and scanning for a return bounce. Anything in the beam's path will reflect some of the signal back to the radar. By timing how long it takes for the reflected signal to get back to the radar, the system can compute the slant distance - the straight-line distance from the radar installation to the object.
The relationship between the radar's slant range distance $r$ and elevation angle $\epsilon$ with the horizontal position $x$ and altitude $y$ of the aircraft is illustrated in the figure below:
End of explanation
"""
from math import sqrt
import numpy as np  # needed for the array returned below

def HJacobian_at(x):
    """ compute Jacobian of H matrix at x """
    horiz_dist = x[0]
    altitude = x[2]
    denom = sqrt(horiz_dist**2 + altitude**2)
    return np.array([[horiz_dist/denom, 0., altitude/denom]])
"""
Explanation: This gives us the equalities:
$$\begin{aligned}
\epsilon &= \tan^{-1} \frac y x\\
r^2 &= x^2 + y^2
\end{aligned}$$
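A quick numeric sanity check of this geometry (the aircraft position below is made up purely for illustration):

```python
from math import atan2, sqrt, sin, cos

x, y = 1000.0, 1000.0          # hypothetical downrange distance and altitude
r = sqrt(x**2 + y**2)          # slant range measured by the radar
epsilon = atan2(y, x)          # elevation angle

# inverting the relationship recovers the aircraft position
x_back, y_back = r*cos(epsilon), r*sin(epsilon)
```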
Design the State Variables
We want to track the position of an aircraft assuming a constant velocity and altitude, and measurements of the slant distance to the aircraft. That means we need 3 state variables - horizontal distance, horizontal velocity, and altitude:
$$\mathbf x = \begin{bmatrix}\mathtt{distance} \\ \mathtt{velocity} \\ \mathtt{altitude}\end{bmatrix}= \begin{bmatrix}x \\ \dot x \\ y\end{bmatrix}$$
Design the Process Model
We assume a Newtonian, kinematic system for the aircraft. We've used this model in previous chapters, so by inspection you may recognize that we want
$$\mathbf F = \left[\begin{array}{cc|c} 1 & \Delta t & 0\\
0 & 1 & 0 \\ \hline
0 & 0 & 1\end{array}\right]$$
I've partitioned the matrix into blocks to show the upper left block is a constant velocity model for $x$, and the lower right block is a constant position model for $y$.
However, let's practice finding these matrices. We model systems with a set of differential equations. We need an equation in the form
$$\dot{\mathbf x} = \mathbf{Ax} + \mathbf{w}$$
where $\mathbf{w}$ is the system noise.
The variables $x$ and $y$ are independent so we can compute them separately. The differential equations for motion in one dimension are:
$$\begin{aligned}v &= \dot x \\
a &= \ddot{x} = 0\end{aligned}$$
Now we put the differential equations into state-space form. If this was a second or greater order differential system we would have to first reduce them to an equivalent set of first degree equations. The equations are first order, so we put them in state space matrix form as
$$\begin{aligned}\begin{bmatrix}\dot x \\ \ddot{x}\end{bmatrix} &= \begin{bmatrix}0&1\\0&0\end{bmatrix} \begin{bmatrix}x \\
\dot x\end{bmatrix} \\ \dot{\mathbf x} &= \mathbf{Ax}\end{aligned}$$
where $\mathbf A=\begin{bmatrix}0&1\\0&0\end{bmatrix}$.
Recall that $\mathbf A$ is the system dynamics matrix. It describes a set of linear differential equations. From it we must compute the state transition matrix $\mathbf F$. $\mathbf F$ describes a discrete set of linear equations which compute $\mathbf x$ for a discrete time step $\Delta t$.
A common way to compute $\mathbf F$ is to use the power series expansion of the matrix exponential:
$$\mathbf F(\Delta t) = e^{\mathbf A\Delta t} = \mathbf{I} + \mathbf A\Delta t + \frac{(\mathbf A\Delta t)^2}{2!} + \frac{(\mathbf A \Delta t)^3}{3!} + ... $$
$\mathbf A^2 = \begin{bmatrix}0&0\\0&0\end{bmatrix}$, so all higher powers of $\mathbf A$ are also $\mathbf{0}$. Thus the power series expansion is:
$$
\begin{aligned}
\mathbf F &=\mathbf{I} + \mathbf A\Delta t + \mathbf{0} \\
&= \begin{bmatrix}1&0\\0&1\end{bmatrix} + \begin{bmatrix}0&1\\0&0\end{bmatrix}\Delta t\\
\mathbf F &= \begin{bmatrix}1&\Delta t\\0&1\end{bmatrix}
\end{aligned}$$
This is the same result used by the kinematic equations! This exercise was unnecessary other than to illustrate finding the state transition matrix from linear differential equations. We will conclude the chapter with an example that will require the use of this technique.
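We can confirm this result numerically with SciPy's matrix exponential (just a verification, not part of the filter itself):

```python
import numpy as np
from scipy.linalg import expm

dt = 0.05
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
F = expm(A * dt)   # exact for this nilpotent A: I + A*dt
```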
Design the Measurement Model
The measurement function takes the state estimate of the prior $\bar{\mathbf x}$ and turns it into a measurement of the slant range distance. We use the Pythagorean theorem to derive:
$$h(\bar{\mathbf x}) = \sqrt{x^2 + y^2}$$
The relationship between the slant distance and the position on the ground is nonlinear due to the square root. We linearize it by evaluating its partial derivative at $\mathbf x_t$:
$$
\mathbf H = \frac{\partial{h(\bar{\mathbf x})}}{\partial{\bar{\mathbf x}}}\biggr|_{\bar{\mathbf x}_t}
$$
The partial derivative of a matrix is called a Jacobian, and takes the form
$$\frac{\partial \mathbf H}{\partial \bar{\mathbf x}} =
\begin{bmatrix}
\frac{\partial h_1}{\partial x_1} & \frac{\partial h_1}{\partial x_2} &\dots \\
\frac{\partial h_2}{\partial x_1} & \frac{\partial h_2}{\partial x_2} &\dots \\
\vdots & \vdots
\end{bmatrix}
$$
In other words, each element in the matrix is the partial derivative of the function $h$ with respect to the $x$ variables. For our problem we have
$$\mathbf H = \begin{bmatrix}{\partial h}/{\partial x} & {\partial h}/{\partial \dot{x}} & {\partial h}/{\partial y}\end{bmatrix}$$
Solving each in turn:
$$\begin{aligned}
\frac{\partial h}{\partial x} &= \frac{\partial}{\partial x} \sqrt{x^2 + y^2} \\
&= \frac{x}{\sqrt{x^2 + y^2}}
\end{aligned}$$
and
$$\begin{aligned}
\frac{\partial h}{\partial \dot{x}} &=
\frac{\partial}{\partial \dot{x}} \sqrt{x^2 + y^2} \\
&= 0
\end{aligned}$$
and
$$\begin{aligned}
\frac{\partial h}{\partial y} &= \frac{\partial}{\partial y} \sqrt{x^2 + y^2} \\
&= \frac{y}{\sqrt{x^2 + y^2}}
\end{aligned}$$
giving us
$$\mathbf H =
\begin{bmatrix}
\frac{x}{\sqrt{x^2 + y^2}} &
0 &
\frac{y}{\sqrt{x^2 + y^2}}
\end{bmatrix}$$
This may seem daunting, so step back and recognize that all of this math is doing something very simple. We have an equation for the slant range to the airplane which is nonlinear. The Kalman filter only works with linear equations, so we need to find a linear equation that approximates $\mathbf H$. As we discussed above, finding the slope of a nonlinear equation at a given point is a good approximation. For the Kalman filter, the 'given point' is the state variable $\mathbf x$ so we need to take the derivative of the slant range with respect to $\mathbf x$. For the linear Kalman filter $\mathbf H$ was a constant that we computed prior to running the filter. For the EKF $\mathbf H$ is recomputed at each epoch as the evaluation point $\bar{\mathbf x}$ changes.
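If you want to double check an analytic Jacobian, a finite-difference approximation is a handy sanity test. This sketch compares the slant-range Jacobian against a forward-difference estimate (the helper below is illustrative, not FilterPy API):

```python
import numpy as np

def h(x):
    """Slant range for state [x, velocity, y]."""
    return np.sqrt(x[0]**2 + x[2]**2)

def numerical_jacobian(f, x, eps=1e-6):
    """Forward-difference estimate of the 1xN Jacobian of scalar f at x."""
    J = np.zeros((1, len(x)))
    fx = f(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += eps
        J[0, i] = (f(xp) - fx) / eps
    return J

x = np.array([1000.0, 100.0, 1000.0])
J_num = numerical_jacobian(h, x)
r = np.sqrt(x[0]**2 + x[2]**2)
J_analytic = np.array([[x[0]/r, 0.0, x[2]/r]])
```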
To make this more concrete, let's now write a Python function that computes the Jacobian of $h$ for this problem.
End of explanation
"""
def hx(x):
""" compute measurement for slant range that
would correspond to state x.
"""
return (x[0]**2 + x[2]**2) ** 0.5
"""
Explanation: Finally, let's provide the code for $h(\bar{\mathbf x})$:
End of explanation
"""
from numpy.random import randn
import math
class RadarSim(object):
""" Simulates the radar signal returns from an object
flying at a constant altityude and velocity in 1D.
"""
def __init__(self, dt, pos, vel, alt):
self.pos = pos
self.vel = vel
self.alt = alt
self.dt = dt
def get_range(self):
""" Returns slant range to the object. Call once
for each new measurement at dt time from last call.
"""
# add some process noise to the system
self.vel = self.vel + .1*randn()
self.alt = self.alt + .1*randn()
self.pos = self.pos + self.vel*self.dt
# add measurement noise
err = self.pos * 0.05*randn()
slant_dist = math.sqrt(self.pos**2 + self.alt**2)
return slant_dist + err
"""
Explanation: Now let's write a simulation for our radar.
End of explanation
"""
from filterpy.common import Q_discrete_white_noise
from filterpy.kalman import ExtendedKalmanFilter
from numpy import eye, array, asarray
import numpy as np
dt = 0.05
rk = ExtendedKalmanFilter(dim_x=3, dim_z=1)
radar = RadarSim(dt, pos=0., vel=100., alt=1000.)
# make an imperfect starting guess
rk.x = array([radar.pos-100, radar.vel+100, radar.alt+1000])
rk.F = eye(3) + array([[0, 1, 0],
[0, 0, 0],
[0, 0, 0]]) * dt
range_std = 5. # meters
rk.R = np.diag([range_std**2])
rk.Q[0:2, 0:2] = Q_discrete_white_noise(2, dt=dt, var=0.1)
rk.Q[2,2] = 0.1
rk.P *= 50
xs, track = [], []
for i in range(int(20/dt)):
z = radar.get_range()
track.append((radar.pos, radar.vel, radar.alt))
rk.update(array([z]), HJacobian_at, hx)
xs.append(rk.x)
rk.predict()
xs = asarray(xs)
track = asarray(track)
time = np.arange(0, len(xs)*dt, dt)
ekf_internal.plot_radar(xs, track, time)
"""
Explanation: Design Process and Measurement Noise
The radar measures the range to a target. We will use $\sigma_{range}= 5$ meters for the noise. This gives us
$$\mathbf R = \begin{bmatrix}\sigma_{range}^2\end{bmatrix} = \begin{bmatrix}25\end{bmatrix}$$
The design of $\mathbf Q$ requires some discussion. The state $\mathbf x= \begin{bmatrix}x & \dot x & y\end{bmatrix}^\mathtt{T}$. The first two elements are position (down range distance) and velocity, so we can use Q_discrete_white_noise to compute the values for the upper left hand side of $\mathbf Q$. The third element of $\mathbf x$ is altitude, which we are assuming is independent of the down range distance. That leads us to a block design of $\mathbf Q$ of:
$$\mathbf Q = \begin{bmatrix}\mathbf Q_\mathtt{x} & 0 \\ 0 & \mathbf Q_\mathtt{y}\end{bmatrix}$$
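One way to sketch this block construction (the dim=2 form of FilterPy's `Q_discrete_white_noise` is written out explicitly so the example stands alone; the altitude variance of 0.1 is just an assumed value):

```python
import numpy as np
from scipy.linalg import block_diag

dt, var = 0.05, 0.1
# dim=2 discrete white noise block: var * [[dt^4/4, dt^3/2], [dt^3/2, dt^2]]
q_xv = var * np.array([[dt**4/4, dt**3/2],
                       [dt**3/2, dt**2  ]])
q_alt = np.array([[0.1]])        # independent altitude noise
Q = block_diag(q_xv, q_alt)
```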
Implementation
FilterPy provides the class ExtendedKalmanFilter. It works similarly to the KalmanFilter class we have been using, except that it allows you to provide a function that computes the Jacobian of $\mathbf H$ and the function $h(\mathbf x)$.
We start by importing the filter and creating it. The dimension of x is 3 and z has dimension 1.
```python
from filterpy.kalman import ExtendedKalmanFilter

rk = ExtendedKalmanFilter(dim_x=3, dim_z=1)
```

We create the radar simulator:

```python
radar = RadarSim(dt, pos=0., vel=100., alt=1000.)
```
We initialize the filter with a deliberately imperfect guess of the airplane's position:

```python
rk.x = array([radar.pos-100, radar.vel+100, radar.alt+1000])
```
We assign the system matrix using the first term of the Taylor series expansion we computed above:
```python
dt = 0.05
rk.F = eye(3) + array([[0, 1, 0],
                       [0, 0, 0],
                       [0, 0, 0]])*dt
```
After assigning reasonable values to $\mathbf R$, $\mathbf Q$, and $\mathbf P$ we can run the filter with a simple loop. We pass the functions for computing the Jacobian of $\mathbf H$ and $h(x)$ into the update method.
```python
for i in range(int(20/dt)):
    z = radar.get_range()
    rk.update(array([z]), HJacobian_at, hx)
    rk.predict()
```
Adding some boilerplate code to save and plot the results we get:
End of explanation
"""
import sympy
sympy.init_printing(use_latex=True)
x, x_vel, y = sympy.symbols('x, x_vel y')
H = sympy.Matrix([sympy.sqrt(x**2 + y**2)])
state = sympy.Matrix([x, x_vel, y])
H.jacobian(state)
"""
Explanation: Using SymPy to compute Jacobians
Depending on your experience with derivatives you may have found the computation of the Jacobian difficult. Even if you found it easy, a slightly more difficult problem easily leads to very difficult computations.
As explained in Appendix A, we can use the SymPy package to compute the Jacobian for us.
End of explanation
"""
ekf_internal.plot_bicycle()
"""
Explanation: This result is the same as the result we computed above, and with much less effort on our part!
Robot Localization
It's time to try a real problem. I warn you that this section is difficult. However, most books choose simple, textbook problems with simple answers, and you are left wondering how to solve a real world problem.
We will consider the problem of robot localization. We already implemented this in the Unscented Kalman Filter chapter, and I recommend you read it now if you haven't already. In this scenario we have a robot that is moving through a landscape using a sensor to detect landmarks. This could be a self driving car using computer vision to identify trees, buildings, and other landmarks. It might be one of those small robots that vacuum your house, or a robot in a warehouse.
The robot has 4 wheels in the same configuration used by automobiles. It maneuvers by pivoting the front wheels. This causes the robot to pivot around the rear axle while moving forward. This is nonlinear behavior which we will have to model.
The robot has a sensor that measures the range and bearing to known targets in the landscape. This is nonlinear because computing a position from a range and bearing requires square roots and trigonometry.
Both the process model and measurement models are nonlinear. The EKF accommodates both, so we provisionally conclude that the EKF is a viable choice for this problem.
Robot Motion Model
At a first approximation an automobile steers by pivoting the front tires while moving forward. The front of the car moves in the direction that the wheels are pointing while pivoting around the rear tires. This simple description is complicated by issues such as slippage due to friction, the differing behavior of the rubber tires at different speeds, and the need for the outside tire to travel a different radius than the inner tire. Accurately modeling steering requires a complicated set of differential equations.
For lower speed robotic applications a simpler bicycle model has been found to perform well. This is a depiction of the model:
End of explanation
"""
import sympy
from sympy.abc import alpha, x, y, v, w, R, theta
from sympy import symbols, Matrix
sympy.init_printing(use_latex="mathjax", fontsize='16pt')
time = symbols('t')
d = v*time
beta = (d/w)*sympy.tan(alpha)
r = w/sympy.tan(alpha)
fxu = Matrix([[x-r*sympy.sin(theta) + r*sympy.sin(theta+beta)],
[y+r*sympy.cos(theta)- r*sympy.cos(theta+beta)],
[theta+beta]])
F = fxu.jacobian(Matrix([x, y, theta]))
F
"""
Explanation: In the Unscented Kalman Filter chapter we derived these equations:
$$\begin{aligned}
\beta &= \frac d w \tan(\alpha) \\
x &= x - R\sin(\theta) + R\sin(\theta + \beta) \\
y &= y + R\cos(\theta) - R\cos(\theta + \beta) \\
\theta &= \theta + \beta
\end{aligned}
$$
where $\theta$ is the robot's heading.
You do not need to understand this model in detail if you are not interested in steering models. The important thing to recognize is that our motion model is nonlinear, and we will need to deal with that with our Kalman filter.
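As a standalone illustration, one discrete step of this model can be coded directly (a sketch that ignores the straight-line special case when $\alpha = 0$):

```python
from math import sin, cos, tan

def bicycle_step(x, y, theta, v, alpha, w, dt):
    """Advance the bicycle model one step of length dt.
    v is speed, alpha the steering angle, w the wheelbase."""
    d = v * dt
    beta = (d / w) * tan(alpha)
    R = w / tan(alpha)
    x += -R*sin(theta) + R*sin(theta + beta)
    y += R*cos(theta) - R*cos(theta + beta)
    return x, y, theta + beta

# with a near-zero steering angle the robot moves almost straight ahead
x, y, theta = bicycle_step(0.0, 0.0, 0.0, v=1.0, alpha=1e-6, w=0.5, dt=1.0)
```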
Design the State Variables
For our filter we will maintain the position $x,y$ and orientation $\theta$ of the robot:
$$\mathbf x = \begin{bmatrix}x \\ y \\ \theta\end{bmatrix}$$
Our control input $\mathbf u$ is the velocity $v$ and steering angle $\alpha$:
$$\mathbf u = \begin{bmatrix}v \\ \alpha\end{bmatrix}$$
Design the System Model
We model our system as a nonlinear motion model plus noise.
$$\bar x = f(x, u) + \mathcal{N}(0, Q)$$
Using the motion model for a robot that we created above, we can expand this to
$$\bar{\begin{bmatrix}x\\y\\\theta\end{bmatrix}} = \begin{bmatrix}x\\y\\\theta\end{bmatrix} +
\begin{bmatrix}- R\sin(\theta) + R\sin(\theta + \beta) \\
R\cos(\theta) - R\cos(\theta + \beta) \\
\beta\end{bmatrix}$$
We find $\mathbf F$ by taking the Jacobian of $f(x,u)$.
$$\mathbf F = \frac{\partial f(x, u)}{\partial x} =\begin{bmatrix}
\frac{\partial f_1}{\partial x} &
\frac{\partial f_1}{\partial y} &
\frac{\partial f_1}{\partial \theta}\\
\frac{\partial f_2}{\partial x} &
\frac{\partial f_2}{\partial y} &
\frac{\partial f_2}{\partial \theta} \\
\frac{\partial f_3}{\partial x} &
\frac{\partial f_3}{\partial y} &
\frac{\partial f_3}{\partial \theta}
\end{bmatrix}
$$
When we calculate these we get
$$\mathbf F = \begin{bmatrix}
1 & 0 & -R\cos(\theta) + R\cos(\theta+\beta) \\
0 & 1 & -R\sin(\theta) + R\sin(\theta+\beta) \\
0 & 0 & 1
\end{bmatrix}$$
We can double check our work with SymPy.
End of explanation
"""
# reduce common expressions
B, R = symbols('beta, R')
F = F.subs((d/w)*sympy.tan(alpha), B)
F.subs(w/sympy.tan(alpha), R)
"""
Explanation: That looks a bit complicated. We can use SymPy to substitute terms:
End of explanation
"""
V = fxu.jacobian(Matrix([v, alpha]))
V = V.subs(sympy.tan(alpha)/w, 1/R)
V = V.subs(time*v/R, B)
V = V.subs(time*v, 'd')
V
"""
Explanation: This form verifies that the computation of the Jacobian is correct.
Now we can turn our attention to the noise. Here, the noise is in our control input, so it is in control space. In other words, we command a specific velocity and steering angle, but we need to convert that into errors in $x, y, \theta$. In a real system this might vary depending on velocity, so it will need to be recomputed for every prediction. I will choose this as the noise model; for a real robot you will need to choose a model that accurately depicts the error in your system.
$$\mathbf{M} = \begin{bmatrix}\sigma_{vel}^2 & 0 \\ 0 & \sigma_\alpha^2\end{bmatrix}$$
If this was a linear problem we would convert from control space to state space using the by now familiar $\mathbf{FMF}^\mathsf T$ form. Since our motion model is nonlinear we do not try to find a closed form solution to this, but instead linearize it with a Jacobian which we will name $\mathbf{V}$.
$$\mathbf{V} = \frac{\partial f(x, u)}{\partial u} = \begin{bmatrix}
\frac{\partial f_1}{\partial v} & \frac{\partial f_1}{\partial \alpha} \\
\frac{\partial f_2}{\partial v} & \frac{\partial f_2}{\partial \alpha} \\
\frac{\partial f_3}{\partial v} & \frac{\partial f_3}{\partial \alpha}
\end{bmatrix}$$
These partial derivatives become very difficult to work with. Let's compute them with SymPy.
End of explanation
"""
px, py = symbols('p_x, p_y')
z = Matrix([[sympy.sqrt((px-x)**2 + (py-y)**2)],
[sympy.atan2(py-y, px-x) - theta]])
z.jacobian(Matrix([x, y, theta]))
"""
Explanation: This should give you an appreciation of how quickly the EKF becomes mathematically intractable.
This gives us the final form of our prediction equations:
$$\begin{aligned}
\mathbf{\bar x} &= \mathbf x +
\begin{bmatrix}- R\sin(\theta) + R\sin(\theta + \beta) \\
R\cos(\theta) - R\cos(\theta + \beta) \\
\beta\end{bmatrix}\\
\mathbf{\bar P} &=\mathbf{FPF}^{\mathsf T} + \mathbf{VMV}^{\mathsf T}
\end{aligned}$$
This form of linearization is not the only way to predict $\mathbf x$. For example, we could use a numerical integration technique such as Runge Kutta to compute the movement
of the robot. This will be required if the time step is relatively large. Things are not as cut and dried with the EKF as for the Kalman filter. For a real problem you have to carefully model your system with differential equations and then determine the most appropriate way to solve that system. The correct approach depends on the accuracy you require, how nonlinear the equations are, your processor budget, and numerical stability concerns.
Design the Measurement Model
The robot's sensor provides a noisy bearing and range measurement to multiple known locations in the landscape. The measurement model must convert the state $\begin{bmatrix}x & y&\theta\end{bmatrix}^\mathsf T$ into a range and bearing to the landmark. If $\mathbf p$
is the position of a landmark, the range $r$ is
$$r = \sqrt{(p_x - x)^2 + (p_y - y)^2}$$
The sensor provides bearing relative to the orientation of the robot, so we must subtract the robot's orientation from the bearing to get the sensor reading, like so:
$$\phi = \arctan(\frac{p_y - y}{p_x - x}) - \theta$$
Thus our measurement model $h$ is
$$\begin{aligned}
\mathbf z& = h(\bar{\mathbf x}, \mathbf p) &+ \mathcal{N}(0, R)\\
&= \begin{bmatrix}
\sqrt{(p_x - x)^2 + (p_y - y)^2} \\
\arctan(\frac{p_y - y}{p_x - x}) - \theta
\end{bmatrix} &+ \mathcal{N}(0, R)
\end{aligned}$$
This is clearly nonlinear, so we need to linearize $h$ at $\mathbf x$ by taking its Jacobian. We compute that with SymPy below.
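Before turning to SymPy, a quick numeric evaluation of $h$ helps fix intuition (the robot pose and landmark below are made-up values):

```python
from math import sqrt, atan2

def measurement(x, y, theta, px, py):
    """Noise-free range and bearing from pose (x, y, theta) to landmark (px, py)."""
    rng = sqrt((px - x)**2 + (py - y)**2)
    brg = atan2(py - y, px - x) - theta
    return rng, brg

rng, brg = measurement(2.0, 6.0, 0.0, 5.0, 10.0)  # a 3-4-5 triangle, so rng == 5
```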
End of explanation
"""
from math import sqrt
def H_of(x, landmark_pos):
""" compute Jacobian of H matrix where h(x) computes
the range and bearing to a landmark for state x """
px = landmark_pos[0]
py = landmark_pos[1]
hyp = (px - x[0, 0])**2 + (py - x[1, 0])**2
dist = sqrt(hyp)
H = array(
[[-(px - x[0, 0]) / dist, -(py - x[1, 0]) / dist, 0],
[ (py - x[1, 0]) / hyp, -(px - x[0, 0]) / hyp, -1]])
return H
"""
Explanation: Now we need to write that as a Python function. For example we might write:
End of explanation
"""
from math import atan2
def Hx(x, landmark_pos):
""" takes a state variable and returns the measurement
that would correspond to that state.
"""
px = landmark_pos[0]
py = landmark_pos[1]
dist = sqrt((px - x[0, 0])**2 + (py - x[1, 0])**2)
Hx = array([[dist],
[atan2(py - x[1, 0], px - x[0, 0]) - x[2, 0]]])
return Hx
"""
Explanation: We also need to define a function that converts the system state into a measurement.
End of explanation
"""
from filterpy.kalman import ExtendedKalmanFilter as EKF
from numpy import dot, array, sqrt
class RobotEKF(EKF):
def __init__(self, dt, wheelbase, std_vel, std_steer):
EKF.__init__(self, 3, 2, 2)
self.dt = dt
self.wheelbase = wheelbase
self.std_vel = std_vel
self.std_steer = std_steer
a, x, y, v, w, theta, time = symbols(
'a, x, y, v, w, theta, t')
d = v*time
beta = (d/w)*sympy.tan(a)
r = w/sympy.tan(a)
self.fxu = Matrix(
[[x-r*sympy.sin(theta)+r*sympy.sin(theta+beta)],
[y+r*sympy.cos(theta)-r*sympy.cos(theta+beta)],
[theta+beta]])
self.F_j = self.fxu.jacobian(Matrix([x, y, theta]))
self.V_j = self.fxu.jacobian(Matrix([v, a]))
# save dictionary and it's variables for later use
self.subs = {x: 0, y: 0, v:0, a:0,
time:dt, w:wheelbase, theta:0}
self.x_x, self.x_y, = x, y
self.v, self.a, self.theta = v, a, theta
def predict(self, u=0):
self.x = self.move(self.x, u, self.dt)
self.subs[self.theta] = self.x[2, 0]
self.subs[self.v] = u[0]
self.subs[self.a] = u[1]
F = array(self.F_j.evalf(subs=self.subs)).astype(float)
V = array(self.V_j.evalf(subs=self.subs)).astype(float)
# covariance of motion noise in control space
M = array([[self.std_vel*u[0]**2, 0],
[0, self.std_steer**2]])
self.P = dot(F, self.P).dot(F.T) + dot(V, M).dot(V.T)
def move(self, x, u, dt):
hdg = x[2, 0]
vel = u[0]
steering_angle = u[1]
dist = vel * dt
if abs(steering_angle) > 0.001: # is robot turning?
beta = (dist / self.wheelbase) * tan(steering_angle)
r = self.wheelbase / tan(steering_angle) # radius
dx = np.array([[-r*sin(hdg) + r*sin(hdg + beta)],
[r*cos(hdg) - r*cos(hdg + beta)],
[beta]])
else: # moving in straight line
dx = np.array([[dist*cos(hdg)],
[dist*sin(hdg)],
[0]])
return x + dx
"""
Explanation: Design Measurement Noise
It is reasonable to assume that the noise of the range and bearing measurements are independent, hence
$$\mathbf R=\begin{bmatrix}\sigma_{range}^2 & 0 \\ 0 & \sigma_{bearing}^2\end{bmatrix}$$
Implementation
We will use FilterPy's ExtendedKalmanFilter class to implement the filter. Its predict() method uses the standard linear equations for the process model. Ours is nonlinear, so we will have to override predict() with our own implementation. I'll want to also use this class to simulate the robot, so I'll add a method move() that computes the position of the robot which both predict() and my simulation can call.
The matrices for the prediction step are quite large. While writing this code I made several errors before I finally got it working. I only found my errors by using SymPy's evalf function. evalf evaluates a SymPy Matrix with specific values for the variables. I decided to demonstrate this technique to you, and used evalf in the Kalman filter code. You'll need to understand a couple of points.
First, evalf uses a dictionary to specify the values. For example, if your matrix contains an x and y, you can write
```python
M.evalf(subs={x:3, y:17})
```
to evaluate the matrix for x=3 and y=17.
Second, evalf returns a sympy.Matrix object. Use numpy.array(M).astype(float) to convert it to a NumPy array. numpy.array(M) creates an array of type object, which is not what you want.
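Putting those two points together, a minimal end-to-end demonstration:

```python
import numpy as np
import sympy
from sympy.abc import x, y

M = sympy.Matrix([[x, y],
                  [y, x]])
# evalf returns a sympy.Matrix; astype(float) converts the object array
m = np.array(M.evalf(subs={x: 3, y: 17})).astype(float)
```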
Here is the code for the EKF:
End of explanation
"""
def residual(a, b):
""" compute residual (a-b) between measurements containing
[range, bearing]. Bearing is normalized to [-pi, pi)"""
y = a - b
y[1] = y[1] % (2 * np.pi) # force in range [0, 2 pi)
if y[1] > np.pi: # move to [-pi, pi)
y[1] -= 2 * np.pi
return y
"""
Explanation: Now we have another issue to handle. The residual is notionally computed as $y = z - h(x)$ but this will not work because our measurement contains an angle in it. Suppose z has a bearing of $1^\circ$ and $h(x)$ has a bearing of $359^\circ$. Naively subtracting them would yield an angular difference of $-358^\circ$, whereas the correct value is $2^\circ$. We have to write code to correctly compute the bearing residual.
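Here is a small standalone check of that normalization, using the same 1°/359° example (the residual logic above is duplicated so the snippet runs on its own):

```python
import numpy as np

def residual(a, b):
    """(a - b) for [range, bearing], bearing wrapped into [-pi, pi)."""
    y = a - b
    y[1] = y[1] % (2 * np.pi)
    if y[1] > np.pi:
        y[1] -= 2 * np.pi
    return y

z  = np.array([10.0, np.radians(1.0)])    # measured bearing: 1 degree
hx = np.array([10.0, np.radians(359.0)])  # predicted bearing: 359 degrees
y = residual(z, hx)                       # bearing residual is +2 degrees
```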
End of explanation
"""
from filterpy.stats import plot_covariance_ellipse
from math import sqrt, tan, cos, sin, atan2
import matplotlib.pyplot as plt
dt = 1.0
def z_landmark(lmark, sim_pos, std_rng, std_brg):
x, y = sim_pos[0, 0], sim_pos[1, 0]
d = np.sqrt((lmark[0] - x)**2 + (lmark[1] - y)**2)
a = atan2(lmark[1] - y, lmark[0] - x) - sim_pos[2, 0]
z = np.array([[d + randn()*std_rng],
[a + randn()*std_brg]])
return z
def ekf_update(ekf, z, landmark):
ekf.update(z, HJacobian=H_of, Hx=Hx,
residual=residual,
args=(landmark), hx_args=(landmark))
def run_localization(landmarks, std_vel, std_steer,
std_range, std_bearing,
step=10, ellipse_step=20, ylim=None):
ekf = RobotEKF(dt, wheelbase=0.5, std_vel=std_vel,
std_steer=std_steer)
ekf.x = array([[2, 6, .3]]).T # x, y, steer angle
ekf.P = np.diag([.1, .1, .1])
ekf.R = np.diag([std_range**2, std_bearing**2])
sim_pos = ekf.x.copy() # simulated position
# steering command (vel, steering angle radians)
u = array([1.1, .01])
plt.figure()
plt.scatter(landmarks[:, 0], landmarks[:, 1],
marker='s', s=60)
track = []
for i in range(200):
sim_pos = ekf.move(sim_pos, u, dt/10.) # simulate robot
track.append(sim_pos)
if i % step == 0:
ekf.predict(u=u)
if i % ellipse_step == 0:
plot_covariance_ellipse(
(ekf.x[0,0], ekf.x[1,0]), ekf.P[0:2, 0:2],
std=6, facecolor='k', alpha=0.3)
x, y = sim_pos[0, 0], sim_pos[1, 0]
for lmark in landmarks:
z = z_landmark(lmark, sim_pos,
std_range, std_bearing)
ekf_update(ekf, z, lmark)
if i % ellipse_step == 0:
plot_covariance_ellipse(
(ekf.x[0,0], ekf.x[1,0]), ekf.P[0:2, 0:2],
std=6, facecolor='g', alpha=0.8)
track = np.array(track)
plt.plot(track[:, 0], track[:,1], color='k', lw=2)
plt.axis('equal')
plt.title("EKF Robot localization")
if ylim is not None: plt.ylim(*ylim)
plt.show()
return ekf
landmarks = array([[5, 10], [10, 5], [15, 15]])
ekf = run_localization(
landmarks, std_vel=0.1, std_steer=np.radians(1),
std_range=0.3, std_bearing=0.1)
print('Final P:', ekf.P.diagonal())
"""
Explanation: The rest of the code runs the simulation and plots the results, and shouldn't need too much comment by now. I create a variable landmarks that contains the landmark coordinates. I update the simulated robot position 10 times a second, but run the EKF only once per second. This is for two reasons. First, we are not using Runge Kutta to integrate the differential equations of motion, so a narrow time step allows our simulation to be more accurate. Second, it is fairly normal in embedded systems to have limited processing speed. This forces you to run your Kalman filter only as frequently as absolutely needed.
End of explanation
"""
landmarks = array([[5, 10], [10, 5], [15, 15], [20, 5]])
ekf = run_localization(
landmarks, std_vel=0.1, std_steer=np.radians(1),
std_range=0.3, std_bearing=0.1)
plt.show()
print('Final P:', ekf.P.diagonal())
"""
Explanation: I have plotted the landmarks as solid squares. The path of the robot is drawn with a black line. The covariance ellipses for the predict step are light gray, and the covariances of the update are shown in green. To make them visible at this scale I have set the ellipse boundary at 6$\sigma$.
We can see that there is a lot of uncertainty added by our motion model, and that most of the error is in the direction of motion. We determine that from the shape of the uncertainty ellipses. After a few steps we can see that the filter incorporates the landmark measurements and the errors improve.
I used the same initial conditions and landmark locations in the UKF chapter. The UKF achieves much better accuracy in terms of the error ellipse. Both perform roughly as well as far as their estimate for $\mathbf x$ is concerned.
Now let's add another landmark.
End of explanation
"""
ekf = run_localization(
landmarks[0:2], std_vel=1.e-10, std_steer=1.e-10,
std_range=1.4, std_bearing=.05)
print('Final P:', ekf.P.diagonal())
"""
Explanation: The uncertainty in the estimates near the end of the track is smaller. We can see the effect that multiple landmarks have on our uncertainty by only using the first two landmarks.
End of explanation
"""
ekf = run_localization(
landmarks[0:1], std_vel=1.e-10, std_steer=1.e-10,
std_range=1.4, std_bearing=.05)
print('Final P:', ekf.P.diagonal())
"""
Explanation: The estimate quickly diverges from the robot's path after passing the landmarks. The covariance also grows quickly. Let's see what happens with only one landmark:
End of explanation
"""
landmarks = array([[5, 10], [10, 5], [15, 15], [20, 5], [15, 10],
[10,14], [23, 14], [25, 20], [10, 20]])
ekf = run_localization(
landmarks, std_vel=0.1, std_steer=np.radians(1),
std_range=0.3, std_bearing=0.1, ylim=(0, 21))
print('Final P:', ekf.P.diagonal())
"""
Explanation: As you probably suspected, one landmark produces a very bad result. Conversely, a large number of landmarks allows us to make very accurate estimates.
End of explanation
"""
import kf_book.nonlinear_plots as nonlinear_plots
nonlinear_plots.plot_ekf_vs_mc()
"""
Explanation: Discussion
I said that this was a real problem, and in some ways it is. I've seen alternative presentations that used robot motion models that led to simpler Jacobians. On the other hand, my model of the movement is also simplistic in several ways. First, it uses a bicycle model. A real car has two sets of tires, and each travels on a different radius. The wheels do not grip the surface perfectly. I also assumed that the robot responds instantaneously to the control input. Sebastian Thrun writes in Probabilistic Robotics that this simplified model is justified because the filters perform well when used to track real vehicles. The lesson here is that while you have to have a reasonably accurate nonlinear model, it does not need to be perfect to operate well. As a designer you will need to balance the fidelity of your model with the difficulty of the math and the CPU time required to perform the linear algebra.
Another way in which this problem was simplistic is that we assumed that we knew the correspondence between the landmarks and measurements. But suppose we are using radar - how would we know that a specific signal return corresponded to a specific building in the local scene? This question hints at SLAM algorithms - simultaneous localization and mapping. SLAM is not the point of this book, so I will not elaborate on this topic.
UKF vs EKF
In the last chapter I used the UKF to solve this problem. The difference in implementation should be very clear. Computing the Jacobians for the state and measurement models was not trivial despite a rudimentary motion model. A different problem could result in a Jacobian which is difficult or impossible to derive analytically. In contrast, the UKF only requires you to provide a function that computes the system motion model and another for the measurement model.
There are many cases where the Jacobian cannot be found analytically. The details are beyond the scope of this book, but you will have to use numerical methods to compute the Jacobian. That undertaking is not trivial, and you will spend a significant portion of a master's degree at a STEM school learning techniques to handle such situations. Even then you'll likely only be able to solve problems related to your field - an aeronautical engineer learns a lot about Navier Stokes equations, but not much about modelling chemical reaction rates.
So, UKFs are easy. Are they accurate? In practice they often perform better than the EKF. You can find plenty of research papers that prove that the UKF outperforms the EKF in various problem domains. It's not hard to understand why this would be true. The EKF works by linearizing the system model and measurement model at a single point, and the UKF uses $2n+1$ points.
Let's look at a specific example. Take $f(x) = x^3$ and pass a Gaussian distribution through it. I will compute an accurate answer using a Monte Carlo simulation. I generate 50,000 points randomly distributed according to the Gaussian, pass each through $f(x)$, then compute the mean and variance of the result.
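A sketch of that Monte Carlo experiment, using an assumed Gaussian with mean 1 and standard deviation 0.1 (the seed and sample count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
xs = rng.normal(loc=1.0, scale=0.1, size=50_000)
ys = xs**3

mc_mean, mc_var = ys.mean(), ys.var()

# EKF-style answer: linearize f(x) = x^3 at the mean, slope f'(1) = 3
ekf_mean = 1.0**3
ekf_var = (3 * 0.1)**2

# analytically E[x^3] = mu^3 + 3*mu*sigma^2 = 1.03, so the EKF mean is biased low
```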
The EKF linearizes the function by taking the derivative to find the slope at the evaluation point $x$. This slope becomes the linear function that we use to transform the Gaussian. Here is a plot of that.
End of explanation
"""
nonlinear_plots.plot_ukf_vs_mc(alpha=0.001, beta=3., kappa=1.)
"""
Explanation: The EKF computation is rather inaccurate. In contrast, here is the performance of the UKF:
End of explanation
"""
|
OceanPARCELS/parcels | parcels/examples/tutorial_timestamps.ipynb | mit | from parcels import Field
from glob import glob
import numpy as np
"""
Explanation: Tutorial on how to use timestamps in Field construction
End of explanation
"""
# tempfield = Field.from_netcdf(glob('WOA_data/woa18_decav_*_04.nc'), 't_an',
# {'lon': 'lon', 'lat': 'lat', 'time': 'time'})
"""
Explanation: Some NetCDF files, such as for example those from the World Ocean Atlas, have time calendars that can't be parsed by xarray. These result in a ValueError: unable to decode time units, for example when the calendar is in 'months since' a particular date.
In these cases, a workaround in Parcels is to use the timestamps argument in Field (or FieldSet) creation. Here, we show how this works for example temperature data from the World Ocean Atlas in the Pacific Ocean
The following cell would give an error, since the calendar of the World Ocean Atlas data is in "months since 1955-01-01 00:00:00"
End of explanation
"""
timestamps = np.expand_dims(np.array([np.datetime64('2001-%.2d-15' %m) for m in range(1,13)]), axis=1)
"""
Explanation: However, we can create our own numpy array of timestamps associated with each of the 12 snapshots in the netcdf file
End of explanation
"""
tempfield = Field.from_netcdf(glob('WOA_data/woa18_decav_*_04.nc'), 't_an',
{'lon': 'lon', 'lat': 'lat', 'time': 'time'},
timestamps=timestamps)
"""
Explanation: And then we can add the timestamps as an extra argument
End of explanation
"""
|
bgroveben/python3_machine_learning_projects | introduction_to_ml_with_python/4_Data_and_Features.ipynb | mit | import os
# The file has no headers naming the columns, so we pass
# header=None and provide the column names explicitly in "names".
adult_path = os.path.join(mglearn.datasets.DATA_PATH, "adult.data")
print(adult_path)
data = pd.read_csv(adult_path, header=None, index_col=False,
names=['age', 'workclass', 'fnlwgt', 'education',
'education-num', 'marital-status', 'occupation',
'relationship', 'race', 'gender', 'capital-gain',
'capital-loss', 'hours-per-week', 'native-country',
'income'])
# For illustrative purposes, we'll only select some of the columns:
data = data[['age', 'workclass', 'education', 'gender',
'hours-per-week', 'occupation', 'income']]
# IPython.display allows nice output formatting within the
# Jupyter notebook:
display(data.head())
"""
Explanation: Chapter 4. Representing Data and Engineering Features
So far, we’ve assumed that our data comes in as a two-dimensional array of floating-point numbers, where each column is a continuous feature that describes the data points.
For many applications, this is not how the data is collected.
A particularly common type of feature is the categorical features.
Also known as discrete features, these are usually not numeric.
The distinction between categorical features and continuous features is analogous to the distinction between classification and regression, only on the input side rather than the output side.
Examples of continuous features that we have seen are pixel brightnesses and size measurements of plant flowers.
Examples of categorical features are the brand of a product, the color of a product, or the department (books, clothing, hardware) it is sold in.
These are all properties that can describe a product, but they don’t vary in a continuous way.
A product belongs either in the clothing department or in the books department.
There is no middle ground between books and clothing, and no natural order for the different categories (books is not greater or less than clothing, hardware is not between books and clothing, etc.).
Regardless of the types of features your data consists of, how you represent them can have an enormous effect on the performance of machine learning models.
We saw in Chapters 2 and 3 that scaling of the data is important.
In other words, if you don’t rescale your data (say, to unit variance), then it makes a difference whether you represent a measurement in centimeters or inches.
We also saw in Chapter 2 that it can be helpful to augment your data with additional features, like adding interactions (products) of features or more general polynomials.
The question of how to represent your data best for a particular application is known as feature engineering, and it is one of the main tasks of data scientists and machine learning practitioners trying to solve real-world problems.
Representing your data in the right way can have a bigger influence on the performance of a supervised model than the exact parameters you choose.
In this chapter, we will first go over the important and very common case of categorical features, and then give some examples of helpful transformations for specific combinations of features and models.
Categorical Variables
As an example, we will use the dataset of adult incomes in the United States, derived from the 1994 census database.
The task of the adult dataset is to predict whether a worker has an income of over \$50,000 or under \$50,000.
The features in this dataset include the workers’ ages, how they are employed (self employed, private industry employee, government employee, etc.), their education, their gender, their working hours per week, occupation, and more.
Table 4-1 shows the first few entries in the dataset:
The task is phrased as a classification task with the two classes being income <=50k and >50k.
It would also be possible to predict the exact income, and make this a regression task.
However, that would be much more difficult, and the 50K division is interesting to understand on its own.
In this dataset, age and hours-per-week are continuous features, which we know how to treat.
The workclass, education, gender, and occupation features are categorical, however.
All of them come from a fixed list of possible values, as opposed to a range, and denote a qualitative property, as opposed to a quantity.
As a starting point, let’s say we want to learn a logistic regression classifier on this data.
We know from Chapter 2 that a logistic regression makes predictions, $ŷ$, using the following formula:
$ŷ = w[0] * x[0] + w[1] * x[1] + \ldots + w[p] * x[p] + b > 0$
where $w[i]$ and $b$ are coefficients learned from the training set and $x[i]$ are the input features.
This formula makes sense when $x[i]$ are numbers, but not when $x[2]$ is "Masters" or "Bachelors".
Clearly we need to represent our data in some different way when applying logistic regression.
The next section will explain how we can overcome this problem.
One-Hot-Encoding (Dummy Variables)
By far the most common way to represent categorical variables is using the one-hot-encoding or one-out-of-N encoding, also known as dummy variables.
The idea behind dummy variables is to replace a categorical variable with one or more new features that can have the values 0 and 1.
The values 0 and 1 make sense in the formula for linear binary classification (and for all other models in scikit-learn), and we can represent any number of categories by introducing one new feature per category, as described here.
Let’s say for the workclass feature we have possible values of "Government Employee", "Private Employee", "Self Employed", and "Self Employed Incorporated".
To encode these four possible values, we create four new features, called "Government Employee", "Private Employee", "Self Employed", and "Self Employed Incorporated".
A feature is 1 if workclass for this person has the corresponding value and 0 otherwise, so exactly one of the four new features will be 1 for each data point.
This is why this is called one-hot or one-out-of-N encoding.
The principle is illustrated in Table 4-2.
A single feature is encoded using four new features.
When using this data in a machine learning algorithm, we would drop the original workclass feature and only keep the 0-1 features.
NOTE
The one-hot encoding we use is quite similar, but not identical, to the dummy coding used in statistics.
For simplicity, we encode each category with a different binary feature.
In statistics, it is common to encode a categorical feature with k different possible values into k–1 features (the last one is represented as all zeros).
This is done to simplify the analysis (more technically, this will avoid making the data matrix rank-deficient).
There are two ways to convert your data to a one-hot encoding of categorical variables, using either pandas or scikit-learn.
At the time of writing, using pandas is slightly easier, so let’s go this route.
First we load the data using pandas from a comma-separated values (CSV) file:
End of explanation
"""
print(data.gender.value_counts())
"""
Explanation: Checking string-encoded categorical data
After reading a dataset like this, it is often good to first check if a column actually contains meaningful categorical data.
When working with data that was input by humans (say, users on a website), there might not be a fixed set of categories, and differences in spelling and capitalization might require preprocessing.
For example, it might be that some people specified gender as “male” and some as “man,” and we might want to represent these two inputs using the same category.
A good way to check the contents of a column is using the value_counts method of a pandas Series (the type of a single column in a DataFrame), to show us what the unique values are and how often they appear:
End of explanation
"""
print("Original Features: \n", list(data.columns))
data_dummies = pd.get_dummies(data)
print("Features After Applying get_dummies(): \n",
list(data_dummies.columns))
"""
Explanation: We can see that there are exactly two values for gender in this dataset, Male and Female, meaning the data is already in a good format to be represented using one-hot-encoding.
In a real application, you should look at all columns and check their values.
We will skip this here for brevity’s sake.
There is a very simple way to encode the data in pandas, using the get_dummies() function.
The get_dummies() function automatically transforms all columns that have object type (like strings) or are categorical (which is a special pandas concept that we haven’t talked about yet):
End of explanation
"""
data_dummies.head()
"""
Explanation: You can see that the continuous features age and hours-per-week were not touched, while the categorical features were expanded into one new feature for each possible value:
End of explanation
"""
features = data_dummies.loc[:, 'age':'occupation_ Transport-moving']
# Extract NumPy arrays:
X = features.values
print("X[:5]: \n", X[:5], "\n")
y = data_dummies['income_ >50K'].values
print("y[:5]: \n", y[:5], "\n")
print("X.shape: {} y.shape: {}".format(X.shape, y.shape))
"""
Explanation: We can now use the values attribute to convert the data_dummies DataFrame into a NumPy array, and then train a machine learning model on it.
Be careful to separate the target variable (which is now encoded in two income columns) from the data before training a model.
Including the output variable, or some derived property of the output variable, into the feature representation is a very common mistake in building supervised machine learning models.
WARNING
Be careful: column indexing in pandas includes the end of the range, so 'age':'occupation_ Transport-moving' is inclusive of occupation_ Transport-moving.
This is different from slicing a NumPy array, where the end of a range is not included: for example, np.arange(11)[0:10] doesn’t include the entry with index 10.
In this case, we extract only the columns containing features -- that is, all columns from age to occupation_ Transport-moving.
This range contains all of the features but not the target:
End of explanation
"""
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
random_state=0)
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
print("Test Score: {:.2f}".format(logreg.score(X_test, y_test)))
"""
Explanation: Now the data is represented in a way that scikit-learn can work with, and we can proceed as usual:
End of explanation
"""
# Create a DataFrame with an integer feature and a categorical
# string feature:
demo_df = pd.DataFrame({'Integer Feature': [0, 1, 2, 1],
'Categorical Feature': ['socks',
'fox',
'socks',
'box']})
display(demo_df)
"""
Explanation: WARNING
In this example, we called get_dummies() on a DataFrame containing both the training and the test data.
This is important to ensure categorical values are represented in the same way in the training set and the test set.
Imagine we have the training and test sets in two different DataFrames.
If the "Private Employee" value for the workclass feature does not appear in the test set, pandas will assume there are only three possible values for this feature and will create only three new dummy features.
Now our training and test sets have different numbers of features, and we can’t apply the model we learned on the training set to the test set anymore.
Even worse, imagine the workclass feature has the values "Government Employee" and "Private Employee" in the training set, and "Self Employed" and "Self Employed Incorporated" in the test set.
In both cases, pandas will create two new dummy features, so the encoded DataFrames will have the same number of features.
However, the two dummy features have entirely different meanings in the training and test sets.
The column that means "Government Employee" for the training set would encode "Self Employed" for the test set.
If we built a machine learning model on this data it would work very badly, because it would assume the columns mean the same things (because they are in the same position) when in fact they mean very different things.
To fix this, either call get_dummies() on a DataFrame that contains both the training and the test data points, or make sure that the column names are the same for the training and test sets after calling get_dummies(), to ensure they have the same semantics.
Numbers Can Encode Categoricals
In the example of the adult dataset, the categorical variables were encoded as strings.
On the one hand, that opens up the possibility of spelling errors, but on the other hand, it clearly marks a variable as categorical.
Often, whether for ease of storage or because of the way the data is collected, categorical variables are encoded as integers.
For example, imagine the census data in the adult dataset was collected using a questionnaire, and the answers for workclass were recorded as 0 (first box ticked), 1 (second box ticked), 2 (third box ticked), and so on.
Now the column will contain numbers from 0 to 8, instead of strings like "Private", and it won’t be immediately obvious to someone looking at the table representing the dataset whether they should treat this variable as continuous or categorical.
Knowing that the numbers indicate employment status, however, it is clear that these are very distinct states and should not be modeled by a single continuous variable.
WARNING
Categorical features are often encoded using integers.
That they are numbers doesn’t mean that they should necessarily be treated as continuous features.
It is not always clear whether an integer feature should be treated as continuous or discrete (and one-hot-encoded).
If there is no ordering between the semantics that are encoded (like in the workclass example), the feature must be treated as discrete.
For other cases, like five-star ratings, the better encoding depends on the particular task and data and which machine learning algorithm is used.
The get_dummies() function in pandas treats all numbers as continuous and will not create dummy variables for them.
To get around this, you can either use scikit-learn’s OneHotEncoder, for which you can specify which variables are continuous and which are discrete, or convert numeric columns in the DataFrame to strings.
To illustrate, let’s create a DataFrame object with two columns, one containing strings and one containing integers:
End of explanation
"""
display(pd.get_dummies(demo_df))
"""
Explanation: Using get_dummies() will only encode the string feature and will not change the integer feature, as you can see in Table 4-5:
End of explanation
"""
demo_df['Integer Feature'] = demo_df['Integer Feature'].astype(str)
display(pd.get_dummies(demo_df,
columns=['Integer Feature',
'Categorical Feature']))
"""
Explanation: If you want dummy variables to be created for the "Integer Feature" column, you can explicitly list the columns you want to encode using the columns parameter.
Then, both features will be treated as categorical (see Table 4-6):
End of explanation
"""
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
X, y = mglearn.datasets.make_wave(n_samples=100)
line = np.linspace(-3, 3, 1000, endpoint=False).reshape(-1, 1)
reg = DecisionTreeRegressor(min_samples_split=3).fit(X, y)
plt.plot(line, reg.predict(line), label='decision tree')
reg = LinearRegression().fit(X, y)
plt.plot(line, reg.predict(line), label='linear regression')
plt.plot(X[:, 0], y, 'o', c='k')
plt.ylabel("Regression Output")
plt.xlabel("Input Feature")
plt.legend(loc="best")
"""
Explanation: Binning, Discretization, Linear Models, and Trees
The best way to represent data depends not only on the semantics of the data, but also on the kind of model you are using.
Linear models and tree-based models (such as decision trees, gradient boosted trees, and random forests), two large and very commonly used families, have very different properties when it comes to how they work with different feature representations.
Let’s go back to the wave regression dataset that we used in Chapter 2.
It has only a single input feature.
Here is a comparison of a linear regression model and a decision tree regressor on this dataset (see Figure 4-1):
End of explanation
"""
bins = np.linspace(-3, 3, 11)
print("bins: {}".format(bins))
"""
Explanation: As you know, linear models can only model linear relationships, which are lines in the case of a single feature.
The decision tree can build a much more complex model of the data.
However, this is strongly dependent on the representation of the data.
One way to make linear models more powerful on continuous data is to use binning (also known as discretization) of the feature to split it up into multiple features, as described here.
We imagine a partition of the input range for the feature (in this case, the numbers from –3 to 3) into a fixed number of bins -- say, 10.
A data point will then be represented by which bin it falls into.
To determine this, we first have to define the bins.
In this case, we’ll define 10 bins equally spaced between –3 and 3.
We use the np.linspace function for this, creating 11 entries, which will create 10 bins -- they are the spaces in between two consecutive boundaries:
End of explanation
"""
which_bin = np.digitize(X, bins=bins)
print("\nData Points:\n", X[:5])
print("\nBin Membership for Data Points:\n", which_bin[:5])
"""
Explanation: Here, the first bin contains all data points with feature values –3 to –2.4, the second bin contains all points with feature values from –2.4 to –1.8, and so on.
Next, we record for each data point which bin it falls into.
This can be easily computed using the np.digitize() function:
End of explanation
"""
from sklearn.preprocessing import OneHotEncoder
# Transform data using the OneHotEncoder:
encoder = OneHotEncoder(sparse=False)
# Use encoder.fit to find the unique values that appear
# in which_bin:
encoder.fit(which_bin)
# Transform creates the one-hot encoding:
X_binned = encoder.transform(which_bin)
print(X_binned[:5])
"""
Explanation: What we did here is transform the single continuous input feature in the wave dataset into a categorical feature that encodes which bin a data point is in.
To use a scikit-learn model on this data, we transform this discrete feature to a one-hot encoding using the OneHotEncoder from the preprocessing module.
The OneHotEncoder does the same encoding as pandas.get_dummies(), though it currently only works on categorical variables that are integers:
End of explanation
"""
print("X_binned.shape: {}".format(X_binned.shape))
"""
Explanation: Because we specified 10 bins, the transformed dataset X_binned now is made up of 10 features:
End of explanation
"""
line_binned = encoder.transform(np.digitize(line, bins=bins))
reg = LinearRegression().fit(X_binned, y)
plt.plot(line, reg.predict(line_binned),
label="Linear Regression Binned")
reg = DecisionTreeRegressor(min_samples_split=3).fit(X_binned, y)
plt.plot(line, reg.predict(line_binned),
label="Decision Tree Binned")
plt.plot(X[:, 0], y, 'o', c='k')
plt.vlines(bins, -3, 3, linewidth=1, alpha=.2)
plt.legend(loc="best")
plt.ylabel("Regression Output")
plt.xlabel("Input Feature")
"""
Explanation: Now we can build a new linear regression model and a new decision tree model on the one-hot-encoded data.
The result is visualized in Figure 4-2, together with the bin boundaries, shown as dotted black lines:
End of explanation
"""
X_combined = np.hstack([X, X_binned])
print(X_combined.shape)
reg = LinearRegression().fit(X_combined, y)
line_combined = np.hstack([line, line_binned])
plt.plot(line, reg.predict(line_combined),
label="Linear Regression Combined")
for bin in bins:
plt.plot([bin, bin], [-3, 3], ':', c='k', linewidth=1)
plt.legend(loc="best")
plt.ylabel("Regression Output")
plt.xlabel("Input Feature")
plt.plot(X[:, 0], y, 'o', c='k')
"""
Explanation: The dashed line and solid line are exactly on top of each other, meaning the linear regression model and the decision tree make exactly the same predictions.
For each bin, they predict a constant value.
As features are constant within each bin, any model must predict the same value for all points within a bin.
Comparing what the models learned before binning the features and after, we see that the linear model became much more flexible, because it now has a different value for each bin, while the decision tree model got much less flexible.
Binning features generally has no beneficial effect for tree-based models, as these models can learn to split up the data anywhere.
In a sense, that means decision trees can learn whatever binning is most useful for predicting on this data.
Additionally, decision trees look at multiple features at once, while binning is usually done on a per-feature basis.
However, the linear model benefited greatly in expressiveness from the transformation of the data.
If there are good reasons to use a linear model for a particular dataset -- say, because it is very large and high-dimensional, but some features have nonlinear relations with the output -- binning can be a great way to increase modeling power.
Interactions and Polynomials
Another way to enrich a feature representation, particularly for linear models, is adding interaction features and polynomial features of the original data.
This kind of feature engineering is often used in statistical modeling, but it’s also common in many practical machine learning applications.
As a first example, look again at Figure 4-2.
The linear model learned a constant value for each bin in the wave dataset.
We know, however, that linear models can learn not only offsets, but also slopes.
One way to add a slope to the linear model on the binned data is to add the original feature (the x-axis in the plot) back in.
This leads to an 11-dimensional dataset, as seen in Figure 4-3:
End of explanation
"""
X_product = np.hstack([X_binned, X * X_binned])
print(X_product.shape)
"""
Explanation: In this example, the model learned an offset for each bin, together with a slope.
The learned slope is downward, and shared across all the bins -- there is a single x-axis feature, which has a single slope.
Because the slope is shared across all bins, it doesn’t seem to be very helpful.
We would rather have a separate slope for each bin!
We can achieve this by adding an interaction or product feature that indicates which bin a data point is in and where it lies on the x-axis.
This feature is a product of the bin indicator and the original feature.
Let’s create this dataset:
End of explanation
"""
reg = LinearRegression().fit(X_product, y)
line_product = np.hstack([line_binned, line * line_binned])
plt.plot(line, reg.predict(line_product),
label="Linear Regression Product")
for bin in bins:
plt.plot([bin, bin], [-3, 3], ':', c='k', linewidth=1)
plt.plot(X[:, 0], y, 'o', c='k')
plt.ylabel("Regression Output")
plt.xlabel("Input Features")
plt.legend()
"""
Explanation: The dataset now has 20 features: the indicators for which bin a data point is in, and a product of the original feature and the bin indicator.
You can think of the product feature as a separate copy of the x-axis feature for each bin.
It is the original feature within the bin, and zero everywhere else.
Figure 4-4 shows the result of the linear model on this new representation:
End of explanation
"""
from sklearn.preprocessing import PolynomialFeatures
# Include polynomials up to x ** 10.
# The default "include_bias=True" adds a feature that's
# constantly 1.
poly = PolynomialFeatures(degree=10, include_bias=False)
poly.fit(X)
X_poly = poly.transform(X)
"""
Explanation: As you can see, now each bin has its own offset and slope in this model.
Using binning is one way to expand a continuous feature.
Another one is to use polynomials of the original features.
For a given feature x, we might want to consider x ** 2, x ** 3, x ** 4, and so on.
This is implemented in PolynomialFeatures in the preprocessing module:
End of explanation
"""
print("X_poly.shape: {}".format(X_poly.shape))
"""
Explanation: Using a degree of 10 yields 10 features:
End of explanation
"""
print("Entries of X: \n{}".format(X[:5]))
print("Entries of X_poly: \n{}".format(X_poly[:5]))
"""
Explanation: Let's compare the entries of X_poly to those of X:
End of explanation
"""
print("Polynomial feature names: \n{}".format(
poly.get_feature_names()))
"""
Explanation: You can obtain the semantics of the features by calling the get_feature_names() method, which provides the exponent for each feature:
End of explanation
"""
reg = LinearRegression().fit(X_poly, y)
line_poly = poly.transform(line)
plt.plot(line, reg.predict(line_poly),
label="Polynomial Linear Regression")
plt.plot(X[:, 0], y, 'o', c='k')
plt.ylabel("Regression Output")
plt.xlabel("Input Feature")
plt.legend()
"""
Explanation: You can see that the first column of X_poly corresponds exactly to X, while the other columns are the powers of the first entry.
It’s interesting to see how large some of the values can get.
The second row has entries above 20,000, orders of magnitude different from the rest.
Using polynomial features together with a linear regression model yields the classical model of polynomial regression (see Figure 4-5):
End of explanation
"""
from sklearn.svm import SVR
for gamma in [1, 10]:
svr = SVR(gamma=gamma).fit(X, y)
plt.plot(line, svr.predict(line),
label="SVR gamma={}".format(gamma))
plt.plot(X[:, 0], y, 'o', c='k')
plt.ylabel("Regression Output")
plt.xlabel("Input Feature")
plt.legend()
"""
Explanation: As you can see, polynomial features yield a very smooth fit on this one-dimensional data.
However, polynomials of high degree tend to behave in extreme ways on the boundaries or in regions with little data.
As a comparison, here is a kernel SVM model learned on the original data, without any transformation (see Figure 4-6):
End of explanation
"""
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
boston = load_boston()
X_train, X_test, y_train, y_test = train_test_split(
boston.data, boston.target, random_state=0)
# Rescale the data:
scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
"""
Explanation: Using a more complex model, a kernel SVM, we are able to learn a similarly complex prediction to the polynomial regression without an explicit transformation of the features.
As a more realistic application of interactions and polynomials, let’s look again at the Boston Housing dataset.
We already used polynomial features on this dataset in Chapter 2.
Now let’s have a look at how these features were constructed, and at how much the polynomial features help.
First we load the data, and rescale it to between 0 and 1 using MinMaxScaler:
End of explanation
"""
poly = PolynomialFeatures(degree=2).fit(X_train_scaled)
X_train_poly = poly.transform(X_train_scaled)
X_test_poly = poly.transform(X_test_scaled)
print("X_train.shape: {}".format(X_train.shape))
print("X_train_poly.shape: {}".format(X_train_poly.shape))
"""
Explanation: Now we extract polynomial features and interactions up to a degree of 2:
End of explanation
"""
print("Polynomial Feature Names: \n{}".format(
poly.get_feature_names()))
"""
Explanation: The data originally had 13 features, which were expanded into 105 interaction features.
These new features represent all possible interactions between two different original features, as well as the square of each original feature.
degree=2 here means that we look at all features that are the product of up to two original features.
The exact correspondence between input and output features can be found using the get_feature_names() method:
End of explanation
"""
from sklearn.linear_model import Ridge
ridge = Ridge().fit(X_train_scaled, y_train)
print("Score without interactions: {:.3f}".format(
ridge.score(X_test_scaled, y_test)))
ridge = Ridge().fit(X_train_poly, y_train)
print("Score with interactions: {:.3f}".format(
ridge.score(X_test_poly, y_test)))
"""
Explanation: The first new feature is a constant feature, called "1" here. The next 13 features are the original features (called "x0" to "x12").
Then follows the first feature squared ("x0^2") and combinations of the first and the other features.
Let’s compare the performance using Ridge on the data with and without interactions:
End of explanation
"""
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor(n_estimators=100).fit(
X_train_scaled, y_train)
print("Score without interactions: {:.3f}".format(
rf.score(X_test_scaled, y_test)))
rf = RandomForestRegressor(n_estimators=100).fit(
X_train_poly, y_train)
print("Score with interactions: {:.3f}".format(
rf.score(X_test_poly, y_test)))
"""
Explanation: Clearly, the interactions and polynomial features gave us a good boost in performance when using Ridge.
When using a more complex model like a random forest, the story is a bit different:
End of explanation
"""
rnd = np.random.RandomState(0)
X_org = rnd.normal(size=(1000, 3))
w = rnd.normal(size=3)
# The Poisson distribution is a discrete probability distribution
# that expresses the probability of a given number of events
# occurring in a fixed interval of time and/or space if these
# events occur with a known constant rate and independently of
# the time since the last event.
X = rnd.poisson(10 * np.exp(X_org))
y = np.dot(X_org, w)
"""
Explanation: You can see that even without additional features, the random forest beats the performance of Ridge.
Adding interactions and polynomials actually decreases performance slightly.
Univariate Nonlinear Transformations
We just saw that adding squared or cubed features can help linear models for regression.
There are other transformations that often prove useful for transforming certain features: in particular, applying mathematical functions like log(), exp(), or sin().
While tree-based models only care about the ordering of the features, linear models and neural networks are very tied to the scale and distribution of each feature, and if there is a nonlinear relation between the feature and the target, that becomes hard to model -- particularly in regression.
The functions log() and exp() can help by adjusting the relative scales in the data so that they can be captured better by a linear model or neural network.
We saw an application of that in Chapter 2 with the memory price data.
The sin() and cos() functions can come in handy when dealing with data that encodes periodic patterns.
Most models work best when each feature (and in regression also the target) is loosely Gaussian distributed -- that is, a histogram of each feature should have something resembling the familiar "bell curve" shape.
Using transformations like log() and exp() is a hacky but simple and efficient way to achieve this.
A particularly common case when such a transformation can be helpful is when dealing with integer count data.
By count data, we mean features like "how often did user A log in?"
Counts are never negative, and often follow particular statistical patterns.
We are using a synthetic dataset of counts here that has properties similar to those you can find in the wild.
The features are all integer-valued, while the response is continuous:
End of explanation
"""
print("Number of Feature Appearances: \n{}".format(
np.bincount(X[:, 0])))
"""
Explanation: Let’s look at the first 10 entries of the first feature.
All are integer values and positive, but apart from that it’s hard to make out a particular pattern.
If we count the appearance of each value, the distribution of values becomes clearer:
End of explanation
"""
bins = np.bincount(X[:, 0])
plt.bar(range(len(bins)), bins, color='r')
plt.ylabel("Number of Appearances")
plt.xlabel("Value")
"""
Explanation: The value 2 seems to be the most common, with 68 appearances (bincount always starts at 0), and the counts for higher values fall quickly.
However, there are some very high values, like 84 and 85, that appear twice.
We visualize the counts in Figure 4-7:
End of explanation
"""
from sklearn.linear_model import Ridge
X_train, X_test, y_train, y_test = train_test_split(
X, y, random_state=0)
score = Ridge().fit(X_train, y_train).score(X_test, y_test)
print("Test score: {:.3f}".format(score))
"""
Explanation: Features X[:, 1] and X[:, 2] have similar properties.
This kind of distribution of values (many small ones and a few very large ones) is very common in practice.
However, it is something most linear models can’t handle very well.
Let’s try to fit a ridge regression to this model:
End of explanation
"""
X_train_log = np.log(X_train + 1)
X_test_log = np.log(X_test + 1)
print("First entries of X_test: \n{}".format(X_test[:5]))
print()
print("First entries of X_test_log: \n{}".format(X_test_log[:5]))
"""
Explanation: As you can see from the relatively low $R^2$ score, Ridge was not able to really capture the relationship between X and y.
Applying a logarithmic transformation can help, though.
Because the value 0 appears in the data (and the logarithm is not defined at 0), we can’t actually just apply log(), but we have to compute log(X + 1):
End of explanation
"""
plt.hist(X_train_log[:, 0], bins=25, color='gray')
plt.ylabel("Number of Appearances")
plt.xlabel("Value")
"""
Explanation: After the transformation, the distribution of the data is less asymmetrical and doesn't have any large outliers anymore (see Figure 4-8 and Figure 4-8A):
End of explanation
"""
plt.hist(X_train[:, 0], bins=25, color='gray')
plt.ylabel("Number of Appearances")
plt.xlabel("Value")
"""
Explanation: Let's show a comparison with the untransformed X_train data:
End of explanation
"""
score = Ridge().fit(X_train_log, y_train).score(
X_test_log, y_test)
print("Test Score: {:.3f}".format(score))
"""
Explanation: As you will see, building a ridge model on the new data provides a much better fit:
End of explanation
"""
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectPercentile
from sklearn.model_selection import train_test_split
cancer = load_breast_cancer()
# Get deterministic random numbers
rng = np.random.RandomState(42)
noise = rng.normal(size=(len(cancer.data), 50))
# Add noise features to the data.
# The first 30 features are from the dataset.
# The next 50 features are randomly generated noise.
X_w_noise = np.hstack([cancer.data, noise])
X_w_noise[0]
X_train, X_test, y_train, y_test = train_test_split(
X_w_noise, cancer.target, random_state=0, test_size=0.5)
# Use f_classif (the default) and SelectPercentile to
# select 50% of features:
select = SelectPercentile(percentile=50)
select.fit(X_train, y_train)
# Transform training set:
X_train_selected = select.transform(X_train)
print("X_train.shape: {}".format(X_train.shape))
print("X_train_selected.shape: {}".format(
X_train_selected.shape))
"""
Explanation: Finding the transformation that works best for each combination of dataset and model is somewhat of an art.
In this example, all the features had the same properties.
This is rarely the case in practice, and usually only a subset of the features should be transformed, or sometimes each feature needs to be transformed in a different way.
As we mentioned earlier, these kinds of transformations are irrelevant for tree-based models but might be essential for linear models.
Sometimes it is also a good idea to transform the target variable y in regression.
Trying to predict counts (say, number of orders) is a fairly common task, and using the log(y + 1) transformation often helps.
As you saw in the previous examples, binning, polynomials, and interactions can have a huge influence on how models perform on a given dataset.
This is particularly true for less complex models like linear models and naive Bayes models.
Tree-based models, on the other hand, are often able to discover important interactions themselves, and don’t require transforming the data explicitly most of the time.
Other models, like SVMs, nearest neighbors, and neural networks, might sometimes benefit from using binning, interactions, or polynomials, but the implications there are usually much less clear than in the case of linear models.
Automatic Feature Selection
With so many ways to create new features, you might get tempted to increase the dimensionality of the data way beyond the number of original features.
However, adding more features makes all models more complex, and so increases the chance of overfitting.
When adding new features, or with high-dimensional datasets in general, it can be a good idea to reduce the number of features to only the most useful ones, and discard the rest.
This can lead to simpler models that generalize better.
But how can you know how good each feature is?
There are three basic strategies: univariate statistics, model-based selection, and iterative selection.
We will discuss all three of them in detail.
All of these methods are supervised methods, meaning they need the target for fitting the model.
This means we need to split the data into training and test sets, and fit the feature selection only on the training part of the data.
Univariate Statistics
In univariate statistics, we compute whether there is a statistically significant relationship between each feature and the target.
Then the features that are related with the highest confidence are selected.
In the case of classification, this is also known as analysis of variance (ANOVA).
A key property of these tests is that they are univariate, meaning that they only consider each feature individually.
Consequently, a feature will be discarded if it is only informative when combined with another feature.
Univariate tests are often very fast to compute, and don’t require building a model.
On the other hand, they are completely independent of the model that you might want to apply after the feature selection.
To use univariate feature selection in scikit-learn, you need to choose a test, usually either f_classif (the default) for classification or f_regression for regression, and a method to discard features based on the $p$-values determined in the test.
All methods for discarding parameters use a threshold to discard all features with too high a $p$-value (which means they are unlikely to be related to the target).
The methods differ in how they compute this threshold, with the simplest ones being SelectKBest, which selects a fixed number $k$ of features, and SelectPercentile, which selects a fixed percentage of features.
Let’s apply the feature selection for classification on the cancer dataset.
To make the task a bit harder, we’ll add some noninformative noise features to the data.
We expect the feature selection to be able to identify the features that are noninformative and remove them:
End of explanation
"""
mask = select.get_support()
print(mask)
"""
Explanation: As you can see, the number of features was reduced from 80 to 40 (50 percent of the original features).
We can find out which features have been selected using the get_support() method, which returns a Boolean mask of the selected features (visualized in Figure 4-9):
End of explanation
"""
plt.matshow(mask.reshape(1, -1), cmap="gray_r")
plt.xlabel("Sample Index")
plt.yticks(())
"""
Explanation: Nice, but visualizing the mask may be a bit more helpful.
Black is True, white is False:
End of explanation
"""
from sklearn.linear_model import LogisticRegression
# Transform test data:
X_test_selected = select.transform(X_test)
lr = LogisticRegression()
lr.fit(X_train, y_train)
print("Score with all features: \n{:.3f}".format(
lr.score(X_test, y_test)))
lr.fit(X_train_selected, y_train)
print("Score with only the selected features: \n{:.3f}".format(
lr.score(X_test_selected, y_test)))
"""
Explanation: As you can see from the visualization of the mask, most of the selected features are the original features, and most of the noise features were removed.
However, the recovery of the original features is not perfect.
Let’s compare the performance of logistic regression on all features against the performance using only the selected features:
End of explanation
"""
from sklearn.feature_selection import SelectFromModel
from sklearn.ensemble import RandomForestClassifier
select = SelectFromModel(
RandomForestClassifier(n_estimators=100, random_state=42),
threshold="median")
print(select)
"""
Explanation: In this case, removing the noise features improved performance, even though some of the original features were lost.
This was a very simple synthetic example, and outcomes on real data are usually mixed.
Univariate feature selection can still be very helpful, though, if there is such a large number of features that building a model on them is infeasible, or if you suspect that many features are completely uninformative.
Model-Based Feature Selection
Model-based feature selection uses a supervised machine learning model to judge the importance of each feature, and keeps only the most important ones.
The supervised model that is used for feature selection doesn’t need to be the same model that is used for the final supervised modeling.
The feature selection model needs to provide some measure of importance for each feature, so that they can be ranked by this measure.
Decision trees and decision tree–based models provide a feature_importances_ attribute, which directly encodes the importance of each feature.
Linear models have coefficients, which can also be used to capture feature importances by considering the absolute values.
As we saw in Chapter 2, linear models with L1 penalty learn sparse coefficients, which only use a small subset of features.
This can be viewed as a form of feature selection for the model itself, but can also be used as a preprocessing step to select features for another model.
In contrast to univariate selection, model-based selection considers all features at once, and so can capture interactions (if the model can capture them).
To use model-based feature selection, we need to use the SelectFromModel transformer:
End of explanation
"""
select.fit(X_train, y_train)
X_train_l1 = select.transform(X_train)
print("X_train.shape: {}".format(X_train.shape))
print("X_train_l1.shape: {}".format(X_train_l1.shape))
"""
Explanation: The SelectFromModel class selects all features that have an importance measure of the feature (as provided by the supervised model) greater than the provided threshold.
To get a comparable result to what we got with univariate feature selection, we used the median as a threshold, so that half of the features will be selected.
We use a random forest classifier with 100 trees to compute the feature importances.
This is a quite complex model and much more powerful than using univariate tests.
Now let’s actually fit the model:
End of explanation
"""
mask = select.get_support()
# Visualize the mask; black is True, white is False
plt.matshow(mask.reshape(1, -1), cmap='gray_r')
plt.xlabel("Sample Index")
plt.yticks(())
"""
Explanation: Again, we can have a look at the features that were selected (Figure 4-10):
End of explanation
"""
X_test_l1 = select.transform(X_test)
score = LogisticRegression().fit(
X_train_l1, y_train).score(X_test_l1, y_test)
print("Test score: {:.3f}".format(score))
"""
Explanation: This time, all but two of the original features were selected.
Because we used the median importance as the threshold, half of the features (40) are selected, so some of the noise features are also selected.
Let's take a look at the performance:
End of explanation
"""
from sklearn.feature_selection import RFE
select = RFE(
RandomForestClassifier(n_estimators=100, random_state=42),
n_features_to_select=40)
select.fit(X_train, y_train)
# Visualize the selected features:
mask = select.get_support()
plt.matshow(mask.reshape(1, -1), cmap='gray_r')
plt.xlabel("Sample Index")
plt.yticks(())
"""
Explanation: With the better feature selection, we also gained some improvements here.
Iterative Feature Selection
In univariate testing we used no model, while in model-based selection we used a single model to select features.
In iterative feature selection, a series of models are built, with varying numbers of features.
There are two basic methods: starting with no features and adding features one by one until some stopping criterion is reached, or starting with all features and removing features one by one until some stopping criterion is reached.
Because a series of models are built, these methods are much more computationally expensive than the methods we discussed previously.
One particular method of this kind is recursive feature elimination (RFE), which starts with all features, builds a model, and discards the least important feature according to the model.
Then a new model is built using all but the discarded feature, and so on until only a prespecified number of features are left.
For this to work, the model used for selection needs to provide some way to determine feature importance, as was the case for the model-based selection.
Here, we use the same random forest model that we used earlier, and get the results shown in Figure 4-11:
End of explanation
"""
X_train_rfe = select.transform(X_train)
X_test_rfe = select.transform(X_test)
score = LogisticRegression().fit(
X_train_rfe, y_train).score(X_test_rfe, y_test)
print("Test Score: {:.3f}".format(score))
"""
Explanation: The feature selection got better compared to the univariate and model-based selection, but one feature was still missed.
Running this code also takes significantly longer than that for the model-based selection, because a random forest model is trained 40 times, once for each feature that is dropped.
Let’s test the accuracy of the logistic regression model when using RFE for feature selection:
End of explanation
"""
print("Test Score: {:.3f}".format(select.score(X_test, y_test)))
"""
Explanation: We can also use the model used inside the RFE to make predictions.
This uses only the feature set that was selected:
End of explanation
"""
citibike = mglearn.datasets.load_citibike()
print("Citi Bike Data: \n{}".format(citibike.head()))
"""
Explanation: Here, the performance of the random forest used inside the RFE is the same as that achieved by training a logistic regression model on top of the selected features.
In other words, once we’ve selected the right features, the linear model performs as well as the random forest.
If you are unsure when selecting what to use as input to your machine learning algorithms, automatic feature selection can be quite helpful.
It is also great for reducing the number of features needed—for example, to speed up prediction or to allow for more interpretable models.
In most real-world cases, applying feature selection is unlikely to provide large gains in performance.
However, it is still a valuable tool in the toolbox of the feature engineer.
Utilizing Expert Knowledge
Feature engineering is often an important place to use expert knowledge for a particular application.
While the purpose of machine learning in many cases is to avoid having to create a set of expert-designed rules, that doesn’t mean that prior knowledge of the application or domain should be discarded.
Often, domain experts can help in identifying useful features that are much more informative than the initial representation of the data.
Imagine you work for a travel agency and want to predict flight prices.
Let’s say you have a record of prices together with dates, airlines, start locations, and destinations.
A machine learning model might be able to build a decent model from that.
Some important factors in flight prices, however, cannot be learned.
For example, flights are usually more expensive during peak vacation months and around holidays.
While the dates of some holidays (like Christmas) are fixed, and their effect can therefore be learned from the date, others might depend on the phases of the moon (like Hanukkah and Easter) or be set by authorities (like school holidays).
These events cannot be learned from the data if each flight is only recorded using the (Gregorian) date.
However, it is easy to add a feature that encodes whether a flight was on, preceding, or following a public or school holiday.
In this way, prior knowledge about the nature of the task can be encoded in the features to aid a machine learning algorithm.
Adding a feature does not force a machine learning algorithm to use it, and even if the holiday information turns out to be noninformative for flight prices, augmenting the data with this information doesn’t hurt.
We'll now look at one particular case of using expert knowledge -- though in this case it might be more rightfully called "common sense".
The task is predicting bicycle rentals in front of Andreas’s house.
In New York, Citi Bike operates a network of bicycle rental stations with a subscription system.
The stations are all over the city and provide a convenient way to get around.
Bike rental data is made public in an anonymized form and has been analyzed in various ways.
The task we want to solve is to predict for a given time and day how many people will rent a bike in front of Andreas’s house—so he knows if any bikes will be left for him.
We first load the data for August 2015 for this particular station as a pandas DataFrame.
We resample the data into three-hour intervals to obtain the main trends for each day:
End of explanation
"""
plt.figure(figsize=(10, 3))
xticks = pd.date_range(start=citibike.index.min(),
end=citibike.index.max(), freq='D')
plt.xticks(xticks, xticks.strftime("%a %m-%d"),
rotation=90, ha="left")
plt.plot(citibike, linewidth=1)
plt.xlabel("Date")
plt.ylabel("Rentals")
"""
Explanation: The following example shows a visualization of the rental frequencies for the whole month (Figure 4-12):
End of explanation
"""
# Extract the target values (number of rentals):
y = citibike.values
print("y[:5]: {}".format(y[:5]))
# Convert to POSIX time by dividing by 10**9:
X = citibike.index.astype("int64").values.reshape(-1, 1) // 10**9
print("X[:5]: \n{}".format(X[:5]))
"""
Explanation: Looking at the data, we can clearly distinguish day and night for each 24-hour interval.
The patterns for weekdays and weekends also seem to be quite different.
When evaluating a prediction task on a time series like this, we usually want to learn from the past and predict for the future.
This means when doing a split into a training and a test set, we want to use all the data up to a certain date as the training set and all the data past that date as the test set.
This is how we would usually use time series prediction: given everything that we know about rentals in the past, what do we think will happen tomorrow?
We will use the first 184 data points, corresponding to the first 23 days, as our training set, and the remaining 64 data points, corresponding to the remaining 8 days, as our test set.
The only feature that we are using in our prediction task is the date and time when a particular number of rentals occurred.
So, the input feature is the date and time -- say, 2015-08-01 00:00:00 -- and the output is the number of rentals in the following three hours (the resampling interval of our DataFrame).
A (surprisingly) common way that dates are stored on computers is using POSIX time, which is the number of seconds since January 1, 1970 00:00:00 UTC (aka the beginning of Unix time).
As a first try, we can use this single integer feature as our data representation:
End of explanation
"""
# Use the first 184 data points for training,
# and the rest for testing.
n_train = 184
# Function to evaluate and plot a regressor on
# a given test set.
def eval_on_features(features, target, regressor):
# Split the given features into train and test sets.
X_train, X_test = features[:n_train], features[n_train:]
# Split the target array as well.
y_train, y_test = target[:n_train], target[n_train:]
regressor.fit(X_train, y_train)
print("Test-set R^2: {:.2f}".format(
regressor.score(X_test, y_test)))
y_pred = regressor.predict(X_test)
y_pred_train = regressor.predict(X_train)
plt.figure(figsize=(10, 3))
plt.xticks(range(0, len(X), 8), xticks.strftime(
"%a %m-%d"), rotation=90, ha="left")
plt.plot(range(n_train), y_train, label="train")
plt.plot(range(n_train, len(y_test) + n_train), y_test,
'-', label="test")
plt.plot(range(n_train), y_pred_train, '--',
label="Training prediction")
plt.plot(range(n_train, len(y_test) + n_train), y_pred,
'--', label="Test prediction")
plt.legend(loc=(1.01, 0))
plt.xlabel("Date")
plt.ylabel("Rentals")
"""
Explanation: We first define a function to split the data into training and test sets, build the model, and visualize the result:
End of explanation
"""
import os
import pandas as pd
from google.cloud import bigquery
"""
Explanation: Reusable Embeddings
Learning Objectives
1. Learn how to use a pre-trained TF Hub text modules to generate sentence vectors
1. Learn how to incorporate a pre-trained TF-Hub module into a Keras model
1. Learn how to deploy and use a text model on CAIP
Introduction
In this notebook, we will implement text models to recognize the probable source (GitHub, TechCrunch, or The New York Times) of the titles we have in the title dataset.
First, we will load and pre-process the texts and labels so that they are suitable to be fed to sequential Keras models whose first layer is a pre-trained TF-Hub module. Thanks to this first layer, we won't need to tokenize and integerize the text before passing it to our models: the pre-trained layer will take care of that for us and consume raw text directly. However, we will still have to one-hot-encode each of the 3 classes into a 3-dimensional basis vector.
Then we will build, train and compare simple DNN models starting with different pre-trained TF-Hub layers.
End of explanation
"""
PROJECT = !(gcloud config get-value core/project)
PROJECT = PROJECT[0]
BUCKET = PROJECT # defaults to PROJECT
REGION = "us-central1" # Replace with your REGION
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
%%bash
gcloud config set project $PROJECT
gcloud config set ai/region $REGION
"""
Explanation: Replace the variable values in the cell below:
End of explanation
"""
%%bigquery --project $PROJECT
SELECT
url, title, score
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
LENGTH(title) > 10
AND score > 10
AND LENGTH(url) > 0
LIMIT 10
"""
Explanation: Create a Dataset from BigQuery
Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the sites inception in October 2006 until October 2015.
Here is a sample of the dataset:
End of explanation
"""
%%bigquery --project $PROJECT
SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
COUNT(title) AS num_articles
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
AND LENGTH(title) > 10
GROUP BY
source
ORDER BY num_articles DESC
LIMIT 100
"""
Explanation: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/...., I want to be left with <i>nytimes</i>
End of explanation
"""
regex = ".*://(.[^/]+)/"
sub_query = """
SELECT
title,
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '{0}'), '.'))[OFFSET(1)] AS source
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '{0}'), '.com$')
AND LENGTH(title) > 10
""".format(
regex
)
query = """
SELECT
LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title,
source
FROM
({sub_query})
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
""".format(
sub_query=sub_query
)
print(query)
"""
Explanation: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
End of explanation
"""
bq = bigquery.Client(project=PROJECT)
title_dataset = bq.query(query).to_dataframe()
title_dataset.head()
"""
Explanation: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.
End of explanation
"""
print(f"The full dataset contains {len(title_dataset)} titles")
"""
Explanation: AutoML for text classification requires that
* the dataset be in csv form with
* the first column being the texts to classify or a GCS path to the text
* the last colum to be the text labels
The dataset we pulled from BiqQuery satisfies these requirements.
End of explanation
"""
title_dataset.source.value_counts()
"""
Explanation: Let's make sure we have roughly the same number of labels for each of our three labels:
End of explanation
"""
DATADIR = "./data/"
if not os.path.exists(DATADIR):
os.makedirs(DATADIR)
FULL_DATASET_NAME = "titles_full.csv"
FULL_DATASET_PATH = os.path.join(DATADIR, FULL_DATASET_NAME)
# Let's shuffle the data before writing it to disk.
title_dataset = title_dataset.sample(n=len(title_dataset))
title_dataset.to_csv(
FULL_DATASET_PATH, header=False, index=False, encoding="utf-8"
)
"""
Explanation: Finally we will save our data, which is currently in-memory, to disk.
We will create a csv file containing the full dataset and another containing only 1000 articles for development.
Note: It may take a long time to train AutoML on the full dataset, so we recommend to use the sample dataset for the purpose of learning the tool.
End of explanation
"""
sample_title_dataset = title_dataset.sample(n=1000)
sample_title_dataset.source.value_counts()
"""
Explanation: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).
End of explanation
"""
SAMPLE_DATASET_NAME = "titles_sample.csv"
SAMPLE_DATASET_PATH = os.path.join(DATADIR, SAMPLE_DATASET_NAME)
sample_title_dataset.to_csv(
SAMPLE_DATASET_PATH, header=False, index=False, encoding="utf-8"
)
sample_title_dataset.head()
import datetime
import os
import shutil
import pandas as pd
import tensorflow as tf
from tensorflow.keras.callbacks import EarlyStopping, TensorBoard
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.utils import to_categorical
from tensorflow_hub import KerasLayer
print(tf.__version__)
%matplotlib inline
"""
Explanation: Let's write the sample datatset to disk.
End of explanation
"""
MODEL_DIR = f"gs://{BUCKET}/text_models"
"""
Explanation: Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located:
End of explanation
"""
ls $DATADIR
DATASET_NAME = "titles_full.csv"
TITLE_SAMPLE_PATH = os.path.join(DATADIR, DATASET_NAME)
COLUMNS = ["title", "source"]
titles_df = pd.read_csv(TITLE_SAMPLE_PATH, header=None, names=COLUMNS)
titles_df.head()
"""
Explanation: Loading the dataset
As in the previous labs, our dataset consists of titles of articles along with the label indicating from which source these articles have been taken from (GitHub, TechCrunch, or The New York Times):
End of explanation
"""
titles_df.source.value_counts()
"""
Explanation: Let's look again at the number of examples per label to make sure we have a well-balanced dataset:
End of explanation
"""
CLASSES = {"github": 0, "nytimes": 1, "techcrunch": 2}
N_CLASSES = len(CLASSES)
def encode_labels(sources):
classes = [CLASSES[source] for source in sources]
one_hots = to_categorical(classes, num_classes=N_CLASSES)
return one_hots
encode_labels(titles_df.source[:4])
"""
Explanation: Preparing the labels
In this lab, we will use pre-trained TF-Hub embeddings modules for english for the first layer of our models. One immediate
advantage of doing so is that the TF-Hub embedding module will take care for us of processing the raw text.
This also means that our model will be able to consume text directly instead of sequences of integers representing the words.
However, as before, we still need to preprocess the labels into one-hot-encoded vectors:
End of explanation
"""
N_TRAIN = int(len(titles_df) * 0.95)
titles_train, sources_train = (
titles_df.title[:N_TRAIN],
titles_df.source[:N_TRAIN],
)
titles_valid, sources_valid = (
titles_df.title[N_TRAIN:],
titles_df.source[N_TRAIN:],
)
"""
Explanation: Preparing the train/test splits
Let's split our data into train and test splits:
End of explanation
"""
sources_train.value_counts()
sources_valid.value_counts()
"""
Explanation: To be on the safe side, we verify that the train and test splits
have roughly the same number of examples per class.
Since it is the case, accuracy will be a good metric to use to measure
the performance of our models.
End of explanation
"""
X_train, Y_train = titles_train.values, encode_labels(sources_train)
X_valid, Y_valid = titles_valid.values, encode_labels(sources_valid)
X_train[:3]
Y_train[:3]
"""
Explanation: Now let's create the features and labels we will feed our models with:
End of explanation
"""
# TODO 1
NNLM = "https://tfhub.dev/google/nnlm-en-dim50/2"
nnlm_module = KerasLayer(
NNLM, output_shape=[50], input_shape=[], dtype=tf.string, trainable=True
)
"""
Explanation: NNLM Model
We will first try a word embedding pre-trained using a Neural Probabilistic Language Model. TF-Hub has a 50-dimensional one called
nnlm-en-dim50-with-normalization, which also
normalizes the vectors produced.
Once loaded from its url, the TF-hub module can be used as a normal Keras layer in a sequential or functional model. Since we have enough data to fine-tune the parameters of the pre-trained embedding itself, we will set trainable=True in the KerasLayer that loads the pre-trained embedding:
End of explanation
"""
# TODO 1
nnlm_module(tf.constant(["The dog is happy to see people in the street."]))
"""
Explanation: Note that this TF-Hub embedding produces a single 50-dimensional vector when passed a sentence:
End of explanation
"""
# TODO 1
SWIVEL = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim-with-oov/1"
swivel_module = KerasLayer(
SWIVEL, output_shape=[20], input_shape=[], dtype=tf.string, trainable=True
)
"""
Explanation: Swivel Model
Then we will try a word embedding obtained using Swivel, an algorithm that essentially factorizes word co-occurrence matrices to create word embeddings.
TF-Hub hosts the pretrained gnews-swivel-20dim-with-oov 20-dimensional Swivel module.
End of explanation
"""
# TODO 1
swivel_module(tf.constant(["The dog is happy to see people in the street."]))
"""
Explanation: As with the previous pre-trained embedding, it outputs a single 20-dimensional vector when passed a sentence:
End of explanation
"""
def build_model(hub_module, name):
model = Sequential(
[
hub_module, # TODO 2
Dense(16, activation="relu"),
Dense(N_CLASSES, activation="softmax"),
],
name=name,
)
model.compile(
optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"]
)
return model
"""
Explanation: Building the models
Let's write a function that
* takes as input an instance of a KerasLayer (i.e., the swivel_module or the nnlm_module we constructed above) as well as the name of the model (say, swivel or nnlm)
* returns a compiled Keras sequential model starting with this pre-trained TF-Hub layer, adding one or more dense relu layers to it, and ending with a softmax layer giving the probability of each of the classes:
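For reference, the final softmax layer turns the last Dense layer's raw scores (logits) into a probability distribution over the classes. A minimal numpy sketch, illustrative only — the model itself relies on Keras' built-in implementation:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax: shift by the max before exponentiating."""
    z = logits - np.max(logits)
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
print(probs)        # one probability per class, largest for the largest logit
print(probs.sum())  # the probabilities sum to 1
```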
End of explanation
"""
def train_and_evaluate(train_data, val_data, model, batch_size=5000):
X_train, Y_train = train_data
tf.random.set_seed(33)
model_dir = os.path.join(MODEL_DIR, model.name)
if tf.io.gfile.exists(model_dir):
tf.io.gfile.rmtree(model_dir)
history = model.fit(
X_train,
Y_train,
epochs=100,
batch_size=batch_size,
validation_data=val_data,
callbacks=[EarlyStopping(patience=1), TensorBoard(model_dir)],
)
return history
"""
Explanation: Let's also wrap the training code into a train_and_evaluate function that
* takes as input the training and validation data, as well as the compiled model itself and the batch_size
* trains the compiled model for at most 100 epochs, stopping early when the validation loss is no longer decreasing
* returns a history object, which will help us plot the learning curves
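With EarlyStopping(patience=1), training stops as soon as the validation loss has failed to improve for one epoch beyond the best value seen so far. A pure-Python sketch of that logic — illustrative, not Keras' actual implementation:

```python
def early_stopping_epochs(val_losses, patience=1):
    """Return how many epochs run before early stopping kicks in."""
    best = float("inf")
    wait = 0  # epochs since the last improvement
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:  # stop after `patience` stagnant epochs
                return epoch
    return len(val_losses)

early_stopping_epochs([0.9, 0.8, 0.85, 0.7], patience=1)  # stops at epoch 3
```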
End of explanation
"""
data = (X_train, Y_train)
val_data = (X_valid, Y_valid)
nnlm_model = build_model(nnlm_module, "nnlm")
nnlm_history = train_and_evaluate(data, val_data, nnlm_model)
history = nnlm_history
pd.DataFrame(history.history)[["loss", "val_loss"]].plot()
pd.DataFrame(history.history)[["accuracy", "val_accuracy"]].plot()
"""
Explanation: Training NNLM
End of explanation
"""
swivel_model = build_model(swivel_module, name="swivel")
swivel_history = train_and_evaluate(data, val_data, swivel_model)
history = swivel_history
pd.DataFrame(history.history)[["loss", "val_loss"]].plot()
pd.DataFrame(history.history)[["accuracy", "val_accuracy"]].plot()
"""
Explanation: Training Swivel
End of explanation
"""
!echo tensorboard --logdir $MODEL_DIR --port 6006
"""
Explanation: Comparing the models
Swivel trains faster, but reaches a lower validation accuracy and needs more epochs to converge.
Finally, let's compare all the models we have trained using TensorBoard, in order
to choose the one that overfits the least for a given level of performance.
Run the output of the following command in your Cloud Shell to launch TensorBoard, and use the Web Preview on port 6006 to view it.
End of explanation
"""
OUTPUT_DIR = "./savedmodels_vertex"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(OUTPUT_DIR, "swivel")
os.environ["EXPORT_PATH"] = EXPORT_PATH
shutil.rmtree(EXPORT_PATH, ignore_errors=True)
tf.keras.models.save_model(swivel_model, EXPORT_PATH)
"""
Explanation: Deploying the model
The first step is to serialize one of our trained Keras model as a SavedModel:
End of explanation
"""
%%bash
# TODO 5
TIMESTAMP=$(date -u +%Y%m%d_%H%M%S)
MODEL_DISPLAYNAME=title_model_$TIMESTAMP
ENDPOINT_DISPLAYNAME=swivel_$TIMESTAMP
IMAGE_URI="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-3:latest"
ARTIFACT_DIRECTORY=gs://${BUCKET}/${MODEL_DISPLAYNAME}/
echo $ARTIFACT_DIRECTORY
gsutil cp -r ${EXPORT_PATH}/* ${ARTIFACT_DIRECTORY}
# Model
MODEL_RESOURCENAME=$(gcloud ai models upload \
--region=$REGION \
--display-name=$MODEL_DISPLAYNAME \
--container-image-uri=$IMAGE_URI \
--artifact-uri=$ARTIFACT_DIRECTORY \
--format="value(model)")
echo "MODEL_DISPLAYNAME=${MODEL_DISPLAYNAME}"
echo "MODEL_RESOURCENAME=${MODEL_RESOURCENAME}"
# Endpoint
ENDPOINT_RESOURCENAME=$(gcloud ai endpoints create \
--region=$REGION \
--display-name=$ENDPOINT_DISPLAYNAME \
--format="value(name)")
echo "ENDPOINT_DISPLAYNAME=${ENDPOINT_DISPLAYNAME}"
echo "ENDPOINT_RESOURCENAME=${ENDPOINT_RESOURCENAME}"
# Deployment
DEPLOYED_MODEL_DISPLAYNAME=${MODEL_DISPLAYNAME}_deployment
MACHINE_TYPE=n1-standard-2
MIN_REPLICA_COUNT=1
MAX_REPLICA_COUNT=3
gcloud ai endpoints deploy-model $ENDPOINT_RESOURCENAME \
--region=$REGION \
--model=$MODEL_RESOURCENAME \
--display-name=$DEPLOYED_MODEL_DISPLAYNAME \
--machine-type=$MACHINE_TYPE \
--min-replica-count=$MIN_REPLICA_COUNT \
--max-replica-count=$MAX_REPLICA_COUNT \
--traffic-split=0=100
"""
Explanation: Then we can deploy the model using the gcloud CLI as before:
End of explanation
"""
!saved_model_cli show \
--tag_set serve \
--signature_def serving_default \
--dir {EXPORT_PATH}
!find {EXPORT_PATH}
"""
Explanation: Note the ENDPOINT_RESOURCENAME above as you'll need it below for the prediction.
Before we try our deployed model, let's inspect its signature to know what to send to the deployed API:
End of explanation
"""
%%writefile input.json
{
"instances": [
{"keras_layer_1_input": "hello"}
]
}
"""
Explanation: Let's go ahead and hit our model:
End of explanation
"""
%%bash
ENDPOINT_RESOURCENAME= #TODO: insert the ENDPOINT_RESOURCENAME here from above
gcloud ai endpoints predict $ENDPOINT_RESOURCENAME \
--region $REGION \
--json-request input.json
"""
Explanation: In the cell below, insert the ENDPOINT_RESOURCENAME printed by the deployment step above.
End of explanation
"""