repo_name stringlengths 6 77 | path stringlengths 8 215 | license stringclasses 15 values | content stringlengths 335 154k |
|---|---|---|---|
LSST-Supernova-Workshops/Pittsburgh-2016 | Tutorials/Cadence/Cadence_And_OpSim.ipynb | mit | import numpy as np
import pandas as pd
import os
import sqlite3
from sqlalchemy import create_engine
%matplotlib inline
import matplotlib.pyplot as plt
opsimdbpath = os.environ.get('OPSIMDBPATH')
print(opsimdbpath)
engine = create_engine('sqlite:///' + opsimdbpath)
conn = sqlite3.connect(opsimdbpath)
cursor = conn.cursor()
print(cursor)
query = 'SELECT COUNT(*) FROM Summary'
cursor.execute(query)
cursor.fetchall()
opsimdf = pd.read_sql_query('SELECT * FROM Summary WHERE night < 1000', engine)
opsimdf.head()
# Definitions of the columns are
opsimdf[['obsHistID', 'filter', 'night', 'expMJD',
'fieldID', 'fieldRA', 'fieldDec', 'ditheredRA', 'ditheredDec',
'propID', 'fiveSigmaDepth']].head()
opsimdf.propID.unique()
xx = opsimdf.query('fieldID == 316')
xx.head()
"""
Explanation: Cadence:
Look at the output of the OpSim Database
Requirements
sqlite
set up the sims stack before running the notebook (or, at a minimum, have healpy installed)
set up an environment variable OPSIMDBPATH that points to the absolute path of the (unzipped) OpSim Database
The Observing Strategy White Paper
website: https://github.com/LSSTScienceCollaborations/ObservingStrategy
Think about the requirements for different Science Cases.
Suggest an Experiment : (for example, Jeonghee Rho) https://github.com/LSSTScienceCollaborations/ObservingStrategy/blob/master/opsim/README.md
End of explanation
"""
xx.query('propID == 54')
"""
Explanation: Some unexpected issues
End of explanation
"""
test = opsimdf.drop_duplicates()
all(test == opsimdf)
test = opsimdf.drop_duplicates(subset='obsHistID')
len(test) == len(opsimdf)
opsimdf.obsHistID.size
opsimdf.obsHistID.unique().size
test.obsHistID.size
"""
Explanation: How to read the table:
obsHistID indexes a pointing ('fieldRA', 'fieldDec', 'ditheredRA', 'ditheredDec')
Additionally, a pointing may be assigned a propID to describe what the pointing achieves
The meaning of each propID is given in the Proposal table. For minion_1016_sqlite.db, the WFD is 54 and the DDF is 56, but this coding might change.
If a pointing succeeds in two different proposals, this is represented by having two records with the same pointing and different propIDs
End of explanation
"""
import opsimsummary as oss
# Read in the combined summary.
opsimout = oss.OpSimOutput.fromOpSimDB(opsimdbpath)
help(oss.OpSimOutput)
opsimDeep = oss.OpSimOutput.fromOpSimDB(opsimdbpath, subset='DDF')
oss.OpSimOutput.get_allowed_subsets()
odeep = oss.summarize_opsim.SummaryOpsim(opsimDeep.summary)
"""
Explanation: Using OpSimSummary
End of explanation
"""
xx = opsimout.summary.groupby('fieldID').expMJD.agg('count')
fig, ax = plt.subplots()
xx.hist(histtype='step', bins=np.arange(0., 5000, 50.), ax=ax)
ax.set_xlabel('Number of visits to field during survey')
ax.set_ylabel('Number of fields')
#DDF
fig, ax = plt.subplots()
xx.hist(bins=np.arange(15000, 25000, 50), histtype='step')
ax.set_xlabel('Number of visits to field during survey (DDF)')
ax.set_ylabel('Number of fields')
xx[xx > 5000]
# 1000 visits just in terms of exposure is 9 hrs 25 mins
fig, ax = plt.subplots()
xx = opsimout.summary.groupby(['night']).expMJD.agg('count')
xx.hist(histtype='step', bins=np.arange(0., 5000, 100.), ax=ax)
ax.set_xlabel('Number of visits in a night')
"""
Explanation: Recover some familiar numbers from this
End of explanation
"""
|
srcole/qwm | burrito/.ipynb_checkpoints/UNFINISHED_Burrito_correlations-checkpoint.ipynb | mit | %config InlineBackend.figure_format = 'retina'
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
sns.set_style("white")
"""
Explanation: San Diego Burrito Analytics
Scott Cole
23 April 2016
This notebook contains analyses on the burrito ratings in San Diego, including:
* How each metric correlates with one another.
* Linear model of how each dimension contributes to the overall rating
Default imports
End of explanation
"""
filename="burrito_current.csv"
df = pd.read_csv(filename)
N = df.shape[0]
"""
Explanation: Load data
End of explanation
"""
dfcorr = df.corr()
from tools.misc import pearsonp
metricscorr = ['Hunger','Cost','Volume','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Synergy','Wrap','overall']
M = len(metricscorr)
Mcorrmat = np.zeros((M,M))
Mpmat = np.zeros((M,M))
for m1 in range(M):
for m2 in range(M):
if m1 != m2:
Mcorrmat[m1,m2] = dfcorr[metricscorr[m1]][metricscorr[m2]]
Mpmat[m1,m2] = pearsonp(Mcorrmat[m1,m2],N)
from matplotlib import cm
clim1 = (-1,1)
plt.figure(figsize=(12,10))
cax = plt.pcolor(range(M+1), range(M+1), Mcorrmat, cmap=cm.bwr)
cbar = plt.colorbar(cax, ticks=(-1,-.5,0,.5,1))
cbar.ax.set_ylabel('Pearson correlation (r)', size=30)
plt.clim(clim1)
cbar.ax.set_yticklabels((-1,-.5,0,.5,1),size=20)
#plt.axis([2, M+1, floall[0],floall[-1]+10])
ax = plt.gca()
ax.set_yticks(np.arange(M)+.5)
ax.set_yticklabels(metricscorr,size=25)
ax.set_xticks(np.arange(M)+.5)
ax.set_xticklabels(metricscorr,size=25)
plt.xticks(rotation='vertical')
plt.tight_layout()
plt.xlim((0,M))
plt.ylim((0,M))
figname = 'metriccorrmat'
plt.savefig('C:/Users/Scott/Google Drive/qwm/burritos/figs/'+figname + '.png')
sp.stats.pearsonr(df.Hunger,df.overall)
print(Mpmat[0])
print(Mcorrmat[0])
"""
Explanation: Metric correlations
End of explanation
"""
plt.figure(figsize=(4,4))
ax = plt.gca()
df.plot(kind='scatter',x='Cost',y='Volume',ax=ax,**{'s':40,'color':'k'})
plt.xlabel('Cost ($)',size=20)
plt.ylabel('Volume (L)',size=20)
plt.xticks(np.arange(5,11),size=15)
plt.yticks(np.arange(.6,1.2,.1),size=15)
plt.tight_layout()
print(df.corr()['Cost']['Volume'])
from tools.misc import pearsonp
print(pearsonp(df.corr()['Cost']['Volume'], len(df[['Cost','Volume']].dropna())))
figname = 'corr-volume-cost'
plt.savefig('C:/Users/Scott/Google Drive/qwm/burritos/figs/'+figname + '.png')
# Visualize some correlations
from tools.plt import scatt_corr
scatt_corr(df['overall'].values,df['Meat'].values,
xlabel = 'overall rating', ylabel='meat rating', xlim = (-.5,5.5),ylim = (-.5,5.5),xticks=range(6),yticks=range(6))
#showline = True)
scatt_corr(df['overall'].values,df['Wrap'].values,
xlabel = 'overall rating', ylabel='wrap integrity rating', xlim = (-.5,5.5),ylim = (-.5,5.5),xticks=range(6),yticks=range(6))
#showline = True)
"""
Explanation: Negative correlation: Cost and volume
End of explanation
"""
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
# Get all ingredient keys
startingredients = 29
ingredientkeys = df.keys()[startingredients:]
# Get all ingredient keys with at least 10 burritos
Nlim = 10
ingredientkeys = ingredientkeys[df.count()[startingredients:].values>=Nlim]
# Make a dataframe for all ingredient keys
dfing = df[ingredientkeys]
# For each key, make binary
for k in dfing.keys():
dfing[k] = dfing[k].map({'x':1,'X':1,1:1})
dfing[k] = dfing[k].fillna(0)
import statsmodels.api as sm
X = sm.add_constant(dfing)
y = df.overall
lm = sm.GLM(y,X)
res = lm.fit()
print(res.summary())
origR2 = 1 - np.var(res.resid_pearson) / np.var(y)
print(origR2)
Nsurr = 1000
randr2 = np.zeros(Nsurr)
for n in range(Nsurr):
Xrand = np.random.rand(X.shape[0],X.shape[1])
Xrand[:,0] = np.ones(X.shape[0])
lm = sm.GLM(y,Xrand)
res = lm.fit()
randr2[n] = 1 - np.var(res.resid_pearson) / np.var(y)
print('p =', np.mean(randr2 > origR2))
# Is this a null result? let's do t-tests
for k in dfing.keys():
withk = df.overall[dfing[k].values==1].values
nok = df.overall[dfing[k].values==0].values
print(k)
print(sp.stats.ttest_ind(withk, nok))
"""
Explanation: Linear regression: ingredients
End of explanation
"""
plt.figure(figsize=(4,4))
ax = plt.gca()
df.plot(kind='scatter',x='Meat',y='Fillings',ax=ax,**{'s':40,'color':'k','alpha':.1})
plt.xlabel('Meat rating',size=20)
plt.ylabel('Non-meat rating',size=20)
plt.xticks(np.arange(0,6),size=15)
plt.yticks(np.arange(0,6),size=15)
print(df.corr()['Meat']['Fillings'])
from tools.misc import pearsonp
print(pearsonp(df.corr()['Meat']['Fillings'], len(df[['Meat','Fillings']].dropna())))
figname = 'corr-meat-filling'
plt.savefig('C:/Users/Scott/Google Drive/qwm/burritos/figs/'+figname + '.png')
# How many burritos at taco stand?
restrictCali = False
import re
reTS = re.compile('.*taco stand.*', re.IGNORECASE)
reCali = re.compile('.*cali.*', re.IGNORECASE)
locTS = np.ones(len(df))
for i in range(len(df)):
mat = reTS.match(df['Location'][i])
if mat is None:
locTS[i] = 0
else:
if restrictCali:
mat = reCali.match(df['Burrito'][i])
if mat is None:
locTS[i] = 0
print(sum(locTS))
temp = np.arange(len(df))
dfTS = df.loc[temp[locTS==1]]
plt.figure(figsize=(4,4))
ax = plt.gca()
dfTS.plot(kind='scatter',x='Meat',y='Fillings',ax=ax,**{'s':40,'color':'k','alpha':.1})
plt.xlabel('Meat rating',size=20)
plt.ylabel('Non-meat rating',size=20)
plt.xticks(np.arange(0,6),size=15)
plt.yticks(np.arange(0,6),size=15)
print(dfTS.corr()['Meat']['Fillings'])
from tools.misc import pearsonp
print(pearsonp(dfTS.corr()['Meat']['Fillings'], len(dfTS[['Meat','Fillings']].dropna())))
figname = 'corr-meat-filling-TS'
plt.savefig('C:/Users/Scott/Google Drive/qwm/burritos/figs/'+figname + '.png')
# Spearman correlation
dfMF = df[['Meat','Fillings']].dropna()
dfTSMF = dfTS[['Meat','Fillings']].dropna()
print(sp.stats.spearmanr(dfMF.Meat, dfMF.Fillings))
print(sp.stats.spearmanr(dfTSMF.Meat, dfTSMF.Fillings))
"""
Explanation: Taco Stand case study: Meat-fillings correlation
End of explanation
"""
plt.figure(figsize=(4,4))
ax = plt.gca()
df.plot(kind='scatter',x='Hunger',y='overall',ax=ax,**{'s':40,'color':'k'})
plt.xlabel('Hunger',size=20)
plt.ylabel('Overall rating',size=20)
plt.xticks(np.arange(0,6),size=15)
plt.yticks(np.arange(0,6),size=15)
print(df.corr()['Hunger']['overall'])
from tools.misc import pearsonp
print(pearsonp(df.corr()['Hunger']['overall'], len(df[['Hunger','overall']].dropna())))
figname = 'corr-hunger-overall'
plt.savefig('C:/Users/Scott/Google Drive/qwm/burritos/figs/'+figname + '.png')
"""
Explanation: Hunger level slightly correlated to overall
End of explanation
"""
# GLM for overall rating
# Remove NANs
mainD = ['Hunger','Cost','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Wrap']
dffull = df[np.hstack((mainD,'overall'))].dropna()
X = sm.add_constant(dffull[mainD])
y = dffull['overall']
import statsmodels.api as sm
my_glm = sm.GLM(y,X)
res = my_glm.fit()
print(res.summary())
print(1 - np.var(res.resid_pearson) / np.var(y))
# Linear regression
# Note that this matches GLM above :D
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
lm.fit(X,y)
print(lm.intercept_)
print(lm.coef_)
print('R2 =', lm.score(X, y))
# Visualize coefficients
from tools.plt import bar
newidx = np.argsort(-res.params.values)
temp = np.arange(len(newidx))
newidx = np.delete(newidx,temp[newidx==0])
bar(res.params[newidx],res.bse[newidx],X.keys()[newidx],'Overall rating\nLinear model\ncoefficient',
ylim =(0,.5),figsize=(11,3))
plt.plot()
figname = 'overall_metric_linearmodelcoef'
plt.savefig('C:/Users/Scott/Google Drive/qwm/burritos/figs/'+figname + '.png')
sp.stats.pearsonr(X['Synergy'],y)[0]**2
"""
Explanation: Model overall as a function of each main dimension
End of explanation
"""
# Average each metric over each Location
# Avoid case issues; in the future should avoid article issues
df.Location = df.Location.str.lower()
m_Location = ['Location','N','Yelp','Google','Hunger','Cost','Volume','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Synergy','Wrap','overall']
tacoshops = df.Location.unique()
TS = len(tacoshops)
dfmean = pd.DataFrame(np.nan, index=range(TS), columns=m_Location)
dfmean.Location = tacoshops
for ts in range(TS):
dfmean['N'][ts] = sum(df.Location == tacoshops[ts])
for m in m_Location[2:]:
dfmean[m][ts] = df[m].loc[df.Location==tacoshops[ts]].mean()
metricscorr = ['Yelp','Google','Hunger','Cost','Volume','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Synergy','Wrap','overall']
M = len(metricscorr)
dfmeancorr = dfmean.corr()
Mcorrmat = np.zeros((M,M))
Mpmat = np.zeros((M,M))
for m1 in range(M):
for m2 in range(M):
if m1 != m2:
Mcorrmat[m1,m2] = dfmeancorr[metricscorr[m1]][metricscorr[m2]]
Mpmat[m1,m2] = pearsonp(Mcorrmat[m1,m2],N)
clim1 = (-1,1)
plt.figure(figsize=(10,10))
cax = plt.pcolor(range(M+1), range(M+1), Mcorrmat, cmap=cm.bwr)
cbar = plt.colorbar(cax, ticks=(-1,-.5,0,.5,1))
cbar.ax.set_ylabel('Pearson correlation (r)', size=30)
plt.clim(clim1)
cbar.ax.set_yticklabels((-1,-.5,0,.5,1),size=20)
#plt.axis([2, M+1, floall[0],floall[-1]+10])
ax = plt.gca()
ax.set_yticks(np.arange(M)+.5)
ax.set_yticklabels(metricscorr,size=25)
ax.set_xticks(np.arange(M)+.5)
ax.set_xticklabels(metricscorr,size=9)
plt.tight_layout()
print(Mcorrmat[0])
print(Mpmat[0])
# GLM for Yelp
mainDo = ['Hunger','Cost','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Synergy','Wrap','overall']
dffull = dfmean[np.hstack((mainDo,'Yelp'))].dropna()
X = sm.add_constant(dffull[mainDo])
y = dffull['Yelp']
import statsmodels.api as sm
my_glm = sm.GLM(y,X)
res = my_glm.fit()
print(res.summary())
print(res.pvalues)
print(1 - np.var(res.resid_pearson) / np.var(y))
plt.figure(figsize=(4,4))
ax = plt.gca()
df.plot(kind='scatter',x='Tortilla',y='Yelp',ax=ax,**{'s':40,'color':'k','alpha':.1})
plt.xlabel('Tortilla rating',size=20)
plt.ylabel('Yelp rating',size=20)
plt.xticks(np.arange(0,6),size=15)
plt.yticks(np.arange(0,6),size=15)
plt.ylim((2,5))
print(df.corr()['Yelp']['Tortilla'])
from tools.misc import pearsonp
print(pearsonp(df.corr()['Yelp']['Tortilla'], len(df[['Yelp','Tortilla']].dropna())))
figname = 'corr-Yelp-tortilla'
plt.savefig('C:/Users/Scott/Google Drive/qwm/burritos/figs/'+figname + '.png')
plt.figure(figsize=(4,4))
ax = plt.gca()
df.plot(kind='scatter',x='overall',y='Yelp',ax=ax,**{'s':40,'color':'k','alpha':.1})
plt.xlabel('Overall rating',size=20)
plt.ylabel('Yelp rating',size=20)
plt.xticks(np.arange(0,6),size=15)
plt.yticks(np.arange(0,6),size=15)
plt.ylim((2,5))
print(df.corr()['Yelp']['overall'])
from tools.misc import pearsonp
print(pearsonp(df.corr()['Yelp']['overall'], len(df[['Yelp','overall']].dropna())))
figname = 'corr-Yelp-overall'
plt.savefig('C:/Users/Scott/Google Drive/qwm/burritos/figs/'+figname + '.png')
# GLM for Google
mainDo = ['Hunger','Cost','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Synergy','Wrap','overall']
dffull = df[np.hstack((mainDo,'Google'))].dropna()
X = sm.add_constant(dffull[mainDo])
y = dffull['Google']
import statsmodels.api as sm
my_glm = sm.GLM(y,X)
res = my_glm.fit()
print(res.summary())
print(res.pvalues)
print(1 - np.var(res.resid_pearson) / np.var(y))
"""
Explanation: Yelp and Google
End of explanation
"""
# Identify california burritos
def caliburritoidx(x):
import re
idx = []
for b in range(len(x)):
re4str = re.compile('.*cali.*', re.IGNORECASE)
if re4str.match(x[b]) is not None:
idx.append(b)
return idx
caliidx = caliburritoidx(df.Burrito)
Ncaliidx = np.arange(len(df))
Ncaliidx = np.delete(Ncaliidx,caliidx)
met_Cali = ['Hunger','Volume','Cost','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Synergy','Wrap','overall']
for k in met_Cali:
Mcali = df[k][caliidx].dropna()
MNcali = df[k][Ncaliidx].dropna()
print(k)
print(sp.stats.ttest_ind(Mcali, MNcali))
"""
Explanation: Cali burritos vs. other burritos
End of explanation
"""
df_Scott = df[df.Reviewer=='Scott']
idx_Scott = df_Scott.index.values
idx_NScott = np.arange(len(df))
idx_NScott = np.delete(idx_NScott,idx_Scott)
burritos_Scott = df.loc[df_Scott.index.values]['Burrito']
dfScorr = df_Scott.corr()
metricscorr = ['Yelp','Google','Hunger','Cost','Volume','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Synergy','Wrap','overall']
M = len(metricscorr)
Mcorrmat = np.zeros((M,M))
Mpmat = np.zeros((M,M))
for m1 in range(M):
for m2 in range(M):
if m1 != m2:
Mcorrmat[m1,m2] = dfScorr[metricscorr[m1]][metricscorr[m2]]
Mpmat[m1,m2] = pearsonp(Mcorrmat[m1,m2],N)
clim1 = (-1,1)
plt.figure(figsize=(10,10))
cax = plt.pcolor(range(M+1), range(M+1), Mcorrmat, cmap=cm.bwr)
cbar = plt.colorbar(cax, ticks=(-1,-.5,0,.5,1))
cbar.ax.set_ylabel('Pearson correlation (r)', size=30)
plt.clim(clim1)
cbar.ax.set_yticklabels((-1,-.5,0,.5,1),size=20)
#plt.axis([2, M+1, floall[0],floall[-1]+10])
ax = plt.gca()
ax.set_yticks(np.arange(M)+.5)
ax.set_yticklabels(metricscorr,size=25)
ax.set_xticks(np.arange(M)+.5)
ax.set_xticklabels(metricscorr,size=9)
plt.tight_layout()
# Try to argue that me sampling a bunch of burritos is equivalent to a bunch of people sampling burritos
# you would not be able to tell if a rated burrito was by me or someone else.
# Tests:
# 1. Means of each metric are the same
# 2. Metric correlations are the same (between each quality and overall)
# 3. Do I like Cali burritos more than other people?
# 1. Metric means are the same: I give my meat and meat:filling lower ratings
met_Scott = ['Hunger','Volume','Cost','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Synergy','Wrap','overall']
for k in met_Scott:
Msc = df[k][idx_Scott].dropna()
MNsc = df[k][idx_NScott].dropna()
print(k)
print(sp.stats.ttest_ind(Msc, MNsc))
"""
Explanation: Independence of each dimension
End of explanation
"""
|
google-research/google-research | evolution/regularized_evolution_algorithm/regularized_evolution.ipynb | apache-2.0 | DIM = 100 # Number of bits in the bit strings (i.e. the "models").
NOISE_STDEV = 0.01 # Standard deviation of the simulated training noise.
class Model(object):
"""A class representing a model.
It holds two attributes: `arch` (the simulated architecture) and `accuracy`
(the simulated accuracy / fitness). See Appendix C for an introduction to
this toy problem.
In the real case of neural networks, `arch` would instead hold the
architecture of the normal and reduction cells of a neural network and
accuracy would be instead the result of training the neural net and
evaluating it on the validation set.
We do not include test accuracies here as they are not used by the algorithm
in any way. In the case of real neural networks, the test accuracy is only
used for the purpose of reporting / plotting final results.
In the context of evolutionary algorithms, a model is often referred to as
an "individual".
Attributes:
arch: the architecture as an int representing a bit-string of length `DIM`.
As a result, the integers are required to be less than `2**DIM`. They
can be visualized as strings of 0s and 1s by calling `print(model)`,
where `model` is an instance of this class.
accuracy: the simulated validation accuracy. This is the sum of the
bits in the bit-string, divided by DIM to produce a value in the
interval [0.0, 1.0]. After that, a small amount of Gaussian noise is
added with mean 0.0 and standard deviation `NOISE_STDEV`. The resulting
number is clipped to within [0.0, 1.0] to produce the final validation
accuracy of the model. A given model will have a fixed validation
accuracy but two models that have the same architecture will generally
have different validation accuracies due to this noise. In the context
of evolutionary algorithms, this is often known as the "fitness".
"""
def __init__(self):
self.arch = None
self.accuracy = None
def __str__(self):
"""Prints a readable version of this bitstring."""
return '{0:b}'.format(self.arch)
def train_and_eval(arch):
"""Simulates training and evaluation.
Computes the simulated validation accuracy of the given architecture. See
the `accuracy` attribute in `Model` class for details.
Args:
arch: the architecture as an int representing a bit-string.
"""
accuracy = float(_sum_bits(arch)) / float(DIM)
accuracy += random.gauss(mu=0.0, sigma=NOISE_STDEV)
accuracy = 0.0 if accuracy < 0.0 else accuracy
accuracy = 1.0 if accuracy > 1.0 else accuracy
return accuracy
def _sum_bits(arch):
"""Returns the number of 1s in the bit string.
Args:
arch: an int representing the bit string.
"""
total = 0
for _ in range(DIM):
total += arch & 1
arch = (arch >> 1)
return total
"""
Explanation: Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Summary
This document contains the code of the regularized evolution (i.e. aging evolution) algorithm used in:
E. Real, A. Aggarwal, Y. Huang, and Q. V. Le 2018. Regularized Evolution for Image Classifier Architecture Search.
It demonstrates the application of the algorithm in a toy search space that can be run quickly in a single machine.
Toy Search Space
For a simple problem to use for illustration purposes, we use the toy search space of appendix C. In it, toy models have a simulated architecture (bit-string) and a simulated accuracy (the number of 1s in the bit-string, with some additional random noise), as described below.
End of explanation
"""
import random
def random_architecture():
"""Returns a random architecture (bit-string) represented as an int."""
return random.randint(0, 2**DIM - 1)
def mutate_arch(parent_arch):
"""Computes the architecture for a child of the given parent architecture.
The parent architecture is cloned and mutated to produce the child
architecture. The child architecture is mutated by flipping a randomly chosen
bit in its bit-string.
Args:
parent_arch: an int representing the architecture (bit-string) of the
parent.
Returns:
An int representing the architecture (bit-string) of the child.
"""
position = random.randint(0, DIM - 1) # Index of the bit to flip.
# Flip the bit at position `position` in `child_arch`.
child_arch = parent_arch ^ (1 << position)
return child_arch
"""
Explanation: Search Space Traversal
Below we define a function to generate random architectures in the basic search space and a function to mutate architectures within the search space.
End of explanation
"""
import collections
import random
def regularized_evolution(cycles, population_size, sample_size):
"""Algorithm for regularized evolution (i.e. aging evolution).
Follows "Algorithm 1" in Real et al. "Regularized Evolution for Image
Classifier Architecture Search".
Args:
cycles: the number of cycles the algorithm should run for.
population_size: the number of individuals to keep in the population.
sample_size: the number of individuals that should participate in each
tournament.
Returns:
history: a list of `Model` instances, representing all the models computed
during the evolution experiment.
"""
population = collections.deque()
history = [] # Not used by the algorithm, only used to report results.
# Initialize the population with random models.
while len(population) < population_size:
model = Model()
model.arch = random_architecture()
model.accuracy = train_and_eval(model.arch)
population.append(model)
history.append(model)
# Carry out evolution in cycles. Each cycle produces a model and removes
# another.
while len(history) < cycles:
# Sample randomly chosen models from the current population.
sample = []
while len(sample) < sample_size:
# Inefficient, but written this way for clarity. In the case of neural
# nets, the efficiency of this line is irrelevant because training neural
# nets is the rate-determining step.
candidate = random.choice(list(population))
sample.append(candidate)
# The parent is the best model in the sample.
parent = max(sample, key=lambda i: i.accuracy)
# Create the child model and store it.
child = Model()
child.arch = mutate_arch(parent.arch)
child.accuracy = train_and_eval(child.arch)
population.append(child)
history.append(child)
# Remove the oldest model.
population.popleft()
return history
"""
Explanation: Regularized Evolution Algorithm
The regularized evolution (i.e. aging evolution) algorithm can be written as:
End of explanation
"""
history = regularized_evolution(
cycles=1000, population_size=100, sample_size=10)
"""
Explanation: Example Run on the Toy Search Space
We run the algorithm for 1000 samples, setting the population size and sample size parameters to reasonable values.
End of explanation
"""
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import seaborn as sns
sns.set_style('white')
xvalues = range(len(history))
yvalues = [i.accuracy for i in history]
ax = plt.gca()
ax.scatter(
xvalues, yvalues, marker='.', facecolor=(0.0, 0.0, 0.0),
edgecolor=(0.0, 0.0, 0.0), linewidth=1, s=1)
ax.xaxis.set_major_locator(ticker.LinearLocator(numticks=2))
ax.xaxis.set_major_formatter(ticker.ScalarFormatter())
ax.yaxis.set_major_locator(ticker.LinearLocator(numticks=2))
ax.yaxis.set_major_formatter(ticker.ScalarFormatter())
fig = plt.gcf()
fig.set_size_inches(8, 6)
fig.tight_layout()
ax.tick_params(
axis='x', which='both', bottom='on', top='off', labelbottom='on',
labeltop='off', labelsize=14, pad=10)
ax.tick_params(
axis='y', which='both', left='on', right='off', labelleft='on',
labelright='off', labelsize=14, pad=5)
plt.xlabel('Number of Models Evaluated', labelpad=-16, fontsize=16)
plt.ylabel('Accuracy', labelpad=-30, fontsize=16)
plt.xlim(0, 1000)
sns.despine()
"""
Explanation: We plot the progress of the experiment, showing how the accuracy improved gradually:
End of explanation
"""
|
dh7/ML-Tutorial-Notebooks | tf-image-generation.ipynb | bsd-2-clause | import numpy as np
import tensorflow as tf
"""
Explanation: TensorFlow to create useless images
To learn how to encode a simple image and a GIF
Imports needed for TensorFlow
End of explanation
"""
%matplotlib notebook
import matplotlib
import matplotlib.pyplot as plt
from IPython.display import Image
"""
Explanation: Imports needed for Jupyter
End of explanation
"""
def write_png(tensor, name):
casted_to_uint8 = tf.cast(tensor, tf.uint8)
converted_to_png = tf.image.encode_png(casted_to_uint8)
f = open(name, "wb+")
f.write(converted_to_png.eval())
f.close()
"""
Explanation: A function to save a picture
End of explanation
"""
class CostTrace:
"""A simple example class"""
def __init__(self):
self.cost_array = []
def log(self, cost):
self.cost_array.append(cost)
def draw(self):
plt.figure(figsize=(12,5))
plt.plot(range(len(self.cost_array)), self.cost_array, label='cost')
plt.legend()
plt.yscale('log')
plt.show()
"""
Explanation: A function to draw the cost function in Jupyter
End of explanation
"""
# Init size
width = 100
height = 100
RGB = 3
shape = [height,width, RGB]
# Create the generated tensor as a variable
rand_uniform = tf.random_uniform(shape, minval=0, maxval=255, dtype=tf.float32)
generated = tf.Variable(rand_uniform)
#define the cost function
c_mean = tf.reduce_mean(tf.pow(generated,2)) # we want a low mean
c_max = tf.reduce_max(generated) # we want a low max
c_min = -tf.reduce_min(generated) # we want a high mix
c_diff = 0
for i in range(0,height-1, 1):
line1 = tf.gather(generated, i,)
line2 = tf.gather(generated, i+1)
c_diff += tf.reduce_mean(tf.pow(line1-line2-30, 2)) # to force a gradient
cost = c_mean + c_max + c_min + c_diff
#cost = c_mean + c_diff
print ('cost defined')
train_op = tf.train.GradientDescentOptimizer(0.5).minimize(cost, var_list=[generated])
print ('train_op defined')
# Initializing the variables
init = tf.initialize_all_variables()
print ('variables initialiazed defined')
# Launch the graph
with tf.Session() as sess:
sess.run(init)
print ('init done')
cost_trace = CostTrace()
for epoch in range(0,10000):
sess.run(train_op)
if (epoch % 100 == 0):
c = cost.eval()
print ('epoch', epoch,'cost' ,c, c_mean.eval(), c_min.eval(), c_max.eval(), c_diff.eval())
cost_trace.log(c)
write_png(generated, "generated{:06}.png".format(epoch))
print ('all done')
cost_trace.draw()
Image("generated000000.png")
"""
Explanation: Create some random pictures
Encode the input (a number)
This example converts the number to a binary representation
End of explanation
"""
from PIL import Image, ImageSequence
import glob, sys, os
os.chdir(".")
frames = []
for file in glob.glob("gene*.png"):
print(file)
im = Image.open(file)
frames.append(im)
from images2gif import writeGif
writeGif("generated.gif", frames, duration=0.1)
"""
Explanation: To create a GIF
End of explanation
"""
|
james-prior/cohpy | 20150601-dojo-in-membership-tests-lists-tuples-sets-dicts.ipynb | mit | n = 10**7
a = list(range(n))
b = tuple(a)
c = set(a)
d = dict(zip(a, a))
5 in a, 5 in b, 5 in c, 5 in d
i = n/2
%timeit i in a
%timeit i in b
"""
Explanation: This notebook explores how fast `in` membership tests are for lists, tuples, sets, and dictionaries.
See simplified conclusion at bottom of notebook.
The Mighty Dictionary (#55)
All Your Ducks In A Row: Data Structures in the Standard Library and Beyond
End of explanation
"""
%timeit i in c
730e-3 / 289e-9
"""
Explanation: Determining if something is in the tuple
takes about the same time as determining if something is in the list.
End of explanation
"""
%timeit i in d
730e-3 / 295e-9
"""
Explanation: Determining if something is in the set was about 2.5 million times faster
than determining if something is in the list.
End of explanation
"""
|
tensorflow/docs-l10n | site/es-419/tutorials/keras/classification.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
"""
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
"""
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
"""
Explanation: Basic classification: Predict an image of clothing
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/keras/classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/es-419/tutorials/keras/classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/es-419/tutorials/keras/classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/es-419/tutorials/keras/classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Note: The TensorFlow community has translated these documents. Because community translations
are best-effort, there is no guarantee that they are an accurate and up-to-date reflection
of the official English documentation.
If you have suggestions on how to improve this translation, please send a pull request
to the tensorflow/docs repository.
To volunteer to write or review community translations, please contact the docs@tensorflow.org list.
This guide trains a neural network model to classify images of clothing, such as sneakers and shirts. It is fine if you do not understand every detail; this is a fast-paced overview of a complete TensorFlow program, with the details explained as you go.
This guide uses tf.keras, a high-level API for building and training models in TensorFlow.
End of explanation
"""
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
"""
Explanation: Import the Fashion MNIST dataset
This guide uses the Fashion MNIST dataset,
which contains 70,000 images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here:
<table>
<tr><td>
<img src="https://tensorflow.org/images/fashion-mnist-sprite.png"
alt="Fashion MNIST sprite" width="600">
</td></tr>
<tr><td align="center">
<b>Figure 1.</b> <a href="https://github.com/zalandoresearch/fashion-mnist">Fashion-MNIST samples</a> (by Zalando, MIT License).<br/>
</td></tr>
</table>
Fashion MNIST is intended as a drop-in replacement for the classic MNIST dataset,
often used as the "Hello, World" of machine learning programs for computer vision. The MNIST dataset contains images of handwritten digits (0, 1, 2, etc.) in a format identical to that of the articles of clothing you will use here.
This guide uses Fashion MNIST for variety, and because it is a slightly more challenging problem than regular MNIST. Both datasets are relatively small and are used to verify that an algorithm works as expected.
Here, 60,000 images are used to train the neural network and 10,000 images are used to evaluate how accurately the network learned to classify images. You can access the Fashion MNIST dataset directly from TensorFlow. To import and load the Fashion MNIST data directly from TensorFlow:
End of explanation
"""
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
"""
Explanation: Loading the dataset returns four NumPy arrays:
The train_images and train_labels arrays are the training set: the data the model uses to learn.
The model is tested against the test set: the test_images and test_labels arrays.
The images are 28x28 NumPy arrays, with pixel values ranging from 0 to 255. The labels are an array of integers, ranging from 0 to 9. These correspond to the class of clothing the image represents.
<table>
<tr>
<th>Label</th>
<th>Class</th>
</tr>
<tr>
<td>0</td>
<td>T-shirt/top</td>
</tr>
<tr>
<td>1</td>
<td>Trouser</td>
</tr>
<tr>
<td>2</td>
<td>Pullover</td>
</tr>
<tr>
<td>3</td>
<td>Dress</td>
</tr>
<tr>
<td>4</td>
<td>Coat</td>
</tr>
<tr>
<td>5</td>
<td>Sandal</td>
</tr>
<tr>
<td>6</td>
<td>Shirt</td>
</tr>
<tr>
<td>7</td>
<td>Sneaker</td>
</tr>
<tr>
<td>8</td>
<td>Bag</td>
</tr>
<tr>
<td>9</td>
<td>Ankle boot</td>
</tr>
</table>
Each image is mapped to a single label. Since the class names are not included with the dataset, store them here to use later when plotting the images:
End of explanation
"""
train_images.shape
"""
Explanation: Explore the data
Explore the format of the dataset before training the model. The following shows that there are 60,000 images in the training set, with each image represented as 28 x 28 pixels:
End of explanation
"""
len(train_labels)
"""
Explanation: Likewise, there are 60,000 labels in the training set:
End of explanation
"""
train_labels
"""
Explanation: Each label is an integer between 0 and 9:
End of explanation
"""
test_images.shape
"""
Explanation: There are 10,000 images in the test set. Again, each image is represented as 28 x 28 pixels:
End of explanation
"""
len(test_labels)
"""
Explanation: And the test set contains 10,000 image labels:
End of explanation
"""
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
"""
Explanation: Preprocess the data
The data must be preprocessed before training the network. If you inspect the first image in the training set, you will see that the pixel values fall in the range of 0 to 255:
End of explanation
"""
train_images = train_images / 255.0
test_images = test_images / 255.0
"""
Explanation: Scale these values to a range of 0 to 1 before feeding them to the neural network model. To do so, divide the values by 255. It is important that the training set and the testing set are preprocessed in the same way:
End of explanation
"""
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
"""
Explanation: To verify that the data is in the correct format and that you are ready to build and train the network, let's display the first 25 images from the training set and show the class name below each image.
End of explanation
"""
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10, activation='softmax')
])
"""
Explanation: Build the model
Building the neural network requires configuring the layers of the model and then compiling the model.
Set up the layers
The basic building blocks of a neural network are the layers. Layers extract representations from the data fed into them. Hopefully, these representations are meaningful for the problem at hand.
Most of deep learning consists of chaining together simple layers.
Most layers, such as tf.keras.layers.Dense, have parameters that are learned during training.
End of explanation
"""
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
"""
Explanation: The first layer in this network, tf.keras.layers.Flatten,
transforms the format of the images from a two-dimensional array (of 28 by 28 pixels) to a one-dimensional array (of 28*28 = 784 pixels). Think of this layer as unstacking rows of pixels in the image and lining them up. This layer has no parameters to learn; it only reformats the data.
After the pixels are flattened, the network consists of a sequence of two tf.keras.layers.Dense layers. These are densely connected, or fully connected, neural layers. The first Dense layer has 128 nodes (or neurons). The second (and last) layer is a 10-node softmax layer that returns an array of 10 probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the 10 classes.
Compile the model
Before the model is ready for training, it needs a few more settings. These are added during the model's compile step:
Loss function: This measures how accurate the model is during training. You want to minimize this function to steer the model in the right direction.
Optimizer: This is how the model is updated based on the data it sees and its loss function.
Metrics: Used to monitor the training and testing steps.
The following example uses accuracy, the fraction of the images that are correctly classified.
End of explanation
"""
model.fit(train_images, train_labels, epochs=10)
"""
Explanation: Train the model
Training the neural network model requires the following steps:
Feed the training data to the model. In this example, the training data is in the train_images and train_labels arrays.
The model learns to associate images and labels.
You ask the model to make predictions about a test set, which in this example is the test_images array. Verify that the predictions match the labels from the test_labels array.
To start training, call the model.fit method; it is called this because it "fits" the model to the training data:
End of explanation
"""
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
"""
Explanation: As the model trains, the loss and accuracy metrics are displayed. This model reaches an accuracy of about 0.88 (or 88%) on the training data.
Evaluate accuracy
Next, compare how the model performs on the test dataset:
End of explanation
"""
predictions = model.predict(test_images)
"""
Explanation: It turns out that the accuracy on the test dataset is a little less than the accuracy on the training dataset. This gap between training accuracy and test accuracy is due to overfitting. Overfitting happens when a machine learning (ML) model performs worse on new, previously unseen data than it does on the training data.
Make predictions
With the model trained, you can use it to make predictions about images.
End of explanation
"""
predictions[0]
"""
Explanation: Here, the model has predicted the label for each image in the test set. Let's take a look at the first prediction:
End of explanation
"""
np.argmax(predictions[0])
"""
Explanation: A prediction is an array of 10 numbers. These represent the model's "confidence" that the image corresponds to each of the 10 different articles of clothing. You can check which label has the highest confidence value:
End of explanation
"""
test_labels[0]
"""
Explanation: So, the model is most confident that this image is an ankle boot, or class_names[9]. Examining the test labels shows that this classification is correct:
End of explanation
"""
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array, true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array, true_label[i]
plt.grid(False)
plt.xticks(range(10))
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
"""
Explanation: Graph this to look at the full set of 10 class predictions.
End of explanation
"""
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
"""
Explanation: Let's look at image [0], its predictions, and the prediction array. Correct prediction labels are blue and incorrect prediction labels are red. The number gives the percentage (out of 100) for the predicted label.
End of explanation
"""
# Plot the first X test images, their predicted labels, and the true labels.
# Color correct predictions in blue and incorrect predictions in red.
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions[i], test_labels)
plt.tight_layout()
plt.show()
"""
Explanation: Let's plot several images with their predictions. Note that the model can be wrong even when it is very confident.
End of explanation
"""
# Grab an image from the test dataset.
img = test_images[1]
print(img.shape)
"""
Explanation: Finally, use the trained model to make a prediction about a single image.
End of explanation
"""
# Add the image to a batch where it's the only member.
img = (np.expand_dims(img,0))
print(img.shape)
"""
Explanation: tf.keras models are optimized to make predictions on a batch,
or collection, of examples at once.
Accordingly, even though you are using a single image, you need to add it to a list:
End of explanation
"""
predictions_single = model.predict(img)
print(predictions_single)
plot_value_array(1, predictions_single[0], test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
"""
Explanation: Now predict the correct label for this image:
End of explanation
"""
np.argmax(predictions_single[0])
"""
Explanation: model.predict returns a list of lists, one list for each image in the batch of data. Grab the prediction for our (only) image in the batch:
End of explanation
"""
|
ga7g08/ga7g08.github.io | _notebooks/2016-10-19-Basic-primer-on-MCMC.ipynb | mit | %matplotlib inline
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.stats import norm, uniform
sns.set_style('white')
sns.set_context('talk')
np.random.seed(123)
"""
Explanation: Primer on Markov Chain Monte Carlo (MCMC) sampling
This is my simplified version of the (much better) blog post by Thomas Wiecki, which can be found here. I wrote this because I wanted to highlight some different points and simplify others.
Bayes' formula and the aim of MCMC sampling
Defining $x$ as some data and $\theta$ as model-parameters (of some as yet undefined model), Bayes' formula is
$$ P(\theta | x) = \frac{P(x| \theta) P(\theta)}{P(x)} $$
which is the joint posterior probability distribution of $\theta$ given $x$.
Ultimately, the goal of MCMC sampling is to make inferences about $\theta$, given $x$. To this end, the denominator of Bayes' formula (the difficult-to-compute evidence) is irrelevant. Therefore, we may write the joint probability distribution of the model parameters $\theta$ as:
$$ P(\theta | x) \propto \underbrace{P(x| \theta)}_{\textrm{the likelihood}} \overbrace{P(\theta)}^\textrm{the prior}$$
In what follows, we build a simple MCMC sampler which is applied to some simple data.
Example using code
First, import some things
End of explanation
"""
data = np.random.normal(0, 1, 50)
print(data[:10])
"""
Explanation: Generate some data
Let our data consist of $N=50$ draws from a normal distribution:
End of explanation
"""
ax = plt.subplot()
sns.distplot(data, kde=False, ax=ax, bins=10)
_ = ax.set(title='Histogram of observed data', xlabel='x', ylabel='# observations');
"""
Explanation: We can plot a histogram to get a feel for how it is distributed:
End of explanation
"""
mu_current = 1
"""
Explanation: Model of the data:
Let us model the data as being Gaussian, with a fixed standard deviation $\sigma=1$; this is an assumption going into the analysis, and we could just as well pick any other probabilistic model. The likelihood for a data point $x_i$, given the Gaussian model, is therefore
$$P(x_i| \mu, \sigma{=}1) \sim \textrm{Normal}(x_i; \mu, \sigma{=}1),$$
such that $\mu$ is our only model parameter.
We have $N$ data points though, so the likelihood for the set of all data point $\mathbf{x}={x_i}$ is
$$P(\mathbf{x}| \mu, \sigma{=}1) \sim \prod_{i}^{N} \textrm{Normal}(x_i; \mu, \sigma{=}1).$$
Having obtained the likelihood, we now need the prior in order to calculate the joint probability distribution of all the model parameters (which in this case is just $\mu$). Let us choose this to be
$$P(\mu) = \textrm{Unif}(\mu; \textrm{lower}{=}{-}10, \textrm{upper}{=}10)$$
Then finally, our posterior probability distribution for $\mu$ is
$$P(\mu | \mathbf{x}) = P(\mathbf{x}| \mu, \sigma{=}1) P(\mu)= \prod_{i}^{N} \textrm{Normal}(x_i; \mu, \sigma{=}1) \times \textrm{Unif}(\mu; \textrm{lower}{=}{-}10, \textrm{upper}{=}10)$$
MCMC Sampling
Detail of each step
Initialise
We will now start sampling; to begin, we need an initial point. Usually one chooses this randomly, but here we pick a starting value of $\mu=1$
End of explanation
"""
proposal_width = 0.5
mu_proposal = norm(mu_current, proposal_width).rvs()
print(mu_proposal)
"""
Explanation: Proposal
Next, we propose a new point in parameter space (this is the Markov chain part of MCMC). There are many ways to do this, but we will use the Metropolis sampler which proposes a new value based on a normal distribution (unrelated to the model normal distribution) with mean given by the current value of $\mu$ and a standard deviation given by a constant: proposal_width
End of explanation
"""
likelihood_current = norm(mu_current, 1).pdf(data).prod()
likelihood_proposal = norm(mu_proposal, 1).pdf(data).prod()
# Compute prior probability of current and proposed mu
prior_current = uniform(-10, 20).pdf(mu_current) # Note, this IS unif(-10, 10) - see docs
prior_proposal = uniform(-10, 20).pdf(mu_proposal)
# Numerator of Bayes' formula
p_current = likelihood_current * prior_current
p_proposal = likelihood_proposal * prior_proposal
print('Current probability={:1.2e}, proposed probability={:1.2e}'.format(p_current, p_proposal))
"""
Explanation: Evaluate the proposed jump
End of explanation
"""
p_accept = p_proposal / p_current
print(p_accept)
"""
Explanation: At this point we need to introduce the Monte Carlo part. The trick is that we calculate a ratio of the two probabilities, a so-called acceptance probability
End of explanation
"""
accept = np.random.rand() < p_accept
print(accept)
if accept:
    # Update position
    mu_current = mu_proposal
"""
Explanation: Then, we compare it against a number drawn from a uniform distribution on $[0,1]$. If p_proposal is larger than p_current, we will always accept, but if it is smaller, we only occasionally accept.
End of explanation
"""
def sampler(data, samples=4, mu_init=.5, proposal_width=.5):
mu_current = mu_init
posterior = [mu_current]
for i in range(samples):
# suggest new position
mu_proposal = norm(mu_current, proposal_width).rvs()
# Compute likelihood by multiplying probabilities of each data point
likelihood_current = norm(mu_current, 1).pdf(data).prod()
likelihood_proposal = norm(mu_proposal, 1).pdf(data).prod()
# Compute prior probability of current and proposed mu
prior_current = uniform(-10, 20).pdf(mu_current) # Note, this IS unif(-10, 10) - see docs
prior_proposal = uniform(-10, 20).pdf(mu_proposal)
p_current = likelihood_current * prior_current
p_proposal = likelihood_proposal * prior_proposal
p_accept = p_proposal / p_current
accept = np.random.rand() < p_accept
if accept:
# Update position
mu_current = mu_proposal
posterior.append(mu_current)
return posterior
"""
Explanation: This process is repeated, and it can be shown that the equilibrium distribution of this (the MCMC) algorithm is equivalent to the desired distribution (of the model parameters). There are many possible choices here; we have used the most basic (and unsurprisingly poor) ones.
Putting it all together
Here is a function which puts all of the previous steps together for simplicity:
End of explanation
"""
samples = sampler(data, samples=100, mu_init=1, proposal_width=0.1)
plt.plot(np.arange(len(samples)), samples)
plt.xlabel('Number of steps')
plt.ylabel('Value of $\mu$')
plt.axhline(0)
plt.show()
"""
Explanation: Short chain example
We now run the chain for a few steps to show the progress:
End of explanation
"""
samples = sampler(data, samples=5000, mu_init=samples[-1])
plt.plot(np.arange(len(samples)), samples)
plt.xlabel('Number of steps')
plt.ylabel('Value of $\mu$')
"""
Explanation: Repeat for many steps
Then we repeat for thousands of steps
End of explanation
"""
ax = plt.subplot()
sns.distplot(samples, ax=ax, bins=20)
_ = ax.set(title='Histogram of posterior for $\mu$', xlabel='$\mu$', ylabel='# samples');
"""
Explanation: Finally, our estimate of the posterior can be made by plotting a histogram (or equivalent) of the samples from this chain (taking care to exclude any 'burn-in' period).
End of explanation
"""
|
tensorflow/hub | examples/colab/senteval_for_universal_sentence_encoder_cmlm.ipynb | apache-2.0 | # Copyright 2021 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""
Explanation: Copyright 2021 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
#@title Install dependencies
!pip install --quiet "tensorflow-text==2.8.*"
!pip install --quiet torch==1.8.1
"""
Explanation: <table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://tfhub.dev/google/universal-sentence-encoder-cmlm/en-base/1"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/senteval_for_universal_sentence_encoder_cmlm.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/senteval_for_universal_sentence_encoder_cmlm.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/senteval_for_universal_sentence_encoder_cmlm.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/google/universal-sentence-encoder-cmlm/en-base/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
Universal Sentence Encoder SentEval demo
This Colab demonstrates the Universal Sentence Encoder CMLM model using the SentEval toolkit, which is a library for measuring the quality of sentence embeddings. The SentEval toolkit includes a diverse set of downstream tasks that are able to evaluate the generalization power of an embedding model and the linguistic properties it encodes.
Run the first two code blocks to set up the environment; in the third code block you can pick a SentEval task to evaluate the model. A GPU runtime is recommended to run this Colab.
To learn more about the Universal Sentence Encoder CMLM model, see https://openreview.net/forum?id=WDVD4lUCTzU.
End of explanation
"""
#@title Install SentEval and download task data
!rm -rf ./SentEval
!git clone https://github.com/facebookresearch/SentEval.git
!cd $PWD/SentEval/data/downstream && bash get_transfer_data.bash > /dev/null 2>&1
"""
Explanation: Download SentEval and task data
This step downloads SentEval from GitHub and executes the data script to download the task data. It may take up to 5 minutes to complete.
End of explanation
"""
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import sys
sys.path.append(f'{os.getcwd()}/SentEval')
import tensorflow as tf
# Prevent TF from claiming all GPU memory so there is some left for pytorch.
gpus = tf.config.list_physical_devices('GPU')
if gpus:
# Memory growth needs to be the same across GPUs.
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
import tensorflow_hub as hub
import tensorflow_text
import senteval
import time
PATH_TO_DATA = f'{os.getcwd()}/SentEval/data'
MODEL = 'https://tfhub.dev/google/universal-sentence-encoder-cmlm/en-base/1' #@param ['https://tfhub.dev/google/universal-sentence-encoder-cmlm/en-base/1', 'https://tfhub.dev/google/universal-sentence-encoder-cmlm/en-large/1']
PARAMS = 'rapid prototyping' #@param ['slower, best performance', 'rapid prototyping']
TASK = 'CR' #@param ['CR','MR', 'MPQA', 'MRPC', 'SICKEntailment', 'SNLI', 'SST2', 'SUBJ', 'TREC']
params_prototyping = {'task_path': PATH_TO_DATA, 'usepytorch': True, 'kfold': 5}
params_prototyping['classifier'] = {'nhid': 0, 'optim': 'rmsprop', 'batch_size': 128,
'tenacity': 3, 'epoch_size': 2}
params_best = {'task_path': PATH_TO_DATA, 'usepytorch': True, 'kfold': 10}
params_best['classifier'] = {'nhid': 0, 'optim': 'adam', 'batch_size': 16,
'tenacity': 5, 'epoch_size': 6}
params = params_best if PARAMS == 'slower, best performance' else params_prototyping
preprocessor = hub.KerasLayer(
"https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")
encoder = hub.KerasLayer(
"https://tfhub.dev/google/universal-sentence-encoder-cmlm/en-base/1")
inputs = tf.keras.Input(shape=tf.shape(''), dtype=tf.string)
outputs = encoder(preprocessor(inputs))
model = tf.keras.Model(inputs=inputs, outputs=outputs)
def prepare(params, samples):
return
def batcher(_, batch):
batch = [' '.join(sent) if sent else '.' for sent in batch]
return model.predict(tf.constant(batch))["default"]
se = senteval.engine.SE(params, batcher, prepare)
print("Evaluating task %s with %s parameters" % (TASK, PARAMS))
start = time.time()
results = se.eval(TASK)
end = time.time()
print('Time took on task %s : %.1f. seconds' % (TASK, end - start))
print(results)
"""
Explanation: Execute a SentEval evaluation task
The following code block executes a SentEval task and outputs the results. Choose one of the following tasks to evaluate the USE CMLM model:
MR, CR, SUBJ, MPQA, SST, TREC, MRPC, SICK-E
Select a model, params and task to run. The rapid prototyping params can be used to reduce computation time for a faster result.
It typically takes 5-15 mins to complete a task with the 'rapid prototyping' params and up to an hour with the 'slower, best performance' params.
params = {'task_path': PATH_TO_DATA, 'usepytorch': True, 'kfold': 5}
params['classifier'] = {'nhid': 0, 'optim': 'rmsprop', 'batch_size': 128,
'tenacity': 3, 'epoch_size': 2}
For better result, use the slower 'slower, best performance' params, computation may take up to 1 hour:
params = {'task_path': PATH_TO_DATA, 'usepytorch': True, 'kfold': 10}
params['classifier'] = {'nhid': 0, 'optim': 'adam', 'batch_size': 16,
'tenacity': 5, 'epoch_size': 6}
End of explanation
"""
|
samgoodgame/sf_crime | iterations/misc/Cha_Goodgame_Kao_Moore_W207_Final_Project_updated_08_20_1213.ipynb | mit | # Import relevant libraries:
import time
import numpy as np
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
from sklearn.naive_bayes import BernoulliNB
from sklearn.naive_bayes import MultinomialNB
from sklearn.naive_bayes import GaussianNB
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.metrics import log_loss
from sklearn.linear_model import LogisticRegression
from sklearn import svm
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
# Import Meta-estimators
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import GradientBoostingClassifier
# Import Calibration tools
from sklearn.calibration import CalibratedClassifierCV
# Set random seed and format print output:
np.random.seed(0)
np.set_printoptions(precision=3)
"""
Explanation: Kaggle San Francisco Crime Classification
Berkeley MIDS W207 Final Project: Sam Goodgame, Sarah Cha, Kalvin Kao, Bryan Moore
Environment and Data
End of explanation
"""
#data_path = "./data/train_transformed.csv"
#df = pd.read_csv(data_path, header=0)
#x_data = df.drop('category', 1)
#y = df.category.as_matrix()
########## Adding the date back into the data
#import csv
#import time
#import calendar
#data_path = "./data/train.csv"
#dataCSV = open(data_path, 'rt')
#csvData = list(csv.reader(dataCSV))
#csvFields = csvData[0] #['Dates', 'Category', 'Descript', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y']
#allData = csvData[1:]
#dataCSV.close()
#df2 = pd.DataFrame(allData)
#df2.columns = csvFields
#dates = df2['Dates']
#dates = dates.apply(time.strptime, args=("%Y-%m-%d %H:%M:%S",))
#dates = dates.apply(calendar.timegm)
#print(dates.head())
#x_data['secondsFromEpoch'] = dates
#colnames = x_data.columns.tolist()
#colnames = colnames[-1:] + colnames[:-1]
#x_data = x_data[colnames]
##########
########## Adding the weather data into the original crime data
#weatherData1 = "./data/1027175.csv"
#weatherData2 = "./data/1027176.csv"
#dataCSV = open(weatherData1, 'rt')
#csvData = list(csv.reader(dataCSV))
#csvFields = csvData[0] #['Dates', 'Category', 'Descript', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y']
#allWeatherData1 = csvData[1:]
#dataCSV.close()
#dataCSV = open(weatherData2, 'rt')
#csvData = list(csv.reader(dataCSV))
#csvFields = csvData[0] #['Dates', 'Category', 'Descript', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y']
#allWeatherData2 = csvData[1:]
#dataCSV.close()
#weatherDF1 = pd.DataFrame(allWeatherData1)
#weatherDF1.columns = csvFields
#dates1 = weatherDF1['DATE']
#sunrise1 = weatherDF1['DAILYSunrise']
#sunset1 = weatherDF1['DAILYSunset']
#weatherDF2 = pd.DataFrame(allWeatherData2)
#weatherDF2.columns = csvFields
#dates2 = weatherDF2['DATE']
#sunrise2 = weatherDF2['DAILYSunrise']
#sunset2 = weatherDF2['DAILYSunset']
#functions for processing the sunrise and sunset times of each day
#def get_hour_and_minute(milTime):
# hour = int(milTime[:-2])
# minute = int(milTime[-2:])
# return [hour, minute]
#def get_date_only(date):
# return time.struct_time(tuple([date[0], date[1], date[2], 0, 0, 0, date[6], date[7], date[8]]))
#def structure_sun_time(timeSeries, dateSeries):
# sunTimes = timeSeries.copy()
# for index in range(len(dateSeries)):
# sunTimes[index] = time.struct_time(tuple([dateSeries[index][0], dateSeries[index][1], dateSeries[index][2], timeSeries[index][0], timeSeries[index][1], dateSeries[index][5], dateSeries[index][6], dateSeries[index][7], dateSeries[index][8]]))
# return sunTimes
#dates1 = dates1.apply(time.strptime, args=("%Y-%m-%d %H:%M",))
#sunrise1 = sunrise1.apply(get_hour_and_minute)
#sunrise1 = structure_sun_time(sunrise1, dates1)
#sunrise1 = sunrise1.apply(calendar.timegm)
#sunset1 = sunset1.apply(get_hour_and_minute)
#sunset1 = structure_sun_time(sunset1, dates1)
#sunset1 = sunset1.apply(calendar.timegm)
#dates1 = dates1.apply(calendar.timegm)
#dates2 = dates2.apply(time.strptime, args=("%Y-%m-%d %H:%M",))
#sunrise2 = sunrise2.apply(get_hour_and_minute)
#sunrise2 = structure_sun_time(sunrise2, dates2)
#sunrise2 = sunrise2.apply(calendar.timegm)
#sunset2 = sunset2.apply(get_hour_and_minute)
#sunset2 = structure_sun_time(sunset2, dates2)
#sunset2 = sunset2.apply(calendar.timegm)
#dates2 = dates2.apply(calendar.timegm)
#weatherDF1['DATE'] = dates1
#weatherDF1['DAILYSunrise'] = sunrise1
#weatherDF1['DAILYSunset'] = sunset1
#weatherDF2['DATE'] = dates2
#weatherDF2['DAILYSunrise'] = sunrise2
#weatherDF2['DAILYSunset'] = sunset2
#weatherDF = pd.concat([weatherDF1,weatherDF2[32:]],ignore_index=True)
# Starting off with some of the easier features to work with-- more to come here . . . still in beta
#weatherMetrics = weatherDF[['DATE','HOURLYDRYBULBTEMPF','HOURLYRelativeHumidity', 'HOURLYWindSpeed', \
# 'HOURLYSeaLevelPressure', 'HOURLYVISIBILITY', 'DAILYSunrise', 'DAILYSunset']]
#weatherMetrics = weatherMetrics.convert_objects(convert_numeric=True)
#weatherDates = weatherMetrics['DATE']
#'DATE','HOURLYDRYBULBTEMPF','HOURLYRelativeHumidity', 'HOURLYWindSpeed',
#'HOURLYSeaLevelPressure', 'HOURLYVISIBILITY'
#timeWindow = 10800 #3 hours
#hourlyDryBulbTemp = []
#hourlyRelativeHumidity = []
#hourlyWindSpeed = []
#hourlySeaLevelPressure = []
#hourlyVisibility = []
#dailySunrise = []
#dailySunset = []
#daylight = []
#test = 0
#for timePoint in dates:#dates is the epoch time from the kaggle data
# relevantWeather = weatherMetrics[(weatherDates <= timePoint) & (weatherDates > timePoint - timeWindow)]
# hourlyDryBulbTemp.append(relevantWeather['HOURLYDRYBULBTEMPF'].mean())
# hourlyRelativeHumidity.append(relevantWeather['HOURLYRelativeHumidity'].mean())
# hourlyWindSpeed.append(relevantWeather['HOURLYWindSpeed'].mean())
# hourlySeaLevelPressure.append(relevantWeather['HOURLYSeaLevelPressure'].mean())
# hourlyVisibility.append(relevantWeather['HOURLYVISIBILITY'].mean())
# dailySunrise.append(relevantWeather['DAILYSunrise'].iloc[-1])
# dailySunset.append(relevantWeather['DAILYSunset'].iloc[-1])
# daylight.append(1.0*((timePoint >= relevantWeather['DAILYSunrise'].iloc[-1]) and (timePoint < relevantWeather['DAILYSunset'].iloc[-1])))
#if timePoint < relevantWeather['DAILYSunset'][-1]:
#daylight.append(1)
#else:
#daylight.append(0)
# if test%100000 == 0:
# print(relevantWeather)
# test += 1
#hourlyDryBulbTemp = pd.Series.from_array(np.array(hourlyDryBulbTemp))
#hourlyRelativeHumidity = pd.Series.from_array(np.array(hourlyRelativeHumidity))
#hourlyWindSpeed = pd.Series.from_array(np.array(hourlyWindSpeed))
#hourlySeaLevelPressure = pd.Series.from_array(np.array(hourlySeaLevelPressure))
#hourlyVisibility = pd.Series.from_array(np.array(hourlyVisibility))
#dailySunrise = pd.Series.from_array(np.array(dailySunrise))
#dailySunset = pd.Series.from_array(np.array(dailySunset))
#daylight = pd.Series.from_array(np.array(daylight))
#x_data['HOURLYDRYBULBTEMPF'] = hourlyDryBulbTemp
#x_data['HOURLYRelativeHumidity'] = hourlyRelativeHumidity
#x_data['HOURLYWindSpeed'] = hourlyWindSpeed
#x_data['HOURLYSeaLevelPressure'] = hourlySeaLevelPressure
#x_data['HOURLYVISIBILITY'] = hourlyVisibility
#x_data['DAILYSunrise'] = dailySunrise
#x_data['DAILYSunset'] = dailySunset
#x_data['Daylight'] = daylight
#x_data.to_csv(path_or_buf="C:/MIDS/W207 final project/x_data.csv")
##########
# Impute missing values with mean values:
#x_complete = x_data.fillna(x_data.mean())
#X_raw = x_complete.as_matrix()
# Scale the data between 0 and 1:
#X = MinMaxScaler().fit_transform(X_raw)
# Shuffle data to remove any underlying pattern that may exist:
#shuffle = np.random.permutation(np.arange(X.shape[0]))
#X, y = X[shuffle], y[shuffle]
# Separate training, dev, and test data:
#test_data, test_labels = X[800000:], y[800000:]
#dev_data, dev_labels = X[700000:800000], y[700000:800000]
#train_data, train_labels = X[:700000], y[:700000]
#mini_train_data, mini_train_labels = X[:75000], y[:75000]
#mini_dev_data, mini_dev_labels = X[75000:100000], y[75000:100000]
#labels_set = set(mini_dev_labels)
#print(labels_set)
#print(len(labels_set))
#print(train_data[:10])
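The commented-out weather-averaging loop above walks a 3-hour window for every crime row in pure Python. The same backward-looking join can be sketched in vectorized pandas with `merge_asof`; the frame and column values below are illustrative toys, not the project's actual files:

```python
import pandas as pd

# Toy stand-ins for the crime timestamps and hourly weather readings.
crimes = pd.DataFrame({'epoch': [100, 200, 300]})
weather = pd.DataFrame({'epoch': [50, 150, 250],
                        'HOURLYDRYBULBTEMPF': [60.0, 62.0, 64.0]})

# merge_asof attaches, to each crime, the most recent weather reading at or
# before its timestamp (a rolling mean over a 3h window could be applied to
# `weather` first to mimic the loop above).
merged = pd.merge_asof(crimes.sort_values('epoch'),
                       weather.sort_values('epoch'),
                       on='epoch')
print(merged['HOURLYDRYBULBTEMPF'].tolist())
```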
"""
Explanation: DDL to construct table for SQL transformations:
sql
CREATE TABLE kaggle_sf_crime (
dates TIMESTAMP,
category VARCHAR,
descript VARCHAR,
dayofweek VARCHAR,
pd_district VARCHAR,
resolution VARCHAR,
addr VARCHAR,
X FLOAT,
Y FLOAT);
Getting training data into a locally hosted PostgreSQL database:
sql
\copy kaggle_sf_crime FROM '/Users/Goodgame/Desktop/MIDS/207/final/sf_crime_train.csv' DELIMITER ',' CSV HEADER;
SQL Query used for transformations:
sql
SELECT
category,
date_part('hour', dates) AS hour_of_day,
CASE
WHEN dayofweek = 'Monday' then 1
WHEN dayofweek = 'Tuesday' THEN 2
WHEN dayofweek = 'Wednesday' THEN 3
WHEN dayofweek = 'Thursday' THEN 4
WHEN dayofweek = 'Friday' THEN 5
WHEN dayofweek = 'Saturday' THEN 6
WHEN dayofweek = 'Sunday' THEN 7
END AS dayofweek_numeric,
X,
Y,
CASE
WHEN pd_district = 'BAYVIEW' THEN 1
ELSE 0
END AS bayview_binary,
CASE
WHEN pd_district = 'INGLESIDE' THEN 1
ELSE 0
END AS ingleside_binary,
CASE
WHEN pd_district = 'NORTHERN' THEN 1
ELSE 0
END AS northern_binary,
CASE
WHEN pd_district = 'CENTRAL' THEN 1
ELSE 0
END AS central_binary,
CASE
WHEN pd_district = 'BAYVIEW' THEN 1
ELSE 0
END AS pd_bayview_binary,
CASE
WHEN pd_district = 'MISSION' THEN 1
ELSE 0
END AS mission_binary,
CASE
WHEN pd_district = 'SOUTHERN' THEN 1
ELSE 0
END AS southern_binary,
CASE
WHEN pd_district = 'TENDERLOIN' THEN 1
ELSE 0
END AS tenderloin_binary,
CASE
WHEN pd_district = 'PARK' THEN 1
ELSE 0
END AS park_binary,
CASE
WHEN pd_district = 'RICHMOND' THEN 1
ELSE 0
END AS richmond_binary,
CASE
WHEN pd_district = 'TARAVAL' THEN 1
ELSE 0
END AS taraval_binary
FROM kaggle_sf_crime;
Loading the data, version 2, with weather features to improve performance: (Negated with hashtags for now, as it will cause file dependency issues if run locally for everyone. Will be run by Isabell in the final notebook with the correct files she needs.)
We seek to add features to our models that will improve performance with respect to our desired performance metric. There is evidence of a correlation between weather patterns and crime, with some experts even arguing for a causal relationship between weather and crime [1]. More specifically, a 2013 paper published in Science showed that higher temperatures and extreme rainfall led to large increases in conflict. Given this evidence that weather influences crime, we see weather data as a candidate source of additional features to improve the performance of our classifiers. Weather data was gathered from (insert source). Certain features from this data set were incorporated into the original crime data set in order to add features that were hypothesized to improve performance. These features included (insert what we eventually include).
End of explanation
"""
data_path = "/Users/Bryan/Desktop/UC_Berkeley_MIDS_files/Courses/W207_Intro_To_Machine_Learning/Final_Project/x_data_3.csv"
df = pd.read_csv(data_path, header=0)
x_data = df.drop('category', axis=1)
y = df.category.values
# Impute missing values with mean values:
x_complete = x_data.fillna(x_data.mean())
X_raw = x_complete.values
# Scale the data between 0 and 1:
X = MinMaxScaler().fit_transform(X_raw)
# Shuffle data to remove any underlying pattern that may exist. Must re-run random seed step each time:
np.random.seed(0)
shuffle = np.random.permutation(np.arange(X.shape[0]))
X, y = X[shuffle], y[shuffle]
# Log loss requires set(y_pred) to match set(labels), so we remove the extremely rare
# crime categories ('TREA' and 'PORNOGRAPHY/OBSCENE MAT') from the data.
X_minus_trea = X[np.where(y != 'TREA')]
y_minus_trea = y[np.where(y != 'TREA')]
X_final = X_minus_trea[np.where(y_minus_trea != 'PORNOGRAPHY/OBSCENE MAT')]
y_final = y_minus_trea[np.where(y_minus_trea != 'PORNOGRAPHY/OBSCENE MAT')]
# Separate training, dev, and test data:
test_data, test_labels = X_final[800000:], y_final[800000:]
dev_data, dev_labels = X_final[700000:800000], y_final[700000:800000]
train_data, train_labels = X_final[100000:700000], y_final[100000:700000]
calibrate_data, calibrate_labels = X_final[:100000], y_final[:100000]
# Create mini versions of the above sets
mini_train_data, mini_train_labels = X_final[:20000], y_final[:20000]
mini_calibrate_data, mini_calibrate_labels = X_final[19000:28000], y_final[19000:28000]
mini_dev_data, mini_dev_labels = X_final[49000:60000], y_final[49000:60000]
# Create list of the crime type labels. This will act as the "labels" parameter for the log loss functions that follow
crime_labels = list(set(y_final))
crime_labels_mini_train = list(set(mini_train_labels))
crime_labels_mini_dev = list(set(mini_dev_labels))
crime_labels_mini_calibrate = list(set(mini_calibrate_labels))
print(len(crime_labels), len(crime_labels_mini_train), len(crime_labels_mini_dev),len(crime_labels_mini_calibrate))
#print(len(train_data),len(train_labels))
#print(len(dev_data),len(dev_labels))
#print(len(mini_train_data),len(mini_train_labels))
#print(len(mini_dev_data),len(mini_dev_labels))
#print(len(test_data),len(test_labels))
#print(len(mini_calibrate_data),len(mini_calibrate_labels))
#print(len(calibrate_data),len(calibrate_labels))
"""
Explanation: Local, individual load of updated data set (with weather data integrated) into training, development, and test subsets.
End of explanation
"""
### Read in zip code data
#data_path_zip = "./data/2016_zips.csv"
#zips = pd.read_csv(data_path_zip, header=0, sep ='\t', usecols = [0,5,6], names = ["GEOID", "INTPTLAT", "INTPTLONG"], dtype ={'GEOID': int, 'INTPTLAT': float, 'INTPTLONG': float})
#sf_zips = zips[(zips['GEOID'] > 94000) & (zips['GEOID'] < 94189)]
### Mapping longitude/latitude to zipcodes
#def dist(lat1, long1, lat2, long2):
# return np.sqrt((lat1-lat2)**2+(long1-long2)**2)
# return abs(lat1-lat2)+abs(long1-long2)
#def find_zipcode(lat, long):
# distances = sf_zips.apply(lambda row: dist(lat, long, row["INTPTLAT"], row["INTPTLONG"]), axis=1)
# return sf_zips.loc[distances.idxmin(), "GEOID"]
#x_data['zipcode'] = 0
#for i in range(0, 1):
# x_data['zipcode'][i] = x_data.apply(lambda row: find_zipcode(row['x'], row['y']), axis=1)
#x_data['zipcode']= x_data.apply(lambda row: find_zipcode(row['x'], row['y']), axis=1)
### Read in school data
#data_path_schools = "./data/pubschls.csv"
#schools = pd.read_csv(data_path_schools,header=0, sep ='\t', usecols = ["CDSCode","StatusType", "School", "EILCode", "EILName", "Zip", "Latitude", "Longitude"], dtype ={'CDSCode': str, 'StatusType': str, 'School': str, 'EILCode': str,'EILName': str,'Zip': str, 'Latitude': float, 'Longitude': float})
#schools = schools[(schools["StatusType"] == 'Active')]
### Find the closest school
#def dist(lat1, long1, lat2, long2):
# return np.sqrt((lat1-lat2)**2+(long1-long2)**2)
#def find_closest_school(lat, long):
# distances = schools.apply(lambda row: dist(lat, long, row["Latitude"], row["Longitude"]), axis=1)
# return min(distances)
#x_data['closest_school'] = x_data_sub.apply(lambda row: find_closest_school(row['y'], row['x']), axis=1)
"""
Explanation: Sarah's School data that we may still get to work as features: (Negated with hashtags for now, as will cause file dependency issues if run locally for everyone. Will be run by Isabell in final notebook with correct files she needs)
End of explanation
"""
# The Kaggle submission format requires listing the ID of each example.
# This is to remember the order of the IDs after shuffling
#allIDs = np.array(list(df.axes[0]))
#allIDs = allIDs[shuffle]
#testIDs = allIDs[800000:]
#devIDs = allIDs[700000:800000]
#trainIDs = allIDs[:700000]
# Extract the column names for the required submission format
#sampleSubmission_path = "./data/sampleSubmission.csv"
#sampleDF = pd.read_csv(sampleSubmission_path)
#allColumns = list(sampleDF.columns)
#featureColumns = allColumns[1:]
# Extracting the test data for a baseline submission
#real_test_path = "./data/test_transformed.csv"
#testDF = pd.read_csv(real_test_path, header=0)
#real_test_data = testDF
#test_complete = real_test_data.fillna(real_test_data.mean())
#Test_raw = test_complete.as_matrix()
#TestData = MinMaxScaler().fit_transform(Test_raw)
# Here we remember the ID of each test data point, in case we ever decide to shuffle the test data for some reason
#testIDs = list(testDF.axes[0])
"""
Explanation: Formatting to meet Kaggle submission standards: (Negated with hashtags for now, as will cause file dependency issues if run locally for everyone. Will be run by Isabell in final notebook with correct files she needs)
End of explanation
"""
# Generate a baseline MNB classifier and make it return prediction probabilities for the actual test data
#def MNB():
# mnb = MultinomialNB(alpha = 0.0000001)
# mnb.fit(train_data, train_labels)
# print("\n\nMultinomialNB accuracy on dev data:", mnb.score(dev_data, dev_labels))
# return mnb.predict_proba(dev_data)
#MNB()
#baselinePredictionProbabilities = MNB()
# Place the resulting prediction probabilities in a .csv file in the required format
# First, turn the prediction probabilties into a data frame
#resultDF = pd.DataFrame(baselinePredictionProbabilities,columns=featureColumns)
# Add the IDs as a final column
#resultDF.loc[:,'Id'] = pd.Series(testIDs,index=resultDF.index)
# Make the 'Id' column the first column
#colnames = resultDF.columns.tolist()
#colnames = colnames[-1:] + colnames[:-1]
#resultDF = resultDF[colnames]
# Output to a .csv file
# resultDF.to_csv('result.csv',index=False)
"""
Explanation: Generate baseline prediction probabilities from MNB classifier and store in a .csv file (Negated with hashtags for now, as will cause file dependency issues if run locally for everyone. Will be run by Isabell in final notebook with correct files she needs)
End of explanation
"""
## Data sub-setting quality check-point
print(train_data[:1])
print(train_labels[:1])
# Modeling quality check-point with MNB--fast model
def MNB():
mnb = MultinomialNB(alpha = 0.0000001)
mnb.fit(train_data, train_labels)
print("\n\nMultinomialNB accuracy on dev data:", mnb.score(dev_data, dev_labels))
MNB()
"""
Explanation: Note: the shuffle above is reproducible only because np.random.seed(0) is set immediately beforehand; if the seed step is not re-run, the data will shuffle differently each time and model accuracies will vary accordingly.
End of explanation
"""
def model_prototype(train_data, train_labels, eval_data, eval_labels):
knn = KNeighborsClassifier(n_neighbors=5).fit(train_data, train_labels)
bnb = BernoulliNB(alpha=1, binarize = 0.5).fit(train_data, train_labels)
mnb = MultinomialNB().fit(train_data, train_labels)
log_reg = LogisticRegression().fit(train_data, train_labels)
neural_net = MLPClassifier().fit(train_data, train_labels)
random_forest = RandomForestClassifier().fit(train_data, train_labels)
decision_tree = DecisionTreeClassifier().fit(train_data, train_labels)
    support_vm = svm.SVC(probability = True).fit(train_data, train_labels)
models = [knn, bnb, mnb, log_reg, neural_net, random_forest, decision_tree, support_vm]
for model in models:
eval_prediction_probabilities = model.predict_proba(eval_data)
print(model, "Multi-class Log Loss:", log_loss(y_true = eval_labels, y_pred = eval_prediction_probabilities, labels = crime_labels_mini_dev), "\n\n")
model_prototype(mini_train_data, mini_train_labels, mini_dev_data, mini_dev_labels)
"""
Explanation: Defining Performance Criteria
As determined by the Kaggle submission guidelines, the performance criterion for the San Francisco Crime Classification competition is Multi-class Logarithmic Loss (also known as cross-entropy). There are various other performance metrics that are appropriate for different domains: accuracy, F-score, Lift, ROC Area, average precision, precision/recall break-even point, and squared error.
(Describe each performance metric and a domain in which it is preferred. Give Pros/Cons if able)
Multi-class Log Loss:
Accuracy:
F-score:
Lift:
ROC Area:
Average precision
Precision/Recall break-even point:
Squared-error:
Model Prototyping
We will start our classifier and feature engineering process by looking at the performance of various classifiers with default parameter settings in predicting labels on the mini_dev_data:
End of explanation
"""
list_for_ks = []
list_for_ws = []
list_for_ps = []
list_for_log_loss = []
def k_neighbors_tuned(k,w,p):
tuned_KNN = KNeighborsClassifier(n_neighbors=k, weights=w, p=p).fit(mini_train_data, mini_train_labels)
dev_prediction_probabilities = tuned_KNN.predict_proba(mini_dev_data)
    list_for_ks.append(k)
    list_for_ws.append(w)
    list_for_ps.append(p)
working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = dev_prediction_probabilities, labels = crime_labels_mini_dev)
list_for_log_loss.append(working_log_loss)
#print("Multi-class Log Loss with KNN and k,w,p =", k,",",w,",", p, "is:", working_log_loss)
k_value_tuning = list(range(1, 5002, 500))
weight_tuning = ['uniform', 'distance']
power_parameter_tuning = [1,2]
start = time.perf_counter()
for this_k in k_value_tuning:
    for this_w in weight_tuning:
        for this_p in power_parameter_tuning:
            k_neighbors_tuned(this_k, this_w, this_p)
index_best_logloss = np.argmin(list_for_log_loss)
print('For KNN the best log loss with hyperparameter tuning is',list_for_log_loss[index_best_logloss], 'with k =', list_for_ks[index_best_logloss], 'w =', list_for_ws[index_best_logloss], 'p =', list_for_ps[index_best_logloss])
end = time.perf_counter()
print("Computation time for this step is %.2f" % (end-start), 'seconds')
"""
Explanation: Adding Features, Hyperparameter Tuning, and Model Calibration To Improve Prediction For Each Classifier
Here we seek to optimize the performance of our classifiers in a three-step, dynamic engineering process.
1) Feature addition
We previously added components from the weather data into the original SF crime data as new features. We will not repeat work done in our initial submission, where our training dataset did not include these features. For comparison with respect to how the added features improved our performance on log loss, please refer back to our initial submission.
We can have Kalvin expand on exactly what he did here.
2) Hyperparameter tuning
Each classifier has parameters that we can engineer to further optimize performance, as opposed to using the default parameter values as we did above in the model prototyping cell. This will be specific to each classifier as detailed below.
3) Model calibration
We can calibrate the models via Platt Scaling or Isotonic Regression to attempt to improve their performance.
Platt Scaling: ((brief explanation of how it works))
Isotonic Regression: ((brief explanation of how it works))
For each classifier, we can use CalibratedClassifierCV to perform probability calibration with isotonic regression or sigmoid (Platt Scaling). The parameters within CalibratedClassifierCV that we can adjust are the method ('sigmoid' or 'isotonic') and cv (cross-validation generator). As we will already be training our models before calibration, we will only use cv = 'prefit'. Thus, in practice the cross-validation generator will not be a modifiable parameter for us.
K-Nearest Neighbors
Hyperparameter tuning:
For the KNN classifier, we can seek to optimize the following classifier parameters: n-neighbors, weights, and the power parameter ('p').
End of explanation
"""
list_for_ks = []
list_for_ws = []
list_for_ps = []
list_for_ms = []
list_for_log_loss = []
def knn_calibrated(k,w,p,m):
    tuned_KNN = KNeighborsClassifier(n_neighbors=k, weights=w, p=p).fit(mini_train_data, mini_train_labels)
    ccv = CalibratedClassifierCV(tuned_KNN, method = m, cv = 'prefit')
    ccv.fit(mini_calibrate_data, mini_calibrate_labels)
    ccv_prediction_probabilities = ccv.predict_proba(mini_dev_data)
    list_for_ks.append(k)
    list_for_ws.append(w)
    list_for_ps.append(p)
    list_for_ms.append(m)
    working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = ccv_prediction_probabilities, labels = crime_labels_mini_dev)
    list_for_log_loss.append(working_log_loss)
    print("Multi-class Log Loss with KNN and k,w,p,m =", k,",",w,",",p,",",m,"is:", working_log_loss)
k_value_tuning = list(range(1, 21)) + list(range(25, 51, 5)) + list(range(55, 22000, 1000))
weight_tuning = ['uniform', 'distance']
power_parameter_tuning = [1,2]
methods = ['sigmoid', 'isotonic']
start = time.perf_counter()
for this_k in k_value_tuning:
    for this_w in weight_tuning:
        for this_p in power_parameter_tuning:
            for this_m in methods:
                knn_calibrated(this_k, this_w, this_p, this_m)
index_best_logloss = np.argmin(list_for_log_loss)
print('For KNN the best log loss with hyperparameter tuning and calibration is',list_for_log_loss[index_best_logloss], 'with k =', list_for_ks[index_best_logloss], 'w =', list_for_ws[index_best_logloss], 'p =', list_for_ps[index_best_logloss], 'm =', list_for_ms[index_best_logloss])
end = time.perf_counter()
print("Computation time for this step is %.2f" % (end-start), 'seconds')
"""
Explanation: Model calibration:
Here we will calibrate the KNN classifier with both Platt Scaling and with Isotonic Regression using CalibratedClassifierCV with various parameter settings. The "method" parameter can be set to "sigmoid" or to "isotonic", corresponding to Platt Scaling and to Isotonic Regression respectively.
End of explanation
"""
list_for_as = []
list_for_bs = []
list_for_log_loss = []
def BNB_tuned(a,b):
bnb_tuned = BernoulliNB(alpha = a, binarize = b).fit(mini_train_data, mini_train_labels)
dev_prediction_probabilities = bnb_tuned.predict_proba(mini_dev_data)
    list_for_as.append(a)
    list_for_bs.append(b)
working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = dev_prediction_probabilities, labels = crime_labels_mini_dev)
list_for_log_loss.append(working_log_loss)
#print("Multi-class Log Loss with BNB and a,b =", a,",",b,"is:", working_log_loss)
alpha_tuning = [0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.6, 0.8, 1.0, 1.1, 1.2, 1.4, 1.6, 1.8, 2.0, 10.0]
binarize_thresholds_tuning = [1e-20, 1e-19, 1e-18, 1e-17, 1e-16, 1e-15, 1e-14, 1e-13, 1e-12, 1e-11, 1e-10, 1e-9, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99, 0.999, 0.9999]
start = time.perf_counter()
for this_a in alpha_tuning:
    for this_b in binarize_thresholds_tuning:
        BNB_tuned(this_a, this_b)
index_best_logloss = np.argmin(list_for_log_loss)
print('For BNB the best log loss with hyperparameter tuning is',list_for_log_loss[index_best_logloss], 'with alpha =', list_for_as[index_best_logloss], 'binarization threshold =', list_for_bs[index_best_logloss])
end = time.perf_counter()
print("Computation time for this step is %.2f" % (end-start), 'seconds')
"""
Explanation: Comments on results for Hyperparameter tuning and Calibration for KNN:
We see that the best log loss we achieve for KNN is with _ neighbors, _ weights, and _ power parameter.
When we add-in calibration, we see that the the best log loss we achieve for KNN is with _ neighbors, _ weights, _ power parameter, and _ calibration method.
(Further explanation here?)
Multinomial, Bernoulli, and Gaussian Naive Bayes
Hyperparameter tuning: Bernoulli Naive Bayes
For the Bernoulli Naive Bayes classifier, we seek to optimize the alpha parameter (Laplace smoothing parameter) and the binarize parameter (threshold for binarizing of the sample features). For the binarize parameter, we will create arbitrary thresholds over which our features, which are not binary/boolean features, will be binarized.
End of explanation
"""
list_for_as = []
list_for_bs = []
list_for_ms = []
list_for_log_loss = []
def BNB_calibrated(a,b,m):
    bnb_tuned = BernoulliNB(alpha = a, binarize = b).fit(mini_train_data, mini_train_labels)
    ccv = CalibratedClassifierCV(bnb_tuned, method = m, cv = 'prefit')
    ccv.fit(mini_calibrate_data, mini_calibrate_labels)
    ccv_prediction_probabilities = ccv.predict_proba(mini_dev_data)
    list_for_as.append(a)
    list_for_bs.append(b)
    list_for_ms.append(m)
    working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = ccv_prediction_probabilities, labels = crime_labels_mini_dev)
    list_for_log_loss.append(working_log_loss)
    #print("Multi-class Log Loss with BNB and a,b,m =", a,",", b,",", m, "is:", working_log_loss)
alpha_tuning = [0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.6, 0.8, 1.0, 1.1, 1.2, 1.4, 1.6, 1.8, 2.0, 10.0]
binarize_thresholds_tuning = [1e-20, 1e-19, 1e-18, 1e-17, 1e-16, 1e-15, 1e-14, 1e-13, 1e-12, 1e-11, 1e-10, 1e-9, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99, 0.999, 0.9999]
methods = ['sigmoid', 'isotonic']
start = time.perf_counter()
for this_a in alpha_tuning:
    for this_b in binarize_thresholds_tuning:
        for this_m in methods:
            BNB_calibrated(this_a, this_b, this_m)
index_best_logloss = np.argmin(list_for_log_loss)
print('For BNB the best log loss with hyperparameter tuning and calibration is',list_for_log_loss[index_best_logloss], 'with alpha =', list_for_as[index_best_logloss], 'binarization threshold =', list_for_bs[index_best_logloss], 'method = ', list_for_ms[index_best_logloss])
end = time.perf_counter()
print("Computation time for this step is %.2f" % (end-start), 'seconds')
"""
Explanation: Model calibration: BernoulliNB
Here we will calibrate the BNB classifier with both Platt Scaling and with Isotonic Regression using CalibratedClassifierCV with various parameter settings. The "method" parameter can be set to "sigmoid" or to "isotonic", corresponding to Platt Scaling and to Isotonic Regression respectively.
End of explanation
"""
list_for_as = []
list_for_log_loss = []
def MNB_tuned(a):
    mnb_tuned = MultinomialNB(alpha = a).fit(mini_train_data, mini_train_labels)
    dev_prediction_probabilities = mnb_tuned.predict_proba(mini_dev_data)
    list_for_as.append(a)
    working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = dev_prediction_probabilities, labels = crime_labels_mini_dev)
    list_for_log_loss.append(working_log_loss)
    #print("Multi-class Log Loss with MNB and a =", a, "is:", working_log_loss)
alpha_tuning = [0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.6, 0.8, 1.0, 1.1, 1.2, 1.4, 1.6, 1.8, 2.0, 10.0]
start = time.perf_counter()
for this_a in alpha_tuning:
    MNB_tuned(this_a)
index_best_logloss = np.argmin(list_for_log_loss)
print('For MNB the best log loss with hyperparameter tuning is',list_for_log_loss[index_best_logloss], 'with alpha =', list_for_as[index_best_logloss])
end = time.perf_counter()
print("Computation time for this step is %.2f" % (end-start), 'seconds')
"""
Explanation: Hyperparameter tuning: Multinomial Naive Bayes
For the Multinomial Naive Bayes classifier, we seek to optimize the alpha parameter (Laplace smoothing parameter).
End of explanation
"""
list_for_as = []
list_for_ms = []
list_for_log_loss = []
def MNB_calibrated(a,m):
mnb_tuned = MultinomialNB(alpha = a).fit(mini_train_data, mini_train_labels)
ccv = CalibratedClassifierCV(mnb_tuned, method = m, cv = 'prefit')
ccv.fit(mini_calibrate_data, mini_calibrate_labels)
ccv_prediction_probabilities = ccv.predict_proba(mini_dev_data)
    list_for_as.append(a)
    list_for_ms.append(m)
working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = ccv_prediction_probabilities, labels = crime_labels_mini_dev)
list_for_log_loss.append(working_log_loss)
#print("Multi-class Log Loss with MNB and a =", a, "and m =", m, "is:", working_log_loss)
alpha_tuning = [0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.6, 0.8, 1.0, 1.1, 1.2, 1.4, 1.6, 1.8, 2.0, 10.0]
methods = ['sigmoid', 'isotonic']
start = time.perf_counter()
for this_a in alpha_tuning:
    for this_m in methods:
        MNB_calibrated(this_a, this_m)
index_best_logloss = np.argmin(list_for_log_loss)
print('For MNB the best log loss with hyperparameter tuning and calibration is',list_for_log_loss[index_best_logloss], 'with alpha =', list_for_as[index_best_logloss], 'and method =', list_for_ms[index_best_logloss])
end = time.perf_counter()
print("Computation time for this step is %.2f" % (end-start), 'seconds')
"""
Explanation: Model calibration: MultinomialNB
Here we will calibrate the MNB classifier with both Platt Scaling and with Isotonic Regression using CalibratedClassifierCV with various parameter settings. The "method" parameter can be set to "sigmoid" or to "isotonic", corresponding to Platt Scaling and to Isotonic Regression respectively.
End of explanation
"""
def GNB_pre_tune():
gnb_pre_tuned = GaussianNB().fit(mini_train_data, mini_train_labels)
    dev_prediction_probabilities = gnb_pre_tuned.predict_proba(mini_dev_data)
working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = dev_prediction_probabilities, labels = crime_labels_mini_dev)
print("Multi-class Log Loss with pre-tuned GNB is:", working_log_loss)
GNB_pre_tune()
def GNB_post_tune():
# Gaussian Naive Bayes requires the data to have a relative normal distribution. Sometimes
# adding noise can improve performance by making the data more normal:
mini_train_data_noise = np.random.rand(mini_train_data.shape[0],mini_train_data.shape[1])
modified_mini_train_data = np.multiply(mini_train_data,mini_train_data_noise)
gnb_with_noise = GaussianNB().fit(modified_mini_train_data,mini_train_labels)
    dev_prediction_probabilities = gnb_with_noise.predict_proba(mini_dev_data)
working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = dev_prediction_probabilities, labels = crime_labels_mini_dev)
print("Multi-class Log Loss with tuned GNB via addition of noise to normalize the data's distribution is:", working_log_loss)
GNB_post_tune()
"""
Explanation: Tuning: Gaussian Naive Bayes
For the Gaussian Naive Bayes classifier there are no inherent parameters within the classifier function to optimize, but we will look at our log loss before and after adding noise to the data that is hypothesized to give it a more normal (Gaussian) distribution, which is required by the GNB classifier.
End of explanation
"""
list_for_ms = []
list_for_log_loss = []
def GNB_calibrated(m):
# Gaussian Naive Bayes requires the data to have a relative normal distribution. Sometimes
# adding noise can improve performance by making the data more normal:
mini_train_data_noise = np.random.rand(mini_train_data.shape[0],mini_train_data.shape[1])
modified_mini_train_data = np.multiply(mini_train_data,mini_train_data_noise)
gnb_with_noise = GaussianNB().fit(modified_mini_train_data,mini_train_labels)
ccv = CalibratedClassifierCV(gnb_with_noise, method = m, cv = 'prefit')
ccv.fit(mini_calibrate_data, mini_calibrate_labels)
ccv_prediction_probabilities = ccv.predict_proba(mini_dev_data)
    list_for_ms.append(m)
working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = ccv_prediction_probabilities, labels = crime_labels_mini_dev)
list_for_log_loss.append(working_log_loss)
#print("Multi-class Log Loss with tuned GNB via addition of noise to normalize the data's distribution and after calibration is:", working_log_loss, 'with calibration method =', m)
methods = ['sigmoid', 'isotonic']
start = time.perf_counter()
for this_m in methods:
    GNB_calibrated(this_m)
index_best_logloss = np.argmin(list_for_log_loss)
print('For GNB the best log loss with tuning and calibration is',list_for_log_loss[index_best_logloss], 'with method =', list_for_ms[index_best_logloss])
end = time.perf_counter()
print("Computation time for this step is %.2f" % (end-start), 'seconds')
"""
Explanation: Model calibration: GaussianNB
Here we will calibrate the GNB classifier with both Platt Scaling and with Isotonic Regression using CalibratedClassifierCV with various parameter settings. The "method" parameter can be set to "sigmoid" or to "isotonic", corresponding to Platt Scaling and to Isotonic Regression respectively.
End of explanation
"""
list_for_cs = []
list_for_ss = []
list_for_mds = []
list_for_mss = []
list_for_cws = []
list_for_fs = []
list_for_log_loss = []
def DT_tuned(c,s,md,ms,cw,f):
tuned_DT = DecisionTreeClassifier(criterion=c, splitter=s, max_depth=md, min_samples_leaf=ms, max_features=f, class_weight=cw).fit(mini_train_data, mini_train_labels)
dev_prediction_probabilities = tuned_DT.predict_proba(mini_dev_data)
list_for_cs.append(this_c)
list_for_ss.append(this_s)
list_for_mds.append(this_md)
list_for_mss.append(this_ms)
list_for_cws.append(this_cw)
list_for_fs.append(this_f)
working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = dev_prediction_probabilities, labels = crime_labels_mini_dev)
list_for_log_loss.append(working_log_loss)
#print("Multi-class Log Loss with DT and c,s,md,ms,cw,f =", c,",",s,",", md,",",ms,",",cw,",",f,"is:", working_log_loss)
criterion_tuning = ['gini', 'entropy']
splitter_tuning = ['best', 'random']
max_depth_tuning = ([None,6,7,8,9,10,11,12,13,14,15,16,17,18,19])
min_samples_leaf_tuning = [x + 1 for x in [i for i in range(0,int(0.091*len(mini_train_data)),100)]]
class_weight_tuning = [None, 'balanced']
max_features_tuning = ['auto', 2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]
start = time.clock()
for this_c in criterion_tuning:
for this_s in splitter_tuning:
for this_md in max_depth_tuning:
for this_ms in min_samples_leaf_tuning:
for this_cw in class_weight_tuning:
for this_f in max_features_tuning:
DT_tuned(this_c, this_s, this_md, this_ms, this_cw, this_f)
index_best_logloss = np.argmin(list_for_log_loss)
print('For DT the best log loss with hyperparameter tuning is',list_for_log_loss[index_best_logloss], 'with criterion =', list_for_cs[index_best_logloss], 'splitter =', list_for_ss[index_best_logloss], 'max_depth =', list_for_mds[index_best_logloss], 'min_samples_leaf =', list_for_mss[index_best_logloss], 'class_weight =', list_for_cws[index_best_logloss], 'max_features =', list_for_fs[index_best_logloss])
end = time.clock()
print("Computation time for this step is %.2f" % (end-start), 'seconds')
"""
Explanation: Logistic Regression
Hyperparameter tuning:
For the Logistic Regression classifier, we can seek to optimize the following classifier parameters: penalty (l1 or l2), C (inverse of regularization strength), solver ('newton-cg', 'lbfgs', 'liblinear', or 'sag')
Model calibration:
See above
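A minimal sketch of what this Logistic Regression search could look like. The data here is synthetic and the grid over penalty, C, and solver is illustrative only, not the tuned configuration for the crime data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

# Synthetic stand-in for the crime data: 3 classes, 10 features
X, y = make_classification(n_samples=300, n_features=10, n_classes=3,
                           n_informative=5, random_state=0)

best = (None, np.inf)
# liblinear supports both l1 and l2; lbfgs supports l2 only
for penalty, solver in [('l1', 'liblinear'), ('l2', 'liblinear'), ('l2', 'lbfgs')]:
    for C in [0.01, 0.1, 1.0, 10.0]:
        clf = LogisticRegression(penalty=penalty, C=C, solver=solver).fit(X, y)
        loss = log_loss(y, clf.predict_proba(X))
        if loss < best[1]:
            best = ((penalty, C, solver), loss)
print(best)
```

The same loop-and-append pattern used for the other classifiers above would apply here; only the parameter grid changes.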
Decision Tree
Hyperparameter tuning:
For the Decision Tree classifier, we can seek to optimize the following classifier parameters:
criterion: The function to measure the quality of a split; can be either Gini impurity "gini" or information gain "entropy"
splitter: The strategy used to choose the split at each node; can be either "best" to choose the best split or "random" to choose the best random split
min_samples_leaf: The minimum number of samples required to be at a leaf node
max_depth: The maximum depth of trees. If default "None" then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples
class_weight: The weights associated with classes; can be "None" giving all classes weight of one, or can be "balanced", which uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data
max_features: The number of features to consider when looking for the best split; can be "int", "float" (percent), "auto", "sqrt", or "None"
Other adjustable parameters include:
- min_samples_split: The minimum number of samples required to split an internal node; can be an integer or a float (percentage and ceil as the minimum number of samples for each node)
- min_weight_fraction_leaf: The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node; default = 0
- max_leaf_nodes: Grows a tree with max_leaf_nodes in best-first fashion. Best nodes are defined as relative reduction in impurity. If "None" then an unlimited number of leaf nodes is used.
- min_impurity_decrease: A node will be split if this split induces a decrease of the impurity greater than or equal to the min_impurity_decrease value. Default is zero.
Setting min_samples_leaf to approximately 1% of the data points can stop the tree from inappropriately classifying outliers, which can help to improve accuracy (though it is unclear whether this significantly improves the multi-class log loss).
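The 1% heuristic can be sketched as follows; the dataset here is synthetic and stands in for mini_train_data:

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the training set
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# Derive min_samples_leaf as roughly 1% of the training points
leaf = max(1, int(0.01 * len(X)))
clf = DecisionTreeClassifier(min_samples_leaf=leaf, random_state=0).fit(X, y)
print(leaf, clf.get_depth())
```

With 2000 rows this gives min_samples_leaf = 20, and the fitted tree stays shallow compared to an unconstrained fit.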
End of explanation
"""
list_for_cs = []
list_for_ss = []
list_for_mds = []
list_for_mss = []
list_for_cws = []
list_for_fs = []
list_for_cms = []
list_for_log_loss = []
def DT_calibrated(c,s,md,ms,cw,f,cm):
tuned_DT = DecisionTreeClassifier(criterion=c, splitter=s, max_depth=md, min_samples_leaf=ms, max_features=f, class_weight=cw).fit(mini_train_data, mini_train_labels)
ccv = CalibratedClassifierCV(tuned_DT, method = cm, cv = 'prefit')
ccv.fit(mini_calibrate_data, mini_calibrate_labels)
ccv_prediction_probabilities = ccv.predict_proba(mini_dev_data)
list_for_cs.append(this_c)
list_for_ss.append(this_s)
list_for_mds.append(this_md)
list_for_mss.append(this_ms)
list_for_cws.append(this_cw)
list_for_fs.append(this_f)
list_for_cms.append(this_cm)
working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = ccv_prediction_probabilities, labels = crime_labels_mini_dev)
list_for_log_loss.append(working_log_loss)
print("Multi-class Log Loss with DT and c,s,md,ms,cw,f =", c,",",s,",", md,",",ms,",",cw,",",f,",",cm,"is:", working_log_loss)
criterion_tuning = ['gini', 'entropy']
splitter_tuning = ['best', 'random']
max_depth_tuning = ([None,6,7,8,9,10,11,12,13,14,15,16,17,18,19])
min_samples_leaf_tuning = [x + 1 for x in [i for i in range(0,int(0.091*len(mini_train_data)),100)]]
class_weight_tuning = [None, 'balanced']
max_features_tuning = ['auto', 2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]
methods = ['sigmoid', 'isotonic']
start = time.clock()
for this_c in criterion_tuning:
for this_s in splitter_tuning:
for this_md in max_depth_tuning:
for this_ms in min_samples_leaf_tuning:
for this_cw in class_weight_tuning:
for this_f in max_features_tuning:
for this_cm in methods:
DT_calibrated(this_c, this_s, this_md, this_ms, this_cw, this_f, this_cm)
index_best_logloss = np.argmin(list_for_log_loss)
print('For DT the best log loss with hyperparameter tuning and calibration is',list_for_log_loss[index_best_logloss], 'with criterion =', list_for_cs[index_best_logloss], 'splitter =', list_for_ss[index_best_logloss], 'max_depth =', list_for_mds[index_best_logloss], 'min_samples_leaf =', list_for_mss[index_best_logloss], 'class_weight =', list_for_cws[index_best_logloss], 'max_features =', list_for_fs[index_best_logloss], 'and calibration method =', list_for_cms[index_best_logloss])
end = time.clock()
print("Computation time for this step is %.2f" % (end-start), 'seconds')
"""
Explanation: Model calibration:
See above
End of explanation
"""
### All the work from Sarah's notebook:
import theano
from theano import tensor as T
from theano.sandbox.rng_mrg import MRG_RandomStreams as RandomStreams
print (theano.config.device) # We're using CPUs (for now)
print (theano.config.floatX )# Should be 64 bit for CPUs
np.random.seed(0)
from IPython.display import display, clear_output
numFeatures = train_data[1].size
numTrainExamples = train_data.shape[0]
numTestExamples = test_data.shape[0]
print ('Features = %d' %(numFeatures))
print ('Train set = %d' %(numTrainExamples))
print ('Test set = %d' %(numTestExamples))
class_labels = list(set(train_labels))
print(class_labels)
numClasses = len(class_labels)
### Binarize the class labels
def binarizeY(data):
binarized_data = np.zeros((data.size,39))
for j in range(0,data.size):
feature = data[j]
i = class_labels.index(feature)
binarized_data[j,i]=1
return binarized_data
train_labels_b = binarizeY(train_labels)
test_labels_b = binarizeY(test_labels)
numClasses = train_labels_b[1].size
print ('Classes = %d' %(numClasses))
print ('\n', train_labels_b[:5, :], '\n')
print (train_labels[:10], '\n')
###1) Parameters
numFeatures = train_data.shape[1]
numHiddenNodeslayer1 = 50
numHiddenNodeslayer2 = 30
w_1 = theano.shared(np.asarray((np.random.randn(*(numFeatures, numHiddenNodeslayer1))*0.01)))
w_2 = theano.shared(np.asarray((np.random.randn(*(numHiddenNodeslayer1, numHiddenNodeslayer2))*0.01)))
w_3 = theano.shared(np.asarray((np.random.randn(*(numHiddenNodeslayer2, numClasses))*0.01)))
params = [w_1, w_2, w_3]
###2) Model
X = T.matrix()
Y = T.matrix()
srng = RandomStreams()
def dropout(X, p=0.):
if p > 0:
X *= srng.binomial(X.shape, p=1 - p)
X /= 1 - p
return X
def model(X, w_1, w_2, w_3, p_1, p_2, p_3):
return T.nnet.softmax(T.dot(dropout(T.nnet.sigmoid(T.dot(dropout(T.nnet.sigmoid(T.dot(dropout(X, p_1), w_1)),p_2), w_2)),p_3),w_3))
y_hat_train = model(X, w_1, w_2, w_3, 0.2, 0.5,0.5)
y_hat_predict = model(X, w_1, w_2, w_3, 0., 0., 0.)
### (3) Cost function
# cost = T.mean(T.sqr(y_hat_train - Y))  # alternative squared-error cost (unused)
cost = T.mean(T.nnet.categorical_crossentropy(y_hat_train, Y))
### (4) Objective (and solver)
alpha = 0.01
def backprop(cost, w):
grads = T.grad(cost=cost, wrt=w)
updates = []
for wi, grad in zip(w, grads):
updates.append([wi, wi - grad * alpha])
return updates
update = backprop(cost, params)
train = theano.function(inputs=[X, Y], outputs=cost, updates=update, allow_input_downcast=True)
y_pred = T.argmax(y_hat_predict, axis=1)
predict = theano.function(inputs=[X], outputs=y_pred, allow_input_downcast=True)
miniBatchSize = 10
def gradientDescent(epochs):
for i in range(epochs):
for start, end in zip(range(0, len(train_data), miniBatchSize), range(miniBatchSize, len(train_data), miniBatchSize)):
cc = train(train_data[start:end], train_labels_b[start:end])
clear_output(wait=True)
print ('%d) accuracy = %.4f' %(i+1, np.mean(np.argmax(test_labels_b, axis=1) == predict(test_data))) )
gradientDescent(50)
### How to decide what # to use for epochs? epochs in this case are how many rounds?
### plot costs for each of the 50 iterations and see how much it decline.. if its still very decreasing, you should
### do more iterations; otherwise if its looking like its flattening, you can stop
"""
Explanation: Support Vector Machines (Kalvin)
Hyperparameter tuning:
For the SVM classifier, we can seek to optimize the following classifier parameters: C (penalty parameter C of the error term), kernel ('linear', 'poly', 'rbf', sigmoid', or 'precomputed')
See source [2] for parameter optimization in SVM
Model calibration:
See above
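A hedged sketch of the SVM tuning described above, on synthetic data only; note that SVC needs probability=True to expose predict_proba for computing log loss:

```python
from sklearn.datasets import make_classification
from sklearn.metrics import log_loss
from sklearn.svm import SVC

# Small synthetic binary problem as a stand-in for the real data
X, y = make_classification(n_samples=200, n_features=8, random_state=0)

results = {}
for kernel in ['linear', 'rbf', 'sigmoid']:
    for C in [0.1, 1.0, 10.0]:
        # probability=True fits an internal calibration so predict_proba works
        clf = SVC(C=C, kernel=kernel, probability=True, random_state=0).fit(X, y)
        results[(kernel, C)] = log_loss(y, clf.predict_proba(X))
best = min(results, key=results.get)
print(best, results[best])
```

The 'precomputed' kernel is omitted here since it requires supplying a Gram matrix rather than the raw feature matrix.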
Neural Nets (Sarah)
Hyperparameter tuning:
For the Neural Networks MLP classifier, we can seek to optimize the following classifier parameters: hidden_layer_sizes, activation ('identity', 'logistic', 'tanh', 'relu'), solver ('lbfgs', 'sgd', 'adam'), alpha, learning_rate ('constant', 'invscaling', 'adaptive')
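A sketch of this MLPClassifier search on synthetic data; the hidden-layer sizes and alpha values below are illustrative placeholders, not tuned settings:

```python
from sklearn.datasets import make_classification
from sklearn.metrics import log_loss
from sklearn.neural_network import MLPClassifier

# Synthetic 3-class stand-in for the real data
X, y = make_classification(n_samples=300, n_features=10, n_classes=3,
                           n_informative=5, random_state=0)

losses = {}
for hidden in [(50,), (50, 30)]:
    for alpha in [1e-4, 1e-2]:
        clf = MLPClassifier(hidden_layer_sizes=hidden, activation='relu',
                            solver='adam', alpha=alpha, max_iter=300,
                            random_state=0).fit(X, y)
        losses[(hidden, alpha)] = log_loss(y, clf.predict_proba(X))
print(min(losses, key=losses.get))
```

The activation and solver values could be swept in the same way as the inner loops above.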
End of explanation
"""
# Here we will likely use Pipeline and GridSearchCV in order to find the overall classifier with optimized Multi-class Log Loss.
# This will be the last step after all attempts at feature addition, hyperparameter tuning, and calibration are completed
# and the corresponding performance metrics are gathered.
"""
Explanation: Model calibration:
See above
Random Forest (Sam, possibly in AWS)
Hyperparameter tuning:
For the Random Forest classifier, we can seek to optimize the following classifier parameters: n_estimators (the number of trees in the forest), max_features, max_depth, min_samples_leaf, bootstrap (whether or not bootstrap samples are used when building trees), oob_score (whether or not out-of-bag samples are used to estimate the generalization accuracy)
Model calibration:
See above
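An illustrative RandomForestClassifier grid over the parameters listed above; the n_estimators and max_depth values are placeholders on synthetic data, not tuned for the real problem:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import log_loss

# Synthetic 3-class stand-in for the real data
X, y = make_classification(n_samples=300, n_features=10, n_classes=3,
                           n_informative=5, random_state=0)

losses = {}
for n in [50, 100]:
    for depth in [None, 8]:
        # oob_score=True requires bootstrap=True
        clf = RandomForestClassifier(n_estimators=n, max_depth=depth,
                                     bootstrap=True, oob_score=True,
                                     random_state=0).fit(X, y)
        losses[(n, depth)] = log_loss(y, clf.predict_proba(X))
print(min(losses, key=losses.get))
```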
Meta-estimators
AdaBoost Classifier
Hyperparameter tuning:
There are no major changes that we seek to make in the AdaBoostClassifier with respect to default parameter values.
Adaboosting each classifier:
We will run the AdaBoostClassifier on each different classifier from above, using the classifier settings with optimized Multi-class Log Loss after hyperparameter tuning and calibration.
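A minimal sketch of the boosting step on synthetic data. Here the default base learner (a depth-1 decision tree) is used; substituting one of the tuned classifiers from above would go through the `estimator` argument (named `base_estimator` in older scikit-learn versions):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import log_loss

# Synthetic binary stand-in for the real data
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

ada = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)
ada_loss = log_loss(y, ada.predict_proba(X))
print(ada_loss)
```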
Bagging Classifier
Hyperparameter tuning:
For the Bagging meta classifier, we can seek to optimize the following classifier parameters: n_estimators (the number of base estimators in the ensemble), max_samples, max_features, bootstrap (whether or not bootstrap samples are used when building estimators), bootstrap_features (whether features are drawn with replacement), and oob_score (whether or not out-of-bag samples are used to estimate the generalization accuracy)
Bagging each classifier:
We will run the BaggingClassifier on each different classifier from above, using the classifier settings with optimized Multi-class Log Loss after hyperparameter tuning and calibration.
Gradient Boosting Classifier
Hyperparameter tuning:
For the Gradient Boosting meta classifier, we can seek to optimize the following classifier parameters: n_estimators (the number of boosting stages), max_depth, min_samples_leaf, and max_features
Gradient Boosting each classifier:
We will run the GradientBoostingClassifier with loss = 'deviance' (as loss = 'exponential' uses the AdaBoost algorithm) on each different classifier from above, using the classifier settings with optimized Multi-class Log Loss after hyperparameter tuning and calibration.
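A sketch of the gradient-boosting step on synthetic data, with the default deviance/log-loss objective described above. One caveat worth noting: unlike AdaBoost and Bagging, scikit-learn's GradientBoostingClassifier boosts its own internal regression trees and does not wrap an arbitrary base classifier, so the per-classifier comparison would be limited to varying the tree parameters:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import log_loss

# Synthetic binary stand-in for the real data
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

gbc = GradientBoostingClassifier(n_estimators=100, max_depth=3,
                                 min_samples_leaf=5, max_features='sqrt',
                                 random_state=0).fit(X, y)
gbc_loss = log_loss(y, gbc.predict_proba(X))
print(gbc_loss)
```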
Final evaluation on test data
End of explanation
"""
A = [n for n in range(1,21,1)]+([i for i in range(25,50,5)])
print(A)
A = int(float(1.1))
print(A)
print([i for i in range(0,int(0.031*len(mini_train_data)),100)])
# A = 1+[1,2,3]  # raises TypeError: an integer cannot be added to a list
A = [1 + x for x in [1, 2, 3]]
print(A)
"""
Explanation: References
1) Hsiang, Solomon M. and Burke, Marshall and Miguel, Edward. "Quantifying the Influence of Climate on Human Conflict". Science, Vol 341, Issue 6151, 2013
2) Huang, Cheng-Lung. Wang, Chieh-Jen. "A GA-based feature selection and parameters optimization for support vector machines". Expert Systems with Applications, Vol 31, 2006, p 231-240
3) https://gallery.cortanaintelligence.com/Experiment/Evaluating-and-Parameter-Tuning-a-Decision-Tree-Model-1
End of explanation
"""
from __future__ import division
import graphlab
"""
Explanation: Logistic Regression with L2 regularization
The goal of this second notebook is to implement your own logistic regression classifier with L2 regularization. You will do the following:
Extract features from Amazon product reviews.
Convert an SFrame into a NumPy array.
Write a function to compute the derivative of log likelihood function with an L2 penalty with respect to a single coefficient.
Implement gradient ascent with an L2 penalty.
Empirically explore how the L2 penalty can ameliorate overfitting.
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create. Upgrade by
pip install graphlab-create --upgrade
See this page for detailed instructions on upgrading.
End of explanation
"""
products = graphlab.SFrame('amazon_baby_subset.gl/')
products.head()
"""
Explanation: Load and process review dataset
For this assignment, we will use the same subset of the Amazon product review dataset that we used in Module 3 assignment. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted of mostly positive reviews.
End of explanation
"""
# The same feature processing (same as the previous assignments)
# ---------------------------------------------------------------
import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
important_words = json.load(f)
important_words = [str(s) for s in important_words]
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
# Remove punctuation.
products['review_clean'] = products['review'].apply(remove_punctuation)
# Split out the words into individual columns
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
"""
Explanation: Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations:
Remove punctuation using Python's built-in string functionality.
Compute word counts (only for the important_words)
Refer to Module 3 assignment for more details.
End of explanation
"""
products
"""
Explanation: Now, let us take a look at what the dataset looks like (Note: This may take a few minutes).
End of explanation
"""
train_data, validation_data = products.random_split(.8, seed=2)
print 'Training set : %d data points' % len(train_data)
print 'Validation set : %d data points' % len(validation_data)
"""
Explanation: Train-Validation split
We split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set. We use seed=2 so that everyone gets the same result.
Note: In previous assignments, we have called this a train-test split. However, the portion of data that we don't train on will be used to help select model parameters. Thus, this portion of data should be called a validation set. Recall that examining performance of various potential models (i.e. models with different parameters) should be on a validation set, while evaluation of selected model should always be on a test set.
End of explanation
"""
import numpy as np
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
"""
Explanation: Convert SFrame to NumPy array
Just like in the second assignment of the previous module, we provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels.
Note: The feature matrix includes an additional column 'intercept' filled with 1's to take account of the intercept term.
End of explanation
"""
feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')
feature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment')
"""
Explanation: We convert both the training and validation sets into NumPy arrays.
Warning: This may take a few minutes.
End of explanation
"""
'''
produces probabilistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
    # Take dot product of feature_matrix and coefficients
    scores = np.dot(feature_matrix, coefficients)
    # Compute P(y_i = +1 | x_i, w) using the link function (vectorized sigmoid)
    predictions = 1. / (1. + np.exp(-scores))
    # return predictions
    return predictions
"""
Explanation: Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)
It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in acceptable amount of time. For interest of time, please refrain from running get_numpy_data function. Instead, download the binary file containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands:
arrays = np.load('module-4-assignment-numpy-arrays.npz')
feature_matrix_train, sentiment_train = arrays['feature_matrix_train'], arrays['sentiment_train']
feature_matrix_valid, sentiment_valid = arrays['feature_matrix_valid'], arrays['sentiment_valid']
Building on logistic regression with no L2 penalty assignment
Let us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
where the feature vector $h(\mathbf{x}_i)$ is given by the word counts of important_words in the review $\mathbf{x}_i$.
We will use the same code as in this past assignment to make probability predictions since this part is not affected by the L2 penalty. (Only the way in which the coefficients are learned is affected by the addition of a regularization term.)
End of explanation
"""
def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant):
# Compute the dot product of errors and feature
derivative = np.dot(errors, feature)
# add L2 penalty term for any feature that isn't the intercept.
if not feature_is_constant:
derivative += -2 * l2_penalty * coefficient
return derivative
"""
Explanation: Adding L2 penalty
Let us now work on extending logistic regression with L2 regularization. As discussed in the lectures, the L2 regularization is particularly useful in preventing overfitting. In this assignment, we will explore L2 regularization in detail.
Recall from lecture and the previous assignment that for logistic regression without an L2 penalty, the derivative of the log likelihood function is:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
Adding L2 penalty to the derivative
It takes only a small modification to add a L2 penalty. All terms indicated in red refer to terms that were added due to an L2 penalty.
Recall from the lecture that the link function is still the sigmoid:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
We add the L2 penalty term to the per-coefficient derivative of log likelihood:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j }
$$
The per-coefficient derivative for logistic regression with an L2 penalty is as follows:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j }
$$
and for the intercept term, we have
$$
\frac{\partial\ell}{\partial w_0} = \sum_{i=1}^N h_0(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
Note: As we did in the Regression course, we do not apply the L2 penalty on the intercept. A large intercept does not necessarily indicate overfitting because the intercept is not associated with any particular feature.
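A small numeric check of the penalized derivative above, with made-up errors and feature values: for a single coefficient $w_j$, the L2 term subtracts $2\lambda w_j$ from the data-fit part of the gradient.

```python
import numpy as np

errors = np.array([0.3, -0.2, 0.1])    # 1[y_i=+1] - P(y_i=+1|x_i,w) for 3 points
feature = np.array([1.0, 2.0, 0.0])    # h_j(x_i) for each i
w_j, lam = 0.5, 4.0

data_fit = np.dot(errors, feature)     # sum_i h_j(x_i) * error_i = -0.1
penalized = data_fit - 2 * lam * w_j   # -0.1 - 2*4.0*0.5 = -4.1
print(data_fit, penalized)
```

For the intercept ($j = 0$) the penalized term is simply skipped, matching the formula above.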
Write a function that computes the derivative of log likelihood with respect to a single coefficient $w_j$. Unlike its counterpart in the last assignment, the function accepts five arguments:
* errors vector containing $(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w}))$ for all $i$
* feature vector containing $h_j(\mathbf{x}_i)$ for all $i$
* coefficient containing the current value of coefficient $w_j$.
* l2_penalty representing the L2 penalty constant $\lambda$
* feature_is_constant telling whether the $j$-th feature is constant or not.
End of explanation
"""
def compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
lp = np.sum((indicator-1)*scores - np.log(1. + np.exp(-scores))) - l2_penalty*np.sum(coefficients[1:]**2)
return lp
"""
Explanation: Quiz question: In the code above, was the intercept term regularized?
To verify the correctness of the gradient ascent algorithm, we provide a function for computing log likelihood (which we recall from the last assignment was a topic detailed in an advanced optional video, and used here for its numerical stability).
$$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) \color{red}{-\lambda\|\mathbf{w}\|_2^2} $$
End of explanation
"""
def logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients, step_size, l2_penalty, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
## YOUR CODE HERE
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
is_intercept = (j == 0)
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j]. Save it in a variable called derivative
derivative = feature_derivative_with_L2(
errors,
feature_matrix[:,j],
coefficients[j],
l2_penalty,
                is_intercept
)
# add the step size times the derivative to the current coefficient
coefficients[j] += step_size * derivative
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty)
print 'iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp)
return coefficients
"""
Explanation: Quiz question: Does the term with L2 regularization increase or decrease $\ell\ell(\mathbf{w})$?
The logistic regression function looks almost like the one in the last assignment, with a minor modification to account for the L2 penalty. Fill in the code below to complete this modification.
End of explanation
"""
# run with L2 = 0
coefficients_0_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=0, max_iter=501)
# run with L2 = 4
coefficients_4_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=4, max_iter=501)
# run with L2 = 10
coefficients_10_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=10, max_iter=501)
# run with L2 = 1e2
coefficients_1e2_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e2, max_iter=501)
# run with L2 = 1e3
coefficients_1e3_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e3, max_iter=501)
# run with L2 = 1e5
coefficients_1e5_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e5, max_iter=501)
"""
Explanation: Explore effects of L2 regularization
Now that we have written up all the pieces needed for regularized logistic regression, let's explore the benefits of using L2 regularization in analyzing sentiment for product reviews. As iterations pass, the log likelihood should increase.
Below, we train models with increasing amounts of regularization, starting with no L2 penalty, which is equivalent to our previous logistic regression implementation.
End of explanation
"""
table = graphlab.SFrame({'word': ['(intercept)'] + important_words})
def add_coefficients_to_table(coefficients, column_name):
table[column_name] = coefficients
return table
"""
Explanation: Compare coefficients
We now compare the coefficients for each of the models that were trained above. We will create a table of features and learned coefficients associated with each of the different L2 penalty values.
Below is a simple helper function that will help us create this table.
End of explanation
"""
add_coefficients_to_table(coefficients_0_penalty, 'coefficients [L2=0]')
add_coefficients_to_table(coefficients_4_penalty, 'coefficients [L2=4]')
add_coefficients_to_table(coefficients_10_penalty, 'coefficients [L2=10]')
add_coefficients_to_table(coefficients_1e2_penalty, 'coefficients [L2=1e2]')
add_coefficients_to_table(coefficients_1e3_penalty, 'coefficients [L2=1e3]')
add_coefficients_to_table(coefficients_1e5_penalty, 'coefficients [L2=1e5]')
"""
Explanation: Now, let's run the function add_coefficients_to_table for each of the L2 penalty strengths.
End of explanation
"""
coefficients_l2_0_no_intercept = list(coefficients_0_penalty[1:]) # exclude intercept
word_coefficient_tuples = [(word, coefficient) for word, coefficient in zip(important_words, coefficients_l2_0_no_intercept)]
word_coefficient_tuples = sorted(word_coefficient_tuples, key=lambda x:x[1], reverse=True)
positive_words = []
for t in word_coefficient_tuples[:5]:
positive_words.append(t[0])
positive_words
negative_words = []
for t in word_coefficient_tuples[-5:]:
negative_words.append(t[0])
negative_words
"""
Explanation: Using the coefficients trained with L2 penalty 0, find the 5 most positive words (with largest positive coefficients). Save them to positive_words. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to negative_words.
Quiz Question. Which of the following is not listed in either positive_words or negative_words?
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = 10, 6
def make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list):
cmap_positive = plt.get_cmap('Reds')
cmap_negative = plt.get_cmap('Blues')
xx = l2_penalty_list
plt.plot(xx, [0.]*len(xx), '--', lw=1, color='k')
table_positive_words = table.filter_by(column_name='word', values=positive_words)
table_negative_words = table.filter_by(column_name='word', values=negative_words)
del table_positive_words['word']
del table_negative_words['word']
for i in xrange(len(positive_words)):
color = cmap_positive(0.8*((i+1)/(len(positive_words)*1.2)+0.15))
plt.plot(xx, table_positive_words[i:i+1].to_numpy().flatten(),
'-', label=positive_words[i], linewidth=4.0, color=color)
for i in xrange(len(negative_words)):
color = cmap_negative(0.8*((i+1)/(len(negative_words)*1.2)+0.15))
plt.plot(xx, table_negative_words[i:i+1].to_numpy().flatten(),
'-', label=negative_words[i], linewidth=4.0, color=color)
plt.legend(loc='best', ncol=3, prop={'size':16}, columnspacing=0.5)
plt.axis([1, 1e5, -1, 2])
plt.title('Coefficient path')
    plt.xlabel(r'L2 penalty ($\lambda$)')
plt.ylabel('Coefficient value')
plt.xscale('log')
plt.rcParams.update({'font.size': 18})
plt.tight_layout()
"""
Explanation: Let us observe the effect of increasing L2 penalty on the 10 words just selected. We provide you with a utility function to plot the coefficient path.
End of explanation
"""
make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list=[0, 4, 10, 1e2, 1e3, 1e5])
"""
Explanation: Run the following cell to generate the plot. Use the plot to answer the following quiz question.
End of explanation
"""
def get_classification_accuracy(feature_matrix, sentiment, coefficients):
scores = np.dot(feature_matrix, coefficients)
apply_threshold = np.vectorize(lambda x: 1. if x > 0 else -1.)
predictions = apply_threshold(scores)
num_correct = (predictions == sentiment).sum()
accuracy = num_correct / len(feature_matrix)
return accuracy
"""
Explanation: Quiz Question: (True/False) All coefficients consistently get smaller in size as the L2 penalty is increased.
Quiz Question: (True/False) The relative order of coefficients is preserved as the L2 penalty is increased. (For example, if the coefficient for 'cat' was more positive than that for 'dog', this remains true as the L2 penalty increases.)
Measuring accuracy
Now, let us compute the accuracy of the classifier model. Recall that the accuracy is given by
$$
\mbox{accuracy} = \frac{\mbox{\# correctly classified data points}}{\mbox{\# total data points}}
$$
Recall from lecture that that the class prediction is calculated using
$$
\hat{y}_i =
\left\{
\begin{array}{ll}
+1 & h(\mathbf{x}_i)^T\mathbf{w} > 0 \\
-1 & h(\mathbf{x}_i)^T\mathbf{w} \leq 0 \\
\end{array}
\end{array}
\right.
$$
Note: It is important to know that the model prediction code doesn't change even with the addition of an L2 penalty. The only thing that changes is the estimated coefficients used in this prediction.
Based on the above, we will use the same code that was used in Module 3 assignment.
End of explanation
"""
train_accuracy = {}
train_accuracy[0] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_0_penalty)
train_accuracy[4] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_4_penalty)
train_accuracy[10] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_10_penalty)
train_accuracy[1e2] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e2_penalty)
train_accuracy[1e3] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e3_penalty)
train_accuracy[1e5] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e5_penalty)
validation_accuracy = {}
validation_accuracy[0] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_0_penalty)
validation_accuracy[4] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_4_penalty)
validation_accuracy[10] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_10_penalty)
validation_accuracy[1e2] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e2_penalty)
validation_accuracy[1e3] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e3_penalty)
validation_accuracy[1e5] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e5_penalty)
# Build a simple report
for key in sorted(validation_accuracy.keys()):
print "L2 penalty = %g" % key
print "train accuracy = %s, validation_accuracy = %s" % (train_accuracy[key], validation_accuracy[key])
print "--------------------------------------------------------------------------------"
"""
Explanation: Below, we compare the accuracy on the training data and validation data for all the models that were trained in this assignment. We first calculate the accuracy values and then build a simple report summarizing the performance for the various models.
End of explanation
"""
|
kkozarev/mwacme | notebooks/ICRS_on_AIA_BinChen.ipynb | gpl-2.0 |
aiamap=sunpy.map.Map('/Users/kkozarev/sunpy/data/sample_data/AIA20110319_105400_0171.fits')
"""
Explanation: Load an AIA image
End of explanation
"""
sunc_1au=SkyCoord(ra='23h53m53.47',dec='-00d39m44.3s', distance=1.*u.au,frame='icrs').transform_to(aiamap.coordinate_frame)
"""
Explanation: I then go to JPL Horizons (https://ssd.jpl.nasa.gov/horizons.cgi) and find the RA and Dec of the solar disk center. I use a geocentric observer because I do not know exactly where SDO is located at that time.
The following is assuming the target is at 1 AU (where the Sun is supposed to be)
End of explanation
"""
sunc_1ly=SkyCoord(ra='23h53m53.47',dec='-00d39m44.3s',
distance=1.*u.lightyear,frame='icrs').transform_to(aiamap.coordinate_frame)
fig = plt.figure(figsize=(8,8))
ax = plt.subplot(projection=aiamap)
aiamap.plot(axes=ax)
aiamap.draw_grid(axes=ax)
aiamap.draw_limb(axes=ax)
ax.plot_coord(sunc_1au, '+w', ms=10, label='Sun Center 1 AU')
ax.plot_coord(sunc_1ly, '*r', ms=10, label='Sun Center 1 LY')
#plt.show()
sunc_1au
sunc_1ly
"""
Explanation: This is assuming the target is at 1 ly away (very far!)
End of explanation
"""
|
rishuatgithub/MLPy | nlp/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/01-Visualizing-POS.ipynb | apache-2.0 |
# Perform standard imports
import spacy
nlp = spacy.load('en_core_web_sm')
# Import the displaCy library
from spacy import displacy
# Create a simple Doc object
doc = nlp(u"The quick brown fox jumped over the lazy dog's back.")
# Render the dependency parse immediately inside Jupyter:
displacy.render(doc, style='dep', jupyter=True, options={'distance': 110})
"""
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Visualizing Parts of Speech
spaCy offers an outstanding visualizer called displaCy:
End of explanation
"""
for token in doc:
print(f'{token.text:{10}} {token.pos_:{7}} {token.dep_:{7}} {spacy.explain(token.dep_)}')
"""
Explanation: The dependency parse shows the coarse POS tag for each token, as well as the dependency tag if given:
End of explanation
"""
displacy.serve(doc, style='dep', options={'distance': 110})
"""
Explanation: Creating Visualizations Outside of Jupyter
If you're using another Python IDE or writing a script, you can choose to have spaCy serve up HTML separately.
Instead of displacy.render(), use displacy.serve():
End of explanation
"""
doc2 = nlp(u"This is a sentence. This is another, possibly longer sentence.")
# Create spans from Doc.sents:
spans = list(doc2.sents)
displacy.serve(spans, style='dep', options={'distance': 110})
"""
Explanation: <font color=blue>After running the cell above, click the link below to view the dependency parse:</font>
http://127.0.0.1:5000
<br><br>
<font color=red>To shut down the server and return to jupyter, interrupt the kernel either through the Kernel menu above, by hitting the black square on the toolbar, or by typing the keyboard shortcut Esc, I, I</font>
<font color=green>NOTE: We'll use this method moving forward because, at this time, several of the customizations we want to show don't work well in Jupyter.</font>
Handling Large Text
displacy.serve() accepts a single Doc or list of Doc objects. Since large texts are difficult to view in one line, you may want to pass a list of spans instead. Each span will appear on its own line:
End of explanation
"""
options = {'distance': 110, 'compact': True, 'color': 'yellow', 'bg': '#09a3d5', 'font': 'Times'}
displacy.serve(doc, style='dep', options=options)
"""
Explanation: Click this link to view the dependency: http://127.0.0.1:5000
<br>Interrupt the kernel to return to jupyter.
Customizing the Appearance
Besides setting the distance between tokens, you can pass other arguments to the options parameter:
<table>
<tr><th>NAME</th><th>TYPE</th><th>DESCRIPTION</th><th>DEFAULT</th></tr>
<tr><td>`compact`</td><td>bool</td><td>"Compact mode" with square arrows that takes up less space.</td><td>`False`</td></tr>
<tr><td>`color`</td><td>unicode</td><td>Text color (HEX, RGB or color names).</td><td>`#000000`</td></tr>
<tr><td>`bg`</td><td>unicode</td><td>Background color (HEX, RGB or color names).</td><td>`#ffffff`</td></tr>
<tr><td>`font`</td><td>unicode</td><td>Font name or font family for all text.</td><td>`Arial`</td></tr>
</table>
For a full list of options visit https://spacy.io/api/top-level#displacy_options
End of explanation
"""
|
chbrandt/pynotes | moc/MOC_LaMassa.ipynb | gpl-2.0 |
baseurl = 'ftp://cdsarc.u-strasbg.fr/pub/cats/J/ApJ/817/172/'
readme_file = 'ReadMe'
chandra_file = 'chandra.dat'
import astropy
print "astropy version:",astropy.__version__
import mocpy
print "mocpy version:",mocpy.__version__
import healpy
print "healpy version:",healpy.__version__
"""
Explanation: Building a MOC from a CDS/Vizier catalog
After a couple of conversations I had during the week of the 2nd ASTERICS VO school I've got two small TODO's hanging on my list, this is an answer to both. First one (to François) is an observation about CDS/Vizier LaMassa 2016 catalog's metadata; the second one (to Markus) is about MOC catalogs format. I will not get into the details -- since the respective discussion had already being done --, it is sufficient to say that:
some of Lamassa's null value are not properly read when using Astropy;
what I understand about a MOC catalog.
The data to be used is provided by Vizier, the LaMassa et al, 2016, ApJ, 817, 172, in particular the ReadMe and chandra.dat files.
Software to handle the catalog will be Astropy and Thomas Boch's mocpy and Healpy.
TOC:
* Dealing with null values from Vizier metadata
* Generating a MOC catalog
End of explanation
"""
def download(path,filename,outdir):
from urllib2 import urlopen
url = path+filename
f = urlopen(url)
data = f.read()
with open(outdir+filename, "wb") as fp:
fp.write(data)
import os
if not os.path.isdir('data'):
os.mkdir('data')
download(baseurl,readme_file,outdir='data/')
download(baseurl,chandra_file,outdir='data/')
!ls 'data/'
"""
Explanation: Download ReadMe and chandra.dat files and save them inside ./data/ dir
End of explanation
"""
from astropy.table import Table
chandra = Table.read('data/chandra.dat',readme='data/ReadMe',format='ascii.cds')
chandra # Notice the '-999' values
"""
Explanation: Dealing with null values from Vizier metadata
The goal here, as said before, is to show the "bug" in the Vizier ReadMe (description) file when null values are not properly formatted.
We start by opening the chandra table and noticing the values -999 not being properly handled by Astropy as null values.
End of explanation
"""
from astropy.table import Table
chandra = Table.read('data/chandra.dat',readme='data/ReadMe_fix',format='ascii.cds')
chandra
"""
Explanation: We can see records in columns logLSoft, logLHard, logLFull, for example, showing the values -999.0.
Although we already suspect, we go to ReadMe and double-check it to see what is there about Null values.
For those three example columns we see ?=-999, which is the right --although truncated/integer-- value.
For some reason, Astropy is not handling it as it should.
To fix this, we have sync the number of significat digits of the null values with the (declared) format.
For instance, those columns have a Format=F7.2 and so should the null values be declared as ?=-999.00.
Changing such signatures for those columns (logLSoft, logLHard and logLFull) and saving them to file ReadMe_fix give us the following:
End of explanation
"""
# A function to find out which healpix level corresponds a given (typical) size of coverage
def size2level(size):
"""
Returns nearest Healpix level corresponding to a given diamond size
The 'nearest' Healpix level is here to be the nearest greater level,
right before the first level smaller than 'size'.
"""
# units
from astropy import units as u
# Structure to map healpix' levels to their angular sizes
#
healpix_levels = {
0 : 58.63 * u.deg,
1 : 29.32 * u.deg,
2 : 14.66 * u.deg,
3 : 7.329 * u.deg,
4 : 3.665 * u.deg,
5 : 1.832 * u.deg,
6 : 54.97 * u.arcmin,
7 : 27.48 * u.arcmin,
8 : 13.74 * u.arcmin,
9 : 6.871 * u.arcmin,
10 : 3.435 * u.arcmin,
11 : 1.718 * u.arcmin,
12 : 51.53 * u.arcsec,
13 : 25.77 * u.arcsec,
14 : 12.88 * u.arcsec,
15 : 6.442 * u.arcsec,
16 : 3.221 * u.arcsec,
17 : 1.61 * u.arcsec,
18 : 805.2 * u.milliarcsecond,
19 : 402.6 * u.milliarcsecond,
20 : 201.3 * u.milliarcsecond,
21 : 100.6 * u.milliarcsecond,
22 : 50.32 * u.milliarcsecond,
23 : 25.16 * u.milliarcsecond,
24 : 12.58 * u.milliarcsecond,
25 : 6.291 * u.milliarcsecond,
26 : 3.145 * u.milliarcsecond,
27 : 1.573 * u.milliarcsecond,
28 : 786.3 * u.microarcsecond,
29 : 393.2 * u.microarcsecond
}
assert size.unit
    ko = None
    for k, v in sorted(healpix_levels.items()):  # iterate levels in increasing order (dict order is not guaranteed)
        if v < 2 * size: # extrapolating the error by one order of magnitude
            break
        ko = k
return ko
import numpy as np
from astropy import units as u
median_positional_error = np.median(chandra['e_Pos']) * u.arcsec
level = size2level(median_positional_error)
nside = 2**level
print "Typical (median) position error: \n{}".format(median_positional_error)
print "\nCorrespondig healpix level: {} \n\t and nsize value: {}".format(level,nside)
def healpix_radec2pix(nside, ra, dec, nest=True):
"""
convert ra,dec to healpix elements
"""
def radec2thetaphi(ra,dec):
"""
convert equatorial ra, dec in degrees
to polar theta, phi in radians
"""
def ra2phi(ra):
import math
return math.radians(ra)
def dec2theta(dec):
import math
return math.pi/2 - math.radians(dec)
_phi = ra2phi(ra)
_theta = dec2theta(dec)
return _theta,_phi
import healpy
_theta,_phi = radec2thetaphi(ra, dec)
return healpy.ang2pix(nside, _theta, _phi, nest=nest)
radec = zip(chandra['RAdeg'],chandra['DEdeg'])
hpix = [ healpix_radec2pix(nside,ra,dec) for ra,dec in radec ]
"""
Explanation: That's it. After declaring the null values with all the significant digits required by the Format, such values are properly handled.
Generating a MOC catalog
Now we walk through creating a MOC catalog. There is no problem to fix here; this section just answers Markus with what I understand a MOC to be: a list of (unique) element numbers. Section 2.3.1, NUNIQ packing, of the IVOA MOC document version 1 explains the conversion between the two representations of elements.
The lines below use the catalog we have in hand, chandra, to build the MOC.
First, I compute the Healpix level/nside values based on the median positional error of the catalog, and then the MOC elements are computed from (RA,Dec) using HealPy.
MOCPy is used at the end to plot the MOC elements; Aladin is also used to get a better view of the elements.
Files can be downloaded from the data/ directory.
End of explanation
"""
hpix
"""
Explanation: Here it is, the MOC catalog (the list of elements to be more precise):
End of explanation
"""
moc = mocpy.MOC()
moc.add_pix_list(level,hpix)
moc.plot()
moc.write('data/MOC_chandra.fits')
"""
Explanation: The plot made by MOCPy:
End of explanation
"""
from IPython.display import HTML
HTML('''
<figure>
<img src="data/MOC_on_Aladin.png" alt="MOC printed on Aladin">
<figcaption>Figure 1: MOC printed on Aladin</figcaption>
</figure>
''')
"""
Explanation: And here after importing it to Aladin:
End of explanation
"""
|
kubeflow/kfp-tekton-backend | components/gcp/dataproc/submit_pig_job/sample.ipynb | apache-2.0 |
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
"""
Explanation: Name
Data preparation using Apache Pig on YARN with Cloud Dataproc
Label
Cloud Dataproc, GCP, Cloud Storage, YARN, Pig, Apache, Kubeflow, pipelines, components
Summary
A Kubeflow Pipeline component to prepare data by submitting an Apache Pig job on YARN to Cloud Dataproc.
Details
Intended use
Use the component to run an Apache Pig job as one preprocessing step in a Kubeflow Pipeline.
Runtime arguments
| Argument | Description | Optional | Data type | Accepted values | Default |
|----------|-------------|----------|-----------|-----------------|---------|
| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | |
| region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | |
| cluster_name | The name of the cluster to run the job. | No | String | | |
| queries | The queries to execute the Pig job. Specify multiple queries in one string by separating them with semicolons. You do not need to terminate queries with semicolons. | Yes | List | | None |
| query_file_uri | The HCFS URI of the script that contains the Pig queries. | Yes | GCSPath | | None |
| script_variables | Mapping of the query’s variable names to their values (equivalent to the Pig command: SET name="value";). | Yes | Dict | | None |
| pig_job | The payload of a PigJob. | Yes | Dict | | None |
| job | The payload of a Dataproc job. | Yes | Dict | | None |
| wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 |
Output
Name | Description | Type
:--- | :---------- | :---
job_id | The ID of the created job. | String
Cautions & requirements
To use the component, you must:
* Set up a GCP project by following this guide.
* Create a new cluster.
* The component can authenticate to GCP. Refer to Authenticating Pipelines to GCP for details.
* Grant the Kubeflow user service account the role roles/dataproc.editor on the project.
Detailed description
This component creates a Pig job from Dataproc submit job REST API.
Follow these steps to use the component in a pipeline:
1. Install the Kubeflow Pipeline SDK:
End of explanation
"""
import kfp.components as comp
dataproc_submit_pig_job_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/dataproc/submit_pig_job/component.yaml')
help(dataproc_submit_pig_job_op)
"""
Explanation: Load the component using KFP SDK
End of explanation
"""
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
QUERY = '''
natality_csv = load 'gs://public-datasets/natality/csv' using PigStorage(':');
top_natality_csv = LIMIT natality_csv 10;
dump top_natality_csv;'''
EXPERIMENT_NAME = 'Dataproc - Submit Pig Job'
"""
Explanation: Sample
Note: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template.
Setup a Dataproc cluster
Create a new Dataproc cluster (or reuse an existing one) before running the sample code.
Prepare a Pig query
Either put your Pig queries in the queries list, or upload your Pig queries in a file to a Cloud Storage bucket and then enter the Cloud Storage path in query_file_uri. In this sample, we use a hard-coded query in the queries list that loads and dumps rows from the public natality dataset in Cloud Storage.
For more details on Apache Pig, see the Pig documentation.
Set sample parameters
End of explanation
"""
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc submit Pig job pipeline',
description='Dataproc submit Pig job pipeline'
)
def dataproc_submit_pig_job_pipeline(
project_id = PROJECT_ID,
region = REGION,
cluster_name = CLUSTER_NAME,
queries = json.dumps([QUERY]),
query_file_uri = '',
script_variables = '',
pig_job='',
job='',
wait_interval='30'
):
dataproc_submit_pig_job_op(
project_id=project_id,
region=region,
cluster_name=cluster_name,
queries=queries,
query_file_uri=query_file_uri,
script_variables=script_variables,
pig_job=pig_job,
job=job,
wait_interval=wait_interval)
"""
Explanation: Example pipeline that uses the component
End of explanation
"""
pipeline_func = dataproc_submit_pig_job_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
"""
Explanation: Compile the pipeline
End of explanation
"""
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
"""
Explanation: Submit the pipeline for execution
End of explanation
"""
|
googledatalab/notebooks | samples/Exploring Genomics Data.ipynb | apache-2.0 |
import google.datalab.bigquery as bq
"""
Explanation: Exploring Genomic Data
This notebook demonstrates working with genetic variant data stored as publicly accessible Google BigQuery datasets.
Specifically, we will work with the Illumina Platinum Genomes data. The source data was originally in VCF format, which was imported to Google Genomics and exported to BigQuery.
If you want to explore other genomics samples, see https://github.com/googlegenomics/datalab-examples. You can import them into your Google Cloud Datalab instance by uploading them from the notebook list page.
End of explanation
"""
variants = bq.Table('genomics-public-data.platinum_genomes.variants')
variants.length
"""
Explanation: Variants
Let's start with a little bit of genomics nomenclature:
Variant: A region of the genome that has been identified as differing from the reference genome.
Non-variant Segment: A region of the genome that matches the reference genome.
Reference Name: The name of a reference segment of DNA. Typically, this is a chromosome, but it may be other named regions from a reference genome.
Reference Bases: The A's, C's, T's, and G's comprising the DNA of the reference genome at the site of variation.
Alternate Bases: The A's, C's, T's, and G's comprising the DNA actually found at the site of variation for the sample(s) sequenced.
SNP: A single nucleotide polymorphism is a DNA sequence variation in which a single nucleotide — A, T, C or G — in the genome (or other shared sequence) differs between members of a biological species or paired chromosomes.
End of explanation
"""
%%bq query --name single_base
SELECT
reference_name,
start,
reference_bases,
alternate_bases
FROM
`genomics-public-data.platinum_genomes.variants`
WHERE
reference_bases IN ('A','C','G','T') AND
ARRAY_LENGTH(alternate_bases) = 1
LIMIT 100
%bq execute --query single_base
"""
Explanation: Wow, that's a lot of records! For a detailed description of the schema, see Understanding the BigQuery Variants Table Schema.
SNPs
Next, let's take a look at a few SNPs in our data. As mentioned in the nomenclature, SNPs are a particular kind of genetic variant. Let's create an SQL statement that can list all variant records for a single base change:
End of explanation
"""
%%bq query --name snps
SELECT
reference_name,
reference_bases,
alternate_bases,
CAST(FLOOR(start / 100000) AS INT64) AS windows,
CONCAT(reference_bases, CONCAT(CAST('->' AS STRING), ARRAY_TO_STRING(alternate_bases, ''))) AS mutation,
ARRAY_LENGTH(alternate_bases) AS num_alts
FROM
`genomics-public-data.platinum_genomes.variants`
WHERE reference_name IN ("1", "chr1")
# Limit to bi-allelic SNP variants
AND reference_bases IN ('A','C','G','T')
AND ARRAY_LENGTH(alternate_bases) = 1
"""
Explanation: Transition/Transversion Ratio
A SNP can be further classified as either a transition or a transversion. The ratio of transitions to transversions (TiTv ratio) in humans is observed to be approximately 2.1, but this is not uniform across the genome. Let's take a closer look by computing the TiTv ratio in contiguous regions of 100,000 base pairs.
End of explanation
"""
%bq sample -q snps --count 10
"""
Explanation: We've updated the above query to include the "window" in which the SNP resides, and added a new field called "mutation".
End of explanation
"""
%%bq query --name windows --subqueries snps
SELECT
reference_name,
windows AS win,
COUNTIF(mutation IN ('A->G', 'G->A', 'C->T', 'T->C')) AS transitions,
COUNTIF(mutation IN ('A->C', 'C->A', 'G->T', 'T->G',
'A->T', 'T->A', 'C->G', 'G->C')) AS transversions,
COUNT(mutation) AS num_variants_in_window
FROM snps
GROUP BY
reference_name,
win
%bq sample -q windows --count 10
"""
Explanation: Next we group and classify the SNPs within their windows.
End of explanation
"""
%%bq query --name titv --subqueries snps windows
SELECT
reference_name,
win * 100000 AS window_start,
transitions,
transversions,
transitions/transversions AS titv,
num_variants_in_window
FROM windows
ORDER BY
window_start
%bq sample -q titv --count 10
titvRatios = titv.execute(output_options=bq.QueryOutput.dataframe()).result()
titvRatios[:5]
"""
Explanation: And finally, we compute the per-window TiTv ratio.
End of explanation
"""
import seaborn as sns
import matplotlib.pyplot as plt
g = sns.lmplot(x='window_start',
y='titv',
data = titvRatios,
size = 8,
aspect = 2,
scatter_kws = { 'color': 'black', 'alpha': 0.4 },
line_kws = { 'color': 'blue', 'alpha': 0.5, 'lw': 4 },
lowess = True)
plt.xlabel('Genomic Position', fontsize = 14)
plt.ylabel('Ti/Tv Ratio', fontsize = 14)
plt.title('Ti/Tv Ratio computed on %d base pair windows for %s' % (100000, ['1', 'chr1']),
fontsize = 18)
plt.annotate('Centromere', xy = (1.3e8, 1.5), xytext = (1.5e8, 3), size = 14,
arrowprops=dict(facecolor='black', shrink=0.05))
"""
Explanation: Visualization
Now we can take the ratios and plot them by genomic position to see how the ratio varies depending upon where it was computed within the chromosome. By default, this plot shows chromosome 1 with its gap in the center of the data corresponding to its metacentric centromere.
End of explanation
"""
|
rvm-segfault/edx | python_for_data_sci_dse200x/week3/.ipynb_checkpoints/Intro Notebook-checkpoint.ipynb | apache-2.0 |
365 * 24 * 60 * 60
print(str(_/1e6) + ' million')
x = 4 + 3
print (x)
"""
Explanation: Number of seconds in a year
End of explanation
"""
%matplotlib inline
from matplotlib.pyplot import plot
plot([0,1,0,1])
"""
Explanation: This is a markdown cell
This is heading 2
This is heading 3
Hi!
One Fish
Two Fish
Red Fish
Blue Fish
Example Bold Text here
example italic text here
http://google.com
This is a Latex equation
$\int_0^\infty x^{-\alpha}$
End of explanation
"""
|
stevetjoa/stanford-mir | segmentation.ipynb | mit |
T = 3.0 # duration in seconds
sr = 22050 # sampling rate in Hertz
amplitude = numpy.logspace(-3, 0, int(T*sr), endpoint=False, base=10.0) # time-varying amplitude
print(amplitude.min(), amplitude.max()) # amplitude ramps from 0.001 to 1
"""
Explanation: ← Back to Index
Segmentation
In audio processing, it is common to operate on one frame at a time using a constant frame size and hop size (i.e. increment). Frames are typically chosen to be 10 to 100 ms in duration.
Let's create an audio signal consisting of a pure tone that gradually gets louder. Then, we will segment the signal and compute the root mean square (RMS) energy for each frame.
First, set our parameters:
End of explanation
"""
t = numpy.linspace(0, T, int(T*sr), endpoint=False)
x = amplitude*numpy.sin(2*numpy.pi*440*t)
"""
Explanation: Create the signal:
End of explanation
"""
ipd.Audio(x, rate=sr)
"""
Explanation: Listen to the signal:
End of explanation
"""
librosa.display.waveplot(x, sr=sr)
"""
Explanation: Plot the signal:
End of explanation
"""
frame_length = 1024
hop_length = 512
"""
Explanation: Segmentation Using Python List Comprehensions
In Python, you can use a standard list comprehension to perform segmentation of a signal and compute RMSE at the same time.
Initialize segmentation parameters:
End of explanation
"""
def rmse(x):
return numpy.sqrt(numpy.mean(x**2))
"""
Explanation: Define a helper function:
End of explanation
"""
plt.semilogy([rmse(x[i:i+frame_length])
for i in range(0, len(x), hop_length)])
"""
Explanation: Using a list comprehension, plot the RMSE for each frame on a log-y axis:
End of explanation
"""
frames = librosa.util.frame(x, frame_length=frame_length, hop_length=hop_length)
plt.semilogy([rmse(frame) for frame in frames.T])
"""
Explanation: librosa.util.frame
Given a signal, librosa.util.frame will produce a list of uniformly sized frames:
End of explanation
"""
|
tensorflow/agents | docs/tutorials/per_arm_bandits_tutorial.ipynb | apache-2.0 |
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2021 The TF-Agents Authors.
End of explanation
"""
!pip install tf-agents
"""
Explanation: A Tutorial on Multi-Armed Bandits with Per-Arm Features
Get Started
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/agents/tutorials/per_arm_bandits_tutorial">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/agents/blob/master/docs/tutorials/per_arm_bandits_tutorial.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/agents/blob/master/docs/tutorials/per_arm_bandits_tutorial.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/agents/docs/tutorials/per_arm_bandits_tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial is a step-by-step guide on how to use the TF-Agents library for contextual bandits problems where the actions (arms) have their own features, such as a list of movies represented by features (genre, year of release, ...).
Prerequisite
It is assumed that the reader is somewhat familiar with the Bandit library of TF-Agents, in particular, has worked through the tutorial for Bandits in TF-Agents before reading this tutorial.
Multi-Armed Bandits with Arm Features
In the "classic" Contextual Multi-Armed Bandits setting, an agent receives a context vector (aka observation) at every time step and has to choose from a finite set of numbered actions (arms) so as to maximize its cumulative reward.
Now consider the scenario where an agent recommends to a user the next movie to watch. Every time a decision has to be made, the agent receives as context some information about the user (watch history, genre preference, etc...), as well as the list of movies to choose from.
We could try to formulate this problem by having the user information as the context and the arms would be movie_1, movie_2, ..., movie_K, but this approach has multiple shortcomings:
The number of actions would have to be all the movies in the system and it is cumbersome to add a new movie.
The agent has to learn a model for every single movie.
Similarity between movies is not taken into account.
Instead of numbering the movies, we can do something more intuitive: we can represent movies with a set of features including genre, length, cast, rating, year, etc. The advantages of this approach are manifold:
Generalisation across movies.
The agent learns just one reward function that models reward with user and movie features.
Easy to remove from, or introduce new movies to the system.
In this new setting, the number of actions does not even have to be the same in every time step.
Per-Arm Bandits in TF-Agents
The TF-Agents Bandit suite is developed so that one can use it for the per-arm case as well. There are per-arm environments, and also most of the policies and agents can operate in per-arm mode.
Before we dive into coding an example, we need the necessary imports.
Installation
End of explanation
"""
import functools
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tf_agents.bandits.agents import lin_ucb_agent
from tf_agents.bandits.environments import stationary_stochastic_per_arm_py_environment as p_a_env
from tf_agents.bandits.metrics import tf_metrics as tf_bandit_metrics
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import tf_py_environment
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.specs import tensor_spec
from tf_agents.trajectories import time_step as ts
nest = tf.nest
"""
Explanation: Imports
End of explanation
"""
# The dimension of the global features.
GLOBAL_DIM = 40 #@param {type:"integer"}
# The elements of the global feature will be integers in [-GLOBAL_BOUND, GLOBAL_BOUND).
GLOBAL_BOUND = 10 #@param {type:"integer"}
# The dimension of the per-arm features.
PER_ARM_DIM = 50 #@param {type:"integer"}
# The elements of the PER-ARM feature will be integers in [-PER_ARM_BOUND, PER_ARM_BOUND).
PER_ARM_BOUND = 6 #@param {type:"integer"}
# The variance of the Gaussian distribution that generates the rewards.
VARIANCE = 100.0 #@param {type: "number"}
# The elements of the linear reward parameter will be integers in [-PARAM_BOUND, PARAM_BOUND).
PARAM_BOUND = 10 #@param {type: "integer"}
NUM_ACTIONS = 70 #@param {type:"integer"}
BATCH_SIZE = 20 #@param {type:"integer"}
# Parameter for linear reward function acting on the
# concatenation of global and per-arm features.
reward_param = list(np.random.randint(
-PARAM_BOUND, PARAM_BOUND, [GLOBAL_DIM + PER_ARM_DIM]))
"""
Explanation: Parameters -- Feel Free to Play Around
End of explanation
"""
def global_context_sampling_fn():
"""This function generates a single global observation vector."""
return np.random.randint(
-GLOBAL_BOUND, GLOBAL_BOUND, [GLOBAL_DIM]).astype(np.float32)
def per_arm_context_sampling_fn():
""""This function generates a single per-arm observation vector."""
return np.random.randint(
-PER_ARM_BOUND, PER_ARM_BOUND, [PER_ARM_DIM]).astype(np.float32)
def linear_normal_reward_fn(x):
"""This function generates a reward from the concatenated global and per-arm observations."""
mu = np.dot(x, reward_param)
return np.random.normal(mu, VARIANCE)
"""
Explanation: A Simple Per-Arm Environment
The stationary stochastic environment, explained in the other tutorial, has a per-arm counterpart.
To initialize the per-arm environment, one has to define functions that generate
* global and per-arm features: These functions have no input parameters and generate a single (global or per-arm) feature vector when called.
* rewards: This function takes as parameter the concatenation of a global and a per-arm feature vector, and generates a reward. Basically this is the function that the agent will have to "guess". It is worth noting here that in the per-arm case the reward function is identical for every arm. This is a fundamental difference from the classic bandit case, where the agent has to estimate reward functions for each arm independently.
End of explanation
"""
per_arm_py_env = p_a_env.StationaryStochasticPerArmPyEnvironment(
global_context_sampling_fn,
per_arm_context_sampling_fn,
NUM_ACTIONS,
linear_normal_reward_fn,
batch_size=BATCH_SIZE
)
per_arm_tf_env = tf_py_environment.TFPyEnvironment(per_arm_py_env)
"""
Explanation: Now we are equipped to initialize our environment.
End of explanation
"""
print('observation spec: ', per_arm_tf_env.observation_spec())
print('\nAn observation: ', per_arm_tf_env.reset().observation)
action = tf.zeros(BATCH_SIZE, dtype=tf.int32)
time_step = per_arm_tf_env.step(action)
print('\nRewards after taking an action: ', time_step.reward)
"""
Explanation: Below we can check what this environment produces.
End of explanation
"""
observation_spec = per_arm_tf_env.observation_spec()
time_step_spec = ts.time_step_spec(observation_spec)
action_spec = tensor_spec.BoundedTensorSpec(
dtype=tf.int32, shape=(), minimum=0, maximum=NUM_ACTIONS - 1)
agent = lin_ucb_agent.LinearUCBAgent(time_step_spec=time_step_spec,
action_spec=action_spec,
accepts_per_arm_features=True)
"""
Explanation: We see that the observation spec is a dictionary with two elements:
One with key 'global': this is the global context part, with shape matching the parameter GLOBAL_DIM.
One with key 'per_arm': this is the per-arm context, and its shape is [NUM_ACTIONS, PER_ARM_DIM]. This part is the placeholder for the arm features for every arm in a time step.
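As a quick sanity check, the structure of one batched observation can be sketched like this (using the parameter values defined above; the zero-filled arrays are just placeholders, not real environment output):

```python
import numpy as np

# Placeholder observation with the same structure the environment produces.
# The dimension constants match the parameters defined earlier in this tutorial.
GLOBAL_DIM, PER_ARM_DIM, NUM_ACTIONS, BATCH_SIZE = 40, 50, 70, 20
observation = {
    'global': np.zeros((BATCH_SIZE, GLOBAL_DIM), dtype=np.float32),
    'per_arm': np.zeros((BATCH_SIZE, NUM_ACTIONS, PER_ARM_DIM), dtype=np.float32),
}
```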
The LinUCB Agent
The LinUCB agent implements the identically named Bandit algorithm, which estimates the parameter of the linear reward function while also maintaining a confidence ellipsoid around the estimate. The agent chooses the arm that has the highest estimated expected reward, assuming that the parameter lies within the confidence ellipsoid.
Creating an agent requires knowledge of the observation and action specifications. When defining the agent, we set the boolean parameter accepts_per_arm_features to True.
End of explanation
"""
print('training data spec: ', agent.training_data_spec)
"""
Explanation: The Flow of Training Data
This section gives a sneak peek into the mechanics of how per-arm features go from the policy to training. Feel free to jump to the next section (Defining the Regret Metric) and come back here later if interested.
First, let us have a look at the data specification in the agent. The training_data_spec attribute of the agent specifies what elements and structure the training data should have.
End of explanation
"""
print('observation spec in training: ', agent.training_data_spec.observation)
"""
Explanation: If we take a closer look at the observation part of the spec, we see that it does not contain per-arm features!
End of explanation
"""
print('chosen arm features: ', agent.training_data_spec.policy_info.chosen_arm_features)
"""
Explanation: What happened to the per-arm features? To answer this question, first note that when the LinUCB agent trains, it does not need the per-arm features of all arms; it only needs those of the chosen arm. Hence, it makes sense to drop the tensor of shape [BATCH_SIZE, NUM_ACTIONS, PER_ARM_DIM], as it is very wasteful, especially if the number of actions is large.
But still, the per-arm features of the chosen arm must be somewhere! To this end, we make sure that the LinUCB policy stores the features of the chosen arm within the policy_info field of the training data:
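In plain NumPy, selecting the chosen arm's features out of the full per-arm tensor looks roughly like this (toy shapes and values, not the library's internals):

```python
import numpy as np

# Toy shapes: 4 batch elements, 5 arms, 3 features per arm.
BATCH_SIZE, NUM_ACTIONS, PER_ARM_DIM = 4, 5, 3
per_arm = np.arange(BATCH_SIZE * NUM_ACTIONS * PER_ARM_DIM, dtype=np.float32)
per_arm = per_arm.reshape(BATCH_SIZE, NUM_ACTIONS, PER_ARM_DIM)
chosen_actions = np.array([0, 2, 4, 1])

# Keep only the chosen arm's features: the [BATCH_SIZE, NUM_ACTIONS, PER_ARM_DIM]
# tensor shrinks to [BATCH_SIZE, PER_ARM_DIM].
chosen_arm_features = per_arm[np.arange(BATCH_SIZE), chosen_actions]
```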
End of explanation
"""
def _all_rewards(observation, hidden_param):
"""Outputs rewards for all actions, given an observation."""
hidden_param = tf.cast(hidden_param, dtype=tf.float32)
global_obs = observation['global']
per_arm_obs = observation['per_arm']
num_actions = tf.shape(per_arm_obs)[1]
tiled_global = tf.tile(
tf.expand_dims(global_obs, axis=1), [1, num_actions, 1])
concatenated = tf.concat([tiled_global, per_arm_obs], axis=-1)
rewards = tf.linalg.matvec(concatenated, hidden_param)
return rewards
def optimal_reward(observation):
"""Outputs the maximum expected reward for every element in the batch."""
return tf.reduce_max(_all_rewards(observation, reward_param), axis=1)
regret_metric = tf_bandit_metrics.RegretMetric(optimal_reward)
"""
Explanation: We see from the shape that the chosen_arm_features field contains the feature vector of only one arm: the chosen one. Note that the policy_info, and with it the chosen_arm_features, is part of the training data, as we saw from inspecting the training data spec, and thus it is available at training time.
Defining the Regret Metric
Before starting the training loop, we define some utility functions that help calculate the regret of our agent. These functions help determine the optimal expected reward given the set of actions (given by their arm features) and the linear parameter that is hidden from the agent.
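As a toy illustration of the metric (with made-up numbers), the regret on one step is the optimal expected reward minus the expected reward of the action actually chosen:

```python
import numpy as np

# Expected reward of each action, per batch element (made-up values).
expected_rewards = np.array([[1.0, 3.0, 2.0],
                             [0.5, 0.2, 0.9]])
chosen_actions = np.array([2, 0])

optimal = expected_rewards.max(axis=1)                     # best achievable reward
achieved = expected_rewards[np.arange(2), chosen_actions]  # reward of what we picked
regret = optimal - achieved                                # approximately [1.0, 0.4]
```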
End of explanation
"""
num_iterations = 20 # @param
steps_per_loop = 1 # @param
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=agent.policy.trajectory_spec,
batch_size=BATCH_SIZE,
max_length=steps_per_loop)
observers = [replay_buffer.add_batch, regret_metric]
driver = dynamic_step_driver.DynamicStepDriver(
env=per_arm_tf_env,
policy=agent.collect_policy,
num_steps=steps_per_loop * BATCH_SIZE,
observers=observers)
regret_values = []
for _ in range(num_iterations):
driver.run()
loss_info = agent.train(replay_buffer.gather_all())
replay_buffer.clear()
regret_values.append(regret_metric.result())
"""
Explanation: Now we are all set for starting our bandit training loop. The driver below takes care of choosing actions using the policy, storing rewards of chosen actions in the replay buffer, calculating the predefined regret metric, and executing the training step of the agent.
End of explanation
"""
plt.plot(regret_values)
plt.title('Regret of LinUCB on the Linear per-arm environment')
plt.xlabel('Number of Iterations')
_ = plt.ylabel('Average Regret')
"""
Explanation: Now let's see the result. If we did everything right, the agent is able to estimate the linear reward function well, and thus the policy can pick actions whose expected reward is close to that of the optimal. This is indicated by our above defined regret metric, which goes down and approaches zero.
End of explanation
"""
|
hosford42/xcs | doc/XCSTutorial.ipynb | bsd-3-clause | import logging
logging.root.setLevel(logging.INFO)
"""
Explanation: XCS Tutorial
This is the official tutorial for the xcs package for Python 3. You can find the latest release and get updates on the project's status at the project home page.
What is XCS?
XCS is a Python 3 implementation of the XCS algorithm as described in the 2001 paper, An Algorithmic Description of XCS, by
Martin Butz and Stewart Wilson. XCS is a type of Learning Classifier System (LCS), a machine learning algorithm that utilizes a genetic algorithm acting on a rule-based system, to solve a reinforcement learning problem.
In its canonical form, XCS accepts a fixed-width string of bits as its input, and attempts to select the best action from a predetermined list of choices using an evolving set of rules that match inputs and offer appropriate suggestions. It then receives a reward signal indicating the quality of its decision, which it uses to adjust the rule set that was used to make the decision. This process is subsequently repeated, allowing the algorithm to evaluate the changes it has already made and further refine the rule set.
A key feature of XCS is that, unlike many other machine learning algorithms, it not only learns the optimal input/output mapping, but also produces a minimal set of rules for describing that mapping. This is a big advantage over other learning algorithms such as neural networks whose models are largely opaque to human analysis, making XCS an important tool in any data scientist's tool belt.
The XCS library provides not only an implementation of the standard XCS algorithm, but a set of interfaces which together constitute a framework for implementing and experimenting with other LCS variants. Future plans for the XCS library include continued expansion of the tool set with additional algorithms, and refinement of the interface to support reinforcement learning algorithms in general.
Terminology
Being both a reinforcement learning algorithm and an evolutionary algorithm, XCS requires an understanding of terms pertaining to both.
Situation
A situation is just another term for an input received by the classifier.
Action
An action is an output produced by the classifier.
Scenario
A series of situations, each of which the algorithm must respond to in order with an appropriate action in order to maximize the total reward received for each action. A scenario may be dynamic, meaning that later training cycles can be affected by earlier actions, or static, meaning that each training cycle is independent of the actions that came before it.
Classifier Rule
A classifier rule, sometimes referred to as just a rule or a classifier, is a pairing between a condition, describing which situations can be matched, and a suggested action. Each classifier rule has an associated prediction indicating the expected reward if the suggested action is taken when the condition matches the situation, a fitness indicating its suitability for reproduction and continued use in the population, and a numerosity value which indicates the number of (virtual) instances of the rule in the population. (There are other parameters associated with each rule as well, but these are the most visible ones.)
Classifier Set
Also referred to as the population, this is the collection of all rules currently used and tracked by the classifier. The genetic algorithm operates on this set of rules over time to optimize them for accuracy and generality in their descriptiveness of the problem space. Note that the population is virtual, meaning that if the same rule has multiple copies in the population, it is represented only once, with an associated numerosity value to indicate the number of virtual instances of the rule in the population.
Match Set
The match set is the set of rules which match against the current situation.
Action Set
The action set is the set of rules which match against the current situation and recommend the selected action. Thus the action set is a subset of the match set. In fact, the match set can be seen as a collection of mutually exclusive and competing action sets, from which only one is to be selected.
Reward
The reward is a floating point value which acts as the signal the algorithm attempts to maximize. There are three types of reward that are commonly mentioned with respect to temporal difference learning algorithms. The immediate reward (aka raw reward) is the original, unaltered reward value returned by the scenario in response to each action. The expected future reward is the estimated payoff for later reward cycles, specifically excluding the current one; the prediction of the action set on the next reward cycle acts in this role in the canonical XCS algorithm. The payoff or combined reward is the combined sum of the immediate reward, plus the discounted expected future reward. (Discounted means the value is multiplied by a non-negative coefficient whose value is less than 1, which causes the algorithm to value immediate reward more highly than reward received later on.) The term reward, when used alone, is generally used to mean the immediate reward.
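The combined reward can be sketched as follows; the discount factor value used here is just an illustrative assumption, not a prescribed setting:

```python
def payoff(immediate_reward, expected_future_reward, discount_factor=0.71):
    """Combined reward: immediate reward plus discounted expected future reward.

    A discount factor of 0 ignores the future entirely; values near 1 weight
    future reward almost as heavily as immediate reward. The default of 0.71
    here is only an example value.
    """
    return immediate_reward + discount_factor * expected_future_reward
```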
Prediction
A prediction is an estimate by a classifier rule or an action set as to the payoff expected to be received by taking the suggested action in the given situation. The prediction of an action set is formed by taking the fitness-weighted average of the predictions made by the individual rules within it.
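The fitness-weighted average can be sketched like this (with illustrative values):

```python
def action_set_prediction(predictions, fitnesses):
    """Fitness-weighted average of the individual rules' predictions."""
    total_fitness = sum(fitnesses)
    return sum(p * f for p, f in zip(predictions, fitnesses)) / total_fitness

# A rule predicting 20 with triple the fitness pulls the average toward 20.
action_set_prediction([10.0, 20.0], [1.0, 3.0])  # 17.5
```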
Fitness
Fitness is another floating point value similar in function to the reward, except that in this case it is an internal signal defined by the algorithm itself, which is then used as a guide for selection of which rules are to act as parents to the next generation. Each rule in the population has its own associated fitness value. In XCS, as opposed to strength-based LCS variants such as ZCS, the fitness is actually based on the accuracy of each rule's reward prediction, as opposed to its size. Thus a rule with a very low expected reward can have a high fitness provided it is accurate in its prediction of low reward, whereas a rule with very high expected reward may have low fitness because the reward it receives varies widely from one reward cycle to the next. Using reward prediction accuracy instead of reward prediction size helps XCS find rules that describe the problem in a stable, predictable way.
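To make "accuracy-based" concrete, here is a sketch of the accuracy function used by XCS-style systems: a rule is fully accurate while its prediction error stays below a threshold, and its accuracy falls off as a power law above it. The threshold and decay parameters below are illustrative assumptions, not the library's defaults.

```python
def rule_accuracy(prediction_error, error_threshold=0.01, alpha=0.1, nu=5.0):
    """Accuracy of a rule given its average prediction error.

    Fitness is then derived from each rule's accuracy relative to the other
    rules appearing in the same action sets.
    """
    if prediction_error < error_threshold:
        return 1.0
    return alpha * (prediction_error / error_threshold) ** -nu
```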
Installation
To install xcs, you will of course need a Python 3 interpreter. The latest version of the standard CPython distribution is available for download from the Python Software Foundation, or if you prefer a download that comes with a long list of top-notch machine learning and scientific computing packages already built for you, I recommend Anaconda from Continuum Analytics.
Starting with Python 3.4, the standard CPython distribution includes the package installation tool, pip. Anaconda comes with pip regardless of the Python version. If you have pip, installation of xcs is straightforward:
pip install xcs
If all goes as planned, you should see a message like this:
Successfully installed xcs-1.0.0
If for some reason you are unable to use pip, you can still install xcs manually. The latest release can be found here or here. Download the zip file, unpack it, and cd into the directory. Then run:
python setup.py install
You should see a message indicating that the package was successfully installed.
Testing the Newly Installed Package
Let's start things off with a quick test, to verify that everything has been installed properly. First, fire up the Python interpreter. We'll set up Python's built-in logging system so we can see the test's progress.
End of explanation
"""
import xcs
xcs.test()
"""
Explanation: Then we import the xcs module and run the built-in test() function. By default, the test() function runs the canonical XCS algorithm on the 11-bit (3-bit address) MUX problem for 10,000 steps.
End of explanation
"""
from xcs import XCSAlgorithm
from xcs.scenarios import MUXProblem, ScenarioObserver
"""
Explanation: ```
INFO:xcs.scenarios:Possible actions:
INFO:xcs.scenarios: False
INFO:xcs.scenarios: True
INFO:xcs.scenarios:Steps completed: 0
INFO:xcs.scenarios:Average reward per step: 0.00000
INFO:xcs.scenarios:Steps completed: 100
INFO:xcs.scenarios:Average reward per step: 0.57000
INFO:xcs.scenarios:Steps completed: 200
INFO:xcs.scenarios:Average reward per step: 0.58500
.
.
.
001#0###### => False
Time Stamp: 9980
Average Reward: 1.0
Error: 0.0
Fitness: 0.8161150828153352
Experience: 236
Action Set Size: 25.03847865419106
Numerosity: 9
11#######11 => True
Time Stamp: 9994
Average Reward: 1.0
Error: 0.0
Fitness: 0.9749473121531844
Experience: 428
Action Set Size: 20.685392494947063
Numerosity: 11
INFO:xcs:Total time: 15.05068 seconds
```
Your results may vary somewhat from what is shown here. XCS relies on randomization to discover new rules, so unless you set the random seed with random.seed(), each run will be different.
Usage
Now we'll run through a quick demo of how to use existing algorithms and problems. This is essentially the same code that appears in the test() function we called above.
First, we're going to need to import a few things:
End of explanation
"""
scenario = ScenarioObserver(MUXProblem(50000))
"""
Explanation: The XCSAlgorithm class contains the actual XCS algorithm implementation. The ClassifierSet class is used to represent the algorithm's state, in the form of a set of classifier rules. MUXProblem is the classic multiplexer problem, which defaults to 3 address bits (11 bits total). ScenarioObserver is a wrapper for scenarios which logs the inputs, actions, and rewards as the algorithm attempts to solve the problem.
Now that we've imported the necessary tools, we can define the actual problem, telling it to give the algorithm 50,000 reward cycles to attempt to learn the appropriate input/output mapping, and wrapping it with an observer so we can see the algorithm's progress.
End of explanation
"""
algorithm = XCSAlgorithm()
"""
Explanation: Next, we'll create the algorithm which will be used to manage the classifier set and learn the mapping defined by the problem we have selected:
End of explanation
"""
algorithm.exploration_probability = .1
algorithm.discount_factor = 0
algorithm.do_ga_subsumption = True
algorithm.do_action_set_subsumption = True
"""
Explanation: The algorithm's parameters are set to appropriate defaults for most problems, but it is straightforward to modify them if it becomes necessary.
End of explanation
"""
model = algorithm.new_model(scenario)
"""
Explanation: Here we have selected an exploration probability of .1, which will sacrifice most (9 out of 10) learning opportunities in favor of taking advantage of what has already been learned so far. This makes sense in a real-time learning environment; a lower value is more appropriate in cases where the classifier is being trained in advance or is being used simply to learn a minimal rule set. The discount factor is set to 0, since future rewards are not affected at all by the currently selected action. (This is not strictly necessary, since the scenario will inform the algorithm that reward chaining should not be used, but it is useful to highlight this fact.) We have also elected to turn on GA and action set subsumption, which help the system converge to the minimal effective rule set more quickly in some types of scenarios.
Next, we create the classifier set:
End of explanation
"""
model.run(scenario, learn=True)
"""
Explanation: The algorithm does the work for us, initializing the classifier set as it deems appropriate for the scenario we have provided. It provides the classifier set with the possible actions that can be taken in the given scenario; these are necessary for the classifier set to perform covering operations when the algorithm determines that the classifiers in the population provide insufficient coverage for a particular situation. (Covering is the addition to the population of a randomly generated classifier rule whose condition matches the current situation.)
And finally, this is where all the magic happens:
End of explanation
"""
print(model)
"""
Explanation: We pass the scenario to the classifier set and ask it to run to learn the appropriate input/output mapping. It executes training cycles until the scenario dictates that training should stop. Note that if you wish to see the progress as the algorithm interacts with the scenario, you will need to set the logging level to INFO, as described in the previous section, before calling the run() method.
Now we can observe the fruits of our labors.
End of explanation
"""
print(len(model))
for rule in model:
if rule.fitness > .5 and rule.experience >= 10:
print(rule.condition, '=>', rule.action, ' [%.5f]' % rule.fitness)
"""
Explanation: ```
10001#10100 => True
Time Stamp: 41601
Average Reward: 1e-05
Error: 1e-05
Fitness: 1e-05
Experience: 0
Action Set Size: 1
Numerosity: 1
00#00100#00 => True
Time Stamp: 48589
Average Reward: 1e-05
Error: 1e-05
Fitness: 1e-05
Experience: 0
Action Set Size: 1
Numerosity: 1
.
.
.
1111######1 => True
Time Stamp: 49968
Average Reward: 1.0
Error: 0.0
Fitness: 0.9654542879926405
Experience: 131
Action Set Size: 27.598176294274904
Numerosity: 10
010##1##### => True
Time Stamp: 49962
Average Reward: 1.0
Error: 0.0
Fitness: 0.8516265524887351
Experience: 1257
Action Set Size: 27.21325456027306
Numerosity: 13
```
This gives us a printout of each classifier rule, in the form condition => action, followed by various stats about the rule pertaining to the algorithm we selected. The classifier set can also be accessed as an iterable container:
End of explanation
"""
from xcs.scenarios import Scenario
class HaystackProblem(Scenario):
pass
"""
Explanation: Defining New Scenario Types
To define a new scenario type, inherit from the Scenario abstract class defined in the xcs.scenarios submodule. Suppose, as an example, that we wish to test the algorithm's ability to find a single important input bit from among a large number of irrelevant input bits.
End of explanation
"""
from xcs.scenarios import Scenario
class HaystackProblem(Scenario):
def __init__(self, training_cycles=1000, input_size=500):
self.input_size = input_size
self.possible_actions = (True, False)
self.initial_training_cycles = training_cycles
self.remaining_cycles = training_cycles
"""
Explanation: We defined a new class, HaystackProblem, to represent this test case, which inherits from Scenario to ensure that we cannot instantiate the problem until the appropriate methods have been implemented.
Now let's define an __init__ method for this class. We'll need a parameter, training_cycles, to determine how many reward cycles the algorithm has to identify the "needle", and another parameter, input_size, to determine how big the "haystack" is.
End of explanation
"""
problem = HaystackProblem()
"""
Explanation: The input_size is saved as a member for later use. Likewise, the value of training_cycles was saved in two places: the remaining_cycles member, which tells the instance how many training cycles remain for the current run, and the initial_training_cycles member, which the instance will use to reset remaining_cycles to the original value for a new run.
We also defined the possible_actions member, which we set to (True, False). This is the value we will return when the algorithm asks for the possible actions. We will expect the algorithm to return True when the needle bit is set, and False when the needle bit is clear, in order to indicate that it has correctly identified the needle's location.
Now let's define some methods for the class. The Scenario base class defines several abstract methods, and one abstract property:
* is_dynamic is a property with a Boolean value that indicates whether the actions from one reward cycle can affect the rewards or situations of later reward cycles.
* get_possible_actions() is a method that should return the actions the algorithm can take.
* reset() should restart the problem for a new run.
* sense() should return a new input (the "situation").
* execute(action) should accept an action from among those returned by get_possible_actions(), in response to the last situation that was returned by sense(). It should then return a reward value indicating how well the algorithm is doing at responding correctly to each situation.
* more() should return a Boolean value to indicate whether the algorithm has remaining reward cycles in which to learn.
The abstract methods and the property must each be defined, or we will get a TypeError when we attempt to instantiate the class:
End of explanation
"""
from xcs.scenarios import Scenario
class HaystackProblem(Scenario):
def __init__(self, training_cycles=1000, input_size=500):
self.input_size = input_size
self.possible_actions = (True, False)
self.initial_training_cycles = training_cycles
self.remaining_cycles = training_cycles
@property
def is_dynamic(self):
return False
def get_possible_actions(self):
return self.possible_actions
def reset(self):
self.remaining_cycles = self.initial_training_cycles
def more(self):
return self.remaining_cycles > 0
"""
Explanation: The implementations for the property and the methods other than sense() and execute() will be trivial, so let's start with those:
End of explanation
"""
import random
from xcs.scenarios import Scenario
class HaystackProblem(Scenario):
def __init__(self, training_cycles=1000, input_size=500):
self.input_size = input_size
self.possible_actions = (True, False)
self.initial_training_cycles = training_cycles
self.remaining_cycles = training_cycles
self.needle_index = random.randrange(input_size)
@property
def is_dynamic(self):
return False
def get_possible_actions(self):
return self.possible_actions
def reset(self):
self.remaining_cycles = self.initial_training_cycles
self.needle_index = random.randrange(self.input_size)
def more(self):
return self.remaining_cycles > 0
"""
Explanation: Now we are going to get into the meat of the problem. We want to give the algorithm a random string of bits of size input_size and have it pick out the location of the needle bit through trial and error, by telling us what it thinks the value of the needle bit is. For this to be a useful test, the needle bit needs to be in a fixed location, which we have not yet defined. Let's choose a random bit from among the inputs on each run.
End of explanation
"""
import random
from xcs.scenarios import Scenario
from xcs.bitstrings import BitString
class HaystackProblem(Scenario):
def __init__(self, training_cycles=1000, input_size=500):
self.input_size = input_size
self.possible_actions = (True, False)
self.initial_training_cycles = training_cycles
self.remaining_cycles = training_cycles
self.needle_index = random.randrange(input_size)
self.needle_value = None
@property
def is_dynamic(self):
return False
def get_possible_actions(self):
return self.possible_actions
def reset(self):
self.remaining_cycles = self.initial_training_cycles
self.needle_index = random.randrange(self.input_size)
def more(self):
return self.remaining_cycles > 0
def sense(self):
haystack = BitString.random(self.input_size)
self.needle_value = haystack[self.needle_index]
return haystack
"""
Explanation: The sense() method is going to create a string of random bits of size input_size and return it. But first it will pick out the value of the needle bit, located at needle_index, and store it in a new member, needle_value, so that execute(action) will know what the correct value for action is.
End of explanation
"""
import random
from xcs.scenarios import Scenario
from xcs.bitstrings import BitString
class HaystackProblem(Scenario):
def __init__(self, training_cycles=1000, input_size=500):
self.input_size = input_size
self.possible_actions = (True, False)
self.initial_training_cycles = training_cycles
self.remaining_cycles = training_cycles
self.needle_index = random.randrange(input_size)
self.needle_value = None
@property
def is_dynamic(self):
return False
def get_possible_actions(self):
return self.possible_actions
def reset(self):
self.remaining_cycles = self.initial_training_cycles
self.needle_index = random.randrange(self.input_size)
def more(self):
return self.remaining_cycles > 0
def sense(self):
haystack = BitString.random(self.input_size)
self.needle_value = haystack[self.needle_index]
return haystack
def execute(self, action):
self.remaining_cycles -= 1
return action == self.needle_value
"""
Explanation: Now we need to define the execute(action) method. In order to give the algorithm appropriate feedback to make the problem solvable, we should return a high reward when it guesses the correct value for the needle bit, and a low one otherwise. Thus we will return 1 when the action matches the value of the needle bit, and 0 otherwise. We must also make sure to decrement the remaining cycles to prevent the problem from running indefinitely.
End of explanation
"""
import logging
import xcs
from xcs.scenarios import ScenarioObserver
# Setup logging so we can see the test run as it progresses.
logging.root.setLevel(logging.INFO)
# Create the scenario instance
problem = HaystackProblem()
# Wrap the scenario instance in an observer so progress gets logged,
# and pass it on to the test() function.
xcs.test(scenario=ScenarioObserver(problem))
"""
Explanation: We have now defined all of the methods that Scenario requires. Let's give it a test run.
End of explanation
"""
problem = HaystackProblem(training_cycles=10000, input_size=100)
xcs.test(scenario=ScenarioObserver(problem))
"""
Explanation: ```
INFO:xcs.scenarios:Possible actions:
INFO:xcs.scenarios: False
INFO:xcs.scenarios: True
INFO:xcs.scenarios:Steps completed: 0
INFO:xcs.scenarios:Average reward per step: 0.00000
INFO:xcs.scenarios:Steps completed: 100
INFO:xcs.scenarios:Average reward per step: 0.55000
.
.
.
INFO:xcs.scenarios:Steps completed: 900
INFO:xcs.scenarios:Average reward per step: 0.51667
INFO:xcs.scenarios:Steps completed: 1000
INFO:xcs.scenarios:Average reward per step: 0.50900
INFO:xcs.scenarios:Run completed.
INFO:xcs.scenarios:Total steps: 1000
INFO:xcs.scenarios:Total reward received: 509.00000
INFO:xcs.scenarios:Average reward per step: 0.50900
INFO:xcs:Classifiers:
010#11110##001###01#101001#00#1##100110##11#111#00#00#1#10#10#1110#100110#1#1100#10#111#1011100###1#1##1#0#1##011#1#0#0##1011010011#0#0101#00#01#0#0##01101##100#00010111##111010#100110##1101110##11#01110##1#0#110#000#010#1011##10#00#0#101011#000000##11#00#1#0110#0110100010##0100011#1#0###11#110#0###1##0100##1#11#1##101####111011#01#110101011001#110110#011111##1#0##1010#011000101001#10#10#0#00##1#110##1011100#1111##01#00#11#010001100#10####01###010001###1##1110#10####100#0#01#0#10##100####1110#00 => False
Time Stamp: 169
Average Reward: 0.0
Error: 0.0
Fitness: 0.15000850000000002
Experience: 1
Action Set Size: 1.0
Numerosity: 1
11##101#1###11101#0010####01#111##100011010###10##01#1100#010#11##01011#00##0#0#1001111#0#11011100010100101#1#1#01#0001000##101100###11#1#1111011110010#01010#101010###010##010##001#1#10#1001##0#1101111##0#0#0#1#11#01011000####111#1#1##10110##1###1#1#00#110##00000#11101110010###01#0#11#1###1#1#01#100110####11##0000#01#0#0011#01##10#100##00##010111##0#1#100#0##10#01000000001#00##1#11001#1011##1##1100011#1###01#####0#0111111#00#1101101##101#01#101#11##001#0000#1011#01#0#11#0#0#0##0#1010#0#01110110# => False
Time Stamp: 254
Average Reward: 0.0
Error: 0.0
Fitness: 0.15000850000000002
Experience: 1
Action Set Size: 1.0
Numerosity: 1
.
.
.
10010010010110#1#01###000100##0#0##0###01#1#1#100101#01#110#0##011#0100#0#1111001##01010##0#1#01011110#0#100110#00##1100##1011##1##0#0####111##111##000##01#001##110##10#01#0#1#00#110#100#10#1#0#1100#010#110##1011##1110#0#01#00#011#0001110#1110#0110111#0#101#01#101#00#0#1110100#1##0#101101#1###11#11###001100010###0#111101##1#111#111010#1##0011##00111000##11110#0#01#0#0#0#1#0#110000###00110##10001001011111#001101#11#111##01#0#1#10#1##000######0110##01#1#010#011#11#001##10111#1101#0#1001##011#10 => True
Time Stamp: 996
Average Reward: 1.0
Error: 0.0
Fitness: 0.15000850000000002
Experience: 1
Action Set Size: 1.0
Numerosity: 1
0101#0010100011#11##1100##001001###010#111001#####111001#1011#1100#1111#00101111#0#1011##1#1###00001011001#10##00###101##011111##1#00#1011001###10001###11####1##1#01#0#1#0#11100001110##11#001001#01#####0110#011011#0#111#1111##0#110111001#100#011111100110#11####0##01#100#11#1000#10#00#00#0#0#1##0100#100#11###01#1100##1###000##01#10#0#0001#0100#10#1#001#11####1001#110#0##11#0#0100#010##0#011100##11#0#11101#000000010#00101#0#0#11110#0010#1100#11#01#11##10#10#10#1100#1#00#0100#10#1##10#00011010100#0 => True
Time Stamp: 998
Average Reward: 1.0
Error: 0.0
Fitness: 0.15000850000000002
Experience: 1
Action Set Size: 1.0
Numerosity: 1
INFO:xcs:Total time: 2.65542 seconds
```
Hmm, the classifier set didn't do so hot. Maybe we've found a weakness in the algorithm, or maybe some different parameter settings will improve its performance. Let's reduce the size of the haystack and give it more reward cycles so we can see whether it's learning at all.
End of explanation
"""
problem = HaystackProblem(training_cycles=10000, input_size=500)
algorithm = xcs.XCSAlgorithm()
# Default parameter settings in test()
algorithm.exploration_probability = .1
# Modified parameter settings
algorithm.ga_threshold = 1
algorithm.crossover_probability = .5
algorithm.do_action_set_subsumption = True
algorithm.do_ga_subsumption = False
algorithm.wildcard_probability = .998
algorithm.deletion_threshold = 1
algorithm.mutation_probability = .002
xcs.test(algorithm, scenario=ScenarioObserver(problem))
"""
Explanation: ```
INFO:xcs.scenarios:Possible actions:
INFO:xcs.scenarios: False
INFO:xcs.scenarios: True
INFO:xcs.scenarios:Steps completed: 0
INFO:xcs.scenarios:Average reward per step: 0.00000
INFO:xcs.scenarios:Steps completed: 100
INFO:xcs.scenarios:Average reward per step: 0.47000
.
.
.
INFO:xcs.scenarios:Steps completed: 9900
INFO:xcs.scenarios:Average reward per step: 0.49222
INFO:xcs.scenarios:Steps completed: 10000
INFO:xcs.scenarios:Average reward per step: 0.49210
INFO:xcs.scenarios:Run completed.
INFO:xcs.scenarios:Total steps: 10000
INFO:xcs.scenarios:Total reward received: 4921.00000
INFO:xcs.scenarios:Average reward per step: 0.49210
INFO:xcs:Classifiers:
11#1001##0110000#101####001010##111111#1110#00#0100#11100#1###0110110####11#011##0#0#1###011#1#11001 => False
Time Stamp: 9771
Average Reward: 1.0
Error: 0.0
Fitness: 8.5e-07
Experience: 0
Action Set Size: 1
Numerosity: 1
00001100##1010#01111101001#0###0#10#10#11###10#1#0#0#11#11010111111#0#01#111#0#100#00#10000111##000 => False
Time Stamp: 8972
Average Reward: 0.0
Error: 0.0
Fitness: 0.15000850000000002
Experience: 1
Action Set Size: 1.0
Numerosity: 1
.
.
.
100#0010010###0#1001#1#0100##0#1##101#011#0#0101110#1111#11#000##0#1#0##001#1110##001011###1001##01# => True
Time Stamp: 9993
Average Reward: 1.0
Error: 0.0
Fitness: 0.15000850000000002
Experience: 1
Action Set Size: 1.0
Numerosity: 1
10#100##110##00#001##0#100100#00#1110##100##1#1##1111###00#0#1#1##00#010##00011#10#1#11##0#0#01100#0 => False
Time Stamp: 9997
Average Reward: 1.0
Error: 0.0
Fitness: 0.15000850000000002
Experience: 1
Action Set Size: 1.0
Numerosity: 1
INFO:xcs:Total time: 21.50882 seconds
```
It appears the algorithm isn't learning at all, at least not at a visible rate. But after a few rounds of playing with the parameter values, it becomes apparent that with the correct settings and sufficient training cycles, it is possible for the algorithm to handle the new scenario.
End of explanation
"""
|
mramanathan/pydiary_notes | howto_pickle.ipynb | gpl-3.0 | """
An example of storing the output to a file without using "pickle"
"""
testfile = 'nopickle.txt'
var1 = 1143
var2 = ["AECS", "LAYOUT", "KUNDALAHALLI"]
var3 = 58.30
var4 = ("Bangalore", 560037)
def ezhudhu():
with open(testfile, 'w+') as f:
f.write(str(var1))
f.write(str(var2))
f.write(str(var3))
f.write(str(var4))
f.close()
return None
def main():
ezhudhu()
if __name__ == '__main__':
main()
"""
Explanation: Howto with examples on 'pickle' module ==> 28/May/2016
End of explanation
"""
"""
Let us now read the contents of 'nopickle.txt' and check its type.
Does it retain the original types of the variables?
"""
with open(testfile, 'r') as f:
print f.readline()
print(type(f.readline()))
"""
An example of storing the output in a file using 'pickle'
"""
import pickle
testfile = 'pickle.txt'
var1 = 1143
var2 = ["AECS", "LAYOUT", "KUNDALAHALLI"]
var3 = 58.30
var4 = ("Bangalore", 560037)
def baree():
with open(testfile, 'w+') as f:
pickle.dump(var1, f)
pickle.dump(var2, f)
pickle.dump(var3, f)
pickle.dump(var4, f)
f.close()
return None
def main():
baree()
if __name__ == '__main__':
main()
"""
Explanation: This is the content of nopickle.txt
1143['AECS', 'LAYOUT', 'KUNDALAHALLI']58.3('Bangalore', 560037)
End of explanation
"""
"""
Let us now read the contents of 'pickle.txt'.
Do the variables retain their types?
"""
def pickout(fileobj):
print "what's this file object : "
print fileobj
print "it's type : "
print type(fileobj)
print "==" * 5 + "==" * 5
while True:
pickline = pickle.load(fileobj)
yield pickline
with open('pickle.txt', 'rb') as f:
for info in pickout(f):
print info,
print type(info)
"""
Explanation: This is the content of pickle.txt.
See the section 'DATA' in help(pickle) to understand
what the various markers mean, i.e. I, aS, p1, a.F
I1143
.(lp0
S'AECS'
p1
aS'LAYOUT'
p2
aS'KUNDALAHALLI'
p3
a.F58.3
.(S'Bangalore'
p0
I560037
tp1
End of explanation
"""
"""
Example on how to read contents of 'pickle.txt'.
End of file condition handled.
"""
def pickout(fileobj):
print "what's this file object : "
print fileobj
print "it's type : "
print type(fileobj)
print "==" * 5 + "==" * 5
try:
while True:
pickline = pickle.load(fileobj)
yield pickline
except EOFError:
pass
with open('pickle.txt', 'rb') as f:
for info in pickout(f):
print info,
print type(info)
"""
Explanation: Short notes on the above code output
We are passing the filehandle to the 'pickout' function.
Inside this function, we check the argument passed to it and its type.
[Stackoverflow thread on reading file contents (written using pickle) in a loop] (http://stackoverflow.com/questions/18675863/load-data-from-python-pickle-file-in-a-loop)
End of explanation
"""
|
GoogleCloudPlatform/tf-estimator-tutorials | 08_Text_Analysis/06 - Part_3 - Text Classification - Hacker News - Custom Estimator Word Embedding.ipynb | apache-2.0 | import os
class Params:
pass
# Set to run on GCP
Params.GCP_PROJECT_ID = 'ksalama-gcp-playground'
Params.REGION = 'europe-west1'
Params.BUCKET = 'ksalama-gcs-cloudml'
Params.PLATFORM = 'local' # local | GCP
Params.DATA_DIR = 'data/news' if Params.PLATFORM == 'local' else 'gs://{}/data/news'.format(Params.BUCKET)
Params.TRANSFORMED_DATA_DIR = os.path.join(Params.DATA_DIR, 'transformed')
Params.TRANSFORMED_TRAIN_DATA_FILE_PREFIX = os.path.join(Params.TRANSFORMED_DATA_DIR, 'train')
Params.TRANSFORMED_EVAL_DATA_FILE_PREFIX = os.path.join(Params.TRANSFORMED_DATA_DIR, 'eval')
Params.TEMP_DIR = os.path.join(Params.DATA_DIR, 'tmp')
Params.MODELS_DIR = 'models/news' if Params.PLATFORM == 'local' else 'gs://{}/models/news'.format(Params.BUCKET)
Params.TRANSFORM_ARTEFACTS_DIR = os.path.join(Params.MODELS_DIR,'transform')
Params.TRAIN = True
Params.RESUME_TRAINING = False
Params.EAGER = False
if Params.EAGER:
tf.enable_eager_execution()
"""
Explanation: Text Classification using TensorFlow and Google Cloud - Part 3
The bigquery-public-data:hacker_news dataset contains all stories and comments from Hacker News since its launch in 2006. Each story contains a story id, url, the title of the story, the author who made the post, when it was written, and the number of points the story received.
The objective is, given the title of a story, to build an ML model that can predict the source of the story.
TF Custom Estimator Word Embedding for Text Classification
This notebook illustrates how to build a TF Custom Estimator for Text Classification. The model will make use of the 'bow' feature in the transformed dataset as the input layer. The 'bow' feature is a sparse vector of integers representing the indices of the words in the text (title). The model will also make use of the "vocabulary" file produced during the tf.transform pipeline as a lookup for word index.
Define the metadata
Define data input function
Create custom estimator with model_fn
Setup experiment
Hyper-parameters & RunConfig
Serving function (for exported model)
TrainSpec & EvalSpec
Run experiment
Evaluate the model
Use SavedModel for prediction
Setting Global Parameters
End of explanation
"""
import tensorflow as tf
from tensorflow import data
from tensorflow.contrib.learn.python.learn.utils import input_fn_utils
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
from tensorflow_transform.tf_metadata import metadata_io
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.saved import saved_transform_io
print tf.__version__
"""
Explanation: Importing libraries
End of explanation
"""
RAW_HEADER = 'key,title,source'.split(',')
RAW_DEFAULTS = [['NA'],['NA'],['NA']]
TARGET_FEATURE_NAME = 'source'
TARGET_LABELS = ['github', 'nytimes', 'techcrunch']
TEXT_FEATURE_NAME = 'title'
KEY_COLUMN = 'key'
VOCAB_SIZE = 20000
TRAIN_SIZE = 73124
EVAL_SIZE = 23079
PAD_VALUE = VOCAB_SIZE + 1
VOCAB_LIST_FILE = os.path.join(Params.TRANSFORM_ARTEFACTS_DIR, 'transform_fn/assets/vocab_string_to_int_uniques')
MAX_WORDS_PER_TITLE = 10
raw_metadata = dataset_metadata.DatasetMetadata(dataset_schema.Schema({
KEY_COLUMN: dataset_schema.ColumnSchema(
tf.string, [], dataset_schema.FixedColumnRepresentation()),
TEXT_FEATURE_NAME: dataset_schema.ColumnSchema(
tf.string, [], dataset_schema.FixedColumnRepresentation()),
TARGET_FEATURE_NAME: dataset_schema.ColumnSchema(
tf.string, [], dataset_schema.FixedColumnRepresentation()),
}))
transformed_metadata = metadata_io.read_metadata(
os.path.join(Params.TRANSFORM_ARTEFACTS_DIR,"transformed_metadata"))
raw_feature_spec = raw_metadata.schema.as_feature_spec()
transformed_feature_spec = transformed_metadata.schema.as_feature_spec()
print transformed_feature_spec
"""
Explanation: 1. Define Metadata
End of explanation
"""
def parse_tf_example(tf_example):
parsed_features = tf.parse_single_example(serialized=tf_example, features=transformed_feature_spec)
target = parsed_features.pop(TARGET_FEATURE_NAME)
return parsed_features, target
def generate_tfrecords_input_fn(files_pattern,
mode=tf.estimator.ModeKeys.EVAL,
num_epochs=1,
batch_size=200):
def _input_fn():
file_names = data.Dataset.list_files(files_pattern)
if Params.EAGER:
print file_names
dataset = data.TFRecordDataset(file_names )
dataset = dataset.apply(
tf.contrib.data.shuffle_and_repeat(count=num_epochs,
buffer_size=batch_size*2)
)
dataset = dataset.apply(
tf.contrib.data.map_and_batch(parse_tf_example,
batch_size=batch_size,
num_parallel_batches=2)
)
dataset = dataset.prefetch(batch_size)
if Params.EAGER:
return dataset
iterator = dataset.make_one_shot_iterator()
features, target = iterator.get_next()
return features, target
return _input_fn
"""
Explanation: 2. Define Input Function
End of explanation
"""
def _bow_to_vector(sparse_bow_indecies):
# Convert sparse tensor to dense tensor by padding each entry to match the longest in the batch
bow_indecies = tf.sparse_tensor_to_dense(sparse_bow_indecies, default_value=PAD_VALUE)
# Create a word_ids padding
padding = tf.constant([[0,0],[0, MAX_WORDS_PER_TITLE]])
# Pad all the word_ids entries to the maximum document length
bow_indecies_padded = tf.pad(bow_indecies, padding)
bow_vector = tf.slice(bow_indecies_padded, [0,0], [-1, MAX_WORDS_PER_TITLE])
# Return the final bow_vector
return bow_vector
def _shallow_layers(features, params):
# word_id_vector
bow_vector = _bow_to_vector(features['bow'])
# layer to take each word_id and convert it into vector (embeddings)
word_embeddings = tf.contrib.layers.embed_sequence(bow_vector,
vocab_size=VOCAB_SIZE+2,
embed_dim=params.embedding_size)
### CNN Model ############################################################################
if params.model_type == 'CNN':
words_conv = tf.layers.conv1d(word_embeddings,
filters=params.filters,
kernel_size=params.window_size,
strides=int(params.window_size/2),
padding='SAME', activation=tf.nn.relu)
words_conv_shape = words_conv.get_shape()
dim = words_conv_shape[1] * words_conv_shape[2]
shallow_layer = tf.reshape(words_conv,[-1, dim])
### LSTM Model ############################################################################
elif params.model_type == 'LSTM':
rnn_layers = [tf.nn.rnn_cell.LSTMCell(
num_units=size,
forget_bias=params.forget_bias,
activation=tf.nn.tanh) for size in params.hidden_units]
# create a RNN cell composed sequentially of a number of RNNCells
multi_rnn_cell = tf.nn.rnn_cell.MultiRNNCell(rnn_layers)
# unstack the embedded sequence into a per-timestep list, run the RNN,
# and use the last output as the title representation
word_list = tf.unstack(word_embeddings, axis=1)
outputs, _ = tf.nn.static_rnn(multi_rnn_cell, word_list, dtype=tf.float32)
shallow_layer = outputs[-1]
### MAX MIN Embedding Model ############################################################################
else:
# Repesent the doc embedding as the concatenation of MIN and MAX of the word embeddings
doc_embeddings_min = tf.reduce_min(word_embeddings, axis=1)
doc_embeddings_max = tf.reduce_max(word_embeddings, axis=1)
shallow_layer = tf.concat([doc_embeddings_min, doc_embeddings_max], axis=1)
return shallow_layer
def _fully_connected_layers(inputs, params):
hidden_layers = inputs
if params.hidden_units is not None:
# Create a fully-connected layer-stack based on the hidden_units in the params
hidden_layers = tf.contrib.layers.stack(
inputs=inputs,
layer=tf.contrib.layers.fully_connected,
stack_args= params.hidden_units,
activation_fn=tf.nn.relu)
return hidden_layers
def model_fn(features, labels, mode, params):
# Create the input layers via CNN, LSTM, or MAX+MIN embeddings
shallow_layers_output = _shallow_layers(features, params)
# Create FCN using hidden units
hidden_layers = _fully_connected_layers(shallow_layers_output, params)
# Number of classes
output_layer_size = len(TARGET_LABELS)
# Connect the output layer (logits) to the hidden layer (no activation fn)
logits = tf.layers.dense(inputs=hidden_layers,
units=output_layer_size,
activation=None)
head = tf.contrib.estimator.multi_class_head(
n_classes=len(TARGET_LABELS),
label_vocabulary=TARGET_LABELS,
name='classification_head'
)
def _train_op_fn(loss):
# Create Optimiser
optimizer = tf.train.AdamOptimizer(
learning_rate=params.learning_rate)
# Create training operation
train_op = optimizer.minimize(
loss=loss, global_step=tf.train.get_global_step())
return train_op
return head.create_estimator_spec(
features,
mode,
logits,
labels=labels,
train_op_fn=_train_op_fn
)
"""
Explanation: 3. Create Custom Estimator using Model Function
3.1 Define model_fn
End of explanation
"""
def create_estimator(hparams, run_config):
estimator = tf.estimator.Estimator(model_fn=model_fn,
params=hparams,
config=run_config)
return estimator
"""
Explanation: 3.2 Create Custom Estimator using model_fn
End of explanation
"""
NUM_EPOCHS = 10
BATCH_SIZE = 1000
TOTAL_STEPS = (TRAIN_SIZE/BATCH_SIZE)*NUM_EPOCHS
EVAL_EVERY_SEC = 60
hparams = tf.contrib.training.HParams(
num_epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
embedding_size = 50, # word embedding vector size
learning_rate=0.01,
hidden_units=[64, 32],
max_steps=TOTAL_STEPS,
model_type='MAXMIN_EMBEDDING', # CNN | LSTM | MAXMIN_EMBEDDING
#CNN Params
window_size = 3,
filters = 2,
#LSTM Params
forget_bias=1.0,
keep_prob = 0.8,
)
MODEL_NAME = 'dnn_estimator_custom'
model_dir = os.path.join(Params.MODELS_DIR, MODEL_NAME)
run_config = tf.estimator.RunConfig(
tf_random_seed=19830610,
log_step_count_steps=1000,
save_checkpoints_secs=EVAL_EVERY_SEC,
keep_checkpoint_max=1,
model_dir=model_dir
)
print(hparams)
print("")
print("Model Directory:", run_config.model_dir)
print("Dataset Size:", TRAIN_SIZE)
print("Batch Size:", BATCH_SIZE)
print("Steps per Epoch:",TRAIN_SIZE/BATCH_SIZE)
print("Total Steps:", TOTAL_STEPS)
"""
Explanation: 4. Setup Experiment
4.1 HParams and RunConfig
End of explanation
"""
def generate_serving_input_fn():
def _serving_fn():
receiver_tensor = {
'title': tf.placeholder(dtype=tf.string, shape=[None])
}
_, transformed_features = (
saved_transform_io.partially_apply_saved_transform(
os.path.join(Params.TRANSFORM_ARTEFACTS_DIR, transform_fn_io.TRANSFORM_FN_DIR),
receiver_tensor)
)
return tf.estimator.export.ServingInputReceiver(
transformed_features, receiver_tensor)
return _serving_fn
"""
Explanation: 4.2 Serving function
End of explanation
"""
train_spec = tf.estimator.TrainSpec(
input_fn = generate_tfrecords_input_fn(
Params.TRANSFORMED_TRAIN_DATA_FILE_PREFIX+"*",
mode = tf.estimator.ModeKeys.TRAIN,
num_epochs=hparams.num_epochs,
batch_size=hparams.batch_size
),
max_steps=hparams.max_steps,
hooks=None
)
eval_spec = tf.estimator.EvalSpec(
input_fn = generate_tfrecords_input_fn(
Params.TRANSFORMED_EVAL_DATA_FILE_PREFIX+"*",
mode=tf.estimator.ModeKeys.EVAL,
num_epochs=1,
batch_size=hparams.batch_size
),
exporters=[tf.estimator.LatestExporter(
name="estimate", # the name of the folder in which the model will be exported to under export
serving_input_receiver_fn=generate_serving_input_fn(),
exports_to_keep=1,
as_text=False)],
steps=None,
throttle_secs=EVAL_EVERY_SEC
)
"""
Explanation: 4.3 TrainSpec & EvalSpec
End of explanation
"""
from datetime import datetime
import shutil
if Params.TRAIN:
if not Params.RESUME_TRAINING:
print("Removing previous training artefacts...")
shutil.rmtree(model_dir, ignore_errors=True)
else:
print("Resuming training...")
tf.logging.set_verbosity(tf.logging.INFO)
time_start = datetime.utcnow()
print("Experiment started at {}".format(time_start.strftime("%H:%M:%S")))
print(".......................................")
estimator = create_estimator(hparams, run_config)
tf.estimator.train_and_evaluate(
estimator=estimator,
train_spec=train_spec,
eval_spec=eval_spec
)
time_end = datetime.utcnow()
print(".......................................")
print("Experiment finished at {}".format(time_end.strftime("%H:%M:%S")))
print("")
time_elapsed = time_end - time_start
print("Experiment elapsed time: {} seconds".format(time_elapsed.total_seconds()))
else:
print "Training was skipped!"
"""
Explanation: 5. Run experiment
End of explanation
"""
tf.logging.set_verbosity(tf.logging.ERROR)
estimator = create_estimator(hparams, run_config)
train_metrics = estimator.evaluate(
input_fn = generate_tfrecords_input_fn(
files_pattern= Params.TRANSFORMED_TRAIN_DATA_FILE_PREFIX+"*",
mode= tf.estimator.ModeKeys.EVAL,
batch_size= TRAIN_SIZE),
steps=1
)
print("############################################################################################")
print("# Train Measures: {}".format(train_metrics))
print("############################################################################################")
eval_metrics = estimator.evaluate(
input_fn=generate_tfrecords_input_fn(
files_pattern= Params.TRANSFORMED_EVAL_DATA_FILE_PREFIX+"*",
mode= tf.estimator.ModeKeys.EVAL,
batch_size= EVAL_SIZE),
steps=1
)
print("")
print("############################################################################################")
print("# Eval Measures: {}".format(eval_metrics))
print("############################################################################################")
"""
Explanation: 7. Evaluate the model
End of explanation
"""
import os
export_dir = model_dir +"/export/estimate/"
saved_model_dir = os.path.join(export_dir, os.listdir(export_dir)[0])
print(saved_model_dir)
print("")
predictor_fn = tf.contrib.predictor.from_saved_model(
export_dir = saved_model_dir,
signature_def_key="predict"
)
output = predictor_fn(
{
'title':[
'Microsoft and Google are joining forces for a new AI framework',
'A new version of Python is mind blowing',
'EU is investigating new data privacy policies'
]
}
)
print(output)
"""
Explanation: 8. Use Saved Model for Predictions
End of explanation
"""
|
LucaCanali/Miscellaneous | Spark_Notes/Spark_Histograms/Spark_SQL_Frequency_Histograms.ipynb | apache-2.0 | # Start the Spark Session
# This uses local mode for simplicity
# the use of findspark is optional
# install pyspark if needed
# ! pip install pyspark
# import findspark
# findspark.init("/home/luca/Spark/spark-3.3.0-bin-hadoop3")
from pyspark.sql import SparkSession
spark = (SparkSession.builder
.appName("PySpark histograms")
.master("local[*]")
.getOrCreate()
)
"""
Explanation: How to generate histograms using Apache Spark SQL
This provides an example of how to generate frequency histograms using Spark SQL.
See also sparkhistogram for a DataFrame API version of this, and the package https://pypi.org/project/sparkhistogram/
Disambiguation: we refer here to computing histograms of the DataFrame data, rather than histograms of the column statistics used by the cost-based optimizer.
End of explanation
"""
# Generate a DataFrame with some data for demo purposes and map it to a temporary view
num_events = 100
scale = 100
seed = 4242
df = spark.sql(f"select random({seed}) * {scale} as random_value from range({num_events})")
# map the df DataFrame to the t1 temporary view so it can be used with Spark SQL
df.createOrReplaceTempView("data")
spark.table("data").show(5)
"""
Explanation: Generate data for demo purposes and map it to a Spark table (temporary view)
End of explanation
"""
# This is a Spark SQL stattement to compute the count histogram of a given column
# Input parameters
table_name = "data" # table or temporary view containing the data
value_col = "random_value" # column name on which to compute the histogram
min = -20 # min: minimum value in the histogram
max = 90 # maximum value in the histogram
bins = 11 # number of histogram buckets to compute
step = (max - min) / bins
histogram = spark.sql(f"""
with hist as (
select
width_bucket({value_col}, {min}, {max}, {bins}) as bucket,
count(*) as cnt
from {table_name}
group by bucket
),
buckets as (
select id+1 as bucket from range({bins})
)
select
bucket, {min} + (bucket - 0.5) * {step} as value,
nvl(cnt, 0) as count
from hist right outer join buckets using(bucket)
order by bucket
""")
"""
Explanation: Compute the histogram using Spark SQL
End of explanation
"""
# Output DataFrame description
# ----------------------------
# bucket: the bucket number, range from 1 to bins (included)
# value: midpoint value of the given bucket
# count: number of values in the bucket
# this triggers the computation as show() is an action
histogram.show()
# Fetch the histogram data into a Pandas DataFrame for visualization
# At this stage data is reduced to a small number of rows (one row per bin)
# so it can be easily handled by one machine
# toPandas() is an action and triggers the computation
hist_pandasDF = histogram.toPandas()
hist_pandasDF
# Optionally normalize the event count into a frequency
# dividing by the total number of events
hist_pandasDF["frequency"] = hist_pandasDF["count"] / sum(hist_pandasDF["count"])
hist_pandasDF
"""
Explanation: Fetch the histogram data into the driver
The show and toPandas actions trigger the execution and fetch the histogram values
End of explanation
"""
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})
f, ax = plt.subplots()
# histogram data
x = hist_pandasDF["value"]
y = hist_pandasDF["count"]
# bar plot
ax.bar(x, y, width = 3.0, color='red')
ax.set_xlabel("Bucket values")
ax.set_ylabel("Event count")
ax.set_title("Distribution of event counts")
plt.show()
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})
f, ax = plt.subplots()
# histogram data
x = hist_pandasDF["value"]
y = hist_pandasDF["frequency"]
# bar plot
ax.bar(x, y, width = 3.0, color='blue')
ax.set_xlabel("Bucket values")
ax.set_ylabel("Event frequency")
ax.set_title("Distribution of event frequencies")
plt.show()
spark.stop()
"""
Explanation: Histogram plotting
The first plot is a histogram with the event counts (number of events per bin).
The second plot is a histogram of the events frequencies (number of events per bin normalized by the sum of the events).
End of explanation
"""
|
Illumina/interop | docs/src/Tutorial_04_Indexing_Metrics.ipynb | gpl-3.0 | run_folder = ""
"""
Explanation: Using the Illumina InterOp Library in Python: Part 4
Install
If you do not have the Python InterOp library installed, then you can do the following:
$ pip install -f https://github.com/Illumina/interop/releases/latest interop
You can verify that InterOp is properly installed:
$ python -m interop --test
Before you begin
If you plan to use this tutorial in an interactive fashion, then you should get an example run folder that contains IndexMetricsOut.bin.
Please change the path below so that it points at the run folder you wish to use:
End of explanation
"""
from interop import py_interop_run_metrics, py_interop_run, py_interop_summary
run_metrics = py_interop_run_metrics.run_metrics()
"""
Explanation: Getting SAV Indexing Tab-like Metrics
The run_metrics class encapsulates the model for all the individual InterOp files as well as containing information
from the RunInfo.xml. The Modules page contains a subset of the applications programmer's interface
for all the major classes in C++. The available Python models all have the same names (with a few exceptions) and take
the same parameters. This page is useful for accessing specific values loaded from the individual files.
End of explanation
"""
valid_to_load = py_interop_run.uchar_vector(py_interop_run.MetricCount, 0)
py_interop_run_metrics.list_index_metrics_to_load(valid_to_load)
"""
Explanation: By default, the run_metrics class loads all the InterOp files.
run_folder = run_metrics.read(run_folder)
The InterOp library can provide a list of all necessary InterOp files for a specific application. The following shows how to generate that list for the index summary statistics:
End of explanation
"""
run_folder = run_metrics.read(run_folder, valid_to_load)
"""
Explanation: The run_metrics class can use this list to load only the required InterOp files as follows:
End of explanation
"""
summary = py_interop_summary.index_flowcell_summary()
"""
Explanation: The index_flowcell_summary class encapsulates all the metrics displayed on the SAV Indexing tab. This class contains a tree-like
structure where metrics describing the run summary are at the root, there is a branch for each lane summary, and
a sub branch for each count summary.
End of explanation
"""
py_interop_summary.summarize_index_metrics(run_metrics, summary)
"""
Explanation: The index_flowcell_summary object can be populated from the run_metrics object as follows:
End of explanation
"""
import pandas as pd
columns = ( ('Index Number', 'id'), ('Sample Id', 'sample_id'), ('Project', 'project_name'), ('Index 1 (I7)', 'index1'), ('Index 2 (I5)', 'index2'), ('% Reads Identified (PF)', 'fraction_mapped'))
lane_summary = summary.at(0)
d = []
for label, func in columns:
d.append( (label, pd.Series([getattr(lane_summary.at(i), func)() for i in range(lane_summary.size())], index=[lane_summary.at(i).id() for i in range(lane_summary.size())])))
df = pd.DataFrame.from_items(d)
df
"""
Explanation: Index Lane Summary
The index flowcell summary composes index lane summaries. An index lane summary contains information summarizing the entire lane as well as child index count summaries that describe a single sample.
Below, we use pandas to display the index count summary portion of the SAV Indexing Tab:
End of explanation
"""
print "\n".join([method for method in dir(lane_summary) if not method.startswith('_') and method not in ("set", "push_back", "reserve", "this", "resize", "clear", "sort")])
"""
Explanation: You can also view the list of available metrics in the summary as follows:
End of explanation
"""
|
tombstone/models | research/object_detection/colab_tutorials/eager_few_shot_od_training_tf2_colab.ipynb | apache-2.0 | !pip install -U --pre tensorflow=="2.2.0"
import os
import pathlib
# Clone the tensorflow models repository if it doesn't already exist
if "models" in pathlib.Path.cwd().parts:
while "models" in pathlib.Path.cwd().parts:
os.chdir('..')
elif not pathlib.Path('models').exists():
!git clone --depth 1 https://github.com/tensorflow/models
# Install the Object Detection API
%%bash
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
cp object_detection/packages/tf2/setup.py .
python -m pip install .
import matplotlib
import matplotlib.pyplot as plt
import os
import random
import io
import imageio
import glob
import scipy.misc
import numpy as np
from six import BytesIO
from PIL import Image, ImageDraw, ImageFont
from IPython.display import display, Javascript
from IPython.display import Image as IPyImage
import tensorflow as tf
from object_detection.utils import label_map_util
from object_detection.utils import config_util
from object_detection.utils import visualization_utils as viz_utils
from object_detection.utils import colab_utils
from object_detection.builders import model_builder
%matplotlib inline
"""
Explanation: Eager Few Shot Object Detection Colab
Welcome to the Eager Few Shot Object Detection Colab --- in this colab we demonstrate fine tuning of a (TF2 friendly) RetinaNet architecture on very few examples of a novel class after initializing from a pre-trained COCO checkpoint.
Training runs in eager mode.
Estimated time to run through this colab (with GPU): < 5 minutes.
Imports
End of explanation
"""
def load_image_into_numpy_array(path):
"""Load an image from file into a numpy array.
Puts image into numpy array to feed into tensorflow graph.
Note that by convention we put it into a numpy array with shape
(height, width, channels), where channels=3 for RGB.
Args:
path: a file path.
Returns:
uint8 numpy array with shape (img_height, img_width, 3)
"""
img_data = tf.io.gfile.GFile(path, 'rb').read()
image = Image.open(BytesIO(img_data))
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(im_height, im_width, 3)).astype(np.uint8)
def plot_detections(image_np,
boxes,
classes,
scores,
category_index,
figsize=(12, 16),
image_name=None):
"""Wrapper function to visualize detections.
Args:
image_np: uint8 numpy array with shape (img_height, img_width, 3)
boxes: a numpy array of shape [N, 4]
classes: a numpy array of shape [N]. Note that class indices are 1-based,
and match the keys in the label map.
scores: a numpy array of shape [N] or None. If scores=None, then
this function assumes that the boxes to be plotted are groundtruth
boxes and plot all boxes as black with no classes or scores.
category_index: a dict containing category dictionaries (each holding
category index `id` and category name `name`) keyed by category indices.
figsize: size for the figure.
image_name: a name for the image file.
"""
image_np_with_annotations = image_np.copy()
viz_utils.visualize_boxes_and_labels_on_image_array(
image_np_with_annotations,
boxes,
classes,
scores,
category_index,
use_normalized_coordinates=True,
min_score_thresh=0.8)
if image_name:
plt.imsave(image_name, image_np_with_annotations)
else:
plt.imshow(image_np_with_annotations)
"""
Explanation: Utilities
End of explanation
"""
# Load images and visualize
train_image_dir = 'models/research/object_detection/test_images/ducky/train/'
train_images_np = []
for i in range(1, 6):
image_path = os.path.join(train_image_dir, 'robertducky' + str(i) + '.jpg')
train_images_np.append(load_image_into_numpy_array(image_path))
plt.rcParams['axes.grid'] = False
plt.rcParams['xtick.labelsize'] = False
plt.rcParams['ytick.labelsize'] = False
plt.rcParams['xtick.top'] = False
plt.rcParams['xtick.bottom'] = False
plt.rcParams['ytick.left'] = False
plt.rcParams['ytick.right'] = False
plt.rcParams['figure.figsize'] = [14, 7]
for idx, train_image_np in enumerate(train_images_np):
plt.subplot(2, 3, idx+1)
plt.imshow(train_image_np)
plt.show()
"""
Explanation: Rubber Ducky data
We will start with some toy (literally) data consisting of 5 images of a rubber
ducky. Note that the coco dataset contains a number of animals, but notably, it does not contain rubber duckies (or even ducks for that matter), so this is a novel class.
End of explanation
"""
gt_boxes = []
colab_utils.annotate(train_images_np, box_storage_pointer=gt_boxes)
"""
Explanation: Annotate images with bounding boxes
In this cell you will annotate the rubber duckies --- draw a box around the rubber ducky in each image; click next image to go to the next image and submit when there are no more images.
If you'd like to skip the manual annotation step, we totally understand. In this case, simply skip this cell and run the next cell instead, where we've prepopulated the groundtruth with pre-annotated bounding boxes.
End of explanation
"""
# gt_boxes = [
# np.array([[0.436, 0.591, 0.629, 0.712]], dtype=np.float32),
# np.array([[0.539, 0.583, 0.73, 0.71]], dtype=np.float32),
# np.array([[0.464, 0.414, 0.626, 0.548]], dtype=np.float32),
# np.array([[0.313, 0.308, 0.648, 0.526]], dtype=np.float32),
# np.array([[0.256, 0.444, 0.484, 0.629]], dtype=np.float32)
# ]
"""
Explanation: In case you didn't want to label...
Run this cell only if you didn't annotate anything above and
would prefer to just use our preannotated boxes. Don't forget
to uncomment.
End of explanation
"""
# By convention, our non-background classes start counting at 1. Given
# that we will be predicting just one class, we will therefore assign it a
# `class id` of 1.
duck_class_id = 1
num_classes = 1
category_index = {duck_class_id: {'id': duck_class_id, 'name': 'rubber_ducky'}}
# Convert class labels to one-hot; convert everything to tensors.
# The `label_id_offset` here shifts all classes by a certain number of indices;
# we do this here so that the model receives one-hot labels where non-background
# classes start counting at the zeroth index. This is ordinarily just handled
# automatically in our training binaries, but we need to reproduce it here.
label_id_offset = 1
train_image_tensors = []
gt_classes_one_hot_tensors = []
gt_box_tensors = []
for (train_image_np, gt_box_np) in zip(
    train_images_np, gt_boxes):
  train_image_tensors.append(tf.expand_dims(tf.convert_to_tensor(
      train_image_np, dtype=tf.float32), axis=0))
  gt_box_tensors.append(tf.convert_to_tensor(gt_box_np, dtype=tf.float32))
  zero_indexed_groundtruth_classes = tf.convert_to_tensor(
      np.ones(shape=[gt_box_np.shape[0]], dtype=np.int32) - label_id_offset)
  gt_classes_one_hot_tensors.append(tf.one_hot(
      zero_indexed_groundtruth_classes, num_classes))
print('Done prepping data.')
"""
Explanation: Prepare data for training
Below we add the class annotations (for simplicity, we assume a single class in this colab; though it should be straightforward to extend this to handle multiple classes). We also convert everything to the format that the training
loop below expects (e.g., everything converted to tensors, classes converted to one-hot representations, etc.).
End of explanation
"""
dummy_scores = np.array([1.0], dtype=np.float32) # give boxes a score of 100%
plt.figure(figsize=(30, 15))
for idx in range(5):
  plt.subplot(2, 3, idx+1)
  plot_detections(
      train_images_np[idx],
      gt_boxes[idx],
      np.ones(shape=[gt_boxes[idx].shape[0]], dtype=np.int32),
      dummy_scores, category_index)
plt.show()
"""
Explanation: Let's just visualize the rubber duckies as a sanity check
End of explanation
"""
# Download the checkpoint and put it into models/research/object_detection/test_data/
!wget http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.tar.gz
!tar -xf ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.tar.gz
!mv ssd_resnet50_v1_fpn_640x640_coco17_tpu-8/checkpoint models/research/object_detection/test_data/
tf.keras.backend.clear_session()
print('Building model and restoring weights for fine-tuning...', flush=True)
num_classes = 1
pipeline_config = 'models/research/object_detection/configs/tf2/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.config'
checkpoint_path = 'models/research/object_detection/test_data/checkpoint/ckpt-0'
# Load pipeline config and build a detection model.
#
# Since we are working off of a COCO architecture which predicts 90
# class slots by default, we override the `num_classes` field here to be just
# one (for our new rubber ducky class).
configs = config_util.get_configs_from_pipeline_file(pipeline_config)
model_config = configs['model']
model_config.ssd.num_classes = num_classes
model_config.ssd.freeze_batchnorm = True
detection_model = model_builder.build(
model_config=model_config, is_training=True)
# Set up object-based checkpoint restore --- RetinaNet has two prediction
# `heads` --- one for classification, the other for box regression. We will
# restore the box regression head but initialize the classification head
# from scratch (we show the omission below by commenting out the line that
# we would add if we wanted to restore both heads)
fake_box_predictor = tf.compat.v2.train.Checkpoint(
    _base_tower_layers_for_heads=detection_model._box_predictor._base_tower_layers_for_heads,
    # _prediction_heads=detection_model._box_predictor._prediction_heads,
    #    (i.e., the classification head that we *will not* restore)
    _box_prediction_head=detection_model._box_predictor._box_prediction_head,
    )
fake_model = tf.compat.v2.train.Checkpoint(
    _feature_extractor=detection_model._feature_extractor,
    _box_predictor=fake_box_predictor)
ckpt = tf.compat.v2.train.Checkpoint(model=fake_model)
ckpt.restore(checkpoint_path).expect_partial()
# Run model through a dummy image so that variables are created
image, shapes = detection_model.preprocess(tf.zeros([1, 640, 640, 3]))
prediction_dict = detection_model.predict(image, shapes)
_ = detection_model.postprocess(prediction_dict, shapes)
print('Weights restored!')
"""
Explanation: Create model and restore weights for all but last layer
In this cell we build a single stage detection architecture (RetinaNet) and restore all but the classification layer at the top (which will be automatically randomly initialized).
For simplicity, we have hardcoded a number of things in this colab for the specific RetinaNet architecture at hand (including assuming that the image size will always be 640x640), however it is not difficult to generalize to other model configurations.
End of explanation
"""
tf.keras.backend.set_learning_phase(True)
# These parameters can be tuned; since our training set has 5 images
# it doesn't make sense to have a much larger batch size, though we could
# fit more examples in memory if we wanted to.
batch_size = 4
learning_rate = 0.01
num_batches = 100
# Select variables in top layers to fine-tune.
trainable_variables = detection_model.trainable_variables
to_fine_tune = []
prefixes_to_train = [
'WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead',
'WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead']
for var in trainable_variables:
  if any([var.name.startswith(prefix) for prefix in prefixes_to_train]):
    to_fine_tune.append(var)
# Set up forward + backward pass for a single train step.
def get_model_train_step_function(model, optimizer, vars_to_fine_tune):
  """Get a tf.function for training step."""

  # Use tf.function for a bit of speed.
  # Comment out the tf.function decorator if you want the inside of the
  # function to run eagerly.
  @tf.function
  def train_step_fn(image_tensors,
                    groundtruth_boxes_list,
                    groundtruth_classes_list):
    """A single training iteration.

    Args:
      image_tensors: A list of [1, height, width, 3] Tensor of type tf.float32.
        Note that the height and width can vary across images, as they are
        reshaped within this function to be 640x640.
      groundtruth_boxes_list: A list of Tensors of shape [N_i, 4] with type
        tf.float32 representing groundtruth boxes for each image in the batch.
      groundtruth_classes_list: A list of Tensors of shape [N_i, num_classes]
        with type tf.float32 representing one-hot groundtruth classes for each
        image in the batch.

    Returns:
      A scalar tensor representing the total loss for the input batch.
    """
    shapes = tf.constant(batch_size * [[640, 640, 3]], dtype=tf.int32)
    model.provide_groundtruth(
        groundtruth_boxes_list=groundtruth_boxes_list,
        groundtruth_classes_list=groundtruth_classes_list)
    with tf.GradientTape() as tape:
      preprocessed_images = tf.concat(
          [detection_model.preprocess(image_tensor)[0]
           for image_tensor in image_tensors], axis=0)
      prediction_dict = model.predict(preprocessed_images, shapes)
      losses_dict = model.loss(prediction_dict, shapes)
      total_loss = losses_dict['Loss/localization_loss'] + losses_dict['Loss/classification_loss']
      gradients = tape.gradient(total_loss, vars_to_fine_tune)
      optimizer.apply_gradients(zip(gradients, vars_to_fine_tune))
    return total_loss

  return train_step_fn
optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate, momentum=0.9)
train_step_fn = get_model_train_step_function(
detection_model, optimizer, to_fine_tune)
print('Start fine-tuning!', flush=True)
for idx in range(num_batches):
  # Grab keys for a random subset of examples
  all_keys = list(range(len(train_images_np)))
  random.shuffle(all_keys)
  example_keys = all_keys[:batch_size]

  # Note that we do not do data augmentation in this demo. If you want a
  # fun exercise, we recommend experimenting with random horizontal flipping
  # and random cropping :)
  gt_boxes_list = [gt_box_tensors[key] for key in example_keys]
  gt_classes_list = [gt_classes_one_hot_tensors[key] for key in example_keys]
  image_tensors = [train_image_tensors[key] for key in example_keys]

  # Training step (forward pass + backwards pass)
  total_loss = train_step_fn(image_tensors, gt_boxes_list, gt_classes_list)

  if idx % 10 == 0:
    print('batch ' + str(idx) + ' of ' + str(num_batches)
          + ', loss=' + str(total_loss.numpy()), flush=True)
print('Done fine-tuning!')
"""
Explanation: Eager mode custom training loop
End of explanation
"""
test_image_dir = 'models/research/object_detection/test_images/ducky/test/'
test_images_np = []
for i in range(1, 50):
  image_path = os.path.join(test_image_dir, 'out' + str(i) + '.jpg')
  test_images_np.append(np.expand_dims(
      load_image_into_numpy_array(image_path), axis=0))
# Again, comment out this decorator if you want to run inference eagerly
@tf.function
def detect(input_tensor):
  """Run detection on an input image.

  Args:
    input_tensor: A [1, height, width, 3] Tensor of type tf.float32.
      Note that height and width can be anything since the image will be
      immediately resized according to the needs of the model within this
      function.

  Returns:
    A dict containing 3 Tensors (`detection_boxes`, `detection_classes`,
    and `detection_scores`).
  """
  preprocessed_image, shapes = detection_model.preprocess(input_tensor)
  prediction_dict = detection_model.predict(preprocessed_image, shapes)
  return detection_model.postprocess(prediction_dict, shapes)
# Note that the first frame will trigger tracing of the tf.function, which will
# take some time, after which inference should be fast.
label_id_offset = 1
for i in range(len(test_images_np)):
  input_tensor = tf.convert_to_tensor(test_images_np[i], dtype=tf.float32)
  detections = detect(input_tensor)
  plot_detections(
      test_images_np[i][0],
      detections['detection_boxes'][0].numpy(),
      detections['detection_classes'][0].numpy().astype(np.uint32)
      + label_id_offset,
      detections['detection_scores'][0].numpy(),
      category_index, figsize=(15, 20), image_name="gif_frame_" + ('%02d' % i) + ".jpg")
imageio.plugins.freeimage.download()
anim_file = 'duckies_test.gif'
filenames = glob.glob('gif_frame_*.jpg')
filenames = sorted(filenames)
last = -1
images = []
for filename in filenames:
  image = imageio.imread(filename)
  images.append(image)
imageio.mimsave(anim_file, images, 'GIF-FI', fps=5)
display(IPyImage(open(anim_file, 'rb').read()))
"""
Explanation: Load test images and run inference with new model!
End of explanation
"""
LSSTC-DSFP/LSSTC-DSFP-Sessions | Sessions/Session09/Day4/workbook_QPO.ipynb | mit
a.meta['dt'] = 0.0001 # time step, in seconds
a.meta['duration'] = 200 # length of time, in seconds
a.meta['omega'] = 2*np.pi  # angular frequency, in radians per second
a.meta['phi'] = 0.0 # offset angle, in radians
"""
Explanation: Using Fourier Analysis to Analyze Quasi-Periodic Oscillations
By Abigail Stevens
Problem 1: damped harmonic oscillator example
Generating a light curve
End of explanation
"""
freq = fftpack.fftfreq(len(a), d=a.meta['dt'])
nyq_ind = int(len(a)/2.) # the index of the last positive Fourier frequency
"""
Explanation: 1a. Compute the time steps and a cosine harmonic with the above-defined properties.
1b. Compute four exponentially damped versions of the harmonic oscillation.
$$D(t_i) = e^{-\zeta t_i}H(t_i)$$
Pick your own four $\zeta$s! I recommend values between 0.01 and 1.
1c. Plot them all on top of each other.
1d. Take the power spectrum of the harmonic and 4 damped harmonic time series.
The power $P$ at each frequency $\nu_i$, for the Fourier transform $F$, is $$P(\nu_i)=|F(\nu_i)|^2$$
Computing the Fourier frequencies and the index of the Nyquist frequency.
End of explanation
"""
freq = fftpack.fftfreq(len(psd), d=dt)
nyq_ind = int(len(psd)/2.) # the index of the last positive Fourier frequency
"""
Explanation: 1e. Plot them!
Notice the trend between the width of the peak in the power spectrum, and the strength of the damping factor.
For bonus points, put in a key/legend with the corresponding $\zeta$ value for each curve.
Problem 2: Analyzing NICER data of the black hole X-ray binary MAXI J1535-571
Import it with astropy tables from the file "J1535_evt.fits".
The data have come to us as an 'event list', meaning that it's a list of the time at which a photon was detected (in seconds, in spacecraft clock time) and what the energy of the photon was (a channel number; channel/100=photon energy in keV).
2a. Turn this rag-tag list of photons into an evenly-spaced light curve
2a.i. First, clean it up a little by only keeping photons with energies greater than 1 keV and less than 12 keV, using a mask.
2a.ii. Then, make an evenly-spaced light curve array with np.histogram. Pick a light curve time resolution of dt=1/8 seconds to start with. To put your light curve in units of count rate, divide the histogram (counts per bin) by dt (seconds per bin); to avoid typecasting errors, do this by multiplying by int(1/dt).
(yes, it takes a second or two; you're using half a million time bins in your light curve!)
2b. Let's try taking the power spectrum of it.
$$P(\nu_i)=|F(\nu_i)|^2$$
where $F$ is the FFT of your light curve.
End of explanation
"""
def make_lc(events, start, end, dt):
"""
Explanation: Plot it!
It's ugly! But more importantly, you can't get useful science out of it.
What's going on?
There are gaps in the light curve due to the orbit of the spacecraft (and occasionally stuff gets in the way). This has the effect of inserting top-hat windows into our function, which give the lumpy bumps at ~0.25 Hz. So, we need to break the light curve up into shorter segments that won't have weird drop-outs.
There is a giant DC component at $\nu=0$. This is not astrophysical in origin, but from the mean of the light curve.
Power spectra are often plotted on log-log scales, but the power gets really noisy and 'scatter-y' at higher frequencies.
The astute observer will notice that we can only go up to a Nyquist frequency of 4 Hz. There's interesting astrophysical signals above 4 Hz, but if we did smaller dt with keeping the very long segment length, we'd have >1 million time bins, which can be asking a lot of a laptop processor.
2c. Segments!
2c.i. Turn your light curve code from 2a.ii. into a function:
End of explanation
"""
time = np.asarray(j1535['TIME']) ## Doing this so that we can re-run
seg_length = 32. # seconds
dt = 1./128.  # seconds
n_bins = int(seg_length/dt) # Number of time bins in a segment of light curve
psd_avg = np.zeros(n_bins) # initiating, to keep running sum (then avearge at end)
n_seg = 0
for (start_gti, stop_gti) in zip(gti_tab['START'], gti_tab['STOP']):
    start_time = start_gti
    end_time = start_time + seg_length
    while end_time <= stop_gti:
        ## Make a mask of events in this segment
        ## Keep the stuff not in this segment for next time
        ## Make the light curve
        ## Turn that into a power spectrum
        ## Keep a running sum (to average at end)
        ## Print out progress
        if n_seg % 5 == 0:
            print(n_seg)
        ## Incrementing for next loop
        n_seg += 1
        start_time += seg_length
        end_time += seg_length
## Divide summed powers by n_seg to get the average
"""
Explanation: 2c.ii. Sometimes, the detector is on and recording photons, but it's pointed too close to the Earth, or a structure on the spacecraft is occulting part of the view, or the instrument is moving through a zone of high particle background, or other things. The times when these things happen are recorded, and in data reduction you make a list of Good Time Intervals, or GTIs, which is when you can use good science data. I made a list of GTIs for this data file that are longer than 4 seconds long, which you can read in from "J1535_gti.fits".
2c.iii. Not only do we want to only use data in the GTIs, but we want to split the light curve up into multiple equal-length segments, take the power spectrum of each segment, and average them together. By using shorter time segments, we can use finer dt on the light curves without having so many bins for the computation that our computer grinds to a halt. There is the added bonus that the noise amplitudes will tend to cancel each other out, and the signal amplitudes will add, and we get better signal-to-noise!
As you learned in Jess's lecture yesterday, setting the length of the segment determines the lowest frequency you can probe, but for stellar-mass compact objects where we're usually interested in variability above ~0.1 Hz, this is an acceptable trade-off.
End of explanation
"""
def rebin(freq, power, err_power, rebin_factor=1.05):
    """
    Re-bin the power spectrum in frequency space by some re-binning factor
    (rebin_factor > 1). This is sometimes called 'geometric re-binning' or
    'logarithmic re-binning', as opposed to linear re-binning
    (e.g., grouping by 2).

    Parameters
    ----------
    freq : np.array of floats
        1-D array of the Fourier frequencies.

    power : np.array of floats
        1-D array of the power at each Fourier frequency, with any/arbitrary
        normalization.

    err_power : np.array of floats
        1-D array of the error on the power at each Fourier frequency, with the
        same normalization as the power.

    rebin_factor : float
        The factor by which the data are geometrically re-binned.

    Returns
    -------
    rb_freq : np.array of floats
        1-D array of the re-binned Fourier frequencies.

    rb_power : np.array of floats
        1-D array of the power at the re-binned Fourier frequencies, with the
        same normalization as the input power array.

    rb_err : np.array of floats
        1-D array of the error on the power at the re-binned Fourier
        frequencies, with the same normalization as the input error on power.
    """
    assert rebin_factor >= 1.0

    rb_power = np.asarray([])  # Array of re-binned power
    rb_freq = np.asarray([])   # Array of re-binned frequencies
    rb_err = np.asarray([])    # Array of error in re-binned power
    real_index = 1.0           # The unrounded next index in power
    int_index = 1              # The int of real_index, added to current_m every iteration
    current_m = 1              # Current index in power
    prev_m = 0                 # Previous index m

    ## Loop through the length of the array power, new bin by new bin, to
    ## compute the average power and frequency of that new geometric bin.
    while current_m < len(power):
        ## Fill in: compute this new bin's average frequency, power, and
        ## error, then advance prev_m, current_m, int_index, and real_index.
        break  ## placeholder so this template runs; remove once implemented

    return rb_freq, rb_power, rb_err
"""
Explanation: Plot it! Use similar code I gave you above to make the array of Fourier frequencies and get the index of the Nyquist frequency.
2d. Mean-subtracted
So, you can see something just to the left of 10 Hz much more clearly, but there's this giant whopping signal at the lowest frequency bin. This is what I've heard called the 'DC component', which arises from the mean count rate of the light curve. To get rid of it, subtract the mean from your light curve segment before taking the Fourier transform. Otherwise, keep the same code as above for 2c.iii. You may want to put some of this in a function for future use in this notebook.
2e. Error on average power
The average power at a particular frequency has a chi-squared distribution with two degrees of freedom about the true underlying power spectrum. So, the error is the power value divided by the square root of the number of segments. A big reason why we love power spectra(/periodograms) is that this is so straightforward!
One way to intuitively check if your errors are way-overestimated or way-underestimated is whether the size of the error bar looks commensurate with the amount of bin-to-bin scatter of power at neighbouring frequencies.
Plotting, this time with ax.errorbar instead of ax.plot.
The thing at ~8 Hz is a low-frequency QPO, and the hump at-and-below 1 Hz is broadband noise (which we'll discuss in detail this afternoon)!! Now that you've got the basic analysis step complete, we'll focus on plotting the data in a meaningful way so you can easily extract information about the QPO and noise.
2f. Re-binning
We often plot power spectra on log-log scaled axes (so, log on both the X and Y), and you'll notice that there's a big noisy part above 10 Hz. It is common practice to bin up the power spectrum geometrically (which is like making it equally spaced when log-plotted).
For this example, I'll use a re-binning factor of 1.03. If new bin 1 has the width of one old bin, new bin 2 will be some 1.03 bins wider. New bin 3 will be 1.03 times wider than that (the width of new bin 2), etc. For the first couple bins, this will round to one old bin (since you can only have an integer number of bins), but eventually a new bin will be two old bins, then more and more as you move higher in frequency. If the idea isn't quite sticking, try drawing out a representation of old bins and how the new bins get progressively larger by the rebinning factor.
For a given new bin x that spans indices a to b in the old bin array:
$$\nu_{x} = \frac{1}{b-a}\sum_{i=a}^{b}\nu_{i}$$
$$P_{x} = \frac{1}{b-a}\sum_{i=a}^{b}P_{i}$$
$$\delta P_{x} = \frac{1}{b-a}\sqrt{\sum_{i=a}^{b}(\delta P_{i})^{2}}$$
End of explanation
"""
seg_length = 64. # seconds
dt = 1./8.# seconds
n_bins = int(seg_length/dt) # Number of time bins in a segment of light curve
"""
Explanation: Apply this to the data (using JUST the frequency, power, and error at positive Fourier frequencies). Start with a rebin factor of 1.03.
Play around with a few different values of rebin_factor to see how it changes the plotted power spectrum. 1 should give back exactly what you put in, and 1.1 tends to bin things up quite a lot.
Congratulations! You can make great-looking power spectra! Now, go back to part 2d. and try 4 or 5 different combinations of dt and seg_length. What happens when you pick too big of a dt to see the QPO frequency? What if your seg_length is really short?
One of the most important things to notice is that for a real astrophysical signal, the QPO (and low-frequency noise) are present for a variety of different dt and seg_length parameters.
2g. Normalization
The final thing standing between us and a publication-ready power spectrum plot is the normalization of the power along the y-axis. The normalization that's commonly used is fractional rms-squared normalization, sometimes just called the rms normalization. For a power spectrum created from counts/second unit light curves, the equation is:
$$P_{frac.rms2} = P \times \frac{2*dt}{N * mean^2}$$
- P is the power we already have,
- dt is the time step of the light curve,
- N is n_bins, the number of bins in one segment, and
- mean is the mean count rate (in counts/s) of the light curve (so, you will need to go back to 2d. and re-run keeping a running sum-then-average of the mean count rate).
Don't forget to apply the normalization to the error, and re-bin after!
2h. Poisson noise level
Notice that the Poisson noise is a power law with slope 0 at high frequencies. With this normalization, we can predict the power of the Poisson noise level from the mean counts/s rate of the light curve!
$$P_{noise} = 2/mean$$
Compute this noise level, and plot it with the power spectrum.
Your horizontal Poisson noise line should be really close to the power at and above ~10 Hz.
2i. For plotting purposes, we sometimes subtract the Poisson noise level from the power before plotting.
Once we've done this and removed the noise, we can also plot the data in units of Power, instead of Power/Hz, by multiplying the power by the frequency. Recall that following the propagation of errors, you will need to multiply the error by the frequency as well, but not subtract the Poisson noise level there.
Beautiful! This lets us see the components clearly above the noise and see their relative contributions to the power spectrum (and thus to the light curve).
Recap of what you learned in problem 2:
You are now able to take a light curve, break it into appropriate segments using the given Good Time Intervals, compute the average power spectrum (without weird aliasing artefacts or a strong DC component), and plot it in such away that you can see the signals clearly.
Problem 3: It's pulsar time
We are going to take these skills and now work on two different observations of the same source, the ultra-luminous X-ray pulsar Swift J0243.6+6124. The goal is for you to see how different harmonics in the pulse shape manifest in the power spectrum.
3a. Load the data and GTI
Using the files J0243-122_evt.fits and J0243-134_evt.fits, and the corresponding x_gti.fits.
3b. Apply a mask to remove energies below 0.5 keV and above 12 keV.
3c. Make the average power spectrum for each data file.
Go through in the same way as 2d (using the make_lc function you already wrote). Re-bin and normalize (using the rebin function you already wrote). The spin period is 10 seconds, so I don't recommend using a segment length shorter than that (try 64 seconds). Since the period is quite long (for a pulsar), you can use a longer dt, like 1/8 seconds. Use the same segment length and dt for both data sets.
End of explanation
"""
dlsun/symbulate | labs/Lab 6 - Joint and Conditional Distributions.ipynb | mit
%matplotlib inline
"""
Explanation: Symbulate Lab 6 - Joint and Conditional Distributions
This Jupyter notebook provides a template for you to fill in. Read the notebook from start to finish, completing the parts as indicated. To run a cell, make sure the cell is highlighted by clicking on it, then press SHIFT + ENTER on your keyboard. (Alternatively, you can click the "play" button in the toolbar above.)
In this lab you will use the Symbulate package. Many of the commands are discussed in the Multiple RV Section, the Conditioning Section, or the Graphics Section of the Symbulate documentation. You should use Symbulate commands whenever possible. If you find yourself writing long blocks of Python code, you are probably doing something wrong. For example, you should not need to write any for loops.
There are 3 parts, and at the end of each part there are some reflection questions. There is no need to type a response to the reflection questions, but you should think about them and discuss them with your partner to try to make sense of your simulation results.
Warning: You may notice that many of the cells in this notebook are not editable. This is intentional and for your own safety. We have made these cells read-only so that you don't accidentally modify or delete them. However, you should still be able to execute the code in these cells.
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: Part I: Two Discrete random variables
Roll a fair six-sided die five times and let $X$ be the largest of the five rolls and $Y$ the smallest.
Before proceeding, make some guesses about how the following will behave.
- Joint distribution of $X$ and $Y$
- Conditional distribution of $Y$ given $X=5$.
a)
Define the random variables $X$ and $Y$.
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: b)
Simulate 10000 $(X, Y)$ pairs and store the values as xy. Estimate the covariance and the correlation. (Hint and hint and hint)
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: c)
Make a scatterplot of the simulated values. (Hint. Note that it is recommended to use jitter=True when the variables involved are discrete.)
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: d)
Make a tile plot of the simulated values. (Hint)
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: e)
Use simulation to approximate the conditional distribution of $Y$ given $X=5$ and approximate the conditional mean $E(Y | X=5)$ and the conditional standard deviation. (Hint, but also see all of the Conditioning Section.)
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: f) Reflection questions
Recall the guesses you made at the start of the problem, and inspect your results from the previous parts. Can you explain the behavior you observed for the following?
Joint distribution of $X$ and $Y$
Conditional distribution of $Y$ given $X=5$.
TYPE YOUR RESPONSE HERE.
Part II: Two continuous random variables
Suppose that the base $U$ and height $V$ of a random rectangle are independent random variables, with each following a Uniform(0, 1) distribution. Let $X$ be the perimeter of the rectangle and $Y$ its area. In this part you will investigate the joint distribution of $X$ and $Y$.
Before proceeding, make some guesses about how the following will behave.
- Joint distribution of $X$ and $Y$
- Marginal distribution of $Y$
- Conditional distribution of $Y$ given $X=2$.
a)
Define appropriate random variables $U, V, X, Y$. (Hint, but also see the Multiple RV Section in general.)
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: b)
Simulate 10000 $(X, Y)$ pairs and store the values as xy. Estimate the covariance and the correlation. (Hint and hint and hint)
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: c)
Make a scatterplot of the simulated values. (Hint)
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: d)
Make a two-dimensional histogram of the simulated values. (Hint)
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: e)
Make a two-dimensional density plot of the simulated values. (Hint)
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: f)
Use simulation to approximate the marginal distribution of $Y$ and approximate its mean and standard deviation.
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: g)
Use simulation to approximate the conditional distribution of $Y$ given $X=2$ and approximate the conditional mean $E(Y | X=2)$ and the conditional standard deviation. (Warning: be careful! See this hint and especially this hint.)
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: h) Reflection questions
Recall the guesses you made at the start of the problem, and inspect your results from the previous parts. Can you explain the behavior you observed for the following?
Joint distribution of $X$ and $Y$
Marginal distribution of $Y$
Conditional distribution of $Y$ given $X=2$.
TYPE YOUR RESPONSE HERE.
Part III: Joint Gaussian random variables
Just like Gaussian (Normal) distributions are the most important probability distributions, joint Gaussian (Multivariate Normal) distributions are the most important joint distributions. In this part you will investigate two random variables which have a joint Gaussian distribution.
Suppose that SAT Math ($M$) and Reading ($R$) scores of CalPoly students have a Bivariate Normal
(joint Gaussian) distribution.
- Math scores have mean 635 and SD 85.
- Reading scores have mean 595 and SD 70.
- The correlation between scores is 0.6.
Let $X = M + R$, the total of the two scores. Let $Y = M- R$, the difference between Math and Reading scores.
a)
Define RVs $M, R, X, Y$. (Hint)
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: b)
Simulate 10000 $(M, R)$ pairs. Use the simulation results to approximate $E(M)$, $E(R)$, $SD(M)$, $SD(R)$, and $Corr(M, R)$.
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: c)
Make a scatterplot of the simulated values. Add histograms of the marginal distributions. (Hint: .plot(type=["scatter", "marginal"])).
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: d)
Make a density plot of the simulated values. Add density plots of the marginal distributions. (Hint: .plot(type=["density", "marginal"])).
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: e)
Now simulate 10000 values of $X = M+R$. Plot the approximate distribution of $X$ and estimate $E(X)$ and $SD(X)$.
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: f)
Now simulate 10000 values of $Y = M - R$. Plot the approximate distribution of $Y$ and estimate $E(Y)$ and $SD(Y)$.
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: g)
Use simulation to approximate the distribution of $M$ given $R=700$. Make a plot of the approximate distribution and estimate the conditional mean $E(M | R = 700)$ and the conditional standard deviation. (Warning: be careful! See this hint and especially this hint.)
End of explanation
"""
|
SteveDiamond/cvxpy | examples/machine_learning/ridge_regression.ipynb | gpl-3.0 | import cvxpy as cp
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Machine Learning: Ridge Regression
Ridge regression is a regression technique that is quite similar to unadorned least squares linear regression: simply adding an $\ell_2$ penalty on the parameters $\beta$ to the objective function for linear regression yields the objective function for ridge regression.
Our goal is to find an assignment to $\beta$ that minimizes the function
$$f(\beta) = \|X\beta - Y\|_2^2 + \lambda \|\beta\|_2^2,$$
where $\lambda$ is a hyperparameter and, as usual, $X$ is the training data and $Y$ the observations. In practice, we tune $\lambda$ until we find a model that generalizes well to the test data.
Ridge regression is an example of a shrinkage method: compared to least squares, it shrinks the parameter estimates in the hopes of reducing variance, improving prediction accuracy, and aiding interpretation.
In this notebook, we show how to fit a ridge regression model using CVXPY, how to evaluate the model, and how to tune the hyper-parameter $\lambda$.
End of explanation
"""
def loss_fn(X, Y, beta):
return cp.pnorm(cp.matmul(X, beta) - Y, p=2)**2
def regularizer(beta):
return cp.pnorm(beta, p=2)**2
def objective_fn(X, Y, beta, lambd):
return loss_fn(X, Y, beta) + lambd * regularizer(beta)
def mse(X, Y, beta):
return (1.0 / X.shape[0]) * loss_fn(X, Y, beta).value
"""
Explanation: Writing the objective function
We can decompose the objective function as the sum of a least squares loss function and an $\ell_2$ regularizer.
End of explanation
"""
def generate_data(m=100, n=20, sigma=5):
"Generates data matrix X and observations Y."
np.random.seed(1)
beta_star = np.random.randn(n)
# Generate a random data matrix with standard normal entries
X = np.random.randn(m, n)
# Corrupt the observations with additive Gaussian noise
Y = X.dot(beta_star) + np.random.normal(0, sigma, size=m)
return X, Y
m = 100
n = 20
sigma = 5
X, Y = generate_data(m, n, sigma)
X_train = X[:50, :]
Y_train = Y[:50]
X_test = X[50:, :]
Y_test = Y[50:]
"""
Explanation: Generating data
Because ridge regression encourages the parameter estimates to be small, it tends to lead to models with less variance than those fit with vanilla linear regression. We generate a small dataset that will illustrate this.
End of explanation
"""
beta = cp.Variable(n)
lambd = cp.Parameter(nonneg=True)
problem = cp.Problem(cp.Minimize(objective_fn(X_train, Y_train, beta, lambd)))
lambd_values = np.logspace(-2, 3, 50)
train_errors = []
test_errors = []
beta_values = []
for v in lambd_values:
lambd.value = v
problem.solve()
train_errors.append(mse(X_train, Y_train, beta))
test_errors.append(mse(X_test, Y_test, beta))
beta_values.append(beta.value)
"""
Explanation: Fitting the model
All we need to do to fit the model is create a CVXPY problem where the objective is to minimize the objective function defined above. We make $\lambda$ a CVXPY parameter, so that we can use a single CVXPY problem to obtain estimates for many values of $\lambda$.
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
def plot_train_test_errors(train_errors, test_errors, lambd_values):
plt.plot(lambd_values, train_errors, label="Train error")
plt.plot(lambd_values, test_errors, label="Test error")
plt.xscale("log")
plt.legend(loc="upper left")
plt.xlabel(r"$\lambda$", fontsize=16)
plt.title("Mean Squared Error (MSE)")
plt.show()
plot_train_test_errors(train_errors, test_errors, lambd_values)
"""
Explanation: Evaluating the model
Notice that, up to a point, penalizing the size of the parameters reduces test error at the cost of increasing the training error, trading off higher bias for lower variance; in other words, this indicates that, for our example, a properly tuned ridge regression generalizes better than a least squares linear regression.
End of explanation
"""
def plot_regularization_path(lambd_values, beta_values):
num_coeffs = len(beta_values[0])
for i in range(num_coeffs):
plt.plot(lambd_values, [wi[i] for wi in beta_values])
plt.xlabel(r"$\lambda$", fontsize=16)
plt.xscale("log")
plt.title("Regularization Path")
plt.show()
plot_regularization_path(lambd_values, beta_values)
"""
Explanation: Regularization path
As expected, increasing $\lambda$ drives the parameters towards $0$. In a real-world example, those parameters that approach zero more slowly than others might correspond to the more informative features. It is in this sense that ridge regression can be considered a form of model selection.
End of explanation
"""
|
philmui/datascience2016fall | lecture03.numpy.pandas/ch05.ipynb | mit | from pandas import Series, DataFrame
import pandas as pd
from __future__ import division
from numpy.random import randn
import numpy as np
import os
import matplotlib.pyplot as plt
np.random.seed(12345)
plt.rc('figure', figsize=(10, 6))
from pandas import Series, DataFrame
import pandas as pd
np.set_printoptions(precision=4)
%pwd
"""
Explanation: Getting started with pandas
End of explanation
"""
obj = Series([4, 7, -5, 3])
obj
obj.name
obj.values
obj.index
obj2 = Series([4, 7, -5, 3], index=['d', 'b', 'a', 'c'])
obj2
obj2.index
obj2.values
obj2['a']
obj2['d'] = 6
obj2[['c', 'a', 'd']]
obj2[obj2 > 0]
obj2 * 2
np.exp(obj2)
'b' in obj2
'e' in obj2
sdata = {'Ohio': 35000, 'Texas': 71000, 'Oregon': 16000, 'Utah': 5000}
obj3 = Series(sdata)
obj3
states = ['California', 'Ohio', 'Oregon', 'Texas']
obj4 = Series(sdata, index=states)
obj4
pd.isnull(obj4)
pd.notnull(obj4)
obj4.isnull()
obj3
obj4
obj3 + obj4
obj4.name = 'population'
obj4.index.name = 'state'
obj4
obj.index = ['Bob', 'Steve', 'Jeff', 'Ryan']
obj
"""
Explanation: Introduction to pandas data structures
Series
End of explanation
"""
data = {'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'],
'year': [2000, 2001, 2002, 2001, 2002],
'pop': [1.5, 1.7, 3.6, 2.4, 2.9]}
frame = DataFrame(data)
frame
DataFrame(data, columns=['year', 'state', 'pop'])
frame2 = DataFrame(data, columns=['year', 'state', 'pop', 'debt'],
index=['one', 'two', 'three', 'four', 'five'])
frame2
frame2.columns
"""
Explanation: DataFrame
End of explanation
"""
frame2['state']
frame2.year
"""
Explanation: A column in a DataFrame can be retrieved as a Series either by dict-like notation or by attribute:
End of explanation
"""
frame2.ix['three']
"""
Explanation: Rows can also be retrieved by position or name by a couple of methods, such as the ix indexing field
End of explanation
"""
frame2['debt'] = 16.5
frame2
frame2['debt'] = np.arange(5.)
frame2
"""
Explanation: Columns can be modified by assignment. For example, the empty 'debt' column could be assigned a scalar value or an array of values:
End of explanation
"""
val = Series([-1.2, -1.5, -1.7], index=['two', 'four', 'five'])
frame2['debt'] = val
frame2
"""
Explanation: When assigning lists or arrays to a column, the value’s length must match the length of the DataFrame. If you assign a Series, it will instead be conformed exactly to the DataFrame’s index, inserting missing values in any holes:
End of explanation
"""
frame2['eastern'] = frame2.state == 'Ohio'
frame2
"""
Explanation: Assigning a column that doesn’t exist will create a new column.
End of explanation
"""
del frame2['eastern']
frame2.columns
"""
Explanation: The del keyword will delete columns as with a dict:
End of explanation
"""
pop = {'Nevada': {2001: 2.4, 2002: 2.9},
'Ohio': {2000: 1.5, 2001: 1.7, 2002: 3.6}}
frame3 = DataFrame(pop)
frame3
frame3.T
"""
Explanation: If passed to DataFrame, a nested dict of dicts will be interpreted as follows: the outer dict keys become the columns and the inner keys the row indices:
End of explanation
"""
DataFrame(pop, index=[2001, 2002, 2003])
"""
Explanation: The keys in the inner dicts are unioned and sorted to form the index in the result. This isn’t true if an explicit index is specified:
End of explanation
"""
frame3['Ohio'][:-1]
frame3['Nevada'][:2]
pdata = {'Ohio': frame3['Ohio'][:-1],
'Nevada': frame3['Nevada'][:2]}
DataFrame(pdata)
"""
Explanation: Dicts of Series are treated much in the same way:
End of explanation
"""
frame3.index.name = 'year'; frame3.columns.name = 'state'
frame3
"""
Explanation: If a DataFrame’s index and columns have their name attributes set, these will also be displayed:
End of explanation
"""
frame3.values
"""
Explanation: Like Series, the values attribute returns the data contained in the DataFrame as a 2D ndarray:
End of explanation
"""
frame2.values
"""
Explanation: If the DataFrame’s columns are different dtypes, the dtype of the values array will be chosen to accomodate all of the columns:
End of explanation
"""
obj = Series(range(3), index=['a', 'b', 'c'])
index = obj.index
index
index[1:]
"""
Explanation: Index objects
pandas’s Index objects are responsible for holding the axis labels and other metadata (like the axis name or names). Any array or other sequence of labels used when constructing a Series or DataFrame is internally converted to an Index:
End of explanation
"""
# this below will generate an error
index[1] = 'd'
"""
Explanation: Index objects are immutable and thus can’t be modified by the user:
End of explanation
"""
index = pd.Index(np.arange(3))
obj2 = Series([1.5, -2.5, 0], index=index)
obj2.index is index
"""
Explanation: Immutability is important so that Index objects can be safely shared among data structures:
End of explanation
"""
frame3
'Ohio' in frame3.columns
2003 in frame3.index
"""
Explanation: In addition to being array-like, an Index also functions as a fixed-size set:
End of explanation
"""
obj = Series([4.5, 7.2, -5.3, 3.6], index=['d', 'b', 'a', 'c'])
obj
"""
Explanation: Essential functionality
Reindexing
A critical method on pandas objects is reindex, which means to create a new object with the data conformed to a new index. Consider a simple example from above:
End of explanation
"""
obj2 = obj.reindex(['a', 'b', 'c', 'd', 'e'])
obj2
"""
Explanation: Calling reindex on this Series rearranges the data according to the new index, introducing missing values if any index values were not already present:
End of explanation
"""
obj.reindex(['a', 'b', 'c', 'd', 'e'], fill_value=0)
obj3 = Series(['blue', 'purple', 'yellow'], index=[0, 2, 4])
obj3.reindex(range(6), method='ffill')
frame = DataFrame(np.arange(9).reshape((3, 3)), index=['a', 'c', 'd'],
columns=['Ohio', 'Texas', 'California'])
frame
frame2 = frame.reindex(['a', 'b', 'c', 'd'])
frame2
"""
Explanation: For ordered data like time series, it may be desirable to do some interpolation or filling of values when reindexing. The method option allows us to do this, using a method such as ffill which forward fills the values:
End of explanation
"""
states = ['Texas', 'Utah', 'California']
frame.reindex(columns=states)
"""
Explanation: The columns can be reindexed using the columns keyword:
End of explanation
"""
frame.reindex(index=['a', 'b', 'c', 'd'], method='ffill',
columns=states)
"""
Explanation: Both can be reindexed in one shot, though interpolation will only apply row-wise (axis 0):
End of explanation
"""
frame.ix[['a', 'b', 'c', 'd'], states]
"""
Explanation: As you’ll see soon, reindexing can be done more succinctly by label-indexing with ix:
End of explanation
"""
obj = Series(np.arange(5.), index=['a', 'b', 'c', 'd', 'e'])
new_obj = obj.drop('c')
new_obj
obj.drop(['d', 'c'])
"""
Explanation: Dropping entries from an axis
Dropping one or more entries from an axis is easy if you have an index array or list without those entries. As that can require a bit of munging and set logic, the drop method will return a new object with the indicated value or values deleted from an axis:
End of explanation
"""
data = DataFrame(np.arange(16).reshape((4, 4)),
index=['Ohio', 'Colorado', 'Utah', 'New York'],
columns=['one', 'two', 'three', 'four'])
data.drop(['Colorado', 'Ohio'])
data.drop('Utah', axis=0)
data.drop('two', axis=1)
data.drop(['two', 'four'], axis=1)
"""
Explanation: With DataFrame, index values can be deleted from either axis:
End of explanation
"""
obj = Series(np.arange(4.), index=['a', 'b', 'c', 'd'])
obj
obj['b']
obj[1]
obj[2:4]
obj[['b', 'a', 'd']]
obj[[1, 3]]
obj[obj < 2]
"""
Explanation: Indexing, selection, and filtering
Series indexing (obj[...]) works analogously to NumPy array indexing, except you can use the Series’s index values instead of only integers. Here are some examples of this:
End of explanation
"""
obj['b':'c']
"""
Explanation: Slicing with labels behaves differently than normal Python slicing in that the endpoint is inclusive:
End of explanation
"""
obj['b':'c'] = 5
obj
"""
Explanation: Setting using these methods works just as you would expect:
End of explanation
"""
data = DataFrame(np.arange(16).reshape((4, 4)),
index=['Ohio', 'Colorado', 'Utah', 'New York'],
columns=['one', 'two', 'three', 'four'])
data
data['two']
data[['three', 'one']]
"""
Explanation: As you’ve seen above, indexing into a DataFrame is for retrieving one or more columns either with a single value or sequence:
End of explanation
"""
data[:2]
data[data['three'] > 5]
data < 5
data[data < 5] = 0
data
"""
Explanation: Indexing like this has a few special cases. First selecting rows by slicing or a boolean array:
End of explanation
"""
data.ix['Colorado', ['two', 'three']]
data.ix[['Colorado', 'Utah'], [3, 0, 1]]
data.ix[2]
data.ix[:'Utah', 'two']
data.ix[data.three > 5, :3]
"""
Explanation: The indexing field ix enables you to select a subset of the rows and columns from a DataFrame with NumPy-like notation plus axis labels. This is also a less verbose way to do reindexing:
End of explanation
"""
s1 = Series([7.3, -2.5, 3.4, 1.5], index=['a', 'c', 'd', 'e'])
s2 = Series([-2.1, 3.6, -1.5, 4, 3.1], index=['a', 'c', 'e', 'f', 'g'])
s1
s2
s1 + s2
"""
Explanation: Arithmetic and data alignment
One of the most important pandas features is the behavior of arithmetic between objects with different indexes. When adding together objects, if any index pairs are not the same, the respective index in the result will be the union of the index pairs. Let’s look at a simple example:
End of explanation
"""
df1 = DataFrame(np.arange(9.).reshape((3, 3)), columns=list('bcd'),
index=['Ohio', 'Texas', 'Colorado'])
df2 = DataFrame(np.arange(12.).reshape((4, 3)), columns=list('bde'),
index=['Utah', 'Ohio', 'Texas', 'Oregon'])
df1
df2
"""
Explanation: The internal data alignment introduces NA values in the indices that don’t overlap. Missing values propagate in arithmetic computations.
In the case of DataFrame, alignment is performed on both the rows and the columns:
End of explanation
"""
df1 + df2
"""
Explanation: Adding these together returns a DataFrame whose index and columns are the unions of the ones in each DataFrame:
End of explanation
"""
df1 = DataFrame(np.arange(12.).reshape((3, 4)), columns=list('abcd'))
df2 = DataFrame(np.arange(20.).reshape((4, 5)), columns=list('abcde'))
df1
df2
df1 + df2
df1.add(df2, fill_value=0)
"""
Explanation: Arithmetic methods with fill values
In arithmetic operations between differently-indexed objects, you might want to fill with a special value, like 0, when an axis label is found in one object but not the other:
End of explanation
"""
df1.reindex(columns=df2.columns, fill_value=0)
"""
Explanation: Relatedly, when reindexing a Series or DataFrame, you can also specify a different fill value:
End of explanation
"""
arr = np.arange(12.).reshape((3, 4))
arr
arr[0]
arr - arr[0]
"""
Explanation: Operations between DataFrame and Series
As with NumPy arrays, arithmetic between DataFrame and Series is well-defined. First, as a motivating example, consider the difference between a 2D array and one of its rows:
End of explanation
"""
frame = DataFrame(np.arange(12.).reshape((4, 3)), columns=list('bde'),
index=['Utah', 'Ohio', 'Texas', 'Oregon'])
series = frame.ix[0]
frame
series
"""
Explanation: This is referred to as broadcasting. Operations between a DataFrame and a Series are similar:
End of explanation
"""
frame - series
"""
Explanation: By default, arithmetic between DataFrame and Series matches the index of the Series on the DataFrame's columns, broadcasting down the rows:
End of explanation
"""
series2 = Series(range(3), index=['b', 'e', 'f'])
series2
frame + series2
"""
Explanation: If an index value is not found in either the DataFrame’s columns or the Series’s index, the objects will be reindexed to form the union:
End of explanation
"""
series3 = frame['d']
frame
series3
frame.sub(series3, axis=0)
"""
Explanation: If you want to instead broadcast over the columns, matching on the rows, you have to use one of the arithmetic methods. For example:
End of explanation
"""
frame = DataFrame(np.random.randn(4, 3), columns=list('bde'),
index=['Utah', 'Ohio', 'Texas', 'Oregon'])
frame
np.abs(frame)
f = lambda x: x.max() - x.min()
frame.apply(f)
frame.apply(f, axis=1)
"""
Explanation: Function application and mapping
NumPy ufuncs (element-wise array methods) work fine with pandas objects:
End of explanation
"""
def f(x):
return Series([x.min(), x.max()], index=['min', 'max'])
frame.apply(f)
"""
Explanation: Many of the most common array statistics (like sum and mean) are DataFrame methods, so using apply is not necessary.
The function passed to apply need not return a scalar value, it can also return a Series with multiple values:
End of explanation
"""
format = lambda x: '%.2f' % x
frame.applymap(format)
"""
Explanation: Element-wise Python functions can be used, too. Suppose you wanted to compute a formatted string from each floating point value in frame. You can do this with applymap:
End of explanation
"""
frame['e'].map(format)
"""
Explanation: The reason for the name applymap is that Series has a map method for applying an element-wise function:
End of explanation
"""
obj = Series(range(4), index=['d', 'a', 'b', 'c'])
obj.sort_index()
"""
Explanation: Sorting and ranking
Sorting a data set by some criterion is another important built-in operation. To sort lexicographically by row or column index, use the sort_index method, which returns a new, sorted object:
End of explanation
"""
frame = DataFrame(np.arange(8).reshape((2, 4)), index=['three', 'one'],
columns=['d', 'a', 'b', 'c'])
frame.sort_index()
frame.sort_index(axis=1)
"""
Explanation: With a DataFrame, you can sort by index on either axis:
End of explanation
"""
frame.sort_index(axis=1, ascending=False)
"""
Explanation: The data is sorted in ascending order by default, but can be sorted in descending order, too:
End of explanation
"""
obj = Series([4, 7, -3, 2])
obj.order()
obj.sort_values()
"""
Explanation: To sort a Series by its values, use its order method (renamed sort_values in later pandas versions):
End of explanation
"""
obj = Series([4, np.nan, 7, np.nan, -3, 2])
obj.sort_values()
"""
Explanation: Any missing values are sorted to the end of the Series by default:
End of explanation
"""
frame = DataFrame({'b': [4, 7, -3, 2], 'a': [0, 1, 0, 1]})
frame
frame.sort_index(by='b')
frame.sort_values(by='b')
frame.sort_index(by=['a', 'b'])
frame.sort_values(by=['a', 'b'])
"""
Explanation: On DataFrame, you may want to sort by the values in one or more columns. To do so, pass one or more column names to the by option:
End of explanation
"""
obj = Series([7, -5, 7, 4, 2, 0, 4])
obj
obj.rank?
obj.rank()
"""
Explanation: Ranking is closely related to sorting, assigning ranks from one through the number of valid data points in an array. It is similar to the indirect sort indices produced by numpy.argsort, except that ties are broken according to a rule. The rank methods for Series and DataFrame are the place to look; by default rank breaks ties by assigning each group the mean rank:
End of explanation
"""
obj.rank(method='first')
"""
Explanation: Ranks can also be assigned according to the order they’re observed in the data:
End of explanation
"""
obj.rank(ascending=False, method='max')
"""
Explanation: Naturally, you can rank in descending order, too:
End of explanation
"""
frame = DataFrame({'b': [4.3, 7, -3, 2], 'a': [0, 1, 0, 1],
'c': [-2, 5, 8, -2.5]})
frame
frame.rank(axis=1)
"""
Explanation: The "method" is used for breaking ties:
'average' Default: assign the average rank to each entry in the equal group.
'min' Use the minimum rank for the whole group.
'max' Use the maximum rank for the whole group.
'first' Assign ranks in the order the values appear in the data.
End of explanation
"""
obj = Series(range(5), index=['a', 'a', 'b', 'b', 'c'])
obj
obj.index.is_unique
"""
Explanation: Axis indexes with duplicate values
While many pandas functions (like reindex) require that the labels be unique, it’s not mandatory. Let’s consider a small Series with duplicate indices:
End of explanation
"""
obj['a']
obj['c']
"""
Explanation: Data selection is one of the main things that behaves differently with duplicates. Indexing a value with multiple entries returns a Series while single entries return a scalar value:
End of explanation
"""
df = DataFrame(np.random.randn(4, 3), index=['a', 'a', 'b', 'b'])
df
df.ix['b']
"""
Explanation: The same logic extends to indexing rows in a DataFrame:
End of explanation
"""
df = DataFrame([[1.4, np.nan], [7.1, -4.5],
[np.nan, np.nan], [0.75, -1.3]],
index=['a', 'b', 'c', 'd'],
columns=['one', 'two'])
df
"""
Explanation: Summarizing and computing descriptive statistics
pandas objects are equipped with a set of common mathematical and statistical methods. Most of these fall into the category of reductions or summary statistics, methods that extract a single value (like the sum or mean) from a Series or a Series of values from the rows or columns of a DataFrame. Compared with the equivalent methods of vanilla NumPy arrays, they are all built from the ground up to exclude missing data. Consider a small DataFrame:
End of explanation
"""
df.sum()
"""
Explanation: Calling DataFrame’s sum method returns a Series containing column sums:
End of explanation
"""
df.sum(axis=1)
"""
Explanation: Passing axis=1 sums over the rows instead:
End of explanation
"""
df.mean(axis=1, skipna=False)
"""
Explanation: NA values are excluded unless the entire slice (row or column in this case) is NA. This can be disabled using the skipna option:
End of explanation
"""
df.idxmax()
"""
Explanation: Common options for each reduction method are:
- axis: Axis to reduce over. 0 for DataFrame’s rows and 1 for columns.
- skipna: Exclude missing values, True by default.
- level: Reduce grouped by level if the axis is hierarchically-indexed (MultiIndex).
Some methods, like idxmin and idxmax, return indirect statistics like the index value where the minimum or maximum values are attained:
End of explanation
"""
df.cumsum()
"""
Explanation: Other methods are accumulations:
End of explanation
"""
df.describe()
"""
Explanation: Another type of method is neither a reduction nor an accumulation. describe is one such example, producing multiple summary statistics in one shot:
End of explanation
"""
obj = Series(['a', 'a', 'b', 'c'] * 4)
obj.describe()
"""
Explanation: On non-numeric data, describe produces alternate summary statistics:
End of explanation
"""
# Note: pandas.io.data was later deprecated and moved to the
# separate pandas-datareader package
import pandas.io.data as web
all_data = {}
for ticker in ['AAPL', 'IBM', 'MSFT', 'GOOG']:
all_data[ticker] = web.get_data_yahoo(ticker)
price = DataFrame({tic: data['Adj Close']
for tic, data in all_data.iteritems()})
volume = DataFrame({tic: data['Volume']
for tic, data in all_data.iteritems()})
"""
Explanation: Correlation and covariance in Finance
Some summary statistics, like correlation and covariance, are computed from pairs of arguments. Let’s consider some DataFrames of stock prices and volumes obtained from Yahoo! Finance:
End of explanation
"""
price.head()
returns = price.pct_change()
returns.tail()
"""
Explanation: Now, compute percent changes of the prices:
End of explanation
"""
returns.MSFT.corr(returns.IBM)
returns.MSFT.cov(returns.IBM)
"""
Explanation: The corr method of Series computes the correlation of the overlapping, non-NA, aligned-by-index values in two Series. Relatedly, cov computes the covariance:
End of explanation
"""
returns.corr()
returns.cov()
"""
Explanation: DataFrame’s corr and cov methods, on the other hand, return a full correlation or covariance matrix as a DataFrame, respectively:
End of explanation
"""
returns.corrwith(returns.IBM)
"""
Explanation: Using DataFrame’s corrwith method, you can compute pairwise correlations between a DataFrame’s columns or rows with another Series or DataFrame. Passing a Series returns a Series with the correlation value computed for each column:
End of explanation
"""
returns.corrwith(volume)
"""
Explanation: Passing a DataFrame computes the correlations of matching column names. Here I compute correlations of percent changes with volume:
End of explanation
"""
obj = Series(['c', 'a', 'd', 'a', 'a', 'b', 'b', 'c', 'c'])
"""
Explanation: Passing axis=1 does things row-wise instead. In all cases, the data points are aligned by label before computing the correlation.
Unique values, value counts, and membership
Another class of related methods extracts information about the values contained in a one-dimensional Series.
isin: Compute boolean array indicating whether each Series value is contained in the passed sequence of values.
unique: Compute array of unique values in a Series, returned in the order observed.
value_counts: Return a Series containing unique values as its index and frequencies as its values, ordered by count in descending order.
To illustrate these, consider this example:
End of explanation
"""
uniques = obj.unique()
uniques
"""
Explanation: The first function is unique, which gives you an array of the unique values in a Series:
End of explanation
"""
obj.value_counts()
"""
Explanation: The unique values are not necessarily returned in sorted order, but could be sorted after the fact if needed (uniques.sort()). Relatedly, value_counts computes a Series containing value frequencies:
End of explanation
"""
pd.value_counts(obj.values, sort=False)
"""
Explanation: The Series is sorted by value in descending order as a convenience. value_counts is also available as a top-level pandas method that can be used with any array or sequence:
End of explanation
"""
mask = obj.isin(['b', 'c'])
mask
obj[mask]
"""
Explanation: Lastly, isin is responsible for vectorized set membership and can be very useful in filtering a data set down to a subset of values in a Series or column in a DataFrame:
End of explanation
"""
data = DataFrame({'Qu1': [1, 3, 4, 3, 4],
'Qu2': [2, 3, 1, 2, 3],
'Qu3': [1, 5, 2, 4, 4]})
data
"""
Explanation: In some cases, you may want to compute a histogram on multiple related columns in a DataFrame. Here’s an example:
End of explanation
"""
result = data.apply(pd.value_counts).fillna(0)
result
"""
Explanation: Passing pandas.value_counts to this DataFrame’s apply function gives:
End of explanation
"""
string_data = Series(['aardvark', 'artichoke', np.nan, 'avocado'])
string_data
string_data.isnull()
"""
Explanation: Handling missing data
Missing data is common in most data analysis applications. One of the goals in designing pandas was to make working with missing data as painless as possible. For example, all of the descriptive statistics on pandas objects exclude missing data, as you’ve seen earlier.
pandas uses the floating point value NaN (Not a Number) to represent missing data in both floating point and non-floating point arrays. It is just used as a sentinel that can be easily detected:
End of explanation
"""
string_data[0] = None
string_data.isnull()
"""
Explanation: The built-in Python None value is also treated as NA in object arrays:
End of explanation
"""
from numpy import nan as NA
data = Series([1, NA, 3.5, NA, 7])
data.dropna()
"""
Explanation: Filtering out missing data
Common NA handling methods are:
dropna: Filter axis labels based on whether values for each label have missing data, with varying thresholds for how much missing data to tolerate.
fillna: Fill in missing data with some value or using an interpolation method such as 'ffill' or 'bfill'.
isnull: Return like-type object containing boolean values indicating which values are missing / NA.
notnull: Negation of isnull.
You have a number of options for filtering out missing data. While doing it by hand is always an option, dropna can be very helpful. On a Series, it returns the Series with only the non-null data and index values:
End of explanation
"""
data[data.notnull()]
"""
Explanation: Naturally, you could have computed this yourself by boolean indexing:
End of explanation
"""
data = DataFrame([[1., 6.5, 3.], [1., NA, NA],
[NA, NA, NA], [NA, 6.5, 3.]])
cleaned = data.dropna()
data
cleaned
"""
Explanation: With DataFrame objects, these are a bit more complex. You may want to drop rows or columns which are all NA or just those containing any NAs. dropna by default drops any row containing a missing value:
End of explanation
"""
data.dropna(how='all')
"""
Explanation: Passing how='all' will only drop rows that are all NA:
End of explanation
"""
data[4] = NA
data
data.dropna(axis=1, how='all')
"""
Explanation: Dropping columns in the same way is only a matter of passing axis=1:
End of explanation
"""
df = DataFrame(np.random.randn(7, 3))
df.loc[:4, 1] = NA; df.loc[:2, 2] = NA  # .ix was removed from pandas; .loc keeps the inclusive label slice
df
df.dropna(thresh=3)
"""
Explanation: A related way to filter out DataFrame rows tends to concern time series data. Suppose you want to keep only rows containing a certain number of observations. You can indicate this with the thresh argument:
End of explanation
"""
df.fillna(0)
"""
Explanation: Filling in missing data
Rather than filtering out missing data (and potentially discarding other data along with it), you may want to fill in the “holes” in any number of ways. For most purposes, the fillna method is the workhorse function to use. Calling fillna with a constant replaces missing values with that value:
End of explanation
"""
df.fillna({1: 0.5, 2: -1})
"""
Explanation: Calling fillna with a dict, you can use a different fill value for each column:
End of explanation
"""
# with inplace=True, fillna fills the existing object in place and returns None
_ = df.fillna(0, inplace=True)
df
"""
Explanation: fillna returns a new object, but you can modify the existing object in place:
End of explanation
"""
df = DataFrame(np.random.randn(6, 3))
df.loc[2:, 1] = NA; df.loc[4:, 2] = NA  # .ix was removed from pandas; .loc is the label-based equivalent
df
df.fillna(method='ffill')
df.fillna(method='ffill', limit=2)
"""
Explanation: The same interpolation methods available for reindexing can be used with fillna:
End of explanation
"""
data = Series([1., NA, 3.5, NA, 7])
data.fillna(data.mean())
"""
Explanation: With fillna you can do lots of other things with a little creativity. For example, you might pass the mean or median value of a Series:
End of explanation
"""
data = Series(np.random.randn(10),
index=[['a', 'a', 'a', 'b', 'b', 'b', 'c', 'c', 'd', 'd'],
[1, 2, 3, 1, 2, 3, 1, 2, 2, 3]])
data
data.index
data['b']
data['b':'c']
data.loc[['b', 'd']]  # .ix was removed from pandas; .loc does label-based selection
data[:, 2]
data.unstack()
data.unstack().stack()
frame = DataFrame(np.arange(12).reshape((4, 3)),
index=[['a', 'a', 'b', 'b'], [1, 2, 1, 2]],
columns=[['Ohio', 'Ohio', 'Colorado'],
['Green', 'Red', 'Green']])
frame
frame.index.names = ['key1', 'key2']
frame.columns.names = ['state', 'color']
frame
frame['Ohio']
"""
Explanation: Hierarchical indexing
Hierarchical indexing is an important feature of pandas enabling you to have multiple (two or more) index levels on an axis. Somewhat abstractly, it provides a way for you to work with higher dimensional data in a lower dimensional form. Let’s start with a simple example; create a Series with a list of lists or arrays as the index:
End of explanation
"""
frame.swaplevel('key1', 'key2')
frame.sort_index(level=1)  # sortlevel was removed from pandas; sort_index(level=...) replaces it
frame.swaplevel(0, 1).sort_index(level=0)
"""
Explanation: Reordering and sorting levels
End of explanation
"""
# The `level=` argument to reductions (e.g. frame.sum(level='key2')) was removed
# from pandas; groupby(level=...) is the modern spelling.
frame.groupby(level='key2').sum()
frame.groupby(level='color', axis=1).sum()
"""
Explanation: Summary statistics by level
End of explanation
"""
frame = DataFrame({'a': range(7), 'b': range(7, 0, -1),
'c': ['one', 'one', 'one', 'two', 'two', 'two', 'two'],
'd': [0, 1, 2, 0, 1, 2, 3]})
frame
frame2 = frame.set_index(['c', 'd'])
frame2
frame.set_index(['c', 'd'], drop=False)
frame2.reset_index()
"""
Explanation: Using a DataFrame's columns
End of explanation
"""
ser = Series(np.arange(3.))
ser.iloc[-1]
ser
ser2 = Series(np.arange(3.), index=['a', 'b', 'c'])
ser2[-1]
ser.loc[:1]  # was ser.ix[:1]; a label-based slice that includes the label 1
ser3 = Series(range(3), index=[-5, 1, 3])
ser3.iloc[2]
frame = DataFrame(np.arange(6).reshape((3, 2)), index=[2, 0, 1])
frame.iloc[0]
"""
Explanation: Other pandas topics
Integer indexing
End of explanation
"""
# Note: pandas.io.data moved to the separate `pandas-datareader` package,
# and pd.Panel was removed in pandas 1.0, so this cell only runs on old versions.
import pandas.io.data as web
pdata = pd.Panel(dict((stk, web.get_data_yahoo(stk))
for stk in ['AAPL', 'GOOG', 'MSFT', 'DELL']))
pdata
pdata = pdata.swapaxes('items', 'minor')
pdata['Adj Close']
pdata.ix[:, '6/1/2012', :]
pdata.ix['Adj Close', '5/22/2012':, :]
stacked = pdata.ix[:, '5/30/2012':, :].to_frame()
stacked
stacked.to_panel()
"""
Explanation: Panel data
End of explanation
"""
# --- tensorflow/docs : site/en/tutorials/images/classification.ipynb (apache-2.0) ---
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
"""
Explanation: Image classification
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/images/classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/images/classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/images/classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial shows how to classify images of flowers. It creates an image classifier using a tf.keras.Sequential model, and loads data using tf.keras.utils.image_dataset_from_directory. You will gain practical experience with the following concepts:
Efficiently loading a dataset off disk.
Identifying overfitting and applying techniques to mitigate it, including data augmentation and dropout.
This tutorial follows a basic machine learning workflow:
Examine and understand data
Build an input pipeline
Build the model
Train the model
Test the model
Improve the model and repeat the process
Import TensorFlow and other libraries
End of explanation
"""
import pathlib
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
data_dir = tf.keras.utils.get_file('flower_photos', origin=dataset_url, untar=True)
data_dir = pathlib.Path(data_dir)
"""
Explanation: Download and explore the dataset
This tutorial uses a dataset of about 3,700 photos of flowers. The dataset contains five sub-directories, one per class:
flower_photo/
daisy/
dandelion/
roses/
sunflowers/
tulips/
End of explanation
"""
image_count = len(list(data_dir.glob('*/*.jpg')))
print(image_count)
"""
Explanation: After downloading, you should now have a copy of the dataset available. There are 3,670 total images:
End of explanation
"""
roses = list(data_dir.glob('roses/*'))
PIL.Image.open(str(roses[0]))
PIL.Image.open(str(roses[1]))
"""
Explanation: Here are some roses:
End of explanation
"""
tulips = list(data_dir.glob('tulips/*'))
PIL.Image.open(str(tulips[0]))
PIL.Image.open(str(tulips[1]))
"""
Explanation: And some tulips:
End of explanation
"""
batch_size = 32
img_height = 180
img_width = 180
"""
Explanation: Load data using a Keras utility
Let's load these images off disk using the helpful tf.keras.utils.image_dataset_from_directory utility. This will take you from a directory of images on disk to a tf.data.Dataset in just a couple lines of code. If you like, you can also write your own data loading code from scratch by visiting the Load and preprocess images tutorial.
Create a dataset
Define some parameters for the loader:
End of explanation
"""
train_ds = tf.keras.utils.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="training",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
val_ds = tf.keras.utils.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="validation",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
"""
Explanation: It's good practice to use a validation split when developing your model. Let's use 80% of the images for training, and 20% for validation.
End of explanation
"""
class_names = train_ds.class_names
print(class_names)
"""
Explanation: You can find the class names in the class_names attribute on these datasets. These correspond to the directory names in alphabetical order.
End of explanation
"""
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
for images, labels in train_ds.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(images[i].numpy().astype("uint8"))
plt.title(class_names[labels[i]])
plt.axis("off")
"""
Explanation: Visualize the data
Here are the first nine images from the training dataset:
End of explanation
"""
for image_batch, labels_batch in train_ds:
print(image_batch.shape)
print(labels_batch.shape)
break
"""
Explanation: You will train a model using these datasets by passing them to Model.fit in a moment. If you like, you can also manually iterate over the dataset and retrieve batches of images:
End of explanation
"""
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
"""
Explanation: The image_batch is a tensor of the shape (32, 180, 180, 3). This is a batch of 32 images of shape 180x180x3 (the last dimension refers to color channels RGB). The label_batch is a tensor of the shape (32,), these are corresponding labels to the 32 images.
You can call .numpy() on the image_batch and labels_batch tensors to convert them to a numpy.ndarray.
Configure the dataset for performance
Let's make sure to use buffered prefetching so you can yield data from disk without having I/O become blocking. These are two important methods you should use when loading data:
Dataset.cache keeps the images in memory after they're loaded off disk during the first epoch. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache.
Dataset.prefetch overlaps data preprocessing and model execution while training.
Interested readers can learn more about both methods, as well as how to cache data to disk in the Prefetching section of the Better performance with the tf.data API guide.
End of explanation
"""
normalization_layer = layers.Rescaling(1./255)
"""
Explanation: Standardize the data
The RGB channel values are in the [0, 255] range. This is not ideal for a neural network; in general you should seek to make your input values small.
Here, you will standardize values to be in the [0, 1] range by using tf.keras.layers.Rescaling:
End of explanation
"""
normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
image_batch, labels_batch = next(iter(normalized_ds))
first_image = image_batch[0]
# Notice the pixel values are now in `[0,1]`.
print(np.min(first_image), np.max(first_image))
"""
Explanation: There are two ways to use this layer. You can apply it to the dataset by calling Dataset.map:
End of explanation
"""
num_classes = len(class_names)
model = Sequential([
layers.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
"""
Explanation: Or, you can include the layer inside your model definition, which can simplify deployment. Let's use the second approach here.
Note: You previously resized images using the image_size argument of tf.keras.utils.image_dataset_from_directory. If you want to include the resizing logic in your model as well, you can use the tf.keras.layers.Resizing layer.
Create the model
The Sequential model consists of three convolution blocks (tf.keras.layers.Conv2D) with a max pooling layer (tf.keras.layers.MaxPooling2D) in each of them. There's a fully-connected layer (tf.keras.layers.Dense) with 128 units on top of it that is activated by a ReLU activation function ('relu'). This model has not been tuned for high accuracy—the goal of this tutorial is to show a standard approach.
End of explanation
"""
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
"""
Explanation: Compile the model
For this tutorial, choose the tf.keras.optimizers.Adam optimizer and tf.keras.losses.SparseCategoricalCrossentropy loss function. To view training and validation accuracy for each training epoch, pass the metrics argument to Model.compile.
End of explanation
"""
model.summary()
"""
Explanation: Model summary
View all the layers of the network using the model's Model.summary method:
End of explanation
"""
epochs=10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
"""
Explanation: Train the model
End of explanation
"""
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
"""
Explanation: Visualize training results
Create plots of loss and accuracy on the training and validation sets:
End of explanation
"""
data_augmentation = keras.Sequential(
[
layers.RandomFlip("horizontal",
input_shape=(img_height,
img_width,
3)),
layers.RandomRotation(0.1),
layers.RandomZoom(0.1),
]
)
"""
Explanation: The plots show that training accuracy and validation accuracy are off by large margins, and the model has achieved only around 60% accuracy on the validation set.
Let's inspect what went wrong and try to increase the overall performance of the model.
Overfitting
In the plots above, the training accuracy is increasing linearly over time, whereas validation accuracy stalls around 60% in the training process. Also, the difference in accuracy between training and validation accuracy is noticeable—a sign of overfitting.
When there are a small number of training examples, the model sometimes learns from noise or unwanted details in the training examples—to an extent that it negatively impacts the performance of the model on new examples. This phenomenon is known as overfitting. It means that the model will have a difficult time generalizing on a new dataset.
There are multiple ways to fight overfitting in the training process. In this tutorial, you'll use data augmentation and add Dropout to your model.
Data augmentation
Overfitting generally occurs when there are a small number of training examples. Data augmentation takes the approach of generating additional training data from your existing examples by augmenting them using random transformations that yield believable-looking images. This helps expose the model to more aspects of the data and generalize better.
You will implement data augmentation using the following Keras preprocessing layers: tf.keras.layers.RandomFlip, tf.keras.layers.RandomRotation, and tf.keras.layers.RandomZoom. These can be included inside your model like other layers, and run on the GPU.
End of explanation
"""
plt.figure(figsize=(10, 10))
for images, _ in train_ds.take(1):
for i in range(9):
augmented_images = data_augmentation(images)
ax = plt.subplot(3, 3, i + 1)
plt.imshow(augmented_images[0].numpy().astype("uint8"))
plt.axis("off")
"""
Explanation: Let's visualize what a few augmented examples look like by applying data augmentation to the same image several times:
End of explanation
"""
model = Sequential([
data_augmentation,
layers.Rescaling(1./255),
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(0.2),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
"""
Explanation: You will use data augmentation to train a model in a moment.
Dropout
Another technique to reduce overfitting is to introduce dropout regularization to the network.
When you apply dropout to a layer, it randomly drops out (by setting the activation to zero) a number of output units from the layer during the training process. Dropout takes a fractional number as its input value, in the form such as 0.1, 0.2, 0.4, etc. This means dropping out 10%, 20% or 40% of the output units randomly from the applied layer.
Let's create a new neural network with tf.keras.layers.Dropout before training it using the augmented images:
End of explanation
"""
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.summary()
epochs = 15
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
"""
Explanation: Compile and train the model
End of explanation
"""
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
"""
Explanation: Visualize training results
After applying data augmentation and tf.keras.layers.Dropout, there is less overfitting than before, and training and validation accuracy are closer aligned:
End of explanation
"""
sunflower_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/592px-Red_sunflower.jpg"
sunflower_path = tf.keras.utils.get_file('Red_sunflower', origin=sunflower_url)
img = tf.keras.utils.load_img(
sunflower_path, target_size=(img_height, img_width)
)
img_array = tf.keras.utils.img_to_array(img)
img_array = tf.expand_dims(img_array, 0) # Create a batch
predictions = model.predict(img_array)
score = tf.nn.softmax(predictions[0])
print(
"This image most likely belongs to {} with a {:.2f} percent confidence."
.format(class_names[np.argmax(score)], 100 * np.max(score))
)
"""
Explanation: Predict on new data
Finally, let's use our model to classify an image that wasn't included in the training or validation sets.
Note: Data augmentation and dropout layers are inactive at inference time.
End of explanation
"""
# --- vascotenner/holoviews : doc/Examples/SRI_Model.ipynb (bsd-3-clause) ---
import collections
import itertools
import math
import numpy as np
np.seterr(divide='ignore')
import numpy.random as rnd
import networkx as nx
import param
import holoviews as hv
SPREADING_SUSCEPTIBLE = 'S'
SPREADING_VACCINATED = 'V'
SPREADING_INFECTED = 'I'
SPREADING_RECOVERED = 'R'
DEAD = 'D'
class SRI_Model(param.Parameterized):
"""
Implementation of the SRI epidemiology model
using NetworkX and HoloViews for visualization.
This code has been adapted from Simon Dobson's
code here:
http://www.simondobson.org/complex-networks-complex-processes/epidemic-spreading.html
In addition to his basic parameters I've added
additional states to the model, a node may be
in one of the following states:
* Susceptible: Can catch the disease from a connected node.
* Vaccinated: Immune to infection.
* Infected: Has the disease and may pass it on to any connected node.
* Recovered: Immune to infection.
* Dead: Edges are removed from graph.
"""
network = param.ClassSelector(class_=nx.Graph, default=None, doc="""
A custom NetworkX graph, instead of the default Erdos-Renyi graph.""")
visualize = param.Boolean(default=True, doc="""
Whether to compute layout of network for visualization.""")
# Initial parameters
N = param.Integer(default=1000, doc="""
Number of nodes to simulate.""")
mean_connections = param.Number(default=10, doc="""
Mean number of connections to make to other nodes.""")
pSick = param.Number(default=0.01, doc="""
Probability of a node to be initialized in sick state.""", bounds=(0, 1))
pVaccinated = param.Number(default=0.1, bounds=(0, 1), doc="""
Probability of a node to be initialized in vaccinated state.""")
# Simulation parameters
pInfect = param.Number(default=0.3, doc="""
Probability of infection on each time step.""", bounds=(0, 1))
pRecover = param.Number(default=0.05, doc="""
Probability of recovering if infected on each timestep.""", bounds=(0, 1))
pDeath = param.Number(default=0.1, doc="""
Probability of death if infected on each timestep.""", bounds=(0, 1))
def __init__(self, **params):
super(SRI_Model, self).__init__(**params)
if not self.network:
self.g = nx.erdos_renyi_graph(self.N, float(self.mean_connections)/self.N)
else:
self.g = self.network
self.vaccinated, self.infected = self.spreading_init()
self.model = self.spreading_make_sir_model()
self.color_mapping = [SPREADING_SUSCEPTIBLE,
SPREADING_VACCINATED,
SPREADING_INFECTED,
SPREADING_RECOVERED, DEAD]
if self.visualize:
self.pos = nx.spring_layout(self.g, iterations = 50,
k = 2/(math.sqrt(self.g.order())))
def spreading_init(self):
"""Initialise the network with vaccinated, susceptible and infected states."""
vaccinated, infected = 0, []
for i in self.g.node.keys():
self.g.node[i]['transmissions'] = 0
if(rnd.random() <= self.pVaccinated):
self.g.node[i]['state'] = SPREADING_VACCINATED
vaccinated += 1
elif(rnd.random() <= self.pSick):
self.g.node[i]['state'] = SPREADING_INFECTED
infected.append(i)
else:
self.g.node[i]['state'] = SPREADING_SUSCEPTIBLE
return vaccinated, infected
def spreading_make_sir_model(self):
"""Return an SIR model function for given infection and recovery probabilities."""
# model (local rule) function
def model( g, i ):
if g.node[i]['state'] == SPREADING_INFECTED:
# infect susceptible neighbours with probability pInfect
for m in g.neighbors(i):
if g.node[m]['state'] == SPREADING_SUSCEPTIBLE:
if rnd.random() <= self.pInfect:
g.node[m]['state'] = SPREADING_INFECTED
self.infected.append(m)
g.node[i]['transmissions'] += 1
# recover with probability pRecover
if rnd.random() <= self.pRecover:
g.node[i]['state'] = SPREADING_RECOVERED
elif rnd.random() <= self.pDeath:
edges = [edge for edge in self.g.edges() if i in edge]
g.node[i]['state'] = DEAD
g.remove_edges_from(edges)
return model
def step(self):
"""Run a single step of the model over the graph."""
for i in self.g.node.keys():
self.model(self.g, i)
def run(self, steps):
"""
Run the network for the specified number of time steps
"""
for i in range(steps):
self.step()
def network_data(self):
"""
Return the network edge paths and node positions,
requires visualize parameter to be enabled.
"""
if not self.visualize:
raise Exception("Enable visualize option to get network data.")
nodeMarkers = []
overlay = []
points = np.array([self.pos[v] for v in self.g.nodes_iter()])
paths = []
for e in self.g.edges_iter():
xs = [ self.pos[e[0]][0], self.pos[e[1]][0] ]
ys = [ self.pos[e[0]][1], self.pos[e[1]][1] ]
paths.append(np.array(list(zip(xs, ys))))  # list() needed under Python 3, where zip is lazy
return paths, points
def stats(self):
"""
Return an ItemTable with statistics on the network data.
"""
state_labels = hv.OrderedDict([('S', 'Susceptible'), ('V', 'Vaccinated'), ('I', 'Infected'),
('R', 'Recovered'), ('D', 'Dead')])
counts = collections.Counter()
transmissions = []
for n in self.g.nodes_iter():
state = state_labels[self.g.node[n]['state']]
counts[state] += 1
if n in self.infected:
transmissions.append(self.g.node[n]['transmissions'])
data = hv.OrderedDict([(l, counts[l])
for l in state_labels.values()])
infected = len(set(self.infected))
unvaccinated = float(self.N-self.vaccinated)
data['$R_0$'] = np.mean(transmissions) if transmissions else 0
data['Death rate DR'] = np.divide(float(data['Dead']),self.N)
data['Infection rate IR'] = np.divide(float(infected), self.N)
if unvaccinated:
unvaccinated_dr = data['Dead']/unvaccinated
unvaccinated_ir = infected/unvaccinated
else:
unvaccinated_dr = 0
unvaccinated_ir = 0
data['Unvaccinated DR'] = unvaccinated_dr
data['Unvaccinated IR'] = unvaccinated_ir
return hv.ItemTable(data)
def animate(self, steps):
"""
Run the network for the specified number of steps accumulating animations
of the network nodes and edges changing states and curves tracking the
spread of the disease.
"""
if not self.visualize:
raise Exception("Enable visualize option to get compute network visulizations.")
# Declare HoloMap for network animation and counts array
network_hmap = hv.HoloMap(key_dimensions=['Time'])
sird = np.zeros((steps, 5))
# Declare dimensions and labels
spatial_dims = [hv.Dimension('x', range=(-1.1, 1.1)),
hv.Dimension('y', range=(-1.1, 1.1))]
state_labels = ['Susceptible', 'Vaccinated', 'Infected', 'Recovered', 'Dead']
# Text annotation
nlabel = hv.Text(0.9, 0.05, 'N=%d' % self.N)
for i in range(steps):
# Get path, point, states and count data
paths, points = self.network_data()
states = [self.color_mapping.index(self.g.node[n]['state'])
for n in self.g.nodes_iter()]
state_array = np.array(states, ndmin=2).T
(sird[i, :], _) = np.histogram(state_array, bins=list(range(6)))
# Create network path and node Elements
network_paths = hv.Path(paths, key_dimensions=spatial_dims)
network_nodes = hv.Points(np.hstack([points, state_array]),
key_dimensions=spatial_dims,
value_dimensions=['State'])
# Create overlay and accumulate in network HoloMap
network_hmap[i] = (network_paths * network_nodes * nlabel).relabel(group='Network', label='SRI')
self.step()
# Create Overlay of Curves
extents = (-1, -1, steps, np.max(sird)+2)
curves = hv.NdOverlay({label: hv.Curve(list(zip(range(steps), sird[:, i])), extents=extents,
key_dimensions=['Time'], value_dimensions=['Count'])
for i, label in enumerate(state_labels)},
key_dimensions=[hv.Dimension('State', values=state_labels)])
# Animate VLine on top of Curves
distribution = hv.HoloMap({i: (curves * hv.VLine(i)).relabel(group='Counts', label='SRI')
for i in range(steps)}, key_dimensions=['Time'])
return network_hmap + distribution
"""
Explanation: <div class="alert alert-info" role="alert">
This blog post first appeared on <a href="http://philippjfr.com/blog/networkx-epidemiology/">philippjfr.com</a>, however this example will be updated and maintained.
</div>
For a recent talk in my department I talked a little bit about agent based modeling and in the process I came across the simple but quite interesting SIR model in epidemiology. The inspiration for this post was Simon Dobson's post on Epidemic spreading processes, which provides a much more detailed scientific background and takes you through some of the code step by step. However, as a brief introduction:
I've made some minor tweaks to the model by adding vaccinated and dead states. I've also unified the function based approach into a single Parameterized class, which takes care of initializing, running and visualizing the network.
In this blog post I'll primarily look at how we can quickly create complex visualizations of this model using HoloViews. In the process I'll look at some predictions this model can make about herd immunity but won't be giving it any rigorous scientific treatment.
The Code
Here's the code for the model relying only on numpy, networkx, holoviews and matplotlib in the background.
End of explanation
"""
hv.notebook_extension()
# Increase dpi and select the slider widget
%output dpi=120 holomap='widgets'
# Set colors and style options for the Element types
from holoviews import Store, Options
from holoviews.core.options import Palette
opts = Store.options()
opts.Path = Options('style', linewidth=0.2, color='k')
opts.ItemTable = Options('plot', aspect=1.2, fig_size=150)
opts.Curve = Options('style', color=Palette('hot_r'))
opts.Histogram = Options('plot', bgcolor='w', show_grid=False)
opts.Overlay = Options('plot', show_frame=False)
opts.HeatMap = Options('plot', show_values=False, show_grid=False,
aspect=1.5, xrotation=90)
opts.Overlay.Network = Options('plot', xaxis=None, yaxis=None, bgcolor='w')
opts.Overlay.Counts = Options('plot', aspect=1.2, show_grid=True)
opts.Points = {'style': Options(cmap='hot_r', s=50, edgecolors='k'),
'plot': Options(color_index=2)}
opts.VLine = {'style': Options(color='k', linewidth=1),
'plot': Options(show_grid=True)}
"""
Explanation: The style
HoloViews allows use to define various style options in advance on the Store.options object.
End of explanation
"""
import seaborn
seaborn.set()
"""
Explanation: Next we'll simply enable the Seaborn plot style defaults because they look a bit better than the HoloViews defaults for this kind of data.
End of explanation
"""
experiment1_params = dict(pInfect=0.08, pRecover=0.08, pSick=0.15,
N=50, mean_connections=10, pDeath=0.1)
"""
Explanation: Herd Immunity
Experiment 1: Evaluating the effects of a highly infectious and deadly disease in a small population with varying levels of vaccination
Having defined the model we can run some real experiments. In particular we can investigate the effect of vaccination on our model.
We'll initialize our model with only 50 individuals, who will on average make 10 connections to other individuals. Then we will infect a small population ($p=0.1$) so we can track how the disease spreads through the population. To really drive the point home we'll use a very infectious and deadly disease.
End of explanation
"""
sri_model = SRI_Model(pVaccinated=0.1, **experiment1_params)
sri_model.animate(21)
"""
Explanation: Low vaccination population (10%)
Here we'll investigate the spread of the disease in a population with a 10% vaccination rate:
End of explanation
"""
sri_model.stats()
"""
Explanation: In figure A we can observe how the disease quickly spreads across almost the entire unvaccinated population. Additionally we can track the number of individuals in a particular state in B. As the disease spreads unimpeded, most individuals either die or recover and therefore gain immunity. Individuals that die are obviously no longer part of the network, so their connections to other individuals get deleted; this way we can see the network thin out as the disease wreaks havoc among the population.
Next we can view a breakdown of the final state of the simulation including infection and death rates:
End of explanation
"""
sri_model = SRI_Model(pVaccinated=0.65, **experiment1_params)
sri_model.animate(21)
"""
Explanation: As you can see both the infection and death rates are very high in this population. The disease reached a large percentage of all individuals, causing death in a large fraction of them. Among the unvaccinated population they are of course even higher, with over 90% infected and over 40% dead. The disease spread through our network completely unimpeded. Now let's see what happens if a large fraction of the population is vaccinated.
High vaccination population (65%)
If we increase the initial probability of being vaccinated to $p=0.65$ we'll be able to observe how this affects the spread of the disease through the network:
End of explanation
"""
sri_model.stats()
"""
Explanation: Even though we can still see the disease spreading among non-vaccinated individuals, we can also observe how the vaccinated individuals stop the spread. If an infected individual is connected with a majority of vaccinated individuals, the probability of the disease spreading is strongly impeded. Unlike in the low-vaccination population, the disease stops spreading not because too many individuals have died off; rather, it quickly runs out of steam, such that a majority of the initial, susceptible but healthy population remains completely unaffected.
This is what's known as herd immunity, and it's very important: a small percentage of any population cannot be vaccinated, usually because they are immuno-compromised. However, when a larger percentage of people decide that they do not want to get vaccinated (for various and invariably stupid reasons), they place the rest of the population in danger, particularly those who cannot get vaccinated for health reasons.
Let's look at what higher vaccination rates did to our experimental population:
End of explanation
"""
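As a rough aside (not an output of the network simulation): in the idealized, well-mixed SIR model the herd-immunity threshold is $1 - 1/R_0$, so more transmissible diseases demand higher vaccination coverage. A quick sketch with a few hypothetical $R_0$ values:

```python
# Herd-immunity threshold in the idealized well-mixed SIR model: 1 - 1/R0.
# The R0 values below are illustrative, not fitted to our network model.
for R0 in (1.5, 3.0, 6.0, 12.0):
    threshold = 1 - 1/R0
    print('R0 = {:4.1f} -> roughly {:.0%} of the population must be immune'.format(R0, threshold))
```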
%%output holomap='scrubber' size=150
sri_model_lv = SRI_Model(pVaccinated=0.1, **dict(experiment1_params, N=1000))
sri_layout = sri_model_lv.animate(31)
sri_layout.Network.SRI[::2]
sri_model_hv = SRI_Model(pVaccinated=0.65, visualize=False, **dict(experiment1_params, N=1000))
sri_model_hv.run(100)
(sri_model_lv.stats().relabel('Low Vaccination Population') +
sri_model_hv.stats().relabel('High Vaccination Population'))
"""
Explanation: The precipitous drop in the whole population's infection and death rates is easily explained by the fact that a smaller fraction of the population was susceptible to the disease in the first place. However, as herd immunity would predict, a smaller fraction of the unvaccinated population contracted and died of the disease as well. I hope this toy example once again emphasizes how important vaccination and herd immunity are.
Large networks
Before we take a more systematic look at herd immunity, we'll increase the population size to 1000 individuals and see what our virulent disease does to this population; if nothing else it'll produce a pretty plot. If you're running this notebook live you could also try out one of the interactive backends at this point. To choose mpld3 as a backend run:
python
%output backend='matplotlib:mpld3' size=100
or for nbagg:
python
%output backend='matplotlib:nbagg' widgets='live'
Instead we'll choose a video backend so you can look at the network in full screen:
End of explanation
"""
%output size=80
"""
Explanation: As we can see, the effects we observed in our smaller simulations above still hold. Unvaccinated individuals are much safer in the high-vaccination population than they are in the low-vaccination population.
Experiment 2: Systematically exploring the effect of vaccination rates and connectivity on infection and death rates in a large population
End of explanation
"""
experiment2_params = dict(N=1000, pInfect=0.05, pRecover=0.05,
pSick=0.05, pDeath=0.001, visualize=False)
"""
Explanation: Now let's conduct a more systematic experiment by varying the vaccination rate and the number of connections between individuals. In Experiment 1 we saw that vaccination rates could drastically reduce infection and death rates even among the unvaccinated population. Here we'll use a much less deadly disease, as we're primarily interested in how the disease spreads through populations with more or fewer connections and different vaccination rates. We'll also use a larger population (N=1000) to get a more representative sample.
End of explanation
"""
exp2_dims = ['Connections', 'pVaccinated']
hmap = hv.HoloMap(key_dimensions=exp2_dims)
vacc_rates = np.linspace(0, 1, 21)
mean_conns = [2**i for i in range(7)]
for v, c in itertools.product(vacc_rates, mean_conns):
sri_model = SRI_Model(mean_connections=c, pVaccinated=v, **experiment2_params)
sri_model.run(100)
hmap[c, v] = sri_model.stats()
df = hmap.dframe()
"""
Explanation: Now we explore the parameter space: we'll run the model for vaccination rates from 0% to 100% in 5% increments and for increasing numbers of connections. To speed the whole thing up we've disabled computing the network layout with the visualize parameter and will only collect the final simulation statistics. Finally we can simply deconstruct our data into a pandas data frame.
End of explanation
"""
df[::20]
"""
Explanation: Before we start visualizing this data let's have a look at it:
End of explanation
"""
quantities = ['Unvaccinated IR', 'Infection rate IR', 'Death rate DR', '$R_0$']
state_labels = ['Susceptible', 'Vaccinated', 'Infected', 'Recovered', 'Dead']
%%opts Regression (order=2 x_bins=10)
hv.Layout([hv.Table(df).to.regression('pVaccinated', var, ['Connections'])
for var in quantities]).cols(2)
%%opts Layout [fig_size=200]
%%opts Trisurface (cmap='Reds_r' linewidth=0.1)
(hv.Table(df).to.trisurface(['pVaccinated', 'Connections'],
'$R_0$', [], group='$R_0$') +
hv.Table(df).to.trisurface(['pVaccinated', 'Connections'],
'Infection rate IR', [], group='Infection Rate'))
"""
Explanation: Regressions between vaccination, infection and death rates
Using the HoloViews pandas and seaborn extensions we can now perform regressions on the vaccination rates against infection and death rates. However since we also varied the mean number of connections between individuals in the network we want to consider these variables independently. By assigning the number of connections to a HoloMap we can view each plot independently with a widget.
Let's define the quantities we want to visualize
End of explanation
"""
%%output dpi=80 size=100
%%opts Layout [fig_inches=(12,7) aspect_weight=1]
group_colors = zip(quantities, ['Blues', 'Reds', 'Greens', 'Purples'])
hv.Layout([hv.Table(df).to.heatmap(['pVaccinated', 'Connections'],
q, [], group=q)(style=dict(cmap=c)).hist()
for q, c in group_colors]).display('all').cols(2)
"""
Explanation: By varying the number of connections we can observe second-order effects that would usually be invisible to us. After playing around with it for a little while we can draw the following conclusions:
A greater number of connections in the network leads to drastically higher infection and death rates.
Infection rates scale linearly with death rates for very low and very high numbers of connections.
For intermediate levels of network connectivity the relationship between vaccination and infection rates more closely resembles exponential decay, i.e. achieving a basic level of vaccination in a population has a greater payoff than boosting vaccination rates in populations where they are already high.
The more highly connected a population, the higher the vaccination rates have to be to effectively protect the population.
These results emphasize how important it is to maintain high vaccination rates in the highly connected societies we live in today. Even more importantly, they show how important it is to continue vaccination programs in developing countries, where they'll have the greatest impact.
We can also present the data in a different way, examining all the data at once in a HeatMap.
End of explanation
"""
|
gaufung/PythonStandardLibrary | DataPersistence/Pickle.ipynb | mit | import pickle
import pprint
data = [{'a': 'A', 'b': 2, 'c': 3.0}]
print('DATA:', end=' ')
pprint.pprint(data)
data_string = pickle.dumps(data)
print('PICKLE: {!r}'.format(data_string))
import pickle
import pprint
data1 = [{'a': 'A', 'b': 2, 'c': 3.0}]
print('BEFORE: ', end=' ')
pprint.pprint(data1)
data1_string = pickle.dumps(data1)
data2 = pickle.loads(data1_string)
print('AFTER : ', end=' ')
pprint.pprint(data2)
print('SAME? :', (data1 is data2))
print('EQUAL?:', (data1 == data2))
"""
Explanation: The pickle module implements an algorithm for turning an arbitrary Python object into a series of bytes. This process is also called serializing the object. The byte stream representing the object can then be transmitted or stored, and later reconstructed to create a new object with the same characteristics.
Encoding and Decoding Data in Strings
End of explanation
"""
import io
import pickle
import pprint
class SimpleObject:
def __init__(self, name):
self.name = name
self.name_backwards = name[::-1]
return
data = []
data.append(SimpleObject('pickle'))
data.append(SimpleObject('preserve'))
data.append(SimpleObject('last'))
# Simulate a file.
out_s = io.BytesIO()
# Write to the stream
for o in data:
print('WRITING : {} ({})'.format(o.name, o.name_backwards))
pickle.dump(o, out_s)
out_s.flush()
# Set up a read-able stream
in_s = io.BytesIO(out_s.getvalue())
# Read the data
while True:
try:
o = pickle.load(in_s)
except EOFError:
break
else:
print('READ : {} ({})'.format(
o.name, o.name_backwards))
"""
Explanation: Working with Stream
End of explanation
"""
import pickle
import sys
class SimpleObject:
def __init__(self, name):
self.name = name
l = list(name)
l.reverse()
self.name_backwards = ''.join(l)
data = []
data.append(SimpleObject('pickle'))
data.append(SimpleObject('preserve'))
data.append(SimpleObject('last'))
filename ='test.dat'
with open(filename, 'wb') as out_s:
for o in data:
print('WRITING: {} ({})'.format(
o.name, o.name_backwards))
pickle.dump(o, out_s)
with open(filename, 'rb') as in_s:
while True:
try:
o = pickle.load(in_s)
except EOFError:
break
else:
print('READ: {} ({})'.format(
o.name, o.name_backwards))
"""
Explanation: Problem with Reconstructing Objects
End of explanation
"""
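The heading above deserves a concrete illustration (the `Point` class here is a made-up example, not from the notebook): pickle stores only a reference to the class, its module and name, rather than the class's code, so unpickling fails if the class definition is no longer importable.

```python
import pickle

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

payload = pickle.dumps(Point(1, 2))

# The payload references '__main__.Point' by name; once the class is
# gone, reconstruction has nothing to bind the bytes to.
del Point
try:
    pickle.loads(payload)
    reconstructed = True
except AttributeError as err:
    reconstructed = False
    print('load failed:', err)
```

This is why the file-based example above defines SimpleObject before loading: the reader needs the class available under the same importable name the writer used.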
|
Vincibean/machine-learning-with-tensorflow | simulated-linear-regression.ipynb | apache-2.0 | import numpy as np
num_points = 1000
vectors_set = []
for i in range(num_points):
x1= np.random.normal(0.0, 0.55)
y1= x1 * 0.1 + 0.3 + np.random.normal(0.0, 0.03)
vectors_set.append([x1, y1])
x_data = [v[0] for v in vectors_set]
y_data = [v[1] for v in vectors_set]
"""
Explanation: Simulated Linear Regression
Abstract
In order to understand what TensorFlow can do, here is a little demo that makes up some phony data following a certain rule, and then fits a line to it using Linear Regression. In the end, we expect that TensorFlow will be able to find out the parameters used to make up the phony data.
Linear Regression is a Machine Learning algorithm that models the relationship between a dependent variable and one or more independent variables.
Introduction
This tutorial is taken, with slight modification and different annotations, from TensorFlow's official documentation and Professor Jordi Torres' First Contact with TensorFlow.
This tutorial is intended for readers who are new to both machine learning and TensorFlow.
Data Preparation
Let's first start by creating 1000 phony x, y data points. In order to accomplish this, we will use NumPy. NumPy is an extension to the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large library of high-level mathematical functions to operate on these.
In particular, we will take advantage of the numpy.random.normal() function, which draws random samples from a normal (Gaussian) distribution (also called the bell curve because of its characteristic shape). The normal distribution occurs often in nature. For example, it describes the commonly occurring distribution of samples influenced by a large number of tiny, random disturbances, each with its own unique distribution.
The rule that our phony data points will follow is:
y = x * 0.1 + 0.3
To this, we will add an "error" following a normal distribution.
End of explanation
"""
import tensorflow as tf
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
"""
Explanation: Data Analysis
Linear Regression models can be represented with just two parameters: W (the slope) and b (the y-intercept).
We want to generate a TensorFlow algorithm to find the best parameters W and b that from input data x_data describe the underlying rule.
First, let's begin by defining two Variable ops: one for the slope and one the y-intercept.
End of explanation
"""
y = tf.add(tf.mul(x_data, W), b) # W * x_data + b
"""
Explanation: Then, let's use two other ops to describe the relationship between x_data, W and b, that is, the linear function (a first-degree polynomial).
End of explanation
"""
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)
"""
Explanation: In order to find the best W and b, we need to minimize the mean squared error between the predicted y and the actual y_data. The way we accomplish this is using a Gradient Descent Optimizer.
End of explanation
"""
init = tf.initialize_all_variables()
"""
Explanation: Before starting, initialize the variables. We will 'run' this first.
End of explanation
"""
sess = tf.Session()
sess.run(init)
"""
Explanation: Then, we launch the graph.
End of explanation
"""
for step in range(200):
sess.run(train)
"""
Explanation: Now, fit the line. In order to do this, let's iterate 200 times (epochs) on the training data.
End of explanation
"""
print(sess.run(W), sess.run(b))
"""
Explanation: Finally, let's see if TensorFlow learned that the best fit is near W: [0.1], b: [0.3] (because, in our example, the input data were "phony" and contained some noise: the "error")
End of explanation
"""
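As an independent sanity check (not part of the original tutorial), the same line can be recovered in closed form with NumPy's least-squares fit; a sketch that regenerates comparable phony data and fits it with np.polyfit:

```python
import numpy as np

np.random.seed(0)  # reproducible phony data
x = np.random.normal(0.0, 0.55, 1000)
y = x * 0.1 + 0.3 + np.random.normal(0.0, 0.03, 1000)

# Degree-1 polynomial least-squares fit returns [slope, intercept]
W_ls, b_ls = np.polyfit(x, y, 1)
print(W_ls, b_ls)  # both should land close to 0.1 and 0.3
```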
|
waltervh/BornAgain-tutorial | talks/day_3/advanced_geometry_M/EvanescentWave.ipynb | gpl-3.0 | %matplotlib inline
# %load depthprobe_ex.py
import numpy as np
import bornagain as ba
from bornagain import deg, angstrom, nm
# layer thicknesses in angstroms
t_Ti = 130.0 * angstrom
t_Pt = 320.0 * angstrom
t_Ti_top = 100.0 * angstrom
t_TiO2 = 30.0 * angstrom
# beam data
ai_min = 0.0 * deg # minimum incident angle
ai_max = 1.0 * deg # maximum incident angle
n_ai_bins = 500 # number of bins in incident angle axis
beam_sample_ratio = 0.01 # beam-to-sample size ratio
wl = 10 * angstrom # wavelength in angstroms
# angular beam divergence from https://mlz-garching.de/maria
d_ang = np.degrees(3.0e-03)*deg # spread width for incident angle
n_points = 50 # number of points to convolve over
n_sig = 3 # number of sigmas to convolve over
# wavelength divergence from https://mlz-garching.de/maria
d_wl = 0.1*wl # spread width for the wavelength
n_points_wl = 50
n_sig_wl = 2
# depth position span
z_min = -100 * nm # 300 nm to the sample and substrate
z_max = 100 * nm # 100 nm to the ambient layer
n_z_bins = 500
def get_sample():
"""
Constructs a sample with one resonating Ti/Pt layer
"""
# define materials
m_Si = ba.MaterialBySLD("Si", 2.07e-06, 2.38e-11)
m_Ti = ba.MaterialBySLD("Ti", 2.8e-06, 5.75e-10)
m_Pt = ba.MaterialBySLD("Pt", 6.36e-06, 1.9e-09)
m_TiO2 = ba.MaterialBySLD("TiO2", 2.63e-06, 5.4e-10)
m_D2O = ba.MaterialBySLD("D2O", 6.34e-06, 1.13e-13)
# create layers
l_Si = ba.Layer(m_Si)
l_Ti = ba.Layer(m_Ti, 130.0 * angstrom)
l_Pt = ba.Layer(m_Pt, 320.0 * angstrom)
l_Ti_top = ba.Layer(m_Ti, 100.0 * angstrom)
l_TiO2 = ba.Layer(m_TiO2, 30.0 * angstrom)
l_D2O = ba.Layer(m_D2O)
# construct sample
sample = ba.MultiLayer()
sample.addLayer(l_Si)
# put your code here (1 line), take care of correct indents
sample.addLayer(l_Ti)
sample.addLayer(l_Pt)
sample.addLayer(l_Ti_top)
sample.addLayer(l_TiO2)
sample.addLayer(l_D2O)
return sample
def get_simulation():
"""
Returns a depth-probe simulation.
"""
footprint = ba.FootprintFactorSquare(beam_sample_ratio)
simulation = ba.DepthProbeSimulation()
simulation.setBeamParameters(wl, n_ai_bins, ai_min, ai_max, footprint)
simulation.setZSpan(n_z_bins, z_min, z_max)
fwhm2sigma = 2*np.sqrt(2*np.log(2))
# add angular beam divergence
# put your code here (2 lines)
# add wavelength divergence
# put your code here (2 lines)
return simulation
def run_simulation():
"""
Runs simulation and returns its result.
"""
sample = get_sample()
simulation = get_simulation()
simulation.setSample(sample)
simulation.runSimulation()
return simulation.result()
if __name__ == '__main__':
result = run_simulation()
ba.plot_simulation_result(result)
"""
Explanation: Evanescent wave intensity
From the BornAgain documentation one can see that for $q > q_c$ there are two solutions of the Schrödinger equation: a reflected wave and a transmitted wave. However, for the case $q < q_c$ there is also a real solution, which shows that even when the potential barrier is higher than the neutron energy normal to the surface, the neutron can still penetrate to a characteristic depth. This evanescent wave travels along the surface with wave vector $k_{||}$, and after a very short time it is ejected out of the bulk in the specular direction [1].
Evanescent wave intensity is calculated in BornAgain as
$$I_{ew}(z) = \left|\Psi(z)\right|^2 = \left|R\cdot e^{ik_zz} + T\cdot e^{-ik_zz}\right|^2$$
Simulate neutron penetration depth for the resonator [2]
BornAgain uses the following conventions for the DepthProbe geometry description. The surface of the sample will be assigned to $z=0$, while the direction of z-axis will be from the bulk of the sample out to the ambient. Thus, $z<0$ corresponds to the region from the substrate to the surface of the sample, and $z>0$ corresponds to the ambient media.
For the sample under consideration, Si block is considered as an ambient medium and D$_2$O as a substrate.
GUI
Start a new project Welcome view->New project
Go to the Instrument view and add an DepthProbe instrument.
Set the instrument parameters as follows.
Switch to the Sample view. Create a sample made of 6 layers (from bottom to top)
D$_2$O substrate, SLD-based material with SLD=$6.34\cdot 10^{-6} + i\cdot 1.13\cdot 10^{-13}$
TiO$_2$, thickness 3 nm, SLD-based material with SLD=$2.63\cdot 10^{-6} + i\cdot 5.4\cdot 10^{-10}$
Ti, thickness 10 nm, SLD-based material with SLD=$2.8\cdot 10^{-6} + i\cdot 5.75\cdot 10^{-10}$
Pt, thickness 32 nm, SLD-based material with SLD=$6.36\cdot 10^{-6} + i\cdot 1.9\cdot 10^{-9}$
Ti, thickness 13 nm
Si ambient layer, SLD-based material with SLD=$2.07\cdot 10^{-6} + i\cdot 2.38\cdot 10^{-11}$
Switch to Simulation view. Click Run simulation.
Exercise:
Go back to the Instrument view and add a Gaussian wavelength distribution with $\frac{\Delta\lambda}{\lambda}=10$%.
If you get stuck, see the solution
Python API
Since Export to Python script does not work for Depth probe in BornAgain 1.14.0, see the prepared script below.
Exercise
Modify the script below to simulate a resonator with 3 Ti/Pt double layers. Hint: use a for statement and take care with indentation. How does the number of double layers influence the simulation result?
Check the beam divergence parameters for the MARIA instrument. Add a Gaussian divergence for the incident angle and the wavelength. How does the beam divergence influence the simulation result? Which influences the result more: angular divergence or wavelength divergence?
End of explanation
"""
%load depthprobe.py
"""
Explanation: Solution
Run the line below to see the solution
End of explanation
"""
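As a side note, the evanescent-wave intensity formula from the top of this section can be evaluated standalone; a minimal sketch where the amplitudes R, T and the normal wave-vector component kz are made-up values, not BornAgain output:

```python
import numpy as np

def evanescent_intensity(z, R, T, kz):
    """|R*exp(i*kz*z) + T*exp(-i*kz*z)|**2 at depth(s) z."""
    psi = R * np.exp(1j * kz * z) + T * np.exp(-1j * kz * z)
    return np.abs(psi) ** 2

z = np.linspace(-100.0, 100.0, 5)  # depths in arbitrary units
print(evanescent_intensity(z, R=0.9, T=0.1 + 0.2j, kz=0.05))
```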
|
balarsen/pymc_learning | Propagation_of_uncertainty/Sullivan1971_GF.ipynb | bsd-3-clause | from pprint import pprint
import numpy as np
import pymc3 as pm
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(font_scale=1.5)
sns.set_context("notebook", rc={"lines.linewidth": 3})
%matplotlib inline
def getBoundedNormal_dist(mean=None, FWHM=None, name=None, lower=0, upper=1e6):
    """
    Make a bounded normal distribution; FWHM is given as a percentage of the mean.
    NOTE: https://github.com/pymc-devs/pymc3/issues/1672 bounded dist fail until 3.1 on
    non array bounds!!
    """
    assert mean is not None
    assert FWHM is not None
    assert name is not None
    BoundedNormal = pm.Bound(pm.Normal, lower=lower, upper=upper)
    # convert FWHM to a standard deviation: sd = FWHM / (2*sqrt(2*ln(2)))
    sd = mean * (FWHM / 100.) / (2 * np.sqrt(2 * np.log(2)))
    return BoundedNormal('{0}'.format(name), mu=mean, sd=sd)
def Sullivan_Bound(R1, R2, l):
A1 = np.pi*R1**2
A2 = np.pi*R2**2
top = A1*A2
bottom = R1**2+R2**2+l**2
return top/bottom
def Sullivan(R1, R2, l):
f = 0.5*np.pi**2
t1 = R1**2+R2**2+l**2
t2 = 4*R1**2*R2**2
G = f*(t1 - (t1**2-t2)**0.5 )
return G
def frac_bounds(trace):
med = np.median(trace)
bounds = np.percentile(trace, (2.5, 97.5))
frac = (med-bounds[0])/med
return med, frac*100
"""
Explanation: Work through Geometric Factor for Sullivan 1971
How do the results depend on stackup?
Both the full formula and a bounded formula
How do the results depend on diameter?
Both the full formula and a bounded formula
$G=\frac{1}{2}\pi^2 \left[R_1^2+R_2^2+l^2 -\left\{\left(R_1^2+R_2^2+l^2\right)^2-4R_1^2R_2^2\right\}^{\frac{1}{2}} \right]$
$G \ge \frac{A_1A_2}{R_1^2+R_2^2+l^2}$
End of explanation
"""
Sullivan(4, 5, 10)
R1 = 0.5
R2 = 0.5
l = np.linspace(2, 20, 100)
fig, ax = plt.subplots(ncols=2, figsize=(8,4))
ax[0].plot(l, Sullivan(R1, R2, l), lw=2)
ax[0].set_xlabel('Distance between [$cm$]')
ax[0].set_ylabel('GF [$cm^2sr$]');
ax[1].loglog(l, Sullivan(R1, R2, l), lw=2)
ax[1].set_xlabel('Distance between [$cm$]');
ax[0].grid(True, which='both')
ax[1].grid(True, which='both')
with pm.Model() as model1:
T1 = pm.Normal('T1', 1.0, 0.1e-2) # 1cm +/- 0.1mm
T2 = pm.Normal('T2', 1.0, 0.1e-2) # 1cm +/- 0.1mm
T3 = pm.Normal('T3', 1.0, 0.1e-2) # 1cm +/- 0.1mm
T4 = pm.Normal('T4', 1.0, 0.1e-2) # 1cm +/- 0.1mm
T5 = pm.Normal('T5', 1.0, 0.1e-2) # 1cm +/- 0.1mm
R1 = 0.5
R2 = 0.5
R3 = 0.5
G = pm.Deterministic('G', Sullivan(R1, R3, T1+T2+T3+T4+T5))
Gbound = pm.Deterministic('Gbound', Sullivan_Bound(R1, R3, T1+T2+T3+T4+T5))
trace = pm.sample(1000, chains=4, target_accept=0.9)
pm.summary(trace).round(3)
pm.traceplot(trace, combined=False);
gf = frac_bounds(trace['G'])
gbf = frac_bounds(trace['Gbound'])
print("G={:.5f} +/- {:.2f}%".format(gf[0], gf[1]))
print("Gbound={:.5f} +/- {:.2f}%".format(gbf[0], gbf[1]))
"""
Explanation: Just thickness
Stack up five colimator discs to compute G
End of explanation
"""
percents = np.logspace(-3, -1, 10)  # renamed: the original name shadowed the pymc3 import
ans = {}
for ii, p in enumerate(percents):
    print(p, ii+1, len(percents))
    with pm.Model() as model2:
        T1 = pm.Normal('T1', 1.0, tau=(p)**-2)  # 1 cm +/- p cm
        T2 = pm.Normal('T2', 1.0, tau=(p)**-2)  # 1 cm +/- p cm
        T3 = pm.Normal('T3', 1.0, tau=(p)**-2)  # 1 cm +/- p cm
        T4 = pm.Normal('T4', 1.0, tau=(p)**-2)  # 1 cm +/- p cm
        T5 = pm.Normal('T5', 1.0, tau=(p)**-2)  # 1 cm +/- p cm
        R1 = 0.5
        R2 = 0.5
        R3 = 0.5
        G = pm.Deterministic('G', Sullivan(R1, R3, T1+T2+T3+T4+T5))
        Gbound = pm.Deterministic('Gbound', Sullivan_Bound(R1, R3, T1+T2+T3+T4+T5))
        start = pm.find_MAP()
        trace = pm.sample(10000, start=start, chains=2)
    ans[p] = frac_bounds(trace['G'])
pprint(ans)
vals = np.asarray(list(ans.keys()))
gs = np.asarray([ans[v][0] for v in ans ])
gse = np.asarray([ans[v][1] for v in ans ])
valsf = (vals/1.0)*100
plt.errorbar(valsf, gs, yerr=gse, elinewidth=1, capsize=2, barsabove=True)
plt.ylim([0,15])
# plt.xscale('log')
plt.plot(valsf, gse)
"""
Explanation: As a function of measurement uncertainty
End of explanation
"""
|
gdementen/larray | doc/source/tutorial/tutorial_plotting.ipynb | gpl-3.0 | from larray import *
"""
Explanation: Plotting
Import the LArray library:
End of explanation
"""
demography_eurostat = load_example_data('demography_eurostat')
population = demography_eurostat.population / 1_000_000
# show the 'population' array
population
"""
Explanation: Import the test array population from the demography_eurostat dataset:
End of explanation
"""
%matplotlib inline
"""
Explanation: Inline matplotlib (required in notebooks):
End of explanation
"""
import matplotlib.pyplot as plt
"""
Explanation: In a Python script, add the following import on top of the script:
End of explanation
"""
population['Belgium'].plot()
# shows the figure
plt.show()
"""
Explanation: Create and show a simple plot (last axis define the different curves to draw):
End of explanation
"""
population['Belgium'].plot(grid=True, xticks=population.time, title='Belgium')
# add a label along the y axis
plt.ylabel('population (millions)')
# saves figure in a file (see matplotlib.pyplot.savefig documentation for more details)
plt.savefig('Belgium_population.png')
# WARNING: show() resets the current figure after showing it! Do not call it before savefig
plt.show()
"""
Explanation: Create a Line plot with grid, title, label on y axis and user-defined xticks.
Save the plot as a png file (using plt.savefig()).
Show the plot:
End of explanation
"""
# line styles: '-' for solid line, '--' for dashed line, '-.' for dash-dotted line and ':' for dotted line
population['Male'].plot(style=['-', '--', '-.'], linewidth=2, xticks=population.time, title='Male')
plt.ylabel('population (millions)')
plt.show()
"""
Explanation: Specify line styles and width:
End of explanation
"""
population['Belgium'].plot(xticks=population.time, title='Belgium')
plt.ylabel('population (millions)')
# available values for loc are:
# 'best' (default), 'upper right', 'upper left', 'lower left', 'lower right', 'right',
# center left', 'center right', 'lower center', 'upper center', 'center'
plt.legend(loc='lower right')
plt.show()
"""
Explanation: Move the legend inside the graph (using plt.legend(loc='position')):
End of explanation
"""
population['Belgium'].plot(xticks=population.time, title='Belgium')
plt.ylabel('population (millions)')
plt.legend(bbox_to_anchor=(1.25, 0.6))
plt.show()
"""
Explanation: Put the legend outside the graph (using plt.legend(bbox_to_anchor=(x, y))):
End of explanation
"""
population['Belgium'].plot.bar(title='Belgium')
plt.ylabel('population (millions)')
plt.legend(bbox_to_anchor=(1.25, 0.6))
plt.show()
"""
Explanation: Create a Bar plot:
End of explanation
"""
population['Belgium'].plot.bar(title='Belgium', stacked=True)
plt.ylabel('population (millions)')
plt.legend(bbox_to_anchor=(1.25, 0.6))
plt.show()
"""
Explanation: Create a stacked Bar plot:
End of explanation
"""
figure, axes = plt.subplots(nrows=len(population.country), ncols=1, sharex=True, figsize=(5, 15))
for row, c in enumerate(population.country):
population[c].plot(ax=axes[row], title=str(c))
plt.ylabel('population (millions)')
plt.xticks(population.time)
plt.show()
"""
Explanation: Create a multiplot figure (using plt.subplot(nrows,ncols,index)):
End of explanation
"""
|
sthuggins/phys202-2015-work | days/day08/Display.ipynb | mit | class Ball(object):
pass
b = Ball()
b.__repr__()
print(b)
"""
Explanation: Display of Rich Output
In Python, objects can declare their textual representation using the __repr__ method.
End of explanation
"""
class Ball(object):
def __repr__(self):
return 'TEST'
b = Ball()
print(b)
"""
Explanation: Overriding the __repr__ method:
End of explanation
"""
from IPython.display import display
"""
Explanation: IPython expands on this idea and allows objects to declare other, rich representations including:
HTML
JSON
PNG
JPEG
SVG
LaTeX
A single object can declare some or all of these representations; all of them are handled by IPython's display system.
Basic display imports
The display function is a general purpose tool for displaying different representations of objects. Think of it as print for these rich representations.
End of explanation
"""
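A minimal sketch of how an object opts into rich output: the hook name `_repr_html_` is part of IPython's display protocol, while the `Color` class itself is a made-up example:

```python
class Color:
    """Toy object that renders as a colored box in the Notebook."""
    def __init__(self, hexcode):
        self.hexcode = hexcode

    def _repr_html_(self):
        # IPython calls this automatically when the object is displayed
        return ('<div style="background:{0}; padding:8px">{0}</div>'
                .format(self.hexcode))

swatch = Color('#1f77b4')
html = swatch._repr_html_()
print(html)
```

Analogous hooks exist for the other representations listed above (e.g. `_repr_png_`, `_repr_latex_`, `_repr_json_`).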
from IPython.display import (
display_pretty, display_html, display_jpeg,
display_png, display_json, display_latex, display_svg
)
"""
Explanation: A few points:
Calling display on an object will send all possible representations to the Notebook.
These representations are stored in the Notebook document.
In general the Notebook will use the richest available representation.
If you want to display a particular representation, there are specific functions for that:
End of explanation
"""
from IPython.display import Image
i = Image(filename='./ipython-image.png')
display(i)
"""
Explanation: Images
To work with images (JPEG, PNG) use the Image class.
End of explanation
"""
i
"""
Explanation: Returning an Image object from an expression will automatically display it:
End of explanation
"""
Image(url='http://python.org/images/python-logo.gif')
"""
Explanation: An image can also be displayed from raw data or a URL.
End of explanation
"""
from IPython.display import HTML
s = """<table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>"""
h = HTML(s)
h
"""
Explanation: HTML
Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the HTML class.
End of explanation
"""
%%html
<table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>
%%html
<style>
#notebook {
background-color: skyblue;
font-family: times new roman;
}
</style>
"""
Explanation: You can also use the %%html cell magic to accomplish the same thing.
End of explanation
"""
from IPython.display import Javascript
"""
Explanation: You can remove the above styling by using "Cell"$\rightarrow$"Current Output"$\rightarrow$"Clear" with that cell selected.
JavaScript
The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as d3.js for output.
End of explanation
"""
js = Javascript('alert("hi")');
display(js)
"""
Explanation: Pass a string of JavaScript source code to the JavaScript object and then display it.
End of explanation
"""
%%javascript
alert("hi");
"""
Explanation: The same thing can be accomplished using the %%javascript cell magic:
End of explanation
"""
Javascript(
"""$.getScript('https://cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')"""
)
%%html
<style type="text/css">
circle {
fill: rgb(31, 119, 180);
fill-opacity: .25;
stroke: rgb(31, 119, 180);
stroke-width: 1px;
}
.leaf circle {
fill: #ff7f0e;
fill-opacity: 1;
}
text {
font: 10px sans-serif;
}
</style>
%%javascript
// element is the jQuery element we will append to
var e = element.get(0);
var diameter = 600,
format = d3.format(",d");
var pack = d3.layout.pack()
.size([diameter - 4, diameter - 4])
.value(function(d) { return d.size; });
var svg = d3.select(e).append("svg")
.attr("width", diameter)
.attr("height", diameter)
.append("g")
.attr("transform", "translate(2,2)");
d3.json("./flare.json", function(error, root) {
var node = svg.datum(root).selectAll(".node")
.data(pack.nodes)
.enter().append("g")
.attr("class", function(d) { return d.children ? "node" : "leaf node"; })
.attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; });
node.append("title")
.text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); });
node.append("circle")
.attr("r", function(d) { return d.r; });
node.filter(function(d) { return !d.children; }).append("text")
.attr("dy", ".3em")
.style("text-anchor", "middle")
.text(function(d) { return d.name.substring(0, d.r / 3); });
});
d3.select(self.frameElement).style("height", diameter + "px");
"""
Explanation: Here is a more complicated example that loads d3.js from a CDN, uses the %%html magic to load CSS styles onto the page, and then runs one of the d3.js examples.
End of explanation
"""
from IPython.display import Audio
Audio("./scrubjay.mp3")
"""
Explanation: Audio
IPython makes it easy to work with sounds interactively. The Audio display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the Image display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers.
End of explanation
"""
import numpy as np
max_time = 3
f1 = 120.0
f2 = 124.0
rate = 8000.0
L = 3
times = np.linspace(0,L,rate*L)
signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times)
Audio(data=signal, rate=rate)
"""
Explanation: A NumPy array can be converted to audio. The Audio class normalizes and encodes the data and embeds the resulting audio in the Notebook.
For instance, when two sine waves with almost the same frequency are superimposed, a phenomenon known as beats occurs:
End of explanation
"""
from IPython.display import YouTubeVideo
YouTubeVideo('sjfsUzECqK0')
"""
Explanation: Video
More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load:
End of explanation
"""
from IPython.display import IFrame
IFrame('https://ipython.org', width='100%', height=350)
"""
Explanation: External sites
You can even embed an entire page from another site in an iframe; for example this is IPython's home page:
End of explanation
"""
from IPython.display import FileLink, FileLinks
FileLink('../Visualization/Matplotlib.ipynb')
"""
Explanation: Links to local files
IPython provides builtin display classes for generating links to local files. Create a link to a single file using the FileLink object:
End of explanation
"""
FileLinks('./')
"""
Explanation: Alternatively, to generate links to all of the files in a directory, use the FileLinks object, passing '.' to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, FileLinks would work in a recursive manner creating links to files in all sub-directories as well.
End of explanation
"""
import torch
import numpy as np
from torchvision import datasets
import torchvision.transforms as transforms
# convert data to torch.FloatTensor
transform = transforms.ToTensor()
# load the training and test datasets
train_data = datasets.MNIST(root='data', train=True,
download=True, transform=transform)
test_data = datasets.MNIST(root='data', train=False,
download=True, transform=transform)
# Create training and test dataloaders
num_workers = 0
# how many samples per batch to load
batch_size = 20
# prepare data loaders
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, num_workers=num_workers)
"""
Explanation: Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. We'll build a convolutional autoencoder to compress the MNIST dataset.
The encoder portion will be made of convolutional and pooling layers and the decoder will be made of transpose convolutional layers that learn to "upsample" a compressed representation.
<img src='notebook_ims/autoencoder_1.png' />
Compressed Representation
A compressed representation can be great for saving and sharing any kind of data in a way that is more efficient than storing raw data. In practice, the compressed representation often holds key information about an input image and we can use it for denoising images or other kinds of reconstruction and transformation!
<img src='notebook_ims/denoising.png' width=60%/>
Let's get started by importing our libraries and getting the dataset.
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter) # dataiter.next() was removed from DataLoader iterators
images = images.numpy()
# get one image from the batch
img = np.squeeze(images[0])
fig = plt.figure(figsize = (5,5))
ax = fig.add_subplot(111)
ax.imshow(img, cmap='gray')
"""
Explanation: Visualize the Data
End of explanation
"""
import torch.nn as nn
import torch.nn.functional as F
# define the NN architecture
class ConvAutoencoder(nn.Module):
def __init__(self):
super(ConvAutoencoder, self).__init__()
## encoder layers ##
# conv layer (depth from 1 --> 16), 3x3 kernels
self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
# conv layer (depth from 16 --> 4), 3x3 kernels
self.conv2 = nn.Conv2d(16, 4, 3, padding=1)
# pooling layer to reduce x-y dims by two; kernel and stride of 2
self.pool = nn.MaxPool2d(2, 2)
## decoder layers ##
## a kernel of 2 and a stride of 2 will increase the spatial dims by 2
self.t_conv1 = nn.ConvTranspose2d(4, 16, 2, stride=2)
self.t_conv2 = nn.ConvTranspose2d(16, 1, 2, stride=2)
def forward(self, x):
## encode ##
# add hidden layers with relu activation function
# and maxpooling after
x = F.relu(self.conv1(x))
x = self.pool(x)
# add second hidden layer
x = F.relu(self.conv2(x))
x = self.pool(x) # compressed representation
## decode ##
# add transpose conv layers, with relu activation function
x = F.relu(self.t_conv1(x))
# output layer (with sigmoid for scaling from 0 to 1)
        x = torch.sigmoid(self.t_conv2(x)) # F.sigmoid is deprecated
return x
# initialize the NN
model = ConvAutoencoder()
print(model)
"""
Explanation: Convolutional Autoencoder
Encoder
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers.
Decoder
The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide, reconstructed image. For example, the representation could be a 7x7x4 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the compressed representation. A schematic of the network is shown below.
<img src='notebook_ims/conv_enc_1.png' width=640px>
Here our final encoder layer has size 7x7x4 = 196. The original images have size 28x28 = 784, so the encoded vector is 25% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, in fact, you're encouraged to add additional layers to make this representation even smaller! Remember our goal here is to find a small representation of the input data.
Transpose Convolutions, Decoder
This decoder uses transposed convolutional layers to increase the width and height of the input layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a transposed convolution layer. PyTorch provides us with an easy way to create the layers, nn.ConvTranspose2d.
It is important to note that transpose convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer.
We'll show this approach in another notebook, so you can experiment with it and see the difference.
TODO: Build the network shown above.
Build the encoder out of a series of convolutional and pooling layers.
When building the decoder, recall that transpose convolutional layers can upsample an input by a factor of 2 using a stride and kernel_size of 2.
End of explanation
"""
# specify loss function
criterion = nn.MSELoss()
# specify optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
# number of epochs to train the model
n_epochs = 30
for epoch in range(1, n_epochs+1):
# monitor training loss
train_loss = 0.0
###################
# train the model #
###################
for data in train_loader:
# _ stands in for labels, here
# no need to flatten images
images, _ = data
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
outputs = model(images)
# calculate the loss
loss = criterion(outputs, images)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update running training loss
train_loss += loss.item()*images.size(0)
# print avg training statistics
    train_loss = train_loss/len(train_loader.dataset) # average over samples (loss was summed per-sample above)
print('Epoch: {} \tTraining Loss: {:.6f}'.format(
epoch,
train_loss
))
"""
Explanation: Training
Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards.
We are not concerned with labels in this case, just images, which we can get from the train_loader. Because we're comparing pixel values in input and output images, it will be best to use a loss that is meant for a regression task. Regression is all about comparing quantities rather than probabilistic values. So, in this case, I'll use MSELoss. And compare output images and input images as follows:
loss = criterion(outputs, images)
Otherwise, this is pretty straightforward training with PyTorch. Since this is a convolutional autoencoder, our images do not need to be flattened before being passed in as input to our model.
End of explanation
"""
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter) # dataiter.next() was removed from DataLoader iterators
# get sample outputs
output = model(images)
# prep images for display
images = images.numpy()
# output is resized into a batch of images
output = output.view(batch_size, 1, 28, 28)
# use detach when it's an output that requires_grad
output = output.detach().numpy()
# plot the first ten input images and then reconstructed images
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(25,4))
# input images on top row, reconstructions on bottom
for images, row in zip([images, output], axes):
for img, ax in zip(images, row):
ax.imshow(np.squeeze(img), cmap='gray')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
"""
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. These look a little rough around the edges, likely due to the checkerboard effect we mentioned above that tends to happen with transpose layers.
End of explanation
"""
import numpy as np
from osgeo import gdal # the bare 'import gdal' form is deprecated
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
"""
Explanation: syncID: c324c554a35f463493349dbd0be19cec
title: "Classify a Raster Using Threshold Values in Python - 2017"
description: "Learn how to read NEON lidar raster GeoTIFFs (e.g., CHM, slope, aspect) into Python numpy arrays with gdal and create a classified raster object."
dateCreated: 2017-06-21
authors: Bridget Hass
contributors: Max Burner, Donal O'Leary
estimatedTime: 1 hour
packagesLibraries: numpy, gdal, matplotlib, matplotlib.pyplot, os
topics: lidar, raster, remote-sensing
languagesTool: python
dataProduct: DP1.30003, DP3.30015, DP3.30024, DP3.30025
code1: https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/tutorials/Python/Lidar/intro-lidar/classify_raster_with_threshold-py/classify_raster_with_threshold-py.ipynb
tutorialSeries: intro-lidar-py-series
urlTitle: classify-raster-thresholds-py
In this tutorial, we will learn how to:
1. Read NEON LiDAR Raster Geotifs (eg. CHM, Slope Aspect) into Python numpy arrays with gdal.
2. Create a classified raster object.
First, let's import the required packages and set our plot display to be in line:
<h3><a href="https://ndownloader.figshare.com/files/21754221">
Download NEON Teaching Data Subset: Imaging Spectrometer Data - HDF5 </a></h3>
These hyperspectral remote sensing data provide information on the
<a href="https://www.neonscience.org/" target="_blank"> National Ecological Observatory Network's</a>
<a href="https://www.neonscience.org/field-sites/field-sites-map/SJER" target="_blank" > San Joaquin
Exerimental Range field site</a> in March of 2019.
The data were collected over the San Joaquin field site located in California
(Domain 17) and processed at NEON headquarters. This data subset is derived from
the mosaic tile named NEON_D17_SJER_DP3_257000_4112000_reflectance.h5.
The entire dataset can be accessed by request from the
<a href="http://data.neonscience.org" target="_blank"> NEON Data Portal</a>.
<a href="https://ndownloader.figshare.com/files/21754221" class="link--button link--arrow"> Download Dataset</a>
End of explanation
"""
chm_filename = '/Users/olearyd/Git/data/NEON_D02_SERC_DP3_368000_4306000_CHM.tif'
#chm_filename = 'NEON_D02_SERC_DP3_367000_4306000_CHM.tif'
chm_dataset = gdal.Open(chm_filename)
"""
Explanation: Open a Geotif with GDAL
Let's look at the SERC Canopy Height Model (CHM) to start. We can open and read this in Python using the gdal.Open function:
End of explanation
"""
#Display the dataset dimensions, number of bands, driver, and geotransform
cols = chm_dataset.RasterXSize; print('# of columns:',cols)
rows = chm_dataset.RasterYSize; print('# of rows:',rows)
print('# of bands:',chm_dataset.RasterCount)
print('driver:',chm_dataset.GetDriver().LongName)
"""
Explanation: Read information from Geotif Tags
The Geotif file format comes with associated metadata containing information about the location and coordinate system/projection. Once we have read in the dataset, we can access this information with the following commands:
End of explanation
"""
print('projection:',chm_dataset.GetProjection())
"""
Explanation: GetProjection
We can use GetProjection to see information about the coordinate system and EPSG code.
End of explanation
"""
print('geotransform:',chm_dataset.GetGeoTransform())
"""
Explanation: GetGeoTransform
The geotransform contains information about the origin (upper-left corner) of the raster, the pixel size, and the rotation angle of the data. All NEON data in the latest format have zero rotation. In this example, the values correspond to:
End of explanation
"""
chm_mapinfo = chm_dataset.GetGeoTransform()
xMin = chm_mapinfo[0]
yMax = chm_mapinfo[3]
xMax = xMin + chm_dataset.RasterXSize*chm_mapinfo[1] #multiply by pixel width
yMin = yMax + chm_dataset.RasterYSize*chm_mapinfo[5] #multiply by pixel height (negative for north-up rasters)
chm_ext = (xMin,xMax,yMin,yMax)
print('chm raster extent:',chm_ext)
"""
Explanation: In this case, the geotransform values correspond to:
Top-Left X Coordinate = 358816.0
W-E Pixel Resolution = 1.0
Rotation (0 if Image is North-Up) = 0.0
Top-Left Y Coordinate = 4313476.0
Rotation (0 if Image is North-Up) = 0.0
N-S Pixel Resolution = -1.0
We can convert this information into a spatial extent (xMin, xMax, yMin, yMax) by combining information about the origin, number of columns & rows, and pixel size, as follows:
End of explanation
"""
chm_raster = chm_dataset.GetRasterBand(1)
noDataVal = chm_raster.GetNoDataValue(); print('no data value:',noDataVal)
scaleFactor = chm_raster.GetScale(); print('scale factor:',scaleFactor)
chm_stats = chm_raster.GetStatistics(True,True)
print('SERC CHM Statistics: Minimum=%.2f, Maximum=%.2f, Mean=%.3f, StDev=%.3f' %
(chm_stats[0], chm_stats[1], chm_stats[2], chm_stats[3]))
"""
Explanation: GetRasterBand
We can read in a single raster band with GetRasterBand and access information about this raster band such as the No Data Value, Scale Factor, and Statitiscs as follows:
End of explanation
"""
chm_array = chm_dataset.GetRasterBand(1).ReadAsArray(0,0,cols,rows).astype(float) # np.float was removed in recent NumPy
chm_array[chm_array==int(noDataVal)]=np.nan #Assign CHM No Data Values to NaN
chm_array=chm_array/scaleFactor
print('SERC CHM Array:\n',chm_array) #display array values
"""
Explanation: ReadAsArray
Finally we can convert the raster to an array using the ReadAsArray command. Use astype(float) to ensure the array contains floating-point numbers (note that np.float was removed in recent NumPy releases). Once we generate the array, we want to set No Data Values to NaN, and apply the scale factor:
End of explanation
"""
# Display statistics (min, max, mean); numpy.nanmin calculates the minimum without the NaN values.
print('SERC CHM Array Statistics:')
print('min:',round(np.nanmin(chm_array),2))
print('max:',round(np.nanmax(chm_array),2))
print('mean:',round(np.nanmean(chm_array),2))
# Calculate the % of pixels that are NaN and non-zero:
pct_nan = np.count_nonzero(np.isnan(chm_array))/(rows*cols)
print('% NaN:',round(pct_nan*100,2))
print('% non-zero:',round(100*np.count_nonzero(chm_array)/(rows*cols),2))
# Plot array
# We can use our plot_band_array function from Day 1
# %load plot_band_array
def plot_band_array(band_array,refl_extent,title,cmap_title,colormap='Spectral'):
plt.imshow(band_array,extent=refl_extent);
cbar = plt.colorbar(); plt.set_cmap(colormap);
cbar.set_label(cmap_title,rotation=270,labelpad=20)
plt.title(title); ax = plt.gca();
ax.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation #
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90) #rotate x tick labels 90 degrees
plot_band_array(chm_array,chm_ext,'SERC Canopy Height','Canopy Height, m')
"""
Explanation: Array Statistics
To get a better idea of the dataset, print some basic statistics:
End of explanation
"""
import copy
chm_nonan_array = copy.copy(chm_array)
chm_nonan_array = chm_nonan_array[~np.isnan(chm_array)]
plt.hist(chm_nonan_array,weights=np.zeros_like(chm_nonan_array)+1./
(chm_array.shape[0]*chm_array.shape[1]),bins=50);
plt.title('Distribution of SERC Canopy Height')
plt.xlabel('Tree Height (m)'); plt.ylabel('Relative Frequency')
"""
Explanation: Plot Histogram of Data
End of explanation
"""
chm_nonzero_array = copy.copy(chm_array)
chm_nonzero_array[chm_array==0]=np.nan
chm_nonzero_nonan_array = chm_nonzero_array[~np.isnan(chm_nonzero_array)]
# Use weighting to plot relative frequency
plt.hist(chm_nonzero_nonan_array,weights=np.zeros_like(chm_nonzero_nonan_array)+1./
(chm_array.shape[0]*chm_array.shape[1]),bins=50);
# plt.hist(chm_nonzero_nonan_array.flatten(),50)
plt.title('Distribution of SERC Non-Zero Canopy Height')
plt.xlabel('Tree Height (m)'); plt.ylabel('Relative Frequency')
# plt.xlim(0,25); plt.ylim(0,4000000)
print('min:',np.amin(chm_nonzero_nonan_array),'m')
print('max:',round(np.amax(chm_nonzero_nonan_array),2),'m')
print('mean:',round(np.mean(chm_nonzero_nonan_array),2),'m')
"""
Explanation: We can see that most of the values are zero. In SERC, many of the zero CHM values correspond to bodies of water as well as regions of land without trees. Let's look at a histogram and plot the data without zero values:
End of explanation
"""
plt.imshow(chm_array,extent=chm_ext,clim=(0,45))
cbar = plt.colorbar(); plt.set_cmap('BuGn');
cbar.set_label('Canopy Height, m',rotation=270,labelpad=20)
plt.title('SERC Non-Zero CHM, <45m'); ax = plt.gca();
ax.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation #
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90) #rotate x tick labels 90 degrees
"""
Explanation: From the histogram we can see that the majority of the trees are < 45m. We can replot the CHM array, this time adjusting the color bar to better visualize the variation in canopy height. We will plot the non-zero array so that CHM=0 appears white.
End of explanation
"""
# raster2array.py reads in the first band of geotif file and returns an array and associated
# metadata dictionary containing ...
from osgeo import gdal
import numpy as np
def raster2array(geotif_file):
metadata = {}
dataset = gdal.Open(geotif_file)
metadata['array_rows'] = dataset.RasterYSize
metadata['array_cols'] = dataset.RasterXSize
metadata['bands'] = dataset.RasterCount
metadata['driver'] = dataset.GetDriver().LongName
metadata['projection'] = dataset.GetProjection()
metadata['geotransform'] = dataset.GetGeoTransform()
mapinfo = dataset.GetGeoTransform()
metadata['pixelWidth'] = mapinfo[1]
metadata['pixelHeight'] = mapinfo[5]
# metadata['xMin'] = mapinfo[0]
# metadata['yMax'] = mapinfo[3]
# metadata['xMax'] = mapinfo[0] + dataset.RasterXSize/mapinfo[1]
# metadata['yMin'] = mapinfo[3] + dataset.RasterYSize/mapinfo[5]
metadata['ext_dict'] = {}
metadata['ext_dict']['xMin'] = mapinfo[0]
metadata['ext_dict']['xMax'] = mapinfo[0] + dataset.RasterXSize/mapinfo[1]
metadata['ext_dict']['yMin'] = mapinfo[3] + dataset.RasterYSize/mapinfo[5]
metadata['ext_dict']['yMax'] = mapinfo[3]
metadata['extent'] = (metadata['ext_dict']['xMin'],metadata['ext_dict']['xMax'],
metadata['ext_dict']['yMin'],metadata['ext_dict']['yMax'])
if metadata['bands'] == 1:
raster = dataset.GetRasterBand(1)
metadata['noDataValue'] = raster.GetNoDataValue()
metadata['scaleFactor'] = raster.GetScale()
# band statistics
        metadata['bandstats'] = {} #nested dictionary to store band stats in the same metadata dict
stats = raster.GetStatistics(True,True)
metadata['bandstats']['min'] = round(stats[0],2)
metadata['bandstats']['max'] = round(stats[1],2)
metadata['bandstats']['mean'] = round(stats[2],2)
metadata['bandstats']['stdev'] = round(stats[3],2)
        array = dataset.GetRasterBand(1).ReadAsArray(0,0,metadata['array_cols'],metadata['array_rows']).astype(float)
array[array==int(metadata['noDataValue'])]=np.nan
array = array/metadata['scaleFactor']
return array, metadata
elif metadata['bands'] > 1:
print('More than one band ... need to modify function for case of multiple bands')
SERC_chm_array, SERC_chm_metadata = raster2array(chm_filename)
print('SERC CHM Array:\n',SERC_chm_array)
# print(chm_metadata)
#print metadata in alphabetical order
for item in sorted(SERC_chm_metadata):
print(item + ':', SERC_chm_metadata[item])
"""
Explanation: Create a raster2array function to automate conversion of Geotif to array:
Now that we have a basic understanding of how GDAL reads in a Geotif file, we can write a function to read in a NEON geotif, convert it to a numpy array, and store the associated metadata in a Python dictionary in order to more efficiently carry out further analysis:
End of explanation
"""
chm_reclass = copy.copy(chm_array)
chm_reclass[np.where(chm_array==0)] = 1 # CHM = 0 : Class 1
chm_reclass[np.where((chm_array>0) & (chm_array<=20))] = 2 # 0m < CHM <= 20m - Class 2
chm_reclass[np.where((chm_array>20) & (chm_array<=40))] = 3 # 20m < CHM < 40m - Class 3
chm_reclass[np.where(chm_array>40)] = 4 # CHM > 40m - Class 4
print('Min:',np.nanmin(chm_reclass))
print('Max:',np.nanmax(chm_reclass))
print('Mean:',round(np.nanmean(chm_reclass),2))
import matplotlib.colors as colors
plt.figure(); #ax = plt.subplots()
cmapCHM = colors.ListedColormap(['lightblue','yellow','green','red'])
plt.imshow(chm_reclass,extent=chm_ext,cmap=cmapCHM)
plt.title('SERC CHM Classification')
ax=plt.gca(); ax.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90) #rotate x tick labels 90 degrees
# forceAspect(ax,aspect=1) # ax.set_aspect('auto')
# Create custom legend to label the four canopy height classes:
import matplotlib.patches as mpatches
class1_box = mpatches.Patch(color='lightblue', label='CHM = 0m')
class2_box = mpatches.Patch(color='yellow', label='0m < CHM < 20m')
class3_box = mpatches.Patch(color='green', label='20m < CHM < 40m')
class4_box = mpatches.Patch(color='red', label='CHM > 40m')
ax.legend(handles=[class1_box,class2_box,class3_box,class4_box],
handlelength=0.7,bbox_to_anchor=(1.05, 0.4),loc='lower left',borderaxespad=0.)
"""
Explanation: Threshold Based Raster Classification
Next, we will create a classified raster object. To do this, we will use the numpy.where function to create a new raster based off boolean classifications. Let's classify the canopy height into four groups:
- Class 1: CHM = 0 m
- Class 2: 0m < CHM <= 20m
- Class 3: 20m < CHM <= 40m
- Class 4: CHM > 40m
End of explanation
"""
# %load ../hyperspectral_hdf5/array2raster.py
"""
Array to Raster Function from https://pcjericks.github.io/py-gdalogr-cookbook/raster_layers.html)
"""
from osgeo import gdal, osr # ogr is also available from osgeo if needed
import numpy as np
def array2raster(newRasterfn,rasterOrigin,pixelWidth,pixelHeight,array,epsg):
cols = array.shape[1]
rows = array.shape[0]
originX = rasterOrigin[0]
originY = rasterOrigin[1]
driver = gdal.GetDriverByName('GTiff')
outRaster = driver.Create(newRasterfn, cols, rows, 1, gdal.GDT_Byte)
outRaster.SetGeoTransform((originX, pixelWidth, 0, originY, 0, pixelHeight))
outband = outRaster.GetRasterBand(1)
outband.WriteArray(array)
outRasterSRS = osr.SpatialReference()
outRasterSRS.ImportFromEPSG(epsg)
outRaster.SetProjection(outRasterSRS.ExportToWkt())
outband.FlushCache()
#array2raster(newRasterfn,rasterOrigin,pixelWidth,pixelHeight,array)
epsg = 32618 #WGS84, UTM Zone 18N
rasterOrigin = (SERC_chm_metadata['ext_dict']['xMin'],SERC_chm_metadata['ext_dict']['yMax'])
print('raster origin:',rasterOrigin)
array2raster('/Users/olearyd/Git/data/SERC_CHM_Classified.tif',rasterOrigin,1,-1,chm_reclass,epsg)
"""
Explanation: Export classified raster to a geotif
End of explanation
"""
TEAK_aspect_tif = '/Users/olearyd/Git/data/2013_TEAK_1_326000_4103000_DTM_aspect.tif'
TEAK_asp_array, TEAK_asp_metadata = raster2array(TEAK_aspect_tif)
print('TEAK Aspect Array:\n',TEAK_asp_array)
#print metadata in alphabetical order
for item in sorted(TEAK_asp_metadata):
print(item + ':', TEAK_asp_metadata[item])
plot_band_array(TEAK_asp_array,TEAK_asp_metadata['extent'],'TEAK Aspect','Aspect, degrees')
aspect_array = copy.copy(TEAK_asp_array)
asp_reclass = copy.copy(aspect_array)
asp_reclass[np.where(((aspect_array>=0) & (aspect_array<=45)) | (aspect_array>=315))] = 1 #North - Class 1
asp_reclass[np.where((aspect_array>=135) & (aspect_array<=225))] = 2 #South - Class 2
asp_reclass[np.where(((aspect_array>45) & (aspect_array<135)) | ((aspect_array>225) & (aspect_array<315)))] = np.nan #W & E - Unclassified
print('Min:',np.nanmin(asp_reclass))
print('Max:',np.nanmax(asp_reclass))
print('Mean:',round(np.nanmean(asp_reclass),2))
def forceAspect(ax,aspect=1):
im = ax.get_images()
extent = im[0].get_extent()
ax.set_aspect(abs((extent[1]-extent[0])/(extent[3]-extent[2]))/aspect)
# plot_band_array(aspect_reclassified,asp_ext,'North and South Facing Slopes \n HOPB')
from matplotlib import colors
fig, ax = plt.subplots()
cmapNS = colors.ListedColormap(['blue','red'])
plt.imshow(asp_reclass,extent=TEAK_asp_metadata['extent'],cmap=cmapNS)
plt.title('TEAK \n N and S Facing Slopes')
ax=plt.gca(); ax.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90) #rotate x tick labels 90 degrees
ax = plt.gca(); forceAspect(ax,aspect=1)
# Create custom legend to label N & S
import matplotlib.patches as mpatches
blue_box = mpatches.Patch(color='blue', label='North')
red_box = mpatches.Patch(color='red', label='South')
ax.legend(handles=[blue_box,red_box],handlelength=0.7,bbox_to_anchor=(1.05, 0.45),
loc='lower left', borderaxespad=0.)
TEAK_asp_array
TEAKepsg = 32611 #WGS84 UTM Zone 11N
TEAKorigin = (TEAK_asp_metadata['ext_dict']['xMin'], TEAK_asp_metadata['ext_dict']['yMax'])
print('TEAK raster origin: ',TEAKorigin)
array2raster('/Users/olearyd/Git/data/teak_nsAspect.tif',TEAKorigin,1,-1,asp_reclass,TEAKepsg) #write the classified (not raw) aspect
TEAK_asp_metadata['projection']
"""
Explanation: Challenge 1: Document Your Workflow
Look at the code that you created for this lesson. Now imagine yourself months in the future. Document your script so that your methods and process is clear and reproducible for yourself or others to follow in the future.
In documenting your script, synthesize the outputs. Do they tell you anything about the vegetation structure at the field site?
Challenge 2: Try out other Classifications
Create the following threshold classified outputs:
1. A raster where NDVI values are classified into the following categories:
* Low greenness: NDVI < 0.3
* Medium greenness: 0.3 < NDVI < 0.6
* High greenness: NDVI > 0.6
2. A raster where aspect is classified into North and South facing slopes:
Be sure to document your workflow as you go using Jupyter Markdown cells. When you are finished, explore your outputs to HTML by selecting File > Download As > HTML (.html). Save the file as LastName_Tues_classifyThreshold.html. Add this to the Tuesday directory in your DI17-NEON-participants Git directory and push them to your fork in GitHub. Merge with the central repository using a pull request.
Aspect Raster Classification on TEAK Dataset (California)
Next, we will create a classified raster object based on slope using the TEAK dataset. This time, our classifications will be:
- North Facing Slopes: 0-45 & 315-360 degrees ; class=1
- South Facing Slopes: 135-225 degrees ; class=2
- East & West Facing Slopes: 45-135 & 225-315 degrees ; unclassified
<p>
<img src="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/geospatial-skills/NSEWclassification_BozEtAl2015.jpg" style="width: 250px;"/>
<center><font size="2">Figure: (Boz et al. 2015)</font></center>
<center><font size="2">http://www.aimspress.com/article/10.3934/energy.2015.3.401/fulltext.html</font></center>
</p>
Further Reading: There are a range of applications for aspect classification. The link above shows an example of classifying LiDAR aspect data to determine suitability of roofs for PV (photovoltaic) systems. Can you think of any other applications where aspect classification might be useful?
Data Tip: You can calculate aspect in Python from a digital elevation (or surface) model using the pyDEM package: https://earthlab.github.io/tutorials/get-slope-aspect-from-digital-elevation-model/
Let's get started. First we can import the TEAK aspect raster geotif and convert it to an array using the raster2array function:
End of explanation
"""
#SERC Aspect Classification - doesn't work well because there is no significant terrain
SERC_asp_array, SERC_asp_metadata = raster2array(chm_filename)
print('SERC Aspect Array:\n',SERC_asp_array)
#print metadata in alphabetical order
for item in sorted(SERC_asp_metadata):
    print(item + ':', SERC_asp_metadata[item])
plot_band_array(SERC_asp_array,SERC_asp_metadata['extent'],'SERC Aspect','Aspect, degrees')
import copy
aspect_array = copy.copy(SERC_asp_array)
asp_reclass = copy.copy(aspect_array)
asp_reclass[np.where(((aspect_array>=0) & (aspect_array<=45)) | (aspect_array>=315))] = 1 #North - Class 1
asp_reclass[np.where((aspect_array>=135) & (aspect_array<=225))] = 2 #South - Class 2
asp_reclass[np.where(((aspect_array>45) & (aspect_array<135)) | ((aspect_array>225) & (aspect_array<315)))] = np.nan #W & E - Unclassified
# print(aspect_reclassified.dtype)
# print('Reclassified Aspect Matrix:',asp_reclass.shape)
# print(aspect_reclassified)
print('Min:',np.nanmin(asp_reclass))
print('Max:',np.nanmax(asp_reclass))
print('Mean:',round(np.nanmean(asp_reclass),2))
# plot_band_array(aspect_reclassified,asp_ext,'North and South Facing Slopes \n HOPB')
from matplotlib import colors
fig, ax = plt.subplots()
cmapNS = colors.ListedColormap(['blue','red'])
plt.imshow(asp_reclass,extent=SERC_asp_metadata['extent'],cmap=cmapNS)
plt.title('SERC \n N and S Facing Slopes')
ax=plt.gca(); ax.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90) #rotate x tick labels 90 degrees
# Create custom legend to label N & S
import matplotlib.patches as mpatches
blue_box = mpatches.Patch(color='blue', label='North')
red_box = mpatches.Patch(color='red', label='South')
ax.legend(handles=[blue_box,red_box],handlelength=0.7,
bbox_to_anchor=(1.05, 0.45), loc='lower left', borderaxespad=0.)
#Test out slope reclassification on subset of SERC aspect array:
import copy
aspect_subset = aspect_array[1000:1005,1000:1005]
print(aspect_subset)
asp_sub_reclass = copy.copy(aspect_subset)
asp_sub_reclass[np.where(((aspect_subset>=0) & (aspect_subset<=45)) | (aspect_subset>=315))] = 1 #North - Class 1
asp_sub_reclass[np.where((aspect_subset>=135) & (aspect_subset<=225))] = 2 #South - Class 2
asp_sub_reclass[np.where(((aspect_subset>45) & (aspect_subset<135)) | ((aspect_subset>225) & (aspect_subset<315)))] = np.nan #W & E - Unclassified
print('Reclassified Aspect Subset:\n',asp_sub_reclass)
# Another way to read in a GeoTIFF, from a GDAL workshop:
# http://download.osgeo.org/gdal/workshop/foss4ge2015/workshop_gdal.pdf
ds = gdal.Open(chm_filename)
print('File list:', ds.GetFileList())
print('Width:', ds.RasterXSize)
print('Height:', ds.RasterYSize)
print('Coordinate system:', ds.GetProjection())
gt = ds.GetGeoTransform() # captures origin and pixel size
print('Origin:', (gt[0], gt[3]))
print('Pixel size:', (gt[1], gt[5]))
print('Upper Left Corner:', gdal.ApplyGeoTransform(gt,0,0))
print('Upper Right Corner:', gdal.ApplyGeoTransform(gt,ds.RasterXSize,0))
print('Lower Left Corner:', gdal.ApplyGeoTransform(gt,0,ds.RasterYSize))
print('Lower Right Corner:',gdal.ApplyGeoTransform(gt,ds.RasterXSize,ds.RasterYSize))
print('Center:', gdal.ApplyGeoTransform(gt,ds.RasterXSize/2,ds.RasterYSize/2))
print('Metadata:', ds.GetMetadata())
print('Image Structure Metadata:', ds.GetMetadata('IMAGE_STRUCTURE'))
print('Number of Bands:', ds.RasterCount)
for i in range(1, ds.RasterCount+1):
    band = ds.GetRasterBand(i) # in GDAL, bands are indexed starting at 1!
interp = band.GetColorInterpretation()
interp_name = gdal.GetColorInterpretationName(interp)
(w,h)=band.GetBlockSize()
print('Band %d, block size %dx%d, color interp %s' % (i,w,h,interp_name))
ovr_count = band.GetOverviewCount()
for j in range(ovr_count):
ovr_band = band.GetOverview(j) # but overview bands starting at 0
print(' Overview %d: %dx%d'%(j, ovr_band.XSize, ovr_band.YSize))
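The geotransform arithmetic that `gdal.ApplyGeoTransform` performs above can be written out by hand. A stdlib-only sketch — the coefficient values below are made up for illustration, not taken from the SERC raster:

```python
# Affine geotransform tuple: (origin_x, pixel_w, row_rot, origin_y, col_rot, pixel_h)
# Maps pixel (col, row) -> georeferenced (x, y); same arithmetic as gdal.ApplyGeoTransform.
def apply_geotransform(gt, col, row):
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

# Hypothetical north-up raster: 1 m pixels, origin at (500000, 4100000), no rotation
gt_example = (500000.0, 1.0, 0.0, 4100000.0, 0.0, -1.0)
print(apply_geotransform(gt_example, 0, 0))    # upper-left corner
print(apply_geotransform(gt_example, 10, 20))  # 10 columns east, 20 rows south
```

Note the negative `pixel_h`: for north-up rasters the y origin is the top edge, so moving down a row decreases y.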
"""
Explanation: References
Bayrakci Boz, M.; Calvert, K.; Brownson, J.R.S. An automated model for rooftop PV systems assessment in ArcGIS using LIDAR. AIMS Energy 2015, 3, 401–420.
Scratch / Test Code
End of explanation
"""
|
csaladenes/csaladenes.github.io | test/eis-metadata-validation/Planon metadata validation2.ipynb | mit | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: EIS metadata validation script
Used to validate Planon output with spreadsheet input
1. Data import
End of explanation
"""
planon=pd.read_excel('Data Loggers.xlsx',index_col = 'Code')
master_loggerscontrollers = pd.read_csv('LoggersControllers.csv', index_col = 'Asset Code')
master_meterssensors = pd.read_csv('MetersSensors.csv', encoding = 'macroman', index_col = 'Asset Code')
"""
Explanation: Read data. There are two datasets: Planon and Master. The latter is the EIS data nomenclature that was created. Master is made up of two subsets: loggers and meters. Loggers are sometimes called controllers and meters are sometimes called sensors. In rare cases meters or sensors are also called channels.
End of explanation
"""
planon.drop_duplicates(inplace=True)
master_loggerscontrollers.drop_duplicates(inplace=True)
master_meterssensors.drop_duplicates(inplace=True)
"""
Explanation: Drop duplicates (shouldn't be any)
End of explanation
"""
# Split the Planon file into 2, one for loggers & controllers, and one for meters & sensors.
planon_loggerscontrollers = planon.loc[(planon['Classification Group'] == 'EN.EN4 BMS Controller') | (planon['Classification Group'] == 'EN.EN1 Data Logger')]
planon_meterssensors = planon.loc[(planon['Classification Group'] == 'EN.EN2 Energy Meter') | (planon['Classification Group'] == 'EN.EN3 Energy Sensor')]
planon_loggerscontrollers.drop_duplicates(inplace=True)
planon_meterssensors.drop_duplicates(inplace=True)
"""
Explanation: Split Planon import into loggers and meters
Drop duplicates (shouldn't be any)
End of explanation
"""
len(planon_loggerscontrollers.index[planon_loggerscontrollers.index.duplicated()])
len(planon_meterssensors.index[planon_meterssensors.index.duplicated()])
"""
Explanation: Index unique? show number of duplicates in index
End of explanation
"""
planon_meterssensors.head(3)
"""
Explanation: Meters are not unique. This is because of the spaces served. This is ok for now, we will deal with duplicates at the comparison stage. The same is true for loggers - in the unlikely event that there are duplicates in the future.
End of explanation
"""
buildings=set(planon_meterssensors['BuildingNo.'])
buildings
"""
Explanation: 2. Validation
Create list of all buildings present in Planon export. These are buildings to check the data against from Master.
End of explanation
"""
master_meterssensors_for_validation = \
pd.concat([master_meterssensors.loc[master_meterssensors['Building Code'] == building] \
for building in buildings])
master_meterssensors_for_validation.head(2)
#alternative method
master_meterssensors_for_validation2 = \
master_meterssensors[master_meterssensors['Building Code'].isin(buildings)]
master_meterssensors_for_validation2.head(2)
"""
Explanation: 2.1. Meters
Create a dataframe slice for validation from master_meterssensors where only the buildings listed in buildings are contained. Save this new slice into master_meterssensors_for_validation. This is done by creating sub-slices of the dataframe for each building, then concatenating them all together.
End of explanation
"""
len(master_meterssensors_for_validation)
len(planon_meterssensors)-len(planon_meterssensors.index[planon_meterssensors.index.duplicated()])
"""
Explanation: Planon sensors are not unique because of the spaces served convention in the two data architectures. The Planon architecture devotes a new line to each space served - hence the non-unique index. The Master architecture lists all the spaces only once, as a list, therefore it has a unique index. We will need to take this into account and create a matching dataframe out of Planon for comparison, with a unique index.
End of explanation
"""
master_meterssensors_for_validation.sort_index(inplace=True)
planon_meterssensors.sort_index(inplace=True)
"""
Explanation: Sort datasets after index for easier comparison.
End of explanation
"""
planon_meterssensors.T
master_meterssensors_for_validation.T
"""
Explanation: 2.1.1 Slicing of meters to only certain columns of comparison
End of explanation
"""
#Planon:Master
meters_match_dict={
"BuildingNo.":"Building Code",
"Building":"Building Name",
"Description":"Description",
"EIS ID":"Logger Channel",
"Tenant Meter.Name":"Tenant meter",
"Fiscal Meter.Name":"Fiscal meter"
}
"""
Explanation: Create dictionary that maps Planon column names onto Master.
From Nicola:
- Code (Asset Code)
- Description
- EIS ID (Channel)
- Utility Type
- Fiscal Meter
- Tenant Meter
Building code and Building name are implicitly included. Logger Serial Number, IP or MAC would be essential to include, as well as Make and Model. Additional Location Info is not essential but would be useful to have. Locations (Locations.Space.Space number and Space Name) are included in the Planon export - but this is their only viable data source, so they are not validated against.
End of explanation
"""
master_meterssensors_for_validation_filtered=master_meterssensors_for_validation[list(meters_match_dict.values())]
planon_meterssensors_filtered=planon_meterssensors[list(meters_match_dict.keys())]
master_meterssensors_for_validation_filtered.head(2)
planon_meterssensors_filtered.head(2)
"""
Explanation: Filter both dataframes based on these new columns. Then remove duplicates. Currently, this leads to a loss of information about the spaces served, but also gives a unique index for the Planon dataframe, bringing the dataframes closer to each other. When including spaces explicitly in the comparison (if we want to - or if we just trust the Planon space mapping), this needs to be modified.
End of explanation
"""
planon_meterssensors_filtered.columns=[meters_match_dict[i] for i in planon_meterssensors_filtered]
planon_meterssensors_filtered.drop_duplicates(inplace=True)
master_meterssensors_for_validation_filtered.drop_duplicates(inplace=True)
planon_meterssensors_filtered.head(2)
"""
Explanation: Unify headers, drop duplicates (bear in mind the spaces argument - this is where it needs to be brought back in in the future!).
End of explanation
"""
planon_meterssensors_filtered['Fiscal meter']=planon_meterssensors_filtered['Fiscal meter'].isin(['Yes'])
planon_meterssensors_filtered['Tenant meter']=planon_meterssensors_filtered['Tenant meter'].isin(['Yes'])
master_meterssensors_for_validation_filtered['Fiscal meter']=master_meterssensors_for_validation_filtered['Fiscal meter'].isin([1])
master_meterssensors_for_validation_filtered['Tenant meter']=master_meterssensors_for_validation_filtered['Tenant meter'].isin([1])
master_meterssensors_for_validation_filtered.head(2)
planon_meterssensors_filtered.head(2)
planon_meterssensors_filtered.head(2)==master_meterssensors_for_validation_filtered.head(2)
"""
Explanation: Fiscal/Tenant meter name needs fixing from Yes/No and 1/0.
End of explanation
"""
a=np.sort(list(set(planon_meterssensors_filtered.index)))
b=np.sort(list(set(master_meterssensors_for_validation_filtered.index)))
meterssensors_not_in_planon=[]
for i in b:
if i not in a:
print(i+',',end=" "),
meterssensors_not_in_planon.append(i)
print('\n\nMeters in Master, but not in Planon:',
len(meterssensors_not_in_planon),'/',len(b),':',
round(len(meterssensors_not_in_planon)/len(b)*100,3),'%')
a=np.sort(list(set(planon_meterssensors_filtered.index)))
b=np.sort(list(set(master_meterssensors_for_validation_filtered.index)))
meterssensors_not_in_master=[]
for i in a:
if i not in b:
print(i+',',end=" "),
meterssensors_not_in_master.append(i)
print('\n\nMeters in Planon, not in Master:',
len(meterssensors_not_in_master),'/',len(a),':',
round(len(meterssensors_not_in_master)/len(a)*100,3),'%')
"""
Explanation: Cross-check missing meters
End of explanation
"""
print(len(planon_meterssensors_filtered.index))
print(len(set(planon_meterssensors_filtered.index)))
print(len(master_meterssensors_for_validation_filtered.index))
print(len(set(master_meterssensors_for_validation_filtered.index)))
master_meterssensors_for_validation_filtered[master_meterssensors_for_validation_filtered.index.duplicated()]
"""
Explanation: Check for duplicates in index, but not duplicates over the entire row
End of explanation
"""
good_index=[i for i in master_meterssensors_for_validation_filtered.index if str(i).lower().strip()!='nan']
master_meterssensors_for_validation_filtered=master_meterssensors_for_validation_filtered.loc[good_index]
len(planon_meterssensors_filtered)
len(master_meterssensors_for_validation_filtered)
"""
Explanation: The duplicates are the nans. Remove these for now. Could revisit later to do an index-less comparison, only over row contents.
End of explanation
"""
comon_index=list(set(master_meterssensors_for_validation_filtered.index).intersection(set(planon_meterssensors_filtered.index)))
master_meterssensors_for_validation_intersected=master_meterssensors_for_validation_filtered.loc[comon_index].sort_index()
planon_meterssensors_intersected=planon_meterssensors_filtered.loc[comon_index].sort_index()
master_meterssensors_for_validation_intersected.head(2)
planon_meterssensors_intersected.head(2)
"""
Explanation: Do comparison only on common indices. Need to revisit and identify the cause missing meters, both ways (5 Planon->Meters and 30 Meters->Planon in this example).
End of explanation
"""
planon_meterssensors_intersected==master_meterssensors_for_validation_intersected
np.all(planon_meterssensors_intersected==master_meterssensors_for_validation_intersected)
"""
Explanation: 2.1.2. Primitive comparison
End of explanation
"""
(planon_meterssensors_intersected==master_meterssensors_for_validation_intersected).sum()
"""
Explanation: 2.1.3. Horizontal comparison
Number of cells matching
End of explanation
"""
(planon_meterssensors_intersected==master_meterssensors_for_validation_intersected).sum()/\
len(planon_meterssensors_intersected)*100
((planon_meterssensors_intersected==master_meterssensors_for_validation_intersected).sum()/\
len(planon_meterssensors_intersected)*100).plot(kind='bar')
"""
Explanation: Percentage matching
End of explanation
"""
df=pd.DataFrame((planon_meterssensors_intersected.T==master_meterssensors_for_validation_intersected.T).sum())
df
df=pd.DataFrame((planon_meterssensors_intersected.T==master_meterssensors_for_validation_intersected.T).sum()/\
len(planon_meterssensors_intersected.T)*100)
df[df[0]<100]
df[df[0]<100].plot(kind='bar')
"""
Explanation: 2.1.4. Vertical comparison
End of explanation
"""
sum(planon_meterssensors_intersected['Description']!=master_meterssensors_for_validation_intersected['Description'])
planon_meterssensors_intersected['Description']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ') for s in planon_meterssensors_intersected['Description'].values]
master_meterssensors_for_validation_intersected['Description']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ') for s in master_meterssensors_for_validation_intersected['Description'].values]
sum(planon_meterssensors_intersected['Description']!=master_meterssensors_for_validation_intersected['Description'])
"""
Explanation: 2.1.5. Smart(er) comparison
Not all of the dataframe matches. Let us do some basic string formatting, maybe that helps.
End of explanation
"""
for i in planon_meterssensors_intersected[planon_meterssensors_intersected['Description']!=master_meterssensors_for_validation_intersected['Description']].index:
print(i,'\t\tPlanon:',planon_meterssensors_intersected.loc[i]['Description'],'\t\tMaster:',master_meterssensors_for_validation_intersected.loc[i]['Description'])
"""
Explanation: Some errors fixed, some left. Let's see which ones. These are either:
- Human input errors in the description.
- Actual errors somewhere in the indexing.
End of explanation
"""
sum(planon_meterssensors_intersected['Logger Channel']!=master_meterssensors_for_validation_intersected['Logger Channel'])
planon_meterssensors_intersected['Logger Channel']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ') for s in planon_meterssensors_intersected['Logger Channel'].values]
master_meterssensors_for_validation_intersected['Logger Channel']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ') for s in master_meterssensors_for_validation_intersected['Logger Channel'].values]
sum(planon_meterssensors_intersected['Logger Channel']!=master_meterssensors_for_validation_intersected['Logger Channel'])
"""
Explanation: Let us repeat the exercise for Logger Channel. Cross-validate, flag as highly likely error where both mismatch.
End of explanation
"""
for i in planon_meterssensors_intersected[planon_meterssensors_intersected['Logger Channel']!=master_meterssensors_for_validation_intersected['Logger Channel']].index:
print(i,'\t\tPlanon:',planon_meterssensors_intersected.loc[i]['Logger Channel'],'\t\tMaster:',master_meterssensors_for_validation_intersected.loc[i]['Logger Channel'])
"""
Explanation: All errors fixed on logger channels.
End of explanation
"""
(planon_meterssensors_intersected!=master_meterssensors_for_validation_intersected).sum()/\
len(planon_meterssensors_intersected)*100
"""
Explanation: New error percentage:
End of explanation
"""
master_loggerscontrollers_for_validation = \
pd.concat([master_loggerscontrollers.loc[master_loggerscontrollers['Building Code'] == building] \
for building in buildings])
master_loggerscontrollers_for_validation.head(2)
len(master_loggerscontrollers_for_validation)
len(planon_loggerscontrollers)-len(planon_loggerscontrollers.index[planon_loggerscontrollers.index.duplicated()])
master_loggerscontrollers_for_validation.sort_index(inplace=True)
planon_loggerscontrollers.sort_index(inplace=True)
planon_loggerscontrollers.T
master_loggerscontrollers_for_validation.T
"""
Explanation: 2.2. Loggers
End of explanation
"""
#Planon:Master
loggers_match_dict={
"BuildingNo.":"Building Code",
"Building":"Building Name",
"Description":"Description",
"EIS ID":"Logger Serial Number",
"Make":"Make",
"Model":"Model"
}
master_loggerscontrollers_for_validation_filtered=master_loggerscontrollers_for_validation[list(loggers_match_dict.values())]
planon_loggerscontrollers_filtered=planon_loggerscontrollers[list(loggers_match_dict.keys())]
master_loggerscontrollers_for_validation_filtered.head(2)
planon_loggerscontrollers_filtered.head(2)
planon_loggerscontrollers_filtered.columns=[loggers_match_dict[i] for i in planon_loggerscontrollers_filtered]
planon_loggerscontrollers_filtered.drop_duplicates(inplace=True)
master_loggerscontrollers_for_validation_filtered.drop_duplicates(inplace=True)
planon_loggerscontrollers_filtered.head(2)
master_loggerscontrollers_for_validation_filtered.head(2)
a=np.sort(list(set(planon_loggerscontrollers_filtered.index)))
b=np.sort(list(set(master_loggerscontrollers_for_validation_filtered.index)))
loggerscontrollers_not_in_planon=[]
for i in b:
if i not in a:
print(i+',',end=" "),
loggerscontrollers_not_in_planon.append(i)
print('\n\nMeters in Master, but not in Planon:',
len(loggerscontrollers_not_in_planon),'/',len(b),':',
round(len(loggerscontrollers_not_in_planon)/len(b)*100,3),'%')
a=np.sort(list(set(planon_loggerscontrollers_filtered.index)))
b=np.sort(list(set(master_loggerscontrollers_for_validation_filtered.index)))
loggerscontrollers_not_in_master=[]
for i in a:
if i not in b:
print(i+',',end=" "),
loggerscontrollers_not_in_master.append(i)
print('\n\nMeters in Planon, not in Master:',
len(loggerscontrollers_not_in_master),'/',len(a),':',
round(len(loggerscontrollers_not_in_master)/len(a)*100,3),'%')
print(len(planon_loggerscontrollers_filtered.index))
print(len(set(planon_loggerscontrollers_filtered.index)))
print(len(master_loggerscontrollers_for_validation_filtered.index))
print(len(set(master_loggerscontrollers_for_validation_filtered.index)))
master_loggerscontrollers_for_validation_filtered[master_loggerscontrollers_for_validation_filtered.index.duplicated()]
comon_index=list(set(master_loggerscontrollers_for_validation_filtered.index).intersection(set(planon_loggerscontrollers_filtered.index)))
master_loggerscontrollers_for_validation_intersected=master_loggerscontrollers_for_validation_filtered.loc[comon_index].sort_index()
planon_loggerscontrollers_intersected=planon_loggerscontrollers_filtered.loc[comon_index].sort_index()
master_loggerscontrollers_for_validation_intersected.head(2)
planon_loggerscontrollers_intersected.head(2)
planon_loggerscontrollers_intersected==master_loggerscontrollers_for_validation_intersected
"""
Explanation: Create dictionary that maps Planon column names onto Master.
From Nicola:
- EIS ID (Serial Number)
- Make
- Model
- Description
- Code (Asset Code)
- Building Code
Building code and Building name are implicitly included. Logger IP or MAC would be essential to include, as well as Make and Model. Additional Location Info is not essential but would be useful to have. Locations (Locations.Space.Space number and Space Name) are included in the Planon export - but this is their only viable data source, so they are not validated against.
End of explanation
"""
(planon_loggerscontrollers_intersected==master_loggerscontrollers_for_validation_intersected).sum()
"""
Explanation: Loggers matching
End of explanation
"""
(planon_loggerscontrollers_intersected==master_loggerscontrollers_for_validation_intersected).sum()/\
len(planon_loggerscontrollers_intersected)*100
((planon_loggerscontrollers_intersected==master_loggerscontrollers_for_validation_intersected).sum()/\
len(planon_loggerscontrollers_intersected)*100).plot(kind='bar')
"""
Explanation: Percentage matching
End of explanation
"""
sum(planon_loggerscontrollers_intersected['Building Name']!=master_loggerscontrollers_for_validation_intersected['Building Name'])
planon_loggerscontrollers_intersected['Building Name']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ') for s in planon_loggerscontrollers_intersected['Building Name'].values]
master_loggerscontrollers_for_validation_intersected['Building Name']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ') for s in master_loggerscontrollers_for_validation_intersected['Building Name'].values]
sum(planon_loggerscontrollers_intersected['Building Name']!=master_loggerscontrollers_for_validation_intersected['Building Name'])
"""
Explanation: Loggers not matching on Building Name.
End of explanation
"""
for i in planon_loggerscontrollers_intersected[planon_loggerscontrollers_intersected['Building Name']!=master_loggerscontrollers_for_validation_intersected['Building Name']].index:
print(i,'\t\tPlanon:',planon_loggerscontrollers_intersected.loc[i]['Building Name'],'\t\tMaster:',master_loggerscontrollers_for_validation_intersected.loc[i]['Building Name'])
"""
Explanation: That didn't help.
End of explanation
"""
sum(planon_loggerscontrollers_intersected['Logger Serial Number']!=master_loggerscontrollers_for_validation_intersected['Logger Serial Number'])
planon_loggerscontrollers_intersected['Logger Serial Number']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ').replace('{','').replace('}','') for s in planon_loggerscontrollers_intersected['Logger Serial Number'].values]
master_loggerscontrollers_for_validation_intersected['Logger Serial Number']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ').replace('{','').replace('}','') for s in master_loggerscontrollers_for_validation_intersected['Logger Serial Number'].values]
sum(planon_loggerscontrollers_intersected['Logger Serial Number']!=master_loggerscontrollers_for_validation_intersected['Logger Serial Number'])
for i in planon_loggerscontrollers_intersected[planon_loggerscontrollers_intersected['Logger Serial Number']!=master_loggerscontrollers_for_validation_intersected['Logger Serial Number']].index:
print(i,'\t\tPlanon:',planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'],'\t\tMaster:',master_loggerscontrollers_for_validation_intersected.loc[i]['Logger Serial Number'])
"""
Explanation: Follow up with lexical distance comparison. That would flag this as a match.
Loggers not matching on Serial Number.
End of explanation
"""
z1=[]
z2=[]
for i in planon_loggerscontrollers_intersected.index:
if planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number']!=master_loggerscontrollers_for_validation_intersected.loc[i]['Logger Serial Number']:
if float(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'])==\
float(master_loggerscontrollers_for_validation_intersected.loc[i]['Logger Serial Number']):
z1.append(str(int(float(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number']))))
z2.append(str(int(float(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number']))))
else:
z1.append(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'])
z2.append(master_loggerscontrollers_for_validation_intersected.loc[i]['Logger Serial Number'])
else:
z1.append(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'])
z2.append(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'])
planon_loggerscontrollers_intersected['Logger Serial Number']=z1
master_loggerscontrollers_for_validation_intersected['Logger Serial Number']=z2
for i in planon_loggerscontrollers_intersected[planon_loggerscontrollers_intersected['Logger Serial Number']!=master_loggerscontrollers_for_validation_intersected['Logger Serial Number']].index:
print(i,'\t\tPlanon:',planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'],'\t\tMaster:',master_loggerscontrollers_for_validation_intersected.loc[i]['Logger Serial Number'])
"""
Explanation: Technically the same, but there is a number format error. Compare based on float value; if they match, replace one of them. This needs to be amended, as it will throw a 'cannot convert to float' exception if strings are left in from the previous step.
End of explanation
"""
(planon_loggerscontrollers_intersected!=master_loggerscontrollers_for_validation_intersected).sum()/\
len(planon_loggerscontrollers_intersected)*100
"""
Explanation: New error percentage:
End of explanation
"""
|
ssunkara1/bqplot | examples/Marks/Pyplot/HeatMap.ipynb | apache-2.0 | import numpy as np
from ipywidgets import Layout
import bqplot.pyplot as plt
from bqplot import *
"""
Explanation: Heatmap
The HeatMap mark represents a 2d matrix of values as a color image. It can be used to visualize a 2d function, or a grayscale image for instance.
HeatMap is very similar to the GridHeatMap, but should be preferred for a greater number of points (starting at around 100x100), to avoid overloading the browser. GridHeatMap offers more control (interactions, selections), and is better suited for a smaller number of points.
End of explanation
"""
x = np.linspace(-5, 5, 200)
y = np.linspace(-5, 5, 200)
X, Y = np.meshgrid(x, y)
color = np.cos(X**2 + Y**2)
"""
Explanation: Data Input
x is a 1d array, corresponding to the abscissas of the points (size N)
y is a 1d array, corresponding to the ordinates of the points (size M)
color is a 2d array, $\text{color}_{ij}$ is the intensity of the point $(x_i, y_j)$ (size (N, M))
Scales must be defined for each attribute:
- a LinearScale, LogScale or OrdinalScale for x and y
- a ColorScale for color
End of explanation
"""
fig = plt.figure(title='Cosine',
layout=Layout(width='650px', height='650px'),
min_aspect_ratio=1, max_aspect_ratio=1, padding_y=0)
heatmap = plt.heatmap(color, x=x, y=y)
fig
"""
Explanation: Plotting a 2-dimensional function
This is a visualization of the function $f(x, y) = \text{cos}(x^2+y^2)$
End of explanation
"""
from scipy.misc import ascent
Z = ascent()
Z = Z[::-1, :]
aspect_ratio = Z.shape[1] / Z.shape[0]
img = plt.figure(title='Ascent', layout=Layout(width='650px', height='650px'),
min_aspect_ratio=aspect_ratio,
max_aspect_ratio=aspect_ratio, padding_y=0)
plt.scales(scales={'color': ColorScale(scheme='Greys', reverse=True)})
axes_options = {'x': {'visible': False}, 'y': {'visible': False}, 'color': {'visible': False}}
ascent = plt.heatmap(Z, axes_options=axes_options)
img
"""
Explanation: Displaying an image
The HeatMap can be used as is to display a 2d grayscale image, by feeding the matrix of pixel intensities to the color attribute
End of explanation
"""
|
DCPROGS/HJCFIT | exploration/CKS.ipynb | gpl-3.0 | %matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
from dcprogs.likelihood import QMatrix
tau = 0.2
qmatrix = QMatrix([[-1, 1, 0], [19, -29, 10], [0, 0.026, -0.026]], 1)
"""
Explanation: CKF Model
The following tries to reproduce Fig 9 from Hawkes, Jalali, Colquhoun (1992). First we create the $Q$-matrix for this particular model.
End of explanation
"""
from dcprogs.likelihood._methods import exponential_pdfs
def plot_exponentials(qmatrix, tau, x0=None, x=None, ax=None, nmax=2, shut=False):
from dcprogs.likelihood import missed_events_pdf
from dcprogs.likelihood._methods import exponential_pdfs
if x is None: x = np.arange(0, 5*tau, tau/10)
if x0 is None: x0 = x
pdf = missed_events_pdf(qmatrix, tau, nmax=nmax, shut=shut)
graphb = [x0, pdf(x0+tau), '-k']
functions = exponential_pdfs(qmatrix, tau, shut=shut)
plots = ['.r', '.b', '.g']
together = None
for f, p in zip(functions[::-1], plots):
if together is None: together = f(x+tau)
else: together = together + f(x+tau)
graphb.extend([x, together, p])
    if ax is None: plt.plot(*graphb)
else: ax.plot(*graphb)
"""
Explanation: We then create a function to plot each exponential component in the asymptotic expression. An explanation on how to get to these plots can be found in the CH82 notebook.
End of explanation
"""
from dcprogs.likelihood import missed_events_pdf
fig = plt.figure(figsize=(12, 10 ))
ax = fig.add_subplot(2, 2, 1)
x = np.arange(0, 10, tau/100)
pdf = missed_events_pdf(qmatrix, 0.2, nmax=2, shut=True)
ax.plot(x, pdf(x), '-k')
ax.set_xlabel('time $t$ (ms)')
ax.set_ylabel('Shut-time probability density $f_{\\bar{\\tau}=0.2}(t)$')
ax = fig.add_subplot(2, 2, 2)
ax.set_xlabel('time $t$ (ms)')
tau = 0.2
x, x0 = np.arange(0, 3*tau, tau/10.0), np.arange(0, 3*tau, tau/100)
plot_exponentials(qmatrix, tau, shut=True, ax=ax, x=x, x0=x0)
ax.set_ylabel('Excess shut-time probability density $f_{{\\bar{{\\tau}}={tau}}}(t)$'.format(tau=tau))
ax.set_xlabel('time $t$ (ms)')
ax.yaxis.tick_right()
ax.yaxis.set_label_position("right")
ax = fig.add_subplot(2, 2, 3)
tau = 0.05
x, x0 = np.arange(0, 3*tau, tau/10.0), np.arange(0, 3*tau, tau/100)
plot_exponentials(qmatrix, tau, shut=True, ax=ax, x=x, x0=x0)
ax.set_ylabel('Excess shut-time probability density $f_{{\\bar{{\\tau}}={tau}}}(t)$'.format(tau=tau))
ax.set_xlabel('time $t$ (ms)')
ax = fig.add_subplot(2, 2, 4)
tau = 0.5
x, x0 = np.arange(0, 3*tau, tau/10.0), np.arange(0, 3*tau, tau/100)
plot_exponentials(qmatrix, tau, shut=True, ax=ax, x=x, x0=x0)
ax.set_ylabel('Excess shut-time probability density $f_{{\\bar{{\\tau}}={tau}}}(t)$'.format(tau=tau))
ax.set_xlabel('time $t$ (ms)')
ax.yaxis.tick_right()
ax.yaxis.set_label_position("right")
fig.tight_layout()
"""
Explanation: For practical reasons, we plot the excess shut-time probability densities in the graph below. In all other particulars, it should reproduce Fig. 9 from Hawkes, Jalali, Colquhoun (1992)
End of explanation
"""
|
google/applied-machine-learning-intensive | content/xx_misc/regular_expressions/colab.ipynb | apache-2.0 | # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: <a href="https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/xx_misc/regular_expressions/colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2020 Google LLC.
End of explanation
"""
import re
pattern = re.compile('Hello')
type(pattern)
"""
Explanation: Introduction to Regular Expressions
Regular Expressions are a powerful feature of the Python programming language. You can access Python's regular expression support through the re module.
Matching Literals
A regular expression is simply a string of text. The most basic regular expression is just a string containing only alphanumeric characters.
We use the re.compile(...) method to convert the regular expression string into a Pattern object.
End of explanation
"""
if pattern.match('Hello World'):
print("We found a match")
else:
print("No match found")
"""
Explanation: Now that we have a compiled regular expression, we can see if the pattern matches another string.
End of explanation
"""
if pattern.match('I said Hello World'):
print("We found a match")
else:
print("No match found")
"""
Explanation: In the case above we found a match because 'Hello' is part of 'Hello World'.
What happens if 'Hello' is not at the start of a string?
End of explanation
"""
if pattern.match('HELLO'):
print("We found a match")
else:
print("No match found")
"""
Explanation: So the match only works if the pattern matches the start of the other string. What if the case is different?
End of explanation
"""
if pattern.match('He'):
print("We found a match")
else:
print("No match found")
"""
Explanation: Doesn't work. By default, the match is case sensitive.
What if it is only a partial match?
End of explanation
"""
if "Hello World".startswith("Hello"):
print("We found a match")
else:
print("No match found")
"""
Explanation: From what we have seen so far, matching with a string literal is pretty much functionally equivalent to the Python startswith(...) method that already comes as part of the String class.
End of explanation
"""
pattern = re.compile("ab+c")
for string in (
'abc',
'abbbbbbbc',
'ac',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
"""
Explanation: Well, that isn't too exciting. But it does provide us with an opportunity for a valuable lesson: Regular expressions are often not the best solution for a problem.
As we continue on in this colab, we'll see how powerful and expressive regular expressions can be. It is tempting to whip out a regular expression for many cases where they may not be the best solution. The regular expression engine can be slow for many types of expressions. Sometimes using other built-in tools or coding a solution in standard Python is better; sometimes it isn't.
Repetition
Matching exact characters one-by-one is kind of boring and doesn't allow regular expressions to showcase their true power. Let's move on to some more dynamic parts of the regular expression language. We will begin with repetition.
One or More
There are many cases where you'll need "one or more" of some character. To accomplish this, you simply add the + sign after the character that you want one or more of.
In the example below, we create an expression that looks for one or more 'b' characters. Notice how 'abc' and 'abbbbbbbc' are fine, but if we take all of the 'b' characters out, we don't get a match.
End of explanation
"""
pattern = re.compile("ab*c")
for string in (
'abc',
'abbbbbbbc',
'ac',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
"""
Explanation: Zero or More
Sometimes we find ourselves in a situation where we're actually okay with "zero or more" instances of a character. For this we use the '*' sign.
In the example below we create an expression that looks for zero or more 'b' characters. In this case all of the matches are successful.
End of explanation
"""
pattern = re.compile("ab?c")
for string in (
'abc',
'abbbbbbbc',
'ac',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
"""
Explanation: One or None
We've now seen cases where we will allow one-and-only-one of a character (exact match), one-or-more of a character, and zero-or-more of a character. The next case is the "one or none" case. For that we use the '?' sign.
End of explanation
"""
pattern = re.compile("ab{7}c")
for string in (
'abc',
'abbbbbbc',
'abbbbbbbc',
'abbbbbbbbc',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
"""
Explanation: M Repetitions
What if you want to match a very specific number of a specific character, but you don't want to type all of those characters in? The {m} expression is great for that. The 'm' value specifies exactly how many repetitions you want.
End of explanation
"""
pattern = re.compile("ab{2,}c")
for string in (
'abc',
'abbc',
'abbbbbbbbbbbbbbbbbbbbbbbbbbbbbbc',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
"""
Explanation: M or More
You can also ask for m-or-more of a character. Leaving a dangling comma in the {m,} does the trick.
End of explanation
"""
pattern = re.compile("ab{4,6}c")
for string in (
'abbbc',
'abbbbc',
'abbbbbc',
'abbbbbbc',
'abbbbbbbc',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
"""
Explanation: M through N
You can also request a specific range of repetition using {m,n}. Notice that 'n' is inclusive. This is one of the rare times that you'll find ranges in Python that are inclusive at the end. Any ideas why?
End of explanation
"""
pattern = re.compile("ab{,4}c")
for string in (
'abbbbbc',
'abbbbc',
'abbbc',
'abbc',
'abc',
'ac',
'a',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
"""
Explanation: N or Fewer
Sometimes you want a specific number of repetitions or fewer. For this, you can use a comma before the 'n' parameter like {,n}. Notice that "fewer" includes zero instances of the character.
End of explanation
"""
pattern = re.compile('[aeiou]')
for string in (
'a',
'e',
'i',
'o',
'u',
'x',
'ax',
'ex',
'ix',
'ox',
'ux',
'xa',
'xe',
'xi',
'xo',
'xu',
'xx',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
"""
Explanation: Though we have illustrated these repetition operations on single characters, they actually apply to more complex combinations of characters, as we'll see soon.
Character Sets
Matching a single character with repetition can be very useful, but often we want to work with more than one character. For that, the regular expressions need to have the concept of character sets. Character sets are contained within square brackets: []
The character set below specifies that we'll match any string that starts with a vowel.
End of explanation
"""
pattern = re.compile('[aeiou]{2,}')
for string in (
'aardvark',
'earth',
'eat',
'oar',
'aioli',
'ute',
'absolutely',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
"""
Explanation: Character sets can be bound to any of the repetition symbols that we have already seen. For example, if we wanted to match words that start with at least two vowels we could use the character set below.
End of explanation
"""
pattern = re.compile('[^aeiou]')
for string in (
'aardvark',
'earth',
'ice',
'oar',
'ukulele',
'bathtub',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
"""
Explanation: Character sets can also be negated. Simply put a ^ symbol at the start of the character set.
End of explanation
"""
pattern = re.compile('\d')
for string in (
'abc',
'123',
'1a2b',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
"""
Explanation: Character Classes
Some groupings of characters are so common that they have a shorthand "character class" assigned to them. Common character classes are represented by a backslash and a letter designating the class. For instance \d is the class for digits.
End of explanation
"""
pattern = re.compile('\d{4,}')
for string in (
'a',
'123',
'1234',
'12345',
'1234a',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
"""
Explanation: These classes can have repetitions after them, just like character sets.
End of explanation
"""
pattern = re.compile('\w\s\d')
for string in (
'a',
'1 3',
'_ 4',
'w 5',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
"""
Explanation: There are many common character classes.
\d matches digits
\s matches spaces, tabs, etc.
\w matches 'word' characters which include the letters of most languages, digits, and the underscore character
End of explanation
"""
pattern = re.compile('\d+\s\w+')
for string in (
'a',
'16 Candles',
'47 Hats',
'Number 5',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
"""
Explanation: You can mix these classes with repetitions.
End of explanation
"""
print("Not a digit")
pattern = re.compile('\D')
for string in (
'a',
'1',
' ',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
print("\n")
print("Not a space")
pattern = re.compile('\S')
for string in (
'a',
'1',
' ',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
print("\n")
print("Not a word")
pattern = re.compile('\W')
for string in (
'a',
'1',
' ',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
"""
Explanation: But what if you want to find everything that isn't a digit? Or everything that isn't a space?
To do that, simply put the character class in upper-case.
End of explanation
"""
pattern = re.compile('.')
for string in (
'a',
' ',
'4',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
"""
Explanation: Placement
We've moved into some pretty powerful stuff, but up until now all of our regular expressions have started matching from the first letter of a string. That is useful, but sometimes you'd like to match from anywhere in the string, or specifically at the end of the string. Let's explore some options for moving past the first character.
The Dot
So far we have always had some specific character to match, but what if we don't care which character we encounter? The dot (.) is a placeholder for any character (except a newline, by default).
End of explanation
"""
pattern = re.compile('.*s')
for string in (
'as',
' oh no bees',
'does this match',
'maybe',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
"""
Explanation: Though it might seem rather bland at first, the dot can be really useful when combined with repetition symbols.
End of explanation
"""
pattern = re.compile('^a.*s')
for string in (
'as',
'not as',
'a string that matches',
'a fancy string that matches',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
"""
Explanation: As you can see, using the dot allows us to move past the start of the string we want to match and instead search deeper inside the target string.
Starting Anchor
Now we can search anywhere in a string. However, we might still want to add a starting anchor to the beginning of a string for part of our match. The ^ anchors our match to the start of the string.
End of explanation
"""
pattern = re.compile('.*s$')
for string in (
'as',
'beees',
'sa',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
"""
Explanation: Ending Anchor
We can anchor to the end of a string with the $ symbol.
End of explanation
"""
pattern = re.compile('.*(cat|dog)')
for string in (
'cat',
'dog',
'fat cat',
'lazy dog',
'hog',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
"""
Explanation: Grouping
We have searched for exact patterns in our data, but sometimes we want either one thing or another. We can group searches with parentheses and match only one item in a group.
End of explanation
"""
pattern = re.compile('.*(dog)')
for string in (
'cat',
'dog',
'fat cat',
'lazy dog',
'hog',
):
print("'{}'".format(string), end=' ')
print('matches' if pattern.match(string) else 'does not match')
"""
Explanation: Grouping can also be done on a single item.
End of explanation
"""
pattern = re.compile('.*(dog)')
match = pattern.match("hot diggity dog")
if match:
print(match.group(0))
print(match.group(1))
"""
Explanation: But why would you ever group a single item? It turns out that grouping is 'capture grouping' by default and allows you to extract items from a string.
End of explanation
"""
pattern = re.compile('.*(dog).*(cat)')
match = pattern.match("hot diggity dog barked at a scared cat")
if match:
print(match.group(0))
print(match.group(1))
print(match.group(2))
"""
Explanation: In the case above, the entire string is considered group 0 because it matched the expression, but then the string 'dog' is group 1 because it was 'captured' by the parenthesis.
You can have more than one capture group:
End of explanation
"""
pattern = re.compile('.*(dog).*(mouse|cat)')
match = pattern.match("hot diggity dog barked at a scared cat")
if match:
print(match.group(0))
print(match.group(1))
print(match.group(2))
"""
Explanation: And capture groups can contain multiple values:
End of explanation
"""
pattern = re.compile('(cat|mouse)')
re.sub(pattern, 'whale', 'The dog is afraid of the mouse')
"""
Explanation: Grouping can get even richer. For example:
What happens when you have a group within another group?
Can a group be repeated?
These are more intermediate-to-advanced applications of regular expressions that you might want to explore on your own.
Substitution
So far we have been concerned with finding patterns in a string. Locating things is great, but sometimes you want to take action. A common action is substitution.
Say that I want to replace every instance of 'cat' or 'mouse' in a string with 'whale'. To do that I can compile a pattern that looks for 'cat' or 'mouse' and use that pattern in the re.sub method.
End of explanation
"""
re.sub('(cat|mouse)', 'whale', 'The dog is afraid of the mouse')
"""
Explanation: So far, we have compiled all of our regular expressions before using them. It turns out that many of the regular expression methods can accept a string and will compile that string for you.
You might see something like the code below in practice:
End of explanation
"""
print('\tHello')
print(r'\tHello')
print('\\')
print(r'\\')
"""
Explanation: sub is compiling the string "(cat|mouse)" into a pattern and then applying it to the input string.
Raw Strings
While working with Python code that uses regular expressions, you might occasionally encounter a string that looks like r'my string' instead of the 'my string' that you are accustomed to seeing.
The r designation means that the string is a raw string. Let's look at some examples to see what this means.
End of explanation
"""
test_data = [
'apple',
'banana',
'grapefruit',
'apricot',
'orange'
]
# Create a pattern here
for test in test_data:
pass # Your pattern match goes here
"""
Explanation: You'll notice that the regular string containing \t printed a tab character. The raw string printed a literal \t. Likewise the regular string printed \ while the raw string printed \\.
When processing a string, Python looks for escape sequences like \t (tab), \n (newline), \\ (backslash) and others to make your printed output more visually appealing.
Raw strings turn off that translation. This is useful for regular expressions because the backslash is a common character in regular expressions. Translating backslashes to other characters would break the expression.
Should you always use a raw string when creating a regular expression? Probably. Even if it isn't necessary now, the expression might grow over time, and it is helpful to have it in place as a safeguard.
Exercises
Exercise 1: Starts With 'a'
Create a regular expression pattern object that matches strings starting with the lower-case letter 'a'. Apply it to the test data provided. Loop over each string of test data and print "match" or "no match" as a result of your expression.
Student Solution
End of explanation
"""
test_data = [
'zoo',
'ZOO',
'bazooka',
'ZOOLANDER',
'kaZoo',
'ZooTopia',
'ZOOT Suit',
]
# Create a pattern here
for test in test_data:
pass # Your pattern match goes here
"""
Explanation: Exercise 2: Contains 'zoo' or 'ZOO'
Create a regular expression pattern object that matches strings containing 'zoo' or 'ZOO'. Apply it to the test data provided. Loop over each string of the test data and print "match" or "no match" as a result of your expression.
Student Solution
End of explanation
"""
test_data = [
'sing',
'talking',
'SCREAMING',
'NeVeReNdInG',
'ingeron',
]
# Create a pattern here
for test in test_data:
pass # Your pattern match goes here
"""
Explanation: Exercise 3: Endings
Create a regular expression pattern object that finds words that end with 'ing', independent of case. Apply it to the test data provided. Loop over each string of the test data and print "match" or "no match" as a result of your expression.
Student Solution
End of explanation
"""
|
DJCordhose/ai | notebooks/booster/2-manual-prediction.ipynb | mit | import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
%pylab inline
import pandas as pd
print(pd.__version__)
"""
Explanation: Manual Prediction and Validation
End of explanation
"""
# df = pd.read_csv('./insurance-customers-300.csv', sep=';')
df = pd.read_csv('./insurance-customers-300-2.csv', sep=';')
y=df['group']
X = df.values  # df.as_matrix() was deprecated and later removed from pandas; .values is the equivalent
df.drop('group', axis='columns', inplace=True)
df.describe()
"""
Explanation: First Step: Load Data and disassemble for our purposes
End of explanation
"""
# ignore this, it is just technical code
# should come from a lib, consider it to appear magically
# http://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
cmap_print = ListedColormap(['#AA8888', '#004000', '#FFFFDD'])
cmap_bold = ListedColormap(['#AA4444', '#006000', '#AAAA00'])
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#FFFFDD'])
font_size=25
def meshGrid(x_data, y_data):
h = 1 # step size in the mesh
x_min, x_max = x_data.min() - 1, x_data.max() + 1
y_min, y_max = y_data.min() - 1, y_data.max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
return (xx,yy)
def plotPrediction(clf, x_data, y_data, x_label, y_label, colors, title="", mesh=True, fname=None):
xx,yy = meshGrid(x_data, y_data)
plt.figure(figsize=(20,10))
if clf and mesh:
Z = clf.predict(np.c_[yy.ravel(), xx.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
# plt.scatter(x_data, y_data, c=colors, cmap=cmap_bold, s=80, marker='o', edgecolors='k')
plt.scatter(x_data, y_data, c=colors, cmap=cmap_print, s=200, marker='o', edgecolors='k')
plt.xlabel(x_label, fontsize=font_size)
plt.ylabel(y_label, fontsize=font_size)
plt.title(title, fontsize=font_size)
if fname:
plt.savefig(fname)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42, stratify=y)
X_train.shape, y_train.shape, X_test.shape, y_test.shape
X_train_kmh_age = X_train[:, :2]
X_test_kmh_age = X_test[:, :2]
X_train_2_dim = X_train_kmh_age
X_test_2_dim = X_test_kmh_age
plotPrediction(None, X_train_2_dim[:, 1], X_train_2_dim[:, 0],
'Age', 'Max Speed', y_train, mesh=False,
title="Train Data Max Speed vs Age", fname='train2.png')
"""
Explanation: Second Step: Split data and validate manual prediction
End of explanation
"""
plotPrediction(None, X_test_2_dim[:, 1], X_test_2_dim[:, 0],
'Age', 'Max Speed', y_test, mesh=False,
title="Test Data Max Speed vs Age", fname='test2.png')
"""
Explanation: Exercise, Part I: Using pen and paper
Separate the three groups of insurance customers visually
every part of the data needs a category
draw as many lines and whatever shapes you want to separate the categories from each other
mark each category by either putting R(ed), G(reen), or Y(ellow)
have a look at this colored image above as it might not be easy to see the colors in the printed sheet
Important: Use a hard pen or pencil and apply some pressure when drawing
Third step: Validate your prediction using test data
Exercise, Part II: Using pen and paper
Validate your categories with test data
turn the page and look at the test data chart
try to see the lines that have been printed through to this one
redraw the lines with the pen for more clarity
how well would your separation perform on this chart?
The test data is the one that matters, as it tells you how well you generalized
End of explanation
"""
|
KECB/learn | BAMM.101x/Functions_part_1.ipynb | mit | x=5
y=7
z=max(x,y) #max is the function. x and y are the arguments
print(z) #print is the function. z is the argument
"""
Explanation: <h1>Functions</h1>
<h2>Calling a function</h2>
End of explanation
"""
!pip install easygui
# pip: the Python package installer
# !: runs the command in the shell rather than inside Python
# easygui: a Python library for simple GUI widgets
import easygui  # Imports easygui into the current namespace. We now have access to functions and objects in this library
easygui.msgbox("To be or not to be","What Hamlet elocuted") #msgbox is a function in easygui.
"""
Explanation: <h2>Installing libraries and importing functions</h2>
End of explanation
"""
import math #imports the math namespace into our program namespace
math.sqrt(34.23) #Functions in the math namespace have to be disambiguated
import math as m #imports the math namespace into our program namespace but gives it the name 'm'
m.sqrt(34.23) #Functions in the math namespace have to be disambiguated using the name 'm' rather than 'math'
from math import sqrt #imports the sqrt function into our program namespace. No other math functions are accessible
sqrt(34.23) #No disambiguation necessary
"""
Explanation: <h2>Importing functions</h2>
End of explanation
"""
def spam(x,y,k):
if x>y:
z=x
else:
z=y
p = z/k
return p #Only the value of p is returned by the function
spam(6,4,2)
"""
Explanation: <h3>Returning values from a function</h3>
The <b>return</b> statement tells a function what to return to the calling program
End of explanation
"""
def eggs(x,y):
z = x/y
print(eggs(4,2))
"""
Explanation: <h3>If there is no return statement, Python returns None</h3>
End of explanation
"""
def foo(x,y,z):
if z=="DESCENDING":
return max(x,y),min(x,y),z
if z=="ASCENDING":
return min(x,y),max(x,y),z
else:
return x,y,z
a,b,c = foo(4,2,"ASCENDING")
print(a,b,c)
"""
Explanation: <h3>Returning multiple values</h3>
End of explanation
"""
a = foo(4,2,"ASCENDING")
print(a)
"""
Explanation: <h4>Python unpacks the returned value into each of a,b, and c. If there is only one identifier on the LHS, it won't unpack</h4>
End of explanation
"""
a,b = foo(4,2,"DESCENDING")
"""
Explanation: <h4>If there is a mismatch between the number of identifiers on the LHS and the number of values returned, you'll get an error</h4>
End of explanation
"""
def bar(x,y):
return x/y
bar(4,2) #x takes the value 4 and y takes the value 2
def bar(x,y):
return x/y
bar(y=4,x=2) #x takes the value 2 and y takes the value 4 (Explicit assignment)
"""
Explanation: <h2>Value assignment to arguments</h2>
<li>Left to right
<li>Unless explicitly assigned to the argument identifiers in the function definition
End of explanation
"""
def order_by(a,b,order_function):
return order_function(a,b)
print(order_by(4,2,min))
print(order_by(4,2,max))
def change(x):
    x = (1,)   # rebinds the local name x only; the caller's tuple is untouched
    print(x)

x = (1, 2)
change(x)      # prints (1,)
print(x)       # the original tuple is unchanged: prints (1, 2)

def replace(test_string, replace_string):
    start_index = test_string.find(replace_string)
    result = ""
    x = "bodega"
    if start_index >= 0:
        # Extract the matched slice, then swap it for "bodega" in the full string
        result = test_string[start_index:start_index+len(replace_string)]
        result = test_string.replace(result, x)
    return result   # returns "" when replace_string is not found

print(replace("Hi how are you?", "yu"))   # "yu" is absent, so this prints ""
"""
Explanation: <h2>A function can have function arguments</h2>
End of explanation
"""
|
sys-bio/tellurium | examples/notebooks/core/tellurium_stochastic.ipynb | apache-2.0 | from __future__ import print_function
import tellurium as te
te.setDefaultPlottingEngine('matplotlib')
%matplotlib inline
import numpy as np
r = te.loada('S1 -> S2; k1*S1; k1 = 0.1; S1 = 40')
r.integrator = 'gillespie'
r.integrator.seed = 1234
results = []
for k in range(1, 50):
r.reset()
s = r.simulate(0, 40)
results.append(s)
r.plot(s, show=False, alpha=0.7)
te.show()
"""
Explanation: Back to the main Index
Stochastic simulation
Stochastic simulations can be run by changing the current integrator type to 'gillespie' or by using the r.gillespie function.
End of explanation
"""
results = []
for k in range(1, 20):
r.reset()
r.setSeed(123456)
s = r.simulate(0, 40)
results.append(s)
r.plot(s, show=False, loc=None, color='black', alpha=0.7)
te.show()
"""
Explanation: Seed
Setting the identical seed for all repeats results in identical traces in each simulation.
End of explanation
"""
import tellurium as te
import numpy as np
r = te.loada('S1 -> S2; k1*S1; k1 = 0.02; S1 = 100')
r.setSeed(1234)
for k in range(1, 20):
r.resetToOrigin()
res1 = r.gillespie(0, 10)
# change in parameter after the first half of the simulation
r.k1 = r.k1*20
res2 = r.gillespie (10, 20)
sim = np.vstack([res1, res2])
te.plot(sim[:,0], sim[:,1:], alpha=0.7, names=['S1', 'S2'], tags=['S1', 'S2'], show=False)
te.show()
"""
Explanation: Combining Simulations
You can combine two timecourse simulations and change e.g. parameter values in between each simulation. The gillespie method simulates up to the given end time 10, after which you can make arbitrary changes to the model, then simulate again.
When using the te.plot function, you can pass the parameter names, which controls the names that will be used in the figure legend, and tags, which ensures that traces with the same tag will be drawn with the same color.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/noaa-gfdl/cmip6/models/gfdl-am4/land.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'gfdl-am4', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: NOAA-GFDL
Source ID: GFDL-AM4
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: CMIP5:GFDL-CM3
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:34
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmopshere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
DOC.set_value("Other: ice")
DOC.set_value("bare soil")
DOC.set_value("lake")
DOC.set_value("vegetated")
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Soil type prescribed at each grid point")
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Soil type prescribed at each grid point")
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Bare soil albedo prescribed at each grid point")
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Dynamic")
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
DOC.set_value("vegetation state")
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
DOC.set_value("distinction between direct and diffuse albedo")
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(2)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(20)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
DOC.set_value("Other: generalized richards equation")
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
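For readers unfamiliar with the "generalized richards equation" entry above, the standard mixed form of the Richards equation for vertical soil water flow is (a textbook reference formulation, not taken from this model's documentation):

```latex
\frac{\partial \theta}{\partial t} =
  \frac{\partial}{\partial z}
  \left[ K(\theta) \left( \frac{\partial \psi}{\partial z} + 1 \right) \right] - S
```

where $\theta$ is volumetric water content, $K(\theta)$ the hydraulic conductivity, $\psi$ the matric potential, $z$ depth (positive upward), and $S$ a sink term such as root water uptake. The exact "generalized" variant used by any given land model may differ in variable choice and boundary treatment.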
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(20)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Basic thermodynamics")
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
DOC.set_value("Explicit diffusion")
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
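The "Explicit diffusion" heat-storage choice recorded above conventionally corresponds to solving the one-dimensional soil heat conduction equation (shown here in its standard textbook form for context; the model's actual discretisation is not specified in this document):

```latex
C(z)\, \frac{\partial T}{\partial t} =
  \frac{\partial}{\partial z}
  \left( \lambda(z)\, \frac{\partial T}{\partial z} \right)
```

where $T$ is soil temperature, $C(z)$ the volumetric heat capacity, and $\lambda(z)$ the thermal conductivity, both typically functions of soil moisture and composition.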
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
DOC.set_value("Other: plant uptake and ground divergence")
DOC.set_value("soil moisture freeze-thaw")
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(5)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
DOC.set_value("constant")
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
DOC.set_value("prognostic")
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
DOC.set_value("prognostic")
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
DOC.set_value("prognostic")
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
DOC.set_value("ground snow fraction")
DOC.set_value("vegetation snow fraction")
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
DOC.set_value("Other: snow refreezing")
DOC.set_value("snow interception")
DOC.set_value("snow melting")
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
DOC.set_value("prognostic")
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
DOC.set_value("vegetation type")
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, describe the dependencies of the snow albedo calculations*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
DOC.set_value("vegetation types")
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
DOC.set_value("C3 grass")
DOC.set_value("C4 grass")
DOC.set_value("broadleaf tree")
DOC.set_value("needleleaf tree")
DOC.set_value("vegetated")
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "open shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
DOC.set_value("dynamical (varying from simulation)")
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
DOC.set_value("prognostic")
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
DOC.set_value("prognostic")
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
DOC.set_value("prognostic")
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass *
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
DOC.set_value("CO2")
DOC.set_value("light")
DOC.set_value("temperature")
DOC.set_value("water availability")
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(1)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
DOC.set_value("Other: linked to photosynthesis")
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
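For context on the "alpha" and "beta" choices listed above, these bulk-transfer evaporation formulations are conventionally written as follows (standard textbook forms under common notation, not specific to this model):

```latex
E_\alpha = \rho\, C_E\, |U| \left[ \alpha\, q_{\mathrm{sat}}(T_s) - q_a \right],
\qquad
E_\beta = \beta\, \rho\, C_E\, |U| \left[ q_{\mathrm{sat}}(T_s) - q_a \right]
```

where $\rho$ is air density, $C_E$ the exchange coefficient, $|U|$ wind speed, $q_{\mathrm{sat}}(T_s)$ the saturation specific humidity at surface temperature, and $q_a$ the air specific humidity; $\alpha$ rescales the surface saturation humidity, while $\beta$ limits evaporation relative to the potential rate.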
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
DOC.set_value("transpiration")
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(5)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Leaves, storage, fine roots, sapwood, heartwood")
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
DOC.set_value("leaves + fine roots + coarse roots + stems")
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
DOC.set_value("function of plant allometry")
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Fast and slow-respiring soil carbon")
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Carbon dioxide")
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(1)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between the river routing and atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
DOC.set_value("present day")
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
DOC.set_value("direct (large rivers)")
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
DOC.set_value("heat")
DOC.set_value("water")
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
DOC.set_value("heat")
DOC.set_value("water")
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled with rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
DOC.set_value("prognostic")
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
DOC.set_value("vertical")
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated (horizontal, vertical, etc.)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins not flowing to the ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
|
akloster/amplicon_classification | notebooks/amplicon_metadata.ipynb | isc | %load_ext autoreload
%autoreload 2
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn
import porekit
import re
import pysam
import random
import feather
%matplotlib inline
"""
Explanation: Preparing the reads
[Loose et al] published their raw read files on ENA. This script uses four of these sets, which contain reads of amplicons. These were processed using different "read until" scripts (or none at all), but that doesn't matter. What does matter is to get as many real reads as possible.
End of explanation
"""
directories = ["AmpliconOddEvenControl", "AmpliconOddReadUntil", "AmpliconEvenReadUntil", "Balanced"]
meta_frames = []
for d in directories:
print(d)
df = porekit.gather_metadata("/home/andi/nanopore/%s" % d, workers=4)
print(df.shape[0])
meta_frames.append(df)
meta = pd.concat (meta_frames)
for df in meta_frames:
print (df.shape)
"""
Explanation: Load metadata for 4 datasets
End of explanation
"""
meta_frames[0].index.values[0]
meta_frames[1].index.values[0]
meta_frames[2].index.values[0]
meta_frames[3].index.values[0]
"""
Explanation: The individual filenames will look like this:
End of explanation
"""
def sam_to_dataframe(file_name):
sam = pysam.AlignmentFile(file_name)
records = []
for i, segment in enumerate(sam):
d = dict()
for k in ["query_name", "reference_start", "reference_end", "mapping_quality", ]:
d[k] = getattr(segment, k)
records.append(d)
alignments = pd.DataFrame.from_records(records)
return alignments
base = "/home/andi/nanopore/RUFigs/data"
bams = ["/fig3/RU_dudu/RU_dudu_Template.bam",
"/fig3/RU_udud/RU_udud_Template.bam",
"/fig3/NO_RU/NO_RU_Template.bam",
"/fig4/200/200_Template.bam",
]
alignments = pd.concat([sam_to_dataframe(base+file_name) for file_name in bams])
"""
Explanation: Merging alignment data
[Loose et al] provide all the intermediate data files necessary to recreate their figures. Among these, there are some alignment files in SAM format.
Because it doesn't make sense to classify complement sequences/matches in the Read Until context, we only use the "Template" strands.
End of explanation
"""
regexp = re.compile(r'_(?P<a>\d+)_(?P<b>\d+)_ch(?P<c>\d+)_file(?P<d>\d+)')
def extract(s):
try:
return "_".join(regexp.search(s).groups())
except:
return ""
alignments["alignment_key"] = alignments.query_name.map(extract)
meta["alignment_key"] = meta.index.map(extract)
alignments["alignment_key"].map(lambda s: s.split("_")[0]).unique()
meta["run_number"] = meta["alignment_key"].map(lambda s: s.split("_")[0])
meta2 = meta.reset_index().merge(alignments).set_index("filename")
meta2.shape
meta = meta2
"""
Explanation: Unfortunately filenames and sequence names tend to get a bit mangled when going from Fast5 to SAM, for various reasons. As of now, there is no particular convention for naming read files or naming the exported sequences. On the one hand I don't feel like it is a good idea to abuse filenames as character-separated database rows; on the other hand, using the unique read id from the Fast5 file isn't very human-friendly either.
To assign genomic coordinates to the reads, a regular expression extracts four numbers from the file name/query name, making each read unique and matchable.
End of explanation
"""
f, ax = plt.subplots()
f.set_figwidth(13)
ax.hist(meta.reference_start, bins=110);
"""
Explanation: Visualizing the alignments
This is just a simple histogram showing where the "reference_start" values fall.
End of explanation
"""
amplicons = [(52,1980),
(2065,3965),
(4070,5989),
(6059,7981),
(8012,9947),
(10008,11963),
(12006,13941),
(14011,15945),
(16076,17987),
(18022,19972),
(20053,21979),
]
def amplicon_from_position(pos):
for i,c in enumerate(amplicons):
a,b = c
if a<=pos<=b:
return i
meta["amplicon"] = meta.reference_start.map(amplicon_from_position)
"""
Explanation: Processing the amplicons
[Loose et al] pooled 11 amplicons. Each read has to be assigned retroactively to one of these, represented by number from 0 to 10.
End of explanation
"""
meta.amplicon.isnull().sum()
"""
Explanation: How many reads failed to be assigned?
End of explanation
"""
meta = meta[np.isnan(meta.amplicon)==False]
meta.shape
"""
Explanation: Purge these:
End of explanation
"""
meta.query("template_length>500").groupby("amplicon").format.count()
"""
Explanation: The number of viable reads is diminishing quickly. But this can't be helped.
How many reads longer than 500 bases are assigned to each amplicon?
End of explanation
"""
sufficient = meta.query("template_length>=500")
all_files = sufficient.index.values
test_files = []
for i in range(11):
sub = sufficient[sufficient.amplicon==i]
test_files += list(np.random.choice(sub.index.values, 200))
training_files = list(set(sufficient.index.values) - set(test_files))
len(training_files), len(test_files)
test_data = sufficient.loc[np.array(test_files)]  # .ix is deprecated; .loc does label-based indexing
feather.write_dataframe(test_data, "amplicon_test_metadata.feather")
training_data = sufficient.loc[np.array(training_files)]  # .ix is deprecated; .loc does label-based indexing
feather.write_dataframe(training_data, "amplicon_training_metadata.feather")
"""
Explanation: Unfortunately some amplicons are severely underrepresented, with one going as low as 635 reads.
This is a big problem for dividing the data into training and test sets, because blindly sampling from the total pool may skew this balance even further. The algorithms will then bias against the least-represented amplicons to gain a bit of extra accuracy, which is not what we want. With ten times as much data we could balance both the training and the test set. As it is, I chose to balance the test set only, to get a more realistic view of the performance. My assumption is that, over multiple repetitions of amplification / library preparation and sequencing runs, the amplicons should be roughly equally distributed.
To balance the test set, 200 reads from each amplicon are chosen. This makes for a very weak test set. But again, this can't be helped at this point.
End of explanation
"""
|
maxis42/ML-DA-Coursera-Yandex-MIPT | 4 Stats for data analysis/Homework/14 test AB browser test/AB browser test.ipynb | mit | from __future__ import division
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.sandbox.stats.multicomp import multipletests
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
ab_data = pd.read_csv('ab_browser_test.csv')
ab_data.info()
ab_data.head()
#transform 'browser' column to int
ab_data.browser = [int(ab_data.browser[i][9:]) for i in range(ab_data.shape[0])]
ab_data.head(10)
"""
Explanation: AB browser test
In this assignment we will:
* analyze an A/B test run on real Yandex users;
* confirm or refute the presence of changes in user behavior between the control and experimental (exp) groups;
* determine the nature of these changes and the practical significance of the introduced change;
* find out which group of users gains or loses the most from the tested change (i.e. localize the change).
Data description:
* userID: unique user identifier
* browser: the browser that userID used
* slot: the user's status in the study (exp = saw the modified page, control = saw the unchanged page)
* n_clicks: the number of clicks the user made over n_queries
* n_queries: the number of queries userID made using browser
* n_nonclk_queries: the number of the user's queries in which not a single click was made
Note that not everyone uses only one browser, so the userID column contains repeated identifiers. In this dataset, the combination of userID and browser is unique.
End of explanation
"""
#number of people in exp and control groups
ab_data.slot.value_counts()
"""
Explanation: The main metric we will focus on in this work is the number of user clicks on the web page, as a function of the tested change to that page.
Let's compute how many more user clicks the exp group has compared to the control group, as a percentage of the number of clicks in the control group.
End of explanation
"""
#indices split by groups
exp = ab_data.slot.loc[ab_data.slot == 'exp'].index
ctrl = ab_data.slot.loc[ab_data.slot == 'control'].index
#assumption error
err = 1 - ab_data.slot.loc[exp].shape[0] / ab_data.slot.loc[ctrl].shape[0]
print('Assumption error: %.4f' % err)
#total number of clicks in each group
exp_cl_num = ab_data.n_clicks.loc[exp].sum()
ctrl_cl_num = ab_data.n_clicks.loc[ctrl].sum()
print('Total number of clicks in each group')
print('Exp: %d' % exp_cl_num)
print('Control: %d' % ctrl_cl_num)
#proportion increase of clicks for exp over control
prop_inc_clicks = (exp_cl_num / ctrl_cl_num - 1) * 100
print('Proportion increase of clicks for exp over control: %.3f%%' % prop_inc_clicks)
"""
Explanation: As a first approximation, assume that the number of people in each group is the same.
End of explanation
"""
#Clicks mean values
exp_cl_mean = ab_data.n_clicks.loc[exp].mean()
ctrl_cl_mean = ab_data.n_clicks.loc[ctrl].mean()
print('Mean number of clicks in each group')
print('Exp: %.4f' % exp_cl_mean)
print('Control: %.4f' % ctrl_cl_mean)
print('')
#Clicks median values
exp_cl_mean = ab_data.n_clicks.loc[exp].median()
ctrl_cl_mean = ab_data.n_clicks.loc[ctrl].median()
print('Median number of clicks in each group')
print('Exp: %d' % exp_cl_mean)
print('Control: %d' % ctrl_cl_mean)
def get_bootstrap_samples(data, n_samples):
indices = np.random.randint(0, len(data), (n_samples, len(data)))
samples = data[indices]
return samples
def stat_intervals(stat, alpha):
boundaries = np.percentile(stat, [100 * alpha / 2., 100 * (1 - alpha / 2.)])
return boundaries
%%time
#confidence intervals estimation
np.random.seed(0)
num_of_samples = 500
exp_cl_mean, ctrl_cl_mean = np.empty(num_of_samples), np.empty(num_of_samples)
exp_cl_median, ctrl_cl_median = np.empty(num_of_samples), np.empty(num_of_samples)
ctrl_cl_var = np.empty(num_of_samples)
exp_data = get_bootstrap_samples(ab_data.n_clicks.loc[exp].values, num_of_samples)
ctrl_data = get_bootstrap_samples(ab_data.n_clicks.loc[ctrl].values, num_of_samples)
for i in range(num_of_samples):
exp_cl_mean[i], ctrl_cl_mean[i] = exp_data[i].mean(), ctrl_data[i].mean()
exp_cl_median[i], ctrl_cl_median[i] = np.median(exp_data[i]), np.median(ctrl_data[i])
ctrl_cl_var[i] = ctrl_data[i].var()
# elementwise difference via plain NumPy arithmetic; the original map(...) returns
# a lazy iterator under Python 3, which np.percentile cannot consume
delta_mean = exp_cl_mean - ctrl_cl_mean
delta_median = exp_cl_median - ctrl_cl_median
delta_mean_bnd = stat_intervals(delta_mean, 0.05)
delta_median_bnd = stat_intervals(delta_median, 0.05)
print('Conf. int. delta mean: [%.4f, %.4f]' % (delta_mean_bnd[0], delta_mean_bnd[1]))
print('Conf. int. delta median: [%d, %d]' % (delta_median_bnd[0], delta_median_bnd[1]))
print('legend: diff = exp - control')
"""
Explanation: Let's take a closer look at the difference between the two groups (control and exp) with respect to the number of user clicks.
To do this, we use the bootstrap to construct 95% confidence intervals for the mean and median number of clicks in each of the two groups.
End of explanation
"""
_ = plt.figure(figsize=(15,5))
_ = plt.subplot(121)
_ = plt.hist(ab_data.n_clicks.loc[exp], bins=100)
_ = plt.title('Experiment group')
_ = plt.subplot(122)
_ = plt.hist(ab_data.n_clicks.loc[ctrl], bins=100)
_ = plt.title('Control group')
"""
Explanation: Since there is quite a lot of data (on the order of half a million unique users), a difference of a few percent can be not only practically significant but also statistically significant. The latter claim requires additional verification.
End of explanation
"""
#probability plot for means
_ = stats.probplot(ctrl_cl_mean, plot=plt, rvalue=True)
_ = plt.title('Probability plot for means')
#probability plot for variances
_ = stats.probplot(ctrl_cl_var, plot=plt, dist='chi2', sparams=(ctrl_cl_mean.shape[0]-1), rvalue=True)
_ = plt.title('Probability plot for variances')
"""
Explanation: Student's t-test has many advantages, which is why it is frequently used in A/B experiments. Sometimes, however, its use is unjustified because the data distribution is strongly skewed.
For simplicity, consider the one-sample t-test. For the assumptions of the t-test to truly hold, we need:
the sample mean to be normally distributed, N(μ, σ²/n)
the unbiased variance estimate, with a scaling factor, to follow a chi-squared distribution with n−1 degrees of freedom, χ²(n−1)
Both of these assumptions can be checked with the bootstrap. For now we restrict ourselves to the control group, whose click distribution we will call the data for this question.
Since we do not know the true distribution of the population, we can apply the bootstrap to understand how the mean and the sample variance are distributed.
To do this, we:
draw n_boot_samples pseudo-samples from the data;
for each of these samples, compute the mean and the sum of squared deviations from the sample mean;
for the resulting vector of n_boot_samples means, build a q-q plot with scipy.stats.probplot against the normal distribution;
for the resulting vector of sums of squared deviations from the sample mean, build a q-q plot with scipy.stats.probplot against the chi-squared distribution.
End of explanation
"""
users_nclicks_exp = ab_data.loc[exp].groupby(['userID', 'browser']).sum().loc[:,'n_clicks']
users_nclicks_ctrl = ab_data.loc[ctrl].groupby(['userID', 'browser']).sum().loc[:,'n_clicks']
users_nclicks_exp.head()
users_nclicks_ctrl.head()
stats.mannwhitneyu(users_nclicks_exp, users_nclicks_ctrl, alternative='two-sided')
"""
Explanation: One possible alternative to the t-test is the Mann-Whitney test. Over a fairly broad class of distributions it is asymptotically more efficient than the t-test, and it does not require parametric assumptions about the shape of the distribution.
We split the sample into two parts corresponding to the control and exp groups, and transform the data so that each user is represented by the total number of their clicks. Using the Mann-Whitney test, we then test the hypothesis that the means are equal.
End of explanation
"""
browsers_nclicks_exp = ab_data.loc[exp].groupby(['browser', 'userID']).sum().loc[:,'n_clicks']
browsers_nclicks_ctrl = ab_data.loc[ctrl].groupby(['browser', 'userID']).sum().loc[:,'n_clicks']
browsers_nclicks_exp.head()
browsers_nclicks_ctrl.head()
#Unique browsers
browsers = np.unique(ab_data.browser)
print('Unique browsers numbers: ' + str(browsers))
print('')
print('Mann-Whitney rank test without multipletest')
mw_p = np.empty(browsers.shape[0])
for i, br in enumerate(browsers):
    _, mw_p[i] = stats.mannwhitneyu(browsers_nclicks_exp.loc[br, :], browsers_nclicks_ctrl.loc[br, :], alternative='two-sided')
    print('Browser #%d: p-value = %.4f' % (br, mw_p[i]))
print('')
print('Mann-Whitney rank test with multipletest')
_, mw_p_corr, _, _ = multipletests(mw_p, alpha = 0.05, method = 'holm')
for i, br in enumerate(browsers):
    print('Browser #%d: p-value = %.4f' % (br, mw_p_corr[i]))
"""
Explanation: Let's check for which browser the difference in the number of clicks between the control and experimental groups is most pronounced.
To do this, we apply the Mann-Whitney test between the control and exp groups for each slice (each unique value of the browser column) and apply the Holm-Bonferroni correction for multiple testing with α = 0.05.
End of explanation
"""
browsers_nonclk_q_exp = ab_data.loc[exp].groupby(['browser']).sum().loc[:,'n_nonclk_queries']
browsers_clk_q_exp = ab_data.loc[exp].groupby(['browser']).sum().loc[:,'n_queries']
browsers_nonclk_q_prop_exp = browsers_nonclk_q_exp / browsers_clk_q_exp
browsers_nonclk_q_ctrl = ab_data.loc[ctrl].groupby(['browser']).sum().loc[:,'n_nonclk_queries']
browsers_clk_q_ctrl = ab_data.loc[ctrl].groupby(['browser']).sum().loc[:,'n_queries']
browsers_nonclk_q_prop_ctrl = browsers_nonclk_q_ctrl / browsers_clk_q_ctrl
print('Control / experimental groups')
for br in browsers:
    print('Browser #%d: %s / %s' % (br,
          browsers_nonclk_q_prop_ctrl.loc[br],
          browsers_nonclk_q_prop_exp.loc[br]))
"""
Explanation: For each browser in each of the two groups (control and exp), we compute the proportion of queries in which the user did not click at all. This can be done by dividing the sum of n_nonclk_queries by the sum of n_queries. Multiplying this value by 100 gives the percentage of unclicked queries, which is easier to interpret.
End of explanation
"""
|
jamesnw/wtb-data | notebooks/Style Similarity.ipynb | mit | import math
# Square the difference of each row, and then return the mean of the column.
# This is the average difference between the two.
# It will be higher if they are different, and lower if they are similar
def similarity(styleA, styleB):
diff = np.square(wtb[styleA] - wtb[styleB])
return diff.mean()
res = []
# Transpose so that each style becomes a column, then loop over style pairs
wtb = wtb.T
for styleA in wtb.columns:
for styleB in wtb.columns:
        # Skip if styleA and styleB are the same.
# To prevent duplicates, skip if A is after B alphabetically
if styleA != styleB and styleA < styleB:
res.append([styleA, styleB, similarity(styleA, styleB)])
df = pd.DataFrame(res, columns=["styleA", "styleB", "similarity"])
"""
Explanation: Question: I want to know how similar 2 styles are. I really like Apricot Blondes, and I want to see what other styles Apricot would go in. Perhaps it would be good in a German Pils.
How to get there: The dataset shows the percentage of votes that said a style-addition combo would likely taste good. So, we can compare the votes on each addition for any two styles, and see how similar they are.
End of explanation
"""
df.sort_values("similarity").head(10)
"""
Explanation: Top 10 most similar styles
End of explanation
"""
df.sort_values("similarity", ascending=False).head(10)
"""
Explanation: 10 Least Similar styles
End of explanation
"""
def comboSimilarity(styleA, styleB):
# styleA needs to be before styleB alphabetically
    if styleA > styleB:
        styleA, styleB = styleB, styleA
return df.loc[df['styleA'] == styleA].loc[df['styleB'] == styleB]
comboSimilarity('Blonde Ale', 'German Pils')
"""
Explanation: Similarity of a specific combo
End of explanation
"""
df.describe()
"""
Explanation: But is that good or bad? How does it compare to others?
End of explanation
"""
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
n, bins, patches = plt.hist(df['similarity'], bins=50)
similarity = float(comboSimilarity('Blonde Ale', 'German Pils')['similarity'])
# Find the histogram bin that holds the similarity between the two
target = np.argmax(bins>similarity)
patches[target].set_fc('r')
plt.show()
"""
Explanation: We can see that Blonde Ales and German Pils are right between the mean and 50th percentile, so it's not a bad idea, but it's not a good idea either.
We can also take a look at this visually to confirm.
End of explanation
"""
|
junhwanjang/DataSchool | Lecture/09. 기초 확률론 3 - 확률모형/2) 이항 확률 분포.ipynb | mit | N = 10
theta = 0.6
rv = sp.stats.binom(N, theta)
rv
"""
Explanation: The binomial distribution
A Bernoulli trial is an experiment whose outcome is either success or failure.
Consider performing $N$ Bernoulli trials, each with success probability $\theta$. In the luckiest case all $N$ trials succeed, and in the unluckiest case none of them do. If we call the number of successes out of the $N$ trials the random variable $X$, then $X$ takes one of the integer values from 0 to $N$.
Such a random variable is said to follow the binomial distribution.
$$ X \sim \text{Bin}(x;N,\theta) $$
Let us describe the binomial distribution with a formula.
Assume a random variable $Y$ that follows the Bernoulli distribution, taking the value 0 or 1.
$$ Y \sim \text{Bern}(y;\theta) $$
Let $y_1, y_2, \cdots, y_N$ be $N$ samples of this random variable. Since each of these values is either 0 (failure) or 1 (success), the number of successes out of the $N$ trials is the sum of the $N$ sample values.
$$ X = \sum_{i=1}^N y_i $$
Written as a formula, the binomial distribution is:
$$ \text{Bin}(x;N,\theta) = \binom N x \theta^x(1-\theta)^{N-x} $$
where
$$ \binom N x =\dfrac{N!}{x!(N-x)!} $$
$$ N! = N\cdot (N-1) \cdots 2 \cdot 1 $$
Simulating the binomial distribution with SciPy
The binom class in SciPy's stats subpackage implements the binomial distribution. Its parameters are set with the n and p arguments.
End of explanation
"""
xx = np.arange(N + 1)
plt.bar(xx, rv.pmf(xx), align="center")
plt.ylabel("P(x)")
plt.title("pmf of binomial distribution")
plt.show()
"""
Explanation: The pmf method computes the probability mass function (pmf).
End of explanation
"""
np.random.seed(0)
x = rv.rvs(100)
x
sns.countplot(x)
plt.show()
"""
Explanation: To run a simulation, use the rvs method.
End of explanation
"""
y = np.bincount(x, minlength=N+1)/len(x)  # N+1 bins (values 0..N) so the length matches rv.pmf(xx)
df = pd.DataFrame({"theoretic": rv.pmf(xx), "simulation": y}).stack()
df = df.reset_index()
df.columns = ["value", "type", "ratio"]
df.head()
sns.barplot(x="value", y="ratio", hue="type", data=df)
plt.show()
"""
Explanation: To show the theoretical distribution and the sample distribution together, use the following code.
End of explanation
"""
|
Diyago/Machine-Learning-scripts | DEEP LEARNING/NLP/BERD pretrained model/Named Entity Recognition With Bert.ipynb | apache-2.0 | MAX_LEN = 75
bs = 32
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpu = torch.cuda.device_count()
torch.cuda.get_device_name(0)
"""
Explanation: We will limit our sequence length to 75 tokens and we will use a batch size of 32, as suggested by the BERT paper. Note that BERT natively supports sequences of up to 512 tokens.
End of explanation
"""
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
"""
Explanation: The BERT implementation comes with a pretrained tokenizer and a defined vocabulary. We load the one related to the smallest pre-trained model, bert-base-uncased. Also try the cased variant, since it is well suited for NER.
End of explanation
"""
tokenized_texts = [tokenizer.tokenize(sent) for sent in sentences]
print(tokenized_texts[0])
"""
Explanation: Now we tokenize all sentences
End of explanation
"""
input_ids = pad_sequences([tokenizer.convert_tokens_to_ids(txt) for txt in tokenized_texts],
maxlen=MAX_LEN, dtype="long", truncating="post", padding="post")
tags = pad_sequences([[tag2idx.get(l) for l in lab] for lab in labels],
maxlen=MAX_LEN, value=tag2idx["O"], padding="post",
dtype="long", truncating="post")
"""
Explanation: Next, we cut and pad the token and label sequences to our desired length.
End of explanation
"""
attention_masks = [[float(i>0) for i in ii] for ii in input_ids]
tr_inputs, val_inputs, tr_tags, val_tags = train_test_split(input_ids, tags,
random_state=2018, test_size=0.1)
tr_masks, val_masks, _, _ = train_test_split(attention_masks, input_ids,
random_state=2018, test_size=0.1)
tr_inputs = torch.tensor(tr_inputs)
val_inputs = torch.tensor(val_inputs)
tr_tags = torch.tensor(tr_tags)
val_tags = torch.tensor(val_tags)
tr_masks = torch.tensor(tr_masks)
val_masks = torch.tensor(val_masks)
train_data = TensorDataset(tr_inputs, tr_masks, tr_tags)
train_sampler = RandomSampler(train_data)
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=bs)
valid_data = TensorDataset(val_inputs, val_masks, val_tags)
valid_sampler = SequentialSampler(valid_data)
valid_dataloader = DataLoader(valid_data, sampler=valid_sampler, batch_size=bs)
"""
Explanation: The Bert model supports something called attention_mask, which is similar to the masking in keras. So here we create the mask to ignore the padded elements in the sequences.
End of explanation
"""
model = BertForTokenClassification.from_pretrained("bert-base-uncased", num_labels=len(tag2idx))
model.cuda();
"""
Explanation: Setup the Bert model for finetuning
The pytorch-pretrained-bert package provides a BertForTokenClassification class for token-level predictions. BertForTokenClassification is a fine-tuning model that wraps BertModel and adds a token-level classifier on top of it. The token-level classifier is a linear layer that takes the last hidden state of the sequence as input. We load the pre-trained bert-base-uncased model and provide the number of possible labels.
End of explanation
"""
FULL_FINETUNING = True
if FULL_FINETUNING:
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'gamma', 'beta']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.0}
]
else:
param_optimizer = list(model.classifier.named_parameters())
optimizer_grouped_parameters = [{"params": [p for n, p in param_optimizer]}]
optimizer = Adam(optimizer_grouped_parameters, lr=3e-5)
"""
Explanation: Before we can start the fine-tuning process, we have to set up the optimizer and add the parameters it should update. A common choice is the Adam optimizer. We also add some weight decay as regularization to the main weight matrices. If you have limited resources, you can also try to just train the linear classifier on top of BERT and keep all other weights fixed. This will still give you good performance.
End of explanation
"""
from seqeval.metrics import f1_score
def flat_accuracy(preds, labels):
pred_flat = np.argmax(preds, axis=2).flatten()
labels_flat = labels.flatten()
return np.sum(pred_flat == labels_flat) / len(labels_flat)
"""
Explanation: Finetune Bert
First we define some metrics that we want to track while training. We use the f1_score from the seqeval package. You can find more details here. And we use simple accuracy on a token level, comparable to the accuracy in keras.
End of explanation
"""
epochs = 2
max_grad_norm = 1.0
for _ in trange(epochs, desc="Epoch"):
# TRAIN loop
model.train()
tr_loss = 0
nb_tr_examples, nb_tr_steps = 0, 0
for step, batch in enumerate(train_dataloader):
# add batch to gpu
batch = tuple(t.to(device) for t in batch)
b_input_ids, b_input_mask, b_labels = batch
# forward pass
loss = model(b_input_ids, token_type_ids=None,
attention_mask=b_input_mask, labels=b_labels)
# backward pass
loss.backward()
# track train loss
tr_loss += loss.item()
nb_tr_examples += b_input_ids.size(0)
nb_tr_steps += 1
# gradient clipping
torch.nn.utils.clip_grad_norm_(parameters=model.parameters(), max_norm=max_grad_norm)
# update parameters
optimizer.step()
model.zero_grad()
# print train loss per epoch
print("Train loss: {}".format(tr_loss/nb_tr_steps))
# VALIDATION on validation set
model.eval()
eval_loss, eval_accuracy = 0, 0
nb_eval_steps, nb_eval_examples = 0, 0
predictions , true_labels = [], []
for batch in valid_dataloader:
batch = tuple(t.to(device) for t in batch)
b_input_ids, b_input_mask, b_labels = batch
with torch.no_grad():
tmp_eval_loss = model(b_input_ids, token_type_ids=None,
attention_mask=b_input_mask, labels=b_labels)
logits = model(b_input_ids, token_type_ids=None,
attention_mask=b_input_mask)
logits = logits.detach().cpu().numpy()
label_ids = b_labels.to('cpu').numpy()
predictions.extend([list(p) for p in np.argmax(logits, axis=2)])
true_labels.append(label_ids)
tmp_eval_accuracy = flat_accuracy(logits, label_ids)
eval_loss += tmp_eval_loss.mean().item()
eval_accuracy += tmp_eval_accuracy
nb_eval_examples += b_input_ids.size(0)
nb_eval_steps += 1
eval_loss = eval_loss/nb_eval_steps
print("Validation loss: {}".format(eval_loss))
print("Validation Accuracy: {}".format(eval_accuracy/nb_eval_steps))
pred_tags = [tags_vals[p_i] for p in predictions for p_i in p]
valid_tags = [tags_vals[l_ii] for l in true_labels for l_i in l for l_ii in l_i]
print("F1-Score: {}".format(f1_score(pred_tags, valid_tags)))
model.eval()
predictions = []
true_labels = []
eval_loss, eval_accuracy = 0, 0
nb_eval_steps, nb_eval_examples = 0, 0
for batch in valid_dataloader:
batch = tuple(t.to(device) for t in batch)
b_input_ids, b_input_mask, b_labels = batch
with torch.no_grad():
tmp_eval_loss = model(b_input_ids, token_type_ids=None,
attention_mask=b_input_mask, labels=b_labels)
logits = model(b_input_ids, token_type_ids=None,
attention_mask=b_input_mask)
logits = logits.detach().cpu().numpy()
predictions.extend([list(p) for p in np.argmax(logits, axis=2)])
label_ids = b_labels.to('cpu').numpy()
true_labels.append(label_ids)
tmp_eval_accuracy = flat_accuracy(logits, label_ids)
eval_loss += tmp_eval_loss.mean().item()
eval_accuracy += tmp_eval_accuracy
nb_eval_examples += b_input_ids.size(0)
nb_eval_steps += 1
pred_tags = [[tags_vals[p_i] for p_i in p] for p in predictions]
valid_tags = [[tags_vals[l_ii] for l_ii in l_i] for l in true_labels for l_i in l ]
print("Validation loss: {}".format(eval_loss/nb_eval_steps))
print("Validation Accuracy: {}".format(eval_accuracy/nb_eval_steps))
print("Validation F1-Score: {}".format(f1_score(pred_tags, valid_tags)))
"""
Explanation: Finally, we can fine-tune the model. A few epochs should be enough. The paper suggests 3-4 epochs.
End of explanation
"""
|
msultan/msmbuilder | examples/Clustering-Comparison.ipynb | lgpl-2.1 | import numpy as np
from sklearn import datasets
from collections import OrderedDict
np.random.seed(0)
n_samples = 2500
ds = OrderedDict()
ds['noisy_circles'] = datasets.make_circles(
n_samples=n_samples, factor=.5, noise=.05)
ds['noisy_moons'] = datasets.make_moons(
n_samples=n_samples, noise=.05)
ds['blobs'] = np.concatenate([
np.array([[0,-1]]) + 0.5 * np.random.randn(n_samples//3, 2),
np.array([[5, 0]]) + 0.1 * np.random.randn(n_samples//3, 2),
np.array([[0, 5]]) + 2.0 * np.random.randn(n_samples//3, 2),
]), None
ds['no_structure'] = np.random.rand(n_samples, 2), None
# Scale the example data
from sklearn.preprocessing import StandardScaler
ds = OrderedDict((name, StandardScaler().fit_transform(X))
for name, (X, y) in ds.items())
"""
Explanation: Clustering comparison
Generate datasets
We choose the size big enough to see the scalability of the algorithms, but we don't want the example to take too long.
End of explanation
"""
from msmbuilder import cluster
algos = [
(cluster.KCenters, 3),
(cluster.RegularSpatial, 2),
(cluster.KMeans, 3),
(cluster.MiniBatchKMeans, 3),
(cluster.KMedoids, 3),
(cluster.MiniBatchKMedoids, 3),
]
"""
Explanation: Enumerate clustering choices
End of explanation
"""
import time
results = {}
for ds_name, X in ds.items():
for algo, param in algos:
algorithm = algo(param)
t0 = time.time()
algorithm.fit([X])
t1 = time.time()
if hasattr(algorithm, 'labels_'):
            y_pred = algorithm.labels_[0].astype(int)  # np.int is deprecated in recent numpy
else:
y_pred = algorithm.transform([X])[0]
if hasattr(algorithm, 'cluster_centers_'):
centers = algorithm.cluster_centers_
else:
centers = []
results[ds_name, algo.__name__] = (t1-t0, y_pred, centers)
"""
Explanation: Perform clustering
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
colors = np.array([x for x in 'bgrcmykbgrcmykbgrcmykbgrcmyk'])
colors = np.hstack([colors] * 20)
plt.figure(figsize=(14.5, 9.5))
plot_num = 1
titles = True
for ds_name, X in ds.items():
for algo, param in algos:
t, y_pred, centers = results[ds_name, algo.__name__]
plt.subplot(4, 6, plot_num)
if titles:
plt.title(algo.__name__, size=18)
plt.scatter(X[:, 0], X[:, 1], color=colors[y_pred].tolist(), s=6)
        center_colors = colors[:len(centers)]
        if len(centers):
            plt.scatter(centers[:, 0], centers[:, 1], s=100, c=center_colors)
plt.xlim(-2, 2)
plt.ylim(-2, 2)
plt.xticks(())
plt.yticks(())
plt.text(.99, .01, '{:.2f}s'.format(t),
transform=plt.gca().transAxes, size=20,
horizontalalignment='right')
plot_num += 1
titles = False
plt.tight_layout()
"""
Explanation: Plot
End of explanation
"""
|
harpolea/CMG_testing_workshop | Testing.ipynb | mit | import numpy
from numpy.random import rand
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams.update({'font.size': 18})
from scipy.integrate import quad
import unittest
"""
Explanation: Testing Scientific Codes
End of explanation
"""
def normalise(v):
norm = numpy.sqrt(numpy.sum(v**2))
return v / norm
normalise(numpy.array([0,0]))
"""
Explanation: In the experimental sciences, new theories are developed by applying the scientific method. This involves carrying out tests that are accurate, reproducible and reliable. For a test to meet these standards, the experimental setup must be shown to be working as designed, so that any systematic errors can be eliminated or quantified. A result will not be trusted unless the experiment itself has been carried out to a suitable standard.
In computational Science, we should apply the same principles to our code. A result should only be trusted if the code that has produced it has undergone rigorous testing which demonstrates that it is working as intended and that any limitations of the code (e.g. numerical errors) are understood and quantified.
Unfortunately, testing scientific codes can be quite challenging. By their very nature, they are often built in order to investigate systems where the behaviour is to some extent unknown. They can be very complex, often built over a number of years (or even decades!) with contributions from a vast number of people. However, even for the most complicated of codes there are a number of different types of tests that we can apply in order for us to build robust, reliable code whose results can be trusted.
<center>
Writing good code is hard - xkcd</center>
When should I test?
Always and often
The earlier you start testing the better, as it will be possible to catch bugs as they develop and before they become too entrenched in the code. Once written, you should then try and execute tests every time changes are made. Continuous integration (see below) is a useful tool to use to make sure that tests are run frequently - once the tests are written and the CI setup, they can then be forgetten about to a certain extent, safe in the knowledge that if any bugs are introduced in changes to the code, they should be caught.
However, it is important to review your tests regularly. In code that is being actively developed, tests must be amended and new tests written so as to make sure that new features are also tested. Regression tests are useful here to test that changes to the code improve its performance rather than making it worse. Code coverage is a useful tool to make sure that all code is being tested. It's all very well having a testing suite, but if only 20% of the code has been tested, you still cannot trust that the other 80% of the code is producing reliable results.
Effective testing
In order to have an effective set of tests, it's necessary to make sure that the entire parameter space is tested, not just one or two nice cases. Of particular importance are edge and corner cases. If the code needs to run over a set of parameters, then edge cases are those which are at the beginning and end of this range. A corner case is then where one or more edge cases are combined. Such cases tend to be where errors most often arise, as often special code is required to deal with boundary values.
In the code below, we demonstrate the importance of testing edge cases. The code takes a vector $\mathbf{v}$ and normalises it $\hat{\mathbf{v}} = \frac{\mathbf{v} }{ |\mathbf{v}|}$. We see that if the code is run for the vector $(0,0)$, a RuntimeWarning is raised as the function is attempting to divide by zero.
End of explanation
"""
def improved_normalise(v):
norm = numpy.sqrt(numpy.sum(v**2))
if norm == 0.:
return v
return v / norm
improved_normalise(numpy.array([0,0]))
"""
Explanation: We therefore need to amend our function for the case where the norm of the vector is zero. A possible solution is the function below.
End of explanation
"""
improved_normalise("I am a string")
"""
Explanation: Our improved function now tests to see if the norm is zero - if so, it returns the original vector rather than attempting to divide by zero.
It is also important to check that the code breaks as expected. If the code input is garbage but it still manages to run as normal, that is not good behaviour and suggests some data validation of input parameters is needed. For example, let's try to run our improved normalisation function on a string:
End of explanation
"""
def squared(x):
return x*x
def add_2(x):
return x + 2
def square_plus_2(x):
return add_2(squared(x))
class test_units(unittest.TestCase):
def test_squared(self):
self.assertTrue(squared(-5) == 25)
self.assertTrue(squared(1e5) == 1e10)
self.assertRaises(TypeError, squared, "A string")
def test_add_2(self):
self.assertTrue(add_2(-5) == -3)
self.assertTrue(add_2(1e5) == 100002)
self.assertRaises(TypeError, add_2, "A string")
test_units().test_squared()
test_units().test_add_2()
"""
Explanation: Python correctly spots that it cannot perform the power operation on a string and raises a TypeError exception. However, it would probably be more useful to implement some kind of type checking of the function inputs before this (e.g. using numpy.isnumeric), and/or make sure that the code that calls this function is capable of catching such exceptions.
Unit tests
For complicated codes made up of many functions, it is useful to write a series of tests that check small parts of the code - units - at a time. This makes it easier to track down the exact location of bugs. These units may be individual functions or groups of shorter functions. Unit tests therefore encourage good coding practice, as they require code to be modular.
In the example below, we have three (very simple) functions: squared which returns the square of its input, add_2 which adds 2 to its input and square_plus_2 which calls the two previous functions to return $x^2+2$. To test this code, we could therefore write unit tests for the first two functions to check they are working correctly. We've used the unittest module here as it allows us to test that functions correctly raise exceptions when given invalid data.
End of explanation
"""
class test_integration(unittest.TestCase):
def test_square_plus_2(self):
self.assertTrue(square_plus_2(-5) == 27)
self.assertTrue(square_plus_2(1e5) == 10000000002)
self.assertRaises(TypeError, square_plus_2, "A string")
test_integration().test_square_plus_2()
"""
Explanation: Integration tests
Once you've written your unit tests and are pretty confident that individual parts of the code work on their own, you then need to verify that these different parts work together. To see why this is needed, imagine you were asked to build a car, despite only having a vague idea of how everything fits together. You've been given all the different parts (the engine, the wheels, the steering wheel...) - these have all previously undergone rigorous testing and you have been assured that they all work fine. You put them all together to the best of your ability, but unfortunately cannot get the car to work. Much as with your code, despite the individual parts working, this is no guarantee that they will work when put together.
In the above example, we can add an integration test by writing a test for square_plus_2 - this calls the other two functions, so we'll test that it does this properly.
End of explanation
"""
hs = numpy.array([1. / (4. * 2.**n) for n in range(8)])
errors = numpy.zeros_like(hs)
for i, h in enumerate(hs):
xs = numpy.arange(0., 1.+h, h)
ys = numpy.sin(xs)
# use trapezium rule to approximate integral
integral_approx = sum((xs[1:] - xs[:-1]) * 0.5 * (ys[1:] + ys[:-1]))
errors[i] = -numpy.cos(1) + numpy.cos(0) - integral_approx
plt.loglog(hs, errors, 'x', label='Error')
plt.plot(hs, 0.1*hs**2, label=r'$h^2$')
plt.xlabel(r'$h$')
plt.ylabel('error')
plt.legend(loc='center left', bbox_to_anchor=[1.0, 0.5])
plt.show()
"""
Explanation: As we'll see below, integration tests can be difficult to design. They can encompass a small section of the code, e.g. to check that one function correctly calls another, all the way up to the entire code. Because they can involve many different functions, they are often a lot more complex than unit tests.
Convergence tests
Often we want to calculate a solution on some kind of grid. The solution we find is a discretised approximation of the exact continuous solution. As the resolution of the grid increases, the solution should approach the exact solution. Convergence tests are a way of checking this. The solution is calculated for grids of various resolutions. If the code is working correctly, the error of the solution should decrease with increasing resolution approximately at an order that depends on the accuracy of the algorithm (until the error becomes so small it then becomes dominated by floating point errors).
In the example below, we will demonstrate this by using the trapezium rule to approximate the integral of $\sin (x)$ with various different step sizes, $h$. By comparing the calculated errors to a line of gradient $h^2$, it can be seen that the numerical approximation is converging as expected at $O(h^2)$.
End of explanation
"""
data = rand(50,50)
def func(a):
return a**2 * numpy.sin(a)
output = func(data)
plt.imshow(output)
plt.colorbar()
plt.show()
"""
Explanation: Regression tests
When building your code, generally you'll be aiming for its performance to improve with time. Results should get more accurate or, at the very least, should not deteriorate. Regression tests are a way to check this. Multiple versions of the code are run and the outputs compared. If the output has changed such that it is significantly different from the previous output, the test fails. Such tests can help catch bugs that other types of tests may not, and can help ensure the project remains backwards-compatible for such cases where that is important.
Common problems and how to solve them
My code has some randomness and so its output changes every time I run it - what can I test for?
In time evolution problems, it may be that whilst the output at any individual timestep can be somewhat random, the behaviour averaged over a number of timesteps is to some extent known. Tests can therefore be written to check that this is the case. In other problems, it may be more useful to test the average behaviour across the entire domain or sections of the domain. Even if the behaviour is completely random and so it's not possible to take any meaninful averages, the chances are that it should still be within a set of known values - we can therefore write tests that check the data is within these limits. Another strategy is to try to write tests that isolate the random parts so that you can check the non-random parts of the code work. If you are using a random number generator, it can be possible to eliminate the non-determinism by testing using a fixed seed value for the generator.
In the code below, we generate an array of random data and apply some function to it before plotting the results. It can be seen that the output is different every time the code is run.
End of explanation
"""
def test_limits(a):
if numpy.all(a >= 0.) and numpy.all(a <= 1.):
return True
return False
def test_average(a):
if numpy.isclose(numpy.average(a), 0.22, rtol=1.e-2):
return True
return False
if test_limits(output):
print('Function output within correct limits')
else:
print('Function output is not within correct limits')
if test_average(output):
print('Function output has correct average')
else:
print('Function output does not have correct average')
"""
Explanation: The output of this code changes every time the code is run, however we can still write some tests for it. We know that all values in the output array must be $0\leq x \leq 1$. In some circumstances, such as in this case, we may know the statistical distribution of the random data. We can therefore calculate what the average output value should be and compare this to our code's output. In our case, the data is generated from a uniform distribution of numbers between 0 and 1, so the average value of the output is given by $\int_0^1 f(x) dx \simeq 0.22$
End of explanation
"""
from scipy.integrate import quad

xs = numpy.linspace(0.0, 2.0 * numpy.pi)
integrals = numpy.zeros_like(xs)
for i in range(len(xs)):
integrals[i] = quad(numpy.sin, 0.0, xs[i])[0]
plt.plot(xs, -numpy.cos(xs)+1, '-', label=r'$\int f(x)$')
plt.plot(xs, integrals, 'x', label='quad')
plt.legend(loc='center left', bbox_to_anchor=[1.0, 0.5])
plt.show()
"""
Explanation: I don't know what the correct solution should be
In experimental science, the experimental setup will be tested using a control. This is where the experiment is run using a set of input data for which the outcome is known, so that any bugs in the apparatus or systematic errors can be identified. In computational science, there is often a simple system whose behaviour is known which can be used to test the code. E.g. in time evolution problems, a system which is initially static should remain that way. If this is not the case, then this indicates there is something seriously wrong with the code! In physics, we can also check for symmetries of the system (e.g. rotational symmetry, translation symmetry, reflection symmetry). There are also often conserved quantities (e.g. mass, energy, charge) that we can check the code conserves.
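These checks can be sketched with a toy smoothing update (not taken from this notebook): a static state should stay static, and the update should conserve the total "mass" of the field.

```python
import numpy as np

def diffusion_step(u):
    # explicit smoothing update with periodic boundaries;
    # the weights sum to one, so the total of u is conserved
    return 0.5 * u + 0.25 * (np.roll(u, 1) + np.roll(u, -1))

# an initially static (all-zero) system must remain static
static_ok = np.all(diffusion_step(np.zeros(100)) == 0.0)

# the total "mass" of a random state must be conserved by the update
u = np.random.random(100)
mass_ok = np.isclose(diffusion_step(u).sum(), u.sum())
```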
In the below example, we look at a black box function - scipy.integrate.quad. Here, this function will stand in for a bit of code that we have written and want to test. Say we wish to use quad to calculate the integral of some complicated function and we have little idea what the solution will be. Before we use it on the complicated function, we will test that it behaves correctly for a function whose integral we already know: $f(x) = \sin(x)$.
End of explanation
"""
x = numpy.linspace(0, 2*numpy.pi, num=500)
initial_data = x**2 * numpy.cos(5*x)
# add noise
noisey_data = initial_data + (rand(len(x)) - 0.5) * 4
plt.plot(x, initial_data, label='initial data')
plt.plot(x, noisey_data, label='noisey data')
plt.legend(loc='center left', bbox_to_anchor=[1.0, 0.5])
plt.xlim(x[0], x[-1])
plt.show()
if numpy.array_equal(initial_data, noisey_data):
print('Noisey data exactly equal to initial data')
else:
print('Noisey data is not exactly equal to initial data')
if numpy.allclose(initial_data, noisey_data, atol=2):
print('Noisey data is close to initial data')
else:
print('Noisey data is not close to initial data')
"""
Explanation: As hoped, quad gives the correct solution:
$$
\int^\alpha_0 \sin(x)\, dx = -\cos(\alpha) + 1
$$
I didn't write most of the code - how do I know that the bit I wrote works?
Unit tests! If the original code can run in isolation, make sure that there are suitable tests which make sure that it works correctly. Any failures in subsequent tests that then incorporate your code will therefore only be the result of bugs in your code. Unit tests of individual functions in your code should also be used.
I know there is some numerical error in my code - how can I test my code is correct up to this error?
In numerical calculations, there will always be some computational error that cannot be avoided. This can come from the computer's floating point representation of numerical data or from the choice of algorithm used. It is often the case that we don't require our result to be 100% precise, but rather correct up to some tolerance. We can therefore build tests to reflect this.
In python, we can use numpy.isclose and numpy.allclose to do this. In the example below, we take some data and add a small amount of random noise to it. This random noise is supposed to represent numerical errors that are introduced over the course of a simulation. If we test that the output array is equal to the original array, python correctly tells us that it is not. However, if we test that the output array is close to the original array, we find that this is true.
End of explanation
"""
|
daniel-koehn/Theory-of-seismic-waves-II | 05_2D_acoustic_FD_modelling/4_fdac2d_absorbing_boundary.ipynb | gpl-3.0 | # Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())
"""
Explanation: Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 by D. Koehn, heterogeneous models are from this Jupyter notebook by Heiner Igel (@heinerigel), Florian Wölfl and Lion Krischer (@krischer) which is a supplemenatry material to the book Computational Seismology: A Practical Introduction, notebook style sheet by L.A. Barba, N.C. Clementi
End of explanation
"""
# Import Libraries
# ----------------
import numpy as np
from numba import jit
import matplotlib
import matplotlib.pyplot as plt
from pylab import rcParams
# Ignore Warning Messages
# -----------------------
import warnings
warnings.filterwarnings("ignore")
from mpl_toolkits.axes_grid1 import make_axes_locatable
# Definition of initial modelling parameters
# ------------------------------------------
xmax = 2000.0 # maximum spatial extension of the 2D model in x-direction (m)
zmax = xmax # maximum spatial extension of the 2D model in z-direction (m)
dx = 10.0 # grid point distance in x-direction (m)
dz = dx # grid point distance in z-direction (m)
tmax = 0.75 # maximum recording time of the seismogram (s)
dt = 0.0010 # time step
vp0 = 3000. # P-wave speed in medium (m/s)
# acquisition geometry
xsrc = 1000.0 # x-source position (m)
zsrc = xsrc # z-source position (m)
f0 = 100.0 # dominant frequency of the source (Hz)
t0 = 0.1 # source time shift (s)
isnap = 2 # snapshot interval (timesteps)
"""
Explanation: Simple absorbing boundary for 2D acoustic FD modelling
Realistic FD modelling results for surface seismic acquisition geometries require a further modification of the 2D acoustic FD code. Except for the free surface boundary condition on top of the model, we want to suppress the artificial reflections from the other boundaries.
Such absorbing boundaries can be implemented by different approaches. A comprehensive overview is compiled in
Gao et al. 2015, Comparison of artificial absorbing boundaries for acoustic wave equation modelling
Before implementing the absorbing boundary frame, we modify some parts of the optimized 2D acoustic FD code:
End of explanation
"""
@jit(nopython=True) # use JIT for C-performance
def update_d2px_d2pz(p, dx, dz, nx, nz, d2px, d2pz):
for i in range(1, nx - 1):
for j in range(1, nz - 1):
d2px[i,j] = (p[i + 1,j] - 2 * p[i,j] + p[i - 1,j]) / dx**2
d2pz[i,j] = (p[i,j + 1] - 2 * p[i,j] + p[i,j - 1]) / dz**2
return d2px, d2pz
"""
Explanation: In order to modularize the code, we move the 2nd partial derivatives of the wave equation into a function update_d2px_d2pz, so the application of the JIT decorator can be restriced to this function:
End of explanation
"""
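The 3-point operator inside update_d2px_d2pz can be checked in isolation: the central-difference stencil is exact for quadratics, so applying it to $f(x) = x^2$ must return the constant 2 (a quick standalone check, not part of the original notebook):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 11)
dx = x[1] - x[0]
f = x ** 2

# 3-point central difference approximation of the second derivative
d2f = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx ** 2

stencil_ok = np.allclose(d2f, 2.0)  # exact (up to round-off) for x**2
```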
# FD_2D_acoustic code with JIT optimization
# -----------------------------------------
def FD_2D_acoustic_JIT(dt,dx,dz,f0):
# define model discretization
# ---------------------------
nx = (int)(xmax/dx) # number of grid points in x-direction
print('nx = ',nx)
    nz = (int)(zmax/dz) # number of grid points in z-direction
print('nz = ',nz)
nt = (int)(tmax/dt) # maximum number of time steps
print('nt = ',nt)
isrc = (int)(xsrc/dx) # source location in grid in x-direction
    jsrc = (int)(zsrc/dz) # source location in grid in z-direction
# Source time function (Gaussian)
# -------------------------------
src = np.zeros(nt + 1)
time = np.linspace(0 * dt, nt * dt, nt)
# 1st derivative of Gaussian
src = -2. * (time - t0) * (f0 ** 2) * (np.exp(- (f0 ** 2) * (time - t0) ** 2))
# define clip value: 0.1 * absolute maximum value of source wavelet
clip = 0.1 * max([np.abs(src.min()), np.abs(src.max())]) / (dx*dz) * dt**2
# Define model
# ------------
vp = np.zeros((nx,nz))
vp = model(nx,nz,vp,dx,dz)
vp2 = vp**2
# Initialize empty pressure arrays
# --------------------------------
p = np.zeros((nx,nz)) # p at time n (now)
pold = np.zeros((nx,nz)) # p at time n-1 (past)
pnew = np.zeros((nx,nz)) # p at time n+1 (present)
d2px = np.zeros((nx,nz)) # 2nd spatial x-derivative of p
d2pz = np.zeros((nx,nz)) # 2nd spatial z-derivative of p
    # Initialize animation of pressure wavefield
# -----------------------------------------
fig = plt.figure(figsize=(7,3.5)) # define figure size
plt.tight_layout()
extent = [0.0,xmax,zmax,0.0] # define model extension
# Plot pressure wavefield movie
ax1 = plt.subplot(121)
image = plt.imshow(p.T, animated=True, cmap="RdBu", extent=extent,
interpolation='nearest', vmin=-clip, vmax=clip)
plt.title('Pressure wavefield')
plt.xlabel('x [m]')
plt.ylabel('z [m]')
# Plot Vp-model
ax2 = plt.subplot(122)
image1 = plt.imshow(vp.T/1000, cmap=plt.cm.viridis, interpolation='nearest',
extent=extent)
plt.title('Vp-model')
plt.xlabel('x [m]')
plt.setp(ax2.get_yticklabels(), visible=False)
divider = make_axes_locatable(ax2)
cax2 = divider.append_axes("right", size="2%", pad=0.1)
fig.colorbar(image1, cax=cax2)
plt.ion()
plt.show(block=False)
# Calculate Partial Derivatives
# -----------------------------
for it in range(nt):
# FD approximation of spatial derivative by 3 point operator
d2px, d2pz = update_d2px_d2pz(p, dx, dz, nx, nz, d2px, d2pz)
# Time Extrapolation
# ------------------
pnew = 2 * p - pold + vp2 * dt**2 * (d2px + d2pz)
# Add Source Term at isrc
# -----------------------
# Absolute pressure w.r.t analytical solution
pnew[isrc,jsrc] = pnew[isrc,jsrc] + src[it] / (dx * dz) * dt ** 2
# Remap Time Levels
# -----------------
pold, p = p, pnew
# display pressure snapshots
if (it % isnap) == 0:
image.set_data(p.T)
fig.canvas.draw()
"""
Explanation: In the FD modelling code FD_2D_acoustic_JIT, a more flexible model definition is introduced by the function model. The block Initialize animation of pressure wavefield before the time loop displays the velocity model and initial pressure wavefield. During the time loop, the pressure wavefield is updated with
image.set_data(p.T)
fig.canvas.draw()
at every isnap-th timestep:
End of explanation
"""
# Homogeneous model
def model(nx,nz,vp,dx,dz):
vp += vp0
return vp
"""
Explanation: Homogeneous block model without absorbing boundary frame
As a reference, we first model the homogeneous block model, defined in the function model, without an absorbing boundary frame:
End of explanation
"""
%matplotlib notebook
dx = 5.0 # grid point distance in x-direction (m)
dz = dx # grid point distance in z-direction (m)
f0 = 100.0 # centre frequency of the source wavelet (Hz)
# calculate dt according to the CFL-criterion
dt = dx / (np.sqrt(2.0) * vp0)
FD_2D_acoustic_JIT(dt,dx,dz,f0)
"""
Explanation: After defining the modelling parameters, we can run the modified FD code ...
End of explanation
"""
# Define simple absorbing boundary frame based on wavefield damping
# according to Cerjan et al., 1985, Geophysics, 50, 705-708
def absorb(nx,nz):
FW = 60 # thickness of absorbing frame (gridpoints)
a = 0.0053 # damping variation within the frame
coeff = np.zeros(FW)
# define coefficients
for i in range(FW):
coeff[i] = np.exp(-(a**2 * (FW-i)**2))
# initialize array of absorbing coefficients
absorb_coeff = np.ones((nx,nz))
# compute coefficients for left grid boundaries (x-direction)
zb=0
for i in range(FW):
ze = nz - i - 1
for j in range(zb,ze):
absorb_coeff[i,j] = coeff[i]
# compute coefficients for right grid boundaries (x-direction)
zb=0
for i in range(FW):
ii = nx - i - 1
ze = nz - i - 1
for j in range(zb,ze):
absorb_coeff[ii,j] = coeff[i]
# compute coefficients for bottom grid boundaries (z-direction)
xb=0
for j in range(FW):
jj = nz - j - 1
xb = j
xe = nx - j
for i in range(xb,xe):
absorb_coeff[i,jj] = coeff[j]
return absorb_coeff
"""
Explanation: Notice the strong, artificial boundary reflections in the wavefield movie.
Simple absorbing Sponge boundary
The simplest, and unfortunately least efficient, absorbing boundary was developed by Cerjan et al. (1985). It is based on the simple idea to damp the pressure wavefields $p^n_{i,j}$ and $p^{n+1}_{i,j}$ in an absorbing boundary frame by an exponential function:
\begin{equation}
f_{abs} = exp(-a^2(FW-i)^2), \nonumber
\end{equation}
where $FW$ denotes the thickness of the boundary frame in gridpoints, while the factor $a$ defines the damping variation within the frame. It is important to avoid overlaps of the damping profile in the model corners, when defining the absorbing function:
End of explanation
"""
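A quick numeric sanity check of the Cerjan profile (using the same FW and a as in the absorb function above): the coefficients should increase monotonically from the outer edge of the frame toward a value of essentially 1 at its inner edge.

```python
import numpy as np

FW = 60      # thickness of the absorbing frame (gridpoints)
a = 0.0053   # damping parameter

i = np.arange(FW)
coeff = np.exp(-(a ** 2 * (FW - i) ** 2))

monotonic = np.all(np.diff(coeff) > 0.0)  # damping weakens toward the interior
edge, inner = coeff[0], coeff[-1]         # strongest damping at the outer edge
```

With these parameters the outer-edge coefficient is about 0.9, applied at every timestep, which is what accumulates into effective absorption.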
# Plot absorbing damping profile
# ------------------------------
fig = plt.figure(figsize=(6,4)) # define figure size
extent = [0.0,xmax,0.0,zmax] # define model extension
# calculate absorbing boundary weighting coefficients
nx = 400
nz = 400
absorb_coeff = absorb(nx,nz)
plt.imshow(absorb_coeff.T)
plt.colorbar()
plt.title('Sponge boundary condition')
plt.xlabel('x [m]')
plt.ylabel('z [m]')
plt.show()
"""
Explanation: This implementation of the Sponge boundary sets a free-surface boundary condition on top of the model, while incident waves at the other boundaries are absorbed:
End of explanation
"""
# FD_2D_acoustic code with JIT optimization
# -----------------------------------------
def FD_2D_acoustic_JIT_absorb(dt,dx,dz,f0):
# define model discretization
# ---------------------------
nx = (int)(xmax/dx) # number of grid points in x-direction
print('nx = ',nx)
    nz = (int)(zmax/dz) # number of grid points in z-direction
print('nz = ',nz)
nt = (int)(tmax/dt) # maximum number of time steps
print('nt = ',nt)
isrc = (int)(xsrc/dx) # source location in grid in x-direction
    jsrc = (int)(zsrc/dz) # source location in grid in z-direction
# Source time function (Gaussian)
# -------------------------------
src = np.zeros(nt + 1)
time = np.linspace(0 * dt, nt * dt, nt)
# 1st derivative of Gaussian
src = -2. * (time - t0) * (f0 ** 2) * (np.exp(- (f0 ** 2) * (time - t0) ** 2))
# define clip value: 0.1 * absolute maximum value of source wavelet
clip = 0.1 * max([np.abs(src.min()), np.abs(src.max())]) / (dx*dz) * dt**2
# Define absorbing boundary frame
# -------------------------------
absorb_coeff = absorb(nx,nz)
# Define model
# ------------
vp = np.zeros((nx,nz))
vp = model(nx,nz,vp,dx,dz)
vp2 = vp**2
# Initialize empty pressure arrays
# --------------------------------
p = np.zeros((nx,nz)) # p at time n (now)
pold = np.zeros((nx,nz)) # p at time n-1 (past)
pnew = np.zeros((nx,nz)) # p at time n+1 (present)
d2px = np.zeros((nx,nz)) # 2nd spatial x-derivative of p
d2pz = np.zeros((nx,nz)) # 2nd spatial z-derivative of p
    # Initialize animation of pressure wavefield
# -----------------------------------------
fig = plt.figure(figsize=(7,3.5)) # define figure size
plt.tight_layout()
extent = [0.0,xmax,zmax,0.0] # define model extension
# Plot pressure wavefield movie
ax1 = plt.subplot(121)
image = plt.imshow(p.T, animated=True, cmap="RdBu", extent=extent,
interpolation='nearest', vmin=-clip, vmax=clip)
plt.title('Pressure wavefield')
plt.xlabel('x [m]')
plt.ylabel('z [m]')
# Plot Vp-model
ax2 = plt.subplot(122)
image1 = plt.imshow(vp.T/1000, cmap=plt.cm.viridis, interpolation='nearest',
extent=extent)
plt.title('Vp-model')
plt.xlabel('x [m]')
plt.setp(ax2.get_yticklabels(), visible=False)
divider = make_axes_locatable(ax2)
cax2 = divider.append_axes("right", size="2%", pad=0.1)
fig.colorbar(image1, cax=cax2)
plt.ion()
plt.show(block=False)
# Calculate Partial Derivatives
# -----------------------------
for it in range(nt):
# FD approximation of spatial derivative by 3 point operator
d2px, d2pz = update_d2px_d2pz(p, dx, dz, nx, nz, d2px, d2pz)
# Time Extrapolation
# ------------------
pnew = 2 * p - pold + vp2 * dt**2 * (d2px + d2pz)
# Add Source Term at isrc
# -----------------------
# Absolute pressure w.r.t analytical solution
pnew[isrc,jsrc] = pnew[isrc,jsrc] + src[it] / (dx * dz) * dt ** 2
# Apply absorbing boundary frame
p *= absorb_coeff
pnew *= absorb_coeff
# Remap Time Levels
# -----------------
pold, p = p, pnew
# display pressure snapshots
if (it % isnap) == 0:
image.set_data(p.T)
fig.canvas.draw()
"""
Explanation: The FD code itself requires only a few small modifications: we add the absorb function to define the amount of damping in the boundary frame, and apply the damping coefficients to the pressure wavefields pnew and p.
End of explanation
"""
%matplotlib notebook
dx = 5.0 # grid point distance in x-direction (m)
dz = dx # grid point distance in z-direction (m)
f0 = 100.0 # centre frequency of the source wavelet (Hz)
# calculate dt according to the CFL-criterion
dt = dx / (np.sqrt(2.0) * vp0)
FD_2D_acoustic_JIT_absorb(dt,dx,dz,f0)
"""
Explanation: Let's evaluate the influence of the Sponge boundaries on the artificial boundary reflections:
End of explanation
"""
|
joshnsolomon/phys202-2015-work | assignments/assignment06/InteractEx05.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.html import widgets
from IPython.display import display, SVG
"""
Explanation: Interact Exercise 5
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
"""
s = ' <svg width="100" height="100"> <circle cx="50" cy="50" r="20" fill="aquamarine" /> </svg>'
SVG(s)
"""
Explanation: Interact with SVG display
SVG is a simple way of drawing vector graphics in the browser. Here is a simple example of how SVG can be used to draw a circle in the Notebook:
End of explanation
"""
def draw_circle(width=100, height=100, cx=25, cy=25, r=5, fill='red'):
"""Draw an SVG circle.
Parameters
----------
width : int
The width of the svg drawing area in px.
height : int
The height of the svg drawing area in px.
cx : int
The x position of the center of the circle in px.
cy : int
The y position of the center of the circle in px.
r : int
The radius of the circle in px.
fill : str
The fill color of the circle.
"""
    a = ('<svg width="{w}" height="{h}"> '
         '<circle cx="{cx}" cy="{cy}" r="{r}" fill="{fill}"/>'
         '</svg>').format(w=width, h=height, cx=cx, cy=cy, r=r, fill=fill)
display(SVG(a))
draw_circle(cx=10, cy=10, r=10, fill='blue')
assert True # leave this to grade the draw_circle function
"""
Explanation: Write a function named draw_circle that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the IPython.display.SVG object and IPython.display.display function.
End of explanation
"""
w=interactive(draw_circle, width=fixed(300), height=fixed(300), cx=(0,300,1), cy=(0,300,1), r=(0,50,1), fill='red');
c = w.children
assert c[0].min==0 and c[0].max==300
assert c[1].min==0 and c[1].max==300
assert c[2].min==0 and c[2].max==50
assert c[3].value=='red'
"""
Explanation: Use interactive to build a user interface for exploring the draw_circle function:
width: a fixed value of 300px
height: a fixed value of 300px
cx/cy: a slider in the range [0,300]
r: a slider in the range [0,50]
fill: a text area in which you can type a color's name
Save the return value of interactive to a variable named w.
End of explanation
"""
display(w)
assert True # leave this to grade the display of the widget
"""
Explanation: Use the display function to show the widgets created by interactive:
End of explanation
"""
|
MingChen0919/learning-apache-spark | notebooks/01-data-strcture/1.4-merge-and-split-columns.ipynb | mit | mtcars = spark.read.csv(path='../../data/mtcars.csv',
sep=',',
encoding='UTF-8',
comment=None,
header=True,
inferSchema=True)
mtcars.show(n=5)
# adjust first column name
colnames = mtcars.columns
colnames[0] = 'model'
mtcars = mtcars.rdd.toDF(colnames)
mtcars.show(5)
"""
Explanation: Merge and split columns
Sometimes we need to merge multiple columns in a Dataframe into one column, or split a column into multiple columns. We can easily achieve this by converting a DataFrame to RDD, applying map functions to manipulate elements, and then converting the RDD back to a DataFrame.
Example data frame
End of explanation
"""
from pyspark.sql import Row
mtcars_rdd = mtcars.rdd.map(lambda x: Row(model=x[0], values=x[1:]))
mtcars_rdd.take(5)
"""
Explanation: Merge multiple columns
We convert DataFrame to RDD and then apply the map function to merge values and convert
elements to Row objects.
End of explanation
"""
mtcars_df = spark.createDataFrame(mtcars_rdd)
mtcars_df.show(5, truncate=False)
"""
Explanation: Then we create a new DataFrame from the obtained RDD.
End of explanation
"""
mtcars_rdd_2 = mtcars_df.rdd.map(lambda x: Row(model=x[0], x1=x[1][:5], x2=x[1][5:]))
# convert RDD back to DataFrame
mtcars_df_2 = spark.createDataFrame(mtcars_rdd_2)
mtcars_df_2.show(5, truncate=False)
"""
Explanation: Split one column
We use the above DataFrame as our example data. Again, we need to convert the DataFrame to an RDD to achieve our goal.
Let's split the values column into two columns: x1 and x2. The first five values will go into column x1 and the remaining six values into column x2.
End of explanation
"""
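The slicing inside the map function is ordinary Python, so we can sanity-check it without a Spark session (plain tuples standing in for Row objects here):

```python
# the 11 numeric mtcars columns for one row (model name excluded)
values = (21.0, 6, 160.0, 110, 3.9, 2.62, 16.46, 0, 1, 4, 4)

x1, x2 = values[:5], values[5:]
# x1 takes the first five values and x2 the remaining six,
# mirroring x[1][:5] and x[1][5:] above
```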
|
andim/pysnippets | sparse-matrix-updating-benchmarking.ipynb | mit | import numpy as np
import scipy.sparse
sparsity = 0.001
N, M = 10000, 200
rowvec = np.ones(N)
colvec = np.ones(M)
Aorig = np.random.random((N, M)) < sparsity
%timeit scipy.sparse.csr_matrix(Aorig)
%timeit scipy.sparse.lil_matrix(Aorig)
Alil = scipy.sparse.lil_matrix(Aorig)
%timeit scipy.sparse.csr_matrix(Alil)
Adok = scipy.sparse.dok_matrix(Aorig)
%timeit scipy.sparse.csr_matrix(Adok)
"""
Explanation: Benchmarking different ways of updating a sparse matrix
Problem: We want to build a new sparse matrix by replacing or appending a column/row. How can we do so efficiently? We then want to use these matrices for fast matrix vector multiplication potentially requiring a cast to another sparse matrix type. What is the best overall compromise?
End of explanation
"""
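A minimal sketch of the round trip being benchmarked here (assuming scipy is importable): build and modify in a cheap-to-update format such as LIL, then convert once to CSR for fast matrix-vector products.

```python
import numpy as np
import scipy.sparse

A = scipy.sparse.lil_matrix((4, 3))
A[1, 2] = 5.0   # element updates are cheap in LIL format
A[3, 0] = 7.0

Acsr = A.tocsr()            # convert once for fast arithmetic
y = Acsr.dot(np.ones(3))    # matrix-vector product
```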
index = 3
newrow = np.random.random(N) < sparsity
Arow = Aorig[:, :]
%timeit Arow[:, index] = newrow
def replacerow(A, rowindex, row):
# see http://stackoverflow.com/questions/12129948/scipy-sparse-set-row-to-zeros
# zero extant values
A.data[A.indptr[rowindex]:A.indptr[rowindex+1]] = 0.0
    indices = np.nonzero(row)
    A[indices, rowindex] = row[indices]
Acsrrow = scipy.sparse.csr_matrix(Aorig)
replacerow(Acsrrow, index, newrow)
print(np.all(Acsrrow == Arow))
%timeit replacerow(Acsrrow, index, newrow)
Acsrrow = scipy.sparse.csr_matrix(Aorig)
%timeit Acsrrow[:, index] = newrow[:, None]
print(np.all(Acsrrow == Arow))
print(Acsrrow.nnz, np.count_nonzero(Arow))
%timeit Acsrrow.eliminate_zeros()
Acsrrow.nnz, np.count_nonzero(Arow)
Alilrow = scipy.sparse.lil_matrix(Aorig)
%timeit Alilrow[:, index] = newrow[:, None]
print(np.all(Alilrow == Arow))
print(Alilrow.nnz, np.count_nonzero(Arow))
"""
Explanation: replace row
End of explanation
"""
%timeit np.concatenate((Aorig, newrow[:, None]), axis=1)
Aprow = np.concatenate((Aorig, newrow[:, None]), axis=1)
Acsrprow = scipy.sparse.csr_matrix(Aorig)
print(np.all(Aprow==scipy.sparse.hstack((Acsrprow, scipy.sparse.csr_matrix(newrow[:, None])))))
%timeit scipy.sparse.hstack((Acsrprow, scipy.sparse.csr_matrix(newrow[:, None])))
def append(A, x):
A._shape = (A.shape[0]+1, A.shape[1])
A.indptr = np.hstack((A.indptr, A.indptr[-1]))
A[-1, :] = x
return A
newcol = np.random.random(M) < sparsity
np.all((append(scipy.sparse.csr_matrix(Aorig), newcol) == scipy.sparse.vstack((scipy.sparse.csr_matrix(Aorig), scipy.sparse.csr_matrix(newcol[None, :])))).todense())
Acsr = scipy.sparse.csr_matrix(Aorig)
%timeit append(Acsr, newcol)
Acsr = scipy.sparse.csr_matrix(Aorig)
%timeit scipy.sparse.vstack((Acsr, scipy.sparse.csr_matrix(newcol[None, :])))
def spmatreplacecol(A, colindex, newcol):
row, col = A.nonzero()
A.data[col == colindex] = 0.0
indices = np.nonzero(newcol)
A[indices, colindex] = newcol[indices]
return A
colindex = 3
newcol = np.random.random(N) < sparsity
Acsr.shape
Acsr = scipy.sparse.csr_matrix(Aorig)
%time Acsr[:, colindex] = newcol[:, None]
Acsr2 = scipy.sparse.csr_matrix(Aorig)
%time spmatreplacecol(Acsr2, colindex, newcol)
np.all(Acsr2.todense() == Acsr)
"""
Explanation: append row
End of explanation
"""
|
ling7334/tensorflow-get-started | mnist/MNIST_For_ML_Beginners.ipynb | apache-2.0 | from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
"""
Explanation: MNIST For ML Beginners
This tutorial is intended for readers who are new to both machine learning and TensorFlow. If you already know what MNIST is and what softmax (multinomial logistic) regression is, you can skim the faster-paced tutorial instead. Be sure to install TensorFlow before starting this tutorial.
When one learns how to program, there's a tradition of starting with a "Hello World" program. Machine learning has its own "Hello World": MNIST.
MNIST is a simple computer-vision dataset. It consists of images of handwritten digits, such as these:
It also includes a label for each image, telling us which digit it is. For example, the labels for the images above are 5, 0, 4, and 1.
In this tutorial, we're going to train a model to look at images and predict their labels. Our goal isn't to train a really elaborate model that achieves state-of-the-art performance (we'll give you code for that later!) but rather to get a first taste of TensorFlow. We'll start with a very simple model, called softmax regression.
The actual code for this tutorial is very short, and all the interesting stuff happens in just three lines. However, it is important first to understand how TensorFlow works and the core machine learning concepts. We'll work through them carefully.
About this tutorial
This tutorial explains, line by line, the code in mnist_softmax.py.
You can use this tutorial in a few different ways:
Run each code snippet as you read through the explanations.
Run mnist_softmax.py before or after reading, and use this tutorial to understand the lines of code that aren't clear to you.
What we will accomplish in this tutorial:
Learn about the MNIST data and softmax regression.
Create a model that recognizes digits by looking at the image.
Use TensorFlow to train the model to recognize digits by having it "look" at thousands of examples (and run our first TensorFlow session).
Check the model's accuracy with our test data.
The MNIST data
The MNIST data is hosted on Yann LeCun's website. If you are running the code in this tutorial directly, try running these two lines, which download and read in the data:
End of explanation
"""
import tensorflow as tf
"""
Explanation: The MNIST data is split into three parts: 55,000 data points of training data (mnist.train), 10,000 points of test data (mnist.test), and 5,000 points of validation data (mnist.validation). This split is very important: in machine learning we set aside data that we don't learn from, so that we can make sure what we've learned actually generalizes.
As mentioned earlier, every MNIST data point has two parts: an image of a handwritten digit and a corresponding label. We'll call the images "x" and the labels "y". Both the training set and the test set contain images and their corresponding labels; for example, the training images are mnist.train.images and the training labels are mnist.train.labels.
Each image is 28 pixels by 28 pixels. We can interpret this as a big array of numbers:
We can flatten this array into a vector of 28x28 = 784 numbers. It doesn't matter how we flatten the array, as long as we're consistent between images. From this perspective, the MNIST images are just a bunch of points in a 784-dimensional vector space with a very rich structure.
Flattening the data throws away information about the 2D structure of the image. Isn't that bad? Well, the best computer vision methods do exploit this structure, as we will see later. But the simple method we'll use here, softmax regression, won't.
The result is that mnist.train.images is a tensor (an n-dimensional array) with a shape of [55000, 784]. The first dimension indexes the images, and the second dimension indexes the pixels in each image. Each entry in the tensor is the intensity of a pixel, between 0 and 1; for example, the pixels of one image:
Each image in MNIST has a corresponding label, a number between 0 and 9 representing the digit drawn in the image.
For the purposes of this tutorial, we want our labels to be one-hot vectors. A one-hot vector is a vector with as many positions as there are states, in which exactly one position is 1 and all the others are 0. In this case, the nth digit is represented as an n-dimensional vector with a 1 in the nth dimension; for example, 3 would be [0,0,0,1,0,0,0,0,0,0]. Consequently, mnist.train.labels is a [55000, 10] array of floats.
We're now ready to build our model!
Softmax regression
We know that every image in MNIST is a handwritten digit between 0 and 9, so there are only ten possible things a given image can be. We want to be able to look at an image and give the probability of it being each digit. For example, our model might look at a picture of a nine and give an 80% probability of it being a nine, a 5% chance of it being an eight (because of the loop in the top half), and a bit of probability to the other digits, because it can't be 100% sure.
This is a classic case where softmax regression is a natural, simple model. If you want to assign probabilities to an object being one of several different things, softmax is the algorithm to use: it gives us a list of numbers between 0 and 1 that add up to exactly 1. Even later on, when we train more sophisticated models, the final layer will be a softmax layer.
A softmax regression has two steps: first we add up the evidence of our input being in certain classes, and then we convert that evidence into probabilities.
To tally up the evidence that a given image is in a particular class, we compute a weighted sum of the pixel intensities. The weight is negative if a pixel is evidence against the image being in that class, and positive if it is evidence in favor.
The following diagram shows the weights of one model. Red represents negative weights, while blue represents positive weights.
We also add some extra evidence called a bias. Basically, we want to be able to say that some things are more likely independent of the input. The result is that the evidence for class $i$ given an input $x$ is:
$$\text{evidence}_i = \sum_j W_{i,j} x_j + b_i$$
where $W_i$ is the weights and $b_i$ is the bias for class $i$, and $j$ is an index for summing over the pixels of our input image $x$. We then convert the evidence tallies into our predicted probabilities using the "softmax" function: $$y = \text{softmax}(\text{evidence})$$
Here softmax serves as an "activation" or "link" function, shaping the output of our linear function into the form we want, in this case a probability distribution over 10 cases. You can think of it as converting tallies of evidence into probabilities of our input being in each class:
$$\text{softmax}(x) = \text{normalize}(\exp(x))$$
If you expand that equation out, you get:
$$\text{softmax}(x)_i = \frac{\exp(x_i)}{\sum_j \exp(x_j)}$$
But it's often more helpful to think of softmax the first way: exponentiate the inputs, then normalize them. The exponentiation means that one more unit of evidence increases the weight given to a hypothesis multiplicatively. Conversely, one less unit of evidence means a hypothesis gets a fraction of its earlier weight. No hypothesis ever has zero or negative weight. Softmax then normalizes these weights so that they add up to exactly one, forming a valid probability distribution. (To get more intuition about the softmax function, see this section of Michael Nielsen's book, complete with an interactive visualization.)
You can picture our softmax regression as looking something like the following, although with many more $x$s. For each output, we compute a weighted sum of the $x$s, add a bias, and then apply softmax.
Written as equations:
We can "vectorize" this procedure, turning it into a matrix multiplication and vector addition. This is helpful for computational efficiency. (It's also a useful way to think.)
More compactly, we can just write:
$$y = \text{softmax}(Wx + b)$$
Now let's turn that into a TensorFlow program.
Implementing the regression
To do efficient numerical computing in Python, we typically use libraries like NumPy that move expensive computations outside Python, into a more efficient language. Unfortunately, there can still be a lot of overhead from switching back to Python. This overhead is especially bad if you want to run computations on GPUs or in a distributed manner, where there is a high cost to transferring data.
TensorFlow also does its heavy lifting outside Python, but it goes a step further to avoid this overhead. Instead of running a single expensive operation, TensorFlow lets us describe a graph of interacting operations that run entirely outside Python. (A few machine learning frameworks use this approach.)
To use TensorFlow, first import it.
End of explanation
"""
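Before the TensorFlow version, the softmax formula itself is easy to verify with plain NumPy (a standalone sketch): the outputs are positive, sum to one, and preserve the ordering of the evidence.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

evidence = np.array([2.0, 1.0, 0.1])
probs = softmax(evidence)
# probs is a valid probability distribution over the three classes
```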
x = tf.placeholder(tf.float32, [None, 784])
"""
Explanation: We describe these interacting operations by manipulating symbolic variables. Let's create one:
End of explanation
"""
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
"""
Explanation: x isn't a specific value. It's a placeholder, a value that we'll input when we ask TensorFlow to run a computation. We want to be able to input any number of MNIST images, each flattened into a 784-dimensional vector. We represent this as a 2-D tensor of floating-point numbers, with a shape of [None, 784]. (Here None means that the dimension can be of any length.)
We also need weights and biases for our model. We could imagine treating these like additional inputs, but TensorFlow has an even better way to handle them: variables. A variable is a modifiable tensor that lives in TensorFlow's graph of interacting operations. It can be used and even modified by the computation. For machine learning applications, the model parameters are generally variables.
End of explanation
"""
y = tf.nn.softmax(tf.matmul(x, W) + b)
"""
Explanation: We create these variables by giving tf.Variable an initial value: in this case, we initialize both W and b as tensors full of zeros. Since we are going to learn W and b, it doesn't matter much what their initial values are.
Notice that W has a shape of [784, 10] because we want to multiply the 784-dimensional image vectors by it to produce 10-dimensional vectors of evidence. b has a shape of [10] so we can add it to the output.
We can now implement our model. It only takes one line of code!
End of explanation
"""
y_ = tf.placeholder(tf.float32, [None, 10])
"""
Explanation: First, we multiply x by W with the expression tf.matmul(x, W). This is flipped from our equation, where we had $Wx$, as a small trick to deal with x being a 2-D tensor with multiple inputs. We then add b, and finally apply tf.nn.softmax.
That's it. It only took one line to define our model, after a couple of short lines of setup. That isn't because TensorFlow is designed to make softmax regression particularly easy: it's just a very flexible way to describe many kinds of numerical computations, from machine learning models to physics simulations. And once defined, our model can run on different devices: your computer's CPU, a GPU, and even a phone!
Training
In order to train our model, we first need to define a metric for what it means for the model to be good. Actually, in machine learning we typically define a metric for what it means for a model to be bad. We call this metric the cost, or the loss, and we then try to minimize it. The two approaches are equivalent.
One very common, very nice cost function is "cross-entropy". Cross-entropy arose from thinking about information-compressing codes in information theory, but it has since become an important technique in many fields, from gambling to machine learning. It's defined as:
$$H_{y'}(y) = -\sum_i y'_i \log(y_i)$$
where $y$ is our predicted probability distribution and $y'$ is the true distribution (the one-hot vector we input). In a rough sense, the cross-entropy measures how inefficient our predictions are at describing the truth. Going into more detail about cross-entropy is beyond the scope of this tutorial, but it's well worth understanding.
To compute the cross-entropy, we first need to add a new placeholder to input the correct answers:
End of explanation
"""
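With a one-hot $y'$, the cross-entropy sum collapses to minus the log of the probability the model assigns to the true class, which is easy to verify in NumPy (a standalone sketch):

```python
import numpy as np

y_true = np.array([0.0, 0.0, 1.0])   # one-hot label: the true class is 2
y_pred = np.array([0.1, 0.2, 0.7])   # model's predicted distribution

ce = -np.sum(y_true * np.log(y_pred))
# ce equals -log(0.7); it shrinks toward 0 as y_pred[2] approaches 1
```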
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
"""
Explanation: Then we can compute the cross-entropy with $-\sum y'\log(y)$:
End of explanation
"""
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
"""
Explanation: First, tf.log computes the logarithm of each element of y. Next, we multiply each element of y_ by the corresponding element of tf.log(y). Then, because of the parameter reduction_indices=[1], tf.reduce_sum adds up the elements in the second dimension of y, and finally tf.reduce_mean computes the mean over all the examples in the batch.
Note that in the source code we don't use this formulation, because it is numerically unstable. Instead, we apply tf.nn.softmax_cross_entropy_with_logits to the unnormalized logits (i.e., we call softmax_cross_entropy_with_logits on tf.matmul(x, W) + b), because this more numerically stable function internally computes the softmax activation. In your own code, consider using tf.nn.softmax_cross_entropy_with_logits instead.
Now that we know what we want our model to do, it's very easy to have TensorFlow train it to do so. Because TensorFlow has a graph describing each of your computations, it can automatically use the backpropagation algorithm to efficiently determine how your variables affect the cost you ask it to minimize. Then it applies your choice of optimization algorithm to keep modifying the variables and reducing the cost.
End of explanation
"""
sess = tf.InteractiveSession()
"""
Explanation: Here, we ask TensorFlow to minimize the cross-entropy using the gradient descent algorithm with a learning rate of 0.5. Gradient descent is a simple procedure in which TensorFlow shifts each variable a little bit in the direction that keeps reducing the cost. TensorFlow also provides many other optimization algorithms: using one is as simple as tweaking a single line of code.
What TensorFlow actually does here, behind the scenes, is add new operations to the graph describing your computation which implement backpropagation and gradient descent. It then gives you back a single operation which, when run, trains your model with gradient descent, slightly tweaking your variables to keep reducing the cost.
We can now run our model in an InteractiveSession:
End of explanation
"""
tf.global_variables_initializer().run()
"""
Explanation: We first have to add an operation to initialize the variables we created:
End of explanation
"""
for _ in range(1000):
batch_xs, batch_ys = mnist.train.next_batch(100)
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
"""
Explanation: Then we start training the model; here we run the training step 1000 times!
End of explanation
"""
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
"""
Explanation: In each step of the loop, we grab a random batch of 100 data points from our training set, and then run train_step, feeding in the batch data to replace the placeholders.
Using small batches of random data for training is called stochastic training; here, more precisely, stochastic gradient descent. Ideally, we'd like to use all our data at every step of training, because that would give us a better result, but that is computationally expensive. So, instead, we use a different subset every time: this is cheap and still captures much of the overall character of the data.
Evaluating our model
So how well does our model perform?
First, let's figure out where we predicted the correct label. tf.argmax is an extremely useful function which gives the index of the highest entry in a tensor along some axis. Since the label vectors consist of 0s and a single 1, the index of the maximal value 1 is the class label. For example, tf.argmax(y,1) is the label our model predicts for each input x, while tf.argmax(y_,1) is the correct label. We can use tf.equal to check whether our prediction matches the truth (they match when the index positions are equal).
End of explanation
"""
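The argmax/equal/cast/mean pattern can be reproduced in plain NumPy to see exactly what it computes (a standalone sketch):

```python
import numpy as np

logits = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])  # model outputs
labels = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 0.0]])  # one-hot truth

correct = np.argmax(logits, axis=1) == np.argmax(labels, axis=1)
accuracy = correct.astype(np.float32).mean()  # fraction predicted correctly
```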
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
"""
Explanation: This gives us a list of booleans. To determine what fraction are correct, we cast to floating point numbers and then take the mean. For example, [True, False, True, True] becomes [1,0,1,1], whose mean is 0.75.
End of explanation
"""
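As a quick sanity check outside of TensorFlow, the same cast-and-average computation can be sketched with NumPy (the variable names here are illustrative):

```python
import numpy as np

# Boolean correctness flags, as produced by tf.equal
correct = np.array([True, False, True, True])

# Cast to floats and take the mean, mirroring tf.cast + tf.reduce_mean
accuracy = correct.astype(np.float32).mean()
print(accuracy)  # 0.75
```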
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
"""
Explanation: Finally, we compute the accuracy of the learned model on the test data.
End of explanation
"""
|
statsmodels/statsmodels.github.io | v0.12.2/examples/notebooks/generated/variance_components.ipynb | bsd-3-clause | import numpy as np
import statsmodels.api as sm
from statsmodels.regression.mixed_linear_model import VCSpec
import pandas as pd
"""
Explanation: Variance Component Analysis
This notebook illustrates variance components analysis for two-level
nested and crossed designs.
End of explanation
"""
np.random.seed(3123)
"""
Explanation: Make the notebook reproducible
End of explanation
"""
def generate_nested(n_group1=200, n_group2=20, n_rep=10, group1_sd=2,
group2_sd=3, unexplained_sd=4):
# Group 1 indicators
group1 = np.kron(np.arange(n_group1), np.ones(n_group2 * n_rep))
# Group 1 effects
u = group1_sd * np.random.normal(size=n_group1)
effects1 = np.kron(u, np.ones(n_group2 * n_rep))
# Group 2 indicators
group2 = np.kron(np.ones(n_group1), np.kron(np.arange(n_group2), np.ones(n_rep)))
# Group 2 effects
u = group2_sd * np.random.normal(size=n_group1*n_group2)
effects2 = np.kron(u, np.ones(n_rep))
e = unexplained_sd * np.random.normal(size=n_group1 * n_group2 * n_rep)
y = effects1 + effects2 + e
df = pd.DataFrame({"y":y, "group1": group1, "group2": group2})
return df
"""
Explanation: Nested analysis
In our discussion below, "Group 2" is nested within "Group 1". As a
concrete example, "Group 1" might be school districts, with "Group
2" being individual schools. The function below generates data from
such a population. In a nested analysis, the group 2 labels that
are nested within different group 1 labels are treated as
independent groups, even if they have the same label. For example,
two schools labeled "school 1" that are in two different school
districts are treated as independent schools, even though they have
the same label.
End of explanation
"""
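One way to see why nesting matters is to build the effective group labels by hand. Below is a small, hypothetical sketch in which two districts each contain a school labeled "school 1"; pairing the labels shows that the (district, school) combination identifies four independent schools, not two:

```python
import pandas as pd

# Two districts, each containing a school labeled "school 1"
df = pd.DataFrame({
    "district": ["d1", "d1", "d2", "d2"],
    "school": ["school 1", "school 2", "school 1", "school 2"],
})

# In a nested analysis the effective group is the (district, school) pair
df["effective_school"] = df["district"] + "/" + df["school"]
print(df["effective_school"].nunique())  # 4 distinct schools, not 2
```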
df = generate_nested()
"""
Explanation: Generate a data set to analyze.
End of explanation
"""
model1 = sm.MixedLM.from_formula("y ~ 1", re_formula="1", vc_formula={"group2": "0 + C(group2)"},
groups="group1", data=df)
result1 = model1.fit()
print(result1.summary())
"""
Explanation: Using all the default arguments for generate_nested, the population
values of "group 1 Var" and "group 2 Var" are 2^2=4 and 3^2=9,
respectively. The unexplained variance, listed as "scale" at the
top of the summary table, has population value 4^2=16.
End of explanation
"""
def f(x):
n = x.shape[0]
g2 = x.group2
u = g2.unique()
u.sort()
uv = {v: k for k, v in enumerate(u)}
mat = np.zeros((n, len(u)))
for i in range(n):
mat[i, uv[g2.iloc[i]]] = 1
colnames = ["%d" % z for z in u]
return mat, colnames
"""
Explanation: If we wish to avoid the formula interface, we can fit the same model
by building the design matrices manually.
End of explanation
"""
vcm = df.groupby("group1").apply(f).to_list()
mats = [x[0] for x in vcm]
colnames = [x[1] for x in vcm]
names = ["group2"]
vcs = VCSpec(names, [colnames], [mats])
"""
Explanation: Then we set up the variance components using the VCSpec class.
End of explanation
"""
oo = np.ones(df.shape[0])
model2 = sm.MixedLM(df.y, oo, exog_re=oo, groups=df.group1, exog_vc=vcs)
result2 = model2.fit()
print(result2.summary())
"""
Explanation: Finally we fit the model. It can be seen that the results of the
two fits are identical.
End of explanation
"""
def generate_crossed(n_group1=100, n_group2=100, n_rep=4, group1_sd=2,
group2_sd=3, unexplained_sd=4):
# Group 1 indicators
group1 = np.kron(np.arange(n_group1, dtype=np.int),
np.ones(n_group2 * n_rep, dtype=np.int))
group1 = group1[np.random.permutation(len(group1))]
# Group 1 effects
u = group1_sd * np.random.normal(size=n_group1)
effects1 = u[group1]
# Group 2 indicators
    group2 = np.kron(np.arange(n_group2, dtype=np.int),
                     np.ones(n_group1 * n_rep, dtype=np.int))
group2 = group2[np.random.permutation(len(group2))]
# Group 2 effects
u = group2_sd * np.random.normal(size=n_group2)
effects2 = u[group2]
e = unexplained_sd * np.random.normal(size=n_group1 * n_group2 * n_rep)
y = effects1 + effects2 + e
df = pd.DataFrame({"y":y, "group1": group1, "group2": group2})
return df
"""
Explanation: Crossed analysis
In a crossed analysis, the levels of one group can occur in any
combination with the levels of another group. The groups in
Statsmodels MixedLM are always nested, but it is possible to fit a
crossed model by having only one group, and specifying all random
effects as variance components. Many, but not all crossed models
can be fit in this way. The function below generates a crossed data
set with two levels of random structure.
End of explanation
"""
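To see the contrast with a nested design, note that in a crossed layout every level of one factor can co-occur with every level of the other. A small illustrative crosstab (with made-up labels) makes this visible:

```python
import pandas as pd

# Every (group1, group2) combination appears -- the hallmark of a crossed design
df = pd.DataFrame({
    "group1": ["a", "a", "b", "b"],
    "group2": ["x", "y", "x", "y"],
})
print(pd.crosstab(df.group1, df.group2))
```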
df = generate_crossed()
"""
Explanation: Generate a data set to analyze.
End of explanation
"""
vc = {"g1": "0 + C(group1)", "g2": "0 + C(group2)"}
oo = np.ones(df.shape[0])
model3 = sm.MixedLM.from_formula("y ~ 1", groups=oo, vc_formula=vc, data=df)
result3 = model3.fit()
print(result3.summary())
"""
Explanation: Next we fit the model, note that the groups vector is constant.
Using the default parameters for generate_crossed, the level 1
variance should be 2^2=4, the level 2 variance should be 3^2=9, and
the unexplained variance should be 4^2=16.
End of explanation
"""
def f(g):
n = len(g)
u = g.unique()
u.sort()
uv = {v: k for k, v in enumerate(u)}
mat = np.zeros((n, len(u)))
for i in range(n):
mat[i, uv[g.iloc[i]]] = 1
colnames = ["%d" % z for z in u]
return [mat], [colnames]
vcm = [f(df.group1), f(df.group2)]
print(vcm)
mats = [x[0] for x in vcm]
colnames = [x[1] for x in vcm]
names = ["group1", "group2"]
vcs = VCSpec(names, colnames, mats)
"""
Explanation: If we wish to avoid the formula interface, we can fit the same model
by building the design matrices manually.
End of explanation
"""
oo = np.ones(df.shape[0])
model4 = sm.MixedLM(df.y, oo[:, None], exog_re=None, groups=oo, exog_vc=vcs)
result4 = model4.fit()
print(result4.summary())
"""
Explanation: Here we fit the model without using formulas, it is simple to check
that the results for models 3 and 4 are identical.
End of explanation
"""
|
xlbaojun/Note-jupyter | 05其他/pandas文档-zh-master/.ipynb_checkpoints/检索 ,查询数据-checkpoint.ipynb | gpl-2.0 | import numpy as np
import pandas as pd
"""
Explanation: Indexing and selecting data
This section covers how to index and select pandas data.
End of explanation
"""
dates = pd.date_range('1/1/2000', periods=8)
dates
df = pd.DataFrame(np.random.randn(8,4), index=dates, columns=list('ABCD'))
df
panel = pd.Panel({'one':df, 'two':df-df.mean()})
panel
"""
Explanation: Python's and NumPy's indexing operator [] and attribute operator . give quick access to pandas data structures.
However, using these operators directly may not be optimally efficient in pandas; we recommend the optimized pandas data access methods, which are what this section introduces.
Different choices for indexing
pandas supports three kinds of indexing:
* .loc is primarily label based, but may also be used with a boolean array.
* .iloc is primarily integer-position based (from 0 to length-1 of the axis), but may also be used with a boolean array. .iloc raises IndexError if a requested indexer is out of bounds, except for slice indexers, which allow out-of-bounds indexing.
* .ix supports mixed label- and integer-position-based access, defaulting to label based. .ix is the most general method and supports all of the inputs of .loc and .iloc. If you are indexing purely by label or purely by integer position, we recommend .loc or .iloc instead.
Taking .loc as an example, the usage is:
Object type | Indexers
Series | s.loc[indexer]
DataFrame | df.loc[row_indexer, column_indexer]
Panel | p.loc[item_indexer, major_indexer, minor_indexer]
Basics of indexing and selection
The most basic way to select data is to index with the [] operator:
Object type | Selection | Return value type
Series | series[label], where label is an index name | scalar value
DataFrame| frame[colname], using a column name | the Series corresponding to colname
Panel | panel[itemname] | the DataFrame corresponding to itemname
The examples below demonstrate this
End of explanation
"""
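Before turning to [], here is a minimal sketch (with illustrative data) contrasting the label-based .loc and position-based .iloc on a Series whose labels do not match its positions:

```python
import pandas as pd

s = pd.Series([10, 20, 30], index=[2, 1, 0])

# .loc looks up by label: label 0 is the *last* element here
print(s.loc[0])   # 30

# .iloc looks up by integer position: position 0 is the *first* element
print(s.iloc[0])  # 10
```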
s = df['A'] # using a column name
s # returns a Series
"""
Explanation: We start with the most basic [] operator
End of explanation
"""
s[dates[5]] # using an index label
panel['two']
"""
Explanation: Indexing a Series with an index label
End of explanation
"""
df
df[['B', 'A']] = df[['A', 'B']]
df
"""
Explanation: You can also pass a list of column names to [], as in df[[col1, col2]]. If any of the given column names does not exist, an error is raised
End of explanation
"""
sa = pd.Series([1,2,3],index=list('abc'))
dfa = df.copy()
sa
sa.b # an index label used directly as an attribute
dfa
dfa.A
panel.one
sa
sa.a = 5
sa
sa
dfa.A=list(range(len(dfa.index))) # ok if A already exists
dfa
dfa['A'] = list(range(len(dfa.index))) # use this form to create a new column
dfa
"""
Explanation: Attribute access -- columns as DataFrame attributes
You can access an index label on a Series, a column on a DataFrame, or an item on a Panel directly as an attribute of the object.
End of explanation
"""
s
s[:5]
s[::2]
s[::-1]
"""
Explanation: Note that attribute access and [] differ in one respect:
to create a new column, you must use [] --
after all, only names that already exist are attributes; a column that does not yet exist cannot be one.
You can use attribute access to modify an existing element of a Series or column of a DataFrame, but be careful; if you try to use attribute access to create a new column, it fails silently, creating a new attribute rather than a new column.
Things to keep in mind with attribute access:
* If an existing method or function shares a name with a column, the corresponding attribute is not available
* In short, attribute access covers fewer cases than []
Slicing ranges
You can slice with [] as well as with .iloc; here we introduce [].
For a Series, slicing with [] works just like it does on an ndarray:
End of explanation
"""
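The silent failure described above is easy to reproduce; a quick sketch with made-up column names:

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3]})

# Attribute assignment to a non-existent column creates a plain attribute...
df.B = [4, 5, 6]
print("B" in df.columns)  # False -- no column was created

# ...whereas [] assignment creates a real column
df["B"] = [4, 5, 6]
print("B" in df.columns)  # True
```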
s2 = s.copy()
s2[:5]=0 # assignment
s2
"""
Explanation: [] supports assignment as well as retrieval
End of explanation
"""
df[:3]
df[::-1]
"""
Explanation: For a DataFrame, slicing with [] operates on the rows, which is very convenient.
End of explanation
"""
df1 = pd.DataFrame(np.random.rand(5,4), columns=list('ABCD'), index=pd.date_range('20160101',periods=5))
df1
df1.loc[2:3]
"""
Explanation: Selection by label
Warning:
.loc requires that the indexer strictly match the type of the index; passing the wrong type raises TypeError.
End of explanation
"""
df1.loc['20160102':'20160104']
"""
Explanation: Indexing with a string works fine here
End of explanation
"""
s1 = pd.Series(np.random.randn(6), index=list('abcdef'))
s1
s1.loc['c':]
s1.loc['b']
"""
Explanation: As you may have noticed, the row with index '20160104' is also included: .loc slices over a closed interval [start, end].
Integers are perfectly valid labels; just remember that in that case the integer refers to a label of the index, not an integer position!
.loc is the primary access method for label-based indexing. The following inputs are all valid:
* A single label, e.g. 5 or 'a'. Remember that 5 here is a label of the index, not an integer position.
* A list or array of labels, e.g. ['a', 'b', 'c']
* A slice, e.g. 'a':'f'. Note that with .loc the slice covers a closed interval!
* A boolean array
End of explanation
"""
s1.loc['c':]=0
s1
"""
Explanation: .loc also supports assignment
End of explanation
"""
df1 = pd.DataFrame(np.random.randn(6,4), index=list('abcdef'),columns=list('ABCD'))
df1
df1.loc[['a','b','c','d'],:]
df1.loc[['a','b','c','d']] #可以省略 ':'
"""
Explanation: Now some DataFrame examples
End of explanation
"""
df1.loc['d':,'A':'C'] # note: closed interval
df1.loc['a']
"""
Explanation: Indexing with a slice
End of explanation
"""
df1.loc['a']>0
df1.loc[:,df1.loc['a']>0]
"""
Explanation: Indexing with a boolean array
End of explanation
"""
df1.loc['a','A']
df1.get_value('a','A')
"""
Explanation: Getting a single value from a DataFrame, equivalent to df1.get_value('a','A')
End of explanation
"""
s1 = pd.Series(np.random.randn(5),index=list(range(0,10,2)))
s1
s1.iloc[:3] # note: half-open interval
s1.iloc[3]
"""
Explanation: Selection by position
pandas provides a suite of methods for purely integer-based indexing. The semantics closely follow Python and NumPy slicing: positions are 0-based, and slices use the half-open interval [start, end). Passing a non-integer label as a position raises IndexError.
Valid inputs for .iloc include:
* An integer, e.g. 5
* A list or array of integers, e.g. [4, 3, 0]
* A slice with integers, e.g. 1:7
* A boolean array
Here is a Series example using .iloc:
End of explanation
"""
s1.iloc[:3]=0
s1
"""
Explanation: .iloc also supports assignment
End of explanation
"""
df1 = pd.DataFrame(np.random.randn(6,4),index=list(range(0,12,2)), columns=list(range(0,8,2)))
df1
df1.iloc[:3]
"""
Explanation: A DataFrame example:
End of explanation
"""
df1.iloc[1:5,2:4]
df1.iloc[[1,3,5],[1,2]]
df1.iloc[1:3,:]
df1.iloc[:,1:3]
df1.iloc[1,1] # selects a single element
"""
Explanation: Selecting by both row and column position
End of explanation
"""
df1.iloc[1]
df1.iloc[1:2]
"""
Explanation: Note the difference between the following two examples:
End of explanation
"""
x = list('abcdef')
x
x[4:10] # x has length 6 here
x[8:10]
s = pd.Series(x)
s
s.iloc[4:10]
s.iloc[8:10]
df1 = pd.DataFrame(np.random.randn(5,2), columns=list('AB'))
df1
df1.iloc[:,2:3]
df1.iloc[:,1:3]
df1.iloc[4:6]
"""
Explanation: Out-of-range slice indexes are handled gracefully, just as in Python/NumPy, as long as the pandas version is >= v0.14.0.
Note: this applies to slice indexing only.
End of explanation
"""
df1.iloc[[4,5,6]]
"""
Explanation: As noted above, this graceful handling of out-of-bounds values only applies when the inputs are all slices. An out-of-bounds list or integer indexer raises IndexError
End of explanation
"""
df1.iloc[:,4]
"""
Explanation: A mix of slices and integers that is out of bounds cannot be handled either
End of explanation
"""
s = pd.Series([0,1,2,3,4,5])
s
s.sample()
s.sample(n=6)
s.sample(3) # passing a plain integer works
"""
Explanation: Selecting random samples
You can randomly select rows or columns with the sample() method, which is available on Series, DataFrame, and Panel. sample() selects rows by default, and accepts either an integer or a fraction.
End of explanation
"""
s.sample(frac=0.5)
s.sample(0.5) # raises an error: fractions must be passed as frac=0.5
s.sample(frac=0.8) #6*0.8=4.8
s.sample(frac=0.7)# 6*0.7=4.2
"""
Explanation: You can also pass a fraction, in which case N*frac samples are drawn, with the count rounded
End of explanation
"""
s
s.sample(n=6,replace=False)
s.sample(6,replace=True)
"""
Explanation: sample() draws without replacement by default; pass replace=True to sample with replacement.
End of explanation
"""
s = pd.Series([0,1,2,3,4,5])
s
example_weights=[0,0,0.2,0.2,0.2,0.4]
s.sample(n=3,weights=example_weights)
example_weights2 = [0.5, 0, 0, 0, 0, 0]
s.sample(n=1, weights=example_weights2)
s.sample(n=2, weights=example_weights2) # n > 1 raises an error, since only one weight is nonzero
"""
Explanation: By default, each row/column is equally likely to be sampled. If you want to weight the probability of each row being selected, use the weights parameter.
Note: if the probabilities in weights do not sum to 1, pandas first normalizes them so that they do!
End of explanation
"""
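A quick sketch of the normalization: these weights sum to 2, so pandas rescales them to sum to 1, and only rows with nonzero weight can ever be drawn (illustrative values):

```python
import pandas as pd

s = pd.Series([0, 1, 2, 3, 4, 5])

# These weights sum to 2, not 1 -- pandas rescales them to [0, 0, 0, 0, 0.5, 0.5]
raw_weights = [0, 0, 0, 0, 1, 1]
picked = s.sample(n=2, weights=raw_weights, random_state=0)

# Only the rows with nonzero weight can be drawn
print(sorted(picked.tolist()))  # [4, 5]
```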
s
s.sample(7) # 7 fails: larger than the number of rows
s.sample(7,replace=True)
"""
Explanation: Note: since sample draws without replacement by default, n must not exceed the number of rows -- unless you sample with replacement.
End of explanation
"""
df2 = pd.DataFrame({'col1':[9,8,7,6], 'weight_column':[0.5, 0.4, 0.1, 0]})
df2
df2.sample(n=3,weights='weight_column')
"""
Explanation: For weighted sampling on a DataFrame, an easy way is to add a column holding the weight of each row
End of explanation
"""
df3 = pd.DataFrame({'col1':[1,2,3], 'clo2':[2,3,4]})
df3
df3.sample(1,axis=1)
"""
Explanation: To sample columns, pass axis=1
End of explanation
"""
df4 = pd.DataFrame({'col1':[1,2,3], 'clo2':[2,3,4]})
df4
"""
Explanation: You can also seed sample's internal random number generator with the random_state parameter.
End of explanation
"""
df4.sample(n=2, random_state=2)
df4.sample(n=2,random_state=2)
df4.sample(n=2,random_state=3)
"""
Explanation: Note that the following two examples produce identical output, because they use the same seed
End of explanation
"""
se = pd.Series([1,2,3])
se
se[5]=5
se
"""
Explanation: Setting with enlargement
Assigning to a non-existent key with .loc/.ix/[] appends a new element to the object, keyed by the value that did not previously exist.
For a Series, this is an effective way to append.
End of explanation
"""
dfi = pd.DataFrame(np.arange(6).reshape(3,2),columns=['A','B'])
dfi
dfi.loc[:,'C']=dfi.loc[:,'A'] # enlarging along the columns
dfi
dfi.loc[3]=5 # enlarging along the rows
dfi
"""
Explanation: A DataFrame can be enlarged along either axis
End of explanation
"""
s.iat[5]
df.at[dates[5],'A']
df.iat[3,0]
"""
Explanation: Fast scalar value getting and setting
If you only want a single element, [] is overkill. pandas provides fast scalar access methods, at and iat, available on Series, DataFrame, and Panel.
Like loc, at takes label inputs, while iat takes integer positions.
End of explanation
"""
df.at[dates[-1]+1,0]=7
df
"""
Explanation: Assignment works as well
End of explanation
"""
s = pd.Series(range(-3, 4))
s
s[s>0]
s[(s<-1) | (s>0.5)]
s[~(s<0)]
"""
Explanation: Boolean indexing
Another common operation is filtering data with a boolean vector. The operators are | (or), & (and), and ~ (not).
Note: the operands must be grouped with parentheses.
Indexing a Series with a boolean vector works the same way as with a NumPy ndarray.
End of explanation
"""
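The parentheses requirement comes from Python operator precedence: & and | bind more tightly than comparisons, so omitting the parentheses raises an error. A small sketch:

```python
import pandas as pd

s = pd.Series(range(-3, 4))

# Parenthesized: each comparison is evaluated first, then combined elementwise
ok = s[(s < -1) | (s > 1)]
print(ok.tolist())  # [-3, -2, 2, 3]

# Unparenthesized: | binds tighter than <, so this parses as
# s < (-1 | s) > 1, a chained comparison that tries bool(Series) and fails
try:
    s[s < -1 | s > 1]
except ValueError as err:
    print("ValueError:", err)
```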
df[df['A'] > 0]
"""
Explanation: A DataFrame example:
End of explanation
"""
df2 = pd.DataFrame({'a' : ['one', 'one', 'two', 'three', 'two', 'one', 'six'],
'b' : ['x', 'y', 'y', 'x', 'y', 'x', 'x'],
'c' : np.random.randn(7)})
df2
criterion = df2['a'].map(lambda x:x.startswith('t'))
df2[criterion]
df2[[x.startswith('t') for x in df2['a']]]
df2[criterion & (df2['b'] == 'x')]
"""
Explanation: List comprehensions and the map method can be used to produce more complex selection criteria.
End of explanation
"""
df2.loc[criterion & (df2['b'] == 'x'), 'b':'c']
"""
Explanation: Combined with loc, iloc, and similar methods, you can select data along multiple axes at once.
End of explanation
"""
s = pd.Series(np.arange(5), index=np.arange(5)[::-1],dtype='int64')
s
s.isin([2,4,6])
s[s.isin([2,4,6])]
"""
Explanation: Indexing with isin
isin ("is in")
For a Series, calling isin with a list returns a boolean vector whose elements are True wherever the corresponding Series element appears in the list. That may sound convoluted, so let's look at an example:
End of explanation
"""
s[s.index.isin([2,4,6])]
s[[2,4,6]]
"""
Explanation: Index objects also have an isin method.
End of explanation
"""
df = pd.DataFrame({'vals': [1, 2, 3, 4], 'ids': ['a', 'b', 'f', 'n'],
'ids2':['a', 'n', 'c', 'n']})
df
values=['a', 'b', 1, 3]
df.isin(values)
"""
Explanation: DataFrame also has an isin method, which accepts either an array or a dict. The examples below show the difference:
End of explanation
"""
values = {'ids': ['a', 'b'], 'vals': [1, 3]}
df.isin(values)
"""
Explanation: Passing a dict:
End of explanation
"""
values = {'ids': ['a', 'b'], 'ids2': ['a', 'c'], 'vals': [1, 3]}
row_mark = df.isin(values).all(1)
df[row_mark]
row_mark = df.isin(values).any(1)
df[row_mark]
"""
Explanation: Combining isin with the any() and all() methods lets you quickly query a DataFrame -- for example, selecting the rows where every column meets its criterion:
End of explanation
"""
s[s>0]
"""
Explanation: The where() method and masking
Indexing a Series with a boolean vector usually returns a subset of the object. To guarantee the result has the same shape as the original, use the where method.
Indexing a DataFrame with a boolean vector already returns an object with the same shape as the original, because where is used under the hood.
End of explanation
"""
s.where(s>0)
df[df<0]
df.where(df<0)
"""
Explanation: Using the where method
End of explanation
"""
df.where(df<0, 2)
df
df.where(df<0, df) # pass df as the other argument
"""
Explanation: where also takes an optional other argument, whose value replaces the entries that are False in the returned result; the original object is unchanged.
End of explanation
"""
s2 = s.copy()
s2
s2[s2<0]=0
s2
"""
Explanation: You may want to set values based on some condition. An intuitive approach is:
End of explanation
"""
df = pd.DataFrame(np.random.randn(6,5), index=list('abcdef'), columns=list('ABCDE'))
df_orig = df.copy()
df_orig.where(df < 0, -df, inplace=True);
df_orig
"""
Explanation: By default, where does not modify the original object; it returns a modified copy of it. To modify the original object in place, set the inplace parameter to True
End of explanation
"""
df2 = df.copy()
df2[df2[1:4] >0]=3
df2
df2 = df.copy()
df2.where(df2>0, df2['A'], axis='index')
"""
Explanation: Alignment
where aligns the boolean condition with the input, which allows partial selection during assignment.
End of explanation
"""
s.mask(s>=0)
df.mask(df >= 0)
"""
Explanation: mask() is the inverse boolean operation of where(): it replaces the values where the condition is True.
End of explanation
"""
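Since mask() is the inverse of where(), df.mask(cond) should match df.where(~cond); a quick sketch with illustrative data:

```python
import pandas as pd

df = pd.DataFrame({"a": [-1, 0, 1], "b": [2, -2, 0]})

masked = df.mask(df >= 0)      # hide entries where the condition is True
wheres = df.where(~(df >= 0))  # keep entries where the negated condition is True

print(masked.equals(wheres))  # True
```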
n = 10
df = pd.DataFrame(np.random.randn(n, 3), columns=list('abc'))
df
df[(df.a<df.b) & (df.b<df.c)]
df.query('(a < b) & (b < c)')
"""
Explanation: The query() method (experimental)
DataFrame objects have a query method that allows selection with an expression.
For example, select the rows where the value in column 'b' lies between the values in columns 'a' and 'c'.
Note: this requires numexpr to be installed.
End of explanation
"""
n = 10
colors = np.random.choice(['red', 'green'], size=n)
foods = np.random.choice(['eggs', 'ham'], size=n)
colors
foods
index = pd.MultiIndex.from_arrays([colors, foods], names=['color', 'food'])
df = pd.DataFrame(np.random.randn(n,2), index=index)
df
df.query('color == "red"')
"""
Explanation: MultiIndex query() syntax
For a DataFrame with a MultiIndex, the index levels can be used in query just like column names.
End of explanation
"""
df.index.names = [None, None]
df
df.query('ilevel_0 == "red"')
"""
Explanation: If the index levels have no names, you can name them
End of explanation
"""
df = pd.DataFrame(np.random.randn(n, 3), columns=list('abc'))
df
df2 = pd.DataFrame(np.random.randn(n+2, 3), columns=df.columns)
df2
expr = '0.0 <= a <= c <= 0.5'
list(map(lambda frame: frame.query(expr), [df, df2]))
"""
Explanation: ilevel_0 means "index level 0".
query() use cases
One use case for query() is a collection of DataFrame objects that share common column names: you can run a single query uniformly over the whole collection.
End of explanation
"""
df = pd.DataFrame(np.random.randint(n, size=(n, 3)), columns=list('abc'))
df
df.query('(a<b) &(b<c)')
df[(df.a < df.b) & (df.b < df.c)]
"""
Explanation: query() Python versus pandas syntax comparison
End of explanation
"""
df.query('a < b & b < c')
df.query('a<b and b<c')
"""
Explanation: query() can drop the parentheses, and can use and instead of the & operator
End of explanation
"""
df = pd.DataFrame({'a': list('aabbccddeeff'), 'b': list('aaaabbbbcccc'),
'c': np.random.randint(5, size=12),
'd': np.random.randint(9, size=12)})
df
df.query('a in b')
df[df.a.isin(df.b)]
df[~df.a.isin(df.b)]
df.query('a in b and c < d') #更复杂的例子
df[df.b.isin(df.a) & (df.c < df.d)] # Python syntax
"""
Explanation: The in and not in operators
query() also supports Python's in and not in operators, which are implemented by calling isin under the hood
End of explanation
"""
df.query('b==["a", "b", "c"]')
df[df.b.isin(["a", "b", "c"])] # Python syntax
df.query('c == [1, 2]')
df.query('c != [1, 2]')
df.query('[1, 2] in c') # using in
df.query('[1, 2] not in c')
df[df.c.isin([1, 2])] # Python syntax
"""
Explanation: Special use of the == operator with list objects
You can compare a column to a list of values directly with ==/!=, which is equivalent to using in/not in.
The three forms are equivalent: ==/!= vs in/not in vs isin()/~isin()
End of explanation
"""
df = pd.DataFrame(np.random.randn(n, 3), columns=list('abc'))
df
df['bools']=np.random.randn(len(df))>0.5
df
df.query('bools')
df.query('not bools')
df.query('not bools') == df[~df.bools]
"""
Explanation: Boolean operators
You can negate boolean expressions with not or ~.
End of explanation
"""
shorter = df.query('a<b<c and (not bools) or bools>2')
shorter
longer = df[(df.a < df.b) & (df.b < df.c) & (~df.bools) | (df.bools > 2)]
longer
shorter == longer
"""
Explanation: Expressions can be arbitrarily complex.
End of explanation
"""
df2 = pd.DataFrame({'a': ['one', 'one', 'two', 'two', 'two', 'three', 'four'],
'b': ['x', 'y', 'x', 'y', 'x', 'x', 'x'],
'c': np.random.randn(7)})
df2
df2.duplicated('a') # only considers column a for duplicates
df2.duplicated('a', keep='last')
df2.drop_duplicates('a')
df2.drop_duplicates('a', keep='last')
df2.drop_duplicates('a', keep=False)
"""
Explanation: query() performance
DataFrame.query() uses numexpr under the hood, so it can be faster than plain Python, especially for large DataFrame objects.
Identifying and removing duplicate data
To identify and drop duplicate rows in a DataFrame, pandas provides two methods: duplicated and drop_duplicates. Both take column names as arguments.
* duplicated returns a boolean vector of length equal to the number of rows, indicating whether each row is a duplicate
* drop_duplicates removes the duplicate rows
By default the first occurrence of a row is considered unique, and every subsequent identical row is considered a duplicate. Both methods have a keep parameter to control which occurrences are kept:
* keep='first' (default): mark / drop duplicates except for the first occurrence
* keep='last': mark / drop duplicates except for the last occurrence
* keep=False: mark / drop all duplicates
End of explanation
"""
df2.duplicated(['a', 'b']) # columns a and b together form the key for detecting duplicates
df2
"""
Explanation: You can pass a list of column names, in which case the combination of those columns forms the unit of comparison
End of explanation
"""
df3 = pd.DataFrame({'a': np.arange(6),
'b': np.random.randn(6)},
index=['a', 'a', 'b', 'c', 'b', 'a'])
df3
df3.index.duplicated() # boolean vector
df3[~df3.index.duplicated()]
df3[~df3.index.duplicated(keep='last')]
df3[~df3.index.duplicated(keep=False)]
"""
Explanation: You can also drop rows with duplicate index values by calling Index.duplicated and then slicing (since Index.duplicated returns a boolean vector). The keep parameter works as above.
End of explanation
"""
s = pd.Series([1,2,3], index=['a', 'b', 'c'])
s
s.get('a')
s.get('x', default=-1)
s.get('b')
"""
Explanation: Dict-like get() method
Series, DataFrame, and Panel all have a get method that can return a default value.
End of explanation
"""
df = pd.DataFrame(np.random.randn(10, 3), columns=list('ABC'))
df.select(lambda x: x=='A', axis=1)
"""
Explanation: The select() method
Series, DataFrame, and Panel have a select() method for retrieving data; it is a fallback typically used only when no other access method applies. select accepts a function (operating on the labels) that returns a boolean.
End of explanation
"""
dflookup = pd.DataFrame(np.random.randn(20, 4), columns=list('ABCD'))
dflookup
dflookup.lookup(list(range(0,10,2)), ['B','C','A','B','D'])
"""
Explanation: The lookup() method
Given a list of row labels and a list of column labels, lookup returns a NumPy array of the corresponding values.
End of explanation
"""
index = pd.Index(['e', 'd', 'a', 'b'])
index
'd' in index
"""
Explanation: Index objects
The Index class and its subclasses in pandas can be viewed as implementing an ordered multiset, so duplicate values are allowed. However, an Index object containing duplicates cannot be converted into a set. The easiest way to create an Index is to pass a list or other sequence.
End of explanation
"""
index = pd.Index(['e', 'd', 'a', 'b'], name='something')
index.name
index = pd.Index(list(range(5)), name='rows')
columns = pd.Index(['A', 'B', 'C'], name='cols')
df = pd.DataFrame(np.random.randn(5, 3), index=index, columns=columns)
df
df['A']
"""
Explanation: You can also name an Index
End of explanation
"""
dfmi = pd.DataFrame([list('abcd'),
list('efgh'),
list('ijkl'),
list('mnop')],
columns=pd.MultiIndex.from_product([['one','two'],
['first','second']]))
dfmi
"""
Explanation: Returning a view versus a copy
When setting values in a pandas object, take care to avoid what is called chained indexing. Consider the following example:
End of explanation
"""
dfmi['one']['second']
dfmi.loc[:,('one','second')]
"""
Explanation: Compare these two access methods:
End of explanation
"""
dfmi.loc[:,('one','second')]=value
# under the hood this is
dfmi.loc.__setitem__((slice(None), ('one', 'second')), value)
"""
Explanation: The two methods above return the same result, so which one should you use? The answer is that method two is recommended.
dfmi['one'] selects the first level of the columns and returns a DataFrame; then a second Python operation, dfmi_with_one['second'], selects the Series indexed by 'second'. To pandas, these are two independent, sequential operations. The .loc method, by contrast, is handed a single tuple (slice(None), ('one', 'second')), which pandas treats as one event, so it runs faster.
Why does assignment with chained indexing fail?
The previous paragraph argued against chained indexing on performance grounds. Now let's look at it from the perspective of assignment. First, think about how Python interprets the following code:
End of explanation
"""
dfmi['one']['second'] = value
# becomes
dfmi.__getitem__('one').__setitem__('second', value)
"""
Explanation: But the following code is interpreted differently:
End of explanation
"""
def do_something(df):
foo = df[['bar', 'baz']] # Is foo a view? A copy? Nobody knows!
# ... many lines here ...
foo['quux'] = value # We don't know whether this will modify df or not!
return foo
"""
Explanation: See the __getitem__ there? Outside of the simplest cases it is very hard to predict whether it returns a view or a copy (it depends on the memory layout of the array, about which pandas makes no guarantees), which is why assignment through chained indexing is discouraged!
dfmi.loc.__setitem__, on the other hand, operates on dfmi directly.
Sometimes a SettingWithCopy warning is raised even when no chained indexing was used; this is a known shortcoming of pandas' detection machinery
End of explanation
"""
dfb = pd.DataFrame({'a' : ['one', 'one', 'two',
'three', 'two', 'one', 'six'],
'c' : np.arange(7)})
dfb
dfb['c'][dfb.a.str.startswith('o')] = 42 # raises SettingWithCopyWarning, but still produces the correct result
pd.set_option('mode.chained_assignment','warn')
dfb[dfb.a.str.startswith('o')]['c'] = 42 # this actually assigns to a copy!
"""
Explanation: Order matters in chained indexing too
Moreover, in a chained expression, different orderings may produce different results. The order here refers to the order in which rows and columns are looked up.
End of explanation
"""
dfc = pd.DataFrame({'A':['aaa','bbb','ccc'],'B':[1,2,3]})
dfc
dfc.loc[0,'A'] = 11
dfc
"""
Explanation: The correct approach: just use .loc
End of explanation
"""
|
turbomanage/training-data-analyst | quests/rl/a2c/a2c_on_gcp.ipynb | apache-2.0 | %%bash
BUCKET=<your-bucket-here> # Change to your bucket name
JOB_NAME=pg_on_gcp_$(date -u +%y%m%d_%H%M%S)
REGION='us-central1' # Change to your bucket region
IMAGE_URI=gcr.io/cloud-training-prod-bucket/pg:latest
gcloud ai-platform jobs submit training $JOB_NAME \
--staging-bucket=gs://$BUCKET \
--region=$REGION \
--master-image-uri=$IMAGE_URI \
--scale-tier=BASIC_GPU \
--job-dir=gs://$BUCKET/$JOB_NAME \
--config=templates/hyperparam.yaml
"""
Explanation: Policy Gradients and A2C
In the <a href="../dqn/dqns_on_gcp.ipynb">previous notebook</a>, we learned how to use hyperparameter tuning to help DQN agents balance a pole on a cart. In this notebook, we'll explore two other types of alogrithms: Policy Gradients and A2C.
Setup
Hypertuning takes some time; in this case, it can take anywhere from 10 to 30 minutes. If you haven't already, run the cell below to kick off the training job now. We'll step through what the code is doing while our agents learn.
End of explanation
"""
import gym
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras import backend as K
CLIP_EDGE = 1e-8
def print_state(state, step, reward=None):
format_string = 'Step {0} - Cart X: {1:.3f}, Cart V: {2:.3f}, Pole A: {3:.3f}, Pole V:{4:.3f}, Reward:{5}'
print(format_string.format(step, *tuple(state), reward))
env = gym.make('CartPole-v0')
"""
Explanation: Thankfully, we can use the same environment for these algorithms as DQN, so this notebook will focus less on the operational work of feeding our agents the data, and more on the theory behind these algorthims. Let's start by loading our libraries and environment.
End of explanation
"""
test_gamma = .5 # Please change me to be between zero and one
episode_rewards = [-.9, 1.2, .5, -.6, 1, .6, .2, 0, .4, .5]
def discount_episode(rewards, gamma):
discounted_rewards = np.zeros_like(rewards)
total_rewards = 0
for t in reversed(range(len(rewards))):
total_rewards = rewards[t] + total_rewards * gamma
discounted_rewards[t] = total_rewards
return discounted_rewards
discount_episode(episode_rewards, test_gamma)
"""
Explanation: The Theory Behind Policy Gradients
Whereas Q-learning attempts to assign each state a value, Policy Gradients tries to learn actions directly, increasing or decreasing the chance to take an action depending on how an episode plays out.
To compare, Q-learning has a table that keeps track of the value of each combination of state and action:
|| Meal | Snack | Wait |
|-|-|-|-|
| Hangry | 1 | .5 | -1 |
| Hungry | .5 | 1 | 0 |
| Full | -1 | -.5 | 1.5 |
Instead for Policy Gradients, we can imagine that we have a similar table, but instead of recording the values, we'll keep track of the probability to take the column action given the row state.
|| Meal | Snack | Wait |
|-|-|-|-|
| Hangry | 70% | 20% | 10% |
| Hungry | 30% | 50% | 20% |
| Full | 5% | 15% | 80% |
With Q learning, whenever we take one step in our environment, we can update the value of the old state based on the value of the new state plus any rewards we picked up, according to the Q equation:
<img style="background-color:white;" src="https://wikimedia.org/api/rest_v1/media/math/render/svg/47fa1e5cf8cf75996a777c11c7b9445dc96d4637">
Could we do the same thing if we have a table of probabilities instead of values? No, because we don't have a way to calculate the value of each state from our table. Instead, we'll use a different <a href="http://incompleteideas.net/papers/sutton-88-with-erratum.pdf"> Temporal Difference Learning</a> strategy.
Q Learning is an evolution of TD(0), and for Policy Gradients, we'll use TD(1). We'll calculate TD(1) across an entire episode, and use that to indicate whether to increase or decrease the probability corresponding to the action we took. Let's look at a full day of eating.
| Hour | State | Action | Reward |
|-|-|-|-|
|9| Hangry | Wait | -.9 |
|10| Hangry | Meal | 1.2 |
|11| Full | Wait | .5 |
|12| Full | Snack | -.6 |
|13| Full | Wait | 1 |
|14| Full | Wait | .6 |
|15| Full | Wait | .2 |
|16| Hungry | Wait | 0 |
|17| Hungry | Meal | .4 |
|18| Full | Wait| .5 |
We'll work backwards from the last hour, using the same discount, or gamma, as we did with DQNs. The total_rewards variable is equivalent to the value of state prime. Using the Bellman Equation, every time we calculate the value of a state, s<sub>t</sub>, we'll set that as the value of state prime for the state before, s<sub>t-1</sub>.
End of explanation
"""
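We can sanity-check the arithmetic by unrolling the recursion for a short episode. Restating the discounting loop from above (a standalone sketch) for a three-step episode with gamma = 0.5:

```python
import numpy as np

def discount_episode(rewards, gamma):
    discounted = np.zeros_like(rewards, dtype=np.float64)
    total = 0.0
    for t in reversed(range(len(rewards))):
        total = rewards[t] + total * gamma
        discounted[t] = total
    return discounted

# Hand-computed TD(1) returns: G2 = 4, G1 = 2 + .5*4 = 4, G0 = 1 + .5*4 = 3
print(discount_episode([1.0, 2.0, 4.0], 0.5))  # [3. 4. 4.]
```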
def custom_loss(y_true, y_pred):
y_pred_clipped = K.clip(y_pred, CLIP_EDGE, 1-CLIP_EDGE)
log_likelihood = y_true * K.log(y_pred_clipped)
return K.sum(-log_likelihood*g)
"""
Explanation: Wherever our discounted reward is positive, we'll increase the probability corresponding to the action we took. Similarly, wherever our discounted reward is negative, we'll decrease the probability.
However, with this strategy, any action with a positive reward will have its probability increased, not necessarily the most optimal action. This puts us in a feedback loop, where we're more likely to pick less optimal actions, which could further increase their probability. To counter this, we'll divide the size of our increases by the probability of choosing the corresponding action, which slows the growth of popular actions and gives other actions a chance.
Here is our update rule for our neural network, where alpha is our learning rate, and pi is our optimal policy, or the probability to take the optimal action, a<sup>*</sup>, given our current state, s.
<img src="images/weight_update.png" width="200" height="100">
Doing some fancy calculus, we can combine the numerator and denominator with a log function. Since it's not clear what the optimal action is, we'll instead use our discounted rewards, or G, to increase or decrease the weights of the respective action the agent took. A full breakdown of the math can be found in this article by Chris Yoon.
<img src="images/weight_update_calculus.png" width="300" height="150">
Below is what it looks like in code. y_true is the one-hot encoding of the action that was taken. y_pred is the probability to take each action given the state the agent was in.
End of explanation
"""
def build_networks(
state_shape, action_size, learning_rate, hidden_neurons):
"""Creates a Policy Gradient Neural Network.
Creates a two hidden-layer Policy Gradient Neural Network. The loss
function is altered to be a log-likelihood function weighted
by the discounted reward, g.
Args:
space_shape: a tuple of ints representing the observation space.
action_size (int): the number of possible actions.
learning_rate (float): the nueral network's learning rate.
hidden_neurons (int): the number of neurons to use per hidden
layer.
"""
state_input = layers.Input(state_shape, name='frames')
g = layers.Input((1,), name='g')
hidden_1 = layers.Dense(hidden_neurons, activation='relu')(state_input)
hidden_2 = layers.Dense(hidden_neurons, activation='relu')(hidden_1)
probabilities = layers.Dense(action_size, activation='softmax')(hidden_2)
def custom_loss(y_true, y_pred):
y_pred_clipped = K.clip(y_pred, CLIP_EDGE, 1-CLIP_EDGE)
log_lik = y_true*K.log(y_pred_clipped)
return K.sum(-log_lik*g)
policy = models.Model(
inputs=[state_input, g], outputs=[probabilities])
optimizer = tf.keras.optimizers.Adam(lr=learning_rate)
policy.compile(loss=custom_loss, optimizer=optimizer)
predict = models.Model(inputs=[state_input], outputs=[probabilities])
return policy, predict
"""
Explanation: We won't have the discounted rewards, or g, when our agent is acting in the environment. No problem, we'll have one neural network with two types of pathways. One pathway, predict, will be the probability to take an action given an inputed state. It's only used for prediction and is not used for backpropogation. The other pathway, policy, will take both a state and a discounted reward, so it can be used for training.
The code in its entirety looks like this. As with Deep Q Networks, the hidden layers of a Policy Gradient can use a CNN if the input state is pixels, but the last layer is typically a Dense layer with a Softmax activation function to convert the output into probabilities.
End of explanation
"""
space_shape = env.observation_space.shape
action_size = env.action_space.n
# Feel free to play with these
test_learning_rate = .2
test_hidden_neurons = 10
test_policy, test_predict = build_networks(
space_shape, action_size, test_learning_rate, test_hidden_neurons)
"""
Explanation: Let's get a taste of how these networks function. Run the below cell to build our test networks.
End of explanation
"""
state = env.reset()
test_predict.predict(np.expand_dims(state, axis=0))
"""
Explanation: We can't use the policy network until we build our learning function, but we can feed a state to the predict network so we can see our chances to pick our actions.
End of explanation
"""
class Memory():
"""Sets up a memory replay buffer for Policy Gradient methods.
Args:
gamma (float): The "discount rate" used to assess TD(1) values.
"""
def __init__(self, gamma):
self.buffer = []
self.gamma = gamma
def add(self, experience):
"""Adds an experience into the memory buffer.
Args:
experience: a (state, action, reward) tuple.
"""
self.buffer.append(experience)
def sample(self):
"""Returns the list of episode experiences and clears the buffer.
Returns:
            (list): A tuple of lists with structure [
                [states], [actions], [rewards]
            ]
"""
batch = np.array(self.buffer).T.tolist()
states_mb = np.array(batch[0], dtype=np.float32)
actions_mb = np.array(batch[1], dtype=np.int8)
rewards_mb = np.array(batch[2], dtype=np.float32)
self.buffer = []
return states_mb, actions_mb, rewards_mb
"""
Explanation: Right now, the numbers should be close to [.5, .5], with a little bit of variance due to the randomization of initializing the weights and the cart's starting position. In order to train, we'll need some memories to train on. The memory buffer here is simpler than DQN, as we don't have to worry about random sampling. We'll clear the buffer every time we train as we'll only hold one episode's worth of memory.
End of explanation
"""
test_memory = Memory(test_gamma)
actions = [x % 2 for x in range(200)]
state = env.reset()
step = 0
episode_reward = 0
done = False
while not done and step < len(actions):
action = actions[step] # In the future, our agents will define this.
state_prime, reward, done, info = env.step(action)
episode_reward += reward
test_memory.add((state, action, reward))
step += 1
state = state_prime
test_memory.sample()
"""
Explanation: Let's make a fake buffer to get a sense of the data we'll be training on. The cell below initializes our memory and runs through one episode of the game by alternating pushing the cart left and right.
Try running it to see the data we'll be using for training.
End of explanation
"""
class Partial_Agent():
"""Sets up a reinforcement learning agent to play in a game environment."""
def __init__(self, policy, predict, memory, action_size):
"""Initializes the agent with Policy Gradient networks
and memory sub-classes.
Args:
policy: The policy network created from build_networks().
predict: The predict network created from build_networks().
memory: A Memory class object.
action_size (int): The number of possible actions to take.
"""
self.policy = policy
self.predict = predict
self.action_size = action_size
self.memory = memory
def act(self, state):
"""Selects an action for the agent to take given a game state.
Args:
state (list of numbers): The state of the environment to act on.
Returns:
(int) The index of the action to take.
"""
        # Sample an action according to the predicted probabilities.
state_batch = np.expand_dims(state, axis=0)
probabilities = self.predict.predict(state_batch)[0]
action = np.random.choice(self.action_size, p=probabilities)
return action
"""
Explanation: Ok, time to start putting together the agent! Let's start by giving it the ability to act. Here, we don't need to worry about exploration vs exploitation because we already have a random chance to take each of our actions. As the agent learns, it will naturally shift from exploration to exploitation. How convenient!
End of explanation
"""
test_agent = Partial_Agent(test_policy, test_predict, test_memory, action_size)
"""
Explanation: Let's see the act function in action. First, let's build our agent.
End of explanation
"""
action = test_agent.act(state)
print("Push Right" if action else "Push Left")
"""
Explanation: Next, run the below cell a few times to test the act method. Is it about a 50/50 chance to push right instead of left?
End of explanation
"""
def learn(self, print_variables=False):
"""Trains a Policy Gradient policy network based on stored experiences."""
state_mb, action_mb, reward_mb = self.memory.sample()
    # One-hot encode actions
actions = np.zeros([len(action_mb), self.action_size])
actions[np.arange(len(action_mb)), action_mb] = 1
if print_variables:
print("action_mb:", action_mb)
print("actions:", actions)
# Apply TD(1) and normalize
discount_mb = discount_episode(reward_mb, self.memory.gamma)
discount_mb = (discount_mb - np.mean(discount_mb)) / np.std(discount_mb)
if print_variables:
print("reward_mb:", reward_mb)
print("discount_mb:", discount_mb)
return self.policy.train_on_batch([state_mb, discount_mb], actions)
Partial_Agent.learn = learn
test_agent = Partial_Agent(test_policy, test_predict, test_memory, action_size)
"""
Explanation: Now for the most important part. We need to give our agent a way to learn! To start, we'll one-hot encode our actions. Since the output of our network is a probability for each action, we'll have a 1 corresponding to the action that was taken and 0's for the actions we didn't take.
That doesn't give our agent enough information on whether the action that was taken was actually a good idea, so we'll also use our discount_episode to calculate the TD(1) value of each step within the episode.
One thing to note is that CartPole doesn't have any negative rewards, meaning even if the agent does terribly, it will still think the run went well. To help counter this, we'll take the mean and standard deviation of our discounted rewards, or discount_mb, and use them to find the Standard Score for each discounted reward. With this, steps close to dropping the pole will have a negative reward.
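For reference, here is a minimal sketch of what the discount_episode helper referenced in learn() might look like. The real helper is defined earlier in the notebook, so treat this version as an assumption; it simply walks the rewards backwards, accumulating TD(1) returns:

```python
import numpy as np

def discount_episode_sketch(rewards, gamma):
    """Accumulate TD(1) discounted returns for one episode of rewards."""
    discounted = np.zeros(len(rewards), dtype=np.float32)
    total = 0.0
    for step in reversed(range(len(rewards))):
        total = rewards[step] + gamma * total  # reward now + discounted future
        discounted[step] = total
    return discounted

discount_episode_sketch([1.0, 1.0, 1.0], 0.5)  # -> [1.75, 1.5, 1.0]
```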
End of explanation
"""
state = env.reset()
done = False
while not done:
action = test_agent.act(state)
state_prime, reward, done, _ = env.step(action)
test_agent.memory.add((state, action, reward)) # New line here
state = state_prime
test_agent.learn(print_variables=True)
"""
Explanation: Try adding in some print statements to the code above to get a sense of how the data is transformed before feeding it into the model, then run the below code to see it in action.
End of explanation
"""
test_gamma = .5
test_learning_rate = .01
test_hidden_neurons = 100
with tf.Graph().as_default():
test_memory = Memory(test_gamma)
test_policy, test_predict = build_networks(
space_shape, action_size, test_learning_rate, test_hidden_neurons)
test_agent = Partial_Agent(test_policy, test_predict, test_memory, action_size)
for episode in range(200):
state = env.reset()
episode_reward = 0
done = False
while not done:
action = test_agent.act(state)
state_prime, reward, done, info = env.step(action)
episode_reward += reward
test_agent.memory.add((state, action, reward))
state = state_prime
test_agent.learn()
print("Episode", episode, "Score =", episode_reward)
"""
Explanation: Finally, it's time to put it all together. Policy Gradient Networks have fewer hyperparameters to tune than DQNs, but since our custom loss constructs a TensorFlow Graph under the hood, we'll set up lazy execution by wrapping our training steps in a default graph.
By changing test_gamma, test_learning_rate, and test_hidden_neurons, can you help the agent reach a score of 200 within 200 episodes? It takes a little bit of thinking and a little bit of luck.
Hover the cursor <b title="gamma=.9, learning rate=0.002, neurons=50">on this bold text</b> to see a solution to the challenge.
End of explanation
"""
def build_networks(state_shape, action_size, learning_rate, critic_weight,
hidden_neurons, entropy):
"""Creates Actor Critic Neural Networks.
Creates a two hidden-layer Policy Gradient Neural Network. The loss
function is altered to be a log-likelihood function weighted
by an action's advantage.
Args:
space_shape: a tuple of ints representing the observation space.
action_size (int): the number of possible actions.
        learning_rate (float): the neural network's learning rate.
critic_weight (float): how much to weigh the critic's training loss.
hidden_neurons (int): the number of neurons to use per hidden layer.
        entropy (float): how much to encourage exploration versus exploitation.
"""
state_input = layers.Input(state_shape, name='frames')
advantages = layers.Input((1,), name='advantages') # PG, A instead of G
# PG
actor_1 = layers.Dense(hidden_neurons, activation='relu')(state_input)
actor_2 = layers.Dense(hidden_neurons, activation='relu')(actor_1)
probabilities = layers.Dense(action_size, activation='softmax')(actor_2)
# DQN
critic_1 = layers.Dense(hidden_neurons, activation='relu')(state_input)
critic_2 = layers.Dense(hidden_neurons, activation='relu')(critic_1)
values = layers.Dense(1, activation='linear')(critic_2)
def actor_loss(y_true, y_pred): # PG
y_pred_clipped = K.clip(y_pred, CLIP_EDGE, 1-CLIP_EDGE)
log_lik = y_true*K.log(y_pred_clipped)
entropy_loss = y_pred * K.log(K.clip(y_pred, CLIP_EDGE, 1-CLIP_EDGE)) # New
        return K.sum(-log_lik * advantages) + (entropy * K.sum(entropy_loss))  # entropy_loss is negative entropy; adding it rewards exploration
# Train both actor and critic at the same time.
actor = models.Model(
inputs=[state_input, advantages], outputs=[probabilities, values])
actor.compile(
loss=[actor_loss, 'mean_squared_error'], # [PG, DQN]
loss_weights=[1, critic_weight], # [PG, DQN]
optimizer=tf.keras.optimizers.Adam(lr=learning_rate))
critic = models.Model(inputs=[state_input], outputs=[values])
policy = models.Model(inputs=[state_input], outputs=[probabilities])
return actor, critic, policy
"""
Explanation: The Theory Behind Actor - Critic
Now that we have the hang of Policy Gradients, let's combine this strategy with Deep Q Agents. We'll have one architecture to rule them all!
Below is the setup for our neural networks. There are plenty of ways to go about combining the two strategies. We'll be focusing on one variant called A2C, or Advantage Actor Critic.
<img src="images/a2c_equation.png" width="300" height="150">
Here's the philosophy: We'll use our critic pathway to estimate the value of a state, or V(s). Given a state-action-new state transition, we can use our critic and the Bellman Equation to calculate the discounted value of the new state, or r + γ * V(s').
Like DQNs, this discounted value is the label the critic will train on. While that is happening, we can subtract V(s) from the discounted value of the new state to get the advantage, or A(s,a). In human terms: how much extra value did the agent's action provide? This is what the actor, or the policy gradient portion of our network, will train on.
Too long, didn't read: the critic's job is to learn how to assess the value of a state. The actor's job is to assign probabilities to its available actions such that it increases its chance to move into a higher valued state.
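With made-up numbers for a single transition, the two training signals look like this:

```python
# All values below are assumed, purely to illustrate the math above.
gamma = 0.9
reward = 1.0
v_state = 2.0        # critic's estimate of V(s)
v_state_prime = 3.0  # critic's estimate of V(s')

critic_target = reward + gamma * v_state_prime  # r + gamma * V(s') = 3.7
advantage = critic_target - v_state             # A(s, a) = 1.7
```

The critic trains toward critic_target, and the actor weights its log-likelihood loss by advantage.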
Below is our new build_networks function. Each line has been tagged with whether it comes from Deep Q Networks (# DQN), Policy Gradients (# PG), or is something new (# New).
End of explanation
"""
def actor_loss(y_true, y_pred): # PG
y_pred_clipped = K.clip(y_pred, 1e-8, 1-1e-8)
log_lik = y_true*K.log(y_pred_clipped)
entropy_loss = y_pred * K.log(K.clip(y_pred, 1e-8, 1-1e-8)) # New
    return K.sum(-log_lik * advantages) + (entropy * K.sum(entropy_loss))  # entropy_loss is negative entropy; adding it rewards exploration
"""
Explanation: The above is one way to go about combining both of the algorithms. Here, we're combining the training of both pathways into one operation. Keras allows for training against multiple outputs, and each output can even have its own loss function, as we have above. When minimizing the loss, Keras will take the weighted sum of all the losses, with the weights provided in loss_weights. The critic_weight is now another hyperparameter for us to tune.
We could even have completely separate networks for the actor and the critic, and that type of design choice is going to be problem dependent. Having shared nodes and training between the two will be more efficient to train per batch, but more complicated problems could justify keeping the two separate.
The loss function we used here is also slightly different than the one for Policy Gradients. Let's take a look.
End of explanation
"""
class Memory():
"""Sets up a memory replay for actor-critic training.
Args:
gamma (float): The "discount rate" used to assess state values.
batch_size (int): The number of elements to include in the buffer.
"""
def __init__(self, gamma, batch_size):
self.buffer = []
self.gamma = gamma
self.batch_size = batch_size
def add(self, experience):
"""Adds an experience into the memory buffer.
Args:
            experience: (state, action, reward, done, state_prime_value) tuple.
"""
self.buffer.append(experience)
def check_full(self):
return len(self.buffer) >= self.batch_size
def sample(self):
"""Returns formated experiences and clears the buffer.
Returns:
(list): A tuple of lists with structure [
[states], [actions], [rewards], [state_prime_values], [dones]
]
"""
# Columns have different data types, so numpy array would be awkward.
batch = np.array(self.buffer).T.tolist()
states_mb = np.array(batch[0], dtype=np.float32)
actions_mb = np.array(batch[1], dtype=np.int8)
rewards_mb = np.array(batch[2], dtype=np.float32)
dones_mb = np.array(batch[3], dtype=np.int8)
value_mb = np.squeeze(np.array(batch[4], dtype=np.float32))
self.buffer = []
return states_mb, actions_mb, rewards_mb, dones_mb, value_mb
"""
Explanation: We've added a new tool called entropy. We're calculating the log-likelihood again, but instead of comparing the probabilities of our actions against the action that was taken, we calculate it for the probabilities of our actions against themselves.
Certainly a mouthful, but the idea is to encourage exploration: if our probability prediction is very confident (or close to 1), our entropy will be close to 0. Similarly, if our probability isn't confident at all (or close to 0), our entropy will again be zero. Anywhere in between, our entropy will be non-zero. This encourages exploration versus exploitation, as the entropy will discourage overconfident predictions.
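A tiny sketch (with assumed probability vectors) makes the effect concrete:

```python
import numpy as np

def entropy_of(probs, eps=1e-8):
    """Entropy of an action-probability vector, clipped like the loss above."""
    p = np.clip(probs, eps, 1 - eps)
    return -np.sum(p * np.log(p))

entropy_of(np.array([0.99, 0.01]))  # confident -> approx. 0.056
entropy_of(np.array([0.50, 0.50]))  # uncertain -> approx. 0.693
```

Peaked distributions have entropy near zero while uniform ones maximize it; the entropy coefficient in the loss controls how strongly this term influences training.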
Now that the networks are out of the way, let's look at the Memory. We could go with Experience Replay, like with DQNs, or we could calculate TD(1) like with Policy Gradients. This time, we'll do something in between. We'll give our memory a batch_size. Once there are enough experiences in the buffer, we'll use all the experiences to train and then clear the buffer to start fresh.
In order to speed up training, instead of recording state_prime, we'll record the value of state prime in state_prime_values or next_values. This will give us enough information to calculate the discounted values and advantages.
End of explanation
"""
class Agent():
"""Sets up a reinforcement learning agent to play in a game environment."""
def __init__(self, actor, critic, policy, memory, action_size):
"""Initializes the agent with DQN and memory sub-classes.
Args:
network: A neural network created from deep_q_network().
memory: A Memory class object.
epsilon_decay (float): The rate at which to decay random actions.
action_size (int): The number of possible actions to take.
"""
self.actor = actor
self.critic = critic
self.policy = policy
self.action_size = action_size
self.memory = memory
def act(self, state):
"""Selects an action for the agent to take given a game state.
Args:
state (list of numbers): The state of the environment to act on.
Returns:
(int) The index of the action to take.
"""
        # Sample an action according to the predicted probabilities.
state_batch = np.expand_dims(state, axis=0)
probabilities = self.policy.predict(state_batch)[0]
action = np.random.choice(self.action_size, p=probabilities)
return action
def learn(self, print_variables=False):
"""Trains the Deep Q Network based on stored experiences."""
gamma = self.memory.gamma
experiences = self.memory.sample()
state_mb, action_mb, reward_mb, dones_mb, next_value = experiences
        # One-hot encode actions
actions = np.zeros([len(action_mb), self.action_size])
actions[np.arange(len(action_mb)), action_mb] = 1
#Apply TD(0)
discount_mb = reward_mb + next_value * gamma * (1 - dones_mb)
state_values = self.critic.predict([state_mb])
advantages = discount_mb - np.squeeze(state_values)
if print_variables:
print("discount_mb", discount_mb)
print("next_value", next_value)
print("state_values", state_values)
print("advantages", advantages)
else:
self.actor.train_on_batch(
[state_mb, advantages], [actions, discount_mb])
"""
Explanation: Ok, time to build out the agent! The act method is the exact same as it was for Policy Gradients. Nice! The learn method is where things get interesting. We'll compute the discounted value of the future state, like we did for DQNs, to train our critic. We'll then subtract the value of the current state from that discounted value to find the advantage, which is what the actor will train on.
End of explanation
"""
# Change me please.
test_gamma = .9
test_batch_size = 32
test_learning_rate = .02
test_hidden_neurons = 50
test_critic_weight = 0.5
test_entropy = 0.0001
test_memory = Memory(test_gamma, test_batch_size)
test_actor, test_critic, test_policy = build_networks(
space_shape, action_size,
test_learning_rate, test_critic_weight,
test_hidden_neurons, test_entropy)
test_agent = Agent(
test_actor, test_critic, test_policy, test_memory, action_size)
state = env.reset()
episode_reward = 0
done = False
while not done:
action = test_agent.act(state)
state_prime, reward, done, _ = env.step(action)
episode_reward += reward
next_value = test_agent.critic.predict([[state_prime]])
test_agent.memory.add((state, action, reward, done, next_value))
state = state_prime
test_agent.learn(print_variables=True)
"""
Explanation: Run the below cell to initialize an agent, and the cell after that to see the variables used for training. Since it's early, the critic hasn't learned to estimate the values yet, and the advantages are mostly positive because of it.
Once the critic has learned how to properly assess states, the actor will start to see negative advantages. Try playing around with the variables to help the agent see this change sooner.
End of explanation
"""
with tf.Graph().as_default():
test_memory = Memory(test_gamma, test_batch_size)
test_actor, test_critic, test_policy = build_networks(
space_shape, action_size,
test_learning_rate, test_critic_weight,
test_hidden_neurons, test_entropy)
test_agent = Agent(
test_actor, test_critic, test_policy, test_memory, action_size)
for episode in range(200):
state = env.reset()
episode_reward = 0
done = False
while not done:
action = test_agent.act(state)
state_prime, reward, done, _ = env.step(action)
episode_reward += reward
next_value = test_agent.critic.predict([[state_prime]])
test_agent.memory.add((state, action, reward, done, next_value))
#if test_agent.memory.check_full():
#test_agent.learn(print_variables=True)
state = state_prime
test_agent.learn()
print("Episode", episode, "Score =", episode_reward)
"""
Explanation: Have a set of variables you're happy with? Ok, time to shine! Run the below cell to see how the agent trains.
End of explanation
"""
|
TUW-GEO/rt1 | doc/examples/example01.ipynb | apache-2.0 | # imports
from rt1.rt1 import RT1
from rt1.volume import Rayleigh
from rt1.surface import CosineLobe
import numpy as np
import pandas as pd
# definition of volume and surface
V = Rayleigh(tau=0.7, omega=0.3)
SRF = CosineLobe(ncoefs=10, i=5, NormBRDF=np.pi)
"""
Explanation: Example 01
This example reproduces the results from Example 1 of Quast & Wagner (2016)
The definition of the volume phase function and surface BRDF is as follows:
Volume definition
optical depth ($\tau = 0.7$)
single scattering albedo ($\omega = 0.3$)
Volume phase function: Rayleigh phase function
Surface definition
Cosine Lobe function with 10 coefficients
End of explanation
"""
# Specify imaging geometry
inc = np.arange(1.,89.,1.) # specify incidence angle range [deg]
t_0 = np.deg2rad(inc) # [rad]
# scattering angle; here the same as incidence angle, as backscatter
t_ex = t_0*1.
# azimuth geometry angles
p_0 = np.ones_like(inc)*0.
p_ex = np.ones_like(inc)*0. + np.pi # 180 degree shift as backscatter
"""
Explanation: Imaging geometry (backscattering case)
End of explanation
"""
# do actual calculations with the specified geometries
I0=1. # set incident intensity
R = RT1(I0, t_0, t_ex, p_0, p_ex, V=V, SRF=SRF, geometry='mono', verbosity=1)
res = pd.DataFrame(dict(zip(('tot','surf','vol','inter'), R.calc())), index=inc)
"""
Explanation: Perform the simulations
To perform the simulations, the RT model needs to evaluate a set of expansion coefficients. As these are the same for all imaging geometries, it makes sense to estimate them once and then pass them to subsequent calls using the optional fn_input and _fnevals_input parameters.
End of explanation
"""
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(25, 7))
ax1 = fig.add_subplot(111, projection='polar')
ax2 = fig.add_subplot(121, projection='polar')
# plot BRDF and phase function
plot1 = SRF.polarplot(inc = list(np.linspace(0,85,5)), multip = 1.5, legpos = (0.,0.5), polarax=ax2,
label='Surface scattering phase function', legend=True)
plot2 = V.polarplot(inc = list(np.linspace(0,120,5)) ,multip = 1.5, legpos = (0.0,0.5), polarax=ax1,
label='Volume scattering phase function', legend=True)
fig.tight_layout()
# plot only BRDF
#V.polarplot()
#plot only p
#SRF.polarplot()
# plot backscatter as function of incidence angle
f, [ax, ax2] = plt.subplots(1,2, figsize=(12,5))
ax.set_title('Backscattered Intensity'+'\n$\\omega$ = ' + str(R.V.omega) + '$ \quad \\tau$ = ' + str(R.V.tau))
ax2.set_title('Fractional contributions to the signal')
resdb = 10.*np.log10(res[res!=0])
resdb.plot(ax=ax)
ax.legend()
ax2.plot(res.div(res.tot, axis=0))
_ = ax.set_ylim(-25,-3)
ax.set_ylim()
f.tight_layout()
"""
Explanation: Plot results
Plot both the phase function and the BRDF. For more examples, see examples.py.
End of explanation
"""
|
statsmodels/statsmodels.github.io | v0.12.1/examples/notebooks/generated/quantile_regression.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt
data = sm.datasets.engel.load_pandas().data
data.head()
"""
Explanation: Quantile regression
This example page shows how to use statsmodels' QuantReg class to replicate parts of the analysis published in
Koenker, Roger and Kevin F. Hallock. "Quantile Regression". Journal of Economic Perspectives, Volume 15, Number 4, Fall 2001, Pages 143–156
We are interested in the relationship between income and expenditures on food for a sample of working class Belgian households in 1857 (the Engel data).
Setup
We first need to load some modules and to retrieve the data. Conveniently, the Engel dataset is shipped with statsmodels.
End of explanation
"""
mod = smf.quantreg('foodexp ~ income', data)
res = mod.fit(q=.5)
print(res.summary())
"""
Explanation: Least Absolute Deviation
The LAD model is a special case of quantile regression where q=0.5
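As a quick sanity check (with made-up numbers), minimizing the sum of absolute deviations recovers the median, which is exactly what the q=0.5 fit does for the conditional distribution:

```python
import numpy as np

y = np.array([1.0, 2.0, 10.0])            # toy data
candidates = np.linspace(0.0, 12.0, 1201)
losses = [np.abs(y - c).sum() for c in candidates]
best = candidates[np.argmin(losses)]      # approx. np.median(y) == 2.0
```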
End of explanation
"""
quantiles = np.arange(.05, .96, .1)
def fit_model(q):
res = mod.fit(q=q)
return [q, res.params['Intercept'], res.params['income']] + \
res.conf_int().loc['income'].tolist()
models = [fit_model(x) for x in quantiles]
models = pd.DataFrame(models, columns=['q', 'a', 'b', 'lb', 'ub'])
ols = smf.ols('foodexp ~ income', data).fit()
ols_ci = ols.conf_int().loc['income'].tolist()
ols = dict(a = ols.params['Intercept'],
b = ols.params['income'],
lb = ols_ci[0],
ub = ols_ci[1])
print(models)
print(ols)
"""
Explanation: Visualizing the results
We estimate the quantile regression model for many quantiles between .05 and .95, and compare best fit line from each of these models to Ordinary Least Squares results.
Prepare data for plotting
For convenience, we place the quantile regression results in a Pandas DataFrame, and the OLS results in a dictionary.
End of explanation
"""
x = np.arange(data.income.min(), data.income.max(), 50)
get_y = lambda a, b: a + b * x
fig, ax = plt.subplots(figsize=(8, 6))
for i in range(models.shape[0]):
y = get_y(models.a[i], models.b[i])
ax.plot(x, y, linestyle='dotted', color='grey')
y = get_y(ols['a'], ols['b'])
ax.plot(x, y, color='red', label='OLS')
ax.scatter(data.income, data.foodexp, alpha=.2)
ax.set_xlim((240, 3000))
ax.set_ylim((240, 2000))
legend = ax.legend()
ax.set_xlabel('Income', fontsize=16)
ax.set_ylabel('Food expenditure', fontsize=16);
"""
Explanation: First plot
This plot compares best fit lines for 10 quantile regression models to the least squares fit. As Koenker and Hallock (2001) point out, we see that:
Food expenditure increases with income
The dispersion of food expenditure increases with income
The least squares estimates fit low income observations quite poorly (i.e. the OLS line passes over most low income households)
End of explanation
"""
n = models.shape[0]
p1 = plt.plot(models.q, models.b, color='black', label='Quantile Reg.')
p2 = plt.plot(models.q, models.ub, linestyle='dotted', color='black')
p3 = plt.plot(models.q, models.lb, linestyle='dotted', color='black')
p4 = plt.plot(models.q, [ols['b']] * n, color='red', label='OLS')
p5 = plt.plot(models.q, [ols['lb']] * n, linestyle='dotted', color='red')
p6 = plt.plot(models.q, [ols['ub']] * n, linestyle='dotted', color='red')
plt.ylabel(r'$\beta_{income}$')
plt.xlabel('Quantiles of the conditional food expenditure distribution')
plt.legend()
plt.show()
"""
Explanation: Second plot
The dotted black lines form a 95% point-wise confidence band around the 10 quantile regression estimates (solid black line). The red lines represent the OLS regression results along with their 95% confidence interval.
In most cases, the quantile regression point estimates lie outside the OLS confidence interval, which suggests that the effect of income on food expenditure may not be constant across the distribution.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/launching_into_ml/labs/improve_data_quality.ipynb | apache-2.0 | !sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
"""
Explanation: Improving Data Quality
Learning Objectives
Resolve missing values
Convert the Date feature column to a datetime format
Rename a feature column, remove a value from a feature column
Create one-hot encoding features
Understand temporal feature conversions
Introduction
Recall that machine learning models can only consume numeric data, so categorical values must be encoded numerically (for example, as one-hot "1"s and "0"s). Data is said to be "messy" or "untidy" if it is missing attribute values, contains noise or outliers, has duplicates, wrong data, upper/lower case column names, and is essentially not ready for ingestion by a machine learning algorithm.
This notebook presents and solves some of the most common issues of "untidy" data. Note that different problems will require different methods, and they are beyond the scope of this notebook.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
End of explanation
"""
# Importing necessary tensorflow library and printing the TF version.
import tensorflow as tf
print("TensorFlow version: ",tf.version.VERSION)
import os
import pandas as pd # First, we'll import Pandas, a data processing and CSV file I/O library
import numpy as np
from datetime import datetime
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
"""
Explanation: Start by importing the necessary libraries for this lab.
Import Libraries
End of explanation
"""
if not os.path.isdir("../data/transport"):
os.makedirs("../data/transport")
!gsutil cp gs://cloud-training/mlongcp/v3.0_MLonGC/toy_data/untidy_vehicle_data_toy.csv ../data/transport
!ls -l ../data/transport
"""
Explanation: Load the Dataset
The dataset is based on California's Vehicle Fuel Type Count by Zip Code report. The dataset has been modified to make the data "untidy" and is thus a synthetic representation that can be used for learning purposes.
Let's download the raw .csv data by copying the data from a cloud storage bucket.
End of explanation
"""
df_transport = pd.read_csv('../data/transport/untidy_vehicle_data_toy.csv')
df_transport.head() # Output the first five rows.
"""
Explanation: Read Dataset into a Pandas DataFrame
Next, let's read in the dataset just copied from the cloud storage bucket and create a Pandas DataFrame. We also add a Pandas .head() function to show you the top 5 rows of data in the DataFrame. Head() and Tail() are "best-practice" functions used to investigate datasets.
End of explanation
"""
df_transport.info()
"""
Explanation: DataFrame Column Data Types
DataFrames may have heterogeneous or "mixed" data types, that is, some columns are numbers, some are strings, and some are dates, etc. Because CSV files do not contain information on what data types are contained in each column, Pandas infers the data types when loading the data, e.g. if a column contains only numbers, Pandas will set that column’s data type to numeric: integer or float.
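For instance, with a small made-up CSV, Pandas infers int64 for an all-number column and object for a string column:

```python
import pandas as pd
from io import StringIO

toy_csv = StringIO("zip,make\n90210,Toyota\n94105,Honda\n")
toy_df = pd.read_csv(toy_csv)
toy_df.dtypes  # zip -> int64, make -> object
```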
Run the next cell to see information on the DataFrame.
End of explanation
"""
print(df_transport)
"""
Explanation: From what the .info() function shows us, we have six string objects and one float object. Let's print out the first and last five rows of each column. We can definitely see more of the "string" object values now!
End of explanation
"""
df_transport.describe()
"""
Explanation: Summary Statistics
At this point, we have only one column which contains a numerical value (e.g. Vehicles). For features which contain numerical values, we are often interested in various statistical measures relating to those values. We can use .describe() to see some summary statistics for the numeric fields in our dataframe. Note that because we only have one numeric feature, we see summary statistics for only one column - for now.
End of explanation
"""
df_transport.groupby('Fuel').first() # Get the first entry for each fuel type.
"""
Explanation: Let's investigate a bit more of our data by using the .groupby() function.
End of explanation
"""
df_transport.isnull().sum()
"""
Explanation: Checking for Missing Values
Missing values adversely impact data quality, as they can lead the machine learning model to make inaccurate inferences about the data. Missing values can be the result of numerous factors, e.g. "bits" lost during streaming transmission, data entry, or perhaps a user forgot to fill in a field. Note that Pandas recognizes both empty cells and “NaN” types as missing values.
Let's show the null values for all features in the DataFrame.
End of explanation
"""
print (df_transport['Date'])
print (df_transport['Date'].isnull())
print (df_transport['Make'])
print (df_transport['Make'].isnull())
print (df_transport['Model Year'])
print (df_transport['Model Year'].isnull())
"""
Explanation: To see a sampling of which values are missing, enter the feature column name. You'll notice that "False" and "True" correspond to the presence or absence of a value by index number.
End of explanation
"""
print ("Rows : " ,df_transport.shape[0])
print ("Columns : " ,df_transport.shape[1])
print ("\nFeatures : \n" ,df_transport.columns.tolist())
print ("\nUnique values : \n",df_transport.nunique())
print ("\nMissing values : ", df_transport.isnull().sum().values.sum())
"""
Explanation: What can we deduce about the data at this point?
First, let's summarize our data by rows, columns, features, unique values, and missing values.
End of explanation
"""
df_transport.tail()
"""
Explanation: Let's see the data again -- this time the last five rows in the dataset.
End of explanation
"""
# TODO 1a
# TODO -- Your code here.
"""
Explanation: What Are Our Data Quality Issues?
Data Quality Issue #1:
Missing Values:
Each feature column has multiple missing values. In fact, we have a total of 18 missing values.
Data Quality Issue #2:
Date DataType: Date is shown as an "object" datatype and should be a datetime. In addition, Date is in one column. Our business requirement is to see the Date parsed out to year, month, and day.
Data Quality Issue #3:
Model Year: We are only interested in years greater than 2006, not "<2006".
Data Quality Issue #4:
Categorical Columns: The feature column "Light_Duty" is categorical and has a "Yes/No" choice. We cannot feed values like this into a machine learning model. In addition, we need to "one-hot encode the remaining "string"/"object" columns.
Data Quality Issue #5:
Temporal Features: How do we handle year, month, and day?
Data Quality Issue #1:
Resolving Missing Values
Most algorithms do not accept missing values, yet when we encounter them there is always a tendency to just "drop all the rows" that contain them. Although Pandas will fill in the blank space with "NaN", we should handle missing values in some deliberate way.
While a full treatment of methods for handling missing values is beyond the scope of this lab, there are a few you should consider. For numeric columns, use the "mean" value to fill in missing numeric values. For categorical columns, use the "mode" (the most frequent value) to fill in missing categorical values.
In this lab, we use the .apply() method with lambda functions to fill every column with its own most frequent value. You'll learn more about lambda functions later in the lab.
Let's check again for missing values by showing how many rows contain NaN values for each feature column.
Lab Task #1a: Check for missing values by showing how many rows contain NaN values for each feature column.
End of explanation
"""
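Before tackling the tasks, here is one plausible shape for the mode-based fill described above, shown on a tiny stand-in frame (the lab's `df_transport` columns are only mimicked here, not assumed):

```python
import pandas as pd

# Tiny stand-in frame; the lab's df_transport has the same kind of gaps.
df = pd.DataFrame({
    "Fuel": ["Gasoline", None, "Gasoline", "Electric"],
    "Vehicles": [10.0, 12.0, None, 10.0],
})

# Fill every column with its own most frequent value (the mode).
df = df.apply(lambda col: col.fillna(col.value_counts().index[0]))

print(df.isnull().sum().sum())  # 0
```

Treat the column names above as placeholders; the same pattern applies to the lab's own frame.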
# TODO 1b
# TODO -- Your code here.
"""
Explanation: Lab Task #1b: Apply the lambda function.
End of explanation
"""
# TODO 1c
# TODO -- Your code here.
"""
Explanation: Lab Task #1c: Check again for missing values.
End of explanation
"""
# TODO 2a
# TODO -- Your code here.
"""
Explanation: Data Quality Issue #2:
Convert the Date Feature Column to a Datetime Format
The date column is indeed shown as a string object.
Lab Task #2a: Convert the datetime datatype with the to_datetime() function in Pandas.
End of explanation
"""
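A minimal sketch of the conversion described above (the column name `Date` matches the lab; the sample strings and their MM/DD/YYYY format are assumptions):

```python
import pandas as pd

df = pd.DataFrame({"Date": ["05/01/2019", "05/02/2019"]})  # illustrative values
df["Date"] = pd.to_datetime(df["Date"])  # object -> datetime64[ns]

print(df["Date"].dtype)  # datetime64[ns]
```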
# TODO 2b
# TODO -- Your code here.
"""
Explanation: Lab Task #2b: Show the converted Date.
End of explanation
"""
df_transport['year'] = df_transport['Date'].dt.year
df_transport['month'] = df_transport['Date'].dt.month
df_transport['day'] = df_transport['Date'].dt.day
#df['hour'] = df['date'].dt.hour - you could use this if your date format included hour.
#df['minute'] = df['date'].dt.minute - you could use this if your date format included minute.
df_transport.info()
"""
Explanation: Let's parse Date into three columns, e.g. year, month, and day.
End of explanation
"""
# Here, we are creating a new dataframe called "grouped_data" and grouping by on the column "Make"
grouped_data = df_transport.groupby(['Make'])
# Get the first entry for each month.
df_transport.groupby('month').first()
"""
Explanation: Next, let's confirm the Date parsing. This will also give us a another visualization of the data.
End of explanation
"""
plt.figure(figsize=(10,6))
sns.jointplot(x='month',y='Vehicles',data=df_transport)
#plt.title('Vehicles by Month')
"""
Explanation: Now that we have the date parsed into integers, let's do some additional plotting.
End of explanation
"""
# TODO 3a
# TODO -- Your code here.
"""
Explanation: Data Quality Issue #3:
Rename a Feature Column and Remove a Value.
Our feature columns have inconsistent capitalization in their names, and some column names contain spaces. We are also only interested in years greater than 2006, not "<2006".
Lab Task #3a: Remove all the spaces for feature columns by renaming them.
End of explanation
"""
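One possible shape for both steps on a miniature frame (the two column names mirror the lab's dataset and are otherwise assumptions):

```python
import pandas as pd

df_transport = pd.DataFrame({
    "Model Year": ["2007", "<2006", "2010"],
    "Light_Duty": ["Yes", "No", "Yes"],
})

# Lower-case the names and strip spaces and underscores.
df_transport.columns = (
    df_transport.columns.str.lower().str.replace(" ", "").str.replace("_", "")
)

# Keep only rows with a concrete year; .copy() avoids SettingWithCopyWarning.
df = df_transport[df_transport["modelyear"] != "<2006"].copy()

print(df["modelyear"].tolist())  # ['2007', '2010']
```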
# TODO 3b
# TODO -- Your code here.
"""
Explanation: Note: Next we create a copy of the dataframe to avoid the "SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame" warning. Run the cell to remove the value '<2006' from the modelyear feature column.
Lab Task #3b: Create a copy of the dataframe to avoid copy warning issues.
End of explanation
"""
df['modelyear'].value_counts(0)
"""
Explanation: Next, confirm that the modelyear value '<2006' has been removed by doing a value count.
End of explanation
"""
df['lightduty'].value_counts(0)
"""
Explanation: Data Quality Issue #4:
Handling Categorical Columns
The feature column "lightduty" is categorical and has a "Yes/No" choice. We cannot feed values like this into a machine learning model. We need to convert the binary answers from strings of yes/no to integers of 1/0. There are various methods to achieve this. We will use the "apply" method with a lambda expression. Pandas' .apply() takes a function and applies it to every value of a Pandas Series.
What is a Lambda Function?
Typically, Python requires that you define a function using the def keyword. However, lambda functions are anonymous -- which means there is no need to name them. The most common use case for lambda functions is in code that requires a simple one-line function (e.g. lambdas only have a single expression).
As you progress through the Course Specialization, you will see many examples where lambda functions are being used. Now is a good time to become familiar with them.
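A two-line illustration (the names here are arbitrary):

```python
import pandas as pd

square = lambda x: x ** 2               # an anonymous one-expression function
s = pd.Series([1, 2, 3]).apply(square)  # applied to every value of a Series

print(s.tolist())  # [1, 4, 9]
```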
First, let's count the number of "Yes" and "No" values in the 'lightduty' feature column.
End of explanation
"""
df.loc[:,'lightduty'] = df['lightduty'].apply(lambda x: 0 if x=='No' else 1)
df['lightduty'].value_counts(0)
# Confirm that "lightduty" has been converted.
df.head()
"""
Explanation: Let's convert Yes to 1 and No to 0. Pandas' .apply() takes a function and applies it to every value of a Pandas Series (e.g. lightduty).
End of explanation
"""
# Making dummy variables for categorical data with more inputs.
data_dummy = pd.get_dummies(df[['zipcode','modelyear', 'fuel', 'make']], drop_first=True)
data_dummy.head()
"""
Explanation: One-Hot Encoding Categorical Feature Columns
Machine learning algorithms expect input vectors and not categorical features. Specifically, they cannot handle text or string values. Thus, it is often useful to transform categorical features into vectors.
One transformation method is to create dummy variables for our categorical features. Dummy variables are a set of binary (0 or 1) variables that each represent a single class from a categorical feature. We simply encode the categorical variable as a one-hot vector, i.e. a vector where only one element is non-zero, or hot. With one-hot encoding, a categorical feature becomes an array whose size is the number of possible choices for that feature.
Pandas provides a function called "get_dummies" to convert a categorical variable into dummy/indicator variables.
End of explanation
"""
# TODO 4a
# TODO -- Your code here.
"""
Explanation: Lab Task #4a: Merge (concatenate) original data frame with 'dummy' dataframe.
End of explanation
"""
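A hedged sketch of the concatenation, on a toy frame rather than the lab's `df` (column names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"vehicles": [10, 12], "fuel": ["Gas", "Electric"]})
data_dummy = pd.get_dummies(df[["fuel"]], drop_first=True)

# Concatenate column-wise, then drop the now-redundant categorical column.
df = pd.concat([df, data_dummy], axis=1).drop(columns=["fuel"])

print(df.columns.tolist())  # ['vehicles', 'fuel_Gas']
```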
# TODO 4b
# TODO -- Your code here.
# Confirm that 'zipcode','modelyear', 'fuel', and 'make' have been dropped.
df.head()
"""
Explanation: Lab Task #4b: Drop attributes for which we made dummy variables.
End of explanation
"""
print ('Unique values of month:',df.month.unique())
print ('Unique values of day:',df.day.unique())
print ('Unique values of year:',df.year.unique())
"""
Explanation: Data Quality Issue #5:
Temporal Feature Columns
Our dataset now contains year, month, and day feature columns. Let's convert the month and day feature columns to meaningful representations as a way to get us thinking about changing temporal features -- as they are sometimes overlooked.
Note that the Feature Engineering course in this Specialization will provide more depth on methods to handle year, month, day, and hour feature columns.
First, let's print the unique values for "month" and "day" in our dataset.
End of explanation
"""
df['day_sin'] = np.sin(df.day*(2.*np.pi/31))
df['day_cos'] = np.cos(df.day*(2.*np.pi/31))
df['month_sin'] = np.sin((df.month-1)*(2.*np.pi/12))
df['month_cos'] = np.cos((df.month-1)*(2.*np.pi/12))
# TODO 5
# TODO -- Your code here.
# scroll left to see the converted month and day columns.
df.tail(4)
"""
Explanation: Next, we map each temporal variable onto a circle such that the lowest value for that variable appears right next to the largest value. We compute the x- and y- component of that point using sin and cos trigonometric functions. Don't worry, this is the last time we will use this code, as you can develop an input pipeline to address these temporal feature columns in TensorFlow and Keras - and it is much easier! But, sometimes you need to appreciate what you're not going to encounter as you move through the course!
Run the cell to view the output.
Lab Task #5: Drop month and day.
End of explanation
"""
|
keras-team/keras-io | examples/generative/ipynb/gan_ada.ipynb | apache-2.0 | import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow import keras
from tensorflow.keras import layers
"""
Explanation: Data-efficient GANs with Adaptive Discriminator Augmentation
Author: András Béres<br>
Date created: 2021/10/28<br>
Last modified: 2021/10/28<br>
Description: Generating images from limited data using the Caltech Birds dataset.
Introduction
GANs
Generative Adversarial Networks (GANs) are a popular
class of generative deep learning models, commonly used for image generation. They
consist of a pair of dueling neural networks, called the discriminator and the generator.
The discriminator's task is to distinguish real images from generated (fake) ones, while
the generator network tries to fool the discriminator by generating more and more
realistic images. If the discriminator is however too easy or too hard to fool, it might fail
to provide a useful learning signal for the generator; therefore, training GANs is usually
considered a difficult task.
Data augmentation for GANs
Data augmentation, a popular technique in deep learning, is the process of randomly
applying semantics-preserving transformations to the input data to generate multiple
realistic versions of it, thereby effectively multiplying the amount of training data
available. The simplest example is left-right flipping an image, which preserves its
contents while generating a second unique training sample. Data augmentation is commonly
used in supervised learning to prevent overfitting and enhance generalization.
The authors of StyleGAN2-ADA show that discriminator
overfitting can be an issue in GANs, especially when only a small amount of training data is
available. They propose Adaptive Discriminator Augmentation to mitigate this issue.
Applying data augmentation to GANs however is not straightforward. Since the generator is
updated using the discriminator's gradients, if the generated images are augmented, the
augmentation pipeline has to be differentiable and also has to be GPU-compatible for
computational efficiency. Luckily, the
Keras image augmentation layers
fulfill both these requirements, and are therefore very well suited for this task.
Invertible data augmentation
A possible difficulty when using data augmentation in generative models is the issue of
"leaky augmentations" (section 2.2), namely when the
model generates images that are already augmented. This would mean that it was not able
to separate the augmentation from the underlying data distribution, which can be caused
by using non-invertible data transformations. For example, if either 0, 90, 180 or 270
degree rotations are performed with equal probability, the original orientation of the
images is impossible to infer, and this information is destroyed.
A simple trick to make data augmentations invertible is to only apply them with some
probability. That way the original version of the images will be more common, and the
data distribution can be inferred. By properly choosing this probability, one can
effectively regularize the discriminator without making the augmentations leaky.
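The trick can be sketched independently of any GAN machinery; below is a NumPy stand-in for the per-sample selection (the real pipeline later in this example does the equivalent with `tf.where`):

```python
import numpy as np

rng = np.random.default_rng(0)

def maybe_augment(images, augment_fn, p):
    """Apply augment_fn to each image independently with probability p."""
    mask = rng.random(len(images)) < p  # per-sample Bernoulli(p) coin flips
    out = images.copy()
    out[mask] = augment_fn(images[mask])
    return out

batch = np.arange(8 * 2 * 2, dtype=float).reshape(8, 2, 2)
flipped = maybe_augment(batch, lambda x: x[:, :, ::-1], p=0.5)  # x-flip some samples
print(flipped.shape)  # (8, 2, 2)
```

With `p = 0` every image stays original, so the original data distribution remains recoverable in the sense described above.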
Setup
End of explanation
"""
# data
num_epochs = 10 # train for 400 epochs for good results
image_size = 64
# resolution of Kernel Inception Distance measurement, see related section
kid_image_size = 75
padding = 0.25
dataset_name = "caltech_birds2011"
# adaptive discriminator augmentation
max_translation = 0.125
max_rotation = 0.125
max_zoom = 0.25
target_accuracy = 0.85
integration_steps = 1000
# architecture
noise_size = 64
depth = 4
width = 128
leaky_relu_slope = 0.2
dropout_rate = 0.4
# optimization
batch_size = 128
learning_rate = 2e-4
beta_1 = 0.5 # not using the default value of 0.9 is important
ema = 0.99
"""
Explanation: Hyperparameters
End of explanation
"""
def round_to_int(float_value):
return tf.cast(tf.math.round(float_value), dtype=tf.int32)
def preprocess_image(data):
# unnormalize bounding box coordinates
height = tf.cast(tf.shape(data["image"])[0], dtype=tf.float32)
width = tf.cast(tf.shape(data["image"])[1], dtype=tf.float32)
bounding_box = data["bbox"] * tf.stack([height, width, height, width])
# calculate center and length of longer side, add padding
target_center_y = 0.5 * (bounding_box[0] + bounding_box[2])
target_center_x = 0.5 * (bounding_box[1] + bounding_box[3])
target_size = tf.maximum(
(1.0 + padding) * (bounding_box[2] - bounding_box[0]),
(1.0 + padding) * (bounding_box[3] - bounding_box[1]),
)
# modify crop size to fit into image
target_height = tf.reduce_min(
[target_size, 2.0 * target_center_y, 2.0 * (height - target_center_y)]
)
target_width = tf.reduce_min(
[target_size, 2.0 * target_center_x, 2.0 * (width - target_center_x)]
)
# crop image
image = tf.image.crop_to_bounding_box(
data["image"],
offset_height=round_to_int(target_center_y - 0.5 * target_height),
offset_width=round_to_int(target_center_x - 0.5 * target_width),
target_height=round_to_int(target_height),
target_width=round_to_int(target_width),
)
# resize and clip
# for image downsampling, area interpolation is the preferred method
image = tf.image.resize(
image, size=[image_size, image_size], method=tf.image.ResizeMethod.AREA
)
return tf.clip_by_value(image / 255.0, 0.0, 1.0)
def prepare_dataset(split):
# the validation dataset is shuffled as well, because data order matters
# for the KID calculation
return (
tfds.load(dataset_name, split=split, shuffle_files=True)
.map(preprocess_image, num_parallel_calls=tf.data.AUTOTUNE)
.cache()
.shuffle(10 * batch_size)
.batch(batch_size, drop_remainder=True)
.prefetch(buffer_size=tf.data.AUTOTUNE)
)
train_dataset = prepare_dataset("train")
val_dataset = prepare_dataset("test")
"""
Explanation: Data pipeline
In this example, we will use the
Caltech Birds (2011) dataset for
generating images of birds, which is a diverse natural dataset containing fewer than 6000
images for training. When working with such low amounts of data, one has to take extra
care to retain as high a data quality as possible. In this example, we use the provided
bounding boxes of the birds to cut them out with square crops while preserving their
aspect ratios when possible.
End of explanation
"""
class KID(keras.metrics.Metric):
def __init__(self, name="kid", **kwargs):
super().__init__(name=name, **kwargs)
# KID is estimated per batch and is averaged across batches
self.kid_tracker = keras.metrics.Mean()
# a pretrained InceptionV3 is used without its classification layer
# transform the pixel values to the 0-255 range, then use the same
# preprocessing as during pretraining
self.encoder = keras.Sequential(
[
layers.InputLayer(input_shape=(image_size, image_size, 3)),
layers.Rescaling(255.0),
layers.Resizing(height=kid_image_size, width=kid_image_size),
layers.Lambda(keras.applications.inception_v3.preprocess_input),
keras.applications.InceptionV3(
include_top=False,
input_shape=(kid_image_size, kid_image_size, 3),
weights="imagenet",
),
layers.GlobalAveragePooling2D(),
],
name="inception_encoder",
)
def polynomial_kernel(self, features_1, features_2):
feature_dimensions = tf.cast(tf.shape(features_1)[1], dtype=tf.float32)
return (features_1 @ tf.transpose(features_2) / feature_dimensions + 1.0) ** 3.0
def update_state(self, real_images, generated_images, sample_weight=None):
real_features = self.encoder(real_images, training=False)
generated_features = self.encoder(generated_images, training=False)
# compute polynomial kernels using the two sets of features
kernel_real = self.polynomial_kernel(real_features, real_features)
kernel_generated = self.polynomial_kernel(
generated_features, generated_features
)
kernel_cross = self.polynomial_kernel(real_features, generated_features)
# estimate the squared maximum mean discrepancy using the average kernel values
batch_size = tf.shape(real_features)[0]
batch_size_f = tf.cast(batch_size, dtype=tf.float32)
mean_kernel_real = tf.reduce_sum(kernel_real * (1.0 - tf.eye(batch_size))) / (
batch_size_f * (batch_size_f - 1.0)
)
mean_kernel_generated = tf.reduce_sum(
kernel_generated * (1.0 - tf.eye(batch_size))
) / (batch_size_f * (batch_size_f - 1.0))
mean_kernel_cross = tf.reduce_mean(kernel_cross)
kid = mean_kernel_real + mean_kernel_generated - 2.0 * mean_kernel_cross
# update the average KID estimate
self.kid_tracker.update_state(kid)
def result(self):
return self.kid_tracker.result()
def reset_state(self):
self.kid_tracker.reset_state()
"""
Explanation: After preprocessing, the training images look like the following:
Kernel inception distance
Kernel Inception Distance (KID) was proposed as a
replacement for the popular
Frechet Inception Distance (FID)
metric for measuring image generation quality.
Both metrics measure the difference in the generated and training distributions in the
representation space of an InceptionV3
network pretrained on
ImageNet.
According to the paper, KID was proposed because FID has no unbiased estimator; its
expected value is higher when it is measured on fewer images. KID is more suitable for
small datasets because its expected value does not depend on the number of samples it is
measured on. In my experience it is also computationally lighter, numerically more
stable, and simpler to implement because it can be estimated in a per-batch manner.
In this example, the images are evaluated at the minimal possible resolution of the
Inception network (75x75 instead of 299x299), and the metric is only measured on the
validation set for computational efficiency.
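Stripped of the Keras plumbing, the per-batch estimate in the class above is an unbiased MMD² with the cubic polynomial kernel k(x, y) = (x·y / d + 1)³. A NumPy sketch, with random arrays standing in for the Inception embeddings:

```python
import numpy as np

def kid_batch(real, gen):
    """Unbiased per-batch KID estimate from two (n, d) feature arrays."""
    n, d = real.shape
    kernel = lambda a, b: (a @ b.T / d + 1.0) ** 3
    k_rr, k_gg, k_rg = kernel(real, real), kernel(gen, gen), kernel(real, gen)
    # off-diagonal means for the two self terms (unbiased), plain mean for cross
    off_diag_mean = lambda m: (m.sum() - np.trace(m)) / (n * (n - 1))
    return off_diag_mean(k_rr) + off_diag_mean(k_gg) - 2.0 * k_rg.mean()

rng = np.random.default_rng(1)
same_a = rng.normal(size=(64, 16))
same_b = rng.normal(size=(64, 16))
shifted = rng.normal(loc=2.0, size=(64, 16))

# Matching distributions score far lower than mismatched ones.
assert kid_batch(same_a, shifted) > kid_batch(same_a, same_b)
```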
End of explanation
"""
# "hard sigmoid", useful for binary accuracy calculation from logits
def step(values):
# negative values -> 0.0, positive values -> 1.0
return 0.5 * (1.0 + tf.sign(values))
# augments images with a probability that is dynamically updated during training
class AdaptiveAugmenter(keras.Model):
def __init__(self):
super().__init__()
# stores the current probability of an image being augmented
self.probability = tf.Variable(0.0)
# the corresponding augmentation names from the paper are shown above each layer
# the authors show (see figure 4), that the blitting and geometric augmentations
# are the most helpful in the low-data regime
self.augmenter = keras.Sequential(
[
layers.InputLayer(input_shape=(image_size, image_size, 3)),
# blitting/x-flip:
layers.RandomFlip("horizontal"),
# blitting/integer translation:
layers.RandomTranslation(
height_factor=max_translation,
width_factor=max_translation,
interpolation="nearest",
),
# geometric/rotation:
layers.RandomRotation(factor=max_rotation),
# geometric/isotropic and anisotropic scaling:
layers.RandomZoom(
height_factor=(-max_zoom, 0.0), width_factor=(-max_zoom, 0.0)
),
],
name="adaptive_augmenter",
)
def call(self, images, training):
if training:
augmented_images = self.augmenter(images, training)
# during training either the original or the augmented images are selected
# based on self.probability
augmentation_values = tf.random.uniform(
shape=(batch_size, 1, 1, 1), minval=0.0, maxval=1.0
)
augmentation_bools = tf.math.less(augmentation_values, self.probability)
images = tf.where(augmentation_bools, augmented_images, images)
return images
def update(self, real_logits):
current_accuracy = tf.reduce_mean(step(real_logits))
        # the augmentation probability is updated based on the discriminator's
# accuracy on real images
accuracy_error = current_accuracy - target_accuracy
self.probability.assign(
tf.clip_by_value(
self.probability + accuracy_error / integration_steps, 0.0, 1.0
)
)
"""
Explanation: Adaptive discriminator augmentation
The authors of StyleGAN2-ADA propose to change the
augmentation probability adaptively during training. Though it is explained differently
in the paper, they use integral control on the augmentation
probability to keep the discriminator's accuracy on real images close to a target value.
Note that their controlled variable is actually the average sign of the discriminator
logits (r_t in the paper), which corresponds to 2 * accuracy - 1.
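The correspondence is easy to verify numerically; with illustrative logits, the average logit sign equals 2 * accuracy - 1 where accuracy is what the `step` helper in the code above computes:

```python
import numpy as np

logits = np.array([2.1, -0.3, 0.7, -1.5, 0.2])
accuracy = np.mean(0.5 * (1.0 + np.sign(logits)))  # what step() computes on average
r_t = np.mean(np.sign(logits))                     # average sign of the logits

assert np.isclose(r_t, 2.0 * accuracy - 1.0)
```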
This method requires two hyperparameters:
target_accuracy: the target value for the discriminator's accuracy on real images. I
recommend selecting its value from the 80-90% range.
integration_steps:
the number of update steps required for an accuracy error of 100% to transform into an
augmentation probability increase of 100%. To give an intuition, this defines how slowly
the augmentation probability is changed. I recommend setting this to a relatively high
value (1000 in this case) so that the augmentation strength is only adjusted slowly.
The main motivation for this procedure is that the optimal value of the target accuracy
is similar across different dataset sizes (see figures 4 and 5 in the paper),
so it does not have to be retuned, because the
process automatically applies stronger data augmentation when it is needed.
End of explanation
"""
# DCGAN generator
def get_generator():
noise_input = keras.Input(shape=(noise_size,))
x = layers.Dense(4 * 4 * width, use_bias=False)(noise_input)
x = layers.BatchNormalization(scale=False)(x)
x = layers.ReLU()(x)
x = layers.Reshape(target_shape=(4, 4, width))(x)
for _ in range(depth - 1):
x = layers.Conv2DTranspose(
width, kernel_size=4, strides=2, padding="same", use_bias=False,
)(x)
x = layers.BatchNormalization(scale=False)(x)
x = layers.ReLU()(x)
image_output = layers.Conv2DTranspose(
3, kernel_size=4, strides=2, padding="same", activation="sigmoid",
)(x)
return keras.Model(noise_input, image_output, name="generator")
# DCGAN discriminator
def get_discriminator():
image_input = keras.Input(shape=(image_size, image_size, 3))
x = image_input
for _ in range(depth):
x = layers.Conv2D(
width, kernel_size=4, strides=2, padding="same", use_bias=False,
)(x)
x = layers.BatchNormalization(scale=False)(x)
x = layers.LeakyReLU(alpha=leaky_relu_slope)(x)
x = layers.Flatten()(x)
x = layers.Dropout(dropout_rate)(x)
output_score = layers.Dense(1)(x)
return keras.Model(image_input, output_score, name="discriminator")
"""
Explanation: Network architecture
Here we specify the architecture of the two networks:
generator: maps a random vector to an image, which should be as realistic as possible
discriminator: maps an image to a scalar score, which should be high for real and low
for generated images
GANs tend to be sensitive to the network architecture, so I implemented a DCGAN architecture
in this example, because it is relatively stable during training while being simple to
implement. We use a constant number of filters throughout the network, use a sigmoid
instead of tanh in the last layer of the generator, and use default initialization
instead of random normal as further simplifications.
As a good practice, we disable the learnable scale parameter in the batch normalization
layers, because on one hand the following relu + convolutional layers make it redundant
(as noted in the
documentation).
But also because it should be disabled based on theory when using spectral normalization
(section 4.1), which is not used here, but is common
in GANs. We also disable the bias in the fully connected and convolutional layers, because
the following batch normalization makes it redundant.
End of explanation
"""
class GAN_ADA(keras.Model):
def __init__(self):
super().__init__()
self.augmenter = AdaptiveAugmenter()
self.generator = get_generator()
self.ema_generator = keras.models.clone_model(self.generator)
self.discriminator = get_discriminator()
self.generator.summary()
self.discriminator.summary()
def compile(self, generator_optimizer, discriminator_optimizer, **kwargs):
super().compile(**kwargs)
# separate optimizers for the two networks
self.generator_optimizer = generator_optimizer
self.discriminator_optimizer = discriminator_optimizer
self.generator_loss_tracker = keras.metrics.Mean(name="g_loss")
self.discriminator_loss_tracker = keras.metrics.Mean(name="d_loss")
self.real_accuracy = keras.metrics.BinaryAccuracy(name="real_acc")
self.generated_accuracy = keras.metrics.BinaryAccuracy(name="gen_acc")
self.augmentation_probability_tracker = keras.metrics.Mean(name="aug_p")
self.kid = KID()
@property
def metrics(self):
return [
self.generator_loss_tracker,
self.discriminator_loss_tracker,
self.real_accuracy,
self.generated_accuracy,
self.augmentation_probability_tracker,
self.kid,
]
def generate(self, batch_size, training):
latent_samples = tf.random.normal(shape=(batch_size, noise_size))
# use ema_generator during inference
if training:
generated_images = self.generator(latent_samples, training)
else:
generated_images = self.ema_generator(latent_samples, training)
return generated_images
def adversarial_loss(self, real_logits, generated_logits):
# this is usually called the non-saturating GAN loss
real_labels = tf.ones(shape=(batch_size, 1))
generated_labels = tf.zeros(shape=(batch_size, 1))
# the generator tries to produce images that the discriminator considers as real
generator_loss = keras.losses.binary_crossentropy(
real_labels, generated_logits, from_logits=True
)
# the discriminator tries to determine if images are real or generated
discriminator_loss = keras.losses.binary_crossentropy(
tf.concat([real_labels, generated_labels], axis=0),
tf.concat([real_logits, generated_logits], axis=0),
from_logits=True,
)
return tf.reduce_mean(generator_loss), tf.reduce_mean(discriminator_loss)
def train_step(self, real_images):
real_images = self.augmenter(real_images, training=True)
# use persistent gradient tape because gradients will be calculated twice
with tf.GradientTape(persistent=True) as tape:
generated_images = self.generate(batch_size, training=True)
# gradient is calculated through the image augmentation
generated_images = self.augmenter(generated_images, training=True)
# separate forward passes for the real and generated images, meaning
# that batch normalization is applied separately
real_logits = self.discriminator(real_images, training=True)
generated_logits = self.discriminator(generated_images, training=True)
generator_loss, discriminator_loss = self.adversarial_loss(
real_logits, generated_logits
)
# calculate gradients and update weights
generator_gradients = tape.gradient(
generator_loss, self.generator.trainable_weights
)
discriminator_gradients = tape.gradient(
discriminator_loss, self.discriminator.trainable_weights
)
self.generator_optimizer.apply_gradients(
zip(generator_gradients, self.generator.trainable_weights)
)
self.discriminator_optimizer.apply_gradients(
zip(discriminator_gradients, self.discriminator.trainable_weights)
)
# update the augmentation probability based on the discriminator's performance
self.augmenter.update(real_logits)
self.generator_loss_tracker.update_state(generator_loss)
self.discriminator_loss_tracker.update_state(discriminator_loss)
self.real_accuracy.update_state(1.0, step(real_logits))
self.generated_accuracy.update_state(0.0, step(generated_logits))
self.augmentation_probability_tracker.update_state(self.augmenter.probability)
# track the exponential moving average of the generator's weights to decrease
# variance in the generation quality
for weight, ema_weight in zip(
self.generator.weights, self.ema_generator.weights
):
ema_weight.assign(ema * ema_weight + (1 - ema) * weight)
# KID is not measured during the training phase for computational efficiency
return {m.name: m.result() for m in self.metrics[:-1]}
def test_step(self, real_images):
generated_images = self.generate(batch_size, training=False)
self.kid.update_state(real_images, generated_images)
# only KID is measured during the evaluation phase for computational efficiency
return {self.kid.name: self.kid.result()}
def plot_images(self, epoch=None, logs=None, num_rows=3, num_cols=6, interval=5):
# plot random generated images for visual evaluation of generation quality
if epoch is None or (epoch + 1) % interval == 0:
num_images = num_rows * num_cols
generated_images = self.generate(num_images, training=False)
plt.figure(figsize=(num_cols * 2.0, num_rows * 2.0))
for row in range(num_rows):
for col in range(num_cols):
index = row * num_cols + col
plt.subplot(num_rows, num_cols, index + 1)
plt.imshow(generated_images[index])
plt.axis("off")
plt.tight_layout()
plt.show()
plt.close()
"""
Explanation: GAN model
End of explanation
"""
# create and compile the model
model = GAN_ADA()
model.compile(
generator_optimizer=keras.optimizers.Adam(learning_rate, beta_1),
discriminator_optimizer=keras.optimizers.Adam(learning_rate, beta_1),
)
# save the best model based on the validation KID metric
checkpoint_path = "gan_model"
checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_path,
save_weights_only=True,
monitor="val_kid",
mode="min",
save_best_only=True,
)
# run training and plot generated images periodically
model.fit(
train_dataset,
epochs=num_epochs,
validation_data=val_dataset,
callbacks=[
keras.callbacks.LambdaCallback(on_epoch_end=model.plot_images),
checkpoint_callback,
],
)
"""
Explanation: Training
One should see from the metrics during training that if the real accuracy
(discriminator's accuracy on real images) is below the target accuracy, the augmentation
probability is increased, and vice versa. In my experience, during a healthy GAN
training, the discriminator accuracy should stay in the 80-95% range. Below that, the
discriminator is too weak; above that, it is too strong.
Note that we track the exponential moving average of the generator's weights, and use that
for image generation and KID evaluation.
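For a single scalar weight, the EMA loop in `train_step` reduces to the following update (the values here are illustrative):

```python
# EMA update rule used for the generator weights, shown on one scalar weight.
ema_decay = 0.99
ema_value, value = 0.0, 1.0
for _ in range(3):
    ema_value = ema_decay * ema_value + (1 - ema_decay) * value

print(round(ema_value, 6))  # 0.029701
```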
End of explanation
"""
# load the best model and generate images
model.load_weights(checkpoint_path)
model.plot_images()
"""
Explanation: Inference
End of explanation
"""
|
kaczla/PJN | src/NLTK/nltk.ipynb | gpl-2.0 | import nltk
"""
Explanation: <center>NLTK</center>
<center>PYTHON</center>
Table of contents:
What is NLTK?
Requirements
Installing NLTK
Example for English:
Import NLTK
Word Tokenize
Sentence Tokenize
Pos Tagger
Lemmatizer
Sentence Tokenize for German
Online version
Notes
What is NLTK?
<b>Natural Language Toolkit</b>, also known as <b>NLTK</b>, is a suite of libraries and programs for symbolic and statistical natural language processing. NLTK includes graphical demonstrations and sample data.
<br>
Authors: <i>Steven Bird</i>, <i>Edward Loper</i>, <i>Ewan Klein</i>
<br>
Official project website
<div align="right">↑ Back to table of contents ↑</div>
Requirements
Basic knowledge of Natural Language Processing
Basic knowledge of Python
Python 2.7 or 3.2+
<div align="right">↑ Back to table of contents ↑</div>
Installing NLTK
Install <b>python-pip</b>:
<b>Debian/Ubuntu</b>:
sudo apt-get install python-pip
<b>Fedora</b>:
sudo yum install python-pip<br><br>
Install <b>NLTK</b>:
sudo pip install -U nltk<br><br>
(OPTIONAL) Install <b>Numpy</b>:
sudo pip install -U numpy<br><br>
Test the installation:
python -c "import nltk"<br><br>
Download the external data:
Automatically (downloads the data to /usr/share/nltk_data):
sudo python -m nltk.downloader -d /usr/share/nltk_data all<br><br>
Graphically:
python -c "import nltk;nltk.download()"
choose the installation location in the <b>Download Directory</b> field
select the packages you are interested in and confirm with the <b>Download</b> button
<div align="right">↑ Back to table of contents ↑</div>
<center>Import NLTK</center>
Zaimportowanie całego NLTK:
<div align="right">↑ Powrót do spisu treści ↑</div>
End of explanation
"""
text = "Natural language processing (NLP) is a field of e.g. computer science, artificial intelligence, and computational linguistics concerned with the interactions between computers and human (natural) languages. As such, NLP is related to the area of human-computer interaction. Many challenges in NLP involve natural language understanding, that is, enabling computers to derive meaning from human or natural language input, and others involve natural language generation."
one_sentence = "I eat breakfast at 8:00 a.m."
"""
Explanation: <center>English</center>
Examples for English:
End of explanation
"""
tokens = nltk.word_tokenize(text)
tokens
"""
Explanation: <center>Word Tokenize</center>
Splitting into tokens (<b>with no option to choose the language</b>) produces a simple list of words and punctuation marks:
<div align="right">↑ Back to table of contents ↑</div>
End of explanation
"""
sent_tokenize = nltk.data.load('tokenizers/punkt/english.pickle')
sent_tokenize.tokenize(text)
"""
Explanation: <center>Sentence Tokenize</center>
To split text into sentences, load <b>tokenizers/punkt/english.pickle</b> (for English):
<div align="right">↑ Back to table of contents ↑</div>
End of explanation
"""
tagged = nltk.pos_tag(tokens)
tagged
"""
Explanation: <center>Pos Tagger</center>
Using a simple POS tagger:
|Abbreviation|Name|Example|
|-|-|-|
|ADJ|adjective|new, good, high, special, big, local|
|ADV|adverb|really, already, still, early, now|
|CNJ|conjunction|and, or, but, if, while, although|
|DET|determiner|the, a, some, most, every, no|
|EX|existential|there, there's|
|FW|foreign word|dolce, ersatz, esprit, quo, maitre|
|MOD|modal verb|will, can, would, may, must, should|
|N|noun|year, home, costs, time, education|
|NP|proper noun|Alison, Africa, April, Washington|
|NUM|number|twenty-four, fourth, 1991, 14:24|
|PRO|pronoun|he, their, her, its, my, I, us|
|P|preposition|on, of, at, with, by, into, under|
|TO|the word to|to|
|UH|interjection|ah, bang, ha, whee, hmpf, oops|
|V|verb|is, has, get, do, make, see, run|
|VD|past tense|said, took, told, made, asked|
|VG|present participle|making, going, playing, working|
|VN|past participle|given, taken, begun, sung|
|WH|wh determiner|who, which, when, what, where, how|
<div align="right">↑ Powrót do spisu treści ↑</div>
End of explanation
"""
one_sentence_pos_tags = nltk.pos_tag(nltk.word_tokenize(one_sentence))
entities = nltk.chunk.ne_chunk(one_sentence_pos_tags)
entities
"""
Explanation: <b>Graphically:</b>
<div align="right">↑ Back to table of contents ↑</div>
End of explanation
"""
wnl = nltk.stem.WordNetLemmatizer()
print "aardwolves \t=", wnl.lemmatize("aardwolves")
print "dogs \t=", wnl.lemmatize("dogs")
print "abaci \t=", wnl.lemmatize("abaci")
"""
Explanation: <center>Lemmatizer</center>
The lemmatizer is based on WordNet.
Words (plural → lemma):
- aardwolves → aardwolf
- dogs → dog
- abacuses, abaci → abacus
<div align="right">↑ Back to table of contents ↑</div>
End of explanation
"""
text_de = u"Haben Sie Lust mit zu mir zu kommen und alles das zu tun, z.B. was ich allen anderen morgen sowieso erzählen werde? Ich weigere mich, in dem alten u. verspukten Schloss zu schlafen. ich habe Angst vor Geistern!"
sent_tokenize_de = nltk.data.load('tokenizers/punkt/german.pickle')
sents_de = sent_tokenize_de.tokenize(text_de)
for i in sents_de:
print "'" + i + "'"
"""
Explanation: <center>Sentence Tokenize DE</center>
German abbreviations:
- <b>z.B.</b> (zum Beispiel) = <b>e.g.</b> (for example)
- <b>u.</b> (und) = the conjunction <b>and</b>
<div align="right">↑ Back to table of contents ↑</div>
End of explanation
"""
|
sn0wle0pard/tracer | example/sort/.ipynb_checkpoints/Insertion-checkpoint.ipynb | mit | def insertion_sort(unsorted_list):
x = ipytracer.List1DTracer(unsorted_list)
display(x)
for i in range(1, len(x)):
j = i - 1
key = x[i]
while x[j] > key and j >= 0:
x[j+1] = x[j]
j = j - 1
x[j+1] = key
return x.data
"""
Explanation: Insertion Sort (Insert Sort)
Insertion sort is a simple sorting algorithm that builds the final sorted array (or list) one item at a time. It is much less efficient on large lists than more advanced algorithms such as quicksort, heapsort, or merge sort.
Complexity
Time
Worst-case: $O(n^2)$
Best-case: $O(n)$
Average: $O(n^2)$
Reference
Wikipedia
Code1 - List1DTracer
End of explanation
"""
insertion_sort([6,4,7,9,3,5,1,8,2])
"""
Explanation: Example run
End of explanation
"""
def insertion_sort(unsorted_list):
x = ipytracer.ChartTracer(unsorted_list)
display(x)
for i in range(1, len(x)):
j = i - 1
key = x[i]
        while j >= 0 and x[j] > key:
x[j+1] = x[j]
j = j - 1
x[j+1] = key
return x.data
"""
Explanation: Code2 - ChartTracer
End of explanation
"""
insertion_sort([6,4,7,9,3,5,1,8,2])
"""
Explanation: Example run
End of explanation
"""
|
tritemio/multispot_paper | out_notebooks/usALEX - Corrections - Leakage fit-out.ipynb | mit | #bsearch_ph_sel = 'all-ph'
#bsearch_ph_sel = 'Dex'
bsearch_ph_sel = 'DexDem'
data_file = 'results/usALEX-5samples-PR-raw-%s.csv' % bsearch_ph_sel
"""
Explanation: Executed: Mon Mar 27 11:37:02 2017
Duration: 3 seconds.
Leakage coefficient fit
This notebook estracts the leakage coefficient from the set of 5 us-ALEX smFRET measurements.
What it does?
For each measurement, we fit the donor-only peak position of the uncorrected proximity ratio histogram. These values are saved in a .txt file. This notebook just performs a weighted mean where the weights are the number of bursts in each measurement.
This notebook read data from the file:
End of explanation
"""
from __future__ import division
import numpy as np
import pandas as pd
from IPython.display import display
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
%config InlineBackend.figure_format='retina' # for hi-dpi displays
sns.set_style('whitegrid')
palette = ('Paired', 10)
sns.palplot(sns.color_palette(*palette))
sns.set_palette(*palette)
data = pd.read_csv(data_file).set_index('sample')
data
display(data[['E_pr_do_gauss', 'E_pr_do_kde', 'E_pr_do_hsm', 'n_bursts_do']])
print('KDE Mean (%): ', data.E_pr_do_kde.mean()*100)
print('KDE Std. Dev. (%):', data.E_pr_do_kde.std()*100)
d = data[['E_pr_do_gauss', 'E_pr_do_kde', 'E_pr_do_hsm']]#, 'n_bursts_do']]
d.plot(lw=3);
"""
Explanation: To recompute the PR data used by this notebook run the
8-spots paper analysis notebook.
Computation
End of explanation
"""
E_table = data[['E_pr_do_gauss', 'E_pr_do_kde']]
E_table
lk_table = E_table / (1 - E_table)
lk_table.columns = [c.replace('E_pr_do', 'lk') for c in E_table.columns]
lk_table['num_bursts'] = data['n_bursts_do']
lk_table
"""
Explanation: Create Leakage Table
End of explanation
"""
data.E_pr_do_kde
lk_table.lk_kde
E_m = np.average(data.E_pr_do_kde, weights=data.n_bursts_do)
E_m
k_E_m = E_m / (1 - E_m)
k_E_m
k_m = np.average(lk_table.lk_kde, weights=data.n_bursts_do)
k_m
"""
Explanation: Average leakage coefficient
End of explanation
"""
stats = pd.concat([lk_table.mean(), lk_table.std()], axis=1, keys=['mean', 'std']).T
stats
table_to_save = lk_table.append(stats)
table_to_save = table_to_save.round({'lk_gauss': 5, 'lk_kde': 5, 'num_bursts': 2})
table_to_save
table_to_save.to_csv('results/table_usalex_5samples_leakage_coeff.csv')
"""
Explanation: Conclusions
Whether we average $E_{PR}$ and then convert, or average the corresponding $k = n_d/n_a$ directly, the result for the leakage coefficient is ~10 % (D-only peak fitted by finding the maximum of the KDE).
Save data
Full table
End of explanation
"""
'%.5f' % k_m
with open('results/usALEX - leakage coefficient %s.csv' % bsearch_ph_sel, 'w') as f:
f.write('%.5f' % k_m)
"""
Explanation: Average coefficient
End of explanation
"""
|
ilyasku/jpkfile | notes_on_jpk_archives/notes_on_jpk.ipynb | mit | from zipfile import ZipFile
fname = "../examples/force-save-2016.06.15-13.17.08.jpk-force"
z = ZipFile(fname)
"""
Explanation: JPK archive
JPK files are zipped archives of data.
There is a header file at the top-level
Header files are normal text files, nothing special needed to read them
There is a segments folder
the segments folder contains numbered folders, one per segment
each folder in segments contains another header file
each folder in segments contains a folder named channels
each channels folder contains several data files
data files contain data in C short format, at least in my example file
there seems to be no header in the .dat files, only pure (integer, i.e. short) data
Reading data from JPK archives
1. Open the zipped archive using zipfile
End of explanation
"""
list_of_files = z.filelist
for f in list_of_files:
print f.filename
print list_of_files[0].filename
f = z.open(list_of_files[0].filename)
lines = f.readlines()
print lines[0]
print lines[1]
print lines[2]
"""
Explanation: You can get the list of files stored in the zip archive, and you can open files using the instance's open function
End of explanation
"""
from dateutil import parser
t = parser.parse(lines[0][1:])
print t
"""
Explanation: 2. Parse header files to dictionaries
As printed above, the first line of the top-level header.properties file contains date and time, preceded by a '#'.
The following lines contain properties of the form "key=value".
To extract the time, one can use dateutil.parser
End of explanation
"""
_properties = {}
for line in lines[1:]:
key, value = line.split("=")
value.strip()
_properties[key] = value
for p in _properties:
print p," = ",_properties[p]
"""
Explanation: The remainder of the lines should contain properties following the syntax mentioned above. They can easily be parsed to a dictionary.
End of explanation
"""
properties = {}
for line in lines[1:]:
key,value = line.split("=")
value = value.strip()
split_key = key.split(".")
d = properties
if len(split_key) > 1:
for s in split_key[:-1]:
if d.keys().count(s):
d = d[s]
else:
d[s] = {}
d = d[s]
d[split_key[-1]] = value
for p in properties:
print p, " = ",properties[p]
properties['force-scan-series']['header']['force-settings']['force-baseline-adjust-settings']
"""
Explanation: 2.1 Parsing properties into tree-like dictionary
Properties seem to have a tree-like structure, with node labels separated by dots. It appears more appropriate to parse them into a dictionary with sub-dictionaries recursively.
End of explanation
"""
fname = z.filelist[-6].filename
print fname
f = z.open(fname)
lines = f.readlines()
print(lines[0])
print(lines[1])
"""
Explanation: 2.2 Lower level header files appear to have a slightly different header with one additional line
So here one would have to skip one extra line at the start; apart from that, the format seems to be identical.
End of explanation
"""
from struct import unpack
fname = z.filelist[-12].filename
print fname
f = z.open(fname)
content = f.read()
print(len(content))
"""
Explanation: 3. Read data from files
Data files (.dat) contain data apparently exclusively in C short format. To convert it to python-compatible integers, use the struct module.
End of explanation
"""
content[0], content[1], content[2], content[3]
data = unpack(">i", content[0:4])
print data
"""
Explanation: According to the JPKay guys, every 4 items make one data point
End of explanation
"""
fname = z.filelist[-13].filename
print fname
f = z.open(fname)
lines = f.readlines()
properties = {}
for line in lines[2:]:
key,value = line.split("=")
value = value.strip()
split_key = key.split(".")
d = properties
if len(split_key) > 1:
for s in split_key[:-1]:
if d.keys().count(s):
d = d[s]
else:
d[s] = {}
d = d[s]
d[split_key[-1]] = value
print properties['channel']['height']['data']['type']
"""
Explanation: According to the struct.unpack documentation, however, every 2 bytes should make a data point in short format.
I don't get why the header says the format is short ...
End of explanation
"""
data = unpack(">h", content[0:2])
print data
"""
Explanation: ... and they still use 4 bytes instead of 2.
End of explanation
"""
properties['force-segment-header']['num-points']
"""
Explanation: With 120000 bytes per data file and apparently 60000 points (i.e. 2 bytes per point) ...
End of explanation
"""
|
dsacademybr/PythonFundamentos | Cap10/Mini-Projeto2-Solucao/Mini-Projeto2 - Analise1.ipynb | gpl-3.0 | # Versão da Linguagem Python
from platform import python_version
print('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())
# Imports
import os
import subprocess
import stat
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib as mat
import matplotlib.pyplot as plt
from datetime import datetime
sns.set(style="white")
%matplotlib inline
np.__version__
pd.__version__
sns.__version__
mat.__version__
# Dataset
clean_data_path = "dataset/autos.csv"
df = pd.read_csv(clean_data_path,encoding="latin-1")
"""
Explanation: <font color='blue'>Data Science Academy - Python Fundamentals - Chapter 9</font>
Download: http://github.com/dsacademybr
Mini-Project 2 - Exploratory Analysis of a Kaggle Dataset
Analysis 1
End of explanation
"""
# Create a plot of the distribution of vehicles by year of registration
fig, ax = plt.subplots(figsize=(8,6))
sns.histplot(df["yearOfRegistration"], color="#33cc33",kde=True, ax=ax)
ax.set_title('Distribuição de Veículos com base no Ano de Registro', fontsize= 15)
plt.ylabel("Densidade (KDE)", fontsize= 15)
plt.xlabel("Ano de Registro", fontsize= 15)
plt.show()
# Saving the plot
fig.savefig("plots/Analise1/vehicle-distribution.png")
"""
Explanation: Distribution of vehicles by year of registration
End of explanation
"""
# Create a boxplot to assess outliers
sns.set_style("whitegrid")
fig, ax = plt.subplots(figsize=(8,6))
sns.boxplot(x="vehicleType", y="price", data=df)
ax.text(5.25,27000,"Análise de Outliers",fontsize=18,color="r",ha="center", va="center")
ax.xaxis.set_label_text("Tipo de Veículo",fontdict= {'size':14})
ax.yaxis.set_label_text("Range de Preço",fontdict= {'size':14})
plt.show()
# Saving the plot
fig.savefig("plots/Analise1/price-vehicleType-boxplot.png")
"""
Explanation: Price range variation by vehicle type
End of explanation
"""
# Create a count plot showing the number of vehicles in each category
sns.set_style("white")
g = sns.catplot(x="vehicleType", data=df, kind="count", palette="BuPu", height=6, aspect=1.5)
g.ax.xaxis.set_label_text("Tipo de Veículo",fontdict= {'size':16})
g.ax.yaxis.set_label_text("Total de Veículos Para Venda", fontdict= {'size':16})
g.ax.set_title("Contagem total de veículos à venda conforme o tipo de veículo",fontdict= {'size':18})
# put the counts on top of each bar
for p in g.ax.patches:
g.ax.annotate((p.get_height()), (p.get_x()+0.1, p.get_height()+500))
# Saving the plot
g.savefig("plots/Analise1/count-vehicleType.png")
"""
Explanation: Total count of vehicles for sale by vehicle type
End of explanation
"""
|
PMEAL/OpenPNM | examples/reference/class_inheritance/creating_a_custom_phase.ipynb | mit | import numpy as np
import openpnm as op
pn = op.network.Cubic(shape=[3, 3, 3], spacing=1e-4)
print(pn)
"""
Explanation: Creating a Custom Phase
Creating a custom fluid using GenericPhase
OpenPNM comes with a small selection of pre-written phases (Air, Water, Mercury). In many cases users will want different options, but it is not feasible or productive to include a wide variety of fluids. Consequently, OpenPNM has a mechanism for creating custom phases for this scenario. This requires that the user have correlations for the properties of interest, such as the viscosity as a function of temperature in the form of a polynomial, for instance. This process is described in the following tutorial:
Import the usual packages and instantiate a small network for demonstration purposes:
End of explanation
"""
oil = op.phases.GenericPhase(network=pn)
print(oil)
"""
Explanation: Now that a network is defined, we can create a GenericPhase object associated with it. For this demo we'll make an oil phase, so let's call it oil:
End of explanation
"""
oil['pore.molecular_mass'] = 100.0 # g/mol
print(oil['pore.molecular_mass'])
"""
Explanation: As can be seen in the above printout, this phase has a temperature and pressure set at all locations, but has no other physical properties.
There are 2 ways to add physical properties: they can be hard-coded, or added as a 'pore-scale model'.
- Some are suitable as hard coded values, such as molecular mass
- Others should be added as a model, such as viscosity, which is a function of temperature so could vary spatially and should be updated depending on changing conditions in the simulation.
Start with hard-coding:
End of explanation
"""
oil['pore.molecular_mass'] = np.ones(shape=[pn.Np, ])*120.0
print(oil['pore.molecular_mass'])
"""
Explanation: As can be seen, this puts the value of 100.0 g/mol in every pore. Note that you could also assign each pore explicitly with a numpy array. OpenPNM automatically assigns a scalar value to every location as shown above.
End of explanation
"""
oil['pore.viscosity'] = 1600.0 # cP
"""
Explanation: You can also specify something like viscosity this way as well, but it's not recommended:
End of explanation
"""
oil['pore.temperature'] = 100.0 # C
print(oil['pore.viscosity'])
"""
Explanation: The problem with specifying the viscosity as a hard-coded value is that viscosity is a function of temperature (among other things), so if we adjust the temperature on the oil object it will have no effect on the hard-coded viscosity:
End of explanation
"""
mod = op.models.misc.polynomial
oil.add_model(propname='pore.viscosity', model=mod,
a=[1600, 12, -0.05], prop='pore.temperature')
"""
Explanation: The correct way to specify something like viscosity is to use pore-scale models. There is a large libary of pre-written models in the openpnm.models submodule. For instance, a polynomial can be used as follows:
$$ viscosity = a_0 + a_1 \cdot T + a_2 \cdot T^2 = 1600 + 12 T - 0.05 T^2$$
End of explanation
"""
print(oil['pore.viscosity'])
"""
Explanation: We can now see that our previously written values of viscosity (1600.0) have been overwritten by the values coming from the model:
End of explanation
"""
oil['pore.temperature'] = 40.0 # C
oil.regenerate_models()
print(oil['pore.viscosity'])
"""
Explanation: And moreover, if we change the temperature the model will update the viscosity values:
End of explanation
"""
print(oil.models)
"""
Explanation: Note the call to regenerate_models, which is necessary to actually re-run the model using the new temperature.
When a pore-scale model is added to an object, it is stored under the models attribute, which is a dictionary with names corresponding to the property that is being calculated (i.e. 'pore.viscosity'):
End of explanation
"""
oil.models['pore.viscosity']['a'] = [1200, 10, -0.02]
oil.regenerate_models()
print(oil['pore.viscosity'])
"""
Explanation: We can reach into this dictionary and alter the parameters of the model if necessary:
End of explanation
"""
|
QuantScientist/Deep-Learning-Boot-Camp | day03/Advanced_Keras_Tutorial/2.0. AutoEncoders and Embeddings.ipynb | mit | from keras.layers import Input, Dense
from keras.models import Model
from keras.datasets import mnist
import numpy as np
# this is the size of our encoded representations
encoding_dim = 32 # 32 floats -> compression of factor 24.5, assuming the input is 784 floats
# this is our input placeholder
input_img = Input(shape=(784,))
# "encoded" is the encoded representation of the input
encoded = Dense(encoding_dim, activation='relu')(input_img)
# "decoded" is the lossy reconstruction of the input
decoded = Dense(784, activation='sigmoid')(encoded)
# this model maps an input to its reconstruction
autoencoder = Model(input_img, decoded)
# this model maps an input to its encoded representation
encoder = Model(input_img, encoded)
# create a placeholder for an encoded (32-dimensional) input
encoded_input = Input(shape=(encoding_dim,))
# retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# create the decoder model
decoder = Model(encoded_input, decoder_layer(encoded_input))
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
#note: x_train, x_train :)
autoencoder.fit(x_train, x_train,
epochs=50,
batch_size=256,
shuffle=True,
validation_data=(x_test, x_test))
"""
Explanation: Unsupervised learning
AutoEncoders
An autoencoder is an artificial neural network used for learning efficient codings.
The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for the purpose of dimensionality reduction.
<img src="../imgs/autoencoder.png" width="25%">
Unsupervised learning is a type of machine learning algorithm used to draw inferences from datasets consisting of input data without labeled responses. The most common unsupervised learning method is cluster analysis, which is used for exploratory data analysis to find hidden patterns or grouping in data.
Reference
Based on https://blog.keras.io/building-autoencoders-in-keras.html
Introducing Keras Functional API
The Keras functional API is the way to go for defining complex models, such as multi-output models, directed acyclic graphs, or models with shared layers.
The entire Functional API relies on the fact that each keras.Layer object is a callable object!
See 8.2 Multi-Modal Networks for further details.
End of explanation
"""
from matplotlib import pyplot as plt
%matplotlib inline
encoded_imgs = encoder.predict(x_test)
decoded_imgs = decoder.predict(encoded_imgs)
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
# original
ax = plt.subplot(2, n, i + 1)
plt.imshow(x_test[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# reconstruction
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
"""
Explanation: Testing the Autoencoder
End of explanation
"""
encoded_imgs = np.random.rand(10,32)
decoded_imgs = decoder.predict(encoded_imgs)
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
# generation
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
"""
Explanation: Sample generation with Autoencoder
End of explanation
"""
from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model
from keras import backend as K
input_img = Input(shape=(28, 28, 1)) # adapt this if using `channels_first` image data format
x = Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)
# at this point the representation is (4, 4, 8) i.e. 128-dimensional
x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
conv_autoencoder = Model(input_img, decoded)
conv_autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
from keras import backend as K
if K.image_data_format() == 'channels_last':
shape_ord = (28, 28, 1)
else:
shape_ord = (1, 28, 28)
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = np.reshape(x_train, ((x_train.shape[0],) + shape_ord))
x_test = np.reshape(x_test, ((x_test.shape[0],) + shape_ord))
x_train.shape
from keras.callbacks import TensorBoard
batch_size=128
steps_per_epoch = np.int(np.floor(x_train.shape[0] / batch_size))
conv_autoencoder.fit(x_train, x_train, epochs=50, batch_size=128,
shuffle=True, validation_data=(x_test, x_test),
callbacks=[TensorBoard(log_dir='./tf_autoencoder_logs')])
decoded_imgs = conv_autoencoder.predict(x_test)
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i+1)
plt.imshow(x_test[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + n + 1)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
"""
Explanation: Convolutional AutoEncoder
Since our inputs are images, it makes sense to use convolutional neural networks (convnets) as encoders and decoders.
In practical settings, autoencoders applied to images are always convolutional autoencoders --they simply perform much better.
The encoder will consist in a stack of Conv2D and MaxPooling2D layers (max pooling being used for spatial down-sampling), while the decoder will consist in a stack of Conv2D and UpSampling2D layers.
End of explanation
"""
conv_encoder = Model(input_img, encoded)
encoded_imgs = conv_encoder.predict(x_test)
n = 10
plt.figure(figsize=(20, 8))
for i in range(n):
ax = plt.subplot(1, n, i+1)
plt.imshow(encoded_imgs[i].reshape(4, 4 * 8).T)
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
"""
Explanation: We could also have a look at the 128-dimensional encoded middle representation
End of explanation
"""
# Use the encoder to pretrain a classifier
"""
Explanation: Pretraining encoders
One of the powerful tools of auto-encoders is using the encoder to generate meaningful representations from the feature vectors.
End of explanation
"""
from keras.datasets import mnist
import numpy as np
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = np.reshape(x_train, (len(x_train), 28, 28, 1)) # adapt this if using `channels_first` image data format
x_test = np.reshape(x_test, (len(x_test), 28, 28, 1)) # adapt this if using `channels_first` image data format
noise_factor = 0.5
x_train_noisy = x_train + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_train.shape)
x_test_noisy = x_test + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_test.shape)
x_train_noisy = np.clip(x_train_noisy, 0., 1.)
x_test_noisy = np.clip(x_test_noisy, 0., 1.)
"""
Explanation: Application to Image Denoising
Let's put our convolutional autoencoder to work on an image denoising problem. It's simple: we will train the autoencoder to map noisy digit images to clean digit images.
Here's how we will generate synthetic noisy digits: we just apply a Gaussian noise matrix and clip the images between 0 and 1.
End of explanation
"""
n = 10
plt.figure(figsize=(20, 2))
for i in range(n):
ax = plt.subplot(1, n, i+1)
plt.imshow(x_test_noisy[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
"""
Explanation: Here's what the noisy digits look like:
End of explanation
"""
from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model
from keras.callbacks import TensorBoard
input_img = Input(shape=(28, 28, 1)) # adapt this if using `channels_first` image data format
x = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)
# at this point the representation is (7, 7, 32)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
"""
Explanation: Question
If you squint you can still recognize them, but barely.
Can our autoencoder learn to recover the original digits? Let's find out.
Compared to the previous convolutional autoencoder, in order to improve the quality of the reconstructed, we'll use a slightly different model with more filters per layer:
End of explanation
"""
autoencoder.fit(x_train_noisy, x_train,
epochs=100,
batch_size=128,
shuffle=True,
validation_data=(x_test_noisy, x_test),
callbacks=[TensorBoard(log_dir='/tmp/autoencoder_denoise',
histogram_freq=0, write_graph=False)])
"""
Explanation: Let's train the AutoEncoder for 100 epochs
End of explanation
"""
decoded_imgs = autoencoder.predict(x_test_noisy)
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i+1)
plt.imshow(x_test[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + n + 1)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
"""
Explanation: Now Let's Take a look....
End of explanation
"""
batch_size = 100
original_dim = 784
latent_dim = 2
intermediate_dim = 256
epochs = 50
epsilon_std = 1.0
x = Input(batch_shape=(batch_size, original_dim))
h = Dense(intermediate_dim, activation='relu')(x)
z_mean = Dense(latent_dim)(h)
z_log_sigma = Dense(latent_dim)(h)
"""
Explanation: Variational AutoEncoder
(Reference https://blog.keras.io/building-autoencoders-in-keras.html)
Variational autoencoders are a slightly more modern and interesting take on autoencoding.
What is a variational autoencoder?
It's a type of autoencoder with added constraints on the encoded representations being learned.
More precisely, it is an autoencoder that learns a latent variable model for its input data.
So instead of letting your neural network learn an arbitrary function, you are learning the parameters of a probability distribution modeling your data.
If you sample points from this distribution, you can generate new input data samples:
a VAE is a "generative model".
How does a variational autoencoder work?
First, an encoder network turns the input samples $x$ into two parameters in a latent space, which we will note $z_{\mu}$ and $z_{log_{\sigma}}$.
Then, we randomly sample similar points $z$ from the latent normal distribution that is assumed to generate the data, via $z = z_{\mu} + \exp(z_{log_{\sigma}}) * \epsilon$, where $\epsilon$ is a random normal tensor.
Finally, a decoder network maps these latent space points back to the original input data.
The parameters of the model are trained via two loss functions:
a reconstruction loss forcing the decoded samples to match the initial inputs (just like in our previous autoencoders);
and the KL divergence between the learned latent distribution and the prior distribution, acting as a regularization term.
You could actually get rid of this latter term entirely, although it does help in learning well-formed latent spaces and reducing overfitting to the training data.
Encoder Network
End of explanation
"""
from keras.layers.core import Lambda
from keras import backend as K
def sampling(args):
z_mean, z_log_sigma = args
epsilon = K.random_normal(shape=(batch_size, latent_dim),
mean=0., stddev=epsilon_std)
return z_mean + K.exp(z_log_sigma) * epsilon
# note that "output_shape" isn't necessary with the TensorFlow backend
# so you could write `Lambda(sampling)([z_mean, z_log_sigma])`
z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_sigma])
"""
Explanation: We can use these parameters to sample new similar points from the latent space:
End of explanation
"""
decoder_h = Dense(intermediate_dim, activation='relu')
decoder_mean = Dense(original_dim, activation='sigmoid')
h_decoded = decoder_h(z)
x_decoded_mean = decoder_mean(h_decoded)
"""
Explanation: Decoder Network
Finally, we can map these sampled latent points back to reconstructed inputs:
End of explanation
"""
# end-to-end autoencoder
vae = Model(x, x_decoded_mean)
# encoder, from inputs to latent space
encoder = Model(x, z_mean)
# generator, from latent space to reconstructed inputs
decoder_input = Input(shape=(latent_dim,))
_h_decoded = decoder_h(decoder_input)
_x_decoded_mean = decoder_mean(_h_decoded)
generator = Model(decoder_input, _x_decoded_mean)
"""
Explanation: What we've done so far allows us to instantiate 3 models:
an end-to-end autoencoder mapping inputs to reconstructions
an encoder mapping inputs to the latent space
a generator that can take points on the latent space and will output the corresponding reconstructed samples.
End of explanation
"""
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(vae).create(prog='dot', format='svg'))
## Exercise: Let's Do the Same for `encoder` and `generator` Model(s)
"""
Explanation: Let's Visualise the VAE Model
End of explanation
"""
from keras.objectives import binary_crossentropy
def vae_loss(x, x_decoded_mean):
xent_loss = binary_crossentropy(x, x_decoded_mean)
kl_loss = - 0.5 * K.mean(1 + z_log_sigma - K.square(z_mean) - K.exp(z_log_sigma), axis=-1)
return xent_loss + kl_loss
vae.compile(optimizer='rmsprop', loss=vae_loss)
"""
Explanation: VAE on MNIST
We train the model using the end-to-end model, with a custom loss function: the sum of a reconstruction term, and the KL divergence regularization term.
End of explanation
"""
from keras.datasets import mnist
import numpy as np
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
vae.fit(x_train, x_train,
shuffle=True,
epochs=epochs,
batch_size=batch_size,
validation_data=(x_test, x_test))
"""
Explanation: Training on MNIST Digits
End of explanation
"""
x_test_encoded = encoder.predict(x_test, batch_size=batch_size)
plt.figure(figsize=(6, 6))
plt.scatter(x_test_encoded[:, 0], x_test_encoded[:, 1], c=y_test)
plt.colorbar()
plt.show()
"""
Explanation: Because our latent space is two-dimensional, there are a few cool visualizations that can be done at this point.
One is to look at the neighborhoods of different classes on the latent 2D plane:
End of explanation
"""
# display a 2D manifold of the digits
n = 15 # figure with 15x15 digits
digit_size = 28
figure = np.zeros((digit_size * n, digit_size * n))
# we will sample n points within [-15, 15] standard deviations
grid_x = np.linspace(-15, 15, n)
grid_y = np.linspace(-15, 15, n)
for i, yi in enumerate(grid_x):
for j, xi in enumerate(grid_y):
z_sample = np.array([[xi, yi]]) * epsilon_std
x_decoded = generator.predict(z_sample)
digit = x_decoded[0].reshape(digit_size, digit_size)
figure[i * digit_size: (i + 1) * digit_size,
j * digit_size: (j + 1) * digit_size] = digit
plt.figure(figsize=(10, 10))
plt.imshow(figure)
plt.show()
"""
Explanation: Each of these colored clusters is a type of digit. Close clusters are digits that are structurally similar (i.e. digits that share information in the latent space).
Because the VAE is a generative model, we can also use it to generate new digits! Here we will scan the latent plane, sampling latent points at regular intervals, and generating the corresponding digit for each of these points. This gives us a visualization of the latent manifold that "generates" the MNIST digits.
End of explanation
"""
|
ehongdata/Network-Analysis-Made-Simple | 7. Bipartite Graphs (Instructor).ipynb | mit | import networkx as nx
from networkx.algorithms import bipartite
# Initialize the city/person bipartite graph.
B = nx.Graph()
cities = ['Beijing', "Xi'an", 'Vancouver', 'San Francisco', 'Austin', 'Boston'] # populate a list of cities
people = ['Eric', 'Nan'] # populate a list of people's names
B.add_nodes_from(cities, bipartite='cities')
B.add_nodes_from(people, bipartite='people')
edges = [("Eric", "Vancouver"), ("Nan", "Xi'an"), ("Eric", "San Francisco"), ("Nan", 'Boston'), ("Eric", 'Boston'), ("Nan", 'Beijing')] # populate a list of 2-tuples, which are the edges. Each 2-tuple should join one city with one person.
B.add_edges_from(edges)
"""
Explanation: Introduction
Bipartite graphs are graphs that have two (bi-) partitions (-partite) of nodes. Nodes within each partition are not allowed to be connected to one another; rather, they can only be connected to nodes in the other partition.
Bipartite graphs can be useful for modelling relations between two sets of entities. We will explore the construction and analysis of bipartite graphs here.
Class Exercise
Earlier on in class, I had asked you to write down a list of cities that you have visited. Now, I would like you to go about creating the bipartite graph of people & cities in the class. You may wish to get up from your seats to finish this last task.
End of explanation
"""
# Betweenness Centrality
bipartite.betweenness_centrality(B, cities)
# Degree Centrality
bipartite.degree_centrality(B, cities)
"""
Explanation: Explore the graph by going through the following algorithms
End of explanation
"""
bipartite.projected_graph(B, people).edges()
"""
Explanation: Think about it...
Which metric is the better indicator of city popularity? How about how well-travelled an individual is?
Projection to a uni-partite graph
A bipartite graph can be projected down to a unipartite graph. The projection joins nodes of the same partition, based on their connections to nodes in the other partition. For example, if A is joined to 1 and B is joined to 1, then A and B will be joined in the uni-partite graph projection.
Exercise
Use the bipartite.projected_graph(G, nodes) function to construct the 'people' uni-partite projection.
Hint: bipartite.projected_graph(G, nodes) returns a NetworkX Graph or MultiGraph (with multiple edges between nodes).
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.20/_downloads/6f729e8febc223b1d7003304f207eea9/plot_40_epochs_to_data_frame.ipynb | bsd-3-clause | import os
import seaborn as sns
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_filt-0-40_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
"""
Explanation: Exporting Epochs to Pandas DataFrames
This tutorial shows how to export the data in :class:~mne.Epochs objects to a
:class:Pandas DataFrame <pandas.DataFrame>, and applies a typical Pandas
:doc:split-apply-combine <pandas:user_guide/groupby> workflow to examine the
latencies of the response maxima across epochs and conditions.
:depth: 2
We'll use the sample-dataset dataset, but load a version of the raw file
that has already been filtered and downsampled, and has an average reference
applied to its EEG channels. As usual we'll start by importing the modules we
need and loading the data:
End of explanation
"""
sample_data_events_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_filt-0-40_raw-eve.fif')
events = mne.read_events(sample_data_events_file)
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4}
reject_criteria = dict(mag=3000e-15, # 3000 fT
grad=3000e-13, # 3000 fT/cm
eeg=100e-6, # 100 µV
eog=200e-6) # 200 µV
tmin, tmax = (-0.2, 0.5) # epoch from 200 ms before event to 500 ms after it
baseline = (None, 0) # baseline period from start of epoch to time=0
epochs = mne.Epochs(raw, events, event_dict, tmin, tmax, proj=True,
baseline=baseline, reject=reject_criteria, preload=True)
del raw
"""
Explanation: Next we'll load a list of events from file, map them to condition names with
an event dictionary, set some signal rejection thresholds (cf.
tut-reject-epochs-section), and segment the continuous data into
epochs:
End of explanation
"""
df = epochs.to_data_frame()
df.iloc[:5, :10]
"""
Explanation: Converting an Epochs object to a DataFrame
Once we have our :class:~mne.Epochs object, converting it to a
:class:~pandas.DataFrame is simple: just call :meth:epochs.to_data_frame()
<mne.Epochs.to_data_frame>. Each channel's data will be a column of the new
:class:~pandas.DataFrame, alongside three additional columns of event name,
epoch number, and sample time. Here we'll just show the first few rows and
columns:
End of explanation
"""
df = epochs.to_data_frame(time_format=None,
scalings=dict(eeg=1, mag=1, grad=1))
df.iloc[:5, :10]
"""
Explanation: Scaling time and channel values
By default, time values are converted from seconds to milliseconds and
then rounded to the nearest integer; if you don't want this, you can pass
time_format=None to keep time as a :class:float value in seconds, or
convert it to a :class:~pandas.Timedelta value via
time_format='timedelta'.
Note also that, by default, channel measurement values are scaled so that EEG
data are converted to µV, magnetometer data are converted to fT, and
gradiometer data are converted to fT/cm. These scalings can be customized
through the scalings parameter, or suppressed by passing
scalings=dict(eeg=1, mag=1, grad=1).
End of explanation
"""
df = epochs.to_data_frame(index=['condition', 'epoch'],
time_format='timedelta')
df.iloc[:5, :10]
"""
Explanation: Notice that the time values are no longer integers, and the channel values
have changed by several orders of magnitude compared to the earlier
DataFrame.
Setting the index
It is also possible to move one or more of the indicator columns (event name,
epoch number, and sample time) into the index <pandas:indexing>, by
passing a string or list of strings as the index parameter. We'll also
demonstrate here the effect of time_format='timedelta', yielding
:class:~pandas.Timedelta values in the "time" column.
End of explanation
"""
long_df = epochs.to_data_frame(time_format=None, index='condition',
long_format=True)
long_df.head()
"""
Explanation: Wide- versus long-format DataFrames
Another parameter, long_format, determines whether each channel's data is
in a separate column of the :class:~pandas.DataFrame
(long_format=False), or whether the measured values are pivoted into a
single 'value' column with an extra indicator column for the channel name
(long_format=True). Passing long_format=True will also create an
extra column ch_type indicating the channel type.
End of explanation
"""
channels = ['MEG 1332', 'MEG 1342']
data = long_df.loc['auditory/left'].query('channel in @channels')
# convert channel column (CategoryDtype → string; for a nicer-looking legend)
data['channel'] = data['channel'].astype(str)
sns.lineplot(x='time', y='value', hue='channel', data=data)
"""
Explanation: Generating the :class:~pandas.DataFrame in long format can be helpful when
using other Python modules for subsequent analysis or plotting. For example,
here we'll take data from the "auditory/left" condition, pick a couple MEG
channels, and use :func:seaborn.lineplot to automatically plot the mean and
confidence band for each channel, with confidence computed across the epochs
in the chosen condition:
End of explanation
"""
df = epochs.to_data_frame(time_format=None)
peak_latency = (df.filter(regex=r'condition|epoch|MEG 1332|MEG 2123')
.groupby(['condition', 'epoch'])
.aggregate(lambda x: df['time'].iloc[x.idxmax()])
.reset_index()
.melt(id_vars=['condition', 'epoch'],
var_name='channel',
value_name='latency of peak')
)
ax = sns.violinplot(x='channel', y='latency of peak', hue='condition',
data=peak_latency, palette='deep', saturation=1)
"""
Explanation: We can also now use all the power of Pandas for grouping and transforming our
data. Here, we find the latency of peak activation of 2 gradiometers (one
near auditory cortex and one near visual cortex), and plot the distribution
of the timing of the peak in each channel as a :func:~seaborn.violinplot:
End of explanation
"""
|
mapagron/Boot_camp | hw6/Homework#6.ipynb | gpl-3.0 | # Dependencies
import json
import requests as req
import random
import seaborn as sns
import pandas as pd
import math as math
import time
import numpy as np
import matplotlib.pyplot as plt
from citipy import citipy
"""
Explanation: In this example, you'll be creating a Python script to visualize the weather of 500+ cities across the world of varying distance from the equator. To accomplish this, you'll be utilizing a simple Python library, the OpenWeatherMap API, and a little common sense to create a representative model of weather across world cities.
Your objective is to build a series of scatter plots to showcase the following relationships:
Temperature (F) vs. Latitude
Humidity (%) vs. Latitude
Cloudiness (%) vs. Latitude
Wind Speed (mph) vs. Latitude
Your final notebook must:
Randomly select at least 500 unique (non-repeat) cities based on latitude and longitude.
Perform a weather check on each of the cities using a series of successive API calls.
Include a print log of each city as it's being processed with the city number, city name, and requested URL.
Save both a CSV of all data retrieved and png images for each scatter plot.
1. Latitude and Longitude random coordinates
500+ cities across the world of varying distance from the equator
1.1. Basic Definitions
a) Latitude: When looking at a map, latitude lines run horizontally. Degrees latitude are numbered from 0° to 90° north and south. Zero degrees is the equator, the imaginary line which divides our planet into the northern and southern hemispheres. 90° north is the North Pole and 90° south is the South Pole.
b) Longitude: The vertical longitude lines are also known as meridians. They converge at the poles and are widest at the equator (about 69 miles or 111 km apart). Zero degrees longitude is located at Greenwich, England (0°). The degrees continue 180° east and 180° west where they meet and form the International Date Line in the Pacific Ocean.
End of explanation
"""
import random
import sys
import math
latitude = 0.794501
longitude = -0.752568
file_n = 'random_lat_lon.csv'
num_rows = 1500
def generate_random_data(lat, lon, num_rows, file_name):
with open(file_name, 'w') as output:
for i in range(num_rows):
random_value = random.random()
hex1 = '%012x' % random.randrange(16**12)
flt = float(random.randint(0,100))
dec_lat = random.uniform(-90,90)
dec_lon = random.uniform(-180,180)
output.write('%s,%.1f,%.6f,%.6f \n' % (hex1.lower(), flt, lon+dec_lon, lat+dec_lat))
generate_random_data(latitude, longitude, num_rows, file_n)
"""
Explanation: 1.2. Retrieve random lat-long data (to be saved as a CSV file)
a) Since citipy requires a latitude and longitude for each call, we need to generate them randomly.
Source: https://stackoverflow.com/questions/30246435/generate-random-data-with-lat-long
End of explanation
"""
# Import the csv into a pandas DataFrame and naming the columns
randLatLon_dataframe = pd.read_csv("random_lat_lon.csv", names = ["hex1.lower", "flt", "lon", "lat"])
LatandLongTuple = list(zip(randLatLon_dataframe.lat, randLatLon_dataframe.lon))
LatandLongTuple
# get the cities nearest these coordinates
cities = []
from citipy import citipy
for coordinateTuple in LatandLongTuple:
lat, lon = coordinateTuple # tuple allows natural split
cities.append(citipy.nearest_city(lat, lon))
print(citipy.nearest_city(lat, lon).city_name + ", " + citipy.nearest_city(lat,lon).country_code)
# to make sure we have unique cities
uniqueCities = set(cities)
len(uniqueCities)
#print(uniqueCities)
# create a df using list comprehensions
df = pd.DataFrame({'city_name': [item.city_name for item in uniqueCities], 'country_code': [item.country_code for item in uniqueCities]})
df.head()
#Checking columns
#df.head()
#df.columns
#checked
"""
Explanation: 2. Getting cities names (citypi)
End of explanation
"""
df['Temp'] = ""
df['Humidity'] = ""
df['Cloudiness'] = ""
df['Wind Speed'] = ""
df["Lat"] =""
df["Long"] =""
#Parameters for the call (target_url; size; and values)
api_key="86b081d84d2cb22a16322fc70a11869f"
# Set the sample size.
sample_size = 500
target_url = 'http://api.openweathermap.org/data/2.5/weather?q='
units = 'imperial'
#Option#2 - ApiCall
import json
record = 0
for index, row in df.iterrows():
city_name = row['city_name']
country_code = row['country_code']
#api_key="86b081d84d2cb22a16322fc70a11869f"
#target_url = 'http://api.openweathermap.org/data/2.5/weather?q='
#units = "imperial" #for Fahrenheit
url = target_url + city_name + ',' + country_code + '&units=' + units + '&APPID=' + api_key
print (url)
try:
weather_response = req.get(url)
weather_json = weather_response.json()
latitude = weather_json["coord"]["lat"]
longitude = weather_json["coord"]["lon"]
temp = weather_json["main"]["temp"]
humidity = weather_json["main"]["humidity"]
cloud = weather_json["clouds"]["all"]
wind = weather_json["wind"]["speed"]
df.set_value(index,"Temp", temp)
df.set_value(index,"Humidity",humidity)
df.set_value(index,"Wind Speed", wind)
df.set_value(index,"Cloudiness",cloud)
df.set_value(index,"Lat", latitude)
df.set_value(index,"Long",longitude)
print("Retrieved data for %s, %s" % (city_name, country_code))
except:
print("No data for %s, %s" % (city_name,country_code))
record += 1
if record % 59 == 0:
time.sleep(60)
df.head()
df.columns
df["Temp"]
df['Temp'] = df['Temp'].astype('float64')
df["Temp"]
# data to csv
import os
csvPath = os.path.join("WeatherPyCopy.csv")
df.to_csv(csvPath)
"""
Explanation: 3. Retrieving data from the OpenWeatherMap API
End of explanation
"""
import seaborn as sns; sns.set(color_codes=True)
%pylab notebook
df['Temp'] = pd.to_numeric(df['Temp'], errors = 'coerce') #coerce - For example - "a" changed to Null non error
df['Lat'] = pd.to_numeric(df['Lat'], errors = 'coerce')
df['Cloudiness'] = pd.to_numeric(df['Cloudiness'], errors = 'coerce')
df['Wind Speed'] = pd.to_numeric(df['Wind Speed'], errors = 'coerce')
plt.scatter(df["Lat"], df["Temp"], marker="o", facecolors="red", edgecolors="black", alpha=0.4)
plt.xlim(-90,90)
plt.xlabel("Latitude")
plt.ylabel("Temperature (Fahrenheit)")
plt.axvline(0, c="k", alpha=.1)
plt.axvline(23.5, c="k", alpha=.05)
plt.axvline(-23.5, c="k", alpha=.05)
plt.title("Temperatures Vs Latitude")
plt.gcf().text(.50, .24, "Equator", fontsize=8, rotation="vertical")
plt.show()
plt.savefig("LatVsTem.png")
x= df["Lat"]
y= df["Humidity"]
plt.scatter (x,y,edgecolor = 'black')
plt.xlim(-90,90)
plt.title('Humidity (%) vs. Latitude')
plt.ylabel("Humidity (%)")
plt.xlabel('Latitude')
plt.axvline(0, c="k", alpha=.1)
plt.gcf().text(.50, .24, "Equator", fontsize=8, rotation="vertical")
plt.show()
plt.savefig("LatVsHum.png")
x= df["Lat"]
y= df["Cloudiness"]
plt.scatter (x,y,edgecolor = 'black')
plt.xlim(-90,90)
plt.title('Cloudiness (%) vs. Latitude')
plt.ylabel("Cloudiness (%)")
plt.xlabel('Latitude')
plt.axvline(0, c="k", alpha=.1)
plt.gcf().text(.50, .24, "Equator", fontsize=8, rotation="vertical")
plt.show()
plt.savefig("LatVsCloud.png")
x= df["Lat"]
y= df["Wind Speed"]
plt.scatter (x,y,edgecolor = 'black')
plt.xlim(-90,90)
plt.title('Wind Speed (%) vs. Latitude')
plt.ylabel("Wind Speed (%)")
plt.xlabel('Latitude')
plt.axvline(0, c="k", alpha=.1)
plt.gcf().text(.50, .24, "Equator", fontsize=8, rotation="vertical")
plt.show()
plt.savefig("WindSpeedvsLatitude.png")
"""
Explanation: 4. Visual Comparison
End of explanation
"""
#Trying with seaborn
df['Temp'] = pd.to_numeric(df['Temp'], errors = 'coerce') #coerce - For example - "a" changed to Null non error
df['Lat'] = pd.to_numeric(df['Lat'], errors = 'coerce')
df['Cloudiness'] = pd.to_numeric(df['Cloudiness'], errors = 'coerce')
df['Wind Speed'] = pd.to_numeric(df['Wind Speed'], errors = 'coerce')
g = sns.lmplot(x= df["Temp"],
y= df["Lat"],
#hue="type",
fit_reg=False,
data= Temp)
"""
Explanation: Documentation
End of explanation
"""
|
AAbercrombie0492/satellite_imagery_feature_detection | notebooks/data_wrangling/interface_Spark_with_EC2.ipynb | mit | # Environment at time of execution
%load_ext watermark
%pylab inline
%watermark -a "Anthony Abercrombie" -d -t -v -p numpy,pandas,matplotlib -g
from __future__ import print_function
import os
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import dotenv
import os
import sys
import dotenv
import subprocess
import glob
from tqdm import tqdm
#File path to get to the project root
PROJ_ROOT = os.path.join(os.path.pardir, os.pardir)
# add local python functions
sys.path.append(os.path.join(PROJ_ROOT, "src"))
#Load AWS keys as environment variables
dotenv_path = os.path.join(PROJ_ROOT, '.env')
dotenv.load_dotenv(dotenv_path)
AWS_ACCESS_KEY = os.environ.get("AWS_ACCESS_KEY")
AWS_SECRET_ACCESS_KEY = os.environ.get("AWS_SECRET_ACCESS_KEY")
SPARK_HOME = os.environ.get("SPARK_HOME")
ec2_keypair = os.environ.get("ec2_keypair")
ec2_keypair_pem = os.environ.get("ec2_keypair_pem")
from __future__ import print_function
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# Load the "autoreload" extension
%load_ext autoreload
# always reload modules marked with "%aimport"
%autoreload 1
"""
Explanation: Import Dependencies
End of explanation
"""
SPARK_HOME
def spinup_spark_ec2(SPARK_HOME, keypair, keyfile, num_slaves, cluster_name):
bash_command = '{}/ec2/spark-ec2 -k {} -i {} -s {} launch {}'.format(SPARK_HOME, keypair, keyfile, num_slaves, cluster_name)
return bash_command
args = (SPARK_HOME, ec2_keypair, ec2_keypair_pem, 1, 'spark_ec2_cluster')
x = spinup_spark_ec2(*args)
x
# Usage template: spark-ec2 -k <keypair> -i <key-file> -s <num-slaves> launch <cluster-name>
"""
Explanation: where is spark-ec2?
End of explanation
"""
def connect_master_node(SPARK_HOME, keypair, keyfile, region,cluster_name):
bash_cmd = '{}/ec2/spark-ec2 -k {} -i {} --region={} login {}'.format(SPARK_HOME, keypair, keyfile, region,cluster_name)
return bash_cmd
args = (SPARK_HOME, ec2_keypair, ec2_keypair_pem, 'us-west-2b', 'spark_ec2_cluster')
y = connect_master_node(*args)
y
"""
Explanation: Numerical DataFlow with Spark and Tensorflow
checkout tensorframes from databricks
```python
df = sqlContext.createDataFrame()
x = tf.placeholder(tf.int32, name='x')
y = tf.placeholder(tf.int32, name='y')
output = tf.add(x, 3 * y, name='z')
session = tf.Session()
output_value = session.run(output, {x:3, y:5})
output_df = tfs.map_rows(output, df)
output_df.collect()
```
Connect to master node
End of explanation
"""
|
AhmetHamzaEmra/Deep-Learning-Specialization-Coursera | Convolutional Neural Networks/Face+Recognition+for+the+Happy+House+-+v3.ipynb | mit | from keras.models import Sequential
from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate
from keras.models import Model
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import MaxPooling2D, AveragePooling2D
from keras.layers.merge import Concatenate
from keras.layers.core import Lambda, Flatten, Dense
from keras.initializers import glorot_uniform
from keras.engine.topology import Layer
from keras import backend as K
K.set_image_data_format('channels_first')
import cv2
import os
import numpy as np
from numpy import genfromtxt
import pandas as pd
import tensorflow as tf
from fr_utils import *
from inception_blocks_v2 import *
%matplotlib inline
%load_ext autoreload
%autoreload 2
np.set_printoptions(threshold=np.nan)
"""
Explanation: Face Recognition for the Happy House
Welcome to the first assignment of week 4! Here you will build a face recognition system. Many of the ideas presented here are from FaceNet. In lecture, we also talked about DeepFace.
Face recognition problems commonly fall into two categories:
Face Verification - "is this the claimed person?". For example, at some airports, you can pass through customs by letting a system scan your passport and then verifying that you (the person carrying the passport) are the correct person. A mobile phone that unlocks using your face is also using face verification. This is a 1:1 matching problem.
Face Recognition - "who is this person?". For example, the video lecture showed a face recognition video (https://www.youtube.com/watch?v=wr4rx0Spihs) of Baidu employees entering the office without needing to otherwise identify themselves. This is a 1:K matching problem.
FaceNet learns a neural network that encodes a face image into a vector of 128 numbers. By comparing two such vectors, you can then determine if two pictures are of the same person.
In this assignment, you will:
- Implement the triplet loss function
- Use a pretrained model to map face images into 128-dimensional encodings
- Use these encodings to perform face verification and face recognition
In this exercise, we will be using a pre-trained model which represents ConvNet activations using a "channels first" convention, as opposed to the "channels last" convention used in lecture and previous programming assignments. In other words, a batch of images will be of shape $(m, n_C, n_H, n_W)$ instead of $(m, n_H, n_W, n_C)$. Both of these conventions have a reasonable amount of traction among open-source implementations; there isn't a uniform standard yet within the deep learning community.
Let's load the required packages.
End of explanation
"""
FRmodel = faceRecoModel(input_shape=(3, 96, 96))
print("Total Params:", FRmodel.count_params())
"""
Explanation: 0 - Naive Face Verification
In Face Verification, you're given two images and you have to tell if they are of the same person. The simplest way to do this is to compare the two images pixel-by-pixel. If the distance between the raw images are less than a chosen threshold, it may be the same person!
<img src="images/pixel_comparison.png" style="width:380px;height:150px;">
<caption><center> <u> <font color='purple'> Figure 1 </u></center></caption>
Of course, this algorithm performs really poorly, since the pixel values change dramatically due to variations in lighting, orientation of the person's face, even minor changes in head position, and so on.
You'll see that rather than using the raw image, you can learn an encoding $f(img)$ so that element-wise comparisons of this encoding give more accurate judgements as to whether two pictures are of the same person.
1 - Encoding face images into a 128-dimensional vector
1.1 - Using an ConvNet to compute encodings
The FaceNet model takes a lot of data and a long time to train. So following common practice in applied deep learning settings, let's just load weights that someone else has already trained. The network architecture follows the Inception model from Szegedy et al.. We have provided an inception network implementation. You can look in the file inception_blocks.py to see how it is implemented (do so by going to "File->Open..." at the top of the Jupyter notebook).
The key things you need to know are:
This network uses 96x96 dimensional RGB images as its input. Specifically, inputs a face image (or batch of $m$ face images) as a tensor of shape $(m, n_C, n_H, n_W) = (m, 3, 96, 96)$
It outputs a matrix of shape $(m, 128)$ that encodes each input face image into a 128-dimensional vector
Run the cell below to create the model for face images.
End of explanation
"""
# GRADED FUNCTION: triplet_loss
def triplet_loss(y_true, y_pred, alpha = 0.2):
"""
Implementation of the triplet loss as defined by formula (3)
Arguments:
y_true -- true labels, required when you define a loss in Keras, you don't need it in this function.
y_pred -- python list containing three objects:
anchor -- the encodings for the anchor images, of shape (None, 128)
positive -- the encodings for the positive images, of shape (None, 128)
negative -- the encodings for the negative images, of shape (None, 128)
Returns:
loss -- real number, value of the loss
"""
anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]
### START CODE HERE ### (≈ 4 lines)
# Step 1: Compute the (encoding) distance between the anchor and the positive
pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)))
# Step 2: Compute the (encoding) distance between the anchor and the negative
neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)))
# Step 3: subtract the two previous distances and add alpha.
basic_loss = tf.add(tf.subtract(pos_dist, neg_dist), alpha)
# Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples.
loss = tf.maximum(basic_loss, 0)
### END CODE HERE ###
return loss
with tf.Session() as test:
tf.set_random_seed(1)
y_true = (None, None, None)
y_pred = (tf.random_normal([3, 128], mean=6, stddev=0.1, seed = 1),
tf.random_normal([3, 128], mean=1, stddev=1, seed = 1),
tf.random_normal([3, 128], mean=3, stddev=4, seed = 1))
loss = triplet_loss(y_true, y_pred)
print("loss = " + str(loss.eval()))
"""
Explanation: Expected Output
<table>
<center>
Total Params: 3743280
</center>
</table>
By using a 128-neuron fully connected layer as its last layer, the model ensures that the output is an encoding vector of size 128. You then use the encodings to compare two face images as follows:
<img src="images/distance_kiank.png" style="width:680px;height:250px;">
<caption><center> <u> <font color='purple'> Figure 2: <br> </u> <font color='purple'> By computing a distance between two encodings and thresholding, you can determine if the two pictures represent the same person</center></caption>
So, an encoding is a good one if:
- The encodings of two images of the same person are quite similar to each other
- The encodings of two images of different persons are very different
The triplet loss function formalizes this, and tries to "push" the encodings of two images of the same person (Anchor and Positive) closer together, while "pulling" the encodings of two images of different persons (Anchor, Negative) further apart.
<img src="images/triplet_comparison.png" style="width:280px;height:150px;">
<br>
<caption><center> <u> <font color='purple'> Figure 3: <br> </u> <font color='purple'> In the next part, we will call the pictures from left to right: Anchor (A), Positive (P), Negative (N) </center></caption>
1.2 - The Triplet Loss
For an image $x$, we denote its encoding $f(x)$, where $f$ is the function computed by the neural network.
<img src="images/f_x.png" style="width:380px;height:150px;">
<!--
We will also add a normalization step at the end of our model so that $\mid \mid f(x) \mid \mid_2 = 1$ (means the vector of encoding should be of norm 1).
!-->
Training will use triplets of images $(A, P, N)$:
A is an "Anchor" image--a picture of a person.
P is a "Positive" image--a picture of the same person as the Anchor image.
N is a "Negative" image--a picture of a different person than the Anchor image.
These triplets are picked from our training dataset. We will write $(A^{(i)}, P^{(i)}, N^{(i)})$ to denote the $i$-th training example.
You'd like to make sure that an image $A^{(i)}$ of an individual is closer to the Positive $P^{(i)}$ than to the Negative image $N^{(i)}$) by at least a margin $\alpha$:
$$\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 + \alpha < \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$$
You would thus like to minimize the following "triplet cost":
$$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \underbrace{\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2}_\text{(1)} - \underbrace{\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2}_\text{(2)} + \alpha \large ] \small_+ \tag{3}$$
Here, we are using the notation "$[z]_+$" to denote $\max(z,0)$.
Notes:
- The term (1) is the squared distance between the anchor "A" and the positive "P" for a given triplet; you want this to be small.
- The term (2) is the squared distance between the anchor "A" and the negative "N" for a given triplet; you want this to be relatively large, so it makes sense to have a minus sign preceding it.
- $\alpha$ is called the margin. It is a hyperparameter that you should pick manually. We will use $\alpha = 0.2$.
Most implementations also normalize the encoding vectors to have norm equal one (i.e., $\mid \mid f(img)\mid \mid_2$=1); you won't have to worry about that here.
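The normalization mentioned above is just a division by each encoding's L2 norm. A short illustrative sketch (this helper is my own addition, not part of the assignment's code):

```python
import numpy as np

def l2_normalize(encodings, eps=1e-12):
    # Scale each row so that || f(img) ||_2 = 1; eps guards against zero vectors
    norms = np.linalg.norm(encodings, axis=-1, keepdims=True)
    return encodings / np.maximum(norms, eps)
```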
Exercise: Implement the triplet loss as defined by formula (3). Here are the 4 steps:
1. Compute the distance between the encodings of "anchor" and "positive": $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$
2. Compute the distance between the encodings of "anchor" and "negative": $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$
3. Compute the formula per training example: $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2 + \alpha$
4. Compute the full formula by taking the max with zero and summing over the training examples:
$$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2 + \alpha \large ] \small_+ \tag{3}$$
Useful functions: tf.reduce_sum(), tf.square(), tf.subtract(), tf.add(), tf.maximum().
For steps 1 and 2, you will need to sum over the entries of $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$ and $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$ while for step 4 you will need to sum over the training examples.
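Since the graded implementation uses TensorFlow, a plain-NumPy version of formula (3) can serve as a sanity check. This sketch is my own addition and not part of the assignment; it assumes the encodings come as arrays of shape (m, 128):

```python
import numpy as np

def triplet_loss_np(anchor, positive, negative, alpha=0.2):
    # Terms (1) and (2): squared L2 distances, summed over the encoding axis
    pos_dist = np.sum(np.square(anchor - positive), axis=-1)
    neg_dist = np.sum(np.square(anchor - negative), axis=-1)
    # Formula (3): hinge at zero, then sum over the m training examples
    return np.sum(np.maximum(pos_dist - neg_dist + alpha, 0.0))
```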
End of explanation
"""
FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy'])
load_weights_from_FaceNet(FRmodel)
"""
Explanation: Expected Output:
<table>
<tr>
<td>
**loss**
</td>
<td>
528.143
</td>
</tr>
</table>
2 - Loading the trained model
FaceNet is trained by minimizing the triplet loss. But since training requires a lot of data and a lot of computation, we won't train it from scratch here. Instead, we load a previously trained model. Load a model using the following cell; this might take a couple of minutes to run.
End of explanation
"""
database = {}
database["danielle"] = img_to_encoding("images/danielle.png", FRmodel)
database["younes"] = img_to_encoding("images/younes.jpg", FRmodel)
database["tian"] = img_to_encoding("images/tian.jpg", FRmodel)
database["andrew"] = img_to_encoding("images/andrew.jpg", FRmodel)
database["kian"] = img_to_encoding("images/kian.jpg", FRmodel)
database["dan"] = img_to_encoding("images/dan.jpg", FRmodel)
database["sebastiano"] = img_to_encoding("images/sebastiano.jpg", FRmodel)
database["bertrand"] = img_to_encoding("images/bertrand.jpg", FRmodel)
database["kevin"] = img_to_encoding("images/kevin.jpg", FRmodel)
database["felix"] = img_to_encoding("images/felix.jpg", FRmodel)
database["benoit"] = img_to_encoding("images/benoit.jpg", FRmodel)
database["arnaud"] = img_to_encoding("images/arnaud.jpg", FRmodel)
"""
Explanation: Here are some examples of distances between the encodings of three individuals:
<img src="images/distance_matrix.png" style="width:380px;height:200px;">
<br>
<caption><center> <u> <font color='purple'> Figure 4:</u> <br> <font color='purple'> Example of distance outputs between three individuals' encodings</center></caption>
Let's now use this model to perform face verification and face recognition!
3 - Applying the model
Back to the Happy House! Residents are living blissfully since you implemented happiness recognition for the house in an earlier assignment.
However, several issues keep coming up: The Happy House became so happy that every happy person in the neighborhood is coming to hang out in your living room. It is getting really crowded, which is having a negative impact on the residents of the house. All these random happy people are also eating all your food.
So, you decide to change the door entry policy, and not just let random happy people enter anymore, even if they are happy! Instead, you'd like to build a Face verification system so as to only let people from a specified list come in. To get admitted, each person has to swipe an ID card (identification card) to identify themselves at the door. The face recognition system then checks that they are who they claim to be.
3.1 - Face Verification
Let's build a database containing one encoding vector for each person allowed to enter the happy house. To generate the encoding we use img_to_encoding(image_path, model) which basically runs the forward propagation of the model on the specified image.
Run the following code to build the database (represented as a python dictionary). This database maps each person's name to a 128-dimensional encoding of their face.
End of explanation
"""
# GRADED FUNCTION: verify
def verify(image_path, identity, database, model):
"""
Function that verifies if the person on the "image_path" image is "identity".
Arguments:
image_path -- path to an image
identity -- string, name of the person you'd like to verify the identity. Has to be a resident of the Happy house.
database -- python dictionary mapping names of allowed people's names (strings) to their encodings (vectors).
model -- your Inception model instance in Keras
Returns:
dist -- distance between the image_path and the image of "identity" in the database.
door_open -- True, if the door should open. False otherwise.
"""
### START CODE HERE ###
# Step 1: Compute the encoding for the image. Use img_to_encoding() see example above. (≈ 1 line)
encoding = img_to_encoding(image_path, model)
# Step 2: Compute distance with identity's image (≈ 1 line)
dist = np.linalg.norm(encoding - database[identity])
# Step 3: Open the door if dist < 0.7, else don't open (≈ 3 lines)
if dist<0.7:
print("It's " + str(identity) + ", welcome home!")
door_open = True
else:
print("It's not " + str(identity) + ", please go away")
door_open = False
### END CODE HERE ###
return dist, door_open
"""
Explanation: Now, when someone shows up at your front door and swipes their ID card (thus giving you their name), you can look up their encoding in the database, and use it to check if the person standing at the front door matches the name on the ID.
Exercise: Implement the verify() function which checks if the front-door camera picture (image_path) is actually the person called "identity". You will have to go through the following steps:
1. Compute the encoding of the image from image_path
2. Compute the distance between this encoding and the encoding of the identity image stored in the database
3. Open the door if the distance is less than 0.7, else do not open.
As presented above, you should use the L2 distance (np.linalg.norm). (Note: In this implementation, compare the L2 distance, not the square of the L2 distance, to the threshold 0.7.)
End of explanation
"""
verify("images/camera_0.jpg", "younes", database, FRmodel)
"""
Explanation: Younes is trying to enter the Happy House and the camera takes a picture of him ("images/camera_0.jpg"). Let's run your verification algorithm on this picture:
<img src="images/camera_0.jpg" style="width:100px;height:100px;">
End of explanation
"""
verify("images/camera_2.jpg", "kian", database, FRmodel)
"""
Explanation: Expected Output:
<table>
<tr>
<td>
**It's younes, welcome home!**
</td>
<td>
(0.65939283, True)
</td>
</tr>
</table>
Benoit, who broke the aquarium last weekend, has been banned from the house and removed from the database. He stole Kian's ID card and came back to the house to try to present himself as Kian. The front-door camera took a picture of Benoit ("images/camera_2.jpg"). Let's run the verification algorithm to check whether Benoit can enter.
<img src="images/camera_2.jpg" style="width:100px;height:100px;">
End of explanation
"""
# GRADED FUNCTION: who_is_it
def who_is_it(image_path, database, model):
"""
Implements face recognition for the happy house by finding who is the person on the image_path image.
Arguments:
image_path -- path to an image
database -- database containing image encodings along with the name of the person on the image
model -- your Inception model instance in Keras
Returns:
min_dist -- the minimum distance between image_path encoding and the encodings from the database
identity -- string, the name prediction for the person on image_path
"""
### START CODE HERE ###
## Step 1: Compute the target "encoding" for the image. Use img_to_encoding() see example above. ## (≈ 1 line)
encoding = img_to_encoding(image_path=image_path, model=model)
## Step 2: Find the closest encoding ##
# Initialize "min_dist" to a large value, say 100 (≈1 line)
min_dist = 100
# Loop over the database dictionary's names and encodings.
for (name, db_enc) in database.items():
# Compute L2 distance between the target "encoding" and the current "emb" from the database. (≈ 1 line)
dist = np.linalg.norm(encoding - db_enc)
# If this distance is less than the min_dist, then set min_dist to dist, and identity to name. (≈ 3 lines)
if dist<min_dist:
min_dist = dist
identity = name
### END CODE HERE ###
if min_dist > 0.7:
print("Not in the database.")
else:
print ("it's " + str(identity) + ", the distance is " + str(min_dist))
return min_dist, identity
"""
Explanation: Expected Output:
<table>
<tr>
<td>
**It's not kian, please go away**
</td>
<td>
(0.86224014, False)
</td>
</tr>
</table>
3.2 - Face Recognition
Your face verification system is mostly working well. But since Kian got his ID card stolen, when he came back to the house that evening he couldn't get in!
To reduce such shenanigans, you'd like to change your face verification system to a face recognition system. This way, no one has to carry an ID card anymore. An authorized person can just walk up to the house, and the front door will unlock for them!
You'll implement a face recognition system that takes as input an image, and figures out if it is one of the authorized persons (and if so, who). Unlike the previous face verification system, we will no longer get a person's name as another input.
Exercise: Implement who_is_it(). You will have to go through the following steps:
1. Compute the target encoding of the image from image_path
2. Find the encoding from the database that has the smallest distance to the target encoding.
- Initialize the min_dist variable to a large enough number (100). It will help you keep track of what is the closest encoding to the input's encoding.
- Loop over the database dictionary's names and encodings. To loop use for (name, db_enc) in database.items().
- Compute L2 distance between the target "encoding" and the current "encoding" from the database.
- If this distance is less than the min_dist, then set min_dist to dist, and identity to name.
End of explanation
"""
who_is_it("images/camera_0.jpg", database, FRmodel)
"""
Explanation: Younes is at the front door and the camera takes a picture of him ("images/camera_0.jpg"). Let's see if your who_is_it() algorithm identifies Younes.
End of explanation
"""
|
SIMEXP/Projects | NSC2006/labo1/labo_NSC2006_donnees_multidimentionnelles_Octave.ipynb | mit | %matplotlib inline
from pymatbridge import Octave
octave = Octave()
octave.start()
%load_ext pymatbridge
"""
Explanation: <div align="center">
<h2> Quantitative Methods in Neuroscience </h2>
</div>
<div align="center">
<b><i> Course NSC-2006, year 2015</i></b><br>
<b>Multidimensional data analysis lab - Answers</b><br>
*Pierre Bellec, Yassine Ben Haj Ali*
</div>
Objective:
The goal of this lab is to introduce you to the manipulation of multidimensional data with Matlab. To do so, we will analyze electrophysiological recordings of neuronal spiking activity. We will carry out various operations to visualize, summarize and model these data. The data come from the Georgopoulos1982 experiment on the neuronal encoding of arm movement in a macaque with neural implants. The animal starts the experiment by fixating a cursor at the center of a target, then has to reach peripheral targets appearing in one of 8 directions arranged in a circle. Once the target appears, the animal must wait (100-1500 ms) for the go signal before reaching the target for a duration of 500 ms, after which it returns to the starting point (the center). This movement sequence is called a trial, and there are 47 of them in this experiment. The goal of the experiment by Georgopoulos and colleagues was to determine the preferred spatial orientation of the recorded neuron in area MI, and to show that the direction of movement can be predicted from physiological recordings. Their results indicate that there is indeed a preference for movement angles between 90 and 180 degrees. In this lab we will reproduce some of the data analyses and the visualization of results from this experiment.
To complete this lab, you need to retrieve the following resources from Studium:
Chap17_Data.mat: the dataset taken from Georgopoulos1982.
The scripts diagramme_dispersion.m and diagramme_dispersion_essais.m for Section 1.
The scripts histogramme_essai1.m and histogramme_essais.m for Section 2.
Note that the lab is graded. You must hand in a detailed report including an answer to each of the numbered questions below. Each answer will typically be a few lines long, including code and a figure when requested in the question.
Ignore this part of the code and do not run it:
End of explanation
"""
%%matlab
load('Chap17_Data')
"""
Explanation: Section 1: Scatter plot
We will start by making a scatter plot of the activity of one neuron over the course of a trial. See the script to follow these steps.
<ol start="1">
<h4><li>Let's start by loading the data:</li></h4>
</ol>
End of explanation
"""
%%matlab
whos
"""
Explanation: <ol start="2">
<h4><li>The `whos` command lets us see which variables are available in the workspace, along with their types.</li></h4>
</ol>
End of explanation
"""
%%matlab
fieldnames(spike)
"""
Explanation: <font color="red">Which variables are present? What is the type of the variable spike? What is its size?</font>
<font color="blue">The variables are direction, go, intruction, spike and unit. The variable spike is a structure of size 1x47.</font>
<ol start="3">
<h4><li>The variable spike contains the times of the action potentials detected for
one neuron. Each entry of the structure contains the data of a
different trial. The fields of the structure can be listed
with the fieldnames command:</li></h4>
</ol>
End of explanation
"""
%%matlab
size(spike(1).times)
"""
Explanation: The field "spike(1).times" contains the action potential discharge times
for the first trial. The following command gives the size of this
vector, i.e. the number of detected spikes:
End of explanation
"""
%%matlab
size(spike(2).times) % number of spikes in trial 2
"""
Explanation: <font color="red">How many spikes were there in trial 2? In trial 10?</font>
End of explanation
"""
%%matlab
size(spike(10).times) % number of spikes in trial 10
"""
Explanation: <font color="blue">There are 51 spikes in trial 2.</font>
End of explanation
"""
%%matlab
t1 = spike(1).times;
t2 = spike(2).times;
"""
Explanation: <font color="blue">There are 85 spikes in trial 10.</font>
<ol start="4">
<h4><li> The following command displays all the spike times for trial 1. </li></h4>
</ol>
>> spike(1).times
-0.9893
-0.9402
-0.9158
(...)
<font color="red">What is the likely unit of these times? Why are there
negative values?</font>
<font color="blue">The unit is seconds, and the negative values indicate spike times that precede the go signal of the trial.</font>
<ol start="5">
<h4><li> We extract the spike times of the first two trials into two
variables `t1` and `t2`: </li></h4>
</ol>
End of explanation
"""
%%matlab
figure
hold on
"""
Explanation: <ol start="6">
<h4><li> We open a new window, dedicated to visualization: </li></h4>
</ol>
End of explanation
"""
%%matlab
for num_temps = 1:length(t1)
line([t1(num_temps) t1(num_temps)], [0 1])
end
"""
Explanation: Watch out for the second instruction! It allows several
objects to be drawn on the same figure, one after the other,
without resetting the figure.
<ol start="7">
<h4><li> Now we will draw the first row of the diagram. Note that the
number of spikes in trial 1 is given by length(t1). We use a loop: </li></h4>
</ol>
End of explanation
"""
%%matlab
for num_temps = 1:length(t1)
line([t1(num_temps) t1(num_temps)], [0 1])
end
xlabel('Temps (sec)');
% Same for the y axis:
ylabel('Essai #')
% Finally, set the y-axis limits
ylim([0 3])
% save the result
print('figure_dispersion.png','-dpng')
"""
Explanation: Note the use of the line command.
<ol start="8">
<h4><li> We now add a label to the x and y axes. </li></h4>
</ol>
<ol start="9">
<h4><li> Save the figure to a png file (use the print command), under the name figure_dispersion.png </li></h4>
</ol>
End of explanation
"""
%%matlab
for num_temps = 1:length(t1)
line([t1(num_temps) t1(num_temps)], [0 0.5])
end
xlabel('Temps (sec)')
% Same for the y axis:
ylabel('Essai #')
% Finally, set the y-axis limits
ylim([0 3])
"""
Explanation: <font color="red"> <ol start="10">
<h4><li> Make a new figure in which each bar of the diagram has a height
of 0.5 instead of 1. Save this file under the name "figure_dispersion_lignes.png". </li></h4>
</ol></font>
End of explanation
"""
%%matlab
% Load the data
load('Chap17_Data')
% Prepare a figure
figure
% allow several plots to be superimposed on the same figure
hold on
% Label the x axis
xlabel('Temps (sec)');
% Label the y axis
ylabel('Essai #');
% Adjust the y-axis limits
ylim([0 length(spike)]);
for num_spike = 1:length(spike) % loop over all trials
t = spike(num_spike).times; % spike times for this trial
for num_temps=1:length(t) % loop over all spike times
line([t(num_temps) t(num_temps)], [0+(num_spike-1) 1+(num_spike-1)]); % draw a line of height 1 for each spike time t(num_temps)
end
end
"""
Explanation: <font color="red"> <ol start="11">
<h4><li> Starting from the file, complete the 4 missing lines of code
inside the loop to draw all 47 trials
in a single figure. The result should look like the
following figure: </li></h4>
</ol></font>
End of explanation
"""
%%matlab
clear
"""
Explanation: Section 2: Histogram
We will continue exploring the data with a histogram
summarizing the number of spikes within a given time
interval. See the script histograme_essai1.m to reproduce the following commands:
<ol start="1">
<h4><li>Let's start by clearing the workspace: </li></h4>
</ol>
End of explanation
"""
%%matlab
load('Chap17_Data')
"""
Explanation: <ol start="2">
<h4><li> Load the data again: </li></h4>
</ol>
End of explanation
"""
%%matlab
centres = [-0.95:0.1:0.95];
"""
Explanation: <ol start="3">
<h4><li>Define the edges and step of the histogram bins </li></h4>
</ol>
End of explanation
"""
%%matlab
histo = zeros(1,length(centres));
"""
Explanation: <ol start="4">
<h4><li>Initialize a matrix of zeros whose length equals the
number of bins: </li></h4>
</ol>
</ol>
End of explanation
"""
%%matlab
histo = hist(spike(1).times,centres);
"""
Explanation: <ol start="5">
<h4><li>Get the number of spikes per time interval in
trial 1, using the hist function. </li></h4>
</ol>
End of explanation
"""
%%matlab
whos histo % it has size 1x20
%%matlab
min(histo)
max(histo)
mean(histo)
"""
Explanation: <font color="red">Examine the contents of the variable histo. What is its size? Its minimum, maximum and mean (see the Matlab functions min, max, mean)?</font>
End of explanation
"""
%%matlab
bar(centres,histo);
% Set the x-axis limits
xlim([-1.1 1]);
xlabel('Temps (sec)'); % label the x axis
ylabel('# essai'); % label the y axis
"""
Explanation: <ol start="6">
<h4><li>Draw the histogram with the `bar` function. </li></h4>
</ol>
End of explanation
"""
%%matlab
% Load the data
load('Chap17_Data')
% Define the bin centres for the histogram
centres = [-0.95:0.1:0.95];
% Initialize a matrix of zeros histo whose length equals the number of bins:
histo = zeros(length(centres),1);
% Loop over all trials and accumulate the number of spikes per bin with the histc function
for jj = 1:47
histo=histo+histc(spike(jj).times,centres);
end
% Draw the histogram with the bar function
bar(centres,histo);
% Set the x-axis limits
xlim([-1.1 1]);
% Label the x axis
xlabel('Temps (sec)');
% Label the y axis
ylabel('# essai');
"""
Explanation: <font color="red"> <ol start="10">
<h4><li>Start from the code in the file and fill in the loop to build
a histogram over all the trials </li></h4>
</ol></font>
End of explanation
"""
%%matlab
clear
"""
Explanation: Section 3: Regression
We will now implement a regression in Matlab.
<ol start="1">
<h4><li>Let's start by clearing the workspace: </li></h4>
</ol>
End of explanation
"""
%%matlab
x = [ 165 165 157 170 175 165 182 178 ]';
y = [ 47 56 49 60 82 52 78 90 ]';
"""
Explanation: <ol start="2">
<h4><li>Now let's get the "grades" data from the course. </li></h4>
</ol>
End of explanation
"""
%%matlab
whos
"""
Explanation: <font color="red"> What are the size and contents of the vectors x and y? What does the ' operation do? </font>
End of explanation
"""
%%matlab
ftheta = inline('theta(1)+theta(2)*x','theta','x');
"""
Explanation: <font color="blue"> The variables x and y are both column vectors with 8 rows. The ' operation transposes a row vector into a column vector, and vice versa.</font>
<ol start="3">
<h4><li>We define a new inline function: </li></h4>
</ol>
End of explanation
"""
%%matlab
whos ftheta
"""
Explanation: <font color="red"> What is the type of the variable ftheta?</font>
End of explanation
"""
%%matlab
% theta_chap = nlinfit(x, y, ftheta, [1 1] );
theta_chap = [-237.5729 1.7794];
"""
Explanation: <font color="blue">ftheta is an inline function.</font>
<ol start="4">
<h4><li>Estimate the regression coefficients using the nlinfit function: </li></h4>
</ol>
End of explanation
"""
%%matlab
figure
plot(x,y,'b.');
hold on
plot(x,ftheta(theta_chap,x),'r');
"""
Explanation: <font color="red"> Quelles sont les valeurs de theta_chap? A quoi sert l’argument [1 1]? Essayez de reproduire l’estimation avec d’autres valeurs pour cet argument, est-ce que cela affecte le résultat?</font>
<font color="blue">Le paramètre theta_chap vaut [-237.5729 1.7794]. L'argument [1 1] est une valeur initiale de la méthode qui cherche la valeur theta_chap. En répétant l'expérience pour plusieurs valeurs ([2 2], [30 30], [-30 -30]) on voit que la résultat theta_chap ne semble pas dépendre ici de ce paramètre. </font>
<ol start="5">
<h4><li>Maintenant représenter le résultat de la régression. </li></h4>
</ol>
End of explanation
"""
%%matlab
figure
plot(x,y,'b.');
hold on
plot(x,ftheta(theta_chap,x),'r');
ylim([40 95])
xlabel('taille')
ylabel('poids')
print('regression_notes.png','-dpng')
"""
Explanation: <font color="red"><ol start="6">
<h4><li>Use the `ylim` function to change the y-axis limits to 40-95.
Add the label `taille` on the x axis with the `xlabel` command, and the label `poids` on
the y axis with the `ylabel` command. Save this image
to a file `regression_notes.png`. </li></h4>
</ol></font>
End of explanation
"""
%%matlab
clear
x = 0:0.1:30;
y = cos(x) + randn(1,301);
"""
Explanation: <ol start="7">
<h4><li>We will now fit a more complex curve, a cosine.
We start by simulating data: </li></h4>
</ol>
End of explanation
"""
%%matlab
size(x)
size(y)
"""
Explanation: <font color="red">What is the size of x? The size of y?</font>
End of explanation
"""
%%matlab
help randn
"""
Explanation: <font color="blue">The variables x and y are row vectors of length 301.</font>
<font color="red"> What does the randn function do (use the help command)?</font>
End of explanation
"""
%%matlab
figure
plot(x,y,'.')
print('donnees_cosinus.png','-dpng')
"""
Explanation: <font color="blue">The randn command simulates noise following a normal (Gaussian) distribution with zero mean and unit variance.</font>
<font color="red"> Generate a plot of the relationship between x and y, and save this image to a file donnees_cosinus.png.</font>
End of explanation
"""
%%matlab
ftheta = inline('theta(1)+theta(2)*cos(x-theta(3))','theta','x');
"""
Explanation: <ol start="8">
<h4><li>We now define a function of three parameters: </li></h4>
</ol>
End of explanation
"""
%%matlab
ftheta([0 1 1],0)
"""
Explanation: <font color="red">What is the value of the function ftheta for theta=[0 1 1] and x=0?</font>
End of explanation
"""
%%matlab
theta_chap = nlinfit(x, y, ftheta, [0 1 1] );
"""
Explanation: <ol start="9">
<h4><li>Estimate the regression coefficients using the nlinfit function: </li></h4>
</ol>
End of explanation
"""
%%matlab
figure
plot(x,y,'b');
hold on
plot(x,ftheta(theta_chap,x),'r');
% Save this image to a file.
"""
Explanation: <font color="red">What are the values of theta_chap?</font>
<font color="blue">The variable theta_chap equals [-0.0216 1.1005 0.1539].</font>
<font color="red">What is the [0 1 1] argument for? Try the estimation again with other values for this argument; does it affect the result?</font>
<font color="blue">The [0 1 1] argument plays the same role as the [1 1] parameter in question 4. Using a different value for this parameter (for example [0 1 10]) gives a different value for theta_chap. This is because the function ftheta is periodic, so there are infinitely many input values that give the same output.</font>
<font color="red"><ol start="10">
<h4><li> Now let's display the regression result. </li></h4>
</ol></font>
End of explanation
"""
|
ga7g08/ga7g08.github.io | _notebooks/2015-02-09-Gibbs-sampler-with-a-bivariate-normal-distribution.ipynb | mit | from numpy.random import normal
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (8, 6)
plt.rcParams['axes.labelsize'] = 22
def GibbsSampler(theta0, y, k, rho):
""" Simple implementation of the Gibbs sampler for a bivariate normal
distribution. """
theta = [theta0]
for i in range(k):
theta2 = theta[-1][1] # theta2 from previous iteration
theta1 = normal(y[0] + rho*(theta2 - y[1]), np.sqrt(1 - rho**2))  # scale is the std dev: sqrt of the variance 1 - rho^2
theta.append([theta1, theta2])
theta2 = normal(y[1] + rho*(theta1 - y[0]), np.sqrt(1 - rho**2))
theta.append([theta1, theta2])
return np.array(theta)
# Data as given by Gelman et al.
y = [0, 0]
rho = 0.8
k = 500
# Four chains starting from four points of a square
theta0_list = [[-2.5, -2.5], [2.5, -2.5], [-2.5, 2.5], [2.5, 2.5]]
data = []
for theta0 in theta0_list:
data.append(GibbsSampler(theta0, y, k, rho))
data = np.array(data)
print(data.shape)
"""
Explanation: Gibbs sampler with the bivariate normal distribution
This is a short post on recreating the Gibbs sampler example from p. 277 of Gelman's
Bayesian Data Analysis.
Alternating conditional sampling
To remind my future self of how this works I am going to recreate the details of
the Gibbs sampler for a two dimensional problem. Suppose the parameter vector
$\theta$ has two components $\theta= [\theta_1, \theta_2]$, then each iteration
of the Gibbs sampler cycles through each component and draws a new value conditional
on all the others. There are thus 2 steps for each iteration. We consider the example
of the bivariate normal distribution with unknown mean $\theta$, but known
covariance matrix
$$\left(\begin{array}{cc}1 & \rho \\ \rho & 1 \end{array}\right).$$
If one observation $y=[y_1, y_2]$ is made and a uniform prior on $\theta$
is used, the posterior is given by
$$ \left(\begin{array}{c} \theta_1 \\ \theta_2 \end{array}\right) \biggr\rvert y
\sim N\left(\left(\begin{array}{c} y_1 \\ y_2 \end{array}\right),
\left(\begin{array}{cc}1 & \rho \\ \rho & 1 \end{array}\right)\right)
$$
In order to illustrate the use of the Gibbs sampler we need the conditional
posterior distributions, which from the properties of multivariate normal
distributions are given by
$$ \theta_1 | \theta_2, y \sim N\left(y_1 + \rho(\theta_2 - y_2), 1-\rho^{2}\right) $$
$$ \theta_2 | \theta_1, y \sim N\left(y_2 + \rho(\theta_1 - y_1), 1-\rho^{2}\right) $$
The Gibbs sampler proceeds by alternately sampling from these two normal
distributions. This is now coded in simple Python deliberately making the
steps obvious.
End of explanation
"""
nsteps = 10
for i in range(4):
plt.plot(data[i, 0:nsteps, 0], data[i, 0:nsteps, 1], "o-")
plt.xlabel(r"$\theta_1$")
plt.ylabel(r"$\theta_2$", rotation="horizontal")
plt.show()
"""
Explanation: Here data is a $4 \times (2k+1) \times d$ numpy array. The first axis gives the four chains (started from four different initial conditions); the second gives the iteration number (of length $2k+1$ for each chain, because we
save the data after each update and we added the initial data); the final axis is the number of dimensions (in this case 2).
Recreate figure 11.2 a
First let's just look at the first few steps of all four chains. What we are plotting
here is the location, in the $\theta$ parameter space, of each chain.
End of explanation
"""
for i in range(4):
plt.plot(data[i, 0:, 0], data[i, 0:, 1], "o-")
plt.xlabel(r"$\theta_1$")
plt.ylabel(r"$\theta_2$", rotation="horizontal")
plt.show()
"""
Explanation: Recreating figure 11.2 b
Now we increase the number of steps to the full range
End of explanation
"""
data_reduced = data[:, ::2, :]
print(data_reduced.shape)
"""
Explanation: Removing the repeated data
In the example so far we purposefully left in data from the updates during
each iteration. Before trying to do any analysis this should be removed.
End of explanation
"""
fig, (ax1, ax2) = plt.subplots(nrows=2, sharex=True)
for j in range(4):
ax1.plot(range(k+1), data_reduced[j, :, 0])
ax2.plot(range(k+1), data_reduced[j, :, 1])
ax1.set_ylabel(r"$\theta_1$", rotation="horizontal")
ax2.set_ylabel(r"$\theta_2$", rotation="horizontal")
plt.show()
"""
Explanation: Illustration of the burn in period
The step which is not so obvious in Gelman BDA is why they remove the
first half of data - the so-called burn-in period. To illustrate we
plot the values of $\theta_i$ against the iteration number. This is done
for all four chains.
End of explanation
"""
# Generate figure and axes
fig = plt.figure(figsize=(10, 8))
ndim = 2
ax1 = plt.subplot2grid((ndim, ndim), (0, 0))
ax2 = plt.subplot2grid((ndim, ndim), (1, 0))
ax3 = plt.subplot2grid((ndim, ndim), (1, 1))
# Remove labels
ax1.set_xticks([])
ax1.set_yticks([])
ax3.set_yticks([])
# Get the final data
burnin=50
data_burnt = data_reduced[:, burnin:, :] # Discard the first burnin (=50) samples of each chain
data_final = data_burnt.reshape(4 * (k-burnin + 1), 2) # Flatten chain data
# Plot the marginal distribution for theta1
hist, bin_edges = np.histogram(data_final[:, 0], bins=40, normed=True)
bin_mids = bin_edges[:-1] + np.diff(bin_edges)
ax1.step(bin_mids, hist, color="k")
# Plot the joint distribution
ax2.hist2d(data_final[:, 0], data_final[:, 1], bins=30, cmap=plt.cm.Greys)
# Plot the marginal distribution for theta2
hist, bin_edges = np.histogram(data_final[:, 1], bins=40, density=True)
bin_mids = bin_edges[:-1] + np.diff(bin_edges) / 2  # bin midpoints
ax3.step(bin_mids, hist, color="k")
ax2.set_xlabel(r"$\Theta_1$")
ax2.set_ylabel(r"$\Theta_2$", rotation="horizontal", labelpad=10)
ax3.set_xlabel(r"$\Theta_2$")
plt.subplots_adjust(hspace=0, wspace=0)
plt.show()
"""
Explanation: The takeaway here is that for at least the first 50 iterations
the chains display some memory of their initial position. After
this they appear to have 'converged' to an approximation of
the posterior distribution. For this reason Gelman discards
the first half of the data.
Plotting the marginal and joint posterior
11.2 c is the joint posterior distribution $p(\theta_1, \theta_2 \mid y)$.
We can now plot this along with the marginal distributions for each
individual parameter in a so-called triangle plot.
End of explanation
"""
|
sadahanu/DataScience_SideProject | Movie_Rating/Culture_difference_movie_rating.ipynb | mit | imdb_dat = pd.read_csv("movie_metadata.csv")
imdb_dat.info()
"""
Explanation: Data from <font color = "red"> the "IMDB5000"</font> database
End of explanation
"""
import requests
import re
from bs4 import BeautifulSoup
import time
import string
# return the douban movie rating that matches the movie name and year
# read in the movie name
def doubanRating(name):
movie_name = name.decode('gbk').encode('utf-8')
url_head = 'http://movie.douban.com/subject_search'
pageload = {'search_text': movie_name}
r = requests.get(url_head,params = pageload)
soup = BeautifulSoup(r.text,'html.parser')
first_hit = soup.find_all(class_= 'nbg')
try:
r2_link = first_hit[0].get('href')
# sometime douban returns items like celebrity instead of movies
if 'subject' not in r2_link:
r2_link = first_hit[1].get('href')
r2 = requests.get(r2_link)
soup2 = BeautifulSoup(r2.text,'html.parser')
title = soup2.find(property = "v:itemreviewed")
title = title.get_text() # in unicode
# remove Chinese characters
title = ' '.join((title.split(' '))[1:])
title = filter(lambda x:x in set(string.printable),title)
flag = True
if title != name:
print "Warning: name may not match"
flag = False
year = (soup2.find(class_='year')).get_text()# in unicode
rating = (soup2.find(class_="ll rating_num")).get_text() # in unicode
num_review = (soup2.find(property="v:votes")).get_text()
return [title, year, rating,num_review,flag]
except:
print "Record not found for: "+name
return [name, None, None, None, None]
#%%2. Store scrapped data
dataset = pd.read_csv("movie_metadata.csv")
total_length = 5043
#first_query = 2500
res = pd.DataFrame(columns = ('movie_title','year','rating','num_review','flag'))
for i in xrange(1,total_length):
name = dataset['movie_title'][i].strip().strip('\xc2\xa0')
res.loc[i] = doubanRating(name)
print "slowly and finally done %d query"%i
time.sleep(10)
if (i%50==0):
res.to_csv("douban_movie_review.csv")
print "saved until record: %d"%i
"""
Explanation: Scrape data from <font color = "red"> DOUBAN.COM </font>
End of explanation
"""
douban_dat = pd.read_csv("douban_movie_review.csv")
douban_dat.rename(columns = {'movie_title':'d_movie_title','year':'d_year','rating':'douban_score','num_review':'dnum_review','flag':'dflag'},inplace = True)
douban_dat.info()
res_dat = pd.concat([imdb_dat,douban_dat],axis = 1)
res_dat.info()
"""
Explanation: 1. Preliminary data visualization and analysis
End of explanation
"""
# 1. visualize the gross distribution of ratings from IMDB (x-axis) and Douban (y-axis)
import seaborn as sns
g = sns.jointplot(x = 'imdb_score',y = 'douban_score',data = res_dat)
g.ax_joint.set(xlim=(1, 10), ylim=(1, 10))
"""
Explanation: 1.1 Visualize the gross distribution of ratings from <font color = "red">IMDB (x-axis)</font> and <font color = "red"> Douban (y-axis)</font>
End of explanation
"""
# plot distribution and bar graphs(significantly different)
from scipy import stats
nbins = 15
fig,axes = plt.subplots(nrows = 1,ncols = 2, figsize = (10,8))
ax0,ax1 = axes.flatten()
ax0.hist([res_dat.douban_score,res_dat.imdb_score],nbins, histtype = 'bar',label = ["Douban","IMDB"])
ax0.set_title('The distribution of movie ratings')
ax0.set_xlabel('Rating')
ax0.set_ylabel('Count')
ax0.legend()
imdb_score = np.mean(res_dat.imdb_score)
douban_score = np.mean(res_dat.douban_score)
ax1.bar([0,1],[imdb_score, douban_score], yerr = [np.std(res_dat.imdb_score),np.std(res_dat.douban_score)],
align = 'center',color = ['green','blue'], ecolor = 'black')
ax1.set_xticks([0,1])
ax1.set_xticklabels(['IMDB','Douban'])
ax1.set_ylabel('Score')
_,p = stats.ttest_rel(res_dat['imdb_score'], res_dat['douban_score'],nan_policy = 'omit')
ax1.set_title('A comparison of ratings\n'+'t-test: p = %.4f***'%p)
#fig.tight_layout()
plt.show()
# any significant differences
"""
Explanation: 1.2 Is it necessary to recenter (scale) the <font color = "red">IMDB score </font> and <font color = "red"> Douban score</font>?
End of explanation
"""
from sklearn import preprocessing
data = res_dat.dropna()
print " delete null values, the remaining data is",data.shape
data.loc[:,'scaled_imdb'] = preprocessing.scale(data['imdb_score'])
data.loc[:,'scaled_douban'] = preprocessing.scale(data['douban_score'])
#stats.ttest_rel(data['scaled_imdb'], data['scaled_douban'],nan_policy = 'omit')
from scipy.stats import norm, lognorm
import matplotlib.mlab as mlab
fig,axes = plt.subplots(nrows = 1,ncols = 2, figsize = (10,8))
ax0,ax1 = axes.flatten()
ax0.plot(data['scaled_imdb'],data['scaled_douban'],'ro')
ax0.set_title('Normalized Scores')
ax0.set_xlabel('Scaled IMDB score')
ax0.set_ylabel('Scaled Douban score')
data.loc[:,'rating_diff'] = data['scaled_imdb'] - data['scaled_douban']
(mu,sigma) = norm.fit(data['rating_diff'])
_, bins, _ = ax1.hist(data['rating_diff'], 60, density=True, histtype='bar', alpha=0.75)
ax1.plot(bins, norm.pdf(bins, mu, sigma), 'r--', linewidth=2)
ax1.set_xlabel('IMDB_score - Douban_score')
ax1.set_ylabel('percentage')
ax1.set_title('Rating difference Distribution')
fig.tight_layout()
plt.show()
"""
Explanation: 1.3 <font color = "red">Normalize</font> IMDB and Douban rating scores
End of explanation
"""
data.describe()
data.describe(include = ['object'])
ind = data['rating_diff'].argmin()
print data.iloc[ind].movie_title
print data.iloc[ind].scaled_imdb
print data.iloc[ind].scaled_douban
print data.iloc[ind].title_year
print data.iloc[ind].movie_imdb_link
print data.iloc[ind].d_year
print data.iloc[ind].douban_score
print data.iloc[ind].imdb_score
data.columns
# 2. Predict differences in ratings
res_dat['diff_rating'] = res_dat['douban_score']-res_dat['imdb_score']
# 2.1. covert categorical variable Genre to Dummy variables
# only extract the first genre out of the list to simplify the problem
res_dat['genre1'] = res_dat.apply(lambda row:(row['genres'].split('|'))[0],axis = 1)
#res_dat['genre1'].value_counts()
# Because there are 21 genres, here we only choose the top 7 to convert to index
top_genre = ['Comedy','Action','Drama','Adventure','Crime','Biography','Horror']
# The rest of genre types we just consider them as others
res_dat['top_genre'] = res_dat.apply(lambda row:row['genre1'] if row['genre1'] in top_genre else 'Other',axis =1)
#select num_user_for_reviews ,director_facebook_likes ,actor_1_facebook_likes ,gross , genres,
#budget,# dnum_review # for EDA
res_subdat = res_dat[['top_genre','num_user_for_reviews','director_facebook_likes','actor_1_facebook_likes','gross','budget','dnum_review','diff_rating']]
res_subdat = pd.get_dummies(res_subdat,prefix =['top_genre'])
#res_dat = pd.get_dummies(res_dat,prefix = ['top_genre'])
res_subdat.shape
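For reference, a miniature example of what `pd.get_dummies` does to the genre column (toy data, not the real dataset):

```python
import pandas as pd

df = pd.DataFrame({"top_genre": ["Comedy", "Action", "Comedy"]})
dummies = pd.get_dummies(df, prefix=["top_genre"])
# each genre becomes its own indicator column
print(dummies.columns.tolist())  # -> ['top_genre_Action', 'top_genre_Comedy']
```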
# create a subset for visualization and preliminary analysis
col2 = [u'num_user_for_reviews', u'director_facebook_likes',
u'actor_1_facebook_likes', u'gross', u'budget', u'dnum_review', u'top_genre_Action', u'top_genre_Adventure',
u'top_genre_Biography', u'top_genre_Comedy', u'top_genre_Crime',
u'top_genre_Drama', u'top_genre_Horror', u'top_genre_Other',u'diff_rating']
res_subdat = res_subdat[col2]
# a subset for plotting correlation
col_cat = [u'gross', u'budget', u'dnum_review',u'num_user_for_reviews',u'top_genre_Action', u'top_genre_Adventure',
u'top_genre_Biography', u'top_genre_Comedy', u'top_genre_Crime',
u'top_genre_Drama', u'top_genre_Horror', u'diff_rating']
res_subdat_genre = res_subdat[col_cat]
# show pair-wise correlation between differences in ratings and estimators
import matplotlib.pylab as plt
import numpy as np
corr = res_subdat_genre.corr()
sns.set(style = "white")
f,ax = plt.subplots(figsize=(11,9))
cmap = sns.diverging_palette(220,10,as_cmap=True)
mask = np.zeros_like(corr,dtype = np.bool)
sns.heatmap(corr,mask = mask,cmap = cmap, vmax=.3,square = True, linewidths = .5,
cbar_kws = {"shrink": .5},ax = ax)
# prepare training set and target set
col_train = col2[:len(col2)-1]
col_target = col2[len(col2)-1]
cl_res_subdat = res_subdat.dropna(axis=0)
cl_res_subdat.shape
# 2.2 Use Random Forest Regressor for prediction
X_cat = res_subdat.ix[:,'top_genre_Action':'top_genre_Other']
num_col = []
for i in res_dat.columns:
if res_dat[i].dtype != 'object':
num_col.append(i)
X_num = res_dat[num_col]
X = pd.concat([X_cat,X_num],axis = 1)
X = X.dropna(axis = 0)
y = X['diff_rating']
X = X.iloc[:,:-1]
X.drop(['imdb_score','douban_score'],axis = 1,inplace = True)
from sklearn.model_selection import train_test_split
# METHOD 1: BUILD randomforestregressor
X_train,X_val,y_train,y_val = train_test_split(X,y,test_size = 0.1,random_state = 42)
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor(n_estimators = 500)
forest = rf.fit(X_train, y_train)
score_r2 = rf.score(X_val,y_val)
# print: R-sqr
print score_r2
rf_features = sorted(zip(map(lambda x: round(x, 4), rf.feature_importances_), X.columns),reverse = True)
import matplotlib.pyplot as plt;
imps,feas = zip(*(rf_features[0:4]+rf_features[6:12]))
ypos = np.arange(len(feas))
plt.barh(ypos,imps,align = 'center',alpha = 0.5)
plt.yticks(ypos,feas)
plt.xlabel('Feature Importance')
plt.subplot(1,2,1)
plt.plot(y_train,rf.predict(X_train),'o')
plt.xlabel('Training_y')
plt.ylabel('Predict_y')
plt.xlim(-6,6)
plt.ylim(-6,6)
plt.subplot(1,2,2)
plt.plot(y_val,rf.predict(X_val),'o')
plt.xlabel('val_y')
plt.ylabel('Predict_y')
plt.xlim(-3,4)
plt.ylim(-3,4)
X.columns
# Lasso method
from sklearn.linear_model import Lasso
Lassoreg = Lasso(alpha = 1e-4,normalize = True,random_state = 42)
Lassoreg.fit(X,y)
score_r2 = Lassoreg.score(X_val,y_val)
print score_r2
Ls_features = sorted(zip(map(lambda x:round(x,4),Lassoreg.coef_),X.columns))
print Ls_features
y_val_rf = rf.predict(X_val)
y_val_Ls = Lassoreg.predict(X_val)
y_val_pred = (y_val_rf+y_val_Ls)/2
from sklearn.metrics import r2_score
print r2_score(y_val,y_val_pred)
import matplotlib.pyplot as plt;
imps,feas = zip(*(Ls_features[0:4]+Ls_features[-4:]))
ypos = np.arange(len(feas))
plt.barh(ypos,imps,align = 'center',alpha = 0.5)
plt.yticks(ypos,feas)
plt.xlabel('Feature Importance (Coefficient)')
plt.subplot(1,2,1)
plt.plot(y_train,Lassoreg.predict(X_train),'o')
plt.xlabel('Training_y')
plt.ylabel('Predict_y')
plt.xlim(-6,6)
plt.ylim(-6,6)
plt.subplot(1,2,2)
plt.plot(y_val,Lassoreg.predict(X_val),'o')
plt.xlabel('val_y')
plt.ylabel('Predict_y')
plt.xlim(-3,4)
plt.ylim(-3,4)
"""
Explanation: 1.4 Visualize Features
End of explanation
"""
|
UWashington-Astro300/Astro300-A17 | FirstLast_Sympy.ipynb | mit | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import sympy as sp
"""
Explanation: First Last - SymPy
End of explanation
"""
sp.init_printing()
x = sp.symbols('x')
my_x = np.linspace(-10,10,100)
"""
Explanation: $$ \Large {\displaystyle f(x)=3e^{-{\frac {x^{2}}{8}}}} \sin(x/3)$$
Find the first four terms of the Taylor expansion of the above equation
Make a plot of the function
Plot size 10 in x 4 in
X limts -5, 5
Y limits -2, 2
Over-plot the 1-term Taylor expansion using a different color
Over-plot the 2-term Taylor expansion using a different color
Over-plot the 3-term Taylor expansion using a different color
Over-plot the 4-term Taylor expansion using a different color
End of explanation
"""
|
zihangdai/xlnet | notebooks/colab_imdb_gpu.ipynb | apache-2.0 | ! pip install sentencepiece
"""
Explanation: <a href="https://colab.research.google.com/github/zihangdai/xlnet/blob/master/notebooks/colab_imdb_gpu.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
XLNet IMDB movie review classification project
This notebook is for classifying the imdb sentiment dataset. It will be easy to edit this notebook in order to run all of the classification tasks referenced in the XLNet paper. Whilst you cannot expect to obtain the state-of-the-art results in the paper on a GPU, this model will still score very highly.
Setup
Install dependencies
End of explanation
"""
# only needs to be done once
! wget https://storage.googleapis.com/xlnet/released_models/cased_L-24_H-1024_A-16.zip
! unzip cased_L-24_H-1024_A-16.zip
"""
Explanation: Download the pretrained XLNet model and unzip
End of explanation
"""
! wget http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
! tar zxf aclImdb_v1.tar.gz
"""
Explanation: Download and extract the IMDB dataset - suppressing output
End of explanation
"""
! git clone https://github.com/zihangdai/xlnet.git
"""
Explanation: Git clone XLNet repo for access to run_classifier and the rest of the xlnet module
End of explanation
"""
SCRIPTS_DIR = 'xlnet' #@param {type:"string"}
DATA_DIR = 'aclImdb' #@param {type:"string"}
OUTPUT_DIR = 'proc_data/imdb' #@param {type:"string"}
PRETRAINED_MODEL_DIR = 'xlnet_cased_L-24_H-1024_A-16' #@param {type:"string"}
CHECKPOINT_DIR = 'exp/imdb' #@param {type:"string"}
"""
Explanation: Define Variables
Define all the dirs: data, xlnet scripts & pretrained model.
If you would like to save models, you can authenticate a GCP account and use that for the OUTPUT_DIR & CHECKPOINT_DIR - you will need a large amount of storage to store these models.
Alternatively, it is easy to integrate a Google Drive account; check out this guide for I/O in Colab,
but remember these models will take up a large amount of storage.
"""
train_command = "python xlnet/run_classifier.py \
--do_train=True \
--do_eval=True \
--eval_all_ckpt=True \
--task_name=imdb \
--data_dir="+DATA_DIR+" \
--output_dir="+OUTPUT_DIR+" \
--model_dir="+CHECKPOINT_DIR+" \
--uncased=False \
--spiece_model_file="+PRETRAINED_MODEL_DIR+"/spiece.model \
--model_config_path="+PRETRAINED_MODEL_DIR+"/xlnet_config.json \
--init_checkpoint="+PRETRAINED_MODEL_DIR+"/xlnet_model.ckpt \
--max_seq_length=128 \
--train_batch_size=8 \
--eval_batch_size=8 \
--num_hosts=1 \
--num_core_per_host=1 \
--learning_rate=2e-5 \
--train_steps=4000 \
--warmup_steps=500 \
--save_steps=500 \
--iterations=500"
! {train_command}
"""
Explanation: Run Model
This will set off the fine tuning of XLNet. There are a few things to note here:
This script will train and evaluate the model
This will store the results locally on colab and will be lost when you are disconnected from the runtime
This uses the large version of the model (base not released presently)
We are using a max seq length of 128 with a batch size of 8; please refer to the README for why this is.
This will take approx 4hrs to run on GPU.
End of explanation
"""
|
kubeflow/community | scripts/github_stats.ipynb | apache-2.0 | import os
import subprocess
if os.path.exists("/var/run/secrets/kubernetes.io/serviceaccount"):
subprocess.check_call(["pip", "install", "--user", "-r", "requirements.txt"], stderr=subprocess.STDOUT, bufsize=1)
# NOTE: The RuntimeWarnings (if any) are harmless. See ContinuumIO/anaconda-issues#6678.
import altair as alt
from pandas.io import gbq
import pandas as pd
import numpy as np
from importlib import reload
import itertools
import getpass
import subprocess
# Configuration Variables. Modify as desired.
PROJECT = subprocess.check_output(["gcloud", "config", "get-value", "project"]).strip().decode()
#matplotlib
"""
Explanation: Compute GitHub Stats
Notebook setup
End of explanation
"""
import datetime
month = datetime.datetime.now().month
year = datetime.datetime.now().year
num_months = 12
months = []
for i in range(num_months):
months.append("\"{0}{1:02}\"".format(year, month))
month -= 1
if month == 0:
month = 12
year -=1
"""
Explanation: Setup Authorization
If you are using a service account run
%%bash
Activate Service Account provided by Kubeflow.
gcloud auth activate-service-account --key-file=${GOOGLE_APPLICATION_CREDENTIALS}
If you are running using user credentials
gcloud auth application-default login
End of explanation
"""
query = """
SELECT
DATE(created_at) AS pr_date,
actor.id,
actor.login,
JSON_EXTRACT(payload, '$.pull_request.user.id') as user_id,
JSON_EXTRACT(payload, '$.pull_request.id') as pr_id,
JSON_EXTRACT(payload, '$.pull_request.merged') as merged
FROM `githubarchive.month.*`
WHERE
_TABLE_SUFFIX IN ({0})
AND type = 'PullRequestEvent'
AND org.login = 'kubeflow'
AND JSON_EXTRACT(payload, '$.action') IN ('"closed"')
""".format(",".join(months))
all_prs=gbq.read_gbq(str(query), dialect='standard', project_id=PROJECT)
# Filter PRs to merged PRs
v=all_prs["merged"].values == 'true'
merged_all_prs = all_prs.iloc[v]
p=pd.Series(data=merged_all_prs["user_id"].values,index=merged_all_prs["pr_date"])
p=p.sort_index()
# Some solutions here: https://stackoverflow.com/questions/46470743/how-to-efficiently-compute-a-rolling-unique-count-in-a-pandas-time-series
# Need to figure out how to do a time based window
# TODO(jlewi): Is there a bug in the rolling window computation? creators ends up having the same number
# of rows as p; so we end up with multiple datapoints for each day; but the values aren't the same for
# each day. What is causing this effect?
creators = p.rolling('28d').apply(lambda arr: pd.Series(arr).nunique())
# We need to group the days. Rolling window will create a point for each data point
creators_df = pd.DataFrame({"day": creators.index, "num_authors": creators.values})
creators_df = creators_df.groupby("day", as_index=False).max()
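On the TODO above: a time-based `rolling('28d')` emits one output row per input row, and for rows sharing a date each window only extends up to that row, so values differ within a day; the `groupby(...).max()` collapses them to one value per day. A minimal reproduction on toy data (hypothetical user ids, not the BigQuery results):

```python
import pandas as pd

events = pd.Series(
    [1, 2, 1, 3],  # user ids, one row per PR event
    index=pd.to_datetime(["2021-01-01", "2021-01-01", "2021-01-02", "2021-02-05"]),
).sort_index()
uniq = events.rolling("28d").apply(lambda a: pd.Series(a).nunique())
per_day = uniq.groupby(uniq.index).max()  # one value per calendar day
print(per_day.tolist())  # -> [2.0, 2.0, 1.0]
```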
import altair as alt
chart = alt.Chart(creators_df, title= "Unique PR Authors (Last 28 Days)")
line = chart.mark_line().encode(
x= alt.X('day', title = "Day"),
y=alt.Y("num_authors", title="# Unique Authors"),
)
point = line + line.mark_point()
point.interactive()
"""
Explanation: Unique PR Creators
End of explanation
"""
pr_impulse=pd.Series(data=merged_all_prs["pr_id"].values,index=merged_all_prs["pr_date"])
pr_impulse=pr_impulse.sort_index()
unique_prs = pr_impulse.rolling('28d').apply(lambda arr: pd.Series(arr).nunique())
prs_df = pd.DataFrame({"day": unique_prs.index, "num_prs": unique_prs.values})
prs_df = prs_df.groupby("day", as_index=False).max()
chart = alt.Chart(prs_df, title= "Merged PRs (Last 28 Days)")
line = chart.mark_line().encode(
x= alt.X('day', title = "Day"),
y=alt.Y("num_prs", title="# PRs"),
)
point = line + line.mark_point()
point.interactive()
"""
Explanation: Number Prs
End of explanation
"""
release_months = []
year = 2019
for month in range(8, 11):
release_months.append("\"{0}{1:02}\"".format(year, month))
query = """
SELECT
DATE(created_at) AS pr_date,
actor.id,
actor.login,
JSON_EXTRACT(payload, '$.pull_request.merged') as merged,
JSON_EXTRACT(payload, '$.pull_request.id') as pr_id,
JSON_EXTRACT(payload, '$.pull_request.url') as pr_url,
JSON_EXTRACT(payload, '$.pull_request.user.id') as user_id
FROM `githubarchive.month.*`
WHERE
_TABLE_SUFFIX IN ({0})
AND type = 'PullRequestEvent'
AND org.login = 'kubeflow'
AND JSON_EXTRACT(payload, '$.action') IN ('"closed"')
""".format(",".join(release_months))
prs=gbq.read_gbq(str(query), dialect='standard', project_id=PROJECT)
# Filter PRs to merged PRs
v=prs["merged"].values == 'true'
merged_prs = prs.iloc[v]
unique_pr_logins = prs["user_id"].unique()
unique_prs = prs["pr_id"].unique()
merged_unique_logins = merged_prs["user_id"].unique()
merged_unique_prs = merged_prs["pr_id"].unique()
print("Number of unique pr authors (merged & unmerged) {0}".format(unique_pr_logins.shape))
print("Number of unique prs (merged & unmerged) {0}".format(unique_prs.shape))
print("Number of unique pr authors (merged) {0}".format(merged_unique_logins.shape))
print("Number of unique prs (merged) {0}".format(merged_unique_prs.shape))
"""
Explanation: Release stats per release (quarter)
Compute stats about a release
We do this based on time
You can see a sample of the payload at https://api.github.com/repos/kubeflow/pipelines/pulls/1038
End of explanation
"""
query = """
SELECT
distinct JSON_EXTRACT(payload, '$.action')
FROM `githubarchive.month.*`
WHERE
_TABLE_SUFFIX IN ({0})
""".format(",".join(months))
actions=gbq.read_gbq(str(query), dialect='standard', project_id=PROJECT)
actions
"""
Explanation: Get a list of distinct actions
Here's a list of events in the api
It looks like these are different from the ones in the github archive
End of explanation
"""
query = """
SELECT
DATE(created_at) AS issue_date,
actor.id,
actor.login,
JSON_EXTRACT(payload, '$.pull_request.id') as issue_id,
JSON_EXTRACT(payload, '$.pull_request.url') as issue_url
FROM `githubarchive.month.*`
WHERE
_TABLE_SUFFIX IN ({0})
AND type = 'IssuesEvent'
AND org.login = 'kubeflow'
AND JSON_EXTRACT(payload, '$.action') IN ('"opened"')
""".format(",".join(months))
issues=gbq.read_gbq(str(query), dialect='standard', project_id=PROJECT)
issue_counts=issues["issue_date"].value_counts()
issue_counts=issue_counts.sort_index()
rolling_issue_count = issue_counts.rolling('28d').sum()
issues_df = pd.DataFrame({"day": rolling_issue_count.index, "num_issues": rolling_issue_count.values})
issues_df = issues_df.groupby("day", as_index=False).max()
# Truncate the first 28 days because it will be a windowing effect.
chart = alt.Chart(issues_df[28:], title= "New Issues (Last 28 Days)")
line = chart.mark_line().encode(
x= alt.X('day', title = "Day"),
y=alt.Y("num_issues", title="# issues"),
)
point = line + line.mark_point()
point.interactive()
import matplotlib
from matplotlib import pylab
matplotlib.rcParams.update({'font.size': 22})
hf = pylab.figure()
hf.set_size_inches(18.5, 10.5)
pylab.plot(rolling_issue_count, linewidth=5)
ha = pylab.gca()
ha.set_title("New Kubeflow Issues (28 Days)")
ha.set_xlabel("Date")
ha.set_ylabel("# Of Issues")
"""
Explanation: New Issues Last 28 Days
End of explanation
"""
query = """
SELECT
*
FROM `githubarchive.month.*`
WHERE
_TABLE_SUFFIX IN ({0})
AND type = 'IssuesEvent'
AND org.login = 'kubeflow'
limit 20
""".format(",".join(months))
events=gbq.read_gbq(str(query), dialect='standard', project_id=PROJECT)
events
"""
Explanation: GetSomeSampleIssue Events
End of explanation
"""
query = """
SELECT
*
FROM `githubarchive.month.*`
WHERE
_TABLE_SUFFIX IN ({0})
AND type = 'PullRequestEvent'
AND org.login = 'kubeflow'
limit 20
""".format(",".join(months))
events=gbq.read_gbq(str(query), dialect='standard', project_id=PROJECT)
import pprint
import json
data = json.loads(events["payload"].values[3])
pprint.pprint(data)
data["pull_request"]["id"]
"""
Explanation: Get some sample pull request events
Want to inspect the data
End of explanation
"""
query = """
SELECT
distinct type
FROM `githubarchive.month.*`
WHERE
_TABLE_SUFFIX IN ({0})
AND org.login = 'kubeflow'
limit 20
""".format(",".join(months))
events=gbq.read_gbq(str(query), dialect='standard', project_id=PROJECT)
events
"""
Explanation: Get Distinct Types
End of explanation
"""
|
BrownDwarf/ApJdataFrames | notebooks/Rayner2009.ipynb | mit | import warnings
warnings.filterwarnings("ignore")
from astropy.io import ascii
import pandas as pd
"""
Explanation: ApJdataFrames Rayner et al. 2009
Title: THE INFRARED TELESCOPE FACILITY (IRTF) SPECTRAL LIBRARY: COOL STARS
Authors: John T. Rayner, Michael C. Cushing, and William D. Vacca
Data is from this paper:
http://iopscience.iop.org/article/10.1088/0067-0049/185/2/289/meta
End of explanation
"""
#! curl http://iopscience.iop.org/0067-0049/185/2/289/suppdata/apjs311476t7_ascii.txt > ../data/Rayner2009/apjs311476t7_ascii.txt
#! head ../data/Rayner2009/apjs311476t7_ascii.txt
nn = ['wl1', 'id1', 'wl2', 'id2', 'wl3', 'id3', 'wl4', 'id4']
tbl7 = pd.read_csv("../data/Rayner2009/apjs311476t7_ascii.txt", index_col=False,
sep="\t", skiprows=[0,1,2,3], names= nn)
"""
Explanation: Table 7 - Strong metal lines in the Arcturus spectrum
End of explanation
"""
line_list_unsorted = pd.concat([tbl7[[nn[0], nn[1]]].rename(columns={"wl1":"wl", "id1":"id"}),
tbl7[[nn[2], nn[3]]].rename(columns={"wl2":"wl", "id2":"id"}),
tbl7[[nn[4], nn[5]]].rename(columns={"wl3":"wl", "id3":"id"}),
tbl7[[nn[6], nn[7]]].rename(columns={"wl4":"wl", "id4":"id"})], ignore_index=True, axis=0)
line_list = line_list_unsorted.sort_values('wl').dropna().reset_index(drop=True)
"""
Explanation: This is a verbose way to do this, but whatever, it works:
End of explanation
"""
#line_list.tail()
sns.distplot(line_list.wl)
"""
Explanation: Finally:
End of explanation
"""
line_list.to_csv('../data/Rayner2009/tbl7_clean.csv', index=False)
"""
Explanation: The lines drop off towards $K-$band.
Save the file:
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.24/_downloads/93beebed8738eca8bfe26d41a12f4260/10_stc_class.ipynb | bsd-3-clause | import os
from mne import read_source_estimate
from mne.datasets import sample
print(__doc__)
# Paths to example data
sample_dir_raw = sample.data_path()
sample_dir = os.path.join(sample_dir_raw, 'MEG', 'sample')
subjects_dir = os.path.join(sample_dir_raw, 'subjects')
fname_stc = os.path.join(sample_dir, 'sample_audvis-meg')
"""
Explanation: The :class:SourceEstimate <mne.SourceEstimate> data structure
Source estimates, commonly referred to as STC (Source Time Courses),
are obtained from source localization methods.
Source localization methods solve the so-called 'inverse problem'.
MNE provides different methods for solving it:
dSPM, sLORETA, LCMV, MxNE etc.
Source localization consists in projecting the EEG/MEG sensor data into
a 3-dimensional 'source space' positioned in the individual subject's brain
anatomy. Hence the data is transformed such that the recorded time series at
each sensor location maps to time series at each spatial location of the
'source space', where our source estimates are defined.
An STC object contains the amplitudes of the sources over time.
It only stores the amplitudes of activations but
not the locations of the sources. To get access to the locations
you need to have the :class:source space <mne.SourceSpaces>
(often abbreviated src) used to compute the
:class:forward operator <mne.Forward> (often abbreviated fwd).
See tut-forward for more details on forward modeling, and
tut-inverse-methods
for an example of source localization with dSPM, sLORETA or eLORETA.
Source estimates come in different forms:
- :class:`mne.SourceEstimate`: For cortically constrained source spaces.
- :class:`mne.VolSourceEstimate`: For volumetric source spaces
- :class:`mne.VectorSourceEstimate`: For cortically constrained source
spaces with vector-valued source activations (strength and orientation)
- :class:`mne.MixedSourceEstimate`: For source spaces formed of a
combination of cortically constrained and volumetric sources.
<div class="alert alert-info"><h4>Note</h4><p>:class:`(Vector) <mne.VectorSourceEstimate>`
:class:`SourceEstimate <mne.SourceEstimate>` are surface representations
mostly used together with `FreeSurfer <tut-freesurfer-mne>`
surface representations.</p></div>
Let's get ourselves an idea of what a :class:mne.SourceEstimate really
is. We first set up the environment and load some data:
End of explanation
"""
stc = read_source_estimate(fname_stc, subject='sample')
# Define plotting parameters
surfer_kwargs = dict(
hemi='lh', subjects_dir=subjects_dir,
clim=dict(kind='value', lims=[8, 12, 15]), views='lateral',
initial_time=0.09, time_unit='s', size=(800, 800),
smoothing_steps=5)
# Plot surface
brain = stc.plot(**surfer_kwargs)
# Add title
brain.add_text(0.1, 0.9, 'SourceEstimate', 'title', font_size=16)
"""
Explanation: Load and inspect example data
This data set contains source estimation data from an audio visual task. It
has been mapped onto the inflated cortical surface representation obtained
from FreeSurfer <tut-freesurfer-mne>
using the dSPM method. It highlights a noticeable peak in the auditory
cortices.
Let's see what it looks like.
End of explanation
"""
shape = stc.data.shape
print('The data has %s vertex locations with %s sample points each.' % shape)
"""
Explanation: SourceEstimate (stc)
A source estimate contains the time series of a activations
at spatial locations defined by the source space.
In the context of a FreeSurfer surfaces - which consist of 3D triangulations
- we could call each data point on the inflated brain
representation a vertex. If every vertex represents the spatial location
of a time series, the time series and spatial location can be written into a
matrix, where to each vertex (rows) at multiple time points (columns) a value
can be assigned. This value is the strength of our signal at a given point in
space and time. Exactly this matrix is stored in stc.data.
Let's have a look at the shape
End of explanation
"""
shape_lh = stc.lh_data.shape
print('The left hemisphere has %s vertex locations with %s sample points each.'
% shape_lh)
"""
Explanation: We see that stc carries 7498 time series of 25 samples length. Those time
series belong to 7498 vertices, which in turn represent locations
on the cortical surface. So where do those vertex values come from?
FreeSurfer separates both hemispheres and creates surfaces
representation for left and right hemisphere. Indices to surface locations
are stored in stc.vertices. This is a list with two arrays of integers,
that index a particular vertex of the FreeSurfer mesh. A value of 42 would
hence map to the x,y,z coordinates of the mesh with index 42.
See next section on how to get access to the positions in a
:class:mne.SourceSpaces object.
Since both hemispheres are always represented separately, both attributes
introduced above, can also be obtained by selecting the respective
hemisphere. This is done by adding the correct prefix (lh or rh).
End of explanation
"""
is_equal = stc.lh_data.shape[0] + stc.rh_data.shape[0] == stc.data.shape[0]
print('The number of vertices in stc.lh_data and stc.rh_data do ' +
('not ' if not is_equal else '') +
'sum up to the number of rows in stc.data')
"""
Explanation: Since we did not change the time representation, only the selected subset of
vertices and hence only the row size of the matrix changed. We can check if
the rows of stc.lh_data and stc.rh_data sum up to the value we had
before.
End of explanation
"""
peak_vertex, peak_time = stc.get_peak(hemi='lh', vert_as_index=True,
time_as_index=True)
"""
Explanation: Indeed and as the mindful reader already suspected, the same can be said
about vertices. stc.lh_vertno thereby maps to the left and
stc.rh_vertno to the right inflated surface representation of
FreeSurfer.
Relationship to SourceSpaces (src)
As mentioned above, :class:src <mne.SourceSpaces> carries the mapping from
stc to the surface. The surface is built up from a
triangulated mesh <https://en.wikipedia.org/wiki/Surface_triangulation>_
for each hemisphere. Each triangle building up a face consists of 3 vertices.
Since src is a list of two source spaces (left and right hemisphere), we can
access the respective data by selecting the source space first. Faces
building up the left hemisphere can be accessed via src[0]['tris'], where
the index $0$ stands for the left and $1$ for the right
hemisphere.
The values in src[0]['tris'] refer to row indices in src[0]['rr'].
Here we find the actual coordinates of the surface mesh. Hence every index
value for vertices will select a coordinate from here. Furthermore
src[0]['vertno'] stores the same data as stc.lh_vertno,
except when working with sparse solvers such as
:func:mne.inverse_sparse.mixed_norm, as then only a fraction of
vertices actually have non-zero activations.
In other words stc.lh_vertno equals src[0]['vertno'], whereas
stc.rh_vertno equals src[1]['vertno']. Thus the Nth time series in
stc.lh_data corresponds to the Nth value in stc.lh_vertno and
src[0]['vertno'] respectively, which in turn map the time series to a
specific location on the surface, represented as the set of cartesian
coordinates stc.lh_vertno[N] in src[0]['rr'].
Let's obtain the peak amplitude of the data as vertex and time point index
End of explanation
"""
peak_vertex_surf = stc.lh_vertno[peak_vertex]
peak_value = stc.lh_data[peak_vertex, peak_time]
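The index chain described above (row in stc.lh_data -> entry of stc.lh_vertno -> row of src[0]['rr']) can be sketched with small synthetic arrays (hypothetical values, not real MNE data):

```python
import numpy as np

# Hypothetical stand-ins: 5 surface points, of which 3 carry source activity
rr = np.array([[0.0, 0.0, 0.0],   # plays the role of src[0]['rr']
               [1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0],
               [1.0, 1.0, 0.0]])
lh_vertno = np.array([1, 3, 4])   # plays the role of stc.lh_vertno
lh_data = np.array([[0.1, 0.2],   # plays the role of stc.lh_data (3 vertices x 2 times)
                    [0.9, 0.4],
                    [0.3, 0.0]])

# Peak as (vertex index, time index), like stc.get_peak(..., vert_as_index=True)
peak_vertex, peak_time = np.unravel_index(np.argmax(lh_data), lh_data.shape)
# Map the row index to a surface vertex, then to its cartesian coordinate
peak_vertex_surf = lh_vertno[peak_vertex]
peak_coord = rr[peak_vertex_surf]
```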
"""
Explanation: The first value thereby indicates which vertex and the second which time
point index from within stc.lh_vertno or stc.lh_data is used. We can
use the respective information to get the index of the surface vertex
resembling the peak and its value.
End of explanation
"""
brain = stc.plot(**surfer_kwargs)
# We add the new peak coordinate (as vertex index) as an annotation dot
brain.add_foci(peak_vertex_surf, coords_as_verts=True, hemi='lh', color='blue')
# We add a title as well, stating the amplitude at this time and location
brain.add_text(0.1, 0.9, 'Peak coordinate', 'title', font_size=14)
"""
Explanation: Let's visualize this as well, using the same surfer_kwargs as in the
beginning.
End of explanation
"""
|
I2MAX-LearningProject/Flask-server | Tests/Prophet_trial2_8_16.ipynb | mit | rawArrayDatas=[["2017-08-11", "2017-08-12", "2017-08-13", "2017-08-14", "2017-08-15","2017-08-16"],
[20.0, 30.0, 40.0, 50.0, 60.0,20.0]]
processId=12
forecastDay=4
"""
Explanation: Input to the actual function
End of explanation
"""
mockForecast={}
rmse={}
forecast=[]
realForecast={}
trainSize=int(len(rawArrayDatas[0]) * 0.7)
testSize=len(rawArrayDatas[0])-trainSize
"""
Explanation: bug fix 1
rawArrayDatas -> rawArrayDatas[0]
Since rawArrayDatas is a two-dimensional array, len(rawArrayDatas) is 2,
while len(rawArrayDatas[0]) gives the number of observations (6 in this notebook).
End of explanation
"""
print(trainSize)
print(testSize)
"""
Explanation: bug fix1
rawArrayDatas -> rawArrayDatas[0]
rawArrayDatas이 이차원 배열이어서 len(rawArrayDatas)=2가 되고,
len(rawArrayDatas[0])가 5가 된다.
End of explanation
"""
ds = rawArrayDatas[0]
y = list(np.log(rawArrayDatas[1]))
sales = list(zip(ds, y))
rawArrayDatas
sales
print(type(ds), type(y))
preprocessedData= pd.DataFrame(data = sales, columns=['ds', 'y'])
preprocessedData
model = Prophet()
model.fit(preprocessedData)
future = model.make_future_dataframe(periods=forecastDay)
forecast = future[-forecastDay:]
# Python
forecast = model.predict(future)
forecast[['ds', 'yhat']].tail()
forecast
forecast
model.plot(forecast)
forecastData= [np.exp(y) for y in forecast['yhat'][-forecastDay:]]
print(forecastData)
data=rawArrayDatas[1]+forecastData
data
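The rmse dict created near the top of this notebook is never filled in this excerpt; one plausible way to fill it from a held-out tail, sketched with hypothetical numbers (not the notebook's data), is:

```python
import numpy as np

# Hypothetical held-out actuals vs. the model's forecasts for the same dates
test_actual = np.array([60.0, 20.0])
test_forecast = np.array([55.0, 27.0])
rmse_value = float(np.sqrt(np.mean((test_actual - test_forecast)**2)))
```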
ans=np.log10(10)
ans
np.exp(ans)  # note: np.exp inverts np.log (natural log), not np.log10, so this gives e (2.718...), not 10
data= [np.exp(y) for y in forecast['yhat']]
print(data)
date= [d.strftime('%Y-%m-%d') for d in forecast['ds']]
date
dateStamp = list(forecast['ds'][-forecastDay:])
dateStamp
date = [p.strftime('%Y-%m-%d') for p in dateStamp]
date
realForecast['Bayseian'] = Bayseian(preprocessedData=XY, forecastDay=forecastDay)[0]
realForecast
XY = PrepareBayseian(rawArrayDatas)
Bayseian(preprocessedData=XY, forecastDay=forecastDay)[0]
LearningModuleRunner(rawArrayDatas, processId, forecastDay)
realForecast
type(realForecast['Bayseian'])
import sys
print(sys.version)
# hard-coding to prepare the data
#rawArrayDatas
df0=pd.read_csv('./data/397_replace0with1.csv')
df0['y'] = np.log(df0['y'])
ds=df0['ds']
y=df0['y']
#processId
processId=1
# the regular data-input process
rawArrayDatas=[['2016-01-01','2016-01-02','2016-01-03','2016-01-04','2016-01-05'],[10,10,12,13,14]]
ds=rawArrayDatas[0]
y=rawArrayDatas[1]
sales=list(zip(ds,y))
day=5
sales
rawArrayDatas[:][:2]
rawArrayDatas=[['2016-01-01','2016-01-02','2016-01-03','2016-01-04','2016-01-05'],[10,10,12,13,14]]
ds=rawArrayDatas[0]
y=rawArrayDatas[1]
sales = list(zip(rawArrayDatas[0], rawArrayDatas[1]))
y
sales
ds=rawArrayDatas[0]
# --> extract year, month, dayOfWeek
year=[2016,2016, 2016, 2017]
month=[1,1,1,1]
dayOfWeek=[1,2,3,4]
y=rawArrayDatas[1][:len(train)]
ds=rawArrayDatas[0]
# --> extract year, month, dayOfWeek
year=np.random.beta(2000, 2017, len(train))*(2017-2000)
month=np.random.beta(1, 12, len(train))*(12-1)
dayOfWeek=np.random.beta(0, 6, len(train))*(6-0)
y=rawArrayDatas[1][:len(train)]
year
month
dayOfWeek
sales=list(zip(year, month, dayOfWeek, y))
sales
x = pd.DataFrame(data = sales, columns=['year', 'month', 'dayOfWeek','y'])
x
y
np.size(y)
x = pd.DataFrame(data = sales, columns=['year', 'month', 'dayOfWeek','y'])
x['month']
type(rawArrayDatas)
x
type(x)
list(df)
# Beware: if any value is 0, np.log yields -inf and Prophet fails with an "Initialization failed." error.
m = Prophet()
m.fit(df);
future = m.make_future_dataframe(periods=day)
future.tail()
future[-day:]
future
forecast=m.predict(future)
forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
temp=list(forecast['ds'][-day:])
date=[p.strftime('%Y-%m-%d') for p in temp]
date=[p.strftime('%Y-%m-%d') for p in temp]
date
temp[0].strftime('We are the %d, %b %Y')
m.plot(forecast)
m.plot_components(forecast)
"""
Explanation: Preprocessing of the full data
End of explanation
"""
|
ajgpitch/qutip-notebooks | development/development-smesolver-new-methods.ipynb | lgpl-3.0 | %matplotlib inline
%config InlineBackend.figure_formats = ['svg']
from qutip import *
from qutip.ui.progressbar import BaseProgressBar
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
y_sse = None
import time
"""
Explanation: Test for different SME solvers against analytical solution for oscillator squeezing
Manuel Grimm, Niels Lörch, and Denis V. Vasilyev
30 August 2016
Updated by Eric Giguere
March 2018
We solve the stochastic master equation for an oscillator coupled to a 1D field as discussed in [1]. There is a deterministic differential equation for the variances of the oscillator quadratures $\langle\delta X^2\rangle$ and $\langle\delta P^2\rangle$. This allows for a direct comparison between the numerical solution and the exact solution for a single quantum trajectory. In particular, we study scaling of deviations from analytical solution as a function of stepsize for different solvers:
'euler-maruyama', 'pc-euler', 'milstein', 'milstein-imp', 'taylor15', 'taylor15-imp'.
It is important to check the correct scaling since it is very easy to implement a higher order method in a wrong way such that it still works but as a lower order method.
In this section we solve SME with a single Wiener increment:
$\mathrm{d}\rho = D[s]\rho\mathrm{d}t + H[s]\rho \mathrm{d}W + \gamma D[a]\rho\mathrm{d}t$
The steady state solution for the variance $V_{\mathrm{c}} = \langle X^2\rangle - \langle X\rangle^2$ reads
$V_{\mathrm{c}} = \frac1{4\alpha^{2}}\left[\alpha\beta - \gamma + \sqrt{(\gamma-\alpha\beta )^{2} + 4\gamma \alpha^2}\right]$
where $\alpha$ and $\beta$ are parametrizing the interaction between light and the oscillator such that the jump operator is given by $s = \frac{\alpha+\beta}2 a + \frac{\alpha-\beta}2 a^{\dagger}$
[1] D. V. Vasilyev, C. a. Muschik, and K. Hammerer, Physical Review A 87, 053820 (2013). <a href="http://arxiv.org/abs/1303.5888">arXiv:1303.5888</a>
End of explanation
"""
def arccoth(x):
return 0.5*np.log((1.+x)/(x-1.))
############ parameters #############
th = 0.1 # Interaction parameter
alpha = np.cos(th)
beta = np.sin(th)
gamma = 1.
def gammaf(t):
return 0.25+t/12+t*t/6
def f_gamma(t,*args):
return (0.25+t/12+t*t/6)**(0.5)
################# Solution of the differential equation for the variance Vc ####################
T = 6.
N_store = int(20*T+1)
tlist = np.linspace(0,T,N_store)
y0 = 0.5
def func(y, t):
return -(gammaf(t) - alpha*beta)*y - 2*alpha*alpha*y*y + 0.5*gammaf(t)
y_td = odeint(func, y0, tlist)
y_td = y_td.ravel()
def func(y, t):
return -(gamma - alpha*beta)*y - 2*alpha*alpha*y*y + 0.5*gamma
y = odeint(func, y0, tlist)
############ Exact steady state solution for Vc #########################
Vc = (alpha*beta - gamma + np.sqrt((gamma-alpha*beta)**2 + 4*gamma*alpha**2))/(4*alpha**2)
#### Analytic solution
A = (gamma**2 + alpha**2 * (beta**2 + 4*gamma) - 2*alpha*beta*gamma)**0.5
B = arccoth((-4*alpha**2*y0 + alpha*beta - gamma)/A)
y_an = (alpha*beta - gamma + A / np.tanh(0.5*A*tlist - B))/(4*alpha**2)
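A self-contained numerical check, using the same parameter values as above, that the analytic transient has essentially reached the steady-state value by t = 6:

```python
import numpy as np

# Same parameter values as the cells above
th, gamma, y0, T = 0.1, 1.0, 0.5, 6.0
alpha, beta = np.cos(th), np.sin(th)

# Steady-state variance V_c and the analytic transient evaluated at t = T
Vc = (alpha*beta - gamma + np.sqrt((gamma - alpha*beta)**2 + 4*gamma*alpha**2))/(4*alpha**2)
A = (gamma**2 + alpha**2*(beta**2 + 4*gamma) - 2*alpha*beta*gamma)**0.5
v = -4*alpha**2*y0 + alpha*beta - gamma
B = 0.5*np.log((v + A)/(v - A))            # arccoth(v / A)
y_final = (alpha*beta - gamma + A/np.tanh(0.5*A*T - B))/(4*alpha**2)

# By t = 6 the transient has converged to the steady state to high accuracy
assert abs(y_final - Vc) < 1e-4
```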
f, (ax, ax2) = plt.subplots(2, 1, sharex=True)
ax.set_title('Variance as a function of time')
ax.plot(tlist,y)
ax.plot(tlist,Vc*np.ones_like(tlist))
ax.plot(tlist,y_an)
ax.set_ylim(0,0.5)
ax2.set_title('Deviation of odeint from analytic solution')
ax2.set_xlabel('t')
ax2.set_ylabel(r'$\epsilon$')
ax2.plot(tlist,y_an - y.T[0]);
####################### Model ###########################
N = 30 # number of Fock states
Id = qeye(N)
a = destroy(N)
s = 0.5*((alpha+beta)*a + (alpha-beta)*a.dag())
x = (a + a.dag())/np.sqrt(2)
H = Id
c_op = [np.sqrt(gamma)*a]
c_op_td = [[a,f_gamma]]
sc_op = [s]
e_op = [x, x*x]
rho0 = fock_dm(N,0) # initial vacuum state
#sc_len=1 # one stochastic operator
############## time steps and trajectories ###################
ntraj = 1 #100 # number of trajectories
T = 6. # final time
N_store = int(20*T+1) # number of time steps for which we save the expectation values/density matrix
tlist = np.linspace(0,T,N_store)
ddt = (tlist[1]-tlist[0])
Nsubs = (10*np.logspace(0,1,10)).astype(int)
stepsizes = [ddt/j for j in Nsubs] # logarithmically decreasing step sizes
Nt = len(Nsubs) # number of step sizes that we compare
Nsubmax = Nsubs[-1] # Number of intervals for the smallest step size;
dtmin = (tlist[1]-tlist[0])/(Nsubmax)
"""
Explanation: Just check that analytical solution coincides with the solution of ODE for the variance
End of explanation
"""
ntraj = 1
def run_cte_cte(**kwargs):
epsilon = np.zeros(Nt)
std = np.zeros(Nt)
print(kwargs)
for j in range(0,ntraj):
for jj in range(0,Nt):
Nsub = Nsubs[jj]
sol = smesolve(H, rho0, tlist, c_op, sc_op, e_op, nsubsteps=Nsub, **kwargs)
epsilon_j = 1/T * np.sum(np.abs(y_an - (sol.expect[1]-sol.expect[0]*sol.expect[0].conj())))*ddt
epsilon[jj] += epsilon_j
std[jj] += epsilon_j
epsilon/= ntraj
std = np.sqrt(1/ntraj * (1/ntraj * std - epsilon**2))
return epsilon
def get_stats_cte_cte(**kw):
start = time.time()
y = run_cte_cte(**kw)
tag = str(kw["solver"])
x = np.log(stepsizes)
ly = np.log(y)
fit = np.polyfit(x, ly, 1)[0]
return y,tag,fit,time.time()-start
stats_cte_cte = []
stats_cte_cte.append(get_stats_cte_cte(solver='euler-maruyama'))
stats_cte_cte.append(get_stats_cte_cte(solver='platen'))
stats_cte_cte.append(get_stats_cte_cte(solver='pred-corr'))
stats_cte_cte.append(get_stats_cte_cte(solver='milstein'))
stats_cte_cte.append(get_stats_cte_cte(solver='milstein-imp'))
stats_cte_cte.append(get_stats_cte_cte(solver='pred-corr-2'))
stats_cte_cte.append(get_stats_cte_cte(solver='explicit1.5'))
stats_cte_cte.append(get_stats_cte_cte(solver="taylor1.5"))
stats_cte_cte.append(get_stats_cte_cte(solver="taylor1.5-imp", args={"tol":1e-8}))
stats_cte_cte.append(get_stats_cte_cte(solver="taylor2.0"))
stats_cte_cte.append(get_stats_cte_cte(solver="taylor2.0", noiseDepth=500))
fig = plt.figure()
ax = plt.subplot(111)
mark = "o*vspx+^<>1hdD"
for i,run in enumerate(stats_cte_cte):
ax.loglog(stepsizes, run[0], mark[i], label=run[1]+": " + str(run[2]))
ax.loglog(stepsizes, 0.1*np.array(stepsizes)**0.5, label="$\propto\Delta t^{1/2}$")
ax.loglog(stepsizes, 0.1*np.array(stepsizes)**1, label="$\propto\Delta t$")
ax.loglog(stepsizes, 0.1*np.array(stepsizes)**1.5, label="$\propto\Delta t^{3/2}$")
ax.loglog(stepsizes, 0.2*np.array(stepsizes)**2.0, label="$\propto\Delta t^{2}$")
lgd=ax.legend(loc='center left', bbox_to_anchor=(1, 0.64), prop={'size':12})
"""
Explanation: Test of different SME solvers
Plotting the figure
End of explanation
"""
ntraj = 1
def run_(**kwargs):
epsilon = np.zeros(Nt)
std = np.zeros(Nt)
print(kwargs)
for j in range(0,ntraj):
for jj in range(0,Nt):
Nsub = Nsubs[jj]
sol = smesolve(H, rho0, tlist, c_op_td, sc_op, e_op, nsubsteps=Nsub, **kwargs)
epsilon_j = 1/T * np.sum(np.abs(y_td - (sol.expect[1]-sol.expect[0]*sol.expect[0].conj())))*ddt
epsilon[jj] += epsilon_j
std[jj] += epsilon_j
epsilon/= ntraj
std = np.sqrt(1/ntraj * (1/ntraj * std - epsilon**2))
return epsilon
def get_stats_d1(**kw):
start = time.time()
y = run_(**kw)
tag = str(kw["solver"])
x = np.log(stepsizes)
ly = np.log(y)
fit = np.polyfit(x, ly, 1)[0]
return y,tag,fit,time.time()-start
stat_d1 = []
stat_d1.append(get_stats_d1(solver='euler-maruyama'))
stat_d1.append(get_stats_d1(solver='platen'))
stat_d1.append(get_stats_d1(solver='pc-euler'))
stat_d1.append(get_stats_d1(solver='milstein'))
stat_d1.append(get_stats_d1(solver='milstein-imp'))
stat_d1.append(get_stats_d1(solver='pc-euler-2'))
stat_d1.append(get_stats_d1(solver='explicit1.5'))
stat_d1.append(get_stats_d1(solver="taylor1.5"))
stat_d1.append(get_stats_d1(solver="taylor1.5-imp", args={"tol":1e-8}))
stat_d1.append(get_stats_d1(solver="taylor2.0"))
stat_d1.append(get_stats_d1(solver="taylor2.0", noiseDepth=500))
fig = plt.figure()
ax = plt.subplot(111)
mark = "o*vspx+^<>1hdD"
for i,run in enumerate(stat_d1):
ax.loglog(stepsizes, run[0], mark[i], label=run[1]+": " + str(run[2]))
ax.loglog(stepsizes, 0.1*np.array(stepsizes)**0.5, label="$\propto\Delta t^{1/2}$")
ax.loglog(stepsizes, 0.05*np.array(stepsizes)**1, label="$\propto\Delta t$")
ax.loglog(stepsizes, 0.12*np.array(stepsizes)**1.5, label="$\propto\Delta t^{3/2}$")
ax.loglog(stepsizes, 0.7*np.array(stepsizes)**2.0, label="$\propto\Delta t^{2}$")
lgd=ax.legend(loc='center left', bbox_to_anchor=(1, 0.64), prop={'size':12})
"""
Explanation: Deterministic part depend on time
End of explanation
"""
def f(t, args):
return 0.5+0.25*t-t*t*0.125
Nsubs = (15*np.logspace(0,0.8,10)).astype(int)
stepsizes = [ddt/j for j in Nsubs] # logarithmically decreasing step sizes
Nt = len(Nsubs) # number of step sizes that we compare
Nsubmax = Nsubs[-1] # Number of intervals for the smallest step size;
dtmin = (tlist[1]-tlist[0])/(Nsubmax)
sc_op_td = [[sc_op[0],f]]
sol = smesolve(H, rho0, tlist, c_op_td, sc_op_td, e_op, nsubsteps=1000, method="homodyne",solver="taylor15")
y_btd = sol.expect[1]-sol.expect[0]*sol.expect[0].conj()
plt.plot(y_btd)
ntraj = 1
def run_(**kwargs):
epsilon = np.zeros(Nt)
std = np.zeros(Nt)
print(kwargs)
for j in range(0,ntraj):
for jj in range(0,Nt):
Nsub = Nsubs[jj]
sol = smesolve(H, rho0, tlist, c_op_td, sc_op_td, e_op, nsubsteps=Nsub, **kwargs)
epsilon_j = 1/T * np.sum(np.abs(y_btd - (sol.expect[1]-sol.expect[0]*sol.expect[0].conj())))*ddt
epsilon[jj] += epsilon_j
std[jj] += epsilon_j
epsilon/= ntraj
std = np.sqrt(1/ntraj * (1/ntraj * std - epsilon**2))
return epsilon
def get_stats_d2(**kw):
start = time.time()
y = run_(**kw)
tag = str(kw["solver"])
x = np.log(stepsizes)
ly = np.log(y)
fit = np.polyfit(x, ly, 1)[0]
return y,tag,fit,time.time()-start
stats_d2 = []
stats_d2.append(get_stats_d2(solver='euler-maruyama'))
stats_d2.append(get_stats_d2(solver='platen'))
stats_d2.append(get_stats_d2(solver='pc-euler'))
stats_d2.append(get_stats_d2(solver='milstein'))
stats_d2.append(get_stats_d2(solver='milstein-imp'))
stats_d2.append(get_stats_d2(solver='pc-euler-2'))
stats_d2.append(get_stats_d2(solver='explicit1.5'))
stats_d2.append(get_stats_d2(solver='taylor1.5'))
stats_d2.append(get_stats_d2(solver='taylor1.5-imp', args={"tol":2e-9}))
fig = plt.figure()
ax = plt.subplot(111)
mark = "o*vspx+^<>1hdD"
for i,run in enumerate(stats_d2):
ax.loglog(stepsizes, run[0], mark[i], label=run[1]+": " + str(run[2]))
ax.loglog(stepsizes, 0.1*np.array(stepsizes)**0.5, label="$\propto\Delta t^{1/2}$")
ax.loglog(stepsizes, 0.05*np.array(stepsizes)**1, label="$\propto\Delta t$")
ax.loglog(stepsizes, 0.1*np.array(stepsizes)**1.5, label="$\propto\Delta t^{3/2}$")
lgd=ax.legend(loc='center left', bbox_to_anchor=(1, 0.64), prop={'size':12})
"""
Explanation: Both d1 and d2 time-dependent
Using a taylor simulation with large nsubsteps instead of analytical solution.
End of explanation
"""
def f(t, args):
return 0.5+0.25*t-t*t*0.125
def g(t, args):
return 0.25+0.25*t-t*t*0.125
Nsubs = (20*np.logspace(0,0.6,8)).astype(int)
stepsizes = [ddt/j for j in Nsubs] # logarithmically decreasing step sizes
Nt = len(Nsubs) # number of step sizes that we compare
Nsubmax = Nsubs[-1] # Number of intervals for the smallest step size;
dtmin = (tlist[1]-tlist[0])/(Nsubmax)
sc_op2_td = [[sc_op[0],f],[sc_op[0],g]]
sol = smesolve(H, rho0, tlist, c_op_td, sc_op2_td, e_op, nsubsteps=1000, method="homodyne",solver=152)
y_btd2 = sol.expect[1]-sol.expect[0]*sol.expect[0].conj()
plt.plot(y_btd2)
ntraj = 1
def run_multi(**kwargs):
epsilon = np.zeros(Nt)
std = np.zeros(Nt)
print(kwargs)
for j in range(0,ntraj):
for jj in range(0,Nt):
Nsub = Nsubs[jj]
sol = smesolve(H, rho0, tlist, c_op_td, sc_op2_td, e_op, nsubsteps=Nsub, **kwargs)
epsilon_j = 1/T * np.sum(np.abs(y_btd2 - (sol.expect[1]-sol.expect[0]*sol.expect[0].conj())))*ddt
epsilon[jj] += epsilon_j
std[jj] += epsilon_j
epsilon/= ntraj
std = np.sqrt(1/ntraj * (1/ntraj * std - epsilon**2))
return epsilon
def get_stats_multi(**kw):
start = time.time()
y = run_multi(**kw)
tag = str(kw["solver"])
x = np.log(stepsizes)
ly = np.log(y)
fit = np.polyfit(x, ly, 1)[0]
return y,tag,fit,time.time()-start
stats_multi = []
stats_multi.append(get_stats_multi(solver='euler-maruyama'))
stats_multi.append(get_stats_multi(solver='platen'))
stats_multi.append(get_stats_multi(solver='pc-euler'))
stats_multi.append(get_stats_multi(solver='milstein'))
stats_multi.append(get_stats_multi(solver='milstein-imp'))
stats_multi.append(get_stats_multi(solver='pc-euler-2'))
stats_multi.append(get_stats_multi(solver='explicit1.5'))
stats_multi.append(get_stats_multi(solver="taylor1.5"))
stats_multi.append(get_stats_multi(solver="taylor1.5-imp"))
fig = plt.figure()
ax = plt.subplot(111)
mark = "o*vspx+^<>1hdD"
for i,run in enumerate(stats_multi):
ax.loglog(stepsizes, run[0], mark[i], label=run[1]+": " + str(run[2]))
ax.loglog(stepsizes, 0.1*np.array(stepsizes)**0.5, label="$\propto\Delta t^{1/2}$")
ax.loglog(stepsizes, 0.1*np.array(stepsizes)**1, label="$\propto\Delta t$")
ax.loglog(stepsizes, 0.2*np.array(stepsizes)**1.5, label="$\propto\Delta t^{3/2}$")
lgd=ax.legend(loc='center left', bbox_to_anchor=(1, 0.64), prop={'size':12})
"""
Explanation: Multiple sc_ops with time dependence
End of explanation
"""
from qutip.ipynbtools import version_table
version_table()
"""
Explanation: Versions
End of explanation
"""
|
drvinceknight/cfm | assets/assessment/2020-2021/ind/assignment.ipynb | mit | import random
def sample_experiment():
### BEGIN SOLUTION
### END SOLUTION
"""
Explanation: Computing for Mathematics - 2020/2021 individual coursework
Important Do not delete the cells containing:
```
BEGIN SOLUTION
END SOLUTION
```
write your solution attempts in those cells.
To submit this notebook:
Change the name of the notebook from main to: <student_number>. For example, if your student number is c1234567 then change the name of the notebook to c1234567.
Write all your solution attempts in the correct locations;
Do not delete any code that is already in the cells;
Save the notebook (File>Save As);
Follow the instructions given in class/email to submit.
Question 1
(Hint: This question is similar to the first exercise of the Probability chapter of Python for mathematics.)
For each of the following, write a function sample_experiment, and repeatedly use it to simulate the probability of an event occurring with the following chances.
For each chance output the simulated probability.
a. $0$
Available marks: 2
End of explanation
"""
def sample_experiment():
### BEGIN SOLUTION
### END SOLUTION
"""
Explanation: b. $1/2$
Available marks: 2
End of explanation
"""
def sample_experiment():
### BEGIN SOLUTION
### END SOLUTION
"""
Explanation: c. $3/4$
Available marks: 2
End of explanation
"""
def sample_experiment():
### BEGIN SOLUTION
### END SOLUTION
"""
Explanation: d. $1$
Available marks: 2
End of explanation
"""
import itertools
pets = ("cat", "dog", "fish", "lizard", "hamster")
### BEGIN SOLUTION
### END SOLUTION
"""
Explanation: Question 2
(Hint: This question is similar to the second exercise of the Combinatorics chapter of Python for mathematics.)
a. Create a variable number_of_permutations that gives the number of permutations of pets = ("cat", "dog", "fish", "lizard", "hamster") of size 4. Do this by generating and counting them.
Available marks: 2
End of explanation
"""
import scipy.special
### BEGIN SOLUTION
### END SOLUTION
"""
Explanation: b. Create a variable direct_number_of_permutations that gives the number of permutations of pets of size 4 by direct computation.
Available marks: 1
End of explanation
"""
import sympy as sym
x = sym.Symbol("x")
c1 = sym.Symbol("c1")
### BEGIN SOLUTION
### END SOLUTION
"""
Explanation: Question 3
(Hint: This question uses concepts from the Algebra and Calculus chapters of Python for mathematics.)
Consider the second derivative $f''(x)=4 x + \cos(x)$.
a. Create a variable derivative which has value $f'(x)$ (use the variables x and c1 if necessary):
Available marks: 3
End of explanation
"""
### BEGIN SOLUTION
### END SOLUTION
"""
Explanation: b. Create a variable equation that has value the equation $f'(0)=0$.
Available marks: 4
End of explanation
"""
### BEGIN SOLUTION
### END SOLUTION
"""
Explanation: c. Using the solution to that equation, output the value of $\int_{0}^{5\pi}f(x)dx$.
Available marks: 4
End of explanation
"""
c = sym.Symbol("c")
### BEGIN SOLUTION
### END SOLUTION
"""
Explanation: Question 4
(Hint: This question uses concepts from the Calculus and Sequences chapters of Python for mathematics.)
Consider this recursive definition for the sequence $a_n$:
$$
a_n = \begin{cases}
c & \text{ if } n = 1\\
3a_{n - 1} + \frac{c}{n} & \text{ otherwise}
\end{cases}
$$
a. Output the sum of the first 15 terms.
Available marks: 5
End of explanation
"""
### BEGIN SOLUTION
### END SOLUTION
"""
Explanation: b. Given that $c=2$ output $\frac{df}{dx}$ where:
$$
f(x) = a_1 + a_2 x + a_3 x ^ 2 + a_4 x ^ 3
$$
Available marks: 4
End of explanation
"""
### BEGIN SOLUTION
### END SOLUTION
"""
Explanation: c. Given that $c=2$ output $\int f(x)dx$
Available marks: 4
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/production_ml/labs/distributed_training_with_TF.ipynb | apache-2.0 | # Import TensorFlow
import tensorflow as tf
"""
Explanation: Distributed training with TensorFlow
Learning Objectives
1. Create MirroredStrategy
2. Integrate tf.distribute.Strategy with tf.keras
3. Create the input dataset and call tf.distribute.Strategy.experimental_distribute_dataset
Introduction
tf.distribute.Strategy is a TensorFlow API to distribute training
across multiple GPUs, multiple machines or TPUs. Using this API, you can distribute your existing models and training code with minimal code changes.
tf.distribute.Strategy has been designed with these key goals in mind:
Easy to use and support multiple user segments, including researchers, ML engineers, etc.
Provide good performance out of the box.
Easy switching between strategies.
tf.distribute.Strategy can be used with a high-level API like Keras, and can also be used to distribute custom training loops (and, in general, any computation using TensorFlow).
In TensorFlow 2.x, you can execute your programs eagerly, or in a graph using tf.function. tf.distribute.Strategy intends to support both these modes of execution, but works best with tf.function. Eager mode is only recommended for debugging purpose and not supported for TPUStrategy. Although training is the focus of this guide, this API can also be used for distributing evaluation and prediction on different platforms.
You can use tf.distribute.Strategy with very few changes to your code, because we have changed the underlying components of TensorFlow to become strategy-aware. This includes variables, layers, models, optimizers, metrics, summaries, and checkpoints.
In this guide, we explain various types of strategies and how you can use them in different situations. To learn how to debug performance issues, see the Optimize TensorFlow GPU Performance guide.
Note: For a deeper understanding of the concepts, please watch this deep-dive presentation. This is especially recommended if you plan to write your own training loop.
Each learning objective will correspond to a #TODO in the notebook where you will complete the notebook cell's code before running. Refer to the solution for reference.
End of explanation
"""
# Show the currently installed version of TensorFlow
print(tf.__version__)
"""
Explanation: This notebook uses TF2.x. Please check your tensorflow version using the cell below.
End of explanation
"""
mirrored_strategy = tf.distribute.MirroredStrategy()
"""
Explanation: Types of strategies
tf.distribute.Strategy intends to cover a number of use cases along different axes. Some of these combinations are currently supported and others will be added in the future. Some of these axes are:
Synchronous vs asynchronous training: These are two common ways of distributing training with data parallelism. In sync training, all workers train over different slices of input data in sync, aggregating gradients at each step. In async training, all workers independently train over the input data and update variables asynchronously. Typically sync training is supported via all-reduce and async through parameter server architecture.
Hardware platform: You may want to scale your training onto multiple GPUs on one machine, or multiple machines in a network (with 0 or more GPUs each), or on Cloud TPUs.
In order to support these use cases, there are six strategies available. The next section explains which of these are supported in which scenarios in TF. Here is a quick overview:
| Training API | MirroredStrategy | TPUStrategy | MultiWorkerMirroredStrategy | CentralStorageStrategy | ParameterServerStrategy |
|:----------------------- |:------------------- |:--------------------- |:--------------------------------- |:--------------------------------- |:-------------------------- |
| Keras API | Supported | Supported | Supported | Experimental support | Supported planned post 2.4 |
| Custom training loop | Supported | Supported | Supported | Experimental support | Experimental support |
| Estimator API | Limited Support | Not supported | Limited Support | Limited Support | Limited Support |
Note: Experimental support means the APIs are not covered by any compatibilities guarantees.
Note: Estimator support is limited. Basic training and evaluation are experimental, and advanced features—such as scaffold—are not implemented. We recommend using Keras or custom training loops if a use case is not covered.
MirroredStrategy
tf.distribute.MirroredStrategy supports synchronous distributed training on multiple GPUs on one machine. It creates one replica per GPU device. Each variable in the model is mirrored across all the replicas. Together, these variables form a single conceptual variable called MirroredVariable. These variables are kept in sync with each other by applying identical updates.
Efficient all-reduce algorithms are used to communicate the variable updates across the devices.
All-reduce aggregates tensors across all the devices by adding them up, and makes them available on each device.
It’s a fused algorithm that is very efficient and can reduce the overhead of synchronization significantly. There are many all-reduce algorithms and implementations available, depending on the type of communication available between devices. By default, it uses NVIDIA NCCL as the all-reduce implementation. You can choose from a few other options, or write your own.
Here is the simplest way of creating MirroredStrategy:
End of explanation
"""
mirrored_strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])
"""
Explanation: This will create a MirroredStrategy instance which will use all the GPUs that are visible to TensorFlow, and use NCCL as the cross device communication.
If you wish to use only some of the GPUs on your machine, you can do so like this:
End of explanation
"""
# TODO 1 - Your code goes here.
"""
Explanation: If you wish to override the cross device communication, you can do so using the cross_device_ops argument by supplying an instance of tf.distribute.CrossDeviceOps. Currently, tf.distribute.HierarchicalCopyAllReduce and tf.distribute.ReductionToOneDevice are two options other than tf.distribute.NcclAllReduce which is the default.
End of explanation
"""
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
"""
Explanation: TPUStrategy
tf.distribute.TPUStrategy lets you run your TensorFlow training on Tensor Processing Units (TPUs). TPUs are Google's specialized ASICs designed to dramatically accelerate machine learning workloads. They are available on Google Colab, the TensorFlow Research Cloud and Cloud TPU.
In terms of distributed training architecture, TPUStrategy is the same MirroredStrategy - it implements synchronous distributed training. TPUs provide their own implementation of efficient all-reduce and other collective operations across multiple TPU cores, which are used in TPUStrategy.
Here is how you would instantiate TPUStrategy:
Note: To run this code in Colab, you should select TPU as the Colab runtime. See TensorFlow TPU Guide.
cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
tpu=tpu_address)
tf.config.experimental_connect_to_cluster(cluster_resolver)
tf.tpu.experimental.initialize_tpu_system(cluster_resolver)
tpu_strategy = tf.distribute.TPUStrategy(cluster_resolver)
The TPUClusterResolver instance helps locate the TPUs. In Colab, you don't need to specify any arguments to it.
If you want to use this for Cloud TPUs:
- You must specify the name of your TPU resource in the tpu argument.
- You must initialize the tpu system explicitly at the start of the program. This is required before TPUs can be used for computation. Initializing the tpu system also wipes out the TPU memory, so it's important to complete this step first in order to avoid losing state.
MultiWorkerMirroredStrategy
tf.distribute.MultiWorkerMirroredStrategy is very similar to MirroredStrategy. It implements synchronous distributed training across multiple workers, each with potentially multiple GPUs. Similar to tf.distribute.MirroredStrategy, it creates copies of all variables in the model on each device across all workers.
Here is the simplest way of creating MultiWorkerMirroredStrategy:
End of explanation
"""
central_storage_strategy = tf.distribute.experimental.CentralStorageStrategy()
"""
Explanation: MultiWorkerMirroredStrategy has two implementations for cross-device communications. CommunicationImplementation.RING is RPC-based and supports both CPU and GPU. CommunicationImplementation.NCCL uses Nvidia's NCCL and provides state-of-the-art performance on GPU, but it doesn't support CPU. CommunicationImplementation.AUTO defers the choice to TensorFlow.
One of the key differences to get multi worker training going, as compared to multi-GPU training, is the multi-worker setup. The TF_CONFIG environment variable is the standard way in TensorFlow to specify the cluster configuration to each worker that is part of the cluster. Learn more about setting up TF_CONFIG.
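For illustration, a minimal two-worker TF_CONFIG could look like this (hypothetical host names; the second worker would set "index": 1):

```python
import json
import os

# Hypothetical two-worker cluster; each worker runs the same code with its own index
tf_config = {
    "cluster": {
        "worker": ["worker0.example.com:12345", "worker1.example.com:12345"]
    },
    "task": {"type": "worker", "index": 0}
}
os.environ["TF_CONFIG"] = json.dumps(tf_config)
```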
ParameterServerStrategy
Parameter server training is a common data-parallel method to scale up model training on multiple machines. A parameter server training cluster consists of workers and parameter servers. Variables are created on parameter servers and they are read and updated by workers in each step. Please see the parameter server training tutorial for details.
TensorFlow 2 parameter server training uses a central-coordinator based architecture via the tf.distribute.experimental.coordinator.ClusterCoordinator class.
In this implementation the worker and parameter server tasks run tf.distribute.Servers that listen for tasks from the coordinator. The coordinator creates resources, dispatches training tasks, writes checkpoints, and deals with task failures.
In the programming running on the coordinator, you will use a ParameterServerStrategy object to define a training step and use a ClusterCoordinator to dispatch training steps to remote workers. Here is the simplest way to create them:
Python
strategy = tf.distribute.experimental.ParameterServerStrategy(
    tf.distribute.cluster_resolver.TFConfigClusterResolver(),
    variable_partitioner=variable_partitioner)
coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(
    strategy)
Note you will need to configure TF_CONFIG environment variable if you use TFConfigClusterResolver. It is similar to TF_CONFIG in MultiWorkerMirroredStrategy but has additional caveats.
In TF 1, ParameterServerStrategy is available only with estimator via tf.compat.v1.distribute.experimental.ParameterServerStrategy symbol.
Note: This strategy is experimental as it is currently under active development.
CentralStorageStrategy
tf.distribute.experimental.CentralStorageStrategy does synchronous training as well. Variables are not mirrored, instead they are placed on the CPU and operations are replicated across all local GPUs. If there is only one GPU, all variables and operations will be placed on that GPU.
Create an instance of CentralStorageStrategy by:
End of explanation
"""
default_strategy = tf.distribute.get_strategy()
"""
Explanation: This will create a CentralStorageStrategy instance which will use all visible GPUs and CPU. Updates to variables on replicas will be aggregated before being applied to variables.
Note: This strategy is experimental as it is currently a work in progress.
Other strategies
In addition to the above strategies, there are two other strategies which might be useful for prototyping and debugging when using tf.distribute APIs.
Default Strategy
Default strategy is a distribution strategy which is present when no explicit distribution strategy is in scope. It implements the tf.distribute.Strategy interface but is a pass-through and provides no actual distribution. For instance, strategy.run(fn) will simply call fn. Code written using this strategy should behave exactly as code written without any strategy. You can think of it as a "no-op" strategy.
Default strategy is a singleton - and one cannot create more instances of it. It can be obtained using tf.distribute.get_strategy() outside any explicit strategy's scope (the same API that can be used to get the current strategy inside an explicit strategy's scope).
End of explanation
"""
# In optimizer or other library code
# Get currently active strategy
strategy = tf.distribute.get_strategy()
strategy.reduce("SUM", 1., axis=None) # reduce some values
"""
Explanation: This strategy serves two main purposes:
It allows writing distribution-aware library code unconditionally. For example, tf.optimizers can use tf.distribute.get_strategy() and use that strategy for reducing gradients - it will always return a strategy object on which you can call the reduce API.
End of explanation
"""
if tf.config.list_physical_devices('GPU'):
    strategy = tf.distribute.MirroredStrategy()
else:  # use default strategy
    strategy = tf.distribute.get_strategy()

with strategy.scope():
    # do something interesting
    print(tf.Variable(1.))
"""
Explanation: Similar to library code, it can be used to write end users' programs to work with and without distribution strategy, without requiring conditional logic. A sample code snippet illustrating this:
End of explanation
"""
mirrored_strategy = tf.distribute.MirroredStrategy()
# TODO 2 - Your code goes here.
model.compile(loss='mse', optimizer='sgd')
"""
Explanation: OneDeviceStrategy
tf.distribute.OneDeviceStrategy is a strategy to place all variables and computation on a single specified device.
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
This strategy is distinct from the default strategy in a number of ways. In default strategy, the variable placement logic remains unchanged when compared to running TensorFlow without any distribution strategy. But when using OneDeviceStrategy, all variables created in its scope are explicitly placed on the specified device. Moreover, any functions called via OneDeviceStrategy.run will also be placed on the specified device.
Input distributed through this strategy will be prefetched to the specified device. In default strategy, there is no input distribution.
Similar to the default strategy, this strategy could also be used to test your code before switching to other strategies which actually distribute to multiple devices/machines. This will exercise the distribution strategy machinery somewhat more than default strategy, but not to the full extent as using MirroredStrategy or TPUStrategy etc. If you want code that behaves as if there is no strategy, then use the default strategy.
So far you've seen the different strategies available and how you can instantiate them. The next few sections show the different ways in which you can use them to distribute your training.
Using tf.distribute.Strategy with tf.keras.Model.fit
tf.distribute.Strategy is integrated into tf.keras, which is TensorFlow's implementation of the
Keras API specification. tf.keras is a high-level API to build and train models. By integrating into the tf.keras backend, we've made it seamless for you to distribute training written in the Keras framework using model.fit.
Here's what you need to change in your code:
Create an instance of the appropriate tf.distribute.Strategy.
Move the creation of Keras model, optimizer and metrics inside strategy.scope.
We support all types of Keras models - sequential, functional and subclassed.
Here is a snippet of code to do this for a very simple Keras model with one dense layer:
End of explanation
"""
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(10)
model.fit(dataset, epochs=2)
model.evaluate(dataset)
"""
Explanation: This example uses MirroredStrategy so you can run this on a machine with multiple GPUs. strategy.scope() indicates to Keras which strategy to use to distribute the training. Creating models/optimizers/metrics inside this scope allows us to create distributed variables instead of regular variables. Once this is set up, you can fit your model like you would normally. MirroredStrategy takes care of replicating the model's training on the available GPUs, aggregating gradients, and more.
End of explanation
"""
import numpy as np
inputs, targets = np.ones((100, 1)), np.ones((100, 1))
model.fit(inputs, targets, epochs=2, batch_size=10)
"""
Explanation: Here a tf.data.Dataset provides the training and eval input. You can also use numpy arrays:
End of explanation
"""
# Compute global batch size using number of replicas.
BATCH_SIZE_PER_REPLICA = 5
global_batch_size = (BATCH_SIZE_PER_REPLICA *
                     mirrored_strategy.num_replicas_in_sync)
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100)
dataset = dataset.batch(global_batch_size)
LEARNING_RATES_BY_BATCH_SIZE = {5: 0.1, 10: 0.15}
learning_rate = LEARNING_RATES_BY_BATCH_SIZE[global_batch_size]
"""
Explanation: In both cases (dataset or numpy), each batch of the given input is divided equally among the multiple replicas. For instance, if using MirroredStrategy with 2 GPUs, each batch of size 10 will get divided among the 2 GPUs, with each receiving 5 input examples in each step. Each epoch will then train faster as you add more GPUs. Typically, you would want to increase your batch size as you add more accelerators so as to make effective use of the extra computing power. You will also need to re-tune your learning rate, depending on the model. You can use strategy.num_replicas_in_sync to get the number of replicas.
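The batch division described above can be checked with plain arithmetic; the replica count of 2 is an assumption for illustration:

```python
# With MirroredStrategy on 2 GPUs, a global batch of 10 is divided equally,
# so each replica processes 5 examples per step.
BATCH_SIZE_PER_REPLICA = 5
num_replicas = 2  # what strategy.num_replicas_in_sync would report on 2 GPUs
global_batch_size = BATCH_SIZE_PER_REPLICA * num_replicas

assert global_batch_size == 10
assert global_batch_size // num_replicas == BATCH_SIZE_PER_REPLICA
```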
End of explanation
"""
with mirrored_strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
    optimizer = tf.keras.optimizers.SGD()
"""
Explanation: What's supported now?
| Training API | MirroredStrategy | TPUStrategy | MultiWorkerMirroredStrategy | ParameterServerStrategy | CentralStorageStrategy |
|---------------- |--------------------- |----------------------- |----------------------------------- |----------------------------------- |--------------------------- |
| Keras APIs | Supported | Supported | Experimental support | Experimental support | Experimental support |
Examples and Tutorials
Here is a list of tutorials and examples that illustrate the above integration end to end with Keras:
Tutorial to train MNIST with MirroredStrategy.
Tutorial to train MNIST using MultiWorkerMirroredStrategy.
Guide on training MNIST using TPUStrategy.
Tutorial for parameter server training in TensorFlow 2 with ParameterServerStrategy.
TensorFlow Model Garden repository containing collections of state-of-the-art models implemented using various strategies.
Using tf.distribute.Strategy with custom training loops
As you've seen, using tf.distribute.Strategy with Keras model.fit requires changing only a couple of lines of your code. With a little more effort, you can also use tf.distribute.Strategy with custom training loops.
If you need more flexibility and control over your training loops than is possible with Estimator or Keras, you can write custom training loops. For instance, when using a GAN, you may want to take a different number of generator or discriminator steps each round. Similarly, the high level frameworks are not very suitable for Reinforcement Learning training.
The tf.distribute.Strategy classes provide a core set of methods to support custom training loops. Using these may require minor restructuring of the code initially, but once that is done, you should be able to switch between GPUs, TPUs, and multiple machines simply by changing the strategy instance.
Here we will show a brief snippet illustrating this use case for a simple training example using the same Keras model as before.
First, create the model and optimizer inside the strategy's scope. This ensures that any variables created with the model and optimizer are mirrored variables.
End of explanation
"""
# TODO 3 - Your code goes here.
"""
Explanation: Next, create the input dataset and call tf.distribute.Strategy.experimental_distribute_dataset to distribute the dataset based on the strategy.
End of explanation
"""
loss_object = tf.keras.losses.BinaryCrossentropy(
    from_logits=True,
    reduction=tf.keras.losses.Reduction.NONE)

def compute_loss(labels, predictions):
    per_example_loss = loss_object(labels, predictions)
    return tf.nn.compute_average_loss(per_example_loss, global_batch_size=global_batch_size)

def train_step(inputs):
    features, labels = inputs
    with tf.GradientTape() as tape:
        predictions = model(features, training=True)
        loss = compute_loss(labels, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss

@tf.function
def distributed_train_step(dist_inputs):
    per_replica_losses = mirrored_strategy.run(train_step, args=(dist_inputs,))
    return mirrored_strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses,
                                    axis=None)
"""
Explanation: Then, define one step of the training. Use tf.GradientTape to compute gradients and the optimizer to apply those gradients to update the model's variables. To distribute this training step, put it in a function train_step and pass it to tf.distribute.Strategy.run along with the dataset inputs you got from the dist_dataset created before:
End of explanation
"""
for dist_inputs in dist_dataset:
    print(distributed_train_step(dist_inputs))
"""
Explanation: A few other things to note in the code above:
It used tf.nn.compute_average_loss to compute the loss. tf.nn.compute_average_loss sums the per-example loss and divides the sum by the global_batch_size. This is important because later, after the gradients are calculated on each replica, they are aggregated across the replicas by summing them.
It used the tf.distribute.Strategy.reduce API to aggregate the results returned by tf.distribute.Strategy.run. tf.distribute.Strategy.run returns results from each local replica in the strategy, and there are multiple ways to consume this result. You can reduce them to get an aggregated value. You can also do tf.distribute.Strategy.experimental_local_results to get the list of values contained in the result, one per local replica.
When apply_gradients is called within a distribution strategy scope, its behavior is modified. Specifically, before applying gradients on each parallel instance during synchronous training, it performs a sum-over-all-replicas of the gradients.
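The interplay of the first two points can be illustrated with plain Python arithmetic; the per-example loss values below are made up for illustration:

```python
# A global batch of 8 examples split across 2 replicas (4 examples each),
# with hypothetical per-example losses:
global_batch_size = 8
replica_losses = [[1.0, 2.0, 3.0, 4.0], [2.0, 2.0, 2.0, 2.0]]

# Each replica mimics tf.nn.compute_average_loss: the sum of its
# per-example losses divided by the *global* batch size.
per_replica = [sum(losses) / global_batch_size for losses in replica_losses]

# ReduceOp.SUM across replicas then recovers the true mean over the
# full global batch.
total = sum(per_replica)
mean_over_global_batch = sum(sum(l) for l in replica_losses) / global_batch_size
assert total == mean_over_global_batch  # both are 2.25
```

Dividing by the per-replica batch size instead would overestimate the loss once the per-replica values are summed.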
Finally, once you have defined the training step, we can iterate over dist_dataset and run the training in a loop:
End of explanation
"""
iterator = iter(dist_dataset)
for _ in range(10):
    print(distributed_train_step(next(iterator)))
"""
Explanation: In the example above, you iterated over the dist_dataset to provide input to your training. We also provide tf.distribute.Strategy.experimental_make_numpy_dataset to support numpy inputs. You can use this API to create a dataset before calling tf.distribute.Strategy.experimental_distribute_dataset.
Another way of iterating over your data is to explicitly use iterators. You may want to do this when you want to run for a given number of steps as opposed to iterating over the entire dataset.
The above iteration would now be modified to first create an iterator and then explicitly call next on it to get the input data.
End of explanation
"""
CNS-OIST/STEPS_Example | user_manual/source/diffusion.ipynb | gpl-2.0

import math
import numpy
import pylab
import random
import time
import steps.model as smodel
import steps.solver as solvmod
import steps.geom as stetmesh
import steps.rng as srng
"""
Explanation: Simulating Diffusion in Volumes
The simulation script described in this chapter is available at STEPS_Example repository.
This chapter introduces how to model and simulate diffusion systems.
First we will look at how to describe the diffusive motion of molecules
by using an object of class steps.model.Diff, then how to import
a tetrahedral mesh by using the steps.utilities.meshio methods,
and finally how to create a steps.solver.Tetexact object to be used for
the simulation itself. The Tetexact solver builds on the Wmdirect solver
(which we have used up until now), extended for diffusive fluxes between tetrahedral
elements in a mesh. Each individual tetrahedron behaves like a well-mixed
compartment where reactions can take place. Diffusive flux between
tetrahedral elements is represented by a series of first-order reactions
with rate constants derived from the local geometry and the diffusion
constant parameter. Thus, this solver object can be used to simulate
full reaction-diffusion systems in complex geometries, but in this introduction
we will start with a simple diffusion system.
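To give a feel for the kind of first-order rate constant involved, here is a sketch of one plausible form; the proportionality to shared face area and inverse proportionality to barycenter separation and source volume are assumptions for illustration only, since STEPS derives its actual coefficients internally:

```python
def diffusion_jump_rate(D, face_area, barycenter_dist, tet_volume):
    """Illustrative first-order jump rate (1/s) for a molecule hopping from one
    tetrahedron to a neighbour. This functional form is an assumption for
    illustration; STEPS computes its own coefficients from the local geometry
    and the diffusion constant."""
    return D * face_area / (barycenter_dist * tet_volume)

# Made-up geometry for a micron-scale tetrahedron, with this model's D:
rate = diffusion_jump_rate(D=20.0e-12, face_area=1.0e-12,
                           barycenter_dist=1.0e-6, tet_volume=1.0e-18)
```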
We wish to simulate diffusion of one molecular species from a point source
in an infinite volume, a problem chosen for simplicity and with a known analytical
solution we can compare to our STEPS simulation results. As the volume in STEPS
must of course be finite we will create a large spherical mesh, inject our
molecules into the central tetrahedron (as there is no concept of a point
source in STEPS) and compare our results to the analytical solution up to a
time when there are zero or an insignificant number of boundary events.
Analytical solution
To compare our mean results in STEPS to an analytical solution we must solve
the diffusion equation for one spatial dimension, the radial distance from the
point source. The problem is simplified to one dimension because the symmetry
of the problem dictates that, in the deterministic limit, the concentration at any given radial distance r
from the point source will be equal at all points in space forming a
two-dimensional “shell” at that r.
If all molecules exist at a single point at time 0, within an infinite boundary,
the analytical solution is (see e.g. Crank, J. (1975) The Mathematics of Diffusion.
Oxford: Clarendon Press):
\begin{equation}
C(r,t)=\frac{N}{8(\pi Dt)^{3/2}}\exp\left(\frac{-r^{2}}{4Dt}\right)
\end{equation}
where C is the concentration (if length units are meters the units of C are number of molecules/ $m^{\text{3}}$)
at radial distance r from source at time t, N is the total number of injected
molecules and D is the diffusion constant (in units $m^{\text{2}}/s$).
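This solution translates directly into Python, using this model's N (10000 injected molecules) and D ($20\times10^{-12}$ $m^{2}/s$) as defaults:

```python
import math

def analytical_conc(r, t, N=10000, D=20.0e-12):
    """C(r, t): concentration (molecules/m^3) at radial distance r (m) from a
    point source of N molecules, at time t (s), with diffusion constant D
    (m^2/s)."""
    return (N / (8.0 * (math.pi * D * t) ** 1.5)) * math.exp(-r ** 2 / (4.0 * D * t))

# e.g. the concentration 2 microns from the source after 10 ms:
c = analytical_conc(2.0e-6, 0.01)
```

We will use a function like this later to compare the mean of the stochastic STEPS results against the deterministic prediction.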
Modelling solution
Organisation of code
To set up our model and run our simulation we will create a Python script,
organising the script according to a certain template chosen for a good
organisation of our workflow. However, for clarity, in these examples we
will show the code as if it was typed at the Python prompt. As in previous
chapters we will go through the code step by step and look at the statements in
detail as we go.
The first thing to do is to write statements to import all our steps packages
with all the methods available to describe our model and run our simulation.
We will also import other packages we require at this point in the script,
such as math, numpy, pylab and random. We will make use of the random package
to help with selecting tetrahedrons from our mesh to sample (which we look at
in detail later) and math contains many useful basic mathematical functions we'll use
for finding the analytical solution:
End of explanation
"""
# The number of iterations to run
NITER = 10
# The data collection time increment (s)
DT = 0.001
# The simulation endtime (s)
INT = 0.101
# The number of molecules to be injected into the centre
NINJECT = 10000
# The number of tetrahedral elements to sample data from.
SAMPLE = 2000
# The diffusion constant for our diffusing species (m^2/s)
DCST = 20.0e-12
"""
Explanation: Now we set some parameters for our simulation. Keeping these variables grouped
together at the beginning of a Python script makes it easy to locate and
change the simulation parameters later. We will use capital letters
for the variables here so it is clear later in the script that they are constants:
End of explanation
"""
# Array to hold tetrahedron indices (integers)
tetidxs = numpy.zeros(SAMPLE, dtype='int')
# Array to hold tetrahedron radial distances (floats)
tetrads = numpy.zeros(SAMPLE)
"""
Explanation: At what stage these constants will be used will become clear as we work
through the code.
We want to sample data from individual tetrahedrons so we can analyse spatial
data, so we now create two arrays to store the indices of the tetrahedrons we will
sample and their radial distances from the centre of the mesh. Note: Tetrahedrons are identified by an integer index (as are nodes and
triangles). We look at this in more detail in the geometry section.
We have decided in this case that
we don't want to save data for every single tetrahedron, but rather randomly
select 2000 tetrahedrons by setting the SAMPLE variable to 2000, perhaps due to memory constraints.
We will look at how we select which tetrahedrons to sample in Geometry specification
, but for now we just create NumPy arrays initialized to zeros.
The reason for creating these arrays at this point in the script will become
clear later:
End of explanation
"""
mdl = smodel.Model()
"""
Explanation: Model specification
So we now move on to our model description. This time we will organise the code
into a function, which will return the steps.model.Model object we create. It is
entirely up to you if you wish to organise your model description in this way,
but it can be useful for larger models. Note: In this way, for example, multiple model descriptions can be defined
in a separate module with each description clearly separated inside functions.
You can then import whichever model description objects you chose into the
simulation scripts. We will not explore this topic in detail here, but it is a
good idea to keep in mind that this organisation is an option.
This is our first function
definition, so lets mention a little about the syntax for defining functions in
Python. Firstly, we use the def statement to create a function object and assign
it a name. Then we must provide all our function code with the same indentation.
As soon as our indentation returns to the indentation for the def statement, we
exit the function definition. We wish to return our steps.model.Model object,
so we will provide a return statement at the end of the function. First we create
our function and name it gen_model. In this simple example the function will not
require any arguments, so its signature will be gen_model().
We start, as always, by creating our steps.model.Model container object mdl.
End of explanation
"""
A = smodel.Spec('A', mdl)
vsys = smodel.Volsys('cytosolv', mdl)
"""
Explanation: Then we create our molecular species (only one in this simple model we will call A) and our volume system vsys, much as in previous chapters.
End of explanation
"""
diff_A = smodel.Diff('diff_A', vsys, A)
diff_A.setDcst(DCST)
"""
Explanation: After that we can create our diffusion rule. In STEPS this means creating a steps.model.Diff
object and assigning it to a volume system (steps.model.Volsys). As well as the usual identifier string and a reference to the parent volume system, a required parameter for the object construction is a reference to the molecular species object to which this diffusion rule applies. An optional parameter to the object constructor is the diffusion constant, which is given in SI units (i.e. $m^{2}/s$, so for example a diffusion constant of $100 \mu m^{2}/s = 100*10^{-12} m^{\text{2}}/s$).
The default value for the diffusion constant can be changed with object method
steps.model.Diff.setDcst and can even be changed from the default value in any compartment during simulation, much like the reaction constants we looked at in previous chapters. However, in
this model we will not alter the diffusion constant later in the script, so this
is the value that will be used during our simulation. Recall we defined the diffusion constant at the beginning of our script, the reason being that the value is then available in a variable when we come to compute the analytical solution.
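The unit conversion mentioned above can be double-checked against the DCST value set at the top of the script:

```python
# Convert a diffusion constant from um^2/s to the SI units (m^2/s) STEPS expects.
D_um2_per_s = 20.0              # this model's diffusion constant in um^2/s
D_SI = D_um2_per_s * 1.0e-12    # 1 um^2 = 1e-12 m^2
assert D_SI == 20.0e-12         # equals DCST as defined earlier
```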
End of explanation
"""
def gen_model():
    mdl = smodel.Model()
    A = smodel.Spec('A', mdl)
    vsys = smodel.Volsys('cytosolv', mdl)
    diff_A = smodel.Diff('diff_A', vsys, A)
    diff_A.setDcst(DCST)
    return mdl
"""
Explanation: So our complete function is:
End of explanation
"""
import steps.utilities.meshio as smeshio
mesh = smeshio.loadMesh('meshes/sphere_rad10_11Ktets')[0]
"""
Explanation: Geometry specification
We now move on to describing our geometry. This is the section that stands out as
very different to our previous well-mixed simulations. The methods we
provide for describing mesh-based geometry in STEPS give tools for importing meshes
from some powerful mesh-generation packages, and methods for initialising and
controlling your simulation conditions which are beyond the ground covered in this
chapter. It is likely that you will only start to get the feel of how to use these
methods to achieve your required description in more complex models with hands-on
experience. This chapter begins to introduce some useful methods for mesh
manipulation and it is hoped that this will give the user enough experience with
the workflow to be able to go on to more advanced object manipulation that may be
required for more complex models. A full list of tetrahedral mesh methods is
available in steps.geom.Tetmesh and steps.utilities.meshio.
We chose to structure all our geometry code into a function, much like our model description.
We start by loading our mesh object. STEPS provides all mesh loading and saving tools in
module steps.utilities.meshio. This module currently provides support for
TetGen, CUBIT and
NETGEN mesh generators, along with any
others supporting the Abaqus output format. The details of creating a mesh
from these 3rd party packages and importing into STEPS are beyond the scope of
this chapter, however we provide a full list of meshio methods in steps.utilities.meshio.
One of the functions provided in meshio is steps.utilities.meshio.saveMesh, which allows the user to
save the imported mesh in STEPS format. This is important because the information
passed on from the mesh generators is often only very basic information about the
mesh and STEPS must find a vast amount of information to pass on to the steps.solver.Tetexact
reaction-diffusion solver object internally. This can be time-consuming for large
meshes, however this process only needs to be performed once. With steps.utilities.meshio.saveMesh the
mesh is saved with basic information in an XML file, with an accompanying ASCII file
containing all the extra information STEPS computed when importing the mesh. After
these files have been created, the mesh can then be imported with the steps.utilities.meshio.loadMesh
method, which will only take a few seconds or minutes to load even very large
meshes. Therefore, it is highly recommended that all meshes are saved in this way
by using the steps.utilities.meshio.saveMesh function. For this example we assume that we have mesh
files sphere_rad10_11Ktets.xml and sphere_rad10_11Ktets.txt available in folder meshes/ in the current working
directory, which we created previously with steps.utilities.meshio.saveMesh from a mesh we imported from
a mesh-generator with the steps.utilities.meshio.importAbaqus function.
End of explanation
"""
# Find the total number of tetrahedrons in the mesh
ntets = mesh.countTets()
# Create a compartment containing all tetrahedrons
comp = stetmesh.TmComp('cyto', mesh, range(ntets))
comp.addVolsys('cytosolv')
"""
Explanation: Our tetrahedral mesh geometry object is very different from our well-mixed geometry
(steps.geom.Geom) object. The mesh geometry is described by a
steps.geom.Tetmesh object, which contains all the functionality of a
steps.geom.Geom object, extended with many more methods which only make sense
for a tetrahedral mesh. A steps.geom.Tetmesh object is created in steps.utilities.meshio.loadMesh
and returned to the caller, so in the above code the object is referenced by
variable mesh. We will be introduced to some of the steps.geom.Tetmesh methods as we find our
sample tetrahedrons, but first we must create our mesh compartments. A compartment
object in a mesh is of type steps.geom.TmComp and requires a little extra
information than a well-mixed compartment.
A tetrahedral-mesh compartment is comprised of a group of tetrahedrons, so we must supply the object constructor
with the indices of the enclosed tetrahedrons in a Python sequence (e.g. a list).
A mesh can be separated into as many compartments as the user wishes,
though compartments should usually be separated physically by a boundary
(i.e. by a membrane) as there is no implicit diffusion between compartments
in STEPS, unless connected by a Diffusion Boundary (see Diffusion Boundary ).
If a user wishes to modify behaviour for certain sections of a compartment, this can be achieved by grouping
tetrahedrons together and utilising the simulation methods, all achievable in
the Python interface (see steps.solver.Tetexact for all available methods). However,
for our example we only wish to have one compartment, and for that compartment
to enclose the entire mesh. So we use a steps.geom.Tetmesh object method to
return the number of tetrahedrons in the mesh, and then pass a sequence of all
the indices to the steps.geom.TmComp object constructor. Note: Tetrahedron indices in STEPS always begin at 0 and increment by 1, regardless of their indices in the mesh-generation software. So if a mesh has n tetrahedrons, the Python function range(n) returns the sequence [0, 1, ..., n-1], letting us iterate through each of the tetrahedrons in the mesh.
End of explanation
"""
# Fetch the central tetrahedron index and store:
ctetidx = mesh.findTetByPoint([0.0, 0.0, 0.0])
tetidxs[0] = ctetidx
"""
Explanation: Note that we do not (and indeed cannot) set the volume of the compartment because
the volume is calculated from the combined volume of the enclosed tetrahedrons.
And that's it for our relatively simple geometry description. The remainder of our gen_geom() function
is used to collect, at random, the tetrahedrons to sample data from. This is
introduced here because it is often undesirable to collect data from all
tetrahedrons in large meshes and the user may wish to pick and chose certain
tetrahedrons to sample. Such groups can be stored in a Python sequence with for
loops used to loop over these groups and set simulation parameters or collect data.
In this simple example we will just store the central tetrahedron and its 4
neighbours, then find the rest at random, making sure not to store the same
tetrahedron more than once. We will store the sample tetrahedron indices in the
tetidxs NumPy array we created at the top of our script. Along the way we will
be introduced to some new steps.geom.Tetmesh methods, which will be described as
we go along. This section is intended to be only an introduction to finding
information from the mesh, though a full list of the many Tetmesh methods that can
be used for more complex tasks is available in steps.geom.Tetmesh. First, we use the
steps.geom.Tetmesh.findTetByPoint method to get the index of the tetrahedron in the centre of our
mesh. steps.geom.Tetmesh.findTetByPoint returns the tetrahedron by index that encompasses the
location given in Cartesian coordinates (in a Python sequence of length 3), and returns -1 if the location
given is not inside the mesh. The mesh is a sphere, radius 10 microns, centered
on the origin, so the centre of the mesh is simply at 0.0, 0.0, 0.0 in Cartesian
coordinates. The returned idx is stored in our tetidxs array:
End of explanation
"""
# Find the central tetrahedron's four neighbours:
neighbidcs = mesh.getTetTetNeighb(ctetidx)
tetidxs[1],tetidxs[2],tetidxs[3],tetidxs[4] = neighbidcs
"""
Explanation: Next we wish to make sure that we include data from around the central tetrahedron,
so we find the central tetrahedron's four neighbours. To do this we use method
steps.geom.Tetmesh.getTetTetNeighb, which returns any tetrahedron's 4 neighbours by index in a
tuple. If any neighbour index is returned as -1 this means that this face of the
tetrahedron is on the boundary and therefore has no neighbour in that direction. Note: This property can be very useful if you wish to find information
about border tetrahedrons or surface triangles.
In this example it is safe to assume that the central tetrahedron is not
on a surface and we add our 4 neighbour indices to our tetidxs array:
End of explanation
"""
# Keep track how many tet indices we have stored so far
stored = 5
# Find the maximum and minimum coordinates of the mesh:
max = mesh.getBoundMax()
min = mesh.getBoundMin()
# Run a loop until we have stored all tet indices we require
while (stored < SAMPLE):
    # Fetch 3 random numbers between 0 and 1:
    rnx = random.random()
    rny = random.random()
    rnz = random.random()
    # Find the related coordinates in the mesh:
    xcrd = min[0] + (max[0]-min[0])*rnx
    ycrd = min[1] + (max[1]-min[1])*rny
    zcrd = min[2] + (max[2]-min[2])*rnz
    # Find the tetrahedron that encompasses this point:
    tidx = mesh.findTetByPoint([xcrd, ycrd, zcrd])
    # UNKNOWN_TET (-1) is returned if the point is outside the mesh:
    if (tidx == stetmesh.UNKNOWN_TET): continue
    if (tidx not in tetidxs):
        tetidxs[stored] = tidx
        stored += 1
"""
Explanation: Then we fill the rest of our tetidxs array with tetrahedrons chosen at random.
A way to do this would be to simply fetch one randomly-generated number between
0 and 1 and pick the nearest integer that it corresponds to when multiplied by
the total number of tetrahedrons. However, the following technique is a different
approach and finds a random point in space in the 3D bounding box of the mesh and
stores the corresponding tetrahedron index if it is not already stored (and the point
is not outside the mesh). This would then make it easier to provide a bias towards
the center of the mesh in order to get a more even distribution of radial distances,
but this is not shown in this simple example. We will use methods steps.geom.Tetmesh.getBoundMax
and steps.geom.Tetmesh.getBoundMin, which return the maximum and minimum Cartesian coordinates
of the mesh respectively.
End of explanation
"""
# Find the barycenter of the central tetrahedron
cbaryc = mesh.getTetBarycenter(ctetidx)
for i in range(SAMPLE):
    # Fetch the barycenter of the tetrahedron:
    baryc = mesh.getTetBarycenter(int(tetidxs[i]))
    # Find the radial distance of this tetrahedron to mesh centre:
    r = math.sqrt(math.pow((baryc[0]-cbaryc[0]),2) \
        + math.pow((baryc[1]-cbaryc[1]),2) \
        + math.pow((baryc[2]-cbaryc[2]),2))
    # Store the radial distance (in microns):
    tetrads[i] = r*1.0e6
"""
Explanation: This example is intended to demonstrate that STEPS provides a lot of
functionality for finding and storing whatever spatial information could not be
passed on from the mesh generator, and that some knowledge of Python is very
useful at this stage for producing code that gets maximum benefit from the
available methods. This geometry description stage is a good time
to find and collect whatever spatial information is required for simulation
initialization and data collection. We should note that in this example there is
little error checking and more should be included in real simulation scripts
(for example SAMPLE must be lower than the total number of tetrahedrons in the mesh).
For a full list of the available methods please see steps.geom.Tetmesh.
Now, the final task we wish to perform at the geometry level
is to find the radial distances of the tetrahedrons and fill our tetrads array
with this information. These are stored separately from the indices in our example (for clarity)
and we must make sure that the distances saved
relate to the distance for the tetrahedron at the same location in the tetidxs
array, although we could easily have stored the indices and radial distances
together in a 2D array. We will take the radial distance as the distance from
the tetrahedron's barycenter to the barycenter of the central tetrahedron.
To find the barycenters we use method steps.geom.Tetmesh.getTetBarycenter, which returns the
barycenter Cartesian coordinates in a tuple.
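As an aside, once the barycenters are collected the same radial distances could be computed in one vectorized NumPy step; the coordinates below are made-up stand-ins for `mesh.getTetBarycenter` output:

```python
import numpy as np

# Hypothetical barycenter coordinates (metres), one row per sampled tetrahedron.
barycenters = np.array([[1.0e-6, 0.0, 0.0],
                        [0.0, 2.0e-6, 0.0],
                        [0.0, 0.0, 3.0e-6]])
cbaryc = np.array([0.0, 0.0, 0.0])   # barycenter of the central tetrahedron

# Euclidean distance to the centre for every row, converted to microns.
tetrads = np.linalg.norm(barycenters - cbaryc, axis=1) * 1.0e6
```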
End of explanation
"""
import steps.utilities.meshio as smeshio

def gen_geom():
    print("Loading mesh...")
    mesh = smeshio.loadMesh('meshes/sphere_rad10_11Ktets')[0]
    print("Mesh Loaded")
    # Find the total number of tetrahedrons in the mesh
    ntets = mesh.countTets()
    # Create a compartment containing all tetrahedrons
    comp = stetmesh.TmComp('cyto', mesh, range(ntets))
    comp.addVolsys('cytosolv')
    print("Finding tetrahedron samples...")
    # Fetch the central tetrahedron index and store:
    ctetidx = mesh.findTetByPoint([0.0, 0.0, 0.0])
    tetidxs[0] = ctetidx
    # Find the central tetrahedron's four neighbours:
    neighbidcs = mesh.getTetTetNeighb(ctetidx)
    tetidxs[1:5] = neighbidcs
    # Keep track of how many tet indices we have stored so far
    stored = 5
    # Find the maximum and minimum coordinates of the mesh
    # (renamed to avoid shadowing the built-in max/min functions)
    bmax = mesh.getBoundMax()
    bmin = mesh.getBoundMin()
    # Run a loop until we have stored all tet indices we require
    while stored < SAMPLE:
        # Fetch 3 random numbers between 0 and 1
        rnx = random.random()
        rny = random.random()
        rnz = random.random()
        # Find the coordinates in the mesh that these numbers relate to
        xcrd = bmin[0] + (bmax[0] - bmin[0]) * rnx
        ycrd = bmin[1] + (bmax[1] - bmin[1]) * rny
        zcrd = bmin[2] + (bmax[2] - bmin[2]) * rnz
        # Find the tetrahedron that encompasses this point.
        tidx = mesh.findTetByPoint([xcrd, ycrd, zcrd])
        # stetmesh.UNKNOWN_TET is returned if the point is outside the mesh:
        if tidx == stetmesh.UNKNOWN_TET: continue
        if tidx not in tetidxs:
            tetidxs[stored] = tidx
            stored += 1
    # Find the barycenter of the central tetrahedron
    cbaryc = mesh.getTetBarycenter(ctetidx)
    for i in range(SAMPLE):
        # Fetch the barycenter of the tetrahedron:
        baryc = mesh.getTetBarycenter(int(tetidxs[i]))
        # Find the radial distance of this tetrahedron to mesh center:
        r = math.sqrt(math.pow((baryc[0] - cbaryc[0]), 2) + \
            math.pow((baryc[1] - cbaryc[1]), 2) + \
            math.pow((baryc[2] - cbaryc[2]), 2))
        # Store the radial distance (in microns):
        tetrads[i] = r * 1.0e6
    print("Tetrahedron samples found")
    return mesh
"""
Explanation: Finally, we return the steps.geom.Tetmesh object required for simulation object construction.
The complete function then reads as follows.
End of explanation
"""
model = gen_model()
tmgeom = gen_geom()
rng = srng.create('mt19937', 512)
rng.initialize(2903)
"""
Explanation: Simulation with Tetexact
Now it’s time to run a simulation and visualize the collected data, much as we did in previous chapters, though this time collecting and plotting spatial data, that is concentrations from individual tetrahedrons and not whole compartments. This time we must call the gen_model and gen_geom functions to set up our model and return the container objects. We then create our random number generator object just as for the Wmdirect simulations.
End of explanation
"""
sim = solvmod.Tetexact(model, tmgeom, rng)
"""
Explanation: Now we can create our reaction-diffusion steps.solver.Tetexact solver object, which requires a steps.geom.Tetmesh object in its initializing function (if we try to supply simple well-mixed geometry instead, an error message will appear):
End of explanation
"""
tpnts = numpy.arange(0.0, INT, DT)
# Find how many "time points" we have
ntpnts = tpnts.shape[0]
# Create the data structure: iterations x time points x tet samples
res = numpy.zeros((NITER, ntpnts, SAMPLE))
"""
Explanation: This solver builds on the functionality of the well-mixed solvers, with methods for manipulating certain regions in the mesh. We will see some examples in the following snippets of code, and a full list of available methods is available in steps.solver.Tetexact. Similarly to our well-mixed simulations, we must create the data structures for saving our results. We create the "time points" array (based on parameters we set at the beginning of our script) and the "results" array, which in this case will store data for all the tetrahedrons we are sampling:
End of explanation
"""
# Fetch the index of the tetrahedron at the centre of the mesh
ctetidx = tmgeom.findTetByPoint([0.0, 0.0, 0.0])
# Run NITER number of iterations:
for i in range(NITER):
    sim.reset()
    # Inject all molecules into the central tet:
    sim.setTetCount(ctetidx, 'A', NINJECT)
    for j in range(ntpnts):
        sim.run(tpnts[j])
        # Loop over the tetrahedrons we are saving data for
        for k in range(SAMPLE):
            # Save the concentration in the tetrahedron, in uM
            res[i, j, k] = sim.getTetConc(int(tetidxs[k]), 'A') * 1.0e6
"""
Explanation: We are now ready to run a simulation. This will look quite similar to our previous code for running a well-mixed simulation, but this time we are injecting molecules into and recording data from individual tetrahedrons, not the whole compartment (though this is also possible). We first need to find the central tetrahedron index again (as we did not pass this information on from the gen_geom() function, though this is of course an option). We then use solver method steps.solver.Tetexact.setTetCount to set the number of molecules in the central tetrahedron at time t = 0 to the number stored in variable NINJECT (the default count in all tetrahedrons is zero, set by steps.solver.Tetexact.reset). We will then run our simulation and collect the data in a few lines of code in nested for loops:
End of explanation
"""
res_mean = numpy.mean(res, axis=0)
"""
Explanation: That is all the code we require to run our simple diffusion simulation.
Notice that steps.solver.Tetexact.getTetConc returns the concentration in molar units. Recall that all units in STEPS are SI units, with the exception of concentration, which is in molar units. We then convert molar to micro-molar by multiplying by $10^6$.
We wish to look at the mean concentration in the tetrahedrons over all our iterations, so we simply use the numpy.mean function as in previous chapters:
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
def plotres(res_mean, tidx):
    if tidx >= INT / DT:
        print("Time index is out of range.")
        return
    plt.scatter(tetrads, res_mean[tidx], s=2)
    plt.xlabel(r'Radial distance of tetrahedron ($\mu$m)')
    plt.ylabel(r'Concentration in tetrahedron ($\mu$M)')
    t = tpnts[tidx]
    plt.title('Unbounded diffusion. Time: ' + str(t) + 's')
    plotanlyt(t)
    plt.xlim(0.0, 10.0)
    plt.ylim(0.0)
"""
Explanation: Plotting simulation output
So now we come to plotting our data. Now that we have spatial information, the data we wish to plot is different from that of our previous well-mixed simulations, where we plotted the concentration in a well-mixed compartment. Here we will plot the mean concentration from individual tetrahedrons against their radial distance from the origin, at many different time points. To achieve this we will create another function, this time with a parameter relating to the "time point" we wish to plot. We can then call this function with a "time point" argument, and it will plot concentration vs radial distance at the time relating to that "time point", as desired. In our function we also label the axes and title the plot with the time.
End of explanation
"""
def plotanlyt(t):
    segs = 100
    anlytconc = numpy.zeros(segs)
    radialds = numpy.zeros(segs)
    maxrad = 0.0
    for i in tetrads:
        if i > maxrad: maxrad = i
    maxrad *= 1e-6
    intervals = maxrad / segs
    rad = 0.0
    for i in range(segs):
        # Conc from the analytical solution, converted to micro-molar
        anlytconc[i] = 1.0e3 * (1 / 6.022e23) * \
            ((NINJECT / (math.pow((4 * math.pi * DCST * t), 1.5))) * \
            (math.exp((-1.0 * (rad * rad)) / (4 * DCST * t))))
        radialds[i] = rad * 1e6
        rad += intervals
    plt.plot(radialds, anlytconc, color='red')
"""
Explanation: You may have noticed that we call a function that we have not defined yet, plotanlyt. This function will plot the concentration from the analytical concentration given by our equation. The function for plotting the analytical solution is provided here, but we will not go through this code in detail. Here we can see why the diffusion constant was stored in variable DCST at the top of our script:
End of explanation
"""
plt.figure(figsize=(10,6))
plotres(res_mean, 100)
"""
Explanation: And that is everything we need to set up and run our simple diffusion simulation and plot the data, alongside the analytical solution for comparison. With this structure, it is intended that the plotting function is called interactively, giving us the chance to visualise a number of different time plots and then save whichever plots we choose. It often makes sense to output the data to a file, then write plotting functions in separate modules that can load the saved data from these files and plot. This requires some knowledge of writing and reading files in Python, but like most operations in Python it can usually be picked up quite quickly.
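As a minimal sketch of that save-and-reload workflow (the file name and array shape are illustrative only):

```python
import os
import tempfile
import numpy as np

res_mean = np.random.rand(101, 32)                  # stand-in for the simulation output
path = os.path.join(tempfile.gettempdir(), 'res_mean.npy')
np.save(path, res_mean)                             # written once by the simulation script
reloaded = np.load(path)                            # a separate plotting module starts here
```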
Let's assume we have contained all of our unbounded diffusion code from this chapter in a Python file diffusion.py (which can be found in examples/tutorial). We can then run our simulation interactively in Python by importing the module, then visualising the data with the plotres function we defined. For this example let's provide a call to our plotres function with argument 100, meaning we will plot data at "time point" 100 (corresponding to time 0.1 s), our last "time point":
End of explanation
"""
import steps.mpi.solver as mpisolvmod
sim_tos = mpisolvmod.TetOpSplit(model, tmgeom, rng, False, \
[0] * tmgeom.ntets)
"""
Explanation: Simulation with TetOpSplit
Version 2.3 and later of STEPS include an approximate spatial stochastic solver called TetOpSplit (steps.mpi.solver.TetOpSplit). This solver applies an approximation for diffusion, whereas reactions are solved by the SSA. The solver is designed to approximate the exact solver, Tetexact, to high accuracy, and may perform significantly faster than Tetexact for some models. A full description of the method is available at: http://arxiv.org/abs/1512.03126. Although the method differs significantly from the exact stochastic solver, usage is very similar, with only one different command necessary in this example to simulate the model with TetOpSplit instead of Tetexact. That is, instead of creating a Tetexact solver object we create a TetOpSplit solver object with the following command:
End of explanation
"""
# Run NITER number of iterations:
for i in range(NITER):
    sim_tos.reset()
    # Inject all molecules into the central tet:
    sim_tos.setTetCount(ctetidx, 'A', NINJECT)
    for j in range(ntpnts):
        sim_tos.run(tpnts[j])
        # Loop over the tetrahedrons we are saving data for
        for k in range(SAMPLE):
            # Save the concentration in the tetrahedron, in uM
            res[i, j, k] = sim_tos.getTetConc(int(tetidxs[k]), 'A') * 1.0e6
res_mean = numpy.mean(res, axis=0)
"""
Explanation: The model can be run with the exact same set of Python commands as for the previous simulation with Tetexact, using the reference to the TetOpSplit object sim_tos, writing over the data recorded to NumPy array res and recreating res_mean for use by the plotting function:
End of explanation
"""
plt.figure(figsize=(10, 6))
plotres(res_mean, 100)
"""
Explanation: We can now plot the results obtained with TetOpSplit
End of explanation
"""
|
benneely/qdact-basic-analysis | notebooks/caresetting.ipynb | gpl-3.0 | from IPython.core.display import display, HTML;from string import Template;
HTML('<script src="//d3js.org/d3.v3.min.js" charset="utf-8"></script>')
css_text2 = '''
#main { float: left; width: 750px;}#sidebar { float: right; width: 100px;}#sequence { width: 600px; height: 70px;}#legend { padding: 10px 0 0 3px;}#sequence text, #legend text { font-weight: 400; fill: #000000; font-size: 0.75em;}#graph-div2 { position: relative;}#graph-div2 { stroke: #fff;}#explanation { position: absolute; top: 330px; left: 405px; width: 140px; text-align: center; color: #666; z-index: -1;}#percentage { font-size: 2.3em;}
'''
with open('interactive_circle_cl.js', 'r') as myfile:
data=myfile.read()
js_text_template2 = Template(data)
html_template = Template('''
<style> $css_text </style>
<div id="sequence"></div>
<div id="graph-div2"></div>
<div id="explanation" style="visibility: hidden;">
<span id="percentage"></span><br/>
of patients meet this criteria
</div>
<script> $js_text </script>
''');
js_text2 = js_text_template2.substitute({'graphdiv': 'graph-div2'});
HTML(html_template.substitute({'css_text': css_text2, 'js_text': js_text2}))
"""
Explanation: All About: Care Setting
NOTE:
- There is only 1 encounter that contains the value consultloc=7 (Palliative Care Unit) - it is on the third visit for this particular individual. Because there is only 1 individual with this trait, this will cause problems, so I need to collapse this category to something. After discussing with Don (7/26/16), we decided to delete this encounter.
There are only 7 encounters that contain the value consultLoc=6 (Emergency Department) - this will cause problems with modeling; from viewing individuals with this value, it seems the most appropriate level to collapse this to is consultloc=1 (Hospital - ICU (includes MICU, SICU, TICU)). After discussing with Don (7/26/16), we decided to delete these encounters.
It is important to note that we are dealing with a dataset of 5,066 encounters. As such, it is possible that a particular patient's care setting field (on QDACT) will change (or be different) over time. Therefore, for the remainder of this notebook, we will only explore the first care setting assigned to a patient and how that correlates to their number of follow-up visits. Also, it is important to note that due to the nebulous design of this exploration, we are not adjusting for the multiple tests that follow. This could be a critique that many reviewers would have if this work is ever submitted.
Because this is only exploratory (not confirmatory or a clincal trial), I would recommend not adjusting (and have not done so below).
Table of Contents
Follow-up Visit Distribution by Care Setting Graphic
Unadjusted Associations with Care Setting (First Visit Only)
<a id='fuvdig'></a>
Follow-up Visit Distribution by Care Setting (Interactive Graphic)
To explore the entire follow-up distribution of the CMMI population stratified by care setting, we will use an interactive graphic. Because it is interactive, it requires you to place your cursor in the first cell below (starting with 'from IPython.core.display...') and then press the play button in the toolbar above. You will need to press play 5 times. After pressing play 5 times, the interactive graphic will appear. Instructions for interpreting the graphic are given below the figure.
End of explanation
"""
import pandas as pd
table = pd.read_csv(open('./python_scripts/11_primarydiagnosis_tables_catv2_consultLoc.csv','r'))
#Anxiety
table[0:5]
#Appetite
table[5:10]
#Constipation
table[10:15]
#Depression
table[15:20]
#Drowsiness
table[20:25]
#Nausea
table[25:30]
#Pain
table[30:35]
#Shortness
table[35:40]
#Tiredness
table[40:45]
#Well Being
table[45:50]
# PPSScore
table[50:51]
"""
Explanation: Graphic Interpretation
The graphic above illustrates the pattern of follow-ups in the CMMI data set for each of the 1,640 unique patients. Using your cursor, you can hover over a particular color to find out the specific care setting. Each concentric circle going out from the middle represents a new follow-up visit for a person. For example, in the figure above, starting in the center, there is a red layer in the first concentric circle. If you hover over the first red layer, this says 41.8%. This means that 41.8% of the 1,640 patients reported 'Long Term Care' at their first visit. Hovering over the next layer, which is black, gives a value of 7.26%. This means that 7.26% of the population had a first visit labeled as 'Long Term Care' and then had no additional visits.
Statistical Inference
I'm not sure if there is a hypothesis we want to test in relation to these two variables (i.e. Care Setting and number of follow-up visits).
<a id='ua'></a>
Unadjusted Associations with Care Setting (First Visit Only)
In this AIM, we will look at testing the null hypothesis of no association between each row variable and the column variable (ConsultLoc). There is obviously a time aspect to this data, but for this aim, we will stick to the first encounter only.
Here is how the data is munged for this aim:
- Get the first encounter for each internalid, we start with a dataset that is 5,066 records in length and end up with a data set that is 1,640 records in length. Note: we sort data by internalid and AssessmentDate, if there is another alternative date variable we should use, please let us know. This will determine the "first visit" by sort order.
- We apply our "set-to-missing" algorithm to every variable to be analyzed. This will limit the the number of indiviudals to at least 1,618 as there are 22 who are initially missing their consultloc. We should also rerun the missing data analysis by incorporating this algorithm as it is a truer state of the data.
- This number will further be limited by running each row variable through the set-to-missing algorithm. Each row will try to be as transparent as possible about this process.
- Because this is a public server, actual data can't be posted, but the source code used to get to these results can. Here is the location to that file.
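The first-encounter munging step described above can be sketched with pandas; the column names follow the QDACT fields used in this notebook, but the values are made up:

```python
import pandas as pd

# Toy encounter-level data: several visits per internalid.
df = pd.DataFrame({
    'internalid':     [1, 1, 2, 2, 2, 3],
    'AssessmentDate': ['2016-02-01', '2016-01-01', '2016-03-01',
                       '2016-01-15', '2016-02-15', '2016-01-05'],
    'consultLoc':     [2, 1, 3, 1, 2, 4],
})
df['AssessmentDate'] = pd.to_datetime(df['AssessmentDate'])

# Sort by patient and date, then keep the first encounter per patient.
first = (df.sort_values(['internalid', 'AssessmentDate'])
           .drop_duplicates(subset='internalid', keep='first'))
```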
End of explanation
"""
|
ajrichards/bayesian-examples | reference/linear-algebra.ipynb | bsd-3-clause | import numpy as np
from numpy.random import randn as randn
from numpy.random import randint as randint
"""
Explanation: Linear Algebra with examples using Numpy
End of explanation
"""
from IPython.display import Image
Image('images/vector.png')
x = np.array([1,2,3,4])
print(x)
print(x.shape)
"""
Explanation: Linear Algebra and Machine Learning
Ranking web pages in order of importance
Solved as the problem of finding the eigenvector of the page score matrix
Dimensionality reduction - Principal Component Analysis
Movie recommendation
Use singular value decomposition (SVD) to break the user-movie matrix down into user-feature and movie-feature matrices, keeping only the top $k$ ranks to identify the best matches
Topic modeling
Extensive use of SVD and matrix factorization can be found in Natural Language Processing, specifically in topic modeling and semantic analysis
Vectors
A vector can be represented by an array of real numbers
$$\mathbf{x} = [x_1, x_2, \ldots, x_n]$$
Geometrically, a vector specifies the coordinates of the tip of the vector if the tail were placed at the origin
End of explanation
"""
print(np.sqrt(np.sum(x**2)))
print(np.linalg.norm(x))
"""
Explanation: The norm of a vector $\mathbf{x}$ is defined by
$$||\boldsymbol{x}|| = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}$$
End of explanation
"""
a = 4
print(x)
print(x + 4)
"""
Explanation: Adding a constant to a vector adds the constant to each element
$$a + \boldsymbol{x} = [a + x_1, a + x_2, \ldots, a + x_n]$$
End of explanation
"""
print(x)
print(x*4)
print(np.linalg.norm(x*4))
#print(np.linalg.norm(x))
"""
Explanation: Multiplying a vector by a constant multiplies each term by the constant.
$$a \boldsymbol{x} = [ax_1, ax_2, \ldots, ax_n]$$
End of explanation
"""
y = np.array([4, 3, 2, 1])
print(x)
print(y)
np.dot(x,y)
"""
Explanation: If we have two vectors $\boldsymbol{x}$ and $\boldsymbol{y}$ of the same length $(n)$, then the dot product is give by
$$\boldsymbol{x} \cdot \boldsymbol{y} = x_1y_1 + x_2y_2 + \cdots + x_ny_n$$
End of explanation
"""
w = np.array([1, 2])
v = np.array([-2, 1])
np.dot(w,v)
"""
Explanation: If $\mathbf{x} \cdot \mathbf{y} = 0$ then $\mathbf{x}$ and $\mathbf{y}$ are orthogonal (this aligns with the intuitive notion of perpendicular)
End of explanation
"""
print(np.linalg.norm(x)**2)
print(np.dot(x,x))
"""
Explanation: The norm squared of a vector is just the vector dot product with itself
$$
||x||^2 = x \cdot x
$$
End of explanation
"""
np.linalg.norm(x-y)
"""
Explanation: The distance between two vectors is the norm of the difference.
$$
d(x,y) = ||x-y||
$$
End of explanation
"""
x = np.array([1,2,3,4])
y = np.array([5,6,7,8])
np.dot(x,y)/(np.linalg.norm(x)*np.linalg.norm(y))
"""
Explanation: Cosine Similarity is the cosine of the angle between the two vectors give by
$$cos(\theta) = \frac{\boldsymbol{x} \cdot \boldsymbol{y}}{||\boldsymbol{x}|| \text{ } ||\boldsymbol{y}||}$$
End of explanation
"""
x_centered = x - np.mean(x)
y_centered = y - np.mean(y)
# The following gives the "Centered Cosine Similarity"
# ... which is equivalent to the "Sample Pearson Correlation Coefficient"
# ... (in the correlation case, we're interpreting the vector as a list of samples)
# ... see: https://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient#For_a_sample
np.dot(x_centered,y_centered)/(np.linalg.norm(x_centered)*np.linalg.norm(y_centered))
"""
Explanation: If both $\boldsymbol{x}$ and $\boldsymbol{y}$ are zero-centered, this calculation is the correlation between $\boldsymbol{x}$ and $\boldsymbol{y}$
End of explanation
"""
x = np.array([1,2,3,4])
y = np.array([5,6,7,8])
print(x+y)
a = 2
x = np.array([1,2,3,4])
print(a*x)
"""
Explanation: Linear Combinations of Vectors
If we have two vectors $\boldsymbol{x}$ and $\boldsymbol{y}$ of the same length $(n)$, then
$$\boldsymbol{x} + \boldsymbol{y} = [x_1+y_1, x_2+y_2, \ldots, x_n+y_n]$$
End of explanation
"""
a1 = 2
x1 = np.array([1,2,3,4])
print(a1*x1)
a2 = 4
x2 = np.array([5,6,7,8])
print(a2*x2)
print(a1*x1 + a2*x2)
"""
Explanation: A linear combination of a collection of vectors $(\boldsymbol{x}_1,
\boldsymbol{x}_2, \ldots,
\boldsymbol{x}_m)$
is a vector of the form
$$a_1 \cdot \boldsymbol{x}_1 + a_2 \cdot \boldsymbol{x}_2 +
\cdots + a_m \cdot \boldsymbol{x}_m$$
End of explanation
"""
X = np.array([[1,2,3],[4,5,6]])
print(X[1, 2])
print(X)
print(X.shape)
"""
Explanation: Matrices
An $n \times p$ matrix is an array of numbers with $n$ rows and $p$ columns:
$$
X =
\begin{bmatrix}
x_{11} & x_{12} & \cdots & x_{1p} \\
x_{21} & x_{22} & \cdots & x_{2p} \\
\vdots & \vdots & \ddots & \vdots \\
x_{n1} & x_{n2} & \cdots & x_{np}
\end{bmatrix}
$$
$n$ = the number of subjects
$p$ = the number of features
For the following $2 \times 3$ matrix
$$
X =
\begin{bmatrix}
1 & 2 & 3\\
4 & 5 & 6
\end{bmatrix}
$$
We can create this matrix in Python using NumPy:
End of explanation
"""
X = np.array([[1,2,3],[4,5,6]])
print(X)
Y = np.array([[7,8,9],[10,11,12]])
print(Y)
print(X+Y)
X = np.array([[1,2,3],[4,5,6]])
print(X)
Y = np.array([[7,8,9],[10,11,12]])
print(Y)
print(X-Y)
X = np.array([[1,2,3],[4,5,6]])
print(X)
a = 5
print(a*X)
"""
Explanation: Basic Properties
Let $X$ and $Y$ be matrices of dimension $n \times p$. Let $x_{ij}$ and $y_{ij}$ for $i=1,2,\ldots,n$ and $j=1,2,\ldots,p$ denote the entries in these matrices, then
$X+Y$ is the matrix whose $(i,j)^{th}$ entry is $x_{ij} + y_{ij}$
$X-Y$ is the matrix whose $(i,j)^{th}$ entry is $x_{ij} - y_{ij}$
$aX$, where $a$ is any real number, is the matrix whose $(i,j)^{th}$ entry is $ax_{ij}$
End of explanation
"""
X = np.array([[2,1,0],[-1,2,3]])
print(X)
Y = np.array([[0,-2],[1,2],[1,1]])
print(Y)
# Matrix multiply with dot operator
print(np.dot(X,Y))
print(X.dot(Y))
# Regular multiply operator is just element-wise multiplication
print(X)
print(Y.transpose())
print(X*Y.T)
"""
Explanation: In order to multiply two matrices, they must be conformable: the number of columns of the first matrix must equal the number of rows of the second matrix.
Let $X$ be a matrix of dimension $n \times k$ and let $Y$ be a matrix of dimension $k \times p$, then the product $XY$ will be a matrix of dimension $n \times p$ whose $(i,j)^{th}$ element is given by the dot product of the $i^{th}$ row of $X$ and the $j^{th}$ column of $Y$
$$\sum_{s=1}^k x_{is}y_{sj} = x_{i1}y_{1j} + \cdots + x_{ik}y_{kj}$$
Note:
$$XY \neq YX$$
If $X$ and $Y$ are square matrices of the same dimension, then the both the product $XY$ and $YX$ exist; however, there is no guarantee the two products will be the same
End of explanation
"""
print(X)
X_T = X.transpose()
print(X_T)
print(X_T.shape)
"""
Explanation: Additional Properties of Matrices
If $X$ and $Y$ are both $n \times p$ matrices,
then $$X+Y = Y+X$$
If $X$, $Y$, and $Z$ are all $n \times p$ matrices,
then $$X+(Y+Z) = (X+Y)+Z$$
If $X$, $Y$, and $Z$ are all conformable,
then $$X(YZ) = (XY)Z$$
If $X$ is of dimension $n \times k$ and $Y$ and $Z$ are of dimension $k \times p$, then $$X(Y+Z) = XY + XZ$$
If $X$ is of dimension $p \times n$ and $Y$ and $Z$ are of dimension $k \times p$, then $$(Y+Z)X = YX + ZX$$
If $a$ and $b$ are real numbers, and $X$ is an $n \times p$ matrix,
then $$(a+b)X = aX+bX$$
If $a$ is a real number, and $X$ and $Y$ are both $n \times p$ matrices,
then $$a(X+Y) = aX+aY$$
If $a$ is a real number, and $X$ and $Y$ are conformable, then
$$X(aY) = a(XY)$$
Matrix Transpose
The transpose of an $n \times p$ matrix is a $p \times n$ matrix with rows and columns interchanged
$$
X^T =
\begin{bmatrix}
x_{11} & x_{21} & \cdots & x_{n1} \\
x_{12} & x_{22} & \cdots & x_{n2} \\
\vdots & \vdots & \ddots & \vdots \\
x_{1p} & x_{2p} & \cdots & x_{np}
\end{bmatrix}
$$
End of explanation
"""
x = np.array([1,2,3,4])
print(x)
print(x.shape)
y = x.reshape(4,1)
z = x[:,np.newaxis]
print(y)
print(z)
print(y.shape)
print(z.shape)
"""
Explanation: Properties of Transpose
Let $X$ be an $n \times p$ matrix and $a$ a real number, then
$$(aX)^T = aX^T$$
Let $X$ and $Y$ be $n \times p$ matrices, then
$$(X \pm Y)^T = X^T \pm Y^T$$
Let $X$ be an $n \times k$ matrix and $Y$ be a $k \times p$ matrix, then
$$(XY)^T = Y^TX^T$$
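A quick numerical check of this product rule for transposes:

```python
import numpy as np

X = np.array([[1, 2, 3], [4, 5, 6]])      # 2 x 3
Y = np.array([[1, 0], [0, 1], [2, 2]])    # 3 x 2

lhs = X.dot(Y).T
rhs = Y.T.dot(X.T)
```

Both sides give the same $2 \times 2$ matrix, as the identity requires.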
Vector in Matrix Form
A column vector is a matrix with $n$ rows and 1 column and to differentiate from a standard matrix $X$ of higher dimensions can be denoted as a bold lower case $\boldsymbol{x}$
$$
\boldsymbol{x} =
\begin{bmatrix}
x_{1}\\
x_{2}\\
\vdots\\
x_{n}
\end{bmatrix}
$$
In numpy, when we enter a vector, it will not normally have the second dimension, so we can reshape it
End of explanation
"""
x_T = y.transpose()
print(x_T)
print(x_T.shape)
print(x)
"""
Explanation: and a row vector is generally written as the transpose
$$\boldsymbol{x}^T = [x_1, x_2, \ldots, x_n]$$
End of explanation
"""
print(np.identity(4))
X = np.array([[1,2,3], [0,1,0], [-2, -1, 0]])
Y = np.linalg.inv(X)
print(Y)
print(Y.dot(X))
print(np.allclose(np.identity(3), Y.dot(X)))
"""
Explanation: If we have two vectors $\boldsymbol{x}$ and $\boldsymbol{y}$ of the same length $(n)$, then the dot product is give by matrix multiplication
$$\boldsymbol{x}^T \boldsymbol{y} =
\begin{bmatrix} x_1 & x_2 & \ldots & x_n \end{bmatrix}
\begin{bmatrix}
y_{1}\\
y_{2}\\
\vdots\\
y_{n}
\end{bmatrix} =
x_1y_1 + x_2y_2 + \cdots + x_ny_n$$
Inverse of a Matrix
The inverse of a square $n \times n$ matrix $X$ is an $n \times n$ matrix $X^{-1}$ such that
$$X^{-1}X = XX^{-1} = I$$
Where $I$ is the identity matrix, an $n \times n$ diagonal matrix with 1's along the diagonal.
If such a matrix exists, then $X$ is said to be invertible or nonsingular, otherwise $X$ is said to be noninvertible or singular.
End of explanation
"""
A = np.array([[1, 1], [1, 2]])
vals, vecs = np.linalg.eig(A)
print(vals)
print(vecs)
lam = vals[0]
vec = vecs[:,0]
print(A.dot(vec))
print(lam * vec)
"""
Explanation: Properties of Inverse
If $X$ is invertible, then $X^{-1}$ is invertible and
$$(X^{-1})^{-1} = X$$
If $X$ and $Y$ are both $n \times n$ invertible matrices, then $XY$ is invertible and
$$(XY)^{-1} = Y^{-1}X^{-1}$$
If $X$ is invertible, then $X^T$ is invertible and
$$(X^T)^{-1} = (X^{-1})^T$$
Orthogonal Matrices
Let $X$ be an $n \times n$ matrix such that $X^TX = I$; then $X$ is said to be orthogonal, which implies that $X^T=X^{-1}$
This is equivalent to saying that the columns of $X$ are all orthogonal to each other (and have unit length).
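A rotation matrix is a handy concrete example; the check below verifies $Q^TQ = I$ numerically:

```python
import numpy as np

theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a 2-D rotation matrix

orthogonal = np.allclose(Q.T.dot(Q), np.identity(2))
```

Since $Q$ is orthogonal, its transpose doubles as its inverse.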
Matrix Equations
A system of equations of the form:
\begin{align}
a_{11}x_1 + \cdots + a_{1n}x_n &= b_1 \\
\vdots \hspace{1in} \vdots \\
a_{m1}x_1 + \cdots + a_{mn}x_n &= b_m
\end{align}
can be written as a matrix equation:
$$
A\mathbf{x} = \mathbf{b}
$$
and hence, has solution
$$
\mathbf{x} = A^{-1}\mathbf{b}
$$
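In practice one solves such a system without forming $A^{-1}$ explicitly; `np.linalg.solve` does this for us:

```python
import numpy as np

# Solve  x1 + 2*x2 = 5,  3*x1 + 4*x2 = 11
A = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([5.0, 11.0])

x = np.linalg.solve(A, b)   # preferred over np.linalg.inv(A).dot(b)
```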
Eigenvectors and Eigenvalues
Let $A$ be an $n \times n$ matrix and $\boldsymbol{x}$ be an $n \times 1$ nonzero vector. An eigenvalue of $A$ is a number $\lambda$ such that
$$A \boldsymbol{x} = \lambda \boldsymbol{x}$$
A vector $\boldsymbol{x}$ satisfying this equation is called an eigenvector associated with $\lambda$
Eigenvectors and eigenvalues will play a huge roll in matrix methods later in the course (PCA, SVD, NMF).
End of explanation
"""
|
alsam/jlclaw | src/euler/Euler.ipynb | mit | %matplotlib inline
%config InlineBackend.figure_format = 'svg'
from exact_solvers import euler
from exact_solvers import euler_demos
from ipywidgets import widgets
from ipywidgets import interact
State = euler.Primitive_State
gamma = 1.4
"""
Explanation: The Euler equations of gas dynamics
In this notebook, we discuss the equations and the structure of the exact solution to the Riemann problem. In Euler_approximate and FV_compare, we will investigate approximate Riemann solvers.
Fluid dynamics
In this chapter we study the system of hyperbolic PDEs that governs the motions of a compressible gas in the absence of viscosity. These consist of conservation laws for mass, momentum, and energy. Together, they are referred to as the compressible Euler equations, or simply the Euler equations. Our discussion here is fairly brief; for much more detail see <cite data-cite="fvmhp"><a href="riemann.html#fvmhp">(LeVeque, 2002)</a></cite> or <cite data-cite="toro2013riemann"><a href="riemann.html#toro2013riemann">(Toro, 2013)</a></cite>.
Mass conservation
We will use $\rho(x,t)$ to denote the fluid density and $u(x,t)$ for its velocity. Then the equation for conservation of mass is just the familiar continuity equation:
$$\rho_t + (\rho u)_x = 0.$$
Momentum conservation
We discussed the conservation of momentum in a fluid already in Acoustics. For convenience, we review the ideas here. The momentum density is given by the product of mass density and velocity, $\rho u$. The momentum flux has two components. First, the momentum is transported in the same way that the density is; this flux is given by the momentum density times the velocity: $\rho u^2$.
To understand the second term in the momentum flux, we must realize that a fluid is made up of many tiny molecules. The density and velocity we are modeling are average values over some small region of space. The individual molecules in that region are not all moving with exactly velocity $u$; that's just their average. Each molecule also has some additional random velocity component. These random velocities are what accounts for the pressure of the fluid, which we'll denote by $p$. These velocity components also lead to a net flux of momentum. Thus the momentum conservation equation is
$$(\rho u)_t + (\rho u^2 + p)_x = 0.$$
This is very similar to the conservation of momentum equation in the shallow water equations, as discussed in Shallow_water, in which case $hu$ is the momentum density and $\frac 1 2 gh^2$ is the hydrostatic pressure. For gas dynamics, a different expression must be used to compute the pressure $p$ from the conserved quantities. This relation is called the equation of state of the gas, as discussed further below.
Energy conservation
The energy has two components: internal energy density $\rho e$ and kinetic energy density $\rho u^2/2$:
$$E = \rho e + \frac{1}{2}\rho u^2.$$
Like the momentum flux, the energy flux involves both bulk transport ($Eu$) and transport due to pressure ($pu$):
$$E_t + (u(E+p))_x = 0.$$
Equation of state
You may have noticed that we have 4 unknowns (density, momentum, energy, and pressure) but only 3 conservation laws. We need one more relation to close the system. That relation, known as the equation of state, expresses how the pressure is related to the other quantities. We'll focus on the case of a polytropic ideal gas, for which
$$p = \rho e (\gamma-1).$$
Here $\gamma$ is the ratio of specific heats, which for air is approximately 1.4.
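As a quick sketch of how this closure is used in practice (the state values here are chosen only for illustration), we can convert between the primitive variables $(\rho, u, p)$ and the conserved quantities $(\rho, \rho u, E)$:

```python
def primitive_to_conserved(rho, u, p, gamma=1.4):
    """Map (density, velocity, pressure) to (density, momentum, energy)."""
    # p = rho*e*(gamma-1)  =>  rho*e = p/(gamma-1), and E = rho*e + rho*u**2/2
    E = p / (gamma - 1.0) + 0.5 * rho * u**2
    return rho, rho * u, E

def conserved_to_primitive(rho, mom, E, gamma=1.4):
    """Invert the map: recover (density, velocity, pressure)."""
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
    return rho, u, p

print(primitive_to_conserved(1.0, 0.0, 1.0))  # -> (1.0, 0.0, 2.5)
```

Exact solvers typically work in the primitive variables and convert back to conserved form only when fluxes are needed.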
Hyperbolic structure of the 1D Euler equations
We can write the three conservation laws as a single system $q_t + f(q)_x = 0$ by defining
\begin{align}
q & = \begin{pmatrix} \rho \\ \rho u \\ E\end{pmatrix}, &
f(q) & = \begin{pmatrix} \rho u \\ \rho u^2 + p \\ u(E+p)\end{pmatrix}.
\label{euler_conserved}
\end{align}
These are the one-dimensional Euler system. As usual, one can define the $3 \times 3$ Jacobian matrix by differentiating this flux function with respect to the three components of $q$.
In our discussion of the structure of these equations, it is convenient to work with the primitive variables $(\rho, u, p)$ rather than the conserved variables. The quasilinear form is particularly simple in the primitive variables:
\begin{align} \label{euler_primitive}
\begin{bmatrix} \rho \\ u \\ p \end{bmatrix}_t +
\begin{bmatrix} u & \rho & 0 \\ 0 & u & 1/\rho \\ 0 & \gamma p & u \end{bmatrix} \begin{bmatrix} \rho \\ u \\ p \end{bmatrix}_x & = 0.
\end{align}
Characteristic velocities
The eigenvalues of the flux Jacobian $f'(q)$ for the 1D Euler equations are:
\begin{align}
\lambda_1 & = u-c & \lambda_2 & = u & \lambda_3 & = u+c
\end{align}
Here $c$ is the sound speed:
$$ c = \sqrt{\frac{\gamma p}{\rho}}.$$
These are also the eigenvalues of the coefficient matrix appearing in (\ref{euler_primitive}), and show that acoustic waves propagate at speeds $\pm c$ relative to the fluid velocity $u$. There is also a characteristic speed $\lambda_2 =u$ corresponding to the transport of entropy at the fluid velocity, as discussed further below.
The eigenvectors of the coefficient matrix appearing in (\ref{euler_primitive}) are:
\begin{align}\label{euler_evecs}
r_1 & = \begin{bmatrix} -\rho/c \\ 1 \\ - \rho c \end{bmatrix} &
r_2 & = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} &
r_3 & = \begin{bmatrix} \rho/c \\ 1 \\ \rho c \end{bmatrix}.
\end{align}
These vectors show the relation between jumps in the primitive variables across waves in each family. The eigenvectors of the flux Jacobian $f'(q)$ arising from the conservative form (\ref{euler_conserved}) would be different, and would give the relation between jumps in the conserved variables across each wave.
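These claims are easy to check numerically. The following sketch (with an arbitrary test state, assumed only for illustration) builds the primitive-form coefficient matrix and confirms that its eigenvalues are $u-c$, $u$, $u+c$ and that the vectors above are indeed eigenvectors:

```python
import numpy as np

gamma = 1.4
rho, u, p = 1.0, 0.5, 1.0          # arbitrary test state
c = np.sqrt(gamma * p / rho)       # sound speed

# Coefficient matrix of the quasilinear primitive-variable form
A = np.array([[u,   rho,       0.0],
              [0.0, u,         1.0 / rho],
              [0.0, gamma * p, u]])

lam = np.sort(np.linalg.eigvals(A).real)
print(np.allclose(lam, [u - c, u, u + c]))   # True

r1 = np.array([-rho / c, 1.0, -rho * c])
r2 = np.array([1.0, 0.0, 0.0])
r3 = np.array([rho / c, 1.0, rho * c])
print(np.allclose(A @ r1, (u - c) * r1),
      np.allclose(A @ r2, u * r2),
      np.allclose(A @ r3, (u + c) * r3))     # True True True
```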
Notice that the second characteristic speed, $\lambda_2$, depends only on $u$ and that $u$ does not change as we move in the direction of $r_2$. In other words, the 2-characteristic velocity is constant on 2-integral curves. This is similar to the wave that carries changes in the tracer that we considered in Shallow_tracer. We say this characteristic field is linearly degenerate; it admits neither shocks nor rarefactions. In a simple 2-wave, all characteristics are parallel. A jump in this family carries a change only in the density, and is referred to as a contact discontinuity.
Mathematically, the $p$th field is linearly degenerate if
\begin{align}\label{lindegen}
\nabla \lambda_p(q) \cdot r_p(q) = 0,
\end{align}
since in this case the eigenvalue $\lambda_p(q)$ does not vary in the direction of the eigenvector $r_p(q)$, and hence is constant along integral curves of this family. (Recall that $r_p(q)$ is the tangent vector at each point on the integral curve.)
The other two fields have characteristic velocities that do vary along the corresponding integral curves. Moreover they vary in a monotonic manner as we move along an integral curve, always increasing as we move in one direction, decreasing in the other. Mathematically, this means that
\begin{align}\label{gennonlin}
\nabla \lambda_p(q) \cdot r_p(q) \ne 0
\end{align}
as $q$ varies, so that the directional derivative of $\lambda_p(q)$ cannot change sign as we move along the curve. This is analogous to the flux for a scalar problem being convex, and means that the 1-wave and the 3-wave in any Riemann solution to the Euler equations will be a single shock or rarefaction wave, not the sort of compound waves we observed in Nonconvex_scalar in the nonconvex scalar case. Any characteristic field satisfying (\ref{gennonlin}) is said to be genuinely nonlinear.
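A finite-difference check of conditions (\ref{lindegen}) and (\ref{gennonlin}) is straightforward. This sketch (test state assumed for illustration) approximates the directional derivative $\nabla \lambda_p \cdot r_p$ for the 2- and 3-fields; with the eigenvector normalization of (\ref{euler_evecs}), the 3-field value works out to the constant $(\gamma+1)/2$:

```python
import numpy as np

gamma = 1.4
c = lambda rho, p: np.sqrt(gamma * p / rho)      # sound speed

lam2 = lambda q: q[1]                            # 2-field speed: u
lam3 = lambda q: q[1] + c(q[0], q[2])            # 3-field speed: u + c
r2 = lambda q: np.array([1.0, 0.0, 0.0])
r3 = lambda q: np.array([q[0] / c(q[0], q[2]), 1.0, q[0] * c(q[0], q[2])])

def directional_derivative(lam, r, q, eps=1e-6):
    """Central-difference approximation of grad(lambda) . r at state q."""
    q = np.asarray(q, dtype=float)
    return (lam(q + eps * r(q)) - lam(q - eps * r(q))) / (2 * eps)

q0 = (1.0, 0.5, 1.0)                             # arbitrary test state
print(directional_derivative(lam2, r2, q0))      # ~0: linearly degenerate
print(directional_derivative(lam3, r3, q0))      # ~(gamma+1)/2 = 1.2: genuinely nonlinear
```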
Entropy
Another important quantity in gas dynamics is the specific entropy:
$$ s = c_v \log(p/\rho^\gamma) + C,$$
where $c_v$ and $C$ are constants. From the expression (\ref{euler_evecs}) for the eigenvector $r_2$, we see that the pressure and velocity are constant across a 2-wave.
A simple 2-wave is also called an entropy wave because a variation in density while the pressure remains constant requires a variation in the entropy of the gas as well. On the other hand a simple acoustic wave (a continuously varying pure 1-wave or 3-wave) has constant entropy throughout the wave; the specific entropy is a Riemann invariant for these families.
A shock wave (either a 1-wave or 3-wave) satisfies the Rankine-Hugoniot conditions and exhibits a jump in entropy. To be physically correct, the entropy of the gas must increase as gas molecules pass through the shock, leading to the entropy condition for selecting shock waves. We have already seen this term used in the context of scalar nonlinear equations and shallow water flow, even though the entropy condition in those cases did not involve the physical entropy.
Riemann invariants
Since the Euler equations have three components, we expect each integral curve (a 1D set in 3D space) to be defined by two Riemann invariants. These are:
\begin{align}
1 & : s, u+\frac{2c}{\gamma-1} \\
2 & : u, p \\
3 & : s, u-\frac{2c}{\gamma-1}.
\end{align}
Integral curves
The level sets of these Riemann invariants are two-dimensional surfaces; the intersection of two appropriate level sets defines an integral curve.
The 2-integral curves, of course, are simply lines of constant pressure and velocity (with varying density). Since the field is linearly degenerate, these coincide with the Hugoniot loci.
We can determine the form of the 1- and 3-integral curves using the Riemann invariants above. For a curve passing through $(\rho_0,u_0,p_0)$, we find
\begin{align}
\rho(p) &= (p/p_0)^{1/\gamma} \rho_0,\\
u(p) & = u_0 \pm \frac{2c_0}{\gamma-1}\left(1-(p/p_0)^{(\gamma-1)/(2\gamma)}\right).
\end{align}
Here the plus sign is for 1-waves and the minus sign is for 3-waves.
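We can verify numerically that both 1-family Riemann invariants really are constant along such a curve. This sketch (reference state assumed for illustration) samples the 1-integral curve, parameterized by pressure:

```python
import numpy as np

gamma = 1.4
rho0, u0, p0 = 1.0, 0.0, 1.0          # state the curve passes through
c0 = np.sqrt(gamma * p0 / rho0)

p = np.linspace(0.1, 5.0, 50)         # parameterize the 1-integral curve by pressure
rho = (p / p0)**(1.0 / gamma) * rho0
u = u0 + 2 * c0 / (gamma - 1) * (1 - (p / p0)**((gamma - 1) / (2 * gamma)))
c = np.sqrt(gamma * p / rho)

s = np.log(p / rho**gamma)            # specific entropy, up to additive/multiplicative constants
w = u + 2 * c / (gamma - 1)           # the other 1-Riemann invariant

print(np.ptp(s) < 1e-10, np.ptp(w) < 1e-10)   # True True: both constant along the curve
```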
Below we plot the projection of some integral curves on the $p-u$ plane.
End of explanation
"""
interact(euler.plot_integral_curves,
gamma=widgets.FloatSlider(min=1.1,max=3,value=1.4),
rho_0=widgets.FloatSlider(min=0.1,max=3.,value=1.,
description=r'$\rho_0$'));
"""
Explanation: If you wish to examine the Python code for this chapter, see:
exact_solvers/euler.py ...
on github,
exact_solvers/euler_demos.py ...
on github.
End of explanation
"""
interact(euler.plot_hugoniot_loci,
gamma=widgets.FloatSlider(min=1.1,max=3,value=1.4),
rho_0=widgets.FloatSlider(min=0.1,max=3.,value=1.,
description=r'$\rho_0$'));
"""
Explanation: Rankine-Hugoniot jump conditions
The Hugoniot loci for 1- and 3-shocks are
\begin{align}
\rho(p) &= \left(\frac{1 + \beta p/p_0}{p/p_0 + \beta} \right)\rho_0,\\
u(p) & = u_0 \pm \frac{2c_0}{\sqrt{2\gamma(\gamma-1)}}
\left(\frac{1-p/p_0}{\sqrt{1+\beta p/p_0}}\right),
\end{align}
where $\beta = (\gamma+1)/(\gamma-1)$.
Here the plus sign is for 1-shocks and the minus sign is for 3-shocks.
Below we plot the projection of some Hugoniot loci on the $p-u$ plane.
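As a sanity check, a state taken from the 1-shock locus should satisfy all three Rankine-Hugoniot conditions $f(q_1) - f(q_0) = s\,(q_1 - q_0)$ with a single shock speed $s$. A sketch (the reference state and the pressure $p$ are assumed only for illustration):

```python
import numpy as np

gamma = 1.4
beta = (gamma + 1) / (gamma - 1)
rho0, u0, p0 = 1.0, 0.0, 1.0                       # reference state
c0 = np.sqrt(gamma * p0 / rho0)

p = 2.0                                            # pick a point on the 1-shock locus
rho = rho0 * (1 + beta * p / p0) / (p / p0 + beta)
u = u0 + 2 * c0 / np.sqrt(2 * gamma * (gamma - 1)) * (1 - p / p0) / np.sqrt(1 + beta * p / p0)

def q_and_f(rho, u, p):
    """Conserved vector and flux vector for a primitive state."""
    E = p / (gamma - 1) + 0.5 * rho * u**2
    q = np.array([rho, rho * u, E])
    f = np.array([rho * u, rho * u**2 + p, u * (E + p)])
    return q, f

q0v, f0v = q_and_f(rho0, u0, p0)
q1v, f1v = q_and_f(rho, u, p)
s = (f1v[0] - f0v[0]) / (q1v[0] - q0v[0])          # shock speed from the mass equation
print(np.allclose(f1v - f0v, s * (q1v - q0v)))     # True: momentum and energy jumps agree too
```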
End of explanation
"""
left_state = State(Density = 3.,
Velocity = 0.,
Pressure = 3.)
right_state = State(Density = 1.,
Velocity = 0.,
Pressure = 1.)
euler.riemann_solution(left_state,right_state)
"""
Explanation: Entropy condition
As mentioned above, a shock wave is physically relevant only if the entropy of the gas increases as the gas particles move through the shock. A discontinuity satisfying the Rankine-Hugoniot jump conditions that violates this entropy condition (an "entropy-violating shock") is not physically correct and should be replaced by a rarefaction wave in the Riemann solution.
This physical entropy condition is equivalent to the mathematical condition that for a 1-shock to be physically relevant, the 1-characteristics must impinge on the shock (the Lax entropy condition). If the entropy condition is violated, the 1-characteristics would spread out, allowing the insertion of an expansion fan (rarefaction wave).
Exact solution of the Riemann problem
The general Riemann solution is found following the steps listed below. This is essentially the same procedure used to determine the correct solution to the Riemann problem for the shallow water equations in Shallow_water, where more details are given.
The Euler equations are a system of three equations and the general Riemann solution consists of three waves, so we must determine two intermediate states rather than the one intermediate state in the shallow water equations. However, it is nearly as simple because of the fact that we know the pressure and velocity are constant across the 2-wave, and so there is a single intermediate pressure $p_m$ and velocity $u_m$ in both intermediate states, and it is only the density that takes different values $\rho_{m1}$ and $\rho_{m2}$. Moreover any jump in density is allowed across the 2-wave, and we have expressions given above for how $u(p)$ varies along any integral curve or Hugoniot locus, expressions that do not explicitly involve $\rho$. So we can determine the intermediate $p_m$ by finding the intersection point of two relevant curves, in step 3 of this general algorithm:
Define a piecewise function giving the middle state velocity $u_m$ that can be connected to the left state by an entropy-satisfying shock or rarefaction, as a function of the middle-state pressure $p_m$.
Define a piecewise function giving the middle state velocity $u_m$ that can be connected to the right state by an entropy-satisfying shock or rarefaction, as a function of the middle-state pressure $p_m$.
Use a nonlinear rootfinder to find the intersection of the two functions defined above.
Use the Riemann invariants to find the intermediate state densities and the solution structure inside any rarefaction waves.
Step 4 above requires finding the structure of rarefaction waves. This can be done using the fact that the Riemann invariants are constant through the rarefaction wave. See Chapter 14 of <cite data-cite="fvmhp"><a href="riemann.html#fvmhp">(LeVeque, 2002)</a></cite> for more details.
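Steps 1-3 can be sketched directly from the wave-curve formulas given earlier. This is only an illustrative re-implementation (assuming SciPy's `brentq` is available; the `euler.riemann_solution` calls used below wrap the full machinery). For Sod-like data, it locates the middle pressure and velocity:

```python
import numpy as np
from scipy.optimize import brentq

gamma = 1.4
beta = (gamma + 1) / (gamma - 1)

def u_middle(p, state, wave):
    """Velocity connected to `state` across a 1-wave (wave=1) or 3-wave (wave=3),
    as a function of the middle-state pressure p."""
    rho0, u0, p0 = state
    c0 = np.sqrt(gamma * p0 / rho0)
    if p <= p0:   # rarefaction: follow the integral curve
        du = 2 * c0 / (gamma - 1) * (1 - (p / p0)**((gamma - 1) / (2 * gamma)))
    else:         # shock: follow the Hugoniot locus
        du = 2 * c0 / np.sqrt(2 * gamma * (gamma - 1)) * (1 - p / p0) / np.sqrt(1 + beta * p / p0)
    return u0 + du if wave == 1 else u0 - du

left, right = (3.0, 0.0, 3.0), (1.0, 0.0, 1.0)     # Sod-like data from Problem 1 below

# Step 3: intersect the two curves in the p-u plane
p_m = brentq(lambda p: u_middle(p, left, 1) - u_middle(p, right, 3), 1e-6, 10.0)
u_m = u_middle(p_m, left, 1)
print(p_m, u_m)   # p_r < p_m < p_l (1-rarefaction, 3-shock), with u_m > 0
```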
Examples of Riemann solutions
Here we present some representative examples of Riemann problems and solutions. The examples chosen are closely related to the examples used in Shallow_water and you might want to refer back to that notebook and compare the results.
Problem 1: Sod shock tube
First we consider the classic shock tube problem. The initial condition consists of high density and pressure on the left, low density and pressure on the right and zero velocity on both sides. The solution is composed of a shock propagating to the right (3-shock), while a left-going rarefaction forms (1-rarefaction). In between these two waves, there is a jump in the density, which is the contact discontinuity (2-wave) in the linearly degenerate characteristic field.
Note that this set of initial conditions is analogous to the "dam break" problem for shallow water equations, and the resulting structure of the solution is very similar to that obtained when those equations are solved with the addition of a scalar tracer. However, in the Euler equations the entropy jump across a 2-wave does affect the fluid dynamics on either side, so this is not a passive tracer and solving the Riemann problem is slightly more complex.
End of explanation
"""
euler.phase_plane_plot(left_state, right_state)
"""
Explanation: Here is a plot of the solution in the phase plane, showing the integral curve connecting the left and middle states, and the Hugoniot locus connecting the middle and right states.
End of explanation
"""
left_state = State(Density = 1.,
Velocity = -3.,
Pressure = 1.)
right_state = State(Density = 1.,
Velocity = 3.,
Pressure = 1.)
euler.riemann_solution(left_state,right_state);
euler.phase_plane_plot(left_state, right_state)
"""
Explanation: Problem 2: Symmetric expansion
Next we consider the case of equal densities and pressures, and equal and opposite velocities, with the initial states moving away from each other. The result is two rarefaction waves (the contact has zero strength).
End of explanation
"""
left_state = State(Density = 1.,
Velocity = 3.,
Pressure = 1.)
right_state = State(Density = 1.,
Velocity = -3.,
Pressure = 1.)
euler.riemann_solution(left_state,right_state)
euler.phase_plane_plot(left_state, right_state)
"""
Explanation: Problem 3: Colliding flows
Next, consider the case in which the left and right states are moving toward each other. This leads to a pair of shocks, with a high-density, high-pressure state in between.
End of explanation
"""
left_state = State(Density = 3.,
Velocity = 0.,
Pressure = 3.)
right_state = State(Density = 1.,
Velocity = 0.,
Pressure = 1.)
euler.plot_riemann_trajectories(left_state, right_state)
"""
Explanation: Plot particle trajectories
In the next plot of the Riemann solution in the $x$-$t$ plane, we also plot the trajectories of a set of particles initially distributed along the $x$ axis at $t=0$, with the spacing inversely proportional to the density. The evolution of the distance between particles gives an indication of how the density changes.
End of explanation
"""
def plot_with_stripes_t_slider(t):
euler_demos.plot_with_stripes(rho_l=3.,u_l=0.,p_l=3.,
rho_r=1.,u_r=0.,p_r=1.,
gamma=gamma,t=t)
interact(plot_with_stripes_t_slider,
t=widgets.FloatSlider(min=0.,max=1.,step=0.1,value=0.5));
"""
Explanation: Since the distance between particles in the above plot is inversely proportional to density, we see that the density around a particle increases as it goes through the shock wave but decreases through the rarefaction wave, and that in general there is a jump in density across the contact discontinuity, which lies along the particle trajectory emanating from $x=0$ at $t=0$.
Riemann solution with a colored tracer
Next we plot the Riemann solution with the density plot also showing an advected color to help visualize the flow better. The fluid initially to the left of $x=0$ is colored red and that initially to the right of $x=0$ is colored blue, with stripes of different shades of these colors to help visualize the motion of the fluid.
Let's plot the Sod shock tube data with this colored tracer:
End of explanation
"""
euler_demos.euler_demo1(rho_l=2.,u_l=0.,p_l=2.5,
rho_r=3.,u_r=0.,p_r=5., gamma=gamma)
"""
Explanation: Note the following in the figure above:
The edges of each stripe are being advected with the fluid velocity, so you can visualize how the fluid is moving.
The width of each stripe initially is inversely proportional to the density of the fluid, so that the total mass of gas within each stripe is the same.
The total mass within each stripe remains constant as the flow evolves, and the width of each stripe remains inversely proportional to the local density.
The interface between the red and blue gas moves with the contact discontinuity. The velocity and pressure are constant but the density can vary across this wave.
Interactive Riemann solver
The initial configuration specified below gives a rather different looking solution than when using initial conditions of Sod, but with the same mathematical structure. In the live notebook, you can easily adjust the initial data and immediately see the resulting solution.
End of explanation
"""
left_state = State(Density = 0.,
Velocity = 0.,
Pressure = 0.)
right_state = State(Density = 1.,
Velocity = -3.,
Pressure = 1.)
euler.riemann_solution(left_state,right_state)
euler.phase_plane_plot(left_state, right_state)
"""
Explanation: Riemann problems with vacuum
A vacuum state (with zero pressure and density) in the Euler equations is similar to a dry state (with depth $h=0$) in the shallow water equations. It can arise in the solution of the Riemann problem in two ways:
An initial left or right vacuum state: in this case the Riemann solution consists of a single rarefaction, connecting the non-vacuum state to vacuum.
A problem where the left and right states are not vacuum but middle states are vacuum. Since this means the middle pressure is smaller than that to the left or right, this can occur only if the 1- and 3-waves are both rarefactions. These rarefactions are precisely those required to connect the left and right states to the middle vacuum state.
Initial vacuum state
Next we start with the density and pressure set to 0 in the left state. The velocity plot looks a bit strange, but note that the velocity is undefined in vacuum. The solution structure consists of a rarefaction wave, similar to what is observed in the analogous case of a dam break problem with dry land on one side (depth $=0$), as discussed in Shallow_water.
End of explanation
"""
left_state = State(Density = 1.,
Velocity = -10.,
Pressure = 1.)
right_state = State(Density = 1.,
Velocity = 10.,
Pressure = 1.)
euler.riemann_solution(left_state,right_state)
euler.phase_plane_plot(left_state, right_state)
"""
Explanation: The phase plane plot may look odd, but recall that in the vacuum state velocity is undefined, and since $p_\ell = p_m = 0$, the left and middle states are actually the same.
Middle vacuum state
Finally, we consider an example where there is sufficiently strong outflow ($u_\ell<0$ and $u_r>0$) that a vacuum state forms, analogous to the dry state that appears in the similar example in Shallow_water.
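The standard criterion for this situation is the pressure-positivity condition: a vacuum middle state forms precisely when the outflow is too strong for the two rarefactions to meet at positive pressure, i.e. when $u_r - u_\ell \ge 2(c_\ell + c_r)/(\gamma-1)$. A sketch checking the two expansion examples in this chapter:

```python
import numpy as np

gamma = 1.4

def middle_vacuum_forms(left, right):
    """True if the Riemann solution contains a vacuum middle state.

    States are (density, velocity, pressure) triples."""
    rho_l, u_l, p_l = left
    rho_r, u_r, p_r = right
    c_l = np.sqrt(gamma * p_l / rho_l)
    c_r = np.sqrt(gamma * p_r / rho_r)
    return u_r - u_l >= 2 * (c_l + c_r) / (gamma - 1)

print(middle_vacuum_forms((1.0, -10.0, 1.0), (1.0, 10.0, 1.0)))  # True: this example
print(middle_vacuum_forms((1.0, -3.0, 1.0), (1.0, 3.0, 1.0)))    # False: Problem 2
```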
End of explanation
"""
repo: ES-DOC/esdoc-jupyterhub | path: notebooks/ncc/cmip6/models/sandbox-1/landice.ipynb | license: gpl-3.0
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'sandbox-1', 'landice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: NCC
Source ID: SANDBOX-1
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:25
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the land ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land ice model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaptation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Ice --> Dynamics
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation
"""
repo: ES-DOC/esdoc-jupyterhub | path: notebooks/ncc/cmip6/models/noresm2-lme/atmos.ipynb | license: gpl-3.0
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-lme', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: NCC
Source ID: NORESM2-LME
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:24
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified, describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
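For INTEGER properties such as the one above, `DOC.set_value(value)` is called with a bare number rather than a quoted string. A minimal sketch of the implied type check (illustrative only, not the pyesdoc API; the value 16 is hypothetical):

```python
# Illustrative only (not the pyesdoc API): a check matching the
# "Type: INTEGER, Cardinality: 1.1" annotation above. bool is a subclass
# of int in Python, so it is excluded explicitly.
def validate_integer(value):
    if isinstance(value, bool) or not isinstance(value, int):
        raise TypeError("expected an integer, got %r" % (value,))
    return value

validate_integer(16)  # hypothetical number of spectral intervals
```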
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
"""
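BOOLEAN properties such as this one accept exactly the two unquoted values listed under "Valid Choices". A sketch of the implied check (illustrative only, not the pyesdoc API):

```python
# Illustrative only (not the pyesdoc API): BOOLEAN properties accept
# exactly True or False, as listed under "Valid Choices".
def validate_boolean(value):
    if not isinstance(value, bool):
        raise TypeError("expected True or False, got %r" % (value,))
    return value

validate_boolean(True)
```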
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISCCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISCCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISCCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
|
ppossemiers/analyzer | analyzer.ipynb | mit | # Imports and directives
%matplotlib inline
import numpy as np
from math import log
import matplotlib.pyplot as plt
from matplotlib.mlab import PCA as mlabPCA
import javalang
import os, re, requests, zipfile, json, operator
from collections import Counter
import colorsys
import random
from StringIO import StringIO
from subprocess import Popen, PIPE
from sklearn.cluster import KMeans
from tabulate import tabulate
from sklearn import svm
# Variables
USER = 'apache' # github user of the repo that is analysed
REPO = 'tomcat' # repository to investigate
BASE_PATH = '/Users/philippepossemiers/Documents/Dev/Spark/data/analyzer/' # local expansion path
COMMENT_LINES = ['/*', '//', '*/', '* '] # remove comments from code
KEY_WORDS = ['abstract','continue','for','new','switch','assert','default','goto','synchronized',
'boolean','do','if','private','this','break','double','implements','protected','throw',
'byte','else','public','throws','case','enum','instanceof','return','transient',
'catch','extends','int','short','try','char','final','interface','static','void',
             'class','finally','long','strictfp','volatile','const','float','native','super','while',
             'true','false','null']
TOP = 25 # number of items to show in graphs
# list of operators to find in source code
OPERATORS = ['\+\+','\-\-','\+=','\-=','\*\*','==','!=','>=','<=','\+','=','\-','\*','/','%','!','&&', \
'\|\|','\?','instanceof','~','<<','>>','>>>','&','\^','<','>']
# list of variable types to find in source code
OPERANDS = ['boolean','byte','char','short','int','long','float','double','String']
GIT_COMMIT_FIELDS = ['author_name', 'committer name', 'date', 'message'] # one entry per placeholder in GIT_LOG_FORMAT
GIT_LOG_FORMAT = ['%an', '%cn', '%ad', '%s']
GIT_LOG_FORMAT = '%x1f'.join(GIT_LOG_FORMAT) + '%x1e'
# List of Apache Java projects on github
APACHE_PROJECTS = ['abdera', 'accumulo', 'ace', 'activemq', 'airavata', 'ambari', 'ant', 'ant-antlibs-antunit', \
'any23', 'archiva', 'aries', 'webservices-axiom', 'axis2-java', \
'bigtop', 'bookkeeper', 'bval', 'calcite', 'camel', 'cassandra', 'cayenne', \
'chainsaw', 'chukwa', 'clerezza', 'commons-bcel', \
'commons-beanutils', 'commons-bsf', 'commons-chain', 'commons-cli', 'commons-codec', \
'commons-collections', 'commons-compress', 'commons-configuration', 'commons-daemon', \
'commons-dbcp', 'commons-dbutils', 'commons-digester', 'commons-discovery', \
'commons-email', 'commons-exec', 'commons-fileupload', 'commons-functor', 'httpcomponents-client', \
'commons-io', 'commons-jci', 'commons-jcs', 'commons-jelly', 'commons-jexl', 'commons-jxpath', \
'commons-lang', 'commons-launcher', 'commons-logging', 'commons-math', \
'commons-net', 'commons-ognl', 'commons-pool', 'commons-proxy', 'commons-rng', 'commons-scxml', \
'commons-validator', 'commons-vfs', 'commons-weaver', 'continuum', 'crunch', \
'ctakes', 'curator', 'cxf', 'derby', 'directmemory', \
'directory-server', 'directory-studio', 'drill', 'empire-db', 'falcon', 'felix', 'flink', \
'flume', 'fop', 'directory-fortress-core', 'ftpserver', 'geronimo', 'giraph', 'gora', \
'groovy', 'hadoop', 'hama', 'harmony', 'hbase', 'helix', 'hive', 'httpcomponents-client', \
'httpcomponents-core', 'jackrabbit', 'jena', 'jmeter', 'lens', 'log4j', \
'lucene-solr', 'maven', 'maven-doxia', 'metamodel', 'mina', 'mrunit', 'myfaces', 'nutch', 'oozie', \
'openjpa', 'openmeetings', 'openwebbeans', 'orc', 'phoenix', 'pig', 'poi','rat', 'river', \
'shindig', 'sling', \
'sqoop', 'struts', 'synapse', 'syncope', 'tajo', 'tika', 'tiles', 'tomcat', 'tomee', \
'vxquery', 'vysper', 'whirr', 'wicket', 'wink', 'wookie', 'xmlbeans', 'zeppelin', 'zookeeper']
print len(APACHE_PROJECTS)
# Global dictionaries
joined = [] # list with all source files
commit_dict = {} # commits per class
reference_dict = {} # number of times a class is referenced
lines_dict = {} # number of lines per class
methods_dict = {} # number of functions per class
operators_dict = {} # number of operators per class
operands_dict = {} # number of operands per class
halstead_dict = {} # Halstead complexity measures
cyclomatic_dict = {} # cyclomatic complexity
# Utility functions
# TODO : check if we can use this
def sanitize(contents):
lines = contents.split('\n')
# remove stop lines
for stop_line in COMMENT_LINES:
        lines = [line.lower().lstrip().replace(';', '') for line in lines if stop_line not in line and line != '']
return '\n'.join(lines)
def find_whole_word(word):
return re.compile(r'\b({0})\b'.format(word), flags=re.IGNORECASE).search
def all_files(directory):
for path, dirs, files in os.walk(directory):
for f in files:
yield os.path.join(path, f)
def build_joined(repo):
src_list = []
repo_url = 'https://github.com/' + repo[0] + '/' + repo[1]
os.chdir(BASE_PATH)
os.system('git clone {}'.format(repo_url))
# get all java source files
src_files = [f for f in all_files(BASE_PATH + repo[1]) if f.endswith('.java')]
for f in src_files:
try:
# read contents
code = open(f, 'r').read()
# https://github.com/c2nes/javalang
tree = javalang.parse.parse(code)
# create tuple with package + class name and code + tree + file path
src_list.append((tree.package.name + '.' + tree.types[0].name, (code, tree, f)))
except:
pass
return src_list
def parse_git_log(repo_dir, src):
# first the dictionary with all classes
# and their commit count
total = 0
p = Popen('git log --name-only --pretty=format:', shell=True, stdout=PIPE, cwd=repo_dir)
(log, _) = p.communicate()
log = log.strip('\n\x1e').split('\x1e')
log = [r.strip().split('\n') for r in log]
log = [r for r in log[0] if '.java' in r]
log2 = []
for f1 in log:
for f2 in src:
if f2[1][2].find(f1) > -1:
log2.append(f2[0])
cnt_dict = Counter(log2)
for key, value in cnt_dict.items():
total += value
cnt_dict['total'] = total
# and then the list of commits as dictionaries
p = Popen('git log --format="%s"' % GIT_LOG_FORMAT, shell=True, stdout=PIPE, cwd=repo_dir)
(log, _) = p.communicate()
log = log.strip('\n\x1e').split("\x1e")
log = [row.strip().split("\x1f") for row in log]
log = [dict(zip(GIT_COMMIT_FIELDS, row)) for row in log]
# now get list of distinct committers
committers = len(set([x['committer name'] for x in log]))
cnt_dict['committers'] = committers
return cnt_dict
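The field-separator trick used in `parse_git_log` can be sketched in isolation: `git log` is asked to emit `%x1f` (unit separator) between fields and `%x1e` (record separator) between commits, so the output splits cleanly. The snippet below runs the same parsing on a hard-coded sample string instead of a real repository; the field names mirror the notebook's constants.

```python
# Sketch of the git-log parsing step above, on a hard-coded sample string.
FIELDS = ['author_name', 'committer_name', 'date', 'message']

def parse_log(raw):
    """Split %x1e-delimited commits into dicts of %x1f-delimited fields."""
    commits = raw.strip('\n\x1e').split('\x1e')
    rows = [c.strip().split('\x1f') for c in commits]
    return [dict(zip(FIELDS, row)) for row in rows]

sample = ('alice\x1falice\x1f2016-01-01\x1ffix NPE\x1e'
          'bob\x1falice\x1f2016-01-02\x1fadd tests\x1e')
commits = parse_log(sample)
print(len(commits))                                  # 2 commits
print(len({c['committer_name'] for c in commits}))   # 1 distinct committer
```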
def count_inheritance(src):
count = 0
for name, tup in src:
if find_whole_word('extends')(tup[0]):
count += 1
return count
def count_references(src):
names, tups = zip(*src)
dict = {e : 0 for i, e in enumerate(names)}
total = 0
for name in names:
        c_name = name[name.rfind('.') + 1:]  # bare class name, without the package prefix
for tup in tups:
if find_whole_word(c_name)(tup[0]):
dict[name] += 1
total += 1
dict['total'] = total
    # keep only classes referenced more than once
return {k: v for k, v in dict.iteritems() if v > 1}
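A quick sanity check of the whole-word matching that the reference count relies on (illustrative only; the helper is redefined here so the snippet is self-contained):

```python
import re

def find_whole_word(word):
    # same idea as the helper above: match `word` only at word boundaries
    return re.compile(r'\b({0})\b'.format(word), flags=re.IGNORECASE).search

code = 'FooBar x = new FooBar(); Foo y = makeFoo();'
print(bool(find_whole_word('Foo')(code)))  # True: matches the bare 'Foo y', not 'FooBar'
print(bool(find_whole_word('Baz')(code)))  # False: no reference at all
```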
def count_lines(src):
    dict = {}
total = 0
for name, tup in src:
dict[name] = 0
lines = tup[0].split('\n')
for line in lines:
if line != '\n':
dict[name] += 1
total += 1
dict['total'] = total
    # return the line count per class
return {k: v for k, v in dict.iteritems()}
# constructors not counted
def count_methods(src):
    dict = {}
total = 0
for name, tup in src:
dict[name] = len(tup[1].types[0].methods)
total += dict[name]
dict['total'] = total
    # return the method count per class
return {k: v for k, v in dict.iteritems()}
def count_operators(src):
dict = {key: 0 for key in OPERATORS}
for name, tup in src:
for op in OPERATORS:
# if operator is in list, match it without anything preceding or following it
# eg +, but not ++ or +=
if op in ['\+','\-','!','=']:
# regex excludes followed_by (?!) and preceded_by (?<!)
dict[op] += len(re.findall('(?!\-|\*|&|>|<|>>)(?<!\-|\+|=|\*|&|>|<)' + op, tup[0]))
else:
dict[op] += len(re.findall(op, tup[0]))
# TODO : correct bug with regex for the '++'
dict['\+'] -= dict['\+\+']
total = 0
distinct = 0
for key in dict:
if dict[key] > 0:
total += dict[key]
distinct += 1
dict['total'] = total
dict['distinct'] = distinct
return dict
def count_operands(src):
dict = {key: 0 for key in OPERANDS}
for name, tup in src:
lines = tup[0].split('\n')
for line in lines:
for op in OPERANDS:
if op in line:
dict[op] += 1 + line.count(',')
total = 0
distinct = 0
for key in dict:
if dict[key] > 0:
total += dict[key]
distinct += 1
dict['total'] = total
dict['distinct'] = distinct
return dict
def calc_cyclomatic_complexity(src):
dict = {}
total = 0
for name, tup in src:
dict[name] = 1
dict[name] += len(re.findall('if|else|for|switch|while', tup[0]))
total += dict[name]
dict['total'] = total
    # return the complexity per class
return {k: v for k, v in dict.iteritems()}
def make_hbar_plot(dictionary, title, x_label, top=TOP):
# show top classes
vals = sorted(dictionary.values(), reverse=True)[:top]
lbls = sorted(dictionary, key=dictionary.get, reverse=True)[:top]
# make plot
fig = plt.figure(figsize=(10, 7))
fig.suptitle(title, fontsize=15)
ax = fig.add_subplot(111)
# set ticks
y_pos = np.arange(len(lbls)) + 0.5
ax.barh(y_pos, vals, align='center', alpha=0.4, color='lightblue')
ax.set_yticks(y_pos)
ax.set_yticklabels(lbls)
ax.set_xlabel(x_label)
plt.show()
pass
# Clustering
def random_centroid_selector(total_clusters , clusters_plotted):
random_list = []
for i in range(0, clusters_plotted):
random_list.append(random.randint(0, total_clusters - 1))
return random_list
def plot_cluster(kmeansdata, centroid_list, names, num_cluster, title):
mlab_pca = mlabPCA(kmeansdata)
cutoff = mlab_pca.fracs[1]
users_2d = mlab_pca.project(kmeansdata, minfrac=cutoff)
centroids_2d = mlab_pca.project(centroid_list, minfrac=cutoff)
# make plot
fig = plt.figure(figsize=(20, 15))
fig.suptitle(title, fontsize=15)
ax = fig.add_subplot(111)
plt.xlim([users_2d[:, 0].min() - 3, users_2d[:, 0].max() + 3])
plt.ylim([users_2d[:, 1].min() - 3, users_2d[:, 1].max() + 3])
random_list = random_centroid_selector(num_cluster, 50)
for i, position in enumerate(centroids_2d):
if i in random_list:
plt.scatter(centroids_2d[i, 0], centroids_2d[i, 1], marker='o', c='red', s=100)
for i, position in enumerate(users_2d):
plt.scatter(users_2d[i, 0], users_2d[i, 1], marker='o', c='lightgreen')
for label, x, y in zip(names, users_2d[:, 0], users_2d[:, 1]):
ax.annotate(
label,
xy = (x, y), xytext=(-15, 15),
textcoords = 'offset points', ha='right', va='bottom',
bbox = dict(boxstyle='round,pad=0.5', fc='white', alpha=0.5),
arrowprops = dict(arrowstyle = '->', connectionstyle='arc3,rad=0'))
pass
"""
Explanation: Analyzer
Analyzer is a Python program that tries to gauge the evolvability and maintainability of Java software. To achieve this, it measures the complexity of the software under evaluation.
A. What is software evolvability and maintainability?
We define software evolvability as the ease with which a software system or a
component can evolve while preserving its design as much as possible. In the case of OO class libraries, we restrict the preservation of the design to the preservation of the library interface. This is important when we consider that the evolution of a system that uses a library is directly influenced by the evolvability of the library. For instance, a system that uses version i of a library can easily be upgraded with version i+1 of the same library if the new version preserves the interface of the older one.
B. What is software complexity?
As the Wikipedia article (https://en.wikipedia.org/wiki/Programming_complexity) on programming complexity states :
"As the number of entities increases, the number of interactions between them would increase exponentially, and it would get to a point where it would be impossible to know and understand all of them. Similarly, higher levels of complexity in software increase the risk of unintentionally interfering with interactions and so increases the chance of introducing defects when making changes. In more extreme cases, it can make modifying the software virtually impossible."
C. How can we measure software complexity?
To measure software complexity, we have to break it down into metrics. We therefore use the metrics proposed by Sanjay Misra and Ferid Cafer in their paper 'ESTIMATING COMPLEXITY OF PROGRAMS IN PYTHON LANGUAGE'.
To quote from this paper :
"Complexity of a system depends on the following factors :
1. Complexity due to classes. Class is a basic unit of object oriented software development. All the functions are distributed in different classes. Further classes in the object-oriented code either are in inheritance hierarchy or distinctly distributed. Accordingly, the complexity of all the classes is due to classes in inheritance hierarchy and the complexity of distinct classes.
2. Complexity due to global factors: The second important factor, which is normally neglected in calculating complexity of object-oriented codes, is the complexity of global factors in main program.
3. Complexity due to coupling: Coupling is one of the important factors for increasing complexity of object- oriented code."
Within the Analyzer program, we try to measure complexity using the following metrics:
Commit frequency. This can find the 'hotspots' in code where many changes were performed and which can be problem zones. This idea was proposed by Adam Tornhill in 'Your Code as a Crime Scene'.
Distinct number of committers. This metric will tell us how many people worked on the code, thereby increasing complexity.
Class reference count. This metric measures the degree of coupling between classes by counting the references to them.
Inheritance count. This is a measure of the coupling that exists because of inheritance.
Lines of code. A rather crude metric that tries to measure the length of our software system.
Number of methods. This is a measure of the complexity of the system.
Halstead complexity measures : https://en.wikipedia.org/wiki/Halstead_complexity_measures
Cyclomatic Complexity : https://en.wikipedia.org/wiki/Cyclomatic_complexity
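As a rough illustration of the last metric, the notebook approximates McCabe's cyclomatic complexity by counting decision keywords in the source text. A minimal sketch of that idea (illustrative, not the notebook's exact implementation — this version adds word boundaries around the keywords):

```python
import re

def cyclomatic(java_source):
    """1 + number of decision points, approximated by keyword counts."""
    return 1 + len(re.findall(r'\b(?:if|else|for|switch|while)\b', java_source))

src = '''
public int f(int x) {
    if (x > 0) { return 1; } else { return -1; }
}
'''
print(cyclomatic(src))  # 3: the straight-line path, plus 'if' and 'else'
```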
D. Interpreting the metrics
Now we try to interpret these measures by clustering, or grouping together the results from analyzing 134 open-source Apache Java projects. To do that, we will use the k-means algorithm, a classic machine-learning algorithm originally developed in 1957.
Clustering is an unsupervised learning technique and we use clustering algorithms for exploring data. Using clustering allows us to group similar software projects together, and we can explore the trends in each cluster independently.
End of explanation
"""
# first build list of source files
joined = build_joined((USER, REPO))
"""
Explanation: Analyzing one project
End of explanation
"""
commit_dict = parse_git_log(BASE_PATH + REPO, joined)
make_hbar_plot(commit_dict, 'Commit frequency', 'Commits', TOP)
"""
Explanation: 1. Commit frequency
End of explanation
"""
print 'Distinct committers : ' + str(commit_dict['committers'])
"""
Explanation: 2. Distinct committers
End of explanation
"""
reference_dict = count_references(joined)
make_hbar_plot(reference_dict, 'Top 25 referenced classes', 'References', TOP)
"""
Explanation: 3. Class reference count
End of explanation
"""
inheritance_count = count_inheritance(joined)
print 'Inheritance count : ' + str(inheritance_count)
"""
Explanation: 4. Inheritance count
End of explanation
"""
lines_dict = count_lines(joined)
make_hbar_plot(lines_dict, 'Largest 25 classes', 'Lines of code', TOP)
"""
Explanation: 5. Lines of code
End of explanation
"""
methods_dict = count_methods(joined)
make_hbar_plot(methods_dict, 'Top 25 classes in nr of methods', 'Number of methods', TOP)
"""
Explanation: 6. Number of methods
End of explanation
"""
operators_dict = count_operators(joined)
make_hbar_plot(operators_dict, 'Top 25 operators', 'Number of operators', TOP)
"""
Explanation: 7. Halstead complexity measures
To measure the Halstead complexity, following metrics are taken into account :
* the number of distinct operators (https://docs.oracle.com/javase/tutorial/java/nutsandbolts/opsummary.html)
* the number of distinct operands
* the total number of operators
* the total number of operands
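From these four counts the derived Halstead measures follow directly. A worked example with made-up counts (n1, n2, N1, N2 below are hypothetical, not taken from any project):

```python
from math import log

n1, n2 = 10, 8    # distinct operators / operands (hypothetical)
N1, N2 = 50, 40   # total operators / operands (hypothetical)

vocabulary = n1 + n2                         # n = 18
length = N1 + N2                             # N = 90
volume = length * log(vocabulary, 2)         # V = N * log2(n), about 375.3
difficulty = (n1 / 2.0) * (N2 / float(n2))   # D = (n1/2) * (N2/n2) = 25.0
effort = volume * difficulty                 # E = D * V

print(vocabulary, length, volume, difficulty)
```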
a) Number of operators
End of explanation
"""
operands_dict = count_operands(joined)
make_hbar_plot(operands_dict, 'Top 25 operand types', 'Number of operands', TOP)
"""
Explanation: b) Number of operands
End of explanation
"""
halstead_dict['PROGRAM_VOCABULARY'] = operators_dict['distinct'] + operands_dict['distinct']
halstead_dict['PROGRAM_LENGTH'] = round(operators_dict['total'] + operands_dict['total'], 0)
halstead_dict['VOLUME'] = round(halstead_dict['PROGRAM_LENGTH'] * log(halstead_dict['PROGRAM_VOCABULARY'], 2), 0)
halstead_dict['DIFFICULTY'] = (operators_dict['distinct'] / 2) * (operands_dict['total'] / operands_dict['distinct'])
halstead_dict['EFFORT'] = round(halstead_dict['VOLUME'] * halstead_dict['DIFFICULTY'], 0)
halstead_dict['TIME'] = round(halstead_dict['EFFORT'] / 18, 0)
halstead_dict['BUGS'] = round(halstead_dict['VOLUME'] / 3000, 0)
print halstead_dict
"""
Explanation: Complexity measures
End of explanation
"""
cyclomatic_dict = calc_cyclomatic_complexity(joined)
make_hbar_plot(cyclomatic_dict, 'Top 25 classes with cyclomatic complexity', 'Level of complexity', TOP)
"""
Explanation: 8. Cyclomatic complexity
End of explanation
"""
# featurize all metrics
def make_features(repo, dict):
features = []
for key, value in dict.items():
features.append(int(value))
return features
# iterate all repos and build
# dictionary with all metrics
def make_rows(repos):
rows = []
try:
for repo in repos:
dict = {}
joined = build_joined(repo)
github_dict = parse_git_log(BASE_PATH + repo[1], joined)
dict['commits'] = github_dict['total']
#dict['committers'] = github_dict['committers'] Uncomment this line for the next run.
# Was added at the last minute
dict['references'] = count_references(joined)['total']
dict['inheritance'] = count_inheritance(joined)
dict['lines'] = count_lines(joined)['total']
dict['methods'] = count_methods(joined)['total']
operators_dict = count_operators(joined)
operands_dict = count_operands(joined)
dict['program_vocabulary'] = operators_dict['distinct'] + operands_dict['distinct']
dict['program_length'] = round(operators_dict['total'] + operands_dict['total'], 0)
dict['volume'] = round(dict['program_length'] * log(dict['program_vocabulary'], 2), 0)
dict['difficulty'] = (operators_dict['distinct'] / 2) * (operands_dict['total'] / operands_dict['distinct'])
dict['effort'] = round(dict['volume'] * dict['difficulty'], 0)
dict['time'] = round(dict['effort'] / 18, 0)
dict['bugs'] = round(dict['volume'] / 3000, 0)
dict['cyclomatic'] = calc_cyclomatic_complexity(joined)['total']
rows.append(make_features(repo, dict))
except:
pass
return rows
def cluster_repos(arr, nr_clusters):
kmeans = KMeans(n_clusters=nr_clusters)
kmeans.fit(arr)
centroids = kmeans.cluster_centers_
labels = kmeans.labels_
return (centroids, labels)
"""
Explanation: Analyzing Apache Java projects
End of explanation
"""
repositories = [('apache', x) for x in APACHE_PROJECTS]
"""
Explanation: Construct model with Apache projects
End of explanation
"""
rows = []
for start in range(0, len(repositories), 5):
    rows.extend(make_rows(repositories[start:start + 5]))
print rows
"""
Explanation: We break the projects down in batches of five to make the analysis manageable
End of explanation
"""
# TWO clusters
NR_CLUSTERS = 2
arr = np.array(rows)
tup = cluster_repos(arr, NR_CLUSTERS)
centroids = tup[0]
plot_cluster(arr, centroids, APACHE_PROJECTS, NR_CLUSTERS, str(NR_CLUSTERS) + ' Clusters')
# THREE clusters
NR_CLUSTERS = 3
arr = np.array(rows)
tup = cluster_repos(arr, NR_CLUSTERS)
centroids = tup[0]
plot_cluster(arr, centroids, APACHE_PROJECTS, NR_CLUSTERS, str(NR_CLUSTERS) + ' Clusters')
# FOUR clusters
NR_CLUSTERS = 4
arr = np.array(rows)
tup = cluster_repos(arr, NR_CLUSTERS)
centroids = tup[0]
plot_cluster(arr, centroids, APACHE_PROJECTS, NR_CLUSTERS, str(NR_CLUSTERS) + ' Clusters')
"""
Explanation: Clustering Apache Java projects
End of explanation
"""
names = [x[1] for x in repositories]
print names.index('synapse')
print names.index('tomcat')
print names.index('groovy')
print names.index('hama')
"""
Explanation: Clustering results
The clustering shows that four clusters cover the whole graph, giving four clearly defined regions into which every project can be mapped. The task now is to discover which parameters carry the most weight in this clustering.
We can do this by examining the features of the four projects closest to the centroids and comparing them.
End of explanation
"""
headers = ['Repo', 'Com', 'Ref', 'Inh', 'Line', 'Meth', 'Voc', \
'Len', 'Vol', 'Diff', 'Eff', 'Time', 'Bug','Cycl']
print tabulate([[names[118]] + [x for x in rows[118]], [names[123]] + [x for x in rows[123]], \
[names[82]] + [x for x in rows[82]], [names[84]] + [x for x in rows[84]]], headers=headers)
"""
Explanation: Tabulating synapse, tomcat, groovy, and hama
End of explanation
"""
# FOUR clusters
NR_CLUSTERS = 4
arr = np.array(rows)
tup = cluster_repos(arr, NR_CLUSTERS)
labels = tup[1]
"""
Explanation: Construct a prediction model with the Apache projects
Labeling all projects using four clusters
End of explanation
"""
clf = svm.SVC(gamma=0.001, C=100.)
clf.fit(rows, labels)
"""
Explanation: Construct a Support Vector Classification model
End of explanation
"""
print labels
print clf.predict(rows[3])
print clf.predict(rows[34])
"""
Explanation: Test it
End of explanation
"""
#repositories = [('qos-ch', 'slf4j'), ('mockito', 'mockito'), ('elastic', 'elasticsearch')]
repositories = [('JetBrains', 'kotlin')]
rows = make_rows(repositories)
print clf.predict(rows[0])
print tabulate([['Kotlin'] + [x for x in rows[0]]], headers=headers)
"""
Explanation: Analyze JetBrains kotlin project
End of explanation
"""
|
khalido/nd101 | keyboard-shortcuts.ipynb | gpl-3.0 | # mode practice
"""
Explanation: Keyboard shortcuts
In this notebook, you'll get some practice using keyboard shortcuts. These are key to becoming proficient at using notebooks and will greatly increase your work speed.
First up, switching between edit mode and command mode. Edit mode allows you to type into cells while command mode will use key presses to execute commands such as creating new cells and opening the command palette. When you select a cell, you can tell which mode you're currently working in by the color of the box around the cell. In edit mode, the box and thick left border are colored green. In command mode, they are colored blue. Also in edit mode, you should see a cursor in the cell itself.
By default, when you create a new cell or move to the next one, you'll be in command mode. To enter edit mode, press Enter/Return. To go back from edit mode to command mode, press Escape.
Exercise: Click on this cell, then press Shift + Enter to get to the next cell. Switch between edit and command mode a few times.
End of explanation
"""
## Practice here
def fibo(n): # Recursive Fibonacci sequence!
if n == 0:
return 0
elif n == 1:
return 1
return fibo(n-1) + fibo(n-2)
"""
Explanation: Help with commands
If you ever need to look up a command, you can bring up the list of shortcuts by pressing H in command mode. The keyboard shortcuts are also available above in the Help menu. Go ahead and try it now.
Creating new cells
One of the most common commands is creating new cells. You can create a cell above the current cell by pressing A in command mode. Pressing B will create a cell below the currently selected cell.
Exercise: Create a cell above this cell using the keyboard command.
Exercise: Create a cell below this cell using the keyboard command.
Switching between Markdown and code
With keyboard shortcuts, it is quick and simple to switch between Markdown and code cells. To change from Markdown to code, press Y. To switch from code to Markdown, press M.
Exercise: Switch the cell below between Markdown and code cells.
End of explanation
"""
# Move this cell down
# below this cell
"""
Explanation: Line numbers
A lot of times it is helpful to number the lines in your code for debugging purposes. You can turn on numbers by pressing L (in command mode of course) on a code cell.
Exercise: Turn line numbers on and off in the above code cell.
Deleting cells
Deleting cells is done by pressing D twice in a row, so D, D. Requiring two presses helps prevent accidental deletions!
Exercise: Delete the cell below.
Saving the notebook
Notebooks are autosaved every once in a while, but you'll often want to save your work between those times. To save the notebook, press S. So easy!
The Command Palette
You can easily access the command palette by pressing Shift + Control/Command + P.
Note: This won't work in Firefox and Internet Explorer unfortunately. There is already a keyboard shortcut assigned to those keys in those browsers. However, it does work in Chrome and Safari.
This will bring up the command palette where you can search for commands that aren't available through the keyboard shortcuts. For instance, there are buttons on the toolbar that move cells up and down (the up and down arrows), but there aren't corresponding keyboard shortcuts. To move a cell down, you can open up the command palette and type in "move" which will bring up the move commands.
Exercise: Use the command palette to move the cell below down one position.
End of explanation
"""
|
Jackporter415/phys202-2015-work | assignments/assignment05/InteractEx01.ipynb | mit | %matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
"""
Explanation: Interact Exercise 01
Import
End of explanation
"""
def print_sum(a, b):
print (a+b)
"""
Explanation: Interact basics
Write a print_sum function that prints the sum of its arguments a and b.
End of explanation
"""
interact(print_sum,a = (-10.,10.,0.1), b = (-8,8,2));
assert True # leave this for grading the print_sum exercise
"""
Explanation: Use the interact function to interact with the print_sum function.
a should be a floating point slider over the interval [-10., 10.] with step sizes of 0.1
b should be an integer slider over the interval [-8, 8] with step sizes of 2.
End of explanation
"""
def print_string(s, length=False):
"""Print the string s and optionally its length."""
print (s)
if length:
print (len(s))
"""
Explanation: Write a function named print_string that prints a string and additionally prints the length of that string if a boolean parameter is True.
End of explanation
"""
interact(print_string,s = 'Hello World!', length = True)
assert True # leave this for grading the print_string exercise
"""
Explanation: Use the interact function to interact with the print_string function.
s should be a textbox with the initial value "Hello World!".
length should be a checkbox with an initial value of True.
End of explanation
"""
|
radu941208/DeepLearning | Convolutional_Neural_Network/Convolution+model+-+Application+-+v1.ipynb | mit | import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
import tensorflow as tf
from tensorflow.python.framework import ops
from cnn_utils import *
%matplotlib inline
np.random.seed(1)
"""
Explanation: Convolutional Neural Networks: Application
Welcome to Course 4's second assignment! In this notebook, you will:
Implement helper functions that you will use when implementing a TensorFlow model
Implement a fully functioning ConvNet using TensorFlow
After this assignment you will be able to:
Build and train a ConvNet in TensorFlow for a classification problem
We assume here that you are already familiar with TensorFlow. If you are not, please refer the TensorFlow Tutorial of the third week of Course 2 ("Improving deep neural networks").
1.0 - TensorFlow model
In the previous assignment, you built helper functions using numpy to understand the mechanics behind convolutional neural networks. Most practical applications of deep learning today are built using programming frameworks, which have many built-in functions you can simply call.
As usual, we will start by loading in the packages.
End of explanation
"""
# Loading the data (signs)
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
"""
Explanation: Run the next cell to load the "SIGNS" dataset you are going to use.
End of explanation
"""
# Example of a picture
index = 6
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
"""
Explanation: As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5.
<img src="images/SIGNS.png" style="width:800px;height:300px;">
The next cell will show you an example of a labelled image in the dataset. Feel free to change the value of index below and re-run to see different examples.
End of explanation
"""
X_train = X_train_orig/255.
X_test = X_test_orig/255.
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
conv_layers = {}
"""
Explanation: In Course 2, you had built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it.
To get started, let's examine the shapes of your data.
End of explanation
"""
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_H0, n_W0, n_C0, n_y):
"""
Creates the placeholders for the tensorflow session.
Arguments:
n_H0 -- scalar, height of an input image
n_W0 -- scalar, width of an input image
n_C0 -- scalar, number of channels of the input
n_y -- scalar, number of classes
Returns:
X -- placeholder for the data input, of shape [None, n_H0, n_W0, n_C0] and dtype "float"
Y -- placeholder for the input labels, of shape [None, n_y] and dtype "float"
"""
### START CODE HERE ### (≈2 lines)
X = tf.placeholder(tf.float32, shape=[None, n_H0, n_W0, n_C0])
Y = tf.placeholder(tf.float32, shape=[None, n_y])
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(64, 64, 3, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))
"""
Explanation: 1.1 - Create placeholders
TensorFlow requires that you create placeholders for the input data that will be fed into the model when running the session.
Exercise: Implement the function below to create placeholders for the input image X and the output Y. You should not define the number of training examples for the moment. To do so, you could use "None" as the batch size, it will give you the flexibility to choose it later. Hence X should be of dimension [None, n_H0, n_W0, n_C0] and Y should be of dimension [None, n_y]. Hint.
End of explanation
"""
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
"""
Initializes weight parameters to build a neural network with tensorflow. The shapes are:
W1 : [4, 4, 3, 8]
W2 : [2, 2, 8, 16]
Returns:
parameters -- a dictionary of tensors containing W1, W2
"""
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 2 lines of code)
W1 = tf.get_variable("W1", [4,4,3,8], initializer = tf.contrib.layers.xavier_initializer(seed = 0))
W2 = tf.get_variable("W2", [2,2,8,16], initializer = tf.contrib.layers.xavier_initializer(seed = 0))
### END CODE HERE ###
parameters = {"W1": W1,
"W2": W2}
return parameters
tf.reset_default_graph()
with tf.Session() as sess_test:
parameters = initialize_parameters()
init = tf.global_variables_initializer()
sess_test.run(init)
print("W1 = " + str(parameters["W1"].eval()[1,1,1]))
print("W2 = " + str(parameters["W2"].eval()[1,1,1]))
"""
Explanation: Expected Output
<table>
<tr>
<td>
X = Tensor("Placeholder:0", shape=(?, 64, 64, 3), dtype=float32)
</td>
</tr>
<tr>
<td>
Y = Tensor("Placeholder_1:0", shape=(?, 6), dtype=float32)
</td>
</tr>
</table>
1.2 - Initialize parameters
You will initialize weights/filters $W1$ and $W2$ using tf.contrib.layers.xavier_initializer(seed = 0). You don't need to worry about bias variables as you will soon see that TensorFlow functions take care of the bias. Note also that you will only initialize the weights/filters for the conv2d functions. TensorFlow initializes the layers for the fully connected part automatically. We will talk more about that later in this assignment.
Exercise: Implement initialize_parameters(). The dimensions for each group of filters are provided below. Reminder - to initialize a parameter $W$ of shape [1,2,3,4] in Tensorflow, use:
python
W = tf.get_variable("W", [1,2,3,4], initializer = ...)
More Info.
End of explanation
"""
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Implements the forward propagation for the model:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "W2"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
"""
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
W2 = parameters['W2']
### START CODE HERE ###
# CONV2D: stride of 1, padding 'SAME'
Z1 = tf.nn.conv2d(X,W1, strides = [1,1,1,1], padding = 'SAME')
# RELU
A1 = tf.nn.relu(Z1)
# MAXPOOL: window 8x8, stride 8, padding 'SAME'
P1 = tf.nn.max_pool(A1, ksize = [1,8,8,1], strides = [1,8,8,1], padding = 'SAME')
# CONV2D: filters W2, stride 1, padding 'SAME'
Z2 = tf.nn.conv2d(P1,W2, strides = [1,1,1,1], padding = 'SAME')
# RELU
A2 = tf.nn.relu(Z2)
# MAXPOOL: window 4x4, stride 4, padding 'SAME'
P2 = tf.nn.max_pool(A2, ksize = [1,4,4,1], strides = [1,4,4,1], padding = 'SAME')
# FLATTEN
F = tf.contrib.layers.flatten(P2)
# FULLY-CONNECTED without non-linear activation function (do not call softmax here).
# 6 neurons in output layer. Hint: one of the arguments should be "activation_fn=None"
Z3 = tf.contrib.layers.fully_connected(F, num_outputs=6,activation_fn=None)
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(Z3, {X: np.random.randn(2,64,64,3), Y: np.random.randn(2,6)})
print("Z3 = " + str(a))
"""
Explanation: Expected Output:
<table>
<tr>
<td>
W1 =
</td>
<td>
[ 0.00131723 0.14176141 -0.04434952 0.09197326 0.14984085 -0.03514394 <br>
-0.06847463 0.05245192]
</td>
</tr>
<tr>
<td>
W2 =
</td>
<td>
[-0.08566415 0.17750949 0.11974221 0.16773748 -0.0830943 -0.08058 <br>
-0.00577033 -0.14643836 0.24162132 -0.05857408 -0.19055021 0.1345228 <br>
-0.22779644 -0.1601823 -0.16117483 -0.10286498]
</td>
</tr>
</table>
1.2 - Forward propagation
In TensorFlow, there are built-in functions that carry out the convolution steps for you.
tf.nn.conv2d(X,W1, strides = [1,s,s,1], padding = 'SAME'): given an input $X$ and a group of filters $W1$, this function convolves $W1$'s filters on X. The third input ([1,s,s,1]) represents the strides for each dimension of the input (m, n_H_prev, n_W_prev, n_C_prev). You can read the full documentation here
tf.nn.max_pool(A, ksize = [1,f,f,1], strides = [1,s,s,1], padding = 'SAME'): given an input A, this function uses a window of size (f, f) and strides of size (s, s) to carry out max pooling over each window. You can read the full documentation here
tf.nn.relu(Z1): computes the elementwise ReLU of Z1 (which can be any shape). You can read the full documentation here.
tf.contrib.layers.flatten(P): given an input P, this function flattens each example into a 1D vector while maintaining the batch-size. It returns a flattened tensor with shape [batch_size, k]. You can read the full documentation here.
tf.contrib.layers.fully_connected(F, num_outputs): given the flattened input F, it returns the output computed using a fully connected layer. You can read the full documentation here.
In the last function above (tf.contrib.layers.fully_connected), the fully connected layer automatically initializes weights in the graph and keeps on training them as you train the model. Hence, you did not need to initialize those weights when initializing the parameters.
Exercise:
Implement the forward_propagation function below to build the following model: CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED. You should use the functions above.
In detail, we will use the following parameters for all the steps:
- Conv2D: stride 1, padding is "SAME"
- ReLU
- Max pool: Use an 8 by 8 filter size and an 8 by 8 stride, padding is "SAME"
- Conv2D: stride 1, padding is "SAME"
- ReLU
- Max pool: Use a 4 by 4 filter size and a 4 by 4 stride, padding is "SAME"
- Flatten the previous output.
- FULLYCONNECTED (FC) layer: Apply a fully connected layer without a non-linear activation function. Do not call the softmax here. This will result in 6 neurons in the output layer, which then get passed later to a softmax. In TensorFlow, the softmax and cost function are lumped together into a single function, which you'll call in a different function when computing the cost.
End of explanation
"""
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
"""
Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
cost - Tensor of the cost function
"""
### START CODE HERE ### (1 line of code)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = Z3, labels = Y))
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(cost, {X: np.random.randn(4,64,64,3), Y: np.random.randn(4,6)})
print("cost = " + str(a))
"""
Explanation: Expected Output:
<table>
<td>
Z3 =
</td>
<td>
[[-0.44670227 -1.57208765 -1.53049231 -2.31013036 -1.29104376 0.46852064] <br>
[-0.17601591 -1.57972014 -1.4737016 -2.61672091 -1.00810647 0.5747785 ]]
</td>
</table>
1.3 - Compute cost
Implement the compute cost function below. You might find these two functions helpful:
tf.nn.softmax_cross_entropy_with_logits(logits = Z3, labels = Y): computes the softmax entropy loss. This function both computes the softmax activation function as well as the resulting loss. You can check the full documentation here.
tf.reduce_mean: computes the mean of elements across dimensions of a tensor. Use this to sum the losses over all the examples to get the overall cost. You can check the full documentation here.
Exercise: Compute the cost below using the function above.
End of explanation
"""
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.009,
num_epochs = 100, minibatch_size = 64, print_cost = True):
"""
Implements a three-layer ConvNet in Tensorflow:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Arguments:
X_train -- training set, of shape (None, 64, 64, 3)
Y_train -- training set labels, of shape (None, n_y = 6)
X_test -- test set, of shape (None, 64, 64, 3)
Y_test -- test set labels, of shape (None, n_y = 6)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 5 epochs
Returns:
train_accuracy -- real number, accuracy on the train set (X_train)
test_accuracy -- real number, testing accuracy on the test set (X_test)
parameters -- parameters learnt by the model. They can then be used to predict.
"""
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep results consistent (tensorflow seed)
seed = 3 # to keep results consistent (numpy seed)
(m, n_H0, n_W0, n_C0) = X_train.shape
n_y = Y_train.shape[1]
costs = [] # To keep track of the cost
# Create Placeholders of the correct shape
### START CODE HERE ### (1 line)
X, Y = create_placeholders(n_H0, n_W0, n_C0, n_y)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer that minimizes the cost.
### START CODE HERE ### (1 line)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
### END CODE HERE ###
# Initialize all the variables globally
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
minibatch_cost = 0.
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the optimizer and the cost, the feed dict should contain a minibatch for (X,Y).
### START CODE HERE ### (1 line)
_ , temp_cost = sess.run([optimizer,cost],feed_dict={X:minibatch_X, Y:minibatch_Y})
### END CODE HERE ###
minibatch_cost += temp_cost / num_minibatches
# Print the cost every 5 epochs and record it every epoch
if print_cost == True and epoch % 5 == 0:
print ("Cost after epoch %i: %f" % (epoch, minibatch_cost))
if print_cost == True and epoch % 1 == 0:
costs.append(minibatch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# Calculate the correct predictions
predict_op = tf.argmax(Z3, 1)
correct_prediction = tf.equal(predict_op, tf.argmax(Y, 1))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(accuracy)
train_accuracy = accuracy.eval({X: X_train, Y: Y_train})
test_accuracy = accuracy.eval({X: X_test, Y: Y_test})
print("Train Accuracy:", train_accuracy)
print("Test Accuracy:", test_accuracy)
return train_accuracy, test_accuracy, parameters
"""
Explanation: Expected Output:
<table>
<td>
cost =
</td>
<td>
2.91034
</td>
</table>
1.4 Model
Finally you will merge the helper functions you implemented above to build a model. You will train it on the SIGNS dataset.
You have implemented random_mini_batches() in the Optimization programming assignment of course 2. Remember that this function returns a list of mini-batches.
Exercise: Complete the function below.
The model below should:
create placeholders
initialize parameters
forward propagate
compute the cost
create an optimizer
Finally you will create a session and run a for loop for num_epochs, get the mini-batches, and then for each mini-batch you will optimize the function. Hint for initializing the variables
End of explanation
"""
_, _, parameters = model(X_train, Y_train, X_test, Y_test)
"""
Explanation: Run the following cell to train your model for 100 epochs. Check if your cost after epoch 0 and 5 matches our output. If not, stop the cell and go back to your code!
End of explanation
"""
fname = "images/thumbs_up.jpg"
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64))
plt.imshow(my_image)
"""
Explanation: Expected output: although it may not match perfectly, your expected output should be close to ours and your cost value should decrease.
<table>
<tr>
<td>
**Cost after epoch 0 =**
</td>
<td>
1.917929
</td>
</tr>
<tr>
<td>
**Cost after epoch 5 =**
</td>
<td>
1.506757
</td>
</tr>
<tr>
<td>
**Train Accuracy =**
</td>
<td>
0.940741
</td>
</tr>
<tr>
<td>
**Test Accuracy =**
</td>
<td>
0.783333
</td>
</tr>
</table>
Congratulations! You have finished the assignment and built a model that recognizes SIGN language with almost 80% accuracy on the test set. If you wish, feel free to play around with this dataset further. You can actually improve its accuracy by spending more time tuning the hyperparameters, or using regularization (as this model clearly has a high variance).
Once again, here's a thumbs up for your work!
End of explanation
"""
|
yl565/statsmodels | examples/notebooks/generic_mle.ipynb | bsd-3-clause | from __future__ import print_function
import numpy as np
from scipy import stats
import statsmodels.api as sm
from statsmodels.base.model import GenericLikelihoodModel
"""
Explanation: Maximum Likelihood Estimation (Generic models)
This tutorial explains how to quickly implement new maximum likelihood models in statsmodels. We give two examples:
Probit model for binary dependent variables
Negative binomial model for count data
The GenericLikelihoodModel class eases the process by providing tools such as automatic numeric differentiation and a unified interface to scipy optimization functions. Using statsmodels, users can fit new MLE models simply by "plugging-in" a log-likelihood function.
Example 1: Probit model
End of explanation
"""
data = sm.datasets.spector.load_pandas()
exog = data.exog
endog = data.endog
print(sm.datasets.spector.NOTE)
print(data.exog.head())
"""
Explanation: The Spector dataset is distributed with statsmodels. You can access a vector of values for the dependent variable (endog) and a matrix of regressors (exog) like this:
End of explanation
"""
exog = sm.add_constant(exog, prepend=True)
"""
Explanation: Then, we add a constant to the matrix of regressors:
End of explanation
"""
class MyProbit(GenericLikelihoodModel):
def loglike(self, params):
exog = self.exog
endog = self.endog
q = 2 * endog - 1
return stats.norm.logcdf(q*np.dot(exog, params)).sum()
"""
Explanation: To create your own Likelihood Model, you simply need to overwrite the loglike method.
End of explanation
"""
sm_probit_manual = MyProbit(endog, exog).fit()
print(sm_probit_manual.summary())
"""
Explanation: Estimate the model and print a summary:
End of explanation
"""
sm_probit_canned = sm.Probit(endog, exog).fit()
print(sm_probit_canned.params)
print(sm_probit_manual.params)
print(sm_probit_canned.cov_params())
print(sm_probit_manual.cov_params())
"""
Explanation: Compare your Probit implementation to statsmodels' "canned" implementation:
End of explanation
"""
import numpy as np
from scipy.stats import nbinom
def _ll_nb2(y, X, beta, alph):
mu = np.exp(np.dot(X, beta))
size = 1/alph
prob = size/(size+mu)
ll = nbinom.logpmf(y, size, prob)
return ll
"""
Explanation: Notice that the GenericLikelihoodModel class provides automatic numeric differentiation, so we didn't have to provide Hessian or score functions in order to calculate the covariance estimates.
Example 2: Negative Binomial Regression for Count Data
Consider a negative binomial regression model for count data with
log-likelihood (type NB-2) function expressed as:
$$
\mathcal{L}(\beta_j; y, \alpha) = \sum_{i=1}^n y_i ln
\left ( \frac{\alpha exp(X_i'\beta)}{1+\alpha exp(X_i'\beta)} \right ) -
\frac{1}{\alpha} ln(1+\alpha exp(X_i'\beta)) + ln \Gamma (y_i + 1/\alpha) - ln \Gamma (y_i+1) - ln \Gamma (1/\alpha)
$$
with a matrix of regressors $X$, a vector of coefficients $\beta$,
and the negative binomial heterogeneity parameter $\alpha$.
Using the nbinom distribution from scipy, we can write this likelihood
simply as:
End of explanation
"""
from statsmodels.base.model import GenericLikelihoodModel
class NBin(GenericLikelihoodModel):
def __init__(self, endog, exog, **kwds):
super(NBin, self).__init__(endog, exog, **kwds)
def nloglikeobs(self, params):
alph = params[-1]
beta = params[:-1]
ll = _ll_nb2(self.endog, self.exog, beta, alph)
return -ll
def fit(self, start_params=None, maxiter=10000, maxfun=5000, **kwds):
# we have one additional parameter and we need to add it for summary
self.exog_names.append('alpha')
if start_params is None:
# Reasonable starting values
start_params = np.append(np.zeros(self.exog.shape[1]), .5)
# intercept
start_params[-2] = np.log(self.endog.mean())
return super(NBin, self).fit(start_params=start_params,
maxiter=maxiter, maxfun=maxfun,
**kwds)
"""
Explanation: New Model Class
We create a new model class which inherits from GenericLikelihoodModel:
End of explanation
"""
import statsmodels.api as sm
medpar = sm.datasets.get_rdataset("medpar", "COUNT", cache=True).data
medpar.head()
"""
Explanation: Two important things to notice:
nloglikeobs: This function should return one evaluation of the negative log-likelihood function per observation in your dataset (i.e. rows of the endog/X matrix).
start_params: A one-dimensional array of starting values needs to be provided. The size of this array determines the number of parameters that will be used in optimization.
That's it! You're done!
Usage Example
The Medpar
dataset is hosted in CSV format at the Rdatasets repository. We use the read_csv
function from the Pandas library to load the data
in memory. We then print the first few columns:
End of explanation
"""
y = medpar.los
X = medpar[["type2", "type3", "hmo", "white"]].copy()
X["constant"] = 1
"""
Explanation: The model we are interested in has a vector of non-negative integers as
dependent variable (los), and 5 regressors: Intercept, type2,
type3, hmo, white.
For estimation, we need to create two variables to hold our regressors and the outcome variable. These can be ndarrays or pandas objects.
End of explanation
"""
mod = NBin(y, X)
res = mod.fit()
"""
Explanation: Then, we fit the model and extract some information:
End of explanation
"""
print('Parameters: ', res.params)
print('Standard errors: ', res.bse)
print('P-values: ', res.pvalues)
print('AIC: ', res.aic)
"""
Explanation: Extract parameter estimates, standard errors, p-values, AIC, etc.:
End of explanation
"""
print(res.summary())
"""
Explanation: As usual, you can obtain a full list of available information by typing
dir(res).
We can also look at the summary of the estimation results.
End of explanation
"""
res_nbin = sm.NegativeBinomial(y, X).fit(disp=0)
print(res_nbin.summary())
print(res_nbin.params)
print(res_nbin.bse)
"""
Explanation: Testing
We can check the results by using the statsmodels implementation of the Negative Binomial model, which uses the analytic score function and Hessian.
End of explanation
"""
|